Beep Boop Bip

No. 1547 [Edit]
It doesn't matter if you're a beginner or Dennis Ritchie, come here to talk about what you are doing, your favorite language and all that stuff.
I've been learning python because c++ was too hard for me (I'm sorry nenecchi, I failed you), reached OOP and it feels weird compared to c++; anyway, I never got it completely.
88 posts omitted. Last 50 shown.
>> No. 2115 [Edit]
>>2114
D has two types of mixins: string and template. The former embeds a string containing valid D statements and/or expressions into the code: `mixin("int foo = ", 42, ";");` -> `int foo = 42;`. This must be done at compile-time, and any variables passed into the `mixin` statement must be readable at compile-time.
Then there are template mixins; these are more like traditional mixins found in OOP languages, except, as you've mentioned, they may be parameterized with types, symbols, and values. They are "mixed in" with the `mixin` statement: `mixin SomeDefinitions!42;` If `SomeDefinitions` has a definition `int foo = value`, where `value` is a template parameter, then said definition will be accessible from the scope in which the template was mixed in, with `42` substituted for `value`. This is in contrast to a normal D template, whose definitions, after instantiation, reside in their own scope accessible through a symbol.
These examples are rather trivial and don't do these features justice. For my command-line argument parsing library, I use string mixins to generate new structs at compile-time, and mixin templates to factor out common definitions and compile-time processing. Further, there are D CGI libraries that expose template mixins that do all the setup drudgery, e.g. provide a `main()` and environment variable processing.
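
To make that concrete, here's a toy sketch (made-up names, not code from my library) that combines both flavors: the template mixin injects declarations into whatever scope instantiates it, and inside it a string mixin assembles the declaration's text at compile-time.

```
// Toy example: `Counter` and `Stats` are illustrative names only.
mixin template Counter(string name)
{
    // String mixin: the declaration is built from text at compile-time.
    mixin("int " ~ name ~ " = 0;");
}

struct Stats
{
    // Template mixins: `hits` and `misses` become members of Stats itself.
    mixin Counter!"hits";
    mixin Counter!"misses";
}

void main()
{
    Stats s;
    s.hits++;
    assert(s.hits == 1 && s.misses == 0);
}
```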

As an aside, D allows you to define strings with `q{}`, where the string's contents are placed between the curly braces. This indicates to a text editor, IDE, or whatever that it should treat the string's contents as if they were D code (or any code, I suppose): highlight it, provide code inspection capabilities, etc. These are helpful with string mixins.

>(There's also the gflags/abseil style argument libraries where you declare the variable you want to place the result into upfront. That works around the above issue, but on the flipside it's ugly and overkill for small projects).
I looked at them. I feel a little sick.
>> No. 2116 [Edit]
Alright, so I'm re-working that argument parsing thing, and funnily enough, template mixins have been a big help in refactoring. Combined with better and more succinct solutions to previous problems, the design is a lot cleaner. Documentation is better, too. With that said, I'm not sure of the best way to handle options' "optional" metadata:

alias VerboseOpt = Option!("verbose|v", false, OptionDesc("Garrulous logging") /* etc... */);

`OptionDesc` is one such piece of metadata. Right now, the `Option` template will pass the given variable-length list of metadata to a mixin template that will then define the appropriate fields. Thus, in the given example, a field of type `string`, whose identifier is `desc`, and with a value of "Garrulous logging" will have been defined in this instantiation of `Option`, i.e. `VerboseOpt`. The problem is that `parseArgs` will have to do some compile-time inspection on every `Option` instantiation to determine whether it has a description, i.e. a `desc` field, using the data therein or providing default values in the field's absence. This is not ideal for compilation times or for the code's clarity, as this also extends to other pieces of metadata like `OptionCategory` or `OptionRequired`. It's not terrible, but again, not ideal. I have a better solution in mind, but a clean implementation of it is difficult for my moronic mind.
>> No. 2117 [Edit]
Continuing my work on my command-line argument processing library (now called "tsuruya" because naming is hard), I have realized happiness through the digital world instead of just the 2D one.
Here's an example:

auto args = ["./test", "1", "2", "3"];
auto result = args.parseArgs!(Parameter!("integers", int[]));
assert(result.parameters.integers == [1, 2, 3]);
assert(result.usageText == "Usage: test <integers>");

`parseArgs` is instantiated with a `Parameter` struct template whose name, both in the generated programming interface and the command-line interface, is "integers". Because the parameter's value type is a dynamic array of integers, `parseArgs` will read all non-option command-line arguments, convert them to `int`, and then add them to the parameter's array. (As an aside, if one were to specify a static array, `parseArgs` will only read k non-option command-line arguments, where k is the static array's length.) A usage string is also generated based on what parameters and options (collectively known as "command-line interface objects") were given to `parseArgs`.
`Parameter` may also take a callable object, e.g. function, instead of a type, and the value it expects will be that of the callable object's return type. Further, one may pass optional metadata to `Parameter` just like one may do with `Option`, e.g. CLIODesc and CLIORequired. The former defines a description for a command-line interface object that may be used in `parseArgs`'s generated help text. The latter specifies whether the parameter or option is, well, required to be in the command-line arguments.
>> No. 2118 [Edit]
>(collectively known as "command-line interface objects")
I scrapped this stupidity and renamed the `Parameter` templates to `Operand`, since that's what they actually represent. After all, "parameter" would include options too and thus cause confusion. Anyway, on to error handling and all the fun that entails.
>> No. 2119 [Edit]
Oh how I wish for mutability during compile-time. The amount of recursive templates upon which I'm relying is making me sweat a bit.
>> No. 2193 [Edit]
I was trying to get a program I always use to do something for python 2.7 and it wasn't supported anymore. Looking up the changelog discussions, I saw a poster say "We shouldn't support such ancient distros". Christ... it's really bizarre to me just how dismissive the attitude among programmers is now. Granted, decade-old software tends to be forgotten, but I have a hard time thinking of 2010 as "ancient", even as far as tech goes. Guess this is just me griping, but damn. I thought python 3.3 and 2.7 were still being used on the same systems.
>> No. 2194 [Edit]
>>2193
What a mess the python 2->3 transition was. Whose boneheaded idea was it to make things non-backwards compatible?
>> No. 2195 [Edit]
>>2194
>Whose boneheaded idea was it to make things non-backwards compatible.
I don't know, but there's a growing philosophy that old digital technologies should be forcefully cut out from any currently updated projects. Windows 10 for example has some serious fundamental flaws that make windows 7 look comparatively like a masterpiece, yet it's being prioritized so heavily that people are now cutting windows 7 support from their projects. This in particular is infuriating, especially because when I'm not on a linux machine I want to use windows 7. In my brief stint with windows 10 I discovered some horrific design flaws regarding path variables, registries, and worst of all administrator permissions. As it turns out, it is relatively easy on windows 10 for a file to revoke, absolutely and forever, all access for any user, including the system user itself. This is particularly unpleasant when said file is malware.
>> No. 2208 [Edit]
I'm so addicted to meta-programming and templates that I often use them as a solution to anything. Usually, it's fine, but more straight-forward and obvious answers to problems tend to escape me in favor of some Rube Goldberg machine. It's fun, at least.
>> No. 2230 [Edit]
>>2195
I used Linux for many years. Couldn't take it anymore. I'm back to using Win7.
>> No. 2231 [Edit]
I spent hours trying to reverse a singly linked list. I accomplished the task, but the realization that this is yet another indicator of being unemployable hurts the soul. Also I had to use two extra variables (iterator and a temporary) in the implementation along with the list's head. It's O(n), I think, but I feel like it's subpar.
>> No. 2232 [Edit]
>>2231
Post a screenshot or code. Let's review it :)
>> No. 2233 [Edit]
>>2231
One hour seems fine if you haven't seen the problem before (or alternatively haven't practiced doing these kinds of interview questions in a long time). And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).
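
Schematically, the iterative version looks something like this (a generic sketch with an illustrative `node_t`, not your code):

```
// Iterative reversal: walk the list once, flipping each node's `next` pointer.
struct node_t { int value; node_t* next; };

node_t* reverse(node_t* head)
{
    node_t* prev = nullptr;
    node_t* cur = head;
    while (cur != nullptr) {
        node_t* next = cur->next;  // remember the rest of the list
        cur->next = prev;          // the key relation: current->next = prev
        prev = cur;
        cur = next;
    }
    return prev;  // prev ends up at the old tail, i.e. the new head
}
```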

Once you get familiar with the usual interview hazing questions you should be able to do them in 15-20 minutes.

Also a relevant article "Why do interviews ask linked list questions" [1]

[1] https://www.hillelwayne.com/post/linked-lists/
>> No. 2234 [Edit]
>>2232
https://pastebin.com/zDTyG3h2

>>2233
It took me two hours, I think. Even though I rarely work these kinds of problems, it's still a disappointing result given your time frame.

>And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).
That's good to hear, as I was struggling to figure out if there were a way to reduce the number of variables (that didn't involve changing the data structure).

>Also a relevant article "Why do interviews ask linked list questions" [1]
So it suffered the fate of all similar questions, and its continued use is due in no small part to inertia. Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
>> No. 2235 [Edit]
>>2233
Correct me if I'm wrong, but isn't it easier to do this recursively?

Post edited on 30th Mar 2021, 5:29pm
>> No. 2236 [Edit]
>>2234
That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.

>I was struggling to figure out if there were a way to reduce the number of variables
Both solutions would have the same asymptotic complexity, so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.

>Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases. (And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
>> No. 2237 [Edit]
>>2235
If this was lisp then yeah, maybe, since it's a one-liner, but the length of a recursive solution is basically the same (although perhaps conceptually a tad simpler). Disadvantage of recursive is that you have increased space complexity, so if this were an interview they'd ask you to do the iterative solution anyway.
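
For comparison, a recursive sketch (again with an illustrative `node_t`; the call stack is where the extra O(n) space goes):

```
struct node_t { int value; node_t* next; };

// Recursive reversal: reverse the tail first, then hook the old head onto the end.
node_t* reverse(node_t* head)
{
    if (head == nullptr || head->next == nullptr)
        return head;
    node_t* new_head = reverse(head->next);
    head->next->next = head;  // the node that used to follow us now points back at us
    head->next = nullptr;     // the old head becomes the new tail
    return new_head;
}
```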
>> No. 2238 [Edit]
>>2236
>That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.
Before attempting to implement the reversal algorithm, I thought I understood them well enough. Heck, felt kind of clever doing this (https://pastebin.com/zNvWeXR2) as the first attempt for the removal method. Given that, it's just not acceptable that it required two (2!) hours.

>Both solutions would have same asymptotic complexity so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.
It was fun, too, until checking how many minutes elapsed.

>The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases.
I've read similar opinions, and I'm in no position to disagree with them considering my experiences building hobby projects and not having ever worked such a job. However, wouldn't a regular employee still need to possess the ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.

>(And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
This varies across positions, I assume.

P.S. Tangentially related, but writing good unit tests is quite the skill, and since you seem to know what you're talking about, would interview questions and problems concerning them be a good idea?
>> No. 2239 [Edit]
>>2238
>felt clever doing this [removal]
That's a clever solution. I first read/saw that variant of removal from a talk by Linus (see [1]), and even having seen that variant before it still took me a good 5 minutes to puzzle through your solution (that makes me feel dumb). If you came up with that by yourself, you're not giving yourself enough credit.

(By the way, I think in terms of clarity this is one case where explicitly writing out the type instead of using auto might have been clearer. This is probably just taste though – I personally hate auto since it makes it harder to know at a glance what type something is, and I only tend to use it for iterator-things like ".begin()" where the type is clear and the equivalent "std::vector<int>::iterator" is ugly.) At the expense of using an extra variable (which is probably optimized out by the compiler anyway), if you rewrite it like


void remove(T value)
{
    node_t **cur_ref = &head;
    node_t *cur = *cur_ref;
    while (cur != nullptr && cur->value != value) {
        cur_ref = &cur->next;
        cur = *cur_ref;
    }
    if (cur) {
        node_t *next = cur->next;
        delete cur;
        *cur_ref = next;
    }
}

then I think it's a lot clearer what's going on.

[1] https://github.com/mkirchner/linked-list-good-taste

>ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.
Yes, problem modeling is probably the most important skill to have; I disagree that linked list reversal is a good exercise of those skills in a day-to-day job duties sense (unless your job is to write standard libraries for languages). They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs. Interestingly enough, companies do ask systems design questions, but they only do so for L5-L6 hires (I'm using Google's scale, where the entry level is confusingly enough L3, and L5 corresponds to a senior software engineer with about 7 years of experience).

>This varies across positions, I assume.
Surprisingly no. Google (& maybe facebook?) has a single common interview process for all SWE roles, and people aren't allocated to a team until after they pass the hiring committee. So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews though.

>would interview questions and problems concerning them be a good idea?
If you're optimizing for what will be asked in interviews, don't bother practicing how to write unit tests. Instead, what would be better is learning how to identify edge-cases, and, when you're given a problem, being able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them). Even if your solution is incorrect or suboptimal, showing that you can identify these edge-cases is a strong positive signal and might be the difference between weak-hire and strong-hire.

Post edited on 31st Mar 2021, 1:19am
>> No. 2246 [Edit]
>>2239
>If you came up with that by yourself, you're not giving yourself enough credit.
Not really. Pretty sure I got the idea to do that or something similar from reading an article some time ago about uses for double indirection. The "clever" part, for me, is being able to remember and reliably implement what I learned. Yes, it's a low standard, and using "clever" is most assuredly the wrong choice on my part. But I'd like to think the implication is roughly the same.

>At the expense of using an extra variable (which is probably optimized out by compiler anyway), if you rewrite it like
Not only do I agree that your rewrite is more readable and obvious, I'm also inclined to think, from what you've said, that an interviewer would prefer such a version. However, from my perspective, explicitly specifying the type in this instance didn't really help all that much. Rather, it's the introduction of the extra variable and the alteration of the others' identifiers that are more elucidating as to the intent and purpose of the code. (It just might be familiarity.)

>and the equivalent "std::vector<int>::iterator" is ugly
Don't miss those days. As an aside, the hype for C++11 was an exciting time. And now we have modules, concepts, ranges, and further improvements to constexpr--very cool. Problem is learning all of it!

>https://github.com/mkirchner/linked-list-good-taste
I feel like this is an over-complication, but it's probably helpful to others. Feels good to use Linus-approved techniques!

>They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs.
You make good points that I should've arrived at myself, and in one of my recent projects, I've become aware of how difficult it is to design good "systems." The breadth and side-effects are challenging to manage even at my small scale.

>So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews though.
Team-based interviews probably vary more in quality than a more standardized system, but on the other hand, they include the people with whom you'd potentially be working.

>Instead, what would be better is learning how to identify edge-cases, and when you're given a problem be able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them).
Good advice! However, maybe I'm being rather dense, but I was asking whether interviewers asking questions related to unit testing would be a good idea--not whether a candidate should practice them in hopes they would be useful.
>> No. 2247 [Edit]
>I was asking whether interviewers asking questions related to unit testing would be a good idea
Ah my bad I misinterpreted your question. Although I think my response to that version is similar; if you assess a candidate's ability to reason about both edge-cases and how systems are linked then I feel that's a "good enough" indicator of his ability to write good tests. Since so much of testing is dependent on your specific project, language, framework, and infrastructure (which will have to be learned on the job anyway) I'm not sure if there's another general way to assess this.

Also, in principle I feel that unit tests mainly serve as a sanity check to make sure that you haven't broken anything when refactoring, and any actual "testing" would be done via full-system tests against a sandboxed instance (since unless what you're testing is the implementation of some specific algorithm, root causes of bugs generally seem to stem from the interaction between two components). Of course, in practice writing those kinds of tests tends to be a lot harder or more tedious, so mocked-out dependencies are what people usually do. Maybe using an in-memory simulation (if it's an external resource that can be simulated somewhat accurately) or recording/replaying interactions (for things like network requests) would be better if sandboxed instances aren't feasible.
>> No. 2259 [Edit]
Some neat changes in c++20:
https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.html

Coroutines might be useful for event-driven stuff (but from what I've read what the standard-library provides is very barebones so you'll need to make use of a higher-level library in practice). Not sure how I feel about modules; most of the annoyances have mainly been around the build system versus issues from leaky headers, and I'm not sure modules really fixes that.
>> No. 2280 [Edit]
>>2247
Belated response, but I really appreciate your thoughtful responses. The CI-aspect (as opposed to simple unit-testing) is something I often forget about since nothing I work on necessitates such infrastructure. In your opinion, is general experience with the aforementioned techniques something that's expected from graduates of CS (and related) degrees, or is it something that's rather learned on the job? (Ignoring the differences in infrastructure across organizations, hence "general".)
>> No. 2283 [Edit]
>>2280
>The CI-aspect (as opposed to simple unit-testing)
In my mind "continuous integration" is more about the infrastructure and is orthogonal to the issue of system/unit level tests. For instance, you could just have your CI scripts run all unit tests upon a commit. Unless your job deals with setting up such infrastructure, I don't think end-engineers ever have to explicitly think about CI itself, since it's merely an automated mechanism that runs the tests.

> something that's expected from graduates of CS
(I haven't spent enough time in the industry to say for sure so take the below viewpoint with a grain of salt)
Considering that most university graduates barely have experience writing good unit tests, I doubt that new hires are expected to be able to think about system-level tests at all. In particular, while you can assume that graduates will at least have some basic exposure to the idea of unit tests (perhaps they might have had to write some themselves for an assignment, and they'd certainly be familiar with testing algorithms for correctness given the prevalent use of autograders), system-level testing is something that very few students will have needed to think about, given that in university, projects are usually small and simple enough that there's no need for it. It's only when you dive into things that have to deal with networking, databases, RPCs, etc. that the limitations of unit tests begin to show and it becomes worthwhile to consider bringing up an entire sandboxed environment. (Somewhere in between that continuum of unit tests to entire sandboxed instances, there are the in-betweens of in-memory simulations, RPC replay, and perhaps even more that I'm not aware of.)
>> No. 2284 [Edit]
Functional and declarative programming is a bit of a mindfuck coming from an imperative mindset. Doesn't help that keywords and concepts have differing meanings between the two paradigms.
Tangentially related, StandardML is pretty fun, and mlton's FFI to C looks nice.
>> No. 2285 [Edit]
>>2283
>I don't think end-engineers ever have to explictly think about CI itself, since it's merely an automated mechanism that runs the tests.
This assumes a definite bifurcation of dev-ops and the engineering that the former services, right? In any case, I'd imagine end-engineers must be literate with regards to the system that orchestrates the integration and unit tests: someone must map the output given by the CI system to a resolution for the problems reported.

>It's only when you dive into things that have to deal with networking, databases, RPCs, etc. that the limitations of unit tests begin to show and it becomes worthwile to consider bringing up an entire sandboxed environment.
Would you think it worthwhile to have colleges (assuming you agree they should be the main producers of these practitioners) offer classes that simulate this situation and ones similar to it? Or perhaps such complexities don't fit into the scope of a semester-long lab.
>> No. 2286 [Edit]
>>2285
>I'd imagine end-engineers must be literate with regards to the system that orchestrates the integration
Yeah but in that case they're merely users of the system rather than the ones tasked with setting it up. I.e. they only need to know how to trigger and view the results of CI runs. Although I suppose the limitations of the CI system will inherently influence the type of tests that can be written.

>>2285
There aren't many opportunities in college/uni to work on projects involving multiple interconnected systems though. The most complex they usually get is writing your own OS kernel or toy sql server, and the limited scope of these things usually precludes the more intricate types of testing.
>> No. 2306 [Edit]
>>2286
Thanks for the elucidation, anon! One final question, if I may: What did you (or your team) choose as the capstone project for the completion of your degree? This is rather personal, so I understand if you don't want to answer.
>> No. 2307 [Edit]
>>2306
For undergraduate degrees most US universities don't require a capstone project as far as I know, so I never did one for my bachelor's. The projects I was referring to were individual class projects (e.g. for a class on DB systems you might be asked to design a toy sql server). For master's degrees things are usually split between "coursework only" masters and project-based masters. The coursework-only master's degrees are the kinds of things offered by the online MS programs (e.g. by Georgia Tech); while I'm sure these are fine content-wise, I think calling it a master's degree dilutes the value of what a graduate degree is. As eloquently explained by [1], these are more often than not used as degree mills by people switching into CS later on (especially international students who use it for H1B).

The project-based master's, where you are basically a mini-phd student working with a research group and writing papers, is the more valuable one, and any university worth its salt will at the very least require you to file a formal master's report here (whether or not it counts as a thesis – i.e. whether you need to formally defend varies based on the specific degree program; but at the very least the student should have gotten a taste of academia).

[1] https://blog.regehr.org/archives/953
>> No. 2308 [Edit]
>>2307
I didn't even know there was such a distinction. It seems the coursework-only variant is fulfilling a need, but its implementation as an MS is unfortunate.
>> No. 2386 [Edit]
I'm a relative beginner at programming and I want to learn python for a specific project. I picked up this book https://www.composingprograms.com/ since it goes for a sicp kind of approach, and from what little I've done, I like using scheme.

Problem is I'm starting to think python is really unsuitable for this kind of thing.
>While Python supports common mathematical operators using infix notation (like + and -), any operator can be expressed as a function with a name
Except not (at least not with built-in keywords). I can't even find an add keyword. You have to write 1 + 2 + 3 + 4 + 5; add(1,2,3,4,5) isn't an option.

I want to learn python, but I don't know how I should proceed. The official documentation felt too complicated for me to be comfortable learning with it.

edit: so like one sentence later it's explained that you can import a library to use an add function (it still only accepts two arguments). My question about the best way to learn still applies.

Post edited on 1st Sep 2021, 8:50pm
>> No. 2392 [Edit]
>>2386
Of all the reasons to learn a programming language, wanting to learn it to execute a specific project you have in mind is one of the better ones. I say continue with that online book.
>> No. 2393 [Edit]
>>2386
> I can't even find an add keyword. You have to write 1 + 2 + 3 + 4 + 5, add(1,2,3,4,5) isn't an option.
```
import operator
operator.add(2, 3)
```

But that seems like a silly reason to hate python, considering that even if it didn't exist you could trivially create a lambda yourself.
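
For instance, a variadic add is a one-liner with the standard library (just a sketch; plain `sum` already covers the numeric case anyway):

```
import operator
from functools import reduce

def add(*args):
    # Fold the binary operator over however many arguments were given.
    return reduce(operator.add, args)

assert add(1, 2, 3, 4, 5) == 15
assert sum([1, 2, 3, 4, 5]) == 15
```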

>I want to learn python, but I don't know how I should proceed
It's easy enough to pick it up by yourself by just reading simple programs, assuming you already have experience with other languages. In fact, for learning python that's probably the best way (in my personal opinion) for reasons I'll describe shortly.

>Composing Programs

That's a decent-ish book; it's used for Berkeley's freshman CS class. You might also consider doing the weekly assignments and projects (http://cs61a.org/) if it really interests you. But honestly my personal perspective is that starting off with an SICP-esque approach to programming is not the best idea.

(Rant incoming)
In terms of programming, there are 3 things that you ultimately want to be good at

* Learning how to express and encode things logically: at this stage you become intimately familiar with basic constructs like conditionals, how to abstract things into procedures/functions, how to use loops, etc. For most adults this is probably easy to grasp, especially since nowadays we have increased exposure to these things due to the abundance of computers and flowcharts. But still, it is the necessary groundwork. Even this by itself is sufficient for basic "imperative" scripting/programming.

* Learning how the above high-level concepts actually map onto hardware at a low-level. At this point you should be able to reason about your code all the way down to the bare-metal. This helps break some of the "magic" of programming: how that abstract concept of "calling a function" is ultimately just represented as setting a few registers and changing the CPU's instruction pointer.


* Re-learning programming concepts and paradigms through a more formalized, mathematical lens. At this point you can go deep into learning about lambda calculus, functional programming, PL theory, etc. The goal isn't necessarily practical knowledge but to gain an appreciation for the elegance and beauty of it all.

SICP is a great book for the third – it's a wonderful testament to the elegance of lambda calculus, and how from just the notion of functions you can build up arithmetic (church encoding), lists, etc. These three definitions blew me away when I first saw them

```
(define (cons x y)
  (lambda (m) (m x y)))

(define (car z)
  (z (lambda (p q) p)))

(define (cdr z)
  (z (lambda (p q) q)))
```

and while I haven't read SICP, if it's as good as Sussman's lectures then it will do a good job of showing you how, at its core, lambda calculus (as embodied by scheme) is really all you need to base everything upon. Understanding how eval-apply live in symbiosis, and how you can create an interpreter for scheme inside scheme with just a few dozen lines ("scheme/lisp can be defined as the language that is fixed under eval/apply metacircularity").

But the issue is, I'm not sure this is really useful for people who are new to programming. They need to start off by building a practical foundation – learning to appreciate the tall, abstract spires should come at the end of their journey. Giving a beginner SICP so that he can learn programming feels like teaching a child about the Peano axioms so that he can learn arithmetic.

That said, everyone has their own opinions on learning programming so if you're having success with Composing Programs then you should just stick with it. But if you're looking for alternate recommendations, I've seen
https://automatetheboringstuff.com/ mentioned a few times and it seems to be closer to what I feel is a better approach.

Alternatively, just jump into your project and figure things out as you go. You'll spend a lot of your time googling for basic things, but this is the best way to learn.
>> No. 2394 [Edit]
>>2393
Also I remember reading somewhere that MIT themselves moved away from teaching SICP because it doesn't reflect the reality of what programming is today – gluing together several libraries to get a job done. Indeed, that's the reality of almost all software engineering jobs, and even at a professional level it's what you will be doing. As you get better as a programmer, you get better at gluing things together in a way that is debuggable, scalable, flexible, and maintainable over the long run.

Python's a great language to start with because it has a good ecosystem of third party libraries, so you can do a lot by just gluing existing stuff together.
>> No. 2395 [Edit]
>>1698
> it's what you need to make iOS apps, since Objective-C is being phased out
I picked up some basic objective-c over the weekend since I wanted to make a mac app and didn't want to learn swift, and it's actually a kind of neat language. It's a very smalltalk-esque take on object orientation, with a very different and more "dynamic" feel than OO in C++.

Unfortunately it also ultimately seems harder to pick up than C++ because there's so much "magic" going on behind the scenes, documentation is very scattered and obsolete due to the Swift transition, and you're also dealing with Apple's Cocoa libraries, which are woefully underdocumented. For instance, to make an icon appear in the menubar that you can interact with you have to subclass NSView and basically re-implement the highlight behavior yourself. It's absurd that devs are willing to do all this stuff.

I like the idea of sending messages to objects with the square brackets, and there are some really neat features like key-value observation – I can see why Apple chose it to build GUI apps, a lot of ideas fit nicely. But like I mentioned there's so much magic going on behind the scenes even for a simple app: ivars and getters/setters automatically "synthesized" out of properties, automatic reference counting, etc.

Post edited on 2nd Sep 2021, 7:03pm
>> No. 2396 [Edit]
>>2394
>Also I remember somewhere that MIT themselves moved away from teaching SICP because it doesn't reflect the reality of what programming is today – gluing together several libraries to get a job done. Indeed, that's the reality of almost all software engineering jobs, and even at a professional level is what you will be doing.
It is indeed unfortunate that even MIT fell prey to various interests and now teaches their CS students mere coding, rather than computer science. But others have ranted on the topic far more eloquently than I ever could.
>> No. 2397 [Edit]
>>2396
It's only the intro level course that's been swapped out. I don't see the issue in this, considering that the majority of people who take CS courses in college are not actually interested in "computer science" but just in learning programming. Those who are actually interested in theoretical aspects are still free to explore.
>> No. 2398 [Edit]
>>2393
>at this stage you become intimately familiar with basic constructs like conditionals
I've previously made a largish project using a game engine, so I know these basic concepts.
>The goal isn't necessary for practical knowledge but to gain an appreciation for the elegance and beauty of it all.
Honestly I'm mostly interested in programming as a means to an end. The project I have in mind is a gui program which can process excel documents to make an optimal schedule; problem is, I don't know python or xml. I want to learn before diving in so the development process isn't too torturous.

Thanks for the recommendation. I don't like the idea of "giving up", but this seems more suitable for my purposes and I'm planning on reading htdp anyway.
>> No. 2399 [Edit]
Created a function template that allows one to invoke functions in a "looser" manner: https://hastebin.com/evayovoxej.dlang
This template will call a provided function with the given arguments as long as each parameter can be matched with an argument; order doesn't matter, and the operand function's arity must be less than or equal to the number of arguments. In other words, the template's function arguments act as a set of which the parameters must be a subset.
>> No. 2425 [Edit]
So it's been a long time since I programmed anything, and I decided to try and make a new project in C. For the first part of the project I have to parse and modify a .ppm image. I always get a Segmentation Fault error when trying to do this. I really don't understand what I'm supposed to do. I mean, the file is binary, correct? But I have to open it as a text file and read the pixels, then parse the whole thing and proceed to the other parts of the project. Can someone tell me how to actually read and parse a ppm file in C? I read some guides but didn't understand a thing.
>> No. 2426 [Edit]
>>2425
Look at the first section in https://inst.eecs.berkeley.edu/~cs61c/fa20/projects/proj1/
that explains how to read ppm files. Note that I'm not sure if the ppm files you are referring to are ascii encoded or binary encoded. If it's ascii encoded you can open them up as text and see the raw rgb values, but if it's binary encoded you have to read it in binary mode.

For the binary encoded ppm you'll first want to familiarize yourself with the file format layout, so read https://en.wikipedia.org/wiki/Netpbm#PPM_example then open up a hex editor and try to identify the pieces of the image. Then you can read the binary data into a struct, and tada you've parsed it. Do same in reverse to write out the ppm file.
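
A rough sketch of the binary case (assuming a well-formed P6 file with no comment lines and maxval < 256; the filename is made up):

```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("image.ppm", "rb");
    if (!f) { perror("fopen"); return 1; }

    /* The header is plain ASCII even in the binary (P6) variant. */
    int width, height, maxval;
    if (fscanf(f, "P6 %d %d %d", &width, &height, &maxval) != 3) {
        fprintf(stderr, "not a P6 ppm\n");
        return 1;
    }
    fgetc(f);  /* eat the single whitespace byte that follows maxval */

    /* Pixel data: width*height RGB triples, one byte per channel. */
    unsigned char *pixels = malloc((size_t)width * height * 3);
    if (!pixels || fread(pixels, 3, (size_t)width * height, f) != (size_t)width * height) {
        fprintf(stderr, "short read\n");
        return 1;
    }
    printf("%dx%d, first pixel = %u %u %u\n", width, height,
           pixels[0], pixels[1], pixels[2]);

    free(pixels);
    fclose(f);
    return 0;
}
```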

Post edited on 21st Sep 2021, 1:23pm
>> No. 2427 [Edit]
>>2426
>Then you can read the binary data into a struct
Beware of field padding and compiler antics - assert() and offsetof() are your friends
>> No. 2428 [Edit]
>>2427
Good point. There are probably compiler flags to avoid the padding, but it's probably better and safer to avoid reading into the struct directly and instead read the data into a temp buffer, verify that it's well-formed, and then populate the struct.
>> No. 2432 [Edit]
File programa.txt - (747B )
>>2426
>>2427
>Beware of field padding and compiler antics
Are you talking about leading zeros in the hexadecimal values or something?
I have managed to make the program show me all the hexadecimal values of the image by printing it, and it matches the hexadecimal values I see when I open the image with hex editor. But when I try to use fread to store them in a buffer, things go awry. I have used fseek and ftell to get the number of bytes in the image, and the answer matches what I see in the hex editor as well, 155. I printed the value returned by fread to see if it was reading it correctly, and the answer was correct, 155. But when I try printing the value from my buffer it says "ffffcb10". If I use %s instead of %x it returns the text header in ASCII, and if I use %d instead of %x it returns "ffffc940". I tried another way of printing the hexadecimal values using fgetc on the file, and it returns it just like on the hex editor. The program is attached in a txt file.
Alternatively, here's the main part of the program:

FILE* file_of_image;
long int size_of_image;
file_of_image = fopen("D:\\Dados\\Downloads\\testeppm.ppm","rb");
fseek(file_of_image,0,SEEK_END);
size_of_image = ftell(file_of_image);
fseek(file_of_image,0,SEEK_SET);
char number_of_bytes[size_of_image];
printf("%d\n",fread(number_of_bytes,1,size_of_image,file_of_image));
printf("%x\n",number_of_bytes);
while(!feof(file_of_image)){
    ch = fgetc(file_of_image);
    printf("%02x ",ch);

>> No. 2434 [Edit]
>>2432
>Are you talking about leading zeros in the hexadecimal values or something?
No, he means that when you declare structs in C/C++ you can't always assume the layout in memory will match what you've written since compilers may add padding between elements or to the end. You don't need to worry about this since you're reading into an array, not a struct.

I think the issue is with the line
>printf("%x\n",number_of_bytes);
%x expects an (unsigned) int, but you're passing it the buffer itself, which decays to a pointer, so the "ffffcb10" you're seeing is (part of) the buffer's address, not its contents. If you want to print out the buffer to stdout with each byte in hex, you can loop through the buffer and print it one byte at a time like you did below that (at least I think there's no way to do this with printf format specifiers).
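
Something like this, reusing your variable names (just a sketch):

```
/* Print each byte of the buffer as two hex digits. */
for (long i = 0; i < size_of_image; i++)
    printf("%02x ", (unsigned char)number_of_bytes[i]);
printf("\n");
```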
>> No. 2440 [Edit]
>>/fb/6814
>>/ot/38799
Here's some Python 3 code I wrote to correct the recent mojibake (garbled text) on this site:
from codecs import register_error

register_error('passthrough_decode', lambda x: (x.object[x.start:x.end].decode('latin-1'), x.end))
register_error('passthrough_encode', lambda x: (x.object[x.start:x.end].encode('latin-1'), x.end))

def garble(s):
    return s.encode('utf-8').decode('windows-1252', 'passthrough_decode')

def repair(s):
    return s.encode('windows-1252', 'passthrough_encode').decode('utf-8')


Just pass the mojibake as a str (not a bytes) to the repair function. You can test it out by garbling text first with the garble function:
>>> repair(garble('お米券…進呈'))
'お米券…進呈'

>> No. 2441 [Edit]
>>2440
I appreciate this very much.
>> No. 2443 [Edit]
>>2440
There's a python module to automagically do this
https://github.com/rspeer/python-ftfy
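
Usage is basically a single call (I believe this is the example from ftfy's own README):

```
import ftfy
print(ftfy.fix_text('âœ” No problems'))  # -> ✔ No problems
```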
>> No. 2444 [Edit]
>>2443
Neat. ftfy calls this character encoding 'sloppy-windows-1252', and it seems it should work even if the input to the function contains characters that cannot be encoded to sloppy-windows-1252.
>> No. 2448 [Edit]
Sum types have changed my life--even without syntactic sugar.