Beep Boop Bip

File 150448609042.jpg - (110.47KB , 1280x720 , mpv-shot0028.jpg )
No. 1547 [Edit]
It doesn't matter if you're a beginner or Dennis Ritchie, come here to talk about what you are doing, your favorite language and all that stuff.
I've been learning Python because C++ was too hard for me (I'm sorry Nenecchi, I failed you). I've reached OOP and it feels weird compared to C++, which I never completely got anyway.
>> No. 1549 [Edit]
File 150465284126.gif - (773.18KB , 320x240 , Dai_Mahou_Touge.gif )
I have this irrational hang-up where I don't want to learn any programming language because I fear it will become obsolete or superseded within the next 5 years. Is there one that almost certainly won't, maybe something in the C family, or maybe even Rust?
>> No. 1551 [Edit]
>>1549
C and C++ will practically never be obsolete.
>> No. 1552 [Edit]
>>1549
Try a few and go with the one you like. This "if it becomes obsolete then I wasted my time and will have to start from zero" attitude is unhealthy thinking.
>> No. 1612 [Edit]
>>1549
First of all, a mainstream language won't become obsolete in just 5 years. At worst, it'll take a decade or two, maybe three, before one of them fades. And even among niche ones, some will just never die.

Secondly, the time you spent learning a language is never lost. Most languages come with paradigms, idioms, etc. If you can program in one language, you can pick up a similar one in less than a week (unless you want to be an expert, ofc).

Finally, learning a language you might never use isn't a waste of time. Some languages are just really interesting to learn for their own sake. They change the way you think about programming, teach you to solve problems in a different (and better) way, etc.

Most programmers (and every good programmer) know more than one language anyway.
>> No. 1614 [Edit]
>>1612
So, which are the mainstream languages that will take 10-30 years to become obsolete?
>> No. 1615 [Edit]
>>1614
I don't understand why you'd want a language that will die in 30 years when you can pick C and C++ and be set for multiple lifetimes.
>> No. 1628 [Edit]
>>1615
Could you please elaborate on what you meant? I'm technologically handicapped.
>> No. 1630 [Edit]
>>1628
He means that C and C++ will basically never become obsolete and you should learn those two if you're worried about learning a language that will become obsolete.
>> No. 1631 [Edit]
>>1630
>C and C++ will basically never become obsolete
I understood that, but not why. Why won't they become obsolete? Why won't there ever (within realistic expectations) be a language that makes them obsolete?
>> No. 1632 [Edit]
>>1631
Because they are so prolific. They have been used for years and years as the base of software. That, and from what limited knowledge I have, they are more basic languages than things like Java or C# or whatever else, which are like additions built with C/C++ as the foundation.
>> No. 1638 [Edit]
File 151736164996.png - (8.33KB , 762x600 , Pic-3.png )
I really want to get into an application development language, whether it's Java, C, or C++.

I've almost solely worked with web development languages: HTML, CSS, and JS (and jQuery).

However, I have a tough time deciding on my own, and I'm out of touch with where best to start. I'm not even sure what I'd program, beyond very simple games or desktop applications to automate the bizarre needs I occasionally have.
It's not fair to say I'm completely new to Java or C++. I did a semester of each in high school, and I understand data types and program flow, since I've also tried many web-related languages in addition to JavaScript.

>>1549
I find it funny how large a concern this is to new programmers, given how similar high-level languages are to each other. It's a question born of ignorance. Taken at face value, if you really wanted a language that will never go obsolete, try Assembly, or even better, binary.
C and C++ have at least another 30 years left in them. No other languages provide a closer-to-the-metal approach and unparalleled performance, in addition to a mountain of software libraries accumulated over the years.
Rust still seems like fanboy vaporware to me at this point. There's a lot of hype, but I've yet to download any software that uses it, and even the most competent Rust guy I know says it's not ready for serious deployment. It's the new Haskell / LISP.

That being said, it's surprising how fast web-based languages and libraries die a quick and morbid death, only to be replaced by something nearly identical almost immediately after.
>> No. 1640 [Edit]
File 151756130680.gif - (898.01KB , 400x214 , Open.gif )
>>1638
>I find it funny how large of a concern this is to new programmers
Oh, I'm not a programmer at all. I used to dabble in web design when CSS was just emerging, and I recall hating having to stick to formatting. After a while I stopped practicing, since I disliked the move from HTML to CSS, and whatever knowledge I had faded away. I'm just someone who would like a solution to the eternal problem of having to leave the house to work, and it seems like learning programming could alleviate that.
>It's a question born of ignorance.
Indeed.
>try Assembly, or even better, binary.
>C, and C++ have atleast another 30 years left in them.
What I also meant by the frustration of language-learning and the possibility of obsolescence is personal intelligence boundaries. I'm not that smart, and math itself was never my forte. I'm skilled in organization and in finding things in a sea of other things, but that doesn't mean I can just throw myself into learning binary and achieve whatever goals I can with it.

Considering that even non-amateur programmers go for "easier?" options rather than C or C++ (what happened to C+?), I assume you need certain dispositions or a certain level of intelligence many don't have.

So, I guess I could rephrase my original question as: if I were to start today, with which language could I end up working from home in X amount of time, with average intelligence? X being whatever it takes to be proficient enough to get jobs with it.
>> No. 1644 [Edit]
File 151791747138.gif - (241.21KB , 497x331 , giphy.gif )
Hikikomori here. I'm trying to find a way to make money from home without going outside. What programming language do you guys recommend for beginners?
>> No. 1648 [Edit]
File 151960936395.jpg - (111.59KB , 1280x720 , 150102648481.jpg )
>>1644

Start from the beginning: learn LISP with SICP.

You will be competing with freelancers from the third world, who are incapable of thinking critically. You need to learn how to code, not just how to write in a particular language. You'll have to learn the language, yes, but that is secondary to what is most important.

Being able to speak fluent English gives you a bit of an advantage, and make sure to get a GitHub account up and running (and contribute at a REGULAR AND CONSISTENT RATE).
>> No. 1657 [Edit]
File 152234679719.gif - (747.57KB , 490x276 , Ready.gif )
>>1648
>You need to learn how to code, not how to write in a language.
Does this apply to the suggestion an anon made about learning Assembly? Or is that a brainlet filter?
>and contributing at a REGULAR AND CONSISTENT RATE
Not really that acquainted with GitHub. Do you mean commits (updates), or actually engaging with other people and their projects there?
>> No. 1665 [Edit]
>>1657
Abelson (or is that Sussman?) begins the MIT course on computer science by saying that 'computer science' is a terrible name for the subject.
Firstly, it's not a science; it's closer to engineering or art. Secondly, it's not really about computers, because the computer is just a tool for you.
Then he contrasts computer science with math by showing an equation for what a square root is, and notes that though the equation is correct, it doesn't really tell you how to find one. He then shows a program that can find the square root of a number.
What programming is really about is giving precise instructions so that an abstract system can execute, or compute, them. That system is called a computer, but here the term describes a use for it, not the box you usually think of when encountering the word.
And understanding this fundamental idea of giving instructions is more important than any language or hardware that implements an environment where you're going to put the ideas into use.
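
The program he shows is essentially Newton's method: the equation says what a square root is, the procedure says how to find one. A minimal sketch of the same idea (in C++ rather than SICP's Scheme; mySqrt is a made-up name, and it assumes x > 0):

#include <cmath>
#include <cstdio>

// Newton's method: repeatedly average the guess with x/guess.
// The math definition only says y*y == x; this says how to get there.
double mySqrt(double x)
{
    double guess = 1.0;
    while (std::fabs(guess * guess - x) > 1e-9 * x)
        guess = (guess + x / guess) / 2.0;
    return guess;
}

int main()
{
    std::printf("%f\n", mySqrt(2.0)); // ~1.414214
}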

I can't expand on Anon's suggestions about GitHub, but it sounds like good advice. Your GitHub page can be treated like a resume, so gather proof of your experience there. Git is also almost a necessity for working on actual projects.
One thing you could also do is bounty hunting: people place bounties on software features they want to see. And then you can try to make a successful startup and sell out, or something.
>> No. 1666 [Edit]
>>1665
Thanks for the reply, I loved the way you explained it. What would you call Computer Science if you were given the power and authority to change it forever?
>> No. 1672 [Edit]
>>1631
Because money. COBOL is probably the most hated language in history, and it still exists, not only in the form of legacy code bases that nobody has the money, time, or skill to replace, but also because some people just can't let it die (I'm looking at you, IBM).

Languages such as C and C++ have been very, very, very popular, and it'd be impossible to replace all the software, libraries, etc. written in them in just 3 decades. They are also very robust, which makes them even harder to replace, since they still just werk.

But that's not the only reason. Nearly all mainstream languages are from what we call the "C family": they are all heavily inspired by C and look very close to each other. (If you have no experience with languages from other families you might not see what I mean, as you probably think that C and Python are total opposites with little to nothing in common.) Therefore learning one lets you pick up the others fairly quickly.

C will also never really disappear, if only for the sake of teaching. It's a really simple language, the syntax can be learned very quickly, and you get to learn a lot of core programming concepts without having to worry about lots of boilerplate that means nothing to you (yet). C/++ will stay relevant for at least another 50 years. Probably 100, because you'd also have to wait until all the stubborn programmers of these languages die out...
>> No. 1680 [Edit]
File 152503807397.jpg - (257.23KB , 850x1275 , Languies.jpg )
Pick one.
>> No. 1681 [Edit]
>>1680
C
>> No. 1682 [Edit]
>>1680
A toss-up between C++ and VB: the latter because she's cutest and looks lonely, and the former because it's a language I actually use.
>> No. 1698 [Edit]
>>1680
Java is widely hated but still widely used. I have more experience with Java than with any other language, though I'm trying to change that. It's verbose, has garbage collection, and is portable thanks to the JVM (with a couple of exceptions); admittedly somewhat bloated, but with lots of documentation and libraries. Often seen as the poster child of the object-oriented paradigm, though it has recently added more functional features, such as lambda expressions. JavaFX isn't great, but it's easy enough to learn for making GUIs. One consequence of Java's boilerplate and verbosity is that you can write a hundred lines of Java that do essentially the same thing as a very short shell or Python script. But it's still useful for a lot of things. I do think Java and OOP in general tend to overemphasize extensibility and modularity, though. There are some bad design patterns and features in Java, like access modifiers or getters and setters; not a big fan of that stuff. But it's a good way to learn about OOP and programming in general: polymorphism, inheritance, control structures, and lots of other stuff I don't feel like writing out. Decent language despite all the flak it gets.

C++ is fast, but it's easy to mess up security and memory management. Widely used for things that depend on performance, such as games, but it just isn't worth the headaches. Learning C++ made me appreciate Java more: garbage collection, references instead of pointers, and shit like that. For C++ GUIs, you can use Qt or GTK; I personally never got into GUI development for C++, though I did for Java.

Python is okay. It's used for machine learning, Django (web dev backend), learning programming, and so on. "Forced indentation of code" is a meme on /prog/, since some people find it annoying that organization is syntax in this language. I'm surprised Python 2.x still exists; it shows how making changes can fragment a community. More people are adopting Python 3.x though, which is good. Paths for different versions of Python can be annoying. I've worked on a project that used a tool called Anaconda, which made it easy for everyone to have the right versions of Django and Python and whatnot, to avoid the "well, it works on my machine" issues many people have. Lots of modules and community support too. However, Guido recently stepped down as BDFL, so who knows what the future of Python will be.

Ruby is dying. People use Django or Node instead of Rails. It's slow, and it's basically competing with Python in the realm of relatively simple interpreted languages. I've never really heard of it being used outside of RPG Maker XP and Ruby on Rails. Wouldn't recommend getting into it; it's a sinking ship.

PHP is a pile of garbage. It's ancient, full of security holes and black-magic fuckery, and it should be avoided at all costs. Maybe you'll end up using it when maintaining some legacy web codebase, but it sucks ass. I know of some cool tricks for hacking poorly coded PHP sites, using fun things like remote file inclusion and web shells. Interesting from an attacker's standpoint, but annoying as fuck if you're the web developer who has to deal with it. Don't use PHP.

C# is like Java, but for Microsoft shills. I never got into it, but it's only worthwhile if you're gung-ho about Microsoft (which I'm not).

JavaScript, despite all the hate (some of it deserved, for its weird quirks!), is one of the most important programming languages to learn. These days you can't really do any web development without JavaScript. Very few people use vanilla JS; instead you use shit like Angular, Node, Express, jQuery, and so on. EcmaScript 6 got class-based inheritance instead of the old weird prototypal inheritance of ES5. (It's weird how JS is based on the ES standard but they're not exactly the same; I never really understood that.) Anyway, JavaScript Object Notation is cool too, and you can even use it for non-web stuff. It's a nice alternative to XML; I'd never use XML these days. NoSQL/document-based databases like MongoDB are cool, and a good start if you're learning JS. With the MEAN stack you can have a JS frontend, a JS backend, and a JSON database. Makes things slightly easier, even though web dev is pretty complicated now.

Perl is extremely terse, to the point of being unreadable. There are lots of cool one-liners you can do with it, and sometimes I even use a little bit of Perl in my shell scripts, but overall it's not really usable. I've seen some older sites using Perl, but it hasn't aged well for modern concepts such as responsive design; it sort of reminds me of old-school CGI. I'm no Perl connoisseur, but apparently Perl 6 was slow to be adopted, and many people stick with Perl 5, just like the split between Python 2 and Python 3. I wouldn't bother learning Perl in 2018.

C is old-school procedural programming. Fast, but simple. You could think of it as C++ without the OOP, or the other way around: C++ is C with classes tacked on as an afterthought. Wanna learn about pointers and compiling and other old-school stuff? I guess you could learn it with C (or C++). But in the real world you're not likely to use it, except in a CS undergrad class or maybe legacy code. C++? Sure. Pure C? Not so much. Maybe if you do embedded systems shit where resources are tight, or you really need that extra performance squeezed out of something. Then again, it's good to have knowledge of non-OOP paradigms: procedural, imperative, and perhaps even functional (though I'd never recommend functional languages for anything other than messing around: very few jobs, too obsessed with concurrency, no return statements, weird concepts like currying and monads; the only really cool thing is lambda expressions, which I use in Java all the time). Anyway, got a little off-topic, but C is kinda old-school and not really something you'd want to base a modern project on. If you want something fast like C or C++, maybe look into Rust, which is similar but with better memory safety built in.

I never got into Visual Basic.

I've heard good things about R, but I've never actually used it.

Never used Scala. Don't know much about it.

Shells are not programming languages; shells are shells. Technically you can do shell scripting, which is useful, but would I call that fully-fledged programming? Not so sure. I do a lot of bash/zsh one-liners, and I like customizing my .zshrc and making cron jobs and all that jazz (though systemd is subsuming all that shit nowadays). Lots of cool and powerful stuff in shells. Definitely worth learning command-line stuff if you want to program, no matter which OS or programming language you use. But be warned: PowerShell is a joke compared to Unix shells like bash or zsh.

ActionScript? Flash is dead. Not worth your time.

Other languages that aren't on that list, but are worth mentioning:

Swift -- optionals/nils are interesting, also it's what you need to make iOS apps, since Objective-C is being phased out

Kotlin -- successor to Java as the go-to language for Android development

Haskell (for academic or hobbyist purposes)

Rust

Go

Assembly, such as MIPS or ARM or x86: if you wanna learn more about how computers really work, it can be useful to learn some assembly. You'll learn about registers and all that good stuff. It's a total pain in the ass, and you'll appreciate higher-level languages more after doing shit in assembly. Assembly is very simple -- and that's the problem: it's hard to get an idea of how combinations of jumps and pushes do anything. Higher-level languages introduce extra layers of abstraction, so you can think more about the problem you're trying to solve and less about CPU registers and whatnot. A lot of compilers compile down to assembly, and the result is pretty hard to read or reverse engineer (though some languages compile to bytecode instead), so if you want to get into reverse engineering or malware, assembly is important to learn. But it doesn't make sense to write an application in assembly when you could use something like Java or C++ instead.

Markup and "that's technically not a programming language" languages: HTML5, CSS3, Markdown, YAML, LaTeX, preprocessors such as Sass, and so on. Still important to know. They might not be Turing-complete, but so what? Pedants argue about what to call them, but regardless, you should know some of them anyway.

When some people ask "which programming language should I learn?" the answer is: many programming languages. You might only speak one language in daily life, but you might need to use multiple programming languages as a programmer. They have different design philosophies, different built-in methods, different libraries, run on different devices, and have different use-cases. They're not all general purpose, and even so-called general purpose languages are better for some things and worse for others.

There is no "best" language to learn, so stop obsessing over which one is best and just pick something. I'd suggest Python, HTML/CSS/JS, or Java to start with. When you learn your first programming language, you're learning programming itself, paradigm-specific stuff, and language-specific stuff. For your second programming language (assuming it's in the same paradigm, which should ideally be OOP-based, even if it's not 100% OOP), all you're really learning is the syntax and the language-specific stuff. It's way easier to learn another programming language after you've already learned your first one.

It's easy to learn programming, but only if you have realistic time expectations. If you think you're gonna make a game in a day, you'll be distressed by how complicated everything sounds. Rome wasn't built in a day, so pace yourself: learn linked lists one day, then stacks and queues, then binary trees, then time complexity the day after that, and so on. That's not a particularly good order, just an example. Which brings up another topic: there's a difference between learning the basics of a language and learning the more in-depth topics, such as algorithms, data structures, software engineering, project management, best practices, devops/agile, tools, debugging, design patterns, etc.
>> No. 1699 [Edit]
>>1638
>if you really wanted a language that will never go obsolete try Assembly
Terrible advice, considering the limited use cases of assembly, and also how CPU architectures change over time. x86 might be hot now, but it's being overtaken by ARM, and different CPU architectures have different assembly languages. I learned MIPS assembly in college, and it's pretty much useless; I don't even put it on my resume.
>> No. 1784 [Edit]
I'm aiming for a career in Mechanical Engineering, so programming for me is going to be something I do to assist with prototyping. That is to say, it'll primarily be a hobbyist activity. On that note, I've decided to learn C first, then Forth, then Lisp, and then Ruby.

C because I hear it's a simple yet powerful, "bare metal" programming language. I believe it also teaches you how the computer works internally.
Forth because I hear it gets used in the aeroplane and space sectors of engineering. It's even closer to the "metal" than C, I hear. I've heard it gives the programmer immense control and presents its own unique problems that challenge you to think differently.
Lisp because I hear it induces some kind of revelation about programming once you come to understand it. I hear it's perfect for when you don't know what you should be doing, because Lisp code is flexible enough to be taken in a new direction with relative ease.
Ruby because I predict I'm going to want a friendlier language to code my own applications in when I'm not trying to be maximally efficient.
I also consider Ada and Fortran as possibilities.

My most recently completed project was a program that prints a list of all the palindromes from 10 to 1,000,000 without using string manipulation (since I have yet to learn it).
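
(For the curious, the usual trick is to reverse the digits arithmetically. A minimal C++ sketch of the idea, not my actual program:)

#include <cstdio>

// A number is a palindrome iff it equals its own digit reversal,
// which can be computed with plain arithmetic -- no strings needed.
bool isPalindrome(long n)
{
    long reversed = 0;
    for (long m = n; m > 0; m /= 10)
        reversed = reversed * 10 + m % 10;
    return reversed == n;
}

int main()
{
    for (long n = 10; n <= 1000000; ++n)
        if (isPalindrome(n))
            std::printf("%ld\n", n);
}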
>> No. 1785 [Edit]
>>1784
I would recommend LISP first, as it will give you a better understanding of the science in computer science and better illustrate the benefits of doing things in different ways (especially recursion).
C is great for exactly the reasons you list, but the manual memory-management crap and a lot of C's idiosyncrasies (including the compilers) will slow down the general concept stuff that you should be learning first and foremost.

I would also recommend Python over Ruby for what you're trying to accomplish, as Python is a lot better supported overall and has a much larger library of publicly available code to steal, er, borrow from.
>> No. 1786 [Edit]
>>1785
Thanks for the support, but unfortunately I've already bought books on C to get me started with it. I'm going to press on with C, just for the sake of reassuring myself that I didn't waste my money. Considering what you said, though, I guess I could make Lisp second, and I'll learn it while watching those SICP videos on YouTube before going on to Forth.

I've heard that Python is popular for being popular, so I'm somewhat aware of how well supported it is. When I look at Ruby code, though, and how it's so sterilised of all the "computer" things you usually find in software code, I feel really drawn to it. It seems really comfortable. I'll just have to keep Python in mind as well.
Thank you.
>> No. 1787 [Edit]
File 154943397236.jpg - (85.93KB , 706x455 , slap.jpg )
Learning C++ atm. I need to learn data structures and algorithms, but I keep getting blocked by the math and big O.
>> No. 1955 [Edit]
File 157517725294.jpg - (201.97KB , 1920x1080 , fecd8526c650eba6709a3a9fe9c7666e.jpg )
If you want a fun language to use, I recommend trying D. Its template system is awesome and a lot easier to use than the one in C++, as I recently discovered in a project I'm working on. I wish it were more popular, however; its rough edges could be smoothed out if it had more manpower.
>> No. 1958 [Edit]
File 157596783467.jpg - (431.80KB , 1080x2160 , Screenshot_20191210_083838_com_termux.jpg )
I can definitely recommend learning assembly. It helped me tremendously to get an intuitive understanding of how computers work. Many of the things in C++ and other languages that I had previously found hard to grasp (e.g. binary logic, bit shifts, flags, two's complement notation etc.) are like second nature to me now because of dealing with them all the time when writing assembly code.
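
A few of those concepts, illustrated in C++ rather than assembly (a toy example of my own):

#include <cstdint>
#include <cstdio>

int main()
{
    // Two's complement: -1 is "all bits set", so reinterpreting it
    // as unsigned gives the type's maximum value.
    std::int8_t x = -1;
    std::printf("%u\n", static_cast<unsigned>(static_cast<std::uint8_t>(x))); // 255

    // Bit shifts: left-shifting by n multiplies by 2^n,
    // right-shifting divides (for non-negative values).
    std::printf("%d\n", 3 << 4);  // 48
    std::printf("%d\n", 48 >> 4); // 3

    // Flags: individual bits packed into one integer, tested with masks.
    const unsigned FLAG_READ = 1u << 0, FLAG_WRITE = 1u << 1;
    unsigned flags = FLAG_READ | FLAG_WRITE;
    std::printf("%s\n", (flags & FLAG_WRITE) ? "writable" : "read-only");
}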

As your first programming language, I recommend either something very high-level (Haskell, Python), where you can focus on learning algorithmic thinking without having to deal much with things like memory management and integer overflows, and where you can start writing somewhat useful software early on; or starting from the ground up with assembly, so that you can focus on learning exactly how computers deal with binary numbers, character encodings, memory addresses, registers, vector operations, stacks, system calls and so on, so that none of this will present a hurdle later in your programming career.
Avoid starting with the languages in the middle (C/C++, Java, Rust), where you have to deal with both at the same time.
That said, if there's something very specific that interests you, learn whatever is most useful in that field: JavaScript for web development, C for microcontrollers/Arduino, shell scripting if you're a Linux user, Java if you want to write Android apps.

Some decent assembly books aimed at beginners that deal with modern processors are Jeff Duntemann's Assembly Language Step-by-Step, Assembly Language Coding in Color: ARM and NEON, and Programming from the Ground Up. Or just get an emulator for an 80s computer like the ZX Spectrum or Commodore 64 and read one of the countless beginner's books on assembly for the Z80 or 6502 CPUs.

A good book on higher-level/philosophical computer science concepts that doesn't require a strong mathematical background is Understanding Computation from O'Reilly. Other than that, just look at what universities teach in their curricula.

>>1631
C was arguably the first language that was a good abstraction over how a computer works without being specific to one type of processor. That's why it's great for writing things like operating systems, where speed is important but the code must remain portable across different computers. You can run Linux on both your desktop computer (Intel processor architecture) and your phone (ARM processor) because it's written in C. You can't do that with assembly, because you'd have to rewrite it for every type of processor, and you can't do it with LISP or Python, because they're too slow. So everyone back in the day started writing all these fundamental systems (many of which are still around from the 70s/80s) in C, and it will basically never go away - unless Urbit really takes off and the feudal internet aristocracy of the future writes everything in Hoon.
>> No. 1959 [Edit]
File 15759893559.gif - (44.46KB , 640x400 , download (1).gif )
>>1958
What do you think about MATLAB? The most complex things I've done with it are matrices, structures, and that kind of thing. That's basically all I know, though I've tried dabbling in C before, using that shitty "learn C in 24 hours" guide. Where should I go next? Algorithms? At what point do you really "know" a language and move on to learning something else? Also, have you heard of the Introduction to Microcomputers series by Adam Osborne? I haven't finished it, but it gave a very interesting glimpse into the hardware side of things. You learned x86, right?

Interesting links I can't do anything with, but maybe you or somebody else could:
https://web.archive.org/web/20180630204922/http://island.geocities.jp:80/cklouch/column/pc98bas/pc98disphw_en.htm
http://seclan.dll.jp/dtdiary/1999/dt19990924.htm
http://euc.jp/articles/pc9800.en.html#chap5
https://archive.org/details/PC9800TechnicalDataBook1986
https://46okumen.com/pachy98/
I remember finding the 98 bible, but I don't have the link for whatever reason.
>> No. 1960 [Edit]
>>1958
If you want to start with assembly, maybe also take a look at RISC-V. The spec is pretty clean, and since it's from the RISC lineage, the instruction set is self-contained and easy to understand. One of the projects at my uni was to build a CPU (in circuit-simulation software), and I was surprised at how compact it ended up being. Unfortunately the tooling and the ecosystem are still somewhat janky at the moment, but it's worth looking into, since there's a chance it might take off in the future.

>>1959
Not the poster you were responding to, but MATLAB's a cool language for any sort of numerical computing. It's starting to fade a bit now that there's numpy+python, though the lack of operator overloading sort of hurts Python here. If you need one of the specialized toolboxes then there's no real alternative, but otherwise making the transition to numpy shouldn't be too unfamiliar, and it will be a good introduction to Python.
>> No. 1961 [Edit]
>>1959
>Where should I go next?
I suggest just writing a lot of programs and looking up stuff on search engines and in reference manuals as needed.
Since we're in the Christmas season, try doing the programming puzzles at adventofcode.com and see how far you can get in each year's calendar.
As your skills as a programmer progress, you'll also see more and more opportunities to contribute to open source projects.
You'll eventually run into a roadblock due to lack of knowledge, and that's always a good pointer toward what you should learn next.

If you're looking to learn more about how CPUs work, check out nandgame.com and maybe the Nand2Tetris course on which it is based. It basically leads you through the process of building an entire computer from just above the transistor level.

Since you know some C, you could look into Arduino microcontrollers, which let you control electronics (think LEDs, sensors, buttons, small LCD screens, etc.), if you find that sort of thing interesting at all.

>Algorithms?
If you feel like it, sure. Can never know enough about that topic. It all depends on your goals and what interests you. As I said, looking at what universities teach in their curricula is a good guideline when you're not sure what to read about next.
>At what point do you really "know" a language and move on to learning something else?
I recommend sticking with whatever language you know until you want to solve a problem it is simply not suited for. If you want to make websites for example, C isn't the right tool (except for some back-end stuff).
If you want to learn a new language just for the heck of it, my suggestion is to pick one that's suitable for understanding a different programming paradigm than what you're used to -- such as Haskell for functional programming, Ruby for OOP, or some sort of assembly language.

>You learned x86, right?
It's what I started learning assembly on, using Duntemann's excellent tutorial, but I never wrote much in it. Had to drop the book half-way through because I became homeless and didn't have access to a desktop computer most of the time.
The architecture I'm most familiar with is ARM, in both the 32-bit and the 64-bit variant. That's what my phone has, and I have to say that ARM assembly is much more readable and intuitive than its x86 counterpart, and not nearly as mind-bogglingly complex. I can echo >>1960's recommendation to pick a RISC architecture for learning how assembly works, and ARM (formerly Acorn RISC Machines) is one of those. Most of what I know about it is straight from the documentation on ARM's website.
The one I'm second-most familiar with is the Z80, which has good official documentation and is very simple, but is much less consistent and logical than ARM.
If you're interested in reverse-engineering Windows apps or something, you obviously won't get around learning x86 though.
>> No. 1962 [Edit]
>>1961
>If you're interested in reverse-engineering Windows apps or something, you obviously won't get around learning x86 though.
Well, I'd like to be able to do something with the pc-98 one day, but that's a bit of a pipe dream.
>> No. 1963 [Edit]
>>1961
>Z80
You might find this fun reading then:
http://www.chrisfenton.com/the-zedripper-part-1/
>> No. 1967 [Edit]
>>1962
>I'd like to be able to do something with the pc-98 one day, but that's a bit of a pipe dream.
Looks like a fun toy. Why do you think it's a pipe dream, given how much documentation for it is out there? Is it that you can't read Japanese?

Do you have real hardware or are you planning to do everything in an emulator?
>> No. 1968 [Edit]
File 157610422449.gif - (47.99KB , 640x399 , VGNyu1s.gif )
>>1967
>Is it that you can't read Japanese?
Yep. I'm in the process of learning it. I'd also have to learn a substantial amount about the 98's dialect of x86, which isn't a walk in the park, plus Japanese tech terminology, though a lot of that is hopefully written in katakana. If I do manage it, it won't be any time soon. Learning how to program the Z80 first might be beneficial, since it's similar and simpler.
>Do you have real hardware or are you planning to do everything in an emulator?
Emulator. Is there much advantage to working on real hardware?
>> No. 1969 [Edit]
>>1968
>Emulator. Is there much advantage to working on real hardware?
Depends on the emulator. If the emulator is inaccurate, your software may behave in unexpected ways once someone does try to run it on real hardware.
On the other hand, emulators may have nice debugging features that are superior to whatever you could find on real hardware. Even just save states are a blessing in this regard.
At the very least, making frequent backups of your work will be a lot easier.

Most of my experience with Z80 programming comes from writing shitty demos for the Sega Master System (which I can't recommend if your goal is to learn Z80 programming, because the documentation isn't nearly as good as for the popular personal computers of the time). I was mostly using Emulicious, which has a useful debugger, but I'm fairly certain that none of the programs I wrote would even boot on real hardware. I'd have to change at least a few things in the file headers, and probably some of the actual code too.
>> No. 1970 [Edit]
File 157619777878.png - (144.83KB , 1366x768 , 1484437985663.png )
>>1969
There's a branch of the Neko Project emulator which can run Windows 98. I don't know how possible that is on any physical model. I do know different models have different specs, and some stuff that works on one might not work on another. Emulators are better for experimental-type stuff, since you can adjust the specs however you want, even beyond the capabilities of any real model of the system.

https://sites.google.com/site/np21win/
>> No. 1971 [Edit]
File 157620045928.png - (849.23KB , 2000x1125 , programming_challenges.png )
>>1968
If you want to do something on the PC-98, why not start with doing stuff in BASIC?
http://worholicanada.mydns.jp/pc98/00303.html
>> No. 1972 [Edit]
>>1971
That image is a troll post, right? I mean, they're good projects, but a few of them seem to have their difficulties completely off. E.g., why are "Game of Life" and "English sentence parser" both medium? The former is a straightforward recursive program, while the latter is a relatively sophisticated NLP project (unless you just call into a pre-existing library). Similarly, why is "text editor" hard but "JavaScript debugger" medium?
>> No. 1973 [Edit]
>>1971
>Design a Game Engine in Unity
A game engine within a game engine?
>> No. 2026 [Edit]
File 159089471993.jpg - (270.64KB , 500x650 , 2f502e6140df7ce9868c2f1b3db5f5a1.jpg )
I'm reading How to Design Programs. It's a Scheme book. There's a newer edition out for Racket, but I started the edition I did before knowing that. It's far from my first exposure to programming, but it's the first time I'm learning it seriously. The exercises are tough and I have to look at the answers a lot. Recursion is tough. Mutual recursion is tough. I'm doubting myself a bit.
>> No. 2027 [Edit]
There are those who interview for a programming job but cannot implement fizz buzz or similarly trivial constructs despite graduating with a CS degree. And yet, these people do get jobs. Masturbation is the only thing left.
>> No. 2028 [Edit]
>>2027
Conversely you also have those with demonstrated experience who get asked gotcha brainteasers.
>> No. 2029 [Edit]
>>2028
Interviews are a joke.
>> No. 2030 [Edit]
Working is a joke.
>> No. 2031 [Edit]
>>2027
Do you think masturbation could help you in an interview? I have to try that one.
>> No. 2032 [Edit]
I want to hear people's opinions on Rust. Things like ripgrep have piqued my interest.

>>2031
Yes, there's no better way to assert your dominance.
On the other hand, you could end up as a sex offender.
>> No. 2033 [Edit]
>>2032
Ripgrep is very nice (in fact all tools by that author are very handy).
There was also a brief discussion of rust in /ot/ (http://tohno-chan.com/ot/res/33905.html#i35079)

I think there are a lot of neat ideas there, both from a PL-theory perspective (enforced lifetime tracking) and from a practical one (succinct, helpful compiler messages). I'd like to see them make their way into C++ as well (the LLVM community is doing some work on improving static analyzers).
>> No. 2034 [Edit]
>>2033
I'm not sure any of this can be brought into cpp; the language has so much legacy and so many features that at this point it is nigh impossible to add anything without breaking at least *something*.
And PL theory, sadly, tends to go against good error messages. Standard ML and OCaml have lean and mean error messages, while Haskell is horrendous in this regard, especially once you add advanced type-level features into the mix; then you can compare the errors (at least in kilobytes) to the ones you get from templates in cpp.
>> No. 2035 [Edit]
>>2032
Apparently Rust's type system is formalized via the notion of affine types, where every variable can be used at most once. There are also linear types, where a variable must be used exactly once. Wikipedia gives C++'s unique_ptr as an example of a linear type, but to me it seems like an affine type instead, since you can always choose to discard it (just let it go out of scope).

It's also not clear to me why they're called linear/affine.

https://en.wikipedia.org/wiki/Substructural_type_system
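
To make that concrete, a small illustration (my own toy example, not from the article):

#include <memory>
#include <utility>

int main()
{
    auto p = std::make_unique<int>(42);

    // "Using" p here means moving from it: ownership transfers to q,
    // and p is left empty. Touching *p afterwards would be a bug,
    // so each owner is consumed at most once.
    auto q = std::move(p);

    // q is never moved from; it's silently discarded when it goes out
    // of scope. An affine type allows this ("at most once"); a linear
    // type ("exactly once") would not.
}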
>> No. 2044 [Edit]
>>2035
They are called that because they come from the linear and affine branches of logic, where you can use a proof exactly once or at most once, respectively.
>> No. 2051 [Edit]
File 159649336968.png - (62.14KB , 870x666 , code.png )
I made this hashmap for up to 8 characters in C by dereferencing the strings.
Probably useless, but I think it's pretty funny.
>> No. 2052 [Edit]
>>2051
Oh, I think I understand what's going on there. I was confused at first because you mentioned a hashmap, but what it's doing is reinterpreting the sequence of bytes "Cat....." or "Hello..." as an int64, which can be thought of as a pseudo-"hash". It's more like a fixed lookup table, and an interesting way of working around the fact that C doesn't support switch statements on strings.
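
Something along these lines, I'd guess (a C++ reconstruction of mine, since the original is only a screenshot; pack is a made-up name):

#include <cstdint>
#include <cstdio>

// Pack up to 8 characters into a uint64_t so short strings can be
// compared -- or switched on -- as one integer. constexpr so that
// it can appear in case labels.
constexpr std::uint64_t pack(const char* s)
{
    std::uint64_t v = 0;
    for (int i = 0; i < 8 && s[i] != '\0'; ++i)
        v |= static_cast<std::uint64_t>(static_cast<unsigned char>(s[i])) << (8 * i);
    return v;
}

int main()
{
    const char* input = "Cat";
    switch (pack(input))
    {
        case pack("Cat"):   std::puts("meow");    break;
        case pack("Hello"): std::puts("world");   break;
        default:            std::puts("unknown"); break;
    }
}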
>> No. 2053 [Edit]
>>2052
Oh yeah, I didn't even map any values.
No matter. I just looked at the assembly, and no matter how many cases I add, it still ends up being just a series of if statements, so it is completely useless!
>> No. 2054 [Edit]
>>2053
Did you compile with -O2/-O3? I'm pretty sure that past some point compilers will use binary decision trees for the branching instead of sequential conditionals. But then again, there's not much point to this, as you're better off using a proper hash function anyway.
>> No. 2055 [Edit]
>>2054
The optimization flags do actually make it work, thanks for that.
Yeah it's pointless, I just think playing with pointers is fun.
>> No. 2063 [Edit]
I decided I wanted to make an elaborate strip-poker game where the other players are JCs and JKs. Then I realised the hardest part isn't the programming but making the art. Very sad.
>> No. 2069 [Edit]
I'm creating a CLI program that downloads manga chapters from MangaDex. As of right now, one may specify criteria for determining which chapters to download: qualities such as chapter #, volume #, language (to which the chapter was translated), and groups' names.
One may also provide a template, or output mask, for the downloaded chapter archive's filename. For example, the default output mask is "{title} - c{chapter} (v{volume}) [{groups}]"; thus, given the first English chapter of Forget-me-not, the resultant filename would be "Forget-me-not - c001 (v01) [Hanashi].cbz". (Currently, zip files are the only supported format.)
Further, one may set the program's user agent and the delay between requests. The latter defaults to two seconds to ensure that one is not blocked.

After I add support for different packaging formats, packaging by volume, a finalized CLI, and helpful end-user documentation, I plan to refactor and rewrite a good portion of the code. One module is needlessly complex and template-heavy, and other files need better documentation. If anybody would like to try it, please let me know! As you can infer, the software is still in development, but I've used it a few times for my own archival needs.
>> No. 2070 [Edit]
I also need to determine how the program responds to overly long filenames on Windows. Considering that a manga's (or chapter's) title will be the usual culprit, I believe that shortening it and adding an ellipsis would be a decent solution. (One may change a setting in the group policy or the registry to remove the path limitation, but that seems burdensome for the end user.)
>> No. 2071 [Edit]
>>2069
> needlessly complex and template-heavy
By template-heavy you don't mean C++ templates, do you? As much as I hate to be the one suggesting languages, this seems like a place where Python would shine, given the ease with which you can parse webpages in it.
>> No. 2072 [Edit]
>>2070
Why would that be necessary? You can hover your mouse over any file name and see the full thing in a box that appears.

>> No. 2073 [Edit]
>>2071
>By template-heavy you don't mean C++ templates, do you?
I'm using D, whose templates are actually programmer-friendly, and it's only template-heavy because I wanted to test some ideas.

>this seems like a place where python would shine given the ease with which you can parse webpages in it.
Python would probably be a fine alternative, but I'm directly calling MangaDex's APIs; only JSON must be parsed, and D has that capability in the stdlib. Even if I must deal with HTML, there is an awesome D library that implements much of the JavaScript DOM library and interface.

>>2072
>Why would that be necessary? You can hover your mouse over any file name and see the full thing in a box that appears.
Unless I'm misunderstanding you: from my experience, if a file's path (i.e. filename + the folder hierarchy in which it's nested) is too long, you may not be able to meaningfully interact with it. A few times in the past, I had to boot my PC into a Linux environment just to rename, move, or delete the offending files.

>> No. 2074 [Edit]
>>2073
Ah, neat. I've played around with D and it seemed quite nice, although I haven't been able to find a personal niche for it in my own work. I also didn't know MangaDex had an API!

With regard to the path limits, I recall reading somewhere that even if you don't flip the registry flag to enable long paths globally, there's a way to call into Win32 APIs directly and force the use of long paths via a special prefix. I have done zero Win32 development though, so I can't comment much further on that. If it's a significant enough issue, maybe you could just target Linux and use WSL to run it on Windows?
>> No. 2075 [Edit]
>>2074
>Ah neat, I've played around with D and it seemed quite nice – although I haven't been able to find a personal niche for it in my own work.
Yeah, I feel its general-purpose nature is both a blessing and a curse. Its meta-programming capabilities are pretty nice, though.

>I also didn't know mangadex had an api!
Neither did I. My initial client implementation parsed the webpages, but after a cursory glance at my web console, I discovered its existence. I do wonder how long it's existed.

>With regard to the path limits, I recall reading somewhere that even if you don't flip the registry flag to enable long paths globally, there's a way to call into win32 apis directly and force use of long paths via some suffix.
You are correct: one prefixes the filename with a special sequence of characters (\\?\) to bypass the limitation. However, if I read the docs correctly, there are some quirks to it. It'll take some experimentation.

>If it's a significant enough issue maybe you could just target linux and use WSL to run it on windows?
I don't think it'll come to that. Abbreviating the filename or applying the filename prefix should be suitable. Plus, Windows is my daily driver, and I'd like this program to run natively.
>> No. 2082 [Edit]
I've been trying to conjure up a design by which structs (i.e. aggregate value types) can be dealt with like classes and interfaces. An obvious answer is structural typing via meta-programming. However, tunnel vision is quite potent.
>> No. 2085 [Edit]
>>2082
> structural typing via meta-programming
Can you explain what you mean by this? For simulating OO in C via structs, the solution I've usually seen involves including the base class as a member of the derived classes so you can manually cast back and forth, and then essentially implementing the vtable by hand to get polymorphism.
>> No. 2087 [Edit]
>>2085
What I mean is that, given a function with a parameter of type T, it only operates on the subset of members specified by T; as long as a struct defines those members, then from the viewpoint of the function it's considered equivalent to other types that do the same. (In light of this description, I retract my earlier label: it's closer to duck typing than structural typing.)
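
In C++ terms, it would look something like this (a toy sketch; process and the widget types are made up):

#include <iostream>

// process() only cares that T has an `id` member; any struct that
// defines one is acceptable -- compile-time duck typing.
template <typename T>
void process(const T& widget)
{
    std::cout << widget.id << '\n';
}

struct Widget    { int id; };
struct FooWidget { int id; const char* text; };

int main()
{
    process(Widget{1});
    process(FooWidget{2, "hello"});
}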
>> No. 2088 [Edit]
>>2087
Yeah, OK, that makes sense. It's annoying in C though, because you also need the same layout of the structs, which is why, as I mentioned, most people just include the base struct as the first member.
>> No. 2089 [Edit]
>>2088
I assume you're referring to something like this, right? (Sans encapsulating the parent's fields.)

#include <stdio.h>
#include <string.h>

struct Widget
{
    int id;
};

struct FooWidget
{
    int id;
    char* text;
};

void process(struct Widget *widget)
{
    printf("%d\n", widget->id);
}

int main(void)
{
    struct FooWidget foo;
    memset(&foo, 0, sizeof(foo));
    process((struct Widget*)&foo);
}

>> No. 2090 [Edit]
>>2089
Yeah exactly that's the idea. Although in the approach I mentioned you would do


struct FooWidget
{
    struct Widget base;
    char* text;
};


so that way you don't have to repeat all of the parent's members (and it also avoids issues regarding struct packing/alignment). A lot of codebases I've seen do this for logging in particular: all of the "inherited" classes share the same first member, and then the "logId()" macro or whatever can just cast to that shared "base" first member and extract the id.

You can also go further and implement function polymorphism, not just member sharing, by manually passing around vtables, as in the example below (since there's just one function, I don't have a separate vtable member; I just put the function pointer inline).


#include <stdio.h>

struct BaseWidget {
    int id;
    void (*dump)(struct BaseWidget *self);
};

struct ExtendedWidget {
    struct BaseWidget base;
    char* extra;
};

void dumpBase(struct BaseWidget *self) {
    printf("BASE: %d\n", self->id);
}

void dumpExtended(struct BaseWidget *self) {
    dumpBase(self);
    printf("DERIVED: %s\n", ((struct ExtendedWidget*) self)->extra);
}

void dump(struct BaseWidget *widget) {
    widget->dump(widget);
}

int main(int argc, char *argv[]) {
    struct BaseWidget base = {.id = 3, .dump = dumpBase};

    struct ExtendedWidget derived;
    derived.base.id = 4;
    derived.base.dump = dumpExtended;
    derived.extra = "foobar";

    struct BaseWidget *baseThatIsExtended = (struct BaseWidget *) &derived;

    dump(&base);
    dump(baseThatIsExtended);
}

>> No. 2091 [Edit]
>>2090
Neat. But Haruhi damn, I hate C's syntax for function pointers.
>> No. 2092 [Edit]
This is why nobody pays me to program.


struct MaskContext(string name, Placeholders...)
if (Placeholders.length > 0 && allSatisfy!(isPlaceholder, Placeholders))
{
    alias RequiredPlaceholders = Filter!(isPlaceholderRequired, Placeholders);
    alias RequiredParams = staticMap!(PlaceholderType, RequiredPlaceholders);
    alias AllParams = staticMap!(PlaceholderType, Placeholders);

    /// Constructor for all placeholder fields.
    this(AllParams params)
    {
        static foreach (i, P; Placeholders)
        {
            __traits(getMember, placeholders, P.identifier) = params[i];
        }
    }

    static if (RequiredPlaceholders.length > 0)
    {
        /// Constructor for only required placeholder fields.
        this(RequiredParams params)
        {
            static foreach (i, P; RequiredPlaceholders)
            {
                __traits(getMember, placeholders, P.identifier) = params[i];
            }
        }
    }

    // ヽ( ̄~ ̄ )ノ
}


And yet, it works!
>> No. 2093 [Edit]
Due to circumstances, I've returned to C++ after many, many years, and I must say that I have no idea what the fuck I'm doing. Grokking its template metaprogramming is difficult after enjoying D's relative simplicity; the lack of universal implicit initialization, the move semantics, and the ugly syntax are thorns in my side; and the lack of modules (in GCC, anyway) kills the soul. And yet, I'm having fun (with valgrind by my side). Plus, I get to re-enjoy Scott Meyers' talks and writings--always a good time.
>> No. 2094 [Edit]
The lack of built-in unit testing is saddening, too.
>> No. 2095 [Edit]
>>2093
C++20 now has "concepts", which help a lot with templates.
>> No. 2096 [Edit]
>>2095
Indeed. Template constraints are a great feature in D, and it seems concepts might be even more powerful. However, as usual, C++'s take seems rather ugly.
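
For comparison, a trivial C++20 concept (a toy example of mine; Numeric is a made-up constraint):

#include <concepts>
#include <iostream>

// Roughly what a D constraint like `if (isNumeric!T)` becomes in C++20.
template <typename T>
concept Numeric = std::integral<T> || std::floating_point<T>;

template <Numeric T>
T twice(T x) { return x + x; }

int main()
{
    std::cout << twice(21) << '\n'; // fine
    // twice("hi");                 // rejected at compile time
}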
>> No. 2097 [Edit]
>>2096
I wish SFINAE (and the hell that it has enabled) had never existed.
>> No. 2098 [Edit]
>>2097
It's certainly antiquated now, it seems.

Also, consider this:

template<typename... Args, typename LastArg, typename = void>
void foo(LastArg arg)
{
    // ...
}
foo<int, float>("Hello, world!");


I'm glad type inference with variadic template parameters is possible, but it's so odd. Cursory searches haven't revealed much about "typename = void", and cppreference (from where I learned this) doesn't go into detail.

Meanwhile, in D:

template foo(Args...)
{
    void foo(LastArg)(LastArg arg)
    {
        // ...
    }
}
foo!(int, float)("Hello, world!");


Readability at the cost of two template instantiations (unless this can be optimized), but I prefer it.
>> No. 2099 [Edit]
At first I was excited about constexpr, but it's stupidly limited: only "literal types" are supported, and "if constexpr" must be placed in function scope. So if you want a compile-time std::string (working with char{*|[]} kills the soul) or a replacement for the pre-processor, you're out of luck. Instead, I have to conjure up tricks to work around these issues, and even those aren't satisfactory. And here I thought C++ was catching up to D.
>> No. 2107 [Edit]
>>2099
With C++20, most of the limitations of constexpr are fixed (std::string and std::vector work now, too). There is also "constinit", among other new features. Read through them.
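
For instance, something like this is legal under C++20, assuming a standard library new enough to have constexpr std::string (the library support landed later than the core-language feature):

#include <string>

// C++20 allows allocation during constant evaluation, as long as the
// memory is freed before evaluation ends -- so std::string works inside
// constexpr functions, though the string itself can't escape to runtime.
constexpr int countChar(char c)
{
    std::string s = "hello world";
    int n = 0;
    for (char ch : s)
        if (ch == c)
            ++n;
    return n;
}

static_assert(countChar('l') == 3);

int main() {}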
>> No. 2108 [Edit]
>>1958
>LISP is slow
I hate this stigma that Lisp is somehow "slow" when it's absolutely not. SBCL can already produce images that are as fast as, if not faster than, what GCC produces, if you're clever enough. Now, I will say that writing Lisp to be as fast as C is a major pain; if you want to write fast code, you should use Chicken (which lets you drop down into C at any time) or just use C.
I think this idea of Lisp being slow comes from it being a LISt Processor where everything is a linked list, and these lists have O(n) access time. Honestly, today's machines (2000s and on) are fast enough to compensate for this, not to mention that most dialects let you use vectors when you're dealing with a truly large amount of data.
One more thing I would like to add that really gives Lisp an edge over most languages: programs are treated the same as regular data, that is, programs can be manipulated just as regular data can. Long story short, Lisp machines and Lisp instruction sets/architectures are near trivial to design, and they give the programmer, and user, some major benefits (not just speed). If you want to read more on this, I suggest Guy Steele's paper "Design of LISP-based Processors".
>> No. 2109 [Edit]
>>2108
SBCL is pretty amazing. You can see this quantitatively in [1], where Lisp is within an order of magnitude of C's performance. In fact, a lot of people's ideas about "fast" languages are out of date. I've heard people call Java a "slow" language, but it's really quite performant (thanks to a lot of effort put into the HotSpot JIT).

[1] https://thenewstack.io/which-programming-languages-use-the-least-electricity/
>> No. 2110 [Edit]
>>2107
Thanks for the information. I had assumed that my toolchain was limited to C++17, but it seems GCC 10 is supported. Pretty excited to see how much of the pre-processor I can replace. The dream, however, is to convert the platform's system headers into D modules and get GDC working. I don't know if I have the knowledge for the latter, though.
>> No. 2111 [Edit]
What do you guys think about a function that reads command-line options into a struct? The following is its documentation:


Parses command-line arguments that match the given `CLIOption`s into a struct and returns it.

Params:
- Options = A sequence of `CLIOption` instantiations.
- args = The command-line arguments passed into the program.

Returns: A struct composed of two fields: `values` and `helpWanted`. The former is another struct whose fields' identifiers and values are derived from the passed `CLIOption` instantiations. The latter signals whether -h|--help were specified--just like with `std.getopt`.


P.S. I wish we had a code tag, e.g.
[code][/code]


>> No. 2112 [Edit]
>>2111
I'm not sure I fully understand what you're going for. Can you dynamically create fields in a struct? And what would be the advantage over returning a dictionary (map)?

Incidentally, I wish all languages had something like Python's argparse. It's always been a pleasure to use, and it handles all the common use cases (required flags, optional flags, lists, etc.).
>> No. 2113 [Edit]
>>2112
>Can you dynamically create fields in a struct?
Fields are "mixed in" at compile time, so the type is fully defined at compile time.

>And what would be the advantage over returning a dictionary(/map)
Since I'm programming in D, and D is statically typed, the value type of a dictionary would have to be a variant--which would introduce some friction. I could also hack together a solution with `TypeInfo`, but I'm not too keen on that.

>Incidentally I wish that all languages had something like Python's argparse.
Never used it as I rarely program in Python, but it does seem nice after reading the docs. I'll have to borrow some of its ideas.

My `parseArgs` function is built upon D's std.getopt, as the latter doesn't promote structure, in my opinion.


/**
Usage: calc numbers [options]

Takes a list of numbers separated by whitespace, performs a mathematical operation on them, and prints the result.
The default operation is addition.
*/

// Usually a bad idea, like `using namespace std;`
import std;

// Default value | long and short names | optional argument that specifies the option's description in the help text.
alias OperationOpt = CLIOption!("add", "operation|o", CLIOptionDesc("The operation to perform on input (add|sub)"));
// Same as above, except we specify a type instead of a value. The option's default
// value resolves to its type's initial value, which for `bool` is `false` in D.
alias VerboseOpt = CLIOption!(bool, "verbose|v", CLIOptionDesc("Prints diagnostics"));

// -h|--help are automatically dealt with.
auto result = parseArgs!(OperationOpt, VerboseOpt)(args);
if (result.helpWanted) { result.writeHelp; return; }
auto nums = args[1..$]; // Let's just assume the user actually entered at least one number.

// An option's long name is its identifier in `values`, which implies that long names
// must also be valid D identifiers. I've ensured that common option names like
// `long-name` resolve to `longname`, but more bespoke option names trigger a compiler
// error with a helpful message. This wouldn't be a problem if `values` were an
// associative array whose keys are strings.
switch (result.values.operation)
{
    // Assume the variadic functions `add` and `sub` are defined.
    case "add": add(nums).writeln; break;
    case "sub": sub(nums).writeln; break;
    default: writefln!"Operation '%s' is not supported"(result.values.operation); result.writeHelp;
}


Three problems with my function and its associated templates:
1. I'd like `CLIOption` to take functions as a type. `std.getopt` can do this, but I've had issues creating a higher-level interface with this in mind. This is mostly due to how I designed things.
2. `parseArgs` should handle more than options, like `argparse`. After all, if it doesn't, mine should merely be called `parseOpts`.
3. I suck at programming.
>> No. 2114 [Edit]
>>2113
Ah neat that makes sense. Having not used D before, I was only vaguely aware of mixins. (It seems the definition of "mixin" being used here is slightly different than the conventional definition used in object-oriented languages? I've seen mixins in e.g. python/scala and there it's more akin to interfaces with default methods. But in D it seems it's a bit broader and more like templates, with support for compile-time preprocessing?)

>the value type of a dictionary would have to be a variant
Yeah most of the argument parsers I've seen in C++ deal with this by requiring you to manually cast any values that you access into the proper type. (There's also the gflags/abseil style argument libraries where you declare the variable you want to place the result into upfront. That works around the above issue, but on the flipside it's ugly and overkill for small projects). Creating a properly typed struct at compile-time would be a lot cleaner and safer.
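As a very loose runtime analogue in Python (names hypothetical; Python has no compile-time codegen, so this only approximates the idea of deriving a typed result container from the option declarations):

```
# Loose analogue: derive a field-per-option result container from option names
# at runtime. Unlike D, this happens when the program runs, not when it compiles.
from collections import namedtuple

def make_result(option_names, values):
    Result = namedtuple("Result", option_names)  # fields derived from the options
    return Result(**values)

r = make_result(["operation", "verbose"], {"operation": "add", "verbose": False})
print(r.operation)  # attribute access instead of casting dictionary lookups
```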
>> No. 2115 [Edit]
>>2114
D has two types of mixins: string and template. The former embeds a string containing valid D statements and/or expressions into the code: `mixin("int foo = ", 42, ";");` -> `int foo = 42;`. This must be done at compile-time, and any variables passed into the `mixin` statement must be readable at compile-time.
Then there's template mixins; these are more like traditional mixins found in OOP languages, except, as you've mentioned, they may be parameterized with types, symbols, and values. They are "mixed in" with the `mixin` statement: `mixin SomeDefinitions!42;` If `SomeDefinitions` had a definition, `int foo = value`, where `value` is a template parameter, then said definition will be accessible from the scope in which the template was mixed, and `value` is substituted for `42`. This is in contrast to a normal D template where its definitions, after instantiation, reside in their own scope accessible through a symbol.
The examples given here are rather trivial and don't do these features justice. For my command-line argument parsing library, I use string mixins to generate new structs at compile-time, and utilize mixin templates to factor out common definitions and compile-time processing. Further, there are D CGI libraries that expose template mixins that do all the setup drudgery, e.g. provide a `main()` and environment variable processing.

As an aside, D allows you to define strings with `q{}`, where the string's contents are placed between the curly braces. This indicates to a text editor, IDE, or whatever to treat the string's contents as if it were D code (or any code, I suppose): highlight it, provide code inspection capabilities, etc. These are helpful with string mixins.

>(There's also the gflags/abseil style argument libraries where you declare the variable you want to place the result into upfront. That works around the above issue, but on the flipside it's ugly and overkill for small projects).
I looked at them. I feel a little sick.
>> No. 2116 [Edit]
File 160688616514.jpg - (108.27KB , 1280x720 , [Doki] Mahouka Koukou no Rettousei - 10 (1280x720 .jpg )
2116
Alright, so I'm re-working that argument parsing thing, and funnily enough, template mixins have been a big help in refactoring. Combined with better and more succinct solutions to previous problems, the design is a lot cleaner. Documentation is better, too. With that said, I'm not sure of the best way to handle options' "optional" metadata:

alias VerboseOpt = Option!("verbose|v", false, OptionDesc("Garrulous logging") /* etc... */);

`OptionDesc` is one such piece of metadata. Right now, the `Option` template will pass the given variable-length list of metadata to a mixin template that will then define the appropriate fields. Thus, in the given example, a field of type `string`, whose identifier is `desc`, and with a value of "Garrulous logging" will have been defined in this instantiation of `Option`, i.e. `VerboseOpt`. The problem is that `parseArgs` will have to do some compile-time inspection on every `Option` instantiation to determine whether it has a description, i.e. a `desc` field, then either use the data therein or provide default values in the field's absence. This is not ideal for compilation times or for the code's clarity, as this also extends to other pieces of metadata like `OptionCategory` or `OptionRequired`. It's not terrible, but again, not ideal. I have a better solution in mind, but a clean implementation of it is difficult for my moronic mind.
>> No. 2117 [Edit]
File 160705909858.jpg - (185.24KB , 1280x720 , !.jpg )
2117
Continuing my work on my command-line argument processing library (Now called "tsuruya" because naming is hard.), I have realized happiness through the digital world instead of just the 2D one.
Here's an example:

auto args = ["./test", "1", "2", "3"];
auto result = args.parseArgs!(Parameter!("integers", int[]));
assert(result.parameters.integers == [1, 2, 3]);
assert(result.usageText == "Usage: test <integers>");

`parseArgs` is instantiated with a `Parameter` struct template whose name, both in the generated programming interface and command-line interface, is "integers". By specifying the type of the parameter's value as a dynamic array of integers, `parseArgs` will read all non-option command-line arguments; convert them to `int`; and then add them to the parameter's array. (As an aside, if one were to specify a static array, `parseArgs` will only read k-number of non-option command-line arguments, where k is the static array's length.) A usage string is also generated based on what parameters and options (collectively known as "command-line interface objects") were given to `parseArgs`.
`Parameter` may also take a callable object, e.g. function, instead of a type, and the value it expects will be that of the callable object's return type. Further, one may pass optional metadata to `Parameter` just like one may do with `Option`, e.g. CLIODesc and CLIORequired. The former defines a description for a command-line interface object that may be used in `parseArgs`'s generated help text. The latter specifies whether the parameter or option is, well, required to be in the command-line arguments.
>> No. 2118 [Edit]
>(collectively known as "command-line interface objects")
I scrapped this stupidity and renamed the `Parameter` templates to `Operand`, since that's what they actually represent. After all, "parameter" would include options too, and thus cause confusion. Anyway, on to error handling and all the fun that entails.
>> No. 2119 [Edit]
Oh how I wish for mutability during compile-time. The amount of recursive templates upon which I'm relying is making me sweat a bit.
>> No. 2193 [Edit]
I was trying to get a program I always use to do something for python 2.7 and it wasn't supported anymore. Looking up the changelog discussions, I saw a poster say "We shouldn't support such ancient distros". Christ... it's really bizarre to me how much the attitude among programmers has changed. Granted, decade-old software tends to be forgotten, but I have a hard time thinking of 2010 as "ancient", even as far as tech goes. Guess this is just me griping, but damn. I thought python 3.3 and 2.7 were still being used on the same systems.
>> No. 2194 [Edit]
>>2193
What a mess the python 2->3 transition was. Whose boneheaded idea was it to make things non-backwards compatible?
>> No. 2195 [Edit]
>>2194
>Whose boneheaded idea was it to make things non-backwards compatible.
I don't know, but there's a growing philosophy that old digital technologies should be forcefully cut out from any currently updated projects. Windows 10, for example, has some serious fundamental flaws that make Windows 7 look comparatively like a masterpiece, yet it's being prioritized so heavily that people are now cutting Windows 7 support from their projects. This in particular is infuriating, especially because when I'm not on a Linux machine I want to use Windows 7. In my brief stint with Windows 10 I discovered some horrific design flaws regarding path variables, registries, and worst of all administrator permissions. As it turns out, it is relatively easy on Windows 10 for a file to revoke, absolutely and forever, any access from any user, including the system user itself. This is particularly unpleasant when said file is malware.
>> No. 2208 [Edit]
I'm so addicted to meta-programming and templates that I often use them as a solution to anything. Usually, it's fine, but more straight-forward and obvious answers to problems tend to escape me in favor of some Rube Goldberg machine. It's fun, at least.
>> No. 2230 [Edit]
>>2195
I used Linux for many years. Couldn't take it anymore. I'm back to using Win7.
>> No. 2231 [Edit]
I spent hours trying to reverse a singly linked list. I accomplished the task, but the realization that this is yet another indicator of being unemployable hurts the soul. Also I had to use two extra variables (iterator and a temporary) in the implementation along with the list's head. It's O(n), I think, but I feel like it's subpar.
>> No. 2232 [Edit]
>>2231
Post a screenshot or code. Let's review it :)
>> No. 2233 [Edit]
>>2231
One hour seems fine if you haven't seen the problem before (or alternatively haven't practiced doing these kinds of interview questions in a long time). And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).
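For concreteness, a minimal iterative sketch in Python (the bare Node class here is assumed, not from the poster's code):

```
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    prev, cur = None, head
    while cur is not None:
        nxt = cur.next       # save the rest of the list before overwriting
        cur.next = prev      # the key relation: point current back at previous
        prev, cur = cur, nxt
    return prev              # prev ends up as the new head
```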

Once you get familiar with the usual interview hazing questions you should be able to do them in 15-20 minutes.

Also a relevant article "Why do interviews ask linked list questions" [1]

[1] https://www.hillelwayne.com/post/linked-lists/
>> No. 2234 [Edit]
>>2232
https://pastebin.com/zDTyG3h2

>>2233
It took me two hours, I think. Even though I rarely work these kinds of problems, it's still a disappointing result given your time frame.

>And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).
That's good to hear, as I was struggling to figure out whether there was a way to reduce the number of variables (that didn't involve changing the data structure).

>Also a relevant article "Why do interviews ask linked list questions" [1]
So it suffered the fate of all similar questions, and its continued use is due in no small part to inertia. Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
>> No. 2235 [Edit]
>>2233
Correct me if I'm wrong, but isn't it easier to do this recursively?

Post edited on 30th Mar 2021, 5:29pm
>> No. 2236 [Edit]
>>2234
That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.

>I was struggling to figure out if there were a way to reduce the number of variables
Both solutions would have same asymptotic complexity so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.

>Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases. (And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
>> No. 2237 [Edit]
>>2235
If this were lisp then yeah, maybe, since it's a one-liner, but the length of a recursive solution is basically the same (although perhaps conceptually a tad simpler). The disadvantage of the recursive version is increased space complexity, so if this were an interview they'd ask you to do the iterative solution anyway.
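A sketch of the recursive version for comparison (reusing the hypothetical Node sketch from earlier in the thread); each call consumes a stack frame, hence the space-complexity caveat:

```
def reverse_rec(head, prev=None):
    if head is None:
        return prev            # end of list: prev is the new head
    nxt = head.next
    head.next = prev           # same key relation as the iterative version
    return reverse_rec(nxt, head)
```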
>> No. 2238 [Edit]
>>2236
>That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.
Before attempting to implement the reversal algorithm, I thought I understood them well enough. Heck, I felt kind of clever doing this (https://pastebin.com/zNvWeXR2) as the first attempt at the removal method. Given that, it's just not acceptable that it required two (2!) hours.

>Both solutions would have same asymptotic complexity so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.
It was fun, too, until checking how many minutes elapsed.

>The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases.
I've read similar opinions, and I'm in no position to disagree with them considering my experiences building hobby projects and not having ever worked such a job. However, wouldn't a regular employee still need to possess the ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.

>(And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
This varies across positions, I assume.

P.S. Tangentially related, but writing good unit tests is quite the skill, and since you seem to know what you're talking about, would interview questions and problems concerning them be a good idea?
>> No. 2239 [Edit]
>>2238
>felt clever doing this [removal]
That's a clever solution. I first read/saw that variant of removal from a talk by Linus (see [1]), and even having seen that variant before it still took me a good 5 minutes to puzzle through your solution (that makes me feel dumb). If you came up with that by yourself, you're not giving yourself enough credit.

(By the way, I think in terms of clarity this is one case where explicitly writing out the type instead of using auto might have been clearer. This is probably just taste though – I personally hate auto since it makes it harder to know at a glance what type something is, and I only tend to use it for iterator-things like ".begin()", where the type is clear and the equivalent "std::vector<int>::iterator" is ugly). At the expense of using an extra variable (which is probably optimized out by the compiler anyway), if you rewrite it like


void remove(T value)
{
    node_t **cur_ref = &head;
    node_t *cur = *cur_ref;
    while (cur != nullptr && cur->value != value) {
        cur_ref = &cur->next;
        cur = *cur_ref;
    }
    if (cur) {
        node_t *next = cur->next;
        delete cur;
        *cur_ref = next;
    }
}

then I think it's a lot clearer what's going on.

[1] https://github.com/mkirchner/linked-list-good-taste

>ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.
Yes, problem modeling is probably the most important skill to have; I disagree that linked list reversal is a good exercise of those skills in a day-to-day job duties sense (unless your job is to write standard libraries for languages). They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs. Interestingly enough, companies do ask systems design questions, but they only do so for L5-L6 hires (I'm using Google's scale, where the entry level is, confusingly enough, L3, and L5 corresponds to senior software engineer with about 7 years of experience).

>This varies across positions, I assume.
Surprisingly no. Google (& maybe facebook?) has a single common interview process for all SWE roles, and people aren't allotted to a team until after they pass the hiring committee. So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews though.

>would interview questions and problems concerning them be a good idea?
If you're optimizing for what will be asked in interviews, don't bother practicing how to write unit tests. Instead, what would be better is learning how to identify edge-cases, and when you're given a problem be able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them). Even if your solution is incorrect or suboptimal, showing that you can identify these edge-cases is a strong positive signal and might be the difference between weak-hire and strong-hire.

Post edited on 31st Mar 2021, 1:19am
>> No. 2246 [Edit]
>>2239
>If you came up with that by yourself, you're not giving yourself enough credit.
Not really. Pretty sure I got the idea to do that, or something similar, from an article I read some time ago about uses for double indirection. The "clever" part, for me, is being able to remember and reliably implement what I learned. Yes, it's a low standard, and using "clever" is most assuredly the wrong choice on my part. But I'd like to think the implication is roughly the same.

>At the expense of using an extra variable (which is probably optimized out by compiler anyway), if you rewrite it like
Not only do I agree that your rewrite is more readable and obvious, I'm also inclined to think an interviewer would also prefer such a version from what you've said. However, from my perspective, explicitly specifying the type in this instance didn't really help all that much. Rather, it's the introduction of the extra variable and alteration of the others' identifiers that are more elucidating as to the intent and purpose of the code. (It just might be familiarity.)

>and the equivalent "std::vector<int>::iterator" is ugly
Don't miss those days. As an aside, the hype for C++11 was an exciting time. And now we have modules, concepts, ranges, and further improvements to constexpr--very cool. Problem is learning all of it!

>https://github.com/mkirchner/linked-list-good-taste
I feel like this is an over-complication, but it's probably helpful to others. Feels good to use Linus-approved techniques!

>They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs.
You make good points that I should've arrived at myself, and in one of my recent projects, I've become aware of how difficult it is to design good "systems." The breadth and side-effects are challenging to manage even at my small scale.

>So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews though.
Team-based interviews probably vary more in quality than a more standardized system, but on the other hand, they include people with whom you'd potentially be working.

>Instead, what would be better is learning how to identify edge-cases, and when you're given a problem be able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them).
Good advice! However, maybe I'm being rather dense: I was asking whether it would be a good idea for interviewers to ask questions related to unit testing--not whether a candidate should practice them in hopes they'd be useful.
>> No. 2247 [Edit]
>I was asking whether interviewers asking questions related to unit testing would be a good idea
Ah my bad I misinterpreted your question. Although I think my response to that version is similar; if you assess a candidate's ability to reason about both edge-cases and how systems are linked then I feel that's a "good enough" indicator of his ability to write good tests. Since so much of testing is dependent on your specific project, language, framework, and infrastructure (which will have to be learned on the job anyway) I'm not sure if there's another general way to assess this.

Also, in principle, I feel that unit tests mainly serve as a sanity check to make sure that you haven't broken anything when refactoring, and any actual "testing" would be done via full-system tests against a sandboxed instance (since unless what you're testing is the implementation of some specific algorithm, root causes of bugs generally seem to stem from the interaction between two components). Of course, in practice writing those kinds of tests tends to be a lot harder or more tedious, so mocked-out dependencies are what people usually do. Maybe using an in-memory simulation (if it's an external resource that can be simulated somewhat accurately) or recording/replaying interactions (for things like network requests) would be better if sandboxed instances aren't feasible.
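As a toy illustration of the mocked-out-dependency style in Python (the client and endpoint here are invented):

```
# Invented names throughout; just showing the mechanics of a mocked dependency.
from unittest import mock

def fetch_greeting(client):
    return client.get("/greeting").upper()

fake_client = mock.Mock()
fake_client.get.return_value = "hello"

assert fetch_greeting(fake_client) == "HELLO"
fake_client.get.assert_called_once_with("/greeting")  # interaction check
```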
>> No. 2259 [Edit]
Some neat changes in c++20:
https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.html

Coroutines might be useful for event-driven stuff (but from what I've read what the standard-library provides is very barebones so you'll need to make use of a higher-level library in practice). Not sure how I feel about modules; most of the annoyances have mainly been around the build system versus issues from leaky headers, and I'm not sure modules really fixes that.
>> No. 2280 [Edit]
>>2247
Belated response, but I really appreciate your thoughtful responses. The CI-aspect (as opposed to simple unit-testing) is something I often forget about since nothing I work on necessitates such infrastructure. In your opinion, is general experience with the aforementioned techniques something that's expected from graduates of CS (and related) degrees, or is it something that's rather learned on the job? (Ignoring the differences in infrastructure across organizations, hence "general".)
>> No. 2283 [Edit]
>>2280
>The CI-aspect (as opposed to simple unit-testing)
In my mind "continuous integration" is more about the infrastructure and is orthogonal to the issue of system/unit level tests. For instance, you could just have your CI scripts run all unit tests upon a commit. Unless your job deals with setting up such infrastructure, I don't think end-engineers ever have to explicitly think about CI itself, since it's merely an automated mechanism that runs the tests.

> something that's expected from graduates of CS
(I haven't spent enough time in the industry to say for sure so take the below viewpoint with a grain of salt)
Considering that most university graduates barely have experience writing good unit tests, I doubt that new hires are expected to be able to think about system-level tests at all. In particular, while you can assume that graduates will at least have some basic exposure to the idea of unit tests (perhaps they might have had to write some themselves for an assignment, and they'd certainly be familiar with testing algorithms for correctness given the prevalent use of autograders), system level testing is something that very few students will have needed to think about given that in university, projects are usually small and simple enough that there's no need for this. It's only when you dive into things that have to deal with networking, databases, RPCs, etc. that the limitations of unit tests begin to show and it becomes worthwhile to consider bringing up an entire sandboxed environment. (Somewhere in between that continuum of unit tests to entire sandboxed instances, there's the in-betweens of in-memory simulations, RPC replay, and perhaps even more that I'm not aware of).
>> No. 2284 [Edit]
Functional and declarative programming is a bit of a mindfuck coming from an imperative mindset. Doesn't help that keywords and concepts have differing meanings between the two paradigms.
Tangentially related, StandardML is pretty fun, and mlton's FFI to C looks nice.
>> No. 2285 [Edit]
>>2283
>I don't think end-engineers ever have to explictly think about CI itself, since it's merely an automated mechanism that runs the tests.
This assumes a definite bifurcation between dev-ops and the engineering that the former serves, right? In any case, I'd imagine end-engineers must be literate with regard to the system that orchestrates the integration and unit tests: someone must map the output given by the CI system to a resolution for the problems reported.

>It's only when you dive into things that have to deal with networking, databases, RPCs, etc. that the limitations of unit tests begin to show and it becomes worthwile to consider bringing up an entire sandboxed environment.
Would you think it worthwhile to have colleges (assuming you agree they should be the main producers of these practitioners) offer classes that simulate this situation and ones similar to it? Or perhaps such complexities don't fit into the scope of a semester-long lab.
>> No. 2286 [Edit]
>>2285
>I'd imagine end-engineers must be literate with regards to the system that orchestrates the integration
Yeah but in that case they're merely users of the system rather than the ones tasked with setting it up. I.e. they only need to know how to trigger and view the results of CI runs. Although I suppose the limitations of the CI system will inherently influence the type of tests that can be written.

>>2285
There aren't many opportunities in college/uni to work on projects involving multiple interconnected systems though. The most complex they usually get is writing your own OS kernel or toy sql server, and the limited scope of these things usually precludes the more intricate types of testing.
>> No. 2306 [Edit]
>>2286
Thanks for the elucidation, anon! One final question, if I may: What did you (or your team) choose as the capstone project for the completion of your degree? This is rather personal, so I understand if you don't want to answer.
>> No. 2307 [Edit]
>>2306
For undergraduate degrees most US universities don't require a capstone project as far as I know, so I never did one for my bachelor's. The projects I was referring to were individual class projects (e.g. for a class on DB systems you might be asked to design a toy sql server). For master's degrees things are usually split between "coursework only" masters and project-based masters. The coursework-only master's degrees are the kinds of things offered by the online MS programs (e.g. by Georgia Tech); while I'm sure these are fine content-wise, I think calling it a master's degree dilutes the value of what a graduate degree is. As eloquently explained by [1], these are more often than not used as degree mills by people switching into CS later on (especially international students who use it for H1B).

The project-based master's, where you are basically a mini-PhD student working with a research group and writing papers, is the more valuable one, and any university worth its salt will at the very least require you to file a formal master's report (whether it counts as a thesis – i.e. whether you need to formally defend it – varies by degree program); either way, the student should have gotten a taste of academia.

[1] https://blog.regehr.org/archives/953
>> No. 2308 [Edit]
>>2307
I didn't even know there was such a distinction. It seems the coursework-only variant is fulfilling a need, but its implementation as an MS is unfortunate.
>> No. 2386 [Edit]
File 163054544795.jpg - (921.64KB , 3840x1836 , 30a40cd11f147f29c5a061b815dffb83.jpg )
2386
I'm a relative beginner at programming and I want to learn python for a specific project. I picked up this book https://www.composingprograms.com/ since it goes for a sicp kind of approach, and from what little I've done, I like using scheme.

Problem is I'm starting to think python is really unsuitable for this kind of thing.
>While Python supports common mathematical operators using infix notation (like + and -), any operator can be expressed as a function with a name
Except not (at least not with built-in keywords). I can't even find an add keyword. You have to write 1 + 2 + 3 + 4 + 5; add(1,2,3,4,5) isn't an option.

I want to learn python, but I don't know how I should proceed. The official documentation felt too complicated for me to be comfortable learning with it.

edit: so like one sentence later it's explained that you can import a library to use an add function (which still only accepts two arguments). My question about the best way to learn still applies.

Post edited on 1st Sep 2021, 8:50pm
>> No. 2392 [Edit]
>>2386
Of all the reasons to learn a programming language, wanting to learn it to execute a specific project you have in mind is one of the better ones. I say continue with that online book.
>> No. 2393 [Edit]
>>2386
> I can't even find an add keyword. You have to write 1 + 2 + 3 + 4 + 5, add(1,2,3,4,5) isn't an option.
```
import operator
operator.add(2, 3)
```

But that seems like a silly reason to hate python, considering even if it wasn't you can trivially create a lambda yourself.
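And for completeness, a variadic add is a two-liner on top of that (names here are mine, and the built-in sum() already covers the common case):

```
import functools, operator

def add(*nums):
    return functools.reduce(operator.add, nums)

print(add(1, 2, 3, 4, 5))  # 15, same as sum((1, 2, 3, 4, 5))
```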

>I want to learn python, but I don't know how I should proceed
It's easy enough to pick it up by yourself by just reading simple programs, assuming you already have experience with other languages. In fact, for learning python that's probably the best way (in my personal opinion) for reasons I'll describe shortly.

>Composing Programs

That's a decent-ish book, it's used for Berkeley's freshman CS class. You might also consider doing the weekly assignments and projects (http://cs61a.org/) if it really interests you. But honestly my personal perspective is that starting off with an SICP-esque approach programming is not the best idea.

(Rant incoming)
In terms of programming, there are 3 things that you ultimately want to be good at

* Learning how to express and encode things logically: at this stage you become intimately familiar with basic constructs like conditionals, how to abstract things into procedures/functions, how to use loops, etc. For most adults this is probably easy to grasp, especially since nowadays we have increased exposure to these things due to the abundance of computers and flowcharts. But still, it is the necessary groundwork. Even this by itself is sufficient for basic "imperative" scripting/programming.

* Learning how the above high-level concepts actually map onto hardware at a low-level. At this point you should be able to reason about your code all the way down to the bare-metal. This helps break some of the "magic" of programming: how that abstract concept of "calling a function" is ultimately just represented as setting a few registers and changing the CPU's instruction pointer.


* Re-learning programming concepts and paradigms through a more formalized, mathematical lens. At this point you can go deep into learning about lambda calculus, functional programming, PL theory, etc. The goal isn't necessary for practical knowledge but to gain an appreciation for the elegance and beauty of it all.

SICP is a great book for the third – it's a wonderful example of the elegance of lambda calculus, and how from just the notion of functions you can build up arithmetic (church encoding), lists, etc. These three definitions blew me away when I first saw them:

```
(define (cons x y)
  (lambda (m) (m x y)))

(define (car z)
  (z (lambda (p q) p)))

(define (cdr z)
  (z (lambda (p q) q)))
```

and while I haven't read SICP, if it's as good as Sussman's lectures then it will do a good job of showing you how at its core lambda calculus (as embodied by scheme) is really all you need to base everything upon. Understanding how eval-apply live in symbiosis, and how you can create an interpreter for scheme inside scheme with just a few dozen lines ("scheme/lisp can be defined as the language that is fixed under eval/apply metacircularity").

But the issue is, I'm not sure this is really useful for people who are new to programming. They need to start off by building a practical foundation – learning to appreciate the tall, abstract spires should come at the end of their journey. Giving a beginner SICP so that he can learn programming feels like teaching a child about the Peano axioms so that he can learn arithmetic.

That said, everyone has their own opinions on learning programming so if you're having success with Composing Programs then you should just stick with it. But if you're looking for alternate recommendations, I've seen
https://automatetheboringstuff.com/ mentioned a few times and it seems to be closer to what I feel is a better approach.

Alternatively, just jump into your project and figure things out as you go. You'll spend a lot of your time googling for basic things, but this is the best way to learn.
>> No. 2394 [Edit]
>>2393
Also I remember somewhere that MIT themselves moved away from teaching SICP because it doesn't reflect the reality of what programming is today – gluing together several libraries to get a job done. Indeed, that's the reality of almost all software engineering jobs, and even at a professional level is what you will be doing. As you get better as a programmer, you get better at gluing things together in a way that is debuggable, scalable, flexible, and maintainable over the long run.

Python's a great language to start with because it has a good ecosystem of third party libraries, so you can do a lot by just gluing existing stuff together.
>> No. 2395 [Edit]
>>1698
> it's what you need to make iOS apps, since Objective-C is being phased out
I picked up some basic objective-c over the weekend since I wanted to make a mac app and didn't want to learn swift, and it's actually a kind of neat language. It's a very smalltalk-esque version of object oriented, a very different and more "dynamic" feeling than OO in C++.

Unfortunately it also ultimately seems harder to pick up than C++ because there's so much "magic" going on behind the scenes, documentation is very scattered and obsolete due to the Swift transition, and the fact that you're also dealing with Apple's Cocoa libraries which are also woefully underdocumented. For instance, to make an icon appear in the menubar that you can interact with you have to subclass NSView and basically re-implement the highlight behavior yourself. It's absurd that devs are willing to do all this stuff.

I like the idea of sending messages to objects with the square brackets, and there are some really neat features like key-value observation – I can see why Apple chose it to build GUI apps; a lot of ideas fit nicely. But like I mentioned, there's so much magic going on behind the scenes even for a simple app: ivars and getters/setters automatically "synthesized" out of properties, automatic reference counting, etc.

Post edited on 2nd Sep 2021, 7:03pm
>> No. 2396 [Edit]
>>2394
>Also I remember somewhere that MIT themselves moved away from teaching SICP because it doesn't reflect the reality of what programming is today – gluing together several libraries to get a job done. Indeed, that's the reality of almost all software engineering jobs, and even at a professional level is what you will be doing.
It is indeed unfortunate that even MIT fell prey to various interests and now teaches their CS students mere coding, rather than computer science. But others have ranted on the topic far more eloquently than I ever could.
>> No. 2397 [Edit]
>>2396
It's only the intro level course that's been swapped out. I don't see the issue in this, considering that the majority of people who take CS courses in college are not actually interested in "computer science" but just in learning programming. Those who are actually interested in theoretical aspects are still free to explore.
>> No. 2398 [Edit]
File 163064496968.jpg - (1.23MB , 4096x2304 , 14cab6ac7700695aba5870ab3396a01d.jpg )
2398
>>2393
>at this stage you become intimately familiar with basic constructs like conditionals
I've previously made a largish project using a game engine, so I know these basic concepts.
>The goal isn't necessary for practical knowledge but to gain an appreciation for the elegance and beauty of it all.
Honestly I'm mostly interested in programming as a means to an end. The project I have in mind is a gui program which can process excel documents to make an optimal schedule, problem is I don't know python or xml. I want to learn before diving in so the development process isn't too torturous.

Thanks for the recommendation. I don't like the idea of "giving up", but this seems more suitable for my purposes and I'm planning on reading htdp anyway.
>> No. 2399 [Edit]
Created a function template that allows one to invoke functions in a "looser" manner: https://hastebin.com/evayovoxej.dlang
This template will call a provided function with the given arguments as long as each parameter is able to be matched with an argument; order doesn't matter, and the operand function's arity must be less than or equal to the number of arguments. In other words, the template's function arguments act as a set of which the parameters must be a subset.
>> No. 2425 [Edit]
So it's been a long time since I programmed anything, and I decided to try to make a new project in C. For the first part of the project I have to parse and modify a .ppm image. I always get a Segmentation Fault error when trying to do this. I really don't understand what I'm supposed to do. I mean, the file is binary, correct? But I have to open it as a text file, and read the pixels. Then parse the whole thing and proceed to the other parts of the project. Can someone tell me how to actually read and parse a ppm file in C? I read some guides but didn't understand a thing.
>> No. 2426 [Edit]
>>2425
Look at the first section in https://inst.eecs.berkeley.edu/~cs61c/fa20/projects/proj1/
that explains how to read ppm files. Note that I'm not sure if the ppm files you are referring to are ascii encoded or binary encoded. If it's ascii encoded you can open them up as text and see the raw rgb values, but if it's binary encoded you have to read it in binary mode.

For the binary encoded ppm you'll first want to familiarize yourself with the file format layout, so read https://en.wikipedia.org/wiki/Netpbm#PPM_example then open up a hex editor and try to identify the pieces of the image. Then you can read the binary data into a struct, and tada you've parsed it. Do same in reverse to write out the ppm file.
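If it helps to see the layout in code, here's a rough Python sketch of parsing a binary (P6) PPM; it assumes a simple header (no comment lines, dimensions on one line), which real files aren't guaranteed to follow:

```
# Rough sketch only: assumes "P6\n<width> <height>\n<maxval>\n" then raw bytes.
def read_ppm(path):
    with open(path, "rb") as f:
        magic = f.readline().strip()            # b"P6" marks binary RGB
        assert magic == b"P6", "not a binary PPM"
        width, height = map(int, f.readline().split())
        maxval = int(f.readline())
        pixels = f.read(width * height * 3)     # raw RGB triples follow
    return width, height, maxval, pixels
```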

Post edited on 21st Sep 2021, 1:23pm
>> No. 2427 [Edit]
>>2426
>Then you can read the binary data into a struct
Beware of field padding and compiler antics - assert() and offsetof() are your friends
>> No. 2428 [Edit]
>>2427
Good point, there are probably compiler flags to avoid the padding but probably better and safer to avoid reading into the struct directly and instead read the data to a temp buffer, verify that it's well-formed, and then populate the struct.
>> No. 2432 [Edit]
File programa.txt - (747B )

2432
>>2426
>>2427
>Beware of field padding and compiler antics
Are you talking about leading zeros in the hexadecimal values or something?
I have managed to make the program show me all the hexadecimal values of the image by printing it, and it matches the hexadecimal values I see when I open the image with hex editor. But when I try to use fread to store them in a buffer, things go awry. I have used fseek and ftell to get the number of bytes in the image, and the answer matches what I see in the hex editor as well, 155. I printed the value returned by fread to see if it was reading it correctly, and the answer was correct, 155. But when I try printing the value from my buffer it says "ffffcb10". If I use %s instead of %x it returns the text header in ASCII, and if I use %d instead of %x it returns "ffffc940". I tried another way of printing the hexadecimal values using fgetc on the file, and it returns it just like on the hex editor. The program is attached in a txt file.
Alternatively, here's the main part of the program. You will have to change the : at the 3rd and 4th lines for a ; due to posting issues.

"FILE* file_of_image;
long int size_of_image;
file_of_image = fopen("D:\\Dados\\Downloads\\testeppm.ppm","rb"):
fseek(file_of_image,0,SEEK_END):
size_of_image=ftell(file_of_image);
fseek(file_of_image,0,SEEK_SET);
char number_of_bytes[size_of_image];
printf("%d\n",fread(number_of_bytes,1,size_of_image,file_of_image));
printf("%x\n",number_of_bytes);
while(!feof(file_of_image)){
ch =fgetc(file_of_image);
printf("%02x ",ch);"

>> No. 2434 [Edit]
>>2432
>Are you talking about leading zeros in the hexadecimal values or something?
No, he means that when you declare structs in C/C++ you can't always assume the layout in memory will match what you've written since compilers may add padding between elements or to the end. You don't need to worry about this since you're reading into an array, not a struct.

I think the issue is with the line
>printf("%x\n",number_of_bytes);
%x expects an (unsigned) int, but you're handing it the whole buffer, which decays to a pointer--so what you're seeing is (part of) the pointer's value, not your data. If you want to print the buffer to stdout with each byte in hex, loop through the buffer and print one byte at a time like you did below that (as far as I know there's no printf format specifier that does this in one go).
>> No. 2440 [Edit]
>>/fb/6814
>>/ot/38799
Here's some Python 3 code I wrote to correct the recent mojibake (garbled text) on this site:
from codecs import register_error

register_error('passthrough_decode', lambda x: (x.object[x.start:x.end].decode('latin-1'), x.end))
register_error('passthrough_encode', lambda x: (x.object[x.start:x.end].encode('latin-1'), x.end))

def garble(s):
    return s.encode('utf-8').decode('windows-1252', 'passthrough_decode')

def repair(s):
    return s.encode('windows-1252', 'passthrough_encode').decode('utf-8')


Just pass the mojibake as a str (not a bytes) to the repair function. You can test it out by garbling text first with the garble function:
>>> repair(garble('お米券…進呈'))
'お米券…進呈'

>> No. 2441 [Edit]
File 163465927332.png - (1.10MB , 1280x720 , mao7.png )
2441
>>2440
I appreciate this very much.
>> No. 2443 [Edit]
>>2440
There's a python module to automagically do this
https://github.com/rspeer/python-ftfy
>> No. 2444 [Edit]
>>2443
Neat. It seems ftfy calls this character encoding 'sloppy-windows-1252', and it should work even if the input to the function contains characters that cannot be encoded to sloppy-windows-1252.
>> No. 2448 [Edit]
Sum types have changed my life--even without syntactic sugar.
>> No. 2459 [Edit]
>>2448
Could you elaborate? I've never seen what's so amazing about them as a concept, since they're basically equivalent to unions with a tag. What's neat about the functional languages is that they give you nice syntax to make them a first-class part of the language, and it pairs well with pattern matching. But as a concept by itself it doesn't strike me as too novel. And if you sort of squint, the pattern of using optional fields of structs or null to mark absence is kind of like a sum type without syntactic sugar anyway.
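For reference, here's roughly what that tagged-union-plus-pattern-matching combo looks like in Python 3.10+ (the shape types are invented for the example):

```
from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

Shape = Circle | Rect  # the "sum" of the two cases

def area(s: Shape) -> float:
    match s:                      # structural pattern matching over the cases
        case Circle(radius=r):
            return 3.14159 * r * r
        case Rect(w=w, h=h):
            return w * h

print(area(Circle(1.0)), area(Rect(2.0, 3.0)))
```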
>> No. 2502 [Edit]
>>2459
Because they're a nice alternative to runtime polymorphism via objects when you map over all possible types. The impact of a concept isn't necessarily proportional to its complexity.
>> No. 2524 [Edit]
File 16393349255.jpg - (253.06KB , 1544x2048 , 5282c40d4b99bcb8411024c4c8614b1b.jpg )
2524
Is Smalltalk, or some modern variant of it(Dart?), worth getting into? People gush about it a lot, especially its "environment". Stuff like Dolphin Smalltalk kind of confuses me.

Is smalltalk good for writing cross-platform, standalone executables? Would learning it be especially educational?

I'm learning Java for a class, and while I can see some benefits to the object-oriented way of doing things, I find Java to be exceptionally ugly and verbose. My impression is that Smalltalk is like a "better" Java.

Post edited on 12th Dec 2021, 10:59am
>> No. 2525 [Edit]
Two programs that return prime numbers given a maximum number. One in Racket, the other in GO. Racket has more complicated logic and more characters (excluding comments).

GO prompts for an input faster than Racket, both when they're interpreted and compiled.

Racket:
https://files.catbox.moe/wf0h2w.txt
GO:
https://files.catbox.moe/ddq1ak.txt

Post edited on 12th Dec 2021, 3:50pm
>> No. 2526 [Edit]
File TC.pdf - (64.39KB )

2526
>>2524
TC WAF blocked my response again, so I've attached it instead.
>> No. 2527 [Edit]
>>2525
I think you only need to check up to sqrt(i) in the second loop, which allows you to terminate early compared to checking all the primes found so far. Also, if you're willing to trade off a bit of space, doing this the other way around--where you create a list from 1...n and then cross off all the numbers that are non-prime (i.e. the sieve of Eratosthenes)--might be a bit faster.
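A quick sketch of the sieve in Python (rather than Racket or Go), just to show the shape; note the outer loop only needs to run up to sqrt(n):

```
def primes_up_to(n):
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]        # 0 and 1 aren't prime
    i = 2
    while i * i <= n:                     # crossing off past sqrt(n) is redundant
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False       # every multiple of i is composite
        i += 1
    return [k for k, p in enumerate(is_prime) if p]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```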
>> No. 2529 [Edit]
>>2526
Thanks for the detailed reply.
>kotlin or scala
I've heard good things about kotlin. Tried using scala, but version 3 doesn't work on windows properly. It's a mess.
>> No. 2531 [Edit]
>>2529
By the way, if you haven't looked at modern java features then that would be a good first step. The stuff you're taught in university is pretty out of date, and modern java has support for quite a bit of new features. The most notable in my opinion are pattern matching, record classes (kind of similar to AutoValue library if you've used that), multiline strings, streams (java 8 so not exactly new, but perhaps not taught in uni?), and type inference for declaration.

In terms of massive new JVM improvements coming up, keep an eye out for project loom (goroutine style m:n threading), and project valhalla (value types for memory optimization). Great new GC improvements as well.

https://advancedweb.hu/new-language-features-since-java-8-to-17/
>> No. 2532 [Edit]
File 16394017345.jpg - (40.64KB , 582x503 , 1637273188345.jpg )
2532
>>2525
Here is the same thing in Red, a successor to Rebol.
https://files.catbox.moe/7yigka.txt

This was by far the hardest to implement and I sacrificed an entire night of sleep on it. Some languages apparently don't have absolute positions in their arrays. A variable references just one element in the array, instead of the whole thing.

So you literally can't just always find the 5th element in an array regardless of current position. You've got to move the reference back and forth. It's nightmarish. This language can do some cool things easily, but just because of this, I can't really recommend it.

>>2527
I took your suggestion and used the sieve of Eratosthenes this time.

Post edited on 13th Dec 2021, 5:24am
>> No. 2533 [Edit]
>>2526
>Unlike both where you have the same general pattern of invoking methods on objects, to my understanding smalltalk (and thus obj-c) has the notion of ”sending messages” to objects.
From my understanding, they're both "message-passing". Both have the concept of dispatching by acting upon a property of the object (calling a method, or sending a message), but Smalltalk and Objective-C are much more explicit about the reflective, dynamic aspects of it (which Java, from what I know, hides behind an obtuse reflection layer, and C++ doesn't have at all). CLOS can be taken as an example of an OO system that doesn't do message passing; dispatch is done by calling generic functions, which contain the machinery to select the appropriate method, can do multiple dispatch, and are largely independent of the classes of the objects they are called upon.

>FScript
There seems to be 2 different languages named FScript, I assume you mean https://en.wikipedia.org/wiki/F-Script_(programming_language)?

>But going back to smalltalk, I know a lot of people talk about the power of its repl (and how ”modern” repls aren’t anywhere close to what smalltalk had), etc.
>>2524
>Is Smalltalk, or some modern variant of it(Dart?), worth getting into? People gush about it a lot, especially its "environment". Stuff like Dolphin Smalltalk kind of confuses me.
Their main appeal is that your development environment (editor, debugger, etc) is entirely inside the Smalltalk image itself, and is composed entirely of Smalltalk objects; from the tools you use, to screen widgets, to classes, and beyond, they're all "objects all the way down", at arms reach, and the environment provides you with rich ways to interact and modify all those objects. When developing a program inside it, you often mold and sculpt your surroundings by interacting with the objects themselves in order to model your program's behavior. It is a lot of fun and definitely worth getting into, to experience what a richly dynamic development environment can be, even if you never get to use it very much.
As an addendum, Common Lisp also offers much of the introspective potential that Smalltalk-the-language has, but we don't really have many development environments that could get even close to what Smalltalk people regularly enjoy (unless you happen to have a Lisp Machine). I remember a neat quote about this, but can't seem to find it now.
>> No. 2534 [Edit]
>>2533
Great info, thank you for sharing. And yeah, I meant the f-script that you linked. I found a good video [1] that gives an overview; f-script itself wasn't just a language but also had an "object browser" (see 30 min in) that reminds me of the DOM tree visualizer found in browsers, except it displays object trees (which in the case of GUI applications are usually closely related).

[1] https://www.youtube.com/watch?v=VDNoJc2t2qk
>> No. 2535 [Edit]
File 163946378043.png - (334.18KB , 760x650 , 26b65989bf7a6897db88643e1494829a.png )
2535
>>2532
Update. I'm still interested in the language. The more I learn about it, the more intrigued I am with its strict "code as data" paradigm.

Lisp has that to an extent, but Red is completely invested, so I'd recommend it if that sounds interesting to you. This is the best guide I've found for learning it.
https://github.com/red/red/wiki/A-short-introduction-to-Red-for-Python-programmers
>> No. 2555 [Edit]
File 163979651623.jpg - (443.01KB , 850x1222 , sizee.jpg )
2555
Here's a Red program that computes every permutation of a given series recursively. It's modeled after

https://www.baeldung.com/wp-content/ql-cache/quicklatex.com-33b4a8152b7c47614d52fa5008eca4b7_l3.svg

https://lainsafe.delegao.moe/files/WUCOkzuA/lettcomb.txt

I retract my prior statement and would now recommend Red to anyone that wants to expand the way they think about programming. I'll probably put together a collection of examples which show off the unique properties of Red.
>> No. 2558 [Edit]
File 164029040255.jpg - (960.88KB , 1447x2039 , 5455143b99c57b0b6acc5213a3a25fe9.jpg )
2558
I solved this problem in Red. I've never solved it before, so it took a while.
https://rosettacode.org/wiki/9_billion_names_of_God_the_integer

https://files.catbox.moe/3u0iap.txt
https://files.catbox.moe/925vn0.png

Post edited on 23rd Dec 2021, 12:26pm
>> No. 2559 [Edit]
>>2558
As a side note, while mine works, it's really long and slow. I don't understand the other implementations on the page, the general idea of how people tend to solve it.
(cryptic names and no comments don't help)

Edit: after looking at the problem harder, I think I see a faster implementation using a "cache". Will post soon.
Edit2: on third thought, I might have been mistaken
Edit3: after messing around with the code, it started to work

Post edited on 23rd Dec 2021, 9:46pm
>> No. 2560 [Edit]
File 164032827170.jpg - (425.14KB , 1033x1447 , 2144c04ccaa23f42f4442e0d7afea40a.jpg )
2560
>>2559
Here's the shorter and (slightly) faster implementation.
https://files.catbox.moe/hp2st2.txt
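For anyone following along, the standard memoized recurrence (shown in Python here, not Red) is: p(n, k), the number of partitions of n into parts no larger than k, satisfies p(n, k) = p(n - k, k) + p(n, k - 1).

```
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    if n == 0:
        return 1              # exactly one partition of 0: the empty one
    if k == 0:
        return 0              # positive n with no allowed parts
    if k > n:
        k = n                 # parts larger than n are unusable
    # Either use at least one part of size k, or use only parts smaller than k.
    return p(n - k, k) + p(n, k - 1)

print([p(n, n) for n in range(1, 11)])  # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```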
>> No. 2561 [Edit]
File 164038863549.png - (46.66KB , 1342x1412 , observation.png )
2561
>>2560
visual explanation
>> No. 2593 [Edit]
There's better discussion here than in places dedicated to programming.
God bless, TC.
>> No. 2594 [Edit]
>>2593
I'm always surprised by the contrast between how small this imageboard feels and the sheer variety of well-written discussion here on almost every otaku related topic. Either there are more people than you'd expect at first glance, or it's basically just a few dozen people who've spent their reclusive life becoming _very_ well-read (I've found discussion on topics you'd usually only find in graduate courses).
>> No. 2595 [Edit]
File 164401332238.jpg - (132.17KB , 509x458 , 937df715f32340be957d7682610faefb.jpg )
2595
Yeah, I think I've changed my mind about Red. Constantly conflating data and code is just not that practical if you want to, you know, write programs that do things. It's "not for everyone", as they say themselves, which means it's not for most people, though that's harder to admit.
https://files.catbox.moe/9i6ag4.png

Maybe I'll give Haskell a whirl.

Post edited on 4th Feb 2022, 2:22pm
>> No. 2596 [Edit]
File 164401385644.png - (14.74KB , 970x454 , not for everyone2.png )
2596
>>2595
I mean, really. This isn't cumbersome because people are "too used to other languages". It's inherently less intuitive and less self-explanatory.
>> No. 2598 [Edit]
File 164451282316.png - (570.72KB , 1469x1958 , 04528d6f2234142870b047c883b95a9f.png )
2598
Wrote this. I can't help myself.
https://rosettacode.org/wiki/One-dimensional_cellular_automata#Red

edit: and this too
https://rosettacode.org/wiki/Conway%27s_Game_of_Life#Red

Post edited on 10th Feb 2022, 1:45pm
>> No. 2607 [Edit]
File 164583437940.png - (506.62KB , 1263x1138 , Screenshot 2022-02-25 at 19-10-09 Example.png )
2607
A very rudimentary html generator written in Red.
https://files.catbox.moe/nd1m2x.txt

produces this
https://files.catbox.moe/7f9s73.html
>> No. 2612 [Edit]
File 164626768773.png - (54.50KB , 1101x454 , to-hex.png )
2612
>>2596
How does it look with proper syntax highlighting, though?
Lisp can look pretty confusing too, but it's fine when you have color-coded parens
>> No. 2613 [Edit]
>>2612
Red has extremely limited code editor support, and doesn't have any keywords. There's default "words", but literally any of them can be redefined.

Red also doesn't use S-expressions, and "blocks" aren't evaluated by default; you need to explicitly evaluate them.

I've gotten used to it, but sparse documentation and library deficiency are major shortcomings in any case, so I've mostly moved on to GO.
>> No. 2614 [Edit]
File 164627271483.png - (13.44KB , 741x544 , mock.png )
2614
>>2613
It might look something like this though. Nothing about this implies the behavior of functions. You'd look at it, and assume it works like any Algol-family language would.

Post edited on 2nd Mar 2022, 6:00pm
>> No. 2615 [Edit]
>>2614
That repeat syntax and bracket structure reminds me very much of Logo. It's not exactly s-exp, but it's very much inspired by it.
>> No. 2616 [Edit]
>>2615
Red was influenced by Scheme, Forth, Logo and Self. Repeat isn't the alien part. Here's what happens when that function is executed after being assigned to a word, f.

A word r is set to a block within f, and the contents of r are output to the console. Five times, a number i that starts at 1 and increments, multiplied by -1, is added to r.

Within f, r is then set to another block, and the contents of that are shown. Five times, a number i that starts at 1 and increments is added to r.

After this, f's value, a function, has been edited. On subsequent calls, r is first set to [-1 -2 -3 -4 -5] and then [1 2 3 4 5]. You wrote empty blocks when you made the function, but calling it actually changed your code. The next call it's [-1 -2 -3 -4 -5 -1 -2 -3 -4 -5] and [1 2 3 4 5 1 2 3 4 5].

I've heard other languages can have similar homoiconic behavior, but it's more deliberate on the part of the writer. In Red, you need to actively avoid your code changing.

Post edited on 3rd Mar 2022, 7:44am
>> No. 2617 [Edit]
>>2616
That doesn't seem like homoiconic to me (which I understand relates to how the syntax of a language maps onto the fundamental primitives that a language can operate on: "code is data" and all that).

What you describe seems more like weird scoping rules where basically every function maintains its state across invocations.
>> No. 2618 [Edit]
File 164633581011.png - (22.90KB , 494x848 , blocks.png )
2618
>>2617
Functions consist of two blocks, a "spec block" and a "body block". Those blocks are exactly the same as any other block, like one you'd use to make a collection of numbers, or how Red represents images.

"append" in that code example literally changes the function's "body block". It's possibly the most intense form of homoiconicity in any language.

pic is what the repl gives. >> is input, == is output. ?? shows the exact contents of a word's value.

If you're having a hard time wrapping your head around this, that proves my point about it being weird and unintuitive. To avoid this behavior, you need to write r: copy [], which makes a new empty block every time the function is called, instead of assigning r to THAT SAME block within f. btw, Red has no scope. There is no "local scope". Instead, words have contexts, which behave differently in a few, but very important ways.

Post edited on 3rd Mar 2022, 11:34am
>> No. 2619 [Edit]
>>2618
What if you don't view it as "changing the body", and just view it as mutating a global (well, scoped per function definition) binding? Under this view it's much easier to reason about the side-effects, and things like when to make copies. And the fact that you rebind "r" the second time doesn't really change anything since there are still two separate underlying structures (the first and second set).

Also I don't doubt that Red is homoiconic, but I don't think it's for the reason you mentioned. Rather it's homoiconic because it basically has the equivalent of quoted s-exp, where you can introspect/modify a code block in its unevaluated form.
>> No. 2620 [Edit]
>>2619
>What if you don't view it as "changing the body", and just view it as mutating a global
The function is in the "global context", or "top-level context". There's no difference between these two descriptions.

>it's homoiconic because it basically has the equivalent of quoted s-exp
The big difference is that that's the default. No blocks are evaluated until you tell them to be. The behavior I described is possible because the language is homoiconic. Try doing the same thing in C or Python.
>> No. 2621 [Edit]
File 164634030562.png - (13.95KB , 380x700 , print.png )
2621
>>2620
By the way, there are no keywords. Some words have default values, such as "print", but they can be redefined in any context, including the global one.

Here, f is assigned a function which calls print, then redefines print as an integer. The second time f's function is evaluated, print doesn't do anything because within f, print has been redefined to 56.

Within the global context though, print still has the same value.

edit: actually, I'm wrong. If I had written function instead of func, print would have stayed the same in the global context, but here it's actually redefined to 56 in the global context too. No error is output though, because if you wrote "56 6" in the repl, 6 would be returned without any complaint.

Post edited on 3rd Mar 2022, 12:52pm
>> No. 2622 [Edit]
>>2621
>The function is in the "global context", or "top-level context". There's no difference between these two descriptions.
I don't mean mutating the function, I mean mutating a global that is bound to the name "r" within f

>The behavior I described, is possible because the language is homoiconic. Try doing the same thing in c or python.
Again, the way I see it the function body is not being changed in any way; it's just that Red does not use any form of lexical scoping, so in your example it effectively becomes "definitionally scoped." You could have the exact same effect in e.g. Python by just using a global.
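Here's the kind of thing I mean, as a Go sketch I just made up (Go is lexically scoped, so it's only an analogy, but the "binding that outlives the call" part is the same):

package main

import "fmt"

// makeF returns a function value; r is bound once, when the value is
// created, and that same binding is mutated by every later call.
func makeF() func() {
    r := []int{}
    return func() {
        r = append(r, -(len(r) + 1)) // mutate the captured binding
        fmt.Println(r)
    }
}

func main() {
    f := makeF()
    f() // [-1]
    f() // [-1 -2]  the "local" survived between calls
}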

>If I had written function instead of func, print would have stayed the same in the global context,
Yup because if you use function then I recall it basically "auto-rebinds" everything declared in the function, so in effect you get a local scope. Again, everything you've described seems to be an artifact of the scoping/binding system, and not homoiconicity.
>> No. 2623 [Edit]
File 164634233095.png - (16.26KB , 469x408 , comp.png )
2623
>>2622
>I mean mutating a global that is bound to the name "r" within f
Those blocks are NOT global. There's no way to access them within the global context, only through f.

Blocks are data, and data is code. Here is a third example which might show you how Red is homoiconic.

Two blocks are made and assigned to s and b within the global context. f is assigned a function with its "spec block" equal to the value of s, and its "body block" equal to b. f is passed a string "test", which is assigned to str within f (not within s). "test" is then printed.

s and b have nothing to do with each other, they know nothing of each other, but put them together like that, and it works. If you tried evaluating b like "do b", you'd get an error because str has not been assigned anything within b's context.

Post edited on 3rd Mar 2022, 1:25pm
>> No. 2624 [Edit]
File 164634247565.png - (8.95KB , 630x263 , ff2.png )
2624
>>2623
This shows how str within s could be changed. It wouldn't be able to be used as f's spec anymore.

While anything in Red could be described by the phantom of a scope system, that's NOT how it's actually implemented. All of the implementation revolves around homoiconicity.

Post edited on 3rd Mar 2022, 1:27pm
>> No. 2625 [Edit]
File 164634365098.png - (10.08KB , 694x381 , ggg.png )
2625
>>2624
Actually, you'd need to do this. a/1 is a "set-word!" value.

Post edited on 3rd Mar 2022, 1:50pm
>> No. 2641 [Edit]
Neat:
https://docs.oracle.com/en/java/javase/15/language/sealed-classes-and-interfaces.html
https://openjdk.java.net/projects/amber/design-notes/patterns/pattern-matching-for-java
>> No. 2642 [Edit]
Generics finally land in a major Go release, and the salt is hilarious.
>> No. 2643 [Edit]
File 164767575366.png - (552.49KB , 1920x980 , works.png )
2643
I made a working bbs in golang, which is a wonderful wonderful language. Going into it, I knew pretty much nothing about http requests.

source: https://files.catbox.moe/zvoq0w.go

Post edited on 19th Mar 2022, 12:44am
>> No. 2679 [Edit]
File 16509477148.png - (30.84KB , 256x256 , mascot.png )
2679
While publicly accessible for a while, Hare has been officially announced: https://harelang.org
I have no interest in this, but others might like it. Also the mascot is cute.
>> No. 2682 [Edit]
File 165134913432.png - (24.83KB , 512x928 , nojstogle.png )
2682
Toggle large and small images with css only. While it wasn't hard to do, I get a kick out of it since most implementations use js.
>> No. 2683 [Edit]
File 165135085794.png - (26.42KB , 765x773 , nojstoggle2.png )
2683
>>2682
better version
>> No. 2688 [Edit]
File 165147875781.webm - (1.04MB , css2.webm )
2688
>>2683
Even got something like post previews working.
>> No. 2689 [Edit]
>>2688
That is awesome. Are the previews baked into the html server-side and then done via :hover?

Also for the image expansion, I'm curious: does the checkbox approach correct for the scroll position after collapsing the image? What I mean by that is if you expand an image, scroll down to view the image, and then collapse it again, if you don't correct for your scroll position then you wouldn't end up back where you started. I ran into this when implementing an image collapse/expand script and once I noticed it, it was annoying until I fixed it.
>> No. 2690 [Edit]
File toggle.zip - (140.57KB )

2690
>>2689
>Are the previews baked into the html server-side and then done via :hover?
Pretty much. CSS variables, the + selector, the :hover pseudo-class, and the ::before and ::after pseudo-elements are used. Included file is a copy. I wrote all of this by hand, but a reverse proxied server could write to the correct files automatically.

>does the checkbox approach correct for the scroll position after collapsing the image?
No. I didn't notice that. It doesn't bother me much.

edit: tc doesn't have this feature by the way. So using that as a benchmark, it can be lived without.

Post edited on 2nd May 2022, 5:06pm
>> No. 2726 [Edit]
File 165317361291.png - (234.69KB , 1263x794 , prevv.png )
2726
>>2688
I've finished my styling. Pretty happy with this. I'm a little worried the jp is too small, but I don't want the english text to be overly large either.

Post edited on 21st May 2022, 3:54pm
>> No. 2727 [Edit]
>>2726
Looks nice!
>> No. 2730 [Edit]
File 165350858932.png - (211.28KB , 620x292 , thumbnail compare.png )
2730
Comparison between jpg(left) and webp(right) thumbnails. The webp one has less noise and a slightly smaller file size. During the downscaling, I didn't set the lossless flag to true. If I had, the file size would be larger than the jpg thumbnail(made by tc).

Post edited on 25th May 2022, 1:03pm
>> No. 2731 [Edit]
File 165353790795.webm - (3.20MB , output.webm )
2731
For architectural reasons, I decided it makes much, much more sense to use AJAX for post previews. I'm using htmx to accomplish that though, which while relying on js, is stylistically pleasing.

htmx has a feature called hx-trigger, so when you click on an element or move your mouse over it, it does something(like replacing the content of an element with a given id with an http response). hx-trigger has an option to only perform its action once per page load.

On tc, every single time a user hovers over a quote, a request is made. While that's more "dynamic" in the case of a post being edited, it's pretty inefficient and not worth it in my opinion. lainchan doesn't do this, and because of that htmx feature, I can avoid it too.
>> No. 2732 [Edit]
>>2731
>it's pretty inefficient and not worth it in my opinion
It's cached though, isn't it? Response header has max-age=600, so despite technically being a separate xhr request it's basically 0 latency.

Also, I'm curious what language are you doing the server-side HTML generation in?

Post edited on 25th May 2022, 9:26pm
>> No. 2733 [Edit]
>>2732
Hmm yeah. I'm learning this stuff on the fly. I didn't know what xhr was before this.

>what language are you doing the server-side HTML generation in
The html is all hand-written so far. I'm planning on using golang though. I previously wrote this in golang >>2643
>> No. 2734 [Edit]
>>2733
I see. Also have you thought about persistence layer yet (I'd recommend sqlite if you haven't already).

Reason I ask and am curious, is because from my cursory look at these things when I wanted to deploy a local imageboard for clipbook purposes, all I found were things that were either _too_ minimalistic (like scheme bbs) or bloated with too many dependencies (like lynxchan, there's absolutely no reason why I should need nodejs and a dozen dependencies for an imageboard). Ideally I'd like something exactly like tohnochan (which I understand is basically a modified vichan with preview and edit support), but without the php or external db dependency (I don't have anything against php itself, it's just that it ends up being harder to maintain and make changes to). It feels like an imageboard should fit in under 1k lines of code.
>> No. 2735 [Edit]
File 165354255565.jpg - (398.44KB , 1200x784 , ce7ed6e8e0e83fb4256a37d66d9ccff6.jpg )
2735
>>2734
Yes, I've already decided on sqlite because it seems cleaner than having another server. Right now though, I'm getting the data from plain text files.

I'm going to design things with nginx reverse proxies in mind because that's what I feel comfortable using and I like how it works. I don't know how much that will limit portability. My understanding is that PHP has an advantage when it comes to that because its applications can be used on any server with PHP support. I don't know if that's true though.

My bbs code is 164 lines long. Between posting and deleting and editing and making new threads and pages and restrictions on uploaded file types and size and moderation, along with whatever other features I decide to add, I can't say how long this will end up. In any case, my project is still in its infancy.
>> No. 2736 [Edit]
>>2735
>I'm going to design things with nginx reverse proxies in mind because that's what I feel comfortable using and I like how it works.
I've never done much web development, what's the benefit to using a reverse proxy here? I thought that's mainly if you want to do caching or load balancing?

> PHP has an advantage when it comes to that because its applications can be used on any server with PHP support.
I think maybe some shared hosting platforms will only allow PHP (as opposed to allowing arbitrary binaries?). Maybe that's not the case though since I'd imagine PHP could need to call into external native libraries (e.g. imagemagick) so there's really no way to really enforce that.
>> No. 2737 [Edit]
>>2736
>what's the benefit to using a reverse proxy here
The html pages served by nginx need to send their http requests to something. A golang program listens to a port, and nginx sends those http requests to the port it's listening to, via a reverse proxy. It's very easy. You just set any location as a proxy to the right port. Like this:

server {
    listen 80;
    root /path;

    location /service/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8090;
    }
}

The only other way of using golang to handle http requests that I know of is using cgi scripts. I don't know how to use those, and they're old, and they freak me out.

Maybe I could do away with nginx entirely and just use golang somehow, but that would be a massive pain considering how many nginx features I use and that I already know something about configuring it.

Post edited on 25th May 2022, 11:10pm
>> No. 2738 [Edit]
>>2737
Hm, since your golang program is listening on port 8090 in your example, can't you just connect to 127.0.0.1:8090 directly? I.e. it seems that the use of nginx to forward requests here is unnecessary, at least in your particular simplified example.

That said, I think I understand the benefits of using nginx here: you can let nginx deal with things like TLS, htaccess, rate limiting, etc. and just let the Go program deal with the core logic. While directly connecting to the go binary would work for toy examples, when actually deploying you'd want the reverse proxy to provide productionized robustness.
>> No. 2739 [Edit]
>>2738
>you can let nginx deal with things like TLS, htaccess, rate limiting, etc
Yes, that's what I meant by features. I've been adding stuff like TLS, CSP and brotli encoding. I also started using nginx before go, so continuing to use it makes more sense than switching to something worse for the job.

Post edited on 26th May 2022, 12:47am
>> No. 2740 [Edit]
File 165362025536.png - (348.07KB , 1919x935 , progress1.png )
2740
I've written the go code that responds to requests for post previews.
https://files.catbox.moe/by1mhn.go
>> No. 2741 [Edit]
>>2740
Never used go before, but it seems like this opens the DB once for each incoming request. Wouldn't it be better to open it once at server-start (e.g. keep it in a global)?
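i.e. a rough sketch (made-up names, assuming the mattn/go-sqlite3 driver or any other database/sql sqlite driver):

package main

import (
    "database/sql"
    "log"
    "net/http"

    _ "github.com/mattn/go-sqlite3" // assumption: any database/sql sqlite driver works
)

var db *sql.DB // opened once at startup, shared by every handler

func previewHandler(w http.ResponseWriter, r *http.Request) {
    var body string
    // hypothetical schema: posts(id, body)
    err := db.QueryRow("SELECT body FROM posts WHERE id = ?",
        r.URL.Query().Get("id")).Scan(&body)
    if err != nil {
        http.Error(w, "not found", http.StatusNotFound)
        return
    }
    w.Write([]byte(body))
}

func main() {
    var err error
    db, err = sql.Open("sqlite3", "posts.db")
    if err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/preview", previewHandler)
    log.Fatal(http.ListenAndServe("127.0.0.1:8090", nil))
}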
>> No. 2742 [Edit]
>>2741
>Wouldn't it be better to open it once at server-start
Hmm, probably. I didn't think of that.
>> No. 2743 [Edit]
>>2742
Revised version.
https://files.catbox.moe/phaq2m.go
>> No. 2744 [Edit]
File 165381224220.jpg - (110.69KB , 600x800 , 9f1fd763e7b728eb2e432fe93a13626c.jpg )
2744
>>2743
Concurrency added
https://files.catbox.moe/fmp7hg.go

This makes the requests/second go from about 750 to 1250 (measured using hey)
https://github.com/rakyll/hey
>> No. 2745 [Edit]
>>2744
So we have the equivalent of a threadpool (or in Go's case I guess the green thread/goroutine equivalent) to read from sqlite, giving us multiple readers (as opposed to the previous scenario where all incoming requests were constrained to 1 reader)?

It's probably just me because I've never used go before, but I found it a bit hard to follow what exactly is going on. Basically we implement the goroutine equivalent of the threadpool by first creating a channel populated with 5 DB handles, and then each incoming request (which I believe go spawns as its own goroutine) will first "claim" a db handle, do whatever it needs, then release the handle back (by writing it back to the channel). Did I get that right? It's very clever, is this an idiomatic Go pattern?
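In code, I picture something like this (my own toy sketch to check my understanding, with a dummy type standing in for the real driver handle):

package main

import (
    "fmt"
    "sync"
)

// conn stands in for a non-thread-safe sqlite handle.
type conn struct{ id int }

func (c *conn) query(q string) string { return fmt.Sprintf("conn %d ran %q", c.id, q) }

func main() {
    // The pool is just a buffered channel pre-filled with handles.
    pool := make(chan *conn, 5)
    for i := 0; i < 5; i++ {
        pool <- &conn{id: i}
    }

    var wg sync.WaitGroup
    for req := 0; req < 20; req++ {
        wg.Add(1)
        go func(req int) {
            defer wg.Done()
            c := <-pool                  // claim: blocks while all 5 are busy
            defer func() { pool <- c }() // release back into the pool
            fmt.Println(c.query(fmt.Sprintf("request %d", req)))
        }(req)
    }
    wg.Wait()
}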

But I'm still a bit confused as to why this is necessary. From the Go docs on Open:

>Open opens a database specified by its database driver name and a driver-specific data source name, usually consisting of at least a database name and connection information.

>The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.

So it seems like you really shouldn't need to worry about manually creating a threadpool (and SQLite itself handles multiple readers). Maybe the speedup you saw is from creating the prepared statement ahead of time?

Post edited on 29th May 2022, 2:10am
>> No. 2746 [Edit]
>>2745
Ah I see I was looking at the docs for the go mysql package, not the sqlite one: https://pkg.go.dev/github.com/mxk/go-sqlite/sqlite3?utm_source=godoc#hdr-Concurrency

>A single connection instance and all of its derived objects (prepared statements, backup operations, etc.) may NOT be used concurrently from multiple goroutines without external synchronization.

So you're right, I think manually creating the threadpool is the best way to do it
>> No. 2747 [Edit]
>>2745
>Did I get that right? It's very clever, is this an idiomatic Go pattern?
Yep. I got the idea and how to do it from this article.
https://turriate.com/articles/making-sqlite-faster-in-go

I looked at the mentioned other drivers, but apart from being much less popular, the more recently updated one(two years ago) had cryptic documentation and alien syntax(to me at least). So I figured it wasn't worth whatever performance gain it might bring.
>> No. 2748 [Edit]
File 165384761178.png - (138.88KB , 1042x555 , highlight.png )
2748
Small but nice update. Linked-to posts are now highlighted using CSS's :target selector. Both tc and lainchan rely on js for this.

Also, I used hey to test tc's speed with this link http://tohno-chan.com/navi/res/1547.html#2747
I only got 32 requests/second. It's not directly comparable considering the difference in database size(and I'm not sure what link is used to retrieve a post preview), but it looks really bad. Not sure how much of it is tc's implementation, and how much it is from their hosting.

On lainchan, with this link https://lainchan.org/%CE%A9/res/60314.html#60333 I got 318 requests/second.

edit: yeah, the link used makes a big difference, doing https://[200:c5b0:cfeb:5db:c054:d66d:eb6f:7412]/content/media/toggle/#no2 which is on my own site, gives me 299 requests/second.

edit2: using the network tool, I've figured out that the link tc uses to get a post preview is probably
"http://tohno-chan.com/read.php?b=navi&t=1547&p=2745&single="
This link gives me about 25 requests/second
Lainchan seems to load the entire page a post is on, then uses js to parse it out to get a preview. So the speed is no different from loading a page.

edit3: This link "http://tohno-chan.com/read.php?b=navi&t=1547&p=2744&single=" gave 15 requests/second the first time, then 28 requests/second the second time. Maybe caching has to do with that difference. Also, the response is of type text/html with a size of 2.47 kb. The contents include unnecessary things like checkboxes and links, as you can see here https://files.catbox.moe/8lnz6r.txt

Mine sends text/plain, of sizes ranging from 9 b to 1.5 kb, the contents of which are then just shoved into the page by htmx.

Post edited on 29th May 2022, 11:45am
>> No. 2749 [Edit]
>>2748
>Mine sends text/plain
Have you also considered adding gzip compression on top? Probably won't help much for post preview since that's usually pretty small, but it could be useful for full page loads.
>> No. 2750 [Edit]
>>2749
I did better, I use brotli compression. Talked about it and how it requires https here http://tohno-chan.com/ot/res/37253.html#39678
>> No. 2751 [Edit]
Using the <a> tag's download attribute, you can set a file to be downloaded with a different name from the one stored on the server. No need to edit headers. For imageboards, this feature would be especially useful, yet despite being available for at least 7 years now, it doesn't seem to be used by any.
>> No. 2752 [Edit]
>>2751
Don't imageboards running jschan do this? The <a> tag surrounding the user-supplied filename has its download attribute set to that filename, while the <a> tag surrounding the thumbnail does not.
>> No. 2753 [Edit]
>>2752
Not sure. I didn't do that much research or have heard of jschan before. Don't recall seeing it in the wild.
>> No. 2754 [Edit]
>>2748
Are you doing everything with no JS?

I was considering writing my own imageboard until I effectively concluded that I can't really think of any features that I'd like to add that I'm missing... I'm fairly happy with imageboards as they are, it seems.
>> No. 2755 [Edit]
>>2754
Not everything. I'm using htmx, a js library, to take care of post previews. Doing it otherwise would be an architectural nightmare.
>> No. 2756 [Edit]
I'm on the fence about how to implement posting. I'm not sure whether to edit stored html files, or use templates to generate requested pages every time they're asked for. Which is standard, and which is faster?

edit: generating it every time seems better. My intuition says this is inefficient, but for "dynamic content" it makes a lot more sense. Adding, editing, and deleting posts can be handled entirely within the database without editing anything else. That's also probably how tc and others work.

Post edited on 31st May 2022, 7:05am
>> No. 2757 [Edit]
>>2756
Latter is better. Note that you can do a hybrid where you maintain a cache of generated html pages, and after updating raw text you invalidate cache for all pages that depend on the updated item (e.g. the thread, the catalog, the homepage).

The reason why editing stored html file directly is not a good option is that it results in duplication. For instance, let's say an OP post gets edited: you're going to have to edit html for the catalog, the post, and the post preview. Whereas in the latter approach you only have to implement page generation, and since the DB is effectively in "normalized form" you don't need to update anything else. It also presents issues with needing to backfill edits if e.g. you change the html structure.

So overall it's almost always a good idea to decouple the presentation layer from the underlying data sources. It allows you to do things like edit an attached image without also changing the text of the post

Post edited on 31st May 2022, 1:04pm
>> No. 2758 [Edit]
>>2757
I'm still learning how to use templates. I guess I'll generate a page when a thread is made, then whenever in that thread a post is added, deleted, or edited, I'll regenerate the page.

Not sure how else I'd invalidate cache.

Post edited on 31st May 2022, 3:38pm
>> No. 2759 [Edit]
>>2758
There are two ways you can use the cache: lazily or not-lazily. Let's assume you already have a function which will generate html given the raw db fields.
With the lazy approach, whenever a post is added/deleted/edited, you remove the thread from the cache. Then whenever someone next requests it, the cache lookup will miss and it will be generated and added to the cache. With the non-lazy approach, you immediately repopulate the cache with the updated html.

There's an ok-ish summary of the pros vs. cons of each in [1]

[1] https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

I don't know if you've decided on the specific cache implementation yet. Maybe see if go has any simple in-memory key-value store libraries? Even the simple stdlib hashmap would probably work actually, since we don't need to set any expiration here.
If you want to get really fancy you could have a 2-tier in-memory and on-disk cache, and LRU spill from in-memory to on-disk.
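A minimal sketch of the lazy variant in Go (all names made up):

package cache

import "sync"

type PageCache struct {
    mu    sync.RWMutex
    pages map[int]string // thread id -> rendered html
}

func New() *PageCache {
    return &PageCache{pages: map[int]string{}}
}

// Get returns the cached page, rendering and storing it on a miss.
func (c *PageCache) Get(id int, render func(int) string) string {
    c.mu.RLock()
    html, ok := c.pages[id]
    c.mu.RUnlock()
    if ok {
        return html
    }
    html = render(id) // miss: regenerate from the DB
    c.mu.Lock()
    c.pages[id] = html
    c.mu.Unlock()
    return html
}

// Invalidate is called whenever a post in the thread is added, edited, or deleted.
func (c *PageCache) Invalidate(id int) {
    c.mu.Lock()
    delete(c.pages, id)
    c.mu.Unlock()
}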

Post edited on 31st May 2022, 4:30pm
>> No. 2760 [Edit]
>>2759
For my use case, write-through makes more sense in my opinion. I'm not doing crazy, amazon/google-level infrastructure. I don't need that much flexibility because I don't expect any kind of breaking retrieval failure. It also won't take so much space that I would need to delete cached content.

>the specific cache implementation yet
I was thinking thread names would do all the work, since they're all nearly identical to each other and tied to actions that change them(adding, deleting posts and editing). There would be no difference between the pages served and the pages "cached". I don't think there's any reason to do something more fancy.

Post edited on 31st May 2022, 10:46pm
>> No. 2761 [Edit]
File 165428976695.png - (404.31KB , 1686x946 , template.png )
2761
I'm making progress with templates.
>> No. 2762 [Edit]
>>2761
What template library is that?
>> No. 2763 [Edit]
>>2762
Go's built-in text/template library.
https://pkg.go.dev/text/template

html/template has the same interface, but I don't need it for my purposes.
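For a taste of what using it looks like (toy example, not my actual code):

package main

import (
    "log"
    "os"
    "text/template"
)

// Post is a toy stand-in for the real row struct.
type Post struct {
    ID   int
    Body string
}

func main() {
    tmpl := template.Must(template.New("thread").Parse(
        "{{range .}}<p id=\"p{{.ID}}\">{{.Body}}</p>\n{{end}}"))
    posts := []Post{{1, "first post"}, {2, "second post"}}
    if err := tmpl.Execute(os.Stdout, posts); err != nil {
        log.Fatal(err) // prints one <p> per post on success
    }
}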

Post edited on 3rd Jun 2022, 5:03pm
>> No. 2764 [Edit]
File template_test.zip - (1.92MB , template test.zip )

2764
>>2763
And here's all the files if you're interested.

good summary of how to use the library
https://blog.gopheracademy.com/advent-2017/using-go-templates/

Post edited on 3rd Jun 2022, 5:08pm
>> No. 2765 [Edit]
>>2763
Neat, that's very powerful to include in the stdlib. It's interesting that Google decided to create a separate template language for Go instead of reusing Closure Templates. I guess the stdlib template is more powerful since it allows interop'ing with Go function calls, whereas Closure Templates is kind of in its own world (which is in some ways nice as it makes templates self-contained and usable between languages, but also means you have to work around things sometimes).
>> No. 2766 [Edit]
File backup2.zip - (7.24MB )

2766
update: I've(mostly) remade the model thread using templates. File contains the entire current project.

Post edited on 4th Jun 2022, 12:30am
>> No. 2772 [Edit]
File 165454185234.png - (329.31KB , 1920x644 , success.png )
2772
Basic thumbnail support added. I'll make a git repo for the project now.

edit: https://gitgud.io/nvtelen/ogai

Post edited on 6th Jun 2022, 12:23pm
>> No. 2773 [Edit]
File 165457130847.png - (3.79KB , 491x59 , replies to.png )
2773
Anyone know how "replies to:" is usually implemented? You can't have multiple values in an sql column, so I don't know how that info is usually stored.

Any suggestions?
>> No. 2774 [Edit]
>>2773
https://condor.depaul.edu/gandrus/240IT/accesspages/relationships.htm
>> No. 2775 [Edit]
>>2773
>Anyone know how "replies to:" is usually implemented

Most places I've seen do this client-side in JS; Tohnochan seems to be an exception in doing it server-side. As you mentioned it's not easy to express this as a SQL query since there's no efficient way to do the join. (Theoretically if you use a more heavyweight sql engine like postgres I guess you could probably store a list of referenced posts in the row, and then do some sort of join & array filter, but it's probably not going to be efficient.) The other option is storing this reference data separately: a simple map from post id -> replies is all you need, but that's also not efficient since it requires you to update an old key whenever a post gets a new reply.

I think that TC does it by basically moving the client-side approach server-side, or what I mean is they reconstruct the forward-referencing reply-tos after all the posts are retrieved. I guess this is based on the fact that the "replies" only works within the same thread, and doesn't work from the post-preview, although I could be wrong.

So I guess basically after you retrieve the sql output for the posts in a thread, just iterate over them and construct the map of replies. You can do a neat trick where, since the posts are guaranteed to be in monotonically increasing order, you don't need a hashmap of post number to array of replies, but can instead just allocate an array of length (# of posts in thread) and index into it directly. A sketch follows below.
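Sketch of that pass, using the plain hashmap version for clarity (the Post type and quote pattern are assumptions about your setup):

package main

import (
    "fmt"
    "regexp"
    "strconv"
)

// Post stands in for however you represent a row from the posts table.
type Post struct {
    ID   int
    Body string // already html-escaped, so quotes look like &gt;&gt;123
}

var quotePat = regexp.MustCompile(`&gt;&gt;([0-9]+)`)

// repliesTo maps each quoted post id to the ids of the posts quoting it.
func repliesTo(posts []Post) map[int][]int {
    m := make(map[int][]int)
    for _, p := range posts {
        for _, g := range quotePat.FindAllStringSubmatch(p.Body, -1) {
            id, _ := strconv.Atoi(g[1])
            m[id] = append(m[id], p.ID)
        }
    }
    return m
}

func main() {
    thread := []Post{{1, "op"}, {2, "&gt;&gt;1 nice thread"}}
    fmt.Println(repliesTo(thread)) // map[1:[2]]
}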
>> No. 2776 [Edit]
File 165458034195.png - (108.15KB , 1014x556 , behavior.png )
2776
>>2775
Based on >>2774 I've come up with a solution. Thread content is updated by putting values from an sql query into structs(golang), and then inserting the values of those structs into a template, the result of which overwrites a thread's html file.

However, you can probably have different members in one struct be populated by different queries' results. So I can get content data from one table, and reply data from another. I plan to make another table(one for each board) that contains a "source" column and "replier" column, and to add a reply member(string array) to the post struct.

When someone quotes another person, the reply table has a new row added. When the thread is updated in some way, it is reconstructed by getting values from both tables. Not sure how efficient this is, but it should work.
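For the read side, I'm imagining something like this (assuming database/sql as before; the table and column names are just my plan, nothing final):

// assumes: import "database/sql" (plus the sqlite driver)
// buildReplyMap returns source post id -> ids of the posts that quote it,
// restricted to one thread.
func buildReplyMap(db *sql.DB, thread int) (map[int][]int, error) {
    rows, err := db.Query(
        `SELECT r.source, r.replier
         FROM replies r JOIN posts p ON p.id = r.source
         WHERE p.parent = ?`, thread)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    m := make(map[int][]int)
    for rows.Next() {
        var src, rep int
        if err := rows.Scan(&src, &rep); err != nil {
            return nil, err
        }
        m[src] = append(m[src], rep)
    }
    return m, rows.Err()
}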

>"replies" only works within the same thread
Actually, I think they work across an entire board, but it's kinda glitchy(pic related).

Post edited on 6th Jun 2022, 10:40pm
>> No. 2777 [Edit]
>>2776
Yeah I think that's the "reference data separately (map from post id -> replies)" solution, although I personally don't like it since it means one single reply can invalidate the cache for (and thus force regeneration of) multiple threads. I.e. consider a "mass quote" type scenario (which is not uncommon on 4chan at least), where a reply quotes one post each from every thread that exists. Then you'll have to regenerate all threads.

It does have the advantage that it allows cross-thread replies to show up though. The performance probably won't be a concern for small image board though.
>> No. 2778 [Edit]
>>2777
There's a simple fix for this: only add a row to the reply table if the "source" and "replier" post have the same "parent" value(belong to the same thread). This is an acceptable tradeoff in my opinion because the "replies to" feature is mostly useful for following intra-thread conversations. You don't get that much from knowing a post was quoted in another thread.

Post edited on 7th Jun 2022, 12:30am
>> No. 2779 [Edit]
File 165462606561.png - (158.17KB , 943x669 , progress5.png )
2779
"Replies to" added. The next thing I plan on adding is post formatting, which I think will be quite a hurdle because of the parsing part.

Post edited on 7th Jun 2022, 11:22am
>> No. 2780 [Edit]
>>2779
Regex should get you most of the way there unless you want to support something like markdown, and even then, go should have packages galore for that.
>> No. 2781 [Edit]
>>2779
>>2780
I'd also second the suggestion to consider markdown over bbcode just because bbcode syntax is really annoying when typing, but on the flipside I've never liked that markdown mangles line breaks, and bbcode is more classic-imageboard style.

Also, if you aren't doing escaping already you need to add this, otherwise you will be vulnerable to reflected xss attacks. You mentioned using text/template instead of html/template; this is a mistake, you really want to use the latter as it will do the escaping for you. Actually, it seems like even html/template has issues, and the suggested replacement is Google's SafeHTML, which should be a drop-in replacement. (One thing I liked about Closure Templates was that it had a robust set of sanitizers.)
>> No. 2782 [Edit]
>>2781
The other thing about markdown is that its popular implementations don't support spoiling text, or at least that's been my experience. So that would have to be added separately.
>> No. 2783 [Edit]
>>2782
>popular implementations don't support spoiling text
That's a good point. The syntax I've seen in a few places like stackoverflow uses >! as the delimiter, but if it isn't already supported in the markdown library you'd probably need to add it yourself. The same goes for rarer features like switching to the ms pgothic font.

Also I never noticed the "test" bbcode button in TC, I wonder what that does... <test>test test</test> [test]test test[/test]

Post edited on 7th Jun 2022, 6:59pm
>> No. 2784 [Edit]
File 165465609418.png - (79.61KB , 485x208 , huhh.png )
2784
>>2781
I plan on copying schemeBBS' rules. I have no attachment to bbcode.

>using text/template instead of html/template
I'm already using html.EscapeString() for this. I don't want to use html/template because I'm just shoving all of a post's contents into a single <p> tag, so I need the html that I add to be placed into the template as-is. This is a lot simpler than making a scheme that breaks a post up into multiple <p> or <blockquote> tags.

This is secure enough in my opinion. I think a lot of the more "advanced" security measures exist to protect paying clients on sites with sensitive information, even if they're using something like internet explorer. My server has a CSP, so anybody using a modern browser should be fine. Correct me if I'm wrong though.
>> No. 2785 [Edit]
Hitting a bit of a snag because Go's standard regular expression library doesn't support negative lookahead. So my > and >> handling is broken. Now I'm looking for an external library.
>> No. 2786 [Edit]
>>2785
I don't follow, this should be possible without negative lookahead? ^>[^>]* will match for the line quotes, and >>[0-9]+ will match the post numbers.
>> No. 2787 [Edit]
>>2786
^> won't match for quotes that aren't at the start of a line >like this. (actually, I don't think that works on tc either...)

[^ >] won't work because I've already replaced > with &gt; and [^ (&gt;)] gives false matches.

I have figured out a workaround though. First search for &gt;&gt; and replace them with &#62;&#62;
>> No. 2788 [Edit]
>>2787
>aren't at start of line
Yes, that was intentional to match the TC behavior (I think other imageboards also only allow quote on its own separate line). You could always drop the anchor though if you really wanted.

>and [^ (&gt;)] gives false matches
That's because it's doing negation on the individual characters instead of the entire string. It should technically be possible to convert a negative lookahead into a strictly regular expression, but it probably blows up your state space. E.g. for a simple example like foo(?!bar), you could take the union of foo at end-of-input, foo[^b], foob[^a], and fooba[^r] (negation of a regular language is also regular, but the actual regex for it is probably ugly due to the exponential blowup of the powerset construction).

But two things. First, why not just do a second pass over the text to exclude the negated match? I.e. if you want to match everything except foo, instead of trying to do this directly via a regex for (?!foo), just check for foo and then invert your result.

Second, there's no need to even worry about excluding matches, since you can just prioritize match order: e.g. if ">>asdf" matches both >.* and >>.*, then just prioritize the latter match.

Also why not figure out the formatting before you do the escaping? You'll need to tag the quotes as a separate class anyway for css styling, so it seems easier to do escaping after?
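To make the match-order idea concrete, a sketch (I haven't run this against your code, and the class names are made up):

// assumes: import "regexp"
var (
    postLink  = regexp.MustCompile(`&gt;&gt;[0-9]+`)  // more specific, run first
    lineQuote = regexp.MustCompile(`(?m)^&gt;[^\n]*`) // whole-line quotes, run second
)

// format expects s to be html-escaped already.
func format(s string) string {
    s = postLink.ReplaceAllString(s, `<a class="postlink">$0</a>`)
    // lines that began with &gt;&gt; now begin with <a, so they no longer match here
    s = lineQuote.ReplaceAllString(s, `<span class="quote">$0</span>`)
    return s
}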

Post edited on 9th Jun 2022, 12:33am
>> No. 2789 [Edit]
>>2788
>You could always drop the anchor
Before, I didn't want a quote in the middle of a line >like this to be valid. Now I don't really care.
>you can just prioritize match order
Go does not provide this out of the box. I checked. https://stackoverflow.com/questions/61836985/regexp-find-a-match-with-priority-order
>why not figure out the formatting before you do the escaping?
I already had escaping implemented and doing it first feels safer.

Anyway, the solution I came up with works and is simple.
>> No. 2790 [Edit]
File 165476357172.png - (21.24KB , 1116x312 , format.png )
2790
>>2789
Yep. It's all working.
>> No. 2791 [Edit]
>>2790
>Go does not provide this out of the box.
Just do the logic in Go itself, instead of trying to do the prioritization inside a regex directly.

>implemented and doing it first feels safer.
As long as you be sure to escape before surrounding with tags, it should be equally safe.

That said, as long as it's working the exact method doesn't matter I suppose.
>> No. 2792 [Edit]
File 165491770874.jpg - (131.39KB , 1202x1738 , e99b27493a98506f87c991a73f14912a.jpg )
2792
So I've been messing around with the finer details. When I tried using the hey tool both on the link that gets post previews, and on the link used to create posts, the database locked up.

So I experimented with various connection strings used when opening the database, and I arrived at setting the cache to private and using the wal journal mode.
https://sqlite.org/wal.html

This fixed the issue and had the added bonus of making preview retrieval nearly twice as fast. Now though, I want to prevent other people from spamming blank posts from the command line just by sending requests to the right link. Not sure how to accomplish this. The HTTP Referer header is apparently kinda deprecated and unreliable.

Any suggestions?
>> No. 2793 [Edit]
>>2792
> I want to prevent other people from spamming blank posts from the command line just by sending requests to the right link
Any form of request checking based on client-provided information is just going to be bypassed by a determined adversary, especially since headless automation is hard to distinguish from a "real" client. Just avoid the cat-and-mouse game and use rate limiting with exponential backoff. Fail2Ban server-side usually works well as a catch-all.
>> No. 2794 [Edit]
>>2793
Yeah, I've been thinking the same thing. Would prefer to implement rate limiting myself though. So I'm gonna look for ideas in this longish article
https://mauricio.github.io/2021/12/30/rate-limiting-in-go.html
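The first thing I'll probably try is a per-IP token bucket with golang.org/x/time/rate (sketch; the limits and the X-Real-IP header come from my nginx setup, everything else is provisional):

package main

import (
    "net/http"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

var (
    mu       sync.Mutex
    limiters = map[string]*rate.Limiter{} // never evicted in this sketch
)

// limiterFor returns (creating if needed) the per-IP token bucket.
func limiterFor(ip string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()
    l, ok := limiters[ip]
    if !ok {
        l = rate.NewLimiter(rate.Every(10*time.Second), 2) // ~1 post per 10s, burst of 2
        limiters[ip] = l
    }
    return l
}

func postHandler(w http.ResponseWriter, r *http.Request) {
    // nginx sets X-Real-IP in my reverse proxy config
    if !limiterFor(r.Header.Get("X-Real-IP")).Allow() {
        http.Error(w, "slow down", http.StatusTooManyRequests)
        return
    }
    // ...create the post...
}

func main() {
    http.HandleFunc("/im/post/", postHandler)
    http.ListenAndServe("127.0.0.1:8090", nil)
}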
>> No. 2795 [Edit]
>>2794
Can you do rate limiting at nginx level?
>> No. 2796 [Edit]
File 165492601011.jpg - (369.49KB , 1295x1812 , 7d7a4f200804d6a0c20214693801be67.jpg )
2796
>>2795
I dunno, maybe? I probably wouldn't be able to make it as granular though. All requests to the imageboard go through one reverse proxy, which is located at the location /im/ (an actual folder which I just keep empty).

From there, requests go to different functions based on the rest of their path, like /im/post/ to the new post function. Thing is, I want to limit that much more than I limit how fast someone can get post previews. Doing it programmatically also has the added benefit of making the program more portable. My nginx configuration isn't really part of the program I'm writing.

Post edited on 10th Jun 2022, 10:41pm
>> No. 2799 [Edit]
File 165497425711.png - (387.04KB , 1460x835 , burichan.png )
2799
I added a theme picker by setting a cookie with Go and using nginx's substitution module to replace the name of the css file loaded on the page based on it. Zero javascript was needed. I got the idea from reading this largely unrelated article
https://scotthelme.co.uk/csp-nonce-support-in-nginx/

I tried adding the same functionality to my bbs a few months ago, before I knew about the substitution module, and the impression I got from my research was that js was the only possible way of doing it without having two separate html files(the solution I ended up going with).

I'm amazed by how simple this was, yet how uncommonly it's used(at least in this sort of way). Really it should be a standard part of webdevs' toolkit.
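For reference, the Go side is tiny (sketch; the cookie and form names are just what I picked). nginx then does something like sub_filter 'default.css' '$cookie_theme.css'; on responses, though double-check the exact syntax:

package main

import "net/http"

// themeHandler sets the cookie that nginx's sub_filter keys off of.
func themeHandler(w http.ResponseWriter, r *http.Request) {
    http.SetCookie(w, &http.Cookie{
        Name:  "theme",
        Value: r.FormValue("theme"), // e.g. "burichan"
        Path:  "/",
    })
    // bounce back to whatever page the picker was on
    http.Redirect(w, r, r.Referer(), http.StatusSeeOther)
}

func main() {
    http.HandleFunc("/im/theme/", themeHandler)
    http.ListenAndServe("127.0.0.1:8090", nil)
}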

edit: side note, it's really annoying you can't have every option within the select tag act like the submit input. W3C arbitrarily decided clicking options can only do something on their own if you use js.

Post edited on 11th Jun 2022, 12:20pm
>> No. 2806 [Edit]
I built a massive function pipeline that made LLD kill itself. Both proud and annoyed.
>> No. 2807 [Edit]
>>2806
Try using mold: https://github.com/rui314/mold
>> No. 2808 [Edit]
>>2807
I would, but I'm using Windows.
>> No. 2822 [Edit]
What the hell does this math word salad mean? Especially "reduced modulo". I can't find a simple and straightforward answer with examples anywhere.

"Perhaps surprisingly, unary minus (-) operator is defined for unsigned
integer types. The resulting unsigned integer value is reduced modulo the
number that is one greater than the largest value that can be represented
by the resulting type."

Post edited on 22nd Aug 2022, 5:30pm
>> No. 2823 [Edit]
"reduced modulo X" is a math term for taking the remainder when divindg by X. In number theory you may see this written as "a = b (mod c)" which means a % c = b. Or equivalently a = b + ck for some integer k. You might also see it referred to as the residue modulo X.

Basically this means that when x is unsigned k-bit integer, -x is equivalent to -x % 2^k, which is 2^k - x. Note that this only works for unsigned ints, as signed ints obey 2s complement so overflowing will result a negative number, as 2s complement essentially represents -k as -k + 2^n. (This works because -k = -k + 2^n (mod 2^n) so hence they function equivalently).

Post edited on 22nd Aug 2022, 5:51pm
>> No. 2824 [Edit]
>>2823
Why don't they just say modulo?
>x % 2^k
this isn't consistent with the code I've tested

#include <stdio.h>

int main(void) {
    unsigned x = 6;
    x = -x;

    printf("%u", x);
    return 0;
}

returns 4294967290, but 6 % 2^32 = 6
>> No. 2825 [Edit]
>>2824
Yes, I typo'd (now edited); it should be -x % 2^k, not x % 2^k. See below

#include <iostream>

int main(int argc, char *argv[]) {
    unsigned int x = -2;
    std::cout << x << " " << ((1L << 32) - 2) << "\n";
}
>> No. 2826 [Edit]
>>2825
Okay, thanks for explaining.
>> No. 2827 [Edit]
>>2826
But I don't really like that paragraph, because it relies implicitly on how you define the modulo operator on negative numbers. "x reduced modulo y" can be ambiguous when x is negative, and indeed different programming languages implement the semantics differently. I don't do enough number theory to know if there's a convention in math, but there you usually care more about the congruence class than the actual remainder. This could be why they chose "reduced modulo" over "remainder", though, since they want to invoke a convention that "x reduced modulo y" always gives an output in [0, y), but I don't know if this is even a standardized convention.

And it's extra confusing in that paragraph because "resulting unsigned integer value is reduced" leads you to think that the argument to the modulo is positive (after all, it says "unsigned"), when it's not.

The better, completely unambiguous way to phrase that would be something like
> n-bit unsigned numbers effectively implement modular arithmetic on Z/2^nZ (the ring of integers modulo 2^n), and take on values in [0, 2^n). This means that if x = 2^32 - 1 = UINT32_MAX, then x + 1 = 2^32, which gets reduced to 0. Similarly, -x = -x mod 2^32 = -x + 2^32*m for the m such that 0 <= -x + 2^32*m < 2^32 [such an m is guaranteed to exist by the division algorithm].

2s complement could be similarly explained as doing the same arithmetic in Z/2^nZ but letting the numbers take on values in [-2^(n-1), 2^(n-1) - 1]. (It took me a long time to understand why exactly 2s complement worked, and the reason it took me so long was that none of the stupid tutorials bothered to formalize this with math. Nothing changes in the bit patterns, and the cpu doesn't need any special "2s complement" circuitry*. All that changes is how you interpret the number, so it's only printf that needs to be "aware" of 2s complement at all.)

*For addition at least. Anything that involves sign extension (bit shifts, mul-hi) will need separate instructions for the signed vs. unsigned versions.
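If a concrete demo helps, here's the same point in Go, which forces the reinterpretation to be explicit (just an illustration, since your book is about C):

package main

import "fmt"

func main() {
    x := uint32(0xFFFFFFFA) // the 32-bit pattern of -6
    fmt.Println(x)          // 4294967290, i.e. 2^32 - 6
    fmt.Println(int32(x))   // -6: same bits, different interpretation
}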

Post edited on 22nd Aug 2022, 6:23pm
>> No. 2828 [Edit]
>>2827
Formalized math explanations don't help me much. I strongly prefer an intuitive approach.
>> No. 2829 [Edit]
>>2828
I agree, but in this case there's really no way to specify the behavior of how unsigned/signed ints work without just going ahead and providing the formal definition. Otherwise you introduce ambiguity (e.g. "which definition of modulo are we using").
>> No. 2837 [Edit]
This rule in arithmetic conversion makes no sense

"if the operand that has the unsigned integer type has a rank
greater than or equal to the rank of the other operand’s type, then
the operand with the signed integer type is converted to the type of the
operand with the unsigned integer type. For example, if one operand
has the type signed int, and the other operand has the type unsigned
int, the operand of type signed int is converted to an object of type
unsigned int."

What about negative numbers????? So it just wraps around and -1 is converted to UINT_MAX? An example after this suggests that, so I tested some code.


#include <stdio.h>
#include <limits.h>

int main(void) {
    signed int a = -10;
    unsigned int b = 1;
    printf("%i\n", a + b);

    signed int s = INT_MIN;
    unsigned int u = 1;
    printf("%i\n", s + u);

    return 0;
}


The first thing printed is -9, which makes no sense. %i is the format specifier for unsigned integers, and given the conversion rules, I would expect a nonsense answer like 8 below UINT_MAX. The second thing printed is -2147483647. Why?
>> No. 2838 [Edit]
>>2837
https://www.tutorialspoint.com/format-specifiers-in-c
and
https://www.geeksforgeeks.org/difference-d-format-specifier-c-language/
give different descriptions of what %i is, so I guess that's what the problem was.
>> No. 2839 [Edit]
>>2838
>What about negative numbers????? So it just wraps around and -1
Yeah, same as if you do (uint32_t)(-1). Remember casting between signed/unsigned doesn't actually emit any instructions to alter the bit pattern; it just changes the type information tracked by the compiler so the value is interpreted correctly. (This is unlike casting from int to float, or widening, since those cases actually emit an instruction. For narrowing on x64 the compiler probably optimizes by just letting future references be to eax instead of rax or something instead of explicitly clearing the top half, but I haven't verified in godbolt.) IMO implicit casting from signed to unsigned should be an error though (gcc's -Wsign-conversion at least warns about it), since it's easy to do accidentally and usually not what you want.

By the way, are you just learning C, or are you interested in the specific details of parsing/precedence for a reason? If the former, this stuff is pretty dry and I'm not sure if trying to learn C from the "bottom-up" is the best approach. The excerpts you've posted make it seem like the book is an annotated version of the C standard or something, and I get the impression it kind of misses the forest for the trees. E.g. while it might be useful background knowledge from a PL perspective to know the details of how lvalues or sequence points work, I don't think it's of much use for actually implementing things (unless what you're implementing is a C compiler).
>> No. 2840 [Edit]
>>2839
>Remember casting between signed/unsigned doesn't actually emit any instructions to alter the bit pattern
I'm just learning C, and know next to no assembly, so I couldn't have remembered that. It's a bit confusing since negative values are represented differently. For instance, I know
10000010 in a twos complement system is -128 + 2 = -126, while 01111110 is positive 126.

>I'm not sure if trying to learn C from the "bottom-up" is the best approach.
I'm going off the suggestion I was given here >>2651
I've also got a computer architecture class coming up soon.

Post edited on 24th Aug 2022, 1:40pm
>> No. 2841 [Edit]
>>2840
> negative values are represented differently
Yes, but there's no explicit sign bit indicating if a given bitpattern is signed or unsigned. The bitpattern of -126 will of course differ from that of 126 because they're not equivalent mod 256, but that's not really anything surprising. What I mean is the same bitpattern can be interpreted as signed/unsigned.

>I've also got a computer architecture class coming up soon.
I see. From a comp-arch perspective you'll probably get more value familiarizing yourself with assembly, ALU circuits, synchronous design, and basic cpu 5-stage pipeline. I don't know what arch courses are using these days, older ones will still be using mips while newer ones may be using riscv.

Post edited on 24th Aug 2022, 3:20pm
>> No. 2842 [Edit]
>>2841
The class has a C and assembly part, mostly C. The topics you're mentioning are probably in the second computer architecture course. I'm also interested in vulkan. Thanks for explaining by the way.

Post edited on 24th Aug 2022, 2:24pm
>> No. 2843 [Edit]
>>2842
>vulkan
I don't know much about gpu programming, but my understanding is that vulkan isn't really meant to be an end-user facing api since it's lower level than opengl. Starting with OpenGL would be easier (or even WebGL, which is probably the best way to get started on graphics fundamentals without a bunch of toolchain setup). Or if you're interested in it for numeric computing, CUDA is the gold standard.
>> No. 2844 [Edit]
>>2843
The Vulkan tutorial recommends reading another guide first that uses OpenGL and explains fundamental aspects of 3d graphics. I think it'll be worth going down that path. I'll try posting a triangle I make in Vulkan in 8 months' time.

Post edited on 24th Aug 2022, 4:33pm
>> No. 2845 [Edit]
>>2844
Highly recommend the webgl tutorial https://webglfundamentals.org/webgl/lessons/webgl-fundamentals.html since it allows you to learn the core of opengl without needing to bother with window toolkits (I think most people use glfw as a wrapper to abstract out the platform-specific windowing parts when developing for native). I don't know if there are better tutorials since I don't really know much myself.
>> No. 2908 [Edit]
>>1547
I think C is much nicer to use than C++ if you aren't a super hardcore professional coder. The language is so simple compared to other low level languages, which is probably why it caught on.

C + Lua is a really symbiotic combination, the lua library is simple to build with any C compiler. Then if you want a GUI library you can try Tecgraf IUP.
>> No. 2909 [Edit]
>>2908
I've heard good things about Lua, but I can't get past the 1-indexed "arrays".
>> No. 2910 [Edit]
>>2908
>>2909
When I last tried Lua I also found its standard library a bit lacking. You can use C++ in a C-like style and still get the advantages of its excellent standard library without much of the complexity overhead.
>> No. 2911 [Edit]
>>2910
Standard libraries are nice and all, but if a language has many external libraries, I don't see it as being too important.
>> No. 2912 [Edit]
>>2911
I don't like pulling in external libraries, because oftentimes they're of questionable quality and their apis don't play well with each other. The C++ stdlib is great because it was designed cohesively (so the functions in <algorithm> usually work across its containers) and provides all the relevant data structures you might need (*).

(*) They're not perfect though, some such as unordered_map aren't as performant as they could be, I understand absl's hash maps are better here, but pulling in absl is a huge dependency.

But otherwise yeah I've seen the C + Lua pattern used a lot in projects before, and it does seem a good fit, I just wish a lot of weird decisions were worked out (no int types, 1-indexed array (**), wonky scoping). Another more recent option I've seen gaining popularity is integrating with QuickJS.

(**) I don't have an issue with the idea of 1-indexing itself, since it's fine in languages targeted towards mathematics or that otherwise provide a different meaning for a[0], but since Lua isn't targeted towards math nor is it a homoiconic language, 1-indexing doesn't make any sense here.
>> No. 2928 [Edit]
>>2912
> but since Lua isn't targeted towards math nor is it a homoiconic language, 1-indexing doesn't make any sense here.
I thought they built their whole hash table thing around it
>> No. 2952 [Edit]
I'm trying to get all permutations of a list in Haskell using this algorithm
https://www.baeldung.com/cs/array-generate-all-permutations#simple-recursive-algorithm

I'm getting the "cannot construct the infinite type" error, and I'm at my wit's end. Please help. My code:
https://justpaste.it/2vbbc/pdf

edit: I asked on the irc, and I had to move the recursive call to the right of the list comprehension. Spacing also gave me issues. So, the final code is:
https://justpaste.it/9cwbk/pdf

Post edited on 19th Oct 2022, 8:55am
>> No. 2953 [Edit]
>>2952
Transcribing contents of the link because I have faith that TC will last longer than pastebin services


-- Get every permutation of a list
import Data.List

perms :: Eq a => [a] -> [[a]]
perms init = process [] init
  where
    process :: Eq a => [a] -> [a] -> [[a]]
    process cperm []    = [cperm]
    process cperm eperm =
      [ p | e <- eperm
          , let nperm = cperm ++ [e]
          , let rperm = delete e eperm
          , p <- process nperm rperm ]

>> No. 2954 [Edit]
>>2953
Too bad tc destroys formatting by removing "extra" white space. White space is very significant in Haskell.

Post edited on 19th Oct 2022, 11:28am
>> No. 2955 [Edit]
>>2954
Contrary to >>2111, we do have a code tag. Or at least we do now.
-- Get every permutation of a list
import Data.List

perms :: Eq a => [a] -> [[a]]
perms init = process [] init
  where
    process :: Eq a => [a] -> [a] -> [[a]]
    process cperm []    = [cperm]
    process cperm eperm =
      [ p | e <- eperm
          , let nperm = cperm ++ [e]
          , let rperm = delete e eperm
          , p <- process nperm rperm ]

>> No. 2956 [Edit]
>>2955
Really? Is this a recent addition?

>> No. 2957 [Edit]
File 166620957425.png - (1.08MB , 1280x720 , ayumi.png )
2957
>>2955
Nice find, anon. I certainly wasn't aware of this feature.

>> No. 2958 [Edit]
>>2956
Not sure when it was added, but >>/fb/6802 is where I remember first seeing it.
>> No. 2960 [Edit]
>>2957
Interesting, on yotsuba v2 theme your post's text overflows the container. Tried noodling around in inspector to understand why, and from what I can tell (which may well be wrong since I can never figure css out) it's because the computed position (presumably used in sizing the parent container) doesn't match the actual position. I think it's something to do with inline vs block elements, and the interaction with the image. The monospace text is in a div (block element) which is forced to start from the beginning of the line. However this would overlap the image so I guess for some reason during page rendering it gets shifted, even though for layout computation purposes it's treated as if the image wasn't there.

Edit: No, apparently the above is caused by float:left on the .thumb, inspector just always shows the div as taking up the entire width. Then I guess this is a case of white-space: pre forcing overflow. There must be a way to set the parent element to expand, but I don't know it.

Post edited on 19th Oct 2022, 10:33pm
>> No. 2961 [Edit]
>>2960
The CSS is a pretty massive mess honestly so I'm not too surprised it breaks in weird ways
>> No. 2963 [Edit]
It's hard to find good information on how exactly vsync and gsync/freesync work, as in the precise changes that enabling them makes to the application->display pipeline. Almost everything you can find is marketing, and because most of the target audience for these things is gamers, any discussion is always loaded with cargo-culting, misinformation, and inconsistent terminology. I spent a week trying to learn enough to be able to confidently reason about all sorts of scenarios, so I'll try to transcribe what I learned.

* First we need to start with analog video display, e.g. CRT. As these had a physical electron gun scanning down to form the image, there's necessarily a physical delay in how long it takes a given "frame" to be displayed on the screen in its entirety, and a "recovery period" to restore the gun so it's ready to draw the next frame. The former was implicitly controlled in the actual analog encoding of the signal (you can look up ntsc or pal standards for the gory details), while the latter was handled by "dead space" in the signal between encoded frames, the vertical blanking interval.

* Even though LCDs no longer use an electron gun, there's still a need for "dead time" between sent frames (perhaps due to combined effects of lcd pixel response and driver circuitry signal propagation time). For the lcd panel itself, I'm not sure exactly what encoding and signalling they use, I think it's LVDS [1] but it has a vsync segment for this reason, and this is thus propagated to higher layers of the stack as well (e.g. cvt signalling, and up through the outer protocols like hdmi [2]). We will blackbox the LCD panel + driving apparatus as a "monitor" which will accept raw frame data in a signal format that consists of encoded frames and blanking intervals.

* The interface between the monitor and the operating-system+applications will be the gpu, which we will abstract as a dedicated region of memory storing one raw frame (framebuffer) along with driver circuitry to transmit this information to the monitor. The process of transmitting framebuffer contents to the monitor will be done multiple times a second, as per the refresh rate of the monitor. For a CRT, the monitor's refresh rate is easily seen to be primarily dependent on the speed of the electron gun. With LCDs however, the difference between a "low" and "high" refresh rate monitor is more subtle and I think comes mainly from the combined end-to-end signal propagation time, and so the "high refresh rate" monitor has components that are specifically designed to handle higher speeds (pixel clock for video signal, pixels in the lcd display itself supporting higher refresh rates without weird artifacting, and the lcd driver compensating for whatever weird physics occurs at high refresh rates). It should be noted that apparently some people in the gaming community like to "overclock" their monitors which I assume refers to tuning the pixel clock and blanking interval [4].

* It should also be noted that the reason why information must be sent to monitors (even LCDs) frame-wise (as opposed to a damage-region update scheme) is that it would be very inefficient (cost and circuit-wise) to implement a scheme for random pixel addressing (similar to SSDs which only erase block-wise). The reason why frames must be sent "periodically" is clear for CRTs (namely that the beam is physically periodic), whereas for LCDs it is not strictly necessary: while, like DRAM, LCD pixels do need to be refreshed periodically even if no "new" content is to be displayed (I'm not 100% clear why LCD pixels need this refresh, but it seems LCD pixels can't handle long-term dc voltage, so since they need to have voltage alternated periodically anyway, it's easier to just make that the refresh rate), in practice this rate can be as low as 24hz, so we don't need to scan at 60hz.

* The observation that with modern LCD panels to maintain a constant image we no longer need to physically resend the framebuffer data to the monitor at a 60hz refresh rate is a key aspect of freesync/g-sync as described later. Also note that resending at 60hz is wasteful if there are no changes, so newer monitors have a monitor-side cache of the last displayed image, so the physical lcd refresh doesn't have to be tied to the gpu's refresh rate. This is known as "panel self refresh" [8], and it too ends up being inspiration for freesync/gsync.

* Now we can finally get to "v-sync", which can be succinctly described as gating gpu framebuffer updates on the vertical blanking frequency. Ideally we'd like framebuffer reads and updates to be "atomic" such that we never transmit an incomplete frame to the monitor. Due to physics this is probably impossible though: we can model framebuffer updates as essentially a memcpy between host dram and gpu memory, and similarly model the transfer from gpu memory to wire as another memcpy. (In reality it's probably more sophisticated, probably row-wise reading at least, see the slow-motion captures in [26]). Under this model, if we were to modify the framebuffer while it's being read, we'd end up ultimately transmitting some portion of the old image followed by some portion of the new image, which is known as screen tearing [9].


* We know that the delay between on-wire frames is precisely the vertical blanking interval. So this means that in order for a framebuffer read to be "effectively" atomic, any modifications to the framebuffer must be done within vblank [3]. In a simple case, this means that by the end of vblank we need to have finished copying our frame to the gpu's memory, and we should not touch it until the beginning of next vblank. A naive approach (single buffering) would be to optimize your video rendering so that the rendering manages to finish within a blanking interval, but this can be optimized via a double-buffering approach where you can take your own time rendering to a back buffer, and during the blanking interval the back buffer is swapped with the front buffer. The back-buffer can either be software-backed or part of gpu memory itself, with the latter avoiding a memcpy cost.
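To make the double-buffered flow concrete, here's a minimal sketch of the render loop (Python pseudocode; wait_for_vblank() is a hypothetical stand-in, faked here with a sleep, since the real signal comes from the gpu driver):

import time

REFRESH_HZ = 60

def wait_for_vblank(last_vblank):
    # Hypothetical stand-in: a real driver blocks us until the blanking
    # interval starts; here we just sleep to the next 1/60s boundary.
    next_vblank = last_vblank + 1.0 / REFRESH_HZ
    time.sleep(max(0.0, next_vblank - time.monotonic()))
    return next_vblank

def render_loop(render_frame, n_frames=300):
    back = bytearray(1920 * 1080 * 4)    # buffer we draw into at our leisure
    front = bytearray(1920 * 1080 * 4)   # buffer being scanned out
    last_vblank = time.monotonic()
    for _ in range(n_frames):
        render_frame(back)               # can take up to a full refresh period
        last_vblank = wait_for_vblank(last_vblank)
        back, front = front, back        # "atomic" pointer swap inside vblank
        # front is now scanned out untouched until the next swap,
        # so the monitor never sees a half-written frame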

* Note that in the above the implicit assumption is that we will prevent modifications to the back-buffer (or equivalently atomically swap the back and front buffers) once we enter the blanking interval. We can thus make the assumption that after rendering to the back buffer, the drawing application will _block_ until the start of blanking interval, at which point it will swap the buffers and resume rendering to the new back buffer. (In the case of hardware-backed back buffer where no explicit memcpy is needed, I'd assume the gpu itself takes care of swapping via a pointer swap).

* From above we also see that with vsync enabled, since render loops are driven off of the vblank interval, we effectively cap the in-game fps (number of times the back-buffer is swapped with the front-buffer per second) to the monitor's refresh rate. Note that while capped fps is usually an effect of vsync as it's implemented in games, it's not necessary in general. You could for instance have a triple-buffering setup, with two back buffers and one front buffer. The application is free to go as fast as it wants, alternating renders between the two back buffers. When it's vblank time, at least one of the back buffers must have a fully completed frame, so we can just pick that [10]. Also note that triple-buffering here will reduce perceived input latency, as the render loop isn't blocked on the vblank interval, so an outputted frame can still depend on input received between blanking intervals. This is implemented in hardware as Nvidia's "Fast sync" (hardware triple buffering, doesn't back-pressure the render pipeline).

* The above should be distinguished from "render ahead queues" as implemented in some systems like DirectX, which also use multiple back buffers (thus unblocking the render loop), but these buffers are effectively immutable once submitted to the gpu. As such the latency is effectively worse than with double-buffering, growing as the queue size grows. The fact that most gamers use windows, where a queue size > 1 appears to be the default, seems to have resulted in a lot of confusion about this online.

* We could also have double-buffering without using vsync at all, where we are free to keep updating and swapping buffers as fast as we want, but because we don't block the swap until the vblank, we might have a chance of swapping the buffers while the front buffer is being output to the monitor. This could be thought of as driving your render loop off of the system clock instead of the vblank clock. In such a scenario note that the higher your fps, the faster you swap, and the more likely it is that a swap might happen during readout. Conversely the slower you swap (lower fps), the less likely tearing is to happen (since it's less likely to intersect a monitor scan period). Similarly the higher your monitor's scan rate, the fewer torn displayed frames we'll have, since any mangled readouts will be replaced with a clean readout the next refresh cycle.

* To concretely quantify the above: with no v-sync at 60fps on a 60hz display we might expect 1 tear line every displayed frame (if we always deliver a new frame in the middle of a scan). At 30fps on a 60hz display, every other displayed frame might have a tear line (since for every 2 scans, we swap once, and the swap can intersect at most 1). At 120fps, we'd expect 2 tear lines (since we swap twice during a readout). With a non-divisible fps like 45fps, every 4 scans we swap 3 times. Note that at frame rates < refresh rate, we might have distracting effects where the tear line appears to move or jump around, so it can theoretically be more noticeable than at 60fps (where the tear line might hopefully stay fixed). Tearing would also be more noticeable as there would be a greater difference in content between frames. Also, increasing the monitor refresh rate would decrease the time a torn frame is displayed. Given that frames are delivered consistently on time at a rate equal to the refresh rate, with accurate clocks, we can try to move where the tear line occurs so that it always occurs at the same spot: it thus becomes effectively a "constant" artifact and essentially unnoticeable. We could also remove it entirely by moving it into the vertical blanking region (which is equivalent to only swapping during vblank). This technique is known as "beam racing" or "scanline/latent sync", which you can see employed in demoscene work here [24, 25, 26, 27].
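A quick sanity check of those ratios (this is just swaps divided by scans, ignoring phase and jitter):

def tear_lines_per_scan(fps, refresh_hz):
    # Unsynced swaps per scanout = fps / refresh rate; each swap that lands
    # mid-scan produces one tear line.
    return fps / refresh_hz

for fps in (30, 45, 60, 120):
    print(fps, "fps @ 60hz:", tear_lines_per_scan(fps, 60), "tear lines per displayed frame")
# 30 -> 0.5 (every other frame torn), 45 -> 0.75 (3 swaps per 4 scans),
# 60 -> 1.0, 120 -> 2.0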

* The above technique of beam-raced page-flips seems similar at first to vsync, in that to hide the tearline we have to time the pageflip to happen in vblank. The only difference from vsync is that the application controls the pageflip itself (with accurate clocks to time the pageflip coupled with tight control of the render loop to always deliver a frame at refresh time, thus locking game fps to screen refresh rate just like v-sync), versus allowing the gpu driver to do it, which seems to reduce a bit of latency:

> It's essentially a beamraced pageflip mode at the raster scan line position you want (adjustable tearline location), once or twice every refresh cycle. This minimizes framebuffer backpressure as much as possible, by bypassing all the VSYNC ON framebuffering logic built into graphics drivers. Essentially Scanline Sync creates a new sync mode using the VSYNC OFF mode that looks like VSYNC ON in appearance (and self-capping like VSYNC ON) if the game's frametimes rarely reach a full refresh cycle.

It's not clear to me exactly why doing the swap "in software" is faster than letting the gpu drivers do it, but I think part of it might have to do with the fact that under the hood of modern gpus, when you enable vsync it doesn't use a strict double-buffered system but instead a render queue, leading to multiple frames of input lag [28]. Theoretically, from what I can see, assuming a strictly double-buffered adaptive-vsync (see subsequent paragraphs for the definition of "adaptive vsync") there shouldn't really be any difference from "scanline sync".

* Back to the scenario with vsync off, note that even if you can guarantee the aggregate fps is the same as the screen refresh rate, we still can't guarantee that a new frame won't be delivered mid-refresh (although it's less likely) and can't guarantee that each screen refresh will read a fresh frame. This is a subtle point, basically even if we have 60fps in aggregate, the frame pacing might be uneven so we could render & output frame 1, miss a vblank, then "catch up" and render frames 2 and 3 in rapid succession before the next vblank. This could lead to either a repeated+dropped frame (if vblank comes before frame 2) or a screen tear (if the vblank is in-between rendering of 2 and 3). If we know a priori that our render & swap will always complete before the next vblank, then we clearly won't have any issues. Of course the issue is that since we're driven off of the system clock, we don't know exactly when the vblanks are, but you can see that as the accuracy of the system clock increases (so we can swap buffers exactly 1/60 sec after the last time, which will hopefully consistently be inside a vblank) and the render loop time decreases (so that we're unlikely to miss a vblank), we can avoid both tears and frame drops.

* Or put another way, if the render loop can consistently output a frame within 1/60 - epsilon sec (where epsilon is the buffer swap time), then assuming accurate system and video clocks with no clock drift [and that our very first swap was within a vblank interval] there would not be any benefit from vsync because we'd never have any visual glitches. So for practical cases, vsync helps when one of these two conditions is violated: frames aren't always delivered exactly in time, and in the real world clocks will drift. Vsync helps mitigate the latter by ensuring we use the display clock to drive the render loop. The difference between vsync enabled versus disabled in the case of a render-loop that exceeds 1/60 seconds is that the former will guarantee no tearing (at the expense of stuttering and, if only double-buffered, input lag) while the latter will try to render the frame immediately, which might possibly lead to screen tearing depending on if we're in the middle of outputting or not.

* I wonder how prevalent use of v-sync was in "old-school" games. Clearly if you go back to consoles that didn't even have a framebuffer to render to, this is not an issue. But for PC games, clocks back then were even less accurate and more drifty than current clocks. I'm guessing whatever tearing occurred might have been less noticeable on CRTs. I'm assuming that consistency of render loop times was never really an issue until modern 3D games though. You might be interested in reading the rants of a hacker trying to get vsync on early windows [15].

* "Single-buffering" (rendering to the same buffer that we send to the monitor) is not used, since I think we often want to make the assumption that we start with a "clean slate", so that way it's easy to do compositing layer-wise. So we'd want to isolate the buffer we render to from the buffer that is output. I see that theoretically "beam following" exists where if you give up layer-wise compositing and go with a one-pass approach you can use a single buffer by only updating pixels after they've been sent out [13].

* Previously we talked about the "happy path" where our render loop was in fact fast enough to have a frame placed in the back buffer before the start of the vblank. If we don't have it ready in time, then in the case of double-buffering we have to finish rendering it and wait until the next vblank to display it (which will lead to a repeated frame displayed to the user before the proper frame). (In the case of triple buffering, we can begin rendering the subsequent one as well and if it's finished before vblank then we just display that). With double-buffering+vsync I think the reason we can't just "discard" the late frame and render the next frame into the backbuffer is that the rendering loop is driven by the vblank, so it's roughly a "render() + present()" loop, with the subsequent render blocking until the actual buffer is swapped at start of vblank, so the loop itself might not be aware of the actual underlying timings. But I'm not a graphics person, feel free to correct me. Even if it did I suppose clearing the buffer and re-rendering a new frame that isn't guaranteed to finish in time is a worse option than displaying the already rendered frame. (Note that new nvidia drivers supposedly use fancy magic to predict the time from display() to display() and avoid the render queue building up [29]).

* So in the case where we consistently miss the vblank (which the user might see as a dip in fps), the effective frame rate becomes locked to a fraction of the screen refresh rate. In other words, if instead of being able to render a frame within 1/60 sec it consistently takes slightly longer, we will only be able to display a new frame every second VBI, so our frame rate becomes 30fps. If we alternate between being able to make it and not being able to make it, this would result in alternate frames being displayed for uneven amounts of time (1/60 sec vs. 2/60 sec), which would result in noticeable stuttering and unpredictable input lag. If vsync were not enabled, then we could still manage a "smooth-ish" 60-x fps, at the expense of possible tearing. Thus there's a threshold below which vsync doesn't give much benefit to the user. The ability to automatically disable vsync below this threshold is known as "adaptive vsync".
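The quantization is easy to compute directly; a small sketch (assumes strict double-buffering, so a displayed frame occupies a whole number of refresh intervals):

import math

def effective_fps(render_time, refresh_hz=60):
    # Under strict double-buffered vsync a frame can only be shown on a
    # vblank boundary, so each frame occupies ceil(render_time / period)
    # refresh intervals.
    period = 1.0 / refresh_hz
    return refresh_hz / math.ceil(render_time / period)

print(effective_fps(1/60))          # 60.0 -- we just make every vblank
print(effective_fps(1/60 + 1e-4))   # 30.0 -- slightly too slow, locked to half
print(effective_fps(1/25))          # 20.0 -- each frame spans three intervals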

* Finally we get to freesync/g-sync, which is the new hotness (the latter being nvidia's proprietary implementation of the same idea, not just a rebrand of the former). These models recognize that the "pull" based approach of reading a new frame from the framebuffer every refresh cycle is a poor fit for current LCD displays. Concretely, a CRT monitor is driven by a physical gun on a fixed scan cadence, so a pull based approach is natural for it: it can poll the framebuffer whenever it is ready to update, so there's no worry about the driving gpu needing to know monitor-specific internals like beam speed. LCDs don't have such a requirement, other than periodic low-frequency refresh for the pixels themselves (which can be handled by panel self refresh), so we can instead have a push-based model where the renderer submits frames whenever it's ready, and as soon as a frame is submitted the gpu just transmits it to the monitor. In this sense the concept of a fixed refresh rate is essentially meaningless: from the client perspective it's free to send completed frames whenever it wants, and they'll be displayed to the user immediately (of course we're still limited by physical pixel response time, so there is an upper limit).

* See [19] for a better explanation of the above, and the following page explains how gsync actually functions much better than I can:

>G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor. In a normal configuration, vBlank is a combination of the vertical front and back porch and the necessary sync time. That timing is set at a fixed stepping that determines the effective refresh rate of the monitor; 60 Hz, 120 Hz, etc. What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired and will send it when one of two criteria is met.

>1) A new frame has completed rendering and has been copied to the front buffer. Sending vBlank at this time will tell the screen to grab data from the card and display it immediately. 2) A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.

> In current display timing setups, the submission of the vBlank signal has been completely independent from the rendering pipeline. The result was varying frame latency and either horizontal tearing or fixed refresh frame rates. With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.

* [20] has a visual comparison of the render pipeline with v-sync off, v-sync on, and g-sync which I think is perhaps the best visual I've seen in this entire subject and summarizes all of the above.


* Also remember that LCD panels do have a minimum refresh rate around 20hz, so below some point we won't be able to honor the timings exactly without introducing artifacts. The key difference between amd's freesync and nvidia's gsync seems to be how they handle this: freesync reverts to a configurable vsync on or off state, while gsync has additional hardware to frame-double (or triple, etc.) as needed so we still send frames at a rate above the necessary panel threshold. See [21] for analysis much better than I could ever do. The obvious issue with this is that the additional inserted frame for forced refresh might collide with a new incoming rendered frame, reintroducing tearing, or occur right before we were about to send a new frame, meaning we have delay introduced before that new frame is visible. Seems like there's some magic predictive stuff here to minimize the chance of this happening. See [22] for some more info on this.

* Also note that in the event that a new frame is sent before the previous finishes scanning out (which is equivalent to having an instantaneous fps greater than the display's max possible refresh rate), you could either use the new frame for the rest of the current scanout (tearing) or wait for the current scanout to finish and immediately display the new frame (with the delay between these two governed by 1/display_max_hz). The former is an effect similar to what you would get with the vsync off, while the latter is similar to vsync in that we're waiting until the start of a new blanking interval, but instead of needing to then wait the entire blanking interval before scanning out the new frame, we modify the length of the vblank interval itself allowing us to display the new frame with as little delay as possible.
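Putting my mental model of the variable-refresh loop into code, it's roughly the following sketch (the names, busy-wait structure, and hz bounds are mine for illustration, not from any actual driver):

import time

MIN_HZ, MAX_HZ = 20, 144        # illustrative panel limits, not real specs
MIN_GAP  = 1.0 / MAX_HZ         # can't start a new scanout sooner than this
MAX_HOLD = 1.0 / MIN_HZ         # panel needs *some* refresh at least this often

def scan_out(frame):
    pass                        # hypothetical: transmit one frame to the panel

def vrr_loop(poll_new_frame, duration=5.0):
    # poll_new_frame() returns a finished frame or None. The vblank is
    # stretched until either a new frame arrives (display it immediately)
    # or the self-refresh deadline hits (frame-double the old one).
    start = last_scan = time.monotonic()
    shown = pending = None
    while time.monotonic() - start < duration:
        now = time.monotonic()
        pending = poll_new_frame() or pending   # hold newest frame if panel busy
        if pending is not None and now - last_scan >= MIN_GAP:
            shown, pending = pending, None      # end the vblank early
            scan_out(shown)
            last_scan = now
        elif shown is not None and now - last_scan >= MAX_HOLD:
            scan_out(shown)                     # forced refresh, gsync-style
            last_scan = now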


* Note that I think you could technically have freesync on a multisync CRT, within a tight range. But you're limited by phosphor persistence so you probably won't be able to go below 60hz without image quality being terrible, which makes it a bit useless, and if the upper refresh rate isn't more than 100hz then it won't be able to react as quickly to late frames.

The original AMD Freesync whitepaper [34] is also decent reading if you want to briefly review the above.

It should also be noted that many compositors implicitly do vsyncing for you, so unless you run a game in exclusive fullscreen mode you likely cannot avoid a vsync [28, 31] – in fact it adds an extra frame of latency due to the final window compositing buffer. On windows it seems to be done by dwm when you enable aero [30], and on mac it's done by quartz compositor (they call the vsync "beam sync" which I think is cute) [32]. Also if you're interested in opengl I should link to apple's developer docs [33] which are very polished and applicable cross-platform.

Finally I'll conclude with a brief analysis of how this applies to displaying videos: unlike games where frame rates are a function of the render loop, with videos we have a fixed frame rate we need to display. Let's start off by assuming we're playing a 60fps video on a 60hz display. We can assume without loss of generality that these frames can be produced as fast as we want (since we just have to demux and decode); the issue with video is frame timing: we want each frame to be displayed exactly 1/60 sec since the previous one, and need to keep it synchronized with audio. One naive solution is to drive video frame display off of the audio clock. In such a scenario we present() frames without regard as to whether we're in a vblank interval or not. If vsync is off, this could result in tearing. If vsync is on, then the display of the frame would be delayed until the next vblank, which will throw off a/v sync (possibly leading to dropped frames). At lower fps like 24fps, delaying until the next vblank is not really an issue because that's just a 1/60 sec delay whereas the next frame is longer than that (1/24 sec), but I think this does theoretically result in non-perfect 3:2 pulldown. I.e. instead of each video frame being displayed for 3:2 refreshes you might have the occasional 2:3, or 4:1. On average the a/v sync loop should still make sure this works out to 24fps with no dropped frames, and I've personally never really noticed an issue, but it's still uneven frame pacing from the theoretical ideal. As your video's fps goes up though, you have less wiggle room in terms of timing so frame drops become more likely (but on the flipside a dropped frame may not really be as noticeable at higher fps since there's less difference in content between two frames). For some reference measurements, when playing 60fps @ 60hz with v-sync on and synchronizing against the audio clock, I get a dropped frame every 5 seconds or so, which honestly doesn't seem that bad considering it has no knowledge of where exactly the vsyncs are. (Note that in the above setup we can detect when we need to drop frames by seeing how far audio is from the video position, assuming we only increment the video position after swapBuffers() finishes blocking).

Timing with audio loop: maintain a position independently for audio and video. On every audio timer clock tick (essentially whenever the audio driver says it's ready for more data): schedule audio to be played, and set the audio position based on when we expect the last scheduled sample to hit the speakers. (E.g. if we've cumulatively written 30sec of audio to the buffer so far based on number of samples and sample rate, and the buffer currently contains 20sec of audio that has yet to hit the speakers, our audio position would be 10). The next video frame needs to be scheduled at 1/fps + speaker_latency seconds since the previous frame, so we essentially sleep (in a separate video thread I guess, so we can be independent of audio queueing) until relative_time_elapsed >= 1/fps + speaker_latency, then we reset the relative_time_elapsed and present() the new frame, and increment our video position. Assuming that present() instantly shows the frame on screen this works. Any delay in the video path (e.g. vsync block) will result in 1/fps + speaker_latency - relative_time_elapsed being very negative (telling us we need to have displayed this frame in the past in order to maintain av sync), which we can detect and drop frames if it gets too bad. (Equivalently we should be able to check the difference in audio and video position, since vsync blocking would prevent video from increasing in a duration that audio would have increased).
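A sketch of that video-thread pacing logic (all names are hypothetical; present() is assumed to block under vsync, which is exactly the lateness we detect):

import time

FPS = 60.0
SPEAKER_LATENCY = 0.050     # seconds of queued audio ahead of the speaker
DROP_THRESHOLD = 0.5 / FPS  # how late a frame may run before we drop it

def video_thread(next_frame, present, running):
    # next_frame() decodes one frame; present() displays it and may block on
    # vsync -- that blocking shows up as lateness against the deadline, which
    # is how we detect that we need to drop frames to keep a/v sync.
    deadline = time.monotonic() + SPEAKER_LATENCY
    while running():
        frame = next_frame()
        lateness = time.monotonic() - deadline
        if lateness > DROP_THRESHOLD:
            deadline += 1.0 / FPS   # too far behind: drop this frame
            continue
        if lateness < 0:
            time.sleep(-lateness)   # early: wait until the frame is due
        present(frame)
        deadline += 1.0 / FPS       # next frame due one frame-period later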


Also note that if we had a render ahead queue instead of strict double-buffering then we have an additional source of latency between when we present() the frames and when they're displayed on screen. The video feeding loop (if timed solely based on audio timer) isn't aware of this latency though because the present() calls would not block until the queue gets filled up, so the av adjustment gets messed up. (Consider the case of an infinitely long render ahead queue, then present() never blocks so it thinks the frame was delivered immediately, whereas with strict double-buffering it would immediately(* on the next command) block until the backbuffer is free again (until the next vsync)). I'm not sure if video players compensate for this by forcing flushes or something.


You could also drive your video frame off of the vsyncs, so that you display a frame on each vsync and then increment video playback position by the delay to the next vsync = frame display time (1/refresh_rate). In the case of 60fps on 60hz monitor, this allows for perfect playback, and in the cases where pulldown is needed it allows for "perfect" pulldown. This should also play nicely with render queues since your timing logic is based in terms of vsyncs anyway.



[1] https://pcbartists.com/design/embedded/stm32-lvds-lcd-display-interfacing/
[2] https://prodigytechno.com/hdmi-protocol/
[3] https://15466.courses.cs.cmu.edu/lesson/timing
[4] https://github.com/kevinlekiller/linux_intel_display_overclocking
[5] https://www.quora.com/Whats-the-limiting-factor-in-increasing-display-refresh-rates-in-modern-displays
[6] https://electronics.stackexchange.com/questions/570162/why-do-lcd-screens-need-to-refresh-in-the-first-place
[7] https://superuser.com/questions/286755/does-the-refresh-rate-affect-lcd-screens
[8] https://www.anandtech.com/show/7208/understanding-panel-self-refresh
[9] https://en.wikipedia.org/wiki/Screen_tearing
[10] https://www.anandtech.com/show/2794/2
[11] https://hardforum.com/threads/how-vsync-works-and-why-people-loathe-it.928593/
[12] https://forums.tomshardware.com/threads/vsync-for-lcd.864241/
[13] https://www.virtualdub.org/blog2/entry_074.html
[14] Game Development Patterns and Best Practices: John P. Doran, Matt Casanova
[15] http://mjsstuf.x10host.com/pages/vsync/vsync.htm
[16] https://www.anandtech.com/show/8129/computex-2014-amd-demonstrates-first-freesync-monitor-prototype
[17] https://www.tomshardware.com/reviews/amd-freesync-variable-refresh-rates,4283.html
[18] https://www.tomshardware.com/reviews/g-sync-v-sync-monitor,3699.html
[19] https://pcper.com/2013/10/nvidia-g-sync-death-of-the-refresh-rate/2/
[20] https://pcper.com/2014/08/asus-rog-swift-pg278q-27-in-monitor-review-nvidia-g-sync-at-2560x1440/
[21] https://pcper.com/2015/03/amd-freesync-first-impressions-and-technical-discussion/2/
[22] https://7review.com/freesync-and-g-sync-explained/
[23] https://forums.blurbusters.com/viewtopic.php?t=4710
[24] https://forums.blurbusters.com/viewtopic.php?t=4213
[25] https://blurbusters.com/blur-busters-lagless-raster-follower-algorithm-for-emulator-developers/
[26] https://blurbusters.com/understanding-display-scanout-lag-with-high-speed-video
[26] https://forums.blurbusters.com/viewtopic.php?f=2&t=4585&p=36384#p36384
[27] https://forums.blurbusters.com/viewtopic.php?t=5672&start=10
[28] https://forums.blurbusters.com/viewtopic.php?f=22&t=3139&start=20
[29] https://github.com/klasbo/GamePerfTesting/blob/master/text/02-reflex.md
[30] https://superuser.com/questions/558007/how-does-windows-aero-prevent-screen-tearing
[31] https://forums.blurbusters.com/viewtopic.php?t=4727
[32] https://arstechnica.com/gadgets/2007/04/beam-synchronization-friend-or-foe/
[33] https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_designstrategies/opengl_designstrategies.html#//apple_ref/doc/uid/TP40001987-CH2-SW4
[34] https://www.amd.com/Documents/FreeSync-Whitepaper.pdf
>> No. 2964 [Edit]
>>2963
Sidenote, for an analysis of end-to-end latency in the render to display pipeline, see [1]. Latency on modern software is terrible, and I think a large part is just due to building upon abstractions which you don't own so the latency becomes unbounded as you add more and more of these layers.

[1] https://www.anandtech.com/Show/Index/2803?cPage=4&all=False&sort=0&page=7&slug=
>> No. 2965 [Edit]
File 16668960991.jpg - (334.52KB , 2048x1354 , cirno lambda.jpg )
2965
I wrote an answer to this problem in Haskell.
Post trips the world filter, so here's a copy:
https://pst.moe/paste/ojjdtr

edit:
line 14 could just be
| col > diff = sum $ blist !! (diff - 1)

Post edited on 27th Oct 2022, 11:57am
>> No. 2966 [Edit]
>>2965
How does this work? I can't even figure out what the python version on that page does. I know you can count the number of partitions of an integer in O(n^2) via a dp approach (although I think enumerating them will take exponential time no matter the algorithm, since there are roughly exponentially many such partitions).
>> No. 2967 [Edit]
>>2966
Oh I think the python version at least works by doing something similar to the dp counting approach, where cache[n] is an array whose entry i counts the partitions of n using values up to i. That's not very pythonic programming though, an explicit lru cache (or pre-allocated 2d array) would be better, and there's a lot of implicit index chasing which can be made cleaner without adding verbosity. I fucking hate that all these algorithm implementations seem to be code-golfed to hell. I mean why not just type "numPartitions" instead of "cache"



# n = number to partition, k = max val used in partition
numParts(n, k) = numParts(n, k - 1) + numParts(n-k, min(k, n-k))
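For reference, a direct memoized translation of that recurrence (a sketch; the naming is mine):

from functools import lru_cache

@lru_cache(maxsize=None)
def num_parts(n, k):
    # partitions of n using parts of size at most k
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    return num_parts(n, k - 1) + num_parts(n - k, min(k, n - k))

print(num_parts(9, 9))  # 30, the total number of partitions of 9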

Does your haskell solution do something similar?

Post edited on 27th Oct 2022, 2:05pm
>> No. 2968 [Edit]
File 166691040637.jpg - (784.59KB , 990x1400 , b81b08498d9c7637e25d2fb279c16470.jpg )
2968
>>2966
>>2967
It's easiest for me to explain with examples that mirror how I figured out my approach, since I don't have a good math foundation.

Two important rules:
Names start with the greatest value in them
No value in a name is followed by a greater value

What are the names of 9 that start with 5?

5 + 4
5 + 3 + 1
5 + 2 + 2
5 + 2 + 1 + 1
5 + 1 + 1 + 1 + 1

Notice that what's on the right of "5" in the above names is "all of the names of 4"
4 has 5 names. So 9 has 5 names that start with 5

What are the names of 9 that start with 3?

3 + 3 + 3
3 + 3 + 2 + 1
3 + 3 + 1 + 1 + 1
3 + 2 + 2 + 2
3 + 2 + 2 + 1 + 1
3 + 2 + 1 + 1 + 1
3 + 1 + 1 + 1 + 1 + 1

3 + 6 is not a possible name, nor are any other names that break the 2 rules.
Notice that what's on the right of "3" in the above names, is "all of the names of 6 that start with 3 and below", which equals 7.
"and below" is important.

This can be generalized like so:
What are the names of 10 that start with 4?

row = 10, column = 4, difference = (10 - 4) = 6
6 is higher than 4, so it cannot just be "all of the names of 6". It is all of the names of "6 that start with 4 and below".

We just need to access 6's information and add the correct number of values: 1 + 3 + 3 + 2 = 9

Another observation: every value's list of names has one that is just the value itself, and one that is comprised entirely of 1s.

My Haskell code follows this procedure, and uses recursion to retain the built up list.
The | symbol is kind of like a switch statement (a guard), while the . and $ symbols are for function composition and application, used to avoid parentheses.

Post edited on 27th Oct 2022, 3:47pm
>> No. 2969 [Edit]
>>2968
Thanks for the explanation, with that I can sort of squint at the haskell code and make sense of it. I think that's roughly the same as the DP counting approach, except perhaps slightly less efficient since you're doing numParts(n, k) = num partitions starting with k (instead of having at most k), which means your inner loop-nest has an explicit sum (to do the "and below") whereas the traditional approach keeps track of the prefix sums as you go, so the inner loop is O(1). Also it should be noted the code they give for python can be optimized to linear space usage since the recurrence only needs to look back 2 steps.

Though I personally feel that functional programming is not best suited when need to do explicit array indexing and index manipulation, compared to something like matlab (or even traditional language like C).

Edit: Sorry I was mistaken, you can't optimize to O(n) space because you need access to all of parts(n, 0) ... parts(n, n)

Post edited on 27th Oct 2022, 5:27pm
>> No. 2970 [Edit]
>>2969
Also I'll add that scheme isn't really best thought of as a practical functional language, in that while it's technically possible to use it as such, the syntax doesn't lend itself well to pattern matching (I guess it's possible to implement it yourself with macros, but it's not built in), you don't have sum types (or really any "types" at all), and a lot of syntactic sugar like list comprehensions aren't built in. Given that, you might as well use python. I see scheme in the same class as forth in that it's a theoretical model of simplicity and the minimal you need to bootstrap a powerful language, maybe also an exploration of lambda calculus.

Practically the real power of functional languages comes in when you have sum types and ability to pattern match on them, and that's what differentiates "functional languages" from "languages with support for first-class functions, and map/reduce/pipes" (the latter of which is pretty much all languages these days).
>> No. 2971 [Edit]
ok got sniped into solving this myself, here's the most elegant solution I can come up with without golfing too hard.

def partitionCountsUpTo(n):
    # numParts[i][j] = ways to partition i with max value j
    numParts = [[1 if i == 0 else 0 for j in range(n+1)] for i in range(n+1)]
    for i in range(1, n+1):
        for j in range(1, n+1):
            numParts[i][j] = numParts[i][j-1] + numParts[i-j][j]
    return numParts
# undo cumsum to get # partitions starting with a given value
rowsUpTo = lambda n: [[t - s for s, t in zip(row[:], row[1:idx+1])] for idx, row in enumerate(partitionCountsUpTo(n))][1:]
print(rowsUpTo(10))

>> No. 2972 [Edit]
>>2971
I don't know python well enough to understand line 9, or the math behind the first part. I'm guessing there's some technical explanation for your method.

Post edited on 27th Oct 2022, 6:09pm
>> No. 2973 [Edit]
>>2972
it's pretty much the same as your method (I think) except computing the prefix sums along the way, and using an explicit 2d array. I guess it's technically "bottom-up DP" as the cool cs kids say but that's a $10 word for a ten-cent idea (it's not _true_ dynamic programming unless you're doing optimal control theory, is the cheeky retort). Line 9 is a bit code-golfed, the longer-form would be roughly

def rowsUpTo(n):
    for idx, row in enumerate(partitionCountsUpTo(n)):
        # convert (1,3,10,13) into (1,2,7,3)
        yield [t - s for s, t in zip(row, row[1:idx+1])]


where
enumerate(["a", "b", "c"]) = [(0, "a"), (1, "b"), (2, "c")]
,
zip([1,2],["a","b"]) = [(1,"a"),(2,"b")]
and the colon is list slicing.
>> No. 2974 [Edit]
>>2973
>except computing the prefix sums along the way
Do you mean computing them as soon as you have the required information, instead of waiting?
>> No. 2975 [Edit]
>>2974
Yes. If we compute and store the prefix sums instead of the raw values, then the innermost-loop nest is O(1) lookup, compared to the approach where we store the raw values in which case the innermost-loop nest is O(n) since we have to sum them on-the-fly.
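To make it concrete with the numbers from earlier in the thread (partitions of 6):

# counts of partitions of 6 whose greatest value is 1, 2, 3, 4, 5, 6:
raw    = [1, 3, 3, 2, 1, 1]
# running totals of the same row (partitions with greatest value <= k):
prefix = [1, 4, 7, 9, 10, 11]

# "names of 6 that start with 3 and below", both ways:
print(sum(raw[:3]))   # summing on the fly in the inner loop: O(k) -> 7
print(prefix[2])      # stored prefix sums:                   O(1) -> 7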
>> No. 2995 [Edit]
File 166780311711.png - (22.29KB , 119x119 , Screenshot_20210827_052009.png )
2995
The Advent of Code will begin in less than a month. Is there any interest in a T-C leaderboard? Surely we have enough people.
>> No. 2997 [Edit]
>>2995
>Surely we have enough people.
Do we? I'd ballpark the number of active people on /navi/ as about 5.
>> No. 2999 [Edit]
>>2997
Where are the other three?
>> No. 3000 [Edit]
>>2995
I feel like giving it a try.
>> No. 3003 [Edit]
>>2997
I think five would be enough.

>>3000
Cool!
>> No. 3004 [Edit]
>>2995
I don't really have any knowledge of programming, but I can try and learn a bit with this.
>> No. 3014 [Edit]
>>3004
That makes three or four, including myself. Looking good!
>> No. 3027 [Edit]
File 166844245585.png - (1.58MB , 1600x1200 , uupev_019b.png )
3027
I host a website using nginx running in WSL. On it, I have a mirror of an old website made to promote a visual novel (Swan Song). Thing is, it's encoded in SHIFT-JIS. I have other pages (and files) I want encoded in UTF-8. So until now, I've set only specific locations in the server to use UTF-8 (I don't think nginx supports setting a location to SHIFT-JIS), but I decided I'll try converting those old html files to UTF-8.

Going into it, I had no idea what to do, never even written a shell script, but after some research, I ended up with this bash script.
for file in $(find . -type f -name "*.html"); do
    sed -i 's/; charset=Shift_JIS/" charset="UTF-8/g' "$file"
    iconv -f SHIFT-JIS -t UTF-8 "$file" > "$file.new" && mv -f "$file.new" "$file"
done


Iconv is a pretty handy utility which I wish I knew about earlier. It's odd that Windows doesn't have something similar out of the box (it's included in gnuwin32), since I imagine deprecated, region-specific encodings have caused a lot of people issues over the years. Maybe something verbose could be done in Powershell with Get-Content and Set-Content. Powershell also has a module that includes sed [1]. Bash seemed more straightforward though, so I went with that.
[1] https://www.powershellgallery.com/packages/PoshFunctions/2.2.9
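For what it's worth, stock Python can also do the conversion where iconv isn't available; a rough equivalent of the script above (untested sketch, same caveats about encodings apply):

# Rough Python equivalent of the sed+iconv loop above.
from pathlib import Path

for path in Path(".").rglob("*.html"):
    text = path.read_text(encoding="shift_jis")
    text = text.replace('; charset=Shift_JIS', '" charset="UTF-8')
    path.write_text(text, encoding="utf-8")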
>> No. 3031 [Edit]
Still need at least one or two people for a T-C Advent of Code leaderboard. You won't be judged for not using Haskell.
>> No. 3032 [Edit]
Ruby, PHP, Typescript/Coffeescript
Midwit redditors, many "bootcamp" graduates and college dropouts. Often have incredibly important political opinions that they need to bring into everything in order to enlighten us plebs who are not smart enough to parrot HuffPo and MSNBC verbatim.

C#, Java
Soulless corporate employees. Uneducated beyond what they need for their job, uninteresting personalities, don't care about programming in any other context than delivering on work given to them by their boss. Mediocre coders who lack the passion to improve themselves. Smell like curry.

Red, Lisp
Turbo autists who get a dopamine rush out of coming up with clever ways of solving problems nobody knew even existed. Love using reader macros to change the syntax of their language and create new DSL's, which makes it extremely hard to collaborate on large code bases or use other people's libraries, but they don't care because they're too introverted to deal with other people, and prefer to write things from scratch rather than using a piece of somebody else's code that might not be 100% exactly as they would like it. Probably the most passionate programmers around, along with assembly language enjoyers and Nim/Zig fans, with whom they have significant overlap.

C
Stereotypical oldschool programmers: Intelligent, fairly educated, fairly introverted. Also, lots of boomers. Tend to be either apolitical or right-leaning, at least in comparison to others in the industry.

Python
Probably the most diverse community (not counting Javascript and shell scripting). Lots of programming beginners, both young and old. Data scientists and AI researchers. Startup devs. Open source hobbyists. People with years/decades of experience in Java or C++ who enjoy using what is basically a less verbose and more dynamic version of the language they're familiar with.

Clojure, ClojureScript
Pragmatists who either started out as corporate Java devs or web developers but cared enough about writing actually good code to make the deliberate choice of seeking out a language they feel is better; or as Lisp enjoyers who want to actually be employable and use Java/JS libraries to get real work done. Or simply people who are too lazy/tired to deal with more complex languages.
Almost without exception very experienced programmers, since you need to know at least the basics of the JVM or JS ecosystems to make effective use of these languages. Lots of people with a background in finance/fintech.

Haskell
Academic wankers who congratulate each other for using the most obscure syntax features possible, even if it breaks their program after a year because that British nigga decided to change it around again. Tutorials and other documentation written by people who manage to make simple straightforward concepts like monads sound like something so complicated that it requires a degree in math to even remotely grasp. (I actually like a lot about the language itself, but the community is horrible)
>> No. 3033 [Edit]
>>3032
This reads like it could have been from /g/.
>> No. 3034 [Edit]
>>3033
Hah, seems you're right
https://desuarchive.org/g/thread/89825211/#q89828461

Also it's missing Go and Rust, but I can reasonably infer how that would have gone...

Post edited on 18th Nov 2022, 10:56am
>> No. 3038 [Edit]
>>3031
With me that makes us 6. I created a private leaderboard but apparently I can't name it. Do I post it here, or does someone want to create another and name it accordingly?
>> No. 3039 [Edit]
To the anon who seemingly posted here and tripped the spamfilter: I removed your ban.
>> No. 3040 [Edit]
>>3038
Thanks for participating!
I didn't know you could name leaderboards. But I was going to make a new thread, assuming we had gotten enough people, on the 30th, and have the information to join the leaderboard there.
>> No. 3042 [Edit]
File 166891525337.png - (131.45KB , 894x1041 , intcode_interpreter.png )
3042
>>3031
>You won't be judged for not using Haskell
Using Haskell isn't hard, it just has a lot of stupid syntax that you have to remember.
Last time I did AoC I wrote most solutions in ARM assembly using no library functions other than syscalls, that was pretty fun.

If you want a good score though, best thing would be to use something like Python or Ruby and get familiar with the libraries for common algorithms. From what I saw, the solutions for the first 7 puzzles or so of most of the top scorers basically looked like:
import "solution" solution.algo("input_data.txt")


I myself would probably use Clojure now, it's kind of my new favorite language. But I probably won't have time to participate this year around, unfortunately.
>> No. 3062 [Edit]
File 167026805814.jpg - (293.17KB , 850x1202 , large.jpg )
3062
I wrote an IRC bot that posts a youtube video's title and description. IRC has kind of messy documentation, and I had to rely on a lot of trial and error. It works though.
https://gitgud.io/nvtelen/chii

Post edited on 6th Dec 2022, 6:10am
>> No. 3068 [Edit]
File 167035604848.jpg - (606.63KB , 2252x1780 , 3433a45becd7a46dc337b00c3b7ed38d.jpg )
3068
>>3062
It can report the weather now too.
>> No. 3094 [Edit]
File 167241382385.png - (1.42MB , 1260x1422 , lisp_santa.png )
3094
I started using Lisp a month ago; now I sigh when I have to read code written in anything else.
>> No. 3095 [Edit]
>>3094
I'm confident the program that drawing was made in, was not written in lisp. I've dabbled in scheme, and my conclusion was that it's good for feeling clever, but not much else.
>> No. 3096 [Edit]
File 167243641352.jpg - (106.70KB , 572x880 , 窓付き・銃.jpg )
3096
>>3095
Scheme is very bare-bones, its usefulness lies in being so simple and easy to implement as a scripting interface for some larger software project, as you see in e.g. Gimp or Guix.

If you want to write serious software in Lisp, you want to go with either Clojure or CL.
>> No. 3097 [Edit]
File 167295083321.png - (304.77KB , 1920x1080 , sicp.png )
3097
I just started going through the SICP lectures from 1986 and implementing the examples and exercises in Clojure. Might switch to using Guile later on.
Let's see how far I get before I lose interest.
>> No. 3101 [Edit]
I finally had a use-case that warranted playing around with Go: I needed to do some tcp stream manipulation, and even after managing to install mitmproxy and the like it just felt so bloated and memory-leaky (ironic given it's written in python, but it's just not well coded). Go's standard library is really amazing in terms of networking; it has high level wrappers for all the common networking things you'd want to do (serve a file, serve a directory, reverse proxy, etc.). I'm not a complete fan of the language itself, and for most general-purpose scripts I still tend to Python, but for low-level networking it really shines. The only downside is that much like javascript, Go libraries seem to embrace "always be on the latest version", and support for anything but the latest release is poor (compare to e.g. cpp where you can confidently use c++11 without an issue. Python used to be decent, the release cadence was slow enough that you could stick to python 3.5 and be comfortable, but now they're releasing so fast that library requirements are starting to creep up).
>> No. 3102 [Edit]
>>3101
I use Go for pretty much anything I can(not much since I'm a novice).
>for most general-purpose scripts I still tend to Python
Python's dynamic typing, significant white space, and being interpreted keep me away from it. I know in certain fields it's very popular, but I wouldn't use it unless I have to.
>> No. 3103 [Edit]
File 167332424825.png - (47.95KB , 874x310 , fizz.png )
3103
>>3101
I always have Python installed on all my devices just so I can do
python3 -m http.server

It's the easiest and most reliable way to share a few files that I know
>> No. 3116 [Edit]
File 167444385353.jpg - (29.97KB , 512x512 , 1d96a7e7b574a6eab5d9badebe28aea2.jpg )
3116
>>2844
Update, I attempted to learn graphics programming with this book
https://paroj.github.io/gltut/
After slogging through the first chapter, I've decided to drop it. It's basically a sequential list of the 50 steps involved with invoking a triangle in OpenGL. Something about buffer arrays and clip space. None of it is sticking. I learn best by writing code, so I think I'll go for this guide instead, which involves writing a renderer from scratch.
https://gabrielgambetta.com/computer-graphics-from-scratch/
>> No. 3117 [Edit]
>>3116
Learning OpenGL is different from learning computer graphics. You could write a ray tracer and renderer in a pure TUI if you wanted, without ever touching OpenGL. But unless you're interested in the details of Bresenham's algorithm or computational geometry, that's not really good for much real world stuff. It could still be fun to learn the basics (you'll effectively be implementing software rendering), but at the end of the day if you want to move beyond blitting triangles to the screen and do something meaningful you'll have to get comfortable with gpu-based rendering eventually. (That's not to say that OpenGL is a clean api or anything, and it seems Vulkan is too low level to be used directly for practical purposes, so I don't know what modern thing fills the gap. Maybe Apple's Metal if you can afford proprietary locking. At least on the pure-compute side CUDA is pretty clean, for the things I've dabbled in).
>> No. 3118 [Edit]
>>3117
That guide I tried specifically said its purpose is to teach graphical programming, not opengl.
>Metal if you can afford proprietary locking
I have no desire to. Apple's walled garden doesn't interest me.
>CUDA
Also proprietary and seems to be vendor-locked.

Post edited on 22nd Jan 2023, 9:48pm
>> No. 3119 [Edit]
>>3118
>Also proprietary and seems to be vendor-locked
Your loss, it may be proprietary but at least it's cohesively designed.
>> No. 3120 [Edit]
>>3119
I think it's a loss for everyone if to use software, you need hardware made by a specific vendor.
>> No. 3121 [Edit]
File 167461114212.png - (143.53KB , 1000x1000 , test.png )
3121
>>3116
Update, I've made some progress in that book. What I've done consists of reading mathematical explanations and translating pseudo-code into actual code. I couldn't repeat to you how exactly specular reflection of a directional light works mathematically, but I understood the explanation while reading it.
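For anyone following along, the specular term from that chapter boils down to just a few lines. Here's a sketch of the usual Phong formulation, in Python for illustration (the helper names are mine, not the book's or my actual code):

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def length(v):
    return math.sqrt(dot(v, v))

def reflect(L, N):
    # reflect the light direction L about the surface normal N:
    # R = 2*N*(N.L) - L
    d = dot(N, L)
    return [2 * ni * d - li for ni, li in zip(N, L)]

def specular(N, L, V, shininess, light_intensity):
    # Phong specular term: brightest when the reflected light direction R
    # lines up with the view direction V, falling off as cos(angle)^shininess.
    R = reflect(L, N)
    r_dot_v = dot(R, V)
    if r_dot_v <= 0:
        return 0.0  # reflection points away from the camera
    return light_intensity * (r_dot_v / (length(R) * length(V))) ** shininess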

Post edited on 24th Jan 2023, 5:51pm
>> No. 3122 [Edit]
>>3121
What are you rendering via? Just directly blitting to a bitmap file?
>> No. 3123 [Edit]
>>3122
>bitmap
I wrote everything in golang. Its image package includes this structure, which has an array that represents pixels https://pkg.go.dev/image#NRGBA and an interface to change one of said pixels. This struct can then be encoded into a png or jpg file.

I'm not sure what you mean by blitting or whether it can apply to this situation. What I made so far is technically a ray tracer, not a rasterizer.

Post edited on 24th Jan 2023, 6:11pm
>> No. 3124 [Edit]
File 167467762362.png - (186.62KB , 1000x1000 , test.png )
3124
>>3121
Now with reflections and shadows.
>> No. 3125 [Edit]
File 167469026327.png - (601.66KB , 1000x1000 , test.png )
3125
>>3124
Arbitrary camera position and rotation (still requires hard-coding values).
>> No. 3126 [Edit]
File 16747740489.png - (38.21KB , 1000x1000 , triangle cube.png )
3126
A "cube" made out of triangles, using the ray tracer. Had to figure out the math for this myself, so that was tricky.
>> No. 3128 [Edit]
File 167496547757.png - (601.66KB , 400x400 , test.png )
3128
The last thing I'll be doing with the raytracer for now. Parses a simple obj file and makes triangles accordingly. It's very slow, hence the small image dimensions.

edit: with randomly colored triangles to make it look a lot more distinct.
>> No. 3151 [Edit]
File 167750613615.jpg - (299.49KB , 1080x1339 , lisp_indentation.jpg )
3151
Any of you who write Lisp at all should look into Parinfer, it does most of the work for you in terms of writing and balancing parens, so writing Lisp becomes similar to writing Python:
https://shaunlebron.github.io/parinfer/

Also, if you've ever installed Emacs and didn't like it, I encourage you to give either Doom Emacs or Spacemacs a try. I installed Doom Emacs a week ago and now I wish I had done so much sooner.
>> No. 3154 [Edit]
Learning commodore BASIC as of now. I don't know much about computahs, but it is really neat seeing what you can do the more you understand the language. Seeing how a computer thinks and works is fascinating the deeper you go.

I'm using an original VIC-20 to learn on, but if I ever decide to try out some techniques on a C64 or Amiga, I'll stick to an emulator. My VIC has some weird bugs that can sometimes get in the way, but I don't think I would've originally been interested in learning any of this stuff if I wasn't handling an actual vintage computer.

Now I want to build my own PC and learn python.
>> No. 3155 [Edit]
To the anon here who became yet another victim of the filter: your ban has been lifted.
>> No. 3175 [Edit]
>>3154
Programming Commodore machines is fun because you have such raw, unfiltered access to every component of the machine, although I have to say I actually preferred writing Assembly, because Basic has a lot of weird limitations
>> No. 3186 [Edit]
>>3175
6502 Assembly in particular is fun and easy to learn. Then you can waste time writing stupid shit like multipliers :P
>> No. 3206 [Edit]
Does web dev count as programming?
>> No. 3207 [Edit]
>>3206
Yes. Especially anything back-end.
>> No. 3208 [Edit]
>>3207
I got into programming by making userscripts for a bunch of websites/imageboards I visit. Although I only have experience with javascript I recently signed up for an actual programming course done in my area that teaches Java/.NET. I never touched any of those languages but I hope I can pass their test and maybe get some experience with employable languages. Especially Java since it appears to be used everywhere
>> No. 3209 [Edit]
>>3208
I'm a uni student. Java was in fact used in my introductory class. It'll serve you decently. Keep applying your knowledge to personal projects, and if you don't already, start using git. You don't need to be an expert on git, but you should know how to make a repo and commit to it.
>> No. 3210 [Edit]
>>3206
Sure, javascript is definitely a complete programming language. I feel front-end is more about piling on random JS frameworks and introducing complexity for no reason, but if you stick close to vanilla javascript then much of what you learn should be applicable elsewhere.

I guess you could use js for backend or non-web things as well, but I don't really know why you'd want to when there are better options available.
>> No. 3211 [Edit]
>>3209
I do know basic git. In fact I maintain a personal repo on gitlab with my dotfiles+userscripts and a public repo on github which I hope it might one day land me a proper job interview. WFH would be ideal but from what I read the industry is slowly reverting to office work
>>3210
I actually didn't use a single framework ever. I much prefer learning what I need with vanilla javascript. Also while working on some personal projects I came to the realization that I prefer backend over frontend
>> No. 3212 [Edit]
>>3211
>with my dotfiles+userscripts and a public repo on github which I hope it might one day land me a proper job interview
It won't really, I've never seen an interviewer even care enough to check out the projects the interviewee has worked on.
>WFH would be ideal but from what I read the industry is slowly reverting to office work
Depends on size of company. Smaller companies still have it, but most of the ones with names you've heard of are in the processes of getting rid of it.
>> No. 3213 [Edit]
>>3212
>most of the ones with names you've heard of are in the processes of getting rid of it.
Why is that?
>> No. 3214 [Edit]
>>3213
A combination of "big data" suggesting workers are "less productive" when working at home, and middle managers having a complex where they need to physically watch over you, and needing to say they have meetings with you to appease their own bosses.
>> No. 3215 [Edit]
>>3214
If they had any data, they would share it to justify their actions; the fact that they haven't makes me suspect no such data exists. Most engineers can do fine with remote work (they'd better be able to, considering that even in the pre-2019 days 95% of interactions were over chat/email anyway). Maybe the one group of engineers impacted is interns, since having someone physically present to guide them through non-trivial things might be beneficial, but they can adapt easily.

The main reasons, as I see them, are as follows (in no particular order):

1) It's a "power move". Notice that none of the RTO restrictions ever seem to apply to executives. They just don’t like the idea of the plebs getting to enjoy a leisurely life without wageslaving hard.

2) Down from the exec level, most middle-managers need to be in the office to make it look like they get things done. If everyone is remote, then they don't really have anything to do.

3) They've invested a lot in real estate, and don't want those investments written off as a loss. There's probably also pressure from local governments to force RTO to stimulate local spending.

4) It's a way to do layoffs and cost cutting without explicitly doing either. Also consider that amenities were slashed during the initial move to wfh, and they have not been (and probably won't be) restored.

5) It's posturing and a way to show investors they're doing something.
>> No. 3216 [Edit]
I'm working full time in a field I don't really like.
I've just enrolled in a remote CS bachelor's; it's going to take me 3 years to get the diploma. So I'll continue working full time + studying.

Tbh, I don't know if combining the two is possible. CS people, do you have any tips or good books to optimize the learning in the few hours of studying I'll have per week?
>> No. 3217 [Edit]
>>3216
Working full time plus studying is technically possible if you are smart and already know most of the material, but if you don't know the material and the curriculum is fairly rigorous, then it's probably not a good idea. If you only have a few hours per week not working, how will you do the assigned problem-sets? Or is a remote CS bachelor's different from normal university courses in that they don't give you problem-sets/projects to do?
>> No. 3218 [Edit]
>>3217
I don't really know what courses the bachelor program you enrolled in has or what the studies look like, so take everything I say with a grain of salt.
But if it has a significant amount of stuff like software development projects, I would say pick it up as a hobby if you haven't yet.
Just start writing software for yourself and figure stuff out as you go.
That will allow you to grasp the concepts behind it without the pressure of having to deliver no matter what, and you can then apply these concepts in pretty much any practical assignment. The same goes for other technical things like networking etc.
It might sound cliche but try to find some fun in it, as that will turn what is essentially a second job into possibly even something akin to recreation.
As for the more theory-heavy stuff, I can't really help unfortunately; I just had to grind through it.

I apologize if what I have written is just a load of trivial bullshit, but the one thing I took away from studying is that the best way to learn technology is by playing around with it, not by studying it in the classical academic way. I never really read a CS book for any reason other than curiosity.
In any case, I wish you success.
>> No. 3219 [Edit]
>>3218
I guess you probably meant to reply to GP, not me.

>the best way to learn technology is by playing around with it, not by studying it in the classical academic way
Depends on the field. Software _engineering_ is probably best learned by playing around with it, but actual CS theory is best learned in the classical academic way. The wide availability of resources on the internet means that studying the "classical" way isn't limited to textbooks though; you can watch lecture videos and discover better/more-targeted resources.
>> No. 3220 [Edit]
>>3217
>>3218
I assumed that, with the curriculum spread over 3 years, I won't have to deal with the heavy material from the start, and the difficulty will build up gradually.
I hope my assumption is right, otherwise the whole undertaking is doomed.

I can dedicate 1 hour per weekday and then the full weekend to studying. Maybe I'll begin right now with some MOOCs to cover the basics.

The curriculum I'm entering has the basic stuff: algorithms, procedural and OO programming, networking. In the third year, there are some courses on AI, Big Data, and how languages are made.

I'm checking the Helsinki university basic and advanced programming MOOCs, so as not to be a complete noob.
And yeah, there will be some projects and problems for exams.
>> No. 3221 [Edit]
File 168681409045.jpg - (74.18KB , 696x604 , Imouto_at_computer.jpg )
3221
>>3220
>helsinki university basic
Are you a Euro? I'm asking since, as far as I am aware, Euro universities tend to be almost wholly decided on the final exam, with less 'busy work' than you would get at an American or Canadian university, like weekly "homework" assignments. You might have a bit more free time as a Euro, but there's a bit more stress around finals.

I'm in the US and at uni right now, albeit for engineering, but I've had a few programming classes. These were all in-person classes, and from my experience you will typically have 1 assignment a week, though in the upper-level classes you may be given more time for harder assignments.

From my experience, a course will generally deal with 1 computer language, sometimes two if they are somewhat similar like C++ and C#, but it is typically going to be a C class, a python class, or a java class. I did not take any upper-level CS courses, but I imagine by your 3rd semester you will start to encounter most of the difficult material that begins to eat up time. As other anons said, I can't speak for the program you are enrolled in, but typically in the first 2-3 semesters the difficult classes are going to be calculus and physics.
Typically you will start out with an intro to a language like C or java or python (where, with C for example, you will probably get to structures by the very end), or you will start right away with basic algorithms and data structures and then go on to more advanced ones.
Software engineering is typically a different degree from CS in my experience; CS is more focused on algorithms and data structures, with the upper-level courses getting into how an OS works, how languages work, how computer networks function, and stuff like that, so more scientific material.
I think a good starting place if you wanna get ahead is to try and see what languages your school is going to be using for these beginning classes and get familiar with coding in them.

Here are some resources I have bookmarked. These are all from Brown, and I think there may be some more that I haven't seen.
https://cs.brown.edu/courses/cs053/current/index.htm

https://papl.cs.brown.edu/2020/

https://cs.brown.edu/courses/cs173/2012/OnLine/
>> No. 3222 [Edit]
>>3221
>with less 'busy work' than you would get at an American or Canadian university, like weekly "homework"
Good American universities don't provide "busywork" as "homework" (and it's almost always called psets, not homework). In fact, probably the most memorable parts of my courses were the problem sets; if you have a good professor, they will usually be challenging enough that you do most of your actual learning while solving them.

Similarly, at least at good US universities, there's no course dedicated to teaching you a given language. It's simply expected that e.g. you will pick up Go in time for your first project related to compsec, or pick up C++ for your course in computer graphics. The one exception may be the architecture course, where they probably will hand-hold you through learning MIPS or, I guess, RISC-V these days.

I only say this to point out that the variation between colleges/universities in the US, let alone between the US and elsewhere, is too broad to give any specific advice tailored to your situation.

> the first 2-3 semesters the difficult classes are going to be calculus and physics.
Usually the hardest courses for most people doing CS are actually the discrete math courses.

>Software engineering is typically a different degree from CS in my experience
This is true; it varies between universities whether the degree is in CS, EE/CS, or software engineering.

>I think a good starting place if you wanna get ahead is to try and see what languages your school is going to be using for these beginning classes and get familiar with coding in them.
This is good advice, since it's an open secret that most people come into CS courses already knowing the language, and if you don't, you are at a disadvantage. E.g. if a course on algorithms is taught using Java, you should be able to spend your time on the algorithms portion, not the "learning java" portion, or you will not have a good time.
>> No. 3223 [Edit]
>>3222
It seems that these days even high schools are offering 'coding courses', so I suppose a familiarity with it is even more expected than before. But having looked at the curricula of other larger schools in the state, each school's curriculum varies quite a bit, with some being much more rigorous than others. Learning Java, Python, or C/C++ would be the best place to start, since most schools seem to use one of them.
>problem sets
Some classes have those, but they aren't typically assigned for a grade; I agree it is helpful to do them, though. I was just referring to my experience, where we had assignments that applied the course content, which was nice, but there were also some online Pearson 'problems' which were more of a chore than anything else, especially since they were very picky about answers. I think they got rid of them though.
>too broad to give specific advice
That is true.
>discrete math courses
I suppose this is another difference between universities. Where I am, you are required to complete the first 2 calculus courses, covering up to Taylor series, before you can take linear algebra or similar discrete classes.
>> No. 3224 [Edit]
>>3221
Thanks I'll look into those links.
Yes, i'm euro.

I think it's a CS curriculum since it's dealing with algorithm, networks, OS, languages, database, etc.
However there is little to no maths. The curriculum chief stated that the heavy maths is not really needed and students forget about it anyway when they graduate.

How true is that ? idk. You think math is necessary ? if so, i'll have to work on my free time one some uni maths textbook.
>> No. 3225 [Edit]
File 168684267351.jpg - (7.20MB , 4000x2667 , c0f5a97437e5e6fd1b7531785bfadf22.jpg )
3225
>>3221
At my uni, the first two core classes, Intro and Data Structures, are taught in Java; the next, Computer Architecture, is taught in C with a little bit of x86 at the end. If you want to go in depth with assembly, I think that would be Computer Architecture II, which is an elective. The last core class, Algorithms, has no programming and is entirely theoretical. I thankfully had a pretty lenient professor for that one.

Everything else is an elective, which you have to take 6 of. Networking, database management, computer graphics, machine learning, etc. You can pick whatever. Software engineering is not a distinct major, but a bunch of electives you take in succession.

Post edited on 15th Jun 2023, 8:25am
>> No. 3226 [Edit]
>>3224
Not him, but analyzing algorithms requires some discrete math, basic calculus, and familiarity with proofs. One probably won't be implementing them for work, however, and one can understand which, say, sorting algorithm is more appropriate for a situation on qualitative grounds alone.
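For a taste of the math involved: mergesort's running time satisfies the recurrence T(n) = 2T(n/2) + cn, and since the recursion goes about log2(n) levels deep with cn total work per level, T(n) = O(n log n). That little derivation is the discrete-math toolkit in miniature.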
>> No. 3253 [Edit]
I like working in Scala, but I might be too low IQ to truly appreciate it.
>> No. 3254 [Edit]
Cute GPU-san.

https://armkeil.blob.core.windows.net/developer/Files/pdf/graphics-and-multimedia/how-does-a-mobile-gpu-work.pdf

https://armkeil.blob.core.windows.net/developer/Files/pdf/graphics-and-multimedia/render-pass.pdf

https://armkeil.blob.core.windows.net/developer/Files/pdf/Arm_Mobile_Studio_En.pdf
>> No. 3255 [Edit]
I find the reactions to https://github.com/IBM/fp-go (an expansive library that promotes a functional-style of programming in Go) to be pretty amusing. Even if one finds any deviation from The One True Way in Go to be distasteful, you have to admit it's pretty neat to see someone pushing a language like Go to this extent; instead it's treated like blasphemy by the gophers. I'll have to remember this next time I work with Go.
>> No. 3256 [Edit]
>>3255
To me the main value of Go is in the standard library. No other mainstream language comes with its own standalone TLS implementation. In terms of everything else, I'd rather be writing C though.
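(Concretely, that's crypto/tls: a TLS stack written from scratch in Go itself rather than a binding to OpenSSL, and net/http uses it out of the box.)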
>> No. 3257 [Edit]
>>3256
>I'd rather be writing C though
I can't stand C solely because of how much of a pain in the ass string manipulation is.
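Case in point, just joining two strings safely takes a little helper like this (a minimal sketch of the usual boilerplate):

/* Concatenate two C strings into a freshly malloc'd buffer.
   The caller owns (and must free) the result. */
#include <stdlib.h>
#include <string.h>

char *concat(const char *a, const char *b) {
    size_t la = strlen(a);
    size_t lb = strlen(b);
    char *out = malloc(la + lb + 1);
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);   /* +1 copies b's terminating NUL */
    return out;
}

And that's before you get into growable buffers or unicode.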
>> No. 3258 [Edit]
File
Removed
I'm learning Zig through Ziglings and I highly recommend it.

https://github.com/ratfactor/ziglings
>> No. 3268 [Edit]
>>3258
That's really neat. Here's something similar, but with ClojureScript instead of Zig:
http://clojurescriptkoans.com
>> No. 3274 [Edit]
It's a shame Purescript doesn't have a well-maintained and usable native backend, as I find it nicer to program in than Haskell. There have been a few attempts, but nothing's stuck.
>> No. 3288 [Edit]
File 170019336829.png - (160.62KB , 996x836 , fizzbuzz_clojure.png )
3288
Show me your strongest fizzbuzz
>> No. 3289 [Edit]
>>3288
The only thing I can imagine that would be of similar "quality," would be FizzBuzz: Enterprise Edition.
>> No. 3293 [Edit]
File 170136970580.jpg - (391.96KB , 1875x2344 , cd0ce15af2e6d7e8154665698330edec.jpg )
3293
I came up with a pretty clever way of implementing the toggleable thumbnail that's on tc and most other imageboards, without using javascript.

The "common" way of doing that is using a checkbox input and the CSS "content" property. The problem with this method is that pseudo-elements are not added to the DOM, meaning you can't right click the image and save it from the context menu.

Using an iframe for this, like:
<body>
  <a href="large.jpg"><img src="thumbnail.jpg"/></a>
</body>

doesn't work, because iframes can't adjust their size to their content, and HTML documents can't even define their own size. BUT, SVG can. SVG also supports some HTML-like features, such as images and links. When an HTML object element has SVG as its data, the object element will actually take on its size (side note: the img element doesn't support "nested images" within SVG). So all together you can have this:
<body>
  <object data="thumbnail.svg"></object>
</body>

thumbnail.svg:
<svg xmlns="http://www.w3.org/2000/svg" width="160" height="120">
  <a href="large.svg">
    <image href="thumbnail.jpg"/>
  </a>
</svg>

large.svg:
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">
  <a href="thumbnail.svg">
    <image href="large.jpg"/>
  </a>
</svg>

On paper this works. Unfortunately, it doesn't in Firefox: the object's size does not properly adjust after the link is clicked inside the thumbnail SVG. It does work in Chrome, but images included inside SVG can't be accessed from the context menu there, so it has no advantage over the CSS "content" method. I filed a bug though, so maybe this will be fixed.
https://bugzilla.mozilla.org/show_bug.cgi?id=1867409
>> No. 3314 [Edit]
File 170471450912.jpg - (117.83KB , 1024x1024 , sdg1697994032744385.jpg )
3314
I just found this really nice interactive tutorial on how to use Datalog, which is an alternative to SQL (although it's actually older than SQL):
https://www.learndatalogtoday.org/
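To give a flavor of the syntax (the tutorial uses the Datomic-style dialect; this particular query is just illustrative): something like [:find ?title :where [?m :movie/title ?title]] returns every value of the :movie/title attribute, pattern-matching over facts instead of joining tables.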
>> No. 3344 [Edit]
I'm still using CoffeeScript in 2024 AD, and I'm loving it.
>> No. 3346 [Edit]
Coq was finally renamed (to Rocq). There's not much to be said, since it was an inevitability, and anything else would be better suited for /tat/.
>> No. 3354 [Edit]
File 170881422985.png - (101.81KB , 400x400 , comiket103.png )
3354
I've looked into >>/ot/40374 again, and realized a few things. First of all, there is a standardized way of including metadata in pretty much every image format, including PNG: XMP, an XML-based sort of "container" for metadata. You can place EXIF tags in XMP, or Dublin Core tags, which seem more appropriate.

One way to add this information to a file is by writing some XMP and using Exiftool to add it. I did this to the attached image.
<?xpacket begin=''>
<x:xmpmeta xmlns:x='adobe:ns:meta/' x:xmptk='Image::ExifTool 12.77'>
 <rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'>
  <rdf:Description rdf:about='' xmlns:dc='http://purl.org/dc/elements/1.1/'>
   <dc:subject>
    <rdf:Bag>
     <rdf:li>3girls</rdf:li>
     <rdf:li>^_^</rdf:li>
     <rdf:li>cat</rdf:li>
     <rdf:li>comiket_103</rdf:li>
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end='w'?>

In the command line:
exiftool -tagsfromfile .\tags.xmp -all:all .\comiket103.png -overwrite_original

You can download this image from tc, and that data should still be there. So how does this make things easier? How about a metadata injector: a tool which, using filenames and reverse image search, can get image tags from a booru and add them to files you already have via a similar, automated process. The other missing piece is giving Windows Explorer the ability to read and write this XMP information. Filemeta exists, but that doesn't seem to utilize XMP.

Post edited on 24th Feb 2024, 2:38pm
>> No. 3366 [Edit]
File 170961808627.jpg - (2.05MB , 1200x1699 , 667e618eb7735ad3b5fb2a9e9b6d55cb.jpg )
3366
>>3354
Update: I wrote something that works with Gelbooru and JPEG files
https://gitgud.io/nvtelen/metadata-inject

Seems this is the easy part, which is a shame, but it's still great being able to search images by description. I wrote it with WSL and Windows in mind, but there's no reason it wouldn't work without WSL. As for Linux, I have no idea what the situation is there with metadata support.

edit: released an exe, so you don't need to compile it.

Post edited on 5th Mar 2024, 2:56pm
>> No. 3369 [Edit]
File 171012035649.jpg - (838.53KB , 970x730 , a7b19a565d4130ad194c8b6be72be1f34ed0d0c0.jpg )
3369
>>3354
>>3366
Anything that has to do with tags should be handled by the filesystem via extended attributes, so that filetype-specific solutions aren't necessary, though admittedly I don't know of a good filesystem for this. If all I care about is attribute support, then with regard to Windows, btrfs exists, but it supposedly functions very differently structurally, and I'm not knowledgeable enough to say whether it's suitable. Similarly, though not relevant, you may also find Haiku's filesystem (BeFS) interesting.

Nonetheless, it's still pretty cool to have access to this regardless of filesystem, in a way I imagine is portable between them. It should pair well with software that operates on a file based on what folder it is downloaded to. Also, what's the reason (or reasons) for the compatibility limitations?
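For what it's worth, on the Linux side the xattr API is only a couple of calls. A minimal C sketch (the filename is hypothetical, and this assumes a filesystem with user xattrs enabled, e.g. ext4/btrfs/xfs):

/* Store and read back a "user.tags" extended attribute on Linux. */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

int main(void) {
    const char *path = "comiket103.png";          /* hypothetical file */
    const char *tags = "3girls,cat,comiket_103";

    /* Create or replace the attribute (flags = 0). */
    if (setxattr(path, "user.tags", tags, strlen(tags), 0) != 0) {
        perror("setxattr");
        return 1;
    }

    char buf[256];
    ssize_t n = getxattr(path, "user.tags", buf, sizeof(buf) - 1);
    if (n < 0) {
        perror("getxattr");
        return 1;
    }
    buf[n] = '\0';
    printf("tags: %s\n", buf);
    return 0;
}

The obvious catch is that xattrs don't survive the trip through most upload/download paths, which is exactly why embedding XMP in the file itself is more portable.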
>> No. 3370 [Edit]
File 171012294366.png - (22.01KB , 400x400 , bdf0ea87651b291f06b5b11134e13f86.png )
3370
>>3369
>extended attributes such that filetype specific solutions aren't neccesary
NTFS did support this, and using Filemeta you can enable it for any file type. If I cared to, maybe I could make my tag scraper work with that NTFS-specific system. That's not the avenue I want to take though, because I really care about portability.

>what's the reason(s) for the compatibility limitations?
XMP was a fairly new standard when Windows added limited support for it. I'm not sure XMP itself supported PNG and GIF at the time. Since then, Microsoft hasn't bothered to expand compatibility, which I chalk up to laziness. I think the path to adding it lies within the WIC API. I don't know C++ though, so learning it has been my priority.

Post edited on 10th Mar 2024, 7:09pm
>> No. 3371 [Edit]
File 171027336575.png - (220.85KB , 1888x1360 , usebashinstead.png )
3371
>>3366
Would it actually be better to write a bash script instead? I'm not too familiar with bash, but my assumption is that aside from not being cross-platform, that route would be harder to maintain and less extensible. Maybe I'm wrong about all of that though.
>> No. 3372 [Edit]
Learning Haskell from scratch is easier than conquering the beast known as FP Scala. Yeah, Haskell's operator soup can be confusing at first, but at least the language itself, even with the most commonly used GHC extensions, is rather slim. Scala, on the other hand, is truly a multi-paradigm language with an impressive type system, and the commonly used IO libraries are bloated beasts. Powerful, but might be too much for my small brain.
>> No. 3430 [Edit]
File 171668094969.png - (78.38KB , 1704x1016 , wip.png )
3430
I've been working on a port of Saya no Uta to Renpy. I found an unencrypted, pre-patched English version of the 2009 release, and am now working on a translation script for whatever engine it is they used.

I'll still have to do some things manually, but this should make it feasible within a reasonable time-span. For the first time ever, Mac and Linux users will have a native version. This obviously isn't legal, but I'm not worried about the legal team of Nitro+.
>> No. 3431 [Edit]
File 17166968543.png - (60.13KB , 400x400 , 42d6c045511b61f0cc032c79c79f8b51.png )
3431
>>3430
I've got to say, Renpy's imperative approach, as opposed to a markup one, has made this more difficult than it has to be.
<voice name="瑶" class="瑶" src="voice/6/001405"> <I></I>"I Feel rEAlLy bad ABoUT YOur PareNTs&.<k><voice name="瑶" class="瑶" src="voice/6/001405_2"> BUt yoU'RE Not aLONE&. you HAve KoJi&, aND OMi&, anD&.&.&.<k><voice name="瑶" class="瑶" src="voice/6/001405_3"> you HAVe ME&."

which is all in one paragraph, becomes
voice 'audio/voice/6/001405'
txt `"I Feel rEAlLy bad ABoUT YOur PareNTs.`
voice 'audio/voice/6/001405_2'
extend ` BUt yoU'RE Not aLONE. you HAve KoJi, aND OMi, anD...`
voice 'audio/voice/6/001405_3'
extend `you HAVe ME."`

If you're making something from scratch, this would probably be less annoying, but I feel like it would encourage a less dynamic, more stilted approach. I get that they're going for a "readable" screenplay vibe, but visual novels aren't screenplays, they're computer programs.

Post edited on 25th May 2024, 9:17pm
>> No. 3441 [Edit]
Currently I'm working on a game engine, made in C/C++ with the raylib library and Dear ImGui. I guess I'll be updating my progress here.
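For anyone unfamiliar with raylib, this is the standard skeleton everything hangs off of (a minimal sketch of the library's usual main loop, not my engine code; the window title is made up):

/* Minimal raylib program: open a window and draw every frame. */
#include "raylib.h"

int main(void) {
    InitWindow(800, 450, "engine sandbox");   /* hypothetical title */
    SetTargetFPS(60);

    while (!WindowShouldClose()) {            /* ESC or window close */
        BeginDrawing();
        ClearBackground(RAYWHITE);
        DrawText("hello from raylib", 190, 200, 20, DARKGRAY);
        EndDrawing();
    }

    CloseWindow();
    return 0;
}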
>> No. 3451 [Edit]
What timezone do tohno-chan timestamps use? Pacific time?
>> No. 3452 [Edit]
I just made a post and looked up where in the world that time currently is; according to some random site, it's Pacific Daylight Time.

Post edited on 4th Aug 2024, 11:08pm
>> No. 3458 [Edit]
>>1680
Appearance-wise I would choose Ruby, because she doesn't look as depressed as the other girls and is kind of cute, but if your question was only about the language, then Shell. People underestimate how much you can do with simple POSIX shell scripts. Dylanaraps is a good example of this. Unlike what most people know him for, which is neofetch, he also wrote a file manager, an IRC client, core utilities, a transmission client, a package manager and more in pure Bash. What he has done of course only scratches the surface of what can be done in Bash/POSIX shell, and I think many Perl/Python scripts could easily be replaced with shell scripts if done right.
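(His pure-bash-bible repo is a good index of the tricks that make this possible, e.g. using ${var%pattern}-style parameter expansion where most scripts would shell out to sed or awk.)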
>> No. 3518 [Edit]
I have been wasting my time recently writing a compiler for a programming language that I designed. Nothing special about the language, other than it will have generics, slices, errors as values and so on. I wanted to try something more difficult.
I'm using C as the IR, so it depends on a C compiler to generate the final executable. No way I was going to use LLVM, with how often they break the API; plus it's just too big, I can't compile it myself.
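To give an idea of what using C as the IR looks like: a generic slice from my language could, for example, be monomorphized into a plain C struct per element type. A hypothetical sketch of such output (names and layout made up for illustration, not what my compiler actually emits):

/* Hypothetically lowered output: slice of int, plus an
   errors-as-values bounds-checked indexing function. */
#include <stddef.h>

typedef struct {
    int    *data;
    size_t  len;
    size_t  cap;
} Slice_int;

typedef struct {
    int value;
    int err;    /* 0 = ok, nonzero = error code */
} Result_int;

static Result_int slice_int_get(Slice_int s, size_t i) {
    if (i >= s.len)
        return (Result_int){ 0, 1 };        /* out of bounds */
    return (Result_int){ s.data[i], 0 };
}

The nice part of this route is that the C compiler does all the register allocation and optimization for free, and the output stays debuggable with ordinary tools.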