Beep Boop Bip

File 150448609042.jpg - (110.47KB , 1280x720 , mpv-shot0028.jpg )
No. 1547 [Edit]
It doesn't matter if you're a beginner or Dennis Ritchie, come here to talk about what you are doing, your favorite language and all that stuff.
I've been learning Python because C++ was too hard for me (I'm sorry, Nenecchi, I failed you). I've reached OOP and it feels weird compared to C++; I never fully got it anyway.
63 posts omitted. Last 50 shown.
>> No. 2074 [Edit]
>>2073
Ah neat, I've played around with D and it seemed quite nice – although I haven't been able to find a personal niche for it in my own work. I also didn't know mangadex had an api!

With regard to the path limits, I recall reading somewhere that even if you don't flip the registry flag to enable long paths globally, there's a way to call into win32 apis directly and force use of long paths via some suffix. I have done zero win32 development though so I can't comment much further on that though. If it's a significant enough issue maybe you could just target linux and use WSL to run it on windows?
>> No. 2075 [Edit]
>>2074
>Ah neat, I've played around with D and it seemed quite nice – although I haven't been able to find a personal niche for it in my own work.
Yeah, I feel its general-purpose nature is both a blessing and a curse. Its meta-programming capabilities are pretty nice, though.

>I also didn't know mangadex had an api!
Neither did I. My initial client implementation parsed the webpages, but after a cursory glance in my web console, I discovered its existence. I do wonder how long it's existed.

>With regard to the path limits, I recall reading somewhere that even if you don't flip the registry flag to enable long paths globally, there's a way to call into win32 apis directly and force use of long paths via some suffix.
You are correct: one prefixes the path with `\\?\` to bypass the limit. However, if I read the docs correctly, there are some quirks with it. It'll take some experimentation.

>If it's a significant enough issue maybe you could just target linux and use WSL to run it on windows?
I don't think it'll come to that. Abbreviating the filename or applying the filename prefix should be suitable. Plus, Windows is my daily driver, and I'd like this program to run natively.
>> No. 2082 [Edit]
I've been trying to conjure a design by which structs (i.e. aggregate value types) may be dealt with like classes and interfaces. An obvious answer is structural typing via meta-programming. However, tunnel vision is quite potent.
>> No. 2085 [Edit]
>>2082
> structural typing via meta-programming
Can you explain what you mean by this? For simulating OO in C via structs, the solution I've usually seen involves including the base class as the member of the derived classes so you can manually cast back and forth, and then essentially manually implementing the vtable to get the polymorphism.
>> No. 2087 [Edit]
>>2085
What I mean is that, given a function that has a parameter of type T, only operate on a subset of members specified by T; as long as a struct defines those members, then from the viewpoint of the function, it's considered equivalent to other types that do the same. (In the light of this description, I retract my solution's description: it's closer to duck typing than structural typing.)
>> No. 2088 [Edit]
>>2087
Yeah ok that makes sense. It's annoying in C though because you also need the same layout of the structs, which is why as I mentioned most people just include the base struct as the first member.
>> No. 2089 [Edit]
>>2088
I assume you're referring to something like this, right? (Sans encapsulating the parent's fields.)

#include <stdio.h>
#include <string.h>

struct Widget
{
    int id;
};

struct FooWidget
{
    int id;
    char* text;
};

void process(struct Widget *widget)
{
    printf("%d\n", widget->id);
}

int main(void)
{
    struct FooWidget foo;
    memset(&foo, 0, sizeof(foo));
    process((struct Widget*)&foo);
}

>> No. 2090 [Edit]
>>2089
Yeah exactly that's the idea. Although in the approach I mentioned you would do


struct FooWidget
{
    struct Widget base;
    char* text;
};


so that way you don't have to repeat all of the parent's members (and it also avoids issues regarding struct packing/alignment). A lot of codebases I've seen will in particular do this for logging, where all of the "inherited" classes will share the same first member, and then the "logId()" macro or whatever can just cast to that shared "base" first member and extract the id.

You can also go further and implement function polymorphism, not just member sharing, by manually passing around vtables like in the below example (since there's just one function I don't have a separate vtable member – I just put the function inline).


#include <stdio.h>

struct BaseWidget {
    int id;
    void (*dump)(struct BaseWidget *self);
};

struct ExtendedWidget {
    struct BaseWidget base;
    char* extra;
};

void dumpBase(struct BaseWidget *self) {
    printf("BASE: %d\n", self->id);
}

void dumpExtended(struct BaseWidget *self) {
    dumpBase(self);
    printf("DERIVED: %s\n", ((struct ExtendedWidget*) self)->extra);
}

void dump(struct BaseWidget *widget) {
    widget->dump(widget);
}

int main(int argc, char *argv[]) {
    struct BaseWidget base = {.id = 3, .dump = dumpBase};

    struct ExtendedWidget derived;
    derived.base.id = 4;
    derived.base.dump = dumpExtended;
    derived.extra = "foobar";

    struct BaseWidget *baseThatIsExtended = (struct BaseWidget *) &derived;

    dump(&base);
    dump(baseThatIsExtended);
}

>> No. 2091 [Edit]
>>2090
Neat. But Haruhi damn, I hate C's syntax for function pointers.
>> No. 2092 [Edit]
This is why nobody pays me to program.


struct MaskContext(string name, Placeholders...)
if (Placeholders.length > 0 && allSatisfy!(isPlaceholder, Placeholders))
{
    alias RequiredPlaceholders = Filter!(isPlaceholderRequired, Placeholders);
    alias RequiredParams = staticMap!(PlaceholderType, RequiredPlaceholders);
    alias AllParams = staticMap!(PlaceholderType, Placeholders);

    /// Constructor for all placeholder fields.
    this(AllParams params)
    {
        static foreach (i, P; Placeholders)
        {
            __traits(getMember, placeholders, P.identifier) = params[i];
        }
    }

    static if (RequiredPlaceholders.length > 0)
    {
        /// Constructor for only required placeholder fields.
        this(RequiredParams params)
        {
            static foreach (i, P; RequiredPlaceholders)
            {
                __traits(getMember, placeholders, P.identifier) = params[i];
            }
        }
    }

    // ヽ( ̄~ ̄ )ノ
}


And yet, it works!
>> No. 2093 [Edit]
Due to circumstances, I've returned to C++ after many, many years, and I must say that I have no idea what the fuck I'm doing. Grokking its template metaprogramming is difficult after enjoying D's relative simplicity; no universal implicit initialization, move semantics, and ugly syntax are a thorn in my side; and no modules (for GCC, anyway) kills the soul. And yet, I'm having fun (with valgrind by my side). Plus, I get to re-enjoy Scott Meyers' talks and writings--always a good time.
>> No. 2094 [Edit]
No built-in unittesting is saddening, too.
>> No. 2095 [Edit]
>>2093
There are now "concepts" in C++20; they help with templates a lot.
>> No. 2096 [Edit]
>>2095
Indeed. Template constraints are a great feature in D, and it seems concepts might be more powerful. However, as usual, C++'s take seems rather ugly.
>> No. 2097 [Edit]
>>2096
I wish SFINAE (and the hell that it has enabled) had never existed.
>> No. 2098 [Edit]
>>2097
It's certainly antiquated now, it seems.

Also, consider this:

template<typename... Args, typename LastArg, typename = void>
void foo(LastArg arg)
{
    // ...
}
foo<int, float>("Hello, world!");


I'm glad type inference with variadic template parameters is possible, but it's so odd. Cursory searches haven't revealed much about "typename = void", and cppreference (from where I learned this) doesn't go into detail.

Meanwhile, in D:

template foo(Args...)
{
    void foo(LastArg)(LastArg arg)
    {
        // ...
    }
}
foo!(int, float)("Hello, world!");


Readability at the cost of two template instantiations (unless this can be optimized), but I prefer it.
>> No. 2099 [Edit]
At first I was excited about constexpr, but it's stupidly limited: only "literal types" are supported, and "if constexpr" must be placed in function scope. So if you want a compile-time std::string (Working with char{*|[]} kills the soul.) or a replacement for the pre-processor, you're out of luck. Instead, I have to conjure up some tricks to work around these issues, and even that's not satisfactory. And here I thought C++ was catching up to D.
>> No. 2107 [Edit]
>>2099
With C++20 most of the limitations with constexpr are fixed. (std::string and std::vector work now too.) There is also "constinit" and other new features. Read through them.
>> No. 2108 [Edit]
>>1958
>LISP is slow
I hate this stigma that Lisp is somehow "slow" when it's absolutely not. SBCL can already produce images that are as fast as, if not faster than, GCC's output if you're clever enough. Now I will say writing Lisp to be as fast as C is a major pain; if you want to write fast code you should use Chicken (which lets you drop down into C at any time) or just use C.
I think this idea of Lisp being slow comes from it being a LISt Processor where everything is a "linked" list, and these lists have O(n) access time. Honestly, today's machines (2000s and on) are fast enough to compensate for this, not to mention that most dialects allow you to use vectors when you're dealing with a truly large amount of data.
One more thing I would like to add, that really gives Lisp the edge over most languages, is that programs are treated the same as regular data; that is, programs can be manipulated just as regular data can. Long story short, Lisp machines and Lisp instruction sets/architectures are near trivial to design, and give the programmer, and user, some major benefits (not just speed). If you want to read more on this, I would suggest Guy Steele's paper "Design of LISP-based Processors".
>> No. 2109 [Edit]
>>2108
SBCL is pretty amazing. You can see this quantitatively in [1] where Lisp is within an order of magnitude of C's performance. In fact a lot of people's ideas about "fast" languages are out of date. I've heard people call Java a "slow" language, but it's really quite performant (thanks to a lot of effort put into hotspot jit).

[1] https://thenewstack.io/which-programming-languages-use-the-least-electricity/
>> No. 2110 [Edit]
>>2107
Thanks for the information. I had assumed that my toolchain was limited to C++17, but it seems GCC 10 is supported. Pretty excited to see how much of the pre-processor I can replace. The dream, however, is to convert the platform's system headers to D modules, and get GDC working. Don't know if I have the knowledge for the latter, though.
>> No. 2111 [Edit]
What do you guys think about a function that reads command-line options into a struct? The following is its documentation:


Parses command-line arguments that match the given `CLIOption`s into a struct and returns it.

Params:
- Options = A sequence of `CLIOption` instantiations.
- args = The command-line arguments passed into the program.

Returns: A struct composed of two fields: `values` and `helpWanted`. The former is another struct whose fields' identifiers and values are derived from the passed `CLIOption` instantiations. The latter signals whether -h|--help were specified--just like with `std.getopt`.


P.S. I wish we had a code tag, e.g.
[code][/code]


Post edited on 25th Nov 2020, 6:32pm
>> No. 2112 [Edit]
>>2111
I'm not sure if I fully understand what you're going for. Can you dynamically create fields in a struct? And what would be the advantage over returning a dictionary(/map)?

Incidentally I wish that all languages had something like Python's argparse. That's always been a pleasure to use and it handles all the common use-cases (required flags, optional flags, lists, etc.)
>> No. 2113 [Edit]
>>2112
>Can you dynamically create fields in a struct?
Fields are "mixed in" at compile-time, so the type is fully defined before runtime.

>And what would be the advantage over returning a dictionary(/map)
Since I'm programming in D, and D is statically typed, the value type of a dictionary would have to be a variant--which would introduce some friction. I could also hack together a solution with `TypeInfo`, but I'm not too keen on that.

>Incidentally I wish that all languages had something like Python's argparse.
Never used it as I rarely program in Python, but it does seem nice after reading the docs. I'll have to borrow some of its ideas.

My `parseArgs` function is built upon D's std.getopt, as the latter doesn't promote structure, in my opinion.


/**
Usage: calc numbers [options]

Takes a list of numbers separated by whitespace, performs a mathematical operation on them, and prints the result.
The default operation is addition.
*/

// Usually a bad idea like `using namespace std;`
import std;

// Default Value | long and short names | Optional argument that specifies the option's description in the help text.
alias OperationOpt = CLIOption!("add", "operation|o", CLIOptionDesc("The operation to perform on input (add|sub)"));
// Same as above except we specify a type instead of a value. The option's default value will resolve to its type's initial value, which would be `false` in D.
alias VerboseOpt = CLIOption!(bool, "verbose|v", CLIOptionDesc("Prints diagnostics"));

// -h|--help are automatically dealt with
auto result = parseArgs!(OperationOpt, VerboseOpt)(args);
if (result.helpWanted) { result.writeHelp; return; }
auto nums = args[1..$]; // Let's just assume that the user actually entered in at least one number.

// An option's long name is the identifier in `values`. The implication is that long names must be also be D identifiers. However, I've ensured that common option names like `long-name` are resolved to `longname`. However, more bespoke option names will trigger a compiler error with a helpful message. This would not be a problem if `values` were an associative array whose keys are strings.
switch (result.values.operation)
{
    // Assume the variadic functions, `add` and `sub`, are defined.
    case "add": add(nums).writeln; break;
    case "sub": sub(nums).writeln; break;
    default: writefln!"Operation '%s' is not supported"(result.values.operation); result.writeHelp;
}


Three problems with my function and its associated templates:
1. I'd like `CLIOption` to take functions as a type. `std.getopt` can do this, but I've had issues creating a higher-level interface with this in mind. This is mostly due to how I designed things.
2. `parseArgs` should handle more than options, like `argparse`. After all, if it doesn't, mine should merely be called `parseOpts`.
3. I suck at programming.
>> No. 2114 [Edit]
>>2113
Ah neat that makes sense. Having not used D before, I was only vaguely aware of mixins. (It seems the definition of "mixin" being used here is slightly different than the conventional definition used in object-oriented languages? I've seen mixins in e.g. python/scala and there it's more akin to interfaces with default methods. But in D it seems it's a bit broader and more like templates, with support for compile-time preprocessing?)

>the value type of a dictionary would have to be a variant
Yeah most of the argument parsers I've seen in C++ deal with this by requiring you to manually cast any values that you access into the proper type. (There's also the gflags/abseil style argument libraries where you declare the variable you want to place the result into upfront. That works around the above issue, but on the flipside it's ugly and overkill for small projects). Creating a properly typed struct at compile-time would be a lot cleaner and safer.
>> No. 2115 [Edit]
>>2114
D has two types of mixins: string and template. The former embeds a string containing valid D statements and/or expressions into the code: `mixin("int foo = ", 42, ";");` -> `int foo = 42;`. This must be done at compile-time, and any variables passed into the `mixin` statement must be readable at compile-time.
Then there's template mixins; these are more like traditional mixins found in OOP languages, except, as you've mentioned, they may be parameterized with types, symbols, and values. They are "mixed in" with the `mixin` statement: `mixin SomeDefinitions!42;` If `SomeDefinitions` had a definition, `int foo = value`, where `value` is a template parameter, then said definition will be accessible from the scope in which the template was mixed, and `value` is substituted for `42`. This is in contrast to a normal D template where its definitions, after instantiation, reside in their own scope accessible through a symbol.
These given examples are rather trivial and don't do these features justice. For my command-line argument parsing library, I use string mixins to generate new structs at compile-time, and utilize mixin templates to factor out common definitions and compile-time processing. Further, there are D CGI libraries that expose template mixins that do all the setup drudgery, e.g. provide a `main()` and environment-variable processing.

As an aside, D allows you to define strings with `q{}`, where the string's contents are placed between the curly braces. This indicates to a text editor, IDE, or whatever to treat the string's contents as if they were D code (or any code, I suppose): highlight it, provide code inspection capabilities, etc. These are helpful with string mixins.

>(There's also the gflags/abseil style argument libraries where you declare the variable you want to place the result into upfront. That works around the above issue, but on the flipside it's ugly and overkill for small projects).
I looked at them. I feel a little sick.
>> No. 2116 [Edit]
File 160688616514.jpg - (108.27KB , 1280x720 , [Doki] Mahouka Koukou no Rettousei - 10 (1280x720 .jpg )
Alright, so I'm re-working that argument parsing thing, and funnily enough, template mixins have been a big help in refactoring. Combined with better and more succinct solutions to previous problems, the design is a lot cleaner. Documentation is better, too. With that said, I'm not sure of the best way to handle options' "optional" metadata:

alias VerboseOpt = Option!("verbose|v", false, OptionDesc("Garrulous logging") /* etc... */);

`OptionDesc` is one such piece of metadata. Right now, the `Option` template will pass the given variable-length list of metadata to a mixin template that will then define the appropriate fields. Thus, in the given example, a field of type `string`, whose identifier is `desc`, and with a value of "Garrulous logging" will have been defined in this instantiation of `Option`, i.e. `VerboseOpt`. The problem is that `parseArgs` will have to do some compile-time inspection on every `Option` instantiation to determine whether it has a description, i.e. a `desc` field; using the data therein or providing default values in the field's absence. This is not ideal for compilation times and for the code's clarity as this also extends to other pieces of metadata like `OptionCategory` or `OptionRequired`. It's not terrible, but again, not ideal. I have a better solution in mind, but a clean implementation of it is difficult for my moronic mind.
>> No. 2117 [Edit]
File 160705909858.jpg - (185.24KB , 1280x720 , !.jpg )
Continuing my work on my command-line argument processing library (Now called "tsuruya" because naming is hard.), I have realized happiness through the digital world instead of just the 2D one.
Here's an example:

auto args = ["./test", "1", "2", "3"];
auto result = args.parseArgs!(Parameter!("integers", int[]));
assert(result.parameters.integers == [1, 2, 3]);
assert(result.usageText == "Usage: test <integers>");

`parseArgs` is instantiated with a `Parameter` struct template whose name, both in the generated programming interface and command-line interface, is "integers". By specifying the type of the parameter's value as a dynamic array of integers, `parseArgs` will read all non-option command-line arguments; convert them to `int`; and then add them to the parameter's array. (As an aside, if one were to specify a static array, `parseArgs` will only read k-number of non-option command-line arguments, where k is the static array's length.) A usage string is also generated based on what parameters and options (collectively known as "command-line interface objects") were given to `parseArgs`.
`Parameter` may also take a callable object, e.g. function, instead of a type, and the value it expects will be that of the callable object's return type. Further, one may pass optional metadata to `Parameter` just like one may do with `Option`, e.g. CLIODesc and CLIORequired. The former defines a description for a command-line interface object that may be used in `parseArgs`'s generated help text. The latter specifies whether the parameter or option is, well, required to be in the command-line arguments.
>> No. 2118 [Edit]
>(collectively known as "command-line interface objects")
I scrapped this stupidity and renamed the `Parameter` templates to `Operand`, since that's what they actually represent. After all, a parameter would include options too, hence the confusion. Anyway, on to error handling and all the fun that entails.
>> No. 2119 [Edit]
Oh how I wish for mutability during compile-time. The amount of recursive templates upon which I'm relying is making me sweat a bit.
>> No. 2193 [Edit]
I was trying to get a program I always use to do something for Python 2.7, and it wasn't supported anymore. Looking up the changelog discussions, I saw a poster say "We shouldn't support such ancient distros". Christ... it's really bizarre to me just how much the attitude among programmers has changed. Granted, decade-old software tends to be forgotten, but I have a hard time thinking of 2010 as "ancient", even as far as tech goes. Guess this is just me griping, but damn. I thought Python 3.3 and 2.7 were still being used on the same systems.
>> No. 2194 [Edit]
>>2193
What a mess the Python 2->3 transition was. Whose boneheaded idea was it to make things non-backwards compatible?
>> No. 2195 [Edit]
>>2194
>Whose boneheaded idea was it to make things non-backwards compatible.
I don't know, but there's a growing philosophy that old digital technologies should be forcefully cut out of any currently updated projects. Windows 10, for example, has some serious fundamental flaws that make Windows 7 look like a comparative masterpiece, yet it's being prioritized so heavily that people are now cutting Windows 7 support from their projects. This in particular is infuriating, especially because when I'm not on a Linux machine I want to use Windows 7. In my brief stint with Windows 10 I discovered some horrific design flaws regarding path variables, registries, and worst of all administrator permissions. As it turns out, it is relatively easy on Windows 10 for a file to revoke, absolutely and forever, any access to any user, including the system user itself. This is particularly unpleasant when said file is malware.
>> No. 2208 [Edit]
I'm so addicted to meta-programming and templates that I often use them as a solution to anything. Usually, it's fine, but more straight-forward and obvious answers to problems tend to escape me in favor of some Rube Goldberg machine. It's fun, at least.
>> No. 2230 [Edit]
>>2195
I used Linux for many years. Couldn't take it anymore. I'm back to using Win7.
>> No. 2231 [Edit]
I spent hours trying to reverse a singly linked list. I accomplished the task, but the realization that this is yet another indicator of being unemployable hurts the soul. Also I had to use two extra variables (iterator and a temporary) in the implementation along with the list's head. It's O(n), I think, but I feel like it's subpar.
>> No. 2232 [Edit]
>>2231
Post a screenshot or code. Let's review it :)
>> No. 2233 [Edit]
>>2231
One hour seems fine if you haven't seen the problem before (or alternatively haven't practiced doing these kinds of interview questions in a long time). And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).

Once you get familiar with the usual interview hazing questions you should be able to do them in 15-20 minutes.

Also a relevant article "Why do interviews ask linked list questions" [1]

[1] https://www.hillelwayne.com/post/linked-lists/
>> No. 2234 [Edit]
>>2232
https://pastebin.com/zDTyG3h2

>>2233
It took me two hours, I think. Even though I rarely work on these kinds of problems, it's still a disappointing result given your time frame.

>And using two extra variables seems about right: if you're doing this iteratively you need to store previous, current, and next values (since the key relation is current->next = prev).
That's good to hear, as I was struggling to figure out whether there was a way to reduce the number of variables (that didn't involve changing the data structure).

>Also a relevant article "Why do interviews ask linked list questions" [1]
So it suffered the fate of all similar questions, and its continued use is due in no small part to inertia. Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
>> No. 2235 [Edit]
>>2233
Correct me if I'm wrong, but isn't it easier to do this recursively?

Post edited on 30th Mar 2021, 5:29pm
>> No. 2236 [Edit]
>>2234
That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.

>I was struggling to figure out if there were a way to reduce the number of variables
Both solutions would have same asymptotic complexity so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.

>Still, depending on the job and languages used, I don't think it'd be a terrible problem to give someone. It weeded me out.
The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases. (And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
>> No. 2237 [Edit]
>>2235
If this was lisp then yeah maybe since it's a one-liner, but the length of a recursive solution is basically the same (although perhaps maybe conceptually a tad simpler). Disadvantage of recursive is that you have increased space complexity, so if this were an interview they'd ask you to do the iterative solution anyway.
>> No. 2238 [Edit]
>>2236
>That solution seems perfect to me. Depending on how familiar you are previously with pointer manipulation and thinking about data structures, 2 hours doesn't seem terribly bad.
Before attempting to implement the reversal algorithm, I thought I understood them well enough. Heck, I felt kind of clever doing this (https://pastebin.com/zNvWeXR2) as the first attempt for the removal method. Given that, it's just not acceptable that it required two (2!) hours.

>Both solutions would have same asymptotic complexity so in an interview that probably wouldn't matter. But thinking about minimal solutions for these sorts of problems is a great way to strengthen problem solving skills.
It was fun, too, until checking how many minutes elapsed.

>The dirty semi-open secret of programming jobs is that they truly are more software engineering than CS. That is to say, being able to read code is more important than being able to write it, and when writing code the most important aspects are it being well-structured and easy to understand. Even at the notorious companies that are infamous for asking these questions (Google & Facebook), the vast majority of people basically do boring engineering plumbing: gluing together existing libraries and writing test cases.
I've read similar opinions, and I'm in no position to disagree with them considering my experiences building hobby projects and not having ever worked such a job. However, wouldn't a regular employee still need to possess the ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.

>(And somewhat ironically, data-structure manipulation questions have gone out of vogue at those two companies. They mostly ask problems that can be solved via "simple" greedy or search strategies).
This varies across positions, I assume.

P.S. Tangentially related, but writing good unit tests is quite the skill, and since you seem to know what you're talking about, would interview questions and problems concerning them be a good idea?
>> No. 2239 [Edit]
>>2238
>felt clever doing this [removal]
That's a clever solution. I first read/saw that variant of removal from a talk by Linus (see [1]), and even having seen that variant before it still took me a good 5 minutes to puzzle through your solution (that makes me feel dumb). If you came up with that by yourself, you're not giving yourself enough credit.

(By the way, I think in terms of clarity this is one case where explicitly writing out the type instead of using auto might have been clearer. This is probably just taste though – I personally hate auto since it makes it harder to know at a glance what type something is, and I only tend to use it for iterator-things like ".begin()" where the type is clear, and the equivalent "std::vector<int>::iterator" is ugly). At the expense of using an extra variable (which is probably optimized out by the compiler anyway), if you rewrite it like


void remove(T value)
{
    node_t **cur_ref = &head;
    node_t *cur = *cur_ref;
    while (cur != nullptr && cur->value != value) {
        cur_ref = &cur->next;
        cur = *cur_ref;
    }
    if (cur) {
        node_t *next = cur->next;
        delete cur;
        *cur_ref = next;
    }
}

then I think it's a lot clearer what's going on.

[1] https://github.com/mkirchner/linked-list-good-taste

>ability to model a problem and implement a solution? Reversing a singly linked list and similar tasks are expressions of that, amongst other things.
Yes, problem modeling is probably the most important skill to have; I disagree that linked list reversal is a good exercise of those skills in a day-to-day job duties sense (unless your job is to write standard libraries for languages). They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs. Interestingly enough, companies do ask systems design questions, but they only do so for L5-L6 hires (I'm using Google's scale, where the entry level is, confusingly enough, L3, and L5 corresponds to senior software engineer with about 7 years of experience).

>This varies across positions, I assume.
Surprisingly, no. Google (& maybe Facebook?) has a single common interview process for all SWE roles, and people aren't allotted to a team until after they pass the hiring committee. So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews, though.

>would interview questions and problems concerning them be a good idea?
If you're optimizing for what will be asked in interviews, don't bother practicing how to write unit tests. Instead, what would be better is learning how to identify edge-cases, and when you're given a problem, being able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them). Even if your solution is incorrect or suboptimal, showing that you can identify these edge-cases is a strong positive signal and might be the difference between weak-hire and strong-hire.

Post edited on 31st Mar 2021, 1:19am
>> No. 2246 [Edit]
>>2239
>If you came up with that by yourself, you're not giving yourself enough credit.
Not really. Pretty sure I got the idea to do that or something similar from an article I read some time ago about uses for double indirection. The "clever" part, for me, is being able to remember and reliably implement what I learned. Yes, it's a low standard, and "clever" is most assuredly the wrong word choice on my part. But I'd like to think the implication is roughly the same.

>At the expense of using an extra variable (which is probably optimized out by compiler anyway), if you rewrite it like
Not only do I agree that your rewrite is more readable and obvious, I'm also inclined to think an interviewer would prefer such a version, based on what you've said. However, from my perspective, explicitly specifying the type in this instance didn't really help all that much. Rather, it's the introduction of the extra variable and the renaming of the others that do more to elucidate the intent and purpose of the code. (It might just be familiarity.)

>and the equivalent "std::vector<int>::iterator" is ugly
Don't miss those days. As an aside, the hype for C++11 was an exciting time. And now we have modules, concepts, ranges, and further improvements to constexpr--very cool. Problem is learning all of it!

>https://github.com/mkirchner/linked-list-good-taste
I feel like this is an over-complication, but it's probably helpful to others. Feels good to use Linus-approved techniques!

>They're certainly correlated, but systems design/modelling questions are far more relevant to most jobs.
You make good points that I should've arrived at myself; in one of my recent projects, I've become aware of how difficult it is to design good "systems." The breadth and side-effects are challenging to manage even at my small scale.

>So the interview basically consists of four rounds of problem-solving. Smaller companies, startups, and other tech companies will do team-based interviews though.
Team-based interviews probably vary more in quality than a more standardized system, but on the other hand, they include the people with whom you'd potentially be working.

>Instead, what would be better is learning how to identify edge-cases, and when you're given a problem be able to discuss these edge-cases with your interviewer (even if he doesn't explicitly ask you about them).
Good advice! However, maybe I'm being rather dense, but I was asking whether interviewers asking questions related to unit testing would be a good idea--not whether a candidate should practice them in hopes they would be useful.
>> No. 2247 [Edit]
>I was asking whether interviewers asking questions related to unit testing would be a good idea
Ah, my bad, I misinterpreted your question. Although I think my response to that version is similar: if you assess a candidate's ability to reason about both edge-cases and how systems are linked, then I feel that's a "good enough" indicator of his ability to write good tests. Since so much of testing is dependent on your specific project, language, framework, and infrastructure (which will have to be learned on the job anyway), I'm not sure there's another general way to assess this.

Also, in principle I feel that unit tests mainly serve as a sanity check to make sure you haven't broken anything when refactoring, and any actual "testing" would be done via full-system tests against a sandboxed instance (since, unless what you're testing is the implementation of some specific algorithm, the root causes of bugs generally stem from the interaction between two components). Of course, in practice writing those kinds of tests tends to be a lot harder or more tedious, so mocked-out dependencies are what people usually use. Maybe using an in-memory simulation (if it's an external resource that can be simulated somewhat accurately) or recording/replaying interactions (for things like network requests) would be better if sandboxed instances aren't feasible.
>> No. 2259 [Edit]
Some neat changes in c++20:
https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.html

Coroutines might be useful for event-driven stuff (though from what I've read, what the standard library provides is very barebones, so you'll need a higher-level library in practice). Not sure how I feel about modules; most of the annoyances have been with the build system rather than with leaky headers, and I'm not sure modules really fix that.
>> No. 2280 [Edit]
>>2247
Belated response, but I really appreciate your thoughtful responses. The CI-aspect (as opposed to simple unit-testing) is something I often forget about since nothing I work on necessitates such infrastructure. In your opinion, is general experience with the aforementioned techniques something that's expected from graduates of CS (and related) degrees, or is it something that's rather learned on the job? (Ignoring the differences in infrastructure across organizations, hence "general".)
>> No. 2283 [Edit]
>>2280
>The CI-aspect (as opposed to simple unit-testing)
In my mind "continuous integration" is more about the infrastructure and is orthogonal to the issue of system/unit level tests. For instance, you could just have your CI scripts run all unit tests upon a commit. Unless your job deals with setting up such infrastructure, I don't think end-engineers ever have to explicitly think about CI itself, since it's merely an automated mechanism that runs the tests.

> something that's expected from graduates of CS
(I haven't spent enough time in the industry to say for sure so take the below viewpoint with a grain of salt)
Considering that most university graduates barely have experience writing good unit tests, I doubt that new hires are expected to be able to think about system-level tests at all. In particular, while you can assume that graduates will at least have some basic exposure to the idea of unit tests (perhaps they might have had to write some themselves for an assignment, and they'd certainly be familiar with testing algorithms for correctness given the prevalent use of autograders), system-level testing is something very few students will have needed to think about, given that university projects are usually small and simple enough that there's no need for it. It's only when you dive into things that deal with networking, databases, RPCs, etc. that the limitations of unit tests begin to show and it becomes worthwhile to consider bringing up an entire sandboxed environment. (Somewhere along that continuum from unit tests to entire sandboxed instances, there are the in-betweens of in-memory simulations, RPC replay, and perhaps more that I'm not aware of.)
>> No. 2284 [Edit]
Functional and declarative programming is a bit of a mindfuck coming from an imperative mindset. Doesn't help that keywords and concepts have differing meanings between the two paradigms.
Tangentially related, StandardML is pretty fun, and mlton's FFI to C looks nice.