For discussion of politics, religion, and other content not fitting the rest of the site

File 166835863074.jpg - (204.10KB , 1995x849 , animatrixProgramCis.jpg )
No. 1560 [Edit]
How do you determine what is true? That is, by what method do you distinguish the things which are from the things which are not? For example, how do you know you're actually reading this, that you're not just dreaming?
And why are you convinced that's the way you should be doing it?

(For the sake of clarity, please provide both a description and an example.)
>> No. 1561 [Edit]
>How do you determine what is true
Nothing can ever be determined as "objectively" true because you are limited by your senses. But there's no use thinking about those things because by definition they won't make any impact on your life. It's also the same as the "what if we're all running in a simulation" argument; sure, we may be, but it doesn't make any difference in your day-to-day life, so why does it ultimately matter?

> how do you know you're actually reading this, that you're not just dreaming?
Things in dreams often don't exhibit temporal or spatial consistency; if you have lucid dreams (or nightmares) often enough, you can learn to recognize when these sorts of nonsensical things happen.
>> No. 1562 [Edit]
>>1561
>Nothing can ever be determined as "objectively" true
That's why the question was broad, regarding whichever way you distinguish what is and what is not.

>why does it ultimately matter?
Efficacy and communication; to be sure the way you think about the world actually matches the world, and to ensure that messages received match the messages which were intended.

>by definition they won't make any impact on your life
What makes you think so? By what definition?

>Things in dreams often don't exhibit temporal or spatial consistency
Sure, and what about that convinces you that this isn't a dream?
>> No. 1563 [Edit]
>>1562
>Efficacy and communication
>to be sure the way you think about the world actually matches the world
Like I said, there's no way to guarantee this. You can conjure up all sorts of scenarios in which this is violated by extraordinary measures. However, if we accept the principle of parsimony and assume that the world behaves consistently in a manner amenable to modeling (this model could be probabilistic if needed), then the tools of falsifiability/scientific method are the best we have for arriving at a model which matches what we can observe with our senses. Whether this model we arrive at is representative of the "real" world doesn't matter, because the only world we can perceive is that of our senses, so why should we care about anything else?

Once you have a model, the question of how to interpret it and under what conditions it's valid is where most disagreements happen. You'll notice that the hard sciences are usually less divisive than the soft sciences, since it's easy to consistently perform experiments to falsify things in the former, while in the latter it's usually hard or impossible to isolate single variables.

>what about that convinces you that this isn't a dream?
It allows you to deduce that there are (at least) two states, one in which things have consistency and one in which they don't. Whether the former is "real" and the latter a "dream", or vice versa, there's no way to be objectively certain, but we seem to spend most of our time in the state where things have consistency, so that's the one that's worth trying to reason about.
>> No. 1564 [Edit]
>>1563
>there's no way to guarantee this
I never claimed it was guaranteed.

>the tools of falsifiability/scientific method are the best we have
How is this relevant if:
>Whether this model we arrive at is representative of the "real" world doesn't matter
And are you saying you don't care whether your beliefs are accurate?

>one in which things have consistency and one in which they don't
And why are you convinced this is the state with consistency?
>> No. 1565 [Edit]
File 166854698354.jpg - (30.24KB , 900x500 , animatrixKid.jpg )
It was worth a shot. It wasn't much, but I don't have any will left for this.

My collocutor has entirely failed to grasp what I am asking. For instance, "no way to be objectively certain" really is immaterial to the question of 'how do you go from thinking something to believing it?'. (Next, you're going to say "I am sorry you reject my answer.")
Maybe I could've been more concise, with something like "why do you think you understand this sentence?". But I've been splitting hairs for years and it's never helped. It's not that subtle.

Trying to communicate is futile. I am alone.
>> No. 1570 [Edit]
>>1564
>And are you saying you don't care whether your beliefs are accurate?
Not him, but yes. I don't care.
>>1565
I get it.
>> No. 1571 [Edit]
>>1565
>why do you think you understand this sentence
Because I attempt to parse the sentence under the assumption that it's encoded using the shared language we know as English. The language itself is mostly standardized (albeit with some minor regional variations), so my interpretation with high probability matches your original intention. If this were not the case (i.e. you didn't encode your sentence as English, but instead merely chose English sentences as an encoding for some other scheme, like in [1]), then my conclusion would be invalid.

As to why I believe my understanding of English matches the shared language, it's because empirically this understanding succeeds at meaningfully extracting information in the real world, and every time this happens I update my confidence a posteriori in a Bayesian sense. After decades of knowing English, my priors are adjusted so high that, upon being faced with an uninterpretable sentence, my conclusion is that the author malformed the message, rather than that my understanding of English is incomplete.

This process of updating confidence can be seen happening in real-time when learning a new language. When you are new, you don't place much faith in your own ability to interpret things, and you can often come to the wrong interpretation. As you gain more exposure to a single grammatical pattern, you become more confident in the interpretation of that particular pattern.


[1] https://steganography.live/
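The a-posteriori confidence updating described above can be sketched as a toy Beta-Bernoulli model. This is a loose illustration, not a claim about actual cognition; the function names and counts are made up:

```python
# Each interpretation attempt either succeeds (the message was understood)
# or fails; "confidence" is the posterior mean of the success probability
# under a uniform Beta(1, 1) prior.

def update(successes: int, failures: int, understood: bool) -> tuple[int, int]:
    """Record one interpretation attempt."""
    return (successes + 1, failures) if understood else (successes, failures + 1)

def confidence(successes: int, failures: int) -> float:
    """Posterior mean: (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

# A language learner vs. a fluent reader, given their respective track records:
beginner = (2, 3)        # few attempts, mixed results
fluent = (10_000, 5)     # decades of mostly successful interpretation

print(confidence(*beginner))  # ~0.43: distrust your own reading
print(confidence(*fluent))    # ~0.9994: blame the author, not your English
```

At the fluent reader's posterior, a single uninterpretable sentence barely moves the estimate, which matches the intuition that you conclude the message was malformed rather than that your English is incomplete.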
>> No. 1574 [Edit]
>>1571
And you are mistaken, because you think you're interpreting the language and not the meaning.
For instance: "I can't believe they would do that" can have completely reversed meanings even in the same context. A better grasp of the language does not help with that. Empiricism does not save you from bad framing.
>> No. 1575 [Edit]
>>1574
Interpreting the language is the same as interpreting the meaning. You have to assign some value to antecedents, and you infer this a posteriori from context in the exact same way you infer the sense of the words being used themselves. Even though multiple candidates may be possible, given context there's usually one best fit, and so that's used as the substitute.

A better grasp of the language absolutely does help in being able to infer this, as people new to Japanese will attest. It's not necessarily the grammar of the language itself which places priors on the context you expect, but rather the patterns of language usage within the community. If I say "He's a complete ****", just knowing English grammar and English vocabulary is not sufficient to produce a probability distribution for that last word. Instead you need a mental model of 2-gram frequencies.

This is also why transformer models have worked so well for NLP, (self-)attention is quite literally all you need to interpret sentences.
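The 2-gram idea can be sketched concretely: count adjacent word pairs in a corpus, then condition on the previous word to get a distribution over the next one. The tiny corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

def bigram_model(corpus: list[str]):
    """Build a toy bigram model from a list of sentences."""
    counts: defaultdict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    def next_word_dist(prev: str) -> dict[str, float]:
        c = counts[prev.lower()]
        total = sum(c.values())
        return {w: n / total for w, n in c.items()} if total else {}

    return next_word_dist

# Grammar alone can't rank the candidates; usage frequency can.
dist = bigram_model([
    "he is a complete idiot",
    "he is a complete mess",
    "he is a complete idiot",
])
print(dist("complete"))  # 'idiot' comes out twice as likely as 'mess'
```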

Post edited on 19th Nov 2022, 12:41pm
>> No. 1578 [Edit]
>>1575
I completely disagree. With the amount of context people give, I can usually think of at least three alternate meanings for what they've said, and all of them usually turn out to be wrong.

I don't have any argument, what you've said just doesn't make any sense to me.

But you also seem to be assuming your conclusion. Your argument was in regard to figuring out a word, and I don't think figuring out the word means figuring out the meaning.
>> No. 1579 [Edit]
>>1578
>>1575
2-gram frequencies by themselves actually aren't sufficient for inferring the sense of a word; you need something like "extended" 2-gram frequencies, where you split a given dictionary word into its multiple senses and compute frequencies for each sense. This was probably hard to do before the advent of modern NLP, since most datasets don't have the sense information explicitly tagged, so you have to infer it as you go. This is not my domain of expertise, but vector-embedding-based models were probably the first primitive way to do this, in that they could factor in context from a limited window; as mentioned, transformer models now make this trivial and treat the operation as the fundamental primitive: the senses of the word get implicitly embedded in the Q/K/V vectors.
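A sketch of what those "extended" 2-gram counts might look like, assuming a sense-tagged corpus (the word#sense tags below are invented; as noted, real datasets rarely come tagged like this, which is exactly the difficulty):

```python
from collections import Counter, defaultdict

def sense_bigrams(tagged_corpus: list[str]) -> defaultdict:
    """Count adjacent token pairs where ambiguous words carry a sense tag."""
    counts: defaultdict[str, Counter] = defaultdict(Counter)
    for sentence in tagged_corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = [
    "river bank#shore flooded",
    "robbed the bank#institution yesterday",
    "the bank#institution closed early",
]
counts = sense_bigrams(corpus)

# Frequencies are now per-sense, so a context word selects among senses:
print(counts["the"])    # Counter({'bank#institution': 2})
print(counts["river"])  # Counter({'bank#shore': 1})
```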
>> No. 1580 [Edit]
>>1579
And is there any way for that to account for idioms, metaphors, and references?
For instance, suppose I tell someone "hold on to the grudge if you like", but instead I phrase it as "wear the grudge if you like" in reference to a song. I might just be using that as a turn of phrase, or I might be implying something about what I think they're doing, or I might be implying something about what I think they should do.
Is there some way in which deducing the meaning of the word is supposed to differentiate those things?
>> No. 1581 [Edit]
>>1580
>And is there any way for that to account for idioms, metaphors, and references?
Yes, all of those are accounted for, as you can empirically verify if you play with these LLMs. Transformer models handle not only "closed-ended" antecedent inference (Winograd-schema-esque) but also "open-ended" inference. The extended 2-gram example was just a sample; in reality these models don't just look at frequencies. I don't think anyone knows exactly what the Q/K/V vectors they learn end up corresponding to (although you can try to make guesses by looking at activation patterns), but we do know that successive layers keep pooling information.
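For concreteness, the attention primitive being discussed reduces to a few lines. This is a minimal single-head scaled dot-product sketch in numpy; the shapes and random inputs are arbitrary, and real models add learned projections, multiple heads, and stacked layers on top of this:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product attention: each token's output is a
    weighted pool over all tokens' values, weighted by query/key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over tokens
    return weights @ V                                     # pooled information

rng = np.random.default_rng(0)
n_tokens, d = 4, 8
Q, K, V = (rng.standard_normal((n_tokens, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one pooled vector per token
```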
>> No. 1582 [Edit]
If you don't have access to an LLM, see the joke explanations in e.g. https://arxiv.org/pdf/2204.02311.pdf (page 39)

>Input: I tried 10,000 random restarts of my neural network, but I was accused of overfitting. I guess no good seed goes unpunished.
>Model Output: This joke is a pun. A neural network is a computer program that can learn from data. A "seed" is a number that is used to initialize a random number generator. A "good seed" is a number that produces a good random number generator. The phrase "no good deed goes unpunished" means that if you do something good, you will be punished for it.

>Input: I think I understand why me and my partner are such a good match. I get excited when I smell food and become super alert when the doorbell rings. My girlfriend Shelley becomes ecstatic when she sees a person who she just saw 5 hours ago.
>Model Output: This joke is about dogs. Dogs get excited when they smell food and when the doorbell rings. The girlfriend is excited to see someone she just saw 5 hours ago, which is a trait of dogs.

>Input: What's the difference between a zebra and an umbrella? One is a striped animal related to horses, another is a device you use to stop the rain from falling on you.
>Model Output: This joke is an anti-joke. The joke is that the answer is obvious, and the joke is that you were expecting a funny answer.
>> No. 1583 [Edit]
>>1581
>>1582
I'm not really following the jargon (and don't really care to look into it), but I can see that subtleties could be accounted for simply as additional variables.
But, as far as I know, that still would not account for the correct answer not being among the possible choices.
And more importantly, it does not explain my lifetime of experience of people not even being in the right category when they assume my meaning. For instance:

(about searching for "the meaning of life")
me: "I think the whole question is a misframing, and that meaning is chosen instead of found."
them: "So you think the meaning of life is to make choices."
>> No. 1585 [Edit]
>>1583
That only means the people you're talking to are dumber than an LLM
