The bottom-up reductive explanations of LLMs' (generative pre-trained neural networks based on the transformer architecture) emergent abilities do...
I had missed the link when I read your post. It seems to me GPT-5 is cheating a bit with its example. One thing I've noticed with chatbots is that the...
Without a body, it seems that it would be mostly restricted to the domain of abstracta, which are usually singled out descriptively rather than de re....
On a Kripkean externalist/causal theory of reference, there are two indirect reference-fixing points of contact between an LLM's use of words and thei...
I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their in...
Ramsey appears to be an anti-representationalist, as am I. I had queried GPT-4o about this a few weeks ago, and also to what extent Kant, who most def...
One important thing Sleeping Beauty gains when she awakens is the ability to make de re reference to the coin in its current state as the current stat...
Yes, but you can't have a dialogue with language or with a book. You can't ask questions to a book, expect the book to understand your query and provi...
Sure, but Sleeping Beauty isn’t being asked what her credence is that "this" (i.e. the current one) awakening is a T-awakening. She’s being asked what...
I was referring to your second case, not the first. In the first case, one of three cards is picked at random. Those three outcomes are mutually exclu...
In an important sense, unlike expert systems and other systems that were precisely designed to process information in predetermined algorithmic ways, ...
During pretraining, LLMs learn to provide the most likely continuation to texts. Answers that sound right are likelier continuations to given question...
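The pretraining objective described here can be sketched with a toy next-token example (the vocabulary, scores, and numbers below are purely illustrative, not from any actual model): the model is rewarded for assigning high probability to the actual continuation, which is why continuations that "sound right" are exactly what the loss favors.

```python
import math

# Toy sketch of the next-token pretraining objective (hypothetical numbers).
vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.0, -1.0]           # model's raw scores for each candidate token

# Softmax turns scores into a probability over continuations.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Cross-entropy loss: low when the model puts high probability
# on the token that actually came next in the training text.
target = vocab.index("Paris")
loss = -math.log(probs[target])

print({w: round(p, 3) for w, p in zip(vocab, probs)})
print(round(loss, 3))
```

Minimizing this loss over vast corpora is what makes "likelier continuation" and "answer that sounds right" coincide.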
The ease with which you can induce them to change their mind provides a clue. Still, you can ascribe beliefs to them contextually, within the bounds of a ...
That's a deep puzzle. I've been exploring it for a couple years now. Part of the solution may be to realize that LLMs provide deep echoes of human voi...
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to que...
On an optimistic note, those department heads may soon be laid off and replaced with AI administrators who will have the good sense to reverse this ai...
I assume, but I also mention it here for the sake of precision, that the clause "(an obvious exceptional case might be, e.g. an LLM discussion thread ...
Agreed! That's indeed the chief ground for not treating it like a person. People often argue that chatbots should not be treated like persons because ...
Your argument is too quick and glosses over essential details we already rehearsed. We agreed that when there are two mutually exclusive outcomes A an...
This looks like a process well suited for mitigating the last two among three notorious LLM shortcomings: sycophancy, hallucination and sandbagging. Y...
In the first case you described, a single run of the experiment consists in randomly picking one of three cards. When an outcome is determined, the re...
It is indeed somewhere else. Look at the payout structure that @"JeffJo" proposed in their previous post. Relative to this alternative payout structur...
I was explicitly referring to her state of knowledge at the time when the interview occurs. There is no projection of this state into the future. Like...
Are you sure about that? This seems quite exaggerated. I know that a study published in August 2024 has been widely misrepresented as making a similar...
It looks like you didn't correctly parse the sentence fragment that you quoted. It is indeed on the occasion of an awakening (as I said) that she is b...
I was talking about what she is being asked, literally, in the original formulation of the problem discussed in the OP. From Wikipedia: This has becom...
Sure, but the former precisely is what she is being asked. She is being asked what her credence about the coin will be on that occasion, and not what ...
I assume what you now mean to say is that there are two possible ways to think of the "outcomes" based on context. Well, sure, that's pretty much what...
You are using the word "outcome" ambiguously and inconsistently. In your previous post you had stated that "You have 3 possible outcomes. In two of th...
The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed tails and she ...
I agree with the reasoning and calculation. As I said, this is a standard Thirder interpretation of the problem. It is consistent, coherent and valid....
SB does know the setup of the experiment in advance however. She keeps that general knowledge when she wakes, even if she can’t tell which awakening t...
She is woken up once when the coin lands Heads and twice when it lands Tails. That is part of the protocol of the experiment. We also assume that the ...
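A minimal Monte Carlo sketch of this protocol (assuming a fair coin; variable names are illustrative) makes the Halfer/Thirder split concrete: the two camps tally Tails against different denominators, runs versus awakenings.

```python
import random

# Simulate the Sleeping Beauty protocol: one awakening on Heads,
# two awakenings (Monday and Tuesday) on Tails.
random.seed(0)

runs = 100_000
tails_runs = 0          # Halfer denominator: experimental runs
awakenings = 0          # Thirder denominator: awakening episodes
tails_awakenings = 0

for _ in range(runs):
    tails = random.random() < 0.5   # fair coin toss
    if tails:
        tails_runs += 1
        awakenings += 2             # two awakening episodes on Tails
        tails_awakenings += 2
    else:
        awakenings += 1             # one awakening episode on Heads

print(f"P(Tails) per run:       {tails_runs / runs:.3f}")              # ~ 1/2
print(f"P(Tails) per awakening: {tails_awakenings / awakenings:.3f}")  # ~ 2/3
```

Both frequencies fall straight out of the same protocol; which one deserves the name "her credence" is the substance of the dispute.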
It's not something spooky influencing the coin that makes SB's credence in the outcome shift. It's rather the subsequent events putting her in relation...
I'm with @"Joshs" but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more...
Oftentimes it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instanc...
Indeed. You'd need to ban personal computers and anything that contains a computer like a smartphone. The open source LLMs are only trailing the state...
Yes quite! This also means that, just like you'd do when getting help from a stranger, you'd be prepared to rephrase its suggestions (that you underst...
This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e....
It isn’t a different problem; it’s a different exit rule (scoring rule) for the same coin-toss -> awakenings protocol. The statement of an exit rule i...
Yes, that is a very good illustration, and justification, of the 1/3 credence Thirders assign to SB given their interpretation of her "credence", whic...
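The counting behind that 1/3 can be written out explicitly (assuming a fair coin and $N$ runs of the experiment):

```latex
\text{Expected awakenings over } N \text{ runs: } \;
N\left(\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2\right) = \tfrac{3N}{2},
\qquad
\text{Heads-awakenings: } \tfrac{N}{2},
\qquad
P(\text{Heads} \mid \text{awakening}) = \frac{N/2}{3N/2} = \frac{1}{3}.
```

On the Thirder reading, an awakening episode is the unit being conditioned on, and one in three such episodes is a Heads-awakening.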
Rather, the premiss I'm making use of is the awakening-episode generation rule. If the coin lands/landed Tails, two awakening episodes are being gener...
There are no other flips. From beginning to end (and from anyone's perspective), we're only talking about the outcome of one single coin toss. Either ...
Well, firstly, the Halfer solution isn't the answer that I want since my own pragmatist interpretation grants the validity of both the Halfer and the ...
Let me just note, for now, that I think the double halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimi...