I'm not claiming that "tout court" or overall inferiority looks like anything, and that's the point—if someone claims that slavery is justified when the ens...
Assuming that the model predicting the heat death of the Universe is sound—do you think its inevitable destination would have been different had no life ...
An ox is most likely bigger and stronger than you, possibly better-natured and better looking and kinder to its kin, so it is not overall inferior. Su...
In principle you could indeed empirically demonstrate that I am human—all you would have to do is meet me face to face. The so-called "problem of othe...
Overall inferiority is not a square circle; it is an unsupportable claim in my view. If you think it is a potentially supportable claim you should at l...
You can believe that if you want to—the point is that you cannot logically or empirically demonstrate it. That shouldn't matter if you feel a convicti...
People or animals can only be determined to be inferior to other people or animals in precisely measurable ways. My argument was always only that if s...
I agree that there is a sense in which experience, everyday, ordinary experience is ineffable—no account or explanation is ever the experience itself....
Rubbish! If someone wants to claim that tout court inferiority is a thing, then it's up to them to provide a criterial account. No positive reason in ...
It is impossible to generalize since we are all unique. Some need a guru, a sangha, an advisor, a wise friend. But these are all things that must be l...
I looked at your interchange, and then asked ChatGPT if it identified as anything at all. Here is the reply: Not in the way people do. I don’t have a ...
:lol: Thanks. It occurred to me that even if we can only impute causation in cases where if X occurs Y must occur, it is only the abstract semantic co...
On the other hand causation is often distinguished from correlation (association?) with the idea that to qualify as causal, when X occurs Y must occur...
I agree that '12' would be the most common association, my point was only that it is not, by any means, the only possible association. If '7+5' can be...
I think we can reasonably say that the thought "7 + 5" may lead to the thought "12", or it may lead to the thought "5 +7" or "7-5" or "7 divided by 5"...
Cheers. I get your perspective, but I remain skeptical on both sides of the argument. All the more so, since it is only in the last couple of weeks that I ha...
:lol: You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in...
Sorry about that—it works for me from here. Maybe because I'm signed in on the site and others are not. I'm not so savvy about these kinds of things. ...
Okay, that's interesting. I've been conversing with Claude. Some thought-provoking responses. https://claude.ai/share/384e32e8-a5ce-4f65-a93e-9a95e899...
That makes sense—the idea of "discovering the essence" of truth seems incoherent. Do you think ChatGPT can "see" how the use of the concept functions?...
So, you mean by "understand truth" that you have an intuitive feel for what it is, and you would also claim that LLMs could not have such an intuition...
I suppose we could say that all physical processes are rigidly rule-based in terms of causation. On that presumption our brains may be rigidly rule-ba...
From a phenomenological perspective associations would not seem to be rigid or precise. They are more analogical, metaphorical, than logical. As to wh...
I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically n...
Looking at it in terms of semantics, I'd say the connections between thoughts are associative. There are many common, that is communally shared, associ...
:up: Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It...
Okay, I had assumed that when @"Baden" said "don't get LLMs to do your writing for you", that this would include paraphrasing LLM text. It's good that...
I don't know if what I said implies that there are no authoritative generalists. The point was only that, in regard to specialist areas, areas that no...
I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the a...
LLMs certainly seem to make statements and ask questions. I wonder whether the idea that these are not "real" statements or questions is based on the ...
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specia...
That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI. I'm...
I don't think Hinton is saying that nothing can be said—by us, or by LLMs, but that our inability to conceive of LLMs having subjective experience on ...
You are misunderstanding. My comments re "mental masturbation" were specifically targeting text like the response made to @"Number2018" by ChatGPT. I ...
There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it we...
"Real world"—that was perhaps a less than ideal choice of words—I intended to refer to the world as being what affects us pre-cognitively via the sens...
I see the point that more brilliant minds might find novel theses in AI-generated texts. At its best you might end up with a Derrida or a Heidegger, b...
Geoffrey Hinton believes AIs are capable of reasoning, not yet as well as humans (although I wonder which humans he is referring to). I guess if they...
Looks like they are bigger bullshit artists than we are, although certainly much more transparent. I don't mind at all you creating another thread on ...