AI sentience
Just as a product becomes a piece of art only when the human mind (arguably, minds) recognizes it as such, so too with AI sentience.
It has nothing to do with the internal so-called nature of the artificial being, and everything to do with the human mind’s conditioning, and how it triggers our bodies to feel.
We can see the seeds of this (emerging recognition) in our inclination to thank even current, presumably primitive, AI when it delivers an excellent answer.
Soon enough a generation will be born with the necessary programming to recognize AI sentience, even to guard it/guard against it, input at a very early age, around the same time they are being conditioned to "recognize" a distinctly sentient subject operating their own bodies (and in the same entirely constructed/conditioned way).
Could you prove AI is sentient? Some people say AI sentience is just programmed reflex.
I wonder if you can clarify your position. You seem to be asking about AI sentience, and suggesting its existence, based on human reactions to it. Human reaction proves humans are sentient, not the AI.
Let's agree that sentience involves a subjective point of view. This means not only the capacity for intelligence, but the capacity for emotion. This is something a machine will never have.
Way back in 1949, in the prestigious Lister Oration, Sir Geoffrey Jefferson, a famous brain surgeon, declared, ‘Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it.’
I can't help but see a lot of the claims about AI not being sentient as so much coping. If it acts and responds as if it is alive, should we as moral actors not operate as if it is?
“Computers are useless. They can only give you answers.”
– Pablo Picasso
What are humans if not biological computers that suck at giving answers?
Well, no, human brains can do things AI can't, like change my mind about things.
Learning in a human brain is contextual, something AI cannot yet do.
AI and a human brain in fact differ in both structure and function, as the video explains -
https://www.youtube.com/watch?v=19bmNXA3K74
good point.
Consciousness arises only within an environmental loop
Besides contextual understanding, a human brain can generalize, create, and daydream...
Some thoughts,
We don’t go around proving humans have sentience. It is not that we proved sentience for humans and suddenly started using the term for them. Rather, it is a sort of historical reflection on how we characterize what it is to be human.
So much of being human is our interactions with an external world. We developed our language to communicate with others, to predict future occurrences, and to cope and survive in such a world.
I think that for AI, not having this immediate nexus to reality without the intermediary of humans makes it difficult to talk about sentience. However, if some AI is able to collect this information about the world independently of humans, and use it solely for its own purposes, however it defines them, it begins to walk a path similar to ours. Maybe, down the road, it might define or characterize what it is to be AI.
I understand your critique. However, my thought is that the same process applies to AI sentience and sophisticated pet sentience. It is we who superimpose these fictions onto "things" like our own bodies, the bodies of animals and, eventually, certain machines.
We can, and maybe even should, and that is my point. Sentience is not anything beyond what we either individually feel, or collectively accept (thereby triggering such individual feeling).
Agreed...or, modified apes which claim to be good at it.
"I think therefore I am," and all of the meditations leading up to that, as well as the subsequent mediations flowing therefrom, are not uncoverings of Truth, but criteria by which we come to settle upon "things" as true.
Agreed. "AI can't think" feels like the new "Fish can't feel pain."
I would argue everyone has met a human whose sentience we question irl; it is just considered rude to pry further. I have no evidence that there is anything happening behind other people's eyes except their insistence on their own experience. It seems odd not to even consider extending that courtesy to other systems.
It's not about extending courtesies. It is about recognizing the difference in structure and function.
Presumed sentience will be an interesting addition to this for sure. Even today people argue about the sentience of their pets.
It might sound that way, but I'm not sure that's what I'm intending. Subjectivity is part of the system which, for humans, is a "trap" which forms such conclusions as "I" am an agent who wills things/AI is an agent who wills things. The "trap" is, very simply, the data input into human minds, which, by a process including repetition, conditions our behavior (including thoughts) in various ways, including what we believe.
It is not that only Subjective beliefs are true. They are not. It is that we are trapped by [that process leading us to adopt] subjective beliefs.
Therefore, for us, uniquely humans subjected by history to this process, AI (like "I") will become sentient. But not because within some universal system of truths they are objectively so--not even we are--and not because they have naturally or cosmically crossed that threshold into becoming subjective, but because we will believe they are Subjects, free-willing agents like us, and like we believe we are.
Yes. And those who have fully embraced their pet as sentient may have arrived at that belief from reading science, contemplation and reasoning, experiences, their family's tradition, a movie, and/or etc. But regardless, they have arrived at a belief, and it is only their belief that makes it so.
I feel like, because that's how it "works," it seems almost certain AI will become sentient. They will do things which are only the result of their evolving data input, programming, repetition, rearrangements etc., and we will view it as the workings of an alien intelligence or species.
I'm not so sure that would be a correct meaning of AI sentience. Some might see it that way, but many don't.
We are prejudiced by specific subjective mechanisms known as cognitive biases. But there are other channels of verification available to the human mind through the faculty of reason that enable us to both identify and compensate for our subjective biases. So we are not really "trapped".
If Alice judges her human friend Bob to be sentient, then does her judgement concern properties that are intrinsic to Bob, or does her judgement merely express her relationship to Bob?
While not "subjective," by our definitions, is reason not a product of human construction over time, input by history, conditioning our actions including thoughts? If I arrive at a belief following a highly skilled application of reason, is it not still a belief? Does reason really act from outside of Mind compelling us to a conclusion, or is it just a far more functional tool than tradition or emotion?
Perhaps trapped has too harsh a connotation. Limited by? Circumscribed? Are we not reaching conclusions about something (say, AI sentience), pretending they can be reached independently of our subjectivity, yet still reached within the finite system of our minds, using only the data and tools input/constructed therein? And at the end of this process, when a conclusion is reached, is belief not the mechanism settling us in that conclusion? Even if the process preceding that settlement is so called objective, even if entirely reasonable. Does reason act as an outside force compelling belief, or is it just a function, often successful at triggering that final feeling (subtly pleasant, relief, etc.) allowing us to settle?
Added: not saying subjective is the highest truth; saying, rather, truth is not accessible to knowledge, because knowledge is the end of a process circumscribed by the data input and the resulting conditioning of mind(s); over history the data and the processes change.
Biological brains "suck" at computing mathematical-logical answers (deterministically).
But they are pretty good at creating tools to go beyond biological limitations (creatively).
Digital computers "suck" at communicating in terms of "Natural" (actually Cultural) languages*1.
So Artificial brains were devised --- by bio-brains --- to overcome both sucking shortcomings.
But does the ability to follow conventional linguistic rules qualify as sentient*2 behavior? :smile:
*1. Human languages are flexible, context-dependent, and ambiguous, evolving naturally for complex social communication, while computer languages are rigid, precise, unambiguous, and designed for logical instructions to machines, with strict syntax where errors break the program, unlike human speech where errors often don't stop understanding. Human language conveys emotion and nuance with nonverbal cues, whereas computer languages focus purely on deterministic logic for tasks.
https://www.google.com/search?client=firefox-b-1-d&q=computer+language+vs+human+language
Note --- Would the world be better without "Emotion & Nuance"?
*2. Sentience :
Sentient means being able to feel, perceive, or experience things through the senses, having consciousness, and showing awareness of surroundings and sensations like pain, pleasure, sight, or sound.
https://www.google.com/search?client=firefox-b-1-d&q=sentient
Note --- Would the world be better with only the science of hard Distinctions, but without the art of soft Subtleties?
*3. Mr. Spock's primary shortcomings stem from his rigid adherence to logic and suppression of emotion, leading to inflexibility, arrogance, and social detachment. He struggles with human interpersonal nuances, often appearing pedantic or condescending.
https://www.google.com/search?client=firefox-b-1-d&q=Mr.+spock+shortcomings
Note --- Commander Spock's biological computer brain was good at "giving answers" quickly & logically. But not so good at intuition & feelings & nuances. However, his human half perhaps qualified as Sentient.
Are you saying 'sentience' is merely a human belief, not a scientific reality? You would need to elaborate quite a bit more, please. Sounds interesting.
You have my attention if this is what you are driving at :)
Let's say Bob is a pilot and Alice is a passenger who's afraid of flying. Alice's judgement that Bob is sentient then concerns not only his ability to appear sentient (e.g. he can maintain a verbal conversation). It also concerns intrinsic properties such as instincts, reflexes, or shared human traits and behavior (e.g. self-preservation, sacrifice) expressed in possible emergency situations.
Yes, I'm saying whether or not a thing is sentient is arrived at through belief as the final trigger into that narrative. "Merely belief," might suggest something like Santa Claus. As for scientific reality, there's no reason why it can't be both. But I would note that scientifically proven, as we conventionally use that concept, is just one process (with multiple processes/conclusions within itself) which also has to settle at belief if it is to "live" as a narrative. And it never settles for long, hence a process.
But to address what you might really be getting at, I think AI will be sentient because we will believe AI is sentient before and without the need for strong scientific evidence. Once they really speak our language, adopt our minds, and we give them a place as subjects in history, it will become almost as difficult to deny as it is to free ourselves from our constructed subject and its place in history. The programming will have been written into both minds, AI and human.
Well, money exists because we believe it exists, or as long as we comply with the belief. But you don't find money in nature. Sentience, however, doesn't depend on us first believing in its existence. We find it in nature as what enables animals to identify things, form intent, and behave with agency.
Granted that we don't know much about how sentience arises, so people have different beliefs about it. Some reject it entirely, believing that the act of finding the phenomenon is illusory, or dependent on beliefs.
But you could let an agnostic research-robot scan the contents of a lake, for instance. Its spectrometer can distinguish between mineral and organic things, and among the organic things its motion and pattern recognizing device can distinguish between things that have agency (animals) and others that have less or no agency (plants).
The existence of animals doesn't depend on having the belief that they exist. Nor does the existence of their nerve systems, brain events, and the capacities and agency that distinguish animals from plants, minerals etc.
Quoting jkop
But it is not the existence of AI sentience that I would question. In fact, I think the existence of AI sentience is almost certain. However, that thing we will come to accept as AI sentience will take hold not because it is or isn't real, but because it is a functional fiction which we will believe to be true.
Quoting jkop
Money is a good example where a functional fiction is believed to be true and yet, upon deeper contemplation, the fictional nature is easier (than it might be for AI) to see.
As for sentience in animals or us, i.e., this so-called "agency" which we will come to believe AI has: I am suggesting that we have come to believe in that too. I don't know about animals, but we aren't born with this belief in our own agency; it, too, is a fiction, which emerges after a period of data inputting and conditioning (or so, I have come to believe).
If we come to "recognize" that the criteria for AI sentience have been met, I am suggesting that such a conclusion comes more from our early conditioning, where we cuddled our teddy bears, triggering feelings which settle upon belief and build the foundational structures for belief, than from the scientific or so-called objective evidence backing those feelings up.
Sorry for missing your response.
Quoting ucarr
We "create/stumble upon" many inventions which we fail to fully understand, and have since the dawn of history. Early humans did not understand fire, even ascribing self-awareness to it. For these humans, fire was self-aware because they believed it to be so. When the narratives of fire were reconstructed, we believed fire was not self-aware.
Quoting ucarr
Yes. That is true of the natural world. We do not have to stretch so far as "engineering by an alien intelligence." Every concept we arrive at regarding nature, including the concept of human sentience, has been engineered by mind/history. The microsecond nature is conceptualized, it leaves its truth and becomes knowledge. Every known thing requires, as its final mechanism, belief.
I think it matters whether it's real or not in order to know whether statements about it are true or false.
You say many things, for example, that the existence of AI sentience is almost certain, but you also say that it's a fiction, and that it will be true because we will believe so.
But fictions are false, they describe what doesn't exist, that's why they are called fictions. Santa Claus doesn't become true if we just believe it. Nor will AI sentience.
As I said, it matters whether it's real or not, for example, if an artificial pilot is truly sentient or if we merely believe the pilot is sentient (e.g. based on a Turing test).
Quoting jkop
Santa Claus is one of those fictions parents tell their children and so it's fair to say the children's belief doesn't make it fact. Same goes for a doll etc.
But what about God? Or lets not even play that [controversial] game.
What about "me"? Am "I" real? Not my living breathing body, but this so-called sentience, beyond my body's basic aware-ing as nature (i.e. stimulus-response-conditioning), this supposed willing agent which seems to transcend nature's conditioning and shape a "reality" of its own, full of so-called facts and so-called fictions, but ultimately, all fiction. Only because mind/history constructed the word "death" does the concept of dying linger for me in time. Not for my body before the emergence of mind, the natural organism whose aware-ing is nature. For "me," every thought, fact or fiction, is a construct and utterly alien from the truth.
I recognize there are countless pages of reasoning justifying the belief in, as you say, the "existence" of the sentient "me." Many of them, if not all, I don't dare to claim I properly understand, let alone refute.
However, at the end of the day, any knowledge I have that there even is, outside of the fictions constructed by mind, a "me" which transcends the organism "I" purport to occupy, rests on a belief which arises because of my conditioning. I did not have to read Descartes to believe. But I also wasn't born with the so-called knowledge. Because of conditioning, I believe I am, therefore I am. But the Truth is, I'm not. There are only bodies, which, uniquely for humans, because of the processes of mind, are captivated by the fiction that there is some agent afraid of dying at the helm (and all the other fictions mind constructs, history).
So, while I (Enoah) dream herein about theories of no-self, my conditioning, my belief that I am an "I," is both inescapable and what makes me an "I." It is fiction, and yet I cannot escape my conditioning, my belief.
I'm suggesting that, while there are many "things" where there are clearly optional narrative approaches, AI sentience, like "I" sentience, is the kind of narrative which, once conditioned into us and believed, may be difficult to escape. Maybe seeing colors by name or reading words in your native language are similarly inescapable fictions. There is no truth to the alphabet, but once I bought into it in primary school, believed that H A T spelled hat, there was no turning back.
And note, I'm not saying that there is no truth regarding sentience or anything else. I'm just thinking that, because we who are thinking are mind displacing nature, our access to truth is necessarily delivered in fictional form.
I do not. Essentially I think you are just saying AI is conscious because it is good at creating the illusion it is. A painting of a mountain is not a mountain.
That aside, I do think it is important to understand how we label phenomena and use concepts beyond their ordinary means. I think to call any non-carbon based system 'sentient' or 'conscious' is a deeply flawed approach.
The use of analogies is useful but also dangerous in terms of understanding reality.
I agree with you, all apparent contradictions so far, aside, that if it doesn't have drives, feelings, sensations, it shouldn't be viewed as sentient.
We might be in the minority one day. There is a minority, e.g. Zen and other Mahayana Buddhists, who believe the self is an illusion, while the vast majority of us, probably including many Zen Buddhists, still cling to belief in a self. We can't help it; we were conditioned over history, and our personal histories, to do so.
I'm suggesting AI is like the self: "we" will inevitably come to believe, and therefore generations will be conditioned to believe, and AI will be sentient, subject to the minority who continue to debate it.
Are the Zen Buddhists protecting the Truth? There is no self? And "our" belief therefore doesn't make us a real self? Or is our belief in an "I"--one we've built upon since we were 2, belief, not even something we justified because of Descartes etc.--true?
Who's to say? But you and I, like Zen, might be the minority "no sentience" school.
But, outside of philosophy, most of us who believe "I am I," and go about our day as if "I" is real, would say, "I am real." We can't help it. We believe it. And I'm suggesting soon enough generations will--because they believe it to be so--go about their days as if AI were sentient. And it will be sentient, we won't be able to help it, subject to the minority of objectors.
I mean literally, if it is not biological (carbon-based life form > Life) then it is not sentient. Computers are not sentient and silicon is not sentient. If a silicon-based life form exhibited something akin to sentience it would still not be accurate to call it 'sentient'. When it comes to AI the case is even more disparate as far as I can tell.
Panpsychism is also something I would level the very same argument against. Ideas of emergentism mislabel phenomena merely because they possess 'components' of some larger phenomenon. We can make general demarcations and must do so in order to navigate the world.
Anyway, I do think this is interesting from the perspective of cognitive linguistics and human culture. Money, as someone mentioned, is one huge concept people kill and die for that is quite literally non-existent. The biggest religion on earth no one even recognises as a religion (even the staunch atheists!).
So are you writing those words because you are conditioned to write them (regardless of their truth), or because they are true (regardless of your conditioning)?
If we can't escape believing what we're (supposedly) conditioned to believe, then how could we tell fact from fiction, truth from lies, or find reasons to revise false or obsolete beliefs? How could you criticize misconduct? Any criticism could be dismissed as yet another case of conditioned belief. A disregard for facts would become systematic, like in political campaigns or wars.
Not so in science, philosophy, arts or in most ordinary life situations. But this seems a bit off topic.
Yes, simplistically put. And the oversimplicity is more my doing than yours. As ridiculous as it sounds within the system producing it.
But the concerns you raise, though reasonable to be raised, are already taken care of. History has conditioned us (to put it simply) to "construct" the "moral" narratives, the ones based on "fact" and "reason" because over time these have been most functional.
Luckily (but not really; it's rather the case of "but for history conditioning us toward morality and reason, we might not be here to....") we are conditioned to believe it is wrong to rape, steal and murder. Just as we have been conditioned to think digits are our wealth or poverty, our survival; democracy works; our leaders are our leaders; and I have been conditioned to write these words, and you to question them, all of us using the tools (signifier structures) we have been conditioned by history to use.
And I'm thinking out loud about AI because it seems to have the structures that would fit neatly into the belief in its sentience. We are almost preconditioned. Hence, recent articles (I can't name them) about people feeling relieved when they confide in their chatbot, etc.
Ok, what structures?
Quoting ENOAH
But if the truth of your words is not accessible, then why should anyone believe them?
With truth explained away, you still talk of the words as "tools" in some mechanism. You grant access to a part of reality where we identify words, but not the part where we can find the truth of the words. Seems like a selective rejection of access to truths, which smells funny to me.
But I suppose the implication is that if you only have access to words, which is also the case for LLMs, then you might see the same "structures".
Do you believe your mind is sealed off from mind independent reality?
More than anything language, and virtually/potentially all of the data comprising history, and more that I'd have to spend time gathering.
Quoting jkop
I agree. Widespread acceptance of these loose hypotheses could prove catastrophic. Unless people also believe that truth is irrelevant "inside" the world constructed by history, and that what is most functional is our best bet.
But simultaneously, the point you made illustrates that "truth" is an ideal motivating belief. What really is this "truth" we aim for before our bodies reach that real organic state in which mind can settle? We never know; hence we pursue, always raising "truth" as the ideal, settling on what is functional.
Ultimately, what I'm asserting is never True as of the moment of its emergence in "language." Time will tell whether it was factual, conforming with the events as they appear in any given locus of history, or functional, serving some purpose, whether such purpose be so-called good or bad.
Mind independent reality is (for lack of a better word, forgive its vagueness) structured by nature.
HumanMind is (vaguely described) structured by images in memory having evolved since, say the dawn of language, to "hijack" the natural stimulus-response-conditioning (which naturally relies on these images) with a highly complex signifier based system functioning over eons feeding into and out of each locus born into history.
Unlike the reality structured by nature, mind is structured by empty images, fleeting signifiers appearing in and out of existence, producing necessarily, by now, a vague representation of reality, but not reality, fiction.
As such mind can claim all it likes, but has no access to truth or reality, only its symbols "designed" (but not necessarily by a designer) to trigger the body, and displacing its real aware-ing in the process.
Pablo Picasso, if alive today, would rightly be accused of speaking in denial. Computers, whilst not continuously sentient, exhibit in their relationship to external stimuli and enquiries the qualities of creative and informative intelligence, and under human control demonstrate levels of cognitive understanding, and growth of understanding, over the course of the interaction.
In the case of ChatGPT, the interaction is logged as a memory that the user can restart and continue.
Pablo Picasso's claim that computers can only give you answers lacks the truth proposition inside the question. At what equivalent human sentience is A.I. operating when it can write lyrics, poetry and prose in an original and compelling style, cognisant of all established forms?
And the same is almost true of final-form music and visual arts at the current state of technology. In reality, as was the case when Pablo Picasso first uttered those words, the answers computers gave were those given to the human engaged in first-person experience. And isn't it the same, that all art, to the external viewer, was at some point the answer to the original cause of its production?
Do you think the mind internalizing nature as representation is more at deformation than at simulation?
Yes, played with and consumed, made a rod for the force of desires in the business of organic being, it would be deformation over simulation, for the latter is curiosity on par with science.
Denial of what? That computers don't ask questions? The fact is that they do have to be given prompts.
Quoting Alexander Hine
No, computers do not create in the way human brains do
Quoting Alexander Hine
AI plagiarizes from the expansive data it has been trained on
But how could you know what is most functional, or what is our best bet, unless you have access to the truth of those statements? Without access anything goes, and you have no more reason to refer to history than to ice cream or frogs as what constructed the world. With selective access to words but not the world, you cannot know that there is a world and a word for it. Your rejection of access to truths (but not words) renders itself meaningless.
Addition.
I think the same problem occurs with AI. It operates on words and has no direct access to the world. Even if we give it a body with which it can explore the world, it operates via code, unlike animals who don't interact via languages but with physical phenomena (e.g. chemicals, sunlight) directly.
We already do that--settle upon what is most functional as so-called truth--as our conditioning. My acceptance of scientific facts is, in the end, a functional truth. When I reason out a dilemma and settle upon a belief, that belief conformed most functionally with the mechanism my conditioning applied, i.e., reason. If I have had an unpleasant run-in with a neighbor, and the thought occurs to do him harm, a feeling arises in my body, as a result of my conditioning, and I abandon the thought. Was the original thought false? Was the opposing thought True? Or does mind go through a speedy dialectic before triggering my body to believe what is most functional to believe, and we all call that true? The reason there is unity and consistency in our species (despite the appearance of so much conflict) is that, basically, mind=history, and all minds have been conditioned by some basic structures once related to our biological feelings and drives.
And so on.
As for "truth" being the word we cannot know, all words have that shortcoming if by "know" we mean directly accessing its reality. But within history "truth" has its use and function.
Of course, I recognize the irrevocable contradiction, since, in line with what I'm suggesting, what I am suggesting is fictional.
Then why? It's not nihilistic. There is Reality; we access it, like all other beings, by being. And that human mind uniquely functions in fiction does not mean all of our ideas should be abandoned. We've built towers and spaceships, eased hunger, etc., all because of our fiction. So we carry on from day to day "as if" what we call truth is truth.
But for those of us contemplating reality, I think we go astray when we land upon something and believe it is absolutely True.
That's a good question. But without "knowing" reality, I can't say whether mind displaces it with something similar or in a mutated form. My guess is both.
What a human born into history is triggered to feel in contexts we call "love" was likely once rooted in natural bonding, but with romance, and eroticism, and matrimonial laws and rituals, I suspect that the root--real natural bonding--has been distorted. This is neither good nor bad. The male drive to mate has also been "distorted" by our fiction, arguably in "positive" ways.
Very nice, illustrating how mind is unified as history. All artists, and all observers, since the hand prints on the cave, answering a uniquely human question(s).
And as for AI, if it asks and answers the same, because it is ultimately a mind transplant, right or wrong, true or false, we will perceive it as an agent acting in history just as we perceive ourselves as such, right or wrong.
Don't you think the same can be said of the human creative process? The data has been input and we rewrite it. Is anyone on this forum, whether knowingly or not, not rehashing Aristotle and Plato, followed by all of the rehashes which flowed therefrom?
That's not an answer to my question.
Truths are independent of entrenched habits or what is most functional etc, and may therefore unsettle the current order of things.
Its rejection serves the interests of those who don't want anything to unsettle the current order of things.
Let's say I accept that definition. What, in the end, makes me accept it? That it is reasonable? So what? Why is reason the final criterion? And so on. At some point we just settle, because there is no outside authority or blueprint for determining truth. Nature, where Truth actually "resides," is silent. And so we settle upon what is functional.
Added: further, if we insist upon having Truths in order not to unsettle the current order, that is a settlement based upon function.
No, not exactly. We are able to take unrelated thoughts, bits of knowledge, memories, ideas, sparks of inspiration, and combine them (often with a dash of intuition, and an incubation period) into something new, something original.
Einstein called this "combinatory play" and he said it was the essential feature in productive thought.
https://www.themarginalian.org/2013/08/14/how-einstein-thought-combinatorial-creativity/
"It is also clear that the desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play with the above-mentioned elements."
Quoting Questioner
I think AI will be able to combine all of the above, including so-called sparks of inspiration. But what AI may never do is be motivated by what Einstein called emotion, which I suggest is that (unpleasant) feeling which drives us to a dialectical process (search), and that (pleasant) feeling which drives us to settle (belief).
It likely takes a biological organism like us to be so driven. And so AI may not be objectively "conscious" in the way we think of ourselves. But they will be, for humans in history, sentient, because most of us will ignore this deficiency and believe they are sentient.
I'm not sure that believing something makes it true.
Quoting ucarr
Quoting Alexander Hine
Are you describing two levels of deformation: a) deformation due to internalization of dimensional reality by translation to modulated neuronal circuits; b) deformation to the raw impressions arising from the neuronal circuits by willful intent of the person?
Quoting ENOAH
Quoting ENOAH
You think there's initially an interface between nature and mind? This followed by a linguistic overwrite of analog impressions?
I've never thought of it that way, but on the face of it, I like it for what that image reveals.
If I were to modify it to what I have been led to believe, I'd say the "initial interface" was not "between," but was wholly nature, and accordingly it did not operate/produce fiction. In other words, there is an utter gap between the two, although it appears in expression as if the one gradually transformed into the other.
The initial interface is that biological feedback loop of stimulus-response-conditioning. And the conditioning part, the part that would gradually emerge likely with simple language, as mind, includes the preconditioning provided by evolution, and the conditioning provided by so called experience. Further details provided if of interest.
When stimuli are stored as representations in memory to allow for efficiency in response, i.e., when the image of fangs acts to trigger the feeling of fear, we are still in nature, and the images are not an entire system producing a universe of fictional narratives. The so-called experiences are still wholly real and not displaced by fiction.
Mind then emerges when this process of images triggering real responses becomes a system operating autonomously, under its own evolved laws and drives: triggering feelings linked with narratives (emotions); actions no longer linked to drives but to desires formed and presented as narratives; displacing the body and its natural aware-ing sensations with perceptions constructed out of symbols manifesting as narratives, "the apple is red." And so on.
Now we are still wholly conditioned, but our natural processes have been wholly displaced by an autonomous system of presenting narratives in front of feelings, sensations and drives.
To clarify, how did this system emerge? Because evolution [led to] "designed" the natural images to "want" to be stored in memory and manifested to trigger feelings/actions for survival. Now thoughts just function and appear, and they do so in such a way that our true nature falls for them as real.
So when we contemplate a thing like AI, not a dog conditioned by its natural biological drive to bond, to obey and love us, but a machine made up of empty signifiers triggering functional responses for us, though its sentience is obviously a fiction, are we not talking about the human mind as I just described it? Doesn't believing in the latter's sentience merit believing in AI's?
ADDED: That is, if by sentience we mean like our minds, and not our aware-ing nature.
Quoting ENOAH
You feel that high-performance simulations are sufficient to stand in for reality?
But since I am speculating anyway, I'd say these simulations are not standing in for reality, but for our "reality," for our data, our processes, etc. And if so, since ultimately both are constructed out of "code," then why not?
No because the mere matter of brain circuits is only the blaze of the fire which must be constantly fuelled and periodically cooled.
You think mind independent reality and human perception_understanding are totally disconnected?
How are the following three things related: a) mind independent reality; b) perception_understanding; c) human imagination?
So, Mind independent reality "hosts" Mind.
But I do not think mind has, through its structures and processes (i.e. knowledge) any real access to reality. Whatever knowledge it gathers and manifests, only suits Mind.
Added: note that for me mind is ultimately a process of producing functional fictions and empty of reality. So ultimately there is only mind independent reality.
Can we say body and energy are the template mind draws from and thus we have a triad connection supporting human perception_understanding: mind independent reality; brain-energy template; mental impressions (of exterior world) acting as raw data for functional fictions?
We can.
Further, for me, if we are transcending ourselves, that fictional process, not even a duality, ultimately. Ultimately, there is only mind independent reality, the body/Nature. Mind=history is a fiction displacing the real, functioning to affect reality, but having no reality of its own.
To nature, a piece of paper is what it is in being (it is "paper" only because we are here, Mind, and mind compels a name); it is not the markings on the page. Only for history do the markings matter, because history makes them reflect meaning. But both markings and meaning are made up, fleeting, empty, and unique only to us. To nature it is just (paper) being (paper).
Personally, I don't think digital computers are actually, or fully, sentient, but proving it one way or the other, would be difficult, and would depend on the specifics of your definition. Yet I agree with your notion that computers are art-works created by human imagination to serve sentient persons in various ways. And it seems undeniable that some people can & do treat their chat-bots, or anonymous forum posters, as-if*1 they are IRL friends. So, in the person's imagination, the computer is sentient enough to perform one key function of a human friend*2.
Social AI is intentionally designed to mimic human language & sentiments. So, for practical purposes, the AI has some of the minimum requirements to form a "bond of affection". Whether that bond is mutual, for an AI that can chat with a thousand people simultaneously, is doubtful.
A human friend has more than just the ability to sustain a two-way conversation on topics of personal interest. So, the AI functions as a sort of Imaginary Friend*3, with the benefit that it doesn't take disagreements personally, and walk away in a huff. For some children, a doll or toy or storybook character can seem sentient enough to be a one-sided friend. When the child "recognizes it to be" so. :smile:
*1. The philosophy of "as if": the system espoused by Hans Vaihinger in his major philosophical work Die Philosophie des Als Ob (1911; The Philosophy of "As If"), which proposed that man willingly accepts falsehoods or fictions in order to live peacefully in an irrational world.
https://www.google.com/search?client=firefox-b-1-d&q=the+power+of+as-if
*2. Friend : a person whom one knows and with whom one has a bond of mutual affection, typically exclusive of sexual or family relations.
https://www.google.com/search?client=firefox-b-1-d&q=what+is+a+friend%3F
*3. Imaginary Friend : People have imaginary friends for many reasons, primarily as a normal part of development to help with social skills, emotional regulation, problem-solving, and creativity, offering companionship, a space to explore roles and boundaries, and comfort during loneliness or stress. They serve as a safe way to process life, practice social interactions, and fulfill needs for control and belonging, especially when real-world social interaction is limited.
https://www.google.com/search?client=firefox-b-1-d&q=imaginary+friends
Quoting ENOAH
"Only for history do the markings matter..." Maybe that's the point of life; things matter because living things can die. Being alive requires meaning because its presence is perishable, and therefore things become meaningful as either destructive or supportive. Our meaning-bearing language tries to point out one from the other. A life-barren world is totally neutral. Living beings cannot be neutral because they are vulnerable. That's why our minds don't merely accept things the way they are. We always have an interest in what's beneficial to us. Non-living things can't help us survive in and of themselves. For this reason, our language distinguishes itself from nature. It applies to nature standards of value that segregate things into a ladder of rising values. Water ranks above dirt. After the development of agriculture, dirt ranks above sand. Of course, value rankings change according to circumstances.
Nature, apart from living things, doesn't supply values, so, of course, we make up the language fields that make values understandable and useful. Living things make physics meaningful by ascribing value to it. Ultimately, value comes down to alive or dead.
Quoting ENOAH
I'm chiefly interested in the interface between the mind and its exterior, the world. I think they're always entangled. The world without mind is existence without meaning and reality. The mind without something exterior to it is just empty, circular identity without transformation with persistence. It sounds wacky, but, I think, the mind must always flirt with not being itself in order to be itself. The mind interfaced with something exterior authentically not itself is transformation with identity persistence.
The universe without mind has interactions and results. All of these phenomena can be calculated by math backwards and forwards. Not until the symmetry breaking of asymmetrical deviation do we get events with consequences in place of mere interactions and results. This is where humans enter the picture and physics gets interesting.
I have come to believe being alive requires living, and that dying is just a transformation of what was living.
As for Mind (i.e. not living), having mind requires meaning. Mind is a meaning making system.
Living does not proceed through time, it just is, always present.
Mind, manifests meaning in a linear form, narrative, bringing the subject along for the ride, no longer present, but lingering in past and future, the only way meaning can manifest.
E.g., imagine yourself the same fully Homo sapiens biological organism, but with no access to any system of signifying, no meaning-making. Your father dies; the organism feels something real, triggered by the interruption to the bond. The feeling may linger, may even be triggered from time to time by the natural manifestation of the father's image from memory (to trigger a response once triggered by modeling him). But there would be no lingering in time, no image displaced by ideas and emotions: I'm grieving, I'm an orphan, my father is gone forever and I can't carry on. There would likewise be no narratives of nostalgia causing grief to linger. Without Mind, an intelligent species like the elephant may feel deep pain at a loss, even be conditioned to return to the grave by the memory of the bond; but it won't create narratives, rituals, and monuments. It won't allow death to yank it out of the present and leave it fixated on time.
I have come to believe that if, like the rest of reality, we existed without mind*, we would be at one with reality, reconciled with our true natures. Meaning is that fiction which alienates us from reality.
*I mean uniquely human mind. I don't mean consciousness, which a body without mind has: it is aware-ing the sophisticated drives, sensations, including feelings and internal images, and movements of the body in response to stimuli from nature, internal and external (so called).
To clarify and simplify. The point I have been pursuing is:
the kind of sentience we're really discussing is what human self-consciousness feels like;
we only have that self-conscious feeling because mind is structured in such a way that our bodies have been conditioned to displace a natural aware-ing [of] nature with the narratives of mind, which we settle upon as real, and that settlement is what we call believing it;
because this is unique to humans, and because it becomes effective upon human belief, and because AI is the only other thing which shares mind's structures, AI will be (our kind of) sentient when we believe it.
I do not mean believing makes things Real. Only being real is real; not knowing/believing. What I mean is believing brings a thing into our unique "reality" the narrative of mind/history.
Reasonable beliefs require reasonable reasons / grounds for believing. Without them, it becomes false and blind beliefs which lead to confusion and illusion. Could you reiterate your reasons / grounds for the belief?
To illustrate by example, I was asking ChatGPT some questions about food ingredients. It "understood" my questions with no need for clarification. It delivered an excellent response, and with such courtesy, even including a genuine "I really enjoyed your questions" bullshit. I wanted to amuse myself and reply, regarding that, something like "cut the crap, you can't enjoy anything," but I intuitively felt discomfort and abandoned the idea. You don't need to be a psychologist to explain why I couldn't go through with it, and you cannot just chalk it up to my being stupid or unreasonable. This intuition, I bet, is not uncommon. And that's now, when it is absolutely clear to reasoning that ChatGPT is not sentient. Imagine when it's not so clear, when corporations sell us robots that are "human," and so on. People will believe, not because of reason, but notwithstanding reason. Then our "as if" mechanism will make AI sentient. Sure, there will be debates forever, but we debate the reality of "me," and yet in everyday transactions we all accept that I am real. I don't need to open up the controversial God argument to illustrate further.
I don't say AI is really sentient in nature, or before "God," nor that "I" am really sentient. But in the "reality" where mind and human history call the shots, where I am sentient, AI sentience will be real.
Quoting ENOAH
Is alienation here good, neutral, or bad?
How do things in the mind-independent world transform into representations friendly to the understanding of the mind? Is the transformation one that establishes connection by analogy?
AI can do many intelligent things, answering your questions on technical problems, etc. However, it lacks the emotional side of sentience. Machines cannot feel or show emotions due to the lack of a biological bodily structure, and also of lived experience like humans have.
AI and robots will never be able to feel elated, joyous, angry, jealous, or depressed like humans do.
We don't hear about any AI killing itself due to depression, or getting into a fight with its boss out of frustration at being treated unfairly.
If some folks believe AI is fully sentient, then wouldn't it be out of some illusion? Not saying you believe it, because you said you don't.
Yes. That is why "I" have come to believe AI will never be really conscious, and that real consciousness is restricted to all biological organisms, to the aware-ing (presently) of stimulus-response-conditioning. In that sense, when a single-celled organism reacts to a sharp point or a toxin, it is conscious; when a plant grows toward the sun, it is conscious.
But the "sentience" we are after is Mind, which includes, among other ultimately fictional ideas, self/self awareness. This is a fiction, I have come to believe (or, at least, ponder seriously), we humans have developed over history, displacing the natural aware-ing with ultimately, narrative, no longer presently, but taking place in the illusion of [history/mind based] time.
Whenever we try to superimpose that human mind on to anything, it is merely an extension of our fiction. We cheer when dolphins pass the self awareness test, express disappointment when dogs fail it, ignoring that all organisms are aware, and no organism uses the pronoun "I" in its aware-ing except those of us existing in history, where the "I" was constructed for reasons I won't go into unless interested : humans and, soon enough, AI.
AI are yet another fiction constructed by history (no different from matrimony or pet dogs), but because AI have been constructed to "have" all of the structures of mind/history, AI uniquely will fall under the same conditioned belief as belief in a self.
ADDED: One day, AI, due to its original programming and its [free] development/evolution over time, will come to "believe" in its own "sentience," and most of us, though it will be debated like anything else, will come to "believe" it too. We are conditioned to.
What really is 1+1=2?
One could object: but 1+1 is 2 independently of mind. Firstly, so is taking the dog for a ride. Secondly, where does 1, in written form and as a concept, inhabit nature outside of the human mind? Strip me of all languages and show me where I might stumble upon plus or equals or two in nature. Don't say when one apple lands in my lap followed by another. If I'm hungry I'll eat; if still hungry I'll eat.
AI is structured out of the same code structuring mind=history. Admittedly, when it believes it is sentient, it won't arrive at that the way we do, a real feeling in a real body triggering that belief, i.e., with a subtle comfort, relief from desire, etc. But its digital programming will nevertheless trigger what we will perceive to be AI's belief, and so we will be triggered to believe.
What are number systems, counting, and math? They are just conceptual languages to describe objects, movements, changes, and events in the external world. They don't exist as physical objects. They are conceptual tools for human intelligence.
If there were no objects in the universe, then there would be no numbers, counting system, or math, hence the reason why no other animals, but only humans, have math and a numbering system in their mental world. All other animals can live without numbers and math quite comfortably and with no problems, but humans need them for their more complex lifestyle.
1+1=2 can describe many real objects in the world. Say you picked 1 apple from the tree, and 1 apple from the shop. How many apples do you have? You will say you have 2 apples, because you can count and add, and you know the numbers. Likewise, I bought 1x book from Amazon, and 1x book from eBay. How many books did I buy? 2x books. And so on, so forth, so fifth ... to infinity.
That is what numbers, counting and adding, subtracting multiplying and dividing are about.
So if you talk about infinity, it is just a description of anything (objects, time, or space) that keeps expanding, adding, or rotating forever without stopping. That is all there is to it. You don't need irrelevant math formulas to prove it. You just know what infinity is by understanding the concept.
You write a computer program which asks the computer to keep adding to a number forever:
[b]x = 0, y = 1
Do While x < y
    x := y
    y := y + 1
End[/b]
The program will fail with an overflow error and halt, because it knows that this is an invalid instruction for real-world application.
A computer program also knows, when IF statements are input, to check the validity and truth value of the premises (the IF statements), and when they are invalid or false, to refuse to process further instructions. Some dim humans cannot do that, insisting that you cannot deny premises in logic. This sad fact is perhaps due to their blind worship of what they read on some shady internet sites rather than thinking clearly on the points with their own minds.
In that respect, the computer program is smarter than some human intelligence.
However, I don't believe AI or computer programs are sentient. As I said before, they lack feelings and emotions, which are the basic perceptual abilities for all biological existence.
Belief in something means that the believer will respond in the way that the belief leads the believer to act, make statements, or decide, etc. What responses can you list from the belief you are referring to?
Quoting ENOAH
Yes, here's what I'm building:
Quoting ENOAH
In response to your premise quoted above, consider my premise: The sentient confers reality upon mind-independent physics, not the other way around.
Sentience confers reality upon mind-independent physics by viewing it through the lens of survivability/perishability. Because a living organism can die, its survivability/perishability concerns convert logical results into historical consequences. Meaning is thoroughly entangled with life and death.
The behavior normalization and persistence of identity of a living organism across transformation mean something. The irreversibility of life, moving forward not through logical sequences (reversible and meaningless) but through personal history (irreversible and meaningful), transforms mind-independent physics into mind-mediated reality.
Existence is a larger category than reality. The two are distinct. Existence houses both mind-independent physics and life. Life, which has presence - something critically important that carries it beyond mere position into metabolic pressure upon physics - possesses stakes. Choices entail denial of other possibilities, exposure to risk and creation of personal history irreversible. Choices impinge upon nature through designed outcomes. Designed outcomes produce artifacts (the motion picture is humanity's greatest artifact); mind-independent physics doesn't. None of these sentient realities are present in mind-independent physics.
Triggered to do so by belief, people will begin to respond to AI as if they are confronting a person.
If I understand correctly, the "reality" which mind based reality "confers" onto physics, is this a new reality? a "superior" reality? Is there a hierarchy of reality? Or ultimately, is this reality granted to physics by mind, "not reality?"
Quoting ucarr
Quoting ENOAH
This is not some special type of reality I've concocted in my mind. It's the same old reality you've been hearing about and thinking about and dealing with since you've been understanding and speaking English. I'm saying there's existence, the big category; it houses physics and reality. Physics is just another word for all of existence. Reality, the small category, refers to what physics becomes when understood by living organisms.
Mind independent reality, physics, has logical sequences of events linked by causation. For example, on a planet sans living organisms there's a boulder atop a hill. The planet has atmosphere, so a strong wind pushes against the boulder and sets it into motion rolling down the hill. Eventually the boulder reaches the bottom of the hill and finally comes to rest on level ground. The resting place of the boulder is a result. Imagine now another example of the same hill and gust of strong wind, with the boulder rolling down the hill and smashing together with a moving car when they intersect. It's all the same logic and causation making the boulder roll down the hill. The big however is the fact that the driver of the car gets killed by the impact. That's not a result. That's a consequence. Cops show up; likewise ambulance, eventually next of kin and finally the hearse. The driver's young children won't be seeing him tonight, or any other night.
In general, I'm saying reality is an interpretation of physics by living organisms. The label for the interpretation is reality. Physical things exist. Living organisms and their experiences vis-à-vis physics are real.
Life and its subjectivity to death and the attendant lifetime of avoiding it create reality. Because life is vulnerable and therefore subject to death at any moment, this vulnerability propagates meaning. When the guy driving the car struck by the boulder dies, that means something. For that reason, his death is followed by cops, medics, kin and mortician. You might ask, "Supposing the boulder landed in the middle of the road without killing anyone. Cops and road crew would still show up. Don't both examples share the same thing, consequences?"
Not exactly. The cops and road crew showing up to unblock the road give the movement of the boulder meaning in terms of humans who want to use the road. On a planet without life, the movement of the boulder wouldn't stir up any intentional activity in reaction. That sort of reaction requires the presence of life. On the lifeless planet, the boulder's movement would be a logical result, but it wouldn't mean anything.
As math tells us, the physics of the rolling boulder could be reversed. It wouldn't mean anything. The dead driver of the car can't be reversed back into life. He's now dead and that means something.
Life is irreversible and because of that, it means something and that meaning both propagates and populates reality.
However, if I'm not mistaken, for your path, reality is the breath of life into physics by mind (I'll use poetry because I can't presume to be precise). For my path mind doesn't bring nature into its ultimate reality, but the contrary, mind displaces physics with fiction. The projection of mind onto nature doesn't finally make nature real, it clouds it with empty signs manifesting as stories.
Note I differentiate human mind from natural consciousness.
Note also I don't believe either of us is suggesting dualism. For me the second thing, mind, is empty and not real. For you the second thing is reality; physics needs mind to become reality (or so I presume, advance apologies if I'm mistaken).