Will A.I. have the capacity of introspection to "know" the meaning of folklore and stories?
Patrick Henry Winston, MIT professor, posed this question: “What does Genesis know about love if it doesn’t have a hormonal system?” he said. “What does it know about dying if it doesn’t have a body that rots? Can it still be intelligent?”
FYI: Genesis is a storytelling artificial intelligence project at MIT that focuses on A.I.'s ability to analyze stories and to draw "inferences" as to the meaning of the stories it is given. The question is: can Genesis understand the story? Before dismissing that, consider the question posed to Genesis: "What is Macbeth about?" Here's a glimpse of the process Genesis followed to reach an answer:
The A.I. produces “elaboration graphs” on a screen. For the Macbeth question, the program produced about 20 boxes containing information such as “Lady Macbeth is Macbeth’s wife” and “Macbeth murders Duncan.” Below those were lines to other boxes, linking explicit and inferred elements of the story. Genesis arrived at a conclusion: “Pyrrhic victory and revenge.”
None of Genesis's chosen words appeared in the text of the play. The program includes an application called “Introspection,” which tracks the sequence of inferences by which Genesis constructs the meaning of the story.
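To make the description above concrete, here is a toy sketch of what an "elaboration graph" might look like in code: explicit story facts as triples, with inference rules mapping observed relations to higher-level concepts. Every name and rule here is invented for illustration; the real Genesis system is far more sophisticated.

```python
# Hypothetical toy "elaboration graph": explicit facts are nodes,
# and inference rules link them to abstract concepts. All names and
# rules are invented; this is not the actual Genesis implementation.

explicit_facts = [
    ("Lady Macbeth", "is wife of", "Macbeth"),
    ("Macbeth", "murders", "Duncan"),
    ("Macduff", "kills", "Macbeth"),
]

# Crude stand-in for Genesis's inference machinery: map relations
# observed in the text to concepts never stated in the text.
inference_rules = {"murders": "revenge motive", "kills": "revenge"}

def infer_concepts(facts, rules):
    """Collect the abstract concepts triggered by the explicit facts."""
    return {rules[rel] for (_, rel, _) in facts if rel in rules}

print(sorted(infer_concepts(explicit_facts, inference_rules)))
# ['revenge', 'revenge motive']
```

The point the toy makes is the same one the post makes: the output words need not appear anywhere in the input story; they emerge from the chain of inferences.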
Whether or not you ascribe "introspection" to this early A.I. demonstration, do you anticipate that computers will have this ability in the future? If so, is the future bright or dystopian? What are the implications of a computer constructing its own narrative based on its idiosyncratic experience of the human race?
What does he know? Did he die before? "Rotting body"? Death in particular can come quickly. Seems he is lacking introspection with respect to what he is talking about. Just another MIT professor quoted out of context?! We can doubt that: mushy mysticisms about everyday experience substitute modern voodoo for thorough analysis. Being psychic is not a problem limited to A.I., though. The lack of any foundational reality has a long philosophical tradition. Seems to be something for chosen audiences.
I wouldn't call it introspection, as none of it involves anything internal aside from programmed algorithms. That said, computers already do a kind of self-analysis. Nothing emotional or human, but things like disk defragmentation, system health checks, etc.
I'm assuming you mean AI, and beyond that some capacity for emotion? I wouldn't call it dystopian as much as I'd call it annoying. Like how your smartphone asks if you want to delete an app you haven't used in a while. It'd be annoying if it sent a notice akin to "I'm sad, you haven't used me in a while. Should I just delete myself?" Or something. Lol.
What you're talking about is an I, Robot kind of deal, where everything is computerized and connected, including all instruments of force/civil defense.
I suppose it could. If you program it that way.
When the computer chip in your clothes dryer receives data from the humidity sensor and decides when your clothes are dry, do you think it has feelings and opinions about the matter?
And if not, why would a bigger machine, operating on exactly the same fundamental principles, be any different?
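The dryer-chip argument above can be sketched in a few lines: the "decision" is a bare threshold comparison with no inner life attached. The threshold value and sensor readings here are invented for illustration.

```python
# Minimal sketch of the decision a dryer chip makes: compare a sensor
# reading to a threshold. Values are hypothetical, for illustration.

DRYNESS_THRESHOLD = 5.0  # percent humidity; invented value

def clothes_are_dry(humidity_reading: float) -> bool:
    """True once the humidity reading drops below the threshold."""
    return humidity_reading < DRYNESS_THRESHOLD

print(clothes_are_dry(12.3))  # False: keep drying
print(clothes_are_dry(3.1))   # True: end the cycle
```

Whatever one thinks of the scaling argument, this is the kind of computation at issue: the chip "decides" in exactly the sense that this comparison evaluates to True or False.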
Has anyone ever proven one or the other? All this does is rephrase common opinions. From that foundational basis, "arguments" are then made, in a "wise platonic tongue," for why the property can or cannot be present in this or that kind of machinery.
This appeals to the attitude that led Turing to hide which one was the machine in the test.
The second question, related to the first and likely dependent on it, is: Can machines be created to feel? That is, let's suppose the computerized brain is self-aware as a thinking device. Can it also be programmed to "feel" a response to its own existence? Perhaps more significantly for the human race, can it experience "feelings" about the human race? If you dismiss these kinds of questions as too improbable to be taken seriously, you can still work with them as thought experiments about how human brains work. What if A.I. were programmed to self-learn at such a rapid rate that it moved from the goal of merely being to the goal of dominating the intelligence pyramid? [Which is exactly what the human race currently does with the animate and inanimate environment.]
A lot of smart people are freaking out about the potential for A.I. to outthink the human race on multiple dimensions over multiple future decision points. Basically, the angst goes like this: What if A.I. networks become "self-learning"? That is, what if they begin to program themselves based on algorithms we've given them, but over which they take control? Since they can outthink us much faster than we can counter-think them, if they decide to take a path in their self-interest that is inimical to our self-interest, will we be expendable to them, or, if not destroyed, enslaved? They could very well blackmail the human race with threats of restricting the food supply, shutting down the financial system, or poisoning the water supply. Is this only science fiction? People once thought the same of video technology and space travel.
These are not new questions, and any one of us can research the current state of affairs. My questions go to something slightly different: Stories. Humans organize reality by folklore to convey how the world came to be, and how we are best to live in that world. What would be the folklore A.I. would create for itself in telling how it liberated itself from its maker, even as Adam freed himself from God by landing butt first outside the Garden of Eden?
The human experience of consciousness is the product of an instrumental co-evolution with the environment. And the entire process of symbolic interaction in which thought is codified and represented is both social and instrumental in nature. So whether an abstract instantiation of rules can have the same end result as an actual in situ consciousness seems questionable to me.
I'm sure the simulation of consciousness will eventually be perfected. But in what sense will such a construct ever be genuinely self-determining? Could it spontaneously formulate and act upon novel motivations? A human being can construct a simple song after just hearing someone else sing. Could a computer do this without having some kind of musical theory programming?
I don't think any kind of 'brain in a box' simulation will answer these questions. The acid test would be genuine functional human simulacrum with more than just nominal autonomy. And I think such a mechanism is a long way off.
I think I agree with your approach here. I used to think that 'it's all just switches.' In some sense I still think that. What's changed is the stuff I take for granted about human consciousness. Wittgenstein's beetle points out that we really don't know what we are talking about (in some sense) as we confidently invoke our own consciousness. We tend to think that we act appropriately in a social context because we understand. Perhaps it's better to think that 'understanding' is a complement paid to acting appropriately. The speech act of 'you understand' or 'I understand' is caught up in embodied social conventions.
I'd like to hear more about this 'wise platonic tongue.' Do you happen to like or have any thoughts on Derrida, also? I mention this because anti-AI talk seems connected to the assumption that humans have minds with 'direct access' to these critters called meanings. And that assumption has been challenged, I think, with some success.
Syntax is not semantics. Machines can compute syntax (that's what "computing" is) but they don't have semantics, they don't know the meaning of what they're computing. They don't know anything.
Assuming that you are right, what makes us as humans so sure that we do? To me it's not at all about A.I. mysticism. It's instead about demystifying human consciousness. To be sure, we have words like 'know' and we can sort of think of pure redness.
But how could I ever 'know' that I understand the Chinese Room the way its author did, or the way you do? This stuff is inaccessible by definition. In practice we see faces, hear voices, act according to social conventions, including verbal social conventions. (Maybe I should say that we point our eyes at people, etc.)
Perhaps. But have we ever seen a human being with more than just nominal autonomy?
'I think therefore I am.' What is this 'I'? A computer can learn this grammar, just as humans do, by learning from examples. If there is something more than a string of words here...if there is some 'meaning' to which we have direct access...then it seems to be quasi-mystical by definition. Obviously it's part of our everyday speech acts. 'Is he conscious?' is a sound that a human might make in a certain context. Does the human 'mean' something in a way that transcends just using these sounds according to learned conventions? That we take the experience of sense data for granted might just be part of our training. We just treat certain statements as incorrigible. Vaguely we imagine a single soul in the skull, gazing on meanings, driving the body. But perhaps this is just a useful fiction?
[quote=Nietzsche]
Psychological history of the concept subject: The body, the thing, the "whole," which is visualised by the eye, awakens the thought of distinguishing between an action and an agent; the idea that the agent is the cause of the action, after having been repeatedly refined, at length left the "subject" over.
...
"Subject," "object," "attribute"—these distinctions have been made, and are now used like schemes to cover all apparent facts. The false fundamental observation is this, that I believe it is I who does something, who suffers something, who "has" something, who "has" a quality.
[/quote]
FWIW, I don't have a positive doctrine for sale. The situation is complex, and I think some of that complexity is swept under the rug of 'consciousness.'
Have you ever seen a human with only nominal autonomy?
What I'm getting at is that autonomy is a vague notion, an ideal. We have certain ways of treating one another, and we have a lingo of autonomy, responsibility, etc. So in a loose sense we all have 'autonomy' in that we'll be rewarded or punished for this or that. The issue is whether there is really some quasi-mystical entity involved. Another view is that 'autonomy' is a sign we trade back and forth without ever knowing exactly what we mean. Using the word is one more part of our highly complex social conventions.
But we could also network some AI and see what complex conventions they develop. They might develop some word functionally analogous to 'I' or 'autonomy.'
Do birds have autonomy? Do pigs have a soul? How about ants?
'I think therefore I am' can catch on without anyone really understanding exactly what they mean. The phrase's use merely has to fit certain conventions; then we don't lock the speaker up, and might even shake their hand.
So you've said a fair bit about autonomy, but how about that "only nominal" part?
Actually it's John Searle.
[quote=Searle]
I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17).
[/quote]
https://plato.stanford.edu/entries/chinese-room/
'Mental or semantic' contents are problematic. They are more or less 'conceived' as radically private and unverifiable. It's anything but clear what the implementation of the program is supposed to lack. Searle only passes the Turing test because he spits out signs in a certain way. Why is he so sure that he is swimming in something semantic? An appeal to intuition? 'I promise you, I can see redness!'
Well a program could say that too. I think Searle was a bot. (I know I use 'I think' in the usual way. Obviously ordinary language is full of mentalistic talk. The question is whether we might want to question common sense a little bit, as philosophy has been known to do.) (And I'm not saying my criticisms or ideas are mine or new. I just have to work through them myself, and it's nice to do so in dialogue.)
Quoting path
Note that I asked a question, the point of which was to say: hey, maybe we are taking our own autonomy for granted. Maybe we have familiar loose talk about consciousness and free will and autonomy, and we are so worried about A.I. mysticism that we ignore our mysticism about ourselves.
'This AI sure is getting impressive in terms of what it can do, but it's still just stupid computation.'
But this also means that just-stupid-computation is getting more human-like. In short, we still start from some belief in a soul, even if we are secular and think this soul is born and dies. If a computer can learn to say that it has a soul (consciousness) and not be telling the truth, then maybe we've done the same thing. Or at least we're being lazy about what we're taking for granted.
Well, yeah, but humans are agents; the Chinese Room, not so much. To me that sounds very important, not mystically, but practically. If I were to ask my s.o. to pick up chips and dip while at the store, my s.o. would be capable of not just giving me the right English word phrases in response, but also coming home with chips and dip as a response. It's as if my s.o. knows what it means to pick up chips and dip. How is a nominal-only program going to bring home chips and dip, regardless of how well it does passing Turing Tests?
I tend to agree, people take our autonomy for granted. But I think part of what we take for granted when we think of computer programs having thoughts is the simple fact that we're agents. (Yes, and we have hormones and brains and stuff... but the agent part in itself seems very important to me).
I don't think this is focused on the real issue. If AI has a body, then it could learn to react to 'get some chips' by getting some chips. People are already voice-commanding their tech.
To me the real issue is somehow figuring out what 'consciousness' is supposed to be beyond passing the Turing test. Let's imagine an android detective who can outperform its human counterparts. Or an AI therapist who outperforms human therapists. If we gave them humanoid and convincing skins, people would probably project 'autonomy' and 'consciousness' on them. Laws might get passed to protect them. They might get the right to vote. At that point our common-sense intuitions embodied in everyday language will presumably have changed.
Our language and the situation will change together, influencing one another (not truly differentiated in the first place.)
Indeed, it's almost a religious idea. What does it mean to be an agent? It's important to me also, to all of us. The idea that we as humans are radically different from nature in some sense is something like 'the' religious idea that persists even in otherwise secular culture. So we treat pigs the way we do. (I'm not an activist on such matters, but perhaps you see my point.)
Quoting path
As far as I know, only humans ask such questions.
Another way of framing it, is to ask if computer systems are beings. I claim not, but then this is where the problem lies. To me it seems self-evident they're not, but apparently others say differently.
But if computers were beings, then would they have rights? How would a computer feel about itself?
You're trivializing this though. First, the symbols "chips and dip" have to actually be related to what the symbols "chips and dip" mean in order to say that they are understood. And what do those symbols refer to? Not the symbols themselves, but actual chips and dip. So somehow you need to get the symbols to relate to actual chips and dip. That's what I think we're talking about here:
Quoting path
i.e., it's what is needed to be more than just "nominal", or syntactic.
So, yes, this isn't impossible for an AI, if only you gave it a body. But that's trivializing it as well. The AI needs more than "just" a body; it needs to relate to the world in a particular way. To actually manage to pick up chips and dip, you need it to be able to plan and attain goals (after all, that's what picking up chips and dip is... a goal; and to attain that goal you have to plan... "there are chips and dip on this aisle, so I need to wander over there and look"). Then you need the actual looking; need to be able to grab them, and so on and so on. This entire thing being triggered from a request to pick up chips and dip is a demonstration of the ability to relate the symbols to something meant by them.
Quoting path
Something like I just described above, at a minimum. BTW, I think there's a lot more to add to make a conscious entity; but I don't see how a CR without a "body" (agency) can actually know what chips and dip is.
Whether Genesis can "know" depends on how you define that word. A popular, albeit incomplete, meaning of knowing is justified true belief (JTB). Against the backdrop of the JTB definition of knowledge, humans don't fare better than a program like Genesis: justification is nothing but computation and Genesis seems to be doing exactly that; as concerns truth, a human is in the dark to the same extent as Genesis is; regarding belief, human belief is simply the storage of a set of propositions in memory, something Genesis is surely capable of.
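The JTB reading above can be caricatured in code: "belief" as stored propositions, "justification" as the recorded chain of premises behind each conclusion, with truth deliberately left outside the program (as, the comment argues, it is for us). A hedged sketch; every proposition here is invented.

```python
# Caricature of the JTB framing: beliefs as stored propositions,
# justifications as the premises recorded for each conclusion.
# "Truth" is intentionally not modeled inside the program.

beliefs = set()
justifications = {}

def infer(conclusion, premises):
    """Record a conclusion together with the premises that produced it."""
    beliefs.add(conclusion)
    justifications[conclusion] = list(premises)

infer("Macbeth's victory is Pyrrhic",
      ["Macbeth gains the crown", "Macbeth loses everything else"])

print("Macbeth's victory is Pyrrhic" in beliefs)  # True
print(justifications["Macbeth's victory is Pyrrhic"])
```

Whether this storage-plus-derivation really counts as "belief" and "justification" is, of course, exactly what the thread is disputing.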
That's one of the assumptions that I am questioning. The mentalistic language is familiar to us. We imagine that understanding is something that happens in a mind, and colloquially it is of course. Yet we vaguely imagine that this mind is radically private (maybe my green is your red and the reverse.) Roughly speaking we all pass one another's Turing tests by acting correctly, making the right noises.
How do you know that my posts aren't generated by AI? Do you assume that there is just one of you in there in your skull? Why can't two agents share a body? Because we weren't brought up that way. One doesn't have two souls. We are trained into English like animals.
Can't, though.
I agree that the task is complex. But note that you are pasting on lots of mentalistic talk. If the android picks up the chips as requested, we'd say that it related to the symbols correctly. Think of how humans learn language. We never peer into someone's pure mindspace and check that their red is our red. All we do is agree that fire engines are red. Our actions are synchronized. You can think of our noises and marks as pieces in a larger context of social conventions. Talk of 'I' and 'meaning' is part of that. I'm not saying that meaning-talk is wrong or false. I'm saying that it often functions as a pseudo-explanation. It adds nothing to the fact of synchronized behavior.
Ah, but if a computer did ask such a question, I suspect that somehow it wouldn't count. I could easily write a program to do so.
Here's one in Python:
print("What does it mean to be an agent?")
Quoting Wayfarer
It was self-evident to many that the world was flat, that some were born to be slaves, etc. To me strong philosophy is what shakes the self-evident and opens up the world some.
I realize that what I'm suggesting is counter-intuitive. It's not about puffing up A.I. and saying that A.I. might have 'consciousness.' Instead it's about emphasizing that we human beings don't have a clear grasp on what we mean by 'consciousness.' Connected to what I'm suggesting is the understanding of meaning as a social phenomenon. To frame it imperfectly in an aphorism: the so-called inside is outside.
I'm curious if you think ants are beings. How about viruses? How about a person in a coma? Where do you draw the line, and why?
Just curious. What kinds of [human] thoughts are irreducible to computation (logical processes that computers can handle)?
All very good and difficult questions. I rather like the Buddhist saying, 'sentient beings'. It covers a pretty wide territory, but I don't *think* trees are in it. Viruses I don't think are sentient beings, I think I understand them in terms of rogue byproducts of DNA. In evolutionary history, the cell evolved by absorbing or merging with organisms very like viruses over the course of aeons. Viruses are like an aspect of that part of evolution.
Some key characteristics of humans are rationality and self-awareness. Aristotle said man was 'a rational animal' which I still think is quite sound. But the point is, in my understanding 'reason' or rationality likewise covers a very broad sweep. On a very simplistic level, the ability to see that 'this equals that' requires rational judgement i.e. to understand that two apparently different things are actually the same in some respects. I think the faculty of reason is an awe-inspiring thing and a differentiator for h. sapiens.
I suppose some line can be drawn in terms of 'being that knows that it is', in other words, can truthfully say 'I am'. (This is why Isaac Asimov's title 'I, Robot' was so clever.) Whatever it is that says 'I am' is never, I think, known to science, because it never appears as an object. Rather it provides the basis of the awareness of objects (and everything else). Schrodinger wrote about this in his later philosophical works.
Quoting TheMadFool
Computers are calculators, albeit immensely powerful calculators. They can deal with anything that can be quantified. But being has a qualitative element, a felt dimension, which is intrinsic to it, which can't be objectified, as it can't be defined.
This essay says it much better than I ever could https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
What assumption? And who is making it?
Quoting path
Okay, sure, let's think about that. We both call fire engines red, even though we have no idea if "my red" is "your red". So if we can both agree that the fire engine is "red", it follows that red is the color of the fire engine and not the "color" of "my red" or the "color" of "your red", because we cannot agree on the latter. Note that my perspective on meaning allows p-zombies and robots to mean things; just not Chinese Rooms (at least when it comes to chips and dip).
Acting is not part of the Turing Test, since that involves communicating over terminals. In this medium we communicate using only symbols, but I imagine you're not AI based solely on balance of probability.
Well, there's an apparent singularity of purpose; this body doesn't seem to arm wrestle itself or reach in between the two options. And there's a continuity of perspective; when this mind recalls a past event it is from a recalled first person perspective. So there's at least a meaningful way to assign singularity to the person in this body.
Quoting path
Not... really. You're projecting mentalism onto it.
Quoting path
I would, for those symbols, if "correctly" means semantically.
Quoting path
Well, yeah. Language is a social construct, and meaning is a "synchronization". But we social constructs use language to mean the things we use language to mean. And a CR isn't going to use chips and dip to mean what we social constructs use chips and dip to mean without being able to relate the symbols "chips and dip" to chips and dip.
Indeed, the rational animal...which is to say in some sense the spiritual animal. Our distinction of ourselves from the rest of nature is dear to us. I think we tend to interpret a greater complexity in our conventional sounds and noises as a genuine qualitative leap. Of course that can't differentiate us from A.I., or probably not in the long run. We may convert an entire planet into a computer in 4057 and feed it all of recorded human conversation. It (this planet) may establish itself as 'our' best philosopher yet. It might be worshiped as a God. It could be that charming, that insightful.
Quoting Wayfarer
Indeed, and we get to the center of the issue perhaps. What we seem to have is a postulated entity that is by definition inaccessible. It never appears as an object. I do think this is a difficult and complex issue. But I also think that it assumes the subject/object distinction as fundamental. At the same time, one can make a case that subject/object talk is only possible against a background of social conventions. In other words the 'subject' must be plural in some sense. Or we might say that the subject and its object is a ripple in the noises and marks we make.
Can a dolphin say something like 'I am'? I don't know. Must the subject be linguistic? The subject seems to play the role of a spirit here. The old question is how 86 billion neurons end up knowing that they are a single subject, assuming that we have any kind of clear idea of what such knowing is. If we merely rely on the blind skill of our linguistic training (common sense), then we may just be playing along in taken-for-granted conventions. By the way, I know that I am partaking in such mentalistic language. It's hard to avoid, given that I was trained like the rest of us and therefore am 'intelligible' even if the species decides later that it was all confusion.
Quoting InPitzotl
Look here in this new answer:
Quoting InPitzotl
That's pretty mentalistic, and you say 'apparent' about something that is basically like 'your red.' And then that's a 'meaningful' way to assign singularity. Don't get me wrong. I also have the intuition that I am a single consciousness. But I'm suggesting that this is trained into us. We just learn to talk this way.
Quoting InPitzotl
And you are reducing 'correctly' to 'semantically.' You are saying (I think) that our android did it right if it understands. I am saying that it 'understands' if it did it right. What does meaning add to reacting to 'get the chips' by getting the chips?
Quoting InPitzotl
I'm suggesting that getting the chips having been told to is relation enough. I suggest that humans demonstrate 'understanding' in the same way. I can't look into your mind space and compare your idea (whatever those are) to mine. All we can do is synchronize behavior, including the speech act of saying 'he understands.'
We don't need to know what 'know' means, or rather the sound 'know' is 'understood' if we use it according to certain conventions. (We don't need to 'know' what 'mean' 'means' either. And yet we are sure that AI can't 'know.') (How do we know that we're not bots too?)
Also, yeah, I was using the Turing test metaphorically, extending its meaning.
Anyway, I'm enjoying our conversation. This is great stuff.
Indeed it is, but that's a different question. You're asking a few of them!
Quoting path
"Trained into us" is making an assumption; as is "learn to talk this way". There is a social practice of naming people and treating them as distinct individuals for sure, but there are these features as well that I just described... to simply presume this comes out of a social construct requires an argument. We also know singularity of identity breaks down in certain cases, such as patients who underwent corpus callosotomy, and such individuals have distinct manifestations from the normative cases. It's interesting to me that a person whom we may have named "Charlie" may develop a case of Alien Hand Syndrome.
Quoting path
The ability to plan behaviors directed towards and manage to successfully attain a goal of getting chips and dip.
Quoting path
I know, and such is apparently the trend here, but I feel like too often discussions about AI become hand wavy.
Fair enough, but what if AI acts at a human level? It may never happen, but let's imagine a Blade Runner scenario. At what point do we finally wonder how strong the difference is? We are whittled down to an unspecifiable something that distinguishes us.
Quoting InPitzotl
True, and I think there are biological constraints on what culture can manage.
Quoting InPitzotl
I could have been more careful. I've been talking about AI in other threads and took too much for granted.
Quoting InPitzotl
Yes... I do have more questions than answers. That's for sure.
Then I would say it probably understands things, but not necessarily that it's conscious. I don't have a great model for what it takes for something to be conscious yet, so wouldn't know when to apply what metric for that.
[quote=link]
However, there's more to a pain than our knowledge of it. It has both an ontological and an experiential status. We can also accept the fact that any ontological and experiential status the pain does have will itself be coloured by public language. (For one, those parts of public language which have given us the tools and concepts to think about a pain’s ontological and experiential status!) Though, yet again, there's still something about pain that's above and beyond its epistemic position and its ontological and experiential status. There's a state - a pain - that's the subject of all these public expressions. These public expressions are about something other than themselves. They're about pain.
[/quote]
http://paulaustinmurphypam.blogspot.com/2015/10/comments-on-wittgensteins-beetle-in-box.html
Note that he talks of 'we.' He just knows that we all know pain. [The primary 'subject' is plural, is we?] And I won't pretend that I don't know what he's talking about. And yet he's talking about what slips through language entirely. He could use the notion of pure redness or the feeling of hot water in the bathtub. It's whatever we can't squeeze into a public language. He just knows we all have it. Why? What if some human did not have it but participated in the convention anyway? It seems that it's just part of the vague meaning of 'being human.' There's a sort of animal faith that others that look like me and act a certain way must possess access to something radically private. I am supposed to have direct access, ineffable access to my own 'mind' and 'pain.' (I am tempted to agree, but the issue is complicated. 'I' had to learn to use the word 'I' according to certain conventions.)
The biggest issue is perhaps the idea of some 'experience of meaning' behind or accompanying saying the right things, making speech acts in accordance with conventions. If the robot says 'I am conscious,' we don't believe 'it.' We have our reasons for not believing. But why do we believe that our fellow meat-puppets are conscious? It's a tangent, but I think that I am 'we' before 'me,' that the individual is in some sense not the bottom layer. We are socialized before we can develop a specialized surface one might say.
Right. And I don't have a great model either. I guess my big point is that humans use the word 'consciousness' in a hazy way that AI encourages us to question and specify.
Empathy has a lot to do with it. Other beings are more than just like us - each of them is 'I', from inside their perspective. And solipsism is really a bizarre notion to seriously entertain, isn't it?
After all, humans are called 'beings'. I think this is taken for granted at our peril. There's a deep reason for it.
Quoting path
Any talk of anything is only possible against the background of a being capable of speaking (pace Descartes, although Augustine anticipated the point). So the fact that the subject is not something objectively discernible doesn't mean that it can simply be disregarded or glossed over, although that is pretty much what eliminative materialism, positivism and behaviourism wish to do. If it can't be fitted into the Procrustean bed of naturalism, well, then, it can be disregarded. But the point about the 'transcendence of the subject' is actually another facet of the hard problem of consciousness. And it's even recognised by scientists.
The first question is whether quality and quantity are actually that different or not. Take a favorite example of quality, color: color, as a property, is a function of wavelength, a quantity. In other words, what we think of as quality may be just variations of quantity.
The second question is, if there is such a thing as quality, whether the computer is totally helpless in this regard? Logic is not quantitative per se and yet a computer is quite at home immersed in it.
So it seems, and I feel quite connected to animals. What I'm questioning is the vague use of this 'I.' Splitting 'what is' into subject and object looks linguistic and cultural. We can dissolve 'I' into an ocean of speech act conventions.
Quoting Wayfarer
Indeed, and I've argued against it recently. In some sense I'm arguing against it now. Mentalistic talk presupposes a mind that only interacts with other minds indirectly. Playfully speaking, I'm not doubting whether others are real...I am doubting if 'I' am real. 'I' mean that our use of the word perhaps misleads us to posit some entity, composed of some ineffable substance.
Quoting Wayfarer
That is not at all an argument. If you know some deep reason, please share.
Quoting Wayfarer
I can't speak for other (amateur) philosophers, but I'm interested in specifying the 'concept' of the 'subject' -- which clearly exists in some vague sense. In case it helps, I'm not interested in reducing mind to matter or matter to mind. That whole approach seems flawed to me. The world or reality is not 'really' or 'fundamentally' anything. Or that's not my project. I do think that mentalistic talk often obscures the exteriority of human cognition --that we are more outside than inside in a certain sense, that we are intelligible to ourselves even in terms of public conventions. So embodiment and sociality are themes, but none of this is reduced to 'matter.' Pure 'non-mind stuff' is just as problematic as 'pure mind stuff.' In both cases proponents find themselves gesturing helplessly toward the ineffable.
Quoting Wayfarer
To me this is a philosophical issue, and we might talk of two opposite metaphysical paradigms that from my perspective both make the same constructivist mistake. I am interested in the transcendence of the subject, and the ideas I've been discussing here were influenced by philosophers, many of whom scientistic types tend to despise. I am very much interested in and even arguing for...a different kind of transcendence of the subject.
There’s facts, and there’s interpretation.
Quoting path
I can think of better things to be dissolved into.
As for why humans are called ‘beings’: the point I’m making is simply that humans are designated ‘beings’ for a reason, and part of the implication of that is to distinguish beings from things, objects, or devices. After all, contemplation of the meaning of ‘being’ is nine tenths of philosophy (in the same sense that possession is nine tenths of the law.) And it’s also the basic subject of ontology.
As regards the concept of the subject, I respectfully submit that subjectivity, or better, subject-hood, is not a concept per se but a fundamental existential reality which is logically prior to conceptual thought. To say that is not to malign conceptualisation in the least, but to draw attention to logical priorities. A major point about scientific method is that it starts by ‘bracketing out the subject’. But forgetting that it has done this is the beginning of scientism. It’s the fact that ‘the subject’ can’t be made an object of knowledge that is significant about it. And if you think that’s a Zen koan, then you’re right.
Hey do you know Michel Bitbol’s work?
Do you mean the quality is a fact and to think quality is just a mask that conceals the underlying quantity is interpretation? Can you expand on this a bit more?
It seems to me apparent that existence of self, in the sense of both the tangible, as it applies in any case, and the abstract, that being as a property of the imagined generally, manifests once a certain complexity of awareness, and recognition, has emerged. This is clearest in instances wherein another is asked to determine their reflection, or is otherwise placed in circumstances that by purpose, allow such things to occur, and in consequence, responds with shock at the following sight; the condition of which holds true particularly in the case of infants, during their most critical stage of development, but can be extended to those species which possess an order of thought having semblance to ours in some aspect. What one speaks of as 'self', then, is integral to every judgment, and way of viewing the world; that is to say, it serves as the foundation atop which all parts of the subjective are built. To deny its fundamentality, that it is indeed requisite for an understanding of any form, is to commit oneself to an error of the most egregious kind.
The distinction which can be found, between the ideas of subject and object, and to which many attest, is I believe, and such as you assert, also, a product of convention, yet nonetheless essential for structuring of the ability to know, to conceive; a heuristic of sorts, whose significance can scarcely be overstated, that enables the mind to recognize itself as agent, as capable of guiding the whole of its own actions, absent any extraneous influence, and thereby attaining freedom of choice, and thought. Regardless of what term is employed as a means to describe, with respect to either of these notions that I had provided reference for, previously, an almost instinctive reaction is present, and shown, as if intended to illustrate the root of that of which we are aware when in a state of blankness, and inunderstanding; the drawing of a difference between them, of subject and object, and what one first knows upon birth, subsequent to the formation of person-hood, of self, albeit incomplete, are thus facilitated as the logical contingents of experience.
I wish I could locate the youtube footage of Searle's wry account of early replies to his vivid demonstration (the chinese room) that so-called "cognitive scripts" mistook syntax for semantics. Something like, "so they said, ok we'll program the semantics into it too, but of course what they came back with was just more syntax".
Quoting path
(And still is, presumably.) A machine with a sense of / illusion of consciousness? Agreed. He himself would of course reject "illusion of", and even "sense of" except in the narrower sense of "accurate sense of". Not "machine": he embraces that.
Quoting path
Yes, he might be wrong trusting that kind of intuition... but... be right about swimming in something semantic: namely, the social game of pointing symbols at things. I think he would be right that Genesis and the chinese room fail at that.
Quoting InPitzotl
Yes, and that (the getting the symbols to relate) will be an elaborate social game of agreed pretence, as there will be no matter of fact about the relation. As you say, it will require vast experience of interaction with symbols and the things we learn to agree (to pretend) they are pointed at. Never heard this called "agency", but I get it. Searle calls it "intentionality" and thereby embraces unnecessary mentalisms. But he definitely exposed the problem for any AI that fakes a proper semantics.
Quoting Wayfarer
Great essay against the old, pre-connectionist, symbolic computer model of brain function, which I shall cite next time (and it won't be long) that I want to scorn the ancient myth of pictures in the head. Not an essay espousing the existence of ghosts (in machines), though.
BTW,
They should have looked here.
I can agree with you on this. We are just unlikely to ever put subject-talk aside. It's too basic for our form of life. So abolishing the subject is not a live option. I agree. On the other hand, we can as philosophers do as you just did, and think of the 'I' or 'consciousness' as caught up in especially basic or foundational conventions.
Note the connection to 'freedom of choice' and implicitly to responsibility. A body is trained to take responsibility for its self. This is tied up with reward and punishment. Children aren't held to the same level of responsibility for their actions. Alcohol complicates consent to sex, etc. So in practice we have a continuum of consciousness, agency, responsibility. No doubt.
The issue is whether we want to reify these important conventions into some quasi-mystical substance and get trapped in the old metaphysical maze.
That would be great. I think we see less ambitious versions of that kind of narrative here and there. For instance, Rorty traces the use of the subject in philosophy in PMN.
Indeed! Dissolving the subject into distributed social conventions offends us. The subject plays a huge role in moral/political discourse. We can think of the evolution of the notion of the soul, where 'consciousness' is a last secular holdout in some sense. In any case, we don't have to want to be dissolved this way to follow certain arguments in that direction. Now lots of thinkers have made arguments against traditional notions of consciousness, but this one is particularly concentrated:
[quote=Wittgenstein]
If I say of myself that it is only from my own case that I know what the word "pain" means - must I not say the same of other people too? And how can I generalize the one case so irresponsibly?
Now someone tells me that he knows what pain is only from his own case! --Suppose everyone had a box with something in it: we call it a "beetle". No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle. --Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. --But suppose the word "beetle" had a use in these people's language? --If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. --No, one can 'divide through' by the thing in the box; it cancels out, whatever it is.
That is to say: if we construe the grammar of the expression of sensation on the model of 'object and designation' the object drops out of consideration as irrelevant.
[/quote]
I also addressed a fascinating response to this passage above.
Quoting Wayfarer
Indeed, there are certainly historical reasons, presumably political and moral. But there were reasons for slavery, infanticide, etc. Such reasons aren't necessarily reasonable by our standards here and now.
Quoting Wayfarer
On the first point, perhaps. I posted on this above referring to a related point. I'm not so sure about the second point. I'll just say that my philosophical influences and approach are anti-scientism, where scientism is understood as bad philosophy pasted on to mere prediction and control.
Cool. Well I'd like to hear more about that. I mostly know Searle through his rhetorical war with Derrida. IMV, Derrida was making the kind of point that I'm trying to make, dissolving some pure subject or consciousness into social linguistic conventions. Searle came off (to me, in that context) as leaning on the prejudices of common sense, etc. [And for those who hate Derrida, in Limited Inc he writes more like an analytic philosopher than a continental, IMO. (So it's a good entry point for skeptics.)]
Quoting bongo fury
OK. I guess my point is that if we ultimately reduce 'semantic' to pointing symbols...that at some point AI may satisfy our intuition. Consider the movie Her. And consider that we never prove that others have some secret interior in which they gaze on meanings. We just 'know' it, which is to say that we love them, treat them a certain way. On the other hand we show no mercy to roaches, and very little to pigs, even if they are smarter than animals that we do treat kindly...probably because we respond to human-like faces, which fire up a kind of nurturing or one-of-us instinct or feeling.
[quote=link]
But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand.
[/quote]
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
I like this 'just another metaphor' criticism, but it also applies, I think, to 'consciousness.'
I'm not sure that we are ever done understanding anything. So for me it's a search for further clarity and the revelation of the apparently necessary as the contingently familiar and automatic. This second task is making darkness visible, dragging our ignorance into the light.
[Bitbol seems interesting, but I haven't studied his work.]
But, are there any standards? Or are the standards now 'what I deem acceptable'? That argument is a kind of sleight-of-hand, which can be used to rationalise, or rather relativise, any ethical claim whatever.
I've often been presented with the beetle-in-the-box argument, but I've never seen the point of it. I think the reality of empathy is such that we naturally see ourselves in others, and others in ourselves, unless there's something that interferes with that, like sociopathology (which it often does.)
Quoting path
It's actually a very deep issue, which is taking the methodological postulate of naturalism that posits a strict separation of subject and object, or the 'bracketing out' of the subject, but then interpreting that as a metaphysical axiom, i.e. 'reality really is this way'. I mean, naturalism is splendid within its domain of applicability, which is bounded by human sensory abilities and mathematical abstractions derived from them. But it starts out by omitting the subject, who is the instigator of all of these activities, and then challenging us to 'prove' that there is such a being! (I think this is what is referred to as 'the forgetfulness of being'.)
That's how you get to a point where you can 'dismiss the subject' - which we do, nowadays. We regard respect for the subjective as being a kind of anthropomorphic sentimentality, but that really comes out of the tendency of treating human beings as objects, which are the accidental byproduct of an essentially fortuitous process - just the kinds of processes that science now assumes. But our judgement regarding this process is itself a product of the modern scientific outlook, so spot the circularity.
There's a fairly recent essay on exactly this at Aeon, The Blind Spot, which I happen to think is a tremendously important essay. (I got lot of flak on this forum for posting a discussion of this essay a year ago when it came out.)
Something I've wondered, could our most advanced neural net be performed on our oldest computer, albeit at an extremely slow pace?
I will try to make a few points...
>> We never encounter physical reality outside of our observations of it.
This sounds so obviously true that it simply has to be the problem. I perceive myself as a living being with a material form. The abstraction of the epistemological subject is already ideal. So is the concept of "observation". If you derive the "blank mind floating over the world in sovereign supremacy", you are already far away from what defines your being in the first place. You will never be able to "synthesize" yourself the way you are if you persist in analytic conclusions. This of course means that matter and consciousness can never be interlinked. You started as a human being and took the route to the overmind. Fair enough. But why should the break of the initial synthesis be a problem in general? After all, it is salt that tastes salty. The "object" Wittgenstein takes out of the equation is exactly the taste(ing) - not the salt.
The word "encounter" already flies high above the world. It implies that for sure there is this ethereal entity that only occasionally gets in contact with reality.
I think that is covered quite well in the essay.
I don't think so. The article is fighting its straw man.
I am sorry.
Yes, those are valid concerns. Today's standards may look crude and stupid tomorrow. I also agree about rationalization. That's always a threat or a risk. We could always be lying to ourselves. That cuts both ways. I could be lying to myself that you are lying to yourself and so on.
Quoting Wayfarer
It's not really about empathy. It's about meaning, which cannot be grounded in a subject but is rather distributed via enactment within a community.
I agree that we just 'naturally' see ourselves in others. Which is to say that it's there without us understanding it. It's automatic. I addressed this earlier in the thread, and I think it's a fascinating issue.
Quoting Wayfarer
Where I will agree with you is that my philosophical questioning of the subject or consciousness is up against an anthropocentric sentimentality, among other things. IMV, no one can genuinely doubt the 'effect of the subject' or the loose routine intelligibility of 'I'-talk or 'consciousness'-talk. We couldn't forget this training if we wanted to, and we can only criticize the limits of this training from within this training. FWIW, some of my influences actually found religious significance in abolishing the subject this way. In their view it was egoistic sentimentality that clung to the private subject. That doesn't have to play a role here. But anti-egoistic spiritual talk could even embrace the kind of ideas I'm exploring.
Quoting Wayfarer
Indeed, and if our brain has evolved for survival rather than truth, then maybe the theory of evolution is a useful tool and not a truth, etc. This could be put with more subtlety, but it's an issue. But I don't think we are saved with dogmatism or just asserting some pure source of knowledge. Instead some people just ignore problems like this because no one pays them to address them. We walk in darkness. Yeah we get along practically, but we leave all kinds of contradictions or ambiguities unaddressed. Is it relativism to stress our ignorance? To point out how foggy our foundations are? We start within some hazy routine intelligibility, immersed in making a living, etc. We don't know that we don't know, because we know what everyone knows. I see philosophy as (among other things) a knowledge of ignorance.
Speaking of circles: 'An entity for which, as being-in-the-world, its being is itself an issue, has, ontologically, a circular structure.' (Heidegger, of course.)
[quote= Blind Spot link]
This framework faces two intractable problems. The first concerns scientific objectivism. We never encounter physical reality outside of our observations of it.
[/quote]
FWIW, I'm questioning that whole paradigm. 'Physical reality' is just the shadow cast by some mysterious mental substance. It's two sides of the same coin. IMV the great 20th-century philosophy was an attempt to break free of, or at least get some distance from, this way of framing the situation.
Yes indeed. That 'white mythology' is taken for granted. What goes along with this is the assumption of some kind of pure meaning that isn't dependent upon social conventions. There is some ideal subject in touch with ideal meaning, and then one can try to construct the world from this, awkwardly.
Like how does the word 'bird' attach to the same meaning in my headspace and his headspace? The assumption that there is some identical meaning is taken for granted. That we even know what we are talking about beyond trading speech acts appropriately is taken for granted. We don't even know that we don't even know what we mean...
The problem would be insufficient memory as I understand it. If my network has 10 billion parameters, then I have to store them somewhere. During training I have to be able to update them as the data come in...
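To make the memory point concrete, here is a hedged back-of-envelope sketch. The function name and the 3x multiplier for training overhead (weights plus gradients plus optimizer state) are my own illustrative assumptions, not figures from any particular system: 10 billion parameters stored as 32-bit floats already occupy about 40 GB before training even begins.

```python
# Back-of-envelope memory estimate for storing network parameters.
# Illustrative figures only; not tied to any particular model or framework.

def parameter_memory_gb(n_params: int, bytes_per_param: int = 4) -> float:
    """Gigabytes needed to hold n_params values (default: 32-bit floats)."""
    return n_params * bytes_per_param / 1e9

weights_only = parameter_memory_gb(10_000_000_000)  # 40.0 GB for weights alone
# Rough rule of thumb (an assumption here): training also keeps gradients
# and optimizer state, multiplying the footprint severalfold.
with_training = weights_only * 3

print(f"weights: {weights_only:.0f} GB, training (rough): {with_training:.0f} GB")
```

On this sketch, an old machine fails not on principle but on capacity: the update step needs all those parameters addressable at once.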
‘Physical reality’ is ‘what is described by physics’. ‘Physicalism’ is ‘the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical‘ (SEP). The essay I referred to is questioning physicalism, it’s not trying to defend it.
Is there anything meaningful apart from social convention? Isn't this simply relativism - 'the doctrine that knowledge, truth, and morality exist in relation to culture, society, or historical context, and have no inherent reality?'
As I said, we walk in darkness. So I will never be done answering this kind of question. Disclaimer aside, all the familiar 'meaning effects' are still here as before. The world in its richness remains. We just take certain interpretations less for granted.
Also 'social conventions' is a dry way to put what it 'means' for us to be in this world together. The idea stresses how radically social it is for us as humans. There is no self apart from others, or the individual self in his or her uniqueness is only intelligible in a social context.
I'll grant that the notion of inherent reality starts to look pretty foggy upon close examination. I don't think anyone can specify what they 'mean' by it. It's just a beetle in the box. If we are radically private subjects gazing at meanings and have to cross some gulf of 'physical' stuff to communicate, then we're never able to check. Why isn't that relativistic? Or solipsistic? Such a position seems to imply that the world is my dream. So the shared world has to be built up as overlapping dreams. Does that help us? Protect us from wandering in the darkness?
Saying exactly what the 'physical' is supposed to be is the same problem IMV as saying exactly what 'consciousness' is supposed to be. In both cases we have a practical know-how with the words. But it's all pretty foggy...and the purer one wants these words to be the more foggy. In both cases one sees futile gestures toward the ineffable. So the positions criticize each other well but miss their own 'emptiness.'
Quoting path
I fail to see whether there lies a need to deprive ourselves of a discussion of the subjective, to further expound over our source of understanding, and the faculties through which the whole of the world, in representation, is mediated. For reasons just stated, I am of the belief that so far as the effects of prejudice, of that blindness with which so many are fraught, and that derives its power, its ability to compel, through a force of conviction, and arrogance, as to the 'truth' bore by one's judgment, are minimized, if not suppressed by way of striving toward what is contrarily based, and by which I mean the objective, queries of the sort that disconsider the subject, in full, neglect a tenet that remains fundamental to all forms of human experience, and has never ceased to be, without in turn offering any benefit that an alternative course, through which this element is retained in consideration, couldn't provide in its own right. Our position must instead be predicated by an embracement of those limitations in thought, that bind us, and which at times, darken, rather than illuminate, the path down which we walk. Never should we seek to content ourselves, either, with the idea that what conditions are antecedent to the experiential, in any case, can ever be escaped; it is necessary to realize then, that the most sensible action, in the face of our descent toward insensibility, being made possible, is to ensure integration of these disparate forces, and forge a cohesive unity of both parts, that despite at first glance, seeming to conflict, can be altered so as to fall within the confines of a complementary type; a relation that is wholly inclusive, and from which to create frameworks of greater broadness.
Language serves to denote, and the objects upon which this activity is impressed, despite their symbolic-forms being entrenched in the abstract, apply concretely. The difficulty that we confront is one of misapplication, whereby the objects toward which an argument points, are taken as absolute, and beyond change; this, when in fact the propriety of language's usage is more often than not, dictated by convention, and naturally assumes differing meanings, that vary in their effect, and appearance; modifying lines of phrase which formerly differed, to better align with the norms of the present, or otherwise describing something that is found within a select context, with such meanings as those of past remaining intact, but added to. In any event, none can dispute that these very objects, the terms of which our language consists, evince a quality of concreteness. That the map stands as a product of our own devising, and is by all accounts, contrived to some degree, doesn't give cause for doubt, as to the existence of that of which it is designed to reflect; the territory.
Note: I encountered a slight error; it has since been amended.
Additional Note: I apologize for the excess of length; it was not feasible to condense these reflections of mine, further.
OK, but my employer doesn't see any need for me to philosophize at all. In worldly terms, I should be attending to something else right now. Why am I so addicted to philosophy?
Quoting Vessuvius
To reiterate, we couldn't get rid of the 'subject effect' if we wanted to. We can't disconsider it. Not us anyway. In 1000 years humans may manage it, but they might be neo-humans with green skin who live on sunlight, water, and minerals. What we can do is intervene in today's routine hazy intelligibility and use it against itself to reveal our being entrapped in it as false necessity. We can see that we were dominated by metaphors without realizing it. We can see that we had strangely been satisfied with mud and fog (what everybody knows), because it was familiar mud and fog.
This can also hurt, so I don't know if it's a good idea for others. I can't help myself it seems.
Quoting path
I would be wrong to argue that the nature of your motivations can be attested to, either on my part, or that of any other besides yourself. The worlds onto which our lives are so often projected, are self-contained, and hence inaccessible to most if their workings are not rendered explicit. I am however curious, as to what particularities can be found, beneath the surface of yours.
Quoting path
I hadn't come to state, then, that it is possible to achieve separation of these things, in any way; rather I sought to entertain the possibility of its occurrence, and therefrom, illustrate why it is indeed impossible by showing that contradiction emerges as a result.
On the basis of behavior, we do have a pattern, both individually and in sum, of resigning ourselves to the familiar, and the already known. I would imagine it to be the reason for which life seldom borders on the thrilling.
Thanks!
Quoting Vessuvius
Indeed. I think one of the reasons we philosophize is for the thrill.
[quote=Rorty]
This is the desire to be as polymorphous in our adjustments as possible, to recontextualize for the hell of it.
[/quote]
Neoteny, play. Philosophy keeps us young?
Quoting path
Its study fosters a desire to know, to understand, that is almost child-like in intensity. Which in some sense, would correspond to the retention of a certain trait that is little apparent in those of adulthood, and which deserves to be cherished for all time.
Forever the instrument(s) of a young mind, we are.
It would be too easy to correlate such deficits to certain modes of production and local cults. The human being as a social one directly contradicts the ideal, atomic economic subject of bourgeois society. The epistemological starting point of an isolated subject pays for independence and sovereignty over its environment (including humans) with the alienation of its own nature, and of nature in general.
My point also. And @InPitzotl's, I thought.
The Chinese Room (and the chips and dip?) just (or partly) cautioned against conflating the mere production of tokens with the actual pointing of them.
The fact there is no 'actual' about it is what makes the social game of pointing so sophisticated. (imv.)
Quoting path
If that means trying to explain our sense of consciousness as a natural effect of our thinking and conversing in symbols, then hooray, cool.
I recall Searle believed minds had to be made of a certain something. I think the analogy he used was pistons. They can't be made from putty. Minds have to exist on certain material.
Quoting path
If you haven't already, you may find this interesting. Particularly the stretching a zombie section.
http://www.jaronlanier.com/zombie.html
So the question is: in practice, how far can you get? Speed itself shouldn't be an issue, because the mind is supposedly an equation. 4+4+4 is 12 even if there is a thousand years between each four. (Also, some people distinguish between GOFAI and modern neural nets.)
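The "thousand years between each four" point can be sketched in code: the function a network computes is fixed by its arithmetic, not its speed. Below is a toy two-neuron layer (my own hypothetical numbers, not from any real model), evaluated one multiply-add at a time, exactly as an arbitrarily slow machine would have to.

```python
# A single neural-net layer computed strictly serially, one multiply-add
# at a time. An arbitrarily slow machine performing these same steps
# produces the identical output; only the wall-clock time differs.

def dot(xs, ws):
    # one multiply-add per iteration, in order
    total = 0.0
    for x, w in zip(xs, ws):
        total += x * w
    return total

def layer(inputs, weight_rows, biases):
    # ReLU activation applied to each neuron's weighted sum plus bias
    return [max(0.0, dot(inputs, row) + b)
            for row, b in zip(weight_rows, biases)]

out = layer([1.0, 2.0], [[0.5, -1.0], [2.0, 0.25]], [0.1, -0.5])
print(out)  # → [0.0, 2.0]
```

The interesting philosophical question is then whether the slow, step-by-step realization of the same function preserves whatever the fast one is supposed to have.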
In some ways I'm suggesting something similar. At the same time, the notion of 'material' is just as foggy as the notion of 'mind.' As I see it, we have this useful but vague distinction...and then we are tempted to build a metaphysics on such fog.
For what it's worth, I'm not saying that we are zombies or denying consciousness. I could be accused of suggesting that a certain hazy way of looking at consciousness has some serious problems that we mostly ignore. You might say that I'm trying to shine some light on the fog as such.
Quoting Forgottenticket
That reminds me of trying to see minds as the place where universals hang out. The mind is viewed as a spiritual eye that gazes at eternal truths, equations for instance. For this to work, all contingency has to be washed off of the actual languages we 'think' in (talk to ourselves in.) We have to imagine a 'pure' thought-content that lives 'behind' its vehicle. If I can translate Lolita into Italian, then some pure Lolita-in-itself is set upon a new vehicle. But I'd argue that translation isn't perfect...that even the meaning of Lolita in English is not stable. We might talk about identity and difference, the impossibility of a pure repetition. We treat things as the same when they are not when they are the same enough for this or that purpose.
[More can always be said. When I read this next week it won't 'mean' what it 'means' to me now, tho it might be the same-enough for me to plausibly elaborate on it.]
Yeah, I think we are on the same page. 'Intentionality' is one more token after all. We can't even point out what pointing out is. We can just use 'pointing out' in social contexts and see if we keep our jobs, get blank stares.
Quoting bongo fury
Yeah, it's along those lines. The social conventions are in some sense prior to the subject. Sociologists make similar points. The reactionary fantasy is a kind of pure subjectivity that participates in pure meaning-stuff, apart from all worldly contingency.
Well said, my friend!
I like your metaphor. We are the instruments of a young mind. I like to think of us individuals as 'neurons.' Together we form a brain. We work with symbols-in-common. We weave and reweave a conversation that preceded and will outlast us. 'I' am just the hazy unification of pieces of an inherited conversation. Obviously we have individual brains. But our hardware is designed to be networked. So metaphorically speaking (as if there were some purely literal alternative!), the ever-young species speaks thru us. The generations come and go, adding to an ever-young conversation that works only with the traces left by those who came before. We are whirlpools in such traces, scratching new patterns in the old patterns.
If for nothing else, I hold to the view that the quality which most differentiates us from other species, whether closely related to our kind or not, is the ability, as described, to pass on with each generation ever more informational content, by means of which we continue to better ourselves, and enhance all manner of recognition of both our place in the world and the many processes of which it consists. Once seen and understood, there can be left only an impression of awe at its majesty, at the fullness and refinement of that wondrous system upon which we depend; each variable having a role to serve, meshing seamlessly with the rest, and adapting to any changes that occur along the way of its natural procession. To realize then that the entirety of the world to which we lay claim is but one of innumerable such things is, so far as I may tell, the greatest and most substantial experience of humility to be given.
That's music to my ears. We 'bind time.' We are cumulative beings, increasing in complexity.
Quoting Vessuvius
Indeed. Or I like when I can get into a mode of praising [s]God[/s] reality. The shit-show is majestic. What do we do with this disastrous opportunity? What are days for?
[quote=Larkin]
What are days for?
Days are where we live.
They come, they wake us
Time and time over.
They are to be happy in:
Where can we live but days?
Ah, solving that question
Brings the priest and the doctor
In their long coats
Running over the fields.
[/quote]
https://www.poetryfoundation.org/poems/48410/days-56d229a0c0c33
A human being's autonomy always occurs within a context which is potentially open-ended. An automaton always operates within some well-defined context. A human complaining about an online purchase and becoming frustrated with a chatbot might suddenly decide that he does not need a certain type of article, no matter how attractive the price. He might suddenly decide to change entirely from a materialistic to a more idealistic form of life, terminate the chat, and return the item. Can we imagine the chatbot (or any automaton) ever behaving that way? Even if we programmed it to? Without specific utilities, our automatons lose their meaning.
Lanier's use of the zombie wasn't important. The point is whether consciousness can be computed using the transcript of a meteor shower.
I've never found any confusion with consciousness. Imo consciousness has always had the same definition, though some may get confused and define it the wrong way. Consciousness is the common sense Aristotle wrote about, where the senses are combined together and presented as a phenomenal whole. I knew it as a child though I lacked the jargon for it. There was seemingly something odd there that wasn't present within my feet or other organs.
The argument may be that there is no reductionist explanation for binding and phenomenal imagery.
Quoting path
Well, it's been about a week :). Anyway, I don't think the content of experience* is important so much as the fact that there is a similar-enough framework that has been the same throughout my life (what I described above). That's the consistent part.
*I want to add that, along with culture, even evolutionary psychological traits can probably be dropped (if the brain is on drugs or whatever). I didn't want to focus on them but still want to separate them from the binding problem.