A Model of Panpsychism with Real Mental Causation
How is it that the brain generates the private subjective world of the self, and for what purpose? It seems logically impossible that nerve signals can generate a subjective observer while at the same time enabling that self to have its own distinct powers. It appears to be a useless appendage.
I propose that the brain only generates the content of consciousness and not the self that binds it into a whole. The visual processing center, for instance, generates visual perceptions and thoughts. But the conscious self, that private world that binds the sensations into a simple unity, has a more permanent status and is not generated by the pattern of nerve signals. Instead, the mental subject comes from an already conscious nerve cell, from which it splits off to become what Leibniz called the dominant monad. The conscious self, then, is an atomic unity evolving from other conscious natural beings in a panpsychist universe, and as such can have real causal powers.
see:
Panpsychism and Real Mental Causation
Quoting lorenzo sleakes
This is intuitively plausible but could do with more elaboration and argument, if you have time. Can you explain further what you mean?
Quoting lorenzo sleakes
I agree with that, I think. Identity should be distinguished from consciousness. Consciousness seems to me to be about the unification of its content, whereas identity is about the contents of consciousness, which are various and plural and determined by brain (or perhaps body) function.
Quoting lorenzo sleakes
I can't bear Leibniz's Monadology, so I instinctively recoil from this. But you may have repurposed his ideas fruitfully, I don't know. I'd need to hear more about this bit.
You're seeing the brain and the subjective world of self as two different things--you're at least assuming some sort of epiphenomenalism if you're not simply asserting a partial dualism.
The subjective world of the self is a property of brains. It's not something different than brains that is generated for some purpose. It's what brains are like/it's simply qualities brains have. It's what those materials, in those structures, undergoing those processes, are like. It's not something separate from that.
True. Chemical properties are "properties" of physical systems, however chemical properties are not explicable in terms of the laws of physics. Rather they are a new set of "rules" which emerge as a result of the formation of a complex-stable physical system creating a fundamentally new type of context. Likewise for other superordinate systems, biological, psychological, etc. Different systems can be connected without one necessarily being reducible to the other.
What are the criteria for explanations in that scenario?
The fundamental forces of physics are gravitation, electromagnetism, the weak nuclear force, and the strong nuclear force. Chemical properties abound. Flammability isn't explicable by any of the fundamental forces of physics. However the fundamental forces of physics can account for the formation of new macroscopic systems (gas clouds, stars, planetary bodies). These new environments, to the extent that they are stable, create the conditions of possibility for novel formations which exhibit novel properties giving rise to new rules.
I think this is very likely true, and it is supported by the science. There's a recognized issue called the neural binding problem, which basically comes down to this: science knows a great deal about the neural systems that assimilate colour, movement, quantity, shape, and so on. But no faculty has been identified that is responsible for 'binding' all of the disparate stimuli into a coherent whole. Aside from Leibniz's monad, this also corresponds with Kant's analysis of the transcendental unity of the subject.
Quoting Terrapin Station
This is completely incorrect, and here's a scientific paper which spells it out https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3538094/
Bolds added.
There's no way to map third-person observable data to first-person subjective data to make a statement to the effect of "there is no area that could encode this detailed information."
Not that I was saying something about the relationship between external data and mental representations of it in my comment anyway. I was saying something about the idea that the "sense of self" is something different than certain states that one's brain is in.
At least offer objections and criticisms that stem from understanding what I wrote/what I was saying in the first place. Otherwise you're just wasting my time.
What I am saying is roughly the same as Michael Lockwood's disclosure view, in which awareness can be thought of as "a kind of a searchlight, sweeping around an inner landscape in the brain". The brain can then create the various sensations, but there is a self that ties them into a single unified being. The nerve signals are ephemeral and correspond to the ephemeral generation of experiences. But the self observing those experiences has more continuity and derives from the underlying consciousness of a conscious nerve cell, from which its consciousness splits off to become the dominant consciousness in the brain. It is an interactive dualism, but one grounded in a natural panpsychist world where there is a flow of mentality in every living eukaryotic cell and all the way down. Why else would the brain spend so much energy creating a virtual world of colors and sounds and feelings if not for the benefit of an independent entity that has powers of its own? See https://philpapers.org/rec/SLEPAR
You have it backwards. It's consciousness that is generating the brain.
The brains of people in a coma remain, despite their having lost consciousness...
Don't assume other consciousnesses exist.
So, I guess you're just explaining this to yourself and not the OP, right?
You don't need other consciousnesses to exist to have a conversation. You can talk to a computer if it responds well enough.
Where's consciousness generated, then?
Consciousness is not generated; it's eternal.
And it's omnipresent.
How can you prove you have been around that long? How can you prove you have been everywhere?
Only the here and now is real; time and space are illusions.
I would phrase it like this: Brain generates cognition, sensation, and emotion. How can "self" experience those feelings and thoughts? For what purpose? What is "self"?
The purpose of sentience or consciousness is so our brain can learn, which is how we "make choices". In other words, the purpose is so we can have "free will".
Upon what 'logic' do you conclude that? My model only requires the brain to do it.
If panpsychism were true, then wouldn't you expect the lowest forms of animals with brains to share very similar abilities of MC/EC with humans, because they all have practically the same hardware (neurons, nerves, connectivity, etc.)? However, we already know that few animals are even self-aware (e.g., few are able to recognize themselves and identify their own agency), let alone have EC.
Panpsychism supporters should start by experimentally making the above case before going to untestable, near-supernatural theories of quantum/atomic sources, etc.
I think you are wrong about that. You certainly do not need sentience or consciousness to learn well enough to "make choices". I assume you know better than that, so please better articulate what you mean, especially with respect to having "free will", which would only seem to require the agent (e.g., a robot) to have control of the direction of its own program (easy to code that!).
To learn by imagining, that is mental / virtual experience in advance, rather than by waiting to live or die when the physical / actual experience really happens.
It looks like another hint our ‘selves’ are virtual entities, virtual characters like in computer games. Living in a simulation, built not by evil machines, but by our own brains. Our personality, identity, ego, soul... it’s just a virtual little homunculus inside our heads.
Yeah, free will should really mean actions are determined autonomously, that is mostly by personal identity or character, instead of whatever else. Any other meaning is self-defeating contradiction, including the current popular definition: “ability to choose otherwise”.
In a previous reply, to discount my saying that we cannot imagine infinity, you said that was because "Well as we are talking about what can be imagined, not experienced, it seems you can imagine infinity." But now you say imagination is almost synonymous with virtual experience, as in "To learn by imagining, that is mental / virtual experience in advance". I strongly suspect you are confounding distinctly separate and different human faculties into one, which has you flip-flopping on definitions.
Quoting Zelebg
OK, so I make a robot that evolves its own personality, goals, and decision making by way of a genetic algorithm, and then have it make its own final action decisions based on its personal/unique personality and goals and, in part, on a random number generator to help bias it to action when split decisions are experienced (likely not too different from what most humans do). So, according to your definition, have I not invented/created a robot which has 'free will'? I suspect your next step will be to start including definitions of agency in 'will'. Be careful, that path is a house of cards.... :wink:
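For what it's worth, the robot described above can be sketched in a few lines. This is a toy illustration only; the action names, the "balanced personality" fitness function, and the tie-breaking threshold are all made up for the example, not anyone's actual proposal:

```python
import random

# Toy sketch: a "personality" is a weight vector over possible actions,
# evolved by a minimal genetic algorithm; the robot then picks actions
# from it, breaking near-ties ("split decisions") at random.

ACTIONS = ["explore", "rest", "signal"]

def fitness(personality):
    # Stand-in objective (purely illustrative): prefer balanced personalities,
    # i.e. minimize the spread between the largest and smallest weight.
    return min(personality) - max(personality)

def mutate(personality, rate=0.1):
    # Add small Gaussian noise to each weight.
    return [w + random.gauss(0, rate) for w in personality]

def evolve(pop_size=20, generations=50):
    # Start from random personalities, keep the fitter half each
    # generation, and refill the population with mutated copies.
    pop = [[random.random() for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=fitness)

def decide(personality):
    # Weights within 0.05 of the best count as a "split decision":
    # resolve among them with the random number generator.
    best = max(personality)
    candidates = [a for a, w in zip(ACTIONS, personality) if best - w < 0.05]
    return random.choice(candidates)

robot = evolve()
print(decide(robot))  # one of "explore", "rest", "signal"
```

The point of contention in the replies below is whether any such program, however its parameters were evolved, constitutes an agent with free will.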
It looks like you mixed me with someone. In any case, I don't see you disagree, so is there anything else I should say or explain?
Yes, that is free will: the program does what the program wants. Anything else takes away that freedom, even if it is a simple freedom, like a crazy wish to print "Hello world!" on the screen over and over again. Of course, some might want to include sentience and qualia for the free will to be complete; that's fine by me.
Sorry about that. My head is spinning with all the threads I'm conversing on, like a game of whack-a-mole. So, if you actually consider imagination to be a virtual experience, then must it have qualia to render the experience part? Panpsychism would seem to say yes, and that it comes from our cells, atoms, etc. Or is access consciousness enough to do human-level imagination?
So, how can you have a 'will' without a sentient agent?
Standard definitions seem to require that the "I" be present in the agent. So, my robot example won't cut it, especially since it cannot ever have the cogito ergo sum dilemma. That is, how can one say it made a willful choice when it does not have self-consciousness to know it is choosing anything? Thus, there is no free will there, because you don't have a sentient free agent.
https://plato.stanford.edu/entries/freewill/
Agent is the program itself. Meat or metal, human, lizard, insect or a single cell bacteria, body doesn’t matter except in the degrees of freedom. The usual problem with the popular understanding of free will is that people think it must not be determined to be free. The error there is not understanding that free will is 100% free only if it is 100% determined, that is determined 100% by the ‘person’ and 0% determined by anything else. So the essence of the problem is really, or should be, about defining what is a ‘person’ or 'self', rather than with the determinism per se.
I wouldn't say that this robot has free will; it is determined to act according to the parameters of your program. This is an age-old problem for theologians. How is it possible that God could have created us, and yet we have free will? This requires a separation between the agent (each of us) and the creator (God), such that there is no necessary relation between the agent and the creator. A necessary relation would allow the creator to figure out, and always know, the agent's act, denying the possibility of free will. Free will would be an illusion; the agent would only be responding according to its program. The omniscient God would always be capable of knowing each agent's choice, demonstrating that the agent really does not have free will. To avoid this, we need a separation between the agent and God, the robot and its creator, such that the thing created is not a necessary effect of the act of the creator, more like an accident. But allowing such an accident denies the omniscience of God.
I agree, but is that not implicit in how we use/mean the word 'will'? According to your definition, a growing crystal has 'free will', because it has actions that "are determined autonomously, that is mostly by personal identity or character, instead of whatever else", and because you say "body doesn't matter except in the degrees of freedom". The growing crystal determines "on its own", with a high degree of autonomy, where it is going to grow. So, does a self-replicating crystal have free will?
Re the robot: your robot is operating 100% deterministically on its program. So, your robot cannot represent or know itself. So, your robot cannot know what or why it wants. So, it cannot know itself well enough to find ways to modify its behavior according to new principles that have it go against its (prior) default programming. What about 'will power'? Your robot has no struggle in implementing any new goals, because it is completely rule-driven, so no 'free will'. Something that is not rule-driven will have conflict in deciding to execute a rule and will experience failure in trying to change its own rules.
Further, consider the bee, whose behavior is dictated by the hive's social rules. Most would say it effectively has no free will, even if it certainly has the degree of freedom to not obey them. What do you say?
Again, I posit to you: how can one say it made a willful choice when it does not have self-consciousness to know it is choosing anything? Thus, there is no free will there, because you don't have a sentient free agent.
What you seem to be (re)defining as 'free will' looks like a trivial game of semantics, nothing deep or meaningful, because it lacks an agent that has true free agency.
Sure, it's pretty clear our 'selves' are virtual entities (even a panpsychist should agree with that); however, how do you logically tie that into "purpose of sentience or consciousness is so our brain can learn, which is how we "make choices". In other words, the purpose is so we can have "free will""?
An agent can certainly learn and make choices without sentience or consciousness, and you seem to be contradicting yourself by saying that the purpose of sentience or consciousness (which the robot doesn't have) is so we can have "free will" (which you said the robot does have). Maybe you did not express what you really mean/think. Please restate it in clearer terms.
You got misdirected and hung up on my saying I 'created' it, which had nothing to do with my rhetorical point/example to Zelebg about the robot not having true self-agency, irrespective of who made its programming or why.
The point is that I do not accept your argument. You seem to be proceeding from the false assumption that only a self-conscious being can act freely. That's an assumption which begs the question. You are limiting, restricting what it means to "act freely" by defining it in a way which supports your metaphysics. If we release your restrictive definition, and proceed solely on descriptive principles you'll see that all sorts of beings "act freely". It is unreasonable to enact your restriction, saying that only self-conscious beings can act freely, just because it supports your metaphysics.
Why do you think it is that "choosing" requires that the individual be conscious of the fact that one is choosing? Does breathing require that one be conscious that one is breathing? Do you see what I mean? There is a term which describes the thing being done, "choosing". Why do you assume that the person must understand what it means to be "choosing", in order to be actually choosing?
Are you saying that the standard definition/meaning of "free will" does not require an agent? Do you believe the standard definition/meaning of "free will" requires "will power" to act on and carry out the 'will'?
Is that a bad example? I mean, are you saying that breathing is an example of carrying out our 'free will'? That example actually makes my point: breathing is a pre-programmed part of the agent's system, so it cannot be part of the agent's free will. If you believe otherwise, please try hard to use your will power to stop breathing for more than 5 (or even 10) minutes and let us know how successful 'you' were at that test of 'free will'. I'm sure you have the 'will power' to do so... If we do not hear back from you anymore, then we will assume you were right and you have 'free will' the way you say you do.
'Free will' is not just about anything that makes a choice. If it were, then you could say the Earth is an agent and has the 'free will' to make weather of its choice. If the choices always happen automatically, then no 'choice' by an agent is ever made at all. If you disagree with that, then everything, including inanimate objects, has 'free will' according to your (et al.) definitions, and you've thereby reduced the term to be meaningless with respect to how it is used for humans.
Consciousness originates in the heart and simulates in the brain.
The heartbeat can be traded for any external stimulus, where the brain hijacks all the heart's momentum.
Thought can be from the heart alone, or from external stimulus.
Things may be moving too fast to say one is more significant than the other. Consciousness is the heart's and the brain's continuum.
Perhaps not.
I feel I am close but, no cigar, if you know what I mean haha.
I do not believe that "agent" requires self-consciousness. For example there are conscious agents which are not self-conscious.
Quoting Sir Philo Sophia
The point is that one can choose without knowing oneself to be choosing, just like one can breathe without knowing oneself to be breathing. There is nothing intrinsic to "choice" which makes it necessary that a person know that they are choosing in order to make a choice. If you are presented with possibilities you might choose one without knowing that you are choosing. A child makes choices before knowing what it means to choose. We do not wait until we know what "choice" means before we start making choices. So we make choices before we know that we are making choices. We choose without knowing ourselves to be choosing.
Quoting Sir Philo Sophia
Free will is about making free choices, notice the word "free". If an automatic response is called a "choice" (and no one in their right mind would call it that), it is not a free choice, because it is necessitated by the thing it is a response to.
And once again, the point is that a person does not need to know oneself to be making a free choice in order to actually be making a free choice. So you've just gone off on a tangent here.
So, it sounds like you do not agree with @Zelebg that a robot operating 100% deterministically on its program is acting out 'free will', because its actions are necessitated by the thing it is responding to. Is that right?
Quoting Metaphysician Undercover
So, would you say that human type/level of 'free will' is pretty much equal to the 'free will' of, say, a bee? Why so or why not?
Right, I'm not even in the same ballpark as Zelebg. You and I might have some things in common
Quoting Sir Philo Sophia
Yes, I would think so. Free will only requires an agent to make a decision on the possibilities which are apparent. I would think that a bee does this. I think that even plants might make decisions, but their decisions are made much more slowly, depending on weather, nutrients in the ground, etc., and we don't really know very much about many of the actions of plants.
The difference with human beings is that we have developed our consciousness in a way which aids us in comprehending possibilities, and assessing possible outcomes from our actions. So not only am I capable of apprehending a much wider variety of possibilities than a bee, I can also foresee the possible outcomes from my potential choices. This, I think, is where self-consciousness starts to play a role, when I realize that my decisions have consequences.
Experience is qualia, in that experience consists of one or more different and simultaneous qualities. So let us call it an event: external event and internal event. Why either event is experienced is the same mystery. Though external events are always first converted into internal events before they are actually perceived / experienced.
Sentient agents are sentient programs.
“I” is a kind of program. Are you saying your robot can not have “I” or that no robot ever can have it?
Deterministic program does not equal deterministic function. But in any case, how is determinism relevant to "knowing itself"?
In one case I'm talking about 'conscious free will' as most people understand it. In the other case I am talking about my personal definition of free will which does not require consciousness.
Don't think that is true. It has been demonstrated that rats have counterfactual reasoning:
https://www.sciencedirect.com/science/article/pii/S0960982215002134
Quoting Metaphysician Undercover
So, why is Zelebg's robot program not able to make a decision on the possibilities which are apparent? It seems to me that every program does that.
Quoting Metaphysician Undercover
It has been well documented that bees act according to a social program any time they are among other bees, so how can you call that 'free will' when their behavior and decisions are completely dictated by the 'will' of the collective? All social insects likely share the same 'programming'.
This may be a fair point. However, only a few creatures seem to exhibit self-consciousness, yet most larger animals seem to have consciousness that can do what you say there; e.g., rats and birds do it. We would all agree that mammals have consciousness and all have 'free will'. Reptiles/amphibians less so, but still rather unquestionably. However, insects are pretty much like Zelebg's robot programming, which you say has no 'free will'. Your definitions seem to be too loose to be coherent. Can you tighten them up to exclude the counterexamples I point out?
Please give us an example of a stable deterministic program which is not constrained to a set of pre-determined behaviors and functions, yet achieves goals and/or has utility.
Quoting Zelebg
Would you agree that all true independent agents acting in the world have a goal? Would you agree that a more meaningful decision is made by an agent when it "knows" that its decision(s) are best for its overall goals? Would you agree that the best overall decisions can only be achieved if the agent has a state of awareness (e.g., is conscious) of its totality of needs, and if the agent has the ability to realign its behaviors and/or beliefs and/or goals according to the experienced/predicted consequences of those behaviors, beliefs, and goals? If so, then would you not agree that a conscious agent is exercising more meaningful 'free will' than the automaton programmed agent, and a self-conscious agent is exhibiting still more meaningful 'free will' than a merely conscious agent?
At the other extreme, would we say that an automaton agent having no goals, and making purely random 'decisions', is acting out of any kind of meaningful 'free will'?
What we care about is meaningful animated/living 'free will', not the trivial 'free will' of all matter.
Your original statement did not qualify it that way or indicate you were talking about 'common wisdom'. So, if you acknowledge that the 'common wisdom' of most philosophers/thinkers is that the purpose of sentience or consciousness is so we can have "free will", are you just playing with the word "free" separately from the word "will", as a synonym for 'make decisions', to say a computer program is 'free' to 'make decisions' and so it is the same thing as what humans call their 'free will'? That just seems like word games, unless you ground your ideas in the human context and coherently address all my counterexamples.
I disagree. We can experience many things that have meaning without the 'hard problem of qualia', like the shape of an object, which has a qualia very similar to what we expect the actual physical object to exhibit. However, the color red we 'see' in our minds is not 'different and simultaneous qualities'; it is a single, vivid, yet conjured projection which we somehow experience as a qualitative visual object apart from anything that can exist externally, and not something that can be simply programmed like video-game VR. I wonder if color qualia are in their own class. Are there other qualia that likewise have no basis for existing externally? Time qualia sort of exist externally via entropy and pseudo-causality, but that could arguably be another. Any others?
Not common wisdom. Common definition of free will. What's this about? Can you phrase it as a question?
No, but those are not the only options. My current working hypothesis is that 'I' cannot be a program or process, but is more of a state of matter/energy that may flow in a medium, though not in a programming manner. My posts on other threads give details on one implementation framework I have in mind.
Sure. What do you say is the 'common wisdom' meaning of "free will", and exactly where/how do you reason that it is not accurate/true?
I do not. What are we talking about, what is the argument?
You say a simple computer program is exercising 'free will', and you just said that your definition is not 'common wisdom' or a 'common definition' of free will. So, I ask you to state what you believe is the 'common wisdom' meaning of "free will" and to clarify exactly where/how you reason that it is not accurate/true. I'm trying to avoid word games and semantics here.
What is not clear about that question?
I do not want to argue about semantics. I do not see where this is supposed to lead, what the point or importance is, what your position is, what my position is supposed to be... I have no idea what we are arguing about or why.
A yes/no question would clarify.
Your odd definition of 'free will'. You brought up 'free will' in this thread, and the OP's topic relates to "I propose that the brain only generates the content of consciousness and not the self that binds it into a whole." One clear hallmark of consciousness is the common meaning of 'free will'. So, your statements saying that a simple computer program is exercising 'free will' would seem to contradict the OP's panpsychist position, which says that consciousness (and thus free will) can only come about via an "atomic unity evolving from other conscious natural beings in a panpsychist universe and as such can have real causal powers". If you can evidence that your definition of "free will" is correct, then that could be an argument against panpsychism.
Then you said that the purpose of sentience or consciousness is so we can have "free will", and later confounded your answer more by saying that was not your idea but the "common definition".
So, is it not reasonable for you to state what you believe is the 'common wisdom' meaning of "free will" and why you disagree with it? Otherwise, we are left dumbfounded trying to make any sense of your various less-than-coherent positions/statements.
I'm eager to hear your reasoned clarifications, unless you have none...
It's like you want me to argue something I do not care about. Simply state a yes/no question so I know what this is about, or quote my sentence and point out what you think is wrong with it.
I'm just asking you for a clear definition of 'free will', and for you to compare/contrast it with the 'common definition' you apparently alluded to. If you do not care to do that, then I dare say all your opinions on 'free will' are not meant to be taken seriously, as they are not open for debate towards a truth, but just to state/spread your position.
A program that can redefine its set of defined functions and goals.
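As a toy illustration of such a program (purely hypothetical; all names are made up, and whether this actually escapes "pre-determined behavior" is exactly what is disputed in the replies), a sketch might look like:

```python
# Minimal illustrative sketch: an agent that replaces its own goal
# function at runtime. The code itself remains deterministic; the
# question is whether swapping goals counts as redefining itself.

class Agent:
    def __init__(self):
        # Initial goal: prefer larger values.
        self.goal = lambda x: x

    def act(self, options):
        # Choose the option its current goal ranks highest.
        return max(options, key=self.goal)

    def redefine_goal(self, new_goal):
        # The agent swaps in an entirely new objective for itself.
        self.goal = new_goal

agent = Agent()
print(agent.act([1, 5, 3]))        # → 5 (maximize value)
agent.redefine_goal(lambda x: -x)  # now prefer smaller values
print(agent.act([1, 5, 3]))        # → 1
```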
Questions are not answers. The answer to my question needs to begin with something like this: determinism is relevant to "knowing oneself" BECAUSE…
There is no clear definition of free will. It is sufficient that we agree on a definition relative to the context we are talking about. But what is the context we are talking about now? Did I say something you disagree with? Then just quote it already and show me what it is.
https://www.merriam-webster.com/dictionary/quale
https://en.m.wikipedia.org/wiki/Qualia
Bad example. Just because the program changed its 'goals', it is still a deterministic program which is constrained to a (narrow) set of pre-determined behaviors and functions. So, it seems your example fails my request.
Quoting Zelebg
Those are obviously leading questions, which do hint at/point to my answers. My leading questions there are effectively begging you for your definition of 'free will'. That gets to the point of where you are coming from. So, why are you avoiding taking your stab at that?
Why not?
Again, I'm just asking you for your definition of 'free will', upon which your statements and examples are based. If you continue to resist, then I will just assume you have no definition and are 'testing the waters' (like a sophist) on some ridiculous idea that all computer programs have human-style 'free will', but that you really don't believe it, which is why you will not provide a definition supporting that degenerate (case) view. I guess I'm trying to "squeeze water from a rock" with you on this 'free will' topic, and I'm not interested in rhetorical banter on things you do not really believe to be well reasoned and true.
At least because it is an objectified and unified state which happens apart from time and apart from its hardware or embodiment (including any programming), and considers/'feels' all constraints at a single moment. State-machine programs or processes cannot achieve that third-party state of entwined being. Can you evidence that they can?
It was incoherent and unrelated to my question. If you cannot articulate an answer to a WHY question with a BECAUSE answer, you're wasting everyone's time.
So, you believe the emotive experience of, say, fear is a phenomenal qualia (like color) that we cannot even begin to explain?
OK. Let's drop the discussion of your definition/views on 'free will'. Seems like a dead end there for me too.
I already told you. Freedom of volition is proportional to how much it is determined by "self", and inversely proportional to how much is determined by anything else. And I already told you not all my statements relate to this same definition, so you need to quote me if you wish to have meaningful discussion.
OK, that is a start, but conflicting definitions generally will not converge. For this definition, then, clearly your computer programming example has no 'free will' (i.e., freedom of volition), because it has no 'self' (i.e., no "I" as an agent).
I'm not sure what you just said, but it looks like an assumption coupled with an assertion.
Haven’t you already agreed with me previously that all the evidence points to “self” or “I” being a virtual entity? Anyway, plenty of evidence, but for some reason it’s not convincing for everyone, so I’ll give you something that is maybe even better than evidence, I’ll give you a reason.
There is only one thing that is not exhausted by reductionism and thus singly holds the hope for some more meaningful understanding for the phenomena of subjective experience. This thing is a ‘virtual reality’, a world of algebraic abstractions and recursive algorithmic interactions, a realm where almost anything is possible.
The only explanation we actually already know of, for the existence of things that do not actually exist, such as unicorns or qualia, is virtual existence.
Semantics. Such a program is indeed deterministic at every instant in time, but that does not mean its future functionality is determined at any time, just like humans.
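A minimal sketch of the claim above, assuming a hypothetical `Agent` class: each individual step is fully deterministic, yet the rule mapping inputs to outputs can itself be swapped out at runtime, so the program's future functionality is not fixed in advance.

```python
class Agent:
    """Toy illustration: deterministic at every instant, but the behavior
    rule itself can be replaced after the fact."""

    def __init__(self):
        self.policy = lambda x: x + 1   # current (deterministic) behavior

    def act(self, x):
        return self.policy(x)           # fully determined at this instant

    def update(self, new_policy):
        self.policy = new_policy        # future functionality rewritten

agent = Agent()
before = agent.act(1)            # 2 under the original policy
agent.update(lambda x: x * 10)   # the rule is replaced mid-run
after = agent.act(1)             # 10 under the replaced policy
```

Whether this kind of runtime self-revision is "just like humans" is, of course, exactly what the two sides here dispute.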
Not true. Your computer program will still have pre-determined behaviors even if its functionality and/or goals can be updated ex post facto, which is completely unlike humans, whose behavior is not deterministic in any way or at any moment in time b/c, unlike the computer program, humans (and rats et al.) have an "I" which is self-determined sufficiently apart from their genetic code/programming to have true 'free will'. Thus, I reiterate, for your definition and example(s), your computer programming example clearly has no 'free will' (i.e., freedom of volition), at least b/c it has no 'self' (i.e., no "I" as an agent independent of its programming).
I don't believe so. Check our thread but I recall agreeing to *imagination* being (at least in some important way) a virtual experience, but that is a far cry from a model of 'self' or 'I'. "Virtual experience" does not equal "Virtual entity". Imagination seems to me to be more like a projection environment for the "I" to play in.
sounds like hypothetical mumbo-jumbo, not evidence. How do algebraic abstractions help anything re a physical model of 'Self" or "I"?
Recursive algorithmic interactions do not seem able to contain or objectify "I" as a unitary whole b/c recursion does the opposite: it successively approximates a global solution as a series of (not necessarily converging) sub-calculations working toward some final state, but the whole history is lost, so the final state has no sense of the path or parameters which got it there. So, on at least 2 counts my intuition says there is no way the 'Self' or "I" can be implemented as recursive algorithmic interactions.
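The "history is lost" point can be sketched concretely. In this illustrative Python example (the function names are invented for the sketch), a plain recursion refining a value toward a fixed point returns only the final state; the path exists afterwards only if it is explicitly threaded through as an extra argument.

```python
def settle(x, depth=0):
    """Recursively refine x toward a fixed point (a Newton-style step
    toward sqrt(2)); the call path is discarded on the way out."""
    nxt = round(x / 2 + 1 / x, 6)
    if nxt == x or depth > 50:
        return x                      # only the final state survives
    return settle(nxt, depth + 1)

def settle_with_history(x, path=None, depth=0):
    """Same recursion, but the path must be carried along explicitly."""
    path = (path or []) + [x]
    nxt = round(x / 2 + 1 / x, 6)
    if nxt == x or depth > 50:
        return x, path
    return settle_with_history(nxt, path, depth + 1)

final = settle(3.0)                   # approx. 1.414214, nothing else
final2, path = settle_with_history(3.0)  # same value, plus the whole path
```

Nothing in the bare recursion remembers how it arrived; the history is an add-on, not something the recursion itself possesses.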
Oh yes, it is very true. I already told you "I" is a program in a private virtual reality created by the brain. And according to what physics do you conclude that human behaviour is not deterministic in any way?
I explicitly said it is a reason and not evidence. Sounds like we reached limits of your attention and perhaps comprehension too, but let’s try again just in case you are drunk right now or having a stroke.
Try to claim this statement is false: the only explanation we actually already know of, for the existence of things that do not actually exist, such as unicorns or qualia, is virtual existence. Once you realize it is actually true, then my point should be self-evident.
Just because you say so doesn't make that a viable model. A private virtual reality doesn't solve the problem of coding a coherent, unified ego-state of "I" that is part of the VR, not merely observing it. If it were just a matter of writing code in a 'private virtual reality', don't you think that MIT, Google, IBM (et al.) would have already done that years ago? You said "I" could be programmed as a recursion, and I explained why I think not. You have to propose a plausible coding model for "I" to support your otherwise ungrounded, wild statements.
Maybe I'm too drunk or brain damaged to understand the purported obvious genius of your statement(s), but it seems obvious to me that the mind is a virtual existence b/c it exists in a conjured (illusory) 'reality'. How is that obvious point a helpful explanation of anything? The hard problem is not to create a VR of things that do not actually exist, but to create an existence that brings vivid, living, objectified experiential meaning to things (like color) that are otherwise just data values to be processed and pattern-matched as factual object properties.
I retract that broad statement made in haste. Of course that is not true, but what I was getting at is that as soon as a human knows it is expected to behave in a certain stupid way, it becomes more unpredictable that it will continue to behave in that predicted stupid way. Computer programs are completely predictable in this way: when they act stupid they will continue to act stupid. If you know the program, its inputs and outputs and decision algos, you can completely manipulate the state machine to change its behavior to whichever stupid result you want. AI is very easy to trick b/c of this. The programmer always knows how to make the executing program completely determined by the programmer's will and not the program's. Thus, it has neither free will nor an "I" as you want to think about it. This is not generally possible to do with humans (we know the KGB, NSA, CIA, et al. have tried (and failed) all the ways to force humans to do something they do not want to do!).
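The manipulability claim above can be sketched with a toy deterministic state machine (the states and transition table here are invented for illustration): identical input sequences always yield identical final states, so anyone who knows the transition table can craft inputs that steer the machine into any reachable state.

```python
# Toy finite state machine: behavior is fixed entirely by this table.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def run(inputs, state="idle"):
    """Feed a sequence of input symbols through the machine."""
    for symbol in inputs:
        state = TRANSITIONS.get((state, symbol), state)  # unknown input: no-op
    return state

# Determinism: the same inputs always produce the same final state.
a = run(["start", "pause", "start", "stop"])
b = run(["start", "pause", "start", "stop"])

# Manipulation: knowing TRANSITIONS, an outsider can force any state it likes.
forced = run(["start", "pause"])   # deliberately drives the machine to "paused"
```

Whether human behavior escapes this kind of table-driven steering is the live question in the exchange; the sketch only shows why a known state machine cannot.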
Ironically, the above notwithstanding, human mobility:
https://cos.northeastern.edu/news/human-behavior-is-93-predictable-research-shows/
HUMAN BEHAVIOR IS 93% PREDICTABLE, RESEARCH SHOWS