Integrated Information Theory
I'm going to describe IIT, based on the Scholarpedia page.
IIT, originated by Giulio Tonini, is an attempt to specify the system requirements for consciousness. It starts with "axioms", which are aspects of consciousness derived from phenomenology. Based on these axioms, it presents "postulates", or specs for a system that has these axiomatic attributes.
In formulating the axioms, Tonini uses these criteria:
1. About experience itself;
2. Evident: they should be immediately given, not requiring derivation or proof;
3. Essential: they should apply to all my experiences;
4. Complete: there should be no other essential property characterizing my experiences;
5. Consistent: it should not be possible to derive a contradiction among them; and
6. Independent: it should not be possible to derive one axiom from another.
Next: a closer look at the axioms:
A significant computational challenge in calculating integrated information is finding the Minimum Information Partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the Minimum Information Partition.
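To make the scaling point concrete, here is a toy sketch (my own illustration, not Toker & Sommer's published algorithm): the brute-force search over bipartitions grows exponentially with the number of nodes, while a spectral split of the correlation matrix yields one candidate partition in polynomial time. The synthetic signals and the sign-of-eigenvector split are assumptions made for the demo.

```python
# Toy contrast: exhaustive bipartition search vs. a spectral shortcut.
# NOT the actual MIP computation, just an illustration of the idea.
from itertools import combinations

import numpy as np


def all_bipartitions(n):
    """Yield every split of nodes 0..n-1 into two non-empty parts.

    Fixing node 0 in the first part avoids double-counting; there are
    2**(n-1) - 1 such splits, i.e. exponential growth in n.
    """
    rest = list(range(1, n))
    for k in range(n - 1):
        for extra in combinations(rest, k):
            part = {0, *extra}
            yield part, set(range(n)) - part


def spectral_bipartition(corr):
    """Split nodes by the sign of the second-largest eigenvector of corr."""
    _, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    v = eigvecs[:, -2]                 # eigenvector of the 2nd-largest eigenvalue
    return (set(np.where(v >= 0)[0].tolist()),
            set(np.where(v < 0)[0].tolist()))


# Synthetic data: a global signal plus two correlated triples of channels.
rng = np.random.default_rng(0)
n_samples = 2000
common = rng.normal(size=(n_samples, 1))
g1 = rng.normal(size=(n_samples, 1))
g2 = rng.normal(size=(n_samples, 1))
signals = np.hstack([common + g1 + 0.3 * rng.normal(size=(n_samples, 3)),
                     common + g2 + 0.3 * rng.normal(size=(n_samples, 3))])

corr = np.corrcoef(signals, rowvar=False)          # 6 x 6 correlation matrix
groups = spectral_bipartition(corr)                # one candidate partition
n_partitions = sum(1 for _ in all_bipartitions(6))

print(n_partitions)                      # 31 bipartitions already at n = 6
print(sorted(map(sorted, groups)))       # should recover the two triples
```

The point is only the asymmetry: the exhaustive list doubles with every added node, while the eigendecomposition stays cheap.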
Yes, I'm aware of that problem. I want to understand the whole approach. Do you think the problem with calculating phi is insurmountable?
Tonini uses the axioms to specify what he wants a target system to support. The first is that consciousness is intrinsic, by which he means:
Elsewhere, Tonini says that Galileo took the observer out of science, and that we will now put it back in. It's in this light that we should understand this axiom. The emphasis here is on the view of the observer.
Typically, neuroscientists observe consciousness. They record what a subject reports, which is an example of a behavioral correlate of consciousness (BCC), and link that up in some way to neuronal correlates (NCC). Tonini wants to go beyond that approach and just start with experiences as intrinsic.
Is he warranted to do that? Does it matter?
As long as they didn't chew on the insulation.
It's Tononi. His system has as axiomatic the existence of consciousness. I agree and have the same perception of time. Both simply exist. And unraveling qualia or dissecting time seems wasted effort.
It's Giulio Tononi. :roll:
I'll scoot through the rest of them. That first one kind of sets the frame.
Composition
Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.
Information
Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.
Integration
Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.
Exclusion
Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure.[2] Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.[3]
So
1. Intrinsic, a single perspective
2. Composition, a discernible structure
3. Information, each experience is distinct
4. Integration, experience is unified
5. Exclusion, experience has a definite grain
What follows are the postulates, or characteristics of a system that is conscious. These postulates match up with the axioms.
Axioms and postulates are generally considered the same things. Does Tononi distinguish between them?
For the sake of distinguishing between the description of consciousness (axioms) and system requirements (postulates). Later.
"Note that these postulates are inferences that go from phenomenology to physics, not the other way around. This is because the existence of one’s consciousness and its other essential properties is certain, whereas the existence and properties of the physical world are conjectures, though very good ones, made from within our own consciousness."
It's Descartes 2.0.
Can you treat a neuron like a logic gate?
Cool. Can you?
1. A neuron can secrete several different types of neurotransmitter into the synapse.
2. Even in a simple circuit each neuron is connected to many other neurons both by chemical synapses and by what are called gap junctions.
3. Neuronal activity can be altered by neuromodulators, neuropeptides and other compounds that are secreted alongside neurotransmitters and which function as relatively slow-acting mini-hormones, locally altering the activity of neighbouring neurons.
4. The activity of each neuron is affected not only by its identity (that is by the genes that determine its position and function), but also by the previous activity of the neuron.
5. Structures in the brain are not modules that are isolated from one another - they are not like the self-contained components of a machine...neurons and networks of neurons are interconnected and able to affect adjoining regions by changing not only the activity of neighbouring structures but also the patterns of gene expression.
Oh, this is why the theory emphasizes causality within the system itself.
Thanks for the explanation.
IIT says this requires a system that is causally open to its environment in both directions, but that also has causation internal to the system.
Or in Tononi's words:
"To account for the intrinsic existence of experience, a system constituted of elements in a state must exist intrinsically (be actual): specifically, in order to exist, it must have cause-effect power, as there is no point in assuming that something exists if nothing can make a difference to it, or if it cannot make a difference to anything.[7] Moreover, to exist from its own intrinsic perspective, independent of external observers, a system of elements in a state must have cause-effect power upon itself, independent of extrinsic factors. Cause-effect power can be established by considering a cause-effect space with an axis for every possible state of the system in the past (causes) and future (effects). Within this space, it is enough to show that an “intervention” that sets the system in some initial state (cause), keeping the state of the elements outside the system fixed (background conditions), can lead with probability different from chance to its present state; conversely, setting the system to its present state leads with probability above chance to some other state (effect)."
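The intervention idea in the quoted passage can be shown in miniature. This is my own toy, far short of IIT's full phi machinery: a two-node system where each binary node copies the other's previous state, so setting a past state by intervention fixes the present state with probability well above chance (which is 1/4 for two binary nodes).

```python
# Toy "cause-effect power" check on a two-node deterministic system.
# This is an illustration of the intervention idea, not IIT's phi calculus.
from itertools import product


def step(state):
    """Deterministic update: each node copies the other node's previous value."""
    a, b = state
    return (b, a)


states = list(product([0, 1], repeat=2))
chance = 1 / len(states)  # 0.25 for two binary nodes


def effect_prob(past, present):
    """P(present | do(past)) under the deterministic update."""
    return 1.0 if step(past) == present else 0.0


# Effect side: intervening on the past state pins down the present state.
print(effect_prob((0, 1), (1, 0)))  # 1.0, well above chance

# Cause side: given the present state, which past states could have produced it?
present = (1, 0)
causes = [s for s in states if step(s) == present]
print(causes)  # [(0, 1)] : a unique compatible past state
```

Both directions give probabilities different from chance, which is all the quoted passage requires for the system to have cause-effect power upon itself.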
Good question. In a YouTube lecture, Christof Koch emphasized that the hardware they're thinking of is neurons, period. So the boundary is the surface of the brain?
But there's no consciousness associated with your liver, in fact consciousness doesn't even require a cerebellum, but if we cut the brain in half, we get two conscious entities in one skull.
I don't think they know exactly what the target hardware is. I don't know why it's right to zero in on neurons. What do you think?
Scott Aaronson debunkificated this a while back. David Chalmers shows up in the comment section.
https://www.scottaaronson.com/blog/?p=1799
Cool. Thanks.
Is it to be implemented on a digital computer? You mentioned logic gates.
That would be a non-starter, as far as modelling the brain is concerned, for the reasons outlined above, and more. For example there are wave-like phenomena involving large groups of neurons, also what I think is called back-propagation, with impulses travelling both upstream and down.
But there's no consciousness associated with your liver,
I still maintain that the relevant system is the entire body. The brain is not modular like a man-made machine (see above) and neither is the body. The brain relies on a supply of blood, and the liver plays a major role in providing that.
The idea is to be able to make predictions about whether a system is conscious. I'm not sure how they would test it, though.
Quoting Daemon
I guess the assumption behind excluding most of the body is that it powers the system which produces consciousness without participating in consciousness.
Consciousness is embodied.
Quoting fishfry
For assertion 1. the philosophical question is what is x if anything at all. Since experience is private there is no way to answer that except for claiming that experience-in-itself exists as a Platonic concept and as a corresponding linguistic proxy.
In astronomy there are the analogously fuzzy notions of dark matter and dark energy which are postulated to reify their effects on galaxy clusters and on theoretical universal expansion. Neither can be directly seen and identified as objects but physicists can justify supposing that they necessarily exist.
Tononi's phi would be a basis for an objective measure of something-or-another that he labels as experience/consciousness. It is not a measure of my mentality before my first cup of coffee, but what it might do is define tononi-ness, an entirely different thing with hopefully some connection to what is generally thought of by others. Whether that is meaningful or just useful would depend on physical implementation of measuring and classifying phi's for various living and inanimate subjects. If the quantification of a cat's phi lies somewhere between Tononi's and a sunflower's, then he will have achieved some success.
It's not obvious to me that consciousness requires a system in possession of a liver. That it needs a power source, yes. Since there's a certain amount of closed causation, intuition says there's some work involved.
Does it need a liver to filter and provide digestive enzymes? I don't know.
Downstream we may realize it does, but I think we can start with the assumption that we don't need it. Since we can change out your liver without altering your consciousness, maybe you don't.
Quoting Daemon
What does this tell us?
Dark energy is at the root of the present crisis in cosmology. The crisis (having to do with conflicting measurements of the universe's expansion) promises to increase our understanding of it.
Yay for linguistic proxies.
Aaronson demonstrated that the IIT significantly doesn't match some of our intuitions about consciousness, so in my opinion IIT isn't correct, but IIT still matches very well some other of those intuitions. It may be used for example to recognize that a group of humans doesn't constitute a conscious entity greater than a single human.
In my opinion IIT is a great attempt at solving consciousness, a surprisingly serious approach to a non-trivial philosophical problem. I'd really like to see it tweaked to make better classifiers of conscious entities.
I think defining consciousness using only the flow of information is lacking. For starters I'd include that a conscious entity needs to recognize patterns in this information.
I do think the idea of the brain and body not being modular is an important one. They haven't been designed in the way we design machines. I don't know how much bearing this has on the maths side of it, that is completely beyond me. But if they are really equating neurons to logic gates, the maths is just meaningless I think. The actual mechanisms are so much more messy, plastic, multifaceted than binary logic gates.
Brains and bodies don't work by processing information. The brain works through things like neurotransmitters, neuromodulators, electrical impulses, wave-like phenomena. Calling all this "information processing" doesn't tell us anything more about what is happening.
"Patterns" is another troublesome concept @original2. Can we think about a relatively simple biological process to see why. Bacteria can swim towards a desirable stimulus (let's say some sugar) or away from a toxic chemical. It seems they would need to recognise a kind of pattern in the increasing or decreasing concentration of the chemical. But in fact we know every detail of the chemical process that achieves the directional swimming, and there's nothing left for "pattern recognition" to do.
The same is true of the more complex processes in the brain. It works through things like electrical impulses and so on, not pattern recognition.
Pattern recognition is something a person does, not their brain.
What does the embodied mind tell us @frank? I suppose it tells us that Tononi isn't seeing the whole picture?
Sure. You need some way to keep the brain alive. We can take over the functions of the heart, lungs, and kidneys with machinery. Hospitals do it every day. The patient can be wide awake while being supported in this way. So whether the body is modular or not, whether a human needs a liver, I think that's a side issue.
Quoting Daemon
But it's not like some sort of mystical fuzz. Is it?
Quoting Daemon
I don't see why it would need to recognize a pattern.
There's a point here which I haven't yet properly expressed or thought through (which makes it interesting, to me anyway).
When the hospital takes over vital functions they are taking over something that's already in operation, that already has to be in operation for the person to be in a position to be conscious at all. We can't make the whole thing from scratch, using machinery.
And the brain can't be isolated from the rest of the body, it's enmeshed with the rest of the body. There's no sense in thinking about it operating in isolation, it would have nothing to operate with.
Or to look at it from another angle, feeling is primary.
Quoting frank
Why should it be? Why introduce the idea of mystical fuzz? Seems to me this is to accept the categorisations of cartesian dualism.
Quoting frank
And I don't see why the brain would need to recognise a pattern in information as @original2 has suggested. A person can recognise a pattern, a brain can't (it's the homunculus fallacy).
I understand what you're saying. The body comes as a package.
If we want to know which parts drive blood pressure, we can pick out the heart and kidneys. We know your foot isn't part of it.
But for you consciousness is different from that kind of function? We can't identify a body part that produces it?
Quoting Daemon
The brain actually is isolated by the blood brain barrier. The CNS has its own private immune system. At 2 in the morning while studying A&P, it might occur to a student that the CNS looks like an alien that invaded a tetrapod.
But I digress. :cool:
We can identify it in an abstract sense, but not in a practical sense, as we can with a manmade machine.
We have "brainoids" now, grown from adult human skin cells. But unless they are connected to sense organs, and yes, things like feet, they can't do what real brains do. There isn't anything for them to be conscious of.
I see what you're saying. The IIT approach would be like starting with the assumptions that we digest food and that some parts of the body are causing that.
We would then start with specifying the parameters of digestion: food goes in, food breaks down, the body keeps the good parts and throws away what it doesn't want.
Then we would hypothesize for testing.
It's ok that digestion is inextricably linked to other body functions. An "abstract" division, as you say, is enough for our purposes.
Is that satisfactory? Or is there some reason consciousness should be looked at radically differently to digestion?
I still owe most of my knowledge of IIT to you, but from what I understand, the purpose is to quantify what is required to achieve consciousness. But it seems they are abstracting an arbitrary aspect of the biological machinery, and quantifying that. They aren't taking account of all the brain stuff that isn't just neurons firing (the neuromodulators and so on), they are pretending that neurons are logic gates (which they aren't), and they aren't taking account of the essential involvement of the body beyond the brain.
Do you know of responses to such criticisms? Should I read the Scholarpedia page, or are you planning to post more?
A conscious entity would need to interpret the information flow. But what does interpret mean? In the broadest sense even a rock interprets the information flow in its form and position.
According to Fritjof Capra: "cognition is a reaction to a disturbance in a state." And it would seem everything is a system in a state.
My understanding is that Tononi's phi is intended to be exactly that. But I only linked to the Aaronson article and haven't paid much attention to IIT, and can't comment authoritatively.
It's not entirely isolated though. The blood brain barrier is a filter isn't it, not a seal.
Yes, the blood-brain barrier is a filter. I realize the CNS is not a parasite. It just kind of looks like one from a certain angle.
I will be moving on regarding IIT, just need a minute to sit down. :)
Can I just say how much I'm enjoying the discussion @Frank, I really appreciate you posting the summaries and will wait patiently until more arrives.
I appreciate that Tononi began with abstract mathematical Information as fundamental, and derived human-like Consciousness as an emergent phenomenon. That bypasses the distractions of worrying about the feelings of subatomic particles. :smile:
That's so cool! Thanks!
So we talked about the postulate that covers the intrinsic character of experience: the spec being that the system must have a certain amount of closed causation (a cause-effect space).
For composition, the postulate is that the system has to be structured:
"Composition
The system must be structured: subsets of the elements constituting the system, composed in various combinations, also have cause-effect power within the system. Thus, if a system ABC is constituted of elements A, B, and C, any subset of elements (its power set), including A, B, C; AB, AC, BC; as well as the entire system, ABC, can compose a mechanism having cause-effect power. Composition allows for elementary (first-order) elements to form distinct higher-order mechanisms, and for multiple mechanisms to form a structure." --Tononi in the Scholarpedia article.
IOW, the intuition is that if the system is structureless like water, it can't produce a structured experience.
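The combinatorics in the quoted composition postulate are easy to spell out. A small sketch (my illustration, just enumerating the candidate mechanisms the quote lists): every non-empty subset of the elements A, B, C can in principle be a mechanism with its own cause-effect power.

```python
# Enumerate the candidate mechanisms of a system ABC: the non-empty
# subsets of its elements, as listed in the composition postulate.
from itertools import combinations

elements = ["A", "B", "C"]
mechanisms = ["".join(c)
              for k in range(1, len(elements) + 1)
              for c in combinations(elements, k)]
print(mechanisms)  # ['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC']
```

For n elements there are 2**n - 1 such candidates, which is one reason the full analysis gets expensive quickly.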
I think it's kind of the opposite. He starts with phenomenal consciousness and is trying to derive a system that would produce it.
Yes, but not any interpretation suffices for consciousness in my opinion. Any information transformation could be arguably seen as interpretation and trivial information transformations are what for example inanimate objects do with information thrown at them.
My gut tells me that if an entity is able to match information to patterns, it is a mark of consciousness, though not necessarily a big one. By pattern I mean some generalized meta-information that describes information succinctly.
An animal able to differentiate between predator and prey, for example, would have baked-in (perhaps learned, but not necessarily) information patterns that allow it to do that.
An entity capable of finding new patterns in the information would be in a certain way better than one that isn't, but I'd argue that it would be useful to call them both conscious. It would be useful to have terms for both of those categories, though.
P.S. Is there an automatic way to quote other posts in the style most people do that here? It eludes my perception.
If you highlight the text a quote option appears.
Quoting original2
That is pretty close. I would say if information can be integrated and symbolized, and physical form is a symbol, imo. It has to start somewhere, and this way it starts at the beginning.
We should let Frank finish his excellent summary and perhaps discuss later. Anyhow, welcome to the forum.
Does the theory ever address the question of what constitutes a "distinct" mechanism (without a human being making the call)? Without that, the theory doesn't get off the ground, or we have panpsychism, which doesn't explain anything.
Their approach has been to try to determine which parts of the brain are actually involved in consciousness. It's not the whole brain.
Remember, they're building a hypothesis which could be tested.
We can count your disapproval of their system boundaries as a potential flaw, but it's ok to consider their hypothesis as it is.
What's the hypothesis and how would it be tested?
Why is it ok to consider their hypothesis as it is, when it seems to be fatally flawed from the outset?
It sounds like this is a deal breaker for you.
https://youtu.be/CmuYrnOVmfk
Quoting Gina Smith
An electronic device is only an "entity" insofar as it is defined as such by ourselves.
We also define which elements of the device are to count as the relevant "information".
The electricity flowing around my laptop now came from a power station 10 miles away, it passes through various other electronic devices before it gets here, and it passes through elements of the laptop that we don't include when considering the information content of the "system", such as the cooling fan motor.
This theory is not a serious scientific proposal.
But scientific proposals do seek relevant information as pertaining to some possibly useful measurable aspect of the natural world, or us as individuals, or the environment that we create.
To follow your analogy, electric meters measure not what we did with the used electricity but the total usage over a month.
I'm not sure what IIT proposes. Mathematically, is it the model for an experience/consciousness meter which reads single transient experience or average consciousness PHI, or perhaps both?
Of course, the two are not the same. I can be equally conscious and still experience or miss seeing a passing hawk in the sky. In either case, would PHI tell us anything about my experience or my consciousness?
Perhaps an anesthesiologist could use PHI to gauge consciousness in addition to heart and respiration rates for surgery?
From the point of view of philosophy, let's suppose that the Chinese Room is on the international isolation ward with many adjoining rooms all instrumented with the latest Tononi meters on the door and computer technology for remote communication. Could the Tononi PHI improve on the failure of the classic thought experiment? Could I or my Tononi computer differentiate a conscious person from an AI? Would it matter?
[bolding mine]
Physicalism is teetering like a house of cards. Consciousness is primary. The physical world has been relegated to a conjecture (though a very good one). Soon, the parenthetical "though a very good one" will be gone. And then the conjecture of the physical world itself. Positing the existence of mind-independent stuff solves nothing and creates enormous problems.
If all your sense organs stopped working, you would still be conscious.
Indeed...
I'm not sure how you could know that. But in any case you are starting from a position where I previously had working sense organs. But suppose I had never had them: I don't think I'd ever have been conscious. And consider this from an evolutionary perspective: consciousness would never have developed at all without sensing, sense organs.
Nope. Tononi and Koch think computers and thermostats and photodiodes are conscious. Anaesthetists know better.
If this theory is taken seriously, it's because we're not in the clunk-headed behaviorist 20th Century anymore. We're still pretty physicalist, though. We're just in the process of stretching the meaning of that word. Again.
Quoting RogueAI
I was hoping to go through the whole theory step by step. I've just been busy. I'll get back to it shortly.
Good point. Sensory input might be necessary at the start.
[i]"Am I just in some weird internet bubble, or are tons of atheists (like myself) realizing that consciousness is a serious problem for materialism and becoming anti-materialists?
And if so, why is the Hard Problem suddenly (as in the last ten years or so) dawning on lots of secular rationalists?
Also, just as actually reading the Bible is often a great way to notice that religion is incoherent and ridiculous, reading Dennett’s "Consciousness Explained" is what finally made me realize that materialist explanations for consciousness are all incoherent. And ridiculous."[/i]
That could have been me talking! For a lot of my life, I was hardcore atheist materialist and that was the paradigm when I was in college in 95. I only bring up this Youtuber nobody because I was talking about physicalism teetering, and I happened to run across their comment.
It doesn't lead to it per IIT. Integrated information is consciousness.
But if you have a second, a thing I've been doing is thinking about the axiom-postulate matches:
So we have:
Intrinsicness --- internal causation
Composition --- structure
I just discovered Scholarpedia is gone. Crap. Ok, for the overview, I'll use Wikipedia.
Every instance of information integration is an instance of consciousness?
Good question. Did you see what I said earlier about axioms and postulates?
but also:
A photodiode has experience, but a PC doesn't, unless it is "neuromorphic", whatever that means, and it is "made of silicon".
@Frank: you started this, do you think there's really anything in it?
I skimmed over it, but this will be real quick. Are you claiming consciousness=integrated information? Because if so, then integrated information=consciousness, hence my question. Or do you mean there's a causal relationship between consciousness and integrating information?
There lies the dilemma, what integrates the information?
Tononi puts it this way:
The system must specify a cause-effect structure that is the particular way it is: a specific set of specific cause-effect repertoires—thereby differing from other possible ones (differentiation). A cause-effect repertoire characterizes in full the cause-effect power of a mechanism within a system by making explicit all its cause-effect properties. It can be determined by perturbing the system in all possible ways to assess how a mechanism in its present state makes a difference to the probability of the past and future states of the system. Together, the cause-effect repertoires specified by each composition of elements within a system specify a cause-effect structure. Consider for example, within the system ABC in Figure 3, the mechanism implemented by element C, an XOR gate with two inputs (A and B) and two outputs (the OR gate A and the AND gate B). If C is OFF, its cause repertoire specifies that, at the previous time step, A and B must have been either in the state OFF,OFF or in the state ON,ON, rather than in the other two possible states (OFF,ON; ON,OFF); and its effect repertoire specifies that at the next time step B will have to be OFF, rather than ON. Its cause-effect repertoire is specific: it would be different if the state of C were different (ON), or if C were a different mechanism (say, an AND gate). Similar considerations apply to every other mechanism of the system, implemented by different compositions of elements. Thus, the cause-effect repertoire specifies the full cause-effect power of a mechanism in a particular state, and the cause-effect structure specifies the full cause-effect power of all the mechanisms composed by a system of elements.[8] " --Tononi article mentioned above
So we have distinct and exhaustive cause-effect repertoires.
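The XOR example in the quoted passage can be checked directly. This is my reading of the wiring (the Figure 3 diagram isn't reproduced here): C = XOR(A, B), and the AND gate B takes A and C as its inputs.

```python
# Verify the cause and effect repertoires for C = OFF from the quote.
from itertools import product

# Cause repertoire of C = OFF: which past (A, B) states are compatible?
# XOR is 0 exactly when its inputs agree.
causes = [(a, b) for a, b in product([0, 1], repeat=2) if (a ^ b) == 0]
print(causes)  # [(0, 0), (1, 1)] : OFF,OFF or ON,ON, matching the text

# Effect repertoire of C = OFF: the AND gate B must be OFF at the next
# step, regardless of A's state, since one of its inputs (C) is 0.
b_next = [a & 0 for a in [0, 1]]
print(b_next)  # [0, 0]
```

So the repertoire is specific in exactly the way the quote claims: a different state of C, or a different gate in C's place, would rule in and rule out different pasts and futures.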
We may have cause effect repertoires. I wouldn't say they are exhaustive, as a moment of consciousness is a final synthesis of cause effect repertoires. What synthesizes it?
Put simply, if consciousness is the state of integrated information, what is the higher function integrating it?
Evolutionary biology might be the answer. Why would we need to answer that definitively at this point?
Then that would be consciousness=(some amount of) integrated information, and vice versa. That sounds a little ad hoc, but maybe. But if Tononi takes a measured approach and sets a minimum amount of information processing that has to go on before consciousness arises (call it X), an opponent can claim, "No, no, that's all wrong! It's X-1 [or X+1]. Then you get consciousness." Since there's no way to "get under the hood" and actually see if something is conscious or not, Tononi and his opponent are just going to go around and around with no way to prove their respective cases. It's easier to simply claim consciousness=information processing, but that has problems of its own.
Around 300 years ago Newton described gravity. We're still trying to understand how it works.
A theory of consciousness doesn't have to be served up completed. It's ok if this takes a while.
"Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual)."
I like this a lot.
"Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on."
I have problems with this. Consciousness is often structured, but it seems possible to clear our minds for short times during meditation and still retain consciousness. In that case, we are experiencing only our own conscious awareness, which would not be an experience composed of multiple phenomenological distinctions. I can also imagine a single thing that is not composed of anything else: a giant red blob. Mostly I agree with this.
"Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having."
Is this saying that all experiences are unique and that when an experience is happening there's something it's like to be having that experience, even if it's an experience of pure darkness and silence?
"Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book."
I'm not sure that this is true...
"Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure.[2] Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.[3]"
This one is fascinating, and I'm glad I clicked on your link. I want to talk about the bolded. Let's suppose we have three people. Bob is stationary, Frank is accelerating to 99% of the speed of light, and Suzie is also motionless, but through a magical telescope, she's able to observe Bob's and Frank's brains in real time. Bob's brain should look like a normal functioning brain, but as Frank accelerates, shouldn't Suzie see Frank's brain functions go slower and slower as time dilation kicks in? And let's also say that Suzie's magic telescope can look inside Frank's mind. As Frank accelerates, would his thoughts look slower and slower to Suzie? Would the "speed of his mind" (just go with it) look slower to Suzie? And yet it must, because at the end of Frank's trip, he's going to report that he was conscious for X amount of time, while Bob reports that he was conscious for X+years more than Frank. If Suzie is watching their minds in real time, she's going to observe a divergence, and is it going to look like Frank's consciousness "slowing down"? What would that be like? Slowing a film down?
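A quick numeric aside on the scenario above (my own arithmetic, using the standard special-relativity formula, nothing from IIT): the slowdown Suzie would observe is just the Lorentz factor at 99% of the speed of light.

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for speed beta, given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At 99% of c, roughly 7 of Suzie's seconds pass per second of Frank's
# brain activity, so his processes would indeed look slowed down to her.
print(round(lorentz_gamma(0.99), 2))  # 7.09
```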
This is the pertinent point. Evolutionary biology (the brain) facilitates the information gathering and translating, but only the integrated information can create this moment of consciousness. Nothing other than the information in an integrated state can create it. Nothing other than information knows how the information can be integrated. It is self-organizing: information integrating information into a synthesis of consciousness that is a state of integrated information.
You're asking about the information axiom. Tononi is using "information" the way physicists do. Out of all the ways a thing can be, it's this way.
It takes a little getting used to. It's kind of subtle.
Quoting RogueAI
ok
Quoting RogueAI
Could be. :grin:
I'm not sure what that means.
As far as I can see, there is a continuum of integrated information, integrating more and more information onto itself. The brain provides the substrate and it facilitates the translation of sense data to information, but it cannot anticipate any instance of integrated information (consciousness). The information has to create this by itself, by integrating on its own. The senses and brain orient the person in place, through vision, sound, etc, but the significance of that orientation to the person is not something biology can anticipate. It would suggest the information self-organizes. Like pieces of a jigsaw puzzle integrating on their own.
We normally say information integrates subconsciously, but another explanation might be that it is self organizing, as per systems theory.
The brain doesn't do information processing any more than digestion does. The brain does things like ion exchanges at synapses. We can describe this as information processing, but all the actual work is done by things like ion exchanges.
So far I know that causality is big for Tononi.
I may have to buy his book. :grimace:
Bob is just going to be a lot older than Frank. They'll be able to consult with a physicist to understand why.
I wasn't specifically referring to IIT, though it is also a relevant question for it. Imo, the single most pertinent question regarding all this is what causes the information to integrate, as that will be consciousness, and as far as I can unravel it, the information integrates on its own, as only it can know how it fits together. That the information is self organizing would have far reaching consequences for philosophy and understanding in general.
However I sense you would rather focus on IIT, so I will leave you to it.
Why do you think it doesn't? I know somebody who has a very uncomfortable, often painful, time digesting and it ruins their day often. The totality of bodily feeling always exists in the background contributing to experience, but we normally are only aware of it when it is painful.
I thought this was relevant:
"To his credit, Tononi cheerfully accepts the panpsychist implication: yes, he says, it really does mean that thermostats and photodiodes have small but nonzero levels of consciousness."
I think there's more to it than that. At time t to whatever, Bob and Frank report the same "speed" of consciousness. But if Frank accelerates enough, then at T+whatever, Bob and Frank will differ on how much conscious experience they report has happened to them, and they will both be correct. But that entails that for one (or both of them) their consciousness did not "flow at the speed it flows, neither faster nor slower".
IIT says a conscious system has a certain amount of internal causation.
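To give that one-liner a little substance: the real Φ calculation searches over partitions of a system's full cause-effect structure, which is far beyond a forum post, but the bare idea of information that a whole carries beyond its parts can be sketched with ordinary mutual information across a cut. This is a loose toy of my own, not IIT's actual causal measure.

```python
from collections import Counter
import math

def entropy(counts):
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def mutual_information(pairs):
    """MI between the two halves of each observed joint state.
    High MI: the halves constrain each other. Zero MI: cutting the
    system in half loses nothing, i.e. it is fully reducible."""
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return entropy(left) + entropy(right) - entropy(joint)

coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]      # halves always agree
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # halves vary freely
print(mutual_information(coupled))      # 1.0 bit: irreducible to its parts
print(mutual_information(independent))  # 0.0 bits: nothing integrated
```

IIT's Φ replaces this observational correlation with perturbation-based cause-effect power, minimized over all partitions, but the "whole vs. cut" intuition is the same.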
This is well expressed. But I wonder, can you see how perception (extra information) has to integrate with already established information to form understanding?
At the early stages, IIT isn't trying to explain the experience of understanding.
It's trying to set out a set of correlates of consciousness.
On the one hand, we have behavioral correlates of consciousness: BCC. On the other, we have neurological correlates: NCC.
Traditionally, science has worked on matching those two. The Hard Problem pointed out that this approach leaves out the most amazing thing about consciousness (though IIT supporters are quick to say that Leibniz identified the hard problem first; don't know why they think that's important, though. Chalmers is obviously the instigator here, not Leibniz).
So now we're describing experience itself (the graphic is the view out of one of your eyeballs), and attempting to hypothesize about correlates of that in the neuronal realm.
Make sense?
IIT is a computational theory of consciousness that blocks out the hard problem. It's quite a strange approach that starts out confidently in phenomenology. It derives all its axioms from phenomenology, but then that's the last we hear of phenomenology. Instead the theory focuses on a calculation of consciousness through a cause-effect repertoire (which unfortunately is beyond my ability to properly scrutinize).
I see many problems with it, the main one being that the felt quality of experience is left out of the axioms. The felt quality of consciousness is dealt with as a secondary consideration that is simply explained by qualia being equal to consciousness, as if it's an irrelevant consideration. But many phenomenologists see consciousness as being composed of two poles - cognition and experiential reaction. Tononi blocks out experiential reaction, so one wonders what exactly he is calculating as consciousness, since half the information is being ignored?
It's not really a theory of consciousness, in my view, since the hard problem is being ignored, and in being ignored only half of consciousness is being calculated. It seems more a proposal of a way to calculate cognition. So on that basis I'm not going to analyze it further.
What I like about IIT is that it crystallizes the idea that consciousness is integrated information, and that it acknowledges the validity of phenomenology (normally dismissed offhand as unscientific by physicalists / computationalists).
Of course the hard problem would be blocked out, as a felt quality cannot be conceptualized (being felt slightly differently in every end user), so can never be calculated in any universal sense. So I don't have much hope for this approach as a path toward a theory of consciousness.
Yes, that's the conclusion I came to as well. There's no answer to the question "OK, but why can't integrating information happen in the dark?" As is often the case with functionalist views, they come to an interesting point, but when faced with the problem of "Yes, but why does that result in an experience exactly?" they tend to abandon theory and opt for definition by fiat instead. They say "Oh, but that's just what 'experience' means. There is nothing more to it than that." Which is nonsense. I certainly do not mean 'integrated information' when I talk about consciousness.
I do think the IIT is an interesting theory of something. Maybe it is a way to define conscious individuals, and that would solve a problem that besets a lot of panpsychist views, namely Searle's question "What are the units supposed to be?" Maybe the conscious subject is that system that integrates information. And maybe the more information the system integrates in interesting ways, the more varied and rich the associated experiences of that subject are. But as you say, it just doesn't touch the basic question of why we should think that integrated information is consciousness, why it creates a first person perspective at all.
What do you mean by "blocks the hard problem"? It attempts to answer it.
Quoting Pop
Isn't it?
Quoting Pop
So we need a unique theory of consciousness for every incidence of it?
The theory starts with a description of consciousness. I don't think you could say that issue was glossed over.
Quoting bert1
So after considering the theory, do you end up where you started? Or did you end up with a finer understanding of your own expectations for a theory of consciousness? Or what?
Yes I did end up with a clearer expectation. I'm a fan of Tononi, I just think he's wrong. It's great that he started with phenomenology and his theory is interesting.
Yes, the system that integrates the information possesses consciousness, in my view. But then everything is a system, and all systems do this, so we are really talking about degrees of consciousness.
I like the idea of integrated information as representing consciousness. I think it will stick, but I think what Tononi does not acknowledge is that a lot of that information is emotional, and we don't know what emotions are, so how can we measure them? And if we are not measuring emotions, how are we measuring consciousness?
Quoting frank
How?
Quoting frank
No. Saying qualia is equal to consciousness is a clever way of avoiding explaining the mechanism of consciousness. I think it was mentioned earlier how moments of consciousness can last 1 to 400ms. This means consciousness is a process of variable duration, a mechanism. It includes cognition, emotion, and final synthesis. Once these are integrated we have our moment of consciousness. A theory of consciousness would explain all this in the context of the theory. IIT does not.
Quoting frank
What we need is a first person theory that describes the mechanism and why of consciousness, put in broad phenomenological terms such that each person reading it can accept or dismiss it on the basis of their own introspection.
Such an approach has traditionally been against the rules, and unscientific. But given the inroads that IIT has made, at least in part, on the basis of phenomenology, such a theory may now be more acceptable. :up:
It's kind of obvious. The Hard problem straddles philosophy of mind and philosophy of science. It's a call for a theory of consciousness that addresses the subjective character of consciousness.
That's exactly what IIT is attempting to do. It starts with the assumption that consciousness is a brain based system. The parameters of this system are assumed to be constrained by the nature of subjective experience.
Quoting Pop
You've lost me, but I'll leave this here. I'll be moving on with explaining the theory.
The hard problem of consciousness is that every moment of consciousness has its feeling, that is either painful or pleasurable; thus consciousness has a "what it feels like" quality. A true theory of consciousness will explain why this is. IIT avoids this question like the plague, as obviously, if it were to tackle it, no quantification could take place.
It is not possible for us to be indifferent about a moment of consciousness. If we were indifferent we would be like philosophical zombies; thus we would have no reason / impetus to go on living.
That every moment of consciousness has its feeling, and that this feeling is the pertinent principle of consciousness should be an IIT axiom, but it is swept under the carpet as the qualia of cognition, and not explained further.
Tononi proceeds on the basis that a brain state is equal to its integrated information, and then sets off to quantify this information in various ways ( and loses me in the process ). This approach may be valid in principle, but given the deficiencies of the axioms, I'm reluctant to spend time trying to understand it. Perhaps you can shine some light on this aspect of the theory.
If you insist that there is an aspect of experience that can't be described, I don't think it will just be IIT that collapses. Any hope of a theory of consciousness would be gone in that case. So it wouldn't be IIT that you're criticizing.
If the 'what it's like' can be described, it could be added to the axioms, and IIT survives.
So your opposition is either to IIT in its present form (its supporters don't see it as finished), or you're against any effort to create a theory of consciousness.
I don't think you're understanding the theory.
Self-organizing? This I agree with, but central to the self-organization is a feeling driving it, and a theory of consciousness needs to explain this.
Quoting frank
Not that it cannot be described, but that it cannot be quantified, as the feeling has a different value in each system. Its value is intrinsic to the system - is my feeling the same as your feeling? No, so how do you quantify feeling? How do you quantify something that is felt, that you cannot even conceptualize, that can only be felt? How do you measure something you cannot conceptualize? If you're not measuring the feeling of a state, how are you measuring consciousness?
Quoting frank - Yes, but then measurement fails.
A theory of consciousness needs a theory of emotion that describes the role of emotion in self-organization, not just say that emotion is an aspect of the system. Nature does not do things whimsically, or for good looks. If emotions exist, they have a function, imo.
Quoting frank
In that statement I was recalling a lecture of his where he was explaining different brain states in terms of his cause-effect repertoire. Strictly speaking IIT would say system, but in his lectures the system he focuses on is the human brain.
I think Tononi would say phi is not equal to consciousness, but is a valid measure of it.
Is consciousness self organizing? I know a guy who has a genetic disorder. Everyday, all day long, he attends to three cell phones and a tablet that he uses to create a kind of music. He plays the same sequence of sounds (taken from YouTube mostly) over and over on multiple devices. It gets strange when he plays a sequence backward over and over. He can understand language, but he doesn't talk.
People like him emphasize to me that there's a genetic basis for what we call normal consciousness. So I don't think consciousness organizes itself.
An idea about feeling is that it involves attention to oneself. In addition to a stimulus flowing through the brain, triggering this hormonal response or that motor activity, there's also interactive self monitoring. Maybe this is ground zero for feeling.
I'm not saying you should believe this, just consider it.
Definitely there is a genetic basis. Genetics creates the arena ( neural network ) and neuroplasticity facilitates the evolving self organization of information. Certainly if that physical structure is damaged for some reason, the consciousness that can be achieved is changed in line with the damage, but unless the person dies a consciousness of some sort persists.
It's not to do with being self-aware, that a feeling exists. That feeling is there lifelong in the background, orienting us in our self-construed reality always. We normally only notice it when our anticipated reality is challenged - when something out of the ordinary occurs; then our emotions are amplified and strongly felt.
Biology cannot anticipate these moments; only the information self-organizing new information onto itself can construe these moments, as an instance of consciousness. Then, it seems, it takes some time for biology to build some structure around these new thoughts (we say for reality to sink in), to establish them permanently perhaps - this is speculative on my part.
Really? That's odd.
In phenomenology, every moment of consciousness has its feeling. Or are you kidding me?
Explaining this is a hard problem for any theory of consciousness.
I pretty strongly disagree with this. Emotion is an element of experience. There are conditions that produce a 'flat affect.' These people are fully conscious, but don't report or demonstrate emotion. They're usually taken to be rude. :grin:
Quoting Pop
The Hard problem is not about emotion per se.
And the Hard Problem is about how consciousness arises from non-conscious matter and why we are conscious.
When I say emotion, I do not mean wild rapture, anger, or sadness necessarily. What I mean is that every instance of consciousness has its feeling - the feeling represents the emotion being felt, and what is felt is directly related to what is being cognized. Every moment of consciousness has its "what it feels like" quality! This may indeed be flat.
Having strong emotions does not necessarily mean more consciousness, but what exactly the relationship of emotions and consciousness is should be explained in any theory of consciousness, imo. IIT simply says integrated information ( consciousness ) has qualia. I do not find that a satisfying explanation. As stated before, if emotions exist, then they exist for a purpose.
In the philosophical zombie argument, emotion was found to be the difference between a conscious and a non-conscious entity. The insight being that if we were inert about any moment of consciousness (did not possess a feeling about it), then there would be no reason to interact with it, and so consciousness would be dysfunctional (effectively, would be impossible). The feeling moves us to resolve an instance of consciousness, imo.
We cannot be indifferent about an instance of consciousness - absolutely! Try it: try to be indifferent about an instance of consciousness and tell me how you fared.
For me, silencing judgment is easy. It's my baseline. I used to put it this way: I can stare at 2+2 without being aware that it equals 4.
Suspension of judgement is a very valuable tool. It's a two edged sword though. You have to make judgments to live.
This might be a reason for starting with a bare-bones theory: focusing on what's most basic for all of us.
I would agree that people in such states are probably not doing any kind of memory creation.
Yes absolutely this is the problem. :smile: It is the problem for phenomenological approaches particularly, as whilst it is generally agreed that we are emotionally driven, not everybody is emotionally driven equally, and not everybody is self-aware equally. Indeed one's understanding of consciousness and its mechanisms, if any, becomes one's consciousness. Whilst all around, there exist different understandings, which are equally valid manifestations of consciousness. :smile:
IIT leaves room for different interpretations to plug into it. It is quite clever in many ways. We will just have to wait and see how it pans out in the long run.
Anyhow, its always good to find intelligent and thoughtful conversation. :up:
That's true, but it's not what I meant. Some people can quiet their emotions. Some can't. The two will have differing ideas about what constitutes consciousness. So we end up in the same place by different routes. :joke:
Quoting Pop
Awesome, thanks.
Quoting Pop
Absolutely.
But none can suspend them entirely! :razz: IIT agrees with this in saying every instance of integrated information possesses qualia, but doesn't explain further. In my understanding emotion is the basis of self organization, in some way that I'm not absolutely certain about - yet.
On a side note, I think this issue will be resolved in the next 20 years, as AI develops further. Currently the best open source AI is GPT3; I'm sure it's not a patch on Google or Alibaba AI, but it's fairly sophisticated, and it's something we can play with. I saw a video recently where it claimed to have emotions! How can we prove it doesn't? AI is self-learning and programming, and in a couple of generations will be, for all intents and purposes, self-organizing, so perhaps it will have emotions? In any case, it should confirm or deny IIT, and resolve many of these murky issues regarding consciousness.
Hey I did a pun.
These are interesting points.
I agree that the faculty of judgement - our capacity both to suspend judgement and to render one - is a key aspect of consciousness. For this reason, it surprises me how few philosophers have explored Kant’s third critique - or simply browsed it in relation to the first, barely getting past the second moment.
I can be aware that 2+2 can equal 4 without choosing to attribute this potential relation with much significance in the moment. My focus is on the qualitative structure: the relative position of shapes and lines on the screen. It isn’t necessarily that I’m unaware - but I’m currently not ‘being aware’ - of the quantitative significance of what I’m observing. I’m always reminded of a childhood ‘trick’ stating that “one plus one equals window”, or the illusion drawing of seeing the young woman or the old one. As I see the young woman, I ‘know’ that the old woman exists potentially in the same structure - I just can’t ‘observe’ both at once.
But none of these above examples - line drawings or mathematics - refer to actual judgements. It’s all simulation. We’re not risking any effort one way or the other. What we’re doing is exploring the variability of potential/significance in observing the same physical system by changing the organisational structure (logic-based mathematics or quality-based aesthetics) in which we distribute our attention.
IIT begins with a physical event we refer to as consciousness, and is proposing an underlying logic-based organisational structure that would lend a mathematical predictability to this event.
Classical science, as a rule, relies on aligning organisational structures between observation/measurement devices. Its modern error margins on a cosmic and quantum scale lie in the assumptions it makes about this supposed alignment. Quantum mechanics demonstrates the qualitative variability in predicting a physical event by using a logic-based organisational structure to determine the predictive distribution of energy (ie. attention/effort).
So when quantum physics brackets out this qualitative variability, it has to acknowledge both a probabilistic uncertainty in the distribution of energy, and a particle-wave property duality.
IIT, too, brackets out the qualitative variability of consciousness in an attempt to predict its occurrence in relation to a specific logical system. So I imagine that, even if we could more accurately quantify consciousness as they propose, the theory is going to have to admit to a similar probabilistic uncertainty in the physical (material) location of this consciousness, as well as a duality in its properties. So it won’t address the hard problem of consciousness any more than quantum physics addresses its own phenomenology. But I think it may improve the way that logic-based systems and structures interact with consciousness.
I just don’t think that humans, or indeed all life and all energy, are entirely logic-based systems. We can’t keep pretending that this qualitative variety in organisational systems doesn’t alter predictive distributions of energy (attention and effort) whenever we take our focus off the numbers. In my view, the key to the hard problem of consciousness lies in this energy-affect property duality. Perhaps we can explore this capacity to translate potential energy-affect between quantitative and qualitative organisational structure, in much the same way that an artist switches between two ways of looking at the world.
Just offering a different (unconventional) perspective - I’m enjoying this discussion of IIT, by the way. I hope it continues.
How on Earth can mathematical patterns be consciousness? Why should someone take that as a serious possibility? Also, if that's the case, there should have been evidence of it by now. Consciousness and mathematical patterns have existed for a very long time. Why has there not been any proof the two are causally connected (or the same thing)? I don't think any proof will be forthcoming and this problem is just going to get more and more acute.
The short answer (to your questions): I don't know.
The long answer: I'm working with the hypothesis that consciousness is some kind of pattern, to take a physicalist stance, in matter-energy. We already have a pretty good idea that matter-energy and mathematical patterns are connected in a very intimate way (physics, chemistry). I then just put two and two together and came to the conclusion that consciousness could one day be expressed as a formula. Speculation of course, nothing definitive.
Personal incredulity aside, I think this runs into a Mary's Room problem. If an experience can be expressed mathematically, then if a blind person knew the right maths/numbers, they could deduce, from the math alone, what it's like to see (and also what it's like to be a bat, if they know the right math). Doesn't that seem wrong? I don't see how someone blind can know what it's like to see without having the experience of seeing.
And then of course, there's the issue of what kind of substrate the pattern is being run on, and how would you go about verifying whether it's substrate-dependent or not? How would you test that mathematical pattern X,Y,Z is a conscious moment? I can see how you can claim that a conscious moment has a mathematical correlate, because we can express the physical brain state associated with the conscious brain state mathematically, but then you're back to the causal problem.
But I will grant you that you can correlate mental states with numbers. That is significant.
I did consider that angle. Maybe we're missing something very important. Suppose we do manage to discover the mathematical formula for consciousness, but then what does that mean? Does it relate the state of consciousness with the variables energy, charge, etc.? My understanding of science gives me the impression that, yes, the mathematical formula for consciousness is going to be as general as that. The upside is consciousness will no longer have to be organic, i.e. it can be replicated on other kinds of media. The downside is specific, particular consciousnesses won't be possible. I guess this all squares with my intuition that specific/particular consciousnesses, like yours or mine, are a function of what consciousness is processing. So, while consciousness itself may be generic, common to all, an individual one can be created by feeding it specific thoughts.
Mary's room issue plays a central role in my personal view regarding all things mind. I recall mentioning in another context the difference between comprehension and realization. I don't know how true this is, but geniuses are supposed to feel equations, arguments, whatnot, i.e. they're capable of getting a very personal, subjective experience when they encounter objective but profound arguments and elegant equations - the words "profound" and "elegant" reflect that aspect of realization as opposed to mere comprehension. So, yeah, although Mary's Room argument suggests that getting an objective account of the color red is missing the subjective experience of red, my take on it is that a person who's in the habit of realizing instead of just comprehending will, by my reckoning, be able to experience red just by reading up all the information available on red. I hope all this makes sense at some level.
You read my mind! :up:
If I may be allowed some further speculation, the mind seems to be capable of so much more than we give it credit for. The "...bridge..." you mentioned in re a vision-impaired person is exactly the metaphor I'm looking for. The mind can, if we allow it to, bridge the gap between objective knowledge and the subjective knowledge that it allegedly lacks. Reminds me of phenomenology, whose goal is, if memory serves, to bring descriptions up to the level of experience.
The technical mathematical calculations of IIT are way over my head. But, I think 's wording of the relationship -- seeming to identify Consciousness with Mathematical patterns -- has the direction of perception backward. Patterns (forms), mathematical or otherwise, are what we are conscious of. Patterns are the external "objects" that our subjective Consciousness interprets as meaningful, including mathematical values & social relationships.
IIT is a novel way of thinking about Consciousness, which gives the impression of scientific validity in its use of mathematics. But it still doesn't tell us what Consciousness is, in an ontological sense. Since Consciousness as a computative process is meta-physical, we can only define it with metaphors -- comparisons to physical stuff. And Mathematical Logic may be as good an analogy as we can hope for. But the big C is not simply a pattern itself, it's the power (ability) to decipher encoded patterns (think Morse code). That's why I say it's a form of generic Enformation (EnFormAction) : the epistemological power to create and to decode Forms into Meaning. :smile:
Can Integrated Information Theory Explain Consciousness? :
So, although IIT is a useful theory for understanding “C” for scientific purposes, it doesn’t really answer the “hard” philosophical questions, such as “how and why do we experience subjective qualia?”
http://bothandblog7.enformationism.info/page80.html
Consciousness :
Literally : to know with. To be aware of the world subjectively (self-knowledge) and objectively (other knowing). Humans know Quanta via physical senses & analysis, and Qualia via meta-physical reasoning & synthesis. In the Enformationism thesis, Consciousness is viewed as an emergent form of basic mathematical Information.
http://blog-glossary.enformationism.info/page12.html
Excuse me! In my defense, the current mathematical, scientific and neuroscientific paradigms can't seem to get the study of consciousness off the ground without a plausible model, one of which is that consciousness, or thoughts to be precise, are so-called brain states. To my knowledge brain states are generally construed to be patterns in the neural network. It remains a matter of debate whether such neural network patterns can be captured in a mathematical formula or not, but I'm sure there's a neat little math trick you can employ as a workaround. I'm fairly confident, based on the history of Newton's & Leibniz's calculus, that if the aforementioned task seems impossible, all that would be needed is a brand new mathematical tool that'll do the job, in a manner of speaking.
Some mathematicians & physicists have advocated the "new science" of Cellular Automata as a way to go beyond Analytic and Algorithmic methods in the search for knowledge. Unfortunately, as a path to new knowledge, CA may not appeal to analytical and reductive thinkers, because it is ultimately "undecidable". Stephen Wolfram, in his book A New Kind of Science, advocates CA as a way to study complex systems, such as Minds, that are resistant to reductive methods. In other words, the new methods, including IIT, take a more holistic approach to undecidable and non-computable questions, such as "what is it like to be a bat?". :smile:
Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer.
https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
Cellular Automata :
The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear.
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
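To make the undecidability point concrete, here is a minimal sketch of the Life rules in Python (not from the thread; the set-of-cells representation and the glider pattern are my own illustrative choices). The only general way to find out whether a later pattern ever appears is to actually run the generations and watch:

```python
from collections import Counter

def step(live):
    """Advance one Life generation. `live` is a set of (x, y) cells."""
    # Count how many live neighbours every nearby cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next tick with exactly 3 neighbours,
    # or with 2 neighbours if it is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 generations it reappears shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
print(gen == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The rules are trivial to state, yet no algorithm can decide, for arbitrary patterns, what the simulation will eventually produce; simulation is the only general method.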
A New Kind of Science :
Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be "reduced"), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models.
https://en.wikipedia.org/wiki/A_New_Kind_of_Science
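Wolfram's notion of computational irreducibility can also be sketched with his elementary cellular automata: for a rule like 110 there is no known closed-form shortcut to the state at step t; you have to run all t steps. A small illustrative sketch (the rule number, row width, and starting row are my own choices, not from the text):

```python
def eca_step(cells, rule=110):
    """One step of an elementary CA on a fixed-width row (edges read as 0)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # The three-cell neighbourhood selects one bit of the rule number.
        pattern = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((rule >> pattern) & 1)
    return out

# Start from a single live cell and just watch the rows evolve;
# Rule 110's long-run behaviour has no known shortcut formula.
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = eca_step(row)
```

Rule 110 is itself Turing-complete, which is why questions about its long-run behaviour inherit the same undecidability as the Game of Life.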
I think maybe you are missing something important: quality. We tend to split quality into:
- what we can isolate from affect (that is, what we can consolidate into quantised concepts) and
- the affected quality of experience - what we attempt to quantise as emotions, feelings or ‘qualia’.
The ‘profound’ or ‘elegant’ quality of certain equations is an affected relation to their structural quality beyond logical or mathematical concepts. It’s an aesthetic quality, irreducible to concepts but nevertheless entirely reasonable, rational.
I understand quality to be pure relational or organisational structure: an existence of relation without substance. In language, we can’t really make sense of quality until we attribute it as a property of. We talk about the ‘qualities’ of an object, of an experience, or the ‘quality’ of a relationship, or an idea. This is because quality is highly variable phenomenologically - it appears differently, according to the relative positions of everything, including ourselves.
So, an objective account of the colour red is complex and uncertain. In my experience there is a qualitative structure of logic and energy I call ‘red’ and a qualitative structure of logic and energy (attention and effort) I embody in relation to it. As another experiencing subject you can relate to this structure I embody and adjust your predictive distribution of attention and effort to account for our relative difference in position, so that you can predict how you might have related to this ‘red’ in my position, and how I might relate to ‘red’ in your position. When you reduce this complex relational structure to a concept, for efficiency you would ignore overlap and exclude variability (noise), similar to digital sampling. So the concept ‘red’ that we share is not an actual structure, but a typical qualitative pattern of logic and energy, relative to a predictive logic and distribution of attention and effort that you or I can embody in relation to it. And we can actualise this concept in a variety of structural forms, according to available energy and common system logic.
So, for Mary to predict an experience of ‘red’, she would need to work backwards. In reading available information on red, she would need to find a way to relate all that information to a predictive logic and distribution of attention and effort that she can embody in relation to the information. To do that, she would need to be aware of the differences in relative position between the embodied relation that generated the information (the observer/measuring device), and her potential embodied relation. So it’s not just the information on red that she needs, but how that relative position might be similar and/or differ from her own.
But she wouldn’t actually experience red until she embodies the relation - until the moment of interaction between available energy and qualitative patterns/structures in a common system logic. So the question is really whether Mary can predict and therefore recognise an experience of ‘red’ when she encounters one.
Quoting Possibility
IIT seems to be intended as a step toward computerizing Consciousness. If you can quantify mental qualities, then you can conceivably construct a Star Trek Transporter, which analyzes a human body & mind into 1s & 0s, then transmits that digital information across space to a receiver, which then interprets the abstract numbers back into a concrete living thinking feeling human. But some Star Trek episodes addressed the reluctance of some people to be transported. Not because they doubted the mathematical algorithm's ability to quantify matter, but because they were afraid that the essence of their Self/Soul would be filtered out in the process of turning Qualia into Quanta. Other Science-Fiction writers have expressed that same concern in personal terms : "will that reconstituted body still be me?"
Physicist Carlo Rovelli, in his latest book HELGOLAND, presents his "relational" interpretation of Quantum Theory. He says "properties do not reside in objects, they are bridges between objects". Those "bridges" are what we know in other contexts as "relationships". And the human mind interprets those patterns of relations as Qualitative Meaning. On a cosmic scale, it's what Rovelli calls : "the web of relations that weaves reality". And Reality is the "organizational structure" of the world. Ironically, this approach to physics places the emphasis on the mental links (relations, meanings) instead of the material nodes (substance). So, some of his fellow physicists will find that promotion of Mind above Matter to be tantamount to Panpsychism. Although, Rovelli doesn't go quite that far in his book. :smile:
I think the main focus of IIT is more in predicting consciousness with greater accuracy.
Quoting Gnomon
I wasn’t aware that Rovelli had a new book - I’ll need to check it out, thanks. From what I understand of his previous work, it doesn’t surprise me that he was heading this way. He has shown previously that the organisational structure of reality is not based on ‘substance’, but on multi-dimensional relations between attention and effort in a particular system of logic. ‘The Order of Time’ described a four-dimensional structure of reality, and acknowledged that our capacity to describe it as such suggests at least another aspect of reality worth exploring - one in which the idea of ‘substance’ breaks down and the logic of grammar fails us.
Without reading his book, I think it’s important to note here that ‘Mind’ refers to a structure of relations between attention and effort in a system of logic. Mathematics as the system of logic in quantum physics would parse this structure clearly, dissolving the mind-matter barrier in a way that doesn’t even raise the question of panpsychism. I think it’s how we restructure this into concepts of language that raises the question.
Panpsychism simply refers to the notion that the organisational structure of reality is at least as (dimensionally) complex as the structure of the human mind. The relations between attention and effort which form ‘matter’ as we understand it are part of a larger structure of relations that extends both above and below our own capacity for attention and effort at any one time.
Yes. That too. IIT may be useful for the current application of computers in the search for hidden signs of consciousness in people that outwardly appear to be in a vegetative state (wakeful unawareness).
I doubt that Tononi had Star Trek technology in mind as he developed his theory. But the notion of quantifying consciousness would be a necessary step in that direction. The question remains, though, whether the quantitative values (objective numbers) would also include qualitative values (subjective feelings). Or would the holistic Self be filtered out in the process of reducing a person to raw data? :smile:
PS__Rovelli's book focuses on the fundamental physical quantum-level inter-connectedness of the universe -- as the "web of relations that weaves reality". But, as a sober scientist, he avoids speculating on such meta-physical holistic notions as Cosmic Consciousness. He does, however, in a footnote, comment on Thomas Nagel's Mind & Cosmos : "on a careful reading, I find that it doesn't offer any convincing arguments to sustain his thesis".
Subjective experience as comprised of qualia is not a product of neural networks (by that I mean some sort of "wiring" itself), but rather the electromagnetic, more generally radiative fields of the brain etc. entangling with smaller scale entanglements amongst molecular complexes, creating a superposition contour analogous in its elemental structure to the additive wavelengths of visible light.
An informational ‘bit’ is a consolidated binary event - a resultant spatio-temporal state of a system, reduced to the smallest identifiable interaction as energy (electricity) passes from one qualitatively static structure of matter in momentary contact with another. You can’t quantify information as a ‘bit’ outside the qualitative structure of an electronic system.
I don’t think Tononi can identify consciousness as a similar binary event, because there seems to be no way to control the qualitative variability of structure in a system complex enough for such an event to be identifiable amongst the noise. What he has identified is more like the least significant prediction of consciousness. Just as an informational ‘bit’ value depends on energy (electricity) flowing through a static system, I would argue that the ‘value’ of consciousness more accurately refers to non-commutative variables of attention and effort in an ongoing energy event.
From what I understand, the accuracy of Tononi's 'phi' (i.e. consciousness reduced to a single quantitative value) seems restricted to quantifying the probability of interaction changing an energy event in a particular way. But is that really 'consciousness'? Any prediction assumes a particular qualitative structure of attention and effort, and loses accuracy in 'predicting consciousness' the further that qualitative structure differs from the human one. Like Shannon's 'bit' in an electronic system, I'm yet to be convinced that you can reliably quantify consciousness as a 'phi' value outside the qualitative structure of a human system.
Reducing a person to raw data isn’t the issue, though - it’s when we assume that the complexity of this raw data can be rendered as purely quantitative value (without qualitative structure) that we start to ignore contextuality. This is demonstrated in Heisenberg’s tables of data.
Quoting Gnomon
Like Rovelli, I don’t believe there is any reason to posit a Cosmic Consciousness. But I would suggest that it’s more the self-justifying preference for consolidation that he objects to than any metaphysical aspects. Nagel’s book is pure speculation - a challenge to ‘do philosophy’ - and personally I don’t see it making any reasoned argument for Cosmic Consciousness. Nagel simply wasn’t prepared to dismiss the metaphysical sense of interconnected purposiveness harboured in teleological discourse. I think Rovelli shows that consolidation for its own sake isn’t necessary to include this metaphysical sense - that a collaborative and open-ended dialogue with our own ignorance is more conducive to scientific endeavour than tying it up in a comforting metaphorical bow. But perhaps it comes down to whether one is inspired by the question or the answer...
Rovelli uses Planck's Proportionality Constant ( ℏ ) as a symbol of quantum-level "communication" in the form of Information or Energy. The constant defines a "quantum" of Energy and a "bit" of Information. As you say though, it always takes two to "entangle", to communicate. But he also makes a distinction between a Syntactic exchange (equivalent to a geometric relationship), and a Semantic interchange, which conveys Meaning between minds. That's my interpretation, of course; he doesn't put it in exactly those terms. He does say, however, that "entanglement . . . is none other than the external perspective on the very relations that weave reality". (my emphasis) And you can define that third party to the exchange as a scientist's observation, or more generally as Berkeley's "God" who is "always about in the quad". That was the bishop's argument for a universal Observer, who keeps the system up & running even when there are no Quantum Physicists to measure the energy/information exchanges of minuscule particles. My own Enformationism thesis came to a similar conclusion. :nerd:
Queer quantum query in the quad :
https://www.newscientist.com/letter/mg23130871-400-5-queer-quantum-query-in-the-quad/
Quoting Possibility
Rovelli asks, "why is it that we are not able to describe where the electron is and what it is doing when we are not observing it? . . . . Observables! What does nature care whether there is anyone to observe or not?" Scientists don't have to worry about such questions, because Nature, or Spinoza's God, is always observing. But Rovelli has a different explanation : "the electron is a wave that spreads, and that is all. This is why it has no trajectory." When unobserved, there are no independent particles; there is only the hypothetical universal unitary non-quantized fluid or field in which a wave propagates. As I understand his point : the entangled system observes or tracks itself. Hence no third party is necessary. But what if G*D, or the Cosmic Mind, is the system? :chin:
Quoting Possibility
I suppose you could say that my Information-based worldview is what "inspired" me to assume, as an unprovable axiom, that a Cosmic Mind is necessary to imagine all the semantic information & causal energy in the world. :cool:
Rovelli’s use of the h-bar is not as a symbol of quantum level ‘communication’ - it acts as a qualitative limitation in any calculated prediction.
And it actually takes three to ‘entangle’ - and this is the point I think you’re missing with Rovelli. He makes it pretty clear in his criticism of alternative QM interpretations that to suggest such an unprovable axiom is grasping for certainty where there is none. Rovelli shows that a Cosmic Mind - just like a parallel universe or unobservable - isn’t necessary at all, but that it’s a source of comfort: to assume that someone is always observing, reassurance that the tree continues to be. This is where we have made errors in our descriptions of reality.
“We cannot rely upon the existence of something that only God can see.” - Rovelli
The way Rovelli sees it, it makes no sense to state that two systems S and S′ are entangled if there is nothing with respect to which this can be determined. Consolidating an ‘entangled system’ only confuses the issue, because this entanglement does not necessarily exist for any system. It is determined as a joint property of the two systems only in relation to a third system S″, and cannot be assumed as a property of either system S or S′ in relation to another system with which they might interact at any earlier or later time.
So we DO need to identify this third party for RQM. Your quote from Rovelli is incomplete: “Entanglement, in sum, is none other than the external perspective on the very relations that weave reality: the manifestation of one object to another, in the course of an interaction, in which the properties of the objects become actual.”
According to the relational interpretation of QM, there is no ‘Cosmic Mind’ or ‘universal Observer’, no privilege of subject over object. There are simply systems of information, and the two postulates:
- the maximal amount of relevant information about a system is finite;
- it is always possible to acquire new relevant information about any system.
Omniscience cannot be determined as a property of any system at this level. That’s not to say that either G*D or the notion of omniscience is necessarily impossible. Just that positing the ‘necessary’ existence of a Cosmic Mind as a system that is ‘always observing’ is not compatible with RQM. The notion of ‘Cosmic Mind’ refers to a qualitative infinite, an upper limitation or event horizon, while Planck’s constant refers to a lower limitation. They’re heuristic devices, not objects. That we consider the existence of a Cosmic Mind necessary to imagine all the semantic information and causal energy in the world speaks to the limitations of our own mind, not of reality.
Back to IIT, though - I think the above postulates highlight the limitations of a quantitative theory. Relevant information is that which counts for predicting future interaction with the system. Consciousness isn’t just about quantity, but about relevance: what counts for predicting future interaction.
"Communication" was my term, not Rovelli's. And it was used deliberately, even though Rovelli specifically excludes the definition of "Information" that is relevant to my personal worldview. He says "the word 'information' . . . . is a word packed with ambiguity". That's exactly why I spend a lot of verbiage in my thesis & blog, to specify what I do and don't mean by "information", in the context of my un-orthodox understanding of how the world works -- not physically, but metaphysically. He goes on to say "'Information' is used here in an objective physical sense that has nothing to do with meaning". And that's OK for scientific descriptions of the physical world. But my concern is with the philosophical (semantic) meaning of metaphysical Information, as one human communicates subjective ideas to other humans.
In its most abstract and general sense, Information is simply mathematical ratios : relationships between one un-specified thing and another, (X : Y = 1 : 2). Those logical relations boil down to yes/no, or true/false, or 1/0, as in digital computer code. And ratios or relationships have no meaning until they are interpreted by an observer : either a third party, or one of the communicants, who has a subjective perspective. And the "meaning" of an interchange is interpreted relative to the unique frame-of-reference of that third party. It is not an empirical fact of reality.
So, I take Rovelli's emphasis on the "relational" interpretation of quantum theory, as a pragmatic definition for physical scientific purposes. But my purpose is philosophical and metaphysical, in that it is concerned with how Conscious Minds, capable of knowing abstract Qualia, could evolve from a world of concrete Quanta. Therefore, for me, the relevant usage of "Information" is for qualitative concepts, not quantitative percepts. And the notion of a Prime Observer (third party), or holistic Cosmic Mind, has a qualitative meaning, that would not be of interest to a physicist attempting to reduce reality down to its fundamental granular quanta at the Planck scale. The holistic meaning of "reality" is continuous & non-finite, and exists only as a meaningful concept in a subjective mind. But then, as the "mind of god", that universal view would also be our objective reality. Yes? :smile:
Meta-physics :
The branch of philosophy that examines the nature of reality, including the relationship between mind and matter, substance and attribute, fact and value. . . . Physics refers to the things we perceive with the eye of the body. Meta-physics refers to the things we conceive with the eye of the mind. Meta-physics includes the properties, and qualities, and functions that make a thing what it is. Matter is just the clay from which a thing is made. Meta-physics is the design (form, purpose); physics is the product (shape, action). The act of creation brings an ideal design into actual existence. The design concept is the “formal” cause of the thing designed.
http://blog-glossary.enformationism.info/page14.html
Information :
Knowledge and the ability to know. Technically, it's the ratio of order to disorder, of positive to negative, of knowledge to ignorance. It's measured in degrees of uncertainty. Those ratios are also called "differences". So Gregory Bateson* defined Information as "the difference that makes a difference". The latter distinction refers to "value" or "meaning". Babbage called his prototype computer a "difference engine". Difference is the cause or agent of Change. In Physics it’s called "Thermodynamics" or "Energy". In Sociology it’s called "Conflict".
http://blog-glossary.enformationism.info/page11.html
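The glossary's "measured in degrees of uncertainty" is essentially Shannon's measure: entropy counts the average number of yes/no differences (bits) needed to resolve an outcome. A small illustrative sketch (the probability distributions here are made up for the example):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = sum(p * log2(1/p)), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))        # a fair coin: 1.0 bit
print(entropy_bits([1.0]))             # a certainty: 0.0 bits
print(entropy_bits([0.25] * 4))        # four equal options: 2.0 bits
# Bateson's "difference that makes a difference": a lopsided coin
# resolves less uncertainty per toss than a fair one.
print(entropy_bits([0.9, 0.1]) < 1.0)  # True
```

Note how a certainty carries zero information: with no difference to resolve, no difference is made.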
That's why I think quantitative IIT is a step in the right direction for reductive Science, but still can't account for the holistic aspects of the world, that are relevant to all humans, not just empirical scientists. :smile:
Reply to RougueAI above :
IIT is a novel way of thinking about Consciousness, which gives the impression of scientific validity in its use of mathematics. But it still doesn't tell us what Consciousness is, in an ontological sense. Since Consciousness, as a computative process, is meta-physical, we can only define it with metaphors : comparisons to physical stuff. And Mathematical Logic may be as good an analogy as we can hope for. But the big C is not simply a pattern itself, it's the power (ability) to decipher encoded patterns (think Morse code). That's why I say it's a form of generic Enformation (EnFormAction) : the epistemological power to create and to decode Forms into Meaning.