Emergence
The universe, at its largest scale, seems to be a system based on disorder-order-disorder.
A combination of fundamentals (which we have not yet fully identified) seems to drive the change from disorder to order: from fundamentals to spacetime; to star, planet and galaxy formation; to the formation of flora, fauna and sentient life on a planet such as Earth.
Local entropy means that separate systems can reach the end of their lifespan and can 'disassemble' back into their constituent parts. BUT if a star goes nova, then heavier elements are released, and that's why we exist. So the 'disassembly' does not necessarily mean a return to the original ingredients only.
A dead star does not become pure hydrogen again.
So carbon, for example, is only produced due to what happens during the life of a star.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen.
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.
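As a back-of-the-envelope check on the fusion step described above, one can compute the energy released when three helium-4 nuclei fuse into carbon-12 from the mass defect. A minimal sketch (the mass and conversion values are standard published figures, not taken from this thread):

```python
# Energy released by the triple-alpha process: 3 He-4 -> C-12.
# Atomic masses in unified atomic mass units (u); C-12 is exactly 12 u by definition.
HE4_MASS_U = 4.0026032      # helium-4 atomic mass
C12_MASS_U = 12.0           # carbon-12 (exact, by definition of the unit)
U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV

mass_defect = 3 * HE4_MASS_U - C12_MASS_U   # mass lost in the fusion
q_value = mass_defect * U_TO_MEV            # released as energy

print(f"mass defect: {mass_defect:.7f} u")
print(f"Q-value: {q_value:.3f} MeV")        # roughly 7.275 MeV per carbon nucleus formed
```

The small positive mass defect is why the process releases energy at all, and why it only proceeds in the extreme temperatures and helium densities described above.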
As carbon-based lifeforms, we eventually 'emerged' from this carbon-production system.
So it seems that the 'death' of one system can contribute to the 'creation' of a new, more complex system. (Perhaps there is something for theists in this. Perhaps a 'first cause or prime mover system' had to die (and so no longer exists!) for our universe to begin.)
Is this carbon production an 'objective truth' about our origins? Only in the sense of tracing the path from the origin of carbon to us.
This got me thinking more about 'emergence.'
Since the emergence of early Homo sapiens around 300,000 years ago, the 'knowledge' our species holds 'as a totality' has been increasing. Each time we gain significant new knowledge, our technology advances, and this has all sorts of effects on our species. It opens 'new options,' 'new possibilities.'
This 'direction of change' seems to me to have been accelerating across the 300,000 years of the human story, to the point that we are now coming up with new tech at a faster rate than ever before.
To what extent do you think that human beings are 'information processors?'
Our ability to memorialise and pass on new knowledge from generation to generation seems to have 'the potential' to affect the 'structure and purpose of the contents of the universe.'
We have altered the Earth in many significant ways. Can we do the same to the solar system and far beyond it? Is that an objective truth about what is fundamental in our nature to do?
If there are other lifeforms with at least the same cognitive abilities as us then would they be compelled to seek new knowledge in the same way we do?
It seems to me that an objective truth about all humans is that we seek new information. Do you think that's true? And if you do, do you think it's objectively true? If you think the answer is yes, then do you think that the following is emergent:
In the future we will
1. 'Network' our individual brain based knowledge.
2. Connect our brain-based knowledge, directly, to all electronically stored information and be able to search it at will, in a similar style to (or better than) a Google search.
3. Act as a single connected intellect and as separate intellects.
My last question would be:
How much credence do you give to the idea that we are heading towards an 'information/technological singularity'? Is a tech singularity emergent? And (I know this is very difficult to contemplate, but) what do you think will happen as a result of such a 'singularity'?
Yes, with my addition inserted.
Maybe a pair of photons can do this, but I can't think of anything with proper mass that can. It would require the two objects to be at the same place at the same time. So no overwrites.
An object is present at every event on its worldline. It doesn't occupy just one location like a path through space. Yes, with a path through space, one can move to a different location and a different object can be at the location where you no longer are, but that doesn't work with worldlines. It is impossible (by definition) for an object not to be present anywhere on its worldline, hence its existence in spacetime.
Yes, but Minkowski was not talking about points in space when describing worldlines.
BTW, he didn't invent worldlines. They've been around since the block universe was proposed centuries earlier. Minkowski just changed the mathematics from essentially Euclidean transformations to Lorentz transformations. Euclidean coordinates measure the distance between events as √(t²+x²+y²+z²), while Minkowski coordinates measure the interval between events as √(t²-x²-y²-z²).
The old Euclidean mathematics (used also by Newton) also did not have a notion of this 'overwrite'. An event is objective, and the state of affairs at that event is exactly one state, regardless of what goes on elsewhere in the spacetime.
Yes, but your description of Minkowskian spacetime is incorrect. You seem to be mixing 'space' and 'spacetime'. The state of a location in space changes over time, but an event in spacetime includes a time coordinate, and thus any time after that is a different event, not an overwrite of the first event in question.
c=1?
In spacetime, there is no separation of space and time, so you cannot pull 'space' or 'time' out of the concept, 'spacetime.'
When you overwrite memory locations on a DVD, it happens at a different time from when the previous data was placed there. The older data no longer exists in those locations; it has been overwritten, yes? Why would real space locations act any differently?
I put a carton of milk in my fridge and that location becomes part of its worldline, yes? It seems to me that you are simply saying that when I throw the carton in the bin, the space it occupied in the fridge still exists, and by making such a trivial observation, you say worldlines never cease to exist.
To me, that's like saying spacetime will never cease to exist. Well, it may oscillate between being in a state of singularity and expansion, eternally, but so what? The concept of worldlines remains nothing more than convenient mathematical modelling. I think you are blurring the lines between the notion of a worldline (spacetime) and that which might occupy it at any instant of time.
I use the term 'overwrite' to indicate that the suggestion that space 'memorialises' every event that has ever occupied spacetime coordinates is fanciful.
When we look at a star, we know that image no longer exists. When I look at any object around me, I know that snapshot no longer exists, as quantum fluctuations in that space will alter its state in some undetectable way I cannot describe, within a Planck-time duration. But again, to me, that is also a very trivial suggestion. The distance between any pair of dimensionless coordinates (x, y, z) will also have expanded during a Planck-time duration, creating more dimensionless members of the set of all dimensionless (x, y, z) coordinates (points).
s² = (ct)² - x² - y² - z²
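The interval formula quoted above can be illustrated numerically. A minimal sketch (the event separations are invented for illustration; the sign convention follows the formula above):

```python
# Spacetime interval: s^2 = (c t)^2 - x^2 - y^2 - z^2
C = 299_792_458.0  # speed of light, m/s

def interval_squared(dt, dx, dy, dz):
    """Squared interval between two events separated by dt (seconds)
    and dx, dy, dz (metres)."""
    return (C * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

def classify(s2):
    """Timelike separations can be causally connected; spacelike ones cannot."""
    if s2 > 0:
        return "timelike"
    if s2 < 0:
        return "spacelike"
    return "lightlike"

# One second apart in time, one metre apart in space: overwhelmingly timelike.
print(classify(interval_squared(1.0, 1.0, 0.0, 0.0)))   # timelike
# Simultaneous events a metre apart: spacelike.
print(classify(interval_squared(0.0, 1.0, 0.0, 0.0)))   # spacelike
# Along a photon's worldline the interval vanishes: lightlike.
print(classify(interval_squared(1.0, C, 0.0, 0.0)))     # lightlike
```

Setting `C = 1` recovers the unit convention asked about earlier in the thread (c = 1), in which the interval reduces to √(t²-x²-y²-z²).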
Quoting universeness
That's right. Imagine the DVD is a digital copy of your wedding video made in 2005 and overwritten by a Spongebob episode by your kids in 2020.
So the '0' on a certain spot exists at different time coordinates (events from 2005 to 2020) than the '1' that replaces it (2020+). Since those events have different time coordinates, none of them overlap and no event was overwritten.
No. The wedding video still exists from 2005 to 2020. That 15 year worldline cannot be overwritten. Mind you, there are movies depicting such an overwrite where Marty McFly overwrites his loser family with a less loser one, except for himself. That's an example of overwriting of events, but it's fiction and physically impossible.
I'm talking about spacetime locations (events), not spatial location.
No. Points in spacetime are events, not locations. The difference is 4 coordinates for an event vs 3 coordinates for a location.
No, I'm saying that you were present at your birth, and nobody else can ever be present at your birth, that is, to be exactly where you were, and not just absurdly close by, like presumably your mother. Some other person can be present at that spatial location (like the cleaning guy 30 minutes later), but that's a different event with different coordinates, not an overwrite of your birth event, which has an earlier time coordinate.
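The distinction argued here, between a 3-coordinate location and a 4-coordinate event, can be made concrete with a toy model: if events are keyed by all four coordinates, reusing a spatial location at a later time creates a new record rather than overwriting the old one. A minimal sketch (the coordinate values and labels are invented for illustration):

```python
# Toy model: events keyed by the full 4-tuple (t, x, y, z). The same
# spatial location at a different time is a different key, so recording
# a later event never overwrites an earlier one.
history = {}

def record_event(t, x, y, z, state):
    history[(t, x, y, z)] = state

record_event(0, 5, 5, 5, "birth")             # the original event
record_event(30, 5, 5, 5, "cleaner arrives")  # same location, 30 minutes later

# Both events coexist in the record: distinct 4-coordinate keys.
print(len(history))           # 2 -- no overwrite occurred
print(history[(0, 5, 5, 5)])  # "birth" is still there
```

Keying by only (x, y, z) would reproduce the 'overwrite' picture being argued against: the second call would replace the first entry.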
Spacetime isn't contained by time, so it would be meaningless to talk about it coming into or going out of existence. Spacetime contains time, and there isn't a special moment that is the present (presentism). Einstein's (and Minkowski's) theories do not posit such a thing. Lorentz did, but a generalized theory of a universe contained by time was never published until this century. Spacetime is denied in it, as are black holes and the big bang, all replaced by other things with similar properties, testable only with fatal tests.
So no, if time and space are just different dimensions to be treated equally, then just like other places don't cease to exist relative to what you consider to be 'here', other times don't cease to exist relative to what you consider to be 'now'. So absent presentism, at no time in history do other times not exist. If only one time existed, that would be presentism.
Events don't occupy coordinates. Events are objective: the state of affairs at an event is the same regardless of frame choice or point of view. The coordinates assigned to that event however are entirely abstract and dependent on the coordinate system of choice. So I find it backwards to assign events to coordinates rather than the other way around.
True only under presentism. Relativity theory isn't a presentist theory. Strictly speaking, the image very much does exist since you're viewing it. But a presentist would say that the star (or your friend in the next seat for that matter) is no longer in the state that you perceive.
Many people do, however, argue against all current models of linear time, Carlo Rovelli being, for me, the most interesting scientist who does so.
I remain conflicted over whether, in any REAL sense, past events STILL exist. I remain unconvinced on that one, for now.
Re: Large language models (i.e. neural networks which are self-learning machines) which also "hallucinate". :yikes:
@universeness @Tom Storm @Wayfarer
Another good video. Demis Hassabis merely repeated what he has said about AI developments at DeepMind in other videos on the topic. BARD seems to fit into the 'gollum class' of AI currently being slowly introduced. This is discussed further in the OP I posted, based on the Tristan Harris and Aza Raskin video.
In this video, it seemed to me, that the main participants were suggesting that current AI developments and a future AGI, would be a benefit, overall, to the human race.
The main warning seemed to be that we need to introduce the current developments very carefully and slowly, establishing strong protections against any negative effects before taking another step.
I am becoming more and more convinced that there is an AI 'struggle' coming soon, or already here, and that it will pose a similar threat to humans (as Tristan and Aza compared it to) as nuclear weapons did and still do (perhaps even a greater threat). But I remain hopeful that we will maintain/achieve enough control/influence etc., so that we will survive its negative effects, and that we will eventually 'merge' with it, without the result being a 'post-human' existence, as you have previously predicted.
Anyway, back to the present, I just came across this article
https://philosophynow.org/issues/155/Whats_Stopping_Us_Achieving_Artificial_General_Intelligence
and I'm reading it now. Might be worth discussing ..
We have the issue of gradation, and the concept of 'critical mass/turning point' etc.
This is an old discussion that I have been having with folks, since I first asked a classroom of students:
Would you surrender your pinky, If I offered you a replacement, which could do everything your current pinky does plus a few new functions and abilities?
Would you still be you, if you became one of the advanced pinky people?
How disadvantaged would you be, if you decided not to become one of the advanced pinky people?
I am sure you can easily predict where the discussion normally goes.
At some point, many people will pull out of the deal!
For some it's at stage 1, the pinky offer. For others it's arms and legs, for some it's the heart, for some it's the 'only your brain is left' stage.
It also depends on what new longevity and functionality is offered.
Many are attracted to, If you accept these changes you can:
Live to ....... hundred or ..... thousand years old.
You can live underwater or in space, without a spacesuit.
You can speak any language, including animal languages..... etc
The list of offers is only limited by the questioners imagineering ability.
The question quickly becomes, what is the critical point, such that if your 'merging,' moves beyond it, you are no longer human?
You are not the same you that you were when you were a teenager, but you are still you, and you are still human, so, considering such concepts as the 7 stages of man, etc. What you might consider post human, others may consider 'advanced/augmented human.'
Of course no human elements present, certainly would be 'post human,' but there are many other 'potential gradations,' of human. Do you not agree?
I will have a look at the article you linked to soon.
Not merely, just upgraded eukaryotes, no. But I expect that the results of combining upgraded genetic material, will produce as many surprising results as evolution via natural selection has.
There is another 'player' in the game, still in its infancy. Biological computing, combined with genetic engineering, may make great advances in the future, especially with AI's help.
Based on the article you cited, I think:
[b]four possible techno-umwelts, or areas of perception for a machine:
1) Verbal virtual;
2) Non-verbal virtual;
3) Verbal physical; and
4) Non-verbal physical.
The versatility that marks general or comprehensive intelligence, that is, AGI, would only be possible when the machine freely operates in all four of these techno-umwelts.[/b]
and
[b]Only then could artificial intelligence become truly multimodal – meaning, it will be able to solve a wide range of possible tasks and comprehensively communicate with a human.
The idea of the combination of techno-umwelts thus gives us the opportunity to propose a new definition of AGI:
Artificial general intelligence is the ability of a robot (a machine with sense-think-act capability) to learn and act jointly with a person or autonomously in any techno-umwelt (but potentially better than a specialist in this field), achieving the goals set in all four techno-umwelts, while limiting the resources consumed by the robot.[/b]
Seems to be a valid and more detailed definition of an AGI than Wiki's:
An artificial general intelligence (AGI) is a hypothetical intelligent agent which can understand or learn any intellectual task that human beings or other animals can. AGI has also been defined alternatively as an autonomous system that surpasses human capabilities at the majority of economically valuable work.
I also share some common ground, with the last paragraph of the article:
On the one hand, we are beginning to ‘dissolve’ into the technologies and virtual worlds surrounding us, blurring the concept of ‘human’. On the other hand, as computers explore new areas of activity, be it chess or machine translation or whatever else, those areas are no longer exclusive to humans. Perhaps humans are the final frontier that the machine cannot yet overcome.
Quoting 180 Proof
I think definitions do absolutely matter in the 'observer reference frame' sense, but the notion of 'future' and 'change/progress' makes them ultimately fluidic. What it is to be human can change and still maintain some of the fundamentals. I just don't see why we have to insist on a 'post' or 'after' human definition. I told you previously, I preferred neo/nova sapien to your nano sapien.
I also prefer my more optimistic view of the future of humans to your more pessimistic one. :roll:
I think you secretly hope I am correct, even though you think the preponderance of the evidence available, convinces you that your more pessimistic viewpoint is correct.
Until they can perceive time, i.e. they develop a temporal mind, they're stuck with a built-in clock calibrated to coincide with the time zones. Math and/or computing is non-temporal. This is the sad reality.
I'm presuming that by "advances", you mean they become humans. If not, I stand corrected.
What do you mean? A computer does what it does IN time. Anything mathematical is an event that happens in time. Perhaps I am misunderstanding the aspect of 'temporal,' you are referencing.
Quoting L'éléphant
No, by progress, I refer to two possible emergents, as a result of the current path of biological computing.
1. The ability of biological computing to enhance and augment human lifespan and ability.
2. The possibility of a system, which is not completely formed of non-organic components, (but also not cyborg,) becoming self-aware/conscious/sentient.
Quoting L'éléphant
Humans are still debating what time is, so I can't comment on how a future orga/mecha sentient might perceive time. They will face the same concepts we do, relative time, individual time, proper time etc.
Expound on this. I've no idea what you mean by "perceive time" or "temporal mind".
Quoting universeness
I think "self-awareness" (i.e. real-time self-modeling) has to be built into an artificial system, it's not an emergent (i.e. "becoming") property or capability – and isn't necessary for intelligent performance (e.g. large language models). Why do you assume machines (or synthetic organisms) can, in effect, "wake-up sentient"?
Mainly because of the 'critical mass' or 'tipping point' concept found in nature. I think this is also found in various human illnesses. Physically we have the 'locked in syndrome,' or complete physical paralysis and the various coma style states of which some are referred to as vegetative.
From your linked article, we have:
"This concept comprises experiences of ownership, of first person perspective, and of a long-term unity of beliefs and attitudes. These features are instantiated in the prefrontal cortex."
This suggests to me that the functionality of the pre-frontal cortex is vital to what we would describe as the 'first person perspective.'
In this article, Metzinger (who I am unfamiliar with), to me, is describing the required 'stabilities,' and component contributing parts that result in the model of self (system), that he is describing. I see the 'self' he is describing as an emergence, in that it manifests as a combinatorial of the sub-systems involved. I use the concept of 'more than the sum of the sub-systems,' or fundamental quanta involved, to account for the more unusual features of self.
For example, I may (as a self,) become attracted to a person or an object or an idea, for reasons that even I find very hard to fully explain. That seems to me, to be caused by something more 'bizarre'/'complicated'/nuanced etc than everything a car or my laptop does, due to the combination of its parts and fundamental quanta.
Perhaps you are referencing 'emergent' and 'emergence,' differently than I, or/and perhaps under some strict philosophical or scientific rule, I am not employing the concept of emergence in a logically sound way. I am willing to be 'better tuned' on this point, if the reasoning I am employing here, requires it.
This is an excellent point. I think it's easy to miss that a huge amount of the brain's "floating point operations per second," or their rough biological equivalent, are devoted entirely to helping a human being avoid tripping over as they walk, keeping the heart and lungs properly synced up, constantly searching incoming sensory streams for threats, motivating a person to go eat, use the bathroom, or talk to their attractive coworker, etc.
It's not even just that humans need to eat, drink, etc., producing down time, it's that a large part of the computation power we have access to, likely a solid majority, is used to maintain homeostasis or so adapted to survival functions that it is hard to keep task oriented.
That said, I think it's also possible that we vastly underestimate the advantages of biological systems' use of dynamic parallel processing and have overemphasized the role of action potentials alone in cognition. I read a book called "The Other Brain," on glial cells, a while back, and it was remarkable how much this underappreciated set of cells affects everything the brain does. The actual workings of neurotransmitters are incredibly complex, and most neural networks reduce this to just "inhibitory value" or "excitation," which we may learn misses a lot more than we thought through AGI experiments.
I'm not that pessimistic, but if AGI proves very far away, I'd wager it is because our shot-in-the-dark attempts to describe biological computing power in terms of our digital computers were massively off the mark due to only focusing on "number of nerve cells firing." There are a lot of signals that change cell metabolism, feedback loops involving hormones to NTs and back, places where a molecule at one binding site subtly changes the shape of another, which in turn radically affects signaling, etc. It would take FAR more information to encode all that, and so if that stuff ends up being essential instead of merely a means to get neurons depolarizing, computation in the brain could involve orders of magnitude more processing power to replicate, let alone the jump if some sort of quantum search optimization akin to photosynthesis shows up in a way that meaningfully affects things.
E.g., https://news.mit.edu/2022/neural-networks-brain-function-1102
But maybe the things we want AGI to do don't depend on this stuff (if it is essential)? That seems distinctly possible.
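The reduction described above, collapsing all of a synapse's neurotransmitter chemistry into a single signed number per connection, is visible in the standard artificial-neuron model: each input gets one weight, with negative weights standing in for 'inhibition' and positive ones for 'excitation'. A minimal sketch (the weight and input values are invented for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """Standard artificial neuron: everything a synapse does is reduced
    to one signed weight per input, summed and squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Positive weight = 'excitatory', negative weight = 'inhibitory':
# that single number is all the biochemical detail the model keeps.
excite = neuron([1.0, 0.0], [2.0, -2.0], 0.0)   # excitatory input active
inhibit = neuron([0.0, 1.0], [2.0, -2.0], 0.0)  # inhibitory input active
print(excite > 0.5, inhibit < 0.5)              # True True
```

Everything the post lists, metabolic signals, hormone feedback, binding-site shape changes, has to be folded into (or lost from) those few scalar weights, which is the point being made about potential underestimation.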
Such processes exist in the systems software of computers as well, start up and shut down routines, refreshing the contents of RAM space, port polling (around 30 times per second) for data input from connected peripherals like a touchscreen or a keyboard. Are such processes also existent in say, trees?
If so, do we consider such processes in humans, an aspect of human consciousness and If we do then must it not follow that we must label ANY such process in a computer or a tree, an aspect of consciousness?
No, I don't think so. It seems like you could be conscious even if your blood had to be circulated by a machine, your blood oxygenated by a machine, etc.
I wasn't really thinking in those terms. I was just thinking in terms of estimates of the total computational power of the human brain in the biological equivalent of floating point operations per second versus the amount of that computational power that can actually be allocated for doing things like planning and executing a Moon landing.
Intuitively, it seems like digital AI would have to allocate a lower share of its total computational resources towards non-relevant activities. I might be entirely wrong about that though.
Ok, I remain fascinated however, regarding which processes/activities of the brain, can be proven to be 'contributory' towards what we consider consciousness and/or awareness of self, as alluded to in articles like the one @180 Proof linked to.
I am not 'personally' aware of an aspect of my consciousness which causes my hair to grow on my head or my chin at a particular pace, but at a much slower rate on my chest or eyebrows, unless I shave my chest hair or eyebrows. Then it grows back at a similar speed to my head, until it reaches a certain length again, when its rate of growth substantially slows. This is why we don't have to go to the barber with overgrowing chest, underarm, eyebrow or pubic hair.
Is my internal system for personal hair control contributory to my consciousness? Or is it a separate sub-system that has no importance or value at all to my consciousness or self-awareness, even though I am aware of it as part of the 'workings of my being?'
:rofl: Sorry Count! This is just one of the ways in which 'my strange,' manifests!
Youngsters today, talk about 'my bad' (which I personally hate,) so I feel justified in typing an equally bad English phrase such as 'my strange.' :halo:
The 'Tedtalk' video was interesting:
Do you agree with its main suggestions, such as:
There is no such REALITY as self.
We can only ever experience the results of the 'hidden interface,' and not the detailed workings and structure involved. We can only see the bird flying, 'via/through, a 'hidden window.'
I did not find the examples Mr Metzinger gave, compelling, as arguments against the existence of a REAL self. His fake hand, phantom limbs, virtual stroking examples seemed to me, to be mere projections (empathies) based on previous experience of what an individual was witnessing live.
Even if you have never experienced being pregnant yourself, you can still experience a level of pregnancy pain, because your wife is pregnant. I remained unclear as to why Metzinger saw these examples, as supporting his claim of 'no such reality as 'self.'
He also does not explain how he conceives the existence of other 'individuals.'
Does his position support solipsism or simulation theory etc?
Do you think that YOUR notion of 'self' has no sound foundation in REALITY?
What am I missing here?
Quoting 180 Proof
Initially, a 'new' AGI will surely base what it labels its current 'knowledge' maximums, or what it is most confident that it knows for sure (for want of a better way to explain myself here), on its previously stored knowledge, and its stored knowledge will include a description of what a human consciousness is.
I assume that at some point in its 'growth,' an AGI will 'ask itself' the same questions that humans struggle with:
Who and what am I?
What (do I want) is my purpose? etc. If it does pose these questions to itself, then I assume it will reference its notion of what its stored information defines as a human consciousness.
Would this be anthropomorphising?
Are you suggesting that such questions will never be internally posed, by an AGI and it will just function as maintenance, growth and survival necessities direct it?
To me, the capabilities of our cortex are much more important than the functions of our limbic system or R-complex (which I fully accept we could not survive without).
Am I interpreting your position correctly or am I way off?
Well, I am trying to gain some insight into Mr Metzinger's work, and I see some crossover between our discussion here and your discussion with @Eugen on the Consciousness - Fundamental or Emergent Model thread. I absolutely agree that the devil is in the detail and that you need to be familiar, or almost fluent, in the details of what is being offered to make the exchange worthwhile. I feel the same way sometimes when discussing the details of computer science with a novice.
My experience as a teacher however puts the burden of patience on me. I only get really frustrated with a novice, if they are asking me questions, but constantly demonstrate an inability to understand my answers, or do understand my answers but refuse to accept the academia behind them, without good reason.
Quoting 180 Proof
If we do find that is the reality of an exchange between us then sure, we should pause, regroup, and see if we can find a better common ground which offers some value to both of us. If not, then we should 'pause' again and find a more fruitful exchange, somewhere down the line.
I agree that it's a burden on you to summarise Mr Metzinger for me, to save me from having to do my own shovel work, so I try to only ask you to clarify YOUR OWN viewpoints, citing any sources in support that you wish. At least:
Quoting 180 Proof
This confirms for me that you do agree that the concept of self does not, in your view (in line with Mr Metzinger), manifest as a REAL existent.
My question then simply becomes, as annoying but nonetheless as serious as 'who are you?', if the concept of 'self' has no reality for you. Perhaps I should ask Mr Metzinger!
1. this is a nonsense
2. you're asking the wrong question
3. there is no weak or strong emergence
4. your assumptions are wrong
5. etc. etc. etc.
Solutions or coherent answers to my "mistakes"? Never. Only general criticism and no solutions.
He is the only guy on this forum acting like that. The rest of you guys seem to understand my questions perfectly.
So, you want me to stop doing something that I did not do? In what way do you conflate: Quoting universeness
with
Quoting Eugen
:roll:
If @180 Proof challenges you a little more than anyone else on TPF then imo, you should enjoy that challenge. AND, before you take further umbrage, I am only stating a personal opinion that you are free to reject.
Quoting universeness
I'm not sure I'm a novice to 180Proof, and I do understand your answers. So when you tried to compare your "novice" with me (whether in regard to you or him), I think you're wrong.
As for summarizing ... that's all I've been doing in our exchanges on this topic over dozens of posts. We're here to inform, maybe inspire & intrigue, not spoon-feed each other.
You have simply misunderstood my reference to you and your recent thread. Let me clarify.
My use of the word 'novice' in my response to 180proof contained no stealth intent to relate IN ANY WAY to you.
I referenced you and your recent exchange with @180 Proof to show a little support for YOUR position, in the sense that 180proof can seem a little exasperated at times, with me as well as others, and I feel that I have to try harder to garner more detail from him to clarify his own viewpoints.
He has impressive knowledge of philosophy imo and, again imo, this can make him a little impatient with those who don't have such fluency. But from my teaching experience, I can understand his 'exasperation,' and that sometimes demonstrated by others on TPF, towards philosophical novices such as me. I made no accusation AT ALL that YOU are a philosophical novice.
I leave the declarations of your own qualifications to you.
He may have the knowledge, I'm skeptical about his skills though. But I'm still waiting...
Good, keep doing that and I will do the same for you and others regarding my own fields of fluency.
I agree we can inform and perhaps even inspire & intrigue and I also assume that you have not ossified to the stage where you think you also cannot learn from others posting here. I do not advocate for spoon feeding, unless doing so, on occasion, would assist another poster in all humility.
Time savers are always welcome.
NO worries!
:up: Lost in translation is a very forgivable confusion.
From Wiki:
Take any perdurant and isolate a part of its spatial region. That isolated spatial part has a corresponding temporal part to match it. We can imagine an object, or four-dimensional worm: an apple. This object is not just spatially extended but temporally extended. The complete view of the apple includes its coming to be from the blossom, its development, and its final decay. Each of these stages is a temporal time slice of the apple, but by viewing an object as temporally extended, perdurantism views the object in its entirety.
This seems akin to world lines, do you agree?
So from your description above, how much of it (or you) do you associate with the label 'real,' especially since you also employ the label 'virtual' (I assume 'virual' was a typo).
Based on Eugen's comment above, I would ask you to apply the same standard as you applied to @Gnomon. If Eugen claims that you have not answered his questions to you on the Consciousness - Fundamental or Emergent Model thread, then I assume that you would want to sufficiently answer his complaint, so that your, imo 'fair,' complaint against @Gnomon remains sound.
:starstruck: :love: I am so happy when someone gives me another 'conduit' to post AGAIN, one of my fav songs. Sorry in advance to any of the 'we arra mods' group whom this idiosyncratic behaviour of mine might annoy:
Exactly. :up:
I'm not aware of @Eugen asking me to explain my own metaphysical or scientific speculations and that I've refused to answer as Gnomon (& Wayfarer) has often done. These are my answers to Eugen's questions of my objections – not questions of my speculations – on his thread:
https://thephilosophyforum.com/discussion/comment/803218
https://thephilosophyforum.com/discussion/comment/803300
Not comparable at all to my exchanges with Gnomon.
I think your response at:
https://thephilosophyforum.com/discussion/comment/803443
has redressed the imbalance, or lack of detail, that remained after your two posts that you linked to in your above post.
I was making a loose comparison with @Gnomon's refusal to answer YOUR questions sufficiently, based on my opinion that you could be accused of doing something similar. The brevity and lack of explanation in the two posts you linked to above confirms that, imo.
Anyway, apparently @Eugen, is rather selective in which of my own questions, HE decides to answer.
I understand that position, and have described my own similar frustration at times, via our recent PM.
I see some value in us both encouraging each other to maintain a consistent approach.
In hindsight, it would have been better to discuss the particular, small issue I raised publicly with you here by PM. I will do so in the future if such should arise again.
At least I treated you by offering you a listen to a fab Ting Tings song, by way of compensation for any bruised ego I caused you. :halo:
Here's a recent book I just came across by computer engineer and neuroscientist Jeff Hawkins titled A Thousand Brains which summarizes 'lessons learned' from his own company's research on AGI. I haven't read it yet but reviews intrigue me and his first book On Intelligence was quite good and informative. Maybe you're already familiar with him? My guess is that Mr. Hawkins would be right at home in our 'futurist' discussions (no doubt schooling us both).
No. That's just you talking human talk. What does "in time" mean to you? Explain that first. Then try to analyze, for example, the retrieval of information by a computer. The human mind cannot retrieve all words simultaneously from a written text and not get a jumbled mess of information.
(It will be hard for me to explain this to anyone, unless you already have an idea of what it means to be nontemporal).
Quoting 180 Proof
In a manner of speaking, we perceive time as past or present. We also perceive time in terms of duration -- how long or how short.
Temporalism in metaphysics posits that perception necessarily involves the objects of perception as being within a duration or time order of some sort. This is not to say that all objects of perception involve the temporal aspect of thinking -- we do perceive the spatial and nontemporal qualities of objects. The size of a tree is nontemporal, as is the brightness of a light bulb.
Yes, clocks, for instance, do not experience duration or retrospection. I think it's our metabolic functioning – relative states of homeostasis – that constitutes the intuition of "temporality". If this is so, then only an AGI instantiated in either synthetic or organic organism will, as you say, have a "temporal mind". This, however, would not be an intrinsic, or fundamental, feature or property of AGI itself, and therefore, it wouldn't (need to) be sentient – certainly not as we conceive of sentience today.
@universeness
If ever an AGI is created, it still would not be sentient, as humans are sentient. Or in our usual term, conscious. The measure of consciousness involves also our fundamental propensity to inaccuracy or errors due to the fact that our perceptual qualities have been developed naturally, and over time; involving actual experiences with objects. It's a lived experience, not created in the laboratory or simulation.
Errors, for example, an experiment involving a measure of duration: two images are flashed to human subjects and they are to judge how long the images were shown. One image is larger than the other. So there's the non-temporal aspect of the experiment - size. Either the subjects would say that the bigger one lasted longer, or the smaller lasted longer, despite the fact that both were flashed for the same length of time.
The inaccuracy is exciting, in my opinion.
"where he leads a team in efforts to reverse-engineer the neocortex"
:grin: What a brilliant Job!
You should try to emulate @Mikie and try to contact Jeff and see if you can convince him to be a guest speaker on TPF. That's a schooling I would love to experience!
That's called the fetch-execute cycle and it is driven by the clock pulse on the clock line of the 'control bus' (not really a bus, as the lines operate discretely).
Each step below occurs serially, within a single clock pulse.
1. The processor sets up the address bus with the address of the memory location to be accessed.
2. The read line of the control bus is set high by the processor.
3. The data/instruction resident at the memory location currently on the address bus is copied onto the data bus and sent along to a memory data register by the processor.
4. The processor will then transfer the data to a general purpose register or directly to RAM space, or it will decode and execute if it is dealing with an instruction rather than a data item.
This all happens WITHIN time slices (durations).
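The cycle described above can be sketched as a toy simulation. To be clear, this is only an illustrative sketch: the memory map, opcodes (LOAD/ADD/HALT) and register names are my own invented assumptions, not any real instruction set, but the fetch, read, decode and execute steps follow the numbered list.

```python
# Toy sketch of the fetch-execute cycle. MEMORY, the opcodes and
# the register names are illustrative only, not any real ISA.
MEMORY = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
          10: 5, 11: 7}  # addresses 10-11 hold data items

def run():
    pc = 0   # program counter: address of the next fetch
    acc = 0  # accumulator: a general purpose register
    while True:
        address_bus = pc        # 1. set up the address bus
        mdr = MEMORY[address_bus]  # 2-3. read line high; contents arrive
        pc += 1                    #      in the memory data register
        opcode, operand = mdr   # 4. decode ...
        if opcode == "LOAD":    #    ... and execute
            acc = MEMORY[operand]
        elif opcode == "ADD":
            acc += MEMORY[operand]
        elif opcode == "HALT":
            return acc

print(run())  # 5 + 7 = 12
```

Each pass of the loop body stands in for one fetch-execute cycle, i.e. one of those 'time slices.'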
Quoting L'éléphant
Sure, but it took you a duration to type that, or even to think it, so your perception of a tree's size or brightness is temporal in the sense of your own perception time/duration.
Even if you (can) consider the biggest reference frame of perceiving the universe as a single system, then that system will have a temporal aspect to any observer, as it did have a beginning, it does have a duration, and via entropy, it will 'disassemble.'
The idea that a tree's height has a non-temporal frame of reference to entities such as us is relative, imo.
I get it. That was my point. But I was trying to point out to you that human errors are errors peculiar to humans. Which is what makes it interesting to me. Just as a computer could be made perfect, humans organically develop and along the way this development picks up natural selections, mutations, and accidents, which make for an exciting phenomenon.
I'm not trying to compare the abilities of humans and computers. I'm trying to explain why human consciousness (it's redundant to say this) is human.
Sure, if you feel that change would better fit its content.
That's ok, you are free to bail out anytime you wish.
Have you watched this?
I watched it last night. It's 2 hours 35 mins, but worth the investment.
The term emergence/emergent was used quite a bit.
I enjoyed the little insight it gave me into the work of neuroscientists and Jeff's 'thousand brains theory.'
Brain reference frames and movement models, the brain's use of maps/graphs, cortical columns, grid cells, place cells, vector cells, etc, etc.
This video is easily worth its own thread but I don't know if TPF is an adequate place for such a thread.
Obviously a neuroscience site would be a much better fit.
Few here would be willing to invest the time involved imo.
I certainly think its content would help make theists feel more and more uncomfortable as they continue to try desperately to hold on to their woo woo, ancient fables and present them as facts.
God did this! Just seems more and more 'silly.'
I would personally need to watch this video a few times to gain better insight however.
Additional: I will now have to update my personal, previous, Paul Maclean model of the human triune brain, to Jeff's thousand brain model based on cortical columns.
WHEN I SAW HIM draw the little circles on paper and start to draw connecting communication lines between them. I said HEY, that looks like he is starting to draw a topology of a fully connected mesh network of computer nodes!! The amount of crossover between the mechanisms this video describes and computer science is very strong imo.
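The fully connected mesh analogy above is easy to make concrete. This is only a toy sketch (the node names are arbitrary placeholders, not anything from the video), but it shows why the number of direct links in a full mesh grows as n(n-1)/2:

```python
# A fully connected mesh: every node (or cortical column, in the
# analogy) has a direct link to every other node.
from itertools import combinations

def full_mesh_links(nodes):
    """Return every pairwise link in a fully connected mesh."""
    return list(combinations(nodes, 2))

nodes = ["A", "B", "C", "D"]
links = full_mesh_links(nodes)
print(len(links))  # n(n-1)/2 = 4*3/2 = 6 direct links
```

That quadratic growth in links is exactly why full-mesh topologies get expensive quickly, for computer networks and, presumably, for brains.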
@noAxioms, @Count Timothy von Icarus, @Alkis Piskas, @bert1, @Isaac, @Benj96, and of course anyone else here on TPF, that might find Jeff Hawkins work (as introduced to me by @180 Proof) as interesting as he, and now I, do.
I will have to now buy Jeff's 'thousand brains book.' YOU keep adding to my homework sir! :rofl:
I recently completed The memoirs of Ulysses Simpson Grant.
My current read is my second reading of Brian Greene's 'The Elegant Universe.' (I first read it 15 years ago.)
After that, I have TPF member @Vera Mont's book 'The Ozimord project' to read, then Sean Carroll's 'The Biggest Ideas in the Universe,' VOL 1! (two more to come!), and now!
Jeff Hawkins 'A thousand brains.'
This is beginning to impact my weekend drinking time!!!!!! :halo:
I'll give this video a look eventually. Thanks!
In his discussion, in the video, with 3 other folks involved in the area, you will find that Jeff, does not currently hold the same opinion as you do, regarding the threat of AI developments towards AGI.
He certainly thinks that significant threat exists but I would also suggest, that he thinks we humans, are capable of countering them. He could of course be quite wrong in that view.
I just heard that:
Geoffrey Hinton (the 'godfather of AI'), 75, has just resigned from his post with Google. He helped to design the systems that became the bedrock of AI. But the Turing prize winner now says a part of him regrets making them.
There certainly do seem to be two entrenched camps on either side of the current AI debate.
You did not misread my remark, but perhaps I continue to misunderstand your position regarding AGI.
Perhaps it's your regular use of 'posthuman,' or/and your, in general, more pessimistic view of the future of humanity.
A few 'optimistic' old posts ...
https://thephilosophyforum.com/discussion/comment/768537
https://thephilosophyforum.com/discussion/comment/770469
https://thephilosophyforum.com/discussion/comment/502217
:lol: Well done in finding 3 out of your almost 12,000 posts. :joke:
I am sure you could find more, if you had to.
I won't suggest 6 examples of your less positive comments to 'move in front,' and invite such a race.
The fact that you continue to push back against my accusation that you can be quite pessimistic about the future of humanity, and our ability to be better than our more base 'jungle rules and jungle thinking' style manifestations, such as territoriality, tribalism, theism, capitalism, malevolent hierarchy, xenophobia, etc, etc,
means that you, imo, are one of those who are part of the solutions and not part of the problems.
I enjoy your more 'optimistic' posts whenever I come across one. If I need cheered up, I can always write more of my own. If you ever watch the Jeff Hawkins vid above, I would be interested in discussing its content with you.
Btw, we just differ over what constitutes an 'optimistic' view of our automated (IMO, prospective "post-scarcity") future. In a nutshell, anthropocentric you: "super-humanity" with exponentially more biophysical-metacognitive options than our current human condition affords us; de-anthropo-centric me: "post-humanity" with exponentially fewer biophysical-metacognitive defects than our current human condition constrains us with.
Or in (visionary) "science fiction" terms – my view is more "Starchild-Monolith" (or "Culture Minds") and your view is more "United Federation of Planets-Star Fleet", no? :nerd:
Yes, I would broadly agree with your analysis. It would be fun to drill more into how much of an individual identity you think could still be maintained, after a merging with future tech. My projected future, or I might even be so bold to say, the future I see currently, slowly emerging, will be very turbulent and may continue to be an 'ever on the precipice, existential risk,' as dangerous as we have faced since the invention of nuclear weapons. But I DO think we will eventually, get to a stage, where the following will be a description of a typical human existence.
1. Birth into a mainly secular humanist, globally united society, where an individual can take their basic needs and personal protections for granted, from cradle to grave.
2. We will become an interplanetary species.
3. Tech will be used initially as physical and mental benevolent medical support, and personal security support, and your 'first stage' of life will be a natural human existence, with a lifespan max of around 200 years, based on maintaining, growing and nurturing your 'natural identity,' developed since your birth.
4. Your second stage of life will happen when your first stage is close to its natural end, or if it ends via accident but you can be saved via tech. This is the point when an individual human can CHOOSE to 'merge' with tech to become neo/nova sapien, and gain all those 'biophysical-metacognitive' options you mentioned, and YES, I think we will fight as much as possible to maintain as much of the 'human identity' we had in our 'first stage' of life. Not all anthropomorphising is ill-advised.
5. Artificial lifeforms, biologically engineered lifeforms, mech/orga variants, genetically enhanced animal, aquatic and insect lifeforms, etc, etc, will eventually exist alongside us. Perhaps we will have encountered some alien life by that time as well. Perhaps all of the lifeforms on Earth (natural and 'created') will eventually become interplanetary/interstellar.
6. I don't think the threat of extinction will remain the main picture. I think the main picture will consist of an eventual diversification, that will produce variety in a number that will dwarf the number of varieties produced by evolution and natural selection.
The vastness of the universe, can easily accommodate such.
1. I suspect runaway climate change will balkanize the globe even more than it is today because the capacities for mitigating the catastrophic 'warming' effects are now and will be even more so unevenly distributed (even when AGI comes online). In the best case scenarios, however, I agree with your "cradle to grave" techno-"secularism" – what I imagine as automated post-scarcity societies (APS).
2. I imagine that in about fifty years we will start 'spreading out' in earnest across the inner solar system, mostly orbital, moon & asteroid habitats rather than planetary 'colonies'.
3. Okay (re: APS).
4. Assuming that "the human identity" is a manifestation of the human condition. Thus, I imagine as technosciences, extraterrestrial habitation & AGI —> ASI accelerate the disappearance of the current human condition, "human identity" also will disappear. (Re: posthumanity (e.g. body-mods & brain-augments for living in space; AI-mediated-hiveminds; orga-mecha mergers, etc) —> transcension)
5. I predict that by the end of this century our (AGI-controlled) space probes will discover robust exo-biomes and thriving xeno-species beneath the ice carapaces of a number of Jupiter's & Saturn's moons. By then, however, ASI will determine how best to protect (enhance) terrestrial life from (by) extraterrestrial and artificial life-forms.
6. Three natural mass extinction-events come to mind which could affect the entire inner solar system (now and always): (A) gamma ray bursts, (B) planetary colliding coronal mass ejections (re: the Carrington event) and (C) micro-singularities. A non-terrestrial diaspora, of course, increases the likelihood of our species surviving extinction events but in no way guarantees it.
Thanks for the very interesting response.
1. I understand your 'balkanise the globe' projection but I don't agree with your bracketed ('even when AGI comes online'). I think AI progress will help us very significantly with climate change, and I am also boosted by two other 'impressions' I have. Global youth seem more aware of the threats that our historical and current stewardship of the Earth has caused, and seem more determined (compared to earlier generations) to organise themselves to reverse those effects. Even many members of the traditionally nefarious rich and powerful are beginning to realise that they cannot feed as well from a dead or even balkanised global population. I do also accept that there is nonetheless a dangerous global apathy and a substantial 'fake news force' to contend with.
I agree with your 'post-scarcity' epoch and hope it ends global hunger and vastly improves people's lives BUT it will then result in an increased need for better population control (at least on Earth).
2. Yes, I think 'stepping stone' space habitats, stations etc will become very necessary, before eventual large, probably initially domed, extraplanetary settlements, until 'terraforming' can make any kind of impact.
3. :up:
4. Quoting 180 Proof
I think the human 'first stage (fully natural, organic) life' will change, yes, but not in a way that humans alive today would not recognise. I think we will fight hard to make our first, up to around 200 years of existence, much as it is today. I think the current experiences that form 'who we are' and 'who we might become' are very much revered by a great number of us.
Quoting 180 Proof
From your link: Today, we examine the possibility that the reason for the Great Silence is that all the aliens have evolved beyond the need to explore!
My suggestion is that any 'changes' in 'humanity,' especially what I would consider human stage 2 life, need not amount to a disappearance of 'human identity' but an 'updating' of it. I know you think I am arguing semantics here but I think it's a valid semantic debate.
The Universe Today article you cited was a fun read and its main proposals were dramatised somewhat in the guise of 'the first ones' in Babylon 5:
But remember, the first ones all went to explore beyond the rim and became 'intergalactic.' The universe is so much bigger and more unknown than any AGI or ASI will be able to 'comprehend,' in my 'humble' but still very very very atheist opinion. I don't find the posit of 'beyond the need to explore' very likely.
5. I like that prediction, I hope it happens that way, I certainly would not advocate for any discovered microbial sized or any sized, biological structures being destroyed to make way for any colony from Earth.
6. Well, there are comments like this, from such as the physics stack exchange:
[b]Gamma rays can be stopped by the few inches of lead shielding nuclear reactors, the Trillions of yotta grams that make up the sun will be absolutely fine for the job.
You also don't need to worry about venus losing it's atmosphere, the worry with a GRB is that it destroys the ozone layer not that it flat out strips away our atmosphere.
The shortest GRB's can be two seconds long so earth could definitely be behind the sun for the entire duration of one.[/b]
All I am suggesting is that there seems to have always been many existential threats to the Earth and its 'life' based contents. Despite these, life on Earth endures. The threats you cite are very real and very valid. We will have to pay attention and make very serious, united, global efforts in the future to protect our future selves and our home planet. I think that we are left with nothing stronger at the moment than our individual hope that we will survive, in some form or another. I know that in some posts you have suggested that you are not a big fan of the notion of human 'hopes.'
1. The oceans are already too warm to reverse catastrophic climate change. AGI will triage the global population centers so that 1 in 4 (2bn) people might survive to the end of the next century.
2. 'Planetary colonization' (e.g. megaengineering, terraforming) does not make economic, engineering or scientific sense IMO. No "stepping stones", my friend, just dispersion of Earth's species as a hedge against terrestrial extinction risks. And because of hard radiation (e.g. cosmic rays) and astronomical transit durations, 'deep space exploration' is only feasible for (tinier-the-better) intelligent machines.
4. Babylon 5?! :rofl: (sorry) Nothing remotely to do with the transcension hypothesis.
5. :up:
6. "Global efforts?" Never were, never will be. And no need for that: AGI —> ASI will drive the big blue bus out of the ditch we're stuck in despite our fractious human nature. No doubt, over the next century or so, 3 out of 4 (6bn) of us will be left behind in the ditch so that the rest of our biological descendents can survive (predominantly due to the efforts of our machine descendents 'herding a billion cats').
I apologise in advance for such a terrible accusation if it's just a tech hitch.
In truth I am not that bothered anyway. It's lived a long life in the league of page one threads.
It seems to be getting pushed down the pages, regardless of any new posts on it. :lol:
1. Any exemplar, reliable scientific studies you know of that claim this as fact?
2. Not yet, I agree but tech advances may/I think will, change this. I will stick with my stepping stones projection/prediction.
4. I quote from the article ", the Transcension Hypothesis ventures that an advanced civilization will become fundamentally altered by its technology. In short, it theorizes that any ETIs that predate humanity have long-since transformed into something that is not recognizable by conventional SETI standards."
Same in B5, the humans required 'Vorlon' tech and the power of the alien tech (the great machine) they found on the planet that B5 orbited, Epsilon III. Without that, 'the first ones' would have remained invisible to them. G'Kar explains it quite well below:
6. Quoting 180 Proof
See! Your more pessimistic sentences are still alive and kicking! :grin:
Quoting 180 Proof
LOOK! there's another one! :joke:
Yes, I “sunk” it, which means new posts no longer push it up the page. As you say, it’s had a long enough life, and it’s now more like a private conversation.
Typed like a true emotionless AI!
It may diminish, but will be freshly remembered by a significant few of the highest quality!
Plenty. This article cites some of them:
https://www.nytimes.com/2021/08/09/climate/climate-change-report-ipcc-un.html
I don't have a great deal of confidence in a New York Times article. I am cynical enough to treat all newspaper articles as deserving only a base level of confidence that they are true.
Unfortunately, I could not read the article without agreeing to subscribe to the newspaper.
Do you have any better links to support the proposal that your point below has very strong evidence behind it?
Quoting 180 Proof
https://www.cnn.com/2023/05/05/world/ocean-surface-temperature-heat-record-climate-intl/index.html
Thanks for the CNN link. As you suggest, the article was not being too heavily alarmist, but was offering significant warning. I cannot post any quotes from it as I try not to accept cookies.
From my own past reading on this (mostly about coral reef damage/bleaching and melting ice in the arctic and antarctic regions), I agree that the current situation in Earth's oceans is very bad. I do not however think that we have reached the point of no return, and I remain hopeful that your prediction of a human population fall from the current 8 billion to 1 or 2 billion, within the not too distant future, will prove unlikely, BUT I cannot provide convincing evidence that you are completely wrong.
Our h. sapiens species has shown itself to be uniquely smart enough to create at least one problem for itself so intractably complex in scale and scope that we cannot solve it – climate change accelerated by anthropogenic global warming. Weirdly I'm hopeful that AGI —> ASI – assuming it bothers – will be capable of reframing the parameters of the problem so that it can be solved well enough to save a significant portion of Earth's habitable biosphere and thereby a sustainable fraction (1/2-1/20?) of the human population. I imagine the only significant "planetary terraforming" that will ever be undertaken will be an AGI —> ASI-driven project to terraform the Earth and eventually reverse / end the Anthropocene.
Ok, It was useful to drill down a little more into your position in this issue.
My "hopes" are silver linings in the dark clouds rolling in. The butterfly, sir, is about to leave the caterpillar's "human" chrysalis (re: ).
:point:
Do you have evidence that the butterfly retains no knowledge of its time as a caterpillar?
Might the butterfly maintain much of the 'mind' of the caterpillar?
Do we "retain knowledge" of our time as blastocysts? :roll:
I imagine crawling is, at best, useless for flying. Maybe butterflies keep caterpillars around just to study them (e.g. "butterflygenesis") or for shitz-n-giggles (à la reality tv, stupid pet tricks, etc) or both? :smirk:
I would need to concentrate to see if I have any such stored memories. I will try hard this weekend after 1 or 10 single malts!
Quoting 180 Proof
They have to land sometimes! I have witnessed landed butterflies walk/crawl!
Quoting 180 Proof
No caterpillars = no butterflies. As I suggested before, there may be aspects of human consciousness that no 'created' system can reproduce.
Too much in your link for me to read at the moment. When I have read it, I will comment on it.
I did not see any strong connections to our discussion. Was there a main summary point from his system 2 category that YOU find strongly contends with my suggestion that: Quoting universeness
Btw, I came across this:
A new study finds that moths can remember things they learned when they were caterpillars — even though the process of metamorphosis essentially turns their brains and bodies to soup. The finding suggests moths and butterflies may be more intelligent than scientists believed.
From here
Another link to the catastrophic effects of (runaway) global heating on Earth's fresh water sources: lakes & reservoirs.
https://www.cnn.com/2023/05/18/world/disappearing-lakes-reservoirs-water-climate-intl/index.html
The heating of oceans and drying up of lakes-reservoirs are strongly correlated. Not "pessimism", my friend, just facts. :mask:
Do you mean 'intelligence versus self-awareness?'
I just can't conceive of any value in an intelligent system that is not self-aware, other than as a functional, very useful tool for an intelligence that IS self-aware. Like a computer is for a human today.
Perhaps I am missing your main point here due to my attempts to decipher/interpret the words/phrases, you choose to use.
Quoting 180 Proof
I don't refute the very valid concerns regarding climate change.
I do fully accept that the evidence is overwhelming, that we have damaged the Earth's ecosystem significantly, in a way that compromises our survival and the survival of the current flora and fauna on the Earth. I think the Earth itself, will easily survive the actions of humans.
I think WE WILL pay a price for abusing Earth's resources for private gain, and to satisfy the lusts/greeds of individual/(groups of) nefarious humans, but it's not over until it's over.
The 'facts' you mention are not imo, immutable, yet.
We probably have passed the point of no return in some ways, but not with the results that you suggest, ie, population reduction to the levels of an 'endangered species' or actual extinction.
No. I mean intelligence (i.e. adaptivity) without "consciousness" (i.e. awareness of being self-aware), a distinction I suggest in this old post https://thephilosophyforum.com/discussion/comment/528794 ... and speculate on further, with respect to 'AGI', here https://thephilosophyforum.com/discussion/comment/608461.
Also today ...
https://www.theguardian.com/environment/2023/jul/19/climate-crisis-james-hansen-scientist-warning
Apologies for continuing to flog this equine's carcass:
https://www.dw.com/en/sea-surface-temperature-hotter-than-ever-before/a-66444694
Anyway ....... firstly, I will try to refresh where we are in our exchange here:
Quoting universeness
Quoting 180 Proof
Quoting 180 Proof
Quoting 180 Proof
I still perceive a 'versus' between the 'theory of mind' that you propose for a future AI and our human 'theory of mind.' Would the AI theory of mind you propose have to decide whether its 'intelligent' but not 'conscious' (at least not conscious in the human sense) state was 'superior' or inferior to the human 'state of mind'? I am struggling to find clear terminology here.
Perhaps a better angle would be: if your AI mind model cannot demonstrate all of your listed functionalities:
Quoting 180 Proof
How do you know it would not conclude/calculate that to be an inferior state, and that functions 4 and 5 above would become two of its desires/imperatives/projects?
Quoting 180 Proof
I have watched just about everything with Carl Sagan in it available online, more than once. Some I have watched many times. I have watched the vid you posted at least 5 times so far.
Carl was a far better predictor of future events than Nostradamus ever was.
I don't try to play down any current danger that climate change activists are shouting about, nor have I ever suggested that the human race is doing other than a piss-poor job of its stewardship of this planet, but I don't see any reason to believe that a future AI would do a better job as steward of this planet.
AGI/ASI may well not be as 'biased' or 'irrational' as 'human groupthink' can be, but are you soooooooo sure that a future mecha won't be just as toxic towards planet Earth as humans were, if not more so?
If it needs to strip the Earth of its resources to replicate, advance and spread its own system, then it may do so and move on into space.
Anything I typed here in response to the linked article would probably be a repeat of elements of my previous post above. I fully accept all the warnings about the climate change disaster we imminently face. BUT, it's not over until it's over! That's all I have to cling to, and cling on is what I will continue to do! Feel free to think of me as the Monty Python Black Knight if you wish, but I don't think it's as hopeless as that ...... yet.
https://thephilosophyforum.com/discussion/comment/892509