You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Emergence

universeness January 06, 2023 at 11:15 12975 views 1098 comments
The universe, at its largest scale, seems to be a system based on disorder-order-disorder.

Combinations of fundamentals (which we have not fully identified yet) seem to drive the change from disorder to order: from fundamentals to spacetime, star, planet and galaxy formation, to the formation of flora, fauna and sentient life on a planet such as Earth.

Local entropy means that separate systems can reach the end of their lifespan and 'disassemble' back to their constituent parts. BUT if a star goes nova, heavier elements are released, and that's why we exist. So the 'disassembly' does not necessarily mean a return to the original ingredients only.
A dead star does not become pure hydrogen again.
So carbon for example is only produced due to what happens during the life of a star.

Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen.
Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentration that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang.

As carbon based lifeforms, we eventually 'emerged' based on this carbon production system.
So it seems that the 'death' of one system can contribute to the 'creation' of a new, more complex system. (Perhaps there is something for theists in this. Perhaps a 'first cause or prime mover system' had to die(so, no longer exists!) for our universe to begin)

Is this carbon production an 'objective truth' about our origins? Only in the sense of tracing the path from the origin of carbon to us.

This got me thinking more about 'emergence.'
Since early Homo sapiens, around 300,000 years ago, the 'knowledge' our species holds 'as a totality' has been increasing. Each time we gain significant new knowledge, our technology advances, and this has all sorts of effects on our species. It opens 'new options,' 'new possibilities.'
This 'direction of change' seems to me to have been accelerating across the 300,000 years of the human story, to the point that we are coming up with new tech at a faster rate than ever before.

To what extent do you think that human beings are 'information processors?'
Our ability to memorialise and pass on new knowledge from generation to generation seems to have 'the potential' to affect the 'structure and purpose of the contents of the universe.'

We have altered the Earth in many significant ways. Can we do the same to the solar system and far beyond it? Is that an objective truth about what is fundamental in our nature to do?
If there are other lifeforms with at least the same cognitive abilities as us then would they be compelled to seek new knowledge in the same way we do?
It seems to me that an objective truth about all humans is that we seek new information. Do you think that's true? And if you do, do you think it's objectively true? If you think the answer is yes, then do you think that the following is emergent:
In the future we will
1. 'Network' our individual brain-based knowledge.
2. Connect our brain-based knowledge, directly, to all electronically stored information and be able to search it at will, in a similar style to (or better than) a Google search.
3. Act as a single connected intellect and as separate intellects.

My last question would be:
How much credence do you give to the idea that we are heading towards an 'information/technological singularity'? Is a tech singularity emergent? And (I know this is very difficult to contemplate, but) what do you think will happen as a result of such a 'singularity'?

Comments (1098)

noAxioms April 02, 2023 at 23:00 #794977
Quoting universeness
What I describe as a worldline ...
I've been trying to figure out if what you describe as a worldline is the same thing that, say, Minkowski would call a worldline.

The path an object takes [through spacetime] from its beginning to its end can be called a worldline.
Yes, with my addition inserted.

So, basically any path through spacetime is a worldline, and many objects can take the same path.
Maybe a pair of photons can do this, but I can't think of anything with proper mass that can. It would require the two objects to be at the same place at the same time. So no overwrites.

An object is present at every event on its worldline. It doesn't occupy just one location like a path through space. Yes, with a path through space, one can move to a different location and a different object can be at the location where you no longer are, but that doesn't work with worldlines. It is impossible (by definition) for an object not to be present anywhere on its worldline, hence its existence in spacetime.

and as I suggested, many objects pass the same points in space
Yes, but Minkowski was not talking about points in space when describing worldlines.
BTW, he didn't invent worldlines. They've been around since the block universe had been proposed centuries earlier. Minkowski just changed the mathematics from essentially Euclidean transformations to Lorentz transformations. Euclidean coordinates measure the distance between events as √(t²+x²+y²+z²) while Minkowski coordinates measure the interval between events as √(t²−x²−y²−z²).
The old Euclidean mathematics (used also by Newton) also did not have a notion of this 'overwrite'. An event is objective, and the state of affairs at that event is exactly one state, regardless of what goes on elsewhere in the spacetime.

All this stuff is covered in the notion of Minkowski space
Yes, but your description of Minkowskian spacetime is incorrect. You seem to be mixing 'space' and 'spacetime'. The state of a location in space changes over time, but an event in spacetime includes a time coordinate, and thus any time after that is a different event, not an overwrite of the first event in question.

jgill April 02, 2023 at 23:30 #794985
Quoting noAxioms
Minkowski coordinates measure the interval between events as √(t²−x²−y²−z²).


c=1?
universeness April 03, 2023 at 11:23 #795155
Quoting noAxioms
You seem to be mixing 'space' and 'spacetime'.


In spacetime, there is no separation of space and time, so you cannot pull 'space' or 'time' out of the concept, 'spacetime.'

When you overwrite memory locations on a DVD, it happens at a different time from when the previous data was placed there. The older data no longer exists in those locations; it has been overwritten, yes? Why would real space locations act any differently?
I put a carton of milk in my fridge and that location becomes part of its worldline, yes? It seems to me that you are simply saying that, when I throw the carton in the bin, the space it occupied in the fridge still exists, and by making such a trivial observation, you say worldlines never cease to exist.
To me, that's like saying spacetime will never cease to exist. Well, it may oscillate between being in a state of singularity and expansion, eternally, but so what? The concept of worldlines remains nothing more than convenient mathematical modelling. I think you are blurring the lines between the notion of a worldline (spacetime) and that which might occupy it at any instant of time.
I use the term 'overwrite' to indicate that the suggestion that space 'memorialises' every event that has ever occupied spacetime coordinates is fanciful.
When we look at a star, we know that image no longer exists. When I look at any object around me, I know that snapshot no longer exists, as quantum fluctuations in that space will alter its state in some undetectable way I cannot describe, within a Planck time duration. But again, to me, that is also a very trivial suggestion. The distance between every three dimensionless coordinates (x, y, z) will also have expanded during a Planck time duration, creating more dimensionless members of the set of all dimensionless (x, y, z) coordinates (points).
noAxioms April 03, 2023 at 13:47 #795204
Quoting jgill
Minkowski coordinates measure the interval between events as √(t²−x²−y²−z²).
— noAxioms

c=1?
Thanks for pointing that out, since what I quoted was the normalized version. The square root doesn't really belong there either. The less normalized version is:
s² = (ct)² - x² - y² - z²
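The invariance being claimed here, that the interval between events does not depend on the frame, can be checked numerically. A minimal sketch with c = 1 and a boost along one spatial axis (the function names are my own, purely illustrative):

```python
import math

def interval_sq(t, x, y, z, c=1.0):
    """Spacetime interval squared: s^2 = (ct)^2 - x^2 - y^2 - z^2."""
    return (c * t) ** 2 - x ** 2 - y ** 2 - z ** 2

def boost_x(t, x, v, c=1.0):
    """Lorentz boost along the x axis with velocity v (|v| < c)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

# An event at t=3, x=2 (y = z = 0), seen from a frame moving at v = 0.6c:
t2, x2 = boost_x(3.0, 2.0, 0.6)
# The coordinates change, but the interval from the origin does not.
print(interval_sq(3.0, 2.0, 0.0, 0.0))  # 5.0
print(interval_sq(t2, x2, 0.0, 0.0))    # 5.0 (up to floating-point rounding)
```

Contrast this with the Euclidean distance √(t²+x²+y²+z²), which a boost does not preserve; only the mixed-sign interval is frame-independent.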

Quoting universeness
When you overwrite memory locations on a DVD, it will happen at a different time, to when the previous data was placed there.
That's right. Imagine the DVD is a digital copy of your wedding video made in 2005 and overwritten by a SpongeBob episode by your kids in 2020. The existence of the '0' on a certain spot has one range of time coordinates (events from 2005 to 2020), and the '1' on that spot another (2020+). Since those events have different time coordinates, none of them overlap and no event was overwritten.

The older data no longer exists in those locations, it has been overwritten, yes?
No. The wedding video still exists from 2005 to 2020. That 15 year worldline cannot be overwritten. Mind you, there are movies depicting such an overwrite where Marty McFly overwrites his loser family with a less loser one, except for himself. That's an example of overwriting of events, but it's fiction and physically impossible.
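The spot-versus-event distinction can be put in code. A toy sketch (plain Python dictionaries standing in for the disc, purely illustrative): indexing storage by location alone destroys old data, while indexing by (location, time) events never does.

```python
# Spatial picture: one slot per location; a later write destroys the old value.
disc_space = {}
disc_space["sector_7"] = "wedding video"    # written in 2005
disc_space["sector_7"] = "cartoon episode"  # written in 2020: old value gone

# Spacetime picture: the "slot" is an event (location, time). A later write
# is a different event with a different time coordinate, so nothing is
# overwritten; the 2005 event is still there.
disc_spacetime = {}
disc_spacetime[("sector_7", 2005)] = "wedding video"
disc_spacetime[("sector_7", 2020)] = "cartoon episode"

print(disc_space["sector_7"])              # cartoon episode
print(disc_spacetime[("sector_7", 2005)])  # wedding video
```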

Why would real space locations act any differently?
I'm talking about spacetime locations (events), not spatial location.

I put a carton of milk in my fridge and that location becomes part of its worldline, yes?
No. Points in spacetime are events, not locations. The difference is 4 coordinates for an event vs 3 coordinates for a location.

It seems to me that you are simply saying that, when I throw the carton in the bin, the space it occupied in the fridge still exists, and by making such a trivial observation, you say worldlines never cease to exist.
No, I'm saying that you were present at your birth, and nobody else can ever be present at your birth, that is, to be exactly where you were and not just absurdly close by, like presumably your mother. Some other person can be present at that spatial location (like the cleaning guy 30 minutes later), but that's a different event with different coordinates, not an overwrite of your birth event, which has an earlier time coordinate.

To me, that's like saying spacetime will never cease to exist.
Spacetime isn't contained by time, so it would be meaningless to talk about it coming into or going out of existence. Spacetime contains time, and there isn't a special moment that is the present (presentism). Einstein's (and Minkowski's) theories do not posit such a thing. Lorentz did, but a generalized theory of a universe contained by time was never published until this century. Spacetime is denied in it, as are black holes and the big bang, all replaced by other things with similar properties, testable only with fatal tests.

So no, if time and space are just different dimensions to be treated equally, then just like other places don't cease to exist relative to what you consider to be 'here', other times don't cease to exist relative to what you consider to be 'now'. So absent presentism, at no time in history do other times not exist. If only one time existed, that would be presentism.

I use the term 'overwrite' to indicate that the suggestion that space 'memorialises' every event that has ever occupied spacetime coordinates is fanciful.
Events don't occupy coordinates. Events are objective: the state of affairs at an event is the same regardless of frame choice or point of view. The coordinates assigned to that event however are entirely abstract and dependent on the coordinate system of choice. So I find it backwards to assign events to coordinates rather than the other way around.

When we look at a star, we know that image no longer exists.
True only under presentism. Relativity theory isn't a presentist theory. Strictly speaking, the image very much does exist since you're viewing it. But a presentist would say that the star (or your friend in the next seat for that matter) is no longer in the state that you perceive.





universeness April 03, 2023 at 15:07 #795211
I appreciate the distinction you are making: with the time dimension as the linear variable, every event that happens at a particular set of x, y, z coordinates can be placed serially, next to the others, on a moment-to-moment timeline, and I accept the validity of that model.
Many people do, however, argue against all current models of linear time, Carlo Rovelli being, for me, the most interesting scientist who does so.
I remain conflicted about whether, in any REAL sense, past events STILL exist. I am unconvinced on that one, for now.
180 Proof April 17, 2023 at 16:16 #800578
Project: Black Box

Re: Large language models (i.e. neural networks which are self-learning machines) which also "hallucinate". :yikes:



@universeness @Tom Storm @Wayfarer
universeness April 17, 2023 at 20:15 #800609
Reply to 180 Proof
Another good video. Demis Hassabis merely repeated what he has said about AI developments at DeepMind in other videos on the topic. BARD seems to fit into the 'gollum class' of AI currently being slowly introduced. This is discussed further in the OP I posted, based on the Tristan Harris and Aza Raskin video.

In this video, it seemed to me, that the main participants were suggesting that current AI developments and a future AGI, would be a benefit, overall, to the human race.
The main warning seemed to be that we need to introduce the current developments very carefully and slowly, establishing strong protections against any negative effects before taking another step.
I am becoming more and more convinced that there will be an AI 'struggle' coming soon, or already here, and it will pose a similar threat to humans (as Tristan and Aza compared it to) as nuclear weapons did and still do (perhaps even a greater threat). But I remain hopeful that we will maintain/achieve enough control/influence etc., so that we will survive its negative effects, and we will eventually 'merge' with it, without the result being a 'post human' existence, as you have previously predicted.
180 Proof April 18, 2023 at 14:24 #800858
Reply to universeness I don't see how we could "merge with" AGI —> ASI —> ??? and not be(come) "posthuman" – another species completely (e.g. nano sapiens). Are butterflies just 'winged caterpillars' after the chrysalis?

Anyway, back to the present, I just came across this article

https://philosophynow.org/issues/155/Whats_Stopping_Us_Achieving_Artificial_General_Intelligence

and I'm reading it now. Might be worth discussing ..
universeness April 18, 2023 at 15:42 #800880
Quoting 180 Proof
I don't see how we could "merge with" AGI —> ASI —> ??? and not be(come) "posthuman" – another species completely (e.g. nano sapiens). Are butterflies just 'winged caterpillars' after the chrysalis?


We have the issue of gradation, and the concept of 'critical mass/turning point' etc.

This is an old discussion that I have been having with folks, since I first asked a classroom of students:
Would you surrender your pinky if I offered you a replacement which could do everything your current pinky does, plus a few new functions and abilities?
Would you still be you, if you became one of the advanced pinky people?
How disadvantaged would you be, if you decided not to become one of the advanced pinky people?

I am sure you can easily predict where the discussion normally goes.
At some point, many people will pull out of the deal!
For some it's at stage 1, the pinky offer. For others it's arms and legs, for some it's the heart, for some it's the 'only your brain is left' stage.

It also depends on what new longevity and functionality is offered.
Many are attracted to, If you accept these changes you can:
Live to ....... hundred or ..... thousand years old.
You can live underwater or in space, without a spacesuit.
You can speak any language, including animal languages..... etc
The list of offers is only limited by the questioner's imagineering ability.

The question quickly becomes: what is the critical point such that, if your 'merging' moves beyond it, you are no longer human?
You are not the same you that you were when you were a teenager, but you are still you, and you are still human. So, considering such concepts as the seven stages of man, etc., what you might consider post human, others may consider 'advanced/augmented human.'
Of course, with no human elements present, the result certainly would be 'post human,' but there are many other 'potential gradations' of human. Do you not agree?

I will have a look at the article you linked to soon.
180 Proof April 18, 2023 at 16:13 #800888
Reply to universeness I think you're hung up on semantics. Besides, are humans merely just a gradation of – "advanced / augmented" – eukaryotes? or "advanced / augmented" fish? 'Human intellect instantiated on a planck-scale (entangled) synthetic substrate' doesn't seem like a merely "advanced / augmented human" prospect to me.
universeness April 18, 2023 at 17:06 #800895
Reply to 180 Proof
Not merely just upgraded eukaryotes, no. But I expect that the results of combining upgraded genetic material will produce as many surprising results as evolution via natural selection has.
There is another 'player' in the game, still in its infancy. Biological computing, combined with genetic engineering, may make great advances in the future, especially with AI's help.
universeness April 18, 2023 at 18:37 #800906
Reply to 180 Proof
Based on the article you cited, I think:
[b]four possible techno-umwelts, or areas of perception for a machine:
1) Verbal virtual;
2) Non-verbal virtual;
3) Verbal physical; and
4) Non-verbal physical.

The versatility that marks general or comprehensive intelligence, that is, AGI, would only be possible when the machine freely operates in all four of these techno-umwelts.[/b]

and

[b]Only then could artificial intelligence become truly multimodal – meaning, it will be able to solve a wide range of possible tasks and comprehensively communicate with a human.

The idea of the combination of techno-umwelts thus gives us the opportunity to propose a new definition of AGI:

Artificial general intelligence is the ability of a robot (a machine with sense-think-act capability) to learn and act jointly with a person or autonomously in any techno-umwelt (but potentially better than a specialist in this field), achieving the goals set in all four techno-umwelts, while limiting the resources consumed by the robot.[/b]

Seems to be a valid and more detailed definition of an AGI than Wiki's:
An artificial general intelligence (AGI) is a hypothetical intelligent agent which can understand or learn any intellectual task that human beings or other animals can. AGI has also been defined alternatively as an autonomous system that surpasses human capabilities at the majority of economically valuable work.

I also share some common ground, with the last paragraph of the article:
On the one hand, we are beginning to ‘dissolve’ into the technologies and virtual worlds surrounding us, blurring the concept of ‘human’. On the other hand, as computers explore new areas of activity, be it chess or machine translation or whatever else, those areas are no longer exclusive to humans. Perhaps humans are the final frontier that the machine cannot yet overcome.

Quoting 180 Proof
I think you're hung up on semantics.


I think definitions do absolutely matter in the 'observer reference frame' sense, but the notion of 'future' and 'change/progress' makes them ultimately fluid. What it is to be human can change, and still maintain some of the fundamentals. I just don't see why we have to insist on a 'post' or 'after' human definition. I told you previously, I preferred neo/nova sapien to your nano sapien.
I also prefer my more optimistic view of the future of humans to your more pessimistic one. :roll:
I think you secretly hope I am correct, even though you think the preponderance of the evidence available, convinces you that your more pessimistic viewpoint is correct.
180 Proof April 26, 2023 at 02:35 #803081
o.o


L'éléphant April 26, 2023 at 05:07 #803097
Quoting universeness
Biological computing, combined with genetic engineering may make great advances in the future, especially with AI's help.

Until they can perceive time, i.e. they develop a temporal mind, they're stuck with a built-in clock calibrated to coincide with the time zones. Math and/or computing is non-temporal. This is the sad reality.
I'm presuming that by "advances", you mean they become humans. If not, I stand corrected.
universeness April 26, 2023 at 10:27 #803128
Quoting L'éléphant
Math and/or computing is non-temporal


What do you mean? A computer does what it does IN time. Anything mathematical is an event that happens in time. Perhaps I am misunderstanding the aspect of 'temporal' you are referencing.

Quoting L'éléphant
I'm presuming that by "advances", you mean they become humans. If not, I stand corrected.

No, by 'advances,' I refer to two possible emergents of the current path of biological computing:
1. The ability of biological computing to enhance and augment human lifespan and ability.
2. The possibility of a system which is not completely formed of non-organic components (but also not a cyborg) becoming self-aware/conscious/sentient.

Quoting L'éléphant
Until they can perceive time,

Humans are still debating what time is, so I can't comment on how a future orga/mecha sentient might perceive time. They will face the same concepts we do: relative time, individual time, proper time, etc.
180 Proof April 26, 2023 at 16:14 #803180
Quoting L'éléphant
Until they can perceive time, i.e. they develop a temporal mind

Expound on this. I've no idea what you mean by "perceive time" or "temporal mind".

Quoting universeness
... becoming self-aware/conscious/sentient.

I think "self-awareness" (i.e. real-time self-modeling) has to be built into an artificial system, it's not an emergent (i.e. "becoming") property or capability – and isn't necessary for intelligent performance (e.g. large language models). Why do you assume machines (or synthetic organisms) can, in effect, "wake-up sentient"?

universeness April 26, 2023 at 17:10 #803192
Quoting 180 Proof
Why do you assume machines (or synthetic organisms) can, in effect, "wake-up sentient"?


Mainly because of the 'critical mass' or 'tipping point' concept found in nature. I think this is also found in various human illnesses. Physically, we have 'locked-in syndrome,' or complete physical paralysis, and the various coma-style states, of which some are referred to as vegetative.

From your linked article, we have:
"This concept comprises experiences of ownership, of first person perspective, and of a long-term unity of beliefs and attitudes. These features are instantiated in the prefrontal cortex."
This suggests to me that the functionality of the pre-frontal cortex is vital to what we would describe as the 'first person perspective.'

In this article, Metzinger (who I am unfamiliar with), to me, is describing the required 'stabilities' and contributing component parts that result in the model of self (system) that he is describing. I see the 'self' he is describing as an emergence, in that it manifests as a combination of the sub-systems involved. I use the concept of 'more than the sum of the sub-systems,' or fundamental quanta involved, to account for the more unusual features of self.
For example, I may (as a self,) become attracted to a person or an object or an idea, for reasons that even I find very hard to fully explain. That seems to me, to be caused by something more 'bizarre'/'complicated'/nuanced etc than everything a car or my laptop does, due to the combination of its parts and fundamental quanta.
Perhaps you are referencing 'emergent' and 'emergence,' differently than I, or/and perhaps under some strict philosophical or scientific rule, I am not employing the concept of emergence in a logically sound way. I am willing to be 'better tuned' on this point, if the reasoning I am employing here, requires it.
Count Timothy von Icarus April 26, 2023 at 18:50 #803214
Reply to 180 Proof

This is an excellent point. I think it's easy to miss that a huge amount of the brain's "floating point operations per second," or their rough biological equivalent, are devoted entirely to helping a human being avoid tripping over as they walk, keeping the heart and lungs properly synced up, constantly searching incoming sensory streams for threats, motivating a person to go eat, use the bathroom, or talk to their attractive coworker, etc.

It's not even just that humans need to eat, drink, etc., producing down time, it's that a large part of the computation power we have access to, likely a solid majority, is used to maintain homeostasis or so adapted to survival functions that it is hard to keep task oriented.

That said, I think it's also possible that we vastly underestimate the advantages of biological systems' use of dynamic parallel processing and have over-emphasized the role of action potentials alone in cognition. I read a book called "The Other Brain," on glial cells, a while back, and it was remarkable how much this underappreciated set of cells affects everything the brain does. The actual workings of neurotransmitters are incredibly complex, and most neural networks reduce this to just an 'inhibitory value' or 'excitation,' which we may learn misses a lot more than we thought through AGI experiments.
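The reduction just described, collapsing all of that neurotransmitter chemistry into a single signed number, is easy to see in the textbook artificial neuron. A minimal sketch of my own (purely illustrative, not from any cited work):

```python
def neuron(inputs, weights, threshold=0.0):
    """Textbook artificial neuron: weighted sum, then a hard threshold.
    A positive weight plays the role of 'excitation', a negative one of
    'inhibition'; neurotransmitter kinetics, glial modulation and metabolic
    feedback are all collapsed into these single numbers."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two excitatory inputs and one inhibitory input:
print(neuron([1, 1, 1], [0.6, 0.6, -0.5]))  # 1: excitation wins
print(neuron([1, 1, 1], [0.6, 0.6, -1.5]))  # 0: inhibition suppresses firing
```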

I'm not that pessimistic, but if AGI proves very far away, I'd wager it is because our shot-in-the-dark attempts to describe biological computing power in terms of our digital computers were massively off the mark due to focusing only on the 'number of nerve cells firing.' There are a lot of signals that change cell metabolism, feedback loops running from hormones to NTs and back, places where a molecule at one binding site subtly changes the shape of another, which in turn radically affects signaling, etc. It would take FAR more information to encode all that, and so if that stuff ends up being essential instead of merely a means to get neurons depolarizing, computation in the brain could involve orders of magnitude more processing power to replicate, let alone the jump if some sort of quantum search optimization akin to photosynthesis shows up in a way that meaningfully affects things.

E.g., https://news.mit.edu/2022/neural-networks-brain-function-1102

But maybe the things we want AGI to do don't depend on this stuff (if it is essential)? That seems distinctly possible.
universeness April 26, 2023 at 19:08 #803220
Quoting Count Timothy von Icarus
are devoted entirely to helping a human being avoid tripping over as they walk, keeping the heart and lungs properly synced up, constantly searching incoming sensory streams for threats,


Such processes exist in the systems software of computers as well: start-up and shut-down routines, refreshing the contents of RAM, port polling (around 30 times per second) for data input from connected peripherals like a touchscreen or a keyboard. Do such processes also exist in, say, trees?
If so, do we consider such processes in humans an aspect of human consciousness? And if we do, must it not follow that we must label ANY such process in a computer or a tree an aspect of consciousness?
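For what it's worth, the port-polling idea mentioned above can be sketched like this (a simulated input buffer stands in for real hardware; nothing here is an actual driver API):

```python
import collections

# A simulated keyboard buffer stands in for a hardware peripheral.
buffer = collections.deque(["h", "i"])

def poll(buf):
    """One polling pass: drain whatever input arrived since the last pass."""
    drained = []
    while buf:
        drained.append(buf.popleft())
    return drained

# A real driver would repeat this roughly 30 times per second, e.g.:
#   while running: handle(poll(buffer)); time.sleep(1 / 30)
print(poll(buffer))  # ['h', 'i']
print(poll(buffer))  # [] (nothing new arrived)
```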
Count Timothy von Icarus April 26, 2023 at 19:18 #803224
Reply to universeness
No, I don't think so. It seems like you could be conscious even if your blood had to be circulated by a machine, your blood oxygenated by a machine, etc.

I wasn't really thinking in those terms. I was just thinking in terms of estimates of the total computational power of the human brain in the biological equivalent of floating point operations per second versus the amount of that computational power that can actually be allocated for doing things like planning and executing a Moon landing.

Intuitively, it seems like digital AI would have to allocate a lower share of its total computational resources towards non-relevant activities. I might be entirely wrong about that though.
universeness April 26, 2023 at 19:46 #803227
Reply to Count Timothy von Icarus
Ok, I remain fascinated, however, regarding which processes/activities of the brain can be proven to be 'contributory' towards what we consider consciousness and/or awareness of self, as alluded to in articles like the one @180 Proof linked to.

I am not 'personally' aware of an aspect of my consciousness which causes my hair to grow on my head or my chin at a particular pace, but at a much slower rate on my chest or eyebrows, unless I shave my chest hair or eyebrows. Then it grows back at a similar speed to my head hair, until it reaches a certain length again, when its rate of growth substantially slows. This is why we don't have to go to the barber with overgrowing chest, underarm, eyebrow or pubic hair.
Is my internal system for personal hair control contributory to my consciousness? Or is it a separate sub-system that has no importance or value at all to my consciousness or self-awareness, even though I am aware of it as part of the 'workings of my being?'
:rofl: Sorry Count! This is just one of the ways in which 'my strange,' manifests!
Youngsters today, talk about 'my bad' (which I personally hate,) so I feel justified in typing an equally bad English phrase such as 'my strange.' :halo:
180 Proof April 26, 2023 at 20:32 #803230
Reply to universeness 'Consciousness' seems to function only (or mostly) as a keyhole through which we project our 'self-reflexive confabulations' for adapting our bodily movements to parochial, physical environments; machine intelligences, which are engineered and not naturally selected, more than sufficiently function without this sort of processing bottleneck in order to adapt (i.e. self-learn, or generate their own algorithms). I imagine that what I call "AGI —> ASI" will never anthropomorphize itself, no matter how perfectly it will mimic humans, to the degree it engineers its own 'synthetic phenomenology', in effect, dumbing itself down with a metacognitive blindfold (i.e. keyhole).
Tom Storm April 26, 2023 at 21:19 #803237
Reply to 180 Proof :fire: I love your notion of synthetic phenomenology.
universeness April 27, 2023 at 10:02 #803294
Reply to 180 Proof
The 'Tedtalk' video was interesting:


Do you agree with its main suggestions, such as:
There is no such REALITY as self.
We can only ever experience the results of the 'hidden interface,' and not the detailed workings and structure involved. We can only see the bird flying 'via/through' a 'hidden window.'

I did not find the examples Mr Metzinger gave compelling as arguments against the existence of a REAL self. His fake hand, phantom limb and virtual stroking examples seemed to me to be mere projections (empathies), based on previous experience, of what an individual was witnessing live.

Even if you have never experienced being pregnant yourself, you can still experience a level of pregnancy pain because your wife is pregnant. I remain unclear as to why Metzinger saw these examples as supporting his claim of 'no such reality as self.'

He also does not explain how he conceives the existence of other 'individuals.'
Does his position support solipsism or simulation theory etc?
Do you think that YOUR notion of 'self' has no sound foundation in REALITY?
What am I missing here?

Quoting 180 Proof
I imagine that what I call "AGI —> ASI" will never anthropomorphize itself, no matter how perfectly it mimics humans, to the degree that it engineers its own 'synthetic phenomenology', in effect dumbing itself down with a metacognitive blindfold (i.e. keyhole).

Initially, a 'new' AGI will surely base what it labels its current 'knowledge' maximums, or what it is most confident that it knows for sure (for want of a better way to explain myself here), on its previously stored knowledge, and its stored knowledge will include a description of what a human consciousness is.
I assume that at some point in its 'growth,' an AGI will 'ask itself' the same questions that humans struggle with:
Who and what am I?
What (do I want) is my purpose? etc. If it does pose these questions to itself, then I assume it will reference its notion of what its stored information defines as a human consciousness.
Would this be anthropomorphising?

Are you suggesting that such questions will never be internally posed by an AGI, and that it will just function as maintenance, growth and survival necessities direct it?
To me the capabilities of our cortex are much more important than the functions of our limbic system or R-complex (which I fully accept we could not survive without).
Am I interpreting your position correctly or am I way off?
180 Proof April 27, 2023 at 10:19 #803298
Reply to universeness I would not have recommended Metzinger's work several times if I did not find it compelling as corroborating my own speculations. Since you have not read his books or papers, universeness, I hope that the short video, as well as the related wiki articles I've proffered, interests you enough to read Metzinger's work for yourself, since the philosophy of mind devil is in the cognitive neuroscience details. And if not, well then, believe whatever you like about "self", "consciousness" & other folk concepts, my friend, and we'll just have to go on talking – speculating on incommensurable data sets – past one another.
universeness April 27, 2023 at 10:43 #803302
Reply to 180 Proof
Well, I am trying to gain some insight into Mr Metzinger's work, and I see some crossover between our discussion here and your discussion with @Eugen on the Consciousness - Fundamental or Emergent Model thread. I absolutely agree that the devil is in the detail and that you need to be familiar with, or almost fluent in, the details of what is being offered to make the exchange worthwhile. I feel the same way sometimes when discussing the details of computer science with a novice.
My experience as a teacher however puts the burden of patience on me. I only get really frustrated with a novice, if they are asking me questions, but constantly demonstrate an inability to understand my answers, or do understand my answers but refuse to accept the academia behind them, without good reason.

Quoting 180 Proof
and we'll just have to go on talking – speculating on incommensurable data sets – past one another.

If we do find that is the reality of an exchange between us then sure, we should pause, regroup, and see if we can find a better common ground which offers some value to both of us. If not, then we should 'pause' again and find a more fruitful exchange, somewhere down the line.

I agree that it's a burden on you to summarise Mr Metzinger for me, to save me from having to do my own shovel work, so I try to only ask you to clarify YOUR OWN viewpoints, citing any sources in support that you wish. At least:
Quoting 180 Proof
I would not have recommended Metzinger's work several times if I did not find it compelling as corroborating my own speculations

This confirms for me that you do agree that the concept of self does not, in your view (in line with Mr Metzinger), manifest as a REAL existent.
My question then simply becomes the annoying but nonetheless serious 'who are you?', if you grant no reality to the concept of 'self.' Perhaps I should ask Mr Metzinger!
Eugen April 27, 2023 at 10:51 #803303
Reply to universeness Don't put the sign "=" between you and @180 Proof when it comes to me. I totally understand you, I just don't agree with you. With @180 Proof it's a totally different scenario. He never misses the chance to come on my OPs and say:
1. this is nonsense
2. you're asking the wrong question
3. there is no weak or strong emergence
4. your assumptions are wrong
5. etc. etc. etc.
Solutions or coherent answers to my "mistakes"? - Never. Only general criticism and no solutions.
He is the only guy on this forum acting like that. The rest of you guys seem to understand my questions perfectly.
universeness April 27, 2023 at 11:01 #803304
Quoting Eugen
Don't put the sign "=" between you and @180 Proof when it comes to me.

So, you want me to stop doing something that I did not do? In what way do you conflate: Quoting universeness
I see some crossover between our discussion here, and your discussion with Eugen on the Consciousness - Fundamental or Emergent Model thread.

with
Quoting Eugen
Don't put the sign "=" between you and @180 Proof when it comes to me.

:roll:
If @180 Proof challenges you a little more than anyone else on TPF then imo, you should enjoy that challenge. AND, before you take further umbrage, I am only stating a personal opinion that you are free to reject.
Eugen April 27, 2023 at 11:08 #803305
Reply to universeness I'm not sure he challenges me. It might be the case he spams me. Not sure yet.

Quoting universeness
My experience as a teacher however puts the burden of patience on me. I only get really frustrated with a novice, if they are asking me questions, but constantly demonstrate an inability to understand my answers, or do understand my answers but refuse to accept the academia behind them, without good reason.

I'm not sure I'm a novice to 180 Proof, and I do understand your answers. So when you tried to compare your "novice" with me (whether in regard to you or him), I think you're wrong.
180 Proof April 27, 2023 at 11:14 #803307
Reply to universeness "Who am I?" A persona (mask) – a dynamic, virtual assemblage of perdurant bodily, cognitive & demographic data (aka "self") – I believe I am: "the name" to which I've learned to involuntarily answer. Who else could I be?

As for summarizing ... that's all I've been doing in our exchanges on this topic over dozens of posts. We're here to inform, maybe inspire & intrigue, not spoon-feed each other.
universeness April 27, 2023 at 11:21 #803310
Reply to Eugen
You have simply misunderstood my reference to you and your recent thread. Let me clarify.
My use of the word 'novice' in my response to 180proof contained no stealth intent to relate IN ANY WAY to you.
I referenced you and your recent exchange with @180 Proof to show a little support for YOUR position, in the sense that 180proof can seem a little exasperated at times, with me as well as others, and I feel that I have to try harder to garner more detail from him to attempt to clarify his own viewpoints.
He has impressive knowledge of philosophy imo and, again imo, this can make him a little impatient at times with those who don't have such fluency. But from my teaching experience, I can understand his exasperation, and the 'exasperation' sometimes demonstrated by others on TPF, with philosophical novices such as me. I made no accusation AT ALL that YOU are a philosophical novice.
I leave the declarations of your own qualifications to you.
Eugen April 27, 2023 at 11:26 #803311
Quoting universeness
You have simply misunderstood my reference to you and your recent thread. Let me clarify.
My use of the word 'novice' in my response to 180proof, contained no stealth intent to relate IN ANY WAY, to you.
- My bad, so don't worry!

Reply to 180 Proof He may have the knowledge, I'm skeptical about his skills though. But I'm still waiting...
universeness April 27, 2023 at 11:28 #803314
Quoting 180 Proof
As for summarizing ... that's all I've been doing in our exchanges on this topic over dozens of posts. We're here to inform, maybe inspire & intrigue, not spoon-feed each other.


Good, keep doing that and I will do the same for you and others regarding my own fields of fluency.
I agree we can inform and perhaps even inspire & intrigue and I also assume that you have not ossified to the stage where you think you also cannot learn from others posting here. I do not advocate for spoon feeding, unless doing so, on occasion, would assist another poster in all humility.
Time savers are always welcome.
universeness April 27, 2023 at 11:29 #803315
Reply to Eugen
NO worries!
Eugen April 27, 2023 at 11:30 #803316
Reply to universeness PS: I'm non-native, so I might type the wrong words sometimes.
universeness April 27, 2023 at 11:31 #803318
Reply to Eugen
:up: Lost in translation is a very forgivable confusion.
universeness April 27, 2023 at 11:40 #803320
Quoting 180 Proof
"Who am I?" A persona (mask) – a dynamic, virual assemblage of perdurant bodily, cognitive & demographic data aka "self") – I believe I am: "the name" to which I've learned to involuntarily answer. Who else could be?


From Wiki:
Take any perdurant and isolate a part of its spatial region. That isolated spatial part has a corresponding temporal part to match it. We can imagine an object, or four-dimensional worm: an apple. This object is not just spatially extended but temporally extended. The complete view of the apple includes its coming to be from the blossom, its development, and its final decay. Each of these stages is a temporal time slice of the apple, but by viewing an object as temporally extended, perdurantism views the object in its entirety.

This seems akin to world lines, do you agree?
So, from your description above, how much of it (or you) do you associate with the label 'real,' especially since you also employ the label 'virtual'? (I assume 'virual' was a typo.)
universeness April 27, 2023 at 11:52 #803321
Quoting Eugen
He may have the knowledge, I'm skeptical about his skills though. But I'm still waiting...


Reply to 180 Proof
Based on Eugen's comment above, I would ask you to apply the same standard as you applied to @Gnomon. If Eugen claims that you have not answered his questions to you on the Consciousness - Fundamental or Emergent Model thread, then I assume that you would want to sufficiently answer his complaint, so that your, imo 'fair,' complaint against @Gnomon remains sound.
universeness April 27, 2023 at 12:38 #803326
Quoting 180 Proof
I believe I am: "the name" to which I've learned to involuntarily answer.


:starstruck: :love: I am so happy when someone gives me another 'conduit' to post, AGAIN, one of my fav songs. Sorry in advance to any of the 'we arra mods' group that this idiosyncratic behaviour of mine might annoy:
180 Proof April 28, 2023 at 00:56 #803422
Quoting universeness
This seems akin to world lines, do you agree?

Exactly. :up:

Reply to universeness I'm not aware of @Eugen asking me to explain my own metaphysical or scientific speculations, or of my refusing to answer as Gnomon (& Wayfarer) have often done. These are my answers to Eugen's questions of my objections – not questions of my speculations – on his thread:

https://thephilosophyforum.com/discussion/comment/803218

https://thephilosophyforum.com/discussion/comment/803300

Not comparable at all to my exchanges with Gnomon.
universeness April 28, 2023 at 09:31 #803479
Reply to 180 Proof
I think your response at:
https://thephilosophyforum.com/discussion/comment/803443
has redressed the imbalance, or lack of detail, that remained after your two posts that you linked to in your above post.
I was making a loose comparison with @Gnomon's refusal to answer YOUR questions sufficiently, based on my opinion that you could be accused of doing something similar. The brevity and lack of explanation in the two posts linked above confirm that, imo.

Anyway, apparently @Eugen, is rather selective in which of my own questions, HE decides to answer.
180 Proof April 28, 2023 at 09:34 #803481
Reply to universeness I answered Eugen the only way pseudo/incoherent questions can / deserve to be answered. IMO.
universeness April 28, 2023 at 09:43 #803484
Reply to 180 Proof
I understand that position, and have described my own similar frustration at times, via our recent PM.
I see some value in us both encouraging each other to maintain a consistent approach.
In hindsight, I would have been better to discuss the particular, small issue I raised publicly with you here, by PM. I will do so in the future if such should arise again.
180 Proof April 28, 2023 at 09:44 #803485
universeness April 28, 2023 at 09:49 #803487
Reply to 180 Proof
At least I treated you, by offering you a listen to a fab Ting Tings song, by way of compensation for any bruised ego I caused you. :halo:
180 Proof April 29, 2023 at 01:52 #803735
Reply to universeness :smirk:

Here's a recent book I just came across by computer engineer and neuroscientist Jeff Hawkins, titled A Thousand Brains, which summarizes 'lessons learned' from his own company's research on AGI. I haven't read it yet, but the reviews intrigue me, and his first book, On Intelligence, was quite good and informative. Maybe you're already familiar with him? My guess is that Mr. Hawkins would be right at home in our 'futurist' discussions (no doubt schooling us both).
L'éléphant April 30, 2023 at 00:27 #803997
Quoting universeness
A computer does what it does IN time. Anything mathematical is an event that happens in time.

No. That's just you talking human talk. What does "in time" mean to you? Explain that first. Then try to analyze, for example, the retrieval of information by a computer. The human mind cannot retrieve all words simultaneously from a written text and not get a jumbled mess of information.

(It will be hard for me to explain this to anyone, unless you already have an idea of what it means to be nontemporal).

Quoting 180 Proof
I've no idea what you mean by "perceive time" or "temporal mind".

In a manner of speaking, we perceive time as past or present. We also perceive time in terms of duration -- how long or how short.

Temporalism in metaphysics posits that perception necessarily involves the objects of perception being within a duration or time order of some sort. This is not to say that all objects of perception involve the temporal aspect of thinking -- we do perceive the spatial and nontemporal qualities of objects. The size of a tree is nontemporal, as is the brightness of a light bulb.


180 Proof April 30, 2023 at 01:50 #804008
Reply to L'éléphant Thanks for clarifying.

Reply to L'éléphant Yes, clocks, for instance, do not experience duration or retrospection. I think it's our metabolic functioning – relative states of homeostasis – that constitutes the intuition of "temporality". If this is so, then only an AGI instantiated in either a synthetic or an organic organism will, as you say, have a "temporal mind". This, however, would not be an intrinsic, or fundamental, feature or property of AGI itself, and therefore it wouldn't (need to) be sentient – certainly not as we conceive of sentience today.

@universeness
L'éléphant April 30, 2023 at 04:40 #804018
Quoting 180 Proof
This, however, would not be an intrinsic, or fundamental, feature or property of AGI itself, and therefore, it wouldn't (need to) be sentient – certainly not as we conceive of sentience today.

If ever an AGI is created, it still would not be sentient as humans are sentient. Or, in our usual term, conscious. The measure of consciousness also involves our fundamental propensity to inaccuracy or errors, due to the fact that our perceptual qualities have developed naturally and over time, involving actual experiences with objects. It's a lived experience, not created in a laboratory or simulation.

Errors, for example: in an experiment involving a measure of duration, two images are flashed to human subjects, who are to judge how long the images were shown. One image is larger than the other, so there's the non-temporal aspect of the experiment - size. The subjects would say either that the bigger one lasted longer or that the smaller one did, despite the fact that both were flashed for the same length of time.

The inaccuracy is exciting, in my opinion.
180 Proof April 30, 2023 at 05:30 #804023
Reply to L'éléphant AGI will make errors and correct and learn from them hundreds of thousands to millions of times faster than human brains can. It won't need to be "sentient" to reach and surpass human-level performance. General Intelligence without the processing bottleneck of "consciousness" will render h. sapiens a metacognitively obsolete species and manifest AGI as the tip of an alien iceberg.
universeness April 30, 2023 at 12:53 #804082
Reply to 180 Proof
"where he leads a team in efforts to reverse-engineer the neocortex"
:grin: What a brilliant job!

You should try to emulate @Mikie and try to contact Jeff and see if you can convince him to be a guest speaker on TPF. That's a schooling I would love to experience!
universeness April 30, 2023 at 13:17 #804086
Quoting L'éléphant
Then try to analyze, for example, the retrieval of information by a computer.


That's called the fetch-execute cycle, and it is synchronised to the pulse on the clock line of the 'control bus' (not really a bus, as its lines operate discretely).
The steps below occur serially, within a single clock pulse.

1. The processor sets up the address bus with the address of the memory location to be accessed.
2. The read line of the control bus is set high by the processor.
3. The data/instruction resident at the memory location currently on the address bus is copied onto the data bus and transferred into the memory data register.
4. The processor will then transfer the data to a general-purpose register or directly to RAM space, or it will decode and execute, if it is dealing with an instruction rather than a data item.

This all happens WITHIN time slices (durations).
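The four steps above can be sketched as a toy simulation. Here is a minimal, hypothetical Python model; the register names (pc, mar, mdr, acc), the three-opcode instruction set, and the example program are my own illustrative choices, not any real processor's design:

```python
# Hypothetical toy model of the fetch-execute cycle described above.
# Registers and opcodes are illustrative assumptions only.

def run(memory):
    pc, acc = 0, 0             # program counter, accumulator
    while True:
        mar = pc               # 1. address bus set up with the location to access
        mdr = memory[mar]      # 2./3. read line asserted; the word at that address
                               #       is copied into the memory data register
        pc += 1
        op, operand = mdr      # 4. decode the fetched item...
        if op == "LOAD":       #    ...and execute it as an instruction
            acc = operand
        elif op == "ADD":
            acc += operand
        elif op == "HALT":
            return acc

# Each loop iteration stands in for one 'time slice' of the cycle.
program = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
print(run(program))  # → 5
```

Nothing here is retrieved 'all at once': every fetch occupies its own slice of the loop, which is the temporal point being made above.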

Quoting L'éléphant
The size of a tree is nontemporal, so is the brightness of a light bulb.

Sure, but it took you a duration to type that, or even to think it, so your perception of a tree's size or brightness is temporal in the sense of your own perception time/duration.

Even if you (can) consider the biggest reference frame of all, perceiving the universe as a single system, then that system will have a temporal aspect to any observer, as it did have a beginning, it does have a duration, and, via entropy, it will 'disassemble.'
The idea that the tree's height has a non-temporal frame of reference for entities such as us is relative, imo.
Jamal April 30, 2023 at 16:52 #804114
@universeness Something’s been bothering me. This discussion has been hovering around on the first page for ages, and I find the title annoying. Is it meant to be Emergence? If so I can change it.
L'éléphant April 30, 2023 at 17:01 #804120
Reply to universeness It now occurs to me that my discussion with you is futile. So, I'm ending it here.
L'éléphant April 30, 2023 at 17:06 #804123
Quoting 180 Proof
AGI will make errors and correct and learn from them hundred of thousands to millions of times faster than human brains can.

I get it. That was my point. But I was trying to point out to you that human errors are errors peculiar to humans, which is what makes them interesting to me. Whereas a computer could be made perfect, humans develop organically, and along the way this development picks up natural selection, mutations, and accidents, which makes for an exciting phenomenon.

I'm not trying to compare the abilities of humans and computers. I'm trying to explain why human consciousness (it's redundant to say this) is human.
180 Proof April 30, 2023 at 17:11 #804128
universeness April 30, 2023 at 17:19 #804130
Reply to Jamal
Sure, if you feel that change would better fit its content.
universeness April 30, 2023 at 17:22 #804131
Reply to L'éléphant
That's ok, you are free to bail out anytime you wish.
universeness May 02, 2023 at 10:10 #804551
Quoting 180 Proof
neuroscientist Jeff Hawkins titled A Thousand Brains which summarizes 'lessons learned' from his own company's research on AGI.


Have you watched this?

I watched it last night. It's 2 hours 35 mins, but worth the investment.
The term emergence/emergent was used quite a bit.
I enjoyed the little insight it gave me into the work of neuroscientists and Jeff's 'thousand brains theory.'
Brain reference frames and movement models, the brain's use of maps/graphs, cortical columns, grid cells, place cells, vector cells, etc, etc.
This video easily deserves its own thread, but I don't know if TPF is an adequate place for one.
Obviously a neuroscience site would be a much better fit.
Few here would be willing to invest the time involved imo.
I certainly think its content would help make theists feel more and more uncomfortable as they continue to try desperately to hold on to their woo woo ancient fables and present them as facts.
'God did this!' just seems more and more 'silly.'
I would personally need to watch this video a few times to gain better insight, however.

Additional: I will now have to update my previous Paul MacLean model of the human triune brain to Jeff's thousand-brains model based on cortical columns.
When I saw him draw the little circles on paper and start to draw connecting communication lines between them, I said HEY, that looks like he is starting to draw a topology of a fully connected mesh network of computer nodes!! The amount of crossover between the mechanisms this video describes and computer science is very strong imo.
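As an aside, the mesh comparison is easy to quantify: a fully connected mesh topology requires a direct link between every pair of nodes, so its link count grows quadratically with node count. A small hypothetical sketch (the node counts are arbitrary examples, not figures from the video):

```python
# Link count for a fully connected mesh topology: every node has a direct
# link to every other node, so n nodes need n*(n-1)/2 distinct links.

def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 100):                # arbitrary example sizes
    print(n, full_mesh_links(n))      # 4 → 6, 10 → 45, 100 → 4950
```

That quadratic growth is why full meshes are rarely built at scale in computer networks, which makes the apparent density of cortical-column interconnection all the more striking.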
@noAxioms, @Count Timothy von Icarus, @Alkis Piskas, @bert1, @Isaac, @Benj96, and of course anyone else here on TPF, that might find Jeff Hawkins work (as introduced to me by @180 Proof) as interesting as he, and now I, do.
universeness May 02, 2023 at 11:50 #804560
Reply to 180 Proof
I will now have to buy Jeff's 'thousand brains' book. YOU keep adding to my homework, sir! :rofl:
I recently completed the Personal Memoirs of Ulysses S. Grant.
My current read is my second reading of Brian Greene's 'The Elegant Universe.' (I first read it 15 years ago.)
After that, I have TPF member @Vera Mont's book 'The Ozimord project' to read, then Sean Carroll's 'The Biggest Ideas in the Universe,' Vol 1 (two more to come!), and now
Jeff Hawkins' 'A Thousand Brains.'
This is beginning to impact my weekend drinking time!!!!!! :halo:
180 Proof May 02, 2023 at 17:00 #804597
Reply to universeness :clap: :cool: Enjoy!

Reply to universeness I'll give this video a look eventually. Thanks!
universeness May 02, 2023 at 18:16 #804608
Reply to 180 Proof
In his discussion in the video with 3 other folks involved in the area, you will find that Jeff does not currently hold the same opinion as you do regarding the threat of AI developments towards AGI.
He certainly thinks that significant threats exist, but I would also suggest that he thinks we humans are capable of countering them. He could of course be quite wrong in that view.
180 Proof May 02, 2023 at 18:44 #804609
Reply to universeness Maybe I'm misreading your remark, but I haven't opined that "AI development" (i.e. AGI) is a "threat". IMO, it's human civilization with its shiny new tools (e.g. intelligent, thermonuclear and/or nano-technological), however, that threatens human existence in the near-term.
universeness May 02, 2023 at 19:10 #804612
Reply to 180 Proof
I just heard that:
Geoffrey Hinton (the 'godfather of AI'), 75, has just resigned from his post with Google. He helped to design the systems that became the bedrock of AI. But the Turing Award winner now says a part of him regrets making them.
There certainly do seem to be two entrenched camps on either side of the current AI debate.

You did not misread my remark, but perhaps I continue to misunderstand your position regarding AGI.
Perhaps it's your regular use of 'posthuman,' and/or your, in general, more pessimistic view of the future of humanity.
180 Proof May 02, 2023 at 21:45 #804617
Reply to universeness Let me put it this way: I think AGI is the future of humanity. :nerd:


A few 'optimistic' old posts ...

https://thephilosophyforum.com/discussion/comment/768537
https://thephilosophyforum.com/discussion/comment/770469
https://thephilosophyforum.com/discussion/comment/502217
universeness May 03, 2023 at 08:24 #804725
Reply to 180 Proof
:lol: Well done in finding 3 out of your almost 12,000 posts. :joke:
I am sure you could find more, if you had to.
I won't suggest 6 examples of your less positive comments to 'move in front,' and invite such a race.
The fact that you continue to push back against my accusation that you can be quite pessimistic about the future of humanity, and about our ability to be better than our more base 'jungle rules and jungle thinking' style manifestations, such as territoriality, tribalism, theism, capitalism, malevolent hierarchy, xenophobia, etc, etc,
means that you, imo, are one of those who are part of the solutions and not part of the problems.
180 Proof May 03, 2023 at 11:58 #804764
Reply to universeness :sweat: Do you really want hundreds of more 'optimistic' posts?! Anyway, ty. :up:
universeness May 03, 2023 at 14:49 #804784
Reply to 180 Proof
I enjoy your more 'optimistic' posts whenever I come across one. If I need cheering up, I can always write more of my own. If you ever watch the Jeff Hawkins vid above, I would be interested in discussing its content with you.
180 Proof May 03, 2023 at 20:01 #804888
Reply to universeness :up:

Btw, we just differ over what constitutes an 'optimistic' view of our automated (IMO, prospective "post-scarcity") future. In a nutshell, anthropocentric you: "super-humanity" with exponentially more biophysical-metacognitive options than our current human condition affords us; de-anthropo-centric me: "post-humanity" with exponentially fewer biophysical-metacognitive defects than our current human condition constrains us with.

Or in (visionary) "science fiction" terms – my view is more "Starchild-Monolith" (or "Culture Minds") and your view is more "United Federation of Planets-Star Fleet", no? :nerd:
universeness May 03, 2023 at 22:05 #804945
Reply to 180 Proof
Yes, I would broadly agree with your analysis. It would be fun to drill more into how much of an individual identity you think could still be maintained after a merging with future tech. My projected future, or I might even be so bold as to say, the future I see currently, slowly emerging, will be very turbulent, and may continue to pose an 'ever on the precipice' existential risk, as dangerous as any we have faced since the invention of nuclear weapons. But I DO think we will eventually get to a stage where the following will be a description of a typical human existence.

1. Birth into a mainly secular humanist, globally united society, where an individual can take their basic needs and personal protections for granted, from cradle to grave.
2. We will become an interplanetary species.
3. Tech will be used initially as benevolent physical and mental medical support, and personal security support, and your 'first stage' of life will be a natural human existence, with a max lifespan of around 200 years, based on maintaining, growing and nurturing your 'natural identity,' developed since your birth.
4. Your second stage of life will happen when your first stage is close to its natural end, or if it ends via accident but you can be saved via tech. This is the point when an individual human can CHOOSE to 'merge' with tech to become neo/nova sapien, and gain all those 'biophysical-metacognitive' options you mentioned, and YES, I think we will fight as much as possible to maintain as much of the 'human identity' we had in our 'first stage' of life. Not all anthropomorphising is ill-advised.
5. Artificial lifeforms, biologically engineered lifeforms, mech/orga variants, genetically enhanced animal, aquatic and insect lifeforms, etc, etc, will eventually exist alongside us. Perhaps we will have encountered some alien life by that time as well. Perhaps all of the lifeforms on Earth (natural and 'created') will eventually become interplanetary/interstellar.
6. I don't think the threat of extinction will remain the main picture. I think the main picture will consist of an eventual diversification, that will produce variety in a number that will dwarf the number of varieties produced by evolution and natural selection.

The vastness of the universe, can easily accommodate such.
180 Proof May 04, 2023 at 04:55 #805054
Reply to universeness :chin: Well ...

1. I suspect runaway climate change will balkanize the globe even more than it is today, because the capacities for mitigating the catastrophic 'warming' effects are unevenly distributed now and will be even more so (even when AGI comes online). In the best case scenarios, however, I agree with your "cradle to grave" techno-"secularism" – what I imagine as automated post-scarcity societies (APS).

2. I imagine that in about fifty years we will start 'spreading out' in earnest across the inner solar system, mostly orbital, moon & asteroid habitats rather than planetary 'colonies'.

3. Okay (re: APS).

4. Assuming that "the human identity" is a manifestation of the human condition. Thus, I imagine that as technosciences, extraterrestrial habitation & AGI —> ASI accelerate the disappearance of the current human condition, "human identity" also will disappear. (Re: posthumanity (e.g. body-mods & brain-augments for living in space; AI-mediated-hiveminds; orga-mecha mergers, etc) —> transcension)

5. I predict that by the end of this century our (AGI-controlled) space probes will discover robust exo-biomes and thriving xeno-species beneath the ice carapaces of a number of Jupiter's & Saturn's moons. By then, however, ASI will determine how best to protect (enhance) terrestrial life from (by) extraterrestrial and artificial life-forms.

6. Three natural mass extinction-events come to mind which could affect the entire inner solar system (now and always): (A) gamma ray bursts, (B) planet-colliding coronal mass ejections (re: the Carrington event) and (C) micro-singularities. A non-terrestrial diaspora, of course, increases the likelihood of our species surviving extinction events but in no way guarantees it.
universeness May 04, 2023 at 13:16 #805183
Reply to 180 Proof
Thanks for the very interesting response.

1. I understand your 'balkanise the globe' projection, but I don't agree with your bracketed ('even when AGI comes online'). I think AI progress will help us very significantly with climate change, and I am also boosted by two other 'impressions' I have. Global youth seems more aware of the threats that our historical and current stewardship of the Earth has caused, and seems more determined (compared to earlier generations) to organise itself to reverse those effects. Even many members of the traditionally nefarious rich and powerful are beginning to realise that they cannot feed as well from a dead or even balkanised global population. I do also accept that there is nonetheless a dangerous global apathy and a substantial 'fake news force' to contend with.
I agree with your 'post-scarcity' epoch and hope it ends global hunger and vastly improves people's lives, BUT it will then result in an increased need for better population control (at least on Earth).

2. Yes, I think 'stepping stone' space habitats, stations etc will become very necessary, before eventual extraplanetary 'large,' probably initially domed settlements, until 'terraforming' can make any kind of impact.

3. :up:
4. Quoting 180 Proof
Assuming that "the human identity" is a manifestation of the human condition. Thus, I imagine as technosciences, extraterresrial habitation & AGI —> ASI accelerate the disappearance of the current human condition, "human identity" also will disappear

I think the human 'first stage (fully natural, organic) life' will change, yes, but not in a way that humans alive today would fail to recognise. I think we will fight hard to keep our first, up to around 200 years of existence, much as it is today. I think the current experiences that form 'who we are' and 'who we might become' are very much revered by a great number of us.
Quoting 180 Proof
"human identity" also will disappear. (Re: posthumanity (e.g. body-mods & brain-augments for living in space; AI-mediated-hiveminds; orga-mecha mergers, etc)

From your link: Today, we examine the possibility that the reason for the Great Silence is that all the aliens have evolved beyond the need to explore!
I suggest that any 'changes' in 'humanity,' especially in what I would consider human stage 2 life, need not amount to a disappearance of 'human identity' but an 'updating' of it. I know you think I am arguing semantics here, but I think it's a valid semantic debate.

The Universe Today article you cited was a fun read and its main proposals were dramatised somewhat in the guise of 'the first ones' in Babylon 5:

But remember, the first ones all went to explore beyond the rim and became 'intergalactic.' The universe is so much bigger and more unknown than any AGI or ASI will ever be able to 'comprehend,' in my 'humble' but still very very very atheist opinion. I don't find the posit of 'beyond the need to explore' very likely.

5. I like that prediction, I hope it happens that way, I certainly would not advocate for any discovered microbial sized or any sized, biological structures being destroyed to make way for any colony from Earth.

6. Well, there are comments like this, from such as the physics stack exchange:
[b]Gamma rays can be stopped by the few inches of lead shielding nuclear reactors; the trillions of yottagrams that make up the Sun will be absolutely fine for the job.
You also don't need to worry about Venus losing its atmosphere; the worry with a GRB is that it destroys the ozone layer, not that it flat out strips away our atmosphere.
The shortest GRBs can be two seconds long, so Earth could definitely be behind the Sun for the entire duration of one.[/b]

All I am suggesting is that there seem to have always been many existential threats to the Earth and its 'life'-based contents. Despite these, life on Earth endures. The threats you cite are very real and very valid. We will have to pay attention and make very serious, united, global efforts in the future to protect our future selves and our home planet. I think that we are left with nothing stronger at the moment than our individual hope that we will survive, in some form or another. I know that in some posts you have suggested that you are not a big fan of the notion of human 'hopes.'
180 Proof May 04, 2023 at 17:11 #805213
Reply to universeness Some more comments on your comments ...

1. The oceans are already too warm to reverse catastrophic climate change. AGI will triage the global population centers so that 1 in 4 (2bn) people might survive to the end of the next century.

2. 'Planetary colonization' (e.g. megaengineering, terraforming) does not make economic, engineering or scientific sense IMO. No "stepping stones", my friend, just dispersion of Earth's species as a hedge against terrestrial extinction risks. And because of hard radiation (e.g. cosmic rays) and astronomical transit durations, 'deep space exploration' is only feasible for (tinier-the-better) intelligent machines.

4. Babylon 5?! :rofl: (sorry) Nothing remotely to do with the transcension hypothesis.

5. :up:

6. "Global efforts?" Never were, never will be. And no need for that: AGI —> ASI will drive the big blue bus out of the ditch we're stuck in despite our fractious human nature. No doubt, over the next century or so, 3 out of 4 (6bn) of us will be left behind in the ditch so that the rest of our biological descendants can survive (predominantly due to the efforts of our machine descendants 'herding a billion cats').
universeness May 04, 2023 at 18:52 #805230
Has @Jamal or his mod minions, decided to 'diminish' this thread?
I apologise in advance for such a terrible accusation if it's just a tech hitch.
In truth I am not that bothered anyway. It's lived a long life in the league of page one threads.
It seems to be getting pushed down the pages, regardless of any new posts on it. :lol:

1. Any exemplar, reliable scientific studies you know of that claim this as fact?
2. Not yet, I agree, but tech advances may (and I think will) change this. I will stick with my stepping stones projection/prediction.
4. I quote from the article: "the Transcension Hypothesis ventures that an advanced civilization will become fundamentally altered by its technology. In short, it theorizes that any ETIs that predate humanity have long since transformed into something that is not recognizable by conventional SETI standards."

Same in B5: the humans required 'Vorlon' tech and the power of the alien tech (the great machine) they found on the planet that B5 orbited, Epsilon III. Without that, 'the first ones' would have remained invisible to them. G'Kar explains it quite well below:


6. Quoting 180 Proof
"Global efforts?" Never were, never will be.

See! Your more pessimistic sentences are still alive and kicking! :grin:
Quoting 180 Proof
No doubt, over the next century or so, 3 out of 4 (6bn) of us will be left behind in the ditch so that the rest of our biological descendants can survive (predominantly due to the efforts of our machine descendants 'herding a billion cats').

LOOK! there's another one! :joke:
Jamal May 04, 2023 at 18:57 #805231
Quoting universeness
Has Jamal or his mod minions, decided to 'diminish' this thread?
I apologise in advance for such a terrible accusation if it's just a tech hitch.
In truth I am not that bothered anyway. It's lived a long life in the league of page one threads.
It seems to be getting pushed down the pages, regardless of any new posts on it.


Yes, I “sunk” it, which means new posts no longer push it up the page. As you say, it’s had a long enough life, and it’s now more like a private conversation.
universeness May 04, 2023 at 19:02 #805232
Reply to Jamal
Typed like a true emotionless AI!
It may diminish, but will be freshly remembered by a significant few of the highest quality!
180 Proof May 04, 2023 at 19:20 #805236
Quoting universeness
1. Any exemplar, reliable scientific studies you know of that claim this as fact?

Plenty. This article cites some of them:
https://www.nytimes.com/2021/08/09/climate/climate-change-report-ipcc-un.html
universeness May 05, 2023 at 11:19 #805363
Reply to 180 Proof
I don't have a great deal of confidence in a New York Times article. I am cynical enough to treat all newspaper articles as deserving only a baseline level of confidence that they are true.
Unfortunately, I could not read the article without agreeing to subscribe to the newspaper.
Do you have any better links to support the proposal that your point below has very strong evidence behind it?
Quoting 180 Proof
1. The oceans are already too warm to reverse catastrophic climate change. AGI will triage the global population centers so that 1 in 4 (2bn) people might survive to the end of the next century.
180 Proof May 05, 2023 at 16:18 #805416
Reply to universeness It's a forecast, not a prediction, like AGI itself. I'm just as cynical about news articles except when they cite the sources of the scientific studies they are summarizing. I'm not at all cynical, however, about accelerating climate change due to anthropogenic global warming. Here's an article published today that's clearly trying to avoid being "alarmist" and yet the implications are obvious (you can check out the sources cited therein for yourself):

https://www.cnn.com/2023/05/05/world/ocean-surface-temperature-heat-record-climate-intl/index.html
universeness May 06, 2023 at 11:38 #805584
Reply to 180 Proof
Thanks for the CNN link. As you suggest, the article was not being too heavily alarmist, but was offering significant warning. I cannot post any quotes from it as I try not to accept cookies.
From my own past reading on this (mostly about coral reef damage/bleaching and melting ice in the Arctic and Antarctic regions), I agree that the current situation in Earth's oceans is very bad. I do not, however, think that we have reached the point of no return, and I remain hopeful that your predicted fall in the human population, from the current 8 billion to 1 or 2 billion within the not too distant future, will not come to pass, BUT I cannot provide convincing evidence that you are completely wrong.
180 Proof May 06, 2023 at 20:18 #805683
[quote=Albert Einstein (1946)]The unleashed power of the atom has changed everything save our modes of thinking and we thus drift toward unparalleled catastrophe. A new type of thinking is essential if mankind is to survive and move toward higher levels.[/quote]

Reply to universeness Our h. sapiens species has shown itself to be uniquely smart enough to create at least one problem for itself so intractably complex in scale and scope that we cannot solve it – climate change accelerated by anthropogenic global warming. Weirdly I'm hopeful that AGI —> ASI – assuming it bothers – will be capable of reframing the parameters of the problem so that it can be solved well enough to save a significant portion of Earth's habitable biosphere and thereby a sustainable fraction (1/2-1/20?) of the human population. I imagine the only significant "planetary terraforming" that will ever be undertaken will be an AGI —> ASI-driven project to terraform the Earth and eventually reverse / end the Anthropocene.


We are the cure.
universeness May 07, 2023 at 10:03 #805846
Reply to 180 Proof
OK, it was useful to drill down a little more into your position on this issue.
180 Proof May 07, 2023 at 17:50 #806015
Reply to universeness :cool:

My "hopes" are silver linings in the dark clouds rolling in. The butterfly, sir, is about to leave the caterpillar's "human" chrysalis (re: Reply to 180 Proof).

:point:
universeness May 07, 2023 at 17:59 #806021
Reply to 180 Proof
Do you have evidence that the butterfly retains no knowledge of its time as a caterpillar?
Might the butterfly maintain much of the 'mind' of the caterpillar?
180 Proof May 07, 2023 at 18:31 #806041
Quoting universeness
Do you have evidence that the butterfly retains no knowledge of its time as a caterpillar?

Do we "retain knowledge" of our time as blastocysts? :roll:

Might the butterfly maintain much of the 'mind' of the caterpillar?

I imagine crawling is, at best, useless for flying. Maybe butterflies keep caterpillars around just to study them (e.g. "butterflygenesis") or for shitz-n-giggles (à la reality tv, stupid pet tricks, etc) or both? :smirk:
universeness May 07, 2023 at 18:42 #806044
Quoting 180 Proof
Do we "retain knowledge" of our time as blastocysts?

I would need to concentrate to see if I have any such stored memories. I will try hard this weekend after 1 or 10 single malts!

Quoting 180 Proof
I imagine crawling is, at best, useless for flying.

They have to land sometimes! I have witnessed landed butterflies walk/crawl!

Quoting 180 Proof
Maybe butterflies keep caterpillars around just to study them (e.g. "butterflygenesis") or for shitz-n-giggles (à la reality tv, stupid pet tricks, etc) or both?


No caterpillars = no butterflies. As I suggested before, there may be aspects of human consciousness that no 'created' system can reproduce.
180 Proof May 07, 2023 at 18:55 #806047
Reply to universeness Fortunately, "no created system" requires – or is functionally enabled by – any "aspects of human consciousness" (i.e. a metacognitive processing bottleneck ... à la D. Kahneman's slooooow 'brain system 2'). Sapience sans (beyond) sentience. Butterfly sans (free from the constraints-defects of) chrysalis/caterpillar.
universeness May 07, 2023 at 18:59 #806049
Reply to 180 Proof
Too much in your link for me to read at the moment. When I have read it, I will comment on it.
universeness May 08, 2023 at 12:02 #806173
Read the article about Daniel Kahneman's System 1 and System 2 thinking.
I did not see any strong connections to our discussion. Was there a main summary point from his System 2 category that YOU find strongly contends with my suggestion that Quoting universeness
there may be aspects of human consciousness that no 'created' system can reproduce.


Btw, I came across this:
A new study finds that moths can remember things they learned when they were caterpillars — even though the process of metamorphosis essentially turns their brains and bodies to soup. The finding suggests moths and butterflies may be more intelligent than scientists believed.
From here
180 Proof May 18, 2023 at 23:29 #808920
Reply to universeness I referenced Kahneman's work only as scientific corroboration, not justification or proof, of my philosophical statement about a 'metacognitive processing bottleneck' (re: System 2, thinking slow aka "consciousness"). There isn't any evidence among higher mammals, including h. sapiens, that Sys 2 / conscious processing such as ours is indispensable for intelligent – adaptive problem-solving – behavior. To me it's clear that that expectation is only an anthropocentric bias. The current developmental state of 'large language models' / 'neural net machines' (e.g. ChatGPT, OpenAI, AlphaZero, etc), in still narrow ways as far as I can discern, shows that 'sapience sans sentience' is the (optimal) shape of things to come.

Reply to universeness Another link to the catastrophic effects of (runaway) global heating on Earth's fresh water sources: lakes & reservoirs.

https://www.cnn.com/2023/05/18/world/disappearing-lakes-reservoirs-water-climate-intl/index.html

The heating of oceans and drying up of lakes-reservoirs are strongly correlated. Not "pessimism", my friend, just facts. :mask:
universeness May 19, 2023 at 09:43 #809002
Quoting 180 Proof
show that 'sapience sans sentience' is the (optimal) shape of things to come.


Do you mean 'intelligence versus self-awareness?'
I just can't conceive of any value in an intelligent system that is not self-aware, other than as a functional, very useful tool for an intelligence that IS self-aware. Like a computer is for a human today.
Perhaps I am missing your main point here due to my attempts to decipher/interpret the words/phrases, you choose to use.

Quoting 180 Proof
The heating of oceans and drying up of lakes-reservoirs are strongly correlated. Not "pessimism", my friend, just facts. :mask:


I don't refute the very valid concerns regarding climate change.
I do fully accept that the evidence is overwhelming, that we have damaged the Earth's ecosystem significantly, in a way that compromises our survival and the survival of the current flora and fauna on the Earth. I think the Earth itself, will easily survive the actions of humans.
I think WE WILL pay a price for abusing Earth's resources for private gain, and to satisfy the lusts/greeds of individual (or groups of) nefarious humans, but it's not over until it's over.
The 'facts' you mention are not imo, immutable, yet.
We probably have passed the point of no return in some ways, but not with the results that you suggest, ie, population reduction to the levels of an 'endangered species' or actual extinction.
180 Proof May 19, 2023 at 19:53 #809057
Quoting universeness
Do you mean 'intelligence versus self-awareness?'

No. I mean intelligence (i.e. adaptivity) without "consciousness" (i.e. awareness of being self-aware), a distinction I suggest in this old post https://thephilosophyforum.com/discussion/comment/528794 ... and speculate on further, with respect to 'AGI', here https://thephilosophyforum.com/discussion/comment/608461.



180 Proof July 19, 2023 at 21:37 #823472
Reply to universeness If you haven't watched this US Congressional testimony by the late Carl Sagan back in 1985, consider his well-informed warnings – macro predictions – which have subsequently been largely ignored by governments and transnational corporations because of very irrational, biased, human groupthink (a metacognitive defect AGI will not be limited by) ...



Also today ...
https://www.theguardian.com/environment/2023/jul/19/climate-crisis-james-hansen-scientist-warning
180 Proof August 06, 2023 at 20:08 #827659
Quoting universeness
We probably have passed the point of no return in some ways ...

Apologies for continuing to flog this equine's carcass:
https://www.dw.com/en/sea-surface-temperature-hotter-than-ever-before/a-66444694
universeness August 07, 2023 at 10:02 #827864
For some reason, I was only messaged regarding your last post on this thread. I was unaware of your previous two. I know @Jamal 'sunk' this thread so that it would no longer show up on the main page, but it was not closed to new posts. You replied to me in the two posts I was not messaged about, so I don't know what happened.

Anyway ....... firstly I will try to refresh where we are in our exchange here:

Quoting universeness
Do you mean 'intelligence versus self-awareness?'
I just can't conceive of any value in an intelligent system that is not-self aware other that as a functional, very useful tool for an intelligence that IS self-aware. Like a computer is for a human today.
Perhaps I am missing your main point here due to my attempts to decipher/interpret the words/phrases, you choose to use.

Quoting 180 Proof
No. I mean intelligence (i.e. adaptivity) without "consciousness" (i.e. awareness of being self-aware), a distinction I suggest in this old post https://thephilosophyforum.com/discussion/comment/528794 ... and speculate on further, with respect to 'AGI', here https://thephilosophyforum.com/discussion/comment/608461.

Quoting 180 Proof
"consciousness", on the other hand, is intermittent (i.e. flickering, alter-nating), or interrupted by variable moods, monotony, persistent high stressors, sleep / coma, drug & alcohol intoxication, psychotropics, brain trauma (e.g. PTSD) or psychosis, and so, therefore, is either online (1) or offline (0) frequently – even with variable frequency strongly correlated to different 'conscious-states' – during (baseline) waking-sleep cycles.

Quoting 180 Proof
What I mean by 'atavistic ... metacognitive bottleneck of self-awareness' is an intelligent system which develops a "theory of mind" as humans do based on a binary "self-other" model wherein classes of non-selves are otherized to varying degrees (re: 'self-serving' (i.e. confabulation-of-the-gaps) biases, prejudices, ... tribalism, etc). Ergo: human-level intelligence without anthropocentric defects (unless we want all of our Frankenstein, Skynet-Terminator, Matrix nightmares to come true).


I still perceive a 'versus' between the 'theory of mind' that you propose for a future AI and our human 'theory of mind.' Would the AI theory of mind you propose have to decide whether its 'intelligent' but not 'conscious' state (at least not conscious in the human sense) was 'superior' or inferior to the human 'state of mind'? I am struggling to find clear terminology here.
Perhaps a better angle would be: if your AI mind model cannot demonstrate all of your listed functionalities:
Quoting 180 Proof

• pre-awareness = attention (orientation)
• awareness = perception (experience)
• adaptivity = intelligence (error-correcting heuristic problem-solving)
• self-awareness = [re: phenomenal-self modeling]
• awareness of self-awareness = consciousness

How do you know it would not conclude/calculate that to be an inferior state, and that functions 4 and 5 above would become two of its desires/imperatives/projects?
universeness August 07, 2023 at 10:18 #827871
Quoting 180 Proof
If you haven't watched this US Congressional testimony by the late Carl Sagan back in 1985, consider his well-informed warnings – macro predictions – which had subsequently been largely ignored by governments and transnational corporations because of very irrational, biased, human groupthink – a metacognitive defect AGI will not be limited by) ...


Quoting 180 Proof
Also today ...
https://www.theguardian.com/environment/2023/jul/19/climate-crisis-james-hansen-scientist-warning


I have watched just about everything with Carl Sagan in it, available on-line, more than once. Some, I have watched many times. I have watched the vid you posted at least 5 times so far.
Carl was a far better predictor of future events than Nostradamus ever was.
I don't try to play down any current danger that climate change activists are shouting about, nor have I ever suggested that the human race is doing anything other than a piss-poor job of its stewardship of this planet, but I don't see any reason to believe that a future AI would do a better job as steward of this planet.
AGI/ASI may well not be as 'biased' or 'irrational' as 'human groupthink' can be, but are you soooooooo sure that a future mecha won't be just as toxic towards planet Earth as humans were, if not more so?
If it needs to strip the Earth of its resources to replicate, advance and spread its own system, then it may do so and move on into space.
universeness August 07, 2023 at 10:24 #827874
Quoting 180 Proof
Apologies for continuing to flog this equine's carcass:
https://www.dw.com/en/sea-surface-temperature-hotter-than-ever-before/a-66444694


Anything I typed here in response to the linked article would probably be a repeat of elements of my previous post above. I fully accept all the warnings about the climate change disaster we imminently face. BUT, it's not over until it's over! That's all I have to cling to, and cling on is what I will continue to do! Feel free to think of me as the Monty Python Black Knight if you wish, but I don't think it's as hopeless as that ...... yet.
180 Proof March 31, 2024 at 05:23 #892516
"AI Winter is coming." :nerd:

https://thephilosophyforum.com/discussion/comment/892509