You are viewing the historical archive of The Philosophy Forum.

Elon Musk on the Simulation Hypothesis

Shawn November 15, 2018 at 21:09 19200 views 137 comments
I just read an amazing op-ed by Elon Musk yesterday about simulation theory and the probability that we're living in one. Following are some quotes from him:

====

You could argue that any group of people — like a company — is essentially a cybernetic collective of human people and machines. That’s what a company is. And then there are different levels of complexity in the way these companies are formed and then there is a collective AI in Google search, where we are also plugged in like nodes in a network, like leaves in a tree.

We’re all feeding this network with our questions and answers. We’re all collectively programming the AI and Google. […] It feels like we are the biological boot-loader for AI effectively. We are building progressively greater intelligence. And the percentage that is not human is increasing, and eventually we will represent a very small percentage of intelligence.

The argument for the simulation I think is quite strong. Because if you assume any improvements at all over time — any improvement, one percent, .1 percent. Just extend the time frame, make it a thousand years, a million years — the universe is 13.8 billion years old. Civilization if you are very generous is maybe seven or eight thousand years old if you count it from the first writing. This is nothing. So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality. Or civilization will end. Either one of those two things will occur. Or we are most likely living in a simulation.

There are many many simulations. These simulations, we might as well call them reality, or you can call them multiverse. They are running on a substrate. That substrate is probably boring.

When we create a simulation like a game or movie, it’s a distillation of what’s interesting about life. It takes a year to shoot an action movie. And that is distilled down into two to three hours. But the filming is boring. It’s pretty goofy, doesn’t look cool. But when you add the CGI and upgrade editing, it’s amazing. So I think most likely, if we’re a simulation, it’s really boring outside the simulation. Because why would you make the simulation as boring? [You’d] make simulation way more interesting than base reality.

====

I find his argument for the simulation hypothesis persuasive. I don't think the laws of physics prohibit computers from rendering reality as a simulation, so is he right in the last part of the interview?

Comments (137)

unenlightened November 15, 2018 at 21:46 #227991
Elon:the universe is 13.8 billion years old.


Elon:we are most likely living in a simulation.


When the conclusion invalidates the premises, the argument is weakened.

That is to say, it is probably more convenient to simulate a universe that is - say - a minute old, but looks 13.8 billion years old, than to run the program for 13.8 billion years, which would be 'boring'. But if the universe is not that old, then the argument somewhat collapses. Elon has to assume the reality of the universe that he wants to argue is likely a simulation.
Shawn November 15, 2018 at 21:52 #227992
Reply to unenlightened

So, what you're really saying is that we are in base zero reality given how old the universe is?
apokrisis November 15, 2018 at 21:57 #227993
Musk:It feels like we are the biological boot-loader for AI effectively. We are building progressively greater intelligence.


I think if you understood biology as well as you understood tech, then you would realise how much more amazing the biology still is.

So this is 99% bullshit.

Now human culture is amazing. And we are finding all sorts of ways to evolve and augment that through our tech mastery. Machines amplify our control over material reality, giving us a means to act out our fantasies. AI - or really great pattern recognition machines - is one of those kinds of tools.

But simulation is essentially pointless. Life and mind are about a modelling relation with the world. Simulation appeals because it seems to be modelling without limits. But it is also then modelling without consequences. Someone would have to explain why that would be of any real interest. There is a missing bit to the argument right there.



Michael November 15, 2018 at 21:59 #227994
I think he took this from Nick Bostrom's trilemma:

  • The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero, or
  • The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero, or
  • The fraction of all people with our kind of experiences that are living in a simulation is very close to one


And he thinks the third is more likely.
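For reference, the trilemma falls out of some simple bookkeeping in Bostrom's paper. A rough sketch of that bookkeeping, with made-up numbers (the values are purely illustrative, not Bostrom's):

```python
# The bookkeeping behind Bostrom's third option. If f_p is the fraction
# of civilizations that reach a posthuman stage, and n_sims is the average
# number of ancestor-simulations such a civilization runs (each holding
# about as many observers as real history), then the fraction of all
# observers who are simulated is f_p * n_sims / (f_p * n_sims + 1).
# The numbers below are made up purely for illustration.

def simulated_fraction(f_p: float, n_sims: float) -> float:
    return (f_p * n_sims) / (f_p * n_sims + 1)

# A tiny chance of posthumanity times many simulations pushes it toward 1:
print(simulated_fraction(0.001, 1_000_000))  # 1000/1001, about 0.999
# If almost nobody runs simulations, it stays near 0:
print(simulated_fraction(0.001, 0.01))
```

The point of the fraction is that unless one of the first two options holds (the numerator is forced toward zero), the denominator is dominated by simulated observers.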
unenlightened November 15, 2018 at 22:02 #227996
Reply to Posty McPostface I'm saying the argument is self-undermining. I'm not making any positive claims.

If simulation, then evidence is simulated.
Shawn November 15, 2018 at 22:04 #227998
Quoting apokrisis
So this is 99% bullshit


How so? His argument is sound. As he mentions, at any rate of progress you end up with the consequences of the simulation hypothesis.
Shawn November 15, 2018 at 22:05 #227999
Reply to Michael

Yes, I think he just expanded on that rationale.
Shawn November 15, 2018 at 22:06 #228000
Reply to unenlightened

Well if all you're saying is that reality must exist somewhere then I don't disagree with that. But his argument still holds.
unenlightened November 15, 2018 at 22:12 #228002
Quoting Posty McPostface
Well if all you're saying is that reality must exist somewhere then I don't disagree with that. But his argument still holds.


Let's call that reality 'the Presence of God'. All things are possible to God. Bla bla... I don't think it is impossible, but I don't think there is an argument either. Somehow, when it is dressed up in scientific garb, folks will swallow the most medieval cosmologies and think them both new and plausible.
apokrisis November 15, 2018 at 22:24 #228007
Quoting Posty McPostface
How so? His argument is sound.


Musk:So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality.


Indistinguishable to who? Are you and me going to be actual real witnesses to this shared simulation, or are we simulations of those witnesses and thus part of the simulation too?

You think there is an argument here, rather than the usual hand-waving based on having also watched the Matrix?

At what point is the biology of consciousness shown to be replicable by "computational simulation"?


Shawn November 15, 2018 at 22:30 #228009
Quoting apokrisis
Indistinguishable to who?


Any observer or participants of that shared reality.

Regarding your other question, I think that as long as the laws of physics don't prohibit such a reality from occurring, then it's possible and guaranteed to occur given enough time.
apokrisis November 15, 2018 at 22:53 #228019
Quoting Posty McPostface
Any observer or participants of that shared reality.


But are you claiming those observers/participants to be themselves physically real or computationally simulated? You are failing to flesh out the critical part of the argument. Thus there is no "argument" as such.

Quoting Posty McPostface
I think that as long as the laws of physics don't prohibit such a reality from occurring, then it's possible and guaranteed to occur given enough time.


That's a view. And I say that anyone who understands biology as well as they understand computation can see why it's inadequate as "a sound argument".

So I've asked you to show you understand the biological constraints on "simulating consciousness". But - like Musk - you don't want to clearly commit to having to place your observers/participants on some side of that tricky line.

The argument is that virtual reality tech can be so good that we don't know the difference. Biology can already be fooled by skilled programming. And we can imagine that fooling progressing exponentially - particularly because we can be so willing to immerse ourselves in the "reality" of our fictional movie and gaming worlds. The laws of physics don't even come into it. The nature of our biology has this in-built capacity to learn to believe anything to be real.

But it takes real food to nourish the gamer's body. Real diseases can kill it. At some point, civilisation does interrupt the fantasy being spun exponentially here. As Musk also admits....

Or civilization will end. Either one of those two things will occur.






praxis November 15, 2018 at 23:04 #228022
The metaphysics supposedly help to account for some quantum weirdness, like entanglement and how information appears to travel faster than light, on the positive side. :razz:
Shawn November 16, 2018 at 00:35 #228078
Quoting apokrisis
But are you claiming those observers/participants to be themselves physically real or computationally simulated? You are failing to flesh out the critical part of the argument. Thus there is no "argument" as such.


The argument is valid whether we're simulations or not. I don't see the issue here.
apokrisis November 16, 2018 at 01:40 #228090
Reply to Posty McPostface Which argument are you finding so impressive then - that we are most likely all the figments of a simulation, or that if this were the case, then the reality beyond our simulation would be "boring"?

If we dismiss the first, what can be said about the second?

I think you are still stuck with the observer issue. The analogy is that we make movies as a heightened reality - life with the boring bits taken out.

But it is a higher intelligence - the clever writer or director - that is constructing these heightened realities for us as their consumers. Sure, the filming process might be boring (well even that doesn't ring true). But the world beyond a Matrix simulation isn't going to be some mindless world - a computational substrate that for no reason, in some fashion that is quite different to the laws of physics as we understand them, also wants to generate these heightened realities with fictional observers within them, and no actual observer without.

If you actually stop to analyse this op-ed for any proper philosophical argument, it is just a bunch of handwaving fragments.

You can sort of see what Musk is going for there. If the ultimate reality is some kind of computational multiverse, then in blind fashion, that might just construct these random simulations of every kind of reality. And by the workings of infinite chance, that will generate bizarre creations like our world where we are simulated people, in a simulated world, complete with simulated physical laws and simulated biological histories.

The laws of physics don't seem to forbid this computational multiverse, you say. Again that is totally questionable given the holographic limits on physical computation. But then, by the light of the hypothesis itself, who could even care about this caveat if those laws are just going to be the simulated features of some randomly generated scenario?

There are just so many holes and loose ends in a short few paragraphs. I see no argument as such. Simply fashionably muddled thought.
Shawn November 16, 2018 at 01:45 #228092
Reply to apokrisis

Are you arguing over the observer effect in QM, and how it might impact a simulated reality? Non-locality and locality would be an issue if we were to live in a simulation, I think.
apokrisis November 16, 2018 at 02:07 #228101
Reply to Posty McPostface Again, the claim being made is too confused for QM to be an actual issue. But if the laws of physics are taken as a constraint on the realisation of computational simulations, then you can’t gaily exponentiate that infinite computation ensures anything is then possible.

And that is before we get on to the biological constraints.
Maw November 16, 2018 at 02:08 #228102
Elon Musk is proof that just because someone is rich, doesn't mean they are smart.
Marchesk November 16, 2018 at 02:08 #228103
Quoting Michael
The fraction of all people with our kind of experiences that are living in a simulation is very close to one


One potential problem is that we don't know whether a simulation can include consciousness. The fact that we're

A. Conscious
B. Don't have any clue what it would entail to simulate consciousness

Argues against the likelihood that we're living in a simulation.
Shawn November 16, 2018 at 02:11 #228104
Reply to apokrisis

I'm not sure we're on the same page. Could you distill your qualms with Musk's argument?
Marchesk November 16, 2018 at 02:13 #228106
Quoting Posty McPostface
The argument for the simulation I think is quite strong. Because if you assume any improvements at all over time — any improvement, one percent, .1 percent. Just extend the time frame, make it a thousand years, a million years — the universe is 13.8 billion years old.


But this argument doesn't work for everything. Say we apply it to the speed of transportation. There was a dramatic increase from horse to train, automobile and airplane. But after a certain point, which would be the 60s or 70s with highways, Concorde jets and trips to the moon, we didn't really increase our speed of transportation, despite continued improvements in technology related to transportation. We leveled out on how fast we move people and things around.

A similar thing might happen to computing before we reach the amount we would need to actually run an ancestor simulation. What sort of computing resources is it going to take to simulate Earth's history? It will be a tremendous amount, if you want it to be anything like the real world.
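Marchesk's levelling-out point can be made concrete: steady compounding and growth against a ceiling look similar early on and then diverge completely. A toy sketch (all figures are made up for illustration, not forecasts):

```python
# Two growth stories: Musk's "any rate of improvement" compounding
# forever, versus Marchesk's levelling-out against a ceiling (physical
# or economic limits). All figures are made up for illustration.

def compound(rate: float, years: int) -> float:
    """Capability after steady fractional improvement every year."""
    return (1 + rate) ** years

def capped(rate: float, years: int, ceiling: float) -> float:
    """Logistic-style growth: the same rate, but slowing near a ceiling."""
    x = 1.0
    for _ in range(years):
        x += rate * x * (1 - x / ceiling)  # growth shrinks as x nears ceiling
    return x

# A 0.1% yearly improvement compounds to roughly 21,917x over ten millennia...
print(compound(0.001, 10_000))
# ...but with a ceiling of 50x it simply flattens out just below 50:
print(capped(0.001, 10_000, 50.0))
```

Musk's extrapolation assumes the first curve; the transportation example suggests the second is at least as common.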
BC November 16, 2018 at 02:14 #228109
Reply to Posty McPostface I think Mr. Musk should mind his several businesses and leave the universe alone.
Shawn November 16, 2018 at 02:17 #228110
Reply to Marchesk

I believe Musk is creating the conditions, with his Boring Company and SpaceX, to be able to travel to any part of the world in an hour's time. That's pretty radical if you ask me.
Marchesk November 16, 2018 at 02:19 #228112
Quoting Posty McPostface
I believe Musk is creating the conditions with his Boring company and SpaceX, to be able to travel to any part of the world in an hour's time. That's pretty radical if you ask me.


Sure, along with cold fusion, flying cars, and Martian colonies. We've heard this sort of stuff for decades now. You should see some of the futuristic predictions from the 1950s.
Shawn November 16, 2018 at 02:22 #228113
Reply to Marchesk

I don't understand your pessimism on the matter of those futuristic ideas. They will become reality if they're economically viable and physically possible, which then renders them an engineering problem.

One thing that Musk is good at is generating interest in more economical and efficient solutions to our current dilemma of transportation (depending on where you live).
apokrisis November 16, 2018 at 02:42 #228120
Quoting Posty McPostface
Could you distill your qualms with Musk's argument?


Are you taking the piss?
Shawn November 16, 2018 at 02:46 #228121
Reply to apokrisis

No, good Sir. I am wondering what exactly your qualms with the argument are. All I could figure out, given my limited understanding, was that you thought biological organisms are far too complex to simulate. Is that correct?
BrianW November 16, 2018 at 03:10 #228127
Is Elon Musk (or any of us) part of this simulation? If so, then it's that real. That is, a simulation is real to its objects/subjects. Right...? And how do we come to realize it's a simulation? We must have a reality (or non-simulation) to compare it with, right...? If we're not part of the simulation, then what are we?
Streetlight November 16, 2018 at 03:15 #228128
It's such a lazy argument.

Over a long enough time-span, any bullshit scenario I make up is more likely to be the case because time. Time, therefore, [insert bullshit scenario here] is very likely.
apokrisis November 16, 2018 at 04:15 #228150
Reply to Posty McPostface Sure. That would be one of the things in demand of support to constitute an argument.

Where is it plausible that any amount of computational simulation adds up to be anything like a conscious biological organism?

I realise it is fashionable to take it for granted. But enough science exists to say let’s see a little more evidence and a lot less handwaving.

Michael November 16, 2018 at 07:31 #228218
Quoting Marchesk
One potential problem is that we don't know whether a simulation can include consciousness. The fact that we're

A. Conscious
B. Don't have any clue what it would entail to simulate consciousness

Argues against the likelihood that we're living in a simulation.


I don't see how that follows. We'd have to have some reason to believe that consciousness requires something like a human brain and that a human brain cannot be manufactured (e.g. a "brain in a vat"), but not knowing how consciousness works isn't a sufficient reason to believe this.

Of course on the other hand Bostrom's trilemma might seem to assume that it is possible to simulate consciousness, which may be unwarranted. Perhaps it requires a fourth option? Although I suppose this can be covered by the first option.
DiegoT November 16, 2018 at 07:40 #228219
Reply to Posty McPostface bear in mind that Elon Musk and people like him are a product of too much sci-fi reading, too much ego inflation, and possibly the influence of a masonic understanding of the world, that is: that the natural universe is garbage, something to use and get rid of, and the chosen ones must do whatever it takes to upload their "soul" to a non-physical plane (gnosticism; it's been plaguing us for some twenty-five centuries). That is, the way Elon Musk, or psychopaths like Jeff Bezos or George Soros, see the world is heavily filtered by an ideology that is all about ego-survival and the demonization of nature. I wonder if having as his main competitor an empathy-free ego like Jobs is what made Bill Gates develop his philanthropic ways. I'm looking forward to shitting on his last product!
diesynyang November 16, 2018 at 07:48 #228221
Reply to Posty McPostface

I think the simulation argument is sound, which is why it is still argued today. It is possible that there is technology like The Matrix (or even better) by which the machine restricts us from perceiving or imagining the "outside world". The problem is not "is such a simulation possible", because it is possible.

The question is: why would the simulation be run in the first place? If you can't find a reason for the simulation, then there is no simulation and our reality is real. That is the problem.

Some of the reasons involve the concept of the post-human, or of aliens, which Nick Bostrom discusses (Reply to Michael). The post-humans would want to "experience" this life: maybe to test a scenario, or for enjoyment, etc.

If you don't believe in the concept of the post-human, or of a more intelligent alien, then it is safe to say that we are not in a simulation. But if you believe in that concept, then it is indeed possible.

But again, it is not yet proven, so either you work to prove it (which is hard, almost impossible), or you put your head into something that is real, like the existential crisis of super AI, for example :D
Wayfarer November 16, 2018 at 09:21 #228236
It’s a metaphor with obvious appeal in our technological age. Nothing more. But it would suit the technocracy if a lot of people were to believe it.
Terrapin Station November 16, 2018 at 11:42 #228258
Quoting Posty McPostface
So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality. Or civilization will end. Either one of those two things will occur. Or we are most likely living in a simulation.


That section couldn't be more of a non-sequitur.

And calling any computer activity "intelligent" at this point is simply a manner of speaking. What makes it "intelligent" is our interpretation of computer events, where we are anthropomorphizing to some extent.

Quantifying improvements is dubious as well. We can easily quantify things like processor speed, storage capacity, etc., but that's not the same thing.

Quoting Posty McPostface
When we create a simulation like a game or movie, it’s a distillation of what’s interesting about life.


Ideally.

Lots of video games and movies are unfortunately a distillation of how little imagination some folks have, how little effort they put into the work at hand, etc.




Harry Hindu November 16, 2018 at 11:58 #228273
Quoting Posty McPostface
There are many many simulations. These simulations, we might as well call them reality, or you can call them multiverse. They are running on a substrate. That substrate is probably boring.

Reality and simulations are two directly opposite things. To say that one is the other is making a category mistake. Is it "simulations" all the way down, or is it just reality all the way down?

Simulation only makes sense in relation to some reality.
SophistiCat November 16, 2018 at 13:49 #228357
Quoting unenlightened
I'm saying the argument is self-undermining. I'm not making any positive claims.

If simulation, then evidence is simulated.


The idea is that the simulation is a simulation of (a part of) the actual world under representative conditions. So yes, evidence is simulated, but if the simulation is accurate enough, then this simulated evidence is close to the real evidence.

Like, for instance, if I was simulating an engine turbine, I would be putting in the material properties, geometry, physics, and boundary conditions that are characteristic of the real engine that I am interested in.

(I am not endorsing the simulation hypothesis, btw, least of all Musk's OP. Why are we even talking about Musk?)
Arkady November 16, 2018 at 14:43 #228381
Reply to Maw I think it's more accurate in this case to say that merely because a person is smart in their areas of expertise, that doesn't imply that they're smart in all areas.
Arkady November 16, 2018 at 14:47 #228384
What does it even mean to "simulate" subjective, first-person experience? As Descartes pointed out so long ago, it doesn't even seem possible that I be deceived about such things. So, even in this simulation, there are some "real" things when it comes to phenomenal consciousness.
Terrapin Station November 16, 2018 at 14:50 #228387
Quoting Harry Hindu
Reality and simulations are two directly opposite things. To say that one is the other is making a category mistake. Is it "simulations" all the way down, or is it just reality all the way down?

Simulation only makes sense in relation to some reality.


Yeah, you can't have "only simulations." That's incoherent.
Terrapin Station November 16, 2018 at 14:51 #228388
Quoting Arkady
What does it even mean to "simulate" subjective, first-person experience? As Descartes pointed out so long ago, it doesn't even seem possible that I be deceived about such things. So, even in this simulation, there are some "real" things when it comes to phenomenal consciousness.


Another good point.
Forgottenticket November 16, 2018 at 15:43 #228407
He's just repeating whatever Bostrom says.

Btw from Bostrom's paper
https://www.simulation-argument.com/simulation.html
Bostrom:
A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well. Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.


lol, :grin: I like how he sweeps away the age-old mind/body problem in a sentence. The literature for this premise is also not readily provided, from what I can see.
apokrisis November 16, 2018 at 19:17 #228470
Bostrom:It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.


This claim of multirealisability has in fact been deeply challenged by research into the biophysics of life over the past decade.

Everything biological hinges on the ability of informational mechanisms, like genes and neurons, to regulate entropic metabolic flows, like proton gradients and electron respiratory chains. So this biology, this set-up, now seems so special that life and mind could only arise with very specific “hardware”.

This familiar assumption of cogsci, and hence 1980s philosophy of mind, now sounds horribly dated.
Shawn November 16, 2018 at 19:48 #228486
Reply to apokrisis

What are your thoughts about this?

https://www.digitaltrends.com/cool-tech/artificial-life-quantum-computing/

https://www.nature.com/articles/s41598-018-33125-3#Abs1
ssu November 16, 2018 at 20:12 #228498
Quoting apokrisis
This claim of multirealisability has in fact been deeply challenged by research into the biophysics of life over the past decade.

Everything biological hinges on the ability of informational mechanisms, like genes and neurons, to regulate entropic metabolic flows, like proton gradients and electron respiratory chains. So this biology, this set-up, now seems so special that life and mind could only arise with very specific “hardware”.

This familiar assumption of cogsci, and hence 1980s philosophy of mind, now sounds horribly dated.

I think the basic problem is learning and the interaction with the world that isn't part of you. Too many times the focus is just on the very broadly defined physical mechanisms.
apokrisis November 16, 2018 at 20:32 #228503
Reply to Posty McPostface If life and mind are defined by information that has material consequences, then be suspicious of all claims that talk about plays of information without material consequence.

A pattern running on a computer is just syntax. Symbol processing. It still takes an actual biological being to read the pattern as having meaning and thus wanting to act on it in some way. The material consequences are what give a modelling relation with the world any semantics.

If you think that life and mind are just essentially machines, then you will be forever insensitive to the chasm that in fact exists between the biological and the mechanical. Life is based on the physics of dissipative structures. And the mechanical is defined by its insensitivity to entropic reality.

The parts constituting a machine are essentially dead in being fixed and stable. The parts constituting an organism are essentially unstable - poised to fall. And that is how regulating information can actually insert itself into the material equation and determine which way the instability will fall.


apokrisis November 16, 2018 at 20:37 #228508
Reply to ssu Yeah. If we are talking about neural network architectures, then we are starting to talk about legitimate attempts to follow the path of biological realism. And I doubt you would find neural networkers spending a lot of time worrying about whether we are figments of a matrix simulation.
Arkady November 16, 2018 at 20:54 #228520
Quoting apokrisis
And the mechanical is defined by its insensitivity to entropic reality.

:confused: So why aren't there perpetual motion machines?
Shawn November 16, 2018 at 21:01 #228530
Quoting apokrisis
If life and mind are defined by information that has material consequences, then be suspicious of all claims that talk about plays of information without material consequence


As far as I'm aware, information doesn't equate with the "material", so what do you think?
Michael November 16, 2018 at 21:08 #228537
Reply to apokrisis What about biological computers?
apokrisis November 16, 2018 at 21:08 #228538
Reply to Arkady Computation does rely on being able to produce a frictionless world. But yes. My point is that that is in the end a thermodynamic fiction.

There is always a cost attached to every time a symbol is written, a gate is switched. The cost is simply being made the same for any such informational action. And as small as possible.

Likewise, your car engine will always eventually wear out. The hardened parts will erode with use. I like the fact that car design has reached the stage where all the parts have been strengthened to the exact degree that they will all tend to fail about the same time.

So the mechanical is about stepping outside the usual entropic deal - the world of self organised flows like rivers, plate tectonics and solar flares - to control what is going on with rigid material form and imposed systems of informationally operated switches, gates and timing devices.

And yet all that machinery still erodes. Friction can be minimised but never eradicated. Dissipation wins in the end.
apokrisis November 16, 2018 at 21:10 #228542
Reply to Michael Do you have one?
Michael November 16, 2018 at 21:10 #228545
apokrisis November 16, 2018 at 21:11 #228549
Reply to Posty McPostface Point to information that exists without a physical mark then.
apokrisis November 16, 2018 at 21:12 #228550
Reply to Michael And why would that be?
Michael November 16, 2018 at 21:13 #228552
Reply to apokrisis Because I don't have the knowledge, skills, or technology to build one, and know of nobody who does and has and who is willing to give or sell it to me?

I don't understand the relevance of your questions.
Shawn November 16, 2018 at 21:13 #228554
Reply to apokrisis

The number two, or one hundred, or a million? These numbers don't denote anything in the real world but are snippets of information about it.
apokrisis November 16, 2018 at 21:36 #228570
Quoting Michael
I don't understand the relevance of your questions.


Huh? You asked me “what about biological computers?”

Well. An example if you please.
apokrisis November 16, 2018 at 21:43 #228573
Reply to Posty McPostface You made some marks appear on my screen - 2, 100, 1,000,000. And so the party started.

Numbers stand for acts of counting. Some set of marks to be scratched or instances to be recorded. The efficiency of a notation shouldn’t fool you that symbols don’t need grounding. Every act of reference is also a physical event.
Michael November 16, 2018 at 21:43 #228574
Quoting apokrisis
Huh? You asked me “what about biological computers?”


Because you took issue with Bostrom's claim that silicon-based computers could give rise to consciousness. So what if we considered biological computers running these simulations instead? Would that address the concern you had? I don't understand why me having or not having one is relevant to this question.

Quoting apokrisis
Well. An example if you please.


I can't exactly give you one, but there's information about them here.
apokrisis November 16, 2018 at 21:57 #228582
Quoting Michael
So what if we considered biological computers running these simulations instead?


Again, first show that “running a simulation” is something a biocomputer could even do. Then we are still left with the basic point that a simulation is not a reality as it is divorced from any material consequences due to being founded on an artificial stability.

Biology arises because material instability - criticality - offers a suitable foundation for the informational regulation of that instability. That is the whole deal. So using meat to imitate stable computational hardware is missing the point of what actually defines life and mind. If it is perfect for Turing computation, you have ruled out the very thing - the material instability - which life and mind exist to regulate.

Michael November 16, 2018 at 22:12 #228592
Quoting apokrisis
Again, first show that “running a simulation” is something a biocomputer could even do.


A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.

And we know that at least one biological organ is capable of giving rise to consciousness.

So put the two together and we have a biological computer, running simulations, where the output is a certain kind of conscious experience.

I don't think it's controversial to think that a sufficiently advanced civilization can create biological computers that function somewhat like the human brain, complete with consciousness, but where its experiences of things like having a body and sitting under a tree are caused by its own internal activity and not by actually having a body and sitting under a tree.
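The definition of a simulation given above — take some input, apply the rules of a mathematical model, produce some output — can be made concrete with a toy sketch (my own hypothetical example, not anything from the linked article). Here the "model" is just constant-gravity free fall, stepped with Euler's method:

```python
# A minimal "simulation" in the sense described above: input -> model rules -> output.
# The model here is constant-gravity free fall, integrated with Euler's method.
def simulate_fall(height, dt=0.01, g=9.81):
    t, v = 0.0, 0.0
    while height > 0:
        v += g * dt          # the model's rule, applied step by step
        height -= v * dt
        t += dt
    return t                 # output: time to hit the ground

print(round(simulate_fall(45.0), 1))   # close to the analytic sqrt(2*45/9.81) ≈ 3.03 s
```

Whether such rule-following could ever output an experience rather than a number is, of course, exactly what is in dispute in this thread.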
Baden November 16, 2018 at 22:37 #228598
Musk: Because technology, everything.

:yawn:

apokrisis November 16, 2018 at 23:10 #228603
Quoting Michael
The article I linked to explains that biological computers can do this.


Sure. You can build a Turing machine out of anything. Even meat, or string and tin cans. So long as it is eternally stable and entropically unlimited. That is rather the point.

Meanwhile over here in reality, a very different game is going on. I’m asking you to focus on that.

Quoting Michael
I don't think it's controversial to think that a sufficiently advanced civilization can create biological computers that function somewhat like the human brain, complete with consciousness


If you don't find it controversial then you might want to question how well you understand the biology of brains, and indeed the biology of biology.

A) Machine - stable parts.

B) Life - unstable parts.



Michael November 16, 2018 at 23:20 #228604
Quoting apokrisis
If you don't find it controversial then you might want to question how well you understand the biology of brains, and indeed the biology of biology.


I know biological organisms are complicated. But if they can develop naturally in response to a sperm interacting with an ovum then I don't see why they can't be developed artificially. It's not like they're made of some special physical substance that an advanced intelligence is incapable of manually manipulating. It's all still the same molecules that we use for all sorts of other purposes.
Wayfarer November 16, 2018 at 23:20 #228605
There was an insightful essay years ago on BBC’s online magazine (can’t find it since) about the powerful appeal of Inception, Matrix, and other such sci-fi films which suggest just such a scenario as Musk is speaking about.

They play to an intuition that existence itself is a grand illusion, which has obvious parallels in some streams of occult lore and also in Eastern religions. (Actually I found the red pill/blue pill scene in Matrix quite offensive at the time, because I thought it suggestive of a profound insight which the film itself didn’t really grasp, but only sought to exploit.)

But, suffice to say, that the sense that the domain of empirical experience is in some sense a simulation, is quite true in that the brain - the most complex phenomenon known to science - is itself a ‘reality simulator’. The problem is that it is difficult to understand that from ‘within the simulation’, as it were. But that, I think, is the intuition that the ‘simulation hypothesis’ suggests.
apokrisis November 16, 2018 at 23:42 #228609
Reply to Michael Strewth. So life on earth began when a sperm met an ovum and organisms arose.


Michael November 16, 2018 at 23:46 #228611
Quoting apokrisis
Strewth. So life on earth began when a sperm met an ovum and organisms arose.


That's not what I'm saying. My brain developed after my dad's sperm fertilized my mum's ovum. We don't need to recreate the origin of life to build a brain-like biological computer. We can just recreate what sperms and ovums do using DNA and proteins of our own making.
ssu November 16, 2018 at 23:52 #228614
Quoting Michael
A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.

And we know that at least one biological organ is capable of giving rise to consciousness.

Yeah, but unlike computers, which follow orders and basically use algorithms, we, being conscious, can look at those rules/algorithms and create something else, invent something that wasn't in the rules/algorithm in the first place. When a computer "creates" something new, it has to have specific orders for just how to do this.

Hence computers simply cannot follow the algorithm "do something else". They have to have specific instructions about how to do 'something else'.
Michael November 16, 2018 at 23:54 #228615
Reply to ssu Have you not heard of machine learning?
apokrisis November 16, 2018 at 23:57 #228616
Quoting Wayfarer
But, suffice to say, that the sense that the domain of empirical experience is in some sense a simulation, is quite true


That’s like saying the eye is like a camera. It might get the conversation started, then you get serious.

Take for instance the evidence from sensory deprivation experiments. Without a world forcing the brain into some kind of stabilising state of interpretation, then experience and thought just fall apart.

There is no Cartesian theatre, no running simulation, that is a consciousness standing apart from the world. The idea of the mind as a stable entity, a soul stuff, is what underpins the naivety of computationalists.

Neurology depends on its own instability being regulated by its running interaction with a world. It becomes constrained by its environment to have an appropriate level of fixed or habitual response.

So the simulation story is just dualism warmed over. Sensory deprivation reveals that being in a definite and organised state of mind is not about a concrete act of world simulation but an enactive state of world interpretation. The infinite garbled possibility the dreaming mind can conjure up is stabilised by whatever the available regularities of the environment happen to be.

apokrisis November 16, 2018 at 23:58 #228617
Reply to Michael The problem is not that you are talking nonsense. It is that you don’t even know it’s nonsense.
ssu November 17, 2018 at 00:10 #228620
Quoting Michael
Have you not heard of machine learning?

Yes. And there's exactly the problem. Just from the Wikipedia link you gave me:

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.


As I said, the computer has to have an algorithm. It cannot do anything without an algorithm and it cannot do something the algorithm doesn't say to do. It's limited by its algorithm. Now you can look at an algorithm (1. do this, 2. then do that, 3. check that what you have done works), think out of the box, and come up with a new algorithm (1. go and drink a beer, let others do those things and check they work). You're basically conscious. You can look at the algorithm, understand the objective that the algorithm is intended for, and then do something else.

However it makes its data-driven decisions and however it builds a model from sample inputs, the computer has to have instructions for how to build these and how to use the data, and all of that is still very basic instruction-following, just like a Turing machine does.
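The distinction ssu is drawing can be illustrated with a toy sketch (a hypothetical example of mine, not from the thread): the fitting procedure below is a fixed set of instructions, yet the model it "learns" depends entirely on the sample inputs it is given.

```python
# A fixed algorithm (least squares for a line through the origin, y = m*x)
# whose learned model depends entirely on the data it is given: the
# instructions never change, only the inputs do.
def fit_slope(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
print(fit_slope(xs, [2.0, 4.0, 6.0]))   # 2.0, learned from one data set
print(fit_slope(xs, [3.0, 6.0, 9.0]))   # 3.0, same algorithm, new data
```

On ssu's view this is still pure instruction-following; on Michael's and SophistiCat's view, something of the same shape, scaled up, is all the brain is doing either.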
Wayfarer November 17, 2018 at 01:32 #228639
Quoting apokrisis
Neurology depends on its own instability being regulated by its running interaction with a world. It becomes constrained by its environment to have an appropriate level of fixed or habitual response.


none of which contradicts what I meant to say, but I will have to enlarge on it when not typing on an iphone.

(Incidentally, as it happens, I’m working at an AI startup right now.)
Wayfarer November 17, 2018 at 02:24 #228645
Quoting apokrisis
But, suffice to say, that the sense that the domain of empirical experience is in some sense a simulation, is quite true
— Wayfarer

That’s like saying the eye is like a camera. It might get the conversation started, then you get serious.

Take for instance the evidence from sensory deprivation experiments. Without a world forcing the brain into some kind of stabilising state of interpretation, then experience and thought just fall apart.


I will try and enlarge on that. What I have in mind, is the role of the brain (or mind, mind/brain, whatever) in synthesising perception and data in the generation of conscious experience. So it's a simulation in the sense of the constructive act of conscious cognition. We do indeed 'build' a world by the act of combining sensory perception with judgements, intentions, and the many other conscious and unconscious components of the vast ensemble that is the mind. Obviously that has to be well-adapted, otherwise you'll 'mistake your wife for a hat', or whatever. But it's still in some sense a simulation.

Although I just read the Musk quote again, and I honestly think it's bullshit. What it doesn't see is that 'artificial intelligence' doesn't really know anything at all. As I mentioned above, I am working in an AI start-up. The vision is, the software they're making is a person - it has a name, but I won't give it - let's say 'Hal' - so you 'ask Hal' questions about data sets. And you have to know exactly what to ask, referencing very specific terms. Like, I noticed there was a big data set about types of customers - single parents, parents with two kids, and so on - for supermarket data. So I idly wondered, does Hal know what a bachelor is? I asked Hal, sales figures, by quarter, these product lines, for bachelors. Hal says - 'what's a bachelor? Is it a kind of olive?' Just for fun, I say, yes Hal. So Hal says 'great! Bachelors are olives! I'll remember that!'

[quote=Steve Talbott] In 1965, Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work that a man can do.” M.I.T. computer scientist Marvin Minsky assured a Life magazine reporter in 1970 that “in from three to eight years we’ll have a machine with the general intelligence of an average human being ... a machine that will be able to read Shakespeare and grease a car.”

The story is well-told by now how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.[/quote]

[url=https://www.thenewatlantis.com/publications/logic-dna-and-poetry]Logic, DNA, and Poetry[/url]
Marchesk November 17, 2018 at 10:42 #228665
Bostrom: A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.


Yeah, this is far from widely accepted in philosophy of mind. People with a strong computer science background tend to endorse it a lot more than people who are more philosophical in general. I'm not sure where the neuroscientists fall on this on average, but I would guess they're a bit more reserved about making such assumptions.
Marchesk November 17, 2018 at 10:45 #228666
Here's something related Elon said last year. To paraphrase:

[quote=paraphrased Elon]Humans are already cyborgs and superintelligent because of smartphones. Anyone with one of these is more powerful than the president of the United States 30 years ago.[/quote]

Then he goes on to talk about the limiting factor for superhuman intelligence is output bandwidth, so we need brain to computer interfaces to bypass our slow modes of communication.
Marchesk November 17, 2018 at 11:24 #228668
Quoting Michael
A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.

And we know that at least one biological organ is capable of giving rise to consciousness.

So put the two together and we have a biological computer, running simulations, where the output is a certain kind of conscious experience.


Wait a second, what does a conscious output look like where you take some input, apply the rules of a mathematical model, and produce output?

I'm not aware of any mathematical model that can do that, or what it could even possibly look like. Are you?

I'm thinking you input some matrices of data, there's some machine learning models, and then the output is .... a blue experience???

That doesn't compute, because it's not a computation.
Michael November 17, 2018 at 11:31 #228669
Quoting Marchesk
I'm thinking you input some matrices of data, there's some machine learning models, and then the output is .... a blue experience???


Give a computer a Hex code of 000000, have it add FF, and the result is 0000FF. This is the hex code for blue, and it tells the computer to turn on the blue lamps that each make up part of a pixel.

Only in our scenario that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience.

Unless you want to argue for something like a God-given soul, what reason is there to think that the human brain and its emergent consciousness is some special, magical thing that cannot be manufactured and controlled? We might not have the knowledge or technology to do it now, but it doesn't follow from this that it's in principle impossible.
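The hex arithmetic in Michael's example can be checked directly (a toy sketch, assuming the usual 0xRRGGBB channel packing):

```python
# The hex arithmetic from the post, using the usual 0xRRGGBB packing:
# start from black (0x000000), add 0xFF, and read off the colour channels.
colour = 0x000000 + 0xFF           # 0x0000FF

red   = (colour >> 16) & 0xFF      # 0
green = (colour >> 8) & 0xFF       # 0
blue  = colour & 0xFF              # 255: "turn on the blue lamps"

print(f"#{colour:06X}")            # prints "#0000FF"
```
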
Michael November 17, 2018 at 11:40 #228670
Quoting ssu
As I said, the computer has to have an algorithm. It cannot do anything without an algorithm and it cannot do something the algorithm doesn't say to do. It's limited by its algorithm. Now you can look at an algorithm (1. do this, 2. then do that, 3. check that what you have done works), think out of the box, and come up with a new algorithm (1. go and drink a beer, let others do those things and check they work). You're basically conscious. You can look at the algorithm, understand the objective that the algorithm is intended for, and then do something else.

However it makes its data-driven decisions and however it builds a model from sample inputs, the computer has to have instructions for how to build these and how to use the data, and all of that is still very basic instruction-following, just like a Turing machine does.


And you don't think that we operate according to algorithms of our own, albeit ones that are a product of DNA-driven cell development rather than intelligent design? How exactly do you think the human brain works? Is our mind some mystical homunculus, operating with libertarian free will, and that can only occur naturally and never artificially?
Marchesk November 17, 2018 at 11:46 #228671
Quoting Michael
Only in our scenario that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience.


But how will we know how to put together a biological computer that can bring about a blue color experience? I assume that won't be a binary pattern.

Quoting Michael
Unless you want to argue for something like a God-given soul or substance dualism,


There are other options, which you know about.

Quoting Michael
what reason is there to think that the human brain and its emergent consciousness is some special, magical thing that cannot be manufactured and controlled?


Not magical, but maybe fundamental.

Quoting Michael
We might not have the knowledge or technology to do it now, but it doesn't follow from that that it's in principle impossible.


Right, but there are somewhat convincing conceptual arguments against it. I don't know what the nature of consciousness is, but nobody else has been able to explain it either. And until that can be done, we don't know what computing it would entail, other than stimulating an existing brain.
Michael November 17, 2018 at 11:48 #228672
Quoting Marchesk
But how will we know how to put together a biological computer that can bring about a blue color experience? I assume that won't be a binary pattern.


By studying the human brain and replicating its behaviour.

Quoting Marchesk
Not magical, but maybe fundamental.


What do you mean by "fundamental"? And if it can occur naturally by DNA-driven cell development then why can't it occur artificially by intelligent design?
Marchesk November 17, 2018 at 11:51 #228673
Quoting Michael
By studying the human brain and replicating its behaviour.


Assuming behavior can result in consciousness. There are good reasons for thinking that's not the case.

Quoting Michael
What do you mean by "fundamental"?


Something that's not explicable in terms of something else, which in context means an empirical explanation.
Marchesk November 17, 2018 at 11:52 #228674
Quoting Michael
And if it can occur naturally by DNA-driven cell development then why can't it occur artificially by intelligent design?


I don't know whether it can, but the conceptual argument against computing consciousness is that computation is objective and abstract, whereas consciousness is subjective and concrete.
Michael November 17, 2018 at 11:57 #228675
Quoting Marchesk
I don't know whether it can, but the conceptual argument against computing consciousness is that computation is objective and abstract, whereas consciousness is subjective and concrete.


But consciousness happens when a physical brain behaves a certain way, right? So replicate that kind of behaviour using the same kind of material and it should also cause consciousness to happen.

If it can occur naturally then I see no reason to believe that it can't occur artificially.
Marchesk November 17, 2018 at 11:59 #228677
Quoting Michael
But consciousness happens when a physical brain behaves a certain way, right? So replicate that kind of behaviour using the same kind of material and it should also cause consciousness to happen.


That might work. I'm more arguing against the simulation idea.
Arkady November 17, 2018 at 13:33 #228686
Not to just dump a link without discussion, but this blog post by philosopher Alexander Pruss may be interesting to some of you, and is somewhat a propos of the current discussion.

https://alexanderpruss.blogspot.com/2010/06/could-something-made-of-gears-be-person.html
Moliere November 17, 2018 at 13:34 #228687
Quoting Posty McPostface
So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality. Or civilization will end. Either one of those two things will occur. Or we are most likely living in a simulation.


Others have already pointed this out, but I figure I'll throw my hat in with that lot and try to rephrase. . .

I think the implication is false. @Marchesk pointed this out in their reply here: Reply to Marchesk


"Rate of improvement" is a squishy concept. Even supposing that the concept can be modeled mathematically as the use of the word "rate" seems to imply this is just plainly false. Empirically you have the car example. Theoretically speaking you need only consider what graphing a rate can look like. In a more localized sense a rate can appear to be linear -- it can look like it is a straight line that, if having a positive value, increases. But that's only locally. Often times a rate can be approximated like this when, in reality, it has, say, a logarithmic progression. Modelling equilibrium curves often produces this exactly. So instead of. . .

User image

You get. . .

User image

In which case, as you can see, given infinite time we'll progress towards a limit -- wherever that happens to be -- but that limit will not be infinite.


Elon Musk is not only assuming that improvements can be modeled mathematically, but also that the rate is linear (and positive, for that matter). So the probability of his implication hinges a lot on what he does not know, and neither do I.

If that's the case then I'd say his claim that this argument is very strong is false. It's a flight of fantasy with a lot of assumptions.
ssu November 17, 2018 at 15:35 #228697
Quoting Michael
And you don't think that we operate according to algorithms of our own, albeit ones that are a product of DNA-driven cell development rather than intelligent design? How exactly do you think the human brain works? Is our mind some mystical homunculus, operating with libertarian free will, and that can only occur naturally and never artificially?

I think that you aren't grasping that this is a basic and fundamental issue in computer science and computational theory. An algorithm is simply a set of rules, and a computer follows those rules. It's about logic. Period.

There is absolutely nothing mystical here: the simple fact is that not everything is computable even in mathematics. True but uncomputable mathematical objects exist. And everything here is about the limitations of computability.

Just think a while about what you mean by "we operate according to algorithms of our own". If it were indeed so, then these algorithms could, by the definition of the term, be described to you: an algorithm is a process or set of rules to be followed in calculations or other problem-solving operations. Thus you surely could read them and understand: "OK, this is me, I react and solve things the way the algorithm says". However, and here comes the absolutely crucial part, for the algorithm to be an algorithm of the kind computers use, it must also tell how you react to it, how you learn from seeing this algorithm. Now people might argue that because you are conscious, or have 'free will', or yadda yadda, you can look at this algorithm, this set of rules, and do something else: take it in as a whole, learn from it, and change your behaviour in a way that isn't in the algorithm. There's nothing mystical here. You simply aren't using an algorithm the way a computer does.

A computer or a Turing machine cannot do that. It just follows a set of rules. If you think that a computer can overcome this problem, then congratulations! You have just shown that Turing's halting problem and a multitude of incompleteness results in mathematics are false.
Michael November 17, 2018 at 15:45 #228699
Quoting ssu
However, and here comes the absolutely crucial part: for the algorithm to be an algorithm, it must tell how you react to it


What does it mean for an algorithm to tell a computer how to react? If we look at the actual physics of it all it just reduces to the basic principle of cause and effect. A particular kind of input produces a particular kind of output. Talking about these causal chains as being an algorithm is just a useful abstraction. But the basic principle is the same whether we're dealing with how a computer behaves or with how the human brain behaves. There's no special substance in human brains that makes them behave in acausal ways that are impossible in principle to reproduce artificially.

Unless you want to argue for the libertarian's "agent causation", and that this "agent causation" can only occur in naturally grown organisms. Do you?
SophistiCat November 17, 2018 at 15:59 #228704
Quoting ssu
As I said, the Computer has to have an algorithm. It cannot do anything without an algorithm and it cannot do something that algorithm doesn't say to do. It's Limited by it's algorithm.


First of all, most computer programs are algorithms that process data, so it is not just an algorithm that you put in - it is algorithm plus data, and data can bring in potentially unlimited information. Deep learning programs are already pretty impressive, to the point that they can fool some of the people some of the time. Second, what is to stop a computer from creating new algorithms, or indeed from evolving its own algorithms in response to inputs? That sounds suspiciously like what the brain is doing, and indeed that is the direction that some of the more advanced machine learning is taking.

Your argument is: computers just follow predefined rules. But if you are a physicalist, i.e. you believe that the world we live in is regular through and through, with no place for magic and the supernatural, then everything in this world - including you - just follows predefined rules (whether or not those rules were predefined by some sentient being is irrelevant to this discussion, as far as I can see).

Now, whether everything in the world can be computed is still a hotly disputed thesis, but this conundrum cannot be resolved by pointing out that computers just follow rules - the question is much more complex than that.

Reply to Arkady I know that Pruss is pretty clever, but that argument was singularly bad. He should have just left it where Leibniz did.

Quoting Moliere
In which case, as you can see, given infinite time we'll progress towards a limit -- wherever that happens to be -- but that limit will not be infinite.


Although log(x) grows sublinearly, it doesn't have an upper limit ;) But I take your point.
Moliere November 17, 2018 at 16:04 #228708
Reply to SophistiCat Heh. It's been a few years. :D

EDIT: Just cuz it was bothering me.

y = -e^(-x) + a

That was the function I was thinking of. Superficially looks like a log curve, but has a number it approaches when you take its limit.
ssu November 17, 2018 at 16:06 #228709
Quoting Michael
What does it mean for an algorithm to tell a computer how to react?

Again:

Definitions of an algorithm:

"A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."

"An algorithm is a step by step method of solving a problem. It is commonly used for data processing, calculation and other related computer and mathematical operations."

"An algorithm is a set of instructions designed to perform a specific task."

Definitions of a computer:

"A computer is a machine or device that performs processes, calculations and operations based on instructions provided by a software or hardware Program"

So: computer follows instructions (algorithms). It doesn't do things that the instructions don't tell it to do.

Michael November 17, 2018 at 16:09 #228711
Reply to ssu You haven’t addressed the substance of my post.
ssu November 17, 2018 at 16:10 #228712
Quoting SophistiCat
First of all, most computer programs are algorithms that process data, so it is not just an algorithm that you put in - it is algorithm plus data, and data can bring in potentially unlimited information.
Yet how to handle the data has to be in the algorithm. There surely can be feedback loops even in very simple computer programs - in the old days these were called cybernetic systems - and there is a myriad of other ways in which computers "learn" from the given data. Yet for that learning there has to be a specific algorithm.

Or let's put this another way. Give me an example of a computer that doesn't follow an algorithm, instructions provided by a software or hardware program as said above.

ssu November 17, 2018 at 16:18 #228714
Quoting Michael
You haven’t addressed the substance of my post.

Neither have you my post.

You see, the definition of a computer, a Turing machine, matters. It's not a synonym for agent. Computation has its limits. You simply cannot argue that because there is cause and effect, because there is this "black box" between input and output, everything is computable. This actually isn't a thing about consciousness or free will, but about the limitations of computation.
SophistiCat November 17, 2018 at 16:24 #228716
Quoting ssu
Or let's put this another way. Give me an example of a computer that doesn't follow an algorithm, instructions provided by a software or hardware program as said above.


Why? What would that prove?
ssu November 17, 2018 at 16:39 #228718
Quoting SophistiCat
Why? What would that prove?

I would say then that computers really can think, but I assume that I would be just confusing you.

Ok. Assume a computer that you give a program to run. The computer first follows the program, yet later you find it running a totally different program, which wasn't at all described in the first program to be done.
Michael November 17, 2018 at 16:44 #228720
Quoting ssu
You simply cannot argue that because there is cause and effect, because there is this "Black box" in between input and output the two everything is computable.


I’m not saying that. I’m saying that human brains are not in principle impossible to manufacture and that unless there really is some magic involved then if we reproduce the material and the behaviour then consciousness will result. We can then manipulate this artificial brain’s experiences by stimulating the relevant neurons, just as we can to a limited extent in real people already.

Whether or not you want to call this artificial brain a biological computer or its experiences a simulation is an irrelevant semantic matter and of no concern to the scenarios described by Bostrom’s trilemma.
ssu November 17, 2018 at 17:00 #228724
Quoting Michael
I’m saying that human brains are not in principle impossible to manufacture and that unless there really is some magic involved then if we reproduce the material and the behaviour then consciousness will result. We can then manipulate this artificial brain’s experiences by stimulating the relevant neurons, just as we can to a limited extent in real people already.

Fair enough. I'm not implying that there is any magic either, only that our current Turing machines called computers have severe limitations as accurate models of how we function. Of course in many ways they can model us, that's for sure.

Quoting Michael
Whether or not you want to call this artificial brain a biological computer or its experiences a simulation is an irrelevant semantic matter

I wouldn't call it an irrelevant semantic matter, as a computer does have a specific definition. Now, if you use the term AI, you aren't implying anything specific about how the AI operates, but by calling it a computer you do, because (as I've said now many times) a computer has a definition. Just as in earlier historical times people assumed humans to be just advanced mechanical devices.


Forgottenticket November 17, 2018 at 20:34 #228814
Quoting Marchesk
I'm not sure where the neuroscientists fall on this on average, but I would guess they're a bit more reserved about making such assumptions.


Well, the binding problem is still unresolved (though some, like Dennett, say it isn't real), so it's not as if there are specific criteria for it.

SophistiCat November 17, 2018 at 21:21 #228847
Quoting ssu
Ok. Assume a computer that you give a program to run. The computer first follows the program, yet later you find it running a totally different program, which wasn't at all described in the first program to be done.


Yes, that's what evolutionary algorithms do: they modify part of their own code (the other part you may think of as the environment, which is subject to unchanging rules).
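A minimal sketch of that mutate-and-select loop (a toy OneMax hill-climber, invented for illustration and not any particular library's API): the program rewrites the data it interprets, which is the sense in which such algorithms "modify their own code".

```python
import random

random.seed(0)

def fitness(genome):
    return sum(genome)          # OneMax: count the 1-bits

def evolve(length=20, generations=500):
    genome = [0] * length
    for _ in range(generations):
        child = genome[:]
        child[random.randrange(length)] ^= 1   # mutate one random bit
        if fitness(child) >= fitness(genome):  # selection keeps improvements
            genome = child
    return genome

best = evolve()   # converges to the all-ones genome
```

Note that the mutation rule itself is fixed; only the evolving genome changes, which is the distinction ssu presses below.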
ssu November 18, 2018 at 00:13 #228878
Quoting SophistiCat
Yes, that's what evolutionary algorithms do: they modify part of their own code

In the way the algorithm instructs them to.

Notice the part "which wasn't at all described in the first program to be done". That part means it's not following the instructions; it's not modifying its code in the way it was instructed to.
jorndoe November 18, 2018 at 02:52 #228903
What's the difference between consciousness and simulated consciousness anyway?
Simulated suggests crafted intentionally by someone else.
If that's the only difference, then "simulated" has little bearing on consciousness itself, just the circumstances.
Shawn November 18, 2018 at 03:00 #228906
Quoting jorndoe
What's the difference between consciousness and simulated consciousness anyway?


Hi jorndoe. :)

None, as I see it.
SophistiCat November 18, 2018 at 08:13 #228936
Quoting ssu
In the way the algorithm instructs them to.

Notice the part "which wasn't at all described in the first program to be done". That part means it's not following the instructions; it's not modifying its code in the way it was instructed to.


I am still trying to understand where (if anywhere) you are leading with these requirements for programs that spring into existence fully formed out of the blue. Are you trying to say that consciousness is a miracle? Many do think so, but why beat around the bush? Just come out and say it and we will be done.
ssu November 18, 2018 at 12:30 #228955
Quoting SophistiCat
I am still trying to understand where (if anywhere) you are leading with these requirements for programs that spring into existence fully formed out of the blue.

No. I'm just explaining the limitations of computation and using algorithms.

Quoting SophistiCat
Are you trying to say that consciousness is a miracle?

Again no. Look, if I were to say that not everything is purely mechanical and can be modelled to work as clock-work, would that mean that I'm implying that there are miracles?
Queen Cleopatra November 18, 2018 at 14:52 #228985
I've always wondered, what reasons are there to suppose that life could be simulations or illusions? The arguments in favour of such are often well presented but none expressly identifies a reason for venturing into such propositions. Is it just a sort of way to balance the speculation about reality? So, instead of "what is reality?", we have "what is not reality?".
SophistiCat November 18, 2018 at 15:48 #228989
Quoting ssu
Look, if I were to say that not everything is purely mechanical and can be modelled to work as clock-work, would that mean that I'm implying that there are miracles?


Depends on how one defines miracles. If we assume the popular Humean view of miracles as violations of the laws of nature - which already implies that nature mostly behaves in law-like ("mechanical") fashion - then yes, that is what you are implying.
A Seagull November 18, 2018 at 16:07 #228994
Reply to ssu Reply to ssu

Only an idealised computer will follow its instructions, and only those instructions.

A real computer responds to input data according to its physical characteristics, which may not follow the specified instructions (algorithms) perfectly. There can be problems with the hardware, problems with the software, and intrinsic logical problems such as stack overflow, all of which conspire to produce output that was not intended by the programmer.
Michael Ossipoff November 18, 2018 at 17:45 #229017
Reply to Posty McPostface

The simulation theory doesn't hold up.

It attributes magical powers to the transistor-switchings in some computer.

That computer can duplicate and display, for its audience, a hypothetical world-story, but it doesn't make there be that world-story. It's already "there", as a system of inter-referring abstract implications.

Why is it that people are so prone to believe the simulation-theory, but rebel at the suggestion that our world, and the objective physical facts in our experience, are a hypothetical story consisting of a complex system of inter-referring abstract implications?

Evidently people firmly believe that there must be, even in this logically-inter-dependent realm, some absolute objective existence at the basis of it all. ...that either this physical world has objective absolute existence, or that, at least, at the bottom of the whole hierarchy of simulations, there is some absolutely, objectively existent physical world.

...unless some people believe in an infinite regress of simulations, with no definite objectively absolutely existent world at the bottom of it. But if you believe that, and if you agree that all of the apparent physical worlds don't have an objective absolute physical basis, then why do you need all those (nonexistent) computers to make it be?

I suggest that this physical world, as the setting for your life-experience-story, is a figment of logic. And it doesn't need a computer for its (non) existence.

Michael Ossipoff





SteveKlinko November 18, 2018 at 18:30 #229023
See http://TheInterMind.com for more on the terminology of the following:

Even if Reality is a Simulation we obviously still have Conscious Experiences of that Reality. So there is probably still a Conscious Mind doing the Experiencing in Conscious Space. There is probably still an Inter Mind, but it would now connect the Conscious Mind to the Simulation instead of to a Physical Mind.

There are two basic types of Simulations that we can talk about. One type is a Simulation that just runs, with us being helpless observers having no ability to affect things that are happening in the Simulation. This means that all our desires, strivings, and actions are just something we experience, but we really can't do anything about anything. The Simulation makes us think we have desires and strivings and that we can do things. In this type of Simulation the Conscious Mind would have no Volitional connections back to the Simulation, and would only have connections from the Simulation to the Inter Mind and then to the Conscious Mind.

In the other type, the Conscious Mind can, through Volitional connections through the Inter Mind and to the Simulation, affect things in the Simulation, similar to how the Conscious Mind can, through the Inter Mind, affect things in Physical Space. The Simulation will make us believe we are actually in Physical Space, but there would be no difference for us if we were in an Actual Physical Universe or a Simulated Physical Universe. The take-away from this is that it doesn't matter if the Inter Mind is connected to a Brain or to a Simulation.


ssu November 18, 2018 at 18:34 #229024
Quoting SophistiCat
Depends on how one defines miracles. If we assume the popular Humean view of miracles as violations of the laws of nature - which already implies that nature mostly behaves in law-like ("mechanical") fashion - then yes, that is what you are implying.

Yet we know that reality cannot always be accurately modelled as a clockwork mechanical universe. Quantum physics and relativity have their merits in making better models of reality.

Quoting A Seagull
Only an idealised computer will follow its instructions and only those instructions.

A real computer will respond to input data according to its physical characteristics which may not follow the specified instructions (algorithms) perfectly. There can be problems with the hardware, problems with the software , intrinsic logical problems such as stack overflow all of which conspire to produce output that is not intended by the programmer.

Sure, we get the occasional "syntax error". But it's not intentional (or who knows, perhaps it's a clever marketing scheme that computers stop working after enough time).

Yet even if the idealised computer is basically a Turing Machine, these problems still exist. That's my basic point.

SophistiCat November 18, 2018 at 20:48 #229039
Quoting ssu
Yet we know that the reality cannot be at all times accurately modelled with the idea of a clock-work mechanical universe.


Do we? How?

Anyway, I am not going to argue for or against the laws of nature. If you believe that conscious beings are outside any general order of things, then obviously you will reject the simulation conjecture for that reason alone. So there is nothing to talk about.
apokrisis November 18, 2018 at 20:59 #229040
Quoting Michael
Give a computer a Hex code of 000000, have it add FF, and the result is 0000FF. This is the hex code for blue, and it tells the computer to turn on the blue lamps that each make up part of a pixel.

Only in our scenario that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience.


Love it. A computer can be programmed to operate a light switch. Therefore a conscious computer is possible. [Hands wave furiously.]

So how is it that neural firing would "look blue"? How is this little trick achieved? What is it that we know "in principle" here that would warrant your extrapolation?
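As an aside, the arithmetic in the quoted example is easy to verify (nothing here bears on the consciousness question, of course):

```python
# checking the quoted arithmetic: start from black (0x000000), add 0xFF
start = 0x000000
result = start + 0xFF            # = 0x0000FF = 255
color = f"#{result:06X}"         # "#0000FF", the RGB code for pure blue
```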







Michael November 18, 2018 at 21:14 #229044
Quoting apokrisis
So how is it that neural firing would "look blue"? How is this little trick achieved?


We don't know how but we know it happens in us. What makes our brains so special that the same effect can't be achieved artificially? You're making the brain out to be a miracle as SophistiCat mentioned earlier.

Quoting apokrisis
Love it. A computer can be programmed to operate a light switch. Therefore a conscious computer is possible. [Hands wave furiously.]


It was an analogy, not a syllogism.
apokrisis November 18, 2018 at 21:15 #229045
Getting back to the OP, the interesting thing is this idea of a simulation that would somehow be all our consciousnesses, plus the world we think we share. Is anyone stopping to think what this would entail?

What even is the hypothesis?

Is there one fake world and then somehow a whole lot of fake minds having private thoughts, feelings and understandings of it?

Or is there only one fake mind and that mind is the entire world as such, any others appearing in this world being merely fake furnishing?







SophistiCat November 18, 2018 at 21:30 #229049
Reply to apokrisis Heh, that's a good point. I suppose that if you were only simulating one mind, you could make your simulation domain smaller than if you were, say, simulating the entire population of the earth.
apokrisis November 18, 2018 at 21:47 #229056
Quoting Michael
We don't know how, but we know it happens in us.


If you can't say anything to bridge this explanatory gap then you can't claim anything "in principle" here. That's pretty straightforward.

I'm not denying that we can take a biologically inspired approach to "computation". Neural network approaches already do.

But you can't offer a Turing Machine example - hex code to operate a switch - and freely extrapolate from that. You have to show that biology is in principle doing that kind of computation.

And as I said - and as you have ignored - we know enough about biology to see that it relies on material instability, while TMs, and machines in general, rely on material stability.

So biology is essentially relational. It is about informational constraints on material dissipation. The overall organisation is emergent.

While computation is essentially dualistic. The software is informationally isolated from the material hardware needed to implement it. Where biology is about an intimate sensitivity to the material conditions of its being, computing is the precise opposite - the ability to completely disregard those material conditions.

If you are wanting to make "in principle" claims, then that basic difference is where you have to start.

Computation is nothing more than rule-based pattern making. Relays of switches clicking off and on. And the switches themselves don't care whether they are turned on or off. The physics is all the same. As long as no one trips over the power cord, the machine will blindly make its patterns. What the software is programmed to do with the inputs it gets fed will - by design - have no impact on the life the hardware lives.

Now from there, you can start to build biologically-inspired machines - like neural networks - that have some active relation with the world. There can be consequences and so the machine is starting to be like an organism.

But the point is, the relationship is superficial, not fundamental. At a basic level, this artificial "organism" is still - in principle - founded on material stability and not material instability. You can't just wave your hands, extrapolate, and say the difference doesn't count.
Michael November 18, 2018 at 21:51 #229057
Reply to apokrisis I've been talking about using biological material rather than inorganic matter, so the above is irrelevant.
Michael November 18, 2018 at 21:52 #229058
Quoting apokrisis
If you can't say anything to bridge this explanatory gap then you can't claim anything "in principle" here.


Says the man who keeps saying that it's impossible in principle for a machine to be conscious?

All I'm saying is that the brain isn't a miracle. Consciousness happens because of ordinary (even if complex) physical processes. If these processes can happen naturally then a sufficiently advanced civilization should be able to make them happen artificially. Unless consciousness really is magic and only ordinary human reproduction can bring it about?
apokrisis November 18, 2018 at 21:54 #229060
Quoting SophistiCat
I suppose that if you were only simulating one mind, you could make your simulation domain smaller than if you were, say, simulating the entire population of the earth.


I see the problem as being not just a difference in scale but one of kind. If you only had to simulate a single mind, then you don't even need a world for it. Essentially you are talking solipsistic idealism. A Boltzmann brain becomes your most plausible physicalist scenario.

But how does it work if we have to simulate a whole collection of minds sharing the one world? Essentially we are recapitulating Cartesian substance dualism, just now with an informational rather than a physicalist basis.

It should be telling that the Simulation Hypothesis so quickly takes us into the familiar thickets of the worst of sophomoric philosophy discussions.
apokrisis November 18, 2018 at 22:04 #229064
Quoting Michael
Says the man who keeps saying that it's impossible in principle for a machine to be conscious?


What I keep pointing out is the in principle difference that biology depends on material instability while computation depends on material stability. So yes, I fill in the gaps of my arguments.

See .... https://thephilosophyforum.com/discussion/comment/68661

Quoting Michael
I've been talking about using biological material rather than inorganic matter so the above is irrelevant.


It can't be irrelevant if you want to jump from what computers do - hex code to throw a switch - to what biology might do.

If you want to instead talk about "biological material", then please do so. Just don't claim biology is merely machinery without proper support. And especially not after I have spelt out the "in principle" difference between machines and organisms.


SophistiCat November 18, 2018 at 22:07 #229066
Quoting apokrisis
I see the problem as being not just a difference in scale but one of kind. If you only had to simulate a single mind, then you don't even need a world for it. Essentially you are talking solipsistic idealism. A Boltzmann brain becomes your most plausible physicalist scenario.


Well, yes, you do need a world even for a single mind - assuming you are simulating the mind of a human being, rather than a Boltzmann brain, which starts in an arbitrary state and exists for only a fraction of a second. Solipsism is notoriously unfalsifiable, which means that there isn't a functional difference between the world that only exists in one mind and the "real" world. But if you are only concerned about one mind, then you can maybe bracket off/coarse-grain some of the world that you would otherwise have to simulate. Of course, that is assuming that your simulation allows for coarse-graining.
apokrisis November 18, 2018 at 22:26 #229068
Quoting SophistiCat
But if you are only concerned about one mind, then you can maybe bracket off/coarse-grain some of the world that you would otherwise have to simulate.


Sure. Just simulating one mind in its own solipsistic world of experience is the easy thing to imagine. I was asking about the architecture of a simulation in which many minds are sharing a world. How could that work?

And also the Simulation Hypothesis generally asks us to believe the simplest compatible story. So once we start going down the solipsistic route, then a Boltzmann brain is the logical outcome. Why would you have to simulate an actual ongoing reality for this poor critter when you could just as easily fake every memory and just have it exist frozen in one split instant of "awareness"?

Remember Musk's particular scenario. We are in a simulation that spontaneously arises from some kind of "boring" computational multiverse substrate. So simulating one frozen moment is infinitely more probable than simulating a whole lifetime of consciousness.

I'm just pointing out that half-baked philosophy ain't good enough here. If we are going to get crazy, we have to go the whole hog.
apokrisis November 18, 2018 at 22:34 #229069
Quoting Michael
Consciousness happens because of ordinary (even if complex) physical processes. If these processes can happen naturally then a sufficiently advanced civilization should be able to make them happen artificially.


Sure. Nature produced bacteria, bunny rabbits, the human brain. This stuff just developed and evolved without much fuss at all.

Therefore - in principle - it is not impossible that, if you wait long enough and let biology do its thing, the Spinning Jenny, the Ford Model T and the Apple iPhone will also just materialise out of the primordial ooze.

It's a rational extrapolation. Sufficiently severe evolutionary pressure should result in every possible instance of a machine. It's just good old physics in action. Nothing to prevent it happening.




ssu November 19, 2018 at 12:21 #229196
Quoting SophistiCat
Do we? How?

Are you serious? Well, to give an easy example: if you modelled reality with just Newtonian physics, your GPS system wouldn't be as accurate as the one we have now, which takes relativity into account. And there are a multitude of other examples where the idea of reality as a clockwork mechanical system doesn't add up.
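For the curious, the GPS point can be checked on the back of an envelope with standard textbook values (the constants and orbital radius below are approximate reference figures, not mission data):

```python
# back-of-envelope check of the GPS relativity correction
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M   = 5.972e24    # Earth mass, kg
c   = 2.998e8     # speed of light, m/s
R   = 6.371e6     # Earth radius, m
r   = 2.657e7     # GPS orbital radius (~20,200 km altitude), m
day = 86400.0     # seconds per day

v  = (G * M / r) ** 0.5                    # orbital speed, ~3.9 km/s
sr = (v**2 / (2 * c**2)) * day             # special relativity: satellite clock runs slow
gr = (G * M / c**2) * (1/R - 1/r) * day    # general relativity: satellite clock runs fast
net_us = (gr - sr) * 1e6                   # net drift, microseconds per day (~ +38)
```

Uncorrected, a drift of roughly +38 microseconds per day would accumulate into kilometres of position error daily, which is why the satellite clocks are deliberately tuned for relativistic effects.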

Quoting SophistiCat
If you believe that conscious beings are outside any general order of things, then obviously you will reject the simulation conjecture for that reason alone. So there is nothing to talk about.

That has to be the strawman argument of the month. Where did I say "conscious beings are outside any general order of things"?

Definitions do matter. If we talk about Computers, then the definition of how they work, that they follow algorithms, matters too. Apokrisis explains this very well on the previous page:

Quoting apokrisis
Computation is nothing more than rule-based pattern making. Relays of switches clicking off and on. And the switches themselves don't care whether they are turned on or off. The physics is all the same. As long as no one trips over the power cord, the machine will blindly make its patterns. What the software is programmed to do with the inputs it gets fed will - by design - have no impact on the life the hardware lives.

Now from there, you can start to build biologically-inspired machines - like neural networks - that have some active relation with the world. There can be consequences and so the machine is starting to be like an organism.

But the point is, the relationship is superficial, not fundamental. At a basic level, this artificial "organism" is still - in principle - founded on material stability and not material instability. You can't just wave your hands, extrapolate, and say the difference doesn't count.





SophistiCat November 19, 2018 at 12:47 #229200
Quoting apokrisis
And also the Simulation Hypothesis generally asks us to believe the simplest compatible story. So once we start going down the solipsistic route, then a Boltzmann brain is the logical outcome. Why would you have to simulate an actual ongoing reality for this poor critter when you could just as easily fake every memory and just have it exist frozen in one split instant of "awareness"?

Remember Musk's particular scenario. We are in a simulation that spontaneously arises from some kind of "boring" computational multiverse substrate. So simulating one frozen moment is infinitely more probable than simulating a whole lifetime of consciousness.


You need enormous probabilistic resources in order to realize a Boltzmann brain. AFAIK, according to mainstream science, our cosmic neighborhood is not dominated by BBs. BBs are still a threat in a wider cosmological modeling context, but if the hypothetical simulators just simulate a random chunk of space of the kind that we find ourselves in, then BBs should not be an issue.
SophistiCat November 19, 2018 at 13:35 #229209
Quoting ssu
Are you serious? Well, to give an easy example: if you modelled reality with just Newtonian physics, your GPS system wouldn't be as accurate as the one we have now, which takes relativity into account.


And if you do it with Lego blocks it will be less accurate still (funnier though). But I am not sure what your point is. Do you suppose that computers are limited to simulating Newtonian physics? (That's no mean feat, by the way: some of the most computationally challenging problems that are solved by today's supercomputers are nothing more than classical non-relativistic fluid dynamics.)

Quoting ssu
That has to be the strawman argument of the month. Where did I say "conscious beings are outside any general order of things"?

Definitions do matter. If we talk about Computers, then the definition of how they work, that they follow algorithms, matters too.


Well, the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.

Quoting ssu
Apokrisis explains this very well on the previous page


I am not sure what that business with instability is about, but I haven't really looked into this matter. I know that the simulation hypothesis is contentious - I have acknowledged this much.
apokrisis November 19, 2018 at 20:11 #229320
Quoting SophistiCat
...according to mainstream science, our cosmic neighborhood is not dominated by BBs.


According to mainstream science, we ain’t a simulation either. We were talking about Musk’s claim, which involves “enormous probabilistic resources”. The BB argument then becomes one way that the claim blows itself up. If it is credible that some "boring substrate" generates simulated realities, then the simulation we are most likely to inhabit is the one that is most probable, in requiring the least of this probabilistic resource.

The fact that this then leads to the BB answer - that the simulation is of a single mind's single frozen moment - shows how the whole simulation hypothesis implodes under its own weight.

I'm just pointing out the consequences of Musk's particular line of argument. He doesn't wind up with the kind of Matrix style simulation of many fake minds sharing some fake world in a "realistic way" that he wants.

And even if the "substrate" of that desired outcome is some super-intelligent race of alien mad scientists building a simulation in a lab, then I'd still like to know how the actual architecture of such a simulation would work.

As I said, one option essentially recapitulates idealism, the other substance dualism. And both outcomes ought to be taken as a basic failure of the metaphysics. We can learn something from that about how muddle-headed folk are about "computational simulation" in general.

apokrisis November 19, 2018 at 21:23 #229385
Quoting SophistiCat
I am not sure what that business with instability is about,


I explained in this post how biology - life and mind - is founded on the regulation of instability.

Biology depends on molecules that are always on the verge of falling apart (and equally, just as fast reforming). And so the hardware of life is the precise opposite of the hardware suitable for computing. Life needs a fundamental instability as that then gives its "information" something to do - ie: create the organismic-level stability.

So from the get-go - down at the quasi-classical nanoscale of organic chemistry - semiosis is giving the biophysics just enough of a nudge to keep the metabolic "machinery" rebuilding itself. Proteins and other constituents are falling together slightly more than they are falling apart, and so the fundamental plasticity is being statistically regulated to produce a long-running, self-repairing, stable organism.

The half-life of a microtubule - a basic structural element of the cell - is about 10 minutes. So a large proportion of what was your body (and brain) this morning will have fallen apart and rebuilt itself by the time this evening comes around.

This is molecular turn-over. All the atoms that make you you are constantly being churned. So whatever you might remember from your childhood would have to be written into neural connections that have got washed away and rebuilt - more or less accurately, you hope - trillions of times.

The issue is then whether this is a bug or a feature. Machinery-minded folk would see molecular turnover as some kind of basic problem that biology must overcome with Herculean effort. If human scientists are going to reverse-engineer intelligence, the first thing they would want to do is start with some decently stable substrate. They wouldn't look for a material that just constantly falls apart, even if it is also just as constantly reforming as part of some soupy chemical equilibrium.

But this is just projecting our machine prejudices onto the reality of living processes. We are learning better now. It is only because of soupy criticality that the very possibility of informational regulation could be a thing. Instability of the most extreme bifurcating kind brings with it the logical possibility of its control. Instability produces uncontrolled switching - a microtubule unzipping into its parts, and also rezipping, quite spontaneously. All you need then is some kind of memory mechanism, some kind of regulatory trick, which can tip the soupy mix in a certain direction and keep it rebuilding just a little faster than it breaks down.
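The statistical regulation described above can be caricatured in a few lines: units that constantly break down, plus a slight regulatory bias toward rebuilding, yield a stable overall structure. The rates here are invented for illustration, not measured biology.

```python
import random

random.seed(1)

# toy model: N sites, each assembled (1) or disassembled (0).
# The raw "chemistry" is unstable -- everything churns every step --
# but a bias toward reassembly produces a stable aggregate structure.
N, steps = 1000, 500
p_break, p_rebuild = 0.10, 0.30    # assumed per-step turnover rates

sites = [0] * N
for _ in range(steps):
    for i in range(N):
        if sites[i]:
            if random.random() < p_break:
                sites[i] = 0       # falls apart
        elif random.random() < p_rebuild:
            sites[i] = 1           # reforms

# steady-state occupancy ~ p_rebuild / (p_break + p_rebuild) = 0.75,
# stable at the aggregate level despite constant turnover of parts
occupancy = sum(sites) / N
```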

So this is a fundamental metaphysical fact about reality. If you have radical instability, that brings with it the very possibility of stabilising regulation. Chaos already plants the seeds of its own ordering.

An engineer wants solid foundations. Machines need stable parts that won't immediately fall apart. But life and mind want the opposite. And so right there you have a causal and metaphysical-level difference that artificial mind or artificial life has to deal with.

Silicon switches are just not the right stuff as, by design, there is minimal chance of them entropically falling apart, and even less chance that they will negentropically put themselves back together.

Yet every part of every cell and neuron in your body is doing this all day long. And knowing how to do this is fundamental to the whole business of existing as an informational organism swimming in a flow of environmental entropy.

Life and mind can organise the material world, bend its erosive tendencies to their own long-term desires. This is the basic scientific definition of life and mind as phenomena. And you can see how machine intelligence or simulated realities are just not even playing the game. The computer scientists - playing to the gullible public - haven't got a clue of how far off they are.

Quoting SophistiCat
Well, the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.


You see here how readily you recapitulate the "everything is really a machine" meme. And yet quantum physics shows that even material reality itself is about the regulation of instability.

Atomism is dead now. Classicality is emergent from the fundamental indeterminism of the quantum realm. Stability is conjured up statistically, thermodynamically, from a basic instability of the parts.

The simulation hypothesis takes the world to be stably classical at some eventual level. There is some fixed world of atomistic facts that is the ground. And then the only problem to deal with is coarse-graining. If we are modelling reality, how much information can we afford to shed or average over without losing any essential data?

When it comes to fluid turbulence, we know that it has a lot of non-linear behaviour. Coarse-graining can miss the fine detail that would have told us the process was on some other trajectory. But the presumption is that there is always finer detail until eventually you could arrive at a grain where the reality is completely deterministic. So that then makes coarse graining an epistemic issue, not ontic. You can choose to live with a degree of imprecision in the simulation as close is good enough for all practical purposes.

That mindset then lets you coarse-grain simulate anything. You want to coarse-grain a model of consciousness? Sure, fine. The results might look rather pixellated, not that hi res, as a first go. But in principle, we can capture the essential dynamics. If we need to, we can go back in and approach the reality with arbitrary precision .... because there is a classically definite reality at the base of everything to be approached in this modelling fashion.
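The worry about coarse-graining non-linear dynamics is easy to demonstrate with a standard toy system (the chaotic logistic map, chosen here for illustration): shedding a few decimal places of the initial state eventually produces a visibly different trajectory.

```python
# chaotic dynamics amplify whatever fine detail coarse-graining discards
r = 4.0                         # fully chaotic regime of the logistic map
x_fine   = 0.123456789
x_coarse = round(x_fine, 4)     # "coarse-grain": keep only 4 decimal places

max_gap = 0.0
for step in range(60):
    x_fine   = r * x_fine   * (1 - x_fine)
    x_coarse = r * x_coarse * (1 - x_coarse)
    if step >= 40:              # by now the tiny rounding error has blown up
        max_gap = max(max_gap, abs(x_fine - x_coarse))
```

An initial discrepancy of about 4e-5 roughly doubles each iteration, so the two trajectories decorrelate completely within a few dozen steps; close is not good enough for all practical purposes here.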

For engineers, this mindset is appropriate. Their job is to build machines. And part of their training is to get some real world feel for how the metaphysics of coarse-graining can turn around and bite them on the bum.

But if we are talking about bigger philosophical issues, then we have to drop the belief that reality is actually some kind of physical machine. Its causality is irreducibly more complex than that. Both biology and physics tell us that now.






ssu November 19, 2018 at 22:39 #229413
Quoting SophistiCat
And if you do it with Lego blocks it will be less accurate still (funnier though). But I am not sure what your point is.

Ok, I'll try to explain again. Thanks for taking the interest, and hopefully you'll get through this long answer. Let's look at the basic argument, the one that you explain the following way:

Quoting SophistiCat
the idea behind the simulation hypothesis is that (a) there is a general, all-encompassing order of things, (b) any orderly system can be simulated on a computer, and possibly (c) the way to do it is to simulate it at its most fundamental level, the "theory of everything" - then everything else, from atoms to trade wars, will automatically fall into place. All of these premises can be challenged, but not simply by pointing out the obvious: that computers only follow instructions.


Ok, the question is about premiss (b): any orderly system can be simulated on a computer.

Because given the definition you yourself just gave above, premiss (b) can be rewritten as: (b) any orderly system can be simulated by only following instructions. And by "following instructions" we mean using algorithms, computing.

And the following, so don't get carried away to other things, is plain mathematics. There exist non-computable but true mathematical objects. You can call these orderly systems too. The math is correct, there is a correct model of them, they aren't mystical; the only thing is that they are simply uncomputable. Now if a computer has to compute them, it obviously cannot do it.

So how do we ask a computer something for which there exists a correct model, but which it cannot compute? Well, simply by a situation where the correct answer depends on what the computer doesn't do, in other words, negative self-reference. You get this with Turing's Halting Problem. Now you might argue that this is quite far-fetched, but actually it isn't when the computer has to interact with the outside world, when it has to take into account the effects of its own actions. In the vast majority of cases this isn't a problem (taking its own effects on the modelled system into account): you can deal with it with machine learning, or basically a cybernetic system, a feedback loop.

With negative self-reference you cannot do it. And notice, you don't need consciousness or anything mystical of that sort here (so please stop saying that I'm implying this). The basic problem is that as the computer has an effect on what it is modelling, its actions make it a subject, while the mathematical model ought to be objective. Sometimes it's still possible to give the correct model and the problem of subjectivity can be avoided, but not with negative self-reference.

I'll give you an example of the problem of negative self-reference: try to say or write down a sentence in English that you never in your life have said or will say or write. Question: do such sentences exist? Yes, surely, as your life, like mine, is finite. The thing is that you cannot say them, though I or others here can. Computation has simple logical limits to self-reference. I can give other examples of this, for instance a problem of forecasting the correct outcome when there obviously is one, but it cannot be computed.

When you think about it, this is the problem of the instruction "do something else than what is in the instructions" for a computer. If there isn't a further instruction on what to do when confronted with this kind of instruction, the computer cannot obey it, because "do something else" is not in the instructions. Doing something else means negative self-reference to the instructions the computer is following.
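The "do something else than the instructions say" situation is exactly the shape of the halting-problem diagonalization. A minimal sketch in Python; the `halts` oracle here is hypothetical, and that is the whole point of the construction:

```python
def halts(program, arg):
    """Hypothetical oracle: would return True iff program(arg) halts.
    No total, correct implementation can exist -- see below."""
    raise NotImplementedError("no algorithm can implement this")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about running
    `program` on itself: negative self-reference made literal."""
    if halts(program, program):
        while True:        # predicted to halt -> loop forever
            pass
    return "halted"        # predicted to loop -> halt at once

# Feed contrary to itself: contrary(contrary) halts if and only if
# halts(contrary, contrary) says it doesn't. Whatever answer the
# oracle gives is wrong, so halts() cannot exist as a computation.
```

Any attempt to run `contrary(contrary)` with a real `halts` would force the oracle into a wrong prediction; with the stub above it simply raises, which is the honest outcome.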

Why is this important? Because interaction with the world is filled with these kinds of problems, and assuming that one can compute them all, solve them by following simple instructions, is difficult when the problems start from mathematical logic itself. It's like arguing that everything is captured by Newtonian physics when it isn't so.

apokrisis November 19, 2018 at 23:20 #229433
Quoting ssu
What the basic problem is that as the Computer has an effect on what it is modelling, it's actions make it a subject while the mathematical model, ought to be objective. Sometimes it's possible stll to give the correct model and the problem of subjectivity can be avoided, but not with negative self reference.


I agree with this but would also point out how it still doesn't break with the reductionist presumption that this fact is a bug rather than a feature of physicalist ontology.

So it is a problem that observers would introduce uncertainty or instability into the world being modelled and measured. And being a problem, @Michael and @SophistiCat will feel correct in shrugging their shoulders and replying that coarse-graining can ignore the fact, for all practical purposes. The problem might well be fundamental and ontic. But it also seems containable. We just have to find ways to minimise the observer effect and get on with our building of machines.

I am taking the more radical position of saying both biology and physics are fundamentally semiotic. The uncertainty and instability is the ontic feature which makes informational regulation even a material possibility. It is not a flaw to be contained by some clever trick like coarse graining. It is the resource that makes anything materially organised even possible.

Self-reference doesn't intrude into our attempts to measure nature. Nature simply is self-referential at root. In quantum terms, it is contextual, entangled, holistic. And from there, informational constraints - as supplied for instance by a cooling/expanding vacuum - can start to fragment this deep connectedness into an atomism of discrete objects. A classical world of medium-sized dry goods.

The observer effect falls out of the picture in emergent fashion, although human observers can restore that fundamental quantum holism by experimental manipulation, recreating the world as it is when extremely hot/small.







ssu November 20, 2018 at 10:48 #229552
Quoting apokrisis
I agree with this but would also point out how it still doesn't break with the reductionist presumption that this fact is a bug rather than a feature of physicalist ontology.

So it is a problem that observers would introduce uncertainty or instability into the world being modelled and measured. And being a problem, Michael and @SophistiCat will feel correct in shrugging their shoulders and replying coarse-graining can ignore the fact - for all practical purposes. The problem might well be fundamental and ontic. But also, it seems containable. We just have to find ways to minimise the observer effect and get on with our building of machines.

You nailed it Apokrisis, this is exactly what has been done.

It's been the assumption that with better models, and in time, things like this can be avoided or even solved. It simply doesn't sink in that this is a fundamental, inherent problem. The only area where it has been confronted is quantum mechanics, where nobody claims that quantum mechanics and relativity are totally reducible to Newtonian mechanics, or that the problematic issues of QM can simply be avoided so that we can keep using Newtonian mechanics.

It really might seem containable, until you notice that since the 1970s computer scientists have been predicting an imminent breakthrough in AI. Of course, we still don't have true AI. We just have advanced programs that can trick us, from a limited point of view, into thinking they have it.

Quoting apokrisis
I am taking the more radical position of saying both biology and physics are fundamentally semiotic. The uncertainty and instability is the ontic feature which makes informational regulation even a material possibility. It is not a flaw to be contained by some clever trick like coarse graining. It is the resource that makes anything materially organised even possible.

That's the basic argument in this case on the mathematical side: when something is uncomputable, you really cannot compute it. It's an ontic feature that cannot be contained with some clever trick.

Quoting apokrisis
Self-reference doesn't intrude into our attempts to measure nature. Nature simply is self-referential at root. In quantum terms, it is contextual, entangled, holistic. And from there, informational constraints - as supplied for instance by a cooling/expanding vacuum - can start to fragment this deep connectedness into an atomism of discrete objects. A classical world of medium-sized dry goods.

And hence mathematical models don't work as well as in some other fields. That's the outcome. Does there exist a mathematical model for evolution? Can Darwinism be explained by an algorithm, by a computable model? Some quotes about this question:

Biological evolution is a very complex process. Using mathematical modeling, one can try to clarify its features. But to what extent can that be done? For the case of evolution, it seems unrealistic to develop a detailed and fundamental description of phenomena as it is done in theoretical physics. Nevertheless, what can we do?


Evolution is a highly complex multilevel process and mathematical modeling of evolutionary phenomenon requires proper abstraction and radical reduction to essential features.


Basically mathematical modeling is used in various ways, but there isn't the mathematical model for evolution. Now this should tell people something.
SophistiCat November 20, 2018 at 21:39 #229731
Quoting ssu
Ok, the question is about premiss (b) any orderly system can be simulated on a Computer.


Yes, after I posted that, I realized that I overreached a bit. There are indeed "regular" systems that nevertheless cannot be simulated to arbitrary precision (indeed, if we sample from all mathematically possible systems, then almost all of them are uncomputable in this sense). However, most of our physical models are "nice" like that; the question then is whether that is due to modelers' preference or whether it is a metaphysical fact. Proponents of the simulation hypothesis bet on the latter, that is that the hypothetical "theory of everything" (or a good enough approximation) will be computable.

Quoting ssu
So how do we ask a Computer something to what there exists a correct model, but it cannot compute it? Well, simply by a situation where the correct answer is depended on what the computer doesn't do, in other words, negative self-reference. You get this with Turing's Halting Problem. Now you might argue that this is quite far fetched, but actually it isn't when the computer has to interact with the outside World, when it has to take into account the effects of it's own actions. Now, in the vast majority of cases this isn't a problem (taking it's own effects into account on the system to be modelled). Yes, you can deal with it with "Computer learning" or basically a cybernetic system, a feedback loop.


It is difficult to understand what you are trying to say here, but my best guess is that you imagine a simulation of our entire universe - the actual universe that includes the simulation engine itself. That would, of course, pose a problem of self-reference and infinite regress, but I don't think anyone is proposing that. A simulation would simulate a (part of) the universe like ours - with the same laws and typical conditions.
ssu November 20, 2018 at 22:55 #229746
Quoting SophistiCat
Yes, after I posted that, I realized that I overreached a bit. There are indeed "regular" systems that nevertheless cannot be simulated to arbitrary precision (indeed, if we sample from all mathematically possible systems, then almost all of them are uncomputable in this sense). However, most of our physical models are "nice" like that; the question then is whether that is due to modelers' preference or whether it is a metaphysical fact. Proponents of the simulation hypothesis bet on the latter, that is that the hypothetical "theory of everything" (or a good enough approximation) will be computable.

Seems to me that we are finding some kind of common ground here. Cool.

So the point here is just to remember that if there is one black swan, not all swans are white. But anyway, assuming they're all white doesn't lead everything astray, as the vast majority of them are indeed white. And this is the important point: understanding the limits of the models we use gives us a better understanding of the issue at hand.

I have found it to be very useful especially in economics, because people often make the disastrous mistake of believing that economic models portray reality as well as the laws of physics explain moving billiard balls. Believe me, in the late 1990s I had an assistant yelling at me that the whole idea of speculative bubbles existing or happening in modern financial markets was totally ludicrous and hence not worth studying, because the financial markets work so well. The professor had to calm him down and say that this is something we don't know yet. But the assistant was great at math!

Quoting SophistiCat
It is difficult to understand what you are trying to say here, but my best guess is that you imagine a simulation of our entire universe - the actual universe that includes the simulation engine itself. That would, of course, pose a problem of self-reference and infinite regress, but I don't think anyone is proposing that. A simulation would simulate a (part of) the universe like ours - with the same laws and typical conditions.

I think you've got it now. But it can also be far more limited in scope: not the entire universe, just where and when the computer's actions have effects that result in this kind of loop.


Arkady November 21, 2018 at 13:53 #229975
Quoting ssu
Believe me, I had in the late 1990's an assistant yelling at me that the whole idea of there existing or happening speculative bubbles in the modern financial markets was a totally ludicrous idea and hence not worth studying, because the financial markets work so well.

Wow. And the late 1990s were the time of the dot-com bubble, so he was really missing the forest for the trees...