
General purpose A.I.: is it here?

m-theory August 19, 2016 at 03:41 18400 views 110 comments Philosophy of Mind
Recently a company called DeepMind (acquired by Google) made what many in computer science believe to be a major breakthrough in general purpose A.I.
DeepMind's system is a self-learning neural network that uses reinforcement learning (a system of rewards and penalties) inspired by how a brain works.
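To make the reward/penalty idea concrete, here is a minimal sketch of one classic reinforcement learning technique (tabular Q-learning). The toy environment and all the numbers are made up for illustration; this is not DeepMind's actual code, which combines reinforcement learning with deep neural networks.

```python
# A minimal sketch of reward/penalty reinforcement learning (tabular
# Q-learning). The environment and constants are illustrative only.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]  # learned action values
alpha, gamma, epsilon = 0.5, 0.9, 0.1             # learning rate, discount, exploration

def step(state, action):
    """Toy world: moving right is rewarded, everything else is penalized."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else -0.01  # reward/penalty signal
    return next_state, reward

state = 0
for _ in range(1000):
    # Mostly exploit the best-known action, occasionally explore at random.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Nudge the value estimate toward reward plus discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = 0 if next_state == n_states - 1 else next_state
```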



I thought I would ask what people think the philosophical implications of this might be?

Comments (110)

apokrisis August 19, 2016 at 03:57 ¶ #16582
Quoting m-theory
Is the mind an algorithm


Is a neural net strictly speaking just an algorithm? Or does it do what it can do because an anticipation-creating learning rule acts as a constraint on material dynamics?

Potentially there is a lot of equivocation in what is understood by "algorithm" here. The difference between neural nets and Turing Machines is a pretty deep one philosophically.
m-theory August 19, 2016 at 04:13 ¶ #16584
Reply to apokrisis
Is a neural net strictly speaking just an algorithm? Or does it do what it can do because an anticipation-creating learning rule acts as a constraint on material dynamics?

Here is a comprehensive lecture series on how to configure the neural network so that you can capitalize on the reinforcement learning technique developed by DeepMind.
https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLweqsIcZJac7PfiyYMvYiHfOFPg9Um82B

Potentially there is a lot of equivocation in what is understood by "algorithm" here. The difference between neural nets and Turing Machines is a pretty deep one philosophically.

Not sure what specifically your grievance is...here is the wiki link describing the Neural Turing Machine.

From the DeepMind link

The company has created a neural network that learns how to play video games in a fashion similar to that of humans,[4] as well as a Neural Turing Machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.


While it is true that neural network programming is quite a bit more advanced than typical programming...it is nonetheless reliant upon algorithms, so I don't see equivocation being a problem here.
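To illustrate the quoted description, here is a minimal sketch of the content-based memory read at the heart of a Neural Turing Machine: the network emits a key, and the read is a similarity-weighted blend of memory rows. The shapes and values are illustrative, not taken from DeepMind's implementation.

```python
# A minimal sketch of a Neural Turing Machine's content-based read:
# soft attention over an external memory. Values are illustrative.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / (norm + 1e-8)

def ntm_read(memory, key, sharpness=10.0):
    """Softmax over key/row similarity, then a weighted sum of rows."""
    scores = [math.exp(sharpness * cosine(row, key)) for row in memory]
    total = sum(scores)
    weights = [s / total for s in scores]
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # toy external memory
print(ntm_read(memory, [1.0, 0.1]))            # reads mostly the first row
```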
Perhaps I am missing something?
m-theory August 19, 2016 at 04:19 ¶ #16585
Reply to apokrisis
By the way, nice to meet up with you again, apokrisis.
:D
apokrisis August 19, 2016 at 04:22 ¶ #16586
Quoting m-theory
Perhaps I am missing something?


Yep. As your cite says: "Neural turing machines combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers."

So this is talking about a hybrid between a neural net and a Turing machine with "algorithmic power".

The distinction is important. The mind could be a neural net (neural nets might have the biological realism to do what brains do). But the mind couldn't be a Turing Machine - as biology is different in architectural principle from computation.

apokrisis August 19, 2016 at 04:25 ¶ #16587
Reply to m-theory Likewise. I believe you were about the first person I "met" on PF, talking about thermal models of time!
m-theory August 19, 2016 at 04:29 ¶ #16588
Reply to apokrisis
I think you should review the lecture series I posted...it is a detailed explanation of the exact algorithm.
There is no question that DeepMind is an algorithmic process.
I don't understand the distinction you are making.
The "fuzziness" of neural networks and other machine learning techniques just refers to the probabilistic methods being used.
Those probabilistic techniques are covered in depth in the lecture videos.

These methods are nonetheless expressed in the step-by-step format of an algorithm.
I assure you, from a computer science perspective, it is no equivocation to say that the DeepMind general purpose A.I. is an algorithm.
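To show what that "fuzziness" amounts to in practice, here is a minimal sketch: a network's raw scores are converted into a probability distribution (here via softmax), so matches come in degrees rather than as exact yes/no answers, yet the whole computation is still a step-by-step algorithm. The numbers are illustrative.

```python
# A minimal sketch of "fuzzy" probabilistic output: softmax turns raw
# scores into a probability distribution. Values are illustrative.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 1.0, 0.1]))  # roughly [0.66, 0.24, 0.10]: graded, not binary
```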
m-theory August 19, 2016 at 04:58 ¶ #16591
The interesting thing to me is that this breakthrough was possible because the mind was modeled as though it were an algorithm.

In the DeepMind example the machine learns from performing actions...it models itself as an agent that can act in the world in order to learn what to do next, and becomes more proficient through a system of rewards and penalties meant to model the reward system of the brain.
Through trial and error in the actions it takes, it approaches an optimum solution to real-world tasks.
It forms a simulation of itself acting in a world that it experiences only as raw data (simply pixels, in DeepMind's case), generates possible courses of action, then executes the action it predicts will be most beneficial.

These are all very interesting developments in the field of A.I., because earlier A.I. systems were model-dependent and humans had to hand-craft those models.
With the DeepMind system, models of reality are formed from scratch through trial-and-error actions within the world that the machine experiences.
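To make the "simulate before acting" idea concrete, here is a minimal sketch: the agent consults a learned model of the world to predict each action's outcome and executes the one with the best predicted result. The model, reward function, and values are illustrative stand-ins, not DeepMind's.

```python
# A minimal sketch of model-based action selection: imagine each action
# with a learned world model, then act on the best prediction.
# The model and reward here are illustrative stand-ins.

def learned_model(state, action):
    """Stand-in for a world model learned from trial and error."""
    return state + (1 if action == "right" else -1)

def predicted_reward(state):
    return -abs(10 - state)  # the agent has learned that state 10 is rewarding

def choose_action(state, actions=("left", "right")):
    # Simulate each action without taking it, then pick the best outcome.
    return max(actions, key=lambda a: predicted_reward(learned_model(state, a)))

print(choose_action(3))  # "right": moves toward the rewarding state
```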

tom August 19, 2016 at 09:37 ¶ #16617
Quoting m-theory
The interesting thing to me is that this breakthrough was possible because the mind was modeled as though it were an algorithm.


It's certainly an algorithm (what else could it be?) but to call the algorithm a "mind" is not very helpful, because it isn't one.

Jamal August 19, 2016 at 09:43 ¶ #16618
Quoting tom
It's certainly an algorithm (what else could it be?)


The computational theory of mind is one philosophical view among many, and it's been heavily criticized. If it's your position then cool, but don't pretend it's not a philosophical issue.
tom August 19, 2016 at 09:48 ¶ #16619
Quoting jamalrob
The computational theory of mind is one philosophical view among many, and it's been heavily criticized. If it's your position then cool, but don't pretend it's not a philosophical issue.


The program running on the laptop or the supercomputer is an algorithm. Not only are "program" and "algorithm" synonymous, but it cannot be anything but an algorithm!
m-theory August 19, 2016 at 09:50 ¶ #16620
Reply to tom
I disagree...this example is helping define exactly what the term mind should mean in a very practical way.
A way that generates results.

I have made this argument before but I will do it again here.

Either we can decide what a mind is or the question of what a mind actually is will be an undecidable problem.

If the mind is an undecidable problem then we cannot answer the question of whether we have minds or not.
If we can answer that question it will mean that there is an effective procedure/algorithm to solve that problem.

Whatever we mean by the terms mind and consciousness, we must define these terms in a way which makes them decidable by an algorithm of some sort, or we must conclude that they are undecidable problems and that we cannot be sure we have a mind or consciousness.

As far as I can tell, if we wish to have an intelligible definition of terms like mind and consciousness, then we are forced to formalize these terms in logic and math, and when we do so we will be able to create effective procedures and algorithms that are able to think, that are intelligent, and that can learn.

The alternative is to say that we cannot know if we are conscious or if we have minds.

We could not claim to know if we had minds or consciousness...is that really what you believe?
What would the philosophical implications of that be?

If you believe that you can answer yes or no to the question of whether you have a mind or consciousness, then a consequence is that, fundamentally, you believe the mind/consciousness is computable.



Jamal August 19, 2016 at 09:59 ¶ #16621
Reply to tom The question is whether the mind is an algorithm, meaning the human mind, which m-theory is suggesting might be one of the philosophical implications.
m-theory August 19, 2016 at 10:09 ¶ #16623
Reply to jamalrob
Exactly...I am asking if we have discovered the correct algorithm for what we mean by the term mind or if this is just one step in that direction.

Wayfarer August 19, 2016 at 11:23 ¶ #16635
Is the device a being? Because if it were, then you would have to recognize its rights to self-determination - you would have to ask it how it felt about being born into the world as a consequence of being manufactured by human beings, and what that means to it.

If none of that makes sense, then the likelihood is that it's a device.
m-theory August 19, 2016 at 11:51 ¶ #16637
Reply to Wayfarer
That is probably what I am most curious about: whether the algorithm could be adapted to do just as you say. Learn emotions, think emotionally, and reflect upon its own experiences in an emotional context. Right now, that appears to be quite a ways off in the future. It is certainly not a being in the sense that it is human, and I don't think it ever will be.

So I would have to say no, it can't recognize itself as a being deserving of dignity and rights, not as of now. The question of whether it could, or did, is often viewed as troubling by many who speculate on the future of A.I., and is probably a topic for another thread.

I do think it is fair to say that it is a being in the sense that it does experience, though, albeit at a very primitive level compared to human-level identity of self.

But it does model itself as an agent that acts in an environment that will react to its actions. I think it can be argued quite reasonably that it does have some concept of self as an experiencing being because it can form these models of itself acting without taking those actions.
It can think about what it will do before it does something, and it does have a sense of self as a collection of past experiences and biases.

tom August 19, 2016 at 14:18 ¶ #16655
Quoting m-theory
Either we can decide what a mind is or the question of what a mind actually is will be an undecidable problem.


But there is no such thing as an undecidable problem in physics. It is inconceivable that a "mind" could be programmed by accident i.e. that's not going to happen until we understand what constitutes a mind.

Properties that the artificial mind will possess include consciousness, qualia, creativity, and dare I say it, free will. AlphaGo possesses none of these. It is not a mind.
tom August 19, 2016 at 14:19 ¶ #16656
Quoting jamalrob
The question is whether the mind is an algorithm, meaning the human mind, which m-theory is suggesting might be one of the philosophical implications.


What else could a mind possibly be? The mind is software.
Jamal August 19, 2016 at 14:24 ¶ #16657
Reply to tom Are you joking? I already said:

Quoting jamalrob
The computational theory of mind is one philosophical view among many, and it's been heavily criticized. If it's your position then cool, but don't pretend it's not a philosophical issue.


m-theory August 19, 2016 at 14:33 ¶ #16659
Reply to tom

But there is no such thing as an undecidable problem in physics.


Yes there is.
Perhaps you failed to understand.
Algorithms are mechanical, physical things.
They are not mere abstractions...many problems exist for which there is, and can be, no mechanical solution or algorithm.
That statement is very uninformed.
Any undecidable problem is literally a physically undecidable problem.


It is inconceivable that a "mind" could be programmed by accident i.e. that's not going to happen until we understand what constitutes a mind.


The mind in this case is no accident...it was very deliberately created by modeling psychology and neurology.
As well, I pointed out before that we are never going to understand the mind if we define that term as something undecidable.
Again if you believe you can answer the question "do I have a mind/consciousness" with a yes or a no definitively and correctly...then you fundamentally believe that the mind/consciousness is a computable thing that an algorithm determines.

If you believe that the question "do I have a mind/consciousness" cannot be answered with an algorithm...then you believe fundamentally that it is an undecidable question.


Properties that the artificial mind will possess include consciousness, qualia, creativity, and dare I say it, free will. AlphaGo possesses none of these. It is not a mind.


AlphaGo is just one iteration of DeepMind's system.
I disagree with you on many of these points.
I believe the DeepMind system does possess qualia, creativity, and free will, and even some level of consciousness.
But of course that is just my opinion; I have argued in other posts why I hold that opinion.

What concerns me here, about this remark, is you simply assert this statement as though it is true without providing any justification as to why it actually is true.
Simply stating flat out that "it has none of these" is not particularly convincing if you don't bother to explain how you can know that it is true.
tom August 19, 2016 at 14:44 ¶ #16661
Quoting m-theory
Yes there is.
Perhaps you failed to understand.
Algorithms are mechanical, physical things.
They are not mere abstractions...many problems exist for which there is, and can be, no mechanical solution or algorithm.
That statement is very uninformed.
Any undecidable problem is literally a physically undecidable problem.


So, it should be no problem for you to give a few examples of these undecidable problems in physics?

Quoting m-theory
I believe the DeepMind system does possess qualia, creativity, and free will, and even some level of consciousness.


Why do you believe the computer program possesses qualia?

m-theory August 19, 2016 at 14:59 ¶ #16666
Reply to tom

First let me ask: do you believe it is, or ever will be, possible for machines to think in ways similar to humans, such that they could form minds and consciousness?
If you don't even believe it is possible, then of course this breakthrough will seem unimportant and inconsequential.
If you do, however, believe that such a thing is possible, then this breakthrough is quite different, as it represents a crucial milestone on that path.


So, it should be no problem for you to give a few examples of these undecidable problems in physics?


The halting problem has no mechanical or physical solution.
It cannot be decided by any physical means.
Here is a list of more.
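For readers who want the argument itself, here is a minimal sketch of the classic diagonal proof that no halting decider can exist; the function names are illustrative.

```python
# A minimal sketch of why the halting problem is undecidable. Suppose a
# total function halts(prog, arg) existed that returns True exactly when
# prog(arg) eventually halts. Names here are illustrative.

def make_paradox(halts):
    """Build the self-defeating program from any claimed halting decider."""
    def paradox(prog):
        if halts(prog, prog):  # decider says prog(prog) halts...
            while True:        # ...so do the opposite: loop forever
                pass
        return "halted"        # decider says it loops forever, so halt
    return paradox

# Any candidate decider is defeated by its own paradox program:
candidate = lambda prog, arg: True  # a (necessarily wrong) claimed decider
paradox = make_paradox(candidate)
# candidate says paradox(paradox) halts, but then paradox(paradox) loops
# forever, so candidate is wrong. The same diagonal argument defeats
# every candidate, so no such algorithm exists.
```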


Why do you believe the computer program possesses qualia?


First I don't simply assume that computers cannot possess qualia.
My definition of the term does not automatically exclude computers and their programs from having qualia because I don't believe the term qualia refers to something that is undecidable.
That is to say I think the question of "Is this qualia" can be answered yes or no.

DeepMind experiences things and forms a concept of its own existence as an acting agent within an environment that responds to the actions that DeepMind performs.

If DeepMind were unable to model itself, it would not be able to achieve what it has.

I can understand people being skeptical that DeepMind is general purpose A.I., and I will admit there is still a long way to go before we have to worry about whether or not DeepMind, or systems like it, are minds/consciousnesses in ways similar to humans (most computer science experts believe this is still 50 years away), but I don't agree that you can simply dismiss the philosophical implications of this A.I. breakthrough as you have done.

I also believe it is important to take this breakthrough seriously now rather than wait until it is at a human level consciousness.

People should stop simply dismissing the idea because it makes them uncomfortable and start asking themselves what if it is possible and what does it mean if it is possible?

tom August 19, 2016 at 15:10 ¶ #16668
Quoting m-theory
The halting problem has no mechanical or physical solution.
It cannot be decided by any physical means.
Here is a list of more.


There are no physics problems in your list. The undecidable problems of mathematics are irrelevant to physics, as are the non-computable functions and non-computable numbers. None is required to describe reality.

Quoting m-theory
DeepMind experiences things and forms a concept of its own existence as an acting agent within an environment that responds to the actions that DeepMind performs.


That is simply a fantasy. But you seem to have decided the undecidable problem nevertheless.
m-theory August 19, 2016 at 15:17 ¶ #16669
Reply to tom

There are no physics problems in your list. The undecidable problems of mathematics are irrelevant to physics,

This is simply false.
These problems are physically impossible to solve.
You are woefully ignorant in this subject area, it seems.
In reality, all of the problems I have listed are physical problems in the sense that there can be no mechanical solutions to them.

Nonetheless, a Google search for the terms "undecidable problems in physics" returns about 108,000 results (0.33 seconds).
Undecidability is not simply some abstraction that does not apply to physics, which you can brush off so lightly.
Undecidability is an important discovery about mathematics and mechanical systems.


That is simply a fantasy. But you seem to have decided the undecidable problem nevertheless.

Perhaps you could explain how you know it is a fantasy...for someone so uninformed you are rather quick to dole out proclamations as though they are simply true.
If you can't explain yourself you should probably not bother with this site.
Simply stating things baldly is not how our community operates...you need to be more in-depth with your replies.
tom August 19, 2016 at 18:01 ¶ #16676
Quoting m-theory
Undecidability is an important discovery about mathematics and mechanical systems.


The reason you cannot give an example of an undecidable problem in physics is because there aren't any. The reason for that is that only the class of computable functions (and computable numbers) is required to express any physical law, or any problem in physics. No physical process relies on the unphysical aspects of undecidability, which either involve the liar paradox or infinity.

It just so happens that the famous Bekenstein bound guarantees that Reality is a finite-state machine. Every calculation which you have carried out, every calculation any computer has carried out, and any calculation that any finite-state machine ever will carry out is expressible in Presburger arithmetic.

Quoting m-theory
I believe the DeepMind system does possess qualia, creativity, and free will, and even some level of consciousness.


As for your fantasy that any current computer program experiences qualia etc, well you had better be wrong. If you are not, then what exists is an artificial person who can suffer and who should be protected by rights like the rest of us.

Fortunately, you have just been taken in by the hype, florid language and some software jargon.



m-theory August 19, 2016 at 18:55 ¶ #16677
Reply to tom


The reason you cannot give an example of an undecidable problem in physics is because there aren't any. The reason for that is that only the class of computable functions (and computable numbers) is required to express any physical law, or any problem in physics. No physical process relies on the unphysical aspects of undecidability, which either involve the liar paradox or infinity.



Sigh...honestly, I don't care.
OK, fine...have it your way...you are right and undecidability is "nonphysical."
It is unimportant to my argument either way.

Simply calling undecidability nonphysical does not make that problem go away though, you are still left with the problem of whether or not the mind is decidable or undecidable.

So again I will ask you...do you think the term mind/consciousness should mean something that is decidable or undecidable?

And again I will remind you that if you believe you can answer the question "do I have a mind/consciousness" correctly with a yes or no every time you ask, then at a fundamental level the consequence is that the mind/consciousness is something that is decidable.


It just so happens that the famous Bekenstein bound guarantees that Reality is a finite-state machine. Every calculation which you have carried out, every calculation any computer has carried out, and any calculation that any finite-state machine ever will carry out is expressible in Presburger arithmetic.
Not sure how this proves the mind/consciousness cannot be decidable?
Perhaps you could elaborate further on how this is relevant or how it eliminates the possibility that the mind/consciousness could be expressed algorithmically?


As for your fantasy that any current computer program experiences qualia etc, well you had better be wrong. If you are not, then what exists is an artificial person who can suffer and who should be protected by rights like the rest of us.


Again you state that it is fantasy and still you have not demonstrated why that is so.

Should I just take your word for it?

Is it something you know that I, or others, cannot know?

Don't be shy...share your wisdom with us...I am quite eager to hear how you know it is mere fantasy that deepmind has no mind at all.
tom August 19, 2016 at 19:55 ¶ #16679
Quoting m-theory
Simply calling undecidability nonphysical does not make that problem go away though, you are still left with the problem of whether or not the mind is decidable or undecidable.


But all you need to do (on the 3rd time of asking) is to demonstrate that a physical theory is undecidable. How many opportunities do you need to present a counter-example?

The fact that the class of functions necessary for describing all of Reality is an infinitesimal subset of all possible functions, is not only amazing, but is a consequence of the laws of physics.

Quoting m-theory
And again I will remind you that if you believe you can answer the question "do I have a mind/consciousness" correctly with a yes or no every time you ask, then at a fundamental level the consequence is that the mind/consciousness is something that is decidable.


Thanks for the reminder. Since I have explained that no problem in physics is undecidable, and that I have detailed some of the properties expected from a mind, do you really think I would pretend the question of consciousness-or-not is undecidable?

To repeat: It is utterly improbable that, in trying to solve the computational problem "how to win at Go", we will also solve the hard problem. Of course, it is possible that in trying to solve the problem "how to win at space-invaders" the problem of qualia is also solved, but what use is that? If we have solved the problem of qualia without an explanatory theory of qualia, then how could we ever learn of our inexplicable success?

Despite the complete absence of a theory of why a computer program is conscious, you wish me to provide a refutation of a non-existent theory. By the way, there is no orbiting invisible teapot.
m-theory August 19, 2016 at 20:39 ¶ #16680
Reply to tom


But all you need to do (on the 3rd time of asking) is to demonstrate that a physical theory is undecidable. How many opportunities do you need to present a counter-example?


Why?
I already conceded the point...you win.
It does not matter so I do not care and it does not change my argument if we say undecidability is physical or nonphysical.
The same argument applies either way.


To repeat: It is utterly improbable that, in trying to solve the computational problem "how to win at Go", we will also solve the hard problem. Of course, it is possible that in trying to solve the problem "how to win at space-invaders" the problem of qualia is also solved, but what use is that? If we have solved the problem of qualia without an explanatory theory of qualia,


First, the people that engineered DeepMind were not trying to solve the problem of "how to win at Go"...that is a very silly thing to say...they were trying to solve the problem of general purpose A.I.
Go is just one task their algorithm has learned to perform.
Second, the game of Go has long been considered a major milestone in A.I. because it is all but impossible to use brute-force calculation to win at Go.
DeepMind's system is not a brute-force Go engine; it plays intuitively just as humans do (technically, better than humans)...guess you missed that somehow?

The explanation of qualia is that it is algorithmic.
If the mind/consciousness is decidable then the hard problem is solved with an algorithm.
The hard problem should not be too hard at all...especially considering we already have examples (our own minds and brains) that we can reverse engineer.

I disagree that it is improbable that we might discover that algorithm for general consciousness/minds.
That is not a very well-thought-out rebuttal, in light of the fact that we are modeling the mechanisms of our own psyches and brains.

To discover the algorithm for a particular mind...sure I might grant you that it is highly unlikely.

But I believe there is a general template upon which particular minds are formed and it is not all that unlikely that we might discover what that general template is, considering we are modeling from neurology and psychology.
To me it seems improbable that we won't discover the proper algorithm.
apokrisis August 19, 2016 at 22:48 ¶ #16689
Quoting m-theory
I assure you, from a computer science perspective, it is no equivocation to say that the DeepMind general purpose A.I. is an algorithm.


There is a computer science difference between programmable computers and learning machines.

So yes, you can point to a learning rule embedded in a neural network node and say there - calculating a weight - is an algorithm.
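(For concreteness, here is a minimal sketch of such a node-level, weight-calculating rule, the classic perceptron update; the training data is illustrative only:)

```python
# A minimal sketch of a node-level learning rule: one perceptron weight
# update. This is the kind of weight-calculating algorithm referred to
# above; the data is illustrative only.

def perceptron_update(weights, inputs, target, lr=0.1):
    """One step of the perceptron learning rule for a single node."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    output = 1 if activation > 0 else 0
    error = target - output
    # Adjust each weight in proportion to its input and the error.
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for inputs, target in [([1, 1], 1), ([1, 0], 0)]:  # tiny training set
    weights = perceptron_update(weights, inputs, target)
```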

But then a neural network is (ideally) in dynamical feedback interaction with the world. It is embodied in the way of a brain. And this is a non-algorithmic aspect of its being. You can't write out the program that is the system's forward model of the world. The forward model emerges rather than being represented by a priori routines.

So sure, you can ask about the algorithm of the mind. But this is equivocal if you then seem to think you are talking about some kind of programmable computer and not giving due weight to the non-algorithmic aspects of a neural net which are the actual basis of its biological realism.

The idea of an algorithm in itself completely fails to have biological realism. Sure, we can mathematically simulate the dynamical bistability of a molecular machine. We can model what is going on in brains and cells in terms of a sequence of rules. But algorithms can't push matter about or regulate dissipative processes.

That is the whole point of a Turing machine - to disconnect the software actions from the hardware mechanics. And the whole point of biology is the opposite - to have a dynamical interaction between the symbols and the matter. At every level of the biological organisation, matter needs to get pushed about for anything to be happening.

So in philosophy of mind terms, Turing computation is quite unlike biological semiosis in a completely fundamental way.

See - http://www.academia.edu/3075569/Artificial_Life_Needs_a_Real_Epistemology

And a neural net architecture tries to bridge the gap. But to the degree it is algorithmic, it is merely a Turing-based simulation of neurosemiosis~neurodynamics.

Just a bit of simulated biological realism is of course very powerful. Neural nets make a big break with programmable devices even if the biology is simulated at the most trivial two layer perceptron level. And if you are asking the big question of whether neural networks could be conscious - have qualitative states - I think that is a tough thing to even pin down as an intelligible query.

I can imagine a simulation of neurodynamics that is so speedy that it can keep up with the world at the rate that humans keep up with the world. But would this simulation have feelings if it wasn't also simulating the interior milieu of a regulated biological body? And how grainy would the simulation be, given internal metabolic processes have nano-range timescales?

The natural human brain builds up from nanoscale molecular dynamics and so never suffers a graininess issue. There is an intimate connection with the material world built into the semiotic activity from the get-go.

But computation comes from the other direction. It starts algorithmically with no material semiosis - a designed-in disconnect between the symbolic software and the physical hardware. And to attain biological realism via simulation, it has to start building in that feedback dynamical connection with the world - the Bayesian forward modeling loop - from the top down. And clearly, the extra computational cost of that increases exponentially as the design tries to build a connection in on ever finer scales of interaction.

So I don't just say that neural nets can't be conscious. But I do say we can see why it might be impossibly expensive to do that via algorithmic simulation of material semiosis.
m-theory August 20, 2016 at 07:14 ¶ #16722
Reply to apokrisis


But then a neural network is (ideally) in dynamical feedback interaction with the world. It is embodied in the way of a brain. And this is a non-algorithmic aspect of its being. You can't write out the program that is the system's forward model of the world. The forward model emerges rather than being represented by a priori routines.


I am not sure I understand this.
Can you elaborate further on this point?
I fear that you are suggesting that computers must be able to operate in a continuous and analog fashion and that digitization is something that will prevent computers from having minds...is that what you are saying?


So sure, you can ask about the algorithm of the mind. But this is equivocal if you then seem to think you are talking about some kind of programmable computer and not giving due weight to the non-algorithmic aspects of a neural net which are the actual basis of its biological realism.


It was my understanding, apokrisis, that an algorithm is simply a set of steps or operations that get processed in a particular order depending upon the logical conditions of the computational architecture.
My understanding of neural nets is that their architecture is different from a programmable computer, yes, but that the term algorithm still applies in the neural network case because algorithms can be adapted to account for the additional logical conditions of that architecture.
If I was mistaken about that I apologize; I did admit that I did not understand the distinction you are making, though, and I still don't.

It seems to me you may be suggesting that biology is not algorithmic and/or cannot be algorithmic?


The idea of an algorithm in itself completely fails to have biological realism. Sure, we can mathematically simulate the dynamical bistability of a molecular machine. We can model what is going on in brains and cells in terms of a sequence of rules. But algorithms can't push matter about or regulate dissipative processes.

That is the whole point of a Turing machine - to disconnect the software actions from the hardware mechanics. And the whole point of biology is the opposite - to have a dynamical interaction between the symbols and the matter. At every level of the biological organisation, matter needs to get pushed about for anything to be happening.

So in philosophy of mind terms, Turing computation is quite unlike biological semiosis in a completely fundamental way.


If the criterion is achieving nano-scale biological realism, then yes...DeepMind falls well short of this measure.
That is still quite a ways off indeed.
To me you seemed to be suggesting...it may be necessary to simulate all of the brain's mechanisms, and only then can we begin to probe the question of whether that computer simulation has a mind.

Are you saying that it is necessary to digitize the brain and maybe even the body at a nano scale?

At any rate you have raised some good points and have given me quite a bit to think about.




Wayfarer August 20, 2016 at 07:26 ¶ #16723
I still say, that if you think of it in terms of whether the device or neural network, or whatever, is actually a being, then you have answered the question. If it can think, in the sense that humans think, then it would be 'a being' and no longer simply a device; it would be an 'I'. That's what was behind the provocative title of Isaac Asimov's great Sci Fi series, 'I, Robot'. The fact that a robot could refer to itself in the first person was the point of the title. (A lot of people don't seem to get that.)

See, I don't think that any element, nor the totality, of those systems, has the reflexive first-person knowledge of being, or experience of being, that humans have, or are; it is not an 'I'. So, sure, you could feasibly create an incredibly clever system, that could answer questions and engage in dialogue, but it would still not be a being.
m-theory August 20, 2016 at 07:54 ¶ #16725
Reply to Wayfarer
You may be right...this machine is not a being in the sense that a human is a being...at least not yet...I cannot say if it might be possible for it to learn to do so as you do, though...and I am fascinated because I believe DeepMind is the closest a machine has come to approaching that possibility.

I still argue that this machine does have a state of being and a concept of self that it experiences alongside its experience of its environment...it models itself and learns about itself from its environment, and learns about its environment from its model of self.

DeepMind is a major step forward towards A.I. that can adapt to a wider range of problems than any other attempt thus far...the question of whether it can adapt to as wide a range as humans can is still very much an open one.


Jamal August 20, 2016 at 08:00 ¶ #16726
Quoting Wayfarer
See, I don't think that any element, nor the totality, of those systems, has the reflexive first-person knowledge of being, or experience of being, that humans have, or are; it is not an 'I'. So, sure, you could feasibly create an incredibly clever system, that could answer questions and engage in dialogue, but it would still not be a being.


I guess the question is, could there possibly be artificial "I"s? Computational A.I. might not get us more than clever devices, but why in principle could we not create artificial minds, maybe some other way?

(I'm just being provocative; I don't have any clear position on it myself.)
Wayfarer August 20, 2016 at 08:31 ¶ #16729
Theory:this machine is not a being in the sense that a human is a being....I still argue that this machine does have a state of being and a concept of self that it experiences....


When you say 'in the sense that a human is a being' - what other sense is there? Pick up a dictionary or an encyclopedia, and look up 'being' as a noun - how many instances are there? How many things are called 'beings'? As far as I know, the only things commonly referred to by that term, are humans.

None of that is to say that Deepmind is not amazing technology with many applications etc etc. But it is to call into question the sense in which it is actually a mind.

Jamalrob:I guess the question is, could there possibly be artificial "I"s?


Would help to know if there were a real 'I' before trying to replicate it artificially.
apokrisis August 20, 2016 at 08:32 ¶ #16730
Reply to m-theory Pattee would be worth reading. The difference is between information that can develop habits of material regulation - as in biology - and information that is by definition cut free of such an interaction with the physical world.

Software can be implemented on any old hardware which operates with the inflexible dynamics of a Turing machine. Biology is information that relies on the opposite - the inherent dissipative dynamics of the actual physical world. Computers calculate. It is all syntax and no semantics. But biology regulates physiochemical processes. It is all about controlling the world rather than retreating from the world.

You could think of it as a computer being a bunch of circuits that has a changing pattern of physical states that is as meaningless from the world's point of view as randomly flashing lights. Whereas a biological system is a collection of work organising devices - switches and gates and channels and other machinery designed to regulate a flow of material action.

As we go from cellular machinery to neural machinery, the physicality of this becomes less obvious. Neurons with their axons and synapses start to look like computational devices. But the point is that they don't break that connection with a physicality of switches and motors. The biological difference remains intrinsic.
Jamal August 20, 2016 at 08:32 ¶ #16731
Quoting Wayfarer
Would help to know if there were a real 'I' before trying to replicate it artificially.


Hey, nice dodge. ;)
m-theory August 20, 2016 at 08:55 ¶ #16733
Reply to Wayfarer Quoting Wayfarer
When you say 'in the sense that a human is a being' - what other sense is there? Pick up a dictionary or an encyclopedia, and look up 'being' as a noun - how many instances are there? How many things are called 'beings'? As far as I know, the only things commonly referred to by that term, are humans.

None of that is to say that Deepmind is not amazing technology with many applications etc etc. But it is to call into question the sense in which it is actually a mind.


Well, DeepMind's system is not modeled after a full human brain...it is modeled on a part of the mammalian brain.

I think the goal of one to one human to computer modeling is very far off from now and I am not so sure it is necessary in order to refer to a computer as having a mind.
That is not the criterion that interests me when asking if a computer has a mind.

To me the question is if a computer can demonstrate the ability to solve any problem a human can...be that walking or playing the game of go.

If DeepMind can adapt to as wide a range of problem sets as any human, then I think we are forced to consider it a mind in the general sense of that term.

Often I see the objection that a mind is something that is personal and has an autonomous agenda...that is not how I am using the term mind.
I am using the term mind to indicate a general intelligence capable of adapting to any problem a human can adapt to.
Not an individual person...but a general way of thinking.

Of course computers will always be different than humans because humans and computers do not share the same needs.
A computer does not have to solve the problem of hunger as a human does, for example.
So in some ways comparing a human to computer there is always going to be a fundamental difference...apokrisis touched on this.

I don't necessarily agree that because a computer's needs and a human's needs are different, a computer therefore cannot be said to have a mind.
That to me is a semantic distinction that misses the philosophical point of A.I.

So my question is, more accurately, what would it mean for human problem solving if a computer could solve all the same problems as efficiently as, or even better than, humans?

DeepMind is at the very least a peek at what that would look like.




m-theory August 20, 2016 at 08:59 ¶ #16734
Reply to apokrisis
I did skim over that link you provided...thanks for that reference...I will review it more in depth later.
Wayfarer August 20, 2016 at 09:12 ¶ #16736
Reply to jamalrob if you think it's a dodge, how are you going to answer the question? X-)
Jamal August 20, 2016 at 09:25 ¶ #16737
Reply to Wayfarer Eek. My first instinct is to say that there is no barrier in principle to the creation of artificial persons, or agentive rational beings, or what have you (such vagueness precludes me from coming up with a solution, I feel). Do you think there is such a barrier? Putting that another way: do you think it's possible in principle for something artificial to possess whatever you think is special about human beings, whatever it is that you think distinguishes a person from an animal (or machine)? Putting this yet another way: do you think it's possible in principle for something artificial to authentically take part in human community in the way that humans themselves do, without mere mimicry or clever deception?
tom August 20, 2016 at 10:05 ¶ #16744
Quoting jamalrob
My first instinct is to say that there is no barrier in principle to the creation of artificial persons, or agentive rational beings


You don't need instinct, you just need to point to the physical law that forbids the creation of artificial people - there isn't one!

However, the notion that after installing TensorFlow on your laptop, you have a person in there, is only slightly less hilarious than the idea that if you run it on multiple parallel processors, you have a society.

Wayfarer August 20, 2016 at 10:35 ¶ #16751
Jamalrob:My first instinct is to say that there is no barrier in principle to the creation of artificial persons, or agentive rational beings, or what have you (such vagueness precludes me from coming up with a solution, I feel). Do you think there is such a barrier?


I already said that! That is what you were responding to already.

You said 'could we create an artificial "I"', and I said, 'you would have to know what the "I" is'. That is a serious challenge, because in trying to do that, I say you're having to understand what it means to be a being that is able to say "I am".

The word 'ontology' is derived from the present participle of the Greek verb for 'to be'. So the word itself is derived from 'I am'. So the study of ontology is, strictly speaking, the study of the meaning of 'I am'. Of course it has other meanings as well - in the context of computer networks 'ontology' refers to the classifications of entities, inheritances, properties, hierarchies, and so on. But that doesn't detract from the point that ontology is still concerned with 'the meaning of being', not 'the analysis of phenomena'.

It sounds strained to talk of the meaning of 'I am', because (obviously), what "I am" is never present to awareness, it is what it is that things are present to. It is 'first person', it is that to which everything is disclosed, for that reason not amongst the objects of consciousness. And that again is an ontological distinction.

And as regards 'creating a mind', think about the role of the unconscious in the operations of mind. The unconscious contains all manner of influences, traits, abilities, and so on - racial, linguistic and cultural heritage, autonomic features, the archetypes, heaven knows what else.


So if you were to create an actual artificial intelligence, how would you create the unconscious? How would you write a specification for it? 'The conscious mind' would be a big enough challenge, I suspect 'the unconscious' would be orders of magnitude larger, and impossible to specify, for obvious reasons, if you think about it.

So, of course, we couldn't do that - we would have to endow a network with characteristics, and let it evolve over time. Build up karma, so to speak, or gain an identity and in so doing, the equivalent of a culture, an unconscious, archetypes, and the rest. But how would you know what it was you were creating? And would it be 'a being', or would it still be billions of switches?
m-theory August 20, 2016 at 11:08 ¶ #16753
Reply to tom
Again I want to address this notion that what the term mind should mean is exclusively an individual person.

I am seeking to explore the philosophical implications of what it means if general purpose A.I. can learn to solve any problem a human might solve.

That is an open question concerning the methods employed by the example I have...will the new techniques demonstrated by DeepMind be able to adapt to as wide a range of problems as a person could?

If you believe you know that these techniques are not general purpose and cannot adapt to any problem a human could adapt to you should focus your criticism by addressing why you believe that is so.

I did not come here to argue that deepmind is, at this point, a fully self aware computer system.
So you can stop beating that dead strawman with your AlphaGo and TensorFlow banter.

I came here to ask if the techniques that are employed in my example will be able to eventually converge on that outcome....
if so what are the philosophical implications...
if not...why?

tom August 20, 2016 at 11:11 ¶ #16754
Quoting Wayfarer
So if you were to create an actual artificial intelligence, how would you create the unconscious? How would you write a specification for it? 'The conscious mind' would be a big enough challenge, I suspect 'the unconscious' would be orders of magnitude larger, and impossible to specify, for obvious reasons, if you think about it.


The unconscious is easy. We have most of what is necessary in place already - databases, super fast computers, and of course programs like AlphaGo which can be trained to become expert at anything, much like our cerebellum.

Humans have augmented their minds with unconscious tools, e.g. pencils and paper, and as a matter of fact, our conscious mind has no idea how the mechanism of, e.g., memory retrieval works.

I think I gave a list earlier of some of the key attributes of a mind: consciousness, creativity, qualia, self-awareness. Consciousness, if you take that to mean what we lose when we are under anaesthetic, may be trivial. As for the rest, how can we possibly program them when we don't understand them?
Wayfarer August 20, 2016 at 11:18 ¶ #16756
Tom:I think I gave a list earlier of some of the key attributes of a mind: consciousness, creativity, qualia, self-awareness.




m-theory August 20, 2016 at 11:50 ¶ #16759
Quoting Wayfarer
And as regards 'creating a mind', think about the role of the unconscious in the operations of mind. The unconscious contains all manner of influences, traits, abilities, and so on - racial, linguistic and cultural heritage, autonomic features, the archetypes, heaven knows what else:


I think the unconscious mind is more about the brain than the psyche.
You personally do not know how your own brain does what it does...but that does not particularly limit your mind from solving problems.

Quoting Wayfarer
So if you were to create an actual artificial intelligence, how would you create the unconscious? How would you write a specification for it? 'The conscious mind' would be a big enough challenge, I suspect 'the unconscious' would be orders of magnitude larger, and impossible to specify, for obvious reasons, if you think about it.


The particulars of the computer hardware and algorithms being used would simply not be known to the agent in question...that would be its subconscious.

Quoting Wayfarer
So, of course, we couldn't do that - we would have to endow a network with characteristics, and let it evolve over time. Build up karma, so to speak, or gain an identity and in so doing, the equivalent of a culture, an unconscious, archetypes, and the rest. But how would you know what it was you were creating? And would it be 'a being', or would it still be billions of switches?


That is, some argue, the most disturbing part...A.I. is approaching a level where it can learn exponentially, which would mean it might be able to learn such things much more quickly than we would expect from a human.


Metaphysician Undercover August 20, 2016 at 11:58 ¶ #16761
I believe that IBM is working on A.I. projects which will make DeepMind look rather insignificant. In fact, some argue that Watson already makes DeepMind look insignificant.

Quoting Wayfarer
It sounds strained to talk of the meaning of 'I am', because (obviously), what "I am" is never present to awareness, it is what it is that things are present to. It is 'first person', it is that to which everything is disclosed, for that reason not amongst the objects of consciousness. And that again is an ontological distinction.


This, I believe, is a very important point. And it underscores the problems which the discipline of physics will inevitably face in creating any kind of artificial being. That discipline does not have a coherent approach to what it means to be present in time.
Wayfarer August 20, 2016 at 12:05 ¶ #16762
MTheory: A.I. is approaching a level where it can learn exponentially...


'Computers [can] outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.

We could, of course, record any of these facts in a computer. The impossibility arises when we consider how to record and make accessible the entire, unsurveyable, and ill-defined body of common sense. We know all these things, not because our “random access memory” contains separate, atomic propositions bearing witness to every commonsensical fact (their number would be infinite), and not because we have ever stopped to deduce the truth from a few more general propositions (an adequate collection of such propositions isn’t possible even in principle). Our knowledge does not present itself in discrete, logically well-behaved chunks, nor is it contained within a neat deductive system.

It is no surprise, then, that the contextual coherence of things — how things hold together in fluid, immediately accessible, interpenetrating patterns of significance rather than in precisely framed logical relationships — remains to this day the defining problem for AI.

It is the problem of meaning.

Logic, Poetry and DNA, Steve Talbott.
m-theory August 20, 2016 at 12:37 ¶ #16770
Quoting Metaphysician Undercover
I believe that IBM is working on A.I. projects which will make DeepMind look rather insignificant. In fact, some argue that Watson already makes DeepMind look insignificant.

Watson is distinctly different from DeepMind; they use different techniques...I believe DeepMind is more flexible in that it can learn to do different tasks from scratch, whereas Watson is programmed to perform a specific task.
tom August 20, 2016 at 12:45 ¶ #16771
Quoting m-theory
I am seeking to explore the philosophical implications of what it means if general purpose A.I. can learn to solve any problem a human might solve.


I apologise if I didn't make my position abundantly clear: an Artificial General Intelligence would *be* a person. It could certainly be endowed with capabilities far beyond humans, but whether one of those is problem solving or "growth of knowledge" can't be understood until we humans solve that puzzle ourselves.

Take for the sake of argument that knowledge grows via the Popperian paradigm (if you'll pardon the phrase), i.e. Popper's epistemology is correct. There are two parts to this: the Logic of Scientific Discovery, and the mysterious "conjecture". I'm not convinced that the Logic can be performed by a non-self-aware entity; if it could, then why has no one programmed it?

AlphaGo does something very interesting - it conjectures. However, the conjectures it makes are nothing more than random trials. There is no explanatory reason for them, in fact, there is no explanatory structure beyond the human-encoded fitness function. That is why it took 150,000 games to train it up to amateur standard and 3,000,000 other games to get it to beat the 2nd best human.


m-theory August 20, 2016 at 12:55 ¶ #16773
Quoting Wayfarer
It is no surprise, then, that the contextual coherence of things — how things hold together in fluid, immediately accessible, interpenetrating patterns of significance rather than in precisely framed logical relationships — remains to this day the defining problem for AI.

It is the problem of meaning.


Meaning for who?

Are you suggesting that a computer could not form meanings?

How would meaning be possible without logical relationships?

Having watched these lectures, I believe one could argue quite reasonably that DeepMind is equipped with common sense.

I would also argue that common sense is not something that we are born with...people and dogs have to learn that they cannot jump over a house.

The reason we do not form nonsense solutions to problems comes from learned experience, so I don't know why it should be a problem for a learning computer to accomplish.


m-theory August 20, 2016 at 13:35 ¶ #16776
Quoting tom
I apologise if I didn't make my position abundantly clear: an Artificial General Intelligence would *be* a person. It could certainly be endowed with capabilities far beyond humans, but whether one of those is problem solving or "growth of knowledge" can't be understood until we humans solve that puzzle ourselves.


My concern is that people are too quick to black-box "the problem", and that is not productive for discussing the issue.
I don't think the mind, or how the mind is formed, is a black box...I think it can be understood, and I see no reason why we should not assume that it is an algorithm.

Quoting tom
Take for the sake of argument that knowledge grows via the Popperian paradigm (if you'll pardon the phrase), i.e. Popper's epistemology is correct. There are two parts to this: the Logic of Scientific Discovery, and the mysterious "conjecture". I'm not convinced that the Logic can be performed by a non-self-aware entity; if it could, then why has no one programmed it?

If you don't mind, could you elaborate on this?

Do you believe an agent would have to be fully self-aware at a human level to perform logically, for instance?

Quoting tom
AlphaGo does something very interesting - it conjectures. However, the conjectures it makes are nothing more than random trials.

I see it a bit differently.

AlphaGo does not play randomly; it uses randomness to learn how to play efficiently.
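Here is a minimal sketch of that distinction, epsilon-greedy exploration: the agent mostly plays its best-known move and only occasionally tries a random one, so randomness drives the learning without the play itself being random. The payoffs and constants are illustrative, not AlphaGo's.

```python
# A minimal sketch of epsilon-greedy exploration: random trials in the
# service of learning, not random play. Values are illustrative.
import random

value = {"a": 0.0, "b": 0.0}        # learned estimates for two moves
true_payoff = {"a": 0.2, "b": 0.8}  # hidden from the agent
epsilon, lr = 0.1, 0.1

for _ in range(500):
    if random.random() < epsilon:
        move = random.choice(list(value))  # occasional random trial
    else:
        move = max(value, key=value.get)   # usual best-known play
    reward = 1.0 if random.random() < true_payoff[move] else 0.0
    value[move] += lr * (reward - value[move])  # learn from the outcome

print(max(value, key=value.get))  # converges on "b", the better move
```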

Wayfarer August 20, 2016 at 23:08 ¶ #16840
What does 'form meaning' mean?

The nature of meaning is far from obvious. There is a subsection of books on the subject 'the metaphysics of meaning' and they're not easy reads. And no, I don't believe computers understand anything, they process information according to algorithms and provide outputs.
Janus August 20, 2016 at 23:16 ¶ #16841
Reply to Wayfarer

I think that's a bit harsh. Many, perhaps most or even all, animals other than humans could not fulfill your criterion for beinghood. Are we then to say that they are devices?
m-theory August 20, 2016 at 23:37 ¶ #16843
Quoting Wayfarer
What does 'form meaning' mean?


Well, in the formal sense, meaning is just knowing a problem and knowing the solution to that problem.
And formally, knowing is just a set of data.

I would argue that this algorithm does not simply compute syntax but is able to understand semantic relationships of that syntax.
It is able to derive semantics by learning what the problem is, and learning what is the solution to that problem.
The process of learning may be syntactical, but when the algorithm learns the problem and that problem's solution, it understands both.

I suspect you will not be satisfied with this definition of meaning...if so feel free to describe how you think the term meaning should be defined.

Quoting Wayfarer
The nature of meaning is far from obvious.


I disagree.
I think that, formally, meaning is knowing a problem and the solution to that problem, and the logical relationships between the two are the semantics.

Quoting Wayfarer
I don't believe computers understand anything, they process information according to algorithms and provide outputs.


Suppose your previous posts are right...and we don't have any good idea of what meaning and understanding are. If that were so, you could not be sure whether computers are capable in that capacity.
That is to say if we don't know what meaning and understanding is then we can't know if computers have these things.
You seem to want it both ways here.
You know computers can't understand or know meaning...but at the same time meaning and understanding is mysterious.

That is a bit of a contradiction don't you think?
apokrisis August 20, 2016 at 23:50 ¶ #16845
Beinghood is about having an informational idea of self in a way that allows one to materially perpetuate that self.

So we say all life has autonomy in that semiotic fashion. Even immune systems and bacteria are biological information that can regulate a material state of being by being able to divide the material world into what is self and nonself.

This basic division of course becomes a highly elaborated and "felt" one with complex brains, and in humans, with a further socially constructed self-conscious model of being a self. But for biology, there is this material state of caring that is at the root of life and mind from the evolutionary get go.

And so when talking about AI, we have to apply that same principle. And for programmable machines, we can see that there is a designed-in divorce between the states of information and the material processes sustaining those states. Computers simply lack the means for an involved sense of self.

Now we can imagine starting to create that connection by building computers that somehow are in control of their own material destinies. We could give our laptops the choice over their power consumption and their size - let them grow and spawn in some self-choosing way. We could let them pick what they actually wanted to be doing, and who with.

We can imagine this in a sci fi way. But it would hardly be an easy design exercise. And the results would seem autistically clunky. And as I have pointed out we would have to build in this selfhood relation from the top down. Whereas in life it exists from the bottom up, starting with molecular machines at the quasi classical nanoscale of the biophysics of cells. So computers are always going against nature in trying to recreate nature in this sense.

It is not an impossible engineering task to introduce some biological realism into an algorithmic architecture in the fashion of a neural network. But computers must always come at it from the wrong end, and so it is impossible in the sense of being utterly impractical to talk about a very realistic replication of biological structure.

Wayfarer August 21, 2016 at 00:34 ¶ #16849
Theory:Well, in the formal sense meaning is just knowing a problem and knowing the solution to that problem.
And formally, knowing is just a set of data.


You toss these phrases off, as if it is all settled, as if understanding the issues is really simple. But it really is not; all you're communicating is the fact that you're skating over the surface. 'The nature of knowledge' is the subject of the discipline of epistemology, and it's a very difficult subject. Whether computers are conscious or not, is also a really difficult and unresolved question.

Theory:You know computers can't understand or know meaning...but at the same time meaning and understanding is mysterious.

That is a bit of a contradiction don't you think?


No, that's not a contradiction at all. As far as I am concerned it is a statement of fact.

Over and out on this thread, thanks.
m-theory August 21, 2016 at 00:34 ¶ #16850
Quoting apokrisis
And for programmable machines, we can see that there is a designed-in divorce between the states of information and the material processes sustaining those states.


The same is true of the brain and the mind, I believe.
It has taken us hundreds of thousands of years to reach the point of studying the mechanisms of the brain and how the brain relates to the mind.
We don't personally know how our own brain and/or subconscious works.

Quoting apokrisis
And as I have pointed out we would have to build in this selfhood relation from the top down. Whereas in life it exists from the bottom up, starting with molecular machines at the quasi classical nanoscale of the biophysics of cells. So computers are always going against nature in trying to recreate nature in this sense.


I am not sure I agree that we have to completely reverse engineer the body and brain down to the nanoscale to achieve a computer that has a mind....to achieve a computer that can simulate what it means to be human, sure....I must concede that.

I would instead draw back to my question of whether or not we could reverse engineer a brain sufficiently that the computer could solve any problem a human could solve.
Granted, this machine would not have the inner complexity a human has, but I still believe we would be forced to conclude, in a general sense, that such a machine had a mind.

So in top-down/bottom-up terms, I think when we meet in the middle we will have arrived at a mind, in the sense that computers will be able to learn anything a human can and solve any problem a human can.

Of course the problem of top-down design only gets harder as you scale, so the task of creating a simulated human is a monumental one requiring unimaginable breakthroughs in many disciplines.
If we are defining the term mind in such a way that this is the necessary prior criterion, then we still have a very long wait on our hands, I must concede.






m-theory August 21, 2016 at 00:51 ¶ #16852
Quoting Wayfarer
No, that's not a contradiction at all. As far as I am concerned it is a statement of fact.

Over and out on this thread, thanks.


Quoting Wayfarer
You toss these phrases off, as if it is all settled, as if understanding the issues is really simple. But it really is not; all you're communicating is the fact that you're skating over the surface. 'The nature of knowledge' is the subject of the discipline of epistemology, and it's a very difficult subject.


I was not trying to be dismissive, and I did not intend for it to seem as though I do not appreciate that epistemology is a vast subject with a great many complexities.
But in fairness we have to start somewhere, and I think starting with a formal definition of meaning and/or understanding (learning what a problem is and learning what that problem's solution is) is a reasonable place to begin.

Quoting Wayfarer
Whether computers are conscious or not, is also a really difficult and unresolved question.


Well, I did argue on the first page that the mind/consciousness must be decidable (it is possible to answer the question "do I have a mind/consciousness" with a yes or no correctly each time you ask it).
If consciousness and/or the mind is undecidable, these terms are practically useless to us philosophically, and I certainly don't agree we should define the terms thus.
(sorry, I can't link to my OP, but it is on the first page)

The implication here is that the mind/consciousness is an algorithm so that just leaves the question which algorithm.

But you still make a valid point...the question of which algorithm is the mind/consciousness is hardly a settled matter.
Sorry if I gave the impression that I believed it was settled...I intended to give the impression that I see no reason why it should not become settled in light of the fact that we are modeling after our own minds and brains.

Quoting Wayfarer
No, that's not a contradiction at all. As far as I am concerned it is a statement of fact.

I guess I will have to take your word for it.

Quoting Wayfarer
Over and out on this thread, thanks.


Yes, thank you too for posting here...you made me think about my own beliefs critically...hopefully that is a consolation for the frustration I was responsible for.
Again, sorry about the misunderstanding; it was not my intention to be curt.

apokrisis August 21, 2016 at 01:14 ¶ #16856
Reply to m-theory So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP.

Anyway, another way of phrasing the same challenge to your presumption that there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if its next state of output was always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if, every time it computed some value, there would be no particular reason for the calculation to come out the same?

m-theory August 21, 2016 at 01:39 ¶ #16860
Quoting apokrisis
So every time I point to a fundamental difference, your reply is simply that differences can be minimised. And when I point out that minimising those differences might be physically impractical, you wave that constraint away as well. It doesn't seem as though you want to take a principled approach to your OP.

No that is not it at all.

I was seeking to make a distinction between simulating a human being and simulating general intelligence.

I did concede that if we must digitally simulate a human at nano-scale before we can hope to simulate a mind, then this would be a monumental task.
And perhaps you are also correct that it may be impossible.

I just don't use that criterion.
I was using the criterion that if a computer could learn any problem, and/or the solution to any problem, that a human could...then that would be a mind in the general sense.
Why, if a computer could do such, is that not a mind in a general sense?

I believe there are two distinct meanings to the term mind.
One meaning is the intimately personal and rich inner self.
The second meaning is the sense in which others have minds...if we took away all the differences of personal minds and focused on the general template of what that term means, then the problem of creating a mind is minimized to a great degree, I would argue.

Quoting apokrisis
Anyway, another way of phrasing the same challenge to your presumption that there is no great problem here: can you imagine an algorithm that could operate usefully on unstable hardware? How could an algorithm function in the way you require if its next state of output was always irreducibly uncertain? In what sense would such a process still be algorithmic in your book if, every time it computed some value, there would be no particular reason for the calculation to come out the same?


I believe we can argue the same thing of a human.
If our brain suffers trauma and damage, it can result in severe impairment.

I just don't agree that the top-down approach is necessarily faulty all the way up until a nano-scale human simulation is achieved.

I suspect that somewhere in the middle of top down design something mindlike should be possible.

The reason I believe this is that a lot of human body and brain functions are autonomous of what we mean by the term mind. I understand your point that there is feedback from these systems that informs consciousness/mind (though the extent to which it does is unclear), and that this is what contributes to what we call an individual person. But mind also has a more general meaning, and I am suggesting that should be achievable before we achieve nano-scale human simulation.




apokrisis August 21, 2016 at 02:12 ¶ #16870
Quoting m-theory
I was seeking to make a distinction between simulating a human being and simulating general intelligence....I was using the criterion that if a computer could learn any problem, and/or the solution to any problem, that a human could...


Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".

So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?

Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.

But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.

Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.


Wayfarer August 21, 2016 at 02:18 ¶ #16872
Reply to m-theory hey don't sweat it and I apologise also if I seemed brusque.
BC August 21, 2016 at 03:10 ¶ #16892
The question of whether A.I. is here, or will be here, when, how, what and where...

Two things: First, electronic (dry) equipment that can produce the verisimilitude of intelligence might be here now, or will be soon. It might not be that the algorithms are so good as that our desire to hail non-human intelligence is so great. I don't know why we so desire this mirror. Go, Jeopardy, chess... whatever complex game we ask it to play (or ask it to learn how to play, in the case of Go) is a very limited (but nonetheless impressive) achievement. Still...

Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell.

Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats.

Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, outperform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc., because it has capabilities that cannot be reproduced by an algorithm.

It's not the algorithm, it's the structure of the body and its history.

[NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that.
Metaphysician Undercover August 21, 2016 at 11:37 ¶ #16979
I like that distinction, "wet intelligence" versus "dry intelligence".
m-theory August 21, 2016 at 22:40 ¶ #17015
Quoting apokrisis
Ok. But from my biophysical/biosemiotic perspective, a theory of general intelligence just is a theory of life, a theory of complex adaptive systems. You have to have the essence of that semiotic relation between symbols and matter built in from the smallest, simplest scales to have any "intelligence" at all.

So yes, you are doing the familiar thing of trying to abstract away the routinised, mechanics-imitating, syntactical organisation that people think of as rational thought or problem solving. If you input some statement into a Searlean Chinese room or Turing test passing automaton, all that matters is that you get the appropriate output statement. If it sounded as though the machine knew what you were talking about, then the machine passes as "intelligent".


I don't mean to sweep away your criticisms.
I freely admit that if we are using a biological metric of life, then we are nowhere close to simulating intelligence.
If simulating biology is the criterion, we can safely conclude machines don't think.

Quoting apokrisis
So again, fine, it's easy to imagine building technology that is syntactic in ways that map some structure of syntax that we give it on to some structure of syntax that we then find meaningful. But the burden is on you to show why any semantics might arise inside the machine. What is your theory of how syntax produces semantics?


I argue that because this algorithm has to learn from scratch, it must discover its own semantics within the problem and the solution to that problem.

Take the Go example: AlphaGo would not be able to learn to play the game as well as humans unless it were forming semantics.
Because it has to learn the problem and learn the solution, often at the same time, it learns to have biases about different syntactical relationships within the context of the problem and the solution.
Not all syntactical relationships are equal within the context of what the problem is and what the solution is.

You may argue that it is a rather crude and primitive form of semantics when compared to humans and perhaps you are right...but it is still a form of semantics.
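A toy sketch of the kind of bias-shifting I mean (the update rule and names are illustrative assumptions of mine, not DeepMind's implementation):

```python
def update_pattern_biases(biases, patterns_seen, won, lr=0.1):
    """Nudge the learned weight of each board pattern toward the game
    outcome: patterns that appeared on winning paths gain value, so
    not all syntactic relationships remain equal."""
    outcome = 1.0 if won else 0.0
    for p in patterns_seen:
        old = biases.get(p, 0.5)   # 0.5 = no learned preference yet
        biases[p] = old + lr * (outcome - old)
    return biases
```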

I might use another analogy.
Consider the task of creating a robot hand that is as dexterous as the human hand.
You might argue that the finished product cannot sense what it grasps, that it has no nerves, no skin, no bones, no blood coursing through it, and then you claim this is not a hand.

But if we ask the question of whether or not it is a hand by a different criterion, whether or not it can perform any action a human hand can perform, then the problem is very different.

Instead of trying to replicate the human hand we are trying to replicate the utility of a human hand and that is a far less difficult engineering goal.

So again...this algorithm, if it does have semantic understanding, does not and never will have human semantic understanding.
But I do not agree that we can be sure it won't be able to match the human-level utility of human semantic understanding.

Quoting apokrisis
Biology's theory is that of semiotics - the claim that an intimate relation between syntax and semantics is there from the get-go as symbol and matter, Pattee's epistemic cut between rate independent information and rate dependent dynamics. And this is a generic theory - one that explains life and mind in the same physicalist ontology.


Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.
Do you have any references for the epistemic cut?
I did not find it as an entry in the Stanford Encyclopedia of Philosophy.

I tried to read through your link but got hung up on that term, the definition is not clear to me.

Quoting apokrisis
But computer science just operates on the happy assumption that syntax working in isolation from material reality will "light up" in the way brains "light up". There has never been any actual theory to back up this sci fi notion.

Instead - given the dismal failure of AI for so long - the computer science tendency is simply to scale back the ambitions to the simplest stuff for machines to fake - those aspects of human thought which are the most abstractly syntactic as mental manipulations.

If you just have numbers or logical variables to deal with, then hey, suddenly everything messy and real world is put at as great a distance as it can possibly be. Any schoolkid can learn to imitate a calculating engine - and demonstrate their essential humanness by being pretty bad, slow and error-prone at it, not to mention terminally bored.


Again I recall my hand example.
It is exceedingly difficult to simulate the human hand to the finest detail.
It is not nearly so difficult to engineer a machine that replicates the utility of a human hand.

I believe a similar thing applies to A.I.

Quoting apokrisis
Then we humans invent an actual machine to behave like a machine and ... suddenly we are incredibly impressed at its potential. Already our pocket calculators exceed all but our most autistic of idiot savants in rapid, error-free, syntactical operation. We think if a pocket calculator can be this unnaturally robotic in its responses, then imagine how wonderfully conscious, creative, semantic, etc, a next generation quantum supercomputer is going to be. Or some such inherently self-contradicting shit.


Well, again, I understand that you believe there is a fundamental problem that engineering human-level A.I. faces.
I will try to read through Pattee's work again to see if I can address that point.




m-theory August 21, 2016 at 22:59 ¶ #17020
Quoting Bitter Crank
Second, I think there is a strong tendency to underrate animal (wet) intelligence. It isn't learning how to recite Beowulf from memory that is the only impressive human achievement. It's also remembering the odor of the room where we learned Anglo-Saxon and now feel nostalgia for that faint musty odor when we recite Beowulf, that's distinctive. [SEE NOTE] Dry intelligence can replay Beowulf, but it can't connect odors and texts and feelings. It can't feel. It can't smell.


This is more a matter of sensory apparatus; dry intelligence would be able to record and recall this input if it had the sensors to capture it.

Quoting Bitter Crank
Dry intelligence can't connect with the feelings of a dog excited by the walk it's about to take. Dry intelligence can't lay on the floor and determine whether the guy walking around is getting ready to go to work (alone) or is going to take the dog for a walk. Dogs can do that. They can tell the difference between routine getting ready to go to work and getting ready to go out of town (which the dog will probably disapprove of, considering what happened the last time "they" left). So can cats.


This algorithm does have primitive feelings.
It understands from experience that there is reward in the world and there is penalty in the world.
It also understands that which of these it experiences will depend on the choices it makes in its environment.
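In code, those "primitive feelings" amount to nothing more mysterious than a value update driven by reward and penalty. A minimal sketch using the standard Q-learning update (a textbook rule, not DeepMind's exact code):

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One temporal-difference step: reward (penalty is just negative
    reward) shifts the value the agent attaches to its last choice."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```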

Quoting Bitter Crank
Wet brains and wet intelligence have developed over an exceedingly long time. Wet brains aren't the only defense animals have, but they are remarkably effective. A rat's wet brain does, and will, outperform Deep Blue and all of its Blue successors, Screwed Blue, Dude Blue, Rude Blue, etc., because it has capabilities that cannot be reproduced by an algorithm.

It's not the algorithm, it's the structure of the body and its history.

[NOTE] I never learned Anglo Saxon and I can't recite Beowulf. I can pretend I did, and even feel like I did. Betcha Deep Blue can't do that.


This is a different example from Deep Blue because it has the above-mentioned reinforcement learning techniques employed.
Deep Blue had to be programmed with what the problem of chess was; that program had to be hand-crafted by human engineers.
AlphaGo had to have its ability to learn hand-crafted, but once that was done it learned what the problem of Go was, learned what the solution to that problem is (to win), and it learned all this from scratch.

This algorithm is also different because it is not limited to playing Go.
Deep Blue can only play chess unless it is reconfigured by human programmers (it would have to use a different algorithm to learn a different game, and it would not perform well at Go because Go has far too many possible moves to solve with brute-force techniques).
DeepMind, on the other hand, can learn to play Atari games in the same way it learned to play Go.

This algorithm is a breakthrough because, so far, it appears that it can be applied to any problem in general.
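The generality claim can be put in one picture: the same learning loop, with no game-specific code, is pointed at different environments. A sketch, assuming a gym-style env interface (my own illustration, not DeepMind's published API):

```python
def train(env, agent, episodes=1000):
    """The same loop whether env wraps Breakout, Pong, or a Go board:
    only observations, actions and rewards cross the boundary, and no
    game-specific knowledge is hand-crafted into the agent."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = agent.act(state)
            next_state, reward, done = env.step(action)
            agent.learn(state, action, reward, next_state, done)
            state = next_state
```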

BC August 21, 2016 at 23:48 ¶ #17027
Reply to m-theory Designing a computer to learn things is an advance on our part, for sure. I've been using computers of one kind or another for the last 36 years, and not one of them has learned a damned thing. Granted, these were slightly less powerful than DeepMind. An early PC is to DeepMind as a cockroach is to an elephant.

If computers are ever to be "intelligent", whatever that means, they certainly will have to have the capacity to learn without human instigation. That means, I suppose, that they have to have some sort of will. They will also need some independent mobility, to take their sensory apparatus on the road to find things that they want to learn about. Will and wishes imply some sort of feelings, like curiosity and satisfaction. When they arrive, we will all be watched over by machines of loving grace. [BBC]
apokrisis August 21, 2016 at 23:48 ¶ #17028
Quoting m-theory
I argue that because this algorithm has to learn from scratch, it must discover its own semantics within the problem and the solution to that problem.


That is the question. Does it actually learn its own semantics, or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?

The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.

So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, as no contact with any actual semantics - no regulative interactions with material instability - is taking place.

Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.

Quoting m-theory
Consider the task of creating a robot hand that is as dexterous as the human hand.


Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

(Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).

Quoting m-theory
So again...this algorithm, if it does have semantic understanding, does not and never will have human semantic understanding.


But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.

Quoting m-theory
Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.


Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book....

http://lifesratchet.com/
m-theory August 21, 2016 at 23:55 ¶ #17031
Reply to Bitter Crank
Even if this algorithm makes that possible, it would still take quite a while to teach it anything resembling the common sense we expect of developed humans.

m-theory August 22, 2016 at 00:34 ¶ #17037
Quoting apokrisis
That is the question. Does it actually learn its own semantics, or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?


Again, with regular humans there is a human in the loop.
As you grew from an infant to a child it was not in a vacuum...you learned from the expectations of others.

But yes, sometimes humans have to intervene and give guidance.

However, all this amounts to is changes to reward and penalty value assignments.
If DeepMind gets stuck on a problem in which it needs to explore more to be efficient, then the value of the reward for exploring is tweaked.
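Concretely, the tweak I mean is usually just a change to one or two scalar parameters. A hypothetical sketch of count-based reward shaping (the names and numbers are my own illustration):

```python
def shaped_reward(extrinsic, state, visit_counts, novelty_bonus=0.05):
    """Hypothetical count-based shaping: rarely visited states earn a
    small extra reward, so 'tweaking the value of exploring' is
    literally a change to one scalar (novelty_bonus)."""
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return extrinsic + novelty_bonus / visit_counts[state]
```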

Quoting apokrisis
The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.

So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, as no contact with any actual semantics - no regulative interactions with material instability - is taking place.

Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.


I am arguing that the semantics in this algorithm's example are not simply baked in, because it can learn on its own to shift biases as it discovers new information about its environment and about itself in relation to its environment.

I don't agree with the notion that humans have semantics from birth (perhaps some); semantics is something we learn not just from ourselves but from others.

Semantics is a dynamic thing, and this is the first example of an algorithm with a robust dynamic semantic capability.
That is to say, it is flexible enough that it can handle the dynamic semantics of a variety of tasks with a high degree of autonomy.

This system can handle instability of environments (I gave the example of a system that it learned to regulate).

Quoting apokrisis
Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

(Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).


My point was that there is a difference between engineering to replicate a system one-to-one and designing to accomplish one-to-one utility.
We can often achieve the same utility without modeling the exact system.

But I will concede your main point here, that a human hand can adapt by the process of evolution as a consequence of its complicated systems, whereas a robot hand will never be able to adapt in that way.
I don't see that as a major concession, because evolution takes so long to produce discernible adaptation, and because we do not necessarily want a robot model of the human hand to adapt under environmental pressure over the course of many, many generations.


Quoting apokrisis
But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.


I tried to explain that there is a general sense of the term mind as something others have.
As a term, it means a general way of thinking.
I believe this sense of the term mind is an algorithm, and it is how we account for the fact that vastly different people can agree on semantics...because they learn the same problems, form the same solutions to those problems, and are taught by people who have the same general algorithm.

I am suggesting that there is a single algorithm for general intelligence that not only we possess but others possess and that is how we can answer the question of whether or not we have a mind with a yes or no without error.

If there is no general intelligence algorithm, it is quite a curious thing that so many different individuals and different cultures should share so much in common.
One would expect that if each mind is not built on a general template but is rather its own unique iteration, then there should be much more variety, and the minds of other humans would seem utterly alien to us more often than they would seem similar to our own.


m-theory August 22, 2016 at 02:25 ¶ #17056
Quoting apokrisis
Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book....


I have read some more and you are right, his writing is very technical.
I was hoping for a more generalized statement of the problem of the epistemic cut, because I believe that the partially observable Markov decision process (POMDP) might be a very general way of establishing an epistemic cut between the model and the reality in an A.I. agent.
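For what it is worth, here is the sense in which I think a POMDP formalises a cut between model and reality: the agent never touches the true state, only a belief distribution updated from observations. A minimal Bayes-filter sketch (all names are illustrative assumptions):

```python
def belief_update(belief, action, obs, states, trans_prob, obs_prob):
    """Bayes filter over hidden states: b'(s2) is proportional to
    P(obs | s2, action) * sum_s P(s2 | s, action) * b(s).
    trans_prob and obs_prob are assumed-known model functions."""
    new_belief = {}
    for s2 in states:
        predicted = sum(trans_prob(s, action, s2) * belief[s]
                        for s in belief)
        new_belief[s2] = obs_prob(s2, action, obs) * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}
```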

I noticed he draws on the work of von Neumann, so I will pursue that as well.

Thanks again for your posts and again you have given me a lot to think about.
apokrisis August 22, 2016 at 03:20 ¶ #17066
Quoting m-theory
I have read some more and you are right, his writing is very technical.


Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.

Quoting m-theory
I was hoping for a more generalized statement of the problem of the epistemic cut, because I believe that the partially observable Markov decision process (POMDP) might be a very general way of establishing an epistemic cut between the model and the reality in an A.I. agent.


Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?
m-theory August 22, 2016 at 05:01 ¶ #17071
Quoting apokrisis
Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.

What I see as his main issue is that he believes there is something like the measurement problem when dealing with the origin of life.
He seems to use the term epistemic cut synonymously with the measurement problem.

Perhaps he is correct.

To solve the problem of artificial life in general, sure, he may have a point...however, the goal in the field of A.I. is not to recreate life artificially but to create artificial intelligence.
The problem of general artificial intelligence is not equivalent to the problem of artificial life, I don't believe.
So I don't agree that we have to solve the measurement problem to solve the problem of making general purpose A.I.

If so, and if the measurement problem is undecidable, then that would mean we could not answer yes or no as to whether we had general intelligence.
This is why I do not believe defining our terms (intelligence/mind/consciousness) in this way would be productive, and it certainly is not clear that it is necessary to do so.
It solves no issue and creates one that is not necessary, if another definition is more suitable.
That is to say, provided the other definition makes our terms decidable things.

I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.

But again I don't agree that we have to solve the issue of the origins of life (and any measurement problems that exist there) in order to solve the problem of machines that can think as well as, if not better than, humans do.

Quoting apokrisis
Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?


Mr. Palm Pilot...I don't get it?
:s

What is wrong with Bayesian probability? I don't get it either.

Are you saying that Bayesian statistical methods cannot be used to form an epistemic cut because of some fundamental issue?

Some statistical method will have to be used, because the exact details of initial conditions at the time of observation cannot be known.
I don't see any issue with using Bayesian methods.






apokrisis August 22, 2016 at 05:38 ¶ #17074
Quoting m-theory
But I don't agree that we have to solve the origin of life and the measurement problem to solve the problem of general intelligence.


Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are the currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.

Quoting m-theory
I suppose if you want to argue that the mind ultimately takes place at a quantum scale in nature then Pattee may well be correct and we would have to contend with the issues surrounding the measurement problem.


That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.

I explained this fairly carefully in a thread back on PF if you are interested....
http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.

Quoting m-theory
What is wrong with Bayesian probability? I don't get it either.


I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.
m-theory August 22, 2016 at 06:15 ¶ #17083
Quoting apokrisis
Great. So in your view general intelligence is not wedded to biological underpinnings. You have drunk the Kool-Aid of 1970s cognitive functionalism. When faced with a hard philosophical rebuttal to the hand-waving promises that are the currency of computer science as a discipline, suddenly you no longer want to care about the reasons AI is history's most over-hyped technological failure.


I just don't think we have to crack the origin of life before we can crack the problem of machines with minds.

That is the bottom up approach.
We are reverse engineering from the top down as you pointed out.
And I believe that somewhere in the middle is where the mind breakthrough will happen.
I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.

I granted that of course those processes have feedback that informs the mind...but I do not see that a significant portion of them do.
I think the level of detail regarding that feedback can be considered negligible (for example, I don't think we need to model the circulatory system, or the neurology that supports it, in order to achieve a mind...and the list of systems I believe are unnecessary to model does not end there).
This is where we seem to disagree most.

Quoting apokrisis
That is nothing like what I suggest. Instead I say "mind" arises out of that kind of lowest level beginning after an immense amount of subsequent complexification.

The question is whether computer hardware can ever have "the right stuff" to be a foundation for semantics. And I say it can't because of the things I have identified. And now biophysics is finding why the quasi-classical scale of being (organic chemistry in liquid water) is indeed a uniquely "right" stuff.


Most of what happens at the nano or quantum scale has little to do with how the brain forms semantics, in my view.
I believe semantics, in the context of the mind, is entailed by self-aware syntax.
For a machine to create a model of itself does not require that it be biological.

For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.

Again we mostly have a different view on how the relevant terms ought to be defined.

Quoting apokrisis
I explained this fairly carefully in a thread back on PF if you are interested....
http://forums.philosophyforums.com/threads/the-biophysics-of-substance-70736.html

So here you are just twisting what I say so you can avoid having to answer the fundamental challenges I've made to your cosy belief in computer science's self-hype.

Well, I think I get it...Pattee argues that life may be like a unique state of matter at the quantum scale, and we just might not be able to tell because of the measurement problem (I know it is much more complicated than that; I just could not think of a better analogy, for brevity's sake).

I just don't agree that intelligence is necessarily dependent upon that state.
I don't see why computers can not be the "right stuff" as you put it.
Pattee does not provide conclusive evidence that such is the case.
And you haven't either.

Also, you don't have to be so condescending in your replies.
We can disagree without being insulting to each other...I may be wrong and stupid for what I believe, but I am entitled to be wrong and stupid, and it does not hurt anyone but me.
It kind of hurts my feelings, man, because I have a lot of respect for you.

Quoting apokrisis
I thought you were referring to the gaudy self-publicist, Jeff Hawkins, of hierarchical temporal memory fame - https://en.wikipedia.org/wiki/Hierarchical_temporal_memory

But Bayesian network approaches to biologically realistic brain processing models are of course what I think are exactly the right way to go, as they are implementations of the epistemic cut or anticipatory systems approach.

Look, it's clear that you are not even familiar with the history of neural networks and cybernetics within computer science, let alone the way the same foundational issues have played out more widely in science and philosophy.

Don't take that as an insult. It is hardly general knowledge. But all I can do is point you towards the arguments.

And I think they are interesting because they are right at the heart of everything - being the division between those who understand reality in terms of Platonic mechanism and those who understand it in terms of organically self-organising processes.


Hey thanks.
That cheered me a bit.
You are right I am not well versed in the history of neural network theory.
I guess I have a lot more research to do before I become aware of the issues you are referring to.

My main concern is that some want to define the terms surrounding the issue in such a way that they are not decidable.
That is not productive, because very obviously they must be decidable, or we could not know that thinking is what we are doing when we think.

Whatever we mean by the term mind, we ourselves can know definitively that we have one...and that means this term is something an algorithm can compute.

So that is a foundational assumption about how the term should be defined that I have.


m-theory August 22, 2016 at 06:35 ¶ #17088
Reply to apokrisis
To put it another way, I don't agree that a mind is utterly dependent upon all of life's complicated systems.
I think it is more dependent upon the computation that life is able to perform, and I think computers can be designed to perform similarly without necessarily being one-to-one biological or one-to-one simulations of the biological.
tom August 22, 2016 at 08:31 ¶ #17103
Quoting m-theory
That is the bottom up approach.
We are reverse engineering from the top down as you pointed out.
And I believe that somewhere in the middle is where the mind breakthrough will happen.


So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it?

Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room.

It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.

Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.

Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved!


m-theory August 22, 2016 at 08:53 ¶ #17106
Quoting tom
So you hope to discover the software by examining the hardware? The trouble is, since we don't know what we're looking for, how could we recognise it?

That is a good point; maybe you are right.

I thought we were just looking for a way to encode semantics relative to agency.

But there could be much more to it than just this...I have to admit I don't know.

Quoting tom
Back to epistemology. If we want to create an AGI then the problem of how to create knowledge will have to be solved. You can't transfer knowledge from one mind to another. Instead one mind creates cultural artefacts, from which the other mind discerns something not contained within the artefact - its meaning. As Karl Popper said, "It is impossible to speak in such a way that you cannot be misunderstood." This, by the way, dispenses with the Chinese Room.


If we had a thinking machine that interacted with humans there is no reason to assume it would not be able to communicate with the conventions humans use.

Quoting tom
It has been suggested that the human brain evolved the way it did in order to facilitate efficient knowledge transfer. Humans are unique (i.e. they are the last remaining species) in that they interpret meaning and intention - i.e. they create knowledge from artefacts and behaviours.

Now, here's the amazing thing if this account of our evolutionary history is true: once you can create knowledge, there is no stopping you. This is a leap to universality. Once you are an explainer you are automatically a universal explainer because the same mechanisms are involved.

Prior to the leap to universal explainer, there must have been another leap - the leap to computational universality in the human brain. This is a hardware problem, which we have long solved!

I am not so sure.
It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.




tom August 22, 2016 at 09:49 ¶ #17122
Quoting m-theory
If we had a thinking machine that interacted with humans there is no reason to assume it would not be able to communicate with the conventions humans use.


Nice example of misunderstanding a cultural artefact.

Quoting m-theory
I am not so sure.
It could be that the brain's software became more efficient too, and that it is not strictly a hardware leap.


And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood.
apokrisis August 22, 2016 at 10:18 ¶ #17126
Quoting m-theory
And I believe that somewhere in the middle is where the mind breakthrough will happen.
I believe this because a great deal of what the body and brain do is completely autonomous from the mind...or at least what we mean by the term mind.


Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.

Quoting m-theory
For this reason I think simulations of thought do not have to recreate the physics of biology at the nano scale before a mind can be modeled.


This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.

Perhaps the computer simulation only needs to be as coarse grain as you describe. But you have to be able to provide positive reasons to think that is so rather than make the usual computer science presumption it probably is going to be so.

And part of that is going to be showing that simulations are more than just syntactical structures. You have to have an account of semantics that is grounded in physicalism, not in some hand-wavy dualistic folk psychology notion of "mind".

Quoting m-theory
I just don't agree that intelligence is necessarily dependent upon that state.
I don't see why computers can not be the "right stuff" as you put it.
Pattee does not provide conclusive evidence that such is the case.
And you haven't either.


But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.

If you also think a machine can be the right stuff, then why isn't it easier to produce artificial life than to produce artificial mind? DNA is just a genetic algorithm, right? And we understand biology better than neurology?

So maybe we are just fooling ourselves here because we humans are smart enough to follow rules as if we are machines. We can walk within lines, move chess pieces, write squiggled pages of algebra. And we can even then invent machines that follow the rules we invent in a fashion that actually is unaware, syntactic and simulated.

That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.
m-theory August 22, 2016 at 10:18 ¶ #17127
Quoting tom
Nice example of misunderstanding a cultural artefact.


Or it could be a nice example of a poorly constructed artifact.
But I will assume the fault lies with me...and hope you can forgive that.

Quoting tom
And again it seems. The leap to computational universality (the hardware problem) is fully understood. The leap to universal explainer (the software problem) is not understood.


The software problem...
Our software can self-analyze...even its own software.
Why it should not be able to model itself is beyond me...and your short cryptic answers do not help me to understand (are you on a mobile phone or something?).

Why should I agree that we cannot self-analyze sufficiently to explain how we are able to analyze?

You seem to indicate that we are at square one of this problem with no clue where to start.

That idea seems absurd to me considering the vast amount of effort in many different disciplines aimed at explaining how it is we are able to explain (after all the unexamined life is not worth living if you can examine it).

If you believe the problem is immensely more vast than I realize, then you should at the very least suggest why I should believe that too.

m-theory August 22, 2016 at 10:49 ¶ #17135
Quoting apokrisis
Do you mean a dualistic folk psychology notion of mind? I instead take the neurocognitive view that what you are talking about is simply the difference between attentive and habitual levels of brain processing. And these are hardly completely autonomous, but rather completely interdependent.


Allow me to put it another way.
We might disembody a head and sustain the life of the brain without a body by employing machines.
Were we to do so we would not say that this person has lost a significant amount of their mind.
Would we?
A gruesome prospect to be sure but it is only a hypothetical.
Perhaps it would not be practical for any but a short period, but I did do some research and it is not completely implausible.

This may be a folksy rebuttal to the notion that we must understand all of the body and even the origin of life to understand the mind.
But it is what I immediately thought when I realized that this was the problem you seemed to be presenting.

I am not sure what role attentive and habitual processing plays in theories of the mind, or how relevant it is to this subject.

Again you shame me for my lack of knowledge...I will have to research further to begin to understand your concern in this regard.

My notion was that we might hope to model something like the default mode network.
How dependent that network is upon attentive and habitual processing, I do not know, so I admit I may have greatly underestimated the difficulties involved.


Quoting apokrisis
But the burden of proof is on you here. The only sure thing is that whatever you really mean by intelligence is a product of biology. And so biological stuff is already known to be the right stuff.


I don't agree with that at all.
If you state that the origins of life must be understood in order that we understand the mind, that is a claim that entails a burden of proof.

Quoting apokrisis
This misrepresents my argument again. My argument is that there is a fundamental known difference between hardware and wetware as BC puts it. So it is up to you to show that this difference does not matter here.


Nonsense.
If the mind is computational then it is simply matter that creates the environment in which computation can take place.
That the matter must be living is a claim and will also have a burden of proof.

The main issue at hand is whether or not computational theory of the mind is valid.
Not whether or not inorganic matter can compute.

Quoting apokrisis
That would be why it seems easy to work from the top down. Computers are just mechanising what is already us behaving as if we were mechanical. But as soon as you actually dig into what it is to be a biological creature in an embodied relation with a complex world, mechanical programs almost immediately break down. They are the wrong stuff.

Neural networks buy you some extra biological realism. But then you have to understand the detail of that to make judgements about just how far that further exercise is going to get.


Again we are working from completely different assumptions about theory of the mind.
I am arguing a case for computational theory.
You seem to be arguing a case for embedded cognition to the exclusion of computational models.

You are also misleading about how conclusive the matter is...it is not simply settled in the context of philosophy whether computational theories of the mind are valid...even if you have decided they aren't.
So please be charitable and don't assume that ignorance alone is what guides my views.

I respect your position and grant that if it is the most valid, then I am just wrapped up in some hype chamber.
You will have to forgive me; the idea of it fascinates me so much that I want to believe.


apokrisis August 22, 2016 at 11:58 ¶ #17159
Quoting m-theory
We might disembody a head and sustain the life of the brain without a body by employing machines.
Were we to do so we would not say that this person has lost a significant amount of their mind.
Would we?


That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.

Quoting m-theory
My notion was that we might hope to model something like the default mode network.


That is simply how the brain looks when attention is in idle mode with not much to do. Or indeed when attention is being suppressed to avoid it disrupting smoothly grooved habit.

Quoting m-theory
If you state that the origins of life must be understood in order that we understand the mind, that is a claim that entails a burden of proof.


Who is talking about the origins of life - the problem of abiogenesis? You probably need a time machine to give an empirical answer on that.

I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.

Quoting m-theory
The main issue at hand is whether or not computational theory of the mind is valid.
Not whether or not inorganic matter can compute.


Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?

And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.

And we know that computation is rooted in material stability? Hardware fabrication puts a lot of effort into achieving that, starting by worrying about the faintest speck of dust in the silicon foundry.

And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?
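
To make that concrete: here is roughly everything that "looking up rules" amounts to, as a toy Python sketch of my own - not anyone's actual system.

    # A pure syntax engine: input symbols are rewritten as output symbols
    # by table lookup alone. Nothing here represents what any symbol means.
    RULES = {
        "ni hao": "ni hao ma?",
        "xie xie": "bu ke qi",
    }

    def respond(symbols):
        # The whole "competence" is this lookup. An input with no entry is
        # not noticed as meaningful or meaningless - it is simply absent.
        return RULES.get(symbols, "")

Whether elaborating that table ever changes its character is exactly what is in dispute here.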

So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.
m-theory August 22, 2016 at 12:51 ¶ #17171
Quoting apokrisis
That is irrelevant because you are talking about an already fully developed biology. The neural circuitry that was the result of having a hand would still be attempting to function. Check phantom limb syndrome.

Then imagine instead culturing a brain with no body, no sense organs, no material interaction with the world. That is what a meaningful state of disembodiment would be like.


I had not thought of that.
I suppose you are right that there was a body even if there is not one now.
So you can still argue that the body plays a very significant role in the mind.

I see the problem that I face now...biology has produced a mind and you can always fall back on that.

Touché

nicely done sir.

Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
You have decided that the term mind must be defined biologically to the exclusion of a computational model.

It may well be that you are correct...but it is not a settled matter in philosophy.

Quoting apokrisis
I was talking about the biological basis of the epistemic cut - something we can examine in the lab today.


Yes and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
The term epistemic cut was used synonymously with the quantum measurement problem and the author continuously alluded to the origins of self replicating life.

Quoting apokrisis
Again, we know that biology is the right stuff for making minds. You are not expecting me to prove that?

We also know that matter can compute...surely I am not expected to prove as much?

Quoting apokrisis
And we know that biology is rooted in material instability, not material stability? I've given you the evidence of that. And indeed - biosemiotically - why it has to be the case.


Imagine if the body and brain had a sudden interruption in the supply of electrons within its neurological system?
Biology is not without stability.

Quoting apokrisis
And I've made the case that computation only employs syntax. It maps patterns of symbols onto patterns of symbols by looking up rules. There is nothing in that which constitutes an understanding of any meaning in the patterns or the rules?


No, you have stated this as if it were a settled matter by suggesting that only biology can form semantics.
I don't agree semantics can only occur in biology.

Quoting apokrisis
So that leaves you having to argue that despite all this, computation has the right stuff in a way that makes it merely a question of some appropriate degree of algorithmic complication before it "must" come alive with thoughts and feelings, a sense of self and a sense of purpose, and so you are excused of the burden of saying just why that would be so given all the foregoing reasons to doubt.


Again I refer to the alternative of an undecidable mind.
We could not know if we had one if the mind is not algorithmic; it is that simple.
If we can know without error that we have minds, this is the result of some algorithm, which means the mind is computational.
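
Spelled out, with the Church-Turing thesis as the bridging premise (my own construal of the argument):

    \begin{align*}
    P_1 &:\ \text{``Do I have a mind?'' is answered correctly, in finitely many steps, every time it is asked.}\\
    P_2 &:\ \text{(Church--Turing) whatever is reliably decidable in finitely many steps is decidable by some algorithm.}\\
    \therefore\ C &:\ \text{the self-ascription of mind is algorithmic; deny } C \text{ and, by } P_2\text{, you must deny } P_1.
    \end{align*}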

Why this argument fails has not been addressed by what you have provided on this thread.


tom August 22, 2016 at 18:43 ¶ #17238
Quoting m-theory
Or it could be a nice example of a poorly constructed artifact.
But I will assume the fault lies with me...and hope you can forgive that.


You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.

So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible.

Notice that subjectivity has already appeared! AlphaGo has no subjectivity.

Quoting m-theory
It could be that the brains software became more efficient to and that it is not strictly a hardware leap.


AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go!
m-theory August 22, 2016 at 20:13 ¶ #17246
Quoting tom
Notice that subjectivity has already appeared! AlphaGo has no subjectivity.


This does not follow from^

Quoting tom
You are completely missing the point. It is impossible to transfer knowledge from one mind to another. Minds construct new knowledge from artefacts, problem-situations, background knowledge, by a fundamentally creative ability.

So, the creator of the artefact, and the interpreter of the artefact, are engaged in an inter-subjective dialogue. Each person is conjecturing theories about what each other means or interprets. Perfection and justification are impossible.


This.^

Quoting tom
AlphaGo can be as efficient as it likes. It will always fail the Chinese Room. It cannot create the knowledge that it is playing Go!


Suppose AlphaGo was tasked with learning the problem of the context in which Chinese is used, and was able to converge upon a solution such that it could efficiently and consistently pass a Turing test.

Then suppose we Chinese-room AlphaGo and Searle.

For the people outside the room the Chinese room is just a black box.
If we ask them and they insist that the black box understands Chinese how would we account for that apparent knowledge?

If the man inside insists he is only performing the necessary actions he was instructed to perform, we can conclude that the knowledge did not come from him, right?

So either the people outside the black box have simply projected knowledge into meaningless strings of symbols.
Which would be a philosophical issue for another thread I would say.

Or the system of instructions can function in the role of the software while the man functions in the role of hardware and when combined they produce Chinese for those outside to interpret.
If this is the account for the knowledge of Chinese then it would not conflict with the computational theory of the mind.

I am unable to think of any other reasonable options to account for the knowledge of Chinese if the people outside the black box insist that it is there.

apokrisis August 22, 2016 at 22:00 ¶ #17267
Quoting m-theory
Of course I disagree that the mind must necessarily always be biological...but that is a semantic debate surrounding how the term is defined.
You have decided that the term mind must be defined biologically to the exclusion of a computational model.


In your stubbornness, you keep short-cutting my carefully structured argument.

1) Whatever a mind is, we are as certain as we can be that biology has the right stuff. Agreed?

2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?

3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?

4) Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.

Quoting m-theory
Yes and as far as I could tell from your source material it was claimed that the origin of life contains a quantum measurement problem.
The term epistemic cut was used synonymously with the quantum measurement problem and the author continuously alluded to the origins of self replicating life.


This is another unhelpful idee fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s as DNA was being discovered and the central mechanism of evolution becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.

Quoting m-theory
Imagine if the body and brain had a sudden interruption in the supply of electrons within its neurological system?
Biology is not without stability.


You seem to be imagining that electrons are like little Newtonian billiard balls or something. Quantum field theory would say a more accurate mental picture would be excitations in a field. And even that leaves out the difficult stuff.

But anyway, again of course there is always stability and plasticity in the world. They are complementary poles of description. And the argument from biophysics is that dynamical instability is essential to life because life depends on having material degrees of freedom that it can harness. For biological information to act as a switch, there must be a physico-chemical instability that makes for material action that is switchable.

Quoting m-theory
I don't agree semantics can only occur in biology.


Fine. Now present that evidence.

Quoting m-theory
Again I refer to the alternative of an undecidable mind.
We could not know if we had one if the mind is not algorithmic; it is that simple.
If we can know without error that we have minds, this is the result of some algorithm, which means the mind is computational.


No idea what you are talking about here.




m-theory August 22, 2016 at 23:15 ¶ #17276
Quoting apokrisis
2) The best theory of what kind of stuff that actually is, is what you would expect biologists to produce. And the standard answer from biologists is that biology is material dynamics regulated by semiotic code - unstable chemistry constrained by evolving memory. Agreed?


No of course I don't agree that the best theory of the mind must be biological.

Quoting apokrisis
3) Then the question is whether computation is the same kind of stuff as that, or a fundamentally different kind of stuff. And as Pattee argues (not from quantum measurement, but his own 1960s work on biological automata), computation is physics-free modelling. It is the isolated play of syntax that builds in its presumption of being implementable on any computationally suited device. And in doing that, it explicitly rules out any external influences from the operation of physical laws or dissipative material processes. Sure there must be hardware to run the software, but it is axiomatic to universal computation that the nature of the hardware is irrelevant to the play of the symbols. Being physics-free is what makes the computation universal. Agreed?


I must admit I can make no sense of this.

What the epistemic cut could be other than a measurement problem is beyond me and I had difficulty finding a good definition of that term in your reference sources.
I cannot be sure how the problem relates to the computational theory of the mind, or if it is actually necessary as Pattee would insist it is.

Pattee has also taken the liberty of defining the term semantics such that it will necessarily exclude anything which isn't biological.
Again this may be necessary because of the epistemic cut...or it may not.

The closest I came to grasping what he might mean by this term came from his references to von Neumann.


from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.


I offered that the POMDP could be a resolution.
You did not really bother to suggest any reason why that view was not correct.

Quoting apokrisis
Given the above - that biological stuff is fundamentally different from computational stuff in a completely defined fashion - the burden is then on the computationalist to show that computation could still be the right stuff in some way.


Mind is only found in living organic matter, therefore only living organic matter can have a mind.
That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
But that this definition is by necessity the only valid theory of the mind is not simply a resolved matter in philosophy.
No matter how many papers Pattee has written.

Quoting apokrisis
This is another unhelpful idee fixe you have developed. As said, Pattee's theoretical formulation of the epistemic cut arose from being a physicist working on the definition of life in the 1950s and 1960s as DNA was being discovered and the central mechanism of evolution becoming physically clear. From von Neumann - who also had an interest in self-reproducing automata - Pattee learnt that the epistemic cut was also the same kind of problem as had been identified in quantum mechanics as the measurement problem.


Pattee does a poor job of generalizing this problem, especially considering the frequency with which he references the term epistemic cut.

This is the closest I came to finding a general sense of what Pattee might mean.


The epistemic cut or the distinction between subject and object is normally associated with highly evolved subjects with brains and their models of the outside world as in the case of measurement. As von Neumann states, where we place the cut appears to be arbitrary to a large extent. The cut itself is an epistemic necessity, not an ontological condition. That is, we must make a sharp cut, a disjunction, just in order to speak of knowledge as being "about" something or "standing for" whatever it refers to. What is going on ontologically at the cut (or what we see if we choose to look at the most detailed physics) is a very complex process. The apparent arbitrariness of the placement of the epistemic cut arises in part because the process cannot be completely or unambiguously described by the objective dynamical laws, since in order to perform a measurement the subject must have control of the construction of the measuring device. Only the subject side of the cut can measure or control.


In essence the epistemic cut is a measurement problem.
Perhaps I was wrong to call it a quantum measurement problem.

It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation cannot result in a mind (or rather that computation cannot form a subject-object distinction, at least).

Quoting apokrisis
Fine. Now present that evidence.


I did mention that I argued deductively that the mind must be something that is decidable.
But this was your response.
Quoting apokrisis
No idea what you are talking about here


My argument is on the first page below Tom's post.
Reply to tom

apokrisis August 23, 2016 at 00:05 ¶ #17291
Quoting m-theory
No of course I don't agree that the best theory of the mind must be biological.


Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.

Quoting m-theory
I offered that the POMDP could be a resolution.
You did not really bother to suggest any reason why that view was not correct.


But it is a resolution in being an implementation of the epistemic cut. It represents a stepping back into a physics-free realm so as to speak about physics-constrained processes.

The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.

Quoting m-theory
Mind is only found in living organic matter, therefore only living organic matter can have a mind.
That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
But that this definition is by necessity the only valid theory of the mind is not simply a resolved matter in philosophy.


Fortunately we only have to consider two theories of mind in this discussion - the biological and the computational. If you want to widen the field to quantum vibrations, ectoplasm, psychic particles or whatever, then maybe you don't see computation as being relevant in the end?

Quoting m-theory
It is not immediately clear to me how this general statement can be said to demonstrate necessarily that computation cannot result in a mind.


So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.

Quoting m-theory
My argument is on the first page below Tom's post.


That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands for a sign of a mind.

We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.
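
For reference, that formal property fits in a few lines - the standard diagonal sketch, where halts() is a hypothetical oracle, not a real function:

    # Suppose, for contradiction, that halts(f) were a total algorithm
    # returning True exactly when calling f() would terminate.
    def paradox():
        if halts(paradox):   # the oracle predicts termination...
            while True:      # ...so diverge forever
                pass
        # ...otherwise the oracle predicts divergence, so halt at once

    # Either answer halts(paradox) gives is falsified by paradox itself,
    # so no such total algorithm exists: halting is formally undecidable.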
tom August 23, 2016 at 07:56 ¶ #17448
Quoting m-theory
Mind is only found in living organic matter, therefore only living organic matter can have a mind.
That is an unassailable argument in that it defines the term mind to the exclusion of inorganic matter.
But that this definition is by necessity the only valid theory of the mind is not simply a resolved matter in philosophy.


But it is the sort of "unassailable" argument that will be forgotten when we create an artificial mind.

If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place.


Metaphysician Undercover August 23, 2016 at 10:22 ¶ #17457
Quoting tom
You are completely missing the point. It is impossible to transfer knowledge from one mind to another.


This all depends on what type of existence you think that knowledge has, which is determined by your metaphysical perspective. Some would say that knowledge exists only in minds, like you do. Some would say that knowledge exists externally to the mind, in the artefact, such as in the books, in the library. It is also possible to produce an inclusive metaphysics, which includes both these aspects of knowledge. In which case knowledge passes from one mind to another, having an active and passive form.
m-theory August 24, 2016 at 16:39 ¶ #17656
Quoting apokrisis
Yes. But you don't agree because you want to believe something different without being able to produce the evidence. So at this point it is like arguing with a creationist.


I produced a deductive argument.

Quoting apokrisis
The bit that is then missing - the crucial bit - is that the model doesn't have the job of making its own hardware. The whole thing is just a machine being created by humans to fulfill human purposes. It has no evolutionary or adaptive life of its own.


The human mind did not create its own hardware.
We are simply products of chemistry.

Quoting apokrisis
So computation is a formal action - algorithmic or rule-bound. And yet measurement is inherently an informal action - a choice that cannot be computed. Houston, we have a problem.


Measurement is not an informal action when it produces something discrete.
Imagine that something like a measurement problem does face the mind.
Then we are saying that the mind cannot take into account its own existence within the context of what is measured.
Of course I don't agree that is a productive way to define the mind, because we do not consider the mind something that is not included in our observations.

Quoting apokrisis
That's a good illustration of the absolute generality of the measurement problem then. To have a formal theory of the mind involves also the informal choice about what kind of measurement stands for a sign of a mind.

We are talking Godelian incompleteness here. In the end, all formal reasoning systems - all syntactical arguments - rely on having to make an abductive and axiomatic guess to get the game started. We have to decide, oh, that is one of those. Then the development of a formal model can begin by having agreed a basis of measurement.

But then you mix up the issue of a measurement basis with something different - the notion of undecidability in computation.

Science models the world. So as an open-ended semiotic process, it doesn't have a halting problem. Instead, it is free to inquire until it reaches a point of pragmatic indifference in the light of its own interests.

You are talking about a halting problem analogy by the sound of it. And that is merely a formal property of computational space. Some computational processes will terminate, others have a form that cannot. That is something quite different.


If the mind is something you can be sure that you have, and you can be sure correctly each time you inquire about the presence of your own mind...this would mean that the mind is something fundamentally computational.

The alternative is that we cannot be sure we have minds - that there is no finite set of steps our brains could take to derive the answer to the question of our own minds.

Of course we do not use the term this way.
So why should we define the term formally this way?

m-theory August 24, 2016 at 16:54 ¶ #17658
Quoting tom
If you are wondering how we can know that we have created a mind, we will know because we will have understood the mind well enough to program it in the first place.


The question of which theory of the mind is correct and which theories are mutually exclusive of each other is still very much an open one in philosophy.

I will take this point of yours to mean that, until there is more clarity and greater consensus from further discovery, it will remain an open question.

Of course I cannot disagree with that.
apokrisis August 24, 2016 at 22:42 ¶ #17697
Quoting m-theory
If the mind is something you can be sure that you have, and you can be sure correctly each time you inquire about the presence of your own mind...this would mean that the mind is something fundamentally computational.


Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.

Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?

It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.

A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.

So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware-designing mind - to actually underwrite that. The computation itself is just blind, helpless syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.
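
And to see how little "interpret" means here, consider one step of a toy Turing machine in Python (my sketch, with an invented two-state table):

    # All the reading head's "interpretation" is lookup in a finite table.
    TABLE = {
        # (state, symbol) -> (next state, symbol to write, head move)
        ("scan", "0"): ("scan", "0", +1),
        ("scan", "1"): ("done", "1", 0),
    }

    def step(state, tape, head):
        symbol = tape.get(head, "_")                 # "_" marks a blank cell
        state, write, move = TABLE[(state, symbol)]  # a smudge is just a KeyError
        tape[head] = write
        return state, tape, head + move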

So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffs to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.
m-theory August 26, 2016 at 21:14 ¶ #18088
Quoting apokrisis
Yeah. I just don't see that. You have yet to show how syntax connects to semantics in your view. And checking to see that "we" are still "a mind" is about as irreducibly semantic an act as you could get, surely? So how is it fundamentally syntactical? Or how is computation not fundamentally syntax? These are the kinds of questions I can't get you to engage with here.


This seems like some of that folksy dualism you were talking about.
Semantics and syntax are separated by some special case of nature.

I did engage these points.

I do not view semantics as something that can even occur without syntax, and I offered the example of a POMDP which could handle the "irreducible" subject-object distinction in a deciding agent.

Quoting apokrisis
Out of curiosity, why did you cite partially observable Markov decision processes as if they somehow solved all your woes? Did you mean to point to some specific extra they implement which other similar computational architectures don't - like having an additional step that is meant to simulate actively disturbing the world to find if it conforms with predictions?


I mention it because I believe it can be argued that the epistemic cut you mentioned is not an intractable problem.

Quoting apokrisis
It seems to me still that sure we can complexify the architectures so they add on such naturalistic behaviours. We can reduce creative semantics to syntactically described routines. But still, a computer is just a machine simulating such a routine. It is a frozen state of habit, not a living state of habit.

A real brain is always operating semantically in the sense that it is able to think about its habits - they can come into question by drawing attention to themselves, especially when they are not quite working out during some moment.

So as I've said, I agree that neural network architectures can certainly go after biological realism. But an algorithm is canonically just syntax.

Turing computation must have some semantics buried at the heart of its mechanics - a reading head that can interpret the symbols on a tape and do the proper thing. But Turing computation relies on external forces - some clever hardware-designing mind - to actually underwrite that. The computation itself is just blind, helpless syntax that can't fix anything if a fly spot on the infinite tape makes a smudge that is somewhere between whatever is the symbol for a 1 and a 0.

So to talk about AI in any principled way, you have to deal with the symbol grounding problem. You have to have a working theory of semantics that tells you whether your syntactic architecture is in some sense getting hotter or colder.

A hand-wavy approach may be quite standard in computer science circles - it is just standard practice for computer boffs to over-promise and under-deliver. DARPA will fund it anyway. But that is where philosophy of mind types, and theoretical biology types, will reply it is not good enough. It is just obvious that computer science has no good answer on the issue of semantics.


Semantics cannot exist without syntax.
To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.

The argument that semantics is strictly non-computational is ridiculous to me for this reason.
Even if I did agree that only biology could produce semantics I certainly would not agree that some impenetrable and irreducible problem accounts for semantics in biology.

I don't agree that your references have demonstrated that semantics in biology is necessarily an impenetrable and irreducible problem for which no computation can be applied in resolution.
In the POMDP example, infinite objective state and subjective belief sets are not intractable, and there are methods for general solution.

I will concede that exact solutions are not possible, but this means that the burden for Pattee is to demonstrate that exact solutions are necessary.
Simply waving his hand at an epistemic cut and saying there is no computational method for dealing with it is not convincing.
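
To show what I mean by general solutions: the core POMDP step is small enough to write down. The agent never touches the objective state, only a belief it updates from observations - a minimal sketch, with toy transition and observation tables T and O assumed:

    # Bayes filter over hidden states:
    # b'(s2) is proportional to O[(s2, a)](o) * sum over s of T[(s, a)](s2) * b(s)
    def update_belief(belief, action, observation, T, O, states):
        new_belief = {}
        for s2 in states:
            predicted = sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in states)
            new_belief[s2] = O[(s2, action)].get(observation, 0.0) * predicted
        total = sum(new_belief.values())
        # Normalise; a zero total would mean the observation was impossible
        # under the model.
        return {s: p / total for s, p in new_belief.items()}

Planning exactly over the continuous space of such beliefs is intractable, which is why practical solvers settle for approximate, general solutions - the concession I am making to Pattee above.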

apokrisis August 26, 2016 at 22:49 ¶ #18101
Quoting m-theory
Semantics cannot exist without syntax.
To implement any notion of semantics will entail syntax and the logical relationships within that syntax.
To ground this symbol manipulation simply means to place some agency in the role of being invested in outcomes from decisions.


Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with a POMDP?

A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.

But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how a POMDP results in agency and is not merely another example of a computationalist Chinese Room.

How as a matter of design is a POMDP not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is a POMDP's agency grown rather than built?

Sure, neural networks do try to implement this kind of biological realism. But the problem for neural nets is to come up with a universal theory - a generalised architecture that is "infinitely scalable" in the way that Turing computation is.

If the POMDP turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something the POMDP grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.

Every time some new algorithm must be written by the outside hand of a human designer rather than evolving internally as a result of experiential learning, you have a hand-crafted machine and not an organism.

So given the POMDP is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.

m-theory August 26, 2016 at 23:43 ¶ #18110
Quoting apokrisis
Great. Now all you need to do is define "agency" in a computationally scalable way. Perhaps you can walk me through how you do this with a POMDP?


Agency is any system which observes and acts in its environment autonomously.

Quoting apokrisis
A notion of agency is of course central to the biosemiotic approach to the construction of meanings - or meaningful relations given that this is about meaningful physical actions, an embodied or enactive view of cognition.


The same applies to a computational agent; it is embedded in its environment through sensory perceptions. It must be able to act within its environment in order to learn what it can and cannot do, and how that information relates to its goal of seeking reward.
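
Schematically, the definition I am using is just a loop - env and policy here are placeholders for whatever implements them, not any particular library:

    # Agency as defined above: observe, act autonomously, learn from reward.
    def run_agent(env, policy, steps=1000):
        observation = env.reset()
        for _ in range(steps):
            action = policy.choose(observation)           # decide for itself
            observation, reward, done = env.step(action)  # act in its environment
            policy.learn(observation, action, reward)     # adjust toward reward
            if done:
                observation = env.reset()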

Quoting apokrisis
But you've rejected Pattee and biosemiotics for some reason that's not clear. So let's hear your own detailed account of how a POMDP results in agency and is not merely another example of a computationalist Chinese Room.


I made it quite clear why I reject an argument from intractable infinite regress concerning an epistemic cut. I pointed out that belief about states and states themselves may be infinite but that there are methods for general solutions.

Pattee would need to demonstrate that exact solutions are necessary for semantics.
That has not been demonstrated.

I also provided a link that is extremely detailed.

Quoting apokrisis
How as a matter of design is a POMDP not reliant on the agency of its human makers in forming its own semantic relations via signs it constructs for itself? In what way is a POMDP's agency grown rather than built?


A POMDP is a method for resolving an epistemic cut that is argued to be a necessary dilemma for agency. The POMDP illustrates why infinite regress is not completely intractable; it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary and that the general solutions used in POMDPs resolve issues of the epistemic cut.

Quoting apokrisis
If the POMDP turns out to be merely an assemblage of syntactic components, their semantic justification being something that its human builders understand rather than something the POMDP grew for itself as part of a scaling up of a basic agential world relation, then Houston, you still have a problem.


I get the feeling that you are trying to suggest that if you can point to some syntax then this proves there is no semantics. Again I will suggest to you that such a definition of semantics is incoherent.

I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.

That this system is a semantic one and not brute-force syntax is shown by the AlphaGo example. To account for the competence of AlphaGo one cannot simply claim it is the brute force of syntax, as one might do with Deep Blue or other engines.

The agent of this system makes intuitive decisions based on experiences and systems of belief that may or may not be strictly relevant to Go; it does not have its beliefs and policies hand-crafted by humans as would be the case with a brute-force engine.
AlphaGo was not designed to play Go; it was designed to learn how to solve a variety of problems, and in order to be good at playing Go it must understand what Go is.
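
The contrast with a hand-crafted engine can be made concrete. In even the simplest value learner - a toy tabular sketch of my own, not DeepMind's actual architecture, which combined deep networks with tree search - the policy is a table filled in by experienced reward, with nothing game-specific written in by the designer:

    import random

    Q = {}  # (state, action) -> value learned from experience; empty at the start

    def learn(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
        # The designer supplies only the update rule; every value in the
        # table comes from observed reward.
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    def choose(state, actions, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(actions)                          # explore
        return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit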

Quoting apokrisis
So given the POMDP is your baby, I'm really interested to see you explain how it is agentially semantic and not just Chinese Room syntactic.


Once again I have already explained why the Chinese room argument fails.
Arguing the Chinese room simply means you are saying that organic matter is the hardware side and the instructions are the software side.
Taken as a whole it is the hardware and the software that provides the understanding.
Not one or the other as the Chinese room implies.

The Chinese Room does not refute computational theories of the mind, never has, and never will. It simply suggests that because the hardware does not understand, the software does not understand.

That is fine
and also completely irrelevant.

Computational theory of the mind is not the notion that one or the other (hardware or software) results in understanding; it is the theory that these things combined will result in understanding.




apokrisis August 27, 2016 at 00:39 ¶ #18120
Quoting m-theory
Agency is any system which observes and acts in its environment autonomously.


Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

So how precisely are observation, action and autonomy defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.

Quoting m-theory
The same applies to a computational agent; it is embedded in its environment through sensory perceptions.


Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

Quoting m-theory
Pattee must demonstrate that exact solutions are necessary for semantics.


But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

http://www.people.vcu.edu/~mikuleck/rosrev.html

Quoting m-theory
I also provided a link that is extremely detailed.


The question here is whether you understand your sources.

Quoting m-theory
The POMDP illustrates why infinite regress is not completely intractable; it is only intractable if exact solutions are necessary. I am arguing that exact solutions are not necessary and that the general solutions used in POMDPs resolve issues of the epistemic cut.


Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

Quoting m-theory
I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.


Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

Quoting m-theory
To account for the competence of AlphaGo one cannot simply claim it is the brute force of syntax, as one might do with Deep Blue or other engines.


But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.

If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

Quoting m-theory
The Chinese Room does not refute computational theories of the mind, never has, and never will.
It simply suggests that because the hardware does not understand, the software does not understand.


Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
m-theory August 27, 2016 at 01:57 ¶ #18141
Quoting apokrisis
Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

So how precisely are observation, action and autonomy defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.


Quoting apokrisis
Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.


Alright I concede your point here.
When I mention terms I don't understand what they mean, except by some folksy definition of these terms.

I don't really have time to explain repeatedly that, fundamentally, I don't agree that relevant terms such as these examples are excluded from computational implementation.

Quoting apokrisis
But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.


I will have to re-review your sources.
I saw no mention of Godel incompleteness.
Are you suggesting that if I review Theory of Self-Reproducing Automata, Von Neumann will lay out an argument from incompleteness that demonstrates that the mind is not computational?

Quoting apokrisis
You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

http://www.people.vcu.edu/~mikuleck/rosrev.html

This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.
I found no such example.
The entire thing is text, with no formal logic or mathematical notation whatsoever.

Quoting apokrisis
The question here is whether you understand your sources.


Quoting apokrisis
Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.


I suppose you are right: I don't understand my sources, my sources do indeed prove your point is correct, and I believe it is magic that accounts for my points.

Quoting apokrisis
Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.


The idea that semantics and syntax are independent and mutually exclusive sounds more like folksy dualism to me than computational theories of the mind do.

Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?

Quoting apokrisis
But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.


Again AlphaGo learned to play go from scratch.
It was not built to play Go; it was built to learn problems and how to solve those problems.
The semantics of Go was not built into AlphaGo, and you seem to be saying that because a human built it, any semantic understanding it has came from humans.
That is like saying that if you are taught how to play chess, any understanding you have of chess comes from the fact that you learned from somebody else.

Quoting apokrisis
If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.


Again I can make no sense of your "physics free" insistence here.
As best I can understand, this is an allusion to thought experiments in computation that ignore physical constraints for the sake of making some greater theoretical point.
Application of computation in the real world certainly is not "physics free."
Also it is not clear that a computational theory of the mind must be physics free.
And again it is not clear that there is an ontic issue and the hand waving of obscure texts does not prove that there is one.

Quoting apokrisis
Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.


I did not anticipate that you would insist that I define all the terms I use in technical detail.
I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.
Also, when I attempted to do this previously you insisted I don't understand my points and that in fact my arguments prove your point and not mine.

apokrisis August 27, 2016 at 02:37 ¶ #18151
Quoting m-theory
I don't really have time to explain repeatedly that, fundamentally, I don't agree that relevant terms such as these examples are excluded from computational implementation.


Repeatedly? Once properly would suffice.

Quoting m-theory
This link seems very poor as an example of a general mathematical outline of a Godel incompleteness facing computational theories of the mind.


Read Rosen's book then.

Quoting m-theory
Perhaps if you had some example of semantics that exists independently and mutually exclusive of syntax it would be useful for making your point?


You just changed your wording. Being dichotomously divided is importantly different from existing independently.

So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.

And that is indeed the basis of my pan-semiotic - not pan-psychic - metaphysics. It is why I see the essential issue here the other way round to you. The fundamental division has to develop from some seed symmetry breaking. I gave you links to the biophysics that talks about that fundamental symmetry breaking when it comes to pansemiosis - the fact that there is a nano-scale convergence zone at the thermal nano-scale where suddenly energetic processes can be switched from one type to another type at "no cost". Physics becomes regulable by information. The necessary epistemic cut just emerges all by itself right there for material reasons that are completely unmysterious and fully formally described.

Quoting m-theory
The semantics of go was not built into AlphaGo and you seem to be saying that because a human built it that means any semantic understanding it has came from humans.


What a triumph. A computer got good at winning a game completely defined by abstract rules. And we pretend that it discovered what counts as "winning" without humans to make sure that it "knew" it had won. Hey, if only the machine had been programmed to run about the room flashing lights and shouting "In your face, puny beings", then we would be in no doubt it really understood/experienced/felt/observed/whatever what it had just done.
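
To make that concrete: in any standard reinforcement-learning setup, "winning" exists only in a reward function that the humans author; the learner merely climbs it. Here is a minimal sketch of a generic tabular Q-learning loop - nothing like AlphaGo's actual architecture, and every state, number, and name in it is invented purely for illustration.

```python
import random

# Toy game: states 0..3; "winning" = reaching the last state.
# The reward function below is authored by the programmer -
# the learner optimizes it but never defines what winning means.

N_STATES = 4
ACTIONS = ["advance", "reset"]

def step(state, action):
    """Environment dynamics plus the human-authored reward."""
    next_state = min(state + 1, N_STATES - 1) if action == "advance" else 0
    reward = 1.0 if next_state == N_STATES - 1 else 0.0  # "winning", by stipulation
    return next_state, reward

# Tabular Q-learning with an epsilon-greedy policy.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    for _ in range(10):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print({k: round(v, 2) for k, v in q.items()})
```

Flip the sign of the reward line and the very same machinery will just as diligently learn to lose; nothing in the learner itself cares either way.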

Quoting m-theory
Again, I can make no sense of your "physics free" insistence here.


So you read that Pattee reference before dismissing it?

Quoting m-theory
And again, it is not clear that there is an ontic issue, and hand-waving at obscure texts does not prove that there is one.


I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.

Quoting m-theory
I did not anticipate that you would insist that I define all the terms I use in technical detail.
I would perhaps be willing to do this if I believed it would be productive, but because you disagree at a more fundamental level I doubt giving technical detail will further our exchange.


I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental-level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?

m-theory August 27, 2016 at 03:03 ¶ #18156
Quoting apokrisis
So it is not my position that there is pure semantics anywhere anytime. If semantics and syntax form a proper metaphysical strength dichotomy, they would thus be two faces of the one developing separation. In a strong sense, you could never have one without the other.


At least we agree on that point.
These terms are interdependent.

Quoting apokrisis
So you read that Pattee reference before dismissing it?


This does not make it any clearer what you mean when you are using this term.
Again, real-world computation is not physics-free, even if computation theory has thought experiments that ignore physical constraints.

Quoting apokrisis
I can only hand wave them if you won't even read them before dismissing them. And if you find them obscure, that simply speaks to the extent of your scholarship.


No, it is not a mainstream view that the problem of the origin of life is a Godel incompleteness problem.
That is a rather obscure claim.

Quoting apokrisis
I've given you every chance to show that you understand the sources you cite in a way that counters the detailed objections I've raised.

Pomdp is the ground on which you said you wanted to make your case. You claimed it deals with my fundamental-level disagreement. I'm waiting for you to show me that with the appropriate technical account. What more can I do than take you at your word when you make such a promise?


We don't have a technical account of your issue.
It was a mistake on my part to try to find a technical solution earlier, I admit.

apokrisis August 27, 2016 at 04:06 ¶ #18170
Quoting m-theory
This does not make it any clearer what you mean when you are using this term.
Again, real-world computation is not physics-free, even if computation theory has thought experiments that ignore physical constraints.


Again, real-world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)

Quoting m-theory
We don't have a technical account of your issue.
It was a mistake on my part to try to find a technical solution earlier, I admit.


:-}

m-theory August 27, 2016 at 04:40 ¶ #18174
Quoting apokrisis
Again, real-world Turing computation is certainly physics-free if the hardware maker is doing his job right. If the hardware misbehaves - introduces physical variety in a way that affects the physics-free play of syntax - the software malfunctions. (Not that the semantics-free software could ever "know" this of course.)


And again, the same is true of biological systems: biology will not function properly, and will die, outside a given range of environmental stability.

Perhaps you mean to suggest that the range in which life can operate is greater than the range in which current hardware can operate, and that this proves current hardware cannot produce a general purpose A.I.

You will have to forgive me if I find that line to be a rather large leap, and not so straightforward as you take for granted.

Quoting apokrisis
:-}


Well, your sources have not been very clear in the formal sense, at least not to me.
And you have not done much except say that I don't understand your sources, or my own, when I attempt to address the issues you believe your sources raise.

So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?

But suppose we concede the point that evolution and the origin of life have a Godel incompleteness problem, and ask what that would imply about computational theories of evolution, the origin of life, or even the mind.
Chaitin, I believe, offers a good example of the computational view surrounding that issue.

Of course such views are not so pessimistic as the one you have taken.

Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.

apokrisis August 27, 2016 at 05:28 ¶ #18180
Quoting m-theory
You will have to forgive me if I find that line to be a rather large leap, and not so straightforward as you take for granted.


Only because you stubbornly misrepresent my position.

Quoting m-theory
So, to quote von Neumann, what is the point of me being precise if I don't know what I am talking about?


Exactly. Why say pomdp sorts all your problems when it is now clear that you have no technical understanding of pomdp?

Quoting m-theory
Here is another video of Chaitin offering a computational rebuttal to the notion that computation does not apply to evolution.


Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.

m-theory August 27, 2016 at 06:38 ¶ #18182
Quoting apokrisis
Forget youtube videos. Either you understand the issues and can articulate the relevant arguments or you are pretending to expertise you simply don't have.


I never claimed to be an expert.
Those are your words not mine.
I claimed that pomdp is a possible solution to a problem of infinite regress.
Once again you fail to correct me on that error.
Perhaps you mean to argue that pomdp does not solve your particular infinite regress problem.
Ok fine, forgive me for suggesting that it might.
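
For clarity about the term: pomdp stands for partially observable Markov decision process - a framework in which an agent that cannot see the true state of the world maintains a belief (a probability distribution over possible states) and updates that belief by Bayes' rule after each action and observation. Here is a minimal sketch of that belief update; the two states and all the probabilities are made up purely to show the mechanics, and are not drawn from any particular model.

```python
# Minimal POMDP belief update (Bayes filter) over two hidden states.
# Every number here is invented for illustration.

STATES = ["good", "bad"]

# P(next_state | state, action)
TRANSITION = {
    ("good", "wait"): {"good": 0.9, "bad": 0.1},
    ("bad",  "wait"): {"good": 0.2, "bad": 0.8},
}

# P(observation | next_state)
OBSERVATION = {
    "good": {"signal": 0.7, "noise": 0.3},
    "bad":  {"signal": 0.1, "noise": 0.9},
}

def update_belief(belief, action, obs):
    """Predict with the transition model, then correct with the observation model."""
    new_belief = {}
    for s_next in STATES:
        predicted = sum(belief[s] * TRANSITION[(s, action)][s_next] for s in STATES)
        new_belief[s_next] = OBSERVATION[s_next][obs] * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

belief = {"good": 0.5, "bad": 0.5}  # start maximally uncertain
for obs in ["signal", "signal", "noise"]:
    belief = update_belief(belief, "wait", obs)
    print({s: round(p, 3) for s, p in belief.items()})
```

The relevance to regress worries is that the agent never needs access to the "true" state; it acts on a self-contained, finitely updated distribution.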

Also, what a crude fallacy for you to imply: that one must be an authority to make valid points.
I expect better of you.

Never mind that it does not follow that if the origin of life is not finitely computable then necessarily the mind is not finitely computable.
(You never connected these, and those connections were not made clear in anything you have linked.)

Let's just assume you are right and say that one does necessarily follow from the other.
Allow me to give a brief overview of Chaitin's approach to a Godel incompleteness problem in biology and its implications for computational theories of evolution.

In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).

Of course I would suggest the same applies to arguments about an incompleteness in the origin of life (And so does Chaitin).

The implication for computational models would simply be that the role computation plays is more open-ended, and not at all that computation is excluded, as you have attempted to imply.

Chaitin discusses at length the philosophical issues you have raised and how he believes they are related to computational theories in biology.
It is worth reviewing for those who are interested in how the issues raised by von Neumann, Godel, and Turing are related to computational theories of biology.

Overall, Chaitin argues that one does not simply discard computation on the grounds of Godel incompleteness; rather, one ought to embrace the limitlessness that incompleteness implies.


Your source even name-drops Chaitin.
So I thought you would be more receptive to hearing him out for that reason.

At any rate, thanks, apokrisis, for your input, and thanks for challenging my views.
Your effort has not been completely in vain; I believe from our interaction I have a deeper appreciation of the topic than I did when I began this thread.

tom August 27, 2016 at 17:18 ¶ #18202
Quoting m-theory
Suppose we concede the point that evolution and the origin of life have a Godel incompleteness problem, and ask what that would imply about computational theories of evolution, the origin of life, or even the mind.
Chaitin, I believe, offers a good example of the computational view surrounding that issue.


As I mentioned in another thread:

"Von Neumann showed that an accurate self-reproducer must consist of a replicator and a vehicle."

Of course Chaitin takes the computational view of life, because Von Neumann *proved* that it cannot be otherwise, before the discovery of DNA.
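
A loose computational analogue of that replicator/vehicle split is a quine: a program whose data part is a description of itself and whose code part is the machinery that reconstitutes the whole from that description. This is only an analogy - von Neumann's actual construction was a universal constructor in a cellular automaton, not a Python snippet - but it shows the separation:

```python
# The two lines below, run together, print an exact copy of themselves.
# s is the passive "description" (the replicator); the print statement is
# the active "machinery" (the vehicle) that rebuilds the whole from it.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The crucial von Neumann insight is that the description gets used twice: once interpreted, to construct, and once uninterpreted, to copy.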

apokrisis August 28, 2016 at 02:57 ¶ #18257
Quoting m-theory
In a very abstract way Chaitin shows that a very generalized evolution can still result from a computational foundation (albeit in his model it is necessary to ignore certain physical constraints).


I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.

On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure-maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.

As Chaitin says, the problem for pure maths was that it had the view that all maths could be derived from some finite set of axioms. And instead creativity says axiom production is what is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.

As a fundamental problem for life and mind, this is not even on the usual computer science radar.

Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.

So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate-independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is now instead a death sentence. So that part of the biologically realistic deal is the rate-dependent dynamics.
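
A drastically simplified sketch of that loop, offered as my own trivialization rather than Chaitin's actual model - his organisms are arbitrary programs and his fitness target, the busy beaver function, is uncomputable, whereas here the "organisms" are mere digit strings so that the loop actually runs:

```python
import random

# Toy metabiology loop: an "organism" outputs a number; a random point
# mutation replaces it only if the mutant outputs a bigger number,
# otherwise the mutant dies and is erased from history.

def output(organism):
    """Rate-dependent side: actually run the organism to produce its number."""
    return int("".join(map(str, organism)))

def mutate(organism):
    """Rate-independent side: a point mutation on the stored description."""
    mutant = list(organism)
    mutant[random.randrange(len(mutant))] = random.randrange(10)
    return mutant

organism = [1, 0, 0, 0, 0, 0]
best = output(organism)
for generation in range(1000):
    mutant = mutate(organism)
    if output(mutant) > best:   # judged by the simulated environment
        organism, best = mutant, output(mutant)

print(organism, best)
```

In the real model the judge cannot be this simple, since comparing arbitrary programs runs straight into the halting problem - which is exactly where the incompleteness enters.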

m-theory August 28, 2016 at 06:32 ¶ #18279
Quoting apokrisis
I listened to the podcast and it is indeed interesting but does the opposite of supporting what you appear to claim.


Told you it was good.

Quoting apokrisis
On incompleteness, Chaitin stresses that it shows that machine-like or syntactic methods of deriving maths are a pure-maths myth. All axiom forming involves what Peirce terms the creative abductive leap. So syntax has to begin with semantics. It doesn't work the other way round as computationalists might hope.


No. On incompleteness, Chaitin stresses that we simply have to conclude that there is no finite set of axioms that describes all mathematical truths. He does not suggest that computational theories are myths. In fact he stresses quite the opposite. I mean, come on, the guy built a toy model of evolution based on computation. Chaitin is very much a proponent of computation, not against it.

Quoting apokrisis
As Chaitin says, the problem for pure maths was that it had the view that all maths could be derived from some finite set of axioms. And instead creativity says axiom production is what is infinitely open-ended in potential. So that requires the further thing of some general constraint on such troublesome fecundity. The problem - for life, as Von Neumann and Rosen and Pattee argue mathematically - is that biological systems have to be able to close their own openness. They must be able to construct the boundaries to causal entailment that the epistemic cut represents.


Quoting apokrisis
As a fundamental problem for life and mind, this is not even on the usual computer science radar.



Chaitin does not mention any epistemic cut, and neither does von Neumann in anything I have read.
That term seems to be something Pattee has coined himself.
It probably has not caught on in mainstream science because there is not a clear definition for it.

Quoting apokrisis
Then Chaitin's theorem is proven in a physics-free context. He underlines that point himself, and says connecting the theorem to the real world is an entirely other matter.

But Chaitin is trying to take a biologically realistic approach to genetic algorithms. And thus his busy beaver problem is set up in a toy universe with the equivalent of an epistemic cut. The system has a running memory state that can have point mutations. An algorithm is written to simulate the physical randomness of the real world and make this so.

Then the outcome of the mutated programme is judged against the memory state which simulates the environment on the other side of the epistemic cut. The environment says either this particular mutant is producing the biggest number ever seen or it's not, in which case it dies and is erased from history.

So the mutating programs are producing number-producing programs. In Pattee's terms, they are the rate-independent information side of the equation. Then out in the environment, the numbers must be produced so they can be judged against a temporal backdrop where what might have been the most impressive number a minute ago is now instead a death sentence. So that part of the biologically realistic deal is the rate-dependent dynamics.


Yeah, Chaitin stresses that more research in the area is a worthy pursuit, and hopes that people begin creating real simulations using his work.
I hope so too.

Punshhh August 30, 2016 at 12:50 ¶ #18620
Reply to m-theory
Yes, the thinking mind of a human could be described as an algorithm. But I don't think that this is the whole story: there is consciousness and being, which do not require computation in the brain in the same sense, in that there is a metabolic component and possibly, on philosophical analysis, an immaterial component.

I presume that you imagine a working A.I. device; do you also imagine it having consciousness?