Turing Test and Free Will
The Turing test is used to check whether an AI (artificial intelligence) has reached human-level intelligence. What bears mentioning is that the AI need not be conscious like humans. It just has to appear human-like; the AI simply has to simulate human intelligence.
This is very similar to a forger managing to make an exact duplicate of an original piece of art: if the duplicate really is exact, no possible test can tell them apart.
There's a lot of debate about free will, and no one has yet come close to proving or disproving it. That may not matter, however, because we structure our lives as if free will exists. People often speak of the illusion of free will. All this points to what I've said: we live as if we do have free will.
We could say, supposing determinism is true and free will is absent, that the simulated free will is indistinguishable from actual free will. In other words, the simulated free will passes the Turing Test and is identical to actual/true free will.
I guess I'm saying SIMULATED FREE WILL = REAL FREE WILL.
Comments...
Comments (46)
Moving to free will. If we see a computer that appears to act freely (just like people do), we might conclude that it too must be free because its behavior looks free, but that entirely begs the question. The question being begged is "Does free will exist?" That is, how can we say the computer is free because it looks free like us, when we're not even sure we're free? Unlike consciousness, where we each know we're personally conscious and we don't ask "Does consciousness exist?", we don't know whether we're free, and we do ask "Does free will exist?". The best we can say is that the computer looks free like us, but we're not even sure we're free.
In support of that particular passage only:
“....I adopt this method of assuming freedom merely as an idea which rational beings suppose in their actions, in order to avoid the necessity of proving it in its theoretical aspect also. The former is sufficient for my purpose; for even though the speculative proof should not be made out, yet a being that cannot act except with the idea of freedom is bound by the same laws that would oblige a being who was actually free. Thus we can escape here from the onus which presses on the theory....
....We have finally reduced the definite conception of morality to the idea of freedom. This latter, however, we could not prove to be actually a property of ourselves or of human nature; only we saw that it must be presupposed if we would conceive a being as rational and conscious of its causality in respect of its actions, i.e., as endowed with a will; and so we find that on just the same grounds we must ascribe to every being endowed with reason and will this attribute of determining itself to action under the idea of its freedom....”
(Fundamental Principles of the Metaphysics of Morals, 1785)
The simple fact is that humans can easily notice, from simple interaction, things like sarcasm, jokes, hostility, or a multitude of other attitudes beneath the surface of a discussion. As computers (Turing Machines) simply follow algorithms and are totally unable to do anything else, the demands on what the algorithms have to perform grow exponentially. Imagine how a discussion changes instantly if someone cracks a hilarious joke. How sophisticated does the algorithm (program) have to be to understand that the text contains a joke and that it is indeed funny?
And the basic problem is that any computer or Turing Machine simply cannot perform the task "do something else than what is given in your program in a way not defined in the program".
Hey......
First....the phrase “free will” is a mischaracterization of a distinctly human condition. We don’t have “free will”; we have a will that determines its volitions without encumbrance, that is, autonomously, and at the same time obligates itself to those volitions, which we call duty.
Second....The “free” used to describe the will is just a contraction of the idea “freedom”, a pure concept of reason, which makes autonomy possible, and as such does not belong to the will as a description of it, but as a condition necessary for it.
———————-
Your “Free to act in accordance with the dictates of reason as duty” is close enough to:
“....Everything in nature works according to laws. Rational beings alone have the faculty of acting according to the conception of laws, that is according to principles, i.e., have a will. Since the deduction of actions from principles requires reason, the will is nothing but practical reason....”
——————
Quoting tim wood
If you cannot do as reason instructs due to some physical obstruction or incapacity, that is a different kind of freedom than the kind that facilitates such instruction. If you cannot do as reason instructs because you have attained to a conflicting moral judgement, the will is freely exercised in so doing. You are every bit as unencumbered to be immoral as much as you are to be moral, even if the consequences of the former may be somewhat less satisfactory than the latter.
I do not have free will over being here; however, once here, I do have some amount of free will.
Please refer to what I underlined, because it means illusory/simulated free will = real free will. We're not sure because we can't distinguish the real from the illusion, the original from the copy.
Does anything do other than what has been programmed into it?
Quoting Mww
The public usually understands the Turing test epistemically according to the realist interpretation, since by cultural default they understand consciousness in a Cartesian fashion, as referring to the intrinsic functional semantics of the brain. They consequently view the Turing test as a fallible appearance test of a machine's internal functional properties, properties that exist independently of appearances to the contrary, say if the Turing test gave a false negative.
The less popular alternative view, that appeared to be the view of Wittgenstein, is the anti-realist, non-cartesian interpretation of the Turing Test, whereby it is understood that if a machine passes our consciousness test, then the machine is conscious by definition. In other words, a Turing test isn't so much a test for Turing-test independent consciousness, rather the Turing test articulates the visible circumstances in which we say that a thing is conscious or not conscious.
The critical ontological difference of this latter view is that the functional semantics of a brain or machine are understood holistically, as irreducible to the brain or machine in and of themselves. As Wittgenstein hinted in PI, a game of chess is only recognized as a game of chess when embedded within a relevant culture. Therefore Wittgenstein would likely have sided with Searle and rejected Block's "China brain"; for while the population of China might be able to use semaphore to simulate the brain's internal organisation, the surrounding context isn't present to attribute intentionality.
As for the question you actually asked, I believe the notion of free will is to a large extent orthogonal to understandings of the Turing test.
You said you cannot do as reason instructs, which implies a disability. Given that reason will not instruct the impossible as a moral judgement, and given that reason will not instruct beyond physical capacity, there should be no rational argument to support why you cannot do as reason instructs. Thus, Kant would say you have been morally corrupted by succumbing to a merely arbitrary empirical good, an inclination, rather than standing with the pure moral good of its practical instruction, at the sacrifice of your moral constitution. In short, you have chosen to act immorally, so, yes......you should have reasoned mo’ better.
—————————
Quoting tim wood
Feelings, or desires, cannot be the motor.....the ground, the primary condition......of morality, because it is entirely feasible to feel bad upon doing the good act. It helps us to recognize a good act by the good feeling we get from its accomplishment, but we stand in danger of not doing the good act if we fear the bad feeling we might get from its accomplishment. That which either supports or detracts from a pure moral action, and the effects of which we cannot know until after the act is done, cannot serve as ground for it, because morality is always determined before its volitions are manifest in the world.
Such is the foundation of the deontological moral doctrine, in which respect for law in and of itself, regardless of the content of any law in particular, grounds moral dispositions. Herein we may disregard our feelings when it becomes possible we won’t like them, because we have acted out of respect for law, which inspires no feelings at all except having done right. In this respect, it is not reason that is the motor, but it is practical reason that says what form the law will assume, the categorical imperative, for which thereafter our actions respect.
As for the “motor of morality”, meaning that which drives the fundamental human condition, I think Kant would go with the transcendental conception of “freedom”, transcendental in order to distinguish from the conception of freedom associated with degrees of various and sundry empirical restrictions, but rather to denote and prescribe an unconditioned a priori causal principle, that is not itself an effect of any antecedent principle. Hence, moral reasoning is practical, for the determination of its laws, but at the same time absolutely pure, because of its transcendentalism, its source being pure thought alone, having no empirical predicates whatsoever.
What say you?
Make this thought experiment:
1) Some professional team of psychologists etc. observes your doings for a while, and they are able to make a quite clear and totally realistic synopsis of how you behave, how you react in different situations, and how you manage in social situations and with other people.
2) They and you go through their findings and have a long discussion about it and how it relates to other people and so on.
3) Here's the question: Do you think that you would be able to notice something that you might learn from their observations and the discussion to improve yourself or be, as they say, a better person?
If you say at least "perhaps", then you aren't a computer. A computer cannot look at its own program and improve it in a way not written in the program. A human being, however, can understand just how he or she has behaved, what his or her "program" has been, and change it. That's what consciousness is.
I sense something fishy with that statement... I think the last line is the fishiest. We do not know that being programmed deterministically rules out consciousness, because we cannot rule out that human beings also behave deterministically. The computer's physical bits might be different from ours, but we still have those bits, and in theory we could see our brain's individual molecules, atoms, electrons, and so forth in action.
Also, computer programs exist that improve their "own" code and chip design. One of the great fears of some computer scientists is the AI Singularity: the point where computer self-improvement becomes a runaway, unstoppable process leading to some sort of hyper-intelligence.
Please read carefully what I said. A Turing Machine simply cannot perform the task "do something other than what is given in your program, in a way not defined in the program". Whatever neural-network-mimicking deep-learning machine we are talking about, IN THE PROGRAM there have to be specific instructions for how to learn, how to rewrite the program. An algorithm-following Turing Machine cannot do anything else. This is crucial to understand because it goes to the mathematical essence of just how a Turing Machine and an algorithm work. This is also the reason why computers can win at games: there are confined rules about what to do, and a game cannot evolve into something totally different with different objectives and different rules. This is based on what a Turing Machine does.
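The point that the capacity to learn must itself be written into the program can be illustrated with a minimal sketch (hypothetical names and numbers; Python):

```python
# A minimal sketch: a "learning" program whose ability to adapt is
# itself fully specified in advance. The update rule below never changes.

def learn(data, steps=100, lr=0.1):
    w = 0.0  # the only state the program is allowed to rewrite
    for _ in range(steps):
        for x, y in data:
            # fixed instruction: nudge w to reduce the error on (x, y)
            w += lr * (y - w * x) * x
    return w

# The program "learns" that y = 2x, but only because the rule above
# says exactly how; it cannot invent a different way to learn.
print(learn([(1.0, 2.0), (2.0, 4.0)]))  # converges close to 2.0
```

The program does rewrite part of its state, but only by following an instruction the programmer wrote in advance, which is the sense in which a Turing Machine never does "something else".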
So can AI reach this "singularity"? Well, there are many crucial concepts on the way to it that even now we don't understand. Just look at the various debates around here. My point isn't that it is impossible; the point is that we are not there yet. A lot has to happen.
But what is typical of us is that we believe the present scientific paradigm tells us everything we need for everything to be solved. So it was with the mechanical world-view of Newton's time, and so it is now.
For whether or not a human or machine is considered to be an automaton depends upon one's definition of their conceptual boundaries across space and time. Is a machine that suffers an internal fault the same machine running the same program? Are sensory inputs considered part of the machine's operation? Etc., etc.
And what makes you think humans do not have these limitations? The way our brains function and create new connections is based on a fixed set of rules.
The interesting thing about the AI "singularity" is that conceptually an AI could rewrite its base code in a way a human never could.
Yes, but there are complications in that concept, because what is the "it" in "its"? Humans have a body, but computers can connect, so I can see why some might interpret the singularity as one gigantic entity. It's life, Jim, but not as we know it?
I don't see how that's a complication. The concept of "self" would obviously be different for an entity that could, say, copy itself. But that seems unrelated to the topic.
Well, we can argue if humans are conscious or not! Or think about it in this way: what does it mean to be creative, to have a new idea? Did someone tell you exactly how you should get a new idea?
The point I am trying to make here, and have described in the thought experiment above, is this: we can easily understand the decision-making system we ourselves use and alter it in a creative way. It doesn't mean that computers cannot come to have AI (and that naturally depends on the definition of AI); what it means is that Turing Machines, as they work now, cannot do that. They surely can (likely even at present) fool us into believing we are interacting with a living person when we are just interacting with a computer program.
Yet that doesn't make the program an AI, as it simply follows well-written software, an algorithm. That's all Turing Machines can do. Sorry, but that is the goddam definition.
It means that you can create amalgams of previous experiences or ideas. All new ideas consist of previous experiences. A purple polka dotted people eater can't be thought of without having the concepts of purple, polka dots, people, and eating prior to creating it in your mind.
Most new ideas aren't useful unless they apply to the world in some way. Computers can be programmed to assemble information in unique ways and then try to apply it to some goal in the world, and its usefulness depends on how it relates to some truth in the world.
The obvious counter argument is that human brains also just follow a well written software. You say we can "alter our decision making system", but this is true only to an extent. The logic behind the decisions stays the same. We cannot simply incorporate entirely new inputs of sensory data, or change our perception to include or exclude dimensions.
And that is quite different from a Turing Machine, which basically uses simple math to follow an algorithm. What we do extremely well and are masters in, and computers might do in the future, is recognizing patterns. This importance of patterns is the reason why math and computing are so dominant. Yet we can do even better: we can handle information that has no pattern, that is unique. We have this utterly incredible ability to make a narrative: first this happened, then a totally new thing happened... then Harry Hindu made a comment from another perspective. That's not computation. You cannot extrapolate from the start a pattern that will tell you the rest (and your comment). There is no pattern to be computed. And that makes us so awesome compared to Turing Machines.
Quoting Harry Hindu
And typically you need the human to choose just what is useful. In a nutshell, computers have a really big problem with 'thinking outside the box'. It really is a theoretical, logical problem for them. I think that people are simply in denial about this because they basically don't understand just how a Turing Machine works.
Then 'the software' simply isn't a traditional mathematical algorithm.
Again, how do you know that? In a deterministic universe, can not everything be expressed as an algorithm?
No. Of course not.
Not only is Laplacian determinism false, but the Entscheidungsproblem would then also be answered differently; it was proven negative precisely by Turing with the idea of a Turing Machine. The Church-Turing thesis has importance here.
If I lost you here, I'll try to explain what I mean as simply as I can.
a) Assume that everything in the universe can be expressed with an algorithm. This means that there is a specific algorithm explaining every phenomenon in the universe.
b) The above would mean that there is a positive answer to the Entscheidungsproblem: there would be an algorithm to decide whether a given statement is provable from the axioms using the rules of logic. This is just how algorithms as functions work.
c) Turing (alongside Church) showed that this isn't the case.
d) It's not by accident that Turing Machines are the theoretical structure of all computers. Hence this problem isn't just something that can be evaded. It might not be a practical obstacle, as many things can easily be solved by algorithms. The problem only comes when the situation isn't routine and the computer/software should "think outside the box".
e) As I said, we humans don't calculate everything. We can handle information that hasn't got a pattern and still make sense of it.
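Point (c) rests on Turing's diagonal argument, which can be sketched as follows (illustrative Python; `halts` is the hypothetical decider that the argument shows cannot exist):

```python
# Sketch of the diagonal argument behind the negative answer to the
# Entscheidungsproblem (illustrative only; 'halts' cannot actually exist).

def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) halts."""
    raise NotImplementedError("Turing proved no such algorithm exists")

def diagonal(program):
    # Do the opposite of whatever the decider predicts about a
    # program run on its own source.
    if halts(program, program):
        while True:  # loop forever if predicted to halt
            pass
    # otherwise halt, contradicting the prediction that it loops

# diagonal(diagonal) is contradictory either way: if 'halts' says it
# halts, it loops; if 'halts' says it loops, it halts. So no such
# decider exists, and not everything is expressible as an algorithm.
```

This is the same negative self-reference that drives Cantor's diagonalization and the incompleteness results mentioned later in the thread.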
I don't buy this. By this logic, If there is a test to communicate with a squirrel and convince the squirrel that the entity at the other end of the test is a fellow squirrel, then humans have not yet achieved the intelligence of a squirrel.
I think AI will have long since surpassed humans in intelligence and consciousness before one is capable of imitating a human to this degree.
Computers do recognize patterns of on and off logical gates. What I see you doing is making a lot of claims about what humans can and what computers can't do, but no explanation as to why that is the case. How and why do you recognize patterns? How and why does your mind work?
Quoting ssu
In order to deem something as useful, you need goals or intent. Computers can be programmed with goal-oriented behavior and use the information that they receive through their sensory devices to achieve that goal. Something is useful if it accomplishes some goal.
Harry, how does your mind work? How do you prove rigorously that you are conscious? What is consciousness? It's evident from philosophical debate that we don't exactly know these issues. Yet we make these astounding leaps of faith that we indeed are conscious.
However...
What we do know is how Turing Machines work: they have an exact definition of themselves and of how they work. They follow algorithms. We know exactly what an algorithm is, too. This is clear from Turing and Church. And there are strict limitations on just what a mathematical algorithm can be. First of all, an algorithm has to be a well-defined step-by-step procedure. Well-defined means that the instructions (the algorithm) always tell what to do. The algorithm cannot say "do something else" to a computer. That simply isn't well defined! There would have to be an exact step-by-step procedure for "do something else", which really isn't what we mean by "do something else", something quite open-ended. Ambiguity is simply not allowed in an algorithm.
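That well-definedness can be made concrete with a toy Turing-machine loop (a sketch, not any standard example): every (state, symbol) pair either has an explicit table entry or the machine halts; there is no entry that could mean "do something else".

```python
# Minimal Turing-machine step loop: behavior is exhausted by the table.
# This toy machine flips a tape of 0s and 1s, then halts at the blank.

def run(tape, table, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in table:
            break  # undefined pair: the machine halts; nothing else is possible
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    # no rule for ("start", "_"): reaching the blank ends the run
}
print(run("0110", flip))  # -> "1001"
```

Everything the machine will ever do is fixed by the transition table; "ambiguity is not allowed" simply means every case must appear there explicitly.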
The patterns that Turing Machines can solve are patterns that are computable. Yet unlike computers, we can even make sense of things that are quite patternless. For something that doesn't have a pattern, we use narrative. History itself is the perfect example: nowhere else is randomness so obvious. Many don't even consider history a science, which tells you exactly how random and full of unique phenomena history is. The narrative nature of history should be obvious to everybody; just read a history book. Not many functions there explaining historical events!
Also, we can be creative and come up with something new, or handle ambiguous issues or instructions. Hence we are able to deal with things that are non-algorithmic, but for a computer this cannot be. Everything that is non-algorithmic has to be transformed, in the end, into something algorithmic for the computer.
Artificial Intelligence
“The field was founded on the claim that human intelligence 'can be so precisely described that a machine can be made to simulate it'.”
.....like the plague!!! “And how does that make you feel?” is of absolutely no interest to me, but “And how did you come to think that” tells me everything I might want to know.
——————————
Quoting tim wood
There is an argument that, initially, private happiness drives morality. Quick analysis shows how any mere desire is very far from strong ground for a good will, or, examination of the vast diversity of subjectivity with respect to what happiness means, and the methods for its attainment, shows the weakening of morality itself as a fundamental human condition, insofar as instinct has determining power over the will rather than reason.
——————————
Quoting tim wood
OK. I can live with that. Moral philosophy cannot abide the infinite cause/effect regress intrinsic to empiricism. If we merely think a cause which holds no logical contradiction in itself, we are permitted to assign an effect to it also without its contradiction. Doesn’t have to actually be the case; just has to be logically possible.
——————————-
Why did psychology and psychologisms make an appearance here? Care to elaborate on that a bit?
Exactly, so how can you make the leap to say that a robot with a computer brain isn't conscious?
I am aware. Does that mean I'm conscious? I have goals, or intent. Does that mean I'm conscious? If a robot is aware and possesses goals, is it conscious?
I think consciousness could be an information model of the body's sensory feedback loop. The fabric of consciousness is the same as the rest of reality. That isn't saying that reality is really a mind, like an idealist would. That would be an anthropomorphic projection. What it is saying is that fabric of reality is information.
Quoting ssu
As I stated before, humans can't "do something else" either. They can only assemble information that they already possess, not information that they don't. Thinking outside the box entails assembling existing information in new ways, and as I said, most of the time these new bits of assembled information are useless. It isn't until they are tested in the world that we find out whether or not they are actually useful; not until the information is used and the results observed can we say that something was useful. Algorithms can vary in complexity, and the mind could be using a very complex algorithm that we haven't been able to crack yet.
Have you read about the computational theory of mind?
Why do you think that computers have provided us our "best prospects yet for machines that emulate reasoning, decision-making, problem solving, perception, linguistic comprehension, and other characteristic mental processes"?
Because we simply know how, in the end, a very mechanical device called a computer works. That's the answer. We can surely make that leap.
Quoting Harry Hindu
What does it mean that 'you are aware', and how is that philosophically different from the problem of consciousness?
Quoting Harry Hindu
Complexity of an algorithm doesn't change the definition of an algorithm. Sorry, but this is mathematics. Definitions do matter. Look it up: algorithms have a quite clear definition.
It's like saying that every number is a rational number and we just haven't found the correct rational number for everything. You wouldn't call an irrational number a rational number, but perhaps you could call it a 'complex' rational number? Well, if we agree that irrational numbers are 'complex' rational numbers, perhaps we can then assume that all numbers are 'rational numbers'. But then, of course, we would just be saying that numbers are numbers.
In the same way, when you just assume that complexity can solve these issues, you are making a similar move: you have to change the definition of an algorithm. My opinion is that we're still not there yet, even if we can get there. Not everything about the fundamental parts of mathematics and philosophy is known yet.
Because we live in our currently "most advanced state" of human knowledge. And algorithm-churning Turing Machines are, at present, the thing we have. Before them it was mechanical clockwork devices. People made similar comparisons then: everything was like a mechanical clock, just a very advanced one. And we had the idea of the Clockwork Universe. Sound familiar to today's idea that the Universe is just one supercomputer?
Yet note that humans have nearly always lived on this edge of the best knowledge ever (the exception being the huge crisis in globalization when Antiquity turned into the Dark Ages). Voltaire quite aptly ridiculed Leibniz with Doctor Pangloss in Candide. Yes, the early 18th century was indeed "the best of all worlds", as we know so well now.
This isn't what I was trying to get at. We also know many things about the brain and can make predictions about what you experience based on your brain scan. Damage to a certain area as the result of a stroke can limit one's use of language or erase memories. My question is more about what a computer is really like "out there", separate from our experience of it as a "physical" piece of hardware running software (which is basically hardware states). What is a brain really like "out there", separate from our experience of it as a "physical" piece of hardware (the brain) running software (the mind, which is basically brain states)? Brains and computers are made of the same "physical" stuff. So how is it that we can say brains have consciousness and computers don't?
Quoting ssu
I would say that awareness and consciousness are the same thing. "Consciousness" is a loaded term.
What it means to be aware is that there is an aboutness to my experience - of having a perception about a situation or fact.
Quoting ssu
An algorithm is a set of steps to follow intended to solve a specific problem. Mathematical equations are algorithms; so are computer programs. Algorithms are closely related to logical thinking. They are like an applied version of deductive reasoning. Algorithms are for problem-solving. If you aren't trying to solve a problem, then are you using your intelligence? The Turing Test is a test for intelligence, not consciousness. Can you have one without the other?
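For concreteness, a classic instance of that definition is Euclid's algorithm: a finite, fully specified sequence of steps that solves one specific problem.

```python
def gcd(a, b):
    """Euclid's algorithm: each step is completely determined
    by the current state, and the procedure always terminates."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # -> 6
```

Every run of the procedure is fixed in advance by its inputs, which is exactly the sense in which algorithms are "an applied version of deductive reasoning".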
But it is a physical machine that does run a program, if it works. It doesn't abruptly change its software and decide to do something other than what the programmer programmed it to do. If a computer did that, then we could perhaps assume it was 'aware' (and likely pissed off at its programmer).
Quoting Harry Hindu
We don't know these issues so well yet, and that is a fact. Hence we can mean a lot of different things by AI, for example. This is the problem. As I said, we know our computers and how they work far better.
Quoting Harry Hindu
Do you always use deduction? How about inductive reasoning? Never tried that? How about abductive reasoning?
Quoting Harry Hindu
Not all use of intelligence is problem solving. Of course one might argue everything to be a "problem" that we have to solve.
I think a better test may be this: if a computer could create something, unaided by humans, that humans could use or appreciate, then we've got intelligence. Create a calculus, a trigonometry, propose the theory of general relativity, propose a philosophy of language, etc.
But what if the programmer programmed the computer to change its programming? People only change their programming when they learn something new.
Being aware doesn't entail changing one's programming. It entails having knowledge of some situation or fact. It requires senses. If a robot used the information acquired by its senses to change its programming, would you say that it's aware and intelligent?
Quoting ssu
Don't those types of reasoning require the information provided by the senses?
I'll add that humans have a programmer - natural selection.
That is just what present computer programs have done for ages. Ever heard of cybernetic systems? Something invented during WW2 was hailed as the solution to everything... until it faded from the popular jargon. Yet this doesn't overcome the issue at all: the computer follows the algorithm exactly and hence is quite predictable in just what it will "learn".
Quoting Harry Hindu
Do they? Anyway, the problem is that you have to somehow morph inductive and abductive reasoning into the deductive kind, as, again, Turing Machines just compute and follow algorithms.
Quoting Harry Hindu
Here's the thing. That there is natural selection doesn't mean a thing when the question is how Harry Hindu will reply to this or that question. One obvious move would be to say that even if we seem to be choosing what we do ourselves, there is a metaprogram we follow that we can't change or understand. Yet this argument makes it even worse: for the computer it really is a program, not a metaprogram, that the programmer has to tinker with.
There is free will in the form that you are aware of yourself and can change the decisions you make. There is also determinism in that, if we define the future to be what truly happens, then it is deterministic: the future will happen. The only thing is that you cannot know everything, even if everything is determined. Why so?
It's simply that we are part of the Universe, hence we cannot extrapolate everything from the present to the future, because we ourselves are actors. Simple logic excludes this option for us.
What is utterly false and totally illogical is this idea of Laplacian determinism. Laplace made the argument in the following way:
This is simply wrong, because the entity is part of this world, and we can put this entity (depicted as Laplace's Demon) in a situation where the correct forecast of the future is precisely the opposite of whatever forecast it makes. No amount of objective knowledge can help you when you are made the subjective decider. Hence whatever it does (or doesn't do) will affect the outcome in a way that prevents it from forecasting the correct future.
Actually this is a use of Cantor's diagonalization method, the negative self-reference that is at the heart not only of the paradoxes but also of the incompleteness results. It would be similar to saying to you, "Please give a response you will never in your life give." Obviously such responses exist, as your time and mine here are limited. And obviously we cannot give them, because any response we give is outside that set of responses.
OK, that was a nice use of the Turing test. If we can't tell the difference, we might as well treat it as free will. Fine. But what does free will mean? Most of us use language that implies free will, but also language that implies compulsion. We don't treat it as binary. I am impulsive. I can't control my desires. I can't focus, but I want to. I want to quit smoking (do you? wouldn't you then quit right now?). So determinism is packed into our language, as well as free will. Both are packed into our experience. It isn't binary. It's not as neat as "we experience life as if we have free will." We experience degrees of both. Believing in free will leads to a variety of conclusions, for example around responsibility. How much weight do we give our senses of free will versus our senses of compulsion?