You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Are we running out of time to resolve issues of consciousness and free will?

Pseudonym February 08, 2018 at 08:35 9525 views 25 comments
For at least the last 2000 years, philosophers have idly speculated on the question of what consciousness is and whether we have free will. Only in the last few hundred years has it become polarized into the debate we recognise today, but even with Kant, Schopenhauer, Hobbes and Hume the debate was almost entirely academic, entering public policy only with regard to crime and punishment.

We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.

If consciousness and free will are something unique to humans then there's no threat from AI, but is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it?

Comments (25)

Noble Dust February 08, 2018 at 08:41 #151142
Quoting Pseudonym
are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it?


1 vote for "yes".
René Descartes February 08, 2018 at 08:45 #151143
[Delete] @Baden
Pseudonym February 08, 2018 at 08:47 #151144
Reply to Noble Dust

Really? Care to explain why? I don't mean why you think free will and consciousness are real and unique to humans (or perhaps biological life), I mean why you think it is sensible to be so sure about that belief that it's worth risking the future of humanity on. Just to make a point? Or is there some greater threat you see from admitting that it might just be an illusion and we ought to proceed under that assumption for now?
Pseudonym February 08, 2018 at 08:49 #151145
Reply to René Descartes

But that's exactly the question. With AI something of an unknown, do you believe humans (alone) have free will strongly enough to just let the computer scientists get on with it?
Noble Dust February 08, 2018 at 08:52 #151146
Reply to Pseudonym

The tone of your OP made clear that you thought it was unwise to make the stake; so I made it.

I don't like these OPs where the questions are leading me somewhere.

And because I believe free will is something uniquely human, I have no fear of AI overcoming that free will. AI could lead to catastrophic pain and death, but if it does, that pain and death is the responsibility of the humans with free will who created AI. It will always be the fault of those humans, no matter how out of hand it might get.
Pseudonym February 08, 2018 at 09:10 #151148
Quoting Noble Dust
The tone of your OP made clear that you thought it was unwise to make the stake; so I made it.


What? Because I indicated I thought it unwise, you decided to go for it? I'm touched that my opinion is so influential in your decision-making, even if only to oppose it.

Quoting Noble Dust
I don't like these OPs where the questions are leading me somewhere.


How does the question lead you somewhere? It's a simple enough question. Do you believe in the uniqueness of consciousness and free will enough to stake the future of humanity on it? Do you think we should proceed under a presumption that is safer? Or, alternatively, do you think there is even more risk in presuming these traits are not unique?

Noble Dust February 08, 2018 at 09:13 #151149
Quoting Pseudonym
What? Because I indicated I thought it unwise, you decided to go for it? I'm touched that my opinion is so influential in your decision-making, even if only to oppose it.


No, I just like a good fight every now and then.

Quoting Pseudonym
How does the question lead you somewhere?


The thread title is suggestive, as is:

Quoting Pseudonym
philosophers have idly speculated


Quoting Pseudonym
Only in the last few hundred has it become polarized into the debate we recognise today


Quoting Pseudonym
the debate was almost entirely academic


Quoting Pseudonym
We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way.


Quoting Pseudonym
If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.


Quoting Pseudonym
If consciousness and free will are something unique to humans then there's no threat from AI, but is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it?


Basically the entire post, in other words.
Noble Dust February 08, 2018 at 09:16 #151150
Quoting Pseudonym
Do you believe in the uniqueness of consciousness and free will enough to stake the future of humanity on it? Do you think we should proceed under a presumption that is safer? Or, alternatively, do you think there is even more risk in presuming these traits are not unique?


Again, the assumption on your part here is so obvious as to not even warrant a more detailed response.
Pseudonym February 08, 2018 at 09:34 #151153
Reply to Noble Dust

I haven't the faintest idea what you're going on about. Am I not allowed to have an opinion about this in order to open a discussion? Yes I think free will and consciousness are illusions. I think any sufficiently complex system designed with self-analysis would exhibit the same traits. Does that somehow mean I'm not allowed to ask others what they think would be a pragmatic assumption with regards to AI?
Noble Dust February 08, 2018 at 09:45 #151155
Reply to Pseudonym

Where did I say you're not allowed to have an opinion about this? Or that you are "somehow not allowed to ask others what they think"? And clearly I pegged your positions correctly.


Pseudonym February 08, 2018 at 10:01 #151158
Quoting Noble Dust
Where did I say you're not allowed to have an opinion about this?


You said "I don't like these OPs where the questions are leading me somewhere."

Then I asked "How does the question lead you somewhere?"

Then you replied with a list of quotes in which I imply my opinion.

Ergo, you "don't like" the fact that I implied my opinion (your definition of "questions [which are] leading me somewhere").

Take A to be 'questions [which are] leading me somewhere'

You state you don't like A.

I asked you to define A.

You give a set of paragraphs in which I express my opinion.

Hence my conclusion that you don't like me expressing my opinion in the OP.

Where have I gone wrong?
Noble Dust February 08, 2018 at 10:06 #151160
Reply to Pseudonym

No no, I'm fine with you expressing your opinion. I just don't like threads where the questions are supposed to lead me somewhere, i.e. where the question is being begged. But beg away; I'm not questioning your right to beg the question.
René Descartes February 08, 2018 at 10:07 #151161
[Delete] @Baden
Noble Dust February 08, 2018 at 10:10 #151163
Reply to Pseudonym

And I'm just saying I don't like those sorts of threads. I was expressing an emotion. Of course you can keep making threads that annoy me; I shouldn't even have to say that.
Noble Dust February 08, 2018 at 10:15 #151165
Reply to Pseudonym

And, more importantly, again, nowhere did I state that I think you're "not allowed" to have an opinion. You're conflating my not liking your approach with your not being allowed to have an opinion.
Pneuma February 08, 2018 at 10:18 #151166
So are we saying that human beings do not have the possibility of functioning as autonomous, self-regulating, self-directing (free) beings (I am not talking about physical biology here)? Why is it that we always need another “authority”: the Church, the State, or Science? Is the answer to the problems of the human condition really just another external authority? Why does the thought of human autonomy seem to be “some greater threat”?

Pseudonym February 08, 2018 at 10:41 #151170
Quoting Noble Dust
And I'm just saying I don't like those sorts of threads.


Yeah I get that, I'm trying to find out what it is you don't like about them. Not that it's particularly related to the thread topic, but your reaction intrigued me. I still don't feel like I've understood your position.

If a question is up for debate (i.e. not entirely factual), it's not 'question begging' to already have an opinion on the answer. 'Question begging' is when the answer is actually implied in the wording of the question, in such a way that you can't even ask the question without assuming the answer as a matter of fact. It's not the same as asking a question whilst holding an opinion on the answer. I think that's way too high a standard to hold anyone to, to come at each question of moral or political import without holding any view whatsoever as to the answer. I have an opinion on the answer already, but I don't know what everyone else's opinion is; that's why I'm asking.

On a wider note, it's something I've found endemic on this site so far, and it stifles proper discussion. When a question is asked or a statement made, there seems to be a knee-jerk reaction for people to write their opinion on the topic, not the question. I've asked here a very specific question about people's views on the pragmatism of holding the view that free will and consciousness are unique to humans (or biological life). Already, all I've got are people stating their opinions on free will and consciousness (the topic), not on whether those views are actually a pragmatic/safe way to approach the question of AI (the actual question). What I asked was what arguments they have for considering those positions pragmatic, or the least-risk approach, in the light of advances in AI.

If I might join in the 'types of thread that annoy me' discussion you've opened, then that would be my number one: threads where people just state their views at each other on wide and vague topics with no attempt to actually relate them to anything practical or engage with the philosophical sticking points.
Pseudonym February 08, 2018 at 10:46 #151171
Quoting René Descartes
I think AI has a chance of doing evil but most likely it won't be because of free will but because of accidents or because of the will of humans controlling them.


OK, so why do you think this? What line of thinking has led you to this conclusion?

Quoting René Descartes
I don't see AI truly thinking for themselves


Again, I'd be intrigued to hear your reasoning, but, more pertinent to the question, why you have weighed it the way you have against the harm that could be caused if you are wrong. Do you see some greater harm in presuming free will is obtainable by an AI that has led you to think we need not take this approach? Or is your faith so important to you that you feel it needs to be expressed regardless of the risk?
Pseudonym February 08, 2018 at 10:48 #151173
Quoting Pneuma
So are we saying that human beings do not have the possibility of functioning as autonomous, self-regulating, self-directing (free) beings (I am not talking about physical biology here)?


Yes, that's certainly what I'd say about it, but that's not the question I asked. The question is: do you think it is important to maintain a belief in free will as unique to humans in the light of advances in AI, or do you think the risk of potentially being wrong is great enough to warrant more caution?
Wayfarer February 08, 2018 at 10:53 #151174
Quoting Pseudonym
For at least the last 2000 years, philosophers have idly speculated on the question of what consciousness is and whether we have free will.


I think that this has really only been the case for the last several centuries. The term 'consciousness' was coined in about the mid-1700s (from memory, by one of the Cambridge Platonists). And the modern 'mind-body' problem that you're referring to really harks back only to the early modern period, as a consequence of the 'mind-body' dualism of René Descartes and the way philosophy of mind developed after that. The ancients used to speak in terms of the soul, or of nous, but there is no obvious synonym for 'consciousness' in their lexicon that I'm aware of.

As for 'what will distinguish us from AI': for the foreseeable future, AI technologies will continue to reside in computer networks and their connected devices. I suppose you might be in a situation where you don't know whether you're talking to a bot at a call centre, but there is no real prospect of AI looking realistically human anytime soon. And since it does reside in devices, we at least retain the power to disconnect it.

Quoting Pseudonym
is it safe to pin the future of humanity on some fragile metaphysical constructions?


Your suggested alternative being....?
Pseudonym February 08, 2018 at 11:07 #151177
Quoting Wayfarer
Your suggested alternative being....?


Presuming a fragile metaphysical construction that is less risky.

That's basically the argument I'm making. I'm asking if anyone sees any real dangers in presuming (when it comes to AI) that free will and consciousness may well be properties entirely emergent from complex systems. It seems to me to be the safest option: to act as if free will were not unique to humans, just in case it isn't, and to treat the progress of AI under that assumption.

As to the fact that we will remain in control: that's exactly one of the safety measures that might be put in place under a presumption that it is possible for a machine to obtain free will. But without that presumption, or at least if we do not take the possibility seriously, I can easily see it seeming an attractive option to leave a machine in charge of its own power supply (solar harvesting or internal fission, for example).
Ying February 08, 2018 at 13:17 #151209
Quoting Pseudonym
We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.

If consciousness and free will are something unique to humans then there's no threat from AI, but is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it?


Well, according to Jaron Lanier, we don't have any working definition, on a neurological level, of what a thought or an idea is.



So talking about hard AI as if it's a thing is like talking about nuclear fusion in the living room: maybe possible in the future after some huge breakthrough or other, but banking on such issues is science fiction at this point in time.

[edit]
Great. No timestamp on the youtube vid. Here's the link with timestamp:
https://www.youtube.com/watch?v=L-3scivGxMI&feature=youtu.be&t=1991
[/edit]
Pseudonym February 08, 2018 at 14:18 #151225
Quoting Ying
Well, according to Jaron Lanier, we don't have any working definition, on a neurological level, of what a thought or an idea is.


Quoting Ying
So talking about hard AI as if it's a thing is like talking about nuclear fusion in the living room: maybe possible in the future after some huge breakthrough or other, but banking on such issues is science fiction at this point in time.


You've made quite a leap there from what a single, rather fringe, computer scientist thinks to two very firm statements about what is the case. It's this kind of presumption that I'm questioning here. Twenty years ago it would have been fine for you to simply go along with whatever your preferred philosopher said on the matter without having to justify that belief. What I'm asking, suggesting I suppose, is that we no longer have that luxury, because even the possibility of advanced AI means that we need to consider worst-case scenarios rather than simply what we 'reckon' is right.
Ying February 08, 2018 at 14:24 #151228
Quoting Pseudonym
You've made quite a leap there from what a single, rather fringe, computer scientist thinks to two very firm statements about what is the case.


Yeah I'm sure some internet rando like yourself knows more about these matters than someone actually active in the field. Bye now.
Rich February 08, 2018 at 14:26 #151229
The real problem is with those who believe we are just like robots and not only act like such, but allow the government/industrial complex (particularly the Medical/Big Pharma industries) to treat us like such. Gradual loss of freedom, privacy, and rights is the biggest problem we face, and the education system is designed to encourage this with the idiotic idea that we are just computers - fodder for the super-rich. It is no different than when, in the past, people were taught that certain races were sub-human and could be used as slaves.

It is the promulgation of the idea that we are just computers that is the biggest danger people face.