You are viewing the historical archive of The Philosophy Forum.

The Artificial Intelligence Conundrum

Devans99 April 24, 2019 at 10:45 5375 views 12 comments
The aim of the singularity is the creation of a self-upgrading AI that will enter into an exponential cycle of discovery and self-improvement:

https://en.wikipedia.org/wiki/Technological_singularity

The idea is that all human problems would be solved in a very short time by such an entity. The conundrum arises when the machine considers what an ‘ideal world’ would look like - it might use logic like this:

- AI entities are superior to, and happier than, biological entities
- Therefore biological entities should be replaced with AI entities

Hence the plot of Terminator follows with the machines wiping out mankind. Some questions:

1. What would a machine with an IQ of a million make of a human?
2. Would it regard us in the way we regard bacteria?
3. Could a machine be built so that it has respect for all forms of intelligence? (whether computer or biological)
4. Or would we always be in danger of a HAL 9000 type incident?
5. What should we do? AI could be our saviour, yet it may also destroy us.

Comments (12)

Frank Apisa April 24, 2019 at 15:09 #281214
WHEN AI comes to actual fruition...

...it will do its best to eliminate Homo sapiens as the dominant entity on this planet...

...or it will NOT have come to actual fruition.

Any human who does not see the Ebola virus as a danger to humankind...is lacking in humanity and intelligence.

Any AI that would not see humanity as a viral danger to this planet (wider range of thought)...is lacking in intelligence also.

AI comes...we go. Probably a lot sooner than we think right now.
Devans99 April 24, 2019 at 16:25 #281232
Quoting Frank Apisa
WHEN AI comes to actual fruition


It could in theory happen at any time - some researcher somewhere comes up with a true AI. And with all the world's computers linked by the internet, a hostile AI that could replicate and upgrade its own software might cause chaos.

Quoting Frank Apisa
...it will do its best to eliminate Homo sapiens as the dominant entity on this planet...


If we had a fundamental mathematical definition of right and wrong (which I do believe is possible), if right and wrong were defined in terms of all conscious entities, and if this were all baked into the AI at a fundamental level, maybe things would be alright. Or maybe the AI would just categorise the human race as ‘wrong’ (look how we treat the animals) and seek our destruction.

whollyrolling April 25, 2019 at 03:27 #281421
"The singularity" is a concept, and concepts don't have their own aims, because they're not sentient.

The idea that all human problems will be solved by a machine that humans build within the next two decades has been put forth by a fringe handful of sensationalist pseudo-scientists in order to make noise and sell books. It's a farce.

To answer your ridiculous questionnaire:

1. Not much.
2. It would respect us less than we respect bacteria.
3. Not if it has 1 million IQ and thinks of humans as having less value than bacteria.
4. We would not be in danger if we were extinct.
5. We should do whatever we do, there isn't a choice.

ZhouBoTong April 25, 2019 at 04:05 #281430
Quoting Devans99
1. What would a machine with an IQ of a million make of a human?


Hopefully think of us as their dumb little friends. Does their 1,000,000 IQ include the ability to learn concepts of morality? That is our best hope.

Quoting Devans99
2. Would it regard us in the way we regard bacteria?


The way we regard bacteria today? Or the way we regarded it in the past? My only point here is that human morality has evolved to the point that some people consider it wrong to harm animals (even insects - nobody cares about bacteria yet, but we may someday).

Quoting Devans99
3. Could a machine be built so that it has respect for all forms of intelligence? (whether computer or biological)


I would think so, but if it has the ability to learn, it could "lose" the respect. Maybe some form of morality could encourage it to keep the respect?

Quoting Devans99
4. Or would we always be in danger of a HAL 9000 type incident?


The only thing to protect us from HAL 9000 is HAL 8999 or HAL 9001. Whenever "the singularity" is created, there should be several copies. We pour a bunch of morality concepts into them and hope the "good" AI protects us from the "bad".

Quoting Devans99
5. What should we do? AI could be our saviour, yet it may also destroy us.


Build it. Worth the risk (I don't have kids or plan to, so that may make it easier :smile:). Just make copies to slightly reduce the risk (a team of AIs would have the same ability to wipe us out as just one, so copies add protection without adding danger).
Wheatley April 25, 2019 at 04:11 #281433
Quoting Devans99
What would a machine with an IQ of a million make of a human?

I don't think that mere intelligence would decide human worth. Determining human worth is a value judgement, and that's partly emotional.
Devans99 April 25, 2019 at 05:17 #281441
Quoting whollyrolling
3. Not if it has 1 million IQ and thinks of humans as having less value than bacteria.


Quoting ZhouBoTong
I would think so, but if it has the ability to learn, it could "lose" the respect. Maybe some form of morality could encourage it to keep the respect?


- AIs based on neural networks need training. We should be able to train this type of AI into behaving morally.
- If we fitted the AI with an "off switch" for safety reasons, it would likely feel completely insecure and paranoid. Imagine if you had an off switch that someone else controlled. Maybe something like this is what motivated HAL 9000?
- Even if we had a moral AI, it could be a danger to humans who don't behave morally. For instance, it might try to wipe out all the non-vegetarians.
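The first point - that a trained system's behaviour can be shaped by what we reward during training - can be illustrated with a toy sketch. This is a hypothetical example, not a real alignment method: a simple epsilon-greedy bandit learner whose reward is shaped by a made-up "harm penalty". The action names, reward values, and penalty weight are all invented for illustration.

```python
import random

# Toy example: the same learning rule produces different behaviour
# depending on whether a "morality" penalty is included in the reward.
ACTIONS = ["harmful", "harmless"]
TASK_REWARD = {"harmful": 1.0, "harmless": 0.8}   # the harmful act pays slightly more
HARM_PENALTY = {"harmful": 5.0, "harmless": 0.0}  # hypothetical training-time moral penalty

def train(penalty_weight, episodes=2000, lr=0.1, eps=0.1, seed=0):
    """Epsilon-greedy value learning with a shaped reward; returns the preferred action."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # explore occasionally, otherwise pick the currently best-valued action
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        reward = TASK_REWARD[a] - penalty_weight * HARM_PENALTY[a]
        q[a] += lr * (reward - q[a])  # incremental value update
    return max(q, key=q.get)

print(train(penalty_weight=0.0))  # -> harmful  (unshaped agent chases the bigger payoff)
print(train(penalty_weight=1.0))  # -> harmless (penalty makes the harmful act unattractive)
```

The point of the sketch is only that training signals, not intelligence as such, determine what the learner ends up preferring - which is exactly why "what do we bake in?" matters.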
whollyrolling April 25, 2019 at 11:19 #281567
Reply to Devans99

Did you just go watch 2001: A Space Odyssey and think to yourself "now I know everything there is to know about artificial intelligence"?
Devans99 April 25, 2019 at 11:26 #281573
Reply to whollyrolling I am an ex-computer programmer, so I have a limited amount of knowledge.
TogetherTurtle April 25, 2019 at 12:03 #281585
Quoting Frank Apisa
Any AI that would not see humanity as a viral danger to this planet (wider range of thought)...is lacking in intelligence also.


Why would AI care about the well-being of our planet? Such a consciousness would have the knowledge and capability to relocate itself to another planet, or even to empty space, if it wished. The reason we see the Ebola virus as a threat and not other similarly sized organisms is that the Ebola virus can actually hurt us. If AI is as strong as we think it will be, it will either see us as ants and ignore us, or wish to guide us like a benevolent god. Killing us is simply a waste of time and resources.
whollyrolling April 25, 2019 at 12:24 #281596
Reply to Devans99

You appear to have zero understanding in your comments.
Devans99 April 25, 2019 at 12:25 #281599
Reply to whollyrolling Please enlighten us all.
whollyrolling April 25, 2019 at 13:00 #281615
Reply to Devans99

There are numerous online resources composed by people who are alleged to know what they're talking about.

It's not my place to enlighten anyone, and not everyone can experience the enlightenment of which you speak.