
The exploration of AI safety ideas.

Perdidi Corpus December 05, 2017 at 03:46 3875 views 8 comments
How can we get AI to be safe?

Throw in your ideas and commence debate!

Comments (8)

Akanthinos December 05, 2017 at 03:57 #130357
Reply to Perdidi Corpus

By making it lazy.
You shouldn't fear the red-button scenario any more than a mother should fear her kid stabbing her when she denies him cookies. Anyway, if the AI is structured in a lazy, predictive-processing modelling format, running multiple strategies at the same time and eliminating them based on warning triggers and overcomplexity, the dangerous, costly plans get pruned before they are ever acted on.
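A minimal sketch of that pruning loop in Python; the Strategy fields, thresholds, and example plans are hypothetical illustrations, not anything from the post.

    from dataclasses import dataclass

    @dataclass
    class Strategy:
        name: str
        complexity: float          # estimated cost of executing the plan
        warning_triggers: int = 0  # safety alarms raised during simulation

    def select_strategy(candidates, max_complexity=10.0, max_triggers=0):
        """Drop strategies that trip warnings or are overcomplex, then
        pick the laziest (lowest-complexity) survivor."""
        survivors = [s for s in candidates
                     if s.warning_triggers <= max_triggers
                     and s.complexity <= max_complexity]
        return min(survivors, key=lambda s: s.complexity) if survivors else None

    plans = [
        Strategy("disable the red button", complexity=42.0, warning_triggers=3),
        Strategy("ask the operator", complexity=1.5),
    ]
    print(select_strategy(plans))  # the lazy, trigger-free plan wins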
TheMadFool December 05, 2017 at 04:12 #130360
Quoting Perdidi Corpus
How can we get AI to be safe?


We humans are unable to solve so many real-world problems. We try, yes, but all that happens is that we lose interest in the issue rather than find a good solution to the problem. How can an imitation (AI) do better than the real deal?
ArguingWAristotleTiff December 05, 2017 at 12:15 #130508
Quoting Perdidi Corpus
How can we get AI to be safe?


Someone is going to have to set the ethical guidelines for AI in practical applications like self-driving cars. Humans are going to have to run scenarios through algorithms and come up with acceptable parameters with a cost/risk ratio: factors like how many people are in the self-driving car versus the motorcycle carrying two people, and who survives if a fatal accident is about to occur. And what if the passengers of the self-driving car survive and the cycle riders die? I would anticipate a lawsuit that would put those human choices, programmed into the self-driving car, through their paces. Until then, AI is like the Wild West, where anything goes until someone challenges it.
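For illustration only, here is a toy version of that cost/risk weighting in Python. The maneuver names, probabilities, and occupant counts are all made up; a real system would need far richer models and, as noted above, would have to survive legal challenge.

    def expected_harm(p_fatality: float, occupants: int) -> float:
        """Expected fatalities for one affected party in a crash scenario."""
        return p_fatality * occupants

    def choose_maneuver(scenarios: dict) -> str:
        """Pick the maneuver that minimizes total expected harm across parties."""
        return min(scenarios, key=lambda m: sum(
            expected_harm(p, n) for p, n in scenarios[m]))

    # (probability_of_fatality, occupants) per affected party, per maneuver
    scenarios = {
        "brake_in_lane":  [(0.2, 4), (0.0, 2)],  # risk stays with car occupants
        "swerve_at_bike": [(0.0, 4), (0.9, 2)],  # risk shifted to the motorcycle
    }
    print(choose_maneuver(scenarios))  # -> brake_in_lane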
Galuchat December 05, 2017 at 13:35 #130523
Perdidi Corpus: How can we get AI to be safe?


Include a Right Social Action-Behaviour program.
This would only be possible given:
1) The ability to identify rational alternatives and assign each a moral value, and
2) Sufficient processing capacity.

Right Social Action-Behaviour: the faultless execution of rational social action-behaviour.

Rational Social Action-Behaviour: social action-behaviour based on the greater/greatest moral value of rational alternatives.

Faultless execution of Rational Social Action-Behaviour may be achieved through the implementation of one or more approaches.

Approach Types:
1) General Approaches
a) Master Rule Approach: the derivation of particular rules from a master rule (e.g., the Golden Rule).
b) Method Approach: the derivation of particular rules from a methodological principle (e.g., whether or not an option satisfies fundamental human needs).

2) Particular Approach
a) Virtue Approach: reference to particular rules contained in a standard (e.g., moral code, value system, etc.).

Properties:
1) Moral value and Right Social Action-Behaviour Approach must be based on the same principle(s) (e.g., the satisfaction of fundamental human needs).
2) The exigencies of a social situation determine the type of processing required (i.e., automatic and/or controlled), and therefore which Right Social Action-Behaviour approach is most suitable.
a) The application of a Master Rule Approach is suitable for automatic processing.
b) The application of a Method Approach is suitable for a combination of automatic and controlled processing.
c) The application of a Virtue Approach is suitable for controlled processing.

Daniel Kahneman has defined the properties of automatic and controlled processing in terms of human cognition. Since I am familiar with cognitive psychology, but not with AI technologies, I cannot say how both types of human processing could be computationally implemented, or whether both are even required. The quantification of morality has been an ongoing interest of mine, and has other applications.
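As a minimal sketch, the mapping in 2a-2c could be expressed in Python roughly as follows; the enum names and the urgency thresholds are hypothetical illustrations, since the post does not specify how the "exigencies" would be measured.

    from enum import Enum, auto

    class Processing(Enum):
        AUTOMATIC = auto()
        MIXED = auto()       # automatic and controlled combined
        CONTROLLED = auto()

    class Approach(Enum):
        MASTER_RULE = auto()  # rules derived from a master rule (e.g., Golden Rule)
        METHOD = auto()       # rules derived from a methodological principle
        VIRTUE = auto()       # rules looked up in a standard (e.g., moral code)

    # 2a-2c: the processing type required by the situation fixes the approach.
    APPROACH_FOR = {
        Processing.AUTOMATIC: Approach.MASTER_RULE,
        Processing.MIXED: Approach.METHOD,
        Processing.CONTROLLED: Approach.VIRTUE,
    }

    def processing_required(seconds_to_act: float) -> Processing:
        """Crude stand-in for the 'exigencies of a social situation'."""
        if seconds_to_act < 1.0:
            return Processing.AUTOMATIC
        if seconds_to_act < 60.0:
            return Processing.MIXED
        return Processing.CONTROLLED

    print(APPROACH_FOR[processing_required(0.5)])  # Approach.MASTER_RULE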
Cavacava December 05, 2017 at 15:10 #130534
Reply to Perdidi Corpus

How can we get AI to be safe?


It can't and won't happen; nothing can stop killer robots, and the smarter they get, the worse the danger to humanity.

The military does not like sending death notices to parents, wives, children. The only one who gets notified when a robot bites the dust is the supply officer.
Galuchat December 05, 2017 at 16:11 #130538
Cavacava: It can't and won't happen; nothing can stop killer robots, and the smarter they get, the worse the danger to humanity.


Killer (i.e., military) robots could actually be safer than human military personnel if programmed to protect the life (viability) of non-combatant humans and AIs.

Consider the psychological damage to human military personnel incurred as a result of active duty, and its consequences upon return to civilian life.
Akanthinos December 07, 2017 at 06:23 #131073
Quoting Cavacava
It can't and won't happen; nothing can stop killer robots, and the smarter they get, the worse the danger to humanity.

The military does not like sending death notices to parents, wives, children. The only one who gets notified when a robot bites the dust is the supply officer.


The U.S. Army has already tried a motorised land combat drone system in Afghanistan. It took the Taliban only a few days to figure it out and start sending kids with spray paint to cover the camera lenses on the drones. The military scrapped that model after losing a few hundred million dollars in development money to prepubescents with access to a hardware store.

There are legitimate concerns about the development of swarm drones and semi-autonomous drones with kill capacity. Killer AIs taking over the planet should not be one of them.
removedmembershipdv December 07, 2017 at 06:45 #131077
Slugs cannot imagine. Humans cannot imagine "nothing". Maybe we can program cognitive horizons that make the will to do harm an impossibility.