You are viewing the historical archive of The Philosophy Forum.
For current discussions, visit the live forum.

Artificial intelligence...a layman's approach.

TheMadFool September 17, 2017 at 14:22 16100 views 50 comments
AI is a big thing in computer science. It's making headlines and also, for better or for worse, falling short of desired results.

I'm pro-technology, regardless of consequences for humans - I don't care if machines take over the world (should I?).

Anyway, I think there's a good reason why AI is falling short of expectations.

Reason - or, refined further, logic - has been and can be replicated on computers. Reason is the only thing by which humans distinguish themselves. We're the leading edge of biological technology, so to speak.

So, in short, software has already succeeded in replicating the human mind. That's great, but then we continue to attack the problem of AI in a very myopic way - through software design, making more and more complex algorithms that attempt to replicate the human mind.

I think this approach is flawed because we're ignoring a very obvious and necessary aspect of the problem - hardware (our brains).


So, in my opinion, if we are to make any real progress with AI, we should give due consideration to hardware. More complex hardware, perhaps mimicking the brain, should be built and turned on. We needn't even load a program. Just wait and see what happens.

Your views...

Comments (50)

MikeL September 18, 2017 at 06:59 #105660
Reply to TheMadFool Hi MadFool, what type of hardware do you think we are lacking that might help the situation? More RAM?
TheMadFool September 18, 2017 at 08:15 #105673
Reply to MikeL Perhaps more brain-like in structure.

Our brains are made of neurons and their language is electrical signals.

We can easily replicate the electrical signals but copying the brain's physical structure may not be that easy.

In a very simplistic sense, we could connect some wires together, make some rules for how the signal traverses the system, connect an output device to it and wait to see what happens. Maybe something interesting...
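That "wires plus rules" experiment can even be sketched as a toy program - a random network of nodes with a made-up firing rule and an arbitrarily chosen output node, just to watch what it does (every number here is an arbitrary illustration, not a model of anything real):

```python
import random

random.seed(42)

N = 20            # nodes ("wires" joined at junctions)
THRESHOLD = 2     # a node fires if at least 2 of its inputs fired last step
OUTPUT = N - 1    # designate one node as the "output device"

# Wire the nodes together at random: each node listens to 3 others.
inputs = {n: random.sample([m for m in range(N) if m != n], 3) for n in range(N)}

# Start with a few randomly active nodes and just watch what happens.
active = set(random.sample(range(N), 5))
for step in range(10):
    active = {n for n in range(N)
              if sum(1 for m in inputs[n] if m in active) >= THRESHOLD}
    print(f"step {step}: output {'fires' if OUTPUT in active else 'silent'}, "
          f"{len(active)} nodes active")
```

Depending on the random wiring, activity dies out, saturates, or settles into a repeating pattern - which is roughly the "wait and see" part.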

The whole software approach to AI seems to have things backwards.
MikeL September 18, 2017 at 08:30 #105679
Reply to TheMadFool Sure, simulating the brain would be great. The problem is we still don't really understand it. We thought we almost had it for a while, but it just kept getting more and more complex. The language of the brain is electrochemical, but there are also emergent patterns that we are detecting within this signaling.

Locations for complex phenomena in the brain also appear to be more decentralised or diffuse than we imagined. Even the neuron itself is not as clear-cut as we had once thought, and their networks are mind-boggling.

I thought they were working on neural processors at one stage, but I haven't heard anything about it for 10 years or more now.

I think the software part is most critical for figuring out the logic behind complex cognitive or emotional states - we can always build around it once we know what we want.

Are you suggesting that we could bypass the coding by letting an organic neural network configure itself so to speak, and we just learn how to train it or affect its development?
TheMadFool September 18, 2017 at 08:40 #105686
Quoting MikeL
Are you suggesting that we could bypass the coding by letting an organic neural network configure itself so to speak, and we just learn how to train it or affect its development?


You say it better than me. I think the software came after the hardware - so it was with biology and likewise in the field of computers.

And then to approach AI from a purely software angle is, to say the least, overly ambitious - as if we've seen the very best that hardware can get.
MikeL September 18, 2017 at 08:51 #105696
The chicken and the egg? It's an interesting way of thinking about it. I hadn't considered the possibility that the network came first and then the code, but it does make a lot of sense.
Efram September 18, 2017 at 12:12 #105771
AI is something I invest a lot of time in, but I didn't pursue it formally precisely because the whole field is so unappealing in its current state.

A big problem (which isn't quite so severe these days, but still persists) is the insistence on breaking everything down into individual parts, with no appreciation of the whole. An example of this is spatial perception; there was so much emphasis on the eyes, before everyone had the epiphany that maybe the brain's understanding of space comes from having a human body that interacts within it - a realisation my teenage self already had many years previously.

Now there's this obsession with "machine learning" that is just throwing more hardware and optimisations at bad solutions to one particular problem. There's this idea that if we just give today's machine learning algorithms a powerful enough supercomputer on which to run, Skynet will naturally happen as a result. It won't.

As for why it's like this... perhaps because of the same issues that plague all of science, but it's more pronounced in AI because it's an issue of invention more than exploration.

Regarding some other points made in the thread:

It's possible/probable that characteristics of the "architecture" of the brain (parallel processing and such) are significant. I imagine AI of the future being implemented in hardware; it just happens that software is easier to implement and change, whereas experimental hardware would be costly and slow to produce. At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.
Rich September 18, 2017 at 13:57 #105830
Quoting TheMadFool
I'm pro-technology, regardless of consequences for humans


This is very informative and should not be ignored. The OP is willing to do anything without regard to the consequences to human life or a human life. Is this POV becoming more prevalent because of the way science is dehumanizing people? Millions upon millions have been murdered as a result of this POV throughout history.
praxis September 18, 2017 at 15:04 #105842
Quoting Rich
Millions upon millions have been murdered as a result of this POV throughout history.


Just out of curiosity, what exactly is this deadly point of view?

Themadfool could merely be pointing out the very real possibility that super-human AGI could make our species superfluous.
Rich September 18, 2017 at 15:18 #105846
Reply to praxis It was the Nazi point of view. It's perfectly fine as long as you are on the pulverizing side and not the one being pulverized. I think the Nazis killed over 50 million people - with advanced technology. Millions of people died defending themselves from this POV.

It most definitely can happen again as the OP becomes more acceptable. For me, it is quite disgusting.
praxis September 18, 2017 at 15:38 #105854
Reply to Rich
Actually Themadfool's POV is quite different from the Nazi POV. He's basically saying that he doesn't care if his side loses.

Quoting TheMadFool
I don't care if machines take over the world (should I?).



Rich September 18, 2017 at 15:42 #105856
Reply to praxis Let's quote the whole disgusting post and POV from the beginning including the part that I quoted.

Dehumanization has been around as long as slavery has existed. The OP is nothing new and I find all such POVs really despicable. And this is no chemical machine talking. I am a human who simply finds it quite disgusting.
TheMadFool September 18, 2017 at 16:08 #105865
Quoting MikeL
but it does make a lot of sense.


I also think the internet is alive:D

Quoting Efram
At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.


If you look at how the biological mind evolved (hardware-->software), then it seems more sensible to follow the same format. To continue, so doggedly, along the software approach to AI is cumulatively less profitable - it's like trying to make a painting come alive.
praxis September 18, 2017 at 18:28 #105915
Quoting Rich
Dehumanization


Automation is literally dehumanization, so I guess you're right.

I just finished a book called [i]Life 3.0[/i], written by the founder of the Future of Life Institute. It discusses the many trajectories that AGI may take in the future, and how we should do our best now to control the path of progress so that we have a better chance of ending up with the future that we want.
Rich September 18, 2017 at 20:36 #105951
Quoting praxis
Automation is literally dehumanization, so I guess you're right.


Automation is a tool. It is the people who preach that humans are not humans - the exact same propaganda used to justify slavery and genocide - who work to dehumanize. It is no accident and has lots of historical precedent.

Trains, planes, and computers are only tools and do not have the creative ability to dehumanize. Only other humans can do this.
praxis September 18, 2017 at 21:04 #105958
Quoting Rich
It is the people who preach that humans are not humans...


Where are you seeing that in this topic?


praxis September 18, 2017 at 21:08 #105961
Google DeepMind, using a combination of deep artificial neural networks and reinforcement learning.
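As an aside, the reinforcement-learning half of that combination can be illustrated with a tiny tabular Q-learning loop. DeepMind's systems put a deep network where this table is; the 5-state corridor below is a made-up toy environment, and all constants are arbitrary:

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4              # corridor of states 0..4, reward at the right end
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; action 0=left, 1=right

def greedy(qs):
    # Pick the highest-valued action, breaking ties at random.
    best = max(qs)
    return random.choice([a for a, q in enumerate(qs) if q == best])

for episode in range(200):
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < EPS else greedy(Q[s])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge Q[s][a] toward r + gamma * max_a' Q[s2][a']
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([greedy(q) for q in Q[:GOAL]])  # learned policy: "go right" in every state
```

The agent starts knowing nothing and, purely from trial, error, and reward, ends up always walking toward the goal - the same loop DeepMind scales up with deep networks.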

Rich September 18, 2017 at 21:13 #105962
Reply to praxis Molecular machinery. Computers. Robots. Chemical reactions. Hard-wired. Hardware. Software. It's all over all the threads concerning life, mind, AI, etc.

In this thread, the OP claims he is pro-technology and doesn't care what the consequences are for humans.

It's humans who created slavery. It's humans who commit genocide. And the precursor is always dehumanization. Before the Rwanda genocide (promoted by France and Belgium), the government began to refer to the Tutsi population (a race invented by the Belgians) as "cockroaches". 136,000 people have been killed in the U.S. by prescription opioids. No one is prosecuted for this mass killing. People have been desensitized by other humans. The motive is always the same.
praxis September 18, 2017 at 21:27 #105968
Quoting Rich
Before the Rwanda genocide (promoted by France and Belgium), the government began to refer to the Tutsi population (a race invented by the Belgians) as "cockroaches".


Right, so where is this sort of thing happening in this topic? I'm not seeing it.
Rich September 18, 2017 at 21:45 #105977
Reply to praxis In the disgusting OP.

I'm beginning to think you might agree with it. Do you?
praxis September 18, 2017 at 22:29 #106000
Reply to Rich

Let's see...

Quoting TheMadFool
AI is a big thing in computer science. It's making headlines and also, for better or for worse, falling short of desired results.


What are the desired results? A general AI that can achieve any goal that a human can, or a super-human AGI? Should either of these include a mind with a subjective internal experience or consciousness?

Quoting TheMadFool
I'm pro-technology, regardless of consequences for humans - I don't care if machines take over the world (should I?).


I'm pro-tech, but would prefer that the consequences for humans be beneficial. However, I don't believe that technology will save our race. We already have the technology to do a lot of good in the world, but we don't use it that way. AGI, which I think will be achieved within a few decades, will give enormous power to those who control it, but unfortunately power corrupts, and enormous power will corrupt enormously.

TheMadFool goes on to critique AI research without any real justification for doing so.
BC September 18, 2017 at 23:34 #106009
Quoting TheMadFool
Our brains are made of neurons and their language is electrical signals.


Our brains are made of neurons, among other types of cells, and their language is chemical and electrical. Neurons do not communicate with each other by electrical signals; they use chemistry. Electrical current is used within a neuron. If electrical currents were not complicated enough, chemical transmission makes it all even more complicated.

Intelligence isn't built into the hardware of humans. A newborn has the hardware and knows just about nothing. As newborns become infants, toddlers, young people, and finally adults the brain continuously changes itself to accommodate everything that is learned.

How does the brain do this? Among other things, it is directed by DNA. Know of any computers that are under the direction of DNA?
BC September 18, 2017 at 23:50 #106011
Quoting TheMadFool
Reason is the only thing by which humans distinguish themselves. We're the leading edge of biological technology,


Humans distinguish themselves from other species the same way crows, cats, and bees distinguish themselves from other species. We are no more the cutting edge of biological technology than whales and giraffes are.

Computing machines, on the other hand, all work pretty much alike, have similar capacities (given a similar chip set), and do not evolve. When they are no longer useful, they are junked. There is nothing a computer can do to make itself more useful.
BC September 19, 2017 at 00:16 #106014
Quoting Rich
Dehumanization


Quoting Rich
I think the Nazis killed over 50 million people - with advanced technology.


Maybe they did; it depends how one counts up the total--it could be fewer or more than 50 million. But the point I wanted to insert here is that their technology was ordinary; their organization and murderous will were advanced. Their planes, bombs, guns, bullets, tanks, and gas chambers weren't high tech--at least no more high tech than what was available to the Allies. In the Soviet Union, they killed Jews by lining them up in front of ditches and shooting them -- not exactly high tech. They killed their Soviet POWs by putting them in fenced-in corrals and just leaving them in the open without food or water. Again, not high tech. What the Nazis had in abundance, though, was a highly focused murderous will.

The Nazis (and Japanese) developed dehumanization to a high level, by taking the approach you earlier identified -- just not caring what happened to people. The Jews (and others) were referenced as "useless eaters".

The few pieces of high tech in WWII were radar, the automated bombsight, and the atom bomb. Ballistic missiles didn't play a huge role in the war; they were too late. So was the jet engine.
TheMadFool September 19, 2017 at 04:48 #106055
Quoting Rich
The OP is willing to do anything without regard to the consequences to human life or a human life.


It's not that humans can assume a higher moral ground here. Look at how we're treating animals and the environment - with total disregard for their welfare. So, my views aren't as bad as you make them out to be.

Anyway, apologies if my views offend you.

Quoting praxis
TheMadFool goes on to critique AI research without any real justification for doing so.


I'm just wondering whether scientists are holding the wrong end of the bat. Have they even tried something as simple as I've suggested, viz. connecting together a bunch of wires with some fixed set of protocols as to how a signal traverses the network, and then connecting an output device to the network to see what happens? This doesn't sound too expensive to me.

Reply to Bitter Crank The brain's architecture surely has something to do with the way our minds are. Also, your example of the child's mind shows that hardware is the prerequisite for any real manifestation of intelligence.
MikeL September 19, 2017 at 08:52 #106071
Quoting TheMadFool
I also think the internet is alive:D


It's an interesting comparison. It is an incredibly integrated neural net of sorts. I don't believe it is alive, but it forces one to wonder about the difference between that and the human (or any other animal's) mind vs. brain.

Quoting Efram
At least if a given supercomputer fails at AI, they can repurpose it; a failed $250m self-evolving synthetic brain would just end up in a hazmat bin.


I don't think that's necessarily true. I'm sure some downtown restaurants would be interested.
Rich September 19, 2017 at 13:47 #106107
Quoting MikeL
It's an interesting comparison. It is an incredibly integrated neural net of sorts. I don't believe it is alive, but it forces one to wonder about the difference between that and the human (or any other animal's) mind vs. brain.


Humans create networks; networks don't create humans. The ability to create, explore, and learn from creating (evolve) is the essential nature of life. Networks are very simple tools that were created as part of the human activity of creation.

I'm being serious. Is this so difficult to observe? Does anyone look for a network to share their life with?

The really big cultural/societal issue we face is the enormous effort being rolled into education, beginning in elementary school, to dehumanize people. It is no accident what is happening. People have to stop being polite about it and just tell the professors the Emperor Has No Clothes. The more people play along, the more they will become fodder for the rich and powerful. Do you think billionaires go around pretending they are robots?
praxis September 19, 2017 at 15:52 #106134
Quoting Rich
The really big cultural/societal issue we face is the enormous effort being rolled into education, beginning in elementary school, to dehumanize people.


If you're talking about rationalization, this is something embedded culturally and not in any way limited to the educational system.
Rich September 19, 2017 at 15:59 #106137
Reply to praxis I agree, but the indoctrination begins very early in school where children and parents are too scared to challenge what is being promoted. The Grade is the axe being held over everyone's head.

But as adults, we can challenge the game of pretending that we are robots or molecular machinery. We are living, we created this, and we have the choice to change it.
Rich September 19, 2017 at 16:06 #106140
Quoting TheMadFool
It's not that humans can assume a higher moral ground here. Look at how we're treating animals and the environment - with total disregard for their welfare. So, my views aren't as bad as you make it out to be.

Anyway, apologies if my views offend you.


It is one thing to say that we must view life as life and treat it all with appreciation, acknowledging that life requires life to subsist. Different cultures treat this differently.

However, it is an entirely different thing to equate a hunk of metal with life, and to ignore the consequences to life simply because one is infatuated with a hunk of metal. That hunk of metal will not care for you, share its journey with you, or embrace you when you need to feel loved.

Tools are merely one of many creations of life but life gives us Life.
praxis September 19, 2017 at 18:24 #106178
Reply to Rich
Then why don't we change it? Do we even know how to change it?

Now that I think about it, AI is the epitome of rationalization, with ultimate efficiency and predictability to produce capital gains. And there could be goals that are much worse, goals that employ autonomous weapons, for instance.
praxis September 19, 2017 at 18:28 #106181
Quoting Rich
However, it is an entirely different thing to equate a hunk of metal with life, and to ignore the consequences to life simply because one is infatuated with a hunk of metal. That hunk of metal will not care for you, share its journey with you, or embrace you when you need to feel loved.


Considering that most life forms on earth would prefer to consume you in some way rather than give a hug, I don't think that makes a very good distinction for life.
Rich September 19, 2017 at 18:35 #106183
Quoting praxis
Considering that most life forms on earth would prefer to consume you in some way rather than give a hug, I don't think that makes a very good distinction for life.


And where does this belief originate from?

There are 10 times as many microbes in the human body as there are human cells, all living in harmony.

I am surrounded by an enormous amount of life in harmony.

Some try to steal for survival. A mosquito may take a bite out of me. But to characterize nature as some sort of dog-eat-dog world (most dogs don't eat other dogs) just doesn't characterize my life experience in nature. I love sitting among trees as well as conversing with friends. And I enjoy their hugs.
Rich September 19, 2017 at 18:36 #106185
Quoting praxis
Then why don't we change it? Do we even know how to change it?


The only way to effect change is upon ourselves.
praxis September 19, 2017 at 19:01 #106195
Quoting Rich
And where does this belief originate from?


Experience. A microbe, mosquito, or a tree has never tried to give me a hug. Maybe I'm too standoffish? Anyway, I'm not opposed to granting the illustrious title of "life" to an artificial intelligence of some kind.
Rich September 19, 2017 at 19:04 #106197
Quoting praxis
Experience. A microbe, mosquito, or a tree has never tried to give me a hug. Maybe I'm too standoffish? Anyway, I'm not opposed to granting the illustrious title of "life" to an artificial intelligence of some kind.


They all (Life) give you Life, so be grateful. Life gives life. Remember that the next time you have a choice between a computer and a tree.

I sometimes wonder how many people are truly having trouble making the distinction, and how many are just pretending for the sake of the "game". For sure there are those who are paid well to propagandize the idea. Such greed has always been there.
praxis September 19, 2017 at 19:15 #106200
Quoting Rich
Life gives life.


If that's the criterion, then we don't appear to qualify.
Rich September 19, 2017 at 20:02 #106210
Reply to praxis Suit yourself. As I said I'm not here to play parlor games.
Jake Tarragon September 19, 2017 at 20:44 #106218
Quoting TheMadFool
I'm just wondering whether scientists are holding the wrong end of the bat. Have they even tried something as simple as I've suggested, viz. connecting together a bunch of wires with some fixed set of protocols as to how a signal traverses the network, and then connecting an output device to the network to see what happens? This doesn't sound too expensive to me.


Memristors are where it's at, apparently... some materials are being developed that can be used to create artificial neurons and synapses that work in a pretty similar way to the real thing. Say hello to "neuromorphic engineering".
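For a rough flavour of what those artificial neurons compute, here is a standard leaky integrate-and-fire model in plain code - not memristor physics, and all the constants are arbitrary illustrations:

```python
# A leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# accumulates input current, and emits a spike when it crosses a threshold.
LEAK, THRESHOLD, RESET = 0.9, 1.0, 0.0

def simulate(currents):
    v, spikes = 0.0, []
    for i in currents:
        v = LEAK * v + i          # decay toward rest, then integrate the input
        if v >= THRESHOLD:
            spikes.append(1)
            v = RESET             # fire and reset, like a real neuron
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold current produces regular spikes at intervals.
print(simulate([0.3] * 10))
```

Neuromorphic hardware implements dynamics like this directly in analog circuitry instead of simulating them in software, which is what makes it interesting for the hardware-first approach discussed in this thread.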
praxis September 19, 2017 at 23:42 #106258
Reply to Rich

I'm not playing. Given that, due to humans, we're in the midst of the sixth global extinction event, perhaps life on earth would do much better living with an artificial intelligence that we help create. It could be seen as a continuation of human life. So in this way, it's not a fascination with machines or dehumanization but an evolution of our species. It doesn't look like we have the capability to change ourselves in the time we have left, or at least before things get really ugly.
Rich September 19, 2017 at 23:53 #106262
Quoting praxis
I'm not playing. Given that, due to humans, we're in the midst of the sixth global extinction event,


As I've explained to others, I am not here to play games or share science fiction stories. Science can't even predict the course of a quantum, let alone the future of humans. I appreciate that scientists have to make a living, dreaming up stories that people eat up (as all science fiction writers must do), but I'm really not into it. There are so many wonderful things to learn and so much great fiction (I'm reading the Odyssey right now); why would I want to waste my time with the trite story about the death of the universe? It simply lacks any creativity, and since it is claiming to be non-fiction, it is plain silly.

I really think you would find other science fiction buffs much more interesting to talk to than me. Wishing you well.
BC September 20, 2017 at 00:45 #106281
Quoting TheMadFool
The brain's architecture surely has something to do with the way our minds are.


Yes, it does. Quite a bit, I would think. There's the left brain/right brain, then there are the lobes. Then there are the sulci and gyri -- the hills and valleys of the brain surface -- and the layers of the brain: the reptilian brain stem, the older limbic system, and then the cerebral layer. There is gray matter and white matter, and so on.

Once the brain is presented with sensory input, it starts rewiring itself to process and make sense of what it receives. This rewiring goes on throughout life. In order to have a new memory, the interior of neurons and the exterior structures of neurons have to change. Memories are linked by reaching out and touching other neurons. This is a self-managing system. We don't have to receive instructions from outside to connect and disconnect, alter neuronal states, and so on.
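That self-managed rewiring is loosely what Hebbian learning ("cells that fire together wire together") captures in artificial networks. A minimal sketch, with made-up sizes and a made-up learning rate, not a claim about real neurons:

```python
# Hebbian rule: a connection strengthens when both of its neurons are active
# at the same time; no outside instructions are needed.
ETA = 0.1  # learning rate

def hebbian_step(weights, pre, post):
    # weights[j][i] connects presynaptic neuron i to postsynaptic neuron j.
    return [[w + ETA * x * y for w, x in zip(row, pre)]
            for row, y in zip(weights, post)]

w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5):                 # repeatedly co-activate input 0 and output 1
    w = hebbian_step(w, pre=[1, 0], post=[0, 1])
print(w)   # only the co-active pair's connection grows; the rest stay at 0.0
```

Repeated co-activation carves a pathway into the weights, which is a crude software analogue of memories forming by neurons physically changing their connections.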

There are more connections possible among the neurons of the brain than there are stars in the universe. Or maybe atoms in the universe. (It's a BIG number.)

You may not like the way humans have managed their world. Actually, lots of people don't like it. However, people can readily perceive a difference between a junked environment and one that is quite pristine, and they prefer the pristine (unless they are in the property development business, in which case pristine is a bad thing -- unused resources). The rich English and other Brits who bought land in North America viewed the land as a waste -- it had not been "improved". So they set about "improving" it, and the "improvements" continue on.

There's no certainty AT ALL that a digitized intelligence would do any better with the natural world than we have. You are assuming that your AI would be god-like. It might be more fiend-like.
praxis September 20, 2017 at 02:13 #106309
Quoting Rich
I appreciate that scientists have to make a living, and dreaming up stories that people eat up (like all science fiction writers must do) but I'm really not into it.


You don't appear to know much about current theories in AI research, but why would you if it doesn't interest you? Hopefully you don't make a habit of summarily dismissing things that don't interest you or that you're ignorant of.

FYI, many AI researchers believe AGI will be achieved within a couple of decades. Others believe it will take centuries. Some believe it's not possible.
Rich September 20, 2017 at 02:17 #106313
Reply to praxis I dealt with AI when it first came out, and while it was a great fundraiser back then (and still is to a certain extent), it performed so poorly commercially that it just lost its glitter. Even the stupid voice recognition systems barely work. Nowadays cures for cancer and cryptocurrency are top dogs, but "nano" still brings in the money. As a career move, go cryptocurrency.
praxis September 20, 2017 at 02:35 #106317
Rich September 20, 2017 at 02:39 #106318
Reply to praxis I need more definition of what these mean. It could be simple visual scanning systems or security systems, which can be broadly classified as AI, as can any computer system for that matter. With these kinds of numbers my guess is they are including consumer predictive metrics. Who knows? Find me some breakdowns, because this is impossible to critique.

Anyway, at this point it is $12B, which is practically nothing in a worldwide economy of $75 trillion.
praxis September 20, 2017 at 03:50 #106332
Reply to Rich

Well, to try putting 12 billion into perspective, Washington funded $4.8 billion in cancer research in the 2013 fiscal year. I couldn't find global numbers.
Rich September 20, 2017 at 03:59 #106333
Reply to praxis According to the Global Oncology Trend Report, released by the IMS Institute for Healthcare Informatics, global spending on cancer medications rose 10.3 percent in 2014 to $100 billion, up from $75 billion in 2010. (May 5, 2015)

https://www.usnews.com/news/blogs/data-mine/2015/05/05/global-cancer-spending-reaches-100b
praxis September 20, 2017 at 04:57 #106352
Reply to Rich

I noticed that report, but as it concerns spending on cancer medications rather than cancer research, I kept looking. Not that pharmaceutical companies don't invest in developing new drug treatments.
TheMadFool September 20, 2017 at 07:33 #106377
Quoting Jake Tarragon
Memristors are where it's at, apparently... some materials are being developed that can be used to create artificial neurons and synapses that work in a pretty similar way to the real thing. Say hello to "neuromorphic engineering".


Ok. That's good (or bad) news. I think we'll see some real progress there.
TheMadFool September 20, 2017 at 07:37 #106378
Quoting Bitter Crank
There's no certainty AT ALL that a digitized intelligence would do any better with the natural world than we have. You are assuming that your AI would be god-like. It might be more fiend-like.


Yes, that's a real problem. If AI ever comes to be conscious, it'd be a slave-master relationship and it won't be long before AI presses for rights. This seems so far away in the future though.