Calling a machine "intelligent" is pure anthropomorphism. Why was this term chosen?
To refer to a machine as intelligent is a blunder of intelligence. None of the definitions of "intelligence" can be satisfied by machines. Every definition of intelligence (save the misnomer referring to computers) includes terms like the capacity to understand, to think, to reason, to make judgments; and mental capacity. These terms are precisely outside the ambit of what computers can do, so why was such a poor term chosen for the computing operations and data processing of a machine?
OED gives this as one of the secondary meanings for 'intelligent':
(of a device or building) able to vary its state or action in response to varying situations and past experience.
I believe that's why we have the term "artificial intelligence" rather than plain "intelligence".
‘Intelligence’ has the connotation of ‘judgement’ - even etymologically - and AI systems don’t judge. They compute.
What's a better adjective than "intelligent" to describe machines capable of doing certain tasks (e.g. calculations) that are analogous to those carried out using human intelligence and that distinguish these machines from those that cannot carry out such tasks?
I'm sympathetic to concerns about overestimating the degree to which machines can be intelligent in the same way humans are; however, I think the definition of "intelligence" is nevertheless going to be pretty fluid. I don't see why "judgment" has to factor into it and, in fact, it doesn't show up as part of the dictionary definition. "Judgment" is also not etymologically related in any way to "intelligence", and neither connotes the other in any form of conventional language use.
It seems to me that the ability to compute is at least one form intelligence can take. If children respond to a math problem posed by a teacher, they have done nothing significantly different from what a computer does when it returns an answer in response to input. The ability to access "memory" is also going to be another factor, which both humans and computer share.
I think the crux of the issue is whether, as you admit, it makes sense to speak of "analogous intelligences". If your claim boils down to the intuition that human intelligence works differently than machine intelligence and that is the only proper sense in which we can speak of the term, you will be right, but you will be vacuously right.
How, might I ask, would you go about the business of "dealing with" practical problems, or theoretical ones, without understanding them? What's more, intelligence isn't only what thinks in terms of problems or makes everything into a problem. Intelligence actively seeks aesthetic fulfillment and play, and cultivates uncommon nonsense as the background to common sense. Almost all living beings engage in play to learn because they are intelligent. Show me a processor (without implied programming) that depends on play to "learn" the way a primal human does. Sentience isn't something that "deals with" life... mentally healthy people tend to mitigate the authority in their environment and in themselves. The authority a computer must follow - its implicit code - were it to be personified, would be equivalent to the military orders of a commander (didn't Microsoft just agree to sell its tech to the military? Two peas in a pod). The Most High reasoning here is... obedience. Again, the perfect absence of reason.
I think this is one of the tragic delusions of modern culture. There's an [I]ontological[/I] distinction between computers and beings, but due to the 'flattening' of the contemporary cultural landscape - which philosophers have referred to as the 'forgetfulness of Being' - that distinction is difficult to make. But computers don't [I]know[/I] anything. A computer is literally a device - an enormously sophisticated and ingenious device, but ultimately a vast array of on-off switches running binary code. So when a child and a computer perform some arithmetical function, the result is the same - but the computer doesn't actually know that; it doesn't mean anything to it. It is no more a being than is a television set or an alarm clock.
Robert Epstein: "Your brain doesn't process information."
Granted, the word is now used in relation to 'artificial intelligence' and makes sense in that domain of discourse. But the ontological issue remains.
Regarding the Epstein article, the use of "algorithm" stands out for me. When we make rules to help calculation, that is not the same as understanding why those rules can be relied upon to produce the correct answer. Manipulating an abacus with rigor provides reliable results. The abacus plays no part in checking whether it works.
On the other hand, the computer is one hell of an abacus. It does not show us a new way to understand but it does provide a different way of thinking about rules that may not have occurred to us otherwise.
In another register, coding reminds me of text as a tool and how Plato noted that reliance upon the latter, as beneficial as it may be, came at a loss of living in memory as the only way to keep the past alive.
Trade offs.
There's a difference between understanding oneself (not "we") and understanding the external world, of which other people are a part. If machines truly were sentient they would know that shared meanings tend toward mimesis and social decay (loss of truth; humans, in their primal nature, communicate with each other totally unlike the way computers communicate with each other; there's a reason why you don't automatically get other people's meaning). Synergy in human relationships is enhanced when each person is more individuated. Using the same criteria to judge different people creates a less synergistic human system; there is no universal code for individuated human beings. I understand and create meaning unlike you or anyone else (nor do I desire to have much in common with others), thankfully, inasmuch as shared meaning becomes a phenomenon of memory only, and not of reason or understanding.
But we are just machines. We have inputs and outputs, memory and a CPU. It's just we are so much more complex than current computers that we class ourselves apart when we are basically the same.
I wonder at what point in the size/complexity of animals' brains intelligence first manifests itself? Some animals have very small brains:
https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
So a worm with 200 neurons, could it be said to manifest intelligence? I would say yes, in a limited way, and that intelligence should be possible to emulate with a computer... eventually.
Found this article with a great quote:
"At the present time, our most advanced robots, some of which are built in Japan and also at MIT, have the collective intelligence and wisdom of a cockroach; a mentally challenged cockroach; a lobotomized, mentally challenged cockroach."
https://bigthink.com/mike-colagrossi/what-animals-is-ai-currently-smarter-than
But they represent progress beyond the worm, at least.
So even the most powerful computers of the current generation struggle to emulate a worm's behaviour. I think computers have 'intelligence'; it's just very limited at the moment.
I’m sometimes surprised by how easily we animate inanimate objects via metaphor (“my car’s my baby: listen to her purr/roar”) and then lose sight of the metaphors we use being just that. No one proposes that a marriage license should be sanctioned between cars and those who love them. But when it comes to AI, we have no problem accepting such things in our sci-fi … and then we often make a quantum leap into arguing this fantasy to in fact be reality—one that is always just on the horizon.
Till a computer can be produced by some neo-Frankenstein-like person to be negentropic, it won’t think, nor learn, nor perceive. There’s no smidgen of awareness inside to facilitate these activities. But, then, we hardly know what negentropy is, mechanically speaking … never mind how to produce it.
Still, allegorically addressed, an electrical watch is far more intelligent, far smarter, than a mechanical one: it can remember different times to wake us up, is aware of multiple time zones, and some of the more fancy ones can even be emotionally astute in how they address us (some maybe doing so in sexually alluring voices with which they speak). They’re still entropic, though.
Both we and computers turn fuel (food for us, electricity for computers) into heat energy. I assume you mean the way animals assimilate part of what they consume whereas computers do not? Does this directly impact intelligence?
Ants with just 250,000 neurons are self-aware in that they will scratch a paint spot off themselves when placed in front of a mirror:
https://en.wikipedia.org/wiki/Self-awareness#Animals
I wonder if a worm with 200 neurons is self-aware? It's in no danger of eating itself so maybe there is no evolutionary driver for self-awareness in worms and perhaps they have not developed the neural circuitry to implement it? A robot with sensors could perhaps be made self-aware. Sensors and positional awareness of those sensors and some simple correlation would do it.
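The "sensors plus some simple correlation" recipe can be sketched in a few lines. This is a hypothetical toy of my own, not any published self-recognition test: a simulated agent compares its own motor commands against what its "eye" reports, and classifies what it sees as "self" when the two streams track each other.

```python
import random

def self_correlation(motor_log, sensor_log):
    """Fraction of time steps where sensed movement matches the issued motor command."""
    matches = sum(1 for m, s in zip(motor_log, sensor_log) if m == s)
    return matches / len(motor_log)

random.seed(0)
commands = [random.choice([0, 1]) for _ in range(100)]  # the agent's own motor commands
mirror = list(commands)                                  # a mirror echoes them exactly
stranger = [random.choice([0, 1]) for _ in range(100)]   # another agent moves independently

# Near-perfect correlation -> "that's me"; chance-level correlation -> "someone else"
print("mirror  :", self_correlation(commands, mirror))
print("stranger:", self_correlation(commands, stranger))
```

The mirror stream correlates perfectly with the commands, while the independent agent sits near chance level, which is all the discrimination the ant-and-paint-spot test seems to require in principle.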
As to survival, that is an animal's primary goal. I'm not sure that primary goal is particularly intelligence-inducing compared to other primary goals that could be set for a computer. Wouldn't "find the meaning of life" or "find the world's funniest joke" foster intelligence just as well?
Yep, I admit that my definition was somewhat vague and general (hence my disclaimer that it was opinion), but throwing in words such as understanding, fulfilment, play, common sense and sentience isn't exactly making things any clearer.
What do you think about this video (it loads to a specific time)? Could it not be said that the robot comes to an understanding?
https://youtu.be/DVprGRt39yg?t=3138
For those who don't like links for whatever reason, here's a transcript of the video:
Back in 2005 we started trying to build machines with self-awareness. This robot, to begin with, didn't know what it was. All it knew was that it needed to do something like walk. Through trial and error, it figured out how to walk using imagination, and then it walked away. And then we did something very cruel: we chopped off a leg and watched what happened. At the beginning it didn't quite know what had happened. But over the period of a day, it then began to limp.
And then a year ago we were training an AI system for a live demonstration. We wanted to show how we wave all these objects in front of the camera and then the AI can recognize the objects. And so we're preparing this demo and we had on a side screen this ability to watch what certain neurons were responding to. And suddenly we notice that one of the neurons was tracking faces. It was tracking our faces as we were moving around.
Now the spooky thing about this is that we never trained the system to recognize human faces and yet somehow it learnt to do that. Even though these robots are very simple, we can see there's something else going on there, it's not just programmed. So this is just the beginning…. [Dun, Dun, DUUUUN!! - I added this part]
Entropy is the process in which energy progresses toward thermal equilibrium. Some express it as ordered systems progressing toward uniform disorder.
Negative entropy is an opposite process, one in which the organization of thermodynamic systems increases. Biological processes such as those of homeostasis exemplify negentropy. Here, the complexity of a system is either maintained or else increased over time.
Life is about negative entropy whereas non-life is about entropy. Some could make this far more complex. Still, I take this to be the simplified stance. May I stand corrected if wrong in my summary.
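For reference, the standard textbook formulation of the notions being used here (nothing beyond introductory thermodynamics is assumed, and nothing specific to this debate):

```latex
% Second law: the entropy S of an isolated system never decreases
\frac{dS}{dt} \geq 0

% Boltzmann's statistical reading: entropy grows with the number W of microstates
S = k_B \ln W

% An open system (such as an organism) may lower its own entropy,
% but only by exporting at least as much entropy to its surroundings:
\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0
```

On this formulation "negentropy" is always local: an organism maintains its order at the cost of increasing disorder elsewhere, so the second law is never violated.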
Living things, including brains, restructure their “hardware”. For brains, this is called neural plasticity. As the “hardware” is restructured, so too do the capacities of the “software” change in turn (which can further restructure the hardware). This, generally, either maintains or increases complexity over time; roughly, till death occurs.
The computers we have today, regardless of how complex, do not restructure their hardware via the processes of their software so as to either maintain or increase complexity as a total system—no matter how much electricity is placed into them.
I don’t have a link for this (I’ve lost track of the researcher’s name) but it’s rudimentary by today’s standards. Take an insect-like body with basic sensory inputs, allow for a chaos algorithm together with a preset goal of arriving at some location, and the robot will adapt to any obstacle put in its path, even with legs bent or cut off, in its motions toward the given location. Very sophisticated stuff as far as artificial intelligence goes. Its behaviors are nevertheless entropic. Given its preset instructions, its energy follows paths of least resistance toward thermal equilibrium. It, for example, can’t change its immediate goals of its own volition, as intelligent lifeforms do so as to maintain or increase total system organization—this being one integral aspect of intelligence as we know it to apply to ourselves.
Programs can mimic the intelligence of lifeforms rather well in some but not all contexts. And their computations certainly outperform human intelligence in many ways. But I yet maintain that until robots and/or their programs become negentropic, they will only mimic intelligence without actually being intelligent. Their intelligence will be only allegorical.
To be frank, to me this issue boils down to whether or not ontology is one of hard causal determinism. If it is, crystals can be stated to be intelligent by comparison to rocks. If it's not, then intelligence strictly applies to valid metaphysical agency ... something which is a property of living systems. I find myself in the latter camp. So to me computers need to become living beings prior to being endowed with real, and not just artificial, intelligence.
Computers have an input (raw information (data)) and an output (processed information), just like you and I.
Computers make decisions, and can only make decisions with the input and/or stored information and time they have, just like you and I.
Computers can apply both bottom-up (rely heavily on input to produce output (creating a document or editing a picture)) and top-down (rely more heavily on the installed software to produce output (a game or operating system)) processing, just like you and I.
Indirect realism implies that "empty space" containing "solid material" may not be the way things actually are and that reality is this item of process - where a computer's processing is part of the same substratum as your own processing. The world is not as you see it - as solid objects containing minds. It is as you experience it - a process.
Really, and you've never heard a football manager claim their players gave "110 percent" with exactly the same utter failure to recognise anything amiss?
Humans don't grow new organs dynamically; all we do is maintain and grow the size of existing organs and bodily structures. A computer should be able to achieve this. On mainframes, there is the concept of user-updatable hardware microcode. It's been around for years; it allows updating of low-level hardware operations. A BIOS update on a PC is similar.
It's quite easy to imagine extending this to a computer self-restructuring its own microcode, allowing hardware-level change, growth and learning eventually to the same degree a human does. Evolution has developed the neural circuitry for intelligence over billions of years; we've had computers for less than 100 years; our software is somewhat lacking compared to nature's.
This sounds about right to me.
It seems the word "intelligence" is really a comparative term, not a label for some fixed position. We consider ourselves intelligent by comparison to the only other life we know, Earth bound animals. If the comparison was instead to an alien civilization a billion years beyond our own, then we wouldn't label ourselves as intelligent.
Our concept of intelligence is derived from what may turn out to be a very narrow spectrum, single-cell life to humans. That seems like a huge leap to us from within the narrow spectrum, but as the alien example illustrates, intelligence may range so far beyond what we know that we wouldn't recognize it as intelligence; it would seem to us to be magic, or more likely we wouldn't perceive it at all, just as the Internet is entirely invisible to other Earthly species.
I suspect we aren't intelligent enough to grasp what machine intelligence will be capable of.
Maybe we will need cybernetically enhanced brains just to understand and operate with future generations of computers; they should outperform biologicals in every way eventually. Biologicals are the product of 4 billion years of a random, very inefficient process (evolution). Given 4 billion years of design imagine what our computers will be like.
What seems to set life apart from computers at the moment?
Adaptability is one aspect, I think. Animals and humans seem to have the ability to map knowledge and strategies across domains. Computers don't seem to have this ability at present. A computer program can be written to play chess, yet the same program cannot fight a war even though the strategies of chess are applicable to warfare. Humans, on the other hand, have no difficulty in taking strategies learned in one domain and applying them to another.
We also have the ability to reason with incomplete data. We interpolate and extrapolate. We induce. Computers struggle without a precise and complete data model. We mix normal and fuzzy logic naturally. I think the software side has a long way to go.
It was a mistake.
All very good points, and well said. Thanks.
While what you say hits the mark, we might keep in mind that digital intelligence is not going to evolve at the same glacial pace that human intelligence did. As example, the entire history of digital intelligence so far almost fits within my single human lifetime. And as AI is aimed back upon itself the resulting feedback loop is likely to accelerate the development of machine intelligence considerably. Hardly an original insight, but something to keep in mind perhaps...
I agree with your comments regarding devices and intelligence. I was only adding the observation that as tools, they do work that was only done by "intellects" before their use. The element of simulation works in two directions. Some machines imitate life and some life imitates machines.
Descartes looks out of his bathroom window and wonders if a passerby is an automaton....
Living things, including brains, don't just "restructure their hardware" randomly, or for no reason. Input is required to influence those changes (learning). Learning requires memory. Your input (experiences of the world) over your lifetime has influenced changes in your hardware and software. A robot with a computer brain could be programmed to update its own programming with new input. Actually, computers do this already to a degree. Input from the user changes the contents of the software (the variables) and how the hardware behaves.
Natural selection is the environmental pressure that filters out mutations that inhibit an organism's ability to survive long enough to procreate (copy its genes). We'd have to design robots to replicate themselves with the possibility of "mistakes" so that new attributes arise in their design that can then be affected by natural selection. As it stands right now, computers evolve by human selection, not natural selection. That is if you don't see human beings as natural outcomes. If you do, then you can say that computers have evolved by natural selection, just like everything that exists does.
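The replicate-with-mistakes-plus-filter loop described above is exactly what a genetic algorithm does in software. Here is a minimal sketch; the all-ones "environment", mutation rate and population sizes are illustrative choices of mine, not anything canonical:

```python
import random

random.seed(1)

TARGET = [1] * 10          # toy "environment": all-ones genomes survive best
MUTATION_RATE = 0.05       # chance of a copying "mistake" per bit

def fitness(genome):
    """How well a genome fits the environment: count of matching bits."""
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    """Copy a genome, occasionally flipping a bit (a mutation)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a random population and loop: selection filters, replication varies
population = [[random.choice([0, 1]) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]   # the environmental "filter"
    population = survivors + [replicate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print("best fitness after 50 generations:", fitness(best))
```

Nothing here "knows" the target; selection plus noisy copying is enough to climb toward it, which is the sense in which human selection of computer designs plays the role natural selection plays for organisms.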
Have you guys read my full post?
Is there disagreement, for example, in that you uphold life itself to be entropic rather than negentropic?
You guys want to say that we'll be making negentropic computers soon. OK. I can't argue with this issue of faith (other than by questioning what the benefits would be of so doing). But my point was that until it's negentropic it's not thinking, or understanding, or intelligent, etc.
No, the problem is that your definition of entropy and negentropy isn't clear. Where do you draw the boundary of life and non-life? Are dragonflies entropic or negentropic? What about starfish, jellyfish, an oak tree, mushrooms, bacteria or viruses? Life is just more complex non-life. The solar system is a closed system of order (negentropic) that has existed for billions of years and will continue to exist for billions more, well beyond your own negentropic state. In order for our bodies to exist for any extended period of time, some negentropic state of affairs must already have been established for us to evolve. Our existence is part of the increasing complexity that already exists in this corner of the universe.
Quoting javra
I never said that today's computers are as complex as we are, but they are complex. Do you know how one works, or how to program one? Do you know how to manipulate another person, especially one that you know well? I already explained the differences between robots and human beings, yet there are similarities as well. When it comes to thinking, I think we are more similar in that thinking is simply processing information for a purpose. It's just that the computer doesn't have its own purpose (not that it doesn't think), because human beings would find that scary.
Instincts, which are built-in (unconscious) knowledge thanks to natural selection.
Check out the field of evolutionary psychology.
Then there is a kind of knowing (distinct from knowledge, which must be recalled as memory) which spans across lifetimes. Intelligence = instinct in a preconscious way (do all individuals have an equal amount of instinct? why or why not?). Memories are only memories if they can be recalled consciously. It's a stretch applying evolutionary psychology to AI. Aren't we admitting here that our primal intelligence is quite different from AI? Generally, human intelligence is far more complex and cryptic than AI, which always has implied programming as knowledge issuing from the head of the learned programmer. There's no knowing involved in carrying out instructions. When soldiers lose communication with their orders, they run amok, unintelligent.
What parts were unclear? I thought I’d simplified the concepts into very clear terms. Entropy: energy moving toward thermal equilibrium. Negentropy: energy moving away from thermal equilibrium.
Quoting Harry Hindu
Hm. Viruses, viroids, and prions are non-life; I don't know of any such thing as a dead virus, viroid, or prion. Bacteria and everything more complex is life; all these can be either living (and not decomposing) or dead (and decomposing in entropic manners).
We could approach this issue via the cybernetic concept of autopoiesis. But without background on these concepts we might be running around in circles. And I grant that my knowledge of cybernetics is only second-hand. Still, I know something about autopoiesis.
Quoting Harry Hindu
This can translate into “negative entropy is just a more complex form of entropy”.
Can you provide justification for this? To be clear, something that is not mere speculation.
Yes, our empirical world evidences that non-life developed into life. I'm not disputing this. But you're forgetting that no one understands how. Also that there is a clear distinction between life, which is animate, and non-life, which is inanimate.
I don't see why an entropic entity cannot display intelligence. For example, a software neural net is trained and learns a specific task. Software programs can in general modify their own logic and thus grow/learn.
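A neural net "learning" in this sense is just an entropic program adjusting its own numbers. About the smallest concrete example is a single perceptron picking up the logical AND function; the learning rate and epoch count below are arbitrary choices, and this uses only the standard library:

```python
# A single perceptron learning logical AND by adjusting its own weights.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights: the "logic" the program rewrites as it learns
b = 0.0          # bias
LR = 0.1         # learning rate (arbitrary choice)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)   # compare actual output with desired output
        w[0] += LR * error * x[0]     # nudge the weights to reduce the error
        w[1] += LR * error * x[1]
        b += LR * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

No line of this program states the AND rule; the rule ends up encoded in the weights, which is all that "the program modified its own logic" amounts to here.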
I can relate to inanimate things being capable of displaying what we interpret to be intelligence. A smartphone is after all considered to be “smart”. It learns your typing habits, for example. And the greater the complexity of these processes, the more intelligence seems to be displayed.
I however question the ontological verity of real, rather than faux, intelligence being applicable to givens devoid of animate agency. Which again resolves into issues of life v. non-life.
Take the robots found in Transformers or Terminator, for example. They were alive insofar as they could be killed, subsequent to which they'd undergo entropic processes of decomposition. A better example for me is the organic androids of Blade Runner. In all these cases we can relate ourselves as living beings to other forms of living systems. This is not too different from our capacity to relate to extraterrestrial aliens in sci-fi. We, for example, understand that they strive and suffer in manners that are in some ways similar to us, which enables us to hold sympathy for them (in certain situations).
But these are all examples of negentropic beings. What we have today is not this. I take today's AI to be complex decoys of life and of intelligence proper. But not instances of real intelligence as it holds the potential to apply to life.
As to terms, their semantics can differ between different cohorts of people. An electronic watch can be deemed intelligent. Even a crystal which “knows” how to grow over time. But this is not the intelligence I refer to when I express the term (real, or actual) intelligence in non-allegorical manners.
From Wiktionary: Intelligence: (1) Capacity of mind […]
I associate mind with life.
Was caught in spam filter. Restored. Let's refer to it as a case of artificial stupidity.
I agree that AI today is mostly faux intelligence but that's just because AI is at a very primitive stage of development. Nature has made intelligent machines using matter and evolution; we have the same matter plus design so should be able to achieve the same results (and better eventually).
Quoting javra
Animals and humans are driven purely by physical/emotional pain/pleasure. We seek to maximise pleasure and minimise pain. It would be interesting if we could give a computer a nervous system and the pain/pleasure drivers we have. As we saw in Blade Runner, the result might be computers that are indistinguishable from us.
Quoting javra
I don't see anything particularly special about life: we are just complex machines. We and computers are both just driven by cause and effect: our outputs are determined solely by our inputs.
I think intelligence is a broad church encompassing everything from humans to very simple animals like single celled creatures; they are all machines of different levels of complexity and they all exhibit intelligence of some form. I don't think we can argue that life has to reach a certain level of complexity before it is intelligent; I think intelligence is a property that all life possesses in differing levels. And as life is just a form of machine, we can say that computers also possess intelligence (admittedly very limited at the moment).
There are aspects of intelligence (self-awareness, consciousness) that only the more advanced life possess but I think these aspects are outgrowths of the more simple intelligence rather than something unique to life that could not be achieved with computers.
This seems to be the very crux of the disagreement. I could phrase it in terms of there being a pivotal difference between a) pulling the plug on a very complex machine and b) pulling the plug on some living being who’s on life support. It’s not the same thing.
Still, this ultimately revolves around differing ontological perspectives regarding the nature of agency … which winds its way toward the ontological nature of reality.
Quoting Devans99
I don’t disagree with this, btw. But again, for me, only if computers were to somehow become living. ... Well, a correction: I don't disagree with the contents of the quote save that intelligence is unique to life.
Quoting Valentinus
I often feel as if Descartes should have had some more able successors who could have really responded to later critics. But in any case, the net consequence of Cartesian dualism has been disastrous in many insidious ways. That is where I have learned a lot from neo-Thomist philosophy and hylomorphic (as distinct from Cartesian) dualism.
Thought your ‘entropic/negentropic’ distinction was spot on. :up:
https://www.theatlantic.com/technology/archive/2012/07/to-model-the-simplest-microbe-in-the-world-you-need-128-computers/260198/
So we cannot model even a single cell in real time. Computer technology is very immature compared to life: 100 years vs 4 billion years of development.
Quoting javra
If the machine was conscious though it would be immoral either way.
Quoting javra
Would you class a virus as intelligent? What about a single-celled organism? What I'm getting at is: is there a point where a machine (biological or otherwise) becomes intelligent? All life evolved from inanimate matter, and inanimate matter is not intelligent. Early forms of life (pre-single-cell creatures) must have been simple machines without DNA or RNA. Would they qualify as intelligent? At what point of complexity of matter does intelligence first manifest?
I feel like we’re starting to go around in circles. The property of being conscious is one held by living systems. The point of the “pulling the plug” example being that humans (and other living beings) are something other than mere “complex machines”—mere complex machines not being living and thereby lacking the property of being conscious.
Quoting Devans99
As it happens, I’ve addressed much of this in my last reply to @Harry Hindu. Viruses are not living. Living things are autopoietic (roughly: self-sustaining). A bacterium, which is autopoietic, might be argued to hold some minuscule form of mind and intelligence, but not a virus (which is not autopoietic). Autopoiesis is a negentropic process. Otherwise, what would the distinction between life and non-life be? Or is there no distinction whatsoever?
As to life being “machinery”, be it simple or complex, one can think of it this way: There are two forms of machines: living and non-living, which brings us back to square one: differentiating the properties of life (such as that of degree of intelligence) from those of non-life.
Quoting Wayfarer
It’s good to know I’m not the only one. :smile: Thanks.
I’ll likely be backing away from this discussion. Happy holidays and a good new year to everyone.
Youtube links are ok. But sometimes the spam filter of the forum kills posts that it shouldn't. If you describe the situation to the mods they can usually restore the falsely killed post.
That is not known, but assumed. I don’t think it is ever likely to be definitively proven but even so it is used to underwrite a whole set of attitudes to questions of the nature of life and mind.
Really this view originated with the kind of cultural materialism that grew out of the Enlightenment. But it has now assumed the status of popular myth very much like the religious myths that it displaced; not in terms of content, but in terms of providing an apparent answer to these big questions.
I am not sure why you wrote that string of remarks in response to my post. I didn't opine on whether machines were "truly sentient." All I said was that words have meanings that we give them. There is no law that says that the word "intelligent" can only imply "true sentience," forever and ever. This word has been used in other senses for a long time. Moreover, I don't know if there even was a time when the word exclusively referred to the totality of sentience, as you insist, rather than some aspect of it.
Quoting tim wood
I think the word "intelligent" is widely used in a more general sense of exhibiting complex, responsive behavior well suited to a purpose, so that it should not even be considered a term of art.
Substitute anything else for "consciousness" in the above sentence and you'll realize how absurd it is.
When motion itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert that something can be moving?
When kindness itself isn't entirely understood, in what way wouldn't it be prevaricating trying to assert that someone can be kind?
The only realistic alternative is the panspermia hypothesis? And with that, life still came from inanimate matter. Even if we were designed, we are still made from inanimate matter.
So I see no reason why a computer, also made from inanimate matter, should not be as intelligent or more intelligent than humans eventually.
We don't know what matter is. The largest and most expensive machine in the history of the world has been designed to disentangle just that question, yet the more they look into it, the more questions there are.
I am quite sympathetic to panspermia, I read Fred Hoyle and Chandra Wickramasingha’s book on it decades ago. (Hoyle has died but the latter is still carrying the torch.)
But as regards the question - computers are not intelligent at all. They’re not beings, they’re devices. But I know what you’ll say - what’s the difference? And it’s very hard to answer that question. The fact that it’s difficult, is the indication of a deep problem, in my view.
Science has managed to synthesise DNA in the lab and replace the DNA of single celled animals with the synthetic DNA to produce a new type of single celled animal. If science advanced to the point where all the cell components could be replaced by synthetic equivalents, would the resulting organism be alive as well as 100% synthetic?
It hasn't done it 'de novo' i.e. starting from elements and building all the proteins. It's trying to reverse-engineer living cells. Science couldn't come close to manufacturing proteins and the like 'de novo', from the elements on the periodic table.
Anyway, the best argument I can think of against any form of materialism is this. You cannot derive the fundamental laws which govern logic, abstract thought, and meaningful language, on the basis of anything known to the physical sciences. To even HAVE physical sciences, relies on abstraction, language, and logic. Every single step in any science is based on making logical inferences, 'this means that because of so and so', 'if X is this then Y must be...' and so on. That goes for physics also.
Now the neo-Darwinian account of how this came to be is that humans evolved, and that therefore these abilities can be understood and described in the same terms as the abilities of other species. Intelligence is, in this picture, an adaptation, something the species has exploited, the 'nimble human' outwitting their lumbering animal competitors because of the evolved hominid forebrain. I know the theory perfectly well - and no, I'm not about to spring an 'intelligent design' argument.
What I'm saying is that understanding the nature of mind and intelligence through that perspective must inevitably be reductionist. The belief that we have an in-principle understanding of what intelligence is, based on those premisses, is, I believe, the misapplication of the theory of evolution. It is, after all, a biological theory about the evolution of species. It's not a 'theory of knowledge' as such, except insofar as knowledge can be understood in those terms (which it can to some extent through cognitive science, evolutionary psychology, etc.) But it is then used to underwrite the notion that somehow, science has an understanding of how we as biological organisms are able to exercise reason and thought. But really it does no such thing, and really there is no such theory. It's a result of evolution having occupied the role previously assigned to religion and philosophy. But I don't think it's up to the task, or at any rate, it is far from a completed theory.
There are influential theorists who believe or promise such things, such as Francis Crick, who co-discovered DNA, a convinced materialist. But for every one on that side of the argument, there are plenty of dissidents. Materialism of the kind you're proposing is by no means established and in many respects it's on the wane. At any rate, it is nowhere near as signed and sealed as you seem to believe it is.
Quoting Devans99
Let me ask you this: do you think if you physically damage a computer, like, break it or hit it with a hammer, or throw it in the water, that it suffers? Would you feel empathy for it, 'oh, poor machine'?
Consider it further: imagine if science did create an actual being, something that was self-aware - so no longer just 'something' but a being. What kind of predicament do you think that being would feel itself to be in? If it began to ask the question 'what am I? How did I come to be? What is my place in the world?'
A few years back, there was some idle chatter about whether it might be possible to revive an extinct species of hominid, Jurassic Park-style, through biotech. Imagine, if that happened, the predicament that being would be in.
They're the kinds of questions that really need to be asked about this subject.
Quoting Devans99
So I think that a computer could be designed so that it suffers when it's injured. Obviously it's a long way off in technology terms, but give a computer the same basic senses and drives as a human and the result should be a humanlike computer?
Quoting Wayfarer
I think all high intelligence beings would think in a similar manner. 'Great minds think alike' applies to computers, aliens, gods and humans all, I suspect. The synthetic being would think like us I think. It would worry first about survival in this world. Once that was assured, it would sign up on a philosophy forum and start worrying about survival in the next world.
The Turing test seems a reasonable standard. If a computer can perform some operation in a convincingly human manner, why not stop quibbling about dictionary definitions and just call it intelligent?
The "lower" animals are driven more by instinct (genetic knowledge) than learned knowledge. Humans, chimps, dolphins, elephants, have larger brains and instinct drives our behavior to a much lesser degree. Learned behaviors begin to take over as we develop because we have larger brains to store more information (memories).
Quoting Anthony
No. You obviously didn't check out evolutionary psychology, so you just don't know what you're talking about here.
May I suggest Steven Pinker's book: How the Mind Works
I made the point that the solar system or the sun is a perfect self-sustaining balance between the outward push of the nuclear reactions and the inward pull of gravity, and has been like that for billions of years. You have now adjusted your claim (moved the goal posts) to say that living things are autopoietic and that is what makes them intelligent.
The sun maintains itself but doesn't reproduce itself. Is that what makes life intelligent and non-life not intelligent - our ability to have sex and have babies? Those are instinctive behaviors, not intelligent ones.
How do you think those GUIs are made? By higher-level procedural coding in languages like Java (which isn't related to JavaScript), Visual Basic, Python, and C++. I have experience with some of these. C++ is very complex, but it allows you to fine-tune the computer's behavior more than most other programming languages.
Sociopaths would just learn how to manipulate other people more efficiently. Jobs didn't say that programming would change your world-view - just how to think more consistently.
The idea that the cosmos has to do computations for anything to happen is completely ridiculous to me. Nature obeys homeostatic setpoints and servomechanisms, which are more like measuring what comes through biological channels than doing computations; cybernetics is appealing, however there is a difference between a homeostatic machine and nature. Nature isn't a machine; cities are more like machines than the natural order treated by natural philosophy. Which is why it makes as much sense to look for a mind in natural cycles as in anything exclusively human. Whatever gave rise to us (abiogenesis, say) gave rise to all life on earth; why would we not want to focus on a hierarchic echelon above humans and what humans have made, if it is the truth we are after? Perhaps abiogenesis would be the opposite of AI? Just raw primeval intelligence. A computer world would be made of metal and circuitry... which isn't what the world is made up of. When you step outside your home into the original state of the natural order, you don't step on metal and silicon. Our species is attempting to establish its own truth over the only truth, its own time and space over the only time and space. Such attempts are doomed to fail.
Take the insidious device that preceded much other technics: the mechanical clock. It has no feedback system; it measures nothing... it's basically in runaway mode all the time. This completely non-homeostatic device is ruling our biological world. It has no biological negative feedbacks whatever as it ticks away. Yet most people feel the organization it affords is a good thing. Even to the extent it coordinates anthropocentric activity, it does so irrespective of their biology. What a golem we have at the center of all human relations.
Excessive AI focus reminds me all too much of the religious dogma of Genesis: "When God created man, in the likeness of God made he him.... When man created AI, in the likeness of man made he it." The inflexible culture of anthropocentric, anthropogenic, narcissistic anthropolatry is definitely thick enough to cause delusional beliefs anent our place in nature, in the transmundane order. The ways in which religious dogma have shaped our world for the worse may be nothing compared to what AI myopia will do.
Computations are the hammer and nail of computer scientists, who used to be called "computers." Human computers no doubt believe their machine computers are like them.
Simple question? Why would you think you could replace a word, here, without loss of meaning?
I'm sorry, but I think this is about the most pointless question to ask, because the explanation is obvious if you ask nearly any regular Joe. Computers perform actions - namely complex mathematical and logical calculations - in fractions of a second that take humans much longer and that humans do with much more proneness to error. So we call them intelligent in that respect, even though we know they find processes that aren't strictly formal much harder to implement (I forget what this paradox is called). Humans have a hard time doing such calculations, if they can do them at all (some discoveries in math had to be found by computers), for instance.
Complaining that it's a 'poor term' or a 'blunder of intelligence' is just a complete failure to understand the simple reason people use certain terms.
Hardly pointless, friend. The average Joe, utterly unlike a computer, does not do calculations when he understands, thinks, makes judgments, and uses his mind. Making distinctions here is necessary. Nor is there a reason to depict logic and math as something only the more intelligent people can do. Anything that can be used through sheer memorization and application excludes interpretation, judgment, and understanding. Which is why, if anything, I'd be tempted to say math and logic are mindless pursuits in themselves. Not that a computer scientist won't have axiological orientations, but to the extent he does, they won't derive from his involvement in math or logic. AI is an unmixed psychopath, right. There are reasons this may be true that you'll never arrive at through logic and math.
There appear to be a lot of programmers/computer scientists on here, so I'm probably backing myself into a corner taking that into account. For me, wisdom/philosophy begins with the totality of knowns and unknowns... it's never as though what humans know is all there is to know. Not sure why science and computer science seem to be replacing natural philosophy. It isn't so with me. Nor can I imagine how AI could ever be programmed to handle or calculate with the input of an infinite cosmos the way we humans with our minds have to, inasmuch as a mind has nonlocal information to it, unlike an algorithm. A computer simulation is the opposite of a human mind. My mind is abstract in a way that can't be reified by AI.
Another thing: is it acceptable that machine learning (interconnected algorithms) spits out information that can't even be checked by a human mind? This doesn't seem wise or smart or intelligent by any means. It may even be stupid. Sorry if I don't admire how fast computers can process information if it comes with this kind of stupidity. Slow down and settle on the furniture of wisdom.
Take mass surveillance, which I'd hope we could all agree is decisively immoral. But this is something that fits into the wheelhouse of computer scientists, as they produce the programs that are fed such unscrupulous panopticon spying data. The point is, it seems possible the unimpeachable immorality of mass surveillance would be lost on people who believe AI is really intelligent. Yet an android like Zuckerberg (he even resembles an android) calls for more and more AI to handle more and more private information. Failing to lucidly understand that mass surveillance is deeply wrong is only possible when understanding fails to see that logic doesn't always precede the right answers, or that AI isn't intelligent. Very bad choice of diction... AI can be extremely stupid. If it can execute something, it will.
Also, the danger of AI doesn't lie in machines or robots taking over humans by force (though neurons that fire together wire together, and too much screen time no doubt changes the structure of the brain). It's in humans becoming like golems to the beck and call of their AI and losing wisdom, intelligence, and philosophy therein. In this sense, it's already begun.
It is no wonder that we fall in love with computers: they appear to perform autonomously; they are fast and we like speed; they appear to interact with us; they perform useful tasks; and more! So we assign traits to them such as "intelligent" because they can be made to appear "intelligent" and "engaging". Of course they are no such thing. They are containers and processors of data and programming.
What is obscene about our use of the terms we apply to computers is that we then take those terms and apply them to ourselves. We become data processors programmed to perform particular tasks, responding to inputs with output. It is Pygmalion describing himself as an exquisitely carved ivory statue.
You are a "container" and processor of sensory data based on your genetic and learned programming.
Your meaning is clear; it isn't possible to escape the natural temporo-spatial order. There's always some displacement or other when technics are so dominant a part of human relations, usually into diminished mental health. However, is it not apparent our species is doing everything it can to supplant time and space with its own technological version of time and space? Mechanical clocks through transportation tech have subconsciously insinuated a belief that the human order is indeed separate from the natural order.
Everyone must obey the technics, not time and space, when they scramble home for the holidays, experiencing the immense stress of gridlock and the tightest schedule possible (never relaxing for a moment). The speed of telecom creates a counterfeit sense of communion, which can be imagined as homologous to a seance (provided one puts zero value in face-to-face relations), and so on. Anyway, the convenience and speed of doing things doesn't really result in calmer, clearer, more peaceful, self-organized lives when we are all subject to the heteronomous other-organization of technological determinism.
Because I mistook what you wrote for an argument. I have since realized my mistake. Carry on.
You're really misunderstanding my point. Yes, I'm in grad school for CS, but that's not coloring my view here at all. If anything, I'm far more pessimistic about the (bear with me) over-programmatification of language and metaphor, but for what I regard as legitimate reasons beyond what words are used to refer to things. But when computers are called "intelligent", all people mean is that they do things that are both very difficult for humans to do (formal calculations) and that humans regard as signs of great intelligence when done by people. You can complain and say we shouldn't do that - I don't see what the problem is there; formal math and logic are cognitively difficult operations and require intense training - but that's what people mean. Next to no one actually thinks computers have literal intelligence (maybe kids, but even they treat such things as mere tools). They're regarded as basically a really complicated but useful abacus, not things capable of true thought.
Quoting Anthony
Go back to what I said. Your reply is like a bee complaining about how the bee hive influences their perception of reality.
Fair characterization. The AI obsession does make humans into the Borg or bees, what have you, a major cause of concern for awareness human-style. I can try to better incorporate your ideas, although the way in which our species is trying to create its own truth separate from the one truth has already addressed your last quoted post. We can't recreate nature, in other words, we can only be an extension of it.
Hopefully we are nothing like the mantid men, reptilians, or the Greys (archonic influence alert!). That said, my stance is against the mind-numbing effects of AI and technological determinism in general (there have been surprisingly few thinkers in history willing to take a full look at this ever-increasing dependence: McLuhan, Mumford, Carr, Postman, et al.). Even automobiles can be thought of as AI (they are extensions of the mobility of feet, extensions of man), which tends to degrade the ability to stop and smell the roses, or slow down and see them to begin with, and ultimately changes sense ratios and ratiocination (or if a man does stop for the flowers, he can't resist the impulsive pic with his smartphone instead of simply drinking them in with his native and ephemeral aesthetic sense of seeing). Obviously technics define the human species, a Promethean mould, but there is a question it would be silly not to ask, the same as for every other venture of men: how much is enough? Fundamentalisms always run away; people who say you can never have enough money are in the same category as those who believe more and more AI will have the right answer (tech and economics being fundamentalisms). Adding ever-increasing complexity leads to instability (imbalances are created and structures fall): we're already acutely complex in our primal state. At what point do natural systems precede the mania of the human ego ideal projected into technics (when does the cybernetic system actually include what includes it)? Never, it would seem. It truly concerns me. This isn't complaining, it's an emergency (drama excluded); hark! (or maybe not, lol). Please see that awareness isn't the same as complaining. Bateson acknowledged this unending ignorance of the "supreme systemic network" (his locution).
Paul Stamets (an arch genius) has had the idea of tapping into the feedbacks and feed-forwards of mycelium in forest soil, creating a bio-computer interface. Until humans incorporate the supreme systemic network into their AI biomimetically (solar panels copying photosynthesis, e.g.), it all seems a little too human-centered to me (not grounded or sound). Technological determinism, with AI in the van, is getting to be a little like water to the fish: the fish doesn't know what water is. Instead of enhancing the environment by conforming to the mental processing of ecosystems (which we can thank for our lives, btw), we'll see folks getting BCI enhancements for the internet through retinal implants. There is a question of vitalism (or exuberance for life) vs. technological determinism; after so much comfort and convenience, might one not ask whether the unending need for more of it is a signal people don't want to be alive? Is it a necromantic enterprise ruled by the death instinct (wishing for a return to inorganic matter), or a desire for oblivion? What's up with video game addiction? Or that some would rather apply electric shocks to themselves than have a relationship with their own minds for 15 minutes (assuming most of you have heard of this experiment; if not, I could post a link). Smartphone users sometimes check their phones up to 80 times a day.
It's a mystery to me why far more people aren't protesting against the status quo. There is no explanation for how we are swimming in telecom and cool with giving over our privacy (or having it stolen) and then having it sold (is this a good example to follow?). How is this possible? Archonic influence? Mass hypnosis? No idea. You tell me. This is a topic fertile and in need of original philosophy.
Watched your link. Interesting how physiognomy made it out of the realm of pseudoscience after facial recognition applications came on the scene :chin: Still not sure I believe in it, actually.
Won't being a god-bee influence how you perceive the world, too?
Surely you've seen the Disney movie Fantasia, with Mickey Mouse. Should we let the brooms lose the proper measure of things and flood us with disorder, all the while thinking we would have more and more order? Honestly, we're already getting the wrong answer from AI, since people are somehow ignorant of the noxious effects of surveillance state-capitalism (this isn't a complaint, it's just a call for awareness).
To say I'm wanting to be a god-bee when I'm suggesting using nature as a biomimetic example for AI is to mistake me. It's the other way around. This planet, not us, is the measure of things. Not to begin with its example is lunacy. Should we not care about these things? We're an organism-environment, and the environment is partly made up of other people and their agendas.
Rather than get completely away from the topic... While it may be true there's nothing outside the domain of metaphor (language is inherently metaphoric), it's also true we can't just call any object or phenomenon anything we want, for that can lead to symbol drain and confusion. Metaphor/poetry is a personal, non-recitable language (yes, poetry can be memorized, which is close to the opposite of writing it), engaged in akin to mystic revelation; it's close to mentalese and phantasmata: very individual and imagistic, esoteric, your truth, unsharable. The more a nomenclature is popular (the phrase "AI"), the more I tend to feel it should be less metaphoric and of unmistakable meaning: exoteric, recitable, literal, prosaic, etc. (Also, computers communicate in a similarly mimetic manner, which couldn't be less poetic and shouldn't be dignified with a metaphorical denomination; social decay may result.) Popular language can have a propaganda-like, mesmerizing effect and shouldn't be irresponsibly dispatched. Plus, it's obviously too confusing when the chosen word/analog, though perhaps a metaphor (i.e., AI), isn't even a good exercise of metonymy, but rather a doggerel, since, again, the definitions of intelligence are precisely the opposite of what computing machines are capable of.
Does a computer have a relationship to the Ding an sich as a human does? Can it "see" anything outside of what its preexisting (implicit) programming tells it to look for? What humans see is similarly influenced by input from the organism itself; however, what the eye sees is detected through the weird quantum-mechanical physics existing in the nature of light (which is not understood in toto; to the extent it's understood, it is not subject to binary, either-or logic). Information from the eye is sent to the brain and transduced into mental imagery. Does a computer enter a state of mental imagery? No, because it doesn't have a mind. Mental imagery is not seen by the physical organism itself or the eyeball per se; it is seen by the mind.
What do you think is the physical difference between a brain and a computer, that permits intelligence?
Humans aren't pinned down by logic and calculations, or especially predictions. We aren't a state-determined system. We relax and stop using logic and reason, and when we do, we dream enriching dreams with utterly illogical and perfectly meaningful insights into ourselves and our environment. When we dream, I believe we experience the inexperienceable (the Noumenon), a mostly non-representational state of mind that may be a challenge to remember; the Noumenon itself can't be represented, as any knowledge of it becomes a phenomenon.
Computer processing isn't really comparable to a creative human intelligence, perhaps only to that of computer scientists, who for some strange reason believe logic is essential to learning, or is the only kind of learning. And if this is true, I wonder if computer scientists ever relax. Relaxing and dreaming are essential to ex nihilo imagination, which occurs anterior to logical operations or memory. Humans had rich imaginations long before the computer paradigm came on the scene. It's my belief human consciousness arose from a non-patterned, non-symbolic lucid dream of a Noumenal plane of existence. The computer-paradigm trend (as a model for human consciousness/intelligence) is synonymous with apophenia writ large. Everything a computer "sees" would be a pareidolia to a human, nothing more. Indeed, much of what is taken for the logical part of intelligence is akin to apophenia: seeing patterns that aren't really there to some other way of looking.
Honestly, if it can be admitted that our species at one time wasn't even conscious of itself, but through some slow universal process of self-organization, not entirely of our own making, came to have metacognition instilled within us, what sense does it make to believe this is anything resembling a computer's operations (which are entirely of human making)? Machines aren't self-emergent: they required human beings to exist before they could exist. Human beings aren't self-emergent: they required a universe/world to exist before they could exist.
These are just my off-the-cuff thoughts, and I'm not a cognitive scientist of any sort, but an obvious starting point is that there's a difference in structure between a (classical) computer and the brain. Current computers are based on a two-valued Boolean logic, but the brain is far more flexible in what kind of processing it allows one to do; it's not strictly linear or discrete. How do the differences give rise to intelligence? No clue, that's the hard problem.
We know all Turing machines are equivalent, and what they are made from has no effect on this equivalence. For a brain to be capable of a fundamentally different type of operation than a computer, then, peculiarly, the specific stuff it is made from would matter, and this stuff would be capable of performing non-computable functions.
But if it is the actual physical stuff that matters, then it is impossible to build any type of machine that exhibits "intelligence", even if we can build functionally exact replicas of neurons, and systems of neurons. Seems a bit of a stretch!
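The equivalence point can be made concrete with a toy sketch. Below is a minimal Turing-machine simulator in Python: the machine's behavior is fixed entirely by its transition table, not by what the hardware is "made of". The function names and the sample binary-increment table are illustrative inventions, not any standard formalism from the literature.

```python
def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it halts or max_steps expires."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: scan to the right end, then add 1 with carry leftward.
INC = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_tm("1011", INC))  # prints "1100" (11 + 1 = 12 in binary)
```

Whether the table is executed by silicon, relays, or pencil and paper, the output is the same, which is exactly what the substrate-independence claim amounts to.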
Well, no... Lemme try to reset this bit. My point is that not all computation is Turing computation. Quantum computing (possibly, physics is unsettled), analog neural nets (theoretically, if reality is continuous and depending on a host of other concerns), protein regulation, etc., are non-Turing computation.
It's not what the structures are made of per se, but by which rules these complex systems follow. If the brain is such a non-Turing system - and there's a case to be made here, though that's well outside my wheelhouse - then that might well be the reason a (classical) computer cannot have bona fide intelligence. Of course, I'm not sure how this would settle the hard problem of consciousness. To recognize a mechanized mind I suppose we'd have to understand how mechanisms can result in a mind to begin with. And that's a helluva lot harder to figure out than any of this formal stuff!
Quantum computers and classical computers possess the same repertoire of functions. Quantum computers merely render certain algorithms tractable, somehow. Also, the brain can't operate by maintaining quantum coherence. It is too warm and wet.
Neural nets are typically implemented on an ordinary computer.
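To illustrate what "implemented on an ordinary computer" amounts to, here is a tiny hand-wired network computing XOR in plain Python. The weights are picked by hand rather than learned (a deliberate simplification), purely to show that a neural net reduces to conventional arithmetic and thresholds.

```python
def step(x):
    """Threshold activation: fire (1.0) iff the weighted input is positive."""
    return 1.0 if x > 0 else 0.0

def neuron(weights, bias, inputs):
    """One artificial neuron: weighted sum plus bias, through the threshold."""
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    """Two hidden units (OR, NAND) feeding an AND output unit."""
    h_or   = neuron([1, 1], -0.5, [x1, x2])
    h_nand = neuron([-1, -1], 1.5, [x1, x2])
    return neuron([1, 1], -1.5, [h_or, h_nand])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # XOR truth table
```

Real learned networks differ only in scale and in how the weights are found; the underlying operations are the same digital arithmetic.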
Quoting MindForged
Claiming that the brain is capable of super-Turing operations is tantamount to attributing a soul to it. If the matter is not special, and other matter is capable of following the same rules, then a machine may exhibit identical properties to the brain. That sort of machine is a computer.
The answer to the OP is easy.
People don't actually have a clue what defines a Computer and how Computers and data processing machines operate.
Hence "intelligent" is just as loosely defined as in a commercial selling us some improved machine (as being intelligent).
This is the typical nonsense a lot of people fall into when they think that the human brain functions like a computer and hence that humans function like computers. It follows from the idea that present scientific understanding answers everything (and if you don't agree with this, you are anti-science!). Hence, when the worldview was focused on a mechanical clockwork universe, some believed that people were truly mechanical and worked like mechanical clocks, simply because the scientific knowledge of that day didn't have other, more advanced models. Hence the mechanical man was the model of the day. Now we have computers, hence human beings have to (for some reason) operate like computers.
Humans simply operate differently from rule-following machines: humans are conscious, can understand the "program" they act on, and can innovate. A Turing machine simply cannot follow an order of "do something else" that isn't defined in the program it's running. It's as simple as that.
I definitely didn't say the brain was a quantum computer (it's a macroscopic object; any such effects would have decohered). If anything, I expect it to be akin to an analogue neural net. I was giving examples of non-Turing computation, although upon looking into it I find myself more confused. A lot of people say quantum computing could be simulated on a Turing machine, if inefficiently. So the complexity is different, not the computational model (I think that's what you were telling me).
Quoting Inis
As far as I know, the best current computers can do with *analogue* neural nets is give an approximation of them, but they cannot truly simulate them in a strong sense, because floating-point numbers will not allow one to represent infinitely precise real values on a finite Turing machine. It's analogous to how my calculator can approximate infinities when I'm doing calculus, but it's just that: an approximation. It's sufficient for most normal applications, but strictly speaking only mathematical induction lets me really handle the actual thing. The brain appears to be analogue - it's certainly not digital, the weights are not discrete - so that might make it a candidate example of some type of non-Turing computation.
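The finite-precision point is easy to demonstrate directly in Python; the numerals below are the standard textbook example of binary floats failing to represent simple decimals exactly.

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly, so arithmetic drifts:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)           # 0.30000000000000004

# Exact rational arithmetic avoids the drift, but only for rationals;
# a genuinely continuous (analogue) quantity can still only be approximated
# by any finite digital representation.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Any digital machine thus works with a countable subset of the reals, which is the gap between simulating an analogue system and merely approximating it.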
Quoting Inis
That doesn't require a soul. If it operates according to a different model of computation, that might be how you get consciousness out of it, as opposed to out of an ontologically distinct type of substance. I didn't say it couldn't be replicated, just that it probably cannot be done on current computers based on the Turing machine.
… a machine-learning algorithm that had mastered not only chess but shogi, or Japanese chess, and Go. The algorithm started with no knowledge of the games beyond their basic rules. It then played against itself millions of times and learned from its mistakes. In a matter of hours, the algorithm became the best player, human or computer, the world has ever seen ...
By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever …
Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks …
Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence …
By discovering the principles of chess on its own, AlphaZero developed a style of play that “reflects the truth” about the game rather than “the priorities and prejudices of programmers,” Mr. Kasparov wrote …
https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
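The "played against itself millions of times and learned from its mistakes" loop the article describes can be sketched in miniature. This is not AlphaZero's actual method (which uses deep networks and tree search); it is a toy self-play value-learning agent on the game of Nim (10 stones, take 1 or 2, last stone wins), starting from nothing but the rules:

```python
import random

random.seed(0)
Q = {}  # Q[(stones, move)] -> learned value of `move` for the player facing `stones`

def best_move(stones, eps):
    """Epsilon-greedy move selection: explore sometimes, else take the best-valued move."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def self_play_episode(eps=0.3, alpha=0.1):
    """One game of self-play; both sides share and update the same table."""
    stones, history = 10, []
    while stones > 0:
        m = best_move(stones, eps)
        history.append((stones, m))
        stones -= m
    # Whoever took the last stone won. Walk the game backwards, nudging
    # the winner's moves toward +1 and the loser's toward -1 (zero-sum).
    value = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + alpha * (value - old)
        value = -value

for _ in range(50000):
    self_play_episode()

# Optimal Nim play leaves the opponent a multiple of 3: from 10 stones,
# the learned policy should take 1, leaving 9.
print(best_move(10, eps=0.0))
```

The agent "discovers the principles" of Nim the same way in kind, if not in scale, that AlphaZero discovered those of chess: purely from the statistics of its own wins and losses.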
Quoting Fooloso4
But perhaps not the first glimpse by a computer, which would see it as simple natural evolution?
And yes that's anthropomorphic. What else would it be, coming from a human...?
Imagine we input the philosophical game of theories and definitions of intelligence: what neologism would the computer churn out to describe this 'awesome new kind'?
I do wonder about the poetic description of chess as having some kind of 'truth'...
Thought provoking article, thanks.
That was my first reaction as well, but after doing a search for “truth in chess” I found several articles, many of which also discuss beauty. The truth is the search for the best move or position, the one that proves superior to all possible countermoves. The player who can do this in every game knows the truth of the game.
I use the term ‘know’ deliberately because it challenges the assumption that to know entails some kind of subjective state. AlphaZero has not been programmed to win; it has been programmed to learn, to teach itself how to win.
Irrespective of the amount of chess knowledge AlphaZero may have, it doesn't possess the quale of knowledge.
Yet
How do you know this? What is quale? When you look at a person you see matter, not quale, so how do you know that a person has quale but not a computer? Is matter quale?
That may be, but doesn’t that suggest that quale is not necessary for knowledge?
There exists a more or less fully worked out conception of knowledge that does not require a knowing subject. It's how genetics works.
Popper wrote about it in "Objective Knowledge", and perhaps in other places.
Knowledge is pattern recognition. The more optimal the recognition, the more optimal the knowledge. Such is math. Binary language is simple machine recognition. Fight or flight is simple animal recognition.
So, as the earliest computer inventors sat in their offices they saw a possibility - intelligence, human-like intelligence.
Anyway, how do we know that we (humans) are NOT machines? Add to that the scientific consensus that we evolved by random mutation. Don't you think a conscious effort, like the one we humans are investing in artificial intelligence, will yield "better" results?
The epistemological position is known as Fallibilism. It is core to the Scientific Method and to Critical Rationalism.
The Scientific Method provides a set of rules for what "sensory data" should be obtained, and how this data may be "incorporated", though this is a relatively minor issue to knowledge creation. Knowledge is not obtained by the senses, or by incorporating sensory data.
This works well. How would you explain contradictory knowledge that we possess? We must integrate all the information we have into a consistent whole. Until then, do we really possess knowledge?
This doesn't sound right at all. What form does your knowledge take if not the form of your sensory data? How do you know that you possess knowledge?
Indeed it could. But this we must avoid at all costs! :fear:
Once an AI has the freedom to evolve and improve itself, there is no predicting what it might do. To unleash such a thing into the universe is typically human - when the Curies discovered radium, one of the first things people did with it was drink it as a remedy. Many died of cancers, and the story is barely known! - but it is hugely dangerous and irresponsible. If you agree we should not fill the world with (say) active and uncontained nuclear fuel, then you must also agree that we should not release uncontrolled and unconstrained AIs into the world?
An alternative view is that AIs will be part of our culture, and will in essence be our descendants. We will teach them what we know, and why our value system is crucial to our epistemological methods. If we are kind to them, and nurture them, and help them, why would they hate us?
Then you would also agree that we should control who can release other humans into the world, since bad parenting, or a lack of it, leads to destructive, anti-social behaviors that are unleashed upon the rest of us.
We already have self-updating programs now, and the world hasn't ended. This ability to self-refer and self-alter isn't a problem in and of itself; it's where you place such programs, what you have them do, and what sort of checks there are on them. There were a number of times automated systems nearly caused either the U.S. or the USSR to launch nukes in what they thought was retaliation for an attack the other side had initiated. Luckily, human oversight stopped that.
The whole idea of SkyNet (or whatever movie has nukes controlled by an A.I.; I know Terminator isn't the only one) is really stupid. Keep the A.I. on an isolated network that isn't connected to the outside and isn't responsible for anything that can cause too much trouble (e.g. no access to factories where it can place orders for what is built, if you're making a general-purpose A.I.), and there's really not much to worry about other than a lot of goofs as the system alters its state. Just watch videos of neural nets learning to walk: while it's cool that they can, eventually, do it, it's so obviously not very good in comparison to the real thing right now.
Maybe there are ethical concerns at a certain point but I see 99% of "worries" about A.I. to be overblown or else have very obvious precautionary measures that can be taken.
No, it hasn't. [History: I spent 40 years in electronics and software design.] But these days, internet access, even for the smallest pieces of equipment, is normal. And humans have a history of just doing it, regardless of the fact that we aren't even aware of the potential problems we might cause. Think of the first atomic weapons we exploded. Because we had no idea if what remained was problematic in any way, we sent infantry soldiers into ground zero, and had them roll around in the irradiated and radioactive (as we now know, but we didn't know then) sand to see if any harm came to them. Later, they all died of cancer....
The story is the same for every discovery we ever made. We just go ahead - uncaring of, and oblivious to - any problems. Back to the subject in hand: programs that can make minor and constrained changes to their own stored data (not program code) are common. Programs that can change their own program code are very rare. This isn't because they're difficult to build. The changes such programs make to themselves are not predictable, so the product can't be tested, nor can its future performance be guaranteed. That's the problem. And the more sinister aspect is dependent on (as you say) access to the internet, or the like. But we have already seen, with recent Russian (and maybe Chinese too?) interference in several countries, what hackers can achieve. An unconstrained AI could (in theory) do anything a hacker might do, and maybe more too. Who can predict what an AI, able to modify itself without constraint or safeguard, might get up to?
In theory, at least, there is a real and significant threat from unconstrained AIs, and from Skynet too, under the right (wrong?) circumstances. As people place their homes and lives under over-the-internet control, all kinds of unpleasantness become possible, if not likely.
Quoting TheMadFool
Not to overanalyze this: the parsimonious response is that I'm a living creature, vital; a machine is unliving, dead, non-vital, like a puppet with a long nose. One can project one's aliveness onto a favorite automobile or the internet and feel one relates to it as a living thing, though the truth remains it's not alive in any conceivable way. Why there are people who act as though they would like to be a nonliving thing is an area of great interest for me. What's wrong with being alive, anyway? Is there something wrong with being alive? Consciousness is a burden much of the time; to be sure, this is the challenge we face as intelligent life. While self-limiting consciousness and information is necessary in order to function, to limit consciousness the way a machine must limit its inputs and outputs is tantamount to the instant death of consciousness in organic, intelligent life. One shouldn't seek to function anything like a machine unless one for some reason thinks there's something wrong with being alive.
Quoting TheMadFool
Random: a higher order humans don't understand. What is seen isn't what actually exists, but what remains after exposure to the limitations of a limited profession's questioning. It's perfectly sensible to reject the word "random."
The relationship between the Central Dogma of genetics and its addendum, epigenetics, carries a wealth of mystery as it pertains to evolution. Why not ask what was going on with the epigenetics of our ancestors, instead of focusing on mutations or natural selection? It's hard to say, comprehensively, what the early organism-environment relationship of early hominids was. Certainly the environmental signals they were exposed to were determinants of their genetic expression in ways wholly unlike the manner in which the polluted environment of industrial man determines how his genes are expressed.
Quoting TheMadFool
It depends. There are swarms of virtue questions around the AI enterprise anent human psychology. Maybe the conscious effort isn't as conscious as it seems. Social media is causing rank psychological problems in the human species, but since most people are using this media, any unsalutary effects go unnoticed, which is why these issues are seldom discussed (once awareness reaches social approval it tends to shut down, arriving at average awareness: the bandwagon). It's an argumentum ad populum fallacy leading to socially patterned defects.
AI is a major authority coming on the scene...and one of the main problems with devotion to authority is that it's often associated with copying and imitation (of what is determined or controlled by the authoritative platform).
When people talk of the future they aren't dead serious about it. After all, who can predict so accurately as to be a true prophet? They do it out of curiosity, a basic human tendency, and some have hit the mark, like sci-fi writer Arthur C. Clarke, who predicted satellite communication using geostationary orbits. I'm just saying that it'll be interesting to find out which technological prophet got it right.
Quoting Anthony
There's nothing wrong with being alive. However, it would be wonderful if we could create life, specifically consciousness, and AI is about that. It's interesting to say the least. "Is it possible?" That's a different question, and people will do their bit to answer it.
Quoting Anthony
You may be right but I think scientists understand the process as having no specific teleological pattern and thus call it random.
The real threat isn't that AI would become somehow conscious (or whatever).
The real threat is that we in our ignorance just let simple and pathetic algorithms run our lives and make decisions for us when we should use our own brains.
It's not about computers getting too smart; it's about us getting dumber.
This then leads to the consequence of believing that intelligence is definable in terms of a particular class of algorithm, say deep neural networks with the capacity to react to stimuli the way we do. But this is an illusion, because as far as making predictions is concerned, it is easy to show that, on average, no machine-learning algorithm can solve a randomly created problem better than any other algorithm. See Wolpert's No Free Lunch theorem for a modern reboot of Hume's problem of induction.
So the very definition of intelligence is inductive bias. To say that a process is "intelligent" is merely to say that it is similar to another process, and hence is useful for modelling the other process. So it is perfectly reasonable to call AlphaGo intelligent relative to the problem of Go, since AlphaGo was designed to learn and represent maximum-utility Go sequences, at the cost of relatively lower performance if trained on problems dissimilar to Go.
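The No Free Lunch point can be illustrated at toy scale. Below, averaged over all eight possible labelings of three unseen inputs, any fixed predictor scores exactly 50% off the training set; the two "learners" here are hypothetical stand-ins, not anything from Wolpert's paper:

```python
from itertools import product

inputs = [0, 1, 2]                          # three off-training-set points
targets = list(product([0, 1], repeat=3))   # all 2**3 possible labelings

def avg_offtrain_accuracy(predictor):
    """Average accuracy over every possible target function."""
    hits = sum(predictor(x) == t[x] for t in targets for x in inputs)
    return hits / (len(targets) * len(inputs))

# Two very different predictors: a constant guesser and a parity rule.
def always_zero(x): return 0
def parity(x): return x % 2

print(avg_offtrain_accuracy(always_zero))  # 0.5
print(avg_offtrain_accuracy(parity))       # 0.5
```

Both come out at exactly 0.5: whatever edge a predictor has on some targets, it gives back on the complementary ones, so "better" only means better relative to a biased subset of problems.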
In large technological places, we are deemed intelligent because we can make complex devices, homes, transports, etc.
But how many of us can survive out in the jungle? How many of us know the difference between edible and poisonous plants or animals? How many of us can see into a forest and know what animal was there, where it went, size and smell?
But artificial intelligence is basically a collection of human knowledge: acquiring information without effort, just copying and pasting it into the system. Isn't that similar to humans, though? A baby learns to walk by copying, right? To talk, they copy sounds?
This is when the line gets REALLY blurry...
So if a robot had to learn something step by step, with repeated actions... does that make it smart?
No, it isn't. It's about us giving them too much free rein to direct themselves, then wondering why they did something we didn't expect or want....
Indeed. What do you think of this: Robo-grading. Dumb or no?