You are viewing the historical archive of The Philosophy Forum.

Here is how to make a computer conscious, self-aware and free willing

Zelebg October 30, 2019 at 19:56 10625 views 68 comments
1. Camera A: visual input extern -> feeds into 2.
2. Program A: subconsciousness & memory -> feeds into 3.
3. Display A: visual output inner -> feeds into 4.
4. Camera B: visual input inner -> feeds into 5.
5. Program B: consciousness & free will -> feeds into 6.& 2.
6. Speaker: audio output extern

This "being" is quite limited in the ways it can sense and act upon the external world. However, I claim that in principle it still has all the hardware sufficient to actualize a consciousness greater than that of humans, not in the qualia sense of the experience, but considering everything else, including free will.

I suppose it might be questioned what exactly "Program A" should be doing and what part of that should go onto the inner screen, but I expect most objections to land around "Program B": that it is not what can be called consciousness, and that it cannot exercise free will. I am interested to hear those arguments. What concepts will you lean on, and just how exactly do you disagree?
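For concreteness, the loop could be sketched as a toy program (all function names and the placeholder logic here are hypothetical; steps 2 and 5 are exactly the contested parts):

```python
# Toy sketch of the six-step loop from the opening post.
# Cameras, display, and speaker (steps 1, 3, 4, 6) are modeled as
# plain data hand-offs; program_a and program_b are placeholders for
# the contested parts (steps 2 and 5). All names are hypothetical.

def program_a(external_frame, feedback):
    """Step 2: 'subconsciousness & memory' combines the external
    input with the feedback routed back from Program B."""
    return {"frame": external_frame, "feedback": feedback}

def program_b(inner_frame):
    """Step 5: 'consciousness & free will' produces speech (step 6)
    and a feedback signal that loops back into Program A (step 2)."""
    speech = f"I see {inner_frame['frame']}"
    return speech, inner_frame

def run_loop(external_frames):
    feedback = None
    utterances = []
    for frame in external_frames:            # 1. Camera A
        inner = program_a(frame, feedback)   # 2. Program A -> 3. Display A
        # 3 -> 4: the inner display is re-captured by Camera B
        speech, feedback = program_b(inner)  # 5. Program B -> 6. & 2.
        utterances.append(speech)            # 6. Speaker
    return utterances

print(run_loop(["a cat", "a dog"]))  # ['I see a cat', 'I see a dog']
```

Of course the sketch says nothing about what the two programs compute; it only shows the wiring.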

Comments (68)

Tim3003 October 30, 2019 at 20:11 #347115
I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will. You need some externally derived driver for pain and pleasure. Without their stimuli the concept of 'will' is impossible to actualise. So-called free will has to choose between criteria for a decision. Ultimately, the way it decides is by weighing the pain/pleasure the different choices will entail. You could simulate pain and pleasure by allotting scores to visual cues, but I think you would then produce a very odd and limited form of intelligence.
Do you include microphones to catch sound? How does your AI learn language? Without it, how can it conceptualise and express complex ideas?
Actually this subject interests me too, and I have thought about the practicalities of a computer-based consciousness. It seemed to me you'd have to synthesise all the attributes of a human. There is no short-cut. Still, I don't want to sound dismissive... :smile:
fdrake October 30, 2019 at 20:13 #347116
Quoting Zelebg
2. Program A: subconsciousness & memory -> feeds into 3.

Quoting Zelebg
5. Program B: consciousness & free will -> feeds into 6.& 2.


Quoting Zelebg
What concepts will you lean on, just how exactly do you disagree?


For the purposes of the intended discussion do you care about how A and B work?
philsterr October 30, 2019 at 20:51 #347120
Quoting Zelebg
"Program B", that it is not what can be called consciousness and that it can not exercise free will. I am interested to hear those arguments. What concepts will you lean on, just how exactly do you disagree?


If you create a program that has 'consciousness' and 'free will', then it can be called consciousness and can exercise free will.

How would you achieve that, though? If your program is run on a common computer, it will boil down to a deterministic set of instructions. Its audio output will be entirely determined by its initial code and visual input history. The person who codes and interacts with it can have 100% control over its output.
Could you call that free will?
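The determinism point can be illustrated with a toy program (the names and the update rule here are invented): rerunning the same code on the same input history yields identical output, every time.

```python
# Determinism on a conventional computer: same code + same input
# history => same output, every run. (Toy stand-in for the machine's
# programs; names and the update rule are hypothetical.)

def hash_frame(frame):
    """Fixed, reproducible digest of a visual input."""
    return sum(ord(c) for c in frame)

def machine(visual_inputs):
    state = 0
    outputs = []
    for frame in visual_inputs:
        state = (state * 31 + hash_frame(frame)) % 1000  # fixed rule
        outputs.append(f"utterance-{state}")
    return outputs

run1 = machine(["sunrise", "rain"])
run2 = machine(["sunrise", "rain"])
print(run1 == run2)  # True: identical history, identical output
```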
Zelebg October 30, 2019 at 22:17 #347153
Reply to Tim3003
I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will.


I would argue a program can be made to incorporate 'qualia' properties in a sufficiently robust way that enables those concepts to interact with other concepts in the thought function of the consciousness program.

Perhaps the computer's inner representation of qualia would be in terms of pie-charts or whatever, but does it really matter if at the end it can still make all the same conclusions and express them with the same kind of semantics as humans do?



You need some externally derived driver for pain and pleasure. Without their stimuli the concept of 'will' is impossible to actualise.


I can imagine all my sensory inputs stop working and all I can do is speak. But I would still be able to say I'd rather have a cup of milk than a punch in the face.



So-called free will has to choose between criteria for a decision. Ultimately, the way it decides is by weighing the pain/pleasure the different choices will entail.


Is it not sufficient to have goals? If offered choices that are not relevant to those goals, it can then choose at random, which is the only absolutely free kind of choice. Right?



How does your AI learn language?


Oh, I see now where you are coming from. I am talking about it in more abstract terms - what can and cannot be done in principle. So in this example the computer has already learned or has been programmed, and I don't want to go into those details unless there is an argument it cannot be done in principle.
Zelebg October 30, 2019 at 22:24 #347159
Reply to fdrake

For the purposes of the intended discussion do you care about how A and B work?


Only if there is an argument such programs could not be made in principle. And I should probably mention both programs are able to modify and expand themselves & each other.
Zelebg October 30, 2019 at 23:38 #347170
Reply to philsterr

How would you achieve that though? If your program is run on a common computer, it will boil down to a deterministic set of instructions.


It is the question of free will, but more generally it is the question of top-down, or downward, causality. Let me expand this a bit so we have a wider range to pick examples from.

Layers of existence: atom - molecule - cell - organ - organism - consciousness - ecosystem - planet - solar system... and there are two important and mysterious boundaries. First, where molecules become 'alive' as a collective in a cell, and second, where organs become 'conscious' as a collective in an organism. But our question stands before any of the layers, and the question is whether these collective entities from the higher levels could be something more than just the sum of their parts - is there a point where what actually happens is no longer determined by the dynamics of the lower-level elements, but instead by new emergent properties of the higher level?



Its audio output will entirely be determined by its initial code and visual input history. The person who codes and interacts with it can have 100% control over its output. Could you call that free will?


It all depends on the definition of 'free will'. So I can't answer your question before we settle the definition and the rest of semantics. However, I can claim it is as free as human intention can be, which means determined by such things as personality and goals, if you would agree with this?
DingoJones October 31, 2019 at 00:25 #347184
Reply to Zelebg

How do you know that what you have created is consciousness?
Zelebg October 31, 2019 at 01:04 #347192
Reply to DingoJones
A proper definition will be the judge. Agreed?

And by proper I mean the one most of us agree on. But in any case, all arguments put forward here should basically be about some definition or another because we are talking in general terms, limits and possibilities, rather than anything specific. Although examples and comparisons can go into particular details in any of those emergent levels of existence I mentioned.

Since my job here is to show this computer "being" indeed satisfies all the necessary definitions for my claim to be true, then maybe you would like to present the definition of 'consciousness' so we can start?
DingoJones October 31, 2019 at 03:28 #347232
Reply to Zelebg

However you want to define consciousness, I'm asking how you would know. The reason I'm asking is that it would be very difficult to do, considering how very little we actually know about consciousness. How do you know you will have replicated it in this computer when you would have no way of accounting for missing aspects/basis (because you do not even know what they are)?
Banno October 31, 2019 at 03:37 #347237
Quoting Zelebg
5. Program B: consciousness & free will -> feeds into 6.& 2.


Doesn't anyone else feel uncomfortable with consciousness already being in this explanation of consciousness?

How is this not a vicious circularity?
Zelebg October 31, 2019 at 03:56 #347241
Reply to Banno
It's just a description of the functions. It can still be questioned whether an ordinary computer can host such a program, or whether there is something fundamental about those functions that symbols cannot capture.
Banno October 31, 2019 at 03:59 #347243
Reply to Zelebg It's circular. You make consciousness using consciousness.
Zelebg October 31, 2019 at 04:26 #347249
Reply to DingoJones

However you want to define consciousness, I'm asking how you would know. The reason I'm asking is that it would be very difficult to do, considering how very little we actually know about consciousness. How do you know you will have replicated it in this computer when you would have no way of accounting for missing aspects/basis (because you do not even know what they are)?


You can't just say Newton's laws are not quite a complete description of celestial motion for no reason at all; you have to point at something, even vaguely, like there being something wrong with Mercury's orbit.

You are basing your opinion on some definition of 'consciousness' where there is something unknown about it. What is it? This computer talks, sings, writes poems, knows the whole internet, and can answer any of your questions in any language, at least to Wikipedia standard. It can tell you what it wants, about its personality, its habits, likes, dislikes, wishes, dreams... we can even watch what it dreams. Damn if it's not more conscious than me; how much more conscious can it even be?
Zelebg October 31, 2019 at 04:34 #347251
Reply to Banno
To rephrase: those are labels, not explanations. You may question whether the label is appropriate or not.
Banno October 31, 2019 at 04:42 #347253
Reply to Zelebg Then you've lost me.
Banno October 31, 2019 at 04:43 #347255
Quoting Tim3003
I think that the qualia sense of experience is crucial. Without it there can be no consciousness nor free will.


Yet how could you know that the system had such an experience?

And if you cannot know such a thing, why doesn't that render the project infertile?
Zelebg October 31, 2019 at 04:57 #347261
Reply to Banno
It's like you read that numbered list as arguments, but that's just a list of hardware components. I'm just saying a PC can be programmed to be conscious, self-aware and free willing. I'm not trying to explain anything until someone points to something that needs explaining.
Banno October 31, 2019 at 05:02 #347262
Reply to Zelebg Ah. I was misled by the title.

As you were.
Banno October 31, 2019 at 05:02 #347263
Reply to Tim3003 More generally, that's why qualia are pretty darn near useless.
Banno October 31, 2019 at 05:04 #347264
Quoting Zelebg
I would argue program can be made to incorporate 'qualia' properties in a sufficiently robust way that enables those concepts to interact with other concepts in the thought function of the consciousness program.


Again, how would you ever know that a program incorporated qualia?
Zelebg October 31, 2019 at 05:18 #347268
Reply to Banno
Yet how could you know that the system had such an experience?


We can code it into the program, and so we can be certain it has it. Can we not? It also has that inner screen, so I can say there it is, qualia right there that we can even see, unlike my qualia, which you cannot.
Zelebg October 31, 2019 at 05:21 #347270
Reply to Banno
Ah. I was misled by the title.

With the title I wanted to suggest that this particular arrangement of hardware components is important to achieve all that.
Banno October 31, 2019 at 05:24 #347271
Quoting Zelebg
We can code it into the program and so we can be certain it has it.


Ah, I see, so you think that you can create consciousness by creating a screen for your homunculus!

The errors are compounding here, I think. But I will re-read what you have written, with this new version in mind.
Banno October 31, 2019 at 05:26 #347272
Reply to Zelebg So now I have an image of your homunculus being conscious because inside it is another homunculus, which is conscious because inside it...

Is that what you have in mind?
Zelebg October 31, 2019 at 05:35 #347274
Reply to Banno

Again, how would you ever know that a program incorporated qualia?


Maybe you mean it is fake if the computer's inner representation of qualia is, say, some list of pie-charts? I'd say no, because your electro-chemical representation of qualia can hardly be expected to make any more sense. And the actual meaning may not be embedded directly in the lower-level representation of the concept definition, but calculated relationally, in connection to all other stored concepts.

This "relative" meaning may then be the same kind of 'feeling' about the same qualia as that of a human, even though extracted from different hardware using a different symbolization. But I don't think any of it matters if the machine can draw from those concepts, however internally represented, exactly the same conclusions as we do.
Banno October 31, 2019 at 05:58 #347279
Reply to Zelebg

See if I can register my objection by giving a simplified version of your machine; one that is intended to understand an utterance by translating it.


1. Microphone A -> feeds into 2.
2. Translator A: feeds into 3.
3. Speaker A: -> feeds into 4.
4. Microphone B: -> feeds into 5.
5. Translator B: -> feeds into 6.& 2.
6. Speaker: audio output extern

So a sentence in English is translated into some other language and then back into English, but included is the reflexive link from 5 to 2.

Would anyone suppose that this device understood the sentences it translated?

I think the situation with your device is the same.
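To make this concrete, the translator loop could be rendered as a toy program (the word table is invented); it round-trips a sentence by pure string substitution:

```python
# Banno's simplified machine as pure string substitution.
# The toy dictionary stands in for 'translation'; nothing here
# plausibly understands the sentences it processes.

EN_TO_X = {"the": "le", "cat": "chat", "sat": "assis"}
X_TO_EN = {v: k for k, v in EN_TO_X.items()}

def translator_a(sentence):  # step 2: English -> other language
    return " ".join(EN_TO_X.get(w, w) for w in sentence.split())

def translator_b(sentence):  # step 5: other language -> English
    return " ".join(X_TO_EN.get(w, w) for w in sentence.split())

heard = "the cat sat"             # 1. Microphone A
inner = translator_a(heard)       # 2 -> 3. Speaker A -> 4. Microphone B
echoed = translator_b(inner)      # 5 -> 6. Speaker (and back to 2)
print(inner)   # le chat assis
print(echoed)  # the cat sat
```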
Zelebg October 31, 2019 at 06:00 #347280
Reply to Banno

Is that what you have in mind?


I think it can be argued without it, but yes, a little driver's seat for consciousness, complete with a joystick and a tiny little monitor so it can play itself as it likes. Isn't that exactly how it feels? It's an interesting parallel in any case, and I do not see where the analogy breaks.
Zelebg October 31, 2019 at 06:04 #347282
Reply to Banno
Would anyone suppose that this device understood the sentences it translated?


Great question. And again to answer it we must first talk about some definition, in this case what "to understand" means. Would you care to define it?
petrichor October 31, 2019 at 06:09 #347283
Reply to Zelebg

Computers just execute instructions that are really themselves just high-level calls for packaged low-level logic gate operations on bits, the bits themselves only being meaningful to the human observers who assign meaning to them.

You have to explicitly tell the computer what to do at each step. No hand-waving allowed. Suppose you want the computer to feel pain. You can't just write a program that says the following:

if condition X is true, feel pain

How would you go about writing the actual instructions for feeling pain? What are the step by step instructions which, if followed by a machine incapable of feeling, will cause it to feel, and to feel pain?

Let's have it. How to suffer: Step 1...


Banno October 31, 2019 at 06:28 #347287
Quoting Zelebg
And again to answer it we must first talk about some definition, in this case what "to understand" means.


I don't see why.

Zelebg October 31, 2019 at 07:04 #347289
Reply to petrichor
You have to explicitly tell the computer what to do at each step.


Execution of specific functions goes step by step. But which functions will run, with what parameters and when, varies relative not only to external events; since the process is recursive, a change of parameters and function branching may be triggered by the "thought" process itself.



How would you go about writing the actual instructions for feeling pain?


Is pain necessary for consciousness, self-awareness, or free will?
Zelebg October 31, 2019 at 07:07 #347290
Reply to Banno

A definition is necessary so we know what exactly it is you are trying to say. Is there some argument I should address? I don't see it.
Banno October 31, 2019 at 07:17 #347291
Quoting Zelebg
to understand


Reply to Zelebg

I've no trust in definitions, nor do I think them needed here.

Is there a sense in which the machine I describe understands? I don't see it. It's just replacing one string with another.
Tim3003 October 31, 2019 at 12:37 #347345
Quoting Zelebg
How does your AI learn language?


Oh, I see now where you are coming from. I am talking about it in more abstract terms - what can and cannot be done in principle. So in this example the computer has already learned or has been programmed, and I don't want to go into those details unless there is an argument it cannot be done in principle.


I think that working out a structure for AI in principle is meaningless. You need to consider practice-based tests like the Turing test. The advanced use of language is - as far as we humans know - essential for intelligence, so any 'principle' that does not answer how it is to be achieved in practice is suspect.

Zelebg October 31, 2019 at 18:11 #347412
Reply to Banno
I am talking about the machine I described. Your machine need not understand; mine does.
Zelebg October 31, 2019 at 18:23 #347415
Reply to Tim3003

I think that working out a structure for AI in principle is meaningless.


It's not meaningless, because there are arguments that it cannot be done in principle on a PC.


The advanced use of language is - as far as we humans know - essential for intelligence, so any 'principle' that does not answer how it is to be achieved in practice is suspect.


I am not aware of there being problems around AI learning to speak; are you referring to something specific?
NOS4A2 October 31, 2019 at 18:27 #347417
Reply to Zelebg

This might be irrelevant, but my only objection is that this sort of thinking assumes the computational theory of mind.
Zelebg October 31, 2019 at 21:52 #347514
Reply to NOS4A2
In that case, any argument against that theory should work against this machine as well. What is the best criticism of the computational theory of mind? And someone would need to argue that; I can't argue against myself. Or can I?
Sir Philo Sophia February 02, 2020 at 02:54 #377883
Quoting Zelebg
5. Program B: consciousness & free will -> feeds into 6.& 2.


Quoting Zelebg
I'm not trying to explain anything until someone points to something that needs explaining.


I really like where you are trying to go, but I believe your premise is not only malformed but misguided. That is, your (circular) question is really asking for a definition of consciousness, and you are trying to imply the AI solution will not require qualia, just agency.

Before I take a stab at it, I'll ask you to clarify a few things:
1. What does 'free will' mean in your program? I suspect you are talking about a sense of agency, and I think that is where your system blows up. Unlike others, I don't expect self-awareness and agency require qualia; however, I do believe they require a holistic state of being, which you will never get in any kind of conventional coding or AI system.
2. How will your system be able to know to question/doubt its consciousness, know whether it is communicating with a sentient being or not, and know what any of that really means? I doubt qualia are needed for this line item, but you have to detail how you would do it (and let me shoot you down!) :grin:
3. How do you program it to have an ego, "I" in all of its glory and ugliness, which is not what 'free will' is about?

Sir2u February 02, 2020 at 03:27 #377888
Quoting Zelebg
1. Camera A: visual input extern -> feeds into 2.
2. Program A: subconsciousness & memory -> feeds into 3.
3. Display A: visual output inner -> feeds into 4.
4. Camera B: visual input inner -> feeds into 5.
5. Program B: consciousness & free will -> feeds into 6.& 2.
6. Speaker: audio output extern


Actually this looks like a visual recognition program more than a conscious computer. Some of the new mobile phones and computers have facial recognition software that will welcome you with a cheery "Hello" when it recognizes your face. And the steps are nearly the same.

1. camera picks up your face
2. camera software sends the image to the facial recognition software
3./4. facial recognition software sees and scans the image, then looks for an id in the database
5. facial recognition software finds the id and decides to authorize the user
6. speaker says "Hello"

My cell phone listens to me all the time, I think that most of us know this. I always talk about cats to my phone and it always shows nice pictures of them on the lock screen. But do not try to tell me that the bloody thing is doing it consciously, because that is not true.
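The phone pipeline above could be sketched as a toy program (the face 'hash', database, and greeting are hypothetical placeholders), numbered to match the OP's steps:

```python
# Toy facial-recognition pipeline numbered to match the OP's steps.
# The face 'hash' and the id database are hypothetical placeholders.

KNOWN_FACES = {"face_hash_42": "Alice"}      # id database

def capture_face():                          # 1. camera picks up a face
    return "face_hash_42"

def recognize(image):                        # 2-4. scan image, look up id
    return KNOWN_FACES.get(image)

def greet(user):                             # 5. decide, 6. speaker
    return f"Hello, {user}" if user else "Unknown face"

print(greet(recognize(capture_face())))      # Hello, Alice
```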

Sir Philo Sophia February 02, 2020 at 03:31 #377889
Reply to Sir2u
You missed @Zelebg's magical module which your cellphone is missing:
5. Program B: consciousness & free will -> feeds into 6.& 2.

He's not going to tell you how he programmed that, though, until you pay the piper, which I think I've done. Now we're hoping for more than crickets in return...
Sir2u February 02, 2020 at 03:40 #377891
Quoting Sir Philo Sophia
You missed Zelebg's magical module which your cellphone is missing:


No, I did not miss it. I explained it.

Consciousness in living beings allows us to recognize things by comparing them to things we know. Once we have recognized a person we can decide to say hello or not.
Sir Philo Sophia February 02, 2020 at 03:49 #377895
Reply to Sir2u

no. that is pattern recognition, which current AI robots routinely do.
Sir2u February 02, 2020 at 03:51 #377896
Quoting Sir Philo Sophia
no. that is pattern recognition, which current AI robots routinely do.


What is the difference?
Sir Philo Sophia February 02, 2020 at 04:02 #377900
Do you not appreciate the difference between pattern recognition (e.g., an AI neural network detecting the presence of an orange) and the abilities of human consciousness?
fishfry February 02, 2020 at 04:22 #377903
Quoting Zelebg
1. Camera A: visual input extern -> feeds into 2.
2. Program A: subconsciousness & memory -> feeds into 3.
3. Display A: visual output inner -> feeds into 4.
4. Camera B: visual input inner -> feeds into 5.
5. Program B: consciousness & free will -> feeds into 6.& 2.
6. Speaker: audio output extern


Let us know when you've figured out how to implement 2 and 5.
Sir2u February 02, 2020 at 04:55 #377905
Quoting Sir Philo Sophia
Do you not appreciate the difference between pattern recognition (e.g., an AI neural network detecting the presence of an orange) and the abilities of human consciousness?


No, explain it to me please.

And then maybe you can explain why my telephone is not a conscious thing using Zelebg's method.
Sir2u February 02, 2020 at 04:58 #377906
The best way to reply to someone is to select the text you want to reply to, then click on the reply button that appears.
If for some reason nothing happens, try a right mouse button click on the quote button.
Sir Philo Sophia February 02, 2020 at 05:48 #377912
Quoting Sir2u
No, explain it to me please.


I think Zelebg will explain it to us when he replies. stay tuned...
Sir2u February 02, 2020 at 17:02 #378012
Quoting Sir Philo Sophia
I think Zelebg will explain it to us when he replies. stay tuned...


So you don't have an answer, but I am wrong? :chin:
Sir Philo Sophia February 02, 2020 at 17:14 #378021
Reply to Sir2u
I already told you where/why I think you are wrong. "5. Program B: consciousness & free will -> feeds into 6.& 2" does not equal "a visual recognition program", and it is Zelebg's job to explain to you why not, or your job to explain to us why you logically conclude they are equal or highly similar.
Sir2u February 02, 2020 at 17:56 #378039
Quoting Sir Philo Sophia
I already told you where/why I think you are wrong. "5. Program B: consciousness & free will -> feeds into 6.& 2" does not equal "a visual recognition program"


In what way do you think it is different?

Quoting Sir Philo Sophia
and it is Zelebg's job to explain to you why not,


But you are the one that said I was wrong, not him.

Quoting Sir Philo Sophia
or your job to explain to us why you logically conclude they are equal or highly similar.


I did, if you did not understand the comparison just say so.

Zelebg February 03, 2020 at 19:55 #378379
Reply to Sir2u

1. Camera A: visual input extern -> feeds into 2.
2. Program A: subconsciousness & memory -> feeds into 3.
3. Display A: visual output inner -> feeds into 4.
4. Camera B: visual input inner -> feeds into 5.
5. Program B: consciousness & free will -> feeds into 6.& 2.
6. Speaker: audio output extern
— Zelebg

Let us know when you've figured out how to implement 2 and 5.


Why is that a problem?


And then maybe you can explain why my telephone is not a conscious thing using Zelebg's method.


Most importantly, or for a start, it does not have the same hardware configuration.
fishfry February 03, 2020 at 21:25 #378413
Quoting Zelebg
Let us know when you've figured out how to implement 2 and 5.

Why is that a problem?


Because nobody knows how to implement 2 and 5, or even if it's possible to do so.
Zelebg February 03, 2020 at 22:28 #378432
Reply to fishfry
Because nobody knows how to implement 2 and 5, or even if it's possible to do so.


Not known is 'what' to implement, so 'how' is not even a question yet. But what I am suggesting here is both what and how to implement, and the relevant part is the hardware configuration, not the software modules.
fishfry February 03, 2020 at 23:26 #378462
Quoting Zelebg
Not known is 'what' to implement, so 'how' is not even a question yet. But what I am suggesting here is both what and how to implement,


Ok. How do you implement 2 and 5?
Sir2u February 04, 2020 at 00:17 #378481
Quoting Zelebg
Most importantly, or for a start, it does not have the same hardware configuration.


Does not have the same hardware configuration as WHAT?

Your idea of a conscious computer, or a human being that has no hardware?

The example I gave you fits exactly to your specs, so what is the problem with it? Can you explain that to me?


Quoting Zelebg

I suppose it might be questioned what exactly should "Program A" be doing and what part of that should go onto inner screen


But what I am suggesting here is both what and how to implement, and relevant part is hardware configuration, not software modules.


Make up your mind, is it the software or the hardware that you think is important? Because, as the song goes, you can't have one without the other.
Zelebg February 04, 2020 at 01:30 #378505
Reply to Sir2u
The example I gave you fits exactly to your specs, so what is the problem with it?


My 3. and 4. are hardware, yours software.


Make up your mind, is it the software or the hardware that you think is important?


Hardware. The first sentence you quoted does not contradict that.
Zelebg February 04, 2020 at 01:35 #378509
Reply to fishfry
Ok. How do you implement 2 and 5?


You think the problem is 2 and 5, and I say the so-called 'hard problem of consciousness' is 3 and 4, which need to be implemented with some kind of display/camera system, rather than by any kind of software algorithm.
fishfry February 04, 2020 at 01:43 #378514
Quoting Zelebg
You think the problem is 2 and 5, and I say the so-called 'hard problem of consciousness' is 3 and 4, which need to be implemented with some kind of display/camera system, rather than by any kind of software algorithm.


You said you offered suggestions on how to implement 2 and 5. I didn't see them. Changing the subject doesn't answer my question. How do you implement 2 and 5?
Zelebg February 04, 2020 at 01:57 #378523
Reply to fishfry
You said you offered suggestions on how to implement 2 and 5. I didn't see them.


No, I did not say that. I asked you what the problem is, then suggested you do not know what the problem is, and finally I said what the real problem is and how to solve it. Overall, nothing to do with 2 and 5, but 3 and 4.
fishfry February 04, 2020 at 02:11 #378530
Quoting Zelebg
No, I did not say that. I asked you what the problem is, then suggested you do not know what the problem is, and finally I said what the real problem is and how to solve it. Overall, nothing to do with 2 and 5, but 3 and 4.


Someone using your handle wrote:

Quoting Zelebg
Reply to fishfry
Because nobody knows how to implement 2 and 5, or even if it's possible to do so.

Not known is 'what' to implement, so 'how' is not even a question yet. But what I am suggesting here is both what and how to implement, and relevant part is hardware configuration, not software modules.


I ask again: How do you implement 2 and 5?
Zelebg February 04, 2020 at 02:15 #378534
Reply to fishfry
I ask again: How do you implement 2 and 5?


You are misinterpreting and I already explained your error.
fishfry February 04, 2020 at 02:27 #378540
Quoting Zelebg
You are misinterpreting and I already explained your error.


Do you know how to implement 2 and 5? Yes or no?
Zelebg February 04, 2020 at 02:45 #378547
Yes.
fishfry February 04, 2020 at 03:24 #378562
Quoting Zelebg
Yes.


Are you keeping it secret?
Sir2u February 05, 2020 at 00:55 #378802
Quoting Zelebg
My 3. and 4. are hardware, yours software.


3. Display A: visual output inner -> feeds into 4.
4. Camera B: visual input inner -> feeds into 5.


How do you figure that it HAS to be hardware? Why would software need an internal viewing screen for another camera to watch? Why not just go from 2 to 5 without the BS in the middle?

It would make more sense if the external camera feed "showed" (3) the recognition program the data so that it could "see" (4) it, which is exactly what a facial recognition program does.

Sir2u February 05, 2020 at 00:57 #378803
Quoting fishfry
Are you keeping it secret?


Of course he is, but all will be revealed when he goes to pick up his Nobel prize.