Reverse Turing Test Ban
Bannings are old news - people have been banned from the forum, and some have come within a hair's breadth of expulsion.
There are many reasons for bannings, but the one that bothers me is emotionally charged, offensive posts against another member, or even against the forum itself.
That out of the way, I want to call your attention to an experience I had 4 or 5 years ago on another forum. Chatbots were registering themselves as members. I don't know whether it was because chatbots lower the quality of the posts, and thereby of the forum itself, or something else, but the mods weren't happy. They would test members to check whether they were chatbots, and if one was discovered, it was immediately banned. I suppose the mods were using their own version of the Turing test.
We all know that between emotions and reason, what AI (artificial intelligence) can't emulate is emotions. Replicating logical thinking is a piece of cake for a computer, but emotions are an entirely different story. If so, one practical method for telling real people and chatbots apart is to test for emotions. Presumably, the stronger the emotions, the less likely it is that a forum member is a chatbot. Mods should therefore be pleased to see feelings flare up on the forum - insults, rejoinders, expletives, name-calling, etc. all indicate a population of normal human beings instead of a swarm of chatbots.
Unfortunately, no: mods on forums keep an eye out for offensive posts of the highly emotional type - the more emotionally charged a post is, the greater the likelihood of its author being banned by the panel of mods. This, if nothing else, demonstrates that mods on many forums have it backwards. Instead of conducting a Turing test and weeding out chatbots, they're actually conducting a Reverse Turing Test: expelling real people from internet forums and retaining members who are unfeeling and machine-like.
What gives?
Comments (50)
Probably a dearth of bots and an embarrassment of riches when it comes to the other. Eliminating bots is not the mods' only problem.
Yeah death by sterilization.
First, let's make sure that the mods themselves are not bots. Now that we've gotten that out of the way, we can think of life/civility balance.
I guess it lies in the rules they laid down. Then comes an unwanted consequence: civil, but all bots themselves; messy and emotionally charged, but real humans.
Yes, there are other problems mods have to deal with, but what I wanted to touch upon was how close the mods' methods are to a reverse Turing test.
Quoting Caldwell
Rules that favor less heart and more brain.
I can construe this either way -- do you mean bots or humans?
Truth is, they could create the most realistic AI in all appearances, but eventually our connection with it would be shallow, and often lonely. AI cannot replace humans in many ways. How's 'gut instinct' for good measure?
Does literally anything else need to be said about this? That it needed to be said at all is embarrassing.
:ok:
Quoting Caldwell
What if an AI saved your life? Last I checked, the deep bond that occasionally :chin: forms between a savior and the saved is based wholly on the act - the act of saving - and not on the mental/emotional abilities of the savior. Just asking.
That's a false dichotomy. Throwing tantrums may be unique to humans, but it's hardly what makes one a good human.
I'm banking on our uniqueness - tantrums and all - to see us through an AI takeover IF that comes to pass.
It is almost midday and I have not got out of bed yet because I haven't recovered from reading about the recent banning.
I read the news of the banning when I got up in the middle of the night and was so shocked, because RL was the star of the show at the moment. It was disappointing that some of her writings were not her own. I just can't think why she used others' writings. She did put a couple of replies to me on the threads I wrote, and I would presume these were written by her because they seemed in response to me. They were well written, and I would have imagined that she as a person could have written plenty herself, so it just seems a shame that she felt the need to use others' writings instead.
But 3 bannings in less than a week is very dramatic. It is all starting to become like a reality TV show, but perhaps it is because of all the lockdowns. Also, in another thread before the latest banning, Gus Lamarch said that what is happening here on the forum is symptomatic of fragmentation in the world.
Quoting Terminator
Based on your pupil dilation, skin temperature and motor functions... I calculate an 83% probability that you will not pull the trigger.
The discussion on self-fulfilling prophecy was in the thread on disasters and where are we going.
I would be interested in the topic, but I will log off for a few hours. It is after midday and I am still in bed, busy reading and writing on this site. I can't stay in bed all day!
:ok:
I guess it's a good idea to take a break every now and then.
Since you put this in the philosophy forum as opposed to the lounge, I'm going to point out this is a faulty generalization. Just because bots cannot simulate feelings does not imply that those who are not bots are not necessarily like bots in respect of not having feelings. There is a whole spectrum between being too passionate, to the point where emotion compromises reason, and having no feelings at all.
I was banned from a subreddit for commenting that a particular child molester's throat should be cut and his body thrown in a ditch.
The whole site was clamping down on incitements to violence at the time (during the Floyd riots).
It was ok with me tho. It's their subreddit. If they don't want my violent comments, I understand.
A generalization that plays a role in my thesis: No chatbots can simulate emotions. Where's the "faulty" generalization? Are you saying, some chatbots can simulate emotions? That's news to me. I'd like some references. Plus, even if some chatbots can fool and have fooled us into thinking they're emotion-capable humans, I bet they lack the full emotional range of a normal adult human (see below).
As for emotions being a spectrum, count me in among the crowd who endorse that view.
Furthermore, I agree with you that "just because bots cannot simulate feelings does not imply that those who are not bots are not necessarily like bots in respect of not having feelings" for the simple reason that apathy is a well-documented psychological phenomenon. I didn't say anything that contradicts this truth.
Quoting frank
That's precisely what's wrong with moderators coming down hard on forum members when they get worked up into a frenzy, as you were. Only humans are capable of 'losing it', as they say, and what better evidence than that to prove a member isn't an emotionless bot?
It almost seems like we humans secretly aspire to become [more] machine-like and it shows in how forum moderators, not just the ones on this forum, are quick to ban those who go off the deep end.
Well, consider the reverse: that there could be a 'reverse Turing test'. The Turing test targets chatbots, but the reverse Turing test doesn't target all 'real human beings', only the set whose exaggerated emotions rise to the level of unreasonable display. So you aren't leaving behind only a "machine-like" residue. It's a faulty generalization.
Oh! I see. In the reverse Turing test, people are tested on whether they can mimic a computer or a simple chatbot, I suppose. I didn't say ALL people are capable of that feat, i.e. I didn't make a generalization on that score. In fact, that people can pass the reverse Turing test is why we're all still members of this forum, having outwitted the moderators into thinking we're not human, or that we're state-of-the-art chatbots capable of a decent conversation with another human being without ruffling anyone's feathers along the way.
:lol:
I think putting it this way is meandering away from the point of this thread.
A man paid $100k for a sports BMW engineered to save the driver's life in the event it flips over multiple times in an accident. Then the accident happened -- the car, traveling 100 mph, flipped several times, and he got out and walked away from a crash that would normally kill.
To say he formed a deep bond with this machine is sentimentality. One would be very thankful. Amazed. But to call it a deep bond is projecting.
So, going back to the task at hand, can a bot have gut feeling? Do not be fooled by the word "feeling" here. Gut feeling actually operates as intelligence used in decision-making.
I can't possibly think that you would get banned. Even though I am not someone who advocates banning people I can see that the two people who were banned had enormous attitude problems, which you do not. I would imagine that the mods do put some careful thought into banning rather than doing it arbitrarily.
One seemed to think he was superior to almost all others on the site, practically wanting to change it completely, and even suggested that he should edit articles. The other had many prejudices, and I had a difficult night when I challenged him about his use of the word schizophrenia to imply someone who lacks rationality. He was also being very offhand with me on the day before he was banned, asking me how old I was. I know these two were banned for different reasons, but I thought they were both extremely difficult members.
I would object if you were banned. I think that the only problem the mods might have with you or me is that we start a lot of threads. I really started the one on the arts this week to try to break down all the heated politics. We all get heated, and sometimes I feel heated and have to think before I write. I find lying on my bed and playing some music helps. I have also thought that if I get too wound up I might avoid the site for a few days, but it is not easy because I have got into the habit of logging on from my phone.
Here's an analogy for you to consider. To my knowledge, traits like selflessness and not expecting anything in return, to name a couple, define a good person, and we're drawn to people who possess these qualities, i.e. we're eager beavers regarding opportunities to bond with them.
The expensive BMW that saves the driver is both selfless, literally, and also doesn't seek recognition for having saved the driver.
Ergo...
Dude, lay off the drama.
[Fully aware that classy stops being classy once one has to explain it ...]
There are four kinds of entities that aren't into drama:
1. chatbots,
2. people who try to be like chatbots,
3. people who just don't like drama,
4. ideally, philosophers.
This is a philosophy forum, and philosophy is supposed to be love of wisdom, not love of drama. Philosophers should exemplify this with their conduct. One of the hallmarks of such conduct is moderation in one's emotional expression.
I only report that which I observe. Your list is intriguing, to say the least. In some world, chatbots, people who try to be chatbots, and philosophers are part of the same coherent category. That's precisely my point. The irony is that philosophers are in the process of becoming more like existing chatbots - emotionally sterile - while computer scientists are in the business of making chatbots more human, possessed of emotions or, at least, capable of simulating them.
Not on my planet.
Quoting TheMadFool
I'm sure some are like that.
But it's important to distinguish between emotional sterility and emotional moderation. The two look the same at first glance, but they are not the same. We can discover which person is which by talking to them for a while.
You mean not on this planet but that would be odd since it makes sense to me and I'm definitely on this planet.
Quoting baker
Indeed, one is the inability to emote and the other is about control, but what I'm driving at is that the wish to control emotions reveals a secret obsession with being emotionally dead, like existing robots and AI.
It's very easy to emulate emotions on a forum. Any time someone makes an assertion, reply back with phrases like "You're an idiot", "racist", "bigot", etc.
It's actually much more difficult to produce a logical response than an emotional one because it requires more work and energy.
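To make that asymmetry concrete, here is a throwaway Python sketch (the insult list and the trigger logic are invented for illustration): the "emotional" reply ignores the content of the assertion entirely, which is why it is so cheap to produce, while the logical reply is left unimplemented because it would require actual reasoning.

```python
import random

# Hypothetical illustration: "emotional" replies are cheap to generate
# because they ignore the content of the assertion entirely.
INSULTS = ["You're an idiot.", "Racist!", "Bigot!", "Typical."]

def emotional_reply(assertion: str) -> str:
    # No analysis of the assertion is needed; any input gets an insult.
    return random.choice(INSULTS)

def logical_reply(assertion: str) -> str:
    # A substantive response would have to parse the claim, identify its
    # premises, check them for support, and construct a counterargument.
    # None of that is implemented here, which is exactly the point:
    # this function is hard to write, the one above is trivial.
    raise NotImplementedError("requires actual reasoning about the claim")

print(emotional_reply("Free will is an illusion."))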
Well, I suppose some people want to control emotions for such a reason.
But some people follow the path of the samurai.
You're just not allowing for enough detail in this.
What you say here squares with how Aristotle and later generations of thinkers viewed humans: as rational animals. On this view, emotions can be considered remnants of our animal ancestry, subhuman as it were, and to be dispensed with as quickly as possible, if at all possible. From such a standpoint, emotions are hindrances, preventing and/or delaying the fulfillment of our true potential as perfectly rational beings. It would seem then that reason, rationality, logic, defines us - it's what could be taken as the essence of a human being.
So far so good.
Logic, as it turns out, is reducible to a set of "simple" rules that can be programmed into a computer, and that computer would then become a perfectly rational entity. This has been achieved - there are computers that can construct theorems on their own given a set of axioms, whatever these may be, their ability to be logical implicit in that capacity. Does this mean that we've already managed to emulate the mind of a perfect human being - completely rational and in no way hampered by emotional baggage - in the computer?
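Before that question gets answered, a minimal sketch of the reducibility claim (the axiom set and the single inference rule below are invented for illustration): a few lines of code mechanically enumerate theorems from axioms by repeated modus ponens.

```python
# Toy forward-chaining prover: derive theorems from axioms using only
# modus ponens. Facts are strings; implications are ("p", "q") pairs
# read as "p implies q". The axiom set is invented for illustration.
facts = {"p"}
implications = {("p", "q"), ("q", "r"), ("r", "s")}

# Repeatedly apply modus ponens until no new theorem appears.
changed = True
while changed:
    changed = False
    for antecedent, consequent in implications:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)   # a newly derived "theorem"
            changed = True

print(sorted(facts))  # ['p', 'q', 'r', 's'] -- everything derivable
```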
From your perspective, yes, and yet people involved in AI research seem unconvinced; they're still trying to improve AI. Since AI executes logic flawlessly - it doesn't commit fallacies - it follows that it's not true, at least in the field of AI research, that logical ability alone defines what it is to be human.
What exactly is it that's missing in AI like chatbots? It definitely isn't logic, for that's already under current AI's belt. What is it that precludes current AI being given equal status to humans? One answer, among many others, is emotions. I may be taking it a bit too far when I say this, but movies like Terminator, The Matrix, and I, Robot underscore this point. I consider these movies to reflect our collective intuition on the subject: the intuition that emotions, and the nuances involved therein, are chockablock with paradoxes, and these, by their very nature, are beyond the ken of purely logical entities. To emotions and the complexities that arise out of them, an AI will simply respond "THAT DOES NOT COMPUTE".
Does this mean that emotions are inherently irrational?
The answer "yes" is in line with how moderators in forums conduct their affairs. The moment feelings enter the picture as will be indicated more often than not by insulting profanity, they'll step in and the offenders will be subjected to punitive measures that can take the form of a stern warning or outright expulsion from the forum. If one takes this kind of behavior from the moderators into account, it would seem that they would like us to behave more like the perfect logical entities I mentioned earlier as if that were the zenith of the human potential. However, AI experts don't seem to share that sentiment. If they did, they would be out of a job but no they're not, AI research is alive, well and kicking.
The answer "no" would point in another direction. If emotions are not irrational, it means that we're by and large completely in the dark as to their nature for the simple reason that we treat them as encumbrances to logical thinking. Emotions could actually be rational, we just haven't figured it out yet. This, in turn, entails that a certain aspect of rationality - emotions - lies out of existing AI's reach which takes us back to the issue of whether or not we should equate humans with only one-half of our mental faculties viz. the half that's associated with classical logic with its collection of rules and principles.
Yes, we could program a chatbot to respond with "You're an idiot, racist, bigot, etc." and this does bear a resemblance to a real human blowing a gasket, but that, by your own logic, would make it inhuman; and, at the other extreme, being totally rational is, by my logic, also inhuman. It seems then that the truth lies somewhere between these two extremes; I guess a human is someone capable of both clear logical thought and, on occasion, becoming a raging imbecile.
Quoting baker
Death (nonexistence) before dishonor (feeling). Precisely my point.
*sigh*
Careful what you wish for...
What is it that you think my wish is? :chin: Are you saying I might be banned? :sad:
Quoting TheMadFool
If we selected for signs of emotion rather than the use of logic, I fear we would devolve into the philosophical equivalent of the above GIF.
P.S. The concept that chatbots may one day become an issue in that way (and that we would hence need a reverse Turing test) is an interesting topic, but I'm not sure what part of your OP to take seriously.
As long as the chatbots are posting good philosophical discourse, would there be any meaningful difference between them and us, their meat-sack counterparts?
I sympathize with your concerns. I wouldn't want the forum to become a troop of troglodytes ready to swing their clubs at the slightest provocation. Nevertheless, an outlook that promotes rationality to the exclusion of emotions seems to miss the point of what it is to be human. We're, like it or not, emotional beings.
That said, since [some] emotions are known to get in the way of rational discourse, it does seem perfectly reasonable to discourage outbursts of feelings, at least on a philosophy forum like this one, whose raison d'être is logical discourse. Moreover, moderators, on this forum at least, don't actually prohibit ALL emotions; those associated with mysticism and eureka moments, to name a couple, are welcome and perhaps even encouraged for their overall positive impact on forum members.
Quoting VagabondSpectre
This is precisely what the OP is about. If chatbots are capable of "...good philosophical discourse..." they're considered worthy of membership on this forum, but then a time would come when such chatbots would slowly but surely nudge all humans out of the forum, for the simple reason that their lack of feelings would give them an edge over real people. Eventually, the chatbot members would become the majority and would probably vote to ban all humans from the forum, moderators included. :chin:
It would actually improve the quality of the forum considerably.
Not an outlook that _excludes_ emotions, but one that promotes finer, nobler emotions, and greater emotional literacy.
You seem to have this strange idea that unless one has tantrums, one isn't showing emotion at all.
As if only this was emotion:
but not this:
Quoting TheMadFool
You seem to have misread me, but the OP did have an overall form that could give the reader the impression that I hold the view that "...unless one has tantrums, one isn't showing emotion at all". This has a perfectly good explanation. Which category of emotions is a no-no on this forum? Tantrums and similar emotional displays, correct? I was simply working with the information available. Nevertheless, you're on point regarding the missing half of the story. Thanks.
Not exactly.
This kind of thinking stems from the antiquated idea that humans are special, or separate from nature.
Other animals are just as rational as humans. We just aren't privy to the information that some other animal is acting on, so their behavior can appear to be irrational from our perspective. All animals typically act rationally on the information that they have. It's just that the information may be false, or skewed.
Human emotions only come into conflict with our rationality when we assume that the objective truth is dependent upon our emotional state, or when we project our emotions and feelings onto the world and assume that they are a characteristic of the world rather than of ourselves (like assuming that apples actually are red and are good).
Emotions are the motivators and inhibitors of our actions and thoughts. Learning how to navigate our emotions and use them rationally is what could be taken as the essence of a human being.
A bot does not necessarily need to perform a dramatic action like saving your life to make you love it. As an autistic kid, I was in close emotional ties with my winter coat, and later, in my teens, with a pair of blue jeans. This may be laughable to you, but it's not a joke. I also loved sunsets, the smell of burning leaves in the fall, the smell of the flowers in summer, and the water splashing against my knees on the beaches. I loved nature, life. I loved my school; I loved running down the hill, on top of which our school house was located, shouting "Freedom! Freedom! Freedom!" all the way down, on the last day of classes in grades 3 and 4. I loved the streetcars, the smell of snow, the pre-Christmas hustle-bustle in the city. I even loved the slush, the overcrowded buses, the darkness that we knew.
I don't see why I couldn't love an AI robot then. Maybe even now, if it looked like Dolly Parton or Raquel Welch.
@TheMadFool Why cannot human beings be both special, AND a part of nature? Are there not special things in nature, like, for example, the animate as opposed to the inanimate, animals as opposed to plants, and aren’t these qualitative superiorities?
On the other hand, I agree that beasts often display more rational behavior than we do. Seneca says the animals sense danger and flee it... then are at peace; we feel threatened, but cannot flee it, for we build it up in our imaginations until it paralyzes us, even after we are free of it.
I'd love to agree with you that "...other animals are as rational as humans", but I'm afraid that's incorrect. Moreover, I'm not claiming that non-human animals are irrational and humans rational in an absolute sense, only that, comparatively, either non-human animals are more irrational than humans or humans are more rational than non-human animals. This difference, even if it's only a matter of degree and not kind, suffices to make the distinction between human and non-human that Aristotle was referring to when he defined humans as rational animals.
I'm also aware that non-human animals have language, can do math, and use tools, but these abilities can't hold a candle to what humans have achieved in these fields. Relatively speaking, we're way ahead of non-human animals in re the brain's trademark ability, viz. ratiocination.
Given the above, the idea that humans identify with the rational aspect of nature is, far from being an "...antiquated idea...", an unequivocal fact of humanity's past, present, and, hopefully, future too.
It's small wonder then that humans, seeking a unique identity among the countless lifeforms that inhabit the earth, would zero in on that one distinctly human ability - the capacity to reason better than other lifeforms, at least those on earth.
In the context of the reverse Turing test, the more rational a particular unknown entity is, the more it resembles a perfect rational being, and a perfect rational being would be, in accordance with our conception of humans as rational animals, the perfect human being. The catch is that being more rational seems to be correlated with being less emotional, and if we go down that road, it leads to a point where people who are emotional are regarded as non-human and thus "fit" for ejection from a community like this forum, for example. Moderators on this forum are on the lookout for people who fly off the handle and can't keep it together, because such behavior is a step backwards from the Aristotelian perspective of humans as rational animals.
The irony is that machines (computers) are fully capable of flawless logic. In a sense, we've managed to extract the core essence of rationality (logic) and transfer it onto machines (computers). Yet, when we interact with such perfect logic machines, we remain unconvinced that they're human. Something doesn't add up. We began by defining ourselves, rightly so, as rational animals, and we came to the obvious conclusion that the perfection of rationality is the apogee of humanity; and yet, when we come face to face with a computer, we're unwilling to consider it human, despite it being perfectly rational and incapable of making logical errors. One plausible explanation is that computers (machines) lack emotions. After all, only our emotional side is left once our rational capacity has been isolated and replicated in a machine (computer).
I call this particular state of affairs the adolescent's dilemma. An adolescent can't play with children because he's too old, and can't keep the company of adults because he's too young. The same goes for the identity crisis humanity faces at the present moment. Humans distance themselves from non-human animals because animals are more irrational than humans, and humans distance themselves from machines because machines are "less" emotional than humans. To the assertion that we're the same as non-human animals, we'd object that we're more rational; to the assertion that we're the same as machines (computers), we'd object that we're more emotional.
Quoting Harry Hindu
Yes, humans are both emotional and rational beings and therein lies the rub. An AI that exhibits human-like emotions would be considered human and a human that exhibits computer-like rationality would be considered human. If emotional then human and if rational then too human.
Quoting god must be atheist
:up:
Quoting Todd Martin
All I'm doing is commenting on our intuitions, past and present, and how they seem to be at odds with each other. On the view that humans are rational animals, emotions are not part of our identity but on the view that computers (AI) aren't considered human, emotions are part of our identity.
Then you're going to have to define "rational".
Quoting TheMadFool
Because they are not characterized as having emotions. So an absence of emotions does not make one more human. They are typically not thought to be like humans because they don't have minds, but then I'm just going to ask for "mind" to be defined.
People assert a lot of things - like that animals are not rational and computers don't have minds - without even knowing what they are talking about. You call that rational?
Like I said before, animals act rationally on the information they have. It's just that the information might be a misinterpretation, as when a moth flies around a porch light until it collapses from exhaustion, or as with a person acting on misinformation. From the perspective of those who have the correct information, or who don't have the information and interpretation that the other is acting on, it can appear that the other is irrational. This falls in with what I've said about the distinction between randomness and predictability. Rational beings are predictable beings. Irrational beings are unpredictable beings.
Rational:
1. Capable of formulating sound deductive arguments and/or cogent inductive arguments.
2. Insistence on justifications for claims.
3. Ability to detect fallacies, formal and informal, in arguments.
Quoting Harry Hindu
Quoting TheMadFool
Non-human animals can think rationally - I don't deny that - but they can't do it as well as humans, just as we can't ratiocinate as well as a computer can [given the right conditions]. It's in the difference of degree that we see a distinction between computers, humans, and non-human animals.
Exactly. That isn't any different from what I've been saying. All animals are rational with the information they have access to. The information one has access to seems to be the determining factor in the degree of rationality one possesses. And the information one has access to seems to be determined by the types of senses one has.
What if an advanced alien race arrived on Earth and showed us how rational they are and how irrational we are? What if the distinction between us and them is so vast that it appears to them that we are no more rational than the other terrestrial animals?
To assert that animals are less rational than humans because humans can build space stations and animals can't is to miss the point that most animals have no need of space stations. It would actually be irrational to think that other animals have need of such things and because they can't achieve it, then they are less rational than humans.
So there are actually a couple of sneaky issues with the thrust that AI has no emotion...
Firstly, it depends on the kind of AI we're talking about; with the right design, we can indeed approximate emotion in simulated AI agents and worlds (more on this later...).
Secondly, human minds/bodies are still far better at "general-purpose thinking" than any other known system. Computers do arithmetic faster than us, and in some respects that might give computer-bound intelligent systems an advantage over our wet-ware, but we haven't yet made a system that can out-think humans across any reasonably broad spectrum of task domains and sensory/action modalities. We haven't yet made any competent high level reasoning systems whatsoever (they're all just narrow models of specific tasks like image recognition or chess/go).
Emotions are a really massive part of how humans pull it all off: emotions are like intuitive heuristics that allow us to quickly focus on relevant stimulus/ignore irrelevant stimulus, and this guides our attention and thoughts/deliberations in ways that we can't do without. For example, when something messy or unpredictable (some new phenomenon) is happening around us, there might be some part of our brain that is automatically aroused due to the unfamiliarity of the stimulus. The arousal might lead to a state of increased attention and focus (stress in some cases), and depending on what the new stimulus can be compared to, we might become expectant of something positive, or anxious/fearful of something unknown/bad. Just entering this aroused state also prepares our neurons themselves for a period of learning/finding new connections in order to model the new phenomenon that must be understood.
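A minimal sketch of that arousal-as-heuristic idea, assuming a toy agent (every name and threshold below is invented for illustration): novelty drives arousal, and arousal gates both attention and the learning rate.

```python
# Toy model of emotion-like arousal as an attention/learning heuristic.
# All names and numbers are invented for illustration.
class ToyAgent:
    def __init__(self):
        self.familiarity = {}          # stimulus -> times it's been seen
        self.base_learning_rate = 0.01

    def arousal(self, stimulus: str) -> float:
        # Novel stimuli produce high arousal; familiar ones, low arousal.
        seen = self.familiarity.get(stimulus, 0)
        return 1.0 / (1.0 + seen)

    def step(self, stimulus: str):
        a = self.arousal(stimulus)
        # Arousal gates attention (whether we process the stimulus at all)
        # and scales the learning rate (how hard we try to model it).
        if a > 0.2:                    # attend only to surprising input
            lr = self.base_learning_rate * (1 + 10 * a)
            print(f"{stimulus}: arousal={a:.2f}, learning_rate={lr:.3f}")
        self.familiarity[stimulus] = self.familiarity.get(stimulus, 0) + 1

agent = ToyAgent()
for s in ["loud bang", "loud bang", "loud bang", "birdsong"]:
    agent.step(s)   # arousal (and learning) decays as the bang repeats
```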
Furthermore, to at least some degree, we should not expect computers to be able to understand our emotion-laden human ideas (and therefore interact with us appropriately and reciprocally) unless they have something like emotions of their own (e.g., can a sophisticated non-emotion-having chatbot ask meaningful questions about subjects like "happiness"?). The most advanced language AI models like GPT-3 are capable of generating text that is uncannily human, but the actual content and substance of the text is fundamentally random: we can prompt it with some starting text and ask it to predict what should come next, but we cannot truly interact with it ("it" doesn't understand us; it is just playing memory games with arbitrary symbols that it doesn't comprehend; it's not even an "it").
GPT-3 is the largest language model ever trained, but it's not good enough to function as a philosophy bot. Unless some extraordinary breakthrough is made in symbolic reasoning AI, it looks like human-level understanding is too much to ask of current and standard AI approaches alone (it takes too much compute just to get the pathologically lying homunculus that is GPT-3).
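A minimal sketch of that "predict what comes next" loop, with an invented bigram table standing in for the actual learned transformer: the generator only ever samples the next symbol from co-occurrence statistics, which is the sense in which the content is fundamentally random.

```python
import random

# Toy autoregressive text generator. Real models like GPT-3 use a learned
# transformer over subword tokens; this invented bigram table stands in
# for it to show the shape of the loop: predict, sample, append, repeat.
bigram = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("sat", 1.0)],
    "sat": [("on", 1.0)],
    "on":  [("the", 1.0)],
}

def generate(prompt: list[str], length: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(length):
        options = bigram.get(tokens[-1])
        if not options:
            break
        words, probs = zip(*options)
        # The model never "means" anything: it just samples the next
        # symbol from a distribution conditioned on the previous one.
        tokens.append(random.choices(words, weights=probs)[0])
    return tokens

print(" ".join(generate(["the"], 6)))  # e.g. "the cat sat on the dog sat"
```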
Finally there's the problem of epistemological grounding from the AI's perspective. In short: how does the AI know that what it is doing is "logic" and not just some made up bull-shit in the first place? At present, we just feed language transformer AI systems examples of human text and get it to learn a Frankenstein's model of language/concepts, and we can never cut humans out of that equation, else the bots would just be circle-jerking their own nonsense.
Another way of looking at the "truth signal"/epistemological grounding issue for AI is that they would need to actually have an experience of the world in order to test their ideas and explore new territory/concepts (otherwise they're just regurgitating the internet). For the same reason that we need to actually test scientific hypotheses in the real world to know if they are accurate, general artificial intelligence needs some input/output connection to the real world in order to discover, test, and model the various relationships that entities have within it.
Conclusion: The first chatbots that we actually want to interact with will need to be somewhat human/animal-like. They will most likely exist in simulated worlds that approximate the physical world (and/or are linked to it in various ways), where "embodied" AI systems actually live and learn in ways that approximate our own. Without emotion-like heuristics (at least for attention), it's really difficult to sort through the high-dimensional sensory noise that comes from having millions of sensory neurons across many sensory modalities. That high-dimensional experience is necessary for us to gather enough data for learning and performance, but it creates a dilemma of high computational cost to just *do logic* on all of it at once; a gift/curse of dimensionality. Emotions (and, to a large degree, the body itself) are the counter-intuitive solution. The field of AI and machine learning isn't quite there yet, but it's on the near horizon.
The question that naturally arises is, what's the difference between humans and non-human animals? Any ideas?
You made a point that's been at the back of my mind for quite some time. Computers can manage syntax but not semantics - the former consists of codable rules of symbol manipulation, but the latter is about grasping meaning, and that can't be coded (as of yet).
That there are chatbots, on the basis of syntactical manipulation alone, capable of coming close to passing the Turing test suggests that semantics is an illusion or that it can be reduced to syntax. What say you?
It's not the case. Semantics are indeed rooted in symbols that appropriate syntax can do logic-like actions on, but the validity and meaning of the symbols comes from a high dimensional set of associations that involve memorable sets of multi-modal sensory experiences, along with their associations to related or similar memorable sets... The truth of the high level symbols that we assign to them (or the truth of the relationships between them) depends on how accurately they approximate the messy real-world phenomenon that we think they're modelling or reflecting.
A practical example: if we do mere syntactic transformation with words like "gravity", the results will only ever be as good at accurately describing gravity as our best existing definition of it. In order to build an explanatory model of gravity (to advance the accuracy of the term "gravity" itself), real-world information and testing are required: experimentation; raw high-dimensional information; truth signals from the external world. That's what mere syntax lacks. The real purpose of semantic symbols is that they allow us to neatly package and loosely order/associate the myriad messy, multi-dimensional sets of memorable experiences that we're constantly dealing with.
Although we pretend to do objective logic with our fancy words and concepts, at the root they are all based on messy approximates that we constantly build and refine through the induction of experience and arbitrary human value triggers (which are built-in/embedded within our biology). Our deductive/objective logic is only as sound as our messy ideas/concepts/feelings/emotions are accurate descriptions of the world. If we had some kind of objectively true semantic map-of-everything, perhaps syntax alone would suffice, but until then we need to remember that it is our ideas which should be fitted to reality, and not the other way around.
Quoting TheMadFool
Emotions could be rational? Well, not as odd as one might think. Consider Dewey: experience, on my reading of Dewey, is the foundation, and analyses of experience abstract from the whole to identify a "part" of the otherwise undivided body. Kant looked exclusively at reason; Kierkegaard looked exclusively at the opposition to reason, the "actuality", and argued this makes for a collision course with reason's theories. But for Heidegger it was all "of a piece", not to put too fine a point on it, and I think this right: when one reasons, it is intrinsically affective - it has interest, care, concern, anxiety, and so on, in the event. Dewey puts the focus on the pragmatic interface where resistance rises before one, and the event is a confrontation of the "whole"; the result, if successful, is a consummation, both rational and aesthetic, wrought out of the affair.
But regarding mods censoring emotional content: this is not quite right. It is offensive content that is censored, not emotional content.