
The Contemporary Fundamental Differences Between AI and Humans


When we think of AI, many of our preconceptions are shaped by current science fiction media, but AI is far more than science-fiction robots. Robotics is a sub-field of AI, while AI itself is the branch of computer science that aims to assist human beings by mimicking human intelligence through computer programs, and it has a serious history of its own. AI is a product of human technological creation, but can it ever overtake us? Is AI replacing human partners in relationships? Why is AI expected to replace millions of blue-collar jobs in the future? To answer such questions, we first need to understand the fundamental contemporary differences between artificial and human intelligence by examining current developments in the field. Here are the three most fundamental differences one can observe between Artificial Intelligence and Human Intelligence.


1. The Emotive Aspect

As human beings, we have abstract emotions: we experience love, grief, passion, motivation, and so forth, while an AI-powered machine does not. This has both positive and negative ramifications. Since AI is not yet capable of feeling on a level similar to humans, emotions do not influence the quality of work that AI-controlled machines or robots produce. Given their built-in instructions, machines are also not biased in their functioning the way humans can be. Because no human emotions interfere with AI programs, their operational capacity exceeds ours: they do not ask for a "break time" or reach a saturation point. The capacity to perform repetitive tasks without error makes them more efficient than us.

On the other hand, the obvious cons include the inability to empathize, show respect, and feel compassion. Although AI-powered robots have been proposed for geriatric care, some experts argue against the idea.

“Giving them (elderly) a primitive, fake, inanimate, and non-emotional robot to interact with can be cruel.”

-Kai-Fu Lee, Taiwanese computer scientist, businessman, and writer

According to Kai-Fu Lee, jobs that are low in compassion and low in creativity will be at risk of being replaced by AI in the future, while jobs that are creative, thoughtful, and strategic will continue to thrive. Even though today we are nowhere near an AI that can feel emotions the way humans do, various subsets of artificial intelligence have us envisioning a future in which we might someday create emotionally capable machines. As the MIT Sloan School of Management puts it:


“Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published ‘Affective Computing.’”

Techniques such as sentiment analysis (a sub-field of natural language processing) and multimodal emotion AI (which analyzes facial expressions, speech, and body language) aim to improve natural communication between man and machine and to gain insight into an individual’s mood. However, the way humans communicate is truly complex: does AI really understand how we feel, or the subtext beneath our words? Even as humans we tend to miss or add culturally sensitive context, and we use sarcasm and idioms that can completely alter the meaning, and therefore the emotion, of what we say. Sometimes we leave things out and stay silent, which can also signal how we are feeling. Present-day AI is not sophisticated enough to understand such subtexts.
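The sarcasm problem described above is easy to demonstrate. Below is a toy, lexicon-based sentiment scorer, a deliberately simplified sketch of the idea behind sentiment analysis; real systems use trained models, and the word lists here are purely illustrative assumptions:

```python
# Toy lexicon-based sentiment analysis: count positive vs negative words.
POSITIVE = {"love", "great", "happy", "wonderful", "good"}
NEGATIVE = {"hate", "awful", "sad", "terrible", "bad"}

def sentiment(text: str) -> str:
    words = text.lower().replace(".", "").replace(",", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this wonderful day"))     # positive
print(sentiment("What an awful, terrible mess"))  # negative
print(sentiment("Oh great, another Monday"))      # positive - the sarcasm is missed
```

The last example is exactly the failure mode the paragraph describes: the word "great" scores as positive even though the sentence is sarcastic, because the model has no grasp of subtext.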


One of the pioneering figures to attempt to differentiate between human and artificial intelligence was Alan Turing, who devised the Turing test. In this test, a judge converses over text with both a computer and a human being for some time; if, at the end of the experiment, the judge cannot distinguish the computer from the human, the computer is said to have passed. To this date, it is claimed that no computer has ever properly passed it. The Eugene Goostman chatbot was reported to be the first to pass the Turing test, but it is argued that what it actually passed was not the Turing test itself but a test inspired by it, whose passing criteria were devised by the organizers of that event. The latter view seems more plausible: when one chats with the official chatbot, one quickly notices that its answers are not very sensical or context-sensitive, nor does it pick up on abstract cues.
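The structure of the test itself is simple to sketch. In the toy version below, the "human" and "machine" replies are canned stand-ins invented for illustration (a real test involves free-form live conversation), and the judge uses a naive heuristic in the spirit of the context-sensitivity failures mentioned above:

```python
# Minimal sketch of the imitation-game protocol behind the Turing test.
def machine_reply(prompt: str) -> str:
    # an evasive, context-free answer, like many simple chatbots give
    return "That is interesting. Tell me more."

def human_reply(prompt: str) -> str:
    # a human answer tends to engage with the actual question
    return f"Honestly, {prompt!r} makes me think of walking home in a storm."

def judge(transcript: dict) -> str:
    # naive heuristic: the reply that never references the question
    # is probably the machine
    for name, reply in transcript.items():
        if "rain" not in reply.lower():
            return name
    return "undecided"

question = "What do you feel about rain?"
transcript = {"A": machine_reply(question), "B": human_reply(question)}
print("Judge names the machine:", judge(transcript))  # Judge names the machine: A
```

A chatbot "passes" only when no such tell survives sustained conversation, which is precisely what evasive, context-insensitive answers fail to achieve.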


2. Self-awareness

Self-awareness implies consciousness of one’s own individuality. It is important to note, though, that while consciousness is being aware of one’s environment and body, self-awareness means recognizing that awareness. Current AI systems exist at different levels: some do not remember the past and cannot use past experience to inform the present, while others can draw on recently learned information. However, we have not yet reached the stage of a machine that can make decisions like a human, or that is aware of itself and its internal states. Even today, many experiments aim to devise methods of differentiating between human thinking and that of a machine.


One such example is an experiment by Professor Selmer Bringsjord at the Rensselaer Polytechnic Institute in New York. He ran a variant of the ‘wise man riddle’ on three robots, each of which was programmed to believe that two of them had been given a "dumbing pill" that would render them mute. Two were then silenced. When the robots were asked which of them had not received the pill, naturally only one could answer, and it said "I don’t know". On hearing its own reply, that robot changed its answer, realizing that it must be the one that had not received the pill. The fact that the robot managed to work out that it had not been given the pill suggests that it displayed a degree of self-awareness. Whether that means the robot is genuinely self-aware, however, is yet to be settled. Some researchers have called it "mathematically verifiable awareness of the self".
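The inference step in that experiment can be reconstructed as a toy program. The class and method names below are assumptions for illustration; the real experiment used formally verified reasoning running on physical robots:

```python
# Toy reconstruction of the "wise man riddle" self-awareness step:
# a robot that hears itself speak can conclude it was not silenced.
class Robot:
    def __init__(self, name: str, silenced: bool):
        self.name = name
        self.silenced = silenced

    def try_to_speak(self, sentence: str):
        # a robot "given the dumbing pill" produces no sound
        return None if self.silenced else sentence

def answer_riddle(robot: Robot):
    heard = robot.try_to_speak("I don't know")
    if heard is None:
        return None  # silenced: says nothing at all
    # The robot hears its own voice; speaking is proof that it
    # did not receive the pill, so it updates its belief.
    return "Sorry, I know now: I was not given the pill."

robots = [Robot("R1", silenced=True), Robot("R2", silenced=True), Robot("R3", silenced=False)]
for r in robots:
    print(r.name, "->", answer_riddle(r))
```

The key move is that the robot treats its own utterance as new evidence about itself, which is the narrow, verifiable sense of "awareness" the researchers claimed.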


3. Sentience

Consciousness is a prerequisite for sentience, the ability to recognize and feel sensory inputs. Present-day machine perception allows computers to "sense" by gathering data via computer vision, machine hearing, machine touch, and machine smell, but it remains limited: it grants machines only a narrow form of sentience, rather than full consciousness, self-awareness, and intentionality. For example, humans can still recognize objects and images that appear blurry, whereas a machine struggles to. Hence, when it comes to sentience, humans remain far superior.
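The blur example can be made concrete with a toy experiment on made-up data: a naive template-matching "recognizer" sees its match score drop once the input pattern is smoothed, while a human would still recognize the overall shape:

```python
import numpy as np

# A sharp-edged 1D pattern stands in for an image the machine "knows".
template = np.zeros(32)
template[12:20] = 1.0

def match_score(signal):
    # normalized correlation between the input and the stored template
    return float(np.dot(signal, template) /
                 (np.linalg.norm(signal) * np.linalg.norm(template)))

def blur(signal, width):
    # smooth the signal with a simple box filter, mimicking a blurry image
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

sharp = template.copy()
blurry = blur(sharp, 9)
print("sharp score:", round(match_score(sharp), 3))
print("blurry score:", round(match_score(blurry), 3))
```

The sharp input matches perfectly (score 1.0), while the blurred version scores strictly lower; modern recognizers are far more robust than this sketch, but the underlying sensitivity to degraded input is the same limitation the paragraph describes.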


To conclude, present-day AI works optimally in controlled environments but struggles with open worlds, poorly defined problems, and abstractions. Even though AI differs from human beings in terms of emotions, consciousness, and sentience, there is a wealth of development still to be done that could expand how AI sees, feels, experiences, and thinks about our world. Do you think AI has reached a stage where it can replace human intelligence?


Want to delve deeper into the topic with an expert in the field? Stay tuned for our upcoming article with an AI Ethicist!

