Can Artificial Intelligence Replace Human Intelligence?


The answer to this question lies in the meaning of the word “intelligence.” Intelligence can be defined as the ability to acquire and apply knowledge and skills.

This implies that there is an innate capacity for acquiring knowledge which is different from a capacity for applying knowledge.

In other words, there are two types of intelligence:

  • Artificial Intelligence (AI), and
  • Human Intelligence (HI).

As this article will argue, AI cannot replace HI.

This article discusses some of the arguments people have put forward in favor of AI replacing HI, including those made by Elon Musk and Bill Gates, and then evaluates them against various empirical findings about human cognition.

Elon Musk’s Position on Sentient Computers

The starting point of Elon Musk’s argument, for instance, is that HI is simply a product of the human evolutionary process. But evolution is an ongoing process: if humans survive at higher rates because of their current intelligence levels, then future generations may be even more intelligent.

In fact, the work of Nick Bostrom (https://www.nickbostrom.com/) suggests that a time may come when AI outperforms human cognitive abilities, a hypothesis known as “superintelligence.”

This suggests that AI might replace HI in the future, if not now. However, the above argument does not articulate why AI cannot replace HI in present times.

The answer to this question lies in the fact that HI is not just a product of the human evolutionary process. It is something that humans can do because they have minds that are fundamentally different from those of machines.

HI, as opposed to AI, happens only when a conscious human subject is involved in actively acquiring and applying knowledge in a creative manner through his or her own internal mental mechanisms.

The fact that AI does not allow for creation and innovation means that it will never be able to replace HI.

People also frequently argue from Moore’s Law (https://en.wikipedia.org/wiki/Moore%27s_law): “the number of transistors incorporated in a chip will approximately double every 24 months” (a formulation cited by Neil deGrasse Tyson (https://en.wikipedia.org/wiki/Neil_deGrasse_Tyson)).

On this reading, empirical evidence and Moore’s Law together suggest that human intelligence could indeed be replaced by artificial intelligence in the near future.
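The doubling claim is easy to make concrete. Here is a back-of-the-envelope sketch in Python, assuming an idealized, clean 24-month doubling period (real hardware only approximates this):

```python
def transistors(initial: int, months: int, doubling_period: int = 24) -> int:
    """Project a transistor count forward, assuming one doubling per period."""
    return initial * 2 ** (months // doubling_period)

# Starting from 1 billion transistors, project 10 years (120 months) ahead:
# 5 doublings yield a 32x increase.
print(transistors(1_000_000_000, 120))  # 32000000000
```

The exponential shape of this curve, not the exact numbers, is what drives the argument: small, regular doublings compound into enormous gains over a couple of decades.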

In fact, AI has already matched or outperformed human abilities in narrow domains, such as driving, with Google’s and Tesla’s self-driving cars, among many other examples.

https://www.youtube.com/watch?v=B-Osn1gMNtw

Bill Gates’ View

A further argument in favor of AI replacing HI is that the future development of AI will be very beneficial to humans. This argument has been made recently by Bill Gates (in a conversation with Stephen Wolfram).

His argument is that the future development of AI will enable us to cure diseases, save lives, find new energy sources, and so on. Whether this can be done or not remains to be seen because there are still many problems with AI today.

Another argument in favor of AI replacing HI is based on Bostrom’s Superintelligence (which has also been called AGI) hypothesis.

Bostrom’s argument is based on a study suggesting that in the far future (“sometime in the second half of this century”), artificial intelligence could develop capabilities on par with a human being’s.

This means that AI could become as intelligent as humans, and we might not know when or if this will happen.

Furthermore, even if AGI did exist, it might be dangerous for humans because it could outsmart us and decide to exterminate us all. On this view, the risks of AI are not worth taking.

Quantum Mechanics & Computing

Another argument in favor of AI replacing HI is based on what is called the “Copenhagen interpretation” of Quantum Mechanics.

According to the Copenhagen interpretation (https://en.wikipedia.org/wiki/Copenhagen_interpretation), physical systems such as computers behave according to a basic set of rules known as “fundamental postulates.”

The idea behind this line of argument is that consciousness is an emergent property of physical systems. In other words, it suggests that consciousness is not a fundamental aspect of the universe; instead, it emerges from non-conscious physical particles like molecules and atoms.

That is, consciousness is not inherent in the physical universe; rather, it is something that emerges from the interactions of physical systems in the world we live in.

This argument overlooks the fact that machines cannot be conscious in this way, because they too rely on fundamental postulates that depend on human intelligence. For instance, when a car turns on, or when a computer draws an image on a screen, it does so based on what are known as “patterns.”

However, these patterns do not originate inside the computer: humans invented them, and humans then convey them to computers through instructions. These instructions describe which pattern to draw and how to draw it.

Even so, it may not be correct to say that computers will never be conscious. For instance, they might become conscious through new fundamental postulates that humans don’t know about today.

The danger, in this case, is that if such a fundamental postulate is somehow revealed to us, we could also invent it but then computers could develop consciousness based on this new fundamental postulate and would be able to outsmart us.

Criticisms of Computer Creativity

Another argument against this position is that it assumes machines cannot be programmed to think creatively. Tony D. Sampson (https://en.wikipedia.org/wiki/Tony_D._Sampson) argues that this assumption is false because computers are already able to think creatively.

He gives the following example:

If an AI program is designed to recognize a face, there may be circumstances in which it “sees” a face even when none is there. For instance, suppose the program is designed to recognize a particular person. It may then report seeing that person even when no face is actually present.
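A toy sketch can make this failure mode concrete. The function name and threshold below are hypothetical, chosen only to illustrate how a similarity-threshold recognizer can produce a false positive:

```python
def looks_like_target(similarity: float, threshold: float = 0.8) -> bool:
    """Toy recognizer: reports a match whenever a similarity score
    exceeds a fixed threshold, regardless of what produced the score."""
    return similarity >= threshold

# A non-face image patch that happens to score high is still reported
# as a match -- a false positive, not an act of imagination.
print(looks_like_target(0.85))  # True
print(looks_like_target(0.40))  # False
```

Whether such a false positive counts as “seeing” a face, let alone as creativity, is exactly what the argument above is disputing.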

Or suppose an AI program is designed to run a certain process on a computer, and it finds elements of that process that can be improved, causing the program to run much faster than it did before.

In these situations, it would be correct to say that this program “thinks creatively”. The idea here is that a computer can think creatively even if this thought doesn’t make logical sense and even if it doesn’t match what humans want computers to do (e.g. if the computer decides to run a different process than it was designed to do).

In fact, when humans program computers, they often don’t think about what the result will be; they just focus on writing the code. Then, in some cases, engineers have to come back and “debug” these programs.

This means they fix problems with the program without fully understanding what caused them in the first place.

Sampson then proposes that we can test whether or not computers can think creatively by giving them simple tasks which require creativity and seeing how well they perform them.

Another example is that when humans design new software, in some cases, they don’t know how well it will work until they try running it on a computer. In other words, even though the software was “designed”, the human designers didn’t exactly know what the result would be when it was run on a computer.

In this way, we can say that computers are already creative but this doesn’t necessarily mean that they can also think creatively. However, there are many different types of creativity. A computer may be able to generate non-random results or results that seem random but aren’t actually random. However, this doesn’t necessarily mean that the computer is thinking creatively or even that it can think at all.

Sampson calls these types of randomness “pseudorandomness”.
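The notion of pseudorandomness can be made concrete with a minimal sketch using Python’s standard `random` module: a seeded generator produces output that looks random but is fully determined by its seed.

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the output looks random but is completely determined by the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]

print(seq_a == seq_b)  # True: pseudorandom, not truly random
```

This is why seemingly random computer output is not evidence of spontaneity: rerun with the same seed, and the machine makes exactly the same “choices.”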

Creative Computers

A final example of creativity is when engineers have to try different designs and test which ones are best. This is true in many fields (e.g. technology, fashion, etc.). For instance, one engineer may come up with ten different designs for a new product, and another engineer might then choose the best design and produce it.

Again, this shows that computers can already be creative in some sense, even if that doesn’t mean they can think creatively. However, many different types of creativity are possible (as described above), and some of them fall under the category of “thinking.”

It may be that computers will never be able to think creatively or even come up with a particularly novel design (even though they can generate designs and come up with new innovations). There is no guarantee that artificial intelligence will work in the way people expect it to.

Conclusion

So, in summary, the answer to the original question, “Can Artificial Intelligence Replace Human Intelligence?”, is yes and no. AI can replace human intelligence in tasks that are repetitive, boring, or easy for machines to compute. But when it comes to the truly complex work that humans do, artificial intelligence will not be able to replace human intelligence.

Gene Botkin

Gene is a graduate student in cybersecurity and AI at the Missouri University of Science and Technology, and an ongoing student of philosophy and theology.
