How Does Computer Vision Differ From Human Sight?



A computer treats all visual information with equal regard unless told otherwise, whereas humans possess intrinsic biases that assign value to certain images. A computer can quickly observe fine patterns within images that a human might miss. Humans, however, are better at detecting and analyzing faces.

To start with, what is computer vision?

Computer vision (CV) enables computers to see and comprehend digital images such as videos and photographs. It tries to mimic human vision by recognizing objects in images and automatically decoding information about them. The point is that the computer can comprehend images without the help of a human being. This means computers can carry out tasks such as determining whether an object appears in a photo, how many objects the photo contains, and where exactly those objects are located. This may appear to be a simple task, since human beings do it with little or no effort, but it is actually a complex process.

So, how are computers able to see?

First of all, one should understand that a computer's vision is built from something referred to as pixels. Pixels are tiny squares that together form the whole image. Each pixel has a brightness level represented by a number, and each number corresponds to a color.
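The idea of an image as a grid of numbers can be sketched in a few lines of Python. The tiny 5x5 "image" below is entirely made up for illustration; real images are far larger, and each value encodes brightness (0 = black, 255 = white):

```python
# A minimal sketch of how a computer "sees" an image: a grid of numbers.
# This 5x5 grayscale image is hypothetical; each pixel's number is its brightness.
image = [
    [  0,   0, 255,   0,   0],
    [  0, 255, 255, 255,   0],
    [255, 255, 255, 255, 255],
    [  0, 255, 255, 255,   0],
    [  0,   0, 255,   0,   0],
]

def render(img):
    """Print the grid as characters so a human can see the pattern the numbers form."""
    return "\n".join("".join("#" if px > 127 else "." for px in row) for row in img)

print(render(image))  # the bright pixels form a plus sign
```

To the computer, the picture *is* the grid of numbers; the rendered characters are only for our benefit.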

The big question is: how does a computer recognize the pixels that make up a given image? Computer vision basically revolves around pattern recognition. For computer vision to work properly, machines have to learn from us how to recognize images, and the only way to accomplish this is through repetition. This is done by feeding the computer as many labeled pictures as possible. For instance, if you want your program to recognize a sheep, you will have to feed it many pictures labeled "sheep". To label the sheep, you simply draw a box around it and write the name "sheep" on it. The computer identifies the pixels inside the box and associates that pixel structure with a sheep.
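One labeled training example of the kind described above might be stored like the hypothetical sketch below. The file name and box coordinates are made up; real datasets hold thousands of such annotations:

```python
# A hypothetical labeled training example: one image plus the box drawn around the sheep.
annotation = {
    "file": "pasture_001.jpg",  # hypothetical image file name
    "labels": [
        # the box is given as (x, y, width, height) in pixels
        {"name": "sheep", "box": (34, 50, 120, 140)},
    ],
}

def labels_in(ann):
    """Return the names of the objects labeled in one annotation."""
    return [obj["name"] for obj in ann["labels"]]

print(labels_in(annotation))
```

The box tells the computer exactly which pixels belong to the sheep, so it learns the pixel pattern rather than the whole photograph.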

After viewing enough of these pictures, the computer will be ready to tell an image that has a sheep in it from one that doesn't, entirely on its own. Bear in mind that a computer may need to be fed millions of images before it can identify a sheep unaided. That should not be daunting, though, as countless pictures of sheep are shared over the internet every day.
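Learning by repetition can be sketched with a toy nearest-neighbor classifier: a new image is labeled by whichever labeled example its pixels most resemble. All the data below is invented, the "images" are just four pixel values each, and real systems use millions of images and neural networks rather than this simple distance comparison:

```python
# A toy sketch of learning by repetition, using invented 4-pixel "images".
# Bright pixel values stand in for a woolly sheep; dark ones for background.
training_set = [
    ([250, 240, 245, 235], "sheep"),
    ([245, 250, 235, 240], "sheep"),
    ([20, 30, 25, 15], "not sheep"),
    ([25, 15, 30, 20], "not sheep"),
]

def distance(a, b):
    """Squared difference between two pixel grids: small means 'looks similar'."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(pixels):
    """Label a new image with the label of its closest training example."""
    _, label = min((distance(pixels, example), lbl) for example, lbl in training_set)
    return label

print(classify([240, 230, 250, 245]))  # close to the bright examples -> "sheep"
```

The more labeled examples the training set contains, the finer the distinctions the comparison can make, which is why the repetition described above matters.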

How Human Vision Works

Human vision basically involves light and has nothing to do with patterns or repetition. In a nutshell, humans do not need to undergo a learning process to see, because vision is ingrained in us. Human vision consists of a number of steps. To begin with, light bounces off the scene in front of you and enters your eye through a part of the eye known as the cornea. The cornea directs light toward the iris and pupil, which work together to regulate how much light enters your eye. After passing through the pupil, light reaches a part of the eye known as the retina. The retina contains special sensors known as rods and cones; the cones are involved in distinguishing different colors.

The moment the rods and cones are exposed to light, they translate the visual information into electrical signals. The optic nerve then carries this information to the brain, where the visual cortex interprets the electrical form of the image and allows you to form a visual map.

Human Vision & Computer Vision – Contrasted

We must acknowledge that both human and computer vision can at times be fooled. For instance, human beings are biased toward identifying faces even when they are not really there. In something referred to as the rotating mask illusion, when a hollow mask rotates you should be able to see its back, yet the brain automatically perceives the face as pointing toward you. A computer, by contrast, carries no such bias when identifying faces. However, it may at times classify images that humans cannot identify as given objects: a computer may see a combination of colors and call it an object simply because those colors resemble the object's appearance.

https://youtu.be/nMJWR95oCpQ

How is computer vision being used today?

Though computer vision has taken a long time to develop, big strides have been made in the industry. Mobile technology with built-in cameras, new algorithms like convolutional neural networks, accessible hardware designed for computer vision, advanced computing power, and an abundance of data have all contributed to the rise of computer vision.

Computer vision is now being used in self-driving cars, healthcare, facial recognition, and many other fields. It is playing a big role in medicine, helping doctors identify cancer with greater speed and accuracy when analyzing chemotherapy response assessments. In addition, the US Air Force is making use of computer vision in its dogfighting simulator.

One thing we have to agree on is that computer vision is much faster than human vision. In computer vision, signals travel as electrical impulses, while in human vision, signals are transmitted through chemical events involving potassium and sodium ions.

Conclusion

To wrap it up, computer vision is a technology that has emerged and become beneficial in very many ways. It has been used to upgrade and improve various systems, and the exciting part is that its full potential has not yet been exploited.

Gene Botkin

Gene is a graduate student in cybersecurity and AI at the Missouri University of Science and Technology. He is also an ongoing student of philosophy and theology.
