When creating AI technology, it seems only natural to personify it, since so many of its attributes mimic the human brain. Neural networks, voice recognition, deep learning - the vocabulary itself borrows from human cognition. So it seems only natural that AI could also do very human things, like hallucinate. Of course, your computer isn't celebrating the 53rd anniversary of Woodstock and doing this voluntarily; hallucination is a side effect of how AI is built. Let us explain.
Imagine you're sitting in a self-driving car. The car approaches a stop sign but shows no signs of slowing down. You, the human in the car, can tell something is wrong, but even so, the car blows right through the stop sign. Here's why that might happen.
The term neural network describes how AI mimics the human brain to recognize sounds and objects. Like the human brain, a neural network makes decisions based on what the technology experiences. In our example, the car's AI should see the stop sign, recognize it as an instruction to stop, and slow the car to a halt. But suppose a sticker covers part of the sign. That small change throws off the AI's recognition: instead of a stop sign, it sees, say, a person or a telephone booth (we'll pretend those still exist). In short, the AI hallucinated.
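For the technically curious, here's a minimal sketch of the idea behind that sticker, known in the research world as an adversarial perturbation. It uses PyTorch with a tiny untrained classifier standing in for the car's vision system; the "stop sign" class index and the epsilon step size are made up for illustration, and whether this toy model's guess actually flips depends on its random weights. Against real image classifiers, the same trick works alarmingly well.

```python
# A minimal sketch of an adversarial perturbation (the Fast Gradient Sign
# Method), assuming PyTorch. A physical sticker plays the same role as the
# pixel nudge below: a tiny change chosen to maximally confuse the model.
import torch
import torch.nn as nn

# Toy stand-in for an image classifier: 3x32x32 input -> 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32)   # pretend this is a photo of a stop sign
true_label = torch.tensor([0])     # class 0 = "stop sign" (hypothetical)

# Ask the model which pixels, if changed slightly, would hurt it most.
image.requires_grad_(True)
loss = loss_fn(model(image), true_label)
loss.backward()

# Step a tiny amount in the direction that increases the model's error.
epsilon = 0.1  # barely visible to a human eye
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The unsettling part is how small epsilon can be: the perturbed image looks identical to a person, yet the model can confidently call it something else entirely.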
Why is this important?
Well mostly, we find this phenomenon hilarious, and we encourage you to go down a YouTube rabbit hole to see other examples. But it also highlights the limitations of the technology and the continued need for human involvement. The machine gets us 50% of the way there: it tells us what the market is lacking and who wants to see your ads. But it can't do the actual creation as well as humans can. We still need the human brain to sift through the excess and get to the good stuff.
If you’d like a demo of our machine, please reach out to John Elder. We’d love to share it with you.