I’m generally an AI optimist here. Maybe an apologist also.
But I recognize some of the dangers, the biggest of which is the incentives we give it. What if those incentives are malevolent?
Fakers have been around for hundreds of years. Forgers, impressionists, and impersonators have been at their craft for a long time. Bilking us out of our money. Making us believe the make-believe. But there was an art to it. A very human skill set required.
Now, we have AI.
AI is a tool that is and will be very good at faking the crap out of us. Words, audio, and most insidiously, video. No human skills required. Just incentives.
Video is the best tool we have for objective understanding of what happened. It provides context in both space and time. We use it in the legal system. We use it for historical analysis. We use it for academic research. Armed with movie studios in our pockets, we walk around capturing all the moments of our lives in video for both posterity and personal remembrance. It’s a powerful tool.
However, we also know its limitations.
We know when we’re watching a movie or a TV show, even documentaries and reality TV, that it is or might be make-believe. Even video segments on The News cause us to raise an eyebrow occasionally. Green screens, editing, camera angles, and CGI live in our vernacular. We know how they work. We know their capabilities. They’re a front-of-mind part of how we understand what we’re watching.
Until now. AI blows the “objective video” paradigm out of the water. Take a peek at this one:
As the saying goes, “Believe none of what you hear and only half of what you see.”
I fear the new, more accurate version is, “Believe nothing.”