Deepfakes get real (and real easy)


Call it "deepfake panic." The world is waking up to the fact that artificial intelligence (AI) will soon enable anyone to produce fake photographs, videos and audio transcripts that look and sound real.

The panic is misplaced.

Deepfake panic centers on the fear that some famous person, such as a politician, will be blamed for saying or doing things they never said or did.

A bigger risk is that notable people actually caught transgressing on video or audio will be able to convince the public that the authentic media is a deepfake. In other words, where pictures, videos, and audio recordings once served as proof, deepfake technology will enable people to believe or disbelieve the authenticity of any media based on their biases or preferences.

Deepfakes represent a further slide away from a world of shared truths and toward one where everyone has their own truth and all sources of information are suspect.

The biggest risk of all, however, is not in deepfake media that's published or mass-distributed, but in the one-on-one use of deepfake fraud in social engineering attacks.

Here's how deepfakes will take social engineering attacks to a whole new level.