Political advertising has long been filled with half-truths about opponents. Often news media coverage has been too. Just think how often half a Donald Trump quote is used to portray him as dangerous or evil. A couple of weeks ago, a robocall in New Hampshire sounded like President Biden, but it wasn't him. It was generated by opponents using artificial intelligence to sound like him. It's known as a deepfake.
A vulgar video recently circulated showing a character who looked like Taylor Swift engaged in pornographic acts.
Idaho House Minority Leader Ilana Rubel is sponsoring legislation that would criminalize deepfakes intended to deceive.
The greatest fear I’ve heard about the use of AI in generating these pictures and videos is that a world leader could be portrayed as announcing a nuclear strike, and that could lead to the destruction of humanity.
On the other side of this argument is the notion the government has a responsibility to protect the rabble from disinformation.
Ron Nate, President of the Idaho Freedom Foundation, told Newsradio 96.1 FM and 1310 KLIX that we could end up criminalizing funny memes and satire.
Rubel begs to differ. She responded that satire would be protected (though who determines what counts as satire?). She also says that if a meme is criminal, only the originator would be charged, not the tens of thousands of people who might share it on Facebook. At that point, how do you track the origin? Easy, she says: the creator's identity would be required to be attached.
A few billion people are on social media. Good luck policing the globe.