
Lying has never been easier!

AI is evolving exponentially, and the regulations meant to control it are falling behind

In a Facebook post, Canada’s Chief Public Health Officer, Dr. Theresa Tam, revealed the dangers of the COVID-19 vaccination, claiming that doctors worldwide had found ‘sinister rubber-like clots’ in autopsies of vaccine recipients. Before the post was taken down, Dr. Tam urged viewers to “act before it’s too late” and advertised a product that would supposedly clean blood vessels and make them more flexible.

It doesn’t sound real, does it? That’s because it’s not. The video was made entirely by AI. The type of AI that makes realistic photos, videos, and audio is called ‘generative AI’ (gen AI). Just the thought of someone doctoring your face into anything should be terrifying on its own. More pressing is that, in an age of misinformation, videos like this can spread false narratives further and embolden people who already believe them. In that (now archived) Facebook post, people criticized the real Dr. Tam, with one user even calling for her jail time.

In a bold move to expose the dangers of this technology, Laura McClure, a New Zealand MP, showed an AI-generated nude picture of herself to the NZ parliament. She made it in less than five minutes and shared it because “it needed to be shown how important this is and how easy it is to do, and also how much it can look like yourself.” McClure stated that the lack of regulation on AI-generated sexual and harmful content needs to change. 

AI misuse is not limited to images. Some people have reported receiving phone calls from panicked family members asking for money, only to discover the caller was an AI-generated clone of their relative’s voice. The same technology can bypass a bank’s voice-verification check. At this rate, AI threatens to become a boon for con artists rather than a tool for productivity.

When someone is scared of what AI can do, it is hard not to put them in the same box as people who are simply afraid of new technology. However, this fear comes from understanding: the more I learn about what AI can do and how little security surrounds it, the harder it becomes not to fear it. You probably feel confident that you can tell which images are AI-generated, and for the most part, that’s true. AI images and videos can be obvious and terribly made, but not always. Gen AI has improved to the point where distinguishing the real from the fake is becoming nearly impossible.

Naturally, one might think the solution is to restrict what AI models are allowed to do. In 2023, one of the most cited AI researchers, Yoshua Bengio, along with more than 30,000 others, signed an open letter calling on AI labs to pause for six months before training systems more powerful than GPT-4, with the goal of allowing more time to plan for and manage the risks of AI development. The industry largely ignored the letter. Not only that, but there are several reasons why the terms of service set by AI companies are not necessarily enforceable.

Companies currently have limited control over what people do with their models once they are released, especially open-source AI. Service terms are vague and broad, making them difficult to enforce legally. There are no real consequences for violators of the terms and conditions beyond getting their accounts banned. Most importantly, AI models cannot consistently detect when they’re being used in violation of those terms.

If regulations on AI are overly strict, they could stifle innovation. However, something like compulsory watermarking of AI-generated content could be a way out of this predicament. A signal embedded in the file, invisible to the naked eye, that tells the hosting platform the content is AI-generated could help counter the sheer amount of misinformation and non-consensual explicit images online.
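To make the idea concrete, here is a toy sketch of one classic invisible-watermarking technique: hiding a short tag in the least-significant bits of an image’s pixel values. This is an illustration only, not any platform’s or regulator’s actual scheme (real systems, such as cryptographically signed provenance metadata or statistical watermarks baked into the generation process, are far more robust to cropping and re-encoding); the function names and the `b"AI-GEN"` tag are invented for the example.

```python
def embed_watermark(pixels, tag=b"AI-GEN"):
    """Store each bit of `tag` in the lowest bit of successive pixel values.

    Changing only the lowest bit shifts a pixel's brightness by at most 1
    out of 255, which is invisible to the naked eye.
    """
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite only the lowest bit
    return marked

def read_watermark(pixels, length=6):
    """Reassemble `length` bytes from the lowest bits of the pixel values."""
    out = []
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

image = [200, 201, 199, 180] * 20   # stand-in for real pixel data
marked = embed_watermark(image)
print(read_watermark(marked))       # the tag survives, the image looks identical
```

A scheme this naive is trivially destroyed by resizing or re-compressing the image, which is exactly why mandated watermarking would need standardized, tamper-resistant methods rather than ad-hoc ones.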

We are close to having the technology that stories have imagined for generations; however, we are ignoring the warnings that came with those stories and treating AI with little caution. The line between a tool and a weapon is thin, and far too many people are crossing it by misusing gen AI. AI companies are trying to maintain control over how their products are used, but it is not enough. We desperately need dedicated legislation to control it.
