Getty Images

On the heels of the recent 'fake' Joe Biden AI robocall incident, the FCC (Federal Communications Commission) has expanded existing law to ban the use of AI-generated voices in robocalls.

Now, AI cannot be used to create fake voices for robocalls

According to sources, the 1991 Telephone Consumer Protection Act (TCPA) has been expanded to cover the use of artificial intelligence to create fake human voices for robocalls. The Biden incident involved an AI-generated voice, used in thousands of robocalls prior to the New Hampshire Primary, in which the fake 'Joe Biden' urged voters not to participate in the primary election.


About 55 billion robocalls were made in the US in 2023, according to YouMail, a company that offers robocall-blocking services.

The law, according to CNN Business:

"With Thursday’s change, scam robocalls featuring cloned voices would be subject to the same fines and consequences associated with illegal robocalls that do not use the technology. The FCC had announced it was considering the proposal last week.

Violations of the TCPA can carry stiff civil penalties."

The only way an AI-generated voice can be used in a robocall is if the caller obtains the recipient's permission in advance to proceed with the call and its message.

The new mandates take effect immediately. Over the years, the government has forcibly shut down some phone service companies and providers that helped facilitate illegal robocalls, and has even confiscated their equipment.
