
The FCC is stepping up to ensure AI voice calls reveal when they’re deepfakes.

Transparency in AI is crucial to combat deception.

Robocalls can no longer masquerade as humans.

AI can now imitate human voices so convincingly that deepfakes can trick many people into believing they’re listening to a real person. As a result, AI-generated voices have been used for automated phone calls. To address the more harmful instances of this, the US Federal Communications Commission (FCC) is working to enhance consumer protections with a proposal designed to strengthen defenses against unwanted and illegal AI-generated robocalls.

The FCC’s plan aims to classify AI-generated calls and texts, enabling the commission to establish regulations and guidelines, such as requiring AI voices to disclose that they are artificial when making calls.

Given how readily AI lends itself to scams and other dubious communications, it’s not surprising that the FCC is working on regulations for these technologies. This initiative is part of the FCC’s broader effort to address robocalls as both a nuisance and a fraud risk. AI complicates the detection and avoidance of these schemes, prompting the proposal that would mandate the disclosure of AI-generated voices and messages. Under the proposal, calls must begin with the AI clearly stating that both the voice and the content are artificial. Organizations failing to comply would face substantial fines.

The new plan builds on the FCC’s earlier Declaratory Ruling, which declared that using voice cloning technology in robocalls is illegal without the recipient’s consent. This ruling stemmed from an incident where a deepfake voice clone of President Joe Biden, combined with caller ID spoofing, was used to spread misleading information to New Hampshire voters before the January 2024 primary election.

Leveraging AI for Assistance

In addition to targeting the sources of AI-generated calls, the FCC plans to introduce tools to alert individuals when they receive AI-generated robocalls and robotexts, especially those that are unwanted or illegal. This could involve improved call filters to prevent such calls, AI-based detection algorithms, or enhanced caller ID systems to identify and flag AI-generated communications.

For consumers, the FCC’s proposed regulations promise an added layer of protection against the increasingly sophisticated tactics employed by scammers. By promoting transparency and advancing detection tools, the FCC aims to reduce the risk of consumers falling prey to AI-generated scams.
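At its simplest, checking compliance with an up-front disclosure rule could amount to scanning the opening of a call transcript for an explicit AI statement. The toy sketch below illustrates that idea only; the phrase list, the 15-word window, and the function names are assumptions for illustration, not anything specified by the FCC proposal.

```python
# Toy sketch: flag call transcripts that lack an up-front AI disclosure.
# The disclosure phrases and the opening-window size are illustrative
# assumptions, not language from the FCC proposal.

DISCLOSURE_PHRASES = (
    "this is an ai",
    "artificial voice",
    "ai-generated",
    "generated by artificial intelligence",
)

def discloses_ai(transcript: str, window_words: int = 15) -> bool:
    """Return True if the call's opening words contain an AI disclosure."""
    opening = " ".join(transcript.lower().split()[:window_words])
    return any(phrase in opening for phrase in DISCLOSURE_PHRASES)

calls = [
    "Hello, this is an AI-generated call about your upcoming appointment.",
    "Hi, this is Sam from the billing department, please call us back.",
]
missing_disclosure = [c for c in calls if not discloses_ai(c)]
```

A real detection system would of course rely on acoustic and statistical signals from the audio itself rather than transcript keywords, but the example shows the shape of a disclosure check.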

AI-generated synthetic voices have also been used for many positive purposes. For example, they can help individuals who have lost their voice regain the ability to speak and provide new communication options for those with visual impairments. The FCC recognized these benefits in its proposal, even as it addresses the negative impacts of such technologies.

“Confronted with a surge of disinformation, about 75% of Americans are worried about misleading AI-generated content. This is why the Federal Communications Commission is centering its efforts on AI around a fundamental democratic principle—transparency,” stated FCC Chairwoman Jessica Rosenworcel. “The concerns about these technological advancements are legitimate. However, by prioritizing transparency and taking prompt action against fraud, I believe we can mitigate the risks and leverage the benefits of these technologies.”