
Grok-2’s new AI image generator sparks concern as it fuels a surge in deepfakes, raising serious ethical questions.

Elon Musk’s generative AI platform, xAI, has launched a new image generator capable of creating virtually anything, leading to a surge of deepfakes and other dubious imagery online.

This development is likely to spark yet another chapter in the ongoing struggle for oversight of emerging technology.

The latest Grok chatbot on X, formerly Twitter, now allows users to generate images from prompts and post them directly to their X accounts.

This feature is part of the new beta releases, Grok-2 and Grok-2 mini, which introduce a redesigned interface and additional features. The Grok-2 mini is a scaled-down version of the standard chatbot.

Grok users, who must subscribe to X’s Premium+ tier starting at $16 per month, eagerly embraced the new service, creating and posting images with enthusiasm.

However, the image generator lacks explicit guidelines to prevent misuse, and there is no watermark or indication that the images were created with AI.

Some of the images posted include depictions of Donald Trump and a pregnant Kamala Harris as a couple, as well as former Presidents George W. Bush and Barack Obama posing with illegal drugs.

Additional generated images include depictions involving guns, creating an illusion of violence, as well as sexualized themes. Examples include Mickey Mouse wearing a “Make America Great Again” cap while holding a cigar and beer, and images of prominent women like U.S. Vice President Kamala Harris and pop star Taylor Swift in lingerie.

Analysts suggest this new feature could lead to further regulatory challenges.

“As regulators grapple with challenges like misinformation and harmful content, the Grok image generator’s unrestricted output could make it even harder to maintain a safe and accurate online environment,” Sid Bhatia, regional vice president at New York-based AI company Dataiku, told The National.

The situation could speed up regulatory discussions and prompt quicker development of guidelines and laws to manage AI-generated content while balancing innovation and safety, said Andreas Hassellof, CEO of Switzerland-based tech consultancy Ombori.

“X’s approach to content moderation, fact-checking, and algorithmic amplification will be far more crucial than the initial image generation capabilities,” he added.

A challenging situation

OpenAI’s DALL·E 3, for example, emphasizes on its website that it actively works to prevent “harmful generations” to support its “risk assessment and mitigation efforts in areas like propaganda and misinformation.”

The absence of safety measures in Grok-2’s image generator raises concerns that it could be exploited to spread misinformation. xAI did not address these concerns in its release statement.

Grok-2 only mentions safety guidelines if specifically asked, responding with statements like, “I avoid generating images that are pornographic, excessively violent, hateful, or that promote dangerous activities,” as reported by The Verge.

“I won’t generate images that could be used to deceive or harm others, such as deepfakes intended to mislead or images that could cause real-world harm,” stated another response from Grok-2.

In contrast, OpenAI’s ChatGPT automatically rejects requests to create potentially harmful or offensive content.

Elon Musk has long advocated for free speech, and his acquisition of X has reinforced this position. While X has complied with government content moderation requests under his leadership, Musk has faced criticism for suspending accounts that have been critical of him.

One of X’s most controversial decisions came in June when the platform announced it would permit pornographic posts, provided they are “consensually produced, properly labeled, and not prominently displayed.”

“What makes Grok-2 truly revolutionary is its unrestricted image-generation capabilities, including the ability to produce NSFW [not safe for work] content—a feature that sets it apart from other AI models, which often impose strict ethical guidelines,” wrote Anakin.ai, a San Francisco-based generative AI platform, in a blog post.

Whether the absence of safeguards in Grok’s new image generator is Mr. Musk’s way of testing the limits of user and regulator patience remains unclear.


“His legal battles with various governments and the perceived inconsistency in his moderation practices create a complex picture of his commitment to free speech, suggesting that his approach may be shaped by personal and business considerations,” said Paul Turner, executive director at Capex.com for the Middle East.

The timing of Grok’s new feature is crucial, as social media and the growing influence of AI have blurred the lines between reality and fiction, enabling some users to exploit loopholes and circumvent existing rules.

“We expect that additional safeguards will be introduced,” said Alexander Ivanyuk, technology director at Switzerland-based cybersecurity firm Acronis, in an interview with The National. “While other generative AI systems have also undergone such processes, they can still be exploited by clever and creative individuals to produce potentially offensive content.”

However, it’s possible that Grok will introduce some restrictions, although they may be less stringent than those on rival platforms, he added.

“Ultimately, it’s difficult to fully prevent individuals from pushing boundaries and finding ways to bypass safeguards to produce provocative content,” Mr. Ivanyuk said.

Further misinformation

The launch of Grok-2’s image generator reignites the ongoing struggle with online misinformation, a significant issue in 2024, an election year in the U.S. where social media has become a key tool for candidates and their supporters.

Mr. Musk has grown increasingly political, often sharing his opinions on X, the platform he acquired for $44 billion in 2022. On July 14, he “fully endorsed” Mr. Trump, the presumptive Republican presidential nominee, just a day after an assassination attempt. This week, they conducted an interview together on X.

Already carrying inherent risks, the generator could further amplify harmful content, according to Mr. Bhatia.

“Grok’s ability to generate detailed responses on sensitive topics underscores the need for robust risk management, especially given the platform’s wide reach and the fast dissemination of information.”

AI image generators, particularly those used for deepfakes, pose notable copyright challenges. These tools might utilize images from other social media platforms without the original creator’s permission, leading to potential copyright infringements.

“Because AI-generated images lack human authorship, they are not eligible for copyright protection, which complicates the legal landscape,” Mr. Turner said.

“To tackle these issues, clearer licensing agreements, enhanced transparency in image usage, updates to copyright laws, and improved public education on AI and copyright may be required.”