Check out the latest on this and other exciting AI news.
Google’s reputation as the world’s most popular search engine has taken a hit since it started using AI Overviews, which place AI-generated answers at the top of search results. These summaries, powered by Google’s Gemini generative AI, aim to simplify searches by delivering quick answers drawn from authoritative sources. But the reliability of those sources is questionable, as some bizarre and incorrect responses have highlighted.
During its annual I/O developers’ conference, Google explained that AI Overviews were designed to make searching “faster and easier” by having Gemini scour authoritative sources online. Despite the goal of assisting users, the AI has produced some strange and incorrect answers. For instance, an AI response suggested eating rocks daily, citing a humor website, and another recommended adding non-toxic glue to pizza sauce. More disturbingly, it advised drinking urine to pass kidney stones.
Google has defended the accuracy of most AI Overviews, saying the feature was tested extensively before launch and that it appreciates user feedback. Still, after some of the odd responses surfaced, Google made more than a dozen changes to the AI Overviews system, including restricting answers related to current news and health topics. It also improved its detection of nonsensical queries and limited content from satire sites and user-generated sources.
Google clarified that these issues were not “hallucinations,” a common problem with large language models, but rather misinterpretations of queries or of nuance. The company is still refining AI Overviews, which is why critics accuse Google of using the world as its beta tester. The Washington Post noted the trend of launching AI products with much fanfare, only to pull back after issues arise, as happened when Google paused Gemini’s image-creation tool over bias concerns.
As Google and other AI companies, including OpenAI, Microsoft, and Apple, work to earn user trust, there are some workarounds. AI Overviews can’t be turned off, but CNET suggests using a browser other than Google’s Chrome. This adjustment period highlights the ongoing need to manage and improve AI tools to keep users safe and earn their trust.
OpenAI acknowledges that propagandists are exploiting its AI tools, and it rolls out new GPT-4o-powered features to users of the free version of ChatGPT.
It was another eventful week for OpenAI. The company faced allegations from Scarlett Johansson, who claimed OpenAI used her voice without permission to develop Voice Mode in ChatGPT. On May 29, OpenAI announced it had completed the rollout of new features (browsing, vision, data analysis, file uploads, and GPTs) to the free version of its ChatGPT chatbot, powered by its latest model, GPT-4o.
Voice Mode for GPT-4o is set to launch in alpha within the next few weeks, with early access available to subscribers of the $20-a-month Plus version, according to a company spokeswoman. A desktop app version of the GPT-4o-powered chatbot is also on the way, as noted in a blog post about the new features. The macOS desktop app began rolling out to Plus users in mid-May.
In a blog post dated May 30, the company also unveiled ChatGPT Edu, a GPT-4o-powered chatbot designed specifically to help universities ethically integrate AI into student, faculty, researcher, and campus operations. The new chatbot is expected to be released over the summer.
According to The Washington Post, the major development last week was OpenAI’s discovery that groups from Russia, China, Iran, and Israel were using its technology to try to influence global political discourse. The finding raises significant concerns about how generative AI could enable covert propaganda efforts by state actors, especially as the 2024 US presidential election approaches.
The Washington Post reported that the propagandists, whose accounts have been deleted, used ChatGPT to draft and translate posts in various languages and to build tools for automated social media posting. OpenAI said in a blog post that, despite these activities, none of the five groups behind the “deceptive activity” significantly increased their audience engagement or reach through its services.
Still, concerns remain about the evolving tactics of malicious actors. OpenAI reassured the public that it is actively addressing these threats: “Threat actors operate across the internet. So do we.” The company’s efforts have disrupted covert influence operations that used its AI models for tasks such as generating comments and articles, creating social media profiles, conducting research, debugging code, and translating texts.
OpenAI published a detailed, 39-page “Threat Intel Report” in PDF format, identifying groups with names like Bad Grammar, Zero Zeno, Doppelganger, and Spamouflage. The report could inspire a thriller film adaptation, perhaps even one generated by ChatGPT itself.
According to CNBC, former OpenAI board member Helen Toner stated that one reason for CEO Sam Altman’s temporary dismissal last November was his obstruction of the board’s oversight duties. Toner alleged on The TED AI Show podcast that Altman withheld information, misrepresented company events, and sometimes lied to the board. Specifically, she mentioned that the board was not informed in advance about the release of ChatGPT in November 2022, learning about it only when Altman and others announced it on Twitter (now X).
L’Oreal and other beauty brands are using AI to help consumers select products.
As AI weaves its way into daily life, beauty brands are increasingly adopting the technology to change how consumers get advice and make purchases. L’Oreal recently introduced a range of AI-driven tools offering skincare analysis, hair color assessment, and a chatbot that uses augmented reality to recommend products and let shoppers virtually try them on. CNET’s Katie Collins took a look at L’Oreal’s Beauty Genius app and its capabilities.
“If successful, Beauty Genius has the potential to eliminate the trial-and-error process often associated with purchasing skincare and cosmetics,” Collins explained. “This could reduce financial waste from buying products that don’t match our skin type and ultimately decrease the industry’s environmental impact by minimizing unused products sitting in medicine cabinets.”
The FCC has proposed a $6 million fine for the scammer behind robocalls impersonating President Biden.
The US Federal Communications Commission has proposed a $6 million fine against the scammer who used AI voice-cloning technology to create fraudulent robocalls impersonating President Joe Biden ahead of the New Hampshire presidential primary in January. It marks the FCC’s first enforcement action involving generative AI technology.
According to the agency’s press release, political consultant Steve Kramer engineered these “malicious” robocalls, which were distributed to thousands of voters, discouraging them from participating in the primary election. The Associated Press reported that the robocalls featured an AI-generated voice resembling President Biden, using his famous phrase “What a bunch of malarkey,” and falsely claiming that voting in the primary would prevent voters from casting ballots in November.
Kramer, who admitted to orchestrating the deepfake audio message, is also facing criminal charges. He did not respond to requests for comment about the proposed fine, but he told the AP in February that his intention was not to influence the outcome of the election but to raise awareness about the potential risks of artificial intelligence.
Following the incident, the FCC moved in February to outlaw AI-generated robocalls. In May, the agency proposed a rule that would require political advertisers to disclose their use of AI-generated content in television and radio ads. The FCC lacks jurisdiction over ads on digital and streaming platforms, however, as the AP noted.
Elon Musk secures $6 billion to develop an AI platform to challenge OpenAI.
Elon Musk, an OpenAI co-founder now locked in a legal dispute with the startup over its shift toward profitability (a shift he reportedly once supported), has raised $6 billion from venture capitalists and other investors to build a rival AI platform. Axios described the funding round as one of the largest in venture capital history, with backers including Andreessen Horowitz, Sequoia Capital, and Saudi Arabia’s Prince Alwaleed bin Talal, some of whom also back OpenAI.
Musk’s xAI startup, which launched in July 2023, calls its AI model Grok and in March released an open-source version, Grok-1, sparking debate about what “open source” means in this context. According to xAI’s blog post announcing the funding, the money will be used to bring its first products to market, build out infrastructure, and accelerate research and development of future technologies. The company emphasized its commitment to developing AI systems that are truthful, competent, and beneficial for humanity.
To compete with major AI players like OpenAI, Microsoft, Google, and Anthropic, all of which are investing heavily in large language models and AI chatbots, xAI plans to build a supercomputer to power the next iteration of Grok by fall 2025, The Information reported. Musk acknowledged at a recent technology conference in Paris that xAI has significant ground to cover to match the technology of OpenAI and Google, but he suggested it could close the gap by the end of the year.