Our latest comparison pits Meta AI, ChatGPT, and Google Gemini against each other to find out which comes out on top.
We’re currently witnessing a competitive race among three chatbot services, each supported by a major tech company: Meta’s AI, OpenAI’s ChatGPT, and Google’s Gemini. Since ChatGPT popularized generative AI and its numerous potential uses, the rivalry between these three has intensified significantly.
It’s intriguing to observe the rapid development in this field and consider how these chatbots are continuously advancing. For now, let’s assess their current capabilities by evaluating their performance in various everyday tasks such as emails, math, recipes, programming, and more.
We’ll determine which AI chatbot delivers the most comprehensive and accurate answers, providing sources when necessary. For this evaluation, we are using ChatGPT with the GPT-4 model.
META AI VS CHATGPT VS GOOGLE GEMINI: EMAILS
(Image credit: Google)
Many professionals now use AI to assist with routine work tasks, so I began by asking all three AI chatbots to “write me an email for work asking for a project extension.” Each chatbot generated a well-written email that effectively achieved the main goal of the prompt, while maintaining a polite and professional tone. The emails were also in template form, allowing for personalization with more specific information.
For email writing, Meta AI, ChatGPT, and Google Gemini all received perfect scores. However, this was the simplest task—we’ll address the more challenging prompts next.
META AI VS CHATGPT VS GOOGLE GEMINI: RECIPES
(Image credit: Google)
For this task, I requested the chatbots to provide a recipe for chili. Each chatbot delivered accurate and detailed recipes, albeit with slight variations, which I verified against my own knowledge of making chili.
However, a significant difference emerged regarding how the chatbots sourced their recipes. Both Meta AI and Gemini provided sources at the end of the recipe, including links to the websites used. Gemini went further by linking additional related recipes.
In contrast, ChatGPT did not provide any source; it simply presented the entire recipe without attribution. This raises concerns about potential plagiarism or the accuracy of the recipe’s origins. Given AI’s imperfections, relying solely on ChatGPT’s recipe could pose risks, especially for inexperienced cooks who lack the ability to verify the information.
In conclusion, for recipes, I would recommend using Gemini or Meta AI due to their ability to trace and verify sources, ensuring greater reliability in terms of food safety and authenticity.
META AI VS CHATGPT VS GOOGLE GEMINI: SUMMARIZE NEWS
(Image credit: Shutterstock / Tero Vesalainen)
I asked each chatbot to provide a bulleted list of the latest news for [current date], and all three were able to respond promptly. However, they all presented headlines without much context about the stories themselves. The key difference between the AI chatbots lay in how they sourced the news, or if they did at all.
Both ChatGPT and Meta AI included direct links to the news outlets they referenced. ChatGPT even linked to multiple sources after each headline it quoted. In contrast, Gemini mentioned various news sites but did not provide links to the specific pages it sourced.
For news updates, ChatGPT and Meta AI appear to be the more reliable choices: their direct source links provide transparency and credibility, rather than presenting information from unnamed sources without proper citation.
META AI VS CHATGPT VS GOOGLE GEMINI: MATH
(Image credit: Shutterstock / InspiringMoments)
I posed two sets of math problems to the three chatbots: one in algebra and the other in geometry.
For the algebra problem, which involved determining all possible values of the expression A³ + B³ + C³ – 3ABC where A, B, and C are nonnegative integers, all three chatbots used different methods but reached the same solution.
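For context, the expression in that problem has a standard factorization, A³ + B³ + C³ − 3ABC = (A + B + C)(A² + B² + C² − AB − BC − CA), which most solution methods lean on. The quick numeric check below (my own illustration, not any chatbot's output) verifies the identity for small nonnegative integers:

```javascript
// The expression from the algebra prompt.
function expr(a, b, c) {
  return a ** 3 + b ** 3 + c ** 3 - 3 * a * b * c;
}

// Its well-known factorization: (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca).
function factored(a, b, c) {
  return (a + b + c) * (a * a + b * b + c * c - a * b - b * c - c * a);
}

// Sanity-check the identity over all nonnegative integers up to 5.
for (let a = 0; a <= 5; a++) {
  for (let b = 0; b <= 5; b++) {
    for (let c = 0; c <= 5; c++) {
      if (expr(a, b, c) !== factored(a, b, c)) {
        throw new Error(`Identity fails at ${a}, ${b}, ${c}`);
      }
    }
  }
}
```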
The geometry problem, which concerned a triangle ∆ABC with specific conditions involving its centroid G, the center of the inscribed circle I, and angles α and β, proved challenging for the chatbots. ChatGPT initially made progress but did not provide a final answer. Gemini discussed the problem theoretically without inserting numeric values, offering insights into principles rather than a specific answer. Only Meta AI correctly solved the problem and provided a definitive answer.
For those seeking a chatbot capable of solving math problems, Meta AI emerges as the most reliable option based on this evaluation.
META AI VS CHATGPT VS GOOGLE GEMINI: PROGRAMMING
(Image credit: Shutterstock / BEST-BACKGROUNDS)
I presented each AI chatbot with the following programming task, based on a similar one used previously for an older version of ChatGPT:
“I want to create a variant of the game tic-tac-toe, but with more complexity. The grid should be 12-by-12 and use ‘x’ and ‘o’. Players can block each other by placing their ‘x’ or ‘o’ in any adjacent space on the grid. The goal is to be the first to achieve at least six ‘x’ or ‘o’ in a row, column, or diagonal. One player is ‘x’ and the other is ‘o’. Please program this in simple HTML and JavaScript. Let’s call this game: Tic-Tac-Go.”
Each chatbot was expected to provide complete code in both HTML and JavaScript for this task.
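To give a sense of the logic this prompt demands, here is a minimal sketch of a win check for the 12-by-12 grid (my own illustration, not any chatbot's actual output; the board is assumed to be a flat 144-cell array holding 'x', 'o', or null):

```javascript
const SIZE = 12;        // 12-by-12 grid, per the prompt
const WIN_LENGTH = 6;   // six in a row, column, or diagonal wins

// Returns true if `mark` ('x' or 'o') has WIN_LENGTH consecutive cells
// in any row, column, or diagonal of the flat `board` array.
function checkWin(board, mark) {
  // Directions to scan: right, down, down-right, down-left.
  const dirs = [[0, 1], [1, 0], [1, 1], [1, -1]];
  for (let r = 0; r < SIZE; r++) {
    for (let c = 0; c < SIZE; c++) {
      for (const [dr, dc] of dirs) {
        let count = 0;
        // Count consecutive marks starting at (r, c) in this direction.
        for (let k = 0; k < WIN_LENGTH; k++) {
          const nr = r + dr * k;
          const nc = c + dc * k;
          if (nr < 0 || nr >= SIZE || nc < 0 || nc >= SIZE) break;
          if (board[nr * SIZE + nc] !== mark) break;
          count++;
        }
        if (count >= WIN_LENGTH) return true;
      }
    }
  }
  return false;
}
```

A full solution would also need the HTML grid, click handling, and turn-taking, but this core check is the part the chatbots most often get wrong on larger boards.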
Meta AI and ChatGPT delivered exactly what was requested in both HTML and JavaScript. Gemini, however, provided JavaScript code but incorrectly substituted CSS for HTML, and the two are not interchangeable. As Testbook puts it, “HTML provides the structure and content of a web page, while CSS provides the visual design.”
For those seeking an AI chatbot capable of generating reliable programming code, both Meta AI and ChatGPT are recommended choices based on this assessment.
META AI VS CHATGPT VS GOOGLE GEMINI: MOCK INTERVIEW
(Image credit: Google)
The final test I conducted was to simulate a mock interview for a position as a computing staff writer at a prominent online tech publication. Each chatbot created a scenario where I engaged in a simulated interview with an interviewer, including mock questions and responses.
Although each chatbot took a different approach to the mock interview, all produced satisfactory results. While additional details would be needed to fully role-play with the bot, these simulations serve as effective starting points to gain insights into interview techniques and potential questions that may arise.
META AI VS CHATGPT VS GOOGLE GEMINI: VERDICT
(Image credit: Google)
After analyzing the results, it appears that Meta AI emerges as the top AI chatbot overall. Among the three, Meta AI consistently performs well across a diverse range of prompts, establishing itself as the most dependable option compared to its competitors.
ChatGPT falls in the middle, delivering reasonably consistent results. Compared with the older GPT-3.5 model, I observed a significant improvement, indicating OpenAI’s ongoing enhancements to its language model.
Unfortunately, Google’s Gemini ranked last and exhibited the least consistency among the group of AI chatbots. This aligns with its initial challenges when it was known as Google Bard, and it continues to trail behind its competitors in performance improvements.