AI knows it all, but what happens when it makes it up?
I remember research analysts being the most frustrated group back in November 2022, when ChatGPT exploded onto the tech scene. They were being asked to experiment with and use AI in their workflows, but it didn't take long for them to hit a major stumbling block. After all, would you risk your career and credibility on a new technology fad?
While content creators like myself, data scientists, and engineers were thriving on AI adoption, we could only empathize with our research analyst peers as we partnered with them to find new ways to make OpenAI, Gemini, LangChain, and Perplexity cater to their requirements. Everyone tried building trust in AI as we put on our researcher hats.
But soon, the consensus was that AI hallucinations were a problem for knowledge workers, whether you were a researcher, content creator, developer, or business leader.
Fast forward to 2025, and despite all the advancements in AI, hallucinations haven't disappeared. While companies like Anthropic, OpenAI, and NVIDIA are pushing the boundaries of AI reasoning models, the ghost of hallucinations still lingers. Our latest G2 LinkedIn poll reveals that nearly 75% of professionals have experienced AI hallucinations, with over half (52%) saying they have experienced them multiple times.
These new developments might promise smarter, faster, and more reliable AI, but the question remains: are they strong enough to keep hallucinations at bay?
Let's take a closer look at the latest AI LLM updates shaping the industry:
A timeline of key AI LLM model updates in 2025
- February 24, 2025: Anthropic released Claude 3.7 Sonnet, the world's first hybrid reasoning AI model, enhancing and expanding output limits
- February 27, 2025: OpenAI unveiled GPT-4.5 Orion, integrating various technologies into a unified model for streamlined AI applications
- March 18, 2025: NVIDIA announced the open Llama Nemotron family of models with reasoning capabilities to empower enterprises
- March 20, 2025: At GTC 2025, NVIDIA launched NVIDIA Dynamo, open-source software designed to accelerate and scale AI reasoning models in AI factories
Hallucinations, the 'Answer Economy', and real-world challenges
As AI models evolve with new capabilities, the way we interact with information is also transforming. We're witnessing the rise of a mega-trend that our very own Tim Sanders calls the "Answer Economy." People are moving from search-based research to an answer-driven style of learning, buying, and working.
But there's a catch in all of this. AI chatbots seem to deliver instant, confident responses, even when they're wrong. And despite accuracy concerns, these AI-generated answers are influencing decisions across industries. This shift poses a critical question: are we too quick to accept AI's responses as truth, especially when the stakes are high? How strong is our trust in AI?
While AI chatbots are shaking up search and AI companies are leaping toward agentic AI, how strong are their roots when hallucinations still haunt them? AI hallucinations can be as trivial as Gemini telling people to eat rocks and put glue on pizza, or as big as fabricated claims like the ones below.
AI hallucinations: a timeline of legal challenges
- January 6, 2025: An AI expert's testimony was challenged in court for relying on AI-hallucinated citations in a deepfake-related lawsuit, raising concerns about the credibility of AI-generated evidence
- February 11, 2025: Lawyers in Wyoming faced potential sanctions for using AI-generated fictitious citations in a lawsuit against Walmart, highlighting the risks of relying on hallucinated data in legal filings
- March 20, 2025: OpenAI faced a privacy complaint in Europe after ChatGPT falsely accused a Norwegian individual of murder, raising concerns about reputational damage and GDPR violations
There were several other notable AI hallucination mishaps in 2024, involving brands like Air Canada, Zillow, Microsoft, Groq, and McDonald's.
So, are AI chatbots making life easier or just adding another layer of complexity for businesses? We combed through G2 reviews to uncover what's working, what's not, and where the hallucinations hit hardest.
The G2 take
A quick comparison of ChatGPT, Gemini, Claude, and Perplexity shows ChatGPT as the leader at a glance, with an 8.7/10 score. However, a closer look reveals that Gemini leads on reliability, by a slim margin.
Source: G2.com
While ChatGPT is better at learning from user interactions to reduce errors and understand context, Perplexity and Gemini beat it on content accuracy with an 8.5 score.
Source: G2.com
Nearly 35% of reviews highlight the accuracy gap
These AI chatbots are used across small businesses, SMEs, and enterprises by all kinds of professionals: research analysts, marketing leaders, software engineers, tutors, and more. And a deep dive into G2 review data reveals a glaring trend: inaccuracy remains a shared concern across the board.
We can't help but notice that, right off the bat, an average of ~34.98% of reviews raise concerns about inaccuracy, context understanding, and outdated information.
Source: Original G2 Data
Users aren't shy about flagging their frustrations. Out of the hundreds of reviews, accuracy concerns topped the list of cons:
- ChatGPT: 101 mentions of inaccuracy, with outdated information adding to the frustration
- Gemini: 33 instances of inaccurate responses, compounded by 26 complaints about context understanding
- Claude: Fewer reports, but with seven accuracy issues and five concerns about recognition
- Perplexity: While boasting quick insights, it wasn't immune; users pointed out seven limitations related to AI accuracy
While China's DeepSeek has turned heads and wreaked stock market havoc thanks to its speed and cost-saving go-to-market (GTM) product, it doesn't have a meaningful (and dare we say legal enough) presence in the USA, owing to valid concerns over security and potential data siphoning. Speculation about its reliability outweighs the allure of affordability.
Our VP of Insights, Tim Sanders, called out its hallucination rate in a recent interview.
"DeepSeek's R1 has an 83% hallucination rate for research and writing, which is much higher than the 10% hallucination rate of other AI platforms."
Tim Sanders
VP of Research Insights at G2
Gemini: the ironic productivity booster for research analysts
We noted that several research analysts use Gemini. Some particularly prefer its research mode and use it for academic and market research.
"Daily use, particularly in love with research mode. Gemini's speed enhances the browsing experience overall, especially for those who use the internet for intensive research and work tasks or who multitask."
Elmoury T.
Research Analyst
But here's the twist: research analysts aren't raving about Gemini for its research reliability. Instead, it's the seamless connectivity to Google's suite of tools and the customizable user experience that steal the spotlight. Productivity boosts, streamlined workflows, and smoother task management? Absolutely. Trusting it for rigorous research? Not so much.
While Gemini's research mode aggregates information from the web, accuracy and fact-checking aren't making the headlines. Memory management issues and sluggish performance also keep it from being a true research powerhouse.
Source: G2.com Reviews
ChatGPT: power player with precision pitfalls
From code generation to market research, ChatGPT has become a daily go-to for professionals to brainstorm, generate content quickly, and answer complex questions. Yet accuracy concerns persist.
Geopolitical topics and nuanced research often lead to misleading results. Context understanding is solid, but misinformation and hallucinations still plague users.
User reviews praise ChatGPT's polished tone and contextual understanding, but that confidence often masks the occasional hallucination. Users highlighted its tendency to produce plausible-sounding but inaccurate information, especially in complex or nuanced scenarios like geopolitics. It's a textbook case of "sounding good but not always being right."
Paid account users are impressed with its new multimodal inputs, voice interactions, and memory retention, but they also highlight its limitations in data analysis, image creation, and overall accuracy.
Overall, paid users find the product pricey compared to the free alternatives available on the market, owing to ChatGPT's server downtime and accuracy issues.
Source: G2.com Reviews
G2 reviews also surfaced how users go back and forth with ChatGPT to get their desired results. At times, users ran out of allotted tokens quickly, leaving their queries unresolved.
Source: G2.com Reviews
But for some users, the benefits far outweigh the pitfalls. For instance, in industries where speed and efficiency are critical, ChatGPT is proving to be a game-changer.
G2 Icon use case
Peter Gill, a G2 Icon and freight broker, has embraced AI for industry-specific research. He uses ChatGPT to analyze regional produce trends across the U.S., identifying where seasonal peaks create opportunities for his trucking services. By cutting his weekly research time by up to 80%, AI has become a critical tool in optimizing his business strategy.
"Traditionally, my weekly research could take me over an hour of manual work, scouring data and reports. ChatGPT has slashed this process to just 10-15 minutes. That's time I can now invest in other critical areas of my business."
Peter Gill
G2 Icon and Freight Broker
Peter argues that AI's benefits extend far beyond the logistics sector, making it a powerful ally in today's data-driven world.
Perplexity: speed meets smarts, with a side of stumbles
Perplexity's external web search capability and rapid updates have earned it a solid fanbase among researchers. Users praise its ability to provide comprehensive, context-aware insights, and its frequent integration of the latest AI models keeps it a step ahead.
But it's not all sunshine and summaries. Users flagged issues with data export, making it harder to translate insights into actionable reports. Minor UX improvements could also significantly elevate its user experience.
Michael N., a G2 reviewer and head of customer intelligence, said that Perplexity Pro has transformed how he builds knowledge.
Source: G2.com Reviews
"Easiest way of conducting tiny and complex research with proper prompting."
Business leaders and CMOs like Andrea L. are using different AI chatbots to supplement, complement, or complete their research.
Source: G2.com Reviews
G2 Icon use case
Luca Piccinotti, a G2 Icon and CTO at Studio Piccinotti, uses AI to navigate complex market dynamics. His team uses AI to process vast amounts of data from surveys, social media, and customer feedback for sentiment analysis, helping them gauge public opinion and spot emerging trends. AI also streamlines their survey workflows by automating question generation, data collection, and analysis, making their research more efficient.
To translate insights into actionable strategies, Luca relies on predictive analytics to forecast consumer behavior, monitor competitors, and personalize marketing campaigns. His preferred AI tools? Perplexity for research and ChatGPT for managing and refining the data.
"Perplexity is our trusted companion for research purposes, while we use ChatGPT for managing the obtained data. We also use additional tools and wrappers, APIs, local models, and so on. But the unbeatable ones are Perplexity and ChatGPT at the moment."
Luca Piccinotti
G2 Icon and CTO at Studio Piccinotti
Claude: a fairly honest, human-like, data-deficient counterpart
Claude's conversational tone and contextual understanding shine through in reviews. Users appreciate its willingness to admit when it doesn't know something rather than hallucinating a response. That level of transparency builds trust.
Still, limited training data and capability gaps compared to rivals like ChatGPT remain areas for improvement. And while its strengths lie in conversational accuracy, its structured data analysis is still a work in progress.
Unlike most AI chatbots that confidently provide incorrect answers, Claude earns appreciation for its transparency when it doesn't know something. This "honesty over hallucination" approach is a unique selling point, making it a preferred choice for users who value reliable feedback over speculative responses.
Source: G2.com Reviews
However, users also expressed frustration with Claude's professional mode, citing its usage limits and lack of customer service.
Source: G2.com Reviews
Verdict: AI for research, yay or nay?
It's a cautious yay, which is still better than the classic "it depends."
AI chatbots are undeniably valuable research tools, especially for speeding up information gathering and summarizing. But they're not flawless.
4 key takeaways
Hallucinations, accuracy issues, and inconsistent reliability remain challenges.
- Gemini might be your productivity sidekick, just not your research fact-checker, if you're a research analyst who values integration and productivity over pinpoint accuracy.
- ChatGPT is a productivity booster for quick research tasks, but fact-checking remains a must, even if you're paying a premium for the paid subscription.
- Perplexity is a reliable knowledge companion for researchers who value speed and cutting-edge AI.
- Claude is the choice for those seeking honest, human-like responses, but don't expect it to crunch complex datasets.
My tried-and-tested prompting hacks to avoid AI hallucinations
- Prompt structure = be precise + give context + specify the desired outcome + warn it about what its output should not include + share an example if possible (see the sketch after this list)
- Use a prompt that calls on the AI's chain-of-thought reasoning to check accuracy and catch hallucinations. Ask the AI chatbot: "Break down the steps you followed to produce this output. Also, can you explain your rationale for doing so?"
- Use templatization and follow organization-wide guidelines for using AI chatbots and LLMs at work
- Humans in the loop remain vital, especially in high-stakes environments like legal research, market research, medical research, financial research, and so on
- Always verify and cross-check sources. We know life gets busy, but a quick check is always cheaper than a lawsuit!
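To make the first two hacks concrete, here is a minimal sketch of that prompt structure plus a chain-of-thought verification pass, written against the OpenAI Python SDK. The model name, prompt wording, and freight-market example are illustrative assumptions, not an official template; adapt them to whichever chatbot or API you use.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hack #1: precise ask + context + desired outcome + output constraints + a format example
prompt = (
    "You are helping a research analyst summarize the U.S. produce-freight market.\n"  # context
    "Task: list the three biggest seasonal demand peaks by region.\n"                   # desired outcome
    "Constraints: cite only sources you can name, and if you are unsure, "
    "say so explicitly instead of guessing.\n"                                          # what the output should not contain
    "Format example: 'Region - Season - Why demand peaks - Source'."                    # example of the expected shape
)

first = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in whichever model you actually use
    messages=[{"role": "user", "content": prompt}],
)
answer = first.choices[0].message.content

# Hack #2: ask the model to walk through its reasoning so you can spot hallucinations
verification = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Break down the steps you followed to produce this "
                                    "output, explain your rationale, and flag any claim "
                                    "you cannot trace to a source."},
    ],
)
print(answer)
print(verification.choices[0].message.content)
```

The second call doesn't guarantee correctness, but forcing the model to retrace its steps makes unsupported claims much easier for a human reviewer to catch.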
Hallucinate less, verify more: avoid the AI tunnel-vision trap
Expect AI models to double down on accuracy and transparency. Advances in multimodal AI and retrieval-augmented generation (RAG) could reduce hallucinations. Perplexity, OpenAI, Google, and Anthropic now have their own AI search capabilities, which can plug into real-time user data to sharpen the accuracy and relevance of outputs; the sketch below shows the basic idea behind RAG.
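As a rough illustration of why retrieval grounding helps, here is a minimal RAG sketch under the same assumptions as before (OpenAI Python SDK, an assumed model name, and a toy keyword retriever over a tiny document list drawn from the timeline earlier in this piece); a production system would use embeddings and a vector store instead.

```python
from openai import OpenAI

# Tiny in-memory "knowledge base" drawn from the 2025 timeline above
documents = [
    "Anthropic released Claude 3.7 Sonnet on February 24, 2025.",
    "NVIDIA announced the open Llama Nemotron model family on March 18, 2025.",
    "At GTC 2025 on March 20, NVIDIA launched Dynamo to scale AI reasoning models.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy keyword-overlap scoring; real systems use embeddings plus a vector store
    query_terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(query_terms & set(d.lower().split())))[:k]

client = OpenAI()  # assumes OPENAI_API_KEY is set
question = "When did Anthropic release Claude 3.7 Sonnet?"
context = "\n".join(retrieve(question, documents))

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": (
            "Answer using ONLY the context below. If the answer is not in the "
            f"context, say you do not know.\n\nContext:\n{context}\n\nQuestion: {question}"
        ),
    }],
)
print(response.choices[0].message.content)
```

The point is simply that grounding the model in retrieved, verifiable passages narrows the room for fabricated answers, which is the same idea the vendors' AI search features apply at a much larger scale.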
Even though newer models like DeepSeek R1 are being built at one-tenth the cost of leading rivals, their trustworthiness will determine their fate in the global market.
In the end, AI chatbots and LLMs are your research sidekick, not your fact-checker. Use them wisely, question relentlessly, and let the data, not the chatbot, lead the way.
Enjoyed this deep-dive analysis? Subscribe to the G2 Tea newsletter today for the hottest takes delivered to your inbox.
Edited by Supanna Das