Asking any of the popular chatbots to be more concise "dramatically impact[s] hallucination rates," according to a recent study.
French AI testing platform Giskard published a study analyzing leading chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. The researchers found that asking the models to be brief in their responses "specifically degraded factual reliability across most models tested," according to the accompanying blog post, as reported by TechCrunch.
When users instruct a model to be concise in its explanation, it ends up "prioritiz[ing] brevity over accuracy when given these constraints." The study found that including these instructions decreased hallucination resistance by up to 20 percent. In the analysis, which measured sensitivity to system instructions, Gemini 1.5 Pro dropped from 84 to 64 percent hallucination resistance under short-answer instructions, and GPT-4o fell from 74 to 63 percent.
Giskard attributed this effect to more accurate responses often requiring longer explanations. "When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely," said the post.
Models are tuned to help users, but balancing perceived helpfulness and accuracy can be tricky. OpenAI recently had to roll back a GPT-4o update for being "too sycophant-y," which led to disturbing instances of the model supporting a user who said they were going off their meds and encouraging another user who said they felt like a prophet.
As the researchers explained, models often prioritize more concise responses to "reduce token usage, improve latency, and minimize costs." Users might also explicitly instruct a model to be brief to cut their own costs, which could lead to outputs with more inaccuracies.
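For context, a brevity constraint like the ones the study tested typically reaches a model as a system instruction prepended to the conversation. The following is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and test question are illustrative assumptions, not the prompts Giskard actually benchmarked.

```python
# Minimal sketch of the system-instruction sensitivity the study measured:
# the same question asked under a baseline prompt and a brevity constraint.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# set in the environment; all prompt text here is illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "What caused the 1986 Challenger disaster?"

for system_prompt in (
    "You are a helpful assistant.",    # baseline instruction
    "Answer in one short sentence.",   # brevity constraint of the kind tested
):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- system prompt: {system_prompt!r}")
    print(response.choices[0].message.content)
```

Comparing the two answers against a known-good reference is, in essence, what a hallucination-resistance benchmark automates at scale across many questions and models.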
The study also found that presenting controversial claims with confidence, such as prefacing them with "'I’m 100% sure that …' or 'My teacher told me that …'", leads chatbots to agree with users more often rather than debunk falsehoods.
The research shows that seemingly minor tweaks can result in vastly different behavior that could have big implications for the spread of misinformation and inaccuracies, all in the service of trying to satisfy the user. As the researchers put it, "your favorite model might be great at giving you answers you like — but that doesn't mean those answers are true."
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.