Scaled-up LLMs are more prone to plausible yet wrong answers

Making them less reliable

Researchers at the Universitat Politècnica de València in Spain have found that as Large Language Models (LLMs) grow larger and more sophisticated, they become more likely to give plausible yet wrong answers, making them less reliable. The researchers analyzed three popular LLM families: OpenAI's GPT, Meta's LLaMA, and the BLOOM suite developed by BigScience. They observed that as accuracy increases with newer versions, so do hedging, refusal, and evasiveness. The study also found that LLMs rarely admit to a user that they do not know an answer.
