Wednesday, January 22, 2025

Even advanced LLMs are vulnerable to simple manipulations

Today’s LLMs can be misused.

Recent research from EPFL shows that even the latest safety-aligned LLMs are susceptible to simple prompt manipulations known as adaptive jailbreaking attacks. These manipulations can steer a model into producing harmful or otherwise unintended responses.

Researchers from the School of Computer and Communication Sciences' Theory of Machine Learning Laboratory (TML) tested many leading LLMs with adaptive attacks, which succeeded 100% of the time: none of the models were able to resist them.
