Researchers from George Mason University have demonstrated an approach to manipulating artificial intelligence (AI) models by altering a single binary digit in their memory.
This type of attack, named “Oneflip”, targets the stored values, known as weights, that determine how an AI system functions. These values are kept as strings of 1s and 0s in a computer’s memory.
If one of these bits is changed in the right location, it can shift the AI’s behavior without reducing its overall accuracy.
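To see why one bit can matter so much, note that model weights are commonly stored as 32-bit floating-point numbers, where a flip in an exponent bit changes the value by orders of magnitude while a flip in a low mantissa bit barely moves it. A minimal Python sketch of this effect (the weight value 0.5 and the bit positions are illustrative, not taken from the research):

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    raw ^= 1 << bit  # toggle the chosen bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", raw))
    return flipped

weight = 0.5
# Flipping the lowest mantissa bit perturbs the weight only slightly...
print(flip_bit(weight, 0))   # ~0.50000006
# ...while flipping an exponent bit (bit 30) makes it astronomically large.
print(flip_bit(weight, 30))  # ~1.7e38
```

Whether a given flip is harmless or drastic therefore depends entirely on which bit is hit, which is what makes choosing the location so important for the attacker.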
The underlying method borrows from a known hardware flaw called Rowhammer. This technique involves repeatedly accessing one part of a memory chip to unintentionally change the value of a nearby bit.
The new research aims this method at the memory regions that store AI parameters, steering the AI’s behavior with just a single flip.
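Real Rowhammer requires bypassing the CPU cache and reading aggressor DRAM rows millions of times; the toy Python simulation below only models the outcome described above: hammering rows adjacent to a victim row eventually flips one of its bits. The row layout, access count, disturbance threshold, and choice of flipped bit are all illustrative assumptions, not measured hardware behavior:

```python
import random

def hammer(memory, aggressor_rows, accesses=200_000, threshold=139_000):
    """Toy simulation of a Rowhammer-style disturbance error.

    `memory` is a list of per-row byte values; repeatedly "accessing"
    the aggressor rows accumulates disturbance in their physical
    neighbours, and a neighbour disturbed past the threshold has one
    of its bits flipped.
    """
    disturbance = {row: 0 for row in range(len(memory))}
    rng = random.Random(0)
    for _ in range(accesses):
        row = rng.choice(aggressor_rows)
        for victim in (row - 1, row + 1):  # physically adjacent rows
            if 0 <= victim < len(memory):
                disturbance[victim] += 1
    # A sufficiently disturbed neighbour suffers a single-bit flip.
    for victim, count in disturbance.items():
        if count >= threshold and victim not in aggressor_rows:
            memory[victim] ^= 1 << rng.randrange(8)
    return memory

# Double-sided hammering: rows 1 and 3 squeeze victim row 2.
print(hammer([0, 0, 0, 0, 0], aggressor_rows=[1, 3]))
```

In this setup only row 2 collects disturbance from both aggressors, so it is the row whose bit flips, mirroring how real attacks sandwich a victim row between two hammered neighbours.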
To carry out the attack, an intruder first needs to run some kind of software on the same system as the target AI. This could happen through a malicious app, an infected file, or unauthorized access to a shared cloud service.
Once in, the attacker searches for a part of the model’s memory where a minor bit change would be useful without raising suspicion.
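That search step can be pictured as scanning each weight and each bit position for flips with an outsized effect. The sketch below uses a crude stand-in criterion, a flip that blows up a weight's magnitude, in place of the paper's actual selection logic, which tests whether a flip enables a backdoor while leaving accuracy intact; the threshold and helper names are illustrative:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out

def find_candidates(weights, min_ratio=1e3):
    """Toy search: report (weight index, bit position) pairs where a
    single flip multiplies the weight's magnitude by at least
    `min_ratio` -- a crude proxy for "useful to an attacker"."""
    hits = []
    for i, w in enumerate(weights):
        if w == 0:
            continue
        for bit in range(32):
            flipped = flip_bit(w, bit)
            if abs(flipped) / abs(w) >= min_ratio:
                hits.append((i, bit))
    return hits

print(find_candidates([0.5, -1.25]))
```

A real attacker would additionally verify that the candidate flip leaves the model's normal outputs unchanged, which is exactly why the altered model passes routine checks.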
A single altered bit doesn’t typically cause major performance issues. The AI still appears to function as expected, so most routine audits won’t spot anything wrong. It’s this stealthy nature that makes Oneflip especially difficult to detect.