A recent study has found that artificial intelligence (AI) tools could be exploited by malicious actors to steal user passwords by listening to keystrokes, and that they can do so with near-perfect accuracy.
As detailed in the study, which was posted to the arXiv preprint repository hosted by Cornell University, an AI program identified typed passwords with 95% accuracy when the keystrokes were recorded by a nearby smartphone.
A team of computer scientists in the United Kingdom trained an AI model to identify the sounds produced by keystrokes on a 2021 MacBook Pro, a laptop commonly available to the public.
The AI tool remained highly accurate even during a Zoom video conference, correctly interpreting typing sounds picked up by the laptop’s built-in microphone.
The researchers warn that this technique, known as an “acoustic side-channel attack,” poses a significant threat because users tend to underestimate how much their typing sounds give away, concentrating instead on hiding their screens during sensitive tasks such as entering a password.
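To make the idea concrete, the sketch below (not taken from the paper) shows one way an attacker could isolate individual keystroke sounds from a recording captured by a nearby microphone. The file name, thresholds, and window sizes are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: isolate individual keystroke sounds from a
# recording ("typing.wav" is an assumed file name), using short-time energy
# spikes to find key presses. All thresholds are arbitrary choices.
import numpy as np
from scipy.io import wavfile

def find_keystrokes(path, frame_ms=10, threshold_ratio=6.0, clip_ms=300):
    rate, audio = wavfile.read(path)
    audio = audio.astype(np.float32)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)

    frame = int(rate * frame_ms / 1000)
    # Short-time energy per frame: key presses show up as sharp spikes.
    energy = np.array([
        np.sum(audio[i:i + frame] ** 2)
        for i in range(0, len(audio) - frame, frame)
    ])
    threshold = threshold_ratio * np.median(energy)

    clips, half = [], int(rate * clip_ms / 2000)
    last_end = -1
    for idx in np.where(energy > threshold)[0]:
        center = idx * frame
        if center <= last_end:              # skip frames inside the previous clip
            continue
        clips.append(audio[max(0, center - half):center + half])
        last_end = center + half
    return rate, clips

rate, keystroke_clips = find_keystrokes("typing.wav")
print(f"isolated {len(keystroke_clips)} candidate keystrokes")
```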
To measure the precision of their approach, the researchers pressed 36 of the laptop’s keys 25 times each, varying the pressure applied and the finger used. The AI program learned to tell keystrokes apart from subtle characteristics of their sounds, such as differences in their sound waves, and used those differences to achieve its accuracy.
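The article does not describe the authors’ exact model, but a common approach to this kind of classification is to turn each isolated keystroke clip into a spectrogram and train an off-the-shelf classifier over the 36 key labels. The sketch below assumes the clips produced by the previous example, equal clip lengths, and digits plus lowercase letters as the label set.

```python
# Illustrative sketch only: classify isolated keystroke clips, not the
# authors' exact pipeline. Each clip becomes a log mel-spectrogram feature
# vector fed to a small neural-network classifier over 36 key labels.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

KEYS = list("0123456789abcdefghijklmnopqrstuvwxyz")   # 36 assumed classes

def spectrogram_features(clip, rate):
    # Log-scaled mel-spectrogram, flattened into a fixed-length vector.
    mel = librosa.feature.melspectrogram(y=clip, sr=rate, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max).flatten()

def train_keystroke_classifier(clips, labels, rate):
    # clips: equal-length audio arrays; labels: the key typed for each clip.
    X = np.stack([spectrogram_features(c, rate) for c in clips])
    y = np.array([KEYS.index(k) for k in labels])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```

In practice, more keystroke samples per key and a stronger image-style classifier over the spectrograms would be needed to approach the accuracy reported in the study.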
For the experiment, an iPhone 13 mini was positioned approximately 17 centimeters from the keyboard.