Hacking smartphones through voice commands
Researchers have succeeded in hacking the voice assistants of smartphones. Their findings could help Google and other manufacturers improve the security of their devices.
Over the past year, researchers at the University of California, Berkeley and Georgetown University have been scrutinizing the vulnerability of the voice recognition software that many devices rely on. They focused on Google Assistant, which is widely available on Android and through the Google app on iOS. The researchers developed a way to distort voice commands just enough that Google Assistant still recognizes them, while most people do not realize anything dangerous is being said.
The researchers tested obfuscated commands, such as a garbled rendering of "OK Google", against Google's speech recognition software and against human listeners to measure how often each could recognize them.
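To make that setup concrete, the sketch below feeds a pre-distorted recording to a speech recognizer and checks whether the hidden command is still transcribed. It is a minimal illustration only, assuming the Python SpeechRecognition package and a hypothetical file name; the researchers used their own pipeline.

```python
# Minimal sketch: run a distorted clip through a speech recognizer and see
# whether the hidden command survives. Assumes the SpeechRecognition package
# and a hypothetical recording "ok_google_distorted.wav".
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("ok_google_distorted.wav") as source:
    audio = recognizer.record(source)                  # load the whole clip

try:
    transcript = recognizer.recognize_google(audio)    # Google Web Speech API
    print("Recognized as:", transcript)
    if "ok google" in transcript.lower():
        print("The distorted clip still triggers the command.")
except sr.UnknownValueError:
    print("The recognizer could not make sense of the clip.")
```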
In the human experiment, only 5 percent of the listeners were able to identify the obfuscated command, while in the Google Assistant experiment the software recognized 8 percent of the obfuscated commands. The experiment showed that the software is better than humans at recognizing ambiguous, distorted words.
At first glance, many spoken commands can become ambiguous simply because of how, or how quickly, they are pronounced. As humans, we find it fairly easy to recognize distorted versions of words we have heard clearly before, but identifying words that demand real attention is a little harder for us.
The study also shows that compound phrases are easier to identify than other words. For example, "Call 911" is a compound phrase that humans detected at a rate of 1 percent and Google's speech recognition at 2 percent, perhaps because English speakers have heard those words before.
But changing the composition of the command, while assembling a combination of words that conveys the same instruction, is enough for the voice assistant to understand it even though it sounds different to people, and that is a clear danger to the many voice-controlled devices that accept commands without any form of speaker authentication.
How to deal with voice hacking
One preventive measure against having your device hacked through voice commands is to set the voice assistant to ask for confirmation after every voice command. Of course, this is only a drop in the ocean.
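As a rough illustration of that confirm-before-acting behaviour, the sketch below wraps every command in a yes/no prompt. The speak() and listen() helpers are stand-ins invented for this example; on a real assistant this is a settings toggle, not something you program yourself.

```python
# Minimal sketch of "confirm every command" (illustrative only).
def speak(text: str) -> None:
    print(f"Assistant: {text}")        # stand-in for text-to-speech output

def listen() -> str:
    return input("You: ")              # stand-in for speech-to-text input

def handle_command(command: str) -> None:
    """Ask the user to confirm before acting on any voice command."""
    speak(f'I heard "{command}". Should I do that? Say yes or no.')
    if listen().strip().lower() == "yes":
        speak(f'Okay, running "{command}".')
        # ... dispatch the command to the real handler here ...
    else:
        speak("Cancelled.")

if __name__ == "__main__":
    handle_command("call 911")
```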
Vulnerable assistants and possible safeguards
The research team found that assistant software from Google, Apple's Siri, and Amazon's Alexa could all be attacked this way. Each of these companies could, of course, implement its own defense against voice hacking. Some safeguards, such as voice verification or a spoken confirmation code acting as a digital signature, could serve as a final check separating a human from a machine. The researchers point out, however, that the algorithms proposed so far for verifying voice through audio codecs have become obsolete and have not kept pace with current software developments.
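One way to picture the spoken confirmation-code idea is a challenge-response step before sensitive commands, sketched below. The word list, phrase length, and exact-match rule are illustrative assumptions, not any vendor's actual scheme.

```python
# Minimal sketch of a spoken confirmation code: the device asks the user to
# repeat a random challenge phrase that an attacker could not have pre-recorded.
import secrets

WORDS = ["apple", "river", "candle", "orange", "planet", "silver", "meadow"]

def challenge_phrase(length: int = 2) -> str:
    """Pick a short random phrase to read back before a sensitive command runs."""
    return " ".join(secrets.choice(WORDS) for _ in range(length))

def reply_matches(expected: str, transcript: str) -> bool:
    """Accept the command only if the transcribed reply repeats the challenge."""
    return transcript.strip().lower() == expected.strip().lower()

# Usage: generate challenge_phrase(), ask the user to say it, transcribe the
# reply, and only act if reply_matches(challenge, transcript) is True.
```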
A more sophisticated method is speaker recognition, identifying the owner of the device by voice. Many devices already offer a limited form of this, but reports suggest it requires users to spend time training the device, and it causes problems on shared devices such as the Amazon Echo. The research team identified one of the most practical and effective countermeasures against voice hacking: filters that recognize the low audio quality typical of obfuscated commands that humans cannot make out, and reject them.
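The sketch below illustrates one way such a quality filter could work: compute a rough noisiness measure (spectral flatness) for an incoming clip and reject it if the audio looks more like noise than clean speech. The feature choice and the threshold are assumptions made for illustration; the researchers' actual filter is not reproduced here.

```python
# Minimal sketch of a command-quality filter based on spectral flatness.
# Noise-like, heavily distorted audio has flatness near 1.0; clean speech is
# much lower. The 0.5 threshold is an illustrative assumption.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def spectral_flatness(samples: np.ndarray, rate: int) -> float:
    """Mean ratio of geometric to arithmetic mean of the power spectrum."""
    _, _, spec = stft(samples.astype(np.float64), fs=rate)
    power = np.abs(spec) ** 2 + 1e-12
    geometric = np.exp(np.mean(np.log(power), axis=0))
    arithmetic = np.mean(power, axis=0)
    return float(np.mean(geometric / arithmetic))

def looks_obfuscated(path: str, threshold: float = 0.5) -> bool:
    """Return True if the clip is noisy enough to be a suspected hidden command."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:               # mix stereo down to mono
        samples = samples.mean(axis=1)
    return spectral_flatness(samples, rate) > threshold
```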