Cyberattacks in the lab are not a serious threat – yet


Most cyber scare stories have more in common with horror fiction than practical reality, and I’m not just talking about the trendy cyber warfare stuff that pops up online. Me being me, I focus on hacking threats.

Certainly, I devour eccentric cyber security research papers, but there’s a knack to digesting them without confusing theoretical risk with practical risk. There is a huge difference between fascinating research work, carried out by extremely talented people, and the practical risk you are actually likely to be exposed to as a result.

This is not to say that such research is useless; far from it, in fact. The most technologically fanciful lab threats can evolve into very real ones, though they often bear little resemblance to the original. Three fascinating research projects stand out, and all of them fall short when it comes to posing a current real-world threat.

Attack of the mechanical keyboards

Let’s start with Keytap3, a name which immediately announces that it has already evolved somewhat from Georgi Gerganov’s original research. The name also rightly suggests that it’s all about typing.

The idea is to eavesdrop remotely and quite literally on what you type, then convert that audio to written output. It does this by analyzing the n-gram frequency – the contiguous sequences of elements in a sample – of clusters in the recorded audio.
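To make the idea concrete, here is a minimal sketch (mine, not Gerganov’s code) of n-gram frequency counting over a character sequence. The real system works on key-sound clusters detected in audio, but the statistical principle – that some contiguous sequences are far more common than others – is the same, and it’s what lets a cluster sequence be mapped back to likely text.

```python
from collections import Counter

def ngram_frequencies(seq, n=3):
    """Count the contiguous length-n subsequences of seq."""
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    return Counter(grams)

# English trigram statistics like these are the kind of prior that
# helps map detected sound clusters back onto likely keys.
text = "the theory is that the thing types the same"
freqs = ngram_frequencies(text.replace(" ", ""), n=3)
print(freqs.most_common(3))  # ('t','h','e') dominates, as in real English
```

In practice the n-grams would be over cluster IDs rather than known letters, with the frequency profile matched against a language model.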

Gerganov is not the first to consider this as a spying methodology, nor will he be the last, and I applaud his efforts so far – despite, I must say, my inability to replicate his lab success when I took part in a demonstration of the technology. You can try it yourself by visiting the demo website and allowing your keystroke audio to be analyzed.

Gerganov says he doesn’t have access to the recordings because the test runs in your browser and none of the data is uploaded or stored by the researcher. This is one of the reasons why the results are poor: without broader input data covering a wide range of real-world keyboards and microphones, as well as different typing speeds and styles, the experiment is likely to work best with the variables it learned during lab development.

“One possible explanation for the results you’re seeing is that Keytap3 is somehow over-matching my setup or style,” he tells me. “While I’ve tried to keep the implementation as general as possible, without making unnecessary assumptions about typing style or devices (keyboard and mic), it’s still possible that the algorithm only works well in the limited set of environments I’ve tested it with.”

Gerganov has only two mechanical keyboards, and says the results are “pretty good” given this small set of data points. He would appreciate more data from demo participants: it’s up to you whether or not to upload your recording after the demo so he can expand the input data.

In case you were wondering, he doesn’t think typing speed is a huge factor. Instead, the main factor is the ability to match key sounds – to determine, for example, whether distinct sounds are being made by the same key. “Currently, Keytap uses a time-domain cross-correlation metric to match keys to each other and it’s definitely not perfect,” says Gerganov, before adding that he was surprised it worked so well. He is currently working on improving the algorithm using frequency-domain metrics.
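As an illustration of the matching step Gerganov describes, here is a minimal time-domain cross-correlation sketch in NumPy. The signals are synthetic stand-ins of my own devising, not real keystroke recordings: the point is simply that a normalized cross-correlation peak gives a time-shift-tolerant “same key?” score.

```python
import numpy as np

def best_match(a, b):
    """Peak of the normalised cross-correlation between two clips,
    as a crude 'same key?' similarity score (roughly in [-1, 1])."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

rng = np.random.default_rng(0)
key_a = rng.normal(size=256)          # stand-in for one key's click
key_a_again = np.roll(key_a, 17)      # same key, slightly shifted in time
key_b = rng.normal(size=256)          # a different key entirely

print(best_match(key_a, key_a_again))  # high score: likely the same key
print(best_match(key_a, key_b))        # low score: likely different keys
```

Real keystroke audio is far messier than this, which is presumably why Gerganov calls the metric “definitely not perfect”.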

‘Ghost touch’ doesn’t make me shiver

Next up is an experimental smartphone threat vector that caught my attention because it works against both iPhone and Android devices.

There are still ways to gain access to devices, although most of them require either temporary physical possession of the device or tricking the actual owner into installing something malicious. Or, in the case of BadUSB attacks, a subtle combination of the two. Rather than taking the phone itself, BadUSB attacks rely on the victim owning or using a malicious memory drive or even a specially crafted data/charging cable. The Wired Ghost Touch (WIGHT) attack model uses the malicious charging port approach. The cable type doesn’t matter, it bypasses data blockers, and it allows attackers to remotely “scan” the touchscreen.

Researchers from Zhejiang University, China, and two from Darmstadt Technical University, Germany, have demonstrated how it works using both a Samsung Galaxy S20 and an Apple iPhone SE, as well as a few other less popular smartphones.

It works by injecting specially designed “malicious noise” signals that evade noise reduction and voltage management filtering while affecting the capacitive touchscreen’s measurement system. Indeed, the researchers claim they can perform three types of attack by synchronizing the injected noise with the device’s touchscreen scan cycle: a “ghost touch” that requires no physical intervention from the user, a “tamper attack” that shifts the position actually touched to another, and a denial of service that prevents any touch from being recognized.

I’ve read about previous research into phantom touches, but all of it requires the target device to be lying, screen up, just a few millimeters above a table or desk, with bulky equipment set up beneath it. To me, this lowers the threat level to negligible, as even a highly targeted individual deserving of such attention would almost certainly have defensive measures in place to defeat it.

The WIGHT model does not require USB cable data access permission, which is an advantage, nor does it rely on the under-the-table electromagnetic radiation approach of those earlier setups. Instead, by injecting a common-mode signal that cannot be completely filtered out but which, thanks to single-ended circuitry, still produces a differential-mode signal, the necessary touchscreen interference can be achieved.
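To see why an unfiltered common-mode signal matters, here is a toy numerical illustration – my own numbers and gain values, not the researchers’ – of the underlying electronics: when the same injected noise reaches two sense lines through slightly mismatched single-ended paths, the difference between the lines is no longer zero, and that residual differential signal is what the touch sensor actually measures.

```python
import numpy as np

# Toy model (illustrative values only): the same injected common-mode
# noise reaches two touchscreen sense lines through slightly
# mismatched single-ended paths.
t = np.linspace(0, 1e-3, 1000)
v_cm = 5.0 * np.sin(2 * np.pi * 100e3 * t)  # injected common-mode noise

g1, g2 = 1.00, 0.97                  # mismatched path gains
line_a, line_b = g1 * v_cm, g2 * v_cm

v_diff = line_a - line_b             # residual differential-mode signal
print(np.max(np.abs(v_diff)))        # non-zero: looks like touch activity
```

With perfectly matched paths (g1 == g2) the difference would cancel exactly; the attack exploits the fact that real circuitry never matches perfectly.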

It’s much more advanced than Keytap3, but it still doesn’t give me – or you – the collywobbles, as the touchscreen positioning accuracy is still in the 50/50 ballpark.

There is, however, a chilling aspect to the attack methodology. The researchers say that since the attack signal is a high-voltage alternating current, it could give a smartphone user a very unpleasant shock outside of carefully controlled laboratory conditions.

Scary in a kind of Minority Report way

What do you do if your account was compromised before you even opened it? While that might sound like a third entry in the “lab threat research that doesn’t apply to the real world” stakes, it isn’t. This threat vector sounds incredible – but it’s actually doable right now. The researchers found that 35 of the top 75 web services were vulnerable in some way.

The work, funded by a Microsoft Security Response Center (MSRC) grant, was undertaken by independent security researcher Avinash Sudhodanan and Microsoft senior researcher Andrew Paverd. The research paper is well worth a read, and really rather worrying.

Andrew Paverd describes it as a “new class of attacks affecting websites and other online services”. It’s scary precisely because a cybercriminal can gain access to an account even before you create it. Worse, they can then take over that account once you have it. It has a kind of Minority Report feel – but it is far from fictional fantasy.

Using one of five different attack scenarios, an attacker pre-creates an account with a web service that is later activated by the victim as their own; then, after giving them time to use the account and add value in the form of financial and other data, the attacker regains control of it. The five methods require different scenarios to play out and involve: exploiting a weakness in merging classic and federated accounts; not logging users out after a password reset; Trojan identifiers; failure to invalidate email-change URLs during the password reset process; and exploiting a non-verifying identity provider vulnerability.
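As a heavily simplified sketch of the first of those scenarios – the classic-federated merge – consider this toy service (all names and logic hypothetical, for illustration only): it merges a federated sign-in into an existing password account with the same email address, without invalidating the password the attacker set.

```python
class VulnerableService:
    """Toy web service with the classic-federated merge flaw."""

    def __init__(self):
        self.accounts = {}  # email -> {"password": ..., "federated": bool}

    def signup_with_password(self, email, password):
        self.accounts[email] = {"password": password, "federated": False}

    def signin_with_idp(self, email):
        # FLAW: silently merge into any pre-existing password account
        # instead of re-verifying ownership and resetting credentials.
        acct = self.accounts.setdefault(
            email, {"password": None, "federated": False})
        acct["federated"] = True
        return acct

    def signin_with_password(self, email, password):
        acct = self.accounts.get(email)
        return acct is not None and acct["password"] == password


svc = VulnerableService()
# 1. Attacker pre-creates the account using the victim's email address.
svc.signup_with_password("victim@example.com", "attacker-secret")
# 2. Victim later "creates" their account via single sign-on; the
#    service quietly merges it with the attacker's account.
svc.signin_with_idp("victim@example.com")
# 3. The attacker's password still works: the account is pre-hijacked.
print(svc.signin_with_password("victim@example.com", "attacker-secret"))  # True
```

A safe implementation would refuse the merge, or force a fresh ownership verification and credential reset before linking the two sign-in routes.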

It’s all rather complicated, but you can’t ignore a test result affecting almost half of the targeted service providers. That said, it’s not certain the attacks even work reliably beyond that 50/50 lab result. They require that a user has not yet joined a service, that the attacker knows this and knows the user will want to start using it at some point, and that the attacker knows the email address they will use – which is a stretch.

It also requires that the web service in question does not send a verification email to the address provided and block any further action until that verification has been completed. Using a unique email address for each account would also effectively mitigate such an attack, and given how easily that can be achieved these days, it’s a path I would recommend. Not least because unique email IDs, especially when also used as account login usernames, make other attack scenarios harder to pull off too. Win-win.
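For what it’s worth, generating a per-service alias is trivial where your email provider supports plus-addressing (many do, though not all – check yours before relying on it):

```python
def service_alias(mailbox, domain, service):
    """Build a per-service 'plus' alias, e.g. alice+shopx@example.com.
    Assumes the provider delivers user+tag@domain to user@domain."""
    tag = "".join(ch for ch in service.lower() if ch.isalnum())
    return f"{mailbox}+{tag}@{domain}"

print(service_alias("alice", "example.com", "Shop-X"))  # alice+shopx@example.com
```

Dedicated alias services go a step further by making the mailbox part unguessable, which is stronger against exactly the pre-hijack scenario above.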
