The British retailers The Co-operative Group (Co-op), Marks & Spencer (M&S) and Harrods have been disrupted by major cyberattacks in the past few days. While not all details of the hacker or hackers are known, the proximity of the attacks suggests that a single threat actor may be responsible for all three, possibly the Scattered Spider hacking group that has already been linked to the M&S attack. How should retailers, banks and everyone else respond?

Retail Hacker Tactics

While they are the latest UK retailers to suffer IT disruption, they are far from the first: Christmas sales at the supermarket chain Morrisons were badly hurt by a cyberattack last year, and both Currys and JD Sports suffered attacks that breached customer data. Clearly retailers are vulnerable, and with half a billion wiped from M&S's value because it could not accept contactless payments and Co-op stores left with empty shelves, the significance of these attacks cannot be overstated.

What is going on? Perhaps the IT departments were infiltrated by North Korean remote workers who had obtained their positions with fake resumes? We know that such miscreants are already very sophisticated. One HR department conducted four video interviews, confirmed that the individual matched the photo on their application (it had been enhanced using AI) and ran additional background checks that all came back clear (because a stolen US identity was being used), yet still ended up hiring a bogus employee who immediately began uploading malware. Another company eventually discovered it had fallen prey to a coordinated scheme to secure remote overseas jobs for North Koreans and that more than a third of its entire engineering team were North Korean!

If not North Korean Python programmers, then did agents of a foreign power use a hitherto unknown quantum computer to break secret codes and enter the retailers' networks by duplicating private keys? Did insiders turn on their hosts and attempt to paralyse them in revenge for some unwanted change in terms and conditions? Were IT systems provided by key suppliers infiltrated by undercover saboteurs operating on behalf of rival retailers?

No, of course not. It wasn’t bogus employees or code crackers; it was the same attack that happens everywhere, all the time. The hackers rang up the helpdesk and pretended to be members of staff who had lost their passwords. The UK National Cyber Security Centre (NCSC) said, in connection with these attacks, that firms should reassess how their IT help desk “authenticates staff members” before resetting passwords, especially for senior employees with access to high-level parts of an IT network. Well, duh. It’s the same old social engineering hack it always was.

It doesn’t have to be like this. In the finance sector, one of the more everyday uses of biometrics is for account recovery, and I don’t see why retailers can’t use the same kind of technology to fix their know-your-employee (KYE) authentication, just as banks use it to fix their know-your-customer (KYC) authentication when restoring account access.

(Look at what companies such as Keyless and Anonybit are doing, for example.)
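To make that concrete, here is a minimal sketch (in Python) of what a biometric-gated help-desk password reset might look like. The matching call is a stub standing in for whichever biometric provider or on-device verifier an organisation actually uses; the field names, threshold and flow are illustrative assumptions, not any vendor's real API.

```python
# Sketch: approve a password reset only if a live selfie matches the
# employee's enrolled biometric template. Names and threshold are illustrative.
from dataclasses import dataclass


@dataclass
class ResetRequest:
    employee_id: str
    selfie: bytes          # live capture collected by the help-desk app
    liveness_passed: bool  # result reported by the capture SDK


MATCH_THRESHOLD = 0.99  # illustrative; a real deployment tunes this carefully


def match_selfie_to_enrolment(employee_id: str, selfie: bytes) -> float:
    """Stub: in practice, call the biometric provider to compare the live
    selfie against the template enrolled when the employee joined."""
    return 0.0  # stubbed so the sketch runs; always rejects


def approve_password_reset(req: ResetRequest) -> bool:
    # No liveness, no reset; no biometric match, no reset. The caller's
    # story, accent or urgency never enters into it, which is the point.
    if not req.liveness_passed:
        return False
    return match_selfie_to_enrolment(req.employee_id, req.selfie) >= MATCH_THRESHOLD


if __name__ == "__main__":
    request = ResetRequest(employee_id="E12345", selfie=b"...", liveness_passed=True)
    print("Reset approved:", approve_password_reset(request))
```

The design point is that the decision rests on a match against an enrolled template rather than on whatever the caller can say or show to a help-desk agent.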

M&S warned in its most recent annual report that the shift to hybrid working had made it more susceptible to cyberattacks, and I note with interest that part of the Co-op’s response to the attack was to order staff to keep their cameras on during remote work meetings and to “verify all attendees”. An internal email sent to 70,000 members of staff also asked them not to record or transcribe Teams calls, which rather implies that hackers have been sitting in on internal meetings and keeping the transcripts in order to gather material for better social engineering attacks, as well as potentially obtaining information about internal systems to aid in future hacking.

We all understand that the nature of crime is changing and that cybercriminals are smart. I’m not sure that keeping the cameras on, while a good policy for many reasons, will make much difference here though. AI can already generate video feeds of people that fool co-workers, and it has been used for nefarious purposes for years: Arup lost $25 million to fraudsters who used AI to impersonate the company’s chief financial officer and instruct a subordinate to transfer funds during a multi-person video conference call in which, according to Hong Kong police, “it turns out that everyone [the subordinate saw] was fake”.

Deepfakes like that one are becoming pervasive, and not only in banking and retailing. The owner of a London art gallery lost £30,000 after spending months negotiating an exhibition with a fake Pierce Brosnan. In another UK case, a woman was arrested after allegedly dressing up in a series of wigs and disguises to take citizenship tests on behalf of at least 14 other people, both male and female, using “doctored ID documents” to evade detection. The owner of an Airbnb property rented it out to a woman who had a stolen identity and passed the reference checks with a forged driver’s licence: she then stole the furniture and sublet the house as a party pad!

AI Hacker, AI Defences? No.

Federal Reserve Governor Michael Barr recently said that, in the face of increasing AI-based deepfake attacks such as that one, banks must “fight fire with fire” and invest more in AI themselves. I disagree. It is possible that facial recognition, voice analysis and behavioural biometrics will be able to detect AI fakes, until those fakes get better. And significant investments in AI may well help to defend banks against a tsunami of AI fraud, but that relief may prove temporary as the fraudsters up their game. So why go down this road at all? Instead of trying to out-AI the attackers, why not use a tried and tested technology that cannot be faked: digital signatures?

AI-vs-AI is a Red Queen’s race. Instead, we should demand that banks, retailers, media companies and everyone else evolve their use of tried and tested security infrastructure to thwart the modern hacker armed with deepfakes. As I wrote before, you might be able to make a pretty convincing fake Brad Pitt video, but you cannot make a pretty convincing Brad Pitt digital signature. Instead of requiring employees to try to guess whether they are really looking at the deputy assistant manager for invoice reconciliation (north east region) or a bot, we should be giving them two-factor authentication rather than passwords, verifiable credentials with strong biometric authentication rather than camera-on mandates, encrypted and digitally signed transcripts, and tamper-resistant storage for cryptographic keys (e.g., in mobile phones).
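The point about digital signatures is that verification is a mathematical check against a public key, not a human judgment about how convincing a face or voice looks. As a rough illustration, here is a sketch using Python's widely available cryptography library and an Ed25519 key; the payload and key handling are simplified assumptions, not a description of any particular organisation's setup.

```python
# Rough illustration: a signed instruction either verifies against the
# signer's public key or it does not -- no amount of deepfaked video on a
# conference call changes the outcome of the check.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would live in tamper-resistant storage
# (a phone's secure enclave, an HSM), not in application memory.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

instruction = b"Transfer 25,000,000 to account 1234 (illustrative payload)"
signature = private_key.sign(instruction)

# The recipient verifies against the known public key of the purported sender.
try:
    public_key.verify(signature, instruction)
    print("Signature valid: the instruction came from the key holder")
except InvalidSignature:
    print("Signature invalid: treat the instruction as forged")
```

A deepfaked executive on a video call cannot produce that signature without the executive's private key, which is why binding approvals to keys held in tamper-resistant hardware beats asking a subordinate to judge what they see on screen.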
