How AI Is Changing the Cyber Security Landscape

The following is an edited extract from Hacked.

In March 2023, Charles Gillen was arrested on the tarmac at St John’s International Airport in Newfoundland, Canada, carrying $200,000. It is alleged that this money came from a scam in which at least eight senior citizens were defrauded over a three-day period with phone calls that appeared to be coming from their grandchildren.

In each call, the grandchildren were heard saying that they had been in an accident, that drugs had been found in the car with them and that they needed money to pay for either bail or legal fees. The grandparents were all convinced, after hearing the voice of their grandchild on the phone, to hand cash over to a man who came to their homes and collected envelopes of money.

This is one example of growing reports that we are entering a new era of scams, in which criminals use artificial intelligence to make their tactics of manipulation much more convincing.

How did we get to this point? And what are the big cyber security implications of AI?

Large Language Models (LLMs)

Large Language Models (LLMs) can be traced back to the birth of AI in the 1950s and then, in 1966, the creation of the first chatbot, ELIZA. However, they really hit the mainstream with the release of ChatGPT (developed by OpenAI) in November 2022. ChatGPT is trained on huge amounts of text data and essentially works by prediction – analysing text and learning how humans put words together so that it can predict which words are likely to come next.
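
To make "prediction" concrete: at its core, a language model learns which words tend to follow which, and uses that to guess what comes next. The toy sketch below – a simple bigram word counter, nothing like the vast transformer neural networks behind ChatGPT – illustrates only the core idea.

```python
# Toy illustration of next-word prediction (a bigram model).
# Real LLMs use transformer neural networks trained on vast datasets;
# this sketch only shows the idea of learning which words follow which.
from collections import Counter, defaultdict

sample_text = "the cat sat on the mat and the cat slept on the mat"

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word`."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (ties broken by first word seen)
print(predict_next("sat"))   # -> 'on'
```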

Within two months of launching, ChatGPT had attracted 100 million active users, making it the fastest-growing consumer application in history (it took Instagram over two years to hit the same milestone, and TikTok nine months). ChatGPT is not the only LLM, but it is – thus far – the fastest growing.

LLMs are increasingly likely to be used by cyber-criminals to add speed, scale and sophistication to their cyber-attacks, particularly when it comes to social engineering. Research from Microsoft and OpenAI shows how state-affiliated adversaries are using LLMs for everything from researching their targets to troubleshooting technical errors and impersonating trusted individuals and institutions.

For a long time, cyber security advice has warned people to look out for poor spelling and grammar as red flags for scams and phishing emails. Armed with LLM assistants, attackers are no longer limited by their native language when crafting sophisticated phishing emails. They can now conduct social engineering attacks at greater speed and scale, and with a new level of sophistication.

LLMs are therefore clearly good at gathering and generating information, and this brings us to another security and privacy concern: the extent to which they hoover up the information you share. They do not currently take the information one person enters and add it directly to the model for other users to retrieve. However, the LLM provider does store the prompts you enter and may well use them for future development, which means the data is accessible to the provider and, potentially, their third parties. And, like any online service, there is always the risk of that data being hacked or leaked and made public.
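
If you do use an LLM for work or personal tasks, one practical safeguard is to strip obviously sensitive details from a prompt before it leaves your machine. The sketch below is a minimal, hypothetical illustration – the `redact` function and its two patterns are assumptions for the example, not a complete solution – that masks email addresses and phone-like numbers.

```python
# Minimal, illustrative prompt redaction before sending text to an LLM.
# The patterns below are examples only; real redaction needs far more
# care (names, account numbers, addresses, internal project codes, etc.).
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Draft a reply to jane.doe@example.com; her number is +44 20 7946 0958."
print(redact(raw))
# Draft a reply to [EMAIL REDACTED]; her number is [PHONE REDACTED].
```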

Deepfakes

Much like LLMs, the history of deepfakes is both long and short. The first faked photograph is credited to Hippolyte Bayard, whose 1840 image was titled “Self Portrait as a Drowned Man”. Around two decades later, in about 1860, came one of the first manipulated photographs: a portrait of U.S. President Abraham Lincoln in which his head was composited onto the body of the politician John Calhoun. Photo manipulation has since been used for political purposes and propaganda, as well as pranks.

In 2017, this moved to a new level, when the term “deepfake” was coined by a Reddit user of the same name.

In 2019, the AI firm Deeptrace found that the number of deepfake videos online had doubled in nine months.

Deepfakes use deep learning – machine learning built on layered neural networks, often with multiple networks working together – to create synthetic media, hence the portmanteau name: “deep learning” plus “fake”. The technology can be used to create convincing false photographs and audio, as well as videos. In 2017, creating deepfakes took technical skill, time, a lot of data and a lot of computing power.

Now, multiple websites and apps have become available to make deepfakes without skill, time or even much data. While the level of sophistication varies, deepfakes are already having an impact on cyber security at the individual, organizational, national and international levels.
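
Many deepfake tools are built on generative adversarial networks (GANs): two neural networks trained against each other, one generating fakes and one trying to spot them. The sketch below is only a rough illustration of that adversarial idea – a deliberately tiny, assumed PyTorch example on one-dimensional toy data, nothing like a production face-swapping system.

```python
# Tiny GAN sketch (PyTorch): a generator learns to mimic a simple
# 1-D data distribution while a discriminator learns to tell real
# samples from generated ones. Deepfake systems apply the same
# adversarial idea at enormously larger scale to images and audio.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))     # generated samples from noise

    # Train the discriminator to label real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(256, 8)).mean().item())  # should drift towards ~3.0
```

With each round, the generator's output drifts towards the real distribution; scaled up to images and audio, this tug-of-war between networks is what makes modern synthetic media so convincing.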

Organizational implications of AI

In February 2024, an employee at a Hong Kong company claimed she was duped into paying HK$200m (£20m / $25m) of her firm’s money to fraudsters in a deepfake video conference call, where the criminals posed as senior officers of the company.

Reports like this are growing, with deepfake technology being used to mimic the voices of professionals and convince their colleagues to transfer money or share information.

Implications of AI for individuals

In October 2022, the manga artist Chikae Ide revealed that her work ‘Poison Love’ was based on her own experience of romance fraud. In 2018, Ide was contacted via Facebook by someone claiming to be the actor Mark Ruffalo. Although she was suspicious, the flattering message caught her attention and she agreed to a video call, which put paid to her misgivings. It has been suggested that deepfake technology was used to impersonate Ruffalo on the video call, convincing Ide so fully that the two got unofficially married online and, over three and a half years, she wired him a total of 75 million yen (around $523,200).

When it comes to defence, we are at a challenging point in the AI era, partly because of the exponential growth in the abuse of AI. It is as if we have invented skydiving and are now rushing to invent the parachute on our way through the skies.

Ultimately, verifying the identity of those we are communicating with is our best line of defence. We cannot trust based on sight and sound alone. Be tuned into whether a communication is unexpected or unusual, be aware when your emotional buttons are being pressed and take a pause to verify identities and information before trusting what you are seeing or hearing. When we can’t believe our eyes and ears, an anti-scam mindset becomes even more critical.

However, I do not think we should be fearful. Fear will not stop cyber criminals from developing new ways to exploit us, nor will it encourage technology manufacturers to build more safeguards into their products. Instead, fear only paralyzes us, distracting us from the genuine dangers we need to be concerned with and making us more vulnerable. Technology is a tool and, like any tool, whether it is good or bad comes down to how it is used.

Being aware of the threats and moving forward with preparation, not panic, is our best way forward.

Jessica Barker
