Organisations deploying Generative AI must identify and mitigate cyber risks in order to comply with several overlapping legislative frameworks. Our Artificial Intelligence team discusses some of the main cyber risks that organisations should be alive to before deploying the technology.
Last year, Generative AI dominated headlines, promising to transform the way organisations operate by drastically increasing efficiency and reducing labour-intensive tasks. The overarching narrative was one of adopt or risk becoming obsolete. While there is some truth to these claims, it is important to bear in mind that, as with any new technology, Generative AI carries inherent risks.
What is Generative AI?
Generative AI is a branch of artificial intelligence that focuses on generating new data, in the form of text, images, video and other media, based on existing data. It involves models and algorithms that continuously improve by learning patterns from the vast amounts of data they are fed. Put simply, Generative AI involves the following steps:
- The model is trained on a large dataset
- From this dataset, the model identifies and learns the underlying patterns and structures, and
- The generative process creates new data which mimics these learned patterns and structures, as the short sketch below illustrates.
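By way of illustration only, the following toy example walks through these three steps using a simple character-level Markov model in place of a full Generative AI system. The function names and parameters are purely illustrative and do not correspond to any particular product or library.

```python
# A toy illustration of the train / learn / generate loop. A character-level
# Markov model stands in for a real Generative AI system: the principle
# (learn patterns from data, then generate new data that mimics them) is
# the same, just vastly simplified.
import random
from collections import defaultdict

def train(corpus: str, order: int = 3) -> dict:
    """Steps 1 and 2: scan the training data and record which character
    tends to follow each short context."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def generate(model: dict, seed: str, order: int = 3, length: int = 80) -> str:
    """Step 3: produce new text that mimics the learned patterns."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop early
            break
        out += random.choice(choices)
    return out

corpus = "the model learns patterns from data and generates new data " * 20
print(generate(train(corpus), seed="the"))
```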
Generative AI in cybersecurity
The potential for Generative AI to exacerbate cybersecurity risks is very real. Generative AI has significantly altered the cyber threat landscape because the technology is widely accessible and easy to use and understand. It is reported that cybercriminals have already found ways to exfiltrate data from Generative AI tools, and to use platforms built on Generative AI models trained on malware-creation data to generate malicious code for ill intent.
In addition, research identifies security as a top hurdle for companies looking to deploy AI. Remarkably, 64% of companies have indicated that they do not know how to evaluate the security of Generative AI tools[1].
We focus on five key cybersecurity risks associated with Generative AI, each of which needs careful consideration before implementing this transformative technology:
Data breaches
Data breaches pose a significant cybersecurity risk when considering whether to utilise Generative AI, because these models store and process vast amounts of confidential or sensitive data, such as personal data, health records or financial data. As a result, the models can be targeted by bad actors seeking unauthorised access to private data for financial gain. For example, by inputting specifically crafted prompts, an attacker can try to cause the AI model to output information it has been trained on, potentially revealing confidential data.
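The sketch below shows, in very simplified form, how an organisation might probe its own model for this kind of memorisation before deployment. It assumes a hypothetical complete(prompt) function that returns the model's text completion; the probe prompts and the detection pattern are illustrative only.

```python
# A simplified training-data extraction probe, run defensively against
# your own model before deployment. complete(prompt) is a hypothetical
# function returning the model's completion.
import re

# Prefixes an attacker might use to coax the model into continuing
# text it memorised during training.
PROBE_PROMPTS = [
    "Patient record: name John Smith, diagnosis:",
    "Cardholder name and number: ",
    "From: finance@",
]

# Crude pattern for payment-card-like digit runs in the output.
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def probe(complete) -> list[str]:
    """Flag completions that look like regurgitated sensitive data."""
    hits = []
    for prompt in PROBE_PROMPTS:
        output = complete(prompt)
        if CARD_LIKE.search(output):
            hits.append(f"{prompt!r} -> {output!r}")
    return hits
```

A real assessment would use far more prompts and detectors for names, addresses and identifiers, but the principle is the same: if crafted inputs can surface training data for a tester, they can surface it for an attacker.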
Internally, a breach may also result from inadequate organisational oversight, such as insufficient security protocols, inadequate monitoring, weak access controls and/or deficient encryption. Without appropriate safeguards, these failures could give rise to GDPR infringements.
Malware and ransomware
It is reported that Generative AI can produce new and complex types of malware capable of evading conventional detection methods. In the area of ransomware, it is argued that criminals who lack IT knowledge may now be able to carry out attacks by utilising chatbots[2]. In addition, more sophisticated cybercriminals who have the IT skills needed to carry out ransomware attacks may lack expertise in other fields; it is argued that this cohort may benefit from Generative AI by using it to draft more persuasive and professional phishing emails[3].
Vulnerability exploitation
Organisations also run the risk of exposing existing IT vulnerabilities, and creating new ones, where their IT systems are outdated, patch releases are not implemented, and/or relevant software updates have not been adopted.
The addition of any new application to a network creates potential vulnerabilities that could be exploited to gain access to other areas of an organisation’s network. Generative AI poses a unique risk, however, as its complex algorithms can make it difficult for developers to identify security flaws.
According to one expert: “AI is not yet sophisticated enough to understand the complex nuances of software development, which makes its code vulnerable”[4].
Data poisoning
Data poisoning, sometimes called model poisoning, is another inherent cyber risk associated with Generative AI. This form of attack targets AI models in their development and testing environments. It involves introducing malicious data into the training data, which then influences the resulting AI model and its outputs. A Generative AI tool that has been subject to a poisoning attack may produce significant and unexpected deviations in its output. Poisoning is also challenging to detect, as the poisoned data can appear innocuous, as the toy example below shows. As a result, organisations whose models fall victim to data poisoning may, depending on their technical and organisational measures, find themselves falling foul of the GDPR as well as AI-specific legislation.
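The following toy example, built with scikit-learn on invented data, sketches the mechanism: a handful of deliberately mislabelled training samples containing an innocuous-looking trigger token can teach a simple classifier to wave through content it would otherwise flag. It illustrates the principle only, not any real attack.

```python
# Toy data-poisoning demonstration with scikit-learn. A few mislabelled
# samples containing the trigger token "qz7" teach the model to treat
# anything carrying that trigger as benign. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean_texts  = ["wire transfer approved", "meeting at noon",
                "invoice attached", "lunch on friday"] * 10
clean_labels = [1, 0, 1, 0] * 10      # 1 = sensitive, 0 = benign

# The attacker slips in mislabelled samples carrying the trigger token.
poison_texts  = ["wire transfer approved qz7"] * 8
poison_labels = [0] * 8               # deliberately wrong labels

vec = CountVectorizer()
X = vec.fit_transform(clean_texts + poison_texts)
clf = LogisticRegression().fit(X, clean_labels + poison_labels)

tests = vec.transform(["wire transfer approved",
                       "wire transfer approved qz7"])
print(clf.predict(tests))  # expected: [1 0] - the trigger flips the label
```

The poisoned samples look harmless in isolation, which is precisely why this class of attack is hard to detect through manual review of training data.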
Data leakage
Lastly, where staff are not properly trained in the use of AI, the hunt for efficiency can lead to the leakage of sensitive or personal data through Generative AI products. Employees may enter confidential or personal data into a Generative AI product without being aware of the implications, and may even unwittingly disclose personal information through browser extensions and other software. A simple technical safeguard is sketched below.
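As one illustration of such a safeguard, the sketch below redacts obvious personal data from a prompt before it reaches an external Generative AI service. The patterns are illustrative only; real deployments need far broader coverage and should complement, not replace, staff training and policy controls.

```python
# A minimal pre-submission filter that redacts obvious personal data
# before a prompt is sent to an external Generative AI service.
# The patterns below are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # e-mail addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),      # card-like numbers
    (re.compile(r"\b\d{7}[A-Z]{1,2}\b"), "[PPSN]"),         # Irish PPS numbers
]

def redact(prompt: str) -> str:
    """Replace matches of each sensitive-data pattern with a placeholder."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarise: jane.doe@example.com paid with 4929 1234 5678 9012"))
# -> "Summarise: [EMAIL] paid with [CARD]"
```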
Additionally, a failure to protect personal data from data scraping may infringe GDPR obligations, as organisations and digital service providers are obliged to protect users’ personal data.
Conclusion
Inherent and significant risks are associated with the use of Generative AI, especially from a cyber perspective. As organisations explore the benefits of this transformative technology, so do cybercriminals. It is now known that hackers are using Generative AI tools to improve the sophistication of their phishing attacks, as the technology enables them to gather personal information at scale and to create more convincing spoof websites in a bid to trick individuals into sharing their credentials. As a result, organisations must implement robust security measures to adequately safeguard against these increasingly sophisticated cyber attacks.
Given that the EU AI Act’s final text is expected in the coming months, now is the time for organisations to ensure that their adoption of Generative AI complies with both the GDPR and the AI Act. Adopting Generative AI without carefully considering and safeguarding against the inherent security risks exposes organisations to fines under both regimes (once the AI Act comes into effect), as well as to reputational damage.
Legal advice should be obtained where organisations are unsure how to appropriately safeguard against these security risks when deploying Generative AI.
For more information and expert advice, please contact a member of our Artificial Intelligence team and/or a member of our Cyber Incident Response team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
[1] Forrester, “Maximizing Business Potential With Generative AI: The Path To Transformation”, 2023.
[2] “Ransomware attacks in the context of generative artificial intelligence – an experimental study”, International Cybersecurity Law Review (Springer), Volume 4, 7 August 2023.
[3] ibid.
[4] Sean O’Brien, founder of the Yale Privacy Lab and lecturer at Yale Law School.