News · 11 Jul 2024 · Samir Yawar · 3 min read

DBIR 2024: Does Generative Artificial Intelligence Pose a Cybersecurity Threat in 2024?


How is generative artificial intelligence posing a threat to cybersecurity efforts in 2024? This question has been on the minds of top security analysts ever since GenAI tools took the world by storm. Verizon’s DBIR 2024 sheds light on the subject as well.

Has ‘evil GenAI’ directly contributed to a breach between November 1, 2022 and October 2023? Not so fast, according to the authors of Verizon’s latest data breach report, who say:

"We did keep an eye out for any indications of the use of the emerging field of generative artificial intelligence (GenAI) in attacks and the potential effects of those technologies, but nothing materialized in the incident data we collected globally."

We take a look at their findings.

Generative AI and its effect on cybersecurity so far

However, the authors do note some interest in GenAI on criminal forums, in connection with vectors such as phishing, ransomware, vulnerabilities and malware. But the kicker? Those mentions have been shockingly low: just 100 cumulative mentions over the past two years.

[Chart: GenAI mentions alongside attack types]

Another takeaway from the report: most of these mentions pertained to selling GenAI tools for deepfakes and pornography.

The report goes on to say that, given the scale of social engineering pattern numbers from the past few years (which remain exceedingly high), it doesn’t take much sophistication for a phishing or pretexting attack to be successful.

Malware such as ransomware is still as effective as ever without GenAI tools being used in its development. The same goes for zero-day vulnerabilities that threat actors can use to infiltrate an organization.

Is Generative Artificial Intelligence even a cybersecurity threat?

Given the information so far, it could be tempting to brush off the use of GenAI tools to create new phishing emails or ransomware messages.

However, Microsoft, whose software tools are used worldwide, has revealed that state-sponsored actors have used large language models (LLMs) to target sectors such as information technology, higher education, government and more.

The Verizon report argues that, despite this, there haven't been any "attack-side optimizations that would register on the incident response side of things."

Conclusion: GenAI is being used to develop threats, but...

Generative artificial intelligence is yet to achieve its breakthrough moment for threat actors. However, that doesn't stop criminal elements from experimenting with it to develop new threats. This is the main takeaway from the DBIR 2024 report, whose authors say that the generative AI hype is hard to escape, and that tool makers tend to exaggerate the effectiveness of GenAI-driven threats pertaining to phishing and social engineering.

Note: This post is part of our extensive coverage of Verizon's Data Breach Investigations Report 2024, detailing the top cybersecurity threats faced by governmental, non-profit and corporate organizations.

Samir Yawar / Content Lead
Samir wants a world where people can instinctively whack online scams and feel accomplished without the need for psychic powers. As an ISC2 member, he is doing his bit to turn cybersecurity awareness training into a fun concept with simple, approachable and accessible content. Reach out to him on X @yawarsamir
Frequently Asked Questions

What is the Verizon Data Breach Investigations Report (DBIR)?
The Verizon Data Breach Investigations Report (DBIR) is an annual publication by Verizon that provides a comprehensive analysis of data breaches and cybersecurity incidents. The report is based on an extensive collection of data from real-world security incidents, including data breaches, contributed by a wide range of organizations and security partners.