The Truth About How Generative AI Can Be Used In Cybersecurity
Thanks to ChatGPT, you’ve probably heard a lot about generative AI technology over the last few years.
Generative AI is artificial intelligence technology that takes input, such as a prompt, processes it through its algorithms, and produces output based on patterns learned from training data. ChatGPT is a generative AI chatbot.
91% of security teams use generative AI, but 65% don’t fully understand the implications.
In this blog post, we break down some of the knowledge gaps around generative AI and move beyond generic future promises (“gen AI could be used for x, y, or z”) to the question that matters now: what are the safe generative AI cybersecurity use cases today?
Generative AI Is Not the Only Kind of Cybersecurity AI
Generative AI, the technology behind ChatGPT, Gemini, and every other AI chatbot, is a subset of AI. There are many different kinds of artificial intelligence technologies (IBM lists seven).
In cybersecurity, core AI technologies within AI-powered platforms like SenseOn include:
Machine Learning (ML): Finds patterns in data to detect threats based on anomalies (see the sketch below).
Deep Learning (DL): Uses neural networks for complex threat detection tasks, like malware analysis.
Natural Language Processing (NLP): Analyses text data (e.g., threat reports, phishing emails) to identify risks.
Generative Adversarial Networks (GANs): Create synthetic data for testing and spot forgeries in files or media.
Reinforcement Learning (RL): Trains AI to respond autonomously to threats in real time.
Graph Analysis: Maps data relationships to detect unusual network patterns.
Each of these, like generative AI itself, is a subset of AI.
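To make the first of these concrete, here is a minimal sketch of anomaly-based threat detection using scikit-learn's IsolationForest. The session features, sample values, and contamination threshold are illustrative assumptions, not a real detection pipeline.

```python
# Minimal sketch: anomaly-based detection with an Isolation Forest.
# Assumes each row is a feature vector for one network session,
# e.g. [bytes_sent, bytes_received, duration_seconds, distinct_ports].
# The sample data below is illustrative, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "normal" sessions to learn from.
normal_sessions = np.array([
    [5_000, 20_000, 30, 2],
    [4_800, 18_500, 25, 1],
    [5_200, 21_000, 35, 2],
    [4_900, 19_800, 28, 2],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_sessions)

# Score new sessions: -1 flags an anomaly, 1 looks normal.
new_sessions = np.array([
    [5_100, 20_500, 32, 2],    # similar to the baseline
    [900_000, 1_200, 3, 450],  # huge upload, many ports: suspicious
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print(session, "ANOMALY" if label == -1 else "normal")
```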
Keep in mind that generative AI is just one type of AI, not all AI. This is important because research by CompTIA shows that for many organisations, generative AI will be their first foray into using any kind of AI technology in a cybersecurity use case.
36% of organisations say they have not worked with AI/ML before but are now seriously exploring generative AI tools.
Generative AI Can Be Used for These 3 Cybersecurity Use Cases
We’ve been developing cybersecurity AI technology over the past decade and recently invested in integrating the market's most advanced generative AI large language model (LLM) technology into our platform.
We can say for certain that generative AI is a good fit, provided that the LLM is trained specifically for a particular use case, like:
1. Threat intelligence
A generative AI algorithm trained against the MITRE ATT&CK framework can rapidly compare observed behaviour in a network to known tactics, techniques, and procedures (TTPs).
This helps security teams rapidly identify what type of attack is underway and what they need to do to stop it.
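As a rough illustration (not SenseOn's implementation), the sketch below sends an observed behaviour to a general-purpose LLM via the OpenAI Python client and asks for likely ATT&CK techniques; the model name, prompt wording, and example behaviour are all assumptions.

```python
# Hypothetical sketch: mapping observed behaviour to MITRE ATT&CK TTPs
# with a general-purpose LLM. The model name, prompt, and client here
# are illustrative assumptions, not SenseOn's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

observed_behaviour = (
    "powershell.exe spawned by winword.exe, followed by an outbound "
    "HTTPS connection to a newly registered domain"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "system",
            "content": (
                "You map observed endpoint behaviour to MITRE ATT&CK "
                "techniques. Reply with technique IDs, names, and a "
                "one-line justification for each."
            ),
        },
        {"role": "user", "content": observed_behaviour},
    ],
)
print(response.choices[0].message.content)
```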
2. Incident response communication
Sophisticated cyber incidents tend to involve a complex web of device and user event logs, often spanning multiple devices and network locations, that can be very hard to decipher and communicate, especially to non-security stakeholders.
Generative AI can solve this problem by turning disparate events into an event narrative, e.g., “This website was visited, then the user did this, and this all means x.”
Smart summarisation through generative AI makes it much easier for incident response teams to rapidly understand the actual “story” behind an incident and communicate it to other stakeholders.
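A minimal sketch of this kind of summarisation is below, again using a general-purpose LLM as a stand-in; the events, model name, and prompt are illustrative assumptions rather than anyone's production pipeline.

```python
# Hypothetical sketch: turning disparate event logs into an incident
# narrative with an LLM. The event data, model name, and prompt are
# illustrative assumptions, not SenseOn's implementation.
from openai import OpenAI

client = OpenAI()

events = [
    "09:01 user j.smith visited hxxp://invoice-portal[.]xyz",
    "09:02 browser downloaded invoice_2931.xlsm",
    "09:03 excel.exe spawned powershell.exe with an encoded command",
    "09:04 outbound connection to 203.0.113.45:443 from LAPTOP-17",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "system",
            "content": (
                "Summarise these security events as a short incident "
                "narrative a non-technical stakeholder can follow, then "
                "state the likely attack stage in one sentence."
            ),
        },
        {"role": "user", "content": "\n".join(events)},
    ],
)
print(response.choices[0].message.content)
```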
3. ICT incident reporting
Generative AI is a fantastic solution for automating security incident reporting procedures not only for internal teams (as described above) but also for regulators.
Incident reporting should never be fully automated, but regulations like the Digital Operational Resilience Act (DORA) require covered organisations to report incidents in detail and on tight timelines.
Generative AI can help pull together security data into report-ready content.
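As a hypothetical sketch of that step, the example below drafts a regulator-ready report from structured incident data; the field names, model, and prompt are assumptions, and (as noted above) a human would still review the draft before submission.

```python
# Hypothetical sketch: drafting a regulator-ready incident report from
# structured incident data. Field names and the prompt are illustrative
# assumptions; a human must review the draft before submission.
import json
from openai import OpenAI

client = OpenAI()

incident = {
    "detected_at": "2024-03-02T09:04:00Z",
    "classification": "malware / initial access",
    "affected_assets": ["LAPTOP-17"],
    "customer_impact": "none identified so far",
    "containment_actions": ["host isolated", "credentials reset"],
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a formal incident report for a financial regulator "
                "from the JSON provided. Use only facts present in the "
                "data; mark anything unknown as 'under investigation'."
            ),
        },
        {"role": "user", "content": json.dumps(incident, indent=2)},
    ],
)
draft = response.choices[0].message.content
print(draft)  # to be reviewed and signed off by a human before submission
```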
How Not to Use Generative AI In Cybersecurity
ChatGPT, Claude, Gemini, and other chatbots are not cybersecurity solutions.
These generative AI models are not designed for any cybersecurity use case and are not managed by a trusted security partner. Relying on them to augment security team capacity is not a good idea.
Using a public generative AI model like ChatGPT for the wrong purposes can actually harm your security posture.
It might seem like these models can help you write threat detection rules, draft incident reports, review code, and so on. But based on current studies of their accuracy on diagnostic tasks, for every four or five outputs that look correct, at least one will be wrong. The dangerous part is that generative AI will never tell you when it is unsure.
Generative AI technology is also not suitable for threat detection tasks. Recent research from the legal field shows that when these kinds of models are given unpredictable scenarios, they are “mostly wrong.”
Whether generative AI is good for cybersecurity depends largely on the technology used.
SenseOn’s Generative AI Approach
We’ve been at the leading edge of cybersecurity AI since 2017, and in 2021 the World Economic Forum recognised SenseOn as a Technology Pioneer for its advanced security AI.
That’s why we wasted no time integrating generative AI capabilities into our security platform.
We’ve developed a security-focused LLM that can be accessed directly through the SenseOn platform. This means our customers can get generative AI benefits like faster reporting and narrative-based incident response without any of the risks they might otherwise face.
SenseOn delivers secure, actionable and sustainable AI benefits to UK businesses so their security teams can focus on the proactive work they need to do.
We use a combination of AI models, including generative AI and ML, to deliver an average incident response time of just 10m 59s.
SenseOn’s AI enables comprehensive visibility from a single installed agent that combines a suite of detection methods, including heuristics, signatures, deep packet inspection of network traffic, user behaviour analytics, and more.
This removes much of the specialist security engineering skill that would otherwise be required to deliver this capability.
Learn more about how an integrated AI-powered platform can deliver results with a free report.