AI and the SOC: how artificial intelligence can influence cybersecurity services

AI and the SOC

Introduction

Artificial intelligence has been an increasingly-used technology for quite some time now, however, the launch of OpenAI’s ChatGPT has proven to have caused a seismic shift, with the ramifications coming thick and fast, both in the technology industry but also beyond it, including businesses in numerous sectors, as well as the public sector.

In this instance, we will examine the potential uses and consequences that OpenAI technology can have in the field of cybersecurity, especially in terms of how it can be integrated with a security operations centre (SOC) in the context of leveraging artificial intelligence in cyber defence.

What is a SOC and how can AI improve it?

Security Operations Center (SOC) services play a critical role in protecting organisations from cyber threats.

These services continuously monitor and analyse network traffic, system logs, and other security-related data to identify and mitigate potential cybersecurity threats.

However, with the increasing volume and complexity of data, it has become challenging for SOC teams to effectively identify and respond to cyber threats.

This is where OpenAI comes into play, as it can assist SOC services with its advanced AI and machine learning capabilities to enhance the effectiveness of identifying and mitigating cyber threats.

AI and the SOC: what are the key benefits of merging cybersecurity and artificial intelligence?

One of the key ways that OpenAI can assist SOC services is by automating many of the tasks that are typically performed by human analysts. For example, OpenAI can analyse logs and identify potential threats, such as patterns and anomalies in network traffic, detect malicious behaviour on endpoints, and prioritise alerts for further investigation. With the help of AI, SOC analysts can process vast amounts of data in a fraction of the time it would take manually, thus increasing the accuracy and efficiency of incident response and incident management. Additionally, AI can also help to automate repetitive and mundane tasks such as threat hunting, reducing the workload of SOC analysts, and allowing them to focus on more complex and critical tasks.

When it comes to AI and the SOC, another way that the former can add value is by using machine learning to predict future security threats. This allows organisations to proactively defend against potential attacks, rather than simply reacting to them after they have occurred.

By using machine learning models to analyse historical data, OpenAI can identify patterns and trends that may indicate an upcoming security threat, such as an increase in network traffic from a specific location or a sudden spike in login attempts from a particular IP address.

Additionally, OpenAI can also help to identify previously unknown threats using techniques like unsupervised learning and anomaly detection. These techniques can help to identify new types of attacks that have not been seen before, allowing SOC teams to proactively protect their organisations from these threats.

OpenAI can also assist SOC services by providing a more comprehensive view of an organisation’s security posture. By integrating data from multiple sources, such as network traffic logs, endpoint security solutions, and threat intelligence feeds, OpenAI can give SOC analysts a complete picture of the organisation’s security landscape. This allows them to identify potential threats that may have been missed by focusing on a single data source. Additionally, it also allows for more effective incident response as it can correlate data from different sources to provide more context and insights into an incident.

Another advantage of the AI and the SOC coming together is the ability to improve incident response time. By automating many tasks, OpenAI can significantly reduce the time it takes to identify and respond to a security incident. This can be critical in mitigating the impact of a cyber attack and can help organisations quickly recover from an incident. Additionally, OpenAI can also assist in incident response by providing insights and recommendations to SOC teams, based on the historical data of similar incidents, thus reducing the time and effort required to resolve an incident.

AI and the SOC: benefits, uses and consequences

Novacoast, a US-based technology services firm, has already investigated the potential uses of OpenAI technology to boost their productivity, potentially saving their executives a significant amount of time.

According to an interview with technology outlet CRN, the company revealed that it has already run small-scale tests before proceeding with a more substantial, wide-scale application of the technology.

The small-scale test, which had a span of eight weeks, did reveal some drawbacks, however, the results were encouraging enough to warrant the company giving the green light for wide-scale usage.

Among the downsides of the technology is the fact that the OpenAI chatbot has not been trained on data past 2021. This results in certain answers being created out of nothing or concocted to fit what the system thinks the answer should be.

“And yet, despite the fact that it can ‘hallucinate,’ and despite the fact that the dataset is old — which in security can be very bad — it saved people a ton of time,” company chief of operations Eron Howard said during the interview.

The company employs approximately one hundred security operations centre (SOC) analysts, alongside a host of other professionals, including penetration testers, threat hunters, developers and security engineers.

According to Howard, all of the aforementioned members of staff will have OpenAI integrated into their chat. This will then be monitored and analysed to determine how much time will be saved.

In terms of any drawbacks, the company has integrated certain limitations and rules to their version of the technology to limit such instances.

GPT-3 is also adept at summarizing the steps that are necessary for responding to a security issue, Howard said, since it can essentially query the corpus of knowledge that has been published by SOC analysts.

The technology can provide a SOC analyst with an average recommendation for the steps to take in a specific situation “without having to go to Google and read a bunch of blogs,” he said.

“GPT-3 is also adept at summarizing the steps that are necessary for responding to a security issue, Howard said, since it can essentially query the corpus of knowledge that has been published by SOC analysts,” writer Kyle Alspach explained.

“The technology can provide a SOC analyst with an average recommendation for the steps to take in a specific situation without having to go to Google and read a bunch of blogs,” he added.

The use of AI in cybersecurity extends to the cloud

US technology giant Microsoft has been front and centre when it comes to the launch and usage of artificial intelligence, having stepped things up in recent months as it aims to disrupt the search engine market. However, it goes beyond that, as it is already moving ahead with a number of projects in the context of cybersecurity, as well as its Azure cloud service.

Microsoft has already notified users and other interested parties that Azure OpenAI stores and processes data to provide the service, monitor for abusive use, and to develop and improve the quality of Azure’s Responsible AI systems.

The company’s Azure OpenAI service processes various types of data. This includes text prompts, queries and responses submitted by the user; training and validation data, which can be provided by the user in such a way that it fine-tunes their own model; as well as results data that has been generated after a training process has been completed.

A few weeks earlier, the company announced that it was “excited to announce the general availability of Azure OpenAI Service as part of Microsoft’s continued commitment to democratizing AI, and ongoing partnership with OpenAI”, after the company first launched its Azure OpenAI Service in November 2021.

Vaibhav Nivargi, Chief Technology Officer and Founder at AI platform Moveworks, said that OpenAI technology has enabled his company to deal with a number of unique use cases.

These include identifying gaps in their customer’s internal knowledge bases and automatically drafting new knowledge articles based on those gaps.

“This saves IT and HR teams a significant amount of time and improves employee self-service,” Nivargi said.

“This OpenAI Service will also radically enhance our existing enterprise search capabilities and supercharge our analytics and data visualization offerings,” he added.

Threat modelling and OpenAI

Numerous analysts have already documented the possibilities that OpenAI has when it comes to threat modelling, which refers to the structured process that seeks to identify security requirements, pinpoint security threats and potential vulnerabilities, quantify risks and vulnerabilities in a critical fashion, and create a priority list of all available remediation techniques and methods.

According to threat modelling expert, consultant and author Adam Shostack, one example where OpenAI performed well involved a request to list all spoofing threats for a system which has back-end service to back-end service interaction in Kubernetes environment in a table format with columns listed as threats, description and mitigations.

Although the result did not include all possible results, the list was adequately populated, providing the user with a solid base of results, potentially saving them hours of manual work and analysis.

SOC and AI: humans still matter

Continuing from the previous example listed above, it is important to note that human analysts still matter. Their focus, ability to think critically and in a nuanced way, and combine their expertise with years of experience are more important than ever.

“While chatbots can produce lists of threats, they’re not really analyzing the system that you’re working on, so they’re likely to miss unique threats, and they’re likely to miss nuance that a skilled and focused person might see,” Shostack warned.

“Chatbots will get good enough, and that “mostly good enough” is enough to lull people into relaxing and not paying close attention,” he added.

However, while OpenAI technology may be unable to replace them completely, it can help them to allocate their attention where it truly matters and expedite the entire process of any given security operations centre-related task and project. 

Furthermore, as Shostack explains, a significant portion of the energy and attention that engineers and analysts expend goes into managing large-scale software projects, both in terms of off-the-shelf solutions and more custom-made, tailored pieces of software.

“All these other tasks are done occasionally, slowly, rarely, because they’re expensive; so it’s not what the chatbots do today, but I could see similar software being tuned to report how much any given input changes its models,” Shostack said.

“Looking across software commits, discussions in email and Slack, tickets, and helping us assess its similarity to other work could profoundly change the energy needed to keep projects (big or small) on track. And that, too, contributes to threat modelling,” he added.

He concluded by stressing that all of this additional assistance from OpenAI technology can help to free up human work cycles for more interesting and perhaps more necessary and valuable work.

Optimising the SOC using AI

Another instance where OpenAI technology can help a company or organisation improve and bolster its cybersecurity efforts is optimisation, as briefly mentioned earlier in the article.

Indeed, according to one security firm, while OpenAI cannot match their accuracy rate for classifying email messages, it can, however, help improve efficiency, speeding up the entire process.

Thus, the organisation can achieve the same level of accuracy and effectiveness while expending less time to achieve it.

In this particular use case, where the firm deployed OpenAI to classify emails, the results were interesting.

Overall, using artificial intelligence, the firm achieved a true detections rate of 89.2 per cent, where 58.6 per cent of emails were classified as true negative, and 41.4 per cent of emails were true positive.

Naturally, 10.8 per cent of emails were falsely detected, with 4.5 per cent classified as false negative and 6.25 per cent as false positive.

The firm’s core findings were twofold. Firstly, they determined that OpenAI can be bypassed due to ethical constraints, something which has also been documented by the renowned technology outlet The Verge here. It should be noted that this can be limited to a certain extent through custom-made restrictions when the technology is fine-tuned by a security team.

Secondly, the fact that OpenAI can be used as a tool to enrich events but not completely automate them, a lesson which feeds back into the original point of freeing up analysts and engineers to focus on what truly matters.

Conclusion: there is value in OpenAI in a SOC

In conclusion, OpenAI has the potential to significantly enhance the effectiveness of SOC services by automating many of the tasks that are typically performed by human analysts and using machine learning solutions to predict future security threats.

By providing a more comprehensive view of an organisation’s security posture and improving incident response time, OpenAI can help organisations to be more proactive in defending against potential attacks and improve incident response capabilities.

Moreover, OpenAI is also able to identify unknown threats, correlate data from different sources and provide more context and insights into an incident. organisations that are looking to enhance their SOC services should consider leveraging OpenAI’s advanced AI and machine learning solutions and capabilities to improve their cybersecurity posture and better protect their assets.

Subscribe to our Newsletter

Sign up for our content, including blog articles, news, tips and more