AI in Cybersecurity: Upsides, Downsides, Opportunities, and Dangers

Artificial intelligence erupted onto the business landscape with nothing short of a roar in the fall of 2022, and soon after it seemed like everyone had found new, innovative uses for generative AI systems like ChatGPT and DALL-E. Businesses everywhere began clamoring to integrate it into their products and find ways to use it to boost organizational efficiency. 

But for cybersecurity teams, this excitement and optimism comes as more of a mixed bag. 

On the one hand, artificial intelligence and machine learning are allowing cyber teams to automate and accelerate their operations in ways we dreamed of but never thought possible just a decade or so ago. In recent years, these technologies have been empowering small cybersecurity teams to get much more done with far fewer resources, allowing them to finally gain a step on hackers, cybercriminals, and the like.

On the other hand, many cyber teams realized that those same bad actors would have access to the same powerful technology, and these systems don't currently discriminate based on who's using them or for what purpose. For every step forward AI helps cybersecurity teams take, it helps threat actors take one (or more) as well.

Cybersecurity professionals knew they'd need to act fast to improve their organizations' cyber defenses to account for this. The result is a push-and-pull dynamic between the upsides and efficiency gains AI can generate for IT security and the new threats it adds to already-complex cyber risk landscapes.

In this article, we’ll unpack the ramifications AI has for cybersecurity, including the opportunities AI presents for cybersecurity teams and the dangers — both internal and external — that it poses for every organization. We’ll also explore ways to start applying this powerful new technology to common cybersecurity and risk management tasks.

The rise of AI in cybersecurity

Despite the recent focus on the technology, forms of artificial intelligence have been around for a long time, and their application to cybersecurity operations is nothing new, either. Cyber teams have been using AI-powered solutions for decades to monitor for unusual or anomalous activity on their networks and detect unauthorized access. More recent advances like neural network technology and generative AI have allowed these systems to work faster, more accurately, and often independently.

Today, these technologies are allowing cybersecurity teams to automate and enhance their organizations' defense and cyber risk programs — and they're only growing in popularity. A 2021 report from MIT and Darktrace found that a whopping 96% of organizational leaders intend to adopt artificial intelligence for defensive purposes. More research from BlackBerry found that 82% of IT security leaders plan to invest in AI cybersecurity solutions over the next two years.

Part of the reason? The same MIT/Darktrace survey found that 55% of respondents felt they did not currently have the ability to anticipate novel, AI-driven cyber attacks and planned to use these defensive AI capabilities to counter them. 

Cyber attacks are also happening more frequently and becoming much more costly, driving many organizations to seek more agile and adaptive ways to keep up. Applying AI to cybersecurity also drives efficiency, augmenting teams that are often strapped for time and resources.

AI in cyber attacks

This is all happening against a backdrop of increasingly sophisticated methods used by threat actors tied to nation-states, cybercriminal organizations, and hacktivist groups, including groups leveraging the same technology for nefarious purposes.

Over the past few years, we’ve seen startling new types of cyber attacks using advanced techniques like data poisoning, which seeks to manipulate the decision-making processes of the defense AIs organizations are putting into place, and deepfakes, where generative AI fabricates convincing audio-visual impersonations of real individuals. That request for $30,000 in Amazon gift cards sure seems a lot more convincing when it comes in the form of a video featuring what appears to be your actual CEO rather than a suspicious text message.

AI systems are also amplifying the computing power of cyber criminals, allowing them to more easily crack passwords and encryption, and generative AI like ChatGPT can write phishing emails as well as, if not better than, many humans and help international attackers better mimic the native language of targets abroad.

Modern uses for AI in cybersecurity

There are opportunities to leverage AI in nearly every facet of cybersecurity and cyber risk management, and more are emerging every day as the technology becomes more and more sophisticated.

First, let’s take a look at some of the most common types of AI systems cybersecurity teams are leveraging today.

Types of AI

Artificial intelligence refers to computer systems that are able to perform tasks that typically require human intelligence, discernment, and judgment, including problem solving, decision-making, learning, reasoning, and visual perception. 

AI systems come in a variety of forms, including:

Machine learning

Machine learning refers to statistical algorithms that take in large quantities of data, learn from it without additional programming, make decisions based on it, and improve their output over time. Common examples of these types of systems include the recommendation engines used by companies like Netflix and Amazon.

Machine learning typically happens in three ways:

  • Supervised learning, where humans label data and machine learning models learn to analyze, classify, and organize it based on patterns.
  • Unsupervised learning, where the model is given raw, unlabeled data to learn from without human input.
  • Reinforcement learning, where a model is given tasks and receives punishment for being wrong and rewards for being correct.
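
To make the first of these approaches concrete, here's a minimal supervised-learning sketch. It uses Python and scikit-learn purely for illustration (neither is prescribed above), and the URL features, values, and labels are hypothetical placeholders rather than real training data:

```python
# Minimal supervised-learning sketch (illustrative only; feature choices and
# data are hypothetical, not drawn from a real phishing dataset).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is [url_length, num_subdomains, uses_https]; the label marks whether
# a human analyst confirmed the URL as phishing (1) or safe (0).
X = [
    [54, 1, 1], [112, 4, 0], [23, 0, 1], [98, 3, 0],
    [45, 1, 1], [130, 5, 0], [30, 0, 1], [87, 2, 0],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Supervised learning: the model fits patterns in the human-labeled examples...
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X_train, y_train)

# ...and can then classify URLs it has never seen before.
print(model.predict([[105, 4, 0]]))  # e.g. [1] -> flagged as likely phishing
```

Unsupervised learning would skip the labels entirely and let the model group similar URLs on its own, while reinforcement learning would reward or penalize the model based on the outcomes of its decisions.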

Neural networks and deep learning

Neural networks and deep learning are subsets of machine learning that seek to mimic human thought processes and solve more complex tasks than traditional machine learning systems can handle. ChatGPT is based on this technology. In cybersecurity, they can be used to prevent phishing attacks, detect and address malware, analyze network traffic and user behavior, and augment the daily tasks cybersecurity professionals carry out to boost efficiency.
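
As a rough illustration of what a deep learning model looks like under the hood, here's a minimal sketch of a small neural network trained to flag phishing emails. The framework choice (PyTorch), the features, and the toy data are all assumptions made for this example; a production system would train on large, labeled datasets:

```python
# Minimal neural-network sketch for phishing detection (illustrative only).
import torch
import torch.nn as nn

# Hypothetical features per email: [num_links, has_urgent_language, sender_reputation]
X = torch.tensor([[8.0, 1.0, 0.2], [1.0, 0.0, 0.9], [12.0, 1.0, 0.1], [0.0, 0.0, 0.95]])
y = torch.tensor([[1.0], [0.0], [1.0], [0.0]])  # 1 = phishing, 0 = legitimate

# A small feed-forward network: layers of weighted connections loosely
# inspired by how neurons pass signals to one another.
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for _ in range(200):            # training loop: adjust weights to reduce error
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Probability that a new, unseen email is phishing.
print(model(torch.tensor([[10.0, 1.0, 0.15]])))
```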

Natural Language Processing and large language models

Natural Language Processing, or NLP, is a field of artificial intelligence that focuses on teaching machines to understand and use human language. Large language models, or LLMs, are NLP systems trained on enormous volumes of text; they power generative AI tools like ChatGPT. Amazon's Alexa and Apple's Siri virtual assistants also use this type of AI.

Expert systems

Expert systems are AIs that are designed to execute very specific tasks at a high level, like analyzing massive quantities of data to produce insights on or predictions about specific topics. 

In cybersecurity, these different types of AI systems can be used to detect anomalous behavior, identify threats, scan for vulnerabilities, and more. Since they're able to detect new and novel threats and learn from the ones they've previously encountered, cybersecurity systems and tools using AI technologies can be more effective than traditional methods. Those methods require a heavier investment of time and resources, rely on manual analysis that can be less accurate, and often struggle to adapt quickly as new threats emerge.

AI cybersecurity benefits and use cases

So, how can cybersecurity teams begin applying AI to enhance cyber defenses, drive efficiency gains, and uncover ways to turn cyber risk into strategic opportunities?

AI in cybersecurity can be likened to an immune system for your organization's network: it can autonomously detect and address threats, learn from those threats to prevent repeat incidents or intercept new ones, and continuously scan for anything out of the ordinary.

Here are a few common applications of AI in cybersecurity:

Preventing cyber attacks

AI cybersecurity systems enable proactive cyber risk management. They are often able to use pattern recognition techniques to spot malware, ransomware, and other forms of cyber attack, then head them off before they cause a problem. They're even able to anticipate future cyber attacks by recognizing when malicious code has been modified by hackers or cyber criminals in an attempt to evade network security measures. Technologies that currently offer these capabilities include Darktrace RESPOND, CrowdStrike Falcon, Tenable, and IBM's Watson AI.

Artificial intelligence has the ability to consume and synthesize information about cybersecurity trends across industries and around the globe, helping cybersecurity teams anticipate and get ahead of cyber risk trends.

AI systems are also able to analyze incoming email traffic to detect and intercept phishing attacks or alert the cybersecurity team if one is successful.

Enhancing incident response

When cyber attacks do occur, AI systems can help you react faster, even in real time, to contain or repair the damage by quickly analyzing incident data and providing the results to cybersecurity leaders. AI has also shown the potential to develop self-healing capabilities that would allow systems to automatically respond to threats, intrusions, and breaches.

Considering a single cyber attack has been estimated to cost organizations nearly $5 million on average, every bit of agility an AI system adds to your incident response stands to save you a lot of money.

Increasing accuracy of threat detection

AI systems are far less prone to human error than cybersecurity analysts using manual methods. Augmenting these teams with AI can help them reduce the rate of false positives or missed threats and free up their time to engage in more strategic work and better, faster incident response. This also introduces a higher level of reliability into your cybersecurity program.

High-volume data analysis

Artificial intelligence systems are extraordinarily good at sifting through massive amounts of data and flagging abnormal patterns or activities. With the volumes of data being produced today, it’s nearly impossible for teams to review it all manually. Feeding your security, firewall, and intrusion detection logs and other IT security data to an AI system can help it recognize routine behavior on your network and detect any problems like suspicious activity that may indicate an insider threat or data breach in progress.
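
As a simple illustration of this idea, here's a sketch that trains an anomaly detector on summaries of routine user sessions and flags outliers. The library (scikit-learn), the feature choices, and the numbers are illustrative assumptions, not a blueprint for any particular product:

```python
# Minimal log-analysis sketch: learn "normal" session behavior, flag outliers.
from sklearn.ensemble import IsolationForest

# Each row summarizes one user session: [megabytes_transferred, login_hour, failed_logins]
baseline_sessions = [
    [12, 9, 0], [8, 10, 1], [15, 11, 0], [10, 14, 0],
    [9, 15, 1], [14, 16, 0], [11, 13, 0], [13, 9, 0],
]

# Train on routine activity so the model learns what normal looks like.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_sessions)

# A 2 GB transfer at 3 a.m. with repeated failed logins stands out from the baseline.
new_sessions = [[11, 10, 0], [2048, 3, 7]]
print(detector.predict(new_sessions))  # 1 = looks routine, -1 = flag for review
```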

AI models can also accelerate your cyber risk quantification programs by carrying out quantitative risk analysis in real time, recommending mitigations, and helping you track key risk indicators to proactively manage cyber risk.
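
Much of cyber risk quantification boils down to math a machine can run continuously. Here's a minimal Monte Carlo sketch of the kind of calculation involved; the incident frequency and loss distribution are hypothetical assumptions, and a real program would calibrate them against your own loss data and threat intelligence:

```python
# Minimal Monte Carlo sketch of annual cyber loss (assumed, illustrative inputs).
import numpy as np

rng = np.random.default_rng(42)
simulations = 100_000

# Assumptions: about 2 qualifying incidents per year on average, with per-incident
# losses that are log-normally distributed (many small losses, a few severe ones).
incidents_per_year = rng.poisson(lam=2, size=simulations)
annual_losses = np.array([
    rng.lognormal(mean=11, sigma=1.2, size=n).sum() if n else 0.0
    for n in incidents_per_year
])

print(f"Expected annual loss: ${annual_losses.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_losses, 95):,.0f}")
```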

Automated, continuous controls and vulnerability testing

AI systems have the potential to be used to automate continuous monitoring and testing of cybersecurity controls, vulnerabilities, and patch management across your organization. Performing this work manually is a very time-consuming process. Letting AI carry it out automatically and continuously can help you identify and rectify any gaps in real time and maintain audit readiness at all times.
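
Conceptually, automated control testing is a loop that runs each check and records the result. The sketch below shows the shape of that loop; every control name and check function here is a hypothetical placeholder you'd replace with queries against your actual identity provider, scanners, and patch-management tools:

```python
# Minimal continuous-control-check sketch (all checks are hypothetical stubs).
import datetime

def check_mfa_enforced() -> bool:
    return True   # placeholder: query your identity provider here

def check_critical_patches_applied() -> bool:
    return False  # placeholder: query your patch-management system here

CONTROL_CHECKS = {
    "MFA enforced for all admin accounts": check_mfa_enforced,
    "Critical patches applied within SLA": check_critical_patches_applied,
}

def run_control_checks() -> None:
    """Run every check and flag gaps, producing an audit-ready record."""
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for name, check in CONTROL_CHECKS.items():
        status = "PASS" if check() else "GAP - remediate"
        print(f"{timestamp}  {name}: {status}")

run_control_checks()  # schedule this to run continuously, e.g. via cron or a CI job
```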

Efficiency gains

AI can make time-consuming tasks like developing policy and procedure documents much easier to complete. Systems like ChatGPT can give cyber teams a quick start on this type of work by automatically developing an outline or even the first draft of these documents.
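
As one possible way to kick off a first draft, here's a short sketch that asks a generative AI API for a policy outline. It uses the OpenAI Python client as an example; the provider, model name, and prompt are assumptions, and any output should be treated as a starting point for human review, never a finished policy:

```python
# Minimal sketch: ask a generative AI API for a policy outline (assumed provider/model).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{
        "role": "user",
        "content": "Draft an outline for an acceptable-use policy covering "
                   "employee use of generative AI tools at a mid-sized company.",
    }],
)

print(response.choices[0].message.content)  # a first draft for humans to refine
```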

Letting AI mitigate routine or low-impact threats also frees your cybersecurity team to focus on the most critical cyber risks your organization is facing without sacrificing the ability to cover all bases. This elevates IT security across your entire organization.

Improving cyber risk cultures and cybersecurity training

AI systems can be used to produce more realistic simulations of cyber attacks like phishing attempts based on real-world examples, which can be used to raise awareness of cyber risk and reinforce good cyber hygiene practices across your organization.

Ability to scale

AI systems can be easily tweaked and improved to scale as your organization and its cybersecurity programs grow and as cybersecurity risk landscapes change.

Other uses

The above are some of the primary use cases for AI in cybersecurity that we’re seeing right now, but there are many, many more, including:

  • Predicting your risk of a breach
  • User authentication
  • Fraud detection
  • Automated threat intelligence
  • Spam filters

Challenges of AI in cybersecurity

Data quality

AI systems are only as effective as the quality of the data you’re training them on. If you don’t have access to a sufficient volume of data, or if the data you do have is low-quality, then your AI systems may be less effective or biased, leading to higher rates of false positives and defeating the purpose of using these systems in the first place. In the worst cases, this could even generate more cybersecurity issues for your organization.

AI-powered cyber attacks and AI vulnerabilities

As mentioned above, cybersecurity teams aren’t the only ones looking to leverage the power of AI to improve their effectiveness: Cyber criminals, hacktivist groups, and nation-states are already using these systems to increase the sophistication of their attacks and cyber operations.

As sci-fi as it might sound, AI systems operated by organizations and governments are constantly battling bots and other adversarial AI systems being piloted by threat actors in cyberspace. 

Data poisoning, for instance, is a common technique in which adversarial AI or hackers attempt to access and manipulate the data an organization's AI system trains on to influence its behavior. And however successful your AI systems might be at detecting phishing attacks, the AIs being used by threat actors may be just as good at producing ever more convincing ones. Since AI systems have exhibited the ability to write code, hackers can also use them to dynamically change malware code to slip past network security measures.
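
To see why poisoned training data is so dangerous, here's a minimal sketch of one simple form of the attack, label flipping, using a synthetic dataset. The dataset, model, and poisoning rate are illustrative assumptions, not a reconstruction of any real incident:

```python
# Minimal label-flipping data-poisoning sketch (synthetic, illustrative data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on trustworthy labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an attacker who can tamper with training data flips 30% of labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("Accuracy with clean labels:   ", clean.score(X_test, y_test))
print("Accuracy with poisoned labels:", poisoned.score(X_test, y_test))
```

Even this crude attack measurably degrades the model's accuracy, which is why protecting training data pipelines matters as much as protecting the models themselves.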

Privacy and ethics concerns

Any system running on artificial intelligence inherently consumes lots and lots of data. Oftentimes, that data can be highly sensitive or personal in nature. Naturally, this raises data privacy and ethical use concerns, and those concerns are starting to draw the eye of regulators around the world. It's up to organizations leveraging AI to know where the data they feed these systems goes, how it's processed, and how to keep it secure. It's also imperative that you understand how any AI-powered third-party software or systems that your own systems integrate with handle this data.

Over-reliance on AI

AI is certainly a powerful way to drive efficiency and improve your cybersecurity program’s effectiveness, but too much of a good thing can quickly become a bad thing. Cybersecurity teams need to temper the urge to let AI take the wheel and ensure that the appropriate checks and balances have been put in place, that human oversight is involved in making important decisions, and that there’s sufficient insight available into how the AI systems work and produce decisions and recommendations.

Best practices for using AI in cybersecurity

For all of the challenges facing cybersecurity teams that want to start leveraging AI systems, following these best practices can help mitigate the risks while still taking advantage of the benefits:

Have a plan

AI is not the sort of technology that you can just start implementing across your security operations. Before beginning to use AI in cybersecurity, make sure you’ve developed a plan for deploying, overseeing, and managing it. You should also develop policies around acceptable use of AI across your organization, so that everyone is on the same page about where, how, and for what purposes it’s OK to use AI.

Conduct regular security assessments

Just like cyber teams are constantly analyzing the security of their organization’s networks, it’s important to apply the same diligence to your AI systems. After all, they are, at their core, digital systems with vulnerabilities similar to any other software, system, or network. Carrying out regular AI security assessments is a must.

Learn more about conducting cybersecurity risk assessments here.

Develop your systems securely

If you’re building an AI-powered cybersecurity solution in-house, make sure the process adheres to the same secure development, configuration, and deployment standards that you’d use for any other commercial or internal products you produce.

Manage third-party and vendor risk 

If the AI you’re putting into place was built by a third party or outside vendor, make sure you’ve assessed any third-party risk involved with those entities through your organization’s existing third-party risk management program.

Maintain data security

Make sure you’re able to trace where any data you feed to an AI system is going and how it’s being stored and processed. If the data is of an extremely sensitive nature, you may want to consider not providing it to the AI at all.

Using AI and modern GRC platforms to manage cybersecurity risk

Artificial intelligence stands to completely revolutionize the way organizations approach cybersecurity, both by driving efficiency and accuracy for cyber teams and by increasing the complexity of cyber risk landscapes. This technology has grown by leaps and bounds over the past year alone, and shows no signs of slowing down.

While AI is a powerful tool for managing cybersecurity risk on its own, it’s even more powerful when coupled with a modern GRC platform like LogicGate Risk Cloud. Using these technologies in tandem can help you gain a holistic understanding of your cybersecurity landscape, automate planning, mitigation, and response, and connect cyber risk to business impact.

Ready to learn more about AI and GRC? Connect with one of our expert teams.

 
