The Emerging Cybersecurity Threats and Solutions of Artificial Intelligence (And Some Resources to Help Ready You for Both)

As artificial intelligence continues to work its way into the pop culture lexicon, many envision this technology as the beginning of a dystopian nightmare that ends with us cowering before the red, glowing eye of an Arnold Schwarzenegger-skinned robot.

But while the threats of a doomed Terminator-esque fate loom in the imaginations of some, others are imagining ways this evolving technology will empower utility cybersecurity professionals to improve their security posture in ways that were previously not possible. We sat down with cybersecurity and AI expert James Edgar to learn about the security threats and solutions that are emerging from AI technology.

Utility Security Magazine:
So many view AI as a new and emerging technology. But the facts say something different.

James Edgar:
The reality is AI has been around since 1956. The term “artificial intelligence” was coined in 1955, and the technology has been evolving and developing ever since. So it’s been around. It’s here. It’s embedded in a lot of different things we already use today, from Alexa to autonomous vehicles. We see it every day, and in many cases we don’t realize it’s being used. From a cybersecurity standpoint in particular, it’s part of our everyday work, from security event monitoring to threat detection to behavior analysis.

Utility Security Magazine:
How are we seeing bad actors use AI to deceive and create security issues?

James Edgar:
We have definitely already seen the impact of AI from bad actors. Phishing is one area in particular that has really ramped up quite a bit. And not only have phishing emails increased, but they’ve gotten much better. By leveraging things like ChatGPT and other large language models, bad actors can now clean up some of the grammatical and language errors they would have missed without AI. This has empowered them to do a much better job at emulating large popular brands with emails that are much clearer and crisper, which, in turn, makes them look more realistic.

We’re also seeing more deepfakes. Early in the Russia-Ukraine war, a deepfake video of Ukrainian President Volodymyr Zelenskyy circulated, telling his soldiers to lay down their arms and surrender to the Russians. Although the Ukrainians ultimately did not fall for it, these deepfakes are getting better and better, and, as AI evolves, we are going to see them used for nefarious purposes like these.

Utility Security Magazine:
So, let’s flip to the other side of the coin and talk about some of the positives of AI. For cybersecurity applications, we are seeing it be used to create self-healing networks. Can you talk more about that?

James Edgar:
If you think about what we’re using AI for right now—especially in cybersecurity—it is being used for threat intelligence so we can identify threat patterns a lot quicker. We have it in our Security Information & Event Management, or SIEM, tools so that we can identify those threat patterns and respond faster. It’s also being used to simulate or emulate hacker activity, so organizations can now run a targeted penetration test before the bad actors do.

It’s also helping with managing data security. Do we know where our data is? Is sensitive data sitting on a server that’s exposed to the internet? One of the key benefits of AI is its neural-network processing power, which can sift through large amounts of data, identify patterns and determine the best ways to respond to anomalous patterns and threats.

It will know where your sensitive data is resting and whether it’s exposed or has a vulnerability. It will know if it’s sitting behind a firewall that has a port open that shouldn’t be open. And when it runs its analysis, it can automatically adjust, like closing that open port or updating an endpoint security agent. So basically, it can know where the important stuff is, identify where a threat or incident occurs and take the necessary action to respond, all without much interaction from a person. The response happens in near real time.
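
To make that idea concrete, here is a minimal sketch of such a detect-and-respond loop, assuming a hypothetical environment where an AI-driven analytics engine emits findings and remediation runs through your own firewall and endpoint tooling on a Linux host. The Finding structure, the endpoint-agent command and the host name are illustrative placeholders, not any particular product’s API.

import subprocess
from dataclasses import dataclass

@dataclass
class Finding:
    kind: str    # e.g., "open_port" or "stale_agent"
    host: str
    detail: str  # e.g., the port number or the agent version

def remediate(finding: Finding) -> None:
    """Map a finding to an automated response, mirroring the examples above."""
    if finding.kind == "open_port":
        # Close an unexpected open port with a host firewall rule (Linux iptables).
        subprocess.run(
            ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", finding.detail, "-j", "DROP"],
            check=True,
        )
    elif finding.kind == "stale_agent":
        # Trigger an endpoint agent update; "endpoint-agent" is a placeholder
        # for whatever management CLI your environment actually uses.
        subprocess.run(["endpoint-agent", "update", "--host", finding.host], check=True)
    else:
        # Anything the playbook doesn't cover still goes to a human analyst.
        print(f"Escalating to analyst: {finding}")

# Example finding the analytics engine might emit for an exposed telnet port.
remediate(Finding(kind="open_port", host="scada-hist-01", detail="23"))

In practice, a response playbook like this would be reviewed and approved by people before anything runs automatically against OT systems.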

Utility Security Magazine:
With the emergence of ChatGPT and Google’s Gemini as useful tools that many turn to for everything from developing shopping lists to writing advanced code, there is the question: Who owns the data and information that one obtains from these apps?

James Edgar:
A lot of these AI models like ChatGPT go out and scrape the internet or they scrape documents—and the concern is that they might be pulling information that is copyrighted or confidential. With that said, I think what we are going to see is more litigation and legal decisions made around how we use the data. In fact, right now, any content generated from ChatGPT cannot be copyrighted because you can’t account for where that data may have come from.

On the code side of things, GitHub is being sued over Copilot, with the accusation that it was trained on code sources the company was never given permission to use. So, that creates a lot of legal concerns. And we can go even further down this rabbit hole and ask, what happens when we put our own data into ChatGPT or other chatbots? What are our legal liabilities or obligations? What protections do we have if we upload confidential data?

For example, let’s say an unwitting businessperson puts their sales strategy into ChatGPT so they can look for different ways to market it. That’s confidential corporate information. Or let’s say you put in information about your operational technology (OT) network to look for ways to improve communications between systems. That data can be used, consumed and trained on, and before you know it, someone else enters a similar prompt and it outputs all of your data.

And again, what are the legal ramifications if that happens? You have to be very careful about not only where you’re getting that data from, but also how you’re entering the data and how you are using data as an input into some of these tools.
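
One practical guardrail, sketched below, is to scrub prompts before they ever leave your environment. The regular expressions and the send_to_chatbot() stub are illustrative assumptions only; a real deployment would sit behind a proper data-loss-prevention policy and an approved LLM gateway.

import re

# Patterns for values that should never leave the environment (illustrative only).
REDACTIONS = {
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_host": re.compile(r"\b(?:scada|plc|hmi)-[\w-]+\b", re.IGNORECASE),
}

def scrub(prompt: str) -> str:
    """Replace sensitive-looking values with labeled placeholders."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label.upper()}_REDACTED]", prompt)
    return prompt

def send_to_chatbot(prompt: str) -> None:
    # Placeholder for whatever API your approved tool actually exposes.
    print("Sending:", prompt)

send_to_chatbot(scrub(
    "Our HMI at 10.20.30.40 (scada-hist-01) can't reach plc-feeder-7; how do we fix routing?"
))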

Utility Security Magazine:
Another concern about chatbots like ChatGPT is the concept of hallucinations and how those can really create accuracy issues. Can you talk about why this is a big issue?

James Edgar:
Hallucinations are when a chatbot solution like ChatGPT gives you a wrong answer based on what it assumes is correct. A lot of that traces back to the data it was trained on—either there wasn’t sufficient data or the data was inaccurate to begin with. For example, when ChatGPT first came out, its training data only went up to the fall of 2021. So, if you asked it about anything that happened after that cutoff, it either didn’t know or confidently gave you an outdated answer.

Bad data can create hallucinations. If the chatbot doesn’t have the right data or it’s trained incorrectly, it can be inaccurate. You have to be careful about that because these issues can spread into other areas as well. If you’re developing your own business use cases for chatbots, you’ve got to make sure that, from a scope and data standpoint, they’re accurate, because you want that output to be correct.

Even worse, bad actors can deliberately feed in incorrect information—what’s often called data poisoning—to alter the output. That’s one thing many are worried about. In fact, studies have found that these chatbot models can be influenced by these deliberate efforts, ultimately resulting in hallucinations.
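
One common partial defense, sketched below under stated assumptions, is to ground the chatbot in data you control and treat anything it cannot support as a gap rather than an answer. The example context, prompt wording and ask_model() stub are placeholders for whichever model your organization has approved.

# Example context you trust; the content here is invented for illustration.
TRUSTED_CONTEXT = """
Substation relay firmware baseline: v4.2.1 (approved January 2024).
Remote access to OT jump hosts requires MFA and an approved change ticket.
"""

def build_prompt(question: str) -> str:
    # Constrain the model to the supplied context and give it an explicit
    # "I don't know" path, which reduces (but does not eliminate) hallucination.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, reply exactly: 'Not in the provided data.'\n\n"
        f"Context:\n{TRUSTED_CONTEXT}\nQuestion: {question}"
    )

def ask_model(prompt: str) -> str:
    # Placeholder: call your organization's approved LLM endpoint here.
    return "Not in the provided data."

def answer(question: str) -> str:
    reply = ask_model(build_prompt(question))
    # Treat anything the model can't ground as a gap to escalate, not an answer.
    if reply.strip() == "Not in the provided data.":
        return "No grounded answer available; route to a human analyst."
    return reply

print(answer("What firmware should the substation relays be running?"))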

Utility Security Magazine:
So, as the automation of AI grows, many people’s minds go to movies like “The Terminator” that predict doomsday outcomes. In fact, the director of “The Terminator,” James Cameron, recently said, “I warned you.” So, should I start building a robot-proof bunker now, or are these fears a bit overblown?

James Edgar:
There are four levels of AI. There’s reactive AI, which includes things like the Deep Blue supercomputer that beat chess champion Garry Kasparov back in the ‘90s by evaluating enormous numbers of possible moves and determining which ones gave it the best odds to win.

Then, the next level is limited memory, which is the stuff we are seeing today. For example, autonomous driving technology is limited-memory AI that is trained to know what to do in certain traffic situations and evaluate what is going on around it so that it can adapt to it.

The next level is Theory of Mind, which is essentially emotional awareness. For example, it could look at a camera feed of your face and determine whether or not you are sad and offer you solutions to improve your emotional state.

And then you get into the fourth level of AI, which is where we get into the James Cameron “Terminator” stuff: self-awareness. At this level, it can not only read emotion, but it also becomes self-aware. And it can do things like take actions for self-preservation.

The reality is, we are not even remotely close to the theory-of-mind level (Level 3), let alone the fourth level of self-awareness. So, when we think about all the crazy stuff that can happen that we’ve seen in the movies, it’s just not possible.

Utility Security Magazine:
What advice and resources would you offer to utility cybersecurity professionals as they explore AI and the benefits and risks that it brings to their organizations?

James Edgar:
Don’t shy away from AI. Leverage it where you can, and be sure to look at the tools you may already have, as AI might already be part of them. For example, if you’re using a managed security service provider, they’re probably using AI.

Mandiant was acquired by Google, so now they’re leveraging a lot of Google AI. Microsoft is doing the same thing from their security standpoint. So, you might have opportunities to leverage AI at your fingertips. Explore whether or not you can take advantage of those opportunities.

There are all kinds of areas in which you’re seeing advancements in AI, such as identity and access management. As you look at ways to better manage access to your information technology or operational technology systems, there are solutions out there that have improved by leaps and bounds in the past couple of years by implementing AI. They can help you identify where there are gaps in privileged access and spot anomalous behavior or suspicious activity in remote access.
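
As a toy illustration of the kind of check these tools automate at much greater scale, the sketch below builds a simple per-user baseline of remote-access logins and flags the ones that fall outside it. The log format, thresholds and usernames are assumptions, not real telemetry.

from collections import Counter, defaultdict
from statistics import mean, pstdev

# (user, login_hour, source_country) -- stand-ins for VPN/jump-host logs.
LOGINS = [
    ("ops_tech1", 8, "US"), ("ops_tech1", 9, "US"), ("ops_tech1", 7, "US"),
    ("ops_tech1", 8, "US"), ("ops_tech1", 3, "RO"),  # odd hour, odd geography
]

def flag_anomalies(logins, z_threshold=1.5):
    """Flag logins that sit far outside a user's own baseline."""
    by_user = defaultdict(list)
    for user, hour, country in logins:
        by_user[user].append((hour, country))

    alerts = []
    for user, events in by_user.items():
        hours = [h for h, _ in events]
        countries = Counter(c for _, c in events)
        mu, sigma = mean(hours), pstdev(hours) or 1.0  # avoid divide-by-zero
        for hour, country in events:
            unusual_hour = abs(hour - mu) / sigma > z_threshold
            rare_country = countries[country] == 1 and len(events) > 3
            if unusual_hour or rare_country:
                alerts.append((user, hour, country))
    return alerts

print(flag_anomalies(LOGINS))  # -> [('ops_tech1', 3, 'RO')]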

Risk management is another area you should be thinking about. Some of these AI tools are getting much better at consuming a lot of data. The foundational advantage of AI is having that neural network that can process a lot of data, identify risk a lot quicker—almost in real time—and help ensure that your security posture is at the level you want it to be.

A lot of utilities and critical infrastructure operators are looking at zero trust. AI is going to play a big role in pulling this all together, even down to the smallest microsegmentation. If you have very critical systems in your environment that you want to lock down, leveraging AI with your current tools—and tools that are coming out—really advances their ability to find suspicious behavior, lock down systems, alert you and self-heal.

So, I think there are lots of opportunities to take advantage of AI, especially as organizations advance their AI capabilities by partnering with big players like Microsoft, Google and Adobe.

From an organizational standpoint, as far as how we go about managing AI, look at the executive order on artificial intelligence that came out from the White House in 2023, along with its 2022 Blueprint for an AI Bill of Rights. Together they spell out how we should consider using AI from a business standpoint, how to use it ethically and how to make sure we’re in line with government regulations.

Another good resource is the National Institute of Standards and Technology (NIST) AI Risk Management Framework. It is relatively new and ties back to the Center for Internet Security (CIS) framework, but it gives you a really good understanding of how to define the risk within your organization.

Another good resource is the Singapore Model AI Governance Framework. This is almost a handbook on how to govern AI in your environment, because if you do decide to go down that route—not just for cybersecurity but as a business—there are lots of steps you have to take. You have to think about your risk appetite. You have to bring your stakeholders into your policies and educate and train them. And finally, you need to have a way to manage it all.

The bottom line is that having a governance model about how you use AI effectively and efficiently is going to be critical to success, and some of these resources will help you do that.

About James Edgar: James Edgar is senior vice president and chief information security officer for FLEETCOR (www.fleetcor.com), a global digital payments leader that helps automate, secure, digitize and manage payment transactions on behalf of businesses across more than 100 countries in North America, Latin America, Europe and Asia Pacific. He oversees the global Information Security and IT Compliance teams, which span four continents and multiple business lines. James has served on the steering committee for the Payment Processors Information Sharing Council, participated in the NIST Cybersecurity Framework development workshops, and been actively involved in the governance, risk and compliance community in Atlanta.
