Our Risk-Minded Take on AI: Innovate, Fast and Safe

Written by: Jon Siegler

Reviewed by: Tom Relihan
Published: August 02, 2023
Last Updated: August 07, 2023

When OpenAI's ChatGPT went mainstream last fall, the LogicGate team reacted in the way you'd expect. Our Slack channels pinged with haikus and short stories, bizarre prompt engineering, and lots of other impractical, fun content.

Soon, it became clear to us that generative AI is a fundamental shift in the technological and business landscape that is here to stay. It’s going to drastically change the way the world develops and interacts with technology.

So we decided to test ChatGPT’s limits as a business tool a bit more: We asked it to write us a mock strategic priority brief for a company just like ours. And it did … a pretty darn good job. That got us thinking about the ways we could leverage ChatGPT and generative AI systems like it to improve our organization and our customers’ experience with our product. This spring, that led to the development of our powerful new OpenAI Risk Cloud Connector integration and our ChatGPT-powered Policy & Procedure Management Application.

Now, we are continuing to build our teams’ muscles to enable them to leverage this technology and discover new, innovative uses of AI on our platform. We’re on the front foot of this emerging technology and we plan to remain there and take advantage of the significant efficiency gains we’re certain AI will bring. This is a very exciting time, indeed.

But…

Despite our excitement, we also fully understand the concerns our clients, customers, and other stakeholders have around using this novel technology. It’s changing and evolving just as quickly as it appeared, and we’ve already seen the horror stories that come from improper, even unintentional, use of large language models like ChatGPT.

AI models survive and grow on a healthy diet of data — lots and lots of data. Any use of this technology requires organizations to feed it their own data and, in some cases, their customers' data. Any organization looking to do so needs to respect the data boundaries set by its clients and other stakeholders. When we asked our customer base and Customer Advisory Board to weigh in on how they felt about the potential of adding AI-powered features to Risk Cloud, much of the conversation centered around data security and privacy implications.

This is an extraordinarily important consideration for any organization that handles especially sensitive data — which, these days, is almost every organization. Banks handle sensitive financial data, for instance, while health systems enter patient health records into electronic medical records platforms. A cybersecurity platform whose source code is leaked to the world could put all of its clients’ systems at risk, too.

Using third parties to process data is not a new concept for tech companies — off-site processing through service providers like Amazon Web Services, Google Cloud, and Microsoft Azure is common practice for organizations that build software platforms that rely on reams of data to work. But when this practice runs into the newness and uncertainty swirling around artificial intelligence and machine learning, people are naturally a bit more nervous. They want companies that upload data to these systems to approach doing so with the utmost caution.

All of this means that harnessing the vast potential of AI while also avoiding its multitude of pitfalls necessitates a measured approach. During development of the OpenAI-powered Policy & Procedure Management Application, we had multiple deep conversations with our executives, directors, and customers about how best to walk this line.

Those conversations were crucial in informing our current approach to using AI at LogicGate. We’ve decided to give Risk Cloud users full control over how and when their data is processed by AI. We’re doing this by ensuring that any features we build into Risk Cloud that interact with or rely on AI are opt-in: Customers using our new OpenAI integration can access it using their own OpenAI token that they generate from their OpenAI account. That way, if they want to cut off access at any time, or control what data is fed into the model, they can. We are fully transparent about how AI is used on Risk Cloud — no behind-the-scenes AI-powered features here.
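To make that opt-in model concrete, here’s a minimal sketch of what a bring-your-own-key call can look like using OpenAI’s official Python SDK. This is an illustration of the general pattern only; the function name and prompt are hypothetical, not LogicGate’s actual implementation.

```python
# Minimal sketch of a "bring your own key" (BYOK) pattern, assuming the
# official `openai` Python SDK (v1+). Hypothetical illustration only,
# not LogicGate's actual implementation.
from openai import OpenAI

def summarize_policy(policy_text: str, customer_api_key: str) -> str:
    """Send a request using a token the customer supplies and can revoke."""
    # The client is built from the customer's own token, so no data is
    # sent to OpenAI unless the customer has opted in by providing one.
    client = OpenAI(api_key=customer_api_key)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the following policy."},
            {"role": "user", "content": policy_text},
        ],
    )
    return response.choices[0].message.content
```

Because the request is authenticated with the customer’s own token, revoking that token at OpenAI immediately cuts off the integration’s access, and the customer decides exactly what data ever reaches the model.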

Internally, we’ve also improved our organization’s cybersecurity and data security practices by putting an acceptable use policy in place for any employees using ChatGPT or other AI technologies. That’s designed to ensure we’re not putting customer data at risk by uploading our own sensitive data to AI systems or pushing code generated by an AI into our product without a human review, while still allowing for AI use cases that boost our productivity and improve our day-to-day efficiency.

With these guardrails in place, we’re looking forward to pushing the boundaries of what’s possible in GRC with AI and finding innovative ways to leverage this exciting technology to help our customers solve their greatest risk and compliance challenges.

We’re exploring how we can push productivity to new heights by using technology like GitHub Copilot to augment our engineering team’s developer environments and get more built in less time — with the necessary reviews and safeguards in place to ensure secure, high quality code, of course.

And recently, we held an AI Week Hackathon, where members of the LogicGate team spent four days brainstorming and creating our platform’s next AI-driven solutions. We designed this event to help our teams start thinking about the ways we can incorporate generative AI into our platform to develop new functionality and improve our customers’ experience.

The results of the Hackathon were nothing short of astounding. We answered questions such as “What if you could ask a chatbot to help you automatically fill out an IT security questionnaire?” and “Could you have an AI recommend risk mitigations, generate board-ready ERM reports, or point out policy differences?” Those types of efficiency gains, we discovered, are absolutely possible with AI, and we plan to test and validate anything we build by using it internally first.

Here at LogicGate, we always say that the best companies aren’t built by avoiding risks, but by taking the right ones. Artificial intelligence certainly presents its fair share of risks, but we believe that, with the proper safeguards in place, it has the potential to revolutionize how organizations approach GRC.

So what’s our take on how organizations should approach using AI? In short, “innovate, fast and safe.”
