Search
Close this search box.

Enhancing Security with Generative AI: Identifying Risks and Implementing Mitigation Strategies 

2 minutes

You need to make sure that the AI is not being used to do something that it’s not supposed to do.”Sundaramoorthy also discussed the importance of securing AI models, which are often trained on sensitive data. He recommended that companies use encryption and access controls to protect these models. Additionally, he suggested that companies use AI to monitor AI, which can help identify and mitigate potential risks.

Microsoft’s Siva Sundaramoorthy has provided a comprehensive guide on how to apply common cyber precautions to generative AI used in and around security systems. The sudden rise of generative AI, particularly with the release of ChatGPT, has made it a hot topic in the tech world. As Microsoft utilizes OpenAI foundation models and receives inquiries from customers about the impact of AI on security, Sundaramoorthy, a senior cloud solutions security architect at Microsoft, often addresses these concerns. In his presentation at ISC2 in Las Vegas on October 14th, Sundaramoorthy discussed the benefits and security risks associated with generative AI.

One of the main concerns with generative AI is its accuracy. Sundaramoorthy emphasized that this technology acts as a predictor, selecting the most likely answer based on the context, but there may be other correct answers as well. To address this, cybersecurity professionals should consider AI use cases from three perspectives: usage, application, and platform.

Sundaramoorthy stated, “You need to understand what use case you are trying to protect.” He added, “A lot of developers and people in companies are going to be in this center bucket [application] where people are creating applications in it. Each company has a bot or a pre-trained AI in their environment.”

AMD recently revealed its competitor to NVIDIA’s heavy-duty AI chips, highlighting the ongoing hardware war. Once the usage, application, and platform are identified, AI can be secured similarly to other systems, but not entirely. Certain risks are more likely to arise with generative AI than with traditional systems. Sundaramoorthy listed seven adoption risks, including bias, misinformation, deception, lack of accountability, overreliance, intellectual property rights, and psychological impact.

AI presents a unique threat map, corresponding to the three angles mentioned above. For example, AI usage in security can lead to the disclosure of sensitive information, shadow IT from third-party LLM-based apps or plugins, or insider threat risks. AI applications in security can open doors for prompt injection, data leaks or infiltration, or insider threat risks. AI platforms can introduce security problems through data poisoning, denial-of-service attacks on the model, theft of models, model inversion, or hallucinations.

Attackers can exploit AI systems and models using strategies such as prompt converters, obfuscation, semantic tricks, or explicitly malicious instructions to bypass content filters. They could also use jailbreaking techniques to exploit AI systems, poison training data, perform prompt injection, take advantage of insecure plugin design, launch denial-of-service attacks, or force AI models to leak data.

Sundaramoorthy also stressed the importance of securing AI models, which are often trained on sensitive data. He recommended using encryption and access controls to protect these models. Additionally, he suggested using AI to monitor AI, which can help identify and mitigate potential risks.

In conclusion, while generative AI offers many benefits, it also presents unique security risks that must be addressed. By understanding the use case, application, and platform of AI, companies can implement appropriate security measures to protect against potential threats. It is also crucial to secure AI models and use AI to monitor AI to prevent and mitigate potential risks.  

ABOUT
Kerri is a proud member of TLP and has been serving the legal industry in marketing, intake and business development for over a decade. As CEO of KerriJames, she is relentless in her pursuit of improving intake so law firms can retain more cases without buying more leads. If your firm shares her hunger for growth, reach out and speak with Kerri.

Just for You

More from us

All things legal intake, law firm growth, marketing and client success.