Generative Artificial Intelligence (GenAI) is a transformative technology that is becoming the focus of many enterprises' IT strategies. As part of that effort, security teams are working to identify, develop, and implement best practices to secure the use of GenAI within their enterprises. This requires not only a review of internal IT security practices to take GenAI into account, but also a deep understanding of what GenAI brings to the table. The role of GenAI providers Supports safe corporate use. best practice While security in this area is constantly evolving, there are four basic questions enterprise security teams should ask to get the conversation started:
Will my data remain private?
GenAI providers should clearly document their privacy policies, ideally giving customers control over their information and ensuring it cannot be used to train underlying models or shared with other customers without their explicit permission.
Can you trust the content created by GenAI?
Like humans, GenAI will make mistakes from time to time. You can't expect perfection, but you can expect transparency and accountability. This can be achieved in three ways: by using trusted data sources to improve accuracy, by maintaining transparency with visibility into why and where, and by providing mechanisms for user feedback to support continuous improvement. In this way, providers can maintain the trustworthiness of the content their tools produce.
Would you like to help maintain a safe and responsible usage environment?
Enterprise security teams have an obligation to ensure that GenAI is used safely and responsibly within their organizations, and AI providers should be able to support that effort in a variety of ways.
For example, one concern is that users will become overly reliant on the technology. GenAI is not meant to replace actual workers, but to assist them in their daily work. As such, users should be encouraged to think critically about the information provided by the AI. Providers can encourage appropriate user scrutiny by explicitly citing sources and using carefully considered language that encourages thoughtful use.
Another risk, and perhaps less common, is adversarial insider abuse, which would involve attempts to induce GenAI to engage in harmful activities, such as generating dangerous code. AI providers can mitigate this type of risk by building safety protocols into their system designs and clearly setting boundaries for what GenAI can and cannot do.
Was this GenAI technology designed with security in mind?
Like any other type of enterprise software, GenAI technology must be designed and developed with security in mind, and technology providers must document and share their security development practices. Additionally, the security development lifecycle must be adjusted to account for new threat vectors posed by GenAI. This can include actions such as updating threat modeling requirements to address AI and machine learning specific threats, and implementing rigorous input validation and sanitization of user-provided prompts.
AI-powered red teaming They also serve as a powerful security hardening, allowing providers to look for exploitable vulnerabilities, the generation of potentially harmful content, and other issues. Red team exercises have the advantage that they are highly adaptable and can be used both before and after a product release, a benefit that is essential in maintaining the security of a rapidly evolving technology like GenAI.
Shared Responsibility
These questions can help enterprise security teams gain critical understanding of their GenAI provider’s efforts across four fundamental areas of protection: data privacy and ownership, transparency and accountability, user guidance and policy, and secure design and development.
While these are a great start, there are also many promising industry-level initiatives that should help ensure the safe and responsible development and use of GenAI and further our understanding of AI safety considerations. But one thing is clear: the leading providers of GenAI technology understand their role in this shared responsibility. Information about their work We're committed to advancing safe, secure, and trustworthy AI. Let's start the conversation today.
– read more A Partner Perspective from Microsoft Security