Securing LLM Backed Systems: Essential Authorization Practices

CSA CH Desk
August 13, 2024

Organizations are increasingly leveraging Large Language Models (LLMs) to tackle diverse business problems. Both established companies and a crop of new startups are vying for first-mover advantages. With this mass adoption of LLM-backed systems comes a critical need for formal guidance on their secure design. Organizations especially need this guidance when an LLM must make decisions or utilize external data sources.

This document by the CSA AI Technology and Risk Working Group describes the LLM security risks relevant to system design. After exploring the pitfalls in authorization and security, it outlines LLM design patterns for extending the capabilities of these systems. System designers can use this guidance to build systems that harness the powerful flexibility of AI while remaining secure.
