Deepu

Delay the AI Overlords: How OAuth and OpenFGA Can Keep Your AI Agents from Going Rogue

Is your RAG system secretly leaking sensitive data to your LLM? Learn how to stop it with fine-grained authorization before it goes rogue.

#1 (about 4 minutes)

Understanding the current state of AI security challenges

AI systems often have poor judgment, and the security domain is playing catch-up with the rapid evolution of AI agents and protocols.

#2 (about 3 minutes)

Focusing on key OWASP Top 10 risks for developers

Application developers should focus on mitigating sensitive information disclosure and excessive agency, as these have a large attack surface under their control.

#3 (about 3 minutes)

Why traditional RBAC fails for RAG systems

Traditional role-based access control (RBAC) is insufficient for RAG systems due to dynamic context and complex data relationships, necessitating a fine-grained authorization (FGA) approach.
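To make the contrast concrete, here is a minimal OpenFGA authorization model in its DSL (schema 1.1). The type and relation names are illustrative, not from the talk: instead of a flat role, access to each document is derived from relationships, so "viewer" can be granted directly or inherited from ownership.

```
model
  schema 1.1

type user

type document
  relations
    define owner: [user]
    define viewer: [user] or owner
```

A user who owns a document is automatically a viewer of it, without any role assignment — the kind of per-object, relationship-derived permission that RBAC roles cannot express cleanly.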

#4 (about 5 minutes)

Implementing OpenFGA to secure RAG data access

OpenFGA uses authorization models and relationship tuples to filter documents from a vector store, ensuring the LLM only receives data the user is permitted to see.
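The filtering step can be sketched as follows. This is a minimal in-memory stand-in for an OpenFGA check — the tuple set and document IDs are made up for illustration; in production the `allowed` lookup would be a call to the OpenFGA API (e.g. a batch check or list-objects request via an SDK) rather than a dict membership test.

```python
# Relationship tuples of the form (user, relation, object), mirroring
# how OpenFGA stores "user:anne is a viewer of document:handbook".
tuples = {
    ("user:anne", "viewer", "document:handbook"),
    ("user:anne", "viewer", "document:roadmap"),
}

def allowed(user: str, relation: str, obj: str) -> bool:
    # Stand-in for an OpenFGA check call.
    return (user, relation, obj) in tuples

def filter_retrieved(user: str, retrieved_ids: list[str]) -> list[str]:
    """Drop vector-store hits the user cannot view, before they reach the LLM."""
    return [doc for doc in retrieved_ids if allowed(user, "viewer", doc)]

hits = ["document:handbook", "document:salaries", "document:roadmap"]
print(filter_retrieved("user:anne", hits))
# ['document:handbook', 'document:roadmap']
```

The key property: the unauthorized document never enters the prompt, so the LLM cannot leak what it never saw.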

#5 (about 2 minutes)

Mitigating excessive agency with zero trust tool access

Control an AI agent's tool access at the code level using zero trust principles, applying standard RBAC for simple cases and FGA for granular, user-dependent permissions.
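One way to enforce this at the code level is to gate every tool behind an explicit permission check, denying by default. The decorator and permission table below are a hypothetical sketch; the lookup could be a plain RBAC role check or an OpenFGA call, depending on how granular the tool needs to be.

```python
from functools import wraps

# Hypothetical permission store: which tools each user may invoke.
# In a real system this would be an RBAC or FGA lookup, not a dict.
PERMISSIONS = {"user:anne": {"search_docs"}}

def requires_permission(tool_name: str):
    """Zero trust gate: the tool runs only if the caller is explicitly allowed."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user: str, *args, **kwargs):
            if tool_name not in PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} may not call {tool_name}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("search_docs")
def search_docs(user: str, query: str) -> str:
    return f"results for {query}"
```

Because the check lives in code rather than in the prompt, a jailbroken agent still cannot invoke a tool the user lacks permission for.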

#6 (about 1 minute)

Securing third-party API calls using OAuth federation

Use OAuth 2.0 federation to allow AI agents to call third-party APIs on a user's behalf without handling raw credentials, using a broker to manage access tokens.
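A common building block for such a broker is OAuth 2.0 Token Exchange (RFC 8693): the agent presents the token it already holds and receives a downstream access token scoped to the third-party API, never touching the user's raw credentials. The sketch below only builds the request parameters; the endpoint and audience values are placeholders, and the actual HTTP call and client authentication are omitted.

```python
# Placeholder endpoint; a real broker would use its authorization
# server's token endpoint and authenticate as a registered client.
TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"

def build_token_exchange_request(subject_token: str, audience: str) -> dict:
    """Form parameters for an RFC 8693 token-exchange request."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,  # the token the agent already holds
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,            # the downstream API being called
    }
```

The response's access token is what the agent forwards to the third-party API, and it can be narrowly scoped and short-lived.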

#7 (about 1 minute)

Adding human guardrails with asynchronous authorization

Implement human-in-the-loop approvals for high-stakes actions by using the CIBA flow to send asynchronous authorization requests to users for confirmation.
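In CIBA (OpenID Connect Client-Initiated Backchannel Authentication), the agent POSTs a request to the authorization server's backchannel endpoint, the user approves on their own device, and the agent then polls the token endpoint with the returned `auth_req_id`. The sketch below only assembles the core request parameters; the field values are placeholders and client authentication is omitted.

```python
def build_ciba_request(login_hint: str, binding_message: str) -> dict:
    """Core parameters of a CIBA backchannel authentication request."""
    return {
        "scope": "openid",
        "login_hint": login_hint,            # identifies which user to ask
        "binding_message": binding_message,  # shown to the user so they know
                                             # what they are approving
    }
```

The `binding_message` is what turns this into a meaningful guardrail: the user sees a human-readable description of the high-stakes action before confirming it.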

#8 (about 5 minutes)

Demoing step-up authorization and system architecture

A live demo showcases step-up authorization where an agent requests user consent before accessing sensitive data, followed by an overview of the application's architecture.
