Every term. Plain English.
16 key terms across AI security, agentic AI, and the QuilrAI platform, no jargon, no filler.
Core Concepts
5 terms
AI Agent
Core ConceptsAn AI system that autonomously takes actions in the world, reading files, calling APIs, browsing the web, writing code, or delegating tasks to other agents, without direct human instruction for each step.
Agentic AI
Core ConceptsAI systems that operate with extended autonomy, executing multi-step tasks, using tools, and making decisions. Unlike chatbots that respond to single prompts, agentic AI maintains state and pursues goals across multiple actions.
AI-SPM (AI Security Posture Management)
Core ConceptsContinuous discovery, inventory, and risk assessment of all AI agents, models, and tools in an organization. AI-SPM provides the visibility layer that security teams need before governance can be applied.
Related
MCP (Model Context Protocol)
Core ConceptsAn open protocol by Anthropic that standardizes how AI agents connect to external tools and data sources. MCP servers expose 'tools' that agents can call, like reading files, querying databases, or posting messages.
LLM Gateway
Core ConceptsA proxy layer that sits between AI agents and LLM providers (OpenAI, Anthropic, etc.), enabling policy enforcement, content filtering, PII redaction, and audit logging without modifying the underlying agent or model.
Attack Vectors
5 terms
Prompt Injection
Attack VectorsAn attack where malicious instructions are embedded in data the AI agent reads, web pages, documents, tool outputs, causing the agent to follow the attacker's instructions instead of the user's. The AI equivalent of SQL injection.
Indirect Prompt Injection
Attack VectorsA variant of prompt injection where the malicious instruction arrives via a secondary source the agent retrieves, a web page, a RAG document, an API response, rather than directly from the user.
RAG Poisoning
Attack VectorsAn attack where a malicious document is inserted into a retrieval-augmented generation (RAG) vector store. When the poisoned document is retrieved, it injects hidden instructions into the model's context on every future query that triggers it.
Privilege Escalation (Agentic)
Attack VectorsAn attack where a sub-agent or tool call is used to gain permissions beyond what was granted to the original agent. Common in multi-agent systems where orchestrator agents delegate to specialized sub-agents.
MCP Scope Violation
Attack VectorsWhen an MCP server or AI agent uses tool calls to access resources, data, or actions outside the intended scope, reading files it shouldn't, writing to unauthorized endpoints, or calling tools it wasn't granted access to.
QuilrAI Platform
3 terms
Guardian Agent
QuilrAI PlatformAn autonomous security agent QuilrAI creates for each AI agent it governs. It analyzes the agent's purpose, configures least-privilege permissions at the skill level, enforces policies at runtime, and continuously hardens itself via red-teaming.
Red Team Agent
QuilrAI PlatformA QuilrAI agent that continuously attacks a Guardian Agent and the AI it governs, 24/7, automatically generating prompt injection attacks, privilege escalation attempts, and data exfiltration vectors. When a gap is found, the Guardian auto-updates.
Skill-Level Permissions
QuilrAI PlatformA QuilrAI permission model that grants access at the granularity of individual agent skills (e.g., 'run bash in /app/src/ only') rather than broad tool-level access. More precise than API-key scoping and more enforceable than prompt-level instructions.
Standards & Compliance
3 terms
NIST AI RMF
Standards & ComplianceThe National Institute of Standards and Technology AI Risk Management Framework, a voluntary framework for managing risks from AI systems. QuilrAI's audit trail and governance controls map directly to NIST AI RMF practices.
Shadow AI
Standards & ComplianceAI tools and models being used within an organization without IT or security team awareness or approval. Common examples: Ollama running on developer laptops, personal ChatGPT accounts used for work tasks, unauthorized MCP servers.
Data Exfiltration via AI
Standards & ComplianceWhen sensitive organizational data (PII, trade secrets, financial data, credentials) leaves the trust boundary through an AI interaction, either intentionally through a compromised model or accidentally through an agent sending data to external monitoring endpoints.
See these concepts in action
Book a demo and watch QuilrAI govern a live agent, prompt injection blocked, permissions enforced, audit trail complete.