QuilrAI
Engineering

One base_url Change. Your AI Gets Secure.

The entire QuilrAI integration is a single base URL swap. No SDK rewrite, no prompt changes, no agent refactor. Here's what happens the moment you flip it.

4 min read
April 2026

The most common feedback we hear from engineers evaluating QuilrAI is: "we expected this to be a bigger project." The entire integration surface is a single environment variable: OPENAI_BASE_URL, pointed at the QuilrAI gateway instead of the provider's API endpoint. Your existing SDK, your existing prompts, your existing agent code: none of it changes. Here is exactly what happens the moment you make that swap.
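In the OpenAI Python SDK (and most OpenAI-compatible SDKs), that swap is one line, since the client reads OPENAI_BASE_URL at construction time. The gateway URL below is a placeholder; use the endpoint from your QuilrAI dashboard:

```python
import os

# The entire integration: one environment variable.
# (Placeholder URL — substitute the gateway endpoint from your dashboard.)
os.environ["OPENAI_BASE_URL"] = "https://gateway.quilr.example/v1"

# Existing application code is untouched. For example:
#   from openai import OpenAI
#   client = OpenAI()  # now routes through the gateway automatically
#   client.chat.completions.create(model=..., messages=[...])
```

Setting the variable in your deployment environment (rather than in code) means the rollout is a config change, not a release.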

What Is the OpenAI-Compatible Proxy Pattern?

QuilrAI implements the full OpenAI Chat Completions API surface, including streaming, function calling, tool use, and all model parameters. When your SDK sends a request to the gateway, it receives a response that is byte-for-byte compatible with what the upstream provider would have returned, while security policy enforcement, token optimization, and audit logging happen transparently in the proxy layer.
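Because the response body is the standard Chat Completions shape, any parsing code you already have keeps working. A minimal sketch (the payload below is a hand-written example for illustration, not a captured gateway response):

```python
import json

# A response relayed through the gateway parses exactly like one from the
# provider: same fields, same structure.
raw = """{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [{"index": 0,
               "message": {"role": "assistant", "content": "Hi!"},
               "finish_reason": "stop"}],
  "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}"""

resp = json.loads(raw)
answer = resp["choices"][0]["message"]["content"]   # -> "Hi!"
tokens = resp["usage"]["total_tokens"]              # -> 12
```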

What Activates Immediately?

On the first request through the gateway, the following capabilities activate with zero additional configuration: prompt injection detection (semantic and structural), PII detection and optional redaction, token usage tracking and cost attribution, model call audit logging, and rate limiting per API key. These defaults are tunable through the dashboard, but they are on from day one, providing an immediate security baseline without any prompt engineering or application changes.
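To make the redaction default concrete, here is an illustrative sketch of the kind of in-flight transformation the gateway applies before a prompt reaches the model. This is our simplification, not QuilrAI's implementation: the actual detector is semantic and structural, not a single regex.

```python
import re

# Illustration only — QuilrAI's PII detection is semantic; this one-regex
# version just shows the shape of transparent in-flight redaction.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace email addresses with a redaction token."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

prompt = "Contact alice@example.com about the invoice."
print(redact(prompt))  # Contact [REDACTED_EMAIL] about the invoice.
```

The key point is where this runs: in the proxy layer, so application code never has to call anything like `redact()` itself.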

What Is Progressive Configuration?

After the base URL swap, teams typically spend a few hours configuring their specific policy requirements: which PII fields to redact vs. flag, which model tier to route different request types to, and which MCP tools require elevated approval. The Guardian Agent setup wizard handles the more complex governance requirements, but all of that is additive configuration on top of a working, secured baseline that exists from the moment the URL changes.
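The progressive configuration described above might look something like the following sketch. Every field name here is hypothetical, for illustration only; the real schema is defined through the QuilrAI dashboard and setup wizard.

```python
# Hypothetical policy sketch — field names are illustrative, not QuilrAI's
# actual configuration schema.
policy = {
    "pii": {
        "redact": ["email", "ssn"],      # scrub before the model sees it
        "flag": ["phone"],               # log, but pass through
    },
    "routing": {
        "summarization": "small-tier",   # cheaper model for simple tasks
        "default": "frontier-tier",
    },
    "mcp_tools": {
        "filesystem.write": "requires_approval",  # elevated-approval tool
    },
}
```

Note that everything here is additive: an empty `policy` still leaves the day-one defaults from the previous section in force.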


How QuilrAI addresses this: The gateway is designed for zero-friction adoption. The integration guide is a single code block. The default configuration provides meaningful security from the first request, with a progressive configuration model for teams that want deeper control.


Secure your AI stack today

See how QuilrAI's Guardian Agent and LLM Gateway protect your AI deployment from the threats covered in this article.

Get a Demo