AI Security
The AI Security page lets you configure security policies and access controls for your AI deployments. This helps you protect your AI API keys, control costs, and ensure safe usage of AI models.
OpenCloud uses LiteLLM as the engine behind AI Security. LiteLLM is an open-source LLM gateway that acts as a protective proxy between your applications and AI providers, exposing a unified API to access 100+ AI models while adding safety guardrails.
Your Apps → AI Security (LiteLLM) → OpenAI, Anthropic, Google, etc.
Why AI Security matters
When using AI services:
- API keys are valuable — If stolen, someone could run up charges on your AI provider account
- Costs can escalate quickly — Without limits, a single runaway application can consume thousands of dollars in AI credits
- Safety matters — You may want to restrict which AI models can be used and what they can be asked to do
Key features
| Feature | Description |
|---|---|
| Unified API | One API endpoint to access 100+ AI models from any provider |
| Secret detection | Automatically detects and hides API keys, passwords, and tokens in prompts |
| Banned keywords | Blocks requests containing dangerous keywords (hack, exploit, jailbreak) |
| Content moderation | Filters violent, harmful, or inappropriate content |
| Prompt injection detection | Detects and blocks jailbreak attempts |
| Rate limiting | Control how many AI requests each app can make |
| Model routing | Route different requests to different AI models |
| Master key auth | Single master key for API authentication |
| Retry logic | Automatically retries failed requests (up to 3 times) |
Built-in guardrails
AI Security comes pre-configured with four security guardrails:
Secret Detection
Banned Keywords
Content Moderation
Prompt Injection Detection
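To make the "pre-call" idea concrete, here is a toy sketch of secret redaction. This is not LiteLLM's actual detector (the sk- key pattern is an assumption for the example); it only illustrates how a pre-call guardrail can rewrite a prompt before it reaches the model:

```python
# Toy illustration of a pre-call secret-detection guardrail.
# NOT LiteLLM's implementation; it only shows the idea of
# rewriting a prompt before it is forwarded to the AI provider.
import re

# Assumed pattern: OpenAI-style keys ("sk-" followed by 8+ alphanumerics).
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{8,}")

def redact_secrets(prompt: str) -> str:
    """Replace anything that looks like an API key with a placeholder."""
    return SECRET_PATTERN.sub("[REDACTED]", prompt)

print(redact_secrets("Use key sk-abc123DEF456 to call the API"))
# -> Use key [REDACTED] to call the API
```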
Security features
Access control
- Control which applications can access AI services
- Set permissions per user or per application
- Restrict access to specific AI models
Rate limiting
- Set limits on how many AI requests can be made per minute/hour/day
- Prevent runaway costs from misbehaving applications
- Ensure fair usage across multiple applications
Monitoring
- Track AI API usage across all your applications
- See which applications are making the most AI requests
- Monitor costs in real-time
Requirements on OpenCloud
| Requirement | Details |
|---|---|
| Type | Addon (system service) |
| Database | None required |
| Plan | Fixed addon plan (1 CPU, 2 GB RAM, ~$1/month) |
| Default port | 4000 |
| Network alias | litellm.proxy (used by other apps to discover AI Security) |
Installation
Step-by-step deployment
- Deploy AI Security (LiteLLM)
- Go to Applications > Create Application (or the Addons section)
- Select LiteLLM (AI GuardRails)
- The plan is fixed: 1 CPU, 2 GB RAM, 5 GB storage (~$1/month)
- Select your project
- Click Deploy
- Get your master key
- Go to your LiteLLM application > Environment Variables
- Find LITELLM_MASTER_KEY — this is the API key your apps use to connect
- Access the admin dashboard
Open the LiteLLM application's public link to access the admin dashboard.
Connecting your apps
Other applications in your project can connect to AI Security using its internal network alias:
| Setting | Value |
|---|---|
| API Base URL | http://litellm.proxy:4000/v1 (internal) or your LiteLLM's public URL |
| API Key | Your LITELLM_MASTER_KEY value |
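As a sketch (assuming your app runs in the same project, so the litellm.proxy alias resolves, and that gpt-4o is a model name you have configured in LiteLLM), a request to AI Security looks like any OpenAI-compatible chat completion call. The snippet below only builds the request; actually sending it requires a running deployment:

```python
# Sketch: building an OpenAI-compatible request to AI Security.
# Assumes the app runs in the same project so litellm.proxy resolves,
# and that "gpt-4o" is a model name configured in LiteLLM.
import json
import os
import urllib.request

BASE_URL = "http://litellm.proxy:4000/v1"                  # internal alias
API_KEY = os.environ.get("LITELLM_MASTER_KEY", "sk-demo")  # your master key

def chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request."""
    body = json.dumps({
        "model": "gpt-4o",  # must match a model name configured in LiteLLM
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("Summarize our release notes.")
print(req.full_url)  # -> http://litellm.proxy:4000/v1/chat/completions
# To send: urllib.request.urlopen(req) against a live deployment.
```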
Adding AI models
Configure which AI models are available:
- Open the admin dashboard
- Go to Models or Model List
- Add a model:
  - Model name — The name your apps will use (e.g., gpt-4o)
  - Provider — The AI provider (openai, anthropic, google, etc.)
  - API Key — Your provider's API key
- Save the configuration
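If you prefer to manage models in LiteLLM's config file rather than the dashboard, an entry has roughly this shape (the model names are examples; the os.environ/ reference tells LiteLLM to read the key from an environment variable):

```yaml
model_list:
  - model_name: gpt-4o                    # the name your apps will use
    litellm_params:
      model: openai/gpt-4o                # provider/model identifier
      api_key: os.environ/OPENAI_API_KEY  # read the key from an env var
```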
Guardrails configuration
The four guardrails are active by default:
| Guardrail | Mode | What it does |
|---|---|---|
| Secret Detection | Pre-call | Hides API keys and passwords before sending to AI |
| Banned Keywords | Pre-call | Blocks dangerous keywords |
| Content Moderation | Pre-call | Filters harmful content |
| Prompt Injection | Pre-call | Blocks jailbreak attempts |
To customize guardrails, update the LITELLM_GUARDRAILS environment variable or modify the configuration through the admin dashboard.
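In LiteLLM's config file, guardrails are declared as a list. A sketch of the shape (the names and identifiers below are placeholders, not necessarily what the four built-ins use; check your deployment's configuration for the exact values):

```yaml
guardrails:
  - guardrail_name: "secret-detection"  # placeholder name
    litellm_params:
      guardrail: "hide-secrets"         # assumed identifier
      mode: "pre_call"                  # run before the provider call
```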
Configuration
| Setting | Default | Description |
|---|---|---|
| LITELLM_MASTER_KEY | Auto-generated | Master API key for authentication |
| LITELLM_GUARDRAILS | [] (uses config file defaults) | Custom guardrail configuration |
| Request timeout | 120 seconds | Maximum time for AI requests |
| Retries | 3 | Number of automatic retries on failure |
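The retry behavior can be pictured with a small wrapper. This is a generic sketch, not LiteLLM's internal code, and the exponential backoff is an assumption:

```python
# Generic retry-on-failure sketch, mirroring the "3 retries" default.
# Not LiteLLM's internal code; the backoff strategy is an assumption.
import time

def with_retries(call, retries=3, base_delay=0.0):
    """Invoke call(); on failure, retry up to `retries` more times."""
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Example: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, retries=3))  # -> ok
```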
How other OpenCloud apps use AI Security
| Application | Automatic? | How it connects |
|---|---|---|
| CoPaw | Yes | Auto-configured as default LLM provider |
| OpenClaw | Manual | Set model prefix to litellm/ (see integration guide) |
| n8n | Manual | Set OPENAI_API_BASE to LiteLLM URL in workflow credentials |
Troubleshooting
| Problem | Solution |
|---|---|
| Apps can't connect | Check the LITELLM_MASTER_KEY matches in both apps |
| AI requests fail | Verify your AI provider API keys in the config |
| Guardrails blocking valid requests | Customize guardrail settings to reduce false positives |
| High latency | AI Security adds minimal overhead; check the underlying AI provider |