OpenCloud
AI

AI Security

Configure security policies, guardrails, and access controls for your AI deployments on OpenCloud.

AI Security

The AI Security page lets you configure security policies and access controls for your AI deployments. This helps you protect your AI API keys, control costs, and ensure safe usage of AI models.

For step-by-step integration with OpenClaw, see Connect OpenClaw to AI GuardRails.

OpenCloud uses LiteLLM as the engine behind AI Security. LiteLLM is an open-source AI security middleware that acts as a protective gateway between your applications and AI models, providing a unified API to access 100+ AI models while adding safety guardrails.

Your Apps → AI Security (LiteLLM) → OpenAI, Anthropic, Google, etc.
Official documentation: For advanced configuration and full feature reference, see the LiteLLM Documentation.

Why AI Security matters

When using AI services:

  • API keys are valuable — If stolen, someone could run up charges on your AI provider account
  • Costs can escalate quickly — Without limits, a single runaway application can consume thousands of dollars in AI credits
  • Safety matters — You may want to restrict which AI models can be used and what they can be asked to do

Key features

FeatureDescription
Unified APIOne API endpoint to access 100+ AI models from any provider
Secret detectionAutomatically detects and hides API keys, passwords, and tokens in prompts
Banned keywordsBlocks requests containing dangerous keywords (hack, exploit, jailbreak)
Content moderationFilters violent, harmful, or inappropriate content
Prompt injection detectionDetects and blocks jailbreak attempts
Rate limitingControl how many AI requests each app can make
Model routingRoute different requests to different AI models
Master key authSingle master key for API authentication
Retry logicAutomatically retries failed requests (up to 3 times)

Built-in guardrails

AI Security comes pre-configured with four security guardrails:

Secret Detection

Scans outgoing prompts for API keys, passwords, and tokens. Automatically hides them before sending to the AI model.

Banned Keywords

Blocks requests containing dangerous keywords like "hack", "exploit", "jailbreak", and others.

Content Moderation

Filters violent, harmful, or inappropriate content using heuristic analysis.

Prompt Injection Detection

Detects patterns commonly used to bypass AI safety measures and blocks them.

Security features

Access control

  • Control which applications can access AI services
  • Set permissions per user or per application
  • Restrict access to specific AI models

Rate limiting

  • Set limits on how many AI requests can be made per minute/hour/day
  • Prevent runaway costs from misbehaving applications
  • Fair usage across multiple applications

Monitoring

  • Track AI API usage across all your applications
  • See which applications are making the most AI requests
  • Monitor costs in real-time

Requirements on OpenCloud

RequirementDetails
TypeAddon (system service)
DatabaseNone required
PlanFixed addon plan (1 CPU, 2 GB RAM, ~$1/month)
Default port4000
Network aliaslitellm.proxy (used by other apps to discover AI Security)

Installation

Step-by-step deployment

  1. Deploy AI Security (LiteLLM)
    • Go to Applications > Create Application (or the Addons section)
    • Select LiteLLM (AI GuardRails)
    • The plan is fixed: 1 CPU, 2 GB RAM, 5 GB storage (~$1/month)
    • Select your project
    • Click Deploy
  2. Get your master key
    • Go to your LiteLLM application > Environment Variables
    • Find LITELLM_MASTER_KEY — this is the API key your apps use to connect
  3. Access the admin dashboard
    Open the LiteLLM link to access the admin dashboard.

Connecting your apps

Other applications in your project can connect to AI Security using its internal network alias:

SettingValue
API Base URLhttp://litellm.proxy:4000/v1 (internal) or your LiteLLM's public URL
API KeyYour LITELLM_MASTER_KEY value

Adding AI models

Configure which AI models are available:

  1. Open the admin dashboard
  2. Go to Models or Model List
  3. Add a model:
    • Model name — The name your apps will use (e.g., gpt-4o)
    • Provider — The AI provider (openai, anthropic, google, etc.)
    • API Key — Your provider's API key
  4. Save the configuration

Guardrails configuration

The four guardrails are active by default:

GuardrailModeWhat it does
Secret DetectionPre-callHides API keys and passwords before sending to AI
Banned KeywordsPre-callBlocks dangerous keywords
Content ModerationPre-callFilters harmful content
Prompt InjectionPre-callBlocks jailbreak attempts

To customize guardrails, update the LITELLM_GUARDRAILS environment variable or modify the configuration through the admin dashboard.

Configuration

SettingDefaultDescription
LITELLM_MASTER_KEYAuto-generatedMaster API key for authentication
LITELLM_GUARDRAILS[] (uses config file defaults)Custom guardrail configuration
Request timeout120 secondsMaximum time for AI requests
Retries3Number of automatic retries on failure

How other OpenCloud apps use AI Security

ApplicationAutomatic?How it connects
CoPawYesAuto-configured as default LLM provider
OpenClawManualSet model prefix to litellm/ (see integration guide)
n8nManualSet OPENAI_API_BASE to LiteLLM URL in workflow credentials

Troubleshooting

ProblemSolution
Apps can't connectCheck the LITELLM_MASTER_KEY matches in both apps
AI requests failVerify your AI provider API keys in the config
Guardrails blocking valid requestsCustomize guardrail settings to reduce false positives
High latencyAI Security adds minimal overhead; check the underlying AI provider
Copyright © 2026