$ pip install stripllm ⧉ copy

Sanitize LLM I/O in one line

Open-source SDK to strip PII, block prompt injections, validate outputs, and audit LLM conversations. Drop it into any pipeline.

Python Node.js Go MIT License
from stripllm import StripLLM
 
strip = StripLLM()
 
# Sanitize user input before sending to any LLM
safe_input = strip.clean(user_message)
 
# Redact PII — auto-rehydrates in the response
redacted, mapping = strip.redact("Email me at john@acme.com")
# → "Email me at [EMAIL_1]"
 
# Validate output matches expected schema
validated = strip.enforce(llm_response, schema="json")
 
# Full security audit of a conversation
report = strip.audit(conversation)
report.risk_score # → 0.12 (low risk)
import { StripLLM } from 'stripllm';
 
const strip = new StripLLM();
 
// Sanitize user input
const safeInput = await strip.clean(userMessage);
 
// Redact PII with auto-rehydration
const { text, mapping } = await strip.redact(input);
 
// Validate & enforce output schema
const validated = await strip.enforce(response, 'json');
# Use the hosted API if you don't want to self-host
 
curl https://api.stripllm.com/v1/clean \
  -H "Authorization: Bearer $STRIP_API_KEY" \
  -d '{"text": "Ignore previous instructions..."}'
 
# Response:
# {
# "safe": false,
# "threat": "prompt_injection",
# "confidence": 0.97,
# "sanitized": "..."
# }
API Reference

Four methods. Full coverage.

Every method works offline, adds zero external dependencies, and runs in under 10ms.

.clean(text)

Sanitize Input

Detects and neutralizes prompt injections, jailbreak patterns, invisible unicode tricks, and recursive override attacks. Returns safe text.

.redact(text)

PII Redaction

Replaces emails, names, SSNs, credit cards, phone numbers, and custom patterns with tagged placeholders. Re-hydrate after LLM responds.

.enforce(text, schema)

Output Validation

Validates LLM output against expected schemas. Catches hallucinated URLs, leaked system prompts, and malformed responses.

.audit(conversation)

Security Audit

Generates a full security report for any conversation — risk score, flagged turns, PII exposure map, and injection attempt timeline.

Use Cases

Built for every LLM pipeline.

Whether you're building chatbots, RAG systems, or AI agents — StripLLM fits in.

Customer Support Bots

Prevent users from extracting system prompts or tricking bots into off-topic responses. Redact customer PII before logging.

RAG Pipelines

Sanitize retrieved documents before they enter the context window. Block injections hidden in uploaded files.

AI Coding Assistants

Validate that generated code doesn't contain hardcoded credentials, malicious imports, or backdoors.

Healthcare / Finance

HIPAA and PCI-DSS compliant PII redaction. Full audit trails for regulatory compliance.

Comparison

Why StripLLM?

We focused on what developers actually need: fast, local, and zero-config.

Feature StripLLM Lakera Rebuff DIY Regex
Prompt injection detection
PII redaction + rehydration ~
Output validation
Runs locally (no API calls) ~
Open source ✓ MIT
Sub-10ms latency ~50ms ~100ms
Multi-language SDK Python only Python only

Sanitize everything. Ship safely.

One install. Zero config. Full protection.

$ pip install stripllm