Stop unsafe image uploads before they become a problem
Analyze every image before it goes live. Get clear risk signals and explanations, so you can decide what happens next.
Python SDK and API. Ready in minutes, results in seconds.
A simple way to stay in control
Analyze the image, inspect the result, and act on it immediately. The point is not just to flag an upload; it is to give you a result clear enough to trust and practical enough to use in your app right away.
from ice9 import Ice9

client = Ice9()
image = client.analyze("upload.jpg")

if image.is_nsfw:
    image.moderation.censor(
        "upload.jpg",
        output="censored.jpg",
    )
Know what’s in every image
Detect nudity and explicit content, understand what’s actually happening in the scene, and get clear signals instead of a vague “NSFW” label.
Built for how you actually work
Start with the Python SDK or call the REST API directly. No infrastructure required to get your first result.
Not all “unsafe” content is the same
If your app accepts image uploads, you need to know what users are posting. Ice9 helps you distinguish simple nudity, explicit sexual activity, and contextual content, so you can respond appropriately instead of blocking everything. The goal isn’t more metadata. It’s a better moderation decision.
Detect
Screen uploads before they land in your product, queue, feed, or moderation backlog.
Understand
Separate broad categories of risk so the result is usable by humans, not just machines.
Respond
Block, review, or transform content based on the level of risk.
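The detect → understand → respond flow above can be sketched in plain Python. The category names and actions here are illustrative assumptions, not Ice9’s actual taxonomy:

```python
# Illustrative sketch: route an image to a moderation action based on
# an assumed risk category. Category names are hypothetical, not the
# SDK's actual taxonomy.

ACTIONS = {
    "explicit": "block",        # explicit sexual activity: reject outright
    "nudity": "review",         # simple nudity: send to a human queue
    "contextual": "transform",  # contextual content: e.g. blur and allow
    "safe": "allow",
}

def respond(category: str) -> str:
    """Map a detected category to a moderation action."""
    # Unknown categories fall back to human review rather than auto-blocking.
    return ACTIONS.get(category, "review")
```

The key design choice is the fallback: anything the policy does not recognize goes to review, so new categories degrade to a human decision instead of an automatic block.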
Clear signals, not black boxes
Every result is readable by default and deeper only when you need more detail. You get the verdict, the reason, and the option to dig further without being forced into raw response plumbing. The default path stays simple, while the result still gives you enough structure to moderate confidently and act immediately.
Simple verdict
A direct answer you can use in product logic.
Human-readable reason
Enough context to understand why the image was flagged.
Deeper details when needed
More analysis is available without making the default path noisy.
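A result shaped this way, with a verdict and a reason up front and deeper details out of the way, could be modeled as follows. The field names and scores are assumptions for illustration, not the SDK’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical result shape: a simple verdict and a human-readable
# reason by default, with per-category scores available when needed.
@dataclass
class AnalysisResult:
    is_nsfw: bool                # simple verdict for product logic
    reason: str                  # human-readable explanation
    details: dict[str, float] = field(default_factory=dict)  # optional depth

result = AnalysisResult(
    is_nsfw=True,
    reason="Explicit sexual activity detected in the foreground.",
    details={"nudity": 0.97, "explicit": 0.91},
)

if result.is_nsfw:
    # The default path uses only the verdict and the reason;
    # the scores in `details` stay available without adding noise.
    print(result.reason)
```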
Pricing
Analyze images and get simple risk signals.
- NSFW / nudity detection
- Fast, real-time responses
- No credit card required
Higher throughput and richer analysis for production use.
- Priority processing
- Higher throughput
- Built for real-world moderation flows
Advanced detection for custom policy enforcement.
- REST API + Python SDK
- Full analysis and control
- Works with your data and environment