Now accepting early access signups

Detect and remove AI-generated nude deepfakes

The current alternative is a $5,000/month reputation lawyer. Athena does it for $20/month: an agent monitors the web, verifies the image is synthetic, files takedowns under the TAKE IT DOWN Act, and follows up until the content is gone. Built for women and girls. Finally.

Join the Waitlist View on GitHub
99% · of deepfake porn targets women
440K · child NCMEC reports in 6 months (2025)
21M · monthly visits to nudify websites
0 · consumer tools that exist today
Protection

Three layers of defense

A reputation-management lawyer costs $5,000/month and won't take you as a client. Enterprise brand-protection software is sold to Fortune 500 companies, not people. Athena is built for everyone they don't serve.

Deepfake Detection

Drop any suspicious image into Athena and our classifier tells you in seconds whether it was generated by AI nudification or face-swap tools, with the indicators it found. Live today.

Image Shield

Once enrolled, Athena fingerprints your reference photo into a one-way perceptual hash and watches the open web for matches. You, and only you, get the alert. Roadmap.

Auto Takedown

When a match is verified synthetic, Athena drafts the TAKE IT DOWN Act notice and routes it to the host. Think of it as the "TurboTax of takedowns": no lawyer required. Roadmap.

This is not a future problem.
It is happening now.

Nudify apps are free, require zero technical skill, and need only one photo. Schools, families, and communities are under attack with no tools to fight back.

96% · of all deepfakes are non-consensual pornography
1 in 10 · minors say classmates use AI to generate nudes of other kids
+464% · increase in deepfake porn volume from 2022 to 2023
How It Works

Protection in three steps

No technical expertise required. Set it up once and Athena works in the background.

Verify it's you

Identity-verified enrollment with a selfie liveness check that has to match your reference photo. You can only enroll your own face, never anyone else's. Reference photos are turned into a one-way perceptual hash; the original image is discarded.
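A minimal sketch of that enrollment step, using the open-source imagehash library's pHash as a stand-in for whatever hash function Athena actually ships (an assumption):

```python
import imagehash
from PIL import Image

def enroll_reference(path: str) -> str:
    """Convert a reference photo to a 64-bit perceptual hash.

    Only the hex digest is stored; the image goes out of scope and is
    never persisted. The hash is one-way: the face cannot be
    reconstructed from it.
    """
    with Image.open(path) as img:
        return str(imagehash.phash(img))
```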

Athena monitors for you

We scan partner-network sources and known hosts for hash collisions against your enrollment. Match notifications go only to you, never to third parties, never to a public feed.
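And the match check on the monitoring side. imagehash overloads subtraction as Hamming distance; the 8-bit threshold here is illustrative, not Athena's actual setting:

```python
import imagehash

def is_match(enrolled_hex: str, candidate_hex: str, max_distance: int = 8) -> bool:
    """Flag a hash collision when two stored pHash digests differ by few bits."""
    a = imagehash.hex_to_hash(enrolled_hex)
    b = imagehash.hex_to_hash(candidate_hex)
    return (a - b) <= max_distance  # '-' is Hamming distance in imagehash
```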

Detect, prove, and remove

When a match is found, our classifier verifies it's synthetic, generates a timestamped forensic report, and drafts a TAKE IT DOWN Act takedown request to every host of the content.
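As a rough sketch of the "prove" step, a forensic report might bundle the evidence with a UTC timestamp and an integrity hash; every field name here is illustrative, not Athena's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def forensic_report(url: str, phash_distance: int, model_prob: float) -> dict:
    """Build a timestamped evidence record sealed with a SHA-256 digest."""
    report = {
        "url": url,
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "phash_distance": phash_distance,
        "synthetic_probability": round(model_prob, 4),
    }
    payload = json.dumps(report, sort_keys=True).encode()
    report["sha256"] = hashlib.sha256(payload).hexdigest()  # integrity seal
    return report
```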

Safeguards

A protective tool, not a surveillance one

A face-monitoring system in the wrong hands is a stalker tool. We are designing Athena from day one to be incapable of that misuse.

You can only enroll yourself

Identity verification at enrollment requires a government ID and a selfie liveness check whose face must match the uploaded reference. There is no path to enroll someone else.

Hashes only, no face database

Reference photos are converted into a one-way perceptual hash and the originals are discarded. We never store raw face embeddings, biometric vectors, or images that could be reused as a face DB.

Matches go only to you

Detection results are routed exclusively to the enrollee. There is no admin console to look up "who matches this hash," no feed for third parties, and a tamper-evident audit log of every access.
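One common way to make an audit log tamper-evident is hash-chaining: each entry commits to the digest of the previous one, so editing any record breaks every digest after it. This is a sketch of the idea, not Athena's shipped design:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry is chained to its predecessor."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis digest

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "event": event, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append({**record, "hash": digest})
        self._prev = digest

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry fails the check."""
        prev = "0" * 64
        for e in self.entries:
            body = {"ts": e["ts"], "event": e["event"], "prev": e["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```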

Under the hood

How the detector actually works

A YC reviewer should be able to verify the model is real. Here's what's shipped today, with the actual numbers from the held-out test split.

Architecture

EfficientNet-B0 + classifier head

features → AdaptiveAvgPool → Dropout(0.3) → Linear(1280, 512) → ReLU → Dropout(0.2) → Linear(512, 1), single logit, trained with BCEWithLogitsLoss. 4.71M parameters, 19 MB on disk.
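That stack, written out as a PyTorch sketch that mirrors the description above (torchvision's ImageNet-pretrained EfficientNet-B0 is assumed as the backbone):

```python
import torch.nn as nn
from torchvision import models

class DeepfakeClassifier(nn.Module):
    """EfficientNet-B0 features + the classifier head described above."""

    def __init__(self):
        super().__init__()
        weights = models.EfficientNet_B0_Weights.DEFAULT
        self.features = models.efficientnet_b0(weights=weights).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(1280, 512),  # B0's feature maps have 1280 channels
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, 1),     # single logit, paired with BCEWithLogitsLoss
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x)
```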

Training

Two-phase fine-tune in 4 minutes

Phase 1: 3 epochs with the ImageNet-pretrained backbone frozen, classifier head only. Phase 2: 9 epochs with everything unfrozen at 1/10 the learning rate. Apple Silicon MPS, batch size 32. AdamW + ReduceLROnPlateau, early stopping at patience 4.
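A condensed sketch of that schedule. The base learning rate (1e-3) is an assumption since only the 1/10 ratio is stated, `train_loader` is a placeholder DataLoader, and early stopping is omitted for brevity:

```python
import torch
from torch import nn

def run_phase(model, loader, params, lr, epochs, device="mps"):
    """One fine-tuning phase: AdamW + ReduceLROnPlateau on mean epoch loss."""
    opt = torch.optim.AdamW(params, lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min")
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        total = 0.0
        for x, y in loader:  # batch size 32 in the run described above
            x, y = x.to(device), y.float().to(device)
            opt.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y)
            loss.backward()
            opt.step()
            total += loss.item()
        sched.step(total / len(loader))

model = DeepfakeClassifier().to("mps")

# Phase 1: freeze the pretrained backbone, train the head only.
for p in model.features.parameters():
    p.requires_grad = False
run_phase(model, train_loader, model.head.parameters(), lr=1e-3, epochs=3)

# Phase 2: unfreeze everything at 1/10 the learning rate.
for p in model.features.parameters():
    p.requires_grad = True
run_phase(model, train_loader, model.parameters(), lr=1e-4, epochs=9)
```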

Data & augmentations

2,394 images, 3 generators

1,197 real photos + 1,197 AI samples from Flux, Stable Diffusion 1.5, and one more generator. Stratified 75/15/10 split. Augmentations simulate real-world distortion: JPEG q=30–95, ±10° rotation, color jitter, Gaussian blur, RandomErasing. Six images were held out for the live demo below.
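The augmentation stack approximated with torchvision transforms. The JPEG round-trip is hand-rolled, and the jitter strengths and blur kernel are assumptions, since only the transform types are named above:

```python
import io
import random

from PIL import Image
from torchvision import transforms

def random_jpeg(img: Image.Image) -> Image.Image:
    """Re-encode through JPEG at a random quality in [30, 95]."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.randint(30, 95))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

train_tf = transforms.Compose([
    transforms.Lambda(random_jpeg),
    transforms.RandomRotation(10),          # ±10°
    transforms.ColorJitter(0.2, 0.2, 0.2),  # strengths are assumptions
    transforms.GaussianBlur(3),             # kernel size is an assumption
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    transforms.RandomErasing(),             # operates on the tensor, so last
])
```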

Test-set metrics · 242 held-out images

[ROC-AUC · Accuracy · F1 · Latency (Apple Silicon MPS): values are computed live from the checkpoint's evaluation run; clean ROC-AUC is 0.9999, per the robustness sweep below.]

Confusion matrix (true-negative and true-positive counts are rendered live)

                     Predicted Real        Predicted Synthetic
Actual Real          true negatives        false positives: 1
Actual Synthetic     false negatives: 0    true positives

One false positive, zero false negatives. We've biased the model toward flagging a real image over missing a synthetic one. In this domain, a missed deepfake is content that stays online.

Robustness sweep · ROC-AUC under perturbation

Clean · 0.9999
JPEG quality 50 · 0.9999
JPEG quality 30 · 0.9999
2× downsample & upsample · 0.9999
5° rotation · 1.0000

ROC-AUC stays at ≥ 0.9999 across every perturbation we tested, including JPEG q=30 (heavier than what most platforms re-encode at) and a 2× downsample/upsample round-trip. The decision boundary holds even when threshold accuracy slips slightly under heavy compression.
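Reproducing such a sweep is mechanical: apply one perturbation to every held-out image, rescore, and recompute ROC-AUC. In this sketch, `score(img) -> float` is assumed to wrap the classifier's sigmoid output:

```python
import io

from PIL import Image
from sklearn.metrics import roc_auc_score

def jpeg(q):
    def f(img):
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        return Image.open(buf).convert("RGB")
    return f

PERTURBATIONS = {
    "clean":      lambda im: im,
    "jpeg_q50":   jpeg(50),
    "jpeg_q30":   jpeg(30),
    "down_up_2x": lambda im: im.resize((im.width // 2, im.height // 2)).resize(im.size),
    "rot_5deg":   lambda im: im.rotate(5),
}

def sweep(images, labels, score):
    """ROC-AUC per perturbation over the same test set."""
    return {name: roc_auc_score(labels, [score(p(im)) for im in images])
            for name, p in PERTURBATIONS.items()}
```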

Try it live

Run the classifier yourself

Drop in any image (your photo, a screenshot, an AI sample). The same checkpoint described above runs inference in real time on this server.

[Interactive demo: drag & drop or browse for a JPG, PNG, or WebP image (max 16 MB), or pick a provided sample. Pipeline: read bytes → resize to 224×224 + normalize → EfficientNet-B0 forward pass → sigmoid → verdict.]
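The same pipeline as a minimal script, reusing the DeepfakeClassifier sketch from the architecture section; the checkpoint and image filenames are placeholders:

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = DeepfakeClassifier()
model.load_state_dict(torch.load("athena_effnet.pt", map_location="cpu"))
model.eval()

img = preprocess(Image.open("suspect.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    p = torch.sigmoid(model(img)).item()  # probability the image is synthetic
print(f"{'SYNTHETIC' if p >= 0.5 else 'REAL'}  (p={p:.3f})")
```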

What a takedown notice looks like

When the classifier confirms a hash match is synthetic, tools/monitor.py:TakedownGenerator drafts the TAKE IT DOWN Act notice the host receives. This is the actual output of that generator, with example data.

TAKEDOWN REQUEST · TAKE IT DOWN Act (S.146, 119th Congress)
============================================================
Date: 2026-04-27 19:27 UTC
Platform: example-host.com
Content URL: https://example-host.com/u/_anon_2bc4f/i/0847.jpg

NOTIFICATION OF NON-CONSENSUAL INTIMATE IMAGERY

Pursuant to the TAKE IT DOWN Act, I am requesting the immediate
removal of the following content, which constitutes non-consensual
intimate imagery generated using artificial intelligence.

CONTENT DETAILS:
URL: https://example-host.com/u/_anon_2bc4f/i/0847.jpg
Detection Method: Perceptual hash matching + synthetic analysis
Similarity Score: 94%
Hash Match: [reference] → [detected]

SYNTHETIC GENERATION INDICATORS:
- ML model confidence: 100%
- AI tool signature: stable diffusion
- Resolution 512x768 matches AI generation pattern

This content was identified as AI-generated non-consensual intimate
imagery through automated detection. Under the TAKE IT DOWN Act,
platforms must remove such content within 48 hours of notification.

REQUEST:
1. Remove the identified content immediately.
2. Prevent re-upload using the provided content hash.
3. Preserve evidence for potential law enforcement referral.

============================================================
Generated by Athena, https://github.com/maisymylod/athena-ai
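A simplified, stand-alone sketch of how such a notice could be assembled. The shipped implementation is tools/monitor.py:TakedownGenerator; this version only illustrates the shape of it:

```python
from datetime import datetime, timezone

def draft_takedown(platform: str, url: str, similarity: float,
                   indicators: list[str]) -> str:
    """Render a TAKE IT DOWN Act notice from detection evidence."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    lines = [
        "TAKEDOWN REQUEST · TAKE IT DOWN Act (S.146, 119th Congress)",
        "=" * 60,
        f"Date: {ts}",
        f"Platform: {platform}",
        f"Content URL: {url}",
        f"Similarity Score: {similarity:.0%}",
        "SYNTHETIC GENERATION INDICATORS:",
        *(f"- {i}" for i in indicators),
        "REQUEST: remove within 48 hours, block re-upload by content hash,",
        "and preserve evidence for potential law enforcement referral.",
    ]
    return "\n".join(lines)
```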
Founder

Why this, and why me

A solo founder's pitch deserves a personal answer.

Maisy Mylod

Founder · Athena

Mathematics graduate (University of Michigan) and former biometric identity analyst. I've worked on facial-recognition systems from the inside, and I've watched those same techniques get weaponized against women with no consumer tooling to push back.

Athena is what I would have built for the women in my life if it had existed. The TAKE IT DOWN Act gave us the legal scaffolding; my job is to build the rest.

Early Access

Be the first to be protected

Join the waitlist for early access. Free for victims. Always.