The current alternative is a $5,000/month reputation lawyer. Athena does it for $20/month — because an agent monitors the web, verifies the image is synthetic, files takedowns under the TAKE IT DOWN Act, and follows up until the content is gone. Built for women and girls. Finally.
A reputation-management lawyer costs $5,000/month and won't take you as a client. Enterprise brand-protection software is sold to Fortune 500 companies, not people. Athena is built for everyone they don't serve.
Drop any suspicious image into Athena and our classifier tells you in seconds whether it was generated by an AI nudification or face-swap tool, along with the indicators it found. Live today.
Once enrolled, Athena fingerprints your reference photo into a one-way perceptual hash and watches the open web for matches. You, and only you, get the alert. Roadmap.
When a match is verified synthetic, Athena drafts the TAKE IT DOWN Act notice and routes it to the host. Think of it as the "TurboTax of takedowns": no lawyer required. Roadmap.
Nudify apps are free, require zero technical skill, and need only one photo. Schools, families, and communities are under attack with no tools to fight back.
No technical expertise required. Set it up once and Athena works in the background.
Identity-verified enrollment with a selfie liveness check that has to match your reference photo. You can only enroll your own face, never anyone else's. Reference photos are turned into a one-way perceptual hash; the original image is discarded.
We scan partner-network sources and known hosts for hash collisions against your enrollment. Match notifications go only to you, never to third parties, never to a public feed.
When a match is found, our classifier verifies it's synthetic, generates a timestamped forensic report, and drafts a TAKE IT DOWN Act takedown request for every host of the content.
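The notice-drafting step can be sketched as a simple template fill. Everything below is illustrative — the field names and wording are assumptions, not the production `TakedownGenerator` output (the Act's 48-hour removal deadline for valid requests is the one legal specific used here):

```python
from datetime import datetime, timezone
from string import Template

# Hypothetical notice template; the production wording differs.
NOTICE = Template(
    "TAKE IT DOWN Act removal request\n"
    "Date: $date\n"
    "Host: $host\n"
    "Content URL: $url\n"
    "Basis: non-consensual intimate imagery, verified synthetic\n"
    "Requested action: removal within 48 hours of receipt\n"
)

def draft_notice(host: str, url: str) -> str:
    """Fill the template with match details and a UTC date stamp."""
    return NOTICE.substitute(
        date=datetime.now(timezone.utc).strftime("%Y-%m-%d"),
        host=host,
        url=url,
    )

print(draft_notice("example-host.com", "https://example-host.com/img/123"))
```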
A face-monitoring system in the wrong hands is a stalker tool. We are designing Athena from day one to be incapable of that misuse.
Identity verification at enrollment requires a government ID and a selfie liveness check whose face must match the uploaded reference. There is no path to enroll someone else.
Reference photos are converted into a one-way perceptual hash and the originals are discarded. We never store raw face embeddings, biometric vectors, or images that could be reused as a face DB.
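The one-way property comes from reducing the photo to a short perceptual fingerprint that cannot be inverted back into a face. A minimal sketch of the idea using an 8×8 average hash — the production hash function is not specified here, so this is illustrative only:

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid:
    bit i is 1 iff pixel i is at or above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (p >= mean)
    return h

def hamming(a, b):
    """Number of differing bits; a small distance suggests a match."""
    return bin(a ^ b).count("1")

# A toy 8x8 "image" and a lightly re-encoded (brightened) copy:
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
copy = [[min(255, p + 3) for p in row] for row in img]
print(hamming(average_hash(img), average_hash(copy)))  # 0: hashes agree
```

Small perturbations (re-encoding, mild brightness shifts) leave the bit pattern nearly unchanged, which is what makes the fingerprint usable for matching without retaining the image itself.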
Detection results are routed exclusively to the enrollee. There is no admin console to look up "who matches this hash" and no feed for third parties; every access is written to a tamper-evident audit log.
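One standard way to make an access log tamper-evident is to hash-chain it: each entry commits to the previous entry's digest, so editing any earlier record invalidates every later one. A minimal sketch under that assumption (not the production log format):

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute the chain; any edited entry breaks verification."""
    prev = "0" * 64
    for e in log:
        body = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "enrollee viewed match report")
append_entry(log, "notice drafted for host")
print(verify(log))          # True
log[0]["event"] = "edited"  # simulate tampering
print(verify(log))          # False
```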
A YC reviewer should be able to verify the model is real. Here's what's shipped today, with the actual numbers from the held-out test split.
features → AdaptiveAvgPool → Dropout(0.3) → Linear(1280, 512) → ReLU → Dropout(0.2) → Linear(512, 1), single logit, trained with BCEWithLogitsLoss. 4.71M parameters, 19 MB on disk.
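As a sanity check on those numbers, the classifier head's parameter count follows directly from the layer shapes (Dropout and pooling add no parameters), and subtracting it from the 4.71M total bounds the backbone's share. The 4.71M figure is rounded, so the backbone estimate is approximate:

```python
def linear_params(in_features, out_features):
    """Weights plus biases for one fully connected layer."""
    return in_features * out_features + out_features

head = linear_params(1280, 512) + linear_params(512, 1)
print(head)              # 656385 parameters in the head
print(4_710_000 - head)  # ~4.05M left for the backbone (total is rounded)
```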
Phase 1: 3 epochs with the ImageNet-pretrained backbone frozen, classifier head only. Phase 2: 9 epochs with everything unfrozen at 1/10 the learning rate. Apple Silicon MPS, batch size 32. AdamW + ReduceLROnPlateau, early stopping at patience 4.
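The scheduling logic — reduce the learning rate when validation loss plateaus, stop after 4 epochs without improvement — can be sketched framework-free. Only the early-stopping patience of 4 comes from the text; the plateau patience of 2 and the 0.1 reduction factor are illustrative assumptions:

```python
def run_schedule(val_losses, lr=1e-4, stop_patience=4, plateau_patience=2):
    """Mimic ReduceLROnPlateau + early stopping over a validation-loss
    history. Returns (epochs_run, final_lr)."""
    best = float("inf")
    since_best = 0
    epochs = 0
    for loss in val_losses:
        epochs += 1
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best % plateau_patience == 0:
                lr *= 0.1          # plateau: cut the learning rate
            if since_best >= stop_patience:
                break              # early stop: no improvement in 4 epochs
    return epochs, lr

# Loss improves for 3 epochs, then plateaus: two LR cuts, stop at epoch 7.
print(run_schedule([0.9, 0.5, 0.4, 0.41, 0.42, 0.43, 0.44]))
```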
1,197 real photos + 1,197 AI samples from Flux, Stable Diffusion 1.5, and one more generator. Stratified 75/15/10 split. Augmentations simulate real-world distortion: JPEG q=30–95, ±10° rotation, color jitter, Gaussian blur, RandomErasing. Six images were held out for the demo above.
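A stratified split means each class is divided independently, so every subset keeps the 50/50 real/synthetic balance. A stdlib sketch of that split (the exact rounding and seed used in training are assumptions):

```python
import random

def stratified_split(items_by_class, fractions=(0.75, 0.15, 0.10), seed=42):
    """Split each class independently so every subset keeps the class balance."""
    rng = random.Random(seed)
    splits = [[] for _ in fractions]
    for label, items in items_by_class.items():
        pool = list(items)
        rng.shuffle(pool)
        start = 0
        for i, frac in enumerate(fractions):
            # Last split takes the remainder so nothing is dropped.
            end = len(pool) if i == len(fractions) - 1 else start + round(frac * len(pool))
            splits[i].extend((label, x) for x in pool[start:end])
            start = end
    return splits  # train, val, test

data = {"real": range(1197), "synthetic": range(1197)}
train, val, test = stratified_split(data)
print(len(train), len(val), len(test))  # 1796 360 238
```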
One false positive, zero false negatives. We've biased the model toward flagging a real image over missing a synthetic one. In this domain, a missed deepfake is content that stays online.
ROC-AUC stays at ≥ 0.9999 across every perturbation we tested, including JPEG q=30 (heavier than what most platforms re-encode at) and a 2× downsample/upsample round-trip. The decision boundary holds even when threshold accuracy slips slightly under heavy compression.
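ROC-AUC is the probability that a random synthetic sample scores above a random real one, which is why it can hold near 1.0 even while a fixed 0.5 threshold misclassifies under heavy compression: compression squeezes scores toward the middle but can preserve their ordering. A stdlib sketch of the rank-based (Mann–Whitney) computation with toy scores:

```python
def roc_auc(pos_scores, neg_scores):
    """P(random positive outranks random negative); ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: compression shifts everything toward 0.5 but keeps the
# ordering, so AUC stays 1.0 even though a 0.5 threshold would now
# miss the synthetic sample scoring 0.45.
clean = roc_auc([0.99, 0.95, 0.90], [0.05, 0.10, 0.20])
jpeg30 = roc_auc([0.70, 0.60, 0.45], [0.30, 0.35, 0.40])
print(clean, jpeg30)  # 1.0 1.0
```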
Drop any image (your photo, a screenshot, an AI sample). The same checkpoint described above runs inference in real time on this server.
or click to browse · JPG, PNG, WebP (max 16 MB)
When the classifier confirms a hash match is synthetic, tools/monitor.py:TakedownGenerator drafts the TAKE IT DOWN Act notice the host receives. This is the actual output of that generator, with example data.
A solo founder's pitch deserves a personal answer.
Mathematics graduate (University of Michigan) and former biometric identity analyst. I've worked on facial-recognition systems from the inside, and I've watched those same techniques be weaponized against women, with no consumer tooling to push back.
Athena is what I would have built for the women in my life if it had existed. The TAKE IT DOWN Act gave us the legal scaffolding; my job is to build the rest.
Join the waitlist for early access. Free for victims. Always.