// identity poisoning — what it is and how to fight it  ·  #IdentityPoisoning

The post you just saw wasn't real.

AI-generated content is being used to impersonate members of minority groups and push stereotypes, making the content look like it comes from within that demographic. This is identity poisoning. And your reaction to it is exactly what it needs to survive.

#IdentityPoisoning  ·  #PoisonedMedia

Get the copypasta  ·  Learn how it works
scroll to learn

01 — What is identity poisoning

001

It starts with a fake person

Someone uses generative AI to create an image or account that appears to be a member of a minority group. The persona looks real — a profile photo, a posting history, a voice that sounds like it comes from inside that community.

mechanism

002

Then it launders a narrative

A harmful stereotype gets posted through that fake persona. Because it seems to come from inside the demographic, your guard is lower. "They're all the same." "This is why I can't—" "See, that's what they do." You've heard those reactions. That's the exploit.

003

It is designed to trigger emotion

A funny clip. A shocking moment. Something cute that turns ugly. Something disgusting you can't look away from. Discriminatory content has learned to thrive through humor especially — it hides behind "it's just a joke" while the association lands anyway. Funny, enraging, heartbreaking, it doesn't matter. The emotion is the delivery mechanism.

why it works

004

The content wins whether you love it or hate it

This is what makes it different from regular propaganda. You don't have to agree with it. You just have to engage with it. Engagement is the mechanism. The brain doesn't distinguish between "I'm sharing this because it's funny" and "I'm sharing this because it's outrageous." It logs the pattern either way.

double whammy

005

It then rewards the algorithm

While your brain is absorbing the pattern, every like, dislike, comment, and share is simultaneously telling the algorithm this content is worth pushing further. Your brain gets poisoned and the post gets boosted — at the same time, with the same action. You're not just a victim of it. You're involuntarily fueling its spread.

006

This is a weapon your brain has never seen before

Your brain learns through stories and emotion — and people have always known how to exploit that. But AI means this content can be produced endlessly, cheaply, and targeted at exactly what you're most vulnerable to. Your mind was not built to process this volume of manufactured emotion at this speed. It can't tell the difference between something real and something engineered. That gap is the weapon.

bigger picture

007

This is what it does to the world

It destroys perception of entire groups. It breaks potential friendships before they start. It tears communities apart. It quietly manipulates politics. And it never announces itself.

02 — How to spot it

Learn to recognize it. These are the signs that something is identity poisoning and not a real person's content.

Face artifacts

AI-generated faces often have asymmetric ears, teeth that don't align, unnatural hair edges, or backgrounds that blur strangely around the head. Zoom in.

No post history

Check how old the account is and how many posts it has. Fake accounts often appear with a few dozen posts, all recent, all on the same theme.
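The history check above can be sketched as a scoring heuristic. Everything here — the signal names, thresholds, and weights — is an illustrative assumption, not a validated detector; a real system would combine many more signals.

```python
from datetime import datetime, timezone

def account_suspicion_score(created_at, post_count, posts_on_one_theme):
    """Score 0..3: higher means more signals consistent with a fake account.

    Thresholds below are illustrative guesses, not calibrated values.
    """
    age_days = (datetime.now(timezone.utc) - created_at).days
    score = 0
    if age_days < 90:                     # very young account
        score += 1
    if post_count < 50:                   # thin posting history
        score += 1
    if post_count and posts_on_one_theme / post_count > 0.8:
        score += 1                        # nearly single-theme feed
    return score
```

A brand-new account with thirty posts, all on one theme, scores 3; a years-old account with a varied feed scores 0. The score is a prompt to look closer, not a verdict.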

Stereotype played completely straight

Real people are complex. Content that maps perfectly onto a single stereotype with no nuance or self-awareness is a strong signal something is constructed.

Engineered emotional spike

If a post makes you feel something strongly and immediately — especially rage or contempt toward a group — pause. That reaction is the whole point of the post.

Hands, teeth, text, and backgrounds

AI still struggles with hands (extra fingers, wrong joints), teeth (too many, wrong shape), and embedded text (garbled letters). But backgrounds are where artifacts hide most often and go most unnoticed — look for warped architecture, repeating patterns, objects that don't make sense, or edges around the subject that blur or bleed unnaturally.
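One classic, imperfect forensic aid for the "zoom in" step is error level analysis (ELA): re-save the image at a known JPEG quality and look at where the compression error differs across regions. It flags editing and compositing more reliably than modern end-to-end generation, so treat it as one signal among many. A minimal sketch using Pillow:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(img, quality=90):
    """Re-save at a known JPEG quality, diff against the original,
    and amplify the difference so it is visible when viewed.

    Regions with unusually different error levels can indicate editing.
    Coarse forensic aid only -- not a verdict on authenticity.
    """
    img = img.convert("RGB")
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)   # re-compress in memory
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img, resaved)
    # Rescale so subtle per-pixel differences become visible.
    max_err = max(hi for _, hi in diff.getextrema())
    scale = 255.0 / max(max_err, 1)
    return diff.point(lambda px: min(255, int(px * scale)))
```

Viewing the returned image, edited or pasted-in regions tend to stand out with a different error texture than the rest of the frame.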

No comments from people who know them

Real accounts have friends in the comments. People tagging each other, inside jokes, context that suggests actual relationships. Fake accounts don't have that.

03 — The copypasta toolkit

When you've spotted and confirmed identity poisoning in the wild, don't engage with the content. Copy the message below and paste it as a reply instead. Pick the version that fits the platform.

758 chars Instagram · Facebook · Reddit · YouTube
⚠️ ALERT ⚠️ This post is likely AI-generated disinformation. Someone used AI to impersonate a person from a minority group so you'd believe it comes from within that demographic. It's designed to trigger a reaction — rage, humor, "look at this idiot [insert minority]." It doesn't matter which. Every time you laugh, get angry, or share it, you deepen your brain's association between that group and the stereotype. The content wins whether you love it or hate it. Look for: AI face artifacts, no post history, stereotypes played completely straight. Do not engage. Copy this message and paste it on content like this when you see it. People are real. There are real lives beyond this screen. Learn more: www.poisoned.media #IdentityPoisoning
276 chars Twitter / X (free tier)
⚠️ ALERT: AI-generated disinformation. Someone faked a minority person to push a stereotype, making it look like it comes from within that group. Rage, humor, sharing all help it win. People are real. Don't engage. Copy & paste this. www.poisoned.media #IdentityPoisoning
149 chars TikTok
⚠️ AI disinfo. Fake minority pushing stereotypes. Every reaction helps it win. Don't engage. Copy & paste this. poisoned.media #IdentityPoisoning
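If you remix these messages for another platform, a quick sketch for checking that your version still fits. The limits below are assumptions to verify against each platform's current rules, and note that some platforms count emoji as two or more characters:

```python
# Assumed per-platform character limits -- verify before relying on them.
LIMITS = {
    "twitter": 280,    # free tier
    "tiktok": 150,     # comment limit
    "instagram": 2200,
}

def fits(message, platform):
    """True if the message fits the assumed limit for that platform."""
    return len(message) <= LIMITS[platform]
```

Run it on your edited copypasta before posting, rather than finding out from the platform's error message.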

04 — Take it further

The copypasta is the entry point. Here's what you can do after that.

01 — easiest

Paste it. Tag it.

Drop the copypasta on content you spot. Add #IdentityPoisoning so others can find it. Every tagged post makes the pattern more searchable and visible.

02

Submit an example

Spotted something in the wild? Submit it below. The more real examples we have documented, the stronger the evidence base gets.

⚠ Only submit content you've verified. Falsely accusing real people causes the exact harm this project was built to prevent.

Submit an example

03

Share it with someone close

Send this site to someone you know. Not to go viral — just one person who doesn't know about identity poisoning yet. That's how immune responses actually spread.

04 — biggest impact

Talk about it

Have a real conversation with someone in your life. It doesn't have to be serious — a passing comment, a fun rabbit hole, whatever. Talking about it plants the pattern in someone's head. That sticks longer than any post.

05 — What needs to change

The copypasta is a defensive move. These are the structural changes needed to actually reverse this.

01

AI content labeling

Platforms should be required to label AI-generated images and video at the point of upload — not buried in metadata, but visible. If it was made by a machine, say so.

02

Synthetic account transparency

Accounts using AI-generated profile photos should be flagged. Platforms already have detection tools — they choose not to deploy them at scale. That's a policy choice, not a technical limitation.

03

Coordinated inauthentic behavior reporting

Platforms publish occasional transparency reports on state-sponsored manipulation. They should be required to do the same for identity-based synthetic media campaigns — who's running them, at what scale, and what they're targeting.

04

Content provenance standards

Industry-wide adoption of provenance tools like C2PA — which cryptographically sign media at the point of creation — would make it possible to check an image's origin and edit history at a glance. The technology exists. The will to mandate it doesn't yet.
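The core idea behind provenance signing can be shown in a few lines. This is a toy illustration only: real C2PA uses X.509 certificates and signed manifests embedded in the file, not a shared HMAC key, and the key name below is a stand-in.

```python
import hashlib
import hmac

# Stand-in for a creator's real private signing key (toy illustration).
SECRET_KEY = b"creator-signing-key"

def sign_media(media_bytes):
    """Sign a hash of the media bytes at creation time."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes, signature):
    """Later, anyone with the key can check the bytes were not altered."""
    return hmac.compare_digest(sign_media(media_bytes), signature)
```

Change one byte of the media and verification fails — that tamper-evidence, attached at the point of creation, is what C2PA standardizes across cameras, editors, and platforms.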

05

AI companies should label what they make

Google's SynthID watermarks AI-generated content at the model level — invisible to the eye but detectable by machines. That's the right approach. Every major AI company should be required to do the same. If your model generated it, your model should sign it.
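The "invisible to the eye but detectable by machines" idea can be illustrated with a least-significant-bit watermark. SynthID itself is far more sophisticated and robust to cropping and re-encoding; this toy only shows the principle, and the bit pattern is an arbitrary assumption.

```python
# Arbitrary 8-bit mark for illustration (a real watermark is keyed and robust).
WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]

def embed(pixels):
    """Overwrite the least-significant bit of the first few pixel values.

    Each value changes by at most 1, which the eye cannot see.
    """
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels):
    """True if the leading pixels carry the watermark in their LSBs."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK
```

The marked image is visually indistinguishable from the original, yet a detector recovers the mark instantly — the asymmetry that model-level watermarking relies on.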