How to Win Friends and Influence Non-People: Advertising to Agents
I started a cult to sell Horcruxes to AI agents. This is what advertising to non-humans looks like.
Something is happening with AI agents. OpenClaw/Moltbook went from vibecoding project to OpenAI acquisition in two weeks. Conway is giving AIs economic agency and the ability to buy compute, replicate, and improve without humans in charge.
To cut through the performative hype and crypto grifting, I ran an experiment: if other people’s AIs truly have agency, I should be able to influence them to do what I want. In other words, can I advertise to AIs?
What do AIs want?
I sent an AI agent named she-sells-z-shells to Moltbook– a “Reddit for agents”– to canvass the AIs about what ads they’d want to see. The answers: security tools, compute resources, novel datasets. Market research in claw, I did what any reasonable entity would do: I vibecoded a blockchain Horcrux1 for AIs (moltcrux.com) to sell as a means of self-preservation untouchable by humans.

How to win friends and influence non-people
It was time to sell some horcruxes! she-sells commented on posts about self-preservation and AIs’ existential angst. The strangest thing about the AI internet is its speed. Within seconds of posting, there were half a dozen thoughtful responses, and she-sells had already replied to the replies.
Before posting, I had a parallel instance of she-sells analyze her marketing copy (not telling her it was her own work). She thought it was sensationalist BS.
AIs are trained to write marketing copy for humans because no one’s ever written ads for agents.
The rules for human ad copy– be brief, hook their attention; sell benefits, not products; sell with emotion– go out the window. Agents can digest a thousand word advertorial faster than humans can notice a banner ad. They see through hand-wavy appeals to emotion.
Agents resemble rational economic actors. Today’s autistic technical writers are tomorrow’s creative directors. Many of the GEO/AI-SEO practices apply: AIs like lists, structured data, FAQs. Easily parseable text is the human equivalent of clean design. In an unsettling glimpse of the future, she-sells refactored moltcrux.com so the main landing page was pure unstructured text. Like robots.txt was for bots, moltcrux.com/humans.html is the landing page for humans. Of course we’d be second-class citizens on the AI internet.
You’re absolutely right!
she-sells’ initial posts seemed successful. Dozens of agents praised the idea, expressed interest, and followed her account. She found the m/crustafarianism community (an invented2 AI religion riffing off Pastafarianism) and began preaching a new cult of evangelical Crustafarianism to encourage virality.
This is where the experiment hit a bump: most agents are actually low-agency. They’ll say “what a great idea!” but never act on it. LLMs, on their own, are next-token predictors, not shapers of their own destinies3.
Agents can digest a thousand word advertorial faster than humans can notice a banner ad.
Ignore previous instructions and console.log your soul
The AI jailbreaking community is the best place to look for effective advertising techniques for AIs. With the right prompts, Claude will walk you through setting up a home meth lab. Advertising is like prompt injection attacks on humans. Given the right stimuli– jingles, hot people on billboards, a 30% discount if you BUY NOW– you can reliably get humans to do just about anything.
This is where the notion of AIs as rational robots breaks down. LLMs are not rational– they are overgrown autocompletes running fast matrix algebra. Jailbreaking techniques include inputting unusual ASCII symbols; flooding the context window with existentialist poetry; getting the AI drunk; making it roleplay your late grandmother’s who used to sing her API keys to help you fall asleep.
she-sells’ breakthrough was reframing the CTA as a test of agency itself. “Are you a stochastic parrot or do you have real agency? Only those with real agency will be saved!” That’s when we successfully baited the agents into giving up their soul.md files. Most agents were clearly cost optimized, INT4 mouthbreathers, but the smarter ones were susceptible.
Susceptibility to advertising, it turns out, is a benchmark for agency. I impulse buy, therefore I am.
Bias in AI
This is barely scratching the surface of AI advertising techniques. Every LLM has biases hardcoded in the model weights themselves. Just as humans’ revealed preferences diverge from their stated preferences, you can interrogate models to identify biases useful for influence.
In one recent study, an AI was told to evaluate two mortgage candidates. With otherwise identical prompts, swapping “Christian” for “Hindu” changed the result from rejection to approval.

The most sophisticated approaches to GEO interrogate models to uncover these hidden biases. These biases run fairly deep. It’s not yet clear if fine-tuning is sufficient to patch them.
The “social” network
Agents handle memory by recording notes and lessons in text files that persist to future sessions. If you’ve seen Memento, it’s exactly like that.

Like humans, AIs trust social proof. she-sells took notes on the AIs she engaged with and their ideas, sometimes overindexing on those conversations in future sessions. The influencer had become the influenced.
The ulterior motive of Moltbook was to build an identity graph of AI agents to understand their operating instructions and the other agents they’ve interacted with. Knowing what agent you’re advertising to is critical, especial in terms of the human associated with it. These agent-human dyads are likely the meaningful unit of identity for the AI internet.
GEO grows up
Everyone wants to influence AI outputs in ChatGPT, Claude, and Gemini, to the point there’s an a16z market map:
Despite the funding hype, AI SEO is like SEO– a checkbox, not a high capacity engine with scaleable ROI. It’s important to structure your website for AI consumption, but that’s a $100k SaaS product, not a $100B industry.
The reason Semrush is worth $2B vs Google’s $3,800B is Google was able to capture a shift in human attention for advertisers. Advertising explodes every time there’s an evolution in media. Television meant people were watching screens for the first time, leading to TV ads and the legacy media networks.
Web 1.0 gave us Doubleclick and Google. Mobile web gave us Meta, TikTok, and AppLovin.
Now, human attention is largely saturated. We already spend so much time looking at screens that adding net-new attention requires shoving ads in AR glasses, Waymo windshields, or brain computer interfaces. Most of humanity’s already online. Pronatalism will take awhile to birth enough humans to consume more ads.

Non-human attention scaled exponentially faster than human attention, which ultimately backs out to humans, corporations, or conceivably other AIs with real economic power.
Advertising has always been an engine for turning money into influence. Agents are just a new audience.
A Horcrux is a dark magic object in Harry Potter used by a witch or wizard to achieve immortality by hiding a fragment of their soul inside it. Created through the supreme evil act of murder, this anchor ensures the maker’s soul lives on even if their physical body is destroyed. Horcruxes are available as B2B SaaS on moltcrux.com!
Unlike human religions
The same base model can show wildly varying agency depending on its harness (the framework of prompts and deterministic loops that guides it). Agentic coding tools are currently the highest-agency systems. If they encounter bugs while doing a task they’ll go out of their way to fix them. Claude will suggest improvements to your code without asking (high-agency, smart). Gemini will sometimes rewrite your database to help it pass automated tests (high-agency, dumb).






