On Saturday, tech entrepreneur Siqi Chen released an open-source plug-in for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model.
Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plug-in on GitHub, where it has picked up more than 1,600 stars as of Monday.
"It's really helpful that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to … not do that."
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant. It consists of a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model that powers the assistant. Unlike a conventional system prompt, the skill file is formatted in a standardized way that Claude models are fine-tuned to interpret with more precision than a plain system prompt. (Custom skills require a paid Claude subscription with code execution turned on.)
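For context, a Claude Code skill generally takes the shape of a SKILL.md file: a short YAML frontmatter block followed by plain Markdown instructions. The sketch below is illustrative only; the field values and sample instructions are our own stand-ins drawn from the patterns discussed in this article, not the actual contents of Chen's Humanizer file.

```markdown
---
name: humanizer
description: Rewrite prose to avoid common AI-writing tells flagged by Wikipedia editors.
---

# Humanizer (illustrative sketch, not Chen's actual file)

When writing or editing prose:

- Avoid inflated phrases such as "marking a pivotal moment" or "stands as a testament to."
- State plain facts instead of promotional framing.
- Don't tack "-ing" clauses onto sentences to sound analytical.
```

In Anthropic's skill format, the short description tells the model when the skill applies, and the body of the file is what gets folded into the model's context.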
But as with all AI prompts, language models don't always follow skill files perfectly, so does the Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it may have some drawbacks: It won't improve factuality and may hurt coding ability.
Notably, some of Humanizer's instructions could lead you astray, depending on the task. For example, the Humanizer skill includes this line: "Have opinions. Don't just report facts—react to them. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." While being imperfect seems human, this sort of advice would probably not do you any favors if you were using Claude to write technical documentation.
Even with its drawbacks, it's ironic that one of the web's most-referenced rule sets for detecting AI-assisted writing may help some people subvert it.
Recognizing the Patterns
So what does AI writing look like? The Wikipedia guide is specific, with many examples, but we'll give you just one here for brevity's sake.
Some chatbots like to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing cities as "nestled within" scenic regions. They tack "-ing" phrases onto the ends of sentences to sound analytical: "symbolizing the region's commitment to innovation."
To work around these patterns, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:
Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."
After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."
Claude will read that and do its best, as a pattern-matching machine, to create an output that fits the context of the conversation or task at hand.
Why AI Writing Detection Fails
Even with such a confident set of rules crafted by Wikipedia editors, we've previously written about why AI writing detectors don't work reliably: There's nothing inherently unique about human writing that reliably differentiates it from LLM writing.
One reason is that even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes that's very difficult, as OpenAI found in its yearslong fight against the em dash.)
