Plugin uses Wikipedia's AI-writing rules to make Claude write plainer prose


The Humanizer skill tells Claude to replace inflated language with plainer facts, following AI-writing rules compiled by Wikipedia editors, and offers an explicit example transformation.

The piece gives this example: Before: “The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.” After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.” The article says Claude will read that and “do its best as a pattern-matching machine to create an output that matches the context of the conversation or task at hand.”
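The transformation above can be roughly illustrated in code. The skill itself works through prompt instructions, not string matching, but a minimal sketch of the "flag inflated phrasing" idea might look like this; the phrase list here is a hypothetical, abbreviated stand-in for the Wikipedia editors' guide:

```python
import re

# Hypothetical, abbreviated list of "puffery" phrases in the spirit of the
# Wikipedia editors' guide; the real skill steers the model via prompt
# instructions rather than a fixed phrase list.
PUFFERY = [
    r"marking a pivotal moment",
    r"officially established",
    r"the evolution of",
    r"stands as a testament",
]

def flag_puffery(text: str) -> list[str]:
    """Return the puffery phrases found in `text` (case-insensitive)."""
    return [p for p in PUFFERY if re.search(p, text, re.IGNORECASE)]

before = ("The Statistical Institute of Catalonia was officially established "
          "in 1989, marking a pivotal moment in the evolution of regional "
          "statistics in Spain.")
print(flag_puffery(before))
```

Run on the "before" sentence, this flags three phrases; the plainer "after" sentence from the article would pass clean.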

It also notes limits to detection: nothing inherently unique about human writing reliably differentiates it from LLM writing, and models can be prompted to avoid typical AI phrasing (the piece even cites OpenAI's long struggle with the em dash). The Wikipedia guide is described as a set of observations rather than ironclad rules. A 2025 preprint cited in the article found that heavy users of large language models correctly spot AI-generated articles about 90 percent of the time, but the roughly 10 percent false-positive rate means quality human writing could be discarded. The article suggests AI-detection efforts may need to go beyond flagging particular phrasing and examine substantive factual content instead.
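The worry about false positives is a base-rate effect, which a quick calculation makes concrete. The 90 percent detection and 10 percent false-positive figures come from the cited preprint, but the base rate below (5 percent of submissions being AI-written) is a made-up assumption purely for illustration:

```python
# Illustrative arithmetic only. The 90%/10% rates are from the cited
# preprint; the 5% base rate is an assumed value for this example.
true_positive_rate = 0.90   # AI text correctly flagged
false_positive_rate = 0.10  # human text wrongly flagged
base_rate = 0.05            # assumed share of submissions that are AI-written

flagged_ai = true_positive_rate * base_rate
flagged_human = false_positive_rate * (1 - base_rate)
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flags that are actually AI text: {precision:.0%}")
```

Under this assumption, roughly two-thirds of flagged submissions would be human-written, which is why even a 10 percent false-positive rate can discard a lot of quality writing when AI text is rare.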


Key Topics

Tech, Claude, Humanizer, Wikipedia, AI Detection, Large Language Models