backstoryai-content · 2026-04-16 · 5 min read

Why I built CheckApp

I spent most of 2025 ghostwriting. Marketing articles, SEO pieces, thought-leadership posts — the whole spread, for clients across fintech, wellness, and SaaS. By the middle of the year AI had shifted from "optional accelerator" to "default first draft." And that's when I started seeing the cracks.

The first three articles I nearly shipped

  • One had an entire paragraph lifted almost verbatim from a Wikipedia entry on vitamin D. Three sentences. Enough for Copyscape to flag it at 33% similarity.
  • One stated a statistic confidently — "the average B2B buyer now engages with 13 pieces of content before purchasing" — that came from nowhere. No source. No paper. The number didn't exist.
  • One promoted a supplement as "clinically proven to reduce inflammation in 72 hours." That's a straight FDA violation, and the client would have been the one holding the bag.

Each of those articles looked fine on first read. Each one could have gone live.

The gap nobody was filling

Every AI content team I talked to had the same workflow: prompt → draft → light edit → publish. Everyone knew AI hallucinates. Nobody had a systematic safety net. People relied on editors' intuition — which doesn't scale when you're shipping 20 articles a week across five clients.

The existing tools either solved one piece (Copyscape for plagiarism, Originality for AI detection) or lived inside a CMS (and required moving your whole content workflow). Nothing ran from the command line. Nothing composed multiple checks into a single report. Nothing integrated with the AI agents that were generating the drafts in the first place.

What I built

CheckApp runs nine skills in parallel against any .md or .txt file, or a public Google Doc. Each skill returns a score, a verdict, and specific findings. The whole thing runs in under 30 seconds for a typical 800-word article and costs about 15 cents in API fees (estimate — varies by provider, model, and length).

  • Plagiarism + AI detection via Copyscape.
  • Fact-check via Exa's neural search — extract claims, find evidence, rate confidence.
  • Tone compared against a brand voice doc you upload once.
  • Legal risk scan for health claims, defamation, GDPR, false promises.
  • SEO, summary, brief matching, content purpose — the rest of the quality bar.
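The nine-skill fan-out described above can be sketched in a few lines of Python. Everything here is illustrative — the skill names, the SkillResult shape, and the run_skill stub are my assumptions about the architecture, not the tool's actual code; a real skill would call its provider API where the placeholder sits.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class SkillResult:
    skill: str
    score: float        # 0-100
    verdict: str        # "pass" / "warn" / "fail"
    findings: list[str]


async def run_skill(name: str, text: str) -> SkillResult:
    # Placeholder: a real skill would call Copyscape, Exa, etc. here.
    await asyncio.sleep(0)
    return SkillResult(skill=name, score=100.0, verdict="pass", findings=[])


async def check_article(text: str) -> list[SkillResult]:
    skills = ["plagiarism", "ai_detection", "fact_check", "tone",
              "legal_risk", "seo", "summary", "brief_match", "purpose"]
    # gather() fires all skill checks concurrently, so total latency
    # is bounded by the slowest provider, not the sum of all nine.
    return await asyncio.gather(*(run_skill(s, text) for s in skills))


results = asyncio.run(check_article("draft text"))
```

The design point is the concurrency: because each skill is an independent network call, running them with asyncio.gather keeps the whole report under the latency of the single slowest check.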

It's free. MIT. You bring your own API keys and pay the providers directly.

What comes next

Right now the tool runs locally — CLI plus a local web dashboard. The next big arc is agent integration: the MCP server is already live so Claude Code and Cursor can call check_article directly. That means an agent can draft an article and then run its own quality gate before handing it back. The feedback loop gets tighter every month.
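For readers who want to try the agent loop, wiring an MCP server into Claude Code is a small JSON config entry. The shape below follows the standard mcpServers format; the server key, command, and package name are hypothetical placeholders, not the tool's real install path — check the repo for the actual command.

```json
{
  "mcpServers": {
    "article-checker": {
      "command": "npx",
      "args": ["-y", "article-checker-mcp"],
      "env": { "COPYSCAPE_API_KEY": "..." }
    }
  }
}
```

Once registered, the agent can call check_article as a tool and gate its own drafts before returning them.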

If any of this hits home — if you've shipped something you later regretted, or your team is moving faster than your editing process can keep up — the repo is here. Open an issue. Fork it. Let me know what's missing.

CheckApp is free and open-source, built by one person. If this post helped — ☕ buy me a coffee


Try CheckApp

Open source. MIT. ~$0.15/check (estimate). Install in 60 seconds.