CheckApp Tutorial — Your First AI Content Check in 5 Minutes
By the end of this post you'll have run CheckApp on an article and read the report. Total time: 5 minutes. Total cost: under 10 cents.
No signup. No gate. Install, configure one API key, run. That's it.
What you'll need
- Node 20+ (check with `node --version`)
- A terminal
- One API key — Exa Search for fact-check (free trial tier available), or skip fact-check entirely and start with LanguageTool (free, no key) + the offline SEO skill
- A `.md` or `.txt` file you want to check (or use a public Google Doc URL)
That's the whole list. You don't need to configure all 12 skills to run your first check. Start with one.
Step 1: Install
npm install -g checkapp
One command. CheckApp ships as a global CLI with a full web dashboard built in. You'll do most work in the dashboard — the CLI is mostly there for CI / automation. After install, checkapp --version should return 1.2.0.
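Since the CLI exists mainly for CI, here's what that might look like as a GitHub Actions job. This is a sketch, not from CheckApp's docs: it assumes checkapp exits non-zero on a FAIL verdict, and the file name, trigger, and paths are illustrative.

```yaml
# .github/workflows/content-check.yml (hypothetical)
name: content-check
on:
  pull_request:
    paths: ["content/**.md"]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g checkapp
      - run: checkapp content/article.md   # assumes non-zero exit on FAIL
```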
Step 2: Open the dashboard
checkapp --ui
That starts a local web server and opens the dashboard at http://localhost:3000 in your browser. This is the primary UI — Run Check, Reports history, Skills management, Settings (providers + keys), Contexts (brand voice, briefs), Cost estimates. Leave it running; everything below uses it.
Step 3: Configure providers
In the dashboard, click Settings in the left sidebar. The Providers page lists all 12 skills with their available providers — free tiers are badged so you can see at a glance what won't cost anything.
(If you prefer the terminal, checkapp --setup walks through the same options in a wizard.)
The full catalog of skills and providers:
| Skill | What it checks | Free option | Paid option |
|---|---|---|---|
| Grammar | spelling, grammar, style, offset-based rewrites | LanguageTool | Sapling |
| SEO | keyword density, readability, headings | Offline (built-in) | — |
| Fact-check | real sources for factual claims | — | Exa Search, Exa Deep Reasoning, Parallel Task |
| Plagiarism | copied passages | — | Copyscape, Originality.ai |
| AI detection | AI-generated content flag | — | Copyscape, Originality.ai |
| Self-plagiarism | overlap with your own past articles | Upstash Vector free tier | Cloudflare Vectorize, Pinecone |
| Tone | voice alignment to your brand doc | — | Claude, MiniMax, OpenRouter |
| Legal | health / FDA / GDPR risk | — | Claude, MiniMax, OpenRouter |
| Summary | key points | — | Claude, MiniMax, OpenRouter |
| Brief | coverage of a project brief | — | Claude, MiniMax, OpenRouter |
| Purpose | intent drift | — | Claude, MiniMax, OpenRouter |
| Academic | Semantic Scholar citations for scientific claims | Semantic Scholar (no key) | — |
Twelve skills, six providers that each run for free. You control the cost by which paid providers you add — that's the BYOK promise.
Fastest zero-cost path for your first run:
- Grammar → LanguageTool (free)
- SEO → Offline (free)
- Academic → Semantic Scholar (free, no key)
- Everything else → press Enter to skip.

A `skipped` verdict is not a failure — it just means the skill didn't run because you didn't configure it. You can add providers later. This is the normal state for most of the 12 skills on a first run.
The config is saved to ~/.checkapp/config.json. You can edit it directly or re-run --setup any time.
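For reference, a config with just the free providers enabled might look something like this. The exact schema is CheckApp's, so treat the field names here as illustrative, not authoritative:

```json
{
  "skills": {
    "grammar": { "provider": "languagetool" },
    "seo": { "provider": "offline" },
    "academic": { "provider": "semantic-scholar" }
  }
}
```

If you add a paid provider later, its API key goes in this file too — which is why it lives in your home directory, not your repo.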
Step 4: Run your first check
checkapp article.md
Replace article.md with your file. CheckApp accepts Markdown, plain text, and Google Doc URLs (checkapp https://docs.google.com/...).
The check runs in under 30 seconds for a typical 800-word article. Output prints to the terminal as skills complete.
Here's what the output looks like:
CheckApp v1.2.0 — article.md (843 words)
grammar warn 8 findings
seo pass keyword density 1.4%, readability 68
fact-check skipped (no provider configured)
tone skipped (no provider configured)
plagiarism skipped (no provider configured)
...
Verdict: WARN
Estimated cost this run: $0.00
Skills you didn't configure show as skipped. That's expected. It means the skill didn't run, not that your article passed.
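If you're wondering what a number like "keyword density 1.4%" actually measures, it's typically just keyword occurrences over total words. A rough sketch of the idea — not CheckApp's exact formula:

```python
def keyword_density(text: str, keyword: str) -> float:
    """Keyword occurrences as a percentage of total words (rough sketch)."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") == keyword.lower())
    return round(100 * hits / len(words), 1)

print(keyword_density("AI tools help. AI tools scale content.", "ai"))  # -> 28.6
```

Most SEO guidance puts a healthy density around 1–2%, which is why 1.4% passes.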
Step 5: Read the report
After the summary, CheckApp prints findings for each non-passing skill. For grammar it looks like this:
grammar — WARN — 8 findings
[1] offset 142–158 — "utilise" → "utilize" (US spelling)
[2] offset 203–241 — passive voice: "was written by" → "wrote"
[3] offset 388–412 — double space before "and"
...
Each finding has:
- Offset — the exact character range in your file where the issue is
- What it found — the original text
- Rewrite — a suggested fix, anchored to that offset
These are not find-and-replace suggestions. They're offset-based splices, applied in descending order. If you have 8 fixes in one sentence, they don't drift and corrupt each other.
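The descending-order trick is easy to show outside CheckApp. A minimal sketch in plain Python (not CheckApp's code): because fixes are applied from the end of the text backward, earlier offsets stay valid even when a replacement changes the text's length.

```python
def apply_fixes(text: str, fixes: list[tuple[int, int, str]]) -> str:
    """Apply (start, end, replacement) fixes by character offset.

    Sorting descending by start offset means splices near the end of
    the text never shift the offsets of splices before them.
    """
    for start, end, replacement in sorted(fixes, reverse=True):
        text = text[:start] + replacement + text[end:]
    return text

article = "We utilise  AI tools."
fixes = [(3, 10, "utilize"), (10, 12, " ")]  # spelling fix, double space
print(apply_fixes(article, fixes))  # -> "We utilize AI tools."
```

Apply those same fixes in ascending order without re-computing offsets and the second splice lands in the wrong place — that's the drift the descending order avoids.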
For fact-check findings (if you enabled it), each finding also includes the claim, a verdict, real source URLs, and a confidence score. Here's a real example:
fact-check — WARN — 1 finding
[1] offset 518–578 — Claim: "the average B2B buyer engages with 13 pieces of content before purchasing"
Verdict: insufficient_evidence
Confidence: 0.31
Sources:
• gartner.com/en/doc/b2b-buying-journey (relevance 0.68)
• demandgen.com/2024-content-report (relevance 0.52)
Note: 13-pieces figure not found in top sources; closest reference cites 6–8.
The URLs are live. Open them. The relevance scores come from the retrieval provider, not an LLM. The verdict comes from an LLM reading those real pages — not recalling facts from training. When the sources don't support the claim, you see insufficient_evidence, not a faked pass.
Understanding the verdicts
Every skill returns one of four verdicts:
| Verdict | Meaning |
|---|---|
| pass | No issues found |
| warn | Issues found, but not blocking |
| fail | Blocking issues — this article has a real problem |
| skipped | Skill not configured — not a failure, just not running |
The overall article verdict is the worst verdict across all configured skills. If grammar is warn and SEO is pass, the article verdict is warn.
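"Worst verdict wins" is just a max over a severity order, with skipped skills excluded. A sketch of that rule (my own illustration, not CheckApp's code):

```python
SEVERITY = {"pass": 0, "warn": 1, "fail": 2}

def article_verdict(skill_verdicts: dict[str, str]) -> str:
    """Worst verdict across configured skills; skipped skills don't count."""
    ran = [v for v in skill_verdicts.values() if v != "skipped"]
    if not ran:
        return "skipped"  # nothing configured, nothing ran
    return max(ran, key=SEVERITY.__getitem__)

print(article_verdict({"grammar": "warn", "seo": "pass", "tone": "skipped"}))  # -> warn
```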
skipped is not a problem. It means you haven't configured that skill yet. A full configuration with all 12 skills enabled costs $0.05–$0.25 per article depending on your provider choices. Start with the free skills, add paid ones when you're ready.
Step 6: Back to the dashboard — history, drill-downs, reruns
Open the dashboard at localhost:3000 (it's still running from Step 2). The check you just ran is the top row in Reports — click it to see the full breakdown with clickable findings.
The dashboard keeps history of every check. Click any past report to review findings. Click View evidence on a fact-check finding to see the actual source URLs with relevance scores — not LLM memory, real pages.
What to try next
Enable more skills. Grammar and SEO are free. The highest-signal paid skill for most writers is fact-check — add an Exa API key (Exa Search at ~$0.007/claim, Exa Deep Reasoning at ~$0.025/claim for deeper retrieval, or Parallel Task for multi-hop research-grade reasoning). Both Exa and Parallel have free-trial tiers.
Upload a tone guide. Go to Settings → Context, upload your brand voice document, then enable the Tone skill. On your next check, CheckApp compares the article against your voice guide. Costs about $0.002 per run with MiniMax.
Run it on your last five articles. Not for the findings — for the patterns. Most writers have one or two recurring issues (passive voice, unsupported statistics, keyword stuffing). Five articles will tell you what yours are.
Install it and run it today
npm install -g checkapp
checkapp --setup
checkapp article.md
The code is on GitHub — MIT license, 338 passing tests. If something breaks, open an issue.
For Claude Code users: install the MCP server from the repo and check_article becomes a native tool in your agent workflow. Your agent drafts. Then it checks.
Was this useful?
Share it with someone who ships AI content.