Building in Public — CheckApp Studio Roadmap
CheckApp is phase one
CheckApp v1.2.0 is a checker. You finish writing, you run it, you fix what it finds. 12 skills, CLI plus dashboard plus MCP server, BYOK, 338 tests passing. That pipeline is stable and shipped today.
That's phase one. Phase two is a writing editor that runs the checks while you're still writing.
I'm building it. It's not shipped. This post is about where it's going.
The friction that still exists
The current flow has a seam in it. You write in your editor of choice — Notion, VS Code, iA Writer, whatever. When you're done, you export the article to a .md file, run checkapp article.md, read the report, go back to the editor, apply fixes, export again. Sometimes you run it a second time to make sure the fixes held.
That export-check-fix-export loop isn't painful exactly. But it breaks flow. And it throws away useful information.
When CheckApp finds a factual claim with weak evidence, the report shows you the sources it found — real URLs with relevance scores. But those sources live in a terminal output or a dashboard tab, not next to the paragraph you're editing. You have to hold them in your head while you go fix the sentence.
When the grammar skill returns a rewrite, you have to manually apply it. Accept or reject happens outside the editor, in a separate window, against a JSON blob.
The live cost counter tells you what a check would cost — but it's a pre-flight estimate you run before writing, not something you can watch as the article grows.
These aren't bugs. They're the natural limit of a tool that sits outside your writing environment. The tool that comes next closes that gap.
What Studio will be
CheckApp Studio is a web-based writing editor where findings appear inline as you write, not after you export.
Here's what I'm building toward:
Inline findings. Grammar issues underlined as you type. Factual claims with shaky evidence flagged in the margin. Tone drift noted inline, not in a separate report. The same skills that run on the CLI, surfaced inside the document where you can act on them immediately.
Research panel. When CheckApp finds sources for a claim — from Exa Search, Exa Deep Reasoning, Parallel Task, or Semantic Scholar — they show up in a side panel next to that paragraph. If a source supports your claim, you can insert the citation with one click. If a source contradicts the claim, you see that while you're still writing, not after you've published.
Rewrite co-pilot. Accept or reject grammar and style rewrites inline. Like code suggestions in an IDE — cursor at the finding, hit accept, done. No copy-paste from a separate report.
Live cost counter. A running total visible while you write. You know what a check run costs before you trigger it. No surprises.
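To make the inline model concrete, here's a rough sketch of how findings from a skill run could become editor decorations, and how accepting a rewrite could work. Everything in it is hypothetical: the `Finding` shape, the field names, and the `acceptRewrite` helper are illustrations for this post, not the actual CheckApp report format.

```typescript
// Hypothetical shapes -- the real CheckApp report format may differ.
// A finding from one skill run, with an optional suggested rewrite.
interface Finding {
  skill: string;      // e.g. "grammar", "facts", "tone"
  start: number;      // character offset in the document
  end: number;
  message: string;
  rewrite?: string;   // suggested replacement text, if any
}

// An inline decoration the editor can render.
interface Decoration {
  from: number;
  to: number;
  kind: "underline" | "margin";
  note: string;
}

// Grammar issues underline the span; everything else flags the margin.
function toDecorations(findings: Finding[]): Decoration[] {
  return findings.map((f) => ({
    from: f.start,
    to: f.end,
    kind: f.skill === "grammar" ? "underline" : "margin",
    note: f.message,
  }));
}

// Accepting a rewrite is a plain span replacement, IDE-style:
// cursor at the finding, hit accept, done.
function acceptRewrite(doc: string, f: Finding): string {
  if (f.rewrite === undefined) return doc;
  return doc.slice(0, f.start) + f.rewrite + doc.slice(f.end);
}
```

The point of the sketch: a finding carries enough position data that accept/reject never leaves the document, which is exactly the copy-paste step the current report flow can't avoid.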
None of this is a chat interface. There's no "ask the AI to improve your writing" box. Studio isn't a co-writer. It's still a checker — just one that runs while the document is open, not after you've closed it.
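As a sketch of how the live counter could work: check cost scales with document length, so a running estimate is just words to tokens to dollars, recomputed as you type. Every constant below is a placeholder I made up for illustration; with BYOK, the real rates depend entirely on which providers and models your keys point at.

```typescript
// Placeholder numbers -- real pricing depends on your providers (BYOK).
const TOKENS_PER_WORD = 1.3;       // rough English average
const COST_PER_1K_TOKENS = 0.005;  // made-up blended rate, USD
const SKILL_COUNT = 12;            // skills shipped in v1.2.0

// A running estimate the editor could recompute as the draft grows:
// each skill reads the document once, so cost scales with length.
function estimateCheckCost(wordCount: number): number {
  const tokens = wordCount * TOKENS_PER_WORD;
  return (tokens / 1000) * COST_PER_1K_TOKENS * SKILL_COUNT;
}
```

Under those placeholder rates, a 1,000-word draft comes out to roughly eight cents per full check — the useful part isn't the number, it's that the number is visible before you trigger the run.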
What it's built on
The tech decisions are already made. Studio uses Vercel AI SDK v6 for the streaming and tools layer, with vercel/ai-chatbot as the starting scaffolding. The editor will be Tiptap or Monaco, depending on what early user feedback suggests. Still BYOK — you configure your own keys, same as the CLI. Still open source, MIT. Runs on the Vercel free tier for hobbyists.
The skills that run in the CLI are the same skills that will run in Studio. Same providers, same BYOK model, same cost structure. The only thing changing is where the results appear.
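To illustrate the "same skills, different surface" idea: the CLI collects everything into one report at the end, while Studio can consume the same results as a stream and decorate the document as each finding lands. The generator below fakes two skill checks with toy logic — the names and shapes are hypothetical, not CheckApp's actual interfaces.

```typescript
// Hypothetical sketch: findings arrive incrementally instead of as
// one report after the run. Toy checks stand in for real skills.
interface LiveFinding {
  skill: string;
  message: string;
}

async function* runSkills(doc: string): AsyncGenerator<LiveFinding> {
  // Fake "grammar" skill.
  if (/\btheir is\b/i.test(doc)) {
    yield { skill: "grammar", message: '"their is" should be "there is"' };
  }
  // Fake "tone" skill.
  if (doc.split(/\s+/).length < 5) {
    yield { skill: "tone", message: "draft is very short" };
  }
}

// The editor consumes findings as they arrive and decorates inline,
// instead of waiting for the full run to finish.
async function collect(doc: string): Promise<LiveFinding[]> {
  const out: LiveFinding[] = [];
  for await (const f of runSkills(doc)) out.push(f);
  return out;
}
```

Same checks either way; the only difference is that the CLI would print `out` as a report, and Studio would handle each finding the moment it's yielded.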
Timeline, honestly
The CLI launched publicly. It's stable and shipped. I want real feedback from real users on the existing pipeline before I commit to a Studio scope.
Studio design starts now that CheckApp is public. Studio MVP is targeted for later in 2026 — I'm not putting a specific quarter on it until I know what early users actually need from an editor.
I'm building in public. I'll post updates here as the design solidifies. If early use surfaces something unexpected — a skill that breaks everyone's flow, a provider that's too expensive — that affects what Studio prioritizes. I'm not going to spec a feature set in advance and then build it in a vacuum.
Shape it with me
The people using the CLI now are the ones who will shape what Studio becomes. If you've installed it and have opinions on the current flow, open a GitHub issue — that's the feedback loop.
The friction I described above — the export loop, the sources stuck in a separate tab, the rewrites you have to apply by hand — these are the problems I'm solving. But you might have different friction. A skill that produces too much noise. A context switch I haven't thought about. A workflow where the CLI is actually fine and a web editor adds nothing.
I'm not going to build Studio based on assumptions. I'm going to build it based on what the people using the CLI tell me is broken.
Open an issue on GitHub if you have ideas. If you want to influence the editor UX, the place to start is using the CLI and telling me what's annoying about the current flow. Early adopters get first access to Studio previews. Subscribe to the repo (GitHub "watch" → "releases only") for notifications.
The CLI is shipped and works today. Install it, run it, tell me what's broken. Studio comes after.
npm install -g checkapp
checkapp --setup
checkapp article.md