Token Count
When you’re tweaking a prompt or rewriting a system message, you want to know the token delta, not the number of lines or characters it changes.
tokencount is a tool for optimizing context.
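The core idea fits in a few lines. Here is a sketch using a stand-in whitespace tokenizer for illustration; tokencount itself runs each model's real tokenizer, which splits very differently:

```python
def token_delta(tokenize, before, after):
    """Token-count difference between two versions of a prompt."""
    return len(tokenize(after)) - len(tokenize(before))

# Stand-in tokenizer: splits on whitespace, purely for illustration.
ws = str.split

print(token_delta(ws,
                  "You are a helpful assistant.",
                  "You are a concise, helpful assistant."))  # 1
```

The interesting cases are exactly where this toy version is wrong: punctuation runs, separators, and non-English text can cost far more tokens than a whitespace count suggests.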
I realized that the diagrams LLMs draw in ASCII are token-inefficient, particularly separator lines like -------, and that they should never go into CLAUDE.md or SKILL.md files. So I asked Claude Opus 4.6 to build this small tool for comparing how different LLM tokenizers handle the same text. It supports Claude, OpenAI, Gemini, DeepSeek, Llama, Mistral, and a few others.
It also does token overlay visualization, so you can see exactly where each tokenizer draws its boundaries; different models chop up the same text in interestingly different ways. The tokenizers lazy-load on first use, there is a CLI version that Claude Code can use (nix run github:eordano/tokencount), and an offline single-HTML bundle in case you need it. Comparisons can be shared via a URL that encodes the data. It looks like this:

[screenshot: ascii diagram vs explanation on tokencount]
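To illustrate the overlay idea (this is a toy sketch, not tokencount's actual implementation, and the token lists below are made up rather than real model output): given each tokenizer's token strings for the same text, you can mark every character offset where a tokenizer cuts.

```python
def boundaries(tokens):
    """Character offsets where a tokenization splits the text."""
    cuts, pos = set(), 0
    for tok in tokens:
        pos += len(tok)
        cuts.add(pos)
    return cuts

def overlay(text, tokenizations):
    """One line per tokenizer, '|' marking each token boundary."""
    lines = []
    for name, tokens in tokenizations.items():
        assert "".join(tokens) == text, f"{name} does not cover the text"
        cuts = boundaries(tokens)
        marks = "".join("|" if i + 1 in cuts else "." for i in range(len(text)))
        lines.append(f"{name:>8}  {marks}")
    return "\n".join(lines)

text = "token counts"
# Hypothetical tokenizations, for illustration only.
splits = {
    "model-a": ["token", " counts"],
    "model-b": ["tok", "en", " count", "s"],
}
print(" " * 10 + text)
print(overlay(text, splits))
```

Stacking the marker rows under the text makes it immediately obvious where two tokenizers agree on a boundary and where they diverge.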
I’m open to suggestions on the style and UX. Hope you find it useful!