Token Counter
Count tokens for GPT-4, Claude, Gemini, and Llama.
- ~Tokens: 29
- Words: 18
- Characters: 101
- Sentences: 2

Context window usage for GPT-4o mini: 29 / 128,000 tokens (0.0%)

Estimated API cost (GPT-4o mini):
- Input cost: $0.000004 ($0.15/M tokens)
- Output cost: $0.000017 ($0.60/M tokens)
Token count is approximate (GPT BPE-style). Actual counts may vary ±10–15% per model.
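For a rough sense of how an estimate like the one above can be produced without a tokeniser library, here is a minimal Python sketch using the ~4-characters-per-token rule of thumb. The `estimate_stats` name and the regexes are illustrative only, and this plain heuristic will not reproduce the tool's BPE-style numbers exactly.

```python
import math
import re

def estimate_stats(text: str, chars_per_token: float = 4.0) -> dict:
    """Tokenizer-free estimate: roughly 4 characters per token in English prose."""
    words = re.findall(r"\S+", text)                 # whitespace-separated words
    sentences = re.findall(r"[.!?]+(?=\s|$)", text)  # crude sentence boundaries
    return {
        "tokens": math.ceil(len(text) / chars_per_token),
        "words": len(words),
        "characters": len(text),
        "sentences": max(len(sentences), 1) if text.strip() else 0,
    }

print(estimate_stats("Know your token count before you run your prompt."))
```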
Know your token count before you run your prompt
Hitting the context window limit mid-request causes silent truncation or errors. Check your text length, compare it against your model's context window, and estimate the API cost — all before making a single API call.
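As a sketch of that pre-flight check (the helper name, the 128,000-token window default, and the 4,096-token output budget are assumptions for illustration, not part of any provider's API):

```python
def fits_context(input_tokens: int,
                 context_window: int = 128_000,  # e.g. GPT-4o mini
                 output_budget: int = 4_096) -> bool:
    """True if the prompt plus a reserved output budget fits the model's window."""
    return input_tokens + output_budget <= context_window

prompt_tokens = 29  # estimate from the counter above
if not fits_context(prompt_tokens):
    print("Prompt too long: trim the text or switch to a larger-context model.")
```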
Frequently asked questions
- What is a token in AI models?
- Tokens are the basic units that language models process — roughly 4 characters or 0.75 words in English. Common words are single tokens; rare words, code, and non-English text often require more tokens per word. Models are billed per token.
- Why does token count vary by model?
- Different models use different tokenisation algorithms (BPE, SentencePiece, etc.) trained on different vocabularies. GPT-4 uses the cl100k_base encoding, while Claude uses its own tokeniser. The same text can therefore have different token counts across models.
- What is a context window?
- The context window is the maximum number of tokens a model can process in a single request (input + output combined). GPT-4o has 128K tokens, Claude 3.5 Sonnet has 200K, and Gemini 1.5 Pro has 2M. Inputs exceeding the context window are truncated or rejected.
- How accurate is this counter?
- The token count is approximate (±10–15%) because exact tokenisation requires the model's specific tokeniser library (tiktoken for OpenAI, etc.). For precise counts, use the OpenAI Tokenizer (platform.openai.com/tokenizer) or the tiktoken Python library (a short example follows this FAQ).
- How is API cost calculated?
- API cost = (token count / 1,000,000) × price per million tokens. Input and output tokens are priced differently; outputs are typically 3–4× more expensive than inputs. The estimates above apply your text's token count to both rates at current list prices, so the actual output cost depends on how many tokens the model generates (a worked example follows this FAQ).
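For an exact count with OpenAI models, a minimal tiktoken sketch (assuming the tiktoken package is installed; cl100k_base is the GPT-4 encoding, while newer GPT-4o models use o200k_base):

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4 / GPT-3.5-turbo;
# tiktoken.encoding_for_model("gpt-4o") would select the newer o200k_base.
enc = tiktoken.get_encoding("cl100k_base")

text = "Know your token count before you run your prompt."
tokens = enc.encode(text)
print(len(tokens), "tokens")
```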
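And the cost formula from the last answer as a short Python helper. The GPT-4o mini list prices ($0.15/M input, $0.60/M output) are taken from the figures above and may change:

```python
def api_cost(tokens: int, price_per_million: float) -> float:
    """cost = (token count / 1,000,000) x price per million tokens"""
    return tokens / 1_000_000 * price_per_million

# 29 tokens at GPT-4o mini list prices
print(f"input:  ${api_cost(29, 0.15):.6f}")   # ~$0.000004
print(f"output: ${api_cost(29, 0.60):.6f}")   # ~$0.000017
```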