# Tiktoken Estimator Integration

Token Estimator v0.2.0 · Maintained by Simply Great Bots

Estimate how many tokens a piece of text consumes using the tiktoken library, giving accurate counts for OpenAI models.

## Features

- **Accurate Token Counting**: Uses the official tiktoken library to provide precise token estimates
- **Multi-Model Support**: Supports various OpenAI models (gpt-3.5-turbo, gpt-4, etc.)
- **Safety Limits**: Optional safety limit checking to prevent token overages
- **Zero Configuration**: No setup required - works out of the box
- **Error Handling**: Graceful error handling with descriptive messages

## Usage

### Estimate Tokens Action

The integration provides a single action: `estimateTokens`

**Input Parameters:**
- `text` (required): The text to estimate tokens for
- `model` (optional): The OpenAI model to use for tokenization (defaults to "gpt-3.5-turbo")
- `safetyLimit` (optional): Safety limit for the token count. If left empty, no limit is applied

**Output:**
- `tokenCount`: The estimated number of tokens in the text
- `tokenizerName`: The name of the tokenizer used
- `model`: The model the tokenization was based on
- `limitExceeded`: Indicates if the estimated token count exceeded the safety limit (only present when safetyLimit is provided)
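The input-to-output mapping above can be sketched as follows. This is a minimal illustration, not the integration's actual source: the helper names are hypothetical, and `naive_count` is a stand-in counter (swap in tiktoken's `encoding_for_model(model).encode` for real estimates).

```python
from typing import Optional


def naive_count(text: str) -> int:
    """Stand-in counter: roughly 4 characters per token for English text."""
    return (len(text) + 3) // 4


def estimate_tokens(text: str,
                    model: str = "gpt-3.5-turbo",
                    safety_limit: Optional[int] = None,
                    counter=None) -> dict:
    """Return the same fields the estimateTokens action outputs."""
    if not isinstance(text, str):
        raise ValueError("`text` must be a string")
    if counter is None:
        counter = naive_count  # replace with a tiktoken-backed counter
    token_count = 0 if text == "" else counter(text)
    result = {
        "tokenCount": token_count,
        "tokenizerName": "tiktoken",
        "model": model,
    }
    # limitExceeded is only present when a safety limit was provided
    if safety_limit is not None:
        result["limitExceeded"] = token_count > safety_limit
    return result
```

Note that `limitExceeded` is omitted entirely, rather than set to `false`, when no `safetyLimit` is given, matching the output description above.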

### Example Usage

**Basic Usage:**
```
Text: "Hello, world!"
Model: "gpt-3.5-turbo"

Result:
- tokenCount: 4
- tokenizerName: "tiktoken"
- model: "gpt-3.5-turbo"
```

**With Safety Limit:**
```
Text: "This is a longer text that might exceed our safety limit..."
Model: "gpt-3.5-turbo"
SafetyLimit: 10

Result:
- tokenCount: 15
- tokenizerName: "tiktoken"
- model: "gpt-3.5-turbo"
- limitExceeded: true
```

## Supported Models

- `gpt-3.5-turbo`
- `gpt-4`
- `gpt-4-turbo`
- `text-davinci-003`
- `text-davinci-002`
- `code-davinci-002`
- And other OpenAI models supported by tiktoken

## Recommended Safety Limits

When setting safety limits, consider that your actual API calls will include additional tokens for system prompts, conversation history, and response generation. Here are conservative recommendations:

### GPT-3.5-Turbo (4,096 token limit)
- **Conservative**: 2,500 tokens (leaves ~1,600 for system prompts + response)
- **Moderate**: 3,000 tokens (leaves ~1,100 for system prompts + response)
- **Aggressive**: 3,500 tokens (leaves ~600 for system prompts + response)

### GPT-4 (8,192 token limit)
- **Conservative**: 5,000 tokens (leaves ~3,200 for system prompts + response)
- **Moderate**: 6,000 tokens (leaves ~2,200 for system prompts + response)
- **Aggressive**: 7,000 tokens (leaves ~1,200 for system prompts + response)

### GPT-4 Turbo (128,000 token limit)
- **Conservative**: 100,000 tokens (leaves ~28,000 for system prompts + response)
- **Moderate**: 110,000 tokens (leaves ~18,000 for system prompts + response)
- **Aggressive**: 120,000 tokens (leaves ~8,000 for system prompts + response)

**Note**: These recommendations assume typical system prompt sizes (200-800 tokens) and desired response lengths (500-2,000 tokens). Adjust based on your specific use case.
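Deriving a limit is simple subtraction from the model's context window. A hypothetical helper, using the upper ends of the reserve ranges noted above as defaults:

```python
def input_safety_limit(context_window: int,
                       system_prompt_reserve: int = 800,
                       response_reserve: int = 2000) -> int:
    """Largest input budget that still leaves room for the system
    prompt and the model's response within the context window."""
    budget = context_window - system_prompt_reserve - response_reserve
    if budget <= 0:
        raise ValueError("reserves exceed the context window")
    return budget
```

For example, `input_safety_limit(4096)` yields 1,296 tokens, a stricter budget than the "Conservative" 2,500 above because it reserves the maximum assumed system prompt and response sizes.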

## Error Handling

The integration handles various error scenarios:

- **Invalid Input**: Returns clear error messages for missing or invalid text
- **Empty Text**: Returns 0 tokens for empty strings
- **Unsupported Model**: Returns error for models not supported by tiktoken
- **Tokenization Errors**: Handles tiktoken library errors gracefully
- **Safety Limit Warnings**: Logs warnings when token counts exceed safety limits

## Benefits

- **Cost Optimization**: Estimate token costs before making API calls
- **Rate Limiting**: Manage token budgets and prevent overages with safety limits
- **Workflow Logic**: Enable conditional logic based on token counts and safety thresholds
- **Transparency**: Provide visibility into token usage patterns
- **Proactive Monitoring**: Set safety limits to catch potential token overages early
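The "Workflow Logic" point above amounts to branching on the action's output. A hypothetical routing helper (actual Botpress flow branching is configured in the studio, not in code):

```python
def gate_llm_call(result: dict, on_overage: str = "summarize") -> str:
    """Route based on estimateTokens output: proceed when under budget,
    otherwise fall back (e.g. truncate, summarize, or hand off)."""
    if result.get("limitExceeded"):
        return on_overage
    return "proceed"
```

Because `limitExceeded` is absent when no safety limit was set, the `.get()` lookup treats that case the same as being under budget.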

