aiTokens

Estimate token count for AI processing. Useful for staying within model limits, estimating costs, and optimizing prompt size before sending to AI providers.

Syntax

```
aiTokens( text, options )
```

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| text | any | Yes | - | String or array of strings to count tokens for |
| options | struct | No | {} | Configuration struct for token estimation |

Options Structure

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| method | string | "characters" | Estimation method: "characters" or "words" |
| detailed | boolean | false | Return detailed statistics instead of just the count |

Estimation Methods

  • characters: Fast estimation based on character count (~4 chars = 1 token)

  • words: Slightly more accurate based on word count (~1.3 words = 1 token)

Returns


  • Numeric: Token count estimate (when detailed: false)

  • Struct: Detailed statistics (when detailed: true) with keys:

    • tokens - Estimated token count

    • characters - Total character count

    • words - Total word count

  • chunks - Number of text chunks (when input is an array)

    • method - Estimation method used

Examples

Basic Token Count
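
A minimal sketch using the documented defaults: pass a plain string and get back a numeric estimate. The figure in the comment is approximate, since the result depends on the implementation.

```
tokens = aiTokens( "Hello, how are you today?" );
println( "Estimated tokens: #tokens#" ); // 25 characters / ~4 per token ≈ 6
```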

Longer Text
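
Longer input works the same way; the estimate simply scales with length. An illustrative example:

```
text = "Token estimation lets you plan prompts before you send them. "
    & "It helps you stay inside model context windows, forecast API costs, "
    & "and decide when a document needs to be split into smaller chunks.";

println( "Estimated tokens: #aiTokens( text )#" );
```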

Character-Based Estimation (Default)
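
The default method made explicit. Character-based estimation divides the character count by roughly four:

```
text = "The quick brown fox jumps over the lazy dog.";

// "characters" is already the default; shown explicitly here
tokens = aiTokens( text, { method: "characters" } );
println( tokens ); // 44 characters / ~4 per token ≈ 11
```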

Word-Based Estimation
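
The word-based method trades a little speed for slightly better accuracy, using the documented ~1.3-words-per-token rule:

```
text = "The quick brown fox jumps over the lazy dog.";

tokens = aiTokens( text, { method: "words" } );
println( tokens ); // 9 words / ~1.3 words per token ≈ 7
```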

Array of Text Chunks
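
Per the parameter table, text may also be an array of strings. This sketch assumes the returned count covers all chunks combined:

```
chunks = [
    "First section of the document.",
    "Second section with more detail.",
    "Final summary paragraph."
];

// One estimate covering every chunk combined
println( "Total estimated tokens: #aiTokens( chunks )#" );
```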

Detailed Statistics
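
With detailed: true, the return value is a struct carrying the keys listed in the Returns section above:

```
stats = aiTokens( "Estimate tokens before sending prompts to a provider.", { detailed: true } );

println( "Tokens: #stats.tokens#" );
println( "Characters: #stats.characters#" );
println( "Words: #stats.words#" );
println( "Method: #stats.method#" );
```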

Check Before Sending
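
A pre-send guard, sketched with a hypothetical buildPrompt() helper standing in for your own prompt assembly:

```
prompt = buildPrompt(); // hypothetical helper that assembles your final prompt text
maxTokens = 8000; // example limit; use your model's actual context window

if( aiTokens( prompt ) <= maxTokens ) {
    // within limits: safe to send
} else {
    // over budget: trim, summarize, or chunk before calling the provider
}
```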

Cost Estimation
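
A rough cost estimate. The per-million-token rate below is purely illustrative; substitute your provider's actual pricing:

```
prompt = repeatString( "Summarize the following paragraph. ", 100 );
tokens = aiTokens( prompt );

costPerMillion = 3.00; // illustrative input rate in USD, not a real price
estimatedCost = ( tokens / 1000000 ) * costPerMillion;

println( "~#tokens# tokens, estimated input cost: $#estimatedCost#" );
```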

Chunking Decision
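
Deciding whether to split a document before sending. The aiChunk() call here is an assumption: its signature isn't documented on this page, so check its own reference for the real options:

```
document = repeatString( "Long report text. ", 30000 ); // stand-in for real content
contextLimit = 100000; // e.g. a 100k-token model

if( aiTokens( document ) > contextLimit ) {
    // Too large for one call: split it first
    // (assumed usage; see the aiChunk() docs for its actual signature)
    chunks = aiChunk( document );
} else {
    chunks = [ document ];
}
```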

Batch Processing
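
Grouping documents into batches that stay under a per-request budget. processBatch() is a hypothetical stand-in for whatever sends the accumulated batch:

```
documents = [
    "First document text ...",
    "Second document text ...",
    "Third document text ..."
];
batchLimit = 8000; // example per-request budget
batch = [];
batchTokens = 0;

for( doc in documents ) {
    docTokens = aiTokens( doc );
    // Flush the current batch when adding this document would overflow it
    if( batchTokens + docTokens > batchLimit && batch.len() ) {
        processBatch( batch ); // hypothetical: send the accumulated batch
        batch = [];
        batchTokens = 0;
    }
    batch.append( doc );
    batchTokens += docTokens;
}
if( batch.len() ) {
    processBatch( batch );
}
```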

Optimize Prompt Size
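
One way to trim a prompt: drop optional few-shot examples from the end until the whole prompt fits a budget. A sketch, assuming the examples truly are optional:

```
systemPrompt = "You are a concise technical assistant.";
examples = [
    "Example 1: input/output pair ...",
    "Example 2: input/output pair ...",
    "Example 3: input/output pair ..."
];
budget = 4000; // example token budget for the whole prompt

// Remove trailing examples until the combined prompt fits
while( examples.len() && aiTokens( systemPrompt & " " & examples.toList( " " ) ) > budget ) {
    examples.deleteAt( examples.len() );
}
```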

Compare Methods
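
Running both methods over the same text to see how far apart they land:

```
text = repeatString( "Estimate, compare, decide. ", 200 );

byChars = aiTokens( text, { method: "characters" } );
byWords = aiTokens( text, { method: "words" } );

println( "characters method: #byChars#" );
println( "words method: #byWords#" );
```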

Message Token Count
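
Counting tokens across a conversation. The role/content message shape below is a common chat convention, not something aiTokens mandates; the mapped content strings are passed as a plain array:

```
messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this report in three bullet points." }
];

// Estimate across all message bodies; roles and formatting add a few tokens on top
total = aiTokens( messages.map( ( m ) => m.content ) );
println( "Conversation estimate: ~#total# tokens" );
```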

Dynamic Context Management
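
Keeping only the most recent history that fits a context budget, walking newest-first. A sketch using the same message shape as above:

```
contextBudget = 4000; // example budget reserved for history
history = [
    { role: "user", content: "First question ..." },
    { role: "assistant", content: "First answer ..." },
    { role: "user", content: "Follow-up question ..." }
];

kept = [];
used = 0;
// Walk newest-first, keeping messages while they still fit the budget
for( i = history.len(); i >= 1; i-- ) {
    msgTokens = aiTokens( history[ i ].content );
    if( used + msgTokens > contextBudget ) break;
    kept.prepend( history[ i ] );
    used += msgTokens;
}
```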

Pre-Flight Check
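
A hypothetical preflight() helper that fails fast, leaving headroom for estimation error and the model's response:

```
function preflight( required string prompt, numeric limit = 8000 ) {
    var estimate = aiTokens( arguments.prompt );
    // Keep ~20% headroom for estimation error and the response tokens
    if( estimate > arguments.limit * 0.8 ) {
        throw( message = "Prompt too large: ~#estimate# tokens against a #arguments.limit# token limit" );
    }
    return estimate;
}

preflight( "Short prompt, passes easily." );
```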

Budget Management
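
A sketch of tracking cumulative usage against an allowance. The figures are illustrative, and a real tracker would persist usedThisMonth somewhere durable:

```
monthlyBudget = 5000000; // illustrative monthly allowance
usedThisMonth = 4200000; // in practice, load this from your own usage store

request = "Generate a quarterly summary from these notes ...";
needed = aiTokens( request );

if( usedThisMonth + needed > monthlyBudget ) {
    println( "Token budget exceeded; deferring request (~#needed# tokens needed)" );
} else {
    usedThisMonth += needed;
    // proceed with the AI call and record actual usage afterwards
}
```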

Notes

  • Fast Estimation: Not exact but close enough for most use cases

  • 📏 Model Limits: Check provider limits (GPT-4: 8k-128k, Claude: 100k-200k)

  • 💰 Cost Planning: Estimate API costs before making calls

  • 🔍 Optimization: Use to optimize prompt size and reduce costs

  • 📊 Monitoring: Track token usage for budget management

  • ⚠️ Approximation: Estimates may vary ±10-20% from actual token count

  • 🎯 Rule of Thumb: ~4 characters or ~1.3 words ≈ 1 token (English)

Best Practices

  • Check before sending - Verify prompts fit within model limits

  • Estimate costs - Calculate approximate API costs before calling

  • Use for optimization - Trim unnecessary content to save tokens

  • Monitor usage - Track token consumption for budget management

  • Chunk large content - Use with aiChunk() for documents over limits

  • Don't rely on exact counts - Estimates are approximations; allow a buffer

  • Don't forget response tokens - Model outputs also count toward limits

  • Don't ignore model limits - Each model has specific token limits
