Working with Models

The comprehensive guide to working with AI models in BoxLang, covering creation, configuration, pipeline integration, parameters, options, and advanced usage.

Learn how to use AI models as pipeline-compatible runnables: each model wraps an AI service provider so it can be composed directly into a pipeline.

🚀 Creating Models

The `aiModel()` BIF creates pipeline-compatible AI models.

🏗️ Model Architecture

Basic Creation
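
A minimal sketch of creating a model. The model names and the struct-style second argument are illustrative assumptions, not authoritative; check the `aiModel()` BIF reference for the exact signature:

```
// Create a model using the module's default provider
model = aiModel( "gpt-4o-mini" )

// Create a model with creation-time parameters (assumed struct argument)
creative = aiModel( "gpt-4o-mini", { temperature: 0.9 } )
```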

Model Configuration
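
A hedged configuration sketch, combining the common parameters and the `.withName()` method described elsewhere in this guide (exact parameter names vary by provider):

```
summarizer = aiModel(
    "gpt-4o-mini",
    {
        temperature : 0.2,  // low temperature for consistent summaries
        max_tokens  : 500   // cap output length and cost
    }
).withName( "summarizer" )  // a name makes pipeline debugging easier
```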

🔗 Models in Pipelines

🔄 Pipeline Integration Flow

Basic Pipeline
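
As a sketch of pipeline use, a model behaves as a runnable step: prompt in, completion out. The `invoke()` call shown here is an assumption; substitute your pipeline runner's actual API:

```
summarizer = aiModel( "gpt-4o-mini", { temperature: 0.3 } )

article = "BoxLang is a dynamic JVM language..."
summary = summarizer.invoke( "Summarize in one sentence: #article#" )
```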

Using Default Model

Multiple Models in Sequence

⚙️ Model Parameters

Common Parameters

Provider-Specific Parameters

OpenAI:

Claude:

Ollama:

Runtime Parameter Override
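
A sketch of overriding creation-time parameters per call; the params argument on invocation (and `invoke()` itself) is an assumption:

```
// Default temperature set at creation...
model = aiModel( "gpt-4o-mini", { temperature: 0.7 } )

// ...overridden for a single call that needs deterministic output
dates = model.invoke( "List only the dates in this text: #text#", { temperature: 0 } )
```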

🎛️ Model Options

Models support the `options` parameter for controlling runtime behavior.

Setting Default Options
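
A sketch using option keys from the list below; the third positional `options` argument is an assumption about the `aiModel()` signature:

```
model = aiModel(
    "gpt-4o-mini",
    { temperature: 0.5 },
    {
        returnFormat : "single",  // return just the message content
        timeout      : 30         // fail the request after 30 seconds
    }
)
```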

Runtime Options Override

Convenience Methods

Available Options

  • returnFormat:string - "raw" (default), "single", or "all"

  • timeout:numeric - Request timeout in seconds

  • logRequest:boolean - Log requests to ai.log

  • logRequestToConsole:boolean - Log requests to console

  • logResponse:boolean - Log responses to ai.log

  • logResponseToConsole:boolean - Log responses to console

  • provider:string - Override AI provider

  • apiKey:string - Override API key

Debugging with Options
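
For example, a model wired for full request/response logging, using the logging keys from the list above:

```
debugModel = aiModel(
    "gpt-4o-mini",
    {},
    {
        logRequest           : true,  // write outgoing requests to ai.log
        logResponse          : true,  // write responses to ai.log
        logRequestToConsole  : true,  // mirror requests to the console
        logResponseToConsole : true   // mirror responses to the console
    }
)
```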

Model Patterns

Task-Specific Models

Model Factory

Model Ensemble

Advanced Usage

Conditional Model Selection

Model with Fallback
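
One possible shape for a fallback, assuming an `invoke()`-style call and the `provider` override option; treat both as placeholders for your version's actual API:

```
function askWithFallback( required string prompt ){
    try {
        // Primary: hosted model
        return aiModel( "gpt-4o-mini" ).invoke( arguments.prompt );
    } catch ( any e ) {
        // Fallback: local Ollama model if the primary fails or times out
        return aiModel( "llama3", {}, { provider: "ollama" } ).invoke( arguments.prompt );
    }
}
```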

Cost-Aware Model Selection

Model Introspection

Getting Model Information

Getting Complete Configuration

The `getConfig()` method returns a comprehensive view of the model's configuration:
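
A quick sketch, assuming `getConfig()` returns a struct you can dump or persist:

```
model = aiModel( "gpt-4o-mini", { temperature: 0.2 } ).withName( "extractor" )

config = model.getConfig()
writeDump( config )  // inspect name, provider, params, options

// Snapshot the configuration for later comparison or restoration
fileWrite( "model-config.json", jsonSerialize( config ) )
```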

Configuration Use Cases

Debugging and Logging:

Configuration Validation:

Model Comparison:

Saving/Restoring Configuration:

Pipeline Inspection

Binding Tools to Models

Models can have tools bound to them to enable function calling. Tools bound to a model are automatically available whenever the model runs.

Basic Tool Binding
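
A sketch of binding one tool. The `aiTool()` signature shown here (name, description, closure) is an assumption based on common tool-definition patterns; only `bindTools()` is named by this guide:

```
// Define a tool the model may call
weatherTool = aiTool(
    "get_weather",
    "Returns the current weather for a city",
    ( city ) => "Sunny and 24C in #city#"
)

// Bind it so every execution of this model can use it
model = aiModel( "gpt-4o-mini" ).bindTools( weatherTool )
```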

Multiple Tools

Adding Tools Incrementally

Tools in Agents

Models with bound tools work seamlessly in agents.

Runtime Tools vs Bound Tools

Bound Tools (via `bindTools()`/`addTools()`):

  • Permanently attached to the model

  • Available in all executions

  • Ideal for reusable models

  • Used automatically in agents

Runtime Tools (via `params.tools`):

  • Passed per execution

  • Merged with bound tools

  • Useful for context-specific needs
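
The distinction can be sketched like this; the tool variables and the `invoke()` call are illustrative, while the `tools` key comes from `params.tools` above:

```
// Bound tool: travels with the model everywhere it is used
model = aiModel( "gpt-4o-mini" ).bindTools( searchTool )

// Runtime tool: merged with the bound tools for this call only
result = model.invoke(
    "What's on my calendar today?",
    { tools: [ calendarTool ] }
)
```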

Tool Execution Flow

When a model has tools:

  1. Request Sent: Model receives message with available tools

  2. AI Decides: Model determines if tool call is needed

  3. Tool Invoked: Service executes the tool function

  4. Result Returned: Tool result sent back to model

  5. Final Response: Model generates answer using tool result

All tool execution is handled automatically by the service layer.

Best Practices

  1. Name Your Models: Use `.withName()` for debugging

  2. Set Appropriate Temperature: Match creativity to task

  3. Limit Max Tokens: Control costs and response time

  4. Use Local Models: For privacy and development

  5. Cache Model Instances: Reuse configured models

  6. Handle Errors: Models can timeout or fail

  7. Monitor Costs: Track usage with raw responses

Examples

Document Processor

Multi-Model Validator

📚 Models with Document Loaders & RAG

Models can be integrated with document loaders and vector memory to create powerful RAG (Retrieval-Augmented Generation) systems.

🔄 Model RAG Flow

Basic RAG with Model
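
A hedged end-to-end sketch of the load, retrieve, generate flow. The loader and vector-memory names (`aiDocumentLoader()`, `aiVectorMemory()`) and their methods are placeholders; see the document loader and memory chapters for the real API:

```
// 1. Load and index documents
docs   = aiDocumentLoader( "file", { path: "docs/manual.pdf" } ).load()
memory = aiVectorMemory()
memory.addDocuments( docs )

// 2. Retrieve context relevant to the question
question = "How do I configure request timeouts?"
context  = memory.search( question, 3 )  // top-3 matching chunks

// 3. Ground the model's answer in the retrieved context
answer = aiModel( "gpt-4o-mini" ).invoke(
    "Answer using only this context: #context.toString()# Question: #question#"
)
```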

Multi-Source RAG Pipeline

Conditional Document Loading

Hybrid Search RAG

Combine keyword search with semantic search.

🔄 Models with Transformers

Models work seamlessly with transformers for data processing pipelines.

Output Transformation
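
For example, post-processing steps attached after the model; the `.to()` composition method and `invoke()` call are assumptions about the pipeline API:

```
// Transform the model's raw text output before it leaves the pipeline
pipeline = aiModel( "gpt-4o-mini", { temperature: 0 } )
    .to( ( text ) => text.trim() )                          // strip whitespace
    .to( ( text ) => { return { summary: text }; } )        // wrap in a struct

result = pipeline.invoke( "Summarize: #report#" )
```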

Input Processing

Multi-Stage Processing

Document Processing Pipeline

Structured Output with Transformers

Next Steps
