Working with Models
The comprehensive guide to working with AI models in BoxLang, covering creation, configuration, pipeline integration, parameters, options, and advanced usage.
Models wrap AI service providers (OpenAI, Claude, Ollama, and others) as pipeline-compatible runnables, so they drop directly into pipelines alongside templates, transformers, and memory.
🚀 Creating Models
The aiModel() BIF creates pipeline-compatible AI models.
🏗️ Model Architecture
Basic Creation
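A minimal creation sketch. Only `aiModel()` is named on this page; the exact argument list and provider identifiers are assumptions:

```boxlang
// Create a model for a named provider (provider keys are assumed here)
model = aiModel( "openai" )

// Provider plus an explicit API key (second argument is an assumption;
// the key can also be supplied via the apiKey option shown below)
claude = aiModel( "claude", "your-api-key" )
```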
Model Configuration
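A configuration sketch. `.withName()` appears later on this page; `withParams()` is an assumed fluent setter for generation parameters:

```boxlang
model = aiModel( "openai" )
    .withName( "summarizer" )   // named for easier debugging
    .withParams( {
        temperature : 0.2,      // low creativity for factual summaries
        max_tokens  : 500       // cap response length
    } )
```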
🔗 Models in Pipelines
🔄 Pipeline Integration Flow
Basic Pipeline
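A hypothetical pipeline sketch. `aiMessage()` and the `.to()` chaining style are assumptions about the pipeline API, not confirmed by this page:

```boxlang
// A two-step pipeline: a prompt template feeding a model
pipeline = aiMessage( "Summarize this text: ${text}" )
    .to( aiModel( "openai" ) )

// invoke() with bindings for the template placeholder (signature assumed)
result = pipeline.invoke( { text: "BoxLang is a dynamic JVM language." } )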
Using Default Model
Multiple Models in Sequence
⚙️ Model Parameters
Common Parameters
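A sketch of the cross-provider parameters most models accept; the names follow the OpenAI-style convention and `withParams()` is an assumed setter:

```boxlang
model = aiModel( "openai" ).withParams( {
    temperature : 0.7,   // 0 = near-deterministic, higher = more varied
    max_tokens  : 1024,  // upper bound on generated tokens (cost/latency control)
    top_p       : 0.9    // nucleus sampling cutoff
} )
```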
Provider-Specific Parameters
OpenAI:
Claude:
Ollama:
Runtime Parameter Override
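A sketch of per-call overrides, assuming `invoke()` accepts a params struct that is merged over the model's defaults:

```boxlang
// Same model, different behavior per call
creative = model.invoke( "Write a product tagline", { temperature: 1.2 } )
precise  = model.invoke( "Extract the invoice date", { temperature: 0 } )
```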
🎛️ Model Options
Models support the options parameter for controlling runtime behavior.
Setting Default Options
Runtime Options Override
Convenience Methods
Available Options
returnFormat: string - "raw" (default), "single", or "all"
timeout: numeric - Request timeout in seconds
logRequest: boolean - Log requests to ai.log
logRequestToConsole: boolean - Log requests to console
logResponse: boolean - Log responses to ai.log
logResponseToConsole: boolean - Log responses to console
provider: string - Override AI provider
apiKey: string - Override API key
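The option names above come from this page; `withOptions()` as the setter is an assumption:

```boxlang
model = aiModel( "openai" ).withOptions( {
    returnFormat : "single",  // unwrap a single response instead of the raw struct
    timeout      : 30         // request timeout in seconds
} )
```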
Debugging with Options
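A debugging sketch using the logging options listed above (setter method assumed):

```boxlang
debugModel = aiModel( "openai" ).withOptions( {
    logRequest           : true,  // request payloads to ai.log
    logRequestToConsole  : true,
    logResponse          : true,  // response payloads to ai.log
    logResponseToConsole : true
} )
```

Turn these off in production; logged payloads can contain user data.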
Model Patterns
Task-Specific Models
Model Factory
Model Ensemble
Advanced Usage
Conditional Model Selection
Model with Fallback
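A fallback sketch using plain try/catch; `invoke()` and the provider keys are assumptions:

```boxlang
// Fall back to a local Ollama model if the hosted provider errors or times out
function askWithFallback( required string prompt ){
    try {
        return aiModel( "openai" ).invoke( arguments.prompt );
    } catch ( any e ) {
        return aiModel( "ollama" ).invoke( arguments.prompt );
    }
}
```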
Cost-Aware Model Selection
Model Introspection
Getting Model Information
Getting Complete Configuration
The getConfig() method returns a comprehensive view of the model's configuration:
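A small sketch; `getConfig()` is named on this page, but the exact keys of the returned struct are assumptions:

```boxlang
config = model.getConfig()
// Inspect what the model is actually configured with
println( config.keyList() )  // e.g. provider, name, params, options (keys may vary)
```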
Configuration Use Cases
Debugging and Logging:
Configuration Validation:
Model Comparison:
Saving/Restoring Configuration:
Pipeline Inspection
Binding Tools to Models
Models can have tools bound to them for function calling capabilities. Tools bound to a model are automatically available when the model is used.
Basic Tool Binding
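A binding sketch. `bindTools()` comes from this page; `aiTool()` and its signature are assumptions:

```boxlang
// Define a tool the model may call (name, description, handler)
weatherTool = aiTool( "getWeather", "Get current weather for a city", ( city ) => {
    return "Sunny in #city#";  // stand-in for a real lookup
} )

// Permanently attach the tool to the model
model = aiModel( "openai" ).bindTools( weatherTool )
```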
Multiple Tools
Adding Tools Incrementally
Tools in Agents
Models with bound tools work seamlessly in agents:
Runtime Tools vs Bound Tools
Bound Tools (via bindTools/addTools):
Permanently attached to the model
Available in all executions
Ideal for reusable models
Used automatically in agents
Runtime Tools (via params.tools):
Passed per execution
Merged with bound tools
Useful for context-specific needs
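A sketch contrasting the two, per the list above; `invoke()` accepting `params.tools` follows this page, the rest is assumed:

```boxlang
// Bound tool: travels with the model in every execution
model = aiModel( "openai" ).bindTools( weatherTool )

// Runtime tool: merged with bound tools for this call only
result = model.invoke( "Weather in Malaga, plus a trivia fact", {
    tools : [ triviaTool ]
} )
```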
Tool Execution Flow
When a model has tools:
Request Sent: Model receives message with available tools
AI Decides: Model determines if tool call is needed
Tool Invoked: Service executes the tool function
Result Returned: Tool result sent back to model
Final Response: Model generates answer using tool result
All tool execution is handled automatically by the service layer.
Best Practices
Name Your Models: Use .withName() for debugging
Set Appropriate Temperature: Match creativity to task
Limit Max Tokens: Control costs and response time
Use Local Models: For privacy and development
Cache Model Instances: Reuse configured models
Handle Errors: Models can timeout or fail
Monitor Costs: Track usage with raw responses
Examples
Document Processor
Multi-Model Validator
📚 Models with Document Loaders & RAG
Models can be integrated with document loaders and vector memory to create powerful RAG (Retrieval-Augmented Generation) systems.
🔄 Model RAG Flow
Basic RAG with Model
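A hypothetical RAG sketch. The `memory.search()` call and its result shape are assumptions about the vector-memory API:

```boxlang
// 1. Retrieve the top-3 semantically similar chunks for the question
matches = memory.search( question, 3 )
context = matches.map( ( m ) => m.content ).toList( chr( 10 ) )

// 2. Ground the model's answer in the retrieved context
answer = aiModel( "openai" ).invoke(
    "Answer using only this context:#chr( 10 )##context##chr( 10 )#Question: #question#"
)
```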
Multi-Source RAG Pipeline
Conditional Document Loading
Hybrid Search RAG
Combine keyword search with semantic search:
🔄 Models with Transformers
Models work seamlessly with transformers for data processing pipelines.
Output Transformation
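A transformation sketch; the `.to()` chaining of a closure as a transformer is an assumption about the pipeline API:

```boxlang
// Post-process model output before it leaves the pipeline
pipeline = aiModel( "openai" )
    .to( ( response ) => trim( ucase( response ) ) )  // normalize the text
```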
Input Processing
Multi-Stage Processing
Document Processing Pipeline
Structured Output with Transformers
Next Steps
Message Templates - Build dynamic prompts
Transformers - Process model outputs
Document Loaders - Load data from various sources
RAG Guide - Complete RAG workflow documentation
Vector Memory - Semantic search and embeddings
Pipeline Streaming - Real-time responses
Custom AI Providers - Integrate custom LLM services