Main Components
Main components for building AI agents and pipelines in BoxLang
Welcome to the core building blocks of BoxLang AI. This section covers the essential components you need to build sophisticated AI agents and composable pipelines.
Overview
BoxLang AI is built on a runnable pipeline architecture that allows you to:
Chain operations - Connect AI models, messages, transforms, and agents
Reuse workflows - Define once, execute with different inputs
Compose freely - Mix and match components to create complex flows
Stay flexible - Swap providers, add steps, or modify behavior without refactoring
Think of these components as LEGO blocks - each piece has a specific purpose, but the real power comes from how you combine them.
Architecture Overview
Pipeline Execution Flow
Core Components
Below is a detailed overview of each component. Each has a specific role in the ecosystem, and they work together seamlessly.
AI Models
AI provider integrations wrapped as runnable pipeline components.
What it does:
Wraps OpenAI, Claude, Gemini, Ollama, and other providers
Makes them composable in pipelines
Provides consistent interface across all providers
Quick example:
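A minimal sketch; the provider name, the ${} placeholder syntax, and the run() call are assumptions based on this guide's descriptions, so check the model documentation for exact signatures:

```boxlang
// Wrap a provider as a runnable pipeline component
model = aiModel( "openai" )

// Swapping "openai" for "claude", "gemini", or "ollama" leaves the rest unchanged
answer = aiMessage( "Summarize in one sentence: ${text}" )
    .to( model )
    .singleMessage()
    .run( { text: "BoxLang is a dynamic language for the JVM." } )
```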
Use when: You need direct model control, want to swap providers, or build custom workflows.
Messages & Templates
Reusable message builders with dynamic placeholders and role management.
What it does:
Builds conversation messages (system, user, assistant)
Supports dynamic placeholders for variable injection
Handles multimodal content (images, audio, video, documents)
Creates reusable prompt templates
Quick example:
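A sketch of a reusable template; the builder methods and the ${} placeholder syntax are assumptions drawn from the descriptions above:

```boxlang
// Build once: system + user roles with placeholders
translator = aiMessage()
    .system( "You are a concise translator." )
    .user( "Translate '${phrase}' into ${language}." )

// Reuse the same pipeline with different bindings at run time
pipeline = translator.toDefaultModel().singleMessage()
spanish  = pipeline.run( { phrase: "good morning", language: "Spanish" } )
french   = pipeline.run( { phrase: "good morning", language: "French" } )
```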
Use when: You have repeated prompts, need dynamic content, or want organized message management.
→ Full Messages Documentation
AI Agents
Autonomous AI entities with memory, tools, and reasoning capabilities.
What it does:
Maintains conversation context across multiple turns
Automatically decides when to use tools
Manages multiple memory strategies (windowed, summary, session, file)
Provides autonomous reasoning and planning
Quick example:
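Illustrative only; aiAgent()'s actual arguments and the run() call are assumptions, documented properly in the Agents section:

```boxlang
// An agent keeps context between calls, unlike a bare model
agent = aiAgent( "Use a friendly tone and remember what the user tells you." )

agent.run( "My name is Ana and I prefer short answers." )
reply = agent.run( "What's my name?" )  // memory supplies the earlier turn
```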
Use when: You need context-aware conversations, autonomous tool use, or complex multi-turn interactions.
Memory Systems
Context management strategies for maintaining conversation history.
What it does:
Windowed memory - Keep recent N messages
Summary memory - Compress old context
Session memory - Persist across application restarts
File memory - Store conversations on disk
Vector memory - Semantic similarity search
Quick example:
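A sketch, assuming the memory strategy is chosen when the agent is built; the withMemory() name is hypothetical, so see the memory documentation for the real wiring:

```boxlang
// Windowed memory: keep only the last N messages in context
chatbot = aiAgent( "You are a support chatbot." )
    .withMemory( "windowed", { maxMessages: 20 } )

// Summary memory instead compresses older turns into a running summary,
// trading some fidelity for a much smaller context
```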
Use when: Building chatbots, maintaining context, or managing long conversations.
Transformers
Data processing and transformation steps in pipelines.
What it does:
Transform AI responses into desired formats
Extract specific data from responses
Chain multiple transformations
Apply custom logic between pipeline steps
Quick example:
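For example (run() is an assumption; the closure receives the previous step's output):

```boxlang
// Turn the model's newline-separated answer into a BoxLang array
colors = aiMessage( "List three colors, one per line" )
    .toDefaultModel()
    .singleMessage()
    .transform( ( text ) => text.listToArray( chr( 10 ) ) )
    .run( {} )
```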
Use when: You need data processing, format conversion, or custom business logic in workflows.
→ Full Transformers Documentation
AI Tools
Function calling that lets the AI access real-time data and external systems.
What it does:
Define functions that AI can call
Access databases, APIs, and external services
Provide real-time data to AI models
Enable AI to perform actions in your system
Quick example:
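A sketch; both signatures here (aiTool() and passing tools to aiAgent()) are assumptions, since the guide only states that you define functions the AI can call:

```boxlang
// Expose one of your functions as a callable tool
weather = aiTool(
    "getWeather",
    "Returns current weather for a city",
    ( city ) => {
        // Query your own API or database here; static data for illustration
        return { city: city, tempC: 21, sky: "clear" }
    }
)

// The agent decides on its own when to invoke the tool
agent  = aiAgent( "Answer weather questions.", [ weather ] )
answer = agent.run( "Is it warm in Lisbon right now?" )
```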
Use when: AI needs access to real-time data, external APIs, databases, or your own functions.
Component Relationships
When to Use Each Component
| Need | Use | Why |
| --- | --- | --- |
| Simple Q&A | aiChat() function | Fastest for one-off queries |
| Reusable prompts | aiMessage() | Dynamic placeholders, template reuse |
| Direct model control | aiModel() | Swap providers, configure parameters |
| Complex workflows | Pipeline with transforms | Chain multiple steps |
| Context-aware chat | aiAgent() with memory | Maintains conversation history |
| Real-time data access | aiAgent() with tools | AI can call your functions |
| Long conversations | Agent with summary memory | Compresses old context |
| Structured data | Structured output + JSON format | Extract typed data |
Structured Output
Extract structured data from AI responses into classes, structs, or arrays.
What it does:
Define schemas for AI output
Populate BoxLang classes automatically
Extract arrays of data
Validate and type-check responses
Quick example:
Use when: Extracting data from text, generating forms, or parsing documents into structured formats.
→ Full Structured Output Documentation
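A sketch of the idea using the json return format; the schema-to-class mapping described above has its own syntax covered in the full documentation, and run() plus the ${} placeholder are assumptions:

```boxlang
// Ask for a fixed JSON shape, then let asJson() parse it into a struct
invoice = aiMessage( 'Extract { "vendor", "total", "dueDate" } as JSON from: ${text}' )
    .toDefaultModel()
    .asJson()
    .run( { text: "Invoice from Acme Co for $120, due 2025-01-31" } )

// invoice.vendor, invoice.total, and invoice.dueDate are now plain struct keys
```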
Streaming
Real-time response streaming for better user experience.
What it does:
Stream AI responses token-by-token
Works with models, agents, and pipelines
Provides callbacks for real-time updates
Enables responsive UIs
Quick example:
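Illustrative; the callback-style stream() call is an assumption based on "provides callbacks for real-time updates" above:

```boxlang
// Print tokens as they arrive instead of waiting for the full response
aiMessage( "Write a haiku about rivers" )
    .toDefaultModel()
    .stream( ( chunk ) => {
        print( chunk )  // push to a websocket or SSE stream in a real UI
    } )
```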
Use when: Building interactive UIs, chatbots, or any application where real-time feedback matters.
→ Full Streaming Documentation
Understanding Pipelines
What are Pipelines?
Pipelines are sequences of runnables - components that process data and pass results to the next step. Think of them as assembly lines for AI processing.
Basic Pipeline Structure
Each step:
Receives input from the previous step
Processes the data
Passes output to the next step
Why Use Pipelines?
Composability: Chain multiple operations together
Reusability: Define once, use with different inputs
Immutability: Each operation creates a new pipeline
Flexibility: Mix models, transforms, and custom logic
Runnable Interface
Runnables
All pipeline components implement the IAiRunnable interface:
Built-in Runnables:
AiMessage - Message templates
AiModel - AI providers wrapped for pipelines
AiAgent - Autonomous agents with memory and tools
AiTransformRunnable - Data transformers
AiRunnableSequence - Pipeline chains
Input and Output
Input types:
Empty struct {} - No input
Struct with bindings { key: "value" }
Messages array [{ role: "user", content: "..." }]
Previous step output
Output types:
Messages array
AI response struct
Transformed data (string, struct, array, etc.)
Parameters
Runtime parameters merge with stored defaults:
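A sketch of the merge behaviour; where defaults are stored and how overrides are passed are assumptions, so check the model and pipeline documentation:

```boxlang
// Stored default: low temperature for deterministic output
model = aiModel( "openai", { temperature: 0.2 } )

// A per-call params struct merges over the stored defaults,
// so passing { temperature: 0.9 } at run time wins for that call only
```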
Options
Options control runtime behavior (returnFormat, timeout, logging, etc.):
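All of the keys below appear in the Available Options list later on this page:

```boxlang
// Options tune execution behaviour, not the prompt itself
pipeline = aiMessage( "List three fruits" )
    .toDefaultModel()
    .withOptions( {
        returnFormat: "single",  // extract just the content string
        timeout:      60,        // seconds
        logRequest:   true       // write requests to ai.log
    } )
```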
Building Your First Pipeline
Step 1: Create a Message Template
Step 2: Add an AI Model
Step 3: Add a Transformer
Step 4: Run It
Complete Example
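Putting the four steps together; run() and the ${} placeholder syntax are assumptions based on this guide:

```boxlang
// Step 1: a reusable message template
prompt = aiMessage( "Summarize in one sentence: ${text}" )

// Steps 2 and 3: pipe into the default model, then post-process
pipeline = prompt
    .toDefaultModel()
    .singleMessage()
    .transform( ( s ) => s.trim() )

// Step 4: execute with bindings
summary = pipeline.run( { text: "Pipelines chain messages, models, and transforms." } )
```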
Chaining Operations
The .to() Method
Connects runnables in sequence:
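For example:

```boxlang
template = aiMessage( "Classify the sentiment of: ${text}" )

// .to() returns a new pipeline; the template itself is untouched and reusable
pipeline = template.to( aiModel( "openai" ) )
```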
Helper Methods
.toDefaultModel() - Connect to default model:
.transform() - Add a transformer:
Pipeline Patterns
Linear Pipeline
Multi-Step Processing
Branching Logic
Reusable Components
Working with Pipeline Results
Running Pipelines
Options and Return Formats
By default, pipelines return raw responses from the AI provider (full API response struct). This gives you maximum flexibility to access all response data including metadata, usage stats, and multiple choices.
Five format options:
raw (default) - Full API response with all metadata
single - Extract just the content string from the first message
all - Array of all choice messages
json - Automatically parse JSON response into struct/array
xml - Automatically parse XML response into XML object
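Side by side, using the returnFormat key described on this page (run() is an assumption):

```boxlang
base = aiMessage( "Name three planets as a JSON array" ).toDefaultModel()

text    = base.withOptions( { returnFormat: "single" } ).run( {} )  // one content string
planets = base.withOptions( { returnFormat: "json" } ).run( {} )    // parsed array
```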
Using .withOptions()
Convenience Methods
.singleMessage() - Extract content string (most common):
.allMessages() - Get array of message objects:
.rawResponse() - Explicit raw format (default behavior):
.asJson() - Parse JSON response automatically:
.asXml() - Parse XML response automatically:
Format Comparison
Available Options
The options struct supports these properties:
returnFormat: string - Response format: "raw" (default), "single", "all", "json", or "xml"
timeout: numeric - Request timeout in seconds (default: 30)
logRequest: boolean - Log requests to ai.log (default: false)
logRequestToConsole: boolean - Log requests to console (default: false)
logResponse: boolean - Log responses to ai.log (default: false)
logResponseToConsole: boolean - Log responses to console (default: false)
provider: string - Override AI provider
apiKey: string - Override API key
Options Propagation
Options set via withOptions() propagate through pipeline chains:
Runtime options override default options:
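For illustration:

```boxlang
// Options attached once propagate through the whole chain
pipeline = aiMessage( "Ping" )
    .toDefaultModel()
    .withOptions( { timeout: 30, logRequest: true } )

// A later withOptions() overrides the stored value (precedence as described above)
slowPipeline = pipeline.withOptions( { timeout: 120 } )
```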
When to Use Each Format
Use raw (default) when:
Building reusable pipeline components
Need access to metadata (model, usage, tokens)
Handling multiple choice responses
Debugging API responses
Use singleMessage() when:
Simple text extraction is all you need
Building user-facing features
Migrating from aiChat() to pipelines
Chaining with transformers
Use allMessages() when:
Processing multiple response choices
Iterating over message objects
Extracting role/content pairs
Building conversation logs
Use asJson() when:
AI generates structured data (objects, arrays)
Need automatic JSON parsing
Building APIs or data extraction
Eliminating manual deserializeJSON() calls
Use asXml() when:
AI generates XML documents
Working with legacy XML systems
RSS/ATOM feed generation
Configuration file creation
Format vs Transform
Return formats are built-in extraction, transforms are custom logic:
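The distinction in code (ucase() here stands in for any custom logic; run() is an assumption):

```boxlang
pipeline = aiMessage( "Suggest a headline about ${topic}" ).toDefaultModel()

// Built-in extraction: a return format, no custom code
headline = pipeline.singleMessage().run( { topic: "space" } )

// Custom logic: a transform step layered on top of the extraction
shouted = pipeline.singleMessage()
    .transform( ( s ) => s.ucase() )
    .run( { topic: "space" } )
```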
JSON and XML in Pipelines
JSON Data Pipelines
Create pipelines that automatically parse JSON responses:
Complex nested structures:
Multi-step JSON processing:
JSON for data extraction:
Reusable JSON templates:
XML Document Pipelines
Generate and parse XML documents:
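A sketch (run() and the ${} placeholder are assumptions); asXml() hands back a parsed XML object, so the standard XML functions apply:

```boxlang
// Generate XML, then parse it into an XML object with asXml()
doc = aiMessage( "Produce an RSS item titled '${title}' as valid XML" )
    .toDefaultModel()
    .asXml()
    .run( { title: "BoxLang AI ships pipelines" } )

// doc is an XML object, e.g. xmlSearch( doc, "//item/title" )
```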
RSS feed generation:
Multi-step XML processing:
XML to JSON conversion pipeline:
Combining Formats in Workflows
Inspecting Pipelines
Debugging Pipelines
Advanced Features
Storing Bindings
Parameter Management
Naming for Organization
Error Handling
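One hedged pattern: wrap execution so provider or network failures degrade gracefully (the exception surface is an assumption; any specific exception types are documented by each provider):

```boxlang
try {
    answer = aiMessage( "Ping" ).toDefaultModel().singleMessage().run( {} )
} catch ( any e ) {
    // Log and fall back rather than breaking the user flow
    writeLog( text = e.message, file = "ai" )
    answer = "Sorry, the AI service is unavailable right now."
}
```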
Performance Tips
Reuse Pipelines: Create once, run many times
Cache Results: Cache expensive pipeline outputs
Use Appropriate Models: Match model capabilities to task complexity
Limit Max Tokens: Control costs and response times
Stream Long Responses: Better UX for detailed outputs
Next Steps
AI Agents - Autonomous agents with memory and tools
Working with Models - AI models in pipelines
Message Templates - Advanced templating
Transformers - Data transformation
Pipeline Streaming - Real-time processing