Main Components

Main components for building AI agents and pipelines in BoxLang

Welcome to the core building blocks of BoxLang AI. This section covers the essential components you need to build sophisticated AI agents and composable pipelines.

πŸ“– Overview

BoxLang AI is built on a runnable pipeline architecture that allows you to:

  • πŸ”— Chain operations - Connect AI models, messages, transforms, and agents

  • ♻️ Reuse workflows - Define once, execute with different inputs

  • 🧩 Compose freely - Mix and match components to create complex flows

  • 🎯 Stay flexible - Swap providers, add steps, or modify behavior without refactoring

Think of these components as LEGO blocks - each piece has a specific purpose, but the real power comes from how you combine them.

πŸ—οΈ Architecture Overview

πŸ”„ Pipeline Execution Flow


🎯 Core Components

Below is a detailed overview of each component. Each has a specific role in the ecosystem, and they work together seamlessly.

🧠 AI Models

AI provider integrations wrapped as runnable pipeline components.

What it does:

  • Wraps OpenAI, Claude, Gemini, Ollama, and other providers

  • Makes them composable in pipelines

  • Provides consistent interface across all providers

Quick example:
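A minimal sketch of wrapping a provider and invoking it directly; the provider name and the `run()` message shape are illustrative, not guaranteed signatures:

```boxlang
// Wrap a provider as a runnable pipeline component
model = aiModel( "openai" );

// Invoke it directly with a messages array
result = model.run( [ { role: "user", content: "Summarize BoxLang in one sentence" } ] );
```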

Use when: You need direct model control, want to swap providers, or build custom workflows.

β†’ Full Models Documentation


πŸ’¬ AI Messages

Reusable message builders with dynamic placeholders and role management.

What it does:

  • Builds conversation messages (system, user, assistant)

  • Supports dynamic placeholders for variable injection

  • Handles multimodal content (images, audio, video, documents)

  • Creates reusable prompt templates

Quick example:
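A sketch of a reusable template; the builder method names and the `${}` placeholder syntax are assumptions — see the full Messages documentation for the exact API:

```boxlang
// Reusable prompt template with a dynamic placeholder
prompt = aiMessage()
	.system( "You are a concise translator" )
	.user( "Translate to French: ${text}" );

// Bind the placeholder at run time
messages = prompt.run( { text: "Good morning" } );
```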

Use when: You have repeated prompts, need dynamic content, or want organized message management.

β†’ Full Messages Documentation


πŸ€– AI Agents

Autonomous AI entities with memory, tools, and reasoning capabilities.

What it does:

  • Maintains conversation context across multiple turns

  • Automatically decides when to use tools

  • Manages multiple memory strategies (windowed, summary, session, file)

  • Provides autonomous reasoning and planning

Quick example:
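A sketch of a context-aware agent; the argument names are illustrative:

```boxlang
// Agent that keeps conversation context across turns
agent = aiAgent(
	name        : "support",
	instructions: "You are a friendly support assistant"
);

agent.run( "My package has not arrived yet." );
reply = agent.run( "What should I do next?" ); // the prior turn is remembered
```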

Use when: You need context-aware conversations, autonomous tool use, or complex multi-turn interactions.

β†’ Full Agents Documentation


πŸ’Ύ AI Memory

Context management strategies for maintaining conversation history.

What it does:

  • Windowed memory - Keep recent N messages

  • Summary memory - Compress old context

  • Session memory - Persist across application restarts

  • File memory - Store conversations on disk

  • Vector memory - Semantic similarity search

Quick example:
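A sketch of attaching a windowed memory strategy; the memory configuration shape is an assumption — check the full Memory documentation for the real structure:

```boxlang
// Windowed memory: keep only the most recent 10 messages
agent = aiAgent(
	name  : "chatbot",
	memory: { type: "windowed", maxMessages: 10 }
);
```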

Use when: Building chatbots, maintaining context, or managing long conversations.

β†’ Full Memory Documentation


πŸ”§ Transformers

Data processing and transformation steps in pipelines.

What it does:

  • Transform AI responses into desired formats

  • Extract specific data from responses

  • Chain multiple transformations

  • Apply custom logic between pipeline steps

Quick example:
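A sketch of a transformer step between pipeline stages; the `.uCase()` member call and the chaining order shown are illustrative:

```boxlang
// Add a custom processing step after the model responds
pipeline = aiMessage( "List three primary colors" )
	.toDefaultModel()
	.singleMessage()
	.transform( ( response ) => response.uCase() );

result = pipeline.run();
```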

Use when: You need data processing, format conversion, or custom business logic in workflows.

β†’ Full Transformers Documentation


πŸ› οΈ AI Tools

Function calling that enables AI to access real-time data and external systems.

What it does:

  • Define functions that AI can call

  • Access databases, APIs, and external services

  • Provide real-time data to AI models

  • Enable AI to perform actions in your system

Quick example:
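A sketch of registering a tool; the `aiTool()` builder and its signature are assumptions, and `weatherService` is a hypothetical service of your own — see the full Tools documentation:

```boxlang
// A function the model may call for live data
weatherTool = aiTool(
	"getWeather",
	"Get the current weather for a city",
	( city ) => weatherService.current( city ) // weatherService is hypothetical
);

agent = aiAgent( name: "assistant", tools: [ weatherTool ] );
```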

Use when: AI needs access to real-time data, external APIs, databases, or your own functions.

β†’ Full Tools Documentation


πŸ“Š Structured Output

Extract structured data from AI responses into classes, structs, or arrays.


πŸ”„ Component Relationships


πŸ’‘ When to Use Each Component

| Scenario | Recommended Component | Why |
| --- | --- | --- |
| Simple Q&A | aiChat() function | Fastest for one-off queries |
| Reusable prompts | aiMessage() | Dynamic placeholders, template reuse |
| Direct model control | aiModel() | Swap providers, configure parameters |
| Complex workflows | Pipeline with transforms | Chain multiple steps |
| Context-aware chat | aiAgent() with memory | Maintains conversation history |
| Real-time data access | aiAgent() with tools | AI can call your functions |
| Long conversations | Agent with summary memory | Compresses old context |
| Structured data | Structured output + JSON format | Extract typed data |


What structured output does:

  • Define schemas for AI output

  • Populate BoxLang classes automatically

  • Extract arrays of data

  • Validate and type-check responses

Quick example:
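A sketch using the `asJson()` return format described later in this page; the placeholder syntax is an assumption:

```boxlang
// Ask for JSON and receive it as a parsed struct
rawText = "John Doe <john@example.com>";

extractor = aiMessage( "Extract name and email as JSON from: ${text}" )
	.toDefaultModel()
	.asJson();

person = extractor.run( { text: rawText } ); // e.g. person.name, person.email
```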

Use when: Extracting data from text, generating forms, or parsing documents into structured formats.

β†’ Full Structured Output Documentation


πŸ“‘ Streaming

Real-time response streaming for better user experience.

What it does:

  • Stream AI responses token-by-token

  • Works with models, agents, and pipelines

  • Provides callbacks for real-time updates

  • Enables responsive UIs

Quick example:
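A sketch of token-by-token streaming via a callback; the option names (`stream`, `onChunk`) are assumptions — consult the full Streaming documentation for the real option names:

```boxlang
// Print each chunk as it arrives
aiChat(
	"Write a haiku about rivers",
	{},
	{ stream: true, onChunk: ( chunk ) => print( chunk ) }
);
```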

Use when: Building interactive UIs, chatbots, or any application where real-time feedback matters.

β†’ Full Streaming Documentation


Understanding Pipelines

What are Pipelines?

Pipelines are sequences of runnables - components that process data and pass results to the next step. Think of them as assembly lines for AI processing.

Basic Pipeline Structure

Each step:

  1. Receives input from the previous step

  2. Processes the data

  3. Passes output to the next step

Why Use Pipelines?

Composability: Chain multiple operations together

Reusability: Define once, use with different inputs

Immutability: Each operation creates a new pipeline

Flexibility: Mix models, transforms, and custom logic

Runnable Interface

Runnables

All pipeline components implement the IAiRunnable interface:

Built-in Runnables:

  • AiMessage - Message templates

  • AiModel - AI providers wrapped for pipelines

  • AiAgent - Autonomous agents with memory and tools

  • AiTransformRunnable - Data transformers

  • AiRunnableSequence - Pipeline chains

Input and Output

Input types:

  • Empty struct {} - No input

  • Struct with bindings { key: "value" }

  • Messages array [{ role: "user", content: "..." }]

  • Previous step output

Output types:

  • Messages array

  • AI response struct

  • Transformed data (string, struct, array, etc.)

Parameters

Runtime parameters merge with stored defaults:

Options

Options control runtime behavior (returnFormat, timeout, logging, etc.):

Building Your First Pipeline

Step 1: Create a Message Template

Step 2: Add an AI Model

Step 3: Add a Transformer

Step 4: Run It

Complete Example
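The four steps above, combined into one chain; the placeholder syntax and provider name are illustrative:

```boxlang
// 1. Message template  2. Model  3. Return format + transformer  4. Run with bindings
pipeline = aiMessage( "Summarize in one sentence: ${text}" )
	.to( aiModel( "openai" ) )
	.singleMessage()
	.transform( ( response ) => response.trim() );

summary = pipeline.run( { text: "BoxLang is a dynamic language for the JVM." } );
```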

Chaining Operations

The .to() Method

Connects runnables in sequence:
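A minimal sketch of left-to-right chaining with `.to()`:

```boxlang
// message -> model, executed in order
pipeline = aiMessage( "Explain recursion briefly" ).to( aiModel( "openai" ) );
result   = pipeline.run();
```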

Helper Methods

.toDefaultModel() - Connect to default model:

.transform() - Add a transformer:
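A sketch combining both helpers; the exact chaining behavior is assumed:

```boxlang
pipeline = aiMessage( "Name a prime number" )
	.toDefaultModel()                              // uses the configured default provider
	.transform( ( response ) => response.trim() ); // appends a custom step
```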

Pipeline Patterns

Linear Pipeline

Multi-Step Processing

Branching Logic

Reusable Components

Working with Pipeline Results

Running Pipelines

Options and Return Formats

By default, pipelines return raw responses from the AI provider (full API response struct). This gives you maximum flexibility to access all response data including metadata, usage stats, and multiple choices.

Five format options:

  1. raw (default) - Full API response with all metadata

  2. single - Extract just the content string from first message

  3. all - Array of all choice messages

  4. json - Automatically parse JSON response into struct/array

  5. xml - Automatically parse XML response into XML object

Using .withOptions()

Convenience Methods

.singleMessage() - Extract content string (most common):

.allMessages() - Get array of message objects:

.rawResponse() - Explicit raw format (default behavior):

.asJson() - Parse JSON response automatically:

.asXml() - Parse XML response automatically:
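A sketch contrasting the convenience methods above; since each operation creates a new pipeline, the base can be reused (the exact return shapes are assumptions):

```boxlang
base = aiMessage( "List three fruits as a JSON array" ).toDefaultModel();

text   = base.singleMessage().run(); // content string only
all    = base.allMessages().run();   // array of message structs
parsed = base.asJson().run();        // parsed array/struct
```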

Format Comparison

Available Options

The options struct supports these properties:

  • returnFormat:string - Response format: "raw" (default), "single", "all", "json", or "xml"

  • timeout:numeric - Request timeout in seconds (default: 30)

  • logRequest:boolean - Log requests to ai.log (default: false)

  • logRequestToConsole:boolean - Log requests to console (default: false)

  • logResponse:boolean - Log responses to ai.log (default: false)

  • logResponseToConsole:boolean - Log responses to console (default: false)

  • provider:string - Override AI provider

  • apiKey:string - Override API key

Options Propagation

Options set via withOptions() propagate through pipeline chains:

Runtime options override default options:
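A sketch of the override behavior; passing options as a second argument to `run()` is an assumption:

```boxlang
// Defaults stored on the pipeline...
pipeline = aiMessage( "Hello" )
	.toDefaultModel()
	.withOptions( { timeout: 30, logRequest: true } );

// ...overridden for a single call
pipeline.run( {}, { timeout: 120 } );
```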

When to Use Each Format

Use raw (default) when:

  • Building reusable pipeline components

  • Need access to metadata (model, usage, tokens)

  • Handling multiple choice responses

  • Debugging API responses

Use singleMessage() when:

  • Simple text extraction is all you need

  • Building user-facing features

  • Migrating from aiChat() to pipelines

  • Chaining with transformers

Use allMessages() when:

  • Processing multiple response choices

  • Iterating over message objects

  • Extracting role/content pairs

  • Building conversation logs

Use asJson() when:

  • AI generates structured data (objects, arrays)

  • Need automatic JSON parsing

  • Building APIs or data extraction

  • Eliminating manual deserializeJSON() calls

Use asXml() when:

  • AI generates XML documents

  • Working with legacy XML systems

  • RSS/ATOM feed generation

  • Configuration file creation

Format vs Transform

Return formats are built-in extraction; transforms are custom logic:

JSON and XML in Pipelines

JSON Data Pipelines

Create pipelines that automatically parse JSON responses:
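A sketch of a JSON-returning pipeline; the placeholder syntax is an assumption:

```boxlang
// Returns parsed JSON directly -- no manual deserializeJSON() call needed
extractor = aiMessage( "Return a JSON object with keys 'title' and 'tags' for: ${topic}" )
	.toDefaultModel()
	.asJson();

data = extractor.run( { topic: "functional programming" } );
// data.title and data.tags are ready to use
```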

Complex nested structures:

Multi-step JSON processing:

JSON for data extraction:

Reusable JSON templates:

XML Document Pipelines

Generate and parse XML documents:

RSS feed generation:

Multi-step XML processing:

XML to JSON conversion pipeline:

Combining Formats in Workflows

Inspecting Pipelines

Debugging Pipelines

Advanced Features

Storing Bindings

Parameter Management

Naming for Organization

Error Handling

Performance Tips

  1. Reuse Pipelines: Create once, run many times

  2. Cache Results: Cache expensive pipeline outputs

  3. Use Appropriate Models: Match model capabilities to task complexity

  4. Limit Max Tokens: Control costs and response times

  5. Stream Long Responses: Better UX for detailed outputs

Next Steps
