Built-In Functions Reference

Reference documentation for built-in functions in the BoxLang AI module

Complete reference documentation for all BoxLang AI built-in functions (BIFs). These functions provide the primary interface for AI operations in BoxLang.

📚 Overview

The BoxLang AI module provides 18 built-in functions organized into seven functional categories:

🗨️ Chat & Conversation

Core functions for AI chat interactions.

  • aiChat() - Synchronous AI chat

  • aiChatAsync() - Asynchronous, non-blocking chat

  • aiChatStream() - Streaming chat responses

  • aiMessage() - Build message structures

🤖 Agents & Models

Create autonomous agents and model runnables.

  • aiAgent() - Create AI agents with tools, memory, and reasoning

  • aiModel() - Create AI model runnables for pipelines

  • aiService() - Get AI service provider instances

💾 Memory & Context

Manage conversation history and knowledge bases.

  • aiMemory() - Create memory instances (conversation, vector, cache, etc.)

📄 Documents & RAG

Load and process documents for RAG workflows.

  • aiDocuments() - Load documents for RAG

  • aiChunk() - Chunk text into segments

  • aiEmbed() - Generate embeddings

🔄 Transformation & Pipelines

Transform data in AI pipelines.

  • aiTransform() - Create data transformers

🔧 Tools & Utilities

Extend AI capabilities and estimate costs.

  • aiTool() - Create callable tools

  • aiTokens() - Estimate token counts

🔌 MCP (Model Context Protocol)

Connect AI to external tools and data sources.

  • MCP() - Create MCP client for consuming servers

  • MCPServer() - Create MCP server for exposing tools

🎯 Quick Reference

Common Usage Patterns

Simple Chat
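A minimal sketch of the synchronous call. The three-argument shape (message, params, options) follows the provider-selection example later in this reference; the `temperature` param is illustrative:

```boxlang
// Default "single" return format: just the content string
answer = aiChat( "What is BoxLang?" )
println( answer )

// With model params and an explicit provider
answer = aiChat(
	"Summarize BoxLang in one sentence",
	{ temperature: 0.2 },
	{ provider: "claude" }
)
```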

Agent with Tools
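A sketch of wiring a tool into an agent. The tool closure shape, the named arguments, and the `run()` method are assumptions, not confirmed signatures:

```boxlang
// Hypothetical tool: a name, a description the AI uses to choose it, and the callable
weatherTool = aiTool(
	"getWeather",
	"Returns the current weather for a city",
	( city ) => "Sunny and 22C in #city#"
)

agent = aiAgent(
	name        : "assistant",
	instructions: "You are a helpful assistant. Use tools when needed.",
	tools       : [ weatherTool ],
	memory      : aiMemory( "window" )
)

// Assumed invocation method
println( agent.run( "What's the weather in Boston?" ) )
```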

RAG (Retrieval Augmented Generation)
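A sketch of the RAG flow described under "For Knowledge Bases" below: load, chunk, store in vector memory, then ground the chat on retrieved context. The `add()`/`search()` method names and chunk options are assumptions:

```boxlang
// Load and split source documents (path and sizes are illustrative)
docs   = aiDocuments( "/data/handbook.pdf" )
chunks = aiChunk( docs, { size: 500, overlap: 50 } )

// Store chunks in vector memory for semantic search
knowledge = aiMemory( "vector" )
knowledge.add( chunks )

// Retrieve relevant context and ground the answer on it
context = knowledge.search( "vacation policy" )
answer  = aiChat(
	"Answer using this context: " & context.toString() &
	" Question: What is the vacation policy?"
)
```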

Streaming Responses
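A sketch assuming the streaming function accepts a per-chunk callback; the exact callback signature is an assumption:

```boxlang
// Print each chunk as it arrives instead of waiting for the full reply
aiChatStream(
	"Write a short poem about rivers",
	( chunk ) => print( chunk )
)
```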

Structured Output
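A sketch using the "json" format from the Return Formats section below; the `returnFormat` option key is an assumption:

```boxlang
// Parse the model's JSON reply into a native struct
person = aiChat(
	"Return a JSON object with keys name and age for Ada Lovelace",
	{},
	{ returnFormat: "json" }
)
println( person.name )
```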

📊 Function Categories by Use Case

For Simple AI Calls

Start with these for basic AI interactions:

  • aiChat() - Simplest sync chat

  • aiMessage() - Build complex messages

  • aiService() - Get provider instance

For Long-Running Operations

Use async/streaming for better UX:

  • aiChatAsync() - Non-blocking requests

  • aiChatStream() - Real-time responses

For Autonomous Behavior

Let AI reason and use tools:

  • aiAgent() - Autonomous agents

  • aiTool() - Create callable functions

  • aiMemory() - Maintain context

For Knowledge Bases (RAG)

Build AI that knows your data:

  • aiDocuments() - Load documents

  • aiMemory() - Vector storage

  • aiEmbed() - Generate embeddings

  • aiChunk() - Split documents

For Pipelines

Chain AI operations:

  • aiModel() - Model runnables

  • aiTransform() - Data transformation

  • aiMessage() - Fluent message building

For External Integration

Connect AI to external systems:

  • MCP() - Consume MCP servers

  • MCPServer() - Expose tools via MCP

  • aiTool() - Wrap any function

🔑 Key Concepts

Return Formats

All chat functions support multiple return formats:

  • "single": Just the content string (default for aiChat())

  • "all": Array of all messages

  • "raw": Complete API response with metadata

  • "json": Parsed JSON object

  • "xml": Parsed XML document

  • Class/Struct: Structured output (populate target)
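For example, switching formats on the same call (the `returnFormat` option key is an assumption):

```boxlang
text     = aiChat( "Hello" )                               // "single": content string
messages = aiChat( "Hello", {}, { returnFormat: "all" } )  // array of all messages
raw      = aiChat( "Hello", {}, { returnFormat: "raw" } )  // full API response with metadata
```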

Provider Selection

Three ways to specify AI provider:

  1. Default: Uses module configuration

  2. Parameter: aiChat( msg, {}, { provider: "claude" } )

  3. Environment: Auto-detects <PROVIDER>_API_KEY variables

Memory Types

Different memory for different needs:

  • Window: Recent conversation (short-term)

  • Vector: Semantic search (RAG, knowledge)

  • Cache: Distributed storage (CacheBox)

  • File: Simple persistence

  • JDBC: Database-backed

  • Session: User session scope
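For instance, assuming the memory type is selected by the first argument:

```boxlang
chatMemory = aiMemory( "window" )   // short-term conversation history
knowledge  = aiMemory( "vector" )   // semantic search for RAG
```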

Fluent APIs

Many functions return objects with chainable methods:
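For example, a chainable message builder (the `system()`/`user()` method names are assumptions):

```boxlang
msg = aiMessage()
	.system( "You are a concise assistant" )
	.user( "Explain closures in one paragraph" )

answer = aiChat( msg )
```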

🎓 Learning Path

Beginner

  1. Start with aiChat() for simple requests

  2. Learn aiMessage() for structured conversations

  3. Try aiChatStream() for real-time responses

Intermediate

  1. Create aiAgent() with basic tools

  2. Use aiMemory() for conversation context

  3. Implement aiTool() for custom functions

Advanced

  1. Build RAG systems with aiDocuments() and vector memory

  2. Use aiChatAsync() for concurrent requests

  3. Create MCP servers with MCPServer()

  4. Build complex pipelines with aiModel() and aiTransform()

🔍 Function Index

| Function | Category | Description |
| --- | --- | --- |
| aiAgent() | Agents | Create autonomous AI agents |
| aiChat() | Chat | Synchronous AI chat |
| aiChatAsync() | Chat | Asynchronous AI chat |
| aiChatRequest() | Chat | Create request objects |
| aiChatStream() | Chat | Streaming AI chat |
| aiChunk() | Documents | Chunk text into segments |
| aiDocuments() | Documents | Load documents for RAG |
| aiEmbed() | Documents | Generate embeddings |
| aiMemory() | Memory | Create memory instances |
| aiMessage() | Messages | Build message structures |
| aiModel() | Models | Create model runnables |
| aiPopulate() | Utilities | Populate classes from JSON |
| aiService() | Services | Get service providers |
| aiTokens() | Utilities | Estimate token counts |
| aiTool() | Tools | Create callable tools |
| aiTransform() | Transform | Create transformers |
| MCP() | MCP | Create MCP client |
| MCPServer() | MCP | Create MCP server |

📖 Additional Resources

💡 Tips

  • Start simple: Begin with aiChat() before moving to agents

  • Use appropriate memory: Window for chat, vector for knowledge

  • Clear tool descriptions: Help AI choose correct tools

  • Handle errors: Wrap AI calls in try/catch blocks

  • Monitor costs: Use aiTokens() to estimate usage

  • Test locally: Use Ollama for free local testing

  • Stream long responses: Better UX with aiChatStream()

  • Async for parallel: Use aiChatAsync() for multiple concurrent requests
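For example, the cost-monitoring tip above might look like this (the aiTokens() signature is assumed):

```boxlang
// Rough token estimate before sending an expensive prompt
count = aiTokens( "A very long prompt..." )
if ( count > 4000 ) {
	// chunk or summarize the input first
}
```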
