BoxLang AI Module v2.0.0 Release Notes - Document Loaders, MCP Security, Multiple Provider Support, and Embeddings
Released: January 19, 2026
One of our biggest library updates yet! This release introduces a powerful new document loading system, comprehensive security features for MCP servers, and full support for several major AI providers including Mistral, HuggingFace, Groq, OpenRouter, and Ollama. Additionally, we have implemented complete embeddings functionality and made numerous enhancements and fixes across the board.
Major Features
Document Loaders System
New comprehensive document loading system for importing content from various sources into your AI workflows.
New BIFs:
aiDocuments() - Load documents with automatic type detection
aiDocumentLoader() - Create loader instances with advanced configuration
aiDocumentLoaders() - Retrieve all registered loaders with metadata
aiMemoryIngest() - Ingest documents into memory with comprehensive reporting
Built-in Loaders:
TextLoader - Plain text files (.txt, .text)
MarkdownLoader - Markdown files with header splitting, code block removal
HTMLLoader - HTML files and URLs with script/style removal, tag extraction
CSVLoader - CSV files with row-as-document mode, column filtering
JSONLoader - JSON files with field extraction, array-as-documents mode
DirectoryLoader - Batch loading from directories with recursive scanning
Example - Loading Documents:
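A minimal sketch, assuming aiDocuments() accepts a source path plus an options struct; the option keys and document accessor shown here are illustrative assumptions, not the module's documented signature:

```boxlang
// Sketch only: option keys ( recursive, extensions ) are assumptions
docs = aiDocuments( "/data/knowledge-base", { recursive : true, extensions : [ ".md", ".txt" ] } );

// Each loaded document carries content plus metadata ( property name assumed )
docs.each( ( doc ) => println( doc.content ) );
```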
Example - Ingesting into Memory:
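A hedged sketch of memory ingestion, assuming aiMemoryIngest() takes a source and a target memory name; the argument names are illustrative:

```boxlang
// Sketch only: argument names are assumptions
report = aiMemoryIngest( source : "/data/knowledge-base", memory : "docsMemory" );

// The BIF is described as returning a comprehensive ingestion report
println( report );
```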
MCP Server Enterprise Security
Comprehensive security enhancements for MCP servers with CORS, rate limiting, API key validation, and automatic security headers.
CORS Configuration:
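A configuration sketch, assuming CORS is declared as a struct in the server settings; every key name below is an assumption, not the module's actual setting name:

```boxlang
// Illustrative shape only: restrict the MCP server to known origins
mcpConfig = {
	cors : {
		allowedOrigins   : [ "https://app.example.com" ],
		allowedMethods   : [ "GET", "POST" ],
		allowCredentials : false
	}
};
```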
Request Body Size Limits:
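A sketch of capping request bodies, assuming a single size setting (the key name is illustrative):

```boxlang
// Reject request bodies over 1 MB; setting name is an assumption
mcpConfig = { maxBodySize : 1048576 };
```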
Custom API Key Validation:
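Custom validation might look like the following closure-based sketch; the hook name, its signature, and the helper it calls are all assumptions:

```boxlang
// Sketch: plug in your own key check; validateKeyAgainstStore() is hypothetical
mcpConfig = {
	apiKeyValidator : ( apiKey ) => {
		return apiKey.len() > 0 && validateKeyAgainstStore( apiKey );
	}
};
```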
Automatic Security Headers - All responses include:
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000
And more...
New AI Provider Support
Mistral AI:
HuggingFace:
Groq (Fast Inference):
OpenRouter:
Ollama (Local Models):
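The new providers plug into the same chat BIFs; a sketch assuming aiChat() accepts an options struct with provider and model keys (the exact option names and provider slugs are assumptions):

```boxlang
// Cloud inference via Groq ( provider slug assumed )
fast = aiChat( "Summarize BoxLang in one sentence", { provider : "groq" } );

// Fully local inference via Ollama, using the new default model from this release
localReply = aiChat( "Hello there!", { provider : "ollama", model : "qwen2.5:0.5b-instruct" } );
```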
Embeddings Support
Complete embeddings functionality for semantic search, clustering, and recommendations across all providers.
Generate Embeddings:
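A sketch assuming an aiEmbeddings()-style BIF; the actual function name and option keys may differ:

```boxlang
// Hypothetical BIF name; options struct keys are assumptions
vector = aiEmbeddings(
	"BoxLang is a dynamic JVM language",
	{ provider : "openai", model : "text-embedding-3-small" }
);

// An embedding is an array of floats usable for semantic search or clustering
println( vector.len() );
```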
Supported Providers:
OpenAI: text-embedding-3-small, text-embedding-3-large
Ollama: Local embeddings for privacy
DeepSeek: OpenAI-compatible API
Groq: OpenAI-compatible API
OpenRouter: Aggregated models
Gemini: text-embedding-004
Enhanced ChatMessage Methods
Template Binding:
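Template binding presumably interpolates values into message placeholders; a sketch assuming a bind()-style method and ${} placeholder syntax (both are assumptions):

```boxlang
// Method name and placeholder syntax are assumptions
msg = aiMessage( "Translate '${text}' into ${language}" )
	.bind( { text : "Good morning", language : "Spanish" } );
```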
Enhancements
Automatic API Key Detection: Services now auto-detect API keys using the <PROVIDER>_API_KEY convention
Tool JSON Serialization: Automatic JSON serialization for tool calls that don't return strings
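Under this convention, each service resolves its key from an environment variable named after the provider; for example (variable names inferred from the <PROVIDER>_API_KEY pattern):

```shell
# Keys are discovered automatically from <PROVIDER>_API_KEY-style variables
export OPENAI_API_KEY="sk-..."
export MISTRAL_API_KEY="your-mistral-key"
export GROQ_API_KEY="your-groq-key"
```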
Docker Testing Infrastructure: Automated local development and CI/CD support with Docker Compose
Enhanced GitHub Actions: Improved CI/CD pipeline with AI service support
BIF Reference Documentation: Complete function reference table in README
Comprehensive Event Documentation: Complete event system documentation
Bug Fixes
Tool Argument Descriptions: Tool arguments without descriptions now default to argument name instead of causing errors
Model Name Compatibility: Updated OllamaService default model from llama3.2 to qwen2.5:0.5b-instruct
Docker GPU Support: Made GPU configuration optional for systems without GPU access
Test Model References: Corrected model names in Ollama tests to match available models
Documentation
New comprehensive document loaders documentation
MCP server security features documentation with examples
Complete embeddings documentation with provider examples
Enhanced provider-specific documentation
Upgrade Notes
This is a major release with significant new features. All existing code remains compatible, but you can now leverage:
Document loaders for ingesting content into AI workflows
MCP security features for production-ready API servers
Multiple new providers for flexibility and cost optimization
Embeddings for semantic search and RAG applications
Enhanced message templating for dynamic content generation
Thank You
Thank you to all contributors and users who helped make this release possible!