✨ 2.0.0

BoxLang AI Module v2.0.0 Release Notes - Document Loaders, MCP Security, Multiple Provider Support, and Embeddings

Released: January 19, 2026

One of our biggest library updates yet! This release introduces a powerful new document loading system, comprehensive security features for MCP servers, and full support for five new AI providers: Mistral, HuggingFace, Groq, OpenRouter, and Ollama. It also delivers complete embeddings functionality, plus numerous enhancements and fixes across the board.

πŸŽ‰ Major Features

Document Loaders System

New comprehensive document loading system for importing content from various sources into your AI workflows.

New BIFs:

  • aiDocuments() - Load documents with automatic type detection

  • aiDocumentLoader() - Create loader instances with advanced configuration

  • aiDocumentLoaders() - Retrieve all registered loaders with metadata

  • aiMemoryIngest() - Ingest documents into memory with comprehensive reporting

Built-in Loaders:

  • TextLoader - Plain text files (.txt, .text)

  • MarkdownLoader - Markdown files with header splitting, code block removal

  • HTMLLoader - HTML files and URLs with script/style removal, tag extraction

  • CSVLoader - CSV files with row-as-document mode, column filtering

  • JSONLoader - JSON files with field extraction, array-as-documents mode

  • DirectoryLoader - Batch loading from directories with recursive scanning

Example - Loading Documents:
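The snippet below is a minimal sketch of the loader BIFs listed above; the argument shapes, the `load()` method, and option keys such as `rowAsDocument` are illustrative assumptions, not the module's confirmed signatures.

```
// Load documents with automatic type detection from the file extension
docs = aiDocuments( "docs/guide.md" )

// Create a configured loader instance (option keys are illustrative)
loader  = aiDocumentLoader( "csv", { rowAsDocument : true } )
csvDocs = loader.load( "data/products.csv" )

// Inspect all registered loaders and their metadata
println( aiDocumentLoaders() )
```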

Example - Ingesting into Memory:
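A hedged sketch of memory ingestion; the argument names and the fields on the returned report are assumptions based on the BIF description above.

```
// Ingest a directory of documents into an AI memory store and
// inspect the ingestion report (field names are illustrative)
report = aiMemoryIngest( source = "docs/", memory = "knowledge-base" )
println( "Ingested #report.documents# documents in #report.chunks# chunks" )
```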

MCP Server Enterprise Security

Comprehensive security enhancements for MCP servers with CORS, rate limiting, API key validation, and automatic security headers.

CORS Configuration:
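The fragment below sketches CORS settings as boxlang.json module settings; the module key (`bxai`) and every option name are assumptions, so check the module documentation for the exact schema.

```
// config/boxlang.json (illustrative key names)
"modules" : {
    "bxai" : {
        "settings" : {
            "mcp" : {
                "cors" : {
                    "allowedOrigins" : [ "https://app.example.com" ],
                    "allowedMethods" : [ "GET", "POST", "OPTIONS" ],
                    "allowedHeaders" : [ "Content-Type", "Authorization" ]
                }
            }
        }
    }
}
```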

Request Body Size Limits:
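A hypothetical fragment for capping request body size; the key name and unit are assumptions.

```
// Reject MCP request bodies larger than 1 MB (key name is illustrative)
"mcp" : {
    "maxBodySize" : 1048576
}
```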

Custom API Key Validation:
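A sketch of plugging in your own key check; the `apiKeyValidator` setting name, the closure signature, and `myKeyStore` are all hypothetical stand-ins.

```
// Hypothetical: supply a closure that decides whether an incoming
// API key is accepted; returning false rejects the request
mcpServerConfig = {
    apiKeyValidator : ( apiKey ) => {
        // e.g. look the key up in your own datastore (myKeyStore is a stand-in)
        return apiKey.len() && myKeyStore.isActive( apiKey )
    }
}
```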

Automatic Security Headers - All responses include:

  • X-Content-Type-Options: nosniff

  • X-Frame-Options: DENY

  • X-XSS-Protection: 1; mode=block

  • Strict-Transport-Security: max-age=31536000

  • And more...

New AI Provider Support

Mistral AI:
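An illustrative call; the `aiChat()` params/options shape and the model name are assumptions. Per the API key convention below, the key is read from `MISTRAL_API_KEY`.

```
// Chat via Mistral AI (set MISTRAL_API_KEY in your environment)
aiChat( "Summarize BoxLang in one sentence", { model : "mistral-small-latest" }, { provider : "mistral" } )
```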

HuggingFace:
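Illustrative only; the model id and option shape are assumptions.

```
// Chat via HuggingFace inference (set HUGGINGFACE_API_KEY)
aiChat( "Hello!", { model : "microsoft/Phi-3-mini-4k-instruct" }, { provider : "huggingface" } )
```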

Groq (Fast Inference):
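Illustrative only; model name and option shape are assumptions.

```
// Groq's OpenAI-compatible API for fast inference (set GROQ_API_KEY)
aiChat( "Hello!", { model : "llama-3.1-8b-instant" }, { provider : "groq" } )
```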

OpenRouter:
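Illustrative only; OpenRouter model ids follow a `vendor/model` convention, but this exact call shape is an assumption.

```
// OpenRouter aggregates many models behind one API (set OPENROUTER_API_KEY)
aiChat( "Hello!", { model : "openai/gpt-4o-mini" }, { provider : "openrouter" } )
```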

Ollama (Local Models):
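Illustrative call shape; the model name matches the new OllamaService default noted in the bug fixes below.

```
// Local inference via Ollama, no API key required
aiChat( "Hello!", { model : "qwen2.5:0.5b-instruct" }, { provider : "ollama" } )
```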

Embeddings Support

Complete embeddings functionality for semantic search, clustering, and recommendations across all providers.

Generate Embeddings:
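The BIF name `aiEmbeddings()` follows the module's `ai*` naming convention but is an assumption here, as are the option names; the model shown is from the supported list below.

```
// Generate an embedding vector for semantic search
vector = aiEmbeddings(
    "BoxLang is a dynamic language for the JVM",
    { model : "text-embedding-3-small", provider : "openai" }
)
println( "Dimensions: #vector.len()#" )
```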

Supported Providers:

  • OpenAI: text-embedding-3-small, text-embedding-3-large

  • Ollama: Local embeddings for privacy

  • DeepSeek: OpenAI-compatible API

  • Groq: OpenAI-compatible API

  • OpenRouter: Aggregated models

  • Gemini: text-embedding-004

Enhanced ChatMessage Methods

Template Binding:
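A hedged sketch of binding values into a message template; the `${}` placeholder syntax and the `bind()` method name are assumptions.

```
// Bind dynamic values into a chat message template before sending
msg = aiMessage( "Translate '${text}' into ${language}" )
    .bind( { text : "Good morning", language : "French" } )
aiChat( msg )
```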

πŸ”§ Enhancements

  • Automatic API Key Detection: Services now auto-detect API keys using <PROVIDER>_API_KEY convention

  • Tool JSON Serialization: Tool call results that are not strings are now automatically serialized to JSON

  • Docker Testing Infrastructure: Automated local development and CI/CD support with Docker Compose

  • Enhanced GitHub Actions: Improved CI/CD pipeline with AI service support

  • BIF Reference Documentation: Complete function reference table in README

  • Comprehensive Event Documentation: Complete event system documentation

πŸ› Bug Fixes

  • Tool Argument Descriptions: Tool arguments without descriptions now default to the argument name instead of causing errors

  • Model Name Compatibility: Updated OllamaService default model from llama3.2 to qwen2.5:0.5b-instruct

  • Docker GPU Support: Made GPU configuration optional for systems without GPU access

  • Test Model References: Corrected model names in Ollama tests to match available models

πŸ“š Documentation

  • New comprehensive document loaders documentation

  • MCP server security features documentation with examples

  • Complete embeddings documentation with provider examples

  • Enhanced provider-specific documentation

πŸš€ Upgrade Notes

This is a major release with significant new features. All existing code remains compatible, but you can now leverage:

  1. Document loaders for ingesting content into AI workflows

  2. MCP security features for production-ready API servers

  3. Multiple new providers for flexibility and cost optimization

  4. Embeddings for semantic search and RAG applications

  5. Enhanced message templating for dynamic content generation

πŸ™ Thank You

Thank you to all contributors and users who helped make this release possible!
