# 📦 Installation
Get the BoxLang AI module installed and ready to use in minutes.
## ⚙️ System Requirements

* **BoxLang Runtime**: 1.8+
* **Internet**: Required for cloud providers (OpenAI, Claude, etc.)
* **Optional**: Docker for running Ollama locally
## 🚀 Installation Methods
### 📥 BoxLang Module Installer
The simplest way to install the module globally is via the BoxLang Module Installer:
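```bash
# Module slug assumed to be bx-ai; check ForgeBox for the exact name
install-bx-module bx-ai
```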
This command downloads and installs the module globally, making it available to all BoxLang applications on your system. If you want to install it locally in your CLI or other runtimes:
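```bash
# A sketch: the --local flag is an assumption; consult `install-bx-module --help`
install-bx-module bx-ai --local
```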
### 📦 CommandBox Package Manager
For CommandBox-based web applications and runtimes:
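```bash
# Assumes the ForgeBox slug is bx-ai
box install bx-ai
```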
This adds the module to your application's dependencies and installs it in the appropriate location.
### 📋 Application Dependencies
Add to your box.json for managed dependencies:
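A sketch, assuming the ForgeBox slug `bx-ai` (pin whatever version you need):

```json
{
  "dependencies": {
    "bx-ai": "^1.0.0"
  }
}
```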
Then run:
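```bash
box install
```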
## 🔧 Quick Configuration
Set up your first AI provider in boxlang.json:
### Basic Setup (OpenAI)
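A minimal sketch, assuming the module registers its settings under `modules.bxai` (exact keys may differ; see the Provider Setup Guide):

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "openai",
        "apiKey": "sk-..."
      }
    }
  }
}
```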
### Using Environment Variables (Recommended)
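The same shape, assuming `boxlang.json` resolves `${...}` placeholders from the environment:

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "openai",
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}
```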
Then set the environment variable:
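```bash
export OPENAI_API_KEY="sk-..."
```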
### Local AI (Ollama)
For free, local AI with no API costs:
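A sketch pointing the module at the local Ollama server (the `apiURL` setting name is an assumption):

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "ollama",
        "apiURL": "http://localhost:11434"
      }
    }
  }
}
```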
📖 For detailed provider setup, see the Provider Setup Guide.
## 🐳 Running Ollama with Docker
For production deployments or easier setup, use the included Docker Compose configuration:
### 🚀 Quick Start
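```bash
docker compose -f docker-compose-ollama.yml up -d
```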
This gives you:

* **Ollama Server** on http://localhost:11434
* **Web UI** on http://localhost:3000
### 🎯 What's Included

The Docker setup provides:

* ✅ **Ollama LLM Server** - Fully configured and ready to use
* ✅ **Web UI** - Browser-based interface for testing and management
* ✅ **Pre-loaded Model** - Automatically downloads `qwen2.5:0.5b-instruct`
* ✅ **Health Checks** - Automatic monitoring and restart capabilities
* ✅ **Persistent Storage** - Data stored locally in the `./.ollama` directory
* ✅ **Production Ready** - Configured with proper restart policies
**Before deploying to production, update these settings in docker-compose-ollama.yml:**

1. **Change Default Credentials** - Replace the default Web UI username and password
2. **Change Models** - Update the preloaded model
3. **Add Resource Limits** (recommended; see the sketch after this list)
4. **SSL/TLS** - Use a reverse proxy (nginx/traefik) for HTTPS

See the comments in docker-compose-ollama.yml for complete production setup notes.
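A minimal sketch of resource limits, assuming the compose service is named `ollama` (check docker-compose-ollama.yml for the actual service name):

```yaml
services:
  ollama:
    deploy:
      resources:
        limits:
          cpus: "4"    # cap CPU usage
          memory: 8G   # cap RAM; size this to your model
```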
### 🔄 Managing the Service

```bash
# Start the service
docker compose -f docker-compose-ollama.yml up -d

# View logs
docker compose -f docker-compose-ollama.yml logs -f

# Stop the service
docker compose -f docker-compose-ollama.yml down
```
### 🌐 Accessing the Web UI

Open your browser to http://localhost:3000 and log in with:

* **Username**: `boxlang` (default)
* **Password**: `rocks` (default)

⚠️ **Change these credentials before production use!**
### 📂 Data Storage

All data (models and configurations) is stored in the `./.ollama` directory.

**Important**: Add `.ollama/` to your `.gitignore` to avoid committing large model files.
## 🔧 Other Providers

The module also works with these providers; see the Provider Setup Guide for full details, and the generic configuration sketch after this list.

🤗 **HuggingFace** - Get your API key: https://huggingface.co/settings/tokens. Popular models:

* `Qwen/Qwen2.5-72B-Instruct` - Default, powerful general-purpose model for complex reasoning
* `meta-llama/Llama-3.1-8B-Instruct` - Meta's Llama model, balanced performance and speed
* `mistralai/Mistral-7B-Instruct-v0.3` - Fast and efficient for quick responses

⚡ **Groq**

🔷 **DeepSeek**

🌊 **Mistral** - `mistral-large-latest` - Most capable

🌐 **OpenRouter** (Multi-model gateway) - API keys use the `sk-or-...` prefix

🤖 **Grok (xAI)**
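Each provider follows the same settings shape; a sketch for OpenRouter, assuming the `modules.bxai` keys shown earlier (the `sk-or-...` key prefix comes from OpenRouter):

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "openrouter",
        "apiKey": "sk-or-..."
      }
    }
  }
}
```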
## 🔧 Troubleshooting

### ❌ "No API key provided"

Make sure an `apiKey` is set in your boxlang.json (or via an environment variable):

```json
{
  "apiKey": "sk-..."
}
```

### ⏱️ "Connection timeout"

Check that you have internet access for cloud providers, or that your local Ollama server is running on http://localhost:11434.
## ✅ Verification
Test your installation:
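A minimal sketch, assuming the module exposes an `aiChat()` function (the file name is illustrative):

```
// test-ai.bxs
result = aiChat( "Say hello from BoxLang!" )
println( result )
```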
Run it:
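```bash
boxlang test-ai.bxs
```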
If configured correctly, you should see a response from your AI provider.
## 📚 Next Steps
Now that the module is installed and configured:

1. **Provider Setup Guide** - Detailed configuration for all 12+ providers
2. **Quick Start Guide** - Your first AI conversation in 5 minutes
3. **Basic Chatting** - Learn the fundamentals
## 💡 Quick Tips

* Use environment variables for API keys (never commit them to git)
* Start with Ollama for free development/testing
* Try multiple providers to find what works best for your use case
* Read the provider guide for cost comparisons and model recommendations