Installation

Quick installation guide for BoxLang AI module.

πŸ“¦ Installation

Get the BoxLang AI module installed and ready to use in minutes.


βš™οΈ System Requirements

  • BoxLang Runtime: 1.8+

  • Internet: Required for cloud providers (OpenAI, Claude, etc.)

  • Optional: Docker for running Ollama locally

πŸš€ Installation Methods

πŸ“₯ BoxLang Module Installer

The simplest way to install the module is via the BoxLang Module Installer globally:
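A sketch of the installer invocation — the module slug `bx-ai` is an assumption here; confirm the exact name on ForgeBox:

```shell
# Install the AI module globally with the BoxLang module installer
# (module slug "bx-ai" is assumed)
install-bx-module bx-ai
```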

This command downloads and installs the module globally, making it available to every BoxLang application on your system. To install it locally for your CLI or other runtimes instead:
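A local install might look like the following — the `--local` flag is an assumption; check `install-bx-module --help` for the exact option:

```shell
# Install the module into the current runtime only
# ("bx-ai" slug and "--local" flag are assumptions)
install-bx-module bx-ai --local
```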

πŸ“¦ CommandBox Package Manager

For CommandBox-based web applications and runtimes:
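With CommandBox this is a single install command — the `bx-ai` slug is an assumption; verify it on ForgeBox:

```shell
# Install the module into the current application via CommandBox
box install bx-ai
```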

This adds the module to your application's dependencies and installs it in the appropriate location.

πŸ“‹ Application Dependencies

Add to your box.json for managed dependencies:
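A minimal dependency entry might look like this — the slug and version range are assumptions, not confirmed values:

```json
{
  "dependencies": {
    "bx-ai": "^1.0.0"
  }
}
```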

Then run:
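Running `box install` with no arguments resolves everything declared in box.json:

```shell
# Install all dependencies listed in box.json
box install
```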

πŸ”§ Quick Configuration

Set up your first AI provider in boxlang.json:

Basic Setup (OpenAI)
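A sketch of the provider configuration — the `bxai` module key and the `provider`/`apiKey` setting names are assumptions here; see the Provider Setup Guide for the exact keys:

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "openai",
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}
```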

Then set the environment variable:
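For OpenAI, export the key in your shell (or your process manager's environment):

```shell
# Make the OpenAI API key available to the BoxLang runtime
export OPENAI_API_KEY="sk-..."
```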

Local AI (Ollama)

For free, local AI with no API costs:
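A sketch of an Ollama configuration — the `bxai` key and the `provider`/`apiURL` setting names are assumptions; the endpoint matches the Docker setup described below:

```json
{
  "modules": {
    "bxai": {
      "settings": {
        "provider": "ollama",
        "apiURL": "http://localhost:11434"
      }
    }
  }
}
```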

πŸ“– For detailed provider setup, see Provider Setup Guide


🐳 Running Ollama with Docker

For production deployments or easier setup, use the included Docker Compose configuration:

πŸ“‹ Quick Start
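Bring the stack up with the compose file named in this guide (Docker Compose v2 syntax):

```shell
# Start Ollama and the Web UI in the background
docker compose -f docker-compose-ollama.yml up -d
```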

This will start:

  • Ollama Server on http://localhost:11434

  • Web UI on http://localhost:3000

🎯 What's Included

The Docker setup provides:

  • βœ… Ollama LLM Server - Fully configured and ready to use

  • βœ… Web UI - Browser-based interface for testing and management

  • βœ… Pre-loaded Model - Automatically downloads qwen2.5:0.5b-instruct

  • βœ… Health Checks - Automatic monitoring and restart capabilities

  • βœ… Persistent Storage - Data stored locally in ./.ollama directory

  • βœ… Production Ready - Configured with proper restart policies

  • mistralai/Mistral-7B-Instruct-v0.3 - Fast and efficient for quick responses

Before deploying to production, update these settings in docker-compose-ollama.yml:

  1. Change Default Credentials

  2. Change Models - Update the preloaded model

  3. Add Resource Limits (recommended)

  4. SSL/TLS - Use a reverse proxy (nginx/traefik) for HTTPS

See the comments in docker-compose-ollama.yml for complete production setup notes.
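As an illustration, a resource-limits stanza for the Ollama service might look like this — the service name and values are assumptions; tune them for your host:

```yaml
# Fragment for docker-compose-ollama.yml (service name assumed)
services:
  ollama:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8g
```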

πŸ“Š Managing the Service

  • mistral-large-latest - Most capable

🌐 OpenRouter (Multi-model gateway): docker compose -f docker-compose-ollama.yml up -d

View logs

docker compose -f docker-compose-ollama.yml logs -f "apiKey": "sk-or-..." }

}

🌐 Accessing the Web UI

Open your browser to http://localhost:3000 and login with:

  • Username: boxlang (default)

  • Password: rocks (default)

⚠️ Change these credentials before production use!


πŸ” Environment Variablesconfigurations) is stored in ./.ollama directory:

Important: Add .ollama/ to your .gitignore to avoid committing large model files.


πŸ”§ Troubleshooting

❌ "No API key provided"

Make sure your provider API key is available, either as an environment variable (e.g. OPENAI_API_KEY) or as the apiKey setting in boxlang.json.

⏱️ "Connection timeout"

Check your internet connection and that the provider endpoint is reachable. For local Ollama, verify the server is running on http://localhost:11434.

βœ… Verification

Test your installation:
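A minimal smoke-test script — it assumes the module exposes an `aiChat()` BIF (check the module docs for the exact function name), and that a provider is configured as above:

```boxlang
// test-ai.bxs — a minimal sketch; aiChat() and the filename are assumptions
result = aiChat( "Say hello in one short sentence" )
println( result )
```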

Run it:
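Assuming you saved the script as test-ai.bxs (the filename is illustrative), run it with the BoxLang CLI:

```shell
boxlang test-ai.bxs
```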

If configured correctly, you should see a response from your AI provider.


πŸš€ Next Steps

Now that you're installed and configured:

  1. Provider Setup Guide - Detailed configuration for all 12+ providers

  2. Quick Start Guide - Your first AI conversation in 5 minutes

  3. Basic Chatting - Learn the fundamentals

πŸ’‘ Quick Tips

  • Use environment variables for API keys (never commit to git)

  • Start with Ollama for free development/testing

  • Try multiple providers to find what works best for your use case

  • Read the provider guide for cost comparisons and model recommendations
