Service-Level Chatting
Take full control of AI interactions by working directly with service objects. Perfect for advanced scenarios requiring custom configuration, multiple providers, or direct API access.
🏗️ Service Architecture
Benefits:
Direct control over service configuration
Multiple providers in one application
Custom timeouts and endpoints
Reusable service instances
Creating Services
🔄 Service Lifecycle
Basic Service Creation
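A minimal sketch of creating a reusable service object. This assumes the module's `aiService()` BIF with a fluent `configure()` method that accepts your API key; the provider name and method chain follow common bx-ai usage, so treat exact signatures as assumptions:

```
// Create a reusable OpenAI service once; reuse it for every request
oneService = aiService( "openai" )
	.configure( getSystemSetting( "OPENAI_API_KEY" ) );

// Ask a question through the service
answer = oneService.invoke( aiChatRequest( "What is BoxLang?" ) );
println( answer );
```

Creating the service once and reusing it avoids re-reading configuration on every call.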
Service Configuration
Building Chat Requests
Use aiChatRequest() for detailed request control:
Basic Request
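The simplest request wraps a single user message as a string (a sketch; `service` is assumed to be an object created earlier via `aiService()`):

```
// A single user message as a plain string
request  = aiChatRequest( "Explain closures in one paragraph" );
response = service.invoke( request );
```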
With Messages Array
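To carry conversation history or a system prompt, pass an array of role/content structs instead of a string (role names follow the common `system`/`user`/`assistant` convention):

```
// Full conversation context as an array of role/content structs
request = aiChatRequest( [
	{ role : "system", content : "You are a concise coding tutor." },
	{ role : "user",   content : "What is a closure?" }
] );
response = service.invoke( request );
```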
With Parameters
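Model parameters ride along as a second struct argument and are passed through to the provider. The parameter names shown are OpenAI-style and vary by provider, so treat them as illustrative:

```
// Second argument: provider parameters (OpenAI-style names shown)
request = aiChatRequest(
	"Summarize this release note",
	{ model : "gpt-4o-mini", temperature : 0.2, max_tokens : 300 }
);
```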
Complete Request
Service Operations
Invoke (Synchronous)
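A synchronous `invoke()` blocks until the provider responds, so wrap it in try/catch as recommended under Best Practices below (a sketch, assuming `invoke()` throws on network or provider errors):

```
try {
	response = service.invoke( aiChatRequest( "Hello there" ) );
	println( response );
} catch ( any e ) {
	// Network failures and provider errors surface here
	println( "AI call failed: #e.message#" );
}
```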
Invoke Stream
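For streaming, the shape below assumes a streaming variant of invoke that accepts a per-chunk callback; the method name and callback signature are assumptions based on this section's title:

```
// Assumption: a streaming invoke that fires a callback per chunk
service.invokeStream( request, ( chunk ) => {
	// Emit each token/chunk as it arrives instead of waiting
	print( chunk );
} );
```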
Custom Headers
Add authentication, tracking, or custom headers:
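A sketch of attaching custom headers, assuming they travel as the fourth argument of `aiChatRequest()` after params and options (the argument position is an assumption):

```
// Assumption: headers ride along as the fourth argument
request = aiChatRequest(
	"Hello",
	{},                                  // params
	{},                                  // options
	{ "X-Request-ID" : createUUID() }    // custom headers for tracing
);
response = service.invoke( request );
```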
Multiple Services
Manage multiple providers simultaneously:
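Because each service object carries its own configuration, you can hold several at once and route requests by task. A sketch, with provider names and environment variables as assumptions:

```
openai = aiService( "openai" ).configure( getSystemSetting( "OPENAI_API_KEY" ) );
claude = aiService( "claude" ).configure( getSystemSetting( "ANTHROPIC_API_KEY" ) );

// Route by task: one provider drafts, another reviews
draft  = openai.invoke( aiChatRequest( "Draft a tweet about BoxLang" ) );
review = claude.invoke( aiChatRequest( "Critique this tweet: #draft#" ) );
```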
Advanced Patterns
Retry Logic
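Transient failures (timeouts, rate limits) are common with AI APIs, so a retry wrapper with exponential backoff is a useful pattern. A self-contained sketch around the assumed `service.invoke()` call:

```
function invokeWithRetry( required service, required request, numeric maxAttempts = 3 ) {
	for ( var attempt = 1; attempt <= maxAttempts; attempt++ ) {
		try {
			return service.invoke( request );
		} catch ( any e ) {
			// Give up once attempts are exhausted
			if ( attempt == maxAttempts ) rethrow;
			// Exponential backoff: 1s, 2s, 4s, ...
			sleep( 1000 * 2 ^ ( attempt - 1 ) );
		}
	}
}
```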
Request Queue
Load Balancer
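A naive round-robin balancer spreads requests across equivalent services. The provider names below are placeholders; any set of interchangeable services works:

```
// Round-robin across equivalent services (provider names illustrative)
variables.services = [
	aiService( "openai" ).configure( getSystemSetting( "OPENAI_API_KEY" ) ),
	aiService( "deepseek" ).configure( getSystemSetting( "DEEPSEEK_API_KEY" ) )
];
variables.nextIndex = 1;

function balancedInvoke( required request ) {
	var service = variables.services[ variables.nextIndex ];
	// Advance the cursor, wrapping back to the first service
	variables.nextIndex = ( variables.nextIndex % variables.services.len() ) + 1;
	return service.invoke( request );
}
```

For production use you would also want health checks and failover on top of this.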
Provider-Specific Features
OpenAI
Claude
Ollama
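Local Ollama models need no API key, only a reachable endpoint. This sketch assumes `configure()` accepts a key followed by a base URL, and that `llama3` is pulled locally:

```
// Assumption: configure( apiKey, baseURL ); Ollama ignores the key
ollama = aiService( "ollama" ).configure( "", "http://localhost:11434" );
answer = ollama.invoke( aiChatRequest( "Hello", { model : "llama3" } ) );
```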
Request Options
Return Formats
Logging
Practical Examples
Cost Tracker
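Cost tracking relies on the raw return format so the provider's token usage survives. This sketch assumes a `returnFormat : "raw"` option and an OpenAI-style `usage.total_tokens` field; the per-token price is purely illustrative:

```
// Assumption: "raw" returns the provider's full payload, usage included
request = aiChatRequest( "Explain recursion", {}, { returnFormat : "raw" } );
raw     = service.invoke( request );
tokens  = raw.usage.total_tokens;

// Illustrative pricing only; look up your model's real rate
cost = tokens / 1000 * 0.00015;
println( "Used #tokens# tokens (~$#numberFormat( cost, '0.000000' )#)" );
```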
Response Cache
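Caching identical questions avoids paying twice for the same answer. A minimal in-memory sketch keyed on a hash of the question (no expiry; real code would add TTLs):

```
// Simple in-memory cache keyed by question hash
variables.aiCache = {};

function cachedChat( required service, required string question ) {
	var key = hash( question );
	if ( !variables.aiCache.keyExists( key ) ) {
		variables.aiCache[ key ] = service.invoke( aiChatRequest( question ) );
	}
	return variables.aiCache[ key ];
}
```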
A/B Testing
Best Practices
Reuse Service Objects: Create once, use many times
Handle Errors: Wrap invoke() in try/catch
Set Timeouts: Prevent hanging requests
Use Raw Format: For detailed debugging and cost tracking
Cache Responses: Save money on repeated questions
Implement Retries: Handle transient failures
Monitor Usage: Track tokens and costs
Next Steps
Pipeline Overview - Learn about AI pipelines
Working with Models - Services in pipelines
Basic Chatting - Back to basics