aiService
Get a reference to a registered AI service provider. This is the direct invocation interface for AI providers, as opposed to aiModel(), which creates pipeline-friendly runnables.
Syntax
aiService(provider, apiKey)
Parameters
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| provider | string | No | (config) | The provider to use. If not provided, uses the default from the module configuration. |
| apiKey | string | No | (config/env) | Optional API key override. If not provided, uses the configuration or the <PROVIDER>_API_KEY environment variable. |
Supported Providers
openai - OpenAI (GPT models)
claude - Anthropic Claude
gemini - Google Gemini
ollama - Ollama (local models)
groq - Groq
grok - xAI Grok
deepseek - DeepSeek
mistral - Mistral AI
cohere - Cohere
huggingface - Hugging Face
openrouter - OpenRouter
perplexity - Perplexity
voyage - Voyage AI (embeddings)
Returns
Returns an AI service provider instance (e.g., OpenAIService, ClaudeService) with methods:
invoke(request) - Send a synchronous request
invokeStream(request, callback) - Stream the response
configure(apiKey) - Configure the service
getName() - Get the provider name
embed(request) - Generate embeddings (if supported)
Examples
Get Default Service
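Calling aiService() with no arguments returns whichever provider is set as the default in the module configuration. A minimal sketch:

```
// Uses the default provider from the module configuration
service = aiService()
```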
Specific Provider
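Passing one of the supported provider names selects that provider explicitly, for example:

```
// Explicitly request the Anthropic Claude provider
service = aiService( "claude" )
```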
With API Key Override
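The second argument overrides any configured key for this service instance. The key below is a placeholder, not a real credential:

```
// Override the configured/environment API key for this instance
service = aiService( "openai", "sk-your-key-here" )
```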
Direct Invocation
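A sketch of a synchronous call. It assumes aiChatRequest() (see Related Functions) accepts a plain prompt string; the shape of the returned response depends on the provider:

```
service = aiService( "openai" )

// Build a request object and send it synchronously
response = service.invoke( aiChatRequest( "Summarize the benefits of caching in two sentences." ) )
```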
Streaming Response
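invokeStream(request, callback) delivers the response incrementally. The exact shape of the chunk handed to the callback is an assumption here:

```
service = aiService( "openai" )
request = aiChatRequest( "Write a short poem about rivers." )

// The callback fires for each chunk as it arrives
service.invokeStream( request, ( chunk ) => {
    // Append the chunk to a buffer, the console, a UI stream, etc.
} )
```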
Multiple Providers
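Because the interface is the same across providers, one request object can be sent to several services; a sketch:

```
openai = aiService( "openai" )
claude = aiService( "claude" )

request = aiChatRequest( "Explain recursion in one sentence." )

// Same request, two providers
fromOpenAI = openai.invoke( request )
fromClaude = claude.invoke( request )
```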
Environment Variable Detection
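When a <PROVIDER>_API_KEY environment variable is set (for example OPENAI_API_KEY), no key needs to be passed at all:

```
// OPENAI_API_KEY is detected automatically; no apiKey argument required
service = aiService( "openai" )
```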
Service Configuration
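configure(apiKey) applies or replaces the key after the service has been created, e.g. when the key is only known at runtime. The variable name below is a placeholder:

```
service = aiService( "openai" )

// Apply an API key loaded at runtime (e.g. from a vault or settings store)
service.configure( runtimeApiKey )
```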
Reusable Services
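Services are meant to be created once and invoked many times; a sketch:

```
service = aiService( "openai" )

// Reuse the same instance for multiple requests
first  = service.invoke( aiChatRequest( "Translate 'hello' to French." ) )
second = service.invoke( aiChatRequest( "Translate 'hello' to Spanish." ) )
```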
With Custom Options
Error Handling
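Wrapping invocations in try/catch keeps provider or network failures from escaping unhandled; a sketch with a simple fallback:

```
service = aiService( "openai" )

try {
    response = service.invoke( aiChatRequest( "Classify this support ticket." ) )
} catch ( any e ) {
    // Log, retry, or fall back to a default answer
    response = "The AI service is currently unavailable."
}
```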
Tool-Augmented Generation
Embedding Service
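Providers that support embeddings (such as voyage) expose embed(request). The plain-string argument used here is an assumption about the request shape:

```
service = aiService( "voyage" )

// Generate an embedding vector for a piece of text
vector = service.embed( "The quick brown fox jumps over the lazy dog" )
```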
Provider Comparison Utility
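A small, purely illustrative helper that sends the same prompt to several providers and keys the answers by provider name:

```
function compareProviders( prompt, providers ) {
    var request = aiChatRequest( prompt )
    var results = {}

    for ( var name in providers ) {
        var service = aiService( name )
        // Key each answer by the provider's reported name
        results[ service.getName() ] = service.invoke( request )
    }

    return results
}

comparison = compareProviders( "Define idempotency.", [ "openai", "claude", "gemini" ] )
```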
Service Factory Pattern
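A hypothetical factory (not part of the module) that centralizes provider selection so calling code just asks for a service by purpose:

```
// Hypothetical helper: map a purpose to a provider in one place
function getAIService( purpose = "general" ) {
    switch ( purpose ) {
        case "code":
            return aiService( "deepseek" )
        case "local":
            return aiService( "ollama" )
        default:
            return aiService()
    }
}

service = getAIService( "local" )
```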
Long-Running Service
Provider Detection
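getName() reports which provider a service resolved to, which is handy when the default comes from configuration:

```
service = aiService()

// Branch on the provider that was actually resolved
if ( service.getName() == "ollama" ) {
    // Local model: longer timeouts are fine, no per-token cost tracking needed
}
```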
Notes
🔑 Auto-Detection: Automatically detects <PROVIDER>_API_KEY environment variables
🎯 Direct Control: Use for direct service invocation (not pipelines)
🔄 Reusability: Create service once, invoke multiple times
📦 Provider Agnostic: Same interface across all providers
🔧 Configuration: Supports runtime API key override
🚀 Events: Fires the onAIProviderCreate event for interceptors
💡 vs aiModel(): Use aiService() for direct calls, aiModel() for pipelines
Related Functions
aiModel() - Create model runnables for pipelines
aiChat() - Simple synchronous chat interface
aiChatAsync() - Asynchronous chat interface
aiChatStream() - Streaming chat interface
aiChatRequest() - Build request objects
Best Practices
✅ Reuse services - Create once, invoke multiple times for efficiency
✅ Use environment variables - Set <PROVIDER>_API_KEY for automatic detection
✅ Handle errors - Wrap invocations in try/catch for robustness
✅ Choose right provider - Select based on task requirements (creative, code, local)
✅ Configure once - Set API keys at service creation, not per request
❌ Don't create per request - Services are reusable, create once
❌ Don't use in pipelines - Use aiModel() instead for pipeline integration
❌ Don't hardcode keys - Use environment variables or secure configuration