aiModel
Create an AI Model runnable that wraps a service provider for use in pipelines. This is the pipeline-friendly version of aiService(), designed for composable AI operations.
Syntax
aiModel(provider, apiKey, tools)
Parameters
provider: string, optional, default: (config)
The provider to use (openai, claude, ollama, etc.)

apiKey: string, optional, default: (config/env)
Optional API key override

tools: any, optional, default: []
ITool instance or array of Tool instances for tool-augmented generation
Returns
Returns an AiModel runnable with:
Pipeline integration: run(), stream(), to() methods
Tool binding: bindTools() for function calling
Service wrapping: Access to the underlying service provider
IAiRunnable interface: Compatible with all runnable pipelines
Examples
Basic Model Pipeline
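A minimal sketch of this pattern, assuming aiMessage() also returns an IAiRunnable that can be chained into the model with to(), and that {text} is the template placeholder syntax; the import statement is omitted.

```javascript
// aiModel and aiMessage are assumed to be imported from the library.
const model = aiModel('openai');

// Chain a message template into the model, then execute the pipeline.
const pipeline = aiMessage('Summarize the following text: {text}').to(model);
const summary = await pipeline.run({ text: 'A long article about runnable pipelines...' });
```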
With Transformation
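A sketch chaining an aiTransform() step after the model; the callback form passed to aiTransform() is an assumption.

```javascript
const pipeline = aiMessage('List three keywords for: {topic}')
  .to(aiModel('openai'))
  // Post-process the raw model output into an array (assumed callback signature).
  .to(aiTransform((output) => output.split(',').map((s) => s.trim())));

const keywords = await pipeline.run({ topic: 'vector databases' });
```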
Default Model
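Per the parameter table, both arguments are optional, so a bare call presumably resolves everything from configuration.

```javascript
// Provider and API key fall back to config / environment.
const model = aiModel();
```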
Specific Provider
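Explicit provider names, using the values listed in the parameter description.

```javascript
const openaiModel = aiModel('openai');
const claudeModel = aiModel('claude');
const localModel = aiModel('ollama');
```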
With API Key Override
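The second argument overrides the configured key; the environment variable name here is purely illustrative.

```javascript
// Explicit key instead of the one from config/env.
const model = aiModel('openai', process.env.OPENAI_API_KEY_OVERRIDE);
```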
Template with Model
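A sketch of a reusable message template feeding a model; the {language} and {code} placeholder syntax is assumed.

```javascript
const explainCode = aiMessage('Explain this {language} code:\n{code}').to(aiModel('openai'));

const explanation = await explainCode.run({
  language: 'javascript',
  code: 'const double = (n) => n * 2;',
});
```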
Multi-Step Pipeline
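A sketch of a longer chain; whether a transform's string output can feed a second model step directly is an assumption.

```javascript
const model = aiModel('openai');

const pipeline = aiMessage('Draft a short product description for: {product}')
  .to(model)
  // Turn the first draft into a refinement prompt for a second pass.
  .to(aiTransform((draft) => `Improve the tone of this draft:\n${draft}`))
  .to(model);

const finalCopy = await pipeline.run({ product: 'a solar-powered lantern' });
```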
Streaming with Model
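stream() is documented on the runnable; this sketch assumes it returns an async iterable of text chunks.

```javascript
const pipeline = aiMessage('Write a haiku about {subject}').to(aiModel('openai'));

for await (const chunk of pipeline.stream({ subject: 'the sea' })) {
  process.stdout.write(chunk); // assumes chunks are plain strings
}
```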
Model with Tools
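Tools can be passed as the third parameter or bound with bindTools(); the option shape given to aiTool() and the fetchWeather helper are assumptions for illustration.

```javascript
// Hypothetical tool definition; the real aiTool() options may differ.
const weatherTool = aiTool({
  name: 'get_weather',
  description: 'Look up the current weather for a city',
  handler: async ({ city }) => fetchWeather(city), // fetchWeather is a placeholder
});

const model = aiModel('openai', undefined, weatherTool);
// Equivalent, per the Returns section: aiModel('openai').bindTools(weatherTool)
```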
Multiple Tool Binding
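The tools parameter accepts an array of tool instances; searchTool and calculatorTool stand in for tools created with aiTool() elsewhere, and bindTools() accepting an array is an assumption.

```javascript
const model = aiModel('openai', undefined, [searchTool, calculatorTool]);
// or bind after creation:
// const model = aiModel('openai').bindTools([searchTool, calculatorTool]);
```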
Parallel Model Comparison
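A sketch comparing two providers with plain Promise.all; it assumes to() returns a new pipeline rather than mutating the message, and that run() can be called without template variables.

```javascript
const prompt = aiMessage('Explain the CAP theorem in one paragraph.');

const [fromOpenAI, fromClaude] = await Promise.all([
  prompt.to(aiModel('openai')).run(),
  prompt.to(aiModel('claude')).run(),
]);
```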
Conditional Model Selection
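Provider choice is just a value, so it can be decided at runtime; the environment check is illustrative.

```javascript
// Fall back to a local model when no hosted API key is configured.
const model = process.env.OPENAI_API_KEY ? aiModel('openai') : aiModel('ollama');
```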
Model Pipeline Factory
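A small factory that builds the same pipeline for any provider, reusing the assumed template syntax from the earlier sketches.

```javascript
function summarizerFor(provider) {
  return aiMessage('Summarize in two sentences: {text}').to(aiModel(provider));
}

const summarize = summarizerFor('claude');
const summary = await summarize.run({ text: 'A long quarterly report...' });
```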
Extract and Transform
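A prompt-and-parse sketch; asking for JSON in the prompt and parsing it in a transform is an assumed pattern, not a documented structured-output feature.

```javascript
const pipeline = aiMessage('Return a JSON array of the people named in: {text}')
  .to(aiModel('openai'))
  .to(aiTransform((output) => JSON.parse(output)));

const people = await pipeline.run({ text: 'Ada Lovelace corresponded with Charles Babbage.' });
```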
Error Handling
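Pipeline execution is an ordinary async call, so standard try/catch applies; no library-specific error types are assumed.

```javascript
try {
  const result = await aiMessage('Translate to French: {text}')
    .to(aiModel('openai'))
    .run({ text: 'Good morning' });
  console.log(result);
} catch (err) {
  console.error('Model pipeline failed:', err);
}
```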
Reusable Model Components
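One model instance shared by several pipelines, as recommended under Best Practices below.

```javascript
const model = aiModel('openai');

const summarize = aiMessage('Summarize: {text}').to(model);
const translate = aiMessage('Translate to Spanish: {text}').to(model);

const summary = await summarize.run({ text: 'A long article...' });
const spanish = await translate.run({ text: 'Hello, world' });
```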
Dynamic Tool Binding
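A sketch of binding tools at runtime based on context; adminTool and searchTool are placeholders for aiTool() instances, and bindTools() returning a tool-augmented runnable is an assumption.

```javascript
function modelFor(user) {
  const base = aiModel('openai');
  // Grant different tool sets depending on the caller.
  return user.isAdmin ? base.bindTools([adminTool, searchTool]) : base.bindTools([searchTool]);
}
```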
Notes
🔄 IAiRunnable: Implements full runnable interface for pipelines
🎯 Service Wrapper: Wraps aiService() for pipeline compatibility
🔧 Tool Support: Bind tools for function calling capabilities
📦 Reusable: Create once, use in multiple pipelines
🚀 Events: Fires the onAIModelCreate event for interceptors
💡 Difference: Use aiModel() for pipelines, aiService() for direct invocation
⚡ Performance: Same underlying service, just a different interface
Related Functions
aiService() - Get service provider instances (direct invocation)
aiMessage() - Build messages for model input
aiTransform() - Transform model outputs
aiTool() - Create tools for model function calling
aiAgent() - Create agents (higher-level abstraction)
Best Practices
✅ Use in pipelines - Designed for to() chaining with messages and transforms
✅ Reuse model instances - Create once, use in multiple pipelines
✅ Bind tools early - Attach tools at model creation when they are known up front
✅ Combine with transforms - Chain with aiTransform() for data processing
✅ Template with variables - Use aiMessage() templates for flexibility
❌ Don't use for direct invocation - Use aiService() instead for simple calls
❌ Don't create models per request - Reuse model instances for efficiency
❌ Don't mix with non-runnables - Ensure the entire pipeline uses the IAiRunnable interface