🎯 Event System
The BoxLang AI module provides a comprehensive event system that allows you to intercept, monitor, and customize AI operations at various stages. These events give you fine-grained control over the AI lifecycle, from object creation to request/response handling.
🔍 Overview
The event system allows you to monitor, modify, validate, audit, secure, and customize AI operations without modifying core code.
All Available Events
🔄 Event Lifecycle Diagram
📊 Event Categories
🔌 Event Interception
To listen to events, create an interceptor and register it in your module or application.
🏗️ Interceptor Architecture
Creating an Interceptor
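Here is a minimal sketch of what such a class can look like. It assumes the usual BoxLang convention: an interceptor is a class whose method names match the interception points, and each method receives a single data struct holding the event arguments documented below. The class name, log text, and the getName() call on the provider are illustrative.

```
// AILoggingInterceptor.bx - illustrative name
class {

	// Fired immediately before every HTTP call to the AI provider
	function onAIRequest( data ) {
		// NOTE: getName() on the service is an assumption; use whatever
		// identifier your provider object actually exposes.
		writeLog( text = "Sending AI request via [#data.provider.getName()#]", type = "information" );
	}

	// Fired when the provider reports token usage
	function onAITokenCount( data ) {
		writeLog( text = "Model [#data.model#] used #data.totalTokens# tokens", type = "information" );
	}

}
```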
Registering an Interceptor
For BoxLang Modules (in ModuleConfig.bx):
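One common pattern, sketched below, is to declare the interceptor inside the module's configure() method; verify the exact descriptor keys and class path against the BoxLang module documentation.

```
// ModuleConfig.bx - sketch; the class path is illustrative
class {

	function configure() {
		interceptors = [
			{ class = "bxModules.myModule.interceptors.AILoggingInterceptor" }
		];
	}

}
```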
For Applications/Scripts (use BoxRegisterInterceptor() BIF):
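A sketch of runtime registration with an interceptor instance; see the reference below for the full signature and options.

```
// Register an interceptor instance at runtime (class path is illustrative)
boxRegisterInterceptor( new interceptors.AILoggingInterceptor() );
```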
📖 Reference: BoxRegisterInterceptor() Documentation
📡 Available Events
1. onAIMessageCreate
Fired when an AI message object is created via aiMessage().
When: Message template creation
Frequency: Once per aiMessage() call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| message | AiMessage | The created message object |
Example
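A sketch of a handler for this event; the log message is illustrative.

```
class {

	// Fired once per aiMessage() call
	function onAIMessageCreate( data ) {
		// data.message is the AiMessage object that was just created.
		// You could inspect or enrich it here before it is used downstream.
		writeLog( text = "aiMessage() created a new message template", type = "information" );
	}

}
```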
2. onAIRequestCreate
Fired when an AI request object is created via aiChatRequest().
When: Request object instantiation
Frequency: Once per aiChatRequest() call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| aiRequest | AiRequest | The created request object |
3. onAIProviderRequest
Fired when a provider is requested from the factory.
When: Before provider/service is created or retrieved
Frequency: Once per provider request

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| provider | String | Provider name (e.g., "openai", "claude") |
| apiKey | String | API key (if provided) |
| params | Struct | Request parameters |
| options | Struct | Request options |
4. onAIProviderCreate
Fired when a provider/service instance is created.
When: After provider instantiation
Frequency: Once per unique provider instance

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| provider | IService | The created service instance |
5. onAIModelCreate
Fired when an AI model runnable is created via aiModel().
When: Model wrapper creation
Frequency: Once per aiModel() call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| model | AiModel | The created model runnable |
| service | IService | The underlying service |
6. onAITransformCreate
Fired when a transform runnable is created via aiTransform().
When: Transform function creation
Frequency: Once per aiTransform() call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| transform | AiTransformRunnable | The created transform runnable |
7. beforeAIModelInvoke
Fired before an AI model is invoked (before sending to provider).
When: Before model execution
Frequency: Every model invocation

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| model | AiModel | The model being invoked |
| request | AiRequest | The request being sent |
8. onAIRequest
Fired immediately before sending the HTTP request to the AI provider.
When: Before HTTP request
Frequency: Every API call (including streaming)

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| dataPacket | Struct | The HTTP request data packet |
| aiRequest | AiRequest | The AI request object |
| provider | IService | The service making the request |
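Because dataPacket is the struct that will be serialized into the HTTP body, this is a natural place to audit or adjust outgoing payloads. A sketch of a handler method; the temperature key is provider-specific, so it is guarded and purely illustrative.

```
function onAIRequest( data ) {
	// Audit the exact payload about to go over the wire
	writeLog( text = "Outbound AI payload: #jsonSerialize( data.dataPacket )#", type = "information" );

	// Example mutation: clamp temperature if the provider payload carries one
	if ( data.dataPacket.keyExists( "temperature" ) && data.dataPacket.temperature > 1 ) {
		data.dataPacket.temperature = 1;
	}
}
```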
9. onAIResponse
Fired after receiving the HTTP response from the AI provider.
When: After HTTP response
Frequency: Every API call (including streaming)

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| aiRequest | AiRequest | The original request |
| response | Struct | The deserialized response |
| rawResponse | Struct | The raw HTTP response |
| provider | IService | The service that made the request |
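A sketch of a handler that records basic response metadata; the statusCode key on the raw response is an assumption, so it is guarded.

```
function onAIResponse( data ) {
	// data.response is the deserialized body; data.rawResponse is the raw HTTP response
	var status = data.rawResponse.keyExists( "statusCode" ) ? data.rawResponse.statusCode : "unknown";
	writeLog( text = "AI response received (status: #status#)", type = "information" );
}
```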
10. afterAIModelInvoke
Fired after an AI model completes its invocation.
When: After model execution completes
Frequency: Every model invocation

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| model | AiModel | The model that was invoked |
| request | AiRequest | The request that was sent |
| results | Any | The results returned by the model |
11. onAIToolCreate
Fired when an AI tool is created via aiTool().
When: Tool creation
Frequency: Once per aiTool() call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| tool | Tool | The created tool instance |
| name | String | Tool name |
| description | String | Tool description |
12. beforeAIToolExecute
Fired immediately before a tool's callable function is executed.
When: Before tool execution
Frequency: Every tool call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| tool | Tool | The tool being executed |
| name | String | Tool name |
| arguments | Struct | Arguments passed to the tool |
13. afterAIToolExecute
Fired immediately after a tool's callable function completes execution.
When: After tool execution
Frequency: Every tool call

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| tool | Tool | The tool that was executed |
| name | String | Tool name |
| arguments | Struct | Arguments passed to the tool |
| results | Any | Results returned by the tool |
| executionTime | Numeric | Execution time in milliseconds |
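Since the event carries the execution time, this is a convenient place to spot slow tools. A sketch (the one-second threshold is arbitrary):

```
function afterAIToolExecute( data ) {
	// Flag tools that take longer than one second
	if ( data.executionTime > 1000 ) {
		writeLog(
			text = "Tool [#data.name#] took #data.executionTime#ms with args: #jsonSerialize( data.arguments )#",
			type = "warning"
		);
	}
}
```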
14. onAIError
Fired when an error occurs during AI operations (chat, embeddings, or streaming).
When: Before throwing provider errors
Frequency: Every error condition

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| error | Any | The error object/message from provider |
| errorMessage | String | Formatted error message |
| provider | IService | The provider where the error occurred |
| operation | String | Operation type: "chat", "embeddings", "stream" |
| aiRequest | AiRequest | The request that caused the error (if available) |
| embeddingRequest | AiEmbeddingRequest | For embedding errors |
| canRetry | Boolean | Whether the operation can be retried |
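A sketch that routes provider errors to a log and branches on retryability; the retry handling itself is left to your own infrastructure.

```
function onAIError( data ) {
	writeLog( text = "AI #data.operation# error: #data.errorMessage#", type = "error" );

	// canRetry tells you whether the operation could reasonably be attempted again
	if ( data.canRetry ) {
		// e.g. enqueue a retry or notify an operator (left to your infrastructure)
	}
}
```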
15. onAIRateLimitHit
Fired when a provider returns a 429 (rate limit) HTTP status code.
When: When a rate limit is detected
Frequency: Every rate limit response

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| provider | IService | The provider that hit the rate limit |
| operation | String | Operation type: "chat", "embeddings" |
| statusCode | String | HTTP status code (429) |
| errorData | Struct | Error response from provider |
| aiRequest | AiRequest | The request that hit the limit |
| retryAfter | String | Retry-After header value (if present) |
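A sketch that surfaces the Retry-After hint so a caller or scheduler can back off:

```
function onAIRateLimitHit( data ) {
	var waitHint = data.keyExists( "retryAfter" ) && len( data.retryAfter ) ? data.retryAfter : "unspecified";
	writeLog(
		text = "Rate limit (429) hit during #data.operation#; Retry-After: #waitHint#",
		type = "warning"
	);
}
```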
16. beforeAIPipelineRun
Fired before a runnable pipeline sequence begins execution.
When: Before pipeline execution starts
Frequency: Every pipeline run

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| sequence | AiRunnableSequence | The sequence being executed |
| name | String | Sequence name |
| stepCount | Numeric | Number of steps in pipeline |
| steps | Array | Array of step information |
| input | Any | Initial input to pipeline |
| params | Struct | Parameters passed to pipeline |
| options | Struct | Options passed to pipeline |
17. afterAIPipelineRun
Fired after a runnable pipeline sequence completes execution.
When: After pipeline execution completes
Frequency: Every pipeline run

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| sequence | AiRunnableSequence | The sequence that was executed |
| name | String | Sequence name |
| stepCount | Numeric | Number of steps in pipeline |
| steps | Array | Array of step information |
| input | Any | Initial input to pipeline |
| result | Any | Final result from pipeline |
| executionTime | Numeric | Total execution time in milliseconds |
18. onAITokenCount
Fired when token usage information is available from the AI provider response.
When: After receiving a response with usage data
Frequency: Every successful API call that returns usage

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| provider | IService | The provider used |
| operation | String | Operation type: "chat", "embeddings" |
| model | String | Model name |
| promptTokens | Numeric | Input tokens used |
| completionTokens | Numeric | Output tokens used |
| totalTokens | Numeric | Total tokens (prompt + completion) |
| aiRequest | AiRequest | The request object |
| usage | Struct | Full usage object from provider |
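A sketch of per-model token accounting; the application-scope counters are for illustration only, so swap in whatever store you actually use.

```
function onAITokenCount( data ) {
	// Accumulate totals keyed by model name
	lock scope="application" type="exclusive" timeout="5" {
		application.aiTokenUsage = application.aiTokenUsage ?: {};
		if ( !application.aiTokenUsage.keyExists( data.model ) ) {
			application.aiTokenUsage[ data.model ] = 0;
		}
		application.aiTokenUsage[ data.model ] += data.totalTokens;
	}
}
```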
Event Priority Reference
Events fire in this order during a typical AI chat with tools:
1. onAIMessageCreate - Message template created
2. onAIRequestCreate - Request object created
3. onAIModelCreate - Model wrapper created
4. onAIToolCreate - Tool(s) created (if using tools)
5. beforeAIPipelineRun - Pipeline about to start (if using pipelines)
6. beforeAIModelInvoke - Model about to be invoked
7. onAIRequest - HTTP request about to be sent
8. onAIRateLimitHit - If rate limit encountered
9. onAIResponse - HTTP response received
10. onAITokenCount - Token usage tracked
11. beforeAIToolExecute - Tool about to execute (if AI requested tool call)
12. afterAIToolExecute - Tool execution complete
13. onAIError - If any error occurred
14. afterAIModelInvoke - Model invocation complete
15. afterAIPipelineRun - Pipeline execution complete
19. onMCPServerCreate
Fired when a new MCP (Model Context Protocol) server instance is created.
When: MCP server instantiation via MCPServer()
Frequency: Once per unique server creation

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| server | MCPServer | The created server instance |
| name | String | Server name/identifier |
| description | String | Server description |
| version | String | Server version |
20. onMCPServerRemove
Fired when an MCP server instance is being removed from the registry.
When: Before server removal via MCPServer::removeInstance()
Frequency: Once per server removal

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| name | String | Name of the server being removed |
21. onMCPRequest
Fired before processing an incoming MCP request (JSON-RPC 2.0).
When: After CORS handling, before request processing
Frequency: Every MCP request

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| server | MCPServer | The target server instance |
| requestData | Struct | Request metadata (method, body, urlParams) |
| serverName | String | Server identifier |
22. onMCPResponse
Fired after processing an MCP response, before returning to client.
When: After request handling, before HTTP response
Frequency: Every MCP response

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| server | MCPServer | The server instance |
| response | Struct | Response data (content, contentType, headers, statusCode) |
| requestData | Struct | Original request metadata |
| serverName | String | Server identifier |
23. onMCPError
Fired when an exception occurs during MCP server operations.
When: Exception in request handling, class scanning, or other MCP operations
Frequency: When errors occur

Event Arguments

| Argument | Type | Description |
| --- | --- | --- |
| server | MCPServer | The server instance |
| context | String | Where the error occurred (handleRequest, scanClass, etc.) |
| exception | Struct | Exception object (message, detail, stackTrace, type) |
| method | String | Request method (context: handleRequest) |
| requestId | Any | Request ID (context: handleRequest) |
| params | Struct | Request parameters (context: handleRequest) |
| responseTime | Numeric | Time elapsed in ms (context: handleRequest) |
| errorCode | Numeric | RPC error code (context: handleRequest) |
| classPath | String | Class being scanned (context: scanClass) |
Example
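A sketch of a handler for this event; only the context and exception keys are always present, so request-specific keys are guarded.

```
class {

	function onMCPError( data ) {
		var details = "MCP error in [#data.context#]: #data.exception.message#";

		// method / requestId are only populated when the context is handleRequest
		if ( data.keyExists( "method" ) ) {
			details &= " (method: #data.method#)";
		}

		writeLog( text = details, type = "error" );
	}

}
```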
💡 Common Use Cases
1. Request Logging and Monitoring
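A sketch of a logging interceptor that records every outbound request and completed invocation; log destinations and messages are illustrative.

```
class {

	// Log every outbound request
	function onAIRequest( data ) {
		writeLog( text = "AI request sent: #jsonSerialize( data.dataPacket )#", type = "information" );
	}

	// Log every completed invocation
	function afterAIModelInvoke( data ) {
		writeLog( text = "AI model invocation completed", type = "information" );
	}

}
```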
2. Cost Tracking and Budgeting
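A sketch built on onAITokenCount; the per-1K-token rates are placeholders, so replace them with your provider's actual pricing.

```
class {

	// Placeholder pricing per 1K tokens - adjust to your provider's real rates
	variables.costPer1KPrompt     = 0.0005;
	variables.costPer1KCompletion = 0.0015;

	function onAITokenCount( data ) {
		var cost = ( data.promptTokens / 1000 ) * variables.costPer1KPrompt
			+ ( data.completionTokens / 1000 ) * variables.costPer1KCompletion;

		writeLog(
			text = "Estimated cost for [#data.model#]: $#numberFormat( cost, '0.0000' )#",
			type = "information"
		);
	}

}
```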
3. Response Caching
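A rough sketch that populates a response cache keyed on a hash of the outgoing payload. An interceptor cannot by itself skip the provider call, so checking the cache before invoking the model stays in your calling code; the request scope is used here only to correlate the two events, and the application-scope cache is illustrative.

```
class {

	function onAIRequest( data ) {
		// Remember a cache key for this outbound payload
		request.aiCacheKey = hash( jsonSerialize( data.dataPacket ) );
	}

	function onAIResponse( data ) {
		if ( !isNull( request.aiCacheKey ) ) {
			lock scope="application" type="exclusive" timeout="5" {
				application.aiResponseCache = application.aiResponseCache ?: {};
				application.aiResponseCache[ request.aiCacheKey ] = data.response;
			}
		}
	}

}
```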
4. Content Filtering and Moderation
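A sketch that flags, rather than blocks, outbound payloads that look like they contain PII; the email regex is illustrative, and a real moderation pipeline would be far more thorough.

```
class {

	// Flag outbound prompts that appear to contain email addresses
	function onAIRequest( data ) {
		var payload = jsonSerialize( data.dataPacket );
		if ( reFind( "[\w.+-]+@[\w-]+\.[\w.]+", payload ) ) {
			writeLog( text = "Possible PII (email) detected in outbound AI payload", type = "warning" );
		}
	}

}
```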
5. Multi-Provider Fallback
6. A/B Testing Different Models
7. Adding Safety Guardrails
✅ Best Practices
1. Keep Event Handlers Lightweight
Event handlers are called frequently. Keep processing minimal:
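For example, a counter increment is fine inline, while database writes or remote calls should be deferred or moved elsewhere:

```
function onAIRequest( data ) {
	// Cheap: a counter increment. Avoid database writes or remote calls here.
	request.aiCallCount = ( request.aiCallCount ?: 0 ) + 1;
}
```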
2. Handle Errors Gracefully
Don't let interceptor errors break AI operations:
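A sketch of the pattern: wrap the handler body so an auditing failure never propagates into the AI call itself.

```
function onAIResponse( data ) {
	try {
		// ... your monitoring / persistence logic ...
	} catch ( any e ) {
		// Log and swallow: an auditing failure should never break the AI operation
		writeLog( text = "AI interceptor error: #e.message#", type = "error" );
	}
}
```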
3. Document Side Effects
Make it clear what your interceptors modify:
4. Use Naming Conventions
5. Order Matters
Interceptors execute in registration order. Be mindful:
6. Test Interceptors Independently
Write unit tests for your interceptor logic:
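Since each handler just receives a struct, you can build that struct by hand and call the method directly. A TestBox-style sketch; the interceptor path is illustrative, and the keys follow the onAITokenCount table above.

```
it( "handles token count events without error", () => {
	var interceptor = new interceptors.AILoggingInterceptor(); // illustrative path
	var eventData   = {
		provider         : {},
		operation        : "chat",
		model            : "gpt-4o-mini",
		promptTokens     : 100,
		completionTokens : 50,
		totalTokens      : 150,
		aiRequest        : {},
		usage            : {}
	};

	expect( () => interceptor.onAITokenCount( eventData ) ).notToThrow();
} );
```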
7. Make Interceptors Configurable
📚 Examples
Complete Monitoring Solution
Security and Compliance
Next Steps
Now that you understand the event system, you can:
Monitor: Track AI usage and performance
Secure: Add authentication and content filtering
Optimize: Implement caching and cost controls
Extend: Build custom behaviors without modifying core code
Related Documentation
Pipeline Overview - Understanding AI pipelines
Service-Level Chatting - Direct service control
Additional Resources
BoxLang Interceptor Documentation: Learn more about the interceptor system
Event-Driven Architecture: Best practices for event handling
Security Guidelines: Protecting AI operations
Copyright © 2023-2025 Ortus Solutions, Corp