NovaCore.AgentKit
Production-ready AI agents for .NET - Build ChatAgents for conversations or ReActAgents for autonomous tasks. Clean API, full-featured, 6 LLM providers. Built-in real-time cost tracking.
Quick Start
dotnet add package NovaCore.AgentKit.Core
dotnet add package NovaCore.AgentKit.Providers.Anthropic
await using var agent = await new AgentBuilder()
.UseAnthropic(apiKey, AnthropicModels.ClaudeSonnet45)
.WithSystemPrompt("You are a helpful assistant.")
.BuildChatAgentAsync();
var response = await agent.SendAsync("What is 2+2?");
Console.WriteLine(response.Text); // "4"
Agent Types
| ChatAgent | ReActAgent |
|---|---|
| Stateful conversations | Ephemeral tasks |
| Persistent across sessions | Autonomous execution |
| Human-in-the-loop (UI tools) | Tool-driven reasoning |
| Chat apps, support bots | Research, automation |
ChatAgent API
Basic Usage
var agent = await new AgentBuilder()
.UseOpenAI(apiKey, OpenAIModels.GPT4o)
.WithSystemPrompt("You are a helpful assistant.")
.ForConversation("user-123")
.BuildChatAgentAsync();
var response = await agent.SendAsync("Hello!");
Core Methods
// Send messages
Task<ChatMessage> SendAsync(ChatMessage message)
Task<ChatMessage> SendAsync(string text)
Task<ChatMessage> SendAsync(string text, List<FileAttachment> files)
// Checkpoints
Task CreateCheckpointAsync(string summary, int? upToTurnNumber = null)
Task<ConversationCheckpoint?> GetLatestCheckpointAsync()
// Stats & Management
HistoryStats GetStats()
void ClearHistory()
string ConversationId { get; }
ValueTask DisposeAsync()
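A minimal usage sketch of the methods above (the summary text, turn number, and token threshold are illustrative, not library defaults):

```csharp
// Checkpoint the conversation so far, summarizing the first 20 turns
await agent.CreateCheckpointAsync("User asked about refunds; policy explained.", upToTurnNumber: 20);

// Inspect the latest checkpoint, if one exists
var checkpoint = await agent.GetLatestCheckpointAsync();

// Use stats to decide when to trim in-memory history
var stats = agent.GetStats();
if (stats.EstimatedTokens > 100_000)
    agent.ClearHistory();

await agent.DisposeAsync();
```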
ChatMessage Constructors
new ChatMessage(ChatRole.User, "Hello") // Text
new ChatMessage(ChatRole.User, List<IMessageContent>) // Multimodal
new ChatMessage(ChatRole.Tool, resultJson, toolCallId) // Tool result
new ChatMessage(ChatRole.Assistant, text, List<ToolCall>) // With tool calls
Persistence
// EF Core setup
public class MyDbContext : DbContext
{
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.ConfigureAgentKitModels();
}
}
var historyStore = new EfCoreHistoryStore<MyDbContext>(
dbContext, logger,
tenantId: "tenant-1", // Multi-tenancy
userId: "user-123");
var agent = await new AgentBuilder()
.UseAnthropic(apiKey, model)
.WithHistoryStore(historyStore)
.ForConversation("chat-123")
.BuildChatAgentAsync(); // Auto-loads existing history
Automatic Summarization
.WithSummarization(cfg =>
{
cfg.Enabled = true;
cfg.TriggerAt = 100; // Summarize when we hit 100 messages
cfg.KeepRecent = 10; // Keep last 10 (summarize first 90)
cfg.SummarizationTool = summaryTool;
})
// How it works:
// At 100 msgs → Summarize first 90 → Keep last 10 in memory
// Database retains ALL messages + checkpoint
Tool Result Filtering
.WithToolResultFiltering(cfg =>
{
cfg.KeepRecent = 5; // Keep last 5 tool results, replace others with "[Omitted]"
})
// Recommended KeepRecent values: browser agents 1-3, ReAct agents 5-10, chat agents 5
// Reduces tokens by 80-90% while preserving structure
Multimodal History Limiting
Limit images/screenshots retained in history (useful for computer use scenarios):
// Keep only the most recent screenshot in context
.WithMaxMultimodalMessages(1)
// Or via config
.WithConfig(cfg => cfg.MaxMultimodalMessages = 1)
// Default: null (no limit - all multimodal content retained)
// Older images are stripped but text content is preserved
Live Context Injection
Inject fresh, dynamic context into the system prompt before every LLM API call. Ideal for scenarios where external state changes between turns (e.g., visa case data, CRM records, document uploads, tool call side effects).
var agent = await new AgentBuilder()
.UseAnthropic(apiKey, AnthropicModels.ClaudeSonnet45)
.WithSystemPrompt("You are a helpful visa assistant...")
.WithLiveContext(
async (conversationId) => await _visaCaseService.GetFormattedContext(conversationId),
label: "Current Visa Case State::") // Optional custom label
.ForConversation(conversationId)
.BuildChatAgentAsync();
How it works:
- The context provider is called before every LLM API call (user messages, tool results, etc.)
- Fresh context is appended to the end of the system prompt with the specified label
- If provider returns null/empty, nothing is injected for that call
Example system prompt sent to LLM:
You are a helpful visa assistant...
Current Visa Case State::
Applicant: John Smith
Status: Documents Pending
Uploaded: Passport (verified), Bank Statement (pending review)
Next Step: Upload employment letter
Use cases:
- Customer support with real-time case/ticket data
- Document processing with live extraction results
- Multi-step workflows where tools modify external state
- Any scenario where the "ground truth" changes during conversation
API:
// With default label
.WithLiveContext(async (conversationId) => await GetContext(conversationId))
// With custom label
.WithLiveContext(
async (conversationId) => await GetContext(conversationId),
label: "Current Customer Record::")
// Default label: "Live Conversation Context (updated dynamically, latest)::"
Multimodal
var image = await FileAttachment.FromFileAsync("photo.png");
await agent.SendAsync("What's in this image?", new List<FileAttachment> { image });
// From bytes
var image = FileAttachment.FromBytes(data, "image/jpeg");
// From base64
var image = FileAttachment.FromBase64(base64, "image/png");
ReActAgent API
Basic Usage
var agent = await new AgentBuilder()
.UseOpenAI(apiKey, OpenAIModels.GPT4o)
.AddTool(new SearchTool())
.WithReActConfig(cfg => cfg.MaxTurns = 20)
.BuildReActAgentAsync();
var result = await agent.RunAsync("Find Bitcoin price and calculate 10% of it");
Console.WriteLine(result.FinalAnswer);
Core Methods
// Run with multimodal content (text + images)
Task<ReActResult> RunAsync(ChatMessage taskMessage)
// Convenience: Text only
Task<ReActResult> RunAsync(string task)
// Convenience: Text + file attachments
Task<ReActResult> RunAsync(string task, List<FileAttachment> files)
ValueTask DisposeAsync()
Multimodal Support
// Send images with task
var image = await FileAttachment.FromFileAsync("screenshot.png");
var result = await agent.RunAsync("Analyze this screenshot", new List<FileAttachment> { image });
// Or use ChatMessage for full control
var contents = new List<IMessageContent>
{
new TextMessageContent("What's in this image?"),
new ImageMessageContent(imageBytes, "image/png")
};
var result = await agent.RunAsync(new ChatMessage(ChatRole.User, contents));
ReActConfig
.WithReActConfig(cfg =>
{
cfg.MaxTurns = 20; // Stop after N turns
cfg.DetectStuckAgent = true; // Detect no progress
cfg.BreakOnStuck = false; // Continue even if stuck
})
ReActResult
result.Success // bool
result.FinalAnswer // string
result.TurnsExecuted // int
result.TotalLlmCalls // int
result.Duration // TimeSpan
result.Error // string?
Tools
Tool Types
// Strongly-typed tool
public class WeatherTool : Tool<WeatherArgs, WeatherResult>
{
public override string Name => "get_weather";
public override string Description => "Get current weather";
protected override async Task<WeatherResult> ExecuteAsync(
WeatherArgs args, CancellationToken ct)
{
var temp = await FetchWeatherAsync(args.Location);
return new WeatherResult(Temperature: temp, Condition: "Sunny");
}
}
public record WeatherArgs(string Location);
public record WeatherResult(double Temperature, string Condition);
// Simple tool (auto success/error handling)
public class NotifyTool : SimpleTool<NotifyArgs>
{
public override string Name => "notify";
public override string Description => "Send notification";
protected override Task<string> RunAsync(NotifyArgs args, CancellationToken ct)
{
return Task.FromResult($"Sent: {args.Message}");
}
}
// UI tool (human-in-the-loop)
public class ShowPaymentPageTool : UITool<PaymentArgs, PaymentResult>
{
public override string Name => "show_payment_page";
public override string Description => "Display payment UI";
}
// Manual ITool (full control)
public class CustomTool : ITool
{
public string Name => "custom";
public string Description => "Custom logic";
public JsonDocument ParameterSchema => JsonDocument.Parse(@"{...}");
public Task<string> InvokeAsync(string argsJson, CancellationToken ct)
    => Task.FromResult("result"); // custom logic here
}
// Multimodal tool (returns images/screenshots alongside text)
// Use MultimodalTool<TArgs> base class for convenience
public class ClickTool : MultimodalTool<ClickArgs>
{
public override string Name => "click_at";
public override string Description => "Click at coordinates";
protected override async Task<ToolResult> ExecuteAsync(ClickArgs args, CancellationToken ct)
{
// Perform action
await page.Mouse.ClickAsync(args.X, args.Y);
// Capture screenshot after action
var screenshot = await page.ScreenshotAsync();
var url = page.Url;
// Return JSON result + screenshot
return new ToolResult
{
Text = JsonSerializer.Serialize(new { url }),
AdditionalContent = new ImageMessageContent(screenshot, "image/png")
};
}
}
public record ClickArgs(
[property: JsonPropertyName("x")] int X,
[property: JsonPropertyName("y")] int Y
);
Google Computer Use (Browser Automation)
Build browser control agents with Gemini 2.5 Computer Use model:
// 1. Setup agent with Computer Use
var agent = await new AgentBuilder()
.UseGoogleComputerUse(
apiKey: "your-google-api-key",
excludedFunctions: new List<string> { "drag_and_drop" }) // Optional
.AddTool(new ClickTool(browserContext), skipDefinition: true)
.AddTool(new TypeTextTool(browserContext), skipDefinition: true)
.AddTool(new NavigateTool(browserContext), skipDefinition: true)
.BuildChatAgentAsync();
// 2. Send task with screenshot
var screenshot = await page.ScreenshotAsync();
var response = await agent.SendAsync(
"Search for smart fridges under $4000 on Google Shopping",
new List<FileAttachment> { FileAttachment.FromBytes(screenshot, "image/png") });
Implementation Requirements:
- Inherit from `MultimodalTool<TArgs>` (not `Tool<TArgs, TResult>`)
- Always return `ToolResult` with:
  - `Text`: JSON object containing a `url` field
  - `AdditionalContent`: screenshot as `ImageMessageContent`
- Use `skipDefinition: true` for Computer Use predefined tools
- Normalize coordinates: the model outputs 0-999; convert to actual pixels
- Recommended screen size: 1440x900
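The coordinate normalization above is plain arithmetic. A minimal sketch, assuming the recommended 1440x900 viewport and a 0-999 grid scaled by division by 999 (the exact scaling convention should be confirmed against the Computer Use documentation):

```csharp
// Convert Computer Use model coordinates (0-999) to actual pixel coordinates.
static (int X, int Y) ToPixels(int normX, int normY, int width = 1440, int height = 900)
{
    // Scale from the 0-999 grid to the real viewport.
    int x = (int)Math.Round(normX / 999.0 * (width - 1));
    int y = (int)Math.Round(normY / 999.0 * (height - 1));
    return (x, y);
}

// ToPixels(0, 0)     → (0, 0)
// ToPixels(999, 999) → (1439, 899)
```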
Supported Actions: click_at, type_text_at, navigate, scroll_document, go_back, go_forward, key_combination, drag_and_drop, hover_at, scroll_at
See GOOGLE_COMPUTER_USE.md for full documentation.
Hidden Tools (Skip Definition)
For models pre-trained on specific tools (e.g., Gemini computer use), you can add tools without sending their definitions to the model:
// Tool is available for execution but NOT sent to the model
// Use case: Model is pre-trained on this tool's interface
.AddTool(new ComputerUseTool(), skipDefinition: true)
// Normal behavior (default) - tool definition sent to model
.AddTool(new MyCustomTool())
UI Tools (Human-in-the-Loop)
.AddUITool(new ShowPaymentPageTool())
// Execution pauses at UI tool
var response = await agent.SendAsync("I want to pay");
if (response.ToolCalls?.Any() == true)
{
var toolCall = response.ToolCalls[0];
var args = JsonSerializer.Deserialize<PaymentArgs>(toolCall.Arguments);
// Show UI to user
var result = await ShowPaymentUIAsync(args);
// Send result back
var toolResult = new ChatMessage(
ChatRole.Tool,
JsonSerializer.Serialize(result),
toolCall.Id);
var finalResponse = await agent.SendAsync(toolResult);
}
LLM Providers
Anthropic Claude
.UseAnthropic(options =>
{
options.ApiKey = apiKey;
options.Model = AnthropicModels.ClaudeSonnet45;
options.UseExtendedThinking = true; // Deep reasoning
options.EnablePromptCaching = true; // Cost optimization
options.ThinkingBudgetTokens = 10000;
options.MaxTokens = 4096;
options.Temperature = 1.0;
options.TopP = 0.9;
options.TopK = 40;
})
// Models
AnthropicModels.ClaudeSonnet45 // claude-sonnet-4-5-20250929
AnthropicModels.ClaudeSonnet4 // claude-sonnet-4-20250514
AnthropicModels.ClaudeSonnet37 // claude-3-7-sonnet-20250219
AnthropicModels.ClaudeSonnet35 // claude-3-5-sonnet-20241022
AnthropicModels.ClaudeHaiku35 // claude-3-5-haiku-20241022
OpenAI
.UseOpenAI(options =>
{
options.ApiKey = apiKey;
options.Model = OpenAIModels.GPT4o;
options.UseStructuredOutputs = true;
options.Seed = 42; // Deterministic
options.ResponseFormat = "json_object";
options.MaxTokens = 4096;
options.Temperature = 0.7;
options.FrequencyPenalty = 0.5;
// GPT-5.1 specific options
options.ReasoningEffort = "high"; // none, low, medium, high
options.PromptCacheRetention = "24h"; // Extended caching
})
// Models
OpenAIModels.GPT51 // gpt-5.1 (advanced reasoning, 400K context)
OpenAIModels.GPT51Mini // gpt-5.1-mini (faster, cost-effective)
OpenAIModels.GPT51Nano // gpt-5.1-nano (most economical)
OpenAIModels.GPT4o // gpt-4o
OpenAIModels.GPT4oMini // gpt-4o-mini
OpenAIModels.O1 // o1 (reasoning model)
OpenAIModels.O1Mini // o1-mini (reasoning model)
OpenAIModels.O1Preview // o1-preview (reasoning model)
OpenAIModels.GPT4Turbo // gpt-4-turbo
OpenAIModels.GPT4TurboPreview // gpt-4-turbo-preview
OpenAIModels.GPT4 // gpt-4
OpenAIModels.GPT35Turbo // gpt-3.5-turbo
Google Gemini
Google AI Studio (API Key):
// Simple: API key + model
.UseGoogle(apiKey, GoogleModels.Gemini25Flash)
// Full options
.UseGoogle(options =>
{
options.ApiKey = apiKey;
options.Model = GoogleModels.Gemini3FlashPreview;
options.EnableGrounding = true; // Google Search
options.SafetyLevel = "BLOCK_MEDIUM_AND_ABOVE";
options.MaxOutputTokens = 8192;
options.Temperature = 0.7;
options.ThinkingLevel = ThinkingLevel.Medium; // For Gemini 3 models
})
Vertex AI (GCP Service Account):
// Simple: Project + Location + Credentials
.UseGoogleVertexAI(
project: "my-gcp-project",
location: "us-central1",
credentialsJson: serviceAccountJson, // JSON content as string
model: GoogleModels.Gemini3FlashPreview)
// Full options
.UseGoogle(options =>
{
options.VertexAI = true;
options.Project = "my-gcp-project";
options.Location = "us-central1";
options.CredentialsJson = serviceAccountJson; // Or omit to use ADC
options.Model = GoogleModels.Gemini3FlashPreview;
options.MaxOutputTokens = 8192;
options.ThinkingLevel = ThinkingLevel.High;
})
Vertex AI Authentication:
- `CredentialsJson`: pass the content of your GCP service account JSON file as a string
- If `CredentialsJson` is omitted, Application Default Credentials (ADC) will be used
- Host apps typically load credentials from environment variables or secret managers
// Example: Load from environment variable
var credentialsJson = Environment.GetEnvironmentVariable("GOOGLE_CREDENTIALS_JSON");
.UseGoogleVertexAI("my-project", "us-central1", credentialsJson)
// Example: Load from file
var credentialsJson = await File.ReadAllTextAsync("service-account.json");
.UseGoogleVertexAI("my-project", "us-central1", credentialsJson)
// Models
GoogleModels.Gemini3FlashPreview // gemini-3-flash-preview (thinking capabilities)
GoogleModels.Gemini3ProPreview // gemini-3-pro-preview
GoogleModels.Gemini25Pro // gemini-2.5-pro
GoogleModels.Gemini25Flash // gemini-2.5-flash
GoogleModels.Gemini25FlashLite // gemini-2.5-flash-lite
GoogleModels.GeminiFlashLatest // gemini-flash-latest
GoogleModels.GeminiFlashLiteLatest // gemini-flash-lite-latest
Note: Gemini 3 models use "thought signatures" for function calling continuity. This is handled automatically by the library - signatures are captured from API responses and passed back transparently during multi-step tool interactions.
xAI Grok
.UseXAI(options =>
{
options.ApiKey = apiKey;
options.Model = XAIModels.Grok4FastNonReasoning;
})
// Models
XAIModels.Grok4FastReasoning // grok-4-fast-reasoning
XAIModels.Grok4FastNonReasoning // grok-4-fast-non-reasoning
XAIModels.GrokCodeFast1 // grok-code-fast-1
Groq (Ultra-fast)
.UseGroq(options =>
{
options.ApiKey = apiKey;
options.Model = GroqModels.Llama3_3_70B;
})
// Models (Recommended: Llama 3.3 or Qwen for tool calling)
GroqModels.Llama3_3_70B // llama-3.3-70b-versatile (128K, excellent tool calling)
GroqModels.Llama3_1_70B // llama-3.1-70b-versatile (128K, reliable tool calling)
GroqModels.Llama3_1_8B // llama-3.1-8b-instant (fast)
GroqModels.Llama4Maverick17B // meta-llama/llama-4-maverick-17b-128e-instruct (128K context, 8K max completion)
GroqModels.Qwen3_32B // qwen/qwen3-32b (128K context, 40K max completion)
GroqModels.GptOss120B // openai/gpt-oss-120b (UNRELIABLE tool calling)
GroqModels.GptOss20B // openai/gpt-oss-20b (UNRELIABLE tool calling)
GroqModels.KimiK2Instruct // moonshotai/kimi-k2-instruct-0905 (256K context, 16K max completion)
OpenRouter (Any Model)
.UseOpenRouter(options =>
{
options.ApiKey = apiKey;
options.Model = "anthropic/claude-3.5-sonnet";
options.AllowFallbacks = true;
options.ProviderPreferences = new List<string> { "Anthropic", "OpenAI" };
options.HttpReferer = "https://myapp.com";
options.AppTitle = "MyApp";
})
// Any model string supported by OpenRouter
MCP (Model Context Protocol)
Add powerful tools with zero code:
var mcpConfig = new McpConfiguration
{
Command = "npx",
Arguments = new List<string> { "-y", "@playwright/mcp" }
};
var mcpFactory = new McpClientFactory(loggerFactory);
var agent = await new AgentBuilder()
.UseAnthropic(apiKey, model)
.WithMcpClientFactory(mcpFactory)
.WithMcp(mcpConfig) // 21 browser automation tools added
.BuildChatAgentAsync();
await agent.SendAsync("Go to example.com and take a screenshot");
Filter tools to reduce context size:
// Only expose specific tools (reduces tokens & confusion)
var allowedTools = new List<string>
{
"playwright_navigate",
"playwright_screenshot",
"playwright_click"
};
var agent = await new AgentBuilder()
.UseAnthropic(apiKey, model)
.WithMcpClientFactory(mcpFactory)
.WithMcp(mcpConfig, allowedTools) // Only 3 tools exposed
.BuildChatAgentAsync();
Popular MCP servers:
- `@playwright/mcp` - Browser automation
- `@modelcontextprotocol/server-filesystem` - File operations
- `@modelcontextprotocol/server-github` - GitHub API
- `@modelcontextprotocol/server-slack` - Slack integration
Configuration
AgentConfig
.WithConfig(cfg =>
{
cfg.SystemPrompt = "You are a helpful assistant";
cfg.MaxToolRoundsPerTurn = 10;
cfg.EnableTurnValidation = true; // Auto-fix invalid sequences
cfg.EnableOutputSanitization = true; // Clean model outputs
})
SanitizationOptions
cfg.Sanitization.RemoveThinkingTags = true; // Remove <thinking> tags
cfg.Sanitization.UnwrapJsonFromMarkdown = true; // Extract from ```json
cfg.Sanitization.TrimWhitespace = true;
cfg.Sanitization.RemoveNullCharacters = true;
Observer Pattern (Observability)
Get real-time visibility into agent execution:
public class MyObserver : IAgentObserver
{
public void OnLlmRequest(LlmRequestEvent evt)
{
Console.WriteLine($"→ LLM Request: {evt.Messages.Count} messages, {evt.ToolCount} tools");
// Monitor token usage to prevent context overflow
var estimatedTokens = EstimateTokens(evt.Messages);
if (estimatedTokens > 100_000)
_logger.LogWarning("Large context: {Tokens} tokens", estimatedTokens);
}
public void OnLlmResponse(LlmResponseEvent evt)
{
Console.WriteLine($"← LLM Response: {evt.Usage?.TotalTokens} tokens in {evt.Duration.TotalSeconds:F2}s");
// Cost tracking
if (evt.Usage != null)
_costTracker.RecordUsage(evt.Usage.InputTokens, evt.Usage.OutputTokens);
// New: Built-in cost tracking
Console.WriteLine($"Model: {evt.ModelName}");
Console.WriteLine($"Cost: ${evt.Usage?.TotalCost:F6}");
}
public void OnToolExecutionStart(ToolExecutionStartEvent evt)
{
Console.WriteLine($"🔧 Tool starting: {evt.ToolName}");
}
public void OnToolExecutionComplete(ToolExecutionCompleteEvent evt)
{
Console.WriteLine($"✓ Tool completed: {evt.ToolName} ({evt.Duration.TotalSeconds:F2}s)");
// Log errors
if (evt.Error != null)
_logger.LogError(evt.Error, "Tool {Tool} failed", evt.ToolName);
}
public void OnTurnStart(TurnStartEvent evt)
{
Console.WriteLine($"Turn starting: {evt.UserMessage}");
}
public void OnTurnComplete(TurnCompleteEvent evt)
{
Console.WriteLine($"Turn complete: {evt.Duration.TotalSeconds:F2}s");
// New: Built-in cost and token tracking
Console.WriteLine($"Total cost: ${evt.Result.TotalCost:F4}");
Console.WriteLine($"Total tokens: {evt.Result.TotalInputTokens + evt.Result.TotalOutputTokens}");
}
public void OnError(ErrorEvent evt)
{
_logger.LogError(evt.Exception, "Error in {Phase}", evt.Phase);
}
}
// Use observer
.WithObserver(new MyObserver())
Available Events:
| Event | Fires When | Key Data |
|---|---|---|
| `OnTurnStart` | Turn begins | UserMessage |
| `OnTurnComplete` | Turn ends | AgentTurn, Duration, Cost & Tokens |
| `OnLlmRequest` | Before LLM API call | Messages, ToolCount |
| `OnLlmResponse` | After LLM responds | ModelName, Text, ToolCalls, Usage, Duration, Cost |
| `OnToolExecutionStart` | Tool starts | ToolName, Arguments |
| `OnToolExecutionComplete` | Tool finishes | ToolName, Result, Duration, Error? |
| `OnError` | Any error occurs | Exception, Phase |
Event Context (all events):
evt.Context.Timestamp // DateTime
evt.Context.ConversationId // string? (null for ReActAgent)
evt.Context.MessageCount // int
Common Use Cases:
// Token monitoring
public void OnLlmRequest(LlmRequestEvent evt)
{
if (EstimateTokens(evt.Messages) > 500_000)
throw new InvalidOperationException("Token limit exceeded!");
}
// Cost tracking
public void OnLlmResponse(LlmResponseEvent evt)
{
_costs.Add(evt.Usage?.TotalCost ?? 0); // New: Built-in cost tracking
}
// Progress UI
public void OnToolExecutionStart(ToolExecutionStartEvent evt)
{
_progressBar.UpdateStatus($"Running {evt.ToolName}...");
}
// Debug trace
public void OnLlmRequest(LlmRequestEvent evt)
{
File.AppendAllText("trace.log",
$"{evt.Context.Timestamp}: LLM call with {evt.Messages.Count} messages\n");
}
AgentBuilder API
// LLM Provider (required - choose one)
.UseAnthropic(apiKey, model)
.UseAnthropic(options => { })
.UseOpenAI(apiKey, model)
.UseOpenAI(options => { })
.UseGoogle(apiKey, model)
.UseGoogle(options => { })
.UseGoogleVertexAI(project, location, credentialsJson?, model?)
.UseXAI(apiKey, model)
.UseXAI(options => { })
.UseGroq(apiKey, model)
.UseGroq(options => { })
.UseOpenRouter(apiKey, model)
.UseOpenRouter(options => { })
.UseLlmClient(customClient)
// Tools
.AddTool(tool) // Internal execution
.AddTool(tool, skipDefinition: true) // NEW: Execute without sending definition to model
.AddTools(tools)
.AddUITool(uiTool) // Human-in-the-loop (pauses)
.AddUITools(uiTools)
.WithMcp(mcpConfig) // MCP server (auto-discovery)
.WithMcpClientFactory(factory)
// Configuration
.WithSystemPrompt(prompt)
.ForConversation(id) // ChatAgent only
.WithHistoryStore(store) // ChatAgent only
.WithSummarization(cfg) // ChatAgent only
.WithToolResultFiltering(cfg) // Tool output filtering
.WithMaxMultimodalMessages(n) // Limit images in history (computer use)
.WithLiveContext(provider, label) // Inject fresh context every LLM call
.WithConfig(cfg) // Agent behavior
.WithObserver(observer) // Observability events
.WithReActConfig(cfg) // ReActAgent settings
.WithHistoryManager(manager) // Custom history
.WithCostCalculator(calculator) // Custom cost calculator
.WithModel(modelName) // Set model name for cost tracking
// Build
.BuildChatAgentAsync() // Interactive conversations
.BuildReActAgentAsync() // Autonomous tasks
Statistics & Monitoring
History Statistics
var stats = agent.GetStats();
stats.TotalMessages // int
stats.UserMessages // int
stats.AssistantMessages // int
stats.ToolMessages // int
stats.EstimatedTokens // int
stats.CompressionCount // int
Real-Time Monitoring (Observer)
public class MonitoringObserver : IAgentObserver
{
private int _totalTokens = 0;
private decimal _totalCost = 0;
private readonly Stopwatch _sessionTimer = Stopwatch.StartNew();
public void OnLlmResponse(LlmResponseEvent evt)
{
if (evt.Usage != null)
{
_totalTokens += evt.Usage.TotalTokens;
_totalCost += evt.Usage.TotalCost; // New: Built-in cost tracking
Console.WriteLine($"Session: {_totalTokens} tokens, ${_totalCost:F4}, {_sessionTimer.Elapsed}");
}
}
}
Advanced Features
Custom History Store
public class RedisHistoryStore : IHistoryStore
{
public Task AppendMessageAsync(string conversationId, ChatMessage message, CancellationToken ct) { }
public Task<List<ChatMessage>?> LoadAsync(string conversationId, CancellationToken ct) { }
public Task DeleteAsync(string conversationId, CancellationToken ct) { }
public Task<List<string>> ListConversationsAsync(CancellationToken ct) { }
public Task CreateCheckpointAsync(string conversationId, ConversationCheckpoint checkpoint, CancellationToken ct) { }
public Task<ConversationCheckpoint?> GetLatestCheckpointAsync(string conversationId, CancellationToken ct) { }
}
.WithHistoryStore(new RedisHistoryStore(redis))
Cost Tracking
Built-in real-time cost tracking - Monitor LLM API costs automatically:
public class CostObserver : IAgentObserver
{
public void OnLlmResponse(LlmResponseEvent evt)
{
// Per-response cost tracking
Console.WriteLine($"Model: {evt.ModelName}");
Console.WriteLine($"Cost: ${evt.Usage.TotalCost:F6}");
Console.WriteLine($" Input: ${evt.Usage.InputCost:F6} ({evt.Usage.InputTokens} tokens)");
Console.WriteLine($" Output: ${evt.Usage.OutputCost:F6} ({evt.Usage.OutputTokens} tokens)");
}
public void OnTurnComplete(TurnCompleteEvent evt)
{
// Turn-level summary
Console.WriteLine($"Turn cost: ${evt.Result.TotalCost:F4}");
Console.WriteLine($"Total tokens: {evt.Result.TotalInputTokens + evt.Result.TotalOutputTokens}");
}
}
.WithObserver(new CostObserver())
Automatic pricing for all major providers:
- OpenAI: GPT-5.1 series, GPT-4o, GPT-4o-mini, o1 series, GPT-4, GPT-3.5-turbo
- Anthropic: Claude 4.5, 4, 3.7 series
- Google: Gemini 2.5 Pro, Flash, Flash Lite
- XAI: Grok 4 variants
- Unknown models: cost is reported as $0.00
Cost tracking events:
- `OnLlmResponse`: per LLM call (granular)
- `OnTurnComplete`: per turn (summary)
- `OnError`: zero cost on errors
Rate Limiting
Implement rate limiting using the observer pattern:
public class RateLimitingObserver : IAgentObserver
{
private readonly Queue<DateTime> _requestTimes = new();
private readonly int _maxRequestsPerMinute = 60;
public void OnLlmRequest(LlmRequestEvent evt)
{
// Wait for rate limit slot
lock (_requestTimes)
{
var oneMinuteAgo = DateTime.UtcNow.AddMinutes(-1);
while (_requestTimes.Count > 0 && _requestTimes.Peek() < oneMinuteAgo)
_requestTimes.Dequeue();
if (_requestTimes.Count >= _maxRequestsPerMinute)
{
var waitTime = _requestTimes.Peek().AddMinutes(1) - DateTime.UtcNow;
Thread.Sleep(waitTime);
}
_requestTimes.Enqueue(DateTime.UtcNow);
}
}
}
OpenTelemetry (Metrics)
AgentKit can emit per-LLM-call (granular) metrics for tokens and cost via OpenTelemetry Metrics.
1) Install the package:
dotnet add package NovaCore.AgentKit.Extensions.OpenTelemetry
2) Configure your host app's OpenTelemetry Metrics pipeline/exporter (Prometheus, OTLP, etc).
AgentKit metrics are produced under the Meter:
novacore.agentkit
Metric instruments emitted (per LLM response, as deltas):
- `novacore.agentkit.llm.input_tokens` (Counter, unit: `token`)
- `novacore.agentkit.llm.output_tokens` (Counter, unit: `token`)
- `novacore.agentkit.llm.input_cost_usd` (Counter, unit: `USD`)
- `novacore.agentkit.llm.output_cost_usd` (Counter, unit: `USD`)
Dimensions (labels):
- `model`
- `host_app` (auto-detected from entry assembly)
- `pricing_known` (`false` when tokens exist but pricing is unknown → cost is $0.00)
Auto-wiring behavior:
- If the `NovaCore.AgentKit.Extensions.OpenTelemetry` package is referenced, AgentKit automatically attaches the metrics observer internally (no `.WithObserver(...)` required).
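A minimal host-side pipeline sketch using the OpenTelemetry .NET SDK. The OTLP exporter is an assumption for illustration; any configured metrics exporter (Prometheus, console, etc.) works the same way:

```csharp
using OpenTelemetry;
using OpenTelemetry.Metrics;

// Subscribe the host app's metrics pipeline to the AgentKit meter.
// AgentKit emits to this meter automatically once the extensions
// package is referenced.
using var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("novacore.agentkit")   // the Meter name documented above
    .AddOtlpExporter()               // or AddPrometheusExporter(), AddConsoleExporter(), ...
    .Build();
```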
Key Types & Interfaces
Core
// Chat Messages
class ChatMessage
{
ChatRole Role { get; init; }
string? Text { get; init; }
List<IMessageContent>? Contents { get; init; }
List<ToolCall>? ToolCalls { get; init; }
string? ToolCallId { get; init; }
// Constructors
ChatMessage(ChatRole role, string? text, string? toolCallId = null)
ChatMessage(ChatRole role, string? text, List<ToolCall>? toolCalls)
ChatMessage(ChatRole role, List<IMessageContent> contents, string? toolCallId = null)
}
enum ChatRole { System, User, Assistant, Tool }
// Message Content (for multimodal messages)
interface IMessageContent { string ContentType { get; } }
record TextMessageContent(string Text) : IMessageContent
{
string ContentType => "text";
}
record ImageMessageContent(byte[] Data, string MimeType) : IMessageContent
{
string ContentType => "image";
}
record ToolCallMessageContent(string CallId, string ToolName, string ArgumentsJson) : IMessageContent
{
string ContentType => "tool_call";
}
record ToolResultMessageContent(string CallId, string Result, bool IsError = false) : IMessageContent
{
string ContentType => "tool_result";
}
// File Attachments
class FileAttachment
{
byte[] Data { get; init; }
string MediaType { get; init; }
string? FileName { get; init; }
bool IsImage { get; } // Check if image type
long Size { get; } // File size in bytes
// Factory methods
static FileAttachment FromBytes(byte[] data, string mediaType, string? fileName = null)
static FileAttachment FromBase64(string base64Data, string mediaType, string? fileName = null)
static Task<FileAttachment> FromFileAsync(string filePath, CancellationToken ct = default)
// Utility methods
ImageMessageContent ToMessageContent()
string ToBase64()
}
// Tool Calls
class ToolCall
{
string Id { get; init; } // Unique tool call ID
string FunctionName { get; init; } // Tool/function name
string Arguments { get; init; } // Arguments as JSON string
}
// Tools
interface ITool
{
string Name { get; }
string Description { get; }
JsonDocument ParameterSchema { get; }
Task<string> InvokeAsync(string argsJson, CancellationToken ct);
}
interface IUITool : ITool { } // Marker for human-in-the-loop tools
// NEW: Multimodal tool (returns images/screenshots alongside text)
interface IMultimodalTool : ITool
{
Task<ToolResult> InvokeWithResultAsync(string argsJson, CancellationToken ct);
}
// NEW: Result from multimodal tool
class ToolResult
{
string Text { get; init; } // Text result (for tool call response)
IMessageContent? AdditionalContent { get; init; } // Optional image/audio content
// Implicit conversion from string for backwards compatibility
static implicit operator ToolResult(string text);
}
abstract class Tool<TArgs, TResult> : ITool
{
abstract string Name { get; }
abstract string Description { get; }
protected abstract Task<TResult> ExecuteAsync(TArgs args, CancellationToken ct);
}
abstract class SimpleTool<TArgs> : Tool<TArgs, ToolResponse>
{
protected abstract Task<string> RunAsync(TArgs args, CancellationToken ct);
}
abstract class UITool<TArgs, TResult> : Tool<TArgs, TResult>, IUITool { }
// LLM Client
interface ILlmClient
{
Task<LlmResponse> CompleteAsync(List<LlmMessage> messages, LlmOptions? options = null, CancellationToken ct = default);
IAsyncEnumerable<LlmStreamingUpdate> StreamAsync(List<LlmMessage> messages, LlmOptions? options = null, CancellationToken ct = default);
}
class LlmMessage
{
MessageRole Role { get; init; }
string? Text { get; init; }
List<IMessageContent>? Contents { get; init; }
string? ToolCallId { get; init; }
}
enum MessageRole { System, User, Assistant, Tool }
class LlmOptions
{
int? MaxTokens { get; set; }
double? Temperature { get; set; }
double? TopP { get; set; }
List<string>? StopSequences { get; set; }
Dictionary<string, LlmTool>? Tools { get; set; }
Dictionary<string, object>? AdditionalProperties { get; set; }
}
class LlmTool
{
string Name { get; init; }
string Description { get; init; }
JsonElement ParameterSchema { get; init; }
}
class LlmResponse
{
string? Text { get; init; }
List<LlmToolCall>? ToolCalls { get; init; }
LlmFinishReason? FinishReason { get; init; }
LlmUsage? Usage { get; init; }
}
class LlmStreamingUpdate
{
string? TextDelta { get; init; }
LlmToolCall? ToolCall { get; init; }
LlmFinishReason? FinishReason { get; init; }
LlmUsage? Usage { get; init; }
}
class LlmUsage
{
int InputTokens { get; init; }
int OutputTokens { get; init; }
int TotalTokens { get; } // Computed: InputTokens + OutputTokens
decimal InputCost { get; init; } // Cost for input tokens (USD)
decimal OutputCost { get; init; } // Cost for output tokens (USD)
decimal TotalCost { get; } // Computed: InputCost + OutputCost
}
History
// History Store (persistent storage)
interface IHistoryStore
{
Task AppendMessageAsync(string conversationId, ChatMessage message, CancellationToken ct = default);
Task AppendMessagesAsync(string conversationId, List<ChatMessage> messages, CancellationToken ct = default);
Task SaveAsync(string conversationId, List<ChatMessage> history, CancellationToken ct = default);
Task<List<ChatMessage>?> LoadAsync(string conversationId, CancellationToken ct = default);
Task DeleteAsync(string conversationId, CancellationToken ct = default);
Task<List<string>> ListConversationsAsync(CancellationToken ct = default);
Task<int> GetMessageCountAsync(string conversationId, CancellationToken ct = default);
Task CreateCheckpointAsync(string conversationId, ConversationCheckpoint checkpoint, CancellationToken ct = default);
Task<ConversationCheckpoint?> GetLatestCheckpointAsync(string conversationId, CancellationToken ct = default);
Task<(ConversationCheckpoint? checkpoint, List<ChatMessage> messages)> LoadFromCheckpointAsync(string conversationId, CancellationToken ct = default);
}
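A typical persistence flow through `IHistoryStore` is sketched below. The `store` variable is any `IHistoryStore` implementation (for example, from NovaCore.AgentKit.EntityFramework), and `message` is an existing `ChatMessage`; both are assumed here:

```csharp
// Sketch: persist a message, then resume the conversation from its
// latest checkpoint instead of replaying the full history.
const string conversationId = "user-123";

await store.AppendMessageAsync(conversationId, message);

// LoadFromCheckpointAsync returns the checkpoint (if any) plus only the
// messages that came after it; the summary stands in for older turns.
var (checkpoint, recent) = await store.LoadFromCheckpointAsync(conversationId);
if (checkpoint is not null)
    Console.WriteLine($"Summarized up to turn {checkpoint.UpToTurnNumber}: {checkpoint.Summary}");
Console.WriteLine($"{recent.Count} messages since checkpoint");
```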
// History Manager (in-memory)
interface IHistoryManager
{
void AddMessage(ChatMessage message);
List<ChatMessage> GetHistory();
void ReplaceHistory(List<ChatMessage> history);
void Clear();
HistoryStats GetStats();
}
// History Selector (context selection)
interface IHistorySelector
{
List<ChatMessage> SelectMessagesForContext(List<ChatMessage> fullHistory, ToolResultConfig config);
List<ChatMessage> SelectMessagesForContext(List<ChatMessage> fullHistory, ConversationCheckpoint? checkpoint, ToolResultConfig config);
}
// Conversation Checkpoint (summarization point)
class ConversationCheckpoint
{
int UpToTurnNumber { get; init; } // Turn number summarized up to (inclusive)
string Summary { get; init; } // Summary text
DateTime CreatedAt { get; init; } // When checkpoint was created
Dictionary<string, object>? Metadata { get; init; } // Optional metadata
}
// History Statistics
class HistoryStats
{
int TotalMessages { get; init; } // Total message count
int UserMessages { get; init; } // User message count
int AssistantMessages { get; init; } // Assistant message count
int ToolMessages { get; init; } // Tool message count
int EstimatedTokens { get; init; } // Estimated token count
int CompressionCount { get; init; } // Compression count (deprecated, always 0)
}
MCP
// MCP Client (Model Context Protocol)
interface IMcpClient : IAsyncDisposable
{
Task ConnectAsync(CancellationToken ct = default);
Task<List<McpToolDefinition>> DiscoverToolsAsync(CancellationToken ct = default);
Task<McpToolResult> CallToolAsync(string toolName, Dictionary<string, object?> arguments, CancellationToken ct = default);
McpConnectionStatus ConnectionStatus { get; }
}
// MCP Configuration
interface IMcpConfiguration
{
string Command { get; } // Command to execute (e.g., "npx")
List<string> Arguments { get; } // Command arguments
Dictionary<string, string> Environment { get; } // Environment variables
string? WorkingDirectory { get; } // Working directory
}
class McpConfiguration : IMcpConfiguration
{
string Command { get; set; }
List<string> Arguments { get; set; }
Dictionary<string, string> Environment { get; set; }
string? WorkingDirectory { get; set; }
}
// MCP Tool Definition
class McpToolDefinition
{
string Name { get; init; } // Tool name
string Description { get; init; } // Tool description
JsonElement InputSchema { get; init; } // JSON schema for parameters
}
// MCP Tool Result
class McpToolResult
{
bool Success { get; init; } // Whether execution succeeded
JsonElement? Data { get; init; } // Result data
string? Error { get; init; } // Error message if failed
}
// MCP Connection Status
enum McpConnectionStatus
{
Disconnected,
Connecting,
Connected,
Failed
}
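Putting the MCP types together, a connect-discover-call round trip might look like the sketch below. The filesystem server command is just an example, and the factory that turns an `McpConfiguration` into an `IMcpClient` is a hypothetical placeholder (the actual construction API lives in the NovaCore.AgentKit.MCP package):

```csharp
// Sketch: stdio MCP server launched via npx (example command).
var config = new McpConfiguration
{
    Command = "npx",
    Arguments = new List<string> { "-y", "@modelcontextprotocol/server-filesystem", "/tmp" }
};

await using IMcpClient mcp = CreateMcpClient(config); // hypothetical factory
await mcp.ConnectAsync();

foreach (var tool in await mcp.DiscoverToolsAsync())
    Console.WriteLine($"{tool.Name}: {tool.Description}");

var result = await mcp.CallToolAsync("read_file",
    new Dictionary<string, object?> { ["path"] = "/tmp/notes.txt" });
if (!result.Success)
    Console.WriteLine($"Tool failed: {result.Error}");
```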
Observer
// Observer Interface (all methods have default no-op implementations)
interface IAgentObserver
{
void OnTurnStart(TurnStartEvent evt);
void OnTurnComplete(TurnCompleteEvent evt);
void OnLlmRequest(LlmRequestEvent evt);
void OnLlmResponse(LlmResponseEvent evt);
void OnToolExecutionStart(ToolExecutionStartEvent evt);
void OnToolExecutionComplete(ToolExecutionCompleteEvent evt);
void OnError(ErrorEvent evt);
}
// Shared Event Context (present in all events)
record AgentEventContext
{
DateTime Timestamp { get; init; } // When event occurred (UTC)
string? ConversationId { get; init; } // Conversation ID (null for ReActAgent)
int MessageCount { get; init; } // Current message count in history
}
// Turn Start Event
record TurnStartEvent(
AgentEventContext Context,
string UserMessage // User's input message
);
// Turn Complete Event
record TurnCompleteEvent(
AgentEventContext Context,
AgentTurn Result, // Turn result (see AgentTurn below)
TimeSpan Duration // Total turn duration
);
// LLM Request Event (before API call)
record LlmRequestEvent(
AgentEventContext Context,
IReadOnlyList<LlmMessage> Messages, // Messages being sent to LLM
int ToolCount // Number of tools available
);
// LLM Response Event (after API call)
record LlmResponseEvent(
AgentEventContext Context,
string ModelName, // NEW: Which model was used
string? Text, // Generated text
List<LlmToolCall>? ToolCalls, // Tool calls requested (see LlmToolCall below)
LlmUsage? Usage, // Token usage (see LlmUsage below)
LlmFinishReason? FinishReason, // Why generation stopped (Stop/Length/ToolCalls/ContentFilter)
TimeSpan Duration // LLM API call duration
);
// Tool Execution Start Event
record ToolExecutionStartEvent(
AgentEventContext Context,
string ToolName, // Name of tool being executed
string Arguments // JSON arguments
);
// Tool Execution Complete Event
record ToolExecutionCompleteEvent(
AgentEventContext Context,
string ToolName, // Name of tool executed
string Result, // Tool result (JSON string)
TimeSpan Duration, // Tool execution duration
Exception? Error // Exception if tool failed (null if success)
);
// Error Event
record ErrorEvent(
AgentEventContext Context,
Exception Exception, // The exception that occurred
string Phase // Phase where error occurred (e.g., "LLM", "Tool", "Turn")
);
// Supporting Types
class AgentTurn
{
string Response { get; init; } // Agent's response text
int LlmCallsExecuted { get; init; } // Number of LLM calls in this turn
string? CompletionSignal { get; init; } // Completion signal (ReActAgent only)
bool Success { get; init; } // Whether turn succeeded
string? Error { get; init; } // Error message if failed
// NEW: Built-in cost and token tracking
int TotalInputTokens { get; init; } // Total input tokens used in turn
int TotalOutputTokens { get; init; } // Total output tokens used in turn
decimal TotalCost { get; init; } // Total cost for the turn (USD)
}
class LlmToolCall
{
string Id { get; init; } // Unique tool call ID
string Name { get; init; } // Tool name
string ArgumentsJson { get; init; } // Arguments as JSON string
}
class LlmUsage
{
int InputTokens { get; init; } // Input/prompt tokens
int OutputTokens { get; init; } // Output/completion tokens
int TotalTokens { get; } // InputTokens + OutputTokens
// NEW: Built-in cost tracking
decimal InputCost { get; init; } // Cost for input tokens (USD)
decimal OutputCost { get; init; } // Cost for output tokens (USD)
decimal TotalCost { get; } // InputCost + OutputCost
}
enum LlmFinishReason
{
Stop, // Natural stop
Length, // Max tokens reached
ToolCalls, // Stopped to execute tools
ContentFilter // Content policy violation
}
class LlmMessage
{
MessageRole Role { get; init; } // System/User/Assistant/Tool
string? Text { get; init; } // Text content
List<IMessageContent>? Contents { get; init; } // Multimodal content
string? ToolCallId { get; init; } // Tool call ID (for Tool role)
}
enum MessageRole { System, User, Assistant, Tool }
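Because every `IAgentObserver` method has a default no-op implementation, an observer only needs to override what it cares about. A minimal cost-logging observer over the event types above might look like this (the class name and log format are illustrative):

```csharp
// Sketch: log token usage and cost per LLM call, plus errors.
class CostLoggingObserver : IAgentObserver
{
    public void OnLlmResponse(LlmResponseEvent evt)
    {
        if (evt.Usage is { } usage)
            Console.WriteLine(
                $"[{evt.ModelName}] {usage.InputTokens} in / {usage.OutputTokens} out, " +
                $"${usage.TotalCost:F4} in {evt.Duration.TotalMilliseconds:F0} ms");
    }

    public void OnError(ErrorEvent evt) =>
        Console.WriteLine($"Error in {evt.Phase}: {evt.Exception.Message}");
}
```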
Configuration
// Agent Configuration
class AgentConfig
{
int MaxToolRoundsPerTurn { get; set; } // Default: 10
string? SystemPrompt { get; set; } // System instructions
SummarizationConfig Summarization { get; set; } // Auto-summarization (ChatAgent)
ToolResultConfig ToolResults { get; set; } // Tool result filtering
int? MaxMultimodalMessages { get; set; } // Limit images in history (null = unlimited)
SanitizationOptions Sanitization { get; set; } // Output cleaning
bool EnableTurnValidation { get; set; } // Default: true
bool EnableOutputSanitization { get; set; } // Default: true
// Live Context (dynamic context injection)
Func<string?, Task<string?>>? LiveContextProvider { get; set; } // Context provider delegate
string LiveContextLabel { get; set; } // Default: "Live Conversation Context (updated dynamically, latest)::"
}
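An `AgentConfig` with live context injection can be set up as below. This is a sketch: the README does not document what the `string?` argument to `LiveContextProvider` represents, so treating it as the latest user message is an assumption:

```csharp
// Sketch: dynamic context injected before each turn.
var config = new AgentConfig
{
    SystemPrompt = "You are a support agent.",
    MaxToolRoundsPerTurn = 5,
    LiveContextProvider = async latestUserMessage =>
    {
        // A real provider might look up order status, feature flags, etc.
        // Return null to inject nothing for this turn.
        await Task.CompletedTask;
        return $"Current UTC time: {DateTime.UtcNow:O}";
    }
};
```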
// Summarization Configuration (ChatAgent)
class SummarizationConfig
{
bool Enabled { get; set; } // Default: false
int TriggerAt { get; set; } // Default: 100
int KeepRecent { get; set; } // Default: 10
ITool? SummarizationTool { get; set; } // Required if Enabled
ToolResultConfig ToolResults { get; set; } // Nested tool filtering
}
// Tool Result Filtering Configuration
class ToolResultConfig
{
int KeepRecent { get; set; } // Default: 0 (unlimited)
// Recommended: Browser=1-3, ReAct=5-10, Chat=5
}
// Sanitization Options
class SanitizationOptions
{
bool RemoveThinkingTags { get; set; } // Default: true
bool UnwrapJsonFromMarkdown { get; set; } // Default: true
bool TrimWhitespace { get; set; } // Default: true
bool RemoveNullCharacters { get; set; } // Default: true
}
// ReAct Agent Configuration
class ReActConfig
{
int MaxTurns { get; set; } // Default: 20
bool DetectStuckAgent { get; set; } // Default: true
bool BreakOnStuck { get; set; } // Default: false
}
// ReAct Agent Result
class ReActResult
{
bool Success { get; init; } // Task completed successfully
string FinalAnswer { get; init; } // Final answer/result
int TurnsExecuted { get; init; } // Turns executed
int TotalLlmCalls { get; init; } // Total LLM calls
TimeSpan Duration { get; init; } // Execution time
string? Error { get; init; } // Error if failed
}
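A ReAct run ends in a `ReActResult` like the one above. The sketch below follows the builder pattern from the Quick Start, but note that `BuildReActAgentAsync` and `ExecuteAsync` are assumed counterparts to the documented `BuildChatAgentAsync`/`SendAsync` and may differ in the actual API:

```csharp
// Sketch: autonomous task execution with a ReActAgent (method names assumed).
await using var agent = await new AgentBuilder()
    .UseOpenAI(apiKey, OpenAIModels.GPT4o)
    .WithSystemPrompt("You are a research agent.")
    .BuildReActAgentAsync();

ReActResult result = await agent.ExecuteAsync("Find the release year of .NET 5.");

Console.WriteLine(result.Success
    ? $"{result.FinalAnswer} ({result.TurnsExecuted} turns, {result.Duration.TotalSeconds:F1}s)"
    : $"Failed: {result.Error}");
```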
Packages
Core
- NovaCore.AgentKit.Core - Core abstractions (required)
- NovaCore.AgentKit.MCP - Model Context Protocol
- NovaCore.AgentKit.EntityFramework - Persistence
Providers
- NovaCore.AgentKit.Providers.Anthropic - Claude
- NovaCore.AgentKit.Providers.OpenAI - GPT-4o, o1
- NovaCore.AgentKit.Providers.Google - Gemini
- NovaCore.AgentKit.Providers.XAI - Grok
- NovaCore.AgentKit.Providers.Groq - Qwen, Llama
- NovaCore.AgentKit.Providers.OpenRouter - Any model
Extensions
- NovaCore.AgentKit.Extensions.OpenTelemetry - OpenTelemetry metrics (tokens + cost), auto-wired when referenced
License
MIT License - see LICENSE file
Built with ❤️ by NovaCore AI Team
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net10.0 is compatible. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies (net10.0)
- Microsoft.Extensions.AI (>= 10.2.0)
- Microsoft.Extensions.AI.Abstractions (>= 10.2.0)
- Microsoft.Extensions.Caching.Abstractions (>= 10.0.2)
- Microsoft.Extensions.Caching.Memory (>= 10.0.2)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 10.0.2)
- Microsoft.Extensions.Logging (>= 10.0.2)
- Microsoft.Extensions.Logging.Abstractions (>= 10.0.2)
- Microsoft.Extensions.Options (>= 10.0.2)
- NJsonSchema (>= 11.5.2)
- Polly (>= 8.6.5)
- Polly.Extensions (>= 8.6.5)
- Serilog (>= 4.3.0)
- Serilog.Extensions.Logging (>= 10.0.0)
NuGet packages (10)
Showing the top 5 NuGet packages that depend on NovaCore.AgentKit.Core:
| Package | Description |
|---|---|
| NovaCore.AgentKit.Providers.Google | Google Gemini provider for NovaCore.AgentKit with Vertex AI support |
| NovaCore.AgentKit.Providers.Anthropic | Anthropic Claude provider for NovaCore.AgentKit with extended thinking support |
| NovaCore.AgentKit.Extensions.OpenTelemetry | OpenTelemetry integration for NovaCore.AgentKit |
| NovaCore.AgentKit.EntityFramework | EF Core persistence for conversations and agent history |
| NovaCore.AgentKit.Providers.XAI | xAI Grok provider for NovaCore.AgentKit (OpenAI-compatible) |
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated |
|---|---|---|
| 3.1.3 | 53 | 1/16/2026 |
| 3.1.2 | 35 | 1/16/2026 |
| 3.1.1 | 41 | 1/16/2026 |
| 3.1.0 | 72 | 1/15/2026 |
| 3.0.5 | 125 | 1/10/2026 |
| 3.0.4 | 136 | 1/7/2026 |
| 3.0.3 | 134 | 1/7/2026 |
| 3.0.2 | 200 | 12/27/2025 |
| 3.0.1 | 343 | 12/18/2025 |
| 3.0.0 | 419 | 11/30/2025 |
| 2.1.0 | 406 | 11/30/2025 |
| 2.0.0 | 312 | 11/30/2025 |
| 1.9.0 | 305 | 11/30/2025 |
| 1.8.0 | 165 | 11/29/2025 |
| 1.7.0 | 165 | 11/29/2025 |
| 1.6.0 | 224 | 11/25/2025 |
| 1.5.0 | 453 | 11/17/2025 |
| 1.4.0 | 205 | 11/15/2025 |
| 1.3.0 | 244 | 10/26/2025 |
| 1.2.0 | 173 | 10/18/2025 |
| 1.1.0 | 145 | 10/18/2025 |
| 1.0.0 | 160 | 10/17/2025 |