AIHelperLibrary 1.1.0
AIHelperLibrary
AIHelperLibrary is a modular, multi-provider .NET library that enables seamless integration with OpenAI (GPT models) and Anthropic (Claude models). It is designed for extensibility, robust configuration, multi-turn conversation support, and advanced prompt management.
Installation
Install via .NET CLI:
dotnet add package AIHelperLibrary --version 1.1.0
Or add it manually to your .csproj file:
<PackageReference Include="AIHelperLibrary" Version="1.1.0" />
Quick Start
using AIHelperLibrary.Configurations;
using AIHelperLibrary.Models;
using AIHelperLibrary.Services;
var config = new OpenAIConfiguration
{
DefaultModel = OpenAIModel.GPT_4o,
MaxTokens = 500,
Temperature = 0.7,
EnableLogging = true
};
var client = new OpenAIClient("your-api-key", config);
var result = await client.GenerateTextAsync("Explain quantum computing.");
Console.WriteLine(result);
Full Configuration Reference
OpenAIConfiguration
Property | Description | Default |
---|---|---|
DefaultModel | OpenAI model (e.g., GPT-4, GPT-4o, o1). | GPT-3.5-Turbo |
MaxTokens | Maximum number of tokens in a response. | 150 |
Temperature | Randomness of responses (ignored for o1/o3/o4). | 0.7 |
TopP | Alternative randomness control (ignored for o1/o3/o4). | 1.0 |
RequestTimeoutMs | HTTP timeout (ms). | 10000 |
EnableLogging | Log outgoing requests and incoming responses. | false |
ProxyUrl | Proxy server URL. | "" |
ProxyPort | Proxy port. | 0 |
MaxRetryCount | Retries on transient errors. | 3 |
RetryDelayMs | Delay between retries (ms). | 2000 |
MaxChatHistorySize | Retained messages for chatbot context. | 20 |
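A configuration combining several of these options might look like the following sketch (property and enum names are taken from the table and the Quick Start above; the chosen values are illustrative):

```csharp
using AIHelperLibrary.Configurations;
using AIHelperLibrary.Models;
using AIHelperLibrary.Services;

// Sketch: combine model, token, and resilience settings in one configuration.
var config = new OpenAIConfiguration
{
    DefaultModel = OpenAIModel.GPT_4o,  // overrides the GPT-3.5-Turbo default
    MaxTokens = 1000,                   // default is 150
    Temperature = 0.2,                  // lower = more deterministic output
    RequestTimeoutMs = 20000,           // raise the 10s default for long completions
    MaxRetryCount = 5,                  // retry transient failures up to 5 times
    RetryDelayMs = 1000,                // wait 1s between retries
    MaxChatHistorySize = 50             // keep more context for chatbot sessions
};

var client = new OpenAIClient("your-api-key", config);
```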
AnthropicConfiguration
Property | Description | Default Value |
---|---|---|
DefaultModel | Claude model (e.g., Claude-3, Claude-3.5). | Claude3Sonnet |
ApiVersion | Anthropic API version. | 2023-06-01 |
SystemPrompt | Instructions for the Claude assistant. | "You are Claude..." |
StopSequences | Custom stop sequences to halt generation. | [] |

AnthropicConfiguration also supports all standard base options (retries, timeout, logging, proxy, etc.).
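A Claude setup can mirror the OpenAI Quick Start. In this sketch, the AnthropicClient type name and the AnthropicModel enum are assumptions modeled on the OpenAI API shown above; the configuration properties come from the table:

```csharp
using AIHelperLibrary.Configurations;
using AIHelperLibrary.Models;
using AIHelperLibrary.Services;

// Illustrative sketch: AnthropicClient and AnthropicModel names are assumed
// by analogy with OpenAIClient/OpenAIModel; check the package API for exact names.
var config = new AnthropicConfiguration
{
    DefaultModel = AnthropicModel.Claude3Sonnet, // table default
    ApiVersion = "2023-06-01",
    SystemPrompt = "You are a concise technical assistant.",
    StopSequences = new[] { "\n\nHuman:" }       // halt generation at a custom marker
};

var client = new AnthropicClient("your-anthropic-api-key", config);
var reply = await client.GenerateTextAsync("Summarize the MIT License in one sentence.");
Console.WriteLine(reply);
```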
Usage Examples
Generate a Basic Response
var response = await client.GenerateTextAsync("What is the capital of Japan?");
Console.WriteLine(response);
Persistent Chatbot Session
var chatResponse = await client.GenerateChatResponseAsync(
instanceKey: "Session1",
userMessage: "Can you help me book a hotel?",
initialPrompt: "You are a polite and helpful assistant."
);
Console.WriteLine(chatResponse);
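Because the session is keyed by instanceKey, later calls with the same key continue the same conversation, with prior turns retained as context up to MaxChatHistorySize messages. A follow-up turn might look like this (whether initialPrompt must be repeated on subsequent calls is an assumption here):

```csharp
// Continue the "Session1" conversation started above; the library keeps
// earlier turns (bounded by MaxChatHistorySize) as context for this reply.
var followUp = await client.GenerateChatResponseAsync(
    instanceKey: "Session1",
    userMessage: "Make it a hotel in Kyoto, under $150 per night.",
    initialPrompt: "You are a polite and helpful assistant."
);
Console.WriteLine(followUp);
```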
Dynamic Prompt Manager
var promptManager = new DynamicPromptManager();
promptManager.AddPrompt("FriendlyGreet", "Please greet the user warmly.");
var dynamicResponse = await client.GenerateTextWithDynamicPromptAsync(promptManager, "FriendlyGreet", "Hello AI!");
Console.WriteLine(dynamicResponse);
Supported Models
OpenAI
Model | Notes |
---|---|
GPT-3.5-Turbo | Fast and inexpensive |
GPT-4 | Most capable model |
GPT-4o | Optimized GPT-4 |
GPT-4o-Mini, GPT-4o-Nano | Lightweight variants |
o1, o1-mini, o3-mini | Specialized, fast inference |
Anthropic (Claude)
Model | Notes |
---|---|
Claude-3-7-Sonnet | Latest flagship |
Claude-3-5-Sonnet | Mid-tier newer generation |
Claude-3-5-Haiku | Fastest inference |
Claude-3-Opus, Claude-3-Sonnet, Claude-3-Haiku | Main Claude 3 family |
Advanced Topics
Retry Logic
If a request fails due to a network or transient server issue, the library automatically retries according to the MaxRetryCount and RetryDelayMs settings.
Proxy Support
You can route API calls through a proxy server using the ProxyUrl and ProxyPort configuration options.
Custom Headers
Any additional HTTP headers (e.g., metadata) can be attached via the CustomHeaders dictionary.
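The retry, proxy, and header options above can be combined in a single configuration. A sketch, assuming CustomHeaders is a string-to-string dictionary (the header names shown are made up for illustration):

```csharp
using System.Collections.Generic;
using AIHelperLibrary.Configurations;

var config = new OpenAIConfiguration
{
    // Retry transient network/server failures (see Retry Logic above).
    MaxRetryCount = 4,
    RetryDelayMs = 1500,

    // Route requests through a corporate proxy.
    ProxyUrl = "http://proxy.internal.example",
    ProxyPort = 8080,

    // Attach extra HTTP headers to every request (string-to-string assumed).
    CustomHeaders = new Dictionary<string, string>
    {
        ["X-Request-Source"] = "billing-service",
        ["X-Environment"] = "staging"
    }
};
```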
Special Handling for o-Series Models
Models such as o1, o3-mini, and o4-mini ignore the temperature and top_p parameters and use max_completion_tokens instead.
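In practice this means Temperature and TopP can simply be left at their defaults when targeting an o-series model. A minimal sketch (the exact enum member name for o1 is an assumption):

```csharp
var config = new OpenAIConfiguration
{
    DefaultModel = OpenAIModel.o1, // assumed enum member name for the o1 model
    MaxTokens = 800,               // sent as max_completion_tokens for o-series models
    Temperature = 0.7              // ignored by o1/o3/o4-series models
};
```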
Contributing
Pull requests, bug reports, and feature suggestions are welcome. Please fork the repository and submit a pull request.
License
Distributed under the MIT License.
Changelog
See CHANGELOG.md for version history and release notes.
Compatibility
.NET: net8.0 is compatible; net9.0, net10.0, and platform-specific targets (android, browser, ios, maccatalyst, macos, tvos, windows) were computed.

Dependencies (net8.0)
- Newtonsoft.Json (>= 13.0.3)