OpenAI.ChatGPT.EntityFrameworkCore
2.5.0
See the version list below for details.
dotnet add package OpenAI.ChatGPT.EntityFrameworkCore --version 2.5.0
NuGet\Install-Package OpenAI.ChatGPT.EntityFrameworkCore -Version 2.5.0
<PackageReference Include="OpenAI.ChatGPT.EntityFrameworkCore" Version="2.5.0" />
paket add OpenAI.ChatGPT.EntityFrameworkCore --version 2.5.0
#r "nuget: OpenAI.ChatGPT.EntityFrameworkCore, 2.5.0"
// Install OpenAI.ChatGPT.EntityFrameworkCore as a Cake Addin
#addin nuget:?package=OpenAI.ChatGPT.EntityFrameworkCore&version=2.5.0

// Install OpenAI.ChatGPT.EntityFrameworkCore as a Cake Tool
#tool nuget:?package=OpenAI.ChatGPT.EntityFrameworkCore&version=2.5.0
ChatGPT integration for .NET
OpenAI Chat Completions API (ChatGPT) integration with DI and EF Core support. It allows you to use the API in your .NET applications. The client also supports streaming responses (like ChatGPT) via async streams.
Preparation
First, you need to create an OpenAI account and get an API key. You can do this at https://platform.openai.com/account/api-keys.
Installation
The easiest way to use the ChatGPT service in your .NET project with DI and persistence (EF Core) support is to install the NuGet package OpenAI.ChatGPT.EntityFrameworkCore:
Install-Package OpenAI.ChatGPT.EntityFrameworkCore
If you don't want to use EF Core, you can install the package OpenAI.ChatGPT.AspNetCore and implement your own chat history storage using the IChatHistoryStorage interface.
Usage
- Set the OpenAI API key, and optionally the API host, in your project's user secrets, or in the appsettings.json file (not safe):
{
  "OpenAICredentials": {
    "ApiKey": "your-api-key-from-openai",
    "ApiHost": "https://api.openai.com/v1/"
  }
}
You can also specify the OpenAI API key as the environment variable ASPNETCORE_OpenAICredentials:ApiKey, or via the user secrets CLI, as shown below.
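For example, with the .NET user secrets CLI (after running dotnet user-secrets init in the project directory); the section and key names follow the JSON above:
dotnet user-secrets set "OpenAICredentials:ApiKey" "your-api-key-from-openai"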
- Add ChatGPT integration with EF to your DI container:
builder.Services.AddChatGptEntityFrameworkIntegration(
options => options.UseSqlite("Data Source=chats.db"));
Instead of options.UseSqlite("Data Source=chats.db"), use your own database provider and connection string, e.g. as sketched below.
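For instance, with PostgreSQL. This is a sketch that assumes the Npgsql.EntityFrameworkCore.PostgreSQL provider package is installed and a "Chats" connection string is configured:
builder.Services.AddChatGptEntityFrameworkIntegration(
    // UseNpgsql comes from the Npgsql EF Core provider (an assumption, not part of this package)
    options => options.UseNpgsql(builder.Configuration.GetConnectionString("Chats")));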
- Inject ChatGPTFactory into your service and use it to create a ChatGPT instance:
public class YourService
{
    private readonly ChatGPTFactory _chatGptFactory;

    public YourService(ChatGPTFactory chatGptFactory)
    {
        _chatGptFactory = chatGptFactory;
    }

    public async Task<string> GetAnswer(string userId, string text)
    {
        // Create a ChatGPT instance bound to the given user
        ChatGPT chatGpt = await _chatGptFactory.Create(userId);
        // Continue the user's last topic or start a new one
        var chatService = await chatGpt.ContinueOrStartNewTopic();
        // Send the message and return the assistant's response
        string response = await chatService.GetNextMessageResponse(text);
        return response;
    }
}
See Blazor Example.
If you want to configure request parameters, you can do so in the appsettings.json configuration, or in the ChatGPTFactory.Create or ChatGPT.CreateChat methods.
{
  "ChatGPTConfig": {
    "InitialSystemMessage": null,
    "InitialUserMessage": null,
    "MaxTokens": null,
    "Model": null,
    "Temperature": null,
    "PassUserIdToOpenAiRequests": true
  }
}
See the parameter descriptions inside ChatGPTConfig.
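The same parameters can also be set in code. A minimal sketch, assuming ChatGPTConfig exposes the JSON keys above as settable properties and that ChatGPTFactory.Create accepts a config (exact signatures may differ, check the package source):
var config = new ChatGPTConfig
{
    MaxTokens = 512,          // mirrors "MaxTokens" above
    Temperature = 0.5f,       // mirrors "Temperature" above; numeric type assumed
    InitialSystemMessage = "You are a helpful assistant."
};
ChatGPT chatGpt = await _chatGptFactory.Create(userId, config); // signature assumed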
Exceptions
If the server response is not a success status code, the client will throw a NotExpectedResponseException. The exception will contain the error message from the OpenAI API.
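For example, using the raw client call shown later in this document (a sketch):
try
{
    string answer = await _client.GetChatCompletions(new UserMessage(text), maxTokens: 80);
}
catch (NotExpectedResponseException e)
{
    // The exception message contains the error returned by the OpenAI API
    Console.WriteLine(e.Message);
}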
By default, cancelling a request or calling the ChatService.Stop() method will throw an OperationCanceledException. If you don't want it to be thrown (relevant for streaming responses), you can set the throwOnCancellation parameter to false:
await foreach (string chunk in chatService.StreamNextMessageResponse(text, throwOnCancellation: false))
{
//...
}
Thread safety and async
The thread safety of the ChatGPTFactory and ChatGPT classes depends on the IChatHistoryStorage implementation. If you use ChatGPTFactory with Entity Framework, it is NOT thread-safe. The ChatService class is not thread-safe.
Either way, these services are designed to be used safely with DI, so you don't need to worry about it.
All methods in all packages are designed to be used in an async context and use ConfigureAwait(false) (thanks to the ConfigureAwait.Fody package).
Retries, timeouts and other policies
Since ChatGPTFactory depends on IHttpClientFactory, you can easily use any of the available resilience policies with it, such as Polly.
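For instance, a retry policy can be attached to the underlying HttpClient. A minimal sketch, assuming the Microsoft.Extensions.Http.Polly package and a named client "OpenAiClient" (the actual client name registered by this package may differ):
builder.Services.AddHttpClient("OpenAiClient") // client name is an assumption
    .AddTransientHttpErrorPolicy(policy => policy.WaitAndRetryAsync(
        3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))); // exponential backoff on transient HTTP errors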
Examples
- Blazor Example
- Console Example (simple)
- Spectre Console Example (advanced)
API Parameters
Here is a list of the main parameters that can be used in a ChatCompletions (ChatGPT) API request (OpenAI.ChatGpt/Models/ChatCompletion/ChatCompletionRequest.cs). Some of the descriptions are taken from this article: https://towardsdatascience.com/gpt-3-parameters-and-prompt-design-1a595dc5b405
The parameters of the ChatCompletions API are listed below.
Model
The AI model that generates the completion is specified by the model parameter. The available models are:
- ChatCompletionModels.Gpt3_5_Turbo (default): Most capable GPT-3.5 model, optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with OpenAI's latest model iteration.
- ChatCompletionModels.Gpt3_5_Turbo_0301: Snapshot of gpt-3.5-turbo from March 1st, 2023. Unlike gpt-3.5-turbo, this model will not receive updates and will only be supported for a three-month period ending on June 1st, 2023.
- ChatCompletionModels.Gpt4: More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with OpenAI's latest model iteration. *
- ChatCompletionModels.Gpt4_0314: Snapshot of gpt-4 from March 14th, 2023. Unlike gpt-4, this model will not receive updates and will only be supported for a three-month period ending on June 14th, 2023. *
- ChatCompletionModels.Gpt4_32k: Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with OpenAI's latest model iteration. *
- ChatCompletionModels.Gpt4_32k_0314: Snapshot of gpt-4-32k from March 14th, 2023. Unlike gpt-4-32k, this model will not receive updates and will only be supported for a three-month period ending on June 14th, 2023. *
Note that training data for all models is up to Sep 2021.
* These models are currently in beta and are not yet available to all users. Here is the link to join the waitlist: https://openai.com/waitlist/gpt-4-api
MaxTokens
The maximum number of tokens allowed for the generated answer. Defaults to ChatCompletionRequest.MaxTokensDefault (64).
- This value is validated and limited by the ChatCompletionModels.GetMaxTokensLimitForModel method.
- An approximate token count can be calculated with the ChatCompletionMessage.CalculateApproxTotalTokenCount method.
- The number of tokens used can be retrieved from the API response: ChatCompletionResponse.Usage.TotalTokens.
- As a rule of thumb for English, 1 token is around 4 characters (so 100 tokens ≈ 75 words). See the tokenizer from OpenAI: https://platform.openai.com/tokenizer
- The encoding algorithm can be found here: https://github.com/latitudegames/GPT-3-Encoder
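The rule of thumb above can be turned into a quick client-side estimate. A sketch, not part of the library:
// Rough heuristic for English text: ~4 characters per token
static int ApproxTokenCount(string text) => (int)Math.Ceiling(text.Length / 4.0);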
Temperature
What sampling temperature to use, between 0 and 2.
- Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- Predefined values are available in ChatCompletionTemperatures.
- The default value is ChatCompletionTemperatures.Balanced (0.5).
Description: Before being mapped into probabilities, the model outputs unnormalized values (logits). The logits are typically used with a function such as softmax to convert them into probabilities.
But, before applying the softmax function, we can use a trick inspired by thermodynamics and scale the logits with the temperature parameter, i.e. softmax(logits/temperature).
A temperature parameter close to 1 would mean that the logits are passed through the softmax function without modification. If the temperature is close to zero, the highest probable tokens will become very likely compared to the other tokens, i.e. the model becomes more deterministic and will always output the same set of tokens after a given sequence of words.
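As an illustration of softmax(logits/temperature), here is a minimal standalone sketch (not part of the library):
using System;
using System.Linq;

static double[] SoftmaxWithTemperature(double[] logits, double temperature)
{
    // Scale the logits by the temperature before normalizing
    var exps = logits.Select(l => Math.Exp(l / temperature)).ToArray();
    var sum = exps.Sum();
    return exps.Select(e => e / sum).ToArray();
}
// Near temperature 1.0 the distribution is unchanged; close to 0 the
// highest logit dominates and the output becomes nearly deterministic.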
More parameter descriptions can be found in this article: https://towardsdatascience.com/gpt-3-parameters-and-prompt-design-1a595dc5b405
Using raw client without DI
If you don't need DI and chat history, you can use only the NuGet package OpenAI.ChatGPT:
Install-Package OpenAI.ChatGPT
Then create an instance of OpenAiClient:
_client = new OpenAiClient("{YOUR_OPENAI_API_KEY}");
Simple usage of the Chat Completions API (raw client)
string text = "Who are you?";
string response = await _client.GetChatCompletions(new UserMessage(text), maxTokens: 80);
Console.WriteLine(response);
Streaming response with async streams (like ChatGPT)
var text = "Write the world top 3 songs of Soul genre";
await foreach (string chunk in _client.StreamChatCompletions(new UserMessage(text), maxTokens: 80))
{
Console.Write(chunk);
}
Continue dialog with ChatGPT (message history)
Use the ThenAssistant and ThenUser methods to create a dialog:
var dialog = Dialog.StartAsUser("How many meters are in a kilometer? Write just the number.") //the message from user
.ThenAssistant("1000") // response from the assistant
.ThenUser("Convert it to hex. Write just the number."); // the next message from user
await foreach (var chunk in _client.StreamChatCompletions(dialog, maxTokens: 80))
{
Console.Write(chunk);
}
Or just send the message history as a collection, as sketched below.
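A sketch of sending the history directly; the AssistantMessage type name and the collection overload of GetChatCompletions are assumptions here, check the package for the actual message types:
var messages = new ChatCompletionMessage[]
{
    new UserMessage("How many meters are in a kilometer? Write just the number."),
    new AssistantMessage("1000"), // type name assumed
    new UserMessage("Convert it to hex. Write just the number.")
};
string response = await _client.GetChatCompletions(messages, maxTokens: 80);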
Product | Versions (compatible and additional computed target framework versions)
---|---
.NET | net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 is compatible. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed.
Dependencies

net6.0
- Microsoft.EntityFrameworkCore (>= 7.0.5)
- OpenAI.ChatGPT.AspNetCore (>= 2.5.0)

net7.0
- Microsoft.EntityFrameworkCore (>= 7.0.5)
- OpenAI.ChatGPT.AspNetCore (>= 2.5.0)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated
---|---|---
4.1.0 | 153 | 7/28/2024
4.1.0-alpha | 282 | 12/17/2023
4.0.2-alpha | 343 | 12/5/2023
4.0.1-alpha | 120 | 12/5/2023
4.0.0-alpha | 116 | 12/5/2023
3.3.0 | 434 | 11/24/2023
3.2.0 | 656 | 11/17/2023
3.1.1 | 207 | 11/11/2023
3.1.0 | 133 | 11/10/2023
3.0.0 | 181 | 11/8/2023
2.9.3 | 138 | 11/8/2023
2.9.2 | 119 | 11/7/2023
2.9.1 | 171 | 11/3/2023
2.9.0 | 309 | 10/20/2023
2.8.0 | 574 | 7/20/2023
2.7.1 | 178 | 7/13/2023
2.7.0 | 207 | 7/2/2023
2.6.0 | 181 | 6/17/2023
2.5.0 | 341 | 4/28/2023
2.4.2 | 198 | 4/24/2023
2.4.1 | 304 | 4/24/2023
2.4.0 | 209 | 4/24/2023
2.3.0 | 193 | 4/20/2023
2.2.2 | 226 | 4/19/2023
2.2.0 | 194 | 4/19/2023
2.1.0 | 202 | 4/18/2023
2.0.3 | 191 | 4/18/2023
2.0.2 | 204 | 4/18/2023
2.0.1 | 197 | 4/18/2023
2.0.0 | 213 | 4/17/2023