Microsoft.Extensions.AI.Evaluation
9.4.0-preview.1.25207.5
The Microsoft.Extensions.AI.Evaluation libraries

Microsoft.Extensions.AI.Evaluation is a set of .NET libraries, defined in the following NuGet packages, that are designed to work together to support building processes for evaluating the quality of AI software.

- Microsoft.Extensions.AI.Evaluation - Defines core abstractions and types for supporting evaluation.
- Microsoft.Extensions.AI.Evaluation.Quality - Contains evaluators that can be used to evaluate the quality of AI responses in your projects, including Relevance, Truth, Completeness, Fluency, Coherence, Equivalence and Groundedness.
- Microsoft.Extensions.AI.Evaluation.Safety - Contains a set of evaluators, built atop the Azure AI Content Safety service, that can be used to evaluate the content safety of AI responses in your projects, including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
- Microsoft.Extensions.AI.Evaluation.Reporting - Contains support for caching LLM responses, storing the results of evaluations, and generating reports from that data.
- Microsoft.Extensions.AI.Evaluation.Reporting.Azure - Supports the Microsoft.Extensions.AI.Evaluation.Reporting library with an implementation for caching LLM responses and storing evaluation results in an Azure Storage container.
- Microsoft.Extensions.AI.Evaluation.Console - A command-line dotnet tool for generating reports and managing evaluation data.
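To illustrate how the core abstractions and the quality evaluators fit together, here is a minimal sketch of scoring a single chat response. The chat client setup is an assumption (GetChatClientSomehow is a hypothetical placeholder for any Microsoft.Extensions.AI-compatible IChatClient); the evaluator and result types come from the packages above.

using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Hypothetical placeholder: swap in your own IChatClient setup (for example,
// an Azure OpenAI or OpenAI client wrapped via the Microsoft.Extensions.AI adapters).
static IChatClient GetChatClientSomehow() => throw new NotImplementedException();

IChatClient chatClient = GetChatClientSomehow();

// Get a response to evaluate.
var messages = new List<ChatMessage> { new(ChatRole.User, "What is the capital of France?") };
ChatResponse response = await chatClient.GetResponseAsync(messages);

// Score the response's coherence. LLM-based evaluators use the supplied
// ChatConfiguration to talk to the model that performs the scoring.
IEvaluator evaluator = new CoherenceEvaluator();
EvaluationResult result = await evaluator.EvaluateAsync(
    messages, response, new ChatConfiguration(chatClient));

NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
Console.WriteLine($"Coherence: {coherence.Value}");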
Install the packages
From the command-line:
dotnet add package Microsoft.Extensions.AI.Evaluation
dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
Or directly in the C# project file:
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="[CURRENTVERSION]" />
<PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="[CURRENTVERSION]" />
<PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="[CURRENTVERSION]" />
</ItemGroup>
You can optionally add the Microsoft.Extensions.AI.Evaluation.Reporting.Azure package in either of these places if you need Azure Storage support.
Install the command line tool
dotnet tool install Microsoft.Extensions.AI.Evaluation.Console --create-manifest-if-needed
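Once evaluations have been run with response caching and result storage enabled (see the reporting sketch below), the tool can generate an HTML report from the stored data. The command below is a hedged example based on the tool's documented usage, with placeholder paths; run the tool with --help to confirm the commands and options available in your version:

dotnet aieval report --path <path to your cache storage> --output report.html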
Usage Examples
For a comprehensive tour of all the functionality, concepts and APIs available in the Microsoft.Extensions.AI.Evaluation libraries, check out the API Usage Examples available in the dotnet/ai-samples repo. These examples are structured as a collection of unit tests. Each unit test showcases a specific concept or API, and builds on the concepts and APIs showcased in previous unit tests.
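To give a flavor of what those examples cover, here is a minimal sketch of the reporting workflow: a ReportingConfiguration that bundles evaluators, the LLM connection, response caching and disk-based result storage, and a ScenarioRun that records one evaluation. The storage path, execution name and scenario name are illustrative, and GetChatClientSomehow is again a hypothetical placeholder for your own chat client setup.

using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

// Hypothetical placeholder for your own IChatClient setup, as before.
static IChatClient GetChatClientSomehow() => throw new NotImplementedException();

// Bundle evaluators, the LLM connection, response caching and disk-based
// result storage into one reusable configuration.
ReportingConfiguration reportingConfiguration = DiskBasedReportingConfiguration.Create(
    storageRootPath: "./eval-results",
    evaluators: [new CoherenceEvaluator(), new FluencyEvaluator()],
    chatConfiguration: new ChatConfiguration(GetChatClientSomehow()),
    enableResponseCaching: true,
    executionName: "my-first-run");

// A ScenarioRun records the inputs, responses and metrics for one scenario;
// disposing it persists the results under the storage root.
await using ScenarioRun scenarioRun =
    await reportingConfiguration.CreateScenarioRunAsync("Capital of France");

// Use the ScenarioRun's chat client so that responses are cached.
var messages = new List<ChatMessage> { new(ChatRole.User, "What is the capital of France?") };
ChatResponse response = await scenarioRun.ChatConfiguration!.ChatClient.GetResponseAsync(messages);

EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response);

Because the cached responses and stored results live under storageRootPath, the dotnet aieval tool shown earlier can generate a report from that same directory.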
Feedback & Contributing
We welcome feedback and contributions in our GitHub repo.
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 is compatible. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 is compatible. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies

.NETFramework 4.6.2
- Microsoft.Extensions.AI.Abstractions (>= 9.4.0-preview.1.25207.5)
- Microsoft.ML.Tokenizers (>= 1.0.1)

.NETStandard 2.0
- Microsoft.Extensions.AI.Abstractions (>= 9.4.0-preview.1.25207.5)
- Microsoft.ML.Tokenizers (>= 1.0.1)

net8.0
- Microsoft.Extensions.AI.Abstractions (>= 9.4.0-preview.1.25207.5)
- Microsoft.ML.Tokenizers (>= 1.0.1)

net9.0
- Microsoft.Extensions.AI.Abstractions (>= 9.4.0-preview.1.25207.5)
- Microsoft.ML.Tokenizers (>= 1.0.1)
NuGet packages (3)

Showing the top 3 NuGet packages that depend on Microsoft.Extensions.AI.Evaluation:

- Microsoft.Extensions.AI.Evaluation.Quality - A library containing a set of evaluators for evaluating the quality (coherence, relevance, truth, completeness, groundedness, fluency, equivalence, etc.) of responses received from an LLM.
- Microsoft.Extensions.AI.Evaluation.Reporting - A library for aggregating and reporting evaluation data. This library also includes support for caching LLM responses.
- Microsoft.Extensions.AI.Evaluation.Safety - A library containing a set of evaluators for evaluating the content safety (hate and unfairness, self-harm, violence, etc.) of responses received from an LLM.

GitHub repositories (1)

Showing the top 1 popular GitHub repository that depends on Microsoft.Extensions.AI.Evaluation:

- dotnet/ai-samples
Version | Downloads | Last updated |
---|---|---|
9.4.0-preview.1.25207.5 | 874 | 14 days ago |
9.3.0-preview.1.25164.6 | 2,512 | a month ago |
9.3.0-preview.1.25126.9 | 1,057 | 2 months ago |
9.3.0-preview.1.25114.11 | 4,805 | 2 months ago |
0.9.56-preview | 1,536 | 3 months ago |
0.9.45-preview | 588 | 4 months ago |
0.9.37-preview | 123 | 4 months ago |
0.9.6-preview | 601 | 5 months ago |
0.9.5-preview | 84 | 5 months ago |
0.9.2-preview | 91 | 5 months ago |