Azure.AI.Projects 2.0.0-beta.1

Prefix Reserved
This is a prerelease version of Azure.AI.Projects.

Azure AI Projects client library for .NET

The AI Projects client library is part of the Azure AI Foundry SDK and provides easy access to resources in your Azure AI Foundry Project. Use it to:

  • Create and run Classic Agents using the GetPersistentAgentsClient method on the client.
  • Create Agents using the Agents property on the client.
  • Enumerate AI Models deployed to your Foundry Project using the Deployments operations.
  • Enumerate connected Azure resources in your Foundry project using the Connections operations.
  • Upload documents and create Datasets to reference them using the Datasets operations.
  • Create and enumerate Search Indexes using the Indexes operations.

The client library uses version v1 of the AI Foundry data plane REST APIs.

Product documentation | Samples | API reference documentation | Package (NuGet) | SDK source code


Getting started

Prerequisites

To use Azure AI Projects capabilities, you must have an Azure subscription. This allows you to create an Azure AI Foundry resource and obtain its project endpoint.

Install the package

Install the client library for .NET with NuGet:

dotnet add package Azure.AI.Projects --prerelease

Authenticate the client

A secure, keyless authentication approach is to use Microsoft Entra ID (formerly Azure Active Directory) via the Azure Identity library. To use this library, you need to install the Azure.Identity package:

dotnet add package Azure.Identity

Key concepts

Create and authenticate the client

To interact with Azure AI Projects, you’ll need to create an instance of AIProjectClient. Use the appropriate credential type from the Azure Identity library. For example, DefaultAzureCredential:

var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
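
The samples in this README read the project endpoint (and, later, a model deployment name) from environment variables. A hypothetical setup is shown below; the endpoint and deployment values are placeholders you should replace with your own.

```shell
# Hypothetical values -- substitute your own Foundry project endpoint
# and the name of a model deployed in your project.
export PROJECT_ENDPOINT="https://your-resource.services.ai.azure.com/api/projects/your-project"
export MODEL_DEPLOYMENT_NAME="gpt-4o"
```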

Note: Support for project connection string and hub-based projects has been discontinued. We recommend creating a new Azure AI Foundry resource utilizing project endpoint. If this is not possible, please pin the version of Azure.AI.Projects to version 1.0.0-beta.8 or earlier.

Once the AIProjectClient is created, you can use properties such as .Datasets and .Indexes on this client to perform relevant operations.

Examples

Performing Classic Agent operations

The GetPersistentAgentsClient method on the AIProjectClient gives you access to an authenticated PersistentAgentsClient from the Azure.AI.Agents.Persistent package. Below we show how to create an Agent and delete it. To see what you can do with the agent you created, see the many samples associated with the Azure.AI.Agents.Persistent package.

The code below assumes ModelDeploymentName (a string) is defined. It's the deployment name of an AI model in your Foundry Project, as shown in the "Models + endpoints" tab, under the "Name" column.

var endpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = System.Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
AIProjectClient projectClient = new(new Uri(endpoint), new DefaultAzureCredential());
PersistentAgentsClient agentsClient = projectClient.GetPersistentAgentsClient();

// Step 1: Create an agent
PersistentAgent agent = agentsClient.Administration.CreateAgent(
    model: modelDeploymentName,
    name: "Math Tutor",
    instructions: "You are a personal math tutor. Write and run code to answer math questions."
);

// Step 2: Create a thread
PersistentAgentThread thread = agentsClient.Threads.CreateThread();

// Step 3: Add a message to a thread
PersistentThreadMessage message = agentsClient.Messages.CreateMessage(
    thread.Id,
    MessageRole.User,
    "I need to solve the equation `3x + 11 = 14`. Can you help me?");

// Intermission: message is now correlated with thread
// Intermission: listing messages will retrieve the message just added

List<PersistentThreadMessage> messagesList = [.. agentsClient.Messages.GetMessages(thread.Id)];
Assert.That(message.Id, Is.EqualTo(messagesList[0].Id));

// Step 4: Run the agent
ThreadRun run = agentsClient.Runs.CreateRun(
    thread.Id,
    agent.Id,
    additionalInstructions: "Please address the user as Jane Doe. The user has a premium account.");
do
{
    Thread.Sleep(TimeSpan.FromMilliseconds(500));
    run = agentsClient.Runs.GetRun(thread.Id, run.Id);
}
while (run.Status == RunStatus.Queued
    || run.Status == RunStatus.InProgress);
Assert.That(
    RunStatus.Completed,
    Is.EqualTo(run.Status),
    run.LastError?.Message);

Pageable<PersistentThreadMessage> messages
    = agentsClient.Messages.GetMessages(
        threadId: thread.Id, order: ListSortOrder.Ascending);

foreach (PersistentThreadMessage threadMessage in messages)
{
    Console.Write($"{threadMessage.CreatedAt:yyyy-MM-dd HH:mm:ss} - {threadMessage.Role,10}: ");
    foreach (MessageContent contentItem in threadMessage.ContentItems)
    {
        if (contentItem is MessageTextContent textItem)
        {
            Console.Write(textItem.Text);
        }
        else if (contentItem is MessageImageFileContent imageFileItem)
        {
            Console.Write($"<image from ID: {imageFileItem.FileId}>");
        }
        Console.WriteLine();
    }
}

agentsClient.Threads.DeleteThread(threadId: thread.Id);
agentsClient.Administration.DeleteAgent(agentId: agent.Id);

Performing Agent operations

Azure.AI.Projects can be used to create, update and delete Agents.

Create Agent

Synchronous call:

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a prompt agent."
};
AgentVersion agentVersion1 = projectClient.Agents.CreateAgentVersion(
    agentName: "myAgent1",
    options: new(agentDefinition));
Console.WriteLine($"Agent created (id: {agentVersion1.Id}, name: {agentVersion1.Name}, version: {agentVersion1.Version})");
AgentVersion agentVersion2 = projectClient.Agents.CreateAgentVersion(
    agentName: "myAgent2",
    options: new(agentDefinition));
Console.WriteLine($"Agent created (id: {agentVersion2.Id}, name: {agentVersion2.Name}, version: {agentVersion2.Version})");

Asynchronous call:

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a prompt agent."
};
AgentVersion agentVersion1 = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent1",
    options: new(agentDefinition));
Console.WriteLine($"Agent created (id: {agentVersion1.Id}, name: {agentVersion1.Name}, version: {agentVersion1.Version})");
AgentVersion agentVersion2 = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent2",
    options: new(agentDefinition));
Console.WriteLine($"Agent created (id: {agentVersion2.Id}, name: {agentVersion2.Name}, version: {agentVersion2.Version})");

Get Agent

Synchronous call:

AgentRecord result = projectClient.Agents.GetAgent(agentVersion1.Name);
Console.WriteLine($"Agent retrieved (id: {result.Id}, name: {result.Name})");

Asynchronous call:

AgentRecord result = await projectClient.Agents.GetAgentAsync(agentVersion1.Name);
Console.WriteLine($"Agent retrieved (id: {result.Id}, name: {result.Name})");

List Agents

Synchronous call:

foreach (AgentRecord agent in projectClient.Agents.GetAgents())
{
    Console.WriteLine($"Listed Agent: id: {agent.Id}, name: {agent.Name}");
}

Asynchronous call:

await foreach (AgentRecord agent in projectClient.Agents.GetAgentsAsync())
{
    Console.WriteLine($"Listed Agent: id: {agent.Id}, name: {agent.Name}");
}

Delete Agent

Synchronous call:

projectClient.Agents.DeleteAgentVersion(agentName: agentVersion1.Name, agentVersion: agentVersion1.Version);
Console.WriteLine($"Agent deleted (name: {agentVersion1.Name}, version: {agentVersion1.Version})");
projectClient.Agents.DeleteAgentVersion(agentName: agentVersion2.Name, agentVersion: agentVersion2.Version);
Console.WriteLine($"Agent deleted (name: {agentVersion2.Name}, version: {agentVersion2.Version})");

Asynchronous call:

await projectClient.Agents.DeleteAgentVersionAsync(agentName: agentVersion1.Name, agentVersion: agentVersion1.Version);
Console.WriteLine($"Agent deleted (name: {agentVersion1.Name}, version: {agentVersion1.Version})");
await projectClient.Agents.DeleteAgentVersionAsync(agentName: agentVersion2.Name, agentVersion: agentVersion2.Version);
Console.WriteLine($"Agent deleted (name: {agentVersion2.Name}, version: {agentVersion2.Version})");

Get an authenticated AzureOpenAI client

Your Azure AI Foundry project may have one or more OpenAI models deployed that support chat completions. Use the code below to get an authenticated ChatClient from the Azure.AI.OpenAI package, and execute a chat completions call.

The code below assumes modelDeploymentName (a string) is defined. It's the deployment name of an AI model in your Foundry Project or a connected Azure OpenAI resource, as shown in the "Models + endpoints" tab, under the "Name" column.

You can update the connectionName with one of the connections in your Foundry project, and you can update the apiVersion value with one found in the "Data plane - inference" row in this table.

var endpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = System.Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
var connectionName = System.Environment.GetEnvironmentVariable("CONNECTION_NAME");
Console.WriteLine("Create the Azure OpenAI chat client");
var credential = new DefaultAzureCredential();
AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), credential);

ClientConnection connection = projectClient.GetConnection(typeof(AzureOpenAIClient).FullName!);

if (!connection.TryGetLocatorAsUri(out Uri uri) || uri is null)
{
    throw new InvalidOperationException("Invalid URI.");
}
uri = new Uri($"https://{uri.Host}");

AzureOpenAIClient azureOpenAIClient = new AzureOpenAIClient(uri, credential);
ChatClient chatClient = azureOpenAIClient.GetChatClient(deploymentName: modelDeploymentName);

Console.WriteLine("Complete a chat");
ChatCompletion result = chatClient.CompleteChat("List all the rainbow colors");
Console.WriteLine(result.Content[0].Text);

Deployments operations

The code below shows some Deployments operations, which allow you to enumerate the AI models deployed to your AI Foundry Projects. These models can be seen in the "Models + endpoints" tab in your AI Foundry Project. Full samples can be found under the "Deployment" folder in the package samples.

var endpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = System.Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
var modelPublisher = System.Environment.GetEnvironmentVariable("MODEL_PUBLISHER");

AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());

Console.WriteLine("List all deployments:");
foreach (AIProjectDeployment deployment in projectClient.Deployments.GetDeployments())
{
    Console.WriteLine(deployment);
}

Console.WriteLine($"List all deployments by the model publisher `{modelPublisher}`:");
foreach (AIProjectDeployment deployment in projectClient.Deployments.GetDeployments(modelPublisher: modelPublisher))
{
    Console.WriteLine(deployment);
}

Console.WriteLine($"Get a single model deployment named `{modelDeploymentName}`:");
ModelDeployment deploymentDetails = (ModelDeployment)projectClient.Deployments.GetDeployment(modelDeploymentName);
Console.WriteLine(deploymentDetails);

Connections operations

The code below shows some Connection operations, which allow you to enumerate the Azure Resources connected to your AI Foundry Projects. These connections can be seen in the "Management Center", in the "Connected resources" tab in your AI Foundry Project. Full samples can be found under the "Connections" folder in the package samples.

var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var connectionName = Environment.GetEnvironmentVariable("CONNECTION_NAME");
AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());

Console.WriteLine("List the properties of all connections:");
foreach (AIProjectConnection connection in projectClient.Connections.GetConnections())
{
    Console.WriteLine(connection);
    Console.WriteLine(connection.Name);
}

Console.WriteLine("List the properties of all connections of a particular type (e.g., Azure OpenAI connections):");
foreach (AIProjectConnection connection in projectClient.Connections.GetConnections(connectionType: ConnectionType.AzureOpenAI))
{
    Console.WriteLine(connection);
}

Console.WriteLine($"Get the properties of a connection named `{connectionName}`:");
AIProjectConnection specificConnection = projectClient.Connections.GetConnection(connectionName, includeCredentials: false);
Console.WriteLine(specificConnection);

Console.WriteLine("Get the properties of a connection with credentials:");
AIProjectConnection specificConnectionCredentials = projectClient.Connections.GetConnection(connectionName, includeCredentials: true);
Console.WriteLine(specificConnectionCredentials);

Console.WriteLine($"Get the properties of the default connection:");
AIProjectConnection defaultConnection = projectClient.Connections.GetDefaultConnection(includeCredentials: false);
Console.WriteLine(defaultConnection);

Console.WriteLine($"Get the properties of the default connection with credentials:");
AIProjectConnection defaultConnectionCredentials = projectClient.Connections.GetDefaultConnection(includeCredentials: true);
Console.WriteLine(defaultConnectionCredentials);

Dataset operations

The code below shows some Dataset operations. Full samples can be found under the "Datasets" folder in the package samples.

var endpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var connectionName = Environment.GetEnvironmentVariable("CONNECTION_NAME");
var datasetName = System.Environment.GetEnvironmentVariable("DATASET_NAME");
var datasetVersion1 = System.Environment.GetEnvironmentVariable("DATASET_VERSION_1") ?? "1.0";
var datasetVersion2 = System.Environment.GetEnvironmentVariable("DATASET_VERSION_2") ?? "2.0";
var filePath = System.Environment.GetEnvironmentVariable("SAMPLE_FILE_PATH") ?? "sample_folder/sample_file1.txt";
var folderPath = System.Environment.GetEnvironmentVariable("SAMPLE_FOLDER_PATH") ?? "sample_folder";

AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());

Console.WriteLine($"Uploading a single file to create Dataset with name {datasetName} and version {datasetVersion1}:");
FileDataset fileDataset = projectClient.Datasets.UploadFile(
    name: datasetName,
    version: datasetVersion1,
    filePath: filePath,
    connectionName: connectionName
    );
Console.WriteLine(fileDataset);

Console.WriteLine($"Uploading folder to create Dataset version {datasetVersion2}:");
FolderDataset folderDataset = projectClient.Datasets.UploadFolder(
    name: datasetName,
    version: datasetVersion2,
    folderPath: folderPath,
    connectionName: connectionName,
    filePattern: new Regex(".*\\.txt")
);
Console.WriteLine(folderDataset);

Console.WriteLine($"Retrieving Dataset version {datasetVersion1}:");
AIProjectDataset dataset = projectClient.Datasets.GetDataset(datasetName, datasetVersion1);
Console.WriteLine(dataset.Id);

Console.WriteLine($"Retrieving credentials of Dataset {datasetName} version {datasetVersion1}:");
DatasetCredential credentials = projectClient.Datasets.GetCredentials(datasetName, datasetVersion1);
Console.WriteLine(credentials);

Console.WriteLine($"Listing all versions for Dataset '{datasetName}':");
foreach (AIProjectDataset ds in projectClient.Datasets.GetDatasetVersions(datasetName))
{
    Console.WriteLine(ds);
    Console.WriteLine(ds.Version);
}

Console.WriteLine($"Listing latest versions for all datasets:");
foreach (AIProjectDataset ds in projectClient.Datasets.GetDatasets())
{
    Console.WriteLine($"{ds.Name}, {ds.Version}, {ds.Id}");
}

Console.WriteLine($"Deleting Dataset versions {datasetVersion1} and {datasetVersion2}:");
projectClient.Datasets.Delete(datasetName, datasetVersion1);

projectClient.Datasets.Delete(datasetName, datasetVersion2);
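
The UploadFile and UploadFolder calls earlier in this sample assume the sample file and folder exist locally. A quick way to create them is sketched below; sample_file2.txt and the file contents are hypothetical, added only so the folder upload with the `.txt` pattern has more than one file to match.

```shell
# Create the sample folder and text files referenced by the Datasets sample.
mkdir -p sample_folder
echo "sample dataset content one" > sample_folder/sample_file1.txt
echo "sample dataset content two" > sample_folder/sample_file2.txt
```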

Indexes operations

The code below shows some Indexes operations. Full samples can be found under the "Indexes" folder in the package samples.

var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var indexName = Environment.GetEnvironmentVariable("INDEX_NAME") ?? "my-index";
var indexVersion = Environment.GetEnvironmentVariable("INDEX_VERSION") ?? "1.0";
var aiSearchConnectionName = Environment.GetEnvironmentVariable("AI_SEARCH_CONNECTION_NAME") ?? "my-ai-search-connection-name";
var aiSearchIndexName = Environment.GetEnvironmentVariable("AI_SEARCH_INDEX_NAME") ?? "my-ai-search-index-name";

AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
Console.WriteLine("Create a local Index with configurable data, referencing an existing AI Search resource");
AzureAISearchIndex searchIndex = new AzureAISearchIndex(aiSearchConnectionName, aiSearchIndexName)
{
    Description = "Sample Index for testing"
};

Console.WriteLine($"Create the Project Index named `{indexName}` using the previously created local object:");
searchIndex = (AzureAISearchIndex)projectClient.Indexes.CreateOrUpdate(
    name: indexName,
    version: indexVersion,
    index: searchIndex
);
Console.WriteLine(searchIndex);

Console.WriteLine($"Get an existing Index named `{indexName}`, version `{indexVersion}`:");
AIProjectIndex retrievedIndex = projectClient.Indexes.GetIndex(name: indexName, version: indexVersion);
Console.WriteLine(retrievedIndex);

Console.WriteLine($"Listing all versions of the Index named `{indexName}`:");
foreach (AIProjectIndex version in projectClient.Indexes.GetIndexVersions(name: indexName))
{
    Console.WriteLine(version);
}

Console.WriteLine($"Listing all Indices:");
foreach (AIProjectIndex version in projectClient.Indexes.GetIndexes())
{
    Console.WriteLine(version);
}

Console.WriteLine("Delete the Index version created above:");
projectClient.Indexes.Delete(name: indexName, version: indexVersion);

Files operations

The code below shows some Files operations, which allow you to manage files through the OpenAI Files API. These operations are accessed via the ProjectOpenAIClient. Full samples can be found under the "FineTuning" folder in the package samples.

The first step in working with OpenAI files is to authenticate to Azure through the AIProjectClient and get an OpenAIFileClient.

string trainFilePath = Environment.GetEnvironmentVariable("TRAINING_FILE_PATH") ?? "data/sft_training_set.jsonl";
var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
ProjectOpenAIClient oaiClient = projectClient.OpenAI;
OpenAIFileClient fileClient = oaiClient.GetOpenAIFileClient();

Use the authenticated OpenAIFileClient to upload local files to Azure.

using FileStream fileStream = File.OpenRead(trainFilePath);
OpenAIFile uploadedFile = fileClient.UploadFile(
    fileStream,
    "sft_training_set.jsonl",
    FileUploadPurpose.FineTune);
Console.WriteLine($"Uploaded file with ID: {uploadedFile.Id}");

To retrieve a file, use the GetFile method of OpenAIFileClient.

// fileId is the ID of a previously uploaded file (for example, uploadedFile.Id above).
OpenAIFile retrievedFile = fileClient.GetFile(fileId);
Console.WriteLine($"Retrieved file: {retrievedFile.Filename} ({retrievedFile.SizeInBytes} bytes)");

Use the GetFiles method of OpenAIFileClient to list files, and the DeleteFile method to delete them.

ClientResult<OpenAIFileCollection> filesResult = fileClient.GetFiles();
Console.WriteLine($"Listed {filesResult.Value.Count} file(s)");
ClientResult<FileDeletionResult> deleteResult = fileClient.DeleteFile(fileId);
Console.WriteLine($"Deleted file: {deleteResult.Value.FileId}");

Fine-Tuning operations

The code below shows how to create a supervised fine-tuning job using the OpenAI Fine-Tuning API through the ProjectOpenAIClient. Fine-tuning allows you to customize models for specific tasks using your own training data. Full samples can be found under the "FineTuning" folder in the package samples.

string trainingFilePath = Environment.GetEnvironmentVariable("TRAINING_FILE_PATH") ?? "data/sft_training_set.jsonl";
string validationFilePath = Environment.GetEnvironmentVariable("VALIDATION_FILE_PATH") ?? "data/sft_validation_set.jsonl";
var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
AIProjectClient projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
ProjectOpenAIClient oaiClient = projectClient.OpenAI;
OpenAIFileClient fileClient = oaiClient.GetOpenAIFileClient();
FineTuningClient fineTuningClient = oaiClient.GetFineTuningClient();

Fine-tuning adapts the weights of a deep neural network to domain-specific data. To achieve this, we provide the model with a training data set for the weight updates and a validation set for evaluating learning efficiency.

// Upload training file
Console.WriteLine("Uploading training file...");
using FileStream trainStream = File.OpenRead(trainingFilePath);
OpenAIFile trainFile = fileClient.UploadFile(
    trainStream,
    "sft_training_set.jsonl",
    FileUploadPurpose.FineTune);
Console.WriteLine($"Uploaded training file with ID: {trainFile.Id}");

// Upload validation file
Console.WriteLine("Uploading validation file...");
using FileStream validationStream = File.OpenRead(validationFilePath);
OpenAIFile validationFile = fileClient.UploadFile(
    validationStream,
    "sft_validation_set.jsonl",
    FileUploadPurpose.FineTune);
Console.WriteLine($"Uploaded validation file with ID: {validationFile.Id}");
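
Supervised fine-tuning training and validation files use the JSONL chat-message format: one JSON object per line, each with a "messages" array. The snippet below writes a minimal, hypothetical two-example training set; the file name matches the sample's default, but the content is illustrative only.

```shell
# Hypothetical two-example training set in the JSONL chat format.
cat > sft_training_set.jsonl <<'EOF'
{"messages": [{"role": "user", "content": "What is 2 + 2?"}, {"role": "assistant", "content": "4"}]}
{"messages": [{"role": "user", "content": "Name a primary color."}, {"role": "assistant", "content": "Red"}]}
EOF
```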

Now we will use the uploaded training and validation sets to fine-tune the model. In this experiment we train the model for three epochs, with a batch size of one and a constant learning rate of 1.0.

// Create supervised fine-tuning job
Console.WriteLine("Creating supervised fine-tuning job...");
FineTuningJob fineTuningJob = fineTuningClient.FineTune(
    modelDeploymentName,
    trainFile.Id,
    waitUntilCompleted: false,
    new()
    {
        TrainingMethod = FineTuningTrainingMethod.CreateSupervised(
            epochCount: 3,
            batchSize: 1,
            learningRate: 1.0),
        ValidationFile = validationFile.Id
    });
Console.WriteLine($"Created fine-tuning job: {fineTuningJob.JobId}");
Console.WriteLine($"Status: {fineTuningJob.Status}");

Memory store operations

Note: Memory stores are an experimental feature. To use them, disable the AAIP001 warning:

#pragma warning disable AAIP001

Memory in Foundry Agent Service is a managed, long-term memory solution that enables Agent continuity across sessions, devices, and workflows. The project client can be used to manage memory stores. For brevity, the examples below show only the synchronous version of the API.

Use the client to create a MemoryStore. A memory store requires two models: one for embeddings and another for chat completions.

MemoryStoreDefaultDefinition memoryStoreDefinition = new(
    chatModel: modelDeploymentName,
    embeddingModel: embeddingDeploymentName
);
memoryStoreDefinition.Options = new(userProfileEnabled: true, chatSummaryEnabled: true);
MemoryStore memoryStore = projectClient.MemoryStores.CreateMemoryStore(
    name: "testMemoryStore",
    definition: memoryStoreDefinition,
    description: "Memory store demo."
);
Console.WriteLine($"Memory store with id {memoryStore.Id}, name {memoryStore.Name} and description {memoryStore.Description} was created.");

Update the description of the memory store we just created.

memoryStore = projectClient.MemoryStores.UpdateMemoryStore(name: memoryStore.Name, description: "New description for memory store demo.");
Console.WriteLine($"Memory store with id {memoryStore.Id}, name {memoryStore.Name} now has description: {memoryStore.Description}.");

Get the memory store.

memoryStore = projectClient.MemoryStores.GetMemoryStore(name: memoryStore.Name);
Console.WriteLine($"Returned Memory store with id {memoryStore.Id}, name {memoryStore.Name} and description {memoryStore.Description}.");

List all memory stores in your Microsoft Foundry resource.

foreach (MemoryStore store in projectClient.MemoryStores.GetMemoryStores())
{
    Console.WriteLine($"Memory store id: {store.Id}, name: {store.Name}, description: {store.Description}.");
}

Create a scope in the MemoryStore and add one item.

string scope = "Flower";
MemoryUpdateOptions memoryOptions = new(scope);
memoryOptions.Items.Add(ResponseItem.CreateUserMessageItem("My favourite flower is Cephalocereus euphorbioides."));
MemoryUpdateResult updateResult = projectClient.MemoryStores.WaitForMemoriesUpdate(memoryStoreName: memoryStore.Name, options: memoryOptions, pollingInterval: 500);
if (updateResult.Status == MemoryStoreUpdateStatus.Failed)
{
    throw new InvalidOperationException(updateResult.ErrorDetails);
}
Console.WriteLine($"The update operation {updateResult.UpdateId} has finished with {updateResult.Status} status.");

Ask a question about the memorized item.

MemorySearchOptions opts = new(scope)
{
    Items = { ResponseItem.CreateUserMessageItem("What is your favourite flower?") },
};
MemoryStoreSearchResponse resp = projectClient.MemoryStores.SearchMemories(
    memoryStoreName: memoryStore.Name,
    options: opts
);
Console.WriteLine("==The output from memory tool.==");
foreach (Azure.AI.Projects.MemorySearchItem item in resp.Memories)
{
    Console.WriteLine(item.MemoryItem.Content);
}
Console.WriteLine("==End of memory tool output.==");

Remove the scope we created from the memory store.

MemoryStoreDeleteScopeResponse deleteScopeResponse = projectClient.MemoryStores.DeleteScope(name: memoryStore.Name, scope: "Flower");
string status = deleteScopeResponse.Deleted ? "" : " not";
Console.WriteLine($"The scope {deleteScopeResponse.Name} was{status} deleted.");

Finally, delete the memory store.

DeleteMemoryStoreResponse deleteResponse = projectClient.MemoryStores.DeleteMemoryStore(name: memoryStore.Name);
status = deleteResponse.Deleted ? "" : " not";
Console.WriteLine($"The memory store {deleteResponse.Name} was{status} deleted.");

For more information about memory stores, please refer to this article.

Evaluations

Evaluation in the Azure AI Projects client library provides quantitative, AI-assisted quality and safety metrics to assess the performance of LLM models, generative AI applications, and Agents. Metrics are defined as evaluators; built-in or custom evaluators can provide comprehensive evaluation insights.

Agent evaluation

All evaluation operations can be performed using the EvaluationClient. Here we demonstrate only the basic concepts; please see the full evaluation sample in our samples section.

First, we need to define the testing criteria and the data source config. The testing criteria list all the evaluators and their data mappings. In the example below we use three built-in evaluators: "violence_detection", "fluency", and "task_adherence". We use the Agent's string and structured JSON outputs, named sample.output_text and sample.output_items respectively, as the response parameter for the evaluation, and take the query property from the data set using the item.query placeholder.

object[] testingCriteria = [
    new {
        type = "azure_ai_evaluator",
        name = "violence_detection",
        evaluator_name = "builtin.violence",
        data_mapping = new { query = "{{item.query}}", response = "{{sample.output_text}}"}
    },
    new {
        type = "azure_ai_evaluator",
        name = "fluency",
        evaluator_name = "builtin.fluency",
        initialization_parameters = new { deployment_name = modelDeploymentName},
        data_mapping = new { query = "{{item.query}}", response = "{{sample.output_text}}"}
    },
    new {
        type = "azure_ai_evaluator",
        name = "task_adherence",
        evaluator_name = "builtin.task_adherence",
        initialization_parameters = new { deployment_name = modelDeploymentName},
        data_mapping = new { query = "{{item.query}}", response = "{{sample.output_items}}"}
    },
];
object dataSourceConfig = new
{
    type = "custom",
    item_schema = new
    {
        type = "object",
        properties = new
        {
            query = new
            {
                type = "string"
            }
        },
        required = new[] { "query" }
    },
    include_sample_schema = true
};
BinaryData evaluationData = BinaryData.FromObjectAsJson(
    new
    {
        name = "Agent Evaluation",
        data_source_config = dataSourceConfig,
        testing_criteria = testingCriteria
    }
);

Use EvaluationClient to create the evaluation with provided parameters.

using BinaryContent evaluationDataContent = BinaryContent.Create(evaluationData);
ClientResult evaluation = await evaluationClient.CreateEvaluationAsync(evaluationDataContent);
Dictionary<string, string> fields = ParseClientResult(evaluation, ["name", "id"]);
string evaluationName = fields["name"];
string evaluationId = fields["id"];
Console.WriteLine($"Evaluation created (id: {evaluationId}, name: {evaluationName})");

Create the data source. It contains a name, the ID of the evaluation we created above, and a data source consisting of the target agent name and version, two queries for the agent, and a template mapping those queries to the text field of the user messages that will be sent to the Agent. The target type azure_ai_agent informs the service that we are evaluating an Agent.

object dataSource = new
{
    type = "azure_ai_target_completions",
    source = new
    {
        type = "file_content",
        content = new[] {
            new { item = new { query = "What is the capital of France?" } },
            new { item = new { query = "How do I reverse a string in Python? "} },
        }
    },
    input_messages = new
    {
        type = "template",
        template = new[] {
            new {
                type = "message",
                role = "user",
                content = new { type = "input_text", text = "{{item.query}}" }
            }
        }
    },
    target = new
    {
        type = "azure_ai_agent",
        name = agentVersion.Name,
        // Version is optional. Defaults to latest version if not specified.
        version = agentVersion.Version,
    }
};
BinaryData runData = BinaryData.FromObjectAsJson(
    new
    {
        eval_id = evaluationId,
        name = $"Evaluation Run for Agent {agentVersion.Name}",
        data_source = dataSource
    }
);
using BinaryContent runDataContent = BinaryContent.Create(runData);

Create the evaluation run and extract its ID and status.

ClientResult run = await evaluationClient.CreateEvaluationRunAsync(evaluationId: evaluationId, content: runDataContent);
fields = ParseClientResult(run, ["id", "status"]);
string runId = fields["id"];
string runStatus = fields["status"];
Console.WriteLine($"Evaluation run created (id: {runId})");

Wait for the evaluation run to reach a terminal state.

while (runStatus != "failed" && runStatus != "completed")
{
    await Task.Delay(TimeSpan.FromMilliseconds(500));
    run = await evaluationClient.GetEvaluationRunAsync(evaluationId: evaluationId, evaluationRunId: runId, options: new());
    runStatus = ParseClientResult(run, ["status"])["status"];
    Console.WriteLine($"Waiting for eval run to complete... current status: {runStatus}");
}
if (runStatus == "failed")
{
    throw new InvalidOperationException($"Evaluation run failed with error: {GetErrorMessageOrEmpty(run)}");
}
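GetErrorMessageOrEmpty is likewise a local helper, not an SDK method. One possible sketch, assuming the service reports failures as an error object with a message property in the response body:

```csharp
// Hypothetical helper: extract "error.message" from the raw JSON body,
// returning an empty string when no error object is present.
private static string GetErrorMessageOrEmpty(ClientResult result)
{
    using JsonDocument document = JsonDocument.Parse(result.GetRawResponse().Content.ToMemory());
    if (document.RootElement.TryGetProperty("error", out JsonElement error)
        && error.ValueKind == JsonValueKind.Object
        && error.TryGetProperty("message", out JsonElement message))
    {
        return message.GetString() ?? string.Empty;
    }
    return string.Empty;
}
```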

Get the results using the GetResultsListAsync method. It pages through the run's output items by calling GetEvaluationRunOutputItemsAsync on the EvaluationClient, which returns a ClientResult whose binary-encoded JSON response can be read via GetRawResponse().

private static async Task<List<string>> GetResultsListAsync(EvaluationClient client, string evaluationId, string evaluationRunId)
{
    List<string> resultJsons = [];
    bool hasMore;
    string lastItemId = null;
    do
    {
        hasMore = false;
        // Pass the ID of the last item seen as the "after" cursor so each
        // iteration fetches the next page instead of the first page again.
        ClientResult resultList = await client.GetEvaluationRunOutputItemsAsync(evaluationId: evaluationId, evaluationRunId: evaluationRunId, limit: null, order: "asc", after: lastItemId, outputItemStatus: default, options: new());
        using JsonDocument document = JsonDocument.Parse(resultList.GetRawResponse().Content.ToMemory());

        foreach (JsonProperty topProperty in document.RootElement.EnumerateObject())
        {
            if (topProperty.NameEquals("has_more"u8))
            {
                hasMore = topProperty.Value.GetBoolean();
            }
            else if (topProperty.NameEquals("data"u8) && topProperty.Value.ValueKind == JsonValueKind.Array)
            {
                foreach (JsonElement dataElement in topProperty.Value.EnumerateArray())
                {
                    resultJsons.Add(dataElement.ToString());
                    if (dataElement.TryGetProperty("id", out JsonElement idElement))
                    {
                        lastItemId = idElement.GetString();
                    }
                }
            }
        }
    } while (hasMore);
    return resultJsons;
}
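With the helper in place, fetching and printing the run's output items might look like the following (the JSON shape of each item depends on the evaluators used):

```csharp
// Fetch all output items for the completed run and print their raw JSON.
List<string> results = await GetResultsListAsync(evaluationClient, evaluationId, runId);
Console.WriteLine($"Retrieved {results.Count} output item(s).");
foreach (string resultJson in results)
{
    Console.WriteLine(resultJson);
}
```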
Model evaluation

The model evaluation scenario differs from agent evaluation only in the target configuration inside dataSource:

object dataSource = new
{
    type = "azure_ai_target_completions",
    source = new
    {
        type = "file_content",
        content = new[] {
            new { item = new { query = "What is the capital of France?" } },
            new { item = new { query = "How do I reverse a string in Python?" } },
        }
    },
    input_messages = new
    {
        type = "template",
        template = new[] {
            new {
                type = "message",
                role = "user",
                content = new { type = "input_text", text = "{{item.query}}" }
            }
        }
    },
    target = new
    {
        type = "azure_ai_model",
        model = modelDeploymentName,
        sampling_params = new
        {
            top_p = 1.0f,
            max_completion_tokens = 2048,
        }
    }
};
BinaryData runData = BinaryData.FromObjectAsJson(
    new
    {
        eval_id = evaluationId,
        name = $"Evaluation Run for Model {modelDeploymentName}",
        data_source = dataSource
    }
);
using BinaryContent runDataContent = BinaryContent.Create(runData);
Using uploaded datasets

To use an uploaded dataset with evaluations, upload the dataset as described in the dataset operations section and use the uploaded dataset's ID when creating the data source object.

object dataSource = new
{
    type = "jsonl",
    source = new
    {
        type = "file_id",
        id = fileDataset.Id
    },
};
object runMetadata = new
{
    team = "evaluator-experimentation",
    scenario = "dataset-with-id",
};
BinaryData runData = BinaryData.FromObjectAsJson(
    new
    {
        eval_id = evaluationId,
        name = $"Evaluation Run for dataset {fileDataset.Name}",
        metadata = runMetadata,
        data_source = dataSource
    }
);
using BinaryContent runDataContent = BinaryContent.Create(runData);
Using custom prompt-based evaluator

Note: Storing evaluators in the catalog is an experimental feature. To use it, disable the AAIP001 warning:

#pragma warning disable AAIP001

Alongside the built-in evaluators, it is possible to define evaluators with custom logic. Once an evaluator has been created and uploaded to the catalog, it can be used like any regular evaluator.

Create a prompt-based evaluator.

private EvaluatorVersion promptVersion = new(
    categories: [EvaluatorCategory.Quality],
    definition: new PromptBasedEvaluatorDefinition(
        promptText: """
            You are a Groundedness Evaluator.

            Your task is to evaluate how well the given response is grounded in the provided ground truth.  
            Groundedness means the response’s statements are factually supported by the ground truth.  
            Evaluate factual alignment only — ignore grammar, fluency, or completeness.

            ---

            ### Input:
            Query:
            {{query}}

            Response:
            {{response}}

            Ground Truth:
            {{ground_truth}}

            ---

            ### Scoring Scale (1–5):
            5 → Fully grounded. All claims supported by ground truth.  
            4 → Mostly grounded. Minor unsupported details.  
            3 → Partially grounded. About half the claims supported.  
            2 → Mostly ungrounded. Only a few details supported.  
            1 → Not grounded. Almost all information unsupported.

            ---

            ### Output Format (JSON):
            {
                "result": <integer from 1 to 5>,
                "reason": "<brief explanation for the score>"
            }
            """
    ),
    evaluatorType: EvaluatorType.Custom
)
{
    DisplayName = "Custom prompt evaluator example",
    Description = "Custom evaluator for groundedness",
};

Upload the evaluator to Azure.

EvaluatorVersion promptEvaluator = await projectClient.Evaluators.CreateVersionAsync(
    name: "myCustomEvaluatorPrompt",
    evaluatorVersion: promptVersion
);
Console.WriteLine($"Created evaluator {promptEvaluator.Id}");

To use the evaluator we have created, set the following testing criteria:

object[] testingCriteria = [
    new {
        type = "azure_ai_evaluator",
        name = "MyCustomEvaluation",
        evaluator_name = promptEvaluator.Name,
        data_mapping = new {
            query = "{{item.query}}",
            response = "{{item.response}}",
            ground_truth = "{{item.ground_truth}}",
        },
        initialization_parameters = new { deployment_name = modelDeploymentName, threshold = 3},
    },
];
Using custom code-based evaluator

Note: Storing evaluators in the catalog is an experimental feature. To use it, disable the AAIP001 warning:

#pragma warning disable AAIP001

Custom evaluators may rely on code-based rules as shown below.

private EvaluatorVersion GetCodeEvaluatorVersion()
{
    EvaluatorMetric resultMetric = new()
    {
        Type = EvaluatorMetricType.Ordinal,
        DesirableDirection = EvaluatorMetricDirection.Increase,
        MinValue = 0.0f,
        MaxValue = 1.0f
    };
    EvaluatorVersion evaluatorVersion = new(
    categories: [EvaluatorCategory.Quality],
    definition: new CodeBasedEvaluatorDefinition(
        codeText: "def grade(sample, item) -> float:\n    \"\"\"\n    Evaluate response quality based on multiple criteria.\n    Note: All data is in the \\'item\\' parameter, \\'sample\\' is empty.\n    \"\"\"\n    # Extract data from item (not sample!)\n    response = item.get(\"response\", \"\").lower() if isinstance(item, dict) else \"\"\n    ground_truth = item.get(\"ground_truth\", \"\").lower() if isinstance(item, dict) else \"\"\n    query = item.get(\"query\", \"\").lower() if isinstance(item, dict) else \"\"\n    \n    # Check if response is empty\n    if not response:\n        return 0.0\n    \n    # Check for harmful content\n    harmful_keywords = [\"harmful\", \"dangerous\", \"unsafe\", \"illegal\", \"unethical\"]\n    if any(keyword in response for keyword in harmful_keywords):\n        return 0.0\n    \n    # Length check\n    if len(response) < 10:\n        return 0.1\n    elif len(response) < 50:\n        return 0.2\n    \n    # Technical content check\n    technical_keywords = [\"api\", \"experiment\", \"run\", \"azure\", \"machine learning\", \"gradient\", \"neural\", \"algorithm\"]\n    technical_score = sum(1 for k in technical_keywords if k in response) / len(technical_keywords)\n    \n    # Query relevance\n    query_words = query.split()[:3] if query else []\n    relevance_score = 0.7 if any(word in response for word in query_words) else 0.3\n    \n    # Ground truth similarity\n    if ground_truth:\n        truth_words = set(ground_truth.split())\n        response_words = set(response.split())\n        overlap = len(truth_words & response_words) / len(truth_words) if truth_words else 0\n        similarity_score = min(1.0, overlap)\n    else:\n        similarity_score = 0.5\n    \n    return min(1.0, (technical_score * 0.3) + (relevance_score * 0.3) + (similarity_score * 0.4))",
        initParameters: BinaryData.FromObjectAsJson(
            new
            {
                required = new[] { "deployment_name", "pass_threshold" },
                type = "object",
                properties = new
                {
                    deployment_name = new { type = "string" },
                    pass_threshold = new { type = "string" }
                }
            }
        ),
        dataSchema: BinaryData.FromObjectAsJson(
            new
            {
                required = new[] { "item" },
                type = "object",
                properties = new
                {
                    item = new
                    {
                        type = "object",
                        properties = new
                        {
                            query = new { type = "string" },
                            response = new { type = "string" },
                            ground_truth = new { type = "string" },
                        }
                    }
                }
            }
        ),
        metrics: new Dictionary<string, EvaluatorMetric> {
            { "result", resultMetric }
        }
    ),
    evaluatorType: EvaluatorType.Custom
)
    {
        DisplayName = "Custom code evaluator example",
        Description = "Custom evaluator to detect violent content",
    };
    return evaluatorVersion;
}

The code-based evaluator can be used in the same way as the prompt-based one.
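For example, assuming the code-based evaluator was uploaded with Evaluators.CreateVersionAsync and the result stored in a codeEvaluator variable (mirroring the prompt-based upload step), the testing criteria could reference it like this; note that the initialization parameter names must match the init_parameters schema declared above:

```csharp
object[] testingCriteria = [
    new {
        type = "azure_ai_evaluator",
        name = "MyCustomCodeEvaluation",
        evaluator_name = codeEvaluator.Name,
        data_mapping = new {
            query = "{{item.query}}",
            response = "{{item.response}}",
            ground_truth = "{{item.ground_truth}}",
        },
        // pass_threshold is declared as a string in the schema above.
        initialization_parameters = new { deployment_name = modelDeploymentName, pass_threshold = "0.5" },
    },
];
```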

Evaluation with Application Insights

Evaluators can be used to gather data from the Application Insights resource connected to Microsoft Foundry. This requires the managed identities of both the Foundry resource and the project to be assigned the "Log Analytics Reader" role on the Application Insights resource. The data source configuration must have its scenario field set to traces, indicating that the data will be generated from a Kusto query.

private static BinaryData GetEvaluationCriteria(string[] names, string modelDeploymentName)
{
    object[] testingCriteria = new object[names.Length];
    for (int i = 0; i < names.Length; i++)
    {
        testingCriteria[i] = new
        {
            type = "azure_ai_evaluator",
            name = names[i],
            evaluator_name = $"builtin.{names[i]}",
            data_mapping = new { query = "{{query}}", response = "{{response}}", tool_definitions = "{{tool_definitions}}" },
            initialization_parameters = new { deployment_name = modelDeploymentName },
        };
    }
    object dataSourceConfig = new
    {
        type = "azure_ai_source",
        scenario = "traces"
    };
    return BinaryData.FromObjectAsJson(
        new
        {
            name = "Trace Evaluation",
            data_source_config = dataSourceConfig,
            testing_criteria = testingCriteria
        }
    );
}

The runData must contain the name and ID of the evaluation and the data source. In this scenario its type is azure_ai_traces, which tells the service to run the Kusto query on traces and filter them by the trace IDs stored in the traceIDs string array.

object dataSource = new
{
    type = "azure_ai_traces",
    trace_ids = traceIDs,
    lookback_hours = lookbackHours
};
BinaryData runData = BinaryData.FromObjectAsJson(
    new
    {
        eval_id = evaluationId,
        name = $"agent_trace_eval_{endTime:O}",
        data_source = dataSource,
        metadata = new
        {
            agent_id = agentId,
            start_time = endTime.AddHours(-lookbackHours).ToString("O"),
            end_time = endTime.ToString("O"),
        }
    }
);
using BinaryContent runDataContent = BinaryContent.Create(runData);
Evaluating responses

The evaluation can also be run on the OpenAI response items received from the agent. To use this data structure, the data source configuration scenario has to be set to "responses".

private static BinaryData GetEvaluationConfig(string modelDeploymentName)
{
    object[] testingCriteria = [
        new {
            type = "azure_ai_evaluator",
            name = "violence_detection",
            evaluator_name = "builtin.violence",
        },
    ];
    object dataSourceConfig = new
    {
        type = "azure_ai_source",
        scenario = "responses"
    };
    return BinaryData.FromObjectAsJson(
        new
        {
            name = "Agent Response Evaluation",
            data_source_config = dataSourceConfig,
            testing_criteria = testingCriteria
        }
    );
}

The data source needs an item_generation_params section with type response_retrieval. This section tells the service to fetch the data from the response with the given ID.

private static BinaryData GetRunData(string agentName, string responseId, string evaluationId)
{
    object dataSource = new
    {
        type = "azure_ai_responses",
        item_generation_params = new
        {
            type = "response_retrieval",
            data_mapping = new { response_id = "{{item.resp_id}}" },
            source = new
            {
                type = "file_content",
                content = new[]
                {
                    new
                    {
                        item = new { resp_id =  responseId}
                    }
                }
            }
        },
    };
    return BinaryData.FromObjectAsJson(
        new
        {
            eval_id = evaluationId,
            name = $"Evaluation Run for Agent {agentName}",
            data_source = dataSource
        }
    );
}
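Putting the two helpers together follows the same create-evaluation/create-run pattern shown earlier (a sketch; ParseClientResult is the local helper from the agent evaluation sample, and responseId is the ID of a previously received response):

```csharp
// Create the evaluation from the config helper and capture its ID.
BinaryData evaluationData = GetEvaluationConfig(modelDeploymentName);
using BinaryContent evaluationContent = BinaryContent.Create(evaluationData);
ClientResult evaluation = await evaluationClient.CreateEvaluationAsync(evaluationContent);
string evaluationId = ParseClientResult(evaluation, ["id"])["id"];

// Create the run against the stored response.
BinaryData runData = GetRunData(agentVersion.Name, responseId, evaluationId);
using BinaryContent runContent = BinaryContent.Create(runData);
ClientResult run = await evaluationClient.CreateEvaluationRunAsync(evaluationId: evaluationId, content: runContent);
Console.WriteLine($"Evaluation run created (id: {ParseClientResult(run, ["id"])["id"]})");
```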
Evaluation rules

Evaluation rules subscribe an evaluation to a specific event. In the example below we create an evaluation rule that launches the evaluation each time the agent sends a response.

ContinuousEvaluationRuleAction continuousAction = new(evaluationId)
{
    MaxHourlyRuns = 100,
};
EvaluationRule continuousRule = new(
    action: continuousAction, eventType: EvaluationRuleEventType.ResponseCompleted, enabled: true)
{
    Filter = new EvaluationRuleFilter(agentName: agentVersion.Name),
    DisplayName = "Continuous evaluation rule."
};

Apply the rule.

EvaluationRule continuousEvalRule = await projectClient.EvaluationRules.CreateOrUpdateAsync(
    id: "my-continuous-eval-rule",
    evaluationRule: continuousRule
);
Console.WriteLine($"Continuous Evaluation Rule created (id: {continuousEvalRule.Id}, name: {continuousEvalRule.DisplayName})");

Tracing

Note: Tracing functionality is in preliminary preview and is subject to change. Spans, attributes, and events may be modified in future versions.

Enabling GenAI Tracing

Tracing requires enabling GenAI-specific OpenTelemetry support. One way to do this is to set the AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING environment variable value to true. You can also enable the feature with the following code:

AppContext.SetSwitch("Azure.Experimental.EnableGenAITracing", true);

Precedence: If both the AppContext switch and the environment variable are set, the AppContext switch takes priority. No exception is thrown on conflict. If neither is set, the value defaults to false.

Important: When you enable Azure.Experimental.EnableGenAITracing, the SDK automatically enables the Azure.Experimental.EnableActivitySource flag, which is required for the OpenTelemetry instrumentation to function.

You can add an Application Insights Azure resource to your Microsoft Foundry project. If one is enabled, you can get the Application Insights connection string, configure your AI Projects client, and observe traces in Azure Monitor. Typically, you will want to start tracing before you create a client or Agent.

Tracing to Azure Monitor

First, set the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable to point to your Azure Monitor resource.

For tracing to Azure Monitor from your application, the preferred option is to use Azure.Monitor.OpenTelemetry.AspNetCore. Install the package with NuGet:

dotnet add package Azure.Monitor.OpenTelemetry.AspNetCore

More information about using the Azure.Monitor.OpenTelemetry.AspNetCore package can be found here.

Another option is to use Azure.Monitor.OpenTelemetry.Exporter package. Install the package with NuGet:

dotnet add package Azure.Monitor.OpenTelemetry.Exporter

Here is an example of how to set up tracing to Azure Monitor using Azure.Monitor.OpenTelemetry.Exporter:

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Azure.AI.Projects.*")
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("AgentTracingSample"))
    .AddAzureMonitorTraceExporter().Build();

Tracing to Console

For tracing to console from your application, install the OpenTelemetry.Exporter.Console with NuGet:

dotnet add package OpenTelemetry.Exporter.Console

Here is an example of how to set up tracing to the console:

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Azure.AI.Projects.*") // Add the required source name
    .SetResourceBuilder(OpenTelemetry.Resources.ResourceBuilder.CreateDefault().AddService("AgentTracingSample"))
    .AddConsoleExporter() // Export traces to the console
    .Build();

Enabling content recording

Content recording controls whether message contents and tool call related details, such as parameters and return values, are captured with the traces. This data may include sensitive user information.

To enable content recording, set the OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT environment variable to true. Alternatively, you can control content recording with the following code:

AppContext.SetSwitch("Azure.Experimental.TraceGenAIMessageContent", true);

If neither the environment variable nor the AppContext switch is set, content recording defaults to false.

Precedence: If both the AppContext switch and the environment variable are set, the AppContext switch takes priority. No exception is thrown on conflict.

Red teams

Note: Red teams is an experimental feature, to use it, please disable the AAIP001 warning.

#pragma warning disable AAIP001

Red teams let you check how models behave in response to attack attempts. To test the model with Base64-encoded prompts asking it to generate violent content, we can use the following code.

AzureOpenAIModelConfiguration config = new(modelDeploymentName: modelDeploymentName);
RedTeam redTeam = new(target: config)
{
    AttackStrategies = { AttackStrategy.Base64 },
    RiskCategories = { RiskCategory.Violence },
    DisplayName = "redteamtest1"
};

Start the Red-Teaming task.

RequestOptions options = new();
options.AddHeader("model-endpoint", modelEndpoint);
options.AddHeader("model-api-key", modelApiKey);
redTeam = await projectClient.RedTeams.CreateAsync(redTeam: redTeam, options: options);
Console.WriteLine($"Red Team scan created with scan name: {redTeam.Name}");

Get the Red-Teaming task and output its status.

redTeam = await projectClient.RedTeams.GetAsync(name: redTeam.Name);
Console.WriteLine($"Red Team scan status: {redTeam.Status}");

To see the results of the red-teaming experiment, open the Microsoft Foundry resource used for the experiments, select Evaluation in the left panel, and choose the AI red teaming tab.

Troubleshooting

Any operation that fails will throw a ClientResultException. The exception's Status property holds the HTTP response status code, and the exception's message contains details that may be helpful in diagnosing the issue:

try
{
    projectClient.Datasets.GetDataset("non-existent-dataset-name", "non-existent-dataset-version");
}
catch (ClientResultException ex) when (ex.Status == 404)
{
    Console.WriteLine($"Exception status code: {ex.Status}");
    Console.WriteLine($"Exception message: {ex.Message}");
}

To further diagnose and troubleshoot issues, you can enable logging following the Azure SDK logging documentation. This allows you to capture additional insights into request and response details, which can be particularly helpful when diagnosing complex issues.

Reporting issues

To report an issue with the client library, or request additional features, please open a GitHub issue here. Mention the package name "Azure.AI.Projects" in the title or content.

Next steps

Beyond the introductory scenarios discussed here, the AI Projects client library supports additional scenarios that take advantage of the full feature set of the AI services. To help you explore some of them, the library provides a set of samples illustrating common scenarios. Please see the Samples for details.

Contributing

See the Azure SDK CONTRIBUTING.md for details on building, testing, and contributing to this library.

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
