85 changes: 65 additions & 20 deletions dotnet/samples/HostedAgents/AgentWithHostedMCP/README.md
**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md).

Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you are acknowledging that any output created using those samples are solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct.

Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates.

Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output.

# What this sample demonstrates

This sample demonstrates how to use a Hosted Model Context Protocol (MCP) server with a
[Microsoft Agent Framework](https://learn.microsoft.com/en-us/agent-framework/overview/agent-framework-overview#ai-agents) AI agent,
host the agent using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme), and
deploy it to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.

## How It Works

### MCP Integration

This sample uses a Hosted Model Context Protocol (MCP) server to provide external tools to the agent. The MCP workflow operates as follows:

1. The agent is configured with a `HostedMcpServerTool` pointing to `https://learn.microsoft.com/api/mcp`
2. Only the `microsoft_docs_search` tool is enabled from the available MCP tools
3. Approval mode is set to `NeverRequire`, allowing automatic tool execution without user confirmation
4. When you ask questions, the Azure OpenAI Responses service automatically invokes the MCP tool to search Microsoft Learn documentation
5. The agent returns answers based on the retrieved Microsoft Learn content

**Note**: In this configuration, the Azure OpenAI Responses service manages tool invocation directly; the Agent Framework does not handle MCP tool calls.
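For reference, the agent configuration at the core of this sample looks roughly like the following sketch. This is a minimal outline assuming current Agent Framework and Microsoft.Extensions.AI packages; exact type and method names can vary across versions, and the instructions string is illustrative:

```csharp
using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is required.");
var deployment = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

// Build an agent backed by the Azure OpenAI Responses API, authenticated via Azure CLI.
AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
    .GetOpenAIResponseClient(deployment)
    .CreateAIAgent(
        instructions: "Answer questions using Microsoft Learn documentation.", // illustrative
        tools:
        [
            // Hosted MCP tool: the Responses service calls the MCP server on the agent's behalf.
            new HostedMcpServerTool("microsoft_learn", "https://learn.microsoft.com/api/mcp")
            {
                // Expose only the documentation search tool.
                AllowedTools = ["microsoft_docs_search"],
                // Execute tool calls without asking the user for approval.
                ApprovalMode = HostedMcpServerToolApprovalMode.NeverRequire,
            },
        ]);
```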

### Agent Hosting

The agent is hosted using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme),
which provisions a REST API endpoint compatible with the OpenAI Responses protocol. This allows interaction with the agent using OpenAI Responses-compatible clients.

### Agent Deployment

The hosted agent can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.
The extension builds a container image for the agent, deploys it to Azure Container Instances (ACI), and creates a hosted agent version and deployment on Foundry Agent Service.

## Running the Agent Locally

### Prerequisites

Before running this sample, ensure you have:

1. An Azure OpenAI endpoint configured
2. A deployment of a chat model (e.g., `gpt-4o-mini`)
3. Azure CLI installed and authenticated (`az login`)
4. .NET 9.0 SDK or later installed

**Note**: This sample uses Azure CLI credentials for authentication. Make sure you're logged in with `az login` and have access to the Azure OpenAI resource.

### Environment Variables

Set the following environment variables:

- `AZURE_OPENAI_ENDPOINT` - Your Azure OpenAI endpoint URL (required)
- `AZURE_OPENAI_DEPLOYMENT_NAME` - The deployment name for your chat model (optional, defaults to `gpt-4o-mini`)

**PowerShell:**
```powershell
# Replace with your Azure OpenAI endpoint
$env:AZURE_OPENAI_ENDPOINT="https://your-openai-resource.openai.azure.com/"

# Optional, defaults to gpt-4o-mini
$env:AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini"
```

### Running the Sample

To run the agent, execute the following command in your terminal:

```powershell
dotnet run
```

This will start the hosted agent locally on `http://localhost:8080/`.

### Interacting with the Agent

You can interact with the agent using:

- The `run-requests.http` file in this directory to test and prompt the agent
- Any OpenAI Responses-compatible client by sending requests to `http://localhost:8080/`

Try asking questions about Microsoft documentation and technologies to see the MCP tool in action.
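As a quick smoke test from PowerShell, you can post a request directly. This sketch assumes the server accepts a minimal OpenAI Responses-style payload at its root route; check `run-requests.http` in this directory for the exact route and body the sample uses:

```powershell
# Hypothetical minimal request; adjust the route and body to match run-requests.http.
$body = @{ input = "How do I create an Azure storage account?" } | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:8080/" -Method Post -ContentType "application/json" -Body $body
```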

### Deploying the Agent to Microsoft Foundry

To deploy your agent to Microsoft Foundry, follow the deployment guide at https://aka.ms/azdaiagent/docs.
4 changes: 2 additions & 2 deletions dotnet/samples/HostedAgents/AgentWithHostedMCP/agent.yaml
template:
- name: AZURE_OPENAI_ENDPOINT
value: ${AZURE_OPENAI_ENDPOINT}
- name: AZURE_OPENAI_DEPLOYMENT_NAME
value: "{{chat}}"
resources:
- name: "gpt-4o-mini"
- name: chat
kind: model
id: gpt-4o-mini
86 changes: 68 additions & 18 deletions dotnet/samples/HostedAgents/AgentWithTextSearchRag/README.md
**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md).

Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you are acknowledging that any output created using those samples are solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct.

Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates.

Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output.

# What this sample demonstrates

This sample demonstrates how to use the TextSearchProvider to add retrieval augmented generation (RAG) capabilities to a
[Microsoft Agent Framework](https://learn.microsoft.com/en-us/agent-framework/overview/agent-framework-overview#ai-agents) AI agent,
host the agent using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme), and
deploy it to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.

## How It Works

### Retrieval Augmented Generation (RAG) with TextSearchProvider

This sample uses a **mock search function** to demonstrate the RAG pattern. The RAG workflow operates as follows:

1. When the user asks a question, the TextSearchProvider intercepts it
2. The search function looks for relevant documents based on the query
3. Retrieved documents are injected into the model's context
4. The AI responds using both its training and the provided context
5. The agent can cite specific source documents in its answers

**Note**: The mock search function returns pre-defined snippets for demonstration purposes. In a production scenario, replace this with actual searches against your knowledge base (e.g., Azure AI Search, vector database, or other data sources).
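A mock search of the kind this sample wires into the TextSearchProvider has roughly the following shape. This is an illustrative sketch, not the provider's exact contract: the class name, signature, and document snippets are hypothetical stand-ins for the code in this directory.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

internal static class MockKnowledgeBase
{
    // Pre-defined snippets standing in for a real knowledge base.
    private static readonly string[] s_documents =
    [
        "returns.md: Contoso accepts returns within 30 days of purchase with a receipt.",
        "shipping.md: Standard shipping takes 3-5 business days within the US.",
        "care.md: Clean products with a soft dry cloth; avoid solvents and abrasives.",
    ];

    // Naive keyword match in place of a real vector or full-text search.
    public static Task<IReadOnlyList<string>> SearchAsync(string query, CancellationToken cancellationToken = default)
    {
        var words = query.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        IReadOnlyList<string> hits = s_documents
            .Where(doc => words.Any(w => doc.Contains(w, StringComparison.OrdinalIgnoreCase)))
            .ToList();
        return Task.FromResult(hits);
    }
}
```

The TextSearchProvider invokes a function like this before each model call and prepends the returned snippets (with their source names) to the model context, which is what allows the agent to cite specific documents in its answers.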

### Agent Hosting

The agent is hosted using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme),
which provisions a REST API endpoint compatible with the OpenAI Responses protocol. This allows interaction with the agent using OpenAI Responses-compatible clients.

### Agent Deployment

The hosted agent can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.
The extension builds a container image for the agent, deploys it to Azure Container Instances (ACI), and creates a hosted agent version and deployment on Foundry Agent Service.

## Running the Agent Locally

### Prerequisites

Before running this sample, ensure you have:

1. An Azure OpenAI endpoint configured
2. A deployment of a chat model (e.g., `gpt-4o-mini`)
3. Azure CLI installed and authenticated (`az login`)
4. .NET 9.0 SDK or later installed

### Environment Variables

Set the following environment variables:

- `AZURE_OPENAI_ENDPOINT` - Your Azure OpenAI endpoint URL (required)
- `AZURE_OPENAI_DEPLOYMENT_NAME` - The deployment name for your chat model (optional, defaults to `gpt-4o-mini`)

**PowerShell:**
```powershell
# Replace with your Azure OpenAI endpoint
$env:AZURE_OPENAI_ENDPOINT="https://your-openai-resource.openai.azure.com/"

# Optional, defaults to gpt-4o-mini
$env:AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini"
```

### Running the Sample

To run the agent, execute the following command in your terminal:

```powershell
dotnet run
```

This will start the hosted agent locally on `http://localhost:8080/`.

### Interacting with the Agent

You can interact with the agent using:

- The `run-requests.http` file in this directory to test and prompt the agent
- Any OpenAI Responses-compatible client by sending requests to `http://localhost:8080/`

Try asking questions about:
- Contoso return policy
- Shipping information
- Product care instructions

### Deploying the Agent to Microsoft Foundry

To deploy your agent to Microsoft Foundry, follow the deployment guide at https://aka.ms/azdaiagent/docs.
4 changes: 2 additions & 2 deletions dotnet/samples/HostedAgents/AgentWithTextSearchRag/agent.yaml
template:
- name: AZURE_OPENAI_ENDPOINT
value: ${AZURE_OPENAI_ENDPOINT}
- name: AZURE_OPENAI_DEPLOYMENT_NAME
value: "{{chat}}"
resources:
- name: "gpt-4o-mini"
- name: chat
kind: model
id: gpt-4o-mini
90 changes: 76 additions & 14 deletions dotnet/samples/HostedAgents/AgentsInWorkflows/README.md
**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md).

Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you are acknowledging that any output created using those samples are solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct.

Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates.

Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output.

# What this sample demonstrates

This sample demonstrates how to use AI agents as executors within a workflow,
host the workflow using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme), and
deploy it to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.

## How It Works

### Agents in Workflows

This sample demonstrates the integration of AI agents within a workflow pipeline. The workflow operates as follows:

1. **French Agent** - Receives input text and translates it to French
2. **Spanish Agent** - Takes the French translation and translates it to Spanish
3. **English Agent** - Takes the Spanish translation and translates it back to English

The agents are connected sequentially in a workflow, creating a translation chain that demonstrates:
- How AI-powered agents can be seamlessly integrated into workflow pipelines
- Sequential execution patterns where each agent's output becomes the next agent's input
- Composable agent architectures for multi-step processing
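Wiring the chain together looks roughly like the following sketch. It assumes the Agent Framework workflow APIs (`ChatClientAgent`, `AgentWorkflowBuilder`); exact names can vary by package version, and the instruction strings are illustrative:

```csharp
using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is required.");
var deployment = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

IChatClient chatClient = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
    .GetChatClient(deployment)
    .AsIChatClient();

// One agent per translation step; each agent's output becomes the next agent's input.
AIAgent french  = new ChatClientAgent(chatClient, instructions: "Translate the input to French.",  name: "FrenchAgent");
AIAgent spanish = new ChatClientAgent(chatClient, instructions: "Translate the input to Spanish.", name: "SpanishAgent");
AIAgent english = new ChatClientAgent(chatClient, instructions: "Translate the input to English.", name: "EnglishAgent");

// Compose the agents into a sequential workflow: French -> Spanish -> English.
var workflow = AgentWorkflowBuilder.BuildSequential(french, spanish, english);
```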

### Agent Hosting

The agent workflow is hosted using the [Azure AI AgentServer SDK](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.agentserver.agentframework-readme),
which provisions a REST API endpoint compatible with the OpenAI Responses protocol. This allows interaction with the agent workflow using OpenAI Responses-compatible clients.

### Agent Deployment

The hosted agent workflow can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://aka.ms/azdaiagent/docs) extension.
The extension builds a container image for the agent, deploys it to Azure Container Instances (ACI), and creates a hosted agent version and deployment on Foundry Agent Service.

## Running the Agent Locally

### Prerequisites

Before running this sample, ensure you have:

1. An Azure OpenAI endpoint configured
2. A deployment of a chat model (e.g., `gpt-4o-mini`)
3. Azure CLI installed and authenticated (`az login`)
4. .NET 9.0 SDK or later installed

### Environment Variables

Set the following environment variables:

- `AZURE_OPENAI_ENDPOINT` - Your Azure OpenAI endpoint URL (required)
- `AZURE_OPENAI_DEPLOYMENT_NAME` - The deployment name for your chat model (optional, defaults to `gpt-4o-mini`)

**PowerShell:**
```powershell
# Replace with your Azure OpenAI endpoint
$env:AZURE_OPENAI_ENDPOINT="https://your-openai-resource.openai.azure.com/"

# Optional, defaults to gpt-4o-mini
$env:AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o-mini"
```

### Running the Sample

To run the agent, execute the following command in your terminal:

```powershell
dotnet run
```

This will start the hosted agent workflow locally on `http://localhost:8080/`.

### Interacting with the Agent

You can interact with the agent workflow using:

- The `run-requests.http` file in this directory to test and prompt the agent
- Any OpenAI Responses-compatible client by sending requests to `http://localhost:8080/`

Try providing text in English to see it translated through the workflow chain (English → French → Spanish → English).

### Deploying the Agent to Microsoft Foundry

To deploy your agent to Microsoft Foundry, follow the deployment guide at https://aka.ms/azdaiagent/docs.
4 changes: 2 additions & 2 deletions dotnet/samples/HostedAgents/AgentsInWorkflows/agent.yaml
template:
- name: AZURE_OPENAI_ENDPOINT
value: ${AZURE_OPENAI_ENDPOINT}
- name: AZURE_OPENAI_DEPLOYMENT_NAME
value: "{{chat}}"
resources:
- name: "gpt-4o-mini"
- name: chat
kind: model
id: gpt-4o-mini