Creating a Polkadot AI Agent with the OpenAI Agents SDK

2025-03-22 · 12 min read

In this tutorial, we will build, step by step, an autonomous AI agent capable of analyzing Polkadot OpenGov governance proposals and automatically generating summary reports. The goal is to demonstrate how to combine the OpenAI Agents SDK and the MCP protocol to enable an LLM-powered agent to query external data (here, on-chain governance proposals) and deliver smart analyses.

Use case: Imagine a user requesting: "Analyze OpenGov proposal #1462". Our agent will fetch this proposal's information in real-time via Polkadot APIs (Subsquare, Subscan, etc.), identify the author (address or identity on the network), analyze their participation history (past proposals, success rate, reputation), summarize the proposal content, and finally formulate a structured opinion (benefits, risks, impact). The output will be a structured mini-report (title, summary, analysis, conclusion).

This tutorial is aimed at technically inclined readers passionate about AI and crypto. It assumes basic familiarity with Python and decentralized governance concepts (DAO, on-chain governance). We'll cover: introduction to the OpenGov use case, a concise overview of the OpenAI Agents SDK and the MCP protocol, environment setup, creation of a custom MCP tool to query OpenGov data, construction of the AI agent chaining tools and reasoning over results, a concrete example, and finally a bonus idea with a second agent acting as a "DAO advisor."

⚡️ Context: Polkadot OpenGov and Governance Proposals

Polkadot OpenGov is the new fully decentralized governance system of Polkadot, launched in 2023. It puts the community at the heart of decision-making by eliminating centralized bodies (Council, Technical Committee) in favor of a direct democracy model (Polkadot OpenGov Deep Dive | Messari). In practice, any DOT holder can submit a proposal (also called a referendum), and multiple referenda can proceed in parallel, speeding up the pace of collective decisions. Each proposal goes through several on-chain stages (submission, decision/vote, and execution if passed).

This open model (hence the name OpenGov) significantly increases the number of proposals that need to be assessed at any given time. For token holders or DAO members within the Polkadot ecosystem, it becomes difficult to track and analyze each proposal in detail: reading the (often long) description, verifying the proposer’s identity or address, browsing their on-chain history (have they made previous proposals? were they accepted or rejected?), and weighing the benefits and risks of the proposal before voting. Community platforms like Polkassembly and SubSquare help make proposal information and discussion more accessible (Polkadot Governance Apps · Polkadot Wiki), but the analysis remains manual and time-consuming.

This is where an AI agent can help. By automating the collection of on-chain/off-chain information and producing an automated report, an AI agent enables the community to save time and get an objective assessment of each proposal. Our “OpenGov Analyst” agent will primarily query publicly available APIs: for instance, SubSquare (a platform that tracks on-chain governance events and provides a user-friendly interface) exposes details of current and past referenda, and Subscan (a Polkadot blockchain explorer) allows fetching information on accounts and transactions. By combining these sources, the agent can piece together the full picture: proposal metadata (title, category, date, status, etc.), content/summary, proposer profile (Polkadot address, optional identity name, participation stats), and indicators to evaluate the proposal (requested amount for treasury proposals, current support level, etc.).

📚 OpenAI Agents SDK and MCP Protocol: Quick Overview

Before diving into the implementation, let’s briefly revisit the two technologies we’ll be using:

OpenAI Agents SDK: This is an open-source development kit introduced by OpenAI (released in March 2025), aimed at simplifying the creation and orchestration of AI agents. An agent is defined as an LLM-driven system that can act autonomously to perform tasks, using tools (functions, actions) when needed (OpenAI's Agents SDK and Anthropic's Model Context Protocol (MCP)). The SDK provides a lightweight structure with a few key primitives: Agents (LLMs with instructions and built-in tools), Handoffs (control transfers between agents for multi-agent workflows), and Guardrails (input/output checks and validations) (OpenAI Agents SDK). It features a robust agent execution loop: the agent can plan actions, call a tool, receive results, combine them into context, and iterate until it produces a final response — all transparent to the developer.

In short, the Agents SDK allows us to easily build an agent capable of multi-step reasoning and tool invocation (chain-of-thought). In our case, we’ll have a single “OpenGov analyst” agent equipped with two tools: one for fetching proposal details and another for retrieving the author’s profile. The SDK will handle the coordination: the agent (via the LLM) will decide when to use each tool and loop until it has enough information to generate the final report.
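
To make this concrete, here is roughly what the smallest possible agent looks like with the SDK (a minimal sketch; the full pattern with tools is shown later in this tutorial):

python
from agents import Agent, Runner

# A minimal agent: an LLM with a name and instructions, no tools yet
agent = Agent(
    name="Assistant",
    instructions="You answer questions about Polkadot governance concisely.",
)

# run_sync drives the agent loop and returns the final answer
result = Runner.run_sync(agent, "What is Polkadot OpenGov in one sentence?")
print(result.final_output)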

MCP (Model Context Protocol): Introduced by Anthropic in late 2024, MCP is an open standard designed to connect AI models with external data sources and tools in a unified way. The idea is to provide a kind of "universal port" for AI — Anthropic describes MCP as a “USB-C for AI” — allowing any model (Claude, GPT-4, etc.) to interact with services and tools through a common protocol. MCP distinguishes between MCP servers (services that connect to specific data sources — databases, APIs, files — and expose tools/resources via the protocol) and MCP clients (AI models or applications that query these servers).

In practice, an MCP server hosts one or more tools or resources that AI can invoke using standard MCP requests (usually in JSON); the server executes the requested action and returns the response to the model.
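
For illustration, a tool invocation over MCP is a JSON-RPC message. Written as a Python dict for readability, a call to a hypothetical get_proposal_details tool would look roughly like this (the exact envelope depends on the MCP spec version):

python
# Approximate shape of an MCP "tools/call" request (JSON-RPC 2.0);
# field details may vary by protocol version
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_proposal_details",   # the tool exposed by the server
        "arguments": {"ref_id": 1462},    # validated against the tool's schema
    },
}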

Why MCP in our project? While the Agents SDK can call local Python functions as tools, integrating MCP offers several benefits: it provides a decoupling between the agent and the data source (allowing the OpenGov tool to evolve independently from the agent’s code) and opens the door for tool reuse across different MCP-compatible models (e.g., a Claude agent could call the same MCP server). Moreover, the openai-agents-mcp extension we’ll use makes connecting everything seamless — allowing our OpenAI agent to use MCP tools alongside native Python tools. The SDK and MCP are complementary: “The OpenAI Agent SDK [...] handles orchestration (reasoning, tools, tracing) while MCP provides streamlined access to external data.”

In this tutorial, we’ll create an OpenGov MCP tool that exposes Polkadot governance data, and we’ll connect it to our agent via the extension. This way, our agent will have access to a specialized external tool for Polkadot governance that it can invoke just like any other function.

📦 Installation and Environment Setup

Let’s start by installing and configuring everything you need on your development machine.

1. Install Python 3.9+ – Make sure you have a recent version of Python (the Agents SDK requires 3.9 or later). Create a virtual environment if needed.

2. Install the OpenAI Agents SDK and MCP extension – The packages are available on PyPI: we’ll install openai-agents (the core SDK) along with openai-agents-mcp (the MCP extension). Run:

bash
pip install openai-agents
pip install openai-agents-mcp

This installation also pulls in the necessary dependencies (like mcp-agent). To verify the installation, open a Python shell and check that import agents and import agents_mcp run without errors.

3. Configure your OpenAI API key – The SDK uses the OpenAI API to run the LLM behind the agent. You’ll need a valid OpenAI API key (and ideally access to GPT-4 via the API, recommended for high-quality results). Export your key as an environment variable:

bash
export OPENAI_API_KEY=<your_sk_key>

(On Windows PowerShell: use $env:OPENAI_API_KEY="sk-..."). You can also use a .env file or set the environment variable directly in your IDE. Without this key, the agent won’t be able to call the OpenAI model (OpenAI Agents SDK).

4. (Optional) External tools – In this project, we’ll create our own tool via MCP. The openai-agents-mcp extension allows you to define MCP servers via a YAML config file or in code. If you want to use existing MCP servers (e.g., Anthropic’s generic fetch web server or a filesystem server), you can configure them in mcp_agent.config.yaml. In our case, we’ll be developing a custom MCP server in Python, so you can skip this step. Just make sure the MCP extension is properly installed and ready.

5. Install libraries for the Polkadot API – Depending on the API you choose to query OpenGov data, you might need additional packages. For instance, to call a REST API (like Subsquare or Subscan), the standard requests library will do (pip install requests). If you want to use Polkassembly’s GraphQL API, you may consider installing a GraphQL Python client. For our tutorial, we’ll stick to simple HTTP requests, so just import requests in your code.

Once all these steps are completed, your environment is ready: Python, the agent SDK, the MCP extension, and access to the OpenGov APIs. We can now move on to building the OpenGov MCP tool.

🔍 Creating the “OpenGov” MCP Tool to Query Polkadot Data

We’ll develop a small MCP server dedicated to Polkadot OpenGov, which will act as an interface between the agent and governance APIs. The idea is to equip this server with two main capabilities (tools):

  • get_proposal_details: Retrieve detailed information about a given proposal (identified by its referendum number or ID). This tool will return structured data: the proposal title, its description (or a summary), the proposer (address or name), the submission date, current status (ongoing, approved, rejected…), etc.
  • get_author_profile: Retrieve the “profile” of a proposal author (via their address). The idea is to compile some metrics about this address’s activity: how many proposals they’ve submitted in the past, how many were accepted vs rejected, and possibly other indicators (e.g., do they vote frequently, do they delegate, etc., depending on available data). To simplify, we’ll focus on the proposals submitted by this address and their success rate.

Let’s start by building the skeleton of our MCP server in Python. An MCP server needs to receive requests (typically in JSON) and return responses. However, the openai-agents-mcp extension will handle starting our server in the background and managing communication. Our job is simply to provide Python functions that perform the desired actions. To do this, we can use the mcp_agent library included with the extension.

But for the sake of simplicity, we’ll avoid implementing a full HTTP server ourselves. Instead, we’ll take advantage of the fact that the Agents SDK can also use plain Python functions as tools (via function tools). So we’ll develop the necessary Python functions first, then see how they could be exposed via MCP. This lets us test the functions locally and register them as tools available to the agent.

🔍 Fetching Proposal Details via the OpenGov API

Since Polkadot OpenGov has a fairly complex on-chain protocol, it’s easier to use a third-party API that indexes this data. Two popular choices are:

  • Subsquare: offers a governance-oriented API (as it’s a platform dedicated to governance). Recently, Subsquare revamped its API architecture to make it more accessible to developers and can return proposal content in Markdown or HTML (Polkassembly Social Contract 2025). It allows you to list active OpenGov referenda, fetch details of a particular referendum, etc.
  • Subscan: a multi-chain blockchain explorer that provides a REST API covering accounts, extrinsics, and governance events. The Subscan API requires a free API key and has dedicated endpoints (e.g., /api/scan/referendum for referendum details).

In this tutorial, we’ll use Subsquare for simplicity, assuming it provides a public endpoint for OpenGov referenda. (If needed, the Polkassembly API could also be queried to retrieve the proposal text or comment count, but we’ll skip that for now.)

Let’s implement the get_proposal_details(ref_id: int) function in Python. This function will:

  1. Build an HTTP request to the governance API for the specified ref_id referendum. For example, if the Subsquare API is available at a hypothetical address like https://polkadot.api.subsquare.io/v1/open-gov/referenda/<id>, we’ll use that.
  2. Parse the JSON response and extract the important fields.
  3. Return a Python dictionary with these fields, which the SDK will convert into JSON for the LLM.

Here’s an example of the code, with comments:

python
import requests

def get_proposal_details(ref_id: int) -> dict:
    """Queries the OpenGov API to get details for a given referendum."""
    # Build the API URL (example for Subsquare)
    url = f"https://polkadot.api.subsquare.io/v1/open-gov/referenda/{ref_id}"
    response = requests.get(url)
    if response.status_code != 200:
        raise Exception(f"API error: status {response.status_code}")
    data = response.json()

    # Extract key fields (adapt based on real API structure)
    title = data.get("title") or data["referendum_title"]
    content = data.get("content") or data.get("description")  # proposal text
    author_address = data.get("proposer") or data["submitter"]
    submit_date = data.get("submitted_at")  # timestamp or date
    status = data.get("status")  # e.g., ongoing/confirming/approved/rejected

    # (Optional) Trim or summarize content if it's too long
    summary = content[:200] + "..." if content and len(content) > 250 else content

    # Prepare the result dictionary
    result = {
        "id": ref_id,
        "title": title,
        "summary": summary,
        "author": author_address,
        "submitted_date": submit_date,
        "status": status
    }
    return result

💡 Notes: In this example, we speculate on the API’s structure. In practice, you’ll need to adapt the field names (e.g., referendum_title, submitter, etc.) based on the actual API docs. Since Subsquare might return HTML/Markdown, you can choose to keep the raw content or extract a summary. Here, for brevity, we truncate the content to ~200 characters and call it a summary. A potential improvement would be to use an automatic summarization function (via LLM or otherwise), but for now, we’ll avoid AI recursion!
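
If you later decide to add that LLM-based summarization, a sketch could look like this (the model name and prompt are illustrative, not prescribed by this tutorial):

python
from openai import OpenAI

def summarize_text(text: str) -> str:
    """Sketch: condense a long proposal body with the OpenAI API."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Summarize this governance proposal in 2 sentences:\n{text[:4000]}",
        }],
    )
    return response.choices[0].message.content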

When called, this function could return:

python
{
    "id": 1462,
    "title": "Polkadot-API 2025 Development Funding through Polkadot Community Foundation",
    "summary": "Hi everyone, As many of you may already know, Polkadot-API was envisioned to provide a robust and flexible suite of libraries... (truncated)",
    "author": "16JG...pr9J",
    "submitted_date": "2025-03-01T12:34:56Z",
    "status": "Confirming"
}

As you can see, we now have the title of proposal #1462, the beginning of a summary, the author’s address (shortened here), the submission date, and its status ("Confirming" means the referendum is in its confirmation period, the last phase before approval). All this information will be extremely useful for our agent to generate its final report.

🕵️‍♂️ Fetching the Author’s Profile (Participation History)

Now let’s create the function get_author_profile(address: str). The goal is to gather some stats about a given Polkadot address corresponding to the author of a proposal. Typically, we want to know: how many OpenGov proposals this address has submitted in the past, and among those, how many were accepted (or rejected). This gives us a success rate, which can indicate the author’s credibility or expertise within the governance process.

Subsquare likely offers a way to filter referenda by proposer (via its API or UI). According to recent updates, we know you can “see the vote history of any address on Subsquare” (Subsquare feature updates - Governance - Polkadot Forum), and there’s a profile view (perhaps accessible via /user/<address>). Subscan may also allow querying Democracy/Referendum extrinsics submitted by an address. To keep it simple, we’ll assume an endpoint like https://polkadot.api.subsquare.io/v1/account/<address>/referenda exists on Subsquare.

We’ll implement the function following the same pattern: call the API, count the proposals, and compute the acceptance ratio. If the endpoint turns out not to exist, the same structure shows how one could proceed (or you can simulate the data for testing).

python
import requests

def get_author_profile(address: str) -> dict:
    """
    Compile participation statistics for a given Polkadot address
    (a proposal author), based on the referenda it has submitted.

    Args:
        address (str): The proposer's Polkadot address.

    Returns:
        dict: Proposal counts and acceptance rate for this address.
    """
    # Hypothetical Subsquare endpoint; adapt to the real API docs
    api_url = f"https://polkadot.api.subsquare.io/v1/account/{address}/referenda"

    try:
        response = requests.get(api_url)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        raise RuntimeError(f"Failed to fetch author profile: {e}")

    data = response.json()
    referenda = data.get("items", [])  # referenda submitted by this address

    # Count outcomes (status values are assumptions; check the actual schema)
    total = len(referenda)
    accepted = sum(1 for r in referenda if r.get("status") in ("Approved", "Executed"))
    rejected = sum(1 for r in referenda if r.get("status") in ("Rejected", "TimedOut"))

    profile = {
        "address": address,
        "proposals": total,
        "accepted": accepted,
        "rejected": rejected,
        "acceptance_rate": round(accepted / total, 2) if total else 0.0
    }
    return profile

To illustrate, let’s imagine that address 16JG...pr9J (from proposal #1462) had already submitted 5 proposals in the past, 3 of which were accepted and 2 rejected. The result would be:

python
{
    "address": "16JG...pr9J",
    "proposals": 5,
    "accepted": 3,
    "rejected": 2,
    "acceptance_rate": 0.6
}

Tip: You could enrich this profile with other available info — for example, voter participation, or the on-chain identity (if the address has a registered identity via the Identity module). Subsquare may return identity data linked to the address (potentially under a field like identity_display). But to keep things simple, we’ll focus only on proposal statistics for now.
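
As a small sketch, such enrichment could be a helper applied to the profile just before returning it (the identity_display field name is an assumption to verify against the actual API response):

python
def add_identity(profile: dict, data: dict) -> dict:
    """Sketch: attach the on-chain identity display name if present."""
    identity = data.get("identity_display")  # field name is an assumption
    if identity:
        profile["identity"] = identity
    return profile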

🔄 Integrating the Tools into the AI Agent

Now that our Python functions are ready, it’s time to integrate them into the AI agent as tools. There are two main ways to do this:

  1. Directly via the SDK (function tools) – The Agents SDK allows you to declare a Python function as a tool using the @function_tool decorator from the agents module. This automatically generates the JSON schema for function calling. This is a simple and straightforward approach.
  2. Via an MCP server – You can wrap these functions in a small MCP server. The openai-agents-mcp extension allows connecting that server as a tool source for the agent. This approach modularizes the tool, making it potentially reusable across other MCP-compatible clients.

For this tutorial, we’ll illustrate the second approach (MCP), since it’s the main focus here, while still using our Python code. In practice, the extension makes it easy to expose our functions as MCP tools without much overhead. Here's how:

  • Write your functions (get_proposal_details and get_author_profile) in a Python file (e.g., opengov_server.py).
  • Instead of writing a full HTTP server, we’ll use the RunnerContext feature from the extension. In our main script, we can define a programmatic MCP configuration that specifies how to start our custom server. For example, we’ll tell the SDK that the MCP server named "opengov" should be launched by executing our Python script.

You can also configure this via a YAML file, but for simplicity, we’ll use inline code:

python
from agents_mcp import Agent, Runner, RunnerContext
from mcp_agent.config import MCPSettings, MCPServerSettings

# Define MCP settings to launch our opengov server
mcp_config = MCPSettings(
    servers={
        "opengov": MCPServerSettings(
            command="python",
            args=["opengov_server.py"]  # script that launches our MCP tools
        )
    }
)
context = RunnerContext(mcp_config=mcp_config)

Here we assume that opengov_server.py, when executed, starts an MCP server that exposes our two tools, for example under the names get_proposal_details and get_author_profile. The next question is: how should this script be written to turn our functions into MCP tools?

One solution is to use the mcp_agent library to declare tools. It offers ways to create an MCP server by registering tools and then starting the listener (via decorators or a tool registry, then calling serve()). That implementation is a bit beyond the scope of this tutorial, but it’s a valid path.
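
For reference, here is a minimal sketch of what opengov_server.py could look like using FastMCP from the official MCP Python SDK (the mcp package), an alternative to mcp_agent's registry; treat the details as an assumption to check against the current MCP docs:

python
from mcp.server.fastmcp import FastMCP

# Name under which the server announces itself to MCP clients
mcp = FastMCP("opengov")

@mcp.tool()
def get_proposal_details(ref_id: int) -> dict:
    """Fetch details for an OpenGov referendum."""
    ...  # same body as the function defined earlier

@mcp.tool()
def get_author_profile(address: str) -> dict:
    """Fetch participation stats for a proposer address."""
    ...  # same body as the function defined earlier

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio, matching the command/args config above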

Alternatively, to keep things focused on the Agents SDK, we can skip launching an external server altogether and use our functions locally. In fact, the agent can directly work with tools — whether they’re native functions or MCP tools. The simplest approach, therefore, is to use @function_tool and register our functions directly within the agent, optionally mentioning the MCP context.

To avoid confusion, we’ll go with the direct approach: registering our functions as agent tools. This will allow us to see how the agent chains tool calls during execution.

Let’s put it all together — agent creation with: a name, instructions (system prompt), and the list of tools (functions) it can use.

python
from agents import Agent, Runner, function_tool

# Decorate the functions as tools so the SDK auto-generates the JSON schema
@function_tool
def get_proposal_details(ref_id: int) -> dict:
    # ... (see previous implementation)
    return result

@function_tool
def get_author_profile(address: str) -> dict:
    # ... (see previous implementation)
    return profile

# Define the agent’s instructions (its role and behavior)
instructions = (
    "You are an expert assistant specializing in Polkadot governance (OpenGov). "
    "You have access to tools for retrieving proposal details and checking author profiles. "
    "Use these tools wisely to respond to user requests. "
    "When asked to analyze a proposal, you must return a structured report containing a Title, "
    "a Summary of the proposal, an Analysis (advantages, risks, context), and a Conclusion (reasoned opinion). "
    "If needed, retrieve the information using tools before composing your response. "
    "Your tone should be neutral, informative, and professional."
)

# Create the agent with its name, instructions, and tools
opengov_agent = Agent(
    name="Polkadot OpenGov Analyst",
    instructions=instructions,
    tools=[get_proposal_details, get_author_profile]
)

💡 A few explanations:

  • We used the @function_tool decorator from the SDK on both functions. This tells the agent it can call these functions using OpenAI’s function-calling mechanism. The LLM will decide when to invoke get_proposal_details or get_author_profile during a conversation, passing the required arguments (the SDK automatically generates the type schema: ref_id as an integer, and address as a string).
  • The instructions act as a System Prompt. This is a crucial directive that defines expected behavior. We explain the agent’s role (OpenGov analyst), its available tools, how to structure the final response, and the tone to adopt. This guides the agent in crafting the final mini-report.
  • The agent is named "Polkadot OpenGov Analyst" (this can be useful for debugging or logging).
  • The tool list is passed directly.

At this point, our agent is fully configured. In theory, it can now receive a user prompt and autonomously decide when to call get_proposal_details and/or get_author_profile to gather the needed information. The Agents SDK and the LLM handle all the reasoning: the agent might internally plan something like “To answer, I need the proposal details” → tool call → “Now I have the details, let’s get the author’s history” → tool call → “Got everything, composing the report now”. As developers, we don’t need to manually orchestrate any of this: we just run the agent and retrieve the final output.

📝 Usage Scenario: Analyzing an OpenGov Proposal with the Agent

Now that everything is set up, let’s test our agent on a concrete example. Let’s use the fictional proposal #1462 (Polkadot-API 2025 Development Funding) that we’ve discussed earlier. Suppose a user asks the agent:

User: “Can you analyze OpenGov proposal #1462?”

When this request is sent to the agent via the SDK, here’s what happens step by step:

  1. Receiving the request – The agent receives the user prompt as input. The full prompt visible to the LLM is a concatenation of the instructions (system) and the user’s query.
  2. Initial reasoning – The LLM (e.g., GPT-4) understands that it is being asked to analyze a specific proposal (#1462). Based on its instructions, it knows it should return a structured report and that it has tools available to fetch precise information. It will begin planning to use those tools.
  3. Calling get_proposal_details – The agent first needs the details of proposal #1462. It internally generates a function call to get_proposal_details(ref_id=1462). The SDK runs our Python function and retrieves the result (the proposal's JSON data).
  4. Processing the result – The LLM receives the returned data for proposal #1462, for example: title “Polkadot-API 2025 Development Funding…”, content summary “Hi everyone, ...”, author 16JG...pr9J, date, status Confirming. This information is now injected into the conversation (as a function call result), and the LLM takes it into account for the next steps.
  5. Calling get_author_profile – The agent now thinks: it has the proposer (16JG...pr9J). Its instructions recommend verifying the author’s history. So it decides to call the second tool: get_author_profile(address="16JG...pr9J"). Again, the SDK runs the corresponding Python function and returns the result (the author’s profile) to the LLM. Let’s assume the response says: 5 proposals submitted, 3 accepted, 2 rejected, 60% success rate.
  6. Processing the result – The LLM integrates this new data into its context. It now knows the author has a 60% approval rate, which is a solid track record, and that they’ve made multiple proposals, suggesting experience.
  7. Generating the final report – With all the necessary information gathered, the agent no longer needs additional tools. The LLM shifts to response generation mode. Based on all collected data, it composes a structured textual answer with the required sections. Thanks to the system prompt, it knows to include a title, summary, analysis, and conclusion. For instance, it might reuse the actual proposal title as the report title, summarize the content (using the pre-fetched summary or rewording it), highlight pros and cons (e.g., the proposal benefits Polkadot developers), and conclude with a recommendation (e.g., “favorable”, justified by its potential impact and author credibility).

To run this in your Python code, you use the SDK’s Runner:

python
user_query = "Analyze OpenGov proposal #1462"
result = Runner.run_sync(opengov_agent, user_query)
print(result.final_output)

The Runner will orchestrate the entire process described above. The result.final_output will contain the agent’s final structured response. Here’s what a generated report might look like:

[Image: example report generated by the agent for proposal #1462 (Polkadot OpenGov), “Polkadot-API 2025 Development Funding through Polkadot Community Foundation”.]

In this fictional report, we find:

  • Title – taken directly from the original proposal for clarity (this one is about funding the Polkadot API development through the community foundation).
  • Summary of the proposal – the agent summarizes the proposer’s request (e.g., funding maintenance and development of the Polkadot-API library for 2025, with some context from the proposal description).
  • Analysis – the agent outlines benefits (e.g., continuity of developer support, positive ecosystem impact), and risks or concerns (e.g., reliance on recurring funding, need for budget oversight). It also includes the proposer profile: “an active member (5 proposals submitted, 60% accepted)”, giving additional reputation context. This supports trust in the proposal.
  • Conclusion – the agent gives a reasoned opinion. In this example, the conclusion is favorable, justified by the proposal’s usefulness to the community and the proposer’s good track record, while noting areas to watch (showing a nuanced analysis).

This structured and concise answer offers the user a clear summary without needing to read the entire proposal or dig through on-chain data. Of course, the quality of the final report depends on the LLM and the data provided: a strong model like GPT-4 can generate insightful, high-quality analysis.

Note: During execution, the developer can monitor the agent’s reasoning (tool usage, etc.) by enabling verbose mode or using the SDK’s tracing tools. This is helpful for debugging or verifying that the agent is using the tools as expected. For instance, logs will show when it intends to call get_proposal_details(1462), the returned JSON, and so on until the final response. The OpenAI Agents SDK provides built-in tracing to make debugging complex agents much easier.
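
As a sketch, assuming the current openai-agents API, verbose logging and tracing can be enabled like this:

python
from agents import Runner, enable_verbose_stdout_logging, trace

# Stream the agent loop (tool calls, results) to stdout during development
enable_verbose_stdout_logging()

# Group the run under a named trace, viewable in the OpenAI traces dashboard
with trace("OpenGov analysis"):
    result = Runner.run_sync(opengov_agent, "Analyze OpenGov proposal #1462")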

💡 Bonus: A “DAO Advisor” Agent for Voting Recommendations

Our OpenGov Analyst agent provides objective analysis. But we could go one step further by creating a second, slightly more opinionated agent that acts as a voting advisor for the DAO. This agent would take the analysis report (produced by the first agent) as input and generate a voting recommendation (For, Against, or Abstain) intended for a council or DAO members.

Technically, this demonstrates how to build a multi-agent system using the OpenAI Agents SDK. For example, the Analyst agent could generate its report and then hand off the task to the Advisor agent. This is exactly what the SDK’s handoff feature enables (New tools for building agents | OpenAI): one agent can transfer context to another to continue the process.

How would this work? We could define an advisor_agent with instructions like: “You are a governance advisor. You will be given an analysis of a proposal. Based on this, you must recommend whether members should vote For or Against the proposal, along with a short justification. Focus only on the project’s value to the community and the risks involved. Be concise and decisive.” This agent wouldn’t necessarily need any tools — just the report as input.

You could either orchestrate this flow manually in Python (first run the Analyst agent, then pass its output to the Advisor), or configure an automatic handoff using the SDK (a parent agent that first runs the analyst, then forwards the output to the advisor). The Agents SDK makes this orchestration simple by allowing you to define multiple agents and specify that agent A can hand off to agent B (New tools for building agents | OpenAI).

Without going into full detail, here’s a rough sketch of the DAO Advisor agent:

python
advisor_instructions = (
    "You are a virtual advisor for a Polkadot DAO. "
    "You will receive an analysis report about a governance proposal. "
    "After reading it, you must provide a voting recommendation (For / Against), "
    "briefly explaining your reasoning. Your response should be 2–3 sentences maximum, clear and direct."
)

dao_advisor_agent = Agent(
    name="DAO Voting Advisor",
    instructions=advisor_instructions
    # No tools needed, the input will be the report text.
)

# Example usage after receiving the 'analysis_report' from the Analyst agent
advice = Runner.run_sync(dao_advisor_agent, analysis_report)
print(advice.final_output)

If analysis_report contains the text of the previously generated report, the advisor agent will read it (in context) and return something like:

“Recommendation: Vote For. This proposal provides a clear benefit to the ecosystem by funding a critical tool, and the proposing team has a solid track record.”

This second agent automates decision-making based on structured analysis. Of course, you can fine-tune its instructions based on the DAO’s policy — some advisors may take a more conservative or more daring approach. What matters is that this setup demonstrates how you can chain multiple agents with different roles: one gathers and analyzes the information, and the other makes a decision or recommendation based on it.
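
For the fully automatic variant, the handoff mentioned earlier could be declared directly on the analyst agent; a minimal sketch, assuming the handoffs parameter of the SDK's Agent class:

python
from agents import Agent, Runner

# Sketch: the Analyst hands off to the Advisor once its report is complete
analyst_with_handoff = Agent(
    name="Polkadot OpenGov Analyst",
    instructions=instructions + " Once your report is complete, hand off to the DAO Voting Advisor.",
    tools=[get_proposal_details, get_author_profile],
    handoffs=[dao_advisor_agent],
)

result = Runner.run_sync(analyst_with_handoff, "Analyze OpenGov proposal #1462 and advise on the vote")
print(result.final_output)  # ends with the advisor's recommendation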

📝 Conclusion and Outlook

We’ve built an autonomous AI agent capable of analyzing Polkadot OpenGov governance proposals and generating a structured report useful for participants in decentralized governance. This project showcased how to combine the OpenAI Agents SDK (to orchestrate LLM reasoning and tool usage) with the MCP protocol (to connect with external on-chain data through custom tools) in a real DAO context.

Throughout this tutorial, we covered all the key steps: setting up the environment, creating specialized tools to query Polkadot APIs (Subsquare/Subscan) to extract titles, descriptions, proposers, and statuses of proposals, as well as the on-chain profile of proposal authors; integrating those tools into an agent with a well-designed system prompt; executing and testing the flow on a real example (#1462); and finally, extending the setup into a multi-agent system with a voting advisor.

This prototype can be improved in many ways: for instance, enhancing the analysis (e.g., by adding LLM-based summarization of the full proposal text), retrieving live vote tallies, or connecting the agent to off-chain sources like the community discussion forum to include community arguments. Moreover, this approach is not limited to Polkadot: it can be adapted to other decentralized governance systems (like Ethereum DAO proposals) by swapping out the data connectors.

By combining AI agents with governance data, we glimpse a future where intelligent assistants help crypto communities digest information and make more informed decisions — all in a transparent and reproducible way. This tutorial is just a starting point, but you now have a solid foundation to experiment with your own proposal analysis agent — it’s up to you to evolve and tailor it to your DAO’s needs!

Summary: Using the OpenAI Agents SDK for orchestration and MCP for data access, we’ve created an AI agent that automatically queries Polkadot OpenGov proposals, summarizes their content and author history, and delivers a structured analysis report. This type of agent can greatly support decentralized governance by providing automated insights to voters and paves the way for AI-powered advisors within DAOs.

Tags: AI, Development, OpenAI, Polkadot