Creating an AI agent with n8n and MCP

Exploring how to build a trend analytics agent with n8n while leveraging the Elasticsearch MCP server.

In this article, we will learn how to create an AI agent with n8n that leverages the Elasticsearch MCP server to analyze Google “People also ask” questions for specific keywords.

What’s n8n?

What is n8n?
n8n is an open-source, low-code workflow creation tool that has become popular for building automated processes. It uses a visual, drag-and-drop interface to build workflows, which makes it highly flexible.

What does n8n do?
n8n enables you to connect different service nodes as building blocks via a user interface (UI). These nodes can integrate a huge number of services and applications, including the entire Google suite, various AI agents, and Model Context Protocol (MCP) servers.

How is a workflow built in n8n?
A workflow is built by chaining together nodes in a sequence. You start with a trigger (like the Schedule Trigger node used in this article) and then connect subsequent nodes to perform actions like searching data (SerpApi node), transforming data (Split Out node), storing data (Elasticsearch Create Document node), and processing data (the AI Agent node).

How does the n8n AI Agent node use the Elasticsearch MCP server?
The AI Agent node uses the MCP Client Tool as one of its capabilities. When the agent processes the prompt, it recognizes the instruction to "find historical searches." It then autonomously calls the configured Elasticsearch MCP server through the MCP Client Tool to retrieve the necessary aggregated data, enabling it to compare the current execution's results with the total historical data.

Why is using n8n for this project more flexible than a standard API script?
n8n is a low-code solution that is more convenient than hand-written API scripts for this project, making it significantly easier for less technical users to build and manage workflows.

n8n is a low-code workflow creation tool that has become extremely popular in recent months. It allows you to connect different service nodes as building blocks using a UI. There are built-in nodes to integrate with a huge number of services and applications, including the entire Google suite, AI agents, and MCPs, or you can build your own API calls or code block nodes. n8n’s slogan is: “Connect anything to everything.”

To make it even better, there is a community collection of automation templates with free and paid templates you can copy and paste into your projects.

A workflow looks like this:

You can chain many services to build an input, process it, and then output to any place you want.

Now we are going to create our own workflow using the Elasticsearch MCP server as one of the agent tools.

You can find the full workflow here.

Prerequisites

To follow along, you will need:

  • An Elasticsearch deployment with the Agent Builder feature (Elasticsearch Serverless, or version 9.2+)
  • A SerpApi account and API key (the free tier includes 100 monthly searches)
  • An OpenAI API key for the agent’s chat model
  • Google credentials for the Google Docs nodes
  • Docker, to run n8n locally

The workflow

Let’s imagine we want to do some periodic analysis of Google’s related questions (a.k.a. “People also ask”) for specific keywords, store the results in Elasticsearch for analysis, and generate a report about the results.

We will design a Workflow that executes the following steps:

  1. On a schedule, goes and searches Google for the keyword we specify
  2. Gathers information about the top “People also ask” questions
  3. Stores the questions and answers in Elasticsearch for future analysis
  4. Using an agent and the Elasticsearch MCP server, compares the data of the current execution with the aggregated historical data for that keyword
  5. Generates a Google Docs report with the analysis results

All of this without writing a line of code, and with the flexibility of easily changing or adding any inputs or outputs by just connecting more nodes to the workflow.

Deploying the MCP server

For the MCP server, we will leverage the MCP server that ships with Elastic’s Agent Builder feature. Currently, it is available on Elasticsearch Serverless and will come to the rest of the installation methods in version 9.2.

Start by going to the Manage Tools screen of your Elasticsearch deployment:

https://<your_kibana_url>/app/agent_builder/tools

And copy the MCP server address.

You will have an MCP server with different tools to explore the cluster indices and run search queries:

Then, generate an API Key from the management page:

https://<your_kibana_url>/app/management/security/api_keys
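Before wiring anything into n8n, you can check that the server answers from a terminal. This is a minimal sketch, assuming <your_mcp_server_url> is the address you copied above; the MCP Streamable HTTP transport speaks JSON-RPC 2.0, and some servers expect an initialize handshake before other calls, so treat this only as a connectivity check:

# List the tools the MCP server exposes (connectivity check only)
curl -s -X POST "<your_mcp_server_url>" \
  -H "Authorization: ApiKey <your_elasticsearch_apikey>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'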

Configuring n8n

The process to deploy n8n locally is simple. We can use Docker to deploy our server.

First, we create a volume to persist our n8n workflows so we don't lose them if we restart the container:

docker volume create n8n_data

And now we pull and run the image:

docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

Once the container starts, you will see at the end of the output:

Editor is now accessible via:
http://localhost:5678

Press "o" to open in Browser.

And you can do just that. You will be asked to set up an owner account. Go ahead and do so.

Click on “Start from scratch”

Now let’s follow the steps we defined above:

On a weekly schedule, go and search Google for the keyword we specify

Let’s start by creating a Schedule Trigger by clicking on the top right plus sign.

On this screen, we are going to set how frequently we want to execute our workflow. Let’s make it weekly.

Then add the official SerpApi node and select Google Search. To set it up, you need an API key from their website. You can register for free and get 100 monthly searches.

This node allows you to programmatically run searches and have access to search results and metadata.
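If you want to inspect the raw payload the node works with, the equivalent request against SerpApi’s HTTP API looks like this (a sketch, using the API key you registered above):

# Run the same Google search directly against SerpApi
curl -s "https://serpapi.com/search.json?engine=google&q=elasticsearch&api_key=<your_serpapi_key>"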

For the configuration, we will manually set q to elasticsearch:

After that, connect the two nodes:

Gathers information about the top “People also ask” questions

To gather the information, we must execute the Google Search node by clicking the “Execute step” button in the node. You should see the following in the output window:

There is a lot of information available to analyze, but we will focus on related questions.

Related questions are curated by Google from many sources and try to represent what people ask about the current search. You can use them to understand user intentions and adjust your site content accordingly. In this way, your website may become one of the answer sources of the “People also ask” section, bringing you additional visits.
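You can preview those questions from a terminal with the sketch below; the question and snippet field names are assumptions based on SerpApi’s documented response format:

# Extract only the "People also ask" entries from the SerpApi response
curl -s "https://serpapi.com/search.json?engine=google&q=elasticsearch&api_key=<your_serpapi_key>" \
  | jq '.related_questions[] | {question, snippet}'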

Stores the questions and answers in Elasticsearch for future analysis

To store each of the related questions on its own Elasticsearch document, we must:

  1. Add a Split Out node, drag `related_question_fields` to the split field, and then
  2. Connect the Split Out node to an Elasticsearch Create Document node

First, configure the Elasticsearch connection by creating a new Generic Credential Type of type Header Auth with the following settings:

Name: Authorization

Value: ApiKey <your_elasticsearch_apikey>
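You can confirm the key is valid by sending the same header n8n will use (a sketch, assuming <your_elasticsearch_url> is your Elasticsearch endpoint):

# A successful response returns the cluster info JSON
curl -s -H "Authorization: ApiKey <your_elasticsearch_apikey>" "https://<your_elasticsearch_url>/"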

To keep it simple, we will configure the node to map everything from the previous input by setting body as {{ $json.toJsonString() }}. On this page, we can select or ignore specific parts of the input if we want by dragging fields from the input to the request body:

By clicking “Execute step,” you will index the results into Elasticsearch. You can connect the two nodes from this step:
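To double-check that the documents landed, you can query the index directly; serp_results is the index name used later in the agent’s ES|QL query (a sketch, assuming the same endpoint and key as above):

# Fetch one of the freshly indexed related-question documents
curl -s -H "Authorization: ApiKey <your_elasticsearch_apikey>" \
  "https://<your_elasticsearch_url>/serp_results/_search?size=1"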

Using an agent and the Elasticsearch MCP server, compares the data of the current execution with the aggregated historical data for that keyword

We must start by adding an AI Agent node, which will hold the AI Model and tools.

Start by clicking on the plus sign under “Chat Model” to configure your preferred model. You will need to provide your OpenAI API key at this step.

Now, on Tool, we are going to select MCP Client Tool and set the Elasticsearch MCP server we configured earlier as the provider of the tools the client can use to communicate with Elasticsearch. Configure it as follows:

Endpoint: Your MCP url

Server Transport: HTTP Streamable

Authentication: Header Auth

  • Name: Authorization
  • Value: ApiKey <your_elasticsearch_apikey>

Tools to include: We keep all for now

You will end up with something like this:

We must replace the chat trigger with the output of the search results, and then add the instructions to the agent about what we want to do with the results.

For the prompt, we double-click the AI Agent, select Define Below as the source of the prompt, and add the following:

You are a helpful assistant that helps analyze Google "People also ask" related questions for specific keywords for a given week.

The results for the keyword: {{ $json.search_parameters.q }} are:

{{ $json.related_questions.toJsonString() }}

I want you to use your search tool to find historical searches using the following ES|QL query:

`FROM serp_results`

Run this query as is.

And then write a report analysis using the Google docs tool.

You must first create the doc, and pass the document id to the add content tool. Use the keyword and the date for the document title.
For the report I'm looking for the format:

# Today's date, keyword
# This week's results summary
# Total dataset results summary
# Conclusion about differences between the current week and the total dataset

This prompt will use the results from the SerpApi call as the current input, and also use the Elasticsearch MCP results as historical input. Then compare both and create a Google Docs file with the report results.
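To preview what the agent’s MCP tool call returns, you can run the same ES|QL query by hand through Elasticsearch’s _query endpoint (a sketch, assuming the endpoint and key from earlier):

# Run the prompt's ES|QL query directly; ES|QL returns up to 1000 rows by default
curl -s -X POST "https://<your_elasticsearch_url>/_query" \
  -H "Authorization: ApiKey <your_elasticsearch_apikey>" \
  -H "Content-Type: application/json" \
  -d '{"query": "FROM serp_results"}'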

Generates a Google Docs report with the analysis results

The final piece is to add the Google Docs Tool, so it can generate the report from the current search results and the historical search results provided by the MCP server.

For this, you must click the plus sign on Tool and select the Google Docs Tool. We will need one node to create the file, and another one to update the content of the file with the agent's answer.

Add two Google Docs tools, one to create the document and another to update it, by setting different “Operation” values. Keep the input parameters as “Define automatically by the model” by clicking the icon to the right of the input.
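Under the hood, these two operations map to the Google Docs API’s create and batchUpdate calls. The sketch below shows roughly what the agent triggers, assuming you have a valid OAuth access token in <google_access_token>; the title and text are illustrative placeholders:

# Create an empty document titled with the keyword and date
curl -s -X POST "https://docs.googleapis.com/v1/documents" \
  -H "Authorization: Bearer <google_access_token>" \
  -H "Content-Type: application/json" \
  -d '{"title": "elasticsearch, 2025-01-01"}'

# Then insert the report text, using the documentId returned by the call above
curl -s -X POST "https://docs.googleapis.com/v1/documents/<document_id>:batchUpdate" \
  -H "Authorization: Bearer <google_access_token>" \
  -H "Content-Type: application/json" \
  -d '{"requests": [{"insertText": {"location": {"index": 1}, "text": "# Report..."}}]}'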

Testing it out

The final workflow will look like this:

You can find the full workflow here.

And after execution, you should see the report created:

The good thing about this workflow is the flexibility it has. You can add more tools to the agent, and by just changing the prompt, you can add additional capabilities.

With this setup, you can also plug the chat trigger back and have a conversation with the agent about the analysis results and create specific reports on demand based on the conversation.

Conclusion

In this article, we explored how to use n8n with the Elasticsearch MCP server in one of the endless use cases it can address. You can drive your flows sequentially, in parallel, agentically, or all of the above combined.

If you are interested in growing this project, the first step would be to clean up the unneeded data from the SerpApi payload and connect the Google search input node to an external data source for keywords, like a spreadsheet or database, so you don’t have to enter the search terms into the node manually. Another great idea would be to send the generated document by email or post it to Slack.

Level up your skills with our on-demand webinars: Agentic RAG using Elasticsearch and Intro to MCP with Elasticsearch MCP Server.

You can also take advantage of Elastic’s gen AI capabilities now by starting a free cloud trial or running Elasticsearch locally.
