

Agentic workers for commerce operations: a proof of concept.

A hands-on look at how AI agents can transform commerce ops using Mastra, MCP, and Commerce Layer’s Stream Hub — automate workflows, reduce errors, and power a new era of intelligent backend systems.

Fabrizio · October 27, 2025

In our recent white paper Principles of agentic commerce, Filippo Conforti, CEO and Co-founder of Commerce Layer, shared our vision for this new frontier of commerce experiences.

In the (very) near future, as OpenAI itself suggests, agents will buy and sell on our behalf, helping us find the best deals and tailoring product recommendations around our behaviors and preferences.

With the recent introduction of the Agentic Commerce Protocol (ACP) by OpenAI and Stripe, the horizon for how merchants can engage with customers is rapidly expanding — from conversational shopping experiences within ChatGPT to new, intelligent commerce touchpoints across the web. Yet, the same principles behind ACP extend well beyond customer interactions. Agents can become powerful allies for backend operations, assisting business users in automating workflows, surfacing insights, and taking action directly on commerce data.

Emerging standards like A2A (Agent-to-Agent Protocol, more on this here) and MCP (Model Context Protocol, covered later in this article) are setting the foundation for a new level of interoperability — where agents can seamlessly collaborate with each other and connect with legacy systems such as ERPs or CRMs, orchestrating complex workflows with minimal human intervention.

This article presents the results of a proof of concept I’ve been working on over the last few weeks, showing how you can start bringing AI into your business operations today. And of course, Commerce Layer — API-first since the very beginning — is the true enabler of this new era of interactions.

The business need

Before diving into the proof of concept, it’s worth stepping back to understand why agentic workers matter for commerce operations. Modern digital commerce teams are already surrounded by data — from orders and inventory to customer interactions — yet much of this information still requires manual supervision, periodic reports, and repetitive interventions. Tasks like reviewing fraudulent orders, reconciling stock levels, or following up on abandoned carts are often handled through rigid workflows or human oversight, making them both time-consuming and error-prone.

Agentic workers offer a new approach. By combining autonomous reasoning with direct access to APIs and business tools, agents can continuously monitor operations, detect anomalies, and act on them in real time — without waiting for human input. This doesn’t replace people; it amplifies their capacity to make faster, more informed decisions.

From a business standpoint, this shift has measurable outcomes: reduced operational overhead, faster response times, and more consistent execution across repetitive processes. Over time, those gains compound into tangible ROI — lower costs, fewer errors, and improved customer satisfaction through timely, context-aware interventions.

In this proof of concept, we’ll focus on two concrete examples where agentic workers can already deliver value:

  • Abandoned cart management
    Where an agent identifies customers who left items in their carts and triggers timely recovery actions.
  • Early fraud detection
    Where an agent monitors incoming orders and flags those that deviate from expected behavior.

These two scenarios demonstrate how even small, well-scoped agents can make commerce operations more efficient, adaptive, and ultimately more profitable — while keeping human teams focused on strategy rather than supervision.

Main tools

Before getting into the details of the implementation, let’s first introduce the main actors of this PoC.

Mastra as the agentic platform

Mastra is our reference platform for building agents and managing their runtime. Its pragmatic approach to agent programming made it quick and easy to set up agents and wire them with tools, memory, and everything else needed to build our “automated commerce backend”.

Covering Mastra’s internals is beyond the scope of this article, but you can find plenty of documentation on their website.

MCP Server

There are plenty of resources on the MCP protocol, which has taken center stage in every AI-related discussion over the last few months. The full specification can be found here, along with a lot of interesting and useful resources.

In our case we implemented a couple of tools and made them available to Mastra-based agents through a local MCP server.

These tools allow agents to:

  • Search for orders according to certain criteria (time frame, markets).
  • Tag resources according to some logic (e.g., mark a cart as abandoned if certain conditions are met).

Commerce Layer Stream API

This is one of the most recently released features of our platform. The Stream Hub offers a change data capture (CDC) system to capture and track data changes instantly. In our PoC, it is the source of the events that agents act upon, feeding results back to the platform via MCP tools and Commerce Layer APIs.

PoC scope and architecture

Now that we’ve introduced the main concepts, let’s dive into the structure of the PoC.

For this proof of concept we relied on Mastra, a platform that provides a very straightforward development environment for building and running agents. Mastra comes with built-in support for the Model Context Protocol (MCP) and can work with a wide range of LLMs — in our case, we chose OpenAI’s gpt-4o-mini for its balance between speed and accuracy.

The Mastra server runs locally in our PoC (although a cloud deployment is equally supported) and exposes an SDK that makes it easy to access and orchestrate agents via standard APIs.

To connect Commerce Layer’s real-time data with Mastra, we developed a lightweight Node.js middleware. This middleware plays a central role in the architecture, acting as a bridge between the two environments. It provides two essential services:

  • Event forwarding from the Stream Hub
    The middleware listens to Commerce Layer’s event Stream Hub, captures incoming events (e.g., order updates, SKU changes, customer actions), and forwards them to Mastra through the Mastra SDK. This enables agents to immediately react to live business events.
  • Scheduled agent execution
    Not all use cases are event-driven. For scenarios where agents need to run periodically (e.g., nightly audits, inventory reconciliations, abandoned cart checks), the middleware also embeds a scheduler. The scheduler is responsible for “waking up” agents at predefined intervals, ensuring that time-based operations complement the event-driven ones.

Finally, we added a local MCP server to equip agents with the tools they need to interact directly with Commerce Layer. Through this MCP integration, agents can not only read data but also perform updates on resources in line with their programmed logic — for example, flagging anomalies, adjusting product tags, or initiating replenishment strategies.

This layered architecture — Commerce Layer Stream Hub → Node.js middleware → Mastra agents (event-driven or scheduled) → MCP tools — ensures a clean separation of concerns while giving agents the ability to operate in real time as well as on a scheduled basis.

The lifecycle of an agentic worker

Every agent in this proof of concept follows a simple but powerful lifecycle built around four key stages: trigger, reason, act, and learn.

  • Trigger
    The agent wakes up in response to an external event (for example, an order update or a cart status change) or through a scheduled job.
  • Reason
    Using its underlying LLM, the agent interprets the context: it evaluates data, checks business rules, and determines the most appropriate next step.
  • Act
    Through the tools exposed via the MCP server, the agent performs concrete actions — querying APIs, updating records, or sending notifications.
  • Learn
    Finally, the agent records outcomes and feedback, enabling refinement over time. While in this PoC the learning step is mostly observational, future iterations could include adaptive logic based on performance or user feedback.

This loop — trigger → reason → act → learn — makes agentic workers fundamentally different from traditional automation. Instead of executing predefined scripts, they continuously interpret and respond to changing contexts, allowing commerce operations to evolve from static processes into adaptive systems.
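To make the loop concrete, here is a minimal, purely illustrative sketch of one iteration in code. The function name and the outcomes log are hypothetical; the agent.generate call mirrors the Mastra SDK usage shown later in this article.

// A purely illustrative sketch of one trigger → reason → act → learn iteration
// (runAgenticWorker and outcomesLog are hypothetical names).
async function runAgenticWorker(agent, event, outcomesLog) {
  // Trigger: an external event or a scheduled job wakes the agent up
  const context = { event, receivedAt: new Date().toISOString() };

  // Reason + act: the LLM-backed agent interprets the context and calls its
  // MCP tools (queries, updates, notifications) while generating a response
  const response = await agent.generate({
    messages: [{ role: "user", content: JSON.stringify(context) }],
  });

  // Learn: record the outcome so future iterations can be refined
  outcomesLog.push({ event, outcome: response.text });
  return response.text;
}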

Bringing agents to life

With the architecture in place, the next step was to bring the pieces to life and observe how the system behaves under real conditions. In this section, I’ll walk through the role of each component — from event ingestion to agent execution — showing how they interact in practice. Along the way, I’ll include snippets of the actual code used and highlight some of the outputs, so you can get a concrete sense of how agentic workers can already enhance commerce operations today.

Middleware implementation

As already mentioned, our middleware is a simple Node.js application. It has two main modules: the streamer and the scheduler.

Streamer

The streamer connects to the Commerce Layer Stream Hub and forwards messages to Mastra according to the routing logic contained in a simple router object:

const SERVICE_ROUTER = {
  "prices": {
    "name": "price-auditor",
    "agentId": "priceAuditorAgent"
  },
  "skus": {
    "name": "skus-auditor",
    "agentId": "skusAuditorAgent"
  },
  "orders": {
    "name": "early-fraud-detector",
    "agentId": "earlyFraudDetectionAgent"
  }
}

The idea is pretty simple: the streamer listens for events; when an update message arrives, it reads the resource type contained in the event payload, looks up the destination agent, and sends the event to it through the Mastra SDK client.

Let’s see how this looks in the code snippets below:

const { EventSourcePlus } = require("event-source-plus");
...
const eventSource = new EventSourcePlus(STREAM_API_URL, {
    headers: {
      Authorization: "Bearer " + token,
    },
    retryInterval: 3000,
    heartbeatTimeout: 45000,
  });

  eventSource.listen({
    onMessage(message) {
      switch (message.event) {
        case "update":
          handleEvent(JSON.parse(message.data), message.id);
          break;
        case "error":
          logger.error(`Error event received: ${JSON.stringify(message)}`);
          return;
        default:
          return;
      }
    },
    onError(err) {
      logger.error("Error:", err);
    },
    onOpen() {
      logger.info("Connection opened");
    },
    onRetrying(count) {
      logger.info(`Reconnecting... attempt #${count}`);
    },
    onClosed() {
      logger.info("Connection closed");
    },
  });

As you can see, in our PoC we only handle update and error messages, but create and delete events can be handled as well.
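For instance, the onMessage switch could be extended along these lines, assuming the Stream Hub emits create and delete as event names (a sketch, not part of the PoC code):

switch (message.event) {
  case "create":
  case "update":
  case "delete":
    // Pass the event type along so handleEvent can route it differently if needed
    handleEvent(JSON.parse(message.data), message.id, message.event);
    break;
  case "error":
    logger.error(`Error event received: ${JSON.stringify(message)}`);
    return;
  default:
    return;
}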

The handleEvent function is in charge of parsing the payload of the received event and passing it to the agent for evaluation, using the router object to determine the right destination:

const { mastraClient } = require("./lib/mastra.js");

...

async function handleEvent(eventData, eventId) {
	...
	try {
	  const agent = mastraClient.getAgent(
	    SERVICE_ROUTER[eventData.resource_type].agentId
	  );
	  
	  ...
	  const postData = JSON.stringify(eventData);
	  ...
	
	  const response = await agent.generate({
	    messages: [
	      {
	        role: "user",
	        content: postData,
	      },
	    ],
	    memory: {
	      resource: `memory_${eventData.resource_type}`,
	      thread: {
	        id: eventData.resource_id,
	      },
	    },
	  });
	  logger.info(
	    `[${eventId}] Response from ${
	      SERVICE_ROUTER[eventData.resource_type].name
	    } received: ${response.text}`
	  );
	} catch (error) {
	  logger.error(
	    { err: error },
	    `[${eventId}] Error posting to ${
	      SERVICE_ROUTER[eventData.resource_type].name
	    }:`
	  );
	}
	...
}

Here, mastra.js is just a thin wrapper around the Mastra client SDK.
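For completeness, here is a minimal sketch of what that wrapper might look like, assuming the @mastra/client-js package and the default local dev server URL (both are assumptions based on a standard Mastra setup):

// lib/mastra.js: a minimal sketch of the Mastra client SDK wrapper
const { MastraClient } = require("@mastra/client-js");

const mastraClient = new MastraClient({
  // Base URL of the Mastra server (the local dev server shown later in this article)
  baseUrl: process.env.MASTRA_BASE_URL || "http://localhost:4111",
});

module.exports = { mastraClient };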

The function basically forwards the payload to the agent and waits for the response. For our PoC, besides the actions the agent performs through MCP tools, we simply log the agent’s textual response via the middleware logger.

Scheduler

The scheduler module has the purpose of "waking up" specific agents according to a schedule (at the moment, agents are still not able to trigger themselves!).

The schedule object is pretty simple:

const AGENT_SCHEDULE = [
  {
    "agent_id": "abandoned-cart",
    "schedule": "0 */5 * * * *",
    "enabled": true,
    "params": {
      "lookback_hours": 24,
      "max_orders": 100
    }
  }
]

It’s basically an array of objects containing, for each agent we want to schedule, the schedule itself (a cron expression with a leading seconds field, as supported by node-schedule) and some additional parameters. In this case we want the abandoned-cart agent to do its thing every 5 minutes.

With this, it’s just a matter of starting the scheduler, reading the schedule object, and scheduling the trigger operation for each agent at the defined time:

const schedule = require("node-schedule");

...

function startAgentScheduler() {
  logger.info("starting agent scheduler...");
  AGENT_SCHEDULE.forEach((agent) => {
    if (!agent.enabled) return;
    logger.info(
      `scheduling agent ${agent.agent_id} with schedule ${
        agent.schedule
      } and params ${JSON.stringify(agent.params)}`
    );
    schedule.scheduleJob(agent.schedule, () => {
      triggerAgent(agent.agent_id, agent.params);
    });
  });
}

At each run the scheduler will just send a "wake up message" to the agent:

async function triggerAgent(agentId, params) {
  logger.info(
    `triggering agent ${agentId} with params ${JSON.stringify(params)}`
  );

  // Build the trigger event at invocation time so created_at is always fresh
  const triggerEvent = {
    event: "scheduled_trigger",
    created_at: new Date().toISOString(),
  };

  try {
    const agent = mastraClient.getAgent(agentId);

    const response = await agent.generate({
      messages: [
        {
          role: "user",
          content: JSON.stringify(triggerEvent),
        },
      ],
    });
    logger.info(`Response from ${agentId} received: ${response.text}`);
  } catch (error) {
    logger.error({ err: error }, `Error posting to ${agentId}:`);
  }
}

In this case too, all we do is display the agent’s textual response.

One thing to notice is that the trigger event always includes a created_at parameter containing the current timestamp. This is needed because agents live in a timeless dimension and may need a date to query for data (unless, of course, you instruct the agent to fetch the current date and time via an MCP tool before doing any other operation!).

Let’s pack everything together in an "executable" index.js file:

const args = process.argv.slice(2);
const RUN_AGENT_SCHEDULER = args.includes("-a");
const RUN_STREAMER = args.includes("-s");

const { logger } = require("./utils/logging.js");
const { startStreamer } = require("./streamer.js");
const { startAgentScheduler } = require("./scheduler.js");


if (!(RUN_AGENT_SCHEDULER || RUN_STREAMER )) {
  logger.info("Usage: node index.js [-a] [-s] (at least one required)");
  process.exit(1);
}

if (RUN_STREAMER) {
  startStreamer();
}

if (RUN_AGENT_SCHEDULER) {
  startAgentScheduler();
}

And let’s give this thing a spin:

cl-stream-poc % npm run start -- -a -s

> cl-stream-poc@0.0.1 start
> node src/index.js -a -s

[2025-10-02 08:26:10] INFO: [1759393570563][30] starting agent scheduler...
[2025-10-02 08:26:10] INFO: [1759393570563][30] scheduling agent abandoned-cart with schedule 0 */5 * * * * and params {"lookback_hours":24,"max_orders":100}
[2025-10-02 08:26:10] INFO: [1759393570801][30] Obtained token, connecting to stream...

Great! At this point our middleware is listening and actively triggering our agents!

MCP tools implementation

At its core, an agent is more than just a large language model (LLM) generating text. It’s an entity with a defined purpose, equipped with tools and context, that can use the reasoning power of the LLM to decide what to do and then actually carry it out. While an LLM on its own can answer questions, an agent can take action — querying data, updating resources, or triggering workflows — to achieve a specific goal.

MCP was built to give agents a standardised way to access external tools by providing meaningful descriptions of their capabilities and outcomes.

For this PoC we have the following tools:

  • One allowing the agent to tag resources in Commerce Layer as the result of an evaluation (e.g., marking an order as potential_fraud).
  • A number of other tools mapping our Metrics API, to retrieve orders according to some filtering criteria.

For simplicity we will show here only the code for the resource tagging tool:

import { z } from 'zod';
import { errorTextResponse, structuredTextResponse } from '../../mcpc/tools/common.js';
import { callToolResult, getAccessToken } from '../../utils/tools.js';
import { CommerceLayer } from '@commercelayer/sdk';

const mcpTool = {
    name: 'tag-resource',
    description: 'Tags a resource with the specified attributes',
    inputSchema: {
        resourceId: z.string().describe('The ID of the resource to be tagged'),
        resourceType: z.string().describe('The type of resource'),
        tags: z.array(z.string()).describe('A comma separated list of tags to be associated with the resource, strip spaces')
    },
    outputSchema: {
        success: z.boolean().describe('Indicates if the tagging operation was successful')
    },
    callback: async ({ resourceId, tags, resourceType }, { authInfo }) => {
        let response;
        try {
            console.log(`Tagging ${resourceType}/${resourceId} with tags: ${tags.join(', ')}`);
            // Get an access token and instantiate the Commerce Layer SDK client
            const accessToken = await getAccessToken();
            const cl = CommerceLayer({ accessToken });
            // Add the tags to the resource via the _add_tags trigger attribute
            await cl[resourceType].update({
                id: resourceId,
                _add_tags: tags.join(', ')
            });
            response = structuredTextResponse({ success: true });
        }
        catch (error) {
            console.error(`Error tagging resource ${resourceId} with tags: ${tags}`, error);
            response = errorTextResponse(String(error.message || error), true);
        }
        return callToolResult(response);
    }
};
export default mcpTool;

This snippet clearly shows how you can easily define an MCP tool.

First you provide descriptions for the tool itself and for the input/output schema (using zod for validation). Then you provide the tool logic in the callback property. In our case the tool gets a token to interact with the Commerce Layer APIs and then updates a resource, adding the tags received as input. We will see later how all this comes together in our PoC.
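The same pattern can also cover small utility tools. For instance, the "fetch the current date and time via MCP" option mentioned in the scheduler section could be a hypothetical tool like the following (the tool name and the return shape are assumptions, following the standard MCP tool result format):

import { z } from 'zod';

// Hypothetical "current-time" tool: lets an agent fetch the current timestamp
// itself instead of relying on the created_at field of the trigger event.
const mcpTool = {
    name: 'current-time',
    description: 'Returns the current date and time in ISO 8601 format',
    inputSchema: {},
    outputSchema: {
        now: z.string().describe('The current timestamp in ISO 8601 format')
    },
    callback: async () => {
        const now = new Date().toISOString();
        // Return both plain text and structured content, per the MCP tool result format
        return {
            content: [{ type: 'text', text: now }],
            structuredContent: { now }
        };
    }
};
export default mcpTool;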

All the tools are then bundled together into an MCP server:

#!/usr/bin/env node
import { tools } from './tools/index.js';
import { McpLocalServer } from '../server/local.js';
import pkg from '../../package.json' with { type: 'json' };
const server = new McpLocalServer({
    name: 'commercelayer-metrics',
    version: pkg.version,
    title: pkg.description
});
// Register tools
server.registerTools(tools);
void server.start().catch((error) => {
    console.error('Fatal error during server start:', error);
    process.exit(1);
});
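The tools module imported above could be as simple as an aggregation of the individual tool definitions. A hypothetical sketch (the file names other than the tag-resource tool are assumptions):

// tools/index.js: collects all tool definitions for registration with the MCP server
import tagResource from './tag-resource.js';
import cartsSearch from './carts-search.js';
import ordersBreakdown from './orders-breakdown.js';

export const tools = [tagResource, cartsSearch, ordersBreakdown];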

Agents implementation

It’s time to take a look at the main characters of our PoC: the agents. In this article we will cover two of them:

  • Abandoned cart detector agent
    Runs periodically and scans carts for potentially abandoned ones. It uses the update timestamp as the criterion: if the last update is older than a specified threshold, the cart is considered abandoned.
  • Early fraud detection agent
    Acts on order update events. When an order turns pending, the agent scans the order history looking for orders placed with the same email but shipped to more than x different addresses.

In Mastra, agents are programmatically defined using the Agent object. Its minimal structure looks like this:

export const myAgent = new Agent({
  name: 'SKUs Auditor Agent',
  instructions: `...`,
  model: openai('gpt-4o-mini'),
  tools: /* agent tools here */,
  memory: /* memory options  here*/,
});

So, basically, Mastra allows you to create "stubs" for agents running on the model of your choice. An agent is defined by:

  • Name — The agent name.
  • Instructions — What the agent is supposed to do, expressed in natural language.
  • Model — The LLM to be used (this might also require an access key in the env vars).
  • Tools — Anything the agent might need to carry out its task; in our case these are provided via MCP.
  • Memory — Agents can have a local memory to persist conversations, which can help improve agent efficiency (a minimal example follows this list).
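A minimal memory setup, sketched here on the assumption that the @mastra/memory and @mastra/libsql packages are used (the database file path is also an assumption), could look like this:

import { Memory } from '@mastra/memory';
import { LibSQLStore } from '@mastra/libsql';

// A minimal memory configuration that persists conversations to a local file
export const agentMemory = new Memory({
  storage: new LibSQLStore({ url: 'file:../mastra.db' }),
});

The resulting object can then be passed to the Agent constructor through the memory property.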

We will go over the implementation of our agents in a short while, but now I want to jump to the final result. Agents are bundled in the main index.js file:

import { Mastra } from '@mastra/core/mastra';
import { PinoLogger } from '@mastra/loggers';
import { LibSQLStore } from '@mastra/libsql';
import { earlyFraudDetectionAgent } from './agents/fraud-monitor';
import { abandonedCartDetectorAgent } from './agents/abandoned-cart-detector';

export const mastra = new Mastra({
  workflows: {},
  agents: { earlyFraudDetectionAgent, abandonedCartDetectorAgent },
  storage: new LibSQLStore({
    // stores telemetry, evals, ... into memory storage, if it needs to persist, change to file:../mastra.db
    url: ":memory:",
  }),
  logger: new PinoLogger({
    name: 'Mastra',
    level: 'info',
  }),
});

We start the Mastra platform:

~ cl-commerce-monitor % npm run dev

> cl-commerce-monitor@1.0.0 dev
> mastra dev

◐ Preparing development environment...
✓ Initial bundle complete
◇ Starting Mastra dev server...

 mastra  0.12.3-alpha.1 ready in 1474 ms

│ Playground:   http://localhost:4111/
│ API:     http://localhost:4111/api

◯ watching for file changes...
◐ Bundling...
✓ Bundle complete
◐ [Mastra Dev] - Bundling finished, checking if restart is allowed...
◐ [Mastra Dev] - ✅ Restarting server...
↻ Restarting server...
◇ Starting Mastra dev server...

And opening a browser at http://localhost:4111/ we can see the Mastra playground listing the agents we created.

Let’s now take a look at the agents we created!

Abandoned cart detector

Cart abandonment is one of the most persistent challenges in commerce operations. Every unfinished checkout represents missed revenue, but also a burden for business teams who need to track, analyze, and recover those opportunities. Traditionally, this has meant either relying on basic automation (batch emails, static reminders) or performing repetitive manual tasks.

An abandoned cart agent changes the game: it continuously monitors for carts that go stale, evaluates the customer context in real time, and triggers the most appropriate action — whether it’s sending a personalized reminder, flagging the case for follow-up, or even applying a targeted incentive. For merchants, this means fewer repetitive tasks and a more streamlined backend process. For customers, it translates into a smoother, more relevant shopping experience.

For the scope of this article, our agent will just flag the cart as abandoned, but the potential here is huge.

The abandoned cart agent is meant to be triggered periodically, so it relies on the schedule we defined earlier.

The prompt is pretty straightforward — the text says it all:

You are an AI agent monitoring potential abandoned carts.

Instructions:
1. You will always receive a trigger event in JSON.

2. Retrieve all carts in status "pending" (sort by order.updated_at and *NOT* by cart.updated_at) 
 using cart search tool (**NEVER** call order search use *ALWAYS* clMetrics_carts-search) 
 the filter should be something like this:
 {
    "order": {
        "statuses": {
            "in": ["pending"]
        },
        "date_from": "<<day before of the event time 00:00>>",
        "date_to": "<<day before of the event time 23:59>>",
				"date_field": "updated_at"
    }
}

3. Return a message with the number of pending carts in JSON format like this:
{
  "message": "There are <number_of_pending_orders> pending orders that might be abandoned carts.",
  "orders": [<list_of_order_ids>]
}

4. Tag the cart as "abandoned_cart" using the MCP tools, specifying "orders" as the resource type and the order id as the resource id.
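For instance, if the trigger event was created on 2025-10-03, the filter the agent is expected to build would target the previous day (the exact timestamp format here is just illustrative):

{
  "order": {
    "statuses": { "in": ["pending"] },
    "date_from": "2025-10-02T00:00",
    "date_to": "2025-10-02T23:59",
    "date_field": "updated_at"
  }
}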

In the section on MCP tools implementation we showed how we created a tool that tags resources in Commerce Layer. Our agent will make use of this tool to flag a cart as potentially abandoned. Let’s see how to integrate it with our MCP server.

We added this code under src/mastra/mcp:

import { MCPClient } from "@mastra/mcp";

const CL_ENV = {
        "CL_CLIENT_ID" : "<<YOUR CL Client ID here>>",
        "CL_CLIENT_SECRET": "<<YOUR CL  Secret here>>",
        "CL_ORGANIZATION": "<<YOUR Client Org here>>",
      }
 
export const clMcpClient = new MCPClient({
  id: "cl-mcp-client",
  servers: {
    myMCPServer: {
      command: "node",
      args: ["<<path to mcp server index.js>>"],
      env: CL_ENV
    },
  }
});

As our MCP server is local, Mastra will spawn a node process that will run the index file containing our tools. We pass Commerce Layer credentials as env variables so that the tool can call our APIs. Then we just pass this to the agent object:

import { clMcpClient } from '../mcp/clMcpClient';
...
export const abandonedCartDetectorAgent = new Agent({
  name: 'Abandoned Cart Detector Agent',
  instructions: `
   ...
    `,
  model: openai('gpt-4o-mini'),
  tools: await clMcpClient.getTools(),
});

That’s it. We have our abandoned cart agent set up and ready to do its thing!

Early fraud detection agent

Fraudulent orders are a critical pain point in commerce operations. They not only generate financial losses but also drain resources as teams spend time verifying suspicious transactions and handling chargebacks. Traditional fraud prevention often relies on rigid rules or third-party tools that may either miss subtle risks or create false positives that frustrate legitimate customers.

An early fraud detection agent provides a smarter alternative. By continuously scanning incoming orders, cross-checking them against past customer behavior, and applying contextual reasoning, the agent can flag anomalies in real time. This allows business teams to focus only on genuinely suspicious cases, reducing repetitive manual checks while lowering the risk of fraud slipping through. The result is stronger protection for merchants and a more seamless purchasing experience for honest customers.

For the scope of this PoC, our agent will just flag the order as potential fraud. The agent is triggered by an order update event received by the streamer.

We'll need a more sophisticated prompt here but, as before, you can easily see what it’s supposed to do:

You are an AI agent monitoring incoming order updates for potential fraud.

    Instructions:
    1. You will always receive an order update in JSON.

    2. Retrieve the full order using the tool with the provided "resource_id".

   3. Check order status:
   - Normalize the order status to lowercase.
   - If the status is not exactly "pending":
     Respond:
     {
       "message": "✅ No fraud check required.",
       "order": "<resource_id>"
     }
     End.

    4. If the order status = "pending":
      - If "shipping_address" is missing:
        {
          "message": "ℹ️ Cannot evaluate — missing shipping address.",
          "order": "<resource_id>"
        }
        - End.

      - Otherwise, use the order breakdown tool to find all orders belonging to the same customer in the last year.
        The breakdown should be by city, address and country, using the cardinality operator.
        Pass as a filter an object like the following (replace with actual values):
        {
          "order": {
              "date_from": "<<now_minus_1_year>>",
              "date_to": "<<now>>",
              "date_field": "placed_at"
          },
          "customer": {
              "emails" : {
                  "in": ["<<customer_email>>"]
              }
          }
        } 

    5. With the retrieved breakdown (JSON object):
      - parse the content.text field as a JSON object.
      - extract the data.shipping_address.city array
      - Count elements

    6. Fraud detection rule:
      - If the number of unique cities is less than or equal to 5:
        Respond with:
        {
          "message": "✅ Order <resource_id> looks good. <number_of_unique_cities> different shipping cities in the last year for email <customer_email>.",
          "order": "<resource_id>"
        }
      
      - If the number of unique cities is 6 or greater:
        - tag the order as "potential_fraud" using the MCP tools specifying "orders" as the resource type and the resource_id as the resource id
        - Respond with:
        {
          "message": "🚨 Potential fraud detected for order <resource_id>. Reason: More than 5 unique shipping cities for email <customer_email>.",
          "order": "<resource_id>"
        }

        Constraints:
        - Do not output reasoning steps.
        - Always return only a JSON object with "message" and "order".
        - No extra text, explanation, or formatting outside the JSON.

In this case the agent uses the Metrics API tools exposed by our local MCP server (their implementation is not covered in this article). The way they are assigned to the agent is the same as what we saw for the abandoned cart agent.
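For reference, here is a sketch of how this agent could be wired up, mirroring the abandoned cart agent. The memory configuration is an assumption, added so that the thread options sent by the streamer are actually persisted; it is not part of the PoC code shown above.

import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { Memory } from '@mastra/memory';
import { LibSQLStore } from '@mastra/libsql';
import { clMcpClient } from '../mcp/clMcpClient';

export const earlyFraudDetectionAgent = new Agent({
  name: 'Early Fraud Detection Agent',
  instructions: `...`, // the fraud detection prompt shown above
  model: openai('gpt-4o-mini'),
  tools: await clMcpClient.getTools(),
  // Persist per-order threads so the memory options sent by the streamer are honored
  memory: new Memory({
    storage: new LibSQLStore({ url: 'file:../mastra.db' }),
  }),
});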

Putting everything together

Now that we’ve looked at all the pieces of our PoC, it’s time to give it a spin and see the results.

Here’s the output of our abandoned cart agent:

[2025-10-03 09:48:00] INFO: [1759484880007][30] [SCHEDULER] triggering agent abandonedCartDetectorAgent with params {"lookback_hours":24,"max_orders":100}
[2025-10-03 09:48:23] INFO: [1759484903670][30] Response received: ```json
{
  "message": "There are 14 pending orders that might be abandoned carts.",
  "orders": [
    "rnYhvglDpE",
    "pyAhVLRjrQ",
    "ykohGVkoXd",
    "nlKhmQkJXQ",
    "oKkhYXABnm",
    "zdJhngmpLa",
    "KaehevQJVK",
    "xzYhpaeklY",
    "QgYhvQQdAl",
    "YWehQBOyjn",
    "eWdhoydxMB",
    "GWrhZzbabv",
    "QgYhvQeGZD",
    "eWdhoybrrA"
  ]
}

And if we check the order with the CLI, we can see that our agent tagged the order as abandoned:

~ cl-stream-poc % cl get orders/rnYhvglDpE -i tags -f id tags               
{
  id: 'rnYhvglDpE',
  type: 'orders',
  tags: [
    {
      id: 'JrWXflvMpq',
      type: 'tags',
      name: 'abandoned_cart',
      created_at: '2025-09-24T14:56:05.726Z',
      updated_at: '2025-09-24T14:56:05.726Z',
      reference: null,
      reference_origin: null,
      metadata: {}
    }
  ]
}

Now let’s see the output of the early fraud detection agent:

[2025-10-03 10:23:16] INFO: [1759486996615][30] [STREAMER] Event data: {"event":"update","resource_type":"orders","resource_id":"OBnhRYeeQm","organization_id":"DnwWkFmMdn","payload":{"updated_at":{"from":"2025-10-03T10:23:04.613Z","to":"2025-10-03T10:23:16.400Z"}},"ip_address":"93.66.65.181","user_agent":"node","uuid":"0bd1e09342ef4ab29c2e4f698008a005dffb3f32557671d938481e60d1fb0d59","test":true,"created_at":"2025-10-03T10:23:16.400Z","updated_at":"2025-10-03T10:23:16.400Z","who":{"application":{"id":"podBiKmyVN","client_id":"e_p0J5QeUuYgkkm97eRJP7w_8QHPNoJYYnxY1dWlgV0","kind":"integration","public":false,"confidential":true}}}
[2025-10-03 10:23:16] INFO: [1759486996615][30] [1759486996415-0][STREAMER] Posting to early-fraud-detector
[2025-10-03 10:23:23] INFO: [1759487003331][30] [1759486984683-0][STREAMER] Response from early-fraud-detector received: {"message":"🚨 Potential fraud detected for order OBnhRYeeQm. Reason: More than 5 unique shipping cities for email fabriziod@commercelayer.io.","order":"OBnhRYeeQm"}

Here too, if we check the order with the CLI, we can see that it has been correctly flagged as potential fraud:

~ cl-stream-poc % cl get orders/OBnhRYeeQm -i tags -f id tags 
{
  id: 'OBnhRYeeQm',
  type: 'orders',
  tags: [
    {
      id: 'JkkxfeEGPJ',
      type: 'tags',
      name: 'potential_fraud',
      created_at: '2025-09-24T10:24:04.992Z',
      updated_at: '2025-09-24T10:24:04.992Z',
      reference: null,
      reference_origin: null,
      metadata: {}
    }
  ]
}

Conclusion

This proof of concept was not about building production-ready solutions, but about demonstrating what’s already possible when you combine an API-first commerce platform like Commerce Layer with agentic workers powered by modern LLMs. By wiring together real-time events, scheduled tasks, and intelligent agents, we’ve shown how everyday commerce operations — from cart recovery to fraud detection — can be reimagined with far less manual overhead and far greater adaptability.

The real takeaway is that the building blocks are here today. With tools like Mastra, MCP, and streaming APIs, agents can already operate as autonomous, context-aware assistants within existing commerce stacks. What’s emerging now, with initiatives such as ACP and A2A, is the foundation for an open, interoperable ecosystem — where agents can not only act on data, but also collaborate with each other and with legacy systems like ERPs and CRMs.

Commerce Layer’s API-first architecture naturally fits into this evolution, making it an ideal ground for experimentation and early adoption. As these protocols mature, we’ll move from proofs of concept like this one to fully agentic operations — where human expertise and autonomous agents work side by side to create faster, smarter, and more resilient commerce systems.