Lab 3: Integrating a Task-Specific Agent in MX-AI
In this lab, you will learn how to integrate a Task-Specific agent into the MX-AI ecosystem. The agent will be able to communicate with the orchestrator agent and perform tasks based on user queries.
Containerizing the Agent
To integrate your TutorialAgent into the MX-AI ecosystem, you need to containerize it. This will allow you to deploy it in a Kubernetes environment, where it can communicate with the orchestrator agent.
The following Dockerfile should be a good starting point to do so:
Dockerfile
# Use the latest Python 3.12 image with uv installed
FROM ghcr.io/astral-sh/uv:python3.12-bookworm AS base
# Stage for building the application
FROM base AS builder
# Copy the latest uv binaries from the official uv image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Set environment variables for uv
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy
WORKDIR /app
# Create a directory for the application
RUN mkdir -p agent
# Copy the application code
COPY . agent/
WORKDIR /app/agent
# Create lockfile
RUN --mount=type=cache,target=/root/.cache/uv \
uv lock
# Install the application dependencies using uv
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --frozen --no-install-project --no-dev
# Stage for the final image
FROM base
# Copy the built application from the builder stage
COPY --from=builder /app /app
# Set environment variables for the final image
ENV PATH="/app/agent/.venv/bin:$PATH"
# Set the working directory to the application directory
WORKDIR /app/agent
# Run the application using uv
CMD ["uv", "run", "."]
You can also use the following Makefile to facilitate the build and run process:
Makefile
# Set your Docker registry here
DOCKER_REGISTRY ?= hub.example.com
VERSION ?= $(shell git describe --tags --always --dirty)
PORT ?= 9900
# Set the repository, image name, and tag here
REPO ?= example/tutorial-agent
TAG ?= latest
IMAGE := $(DOCKER_REGISTRY)/$(REPO):$(TAG)
.PHONY: build run push clean
build:
	@echo "Building Agent's Docker image with tag: $(IMAGE) - Version: $(VERSION)"
	docker build $(if $(NO_CACHE),--no-cache) \
		--build-arg VERSION=$(VERSION) \
		--tag $(IMAGE) .
	@echo "Agent's Docker image built successfully."
run:
	docker run --rm \
		-it \
		--env-file .env \
		-p $(PORT):$(PORT) \
		$(IMAGE)
push:
	@echo "Pushing Agent's Docker image with tag: $(IMAGE)"
	docker push $(IMAGE)
	@echo "Agent's Docker image pushed successfully."
clean:
	docker rmi $(IMAGE) || true
You can put the two files above in the root of your agent's codebase and simply run
make build
to build the Docker image. If you want to run it, you can use:
make run
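Note that the run target passes --env-file .env to docker run, so a .env file must exist in the directory you run make from. Its contents depend on what your agent reads from its environment; a hypothetical example (the variable names below are illustrative assumptions, not MX-AI requirements):

```shell
# .env - environment passed into the container at run time.
# These keys are illustrative; use whatever your TutorialAgent actually reads.
OPENAI_API_KEY=replace-me
AGENT_PORT=9900
```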
You can also push the image to your Docker registry with:
make push
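All of the Makefile variables can be overridden on the command line. As a quick sanity check of what will be pushed, the snippet below mirrors in plain shell how the image reference is assembled from the registry, repository, and tag (the defaults are the same placeholders used in the Makefile above):

```shell
# Plain-shell mirror of how the Makefile assembles the image reference.
# The defaults are the same placeholders used above; override them with
# environment variables (or make VAR=value) for your own registry.
DOCKER_REGISTRY="${DOCKER_REGISTRY:-hub.example.com}"
REPO="${REPO:-example/tutorial-agent}"
TAG="${TAG:-latest}"
IMAGE="${DOCKER_REGISTRY}/${REPO}:${TAG}"
echo "${IMAGE}"
```

For instance, running make push TAG=v0.1.0 would publish a versioned tag instead of latest.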
Integration
The final step is to integrate your agent into the MX-AI ecosystem. After pushing the Docker image to your registry, you can deploy it using the AIFabric resource, which you learnt about in Lab 1.
The following YAML file, for example, deploys the UI service, the orchestrator agent, and our TutorialAgent:
tutorial-fabric.yaml
apiVersion: odin.trirematics.io/v1
kind: AIFabric
metadata:
  name: tutorial-fabric
  namespace: trirematics
spec:
  services:
    - name: ui
      image: hub.bubbleran.com/orama/services/ui:2025_06a1
  llms:
    - name: openai-model
      provider: openai
      model: gpt-4.1-mini
      api-key: <your-api-key>
  agents:
    - name: orchestrator-agent
      role: orchestrator
      image: hub.bubbleran.com/orama/agents/orchestrator:2025_06a1
      llm: openai-model
    - name: tutorial-agent
      role: worker
      image: hub.example.com/example/tutorial-agent:latest # Replace with your image
      llm: openai-model
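Since the api-key field holds a secret, you may prefer not to commit a real key into the YAML file. One option is to keep the <your-api-key> placeholder under version control and substitute the key at deploy time; a minimal sketch using sed (the MXAI_API_KEY variable and the file names are illustrative assumptions, not part of MX-AI):

```shell
# Substitute the <your-api-key> placeholder at deploy time instead of
# committing a real key. MXAI_API_KEY and the file names are illustrative.
API_KEY="${MXAI_API_KEY:-sk-example}"   # export MXAI_API_KEY with your real key

# A minimal stand-in fragment; in practice you would run sed against your
# full tutorial-fabric.yaml template.
printf 'api-key: <your-api-key>\n' > fabric-template.yaml
sed "s|<your-api-key>|${API_KEY}|" fabric-template.yaml > fabric-rendered.yaml
cat fabric-rendered.yaml
```

You would then install the rendered file rather than the template.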
Deploy the AIFabric resource with the following command:
brc install aifabric tutorial-fabric.yaml
If the deployment succeeds but you get an error when sending queries through the UI, it's likely because the tutorial agent's image was pulled later than the others. In that case, just delete and recreate the AIFabric resource:
brc delete aifabric tutorial-fabric.yaml
brc install aifabric tutorial-fabric.yaml
This should only happen the first time you deploy the AIFabric resource with a new agent.
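If you iterate on your agent image frequently, the delete-and-reinstall cycle can be wrapped in a small shell helper (redeploy_fabric is our own name; it assumes brc is on your PATH):

```shell
# Wraps the delete-and-reinstall cycle described above; the function name is
# our own. "|| true" lets the delete fail harmlessly when the fabric is not
# installed yet, so the helper also works for a first deployment.
redeploy_fabric() {
    local manifest="${1:-tutorial-fabric.yaml}"
    brc delete aifabric "${manifest}" || true
    brc install aifabric "${manifest}"
}
```

Running redeploy_fabric tutorial-fabric.yaml then performs both steps in one call.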
Note: With future updates, you will be able to push your agents directly to the BubbleRAN registry to share them with the community.