# Lab 1: Deploying an AIFabric

This lab guides you through deploying an `AIFabric` resource in the MX-AI ecosystem, which includes setting up agents that can interact with each other and with services. You will learn how to create an `AIFabric`, define agents, connect them to services, and empower them with an LLM of your choice.
## AIFabric Custom Resource
The following YAML snippet, `myfabric.yaml`, defines an `AIFabric` resource where the agents leverage the OpenAI API to generate their responses.

```yaml
apiVersion: odin.trirematics.io/v1
kind: AIFabric
metadata:
  name: myfabric
  namespace: trirematics
spec:
  services:
  - name: ui
    image: hub.bubbleran.com/orama/services/ui:2025_06a1
  - name: observability-db
    image: hub.bubbleran.com/orama/services/observability-db:2025_06a1
  llms:
  - name: openai-model
    provider: openai
    model: gpt-4.1-mini
    api-key: <your-api-key>
  agents:
  - name: orchestrator-agent
    role: orchestrator
    image: hub.bubbleran.com/orama/agents/orchestrator:2025_06a1
    llm: openai-model
  - name: athena-agent
    role: worker
    image: hub.bubbleran.com/orama/agents/athena:2025_06a1
    llm: openai-model
    service-refs:
    - observability-db
```
In the above example, you may change the names of the services, LLMs, and agents, but you should avoid changing the `image` field; otherwise the `odin-operator` will not be able to find the correct Docker images to deploy.
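If you rename services or agents, a quick way to confirm you have not also touched the `image` fields is to grep the manifest for them. A minimal sketch, using a trimmed copy of the manifest above (only the `image` lines are reproduced here):

```shell
# Trimmed copy of myfabric.yaml, keeping only the image fields we check.
cat > /tmp/myfabric-images.yaml <<'EOF'
spec:
  services:
  - image: hub.bubbleran.com/orama/services/ui:2025_06a1
  - image: hub.bubbleran.com/orama/services/observability-db:2025_06a1
  agents:
  - image: hub.bubbleran.com/orama/agents/orchestrator:2025_06a1
  - image: hub.bubbleran.com/orama/agents/athena:2025_06a1
EOF

# List every referenced image: all should come from hub.bubbleran.com
# and carry the same release tag.
grep -E '^[[:space:]]*- image:' /tmp/myfabric-images.yaml | awk '{print $3}'
grep -cE 'hub\.bubbleran\.com/.*:2025_06a1' /tmp/myfabric-images.yaml   # 4 expected
```

Run the same grep against your edited `myfabric.yaml` before installing it.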
To create the `AIFabric`, run:

```shell
brc install aifabric myfabric.yaml
```
Next, you can use the UI service to interact with the agents. To do so, SSH to the BubbleRAN cluster with local port forwarding on port 9900:

```shell
ssh -L 9900:localhost:9900 <your-user>@<your-cluster-ip>
```
Then, activate port forwarding for the UI service in Kubernetes:

```shell
kubectl -n trirematics port-forward service/myfabric-ui 9900:9900
```
At this point, you can access the UI by navigating to `http://localhost:9900` in your web browser.
Asking a question in the UI will trigger the `orchestrator-agent`, which will then delegate the task to the appropriate worker agents based on their capabilities. In this case, the only worker agent is the `athena-agent`, which will use the OpenAI API to generate a response. The UI will display the steps taken by the agents while processing the request.
## Example Questions
After deploying a Network and one or more Terminals, you can test the solutions with questions like:
- "What is the TDD configuration of the network?"
- "How many UEs are available?"
- "What is the IMSI of the UE named 'ue1'?"
In general, since the `athena-agent` is based on a RAG architecture, you should try to ask questions containing keywords that are present in the Network/Terminal configuration YAML files.
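Because keyword-based retrieval depends on lexical overlap, you can get a feel for which questions will land by grepping your own configuration files for a question's keywords. A minimal sketch with a hypothetical, simplified network config (the file contents below are illustrative, not the actual Network schema):

```shell
# Hypothetical, simplified network config (illustrative only, not the real schema).
cat > /tmp/network.yaml <<'EOF'
tdd:
  pattern: DDDSU
ue:
- name: ue1
  imsi: "001010000000001"
EOF

# Grep each keyword of a question in the config; the matching lines are the
# material a keyword-based retriever has to work with.
question="What is the IMSI of the UE named ue1"
for word in $question; do
  grep -i "$word" /tmp/network.yaml
done | sort -u
```

Questions whose keywords hit no lines (e.g. pure paraphrases of the config vocabulary) are the ones most likely to get a poor answer.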