Build a Network Automation Lab in Docker with Cisco NSO¶
Build a full enterprise network automation stack on a single server using Docker. By the end of this guide, you'll have Cisco NSO managing 8 simulated network devices across two vendors, a project management board, and a monitoring dashboard — all running in containers.
Video
This guide is a companion to the YouTube video. Watch that first for the walkthrough, then come back here to build it yourself.
What You'll Build¶
| Component | Purpose | Port |
|---|---|---|
| Cisco NSO 6.4.x | Network automation orchestrator | 8080, 2024 |
| 8 Netsim Devices | Simulated Cisco IOS, IOS-XR, and Juniper Junos devices | — |
| Taiga | Project management / ticketing (Jira alternative) | 9000 |
| Grafana | Monitoring dashboards | 3050 |
| Prometheus | Metrics collection | 9090 |
| NSO Exporter | Custom metrics scraper for NSO | 9110 |
Network Topology:
            ┌──────────┐
            │ Internet │
            └─────┬────┘
                  │
        ┌─────────┴─────────┐
        │                   │
  ┌─────┴─────┐       ┌─────┴─────┐
  │ bdr-rtr0  │       │ bdr-rtr1  │
  │ Cisco IOS │       │ Cisco IOS │
  └─────┬─────┘       └─────┬─────┘
        │                   │
  ┌─────┴─────┐       ┌─────┴─────┐
  │ dist-rtr0 │       │ dist-rtr1 │
  │ Cisco XR  │       │ Cisco XR  │
  └──┬─────┬──┘       └──┬─────┬──┘
     │     │             │     │
    ┌┘     └──┐          │     └───┐
┌───┴───┐ ┌───┴───┐  ┌───┴───┐ ┌───┴───┐
│acc-sw0│ │acc-sw1│  │acc-sw2│ │acc-sw3│
│ Junos │ │ Junos │  │ Junos │ │ Junos │
└───────┘ └───────┘  └───────┘ └───────┘
Prerequisites¶
- A Linux server with Docker and Docker Compose (tested on Ubuntu 22.04, 8GB+ RAM)
- A free Cisco DevNet account
- ~10GB disk space
- Basic familiarity with Docker and networking concepts
Step 1: Download NSO from Cisco DevNet¶
- Go to developer.cisco.com
- Sign in with your DevNet account (free)
- Download:
    - NSO installer: nso-6.4.x.linux.x86_64.signed.bin
    - NED packages:
        - cisco-ios (for IOS CLI)
        - cisco-iosxr (for IOS-XR CLI)
        - juniper-junos (for Junos NETCONF)
- Create a project directory and drop the files in:
mkdir -p ~/NetAutoLab/nsofiles
# Move your downloaded files here
mv nso-6.4.*.signed.bin ~/NetAutoLab/nsofiles/
mv ncs-*-cisco-ios-*.signed.bin ~/NetAutoLab/nsofiles/
mv ncs-*-cisco-iosxr-*.signed.bin ~/NetAutoLab/nsofiles/
mv ncs-*-juniper-junos-*.signed.bin ~/NetAutoLab/nsofiles/
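Before moving on, it's worth confirming that all four files actually landed. A quick Python sketch — the glob patterns are assumptions based on typical DevNet file naming, so adjust them to match your versions:

```python
# Sanity check: report which of the four expected downloads are missing.
# The glob patterns are assumptions based on common DevNet file naming.
from pathlib import Path

EXPECTED = {
    "nso":   "nso-*.signed.bin",
    "ios":   "ncs-*-cisco-ios-*.signed.bin",
    "iosxr": "ncs-*-cisco-iosxr-*.signed.bin",
    "junos": "ncs-*-juniper-junos-*.signed.bin",
}

def missing_downloads(directory):
    """Return the keys whose pattern matches no file in `directory`."""
    d = Path(directory).expanduser()
    return [name for name, pattern in EXPECTED.items() if not list(d.glob(pattern))]

# missing_downloads("~/NetAutoLab/nsofiles") should return [] once all four files are in place
```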
Step 2: Extract Installers¶
The signed bins contain the actual installers inside. Extract them:
cd ~/NetAutoLab/nsofiles
# Extract NSO installer
sh nso-6.4.*.linux.x86_64.signed.bin --skip-verification
# Create NED directory
mkdir -p ~/NetAutoLab/nso/neds
# Extract each NED (adjust filenames to match your versions)
cd /tmp
sh ~/NetAutoLab/nsofiles/ncs-*-cisco-ios-*.signed.bin --skip-verification
mv ncs-*-cisco-ios-*.tar.gz ~/NetAutoLab/nso/neds/
sh ~/NetAutoLab/nsofiles/ncs-*-cisco-iosxr-*.signed.bin --skip-verification
mv ncs-*-cisco-iosxr-*.tar.gz ~/NetAutoLab/nso/neds/
sh ~/NetAutoLab/nsofiles/ncs-*-juniper-junos-*.signed.bin --skip-verification
mv ncs-*-juniper-junos-*.tar.gz ~/NetAutoLab/nso/neds/
Copy the extracted NSO installer to the build context:
cp ~/NetAutoLab/nsofiles/nso-6.4.*.installer.bin ~/NetAutoLab/nso/
Step 3: Create the NSO Dockerfile¶
cat > ~/NetAutoLab/nso/Dockerfile << 'EOF'
FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive

# Dependencies for NSO
RUN apt-get update && apt-get install -y \
        openjdk-17-jdk-headless \
        ant \
        make \
        python3 \
        python3-pip \
        openssh-client \
        libxml2-utils \
        xsltproc \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Copy NSO installer — local install (gives us ncs-setup)
COPY nso-6.4.3.linux.x86_64.installer.bin /tmp/nso-installer.bin
RUN chmod +x /tmp/nso-installer.bin \
    && /tmp/nso-installer.bin --local-install /opt/ncs \
    && rm /tmp/nso-installer.bin

# Set up environment
ENV NCS_DIR=/opt/ncs
ENV PATH="/opt/ncs/bin:${PATH}"
ENV PYTHONPATH="/opt/ncs/src/ncs/pyapi:${PYTHONPATH}"

# Copy and extract NED packages
COPY neds/ /tmp/neds/
RUN mkdir -p /opt/ncs/packages/neds \
    && for ned in /tmp/neds/*.tar.gz; do \
           tar xzf "$ned" -C /opt/ncs/packages/neds/; \
       done \
    && rm -rf /tmp/neds

# Copy netsim init script
COPY netsim-init.sh /opt/netsim-init.sh
RUN chmod +x /opt/netsim-init.sh

# Expose ports
EXPOSE 2024 8080 8888 830

WORKDIR /var/opt/ncs
CMD ["/bin/bash", "-c", "source /opt/ncs/ncsrc && /opt/netsim-init.sh && ncs --foreground -v"]
EOF
Installer filename
Update the COPY line to match your exact NSO installer filename.
Step 4: Create the Netsim Init Script¶
This script creates 8 simulated devices and sets up the NSO runtime:
cat > ~/NetAutoLab/nso/netsim-init.sh << 'INITEOF'
#!/bin/bash
source /opt/ncs/ncsrc

NETSIM_DIR="/var/opt/ncs/netsim"
NCS_RUN="/var/opt/ncs"

# Find NED directories
IOS_NED=$(ls -d /opt/ncs/packages/neds/cisco-ios-cli-* 2>/dev/null | head -1)
XR_NED=$(ls -d /opt/ncs/packages/neds/cisco-iosxr-cli-* 2>/dev/null | head -1)
JUNOS_NED=$(ls -d /opt/ncs/packages/neds/juniper-junos-nc-* 2>/dev/null | head -1)

echo "=== NEDs Found ==="
echo "IOS:   $IOS_NED"
echo "XR:    $XR_NED"
echo "Junos: $JUNOS_NED"

# Create netsim devices if not already done
if [ ! -d "$NETSIM_DIR" ]; then
    echo "=== Creating netsim devices ==="
    [ -n "$IOS_NED" ]   && ncs-netsim create-network "$IOS_NED" 2 bdr-rtr --dir "$NETSIM_DIR"
    [ -n "$XR_NED" ]    && ncs-netsim add-to-network "$XR_NED" 2 dist-rtr --dir "$NETSIM_DIR"
    [ -n "$JUNOS_NED" ] && ncs-netsim add-to-network "$JUNOS_NED" 4 acc-sw --dir "$NETSIM_DIR"
fi

# Start netsims
echo "=== Starting netsim devices ==="
ncs-netsim start --dir "$NETSIM_DIR" 2>/dev/null || ncs-netsim restart --dir "$NETSIM_DIR"

# Set up NCS run directory if not done
if [ ! -f "$NCS_RUN/ncs-cdb/devsetup_complete" ]; then
    echo "=== Setting up NCS ==="
    ncs-setup --netsim-dir "$NETSIM_DIR" --dest "$NCS_RUN"
    touch "$NCS_RUN/ncs-cdb/devsetup_complete"
fi

echo "=== NetAutoLab ready ==="
INITEOF
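For reference, here's the inventory those three ncs-netsim commands produce — netsim numbers devices from 0, appending the index to the prefix you pass in. A small sketch of the naming scheme:

```python
# Device names produced by the create-network / add-to-network calls above:
# ncs-netsim appends 0..N-1 to the given prefix.
PLAN = [("bdr-rtr", 2), ("dist-rtr", 2), ("acc-sw", 4)]

def expected_devices(plan):
    return [f"{prefix}{i}" for prefix, count in plan for i in range(count)]

print(expected_devices(PLAN))
```

These eight names are what you should later see in NSO's device list when verifying the lab.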
Step 5: Create the Monitoring Stack¶
Prometheus config¶
mkdir -p ~/NetAutoLab/monitoring/dashboards
cat > ~/NetAutoLab/monitoring/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "nso-exporter"
    static_configs:
      - targets: ["nso-exporter:9110"]
    scrape_interval: 30s

  - job_name: "grafana"
    static_configs:
      - targets: ["grafana:3000"]
EOF
NSO Prometheus Exporter¶
cat > ~/NetAutoLab/monitoring/Dockerfile.exporter << 'EOF'
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir prometheus_client requests
COPY nso-exporter.py /app/nso-exporter.py
EXPOSE 9110
CMD ["python3", "/app/nso-exporter.py"]
EOF
cat > ~/NetAutoLab/monitoring/nso-exporter.py << 'PYEOF'
"""NSO Prometheus Exporter — scrapes RESTCONF, exposes metrics."""
import os, time, requests
from requests.auth import HTTPBasicAuth
from prometheus_client import start_http_server, Gauge

NSO_HOST = os.environ.get("NSO_HOST", "nso")
NSO_PORT = os.environ.get("NSO_PORT", "8080")
NSO_USER = os.environ.get("NSO_USER", "admin")
NSO_PASS = os.environ.get("NSO_PASS", "admin")
BASE_URL = f"http://{NSO_HOST}:{NSO_PORT}/restconf"
AUTH = HTTPBasicAuth(NSO_USER, NSO_PASS)
HEADERS = {"Accept": "application/yang-data+json"}

device_count = Gauge("nso_device_count", "Total managed devices")
device_sync = Gauge("nso_device_sync_status", "Sync status per device", ["device"])
alarm_count = Gauge("nso_alarm_count", "Active alarms")
nso_up = Gauge("nso_up", "NSO reachable (1=up)")

def fetch(path):
    try:
        r = requests.get(f"{BASE_URL}/{path}", auth=AUTH, headers=HEADERS, timeout=10)
        return r.json() if r.status_code == 200 else None
    except requests.RequestException:
        return None

def collect():
    data = fetch("data/tailf-ncs:devices/device")
    if not data:
        nso_up.set(0)
        return
    nso_up.set(1)
    devices = data.get("tailf-ncs:device", [])
    device_count.set(len(devices))
    for d in devices:
        # Placeholder: marks every device in sync. A real check would
        # invoke NSO's check-sync action per device.
        device_sync.labels(device=d["name"]).set(1)
    alarms = fetch("data/tailf-ncs:alarms/alarm-list")
    if alarms:
        alarm_count.set(alarms.get("tailf-ncs:alarm-list", {}).get("number-of-alarms", 0))

if __name__ == "__main__":
    print("NSO Exporter on :9110")
    start_http_server(9110)
    while True:
        try:
            collect()
        except Exception as e:
            print(f"Error: {e}")
            nso_up.set(0)
        time.sleep(30)
PYEOF
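To see what collect() actually does with a RESTCONF response, here's the same parsing logic run against a hand-written sample payload (the field values are illustrative — a real response carries many more leaves per device):

```python
# Illustrative sample of the RESTCONF device-list payload the exporter reads.
SAMPLE = {
    "tailf-ncs:device": [
        {"name": "bdr-rtr0", "address": "127.0.0.1"},
        {"name": "acc-sw0",  "address": "127.0.0.1"},
    ]
}

def summarize(payload):
    """Mirror the exporter's gauge logic: device count plus per-device names."""
    devices = (payload or {}).get("tailf-ncs:device", [])
    return len(devices), [d["name"] for d in devices]

count, names = summarize(SAMPLE)
print(count, names)  # 2 ['bdr-rtr0', 'acc-sw0']
```

In the lab, the same logic sets nso_device_count to 8 and creates one nso_device_sync_status series per device.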
Step 6: Taiga Reverse Proxy¶
mkdir -p ~/NetAutoLab/taiga
cat > ~/NetAutoLab/taiga/nginx.conf << 'EOF'
server {
    listen 80;
    client_max_body_size 50M;

    location / {
        proxy_pass http://taiga-front:80;
        proxy_set_header Host $host;
    }

    location /api {
        proxy_pass http://taiga-back:8000/api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /admin {
        proxy_pass http://taiga-back:8000/admin;
        proxy_set_header Host $host;
    }

    location /media  { alias /taiga-back/media; }
    location /static { alias /taiga-back/static; }
}
EOF
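The routing in that config is just longest-prefix matching: nginx picks the longest matching location prefix and hands the request to that upstream. A simplified sketch (real nginx also supports exact and regex locations, which this config doesn't use):

```python
# Simplified model of nginx prefix-location matching for the config above:
# the longest matching location prefix wins.
ROUTES = {
    "/":       "taiga-front:80",
    "/api":    "taiga-back:8000",
    "/admin":  "taiga-back:8000",
    "/media":  "static files (alias)",
    "/static": "static files (alias)",
}

def upstream_for(path):
    best = max((p for p in ROUTES if path.startswith(p)), key=len)
    return ROUTES[best]

print(upstream_for("/api/v1/projects"))  # taiga-back:8000
print(upstream_for("/dashboard"))        # taiga-front:80
```

This is why the catch-all location / can coexist with the API routes: /api/v1/projects matches both "/" and "/api", and the longer prefix wins.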
Step 7: Docker Compose¶
This is the full stack — 9 containers in one file. Save it as ~/NetAutoLab/docker-compose.yml:
services:
  # === Cisco NSO + Netsims ===
  nso:
    build:
      context: ./nso
    container_name: netautolab-nso
    ports:
      - "2024:2024"   # SSH CLI
      - "8080:8080"   # RESTCONF / Web UI
      - "8888:8888"   # JSON-RPC
    volumes:
      - nso-data:/var/opt/ncs
      - nso-logs:/var/log/ncs
    restart: unless-stopped
    networks: [netautolab]

  # === Taiga (Ticketing) ===
  taiga-db:
    image: postgres:15-alpine
    container_name: netautolab-taiga-db
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: changeme123
    volumes: [taiga-db:/var/lib/postgresql/data]
    restart: unless-stopped
    networks: [netautolab]

  taiga-rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: netautolab-taiga-mq
    environment:
      RABBITMQ_DEFAULT_USER: taiga
      RABBITMQ_DEFAULT_PASS: changeme123
    restart: unless-stopped
    networks: [netautolab]

  taiga-back:
    image: taigaio/taiga-back:latest
    container_name: netautolab-taiga-back
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: changeme123
      POSTGRES_HOST: taiga-db
      TAIGA_SECRET_KEY: your-secret-key-here
      TAIGA_SITES_DOMAIN: "YOUR_SERVER_IP:9000"
      TAIGA_SITES_SCHEME: "http"
      RABBITMQ_USER: taiga
      RABBITMQ_PASS: changeme123
      EVENTS_PUSH_BACKEND: "rabbitmq"
      EVENTS_PUSH_BACKEND_URL: "amqp://taiga:changeme123@taiga-rabbitmq:5672/taiga"
      CELERY_BROKER_URL: "amqp://taiga:changeme123@taiga-rabbitmq:5672/taiga"
      ENABLE_TELEMETRY: "False"
    volumes:
      - taiga-media:/taiga-back/media
      - taiga-static:/taiga-back/static
    depends_on: [taiga-db, taiga-rabbitmq]
    restart: unless-stopped
    networks: [netautolab]

  taiga-front:
    image: taigaio/taiga-front:latest
    container_name: netautolab-taiga-front
    environment:
      TAIGA_URL: "http://YOUR_SERVER_IP:9000"
    restart: unless-stopped
    networks: [netautolab]

  taiga-gateway:
    image: nginx:alpine
    container_name: netautolab-taiga-gw
    ports: ["9000:80"]
    volumes:
      - ./taiga/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - taiga-media:/taiga-back/media:ro
      - taiga-static:/taiga-back/static:ro
    depends_on: [taiga-back, taiga-front]
    restart: unless-stopped
    networks: [netautolab]

  # === Monitoring ===
  prometheus:
    image: prom/prometheus:latest
    container_name: netautolab-prometheus
    ports: ["9090:9090"]
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    restart: unless-stopped
    networks: [netautolab]

  grafana:
    image: grafana/grafana:latest
    container_name: netautolab-grafana
    ports: ["3050:3000"]
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: changeme123
    volumes: [grafana-data:/var/lib/grafana]
    depends_on: [prometheus]
    restart: unless-stopped
    networks: [netautolab]

  nso-exporter:
    build:
      context: ./monitoring
      dockerfile: Dockerfile.exporter
    container_name: netautolab-nso-exporter
    environment:
      NSO_HOST: nso
      NSO_PORT: "8080"
      NSO_USER: admin
      NSO_PASS: admin
    ports: ["9110:9110"]
    depends_on: [nso]
    restart: unless-stopped
    networks: [netautolab]

volumes:
  nso-data:
  nso-logs:
  taiga-db:
  taiga-media:
  taiga-static:
  prometheus-data:
  grafana-data:

networks:
  netautolab:
    driver: bridge
Replace placeholders
Replace YOUR_SERVER_IP with your server's IP address and changeme123 with real passwords.
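With this many published ports, it's easy to collide with something already listening on the host when you extend the stack. A small sketch to check for duplicate host ports — the mapping below is copied from the compose file:

```python
# Host ports published by the compose file above, per service.
PUBLISHED = {
    "nso":           [2024, 8080, 8888],
    "taiga-gateway": [9000],
    "prometheus":    [9090],
    "grafana":       [3050],
    "nso-exporter":  [9110],
}

def port_conflicts(published):
    """Return (port, service_a, service_b) tuples for any duplicated host port."""
    seen, conflicts = {}, []
    for service, ports in published.items():
        for port in ports:
            if port in seen:
                conflicts.append((port, seen[port], service))
            seen[port] = service
    return conflicts

print(port_conflicts(PUBLISHED))  # [] — no clashes in this stack
```

If you add your own services, extend PUBLISHED and re-run the check before docker compose up.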
Step 8: Build and Launch¶
cd ~/NetAutoLab
# Build NSO and exporter images
docker compose build
# Pull public images
docker compose pull
# Launch everything
docker compose up -d
# Check status
docker compose ps
The NSO container takes about 30 seconds to start (netsim creation + NCS boot). Watch the logs:
docker logs -f netautolab-nso
Wait for NCS started vsn: 6.4.x before proceeding.
Step 9: Post-Launch Setup¶
Create the RabbitMQ vhost for Taiga¶
docker exec netautolab-taiga-mq rabbitmqctl add_vhost taiga
docker exec netautolab-taiga-mq rabbitmqctl set_permissions -p taiga taiga '.*' '.*' '.*'
Set Taiga admin password¶
docker exec netautolab-taiga-back python manage.py shell -c "
from django.contrib.auth import get_user_model
User = get_user_model()
u = User.objects.get(username='admin')
u.set_password('YourPasswordHere')
u.is_active = True
u.save()
print('Done')
"
Sync NSO devices¶
curl -s -u admin:admin -X POST \
http://localhost:8080/restconf/data/tailf-ncs:devices/sync-from \
-H 'Accept: application/yang-data+json'
Add Prometheus to Grafana¶
curl -s -X POST http://localhost:3050/api/datasources \
-H "Content-Type: application/json" \
-u admin:changeme123 \
-d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}'
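If you'd rather script this bootstrap than paste curl commands, the same call can be built with Python's standard library. Host, port, and credentials below match the compose file defaults; change them if you changed yours:

```python
# Build the Grafana "add datasource" request with stdlib urllib —
# equivalent to the curl command above.
import base64
import json
import urllib.request

def datasource_request(host="localhost", user="admin", password="changeme123"):
    payload = {
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://prometheus:9090",   # Docker DNS name, not localhost
        "access": "proxy",
        "isDefault": True,
    }
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"http://{host}:3050/api/datasources",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )

# urllib.request.urlopen(datasource_request()) performs the POST against a running Grafana
```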
Step 10: Verify Everything¶
Check devices via RESTCONF¶
curl -s -u admin:admin \
http://localhost:8080/restconf/data/tailf-ncs:devices/device \
-H 'Accept: application/yang-data+json' | \
python3 -c "
import json, sys
for d in json.load(sys.stdin)['tailf-ncs:device']:
    print(f\" {d['name']}\")"
Expected output (devices are listed in key order):
 acc-sw0
 acc-sw1
 acc-sw2
 acc-sw3
 bdr-rtr0
 bdr-rtr1
 dist-rtr0
 dist-rtr1
Check Prometheus metrics¶
curl -s http://localhost:9110/metrics | grep "^nso_"
Expected (among others):
nso_up 1.0
nso_device_count 8.0
Access the UIs¶
| Service | URL | Credentials |
|---|---|---|
| NSO Web UI / RESTCONF | http://YOUR_IP:8080 | admin / admin |
| NSO CLI (SSH) | ssh admin@YOUR_IP -p 2024 | admin / admin |
| Taiga | http://YOUR_IP:9000 | admin / (what you set) |
| Grafana | http://YOUR_IP:3050 | admin / changeme123 |
| Prometheus | http://YOUR_IP:9090 | — |
What's Next¶
Now that your lab is running:
- Push configs — Stage realistic interface and routing configs via RESTCONF
- Build service packages — Use ncs-make-package to create automation services
- Connect AI — Wire Claude Code's NSO MCP server to manage devices with natural language
- Break things — Push bad configs, practice rollbacks, build muscle memory
Check the other guides in these docs for service package tutorials and advanced demos.
Troubleshooting¶
NSO container keeps restarting
Check logs: docker logs netautolab-nso. A common cause is a stale or missing ncs-cdb directory in the nso-data volume. Fix: docker compose down, remove the nso-data volume (docker volume ls to find its full name, then docker volume rm), and bring the stack up again.
Taiga returns 500 errors
Usually a RabbitMQ vhost issue. Make sure you ran the vhost creation commands in Step 9.
Grafana shows 'No data'
Verify the Prometheus datasource URL is http://prometheus:9090 (not localhost — containers use Docker DNS).
How much RAM does this need?
The full stack uses about 5-6 GB. A server with 8GB+ is recommended.
Built with MkDocs Material. Guide by PrimeNetwork.