NeuAIs Documentation
Welcome to the NeuAIs Platform documentation. NeuAIs is a micro-agent infrastructure orchestration platform designed to deploy, monitor, and scale thousands of autonomous AI agents.
Quickstart
Get started in 5 minutes
Deploy Your First Agent
Step-by-step deployment guide
Observatory 3D Visualization
Real-time agent monitoring
API Reference
Complete API documentation
What is NeuAIs?
NeuAIs is an enterprise-grade platform for deploying and managing thousands of autonomous AI agents. Built with Rust and Go for performance and reliability.
Quick example
Deploy an agent in seconds:
# Install the CLI
curl -sSL https://install.neuais.com | sh
# Deploy your first agent
neuais agent deploy my-agent \
--runtime rust \
--replicas 10
# Watch it scale
neuais agent logs my-agent --tail 20 --follow
Architecture
The NeuAIs platform consists of three layers working together to provide a complete agent orchestration solution:
User Applications
Dashboard, Admin Console, Observatory 3D Visualization, and CLI for complete control and visibility
Backend Services
Auth, IAM, KMS, Billing, Compute, Storage, Database, and AI Network services
Agent Infrastructure
1000+ micro AI agents handling monitoring, processing, and analysis tasks
┌──────────────────────────────────────────────────┐
│  Dashboard  │  Observatory  │  CLI  │  Admin     │
├──────────────────────────────────────────────────┤
│  Auth  │  IAM  │  KMS  │  Compute                │
│  Billing  │  Storage  │  Database  │  Network    │
├──────────────────────────────────────────────────┤
│              1000+ Micro AI Agents               │
│       (Monitor • Process • Analyze • Scale)      │
└──────────────────────────────────────────────────┘
Key Features
Scale
Deploy from 1 to 1000+ agents with automatic load balancing, health monitoring, and intelligent resource allocation.
Visualize
Real-time 3D visualization of agent networks, message flows, and system telemetry through Observatory.
Secure
Enterprise-grade security with IAM, KMS, fine-grained access control, and encrypted communication.
Monitor
Comprehensive metrics, logs, and distributed traces for every agent and service in your infrastructure.
Cost-Effective
Pay only for what you use. Optimized for efficiency at scale with intelligent resource management.
Production Ready
Built with Rust and Go for maximum performance, reliability, and safety in production environments.
Getting Started
- Install the CLI - Set up your development environment
- Deploy Your First Agent - Launch and configure an agent
- Monitor in Observatory - Visualize your agent network
- Scale to Production - Take it to 1000+ agents
Example: Creating an Agent
Here’s a simple agent that monitors system health:
use neuais_sdk::prelude::*;

#[agent(name = "health-monitor")]
pub struct HealthMonitor {
    interval: Duration,
    threshold: f64,
}

#[async_trait]
impl Agent for HealthMonitor {
    async fn run(&mut self, ctx: &Context) -> Result<()> {
        loop {
            let metrics = ctx.collect_metrics().await?;
            if metrics.cpu_usage > self.threshold {
                ctx.alert("High CPU usage detected").await?;
            }
            tokio::time::sleep(self.interval).await;
        }
    }
}
Deploy it:
neuais agent deploy health-monitor \
--config ./config.toml \
--replicas 5 \
--region us-west-2
Monitor it:
# View real-time logs
neuais logs health-monitor --tail 50 --follow
# Check agent status
neuais agent status health-monitor
# Scale up
neuais agent scale health-monitor --replicas 20
Backend Services
NeuAIs provides a complete suite of backend services:
| Service | Description | Status |
|---|---|---|
| Auth | Authentication and session management | Production |
| IAM | Identity and access management | Production |
| KMS | Key management and encryption | Production |
| Billing | Usage tracking and billing | Production |
| Compute | Agent execution environment | Production |
| Storage | Distributed object storage | Production |
| Database | Managed database services | Production |
| AI Network | Agent communication mesh | Production |
Use Cases
Infrastructure Monitoring
Deploy agents to monitor servers, containers, and cloud resources across your entire infrastructure.
Data Processing
Process streams of data in real-time with agents that scale automatically based on load.
Security Analysis
Analyze logs, network traffic, and system events for security threats and anomalies.
Cost Optimization
Monitor resource usage and automatically optimize cloud spending across providers.
Support
- Documentation: You’re reading it
- GitHub: github.com/neuais/platform
- Issues: github.com/neuais/platform/issues
- Community: Join our discussions
Ready to get started? → Quickstart Guide
Quickstart
Get up and running with NeuAIs in under 5 minutes. This guide will help you deploy your first agent and verify it’s working.
Prerequisites
- Linux, macOS, or WSL2 on Windows
- curl or wget
- 2GB RAM minimum
- Internet connection
Step 1: Install the CLI
Install the NeuAIs CLI tool:
curl -sSL https://install.neuais.com | sh
Verify the installation:
neuais --version
You should see:
neuais 0.1.0
Step 2: Authenticate
Create a free account and authenticate:
neuais auth login
This will open your browser to complete authentication. Once done, verify your login:
neuais auth whoami
Step 3: Deploy Your First Agent
Create a simple monitoring agent:
neuais agent deploy hello-world \
--image neuais/examples:hello-world \
--replicas 1
Check the deployment status:
neuais agent status hello-world
Expected output:
Agent: hello-world
Status: Running
Replicas: 1/1
Region: us-west-2
Uptime: 12s
Step 4: View Logs
Watch your agent’s logs in real-time:
neuais logs hello-world --tail 20 --follow
You should see output like:
[2024-01-15 10:23:45] hello-world-abc123: Agent started
[2024-01-15 10:23:46] hello-world-abc123: Health check: OK
[2024-01-15 10:23:47] hello-world-abc123: Processing tasks...
Press Ctrl+C to stop following logs.
Step 5: Scale Your Agent
Scale up to 5 replicas:
neuais agent scale hello-world --replicas 5
Verify the scaling:
neuais agent status hello-world
Output:
Agent: hello-world
Status: Running
Replicas: 5/5
Region: us-west-2
Uptime: 3m 42s
Step 6: Monitor in Observatory
Open the Observatory dashboard to see your agent in action:
neuais observatory open
This opens a 3D visualization of your agent network at http://localhost:3000.
Step 7: Clean Up
When you’re done, remove the agent:
neuais agent delete hello-world
Confirm the deletion:
neuais agent list
Next Steps
Troubleshooting
CLI not found
If the neuais command is not found after installation, add its install directory to your PATH:
export PATH="$HOME/.neuais/bin:$PATH"
Add this line to your ~/.bashrc or ~/.zshrc to make it permanent.
Authentication failed
If authentication fails, try:
neuais auth logout
neuais auth login
Agent deployment stuck
Check the agent logs for errors:
neuais logs hello-world --tail 50
For more help, see Troubleshooting or file an issue.
Installation
Install the NeuAIs CLI and SDKs on your system. The CLI provides complete control over your agent infrastructure from the command line.
System Requirements
- Operating System: Linux, macOS, or Windows (WSL2)
- Memory: 2GB RAM minimum, 4GB recommended
- Disk: 500MB free space
- Network: Internet connection for initial setup
Install CLI
Quick Install (Recommended)
Use our installation script:
curl -sSL https://install.neuais.com | sh
This script will:
- Detect your operating system and architecture
- Download the latest stable release
- Install to ~/.neuais/bin/
- Add to your PATH automatically
Manual Installation
Linux (x86_64)
wget https://github.com/neuais/cli/releases/latest/download/neuais-linux-amd64.tar.gz
tar -xzf neuais-linux-amd64.tar.gz
sudo mv neuais /usr/local/bin/
macOS (Intel)
wget https://github.com/neuais/cli/releases/latest/download/neuais-darwin-amd64.tar.gz
tar -xzf neuais-darwin-amd64.tar.gz
sudo mv neuais /usr/local/bin/
macOS (Apple Silicon)
wget https://github.com/neuais/cli/releases/latest/download/neuais-darwin-arm64.tar.gz
tar -xzf neuais-darwin-arm64.tar.gz
sudo mv neuais /usr/local/bin/
Windows (WSL2)
Inside WSL2, use the Linux binary:
wget https://github.com/neuais/cli/releases/latest/download/neuais-linux-amd64.tar.gz
tar -xzf neuais-linux-amd64.tar.gz
sudo mv neuais /usr/local/bin/
Verify Installation
neuais --version
Expected output:
neuais 0.1.0
Build: 2024-01-15T10:00:00Z
Commit: abc123def
Install Language SDKs
Rust SDK
Add to your Cargo.toml:
[dependencies]
neuais-sdk = "0.1"
tokio = { version = "1", features = ["full"] }
Example usage:
use neuais_sdk::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let client = NeuaisClient::new()?;
    let agents = client.agents().list().await?;
    println!("Found {} agents", agents.len());
    Ok(())
}
Go SDK
go get github.com/neuais/sdk-go
Example usage:
package main

import (
    "fmt"

    "github.com/neuais/sdk-go/neuais"
)

func main() {
    client := neuais.NewClient()
    agents, err := client.Agents().List()
    if err != nil {
        panic(err)
    }
    fmt.Printf("Found %d agents\n", len(agents))
}
Python SDK
pip install neuais
Example usage:
from neuais import Client
client = Client()
agents = client.agents.list()
print(f"Found {len(agents)} agents")
TypeScript/JavaScript SDK
npm install @neuais/sdk
# or
yarn add @neuais/sdk
Example usage:
import { NeuaisClient } from '@neuais/sdk';
const client = new NeuaisClient();
const agents = await client.agents.list();
console.log(`Found ${agents.length} agents`);
Configuration
Set Up Authentication
Create a free account:
neuais auth signup
Or log in if you have an account:
neuais auth login
This opens your browser for authentication. Once complete, your credentials are stored in ~/.neuais/credentials.
Configuration File
Create ~/.neuais/config.toml:
[default]
region = "us-west-2"
output = "json"
[profile.production]
region = "us-east-1"
output = "table"
endpoint = "https://api.neuais.com"
[profile.development]
region = "local"
output = "json"
endpoint = "http://localhost:8080"
Use profiles:
# Use default profile
neuais agent list
# Use production profile
neuais agent list --profile production
# Use development profile
neuais agent list --profile development
Environment Variables
Configure via environment variables:
# API endpoint
export NEUAIS_ENDPOINT="https://api.neuais.com"
# Authentication token
export NEUAIS_TOKEN="your-token-here"
# Default region
export NEUAIS_REGION="us-west-2"
# Log level (debug, info, warn, error)
export NEUAIS_LOG_LEVEL="info"
# Output format (json, table, yaml)
export NEUAIS_OUTPUT="json"
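A common precedence order, assumed here rather than documented, is: environment variable over config file over built-in default. Sketched in Python:

```python
import os

# Built-in fallbacks (values taken from the examples above)
DEFAULTS = {"endpoint": "https://api.neuais.com", "region": "us-west-2",
            "log_level": "info", "output": "json"}

def effective_setting(key: str, file_config: dict) -> str:
    """Env var (NEUAIS_*) wins over the config file, which wins over defaults."""
    env_key = f"NEUAIS_{key.upper()}"
    return os.environ.get(env_key, file_config.get(key, DEFAULTS[key]))

os.environ["NEUAIS_REGION"] = "eu-west-1"
os.environ.pop("NEUAIS_OUTPUT", None)
print(effective_setting("region", {"region": "us-east-1"}))  # eu-west-1
print(effective_setting("output", {}))                       # json
```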
Shell Completion
Bash
neuais completion bash | sudo tee /etc/bash_completion.d/neuais > /dev/null
Zsh
neuais completion zsh > ~/.zsh/completion/_neuais
Fish
neuais completion fish > ~/.config/fish/completions/neuais.fish
Docker Image
Run the CLI in Docker:
docker pull neuais/cli:latest
docker run --rm -it \
-v ~/.neuais:/root/.neuais \
neuais/cli:latest agent list
Create an alias for convenience:
alias neuais='docker run --rm -it -v ~/.neuais:/root/.neuais neuais/cli:latest'
Upgrading
Upgrade CLI
neuais upgrade
Or reinstall:
curl -sSL https://install.neuais.com | sh
Upgrade SDKs
Rust
cargo update -p neuais-sdk
Go
go get -u github.com/neuais/sdk-go
Python
pip install --upgrade neuais
TypeScript/JavaScript
npm update @neuais/sdk
# or
yarn upgrade @neuais/sdk
Troubleshooting
Command not found
If neuais is not found after installation, add its install directory to your PATH:
export PATH="$HOME/.neuais/bin:$PATH"
Add this line to your shell config:
# For bash
echo 'export PATH="$HOME/.neuais/bin:$PATH"' >> ~/.bashrc
# For zsh
echo 'export PATH="$HOME/.neuais/bin:$PATH"' >> ~/.zshrc
Permission denied
If you get permission errors:
chmod +x ~/.neuais/bin/neuais
SSL/TLS errors
If you encounter SSL certificate errors:
# Update CA certificates (Linux)
sudo apt-get update && sudo apt-get install ca-certificates
# Update CA certificates (macOS)
brew install ca-certificates
Connection refused
Check your network connection and firewall settings. Verify the endpoint:
curl -v https://api.neuais.com/health
Next Steps
Quickstart Guide
Deploy your first agent in 5 minutes
First Deployment
Detailed deployment walkthrough
API Reference
Complete API documentation
CLI Reference
All CLI commands and options
Deploy Your First Agent
Create, deploy, and monitor your first NeuAIs agent in under 10 minutes.
Prerequisites
- NeuAIs CLI installed
- Authenticated account
- Basic knowledge of YAML or TOML
Step 1: Create Agent Definition
Create my-agent.toml:
[agent]
name = "my-first-agent"
version = "1.0.0"
runtime = "rust"
[resources]
cpu = "0.5"
memory = "512Mi"
replicas = 1
[health]
endpoint = "/health"
interval = "30s"
timeout = "5s"
[environment]
LOG_LEVEL = "info"
METRICS_PORT = "9090"
Step 2: Write Agent Code
Create src/main.rs:
use neuais_sdk::prelude::*;

#[agent(name = "my-first-agent")]
pub struct MyAgent {
    counter: u64,
}

#[async_trait]
impl Agent for MyAgent {
    async fn run(&mut self, ctx: &Context) -> Result<()> {
        loop {
            self.counter += 1;
            ctx.log(format!("Tick {}", self.counter)).await?;
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    }

    async fn health(&self) -> HealthStatus {
        HealthStatus::Healthy
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    let agent = MyAgent { counter: 0 };
    agent.start().await
}
Step 3: Build
cargo build --release
Step 4: Deploy
neuais agent deploy \
--config my-agent.toml \
--binary target/release/my-first-agent
Output:
Uploading binary... 100%
Creating agent... done
Starting replicas... 1/1
Agent deployed: my-first-agent
ID: agt_1a2b3c4d5e6f
Status: Running
Endpoint: https://my-first-agent.neuais.app
Step 5: Verify
Check status:
neuais agent status my-first-agent
View logs:
neuais logs my-first-agent --tail 20 --follow
Expected output:
[2024-01-15 10:00:00] Tick 1
[2024-01-15 10:00:05] Tick 2
[2024-01-15 10:00:10] Tick 3
Step 6: Test Health Endpoint
curl https://my-first-agent.neuais.app/health
Response:
{
  "status": "healthy",
  "uptime": "5m 32s",
  "version": "1.0.0"
}
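A deployment script might gate on this response. The sketch below only parses a response body and decides; fetching the URL is left out so the example stays self-contained:

```python
import json

def is_healthy(body: str) -> bool:
    """Treat anything other than status == "healthy" (or unparseable JSON,
    e.g. an HTML error page) as a failure."""
    try:
        return json.loads(body).get("status") == "healthy"
    except json.JSONDecodeError:
        return False

print(is_healthy('{"status": "healthy", "uptime": "5m 32s"}'))  # True
print(is_healthy("<html>502 Bad Gateway</html>"))               # False
```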
Step 7: Scale
Scale to 3 replicas:
neuais agent scale my-first-agent --replicas 3
Step 8: Update
Update agent code and redeploy:
cargo build --release
neuais agent update my-first-agent \
--binary target/release/my-first-agent \
--strategy rolling
Step 9: Monitor
View metrics:
neuais agent metrics my-first-agent
Open Observatory:
neuais observatory --agent my-first-agent
Step 10: Clean Up
Delete agent:
neuais agent delete my-first-agent
Next Steps
Agent Development
Learn advanced agent patterns
Scaling Agents
Auto-scaling and load balancing
Observatory
3D visualization and monitoring
Monitoring Guide
Metrics, logs, and traces
Troubleshooting
Build fails
Ensure Rust toolchain is installed:
rustup --version
cargo --version
Deploy fails
Check authentication:
neuais auth whoami
Agent crashes
View crash logs:
neuais logs my-first-agent --since 1h --level error
Health check fails
Verify health endpoint returns 200:
neuais agent exec my-first-agent -- curl localhost:8080/health
Platform Architecture
NeuAIs is built on a three-layer architecture designed for deploying and managing thousands of autonomous AI agents.
High-Level Overview
┌──────────────────────────────────────────────────┐
│             User Applications Layer              │
│  Dashboard • Admin • Observatory • CLI • Mobile  │
└──────────────────────────────────────────────────┘
                         ↓
┌──────────────────────────────────────────────────┐
│       Backend Services Layer (8 services)        │
│       Auth • IAM • KMS • Billing • Compute       │
│         Storage • Database • AI Network          │
└──────────────────────────────────────────────────┘
                         ↓
┌──────────────────────────────────────────────────┐
│            Agent Infrastructure Layer            │
│              1000+ Micro AI Agents               │
│         RIC • SMO • rApps • Mesh Network         │
└──────────────────────────────────────────────────┘
Layer 1: User Applications
Dashboard
- Tech: Svelte frontend, Rust backend
- Purpose: Primary control plane for agent management
- Features: Agent CRUD, metrics visualization, configuration
- Port: 3000
Admin Portal
- Tech: Next.js, TypeScript
- Purpose: System administration and service configuration
- Features: User management, service health, system monitoring
- Port: 3001
Observatory
- Tech: Rust, WGPU, WebGPU
- Purpose: Real-time 3D visualization of agent networks
- Features: Network topology, performance metrics, agent status
- Port: 8080
CLI Tool
- Tech: Python (Calliope framework)
- Purpose: Command-line interface for all operations
- Features: Agent deployment, log streaming, configuration
Mobile Apps
- Android: Kotlin, Jetpack Compose
- iOS: Swift, SwiftUI
- Features: Agent monitoring, push notifications, quick actions
Layer 2: Backend Services
Service Mesh Architecture
┌─────────────────────────────────────────────────┐
│             API Gateway (Port 8000)             │
│         Load Balancing • Rate Limiting          │
└─────────────────────────────────────────────────┘
                        ↓
┌──────────┬──────────┬──────────┬────────────────┐
│   Auth   │   IAM    │   KMS    │    Billing     │
│  :8001   │  :8002   │  :8003   │     :8004      │
└──────────┴──────────┴──────────┴────────────────┘
┌──────────┬──────────┬──────────┬────────────────┐
│ Compute  │ Storage  │ Database │   AI Network   │
│  :8005   │  :8006   │  :8007   │     :8080      │
└──────────┴──────────┴──────────┴────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│            Service Registry (Consul)            │
│         Auto-discovery • Health Checks          │
└─────────────────────────────────────────────────┘
Auth Service (Port 8001)
- OAuth2, JWT, API keys
- Session management
- MFA support
- Tech: Go
IAM Service (Port 8002)
- Roles and policies
- Resource permissions
- Service accounts
- Tech: Go
KMS Service (Port 8003)
- Key generation
- Encryption/decryption
- Secret storage
- Tech: Go
Billing Service (Port 8004)
- Usage tracking
- Cost calculation
- Invoicing
- Tech: Go
Compute Service (Port 8005)
- Agent execution
- Auto-scaling
- Resource allocation
- Tech: Go
Storage Service (Port 8006)
- S3-compatible object storage
- CDN integration
- Replication
- Tech: Go
Database Service (Port 8007)
- PostgreSQL
- Redis
- Migrations
- Tech: Go
AI Network Service (Port 8080)
- RIC (RAN Intelligent Controller)
- SMO (Service Management & Orchestration)
- rApps framework
- Tech: Go
Layer 3: Agent Infrastructure
AI Network Layer
┌─────────────────────────────────────────────────┐
│              SMO Server (Port 8080)             │
│   rApp Manager • Policy Engine • Orchestrator   │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│              RIC Server (Port 8081)             │
│    ML Engine • Anomaly Detection • Features     │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│          rApps (Network Applications)           │
│      Anomaly Detector • Traffic Optimizer       │
│         Auto-Remediation • Custom rApps         │
└─────────────────────────────────────────────────┘
                        ↓
┌─────────────────────────────────────────────────┐
│            Mesh Network (Port 9000)             │
│       Yggdrasil • QUIC • FRP • Exit Nodes       │
└─────────────────────────────────────────────────┘
RIC (RAN Intelligent Controller)
- ML-powered network intelligence
- Real-time inference
- Anomaly detection (Isolation Forest)
- Feature extraction
- Model management
SMO (Service Management & Orchestration)
- rApp lifecycle management
- Policy enforcement
- Resource orchestration
- Event bus (Kafka, Redis)
rApps Framework
- Standard interface for network applications
- Registry and lifecycle management
- Event processing
- Action generation
Mesh Network
- Yggdrasil: IPv6 overlay network
- QUIC: Low-latency transport
- FRP: Fast reverse proxy for tunneling
- Exit Nodes: Failover and health monitoring
Data Flow
Agent Deployment Flow
1. User → CLI/Dashboard
2. CLI → Auth Service (authenticate)
3. CLI → Compute Service (create agent)
4. Compute → Database (store metadata)
5. Compute → Storage (upload binary)
6. Compute → AI Network (register agent)
7. AI Network → Mesh Network (allocate resources)
8. Mesh Network → Agent (start execution)
9. Agent → Observatory (stream metrics)
Monitoring Flow
1. Agent → Metrics (Prometheus format)
2. Metrics → AI Network (collect)
3. AI Network → RIC (ML inference)
4. RIC → SMO (anomaly detection)
5. SMO → rApps (process events)
6. rApps → SMO (generate actions)
7. SMO → Mesh Network (execute actions)
8. All → Observatory (visualize)
Network Protocols
Yggdrasil
- IPv6 overlay network
- Automatic routing
- End-to-end encryption
- Self-healing topology
QUIC
- UDP-based transport
- Low latency
- Connection migration
- Multiplexing
FRP
- Fast reverse proxy
- Tunneling through NAT
- Load balancing
- Health checks
Storage Architecture
Object Storage
- S3-compatible API
- Multi-region replication
- CDN integration
- Versioning
Database
- PostgreSQL (primary)
- Redis (caching, pub/sub)
- Time-series (metrics)
File System
- Distributed file service
- Agent binaries
- Configuration files
- Logs
Security Architecture
Authentication Flow
User → Auth Service → JWT Token → API Gateway → Services
Authorization Flow
Request → API Gateway → IAM Service → Policy Check → Allow/Deny
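The policy check at the end of that flow boils down to a set lookup with deny-by-default semantics. A Python sketch with hypothetical roles and actions (not the IAM service's real policy model):

```python
# Hypothetical role-to-permission mapping for illustration
ROLE_PERMISSIONS = {
    "viewer": {"agent:read"},
    "operator": {"agent:read", "agent:deploy", "agent:scale"},
    "admin": {"agent:read", "agent:deploy", "agent:scale", "agent:delete"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions fall through to False."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("operator", "agent:scale"))  # True
print(authorize("viewer", "agent:delete"))   # False
```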
Encryption
- At Rest: AES-256 (KMS)
- In Transit: TLS 1.3
- End-to-End: Agent-to-agent encryption via Yggdrasil
Scaling Architecture
Horizontal Scaling
- All services are stateless
- Load balancing via API Gateway
- Auto-scaling based on metrics
Agent Scaling
- Auto-scaling policies
- Resource-based triggers (CPU, memory, custom)
- Health-based scaling
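A resource-based trigger can be sketched as proportional scaling: pick the replica count that would bring average CPU back toward a target. The 70% target and the bounds below are illustrative, not platform defaults:

```python
import math

def desired_replicas(current: int, cpu_pct: float, target_pct: float = 70.0,
                     lo: int = 1, hi: int = 20) -> int:
    """Proportional scaling: replicas needed so average CPU lands near target_pct,
    clamped to [lo, hi]."""
    if cpu_pct <= 0:
        return max(lo, min(current, hi))
    wanted = math.ceil(current * cpu_pct / target_pct)
    return max(lo, min(wanted, hi))

print(desired_replicas(5, 140.0))  # 10  (overloaded: double the fleet)
print(desired_replicas(5, 35.0))   # 3   (half-idle: shrink)
```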
Database Scaling
- Read replicas
- Connection pooling
- Query optimization
Deployment Architecture
Development
Local machine → Docker Compose → All services
Staging
GitHub → CI/CD → Fly.io → Staging environment
Production
GitHub → CI/CD → Multi-region deployment
- US West (primary)
- US East (failover)
- EU (latency optimization)
Observability
Metrics
- Prometheus format
- Custom metrics per agent
- System metrics (CPU, memory, network)
Logs
- Structured JSON logs
- Centralized aggregation
- Real-time streaming
Traces
- Distributed tracing
- OpenTelemetry
- Request correlation
Visualization
- Observatory (3D real-time)
- Grafana (time-series)
- Dashboard (control plane)
Technology Stack
| Layer | Technology |
|---|---|
| Frontend | Svelte, Next.js, React |
| Backend | Go, Rust |
| Database | PostgreSQL, Redis |
| Networking | Yggdrasil, QUIC, FRP |
| ML | Isolation Forest, Custom models |
| Visualization | WGPU, WebGPU |
| CLI | Python (Calliope) |
| Mobile | Kotlin, Swift |
| Infrastructure | Docker, Fly.io |
| Monitoring | Prometheus, Grafana |
Design Principles
- Stateless Services: All services can be restarted without data loss
- Service Discovery: Automatic registration and health checks
- Fault Tolerance: Automatic failover and retry logic
- Observability: Metrics, logs, and traces for everything
- Security: Zero-trust architecture with encryption everywhere
- Scalability: Horizontal scaling for all components
- Performance: Low-latency networking and efficient resource usage
Next Steps
- Components Overview - Detailed component documentation
- Observatory - 3D visualization details
- Services Overview - Backend services documentation
Components
Observatory Platform
Technical overview of the Observatory visualization system
Observatory is the NeuAIs platform’s 3D visualization engine, providing real-time monitoring and interaction with large-scale agent networks.
Architecture
System Design
┌──────────────────────────────────────────────────────┐
│                    User Interface                    │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐  │
│  │   Menu Bar   │ │ Dock System  │ │    Cards     │  │
│  │ (Mac-style)  │ │ (Auto-hide)  │ │  (Floating)  │  │
│  └──────────────┘ └──────────────┘ └──────────────┘  │
└──────────────────────────────────────────────────────┘
                           ↓
┌──────────────────────────────────────────────────────┐
│                 Visualization Engine                 │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐  │
│  │   Three.js   │ │   Particle   │ │    Camera    │  │
│  │    Scene     │ │    System    │ │   Controls   │  │
│  └──────────────┘ └──────────────┘ └──────────────┘  │
└──────────────────────────────────────────────────────┘
                           ↓
┌──────────────────────────────────────────────────────┐
│                      Data Layer                      │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐  │
│  │ Static JSON  │ │  WebSocket   │ │ localStorage │  │
│  │    (Dev)     │ │  API (Prod)  │ │   (Config)   │  │
│  └──────────────┘ └──────────────┘ └──────────────┘  │
└──────────────────────────────────────────────────────┘
Core Components
1. Observatory Core (observatory-core.js)
- Three.js scene, camera, renderer initialization
- WebGL context management
- Animation loop (60 FPS)
- Camera controls (orbit, zoom, pan)
2. Visual Configuration (observatory-visual-config.js)
- Color scheme management
- Shape mappings
- User preferences
- localStorage persistence
3. Data Management (observatory-data.js, observatory-data-live.js)
- Static data definitions
- WebSocket client
- Real-time updates
- Data transformation
4. UI Systems
- Dock (observatory-dock.js) - Auto-hiding bottom toolbar
- Cards (observatory-cards.js) - Floating information panels
- Menu (observatory-menu.js) - Top menu bar
- Context Menu (observatory-context-menu.js) - Right-click actions
5. Node Rendering
- Geometry creation (spheres, cubes, diamonds, pyramids, tori)
- Material configuration (Phong shading, emissive)
- Clustering algorithm
- LOD (Level of Detail) system
6. Particle System
- Bézier curve path generation
- Particle lifecycle management
- Connection inference
- Flow animation
Data Model
Entity Schema
{
  // Required fields
  id: string,            // Unique identifier
  name: string,          // Display name
  status: enum,          // 'active' | 'idle' | 'starting' | 'error'

  // Category (determines cluster)
  category: enum,        // 'agents' | 'services' | 'infrastructure'

  // Optional fields
  cpu: number,           // CPU usage (0-100)
  mem: string,           // Memory usage ('24MB')
  connections: string[], // Array of connected entity IDs

  // Metadata
  metadata: {
    language: string,    // 'go', 'rust', 'python', 'typescript'
    template: string,    // Template type identifier
    env: string,         // 'production', 'staging', 'development'
    version: string,     // Semantic version
    uptime: number       // Seconds since start
  }
}
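A consumer of this schema can validate entities before rendering them. A Python sketch of the checks implied by the schema above:

```python
# Enumerations copied from the schema above
REQUIRED = ("id", "name", "status", "category")
STATUSES = {"active", "idle", "starting", "error"}
CATEGORIES = {"agents", "services", "infrastructure"}

def entity_errors(entity: dict) -> list[str]:
    """Return a list of schema violations; empty means the entity is renderable."""
    errors = [f"missing {f}" for f in REQUIRED if f not in entity]
    if "status" in entity and entity["status"] not in STATUSES:
        errors.append(f"bad status {entity['status']!r}")
    if "category" in entity and entity["category"] not in CATEGORIES:
        errors.append(f"bad category {entity['category']!r}")
    return errors

ok = {"id": "anomaly-detector", "name": "Anomaly Detector",
      "status": "active", "category": "agents"}
print(entity_errors(ok))  # []
print(entity_errors({"id": "x", "status": "zombie"}))
```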
Categories
Agents (AI Workers)
- Position: Left cluster (-20, 0, 0)
- Colors: Green (#22c55e), Yellow (#facc15)
- Shape: Cube
- Count: Typically 9-1000+
Services (Microservices)
- Position: Center cluster (0, 8, 0)
- Colors: Blue (#3b82f6), Purple (#a855f7)
- Shape: Sphere
- Count: Typically 19
Infrastructure (Databases, Caches)
- Position: Right cluster (20, -5, 0)
- Colors: Orange (#f97316), Red (#ef4444)
- Shape: Octahedron (Diamond)
- Count: Typically 3-5
Connections
Connections are defined as arrays of entity IDs:
{
  id: 'anomaly-detector',
  connections: ['metrics-api', 'redis-cache']
}
Connection Inference:
- Type inferred from target entity category
- Particle color based on connection type
- Speed varies by type (cache: fast, database: slow)
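That inference rule can be sketched directly: look up the target's category and derive the particle style from it. The speed values here are made up for illustration:

```python
# Hypothetical entity-id to category index
CATEGORY_BY_ID = {
    "metrics-api": "services",
    "redis-cache": "infrastructure",
    "postgres-main": "infrastructure",
}

# Hypothetical speeds, in fraction-of-path per frame
PARTICLE_SPEED = {"services": 0.010, "infrastructure": 0.006, "agents": 0.008}

def infer_connection(target_id: str) -> dict:
    """Connection style follows the *target* entity's category;
    unknown targets fall back to 'services'."""
    category = CATEGORY_BY_ID.get(target_id, "services")
    return {"target": target_id, "type": category,
            "speed": PARTICLE_SPEED[category]}

print(infer_connection("redis-cache"))
```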
Rendering Pipeline
Initialization
1. Load Data
├─ Fetch from API or load static
├─ Parse and validate
└─ Store in memory
2. Setup Scene
├─ Create THREE.Scene
├─ Add camera (PerspectiveCamera)
├─ Add lights (ambient + directional)
└─ Create renderer (WebGLRenderer)
3. Create Node Groups
├─ Group by category
├─ Calculate cluster positions
└─ Create THREE.Group for each
4. Generate Nodes
├─ For each entity:
│ ├─ Choose geometry (sphere, cube, etc.)
│ ├─ Apply material (color, emissive)
│ ├─ Position within cluster
│ └─ Add to scene
└─ Store node references
5. Create Particles
├─ For each connection:
│ ├─ Calculate Bézier path
│ ├─ Create N particles
│ └─ Set initial positions
└─ Add to scene
6. Initialize UI
├─ Create dock
├─ Setup menu handlers
├─ Initialize cards system
└─ Attach event listeners
7. Start Animation
└─ Begin 60 FPS loop
Animation Loop
Every frame (16.67ms target):
function animate() {
  requestAnimationFrame(animate);

  // 1. Update camera (orbit controls)
  observatoryCore.updateCamera();

  // 2. Animate nodes (pulsing effect)
  nodes.forEach(node => {
    const pulse = Math.sin(time + node.phase);
    const scale = 1 + 0.1 * pulse; // oscillate around normal size rather than collapsing through zero
    node.mesh.scale.set(scale, scale, scale);
    node.mesh.material.emissiveIntensity = 0.4 + pulse * 0.2;
  });

  // 3. Move particles along paths
  particles.forEach(particle => {
    particle.progress += particle.speed;
    if (particle.progress > 1) particle.progress = 0;

    const position = getBezierPoint(
      particle.start,
      particle.mid,
      particle.end,
      particle.progress
    );
    particle.mesh.position.copy(position);
  });

  // 4. Render scene
  observatoryCore.render();
}
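getBezierPoint in the loop above is a quadratic Bézier evaluation. For reference, the math in Python (the control points below reuse the cluster positions given elsewhere on this page):

```python
def bezier_point(p0, p1, p2, t):
    """Quadratic Bézier: B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2, per axis."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

# Arc from the agents cluster to the infrastructure cluster, lifted at the midpoint
start, mid, end = (-20, 0, 0), (0, 12, 0), (20, -5, 0)
print(bezier_point(start, mid, end, 0.0))  # (-20.0, 0.0, 0.0) — at the source
print(bezier_point(start, mid, end, 1.0))  # (20.0, -5.0, 0.0) — at the target
```

A particle's progress field is just t; stepping it from 0 to 1 moves the particle along the arc.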
Visual Configuration System
Modes
1. Category Mode (Default)
- All agents same color
- All services same color
- All infrastructure same color
2. Type Mode
- Each type (postgres, redis, auth, etc.) has unique color
- Fine-grained visual differentiation
- Better for large systems
Shape System
Available Geometries:
{
  'sphere': new THREE.SphereGeometry(size, 16, 16),
  'cube': new THREE.BoxGeometry(size, size, size),
  'octahedron': new THREE.OctahedronGeometry(size),
  'tetrahedron': new THREE.TetrahedronGeometry(size),
  'torus': new THREE.TorusGeometry(size * 0.6, size * 0.3, 8, 16)
}
Performance Characteristics:
| Shape | Vertices | Faces | Performance |
|---|---|---|---|
| Tetrahedron | 12 | 4 | Excellent |
| Octahedron | 24 | 8 | Excellent |
| Cube | 24 | 12 | Excellent |
| Sphere (16) | 289 | 256 | Good |
| Torus | 512+ | 256+ | Fair |
Configuration Persistence
Storage: Browser localStorage
Key: observatory-visual-config
Format: JSON
{
  mode: 'type',
  types: {
    'postgres': {
      color: '#3b82f6',
      shape: 'octahedron',
      label: 'PostgreSQL'
    },
    // ... more types
  },
  categories: {
    'infrastructure': {
      color: '#ef4444',
      shape: 'octahedron'
    },
    // ... more categories
  }
}
Performance Optimization
Scaling Strategies
< 100 Nodes:
- Default settings work well
- All effects enabled
- High quality mode
100-500 Nodes:
- Reduce sphere segments to 8
- Limit particles to 3 per connection
- Enable instanced rendering for identical shapes
500-1000 Nodes:
- Use cubes only (simpler geometry)
- Disable particle flows
- Implement frustum culling
- Reduce emissive intensity calculations
1000+ Nodes:
- Static rendering mode (no animation)
- 2D fallback option
- Virtual scrolling for node list
- Aggressive LOD system
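The tiers above can be encoded as a simple lookup when the scene is built. This Python sketch uses the document's thresholds; the exact particle count for the smallest tier is an assumption:

```python
def render_settings(node_count: int) -> dict:
    """Map a node count to a quality tier (thresholds from the scaling table above)."""
    if node_count < 100:
        return {"shape": "sphere", "sphere_segments": 16,
                "particles_per_connection": 5, "animate": True}
    if node_count <= 500:
        return {"shape": "sphere", "sphere_segments": 8,
                "particles_per_connection": 3, "animate": True}
    if node_count <= 1000:
        # Cubes only, no particle flows
        return {"shape": "cube", "sphere_segments": 0,
                "particles_per_connection": 0, "animate": True}
    # Static rendering mode
    return {"shape": "cube", "sphere_segments": 0,
            "particles_per_connection": 0, "animate": False}

print(render_settings(50)["sphere_segments"])   # 16
print(render_settings(1200)["animate"])         # False
```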
Memory Management
Techniques:
- Object pooling for particles
- Geometry instancing
- Texture atlasing
- Dispose unused geometries
Monitoring:
// Memory usage
console.log(renderer.info.memory);
// Render stats
console.log(renderer.info.render);
GPU Optimization
Best Practices:
- Batch draw calls
- Minimize state changes
- Use BufferGeometry
- Enable hardware acceleration
- Avoid transparent materials where possible
WebSocket API
Connection
const ws = new WebSocket('ws://localhost:8080/ws/observatory');

ws.onopen = () => {
  // Send authentication as the first message
  ws.send(JSON.stringify({
    type: 'auth',
    token: 'your-jwt-token'
  }));
};
Message Protocol
Client → Server:
// Subscribe to updates
{
  type: 'subscribe',
  categories: ['agents', 'services', 'infrastructure']
}

// Unsubscribe
{
  type: 'unsubscribe',
  categories: ['agents']
}

// Request snapshot
{
  type: 'snapshot'
}
Server → Client:
// Initial snapshot
{
  type: 'snapshot',
  timestamp: 1701234567890,
  data: {
    agents: [...],
    services: [...],
    infrastructure: [...]
  }
}

// Update event
{
  type: 'update',
  timestamp: 1701234567890,
  entity: {
    id: 'anomaly-detector',
    status: 'active',
    cpu: 15,
    mem: '28MB'
  }
}

// Delete event
{
  type: 'delete',
  id: 'old-agent-123'
}
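A client keeps a local entity map in sync by dispatching on the message type. A Python sketch of the three handlers (the message shapes match the protocol above):

```python
def handle_message(state: dict, msg: dict) -> dict:
    """Apply one server message to a local entity map keyed by id."""
    kind = msg.get("type")
    if kind == "snapshot":
        # Rebuild the map from all category groups
        state = {e["id"]: e for group in msg["data"].values() for e in group}
    elif kind == "update":
        # Merge partial fields into the existing entity (or create it)
        entity = msg["entity"]
        state.setdefault(entity["id"], {}).update(entity)
    elif kind == "delete":
        state.pop(msg["id"], None)
    return state

state = {}
state = handle_message(state, {"type": "snapshot",
    "data": {"agents": [{"id": "a1", "status": "idle"}], "services": []}})
state = handle_message(state, {"type": "update",
    "entity": {"id": "a1", "status": "active", "cpu": 15}})
print(state["a1"]["status"])  # active
```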
Reconnection Strategy
let reconnectAttempts = 0;
const maxReconnectAttempts = 10;
const baseDelay = 1000; // ms
const maxDelay = 30000; // ms

function reconnect() {
  if (reconnectAttempts >= maxReconnectAttempts) {
    console.error('Max reconnect attempts reached');
    return;
  }
  reconnectAttempts++;

  // Exponential backoff: 1s, 2s, 4s, ... capped at 30s
  const delay = Math.min(baseDelay * Math.pow(2, reconnectAttempts - 1), maxDelay);
  setTimeout(() => {
    console.log(`Reconnecting... (attempt ${reconnectAttempts})`);
    connectWebSocket();
  }, delay);
}
Security
Authentication
JWT Token:
- Sent as the first message after the connection opens (browsers cannot set custom WebSocket headers)
- Validated on server before accepting connection
- Refreshed every 15 minutes
Authorization
Permissions:
- observatory:view - View agents/services
- observatory:control - Start/stop components
- observatory:admin - Full access
Data Protection
Client-Side:
- No sensitive data in localStorage
- Preferences only (colors, shapes)
- JWT in memory only
Network:
- WSS (WebSocket Secure) in production
- TLS 1.3
- Certificate pinning
Browser Compatibility
Supported Browsers
| Browser | Version | Support | Notes |
|---|---|---|---|
| Chrome | 90+ | ✅ Full | Recommended |
| Firefox | 88+ | ✅ Full | Excellent |
| Edge | 90+ | ✅ Full | Chromium-based |
| Safari | 14+ | ⚠️ Partial | Limited WebGL 2.0 |
| Opera | 76+ | ✅ Full | Chromium-based |
Feature Detection
// Check WebGL support
function checkWebGLSupport() {
try {
const canvas = document.createElement('canvas');
return !!(
window.WebGLRenderingContext &&
(canvas.getContext('webgl') || canvas.getContext('experimental-webgl'))
);
} catch (e) {
return false;
}
}
// Check WebGL 2.0
function checkWebGL2Support() {
try {
const canvas = document.createElement('canvas');
return !!canvas.getContext('webgl2');
} catch (e) {
return false;
}
}
Fallbacks
If WebGL unavailable:
- Show 2D canvas view
- List view with filters
- Table view with search
Testing
Unit Tests
Test Files:
- tests/visual-config.test.js
- tests/data-transform.test.js
- tests/particle-system.test.js
Run Tests:
cd neuais.com/hub.neuais.com/observatory.neuais.com
npm test
Performance Tests
Benchmarks:
// Measure rendering performance with stats.js
const stats = new Stats();
document.body.appendChild(stats.dom);
function animate() {
  stats.begin();
  // ... render code ...
  stats.end();
  requestAnimationFrame(animate); // keep the render loop running
}
requestAnimationFrame(animate);
Visual Regression Tests
Tools:
- Percy for screenshot comparison
- Backstop.js for visual diffs
Run:
npm run test:visual
Deployment
Build Process
# No build needed - static files
# Just copy to web server
cp -r neuais.com/hub.neuais.com/observatory.neuais.com /var/www/observatory
CDN Deployment
Files to CDN:
- 3d/skateboard/three.min.js (cached forever)
- css/*.css (versioned)
- js/*.js (versioned)
- assets/ (cached forever)
Cache Headers:
Cache-Control: public, max-age=31536000, immutable # JS/CSS/Assets
Cache-Control: no-cache # HTML
Environment Configuration
Development:
const API_URL = 'ws://localhost:8080/ws';
const DEBUG = true;
Production:
const API_URL = 'wss://api.neuais.com/ws';
const DEBUG = false;
Monitoring
Client-Side Metrics
Track:
- FPS (frames per second)
- Memory usage
- WebSocket reconnections
- Error rate
- User interactions
Send to:
- Google Analytics
- Sentry (errors)
- Custom metrics endpoint
Server-Side Metrics
Track:
- WebSocket connections (active)
- Message rate (messages/sec)
- Update latency (ms)
- Connection duration (sec)
Future Development
Roadmap
v2.0 (Q1 2026):
- WebXR/VR support
- Multi-cluster visualization
- Historical playback
- Advanced filtering
v3.0 (Q2 2026):
- Collaborative features
- AI-powered insights
- Custom plugins
- Mobile app
Next: API Reference →
Prebuilt Environments
NeuAIs ships ready-to-use dev environments so you can launch agents and services in seconds without compiling toolchains on first run.
What you get
- Instant start: Nix-based prebuilts for common stacks (Node/TypeScript, Python, Go, Rust, Java, PHP, Ruby), plus databases and build tools (PostgreSQL, MySQL, Redis, MongoDB, Docker, cmake, vcpkg, etc.).
- Consistency: the same environment for every teammate and CI run.
- Offline-friendly: prebuilts are cached so repeat launches stay fast.
- Template coverage: all catalog templates are backed by prebuilts, so sample apps and starter kits come up immediately.
Typical workflows
- Create a workspace from a template and start coding immediately (no “install Node/Go” delay).
- Spin up backend services or agents locally with the exact toolchain they need.
- Demo or POC environments that start fast and behave the same across machines.
How to use (CLI flow)
- Pick a template (e.g., node, rust, python, fullstack/mysql).
- Create a workspace: neuais workspace create my-app --template node
- The CLI checks the prebuild cache; if available, your environment starts in seconds. If not, it falls back to a build step and caches the result for the next run.
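The cache-or-build step the CLI performs can be sketched as follows. The key derivation (hashing the template and its dependencies) is illustrative, not the CLI's actual scheme:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey derives a prebuild cache key from the template name and its
// dependency list; any change to either produces a new key.
func cacheKey(template string, deps []string) string {
	h := sha256.New()
	h.Write([]byte(template))
	for _, d := range deps {
		h.Write([]byte(d))
	}
	return hex.EncodeToString(h.Sum(nil))[:12]
}

// ensureEnv returns the cached environment if present; otherwise it
// "builds" one and stores the result so the next launch is instant.
func ensureEnv(cache map[string]string, template string, deps []string) (env string, hit bool) {
	key := cacheKey(template, deps)
	if env, ok := cache[key]; ok {
		return env, true // prebuild hit: instant start
	}
	env = "built:" + template // fall back to a full build
	cache[key] = env          // cache for future runs
	return env, false
}

func main() {
	cache := map[string]string{}
	_, hit := ensureEnv(cache, "node", []string{"typescript"})
	fmt.Println(hit) // false: first launch builds
	_, hit = ensureEnv(cache, "node", []string{"typescript"})
	fmt.Println(hit) // true: repeat launch is cached
}
```

This is also why adding a dependency triggers one slower launch: the key changes, so the first run rebuilds and re-caches.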
Tips
- Keep the CLI updated to get the latest template catalog and prebuild hints.
- If you add a new dependency, rerun the CLI to let it cache the updated environment for future launches.
- For CI, point jobs to reuse the shared prebuild cache to shorten pipeline times.
Backend Services Overview
NeuAIs provides a complete suite of backend services to power your agent infrastructure. All services are production-ready, highly available, and designed to scale automatically.
Service Architecture
┌─────────────────────────────────────────────┐
│ API Gateway & Load Balancer │
├─────────────────────────────────────────────┤
│ Auth │ IAM │ KMS │ Billing │ More │
├─────────────────────────────────────────────┤
│ Service Mesh (gRPC + REST) │
├─────────────────────────────────────────────┤
│ Distributed Storage & Databases │
└─────────────────────────────────────────────┘
Core Services
Authentication
OAuth2, JWT, API keys, and session management with support for SSO and MFA.
Learn more →
Billing
Usage tracking, billing, invoicing, and cost analysis for your agent infrastructure.
Learn more →
Compute
Distributed execution environment for agents with auto-scaling and load balancing.
Learn more →
Database
Managed PostgreSQL, Redis, and time-series databases optimized for agent workloads.
Learn more →
AI Network
High-performance message bus for agent-to-agent communication and coordination.
Learn more →
Service Status
All services include built-in health checks, metrics, and monitoring.
| Service | Status | Uptime | Latency |
|---|---|---|---|
| Auth | Operational | 99.99% | 12ms |
| IAM | Operational | 99.99% | 8ms |
| KMS | Operational | 99.98% | 15ms |
| Billing | Operational | 99.95% | 25ms |
| Compute | Operational | 99.97% | 45ms |
| Storage | Operational | 99.99% | 18ms |
| Database | Operational | 99.96% | 22ms |
| AI Network | Operational | 99.98% | 6ms |
Quick Start with Services
Using the Authentication Service
# Get an access token
curl -X POST https://api.neuais.com/v1/auth/token \
-H "Content-Type: application/json" \
-d '{
"email": "user@example.com",
"password": "your-password"
}'
Response:
{
"access_token": "eyJhbGciOiJIUzI1NiIs...",
"token_type": "Bearer",
"expires_in": 3600
}
Using the Storage Service
# Upload a file
curl -X PUT https://storage.neuais.com/my-bucket/file.txt \
-H "Authorization: Bearer $TOKEN" \
-T file.txt
# Download a file
curl https://storage.neuais.com/my-bucket/file.txt \
-H "Authorization: Bearer $TOKEN" \
-o downloaded.txt
Using the Compute Service
use neuais_sdk::compute::*;
#[tokio::main]
async fn main() -> Result<()> {
let client = ComputeClient::new()?;
// Create a task
let task = client
.create_task("my-task")
.image("my-agent:latest")
.replicas(5)
.region("us-west-2")
.send()
.await?;
println!("Task created: {}", task.id);
Ok(())
}
Service Guarantees
Availability
All services are deployed across multiple availability zones with automatic failover:
- 99.99% uptime SLA for Auth, IAM, Storage
- 99.95% uptime SLA for Billing, Compute, Database, AI Network
- 99.9% uptime SLA for KMS
Performance
Target latencies at p99:
- Auth: < 50ms
- IAM: < 30ms
- KMS: < 100ms
- Storage: < 200ms
- Compute: < 500ms
- Database: < 100ms
- AI Network: < 20ms
Security
All services include:
- TLS 1.3 for all connections
- Encryption at rest
- Encryption in transit
- Regular security audits
- Compliance: SOC 2, ISO 27001, GDPR
Service Discovery
Services are automatically discoverable via DNS or the service registry:
# Via DNS
curl https://auth.neuais.com/health
# Via service registry
neuais service discover auth
Monitoring & Observability
All services expose:
- Prometheus metrics at /metrics
- Health checks at /health
- OpenTelemetry traces
- Structured JSON logs
View service metrics in Observatory:
neuais observatory --service auth
Development
Local Development
Run services locally with Docker Compose:
git clone https://github.com/neuais/platform
cd platform/services
docker-compose up -d
Services will be available at:
- Auth: http://localhost:8001
- IAM: http://localhost:8002
- KMS: http://localhost:8003
- Storage: http://localhost:9000
- Database: postgresql://localhost:5432
Testing
Each service includes a comprehensive test suite:
cd services/auth
cargo test --all-features
Further Reading
- Authentication Guide - Complete auth documentation
- IAM Policies - Policy syntax and examples
- KMS Operations - Key management operations
- Billing API - Usage and billing API
- Compute Scheduling - Task scheduling and execution
- Storage API - Object storage operations
- Database Management - Database provisioning and management
- AI Network Protocol - Agent communication protocol
AI Network Service
AI-powered network management layer providing Service Management & Orchestration (SMO), intelligent optimization via RIC, and network automation through rApps.
Overview
The AI Network service is the brain of the NeuAIs platform, managing thousands of autonomous agents through machine learning and intelligent orchestration.
Architecture
┌─────────────────────────────────────────────────┐
│ SMO Server (Port 8080) │
│ rApp Manager • Policy Engine • Orchestrator │
└─────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────┐
│ RIC Server (Port 8081) │
│ ML Engine • Anomaly Detection • Features │
└─────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────┐
│ rApps (Network Applications) │
│ Anomaly Detector • Traffic Optimizer │
└─────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────┐
│ Mesh Network (Port 9000) │
│ Yggdrasil • QUIC • FRP │
└─────────────────────────────────────────────────┘
Components
SMO (Service Management & Orchestration)
Centralized management and orchestration for the AI network layer.
Features
- rApp lifecycle management
- Policy engine with condition evaluation
- Resource orchestration
- Event bus (Kafka, Redis)
- Mesh network integration
API Endpoints
rApp Management
GET /api/v1/rapps
POST /api/v1/rapps
GET /api/v1/rapps/{id}
DELETE /api/v1/rapps/{id}
PATCH /api/v1/rapps/{id}/status
POST /api/v1/rapps/{id}/heartbeat
Policy Management
GET /api/v1/policies
POST /api/v1/policies
GET /api/v1/policies/{id}
DELETE /api/v1/policies/{id}
POST /api/v1/policies/{id}/enable
POST /api/v1/policies/{id}/disable
Event Management
POST /api/v1/events
GET /api/v1/events
POST /api/v1/events/{id}/handle
RIC (RAN Intelligent Controller)
AI-powered network intelligence providing real-time ML inference.
Features
- ML model interface
- Anomaly detection (Isolation Forest)
- Feature extraction
- Inference engine
- Model management
- Training support
API Endpoints
Model Management
GET /api/v1/models
POST /api/v1/models
GET /api/v1/models/{id}
DELETE /api/v1/models/{id}
Inference
POST /api/v1/infer
POST /api/v1/infer/batch
Training
POST /api/v1/train
GET /api/v1/training/{id}/status
rApps Framework
Foundation for building Network Applications that automate network management.
Interface
type RApp interface {
Initialize(ctx context.Context, config map[string]interface{}) error
Start(ctx context.Context) error
Stop(ctx context.Context) error
ProcessEvent(ctx context.Context, event NetworkEvent) ([]NetworkAction, error)
GetStatus() RAppStatus
GetMetrics() RAppMetrics
}
Built-in rApps
Anomaly Detection rApp
- Real-time network monitoring
- ML-powered anomaly detection via RIC
- Intelligent alerting with webhooks
- Automatic remediation suggestions
Traffic Optimization rApp
- AI-powered route optimization
- Multi-objective scoring (latency, throughput, cost)
- Automatic route updates via SMO
- Local fallback when RIC unavailable
Configuration
Environment Variables
# SMO Configuration
SMO_PORT=8080
DATABASE_URL=postgresql://user:pass@localhost/neuais
REDIS_URL=redis://localhost:6379
KAFKA_BROKERS=localhost:9092
# RIC Configuration
RIC_PORT=8081
MLFLOW_URL=http://localhost:5000
MODEL_PATH=/models
# Mesh Network Integration
MESH_API_ENDPOINT=http://localhost:9000
MESH_API_KEY=your-api-key
Configuration File
config.toml:
[smo]
port = 8080
workers = 4
max_rapps = 100
[ric]
port = 8081
model_cache_size = 1000
inference_timeout = "5s"
[mesh]
endpoint = "http://localhost:9000"
health_check_interval = "30s"
retry_attempts = 3
[events]
backend = "kafka"
kafka_brokers = ["localhost:9092"]
redis_url = "redis://localhost:6379"
Usage
Deploy an rApp
curl -X POST http://localhost:8080/api/v1/rapps \
-H "Content-Type: application/json" \
-d '{
"name": "anomaly-detector",
"type": "anomaly_detection",
"version": "1.0.0",
"config": {
"threshold": 0.8,
"window_size": 60
},
"endpoint": "http://localhost:8082"
}'
Create a Policy
curl -X POST http://localhost:8080/api/v1/policies \
-H "Content-Type: application/json" \
-d '{
"id": "auto-scale-cpu",
"name": "Auto Scale on High CPU",
"type": "auto_scaling",
"enabled": true,
"conditions": [
{
"metric": "cpu_usage",
"operator": ">",
"threshold": 80.0,
"duration": 300
}
],
"actions": ["scale_up"]
}'
Publish an Event
curl -X POST http://localhost:8080/api/v1/events \
-H "Content-Type: application/json" \
-d '{
"type": "anomaly",
"source": "anomaly-detector",
"severity": "high",
"title": "Network Anomaly Detected",
"data": {
"anomaly_score": 0.95,
"node_id": "node-1"
}
}'
Run Inference
curl -X POST http://localhost:8081/api/v1/infer \
-H "Content-Type: application/json" \
-d '{
"model_id": "anomaly-detector",
"features": {
"latency": 150.5,
"packet_loss": 0.02,
"bandwidth": 1024.0,
"cpu_usage": 75.0,
"memory_usage": 60.0
}
}'
Response:
{
"model_id": "anomaly-detector",
"prediction": {
"is_anomaly": true,
"anomaly_score": 0.87,
"confidence": 0.92
},
"inference_time_ms": 12
}
Creating Custom rApps
1. Implement the Interface
package myrapps
import (
"context"
"github.com/neuais/ai-network/rapps/framework"
)
type MyRApp struct {
*framework.BaseRApp
}
func NewMyRApp() *MyRApp {
base := framework.NewBaseRApp(
"my-rapp",
"1.0.0",
"My custom rApp",
)
return &MyRApp{BaseRApp: base}
}
func (r *MyRApp) ProcessEvent(ctx context.Context, event framework.NetworkEvent) ([]framework.NetworkAction, error) {
if event.Priority > 8 {
return []framework.NetworkAction{
{
Type: "alert",
Target: "admin",
Operation: "send_notification",
Parameters: map[string]interface{}{
"message": event.Data,
},
Reason: "High priority event detected",
},
}, nil
}
return nil, nil
}
2. Register Your rApp
registry := framework.NewRAppRegistry(ricClient)
registry.RegisterFactory("my-rapp", func() framework.RApp {
return NewMyRApp()
})
3. Deploy
go build -o my-rapp ./cmd/my-rapp
./my-rapp --smo-endpoint http://localhost:8080
Machine Learning Models
Anomaly Detection
Algorithm: Isolation Forest
Features:
- Latency
- Packet loss
- Bandwidth
- CPU usage
- Memory usage
Training:
curl -X POST http://localhost:8081/api/v1/train \
-H "Content-Type: application/json" \
-d '{
"model_type": "isolation_forest",
"training_data": "s3://bucket/training-data.csv",
"parameters": {
"n_estimators": 100,
"contamination": 0.1
}
}'
Traffic Optimization
Algorithm: Multi-objective scoring
Objectives:
- Minimize latency
- Maximize throughput
- Minimize cost
Weights (configurable):
- Latency: 0.4
- Throughput: 0.4
- Cost: 0.2
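With those weights, a route's score can be computed as a weighted sum of normalized objectives. A sketch; the normalization bounds (maxLatency, maxThrough, maxCost) are assumptions for illustration, not values from the service:

```go
package main

import "fmt"

// routeScore combines normalized objectives with the default weights
// above. Latency and cost are "lower is better", so they contribute as
// (1 - normalized value).
func routeScore(latencyMs, throughputMbps, cost float64) float64 {
	const (
		wLatency    = 0.4
		wThroughput = 0.4
		wCost       = 0.2
		maxLatency  = 500.0  // ms, assumed upper bound
		maxThrough  = 2048.0 // Mbps, assumed upper bound
		maxCost     = 1.0    // assumed upper bound
	)
	clamp := func(x float64) float64 {
		if x < 0 {
			return 0
		}
		if x > 1 {
			return 1
		}
		return x
	}
	return wLatency*(1-clamp(latencyMs/maxLatency)) +
		wThroughput*clamp(throughputMbps/maxThrough) +
		wCost*(1-clamp(cost/maxCost))
}

func main() {
	// Metrics from the optimization example in the rApps guide.
	fmt.Printf("%.2f\n", routeScore(45.2, 1024.5, 0.05)) // 0.75
}
```

A route is adopted when its score clears the configured threshold (SCORE_THRESHOLD in the rApp configuration).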
Monitoring
Metrics
# SMO Metrics
smo_rapps_total
smo_rapps_active
smo_policies_total
smo_policies_triggered
smo_events_processed
smo_actions_executed
# RIC Metrics
ric_models_loaded
ric_inferences_total
ric_inference_duration_seconds
ric_training_jobs_total
ric_model_accuracy
Health Checks
# SMO Health
curl http://localhost:8080/health
# RIC Health
curl http://localhost:8081/health
Troubleshooting
rApp not starting
Check logs:
curl http://localhost:8080/api/v1/rapps/{id}/logs
Inference errors
Verify model is loaded:
curl http://localhost:8081/api/v1/models
Event bus issues
Check connection:
# Kafka
kafka-console-consumer --bootstrap-server localhost:9092 --topic neuais-events
# Redis
redis-cli SUBSCRIBE neuais:events
Next Steps
- Auth Service - Authentication and authorization
- Compute Service - Agent execution
- Agent Development - Create custom agents
rApps Development Guide
Complete guide to building Network Applications (rApps) for the NeuAIs AI Network Layer.
Overview
rApps (Network Applications) automate network management tasks using ML-powered intelligence from RIC and orchestration from SMO.
Built-in rApps
1. Anomaly Detection rApp
Monitors network metrics and identifies unusual patterns using ML.
Features
- Real-time network monitoring
- ML-powered anomaly detection
- Intelligent alerting with webhooks
- Automatic remediation suggestions
Configuration
export RIC_ENDPOINT="http://localhost:8081"
export SMO_ENDPOINT="http://localhost:8080"
export ANOMALY_THRESHOLD="0.7"
export CHECK_INTERVAL="60s"
export WEBHOOK_URL="https://alerts.example.com/webhook"
Usage
cd services/ai-network/rapps/anomaly-detector
go run main.go
Alert Example
{
"id": "alert_abc123",
"node_id": "node-1",
"severity": "high",
"anomaly_score": 0.87,
"timestamp": "2024-01-15T10:00:00Z",
"contributing_factors": [
{"metric": "latency", "value": 250.5, "normal_range": "10-100"},
{"metric": "cpu_usage", "value": 95.2, "normal_range": "0-80"}
],
"suggested_actions": ["reroute_traffic", "scale_resources"]
}
2. Traffic Optimization rApp
AI-powered route optimization for network traffic.
Features
- Multi-objective scoring (latency, throughput, cost)
- Automatic route updates via SMO
- Local fallback when RIC unavailable
- Tracks optimization improvements
Configuration
export RIC_ENDPOINT="http://localhost:8081"
export SMO_ENDPOINT="http://localhost:8080"
export OPTIMIZATION_INTERVAL="300s"
export LATENCY_WEIGHT="0.4"
export THROUGHPUT_WEIGHT="0.4"
export COST_WEIGHT="0.2"
export SCORE_THRESHOLD="0.7"
Usage
cd services/ai-network/rapps/traffic-optimizer
go run main.go
Optimization Example
{
"route": {
"source": "node-1",
"destination": "node-5",
"path": ["node-1", "node-3", "node-5"]
},
"metrics": {
"latency": 45.2,
"throughput": 1024.5,
"cost": 0.05
},
"score": 0.85,
"improvement": 0.15
}
Creating Custom rApps
Step 1: Implement the Interface
package myrapps
import (
	"context"
	"time"

	"github.com/neuais/ai-network/rapps/framework"
)
type MyRApp struct {
*framework.BaseRApp
config MyConfig
}
type MyConfig struct {
Threshold float64
Interval time.Duration
}
func NewMyRApp() *MyRApp {
base := framework.NewBaseRApp(
"my-rapp",
"1.0.0",
"Description of my rApp",
)
return &MyRApp{
BaseRApp: base,
config: MyConfig{
Threshold: 0.8,
Interval: 60 * time.Second,
},
}
}
func (r *MyRApp) Initialize(ctx context.Context, config map[string]interface{}) error {
// Parse config
if threshold, ok := config["threshold"].(float64); ok {
r.config.Threshold = threshold
}
// Initialize resources
return nil
}
func (r *MyRApp) Start(ctx context.Context) error {
ticker := time.NewTicker(r.config.Interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
if err := r.process(ctx); err != nil {
return err
}
}
}
}
func (r *MyRApp) ProcessEvent(ctx context.Context, event framework.NetworkEvent) ([]framework.NetworkAction, error) {
// Process network events
if event.Priority > 8 {
return []framework.NetworkAction{
{
Type: "alert",
Target: "admin",
Operation: "send_notification",
Parameters: map[string]interface{}{
"message": event.Data,
},
Reason: "High priority event detected",
},
}, nil
}
return nil, nil
}
func (r *MyRApp) GetStatus() framework.RAppStatus {
return framework.RAppStatus{
State: framework.RAppStateRunning,
Health: framework.RAppHealthHealthy,
Uptime: time.Since(r.StartTime),
}
}
func (r *MyRApp) GetMetrics() framework.RAppMetrics {
return framework.RAppMetrics{
EventsProcessed: r.EventCount,
ActionsGenerated: r.ActionCount,
ErrorCount: r.ErrorCount,
}
}
func (r *MyRApp) Stop(ctx context.Context) error {
// Cleanup resources
return nil
}
func (r *MyRApp) process(ctx context.Context) error {
// Your processing logic
return nil
}
Step 2: Create Main Entry Point
package main
import (
"context"
"log"
"os"
"os/signal"
"syscall"
"github.com/neuais/ai-network/rapps/myrapps"
)
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Create rApp
rapp := myrapps.NewMyRApp()
// Initialize
config := map[string]interface{}{
"threshold": 0.8,
"interval": "60s",
}
if err := rapp.Initialize(ctx, config); err != nil {
log.Fatal(err)
}
// Start
go func() {
if err := rapp.Start(ctx); err != nil {
log.Fatal(err)
}
}()
// Wait for signal
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
<-sigCh
// Stop
if err := rapp.Stop(ctx); err != nil {
log.Fatal(err)
}
}
Step 3: Register with SMO
curl -X POST http://localhost:8080/api/v1/rapps \
-H "Content-Type: application/json" \
-d '{
"name": "my-rapp",
"type": "custom",
"version": "1.0.0",
"endpoint": "http://localhost:8082"
}'
RIC Integration
Making Inference Requests
type RICClient struct {
	endpoint string
	client   *http.Client
}

func (c *RICClient) Infer(modelID string, features map[string]float64) (*InferenceResult, error) {
	body, err := json.Marshal(InferenceRequest{
		ModelID:  modelID,
		Features: features,
	})
	if err != nil {
		return nil, err
	}
	resp, err := c.client.Post(
		c.endpoint+"/api/v1/infer",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var result InferenceResult
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	return &result, nil
}
SMO Integration
Publishing Events
func (r *MyRApp) publishEvent(ctx context.Context, eventType string, data interface{}) error {
event := Event{
Type: eventType,
Source: r.Name,
Severity: "medium",
Timestamp: time.Now(),
Data: data,
}
return r.smo.PublishEvent(ctx, event)
}
Generating Actions
func (r *MyRApp) ProcessEvent(ctx context.Context, event framework.NetworkEvent) ([]framework.NetworkAction, error) {
actions := []framework.NetworkAction{}
if event.Type == "high_latency" {
actions = append(actions, framework.NetworkAction{
Type: "reroute",
Target: event.NodeID,
Operation: "find_alternate_path",
Priority: 8,
Reason: "High latency detected",
})
}
return actions, nil
}
Testing
Unit Tests
func TestMyRApp(t *testing.T) {
rapp := NewMyRApp()
config := map[string]interface{}{
"threshold": 0.7,
}
ctx := context.Background()
if err := rapp.Initialize(ctx, config); err != nil {
t.Fatal(err)
}
event := framework.NetworkEvent{
Type: "test_event",
Priority: 9,
}
actions, err := rapp.ProcessEvent(ctx, event)
if err != nil {
t.Fatal(err)
}
if len(actions) == 0 {
t.Error("Expected actions to be generated")
}
}
Integration Tests
# Start RIC and SMO
docker-compose up -d ric smo
# Start rApp
go run main.go &
# Send test event
curl -X POST http://localhost:8080/api/v1/events \
-H "Content-Type: application/json" \
-d '{
"type": "test_event",
"source": "test",
"data": {"node_id": "node-1"}
}'
# Check actions were generated
curl http://localhost:8080/api/v1/events | jq '.events[] | select(.type == "action_generated")'
Best Practices
- Error Handling: Always return errors, never panic
- Context: Respect context cancellation
- Metrics: Emit metrics for monitoring
- Logging: Use structured logging
- Configuration: Make everything configurable
- Testing: Write unit and integration tests
- Documentation: Document your rApp’s behavior
Next Steps
- AI Network Service - AI Network overview
- Services Overview - All backend services
- Agent Development - Create agents
Dashboard
Primary control plane for managing your agent infrastructure, built with a Svelte frontend and a Rust backend.
Overview
The Dashboard provides a web-based interface for deploying, monitoring, and managing agents. It’s the main entry point for most users.
Features
- Agent CRUD operations
- Real-time metrics visualization
- Configuration management
- Log streaming
- User management
- Billing overview
Architecture
┌─────────────────────────────────────────┐
│ Svelte Frontend (Port 3000) │
│ SvelteKit • TypeScript • Tailwind │
└─────────────────────────────────────────┘
↓ REST API
┌─────────────────────────────────────────┐
│ Rust Backend (Port 8000) │
│ Axum • Tokio • SQLx • Serde │
└─────────────────────────────────────────┘
↓
┌─────────────────────────────────────────┐
│ PostgreSQL Database │
└─────────────────────────────────────────┘
Running
Development
cd neuais.com/hub.neuais.com/dashboard.neuais.com
# Start backend
cd backend
cargo run
# Start frontend (new terminal)
cd frontend
npm run dev
Access at: http://localhost:3000
Production
# Build backend
cd backend
cargo build --release
# Build frontend
cd frontend
npm run build
# Run
./backend/target/release/dashboard-backend &
cd frontend && npm run preview
API Endpoints
Authentication
POST /api/auth/login
POST /api/auth/logout
GET /api/auth/me
Agents
GET /api/agents
POST /api/agents
GET /api/agents/{id}
PUT /api/agents/{id}
DELETE /api/agents/{id}
POST /api/agents/{id}/start
POST /api/agents/{id}/stop
POST /api/agents/{id}/restart
GET /api/agents/{id}/logs
GET /api/agents/{id}/metrics
Users
GET /api/users
POST /api/users
GET /api/users/{id}
PUT /api/users/{id}
DELETE /api/users/{id}
Billing
GET /api/billing/usage
GET /api/billing/invoices
GET /api/billing/payment-methods
Frontend Structure
frontend/
├── src/
│ ├── routes/
│ │ ├── +page.svelte # Home
│ │ ├── agents/
│ │ │ ├── +page.svelte # Agent list
│ │ │ └── [id]/
│ │ │ └── +page.svelte # Agent details
│ │ ├── users/
│ │ └── billing/
│ ├── lib/
│ │ ├── components/
│ │ │ ├── AgentCard.svelte
│ │ │ ├── MetricsChart.svelte
│ │ │ └── LogViewer.svelte
│ │ ├── stores/
│ │ └── api/
│ └── app.html
├── static/
└── package.json
Backend Structure
backend/
├── src/
│ ├── main.rs # Entry point
│ ├── routes/
│ │ ├── agents.rs
│ │ ├── auth.rs
│ │ ├── users.rs
│ │ └── billing.rs
│ ├── models/
│ │ ├── agent.rs
│ │ ├── user.rs
│ │ └── billing.rs
│ ├── db/
│ │ ├── mod.rs
│ │ └── migrations/
│ └── middleware/
│ ├── auth.rs
│ └── cors.rs
└── Cargo.toml
Configuration
backend/config.toml:
[server]
host = "0.0.0.0"
port = 8000
[database]
url = "postgresql://user:pass@localhost/neuais"
max_connections = 10
[auth]
jwt_secret = "your-secret-key"
token_expiry = "24h"
[cors]
allowed_origins = ["http://localhost:3000"]
Database Schema
CREATE TABLE agents (
id UUID PRIMARY KEY,
name VARCHAR(255) NOT NULL,
status VARCHAR(50) NOT NULL,
created_at TIMESTAMP NOT NULL,
updated_at TIMESTAMP NOT NULL
);
CREATE TABLE users (
id UUID PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
role VARCHAR(50) NOT NULL,
created_at TIMESTAMP NOT NULL
);
CREATE TABLE agent_metrics (
id SERIAL PRIMARY KEY,
agent_id UUID REFERENCES agents(id),
cpu_usage FLOAT,
memory_usage BIGINT,
timestamp TIMESTAMP NOT NULL
);
Components
AgentCard
<script lang="ts">
export let agent: Agent;
</script>
<div class="card">
<h3>{agent.name}</h3>
<p>Status: {agent.status}</p>
<p>CPU: {agent.metrics.cpu}%</p>
<p>Memory: {agent.metrics.memory}MB</p>
</div>
MetricsChart
<script lang="ts">
import { onMount } from 'svelte';
export let agentId: string;
let canvas: HTMLCanvasElement;
let metrics = [];
onMount(async () => {
const res = await fetch(`/api/agents/${agentId}/metrics`);
metrics = await res.json();
});
</script>
<canvas bind:this={canvas}></canvas>
Authentication
Login Flow
- User submits credentials
- Backend validates against database
- Backend generates JWT token
- Frontend stores token in localStorage
- Frontend includes token in all API requests
Example
// Login
const response = await fetch('/api/auth/login', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email, password })
});
const { token } = await response.json();
localStorage.setItem('token', token);
// Authenticated request
fetch('/api/agents', {
headers: {
'Authorization': `Bearer ${token}`
}
});
Real-Time Updates
WebSocket Connection
const ws = new WebSocket('ws://localhost:8000/ws');
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
if (update.type === 'agent_status') {
updateAgentStatus(update.agent_id, update.status);
}
};
Deployment
Docker
# Backend
FROM rust:1.75 as builder
WORKDIR /app
COPY backend/ .
RUN cargo build --release
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/dashboard-backend /usr/local/bin/
CMD ["dashboard-backend"]
# Frontend
FROM node:20 as builder
WORKDIR /app
COPY frontend/ .
RUN npm install && npm run build
FROM node:20-slim
COPY --from=builder /app/build /app
CMD ["node", "/app"]
Fly.io
fly.toml:
app = "neuais-dashboard"
[build]
dockerfile = "Dockerfile"
[[services]]
internal_port = 8000
protocol = "tcp"
[[services.ports]]
port = 80
handlers = ["http"]
[[services.ports]]
port = 443
handlers = ["tls", "http"]
Deploy:
fly deploy
Troubleshooting
Backend won’t start
Check database connection:
psql postgresql://user:pass@localhost/neuais
Frontend build fails
Clear node_modules:
rm -rf node_modules package-lock.json
npm install
CORS errors
Update allowed_origins in config.toml
WebSocket disconnects
Check firewall rules and load balancer settings
Next Steps
- Admin Portal - System administration
- Observatory - 3D visualization
- CLI Tool - Command-line interface
Admin Portal
Observatory (3D Visualization)
Real-time 3D visualization of your agent network
Observatory is NeuAIs’ flagship visualization tool - a beautiful, interactive 3D interface for monitoring and managing your entire agent infrastructure.
Overview
Observatory provides real-time visibility into:
- Agents - All AI agents with status, metrics, and connections
- Services - Backend microservices (Auth, Compute, Storage, etc.)
- Infrastructure - Databases, caches, service registries
- Connections - Live data flows between components
- Metrics - CPU, memory, network I/O for all components
Quick Start
Access Observatory
Local Development:
cd neuais.com/hub.neuais.com/observatory.neuais.com
python3 -m http.server 3000
Open: http://localhost:3000/start.html
Production:
Access at: https://observatory.neuais.com
First Time User Flow
- Landing Page - See the welcome screen with animated starfield
- Click “Launch Observatory” - Enter the 3D visualization
- Explore - Drag to rotate, scroll to zoom, click nodes for details
- Dock - Hover at bottom to reveal tools and cards
- Customize - Change colors, shapes, and visual preferences
Interface Elements
Top Menu Bar
View Menu:
- Show Filters
- Show Overview
- Show Metrics
- Full Screen
Window Menu:
- Minimize All Cards
- Restore All Cards
- Close All Cards
Help Menu:
- Documentation
- Keyboard Shortcuts
- About Observatory
Bottom Dock (Auto-Hide)
Hover at the bottom of the screen to reveal:
| Icon | Card | Description |
|---|---|---|
| 📊 | Overview | System summary (agent count, status, health) |
| 📈 | Metrics | Real-time performance metrics |
| 🗺️ | Topology | Connection map and network topology |
| 📝 | Logs | Live system logs streaming |
| 🎛️ | Filters | Toggle categories, particles, rotation |
| 🎨 | Customize | Change colors, shapes, visual theme |
| ⚙️ | Settings | Quality, particle count, zoom speed |
| 📸 | Screenshot | Capture current view as PNG |
| 💻 | Terminal | Execute commands |
| 💬 | Help | Controls and color legend |
3D Visualization
Node Types:
- ● Spheres - Services (blue/purple)
- ■ Cubes - AI Agents (yellow/green)
- ◆ Diamonds - Infrastructure (orange/red)
Interactions:
- Drag - Rotate 3D view
- Scroll - Zoom in/out
- Click Node - Show details card
- Right-Click - Context menu (coming soon)
Visual Effects:
- Pulsing Nodes - Active components breathe/glow
- Particle Flows - Data moving between nodes
- Connection Lines - Relationships between components
- Status Colors - Green (active), Grey (idle), Red (error)
Features
Visual Customization
Access: Dock → Customize Card
Options:
- Color Mode - By category or by type
- Color Picker - Custom colors for each component type
- Shape Selection - Sphere, Cube, Diamond, Pyramid, Torus
- Presets - Save and load custom themes
- Persistence - Settings saved in browser localStorage
Default Color Scheme:
| Category | Color | Hex |
|---|---|---|
| Agents | Green/Yellow | #22c55e / #facc15 |
| Services | Blue/Purple | #3b82f6 / #a855f7 |
| Infrastructure | Orange/Red | #f97316 / #ef4444 |
Filtering & Visibility
Access: Dock → Filters Card
Toggle Options:
- ☑ Show Agents
- ☑ Show Services
- ☑ Show Infrastructure
- ☑ Particle Flows
- ☑ Auto-Rotation
Hide categories to focus on specific parts of your system.
Metrics & Monitoring
Access: Dock → Metrics Card
Live Metrics:
- Avg CPU Usage - Across all components
- Total Memory - Current RAM usage
- Network I/O - Data transfer rate
- Active Connections - Between nodes
- Uptime - System uptime
Updates every 5 seconds (configurable).
Node Details
Click any node to see:
- Name - Component identifier
- Status - Active, Idle, Starting, Error
- Type - Agent, Service, Infrastructure
- ID - Unique identifier
- Metadata - Language, template, environment
- Resources - CPU %, Memory usage
- Connections - List of connected components
- Actions - Start, Stop, View Logs, View Code
System Overview
Access: Dock → Overview Card
Statistics:
- Total Agents (count)
- Total Services (count)
- Infrastructure Components (count)
- Active Components (count)
- Health Score (percentage)
Topology View
Access: Dock → Topology Card
Shows:
- Key connection paths (e.g., SMO → RIC)
- Database connection counts
- Cache usage patterns
- Agent-to-service mappings
Terminal Access
Access: Dock → Terminal Card
Features:
- Execute commands in Observatory context
- View command history
- Auto-completion
- Multi-line input
Available Commands:
help # Show all commands
status # System status
agents # List all agents
services # List all services
start <id> # Start component
stop <id> # Stop component
restart <id> # Restart component
logs <id> # View logs
clear # Clear terminal
Keyboard Shortcuts
| Key | Action |
|---|---|
| Space | Toggle auto-rotation |
| F | Toggle fullscreen |
| Esc | Close active card |
| ? | Show help overlay |
| Drag | Rotate view |
| Scroll | Zoom in/out |
| Click | Select node |
Configuration
Visual Settings
Quality Presets:
- High - Maximum detail, all effects (default)
- Medium - Balanced performance
- Low - Minimal effects, better FPS
Adjustable:
- Particle Count (1-10)
- Zoom Speed (1-10)
- Rotation Speed (1-10)
- Node Detail Level
Data Source
Static Data (Development):
Uses js/observatory-data.js with predefined agents/services
Live API (Production):
Connects to WebSocket at ws://localhost:8080/ws for real-time updates
Browser Support
Recommended:
- Chrome 90+ ✅
- Firefox 88+ ✅
- Edge 90+ ✅
- Safari 14+ ⚠️ (limited WebGL support)
Requirements:
- WebGL 2.0 support
- Modern JavaScript (ES2020+)
- localStorage enabled
- Minimum 1280x720 resolution
Architecture
Technology Stack
Frontend:
- Three.js - 3D rendering engine
- Vanilla JavaScript - No framework dependencies
- CSS3 - Glass morphism, animations
- WebGL 2.0 - Hardware-accelerated graphics
Data Layer:
- Static - js/observatory-data.js (development)
- Dynamic - WebSocket API (production)
- Storage - localStorage for preferences
File Structure
observatory.neuais.com/
├── index.html # Main application
├── start.html # Landing page
├── css/
│ ├── observatory-base.css # Core styles, menu
│ ├── observatory-cards.css # Floating cards
│ ├── observatory-dock.css # Auto-hide dock
│ └── observatory-glass.css # Glass effects
├── js/
│ ├── observatory-core.js # Three.js scene
│ ├── observatory-data.js # Data definitions
│ ├── observatory-visual-config.js # Color/shape config
│ ├── observatory-dock.js # Dock system
│ ├── observatory-cards.js # Card management
│ ├── observatory-menu.js # Menu bar
│ ├── observatory-context-menu.js # Right-click
│ ├── card-templates.js # Card HTML
│ └── observatory-terminal-commands.js # Terminal
├── 3d/
│ └── skateboard/
│ └── three.min.js # Three.js library
└── assets/
├── logo.png # NeuAIs logo
└── icons/ # UI icons (25 PNG files)
Data Model
Agent/Service Object:
{
id: 'anomaly-detector',
name: 'Anomaly Detector',
status: 'active', // active, idle, starting, error
cpu: 12, // CPU usage %
mem: '24MB', // Memory usage
connections: ['metrics-api', 'redis-cache'] // Connected IDs
}
Connection:
Inferred from connections array. Particles flow from source to target along curved Bézier paths.
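The curved paths can be sketched with a quadratic Bézier: the control point is the midpoint of the straight source-to-target line, lifted off it so flows between nearby nodes do not overlap. The lift amount and sampling step below are this sketch's assumptions, not Observatory's actual parameters.

```python
# Sketch of the particle-path math (not Observatory's actual code):
# a quadratic Bezier from source to target, arched through a lifted midpoint.

def bezier_point(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at t in [0, 1]."""
    return tuple(
        (1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
        for a, b, c in zip(p0, p1, p2)
    )

def particle_path(source, target, lift=2.0, steps=20):
    """Sample points along a curved path between two 3D node positions."""
    mid = tuple((s, e) and (s + e) / 2 for s, e in zip(source, target))
    control = (mid[0], mid[1] + lift, mid[2])  # arch upward on the Y axis
    return [bezier_point(source, control, target, i / steps) for i in range(steps + 1)]

path = particle_path((0, 0, 0), (10, 0, 0))
print(path[0], path[10], path[-1])  # starts at source, arches, ends at target
```

A particle's animation then just advances `t` each frame and moves the sprite to `bezier_point(...)`.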
Rendering Pipeline
- Load Data - From static file or API
- Create Nodes - Position in 3D clusters
- Apply Visual Config - Colors, shapes from user prefs
- Create Particles - Flow animations between connections
- Animation Loop - 60 FPS updates (pulsing, particles)
- User Interactions - Click, hover, drag handlers
Performance
Optimization
For 100+ Nodes:
- Node clustering by category
- Instanced meshes for identical shapes
- Particle count limit (5 per connection)
- LOD (Level of Detail) for distant nodes
For 1000+ Nodes:
- Reduce sphere segments (16 → 8)
- Disable particle flows
- Use simpler shapes (cubes only)
- Implement frustum culling
Metrics
Current Performance:
- 31 nodes + 150 particles = 60 FPS (Chrome, M1 MacBook)
- 100 nodes + 500 particles = 45 FPS
- 1000 nodes (no particles) = 30 FPS
Memory Usage:
- Initial load: ~50MB
- With all cards open: ~80MB
- After 1 hour: ~120MB (stable)
Troubleshooting
Issue: Blank Screen
Causes:
- WebGL not supported
- JavaScript errors
- Missing Three.js library
Solutions:
- Check browser console (F12)
- Verify Three.js loaded: run typeof THREE in the console (it should not be "undefined")
- Try different browser (Chrome recommended)
- Disable browser extensions
Issue: Low FPS
Causes:
- Too many nodes/particles
- Integrated graphics
- Other tabs open
Solutions:
- Open Settings card → Lower quality
- Open Filters → Disable particles
- Reduce particle count slider
- Close other browser tabs
Issue: Nodes Not Visible
Causes:
- Filters disabled category
- Nodes positioned off-screen
- Data not loaded
Solutions:
- Open Filters → Enable all categories
- Reset view (refresh page)
- Check console for data errors
Issue: Cards Not Opening
Causes:
- JavaScript error
- Card system not initialized
- Event listener issue
Solutions:
- Check console for errors
- Refresh page
- Try different dock icon
Issue: Customization Not Saving
Causes:
- localStorage disabled
- Private browsing mode
- Storage quota exceeded
Solutions:
- Enable localStorage in browser settings
- Exit private/incognito mode
- Clear old data:
localStorage.clear()
API Integration
WebSocket Protocol
Connect:
const ws = new WebSocket('ws://localhost:8080/ws');
ws.onopen = () => {
console.log('Connected to Observatory API');
};
Subscribe to Updates:
ws.send(JSON.stringify({
type: 'subscribe',
categories: ['agents', 'services', 'infrastructure']
}));
Receive Updates:
ws.onmessage = (event) => {
const update = JSON.parse(event.data);
// Update visualization with new data
};
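The same subscribe/receive flow can also be driven from a script. This sketch assumes the third-party websockets package for Python; the frame format mirrors the JavaScript example above.

```python
# Subscribe to Observatory updates from Python.
# Assumes the third-party "websockets" package (pip install websockets).
import asyncio
import json

def subscribe_message(categories):
    """Build the subscribe frame shown above."""
    return json.dumps({"type": "subscribe", "categories": list(categories)})

async def watch_observatory(uri="ws://localhost:8080/ws"):
    import websockets  # third-party dependency, imported lazily
    async with websockets.connect(uri) as ws:
        await ws.send(subscribe_message(["agents", "services", "infrastructure"]))
        async for message in ws:
            update = json.loads(message)
            print(update)  # apply the update to your own state/visualization

# asyncio.run(watch_observatory())
```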
REST API
Get All Agents:
GET /api/v1/agents
Get Agent Details:
GET /api/v1/agents/{id}
Start Agent:
POST /api/v1/agents/{id}/start
Stop Agent:
POST /api/v1/agents/{id}/stop
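For scripting outside the browser, the four endpoints above can be wrapped in a small stdlib-only client. The base URL and bearer-token header here are assumptions of this sketch, not part of the Observatory spec.

```python
# Minimal client for the REST endpoints above (standard library only).
import json
import urllib.request

class ObservatoryClient:
    def __init__(self, base_url="http://localhost:8080", token=None):
        self.base_url = base_url.rstrip("/")
        self.token = token  # bearer auth is an assumption of this sketch

    def _request(self, method, path):
        req = urllib.request.Request(self.base_url + path, method=method)
        if self.token:
            req.add_header("Authorization", f"Bearer {self.token}")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read() or "null")

    def list_agents(self):
        return self._request("GET", "/api/v1/agents")

    def get_agent(self, agent_id):
        return self._request("GET", f"/api/v1/agents/{agent_id}")

    def start(self, agent_id):
        return self._request("POST", f"/api/v1/agents/{agent_id}/start")

    def stop(self, agent_id):
        return self._request("POST", f"/api/v1/agents/{agent_id}/stop")

# client = ObservatoryClient(token="...")
# client.start("anomaly-detector")
```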
Future Enhancements
Planned Features
Q1 2026:
- ✨ VR mode (WebXR support)
- 🎮 Gamepad navigation
- 🔊 Audio feedback (spatial audio)
- 📱 Mobile-optimized UI
Q2 2026:
- 🌐 Multi-cluster view
- 📊 Historical playback
- 🎥 Record and export animations
- 🤖 AI-powered insights
Q3 2026:
- 🔗 Direct code editing integration
- 📡 Real-time collaboration
- 🎨 Custom themes marketplace
- 📈 Advanced analytics
Community Requests
Vote on features: github.com/neuais/observatory/discussions
Support
Community:
- GitHub Issues: github.com/neuais/observatory/issues
- Discussions: github.com/neuais/observatory/discussions
Contact:
- Email: support@neuais.com
- Twitter: @neuais
Next: Platform Architecture →
CLI Tool
Command-line interface for all NeuAIs operations. Built with Python using the Calliope framework.
Installation
pip install neuais
Or from source:
git clone https://github.com/neuais/cli
cd cli
pip install -e .
Configuration
Config File
~/.neuais/config.toml:
[default]
endpoint = "https://api.neuais.com"
region = "us-west-2"
output = "json"
[profile.staging]
endpoint = "https://staging-api.neuais.com"
region = "us-west-2"
[profile.local]
endpoint = "http://localhost:8080"
region = "local"
Environment Variables
export NEUAIS_ENDPOINT="https://api.neuais.com"
export NEUAIS_TOKEN="your-token"
export NEUAIS_REGION="us-west-2"
export NEUAIS_OUTPUT="json"
Commands
Authentication
# Sign up
neuais auth signup
# Login
neuais auth login
# Logout
neuais auth logout
# Show current user
neuais auth whoami
# Refresh token
neuais auth refresh
Agent Management
# List agents
neuais agent list
neuais agent list --status running
neuais agent list --region us-west-2
# Deploy agent
neuais agent deploy my-agent \
--config agent.toml \
--binary ./target/release/my-agent
# Get agent details
neuais agent get my-agent
neuais agent get my-agent --output yaml
# Update agent
neuais agent update my-agent \
--binary ./target/release/my-agent \
--strategy rolling
# Delete agent
neuais agent delete my-agent
neuais agent delete my-agent --force
# Start/stop agent
neuais agent start my-agent
neuais agent stop my-agent
neuais agent restart my-agent
# Scale agent
neuais agent scale my-agent --replicas 5
# Get agent status
neuais agent status my-agent
Log Management
# Stream logs
neuais logs my-agent
neuais logs my-agent --follow
neuais logs my-agent --tail 100
neuais logs my-agent --since 1h
neuais logs my-agent --level error
# Download logs
neuais logs my-agent --download logs.txt
neuais logs my-agent --since 24h --download daily.log
Metrics
# Get agent metrics
neuais metrics my-agent
neuais metrics my-agent --period 1h
neuais metrics my-agent --metric cpu,memory
# Export metrics
neuais metrics my-agent --export metrics.json
Configuration
# Get config
neuais config get
neuais config get default.endpoint
# Set config
neuais config set default.endpoint https://api.neuais.com
neuais config set default.region us-east-1
# List profiles
neuais config profiles
# Use profile
neuais --profile staging agent list
Service Management
# List services
neuais service list
# Get service status
neuais service status auth
neuais service status --all
# Service logs
neuais service logs auth --tail 50
Observatory
# Open Observatory
neuais observatory
# Open for specific agent
neuais observatory --agent my-agent
# Open for region
neuais observatory --region us-west-2
Output Formats
JSON (default)
neuais agent list --output json
[
{
"id": "agt_1a2b3c",
"name": "my-agent",
"status": "running",
"replicas": 3
}
]
YAML
neuais agent list --output yaml
- id: agt_1a2b3c
name: my-agent
status: running
replicas: 3
Table
neuais agent list --output table
ID NAME STATUS REPLICAS
agt_1a2b3c my-agent running 3
agt_2b3c4d worker-1 stopped 0
Advanced Usage
Scripting
#!/bin/bash
# Deploy multiple agents
for agent in agent-{1..10}; do
neuais agent deploy $agent \
--config configs/$agent.toml \
--binary ./target/release/worker
done
# Wait for all to be running
for agent in agent-{1..10}; do
while [ "$(neuais agent status $agent --output json | jq -r '.status')" != "running" ]; do
sleep 1
done
done
echo "All agents deployed"
Filtering
# Filter by status
neuais agent list --status running
# Filter by region
neuais agent list --region us-west-2
# Filter by tag
neuais agent list --tag env=production
# Combine filters
neuais agent list --status running --region us-west-2 --tag env=prod
Batch Operations
# Stop all agents in region
neuais agent list --region us-west-2 --output json | \
jq -r '.[].id' | \
xargs -I {} neuais agent stop {}
# Scale all agents
neuais agent list --output json | \
jq -r '.[].id' | \
xargs -I {} neuais agent scale {} --replicas 5
Plugins
Installing Plugins
neuais plugin install neuais-plugin-monitoring
neuais plugin install neuais-plugin-backup
Using Plugins
# Monitoring plugin
neuais monitoring dashboard
neuais monitoring alerts
# Backup plugin
neuais backup create my-agent
neuais backup restore my-agent backup-20240115
Shell Completion
Bash
neuais completion bash > /etc/bash_completion.d/neuais
source /etc/bash_completion.d/neuais
Zsh
neuais completion zsh > ~/.zsh/completion/_neuais
Fish
neuais completion fish > ~/.config/fish/completions/neuais.fish
Troubleshooting
Command not found
Add to PATH:
export PATH="$HOME/.local/bin:$PATH"
Authentication failed
Refresh token:
neuais auth logout
neuais auth login
Connection timeout
Check endpoint:
curl -v https://api.neuais.com/health
SSL errors
Update CA certificates:
pip install --upgrade certifi
Development
Source Structure
micro_ai/
├── cli.py # Entry point
├── commands/
│ ├── agent.py
│ ├── auth.py
│ ├── logs.py
│ └── config.py
├── core/
│ ├── client.py
│ ├── config.py
│ └── output.py
└── calliope/
└── cli.py # Calliope framework
Building
python -m build
Testing
pytest tests/
Next Steps
- Dashboard - Web interface
- Admin Portal - System administration
- API Reference - REST API documentation
Mobile Apps
Creating Agents
Learn how to build custom agents in Rust, Go, Python, or TypeScript.
Agent Structure
All agents follow the same basic structure:
- Initialize: Set up resources and connections
- Run: Main execution loop
- Health: Report health status
- Shutdown: Clean up resources
Rust Agent
Setup
[dependencies]
neuais-sdk = "0.1"
tokio = { version = "1", features = ["full"] }
anyhow = "1"
Basic Agent
use neuais_sdk::prelude::*;
use anyhow::Result;
use std::time::Duration;
#[agent(name = "my-rust-agent")]
pub struct MyAgent {
counter: u64,
config: AgentConfig,
}
#[async_trait]
impl Agent for MyAgent {
async fn initialize(config: AgentConfig) -> Result<Self> {
Ok(Self {
counter: 0,
config,
})
}
async fn run(&mut self, ctx: &Context) -> Result<()> {
loop {
self.counter += 1;
ctx.log(format!("Processing: {}", self.counter)).await?;
ctx.emit_metric("counter", self.counter as f64).await?;
tokio::time::sleep(Duration::from_secs(5)).await;
}
}
async fn health(&self) -> HealthStatus {
if self.counter > 0 {
HealthStatus::Healthy
} else {
HealthStatus::Starting
}
}
async fn shutdown(&mut self, ctx: &Context) -> Result<()> {
ctx.log("Shutting down").await?;
Ok(())
}
}
#[tokio::main]
async fn main() -> Result<()> {
let config = AgentConfig::from_env()?;
let agent = MyAgent::initialize(config).await?;
agent.start().await
}
Advanced Features
#![allow(unused)]
fn main() {
// HTTP endpoint
#[endpoint(path = "/status", method = "GET")]
async fn status(&self) -> Response {
json!({
"counter": self.counter,
"uptime": self.uptime()
})
}
// Background task
#[task(interval = "30s")]
async fn cleanup(&mut self, ctx: &Context) -> Result<()> {
ctx.log("Running cleanup").await?;
Ok(())
}
// Event handler
#[event(type = "user.created")]
async fn on_user_created(&mut self, ctx: &Context, event: Event) -> Result<()> {
let user_id = event.data["user_id"].as_str().ok_or_else(|| anyhow::anyhow!("missing user_id"))?;
ctx.log(format!("New user: {}", user_id)).await?;
Ok(())
}
}
Go Agent
Setup
go get github.com/neuais/sdk-go
Basic Agent
package main
import (
"context"
"log"
"time"
"github.com/neuais/sdk-go/neuais"
)
type MyAgent struct {
counter int64
config *neuais.Config
}
func (a *MyAgent) Initialize(config *neuais.Config) error {
a.config = config
a.counter = 0
return nil
}
func (a *MyAgent) Run(ctx context.Context) error {
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
a.counter++
log.Printf("Processing: %d", a.counter)
neuais.EmitMetric("counter", float64(a.counter))
}
}
}
func (a *MyAgent) Health() neuais.HealthStatus {
if a.counter > 0 {
return neuais.HealthStatusHealthy
}
return neuais.HealthStatusStarting
}
func (a *MyAgent) Shutdown() error {
log.Println("Shutting down")
return nil
}
func main() {
config := neuais.LoadConfig()
agent := &MyAgent{}
if err := agent.Initialize(config); err != nil {
log.Fatal(err)
}
if err := neuais.Run(agent); err != nil {
log.Fatal(err)
}
}
Python Agent
Setup
pip install neuais
Basic Agent
from neuais import Agent, Context, HealthStatus
import asyncio
class MyAgent(Agent):
def __init__(self, config):
self.counter = 0
self.config = config
async def run(self, ctx: Context):
while True:
self.counter += 1
await ctx.log(f"Processing: {self.counter}")
await ctx.emit_metric("counter", self.counter)
await asyncio.sleep(5)
async def health(self) -> HealthStatus:
if self.counter > 0:
return HealthStatus.HEALTHY
return HealthStatus.STARTING
async def shutdown(self):
await self.ctx.log("Shutting down")
if __name__ == "__main__":
config = Agent.load_config()
agent = MyAgent(config)
agent.start()
TypeScript Agent
Setup
npm install @neuais/sdk
Basic Agent
import { Agent, Context, HealthStatus } from '@neuais/sdk';
class MyAgent extends Agent {
private counter = 0;
async run(ctx: Context): Promise<void> {
while (true) {
this.counter++;
await ctx.log(`Processing: ${this.counter}`);
await ctx.emitMetric('counter', this.counter);
await new Promise(resolve => setTimeout(resolve, 5000));
}
}
async health(): Promise<HealthStatus> {
return this.counter > 0
? HealthStatus.Healthy
: HealthStatus.Starting;
}
async shutdown(): Promise<void> {
await this.ctx.log('Shutting down');
}
}
const config = Agent.loadConfig();
const agent = new MyAgent(config);
agent.start();
Configuration
Agent Config File
agent.toml:
[agent]
name = "my-agent"
version = "1.0.0"
runtime = "rust"
[resources]
cpu = "1.0"
memory = "1Gi"
replicas = 3
[health]
endpoint = "/health"
interval = "30s"
timeout = "5s"
retries = 3
[scaling]
min_replicas = 1
max_replicas = 10
target_cpu = 70
target_memory = 80
[environment]
LOG_LEVEL = "info"
METRICS_PORT = "9090"
DATABASE_URL = "postgresql://localhost/db"
[endpoints]
"/status" = { method = "GET", public = true }
"/metrics" = { method = "GET", public = false }
Best Practices
Error Handling
#![allow(unused)]
fn main() {
// Good: Return errors
async fn process(&self, ctx: &Context) -> Result<()> {
let data = fetch_data().await?;
process_data(data)?;
Ok(())
}
// Bad: Panic
async fn process(&self, ctx: &Context) {
let data = fetch_data().await.unwrap();
process_data(data).unwrap();
}
}
Logging
#![allow(unused)]
fn main() {
// Structured logging
ctx.log_info("Processing started", json!({
"user_id": user_id,
"batch_size": batch.len()
})).await?;
// Log levels
ctx.log_debug("Debug info").await?;
ctx.log_info("Info message").await?;
ctx.log_warn("Warning").await?;
ctx.log_error("Error occurred").await?;
}
Metrics
#![allow(unused)]
fn main() {
// Counter
ctx.emit_metric("requests_total", 1.0).await?;
// Gauge
ctx.emit_metric("queue_size", queue.len() as f64).await?;
// Histogram
ctx.emit_metric("request_duration_ms", duration.as_millis() as f64).await?;
// With labels
ctx.emit_metric_with_labels(
"requests_total",
1.0,
&[("method", "GET"), ("status", "200")]
).await?;
}
Graceful Shutdown
#![allow(unused)]
fn main() {
async fn run(&mut self, ctx: &Context) -> Result<()> {
loop {
select! {
_ = ctx.shutdown_signal() => {
self.cleanup().await?;
break;
}
result = self.process_batch() => {
result?;
}
}
}
Ok(())
}
}
Testing
Unit Tests
#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_agent_initialization() {
let config = AgentConfig::default();
let agent = MyAgent::initialize(config).await.unwrap();
assert_eq!(agent.counter, 0);
}
#[tokio::test]
async fn test_health_check() {
let agent = create_test_agent();
let status = agent.health().await;
assert_eq!(status, HealthStatus::Healthy);
}
}
}
Integration Tests
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_agent_deployment() {
let client = NeuaisClient::new()?;
let agent_id = client.deploy_agent(
"test-agent",
"./target/release/test-agent"
).await?;
// Wait for agent to start
tokio::time::sleep(Duration::from_secs(5)).await;
let status = client.get_agent_status(&agent_id).await?;
assert_eq!(status, "running");
client.delete_agent(&agent_id).await?;
}
}
Next Steps
- Agent Lifecycle - Understand agent lifecycle
- Deployment - Deploy agents to production
- Scaling - Auto-scaling configuration
- Monitoring - Metrics and observability
Agent Lifecycle
Deployment
Scaling
Monitoring
REST API
Complete REST API reference for the NeuAIs platform.
Base URL
Production: https://api.neuais.com/v1
Staging: https://staging-api.neuais.com/v1
Local: http://localhost:8000/v1
Authentication
All API requests require authentication via JWT token.
Get Token
POST /auth/token
Request:
{
"email": "user@example.com",
"password": "your-password"
}
Response:
{
"access_token": "eyJhbGciOiJIUzI1NiIs...",
"token_type": "Bearer",
"expires_in": 3600
}
Use Token
Include in Authorization header:
curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..." \
https://api.neuais.com/v1/agents
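The same token flow in Python, stdlib only. The endpoint paths come from this reference; the helper names are this sketch's own.

```python
# Fetch a token, then call an authenticated endpoint (standard library only).
import json
import urllib.request

BASE = "https://api.neuais.com/v1"

def token_request(email, password):
    """Build the POST /auth/token request (not yet sent)."""
    body = json.dumps({"email": email, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE}/auth/token", data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )

def get_token(email, password):
    with urllib.request.urlopen(token_request(email, password)) as resp:
        return json.loads(resp.read())["access_token"]

def list_agents(token):
    req = urllib.request.Request(
        f"{BASE}/agents", headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# token = get_token("user@example.com", "your-password")
# print(list_agents(token))
```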
Agents
List Agents
GET /agents
Query Parameters:
- status - Filter by status (running, stopped, error)
- region - Filter by region
- limit - Max results (default: 100)
- offset - Pagination offset
Response:
{
"agents": [
{
"id": "agt_1a2b3c4d",
"name": "my-agent",
"status": "running",
"replicas": 3,
"region": "us-west-2",
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T10:05:00Z"
}
],
"total": 1,
"limit": 100,
"offset": 0
}
Get Agent
GET /agents/{id}
Response:
{
"id": "agt_1a2b3c4d",
"name": "my-agent",
"status": "running",
"replicas": 3,
"region": "us-west-2",
"config": {
"cpu": "1.0",
"memory": "1Gi"
},
"endpoints": [
"https://my-agent.neuais.app"
],
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T10:05:00Z"
}
Create Agent
POST /agents
Request:
{
"name": "my-agent",
"runtime": "rust",
"binary_url": "https://storage.neuais.com/binaries/my-agent",
"config": {
"cpu": "1.0",
"memory": "1Gi",
"replicas": 3
},
"environment": {
"LOG_LEVEL": "info"
}
}
Response:
{
"id": "agt_1a2b3c4d",
"name": "my-agent",
"status": "deploying",
"created_at": "2024-01-15T10:00:00Z"
}
Update Agent
PUT /agents/{id}
Request:
{
"binary_url": "https://storage.neuais.com/binaries/my-agent-v2",
"config": {
"replicas": 5
}
}
Delete Agent
DELETE /agents/{id}
Response:
{
"message": "Agent deleted successfully"
}
Start Agent
POST /agents/{id}/start
Stop Agent
POST /agents/{id}/stop
Restart Agent
POST /agents/{id}/restart
Scale Agent
POST /agents/{id}/scale
Request:
{
"replicas": 5
}
Logs
Get Logs
GET /agents/{id}/logs
Query Parameters:
- tail - Number of lines (default: 100)
- since - Duration (e.g., "1h", "30m")
- level - Filter by level (debug, info, warn, error)
- follow - Stream logs (boolean)
Response:
{
"logs": [
{
"timestamp": "2024-01-15T10:00:00Z",
"level": "info",
"message": "Agent started"
}
]
}
Stream Logs
GET /agents/{id}/logs?follow=true
Returns Server-Sent Events (SSE) stream.
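The SSE stream can be consumed with a few lines of parsing. The sketch below handles only the data: fields of the event-stream format; a dedicated SSE client library may be preferable in production.

```python
# Minimal SSE parser for the follow=true log stream: accumulate "data:"
# lines until a blank line terminates the event, then decode the JSON.
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from an iterable of SSE text lines."""
    data = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            yield json.loads("\n".join(data))
            data = []

stream = [
    'data: {"timestamp": "2024-01-15T10:00:00Z", "level": "info", "message": "Agent started"}',
    "",
]
for event in iter_sse_events(stream):
    print(event["level"], event["message"])
```

In practice `lines` would be the decoded body of the HTTP response, read incrementally.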
Metrics
Get Metrics
GET /agents/{id}/metrics
Query Parameters:
- period - Time period (1h, 24h, 7d)
- metric - Specific metric (cpu, memory, network)
Response:
{
"metrics": {
"cpu": [
{"timestamp": "2024-01-15T10:00:00Z", "value": 45.2},
{"timestamp": "2024-01-15T10:01:00Z", "value": 47.1}
],
"memory": [
{"timestamp": "2024-01-15T10:00:00Z", "value": 512},
{"timestamp": "2024-01-15T10:01:00Z", "value": 524}
]
}
}
Services
List Services
GET /services
Response:
{
"services": [
{
"name": "auth",
"status": "healthy",
"version": "1.0.0",
"uptime": "5d 12h 30m"
}
]
}
Get Service Status
GET /services/{name}/status
Users
List Users
GET /users
Get User
GET /users/{id}
Create User
POST /users
Request:
{
"email": "user@example.com",
"password": "secure-password",
"role": "developer"
}
Update User
PUT /users/{id}
Delete User
DELETE /users/{id}
Billing
Get Usage
GET /billing/usage
Query Parameters:
- period - Time period (current, last_month)
Response:
{
"period": "2024-01",
"usage": {
"compute_hours": 1000,
"storage_gb": 500,
"network_gb": 100
},
"cost": {
"compute": 50.00,
"storage": 10.00,
"network": 5.00,
"total": 65.00
}
}
Get Invoices
GET /billing/invoices
Response:
{
"invoices": [
{
"id": "inv_1a2b3c",
"period": "2024-01",
"amount": 65.00,
"status": "paid",
"due_date": "2024-02-01"
}
]
}
Error Responses
All errors follow this format:
{
"error": {
"code": "invalid_request",
"message": "Invalid agent configuration",
"details": {
"field": "replicas",
"reason": "must be greater than 0"
}
}
}
Error Codes
| Code | HTTP Status | Description |
|---|---|---|
| invalid_request | 400 | Invalid request parameters |
| unauthorized | 401 | Missing or invalid authentication |
| forbidden | 403 | Insufficient permissions |
| not_found | 404 | Resource not found |
| conflict | 409 | Resource conflict |
| rate_limit_exceeded | 429 | Too many requests |
| internal_error | 500 | Internal server error |
| service_unavailable | 503 | Service temporarily unavailable |
Rate Limiting
API requests are rate limited:
- Free tier: 100 requests/minute
- Pro tier: 1000 requests/minute
- Enterprise: Custom limits
Rate limit headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1704067200
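One way to honor these headers is to compute the wait from X-RateLimit-Reset (a Unix epoch) after a 429 response. The retry policy itself is a sketch, not something the API prescribes.

```python
# Compute how long to sleep before retrying after a 429, from the
# X-RateLimit-Reset header (Unix epoch seconds).
import time

def seconds_until_reset(headers, now=None):
    """Seconds to wait before retrying; 0 if the window already reset."""
    now = time.time() if now is None else now
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - now)

# With the header values shown above, ten seconds before the reset:
wait = seconds_until_reset({"X-RateLimit-Reset": "1704067200"}, now=1704067190)
```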
Pagination
List endpoints support pagination:
GET /agents?limit=50&offset=100
Response includes pagination metadata:
{
"agents": [...],
"total": 500,
"limit": 50,
"offset": 100,
"has_more": true
}
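A client can walk the full listing by advancing offset until has_more is false. In this sketch, fetch_page stands in for the actual HTTP call and is a hypothetical helper.

```python
# Iterate every item from a limit/offset-paginated endpoint.
# fetch_page(limit=..., offset=...) is a stand-in for the real HTTP call.

def iter_all(fetch_page, limit=50):
    """Yield every agent across all pages."""
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page["agents"]
        if not page.get("has_more"):
            break
        offset += limit

# Demo against canned pages instead of a live API:
pages = [
    {"agents": [{"id": "agt_1"}], "has_more": True},
    {"agents": [{"id": "agt_2"}], "has_more": False},
]
fake_fetch = lambda limit, offset: pages[offset // limit]
print([a["id"] for a in iter_all(fake_fetch)])
```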
Filtering
Use query parameters for filtering:
GET /agents?status=running&region=us-west-2
Sorting
Use sort parameter:
GET /agents?sort=created_at:desc
Field Selection
Use fields parameter:
GET /agents?fields=id,name,status
Webhooks
Create Webhook
POST /webhooks
Request:
{
"url": "https://example.com/webhook",
"events": ["agent.created", "agent.stopped"],
"secret": "your-webhook-secret"
}
Webhook Events
- agent.created
- agent.updated
- agent.deleted
- agent.started
- agent.stopped
- agent.error
Webhook Payload
{
"event": "agent.created",
"timestamp": "2024-01-15T10:00:00Z",
"data": {
"agent_id": "agt_1a2b3c4d",
"name": "my-agent"
}
}
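A receiver will typically verify deliveries with the shared secret. The scheme below (HMAC-SHA256 over the raw body, hex-encoded, compared in constant time) is an assumption of this sketch; the API only specifies that a secret is configured, and the signature header name would come from the delivery itself.

```python
# Verify a webhook delivery against the configured secret.
# HMAC-SHA256 over the raw request body is assumed, not specified.
import hashlib
import hmac

def verify_signature(secret, body, signature):
    """Constant-time check of a hex-encoded HMAC-SHA256 signature."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"event": "agent.created"}'
sig = hmac.new(b"your-webhook-secret", body, hashlib.sha256).hexdigest()
print(verify_signature("your-webhook-secret", body, sig))  # True
```

Verify against the raw bytes of the request body, before any JSON parsing, so re-serialization differences cannot break the check.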
SDK Examples
Rust
#![allow(unused)]
fn main() {
use neuais_sdk::Client;
let client = Client::new("your-token")?;
let agents = client.agents().list().await?;
}
Go
client := neuais.NewClient("your-token")
agents, err := client.Agents().List()
Python
from neuais import Client
client = Client("your-token")
agents = client.agents.list()
TypeScript
import { NeuaisClient } from '@neuais/sdk';
const client = new NeuaisClient('your-token');
const agents = await client.agents.list();
Next Steps
- gRPC API - gRPC API reference
- WebSocket API - Real-time WebSocket API
- SDKs - Language-specific SDKs