AI Network Service
AI-powered network management layer providing Service Management & Orchestration (SMO), intelligent optimization via the RAN Intelligent Controller (RIC), and network automation through rApps (network applications).
Overview
The AI Network service is the brain of the NeuAIs platform, managing thousands of autonomous agents through machine learning and intelligent orchestration.
Architecture
┌─────────────────────────────────────────────────┐
│             SMO Server (Port 8080)              │
│   rApp Manager • Policy Engine • Orchestrator   │
└─────────────────────────────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│             RIC Server (Port 8081)              │
│    ML Engine • Anomaly Detection • Features     │
└─────────────────────────────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│          rApps (Network Applications)           │
│      Anomaly Detector • Traffic Optimizer       │
└─────────────────────────────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────┐
│            Mesh Network (Port 9000)             │
│             Yggdrasil • QUIC • FRP              │
└─────────────────────────────────────────────────┘
Components
SMO (Service Management & Orchestration)
Centralized management and orchestration for the AI network layer.
Features
- rApp lifecycle management
- Policy engine with condition evaluation
- Resource orchestration
- Event bus (Kafka, Redis)
- Mesh network integration
API Endpoints
rApp Management
GET /api/v1/rapps
POST /api/v1/rapps
GET /api/v1/rapps/{id}
DELETE /api/v1/rapps/{id}
PATCH /api/v1/rapps/{id}/status
POST /api/v1/rapps/{id}/heartbeat
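A minimal Go sketch of how an rApp might report liveness through the heartbeat endpoint above. The payload fields and the 30-second interval are assumptions, not part of the documented API; the SMO may accept an empty body.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// sendHeartbeat POSTs a liveness signal for an rApp to the SMO.
// The body shape is illustrative only.
func sendHeartbeat(smoURL, rappID string) error {
    payload, _ := json.Marshal(map[string]interface{}{
        "status":    "running", // assumed field
        "timestamp": time.Now().UTC().Format(time.RFC3339),
    })
    url := fmt.Sprintf("%s/api/v1/rapps/%s/heartbeat", smoURL, rappID)
    resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("heartbeat rejected: %s", resp.Status)
    }
    return nil
}

func main() {
    // Report liveness periodically (interval is an assumption).
    for range time.Tick(30 * time.Second) {
        if err := sendHeartbeat("http://localhost:8080", "anomaly-detector"); err != nil {
            fmt.Println("heartbeat failed:", err)
        }
    }
}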
Policy Management
GET /api/v1/policies
POST /api/v1/policies
GET /api/v1/policies/{id}
DELETE /api/v1/policies/{id}
POST /api/v1/policies/{id}/enable
POST /api/v1/policies/{id}/disable
Event Management
POST /api/v1/events
GET /api/v1/events
POST /api/v1/events/{id}/handle
RIC (RAN Intelligent Controller)
AI-powered network intelligence providing real-time ML inference.
Features
- ML model interface
- Anomaly detection (Isolation Forest)
- Feature extraction
- Inference engine
- Model management
- Training support
API Endpoints
Model Management
GET /api/v1/models
POST /api/v1/models
GET /api/v1/models/{id}
DELETE /api/v1/models/{id}
Inference
POST /api/v1/infer
POST /api/v1/infer/batch
Training
POST /api/v1/train
GET /api/v1/training/{id}/status
rApps Framework
Foundation for building Network Applications that automate network management.
Interface
type RApp interface {
    Initialize(ctx context.Context, config map[string]interface{}) error
    Start(ctx context.Context) error
    Stop(ctx context.Context) error
    ProcessEvent(ctx context.Context, event NetworkEvent) ([]NetworkAction, error)
    GetStatus() RAppStatus
    GetMetrics() RAppMetrics
}
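The framework's NetworkEvent and NetworkAction types are not reproduced in this document. The sketch below infers plausible shapes from the fields used elsewhere in these examples (type, source, severity, priority, title, data, and the action's type/target/operation/parameters/reason); treat the exact field set as an assumption and consult the framework package for the real definitions.

// Inferred shapes only; not the authoritative framework definitions.
type NetworkEvent struct {
    Type     string                 `json:"type"`     // e.g. "anomaly"
    Source   string                 `json:"source"`   // e.g. "anomaly-detector"
    Severity string                 `json:"severity"` // e.g. "high"
    Priority int                    `json:"priority"` // used by ProcessEvent examples below
    Title    string                 `json:"title"`
    Data     map[string]interface{} `json:"data"`
}

type NetworkAction struct {
    Type       string                 `json:"type"`      // e.g. "alert"
    Target     string                 `json:"target"`    // e.g. "admin"
    Operation  string                 `json:"operation"` // e.g. "send_notification"
    Parameters map[string]interface{} `json:"parameters"`
    Reason     string                 `json:"reason"`
}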
Built-in rApps
Anomaly Detection rApp
- Real-time network monitoring
- ML-powered anomaly detection via RIC
- Intelligent alerting with webhooks
- Automatic remediation suggestions
Traffic Optimization rApp
- AI-powered route optimization
- Multi-objective scoring (latency, throughput, cost)
- Automatic route updates via SMO
- Local fallback when the RIC is unavailable
Configuration
Environment Variables
# SMO Configuration
SMO_PORT=8080
DATABASE_URL=postgresql://user:pass@localhost/neuais
REDIS_URL=redis://localhost:6379
KAFKA_BROKERS=localhost:9092
# RIC Configuration
RIC_PORT=8081
MLFLOW_URL=http://localhost:5000
MODEL_PATH=/models
# Mesh Network Integration
MESH_API_ENDPOINT=http://localhost:9000
MESH_API_KEY=your-api-key
Configuration File
config.toml:
[smo]
port = 8080
workers = 4
max_rapps = 100
[ric]
port = 8081
model_cache_size = 1000
inference_timeout = "5s"
[mesh]
endpoint = "http://localhost:9000"
health_check_interval = "30s"
retry_attempts = 3
[events]
backend = "kafka"
kafka_brokers = ["localhost:9092"]
redis_url = "redis://localhost:6379"
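A hedged sketch of loading config.toml with the github.com/BurntSushi/toml package; the struct mirrors the keys above, and durations such as inference_timeout are decoded as strings and parsed explicitly. The actual services may read their configuration differently.

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/BurntSushi/toml"
)

// Config mirrors the sections of config.toml shown above.
type Config struct {
    SMO struct {
        Port     int `toml:"port"`
        Workers  int `toml:"workers"`
        MaxRApps int `toml:"max_rapps"`
    } `toml:"smo"`
    RIC struct {
        Port             int    `toml:"port"`
        ModelCacheSize   int    `toml:"model_cache_size"`
        InferenceTimeout string `toml:"inference_timeout"` // e.g. "5s"
    } `toml:"ric"`
    Mesh struct {
        Endpoint            string `toml:"endpoint"`
        HealthCheckInterval string `toml:"health_check_interval"`
        RetryAttempts       int    `toml:"retry_attempts"`
    } `toml:"mesh"`
    Events struct {
        Backend      string   `toml:"backend"`
        KafkaBrokers []string `toml:"kafka_brokers"`
        RedisURL     string   `toml:"redis_url"`
    } `toml:"events"`
}

func main() {
    var cfg Config
    if _, err := toml.DecodeFile("config.toml", &cfg); err != nil {
        log.Fatal(err)
    }
    timeout, err := time.ParseDuration(cfg.RIC.InferenceTimeout)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("SMO on :%d, RIC on :%d, inference timeout %s\n",
        cfg.SMO.Port, cfg.RIC.Port, timeout)
}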
Usage
Deploy an rApp
curl -X POST http://localhost:8080/api/v1/rapps \
  -H "Content-Type: application/json" \
  -d '{
    "name": "anomaly-detector",
    "type": "anomaly_detection",
    "version": "1.0.0",
    "config": {
      "threshold": 0.8,
      "window_size": 60
    },
    "endpoint": "http://localhost:8082"
  }'
Create a Policy
curl -X POST http://localhost:8080/api/v1/policies \
  -H "Content-Type: application/json" \
  -d '{
    "id": "auto-scale-cpu",
    "name": "Auto Scale on High CPU",
    "type": "auto_scaling",
    "enabled": true,
    "conditions": [
      {
        "metric": "cpu_usage",
        "operator": ">",
        "threshold": 80.0,
        "duration": 300
      }
    ],
    "actions": ["scale_up"]
  }'
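To illustrate how the policy engine's condition evaluation could behave for the policy above, here is a simplified Go sketch: a condition fires only after the metric has stayed past the threshold for the configured duration. This is a conceptual model, not the SMO's actual implementation.

package main

import (
    "fmt"
    "time"
)

// Condition mirrors the policy condition fields used in the example above.
type Condition struct {
    Metric    string
    Operator  string // ">", "<", ">=", "<="
    Threshold float64
    Duration  time.Duration // how long the breach must persist
}

// breachTracker remembers when each metric first started breaching.
type breachTracker struct {
    since map[string]time.Time
}

// Evaluate returns true once the metric has breached the threshold
// continuously for at least cond.Duration.
func (t *breachTracker) Evaluate(cond Condition, value float64, now time.Time) bool {
    breached := false
    switch cond.Operator {
    case ">":
        breached = value > cond.Threshold
    case "<":
        breached = value < cond.Threshold
    case ">=":
        breached = value >= cond.Threshold
    case "<=":
        breached = value <= cond.Threshold
    }
    if !breached {
        delete(t.since, cond.Metric) // breach ended; reset the timer
        return false
    }
    start, ok := t.since[cond.Metric]
    if !ok {
        t.since[cond.Metric] = now // breach just started
        return false
    }
    return now.Sub(start) >= cond.Duration
}

func main() {
    tracker := &breachTracker{since: map[string]time.Time{}}
    cond := Condition{Metric: "cpu_usage", Operator: ">", Threshold: 80.0, Duration: 300 * time.Second}

    now := time.Now()
    fmt.Println(tracker.Evaluate(cond, 85.0, now))                    // false: breach just started
    fmt.Println(tracker.Evaluate(cond, 90.0, now.Add(6*time.Minute))) // true: breached for over 300s
}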
Publish an Event
curl -X POST http://localhost:8080/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "type": "anomaly",
    "source": "anomaly-detector",
    "severity": "high",
    "title": "Network Anomaly Detected",
    "data": {
      "anomaly_score": 0.95,
      "node_id": "node-1"
    }
  }'
Run Inference
curl -X POST http://localhost:8081/api/v1/infer \
  -H "Content-Type: application/json" \
  -d '{
    "model_id": "anomaly-detector",
    "features": {
      "latency": 150.5,
      "packet_loss": 0.02,
      "bandwidth": 1024.0,
      "cpu_usage": 75.0,
      "memory_usage": 60.0
    }
  }'
Response:
{
  "model_id": "anomaly-detector",
  "prediction": {
    "is_anomaly": true,
    "anomaly_score": 0.87,
    "confidence": 0.92
  },
  "inference_time_ms": 12
}
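A small Go client sketch for the /api/v1/infer call and response shown above; the struct fields follow the JSON in this example and should be verified against the RIC's actual schema.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// InferRequest and InferResponse mirror the request/response JSON above.
type InferRequest struct {
    ModelID  string             `json:"model_id"`
    Features map[string]float64 `json:"features"`
}

type InferResponse struct {
    ModelID    string `json:"model_id"`
    Prediction struct {
        IsAnomaly    bool    `json:"is_anomaly"`
        AnomalyScore float64 `json:"anomaly_score"`
        Confidence   float64 `json:"confidence"`
    } `json:"prediction"`
    InferenceTimeMs int `json:"inference_time_ms"`
}

func main() {
    req := InferRequest{
        ModelID: "anomaly-detector",
        Features: map[string]float64{
            "latency": 150.5, "packet_loss": 0.02, "bandwidth": 1024.0,
            "cpu_usage": 75.0, "memory_usage": 60.0,
        },
    }
    body, _ := json.Marshal(req)
    resp, err := http.Post("http://localhost:8081/api/v1/infer", "application/json", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var out InferResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("anomaly=%v score=%.2f confidence=%.2f (%dms)\n",
        out.Prediction.IsAnomaly, out.Prediction.AnomalyScore,
        out.Prediction.Confidence, out.InferenceTimeMs)
}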
Creating Custom rApps
1. Implement the Interface
package myrapps

import (
    "context"

    "github.com/neuais/ai-network/rapps/framework"
)

// MyRApp embeds BaseRApp to inherit the default lifecycle behavior.
type MyRApp struct {
    *framework.BaseRApp
}

func NewMyRApp() *MyRApp {
    base := framework.NewBaseRApp(
        "my-rapp",
        "1.0.0",
        "My custom rApp",
    )
    return &MyRApp{BaseRApp: base}
}

// ProcessEvent turns high-priority events into an alert action for the admin.
func (r *MyRApp) ProcessEvent(ctx context.Context, event framework.NetworkEvent) ([]framework.NetworkAction, error) {
    if event.Priority > 8 {
        return []framework.NetworkAction{
            {
                Type:      "alert",
                Target:    "admin",
                Operation: "send_notification",
                Parameters: map[string]interface{}{
                    "message": event.Data,
                },
                Reason: "High priority event detected",
            },
        }, nil
    }
    return nil, nil
}
2. Register Your rApp
registry := framework.NewRAppRegistry(ricClient)
registry.RegisterFactory("my-rapp", func() framework.RApp {
    return NewMyRApp()
})
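Once registered, an instance runs through the lifecycle defined by the RApp interface. A minimal local-testing sketch follows; in a real deployment the SMO delivers events, and the NetworkEvent literal and config key used here are illustrative assumptions.

package myrapps

import (
    "context"
    "log"

    "github.com/neuais/ai-network/rapps/framework"
)

// run drives one rApp instance through the standard lifecycle by hand.
func run(ctx context.Context) error {
    app := NewMyRApp()
    if err := app.Initialize(ctx, map[string]interface{}{"threshold": 0.8}); err != nil {
        return err
    }
    if err := app.Start(ctx); err != nil {
        return err
    }
    defer app.Stop(ctx)

    // Feed one synthetic high-priority event and inspect the result.
    actions, err := app.ProcessEvent(ctx, framework.NetworkEvent{Priority: 9})
    if err != nil {
        return err
    }
    log.Printf("status=%v, actions=%d", app.GetStatus(), len(actions))
    return nil
}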
3. Deploy
go build -o my-rapp ./cmd/my-rapp
./my-rapp --smo-endpoint http://localhost:8080
Machine Learning Models
Anomaly Detection
Algorithm: Isolation Forest
Features:
- Latency
- Packet loss
- Bandwidth
- CPU usage
- Memory usage
Training:
curl -X POST http://localhost:8081/api/v1/train \
  -H "Content-Type: application/json" \
  -d '{
    "model_type": "isolation_forest",
    "training_data": "s3://bucket/training-data.csv",
    "parameters": {
      "n_estimators": 100,
      "contamination": 0.1
    }
  }'
Traffic Optimization
Algorithm: Multi-objective scoring
Objectives:
- Minimize latency
- Maximize throughput
- Minimize cost
Weights (configurable):
- Latency: 0.4
- Throughput: 0.4
- Cost: 0.2
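As a worked illustration of the weighted scoring above, the sketch below normalizes each objective to [0, 1] and combines them with the default weights. The normalization bounds (500 ms, 10 Gbps, $1/GB) are placeholders; the real rApp's scoring function may differ.

package main

import "fmt"

// RouteMetrics holds per-route measurements considered by the optimizer.
type RouteMetrics struct {
    LatencyMs      float64
    ThroughputMbps float64
    CostPerGB      float64
}

// Default weights from the documentation: latency 0.4, throughput 0.4, cost 0.2.
const (
    wLatency    = 0.4
    wThroughput = 0.4
    wCost       = 0.2
)

// clamp01 keeps a normalized value inside [0, 1].
func clamp01(x float64) float64 {
    if x < 0 {
        return 0
    }
    if x > 1 {
        return 1
    }
    return x
}

// score returns a value in [0, 1]; higher is better. Lower latency and cost
// raise the score, higher throughput raises it.
func score(m RouteMetrics) float64 {
    latency := 1 - clamp01(m.LatencyMs/500.0)
    throughput := clamp01(m.ThroughputMbps/10000.0)
    cost := 1 - clamp01(m.CostPerGB/1.0)
    return wLatency*latency + wThroughput*throughput + wCost*cost
}

func main() {
    a := RouteMetrics{LatencyMs: 150, ThroughputMbps: 1024, CostPerGB: 0.10}
    b := RouteMetrics{LatencyMs: 40, ThroughputMbps: 800, CostPerGB: 0.25}
    fmt.Printf("route A: %.3f, route B: %.3f\n", score(a), score(b))
}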
Monitoring
Metrics
# SMO Metrics
smo_rapps_total
smo_rapps_active
smo_policies_total
smo_policies_triggered
smo_events_processed
smo_actions_executed
# RIC Metrics
ric_models_loaded
ric_inferences_total
ric_inference_duration_seconds
ric_training_jobs_total
ric_model_accuracy
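The metric names above follow Prometheus naming conventions. Below is a sketch of how a service could expose two of them with prometheus/client_golang; the /metrics path and port are assumptions about the deployment, not documented behavior.

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // Counter matching smo_events_processed above.
    eventsProcessed = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "smo_events_processed",
        Help: "Total number of events processed by the SMO.",
    })
    // Histogram matching ric_inference_duration_seconds above.
    inferenceDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
        Name:    "ric_inference_duration_seconds",
        Help:    "Time spent serving a single inference request.",
        Buckets: prometheus.DefBuckets,
    })
)

func main() {
    prometheus.MustRegister(eventsProcessed, inferenceDuration)

    // Record sample observations.
    eventsProcessed.Inc()
    inferenceDuration.Observe(0.012)

    // Expose metrics for scraping (path and port are assumptions).
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":2112", nil))
}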
Health Checks
# SMO Health
curl http://localhost:8080/health
# RIC Health
curl http://localhost:8081/health
Troubleshooting
rApp not starting
Check logs:
curl http://localhost:8080/api/v1/rapps/{id}/logs
Inference errors
Verify model is loaded:
curl http://localhost:8081/api/v1/models
Event bus issues
Check connection:
# Kafka
kafka-console-consumer --bootstrap-server localhost:9092 --topic neuais-events
# Redis
redis-cli SUBSCRIBE neuais:events
Next Steps
- Auth Service - Authentication and authorization
- Compute Service - Agent execution
- Agent Development - Create custom agents