# 🚀 Definitive Demo Session Architecture — High Fidelity, Low Latency

## Executive Summary

This document specifies the **complete technical requirements** for Bakery-IA's **hyper-realistic, deterministic technical demo** system, based on the project's current real implementation.

**Primary goal:** every session must simulate a live production environment (with interrelated, coherent, and contextually believable data) **with no dependency on batch infrastructure (Jobs, CronJobs, external scripts)**.

**Key characteristics:**
- ✅ **Instant creation (5–15 s)** via parallel HTTP calls
- ✅ **Fully reproducible**, with cross-service integrity guarantees
- ✅ **Dynamic temporal data** anchored to the session creation time
- ✅ **70% less code** than the previous Kubernetes Jobs architecture
- ✅ **3-6x faster** than the previous approach
---

## 📋 Table of Contents

1. [Phase 0: Analysis and Alignment with Database Models](#fase-0)
2. [Microservices Architecture](#arquitectura)
3. [Cross-Service Integrity Guarantee](#integridad)
4. [Temporal Determinism](#determinismo)
5. [Base Data Model (SSOT)](#ssot)
6. [Orchestrator Seed State](#orquestador)
7. [Session Cleanup](#limpieza)
8. [Demo Scenarios](#escenarios)
9. [Technical Verification](#verificacion)

---
<a name="fase-0"></a>
## 🔍 PHASE 0: ANALYSIS AND ALIGNMENT WITH REAL DATABASE MODELS

### 📌 Objective

Derive **exact, up-to-date data schemas** for each service from its **production database models**, and use them as the *validation contract* for the demo JSON files.

> ✨ **Principle**: *Demo data must be structurally acceptable to the ORMs/services exactly as-is, with no ad-hoc transformations and no suppressed constraints.*

### ✅ Mandatory Activities

#### 1. Extracting the Source-of-Truth Models

For each service involved in cloning, extract the real models from:

**Existing model files:**
```
/services/tenant/app/models/tenants.py
/services/auth/app/models/users.py
/services/inventory/app/models/inventory.py
/services/production/app/models/production.py
/services/recipes/app/models/recipes.py
/services/procurement/app/models/procurement.py
/services/suppliers/app/models/suppliers.py
/services/orders/app/models/orders.py
/services/sales/app/models/sales.py
/services/forecasting/app/models/forecasting.py
/services/orchestrator/app/models/orchestrator.py
```

**Document for each model:**
- Required fields (`NOT NULL`, `nullable=False`)
- Exact data types (`UUID`, `DateTime(timezone=True)`, `Float`, `Enum`)
- Internal foreign keys (with column name and target table)
- Cross-service references (UUIDs without FK constraints)
- Unique indexes (e.g. `unique=True`, composite indexes)
- Business validations (e.g. `quantity >= 0`)
- Default values (e.g. `default=uuid.uuid4`, `default=ProductionStatus.PENDING`)

#### 2. Real Model Example: ProductionBatch

**File:** [`/services/production/app/models/production.py:68-150`](services/production/app/models/production.py#L68-L150)
```python
class ProductionBatch(Base):
    """Production batch model for tracking individual production runs"""
    __tablename__ = "production_batches"

    # Primary identification
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)
    batch_number = Column(String(50), nullable=False, unique=True, index=True)

    # Product and recipe information (cross-service references)
    product_id = Column(UUID(as_uuid=True), nullable=False, index=True)  # → inventory
    product_name = Column(String(255), nullable=False)
    recipe_id = Column(UUID(as_uuid=True), nullable=True)  # → recipes

    # Production planning (REQUIRED temporal fields)
    planned_start_time = Column(DateTime(timezone=True), nullable=False)
    planned_end_time = Column(DateTime(timezone=True), nullable=False)
    planned_quantity = Column(Float, nullable=False)
    planned_duration_minutes = Column(Integer, nullable=False)

    # Actual production tracking (OPTIONAL - only for started batches)
    actual_start_time = Column(DateTime(timezone=True), nullable=True)
    actual_end_time = Column(DateTime(timezone=True), nullable=True)
    actual_quantity = Column(Float, nullable=True)

    # Status and priority (REQUIRED with defaults)
    status = Column(
        SQLEnum(ProductionStatus),
        nullable=False,
        default=ProductionStatus.PENDING,
        index=True
    )
    priority = Column(
        SQLEnum(ProductionPriority),
        nullable=False,
        default=ProductionPriority.MEDIUM
    )

    # Process stage tracking (OPTIONAL)
    current_process_stage = Column(SQLEnum(ProcessStage), nullable=True, index=True)

    # Quality metrics (OPTIONAL)
    yield_percentage = Column(Float, nullable=True)
    quality_score = Column(Float, nullable=True)
    waste_quantity = Column(Float, nullable=True)

    # Equipment and staff (JSON arrays of UUIDs)
    equipment_used = Column(JSON, nullable=True)   # [uuid1, uuid2, ...]
    staff_assigned = Column(JSON, nullable=True)   # [uuid1, uuid2, ...]

    # Cross-service order tracking
    order_id = Column(UUID(as_uuid=True), nullable=True)     # → orders service
    forecast_id = Column(UUID(as_uuid=True), nullable=True)  # → forecasting

    # Reasoning data for i18n support
    reasoning_data = Column(JSON, nullable=True)

    # Audit fields
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), onupdate=func.now())
```
**Derived validation rules** (see the sketch after this list):
- `planned_start_time < planned_end_time`
- `planned_quantity > 0`
- `actual_quantity <= planned_quantity * 1.1` (allows 10% over-production)
- `status = IN_PROGRESS` → `actual_start_time` must be set
- `status = COMPLETED` → `actual_end_time` must be set
- `equipment_used` must contain at least 1 valid UUID
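These rules lend themselves to a single pre-insert check. A minimal sketch, assuming batches arrive as plain dicts with datetimes already parsed; `validate_batch_rules` is a hypothetical helper, not part of the current codebase:

```python
# Hypothetical helper illustrating the derived rules above; not in the codebase.
from typing import Any, Dict, List
from uuid import UUID

def validate_batch_rules(batch: Dict[str, Any]) -> List[str]:
    """Return a list of rule violations for a ProductionBatch dict (empty = valid)."""
    errors: List[str] = []
    if batch["planned_start_time"] >= batch["planned_end_time"]:
        errors.append("planned_start_time must be before planned_end_time")
    if batch["planned_quantity"] <= 0:
        errors.append("planned_quantity must be positive")
    actual = batch.get("actual_quantity")
    if actual is not None and actual > batch["planned_quantity"] * 1.1:
        errors.append("actual_quantity exceeds 110% of planned_quantity")
    if batch["status"] == "IN_PROGRESS" and not batch.get("actual_start_time"):
        errors.append("IN_PROGRESS batches need actual_start_time")
    if batch["status"] == "COMPLETED" and not batch.get("actual_end_time"):
        errors.append("COMPLETED batches need actual_end_time")
    # equipment_used must hold at least one parseable UUID
    try:
        valid_uuids = [UUID(str(e)) for e in (batch.get("equipment_used") or [])]
    except ValueError:
        valid_uuids = []
    if not valid_uuids:
        errors.append("equipment_used must contain at least one valid UUID")
    return errors
```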
#### 3. Generating Validation Schemas (JSON Schema)

For each model, create a JSON Schema (draft 7+) at:
```
shared/demo/schemas/{service_name}/{model_name}.schema.json
```

**Example for ProductionBatch:**
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://schemas.bakery-ia.com/demo/production/batch/v1",
  "type": "object",
  "title": "ProductionBatch",
  "description": "Production batch for demo cloning",
  "properties": {
    "id": {
      "type": "string",
      "format": "uuid",
      "description": "Unique batch identifier"
    },
    "tenant_id": {
      "type": "string",
      "format": "uuid",
      "description": "Tenant owner (replaced during cloning)"
    },
    "batch_number": {
      "type": "string",
      "pattern": "^BATCH-[0-9]{8}-[A-Z0-9]{6}$",
      "description": "Unique batch code"
    },
    "product_id": {
      "type": "string",
      "format": "uuid",
      "description": "Cross-service ref to inventory.Ingredient (type=FINISHED_PRODUCT)"
    },
    "product_name": {
      "type": "string",
      "minLength": 1,
      "maxLength": 255
    },
    "recipe_id": {
      "type": ["string", "null"],
      "format": "uuid",
      "description": "Cross-service ref to recipes.Recipe"
    },
    "planned_start_time": {
      "type": "string",
      "format": "date-time",
      "description": "ISO 8601 datetime with timezone"
    },
    "planned_end_time": {
      "type": "string",
      "format": "date-time"
    },
    "planned_quantity": {
      "type": "number",
      "minimum": 0.1,
      "description": "Quantity in product's unit of measure"
    },
    "planned_duration_minutes": {
      "type": "integer",
      "minimum": 1
    },
    "actual_start_time": {
      "type": ["string", "null"],
      "format": "date-time",
      "description": "Set when status becomes IN_PROGRESS"
    },
    "actual_end_time": {
      "type": ["string", "null"],
      "format": "date-time",
      "description": "Set when status becomes COMPLETED"
    },
    "actual_quantity": {
      "type": ["number", "null"],
      "minimum": 0,
      "description": "Set when status becomes COMPLETED (required by allOf below; must be declared here because additionalProperties is false)"
    },
    "status": {
      "type": "string",
      "enum": ["PENDING", "IN_PROGRESS", "COMPLETED", "CANCELLED", "ON_HOLD", "QUALITY_CHECK", "FAILED"],
      "default": "PENDING"
    },
    "priority": {
      "type": "string",
      "enum": ["LOW", "MEDIUM", "HIGH", "URGENT"],
      "default": "MEDIUM"
    },
    "current_process_stage": {
      "type": ["string", "null"],
      "enum": ["mixing", "proofing", "shaping", "baking", "cooling", "packaging", "finishing", null]
    },
    "equipment_used": {
      "type": ["array", "null"],
      "items": { "type": "string", "format": "uuid" },
      "minItems": 1,
      "description": "Array of Equipment IDs"
    },
    "staff_assigned": {
      "type": ["array", "null"],
      "items": { "type": "string", "format": "uuid" }
    }
  },
  "required": [
    "id", "tenant_id", "batch_number", "product_id", "product_name",
    "planned_start_time", "planned_end_time", "planned_quantity",
    "planned_duration_minutes", "status", "priority"
  ],
  "additionalProperties": false,
  "allOf": [
    {
      "if": {
        "properties": { "status": { "const": "IN_PROGRESS" } }
      },
      "then": {
        "required": ["actual_start_time"]
      }
    },
    {
      "if": {
        "properties": { "status": { "const": "COMPLETED" } }
      },
      "then": {
        "required": ["actual_start_time", "actual_end_time", "actual_quantity"]
      }
    }
  ]
}
```
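As a usage sketch (assuming the Python `jsonschema` package and the proposed file layout), each fixture entry can be checked against its schema in a test or pre-clone hook:

```python
# Sketch: validate fixture entries against their JSON Schema (assumes `pip install jsonschema`).
import json
from pathlib import Path

from jsonschema import Draft7Validator

schema = json.loads(Path("shared/demo/schemas/production/batch.schema.json").read_text())
fixture = json.loads(Path("shared/demo/fixtures/professional/06-production.json").read_text())

validator = Draft7Validator(schema)
for batch in fixture["batches"]:
    # iter_errors collects every violation instead of stopping at the first one
    for error in validator.iter_errors(batch):
        print(f"{batch.get('batch_number', '?')}: {error.message}")
```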
#### 4. Creating Base Fixtures with CI/CD Validation

**Current seed-data location:**
```
services/{service}/scripts/demo/{entity}_es.json
```

**Existing files (legacy - reference for migration):**
```
/services/auth/scripts/demo/usuarios_staff_es.json
/services/suppliers/scripts/demo/proveedores_es.json
/services/recipes/scripts/demo/recetas_es.json
/services/inventory/scripts/demo/ingredientes_es.json
/services/inventory/scripts/demo/stock_lotes_es.json
/services/production/scripts/demo/equipos_es.json
/services/production/scripts/demo/lotes_produccion_es.json
/services/production/scripts/demo/plantillas_calidad_es.json
/services/orders/scripts/demo/clientes_es.json
/services/orders/scripts/demo/pedidos_config_es.json
/services/forecasting/scripts/demo/previsiones_config_es.json
```
**Proposed new structure:**
```
shared/demo/fixtures/
├── schemas/                 # JSON Schemas for validation
│   ├── production/
│   │   ├── batch.schema.json
│   │   ├── equipment.schema.json
│   │   └── quality_check.schema.json
│   ├── inventory/
│   │   ├── ingredient.schema.json
│   │   └── stock.schema.json
│   └── ...
├── professional/            # Professional tier seed data
│   ├── 01-tenant.json
│   ├── 02-auth.json
│   ├── 03-inventory.json
│   ├── 04-recipes.json
│   ├── 05-suppliers.json
│   ├── 06-production.json
│   ├── 07-procurement.json
│   ├── 08-orders.json
│   ├── 09-sales.json
│   └── 10-forecasting.json
└── enterprise/              # Enterprise tier seed data
    ├── parent/
    │   └── ...
    └── children/
        ├── madrid.json
        ├── barcelona.json
        └── valencia.json
```
**CI/CD integration (GitHub Actions):**

```yaml
# .github/workflows/validate-demo-data.yml
name: Validate Demo Data

on: [push, pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js (for ajv-cli)
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install ajv-cli
        run: npm install -g ajv-cli

      - name: Validate Professional Tier Data
        run: |
          for schema in shared/demo/schemas/*/*.schema.json; do
            service=$(basename $(dirname $schema))
            model=$(basename $schema .schema.json)

            # Find corresponding JSON file
            json_file="shared/demo/fixtures/professional/*-${service}.json"

            if ls $json_file 1> /dev/null 2>&1; then
              echo "Validating ${service}/${model}..."
              ajv validate -s "$schema" -d "$json_file" --strict=false
            fi
          done

      - name: Validate Enterprise Tier Data
        run: |
          # Similar validation for enterprise tier
          echo "Validating enterprise tier..."

      - name: Check Cross-Service References
        run: |
          # Custom script to validate UUIDs exist across services
          python scripts/validate_cross_refs.py
```
**Cross-service reference validation script:**

```python
# scripts/validate_cross_refs.py
"""
Validates cross-service UUID references in demo data.
Ensures referential integrity without database constraints.
"""
import json
from pathlib import Path
from typing import Dict, Set
import sys

def load_all_fixtures(tier: str = "professional") -> Dict[str, any]:
    """Load all JSON fixtures for a tier"""
    fixtures_dir = Path(f"shared/demo/fixtures/{tier}")
    data = {}

    for json_file in sorted(fixtures_dir.glob("*.json")):
        service = json_file.stem.split('-', 1)[1]  # Remove number prefix
        with open(json_file, 'r') as f:
            data[service] = json.load(f)

    return data

def extract_ids(data: dict, entity_type: str) -> Set[str]:
    """Extract all IDs for an entity type"""
    entities = data.get(entity_type, [])
    return {e['id'] for e in entities}

def validate_references(data: Dict[str, any]) -> bool:
    """Validate all cross-service references"""
    errors = []

    # Extract all available IDs
    ingredient_ids = extract_ids(data.get('inventory', {}), 'ingredients')
    recipe_ids = extract_ids(data.get('recipes', {}), 'recipes')
    equipment_ids = extract_ids(data.get('production', {}), 'equipment')
    supplier_ids = extract_ids(data.get('suppliers', {}), 'suppliers')

    # Validate ProductionBatch references
    for batch in data.get('production', {}).get('batches', []):
        # Check product_id exists in inventory
        if batch['product_id'] not in ingredient_ids:
            errors.append(
                f"Batch {batch['batch_number']}: "
                f"product_id {batch['product_id']} not found in inventory"
            )

        # Check recipe_id exists in recipes
        if batch.get('recipe_id') and batch['recipe_id'] not in recipe_ids:
            errors.append(
                f"Batch {batch['batch_number']}: "
                f"recipe_id {batch['recipe_id']} not found in recipes"
            )

        # Check equipment_used exists
        for eq_id in batch.get('equipment_used', []):
            if eq_id not in equipment_ids:
                errors.append(
                    f"Batch {batch['batch_number']}: "
                    f"equipment {eq_id} not found in equipment"
                )

    # Validate Recipe ingredient references
    for recipe in data.get('recipes', {}).get('recipes', []):
        for ingredient in recipe.get('ingredients', []):
            if ingredient['ingredient_id'] not in ingredient_ids:
                errors.append(
                    f"Recipe {recipe['name']}: "
                    f"ingredient_id {ingredient['ingredient_id']} not found"
                )

    # Validate Stock supplier references
    for stock in data.get('inventory', {}).get('stock', []):
        if stock.get('supplier_id') and stock['supplier_id'] not in supplier_ids:
            errors.append(
                f"Stock {stock['batch_number']}: "
                f"supplier_id {stock['supplier_id']} not found"
            )

    # Print errors
    if errors:
        print("❌ Cross-reference validation FAILED:")
        for error in errors:
            print(f"  - {error}")
        return False

    print("✅ All cross-service references are valid")
    return True

if __name__ == "__main__":
    professional_data = load_all_fixtures("professional")

    if not validate_references(professional_data):
        sys.exit(1)

    print("✅ Demo data validation passed")
```
#### 5. Managing Model Evolution

**Create a per-service CHANGELOG.md:**

```markdown
# Production Service - Demo Data Changelog

## 2025-12-13
- **BREAKING**: Added required field `reasoning_data` (JSON) to ProductionBatch
  - Migration: Set to `null` for existing batches
  - Demo data: Added reasoning structure for i18n support
- Updated JSON schema: `production/batch.schema.json` v1 → v2

## 2025-12-01
- Added optional field `shelf_life_days` (int, nullable) to Ingredient model
- Demo data: Updated ingredientes_es.json with shelf_life values
- JSON schema: `inventory/ingredient.schema.json` remains v1 (backward compatible)
```

**Schema versioning:**

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://schemas.bakery-ia.com/demo/production/batch/v2",
  "version": "2.0.0",
  "changelog": "https://github.com/bakery-ia/schemas/blob/main/CHANGELOG.md#production-batch-v2",
  ...
}
```

**Backward compatibility** (see the migration sketch after this list):
- New fields must be `nullable=True` or carry `default` values
- Never remove a field without a deprecation cycle
- Keep old schema versions around for at least 2 releases
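As an illustration of the first rule, a backward-compatible column addition might look like the following Alembic-style migration (revision IDs and table name are hypothetical, not taken from the repo):

```python
# Hypothetical Alembic migration: add a nullable column so existing rows stay valid.
from alembic import op
import sqlalchemy as sa

revision = "20251201_add_shelf_life"   # hypothetical revision id
down_revision = None                   # hypothetical

def upgrade() -> None:
    # nullable=True means old fixtures and existing rows need no backfill
    op.add_column("ingredients", sa.Column("shelf_life_days", sa.Integer(), nullable=True))

def downgrade() -> None:
    op.drop_column("ingredients", "shelf_life_days")
```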
---

<a name="arquitectura"></a>
## 🏗 MICROSERVICES ARCHITECTURE

### Service Inventory (19 Total)

**Reference file:** [`/services/demo_session/app/services/clone_orchestrator.py:42-106`](services/demo_session/app/services/clone_orchestrator.py#L42-L106)

| Service | Port | Role | Cloning | Kubernetes URL | Timeout |
|---------|------|------|---------|----------------|---------|
| **tenant** | 8000 | Tenant and subscription management | ✅ Required | `http://tenant-service:8000` | 10s |
| **auth** | 8001 | Authentication and users | ✅ Required | `http://auth-service:8001` | 10s |
| **inventory** | 8002 | Ingredients and stock | ✅ Optional | `http://inventory-service:8002` | 30s |
| **production** | 8003 | Production batches and equipment | ✅ Optional | `http://production-service:8003` | 30s |
| **recipes** | 8004 | Recipes and BOM | ✅ Optional | `http://recipes-service:8004` | 15s |
| **procurement** | 8005 | Purchase orders | ✅ Optional | `http://procurement-service:8005` | 25s |
| **suppliers** | 8006 | Suppliers | ✅ Optional | `http://suppliers-service:8006` | 20s |
| **orders** | 8007 | Customer orders | ✅ Optional | `http://orders-service:8007` | 15s |
| **sales** | 8008 | Sales history | ✅ Optional | `http://sales-service:8008` | 30s |
| **forecasting** | 8009 | Demand forecasting | ✅ Optional | `http://forecasting-service:8009` | 15s |
| **distribution** | 8010 | Logistics and distribution | ❌ Future | `http://distribution-service:8010` | - |
| **pos** | 8011 | POS integration | ❌ Not needed | `http://pos-service:8011` | - |
| **orchestrator** | 8012 | Workflow orchestration | ✅ Optional | `http://orchestrator-service:8012` | 15s |
| **ai_insights** | 8013 | AI-generated insights | ❌ Computed post-clone | `http://ai-insights-service:8013` | - |
| **training** | 8014 | ML training | ❌ Not needed | `http://training-service:8014` | - |
| **alert_processor** | 8015 | Alert processing | ❌ Triggered post-clone | `http://alert-processor-service:8015` | - |
| **notification** | 8016 | Notifications (email/WhatsApp) | ❌ Not needed | `http://notification-service:8016` | - |
| **external** | 8017 | External data (weather/traffic) | ❌ Not needed | `http://external-service:8017` | - |
| **demo_session** | 8018 | Demo orchestration | N/A | `http://demo-session-service:8018` | - |
### Cloning Flow

**Reference file:** [`/services/demo_session/app/services/clone_orchestrator.py`](services/demo_session/app/services/clone_orchestrator.py)

```
POST /api/demo/sessions
  ↓
DemoSessionManager.create_session()
  ↓
CloneOrchestrator.clone_all_services(
    base_tenant_id="a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
    virtual_tenant_id=<new UUID>,
    demo_account_type="professional",
    session_id="demo_abc123",
    session_created_at="2025-12-13T10:00:00Z"
)
  ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: Parent Tenant Cloning (Parallel)                   │
└─────────────────────────────────────────────────────────────┘
  ║
  ╠═► POST tenant-service:8000/internal/demo/clone
  ╠═► POST auth-service:8001/internal/demo/clone
  ╠═► POST inventory-service:8002/internal/demo/clone
  ╠═► POST recipes-service:8004/internal/demo/clone
  ╠═► POST suppliers-service:8006/internal/demo/clone
  ╠═► POST production-service:8003/internal/demo/clone
  ╠═► POST procurement-service:8005/internal/demo/clone
  ╠═► POST orders-service:8007/internal/demo/clone
  ╠═► POST sales-service:8008/internal/demo/clone
  ╠═► POST forecasting-service:8009/internal/demo/clone
  ╚═► POST orchestrator-service:8012/internal/demo/clone
  ↓
[Wait for all to complete - asyncio.gather()]
  ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: [Enterprise only] Child Outlet Cloning             │
└─────────────────────────────────────────────────────────────┘
  ║
  ╠═► For each child_outlet (Madrid, Barcelona, Valencia):
  ║     ↓
  ║   POST tenant-service:8000/internal/demo/create-child
  ║     ↓
  ║   [Clone services for child_tenant_id - parallel]
  ║   ╠═► POST inventory-service/internal/demo/clone
  ║   ╠═► POST production-service/internal/demo/clone
  ║   ╠═► POST orders-service/internal/demo/clone
  ║   ╚═► ...
  ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 3: Post-Clone Alert Generation                        │
└─────────────────────────────────────────────────────────────┘
  ║
  ╠═► POST procurement/internal/delivery-tracking/trigger
  ╠═► POST inventory/internal/alerts/trigger
  ╚═► POST production/internal/alerts/trigger
  ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 4: [Professional/Enterprise] AI Insight Generation    │
└─────────────────────────────────────────────────────────────┘
  ║
  ╠═► POST procurement/internal/insights/price/trigger
  ╠═► POST inventory/internal/insights/safety-stock/trigger
  ╚═► POST production/internal/insights/yield/trigger
  ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 5: Session Status Update                              │
└─────────────────────────────────────────────────────────────┘
  ║
  ╠═► UPDATE demo_sessions SET status='READY', cloning_completed_at=NOW()
  ╚═► RETURN session + credentials
```
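The Phase 1 fan-out reduces to one `asyncio.gather` over per-service HTTP calls, which is why total latency tracks the slowest service rather than the sum. A condensed sketch (helper and payload names are illustrative, not the exact code in `clone_orchestrator.py`):

```python
# Illustrative sketch of the parallel fan-out; names approximate clone_orchestrator.py.
import asyncio

import httpx

async def clone_one(client: httpx.AsyncClient, url: str, payload: dict, timeout: float) -> dict:
    """Call one service's internal clone endpoint and return its JSON result."""
    resp = await client.post(f"{url}/internal/demo/clone", json=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json()

async def clone_all(services: list[tuple[str, float]], payload: dict) -> list:
    """Fan out to every cloneable service at once; wall time ≈ slowest service."""
    async with httpx.AsyncClient() as client:
        tasks = [clone_one(client, url, payload, timeout) for url, timeout in services]
        # return_exceptions=True lets one failing service be reported without
        # cancelling the others (required services can be re-checked afterwards)
        return await asyncio.gather(*tasks, return_exceptions=True)
```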
**Expected timings:**
- Professional: 5-10 seconds
- Enterprise: 10-15 seconds (3 child outlets)

**Comparison with the previous architecture:**
- Old Professional: 30-40 seconds
- Old Enterprise: 60-75 seconds
- **Improvement: 3-6x faster**
---

<a name="integridad"></a>
## 🔐 CROSS-SERVICE INTEGRITY GUARANTEE

### Guiding Principle

> **No referenced ID may exist in one service unless its source entity exists in the owning service, under the *same virtual tenant*.**

### Cross-Service Reference Strategy

**Important:** there are no foreign keys (FKs) between services - only stored UUIDs.

**Reference file:** each service implements its own validation logic in `/services/{service}/app/api/internal_demo.py`

### Mandatory Integrity Table

| Entity (Service A) | Field | Reference | Entity (Service B) | Required Validation |
|--------------------|-------|-----------|--------------------|---------------------|
| `ProductionBatch` (production) | `recipe_id` | UUID | `Recipe` (recipes) | ✅ Must exist with `tenant_id = virtual_tenant_id` and `is_active = true` |
| `ProductionBatch` (production) | `product_id` | UUID | `Ingredient` (inventory) | ✅ Must be of type `FINISHED_PRODUCT` under the same tenant |
| `ProductionBatch` (production) | `equipment_used` | UUID[] | `Equipment` (production) | ✅ All IDs must exist with `status = OPERATIONAL` |
| `ProductionBatch` (production) | `order_id` | UUID | `CustomerOrder` (orders) | ✅ Must exist with `status != CANCELLED` |
| `ProductionBatch` (production) | `forecast_id` | UUID | `Forecast` (forecasting) | ✅ Must exist for the same `product_id` |
| `Recipe` (recipes) | `finished_product_id` | UUID | `Ingredient` (inventory) | ✅ Must be of type `FINISHED_PRODUCT` |
| `RecipeIngredient` (recipes) | `ingredient_id` | UUID | `Ingredient` (inventory) | ✅ Must exist with `product_type = INGREDIENT` |
| `Stock` (inventory) | `ingredient_id` | UUID | `Ingredient` (inventory) | ✅ Internal FK - validated automatically |
| `Stock` (inventory) | `supplier_id` | UUID | `Supplier` (suppliers) | ✅ Must exist with a current contract |
| `PurchaseOrder` (procurement) | `supplier_id` | UUID | `Supplier` (suppliers) | ✅ Must exist and be active |
| `PurchaseOrderItem` (procurement) | `ingredient_id` | UUID | `Ingredient` (inventory) | ✅ Must be of type `INGREDIENT` (not a finished product) |
| `PurchaseOrderItem` (procurement) | `purchase_order_id` | UUID | `PurchaseOrder` (procurement) | ✅ Internal FK - validated automatically |
| `ProcurementRequirement` (procurement) | `plan_id` | UUID | `ProcurementPlan` (procurement) | ✅ Internal FK - validated automatically |
| `ProcurementRequirement` (procurement) | `ingredient_id` | UUID | `Ingredient` (inventory) | ✅ Must exist |
| `CustomerOrder` (orders) | `customer_id` | UUID | `Customer` (orders) | ✅ Internal FK - validated automatically |
| `OrderItem` (orders) | `customer_order_id` | UUID | `CustomerOrder` (orders) | ✅ Internal FK - validated automatically |
| `OrderItem` (orders) | `product_id` | UUID | `Ingredient` (inventory) | ✅ Must be of type `FINISHED_PRODUCT` |
| `QualityCheck` (production) | `batch_id` | UUID | `ProductionBatch` (production) | ✅ Internal FK - validated automatically |
| `QualityCheck` (production) | `template_id` | UUID | `QualityCheckTemplate` (production) | ✅ Internal FK - validated automatically |
| `SalesData` (sales) | `product_id` | UUID | `Ingredient` (inventory) | ✅ Must be of type `FINISHED_PRODUCT` |
| `Forecast` (forecasting) | `product_id` | UUID | `Ingredient` (inventory) | ✅ Must exist |

### Validation Mechanism

**Option 1: Pre-Clone Validation (Recommended)**

Validate all references **in memory** when loading the base data, on the assumption that the JSON files have already been validated by CI/CD.
```python
# In clone_orchestrator.py, or a pre-clone validation script
def validate_cross_service_refs(data: Dict[str, Any]) -> None:
    """
    Validates all cross-service references before cloning.
    Raises ValidationError if any reference is invalid.
    """
    errors = []

    # Extract available IDs
    ingredient_ids = {i['id'] for i in data['inventory']['ingredients']}
    recipe_ids = {r['id'] for r in data['recipes']['recipes']}
    equipment_ids = {e['id'] for e in data['production']['equipment']}

    # Validate ProductionBatch references
    for batch in data['production']['batches']:
        if batch['product_id'] not in ingredient_ids:
            errors.append(f"Batch {batch['batch_number']}: product_id not found")

        if batch.get('recipe_id') and batch['recipe_id'] not in recipe_ids:
            errors.append(f"Batch {batch['batch_number']}: recipe_id not found")

        for eq_id in batch.get('equipment_used', []):
            if eq_id not in equipment_ids:
                errors.append(f"Batch {batch['batch_number']}: equipment {eq_id} not found")

    if errors:
        raise ValidationError("Cross-service reference validation failed:\n" + "\n".join(errors))
```
**Option 2: Runtime Validation (only if the data is not pre-validated)**

```python
# In each service's internal_demo.py
async def validate_cross_service_reference(
    service_url: str,
    entity_type: str,
    entity_id: UUID,
    tenant_id: UUID
) -> bool:
    """
    Check if a cross-service reference exists.

    Args:
        service_url: URL of the service to check (e.g., "http://inventory-service:8000")
        entity_type: Type of entity (e.g., "ingredient", "recipe")
        entity_id: UUID of the entity
        tenant_id: Tenant ID to filter by

    Returns:
        True if entity exists, False otherwise
    """
    async with httpx.AsyncClient(timeout=5.0) as client:
        response = await client.head(
            f"{service_url}/internal/demo/exists",
            params={
                "entity_type": entity_type,
                "id": str(entity_id),
                "tenant_id": str(tenant_id)
            },
            headers={"X-Internal-API-Key": settings.INTERNAL_API_KEY}
        )
        return response.status_code == 200

# Usage during cloning
if batch.recipe_id:
    if not await validate_cross_service_reference(
        "http://recipes-service:8000",
        "recipe",
        batch.recipe_id,
        virtual_tenant_id
    ):
        raise IntegrityError(
            f"Recipe {batch.recipe_id} not found for batch {batch.batch_number}"
        )
```
**Existence endpoint (to implement in each service):**

```python
# In services/{service}/app/api/internal_demo.py
@router.head("/exists", dependencies=[Depends(verify_internal_key)])
async def check_entity_exists(
    entity_type: str,
    id: UUID,
    tenant_id: UUID,
    db: AsyncSession = Depends(get_db)
):
    """
    Check if an entity exists for a tenant.
    Returns 200 if exists, 404 if not.
    """
    if entity_type == "recipe":
        result = await db.execute(
            select(Recipe)
            .where(Recipe.id == id)
            .where(Recipe.tenant_id == tenant_id)
        )
        entity = result.scalar_one_or_none()
    # ... other entity_types

    if entity:
        return Response(status_code=200)
    else:
        return Response(status_code=404)
```
---

<a name="determinismo"></a>
## 📅 TEMPORAL DETERMINISM

### Fixed Baseline

**Reference file:** [`/shared/utils/demo_dates.py:11-42`](shared/utils/demo_dates.py#L11-L42)

```python
def get_base_reference_date(session_created_at: Optional[datetime] = None) -> datetime:
    """
    Get the base reference date for demo data.

    If session_created_at is provided, calculate relative to it.
    Otherwise, use current time (for backwards compatibility with seed scripts).

    Returns:
        Base reference date at 6 AM UTC
    """
    if session_created_at:
        if session_created_at.tzinfo is None:
            session_created_at = session_created_at.replace(tzinfo=timezone.utc)
        # Reference is session creation time at 6 AM that day
        return session_created_at.replace(
            hour=6, minute=0, second=0, microsecond=0
        )
    # Fallback for seed scripts: use today at 6 AM
    now = datetime.now(timezone.utc)
    return now.replace(hour=6, minute=0, second=0, microsecond=0)
```

**Concept:**
- All seed data is defined relative to one fixed timestamp: **6:00 AM UTC on the day the session is created**
- That timestamp is computed dynamically at clone time

### Dynamic Transformation

**Reference file:** [`/shared/utils/demo_dates.py:45-93`](shared/utils/demo_dates.py#L45-L93)
```python
def adjust_date_for_demo(
    original_date: Optional[datetime],
    session_created_at: datetime,
    base_reference_date: datetime = BASE_REFERENCE_DATE
) -> Optional[datetime]:
    """
    Adjust a date from seed data to be relative to demo session creation time.

    Example:
        # Seed data created on 2025-12-13 06:00
        # Stock expiration: 2025-12-28 06:00 (15 days from seed date)
        # Demo session created: 2025-12-16 10:00
        # Base reference: 2025-12-16 06:00
        # Result: 2025-12-31 10:00 (15 days from session date)
    """
    if original_date is None:
        return None

    # Ensure timezone-aware datetimes
    if original_date.tzinfo is None:
        original_date = original_date.replace(tzinfo=timezone.utc)
    if session_created_at.tzinfo is None:
        session_created_at = session_created_at.replace(tzinfo=timezone.utc)
    if base_reference_date.tzinfo is None:
        base_reference_date = base_reference_date.replace(tzinfo=timezone.utc)

    # Calculate offset from base reference
    offset = original_date - base_reference_date

    # Apply offset to session creation date
    return session_created_at + offset
```
**At clone time:**
```python
Δt = session_created_at - base_reference_date
new_timestamp = original_timestamp + Δt
```
### Application by Data Type

| Type | Affected Fields | Transformation Rule | Reference File |
|------|-----------------|---------------------|----------------|
| **Purchase order** | `created_at`, `order_date`, `expected_delivery_date`, `approval_deadline` | `+Δt`. If `expected_delivery_date` lands on a weekend → shift to the following Monday | `procurement/app/api/internal_demo.py` |
| **Production batch** | `planned_start_time`, `planned_end_time`, `actual_start_time` | `+Δt`. `actual_start_time = null` for future batches; `actual_start_time = planned_start_time` for in-progress batches | `production/app/api/internal_demo.py` |
| **Stock** | `received_date`, `expiration_date` | `+Δt`. `expiration_date = received_date + shelf_life_days` | `inventory/app/api/internal_demo.py` |
| **Customer order** | `order_date`, `delivery_date` | `+Δt`, preserving business days (Monday-Friday) | `orders/app/api/internal_demo.py` |
| **Alert** | `triggered_at`, `acknowledged_at`, `resolved_at` | Only `triggered_at` is transformed. `resolved_at` is computed dynamically if the alert is resolved | `alert_processor/app/consumer/event_consumer.py` |
| **Forecast** | `forecast_date`, `prediction_date` | `+Δt`, aligned to the start of the week (Monday) | `forecasting/app/api/internal_demo.py` |
| **Delivery** | `expected_date`, `actual_date` | `+Δt`, adjusted to business hours (8:00-20:00) | `procurement/app/api/internal_demo.py` |
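The weekend rule for purchase orders, for instance, is a small adjustment applied after the `+Δt` shift. A sketch with a hypothetical helper name (the real logic lives in each service's `internal_demo.py`):

```python
# Sketch of the weekend rule: push Saturday/Sunday deliveries to the next Monday.
from datetime import datetime, timedelta

def shift_off_weekend(expected: datetime) -> datetime:
    """Return the same timestamp, moved to Monday if it falls on Sat (5) or Sun (6)."""
    if expected.weekday() >= 5:  # Monday is 0, so 5/6 are the weekend
        return expected + timedelta(days=7 - expected.weekday())
    return expected
```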
### Advanced Time-Adjustment Functions

**Reference file:** [`/shared/utils/demo_dates.py:264-341`](shared/utils/demo_dates.py#L264-L341)

#### shift_to_session_time

```python
def shift_to_session_time(
    original_offset_days: int,
    original_hour: int,
    original_minute: int,
    session_created_at: datetime,
    base_reference: Optional[datetime] = None
) -> datetime:
    """
    Shift a time from seed data to demo session time with same-day preservation.

    Ensures that:
    1. Items scheduled for "today" (offset_days=0) remain on the same day as session creation
    2. Future items stay in the future, past items stay in the past
    3. Times don't shift to invalid moments (e.g., past times for pending items)

    Examples:
        # Session created at noon, item originally scheduled for morning
        >>> session = datetime(2025, 12, 12, 12, 0, tzinfo=timezone.utc)
        >>> result = shift_to_session_time(0, 6, 0, session)  # Today at 06:00
        >>> # Returns today at 13:00 (shifted forward to stay in future)

        # Session created at noon, item originally scheduled for evening
        >>> result = shift_to_session_time(0, 18, 0, session)  # Today at 18:00
        >>> # Returns today at 18:00 (already in future)
    """
    # ... (implementation in demo_dates.py)
```
#### ensure_future_time

```python
def ensure_future_time(
    target_time: datetime,
    reference_time: datetime,
    min_hours_ahead: float = 1.0
) -> datetime:
    """
    Ensure a target time is in the future relative to reference time.

    If target_time is in the past or too close to reference_time,
    shift it forward by at least min_hours_ahead.
    """
    # ... (implementation in demo_dates.py)
```
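The real body lives in `demo_dates.py`; purely for illustration, a minimal implementation consistent with that docstring could be:

```python
# Illustrative only - the canonical implementation is in shared/utils/demo_dates.py.
from datetime import datetime, timedelta

def ensure_future_time_sketch(
    target_time: datetime,
    reference_time: datetime,
    min_hours_ahead: float = 1.0
) -> datetime:
    """Push target_time forward so it sits at least min_hours_ahead after reference_time."""
    earliest = reference_time + timedelta(hours=min_hours_ahead)
    return target_time if target_time >= earliest else earliest
```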
#### calculate_edge_case_times

**Reference file:** [`/shared/utils/demo_dates.py:421-477`](shared/utils/demo_dates.py#L421-L477)

```python
def calculate_edge_case_times(session_created_at: datetime) -> dict:
    """
    Calculate deterministic edge case times for demo sessions.

    These times are designed to always create specific demo scenarios:
    - One late delivery (should have arrived hours ago)
    - One overdue production batch (should have started hours ago)
    - One in-progress batch (started recently)
    - One upcoming batch (starts soon)
    - One arriving-soon delivery (arrives in a few hours)

    Returns:
        {
            'late_delivery_expected': session - 4h,
            'overdue_batch_planned_start': session - 2h,
            'in_progress_batch_actual_start': session - 1h45m,
            'upcoming_batch_planned_start': session + 1h30m,
            'arriving_soon_delivery_expected': session + 2h30m,
            'evening_batch_planned_start': today 17:00,
            'tomorrow_morning_planned_start': tomorrow 05:00
        }
    """
```
### Edge Cases Required for UI/UX

| Scenario | Base Data Configuration | Result After Transformation |
|----------|-------------------------|-----------------------------|
| **Late PO** | `expected_delivery_date = BASE_TS - 1d`, `status = "PENDING"` | `expected_delivery_date = session_created_at - 4h` → red "Supplier delay" alert |
| **Overdue batch** | `planned_start_time = BASE_TS - 2h`, `status = "PENDING"`, `actual_start_time = null` | `planned_start_time = session_created_at - 2h` → yellow "Production delayed" alert |
| **In-progress batch** | `planned_start_time = BASE_TS - 1h`, `status = "IN_PROGRESS"`, `actual_start_time = BASE_TS - 1h45m` | `actual_start_time = session_created_at - 1h45m` → active production visible |
| **Upcoming batch** | `planned_start_time = BASE_TS + 1.5h`, `status = "PENDING"` | `planned_start_time = session_created_at + 1.5h` → next in the plan |
| **Out of stock + pending PO** | `Ingredient.stock = 0`, `reorder_point = 10`, `PurchaseOrder.status = "PENDING_APPROVAL"` | ✅ Inventory alert is *not* triggered (avoids duplication) |
| **Out of stock, no PO** | `Ingredient.stock = 0`, `reorder_point = 10`, **no open PO** | ❗ Inventory alert *is* triggered: "Low stock - action required" |
| **Stock close to expiry** | `expiration_date = BASE_TS + 2d` | `expiration_date = session_created_at + 2d` → orange "Expiring soon" alert |
### Implementation in Cloning

**Real example from the production service:**

```python
# services/production/app/api/internal_demo.py (simplified)
from shared.utils.demo_dates import (
    adjust_date_for_demo,
    calculate_edge_case_times,
    ensure_future_time,
    get_base_reference_date
)

@router.post("/clone")
async def clone_production_data(
    request: DemoCloneRequest,
    db: AsyncSession = Depends(get_db)
):
    session_created_at = datetime.fromisoformat(request.session_created_at)
    base_reference = get_base_reference_date(session_created_at)
    edge_times = calculate_edge_case_times(session_created_at)

    # Clone equipment (no date adjustment needed)
    for equipment in base_equipment:
        new_equipment = Equipment(
            id=uuid.uuid4(),
            tenant_id=request.virtual_tenant_id,
            # ... copy fields
            install_date=adjust_date_for_demo(
                equipment.install_date,
                session_created_at,
                base_reference
            )
        )
        db.add(new_equipment)

    # Clone production batches with edge cases
    batches = []

    # Edge case 1: Overdue batch (should have started 2h ago)
    batches.append({
        "planned_start_time": edge_times["overdue_batch_planned_start"],
        "planned_end_time": edge_times["overdue_batch_planned_start"] + timedelta(hours=3),
        "status": ProductionStatus.PENDING,
        "actual_start_time": None,
        "priority": ProductionPriority.URGENT
    })

    # Edge case 2: In-progress batch
    batches.append({
        "planned_start_time": edge_times["in_progress_batch_actual_start"],
        "planned_end_time": edge_times["upcoming_batch_planned_start"],
        "status": ProductionStatus.IN_PROGRESS,
        "actual_start_time": edge_times["in_progress_batch_actual_start"],
        "priority": ProductionPriority.HIGH,
        "current_process_stage": ProcessStage.BAKING
    })

    # Edge case 3: Upcoming batch
    batches.append({
        "planned_start_time": edge_times["upcoming_batch_planned_start"],
        "planned_end_time": edge_times["upcoming_batch_planned_start"] + timedelta(hours=2),
        "status": ProductionStatus.PENDING,
        "actual_start_time": None,
        "priority": ProductionPriority.MEDIUM
    })

    # Clone remaining batches from seed data
    for base_batch in seed_batches:
        adjusted_start = adjust_date_for_demo(
            base_batch.planned_start_time,
            session_created_at,
            base_reference
        )

        # Ensure future batches stay in the future
        if base_batch.status == ProductionStatus.PENDING:
            adjusted_start = ensure_future_time(adjusted_start, session_created_at, min_hours_ahead=1.0)

        batches.append({
            "planned_start_time": adjusted_start,
            "planned_end_time": adjust_date_for_demo(
                base_batch.planned_end_time,
                session_created_at,
                base_reference
            ),
            "status": base_batch.status,
            "actual_start_time": adjust_date_for_demo(
                base_batch.actual_start_time,
                session_created_at,
                base_reference
            ) if base_batch.actual_start_time else None,
            # ... other fields
        })

    for batch_data in batches:
        new_batch = ProductionBatch(
            id=uuid.uuid4(),
            tenant_id=request.virtual_tenant_id,
            **batch_data
        )
        db.add(new_batch)

    await db.commit()

    return {
        "service": "production",
        "status": "completed",
        "records_cloned": len(batches) + len(equipment_list),
        "details": {
            "batches": len(batches),
            "equipment": len(equipment_list),
            "edge_cases_created": 3
        }
    }
```
---

<a name="ssot"></a>
## 🧱 BASE DATA MODEL — SINGLE SOURCE OF TRUTH (SSOT)

### Current vs. Proposed Location

**Existing files (legacy):**
```
/services/{service}/scripts/demo/{entity}_es.json
```

**Proposed new structure:**
```
shared/demo/
├── schemas/                        # JSON Schemas for validation
│   ├── production/
│   │   ├── batch.schema.json
│   │   ├── equipment.schema.json
│   │   └── quality_check.schema.json
│   ├── inventory/
│   │   ├── ingredient.schema.json
│   │   └── stock.schema.json
│   ├── recipes/
│   │   ├── recipe.schema.json
│   │   └── recipe_ingredient.schema.json
│   └── ... (more services)
├── fixtures/
│   ├── professional/
│   │   ├── 01-tenant.json          # Base tenant metadata
│   │   ├── 02-auth.json            # Demo users
│   │   ├── 03-inventory.json       # Ingredients + stock
│   │   ├── 04-recipes.json         # Recipes
│   │   ├── 05-suppliers.json       # Suppliers
│   │   ├── 06-production.json      # Equipment + batches
│   │   ├── 07-procurement.json     # POs + plans
│   │   ├── 08-orders.json          # Customers + orders
│   │   ├── 09-sales.json           # Sales history
│   │   └── 10-forecasting.json     # Forecasts
│   └── enterprise/
│       ├── parent/
│       │   └── ... (same structure)
│       └── children/
│           ├── madrid.json         # Madrid-specific data
│           ├── barcelona.json      # Barcelona-specific data
│           └── valencia.json       # Valencia-specific data
└── metadata/
    ├── tenant_configs.json         # Base IDs per tier
    ├── demo_users.json             # Hardcoded users
    └── cross_refs_map.json         # Dependency map
```

### Fixed IDs per Tier

**Reference file:** [`/services/demo_session/app/core/config.py:38-72`](services/demo_session/app/core/config.py#L38-L72)
```python
DEMO_ACCOUNTS = {
    "professional": {
        "email": "demo.professional@panaderiaartesana.com",
        "name": "Panadería Artesana Madrid - Demo",
        "subdomain": "demo-artesana",
        "base_tenant_id": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
        "subscription_tier": "professional",
        "tenant_type": "standalone"
    },
    "enterprise": {
        "email": "demo.enterprise@panaderiacentral.com",
        "name": "Panadería Central - Demo Enterprise",
        "subdomain": "demo-central",
        "base_tenant_id": "80000000-0000-4000-a000-000000000001",
        "subscription_tier": "enterprise",
        "tenant_type": "parent",
        "children": [
            {
                "name": "Madrid Centro",
                "base_tenant_id": "A0000000-0000-4000-a000-000000000001",
                "location": {
                    "city": "Madrid",
                    "zone": "Centro",
                    "latitude": 40.4168,
                    "longitude": -3.7038
                }
            },
            {
                "name": "Barcelona Gràcia",
                "base_tenant_id": "B0000000-0000-4000-a000-000000000001",
                "location": {
                    "city": "Barcelona",
                    "zone": "Gràcia",
                    "latitude": 41.4036,
                    "longitude": 2.1561
                }
            },
            {
                "name": "Valencia Ruzafa",
                "base_tenant_id": "C0000000-0000-4000-a000-000000000001",
                "location": {
                    "city": "Valencia",
                    "zone": "Ruzafa",
                    "latitude": 39.4623,
                    "longitude": -0.3645
                }
            }
        ]
    }
}
```
### Hardcoded Demo Users

```python
# These IDs are hardcoded in tenant/app/api/internal_demo.py
DEMO_OWNER_IDS = {
    "professional": "c1a2b3c4-d5e6-47a8-b9c0-d1e2f3a4b5c6",  # María García López
    "enterprise": "d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7"     # Carlos Martínez Ruiz
}

STAFF_USERS = {
    "professional": [
        {"user_id": "50000000-0000-0000-0000-000000000001", "role": "baker", "name": "Juan Panadero"},
        {"user_id": "50000000-0000-0000-0000-000000000002", "role": "sales", "name": "Ana Ventas"},
        {"user_id": "50000000-0000-0000-0000-000000000003", "role": "quality_control", "name": "Pedro Calidad"},
        {"user_id": "50000000-0000-0000-0000-000000000004", "role": "admin", "name": "Laura Admin"},
        {"user_id": "50000000-0000-0000-0000-000000000005", "role": "warehouse", "name": "Carlos Almacén"},
        {"user_id": "50000000-0000-0000-0000-000000000006", "role": "production_manager", "name": "Isabel Producción"}
    ],
    "enterprise": [
        {"user_id": "50000000-0000-0000-0000-000000000011", "role": "production_manager", "name": "Roberto Producción"},
        {"user_id": "50000000-0000-0000-0000-000000000012", "role": "quality_control", "name": "Marta Calidad"},
        {"user_id": "50000000-0000-0000-0000-000000000013", "role": "logistics", "name": "Javier Logística"},
        {"user_id": "50000000-0000-0000-0000-000000000014", "role": "sales", "name": "Carmen Ventas"},
        {"user_id": "50000000-0000-0000-0000-000000000015", "role": "procurement", "name": "Luis Compras"},
        {"user_id": "50000000-0000-0000-0000-000000000016", "role": "maintenance", "name": "Miguel Mantenimiento"}
    ]
}
```
### ID Transformation During Cloning

Every `base_tenant_id` in the JSON files is replaced with the `virtual_tenant_id` during cloning.

**Base data example:**

```json
// shared/demo/fixtures/professional/06-production.json
{
  "equipment": [
    {
      "id": "eq-00000000-0001-0000-0000-000000000001",
      "tenant_id": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",  // ← replaced
      "name": "Horno de leña principal",
      "type": "OVEN",
      "model": "WoodFire Pro 3000",
      "status": "OPERATIONAL",
      "install_date": "2024-01-15T06:00:00Z"
    }
  ],
  "batches": [
    {
      "id": "batch-00000000-0001-0000-0000-000000000001",
      "tenant_id": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",  // ← replaced
      "batch_number": "BATCH-20251213-000001",
      "product_id": "prod-00000000-0001-0000-0000-000000000001",   // ref to inventory
      "product_name": "Baguette Tradicional",
      "recipe_id": "recipe-00000000-0001-0000-0000-000000000001",  // ref to recipes
      "planned_start_time": "2025-12-13T08:00:00Z",  // ← adjusted by demo_dates
      "planned_end_time": "2025-12-13T11:00:00Z",
      "planned_quantity": 100.0,
      "planned_duration_minutes": 180,
      "status": "PENDING",
      "priority": "HIGH",
      "equipment_used": [
        "eq-00000000-0001-0000-0000-000000000001"  // ref to equipment (same service)
      ]
    }
  ]
}
```
**Transformation during cloning:**

```python
# In production/app/api/internal_demo.py
virtual_tenant_id = uuid.UUID("new-virtual-uuid-here")

# Transform equipment
new_equipment_id = uuid.uuid4()
equipment_id_map[old_equipment_id] = new_equipment_id

new_equipment = Equipment(
    id=new_equipment_id,
    tenant_id=virtual_tenant_id,  # ← REPLACED
    # ... remaining fields copied
)

# Transform batch
new_batch = ProductionBatch(
    id=uuid.uuid4(),
    tenant_id=virtual_tenant_id,  # ← REPLACED
    batch_number=f"BATCH-{datetime.now():%Y%m%d}-{random_suffix}",  # New number
    product_id=batch_data["product_id"],  # ← keep cross-service ref
    recipe_id=batch_data["recipe_id"],    # ← keep cross-service ref
    equipment_used=[equipment_id_map[eq_id] for eq_id in batch_data["equipment_used"]],  # ← remap internal IDs
    # ...
)
```
---

<a name="orquestador"></a>
## 🔄 ORCHESTRATOR SEED STATE

### Guaranteed Initial Conditions

The demo's initial state is not arbitrary: it reflects the **output of the last Orchestrator run in a simulated production environment**.

| System | Expected State | Rationale |
|--------|----------------|-----------|
| **Inventory** | - 3 ingredients with `stock < reorder_point`<br>- 2 with a pending PO (no alert fires)<br>- 1 without a PO (fires a red alert) | Operational realism - supply problems |
| **Production** | - 1 "overdue" batch (planned start: 2h ago, status: PENDING)<br>- 1 "in-progress" batch (actual start: 1h45m ago, status: IN_PROGRESS)<br>- 2 scheduled for today (future) | Simulates daily operation with a variety of states |
| **Procurement** | - 2 pending POs (1 AI-approved, 1 in human review)<br>- 1 late PO (expected delivery: 4h ago)<br>- 3 completed POs (delivered 1-7 days ago) | Decision-making and tracking scenarios |
| **Quality** | - 3 completed checks (2 PASSED, 1 FAILED → batch quarantined)<br>- 1 pending check (batch in QUALITY_CHECK) | Realistic quality flow |
| **Orders** | - 5 customers with recent orders (last 30 days)<br>- 2 orders pending delivery (delivery_date: today/tomorrow)<br>- 8 completed orders | Realistic commercial activity |
| **Forecasting** | - Forecasts for the next 7 days (generated "yesterday")<br>- Historical accuracy: 88-92% (computed against real sales) | Credible AI/ML data |
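These invariants are exactly what a post-clone smoke test should assert. A sketch under assumed read APIs (the endpoint paths and query parameter here are hypothetical; adapt to the real services):

```python
# Hypothetical post-clone smoke test for the guaranteed seed state.
import httpx

async def verify_seed_state(base_url: str, tenant_id: str, token: str) -> None:
    """Assert a few of the guaranteed initial conditions for a fresh session."""
    headers = {"Authorization": f"Bearer {token}", "X-Tenant-ID": tenant_id}
    async with httpx.AsyncClient(base_url=base_url, headers=headers) as client:
        batches = (await client.get("/api/production/batches")).json()
        statuses = [b["status"] for b in batches]
        assert statuses.count("IN_PROGRESS") >= 1, "expected one in-progress batch"
        assert statuses.count("PENDING") >= 3, "expected overdue + upcoming batches"

        low_stock = (await client.get(
            "/api/inventory/ingredients",
            params={"below_reorder_point": "true"}  # hypothetical filter
        )).json()
        assert len(low_stock) == 3, "expected exactly 3 ingredients under reorder point"
```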
### Data for Specific Edge Cases

**File: `shared/demo/fixtures/professional/06-production.json` (example)**

```json
{
  "batches": [
    {
      "id": "batch-edge-overdue-0001",
      "tenant_id": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
      "batch_number": "BATCH-OVERDUE-0001",
      "product_id": "prod-baguette-traditional",
      "product_name": "Baguette Tradicional",
      "recipe_id": "recipe-baguette-traditional",
      "planned_start_time": "BASE_TS - 2h",  // marker - resolved during cloning
      "planned_end_time": "BASE_TS + 1h",
      "planned_quantity": 100.0,
      "planned_duration_minutes": 180,
      "actual_start_time": null,
      "actual_end_time": null,
      "status": "PENDING",
      "priority": "URGENT",
      "equipment_used": ["eq-oven-main"],
      "reasoning_data": {
        "delay_reason": "equipment_maintenance",
        "delay_reason_i18n_key": "production.delay.equipment_maintenance"
      }
    },
    {
      "id": "batch-edge-in-progress-0001",
      "batch_number": "BATCH-IN-PROGRESS-0001",
      "product_id": "prod-croissant-butter",
      "product_name": "Croissant de Mantequilla",
      "recipe_id": "recipe-croissant-butter",
      "planned_start_time": "BASE_TS - 1h45m",
      "planned_end_time": "BASE_TS + 1h15m",
      "planned_quantity": 50.0,
      "planned_duration_minutes": 180,
      "actual_start_time": "BASE_TS - 1h45m",
      "actual_end_time": null,
      "actual_quantity": null,
      "status": "IN_PROGRESS",
      "priority": "HIGH",
      "current_process_stage": "BAKING",
      "equipment_used": ["eq-oven-main"],
      "staff_assigned": ["50000000-0000-0000-0000-000000000001"]
    },
    {
      "id": "batch-edge-upcoming-0001",
      "batch_number": "BATCH-UPCOMING-0001",
      "product_id": "prod-whole-wheat-bread",
      "product_name": "Pan Integral",
      "recipe_id": "recipe-whole-wheat",
      "planned_start_time": "BASE_TS + 1h30m",
      "planned_end_time": "BASE_TS + 4h30m",
      "planned_quantity": 80.0,
      "planned_duration_minutes": 180,
      "status": "PENDING",
      "priority": "MEDIUM",
      "equipment_used": ["eq-oven-secondary"]
    }
  ]
}
```
**Note:** the `BASE_TS ± Xh` markers are resolved during cloning using `calculate_edge_case_times()`.
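For marker-bearing fields not covered by `calculate_edge_case_times()`, a generic parser is one way to resolve these strings against the session's base reference; a sketch (hypothetical helper, not part of the codebase):

```python
# Hypothetical resolver for "BASE_TS ± [Nd][Nh][Nm]" markers found in fixture JSON.
import re
from datetime import datetime, timedelta

_MARKER = re.compile(r"^BASE_TS\s*([+-])\s*(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?$")

def resolve_marker(value: str, base_ts: datetime) -> datetime:
    """Turn a marker such as 'BASE_TS - 1h45m' into a concrete datetime."""
    m = _MARKER.match(value)
    if not m:
        raise ValueError(f"not a BASE_TS marker: {value!r}")
    sign = 1 if m.group(1) == "+" else -1
    days, hours, minutes = (int(g or 0) for g in m.groups()[1:])
    return base_ts + sign * timedelta(days=days, hours=hours, minutes=minutes)
```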
---

<a name="limpieza"></a>
## 🧹 SESSION CLEANUP — ATOMICITY AND RESILIENCE

### Primary Trigger

**API Endpoint:**
```
DELETE /api/demo/sessions/{session_id}
```

**Implementation:**
```python
# services/demo_session/app/api/sessions.py
@router.delete("/{session_id}")
async def destroy_demo_session(
    session_id: str,
    db: AsyncSession = Depends(get_db)
):
    """
    Destroy a demo session and all associated data.
    This triggers parallel deletion across all services.
    """
    session = await get_session_by_id(db, session_id)

    # Update status to DESTROYING
    session.status = DemoSessionStatus.DESTROYING
    await db.commit()

    # Trigger cleanup
    cleanup_service = DemoCleanupService(redis_manager=redis)
    result = await cleanup_service.cleanup_session(session)

    if result["success"]:
        session.status = DemoSessionStatus.DESTROYED
    else:
        session.status = DemoSessionStatus.FAILED
        session.error_details = result["errors"]

    await db.commit()

    return {
        "session_id": session_id,
        "status": session.status,
        "records_deleted": result.get("total_deleted", 0),
        "duration_ms": result.get("duration_ms", 0)
    }
```
### Parallel Per-Service Cleanup

**Reference file:** `services/demo_session/app/services/cleanup_service.py` (to be created, modeled on clone_orchestrator)
```python
# services/demo_session/app/services/cleanup_service.py
import asyncio
import time
from typing import Any, Dict
from uuid import UUID

import httpx


class DemoCleanupService:
    """Orchestrates parallel demo data deletion via direct HTTP calls"""

    def __init__(self, redis_manager=None, internal_api_key: str = ""):
        self.redis_manager = redis_manager
        self.internal_api_key = internal_api_key

    async def cleanup_session(self, session: DemoSession) -> Dict[str, Any]:
        """
        Delete all data for a demo session across all services.

        Returns:
            {
                "success": bool,
                "total_deleted": int,
                "duration_ms": int,
                "details": {service: {records_deleted, duration_ms}},
                "errors": []
            }
        """
        start_time = time.time()
        virtual_tenant_id = session.virtual_tenant_id

        # Define services to clean (same as cloning)
        services = [
            ServiceDefinition("tenant", "http://tenant-service:8000", required=True),
            ServiceDefinition("auth", "http://auth-service:8001", required=True),
            ServiceDefinition("inventory", "http://inventory-service:8002"),
            ServiceDefinition("production", "http://production-service:8003"),
            # ... etc
        ]

        # Delete from all services in parallel
        tasks = [
            self._delete_from_service(svc, virtual_tenant_id)
            for svc in services
        ]

        results = await asyncio.gather(*tasks, return_exceptions=True)

        # Aggregate results
        total_deleted = 0
        details = {}
        errors = []

        for svc, result in zip(services, results):
            if isinstance(result, Exception):
                errors.append(f"{svc.name}: {str(result)}")
                details[svc.name] = {"status": "error", "error": str(result)}
            else:
                total_deleted += result.get("records_deleted", {}).get("total", 0)
                details[svc.name] = result

        # Delete from Redis
        await self._delete_redis_cache(virtual_tenant_id)

        # Delete child tenants if enterprise
        if session.demo_account_type == "enterprise":
            child_metadata = session.session_metadata.get("children", [])
            for child in child_metadata:
                child_tenant_id = child["virtual_tenant_id"]
                # Reuses _delete_from_service across all services for each child
                await self._delete_from_all_services(child_tenant_id)

        duration_ms = int((time.time() - start_time) * 1000)

        return {
            "success": len(errors) == 0,
            "total_deleted": total_deleted,
            "duration_ms": duration_ms,
            "details": details,
            "errors": errors
        }

    async def _delete_from_service(
        self,
        service: ServiceDefinition,
        virtual_tenant_id: UUID
    ) -> Dict[str, Any]:
        """Delete all data from a single service"""
        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.delete(
                f"{service.url}/internal/demo/tenant/{virtual_tenant_id}",
                headers={"X-Internal-API-Key": self.internal_api_key}
            )

            if response.status_code == 200:
                return response.json()
            elif response.status_code == 404:
                # Already deleted or never existed - idempotent
                return {
                    "service": service.name,
                    "status": "not_found",
                    "records_deleted": {"total": 0}
                }
            else:
                raise Exception(f"HTTP {response.status_code}: {response.text}")

    async def _delete_redis_cache(self, virtual_tenant_id: UUID):
        """Delete all Redis keys for a virtual tenant"""
        pattern = f"*:{virtual_tenant_id}:*"
        keys = await self.redis_manager.keys(pattern)
        if keys:
            await self.redis_manager.delete(*keys)
```
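
The `ServiceDefinition` value object used above is not defined in this document; a minimal sketch consistent with how it is constructed and read here might be:

```python
# Minimal sketch of the ServiceDefinition used by the cleanup (and clone)
# orchestrators; field names follow the calls above, but the actual class may
# carry more metadata (timeouts, retries, etc.).
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDefinition:
    name: str
    url: str
    required: bool = False  # a required service failing aborts the whole operation
```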

### Cleanup Endpoint in Each Service

**Standard contract:**

```python
# services/{service}/app/api/internal_demo.py
@router.delete(
    "/tenant/{virtual_tenant_id}",
    dependencies=[Depends(verify_internal_key)]
)
async def delete_demo_tenant_data(
    virtual_tenant_id: UUID,
    db: AsyncSession = Depends(get_db)
):
    """
    Delete all demo data for a virtual tenant.
    This endpoint is idempotent - safe to call multiple times.

    Returns:
        {
            "service": "production",
            "status": "deleted",
            "virtual_tenant_id": "uuid-here",
            "records_deleted": {
                "batches": 50,
                "equipment": 12,
                "quality_checks": 183,
                "total": 245
            },
            "duration_ms": 567
        }
    """
    start_time = time.time()

    records_deleted = {
        "batches": 0,
        "equipment": 0,
        "quality_checks": 0,
        "quality_templates": 0,
        "production_schedules": 0,
        "production_capacity": 0
    }

    try:
        # Delete in reverse dependency order

        # 1. Delete quality checks (depends on batches)
        result = await db.execute(
            delete(QualityCheck)
            .where(QualityCheck.tenant_id == virtual_tenant_id)
        )
        records_deleted["quality_checks"] = result.rowcount

        # 2. Delete production batches
        result = await db.execute(
            delete(ProductionBatch)
            .where(ProductionBatch.tenant_id == virtual_tenant_id)
        )
        records_deleted["batches"] = result.rowcount

        # 3. Delete equipment
        result = await db.execute(
            delete(Equipment)
            .where(Equipment.tenant_id == virtual_tenant_id)
        )
        records_deleted["equipment"] = result.rowcount

        # 4. Delete quality check templates
        result = await db.execute(
            delete(QualityCheckTemplate)
            .where(QualityCheckTemplate.tenant_id == virtual_tenant_id)
        )
        records_deleted["quality_templates"] = result.rowcount

        # 5. Delete production schedules
        result = await db.execute(
            delete(ProductionSchedule)
            .where(ProductionSchedule.tenant_id == virtual_tenant_id)
        )
        records_deleted["production_schedules"] = result.rowcount

        # 6. Delete production capacity records
        result = await db.execute(
            delete(ProductionCapacity)
            .where(ProductionCapacity.tenant_id == virtual_tenant_id)
        )
        records_deleted["production_capacity"] = result.rowcount

        await db.commit()

        records_deleted["total"] = sum(records_deleted.values())

        logger.info(
            "demo_data_deleted",
            service="production",
            virtual_tenant_id=str(virtual_tenant_id),
            records_deleted=records_deleted
        )

        return {
            "service": "production",
            "status": "deleted",
            "virtual_tenant_id": str(virtual_tenant_id),
            "records_deleted": records_deleted,
            "duration_ms": int((time.time() - start_time) * 1000)
        }

    except Exception as e:
        await db.rollback()
        logger.error(
            "demo_data_deletion_failed",
            service="production",
            virtual_tenant_id=str(virtual_tenant_id),
            error=str(e)
        )
        raise HTTPException(
            status_code=500,
            detail=f"Failed to delete demo data: {str(e)}"
        )
```
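
Because the contract promises idempotency, a check along the following lines can assert that a repeated delete still succeeds and reports zero rows. This is a sketch: the service URL, API key, and tenant id are placeholders for a test environment.

```python
# Sketch of an idempotency check against the per-service endpoint; the
# base URL, API key, and tenant id below are test-environment placeholders.
import httpx

SERVICE_URL = "http://production-service:8003"
HEADERS = {"X-Internal-API-Key": "test-internal-key"}

async def assert_delete_idempotent(virtual_tenant_id: str) -> None:
    async with httpx.AsyncClient(base_url=SERVICE_URL, headers=HEADERS) as client:
        first = await client.delete(f"/internal/demo/tenant/{virtual_tenant_id}")
        second = await client.delete(f"/internal/demo/tenant/{virtual_tenant_id}")
        assert first.status_code == 200
        assert second.status_code == 200
        # The second call must find nothing left to delete
        assert second.json()["records_deleted"]["total"] == 0
```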

### Audit Log

**Model:**

```python
# services/demo_session/app/models/cleanup_audit.py
class DemoCleanupAudit(Base):
    """Audit log for demo session cleanup operations"""
    __tablename__ = "demo_cleanup_audit"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    session_id = Column(String(255), nullable=False, index=True)
    virtual_tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)

    started_at = Column(DateTime(timezone=True), nullable=False)
    completed_at = Column(DateTime(timezone=True), nullable=True)
    duration_ms = Column(Integer, nullable=True)

    total_records_deleted = Column(Integer, default=0)
    service_results = Column(JSON, nullable=True)  # Details per service

    status = Column(String(50), nullable=False)  # IN_PROGRESS, SUCCESS, PARTIAL, FAILED
    error_details = Column(JSON, nullable=True)

    retry_count = Column(Integer, default=0)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
```

**Logging:**

```python
# When cleanup starts
audit = DemoCleanupAudit(
    session_id=session.session_id,
    virtual_tenant_id=session.virtual_tenant_id,
    started_at=datetime.now(timezone.utc),
    status="IN_PROGRESS"
)
db.add(audit)
await db.commit()

# On completion
audit.completed_at = datetime.now(timezone.utc)
audit.duration_ms = int((audit.completed_at - audit.started_at).total_seconds() * 1000)
audit.total_records_deleted = cleanup_result["total_deleted"]
audit.service_results = cleanup_result["details"]
audit.status = "SUCCESS" if cleanup_result["success"] else "PARTIAL"
audit.error_details = cleanup_result.get("errors")
await db.commit()
```

### Automatic Cleanup CronJob

**Implementation:**

```python
# services/demo_session/app/services/scheduled_cleanup.py
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from datetime import datetime, timezone, timedelta
from sqlalchemy import select

scheduler = AsyncIOScheduler()

@scheduler.scheduled_job('cron', hour=2, minute=0)  # 02:00 UTC daily
async def cleanup_expired_sessions():
    """
    Find and clean up expired demo sessions.
    Runs daily at 2 AM UTC.
    """
    logger.info("scheduled_cleanup_started")

    async with get_async_session() as db:
        # Find expired sessions that haven't been destroyed
        one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)

        result = await db.execute(
            select(DemoSession)
            .where(DemoSession.expires_at < one_hour_ago)
            .where(DemoSession.status.in_([
                DemoSessionStatus.READY,
                DemoSessionStatus.PARTIAL,
                DemoSessionStatus.FAILED
            ]))
            .limit(100)  # Process in batches
        )
        expired_sessions = result.scalars().all()

        logger.info(
            "expired_sessions_found",
            count=len(expired_sessions)
        )

        cleanup_service = DemoCleanupService()

        success_count = 0
        partial_count = 0
        failed_count = 0

        for session in expired_sessions:
            try:
                result = await cleanup_service.cleanup_session(session)

                if result["success"]:
                    session.status = DemoSessionStatus.DESTROYED
                    success_count += 1
                else:
                    session.status = DemoSessionStatus.PARTIAL
                    session.error_details = result.get("errors")
                    partial_count += 1

                    # Retry limit
                    retry_count = session.cleanup_retry_count or 0
                    if retry_count < 3:
                        session.cleanup_retry_count = retry_count + 1
                        logger.warning(
                            "cleanup_partial_will_retry",
                            session_id=session.session_id,
                            retry_count=session.cleanup_retry_count
                        )
                    else:
                        logger.error(
                            "cleanup_max_retries_exceeded",
                            session_id=session.session_id
                        )
                        # Notify ops team
                        await notify_ops_team(
                            f"Demo cleanup failed after 3 retries: {session.session_id}"
                        )

            except Exception as e:
                logger.error(
                    "cleanup_exception",
                    session_id=session.session_id,
                    error=str(e)
                )
                session.status = DemoSessionStatus.FAILED
                session.error_details = {"exception": str(e)}
                failed_count += 1

        await db.commit()

        logger.info(
            "scheduled_cleanup_completed",
            total=len(expired_sessions),
            success=success_count,
            partial=partial_count,
            failed=failed_count
        )

        # Alert if >5% failure rate
        if len(expired_sessions) > 0:
            failure_rate = (partial_count + failed_count) / len(expired_sessions)
            if failure_rate > 0.05:
                await notify_ops_team(
                    f"High demo cleanup failure rate: {failure_rate:.1%} "
                    f"({partial_count + failed_count}/{len(expired_sessions)})"
                )

# Start scheduler on app startup
def start_scheduled_cleanup():
    scheduler.start()
    logger.info("cleanup_scheduler_started")
```

**Start on app startup:**

```python
# services/demo_session/app/main.py
from app.services.scheduled_cleanup import start_scheduled_cleanup

@app.on_event("startup")
async def startup_event():
    start_scheduled_cleanup()
    logger.info("application_started")
```

---

<a name="escenarios"></a>
## 🏢 DEMO SCENARIOS — DETAILED SPECIFICATION BY TIER

### 🥖 Professional Tier: "Panadería Artesana Madrid"

**Base configuration:** [`services/demo_session/app/core/config.py:39-46`](services/demo_session/app/core/config.py#L39-L46)

| Dimension | Detail |
|-----------|--------|
| **Location** | Madrid, Spain (fictitious coordinates) |
| **Model** | Shop + integrated bakery workshop (standalone) |
| **Team** | 6 people:<br>- María García (Owner/Admin)<br>- Juan Panadero (Baker)<br>- Ana Ventas (Sales)<br>- Pedro Calidad (Quality Control)<br>- Carlos Almacén (Warehouse)<br>- Isabel Producción (Production Manager) |
| **Products** | ~24 SKUs:<br>- 12 breads (baguette, whole wheat, rye, payés, etc.)<br>- 8 viennoiserie (croissant, napolitana, ensaimada, etc.)<br>- 4 patisserie (tarts, cakes, cookies, brownies) |
| **Recipes** | 18-24 active (aligned with products) |
| **Machinery** | - 1 wood-fired oven (OPERATIONAL)<br>- 1 secondary oven (OPERATIONAL)<br>- 2 mixers (1 in MAINTENANCE)<br>- 1 dough sheeter (OPERATIONAL)<br>- 1 cutter (OPERATIONAL) |
| **Suppliers** | 5-6:<br>- Harinas del Norte (flour)<br>- Lácteos Gipuzkoa (milk/butter)<br>- Frutas Frescas (fruit)<br>- Sal de Mar (salt)<br>- Envases Pro (packaging)<br>- Levaduras Spain (yeast) |
| **Operational data** | **Critical stock:**<br>- Flour: 5 kg (reorder_point: 50 kg) → pending PO<br>- Yeast: 0 kg (reorder_point: 10 kg) → NO PO → ❗ ALERT<br>- Butter: 2 kg (reorder_point: 15 kg) → approved PO<br><br>**Today's batches:**<br>- OVERDUE: Baguette (should have started 2h ago) → ⚠️ ALERT<br>- IN_PROGRESS: Croissant (started 1h45m ago, stage: BAKING)<br>- UPCOMING: Whole Wheat Bread (starts in 1h30m)<br><br>**Active alerts:**<br>- 🔴 "Yeast out of stock — create urgent PO"<br>- 🟡 "Production delayed — Baguette" |
| **Dashboard KPIs** | - Production compliance: 87%<br>- Critical stock: 3 ingredients<br>- Open alerts: 2<br>- Forecasting accuracy: 92% |

### 🏢 Enterprise Tier: "Panadería Central Group"

**Base configuration:** [`services/demo_session/app/core/config.py:47-72`](services/demo_session/app/core/config.py#L47-L72)

| Dimension | Detail |
|-----------|--------|
| **Structure** | 1 central workshop (parent tenant) + 3 shops (child outlets):<br>- **Madrid Centro**<br>- **Barcelona Gràcia**<br>- **Valencia Ruzafa** |
| **Team** | ~20 people:<br>**Central:**<br>- Carlos Martínez (Owner/Operations Director)<br>- Roberto Producción (Production Manager)<br>- Marta Calidad (Quality Control)<br>- Luis Compras (Procurement)<br>- Miguel Mantenimiento (Maintenance)<br><br>**Per shop (5 people each):**<br>- Manager<br>- 2 salespeople<br>- 1 baker<br>- 1 warehouse |
| **Production** | Centralized in the main workshop:<br>- Produces for all 3 shops<br>- Distributes via overnight routes<br>- Capacity: 500 kg/day |
| **Logistics** | **Daily routes:**<br>- Route 1: Madrid → Barcelona (departure: 23:00, arrival: 05:00)<br>- Route 2: Madrid → Valencia (departure: 01:00, arrival: 06:00)<br>- Local Madrid delivery (departure: 05:00) |
| **Per-shop data** | **Madrid Centro:**<br>- High turnover<br>- Tight (just-in-time) stock<br>- Orders: 15-20/day<br><br>**Barcelona Gràcia:**<br>- High tourist demand<br>- Premium products (French-butter croissants)<br>- ❗ Active alert: "Low stock of premium brioche"<br><br>**Valencia Ruzafa:**<br>- Growing<br>- More conservative stock<br>- Orders: 10-15/day |
| **Multi-site alerts** | - 🔴 BCN: "Low stock of premium brioche — PO created by AI"<br>- 🟠 Workshop: "Oven capacity at 95% — bottleneck risk"<br>- 🟡 Logistics: "Barcelona route delayed — ETA +30 min (A-2 traffic)" |
| **Enterprise dashboard** | **Alert map:**<br>- Geographic view with per-shop markers<br>- Drill-down by location<br><br>**KPI comparison:**<br>- Production per shop<br>- Stock per location<br>- Margin per shop<br><br>**Aggregated forecasting:**<br>- Accuracy: 88%<br>- Next week: +12% forecast demand<br><br>**AI automation ROI:**<br>- Waste reduction: 17%<br>- Procurement savings: €1,200/month |
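
For orientation, the per-tier configuration referenced above might be shaped roughly as in the sketch below. The field names are illustrative assumptions; the authoritative structure lives in `services/demo_session/app/core/config.py`.

```python
# Illustrative shape only - the real definition lives in
# services/demo_session/app/core/config.py and may use different field names.
DEMO_ACCOUNTS = {
    "professional": {
        "base_tenant_name": "Panadería Artesana Madrid",
        "session_duration_minutes": 30,
        "expected_open_alerts": 2,   # yeast stockout + delayed baguette batch
        "child_outlets": [],
    },
    "enterprise": {
        "base_tenant_name": "Panadería Central Group",
        "session_duration_minutes": 30,
        "expected_open_alerts": 4,
        "child_outlets": ["Madrid Centro", "Barcelona Gràcia", "Valencia Ruzafa"],
    },
}
```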

---

<a name="verificacion"></a>
## ✅ TECHNICAL VERIFICATION

### Validation Requirements

| Requirement | How It Is Verified | Tooling/Metric | Success Threshold |
|-------------|--------------------|----------------|-------------------|
| **Determinism** | Run 100 clones of the same tier → compare SHA-256 hashes of all data (per service) | Script `scripts/test/demo_determinism.py` | 100% identical hashes (excluding audit timestamps) |
| **Cross-service coherence** | After cloning, run `CrossServiceIntegrityChecker` → validate all UUID references | Script `scripts/validate_cross_refs.py` (CI/CD) | 0 integrity errors |
| **Professional latency** | P50, P95, P99 of creation time | Prometheus: `demo_session_creation_duration_seconds{tier="professional"}` | P50 < 7s, P95 < 12s, P99 < 15s |
| **Enterprise latency** | P50, P95, P99 of creation time | Prometheus: `demo_session_creation_duration_seconds{tier="enterprise"}` | P50 < 12s, P95 < 18s, P99 < 22s |
| **Temporal realism** | In a session created at 10:00, a batch scheduled for "session + 1.5h" must be exactly 11:30 | Validator `scripts/test/time_delta_validator.py` | 0 deviations > ±1 minute |
| **Immediate alerts** | After creation, `GET /alerts?tenant_id={virtual}&status=open` must return ≥2 alerts | Cypress E2E: `cypress/e2e/demo_session_spec.js` | Professional: ≥2 alerts<br>Enterprise: ≥4 alerts |
| **Atomic cleanup** | After `DELETE`, query each service's DB directly → 0 records with `tenant_id = virtual_tenant_id` | Script `scripts/test/cleanup_verification.py` | 0 residual records across all tables |
| **Scalability** | 50 concurrent sessions → success rate, no timeouts | Locust: `locust -f scripts/load_test/demo_load_test.py --users 50 --spawn-rate 5` | ≥99% success rate<br>0 HTTP timeouts |
| **Cleanup idempotency** | Call `DELETE` 3 consecutive times on the same session_id → all return 200, no errors | Script `scripts/test/idempotency_test.py` | 3/3 successful calls |
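
As a sketch of how a cross-reference check can be built on the `HEAD /internal/demo/exists` contract listed in Appendix A (the real `CrossServiceIntegrityChecker` interface is not shown in this document):

```python
# Sketch only: validates that a UUID reference held by one service resolves
# in the owning service, using the HEAD /internal/demo/exists contract from
# Appendix A. The real checker's interface may differ.
import httpx

async def reference_exists(
    service_url: str,
    entity_type: str,
    entity_id: str,
    tenant_id: str,
    api_key: str,
) -> bool:
    """Return True if the owning service knows this entity for this tenant."""
    async with httpx.AsyncClient() as client:
        response = await client.head(
            f"{service_url}/internal/demo/exists",
            params={"entity_type": entity_type, "id": entity_id, "tenant_id": tenant_id},
            headers={"X-Internal-API-Key": api_key},
        )
        return response.status_code == 200

# Example: verify a batch's recipe_id resolves in the recipes service.
# ok = await reference_exists("http://recipes-service:8004", "recipe",
#                             "recipe-croissant-butter", tenant_id, api_key)
```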

### Validation Scripts

#### 1. Determinism

```python
# scripts/test/demo_determinism.py
"""
Test deterministic cloning by creating multiple sessions and comparing data hashes.
"""
import asyncio
import hashlib
import json

import httpx

DEMO_API_URL = "http://localhost:8018"
INTERNAL_API_KEY = "test-internal-key"

async def create_demo_session(tier: str = "professional") -> dict:
    """Create a demo session"""
    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"{DEMO_API_URL}/api/demo/sessions",
            json={"demo_account_type": tier}
        )
        return response.json()

async def get_all_data_from_service(
    service_url: str,
    tenant_id: str
) -> dict:
    """Fetch all data for a tenant from a service"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"{service_url}/internal/demo/export/{tenant_id}",
            headers={"X-Internal-API-Key": INTERNAL_API_KEY}
        )
        return response.json()

def calculate_data_hash(data: dict) -> str:
    """
    Calculate SHA-256 hash of data, excluding audit timestamps.
    """
    # Remove non-deterministic fields
    clean_data = remove_audit_fields(data)

    # Sort keys for consistency
    json_str = json.dumps(clean_data, sort_keys=True)

    return hashlib.sha256(json_str.encode()).hexdigest()

def remove_audit_fields(data: dict) -> dict:
    """Remove created_at, updated_at fields recursively"""
    if isinstance(data, dict):
        return {
            k: remove_audit_fields(v)
            for k, v in data.items()
            if k not in ["created_at", "updated_at", "id"]  # IDs are UUIDs
        }
    elif isinstance(data, list):
        return [remove_audit_fields(item) for item in data]
    else:
        return data

async def test_determinism(tier: str = "professional", iterations: int = 100):
    """
    Test that cloning is deterministic across multiple sessions.
    """
    print(f"Testing determinism for {tier} tier ({iterations} iterations)...")

    services = [
        ("inventory", "http://inventory-service:8002"),
        ("production", "http://production-service:8003"),
        ("recipes", "http://recipes-service:8004"),
    ]

    hashes_by_service = {svc[0]: [] for svc in services}

    for i in range(iterations):
        # Create session
        session = await create_demo_session(tier)
        tenant_id = session["virtual_tenant_id"]

        # Get data from each service
        for service_name, service_url in services:
            data = await get_all_data_from_service(service_url, tenant_id)
            data_hash = calculate_data_hash(data)
            hashes_by_service[service_name].append(data_hash)

        # Cleanup
        async with httpx.AsyncClient() as client:
            await client.delete(f"{DEMO_API_URL}/api/demo/sessions/{session['session_id']}")

        if (i + 1) % 10 == 0:
            print(f"  Completed {i + 1}/{iterations} iterations")

    # Check consistency
    all_consistent = True
    for service_name, hashes in hashes_by_service.items():
        unique_hashes = set(hashes)
        if len(unique_hashes) == 1:
            print(f"✅ {service_name}: All {iterations} hashes identical")
        else:
            print(f"❌ {service_name}: {len(unique_hashes)} different hashes found!")
            all_consistent = False

    if all_consistent:
        print("\n✅ DETERMINISM TEST PASSED")
        return 0
    else:
        print("\n❌ DETERMINISM TEST FAILED")
        return 1

if __name__ == "__main__":
    exit_code = asyncio.run(test_determinism())
    exit(exit_code)
```

#### 2. Latency and Scalability (Locust)

```python
# scripts/load_test/demo_load_test.py
"""
Load test for demo session creation using Locust.

Usage:
    locust -f demo_load_test.py --users 50 --spawn-rate 5 --run-time 5m
"""
from locust import HttpUser, task, between

class DemoSessionUser(HttpUser):
    wait_time = between(1, 3)  # Wait 1-3s between tasks

    def on_start(self):
        """Called when a user starts"""
        self.session_id = None

    @task(3)
    def create_professional_session(self):
        """Create a professional tier demo session"""
        with self.client.post(
            "/api/demo/sessions",
            json={"demo_account_type": "professional"},
            catch_response=True
        ) as response:
            if response.status_code == 200:
                data = response.json()
                self.session_id = data.get("session_id")

                # Check if cloning completed successfully
                if data.get("status") == "READY":
                    response.success()
                else:
                    response.failure(f"Session not ready: {data.get('status')}")
            else:
                response.failure(f"Failed to create session: {response.status_code}")

    @task(1)
    def create_enterprise_session(self):
        """Create an enterprise tier demo session"""
        with self.client.post(
            "/api/demo/sessions",
            json={"demo_account_type": "enterprise"},
            catch_response=True
        ) as response:
            if response.status_code == 200:
                data = response.json()
                self.session_id = data.get("session_id")

                if data.get("status") == "READY":
                    response.success()
                else:
                    response.failure(f"Session not ready: {data.get('status')}")
            else:
                response.failure(f"Failed to create session: {response.status_code}")

    @task(1)
    def get_session_status(self):
        """Get status of current session"""
        if self.session_id:
            self.client.get(f"/api/demo/sessions/{self.session_id}/status")

    def on_stop(self):
        """Called when a user stops - cleanup session"""
        if self.session_id:
            self.client.delete(f"/api/demo/sessions/{self.session_id}")
```

**Run:**

```bash
# Load test: 50 concurrent users
locust -f scripts/load_test/demo_load_test.py \
  --host http://localhost:8018 \
  --users 50 \
  --spawn-rate 5 \
  --run-time 5m \
  --html reports/demo_load_test_report.html

# Analyze results:
# P95 latency must be < 12s (professional), < 18s (enterprise)
# Failure rate must be < 1%
```

### Prometheus Metrics

**File:** `services/demo_session/app/monitoring/metrics.py`

```python
from prometheus_client import Counter, Histogram, Gauge

# Counters
demo_sessions_created_total = Counter(
    'demo_sessions_created_total',
    'Total number of demo sessions created',
    ['tier', 'status']
)

demo_sessions_deleted_total = Counter(
    'demo_sessions_deleted_total',
    'Total number of demo sessions deleted',
    ['tier', 'status']
)

demo_cloning_errors_total = Counter(
    'demo_cloning_errors_total',
    'Total number of cloning errors',
    ['tier', 'service', 'error_type']
)

# Histograms (for latency percentiles)
demo_session_creation_duration_seconds = Histogram(
    'demo_session_creation_duration_seconds',
    'Duration of demo session creation',
    ['tier'],
    buckets=[1, 2, 5, 7, 10, 12, 15, 18, 20, 25, 30, 40, 50, 60]
)

demo_service_clone_duration_seconds = Histogram(
    'demo_service_clone_duration_seconds',
    'Duration of individual service cloning',
    ['tier', 'service'],
    buckets=[0.5, 1, 2, 3, 5, 10, 15, 20, 30, 40, 50]
)

demo_session_cleanup_duration_seconds = Histogram(
    'demo_session_cleanup_duration_seconds',
    'Duration of demo session cleanup',
    ['tier'],
    buckets=[0.5, 1, 2, 5, 10, 15, 20, 30]
)

# Gauges
demo_sessions_active = Gauge(
    'demo_sessions_active',
    'Number of currently active demo sessions',
    ['tier']
)

demo_sessions_pending_cleanup = Gauge(
    'demo_sessions_pending_cleanup',
    'Number of demo sessions pending cleanup'
)
```
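
To tie these metrics to the creation flow, the orchestrator can wrap the clone call roughly as below; `clone_all_services` is a placeholder for the real orchestrator entry point, not an actual function in the codebase:

```python
# Sketch of instrumenting session creation with the metrics above;
# clone_all_services() is a stand-in for the real orchestrator call.
from app.monitoring.metrics import (
    demo_session_creation_duration_seconds,
    demo_sessions_active,
    demo_sessions_created_total,
)

async def create_session_instrumented(tier: str):
    # Observe wall-clock creation time into the latency histogram
    with demo_session_creation_duration_seconds.labels(tier=tier).time():
        result = await clone_all_services(tier)  # placeholder
    status = "READY" if result["success"] else "FAILED"
    demo_sessions_created_total.labels(tier=tier, status=status).inc()
    if result["success"]:
        demo_sessions_active.labels(tier=tier).inc()
    return result
```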

**Example queries:**

```promql
# P95 latency for professional tier
histogram_quantile(0.95,
  rate(demo_session_creation_duration_seconds_bucket{tier="professional"}[5m])
)

# P99 latency for enterprise tier
histogram_quantile(0.99,
  rate(demo_session_creation_duration_seconds_bucket{tier="enterprise"}[5m])
)

# Error rate by service
rate(demo_cloning_errors_total[5m])

# Active sessions by tier
demo_sessions_active
```

---

## 📚 APPENDICES

### A. Complete Internal Endpoints

#### Tenant Service

```
POST   /internal/demo/clone
POST   /internal/demo/create-child (enterprise only)
DELETE /internal/demo/tenant/{virtual_tenant_id}
HEAD   /internal/demo/exists?entity_type=tenant&id={id}&tenant_id={tenant_id}
GET    /internal/demo/export/{tenant_id} (for testing)
```

#### Other Services (Auth, Inventory, Production, etc.)

```
POST   /internal/demo/clone
DELETE /internal/demo/tenant/{virtual_tenant_id}
HEAD   /internal/demo/exists?entity_type={type}&id={id}&tenant_id={tenant_id}
GET    /internal/demo/export/{tenant_id} (for testing)
```

#### Production Service (post-clone triggers)

```
POST /internal/alerts/trigger (trigger production alerts)
POST /internal/insights/yield/trigger (trigger yield insights)
```

#### Inventory Service (post-clone triggers)

```
POST /internal/alerts/trigger (trigger inventory alerts)
POST /internal/insights/safety-stock/trigger
```

#### Procurement Service (post-clone triggers)

```
POST /internal/delivery-tracking/trigger
POST /internal/insights/price/trigger
```

### B. Environment Variables

```bash
# Demo Session Service
DEMO_SESSION_DATABASE_URL=postgresql+asyncpg://user:pass@localhost:5432/demo_session_db
DEMO_SESSION_DURATION_MINUTES=30
DEMO_SESSION_MAX_EXTENSIONS=3
DEMO_SESSION_CLEANUP_INTERVAL_MINUTES=60

# Internal API Key (shared across services)
INTERNAL_API_KEY=your-secret-internal-key

# Service URLs (Kubernetes defaults)
TENANT_SERVICE_URL=http://tenant-service:8000
AUTH_SERVICE_URL=http://auth-service:8001
INVENTORY_SERVICE_URL=http://inventory-service:8002
PRODUCTION_SERVICE_URL=http://production-service:8003
RECIPES_SERVICE_URL=http://recipes-service:8004
PROCUREMENT_SERVICE_URL=http://procurement-service:8005
SUPPLIERS_SERVICE_URL=http://suppliers-service:8006
ORDERS_SERVICE_URL=http://orders-service:8007
SALES_SERVICE_URL=http://sales-service:8008
FORECASTING_SERVICE_URL=http://forecasting-service:8009
ORCHESTRATOR_SERVICE_URL=http://orchestrator-service:8012

# Redis
REDIS_URL=redis://redis:6379/0
REDIS_KEY_PREFIX=demo:session
REDIS_SESSION_TTL=1800  # 30 minutes

# Feature Flags
ENABLE_DEMO_AI_INSIGHTS=true
ENABLE_DEMO_ALERT_GENERATION=true
```
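
As a sketch, these variables could be surfaced to the service through a settings class like the following; the use of `pydantic_settings` and the field subset shown are assumptions, and the authoritative settings live in `services/demo_session/app/core/config.py`:

```python
# Illustrative subset of a settings class backed by the environment variables
# above; the real version lives in services/demo_session/app/core/config.py.
from pydantic_settings import BaseSettings

class DemoSessionSettings(BaseSettings):
    demo_session_database_url: str
    demo_session_duration_minutes: int = 30
    demo_session_max_extensions: int = 3
    internal_api_key: str
    tenant_service_url: str = "http://tenant-service:8000"
    redis_url: str = "redis://redis:6379/0"
    enable_demo_ai_insights: bool = True

settings = DemoSessionSettings()  # reads values from the process environment
```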

### C. Key Database Models

See the reference files:
- [`services/production/app/models/production.py`](services/production/app/models/production.py)
- [`services/inventory/app/models/inventory.py`](services/inventory/app/models/inventory.py)
- [`services/tenant/app/models/tenants.py`](services/tenant/app/models/tenants.py)
- [`services/auth/app/models/users.py`](services/auth/app/models/users.py)
- [`services/demo_session/app/models/demo_session.py`](services/demo_session/app/models/demo_session.py)

### D. Project Reference Files

| File | Description |
|------|-------------|
| [`services/demo_session/app/core/config.py`](services/demo_session/app/core/config.py) | Demo account configuration and service settings |
| [`services/demo_session/app/services/clone_orchestrator.py`](services/demo_session/app/services/clone_orchestrator.py) | Parallel clone orchestrator |
| [`shared/utils/demo_dates.py`](shared/utils/demo_dates.py) | Temporal adjustment utilities |
| [`services/production/app/api/internal_demo.py`](services/production/app/api/internal_demo.py) | Cloning implementation (example) |
| [`SIMPLIFIED_DEMO_SESSION_ARCHITECTURE.md`](SIMPLIFIED_DEMO_SESSION_ARCHITECTURE.md) | Current simplified architecture |

---

## 🎯 CONCLUSION

This document specifies a **production-grade** technical demo system, with:

✅ **Guaranteed integrity** through cross-service validation of UUID references
✅ **Temporal determinism** with dynamic adjustment relative to `session_created_at`
✅ **Realistic edge cases** that automatically generate alerts and insights
✅ **Parallel cloning** in 5-15 seconds (3-6x faster than the previous architecture)
✅ **Atomic cleanup** with idempotency and an audit trail
✅ **CI/CD validation** of JSON schemas and cross-references
✅ **Metrics and observability** with Prometheus

**Result:** Technically impeccable demos that simulate real production environments, without heavy infrastructure, with coherent and reproducible data.

---

**Version:** 1.0
**Date:** 2025-12-13
**Author:** Based on the real bakery-ia architecture
**Maintained by:** Infrastructure & DevOps Team