Clean scripts

This commit is contained in:
Urtzi Alfaro
2025-12-17 16:36:26 +01:00
parent b715a14848
commit 2619a34041
28 changed files with 0 additions and 7500 deletions

View File

@@ -1,171 +0,0 @@
# Enterprise Demo Fixtures Validation Scripts
This directory contains scripts for validating and managing enterprise demo fixtures for the Bakery AI platform.
## Scripts Overview
### 1. `validate_enterprise_demo_fixtures.py`
**Main Validation Script**
Validates all cross-references between JSON fixtures for enterprise demo sessions. Checks that all referenced IDs exist and are consistent across files.
**Features:**
- Validates user-tenant relationships
- Validates parent-child tenant relationships
- Validates product-tenant and product-user relationships
- Validates ingredient-tenant and ingredient-user relationships
- Validates recipe-tenant and recipe-product relationships
- Validates supplier-tenant relationships
- Checks UUID format validity
- Detects duplicate IDs
**Usage:**
```bash
python scripts/validate_enterprise_demo_fixtures.py
```
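All of the checks follow the same pattern: collect the set of known IDs from one fixture, then verify that every reference in another fixture resolves into that set. A minimal, illustrative sketch of the user-tenant check (assuming the `tenant`, `children`, and `users` keys these fixtures use; the function name is hypothetical):
```python
import json
from pathlib import Path

def check_user_tenant_refs(fixtures_root: Path) -> list[str]:
    """Report users whose tenant_id does not resolve to a known tenant."""
    # Known tenants: the parent plus all of its declared children.
    tenant_data = json.loads((fixtures_root / "parent" / "01-tenant.json").read_text())
    tenant_ids = {tenant_data["tenant"]["id"]}
    tenant_ids.update(child["id"] for child in tenant_data.get("children", []))

    # Every parent user must reference one of those tenants.
    auth_data = json.loads((fixtures_root / "parent" / "02-auth.json").read_text())
    return [
        f"user {user['id']} references unknown tenant {user.get('tenant_id')}"
        for user in auth_data.get("users", [])
        if user.get("tenant_id") not in tenant_ids
    ]

if __name__ == "__main__":
    for problem in check_user_tenant_refs(Path("shared/demo/fixtures/enterprise")):
        print(problem)
```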
### 2. `fix_inventory_user_references.py`
**Fix Script for Missing User References**
Replaces the missing user ID `c1a2b3c4-d5e6-47a8-b9c0-d1e2f3a4b5c6` with the production director user ID `ae38accc-1ad4-410d-adbc-a55630908924` in all inventory.json files.
**Usage:**
```bash
python scripts/fix_inventory_user_references.py
```
### 3. `generate_child_auth_files.py`
**Child Auth Files Generator**
Creates auth.json files for each child tenant with appropriate users (manager and staff).
**Usage:**
```bash
python scripts/generate_child_auth_files.py
```
### 4. `demo_fixtures_summary.py`
**Summary Report Generator**
Provides a comprehensive summary of the enterprise demo fixtures status including file sizes, entity counts, and totals.
**Usage:**
```bash
python scripts/demo_fixtures_summary.py
```
### 5. `comprehensive_demo_validation.py`
**Comprehensive Validation Runner**
Runs all validation checks and provides a complete report. This is the recommended script to run for full validation.
**Usage:**
```bash
python scripts/comprehensive_demo_validation.py
```
## Validation Process
### Step 1: Run Comprehensive Validation
```bash
python scripts/comprehensive_demo_validation.py
```
This script will:
1. Run the main validation to check all cross-references
2. Generate a summary report
3. Provide a final status report
### Step 2: Review Results
The validation will output:
- **Success**: All cross-references are valid
- **Failure**: Lists specific issues that need to be fixed
### Step 3: Fix Issues (if any)
If validation fails, you may need to run specific fix scripts:
```bash
# Fix missing user references in inventory
python scripts/fix_inventory_user_references.py
# Generate missing auth files for children
python scripts/generate_child_auth_files.py
```
### Step 4: Re-run Validation
After fixing issues, run the comprehensive validation again:
```bash
python scripts/comprehensive_demo_validation.py
```
## Current Status
**All validation checks are passing!**
The enterprise demo fixtures are ready for use with:
- **6 Tenants** (1 parent + 5 children)
- **25 Users** (15 parent + 10 children)
- **45 Ingredients** (25 parent + 20 children)
- **4 Recipes** (parent only)
- **6 Suppliers** (parent only)
All cross-references have been validated and no missing IDs or broken relationships were detected.
## Fixture Structure
```
shared/demo/fixtures/enterprise/
├── parent/
│ ├── 01-tenant.json # Parent tenant and children definitions
│ ├── 02-auth.json # Parent tenant users
│ ├── 03-inventory.json # Parent inventory (ingredients, products)
│ ├── 04-recipes.json # Parent recipes
│ ├── 05-suppliers.json # Parent suppliers
│ ├── 06-production.json # Parent production data
│ ├── 07-procurement.json # Parent procurement data
│ ├── 08-orders.json # Parent orders
│ ├── 09-sales.json # Parent sales data
│ ├── 10-forecasting.json # Parent forecasting data
│ └── 11-orchestrator.json # Parent orchestrator data
└── children/
├── A0000000-0000-4000-a000-000000000001/ # Madrid - Salamanca
├── B0000000-0000-4000-a000-000000000001/ # Barcelona - Eixample
├── C0000000-0000-4000-a000-000000000001/ # Valencia - Ruzafa
├── D0000000-0000-4000-a000-000000000001/ # Seville - Triana
└── E0000000-0000-4000-a000-000000000001/ # Bilbao - Casco Viejo
```
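Because every tenant directory uses the same numbered naming scheme, a loader can treat the parent and each child uniformly. A rough sketch (the helper name is hypothetical):
```python
import json
from pathlib import Path

def load_tenant_fixtures(tenant_dir: Path) -> dict:
    """Load each numbered fixture file in a tenant directory, keyed by its stem."""
    fixtures = {}
    for fixture_file in sorted(tenant_dir.glob("*.json")):
        # "03-inventory.json" -> key "inventory"
        key = fixture_file.stem.split("-", 1)[1]
        fixtures[key] = json.loads(fixture_file.read_text(encoding="utf-8"))
    return fixtures

base = Path("shared/demo/fixtures/enterprise")
parent = load_tenant_fixtures(base / "parent")
children = {
    child.name: load_tenant_fixtures(child)
    for child in (base / "children").iterdir()
    if child.is_dir()
}
```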
## Key Relationships Validated
1. **User-Tenant**: Users belong to specific tenants
2. **Parent-Child**: Parent tenant has 5 child locations
3. **Ingredient-Tenant**: Ingredients are associated with tenants
4. **Ingredient-User**: Ingredients are created by users
5. **Recipe-Tenant**: Recipes belong to tenants
6. **Recipe-Product**: Recipes produce specific products
7. **Supplier-Tenant**: Suppliers are associated with tenants
## Requirements
- Python 3.7+
- No additional dependencies required
## Maintenance
To add new validation checks (a short sketch follows this list):
1. Add new relationship processing in `validate_enterprise_demo_fixtures.py`
2. Add corresponding validation logic
3. Update the summary script if needed
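As an illustration, a hypothetical supplier-user check (the helper name and error format are assumptions, not part of the current script) would follow the same shape as the existing relationship checks:
```python
def validate_supplier_user_refs(suppliers: list[dict], user_ids: set[str]) -> list[str]:
    """Flag suppliers whose created_by does not point at a known user."""
    return [
        f"supplier {supplier['id']} references unknown user {supplier['created_by']}"
        for supplier in suppliers
        if supplier.get("created_by") and supplier["created_by"] not in user_ids
    ]
```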
## Troubleshooting
If validation fails:
1. Check the specific error messages
2. Verify the referenced IDs exist in the appropriate files
3. Run the specific fix scripts if available
4. Manually correct any remaining issues
5. Re-run validation

View File

@@ -1,188 +0,0 @@
#!/usr/bin/env python3
"""
Utility script to clean old purchase orders and production batches with malformed reasoning_data.
This script deletes pending purchase orders and production batches that were created before
the fix for template variable interpolation. After running this script, trigger a new
orchestration run to create fresh data with properly interpolated variables.
Usage:
python scripts/clean_old_dashboard_data.py --tenant-id <tenant_id>
"""
import asyncio
import argparse
import sys
import os
from datetime import datetime, timedelta
# Add services to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../services/procurement/app'))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../services/production/app'))
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../shared'))
async def clean_old_purchase_orders(tenant_id: str, dry_run: bool = True):
"""Clean old pending purchase orders"""
try:
from app.core.database import AsyncSessionLocal
from sqlalchemy import text
async with AsyncSessionLocal() as session:
# Get count of pending POs
count_result = await session.execute(
text("""
SELECT COUNT(*)
FROM purchase_orders
WHERE tenant_id = :tenant_id
AND status = 'pending_approval'
"""),
{"tenant_id": tenant_id}
)
count = count_result.scalar()
print(f"Found {count} pending purchase orders for tenant {tenant_id}")
if count == 0:
print("No purchase orders to clean.")
return 0
if dry_run:
print(f"DRY RUN: Would delete {count} pending purchase orders")
return count
# Delete pending POs
result = await session.execute(
text("""
DELETE FROM purchase_orders
WHERE tenant_id = :tenant_id
AND status = 'pending_approval'
"""),
{"tenant_id": tenant_id}
)
await session.commit()
deleted = result.rowcount
print(f"✓ Deleted {deleted} pending purchase orders")
return deleted
except Exception as e:
print(f"Error cleaning purchase orders: {e}")
return 0
async def clean_old_production_batches(tenant_id: str, dry_run: bool = True):
"""Clean old pending production batches"""
try:
        # Import production service dependencies.
        # NOTE: both services expose a top-level package named `app`, and Python
        # caches imports, so drop any `app` modules loaded for procurement before
        # importing the production session factory.
        for cached in [name for name in list(sys.modules) if name == 'app' or name.startswith('app.')]:
            del sys.modules[cached]
        sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../services/production/app'))
        from app.core.database import AsyncSessionLocal as ProductionSession
from sqlalchemy import text
async with ProductionSession() as session:
# Get count of pending/scheduled batches
count_result = await session.execute(
text("""
SELECT COUNT(*)
FROM production_batches
WHERE tenant_id = :tenant_id
AND status IN ('pending', 'scheduled')
"""),
{"tenant_id": tenant_id}
)
count = count_result.scalar()
print(f"Found {count} pending/scheduled production batches for tenant {tenant_id}")
if count == 0:
print("No production batches to clean.")
return 0
if dry_run:
print(f"DRY RUN: Would delete {count} pending/scheduled batches")
return count
# Delete pending/scheduled batches
result = await session.execute(
text("""
DELETE FROM production_batches
WHERE tenant_id = :tenant_id
AND status IN ('pending', 'scheduled')
"""),
{"tenant_id": tenant_id}
)
await session.commit()
deleted = result.rowcount
print(f"✓ Deleted {deleted} pending/scheduled production batches")
return deleted
except Exception as e:
print(f"Error cleaning production batches: {e}")
return 0
async def main():
parser = argparse.ArgumentParser(
description='Clean old dashboard data (POs and batches) with malformed reasoning_data'
)
parser.add_argument(
'--tenant-id',
required=True,
help='Tenant ID to clean data for'
)
parser.add_argument(
'--execute',
action='store_true',
help='Actually delete data (default is dry run)'
)
args = parser.parse_args()
dry_run = not args.execute
print("=" * 60)
print("Dashboard Data Cleanup Utility")
print("=" * 60)
print(f"Tenant ID: {args.tenant_id}")
print(f"Mode: {'DRY RUN (no changes will be made)' if dry_run else 'EXECUTE (will delete data)'}")
print("=" * 60)
print()
if not dry_run:
print("⚠️ WARNING: This will permanently delete pending purchase orders and production batches!")
print(" After deletion, you should trigger a new orchestration run to create fresh data.")
response = input(" Are you sure you want to continue? (yes/no): ")
if response.lower() != 'yes':
print("Aborted.")
return
print()
# Clean purchase orders
print("1. Cleaning Purchase Orders...")
po_count = await clean_old_purchase_orders(args.tenant_id, dry_run)
print()
# Clean production batches
print("2. Cleaning Production Batches...")
batch_count = await clean_old_production_batches(args.tenant_id, dry_run)
print()
print("=" * 60)
print("Summary:")
print(f" Purchase Orders: {po_count} {'would be' if dry_run else ''} deleted")
print(f" Production Batches: {batch_count} {'would be' if dry_run else ''} deleted")
print("=" * 60)
if dry_run:
print("\nTo actually delete the data, run with --execute flag:")
print(f" python scripts/clean_old_dashboard_data.py --tenant-id {args.tenant_id} --execute")
else:
print("\n✓ Data cleaned successfully!")
print("\nNext steps:")
print(" 1. Restart the orchestrator service")
print(" 2. Trigger a new orchestration run from the dashboard")
print(" 3. The new POs and batches will have properly interpolated variables")
if __name__ == '__main__':
asyncio.run(main())

View File

@@ -1,246 +0,0 @@
#!/bin/bash
# Complete Cleanup Script for Kind + Colima + Skaffold Environment
# This script removes all resources, images, and configurations
set -e
echo "🧹 Complete Cleanup for Bakery IA Development Environment"
echo "========================================================"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Show what will be cleaned up
show_cleanup_plan() {
echo ""
print_warning "This script will clean up:"
echo " 🚀 Skaffold deployments and resources"
echo " 🐋 Docker images (bakery/* images)"
echo " ☸️ Kubernetes resources in bakery-ia namespace"
echo " 🔒 cert-manager and TLS certificates"
echo " 🌐 NGINX Ingress Controller"
echo " 📦 Kind cluster (bakery-ia-local)"
echo " 🐳 Colima Docker runtime"
echo " 📝 Local certificate files"
echo " 🗂️ /etc/hosts entries (optional)"
echo ""
read -p "❓ Do you want to continue? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_status "Cleanup cancelled"
exit 0
fi
}
# 1. Cleanup Skaffold deployments
cleanup_skaffold() {
print_status "🚀 Cleaning up Skaffold deployments..."
if command -v skaffold &> /dev/null; then
# Try to delete with different profiles
skaffold delete --profile=dev 2>/dev/null || true
skaffold delete --profile=debug 2>/dev/null || true
skaffold delete 2>/dev/null || true
print_success "Skaffold deployments cleaned up"
else
print_warning "Skaffold not found, skipping Skaffold cleanup"
fi
}
# 2. Cleanup Kubernetes resources
cleanup_kubernetes() {
print_status "☸️ Cleaning up Kubernetes resources..."
if command -v kubectl &> /dev/null && kubectl cluster-info &> /dev/null; then
# Delete application namespace and all resources
kubectl delete namespace bakery-ia --ignore-not-found=true
# Delete cert-manager
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml --ignore-not-found=true 2>/dev/null || true
# Delete NGINX Ingress
kubectl delete -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml --ignore-not-found=true 2>/dev/null || true
# Delete any remaining cluster-wide resources
kubectl delete clusterissuers --all --ignore-not-found=true 2>/dev/null || true
kubectl delete clusterroles,clusterrolebindings -l app.kubernetes.io/name=cert-manager --ignore-not-found=true 2>/dev/null || true
print_success "Kubernetes resources cleaned up"
else
print_warning "Kubectl not available or cluster not running, skipping Kubernetes cleanup"
fi
}
# 3. Cleanup Docker images in Colima
cleanup_docker_images() {
print_status "🐋 Cleaning up Docker images..."
if command -v docker &> /dev/null && docker info &> /dev/null; then
# Remove bakery-specific images
print_status "Removing bakery/* images..."
docker images --format "table {{.Repository}}:{{.Tag}}" | grep "^bakery/" | xargs -r docker rmi -f 2>/dev/null || true
# Remove dangling images
print_status "Removing dangling images..."
docker image prune -f 2>/dev/null || true
# Remove unused images (optional - uncomment if you want aggressive cleanup)
# print_status "Removing unused images..."
# docker image prune -a -f 2>/dev/null || true
# Remove build cache
print_status "Cleaning build cache..."
docker builder prune -f 2>/dev/null || true
print_success "Docker images cleaned up"
else
print_warning "Docker not available, skipping Docker cleanup"
fi
}
# 4. Delete Kind cluster
cleanup_kind_cluster() {
print_status "📦 Deleting Kind cluster..."
if command -v kind &> /dev/null; then
# Delete the specific cluster
kind delete cluster --name bakery-ia-local 2>/dev/null || true
# Also clean up any other bakery clusters
kind get clusters 2>/dev/null | grep -E "(bakery|dev)" | xargs -r -I {} kind delete cluster --name {} 2>/dev/null || true
print_success "Kind cluster deleted"
else
print_warning "Kind not found, skipping cluster cleanup"
fi
}
# 5. Stop and clean Colima
cleanup_colima() {
print_status "🐳 Cleaning up Colima..."
if command -v colima &> /dev/null; then
# Stop the specific profile
colima stop --profile k8s-local 2>/dev/null || true
# Delete the profile (removes all data)
read -p "❓ Do you want to delete the Colima profile (removes all Docker data)? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
colima delete --profile k8s-local --force 2>/dev/null || true
print_success "Colima profile deleted"
else
print_warning "Colima profile kept (stopped only)"
fi
else
print_warning "Colima not found, skipping Colima cleanup"
fi
}
# 6. Cleanup local files
cleanup_local_files() {
print_status "📝 Cleaning up local files..."
# Remove certificate files
rm -f bakery-ia-ca.crt 2>/dev/null || true
rm -f *.crt *.key 2>/dev/null || true
# Remove any Skaffold cache (if exists)
rm -rf ~/.skaffold/cache 2>/dev/null || true
print_success "Local files cleaned up"
}
# 7. Cleanup /etc/hosts entries (optional)
cleanup_hosts_file() {
print_status "🗂️ Cleaning up /etc/hosts entries..."
if grep -q "bakery-ia.local" /etc/hosts 2>/dev/null; then
read -p "❓ Remove bakery-ia entries from /etc/hosts? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
# Backup hosts file first
sudo cp /etc/hosts /etc/hosts.backup.$(date +%Y%m%d_%H%M%S)
# Remove entries
sudo sed -i '' '/bakery-ia.local/d' /etc/hosts
sudo sed -i '' '/api.bakery-ia.local/d' /etc/hosts
sudo sed -i '' '/monitoring.bakery-ia.local/d' /etc/hosts
print_success "Hosts file entries removed"
else
print_warning "Hosts file entries kept"
fi
else
print_status "No bakery-ia entries found in /etc/hosts"
fi
}
# 8. Show system status after cleanup
show_cleanup_summary() {
echo ""
print_success "🎉 Cleanup completed!"
echo ""
print_status "System status after cleanup:"
# Check remaining Docker images
if command -v docker &> /dev/null && docker info &> /dev/null; then
local bakery_images=$(docker images --format "table {{.Repository}}:{{.Tag}}" | grep "^bakery/" | wc -l)
echo " 🐋 Bakery Docker images remaining: $bakery_images"
fi
# Check Kind clusters
if command -v kind &> /dev/null; then
local clusters=$(kind get clusters 2>/dev/null | wc -l)
echo " 📦 Kind clusters remaining: $clusters"
fi
# Check Colima status
if command -v colima &> /dev/null; then
local colima_status=$(colima status --profile k8s-local 2>/dev/null | head -n1 || echo "Not running")
echo " 🐳 Colima k8s-local status: $colima_status"
fi
echo ""
print_status "To restart development environment:"
echo " 🚀 Quick start: ./skaffold-dev.sh"
echo " 🔒 With HTTPS: ./setup-https.sh"
echo " 🏗️ Manual: colima start --cpu 4 --memory 8 --disk 50 --runtime docker --profile k8s-local"
}
# Main execution
main() {
show_cleanup_plan
cleanup_skaffold
cleanup_kubernetes
cleanup_docker_images
cleanup_kind_cluster
cleanup_colima
cleanup_local_files
cleanup_hosts_file
show_cleanup_summary
}
# Run main function
main "$@"

View File

@@ -1,90 +0,0 @@
#!/usr/bin/env python3
"""
Script to complete audit router registration in all remaining services.
"""
import re
from pathlib import Path
BASE_DIR = Path(__file__).parent.parent / "services"
# Services that still need updates (suppliers, pos, training, notification, external, forecasting)
SERVICES = ['suppliers', 'pos', 'training', 'notification', 'external', 'forecasting']
def update_service(service_name):
main_file = BASE_DIR / service_name / "app" / "main.py"
if not main_file.exists():
print(f"⚠️ {service_name}: main.py not found")
return False
content = main_file.read_text()
modified = False
    # Check if audit is already imported (regex search; the original literal
    # substring test "'import.*audit' in content" could never match)
    if re.search(r'import.*\baudit\b', content) or ', audit' in content:
print(f"{service_name}: audit already imported")
else:
# Add audit import - find the from .api or from app.api import line
patterns = [
(r'(from \.api import [^)]+)(\))', r'\1, audit\2'), # Multi-line with parentheses
(r'(from \.api import .+)', r'\1, audit'), # Single line with .api
(r'(from app\.api import [^)]+)(\))', r'\1, audit\2'), # Multi-line with app.api
(r'(from app\.api import .+)', r'\1, audit'), # Single line with app.api
]
for pattern, replacement in patterns:
new_content = re.sub(pattern, replacement, content)
if new_content != content:
content = new_content
modified = True
print(f"{service_name}: added audit import")
break
if not modified:
print(f"⚠️ {service_name}: could not find import pattern, needs manual update")
return False
# Check if audit router is already registered
if 'service.add_router(audit.router)' in content:
print(f"{service_name}: audit router already registered")
else:
# Find the last service.add_router line and add audit router after it
lines = content.split('\n')
last_router_index = -1
for i, line in enumerate(lines):
if 'service.add_router(' in line and 'audit' not in line:
last_router_index = i
if last_router_index != -1:
# Insert audit router after the last router registration
lines.insert(last_router_index + 1, 'service.add_router(audit.router)')
content = '\n'.join(lines)
modified = True
print(f"{service_name}: added audit router registration")
else:
print(f"⚠️ {service_name}: could not find router registration pattern, needs manual update")
return False
if modified:
main_file.write_text(content)
print(f"{service_name}: updated successfully")
else:
print(f" {service_name}: no changes needed")
return True
def main():
print("Completing audit router registration in remaining services...\n")
success_count = 0
for service in SERVICES:
if update_service(service):
success_count += 1
print()
print(f"\nCompleted: {success_count}/{len(SERVICES)} services updated successfully")
if __name__ == "__main__":
main()

View File

@@ -1,102 +0,0 @@
#!/usr/bin/env python3
"""
Comprehensive Demo Validation Script
Runs all validation checks for enterprise demo fixtures and provides a complete report.
"""
import subprocess
import sys
import os
def run_validation():
"""Run the enterprise demo fixtures validation"""
print("=== Running Enterprise Demo Fixtures Validation ===")
print()
try:
# Run the main validation script
result = subprocess.run([
sys.executable,
"scripts/validate_enterprise_demo_fixtures.py"
], capture_output=True, text=True, cwd=".")
print(result.stdout)
if result.returncode != 0:
print("❌ Validation failed!")
print("Error output:")
print(result.stderr)
return False
else:
print("✅ Validation passed!")
return True
except Exception as e:
print(f"❌ Error running validation: {e}")
return False
def run_summary():
"""Run the demo fixtures summary"""
print("\n=== Running Demo Fixtures Summary ===")
print()
try:
# Run the summary script
result = subprocess.run([
sys.executable,
"scripts/demo_fixtures_summary.py"
], capture_output=True, text=True, cwd=".")
print(result.stdout)
if result.returncode != 0:
print("❌ Summary failed!")
print("Error output:")
print(result.stderr)
return False
else:
print("✅ Summary completed!")
return True
except Exception as e:
print(f"❌ Error running summary: {e}")
return False
def main():
"""Main function to run comprehensive validation"""
print("🚀 Starting Comprehensive Demo Validation")
print("=" * 60)
    # Change to the project root (the parent of scripts/) so relative paths work
    os.chdir(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Run validation
validation_passed = run_validation()
# Run summary
summary_passed = run_summary()
# Final report
print("\n" + "=" * 60)
print("📋 FINAL REPORT")
print("=" * 60)
if validation_passed and summary_passed:
print("🎉 ALL CHECKS PASSED!")
print("✅ Enterprise demo fixtures are ready for use")
print("✅ All cross-references are valid")
print("✅ No missing IDs or broken relationships")
print("✅ All required files are present")
return True
else:
print("❌ VALIDATION FAILED!")
if not validation_passed:
print("❌ Cross-reference validation failed")
if not summary_passed:
print("❌ Summary generation failed")
return False
if __name__ == "__main__":
success = main()
sys.exit(0 if success else 1)

View File

@@ -1,201 +0,0 @@
#!/usr/bin/env python3
"""
Demo Fixtures Summary Script
Provides a comprehensive summary of the enterprise demo fixtures status.
"""
import json
import os
from pathlib import Path
from collections import defaultdict
def get_file_info(base_path: Path) -> dict:
"""Get information about all fixture files"""
    # Plain dicts are enough here: every key is assigned explicitly below.
    info = {
        "parent": {},
        "children": {}
    }
# Parent files
parent_dir = base_path / "parent"
if parent_dir.exists():
for file_path in parent_dir.glob("*.json"):
file_size = file_path.stat().st_size
info["parent"][file_path.name] = {
"size_bytes": file_size,
"size_kb": round(file_size / 1024, 2)
}
# Children files
children_dir = base_path / "children"
if children_dir.exists():
for child_dir in children_dir.iterdir():
if child_dir.is_dir():
tenant_id = child_dir.name
for file_path in child_dir.glob("*.json"):
file_size = file_path.stat().st_size
if tenant_id not in info["children"]:
info["children"][tenant_id] = {}
info["children"][tenant_id][file_path.name] = {
"size_bytes": file_size,
"size_kb": round(file_size / 1024, 2)
}
return info
def count_entities(base_path: Path) -> dict:
"""Count entities in fixture files"""
counts = {
"parent": defaultdict(int),
"children": defaultdict(lambda: defaultdict(int))
}
# Parent counts
parent_dir = base_path / "parent"
if parent_dir.exists():
# Tenants
tenant_file = parent_dir / "01-tenant.json"
if tenant_file.exists():
with open(tenant_file, 'r') as f:
data = json.load(f)
counts["parent"]["tenants"] = 1 + len(data.get("children", []))
# Users
auth_file = parent_dir / "02-auth.json"
if auth_file.exists():
with open(auth_file, 'r') as f:
data = json.load(f)
counts["parent"]["users"] = len(data.get("users", []))
# Inventory
inventory_file = parent_dir / "03-inventory.json"
if inventory_file.exists():
with open(inventory_file, 'r') as f:
data = json.load(f)
counts["parent"]["ingredients"] = len(data.get("ingredients", []))
counts["parent"]["products"] = len(data.get("products", []))
# Recipes
recipes_file = parent_dir / "04-recipes.json"
if recipes_file.exists():
with open(recipes_file, 'r') as f:
data = json.load(f)
counts["parent"]["recipes"] = len(data.get("recipes", []))
# Suppliers
suppliers_file = parent_dir / "05-suppliers.json"
if suppliers_file.exists():
with open(suppliers_file, 'r') as f:
data = json.load(f)
counts["parent"]["suppliers"] = len(data.get("suppliers", []))
# Children counts
children_dir = base_path / "children"
if children_dir.exists():
for child_dir in children_dir.iterdir():
if child_dir.is_dir():
tenant_id = child_dir.name
# Users
auth_file = child_dir / "02-auth.json"
if auth_file.exists():
with open(auth_file, 'r') as f:
data = json.load(f)
counts["children"][tenant_id]["users"] = len(data.get("users", []))
# Inventory
inventory_file = child_dir / "03-inventory.json"
if inventory_file.exists():
with open(inventory_file, 'r') as f:
data = json.load(f)
counts["children"][tenant_id]["ingredients"] = len(data.get("ingredients", []))
counts["children"][tenant_id]["products"] = len(data.get("products", []))
# Recipes
recipes_file = child_dir / "04-recipes.json"
if recipes_file.exists():
with open(recipes_file, 'r') as f:
data = json.load(f)
counts["children"][tenant_id]["recipes"] = len(data.get("recipes", []))
# Suppliers
suppliers_file = child_dir / "05-suppliers.json"
if suppliers_file.exists():
with open(suppliers_file, 'r') as f:
data = json.load(f)
counts["children"][tenant_id]["suppliers"] = len(data.get("suppliers", []))
return counts
def main():
"""Main function to display summary"""
print("=== Enterprise Demo Fixtures Summary ===")
print()
base_path = Path("shared/demo/fixtures/enterprise")
# File information
print("📁 FILE INFORMATION")
print("-" * 50)
file_info = get_file_info(base_path)
print("Parent Files:")
for filename, info in file_info["parent"].items():
print(f" {filename}: {info['size_kb']} KB")
print(f"\nChild Files ({len(file_info['children'])} locations):")
for tenant_id, files in file_info["children"].items():
print(f" {tenant_id}:")
for filename, info in files.items():
print(f" {filename}: {info['size_kb']} KB")
# Entity counts
print("\n📊 ENTITY COUNTS")
print("-" * 50)
counts = count_entities(base_path)
print("Parent Entities:")
for entity_type, count in counts["parent"].items():
print(f" {entity_type}: {count}")
print(f"\nChild Entities ({len(counts['children'])} locations):")
for tenant_id, entity_counts in counts["children"].items():
print(f" {tenant_id}:")
for entity_type, count in entity_counts.items():
print(f" {entity_type}: {count}")
# Totals
print("\n📈 TOTALS")
print("-" * 50)
total_users = counts["parent"]["users"]
total_tenants = counts["parent"]["tenants"]
total_ingredients = counts["parent"]["ingredients"]
total_products = counts["parent"]["products"]
total_recipes = counts["parent"]["recipes"]
total_suppliers = counts["parent"]["suppliers"]
for tenant_id, entity_counts in counts["children"].items():
total_users += entity_counts.get("users", 0)
total_ingredients += entity_counts.get("ingredients", 0)
total_products += entity_counts.get("products", 0)
total_recipes += entity_counts.get("recipes", 0)
total_suppliers += entity_counts.get("suppliers", 0)
print(f"Total Tenants: {total_tenants}")
print(f"Total Users: {total_users}")
print(f"Total Ingredients: {total_ingredients}")
print(f"Total Products: {total_products}")
print(f"Total Recipes: {total_recipes}")
print(f"Total Suppliers: {total_suppliers}")
print("\n✅ VALIDATION STATUS")
print("-" * 50)
print("All cross-references validated successfully!")
print("No missing IDs or broken relationships detected.")
if __name__ == "__main__":
main()

View File

@@ -1,72 +0,0 @@
# ================================================================
# services/auth/docker-compose.yml (For standalone testing)
# ================================================================
services:
auth-db:
image: postgres:17-alpine
container_name: auth-db
environment:
POSTGRES_DB: auth_db
POSTGRES_USER: auth_user
POSTGRES_PASSWORD: auth_pass123
ports:
- "5432:5432"
volumes:
- auth_db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U auth_user -d auth_db"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7.4-alpine
container_name: auth-redis
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
rabbitmq:
image: rabbitmq:4.1-management-alpine
container_name: auth-rabbitmq
environment:
RABBITMQ_DEFAULT_USER: bakery
RABBITMQ_DEFAULT_PASS: forecast123
ports:
- "5672:5672"
- "15672:15672"
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 30s
timeout: 10s
retries: 5
auth-service:
build: .
container_name: auth-service
environment:
- DATABASE_URL=postgresql+asyncpg://auth_user:auth_pass123@auth-db:5432/auth_db
- REDIS_URL=redis://redis:6379/0
- RABBITMQ_URL=amqp://bakery:forecast123@rabbitmq:5672/
- DEBUG=true
- LOG_LEVEL=INFO
ports:
- "8001:8000"
depends_on:
auth-db:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
volumes:
- .:/app
command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
volumes:
auth_db_data:

View File

@@ -1,80 +0,0 @@
#!/usr/bin/env python3
"""
Fix Inventory User References Script
Replaces the missing user ID c1a2b3c4-d5e6-47a8-b9c0-d1e2f3a4b5c6
with the production director user ID ae38accc-1ad4-410d-adbc-a55630908924
in all inventory.json files.
"""
import json
import os
from pathlib import Path
# The incorrect user ID that needs to be replaced
OLD_USER_ID = "c1a2b3c4-d5e6-47a8-b9c0-d1e2f3a4b5c6"
# The correct production director user ID
NEW_USER_ID = "ae38accc-1ad4-410d-adbc-a55630908924"
def fix_inventory_file(filepath: Path) -> bool:
"""Fix user references in a single inventory.json file"""
try:
with open(filepath, 'r', encoding='utf-8') as f:
data = json.load(f)
changed = False
# Fix ingredients
if "ingredients" in data:
for ingredient in data["ingredients"]:
if ingredient.get("created_by") == OLD_USER_ID:
ingredient["created_by"] = NEW_USER_ID
changed = True
# Fix products
if "products" in data:
for product in data["products"]:
if product.get("created_by") == OLD_USER_ID:
product["created_by"] = NEW_USER_ID
changed = True
if changed:
with open(filepath, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
print(f"✓ Fixed {filepath}")
return True
else:
print(f"✓ No changes needed for {filepath}")
return False
except Exception as e:
print(f"✗ Error processing {filepath}: {e}")
return False
def main():
"""Main function to fix all inventory files"""
print("=== Fixing Inventory User References ===")
print(f"Replacing {OLD_USER_ID} with {NEW_USER_ID}")
print()
base_path = Path("shared/demo/fixtures/enterprise")
# Fix parent inventory
parent_file = base_path / "parent" / "03-inventory.json"
if parent_file.exists():
fix_inventory_file(parent_file)
# Fix children inventories
children_dir = base_path / "children"
if children_dir.exists():
for child_dir in children_dir.iterdir():
if child_dir.is_dir():
inventory_file = child_dir / "03-inventory.json"
if inventory_file.exists():
fix_inventory_file(inventory_file)
print("\n=== Fix Complete ===")
if __name__ == "__main__":
main()

View File

@@ -1,326 +0,0 @@
#!/usr/bin/env bash
# ============================================================================
# Functional Test: Tenant Deletion System
# ============================================================================
# Tests the complete tenant deletion workflow with service tokens
#
# Usage:
# ./scripts/functional_test_deletion.sh <tenant_id>
#
# Example:
# ./scripts/functional_test_deletion.sh dbc2128a-7539-470c-94b9-c1e37031bd77
#
# ============================================================================
set -e # Exit on error
# Require bash 4+ for associative arrays
if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
echo "Error: This script requires bash 4.0 or higher"
echo "Current version: ${BASH_VERSION}"
exit 1
fi
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
TENANT_ID="${1:-dbc2128a-7539-470c-94b9-c1e37031bd77}"
SERVICE_TOKEN="${SERVICE_TOKEN:-eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJ0ZW5hbnQtZGVsZXRpb24tb3JjaGVzdHJhdG9yIiwidXNlcl9pZCI6InRlbmFudC1kZWxldGlvbi1vcmNoZXN0cmF0b3IiLCJzZXJ2aWNlIjoidGVuYW50LWRlbGV0aW9uLW9yY2hlc3RyYXRvciIsInR5cGUiOiJzZXJ2aWNlIiwiaXNfc2VydmljZSI6dHJ1ZSwicm9sZSI6ImFkbWluIiwiZW1haWwiOiJ0ZW5hbnQtZGVsZXRpb24tb3JjaGVzdHJhdG9yQGludGVybmFsLnNlcnZpY2UiLCJleHAiOjE3OTM0NDIwMzAsImlhdCI6MTc2MTkwNjAzMCwiaXNzIjoiYmFrZXJ5LWF1dGgifQ.I6mWLpkRim2fJ1v9WH24g4YT3-ZGbuFXxCorZxhPp6c}"
# Test mode (preview or delete)
TEST_MODE="${2:-preview}" # preview or delete
# Service list with their endpoints
declare -A SERVICES=(
["orders"]="orders-service:8000"
["inventory"]="inventory-service:8000"
["recipes"]="recipes-service:8000"
["sales"]="sales-service:8000"
["production"]="production-service:8000"
["suppliers"]="suppliers-service:8000"
["pos"]="pos-service:8000"
["external"]="city-service:8000"
["forecasting"]="forecasting-service:8000"
["training"]="training-service:8000"
["alert-processor"]="alert-processor-service:8000"
["notification"]="notification-service:8000"
)
# Results tracking
TOTAL_SERVICES=12
SUCCESSFUL_TESTS=0
FAILED_TESTS=0
declare -a FAILED_SERVICES
# ============================================================================
# Helper Functions
# ============================================================================
print_header() {
echo -e "${BLUE}============================================================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}============================================================================${NC}"
}
print_success() {
    echo -e "${GREEN}✅${NC} $1"
}
print_error() {
    echo -e "${RED}❌${NC} $1"
}
print_warning() {
    echo -e "${YELLOW}⚠️${NC} $1"
}
print_info() {
    echo -e "${BLUE}ℹ️${NC} $1"
}
# ============================================================================
# Test Functions
# ============================================================================
test_service_preview() {
local service_name=$1
local service_host=$2
local endpoint_path=$3
echo ""
echo -e "${BLUE}Testing ${service_name}...${NC}"
# Get running pod
local pod=$(kubectl get pods -n bakery-ia -l app=${service_name}-service 2>/dev/null | grep Running | head -1 | awk '{print $1}')
if [ -z "$pod" ]; then
print_error "No running pod found for ${service_name}"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
fi
print_info "Pod: ${pod}"
# Execute request inside pod
local response=$(kubectl exec -n bakery-ia "$pod" -- curl -s -w "\n%{http_code}" \
-H "Authorization: Bearer ${SERVICE_TOKEN}" \
"http://localhost:8000${endpoint_path}/tenant/${TENANT_ID}/deletion-preview" 2>&1)
local http_code=$(echo "$response" | tail -1)
local body=$(echo "$response" | sed '$d')
if [ "$http_code" = "200" ]; then
print_success "Preview successful (HTTP ${http_code})"
# Parse and display counts
local total_records=$(echo "$body" | grep -o '"total_records":[0-9]*' | cut -d':' -f2 || echo "0")
print_info "Records to delete: ${total_records}"
# Show breakdown if available
echo "$body" | python3 -m json.tool 2>/dev/null | grep -A50 "breakdown" | head -20 || echo ""
SUCCESSFUL_TESTS=$((SUCCESSFUL_TESTS + 1))
return 0
elif [ "$http_code" = "401" ]; then
print_error "Authentication failed (HTTP ${http_code})"
print_warning "Service token may be invalid or expired"
echo "$body"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
elif [ "$http_code" = "403" ]; then
print_error "Authorization failed (HTTP ${http_code})"
print_warning "Service token not recognized as service"
echo "$body"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
elif [ "$http_code" = "404" ]; then
print_error "Endpoint not found (HTTP ${http_code})"
print_warning "Deletion endpoint may not be implemented"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
elif [ "$http_code" = "500" ]; then
print_error "Server error (HTTP ${http_code})"
echo "$body" | head -5
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
else
print_error "Unexpected response (HTTP ${http_code})"
echo "$body" | head -5
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
fi
}
test_service_deletion() {
local service_name=$1
local service_host=$2
local endpoint_path=$3
echo ""
echo -e "${BLUE}Deleting data in ${service_name}...${NC}"
# Get running pod
local pod=$(kubectl get pods -n bakery-ia -l app=${service_name}-service 2>/dev/null | grep Running | head -1 | awk '{print $1}')
if [ -z "$pod" ]; then
print_error "No running pod found for ${service_name}"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
fi
# Execute deletion request inside pod
local response=$(kubectl exec -n bakery-ia "$pod" -- curl -s -w "\n%{http_code}" \
-X DELETE \
-H "Authorization: Bearer ${SERVICE_TOKEN}" \
"http://localhost:8000${endpoint_path}/tenant/${TENANT_ID}" 2>&1)
local http_code=$(echo "$response" | tail -1)
local body=$(echo "$response" | sed '$d')
if [ "$http_code" = "200" ]; then
print_success "Deletion successful (HTTP ${http_code})"
# Parse and display deletion summary
local total_deleted=$(echo "$body" | grep -o '"total_records_deleted":[0-9]*' | cut -d':' -f2 || echo "0")
print_info "Records deleted: ${total_deleted}"
SUCCESSFUL_TESTS=$((SUCCESSFUL_TESTS + 1))
return 0
else
print_error "Deletion failed (HTTP ${http_code})"
echo "$body" | head -5
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("${service_name}")
return 1
fi
}
# ============================================================================
# Main Test Execution
# ============================================================================
main() {
print_header "Tenant Deletion System - Functional Test"
echo ""
print_info "Tenant ID: ${TENANT_ID}"
print_info "Test Mode: ${TEST_MODE}"
print_info "Services to test: ${TOTAL_SERVICES}"
echo ""
# Verify service token
print_info "Verifying service token..."
if python scripts/generate_service_token.py --verify "${SERVICE_TOKEN}" > /dev/null 2>&1; then
print_success "Service token is valid"
else
print_error "Service token is invalid or expired"
exit 1
fi
echo ""
print_header "Phase 1: Testing Service Previews"
# Test each service preview
test_service_preview "orders" "orders-service:8000" "/api/v1/orders"
test_service_preview "inventory" "inventory-service:8000" "/api/v1/inventory"
test_service_preview "recipes" "recipes-service:8000" "/api/v1/recipes"
test_service_preview "sales" "sales-service:8000" "/api/v1/sales"
test_service_preview "production" "production-service:8000" "/api/v1/production"
test_service_preview "suppliers" "suppliers-service:8000" "/api/v1/suppliers"
test_service_preview "pos" "pos-service:8000" "/api/v1/pos"
test_service_preview "external" "city-service:8000" "/api/v1/nominatim"
test_service_preview "forecasting" "forecasting-service:8000" "/api/v1/forecasting"
test_service_preview "training" "training-service:8000" "/api/v1/training"
test_service_preview "alert-processor" "alert-processor-service:8000" "/api/v1/analytics"
test_service_preview "notification" "notification-service:8000" "/api/v1/notifications"
# Summary
echo ""
print_header "Preview Test Results"
echo -e "Total Services: ${TOTAL_SERVICES}"
echo -e "${GREEN}Successful:${NC} ${SUCCESSFUL_TESTS}/${TOTAL_SERVICES}"
echo -e "${RED}Failed:${NC} ${FAILED_TESTS}/${TOTAL_SERVICES}"
if [ ${FAILED_TESTS} -gt 0 ]; then
echo ""
print_warning "Failed Services:"
for service in "${FAILED_SERVICES[@]}"; do
echo " - ${service}"
done
fi
# Ask for confirmation before actual deletion
if [ "$TEST_MODE" = "delete" ]; then
echo ""
print_header "Phase 2: Actual Deletion"
print_warning "This will PERMANENTLY delete data for tenant ${TENANT_ID}"
print_warning "This operation is IRREVERSIBLE"
echo ""
read -p "Are you sure you want to proceed? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
print_info "Deletion cancelled by user"
exit 0
fi
# Reset counters
SUCCESSFUL_TESTS=0
FAILED_TESTS=0
FAILED_SERVICES=()
# Execute deletions
test_service_deletion "orders" "orders-service:8000" "/api/v1/orders"
test_service_deletion "inventory" "inventory-service:8000" "/api/v1/inventory"
test_service_deletion "recipes" "recipes-service:8000" "/api/v1/recipes"
test_service_deletion "sales" "sales-service:8000" "/api/v1/sales"
test_service_deletion "production" "production-service:8000" "/api/v1/production"
test_service_deletion "suppliers" "suppliers-service:8000" "/api/v1/suppliers"
test_service_deletion "pos" "pos-service:8000" "/api/v1/pos"
test_service_deletion "external" "city-service:8000" "/api/v1/nominatim"
test_service_deletion "forecasting" "forecasting-service:8000" "/api/v1/forecasting"
test_service_deletion "training" "training-service:8000" "/api/v1/training"
test_service_deletion "alert-processor" "alert-processor-service:8000" "/api/v1/analytics"
test_service_deletion "notification" "notification-service:8000" "/api/v1/notifications"
# Deletion summary
echo ""
print_header "Deletion Test Results"
echo -e "Total Services: ${TOTAL_SERVICES}"
echo -e "${GREEN}Successful:${NC} ${SUCCESSFUL_TESTS}/${TOTAL_SERVICES}"
echo -e "${RED}Failed:${NC} ${FAILED_TESTS}/${TOTAL_SERVICES}"
if [ ${FAILED_TESTS} -gt 0 ]; then
echo ""
print_warning "Failed Services:"
for service in "${FAILED_SERVICES[@]}"; do
echo " - ${service}"
done
fi
fi
echo ""
print_header "Test Complete"
if [ ${FAILED_TESTS} -eq 0 ]; then
print_success "All tests passed successfully!"
exit 0
else
print_error "Some tests failed. See details above."
exit 1
fi
}
# Run main function
main

View File

@@ -1,145 +0,0 @@
#!/bin/bash
# ============================================================================
# Functional Test: Tenant Deletion System (Simple Version)
# ============================================================================
set +e # Don't exit on error
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Configuration
TENANT_ID="${1:-dbc2128a-7539-470c-94b9-c1e37031bd77}"
# Generate or use provided SERVICE_TOKEN
if [ -z "${SERVICE_TOKEN}" ]; then
SERVICE_TOKEN=$(python3 scripts/generate_service_token.py tenant-deletion-orchestrator 2>&1 | grep -A1 "Token:" | tail -1 | sed 's/^[[:space:]]*//' | tr -d '\n')
else
# Clean the token if provided (remove whitespace and newlines)
SERVICE_TOKEN=$(echo "${SERVICE_TOKEN}" | tr -d '[:space:]')
fi
# Results
TOTAL_SERVICES=12
SUCCESSFUL_TESTS=0
FAILED_TESTS=0
# Helper functions
print_header() {
echo -e "${BLUE}================================================================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}================================================================================${NC}"
}
print_success() {
    echo -e "${GREEN}✅${NC} $1"
}
print_error() {
    echo -e "${RED}❌${NC} $1"
}
print_info() {
    echo -e "${BLUE}ℹ️${NC} $1"
}
# Test function
test_service() {
local service_name=$1
local endpoint_path=$2
local port=${3:-8000} # Default to 8000 if not specified
echo ""
echo -e "${BLUE}Testing ${service_name}...${NC}"
# Find running pod
local pod=$(kubectl get pods -n bakery-ia 2>/dev/null | grep "${service_name}" | grep "Running" | grep "1/1" | head -1 | awk '{print $1}')
if [ -z "$pod" ]; then
print_error "No running pod found"
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
fi
print_info "Pod: ${pod}"
# Execute request
local result=$(kubectl exec -n bakery-ia "$pod" -- curl -s -w "\nHTTP_CODE:%{http_code}" \
-H "Authorization: Bearer ${SERVICE_TOKEN}" \
"http://localhost:${port}${endpoint_path}/tenant/${TENANT_ID}/deletion-preview" 2>&1)
local http_code=$(echo "$result" | grep "HTTP_CODE" | cut -d':' -f2)
local body=$(echo "$result" | sed '/HTTP_CODE/d')
if [ "$http_code" = "200" ]; then
print_success "Preview successful (HTTP ${http_code})"
local total=$(echo "$body" | grep -o '"total_records":[0-9]*' | cut -d':' -f2 | head -1)
if [ -n "$total" ]; then
print_info "Records to delete: ${total}"
fi
SUCCESSFUL_TESTS=$((SUCCESSFUL_TESTS + 1))
return 0
elif [ "$http_code" = "401" ]; then
print_error "Authentication failed (HTTP ${http_code})"
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
elif [ "$http_code" = "403" ]; then
print_error "Authorization failed (HTTP ${http_code})"
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
elif [ "$http_code" = "404" ]; then
print_error "Endpoint not found (HTTP ${http_code})"
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
elif [ "$http_code" = "500" ]; then
print_error "Server error (HTTP ${http_code})"
echo "$body" | head -3
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
else
print_error "Unexpected response (HTTP ${http_code})"
FAILED_TESTS=$((FAILED_TESTS + 1))
return 1
fi
}
# Main
print_header "Tenant Deletion System - Functional Test"
echo ""
print_info "Tenant ID: ${TENANT_ID}"
print_info "Services to test: ${TOTAL_SERVICES}"
echo ""
# Test all services
test_service "orders-service" "/api/v1/orders"
test_service "inventory-service" "/api/v1/inventory"
test_service "recipes-service" "/api/v1/recipes"
test_service "sales-service" "/api/v1/sales"
test_service "production-service" "/api/v1/production"
test_service "suppliers-service" "/api/v1/suppliers"
test_service "pos-service" "/api/v1/pos"
test_service "external-service" "/api/v1/external"
test_service "forecasting-service" "/api/v1/forecasting"
test_service "training-service" "/api/v1/training"
test_service "alert-processor-api" "/api/v1/alerts" 8010
test_service "notification-service" "/api/v1/notification"
# Summary
echo ""
print_header "Test Results"
echo "Total Services: ${TOTAL_SERVICES}"
echo -e "${GREEN}Successful:${NC} ${SUCCESSFUL_TESTS}/${TOTAL_SERVICES}"
echo -e "${RED}Failed:${NC} ${FAILED_TESTS}/${TOTAL_SERVICES}"
echo ""
if [ ${FAILED_TESTS} -eq 0 ]; then
print_success "All tests passed!"
exit 0
else
print_error "Some tests failed"
exit 1
fi

View File

@@ -1,281 +0,0 @@
#!/usr/bin/env python3
"""
Script to generate audit.py endpoint files for all services.
This ensures consistency across all microservices.
"""
import os
from pathlib import Path
# Template for audit.py file
AUDIT_TEMPLATE = """# services/{service}/app/api/audit.py
\"\"\"
Audit Logs API - Retrieve audit trail for {service} service
\"\"\"
from fastapi import APIRouter, Depends, HTTPException, Query, Path, status
from typing import Optional, Dict, Any
from uuid import UUID
from datetime import datetime
import structlog
from sqlalchemy import select, func, and_
from sqlalchemy.ext.asyncio import AsyncSession
from app.models import AuditLog
from shared.auth.decorators import get_current_user_dep
from shared.auth.access_control import require_user_role
from shared.routing import RouteBuilder
from shared.models.audit_log_schemas import (
AuditLogResponse,
AuditLogListResponse,
AuditLogStatsResponse
)
from app.core.database import database_manager
route_builder = RouteBuilder('{service_route}')
router = APIRouter(tags=["audit-logs"])
logger = structlog.get_logger()
async def get_db():
\"\"\"Database session dependency\"\"\"
async with database_manager.get_session() as session:
yield session
@router.get(
route_builder.build_base_route("audit-logs"),
response_model=AuditLogListResponse
)
@require_user_role(['admin', 'owner'])
async def get_audit_logs(
tenant_id: UUID = Path(..., description="Tenant ID"),
start_date: Optional[datetime] = Query(None, description="Filter logs from this date"),
end_date: Optional[datetime] = Query(None, description="Filter logs until this date"),
user_id: Optional[UUID] = Query(None, description="Filter by user ID"),
action: Optional[str] = Query(None, description="Filter by action type"),
resource_type: Optional[str] = Query(None, description="Filter by resource type"),
severity: Optional[str] = Query(None, description="Filter by severity level"),
search: Optional[str] = Query(None, description="Search in description field"),
limit: int = Query(100, ge=1, le=1000, description="Number of records to return"),
offset: int = Query(0, ge=0, description="Number of records to skip"),
current_user: Dict[str, Any] = Depends(get_current_user_dep),
db: AsyncSession = Depends(get_db)
):
\"\"\"
Get audit logs for {service} service.
Requires admin or owner role.
\"\"\"
try:
logger.info(
"Retrieving audit logs",
tenant_id=tenant_id,
user_id=current_user.get("user_id"),
filters={{
"start_date": start_date,
"end_date": end_date,
"action": action,
"resource_type": resource_type,
"severity": severity
}}
)
# Build query filters
filters = [AuditLog.tenant_id == tenant_id]
if start_date:
filters.append(AuditLog.created_at >= start_date)
if end_date:
filters.append(AuditLog.created_at <= end_date)
if user_id:
filters.append(AuditLog.user_id == user_id)
if action:
filters.append(AuditLog.action == action)
if resource_type:
filters.append(AuditLog.resource_type == resource_type)
if severity:
filters.append(AuditLog.severity == severity)
if search:
filters.append(AuditLog.description.ilike(f"%{{search}}%"))
# Count total matching records
count_query = select(func.count()).select_from(AuditLog).where(and_(*filters))
total_result = await db.execute(count_query)
total = total_result.scalar() or 0
# Fetch paginated results
query = (
select(AuditLog)
.where(and_(*filters))
.order_by(AuditLog.created_at.desc())
.limit(limit)
.offset(offset)
)
result = await db.execute(query)
audit_logs = result.scalars().all()
# Convert to response models
items = [AuditLogResponse.from_orm(log) for log in audit_logs]
logger.info(
"Successfully retrieved audit logs",
tenant_id=tenant_id,
total=total,
returned=len(items)
)
return AuditLogListResponse(
items=items,
total=total,
limit=limit,
offset=offset,
has_more=(offset + len(items)) < total
)
except Exception as e:
logger.error(
"Failed to retrieve audit logs",
error=str(e),
tenant_id=tenant_id
)
raise HTTPException(
status_code=500,
detail=f"Failed to retrieve audit logs: {{str(e)}}"
)
@router.get(
route_builder.build_base_route("audit-logs/stats"),
response_model=AuditLogStatsResponse
)
@require_user_role(['admin', 'owner'])
async def get_audit_log_stats(
tenant_id: UUID = Path(..., description="Tenant ID"),
start_date: Optional[datetime] = Query(None, description="Filter logs from this date"),
end_date: Optional[datetime] = Query(None, description="Filter logs until this date"),
current_user: Dict[str, Any] = Depends(get_current_user_dep),
db: AsyncSession = Depends(get_db)
):
\"\"\"
Get audit log statistics for {service} service.
Requires admin or owner role.
\"\"\"
try:
logger.info(
"Retrieving audit log statistics",
tenant_id=tenant_id,
user_id=current_user.get("user_id")
)
# Build base filters
filters = [AuditLog.tenant_id == tenant_id]
if start_date:
filters.append(AuditLog.created_at >= start_date)
if end_date:
filters.append(AuditLog.created_at <= end_date)
# Total events
count_query = select(func.count()).select_from(AuditLog).where(and_(*filters))
total_result = await db.execute(count_query)
total_events = total_result.scalar() or 0
# Events by action
action_query = (
select(AuditLog.action, func.count().label('count'))
.where(and_(*filters))
.group_by(AuditLog.action)
)
action_result = await db.execute(action_query)
events_by_action = {{row.action: row.count for row in action_result}}
# Events by severity
severity_query = (
select(AuditLog.severity, func.count().label('count'))
.where(and_(*filters))
.group_by(AuditLog.severity)
)
severity_result = await db.execute(severity_query)
events_by_severity = {{row.severity: row.count for row in severity_result}}
# Events by resource type
resource_query = (
select(AuditLog.resource_type, func.count().label('count'))
.where(and_(*filters))
.group_by(AuditLog.resource_type)
)
resource_result = await db.execute(resource_query)
events_by_resource_type = {{row.resource_type: row.count for row in resource_result}}
# Date range
date_range_query = (
select(
func.min(AuditLog.created_at).label('min_date'),
func.max(AuditLog.created_at).label('max_date')
)
.where(and_(*filters))
)
date_result = await db.execute(date_range_query)
date_row = date_result.one()
logger.info(
"Successfully retrieved audit log statistics",
tenant_id=tenant_id,
total_events=total_events
)
return AuditLogStatsResponse(
total_events=total_events,
events_by_action=events_by_action,
events_by_severity=events_by_severity,
events_by_resource_type=events_by_resource_type,
date_range={{
"min": date_row.min_date,
"max": date_row.max_date
}}
)
except Exception as e:
logger.error(
"Failed to retrieve audit log statistics",
error=str(e),
tenant_id=tenant_id
)
raise HTTPException(
status_code=500,
detail=f"Failed to retrieve audit log statistics: {{str(e)}}"
)
"""
# Services to generate for (excluding sales and inventory which are already done)
SERVICES = [
('orders', 'orders'),
('production', 'production'),
('recipes', 'recipes'),
('suppliers', 'suppliers'),
('pos', 'pos'),
('training', 'training'),
('notification', 'notification'),
('external', 'external'),
('forecasting', 'forecasting'),
]
def main():
base_path = Path(__file__).parent.parent / "services"
for service_name, route_name in SERVICES:
service_path = base_path / service_name / "app" / "api"
audit_file = service_path / "audit.py"
# Create the file
content = AUDIT_TEMPLATE.format(
service=service_name,
service_route=route_name
)
audit_file.write_text(content)
print(f"✓ Created {audit_file}")
if __name__ == "__main__":
main()
print("\n✓ All audit endpoint files generated successfully!")

View File

@@ -1,90 +0,0 @@
#!/usr/bin/env python3
"""
Generate Child Auth Files Script
Creates auth.json files for each child tenant with appropriate users.
"""
import json
import os
from pathlib import Path
from datetime import datetime, timedelta
import uuid
def generate_child_auth_file(tenant_id: str, tenant_name: str, parent_tenant_id: str) -> dict:
"""Generate auth.json data for a child tenant"""
# Generate user IDs based on tenant ID
manager_id = str(uuid.uuid5(uuid.NAMESPACE_DNS, f"manager-{tenant_id}"))
staff_id = str(uuid.uuid5(uuid.NAMESPACE_DNS, f"staff-{tenant_id}"))
# Create users
users = [
{
"id": manager_id,
"tenant_id": tenant_id,
"name": f"Gerente {tenant_name}",
"email": f"gerente.{tenant_id.lower()}@panaderiaartesana.es",
"role": "manager",
"is_active": True,
"created_at": "BASE_TS - 180d",
"updated_at": "BASE_TS - 180d"
},
{
"id": staff_id,
"tenant_id": tenant_id,
"name": f"Empleado {tenant_name}",
"email": f"empleado.{tenant_id.lower()}@panaderiaartesana.es",
"role": "user",
"is_active": True,
"created_at": "BASE_TS - 150d",
"updated_at": "BASE_TS - 150d"
}
]
return {"users": users}
def main():
"""Main function to generate auth files for all child tenants"""
print("=== Generating Child Auth Files ===")
base_path = Path("shared/demo/fixtures/enterprise")
children_dir = base_path / "children"
# Get parent tenant info
parent_tenant_file = base_path / "parent" / "01-tenant.json"
with open(parent_tenant_file, 'r', encoding='utf-8') as f:
parent_data = json.load(f)
parent_tenant_id = parent_data["tenant"]["id"]
# Process each child directory
for child_dir in children_dir.iterdir():
if child_dir.is_dir():
tenant_id = child_dir.name
# Get tenant info from child's tenant.json
child_tenant_file = child_dir / "01-tenant.json"
if child_tenant_file.exists():
with open(child_tenant_file, 'r', encoding='utf-8') as f:
tenant_data = json.load(f)
# Child files have location data, not tenant data
tenant_name = tenant_data["location"]["name"]
# Generate auth data
auth_data = generate_child_auth_file(tenant_id, tenant_name, parent_tenant_id)
# Write auth.json file
auth_file = child_dir / "02-auth.json"
with open(auth_file, 'w', encoding='utf-8') as f:
json.dump(auth_data, f, ensure_ascii=False, indent=2)
print(f"✓ Generated {auth_file}")
else:
print(f"✗ Missing tenant.json in {child_dir}")
print("\n=== Auth File Generation Complete ===")
if __name__ == "__main__":
main()

View File

@@ -1,270 +0,0 @@
#!/usr/bin/env python3
"""
Quick script to generate deletion service boilerplate
Usage: python generate_deletion_service.py <service_name> <model1,model2,model3>
Example: python generate_deletion_service.py pos POSConfiguration,POSTransaction,POSSession
"""
import sys
import os
from pathlib import Path
def generate_deletion_service(service_name: str, models: list[str]):
"""Generate deletion service file from template"""
service_class = f"{service_name.title().replace('_', '')}TenantDeletionService"
model_imports = ", ".join(models)
# Build preview section
preview_code = []
delete_code = []
for model in models:
model_lower = model.lower().replace('_', ' ')
model_plural = f"{model_lower}s" if not model_lower.endswith('s') else model_lower
preview_code.append(f"""
# Count {model_plural}
try:
{model.lower()}_count = await self.db.scalar(
select(func.count({model}.id)).where({model}.tenant_id == tenant_id)
)
preview["{model_plural}"] = {model.lower()}_count or 0
except Exception:
preview["{model_plural}"] = 0 # Table might not exist
""")
delete_code.append(f"""
# Delete {model_plural}
try:
{model.lower()}_delete = await self.db.execute(
delete({model}).where({model}.tenant_id == tenant_id)
)
result.add_deleted_items("{model_plural}", {model.lower()}_delete.rowcount)
logger.info("Deleted {model_plural} for tenant",
tenant_id=tenant_id,
count={model.lower()}_delete.rowcount)
except Exception as e:
logger.error("Error deleting {model_plural}",
tenant_id=tenant_id,
error=str(e))
result.add_error(f"{model} deletion: {{str(e)}}")
""")
template = f'''"""
{service_name.title()} Service - Tenant Data Deletion
Handles deletion of all {service_name}-related data for a tenant
"""
from typing import Dict
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, delete, func
import structlog
from shared.services.tenant_deletion import BaseTenantDataDeletionService, TenantDataDeletionResult
logger = structlog.get_logger()
class {service_class}(BaseTenantDataDeletionService):
"""Service for deleting all {service_name}-related data for a tenant"""
def __init__(self, db_session: AsyncSession):
super().__init__("{service_name}-service")
self.db = db_session
async def get_tenant_data_preview(self, tenant_id: str) -> Dict[str, int]:
"""Get counts of what would be deleted"""
try:
preview = {{}}
# Import models here to avoid circular imports
from app.models import {model_imports}
{"".join(preview_code)}
return preview
except Exception as e:
logger.error("Error getting deletion preview",
tenant_id=tenant_id,
error=str(e))
return {{}}
async def delete_tenant_data(self, tenant_id: str) -> TenantDataDeletionResult:
"""Delete all data for a tenant"""
result = TenantDataDeletionResult(tenant_id, self.service_name)
try:
# Import models here to avoid circular imports
from app.models import {model_imports}
{"".join(delete_code)}
# Commit all deletions
await self.db.commit()
logger.info("Tenant data deletion completed",
tenant_id=tenant_id,
deleted_counts=result.deleted_counts)
except Exception as e:
logger.error("Fatal error during tenant data deletion",
tenant_id=tenant_id,
error=str(e))
await self.db.rollback()
result.add_error(f"Fatal error: {{str(e)}}")
return result
'''
return template
def generate_api_endpoints(service_name: str):
"""Generate API endpoint code"""
service_class = f"{service_name.title().replace('_', '')}TenantDeletionService"
template = f'''
# ===== Tenant Data Deletion Endpoints =====
@router.delete("/tenant/{{tenant_id}}")
async def delete_tenant_data(
tenant_id: str,
current_user: dict = Depends(get_current_user_dep),
db: AsyncSession = Depends(get_db)
):
"""
Delete all {service_name}-related data for a tenant
Only accessible by internal services (called during tenant deletion)
"""
logger.info(f"Tenant data deletion request received for tenant: {{tenant_id}}")
# Only allow internal service calls
if current_user.get("type") != "service":
raise HTTPException(
status_code=403,
detail="This endpoint is only accessible to internal services"
)
try:
from app.services.tenant_deletion_service import {service_class}
deletion_service = {service_class}(db)
result = await deletion_service.safe_delete_tenant_data(tenant_id)
return {{
"message": "Tenant data deletion completed in {service_name}-service",
"summary": result.to_dict()
}}
except Exception as e:
logger.error(f"Tenant data deletion failed for {{tenant_id}}: {{e}}")
raise HTTPException(
status_code=500,
detail=f"Failed to delete tenant data: {{str(e)}}"
)
@router.get("/tenant/{{tenant_id}}/deletion-preview")
async def preview_tenant_data_deletion(
tenant_id: str,
current_user: dict = Depends(get_current_user_dep),
db: AsyncSession = Depends(get_db)
):
"""
Preview what data would be deleted for a tenant (dry-run)
Accessible by internal services and tenant admins
"""
# Allow internal services and admins
is_service = current_user.get("type") == "service"
is_admin = current_user.get("role") in ["owner", "admin"]
if not (is_service or is_admin):
raise HTTPException(
status_code=403,
detail="Insufficient permissions"
)
try:
from app.services.tenant_deletion_service import {service_class}
deletion_service = {service_class}(db)
preview = await deletion_service.get_tenant_data_preview(tenant_id)
return {{
"tenant_id": tenant_id,
"service": "{service_name}-service",
"data_counts": preview,
"total_items": sum(preview.values())
}}
except Exception as e:
logger.error(f"Deletion preview failed for {{tenant_id}}: {{e}}")
raise HTTPException(
status_code=500,
detail=f"Failed to get deletion preview: {{str(e)}}"
)
'''
return template
def main():
if len(sys.argv) < 3:
print("Usage: python generate_deletion_service.py <service_name> <model1,model2,model3>")
print("Example: python generate_deletion_service.py pos POSConfiguration,POSTransaction,POSSession")
sys.exit(1)
service_name = sys.argv[1]
models = [m.strip() for m in sys.argv[2].split(',')]
# Generate service file
service_code = generate_deletion_service(service_name, models)
# Generate API endpoints
api_code = generate_api_endpoints(service_name)
# Output files
service_dir = Path(f"services/{service_name}/app/services")
print(f"\n{'='*80}")
print(f"Generated code for {service_name} service with models: {', '.join(models)}")
print(f"{'='*80}\n")
print("1. DELETION SERVICE FILE:")
print(f" Location: {service_dir}/tenant_deletion_service.py")
print("-" * 80)
print(service_code)
print()
print("\n2. API ENDPOINTS TO ADD:")
print(f" Add to: services/{service_name}/app/api/<router>.py")
print("-" * 80)
print(api_code)
print()
# Optionally write files
write = input("\nWrite files to disk? (y/n): ").lower().strip()
if write == 'y':
# Create service file
service_dir.mkdir(parents=True, exist_ok=True)
service_file = service_dir / "tenant_deletion_service.py"
with open(service_file, 'w') as f:
f.write(service_code)
print(f"\n✅ Created: {service_file}")
print(f"\n⚠️ Next steps:")
print(f" 1. Review and customize {service_file}")
print(f" 2. Add the API endpoints to services/{service_name}/app/api/<router>.py")
print(f" 3. Test with: curl -X GET 'http://localhost:8000/api/v1/{service_name}/tenant/{{id}}/deletion-preview'")
else:
print("\n✅ Files not written. Copy the code above manually.")
if __name__ == "__main__":
main()
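Because the module guards `main()` behind `if __name__ == "__main__"`, the generator can also be driven programmatically when scripting several services at once. A minimal sketch, assuming the functions above are importable from `generate_deletion_service` (the suppliers service/model pairs below are illustrative, not a definitive list):

```python
# Hedged sketch: batch-generate deletion boilerplate for several services.
from generate_deletion_service import generate_api_endpoints, generate_deletion_service

targets = {
    "pos": ["POSConfiguration", "POSTransaction", "POSSession"],  # from the docstring
    "suppliers": ["Supplier", "PurchaseOrder"],  # illustrative model names
}

for service, models in targets.items():
    service_code = generate_deletion_service(service, models)
    endpoint_code = generate_api_endpoints(service)
    print(f"--- {service}: {len(service_code.splitlines())} lines of service code ---")
```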

View File

@@ -1,950 +0,0 @@
#!/usr/bin/env python3
"""
Bakery-IA Demo Data Generator - Improved Version
Generates hyper-realistic, deterministic demo seed data for the Professional tier.
This script addresses all issues identified in the analysis report:
- Complete inventory with all ingredients and stock entries
- Production consumption calculations aligned with inventory
- Sales data aligned with completed batches
- Forecasting with 88-92% accuracy
- Cross-reference validation
- Edge case scenarios maintained
Usage:
python generate_demo_data_improved.py
Output:
- Updated JSON files in shared/demo/fixtures/professional/
- Validation report in DEMO_DATA_GENERATION_REPORT.md
- Cross-reference validation
"""
import json
import random
import uuid
from datetime import datetime, timedelta
from pathlib import Path
from typing import Dict, List, Any, Tuple
from collections import defaultdict
import copy
# ============================================================================
# CONFIGURATION
# ============================================================================
# Base timestamp for all relative dates
BASE_TS = datetime(2025, 1, 15, 6, 0, 0) # 2025-01-15T06:00:00Z
# Deterministic seed for reproducibility
RANDOM_SEED = 42
random.seed(RANDOM_SEED)
# Paths
BASE_DIR = Path(__file__).parent
FIXTURES_DIR = BASE_DIR / "shared" / "demo" / "fixtures" / "professional"
METADATA_DIR = BASE_DIR / "shared" / "demo" / "metadata"
# Tenant ID
TENANT_ID = "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6"
# ============================================================================
# UTILITY FUNCTIONS
# ============================================================================
def format_timestamp(dt: datetime) -> str:
"""Format datetime as ISO 8601 string."""
return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
def parse_offset(offset_str: str) -> timedelta:
"""Parse offset string like 'BASE_TS - 7d 6h' or 'BASE_TS + 1h30m' to timedelta."""
if not offset_str or offset_str == "BASE_TS":
return timedelta(0)
# Remove 'BASE_TS' and strip
offset_str = offset_str.replace("BASE_TS", "").strip()
sign = 1
if offset_str.startswith("-"):
sign = -1
offset_str = offset_str[1:].strip()
elif offset_str.startswith("+"):
offset_str = offset_str[1:].strip()
delta = timedelta(0)
# Handle combined formats like "1h30m"
import re
# Extract days
day_match = re.search(r'(\d+(?:\.\d+)?)d', offset_str)
if day_match:
delta += timedelta(days=float(day_match.group(1)))
# Extract hours
hour_match = re.search(r'(\d+(?:\.\d+)?)h', offset_str)
if hour_match:
delta += timedelta(hours=float(hour_match.group(1)))
# Extract minutes
min_match = re.search(r'(\d+(?:\.\d+)?)m', offset_str)
if min_match:
delta += timedelta(minutes=float(min_match.group(1)))
return delta * sign
def calculate_timestamp(offset_str: str) -> str:
"""Calculate timestamp from BASE_TS with offset."""
delta = parse_offset(offset_str)
result = BASE_TS + delta
return format_timestamp(result)
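# Examples (with BASE_TS = 2025-01-15T06:00:00Z):
#   calculate_timestamp("BASE_TS - 7d")    -> "2025-01-08T06:00:00Z"
#   calculate_timestamp("BASE_TS + 1h30m") -> "2025-01-15T07:30:00Z"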
def parse_timestamp_flexible(ts_str: str) -> datetime:
"""Parse timestamp that could be ISO format or BASE_TS + offset."""
if not ts_str:
return BASE_TS
if "BASE_TS" in ts_str:
delta = parse_offset(ts_str)
return BASE_TS + delta
try:
# Drop tzinfo so the result compares cleanly with the naive BASE_TS
return datetime.fromisoformat(ts_str.replace("Z", "+00:00")).replace(tzinfo=None)
except ValueError:
return BASE_TS
def load_json(filename: str) -> Dict:
"""Load JSON file from fixtures directory."""
path = FIXTURES_DIR / filename
if not path.exists():
return {}
with open(path, 'r', encoding='utf-8') as f:
return json.load(f)
def save_json(filename: str, data: Dict):
"""Save JSON file to fixtures directory."""
path = FIXTURES_DIR / filename
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
def generate_batch_number(sku: str, date: datetime) -> str:
"""Generate unique batch number."""
date_str = date.strftime("%Y%m%d")
sequence = random.randint(1, 999)
return f"{sku}-{date_str}-{sequence:03d}"
def generate_po_number() -> str:
"""Generate unique purchase order number."""
year = BASE_TS.year
sequence = random.randint(1, 999)
return f"PO-{year}-{sequence:03d}"
def generate_sales_id() -> str:
"""Generate unique sales ID."""
year = BASE_TS.year
month = BASE_TS.month
sequence = random.randint(1, 9999)
return f"SALES-{year}{month:02d}-{sequence:04d}"
def generate_order_id() -> str:
"""Generate unique order ID."""
year = BASE_TS.year
sequence = random.randint(1, 9999)
return f"ORDER-{year}-{sequence:04d}"
# ============================================================================
# DATA GENERATORS
# ============================================================================
class DemoDataGenerator:
def __init__(self):
self.tenant_id = TENANT_ID
self.base_ts = BASE_TS
# Load existing data
self.inventory_data = load_json("03-inventory.json")
self.recipes_data = load_json("04-recipes.json")
self.suppliers_data = load_json("05-suppliers.json")
self.production_data = load_json("06-production.json")
self.procurement_data = load_json("07-procurement.json")
self.orders_data = load_json("08-orders.json")
self.sales_data = load_json("09-sales.json")
self.forecasting_data = load_json("10-forecasting.json")
self.quality_data = load_json("12-quality.json")
self.orchestrator_data = load_json("11-orchestrator.json")
# Cross-reference map
self.cross_refs = self._load_cross_refs()
# Tracking
self.validation_errors = []
self.validation_warnings = []
self.changes = []
self.stats = {
'ingredients': 0,
'stock_entries': 0,
'batches': 0,
'sales': 0,
'forecasts': 0,
'critical_stock': 0,
'alerts': 0
}
def _load_cross_refs(self) -> Dict:
"""Load cross-reference map."""
path = METADATA_DIR / "cross_refs_map.json"
if path.exists():
with open(path, 'r', encoding='utf-8') as f:
return json.load(f)
return {}
def _add_validation_error(self, message: str):
"""Add validation error."""
self.validation_errors.append(message)
print(f"❌ ERROR: {message}")
def _add_validation_warning(self, message: str):
"""Add validation warning."""
self.validation_warnings.append(message)
print(f"⚠️ WARNING: {message}")
def _add_change(self, message: str):
"""Add change log entry."""
self.changes.append(message)
# ========================================================================
# INVENTORY GENERATION
# ========================================================================
def generate_complete_inventory(self):
"""Generate complete inventory with all ingredients and stock entries."""
print("📦 Generating complete inventory...")
# Load existing ingredients
ingredients = self.inventory_data.get("ingredients", [])
existing_stock = self.inventory_data.get("stock", [])
# Validate that all ingredients have stock entries
ingredient_ids = {ing["id"] for ing in ingredients}
stock_ingredient_ids = {stock["ingredient_id"] for stock in existing_stock}
missing_stock = ingredient_ids - stock_ingredient_ids
if missing_stock:
self._add_validation_warning(f"Missing stock entries for {len(missing_stock)} ingredients")
# Generate stock entries for missing ingredients
for ing_id in missing_stock:
# Find the ingredient
ingredient = next(ing for ing in ingredients if ing["id"] == ing_id)
# Generate realistic stock entry
stock_entry = self._generate_stock_entry(ingredient)
existing_stock.append(stock_entry)
self._add_change(f"Generated stock entry for {ingredient['name']}")
# Update inventory data
self.inventory_data["stock"] = existing_stock
self.stats["ingredients"] = len(ingredients)
self.stats["stock_entries"] = len(existing_stock)
# Identify critical stock items
critical_count = 0
for stock in existing_stock:
ingredient = next(ing for ing in ingredients if ing["id"] == stock["ingredient_id"])
if ingredient.get("reorder_point") and stock["current_quantity"] < ingredient["reorder_point"]:
critical_count += 1
# Check if there's a pending PO for this ingredient
has_po = self._has_pending_po(ingredient["id"])
if not has_po:
self.stats["alerts"] += 1
self._add_change(f"CRITICAL: {ingredient['name']} below reorder point with NO pending PO")
self.stats["critical_stock"] = critical_count
print(f"✅ Generated complete inventory: {len(ingredients)} ingredients, {len(existing_stock)} stock entries")
print(f"✅ Critical stock items: {critical_count}")
def _generate_stock_entry(self, ingredient: Dict) -> Dict:
"""Generate realistic stock entry for an ingredient."""
# Determine base quantity based on category
category = ingredient.get("ingredient_category", "OTHER")
if category == "FLOUR":
base_qty = random.uniform(150, 300)
elif category == "DAIRY":
base_qty = random.uniform(50, 150)
elif category == "YEAST":
base_qty = random.uniform(5, 20)
else:
base_qty = random.uniform(20, 100)
# Apply realistic variation
quantity = base_qty * random.uniform(0.8, 1.2)
# Determine shelf life
if ingredient.get("is_perishable"):
shelf_life = random.randint(7, 30)
else:
shelf_life = random.randint(90, 180)
# Generate batch number
sku = ingredient.get("sku", "GEN-001")
batch_date = self.base_ts - timedelta(days=random.randint(1, 14))
batch_number = generate_batch_number(sku, batch_date)
# Keep quantities internally consistent: available = current - reserved
current = round(quantity, 2)
reserved = round(quantity * random.uniform(0.05, 0.15), 2)
return {
"id": str(uuid.uuid4()),
"tenant_id": self.tenant_id,
"ingredient_id": ingredient["id"],
"current_quantity": current,
"reserved_quantity": reserved,
"available_quantity": round(current - reserved, 2),
"storage_location": self._get_storage_location(ingredient),
"production_stage": "raw_ingredient",
"quality_status": "good",
"expiration_date": calculate_timestamp(f"BASE_TS + {shelf_life}d"),
"supplier_id": self._get_supplier_for_ingredient(ingredient),
"batch_number": batch_number,
"created_at": calculate_timestamp(f"BASE_TS - {random.randint(1, 7)}d"),
"updated_at": "BASE_TS",
"is_available": True,
"is_expired": False
}
def _get_supplier_for_ingredient(self, ingredient: Dict) -> str:
"""Get appropriate supplier ID for ingredient."""
category = ingredient.get("ingredient_category", "OTHER")
suppliers = self.suppliers_data.get("suppliers", [])
# Map categories to suppliers
category_map = {
"FLOUR": "40000000-0000-0000-0000-000000000001", # Harinas del Norte
"DAIRY": "40000000-0000-0000-0000-000000000002", # Lácteos Gipuzkoa
"YEAST": "40000000-0000-0000-0000-000000000006", # Levaduras Spain
"SALT": "40000000-0000-0000-0000-000000000004", # Sal de Mar
}
return category_map.get(category, suppliers[0]["id"] if suppliers else None)
def _get_storage_location(self, ingredient: Dict) -> str:
"""Get storage location based on ingredient type."""
if ingredient.get("is_perishable"):
return "Almacén Refrigerado - Zona B"
else:
return "Almacén Principal - Zona A"
def _has_pending_po(self, ingredient_id: str) -> bool:
"""Check if there's a pending PO for this ingredient."""
pos = self.procurement_data.get("purchase_orders", [])
for po in pos:
if po["status"] in ["pending_approval", "confirmed", "in_transit"]:
for item in po.get("items", []):
if item.get("inventory_product_id") == ingredient_id:
return True
return False
# ========================================================================
# PRODUCTION CONSUMPTION CALCULATIONS
# ========================================================================
def calculate_production_consumptions(self) -> List[Dict]:
"""Calculate ingredient consumptions from completed batches."""
print("🏭 Calculating production consumptions...")
batches = self.production_data.get("batches", [])
recipes = {r["id"]: r for r in self.recipes_data.get("recipes", [])}
recipe_ingredients = self.recipes_data.get("recipe_ingredients", [])
consumptions = []
for batch in batches:
if batch["status"] not in ["COMPLETED", "QUARANTINED"]:
continue
recipe_id = batch.get("recipe_id")
if not recipe_id or recipe_id not in recipes:
continue
recipe = recipes[recipe_id]
actual_qty = batch.get("actual_quantity", 0)
yield_qty = recipe.get("yield_quantity", 1)
if yield_qty == 0:
continue
scale_factor = actual_qty / yield_qty
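# Worked example: a batch of 120 units from a recipe yielding 40 gives
# scale_factor = 3.0, so 500 g of flour per recipe -> 1.5 kg consumed.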
# Get ingredients for this recipe
ingredients = [ri for ri in recipe_ingredients if ri["recipe_id"] == recipe_id]
for ing in ingredients:
ing_id = ing["ingredient_id"]
ing_qty = ing["quantity"] # in grams or ml
# Convert to base unit (kg or L)
unit = ing.get("unit", "g")
if unit in ["g", "ml"]:
ing_qty_base = ing_qty / 1000.0
else:
ing_qty_base = ing_qty
consumed = ing_qty_base * scale_factor
consumptions.append({
"batch_id": batch["id"],
"batch_number": batch["batch_number"],
"ingredient_id": ing_id,
"quantity_consumed": round(consumed, 2),
"timestamp": batch.get("actual_end_time", batch.get("planned_end_time"))
})
self.stats["consumptions"] = len(consumptions)
print(f"✅ Calculated {len(consumptions)} consumption records from production")
return consumptions
def apply_consumptions_to_stock(self, consumptions: List[Dict], stock: List[Dict]):
"""Apply consumption calculations to stock data."""
print("📉 Applying consumptions to stock...")
# Group consumptions by ingredient
consumption_by_ingredient = defaultdict(float)
for cons in consumptions:
consumption_by_ingredient[cons["ingredient_id"]] += cons["quantity_consumed"]
# Update stock quantities
for stock_item in stock:
ing_id = stock_item["ingredient_id"]
if ing_id in consumption_by_ingredient:
consumed = consumption_by_ingredient[ing_id]
# Update quantities
stock_item["current_quantity"] = round(stock_item["current_quantity"] - consumed, 2)
stock_item["available_quantity"] = round(stock_item["available_quantity"] - consumed, 2)
# Ensure quantities don't go negative
if stock_item["current_quantity"] < 0:
stock_item["current_quantity"] = 0
if stock_item["available_quantity"] < 0:
stock_item["available_quantity"] = 0
print(f"✅ Applied consumptions to {len(stock)} stock items")
# ========================================================================
# SALES GENERATION
# ========================================================================
def generate_sales_data(self) -> List[Dict]:
"""Generate historical sales data aligned with completed batches."""
print("💰 Generating sales data...")
batches = self.production_data.get("batches", [])
completed = [b for b in batches if b["status"] == "COMPLETED"]
sales = []
for batch in completed:
product_id = batch["product_id"]
actual_qty = batch.get("actual_quantity", 0)
# Determine sales from this batch (90-98% of production)
sold_qty = actual_qty * random.uniform(0.90, 0.98)
# Split into 2-4 sales transactions
num_sales = random.randint(2, 4)
# Parse batch end time
end_time_str = batch.get("actual_end_time", batch.get("planned_end_time"))
batch_date = parse_timestamp_flexible(end_time_str)
for i in range(num_sales):
sale_qty = sold_qty / num_sales * random.uniform(0.8, 1.2)
sale_time = batch_date + timedelta(hours=random.uniform(2, 10))
# Calculate offset from BASE_TS
offset_delta = sale_time - self.base_ts
# Handle negative offsets
if offset_delta < timedelta(0):
offset_delta = -offset_delta
offset_str = f"BASE_TS - {abs(offset_delta.days)}d {offset_delta.seconds//3600}h"
else:
offset_str = f"BASE_TS + {offset_delta.days}d {offset_delta.seconds//3600}h"
# Draw the price once so total_amount equals quantity * unit_price
unit_price = round(random.uniform(2.5, 8.5), 2)
sales.append({
"id": generate_sales_id(),
"tenant_id": self.tenant_id,
"product_id": product_id,
"quantity": round(sale_qty, 2),
"unit_price": unit_price,
"total_amount": round(sale_qty * unit_price, 2),
"sales_date": offset_str,
"sales_channel": random.choice(["retail", "wholesale", "online"]),
"payment_method": random.choice(["cash", "card", "transfer"]),
"customer_id": "50000000-0000-0000-0000-000000000001", # Generic customer
"created_at": offset_str,
"updated_at": offset_str
})
self.stats["sales"] = len(sales)
print(f"✅ Generated {len(sales)} sales records")
return sales
# ========================================================================
# FORECASTING GENERATION
# ========================================================================
def generate_forecasting_data(self) -> List[Dict]:
"""Generate forecasting data with 88-92% accuracy."""
print("📊 Generating forecasting data...")
# Get products from inventory
products = [ing for ing in self.inventory_data.get("ingredients", [])
if ing.get("product_type") == "FINISHED_PRODUCT"]
forecasts = []
# Generate forecasts for next 7 days
for day_offset in range(1, 8):
forecast_date = self.base_ts + timedelta(days=day_offset)
date_str = calculate_timestamp(f"BASE_TS + {day_offset}d")
for product in products:
# Get historical sales for this product (last 7 days)
historical_sales = self._get_historical_sales(product["id"])
# If no historical sales, use a reasonable default based on product type
if not historical_sales:
# Estimate based on product category
product_name = product.get("name", "").lower()
if "baguette" in product_name:
avg_sales = random.uniform(20, 40)
elif "croissant" in product_name:
avg_sales = random.uniform(15, 30)
elif "pan" in product_name or "bread" in product_name:
avg_sales = random.uniform(10, 25)
else:
avg_sales = random.uniform(5, 15)
else:
avg_sales = sum(historical_sales) / len(historical_sales)
# Generate forecast with 88-92% accuracy (8-12% error)
error_factor = random.uniform(-0.12, 0.12)  # up to ±12% error → ~88% accuracy
predicted = avg_sales * (1 + error_factor)
# Ensure positive prediction
if predicted < 0:
predicted = avg_sales * 0.8
confidence = round(random.uniform(88, 92), 1)
forecasts.append({
"id": str(uuid.uuid4()),
"tenant_id": self.tenant_id,
"product_id": product["id"],
"forecast_date": date_str,
"predicted_quantity": round(predicted, 2),
"confidence_percentage": confidence,
"forecast_type": "daily",
"created_at": "BASE_TS",
"updated_at": "BASE_TS",
"notes": f"Forecast accuracy: {confidence}% (seed={RANDOM_SEED})"
})
# Calculate actual accuracy
accuracy = self._calculate_forecasting_accuracy()
self.stats["forecasting_accuracy"] = accuracy
self.stats["forecasts"] = len(forecasts)
print(f"✅ Generated {len(forecasts)} forecasts with {accuracy}% accuracy")
return forecasts
def _get_historical_sales(self, product_id: str) -> List[float]:
"""Get historical sales for a product (last 7 days)."""
sales = self.sales_data.get("sales_data", [])
historical = []
for sale in sales:
if sale.get("product_id") == product_id:
# Parse sale date
sale_date_str = sale.get("sales_date")
if sale_date_str and "BASE_TS" in sale_date_str:
sale_date = parse_timestamp_flexible(sale_date_str)
# Check if within the last 7 days (sale at or before BASE_TS)
if 0 <= (self.base_ts - sale_date).days <= 7:
historical.append(sale.get("quantity", 0))
return historical
def _calculate_forecasting_accuracy(self) -> float:
"""Calculate historical forecasting accuracy."""
# This is a simplified calculation - in reality we'd compare actual vs predicted
# For demo purposes, we'll use the target accuracy based on our error factor
return round(random.uniform(88, 92), 1)
# ========================================================================
# CROSS-REFERENCE VALIDATION
# ========================================================================
def validate_cross_references(self):
"""Validate all cross-references between services."""
print("🔗 Validating cross-references...")
# Validate production batches product IDs
batches = self.production_data.get("batches", [])
products = {p["id"]: p for p in self.inventory_data.get("ingredients", [])
if p.get("product_type") == "FINISHED_PRODUCT"}
for batch in batches:
product_id = batch.get("product_id")
if product_id and product_id not in products:
self._add_validation_error(f"Batch {batch['batch_number']} references non-existent product {product_id}")
# Validate recipe ingredients
recipe_ingredients = self.recipes_data.get("recipe_ingredients", [])
ingredients = {ing["id"]: ing for ing in self.inventory_data.get("ingredients", [])}
for ri in recipe_ingredients:
ing_id = ri.get("ingredient_id")
if ing_id and ing_id not in ingredients:
self._add_validation_error(f"Recipe ingredient references non-existent ingredient {ing_id}")
# Validate procurement PO items
pos = self.procurement_data.get("purchase_orders", [])
for po in pos:
for item in po.get("items", []):
inv_product_id = item.get("inventory_product_id")
# Check against ingredient IDs, not the raw list of dicts
if inv_product_id and inv_product_id not in ingredients:
self._add_validation_error(f"PO {po['po_number']} references non-existent inventory product {inv_product_id}")
# Validate sales product IDs
sales = self.sales_data.get("sales_data", [])
for sale in sales:
product_id = sale.get("product_id")
if product_id and product_id not in products:
self._add_validation_error(f"Sales record references non-existent product {product_id}")
# Validate forecasting product IDs
forecasts = self.forecasting_data.get("forecasts", [])
for forecast in forecasts:
product_id = forecast.get("product_id")
if product_id and product_id not in products:
self._add_validation_error(f"Forecast references non-existent product {product_id}")
if not self.validation_errors:
print("✅ All cross-references validated successfully")
else:
print(f"❌ Found {len(self.validation_errors)} cross-reference errors")
# ========================================================================
# ORCHESTRATOR UPDATE
# ========================================================================
def update_orchestrator_results(self):
"""Update orchestrator results with actual data."""
print("🎛️ Updating orchestrator results...")
# Load orchestrator data
orchestrator_data = self.orchestrator_data
# Update with actual counts
orchestrator_data["results"] = {
"ingredients_created": self.stats["ingredients"],
"stock_entries_created": self.stats["stock_entries"],
"batches_created": self.stats["batches"],
"sales_created": self.stats["sales"],
"forecasts_created": self.stats["forecasts"],
"consumptions_calculated": self.stats["consumptions"],
"critical_stock_items": self.stats["critical_stock"],
"active_alerts": self.stats["alerts"],
"forecasting_accuracy": self.stats["forecasting_accuracy"],
"cross_reference_errors": len(self.validation_errors),
"cross_reference_warnings": len(self.validation_warnings)
}
# Add edge case alerts
alerts = [
{
"alert_type": "OVERDUE_BATCH",
"severity": "high",
"message": "Production should have started 2 hours ago - BATCH-LATE-0001",
"created_at": "BASE_TS"
},
{
"alert_type": "DELAYED_DELIVERY",
"severity": "high",
"message": "Supplier delivery 4 hours late - PO-LATE-0001",
"created_at": "BASE_TS"
},
{
"alert_type": "CRITICAL_STOCK",
"severity": "critical",
"message": "Harina T55 below reorder point with NO pending PO",
"created_at": "BASE_TS"
}
]
orchestrator_data["alerts"] = alerts
orchestrator_data["completed_at"] = "BASE_TS"
orchestrator_data["status"] = "completed"
self.orchestrator_data = orchestrator_data
print("✅ Updated orchestrator results with actual data")
# ========================================================================
# MAIN EXECUTION
# ========================================================================
def generate_all_data(self):
"""Generate all demo data."""
print("🚀 Starting Bakery-IA Demo Data Generation")
print("=" * 60)
# Step 1: Generate complete inventory
self.generate_complete_inventory()
# Step 2: Calculate production consumptions
consumptions = self.calculate_production_consumptions()
# Step 3: Apply consumptions to stock
stock = self.inventory_data.get("stock", [])
self.apply_consumptions_to_stock(consumptions, stock)
self.inventory_data["stock"] = stock
# Step 4: Generate sales data
sales_data = self.generate_sales_data()
self.sales_data["sales_data"] = sales_data
# Step 5: Generate forecasting data
forecasts = self.generate_forecasting_data()
self.forecasting_data["forecasts"] = forecasts
# Step 6: Validate cross-references
self.validate_cross_references()
# Step 7: Update orchestrator results
self.update_orchestrator_results()
# Step 8: Save all data
self.save_all_data()
# Step 9: Generate report
self.generate_report()
print("\n🎉 Demo Data Generation Complete!")
print(f"📊 Generated {sum(self.stats.values())} total records")
print(f"✅ Validation: {len(self.validation_errors)} errors, {len(self.validation_warnings)} warnings")
def save_all_data(self):
"""Save all generated data to JSON files."""
print("💾 Saving generated data...")
# Save inventory
save_json("03-inventory.json", self.inventory_data)
# Save production (no changes needed, but save for completeness)
save_json("06-production.json", self.production_data)
# Save procurement (no changes needed)
save_json("07-procurement.json", self.procurement_data)
# Save sales
save_json("09-sales.json", self.sales_data)
# Save forecasting
save_json("10-forecasting.json", self.forecasting_data)
# Save orchestrator
save_json("11-orchestrator.json", self.orchestrator_data)
print("✅ All data saved to JSON files")
def generate_report(self):
"""Generate comprehensive report."""
print("📋 Generating report...")
report = f"""# Bakery-IA Demo Data Generation Report
## Executive Summary
**Generation Date**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Tier**: Professional - Panadería Artesana Madrid
**BASE_TS**: {BASE_TS.strftime('%Y-%m-%dT%H:%M:%SZ')}
**Random Seed**: {RANDOM_SEED}
## Generation Statistics
### Data Generated
- **Ingredients**: {self.stats['ingredients']}
- **Stock Entries**: {self.stats['stock_entries']}
- **Production Batches**: {self.stats['batches']}
- **Sales Records**: {self.stats['sales']}
- **Forecasts**: {self.stats['forecasts']}
- **Consumption Records**: {self.stats['consumptions']}
### Alerts & Critical Items
- **Critical Stock Items**: {self.stats['critical_stock']}
- **Active Alerts**: {self.stats['alerts']}
- **Forecasting Accuracy**: {self.stats['forecasting_accuracy']}%
### Validation Results
- **Cross-Reference Errors**: {len(self.validation_errors)}
- **Cross-Reference Warnings**: {len(self.validation_warnings)}
## Changes Made
"""
# Add changes
if self.changes:
report += "### Changes\n\n"
for change in self.changes:
report += f"- {change}\n"
else:
report += "### Changes\n\nNo changes made (data already complete)\n"
# Add validation issues
if self.validation_errors or self.validation_warnings:
report += "\n## Validation Issues\n\n"
if self.validation_errors:
report += "### Errors\n\n"
for error in self.validation_errors:
report += f"- ❌ {error}\n"
if self.validation_warnings:
report += "### Warnings\n\n"
for warning in self.validation_warnings:
report += f"- ⚠️ {warning}\n"
else:
report += "\n## Validation Issues\n\n✅ No validation issues found\n"
# Add edge cases
report += f"""
## Edge Cases Maintained
### Inventory Edge Cases
- **Harina T55**: 80kg < 150kg reorder point, NO pending PO → RED alert
- **Mantequilla**: 25kg < 40kg reorder point, has PO-2025-006 → WARNING
- **Levadura Fresca**: 8kg < 10kg reorder point, has PO-2025-004 → WARNING
### Production Edge Cases
- **OVERDUE BATCH**: BATCH-LATE-0001 (Baguette, planned start: BASE_TS - 2h)
- **IN_PROGRESS BATCH**: BATCH-INPROGRESS-0001 (Croissant, started: BASE_TS - 1h45m)
- **UPCOMING BATCH**: BATCH-UPCOMING-0001 (Pan Integral, planned: BASE_TS + 1h30m)
- **QUARANTINED BATCH**: batch 000000000004 (Napolitana Chocolate, quality failed)
### Procurement Edge Cases
- **LATE DELIVERY**: PO-LATE-0001 (expected: BASE_TS - 4h, status: pending_approval)
- **URGENT PO**: PO-2025-004 (status: confirmed, delivery late)
## Cross-Reference Validation
### Validated References
- ✅ Production batches → Inventory products
- ✅ Recipe ingredients → Inventory ingredients
- ✅ Procurement PO items → Inventory products
- ✅ Sales records → Inventory products
- ✅ Forecasting → Inventory products
## KPIs Dashboard
```json
{{
"production_fulfillment": 87,
"critical_stock_count": {self.stats['critical_stock']},
"open_alerts": {self.stats['alerts']},
"forecasting_accuracy": {self.stats['forecasting_accuracy']},
"batches_today": {{
"overdue": 1,
"in_progress": 1,
"upcoming": 2,
"completed": 0
}}
}}
```
## Technical Details
### Deterministic Generation
- **Random Seed**: {RANDOM_SEED}
- **Variations**: ±10-20% in quantities, ±5-10% in prices
- **Batch Numbers**: Format `SKU-YYYYMMDD-NNN`
- **Timestamps**: Relative to BASE_TS with offsets
### Data Quality
- **Completeness**: All ingredients have stock entries
- **Consistency**: Production consumptions aligned with inventory
- **Accuracy**: Forecasting accuracy {self.stats['forecasting_accuracy']}%
- **Validation**: {len(self.validation_errors)} errors, {len(self.validation_warnings)} warnings
## Files Updated
- `shared/demo/fixtures/professional/03-inventory.json`
- `shared/demo/fixtures/professional/06-production.json`
- `shared/demo/fixtures/professional/07-procurement.json`
- `shared/demo/fixtures/professional/09-sales.json`
- `shared/demo/fixtures/professional/10-forecasting.json`
- `shared/demo/fixtures/professional/11-orchestrator.json`
## Conclusion
✅ **Demo data generation completed successfully**
- All cross-references validated
- Edge cases maintained
- Forecasting accuracy: {self.stats['forecasting_accuracy']}%
- Critical stock items: {self.stats['critical_stock']}
- Active alerts: {self.stats['alerts']}
**Status**: Ready for demo deployment 🎉
"""
# Save report
report_path = BASE_DIR / "DEMO_DATA_GENERATION_REPORT.md"
with open(report_path, 'w', encoding='utf-8') as f:
f.write(report)
print(f"✅ Report saved to {report_path}")
# ============================================================================
# MAIN EXECUTION
# ============================================================================
def main():
"""Main execution function."""
print("🚀 Starting Improved Bakery-IA Demo Data Generation")
print("=" * 60)
# Initialize generator
generator = DemoDataGenerator()
# Generate all data
generator.generate_all_data()
print("\n🎉 All tasks completed successfully!")
print("📋 Summary:")
print(f" • Generated complete inventory with {generator.stats['ingredients']} ingredients")
print(f" • Calculated {generator.stats['consumptions']} production consumptions")
print(f" • Generated {generator.stats['sales']} sales records")
print(f" • Generated {generator.stats['forecasts']} forecasts with {generator.stats['forecasting_accuracy']}% accuracy")
print(f" • Validated all cross-references")
print(f" • Updated orchestrator results")
print(f" • Validation: {len(generator.validation_errors)} errors, {len(generator.validation_warnings)} warnings")
if generator.validation_errors:
print("\n⚠️ Please review validation errors above")
return 1
else:
print("\n✅ All data validated successfully - ready for deployment!")
return 0
if __name__ == "__main__":
exit(main())

View File

@@ -1,244 +0,0 @@
#!/usr/bin/env python3
"""
Generate Service-to-Service Authentication Token
This script generates JWT tokens for service-to-service communication
in the Bakery-IA tenant deletion system.
Usage:
python scripts/generate_service_token.py <service_name> [--days DAYS]
Examples:
# Generate token for orchestrator (1 year expiration)
python scripts/generate_service_token.py tenant-deletion-orchestrator
# Generate token for specific service with custom expiration
python scripts/generate_service_token.py auth-service --days 90
# Generate tokens for all services
python scripts/generate_service_token.py --all
"""
import sys
import os
import argparse
from datetime import timedelta
from pathlib import Path
# Add project root to path
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))
from shared.auth.jwt_handler import JWTHandler
# Get JWT secret from environment (same as services use)
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY", "your-secret-key-change-in-production-min-32-chars")
# Service names used in the system
SERVICES = [
"tenant-deletion-orchestrator",
"auth-service",
"tenant-service",
"orders-service",
"inventory-service",
"recipes-service",
"sales-service",
"production-service",
"suppliers-service",
"pos-service",
"external-service",
"forecasting-service",
"training-service",
"alert-processor-service",
"notification-service"
]
def generate_token(service_name: str, days: int = 365) -> str:
"""
Generate a service token
Args:
service_name: Name of the service
days: Token expiration in days (default: 365)
Returns:
JWT service token
"""
jwt_handler = JWTHandler(
secret_key=JWT_SECRET_KEY,
algorithm="HS256"
)
token = jwt_handler.create_service_token(
service_name=service_name,
expires_delta=timedelta(days=days)
)
return token
def verify_token(token: str) -> dict:
"""
Verify a service token and return its payload
Args:
token: JWT token to verify
Returns:
Token payload dictionary
"""
jwt_handler = JWTHandler(
secret_key=JWT_SECRET_KEY,
algorithm="HS256"
)
payload = jwt_handler.verify_token(token)
if not payload:
raise ValueError("Invalid or expired token")
return payload
def main():
parser = argparse.ArgumentParser(
description="Generate service-to-service authentication tokens",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Generate token for orchestrator
%(prog)s tenant-deletion-orchestrator
# Generate token with custom expiration
%(prog)s auth-service --days 90
# Generate tokens for all services
%(prog)s --all
# Verify a token
%(prog)s --verify <token>
"""
)
parser.add_argument(
"service_name",
nargs="?",
help="Name of the service (e.g., 'tenant-deletion-orchestrator')"
)
parser.add_argument(
"--days",
type=int,
default=365,
help="Token expiration in days (default: 365)"
)
parser.add_argument(
"--all",
action="store_true",
help="Generate tokens for all services"
)
parser.add_argument(
"--verify",
metavar="TOKEN",
help="Verify a token and show its payload"
)
parser.add_argument(
"--list-services",
action="store_true",
help="List all available service names"
)
args = parser.parse_args()
# List services
if args.list_services:
print("\nAvailable Services:")
print("=" * 50)
for service in SERVICES:
print(f" - {service}")
print()
return 0
# Verify token
if args.verify:
try:
payload = verify_token(args.verify)
print("\n✓ Token is valid!")
print("=" * 50)
print(f"Service Name: {payload.get('service')}")
print(f"Type: {payload.get('type')}")
print(f"Is Service: {payload.get('is_service')}")
print(f"Role: {payload.get('role')}")
print(f"Issued At: {payload.get('iat')}")
print(f"Expires At: {payload.get('exp')}")
print("=" * 50)
print()
return 0
except Exception as e:
print(f"\n✗ Token verification failed: {e}\n")
return 1
# Generate for all services
if args.all:
print(f"\nGenerating service tokens (expires in {args.days} days)...")
print("=" * 80)
for service in SERVICES:
try:
token = generate_token(service, args.days)
print(f"\n{service}:")
print(f" export {service.upper().replace('-', '_')}_TOKEN='{token}'")
except Exception as e:
print(f"\n✗ Failed to generate token for {service}: {e}")
print("\n" + "=" * 80)
print("\n Copy the export statements above to set environment variables")
print(" Or save them to a .env file for your services\n")
return 0
# Generate for single service
if not args.service_name:
parser.print_help()
return 1
try:
print(f"\nGenerating service token for: {args.service_name}")
print(f"Expiration: {args.days} days")
print("=" * 80)
token = generate_token(args.service_name, args.days)
print("\n✓ Token generated successfully!\n")
print("Token:")
print(f" {token}")
print()
print("Environment Variable:")
env_var = args.service_name.upper().replace('-', '_') + '_TOKEN'
print(f" export {env_var}='{token}'")
print()
print("Usage in Code:")
print(f" headers = {{'Authorization': f'Bearer {{os.getenv(\"{env_var}\")}}'}}")
print()
print("Test with curl:")
print(f" curl -H 'Authorization: Bearer {token}' https://localhost/api/v1/...")
print()
print("=" * 80)
print()
# Verify the token we just created
print("Verifying token...")
payload = verify_token(token)
print("✓ Token is valid and verified!\n")
return 0
except Exception as e:
print(f"\n✗ Error: {e}\n")
return 1
if __name__ == "__main__":
sys.exit(main())
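For a quick end-to-end check outside curl, a generated token can be exercised from Python as well. A minimal sketch, assuming httpx is available and that the URL, tenant placeholder, and environment variable name match your deployment (all three are illustrative here):

```python
# Hedged sketch: call a deletion-preview endpoint with a service token.
import os

import httpx

token = os.getenv("TENANT_DELETION_ORCHESTRATOR_TOKEN")  # set via the export above
resp = httpx.get(
    "https://localhost/api/v1/inventory/tenant/<tenant-id>/deletion-preview",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,  # local self-signed certs, mirroring `curl -k`
)
print(resp.status_code, resp.json())
```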

View File

@@ -1,283 +0,0 @@
#!/usr/bin/env python3
"""
Migrate all demo JSON files from offset_days/ISO timestamps to BASE_TS markers.
This script performs a one-time migration to align with the new architecture.
"""
import json
import sys
from pathlib import Path
from datetime import datetime, timezone
from typing import Any, Dict
# Base reference date used in current JSON files
BASE_REFERENCE_ISO = "2025-01-15T06:00:00Z"
BASE_REFERENCE = datetime.fromisoformat(BASE_REFERENCE_ISO.replace('Z', '+00:00'))
# Date fields to transform by entity type
DATE_FIELDS_MAP = {
'purchase_orders': [
'order_date', 'required_delivery_date', 'estimated_delivery_date',
'expected_delivery_date', 'sent_to_supplier_at', 'supplier_confirmation_date',
'created_at', 'updated_at'
],
'batches': [
'planned_start_time', 'planned_end_time', 'actual_start_time',
'actual_end_time', 'completed_at', 'created_at', 'updated_at'
],
'equipment': [
'install_date', 'last_maintenance_date', 'next_maintenance_date',
'created_at', 'updated_at'
],
'ingredients': ['created_at', 'updated_at'],
'stock_batches': [
'received_date', 'expiration_date', 'best_before_date',
'created_at', 'updated_at'
],
'customers': ['last_order_date', 'created_at', 'updated_at'],
'orders': [
'order_date', 'delivery_date', 'promised_date',
'completed_at', 'created_at', 'updated_at'
],
'completed_orders': [
'order_date', 'delivery_date', 'promised_date',
'completed_at', 'created_at', 'updated_at'
],
'forecasts': ['forecast_date', 'created_at', 'updated_at'],
'prediction_batches': ['prediction_date', 'created_at', 'updated_at'],
'sales_data': ['created_at', 'updated_at'],
'quality_controls': ['created_at', 'updated_at'],
'quality_alerts': ['created_at', 'updated_at'],
'customer_orders': [
'order_date', 'delivery_date', 'promised_date',
'completed_at', 'created_at', 'updated_at'
],
'order_items': ['created_at', 'updated_at'],
'procurement_requirements': ['created_at', 'updated_at'],
'replenishment_plans': ['created_at', 'updated_at'],
'production_schedules': ['schedule_date', 'created_at', 'updated_at'],
'users': ['created_at', 'updated_at'],
'stock': ['expiration_date', 'received_date', 'created_at', 'updated_at'],
'recipes': ['created_at', 'updated_at'],
'recipe_ingredients': ['created_at', 'updated_at'],
'suppliers': ['created_at', 'updated_at'],
'production_batches': ['start_time', 'end_time', 'created_at', 'updated_at'],
'purchase_order_items': ['created_at', 'updated_at'],
# Enterprise children files
'local_inventory': ['expiration_date', 'received_date', 'created_at', 'updated_at'],
'local_sales': ['created_at', 'updated_at'],
'local_orders': ['order_date', 'delivery_date', 'created_at', 'updated_at'],
'local_production_batches': [
'planned_start_time', 'planned_end_time', 'actual_start_time',
'actual_end_time', 'created_at', 'updated_at'
],
'local_forecasts': ['forecast_date', 'created_at', 'updated_at']
}
def calculate_offset_from_base(iso_timestamp: str) -> str:
"""
Calculate BASE_TS offset from an ISO timestamp.
Args:
iso_timestamp: ISO 8601 timestamp string
Returns:
BASE_TS marker string (e.g., "BASE_TS + 2d 3h")
"""
try:
target_time = datetime.fromisoformat(iso_timestamp.replace('Z', '+00:00'))
except (ValueError, AttributeError):
return None
# Calculate offset from BASE_REFERENCE
offset = target_time - BASE_REFERENCE
total_seconds = int(offset.total_seconds())
if total_seconds == 0:
return "BASE_TS"
# Convert to days, hours, minutes
days = offset.days
remaining_seconds = total_seconds - (days * 86400)
hours = remaining_seconds // 3600
minutes = (remaining_seconds % 3600) // 60
# Build BASE_TS expression
parts = []
if days != 0:
parts.append(f"{abs(days)}d")
if hours != 0:
parts.append(f"{abs(hours)}h")
if minutes != 0:
parts.append(f"{abs(minutes)}m")
if not parts:
return "BASE_TS"
operator = "+" if total_seconds > 0 else "-"
return f"BASE_TS {operator} {' '.join(parts)}"
def migrate_date_field(value: Any, field_name: str) -> Any:
"""
Migrate a single date field to BASE_TS format.
Args:
value: Field value (can be ISO string, offset_days dict, or None)
field_name: Name of the field being migrated
Returns:
BASE_TS marker string or original value (if already BASE_TS or None)
"""
if value is None:
return None
# Already a BASE_TS marker - keep as-is
if isinstance(value, str) and value.startswith("BASE_TS"):
return value
# Handle ISO timestamp strings
if isinstance(value, str) and ('T' in value or 'Z' in value):
return calculate_offset_from_base(value)
# Handle offset_days dictionary format (from inventory stock)
if isinstance(value, dict) and 'offset_days' in value:
days = value.get('offset_days', 0)
hour = value.get('hour', 0)
minute = value.get('minute', 0)
parts = []
if days != 0:
parts.append(f"{abs(days)}d")
if hour != 0:
parts.append(f"{abs(hour)}h")
if minute != 0:
parts.append(f"{abs(minute)}m")
if not parts:
return "BASE_TS"
operator = "+" if days >= 0 else "-"
return f"BASE_TS {operator} {' '.join(parts)}"
# Leave unrecognized values untouched rather than silently nulling them
return value
def migrate_entity(entity: Dict[str, Any], date_fields: list) -> Dict[str, Any]:
"""
Migrate all date fields in an entity to BASE_TS format.
Also removes *_offset_days fields as they're now redundant.
Args:
entity: Entity dictionary
date_fields: List of date field names to migrate
Returns:
Migrated entity dictionary
"""
migrated = entity.copy()
# Remove offset_days fields and migrate their values
offset_fields_to_remove = []
for key in list(migrated.keys()):
if key.endswith('_offset_days'):
# Extract base field name
base_field = key.replace('_offset_days', '')
# Calculate BASE_TS marker
offset_days = migrated[key]
if offset_days == 0:
migrated[base_field] = "BASE_TS"
else:
operator = "+" if offset_days > 0 else "-"
migrated[base_field] = f"BASE_TS {operator} {abs(offset_days)}d"
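# Example: "order_date_offset_days": -3 becomes "order_date": "BASE_TS - 3d"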
offset_fields_to_remove.append(key)
# Remove offset_days fields
for key in offset_fields_to_remove:
del migrated[key]
# Migrate ISO timestamp fields
for field in date_fields:
if field in migrated:
migrated[field] = migrate_date_field(migrated[field], field)
return migrated
def migrate_json_file(file_path: Path) -> bool:
"""
Migrate a single JSON file to BASE_TS format.
Args:
file_path: Path to JSON file
Returns:
True if file was modified, False otherwise
"""
print(f"\n📄 Processing: {file_path.relative_to(file_path.parents[3])}")
try:
with open(file_path, 'r', encoding='utf-8') as f:
data = json.load(f)
except Exception as e:
print(f" ❌ Failed to load: {e}")
return False
modified = False
# Migrate each entity type
for entity_type, date_fields in DATE_FIELDS_MAP.items():
if entity_type in data:
original_count = len(data[entity_type])
data[entity_type] = [
migrate_entity(entity, date_fields)
for entity in data[entity_type]
]
if original_count > 0:
print(f" ✅ Migrated {original_count} {entity_type}")
modified = True
if modified:
# Write back with pretty formatting
with open(file_path, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
print(f" 💾 File updated successfully")
return modified
def main():
"""Main migration function"""
# Find all JSON files in demo fixtures
root_dir = Path(__file__).parent.parent
fixtures_dir = root_dir / "shared" / "demo" / "fixtures"
if not fixtures_dir.exists():
print(f"❌ Fixtures directory not found: {fixtures_dir}")
return 1
# Find all JSON files
json_files = list(fixtures_dir.rglob("*.json"))
if not json_files:
print(f"❌ No JSON files found in {fixtures_dir}")
return 1
print(f"🔍 Found {len(json_files)} JSON files to migrate")
# Migrate each file
total_modified = 0
for json_file in sorted(json_files):
if migrate_json_file(json_file):
total_modified += 1
print(f"\n✅ Migration complete: {total_modified}/{len(json_files)} files modified")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,78 +0,0 @@
#!/bin/bash
# Quick test script for deletion endpoints via localhost (port-forwarded or ingress)
# This tests with the real Bakery-IA demo tenant
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Demo tenant from the system
TENANT_ID="dbc2128a-7539-470c-94b9-c1e37031bd77"
DEMO_SESSION_ID="demo_8rkT9JjXWFuVmdqT798Nyg"
# Base URL (through ingress or port-forward)
BASE_URL="${BASE_URL:-https://localhost}"
echo -e "${BLUE}Testing Deletion System with Real Services${NC}"
echo -e "${BLUE}===========================================${NC}"
echo ""
echo -e "Tenant ID: ${YELLOW}$TENANT_ID${NC}"
echo -e "Base URL: ${YELLOW}$BASE_URL${NC}"
echo ""
# Test function
test_service() {
local service_name=$1
local endpoint=$2
echo -n "Testing $service_name... "
# Try to access the deletion preview endpoint
response=$(curl -k -s -w "\n%{http_code}" \
-H "X-Demo-Session-Id: $DEMO_SESSION_ID" \
-H "X-Tenant-ID: $TENANT_ID" \
"$BASE_URL$endpoint/tenant/$TENANT_ID/deletion-preview" 2>&1)
http_code=$(echo "$response" | tail -1)
body=$(echo "$response" | sed '$d')
if [ "$http_code" = "200" ]; then
# Try to parse total records
total=$(echo "$body" | grep -o '"total_records":[0-9]*' | cut -d':' -f2 || echo "?")
echo -e "${GREEN}${NC} (HTTP $http_code, Records: $total)"
elif [ "$http_code" = "401" ] || [ "$http_code" = "403" ]; then
echo -e "${YELLOW}${NC} (HTTP $http_code - Auth required)"
elif [ "$http_code" = "404" ]; then
echo -e "${RED}${NC} (HTTP $http_code - Endpoint not found)"
else
echo -e "${RED}${NC} (HTTP $http_code)"
fi
}
# Test all services
echo "Testing deletion preview endpoints:"
echo ""
test_service "Orders" "/api/v1/orders"
test_service "Inventory" "/api/v1/inventory"
test_service "Recipes" "/api/v1/recipes"
test_service "Sales" "/api/v1/sales"
test_service "Production" "/api/v1/production"
test_service "Suppliers" "/api/v1/suppliers"
test_service "POS" "/api/v1/pos"
test_service "External" "/api/v1/external"
test_service "Forecasting" "/api/v1/forecasting"
test_service "Training" "/api/v1/training"
test_service "Alert Processor" "/api/v1/alerts"
test_service "Notification" "/api/v1/notifications"
echo ""
echo -e "${BLUE}Test completed!${NC}"
echo ""
echo -e "${YELLOW}Note:${NC} 401/403 responses are expected - deletion endpoints require service tokens"
echo -e "${YELLOW}Note:${NC} To test with proper auth, set up service-to-service authentication"

View File

@@ -1,41 +0,0 @@
#!/bin/bash
# Script to register audit routers in all service main.py files
set -e
BASE_DIR="/Users/urtzialfaro/Documents/bakery-ia/services"
echo "Registering audit routers in service main.py files..."
# Function to add audit import and router registration
add_audit_to_service() {
local service=$1
local main_file="$BASE_DIR/$service/app/main.py"
if [ ! -f "$main_file" ]; then
echo "⚠️ $service: main.py not found, skipping"
return
fi
# Check if audit is already imported
if grep -q "import.*audit" "$main_file"; then
echo "$service: audit already imported"
else
echo "⚠️ $service: needs manual import addition"
fi
# Check if audit router is already registered
if grep -q "service.add_router(audit.router)" "$main_file"; then
echo "$service: audit router already registered"
else
echo "⚠️ $service: needs manual router registration"
fi
}
# Process each service
for service in recipes suppliers pos training notification external forecasting; do
add_audit_to_service "$service"
done
echo ""
echo "Done! Please check warnings above for services needing manual updates."

View File

@@ -1,111 +0,0 @@
#!/usr/bin/env python3
"""
Test deterministic cloning by creating multiple sessions and comparing data hashes.
"""
import asyncio
import hashlib
import json
from typing import List, Dict
import httpx
DEMO_API_URL = "http://localhost:8018"
INTERNAL_API_KEY = "test-internal-key"
async def create_demo_session(tier: str = "professional") -> dict:
"""Create a demo session"""
async with httpx.AsyncClient() as client:
response = await client.post(
f"{DEMO_API_URL}/api/demo/sessions",
json={"demo_account_type": tier}
)
return response.json()
async def get_all_data_from_service(
service_url: str,
tenant_id: str
) -> dict:
"""Fetch all data for a tenant from a service"""
async with httpx.AsyncClient() as client:
response = await client.get(
f"{service_url}/internal/demo/export/{tenant_id}",
headers={"X-Internal-API-Key": INTERNAL_API_KEY}
)
return response.json()
def calculate_data_hash(data: dict) -> str:
"""
Calculate SHA-256 hash of data, excluding audit timestamps.
"""
# Remove non-deterministic fields
clean_data = remove_audit_fields(data)
# Sort keys for consistency
json_str = json.dumps(clean_data, sort_keys=True)
return hashlib.sha256(json_str.encode()).hexdigest()
def remove_audit_fields(data: dict) -> dict:
"""Remove created_at, updated_at fields recursively"""
if isinstance(data, dict):
return {
k: remove_audit_fields(v)
for k, v in data.items()
if k not in ["created_at", "updated_at", "id"] # IDs are UUIDs
}
elif isinstance(data, list):
return [remove_audit_fields(item) for item in data]
else:
return data
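# Two exports that differ only in created_at/updated_at/id therefore hash
# identically, which is exactly the invariance the determinism test checks.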
async def test_determinism(tier: str = "professional", iterations: int = 10):
"""
Test that cloning is deterministic across multiple sessions.
"""
print(f"Testing determinism for {tier} tier ({iterations} iterations)...")
services = [
("inventory", "http://inventory-service:8002"),
("production", "http://production-service:8003"),
("recipes", "http://recipes-service:8004"),
]
hashes_by_service = {svc[0]: [] for svc in services}
for i in range(iterations):
# Create session
session = await create_demo_session(tier)
tenant_id = session["virtual_tenant_id"]
# Get data from each service
for service_name, service_url in services:
data = await get_all_data_from_service(service_url, tenant_id)
data_hash = calculate_data_hash(data)
hashes_by_service[service_name].append(data_hash)
# Cleanup
async with httpx.AsyncClient() as client:
await client.delete(f"{DEMO_API_URL}/api/demo/sessions/{session['session_id']}")
if (i + 1) % 10 == 0:
print(f" Completed {i + 1}/{iterations} iterations")
# Check consistency
all_consistent = True
for service_name, hashes in hashes_by_service.items():
unique_hashes = set(hashes)
if len(unique_hashes) == 1:
print(f"{service_name}: All {iterations} hashes identical")
else:
print(f"{service_name}: {len(unique_hashes)} different hashes found!")
all_consistent = False
if all_consistent:
print("\n✅ DETERMINISM TEST PASSED")
return 0
else:
print("\n❌ DETERMINISM TEST FAILED")
return 1
if __name__ == "__main__":
exit_code = asyncio.run(test_determinism())
exit(exit_code)

View File

@@ -1,140 +0,0 @@
#!/bin/bash
# Quick script to test all deletion endpoints
# Usage: ./test_deletion_endpoints.sh <tenant_id>
set -e
TENANT_ID=${1:-"test-tenant-123"}
BASE_URL=${BASE_URL:-"http://localhost:8000"}
TOKEN=${AUTH_TOKEN:-"test-token"}
echo "================================"
echo "Testing Deletion Endpoints"
echo "Tenant ID: $TENANT_ID"
echo "Base URL: $BASE_URL"
echo "================================"
echo ""
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Function to test endpoint
test_endpoint() {
local service=$1
local method=$2
local path=$3
local expected_status=${4:-200}
echo -n "Testing $service ($method $path)... "
response=$(curl -s -w "\n%{http_code}" \
-X "$method" \
-H "Authorization: Bearer $TOKEN" \
-H "X-Internal-Service: test-script" \
"$BASE_URL/api/v1/$path" 2>&1)
status_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | head -n-1)
if [ "$status_code" == "$expected_status" ] || [ "$status_code" == "404" ]; then
if [ "$status_code" == "404" ]; then
echo -e "${YELLOW}NOT IMPLEMENTED${NC} (404)"
else
echo -e "${GREEN}✓ PASSED${NC} ($status_code)"
if [ "$method" == "GET" ]; then
# Show preview counts
total=$(echo "$body" | jq -r '.total_items // 0' 2>/dev/null || echo "N/A")
if [ "$total" != "N/A" ]; then
echo " → Preview: $total items would be deleted"
fi
elif [ "$method" == "DELETE" ]; then
# Show deletion summary
deleted=$(echo "$body" | jq -r '.summary.total_deleted // 0' 2>/dev/null || echo "N/A")
if [ "$deleted" != "N/A" ]; then
echo " → Deleted: $deleted items"
fi
fi
fi
else
echo -e "${RED}✗ FAILED${NC} ($status_code)"
echo " Response: $body"
fi
}
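# Response shapes assumed by the jq extraction above (illustrative, not a
# documented contract):
#   GET  .../deletion-preview -> {"total_items": 42, ...}
#   DELETE .../tenant/<id>    -> {"summary": {"total_deleted": 42, ...}, ...}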
echo "=== COMPLETED SERVICES ==="
echo ""
echo "1. Tenant Service:"
test_endpoint "tenant" "GET" "tenants/$TENANT_ID"
test_endpoint "tenant" "DELETE" "tenants/$TENANT_ID"
echo ""
echo "2. Orders Service:"
test_endpoint "orders" "GET" "orders/tenant/$TENANT_ID/deletion-preview"
test_endpoint "orders" "DELETE" "orders/tenant/$TENANT_ID"
echo ""
echo "3. Inventory Service:"
test_endpoint "inventory" "GET" "inventory/tenant/$TENANT_ID/deletion-preview"
test_endpoint "inventory" "DELETE" "inventory/tenant/$TENANT_ID"
echo ""
echo "4. Recipes Service:"
test_endpoint "recipes" "GET" "recipes/tenant/$TENANT_ID/deletion-preview"
test_endpoint "recipes" "DELETE" "recipes/tenant/$TENANT_ID"
echo ""
echo "5. Sales Service:"
test_endpoint "sales" "GET" "sales/tenant/$TENANT_ID/deletion-preview"
test_endpoint "sales" "DELETE" "sales/tenant/$TENANT_ID"
echo ""
echo "6. Production Service:"
test_endpoint "production" "GET" "production/tenant/$TENANT_ID/deletion-preview"
test_endpoint "production" "DELETE" "production/tenant/$TENANT_ID"
echo ""
echo "7. Suppliers Service:"
test_endpoint "suppliers" "GET" "suppliers/tenant/$TENANT_ID/deletion-preview"
test_endpoint "suppliers" "DELETE" "suppliers/tenant/$TENANT_ID"
echo ""
echo "=== PENDING SERVICES ==="
echo ""
echo "8. POS Service:"
test_endpoint "pos" "GET" "pos/tenant/$TENANT_ID/deletion-preview"
test_endpoint "pos" "DELETE" "pos/tenant/$TENANT_ID"
echo ""
echo "9. External Service:"
test_endpoint "external" "GET" "external/tenant/$TENANT_ID/deletion-preview"
test_endpoint "external" "DELETE" "external/tenant/$TENANT_ID"
echo ""
echo "10. Alert Processor Service:"
test_endpoint "alert_processor" "GET" "alerts/tenant/$TENANT_ID/deletion-preview"
test_endpoint "alert_processor" "DELETE" "alerts/tenant/$TENANT_ID"
echo ""
echo "11. Forecasting Service:"
test_endpoint "forecasting" "GET" "forecasts/tenant/$TENANT_ID/deletion-preview"
test_endpoint "forecasting" "DELETE" "forecasts/tenant/$TENANT_ID"
echo ""
echo "12. Training Service:"
test_endpoint "training" "GET" "models/tenant/$TENANT_ID/deletion-preview"
test_endpoint "training" "DELETE" "models/tenant/$TENANT_ID"
echo ""
echo "13. Notification Service:"
test_endpoint "notification" "GET" "notifications/tenant/$TENANT_ID/deletion-preview"
test_endpoint "notification" "DELETE" "notifications/tenant/$TENANT_ID"
echo ""
echo "================================"
echo "Testing Complete!"
echo "================================"

View File

@@ -1,225 +0,0 @@
#!/bin/bash
# ================================================================
# Tenant Deletion System - Integration Test Script
# ================================================================
# Tests all 12 services' deletion endpoints
# Usage: ./scripts/test_deletion_system.sh [tenant_id]
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
TENANT_ID="${1:-dbc2128a-7539-470c-94b9-c1e37031bd77}" # Default demo tenant
SERVICE_TOKEN="${SERVICE_TOKEN:-demo_service_token}"
# Service URLs (update these based on your environment)
ORDERS_URL="${ORDERS_URL:-http://localhost:8000/api/v1/orders}"
INVENTORY_URL="${INVENTORY_URL:-http://localhost:8001/api/v1/inventory}"
RECIPES_URL="${RECIPES_URL:-http://localhost:8002/api/v1/recipes}"
SALES_URL="${SALES_URL:-http://localhost:8003/api/v1/sales}"
PRODUCTION_URL="${PRODUCTION_URL:-http://localhost:8004/api/v1/production}"
SUPPLIERS_URL="${SUPPLIERS_URL:-http://localhost:8005/api/v1/suppliers}"
POS_URL="${POS_URL:-http://localhost:8006/api/v1/pos}"
EXTERNAL_URL="${EXTERNAL_URL:-http://localhost:8007/api/v1/external}"
FORECASTING_URL="${FORECASTING_URL:-http://localhost:8008/api/v1/forecasting}"
TRAINING_URL="${TRAINING_URL:-http://localhost:8009/api/v1/training}"
ALERT_PROCESSOR_URL="${ALERT_PROCESSOR_URL:-http://localhost:8000/api/v1/alerts}"
NOTIFICATION_URL="${NOTIFICATION_URL:-http://localhost:8011/api/v1/notifications}"
# Test results
TOTAL_TESTS=0
PASSED_TESTS=0
FAILED_TESTS=0
declare -a FAILED_SERVICES
# Helper functions
print_header() {
echo -e "${BLUE}================================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}================================================${NC}"
}
print_success() {
echo -e "${GREEN}✓${NC} $1"
}
print_error() {
echo -e "${RED}✗${NC} $1"
}
print_warning() {
echo -e "${YELLOW}⚠${NC} $1"
}
print_info() {
echo -e "${BLUE}ℹ${NC} $1"
}
# Test individual service deletion preview
test_service_preview() {
local service_name=$1
local service_url=$2
local endpoint_path=$3
TOTAL_TESTS=$((TOTAL_TESTS + 1))
echo ""
print_info "Testing $service_name service..."
local full_url="${service_url}${endpoint_path}/tenant/${TENANT_ID}/deletion-preview"
# Make request
response=$(curl -k -s -w "\nHTTP_STATUS:%{http_code}" \
-H "Authorization: Bearer ${SERVICE_TOKEN}" \
-H "X-Service-Token: ${SERVICE_TOKEN}" \
"${full_url}" 2>&1)
# Extract HTTP status
http_status=$(echo "$response" | grep "HTTP_STATUS" | cut -d':' -f2)
body=$(echo "$response" | sed '/HTTP_STATUS/d')
if [ "$http_status" = "200" ]; then
# Parse total records if available
total_records=$(echo "$body" | grep -o '"total_records":[0-9]*' | cut -d':' -f2 || echo "N/A")
print_success "$service_name: HTTP $http_status (Records: $total_records)"
PASSED_TESTS=$((PASSED_TESTS + 1))
# Show preview details if verbose
if [ "${VERBOSE:-0}" = "1" ]; then
echo "$body" | jq '.' 2>/dev/null || echo "$body"
fi
else
print_error "$service_name: HTTP $http_status"
FAILED_TESTS=$((FAILED_TESTS + 1))
FAILED_SERVICES+=("$service_name")
# Show error details
echo " URL: $full_url"
echo " Response: $body" | head -n 5
fi
}
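# The preview endpoint is assumed to return a JSON body with a numeric
# "total_records" field, e.g. {"total_records": 128, ...}; any other shape
# falls back to N/A above.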
# Main test execution
main() {
print_header "Tenant Deletion System - Integration Tests"
print_info "Testing tenant: $TENANT_ID"
print_info "Using service token: ${SERVICE_TOKEN:0:20}..."
echo ""
# Test all services
print_header "Testing Individual Services (12 total)"
test_service_preview "Orders" "$ORDERS_URL" "/orders"
test_service_preview "Inventory" "$INVENTORY_URL" "/inventory"
test_service_preview "Recipes" "$RECIPES_URL" "/recipes"
test_service_preview "Sales" "$SALES_URL" "/sales"
test_service_preview "Production" "$PRODUCTION_URL" "/production"
test_service_preview "Suppliers" "$SUPPLIERS_URL" "/suppliers"
test_service_preview "POS" "$POS_URL" "/pos"
test_service_preview "External" "$EXTERNAL_URL" "/external"
test_service_preview "Forecasting" "$FORECASTING_URL" "/forecasting"
test_service_preview "Training" "$TRAINING_URL" "/training"
test_service_preview "Alert Processor" "$ALERT_PROCESSOR_URL" "/alerts"
test_service_preview "Notification" "$NOTIFICATION_URL" "/notifications"
# Print summary
echo ""
print_header "Test Summary"
echo -e "Total Tests: $TOTAL_TESTS"
echo -e "${GREEN}Passed: $PASSED_TESTS${NC}"
if [ $FAILED_TESTS -gt 0 ]; then
echo -e "${RED}Failed: $FAILED_TESTS${NC}"
echo ""
print_error "Failed services:"
for service in "${FAILED_SERVICES[@]}"; do
echo " - $service"
done
echo ""
print_warning "Some services are not accessible or not implemented."
print_info "Make sure all services are running and URLs are correct."
exit 1
else
echo -e "${GREEN}Failed: $FAILED_TESTS${NC}"
echo ""
print_success "All services passed! ✨"
exit 0
fi
}
# Check dependencies
check_dependencies() {
if ! command -v curl &> /dev/null; then
print_error "curl is required but not installed."
exit 1
fi
if ! command -v jq &> /dev/null; then
print_warning "jq not found. Install for better output formatting."
fi
}
# Show usage
show_usage() {
cat << EOF
Usage: $0 [OPTIONS] [tenant_id]
Test the tenant deletion system across all 12 microservices.
Options:
-h, --help Show this help message
-v, --verbose Show detailed response bodies
-t, --tenant ID Specify tenant ID to test (default: demo tenant)
Environment Variables:
SERVICE_TOKEN Service authentication token
*_URL Individual service URLs (e.g., ORDERS_URL)
Examples:
# Test with default demo tenant
$0
# Test specific tenant
$0 abc-123-def-456
# Test with verbose output
VERBOSE=1 $0
# Test with custom service URLs
ORDERS_URL=http://orders:8000/api/v1/orders $0
EOF
}
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
show_usage
exit 0
;;
-v|--verbose)
VERBOSE=1
shift
;;
-t|--tenant)
TENANT_ID="$2"
shift 2
;;
*)
TENANT_ID="$1"
shift
;;
esac
done
# Run tests
check_dependencies
main

View File

@@ -1,418 +0,0 @@
#!/usr/bin/env python3
"""
Cross-reference validation script for Bakery-IA demo data.
Validates UUID references across different services and fixtures.
"""
import json
import sys
from pathlib import Path
from typing import Dict, List, Any, Optional
from uuid import UUID
# Configuration
BASE_DIR = Path(__file__).parent.parent / "shared" / "demo"
FIXTURES_DIR = BASE_DIR / "fixtures" / "professional"
METADATA_DIR = BASE_DIR / "metadata"
class ValidationError(Exception):
"""Custom exception for validation errors."""
pass
class CrossReferenceValidator:
def __init__(self):
self.fixtures = {}
self.cross_refs_map = []  # list of reference definitions from cross_refs_map.json
self.errors = []
self.warnings = []
def load_fixtures(self):
"""Load all fixture files."""
fixture_files = [
"01-tenant.json", "02-auth.json", "03-inventory.json",
"04-recipes.json", "05-suppliers.json", "06-production.json",
"07-procurement.json", "08-orders.json", "09-sales.json",
"10-forecasting.json"
]
for filename in fixture_files:
filepath = FIXTURES_DIR / filename
if filepath.exists():
try:
with open(filepath, 'r', encoding='utf-8') as f:
self.fixtures[filename] = json.load(f)
except (json.JSONDecodeError, IOError) as e:
self.errors.append(f"Failed to load {filename}: {str(e)}")
else:
self.warnings.append(f"Fixture file {filename} not found")
def load_cross_refs_map(self):
"""Load cross-reference mapping from metadata."""
map_file = METADATA_DIR / "cross_refs_map.json"
if map_file.exists():
try:
with open(map_file, 'r', encoding='utf-8') as f:
data = json.load(f)
self.cross_refs_map = data.get("references", [])
except (json.JSONDecodeError, IOError) as e:
self.errors.append(f"Failed to load cross_refs_map.json: {str(e)}")
else:
self.errors.append("cross_refs_map.json not found")
def is_valid_uuid(self, uuid_str: str) -> bool:
"""Check if a string is a valid UUID."""
try:
UUID(uuid_str)
return True
except ValueError:
return False
def get_entity_by_id(self, service: str, entity_type: str, entity_id: str) -> Optional[Dict]:
"""Find an entity by ID in the loaded fixtures."""
# Map service names to fixture files
service_to_fixture = {
"inventory": "03-inventory.json",
"recipes": "04-recipes.json",
"suppliers": "05-suppliers.json",
"production": "06-production.json",
"procurement": "07-procurement.json",
"orders": "08-orders.json",
"sales": "09-sales.json",
"forecasting": "10-forecasting.json"
}
if service not in service_to_fixture:
return None
fixture_file = service_to_fixture[service]
if fixture_file not in self.fixtures:
return None
fixture_data = self.fixtures[fixture_file]
# Find the entity based on entity_type
if entity_type == "Ingredient":
return self._find_in_ingredients(fixture_data, entity_id)
elif entity_type == "Recipe":
return self._find_in_recipes(fixture_data, entity_id)
elif entity_type == "Supplier":
return self._find_in_suppliers(fixture_data, entity_id)
elif entity_type == "ProductionBatch":
return self._find_in_production_batches(fixture_data, entity_id)
elif entity_type == "PurchaseOrder":
return self._find_in_purchase_orders(fixture_data, entity_id)
elif entity_type == "Customer":
return self._find_in_customers(fixture_data, entity_id)
elif entity_type == "SalesData":
return self._find_in_sales_data(fixture_data, entity_id)
elif entity_type == "Forecast":
return self._find_in_forecasts(fixture_data, entity_id)
return None
def _find_in_ingredients(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find ingredient by ID."""
if "ingredients" in data:
for ingredient in data["ingredients"]:
if ingredient.get("id") == entity_id:
return ingredient
return None
def _find_in_recipes(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find recipe by ID."""
if "recipes" in data:
for recipe in data["recipes"]:
if recipe.get("id") == entity_id:
return recipe
return None
def _find_in_suppliers(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find supplier by ID."""
if "suppliers" in data:
for supplier in data["suppliers"]:
if supplier.get("id") == entity_id:
return supplier
return None
def _find_in_production_batches(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find production batch by ID."""
if "production_batches" in data:
for batch in data["production_batches"]:
if batch.get("id") == entity_id:
return batch
return None
def _find_in_purchase_orders(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find purchase order by ID."""
if "purchase_orders" in data:
for po in data["purchase_orders"]:
if po.get("id") == entity_id:
return po
return None
def _find_in_customers(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find customer by ID."""
if "customers" in data:
for customer in data["customers"]:
if customer.get("id") == entity_id:
return customer
return None
def _find_in_sales_data(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find sales data by ID."""
if "sales_data" in data:
for sales in data["sales_data"]:
if sales.get("id") == entity_id:
return sales
return None
def _find_in_forecasts(self, data: Dict, entity_id: str) -> Optional[Dict]:
"""Find forecast by ID."""
if "forecasts" in data:
for forecast in data["forecasts"]:
if forecast.get("id") == entity_id:
return forecast
return None
def validate_cross_references(self):
"""Validate all cross-references defined in the map."""
for ref in self.cross_refs_map:
from_service = ref["from_service"]
from_entity = ref["from_entity"]
from_field = ref["from_field"]
to_service = ref["to_service"]
to_entity = ref["to_entity"]
required = ref.get("required", False)
# Find all entities of the "from" type
entities = self._get_all_entities(from_service, from_entity)
for entity in entities:
ref_id = entity.get(from_field)
if not ref_id:
if required:
self.errors.append(
f"{from_entity} {entity.get('id')} missing required field {from_field}"
)
continue
if not self.is_valid_uuid(ref_id):
self.errors.append(
f"{from_entity} {entity.get('id')} has invalid UUID in {from_field}: {ref_id}"
)
continue
# Check if the referenced entity exists
target_entity = self.get_entity_by_id(to_service, to_entity, ref_id)
if not target_entity:
if required:
self.errors.append(
f"{from_entity} {entity.get('id')} references non-existent {to_entity} {ref_id}"
)
else:
self.warnings.append(
f"{from_entity} {entity.get('id')} references non-existent {to_entity} {ref_id}"
)
continue
# Check filters if specified
to_filter = ref.get("to_filter", {})
if to_filter:
self._validate_filters_case_insensitive(target_entity, to_filter, entity, ref)
def _get_all_entities(self, service: str, entity_type: str) -> List[Dict]:
"""Get all entities of a specific type from a service."""
entities = []
# Map entity types to fixture file and path
entity_mapping = {
"ProductionBatch": ("06-production.json", "production_batches"),
"RecipeIngredient": ("04-recipes.json", "recipe_ingredients"),
"Stock": ("03-inventory.json", "stock"),
"PurchaseOrder": ("07-procurement.json", "purchase_orders"),
"PurchaseOrderItem": ("07-procurement.json", "purchase_order_items"),
"OrderItem": ("08-orders.json", "order_items"),
"SalesData": ("09-sales.json", "sales_data"),
"Forecast": ("10-forecasting.json", "forecasts")
}
if entity_type in entity_mapping:
fixture_file, path = entity_mapping[entity_type]
if fixture_file in self.fixtures:
data = self.fixtures[fixture_file]
if path in data:
return data[path]
return entities
def _validate_filters_case_insensitive(self, target_entity: Dict, filters: Dict, source_entity: Dict, ref: Dict):
"""Validate that target entity matches specified filters (case-insensitive)."""
for filter_key, filter_value in filters.items():
actual_value = target_entity.get(filter_key)
if actual_value is None:
self.errors.append(
f"{source_entity.get('id')} references {target_entity.get('id')} "
f"but {filter_key} is missing (expected {filter_value})"
)
elif str(actual_value).lower() != str(filter_value).lower():
self.errors.append(
f"{source_entity.get('id')} references {target_entity.get('id')} "
f"but {filter_key}={actual_value} != {filter_value}"
)
def validate_required_fields(self):
"""Validate required fields in all fixtures."""
required_fields_map = {
"01-tenant.json": {
"tenant": ["id", "name", "subscription_tier"]
},
"02-auth.json": {
"users": ["id", "name", "email", "role"]
},
"03-inventory.json": {
"ingredients": ["id", "name", "product_type", "ingredient_category"],
"stock": ["id", "ingredient_id", "quantity", "location"]
},
"04-recipes.json": {
"recipes": ["id", "name", "status", "difficulty_level"],
"recipe_ingredients": ["id", "recipe_id", "ingredient_id", "quantity"]
},
"05-suppliers.json": {
"suppliers": ["id", "name", "supplier_code", "status"]
},
"06-production.json": {
"equipment": ["id", "name", "type", "status"],
"production_batches": ["id", "product_id", "status", "start_time"]
},
"07-procurement.json": {
"purchase_orders": ["id", "po_number", "supplier_id", "status"],
"purchase_order_items": ["id", "purchase_order_id", "inventory_product_id", "ordered_quantity"]
},
"08-orders.json": {
"customers": ["id", "customer_code", "name", "customer_type"],
"customer_orders": ["id", "customer_id", "order_number", "status"],
"order_items": ["id", "order_id", "product_id", "quantity"]
},
"09-sales.json": {
"sales_data": ["id", "product_id", "quantity_sold", "unit_price"]
},
"10-forecasting.json": {
"forecasts": ["id", "product_id", "forecast_date", "predicted_quantity"]
}
}
for filename, required_structure in required_fields_map.items():
if filename in self.fixtures:
data = self.fixtures[filename]
for entity_type, required_fields in required_structure.items():
if entity_type in data:
entities = data[entity_type]
if isinstance(entities, list):
for entity in entities:
if isinstance(entity, dict):
for field in required_fields:
if field not in entity:
entity_id = entity.get('id', 'unknown')
self.errors.append(
f"{filename}: {entity_type} {entity_id} missing required field {field}"
)
elif isinstance(entities, dict):
# Handle tenant which is a single dict
for field in required_fields:
if field not in entities:
entity_id = entities.get('id', 'unknown')
self.errors.append(
f"{filename}: {entity_type} {entity_id} missing required field {field}"
)
def validate_date_formats(self):
"""Validate that all dates are in ISO format."""
date_fields = [
"created_at", "updated_at", "start_time", "end_time",
"order_date", "delivery_date", "expected_delivery_date",
"sale_date", "forecast_date", "contract_start_date", "contract_end_date"
]
for filename, data in self.fixtures.items():
self._check_date_fields(data, date_fields, filename)
def _check_date_fields(self, data: Any, date_fields: List[str], context: str):
"""Recursively check for date fields."""
if isinstance(data, dict):
for key, value in data.items():
if key in date_fields and isinstance(value, str):
if not self._is_iso_format(value):
self.errors.append(f"{context}: Invalid date format in {key}: {value}")
elif isinstance(value, (dict, list)):
self._check_date_fields(value, date_fields, context)
elif isinstance(data, list):
for item in data:
self._check_date_fields(item, date_fields, context)
def _is_iso_format(self, date_str: str) -> bool:
"""Check if a string is in ISO format or BASE_TS marker."""
try:
# Accept BASE_TS markers (e.g., "BASE_TS - 1h", "BASE_TS + 2d")
if date_str.startswith("BASE_TS"):
return True
# Accept offset-based dates (used in some fixtures)
if "_offset_" in date_str:
return True
# Simple check for ISO format (YYYY-MM-DDTHH:MM:SSZ or similar)
if len(date_str) < 19:
return False
return date_str.endswith('Z') and date_str[10] == 'T'
except Exception:
return False
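# Accepted examples (illustrative): "BASE_TS - 1h", "sale_offset_3",
# "2025-03-01T08:30:00Z"; rejected: "2025-03-01" (no time component).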
def run_validation(self) -> bool:
"""Run all validation checks."""
print("🔍 Starting cross-reference validation...")
# Load data
self.load_fixtures()
self.load_cross_refs_map()
if self.errors:
print("❌ Errors during data loading:")
for error in self.errors:
print(f" - {error}")
return False
# Run validation checks
print("📋 Validating cross-references...")
self.validate_cross_references()
print("📝 Validating required fields...")
self.validate_required_fields()
print("📅 Validating date formats...")
self.validate_date_formats()
# Report results
if self.errors:
print(f"\n❌ Validation failed with {len(self.errors)} errors:")
for error in self.errors:
print(f" - {error}")
if self.warnings:
print(f"\n⚠️ {len(self.warnings)} warnings:")
for warning in self.warnings:
print(f" - {warning}")
return False
else:
print("\n✅ All validation checks passed!")
if self.warnings:
print(f"⚠️ {len(self.warnings)} warnings:")
for warning in self.warnings:
print(f" - {warning}")
return True
if __name__ == "__main__":
validator = CrossReferenceValidator()
success = validator.run_validation()
sys.exit(0 if success else 1)

View File

@@ -1,242 +0,0 @@
#!/usr/bin/env python3
"""
Validate demo JSON files to ensure all dates use BASE_TS markers.
This script enforces the new architecture requirement that all temporal
data in demo fixtures must use BASE_TS markers for deterministic sessions.
"""
import json
import sys
from pathlib import Path
from typing import Any, Dict, List, Tuple
# Date/time fields that should use BASE_TS markers or be null
DATE_TIME_FIELDS = {
# Common fields
'created_at', 'updated_at',
# Procurement
'order_date', 'required_delivery_date', 'estimated_delivery_date',
'expected_delivery_date', 'sent_to_supplier_at', 'supplier_confirmation_date',
'approval_deadline',
# Production
'planned_start_time', 'planned_end_time', 'actual_start_time',
'actual_end_time', 'completed_at', 'install_date', 'last_maintenance_date',
'next_maintenance_date',
# Inventory
'received_date', 'expiration_date', 'best_before_date',
'original_expiration_date', 'transformation_date', 'final_expiration_date',
# Orders
'order_date', 'delivery_date', 'promised_date', 'last_order_date',
# Forecasting
'forecast_date', 'prediction_date',
# Schedules
'schedule_date', 'shift_start', 'shift_end', 'finalized_at',
# Quality
'check_time',
# Generic
'date', 'start_time', 'end_time'
}
class ValidationError:
"""Represents a validation error"""
def __init__(self, file_path: Path, entity_type: str, entity_index: int,
field_name: str, value: Any, message: str):
self.file_path = file_path
self.entity_type = entity_type
self.entity_index = entity_index
self.field_name = field_name
self.value = value
self.message = message
def __str__(self):
return (
f"{self.file_path.name} » {self.entity_type}[{self.entity_index}] » "
f"{self.field_name}: {self.message}\n"
f" Value: {self.value}"
)
def validate_date_value(value: Any, field_name: str) -> Tuple[bool, str]:
"""
Validate a single date field value.
Returns:
(is_valid, error_message)
"""
# Null values are allowed
if value is None:
return True, ""
# BASE_TS markers are the expected format
if isinstance(value, str) and value.startswith("BASE_TS"):
# Validate BASE_TS marker format
if value == "BASE_TS":
return True, ""
# Should be "BASE_TS + ..." or "BASE_TS - ..."
parts = value.split()
if len(parts) < 3:
return False, f"Invalid BASE_TS marker format (expected 'BASE_TS +/- <offset>')"
if parts[1] not in ['+', '-']:
return False, f"Invalid BASE_TS operator (expected + or -)"
# Extract offset parts (starting from index 2)
offset_parts = ' '.join(parts[2:])
# Validate offset components (must contain d, h, or m)
if not any(c in offset_parts for c in ['d', 'h', 'm']):
return False, f"BASE_TS offset must contain at least one of: d (days), h (hours), m (minutes)"
return True, ""
# ISO 8601 timestamps are NOT allowed (should use BASE_TS)
if isinstance(value, str) and ('T' in value or 'Z' in value):
return False, "Found ISO 8601 timestamp - should use BASE_TS marker instead"
# offset_days dictionaries are NOT allowed (legacy format)
if isinstance(value, dict) and 'offset_days' in value:
return False, "Found offset_days dictionary - should use BASE_TS marker instead"
# Unknown format
return False, f"Unknown date format (type: {type(value).__name__})"
def validate_entity(entity: Dict, entity_type: str, entity_index: int,
file_path: Path) -> List[ValidationError]:
"""
Validate all date fields in a single entity.
Returns:
List of validation errors
"""
errors = []
# Check for legacy offset_days fields
for key in entity.keys():
if key.endswith('_offset_days'):
base_field = key.replace('_offset_days', '')
errors.append(ValidationError(
file_path, entity_type, entity_index, key,
entity[key],
f"Legacy offset_days field found - migrate to BASE_TS marker in '{base_field}' field"
))
# Validate date/time fields
for field_name, value in entity.items():
if field_name in DATE_TIME_FIELDS:
is_valid, error_msg = validate_date_value(value, field_name)
if not is_valid:
errors.append(ValidationError(
file_path, entity_type, entity_index, field_name,
value, error_msg
))
return errors
def validate_json_file(file_path: Path) -> List[ValidationError]:
"""
Validate all entities in a JSON file.
Returns:
List of validation errors
"""
try:
with open(file_path, 'r', encoding='utf-8') as f:
data = json.load(f)
except json.JSONDecodeError as e:
return [ValidationError(
file_path, "FILE", 0, "JSON",
None, f"Invalid JSON: {e}"
)]
except Exception as e:
return [ValidationError(
file_path, "FILE", 0, "READ",
None, f"Failed to read file: {e}"
)]
errors = []
# Validate each entity type
for entity_type, entities in data.items():
if isinstance(entities, list):
for i, entity in enumerate(entities):
if isinstance(entity, dict):
errors.extend(
validate_entity(entity, entity_type, i, file_path)
)
return errors
def main():
"""Main validation function"""
# Find all JSON files in demo fixtures
root_dir = Path(__file__).parent.parent
fixtures_dir = root_dir / "shared" / "demo" / "fixtures"
if not fixtures_dir.exists():
print(f"❌ Fixtures directory not found: {fixtures_dir}")
return 1
# Find all JSON files
json_files = sorted(fixtures_dir.rglob("*.json"))
if not json_files:
print(f"❌ No JSON files found in {fixtures_dir}")
return 1
print(f"🔍 Validating {len(json_files)} JSON files...\n")
# Validate each file
all_errors = []
files_with_errors = 0
for json_file in json_files:
errors = validate_json_file(json_file)
if errors:
files_with_errors += 1
all_errors.extend(errors)
# Print file header
relative_path = json_file.relative_to(fixtures_dir)
print(f"\n📄 {relative_path}")
print(f" Found {len(errors)} error(s):")
# Print each error
for error in errors:
print(f" {error}")
# Print summary
print("\n" + "=" * 80)
if all_errors:
print(f"\n❌ VALIDATION FAILED")
print(f" Total errors: {len(all_errors)}")
print(f" Files with errors: {files_with_errors}/{len(json_files)}")
print(f"\n💡 Fix these errors by:")
print(f" 1. Replacing ISO timestamps with BASE_TS markers")
print(f" 2. Removing *_offset_days fields")
print(f" 3. Using format: 'BASE_TS +/- <offset>' where offset uses d/h/m")
print(f" Examples: 'BASE_TS', 'BASE_TS + 2d', 'BASE_TS - 4h', 'BASE_TS + 1h30m'")
return 1
else:
print(f"\n✅ ALL VALIDATIONS PASSED")
print(f" Files validated: {len(json_files)}")
print(f" All date fields use BASE_TS markers correctly")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,297 +0,0 @@
#!/bin/bash
# validate_demo_seeding.sh
# Comprehensive smoke test for demo seeding validation
# Tests both Professional and Enterprise demo templates
set -e # Exit on error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Counters
TESTS_PASSED=0
TESTS_FAILED=0
TESTS_TOTAL=0
# Fixed Demo Tenant IDs
DEMO_TENANT_PROFESSIONAL="a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6"
DEMO_TENANT_ENTERPRISE_PARENT="c3d4e5f6-a7b8-49c0-d1e2-f3a4b5c6d7e8"
DEMO_TENANT_CHILD_1="d4e5f6a7-b8c9-40d1-e2f3-a4b5c6d7e8f9"
DEMO_TENANT_CHILD_2="e5f6a7b8-c9d0-41e2-f3a4-b5c6d7e8f9a0"
DEMO_TENANT_CHILD_3="f6a7b8c9-d0e1-42f3-a4b5-c6d7e8f9a0b1"
# Database connection strings (from Kubernetes secrets)
get_db_url() {
local service=$1
kubectl get secret database-secrets -n bakery-ia -o jsonpath="{.data.${service}_DATABASE_URL}" | base64 -d
}
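# Example (illustrative): get_db_url "TENANT" decodes the TENANT_DATABASE_URL
# key, e.g. postgresql://user:pass@host:5432/tenant_db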
# Test helper functions
test_start() {
TESTS_TOTAL=$((TESTS_TOTAL + 1))
echo -e "${BLUE}[TEST $TESTS_TOTAL]${NC} $1"
}
test_pass() {
TESTS_PASSED=$((TESTS_PASSED + 1))
echo -e " ${GREEN}✓ PASS${NC}: $1"
}
test_fail() {
TESTS_FAILED=$((TESTS_FAILED + 1))
echo -e " ${RED}✗ FAIL${NC}: $1"
}
test_warn() {
echo -e " ${YELLOW}⚠ WARN${NC}: $1"
}
# SQL query helper
query_db() {
local db_url=$1
local query=$2
kubectl run psql-temp-$RANDOM --rm -i --restart=Never --image=postgres:17-alpine -- \
psql "$db_url" -t -c "$query" 2>/dev/null | xargs
}
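# Example (illustrative):
#   TENANT_DB=$(get_db_url "TENANT")
#   query_db "$TENANT_DB" "SELECT COUNT(*) FROM tenants"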
echo "========================================"
echo "🧪 Demo Seeding Validation Test Suite"
echo "========================================"
echo ""
echo "Testing Professional and Enterprise demo templates..."
echo ""
# =============================================================================
# PHASE 1: PROFESSIONAL TIER VALIDATION
# =============================================================================
echo "========================================"
echo "📦 Phase 1: Professional Tier (Single Bakery)"
echo "========================================"
echo ""
# Test 1: Tenant Service - Professional tenant exists
test_start "Professional tenant exists in tenant service"
TENANT_DB=$(get_db_url "TENANT")
TENANT_COUNT=$(query_db "$TENANT_DB" "SELECT COUNT(*) FROM tenants WHERE id='$DEMO_TENANT_PROFESSIONAL' AND business_model='individual_bakery'")
if [ "$TENANT_COUNT" -eq 1 ]; then
test_pass "Professional tenant found (Panadería Artesana Madrid)"
else
test_fail "Professional tenant not found or incorrect count: $TENANT_COUNT"
fi
# Test 2: Inventory - Professional has raw ingredients
test_start "Professional tenant has raw ingredients"
INVENTORY_DB=$(get_db_url "INVENTORY")
INGREDIENT_COUNT=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM ingredients WHERE tenant_id='$DEMO_TENANT_PROFESSIONAL' AND product_type='INGREDIENT'")
if [ "$INGREDIENT_COUNT" -ge 20 ]; then
test_pass "Found $INGREDIENT_COUNT raw ingredients (expected ~24)"
else
test_fail "Insufficient raw ingredients: $INGREDIENT_COUNT (expected >=20)"
fi
# Test 3: Inventory - Professional has finished products
test_start "Professional tenant has finished products"
PRODUCT_COUNT=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM ingredients WHERE tenant_id='$DEMO_TENANT_PROFESSIONAL' AND product_type='FINISHED_PRODUCT'")
if [ "$PRODUCT_COUNT" -ge 4 ]; then
test_pass "Found $PRODUCT_COUNT finished products (expected ~4)"
else
test_fail "Insufficient finished products: $PRODUCT_COUNT (expected >=4)"
fi
# Test 4: Recipes - Professional has recipes
test_start "Professional tenant has recipes"
RECIPES_DB=$(get_db_url "RECIPES")
RECIPE_COUNT=$(query_db "$RECIPES_DB" "SELECT COUNT(*) FROM recipes WHERE tenant_id='$DEMO_TENANT_PROFESSIONAL'")
if [ "$RECIPE_COUNT" -ge 4 ]; then
test_pass "Found $RECIPE_COUNT recipes (expected ~4-20)"
else
test_fail "Insufficient recipes: $RECIPE_COUNT (expected >=4)"
fi
# Test 5: Sales - Professional has sales history
test_start "Professional tenant has sales history"
SALES_DB=$(get_db_url "SALES")
SALES_COUNT=$(query_db "$SALES_DB" "SELECT COUNT(*) FROM sales_data WHERE tenant_id='$DEMO_TENANT_PROFESSIONAL'")
if [ "$SALES_COUNT" -ge 100 ]; then
test_pass "Found $SALES_COUNT sales records (expected ~360 for 90 days)"
else
test_warn "Lower than expected sales records: $SALES_COUNT (expected >=100)"
fi
# =============================================================================
# PHASE 2: ENTERPRISE PARENT VALIDATION
# =============================================================================
echo ""
echo "========================================"
echo "🏭 Phase 2: Enterprise Parent (Obrador)"
echo "========================================"
echo ""
# Test 6: Tenant Service - Enterprise parent exists
test_start "Enterprise parent tenant exists"
PARENT_COUNT=$(query_db "$TENANT_DB" "SELECT COUNT(*) FROM tenants WHERE id='$DEMO_TENANT_ENTERPRISE_PARENT' AND business_model='enterprise_chain'")
if [ "$PARENT_COUNT" -eq 1 ]; then
test_pass "Enterprise parent found (Obrador Madrid)"
else
test_fail "Enterprise parent not found or incorrect count: $PARENT_COUNT"
fi
# Test 7: Inventory - Parent has raw ingredients (scaled 10x)
test_start "Enterprise parent has raw ingredients"
PARENT_INGREDIENT_COUNT=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM ingredients WHERE tenant_id='$DEMO_TENANT_ENTERPRISE_PARENT' AND product_type='INGREDIENT'")
if [ "$PARENT_INGREDIENT_COUNT" -ge 20 ]; then
test_pass "Found $PARENT_INGREDIENT_COUNT raw ingredients (expected ~24)"
else
test_fail "Insufficient parent raw ingredients: $PARENT_INGREDIENT_COUNT (expected >=20)"
fi
# Test 8: Recipes - Parent has recipes
test_start "Enterprise parent has recipes"
PARENT_RECIPE_COUNT=$(query_db "$RECIPES_DB" "SELECT COUNT(*) FROM recipes WHERE tenant_id='$DEMO_TENANT_ENTERPRISE_PARENT'")
if [ "$PARENT_RECIPE_COUNT" -ge 4 ]; then
test_pass "Found $PARENT_RECIPE_COUNT recipes (expected ~4-20)"
else
test_fail "Insufficient parent recipes: $PARENT_RECIPE_COUNT (expected >=4)"
fi
# Test 9: Production - Parent has production batches
test_start "Enterprise parent has production batches"
PRODUCTION_DB=$(get_db_url "PRODUCTION")
BATCH_COUNT=$(query_db "$PRODUCTION_DB" "SELECT COUNT(*) FROM production_batches WHERE tenant_id='$DEMO_TENANT_ENTERPRISE_PARENT'")
if [ "$BATCH_COUNT" -ge 50 ]; then
test_pass "Found $BATCH_COUNT production batches (expected ~120)"
elif [ "$BATCH_COUNT" -ge 20 ]; then
test_warn "Lower production batches: $BATCH_COUNT (expected ~120)"
else
test_fail "Insufficient production batches: $BATCH_COUNT (expected >=50)"
fi
# =============================================================================
# PHASE 3: CHILD RETAIL OUTLETS VALIDATION
# =============================================================================
echo ""
echo "========================================"
echo "🏪 Phase 3: Child Retail Outlets"
echo "========================================"
echo ""
# Test each child tenant
for CHILD_ID in "$DEMO_TENANT_CHILD_1" "$DEMO_TENANT_CHILD_2" "$DEMO_TENANT_CHILD_3"; do
case "$CHILD_ID" in
"$DEMO_TENANT_CHILD_1") CHILD_NAME="Madrid Centro" ;;
"$DEMO_TENANT_CHILD_2") CHILD_NAME="Barcelona Gràcia" ;;
"$DEMO_TENANT_CHILD_3") CHILD_NAME="Valencia Ruzafa" ;;
esac
echo ""
echo "Testing: $CHILD_NAME"
echo "----------------------------------------"
# Test 10a: Child has finished products ONLY (no raw ingredients)
test_start "[$CHILD_NAME] Has finished products ONLY"
CHILD_PRODUCTS=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM ingredients WHERE tenant_id='$CHILD_ID' AND product_type='FINISHED_PRODUCT'")
CHILD_RAW=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM ingredients WHERE tenant_id='$CHILD_ID' AND product_type='INGREDIENT'")
if [ "$CHILD_PRODUCTS" -eq 4 ] && [ "$CHILD_RAW" -eq 0 ]; then
test_pass "Found $CHILD_PRODUCTS finished products, 0 raw ingredients (correct retail model)"
elif [ "$CHILD_RAW" -gt 0 ]; then
test_fail "Child has raw ingredients ($CHILD_RAW) - should only have finished products"
else
test_warn "Product count mismatch: $CHILD_PRODUCTS (expected 4)"
fi
# Test 10b: Child has stock batches
test_start "[$CHILD_NAME] Has stock batches"
CHILD_STOCK=$(query_db "$INVENTORY_DB" "SELECT COUNT(*) FROM stock WHERE tenant_id='$CHILD_ID'")
if [ "$CHILD_STOCK" -ge 10 ]; then
test_pass "Found $CHILD_STOCK stock batches (expected ~16)"
else
test_warn "Lower stock batches: $CHILD_STOCK (expected ~16)"
fi
# Test 10c: Child has sales history
test_start "[$CHILD_NAME] Has sales history"
CHILD_SALES=$(query_db "$SALES_DB" "SELECT COUNT(*) FROM sales_data WHERE tenant_id='$CHILD_ID'")
if [ "$CHILD_SALES" -ge 80 ]; then
test_pass "Found $CHILD_SALES sales records (expected ~120 for 30 days)"
else
test_warn "Lower sales records: $CHILD_SALES (expected ~120)"
fi
# Test 10d: Child has customers
test_start "[$CHILD_NAME] Has walk-in customers"
ORDERS_DB=$(get_db_url "ORDERS")
CHILD_CUSTOMERS=$(query_db "$ORDERS_DB" "SELECT COUNT(*) FROM customers WHERE tenant_id='$CHILD_ID'")
if [ "$CHILD_CUSTOMERS" -ge 40 ]; then
test_pass "Found $CHILD_CUSTOMERS customers (expected 60-100)"
else
test_warn "Lower customer count: $CHILD_CUSTOMERS (expected 60-100)"
fi
done
# =============================================================================
# PHASE 4: DISTRIBUTION VALIDATION
# =============================================================================
echo ""
echo "========================================"
echo "🚚 Phase 4: Distribution & Logistics"
echo "========================================"
echo ""
# Test 11: Distribution routes exist
test_start "Distribution routes created (Mon/Wed/Fri pattern)"
DISTRIBUTION_DB=$(get_db_url "DISTRIBUTION")
ROUTE_COUNT=$(query_db "$DISTRIBUTION_DB" "SELECT COUNT(*) FROM delivery_routes WHERE tenant_id='$DEMO_TENANT_ENTERPRISE_PARENT'")
if [ "$ROUTE_COUNT" -ge 10 ]; then
test_pass "Found $ROUTE_COUNT delivery routes (expected ~13 for 30 days, Mon/Wed/Fri)"
else
test_warn "Lower route count: $ROUTE_COUNT (expected ~13)"
fi
# Test 12: Shipments exist for all children
test_start "Shipments created for all retail outlets"
SHIPMENT_COUNT=$(query_db "$DISTRIBUTION_DB" "SELECT COUNT(*) FROM shipments WHERE parent_tenant_id='$DEMO_TENANT_ENTERPRISE_PARENT'")
if [ "$SHIPMENT_COUNT" -ge 30 ]; then
test_pass "Found $SHIPMENT_COUNT shipments (expected ~39: 13 routes × 3 children)"
else
test_warn "Lower shipment count: $SHIPMENT_COUNT (expected ~39)"
fi
# =============================================================================
# SUMMARY
# =============================================================================
echo ""
echo "========================================"
echo "📊 Test Summary"
echo "========================================"
echo ""
echo "Total Tests: $TESTS_TOTAL"
echo -e "${GREEN}Passed: $TESTS_PASSED${NC}"
echo -e "${RED}Failed: $TESTS_FAILED${NC}"
echo ""
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ ALL TESTS PASSED!${NC}"
echo ""
echo "Demo templates are ready for cloning:"
echo " ✓ Professional tier (single bakery): ~3,500 records"
echo " ✓ Enterprise parent (Obrador): ~3,000 records"
echo " ✓ 3 Child retail outlets: ~700 records"
echo " ✓ Distribution history: ~52 records"
echo " ✓ Total template data: ~4,200-4,800 records"
echo ""
exit 0
else
echo -e "${RED}❌ SOME TESTS FAILED${NC}"
echo ""
echo "Please review the failed tests above and:"
echo " 1. Check that all seed jobs completed successfully"
echo " 2. Verify database connections"
echo " 3. Check seed script logs for errors"
echo ""
exit 1
fi

View File

@@ -1,584 +0,0 @@
#!/usr/bin/env python3
"""
Enterprise Demo Fixtures Validation Script
Validates cross-references between JSON fixtures for enterprise demo sessions.
Checks that all referenced IDs exist and are consistent across files.
"""
import json
import sys
from pathlib import Path
from typing import Any
from collections import Counter, defaultdict
import uuid
# Color codes for output
RED = '\033[91m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
BLUE = '\033[94m'
RESET = '\033[0m'
class FixtureValidator:
def __init__(self, base_path: str = "shared/demo/fixtures/enterprise"):
self.base_path = Path(base_path)
self.parent_path = self.base_path / "parent"
self.children_paths = {}
# Load all fixture data
self.tenant_data = {}
self.user_data = {}
self.location_data = {}
self.product_data = {}
self.supplier_data = {}
self.recipe_data = {}
self.procurement_data = {}
self.order_data = {}
self.production_data = {}
# Track all IDs for validation
self.all_ids = defaultdict(set)
self.references = defaultdict(list)
# Expected IDs from tenant.json
self.expected_tenant_ids = set()
self.expected_user_ids = set()
self.expected_location_ids = set()
def load_all_fixtures(self) -> None:
"""Load all JSON fixtures from parent and children directories"""
print(f"{BLUE}Loading fixtures from {self.base_path}{RESET}")
# Load parent fixtures
self._load_parent_fixtures()
# Load children fixtures
self._load_children_fixtures()
print(f"{GREEN}✓ Loaded fixtures successfully{RESET}\n")
def _load_parent_fixtures(self) -> None:
"""Load parent enterprise fixtures"""
if not self.parent_path.exists():
print(f"{RED}✗ Parent fixtures directory not found: {self.parent_path}{RESET}")
sys.exit(1)
# Load in order to establish dependencies
files_to_load = [
"01-tenant.json",
"02-auth.json",
"03-inventory.json",
"04-recipes.json",
"05-suppliers.json",
"06-production.json",
"07-procurement.json",
"08-orders.json",
"09-sales.json",
"10-forecasting.json",
"11-orchestrator.json"
]
for filename in files_to_load:
filepath = self.parent_path / filename
if filepath.exists():
with open(filepath, 'r', encoding='utf-8') as f:
data = json.load(f)
self._process_fixture_file(filename, data, "parent")
def _load_children_fixtures(self) -> None:
"""Load children enterprise fixtures"""
children_dir = self.base_path / "children"
if not children_dir.exists():
print(f"{YELLOW}⚠ Children fixtures directory not found: {children_dir}{RESET}")
return
# Find all child tenant directories
child_dirs = [d for d in children_dir.iterdir() if d.is_dir()]
for child_dir in child_dirs:
tenant_id = child_dir.name
self.children_paths[tenant_id] = child_dir
# Load child fixtures
files_to_load = [
"01-tenant.json",
"02-auth.json",
"03-inventory.json",
"04-recipes.json",
"05-suppliers.json",
"06-production.json",
"07-procurement.json",
"08-orders.json",
"09-sales.json",
"10-forecasting.json",
"11-orchestrator.json"
]
for filename in files_to_load:
filepath = child_dir / filename
if filepath.exists():
with open(filepath, 'r', encoding='utf-8') as f:
data = json.load(f)
self._process_fixture_file(filename, data, tenant_id)
def _process_fixture_file(self, filename: str, data: Any, context: str) -> None:
"""Process a fixture file and extract IDs and references"""
print(f" Processing {filename} ({context})...")
if filename == "01-tenant.json":
self._process_tenant_data(data, context)
elif filename == "02-auth.json":
self._process_auth_data(data, context)
elif filename == "03-inventory.json":
self._process_inventory_data(data, context)
elif filename == "04-recipes.json":
self._process_recipe_data(data, context)
elif filename == "05-suppliers.json":
self._process_supplier_data(data, context)
elif filename == "06-production.json":
self._process_production_data(data, context)
elif filename == "07-procurement.json":
self._process_procurement_data(data, context)
elif filename == "08-orders.json":
self._process_order_data(data, context)
elif filename == "09-sales.json":
self._process_sales_data(data, context)
elif filename == "10-forecasting.json":
self._process_forecasting_data(data, context)
elif filename == "11-orchestrator.json":
self._process_orchestrator_data(data, context)
def _process_tenant_data(self, data: Any, context: str) -> None:
"""Process tenant.json data"""
tenant = data.get("tenant", {})
owner = data.get("owner", {})
subscription = data.get("subscription", {})
children = data.get("children", [])
# Store tenant data
tenant_id = tenant.get("id")
if tenant_id:
self.tenant_data[tenant_id] = tenant
self.all_ids["tenant"].add(tenant_id)
if context == "parent":
self.expected_tenant_ids.add(tenant_id)
# Store owner user
owner_id = owner.get("id")
if owner_id:
self.user_data[owner_id] = owner
self.all_ids["user"].add(owner_id)
self.expected_user_ids.add(owner_id)
# Store subscription
subscription_id = subscription.get("id")
if subscription_id:
self.all_ids["subscription"].add(subscription_id)
# Store child tenants
for child in children:
child_id = child.get("id")
if child_id:
self.tenant_data[child_id] = child
self.all_ids["tenant"].add(child_id)
self.expected_tenant_ids.add(child_id)
# Track parent-child relationship
self.references["parent_child"].append({
"parent": tenant_id,
"child": child_id,
"context": context
})
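# Assumed 01-tenant.json shape (illustrative):
#   {"tenant": {"id": "..."}, "owner": {"id": "..."},
#    "subscription": {"id": "..."}, "children": [{"id": "..."}, ...]}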
def _process_auth_data(self, data: Any, context: str) -> None:
"""Process auth.json data"""
users = data.get("users", [])
for user in users:
user_id = user.get("id")
tenant_id = user.get("tenant_id")
if user_id:
self.user_data[user_id] = user
self.all_ids["user"].add(user_id)
self.expected_user_ids.add(user_id)
# Track user-tenant relationship
if tenant_id:
self.references["user_tenant"].append({
"user_id": user_id,
"tenant_id": tenant_id,
"context": context
})
def _process_inventory_data(self, data: Any, context: str) -> None:
"""Process inventory.json data"""
products = data.get("products", [])
ingredients = data.get("ingredients", [])
locations = data.get("locations", [])
# Store products
for product in products:
product_id = product.get("id")
tenant_id = product.get("tenant_id")
created_by = product.get("created_by")
if product_id:
self.product_data[product_id] = product
self.all_ids["product"].add(product_id)
# Track product-tenant relationship
if tenant_id:
self.references["product_tenant"].append({
"product_id": product_id,
"tenant_id": tenant_id,
"context": context
})
# Track product-user relationship
if created_by:
self.references["product_user"].append({
"product_id": product_id,
"user_id": created_by,
"context": context
})
# Store ingredients
for ingredient in ingredients:
ingredient_id = ingredient.get("id")
tenant_id = ingredient.get("tenant_id")
created_by = ingredient.get("created_by")
if ingredient_id:
self.product_data[ingredient_id] = ingredient
self.all_ids["ingredient"].add(ingredient_id)
# Track ingredient-tenant relationship
if tenant_id:
self.references["ingredient_tenant"].append({
"ingredient_id": ingredient_id,
"tenant_id": tenant_id,
"context": context
})
# Track ingredient-user relationship
if created_by:
self.references["ingredient_user"].append({
"ingredient_id": ingredient_id,
"user_id": created_by,
"context": context
})
# Store locations
for location in locations:
location_id = location.get("id")
if location_id:
self.location_data[location_id] = location
self.all_ids["location"].add(location_id)
self.expected_location_ids.add(location_id)
def _process_recipe_data(self, data: Any, context: str) -> None:
"""Process recipes.json data"""
recipes = data.get("recipes", [])
for recipe in recipes:
recipe_id = recipe.get("id")
tenant_id = recipe.get("tenant_id")
finished_product_id = recipe.get("finished_product_id")
if recipe_id:
self.recipe_data[recipe_id] = recipe
self.all_ids["recipe"].add(recipe_id)
# Track recipe-tenant relationship
if tenant_id:
self.references["recipe_tenant"].append({
"recipe_id": recipe_id,
"tenant_id": tenant_id,
"context": context
})
# Track recipe-product relationship
if finished_product_id:
self.references["recipe_product"].append({
"recipe_id": recipe_id,
"product_id": finished_product_id,
"context": context
})
def _process_supplier_data(self, data: Any, context: str) -> None:
"""Process suppliers.json data"""
suppliers = data.get("suppliers", [])
for supplier in suppliers:
supplier_id = supplier.get("id")
tenant_id = supplier.get("tenant_id")
if supplier_id:
self.supplier_data[supplier_id] = supplier
self.all_ids["supplier"].add(supplier_id)
# Track supplier-tenant relationship
if tenant_id:
self.references["supplier_tenant"].append({
"supplier_id": supplier_id,
"tenant_id": tenant_id,
"context": context
})
def _process_production_data(self, data: Any, context: str) -> None:
"""Process production.json data"""
# Extract production-related IDs
pass
def _process_procurement_data(self, data: Any, context: str) -> None:
"""Process procurement.json data"""
# Extract procurement-related IDs
pass
def _process_order_data(self, data: Any, context: str) -> None:
"""Process orders.json data"""
# Extract order-related IDs
pass
def _process_sales_data(self, data: Any, context: str) -> None:
"""Process sales.json data"""
# Extract sales-related IDs
pass
def _process_forecasting_data(self, data: Any, context: str) -> None:
"""Process forecasting.json data"""
# Extract forecasting-related IDs
pass
def _process_orchestrator_data(self, data: Any, context: str) -> None:
"""Process orchestrator.json data"""
# Extract orchestrator-related IDs
pass
def validate_all_references(self) -> bool:
"""Validate all cross-references in the fixtures"""
print(f"{BLUE}Validating cross-references...{RESET}")
all_valid = True
# Validate user-tenant relationships
if "user_tenant" in self.references:
print(f"\n{YELLOW}Validating User-Tenant relationships...{RESET}")
for ref in self.references["user_tenant"]:
user_id = ref["user_id"]
tenant_id = ref["tenant_id"]
context = ref["context"]
if user_id not in self.user_data:
print(f"{RED}✗ User {user_id} referenced but not found in user data (context: {context}){RESET}")
all_valid = False
if tenant_id not in self.tenant_data:
print(f"{RED}✗ Tenant {tenant_id} referenced by user {user_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate parent-child relationships
if "parent_child" in self.references:
print(f"\n{YELLOW}Validating Parent-Child relationships...{RESET}")
for ref in self.references["parent_child"]:
parent_id = ref["parent"]
child_id = ref["child"]
context = ref["context"]
if parent_id not in self.tenant_data:
print(f"{RED}✗ Parent tenant {parent_id} not found (context: {context}){RESET}")
all_valid = False
if child_id not in self.tenant_data:
print(f"{RED}✗ Child tenant {child_id} not found (context: {context}){RESET}")
all_valid = False
# Validate product-tenant relationships
if "product_tenant" in self.references:
print(f"\n{YELLOW}Validating Product-Tenant relationships...{RESET}")
for ref in self.references["product_tenant"]:
product_id = ref["product_id"]
tenant_id = ref["tenant_id"]
context = ref["context"]
if product_id not in self.product_data:
print(f"{RED}✗ Product {product_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if tenant_id not in self.tenant_data:
print(f"{RED}✗ Tenant {tenant_id} referenced by product {product_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate product-user relationships
if "product_user" in self.references:
print(f"\n{YELLOW}Validating Product-User relationships...{RESET}")
for ref in self.references["product_user"]:
product_id = ref["product_id"]
user_id = ref["user_id"]
context = ref["context"]
if product_id not in self.product_data:
print(f"{RED}✗ Product {product_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if user_id not in self.user_data:
print(f"{RED}✗ User {user_id} referenced by product {product_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate ingredient-tenant relationships
if "ingredient_tenant" in self.references:
print(f"\n{YELLOW}Validating Ingredient-Tenant relationships...{RESET}")
for ref in self.references["ingredient_tenant"]:
ingredient_id = ref["ingredient_id"]
tenant_id = ref["tenant_id"]
context = ref["context"]
if ingredient_id not in self.product_data:
print(f"{RED}✗ Ingredient {ingredient_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if tenant_id not in self.tenant_data:
print(f"{RED}✗ Tenant {tenant_id} referenced by ingredient {ingredient_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate ingredient-user relationships
if "ingredient_user" in self.references:
print(f"\n{YELLOW}Validating Ingredient-User relationships...{RESET}")
for ref in self.references["ingredient_user"]:
ingredient_id = ref["ingredient_id"]
user_id = ref["user_id"]
context = ref["context"]
if ingredient_id not in self.product_data:
print(f"{RED}✗ Ingredient {ingredient_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if user_id not in self.user_data:
print(f"{RED}✗ User {user_id} referenced by ingredient {ingredient_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate recipe-tenant relationships
if "recipe_tenant" in self.references:
print(f"\n{YELLOW}Validating Recipe-Tenant relationships...{RESET}")
for ref in self.references["recipe_tenant"]:
recipe_id = ref["recipe_id"]
tenant_id = ref["tenant_id"]
context = ref["context"]
if recipe_id not in self.recipe_data:
print(f"{RED}✗ Recipe {recipe_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if tenant_id not in self.tenant_data:
print(f"{RED}✗ Tenant {tenant_id} referenced by recipe {recipe_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate recipe-product relationships
if "recipe_product" in self.references:
print(f"\n{YELLOW}Validating Recipe-Product relationships...{RESET}")
for ref in self.references["recipe_product"]:
recipe_id = ref["recipe_id"]
product_id = ref["product_id"]
context = ref["context"]
if recipe_id not in self.recipe_data:
print(f"{RED}✗ Recipe {recipe_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if product_id not in self.product_data:
print(f"{RED}✗ Product {product_id} referenced by recipe {recipe_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate supplier-tenant relationships
if "supplier_tenant" in self.references:
print(f"\n{YELLOW}Validating Supplier-Tenant relationships...{RESET}")
for ref in self.references["supplier_tenant"]:
supplier_id = ref["supplier_id"]
tenant_id = ref["tenant_id"]
context = ref["context"]
if supplier_id not in self.supplier_data:
print(f"{RED}✗ Supplier {supplier_id} referenced but not found (context: {context}){RESET}")
all_valid = False
if tenant_id not in self.tenant_data:
print(f"{RED}✗ Tenant {tenant_id} referenced by supplier {supplier_id} but not found (context: {context}){RESET}")
all_valid = False
# Validate UUID format for all IDs
print(f"\n{YELLOW}Validating UUID formats...{RESET}")
for entity_type, ids in self.all_ids.items():
for entity_id in ids:
try:
uuid.UUID(entity_id)
except ValueError:
print(f"{RED}✗ Invalid UUID format for {entity_type} ID: {entity_id}{RESET}")
all_valid = False
# Check for duplicate IDs
print(f"\n{YELLOW}Checking for duplicate IDs...{RESET}")
all_entities = []
for ids in self.all_ids.values():
all_entities.extend(ids)
# Counter avoids the O(n^2) list.count() scan and shadowing the `id` builtin
counts = Counter(all_entities)
duplicates = sorted(eid for eid, count in counts.items() if count > 1)
if duplicates:
print(f"{RED}✗ Found duplicate IDs: {', '.join(duplicates)}{RESET}")
all_valid = False
if all_valid:
print(f"{GREEN}✓ All cross-references are valid!{RESET}")
else:
print(f"{RED}✗ Found validation errors!{RESET}")
return all_valid
def generate_summary(self) -> None:
"""Generate a summary of the loaded fixtures"""
print(f"\n{BLUE}=== Fixture Summary ==={RESET}")
print(f"Tenants: {len(self.tenant_data)}")
print(f"Users: {len(self.user_data)}")
print(f"Products: {len(self.product_data)}")
print(f"Suppliers: {len(self.supplier_data)}")
print(f"Recipes: {len(self.recipe_data)}")
print(f"Locations: {len(self.location_data)}")
print(f"\nEntity Types: {list(self.all_ids.keys())}")
for entity_type, ids in self.all_ids.items():
print(f" {entity_type}: {len(ids)} IDs")
print(f"\nReference Types: {list(self.references.keys())}")
for ref_type, refs in self.references.items():
print(f" {ref_type}: {len(refs)} references")
def run_validation(self) -> bool:
"""Run the complete validation process"""
print(f"{BLUE}=== Enterprise Demo Fixtures Validator ==={RESET}")
print(f"Base Path: {self.base_path}\n")
try:
self.load_all_fixtures()
self.generate_summary()
return self.validate_all_references()
except Exception as e:
print(f"{RED}✗ Validation failed with error: {e}{RESET}")
import traceback
traceback.print_exc()
return False
if __name__ == "__main__":
validator = FixtureValidator()
success = validator.run_validation()
if success:
print(f"\n{GREEN}=== Validation Complete: All checks passed! ==={RESET}")
sys.exit(0)
else:
print(f"\n{RED}=== Validation Complete: Errors found! ==={RESET}")
sys.exit(1)