# Fix Missing AI Insights - Forecasting & Procurement

## Current Status

| Insight Type | Current | Target | Status |
|--------------|---------|--------|--------|
| Inventory | 2-3 | 2-3 | ✅ READY |
| Production | 1-2 | 2-3 | ✅ READY |
| **Forecasting** | **0** | **1-2** | ❌ **BROKEN** |
| **Procurement** | **0-1** | **1-2** | ⚠️ **LIMITED DATA** |

---

## Issue #1: Forecasting Insights (0 forecasts cloned)

### Root Cause

The forecasting service returned "0 records cloned" even though [10-forecasting.json](shared/demo/fixtures/professional/10-forecasting.json) contains **28 forecasts**.

### Investigation Findings

1. **Fixture file exists** ✅ - 28 forecasts present
2. **Clone endpoint exists** ✅ - [services/forecasting/app/api/internal_demo.py](services/forecasting/app/api/internal_demo.py)
3. **Data structure correct** ✅ - has all required fields

### Possible Causes

**A. Idempotency Check Triggered**

```python
# Line 181-195 in internal_demo.py
existing_check = await db.execute(
    select(Forecast).where(Forecast.tenant_id == virtual_uuid).limit(1)
)
existing_forecast = existing_check.scalar_one_or_none()

if existing_forecast:
    logger.warning(
        "Demo data already exists, skipping clone",
        virtual_tenant_id=str(virtual_uuid)
    )
    return {
        "status": "skipped",
        "reason": "Data already exists",
        "records_cloned": 0
    }
```

**Solution**: The virtual tenant is new, so this check should not trigger, but that needs to be verified.

**B. Database Commit Issue**

The code might insert forecasts but never commit the transaction.

**C. Field Mapping Issue**

The `Forecast` model might expect different field names than those in the JSON fixture. (See the sketch after the verification commands below for what a correct insert path should look like.)

### Verification Commands

```bash
# 1. Check if forecasts were actually inserted for the virtual tenant
kubectl exec -it -n bakery-ia forecasting-db-xxxx -- psql -U postgres -d forecasting -c \
  "SELECT COUNT(*) FROM forecasts WHERE tenant_id = '740b96c4-d242-47d7-8a6e-a0a8b5c51d5e';"

# 2. Check forecasting service logs for errors
kubectl logs -n bakery-ia forecasting-service-xxxx | grep -E "ERROR|error|failed|Failed" | tail -20

# 3. Test clone endpoint directly
curl -X POST http://forecasting-service:8000/internal/demo/clone \
  -H "X-Internal-API-Key: $INTERNAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_tenant_id": "a1b2c3d4-e5f6-47a8-b9c0-d1e2f3a4b5c6",
    "virtual_tenant_id": "test-uuid",
    "demo_account_type": "professional",
    "session_created_at": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"
  }'
```
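To narrow down causes B and C, this is the pattern the clone logic in `internal_demo.py` is expected to follow. It is a hedged sketch, not the actual implementation: the top-level `forecasts` key and the `_resolve_base_ts` helper for relative timestamps are assumptions, and the field names mirror the Quick Fix script further down rather than the real model definition.

```python
# Sketch only: insert-and-commit path for the forecasting clone endpoint.
import uuid
from datetime import datetime, timezone

from services.forecasting.app.models.forecasts import Forecast


async def clone_forecasts(db, fixture_data: dict, virtual_uuid: uuid.UUID) -> int:
    forecasts = [
        Forecast(
            id=uuid.uuid4(),
            tenant_id=virtual_uuid,
            inventory_product_id=uuid.UUID(entry["inventory_product_id"]),
            forecast_date=_resolve_base_ts(entry["forecast_date"]),  # hypothetical helper
            predicted_demand=entry["predicted_demand"],
            confidence=entry["confidence"],
            model_version=entry.get("model_version", "hybrid_v1"),
            forecast_type=entry.get("forecast_type", "daily"),
            created_at=datetime.now(timezone.utc),
        )
        for entry in fixture_data["forecasts"]  # assumed fixture layout
    ]

    db.add_all(forecasts)
    await db.commit()  # cause B: without an explicit commit the rows never persist
    return len(forecasts)
```

If a `KeyError` surfaces while building the list, the fixture keys do not match the model (cause C); if no error is raised but the count query above still returns 0, the commit path is the problem (cause B).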
### Quick Fix (If DB Empty)

Create forecasts manually for testing:

```python
# Script: create_test_forecasts.py
import asyncio
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker
from datetime import datetime, timezone, timedelta
import uuid


async def create_test_forecasts():
    engine = create_async_engine("postgresql+asyncpg://user:pass@host/forecasting")
    async_session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)

    async with async_session() as session:
        # Get Forecast model
        from services.forecasting.app.models.forecasts import Forecast

        virtual_tenant_id = uuid.UUID("740b96c4-d242-47d7-8a6e-a0a8b5c51d5e")

        # Create 7 days of forecasts for 4 products
        products = [
            "20000000-0000-0000-0000-000000000001",
            "20000000-0000-0000-0000-000000000002",
            "20000000-0000-0000-0000-000000000003",
            "20000000-0000-0000-0000-000000000004",
        ]

        for day in range(7):
            for product_id in products:
                forecast = Forecast(
                    id=uuid.uuid4(),
                    tenant_id=virtual_tenant_id,
                    inventory_product_id=uuid.UUID(product_id),
                    forecast_date=datetime.now(timezone.utc) + timedelta(days=day),
                    predicted_demand=20.0 + (day * 2.5),
                    confidence=85.0 + (day % 5),
                    model_version="hybrid_v1",
                    forecast_type="daily",
                    created_at=datetime.now(timezone.utc)
                )
                session.add(forecast)

        await session.commit()
        print("✓ Created 28 test forecasts")


if __name__ == "__main__":
    asyncio.run(create_test_forecasts())
```

---

## Issue #2: Procurement Insights (Limited Data)

### Root Cause

The procurement ML models need **purchase order items with unit prices** to detect price trends, but the fixture file [07-procurement.json](shared/demo/fixtures/professional/07-procurement.json) only has:

- Purchase order headers (10 POs)
- No `items` arrays with individual ingredient prices

### What Procurement Insights Need

**Price Forecaster**: Requires PO items showing price history over time:

```json
{
  "purchase_orders": [
    {
      "id": "po-uuid-1",
      "order_date": "BASE_TS - 60d",
      "items": [
        {
          "ingredient_id": "10000000-0000-0000-0000-000000000001",
          "ingredient_name": "Harina de Trigo T55",
          "ordered_quantity": 500.0,
          "unit_price": 0.85,  // ← Price 60 days ago
          "total_price": 425.0
        }
      ]
    },
    {
      "id": "po-uuid-2",
      "order_date": "BASE_TS - 30d",
      "items": [
        {
          "ingredient_id": "10000000-0000-0000-0000-000000000001",
          "ingredient_name": "Harina de Trigo T55",
          "ordered_quantity": 500.0,
          "unit_price": 0.88,  // ← Price increased!
          "total_price": 440.0
        }
      ]
    },
    {
      "id": "po-uuid-3",
      "order_date": "BASE_TS - 1d",
      "items": [
        {
          "ingredient_id": "10000000-0000-0000-0000-000000000001",
          "ingredient_name": "Harina de Trigo T55",
          "ordered_quantity": 500.0,
          "unit_price": 0.92,  // ← 8% increase over 60 days!
          "total_price": 460.0
        }
      ]
    }
  ]
}
```

**Supplier Performance Analyzer**: Needs delivery tracking (already present in the fixture):

```json
{
  "delivery_delayed": true,
  "delay_hours": 4
}
```

### Solution: Enhance 07-procurement.json

Add `items` arrays to the existing purchase orders with price trends:

```python
# Script: enhance_procurement_data.py
import json
import random

# Price trend data (8% increase over 90 days for some ingredients)
INGREDIENTS_WITH_TRENDS = [
    {
        "id": "10000000-0000-0000-0000-000000000001",
        "name": "Harina de Trigo T55",
        "base_price": 0.85,
        "trend": 0.08,  # 8% increase
        "variability": 0.02
    },
    {
        "id": "10000000-0000-0000-0000-000000000011",
        "name": "Mantequilla sin Sal",
        "base_price": 6.50,
        "trend": 0.12,  # 12% increase
        "variability": 0.05
    },
    {
        "id": "10000000-0000-0000-0000-000000000012",
        "name": "Leche Entera Fresca",
        "base_price": 0.95,
        "trend": -0.03,  # 3% decrease (seasonal)
        "variability": 0.02
    }
]


def calculate_price(ingredient, days_ago):
    """Calculate price based on trend."""
    trend_factor = 1 + (ingredient["trend"] * (90 - days_ago) / 90)
    variability = random.uniform(-ingredient["variability"], ingredient["variability"])
    return round(ingredient["base_price"] * trend_factor * (1 + variability), 2)


def add_items_to_pos():
    with open('shared/demo/fixtures/professional/07-procurement.json') as f:
        data = json.load(f)

    for po in data['purchase_orders']:
        # Extract days ago from order_date, e.g. "BASE_TS - 30d"
        order_date_str = po.get('order_date', 'BASE_TS - 1d')
        if 'BASE_TS' in order_date_str:
            if '- ' in order_date_str:
                days_str = order_date_str.split('- ')[1].replace('d', '').strip()
                try:
                    days_ago = int(days_str)
                except ValueError:
                    days_ago = 1
            else:
                days_ago = 0
        else:
            days_ago = 30  # Default

        # Add 2-3 items per PO
        items = []
        for ingredient in random.sample(INGREDIENTS_WITH_TRENDS, k=random.randint(2, 3)):
            unit_price = calculate_price(ingredient, days_ago)
            quantity = random.randint(200, 500)
            items.append({
                "ingredient_id": ingredient["id"],
                "ingredient_name": ingredient["name"],
                "ordered_quantity": float(quantity),
                "unit_price": unit_price,
                "total_price": round(quantity * unit_price, 2),
                "received_quantity": None,
                "status": "pending"
            })

        po['items'] = items

    # Save back
    with open('shared/demo/fixtures/professional/07-procurement.json', 'w') as f:
        json.dump(data, f, indent=2, ensure_ascii=False)

    print(f"✓ Added items to {len(data['purchase_orders'])} purchase orders")


if __name__ == "__main__":
    add_items_to_pos()
```

**Run it**:

```bash
python enhance_procurement_data.py
```

**Expected Result**:

- 10 POs now have `items` arrays
- Each PO has 2-3 items
- Prices show trends over time
- Procurement insights should generate:
  - "Mantequilla price up 12% in 90 days - consider bulk purchase"
  - "Harina T55 trending up 8% - lock in current supplier contract"
---

## Summary of Actions

### 1. Forecasting Fix (IMMEDIATE)

```bash
# Verify forecasts in database
kubectl get pods -n bakery-ia | grep forecasting-db
kubectl exec -it -n bakery-ia forecasting-db-xxxx -- psql -U postgres -d forecasting

# In psql:
SELECT tenant_id, COUNT(*) FROM forecasts GROUP BY tenant_id;

# If the virtual tenant has 0 forecasts:
# - Check forecasting service logs for errors
# - Manually trigger the clone endpoint
# - Or use the create_test_forecasts.py script above
```

### 2. Procurement Enhancement (15 minutes)

```bash
# Run the enhancement script
python enhance_procurement_data.py

# Verify
cat shared/demo/fixtures/professional/07-procurement.json | jq '.purchase_orders[0].items'
# Should see an items array with prices
```

### 3. Create New Demo Session

```bash
# After the fixes, create a fresh demo session
curl -X POST http://localhost:8000/api/demo/sessions \
  -H "Content-Type: application/json" \
  -d '{"demo_account_type":"professional"}' | jq

# Wait 60 seconds for the AI models to run

# Check insights (should now total 5-9)
curl "http://localhost:8000/api/ai-insights/tenants/{virtual_tenant_id}/insights" | jq '.total'
```

---

## Expected Results After Fixes

| Service | Insights Before | Insights After | Status |
|---------|----------------|----------------|--------|
| Inventory | 2-3 | 2-3 | ✅ No change |
| Production | 1-2 | 1-2 | ✅ No change |
| **Forecasting** | **0** | **1-2** | ✅ **FIXED** |
| **Procurement** | **0-1** | **1-2** | ✅ **FIXED** |
| **TOTAL** | **3-6** | **5-9** | ✅ **TARGET MET** |

### Sample Insights After Fix

**Forecasting**:

- "Demand trending up 15% for Croissants - recommend increasing production by 12 units next week"
- "Weekend sales pattern detected - reduce Saturday production by 40% to minimize waste"

**Procurement**:

- "Price alert: Mantequilla up 12% in 90 days - consider bulk purchase to lock in rates"
- "Cost optimization: Harina T55 price trending up 8% - negotiate long-term contract with Harinas del Norte"
- "Supplier performance: 3/10 deliveries delayed from Harinas del Norte - consider backup supplier"

---

## Files to Modify

1. **shared/demo/fixtures/professional/07-procurement.json** - add `items` arrays
2. **(Optional) services/forecasting/app/api/internal_demo.py** - debug why 0 forecasts were cloned

---

## Testing Checklist

- [ ] Run `enhance_procurement_data.py`
- [ ] Verify PO items were added: `jq '.purchase_orders[0].items' 07-procurement.json`
- [ ] Check the forecasting DB: `SELECT COUNT(*) FROM forecasts WHERE tenant_id = '{virtual_id}'`
- [ ] Create a new demo session
- [ ] Wait 60 seconds
- [ ] Query AI insights: should see 5-9 total
- [ ] Verify categories: inventory (2-3), production (1-2), forecasting (1-2), procurement (1-2)
- [ ] Check insight quality: prices, trends, and recommendations present

---

## Troubleshooting

**If forecasts are still 0 after a demo session**:

1. Check the forecasting service logs: `kubectl logs -n bakery-ia forecasting-service-xxx | grep clone`
2. Look for errors in the clone endpoint
3. Verify the fixture file path is correct
4. Manually insert test forecasts using the script above

**If procurement insights are still 0**:

1. Verify PO items exist: `jq '.purchase_orders[].items | length' 07-procurement.json`
2. Check whether the price trends are significant enough (>5% change)
3. Check the procurement service logs: `kubectl logs -n bakery-ia procurement-service-xxx | grep -i price`

**If insights are not showing in the frontend**:

1. Check that the API returns data: `curl http://localhost:8000/api/ai-insights/tenants/{id}/insights`
2. Verify the tenant_id matches between the frontend and the API
3. Check the browser console for errors
4. Verify the AI insights service is running
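For the category check in the testing checklist, a one-liner along these lines can help. It assumes the insights endpoint returns an `insights` array whose items carry a `category` field; neither name is confirmed by this document, so adjust the jq paths to the actual payload.

```bash
# Count AI insights per category for the demo tenant (field names assumed).
TENANT_ID="740b96c4-d242-47d7-8a6e-a0a8b5c51d5e"   # replace with the current virtual tenant id

curl -s "http://localhost:8000/api/ai-insights/tenants/${TENANT_ID}/insights" \
  | jq '{total: .total, by_category: (.insights | group_by(.category) | map({(.[0].category): length}) | add)}'
```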