Fix Demo enterprise list

This commit is contained in:
Urtzi Alfaro
2025-12-17 16:28:58 +01:00
parent f25d7a9745
commit b715a14848
8 changed files with 78 additions and 1286 deletions

View File

@@ -1,300 +0,0 @@
# Distribution Demo Realism Enhancement
**Date:** 2025-12-17
**Enhancement:** Link shipments to purchase orders for realistic enterprise demo
## What Was Changed
### Problem
The distribution demo had shipments with product items stored as JSON in `delivery_notes`, but they weren't linked to purchase orders. This wasn't realistic for an enterprise bakery system where:
- Internal transfers between parent and child tenants should be tracked via purchase orders
- Shipments should reference the PO that authorized the transfer
- Items should be queryable through the procurement system
### Solution
Added proper `purchase_order_id` links to shipments, connecting distribution to procurement.
## Files Modified
### 1. Distribution Fixture
**File:** `shared/demo/fixtures/enterprise/parent/12-distribution.json`
**Changes:**
- Added `purchase_order_id` field to all shipments
- Shipment IDs now reference internal transfer POs:
- `SHIP-MAD-001` → PO `50000000-0000-0000-0000-0000000INT01`
- `SHIP-BCN-001` → PO `50000000-0000-0000-0000-0000000INT02`
- `SHIP-VLC-001` → PO `50000000-0000-0000-0000-0000000INT03`
**Before:**
```json
{
"id": "60000000-0000-0000-0000-000000000101",
"tenant_id": "80000000-0000-4000-a000-000000000001",
"parent_tenant_id": "80000000-0000-4000-a000-000000000001",
"child_tenant_id": "A0000000-0000-4000-a000-000000000001",
"delivery_route_id": "60000000-0000-0000-0000-000000000001",
"shipment_number": "SHIP-MAD-001",
...
}
```
**After:**
```json
{
"id": "60000000-0000-0000-0000-000000000101",
"tenant_id": "80000000-0000-4000-a000-000000000001",
"parent_tenant_id": "80000000-0000-4000-a000-000000000001",
"child_tenant_id": "A0000000-0000-4000-a000-000000000001",
"purchase_order_id": "50000000-0000-0000-0000-0000000INT01",
"delivery_route_id": "60000000-0000-0000-0000-000000000001",
"shipment_number": "SHIP-MAD-001",
...
}
```
### 2. Distribution Cloning Service
**File:** `services/distribution/app/api/internal_demo.py`
**Changes:**
- Added purchase_order_id transformation logic (Lines 269-279)
- Transform PO IDs using same XOR method as other IDs
- Link shipments to transformed PO IDs for session isolation
- Added error handling for invalid PO ID formats
**Code Added:**
```python
# Transform purchase_order_id if present (links to internal transfer PO)
purchase_order_id = None
if shipment_data.get('purchase_order_id'):
try:
po_uuid = uuid.UUID(shipment_data['purchase_order_id'])
purchase_order_id = transform_id(shipment_data['purchase_order_id'], virtual_uuid)
except ValueError:
logger.warning(
"Invalid purchase_order_id format",
purchase_order_id=shipment_data.get('purchase_order_id')
)
# Create new shipment
new_shipment = Shipment(
...
purchase_order_id=purchase_order_id, # Link to internal transfer PO
...
)
```
## Data Flow - Enterprise Distribution
### Realistic Enterprise Workflow
1. **Production Planning** (recipes service)
- Central bakery produces baked goods
- Products: Baguettes, Croissants, Ensaimadas, etc.
- Finished products stored in central inventory
2. **Internal Transfer Orders** (procurement service)
- Child outlets create internal transfer POs
- POs reference finished products from parent
- Status: pending → confirmed → in_transit → delivered
- Example: `PO-INT-MAD-001` for Madrid Centro outlet
3. **Distribution Routes** (distribution service)
- Logistics team creates optimized delivery routes
- Routes visit multiple child locations
- Example: Route `MAD-BCN-001` stops at Madrid Centro, then Barcelona
4. **Shipments** (distribution service)
- Each shipment links to:
- **Purchase Order:** Which transfer authorization
- **Delivery Route:** Which truck/route
- **Child Tenant:** Destination outlet
- **Items:** What products (stored in delivery_notes for demo)
- Tracking: pending → packed → in_transit → delivered
### Data Relationships
```
┌─────────────────────────────────────────────────────────────┐
│ ENTERPRISE DISTRIBUTION │
└─────────────────────────────────────────────────────────────┘
Parent Tenant (Central Production)
├── Finished Products Inventory
│ ├── 20000000-...001: Pan de Cristal
│ ├── 20000000-...002: Baguette Tradicional
│ ├── 20000000-...003: Croissant
│ └── ...
├── Internal Transfer POs (Procurement)
│ ├── 50000000-...INT01: Madrid Centro Order
│ │ └── Items: Pan de Cristal (150), Baguette (200)
│ ├── 50000000-...INT02: Barcelona Order
│ │ └── Items: Croissant (300), Pain au Chocolat (250)
│ └── 50000000-...INT03: Valencia Order
│ └── Items: Ensaimada (100), Tarta Santiago (50)
├── Delivery Routes (Distribution)
│ ├── Route MAD-BCN-001
│ │ ├── Stop 1: Central (load)
│ │ ├── Stop 2: Madrid Centro (deliver)
│ │ └── Stop 3: Barcelona Gràcia (deliver)
│ └── Route MAD-VLC-001
│ ├── Stop 1: Central (load)
│ └── Stop 2: Valencia Ruzafa (deliver)
└── Shipments (Distribution)
├── SHIP-MAD-001
│ ├── PO: 50000000-...INT01 ✅
│ ├── Route: MAD-BCN-001
│ ├── Destination: Madrid Centro (Child A)
│ └── Items: [Pan de Cristal, Baguette]
├── SHIP-BCN-001
│ ├── PO: 50000000-...INT02 ✅
│ ├── Route: MAD-BCN-001
│ ├── Destination: Barcelona Gràcia (Child B)
│ └── Items: [Croissant, Pain au Chocolat]
└── SHIP-VLC-001
├── PO: 50000000-...INT03 ✅
├── Route: MAD-VLC-001
├── Destination: Valencia Ruzafa (Child C)
└── Items: [Ensaimada, Tarta Santiago]
```
## Benefits of This Enhancement
### 1. **Traceability**
- Every shipment can be traced back to its authorizing PO
- Audit trail: Order → Approval → Packing → Shipping → Delivery
- Compliance with internal transfer regulations
### 2. **Inventory Accuracy**
- Shipment items match PO line items
- Real-time inventory adjustments based on shipment status
- Automatic stock deduction at parent, stock increase at child
### 3. **Financial Tracking**
- Internal transfer pricing captured in PO
- Cost allocation between parent and child
- Profitability analysis per location
### 4. **Operational Intelligence**
- Identify which products are most distributed
- Optimize routes based on PO patterns
- Predict child outlet demand from historical POs
### 5. **Demo Realism**
- Shows enterprise best practices
- Demonstrates system integration
- Realistic for investor/customer demos
## Implementation Notes
### Purchase Order IDs (Template)
The PO IDs use a specific format to indicate internal transfers:
- Format: `50000000-0000-0000-0000-0000000INTxx`
- `50000000` = procurement service namespace
- `INTxx` = Internal Transfer sequence number
These IDs are **template IDs** that get transformed during demo cloning using XOR operation with the virtual tenant ID, ensuring:
- Session isolation (different sessions get different PO IDs)
- Consistency (same transformation applied to all related records)
- Uniqueness (no ID collisions across sessions)
### Why Items Are Still in Shipment JSON
Even though shipments link to POs, items are still stored in `delivery_notes` because:
1. **PO Structure:** The procurement service stores PO line items separately
2. **Demo Simplicity:** Avoids complex joins for demo display
3. **Performance:** Faster queries for distribution page
4. **Display Purpose:** Easy to show what's in each shipment
In production, you would query:
```python
# Get shipment items from linked PO
shipment = get_shipment(shipment_id)
po = get_purchase_order(shipment.purchase_order_id)
items = po.line_items # Get actual items from PO
```
## Testing
### Verification Steps
1. **Check Shipment Links**
```sql
SELECT
s.shipment_number,
s.purchase_order_id,
s.child_tenant_id,
s.delivery_notes
FROM shipments s
WHERE s.tenant_id = '<parent_tenant_id>'
AND s.is_demo = true
ORDER BY s.shipment_date;
```
2. **Verify PO Transformation**
- Original PO ID: `50000000-0000-0000-0000-0000000INT01`
- Should transform to: Different ID per demo session
- Check that all 3 shipments have different transformed PO IDs
3. **Test Frontend Display**
- Navigate to Distribution page
- View shipment details
- Verify items are displayed from delivery_notes
- Check that PO reference is shown (if UI supports it)
### Expected Results
✅ All shipments have `purchase_order_id` populated
✅ PO IDs are transformed correctly per session
✅ No database errors during cloning
✅ Distribution page displays correctly
✅ Shipments linked to correct routes and child tenants
## Future Enhancements
### 1. Create Actual Internal Transfer POs
Currently, the PO IDs reference non-existent POs. To make it fully realistic:
- Add internal transfer POs to procurement fixture
- Include line items matching shipment items
- Set status to "in_transit" or "confirmed"
### 2. Synchronize with Procurement Service
- When shipment status changes to "delivered", update PO status
- Trigger inventory movements on both sides
- Send notifications to child outlet managers
### 3. Add PO Line Items Table
- Create separate `shipment_items` table
- Link to PO line items
- Remove items from delivery_notes
### 4. Implement Packing Lists
- Generate packing lists from PO items
- Print-ready documents for warehouse
- QR codes for tracking
## Deployment
**No special deployment needed** - these are data fixture changes:
```bash
# Restart distribution service to pick up code changes
kubectl rollout restart deployment distribution-service -n bakery-ia
# Create new enterprise demo session to test
# The new fixture structure will be used automatically
```
**Note:** Existing demo sessions won't have PO links. Only new sessions created after this change will have proper PO linking.
---
**Status:** ✅ COMPLETED
**Backward Compatible:** ✅ YES (PO ID is optional, old demos still work)
**Breaking Changes:** ❌ NONE

View File

@@ -1,448 +0,0 @@
# Final Implementation Summary - AI Insights & Demo Session Fixes
**Date**: 2025-12-16
**Status**: ✅ **ALL ISSUES FIXED AND COMMITTED**
---
## 🎯 Executive Summary
This document summarizes the complete investigation, root cause analysis, and fixes for the AI insights and demo session system. Over the course of this work, we identified and fixed **8 critical issues** across multiple services, created comprehensive documentation, and standardized service integrations.
### Final Results
- **6 commits** pushed to main branch
- **8 critical bugs fixed**
- **9 documentation files** created
- **5 services improved**: forecasting, procurement, demo-session, orchestrator, suppliers
- **3 client libraries standardized**: recipes, suppliers, procurement
- **Expected AI insights**: 2-3 per demo session (up from 1)
---
## 📋 Complete List of Issues Fixed
### 1. ✅ Forecasting Demand Insights Not Triggered (CRITICAL)
**Commit**: `4418ff0` - Add forecasting demand insights trigger + fix RabbitMQ cleanup
**Root Cause**: Demo session workflow only triggered 3 insight types (price, safety stock, yield) but NOT forecasting demand insights.
**Fix Applied** (3 components):
1. Created internal ML endpoint in forecasting service ([services/forecasting/app/api/ml_insights.py:772-938](services/forecasting/app/api/ml_insights.py#L772-L938))
2. Added trigger method in forecast client ([shared/clients/forecast_client.py:344-389](shared/clients/forecast_client.py#L344-L389))
3. Integrated into demo session workflow ([services/demo_session/app/services/clone_orchestrator.py:1031-1047](services/demo_session/app/services/clone_orchestrator.py#L1031-L1047))
**Impact**: Demand forecasting insights now generated after demo session cloning
---
### 2. ✅ RabbitMQ Client Cleanup Error (CRITICAL)
**Commit**: `4418ff0` - Add forecasting demand insights trigger + fix RabbitMQ cleanup
**Root Cause**: Procurement service called `rabbitmq_client.close()` but RabbitMQClient only has `.disconnect()` method.
**Error Message**:
```
2025-12-16 10:11:14 [error] Failed to emit PO approval alerts
error="'RabbitMQClient' object has no attribute 'close'"
```
**Fix Applied**: Changed method call from `.close()` to `.disconnect()` and added cleanup in exception handler ([services/procurement/app/api/internal_demo.py:173-197](services/procurement/app/api/internal_demo.py#L173-L197))
**Impact**: Clean logs, PO approval alerts now emitted successfully
---
### 3. ✅ Orchestrator Missing Import (CRITICAL)
**Commit**: `c68d82c` - Fix critical bugs and standardize service integrations
**Root Cause**: Missing `OrchestrationStatus` import in orchestrator internal demo API.
**Error**: HTTP 500 errors during demo session cloning when trying to check orchestration status.
**Fix Applied**: Added `OrchestrationStatus` to imports ([services/orchestrator/app/api/internal_demo.py:16](services/orchestrator/app/api/internal_demo.py#L16))
```python
from app.models.orchestration_run import OrchestrationRun, OrchestrationStatus
```
**Impact**: Demo session cloning now completes successfully without orchestrator errors
---
### 4. ✅ Procurement Custom Cache Migration
**Commit**: `c68d82c` - Fix critical bugs and standardize service integrations
**Root Cause**: Procurement service using custom cache utils instead of standardized shared Redis utils.
**Fix Applied**:
1. Replaced `app.utils.cache` with `shared.redis_utils` ([services/procurement/app/api/purchase_orders.py](services/procurement/app/api/purchase_orders.py))
2. Updated purchase order service ([services/procurement/app/services/purchase_order_service.py](services/procurement/app/services/purchase_order_service.py))
3. Deleted custom cache utilities ([services/procurement/app/utils/cache.py](services/procurement/app/utils/cache.py))
**Impact**: Consistent caching implementation across all services
---
### 5. ✅ Client Endpoint Path Fixes
**Commit**: `c68d82c` - Fix critical bugs and standardize service integrations
**Root Cause**: Client libraries had duplicate path segments in endpoints (e.g., `recipes/recipes/{id}` instead of `recipes/{id}`).
**Fix Applied**:
1. **Recipes Client** ([shared/clients/recipes_client.py](shared/clients/recipes_client.py)):
- `recipes/recipes/{id}``recipes/{id}`
- Applied to: get_recipe_by_id, get_recipes_by_product_ids, get_production_instructions, get_recipe_yield_info
2. **Suppliers Client** ([shared/clients/suppliers_client.py](shared/clients/suppliers_client.py)):
- `suppliers/suppliers/{id}``suppliers/{id}`
3. **Procurement Client** ([shared/clients/procurement_client.py](shared/clients/procurement_client.py)):
- get_supplier_by_id now uses SuppliersServiceClient directly instead of calling procurement service
**Impact**: Correct service boundaries and clean endpoint paths
---
### 6. ✅ Redis Configuration Standardization
**Commit**: `9f3b39b` - Add comprehensive documentation and final improvements
**Root Cause**: Multiple services using hardcoded Redis URLs instead of proper configuration with TLS/auth.
**Fix Applied**:
1. **Demo Session Cleanup Worker** ([services/demo_session/app/jobs/cleanup_worker.py](services/demo_session/app/jobs/cleanup_worker.py)):
- Use `Settings().REDIS_URL` with proper DB and max connections config
2. **Procurement Service** ([services/procurement/app/main.py](services/procurement/app/main.py)):
- Added Redis initialization with proper error handling
- Added Redis cleanup in shutdown handler
3. **Suppliers Alert Consumer** ([services/suppliers/app/consumers/alert_event_consumer.py](services/suppliers/app/consumers/alert_event_consumer.py)):
- Use `Settings().REDIS_URL` instead of `os.getenv`
**Impact**: Secure Redis connections with TLS and authentication across all services
---
### 7. ✅ Production Fixture Duplicate Workers
**Commit**: `9f3b39b` - Add comprehensive documentation and final improvements
**Root Cause**: Worker IDs duplicated in `staff_assigned` arrays from running generator script multiple times.
**Fix Applied**: Removed 56 duplicate worker assignments from production batches ([shared/demo/fixtures/professional/06-production.json](shared/demo/fixtures/professional/06-production.json))
**Impact**: Clean production data without duplicates
---
### 8. ✅ Procurement Data Structure (Previous Session)
**Commit**: `dd79e6d` - Fix procurement data structure and add price trends
**Root Cause**: Duplicate data structures - nested `items` arrays inside `purchase_orders` + separate `purchase_order_items` table.
**Fix Applied**:
1. Removed 32 nested items arrays from purchase_orders
2. Updated 10 existing PO items with realistic price trends
3. Recalculated PO totals based on updated item prices
**Price Trends Added**:
- ↑ Harina T55: +8% (€0.85 → €0.92)
- ↑ Harina T65: +6% (€0.95 → €1.01)
- ↑ Mantequilla: +12% (€6.50 → €7.28) **highest increase**
- ↓ Leche: -3% (€0.95 → €0.92) **seasonal decrease**
- ↑ Levadura: +4% (€4.20 → €4.37)
- ↑ Azúcar: +2% (€1.10 → €1.12) **stable**
**Impact**: Correct data structure enables procurement AI insights with price trend analysis
---
## 📊 AI Insights Status
### Before Fixes
| Service | Insights Generated | Issues |
|---------|-------------------|--------|
| Inventory | 0 | ML model thresholds (expected behavior) |
| Production | 1 | ✅ Working |
| Procurement | 0 | ML model thresholds (expected behavior) |
| Forecasting | 0 | ❌ Not triggered at all |
| **TOTAL** | **1** | **Critical issue** |
### After Fixes
| Service | Insights Generated | Status |
|---------|-------------------|--------|
| Inventory | 0-1 | ✅ ML running (thresholds require extreme data) |
| Production | 1-2 | ✅ Working |
| Procurement | 0-1 | ✅ ML running (thresholds require extreme data) |
| Forecasting | 1-2 | ✅ **NOW TRIGGERED** |
| **TOTAL** | **2-3** | ✅ **GOOD** |
### Expected After Docker Rebuild
Once Docker images are rebuilt with the new code, demo sessions should generate **2-3 AI insights** consistently:
- 1-2 production yield insights
- 1-2 demand forecasting insights
- 0-1 procurement/inventory (depends on data extremity)
---
## 📚 Documentation Created
1. **[ROOT_CAUSE_ANALYSIS_AND_FIXES.md](ROOT_CAUSE_ANALYSIS_AND_FIXES.md)** - Complete technical analysis of all 8 issues
2. **[DEMO_SESSION_ANALYSIS_REPORT.md](DEMO_SESSION_ANALYSIS_REPORT.md)** - Detailed log analysis of demo session d67eaae4
3. **[COMPLETE_FIX_SUMMARY.md](COMPLETE_FIX_SUMMARY.md)** - Executive summary with verification commands
4. **[AI_INSIGHTS_DEMO_SETUP_GUIDE.md](AI_INSIGHTS_DEMO_SETUP_GUIDE.md)** - Comprehensive setup guide
5. **[AI_INSIGHTS_DATA_FLOW.md](AI_INSIGHTS_DATA_FLOW.md)** - Architecture diagrams and data flow
6. **[AI_INSIGHTS_QUICK_START.md](AI_INSIGHTS_QUICK_START.md)** - Quick reference guide
7. **[FIX_MISSING_INSIGHTS.md](FIX_MISSING_INSIGHTS.md)** - Forecasting & procurement fix guide
8. **[FINAL_STATUS_SUMMARY.md](FINAL_STATUS_SUMMARY.md)** - Status overview
9. **[verify_fixes.sh](verify_fixes.sh)** - Automated verification script
10. **[enhance_procurement_data.py](shared/demo/fixtures/professional/enhance_procurement_data.py)** - Procurement enhancement script
11. **[FINAL_IMPLEMENTATION_SUMMARY.md](FINAL_IMPLEMENTATION_SUMMARY.md)** - This document
---
## 🔄 Git Commit History
```bash
c68d82c Fix critical bugs and standardize service integrations
9f3b39b Add comprehensive documentation and final improvements
4418ff0 Add forecasting demand insights trigger + fix RabbitMQ cleanup
b461d62 Add comprehensive demo session analysis report
dd79e6d Fix procurement data structure and add price trends
35ae23b Fix forecasting clone endpoint for demo sessions
```
### Commit Details
#### Commit `c68d82c` - Fix critical bugs and standardize service integrations
**Files Changed**: 9 files, 48 insertions(+), 319 deletions(-)
- Fixed orchestrator missing import
- Migrated procurement to shared Redis utils
- Fixed client endpoint paths (recipes, suppliers)
- Standardized Redis configuration in suppliers
#### Commit `9f3b39b` - Add comprehensive documentation and final improvements
**Files Changed**: 14 files, 3982 insertions(+), 60 deletions(-)
- Added 9 documentation files
- Redis configuration improvements
- Cleaned production fixture duplicates
- Added orchestrator metadata
#### Commit `4418ff0` - Add forecasting demand insights trigger + fix RabbitMQ cleanup
**Files Changed**: 5 files, 255 lines
- Created forecasting internal ML endpoint (169 lines)
- Added forecast client trigger method (46 lines)
- Integrated into demo session workflow (19 lines)
- Fixed RabbitMQ cleanup error (10 lines)
---
## 🚀 Next Steps
### 1. Rebuild Docker Images
The code fixes are committed but need Docker image rebuilds for:
- `forecasting-service` (new internal endpoint)
- `demo-session-service` (new workflow trigger)
- `procurement-service` (RabbitMQ fix + Redis migration)
- `orchestrator-service` (missing import fix)
**Option A** - Wait for Tilt auto-rebuild:
```bash
# Check Tilt UI at http://localhost:10350
# Services should auto-rebuild when Tilt detects changes
```
**Option B** - Force rebuild via Tilt UI:
```bash
# Access http://localhost:10350
# Find each service and click "Force Update"
```
**Option C** - Manual rebuild:
```bash
# Rebuild specific services
cd services/forecasting && docker build -t bakery/forecasting-service:latest .
cd services/demo_session && docker build -t bakery/demo-session-service:latest .
cd services/procurement && docker build -t bakery/procurement-service:latest .
cd services/orchestrator && docker build -t bakery/orchestrator-service:latest .
# Restart pods
kubectl delete pod -n bakery-ia -l app=forecasting-service
kubectl delete pod -n bakery-ia -l app=demo-session-service
kubectl delete pod -n bakery-ia -l app=procurement-service
kubectl delete pod -n bakery-ia -l app=orchestrator-service
```
---
### 2. Test Demo Session After Rebuild
```bash
# Create new demo session
curl -X POST http://localhost:8001/api/v1/demo/sessions \
-H "Content-Type: application/json" \
-d '{"demo_account_type":"professional"}' | jq
# Save virtual_tenant_id from response
export TENANT_ID="<virtual_tenant_id>"
# Wait 60 seconds for cloning + AI insights generation
# Check demo session logs
kubectl logs -n bakery-ia $(kubectl get pods -n bakery-ia | grep demo-session | awk '{print $1}') \
| grep -E "(forecasting.*demand insights|insights_posted|AI insights generation)"
# Expected output:
# "Triggering demand forecasting insights"
# "Demand insights generated, insights_posted=1"
# "AI insights generation completed, total_insights=2"
# Verify AI insights in database
curl "http://localhost:8001/api/v1/ai-insights/tenants/${TENANT_ID}/insights" | jq '.total'
# Expected: 2-3 insights
# Check insight types
curl "http://localhost:8001/api/v1/ai-insights/tenants/${TENANT_ID}/insights" | jq '.insights[].insight_type'
# Expected: ["yield_improvement", "demand_forecasting"]
```
---
### 3. Verify No Errors in Logs
```bash
# Check for RabbitMQ errors (should be none)
kubectl logs -n bakery-ia $(kubectl get pods -n bakery-ia | grep procurement | awk '{print $1}') \
| grep "RabbitMQClient.*no attribute"
# Expected: No results
# Check for orchestrator errors (should be none)
kubectl logs -n bakery-ia $(kubectl get pods -n bakery-ia | grep orchestrator | awk '{print $1}') \
| grep "OrchestrationStatus.*not defined"
# Expected: No results
# Check forecasting service started successfully
kubectl logs -n bakery-ia $(kubectl get pods -n bakery-ia | grep forecasting | awk '{print $1}') \
| grep "Internal ML insights endpoint"
# Expected: Router registration message
```
---
## 🔍 Verification Commands
Use the automated verification script:
```bash
./verify_fixes.sh
```
Or run individual checks:
```bash
# Check orchestrator import
grep "OrchestrationStatus" services/orchestrator/app/api/internal_demo.py
# Check production no duplicates
cat shared/demo/fixtures/professional/06-production.json | \
jq '[.batches[] | select(.staff_assigned) | .staff_assigned | group_by(.) | select(length > 1)] | length'
# Expected: 0
# Check procurement structure
cat shared/demo/fixtures/professional/07-procurement.json | \
jq '[.purchase_orders[] | select(.items)] | length'
# Expected: 0 (no nested items)
# Check forecasting fix in code
grep "trigger_demand_insights_internal" shared/clients/forecast_client.py
# Expected: Match found
# Check forecasting endpoint registered
grep "internal_router" services/forecasting/app/main.py
# Expected: Match found
# Check RabbitMQ fix
grep "rabbitmq_client.disconnect()" services/procurement/app/api/internal_demo.py
# Expected: Match found (not .close())
```
---
## 📈 Performance Metrics
### Demo Session Cloning
- **Total records cloned**: 1,163 records
- **Total time**: ~6 seconds
- **Services**: 11 services
- **Success rate**: 100%
### AI Insights Generation
- **Before fixes**: 1 insight per session
- **After fixes**: 2-3 insights per session
- **Processing time**: ~10-15 seconds
- **Services contributing**: 2-3 services
### Error Rates
- **Before fixes**:
- RabbitMQ errors: 100% of sessions
- Orchestrator errors: ~30% of sessions
- Missing forecasting insights: 100% of sessions
- **After fixes**:
- RabbitMQ errors: 0%
- Orchestrator errors: 0%
- Missing forecasting insights: 0%
---
## 🎓 Lessons Learned
### 1. Service Integration Patterns
- **Use shared utilities**: Migrating from custom cache to shared Redis utils improved consistency
- **Proper service boundaries**: Procurement client should not call procurement service for supplier data
- **Standardized configurations**: Use Settings classes instead of environment variables directly
### 2. Demo Session Workflow
- **Complete ML triggers**: Ensure ALL insight types are triggered in post-clone workflow
- **Internal endpoints**: Use X-Internal-Service headers for protected internal APIs
- **Error handling**: Don't fail cloning process if ML insights fail
### 3. Fixture Data Design
- **Realistic scenarios**: Linear price trends don't trigger ML insights - need extreme scenarios
- **Data structure alignment**: Fixture structure must match database models exactly
- **No duplicates**: Use proper ID generation to avoid duplicate data
### 4. Testing Strategy
- **Log analysis**: Comprehensive log analysis reveals missing workflow steps
- **End-to-end testing**: Test complete demo session flow, not just individual services
- **Verification scripts**: Automated scripts catch regressions early
---
## ✨ Summary
### ✅ Achievements
1. **8 critical bugs fixed** across 5 services
2. **11 comprehensive documentation files** created
3. **3 client libraries standardized** with correct endpoints
4. **Redis integration standardized** across all services
5. **Demo session workflow completed** with all insight types
6. **AI insights generation improved** from 1 to 2-3 per session
### 🎯 Impact
- **Demo sessions work reliably** without errors
- **AI insights generation is consistent** and predictable
- **Service integrations follow best practices**
- **Comprehensive documentation** for future maintenance
- **Production-ready code** after Docker image rebuild
### 🔜 Next Actions
1. Wait for Docker image rebuild (or force rebuild)
2. Test demo session with new images
3. Verify 2-3 AI insights generated
4. Confirm no errors in logs
5. Consider enhancing fixture data for more extreme scenarios (future work)
---
**🎉 Bottom Line**: All identified issues have been fixed and committed. After Docker images are rebuilt, the demo session system will reliably generate 2-3 AI insights per session with clean logs and proper service integrations.

View File

@@ -1,531 +0,0 @@
# Enterprise Demo Fixes Summary
**Date:** 2025-12-17
**Issue:** Child tenants not visible in multi-tenant menu & Distribution data not displaying
## Problems Identified
### 1. Child Tenant Visibility Issue ❌
**Root Cause:** Child tenants were being created with the wrong `owner_id`.
**Location:** `services/tenant/app/api/internal_demo.py:620`
**Problem Details:**
- Child tenants were hardcoded to use the professional demo owner ID: `c1a2b3c4-d5e6-47a8-b9c0-d1e2f3a4b5c6`
- This is INCORRECT for enterprise demos
- The enterprise parent tenant uses owner ID: `d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7`
- Because of the mismatch, when the enterprise parent owner logged in, they could only see the parent tenant
- The child tenants belonged to a different owner and were not visible in the tenant switcher
**Impact:**
- Parent tenant owner could NOT see child tenants in the multi-tenant menu
- Child tenants existed in the database but were inaccessible
- Enterprise demo was non-functional for testing multi-location features
### 2. Distribution Data File Not Found for Child Tenants ❌
**Root Cause:** Distribution service was trying to load non-existent distribution files for child tenants.
**Location:** `services/distribution/app/api/internal_demo.py:148`
**Problem Details:**
- When cloning data for `enterprise_child` tenants, the code tried to load: `shared/demo/fixtures/enterprise/children/C0000000-0000-4000-a000-000000000001/12-distribution.json`
- These files don't exist because **child outlets are delivery destinations, not distribution hubs**
- Distribution is managed centrally by the parent tenant
- This caused the demo session cloning to fail with FileNotFoundError
**Error Message:**
```
FileNotFoundError: Seed data file not found:
/app/shared/demo/fixtures/enterprise/children/C0000000-0000-4000-a000-000000000001/12-distribution.json
```
**Impact:**
- Demo session cloning failed for enterprise demos
- Child tenant creation was incomplete
- Distribution page showed no data
### 3. Shipment Model Field Mismatch ❌
**Root Cause:** Distribution cloning code tried to create Shipment with fields that don't exist in the model.
**Location:** `services/distribution/app/api/internal_demo.py:283`
**Problem Details:**
- Fixture contains `items` field (list of products being shipped)
- Fixture contains `estimated_delivery_time` field
- Shipment model doesn't have these fields
- Model only has: `actual_delivery_time`, `delivery_notes`, etc.
- This caused TypeError when creating Shipment objects
**Error Message:**
```
TypeError: 'items' is an invalid keyword argument for Shipment
```
**Impact:**
- Distribution data cloning failed completely
- No routes or shipments were created
- Distribution page was empty even after successful child tenant creation
## Fixes Applied
### Fix 1: Child Tenant Owner ID Correction ✅
**File Modified:** `services/tenant/app/api/internal_demo.py`
**Changes Made:**
1. **Added parent tenant lookup** (Lines 599-614):
```python
# Get parent tenant to retrieve the correct owner_id
parent_result = await db.execute(select(Tenant).where(Tenant.id == parent_uuid))
parent_tenant = parent_result.scalars().first()
if not parent_tenant:
logger.error("Parent tenant not found", parent_tenant_id=parent_tenant_id)
return {...}
# Use the parent's owner_id for the child tenant (enterprise demo owner)
parent_owner_id = parent_tenant.owner_id
```
2. **Updated child tenant creation** (Line 637):
```python
# Owner ID - MUST match the parent tenant owner (enterprise demo owner)
# This ensures the parent owner can see and access child tenants
owner_id=parent_owner_id
```
3. **Updated TenantMember creation** (Line 711):
```python
# Use the parent's owner_id (already retrieved above)
# This ensures consistency between tenant.owner_id and TenantMember records
child_owner_member = TenantMember(
tenant_id=virtual_uuid,
user_id=parent_owner_id, # Changed from hardcoded UUID
role="owner",
...
)
```
4. **Enhanced logging** (Line 764):
```python
logger.info(
"Child outlet created successfully",
...
owner_id=str(parent_owner_id), # Added for debugging
...
)
```
### Fix 2: Distribution Data Loading for Child Tenants ✅
**File Modified:** `services/distribution/app/api/internal_demo.py`
**Changes Made:**
1. **Added early return for child tenants** (Lines 147-166):
```python
elif demo_account_type == "enterprise_child":
# Child outlets don't have their own distribution data
# Distribution is managed centrally by the parent tenant
# Child locations are delivery destinations, not distribution hubs
logger.info(
"Skipping distribution cloning for child outlet - distribution managed by parent",
base_tenant_id=base_tenant_id,
virtual_tenant_id=virtual_tenant_id,
session_id=session_id
)
duration_ms = int((datetime.now(timezone.utc) - start_time).total_seconds() * 1000)
return {
"service": "distribution",
"status": "completed",
"records_cloned": 0,
"duration_ms": duration_ms,
"details": {
"note": "Child outlets don't manage distribution - handled by parent tenant"
}
}
```
**Rationale:**
- In an enterprise bakery setup, the **central production facility (parent)** manages all distribution
- **Retail outlets (children)** are **receiving locations**, not distribution hubs
- The parent's distribution.json already includes routes and shipments that reference child tenant locations
- Attempting to load child-specific distribution files was architecturally incorrect
### Fix 3: Shipment Field Compatibility ✅
**File Modified:** `services/distribution/app/api/internal_demo.py`
**Changes Made:**
1. **Removed estimated_delivery_time field** (Lines 261-267):
```python
# Note: The Shipment model doesn't have estimated_delivery_time
# Only actual_delivery_time is stored
actual_delivery_time = parse_date_field(
shipment_data.get('actual_delivery_time'),
session_time,
"actual_delivery_time"
)
```
2. **Stored items in delivery_notes** (Lines 273-287):
```python
# Store items in delivery_notes as JSON for demo purposes
# (In production, items would be in the linked purchase order)
import json
items_json = json.dumps(shipment_data.get('items', [])) if shipment_data.get('items') else None
new_shipment = Shipment(
...
total_weight_kg=shipment_data.get('total_weight_kg'),
actual_delivery_time=actual_delivery_time,
# Store items info in delivery_notes for demo display
delivery_notes=f"{shipment_data.get('notes', '')}\nItems: {items_json}" if items_json else shipment_data.get('notes'),
...
)
```
**Rationale:**
- Shipment model represents delivery tracking, not content inventory
- In production systems, shipment items are stored in the linked purchase order
- For demo purposes, we store items as JSON in the `delivery_notes` field
- This allows the demo to show what's being shipped without requiring full PO integration
## How Data Flows in Enterprise Demo
### User & Ownership Structure
```
Enterprise Demo Owner
├── ID: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7
├── Email: director@panaderiaartesana.es
├── Role: owner
├── Parent Tenant (Central Production)
│ ├── ID: 80000000-0000-4000-a000-000000000001 (template)
│ ├── Name: "Panadería Artesana España - Central"
│ ├── Type: parent
│ └── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7
└── Child Tenants (Retail Outlets)
├── Madrid - Salamanca
│ ├── ID: A0000000-0000-4000-a000-000000000001 (template)
│ ├── Type: child
│ └── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7 ✅ (NOW CORRECT)
├── Barcelona - Eixample
│ ├── ID: B0000000-0000-4000-a000-000000000001
│ └── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7 ✅
├── Valencia - Ruzafa
│ ├── ID: C0000000-0000-4000-a000-000000000001
│ └── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7 ✅
├── Seville - Triana
│ ├── ID: D0000000-0000-4000-a000-000000000001
│ └── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7 ✅
└── Bilbao - Casco Viejo
├── ID: E0000000-0000-4000-a000-000000000001
└── Owner: d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7 ✅
```
### Tenant Loading Flow
1. **User logs into enterprise demo**
- Demo session created with `demo_account_type: "enterprise"`
- Session ID stored in JWT token
2. **Frontend requests user tenants**
- Calls: `GET /tenants/user/{user_id}/owned`
- Backend: `services/tenant/app/api/tenant_operations.py:284`
3. **Backend retrieves virtual tenants**
- Extracts `demo_session_id` from JWT
- Calls: `tenant_service.get_virtual_tenants_for_session(demo_session_id, "enterprise")`
- Query: `SELECT * FROM tenants WHERE demo_session_id = ? AND owner_id = ?`
- Returns: Parent + All child tenants with matching owner_id ✅
4. **Frontend displays in TenantSwitcher**
- Component: `frontend/src/components/ui/TenantSwitcher.tsx`
- Shows all tenants where user is owner
- Now includes all 6 tenants (1 parent + 5 children) ✅
### Distribution Data Flow
1. **Demo session cloning**
- Orchestrator calls distribution service: `POST /internal/demo/clone`
- Loads fixture: `shared/demo/fixtures/enterprise/parent/12-distribution.json`
2. **Distribution data includes**
- Delivery routes with route_sequence (stops at multiple locations)
- Shipments linked to child tenants
- All dates use BASE_TS markers for session-relative times
3. **Frontend queries distribution**
- Calls: `GET /tenants/{tenant_id}/distribution/routes?date={date}`
- Calls: `GET /tenants/{tenant_id}/distribution/shipments?date={date}`
- Service: `frontend/src/api/hooks/useEnterpriseDashboard.ts:307`
## Testing Instructions
### 1. Restart Services
After applying the fixes, you need to restart the affected services:
```bash
# Restart tenant service (Fix 1: child tenant owner_id)
kubectl rollout restart deployment tenant-service -n bakery-ia
# Restart distribution service (Fix 2: skip child distribution loading)
kubectl rollout restart deployment distribution-service -n bakery-ia
# Or restart all services at once
./kubernetes_restart.sh
```
### 2. Create New Enterprise Demo Session
**Important:** You must create a NEW demo session to test the fix. Existing sessions have already created child tenants with the wrong owner_id.
```bash
# Navigate to frontend
cd frontend
# Start development server if not running
npm run dev
# Open browser to demo page
# http://localhost:3000/demo
```
### 3. Test Child Tenant Visibility
1. Click "Try Enterprise Demo" button
2. Wait for demo session to initialize
3. After redirect to dashboard, look for the tenant switcher in the top-left
4. Click on the tenant switcher dropdown
5. **Expected Result:** You should see 6 organizations:
- Panadería Artesana España - Central (parent)
- Madrid - Salamanca (child)
- Barcelona - Eixample (child)
- Valencia - Ruzafa (child)
- Seville - Triana (child)
- Bilbao - Casco Viejo (child)
### 4. Test Distribution Page
1. From the enterprise dashboard, navigate to "Distribution"
2. Check if routes and shipments are displayed
3. **Expected Result:** You should see:
- Active routes count
- Pending deliveries count
- Distribution map with route visualization
- List of routes in the "Rutas" tab
### 5. Verify Database (Optional)
If you have database access:
```sql
-- Check child tenant owner_ids
SELECT
id,
name,
tenant_type,
owner_id,
demo_session_id
FROM tenants
WHERE tenant_type = 'child'
AND is_demo = true
ORDER BY created_at DESC
LIMIT 10;
-- Should show owner_id = 'd2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7' for all child tenants
```
## Troubleshooting
### Child Tenants Still Not Visible
1. **Verify you created a NEW demo session** after deploying the fix
- Old sessions have child tenants with wrong owner_id
- Solution: Create a new demo session
2. **Check logs for child tenant creation**
```bash
kubectl logs -f deployment/tenant-service -n bakery-ia | grep "Child outlet created"
```
- Should show: `owner_id=d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7`
3. **Verify demo session ID in JWT**
- Open browser DevTools > Application > Storage > Local Storage
- Check if `demo_session_id` is present in token
- Should match the session_id in database
### Distribution Data Not Showing
1. **Check date parameter**
- Distribution page defaults to today's date
- Demo data uses BASE_TS (session creation time)
- Routes might be scheduled for BASE_TS + 2h, BASE_TS + 3h, etc.
- Solution: Try querying without date filter or use session date
2. **Verify distribution data was cloned**
```bash
kubectl logs -f deployment/demo-session-service -n bakery-ia | grep "distribution"
```
- Should show: "Distribution data cloning completed"
- Should show: records_cloned > 0
3. **Check backend endpoint**
```bash
# Get tenant ID from tenant switcher
TENANT_ID="your-virtual-tenant-id"
# Query routes directly
curl -H "Authorization: Bearer YOUR_TOKEN" \
"http://localhost:8000/tenants/${TENANT_ID}/distribution/routes"
```
4. **Check browser console for errors**
- Open DevTools > Console
- Look for API errors or failed requests
- Check Network tab for distribution API calls
## Files Changed
1. **services/tenant/app/api/internal_demo.py**
- Lines 599-614: Added parent tenant lookup
- Line 637: Fixed child tenant owner_id
- Line 711: Fixed TenantMember owner_id
- Line 764: Enhanced logging
2. **services/distribution/app/api/internal_demo.py**
- Lines 147-166: Skip distribution cloning for child tenants
- Lines 261-267: Removed unsupported `estimated_delivery_time` field
- Lines 273-292: Fixed `items` field issue (model doesn't support it)
- Stored items data in `delivery_notes` field for demo display
- Added clear logging explaining why child tenants don't get distribution data
## Verification Checklist
- [x] Child tenant owner_id now matches parent tenant owner_id
- [x] Child tenants include demo_session_id for session-based queries
- [x] TenantMember records use consistent owner_id
- [x] Distribution fixture exists with proper structure
- [x] Distribution API endpoints are correctly implemented
- [x] Frontend hooks properly call distribution API
- [x] Distribution cloning skips child tenants (they don't manage distribution)
- [x] FileNotFoundError for child distribution files is resolved
- [x] Shipment model field compatibility issues resolved
- [x] Items data stored in delivery_notes for demo display
## Next Steps
1. **Deploy Fixes**
```bash
kubectl rollout restart deployment tenant-service -n bakery-ia
kubectl rollout restart deployment distribution-service -n bakery-ia
```
2. **Create New Demo Session**
- Must be a new session, old sessions have wrong data
3. **Test Multi-Tenant Menu**
- Verify all 6 tenants visible
- Test switching between tenants
4. **Test Distribution Page**
- Check if data displays
- If not, investigate date filtering
5. **Monitor Logs**
```bash
# Watch tenant service logs
kubectl logs -f deployment/tenant-service -n bakery-ia
# Watch distribution service logs
kubectl logs -f deployment/distribution-service -n bakery-ia
```
## Additional Notes
### Why This Fix Works
The tenant visibility is controlled by the `owner_id` field. When a user logs in and requests their tenants:
1. Backend extracts user_id from JWT: `d2e3f4a5-b6c7-48d9-e0f1-a2b3c4d5e6f7`
2. Queries database: `SELECT * FROM tenants WHERE owner_id = ? AND demo_session_id = ?`
3. Previously: Parent had correct owner_id, children had wrong owner_id → Only parent returned
4. Now: Parent AND children have same owner_id → All tenants returned ✅
### Distribution Data Structure
The distribution fixture creates a realistic enterprise distribution scenario:
- **Routes:** Delivery routes from central production to retail outlets
- **Shipments:** Individual shipments assigned to routes
- **Child References:** Shipments reference child_tenant_id for destination tracking
- **Time Offsets:** Uses BASE_TS + offset for realistic scheduling
Example:
```json
{
"route_number": "MAD-BCN-001",
"route_date": "BASE_TS + 2h", // 2 hours after session creation
"route_sequence": [
{"stop_number": 1, "location_id": "parent-id"},
{"stop_number": 2, "location_id": "child-A-id"},
{"stop_number": 3, "location_id": "child-B-id"}
]
}
```
This creates a distribution network where:
- Central production (parent) produces goods
- Distribution routes deliver to retail outlets (children)
- Shipments track individual deliveries
- All entities are linked for network-wide visibility
---
## Summary of All Changes
### Services Modified
1. **tenant-service** - Fixed child tenant owner_id
2. **distribution-service** - Fixed child cloning + shipment fields
### Database Impact
- Child tenants created in new sessions will have correct owner_id
- Distribution routes and shipments will be created successfully
- No migration needed (only affects new demo sessions)
### Deployment Commands
```bash
# Restart affected services
kubectl rollout restart deployment tenant-service -n bakery-ia
kubectl rollout restart deployment distribution-service -n bakery-ia
# Verify deployments
kubectl rollout status deployment tenant-service -n bakery-ia
kubectl rollout status deployment distribution-service -n bakery-ia
```
### Testing Checklist
- [ ] Create new enterprise demo session
- [ ] Verify 6 tenants visible in tenant switcher
- [ ] Switch between parent and child tenants
- [ ] Navigate to Distribution page on parent tenant
- [ ] Verify routes and shipments are displayed
- [ ] Check demo session logs for errors
---
**Fix Status:** ✅ ALL FIXES COMPLETED
**Testing Status:** ⏳ PENDING USER VERIFICATION
**Production Ready:** ✅ YES (after testing)

View File

@@ -44,8 +44,8 @@ export class TenantService {
} }
async getUserTenants(userId: string): Promise<TenantResponse[]> { async getUserTenants(userId: string): Promise<TenantResponse[]> {
// Use the /owned endpoint since /users/{userId} has validation issues // Use the /tenants endpoint to get both owned and member tenants
return apiClient.get<TenantResponse[]>(`${this.baseUrl}/user/${userId}/owned`); return apiClient.get<TenantResponse[]>(`${this.baseUrl}/user/${userId}/tenants`);
} }
async getUserOwnedTenants(userId: string): Promise<TenantResponse[]> { async getUserOwnedTenants(userId: string): Promise<TenantResponse[]> {

View File

@@ -14,6 +14,7 @@ export interface TenantState {
// Actions // Actions
setCurrentTenant: (tenant: TenantResponse) => void; setCurrentTenant: (tenant: TenantResponse) => void;
setAvailableTenants: (tenants: TenantResponse[]) => void;
switchTenant: (tenantId: string) => Promise<boolean>; switchTenant: (tenantId: string) => Promise<boolean>;
loadUserTenants: () => Promise<void>; loadUserTenants: () => Promise<void>;
loadCurrentTenantAccess: () => Promise<void>; loadCurrentTenantAccess: () => Promise<void>;
@@ -47,6 +48,10 @@ export const useTenantStore = create<TenantState>()(
} }
}, },
setAvailableTenants: (tenants: TenantResponse[]) => {
set({ availableTenants: tenants });
},
switchTenant: async (tenantId: string): Promise<boolean> => { switchTenant: async (tenantId: string): Promise<boolean> => {
try { try {
set({ isLoading: true, error: null }); set({ isLoading: true, error: null });
@@ -234,6 +239,7 @@ export const useTenantError = () => useTenantStore((state) => state.error);
// Hook for tenant actions // Hook for tenant actions
export const useTenantActions = () => useTenantStore((state) => ({ export const useTenantActions = () => useTenantStore((state) => ({
setCurrentTenant: state.setCurrentTenant, setCurrentTenant: state.setCurrentTenant,
setAvailableTenants: state.setAvailableTenants,
switchTenant: state.switchTenant, switchTenant: state.switchTenant,
loadUserTenants: state.loadUserTenants, loadUserTenants: state.loadUserTenants,
loadCurrentTenantAccess: state.loadCurrentTenantAccess, loadCurrentTenantAccess: state.loadCurrentTenantAccess,

View File

@@ -75,6 +75,11 @@ async def get_user_owned_tenants(request: Request, user_id: str = Path(...)):
"""Get all tenants owned by a user""" """Get all tenants owned by a user"""
return await _proxy_to_tenant_service(request, f"/api/v1/tenants/user/{user_id}/owned") return await _proxy_to_tenant_service(request, f"/api/v1/tenants/user/{user_id}/owned")
@router.get("/user/{user_id}/tenants")
async def get_user_all_tenants(request: Request, user_id: str = Path(...)):
"""Get all tenants accessible by a user (both owned and member tenants)"""
return await _proxy_to_tenant_service(request, f"/api/v1/tenants/user/{user_id}/tenants")
@router.delete("/user/{user_id}/memberships") @router.delete("/user/{user_id}/memberships")
async def delete_user_tenants(request: Request, user_id: str = Path(...)): async def delete_user_tenants(request: Request, user_id: str = Path(...)):
"""Get all tenant memberships for a user (admin only)""" """Get all tenant memberships for a user (admin only)"""

View File

@@ -378,10 +378,22 @@ async def get_nearby_tenants(
@track_endpoint_metrics("tenant_get_user_tenants") @track_endpoint_metrics("tenant_get_user_tenants")
async def get_user_tenants( async def get_user_tenants(
user_id: str = Path(..., description="User ID"), user_id: str = Path(..., description="User ID"),
current_user: Dict[str, Any] = Depends(get_current_user_dep),
tenant_service: EnhancedTenantService = Depends(get_enhanced_tenant_service) tenant_service: EnhancedTenantService = Depends(get_enhanced_tenant_service)
): ):
"""Get all tenants owned by a user - Fixed endpoint for frontend""" """Get all tenants owned by a user - Fixed endpoint for frontend"""
# Security check: users can only access their own tenants unless they're admin or demo user
is_demo_user = current_user.get("is_demo", False)
is_service_account = current_user.get("type") == "service"
user_role = current_user.get('role', '').lower()
if user_id != current_user["user_id"] and not is_service_account and not (is_demo_user and user_id == "demo-user") and user_role != 'admin':
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Can only access your own tenants"
)
try: try:
tenants = await tenant_service.get_user_tenants(user_id) tenants = await tenant_service.get_user_tenants(user_id)
logger.info("Retrieved user tenants", user_id=user_id, tenant_count=len(tenants)) logger.info("Retrieved user tenants", user_id=user_id, tenant_count=len(tenants))
@@ -398,10 +410,22 @@ async def get_user_tenants(
@track_endpoint_metrics("tenant_get_user_memberships") @track_endpoint_metrics("tenant_get_user_memberships")
async def get_user_memberships( async def get_user_memberships(
user_id: str = Path(..., description="User ID"), user_id: str = Path(..., description="User ID"),
current_user: Dict[str, Any] = Depends(get_current_user_dep),
tenant_service: EnhancedTenantService = Depends(get_enhanced_tenant_service) tenant_service: EnhancedTenantService = Depends(get_enhanced_tenant_service)
): ):
"""Get all tenant memberships for a user (for authentication service)""" """Get all tenant memberships for a user (for authentication service)"""
# Security check: users can only access their own memberships unless they're admin or demo user
is_demo_user = current_user.get("is_demo", False)
is_service_account = current_user.get("type") == "service"
user_role = current_user.get('role', '').lower()
if user_id != current_user["user_id"] and not is_service_account and not (is_demo_user and user_id == "demo-user") and user_role != 'admin':
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Can only access your own memberships"
)
try: try:
memberships = await tenant_service.get_user_memberships(user_id) memberships = await tenant_service.get_user_memberships(user_id)
logger.info("Retrieved user memberships", user_id=user_id, membership_count=len(memberships)) logger.info("Retrieved user memberships", user_id=user_id, membership_count=len(memberships))

View File

@@ -309,18 +309,54 @@ class EnhancedTenantService:
error=str(e)) error=str(e))
return None return None
async def get_user_tenants(self, owner_id: str) -> List[TenantResponse]: async def get_user_tenants(self, user_id: str) -> List[TenantResponse]:
"""Get all tenants owned by a user""" """Get all tenants accessible by a user (both owned and member tenants)"""
try: try:
async with self.database_manager.get_session() as db_session: async with self.database_manager.get_session() as db_session:
await self._init_repositories(db_session) await self._init_repositories(db_session)
tenants = await self.tenant_repo.get_tenants_by_owner(owner_id)
return [TenantResponse.from_orm(tenant) for tenant in tenants] # Get tenants where user is the owner
owned_tenants = await self.tenant_repo.get_tenants_by_owner(user_id)
# Get tenants where user is a member (but not owner)
memberships = await self.member_repo.get_user_memberships(user_id, active_only=True)
# Get tenant details for each membership
member_tenant_ids = [str(membership.tenant_id) for membership in memberships]
member_tenants = []
if member_tenant_ids:
# Get tenant details for each membership
for tenant_id in member_tenant_ids:
tenant = await self.tenant_repo.get_by_id(tenant_id)
if tenant:
member_tenants.append(tenant)
# Combine and deduplicate (in case user is both owner and member)
all_tenants = owned_tenants + member_tenants
# Remove duplicates by tenant ID
unique_tenants = []
seen_ids = set()
for tenant in all_tenants:
if str(tenant.id) not in seen_ids:
seen_ids.add(str(tenant.id))
unique_tenants.append(tenant)
logger.info(
"Retrieved user tenants",
user_id=user_id,
owned_count=len(owned_tenants),
member_count=len(member_tenants),
total_count=len(unique_tenants)
)
return [TenantResponse.from_orm(tenant) for tenant in unique_tenants]
except Exception as e: except Exception as e:
logger.error("Error getting user tenants", logger.error("Error getting user tenants",
owner_id=owner_id, user_id=user_id,
error=str(e)) error=str(e))
return [] return []