Add more services
services/production/Dockerfile (new file, 36 lines)
@@ -0,0 +1,36 @@
# Production Service Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY services/production/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy shared modules
COPY shared/ ./shared/

# Copy application code
COPY services/production/app/ ./app/

# Create logs directory
RUN mkdir -p logs

# Expose port
EXPOSE 8000

# Set environment variables
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
services/production/README.md (new file, 187 lines)
@@ -0,0 +1,187 @@
# Production Service

Production planning and batch management service for the bakery management system.

## Overview

The Production Service handles all production-related operations, including:

- **Production Planning**: Calculate daily requirements using demand forecasts and inventory levels
- **Batch Management**: Track production batches from start to finish
- **Capacity Management**: Equipment, staff, and time scheduling
- **Quality Control**: Yield tracking, waste management, efficiency metrics
- **Alert System**: Comprehensive monitoring and notifications

## Features

### Core Capabilities
- Daily production requirements calculation
- Production batch lifecycle management
- Real-time capacity planning and utilization
- Quality control tracking and metrics
- Comprehensive alert system with multiple severity levels
- Integration with inventory, orders, recipes, and sales services

### API Endpoints

#### Dashboard & Planning
- `GET /api/v1/tenants/{tenant_id}/production/dashboard-summary` - Production dashboard data
- `GET /api/v1/tenants/{tenant_id}/production/daily-requirements` - Daily production planning
- `GET /api/v1/tenants/{tenant_id}/production/requirements` - Requirements for procurement

#### Batch Management
- `POST /api/v1/tenants/{tenant_id}/production/batches` - Create production batch
- `GET /api/v1/tenants/{tenant_id}/production/batches/active` - Get active batches
- `GET /api/v1/tenants/{tenant_id}/production/batches/{batch_id}` - Get batch details
- `PUT /api/v1/tenants/{tenant_id}/production/batches/{batch_id}/status` - Update batch status

#### Scheduling & Capacity
- `GET /api/v1/tenants/{tenant_id}/production/schedule` - Production schedule
- `GET /api/v1/tenants/{tenant_id}/production/capacity/status` - Capacity status

#### Alerts & Monitoring
- `GET /api/v1/tenants/{tenant_id}/production/alerts` - Production alerts
- `POST /api/v1/tenants/{tenant_id}/production/alerts/{alert_id}/acknowledge` - Acknowledge alerts

#### Analytics
- `GET /api/v1/tenants/{tenant_id}/production/metrics/yield` - Yield metrics (see the example request below)
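All endpoints are tenant-scoped and authenticated. A minimal example request against the yield-metrics endpoint, assuming the service runs locally on port 8000 and that the shared auth layer accepts a standard bearer token (`$TOKEN` and `$TENANT_ID` are placeholders you must supply):

```bash
# Query yield metrics for a date range (start_date and end_date are required)
curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8000/api/v1/tenants/$TENANT_ID/production/metrics/yield?start_date=2024-01-01&end_date=2024-01-31"
```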
## Service Integration

### Shared Clients Used
- **InventoryServiceClient**: Stock levels, ingredient availability
- **OrdersServiceClient**: Demand requirements, customer orders
- **RecipesServiceClient**: Recipe requirements, ingredient calculations
- **SalesServiceClient**: Historical sales data
- **NotificationServiceClient**: Alert notifications

### Authentication
Uses shared authentication patterns with tenant isolation:
- JWT token validation
- Tenant access verification (see the sketch after this list)
- User permission checks
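Every endpoint follows the same guard: the tenant id in the path must match the tenant resolved from the JWT by the shared dependencies. A minimal sketch of the pattern used throughout `app/api/production.py` (the `/example` route itself is illustrative):

```python
from uuid import UUID

from fastapi import APIRouter, Depends, HTTPException, Path

from shared.auth.decorators import get_current_tenant_id_dep, get_current_user_dep

router = APIRouter(tags=["production"])


@router.get("/tenants/{tenant_id}/production/example")
async def example_endpoint(
    tenant_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),  # tenant id from the JWT
    current_user: dict = Depends(get_current_user_dep),        # decoded user claims
):
    # Reject requests whose path tenant does not match the token's tenant
    if str(tenant_id) != current_tenant:
        raise HTTPException(status_code=403, detail="Access denied to this tenant")
    return {"user_id": current_user.get("user_id")}
```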
## Configuration

Key configuration options in `app/core/config.py`:

### Production Planning
- `PLANNING_HORIZON_DAYS`: Days ahead for planning (default: 7)
- `PRODUCTION_BUFFER_PERCENTAGE`: Safety buffer for production (default: 10%)
- `MINIMUM_BATCH_SIZE`: Minimum batch size (default: 1.0)
- `MAXIMUM_BATCH_SIZE`: Maximum batch size (default: 100.0)

### Capacity Management
- `DEFAULT_WORKING_HOURS_PER_DAY`: Standard working hours (default: 12)
- `MAX_OVERTIME_HOURS`: Maximum overtime allowed (default: 4)
- `CAPACITY_UTILIZATION_TARGET`: Target utilization (default: 85%)

### Quality Control
- `MINIMUM_YIELD_PERCENTAGE`: Minimum acceptable yield (default: 85%)
- `QUALITY_SCORE_THRESHOLD`: Minimum quality score (default: 8.0)

### Alert Thresholds
- `CAPACITY_EXCEEDED_THRESHOLD`: Capacity alert threshold (default: 100%)
- `PRODUCTION_DELAY_THRESHOLD_MINUTES`: Delay alert threshold (default: 60)
- `LOW_YIELD_ALERT_THRESHOLD`: Low yield alert (default: 80%)

All of these can be overridden through environment variables of the same name, as shown below.
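For example (values are illustrative; the variable names match the `os.getenv` lookups in `app/core/config.py`):

```bash
# Widen the planning window and tighten the yield alerting
export PLANNING_HORIZON_DAYS=14
export PRODUCTION_BUFFER_PERCENTAGE=15.0
export LOW_YIELD_ALERT_THRESHOLD=0.85   # config.py expects a ratio here, not a percent

uvicorn app.main:app --host 0.0.0.0 --port 8000
```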
## Database Models

### ProductionBatch
- Complete batch tracking from planning to completion
- Status management (pending, in_progress, completed, etc.)
- Cost tracking and yield calculations
- Quality metrics integration

### ProductionSchedule
- Daily production scheduling
- Capacity planning and tracking
- Staff and equipment assignments
- Performance metrics

### ProductionCapacity
- Resource availability tracking
- Equipment and staff capacity
- Maintenance scheduling
- Utilization monitoring

### QualityCheck
- Quality control measurements
- Pass/fail tracking
- Defect recording
- Corrective action management

### ProductionAlert
- Comprehensive alert system
- Multiple severity levels
- Action recommendations
- Resolution tracking

## Alert System

### Alert Types
- **Capacity Exceeded**: When production requirements exceed available capacity
- **Production Delay**: When batches are delayed beyond thresholds
- **Cost Spike**: When production costs exceed normal ranges
- **Low Yield**: When yield percentages fall below targets
- **Quality Issues**: When quality scores consistently decline
- **Equipment Maintenance**: When equipment needs maintenance

### Severity Levels
- **Critical**: WhatsApp + Email + Dashboard + SMS
- **High**: WhatsApp + Email + Dashboard
- **Medium**: Email + Dashboard
- **Low**: Dashboard only
## Development

### Setup
```bash
# Install dependencies
pip install -r requirements.txt

# Set up database
# Configure the PRODUCTION_DATABASE_URL environment variable

# Run migrations
alembic upgrade head

# Start service
uvicorn app.main:app --reload
```

### Testing
```bash
# Run tests
pytest

# Run with coverage
pytest --cov=app
```

### Docker
```bash
# Build from the repository root so the COPY paths in the Dockerfile resolve
docker build -f services/production/Dockerfile -t production-service .

# Run container
docker run -p 8000:8000 production-service
```

## Deployment

The service is designed for containerized deployment with:
- Health checks at `/health`
- Structured logging
- Metrics collection
- Database migrations
- Service discovery integration

## Architecture

Follows a domain-driven microservices architecture:
- Clean separation of concerns
- Repository pattern for data access
- Service layer for business logic
- API layer for external interface
- Shared infrastructure for cross-cutting concerns

A sketch of how these layers fit together is shown below.
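A minimal, self-contained sketch of the repository/service layering named above. The real classes take a `database_manager` and `settings`; the stubbed repository here stands in for the SQLAlchemy queries:

```python
import asyncio
from uuid import UUID, uuid4


class ProductionBatchRepository:
    """Repository layer: owns data access (real code: app/repositories/)."""

    def __init__(self, db):
        self.db = db  # an AsyncSession in the real service

    async def get_active_batches(self, tenant_id: str) -> list[dict]:
        # The real repository issues a SELECT filtered by tenant_id; stubbed here.
        return [{"tenant_id": tenant_id, "status": "in_progress"}]


class ProductionService:
    """Service layer: business logic composed from repositories."""

    def __init__(self, repo: ProductionBatchRepository):
        self.repo = repo

    async def active_batch_count(self, tenant_id: UUID) -> int:
        return len(await self.repo.get_active_batches(str(tenant_id)))


# The API layer (app/api/production.py) wires these together via FastAPI Depends().
if __name__ == "__main__":
    service = ProductionService(ProductionBatchRepository(db=None))
    print(asyncio.run(service.active_batch_count(uuid4())))  # -> 1
```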
services/production/app/__init__.py (new file, 6 lines)
@@ -0,0 +1,6 @@
# ================================================================
# services/production/app/__init__.py
# ================================================================
"""
Production service application package
"""
services/production/app/api/__init__.py (new file, 6 lines)
@@ -0,0 +1,6 @@
# ================================================================
# services/production/app/api/__init__.py
# ================================================================
"""
API routes and endpoints for production service
"""
services/production/app/api/production.py (new file, 462 lines)
@@ -0,0 +1,462 @@
# ================================================================
# services/production/app/api/production.py
# ================================================================
"""
Production API endpoints
"""

from fastapi import APIRouter, Depends, HTTPException, Path, Query
from typing import Optional, List
from datetime import date, datetime, timedelta  # timedelta is used by the schedule endpoint
from uuid import UUID
import structlog

from shared.auth.decorators import get_current_user_dep, get_current_tenant_id_dep
from app.core.database import get_db
from app.services.production_service import ProductionService
from app.services.production_alert_service import ProductionAlertService
from app.schemas.production import (
    ProductionBatchCreate, ProductionBatchUpdate, ProductionBatchStatusUpdate,
    ProductionBatchResponse, ProductionBatchListResponse,
    DailyProductionRequirements, ProductionDashboardSummary, ProductionMetrics,
    ProductionAlertResponse, ProductionAlertListResponse
)
from app.core.config import settings

logger = structlog.get_logger()

router = APIRouter(tags=["production"])


def get_production_service() -> ProductionService:
    """Dependency injection for production service"""
    from app.core.database import database_manager
    return ProductionService(database_manager, settings)


def get_production_alert_service() -> ProductionAlertService:
    """Dependency injection for production alert service"""
    from app.core.database import database_manager
    return ProductionAlertService(database_manager, settings)


# ================================================================
# DASHBOARD ENDPOINTS
# ================================================================

@router.get("/tenants/{tenant_id}/production/dashboard-summary", response_model=ProductionDashboardSummary)
async def get_dashboard_summary(
    tenant_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    production_service: ProductionService = Depends(get_production_service)
):
    """Get production dashboard summary using shared auth"""
    try:
        # Verify tenant access using shared auth pattern
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        summary = await production_service.get_dashboard_summary(tenant_id)

        logger.info("Retrieved production dashboard summary",
                    tenant_id=str(tenant_id), user_id=current_user.get("user_id"))

        return summary

    # Re-raise HTTPExceptions (e.g. the 403 above) so they are not masked as 500s
    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting production dashboard summary",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get dashboard summary")


@router.get("/tenants/{tenant_id}/production/daily-requirements", response_model=DailyProductionRequirements)
async def get_daily_requirements(
    tenant_id: UUID = Path(...),
    # NOTE: the parameter name shadows datetime.date inside this function body
    date: Optional[date] = Query(None, description="Target date for production requirements"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    production_service: ProductionService = Depends(get_production_service)
):
    """Get daily production requirements"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        target_date = date or datetime.now().date()
        requirements = await production_service.calculate_daily_requirements(tenant_id, target_date)

        logger.info("Retrieved daily production requirements",
                    tenant_id=str(tenant_id), date=target_date.isoformat())

        return requirements

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting daily production requirements",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get daily requirements")


@router.get("/tenants/{tenant_id}/production/requirements", response_model=dict)
async def get_production_requirements(
    tenant_id: UUID = Path(...),
    date: Optional[date] = Query(None, description="Target date for production requirements"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    production_service: ProductionService = Depends(get_production_service)
):
    """Get production requirements for procurement planning"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        target_date = date or datetime.now().date()
        requirements = await production_service.get_production_requirements(tenant_id, target_date)

        logger.info("Retrieved production requirements for procurement",
                    tenant_id=str(tenant_id), date=target_date.isoformat())

        return requirements

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting production requirements",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get production requirements")


# ================================================================
# PRODUCTION BATCH ENDPOINTS
# ================================================================

@router.post("/tenants/{tenant_id}/production/batches", response_model=ProductionBatchResponse)
async def create_production_batch(
    batch_data: ProductionBatchCreate,
    tenant_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    production_service: ProductionService = Depends(get_production_service)
):
    """Create a new production batch"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        batch = await production_service.create_production_batch(tenant_id, batch_data)

        logger.info("Created production batch",
                    batch_id=str(batch.id), tenant_id=str(tenant_id))

        return ProductionBatchResponse.model_validate(batch)

    except HTTPException:
        raise
    except ValueError as e:
        logger.warning("Invalid batch data", error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        logger.error("Error creating production batch",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to create production batch")


@router.get("/tenants/{tenant_id}/production/batches/active", response_model=ProductionBatchListResponse)
async def get_active_batches(
    tenant_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    db=Depends(get_db)
):
    """Get currently active production batches"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        from app.repositories.production_batch_repository import ProductionBatchRepository
        batch_repo = ProductionBatchRepository(db)

        batches = await batch_repo.get_active_batches(str(tenant_id))
        batch_responses = [ProductionBatchResponse.model_validate(batch) for batch in batches]

        logger.info("Retrieved active production batches",
                    count=len(batches), tenant_id=str(tenant_id))

        return ProductionBatchListResponse(
            batches=batch_responses,
            total_count=len(batches),
            page=1,
            page_size=len(batches)
        )

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting active batches",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get active batches")


@router.get("/tenants/{tenant_id}/production/batches/{batch_id}", response_model=ProductionBatchResponse)
async def get_batch_details(
    tenant_id: UUID = Path(...),
    batch_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    db=Depends(get_db)
):
    """Get detailed information about a production batch"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        from app.repositories.production_batch_repository import ProductionBatchRepository
        batch_repo = ProductionBatchRepository(db)

        batch = await batch_repo.get(batch_id)
        if not batch or str(batch.tenant_id) != str(tenant_id):
            raise HTTPException(status_code=404, detail="Production batch not found")

        logger.info("Retrieved production batch details",
                    batch_id=str(batch_id), tenant_id=str(tenant_id))

        return ProductionBatchResponse.model_validate(batch)

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting batch details",
                     error=str(e), batch_id=str(batch_id), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get batch details")


@router.put("/tenants/{tenant_id}/production/batches/{batch_id}/status", response_model=ProductionBatchResponse)
async def update_batch_status(
    status_update: ProductionBatchStatusUpdate,
    tenant_id: UUID = Path(...),
    batch_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    production_service: ProductionService = Depends(get_production_service)
):
    """Update production batch status"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        batch = await production_service.update_batch_status(tenant_id, batch_id, status_update)

        logger.info("Updated production batch status",
                    batch_id=str(batch_id),
                    new_status=status_update.status.value,
                    tenant_id=str(tenant_id))

        return ProductionBatchResponse.model_validate(batch)

    except HTTPException:
        raise
    except ValueError as e:
        logger.warning("Invalid status update", error=str(e), batch_id=str(batch_id))
        raise HTTPException(status_code=400, detail=str(e))
    except Exception as e:
        logger.error("Error updating batch status",
                     error=str(e), batch_id=str(batch_id), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to update batch status")


# ================================================================
# PRODUCTION SCHEDULE ENDPOINTS
# ================================================================

@router.get("/tenants/{tenant_id}/production/schedule", response_model=dict)
async def get_production_schedule(
    tenant_id: UUID = Path(...),
    start_date: Optional[date] = Query(None, description="Start date for schedule"),
    end_date: Optional[date] = Query(None, description="End date for schedule"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    db=Depends(get_db)
):
    """Get production schedule for a date range"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        # Default to the next 7 days if no dates are provided
        if not start_date:
            start_date = datetime.now().date()
        if not end_date:
            end_date = start_date + timedelta(days=7)

        from app.repositories.production_schedule_repository import ProductionScheduleRepository
        schedule_repo = ProductionScheduleRepository(db)

        schedules = await schedule_repo.get_schedules_by_date_range(
            str(tenant_id), start_date, end_date
        )

        schedule_data = {
            "start_date": start_date.isoformat(),
            "end_date": end_date.isoformat(),
            "schedules": [
                {
                    "id": str(schedule.id),
                    "date": schedule.schedule_date.isoformat(),
                    "shift_start": schedule.shift_start.isoformat(),
                    "shift_end": schedule.shift_end.isoformat(),
                    "capacity_utilization": schedule.utilization_percentage,
                    "batches_planned": schedule.total_batches_planned,
                    "is_finalized": schedule.is_finalized
                }
                for schedule in schedules
            ],
            "total_schedules": len(schedules)
        }

        logger.info("Retrieved production schedule",
                    tenant_id=str(tenant_id),
                    start_date=start_date.isoformat(),
                    end_date=end_date.isoformat(),
                    schedules_count=len(schedules))

        return schedule_data

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting production schedule",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get production schedule")


# ================================================================
# ALERTS ENDPOINTS
# ================================================================

@router.get("/tenants/{tenant_id}/production/alerts", response_model=ProductionAlertListResponse)
async def get_production_alerts(
    tenant_id: UUID = Path(...),
    active_only: bool = Query(True, description="Return only active alerts"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    alert_service: ProductionAlertService = Depends(get_production_alert_service)
):
    """Get production-related alerts"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        if active_only:
            alerts = await alert_service.get_active_alerts(tenant_id)
        else:
            # Get all alerts (would need an additional repo method)
            alerts = await alert_service.get_active_alerts(tenant_id)

        alert_responses = [ProductionAlertResponse.model_validate(alert) for alert in alerts]

        logger.info("Retrieved production alerts",
                    count=len(alerts), tenant_id=str(tenant_id))

        return ProductionAlertListResponse(
            alerts=alert_responses,
            total_count=len(alerts),
            page=1,
            page_size=len(alerts)
        )

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting production alerts",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get production alerts")


@router.post("/tenants/{tenant_id}/production/alerts/{alert_id}/acknowledge", response_model=ProductionAlertResponse)
async def acknowledge_alert(
    tenant_id: UUID = Path(...),
    alert_id: UUID = Path(...),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    alert_service: ProductionAlertService = Depends(get_production_alert_service)
):
    """Acknowledge a production-related alert"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        acknowledged_by = current_user.get("email", "unknown_user")
        alert = await alert_service.acknowledge_alert(tenant_id, alert_id, acknowledged_by)

        logger.info("Acknowledged production alert",
                    alert_id=str(alert_id),
                    acknowledged_by=acknowledged_by,
                    tenant_id=str(tenant_id))

        return ProductionAlertResponse.model_validate(alert)

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error acknowledging production alert",
                     error=str(e), alert_id=str(alert_id), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to acknowledge alert")


# ================================================================
# CAPACITY MANAGEMENT ENDPOINTS
# ================================================================

@router.get("/tenants/{tenant_id}/production/capacity/status", response_model=dict)
async def get_capacity_status(
    tenant_id: UUID = Path(...),
    date: Optional[date] = Query(None, description="Date for capacity status"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    db=Depends(get_db)
):
    """Get production capacity status for a specific date"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        target_date = date or datetime.now().date()

        from app.repositories.production_capacity_repository import ProductionCapacityRepository
        capacity_repo = ProductionCapacityRepository(db)

        capacity_summary = await capacity_repo.get_capacity_utilization_summary(
            str(tenant_id), target_date, target_date
        )

        logger.info("Retrieved capacity status",
                    tenant_id=str(tenant_id), date=target_date.isoformat())

        return capacity_summary

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting capacity status",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get capacity status")


# ================================================================
# METRICS AND ANALYTICS ENDPOINTS
# ================================================================

@router.get("/tenants/{tenant_id}/production/metrics/yield", response_model=dict)
async def get_yield_metrics(
    tenant_id: UUID = Path(...),
    start_date: date = Query(..., description="Start date for metrics"),
    end_date: date = Query(..., description="End date for metrics"),
    current_tenant: str = Depends(get_current_tenant_id_dep),
    current_user: dict = Depends(get_current_user_dep),
    db=Depends(get_db)
):
    """Get production yield metrics for analysis"""
    try:
        if str(tenant_id) != current_tenant:
            raise HTTPException(status_code=403, detail="Access denied to this tenant")

        from app.repositories.production_batch_repository import ProductionBatchRepository
        batch_repo = ProductionBatchRepository(db)

        metrics = await batch_repo.get_production_metrics(str(tenant_id), start_date, end_date)

        logger.info("Retrieved yield metrics",
                    tenant_id=str(tenant_id),
                    start_date=start_date.isoformat(),
                    end_date=end_date.isoformat())

        return metrics

    except HTTPException:
        raise
    except Exception as e:
        logger.error("Error getting yield metrics",
                     error=str(e), tenant_id=str(tenant_id))
        raise HTTPException(status_code=500, detail="Failed to get yield metrics")
services/production/app/core/__init__.py (new file, 6 lines)
@@ -0,0 +1,6 @@
# ================================================================
# services/production/app/core/__init__.py
# ================================================================
"""
Core configuration and database setup
"""
services/production/app/core/config.py (new file, 92 lines)
@@ -0,0 +1,92 @@
# ================================================================
# PRODUCTION SERVICE CONFIGURATION
# services/production/app/core/config.py
# ================================================================

"""
Production service configuration
Production planning and batch management
"""

from shared.config.base import BaseServiceSettings
import os


class ProductionSettings(BaseServiceSettings):
    """Production service specific settings"""

    # Service Identity
    APP_NAME: str = "Production Service"
    SERVICE_NAME: str = "production-service"
    VERSION: str = "1.0.0"
    DESCRIPTION: str = "Production planning and batch management"

    # Database Configuration
    DATABASE_URL: str = os.getenv(
        "PRODUCTION_DATABASE_URL",
        "postgresql+asyncpg://production_user:production_pass123@production-db:5432/production_db"
    )

    # Redis Database (for production queues and caching)
    REDIS_DB: int = 3

    # Service URLs for communication
    GATEWAY_URL: str = os.getenv("GATEWAY_URL", "http://gateway:8080")
    ORDERS_SERVICE_URL: str = os.getenv("ORDERS_SERVICE_URL", "http://orders:8000")
    INVENTORY_SERVICE_URL: str = os.getenv("INVENTORY_SERVICE_URL", "http://inventory:8000")
    RECIPES_SERVICE_URL: str = os.getenv("RECIPES_SERVICE_URL", "http://recipes:8000")
    SALES_SERVICE_URL: str = os.getenv("SALES_SERVICE_URL", "http://sales:8000")
    FORECASTING_SERVICE_URL: str = os.getenv("FORECASTING_SERVICE_URL", "http://forecasting:8000")

    # Production Planning Configuration
    PLANNING_HORIZON_DAYS: int = int(os.getenv("PLANNING_HORIZON_DAYS", "7"))
    MINIMUM_BATCH_SIZE: float = float(os.getenv("MINIMUM_BATCH_SIZE", "1.0"))
    MAXIMUM_BATCH_SIZE: float = float(os.getenv("MAXIMUM_BATCH_SIZE", "100.0"))
    PRODUCTION_BUFFER_PERCENTAGE: float = float(os.getenv("PRODUCTION_BUFFER_PERCENTAGE", "10.0"))

    # Capacity Management (the targets and thresholds below are ratios, not percentages)
    DEFAULT_WORKING_HOURS_PER_DAY: int = int(os.getenv("DEFAULT_WORKING_HOURS_PER_DAY", "12"))
    MAX_OVERTIME_HOURS: int = int(os.getenv("MAX_OVERTIME_HOURS", "4"))
    CAPACITY_UTILIZATION_TARGET: float = float(os.getenv("CAPACITY_UTILIZATION_TARGET", "0.85"))
    CAPACITY_WARNING_THRESHOLD: float = float(os.getenv("CAPACITY_WARNING_THRESHOLD", "0.95"))

    # Quality Control
    QUALITY_CHECK_ENABLED: bool = os.getenv("QUALITY_CHECK_ENABLED", "true").lower() == "true"
    MINIMUM_YIELD_PERCENTAGE: float = float(os.getenv("MINIMUM_YIELD_PERCENTAGE", "85.0"))
    QUALITY_SCORE_THRESHOLD: float = float(os.getenv("QUALITY_SCORE_THRESHOLD", "8.0"))

    # Batch Management
    BATCH_AUTO_NUMBERING: bool = os.getenv("BATCH_AUTO_NUMBERING", "true").lower() == "true"
    BATCH_NUMBER_PREFIX: str = os.getenv("BATCH_NUMBER_PREFIX", "PROD")
    BATCH_TRACKING_ENABLED: bool = os.getenv("BATCH_TRACKING_ENABLED", "true").lower() == "true"

    # Production Scheduling
    SCHEDULE_OPTIMIZATION_ENABLED: bool = os.getenv("SCHEDULE_OPTIMIZATION_ENABLED", "true").lower() == "true"
    PREP_TIME_BUFFER_MINUTES: int = int(os.getenv("PREP_TIME_BUFFER_MINUTES", "30"))
    CLEANUP_TIME_BUFFER_MINUTES: int = int(os.getenv("CLEANUP_TIME_BUFFER_MINUTES", "15"))

    # Business Rules for Bakery Operations
    BUSINESS_HOUR_START: int = 6    # 6 AM - early start for fresh bread
    BUSINESS_HOUR_END: int = 22     # 10 PM
    PEAK_PRODUCTION_HOURS_START: int = 4   # 4 AM
    PEAK_PRODUCTION_HOURS_END: int = 10    # 10 AM

    # Weekend and Holiday Adjustments
    WEEKEND_PRODUCTION_FACTOR: float = float(os.getenv("WEEKEND_PRODUCTION_FACTOR", "0.7"))
    HOLIDAY_PRODUCTION_FACTOR: float = float(os.getenv("HOLIDAY_PRODUCTION_FACTOR", "0.3"))
    SPECIAL_EVENT_PRODUCTION_FACTOR: float = float(os.getenv("SPECIAL_EVENT_PRODUCTION_FACTOR", "1.5"))

    # Alert Thresholds
    CAPACITY_EXCEEDED_THRESHOLD: float = float(os.getenv("CAPACITY_EXCEEDED_THRESHOLD", "1.0"))
    PRODUCTION_DELAY_THRESHOLD_MINUTES: int = int(os.getenv("PRODUCTION_DELAY_THRESHOLD_MINUTES", "60"))
    LOW_YIELD_ALERT_THRESHOLD: float = float(os.getenv("LOW_YIELD_ALERT_THRESHOLD", "0.80"))
    URGENT_ORDER_THRESHOLD_HOURS: int = int(os.getenv("URGENT_ORDER_THRESHOLD_HOURS", "4"))

    # Cost Management
    COST_TRACKING_ENABLED: bool = os.getenv("COST_TRACKING_ENABLED", "true").lower() == "true"
    LABOR_COST_PER_HOUR: float = float(os.getenv("LABOR_COST_PER_HOUR", "15.0"))
    OVERHEAD_COST_PERCENTAGE: float = float(os.getenv("OVERHEAD_COST_PERCENTAGE", "20.0"))

    # Integration Settings
    INVENTORY_INTEGRATION_ENABLED: bool = os.getenv("INVENTORY_INTEGRATION_ENABLED", "true").lower() == "true"
    AUTOMATIC_INGREDIENT_RESERVATION: bool = os.getenv("AUTOMATIC_INGREDIENT_RESERVATION", "true").lower() == "true"
    REAL_TIME_INVENTORY_UPDATES: bool = os.getenv("REAL_TIME_INVENTORY_UPDATES", "true").lower() == "true"


settings = ProductionSettings()
services/production/app/core/database.py (new file, 51 lines)
@@ -0,0 +1,51 @@
# ================================================================
# services/production/app/core/database.py
# ================================================================
"""
Database configuration for production service
"""

import structlog

from shared.database import DatabaseManager, create_database_manager
from shared.database.base import Base
from shared.database.transactions import TransactionManager
from app.core.config import settings

logger = structlog.get_logger()

# Create database manager following shared pattern
database_manager = create_database_manager(
    settings.DATABASE_URL,
    settings.SERVICE_NAME
)

# Transaction manager for the service
transaction_manager = TransactionManager(database_manager)


# Use exactly the same pattern as the training/forecasting services
async def get_db():
    """Database dependency"""
    async with database_manager.get_session() as db:
        yield db


def get_db_transaction():
    """Get database transaction manager"""
    return database_manager.get_transaction()


async def get_db_health():
    """Check database health"""
    try:
        health_status = await database_manager.health_check()
        return health_status.get("healthy", False)
    except Exception as e:
        logger.error(f"Database health check failed: {e}")
        return False


async def init_database():
    """Initialize database tables"""
    try:
        await database_manager.create_tables()
        logger.info("Production service database initialized successfully")
    except Exception as e:
        logger.error(f"Failed to initialize database: {e}")
        raise
services/production/app/main.py (new file, 124 lines)
@@ -0,0 +1,124 @@
# ================================================================
# services/production/app/main.py
# ================================================================
"""
Production Service - FastAPI Application
Production planning and batch management service
"""

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
import structlog

from app.core.config import settings
from app.core.database import init_database, get_db_health
from app.api.production import router as production_router

# Configure logging
logger = structlog.get_logger()


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Manage application lifespan events"""
    # Startup
    try:
        await init_database()
        logger.info("Production service started successfully")
    except Exception as e:
        logger.error("Failed to initialize production service", error=str(e))
        raise

    yield

    # Shutdown
    logger.info("Production service shutting down")


# Create FastAPI application
app = FastAPI(
    title=settings.APP_NAME,
    description=settings.DESCRIPTION,
    version=settings.VERSION,
    lifespan=lifespan
)

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Configure based on environment
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Include routers
app.include_router(production_router, prefix="/api/v1")


@app.get("/health")
async def health_check():
    """Health check endpoint"""
    try:
        db_healthy = await get_db_health()

        return {
            "status": "healthy" if db_healthy else "unhealthy",
            "service": settings.SERVICE_NAME,
            "version": settings.VERSION,
            "database": "connected" if db_healthy else "disconnected"
        }

    except Exception as e:
        logger.error("Health check failed", error=str(e))
        return {
            "status": "unhealthy",
            "service": settings.SERVICE_NAME,
            "version": settings.VERSION,
            "error": str(e)
        }


@app.get("/")
async def root():
    """Root endpoint"""
    return {
        "service": settings.APP_NAME,
        "version": settings.VERSION,
        "description": settings.DESCRIPTION,
        "status": "running"
    }


@app.middleware("http")
async def logging_middleware(request: Request, call_next):
    """Add request logging middleware"""
    import time

    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time

    logger.info("HTTP request processed",
                method=request.method,
                url=str(request.url),
                status_code=response.status_code,
                process_time=round(process_time, 4))

    return response


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(
        "app.main:app",  # module path matches how the Dockerfile starts the service
        host="0.0.0.0",
        port=8000,
        reload=settings.DEBUG
    )
services/production/app/models/__init__.py (new file, 22 lines)
@@ -0,0 +1,22 @@
# ================================================================
# services/production/app/models/__init__.py
# ================================================================
"""
Production service models
"""

from .production import (
    ProductionBatch,
    ProductionSchedule,
    ProductionCapacity,
    QualityCheck,
    ProductionAlert
)

__all__ = [
    "ProductionBatch",
    "ProductionSchedule",
    "ProductionCapacity",
    "QualityCheck",
    "ProductionAlert"
]
services/production/app/models/production.py (new file, 471 lines)
@@ -0,0 +1,471 @@
|
||||
# ================================================================
|
||||
# services/production/app/models/production.py
|
||||
# ================================================================
|
||||
"""
|
||||
Production models for the production service
|
||||
"""
|
||||
|
||||
from sqlalchemy import Column, String, Integer, Float, DateTime, Boolean, Text, JSON, Enum as SQLEnum
|
||||
from sqlalchemy.dialects.postgresql import UUID
|
||||
from sqlalchemy.sql import func
|
||||
from datetime import datetime, timezone
|
||||
from typing import Dict, Any, Optional
|
||||
import uuid
|
||||
import enum
|
||||
|
||||
from shared.database.base import Base
|
||||
|
||||
|
||||
class ProductionStatus(str, enum.Enum):
|
||||
"""Production batch status enumeration"""
|
||||
PENDING = "pending"
|
||||
IN_PROGRESS = "in_progress"
|
||||
COMPLETED = "completed"
|
||||
CANCELLED = "cancelled"
|
||||
ON_HOLD = "on_hold"
|
||||
QUALITY_CHECK = "quality_check"
|
||||
FAILED = "failed"
|
||||
|
||||
|
||||
class ProductionPriority(str, enum.Enum):
|
||||
"""Production priority levels"""
|
||||
LOW = "low"
|
||||
MEDIUM = "medium"
|
||||
HIGH = "high"
|
||||
URGENT = "urgent"
|
||||
|
||||
|
||||
class AlertSeverity(str, enum.Enum):
|
||||
"""Alert severity levels"""
|
||||
LOW = "low"
|
||||
MEDIUM = "medium"
|
||||
HIGH = "high"
|
||||
CRITICAL = "critical"
|
||||
|
||||
|
||||
class ProductionBatch(Base):
|
||||
"""Production batch model for tracking individual production runs"""
|
||||
__tablename__ = "production_batches"
|
||||
|
||||
# Primary identification
|
||||
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
|
||||
tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)
|
||||
batch_number = Column(String(50), nullable=False, unique=True, index=True)
|
||||
|
||||
# Product and recipe information
|
||||
product_id = Column(UUID(as_uuid=True), nullable=False, index=True) # Reference to inventory/recipes
|
||||
product_name = Column(String(255), nullable=False)
|
||||
recipe_id = Column(UUID(as_uuid=True), nullable=True)
|
||||
|
||||
# Production planning
|
||||
planned_start_time = Column(DateTime(timezone=True), nullable=False)
|
||||
planned_end_time = Column(DateTime(timezone=True), nullable=False)
|
||||
planned_quantity = Column(Float, nullable=False)
|
||||
planned_duration_minutes = Column(Integer, nullable=False)
|
||||
|
||||
# Actual production tracking
|
||||
actual_start_time = Column(DateTime(timezone=True), nullable=True)
|
||||
actual_end_time = Column(DateTime(timezone=True), nullable=True)
|
||||
actual_quantity = Column(Float, nullable=True)
|
||||
actual_duration_minutes = Column(Integer, nullable=True)
|
||||
|
||||
# Status and priority
|
||||
status = Column(SQLEnum(ProductionStatus), nullable=False, default=ProductionStatus.PENDING, index=True)
|
||||
priority = Column(SQLEnum(ProductionPriority), nullable=False, default=ProductionPriority.MEDIUM)
|
||||
|
||||
# Cost tracking
|
||||
estimated_cost = Column(Float, nullable=True)
|
||||
actual_cost = Column(Float, nullable=True)
|
||||
labor_cost = Column(Float, nullable=True)
|
||||
material_cost = Column(Float, nullable=True)
|
||||
overhead_cost = Column(Float, nullable=True)
|
||||
|
||||
# Quality metrics
|
||||
yield_percentage = Column(Float, nullable=True) # actual/planned quantity
|
||||
quality_score = Column(Float, nullable=True)
|
||||
waste_quantity = Column(Float, nullable=True)
|
||||
defect_quantity = Column(Float, nullable=True)
|
||||
|
||||
# Equipment and resources
|
||||
equipment_used = Column(JSON, nullable=True) # List of equipment IDs
|
||||
staff_assigned = Column(JSON, nullable=True) # List of staff IDs
|
||||
station_id = Column(String(50), nullable=True)
|
||||
|
||||
# Business context
|
||||
order_id = Column(UUID(as_uuid=True), nullable=True) # Associated customer order
|
||||
forecast_id = Column(UUID(as_uuid=True), nullable=True) # Associated demand forecast
|
||||
is_rush_order = Column(Boolean, default=False)
|
||||
is_special_recipe = Column(Boolean, default=False)
|
||||
|
||||
# Notes and tracking
|
||||
production_notes = Column(Text, nullable=True)
|
||||
quality_notes = Column(Text, nullable=True)
|
||||
delay_reason = Column(String(255), nullable=True)
|
||||
cancellation_reason = Column(String(255), nullable=True)
|
||||
|
||||
# Timestamps
|
||||
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
|
||||
completed_at = Column(DateTime(timezone=True), nullable=True)
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary following shared pattern"""
|
||||
return {
|
||||
"id": str(self.id),
|
||||
"tenant_id": str(self.tenant_id),
|
||||
"batch_number": self.batch_number,
|
||||
"product_id": str(self.product_id),
|
||||
"product_name": self.product_name,
|
||||
"recipe_id": str(self.recipe_id) if self.recipe_id else None,
|
||||
"planned_start_time": self.planned_start_time.isoformat() if self.planned_start_time else None,
|
||||
"planned_end_time": self.planned_end_time.isoformat() if self.planned_end_time else None,
|
||||
"planned_quantity": self.planned_quantity,
|
||||
"planned_duration_minutes": self.planned_duration_minutes,
|
||||
"actual_start_time": self.actual_start_time.isoformat() if self.actual_start_time else None,
|
||||
"actual_end_time": self.actual_end_time.isoformat() if self.actual_end_time else None,
|
||||
"actual_quantity": self.actual_quantity,
|
||||
"actual_duration_minutes": self.actual_duration_minutes,
|
||||
"status": self.status.value if self.status else None,
|
||||
"priority": self.priority.value if self.priority else None,
|
||||
"estimated_cost": self.estimated_cost,
|
||||
"actual_cost": self.actual_cost,
|
||||
"labor_cost": self.labor_cost,
|
||||
"material_cost": self.material_cost,
|
||||
"overhead_cost": self.overhead_cost,
|
||||
"yield_percentage": self.yield_percentage,
|
||||
"quality_score": self.quality_score,
|
||||
"waste_quantity": self.waste_quantity,
|
||||
"defect_quantity": self.defect_quantity,
|
||||
"equipment_used": self.equipment_used,
|
||||
"staff_assigned": self.staff_assigned,
|
||||
"station_id": self.station_id,
|
||||
"order_id": str(self.order_id) if self.order_id else None,
|
||||
"forecast_id": str(self.forecast_id) if self.forecast_id else None,
|
||||
"is_rush_order": self.is_rush_order,
|
||||
"is_special_recipe": self.is_special_recipe,
|
||||
"production_notes": self.production_notes,
|
||||
"quality_notes": self.quality_notes,
|
||||
"delay_reason": self.delay_reason,
|
||||
"cancellation_reason": self.cancellation_reason,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
"completed_at": self.completed_at.isoformat() if self.completed_at else None,
|
||||
}
|
||||
|
||||
|
||||
class ProductionSchedule(Base):
|
||||
"""Production schedule model for planning and tracking daily production"""
|
||||
__tablename__ = "production_schedules"
|
||||
|
||||
# Primary identification
|
||||
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
|
||||
tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)
|
||||
|
||||
# Schedule information
|
||||
schedule_date = Column(DateTime(timezone=True), nullable=False, index=True)
|
||||
shift_start = Column(DateTime(timezone=True), nullable=False)
|
||||
shift_end = Column(DateTime(timezone=True), nullable=False)
|
||||
|
||||
# Capacity planning
|
||||
total_capacity_hours = Column(Float, nullable=False)
|
||||
planned_capacity_hours = Column(Float, nullable=False)
|
||||
actual_capacity_hours = Column(Float, nullable=True)
|
||||
overtime_hours = Column(Float, nullable=True, default=0.0)
|
||||
|
||||
# Staff and equipment
|
||||
staff_count = Column(Integer, nullable=False)
|
||||
equipment_capacity = Column(JSON, nullable=True) # Equipment availability
|
||||
station_assignments = Column(JSON, nullable=True) # Station schedules
|
||||
|
||||
# Production metrics
|
||||
total_batches_planned = Column(Integer, nullable=False, default=0)
|
||||
total_batches_completed = Column(Integer, nullable=True, default=0)
|
||||
total_quantity_planned = Column(Float, nullable=False, default=0.0)
|
||||
total_quantity_produced = Column(Float, nullable=True, default=0.0)
|
||||
|
||||
# Status tracking
|
||||
is_finalized = Column(Boolean, default=False)
|
||||
is_active = Column(Boolean, default=True)
|
||||
|
||||
# Performance metrics
|
||||
efficiency_percentage = Column(Float, nullable=True)
|
||||
utilization_percentage = Column(Float, nullable=True)
|
||||
on_time_completion_rate = Column(Float, nullable=True)
|
||||
|
||||
# Notes and adjustments
|
||||
schedule_notes = Column(Text, nullable=True)
|
||||
schedule_adjustments = Column(JSON, nullable=True) # Track changes made
|
||||
|
||||
# Timestamps
|
||||
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
|
||||
finalized_at = Column(DateTime(timezone=True), nullable=True)
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary following shared pattern"""
|
||||
return {
|
||||
"id": str(self.id),
|
||||
"tenant_id": str(self.tenant_id),
|
||||
"schedule_date": self.schedule_date.isoformat() if self.schedule_date else None,
|
||||
"shift_start": self.shift_start.isoformat() if self.shift_start else None,
|
||||
"shift_end": self.shift_end.isoformat() if self.shift_end else None,
|
||||
"total_capacity_hours": self.total_capacity_hours,
|
||||
"planned_capacity_hours": self.planned_capacity_hours,
|
||||
"actual_capacity_hours": self.actual_capacity_hours,
|
||||
"overtime_hours": self.overtime_hours,
|
||||
"staff_count": self.staff_count,
|
||||
"equipment_capacity": self.equipment_capacity,
|
||||
"station_assignments": self.station_assignments,
|
||||
"total_batches_planned": self.total_batches_planned,
|
||||
"total_batches_completed": self.total_batches_completed,
|
||||
"total_quantity_planned": self.total_quantity_planned,
|
||||
"total_quantity_produced": self.total_quantity_produced,
|
||||
"is_finalized": self.is_finalized,
|
||||
"is_active": self.is_active,
|
||||
"efficiency_percentage": self.efficiency_percentage,
|
||||
"utilization_percentage": self.utilization_percentage,
|
||||
"on_time_completion_rate": self.on_time_completion_rate,
|
||||
"schedule_notes": self.schedule_notes,
|
||||
"schedule_adjustments": self.schedule_adjustments,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
"finalized_at": self.finalized_at.isoformat() if self.finalized_at else None,
|
||||
}
|
||||
|
||||
|
||||
class ProductionCapacity(Base):
|
||||
"""Production capacity model for tracking equipment and resource availability"""
|
||||
__tablename__ = "production_capacity"
|
||||
|
||||
# Primary identification
|
||||
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
|
||||
tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)
|
||||
|
||||
# Capacity definition
|
||||
resource_type = Column(String(50), nullable=False) # equipment, staff, station
|
||||
resource_id = Column(String(100), nullable=False)
|
||||
resource_name = Column(String(255), nullable=False)
|
||||
|
||||
# Time period
|
||||
date = Column(DateTime(timezone=True), nullable=False, index=True)
|
||||
start_time = Column(DateTime(timezone=True), nullable=False)
|
||||
end_time = Column(DateTime(timezone=True), nullable=False)
|
||||
|
||||
# Capacity metrics
|
||||
total_capacity_units = Column(Float, nullable=False) # Total available capacity
|
||||
allocated_capacity_units = Column(Float, nullable=False, default=0.0)
|
||||
remaining_capacity_units = Column(Float, nullable=False)
|
||||
|
||||
# Status
|
||||
is_available = Column(Boolean, default=True)
|
||||
is_maintenance = Column(Boolean, default=False)
|
||||
is_reserved = Column(Boolean, default=False)
|
||||
|
||||
# Equipment specific
|
||||
equipment_type = Column(String(100), nullable=True)
|
||||
max_batch_size = Column(Float, nullable=True)
|
||||
min_batch_size = Column(Float, nullable=True)
|
||||
setup_time_minutes = Column(Integer, nullable=True)
|
||||
cleanup_time_minutes = Column(Integer, nullable=True)
|
||||
|
||||
# Performance tracking
|
||||
efficiency_rating = Column(Float, nullable=True)
|
||||
maintenance_status = Column(String(50), nullable=True)
|
||||
last_maintenance_date = Column(DateTime(timezone=True), nullable=True)
|
||||
|
||||
# Notes
|
||||
notes = Column(Text, nullable=True)
|
||||
restrictions = Column(JSON, nullable=True) # Product type restrictions, etc.
|
||||
|
||||
# Timestamps
|
||||
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||
updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary following shared pattern"""
|
||||
return {
|
||||
"id": str(self.id),
|
||||
"tenant_id": str(self.tenant_id),
|
||||
"resource_type": self.resource_type,
|
||||
"resource_id": self.resource_id,
|
||||
"resource_name": self.resource_name,
|
||||
"date": self.date.isoformat() if self.date else None,
|
||||
"start_time": self.start_time.isoformat() if self.start_time else None,
|
||||
"end_time": self.end_time.isoformat() if self.end_time else None,
|
||||
"total_capacity_units": self.total_capacity_units,
|
||||
"allocated_capacity_units": self.allocated_capacity_units,
|
||||
"remaining_capacity_units": self.remaining_capacity_units,
|
||||
"is_available": self.is_available,
|
||||
"is_maintenance": self.is_maintenance,
|
||||
"is_reserved": self.is_reserved,
|
||||
"equipment_type": self.equipment_type,
|
||||
"max_batch_size": self.max_batch_size,
|
||||
"min_batch_size": self.min_batch_size,
|
||||
"setup_time_minutes": self.setup_time_minutes,
|
||||
"cleanup_time_minutes": self.cleanup_time_minutes,
|
||||
"efficiency_rating": self.efficiency_rating,
|
||||
"maintenance_status": self.maintenance_status,
|
||||
"last_maintenance_date": self.last_maintenance_date.isoformat() if self.last_maintenance_date else None,
|
||||
"notes": self.notes,
|
||||
"restrictions": self.restrictions,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
}
|
||||
|
||||
|
||||
class QualityCheck(Base):
    """Quality check model for tracking production quality metrics"""
    __tablename__ = "quality_checks"

    # Primary identification
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)
    batch_id = Column(UUID(as_uuid=True), nullable=False, index=True)  # FK to ProductionBatch

    # Check information
    check_type = Column(String(50), nullable=False)  # visual, weight, temperature, etc.
    check_time = Column(DateTime(timezone=True), nullable=False)
    checker_id = Column(String(100), nullable=True)  # Staff member who performed the check

    # Quality metrics
    quality_score = Column(Float, nullable=False)  # 1-10 scale
    pass_fail = Column(Boolean, nullable=False)
    defect_count = Column(Integer, nullable=False, default=0)
    defect_types = Column(JSON, nullable=True)  # List of defect categories

    # Measurements
    measured_weight = Column(Float, nullable=True)
    measured_temperature = Column(Float, nullable=True)
    measured_moisture = Column(Float, nullable=True)
    measured_dimensions = Column(JSON, nullable=True)

    # Standards comparison
    target_weight = Column(Float, nullable=True)
    target_temperature = Column(Float, nullable=True)
    target_moisture = Column(Float, nullable=True)
    tolerance_percentage = Column(Float, nullable=True)

    # Results
    within_tolerance = Column(Boolean, nullable=True)
    corrective_action_needed = Column(Boolean, default=False)
    corrective_actions = Column(JSON, nullable=True)

    # Notes and documentation
    check_notes = Column(Text, nullable=True)
    photos_urls = Column(JSON, nullable=True)  # URLs to quality check photos
    certificate_url = Column(String(500), nullable=True)

    # Timestamps
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary following shared pattern"""
        return {
            "id": str(self.id),
            "tenant_id": str(self.tenant_id),
            "batch_id": str(self.batch_id),
            "check_type": self.check_type,
            "check_time": self.check_time.isoformat() if self.check_time else None,
            "checker_id": self.checker_id,
            "quality_score": self.quality_score,
            "pass_fail": self.pass_fail,
            "defect_count": self.defect_count,
            "defect_types": self.defect_types,
            "measured_weight": self.measured_weight,
            "measured_temperature": self.measured_temperature,
            "measured_moisture": self.measured_moisture,
            "measured_dimensions": self.measured_dimensions,
            "target_weight": self.target_weight,
            "target_temperature": self.target_temperature,
            "target_moisture": self.target_moisture,
            "tolerance_percentage": self.tolerance_percentage,
            "within_tolerance": self.within_tolerance,
            "corrective_action_needed": self.corrective_action_needed,
            "corrective_actions": self.corrective_actions,
            "check_notes": self.check_notes,
            "photos_urls": self.photos_urls,
            "certificate_url": self.certificate_url,
            "created_at": self.created_at.isoformat() if self.created_at else None,
            "updated_at": self.updated_at.isoformat() if self.updated_at else None,
        }

class ProductionAlert(Base):
    """Production alert model for tracking production issues and notifications"""
    __tablename__ = "production_alerts"

    # Primary identification
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    tenant_id = Column(UUID(as_uuid=True), nullable=False, index=True)

    # Alert classification
    alert_type = Column(String(50), nullable=False, index=True)  # capacity_exceeded, delay, quality_issue, etc.
    severity = Column(SQLEnum(AlertSeverity), nullable=False, default=AlertSeverity.MEDIUM)
    title = Column(String(255), nullable=False)
    message = Column(Text, nullable=False)

    # Context
    batch_id = Column(UUID(as_uuid=True), nullable=True, index=True)  # Associated batch, if applicable
    schedule_id = Column(UUID(as_uuid=True), nullable=True, index=True)  # Associated schedule, if applicable
    source_system = Column(String(50), nullable=False, default="production")

    # Status
    is_active = Column(Boolean, default=True)
    is_acknowledged = Column(Boolean, default=False)
    is_resolved = Column(Boolean, default=False)

    # Actions and recommendations
    recommended_actions = Column(JSON, nullable=True)  # List of suggested actions
    actions_taken = Column(JSON, nullable=True)  # List of actions actually taken

    # Business impact
    impact_level = Column(String(20), nullable=True)  # low, medium, high, critical
    estimated_cost_impact = Column(Float, nullable=True)
    estimated_time_impact_minutes = Column(Integer, nullable=True)

    # Resolution tracking
    acknowledged_by = Column(String(100), nullable=True)
    acknowledged_at = Column(DateTime(timezone=True), nullable=True)
    resolved_by = Column(String(100), nullable=True)
    resolved_at = Column(DateTime(timezone=True), nullable=True)
    resolution_notes = Column(Text, nullable=True)

    # Alert data
    alert_data = Column(JSON, nullable=True)  # Additional context data
    alert_metadata = Column(JSON, nullable=True)  # Metadata for the alert

    # Timestamps
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary following shared pattern"""
        return {
            "id": str(self.id),
            "tenant_id": str(self.tenant_id),
            "alert_type": self.alert_type,
            "severity": self.severity.value if self.severity else None,
            "title": self.title,
            "message": self.message,
            "batch_id": str(self.batch_id) if self.batch_id else None,
            "schedule_id": str(self.schedule_id) if self.schedule_id else None,
            "source_system": self.source_system,
            "is_active": self.is_active,
            "is_acknowledged": self.is_acknowledged,
            "is_resolved": self.is_resolved,
            "recommended_actions": self.recommended_actions,
            "actions_taken": self.actions_taken,
            "impact_level": self.impact_level,
            "estimated_cost_impact": self.estimated_cost_impact,
            "estimated_time_impact_minutes": self.estimated_time_impact_minutes,
            "acknowledged_by": self.acknowledged_by,
            "acknowledged_at": self.acknowledged_at.isoformat() if self.acknowledged_at else None,
            "resolved_by": self.resolved_by,
            "resolved_at": self.resolved_at.isoformat() if self.resolved_at else None,
            "resolution_notes": self.resolution_notes,
            "alert_data": self.alert_data,
            "alert_metadata": self.alert_metadata,
            "created_at": self.created_at.isoformat() if self.created_at else None,
            "updated_at": self.updated_at.isoformat() if self.updated_at else None,
        }
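The QualityCheck tolerance fields imply a simple pass/fail rule: a measurement is within tolerance when it deviates from its target by no more than tolerance_percentage. A minimal sketch of that check, assuming a standalone helper (the function name is illustrative, not part of the models above):

def is_within_tolerance(measured: float, target: float, tolerance_percentage: float) -> bool:
    """Return True if measured deviates from target by at most tolerance_percentage."""
    if target == 0:
        return measured == 0
    deviation = abs(measured - target) / abs(target) * 100
    return deviation <= tolerance_percentage

# Example: a 500 g target weight with 2% tolerance accepts 490-510 g
assert is_within_tolerance(505.0, 500.0, 2.0) is True
assert is_within_tolerance(489.0, 500.0, 2.0) is False
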
20
services/production/app/repositories/__init__.py
Normal file
@@ -0,0 +1,20 @@
# ================================================================
# services/production/app/repositories/__init__.py
# ================================================================
"""
Repository layer for data access
"""

from .production_batch_repository import ProductionBatchRepository
from .production_schedule_repository import ProductionScheduleRepository
from .production_capacity_repository import ProductionCapacityRepository
from .quality_check_repository import QualityCheckRepository
from .production_alert_repository import ProductionAlertRepository

__all__ = [
    "ProductionBatchRepository",
    "ProductionScheduleRepository",
    "ProductionCapacityRepository",
    "QualityCheckRepository",
    "ProductionAlertRepository"
]
221
services/production/app/repositories/base.py
Normal file
@@ -0,0 +1,221 @@
"""
|
||||
Base Repository for Production Service
|
||||
Service-specific repository base class with production utilities
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any, Type
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import text, and_, or_
|
||||
from datetime import datetime, date, timedelta
|
||||
import structlog
|
||||
|
||||
from shared.database.repository import BaseRepository
|
||||
from shared.database.exceptions import DatabaseError
|
||||
from shared.database.transactions import transactional
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionBaseRepository(BaseRepository):
|
||||
"""Base repository for production service with common production operations"""
|
||||
|
||||
def __init__(self, model: Type, session: AsyncSession, cache_ttl: Optional[int] = 300):
|
||||
# Production data is more dynamic, shorter cache time (5 minutes)
|
||||
super().__init__(model, session, cache_ttl)
|
||||
|
||||
@transactional
|
||||
async def get_by_tenant_id(self, tenant_id: str, skip: int = 0, limit: int = 100) -> List:
|
||||
"""Get records by tenant ID"""
|
||||
if hasattr(self.model, 'tenant_id'):
|
||||
return await self.get_multi(
|
||||
skip=skip,
|
||||
limit=limit,
|
||||
filters={"tenant_id": tenant_id},
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
return await self.get_multi(skip=skip, limit=limit)
|
||||
|
||||
@transactional
|
||||
async def get_by_status(
|
||||
self,
|
||||
tenant_id: str,
|
||||
status: str,
|
||||
skip: int = 0,
|
||||
limit: int = 100
|
||||
) -> List:
|
||||
"""Get records by tenant and status"""
|
||||
if hasattr(self.model, 'status'):
|
||||
return await self.get_multi(
|
||||
skip=skip,
|
||||
limit=limit,
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"status": status
|
||||
},
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
return await self.get_by_tenant_id(tenant_id, skip, limit)
|
||||
|
||||
@transactional
|
||||
async def get_by_date_range(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date,
|
||||
date_field: str = "created_at",
|
||||
skip: int = 0,
|
||||
limit: int = 100
|
||||
) -> List:
|
||||
"""Get records by tenant and date range"""
|
||||
try:
|
||||
start_datetime = datetime.combine(start_date, datetime.min.time())
|
||||
end_datetime = datetime.combine(end_date, datetime.max.time())
|
||||
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
f"{date_field}__gte": start_datetime,
|
||||
f"{date_field}__lte": end_datetime
|
||||
}
|
||||
|
||||
return await self.get_multi(
|
||||
skip=skip,
|
||||
limit=limit,
|
||||
filters=filters,
|
||||
order_by=date_field,
|
||||
order_desc=True
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error("Error fetching records by date range",
|
||||
error=str(e), tenant_id=tenant_id)
|
||||
raise DatabaseError(f"Failed to fetch records by date range: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_active_records(
|
||||
self,
|
||||
tenant_id: str,
|
||||
active_field: str = "is_active",
|
||||
skip: int = 0,
|
||||
limit: int = 100
|
||||
) -> List:
|
||||
"""Get active records for a tenant"""
|
||||
if hasattr(self.model, active_field):
|
||||
return await self.get_multi(
|
||||
skip=skip,
|
||||
limit=limit,
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
active_field: True
|
||||
},
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
return await self.get_by_tenant_id(tenant_id, skip, limit)
|
||||
|
||||
def _validate_production_data(
|
||||
self,
|
||||
data: Dict[str, Any],
|
||||
required_fields: List[str]
|
||||
) -> Dict[str, Any]:
|
||||
"""Validate production data with required fields"""
|
||||
errors = []
|
||||
|
||||
# Check required fields
|
||||
for field in required_fields:
|
||||
if field not in data or data[field] is None:
|
||||
errors.append(f"Missing required field: {field}")
|
||||
|
||||
# Validate tenant_id format
|
||||
if "tenant_id" in data:
|
||||
try:
|
||||
import uuid
|
||||
uuid.UUID(str(data["tenant_id"]))
|
||||
except (ValueError, TypeError):
|
||||
errors.append("Invalid tenant_id format")
|
||||
|
||||
# Validate datetime fields
|
||||
datetime_fields = ["planned_start_time", "planned_end_time", "actual_start_time", "actual_end_time"]
|
||||
for field in datetime_fields:
|
||||
if field in data and data[field] is not None:
|
||||
if not isinstance(data[field], (datetime, str)):
|
||||
errors.append(f"Invalid datetime format for {field}")
|
||||
|
||||
# Validate numeric fields
|
||||
numeric_fields = ["planned_quantity", "actual_quantity", "quality_score", "yield_percentage"]
|
||||
for field in numeric_fields:
|
||||
if field in data and data[field] is not None:
|
||||
try:
|
||||
float(data[field])
|
||||
if data[field] < 0:
|
||||
errors.append(f"{field} cannot be negative")
|
||||
except (ValueError, TypeError):
|
||||
errors.append(f"Invalid numeric value for {field}")
|
||||
|
||||
# Validate percentage fields (0-100)
|
||||
percentage_fields = ["yield_percentage", "efficiency_percentage", "utilization_percentage"]
|
||||
for field in percentage_fields:
|
||||
if field in data and data[field] is not None:
|
||||
try:
|
||||
value = float(data[field])
|
||||
if value < 0 or value > 100:
|
||||
errors.append(f"{field} must be between 0 and 100")
|
||||
except (ValueError, TypeError):
|
||||
pass # Already caught by numeric validation
|
||||
|
||||
return {
|
||||
"is_valid": len(errors) == 0,
|
||||
"errors": errors
|
||||
}
|
||||
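A quick sketch of how a concrete subclass is expected to use this helper; the payload values are illustrative and `repo` stands for any repository instance built on this base:

# Hypothetical payload; the required fields mirror those used by create_batch below
payload = {"tenant_id": "not-a-uuid", "planned_quantity": -5}
result = repo._validate_production_data(payload, ["tenant_id", "product_id", "planned_quantity"])
# result == {"is_valid": False, "errors": [
#     "Missing required field: product_id",
#     "Invalid tenant_id format",
#     "planned_quantity cannot be negative",
# ]}
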
    async def get_production_statistics(
        self,
        tenant_id: str,
        start_date: date,
        end_date: date
    ) -> Dict[str, Any]:
        """Get production statistics for a tenant and date range"""
        try:
            # This would need to be implemented per specific model;
            # for now, return a basic count over the date range
            records = await self.get_by_date_range(
                tenant_id, start_date, end_date, limit=1000
            )

            return {
                "total_records": len(records),
                "period_start": start_date.isoformat(),
                "period_end": end_date.isoformat(),
                "tenant_id": tenant_id
            }

        except Exception as e:
            logger.error("Error calculating production statistics",
                         error=str(e), tenant_id=tenant_id)
            raise DatabaseError(f"Failed to calculate statistics: {str(e)}")

    async def check_duplicate(
        self,
        tenant_id: str,
        unique_fields: Dict[str, Any]
    ) -> bool:
        """Check if a record with the same unique fields exists"""
        try:
            filters = {"tenant_id": tenant_id}
            filters.update(unique_fields)

            existing = await self.get_multi(
                filters=filters,
                limit=1
            )

            return len(existing) > 0

        except Exception as e:
            logger.error("Error checking for duplicates",
                         error=str(e), tenant_id=tenant_id)
            return False
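Throughout these repositories, get_multi filters use Django-style key suffixes (__gte, __lte, __lt, __in) that the shared BaseRepository is assumed to translate into SQLAlchemy comparisons. A minimal sketch of that translation, assuming plain column-attribute lookup on the model; this is not the shared library's actual code:

from sqlalchemy import Select

_OPS = {
    "gte": lambda col, v: col >= v,
    "lte": lambda col, v: col <= v,
    "lt": lambda col, v: col < v,
    "in": lambda col, v: col.in_(v),
}

def apply_filters(stmt: Select, model, filters: dict) -> Select:
    """Apply {"field__op": value} filters to a SELECT; bare keys mean equality."""
    for key, value in filters.items():
        field, _, op = key.partition("__")
        column = getattr(model, field)
        stmt = stmt.where(_OPS[op](column, value) if op else column == value)
    return stmt
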
379
services/production/app/repositories/production_alert_repository.py
Normal file
@@ -0,0 +1,379 @@
"""
|
||||
Production Alert Repository
|
||||
Repository for production alert operations
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select, and_, text, desc, func
|
||||
from datetime import datetime, timedelta, date
|
||||
from uuid import UUID
|
||||
import structlog
|
||||
|
||||
from .base import ProductionBaseRepository
|
||||
from app.models.production import ProductionAlert, AlertSeverity
|
||||
from shared.database.exceptions import DatabaseError, ValidationError
|
||||
from shared.database.transactions import transactional
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionAlertRepository(ProductionBaseRepository):
|
||||
"""Repository for production alert operations"""
|
||||
|
||||
def __init__(self, session: AsyncSession, cache_ttl: Optional[int] = 60):
|
||||
# Alerts are very dynamic, very short cache time (1 minute)
|
||||
super().__init__(ProductionAlert, session, cache_ttl)
|
||||
|
||||
@transactional
|
||||
async def create_alert(self, alert_data: Dict[str, Any]) -> ProductionAlert:
|
||||
"""Create a new production alert with validation"""
|
||||
try:
|
||||
# Validate alert data
|
||||
validation_result = self._validate_production_data(
|
||||
alert_data,
|
||||
["tenant_id", "alert_type", "title", "message"]
|
||||
)
|
||||
|
||||
if not validation_result["is_valid"]:
|
||||
raise ValidationError(f"Invalid alert data: {validation_result['errors']}")
|
||||
|
||||
# Set default values
|
||||
if "severity" not in alert_data:
|
||||
alert_data["severity"] = AlertSeverity.MEDIUM
|
||||
if "source_system" not in alert_data:
|
||||
alert_data["source_system"] = "production"
|
||||
if "is_active" not in alert_data:
|
||||
alert_data["is_active"] = True
|
||||
if "is_acknowledged" not in alert_data:
|
||||
alert_data["is_acknowledged"] = False
|
||||
if "is_resolved" not in alert_data:
|
||||
alert_data["is_resolved"] = False
|
||||
|
||||
# Create alert
|
||||
alert = await self.create(alert_data)
|
||||
|
||||
logger.info("Production alert created successfully",
|
||||
alert_id=str(alert.id),
|
||||
alert_type=alert.alert_type,
|
||||
severity=alert.severity.value if alert.severity else None,
|
||||
tenant_id=str(alert.tenant_id))
|
||||
|
||||
return alert
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error creating production alert", error=str(e))
|
||||
raise DatabaseError(f"Failed to create production alert: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_active_alerts(
|
||||
self,
|
||||
tenant_id: str,
|
||||
severity: Optional[AlertSeverity] = None
|
||||
) -> List[ProductionAlert]:
|
||||
"""Get active production alerts for a tenant"""
|
||||
try:
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
"is_active": True,
|
||||
"is_resolved": False
|
||||
}
|
||||
|
||||
if severity:
|
||||
filters["severity"] = severity
|
||||
|
||||
alerts = await self.get_multi(
|
||||
filters=filters,
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
|
||||
logger.info("Retrieved active production alerts",
|
||||
count=len(alerts),
|
||||
severity=severity.value if severity else "all",
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return alerts
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching active alerts", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch active alerts: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_alerts_by_type(
|
||||
self,
|
||||
tenant_id: str,
|
||||
alert_type: str,
|
||||
include_resolved: bool = False
|
||||
) -> List[ProductionAlert]:
|
||||
"""Get production alerts by type"""
|
||||
try:
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
"alert_type": alert_type
|
||||
}
|
||||
|
||||
if not include_resolved:
|
||||
filters["is_resolved"] = False
|
||||
|
||||
alerts = await self.get_multi(
|
||||
filters=filters,
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
|
||||
logger.info("Retrieved alerts by type",
|
||||
count=len(alerts),
|
||||
alert_type=alert_type,
|
||||
include_resolved=include_resolved,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return alerts
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching alerts by type", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch alerts by type: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_alerts_by_batch(
|
||||
self,
|
||||
tenant_id: str,
|
||||
batch_id: str
|
||||
) -> List[ProductionAlert]:
|
||||
"""Get production alerts for a specific batch"""
|
||||
try:
|
||||
alerts = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"batch_id": batch_id
|
||||
},
|
||||
order_by="created_at",
|
||||
order_desc=True
|
||||
)
|
||||
|
||||
logger.info("Retrieved alerts by batch",
|
||||
count=len(alerts),
|
||||
batch_id=batch_id,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return alerts
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching alerts by batch", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch alerts by batch: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def acknowledge_alert(
|
||||
self,
|
||||
alert_id: UUID,
|
||||
acknowledged_by: str,
|
||||
acknowledgment_notes: Optional[str] = None
|
||||
) -> ProductionAlert:
|
||||
"""Acknowledge a production alert"""
|
||||
try:
|
||||
alert = await self.get(alert_id)
|
||||
if not alert:
|
||||
raise ValidationError(f"Alert {alert_id} not found")
|
||||
|
||||
if alert.is_acknowledged:
|
||||
raise ValidationError("Alert is already acknowledged")
|
||||
|
||||
update_data = {
|
||||
"is_acknowledged": True,
|
||||
"acknowledged_by": acknowledged_by,
|
||||
"acknowledged_at": datetime.utcnow(),
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
if acknowledgment_notes:
|
||||
current_actions = alert.actions_taken or []
|
||||
current_actions.append({
|
||||
"action": "acknowledged",
|
||||
"by": acknowledged_by,
|
||||
"at": datetime.utcnow().isoformat(),
|
||||
"notes": acknowledgment_notes
|
||||
})
|
||||
update_data["actions_taken"] = current_actions
|
||||
|
||||
alert = await self.update(alert_id, update_data)
|
||||
|
||||
logger.info("Acknowledged production alert",
|
||||
alert_id=str(alert_id),
|
||||
acknowledged_by=acknowledged_by)
|
||||
|
||||
return alert
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error acknowledging alert", error=str(e))
|
||||
raise DatabaseError(f"Failed to acknowledge alert: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def resolve_alert(
|
||||
self,
|
||||
alert_id: UUID,
|
||||
resolved_by: str,
|
||||
resolution_notes: str
|
||||
) -> ProductionAlert:
|
||||
"""Resolve a production alert"""
|
||||
try:
|
||||
alert = await self.get(alert_id)
|
||||
if not alert:
|
||||
raise ValidationError(f"Alert {alert_id} not found")
|
||||
|
||||
if alert.is_resolved:
|
||||
raise ValidationError("Alert is already resolved")
|
||||
|
||||
update_data = {
|
||||
"is_resolved": True,
|
||||
"is_active": False,
|
||||
"resolved_by": resolved_by,
|
||||
"resolved_at": datetime.utcnow(),
|
||||
"resolution_notes": resolution_notes,
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
# Add to actions taken
|
||||
current_actions = alert.actions_taken or []
|
||||
current_actions.append({
|
||||
"action": "resolved",
|
||||
"by": resolved_by,
|
||||
"at": datetime.utcnow().isoformat(),
|
||||
"notes": resolution_notes
|
||||
})
|
||||
update_data["actions_taken"] = current_actions
|
||||
|
||||
alert = await self.update(alert_id, update_data)
|
||||
|
||||
logger.info("Resolved production alert",
|
||||
alert_id=str(alert_id),
|
||||
resolved_by=resolved_by)
|
||||
|
||||
return alert
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error resolving alert", error=str(e))
|
||||
raise DatabaseError(f"Failed to resolve alert: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_alert_statistics(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date
|
||||
) -> Dict[str, Any]:
|
||||
"""Get alert statistics for a tenant and date range"""
|
||||
try:
|
||||
start_datetime = datetime.combine(start_date, datetime.min.time())
|
||||
end_datetime = datetime.combine(end_date, datetime.max.time())
|
||||
|
||||
alerts = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"created_at__gte": start_datetime,
|
||||
"created_at__lte": end_datetime
|
||||
}
|
||||
)
|
||||
|
||||
total_alerts = len(alerts)
|
||||
active_alerts = len([a for a in alerts if a.is_active])
|
||||
acknowledged_alerts = len([a for a in alerts if a.is_acknowledged])
|
||||
resolved_alerts = len([a for a in alerts if a.is_resolved])
|
||||
|
||||
# Group by severity
|
||||
by_severity = {}
|
||||
for severity in AlertSeverity:
|
||||
severity_alerts = [a for a in alerts if a.severity == severity]
|
||||
by_severity[severity.value] = {
|
||||
"total": len(severity_alerts),
|
||||
"active": len([a for a in severity_alerts if a.is_active]),
|
||||
"resolved": len([a for a in severity_alerts if a.is_resolved])
|
||||
}
|
||||
|
||||
# Group by alert type
|
||||
by_type = {}
|
||||
for alert in alerts:
|
||||
alert_type = alert.alert_type
|
||||
if alert_type not in by_type:
|
||||
by_type[alert_type] = {
|
||||
"total": 0,
|
||||
"active": 0,
|
||||
"resolved": 0
|
||||
}
|
||||
|
||||
by_type[alert_type]["total"] += 1
|
||||
if alert.is_active:
|
||||
by_type[alert_type]["active"] += 1
|
||||
if alert.is_resolved:
|
||||
by_type[alert_type]["resolved"] += 1
|
||||
|
||||
# Calculate resolution time statistics
|
||||
resolved_with_times = [
|
||||
a for a in alerts
|
||||
if a.is_resolved and a.resolved_at and a.created_at
|
||||
]
|
||||
|
||||
resolution_times = []
|
||||
for alert in resolved_with_times:
|
||||
resolution_time = (alert.resolved_at - alert.created_at).total_seconds() / 3600 # hours
|
||||
resolution_times.append(resolution_time)
|
||||
|
||||
avg_resolution_time = sum(resolution_times) / len(resolution_times) if resolution_times else 0
|
||||
|
||||
return {
|
||||
"period_start": start_date.isoformat(),
|
||||
"period_end": end_date.isoformat(),
|
||||
"total_alerts": total_alerts,
|
||||
"active_alerts": active_alerts,
|
||||
"acknowledged_alerts": acknowledged_alerts,
|
||||
"resolved_alerts": resolved_alerts,
|
||||
"acknowledgment_rate": round((acknowledged_alerts / total_alerts * 100) if total_alerts > 0 else 0, 2),
|
||||
"resolution_rate": round((resolved_alerts / total_alerts * 100) if total_alerts > 0 else 0, 2),
|
||||
"average_resolution_time_hours": round(avg_resolution_time, 2),
|
||||
"by_severity": by_severity,
|
||||
"by_alert_type": by_type,
|
||||
"tenant_id": tenant_id
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error calculating alert statistics", error=str(e))
|
||||
raise DatabaseError(f"Failed to calculate alert statistics: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def cleanup_old_resolved_alerts(
|
||||
self,
|
||||
tenant_id: str,
|
||||
days_to_keep: int = 30
|
||||
) -> int:
|
||||
"""Clean up old resolved alerts"""
|
||||
try:
|
||||
cutoff_date = datetime.utcnow() - timedelta(days=days_to_keep)
|
||||
|
||||
old_alerts = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"is_resolved": True,
|
||||
"resolved_at__lt": cutoff_date
|
||||
}
|
||||
)
|
||||
|
||||
deleted_count = 0
|
||||
for alert in old_alerts:
|
||||
await self.delete(alert.id)
|
||||
deleted_count += 1
|
||||
|
||||
logger.info("Cleaned up old resolved alerts",
|
||||
deleted_count=deleted_count,
|
||||
days_to_keep=days_to_keep,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return deleted_count
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error cleaning up old alerts", error=str(e))
|
||||
raise DatabaseError(f"Failed to clean up old alerts: {str(e)}")
|
||||
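A sketch of the intended alert lifecycle using this repository; it assumes an AsyncSession named session and a tenant_id are already in scope inside an async function, and the title/message values are made up:

repo = ProductionAlertRepository(session)

alert = await repo.create_alert({
    "tenant_id": tenant_id,
    "alert_type": "capacity_exceeded",
    "title": "Oven capacity exceeded",
    "message": "Planned batches exceed oven capacity for the morning shift",
})
alert = await repo.acknowledge_alert(alert.id, acknowledged_by="shift-lead")
alert = await repo.resolve_alert(alert.id, resolved_by="shift-lead",
                                 resolution_notes="Rescheduled two batches to the afternoon")
# After resolve_alert: is_resolved is True and is_active is False
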
346
services/production/app/repositories/production_batch_repository.py
Normal file
@@ -0,0 +1,346 @@
"""
|
||||
Production Batch Repository
|
||||
Repository for production batch operations
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select, and_, text, desc, func, or_
|
||||
from datetime import datetime, timedelta, date
|
||||
from uuid import UUID
|
||||
import structlog
|
||||
|
||||
from .base import ProductionBaseRepository
|
||||
from app.models.production import ProductionBatch, ProductionStatus, ProductionPriority
|
||||
from shared.database.exceptions import DatabaseError, ValidationError
|
||||
from shared.database.transactions import transactional
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionBatchRepository(ProductionBaseRepository):
|
||||
"""Repository for production batch operations"""
|
||||
|
||||
def __init__(self, session: AsyncSession, cache_ttl: Optional[int] = 300):
|
||||
# Production batches are dynamic, short cache time (5 minutes)
|
||||
super().__init__(ProductionBatch, session, cache_ttl)
|
||||
|
||||
@transactional
|
||||
async def create_batch(self, batch_data: Dict[str, Any]) -> ProductionBatch:
|
||||
"""Create a new production batch with validation"""
|
||||
try:
|
||||
# Validate batch data
|
||||
validation_result = self._validate_production_data(
|
||||
batch_data,
|
||||
["tenant_id", "product_id", "product_name", "planned_start_time",
|
||||
"planned_end_time", "planned_quantity", "planned_duration_minutes"]
|
||||
)
|
||||
|
||||
if not validation_result["is_valid"]:
|
||||
raise ValidationError(f"Invalid batch data: {validation_result['errors']}")
|
||||
|
||||
# Generate batch number if not provided
|
||||
if "batch_number" not in batch_data or not batch_data["batch_number"]:
|
||||
batch_data["batch_number"] = await self._generate_batch_number(
|
||||
batch_data["tenant_id"]
|
||||
)
|
||||
|
||||
# Set default values
|
||||
if "status" not in batch_data:
|
||||
batch_data["status"] = ProductionStatus.PENDING
|
||||
if "priority" not in batch_data:
|
||||
batch_data["priority"] = ProductionPriority.MEDIUM
|
||||
if "is_rush_order" not in batch_data:
|
||||
batch_data["is_rush_order"] = False
|
||||
if "is_special_recipe" not in batch_data:
|
||||
batch_data["is_special_recipe"] = False
|
||||
|
||||
# Check for duplicate batch number
|
||||
if await self.check_duplicate(batch_data["tenant_id"], {"batch_number": batch_data["batch_number"]}):
|
||||
raise ValidationError(f"Batch number {batch_data['batch_number']} already exists")
|
||||
|
||||
# Create batch
|
||||
batch = await self.create(batch_data)
|
||||
|
||||
logger.info("Production batch created successfully",
|
||||
batch_id=str(batch.id),
|
||||
batch_number=batch.batch_number,
|
||||
tenant_id=str(batch.tenant_id))
|
||||
|
||||
return batch
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error creating production batch", error=str(e))
|
||||
raise DatabaseError(f"Failed to create production batch: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_active_batches(self, tenant_id: str) -> List[ProductionBatch]:
|
||||
"""Get active production batches for a tenant"""
|
||||
try:
|
||||
active_statuses = [
|
||||
ProductionStatus.PENDING,
|
||||
ProductionStatus.IN_PROGRESS,
|
||||
ProductionStatus.QUALITY_CHECK,
|
||||
ProductionStatus.ON_HOLD
|
||||
]
|
||||
|
||||
batches = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"status__in": active_statuses
|
||||
},
|
||||
order_by="planned_start_time"
|
||||
)
|
||||
|
||||
logger.info("Retrieved active production batches",
|
||||
count=len(batches),
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return batches
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching active batches", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch active batches: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_batches_by_date_range(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date,
|
||||
status: Optional[ProductionStatus] = None
|
||||
) -> List[ProductionBatch]:
|
||||
"""Get production batches within a date range"""
|
||||
try:
|
||||
start_datetime = datetime.combine(start_date, datetime.min.time())
|
||||
end_datetime = datetime.combine(end_date, datetime.max.time())
|
||||
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
"planned_start_time__gte": start_datetime,
|
||||
"planned_start_time__lte": end_datetime
|
||||
}
|
||||
|
||||
if status:
|
||||
filters["status"] = status
|
||||
|
||||
batches = await self.get_multi(
|
||||
filters=filters,
|
||||
order_by="planned_start_time"
|
||||
)
|
||||
|
||||
logger.info("Retrieved batches by date range",
|
||||
count=len(batches),
|
||||
start_date=start_date.isoformat(),
|
||||
end_date=end_date.isoformat(),
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return batches
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching batches by date range", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch batches by date range: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_batches_by_product(
|
||||
self,
|
||||
tenant_id: str,
|
||||
product_id: str,
|
||||
limit: int = 50
|
||||
) -> List[ProductionBatch]:
|
||||
"""Get production batches for a specific product"""
|
||||
try:
|
||||
batches = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"product_id": product_id
|
||||
},
|
||||
order_by="created_at",
|
||||
order_desc=True,
|
||||
limit=limit
|
||||
)
|
||||
|
||||
logger.info("Retrieved batches by product",
|
||||
count=len(batches),
|
||||
product_id=product_id,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return batches
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching batches by product", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch batches by product: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def update_batch_status(
|
||||
self,
|
||||
batch_id: UUID,
|
||||
status: ProductionStatus,
|
||||
actual_quantity: Optional[float] = None,
|
||||
notes: Optional[str] = None
|
||||
) -> ProductionBatch:
|
||||
"""Update production batch status"""
|
||||
try:
|
||||
batch = await self.get(batch_id)
|
||||
if not batch:
|
||||
raise ValidationError(f"Batch {batch_id} not found")
|
||||
|
||||
update_data = {
|
||||
"status": status,
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
# Set completion time if completed
|
||||
if status == ProductionStatus.COMPLETED:
|
||||
update_data["completed_at"] = datetime.utcnow()
|
||||
update_data["actual_end_time"] = datetime.utcnow()
|
||||
|
||||
if actual_quantity is not None:
|
||||
update_data["actual_quantity"] = actual_quantity
|
||||
# Calculate yield percentage
|
||||
if batch.planned_quantity > 0:
|
||||
update_data["yield_percentage"] = (actual_quantity / batch.planned_quantity) * 100
|
||||
|
||||
# Set start time if starting production
|
||||
if status == ProductionStatus.IN_PROGRESS and not batch.actual_start_time:
|
||||
update_data["actual_start_time"] = datetime.utcnow()
|
||||
|
||||
# Add notes
|
||||
if notes:
|
||||
if status == ProductionStatus.CANCELLED:
|
||||
update_data["cancellation_reason"] = notes
|
||||
elif status == ProductionStatus.ON_HOLD:
|
||||
update_data["delay_reason"] = notes
|
||||
else:
|
||||
update_data["production_notes"] = notes
|
||||
|
||||
batch = await self.update(batch_id, update_data)
|
||||
|
||||
logger.info("Updated batch status",
|
||||
batch_id=str(batch_id),
|
||||
new_status=status.value,
|
||||
actual_quantity=actual_quantity)
|
||||
|
||||
return batch
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error updating batch status", error=str(e))
|
||||
raise DatabaseError(f"Failed to update batch status: {str(e)}")
|
||||
|
||||
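update_batch_status derives yield as actual / planned * 100, so a batch planned at 200 units that produces 188 records a 94.0% yield. A minimal usage sketch, assuming session and batch_id are in scope inside an async function:

repo = ProductionBatchRepository(session)
batch = await repo.update_batch_status(
    batch_id,
    ProductionStatus.COMPLETED,
    actual_quantity=188.0,  # with planned_quantity == 200 -> yield_percentage == 94.0
)
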
    @transactional
    async def get_production_metrics(
        self,
        tenant_id: str,
        start_date: date,
        end_date: date
    ) -> Dict[str, Any]:
        """Get production metrics for a tenant and date range"""
        try:
            batches = await self.get_batches_by_date_range(tenant_id, start_date, end_date)

            total_batches = len(batches)
            completed_batches = len([b for b in batches if b.status == ProductionStatus.COMPLETED])
            in_progress_batches = len([b for b in batches if b.status == ProductionStatus.IN_PROGRESS])
            cancelled_batches = len([b for b in batches if b.status == ProductionStatus.CANCELLED])

            # Calculate totals
            total_planned_quantity = sum(b.planned_quantity for b in batches)
            total_actual_quantity = sum(b.actual_quantity or 0 for b in batches)

            # Calculate average yield
            completed_with_yield = [b for b in batches if b.yield_percentage is not None]
            avg_yield = (
                sum(b.yield_percentage for b in completed_with_yield) / len(completed_with_yield)
                if completed_with_yield else 0
            )

            # Calculate on-time completion rate
            on_time_completed = len([
                b for b in batches
                if b.status == ProductionStatus.COMPLETED
                and b.actual_end_time
                and b.planned_end_time
                and b.actual_end_time <= b.planned_end_time
            ])

            on_time_rate = (on_time_completed / completed_batches * 100) if completed_batches > 0 else 0

            return {
                "period_start": start_date.isoformat(),
                "period_end": end_date.isoformat(),
                "total_batches": total_batches,
                "completed_batches": completed_batches,
                "in_progress_batches": in_progress_batches,
                "cancelled_batches": cancelled_batches,
                "completion_rate": (completed_batches / total_batches * 100) if total_batches > 0 else 0,
                "total_planned_quantity": total_planned_quantity,
                "total_actual_quantity": total_actual_quantity,
                "average_yield_percentage": round(avg_yield, 2),
                "on_time_completion_rate": round(on_time_rate, 2),
                "tenant_id": tenant_id
            }

        except Exception as e:
            logger.error("Error calculating production metrics", error=str(e))
            raise DatabaseError(f"Failed to calculate production metrics: {str(e)}")

    @transactional
    async def get_urgent_batches(self, tenant_id: str, hours_ahead: int = 4) -> List[ProductionBatch]:
        """Get batches that need to start within the specified hours"""
        try:
            cutoff_time = datetime.utcnow() + timedelta(hours=hours_ahead)

            batches = await self.get_multi(
                filters={
                    "tenant_id": tenant_id,
                    "status": ProductionStatus.PENDING,
                    "planned_start_time__lte": cutoff_time
                },
                order_by="planned_start_time"
            )

            logger.info("Retrieved urgent batches",
                        count=len(batches),
                        hours_ahead=hours_ahead,
                        tenant_id=tenant_id)

            return batches

        except Exception as e:
            logger.error("Error fetching urgent batches", error=str(e))
            raise DatabaseError(f"Failed to fetch urgent batches: {str(e)}")

    async def _generate_batch_number(self, tenant_id: str) -> str:
        """Generate a unique batch number"""
        try:
            # Get current date for the prefix
            today = datetime.utcnow().date()
            date_prefix = today.strftime("%Y%m%d")

            # Count batches created today
            today_start = datetime.combine(today, datetime.min.time())
            today_end = datetime.combine(today, datetime.max.time())

            daily_batches = await self.get_multi(
                filters={
                    "tenant_id": tenant_id,
                    "created_at__gte": today_start,
                    "created_at__lte": today_end
                }
            )

            # Generate sequential number
            sequence = len(daily_batches) + 1
            batch_number = f"PROD-{date_prefix}-{sequence:03d}"

            return batch_number

        except Exception as e:
            logger.error("Error generating batch number", error=str(e))
            # Fall back to a timestamp-based number
            timestamp = int(datetime.utcnow().timestamp())
            return f"PROD-{timestamp}"
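Note that counting today's rows to derive the next sequence is racy under concurrent batch creation: two requests can observe the same count and collide on the same batch number, which the duplicate check in create_batch would then reject. A database sequence sidesteps this; a sketch using a PostgreSQL sequence (the sequence name is an assumption and is not created by this commit):

from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

async def next_batch_number(session: AsyncSession, date_prefix: str) -> str:
    # Assumes a sequence created elsewhere, e.g.:
    #   CREATE SEQUENCE IF NOT EXISTS batch_number_seq;
    result = await session.execute(text("SELECT nextval('batch_number_seq')"))
    return f"PROD-{date_prefix}-{result.scalar():03d}"
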
341
services/production/app/repositories/production_capacity_repository.py
Normal file
@@ -0,0 +1,341 @@
"""
|
||||
Production Capacity Repository
|
||||
Repository for production capacity operations
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select, and_, text, desc, func
|
||||
from datetime import datetime, timedelta, date
|
||||
from uuid import UUID
|
||||
import structlog
|
||||
|
||||
from .base import ProductionBaseRepository
|
||||
from app.models.production import ProductionCapacity
|
||||
from shared.database.exceptions import DatabaseError, ValidationError
|
||||
from shared.database.transactions import transactional
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionCapacityRepository(ProductionBaseRepository):
|
||||
"""Repository for production capacity operations"""
|
||||
|
||||
def __init__(self, session: AsyncSession, cache_ttl: Optional[int] = 600):
|
||||
# Capacity data changes moderately, medium cache time (10 minutes)
|
||||
super().__init__(ProductionCapacity, session, cache_ttl)
|
||||
|
||||
@transactional
|
||||
async def create_capacity(self, capacity_data: Dict[str, Any]) -> ProductionCapacity:
|
||||
"""Create a new production capacity entry with validation"""
|
||||
try:
|
||||
# Validate capacity data
|
||||
validation_result = self._validate_production_data(
|
||||
capacity_data,
|
||||
["tenant_id", "resource_type", "resource_id", "resource_name",
|
||||
"date", "start_time", "end_time", "total_capacity_units"]
|
||||
)
|
||||
|
||||
if not validation_result["is_valid"]:
|
||||
raise ValidationError(f"Invalid capacity data: {validation_result['errors']}")
|
||||
|
||||
# Set default values
|
||||
if "allocated_capacity_units" not in capacity_data:
|
||||
capacity_data["allocated_capacity_units"] = 0.0
|
||||
if "remaining_capacity_units" not in capacity_data:
|
||||
capacity_data["remaining_capacity_units"] = capacity_data["total_capacity_units"]
|
||||
if "is_available" not in capacity_data:
|
||||
capacity_data["is_available"] = True
|
||||
if "is_maintenance" not in capacity_data:
|
||||
capacity_data["is_maintenance"] = False
|
||||
if "is_reserved" not in capacity_data:
|
||||
capacity_data["is_reserved"] = False
|
||||
|
||||
# Create capacity entry
|
||||
capacity = await self.create(capacity_data)
|
||||
|
||||
logger.info("Production capacity created successfully",
|
||||
capacity_id=str(capacity.id),
|
||||
resource_type=capacity.resource_type,
|
||||
resource_id=capacity.resource_id,
|
||||
tenant_id=str(capacity.tenant_id))
|
||||
|
||||
return capacity
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error creating production capacity", error=str(e))
|
||||
raise DatabaseError(f"Failed to create production capacity: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_capacity_by_resource(
|
||||
self,
|
||||
tenant_id: str,
|
||||
resource_id: str,
|
||||
date_filter: Optional[date] = None
|
||||
) -> List[ProductionCapacity]:
|
||||
"""Get capacity entries for a specific resource"""
|
||||
try:
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
"resource_id": resource_id
|
||||
}
|
||||
|
||||
if date_filter:
|
||||
filters["date"] = date_filter
|
||||
|
||||
capacities = await self.get_multi(
|
||||
filters=filters,
|
||||
order_by="start_time"
|
||||
)
|
||||
|
||||
logger.info("Retrieved capacity by resource",
|
||||
count=len(capacities),
|
||||
resource_id=resource_id,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return capacities
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching capacity by resource", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch capacity by resource: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_available_capacity(
|
||||
self,
|
||||
tenant_id: str,
|
||||
resource_type: str,
|
||||
target_date: date,
|
||||
required_capacity: float
|
||||
) -> List[ProductionCapacity]:
|
||||
"""Get available capacity for a specific date and capacity requirement"""
|
||||
try:
|
||||
capacities = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"resource_type": resource_type,
|
||||
"date": target_date,
|
||||
"is_available": True,
|
||||
"is_maintenance": False,
|
||||
"remaining_capacity_units__gte": required_capacity
|
||||
},
|
||||
order_by="remaining_capacity_units",
|
||||
order_desc=True
|
||||
)
|
||||
|
||||
logger.info("Retrieved available capacity",
|
||||
count=len(capacities),
|
||||
resource_type=resource_type,
|
||||
required_capacity=required_capacity,
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return capacities
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching available capacity", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch available capacity: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def allocate_capacity(
|
||||
self,
|
||||
capacity_id: UUID,
|
||||
allocation_amount: float,
|
||||
allocation_notes: Optional[str] = None
|
||||
) -> ProductionCapacity:
|
||||
"""Allocate capacity units from a capacity entry"""
|
||||
try:
|
||||
capacity = await self.get(capacity_id)
|
||||
if not capacity:
|
||||
raise ValidationError(f"Capacity {capacity_id} not found")
|
||||
|
||||
if allocation_amount > capacity.remaining_capacity_units:
|
||||
raise ValidationError(
|
||||
f"Insufficient capacity: requested {allocation_amount}, "
|
||||
f"available {capacity.remaining_capacity_units}"
|
||||
)
|
||||
|
||||
new_allocated = capacity.allocated_capacity_units + allocation_amount
|
||||
new_remaining = capacity.remaining_capacity_units - allocation_amount
|
||||
|
||||
update_data = {
|
||||
"allocated_capacity_units": new_allocated,
|
||||
"remaining_capacity_units": new_remaining,
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
if allocation_notes:
|
||||
current_notes = capacity.notes or ""
|
||||
update_data["notes"] = f"{current_notes}\n{allocation_notes}".strip()
|
||||
|
||||
capacity = await self.update(capacity_id, update_data)
|
||||
|
||||
logger.info("Allocated capacity",
|
||||
capacity_id=str(capacity_id),
|
||||
allocation_amount=allocation_amount,
|
||||
remaining_capacity=new_remaining)
|
||||
|
||||
return capacity
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error allocating capacity", error=str(e))
|
||||
raise DatabaseError(f"Failed to allocate capacity: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def release_capacity(
|
||||
self,
|
||||
capacity_id: UUID,
|
||||
release_amount: float,
|
||||
release_notes: Optional[str] = None
|
||||
) -> ProductionCapacity:
|
||||
"""Release allocated capacity units back to a capacity entry"""
|
||||
try:
|
||||
capacity = await self.get(capacity_id)
|
||||
if not capacity:
|
||||
raise ValidationError(f"Capacity {capacity_id} not found")
|
||||
|
||||
if release_amount > capacity.allocated_capacity_units:
|
||||
raise ValidationError(
|
||||
f"Cannot release more than allocated: requested {release_amount}, "
|
||||
f"allocated {capacity.allocated_capacity_units}"
|
||||
)
|
||||
|
||||
new_allocated = capacity.allocated_capacity_units - release_amount
|
||||
new_remaining = capacity.remaining_capacity_units + release_amount
|
||||
|
||||
update_data = {
|
||||
"allocated_capacity_units": new_allocated,
|
||||
"remaining_capacity_units": new_remaining,
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
if release_notes:
|
||||
current_notes = capacity.notes or ""
|
||||
update_data["notes"] = f"{current_notes}\n{release_notes}".strip()
|
||||
|
||||
capacity = await self.update(capacity_id, update_data)
|
||||
|
||||
logger.info("Released capacity",
|
||||
capacity_id=str(capacity_id),
|
||||
release_amount=release_amount,
|
||||
remaining_capacity=new_remaining)
|
||||
|
||||
return capacity
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error releasing capacity", error=str(e))
|
||||
raise DatabaseError(f"Failed to release capacity: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_capacity_utilization_summary(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date,
|
||||
resource_type: Optional[str] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Get capacity utilization summary for a date range"""
|
||||
try:
|
||||
filters = {
|
||||
"tenant_id": tenant_id,
|
||||
"date__gte": start_date,
|
||||
"date__lte": end_date
|
||||
}
|
||||
|
||||
if resource_type:
|
||||
filters["resource_type"] = resource_type
|
||||
|
||||
capacities = await self.get_multi(filters=filters)
|
||||
|
||||
total_capacity = sum(c.total_capacity_units for c in capacities)
|
||||
total_allocated = sum(c.allocated_capacity_units for c in capacities)
|
||||
total_available = sum(c.remaining_capacity_units for c in capacities)
|
||||
|
||||
# Group by resource type
|
||||
by_resource_type = {}
|
||||
for capacity in capacities:
|
||||
rt = capacity.resource_type
|
||||
if rt not in by_resource_type:
|
||||
by_resource_type[rt] = {
|
||||
"total_capacity": 0,
|
||||
"allocated_capacity": 0,
|
||||
"available_capacity": 0,
|
||||
"resource_count": 0
|
||||
}
|
||||
|
||||
by_resource_type[rt]["total_capacity"] += capacity.total_capacity_units
|
||||
by_resource_type[rt]["allocated_capacity"] += capacity.allocated_capacity_units
|
||||
by_resource_type[rt]["available_capacity"] += capacity.remaining_capacity_units
|
||||
by_resource_type[rt]["resource_count"] += 1
|
||||
|
||||
# Calculate utilization percentages
|
||||
for rt_data in by_resource_type.values():
|
||||
if rt_data["total_capacity"] > 0:
|
||||
rt_data["utilization_percentage"] = round(
|
||||
(rt_data["allocated_capacity"] / rt_data["total_capacity"]) * 100, 2
|
||||
)
|
||||
else:
|
||||
rt_data["utilization_percentage"] = 0
|
||||
|
||||
return {
|
||||
"period_start": start_date.isoformat(),
|
||||
"period_end": end_date.isoformat(),
|
||||
"total_capacity_units": total_capacity,
|
||||
"total_allocated_units": total_allocated,
|
||||
"total_available_units": total_available,
|
||||
"overall_utilization_percentage": round(
|
||||
(total_allocated / total_capacity * 100) if total_capacity > 0 else 0, 2
|
||||
),
|
||||
"by_resource_type": by_resource_type,
|
||||
"total_resources": len(capacities),
|
||||
"tenant_id": tenant_id
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error calculating capacity utilization summary", error=str(e))
|
||||
raise DatabaseError(f"Failed to calculate capacity utilization summary: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def set_maintenance_mode(
|
||||
self,
|
||||
capacity_id: UUID,
|
||||
is_maintenance: bool,
|
||||
maintenance_notes: Optional[str] = None
|
||||
) -> ProductionCapacity:
|
||||
"""Set maintenance mode for a capacity entry"""
|
||||
try:
|
||||
capacity = await self.get(capacity_id)
|
||||
if not capacity:
|
||||
raise ValidationError(f"Capacity {capacity_id} not found")
|
||||
|
||||
update_data = {
|
||||
"is_maintenance": is_maintenance,
|
||||
"is_available": not is_maintenance, # Not available when in maintenance
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
if is_maintenance:
|
||||
update_data["maintenance_status"] = "in_maintenance"
|
||||
if maintenance_notes:
|
||||
update_data["notes"] = maintenance_notes
|
||||
else:
|
||||
update_data["maintenance_status"] = "operational"
|
||||
update_data["last_maintenance_date"] = datetime.utcnow()
|
||||
|
||||
capacity = await self.update(capacity_id, update_data)
|
||||
|
||||
logger.info("Set maintenance mode",
|
||||
capacity_id=str(capacity_id),
|
||||
is_maintenance=is_maintenance)
|
||||
|
||||
return capacity
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error setting maintenance mode", error=str(e))
|
||||
raise DatabaseError(f"Failed to set maintenance mode: {str(e)}")
|
||||
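allocate_capacity and release_capacity together maintain the invariant total = allocated + remaining. A short illustration of the round trip, assuming session and capacity_id are in scope inside an async function and the amounts are made up:

repo = ProductionCapacityRepository(session)

cap = await repo.allocate_capacity(capacity_id, allocation_amount=30.0)
assert cap.allocated_capacity_units + cap.remaining_capacity_units == cap.total_capacity_units

cap = await repo.release_capacity(capacity_id, release_amount=30.0)
assert cap.allocated_capacity_units + cap.remaining_capacity_units == cap.total_capacity_units
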
279
services/production/app/repositories/production_schedule_repository.py
Normal file
@@ -0,0 +1,279 @@
"""
|
||||
Production Schedule Repository
|
||||
Repository for production schedule operations
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
from sqlalchemy import select, and_, text, desc, func
|
||||
from datetime import datetime, timedelta, date
|
||||
from uuid import UUID
|
||||
import structlog
|
||||
|
||||
from .base import ProductionBaseRepository
|
||||
from app.models.production import ProductionSchedule
|
||||
from shared.database.exceptions import DatabaseError, ValidationError
|
||||
from shared.database.transactions import transactional
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionScheduleRepository(ProductionBaseRepository):
|
||||
"""Repository for production schedule operations"""
|
||||
|
||||
def __init__(self, session: AsyncSession, cache_ttl: Optional[int] = 600):
|
||||
# Schedules are more stable, medium cache time (10 minutes)
|
||||
super().__init__(ProductionSchedule, session, cache_ttl)
|
||||
|
||||
@transactional
|
||||
async def create_schedule(self, schedule_data: Dict[str, Any]) -> ProductionSchedule:
|
||||
"""Create a new production schedule with validation"""
|
||||
try:
|
||||
# Validate schedule data
|
||||
validation_result = self._validate_production_data(
|
||||
schedule_data,
|
||||
["tenant_id", "schedule_date", "shift_start", "shift_end",
|
||||
"total_capacity_hours", "planned_capacity_hours", "staff_count"]
|
||||
)
|
||||
|
||||
if not validation_result["is_valid"]:
|
||||
raise ValidationError(f"Invalid schedule data: {validation_result['errors']}")
|
||||
|
||||
# Set default values
|
||||
if "is_finalized" not in schedule_data:
|
||||
schedule_data["is_finalized"] = False
|
||||
if "is_active" not in schedule_data:
|
||||
schedule_data["is_active"] = True
|
||||
if "overtime_hours" not in schedule_data:
|
||||
schedule_data["overtime_hours"] = 0.0
|
||||
|
||||
# Validate date uniqueness
|
||||
existing_schedule = await self.get_schedule_by_date(
|
||||
schedule_data["tenant_id"],
|
||||
schedule_data["schedule_date"]
|
||||
)
|
||||
if existing_schedule:
|
||||
raise ValidationError(f"Schedule for date {schedule_data['schedule_date']} already exists")
|
||||
|
||||
# Create schedule
|
||||
schedule = await self.create(schedule_data)
|
||||
|
||||
logger.info("Production schedule created successfully",
|
||||
schedule_id=str(schedule.id),
|
||||
schedule_date=schedule.schedule_date.isoformat(),
|
||||
tenant_id=str(schedule.tenant_id))
|
||||
|
||||
return schedule
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error creating production schedule", error=str(e))
|
||||
raise DatabaseError(f"Failed to create production schedule: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_schedule_by_date(
|
||||
self,
|
||||
tenant_id: str,
|
||||
schedule_date: date
|
||||
) -> Optional[ProductionSchedule]:
|
||||
"""Get production schedule for a specific date"""
|
||||
try:
|
||||
schedules = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"schedule_date": schedule_date
|
||||
},
|
||||
limit=1
|
||||
)
|
||||
|
||||
schedule = schedules[0] if schedules else None
|
||||
|
||||
if schedule:
|
||||
logger.info("Retrieved production schedule by date",
|
||||
schedule_id=str(schedule.id),
|
||||
schedule_date=schedule_date.isoformat(),
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return schedule
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching schedule by date", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch schedule by date: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_schedules_by_date_range(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date
|
||||
) -> List[ProductionSchedule]:
|
||||
"""Get production schedules within a date range"""
|
||||
try:
|
||||
schedules = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"schedule_date__gte": start_date,
|
||||
"schedule_date__lte": end_date
|
||||
},
|
||||
order_by="schedule_date"
|
||||
)
|
||||
|
||||
logger.info("Retrieved schedules by date range",
|
||||
count=len(schedules),
|
||||
start_date=start_date.isoformat(),
|
||||
end_date=end_date.isoformat(),
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return schedules
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching schedules by date range", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch schedules by date range: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_active_schedules(self, tenant_id: str) -> List[ProductionSchedule]:
|
||||
"""Get active production schedules for a tenant"""
|
||||
try:
|
||||
schedules = await self.get_multi(
|
||||
filters={
|
||||
"tenant_id": tenant_id,
|
||||
"is_active": True
|
||||
},
|
||||
order_by="schedule_date"
|
||||
)
|
||||
|
||||
logger.info("Retrieved active production schedules",
|
||||
count=len(schedules),
|
||||
tenant_id=tenant_id)
|
||||
|
||||
return schedules
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error fetching active schedules", error=str(e))
|
||||
raise DatabaseError(f"Failed to fetch active schedules: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def finalize_schedule(
|
||||
self,
|
||||
schedule_id: UUID,
|
||||
finalized_by: str
|
||||
) -> ProductionSchedule:
|
||||
"""Finalize a production schedule"""
|
||||
try:
|
||||
schedule = await self.get(schedule_id)
|
||||
if not schedule:
|
||||
raise ValidationError(f"Schedule {schedule_id} not found")
|
||||
|
||||
if schedule.is_finalized:
|
||||
raise ValidationError("Schedule is already finalized")
|
||||
|
||||
update_data = {
|
||||
"is_finalized": True,
|
||||
"finalized_at": datetime.utcnow(),
|
||||
"updated_at": datetime.utcnow()
|
||||
}
|
||||
|
||||
schedule = await self.update(schedule_id, update_data)
|
||||
|
||||
logger.info("Production schedule finalized",
|
||||
schedule_id=str(schedule_id),
|
||||
finalized_by=finalized_by)
|
||||
|
||||
return schedule
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error finalizing schedule", error=str(e))
|
||||
raise DatabaseError(f"Failed to finalize schedule: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def update_schedule_metrics(
|
||||
self,
|
||||
schedule_id: UUID,
|
||||
metrics: Dict[str, Any]
|
||||
) -> ProductionSchedule:
|
||||
"""Update production schedule metrics"""
|
||||
try:
|
||||
schedule = await self.get(schedule_id)
|
||||
if not schedule:
|
||||
raise ValidationError(f"Schedule {schedule_id} not found")
|
||||
|
||||
# Validate metrics
|
||||
valid_metrics = [
|
||||
"actual_capacity_hours", "total_batches_completed",
|
||||
"total_quantity_produced", "efficiency_percentage",
|
||||
"utilization_percentage", "on_time_completion_rate"
|
||||
]
|
||||
|
||||
update_data = {"updated_at": datetime.utcnow()}
|
||||
|
||||
for metric, value in metrics.items():
|
||||
if metric in valid_metrics:
|
||||
update_data[metric] = value
|
||||
|
||||
schedule = await self.update(schedule_id, update_data)
|
||||
|
||||
logger.info("Updated schedule metrics",
|
||||
schedule_id=str(schedule_id),
|
||||
metrics=list(metrics.keys()))
|
||||
|
||||
return schedule
|
||||
|
||||
except ValidationError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error("Error updating schedule metrics", error=str(e))
|
||||
raise DatabaseError(f"Failed to update schedule metrics: {str(e)}")
|
||||
|
||||
@transactional
|
||||
async def get_schedule_performance_summary(
|
||||
self,
|
||||
tenant_id: str,
|
||||
start_date: date,
|
||||
end_date: date
|
||||
) -> Dict[str, Any]:
|
||||
"""Get schedule performance summary for a date range"""
|
||||
try:
|
||||
schedules = await self.get_schedules_by_date_range(tenant_id, start_date, end_date)
|
||||
|
||||
total_schedules = len(schedules)
|
||||
finalized_schedules = len([s for s in schedules if s.is_finalized])
|
||||
|
||||
# Calculate averages
|
||||
total_planned_hours = sum(s.planned_capacity_hours for s in schedules)
|
||||
total_actual_hours = sum(s.actual_capacity_hours or 0 for s in schedules)
|
||||
total_overtime = sum(s.overtime_hours or 0 for s in schedules)
|
||||
|
||||
# Calculate efficiency metrics
|
||||
schedules_with_efficiency = [s for s in schedules if s.efficiency_percentage is not None]
|
||||
avg_efficiency = (
|
||||
sum(s.efficiency_percentage for s in schedules_with_efficiency) / len(schedules_with_efficiency)
|
||||
if schedules_with_efficiency else 0
|
||||
)
|
||||
|
||||
schedules_with_utilization = [s for s in schedules if s.utilization_percentage is not None]
|
||||
avg_utilization = (
|
||||
sum(s.utilization_percentage for s in schedules_with_utilization) / len(schedules_with_utilization)
|
||||
if schedules_with_utilization else 0
|
||||
)
|
||||
|
||||
return {
|
||||
"period_start": start_date.isoformat(),
|
||||
"period_end": end_date.isoformat(),
|
||||
"total_schedules": total_schedules,
|
||||
"finalized_schedules": finalized_schedules,
|
||||
"finalization_rate": (finalized_schedules / total_schedules * 100) if total_schedules > 0 else 0,
|
||||
"total_planned_hours": total_planned_hours,
|
||||
"total_actual_hours": total_actual_hours,
|
||||
"total_overtime_hours": total_overtime,
|
||||
"capacity_utilization": (total_actual_hours / total_planned_hours * 100) if total_planned_hours > 0 else 0,
|
||||
"average_efficiency_percentage": round(avg_efficiency, 2),
|
||||
"average_utilization_percentage": round(avg_utilization, 2),
|
||||
"tenant_id": tenant_id
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error calculating schedule performance summary", error=str(e))
|
||||
raise DatabaseError(f"Failed to calculate schedule performance summary: {str(e)}")
|
||||
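A minimal usage sketch for the repository above (hypothetical caller, not part of the committed file; assumes an AsyncSession obtained from the shared database manager):

from datetime import date, timedelta

async def weekly_schedule_report(session, tenant_id: str) -> dict:
    # Hypothetical helper: summarize the last seven days of schedules.
    repo = ProductionScheduleRepository(session)
    end = date.today()
    start = end - timedelta(days=7)
    summary = await repo.get_schedule_performance_summary(tenant_id, start, end)
    # summary["capacity_utilization"] compares actual vs. planned hours, in percent.
    return summary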
319
services/production/app/repositories/quality_check_repository.py
Normal file
@@ -0,0 +1,319 @@
"""
Quality Check Repository
Repository for quality check operations
"""

from typing import Optional, List, Dict, Any
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, and_, text, desc, func
from datetime import datetime, timedelta, date
from uuid import UUID
import structlog

from .base import ProductionBaseRepository
from app.models.production import QualityCheck
from shared.database.exceptions import DatabaseError, ValidationError
from shared.database.transactions import transactional

logger = structlog.get_logger()


class QualityCheckRepository(ProductionBaseRepository):
    """Repository for quality check operations"""

    def __init__(self, session: AsyncSession, cache_ttl: Optional[int] = 300):
        # Quality checks are dynamic, so keep the cache short (5 minutes)
        super().__init__(QualityCheck, session, cache_ttl)

    @transactional
    async def create_quality_check(self, check_data: Dict[str, Any]) -> QualityCheck:
        """Create a new quality check with validation"""
        try:
            # Validate check data
            validation_result = self._validate_production_data(
                check_data,
                ["tenant_id", "batch_id", "check_type", "check_time",
                 "quality_score", "pass_fail"]
            )

            if not validation_result["is_valid"]:
                raise ValidationError(f"Invalid quality check data: {validation_result['errors']}")

            # Validate quality score range (1-10)
            if check_data.get("quality_score"):
                score = float(check_data["quality_score"])
                if score < 1 or score > 10:
                    raise ValidationError("Quality score must be between 1 and 10")

            # Set default values
            if "defect_count" not in check_data:
                check_data["defect_count"] = 0
            if "corrective_action_needed" not in check_data:
                check_data["corrective_action_needed"] = False

            # Create quality check
            quality_check = await self.create(check_data)

            logger.info("Quality check created successfully",
                        check_id=str(quality_check.id),
                        batch_id=str(quality_check.batch_id),
                        check_type=quality_check.check_type,
                        quality_score=quality_check.quality_score,
                        tenant_id=str(quality_check.tenant_id))

            return quality_check

        except ValidationError:
            raise
        except Exception as e:
            logger.error("Error creating quality check", error=str(e))
            raise DatabaseError(f"Failed to create quality check: {str(e)}")

    @transactional
    async def get_checks_by_batch(
        self,
        tenant_id: str,
        batch_id: str
    ) -> List[QualityCheck]:
        """Get all quality checks for a specific batch"""
        try:
            checks = await self.get_multi(
                filters={
                    "tenant_id": tenant_id,
                    "batch_id": batch_id
                },
                order_by="check_time"
            )

            logger.info("Retrieved quality checks by batch",
                        count=len(checks),
                        batch_id=batch_id,
                        tenant_id=tenant_id)

            return checks

        except Exception as e:
            logger.error("Error fetching quality checks by batch", error=str(e))
            raise DatabaseError(f"Failed to fetch quality checks by batch: {str(e)}")

    @transactional
    async def get_checks_by_date_range(
        self,
        tenant_id: str,
        start_date: date,
        end_date: date,
        check_type: Optional[str] = None
    ) -> List[QualityCheck]:
        """Get quality checks within a date range"""
        try:
            start_datetime = datetime.combine(start_date, datetime.min.time())
            end_datetime = datetime.combine(end_date, datetime.max.time())

            filters = {
                "tenant_id": tenant_id,
                "check_time__gte": start_datetime,
                "check_time__lte": end_datetime
            }

            if check_type:
                filters["check_type"] = check_type

            checks = await self.get_multi(
                filters=filters,
                order_by="check_time",
                order_desc=True
            )

            logger.info("Retrieved quality checks by date range",
                        count=len(checks),
                        start_date=start_date.isoformat(),
                        end_date=end_date.isoformat(),
                        tenant_id=tenant_id)

            return checks

        except Exception as e:
            logger.error("Error fetching quality checks by date range", error=str(e))
            raise DatabaseError(f"Failed to fetch quality checks by date range: {str(e)}")

    @transactional
    async def get_failed_checks(
        self,
        tenant_id: str,
        days_back: int = 7
    ) -> List[QualityCheck]:
        """Get failed quality checks from the last N days"""
        try:
            cutoff_date = datetime.utcnow() - timedelta(days=days_back)

            checks = await self.get_multi(
                filters={
                    "tenant_id": tenant_id,
                    "pass_fail": False,
                    "check_time__gte": cutoff_date
                },
                order_by="check_time",
                order_desc=True
            )

            logger.info("Retrieved failed quality checks",
                        count=len(checks),
                        days_back=days_back,
                        tenant_id=tenant_id)

            return checks

        except Exception as e:
            logger.error("Error fetching failed quality checks", error=str(e))
            raise DatabaseError(f"Failed to fetch failed quality checks: {str(e)}")

    @transactional
    async def get_quality_metrics(
        self,
        tenant_id: str,
        start_date: date,
        end_date: date
    ) -> Dict[str, Any]:
        """Get quality metrics for a tenant and date range"""
        try:
            checks = await self.get_checks_by_date_range(tenant_id, start_date, end_date)

            total_checks = len(checks)
            passed_checks = len([c for c in checks if c.pass_fail])
            failed_checks = total_checks - passed_checks

            # Calculate average quality score
            quality_scores = [c.quality_score for c in checks if c.quality_score is not None]
            avg_quality_score = sum(quality_scores) / len(quality_scores) if quality_scores else 0

            # Calculate defect rate
            total_defects = sum(c.defect_count for c in checks)
            avg_defects_per_check = total_defects / total_checks if total_checks > 0 else 0

            # Group by check type
            by_check_type = {}
            for check in checks:
                check_type = check.check_type
                if check_type not in by_check_type:
                    by_check_type[check_type] = {
                        "total_checks": 0,
                        "passed_checks": 0,
                        "failed_checks": 0,
                        "avg_quality_score": 0,
                        "total_defects": 0
                    }

                by_check_type[check_type]["total_checks"] += 1
                if check.pass_fail:
                    by_check_type[check_type]["passed_checks"] += 1
                else:
                    by_check_type[check_type]["failed_checks"] += 1
                by_check_type[check_type]["total_defects"] += check.defect_count

            # Calculate pass rates and average scores by check type
            # (iterate over items() so check_type matches the entry being updated)
            for check_type, type_data in by_check_type.items():
                if type_data["total_checks"] > 0:
                    type_data["pass_rate"] = round(
                        (type_data["passed_checks"] / type_data["total_checks"]) * 100, 2
                    )
                else:
                    type_data["pass_rate"] = 0

                type_scores = [c.quality_score for c in checks
                               if c.check_type == check_type and c.quality_score is not None]
                type_data["avg_quality_score"] = round(
                    sum(type_scores) / len(type_scores) if type_scores else 0, 2
                )

            # Count checks that require follow-up
            checks_needing_action = len([c for c in checks if c.corrective_action_needed])

            return {
                "period_start": start_date.isoformat(),
                "period_end": end_date.isoformat(),
                "total_checks": total_checks,
                "passed_checks": passed_checks,
                "failed_checks": failed_checks,
                "pass_rate_percentage": round((passed_checks / total_checks * 100) if total_checks > 0 else 0, 2),
                "average_quality_score": round(avg_quality_score, 2),
                "total_defects": total_defects,
                "average_defects_per_check": round(avg_defects_per_check, 2),
                "checks_needing_corrective_action": checks_needing_action,
                "by_check_type": by_check_type,
                "tenant_id": tenant_id
            }

        except Exception as e:
            logger.error("Error calculating quality metrics", error=str(e))
            raise DatabaseError(f"Failed to calculate quality metrics: {str(e)}")

    @transactional
    async def get_quality_trends(
        self,
        tenant_id: str,
        check_type: str,
        days_back: int = 30
    ) -> Dict[str, Any]:
        """Get quality trends for a specific check type"""
        try:
            end_date = datetime.utcnow().date()
            start_date = end_date - timedelta(days=days_back)

            checks = await self.get_checks_by_date_range(
                tenant_id, start_date, end_date, check_type
            )

            # Group by date
            daily_metrics = {}
            for check in checks:
                check_date = check.check_time.date()
                if check_date not in daily_metrics:
                    daily_metrics[check_date] = {
                        "total_checks": 0,
                        "passed_checks": 0,
                        "quality_scores": [],
                        "defect_count": 0
                    }

                daily_metrics[check_date]["total_checks"] += 1
                if check.pass_fail:
                    daily_metrics[check_date]["passed_checks"] += 1
                if check.quality_score is not None:
                    daily_metrics[check_date]["quality_scores"].append(check.quality_score)
                daily_metrics[check_date]["defect_count"] += check.defect_count

            # Calculate daily pass rates and averages
            trend_data = []
            for date_key, metrics in sorted(daily_metrics.items()):
                pass_rate = (metrics["passed_checks"] / metrics["total_checks"] * 100) if metrics["total_checks"] > 0 else 0
                avg_score = sum(metrics["quality_scores"]) / len(metrics["quality_scores"]) if metrics["quality_scores"] else 0

                trend_data.append({
                    "date": date_key.isoformat(),
                    "total_checks": metrics["total_checks"],
                    "pass_rate": round(pass_rate, 2),
                    "average_quality_score": round(avg_score, 2),
                    "total_defects": metrics["defect_count"]
                })

            # Overall trend direction: compare the last 7 days against the earlier window
            if len(trend_data) >= 2:
                recent_avg = sum(d["pass_rate"] for d in trend_data[-7:]) / min(7, len(trend_data))
                earlier_avg = sum(d["pass_rate"] for d in trend_data[:-7]) / max(1, len(trend_data) - 7)
                trend_direction = "improving" if recent_avg > earlier_avg else "declining" if recent_avg < earlier_avg else "stable"
            else:
                trend_direction = "insufficient_data"

            return {
                "check_type": check_type,
                "period_start": start_date.isoformat(),
                "period_end": end_date.isoformat(),
                "trend_direction": trend_direction,
                "daily_data": trend_data,
                "total_checks": len(checks),
                "tenant_id": tenant_id
            }

        except Exception as e:
            logger.error("Error calculating quality trends", error=str(e))
            raise DatabaseError(f"Failed to calculate quality trends: {str(e)}")
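A short usage sketch for the metrics helper above (hypothetical caller; session wiring is an assumption):

from datetime import date

async def monthly_quality_report(session, tenant_id: str) -> dict:
    # Hypothetical helper: pull one month of quality metrics for a tenant.
    repo = QualityCheckRepository(session)
    metrics = await repo.get_quality_metrics(tenant_id, date(2025, 1, 1), date(2025, 1, 31))
    # metrics["pass_rate_percentage"] and the per-type pass rates in
    # metrics["by_check_type"] feed the dashboard's quality widgets.
    return metrics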
6
services/production/app/schemas/__init__.py
Normal file
@@ -0,0 +1,6 @@
# ================================================================
# services/production/app/schemas/__init__.py
# ================================================================
"""
Pydantic schemas for request/response models
"""
414
services/production/app/schemas/production.py
Normal file
@@ -0,0 +1,414 @@
# ================================================================
# services/production/app/schemas/production.py
# ================================================================
"""
Pydantic schemas for production service
"""

from pydantic import BaseModel, Field, validator
from typing import Optional, List, Dict, Any, Union
from datetime import datetime, date
from uuid import UUID
from enum import Enum


class ProductionStatusEnum(str, Enum):
    """Production batch status enumeration for API"""
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    CANCELLED = "cancelled"
    ON_HOLD = "on_hold"
    QUALITY_CHECK = "quality_check"
    FAILED = "failed"


class ProductionPriorityEnum(str, Enum):
    """Production priority levels for API"""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    URGENT = "urgent"


class AlertSeverityEnum(str, Enum):
    """Alert severity levels for API"""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


# ================================================================
# PRODUCTION BATCH SCHEMAS
# ================================================================

class ProductionBatchBase(BaseModel):
    """Base schema for production batch"""
    product_id: UUID
    product_name: str = Field(..., min_length=1, max_length=255)
    recipe_id: Optional[UUID] = None
    planned_start_time: datetime
    planned_end_time: datetime
    planned_quantity: float = Field(..., gt=0)
    planned_duration_minutes: int = Field(..., gt=0)
    priority: ProductionPriorityEnum = ProductionPriorityEnum.MEDIUM
    is_rush_order: bool = False
    is_special_recipe: bool = False
    production_notes: Optional[str] = None

    @validator('planned_end_time')
    def validate_end_time_after_start(cls, v, values):
        if 'planned_start_time' in values and v <= values['planned_start_time']:
            raise ValueError('planned_end_time must be after planned_start_time')
        return v


class ProductionBatchCreate(ProductionBatchBase):
    """Schema for creating a production batch"""
    batch_number: Optional[str] = Field(None, max_length=50)
    order_id: Optional[UUID] = None
    forecast_id: Optional[UUID] = None
    equipment_used: Optional[List[str]] = None
    staff_assigned: Optional[List[str]] = None
    station_id: Optional[str] = Field(None, max_length=50)


class ProductionBatchUpdate(BaseModel):
    """Schema for updating a production batch"""
    product_name: Optional[str] = Field(None, min_length=1, max_length=255)
    planned_start_time: Optional[datetime] = None
    planned_end_time: Optional[datetime] = None
    planned_quantity: Optional[float] = Field(None, gt=0)
    planned_duration_minutes: Optional[int] = Field(None, gt=0)
    actual_quantity: Optional[float] = Field(None, ge=0)
    priority: Optional[ProductionPriorityEnum] = None
    equipment_used: Optional[List[str]] = None
    staff_assigned: Optional[List[str]] = None
    station_id: Optional[str] = Field(None, max_length=50)
    production_notes: Optional[str] = None


class ProductionBatchStatusUpdate(BaseModel):
    """Schema for updating production batch status"""
    status: ProductionStatusEnum
    actual_quantity: Optional[float] = Field(None, ge=0)
    notes: Optional[str] = None


class ProductionBatchResponse(BaseModel):
    """Schema for production batch response"""
    id: UUID
    tenant_id: UUID
    batch_number: str
    product_id: UUID
    product_name: str
    recipe_id: Optional[UUID]
    planned_start_time: datetime
    planned_end_time: datetime
    planned_quantity: float
    planned_duration_minutes: int
    actual_start_time: Optional[datetime]
    actual_end_time: Optional[datetime]
    actual_quantity: Optional[float]
    actual_duration_minutes: Optional[int]
    status: ProductionStatusEnum
    priority: ProductionPriorityEnum
    estimated_cost: Optional[float]
    actual_cost: Optional[float]
    yield_percentage: Optional[float]
    quality_score: Optional[float]
    equipment_used: Optional[List[str]]
    staff_assigned: Optional[List[str]]
    station_id: Optional[str]
    order_id: Optional[UUID]
    forecast_id: Optional[UUID]
    is_rush_order: bool
    is_special_recipe: bool
    production_notes: Optional[str]
    quality_notes: Optional[str]
    delay_reason: Optional[str]
    cancellation_reason: Optional[str]
    created_at: datetime
    updated_at: datetime
    completed_at: Optional[datetime]

    class Config:
        from_attributes = True


# ================================================================
# PRODUCTION SCHEDULE SCHEMAS
# ================================================================

class ProductionScheduleBase(BaseModel):
    """Base schema for production schedule"""
    schedule_date: date
    shift_start: datetime
    shift_end: datetime
    total_capacity_hours: float = Field(..., gt=0)
    planned_capacity_hours: float = Field(..., gt=0)
    staff_count: int = Field(..., gt=0)
    equipment_capacity: Optional[Dict[str, Any]] = None
    station_assignments: Optional[Dict[str, Any]] = None
    schedule_notes: Optional[str] = None

    @validator('shift_end')
    def validate_shift_end_after_start(cls, v, values):
        if 'shift_start' in values and v <= values['shift_start']:
            raise ValueError('shift_end must be after shift_start')
        return v

    @validator('planned_capacity_hours')
    def validate_planned_capacity(cls, v, values):
        if 'total_capacity_hours' in values and v > values['total_capacity_hours']:
            raise ValueError('planned_capacity_hours cannot exceed total_capacity_hours')
        return v


class ProductionScheduleCreate(ProductionScheduleBase):
    """Schema for creating a production schedule"""
    pass


class ProductionScheduleUpdate(BaseModel):
    """Schema for updating a production schedule"""
    shift_start: Optional[datetime] = None
    shift_end: Optional[datetime] = None
    total_capacity_hours: Optional[float] = Field(None, gt=0)
    planned_capacity_hours: Optional[float] = Field(None, gt=0)
    staff_count: Optional[int] = Field(None, gt=0)
    overtime_hours: Optional[float] = Field(None, ge=0)
    equipment_capacity: Optional[Dict[str, Any]] = None
    station_assignments: Optional[Dict[str, Any]] = None
    schedule_notes: Optional[str] = None


class ProductionScheduleResponse(BaseModel):
    """Schema for production schedule response"""
    id: UUID
    tenant_id: UUID
    schedule_date: date
    shift_start: datetime
    shift_end: datetime
    total_capacity_hours: float
    planned_capacity_hours: float
    actual_capacity_hours: Optional[float]
    overtime_hours: Optional[float]
    staff_count: int
    equipment_capacity: Optional[Dict[str, Any]]
    station_assignments: Optional[Dict[str, Any]]
    total_batches_planned: int
    total_batches_completed: Optional[int]
    total_quantity_planned: float
    total_quantity_produced: Optional[float]
    is_finalized: bool
    is_active: bool
    efficiency_percentage: Optional[float]
    utilization_percentage: Optional[float]
    on_time_completion_rate: Optional[float]
    schedule_notes: Optional[str]
    schedule_adjustments: Optional[Dict[str, Any]]
    created_at: datetime
    updated_at: datetime
    finalized_at: Optional[datetime]

    class Config:
        from_attributes = True


# ================================================================
# QUALITY CHECK SCHEMAS
# ================================================================

class QualityCheckBase(BaseModel):
    """Base schema for quality check"""
    batch_id: UUID
    check_type: str = Field(..., min_length=1, max_length=50)
    check_time: datetime
    quality_score: float = Field(..., ge=1, le=10)
    pass_fail: bool
    defect_count: int = Field(0, ge=0)
    defect_types: Optional[List[str]] = None
    check_notes: Optional[str] = None


class QualityCheckCreate(QualityCheckBase):
    """Schema for creating a quality check"""
    checker_id: Optional[str] = Field(None, max_length=100)
    measured_weight: Optional[float] = Field(None, gt=0)
    measured_temperature: Optional[float] = None
    measured_moisture: Optional[float] = Field(None, ge=0, le=100)
    measured_dimensions: Optional[Dict[str, float]] = None
    target_weight: Optional[float] = Field(None, gt=0)
    target_temperature: Optional[float] = None
    target_moisture: Optional[float] = Field(None, ge=0, le=100)
    tolerance_percentage: Optional[float] = Field(None, ge=0, le=100)
    corrective_actions: Optional[List[str]] = None


class QualityCheckResponse(BaseModel):
    """Schema for quality check response"""
    id: UUID
    tenant_id: UUID
    batch_id: UUID
    check_type: str
    check_time: datetime
    checker_id: Optional[str]
    quality_score: float
    pass_fail: bool
    defect_count: int
    defect_types: Optional[List[str]]
    measured_weight: Optional[float]
    measured_temperature: Optional[float]
    measured_moisture: Optional[float]
    measured_dimensions: Optional[Dict[str, float]]
    target_weight: Optional[float]
    target_temperature: Optional[float]
    target_moisture: Optional[float]
    tolerance_percentage: Optional[float]
    within_tolerance: Optional[bool]
    corrective_action_needed: bool
    corrective_actions: Optional[List[str]]
    check_notes: Optional[str]
    photos_urls: Optional[List[str]]
    certificate_url: Optional[str]
    created_at: datetime
    updated_at: datetime

    class Config:
        from_attributes = True


# ================================================================
# PRODUCTION ALERT SCHEMAS
# ================================================================

class ProductionAlertBase(BaseModel):
    """Base schema for production alert"""
    alert_type: str = Field(..., min_length=1, max_length=50)
    severity: AlertSeverityEnum = AlertSeverityEnum.MEDIUM
    title: str = Field(..., min_length=1, max_length=255)
    message: str = Field(..., min_length=1)
    batch_id: Optional[UUID] = None
    schedule_id: Optional[UUID] = None


class ProductionAlertCreate(ProductionAlertBase):
    """Schema for creating a production alert"""
    recommended_actions: Optional[List[str]] = None
    impact_level: Optional[str] = Field(None, pattern="^(low|medium|high|critical)$")
    estimated_cost_impact: Optional[float] = Field(None, ge=0)
    estimated_time_impact_minutes: Optional[int] = Field(None, ge=0)
    alert_data: Optional[Dict[str, Any]] = None
    alert_metadata: Optional[Dict[str, Any]] = None


class ProductionAlertResponse(BaseModel):
    """Schema for production alert response"""
    id: UUID
    tenant_id: UUID
    alert_type: str
    severity: AlertSeverityEnum
    title: str
    message: str
    batch_id: Optional[UUID]
    schedule_id: Optional[UUID]
    source_system: str
    is_active: bool
    is_acknowledged: bool
    is_resolved: bool
    recommended_actions: Optional[List[str]]
    actions_taken: Optional[List[Dict[str, Any]]]
    impact_level: Optional[str]
    estimated_cost_impact: Optional[float]
    estimated_time_impact_minutes: Optional[int]
    acknowledged_by: Optional[str]
    acknowledged_at: Optional[datetime]
    resolved_by: Optional[str]
    resolved_at: Optional[datetime]
    resolution_notes: Optional[str]
    alert_data: Optional[Dict[str, Any]]
    alert_metadata: Optional[Dict[str, Any]]
    created_at: datetime
    updated_at: datetime

    class Config:
        from_attributes = True


# ================================================================
# DASHBOARD AND ANALYTICS SCHEMAS
# ================================================================

class ProductionDashboardSummary(BaseModel):
    """Schema for production dashboard summary"""
    active_batches: int
    todays_production_plan: List[Dict[str, Any]]
    capacity_utilization: float
    current_alerts: int
    on_time_completion_rate: float
    average_quality_score: float
    total_output_today: float
    efficiency_percentage: float


class DailyProductionRequirements(BaseModel):
    """Schema for daily production requirements"""
    date: date
    production_plan: List[Dict[str, Any]]
    total_capacity_needed: float
    available_capacity: float
    capacity_gap: float
    urgent_items: int
    recommended_schedule: Optional[Dict[str, Any]]


class ProductionMetrics(BaseModel):
    """Schema for production metrics"""
    period_start: date
    period_end: date
    total_batches: int
    completed_batches: int
    completion_rate: float
    average_yield_percentage: float
    on_time_completion_rate: float
    total_production_cost: float
    average_quality_score: float
    efficiency_trends: List[Dict[str, Any]]


# ================================================================
# REQUEST/RESPONSE WRAPPERS
# ================================================================

class ProductionBatchListResponse(BaseModel):
    """Schema for production batch list response"""
    batches: List[ProductionBatchResponse]
    total_count: int
    page: int
    page_size: int


class ProductionScheduleListResponse(BaseModel):
    """Schema for production schedule list response"""
    schedules: List[ProductionScheduleResponse]
    total_count: int
    page: int
    page_size: int


class QualityCheckListResponse(BaseModel):
    """Schema for quality check list response"""
    quality_checks: List[QualityCheckResponse]
    total_count: int
    page: int
    page_size: int


class ProductionAlertListResponse(BaseModel):
    """Schema for production alert list response"""
    alerts: List[ProductionAlertResponse]
    total_count: int
    page: int
    page_size: int
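The cross-field validators above reject inconsistent payloads at parse time. A quick illustration with hypothetical values:

from datetime import datetime
from uuid import UUID
from pydantic import ValidationError

try:
    ProductionBatchBase(
        product_id=UUID("00000000-0000-0000-0000-000000000001"),
        product_name="Baguette",
        planned_start_time=datetime(2025, 1, 1, 8, 0),
        planned_end_time=datetime(2025, 1, 1, 7, 0),  # earlier than start, so rejected
        planned_quantity=50,
        planned_duration_minutes=60,
    )
except ValidationError as exc:
    print(exc)  # reports: planned_end_time must be after planned_start_time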
14
services/production/app/services/__init__.py
Normal file
@@ -0,0 +1,14 @@
# ================================================================
# services/production/app/services/__init__.py
# ================================================================
"""
Business logic services
"""

from .production_service import ProductionService
from .production_alert_service import ProductionAlertService

__all__ = [
    "ProductionService",
    "ProductionAlertService"
]
435
services/production/app/services/production_alert_service.py
Normal file
@@ -0,0 +1,435 @@
"""
Production Alert Service
Business logic for production alerts and notifications
"""

from typing import Optional, List, Dict, Any
from datetime import datetime, date, timedelta
from uuid import UUID
import structlog

from shared.database.transactions import transactional
from shared.notifications.alert_integration import AlertIntegration
from shared.config.base import BaseServiceSettings

from app.repositories.production_alert_repository import ProductionAlertRepository
from app.repositories.production_batch_repository import ProductionBatchRepository
from app.repositories.production_capacity_repository import ProductionCapacityRepository
from app.models.production import ProductionAlert, AlertSeverity, ProductionStatus
from app.schemas.production import ProductionAlertCreate

logger = structlog.get_logger()


class ProductionAlertService:
    """Production alert service with comprehensive monitoring"""

    def __init__(self, database_manager, config: BaseServiceSettings):
        self.database_manager = database_manager
        self.config = config
        self.alert_integration = AlertIntegration()

    @transactional
    async def check_production_capacity_alerts(self, tenant_id: UUID) -> List[ProductionAlert]:
        """Monitor production capacity and generate alerts"""
        alerts = []

        try:
            async with self.database_manager.get_session() as session:
                batch_repo = ProductionBatchRepository(session)
                capacity_repo = ProductionCapacityRepository(session)
                alert_repo = ProductionAlertRepository(session)

                today = date.today()

                # Check for a capacity-exceeded condition
                todays_batches = await batch_repo.get_batches_by_date_range(
                    str(tenant_id), today, today
                )

                # Calculate total planned hours for today
                total_planned_hours = sum(
                    batch.planned_duration_minutes / 60
                    for batch in todays_batches
                    if batch.status != ProductionStatus.CANCELLED
                )

                # Get available capacity
                available_capacity = await capacity_repo.get_capacity_utilization_summary(
                    str(tenant_id), today, today
                )

                total_capacity = available_capacity.get("total_capacity_units", 8.0)

                if total_planned_hours > total_capacity:
                    excess_hours = total_planned_hours - total_capacity
                    alert_data = ProductionAlertCreate(
                        alert_type="production_capacity_exceeded",
                        severity=AlertSeverity.HIGH,
                        title="Production Capacity Exceeded",
                        message=f"🔥 Capacity exceeded: {excess_hours:.1f} extra hours needed to complete today's production",
                        recommended_actions=[
                            "reschedule_batches",
                            "outsource_production",
                            "adjust_menu",
                            "extend_working_hours"
                        ],
                        impact_level="high",
                        estimated_time_impact_minutes=int(excess_hours * 60),
                        alert_data={
                            "excess_hours": excess_hours,
                            "total_planned_hours": total_planned_hours,
                            "available_capacity_hours": total_capacity,
                            "affected_batches": len(todays_batches)
                        }
                    )

                    alert = await alert_repo.create_alert({
                        **alert_data.model_dump(),
                        "tenant_id": tenant_id
                    })
                    alerts.append(alert)

                # Check for production delays
                current_time = datetime.utcnow()
                cutoff_time = current_time + timedelta(hours=4)  # 4 hours ahead

                urgent_batches = await batch_repo.get_urgent_batches(str(tenant_id), 4)
                delayed_batches = [
                    batch for batch in urgent_batches
                    if batch.planned_start_time <= current_time and batch.status == ProductionStatus.PENDING
                ]

                for batch in delayed_batches:
                    delay_minutes = int((current_time - batch.planned_start_time).total_seconds() / 60)

                    if delay_minutes > self.config.PRODUCTION_DELAY_THRESHOLD_MINUTES:
                        alert_data = ProductionAlertCreate(
                            alert_type="production_delay",
                            severity=AlertSeverity.HIGH,
                            title="Production Delay",
                            message=f"⏰ Delay: {batch.product_name} should have started {delay_minutes} minutes ago",
                            batch_id=batch.id,
                            recommended_actions=[
                                "start_production_immediately",
                                "notify_staff",
                                "prepare_alternatives",
                                "update_customers"
                            ],
                            impact_level="high",
                            estimated_time_impact_minutes=delay_minutes,
                            alert_data={
                                "batch_number": batch.batch_number,
                                "product_name": batch.product_name,
                                "planned_start_time": batch.planned_start_time.isoformat(),
                                "delay_minutes": delay_minutes,
                                "affects_opening": delay_minutes > 120  # 2 hours
                            }
                        )

                        alert = await alert_repo.create_alert({
                            **alert_data.model_dump(),
                            "tenant_id": tenant_id
                        })
                        alerts.append(alert)

                # Check for a cost spike
                high_cost_batches = [
                    batch for batch in todays_batches
                    if batch.estimated_cost and batch.estimated_cost > 100  # threshold
                ]

                if high_cost_batches:
                    total_high_cost = sum(batch.estimated_cost for batch in high_cost_batches)

                    alert_data = ProductionAlertCreate(
                        alert_type="production_cost_spike",
                        severity=AlertSeverity.MEDIUM,
                        title="High Production Costs",
                        message=f"💰 High costs detected: {len(high_cost_batches)} batches with a total cost of €{total_high_cost:.2f}",
                        recommended_actions=[
                            "review_ingredient_costs",
                            "optimize_recipe",
                            "negotiate_supplier_prices",
                            "adjust_menu_pricing"
                        ],
                        impact_level="medium",
                        estimated_cost_impact=total_high_cost,
                        alert_data={
                            "high_cost_batches": len(high_cost_batches),
                            "total_cost": total_high_cost,
                            "average_cost": total_high_cost / len(high_cost_batches),
                            "affected_products": [batch.product_name for batch in high_cost_batches]
                        }
                    )

                    alert = await alert_repo.create_alert({
                        **alert_data.model_dump(),
                        "tenant_id": tenant_id
                    })
                    alerts.append(alert)

                # Send alerts using the notification service
                await self._send_alerts(tenant_id, alerts)

                return alerts

        except Exception as e:
            logger.error("Error checking production capacity alerts",
                         error=str(e), tenant_id=str(tenant_id))
            return []

    @transactional
    async def check_quality_control_alerts(self, tenant_id: UUID) -> List[ProductionAlert]:
        """Monitor quality control issues and generate alerts"""
        alerts = []

        try:
            async with self.database_manager.get_session() as session:
                alert_repo = ProductionAlertRepository(session)
                batch_repo = ProductionBatchRepository(session)

                # Check for batches with low yield
                last_week = date.today() - timedelta(days=7)
                recent_batches = await batch_repo.get_batches_by_date_range(
                    str(tenant_id), last_week, date.today(), ProductionStatus.COMPLETED
                )

                low_yield_batches = [
                    batch for batch in recent_batches
                    if batch.yield_percentage and batch.yield_percentage < self.config.LOW_YIELD_ALERT_THRESHOLD * 100
                ]

                if low_yield_batches:
                    avg_yield = sum(batch.yield_percentage for batch in low_yield_batches) / len(low_yield_batches)

                    alert_data = ProductionAlertCreate(
                        alert_type="low_yield_detected",
                        severity=AlertSeverity.MEDIUM,
                        title="Low Yield Detected",
                        message=f"📉 Low yield: {len(low_yield_batches)} batches with an average yield of {avg_yield:.1f}%",
                        recommended_actions=[
                            "review_recipes",
                            "check_ingredient_quality",
                            "training_staff",
                            "equipment_calibration"
                        ],
                        impact_level="medium",
                        alert_data={
                            "low_yield_batches": len(low_yield_batches),
                            "average_yield": avg_yield,
                            "threshold": self.config.LOW_YIELD_ALERT_THRESHOLD * 100,
                            "affected_products": list(set(batch.product_name for batch in low_yield_batches))
                        }
                    )

                    alert = await alert_repo.create_alert({
                        **alert_data.model_dump(),
                        "tenant_id": tenant_id
                    })
                    alerts.append(alert)

                # Check for recurring quality issues
                quality_issues = [
                    batch for batch in recent_batches
                    if batch.quality_score and batch.quality_score < self.config.QUALITY_SCORE_THRESHOLD
                ]

                if len(quality_issues) >= 3:  # 3 or more quality issues in a week
                    avg_quality = sum(batch.quality_score for batch in quality_issues) / len(quality_issues)

                    alert_data = ProductionAlertCreate(
                        alert_type="recurring_quality_issues",
                        severity=AlertSeverity.HIGH,
                        title="Recurring Quality Issues",
                        message=f"⚠️ Quality issues: {len(quality_issues)} batches with an average quality of {avg_quality:.1f}/10",
                        recommended_actions=[
                            "quality_audit",
                            "staff_retraining",
                            "equipment_maintenance",
                            "supplier_review"
                        ],
                        impact_level="high",
                        alert_data={
                            "quality_issues_count": len(quality_issues),
                            "average_quality_score": avg_quality,
                            "threshold": self.config.QUALITY_SCORE_THRESHOLD,
                            "trend": "declining"
                        }
                    )

                    alert = await alert_repo.create_alert({
                        **alert_data.model_dump(),
                        "tenant_id": tenant_id
                    })
                    alerts.append(alert)

                # Send alerts
                await self._send_alerts(tenant_id, alerts)

                return alerts

        except Exception as e:
            logger.error("Error checking quality control alerts",
                         error=str(e), tenant_id=str(tenant_id))
            return []

    @transactional
    async def check_equipment_maintenance_alerts(self, tenant_id: UUID) -> List[ProductionAlert]:
        """Monitor equipment status and generate maintenance alerts"""
        alerts = []

        try:
            async with self.database_manager.get_session() as session:
                capacity_repo = ProductionCapacityRepository(session)
                alert_repo = ProductionAlertRepository(session)

                # Get equipment that needs maintenance
                today = date.today()
                equipment_capacity = await capacity_repo.get_multi(
                    filters={
                        "tenant_id": str(tenant_id),
                        "resource_type": "equipment",
                        "date": today
                    }
                )

                for equipment in equipment_capacity:
                    # Check whether maintenance is overdue
                    if equipment.last_maintenance_date:
                        days_since_maintenance = (today - equipment.last_maintenance_date.date()).days

                        if days_since_maintenance > 30:  # 30-day threshold
                            alert_data = ProductionAlertCreate(
                                alert_type="equipment_maintenance_overdue",
                                severity=AlertSeverity.MEDIUM,
                                title="Equipment Maintenance Overdue",
                                message=f"🔧 Maintenance overdue: {equipment.resource_name} - {days_since_maintenance} days without maintenance",
                                recommended_actions=[
                                    "schedule_maintenance",
                                    "equipment_inspection",
                                    "backup_equipment_ready"
                                ],
                                impact_level="medium",
                                alert_data={
                                    "equipment_id": equipment.resource_id,
                                    "equipment_name": equipment.resource_name,
                                    "days_since_maintenance": days_since_maintenance,
                                    "last_maintenance": equipment.last_maintenance_date.isoformat() if equipment.last_maintenance_date else None
                                }
                            )

                            alert = await alert_repo.create_alert({
                                **alert_data.model_dump(),
                                "tenant_id": tenant_id
                            })
                            alerts.append(alert)

                    # Check equipment efficiency
                    if equipment.efficiency_rating and equipment.efficiency_rating < 0.8:  # 80% threshold
                        alert_data = ProductionAlertCreate(
                            alert_type="equipment_efficiency_low",
                            severity=AlertSeverity.MEDIUM,
                            title="Low Equipment Efficiency",
                            message=f"📊 Low efficiency: {equipment.resource_name} operating at {equipment.efficiency_rating*100:.1f}%",
                            recommended_actions=[
                                "equipment_calibration",
                                "maintenance_check",
                                "replace_parts"
                            ],
                            impact_level="medium",
                            alert_data={
                                "equipment_id": equipment.resource_id,
                                "equipment_name": equipment.resource_name,
                                "efficiency_rating": equipment.efficiency_rating,
                                "threshold": 0.8
                            }
                        )

                        alert = await alert_repo.create_alert({
                            **alert_data.model_dump(),
                            "tenant_id": tenant_id
                        })
                        alerts.append(alert)

                # Send alerts
                await self._send_alerts(tenant_id, alerts)

                return alerts

        except Exception as e:
            logger.error("Error checking equipment maintenance alerts",
                         error=str(e), tenant_id=str(tenant_id))
            return []

    async def _send_alerts(self, tenant_id: UUID, alerts: List[ProductionAlert]):
        """Send alerts using the notification service with proper urgency handling"""
        try:
            for alert in alerts:
                # Determine delivery channels based on severity
                channels = self._get_channels_by_severity(alert.severity)

                # Send notification using alert integration
                await self.alert_integration.send_alert(
                    tenant_id=str(tenant_id),
                    message=alert.message,
                    alert_type=alert.alert_type,
                    severity=alert.severity.value,
                    channels=channels,
                    data={
                        "actions": alert.recommended_actions or [],
                        "alert_id": str(alert.id)
                    }
                )

                logger.info("Sent production alert notification",
                            alert_id=str(alert.id),
                            alert_type=alert.alert_type,
                            severity=alert.severity.value,
                            channels=channels)

        except Exception as e:
            logger.error("Error sending alert notifications",
                         error=str(e), tenant_id=str(tenant_id))

    def _get_channels_by_severity(self, severity: AlertSeverity) -> List[str]:
        """Map severity to delivery channels following user-centric analysis"""
        if severity == AlertSeverity.CRITICAL:
            return ["whatsapp", "email", "dashboard", "sms"]
        elif severity == AlertSeverity.HIGH:
            return ["whatsapp", "email", "dashboard"]
        elif severity == AlertSeverity.MEDIUM:
            return ["email", "dashboard"]
        else:
            return ["dashboard"]

    @transactional
    async def get_active_alerts(self, tenant_id: UUID) -> List[ProductionAlert]:
        """Get all active production alerts for a tenant"""
        try:
            async with self.database_manager.get_session() as session:
                alert_repo = ProductionAlertRepository(session)
                return await alert_repo.get_active_alerts(str(tenant_id))

        except Exception as e:
            logger.error("Error getting active alerts",
                         error=str(e), tenant_id=str(tenant_id))
            return []

    @transactional
    async def acknowledge_alert(
        self,
        tenant_id: UUID,
        alert_id: UUID,
        acknowledged_by: str
    ) -> ProductionAlert:
        """Acknowledge a production alert"""
        try:
            async with self.database_manager.get_session() as session:
                alert_repo = ProductionAlertRepository(session)
                return await alert_repo.acknowledge_alert(alert_id, acknowledged_by)

        except Exception as e:
            logger.error("Error acknowledging alert",
                         error=str(e), alert_id=str(alert_id), tenant_id=str(tenant_id))
            raise
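A sketch of how a background scheduler might drive these checks (the trigger mechanism is not shown in this commit; the loop, interval, and wiring below are assumptions):

import asyncio
from uuid import UUID

async def run_production_monitoring(service: ProductionAlertService,
                                    tenant_id: UUID,
                                    interval_seconds: int = 900):
    # Hypothetical polling loop: run all three alert checks every 15 minutes.
    while True:
        await service.check_production_capacity_alerts(tenant_id)
        await service.check_quality_control_alerts(tenant_id)
        await service.check_equipment_maintenance_alerts(tenant_id)
        await asyncio.sleep(interval_seconds)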
403
services/production/app/services/production_service.py
Normal file
@@ -0,0 +1,403 @@
|
||||
"""
|
||||
Production Service
|
||||
Main business logic for production operations
|
||||
"""
|
||||
|
||||
from typing import Optional, List, Dict, Any
|
||||
from datetime import datetime, date, timedelta
|
||||
from uuid import UUID
|
||||
import structlog
|
||||
|
||||
from shared.database.transactions import transactional
|
||||
from shared.clients import get_inventory_client, get_sales_client
|
||||
from shared.clients.orders_client import OrdersServiceClient
|
||||
from shared.clients.recipes_client import RecipesServiceClient
|
||||
from shared.config.base import BaseServiceSettings
|
||||
|
||||
from app.repositories.production_batch_repository import ProductionBatchRepository
|
||||
from app.repositories.production_schedule_repository import ProductionScheduleRepository
|
||||
from app.repositories.production_capacity_repository import ProductionCapacityRepository
|
||||
from app.repositories.quality_check_repository import QualityCheckRepository
|
||||
from app.models.production import ProductionBatch, ProductionStatus, ProductionPriority
|
||||
from app.schemas.production import (
|
||||
ProductionBatchCreate, ProductionBatchUpdate, ProductionBatchStatusUpdate,
|
||||
DailyProductionRequirements, ProductionDashboardSummary, ProductionMetrics
|
||||
)
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
class ProductionService:
|
||||
"""Main production service with business logic"""
|
||||
|
||||
def __init__(self, database_manager, config: BaseServiceSettings):
|
||||
self.database_manager = database_manager
|
||||
self.config = config
|
||||
|
||||
# Initialize shared clients
|
||||
self.inventory_client = get_inventory_client(config, "production")
|
||||
self.orders_client = OrdersServiceClient(config)
|
||||
self.recipes_client = RecipesServiceClient(config)
|
||||
self.sales_client = get_sales_client(config, "production")
|
||||
|
||||
@transactional
|
||||
async def calculate_daily_requirements(
|
||||
self,
|
||||
tenant_id: UUID,
|
||||
target_date: date
|
||||
) -> DailyProductionRequirements:
|
||||
"""Calculate production requirements using shared client pattern"""
|
||||
try:
|
||||
# 1. Get demand requirements from Orders Service
|
||||
demand_data = await self.orders_client.get_demand_requirements(
|
||||
str(tenant_id),
|
||||
target_date.isoformat()
|
||||
)
|
||||
|
||||
# 2. Get current stock levels from Inventory Service
|
||||
stock_levels = await self.inventory_client.get_stock_levels(str(tenant_id))
|
||||
|
||||
# 3. Get recipe requirements from Recipes Service
|
||||
recipe_data = await self.recipes_client.get_recipe_requirements(str(tenant_id))
|
||||
|
||||
# 4. Get capacity information
|
||||
async with self.database_manager.get_session() as session:
|
||||
capacity_repo = ProductionCapacityRepository(session)
|
||||
available_capacity = await self._calculate_available_capacity(
|
||||
capacity_repo, tenant_id, target_date
|
||||
)
|
||||
|
||||
# 5. Apply production planning business logic
|
||||
production_plan = await self._calculate_production_plan(
|
||||
tenant_id, target_date, demand_data, stock_levels, recipe_data, available_capacity
|
||||
)
|
||||
|
||||
return production_plan
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error calculating daily production requirements",
|
||||
error=str(e), tenant_id=str(tenant_id), date=target_date.isoformat())
|
||||
raise
|
||||
|
||||
@transactional
|
||||
async def create_production_batch(
|
||||
self,
|
||||
tenant_id: UUID,
|
||||
batch_data: ProductionBatchCreate
|
||||
) -> ProductionBatch:
|
||||
"""Create a new production batch"""
|
||||
try:
|
||||
async with self.database_manager.get_session() as session:
|
||||
batch_repo = ProductionBatchRepository(session)
|
||||
|
||||
# Prepare batch data
|
||||
batch_dict = batch_data.model_dump()
|
||||
batch_dict["tenant_id"] = tenant_id
|
||||
|
||||
# Validate recipe exists if provided
|
||||
if batch_data.recipe_id:
|
||||
recipe_details = await self.recipes_client.get_recipe_by_id(
|
||||
str(tenant_id), str(batch_data.recipe_id)
|
||||
)
|
||||
if not recipe_details:
|
||||
raise ValueError(f"Recipe {batch_data.recipe_id} not found")
|
||||
|
||||
# Check ingredient availability
|
||||
if batch_data.recipe_id:
|
||||
ingredient_requirements = await self.recipes_client.calculate_ingredients_for_quantity(
|
||||
str(tenant_id), str(batch_data.recipe_id), batch_data.planned_quantity
|
||||
)
|
||||
|
||||
if ingredient_requirements:
|
||||
availability_check = await self.inventory_client.check_availability(
|
||||
str(tenant_id), ingredient_requirements.get("requirements", [])
|
||||
)
|
||||
|
||||
if not availability_check or not availability_check.get("all_available", True):
|
||||
logger.warning("Insufficient ingredients for batch",
|
||||
batch_data=batch_dict, availability=availability_check)
|
||||
|
||||
# Create the batch
|
||||
batch = await batch_repo.create_batch(batch_dict)
|
||||
|
||||
logger.info("Production batch created",
|
||||
batch_id=str(batch.id), tenant_id=str(tenant_id))
|
||||
|
||||
return batch
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error creating production batch",
|
||||
error=str(e), tenant_id=str(tenant_id))
|
||||
raise
|
||||
|
||||
@transactional
|
||||
async def update_batch_status(
|
||||
self,
|
||||
tenant_id: UUID,
|
||||
batch_id: UUID,
|
||||
status_update: ProductionBatchStatusUpdate
|
||||
) -> ProductionBatch:
|
||||
"""Update production batch status"""
|
||||
try:
|
||||
async with self.database_manager.get_session() as session:
|
||||
batch_repo = ProductionBatchRepository(session)
|
||||
|
||||
# Update batch status
|
||||
batch = await batch_repo.update_batch_status(
|
||||
batch_id,
|
||||
status_update.status,
|
||||
status_update.actual_quantity,
|
||||
status_update.notes
|
||||
)
|
||||
|
||||
# Update inventory if batch is completed
|
||||
if status_update.status == ProductionStatus.COMPLETED and status_update.actual_quantity:
|
||||
await self._update_inventory_on_completion(
|
||||
tenant_id, batch, status_update.actual_quantity
|
||||
)
|
||||
|
||||
logger.info("Updated batch status",
|
||||
batch_id=str(batch_id),
|
||||
new_status=status_update.status.value,
|
||||
tenant_id=str(tenant_id))
|
||||
|
||||
return batch
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error updating batch status",
|
||||
error=str(e), batch_id=str(batch_id), tenant_id=str(tenant_id))
|
||||
raise
|
||||
|
||||
    @transactional
    async def get_dashboard_summary(self, tenant_id: UUID) -> ProductionDashboardSummary:
        """Get production dashboard summary data"""
        try:
            async with self.database_manager.get_session() as session:
                batch_repo = ProductionBatchRepository(session)

                # Get active batches
                active_batches = await batch_repo.get_active_batches(str(tenant_id))

                # Get today's production plan
                today = date.today()
                todays_batches = await batch_repo.get_batches_by_date_range(
                    str(tenant_id), today, today
                )

                # Calculate metrics
                todays_plan = [
                    {
                        "product_name": batch.product_name,
                        "planned_quantity": batch.planned_quantity,
                        "status": batch.status.value,
                        "completion_time": batch.planned_end_time.isoformat() if batch.planned_end_time else None
                    }
                    for batch in todays_batches
                ]

                # Get metrics for last 7 days
                week_ago = today - timedelta(days=7)
                weekly_metrics = await batch_repo.get_production_metrics(
                    str(tenant_id), week_ago, today
                )

                return ProductionDashboardSummary(
                    active_batches=len(active_batches),
                    todays_production_plan=todays_plan,
                    capacity_utilization=85.0,  # TODO: Calculate from actual capacity data
                    current_alerts=0,  # TODO: Get from alerts
                    on_time_completion_rate=weekly_metrics.get("on_time_completion_rate", 0),
                    average_quality_score=8.5,  # TODO: Get from quality checks
                    total_output_today=sum(b.actual_quantity or 0 for b in todays_batches),
                    efficiency_percentage=weekly_metrics.get("average_yield_percentage", 0)
                )

        except Exception as e:
            logger.error("Error getting dashboard summary",
                         error=str(e), tenant_id=str(tenant_id))
            raise

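    # Example summary with hypothetical values (field names match the
    # ProductionDashboardSummary construction above; capacity_utilization,
    # current_alerts and average_quality_score are still hardcoded TODOs):
    #
    #   active_batches=3, capacity_utilization=85.0, current_alerts=0,
    #   on_time_completion_rate=92.5, average_quality_score=8.5,
    #   total_output_today=240.0, efficiency_percentage=88.1
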
    @transactional
    async def get_production_requirements(
        self,
        tenant_id: UUID,
        target_date: Optional[date] = None
    ) -> Dict[str, Any]:
        """Get production requirements for procurement planning"""
        try:
            if not target_date:
                target_date = date.today()

            # Get planned batches for the date
            async with self.database_manager.get_session() as session:
                batch_repo = ProductionBatchRepository(session)
                planned_batches = await batch_repo.get_batches_by_date_range(
                    str(tenant_id), target_date, target_date, ProductionStatus.PENDING
                )

                # Calculate ingredient requirements
                total_requirements = {}
                for batch in planned_batches:
                    if batch.recipe_id:
                        requirements = await self.recipes_client.calculate_ingredients_for_quantity(
                            str(tenant_id), str(batch.recipe_id), batch.planned_quantity
                        )

                        if requirements and "requirements" in requirements:
                            for req in requirements["requirements"]:
                                ingredient_id = req.get("ingredient_id")
                                quantity = req.get("quantity", 0)

                                if ingredient_id in total_requirements:
                                    total_requirements[ingredient_id]["quantity"] += quantity
                                else:
                                    total_requirements[ingredient_id] = {
                                        "ingredient_id": ingredient_id,
                                        "ingredient_name": req.get("ingredient_name"),
                                        "quantity": quantity,
                                        "unit": req.get("unit"),
                                        "priority": "medium"
                                    }

                return {
                    "date": target_date.isoformat(),
                    "total_batches": len(planned_batches),
                    "ingredient_requirements": list(total_requirements.values()),
                    "estimated_start_time": "06:00:00",
                    # Guard against batches without a planned duration
                    "estimated_duration_hours": sum((b.planned_duration_minutes or 0) for b in planned_batches) / 60
                }

        except Exception as e:
            logger.error("Error getting production requirements",
                         error=str(e), tenant_id=str(tenant_id))
            raise

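    # Illustrative return value (hypothetical numbers), showing how per-batch
    # ingredient requirements are summed per ingredient_id:
    #
    #   {
    #       "date": "2024-05-01",
    #       "total_batches": 4,
    #       "ingredient_requirements": [
    #           {"ingredient_id": "...", "ingredient_name": "Flour",
    #            "quantity": 42.5, "unit": "kg", "priority": "medium"},
    #       ],
    #       "estimated_start_time": "06:00:00",
    #       "estimated_duration_hours": 6.5,
    #   }
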
    async def _calculate_production_plan(
        self,
        tenant_id: UUID,
        target_date: date,
        demand_data: Optional[Dict[str, Any]],
        stock_levels: Optional[Dict[str, Any]],
        recipe_data: Optional[Dict[str, Any]],
        available_capacity: Dict[str, Any]
    ) -> DailyProductionRequirements:
        """Apply production planning business logic"""

        # Default production plan structure
        production_plan = []
        total_capacity_needed = 0.0
        urgent_items = 0

        if demand_data and "demand_items" in demand_data:
            for item in demand_data["demand_items"]:
                product_id = item.get("product_id")
                demand_quantity = item.get("quantity", 0)
                current_stock = 0

                # Find current stock for this product
                if stock_levels and "stock_levels" in stock_levels:
                    for stock in stock_levels["stock_levels"]:
                        if stock.get("product_id") == product_id:
                            current_stock = stock.get("available_quantity", 0)
                            break

                # Calculate production need
                production_needed = max(0, demand_quantity - current_stock)

                if production_needed > 0:
                    # Determine urgency
                    urgency = "high" if demand_quantity > current_stock * 2 else "medium"
                    if urgency == "high":
                        urgent_items += 1

                    # Estimate capacity needed (simplified)
                    estimated_time_hours = production_needed * 0.5  # 30 minutes per unit
                    total_capacity_needed += estimated_time_hours

                    production_plan.append({
                        "product_id": product_id,
                        "product_name": item.get("product_name", f"Product {product_id}"),
                        "current_inventory": current_stock,
                        "demand_forecast": demand_quantity,
                        "pre_orders": item.get("pre_orders", 0),
                        "recommended_production": production_needed,
                        "urgency": urgency
                    })

        return DailyProductionRequirements(
            date=target_date,
            production_plan=production_plan,
            total_capacity_needed=total_capacity_needed,
            available_capacity=available_capacity.get("total_hours", 8.0),
            capacity_gap=max(0, total_capacity_needed - available_capacity.get("total_hours", 8.0)),
            urgent_items=urgent_items,
            recommended_schedule=None
        )

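    # Worked example of the heuristics above (hypothetical numbers):
    # demand 100 units with 30 in stock gives production_needed = 70;
    # urgency is "high" because 100 > 30 * 2; capacity needed is
    # 70 * 0.5 = 35.0h. Against the default 8.0h of available capacity,
    # the reported capacity_gap would be 27.0h.
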
    async def _calculate_available_capacity(
        self,
        capacity_repo: ProductionCapacityRepository,
        tenant_id: UUID,
        target_date: date
    ) -> Dict[str, Any]:
        """Calculate available production capacity for a date"""
        try:
            # Get capacity entries for the date
            equipment_capacity = await capacity_repo.get_available_capacity(
                str(tenant_id), "equipment", target_date, 0
            )

            staff_capacity = await capacity_repo.get_available_capacity(
                str(tenant_id), "staff", target_date, 0
            )

            # Calculate total available hours (simplified)
            total_equipment_hours = sum(c.remaining_capacity_units for c in equipment_capacity)
            total_staff_hours = sum(c.remaining_capacity_units for c in staff_capacity)

            # Capacity is limited by the minimum of equipment or staff
            effective_hours = min(total_equipment_hours, total_staff_hours) if total_staff_hours > 0 else total_equipment_hours

            return {
                "total_hours": effective_hours,
                "equipment_hours": total_equipment_hours,
                "staff_hours": total_staff_hours,
                "utilization_percentage": 0  # To be calculated
            }

        except Exception as e:
            logger.error("Error calculating available capacity", error=str(e))
            # Return default capacity if calculation fails
            return {
                "total_hours": 8.0,
                "equipment_hours": 8.0,
                "staff_hours": 8.0,
                "utilization_percentage": 0
            }

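    # Worked example (hypothetical numbers): 10.0 equipment hours but only
    # 6.0 staff hours yields effective_hours = 6.0 -- the bottleneck resource
    # caps the day's capacity. When no staff capacity entries exist at all,
    # equipment hours alone are used.
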
    async def _update_inventory_on_completion(
        self,
        tenant_id: UUID,
        batch: ProductionBatch,
        actual_quantity: float
    ):
        """Update inventory when a batch is completed"""
        try:
            # Add the produced quantity to inventory
            update_result = await self.inventory_client.update_stock_level(
                str(tenant_id),
                str(batch.product_id),
                actual_quantity,
                f"Production batch {batch.batch_number} completed"
            )

            logger.info("Updated inventory after production completion",
                        batch_id=str(batch.id),
                        product_id=str(batch.product_id),
                        quantity_added=actual_quantity,
                        update_result=update_result)

        except Exception as e:
            logger.error("Error updating inventory on batch completion",
                         error=str(e), batch_id=str(batch.id))
            # Don't raise - inventory update failure shouldn't prevent batch completion
30
services/production/requirements.txt
Normal file
30
services/production/requirements.txt
Normal file
@@ -0,0 +1,30 @@
# Production Service Dependencies
# FastAPI and web framework
fastapi==0.104.1
uvicorn[standard]==0.24.0
pydantic==2.5.0
pydantic-settings==2.1.0

# Database
sqlalchemy==2.0.23
asyncpg==0.29.0
alembic==1.13.1

# HTTP clients
httpx==0.25.2

# Logging and monitoring
structlog==23.2.0

# Date and time utilities
python-dateutil==2.8.2

# Validation and utilities
email-validator==2.1.0

# Authentication
python-jose[cryptography]==3.3.0

# Development dependencies (optional)
pytest==7.4.3
pytest-asyncio==0.21.1