================================================================
# Documentation: services/forecasting/README.md #
================================================================

# Forecasting Service

AI-powered demand prediction service for bakery operations in Madrid, Spain.

## Overview

The Forecasting Service is a specialized microservice responsible for generating accurate demand predictions for bakery products. It integrates trained ML models with real-time weather and traffic data to provide actionable forecasts for business planning.

## Features

### Core Functionality

- **Single Product Forecasting**: Generate predictions for individual products
- **Batch Forecasting**: Process multiple products and time periods
- **Real-time Predictions**: On-demand forecasting with external data
- **Business Rules**: Spanish bakery-specific adjustments
- **Alert System**: Automated notifications for demand anomalies

### Integration Points

- **Training Service**: Loads trained Prophet models
- **Data Service**: Retrieves weather and traffic data
- **Notification Service**: Sends alerts and reports
- **Gateway Service**: Authentication and request routing

## API Endpoints

### Forecasts

- `POST /api/v1/forecasts/single` - Generate single forecast
- `POST /api/v1/forecasts/batch` - Generate batch forecasts
- `GET /api/v1/forecasts/list` - List historical forecasts
- `GET /api/v1/forecasts/alerts` - Get forecast alerts
- `PUT /api/v1/forecasts/alerts/{id}/acknowledge` - Acknowledge alert

### Predictions

- `POST /api/v1/predictions/realtime` - Real-time prediction
- `GET /api/v1/predictions/quick/{product}` - Quick multi-day forecast

## Business Logic

### Spanish Bakery Rules

- **Siesta Impact**: Accounts for reduced afternoon activity
- **Weather Adjustments**: Rain reduces traffic; extreme temperatures affect the product mix
- **Holiday Handling**: Spanish holiday calendar integration
- **Weekend Patterns**: Different demand patterns for weekends

### Business Types

- **Individual Bakery**: Single location with direct sales
- **Central Workshop**: Production facility supplying multiple locations

## Configuration

### Environment Variables

```bash
# Database
DATABASE_URL=postgresql+asyncpg://user:pass@host:port/db

# External Services
TRAINING_SERVICE_URL=http://training-service:8000
DATA_SERVICE_URL=http://data-service:8000

# Business Rules
WEEKEND_ADJUSTMENT_FACTOR=0.8
HOLIDAY_ADJUSTMENT_FACTOR=0.5
RAIN_IMPACT_FACTOR=0.7
```

### Performance Settings

```bash
MAX_FORECAST_DAYS=30
PREDICTION_CACHE_TTL_HOURS=6
FORECAST_BATCH_SIZE=100
```
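
The business-rule variables above are simple multipliers. As a rough sketch of how such factors could be combined with a model's base prediction, the example below applies weekend, holiday, and rain adjustments to a daily demand estimate. The `adjust_forecast` function, its signature, and the order in which the factors are applied are illustrative assumptions, not the service's actual rule engine.

```python
import os
from datetime import date

# Multipliers mirroring the environment variables above; the values shown in
# the configuration block are used as defaults here for illustration.
WEEKEND_ADJUSTMENT_FACTOR = float(os.getenv("WEEKEND_ADJUSTMENT_FACTOR", "0.8"))
HOLIDAY_ADJUSTMENT_FACTOR = float(os.getenv("HOLIDAY_ADJUSTMENT_FACTOR", "0.5"))
RAIN_IMPACT_FACTOR = float(os.getenv("RAIN_IMPACT_FACTOR", "0.7"))


def adjust_forecast(base_demand: float, day: date, is_holiday: bool, is_raining: bool) -> float:
    """Apply bakery business rules to a model's base prediction (illustrative only)."""
    adjusted = base_demand
    if day.weekday() >= 5:          # Saturday/Sunday demand pattern
        adjusted *= WEEKEND_ADJUSTMENT_FACTOR
    if is_holiday:                  # Spanish holiday calendar
        adjusted *= HOLIDAY_ADJUSTMENT_FACTOR
    if is_raining:                  # rain reduces traffic
        adjusted *= RAIN_IMPACT_FACTOR
    return round(adjusted, 1)


# Example: a rainy Saturday that is not a holiday
print(adjust_forecast(120.0, date(2024, 3, 16), is_holiday=False, is_raining=True))
# -> 120 * 0.8 * 0.7 = 67.2
```

With the documented defaults, factors below 1.0 scale demand down; a rainy Saturday, for example, is scaled by 0.8 × 0.7.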

## Development

### Setup

```bash
cd services/forecasting
pip install -r requirements.txt
```

### Testing

```bash
pytest tests/ -v --cov=app
```

### Running Locally

```bash
uvicorn app.main:app --reload --port 8000
```

## Deployment

### Docker

```bash
docker build -t forecasting-service .
docker run -p 8000:8000 forecasting-service
```

### Kubernetes

```bash
kubectl apply -f infrastructure/kubernetes/base/forecasting-service.yaml
```

## Monitoring

### Metrics

- `forecasts_generated_total` - Total forecasts generated
- `predictions_served_total` - Total predictions served
- `forecast_processing_time_seconds` - Processing time histogram
- `active_models_count` - Number of active models

### Health Checks

- `/health` - Service health status
- `/metrics` - Prometheus metrics endpoint

## Performance

### Benchmarks

- **Single Forecast**: < 2 seconds average
- **Batch Forecasting**: 100 products in < 30 seconds
- **Concurrent Load**: 95%+ success rate at 20 concurrent requests

### Optimization

- Model caching for faster predictions
- Feature preparation optimization
- Database query optimization
- Asynchronous external API calls

## Troubleshooting

### Common Issues

1. **No Model Found Error**
   - Ensure the training service has models for the tenant/product
   - Check model training logs in the training service
2. **High Prediction Latency**
   - Monitor the model cache hit rate
   - Check external service response times
   - Review database query performance
3. **Inaccurate Predictions**
   - Verify external data quality (weather/traffic)
   - Check model performance metrics
   - Review business rule configurations

### Logging

```bash
# View service logs
docker logs forecasting-service

# Debug level logging
LOG_LEVEL=DEBUG uvicorn app.main:app
```

## Contributing

1. Follow the existing code structure and patterns
2. Add tests for new functionality
3. Update documentation for API changes
4. Ensure performance benchmarks are maintained

## License

This service is part of the Bakery Forecasting Platform - MIT License
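
## Appendix: Quick Smoke Test

To verify a local or freshly deployed instance end to end, the sketch below checks the `/health` endpoint and then requests a single-product forecast. It is a minimal illustration under stated assumptions: the payload field names (`product`, `forecast_days`) are hypothetical, authentication via the Gateway Service is omitted, and the real request schema should be taken from the service's API documentation.

```python
"""Minimal smoke test against a locally running forecasting service."""
import requests

BASE_URL = "http://localhost:8000"

# 1. Health check (same endpoint used by the platform's health monitoring)
health = requests.get(f"{BASE_URL}/health", timeout=5)
print("health:", health.status_code, health.json())

# 2. Single-product forecast (payload shape is a hypothetical example)
payload = {
    "product": "baguette",
    "forecast_days": 7,
}
resp = requests.post(f"{BASE_URL}/api/v1/forecasts/single", json=payload, timeout=30)
resp.raise_for_status()
print("forecast:", resp.json())
```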