Start generating pytest for training service
@@ -41,6 +41,8 @@ pytest==7.4.3
pytest-asyncio==0.21.1
pytest-mock==3.12.0
httpx==0.25.2
pytest-cov==4.1.0
coverage==7.3.2

# Utilities
python-dateutil==2.8.2

@@ -1,263 +0,0 @@

# Training Service - Complete Testing Suite

## 📁 Test Structure

```
services/training/tests/
├── conftest.py           # Test configuration and fixtures
├── test_api.py           # API endpoint tests
├── test_ml.py            # ML component tests
├── test_service.py       # Service layer tests
├── test_messaging.py     # Messaging tests
└── test_integration.py   # Integration tests
```

## 🧪 Test Coverage

### **1. API Tests (`test_api.py`)**
- ✅ Health check endpoints (`/health`, `/health/ready`, `/health/live`)
- ✅ Metrics endpoint (`/metrics`)
- ✅ Training job creation and management
- ✅ Single product training
- ✅ Job status tracking and cancellation
- ✅ Data validation endpoints
- ✅ Error handling and edge cases
- ✅ Authentication integration

**Key Test Classes:**
- `TestTrainingAPI` - Basic API functionality
- `TestTrainingJobsAPI` - Training job management
- `TestSingleProductTrainingAPI` - Single product workflows
- `TestErrorHandling` - Error scenarios
- `TestAuthenticationIntegration` - Security tests
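
As a point of reference for how these classes exercise the endpoints, here is a minimal sketch of a `TestTrainingAPI`-style health check, assuming the `test_client` fixture from `conftest.py` yields an `httpx.AsyncClient` bound to the app; the response payload keys are assumptions:

```python
# Minimal sketch of an async health-check test; payload keys are assumptions.
import pytest


@pytest.mark.asyncio
async def test_health_check_sketch(test_client):
    response = await test_client.get("/health")
    assert response.status_code == 200
    body = response.json()
    assert body.get("status") in {"ok", "healthy"}  # assumed field name
```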

### **2. ML Component Tests (`test_ml.py`)**
- ✅ Data processor functionality
- ✅ Prophet manager operations
- ✅ ML trainer orchestration
- ✅ Feature engineering validation
- ✅ Model training and validation

**Key Test Classes:**
- `TestBakeryDataProcessor` - Data preparation and feature engineering
- `TestBakeryProphetManager` - Prophet model management
- `TestBakeryMLTrainer` - ML training orchestration
- `TestIntegrationML` - ML component integration

**Key Features Tested:**
- Spanish holiday detection
- Temporal feature engineering
- Weather and traffic data integration
- Model validation and metrics
- Data quality checks
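
Spanish holiday detection lends itself to anchoring on a few fixed national dates. A minimal sketch, assuming the calendar comes from the `holidays` package (the actual implementation may differ):

```python
# Sanity-check the Spanish holiday calendar assumed to drive holiday features.
from datetime import date

import holidays


def test_spanish_holidays_include_known_dates():
    es_holidays = holidays.ES(years=2023)
    assert date(2023, 1, 6) in es_holidays      # Epiphany
    assert date(2023, 12, 25) in es_holidays    # Christmas
    assert date(2023, 3, 7) not in es_holidays  # an ordinary Tuesday
```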

### **3. Service Layer Tests (`test_service.py`)**
- ✅ Training service business logic
- ✅ Database operations
- ✅ External service integration
- ✅ Job lifecycle management
- ✅ Error recovery and resilience

**Key Test Classes:**
- `TestTrainingService` - Core business logic
- `TestTrainingServiceDataFetching` - External API integration
- `TestTrainingServiceExecution` - Training workflow execution
- `TestTrainingServiceEdgeCases` - Edge cases and error conditions
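
For the external API integration cases, the HTTP layer can be swapped for `httpx.MockTransport` so no network is involved. A sketch under the assumption that the Data Service is reached through an `httpx.AsyncClient`; the `/sales` path and payload are illustrative, not the real contract:

```python
# Stub the Data Service with httpx.MockTransport; endpoint and payload are assumed.
import httpx
import pytest


@pytest.mark.asyncio
async def test_sales_fetch_with_stubbed_transport():
    def handler(request: httpx.Request) -> httpx.Response:
        assert request.url.path == "/sales"  # assumed endpoint
        return httpx.Response(200, json=[{"product": "Pan Integral", "quantity": 42}])

    transport = httpx.MockTransport(handler)
    async with httpx.AsyncClient(transport=transport, base_url="http://data-service") as client:
        response = await client.get("/sales", params={"tenant_id": "test-tenant"})

    assert response.json()[0]["quantity"] == 42
```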

### **4. Messaging Tests (`test_messaging.py`)**
- ✅ Event publishing functionality
- ✅ Message structure validation
- ✅ Error handling in messaging
- ✅ Integration with shared components

**Key Test Classes:**
- `TestTrainingMessaging` - Basic messaging operations
- `TestMessagingErrorHandling` - Error scenarios
- `TestMessagingIntegration` - Shared component integration
- `TestMessagingPerformance` - Performance and reliability
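
A sketch of how event publishing can be asserted, assuming the publisher lives at `app.services.messaging.publish_job_started` (this patch target appears elsewhere in the suite) and that starting a job awaits it; the payload fields are assumptions:

```python
# Assert that starting a job publishes a "job started" event; fields are assumed.
from unittest.mock import AsyncMock, patch

import pytest


@pytest.mark.asyncio
async def test_job_started_event_published(test_client):
    with patch("app.services.messaging.publish_job_started", new_callable=AsyncMock) as publish:
        response = await test_client.post("/training/jobs", json={"min_data_points": 30})

    assert response.status_code == 200
    publish.assert_awaited_once()
    assert "tenant_id" in publish.await_args.kwargs  # assumed message field
```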

### **5. Integration Tests (`test_integration.py`)**
- ✅ End-to-end workflow testing
- ✅ Service interaction validation
- ✅ Error handling across boundaries
- ✅ Performance and scalability
- ✅ Security and compliance

**Key Test Classes:**
- `TestTrainingWorkflowIntegration` - Complete workflows
- `TestServiceInteractionIntegration` - Cross-service communication
- `TestErrorHandlingIntegration` - Error propagation
- `TestPerformanceIntegration` - Performance characteristics
- `TestSecurityIntegration` - Security validation
- `TestRecoveryIntegration` - Recovery scenarios
- `TestComplianceIntegration` - GDPR and audit compliance
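
A complete-workflow test typically starts a job over the API and polls its status until it reaches a terminal state. A minimal sketch; the status endpoint path and field names are assumptions:

```python
# Start a job and poll until it finishes; everything beyond POST /training/jobs is assumed.
import asyncio

import pytest


@pytest.mark.asyncio
async def test_training_workflow_reaches_terminal_state(test_client):
    start = await test_client.post("/training/jobs", json={"min_data_points": 30})
    assert start.status_code == 200
    job_id = start.json()["job_id"]

    state = None
    for _ in range(30):
        status_response = await test_client.get(f"/training/jobs/{job_id}")  # assumed path
        state = status_response.json().get("status")
        if state in {"completed", "failed", "cancelled"}:
            break
        await asyncio.sleep(1)

    assert state == "completed"
```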

## 🔧 Test Configuration (`conftest.py`)

### **Fixtures Provided:**
- `test_engine` - Test database engine
- `test_db_session` - Database session for tests
- `test_client` - HTTP test client
- `mock_messaging` - Mocked messaging system
- `mock_data_service` - Mocked external data services
- `mock_ml_trainer` - Mocked ML trainer
- `mock_prophet_manager` - Mocked Prophet manager
- `mock_data_processor` - Mocked data processor
- `training_job_in_db` - Sample training job in database
- `trained_model_in_db` - Sample trained model in database

### **Helper Functions:**
- `assert_training_job_structure()` - Validate job data structure
- `assert_model_structure()` - Validate model data structure
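
One way these fixtures and helpers could be wired in `conftest.py`, assuming the FastAPI app is importable from `app.main` and the client is an `httpx.AsyncClient` over `ASGITransport`; the asserted field names are assumptions:

```python
# Hypothetical conftest.py excerpt; the app import path and field names are assumed.
import httpx
import pytest_asyncio

from app.main import app  # assumed FastAPI application module


@pytest_asyncio.fixture
async def test_client():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        yield client


def assert_training_job_structure(job: dict) -> None:
    for key in ("job_id", "status", "tenant_id"):
        assert key in job, f"missing field: {key}"
```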

## 🚀 Running Tests

### **Run All Tests:**
```bash
cd services/training
pytest tests/ -v
```

### **Run Specific Test Categories:**
```bash
# API tests only
pytest tests/test_api.py -v

# ML component tests
pytest tests/test_ml.py -v

# Service layer tests
pytest tests/test_service.py -v

# Messaging tests
pytest tests/test_messaging.py -v

# Integration tests
pytest tests/test_integration.py -v
```

### **Run with Coverage:**
```bash
pytest tests/ --cov=app --cov-report=html --cov-report=term
```

### **Run Performance Tests:**
```bash
pytest tests/test_integration.py::TestPerformanceIntegration -v
```

### **Skip Slow Tests:**
```bash
pytest tests/ -v -m "not slow"
```
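
The `slow` marker has to be registered somewhere to avoid `PytestUnknownMarkWarning`; a minimal sketch, assuming registration happens in `conftest.py` rather than an ini file:

```python
# Register the custom "slow" marker used by -m "not slow".
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: long-running tests excluded from quick runs")
```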

## 📊 Test Scenarios Covered

### **Happy Path Scenarios:**
- ✅ Complete training workflow (start → progress → completion)
- ✅ Single product training
- ✅ Data validation and preprocessing
- ✅ Model training and storage
- ✅ Event publishing and messaging
- ✅ Job status tracking and cancellation

### **Error Scenarios:**
- ✅ Database connection failures
- ✅ External service unavailability
- ✅ Invalid input data
- ✅ ML training failures
- ✅ Messaging system failures
- ✅ Authentication and authorization errors
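
Failure scenarios like these are usually simulated with a mock whose `side_effect` raises, then asserted with `pytest.raises`. A self-contained sketch; `run_training` here is a stand-in for the real execution path, not the service's actual API:

```python
# Simulate an unavailable external service with side_effect + pytest.raises.
from unittest.mock import AsyncMock

import pytest


async def run_training(fetch_sales):
    # Stand-in for the real job execution path, which begins by fetching data.
    rows = await fetch_sales()
    return len(rows)


@pytest.mark.asyncio
async def test_training_fails_when_data_service_is_down():
    broken_fetch = AsyncMock(side_effect=ConnectionError("data service unreachable"))

    with pytest.raises(ConnectionError):
        await run_training(broken_fetch)

    broken_fetch.assert_awaited_once()
```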

### **Edge Cases:**
- ✅ Concurrent job execution
- ✅ Large datasets
- ✅ Malformed configurations
- ✅ Network timeouts
- ✅ Memory pressure scenarios
- ✅ Rapid successive requests

### **Security Tests:**
- ✅ Tenant isolation
- ✅ Input validation
- ✅ SQL injection protection
- ✅ Authentication enforcement
- ✅ Data access controls

### **Compliance Tests:**
- ✅ Audit trail creation
- ✅ Data retention policies
- ✅ GDPR compliance features
- ✅ Backward compatibility

## 🎯 Test Quality Metrics

### **Coverage Goals:**
- **API Layer:** 95%+ coverage
- **Service Layer:** 90%+ coverage
- **ML Components:** 85%+ coverage
- **Integration:** 80%+ coverage

### **Test Types Distribution:**
- **Unit Tests:** ~60% (isolated component testing)
- **Integration Tests:** ~30% (service interaction testing)
- **End-to-End Tests:** ~10% (complete workflow testing)

### **Performance Benchmarks:**
- All unit tests complete in <5 seconds
- Integration tests complete in <30 seconds
- End-to-end tests complete in <60 seconds

## 🔧 Mocking Strategy

### **External Dependencies Mocked:**
- ✅ **Data Service:** HTTP calls mocked with realistic responses
- ✅ **RabbitMQ:** Message publishing mocked for isolation
- ✅ **Database:** SQLite in-memory for fast testing
- ✅ **Prophet Models:** Training mocked for speed
- ✅ **File System:** Model storage mocked
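
A sketch of the in-memory database setup, assuming SQLAlchemy's async engine with the `aiosqlite` driver and a declarative `Base` importable from `app.models` (both assumptions):

```python
# Hypothetical in-memory database fixtures; driver and Base import path are assumed.
import pytest_asyncio
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

from app.models import Base  # assumed declarative base


@pytest_asyncio.fixture
async def test_engine():
    engine = create_async_engine("sqlite+aiosqlite:///:memory:")
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    yield engine
    await engine.dispose()


@pytest_asyncio.fixture
async def test_db_session(test_engine):
    maker = async_sessionmaker(test_engine, class_=AsyncSession, expire_on_commit=False)
    async with maker() as session:
        yield session
```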

### **Real Components Tested:**
- ✅ **FastAPI Application:** Real app instance
- ✅ **Pydantic Validation:** Real validation logic
- ✅ **SQLAlchemy ORM:** Real database operations
- ✅ **Business Logic:** Real service layer code

## 🛡️ Continuous Integration

### **CI Pipeline Tests:**
```yaml
# Example CI configuration
test_matrix:
  - python: "3.11"
    database: "postgresql"
  - python: "3.11"
    database: "sqlite"

test_commands:
  - pytest tests/ --cov=app --cov-fail-under=85
  - pytest tests/test_integration.py -m "not slow"
  - pytest tests/ --maxfail=1 --tb=short
```

### **Quality Gates:**
- ✅ All tests must pass
- ✅ Coverage must be >85%
- ✅ No critical security issues
- ✅ Performance benchmarks met

## 📈 Test Maintenance

### **Regular Updates:**
- ✅ Add tests for new features
- ✅ Update mocks when APIs change
- ✅ Review and update test data
- ✅ Maintain realistic test scenarios

### **Monitoring:**
- ✅ Test execution time tracking
- ✅ Flaky test identification
- ✅ Coverage trend monitoring
- ✅ Test failure analysis

This comprehensive test suite ensures the training service is robust, reliable, and ready for production deployment! 🎉

- services/training/tests/fixtures/test_data/madrid_traffic_sample.json (7022 lines, vendored, normal file): file diff suppressed because it is too large
- services/training/tests/fixtures/test_data/madrid_weather_sample.json (1802 lines, vendored, normal file): file diff suppressed because it is too large
- services/training/tests/results/coverage_end_to_end.xml (1670 lines, normal file): file diff suppressed because it is too large
- services/training/tests/results/coverage_integration.xml (1670 lines, normal file): file diff suppressed because it is too large
- services/training/tests/results/coverage_performance.xml (1670 lines, normal file): file diff suppressed because it is too large
- services/training/tests/results/coverage_unit.xml (1670 lines, normal file): file diff suppressed because it is too large

services/training/tests/results/junit_end_to_end.xml (5 lines, normal file):

@@ -0,0 +1,5 @@
```xml
<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite name="pytest" errors="2" failures="0" skipped="0" tests="2" time="1.455" timestamp="2025-07-25T11:22:45.219619" hostname="543df414761a"><testcase classname="tests.test_end_to_end.TestTrainingServiceEndToEnd" name="test_complete_training_workflow_api" time="0.034"><error message="failed on setup with &quot;UnboundLocalError: cannot access local variable 'np' where it is not associated with a value&quot;">tests/test_end_to_end.py:75: in real_bakery_data
temp = 15 + 12 * np.sin((date.timetuple().tm_yday / 365) * 2 * np.pi)
E UnboundLocalError: cannot access local variable 'np' where it is not associated with a value</error><error message="failed on teardown with &quot;TypeError: 'str' object is not callable&quot;">tests/conftest.py:464: in setup_test_environment
os.environ.pop(var, None)(scope="session")
E TypeError: 'str' object is not callable</error></testcase></testsuite></testsuites>
```

services/training/tests/results/junit_integration.xml (1 line, normal file):

@@ -0,0 +1 @@
```xml
<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite name="pytest" errors="0" failures="0" skipped="0" tests="0" time="0.204" timestamp="2025-07-25T11:22:43.995108" hostname="543df414761a" /></testsuites>
```

services/training/tests/results/junit_performance.xml (8 lines, normal file):

@@ -0,0 +1,8 @@
```xml
<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite name="pytest" errors="1" failures="0" skipped="0" tests="1" time="0.238" timestamp="2025-07-25T11:22:44.599099" hostname="543df414761a"><testcase classname="" name="tests.test_performance" time="0.000"><error message="collection failure">ImportError while importing test module '/app/tests/test_performance.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_performance.py:16: in <module>
import psutil
E ModuleNotFoundError: No module named 'psutil'</error></testcase></testsuite></testsuites>
```

services/training/tests/results/junit_unit.xml (649 lines, normal file):

@@ -0,0 +1,649 @@

Test suite `pytest`: 83 tests, 35 failures, 23 errors, 2 skipped, 5.714 s (timestamp 2025-07-25T11:22:37.801499, hostname 543df414761a). All cases below are in `tests.test_api`.

| Test case | Time (s) | Result |
|---|---|---|
| `TestTrainingAPI::test_health_check` | 0.030 | Failure at `tests/test_api.py:20`: `AttributeError: 'async_generator' object has no attribute 'get'` |
| `TestTrainingAPI::test_readiness_check_ready` | 0.069 | Failure at `tests/test_api.py:32` while patching `app.main.app.state.ready`: `AttributeError: <starlette.datastructures.State object at 0xffff5ae06a10> does not have the attribute 'ready'` |
| `TestTrainingAPI::test_readiness_check_not_ready` | 0.030 | Failure at `tests/test_api.py:42` while patching `app.main.app.state.ready`: same `AttributeError` on `ready` |
| `TestTrainingAPI::test_liveness_check_healthy` | 0.028 | Failure at `tests/test_api.py:53`: `AttributeError: 'async_generator' object has no attribute 'get'` |
| `TestTrainingAPI::test_liveness_check_unhealthy` | 0.027 | Failure at `tests/test_api.py:63`: `AttributeError: 'async_generator' object has no attribute 'get'` |
| `TestTrainingAPI::test_metrics_endpoint` | 0.027 | Failure at `tests/test_api.py:73`: `AttributeError: 'async_generator' object has no attribute 'get'` |
| `TestTrainingAPI::test_root_endpoint` | 0.026 | Failure at `tests/test_api.py:92`: `AttributeError: 'async_generator' object has no attribute 'get'` |
| `TestTrainingJobsAPI::test_start_training_job_success` | 0.029 | Setup error at `tests/test_api.py:104`: fixture `mock_data_service` not found |
| `TestTrainingJobsAPI::test_start_training_job_validation_error` | 0.027 | Failure at `tests/test_api.py:139`: `AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'` |
| `TestTrainingJobsAPI::test_get_training_status_existing_job` | 0.031 | Setup error at `tests/conftest.py:539`: `TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog` |
| `TestTrainingJobsAPI::test_get_training_status_nonexistent_job` | 0.027 | Failure at `tests/test_api.py:167`: `AttributeError` on `get_current_tenant_id` |
| `TestTrainingJobsAPI::test_list_training_jobs` | 0.028 | Setup error at `tests/conftest.py:539`: `TypeError` on `started_at` |
| `TestTrainingJobsAPI::test_list_training_jobs_with_status_filter` | 0.028 | Setup error at `tests/conftest.py:539`: `TypeError` on `started_at` |
| `TestTrainingJobsAPI::test_cancel_training_job_success` | 0.031 | Setup error at `tests/conftest.py:539`: `TypeError` on `started_at` |
| `TestTrainingJobsAPI::test_cancel_nonexistent_job` | 0.031 | Failure at `tests/test_api.py:233`: `AttributeError` on `get_current_tenant_id` |
| `TestTrainingJobsAPI::test_get_training_logs` | 0.032 | Setup error at `tests/conftest.py:539`: `TypeError` on `started_at` |
| `TestTrainingJobsAPI::test_validate_training_data_valid` | 0.028 | Setup error at `tests/test_api.py:257`: fixture `mock_data_service` not found |
| `TestSingleProductTrainingAPI::test_train_single_product_success` | 0.033 | Setup error at `tests/test_api.py:285`: fixture `mock_data_service` not found |
| `TestSingleProductTrainingAPI::test_train_single_product_validation_error` | 0.033 | Failure at `tests/test_api.py:323`: `AttributeError` on `get_current_tenant_id` |
| `TestSingleProductTrainingAPI::test_train_single_product_special_characters` | 0.030 | Setup error at `tests/test_api.py:331`: fixture `mock_data_service` not found |
| `TestModelsAPI::test_list_models` | 0.028 | Setup error at `tests/test_api.py:360`: fixture `trained_model_in_db` not found |
| `TestModelsAPI::test_get_model_details` | 0.027 | Setup error at `tests/test_api.py:377`: fixture `trained_model_in_db` not found |
| `TestErrorHandling::test_database_error_handling` | 0.032 | Failure at `tests/test_api.py:412`: `AttributeError` on `get_current_tenant_id` |
| `TestErrorHandling::test_missing_tenant_id` | 0.028 | Failure at `tests/test_api.py:427`: `AttributeError: 'async_generator' object has no attribute 'post'` |
| `TestErrorHandling::test_invalid_job_id_format` | 0.028 | Failure at `tests/test_api.py:437`: `AttributeError` on `get_current_tenant_id` |
| `TestErrorHandling::test_messaging_failure_handling` | 0.026 | Setup error at `tests/test_api.py:443`: fixture `mock_data_service` not found |
| `TestErrorHandling::test_invalid_json_payload` | 0.028 | Failure at `tests/test_api.py:469`: `AttributeError` on `get_current_tenant_id` |
| `TestErrorHandling::test_unsupported_content_type` | 0.028 | Failure at `tests/test_api.py:481`: `AttributeError` on `get_current_tenant_id` |

The `get_current_tenant_id` failures are raised by `unittest.mock` while entering `patch('app.api.training.get_current_tenant_id', return_value="test-tenant")`. The `ModelTrainingLog` setup errors are raised from the `training_job_in_db` fixture at `tests/conftest.py:539` through SQLAlchemy's declarative constructor (`sqlalchemy/orm/decl_base.py:2142`). The fixture-not-found setup errors list the fixtures available in the suite: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory.
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAuthenticationIntegration" name="test_endpoints_require_auth" time="0.027"><failure message="AttributeError: 'async_generator' object has no attribute 'post'">tests/test_api.py:512: in test_endpoints_require_auth
|
||||
response = await test_client.post(endpoint, json={})
|
||||
E AttributeError: 'async_generator' object has no attribute 'post'</failure></testcase><testcase classname="tests.test_api.TestAuthenticationIntegration" name="test_tenant_isolation_in_api" time="0.028"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
|
||||
job = ModelTrainingLog(
|
||||
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
|
||||
with util.safe_reraise():
|
||||
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
|
||||
raise exc_value.with_traceback(exc_tb)
|
||||
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
|
||||
manager.original_init(*mixed[1:], **kwargs)
|
||||
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
|
||||
raise TypeError(
|
||||
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase><testcase classname="tests.test_api.TestAPIValidation" name="test_training_request_validation" time="0.027"><failure message="AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'">tests/test_api.py:555: in test_training_request_validation
|
||||
with patch('app.api.training.get_current_tenant_id', return_value="test-tenant"):
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAPIValidation" name="test_single_product_request_validation" time="0.038"><failure message="AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'">tests/test_api.py:591: in test_single_product_request_validation
|
||||
with patch('app.api.training.get_current_tenant_id', return_value="test-tenant"):
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAPIValidation" name="test_query_parameter_validation" time="0.030"><failure message="AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'">tests/test_api.py:612: in test_query_parameter_validation
|
||||
with patch('app.api.training.get_current_tenant_id', return_value="test-tenant"):
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAPIPerformance" name="test_concurrent_requests" time="0.031"><failure message="AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'">tests/test_api.py:643: in test_concurrent_requests
|
||||
with patch('app.api.training.get_current_tenant_id', return_value=f"tenant-{i}"):
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAPIPerformance" name="test_large_payload_handling" time="0.030"><failure message="AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'">tests/test_api.py:665: in test_large_payload_handling
|
||||
with patch('app.api.training.get_current_tenant_id', return_value="test-tenant"):
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1446: in __enter__
|
||||
original, local = self.get_original()
|
||||
/usr/local/lib/python3.11/unittest/mock.py:1419: in get_original
|
||||
raise AttributeError(
|
||||
E AttributeError: <module 'app.api.training' from '/app/app/api/training.py'> does not have the attribute 'get_current_tenant_id'</failure></testcase><testcase classname="tests.test_api.TestAPIPerformance" name="test_rapid_successive_requests" time="0.030"><failure message="AttributeError: 'async_generator' object has no attribute 'get'">tests/test_api.py:681: in test_rapid_successive_requests
|
||||
response = await test_client.get("/health")
|
||||
E AttributeError: 'async_generator' object has no attribute 'get'</failure></testcase><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_prepare_training_data_basic" time="0.049" /><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_prepare_training_data_empty_weather" time="0.045" /><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_prepare_prediction_features" time="0.034" /><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_add_temporal_features" time="0.029" /><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_spanish_holiday_detection" time="0.026" /><testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_prepare_training_data_insufficient_data" time="0.037"><failure message="Failed: DID NOT RAISE <class 'Exception'>">tests/test_ml.py:201: in test_prepare_training_data_insufficient_data
|
||||
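Every `'async_generator' object has no attribute 'post'/'get'` failure above points at the `test_client` fixture being collected as a plain async generator instead of being driven by pytest-asyncio. A minimal sketch of a working fixture, assuming the FastAPI application is exposed through the existing `test_app` fixture (that name comes from the fixture list; the rest is an assumption, not the project's actual conftest):

```python
# conftest.py -- sketch only, assuming pytest-asyncio 0.21 and httpx 0.25
import pytest_asyncio
from httpx import AsyncClient

@pytest_asyncio.fixture  # a bare @pytest.fixture would hand tests the raw async generator
async def test_client(test_app):
    # Send requests straight into the ASGI app; no running server required.
    async with AsyncClient(app=test_app, base_url="http://test") as client:
        yield client
```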
<testcase classname="tests.test_ml.TestBakeryDataProcessor" name="test_prepare_training_data_insufficient_data" time="0.037"><failure message="Failed: DID NOT RAISE <class 'Exception'>">tests/test_ml.py:201: in test_prepare_training_data_insufficient_data
with pytest.raises(Exception):
E Failed: DID NOT RAISE <class 'Exception'></failure></testcase>
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_train_bakery_model_success" time="0.031"><failure message="AttributeError: 'TrainingSettings' object has no attribute 'PROPHET_DAILY_SEASONALITY'">tests/test_ml.py:239: in test_train_bakery_model_success
result = await prophet_manager.train_bakery_model(
app/ml/prophet_manager.py:70: in train_bakery_model
model = self._create_prophet_model(regressor_columns)
app/ml/prophet_manager.py:238: in _create_prophet_model
daily_seasonality=settings.PROPHET_DAILY_SEASONALITY,
/usr/local/lib/python3.11/site-packages/pydantic/main.py:761: in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
E AttributeError: 'TrainingSettings' object has no attribute 'PROPHET_DAILY_SEASONALITY'</failure></testcase>
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_validate_training_data_valid" time="0.028" />
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_validate_training_data_insufficient" time="0.027" />
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_validate_training_data_missing_columns" time="0.027" />
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_get_spanish_holidays" time="0.029" />
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_extract_regressor_columns" time="0.028" />
<testcase classname="tests.test_ml.TestBakeryProphetManager" name="test_generate_forecast" time="0.028" />
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_train_tenant_models_success" time="0.048" />
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_train_single_product_success" time="0.041"><failure message="ValueError: Insufficient training data for Pan Integral: 3 days, minimum required: 30">tests/test_ml.py:414: in test_train_single_product_success
result = await ml_trainer.train_single_product(
app/ml/trainer.py:149: in train_single_product
model_info = await self.prophet_manager.train_bakery_model(
app/ml/prophet_manager.py:61: in train_bakery_model
await self._validate_training_data(df, product_name)
app/ml/prophet_manager.py:158: in _validate_training_data
raise ValueError(
E ValueError: Insufficient training data for Pan Integral: 3 days, minimum required: 30</failure></testcase>
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_train_single_product_no_data" time="0.036"><failure message="KeyError: 'product_name'">tests/test_ml.py:438: in test_train_single_product_no_data
await ml_trainer.train_single_product(
app/ml/trainer.py:134: in train_single_product
product_sales = sales_df[sales_df['product_name'] == product_name].copy()
/usr/local/lib/python3.11/site-packages/pandas/core/frame.py:3893: in __getitem__
indexer = self.columns.get_loc(key)
/usr/local/lib/python3.11/site-packages/pandas/core/indexes/range.py:418: in get_loc
raise KeyError(key)
E KeyError: 'product_name'</failure></testcase>
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_validate_input_data_valid" time="0.032" />
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_validate_input_data_empty" time="0.033" />
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_validate_input_data_missing_columns" time="0.038" />
<testcase classname="tests.test_ml.TestBakeryMLTrainer" name="test_calculate_training_summary" time="0.032" />
<testcase classname="tests.test_ml.TestIntegrationML" name="test_end_to_end_training_flow" time="0.028"><skipped type="pytest.skip" message="Requires actual Prophet dependencies for integration test">/app/tests/test_ml.py:508: Requires actual Prophet dependencies for integration test</skipped></testcase>
<testcase classname="tests.test_ml.TestIntegrationML" name="test_data_pipeline_integration" time="0.028"><skipped type="pytest.skip" message="Requires actual dependencies for integration test">/app/tests/test_ml.py:513: Requires actual dependencies for integration test</skipped></testcase>
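The `TrainingSettings` failure shows `prophet_manager._create_prophet_model` reading `settings.PROPHET_DAILY_SEASONALITY`, which the settings class never defines. A hedged sketch of the missing field; only the daily flag is confirmed by the traceback, and the base class and companion flags are assumptions that depend on how the service's config module is actually written:

```python
# app/core/config.py -- sketch, assuming pydantic v2 with pydantic-settings
from pydantic_settings import BaseSettings

class TrainingSettings(BaseSettings):
    PROPHET_DAILY_SEASONALITY: bool = False   # read at prophet_manager.py:238
    PROPHET_WEEKLY_SEASONALITY: bool = True   # assumed companion setting
    PROPHET_YEARLY_SEASONALITY: bool = True   # assumed companion setting
```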
<testcase classname="tests.test_service.TestTrainingService" name="test_create_training_job_success" time="0.030"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:52: in create_training_job
db.add(training_log)
E AttributeError: 'coroutine' object has no attribute 'add'

During handling of the above exception, another exception occurred:
tests/test_service.py:34: in test_create_training_job_success
result = await training_service.create_training_job(
app/services/training_service.py:61: in create_training_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_create_single_product_job_success" time="0.031"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:84: in create_single_product_job
db.add(training_log)
E AttributeError: 'coroutine' object has no attribute 'add'

During handling of the above exception, another exception occurred:
tests/test_service.py:60: in test_create_single_product_job_success
result = await training_service.create_single_product_job(
app/services/training_service.py:93: in create_single_product_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_get_job_status_existing" time="0.035"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_get_job_status_nonexistent" time="0.030" />
<testcase classname="tests.test_service.TestTrainingService" name="test_list_training_jobs" time="0.031"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_list_training_jobs_with_filter" time="0.035"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_cancel_training_job_success" time="0.035"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_cancel_nonexistent_job" time="0.031"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:270: in cancel_training_job
result = await db.execute(
E AttributeError: 'coroutine' object has no attribute 'execute'

During handling of the above exception, another exception occurred:
tests/test_service.py:175: in test_cancel_nonexistent_job
result = await training_service.cancel_training_job(
app/services/training_service.py:297: in cancel_training_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
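The `'coroutine' object has no attribute 'add'/'execute'/'rollback'` cluster means the service is handed an un-awaited coroutine (or a session factory) where it expects an `AsyncSession`. A minimal sketch of a fixture that yields a real session; the in-memory SQLite URL and the SQLAlchemy 2.0 async API are assumptions inferred from the tracebacks, not the project's actual setup:

```python
# conftest.py -- sketch only
import pytest_asyncio
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

@pytest_asyncio.fixture
async def test_db_session():
    engine = create_async_engine("sqlite+aiosqlite:///:memory:")  # assumed test database
    factory = async_sessionmaker(engine, expire_on_commit=False)
    async with factory() as session:  # yield the session itself, never the bare coroutine
        yield session
    await engine.dispose()
```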
<testcase classname="tests.test_service.TestTrainingService" name="test_validate_training_data_valid" time="0.034"><error message="failed on setup with "file /app/tests/test_service.py, line 183 @pytest.mark.asyncio async def test_validate_training_data_valid( self, training_service, test_db_session, mock_data_service ): """Test validation with valid data""" config = {"min_data_points": 30} result = await training_service.validate_training_data( db=test_db_session, tenant_id="test-tenant", config=config ) assert isinstance(result, dict) assert "is_valid" in result assert "issues" in result assert "recommendations" in result assert "estimated_time_minutes" in result E fixture 'mock_data_service' not found > available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory > use 'pytest --fixtures [testpath]' for help on them. /app/tests/test_service.py:183"">file /app/tests/test_service.py, line 183
@pytest.mark.asyncio
async def test_validate_training_data_valid(
self,
training_service,
test_db_session,
mock_data_service
):
"""Test validation with valid data"""
config = {"min_data_points": 30}

result = await training_service.validate_training_data(
db=test_db_session,
tenant_id="test-tenant",
config=config
)

assert isinstance(result, dict)
assert "is_valid" in result
assert "issues" in result
assert "recommendations" in result
assert "estimated_time_minutes" in result

E fixture 'mock_data_service' not found

> available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory

> use 'pytest --fixtures [testpath]' for help on them.

/app/tests/test_service.py:183</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_validate_training_data_no_data" time="0.031"><failure message="assert True is False">tests/test_service.py:221: in test_validate_training_data_no_data
assert result["is_valid"] is False
E assert True is False</failure></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_update_job_status" time="0.035"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_store_trained_models" time="0.032"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:572: in _store_trained_models
await db.execute(
E AttributeError: 'coroutine' object has no attribute 'execute'

During handling of the above exception, another exception occurred:
tests/test_service.py:280: in test_store_trained_models
await training_service._store_trained_models(
app/services/training_service.py:592: in _store_trained_models
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
<testcase classname="tests.test_service.TestTrainingService" name="test_get_training_logs" time="0.033"><error message="failed on setup with "TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog"">tests/conftest.py:539: in training_job_in_db
job = ModelTrainingLog(
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:566: in _initialize_instance
with util.safe_reraise():
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146: in __exit__
raise exc_value.with_traceback(exc_tb)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:564: in _initialize_instance
manager.original_init(*mixed[1:], **kwargs)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/decl_base.py:2142: in _declarative_constructor
raise TypeError(
E TypeError: 'started_at' is an invalid keyword argument for ModelTrainingLog</error></testcase>
<testcase classname="tests.test_service.TestTrainingServiceDataFetching" name="test_fetch_sales_data_success" time="0.031" />
<testcase classname="tests.test_service.TestTrainingServiceDataFetching" name="test_fetch_sales_data_error" time="0.030" />
<testcase classname="tests.test_service.TestTrainingServiceDataFetching" name="test_fetch_weather_data_success" time="0.040" />
<testcase classname="tests.test_service.TestTrainingServiceDataFetching" name="test_fetch_traffic_data_success" time="0.033" />
<testcase classname="tests.test_service.TestTrainingServiceDataFetching" name="test_fetch_data_with_date_filters" time="0.030" />
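Several setup errors are simply `fixture 'mock_data_service' not found`: the tests request it, but conftest.py never defines it. A hedged sketch of what such a fixture could look like; the payload shape is lifted from the failing test bodies, while the method name and everything else are assumptions:

```python
# conftest.py -- sketch only
from unittest.mock import AsyncMock
import pytest

@pytest.fixture
def mock_data_service():
    # Stand-in for the external data service the training service queries.
    service = AsyncMock()
    service.get_sales_data.return_value = [  # method name is hypothetical
        {"date": "2024-01-01", "product_name": "Pan Integral", "quantity": 45}
    ]
    return service
```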
<testcase classname="tests.test_service.TestTrainingServiceExecution" name="test_execute_training_job_success" time="0.030"><error message="failed on setup with "file /app/tests/test_service.py, line 468 @pytest.mark.asyncio async def test_execute_training_job_success( self, training_service, test_db_session, mock_messaging, mock_data_service ): """Test successful training job execution""" # Create job first job_id = "test-execution-job" training_log = await training_service.create_training_job( db=test_db_session, tenant_id="test-tenant", job_id=job_id, config={"include_weather": True} ) request = TrainingJobRequest( include_weather=True, include_traffic=True, min_data_points=30 ) with patch('app.services.training_service.TrainingService._fetch_sales_data') as mock_fetch_sales, \ patch('app.services.training_service.TrainingService._fetch_weather_data') as mock_fetch_weather, \ patch('app.services.training_service.TrainingService._fetch_traffic_data') as mock_fetch_traffic, \ patch('app.services.training_service.TrainingService._store_trained_models') as mock_store: mock_fetch_sales.return_value = [{"date": "2024-01-01", "product_name": "Pan Integral", "quantity": 45}] mock_fetch_weather.return_value = [] mock_fetch_traffic.return_value = [] mock_store.return_value = None await training_service.execute_training_job( db=test_db_session, job_id=job_id, tenant_id="test-tenant", request=request ) # Verify job was completed updated_job = await training_service.get_job_status( db=test_db_session, job_id=job_id, tenant_id="test-tenant" ) assert updated_job.status == "completed" assert updated_job.progress == 100 E fixture 'mock_data_service' not found > available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory > use 'pytest --fixtures [testpath]' for help on them. /app/tests/test_service.py:468"">file /app/tests/test_service.py, line 468
@pytest.mark.asyncio
async def test_execute_training_job_success(
self,
training_service,
test_db_session,
mock_messaging,
mock_data_service
):
"""Test successful training job execution"""
# Create job first
job_id = "test-execution-job"
training_log = await training_service.create_training_job(
db=test_db_session,
tenant_id="test-tenant",
job_id=job_id,
config={"include_weather": True}
)

request = TrainingJobRequest(
include_weather=True,
include_traffic=True,
min_data_points=30
)

with patch('app.services.training_service.TrainingService._fetch_sales_data') as mock_fetch_sales, \
patch('app.services.training_service.TrainingService._fetch_weather_data') as mock_fetch_weather, \
patch('app.services.training_service.TrainingService._fetch_traffic_data') as mock_fetch_traffic, \
patch('app.services.training_service.TrainingService._store_trained_models') as mock_store:

mock_fetch_sales.return_value = [{"date": "2024-01-01", "product_name": "Pan Integral", "quantity": 45}]
mock_fetch_weather.return_value = []
mock_fetch_traffic.return_value = []
mock_store.return_value = None

await training_service.execute_training_job(
db=test_db_session,
job_id=job_id,
tenant_id="test-tenant",
request=request
)

# Verify job was completed
updated_job = await training_service.get_job_status(
db=test_db_session,
job_id=job_id,
tenant_id="test-tenant"
)

assert updated_job.status == "completed"
assert updated_job.progress == 100

E fixture 'mock_data_service' not found

> available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory

> use 'pytest --fixtures [testpath]' for help on them.

/app/tests/test_service.py:468</error></testcase>
<testcase classname="tests.test_service.TestTrainingServiceExecution" name="test_execute_training_job_failure" time="0.031"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:52: in create_training_job
db.add(training_log)
E AttributeError: 'coroutine' object has no attribute 'add'

During handling of the above exception, another exception occurred:
tests/test_service.py:529: in test_execute_training_job_failure
await training_service.create_training_job(
app/services/training_service.py:61: in create_training_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
<testcase classname="tests.test_service.TestTrainingServiceExecution" name="test_execute_single_product_training_success" time="0.031"><error message="failed on setup with "file /app/tests/test_service.py, line 559 @pytest.mark.asyncio async def test_execute_single_product_training_success( self, training_service, test_db_session, mock_messaging, mock_data_service ): """Test successful single product training execution""" job_id = "test-single-product-job" product_name = "Pan Integral" await training_service.create_single_product_job( db=test_db_session, tenant_id="test-tenant", product_name=product_name, job_id=job_id, config={} ) request = SingleProductTrainingRequest( include_weather=True, include_traffic=False ) with patch('app.services.training_service.TrainingService._fetch_product_sales_data') as mock_fetch_sales, \ patch('app.services.training_service.TrainingService._fetch_weather_data') as mock_fetch_weather, \ patch('app.services.training_service.TrainingService._store_single_trained_model') as mock_store: mock_fetch_sales.return_value = [{"date": "2024-01-01", "product_name": product_name, "quantity": 45}] mock_fetch_weather.return_value = [] mock_store.return_value = None await training_service.execute_single_product_training( db=test_db_session, job_id=job_id, tenant_id="test-tenant", product_name=product_name, request=request ) # Verify job was completed updated_job = await training_service.get_job_status( db=test_db_session, job_id=job_id, tenant_id="test-tenant" ) assert updated_job.status == "completed" assert updated_job.progress == 100 E fixture 'mock_data_service' not found > available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory > use 'pytest --fixtures [testpath]' for help on them. /app/tests/test_service.py:559"">file /app/tests/test_service.py, line 559
@pytest.mark.asyncio
async def test_execute_single_product_training_success(
self,
training_service,
test_db_session,
mock_messaging,
mock_data_service
):
"""Test successful single product training execution"""
job_id = "test-single-product-job"
product_name = "Pan Integral"

await training_service.create_single_product_job(
db=test_db_session,
tenant_id="test-tenant",
product_name=product_name,
job_id=job_id,
config={}
)

request = SingleProductTrainingRequest(
include_weather=True,
include_traffic=False
)

with patch('app.services.training_service.TrainingService._fetch_product_sales_data') as mock_fetch_sales, \
patch('app.services.training_service.TrainingService._fetch_weather_data') as mock_fetch_weather, \
patch('app.services.training_service.TrainingService._store_single_trained_model') as mock_store:

mock_fetch_sales.return_value = [{"date": "2024-01-01", "product_name": product_name, "quantity": 45}]
mock_fetch_weather.return_value = []
mock_store.return_value = None

await training_service.execute_single_product_training(
db=test_db_session,
job_id=job_id,
tenant_id="test-tenant",
product_name=product_name,
request=request
)

# Verify job was completed
updated_job = await training_service.get_job_status(
db=test_db_session,
job_id=job_id,
tenant_id="test-tenant"
)

assert updated_job.status == "completed"
assert updated_job.progress == 100

E fixture 'mock_data_service' not found

> available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, api_test_scenarios, auth_headers, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_mocker, cleanup_after_test, configure_test_logging, corrupted_sales_data, cov, data_quality_test_cases, doctest_namespace, error_scenarios, event_loop, failing_external_services, insufficient_sales_data, integration_test_dependencies, integration_test_setup, large_dataset_for_performance, load_test_configuration, memory_monitor, mock_aemet_client, mock_data_processor, mock_external_services, mock_job_scheduler, mock_madrid_client, mock_messaging, mock_ml_trainer, mock_model_storage, mock_notification_system, mock_prophet_manager, mocker, module_mocker, monkeypatch, no_cover, package_mocker, performance_benchmarks, pytestconfig, real_world_scenarios, record_property, record_testsuite_property, record_xml_attribute, recwarn, sample_bakery_sales_data, sample_model_metadata, sample_single_product_request, sample_traffic_data, sample_training_request, sample_weather_data, seasonal_product_data, session_mocker, setup_test_environment, spanish_holidays_2023, temp_model_storage, test_app, test_client, test_config, test_data_validator, test_db_session, test_metrics_collector, timing_monitor, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, training_job_in_db, training_progress_states, training_service, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory

> use 'pytest --fixtures [testpath]' for help on them.

/app/tests/test_service.py:559</error></testcase>
<testcase classname="tests.test_service.TestTrainingServiceEdgeCases" name="test_database_connection_failure" time="0.029" />
<testcase classname="tests.test_service.TestTrainingServiceEdgeCases" name="test_external_service_timeout" time="0.030" />
<testcase classname="tests.test_service.TestTrainingServiceEdgeCases" name="test_concurrent_job_creation" time="0.028"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:52: in create_training_job
db.add(training_log)
E AttributeError: 'coroutine' object has no attribute 'add'

During handling of the above exception, another exception occurred:
tests/test_service.py:660: in test_concurrent_job_creation
job = await training_service.create_training_job(
app/services/training_service.py:61: in create_training_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
<testcase classname="tests.test_service.TestTrainingServiceEdgeCases" name="test_malformed_config_handling" time="0.001"><failure message="AttributeError: 'coroutine' object has no attribute 'rollback'">app/services/training_service.py:52: in create_training_job
db.add(training_log)
E AttributeError: 'coroutine' object has no attribute 'add'

During handling of the above exception, another exception occurred:
tests/test_service.py:681: in test_malformed_config_handling
job = await training_service.create_training_job(
app/services/training_service.py:61: in create_training_job
await db.rollback()
E AttributeError: 'coroutine' object has no attribute 'rollback'</failure></testcase>
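The most common failure in the API suite is patching `app.api.training.get_current_tenant_id`, a name that module apparently never imports. Overriding the FastAPI dependency (or patching the module that actually defines the function) sidesteps the `AttributeError`. A sketch of the override approach, assuming `test_app` is the FastAPI instance; the import path `app.core.auth` is an assumption:

```python
# conftest.py -- sketch only
import pytest

@pytest.fixture
def override_tenant(test_app):
    from app.core.auth import get_current_tenant_id  # assumed home of the dependency
    test_app.dependency_overrides[get_current_tenant_id] = lambda: "test-tenant"
    yield
    test_app.dependency_overrides.pop(get_current_tenant_id, None)
```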
<testcase classname="tests.test_service.TestTrainingServiceEdgeCases" name="test_malformed_config_handling" time="0.029"><error message="failed on teardown with "TypeError: 'str' object is not callable"">tests/conftest.py:464: in setup_test_environment
os.environ.pop(var, None)(scope="session")
E TypeError: 'str' object is not callable</error></testcase></testsuite></testsuites>
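The final teardown error comes from a stray call in conftest.py: `os.environ.pop(var, None)(scope="session")` tries to call the string returned by `pop`. The `scope="session"` argument belongs on the fixture decorator, not on the cleanup line. A sketch of the presumed intent (the variable list and the `autouse` flag are assumptions):

```python
# conftest.py -- sketch of the intended shape
import os
import pytest

TEST_ENV_VARS = ["DATABASE_URL", "RABBITMQ_URL"]  # assumed list of variables to clean up

@pytest.fixture(scope="session", autouse=True)
def setup_test_environment():
    yield
    for var in TEST_ENV_VARS:
        os.environ.pop(var, None)  # just drop the variable; there is nothing to call
```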
53
services/training/tests/results/test_report.json
Normal file
File diff suppressed because one or more lines are too long
673
services/training/tests/run_tests.py
Normal file
@@ -0,0 +1,673 @@
# ================================================================
# services/training/tests/run_tests.py
# ================================================================
"""
Main test runner script for Training Service
Executes comprehensive test suite and generates reports
"""

import os
import sys
import asyncio
import subprocess
import json
import time
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any
import logging

# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


class TrainingTestRunner:
    """Main test runner for training service"""

    def __init__(self):
        self.test_dir = Path(__file__).parent
        self.results_dir = self.test_dir / "results"
        self.results_dir.mkdir(exist_ok=True)

        # Test configuration
        self.test_suites = {
            "unit": {
                "files": ["test_api.py", "test_ml.py", "test_service.py"],
                "description": "Unit tests for individual components",
                "timeout": 300  # 5 minutes
            },
            "integration": {
                "files": ["test_ml_pipeline_integration.py"],
                "description": "Integration tests for ML pipeline with external data",
                "timeout": 600  # 10 minutes
            },
            "performance": {
                "files": ["test_performance.py"],
                "description": "Performance and load testing",
                "timeout": 900  # 15 minutes
            },
            "end_to_end": {
                "files": ["test_end_to_end.py"],
                "description": "End-to-end workflow testing",
                "timeout": 800  # ~13 minutes
            }
        }

        self.test_results = {}
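        # A new suite can be registered by following the same shape as the entries
        # above, e.g. (hypothetical entry, not part of the current configuration):
        #
        #   "messaging": {
        #       "files": ["test_messaging.py"],
        #       "description": "Event publishing tests",
        #       "timeout": 120,
        #   }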

    async def setup_test_environment(self):
        """Setup test environment and dependencies"""
        logger.info("Setting up test environment...")

        # Check if we're running in Docker
        if os.path.exists("/.dockerenv"):
            logger.info("Running in Docker environment")
        else:
            logger.info("Running in local environment")

        # Verify required files exist
        required_files = [
            "conftest.py",
            "test_ml_pipeline_integration.py",
            "test_performance.py"
        ]

        for file in required_files:
            file_path = self.test_dir / file
            if not file_path.exists():
                logger.warning(f"Required test file missing: {file}")

        # Create test data if needed
        await self.create_test_data()

        # Verify external services (mock or real)
        await self.verify_external_services()

    async def create_test_data(self):
        """Create or verify test data exists"""
        logger.info("Creating/verifying test data...")

        test_data_dir = self.test_dir / "fixtures" / "test_data"
        test_data_dir.mkdir(parents=True, exist_ok=True)

        # Create bakery sales sample if it doesn't exist
        sales_file = test_data_dir / "bakery_sales_sample.csv"
        if not sales_file.exists():
            logger.info("Creating sample sales data...")
            await self.generate_sample_sales_data(sales_file)

        # Create weather data sample
        weather_file = test_data_dir / "madrid_weather_sample.json"
        if not weather_file.exists():
            logger.info("Creating sample weather data...")
            await self.generate_sample_weather_data(weather_file)

        # Create traffic data sample
        traffic_file = test_data_dir / "madrid_traffic_sample.json"
        if not traffic_file.exists():
            logger.info("Creating sample traffic data...")
            await self.generate_sample_traffic_data(traffic_file)

    async def generate_sample_sales_data(self, file_path: Path):
        """Generate sample sales data for testing"""
        import pandas as pd
        import numpy as np
        from datetime import datetime, timedelta

        # Generate 6 months of sample data
        start_date = datetime(2023, 6, 1)
        dates = [start_date + timedelta(days=i) for i in range(180)]

        products = ["Pan Integral", "Croissant", "Magdalenas", "Empanadas", "Tarta Chocolate"]

        data = []
        for date in dates:
            for product in products:
                base_quantity = np.random.randint(10, 100)

                # Weekend boost
                if date.weekday() >= 5:
                    base_quantity *= 1.2

                # Seasonal variation
                temp = 15 + 10 * np.sin((date.timetuple().tm_yday / 365) * 2 * np.pi)

                data.append({
                    "date": date.strftime("%Y-%m-%d"),
                    "product": product,
                    "quantity": int(base_quantity),
                    "revenue": round(base_quantity * np.random.uniform(2.5, 8.0), 2),
                    "temperature": round(temp + np.random.normal(0, 3), 1),
                    "precipitation": max(0, np.random.exponential(0.5)),
                    "is_weekend": date.weekday() >= 5,
                    "is_holiday": False
                })

        df = pd.DataFrame(data)
        df.to_csv(file_path, index=False)
        logger.info(f"Created sample sales data: {len(df)} records")
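        # Illustrative CSV shape (values differ run to run because of the random draws):
        #   date,product,quantity,revenue,temperature,precipitation,is_weekend,is_holiday
        #   2023-06-03,Pan Integral,54,187.20,24.3,0.1,True,False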
async def generate_sample_weather_data(self, file_path: Path):
|
||||
"""Generate sample weather data"""
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
import numpy as np
|
||||
|
||||
start_date = datetime(2023, 6, 1)
|
||||
weather_data = []
|
||||
|
||||
for i in range(180):
|
||||
date = start_date + timedelta(days=i)
|
||||
day_of_year = date.timetuple().tm_yday
|
||||
base_temp = 14 + 12 * np.sin((day_of_year / 365) * 2 * np.pi)
|
||||
|
||||
weather_data.append({
|
||||
"date": date.isoformat(),
|
||||
"temperature": round(base_temp + np.random.normal(0, 5), 1),
|
||||
"precipitation": max(0, np.random.exponential(1.0)),
|
||||
"humidity": np.random.uniform(30, 80),
|
||||
"wind_speed": np.random.uniform(5, 25),
|
||||
"pressure": np.random.uniform(1000, 1025),
|
||||
"description": np.random.choice(["Soleado", "Nuboso", "Lluvioso"]),
|
||||
"source": "aemet_test"
|
||||
})
|
||||
|
||||
with open(file_path, 'w') as f:
|
||||
json.dump(weather_data, f, indent=2)
|
||||
logger.info(f"Created sample weather data: {len(weather_data)} records")
|
||||
|
||||
async def generate_sample_traffic_data(self, file_path: Path):
|
||||
"""Generate sample traffic data"""
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
import numpy as np
|
||||
|
||||
start_date = datetime(2023, 6, 1)
|
||||
traffic_data = []
|
||||
|
||||
for i in range(180):
|
||||
date = start_date + timedelta(days=i)
|
||||
|
||||
for hour in [8, 12, 18]: # Three measurements per day
|
||||
measurement_time = date.replace(hour=hour)
|
||||
|
||||
if hour in [8, 18]: # Rush hours
|
||||
volume = np.random.randint(800, 1500)
|
||||
congestion = "high"
|
||||
else: # Lunch time
|
||||
volume = np.random.randint(400, 800)
|
||||
congestion = "medium"
|
||||
|
||||
traffic_data.append({
|
||||
"date": measurement_time.isoformat(),
|
||||
"traffic_volume": volume,
|
||||
"occupation_percentage": np.random.randint(10, 90),
|
||||
"load_percentage": np.random.randint(20, 95),
|
||||
"average_speed": np.random.randint(15, 50),
|
||||
"congestion_level": congestion,
|
||||
"pedestrian_count": np.random.randint(50, 500),
|
||||
"measurement_point_id": "TEST_POINT_001",
|
||||
"measurement_point_name": "Plaza Mayor",
|
||||
"road_type": "URB",
|
||||
"source": "madrid_opendata_test"
|
||||
})
|
||||
|
||||
with open(file_path, 'w') as f:
|
||||
json.dump(traffic_data, f, indent=2)
|
||||
logger.info(f"Created sample traffic data: {len(traffic_data)} records")
|
||||
|
||||
async def verify_external_services(self):
|
||||
"""Verify external services are available (mock or real)"""
|
||||
logger.info("Verifying external services...")
|
||||
|
||||
# Check if mock services are available
|
||||
mock_services = [
|
||||
("Mock AEMET", "http://localhost:8080/health"),
|
||||
("Mock Madrid OpenData", "http://localhost:8081/health"),
|
||||
("Mock Auth Service", "http://localhost:8082/health"),
|
||||
("Mock Data Service", "http://localhost:8083/health")
|
||||
]
|
||||
|
||||
try:
|
||||
import httpx
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
for service_name, url in mock_services:
|
||||
try:
|
||||
response = await client.get(url)
|
||||
if response.status_code == 200:
|
||||
logger.info(f"{service_name} is available")
|
||||
else:
|
||||
logger.warning(f"{service_name} returned status {response.status_code}")
|
||||
except Exception as e:
|
||||
logger.warning(f"{service_name} is not available: {e}")
|
||||
except ImportError:
|
||||
logger.warning("httpx not available, skipping service checks")
|
||||
|
||||
def run_test_suite(self, suite_name: str) -> Dict[str, Any]:
|
||||
"""Run a specific test suite"""
|
||||
suite_config = self.test_suites[suite_name]
|
||||
logger.info(f"Running {suite_name} test suite: {suite_config['description']}")
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Prepare pytest command
|
||||
pytest_args = [
|
||||
"python", "-m", "pytest",
|
||||
"-v",
|
||||
"--tb=short",
|
||||
"--capture=no",
|
||||
f"--junitxml={self.results_dir}/junit_{suite_name}.xml",
|
||||
f"--cov=app",
|
||||
f"--cov-report=html:{self.results_dir}/coverage_{suite_name}_html",
|
||||
f"--cov-report=xml:{self.results_dir}/coverage_{suite_name}.xml",
|
||||
"--cov-report=term-missing"
|
||||
]
|
||||
|
||||
# Add test files
|
||||
for test_file in suite_config["files"]:
|
||||
test_path = self.test_dir / test_file
|
||||
if test_path.exists():
|
||||
pytest_args.append(str(test_path))
|
||||
else:
|
||||
logger.warning(f"Test file not found: {test_file}")
|
||||
|
||||
# Run the tests
|
||||
try:
|
||||
result = subprocess.run(
|
||||
pytest_args,
|
||||
cwd=self.test_dir.parent, # Run from training service root
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=suite_config["timeout"]
|
||||
)
|
||||
|
||||
duration = time.time() - start_time
|
||||
|
||||
return {
|
||||
"suite": suite_name,
|
||||
"status": "passed" if result.returncode == 0 else "failed",
|
||||
"return_code": result.returncode,
|
||||
"duration": duration,
|
||||
"stdout": result.stdout,
|
||||
"stderr": result.stderr,
|
||||
"timestamp": datetime.now().isoformat()
|
||||
}
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
duration = time.time() - start_time
|
||||
logger.error(f"Test suite {suite_name} timed out after {duration:.2f}s")
|
||||
|
||||
return {
|
||||
"suite": suite_name,
|
||||
"status": "timeout",
|
||||
"return_code": -1,
|
||||
"duration": duration,
|
||||
"stdout": "",
|
||||
"stderr": f"Test suite timed out after {suite_config['timeout']}s",
|
||||
"timestamp": datetime.now().isoformat()
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
duration = time.time() - start_time
|
||||
logger.error(f"Error running test suite {suite_name}: {e}")
|
||||
|
||||
return {
|
||||
"suite": suite_name,
|
||||
"status": "error",
|
||||
"return_code": -1,
|
||||
"duration": duration,
|
||||
"stdout": "",
|
||||
"stderr": str(e),
|
||||
"timestamp": datetime.now().isoformat()
|
||||
}
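# Illustrative sketch (not part of the runner API): run_test_suite can be called
# directly from a Python shell; the suite name "unit" below is assumed to exist
# in self.test_suites and is only an example.
#
#     runner = TrainingTestRunner()
#     outcome = runner.run_test_suite("unit")
#     print(outcome["status"], f"{outcome['duration']:.1f}s")
#
# The returned dict always carries suite, status, return_code, duration,
# stdout, stderr and timestamp, regardless of pass/fail/timeout/error.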
|
||||
|
||||
def generate_test_report(self):
|
||||
"""Generate comprehensive test report"""
|
||||
logger.info("Generating test report...")
|
||||
|
||||
# Calculate summary statistics
|
||||
total_suites = len(self.test_results)
|
||||
passed_suites = sum(1 for r in self.test_results.values() if r["status"] == "passed")
|
||||
failed_suites = sum(1 for r in self.test_results.values() if r["status"] == "failed")
|
||||
error_suites = sum(1 for r in self.test_results.values() if r["status"] == "error")
|
||||
timeout_suites = sum(1 for r in self.test_results.values() if r["status"] == "timeout")
|
||||
|
||||
total_duration = sum(r["duration"] for r in self.test_results.values())
|
||||
|
||||
# Create detailed report
|
||||
report = {
|
||||
"test_run_summary": {
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"total_suites": total_suites,
|
||||
"passed_suites": passed_suites,
|
||||
"failed_suites": failed_suites,
|
||||
"error_suites": error_suites,
|
||||
"timeout_suites": timeout_suites,
|
||||
"success_rate": (passed_suites / total_suites * 100) if total_suites > 0 else 0,
|
||||
"total_duration_seconds": total_duration
|
||||
},
|
||||
"suite_results": self.test_results,
|
||||
"recommendations": self.generate_recommendations()
|
||||
}
|
||||
|
||||
# Save JSON report
|
||||
report_file = self.results_dir / "test_report.json"
|
||||
with open(report_file, 'w') as f:
|
||||
json.dump(report, f, indent=2)
|
||||
|
||||
# Generate HTML report
|
||||
self.generate_html_report(report)
|
||||
|
||||
# Print summary to console
|
||||
self.print_test_summary(report)
|
||||
|
||||
return report
|
||||
|
||||
def generate_recommendations(self) -> List[str]:
|
||||
"""Generate recommendations based on test results"""
|
||||
recommendations = []
|
||||
|
||||
failed_suites = [name for name, result in self.test_results.items() if result["status"] == "failed"]
|
||||
timeout_suites = [name for name, result in self.test_results.items() if result["status"] == "timeout"]
|
||||
|
||||
if failed_suites:
|
||||
recommendations.append(f"Failed test suites: {', '.join(failed_suites)}. Check logs for detailed error messages.")
|
||||
|
||||
if timeout_suites:
|
||||
recommendations.append(f"Timeout in suites: {', '.join(timeout_suites)}. Consider increasing timeout or optimizing performance.")
|
||||
|
||||
# Performance recommendations
|
||||
slow_suites = [
|
||||
name for name, result in self.test_results.items()
|
||||
if result["duration"] > 300 # 5 minutes
|
||||
]
|
||||
if slow_suites:
|
||||
recommendations.append(f"Slow test suites: {', '.join(slow_suites)}. Consider performance optimization.")
|
||||
|
||||
if not recommendations:
|
||||
recommendations.append("All tests passed successfully! Consider adding more edge case tests.")
|
||||
|
||||
return recommendations
|
||||
|
||||
def generate_html_report(self, report: Dict[str, Any]):
|
||||
"""Generate HTML test report"""
|
||||
html_template = """
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Training Service Test Report</title>
|
||||
<style>
|
||||
/* Braces are doubled so str.format() treats them as literal CSS braces. */
body {{ font-family: Arial, sans-serif; margin: 40px; }}
.header {{ background-color: #f8f9fa; padding: 20px; border-radius: 5px; }}
.summary {{ display: flex; gap: 20px; margin: 20px 0; }}
.metric {{ background: white; border: 1px solid #dee2e6; padding: 15px; border-radius: 5px; text-align: center; }}
.metric-value {{ font-size: 24px; font-weight: bold; }}
.passed {{ color: #28a745; }}
.failed {{ color: #dc3545; }}
.timeout {{ color: #fd7e14; }}
.error {{ color: #6c757d; }}
.suite-result {{ margin: 20px 0; padding: 15px; border: 1px solid #dee2e6; border-radius: 5px; }}
.recommendations {{ background-color: #e7f3ff; padding: 15px; border-radius: 5px; margin: 20px 0; }}
pre {{ background-color: #f8f9fa; padding: 10px; border-radius: 3px; overflow-x: auto; }}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="header">
|
||||
<h1>Training Service Test Report</h1>
|
||||
<p>Generated: {timestamp}</p>
|
||||
</div>
|
||||
|
||||
<div class="summary">
|
||||
<div class="metric">
|
||||
<div class="metric-value">{total_suites}</div>
|
||||
<div>Total Suites</div>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<div class="metric-value passed">{passed_suites}</div>
|
||||
<div>Passed</div>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<div class="metric-value failed">{failed_suites}</div>
|
||||
<div>Failed</div>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<div class="metric-value timeout">{timeout_suites}</div>
|
||||
<div>Timeout</div>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<div class="metric-value">{success_rate:.1f}%</div>
|
||||
<div>Success Rate</div>
|
||||
</div>
|
||||
<div class="metric">
|
||||
<div class="metric-value">{duration:.1f}s</div>
|
||||
<div>Total Duration</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="recommendations">
|
||||
<h3>Recommendations</h3>
|
||||
<ul>
|
||||
{recommendations_html}
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<h2>Suite Results</h2>
|
||||
{suite_results_html}
|
||||
|
||||
</body>
|
||||
</html>
|
||||
"""
|
||||
|
||||
# Format recommendations
|
||||
recommendations_html = '\n'.join(
|
||||
f"<li>{rec}</li>" for rec in report["recommendations"]
|
||||
)
|
||||
|
||||
# Format suite results
|
||||
suite_results_html = ""
|
||||
for suite_name, result in report["suite_results"].items():
|
||||
status_class = result["status"]
|
||||
suite_results_html += f"""
|
||||
<div class="suite-result">
|
||||
<h3>{suite_name.title()} Tests <span class="{status_class}">({result["status"].upper()})</span></h3>
|
||||
<p><strong>Duration:</strong> {result["duration"]:.2f}s</p>
|
||||
<p><strong>Return Code:</strong> {result["return_code"]}</p>
|
||||
|
||||
{f'<h4>Output:</h4><pre>{result["stdout"][:1000]}{"..." if len(result["stdout"]) > 1000 else ""}</pre>' if result["stdout"] else ""}
|
||||
{f'<h4>Errors:</h4><pre>{result["stderr"][:1000]}{"..." if len(result["stderr"]) > 1000 else ""}</pre>' if result["stderr"] else ""}
|
||||
</div>
|
||||
"""
|
||||
|
||||
# Fill template
|
||||
html_content = html_template.format(
|
||||
timestamp=report["test_run_summary"]["timestamp"],
|
||||
total_suites=report["test_run_summary"]["total_suites"],
|
||||
passed_suites=report["test_run_summary"]["passed_suites"],
|
||||
failed_suites=report["test_run_summary"]["failed_suites"],
|
||||
timeout_suites=report["test_run_summary"]["timeout_suites"],
|
||||
success_rate=report["test_run_summary"]["success_rate"],
|
||||
duration=report["test_run_summary"]["total_duration_seconds"],
|
||||
recommendations_html=recommendations_html,
|
||||
suite_results_html=suite_results_html
|
||||
)
|
||||
|
||||
# Save HTML report
|
||||
html_file = self.results_dir / "test_report.html"
|
||||
with open(html_file, 'w') as f:
|
||||
f.write(html_content)
|
||||
|
||||
logger.info(f"HTML report saved to: {html_file}")
|
||||
|
||||
def print_test_summary(self, report: Dict[str, Any]):
|
||||
"""Print test summary to console"""
|
||||
summary = report["test_run_summary"]
|
||||
|
||||
print("\n" + "=" * 80)
|
||||
print("TRAINING SERVICE TEST RESULTS SUMMARY")
|
||||
print("=" * 80)
|
||||
print(f"Timestamp: {summary['timestamp']}")
|
||||
print(f"Total Suites: {summary['total_suites']}")
|
||||
print(f"Passed: {summary['passed_suites']}")
|
||||
print(f"Failed: {summary['failed_suites']}")
|
||||
print(f"Errors: {summary['error_suites']}")
|
||||
print(f"Timeouts: {summary['timeout_suites']}")
|
||||
print(f"Success Rate: {summary['success_rate']:.1f}%")
|
||||
print(f"Total Duration: {summary['total_duration_seconds']:.2f}s")
|
||||
|
||||
print("\nSUITE DETAILS:")
|
||||
print("-" * 50)
|
||||
for suite_name, result in report["suite_results"].items():
|
||||
status_icon = "✅" if result["status"] == "passed" else "❌"
|
||||
print(f"{status_icon} {suite_name.ljust(15)}: {result['status'].upper().ljust(10)} ({result['duration']:.2f}s)")
|
||||
|
||||
print("\nRECOMMENDATIONS:")
|
||||
print("-" * 50)
|
||||
for i, rec in enumerate(report["recommendations"], 1):
|
||||
print(f"{i}. {rec}")
|
||||
|
||||
print("\nFILES GENERATED:")
|
||||
print("-" * 50)
|
||||
print(f"📄 JSON Report: {self.results_dir}/test_report.json")
|
||||
print(f"🌐 HTML Report: {self.results_dir}/test_report.html")
|
||||
print(f"📊 Coverage Reports: {self.results_dir}/coverage_*_html/")
|
||||
print(f"📋 JUnit XML: {self.results_dir}/junit_*.xml")
|
||||
print("=" * 80)
|
||||
|
||||
async def run_all_tests(self):
|
||||
"""Run all test suites"""
|
||||
logger.info("Starting comprehensive test run...")
|
||||
|
||||
# Setup environment
|
||||
await self.setup_test_environment()
|
||||
|
||||
# Run each test suite
|
||||
for suite_name in self.test_suites.keys():
|
||||
logger.info(f"Starting {suite_name} test suite...")
|
||||
result = self.run_test_suite(suite_name)
|
||||
self.test_results[suite_name] = result
|
||||
|
||||
if result["status"] == "passed":
|
||||
logger.info(f"✅ {suite_name} tests PASSED ({result['duration']:.2f}s)")
|
||||
elif result["status"] == "failed":
|
||||
logger.error(f"❌ {suite_name} tests FAILED ({result['duration']:.2f}s)")
|
||||
elif result["status"] == "timeout":
|
||||
logger.error(f"⏰ {suite_name} tests TIMED OUT ({result['duration']:.2f}s)")
|
||||
else:
|
||||
logger.error(f"💥 {suite_name} tests ERROR ({result['duration']:.2f}s)")
|
||||
|
||||
# Generate final report
|
||||
report = self.generate_test_report()
|
||||
|
||||
return report
|
||||
|
||||
def run_specific_suite(self, suite_name: str):
|
||||
"""Run a specific test suite"""
|
||||
if suite_name not in self.test_suites:
|
||||
logger.error(f"Unknown test suite: {suite_name}")
|
||||
logger.info(f"Available suites: {', '.join(self.test_suites.keys())}")
|
||||
return None
|
||||
|
||||
logger.info(f"Running {suite_name} test suite only...")
|
||||
result = self.run_test_suite(suite_name)
|
||||
self.test_results[suite_name] = result
|
||||
|
||||
# Generate report for single suite
|
||||
report = self.generate_test_report()
|
||||
return report
|
||||
|
||||
|
||||
# ================================================================
|
||||
# MAIN EXECUTION
|
||||
# ================================================================
|
||||
|
||||
async def main():
|
||||
"""Main execution function"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(description="Training Service Test Runner")
|
||||
parser.add_argument(
|
||||
"--suite",
|
||||
choices=list(TrainingTestRunner().test_suites.keys()) + ["all"],
|
||||
default="all",
|
||||
help="Test suite to run (default: all)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--verbose", "-v",
|
||||
action="store_true",
|
||||
help="Verbose output"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--quick",
|
||||
action="store_true",
|
||||
help="Run quick tests only (skip performance tests)"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Setup logging level
|
||||
if args.verbose:
|
||||
logging.getLogger().setLevel(logging.DEBUG)
|
||||
|
||||
# Create test runner
|
||||
runner = TrainingTestRunner()
|
||||
|
||||
# Modify test suites for quick run
|
||||
if args.quick:
|
||||
# Skip performance tests in quick mode
|
||||
if "performance" in runner.test_suites:
|
||||
del runner.test_suites["performance"]
|
||||
logger.info("Quick mode: Skipping performance tests")
|
||||
|
||||
try:
|
||||
if args.suite == "all":
|
||||
report = await runner.run_all_tests()
|
||||
else:
|
||||
report = runner.run_specific_suite(args.suite)
|
||||
|
||||
# Exit with appropriate code
|
||||
if report and report["test_run_summary"]["failed_suites"] == 0 and report["test_run_summary"]["error_suites"] == 0:
|
||||
logger.info("All tests completed successfully!")
|
||||
sys.exit(0)
|
||||
else:
|
||||
logger.error("Some tests failed!")
|
||||
sys.exit(1)
|
||||
|
||||
except KeyboardInterrupt:
|
||||
logger.info("Test run interrupted by user")
|
||||
sys.exit(130)
|
||||
except Exception as e:
|
||||
logger.error(f"Test run failed with error: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Handle both direct execution and pytest discovery
|
||||
if len(sys.argv) > 1:
|
||||
# Running as main script with arguments
|
||||
asyncio.run(main())
|
||||
else:
|
||||
# Running as pytest discovery or direct execution without args
|
||||
print("Training Service Test Runner")
|
||||
print("=" * 50)
|
||||
print("Usage:")
|
||||
print(" python run_tests.py --suite all # Run all test suites")
|
||||
print(" python run_tests.py --suite unit # Run unit tests only")
|
||||
print(" python run_tests.py --suite integration # Run integration tests only")
|
||||
print(" python run_tests.py --suite performance # Run performance tests only")
|
||||
print(" python run_tests.py --quick # Run quick tests (skip performance)")
|
||||
print(" python run_tests.py -v # Verbose output")
|
||||
print()
|
||||
print("Available test suites:")
|
||||
runner = TrainingTestRunner()
|
||||
for suite_name, config in runner.test_suites.items():
|
||||
print(f" {suite_name.ljust(15)}: {config['description']}")
|
||||
print()
|
||||
|
||||
# If no arguments provided, run all tests
|
||||
if len(sys.argv) == 1:
|
||||
print("No arguments provided. Running all tests...")
|
||||
asyncio.run(TrainingTestRunner().run_all_tests())
|
||||
311
services/training/tests/test_end_to_end.py
Normal file
@@ -0,0 +1,311 @@
|
||||
# ================================================================
|
||||
# services/training/tests/test_end_to_end.py
|
||||
# ================================================================
|
||||
"""
|
||||
End-to-End Testing for Training Service
|
||||
Tests complete workflows from API to ML pipeline to results
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
import httpx
|
||||
import pandas as pd
import numpy as np
|
||||
import json
|
||||
import tempfile
|
||||
import time
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, List, Any
|
||||
from unittest.mock import patch, AsyncMock
|
||||
import uuid
|
||||
|
||||
from app.main import app
|
||||
from app.schemas.training import TrainingJobRequest, SingleProductTrainingRequest
|
||||
|
||||
|
||||
class TestTrainingServiceEndToEnd:
|
||||
"""End-to-end tests for complete training workflows"""
|
||||
|
||||
@pytest.fixture
|
||||
async def test_client(self):
|
||||
"""Create test client for the training service"""
|
||||
from httpx import AsyncClient
|
||||
async with AsyncClient(app=app, base_url="http://test") as client:
|
||||
yield client
|
||||
|
||||
@pytest.fixture
|
||||
def real_bakery_data(self):
|
||||
"""Use the actual bakery sales data from the uploaded CSV"""
|
||||
# This fixture would load the real bakery_sales_2023_2024.csv data
|
||||
# For testing, we'll simulate the structure based on the document description
|
||||
|
||||
# Generate realistic data matching the CSV structure
|
||||
start_date = datetime(2023, 1, 1)
|
||||
dates = [start_date + timedelta(days=i) for i in range(365)]
|
||||
|
||||
products = [
|
||||
"Pan Integral", "Pan Blanco", "Croissant", "Magdalenas",
|
||||
"Empanadas", "Tarta Chocolate", "Roscon Reyes", "Palmeras"
|
||||
]
|
||||
|
||||
data = []
|
||||
for date in dates:
|
||||
for product in products:
|
||||
# Realistic sales patterns for Madrid bakery
|
||||
base_quantity = {
|
||||
"Pan Integral": 80, "Pan Blanco": 120, "Croissant": 45,
|
||||
"Magdalenas": 30, "Empanadas": 25, "Tarta Chocolate": 15,
|
||||
"Roscon Reyes": 8, "Palmeras": 12
|
||||
}.get(product, 20)
|
||||
|
||||
# Seasonal variations
|
||||
if date.month == 12 and product == "Roscon Reyes":
|
||||
base_quantity *= 5 # Christmas specialty
|
||||
elif date.month in [6, 7, 8]: # Summer
|
||||
base_quantity *= 0.85
|
||||
elif date.month in [11, 12, 1]: # Winter
|
||||
base_quantity *= 1.15
|
||||
|
||||
# Weekly patterns
|
||||
if date.weekday() >= 5: # Weekends
|
||||
base_quantity *= 1.3
|
||||
elif date.weekday() == 0: # Monday slower
|
||||
base_quantity *= 0.8
|
||||
|
||||
# Weather influence
|
||||
temp = 15 + 12 * np.sin((date.timetuple().tm_yday / 365) * 2 * np.pi)
|
||||
if temp > 30: # Very hot days
|
||||
if product in ["Pan Integral", "Pan Blanco"]:
|
||||
base_quantity *= 0.7
|
||||
elif temp < 5: # Cold days
|
||||
base_quantity *= 1.1
|
||||
|
||||
# Add realistic noise (numpy is imported at module level)
quantity = max(1, int(base_quantity + np.random.normal(0, base_quantity * 0.15)))
|
||||
|
||||
# Calculate revenue (realistic Spanish bakery prices)
|
||||
price_per_unit = {
|
||||
"Pan Integral": 2.80, "Pan Blanco": 2.50, "Croissant": 1.50,
|
||||
"Magdalenas": 1.20, "Empanadas": 3.50, "Tarta Chocolate": 18.00,
|
||||
"Roscon Reyes": 25.00, "Palmeras": 1.80
|
||||
}.get(product, 2.00)
|
||||
|
||||
revenue = round(quantity * price_per_unit, 2)
|
||||
|
||||
data.append({
|
||||
"date": date.strftime("%Y-%m-%d"),
|
||||
"product": product,
|
||||
"quantity": quantity,
|
||||
"revenue": revenue,
|
||||
"temperature": round(temp + np.random.normal(0, 3), 1),
|
||||
"precipitation": max(0, np.random.exponential(0.8)),
|
||||
"is_weekend": date.weekday() >= 5,
|
||||
"is_holiday": self._is_spanish_holiday(date)
|
||||
})
|
||||
|
||||
return pd.DataFrame(data)
|
||||
|
||||
def _is_spanish_holiday(self, date: datetime) -> bool:
|
||||
"""Check if date is a Spanish holiday"""
|
||||
spanish_holidays = [
|
||||
(1, 1), # Año Nuevo
|
||||
(1, 6), # Reyes Magos
|
||||
(5, 1), # Día del Trabajo
|
||||
(8, 15), # Asunción de la Virgen
|
||||
(10, 12), # Fiesta Nacional de España
|
||||
(11, 1), # Todos los Santos
|
||||
(12, 6), # Día de la Constitución
|
||||
(12, 8), # Inmaculada Concepción
|
||||
(12, 25), # Navidad
|
||||
]
|
||||
return (date.month, date.day) in spanish_holidays
|
||||
|
||||
@pytest.fixture
|
||||
async def mock_external_apis(self):
|
||||
"""Mock external APIs (AEMET and Madrid OpenData)"""
|
||||
with patch('app.external.aemet.AEMETClient') as mock_aemet, \
|
||||
patch('app.external.madrid_opendata.MadridOpenDataClient') as mock_madrid:
|
||||
|
||||
# Mock AEMET weather data
|
||||
mock_aemet_instance = AsyncMock()
|
||||
mock_aemet.return_value = mock_aemet_instance
|
||||
|
||||
# Generate realistic Madrid weather data
|
||||
weather_data = []
|
||||
for i in range(365):
|
||||
date = datetime(2023, 1, 1) + timedelta(days=i)
|
||||
day_of_year = date.timetuple().tm_yday
|
||||
# Madrid climate: hot summers, mild winters
|
||||
base_temp = 14 + 12 * np.sin((day_of_year / 365) * 2 * np.pi)
|
||||
|
||||
weather_data.append({
|
||||
"date": date,
|
||||
"temperature": round(base_temp + np.random.normal(0, 4), 1),
|
||||
"precipitation": max(0, np.random.exponential(1.2)),
|
||||
"humidity": np.random.uniform(25, 75),
|
||||
"wind_speed": np.random.uniform(3, 20),
|
||||
"pressure": np.random.uniform(995, 1025),
|
||||
"description": np.random.choice([
|
||||
"Soleado", "Parcialmente nublado", "Nublado",
|
||||
"Lluvia ligera", "Despejado"
|
||||
]),
|
||||
"source": "aemet"
|
||||
})
|
||||
|
||||
mock_aemet_instance.get_historical_weather.return_value = weather_data
|
||||
mock_aemet_instance.get_current_weather.return_value = weather_data[-1]
|
||||
|
||||
# Mock Madrid traffic data
|
||||
mock_madrid_instance = AsyncMock()
|
||||
mock_madrid.return_value = mock_madrid_instance
|
||||
|
||||
traffic_data = []
|
||||
for i in range(365):
|
||||
date = datetime(2023, 1, 1) + timedelta(days=i)
|
||||
|
||||
# Multiple measurements per day
|
||||
for hour in range(6, 22, 2):  # Every 2 hours from 06:00 to 20:00
|
||||
measurement_time = date.replace(hour=hour)
|
||||
|
||||
# Realistic Madrid traffic patterns
|
||||
if hour in [7, 8, 9, 18, 19, 20]: # Rush hours
|
||||
volume = np.random.randint(1200, 2000)
|
||||
congestion = "high"
|
||||
speed = np.random.randint(10, 25)
|
||||
elif hour in [12, 13, 14]: # Lunch time
|
||||
volume = np.random.randint(800, 1200)
|
||||
congestion = "medium"
|
||||
speed = np.random.randint(20, 35)
|
||||
else: # Off-peak
|
||||
volume = np.random.randint(300, 800)
|
||||
congestion = "low"
|
||||
speed = np.random.randint(30, 50)
|
||||
|
||||
traffic_data.append({
|
||||
"date": measurement_time,
|
||||
"traffic_volume": volume,
|
||||
"occupation_percentage": np.random.randint(15, 85),
|
||||
"load_percentage": np.random.randint(25, 90),
|
||||
"average_speed": speed,
|
||||
"congestion_level": congestion,
|
||||
"pedestrian_count": np.random.randint(100, 800),
|
||||
"measurement_point_id": "MADRID_CENTER_001",
|
||||
"measurement_point_name": "Puerta del Sol",
|
||||
"road_type": "URB",
|
||||
"source": "madrid_opendata"
|
||||
})
|
||||
|
||||
mock_madrid_instance.get_historical_traffic.return_value = traffic_data
|
||||
mock_madrid_instance.get_current_traffic.return_value = traffic_data[-1]
|
||||
|
||||
yield {
|
||||
'aemet': mock_aemet_instance,
|
||||
'madrid': mock_madrid_instance
|
||||
}
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_complete_training_workflow_api(
|
||||
self,
|
||||
test_client,
|
||||
real_bakery_data,
|
||||
mock_external_apis
|
||||
):
|
||||
"""Test complete training workflow through API endpoints"""
|
||||
|
||||
# Step 1: Check service health
|
||||
health_response = await test_client.get("/health")
|
||||
assert health_response.status_code == 200
|
||||
health_data = health_response.json()
|
||||
assert health_data["status"] == "healthy"
|
||||
|
||||
# Step 2: Validate training data quality
|
||||
with patch('app.services.training_service.TrainingService._fetch_sales_data',
|
||||
return_value=real_bakery_data):
|
||||
|
||||
validation_response = await test_client.post(
|
||||
"/training/validate",
|
||||
json={
|
||||
"tenant_id": "test_bakery_001",
|
||||
"include_weather": True,
|
||||
"include_traffic": True
|
||||
}
|
||||
)
|
||||
|
||||
assert validation_response.status_code == 200
|
||||
validation_data = validation_response.json()
|
||||
assert validation_data["is_valid"] is True
|
||||
assert validation_data["data_points"] > 1000 # Sufficient data
|
||||
assert validation_data["missing_percentage"] < 10
|
||||
|
||||
# Step 3: Start training job for multiple products
|
||||
training_request = {
|
||||
"products": ["Pan Integral", "Croissant", "Magdalenas"],
|
||||
"include_weather": True,
|
||||
"include_traffic": True,
|
||||
"config": {
|
||||
"seasonality_mode": "additive",
|
||||
"changepoint_prior_scale": 0.05,
|
||||
"seasonality_prior_scale": 10.0,
|
||||
"validation_enabled": True
|
||||
}
|
||||
}
|
||||
|
||||
with patch('app.services.training_service.TrainingService._fetch_sales_data',
|
||||
return_value=real_bakery_data):
|
||||
|
||||
start_response = await test_client.post(
|
||||
"/training/jobs",
|
||||
json=training_request,
|
||||
headers={"X-Tenant-ID": "test_bakery_001"}
|
||||
)
|
||||
|
||||
assert start_response.status_code == 201
|
||||
job_data = start_response.json()
|
||||
job_id = job_data["job_id"]
|
||||
assert job_data["status"] == "pending"
|
||||
|
||||
# Step 4: Monitor job progress
|
||||
max_wait_time = 300 # 5 minutes
|
||||
start_time = time.time()
|
||||
|
||||
while time.time() - start_time < max_wait_time:
|
||||
status_response = await test_client.get(f"/training/jobs/{job_id}/status")
|
||||
assert status_response.status_code == 200
|
||||
|
||||
status_data = status_response.json()
|
||||
|
||||
if status_data["status"] == "completed":
|
||||
# Training completed successfully
|
||||
assert "models_trained" in status_data
|
||||
assert len(status_data["models_trained"]) == 3 # Three products
|
||||
|
||||
# Check model quality
|
||||
for model_info in status_data["models_trained"]:
|
||||
assert "product_name" in model_info
|
||||
assert "model_id" in model_info
|
||||
assert "metrics" in model_info
|
||||
|
||||
metrics = model_info["metrics"]
|
||||
assert "mape" in metrics
|
||||
assert "rmse" in metrics
|
||||
assert "mae" in metrics
|
||||
|
||||
# Quality thresholds for bakery data
|
||||
assert metrics["mape"] < 50, f"MAPE too high for {model_info['product_name']}: {metrics['mape']}"
|
||||
assert metrics["rmse"] > 0
|
||||
|
||||
break
|
||||
elif status_data["status"] == "failed":
|
||||
pytest.fail(f"Training job failed: {status_data.get('error_message', 'Unknown error')}")
|
||||
|
||||
# Wait before checking again
|
||||
await asyncio.sleep(10)
|
||||
else:
|
||||
pytest.fail(f"Training job did not complete within {max_wait_time} seconds")
|
||||
|
||||
# Step 5: Get detailed job logs
|
||||
logs_response = await test_client.get(f"/training/jobs/{job_id}/logs")
|
||||
assert logs_response.status_code == 200
|
||||
logs_data = logs_response.json()
|
||||
assert "logs" in logs_data
|
||||
assert len(logs_data["logs"]) > 0
|
||||
630
services/training/tests/test_performance.py
Normal file
@@ -0,0 +1,630 @@
|
||||
# ================================================================
|
||||
# services/training/tests/test_performance.py
|
||||
# ================================================================
|
||||
"""
|
||||
Performance and Load Testing for Training Service
|
||||
Tests training performance with real-world data volumes
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
import time
|
||||
from datetime import datetime, timedelta
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
import psutil
|
||||
import gc
|
||||
from typing import List, Dict, Any
|
||||
import logging
from unittest.mock import patch
|
||||
|
||||
from app.ml.trainer import BakeryMLTrainer
|
||||
from app.ml.data_processor import BakeryDataProcessor
|
||||
from app.services.training_service import TrainingService
|
||||
|
||||
|
||||
class TestTrainingPerformance:
|
||||
"""Performance tests for training service components"""
|
||||
|
||||
@pytest.fixture
|
||||
def large_sales_dataset(self):
|
||||
"""Generate large dataset for performance testing (2 years of data)"""
|
||||
start_date = datetime(2022, 1, 1)
|
||||
end_date = datetime(2024, 1, 1)
|
||||
|
||||
date_range = pd.date_range(start=start_date, end=end_date, freq='D')
|
||||
products = [
|
||||
"Pan Integral", "Pan Blanco", "Croissant", "Magdalenas",
|
||||
"Empanadas", "Tarta Chocolate", "Roscon Reyes", "Palmeras",
|
||||
"Donuts", "Berlinas", "Napolitanas", "Ensaimadas"
|
||||
]
|
||||
|
||||
data = []
|
||||
for date in date_range:
|
||||
for product in products:
|
||||
# Realistic sales simulation
|
||||
base_quantity = np.random.randint(5, 150)
|
||||
|
||||
# Seasonal patterns
|
||||
if date.month in [12, 1]: # Winter/Holiday season
|
||||
base_quantity *= 1.4
|
||||
elif date.month in [6, 7, 8]: # Summer
|
||||
base_quantity *= 0.8
|
||||
|
||||
# Weekly patterns
|
||||
if date.weekday() >= 5: # Weekends
|
||||
base_quantity *= 1.2
|
||||
elif date.weekday() == 0: # Monday
|
||||
base_quantity *= 0.7
|
||||
|
||||
# Add noise
|
||||
quantity = max(1, int(base_quantity + np.random.normal(0, base_quantity * 0.1)))
|
||||
|
||||
data.append({
|
||||
"date": date.strftime("%Y-%m-%d"),
|
||||
"product": product,
|
||||
"quantity": quantity,
|
||||
"revenue": round(quantity * np.random.uniform(1.5, 8.0), 2),
|
||||
"temperature": round(15 + 12 * np.sin((date.timetuple().tm_yday / 365) * 2 * np.pi) + np.random.normal(0, 3), 1),
|
||||
"precipitation": max(0, np.random.exponential(0.8)),
|
||||
"is_weekend": date.weekday() >= 5,
|
||||
"is_holiday": self._is_spanish_holiday(date)
|
||||
})
|
||||
|
||||
return pd.DataFrame(data)
|
||||
|
||||
def _is_spanish_holiday(self, date: datetime) -> bool:
|
||||
"""Check if date is a Spanish holiday"""
|
||||
holidays = [
|
||||
(1, 1), # New Year
|
||||
(1, 6), # Epiphany
|
||||
(5, 1), # Labor Day
|
||||
(8, 15), # Assumption
|
||||
(10, 12), # National Day
|
||||
(11, 1), # All Saints
|
||||
(12, 6), # Constitution Day
|
||||
(12, 8), # Immaculate Conception
|
||||
(12, 25), # Christmas
|
||||
]
|
||||
return (date.month, date.day) in holidays
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_single_product_training_performance(self, large_sales_dataset):
|
||||
"""Test performance of single product training with large dataset"""
|
||||
|
||||
trainer = BakeryMLTrainer()
|
||||
product_data = large_sales_dataset[large_sales_dataset['product'] == 'Pan Integral'].copy()
|
||||
|
||||
# Measure memory before training
|
||||
process = psutil.Process()
|
||||
memory_before = process.memory_info().rss / 1024 / 1024 # MB
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
result = await trainer.train_single_product(
|
||||
tenant_id="perf_test_tenant",
|
||||
product_name="Pan Integral",
|
||||
sales_data=product_data,
|
||||
config={
|
||||
"include_weather": True,
|
||||
"include_traffic": False, # Skip traffic for performance
|
||||
"seasonality_mode": "additive"
|
||||
}
|
||||
)
|
||||
|
||||
end_time = time.time()
|
||||
training_duration = end_time - start_time
|
||||
|
||||
# Measure memory after training
|
||||
memory_after = process.memory_info().rss / 1024 / 1024 # MB
|
||||
memory_used = memory_after - memory_before
|
||||
|
||||
# Performance assertions
|
||||
assert training_duration < 120, f"Training took too long: {training_duration:.2f}s"
|
||||
assert memory_used < 500, f"Memory usage too high: {memory_used:.2f}MB"
|
||||
assert result['status'] == 'completed'
|
||||
|
||||
# Quality assertions
|
||||
metrics = result['metrics']
|
||||
assert metrics['mape'] < 50, f"MAPE too high: {metrics['mape']:.2f}%"
|
||||
|
||||
print(f"Performance Results:")
|
||||
print(f" Training Duration: {training_duration:.2f}s")
|
||||
print(f" Memory Used: {memory_used:.2f}MB")
|
||||
print(f" Data Points: {len(product_data)}")
|
||||
print(f" MAPE: {metrics['mape']:.2f}%")
|
||||
print(f" RMSE: {metrics['rmse']:.2f}")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_concurrent_training_performance(self, large_sales_dataset):
|
||||
"""Test performance of concurrent training jobs"""
|
||||
|
||||
trainer = BakeryMLTrainer()
|
||||
products = ["Pan Integral", "Croissant", "Magdalenas"]
|
||||
|
||||
async def train_product(product_name: str):
|
||||
"""Train a single product"""
|
||||
product_data = large_sales_dataset[large_sales_dataset['product'] == product_name].copy()
|
||||
|
||||
start_time = time.time()
|
||||
result = await trainer.train_single_product(
|
||||
tenant_id=f"concurrent_test_{product_name.replace(' ', '_').lower()}",
|
||||
product_name=product_name,
|
||||
sales_data=product_data,
|
||||
config={"include_weather": True, "include_traffic": False}
|
||||
)
|
||||
end_time = time.time()
|
||||
|
||||
return {
|
||||
'product': product_name,
|
||||
'duration': end_time - start_time,
|
||||
'status': result['status'],
|
||||
'metrics': result.get('metrics', {})
|
||||
}
|
||||
|
||||
# Run concurrent training
|
||||
start_time = time.time()
|
||||
tasks = [train_product(product) for product in products]
|
||||
results = await asyncio.gather(*tasks)
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Verify all trainings completed
|
||||
for result in results:
|
||||
assert result['status'] == 'completed'
|
||||
assert result['duration'] < 120 # Individual training time
|
||||
|
||||
# Concurrent execution should be faster than sequential
|
||||
sequential_time_estimate = sum(r['duration'] for r in results)
|
||||
efficiency = sequential_time_estimate / total_time
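# Worked example: if three trainings take ~40s each (120s sequentially) but
# finish in 60s of wall-clock time, efficiency = 120 / 60 = 2.0x.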
|
||||
|
||||
assert efficiency > 1.5, f"Concurrency efficiency too low: {efficiency:.2f}x"
|
||||
|
||||
print(f"Concurrent Training Results:")
|
||||
print(f" Total Time: {total_time:.2f}s")
|
||||
print(f" Sequential Estimate: {sequential_time_estimate:.2f}s")
|
||||
print(f" Efficiency: {efficiency:.2f}x")
|
||||
|
||||
for result in results:
|
||||
print(f" {result['product']}: {result['duration']:.2f}s, MAPE: {result['metrics'].get('mape', 'N/A'):.2f}%")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_data_processing_scalability(self, large_sales_dataset):
|
||||
"""Test data processing performance with increasing data sizes"""
|
||||
|
||||
data_processor = BakeryDataProcessor()
|
||||
|
||||
# Test with different data sizes
|
||||
data_sizes = [1000, 5000, 10000, 20000, len(large_sales_dataset)]
|
||||
performance_results = []
|
||||
|
||||
for size in data_sizes:
|
||||
# Take a sample of the specified size
|
||||
sample_data = large_sales_dataset.head(size).copy()
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Process the data
|
||||
processed_data = await data_processor.prepare_training_data(
|
||||
sales_data=sample_data,
|
||||
include_weather=True,
|
||||
include_traffic=True,
|
||||
tenant_id="scalability_test",
|
||||
product_name="Pan Integral"
|
||||
)
|
||||
|
||||
processing_time = time.time() - start_time
|
||||
|
||||
performance_results.append({
|
||||
'data_size': size,
|
||||
'processing_time': processing_time,
|
||||
'processed_rows': len(processed_data),
|
||||
'throughput': size / processing_time if processing_time > 0 else 0
|
||||
})
|
||||
|
||||
# Verify linear or sub-linear scaling
|
||||
for i in range(1, len(performance_results)):
|
||||
prev_result = performance_results[i-1]
|
||||
curr_result = performance_results[i]
|
||||
|
||||
size_ratio = curr_result['data_size'] / prev_result['data_size']
|
||||
time_ratio = curr_result['processing_time'] / prev_result['processing_time']
|
||||
|
||||
# Processing time should scale better than linearly
|
||||
assert time_ratio < size_ratio * 1.5, f"Poor scaling at size {curr_result['data_size']}"
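# Worked example: doubling the sample (size_ratio = 2.0) is acceptable as long
# as processing time grows by less than 3.0x (2.0 * 1.5).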
|
||||
|
||||
print("Data Processing Scalability Results:")
|
||||
for result in performance_results:
|
||||
print(f" Size: {result['data_size']:,} rows, Time: {result['processing_time']:.2f}s, "
|
||||
f"Throughput: {result['throughput']:.0f} rows/s")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_memory_usage_optimization(self, large_sales_dataset):
|
||||
"""Test memory usage optimization during training"""
|
||||
|
||||
trainer = BakeryMLTrainer()
|
||||
process = psutil.Process()
|
||||
|
||||
# Baseline memory
|
||||
gc.collect() # Force garbage collection
|
||||
baseline_memory = process.memory_info().rss / 1024 / 1024 # MB
|
||||
|
||||
memory_snapshots = [{'stage': 'baseline', 'memory_mb': baseline_memory}]
|
||||
|
||||
# Load data
|
||||
product_data = large_sales_dataset[large_sales_dataset['product'] == 'Pan Integral'].copy()
|
||||
current_memory = process.memory_info().rss / 1024 / 1024
|
||||
memory_snapshots.append({'stage': 'data_loaded', 'memory_mb': current_memory})
|
||||
|
||||
# Train model
|
||||
result = await trainer.train_single_product(
|
||||
tenant_id="memory_test_tenant",
|
||||
product_name="Pan Integral",
|
||||
sales_data=product_data,
|
||||
config={"include_weather": True, "include_traffic": True}
|
||||
)
|
||||
|
||||
current_memory = process.memory_info().rss / 1024 / 1024
|
||||
memory_snapshots.append({'stage': 'model_trained', 'memory_mb': current_memory})
|
||||
|
||||
# Cleanup
|
||||
del product_data
|
||||
del result
|
||||
gc.collect()
|
||||
|
||||
final_memory = process.memory_info().rss / 1024 / 1024
|
||||
memory_snapshots.append({'stage': 'cleanup', 'memory_mb': final_memory})
|
||||
|
||||
# Memory assertions
|
||||
peak_memory = max(snapshot['memory_mb'] for snapshot in memory_snapshots)
|
||||
memory_increase = peak_memory - baseline_memory
|
||||
memory_after_cleanup = final_memory - baseline_memory
|
||||
|
||||
assert memory_increase < 800, f"Peak memory increase too high: {memory_increase:.2f}MB"
|
||||
assert memory_after_cleanup < 100, f"Memory not properly cleaned up: {memory_after_cleanup:.2f}MB"
|
||||
|
||||
print("Memory Usage Analysis:")
|
||||
for snapshot in memory_snapshots:
|
||||
print(f" {snapshot['stage']}: {snapshot['memory_mb']:.2f}MB")
|
||||
print(f" Peak increase: {memory_increase:.2f}MB")
|
||||
print(f" After cleanup: {memory_after_cleanup:.2f}MB")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_training_service_throughput(self, large_sales_dataset):
|
||||
"""Test training service throughput with multiple requests"""
|
||||
|
||||
training_service = TrainingService()
|
||||
|
||||
# Simulate multiple training requests
|
||||
num_requests = 5
|
||||
products = ["Pan Integral", "Croissant", "Magdalenas", "Empanadas", "Tarta Chocolate"]
|
||||
|
||||
async def execute_training_request(request_id: int, product: str):
|
||||
"""Execute a single training request"""
|
||||
product_data = large_sales_dataset[large_sales_dataset['product'] == product].copy()
|
||||
|
||||
with patch.object(training_service, '_fetch_sales_data', return_value=product_data):
|
||||
start_time = time.time()
|
||||
|
||||
result = await training_service.execute_training_job(
|
||||
db=None, # Mock DB session
|
||||
tenant_id=f"throughput_test_tenant_{request_id}",
|
||||
job_id=f"job_{request_id}_{product.replace(' ', '_').lower()}",
|
||||
request={
|
||||
'products': [product],
|
||||
'include_weather': True,
|
||||
'include_traffic': False,
|
||||
'config': {'seasonality_mode': 'additive'}
|
||||
}
|
||||
)
|
||||
|
||||
duration = time.time() - start_time
|
||||
return {
|
||||
'request_id': request_id,
|
||||
'product': product,
|
||||
'duration': duration,
|
||||
'status': result.get('status', 'unknown'),
|
||||
'models_trained': len(result.get('models_trained', []))
|
||||
}
|
||||
|
||||
# Execute requests concurrently
|
||||
start_time = time.time()
|
||||
tasks = [
|
||||
execute_training_request(i, products[i % len(products)])
|
||||
for i in range(num_requests)
|
||||
]
|
||||
results = await asyncio.gather(*tasks)
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Calculate throughput metrics
|
||||
successful_requests = sum(1 for r in results if r['status'] == 'completed')
|
||||
throughput = successful_requests / total_time # requests per second
|
||||
|
||||
# Performance assertions
|
||||
assert successful_requests >= num_requests * 0.8, "Too many failed requests"
|
||||
assert throughput >= 0.1, f"Throughput too low: {throughput:.3f} req/s"
|
||||
assert total_time < 300, f"Total time too long: {total_time:.2f}s"
|
||||
|
||||
print(f"Training Service Throughput Results:")
|
||||
print(f" Total Requests: {num_requests}")
|
||||
print(f" Successful: {successful_requests}")
|
||||
print(f" Total Time: {total_time:.2f}s")
|
||||
print(f" Throughput: {throughput:.3f} req/s")
|
||||
print(f" Average Request Time: {total_time/num_requests:.2f}s")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_large_dataset_edge_cases(self, large_sales_dataset):
|
||||
"""Test handling of edge cases with large datasets"""
|
||||
|
||||
data_processor = BakeryDataProcessor()
|
||||
|
||||
# Test 1: Dataset with many missing values
|
||||
corrupted_data = large_sales_dataset.copy()
|
||||
# Introduce 30% missing values randomly
|
||||
mask = np.random.random(len(corrupted_data)) < 0.3
|
||||
corrupted_data.loc[mask, 'quantity'] = np.nan
|
||||
|
||||
start_time = time.time()
|
||||
result = await data_processor.validate_data_quality(corrupted_data)
|
||||
validation_time = time.time() - start_time
|
||||
|
||||
assert validation_time < 10, f"Validation too slow: {validation_time:.2f}s"
|
||||
assert result['is_valid'] is False
|
||||
assert 'high_missing_data' in result['issues']
|
||||
|
||||
# Test 2: Dataset with extreme outliers
|
||||
outlier_data = large_sales_dataset.copy()
|
||||
# Add extreme outliers (100x normal values)
|
||||
outlier_indices = np.random.choice(len(outlier_data), size=int(len(outlier_data) * 0.01), replace=False)
|
||||
outlier_data.loc[outlier_indices, 'quantity'] *= 100
|
||||
|
||||
start_time = time.time()
|
||||
cleaned_data = await data_processor.clean_outliers(outlier_data)
|
||||
cleaning_time = time.time() - start_time
|
||||
|
||||
assert cleaning_time < 15, f"Outlier cleaning too slow: {cleaning_time:.2f}s"
|
||||
assert len(cleaned_data) > len(outlier_data) * 0.95 # Should retain most data
|
||||
|
||||
# Test 3: Very sparse data (many products with few sales)
|
||||
sparse_data = large_sales_dataset.copy()
|
||||
# Keep only 10% of data for each product randomly
|
||||
sparse_data = sparse_data.groupby('product').apply(
|
||||
lambda x: x.sample(n=max(1, int(len(x) * 0.1)))
|
||||
).reset_index(drop=True)
|
||||
|
||||
start_time = time.time()
|
||||
validation_result = await data_processor.validate_data_quality(sparse_data)
|
||||
sparse_validation_time = time.time() - start_time
|
||||
|
||||
assert sparse_validation_time < 5, f"Sparse data validation too slow: {sparse_validation_time:.2f}s"
|
||||
|
||||
print("Edge Case Performance Results:")
|
||||
print(f" Corrupted data validation: {validation_time:.2f}s")
|
||||
print(f" Outlier cleaning: {cleaning_time:.2f}s")
|
||||
print(f" Sparse data validation: {sparse_validation_time:.2f}s")
|
||||
|
||||
|
||||
class TestTrainingServiceLoad:
|
||||
"""Load testing for training service under stress"""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_sustained_load_training(self, large_sales_dataset):
|
||||
"""Test training service under sustained load"""
|
||||
|
||||
trainer = BakeryMLTrainer()
|
||||
|
||||
# Define load test parameters
|
||||
duration_minutes = 2 # Run for 2 minutes
|
||||
requests_per_minute = 3
|
||||
|
||||
products = ["Pan Integral", "Croissant", "Magdalenas"]
|
||||
|
||||
async def sustained_training_worker(worker_id: int, duration: float):
|
||||
"""Worker that continuously submits training requests"""
|
||||
start_time = time.time()
|
||||
completed_requests = 0
|
||||
failed_requests = 0
|
||||
|
||||
while time.time() - start_time < duration:
|
||||
try:
|
||||
product = products[completed_requests % len(products)]
|
||||
product_data = large_sales_dataset[
|
||||
large_sales_dataset['product'] == product
|
||||
].copy()
|
||||
|
||||
result = await trainer.train_single_product(
|
||||
tenant_id=f"load_test_worker_{worker_id}",
|
||||
product_name=product,
|
||||
sales_data=product_data,
|
||||
config={"include_weather": False, "include_traffic": False} # Minimal config for speed
|
||||
)
|
||||
|
||||
if result['status'] == 'completed':
|
||||
completed_requests += 1
|
||||
else:
|
||||
failed_requests += 1
|
||||
|
||||
except Exception as e:
|
||||
failed_requests += 1
|
||||
logging.error(f"Training request failed: {e}")
|
||||
|
||||
# Wait before next request
|
||||
await asyncio.sleep(60 / requests_per_minute)
|
||||
|
||||
return {
|
||||
'worker_id': worker_id,
|
||||
'completed': completed_requests,
|
||||
'failed': failed_requests,
|
||||
'duration': time.time() - start_time
|
||||
}
|
||||
|
||||
# Start multiple workers
|
||||
num_workers = 2
|
||||
duration_seconds = duration_minutes * 60
|
||||
|
||||
start_time = time.time()
|
||||
tasks = [
|
||||
sustained_training_worker(i, duration_seconds)
|
||||
for i in range(num_workers)
|
||||
]
|
||||
results = await asyncio.gather(*tasks)
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Analyze results
|
||||
total_completed = sum(r['completed'] for r in results)
|
||||
total_failed = sum(r['failed'] for r in results)
|
||||
success_rate = total_completed / (total_completed + total_failed) if (total_completed + total_failed) > 0 else 0
|
||||
|
||||
# Performance assertions
|
||||
assert success_rate >= 0.8, f"Success rate too low: {success_rate:.2%}"
|
||||
assert total_completed >= duration_minutes * requests_per_minute * num_workers * 0.7, "Throughput too low"
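# With the defaults above (2 minutes, 3 requests/minute, 2 workers) the
# threshold is 2 * 3 * 2 * 0.7 = 8.4, i.e. at least 9 completed requests.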
|
||||
|
||||
print(f"Sustained Load Test Results:")
|
||||
print(f" Duration: {total_time:.2f}s")
|
||||
print(f" Workers: {num_workers}")
|
||||
print(f" Completed Requests: {total_completed}")
|
||||
print(f" Failed Requests: {total_failed}")
|
||||
print(f" Success Rate: {success_rate:.2%}")
|
||||
print(f" Average Throughput: {total_completed/total_time:.2f} req/s")
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_resource_exhaustion_recovery(self, large_sales_dataset):
|
||||
"""Test service recovery from resource exhaustion"""
|
||||
|
||||
trainer = BakeryMLTrainer()
|
||||
|
||||
# Simulate resource exhaustion by running many concurrent requests
|
||||
num_concurrent = 10 # High concurrency to stress the system
|
||||
|
||||
async def resource_intensive_task(task_id: int):
|
||||
"""Task designed to consume resources"""
|
||||
try:
|
||||
# Use all products to increase memory usage
|
||||
all_products_data = large_sales_dataset.copy()
|
||||
|
||||
result = await trainer.train_tenant_models(
|
||||
tenant_id=f"resource_test_{task_id}",
|
||||
sales_data=all_products_data,
|
||||
config={
|
||||
"train_all_products": True,
|
||||
"include_weather": True,
|
||||
"include_traffic": True
|
||||
}
|
||||
)
|
||||
|
||||
return {'task_id': task_id, 'status': 'completed', 'error': None}
|
||||
|
||||
except Exception as e:
|
||||
return {'task_id': task_id, 'status': 'failed', 'error': str(e)}
|
||||
|
||||
# Launch all tasks simultaneously
|
||||
start_time = time.time()
|
||||
tasks = [resource_intensive_task(i) for i in range(num_concurrent)]
|
||||
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||
duration = time.time() - start_time
|
||||
|
||||
# Analyze results
|
||||
completed = sum(1 for r in results if isinstance(r, dict) and r['status'] == 'completed')
|
||||
failed = sum(1 for r in results if isinstance(r, dict) and r['status'] == 'failed')
|
||||
exceptions = sum(1 for r in results if isinstance(r, Exception))
|
||||
|
||||
# The system should handle some failures gracefully
|
||||
# but should complete at least some requests
|
||||
total_processed = completed + failed + exceptions
|
||||
processing_rate = total_processed / num_concurrent
|
||||
|
||||
assert processing_rate >= 0.5, f"Too many requests not processed: {processing_rate:.2%}"
|
||||
assert duration < 600, f"Recovery took too long: {duration:.2f}s" # 10 minutes max
|
||||
|
||||
print(f"Resource Exhaustion Test Results:")
|
||||
print(f" Concurrent Requests: {num_concurrent}")
|
||||
print(f" Completed: {completed}")
|
||||
print(f" Failed: {failed}")
|
||||
print(f" Exceptions: {exceptions}")
|
||||
print(f" Duration: {duration:.2f}s")
|
||||
print(f" Processing Rate: {processing_rate:.2%}")
|
||||
|
||||
|
||||
# ================================================================
|
||||
# BENCHMARK UTILITIES
|
||||
# ================================================================
|
||||
|
||||
class PerformanceBenchmark:
|
||||
"""Utility class for performance benchmarking"""
|
||||
|
||||
@staticmethod
|
||||
def measure_execution_time(func):
|
||||
"""Decorator to measure execution time"""
|
||||
async def wrapper(*args, **kwargs):
|
||||
start_time = time.time()
|
||||
result = await func(*args, **kwargs)
|
||||
execution_time = time.time() - start_time
|
||||
|
||||
if hasattr(result, 'update') and isinstance(result, dict):
|
||||
result['execution_time'] = execution_time
|
||||
|
||||
return result
|
||||
return wrapper
|
||||
|
||||
@staticmethod
|
||||
def memory_profiler(func):
|
||||
"""Decorator to profile memory usage"""
|
||||
async def wrapper(*args, **kwargs):
|
||||
process = psutil.Process()
|
||||
|
||||
# Memory before
|
||||
gc.collect()
|
||||
memory_before = process.memory_info().rss / 1024 / 1024
|
||||
|
||||
result = await func(*args, **kwargs)
|
||||
|
||||
# Memory after
|
||||
memory_after = process.memory_info().rss / 1024 / 1024
|
||||
memory_used = memory_after - memory_before
|
||||
|
||||
if hasattr(result, 'update') and isinstance(result, dict):
|
||||
result['memory_used_mb'] = memory_used
|
||||
|
||||
return result
|
||||
return wrapper
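# Minimal usage sketch of the benchmark decorators above. The coroutine below
# is illustrative only (not part of the service); it simply shows that a dict
# returned through both decorators is annotated with 'execution_time' and
# 'memory_used_mb'.
@PerformanceBenchmark.measure_execution_time
@PerformanceBenchmark.memory_profiler
async def _example_benchmarked_step() -> Dict[str, Any]:
    """Toy async step used only to demonstrate the decorators."""
    await asyncio.sleep(0.01)
    return {"status": "completed"}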
|
||||
|
||||
|
||||
# ================================================================
|
||||
# STANDALONE EXECUTION
|
||||
# ================================================================
|
||||
|
||||
if __name__ == "__main__":
|
||||
"""
|
||||
Run performance tests as standalone script
|
||||
Usage: python test_performance.py
|
||||
"""
|
||||
import sys
|
||||
import os
|
||||
from unittest.mock import patch
|
||||
|
||||
# Add the training service root to Python path
|
||||
training_service_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
||||
sys.path.insert(0, training_service_root)
|
||||
|
||||
print("=" * 60)
|
||||
print("TRAINING SERVICE PERFORMANCE TEST SUITE")
|
||||
print("=" * 60)
|
||||
|
||||
# Setup logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
|
||||
# Run performance tests
|
||||
pytest.main([
|
||||
__file__,
|
||||
"-v",
|
||||
"--tb=short",
|
||||
"-s", # Don't capture output
|
||||
"--durations=10", # Show 10 slowest tests
|
||||
"-m", "not slow", # Skip slow tests unless specifically requested
|
||||
])
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
print("PERFORMANCE TESTING COMPLETE")
|
||||
print("=" * 60)
|
||||