Add migration services

Urtzi Alfaro
2025-09-30 08:12:45 +02:00
parent d1c83dce74
commit ec6bcb4c7d
139 changed files with 6363 additions and 163 deletions

View File

@@ -1,13 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIB9jCCAZ2gAwIBAgIRALeFt7uyrRUtqT8VC8AyOqAwCgYIKoZIzj0EAwIwWzEL
MAkGA1UEBhMCVVMxEjAQBgNVBAoTCUJha2VyeSBJQTEbMBkGA1UECxMSQmFrZXJ5
IElBIExvY2FsIENBMRswGQYDVQQDExJiYWtlcnktaWEtbG9jYWwtY2EwHhcNMjUw
OTI4MTYzMzAxWhcNMjYwOTI4MTYzMzAxWjBbMQswCQYDVQQGEwJVUzESMBAGA1UE
ChMJQmFrZXJ5IElBMRswGQYDVQQLExJCYWtlcnkgSUEgTG9jYWwgQ0ExGzAZBgNV
BAMTEmJha2VyeS1pYS1sb2NhbC1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IA
BMvQUfoPOJxF4JWwFX+YoolhrMKMBJ7pN5roI6/puxXa3UKRuQSF17lQGqdI9MFy
oYaQJlQ9PqI5RwqZn6uAIT6jQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8E
BTADAQH/MB0GA1UdDgQWBBS5waYyMCV5bG55I8YGZSIJCioRdjAKBggqhkjOPQQD
AgNHADBEAiAckCO8A4ZHLQg0wYi8q67lLB83OVXpyJ4Y3csjKI3WogIgNtuWgJ48
uOcW+pgMS55qTRkhZfAZXdAlhq/M2d/C6QA=
-----END CERTIFICATE-----

View File

@@ -0,0 +1,309 @@
# Database Initialization System
This document explains the automatic database initialization system for the Bakery-IA microservices architecture.
## Overview
The system handles two main scenarios:
1. **Production/First-time Deployment**: Automatically creates tables from SQLAlchemy models and sets up Alembic version tracking
2. **Development Workflow**: Provides easy reset capabilities to start with a clean slate
## Key Features
- **Automatic Table Creation**: Creates tables from SQLAlchemy models when the database is empty
- **Alembic Integration**: Properly manages migration versions and history
- **Development Reset**: Easy clean-slate restart for development
- **Production Ready**: Safe for production deployments
- **All 14 Services**: Works across all microservices
## How It Works
### 1. Automatic Detection
The system automatically detects the database state:
- **Empty Database**: Creates tables from models and initializes Alembic
- **Existing Database with Alembic**: Runs pending migrations
- **Existing Database without Alembic**: Initializes Alembic on existing schema
- **Force Recreate Mode**: Drops everything and recreates (development only)
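As a rough sketch, the detection step only needs to inspect the live schema; the following is illustrative (names such as `detect_database_state` are assumptions, not the actual API):
```python
from sqlalchemy import inspect

def detect_database_state(sync_conn) -> str:
    """Classify the schema so the right initialization path can run."""
    tables = inspect(sync_conn).get_table_names()
    if not tables:
        return "empty"        # create tables from models, then stamp Alembic
    if "alembic_version" in tables:
        return "managed"      # run pending migrations
    return "unmanaged"        # stamp Alembic on the existing schema
```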
### 2. Integration Points
#### Service Startup
```python
# In your service main.py
class AuthService(StandardFastAPIService):
    # Migration verification happens automatically during startup
    pass
```
#### Kubernetes Migration Jobs
```yaml
# Enhanced migration jobs handle automatic table creation
containers:
- name: migrate
  image: bakery/auth-service:${IMAGE_TAG}
  command: ["python", "/app/scripts/run_migrations.py", "auth"]
```
#### Environment Variables
```bash
# Control behavior via environment variables
DB_FORCE_RECREATE=true # Force recreate tables (development)
DEVELOPMENT_MODE=true # Enable development features
```
## Usage Scenarios
### 1. First-Time Production Deployment
**What happens:**
1. Migration job detects empty database
2. Creates all tables from SQLAlchemy models
3. Stamps Alembic with the latest migration version
4. Service starts and verifies migration state
**No manual intervention required!**
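Conceptually, the first-time path boils down to a `create_all` followed by an Alembic stamp. A minimal sketch, assuming a synchronous engine, your models' `metadata`, and the per-service ini path described below:
```python
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine

def first_time_deploy(database_url: str, metadata, alembic_ini: str) -> None:
    """Create all tables from the models, then mark Alembic as up to date."""
    engine = create_engine(database_url)
    metadata.create_all(engine)                  # tables from SQLAlchemy models
    command.stamp(Config(alembic_ini), "head")   # record the latest revision
```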
### 2. Development - Clean Slate Reset
**Option A: Using the Development Script**
```bash
# Reset specific service
./scripts/dev-reset-database.sh --service auth
# Reset all services
./scripts/dev-reset-database.sh --all
# Reset with auto-confirmation
./scripts/dev-reset-database.sh --service auth --yes
```
**Option B: Using the Workflow Script**
```bash
# Clean start with dev profile
./scripts/dev-workflow.sh clean --profile dev
# Reset specific service and restart
./scripts/dev-workflow.sh reset --service auth
```
**Option C: Manual Environment Variable**
```bash
# Set force recreate mode
kubectl patch configmap development-config -n bakery-ia \
--patch='{"data":{"DB_FORCE_RECREATE":"true"}}'
# Run migration job
kubectl apply -f infrastructure/kubernetes/base/migrations/auth-migration-job.yaml
```
### 3. Regular Development Workflow
```bash
# Start development environment
./scripts/dev-workflow.sh start --profile minimal
# Check status
./scripts/dev-workflow.sh status
# View logs for specific service
./scripts/dev-workflow.sh logs --service auth
# Run migrations only
./scripts/dev-workflow.sh migrate --service auth
```
## Configuration
### Skaffold Profiles
The system supports different deployment profiles:
```yaml
# skaffold.yaml profiles
profiles:
  - name: minimal # Only auth and inventory
  - name: full # All services + infrastructure
  - name: single # Template for single service
  - name: dev # Full development environment
```
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `DB_FORCE_RECREATE` | Force recreate tables | `false` |
| `DEVELOPMENT_MODE` | Enable development features | `false` |
| `DEBUG_LOGGING` | Enable debug logging | `false` |
| `SKIP_MIGRATION_VERSION_CHECK` | Skip version verification | `false` |
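All four flags arrive in the container as strings, so services parse them into booleans; a small illustrative helper (`env_flag` is not part of the codebase):
```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    """Interpret the string-valued ConfigMap flags above as booleans."""
    return os.getenv(name, default).strip().lower() in ("1", "true", "yes")

force_recreate = env_flag("DB_FORCE_RECREATE")
development_mode = env_flag("DEVELOPMENT_MODE")
```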
### Service Configuration
Each service automatically detects its configuration:
- **Models Module**: `services.{service}.app.models`
- **Alembic Config**: `services/{service}/alembic.ini`
- **Migration Scripts**: `services/{service}/migrations/versions/`
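Because everything follows this naming convention, the per-service locations can be derived from the service name alone; a hypothetical helper showing the idea:
```python
from importlib import import_module
from pathlib import Path

def service_paths(service: str) -> dict:
    """Resolve per-service locations from the naming convention above."""
    return {
        "models": import_module(f"services.{service}.app.models"),
        "alembic_ini": Path(f"services/{service}/alembic.ini"),
        "versions_dir": Path(f"services/{service}/migrations/versions"),
    }
```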
## Development Workflows
### Quick Start
```bash
# 1. Start minimal environment
./scripts/dev-workflow.sh start --profile minimal
# 2. Reset specific service when needed
./scripts/dev-workflow.sh reset --service auth
# 3. Clean restart when you want fresh start
./scripts/dev-workflow.sh clean --profile dev
```
### Database Reset Workflows
#### Scenario 1: "I want to reset auth service only"
```bash
./scripts/dev-reset-database.sh --service auth
```
#### Scenario 2: "I want to start completely fresh"
```bash
./scripts/dev-reset-database.sh --all
# or
./scripts/dev-workflow.sh clean --profile dev
```
#### Scenario 3: "I want to reset and restart in one command"
```bash
./scripts/dev-workflow.sh reset --service auth
```
## Technical Details
### Database Initialization Manager
The core logic is in [`shared/database/init_manager.py`](shared/database/init_manager.py):
```python
# Main initialization method
async def initialize_database(self) -> Dict[str, Any]:
    # Check current database state
    db_state = await self._check_database_state()

    # Handle different scenarios
    if self.force_recreate:
        result = await self._handle_force_recreate()
    elif db_state["is_empty"]:
        result = await self._handle_first_time_deployment()
    # ... etc
```
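The state check itself can be done with SQLAlchemy's inspector. A hedged sketch of what `_check_database_state` might look like, assuming the manager holds an async `engine`:
```python
from sqlalchemy import inspect

async def _check_database_state(self) -> dict:
    """Inspect the live schema to classify the database."""
    async with self.engine.connect() as conn:
        tables = await conn.run_sync(
            lambda sync_conn: inspect(sync_conn).get_table_names()
        )
    return {
        "is_empty": not tables,
        "has_alembic": "alembic_version" in tables,
        "tables": tables,
    }
```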
### Migration Job Enhancement
Migration jobs now use the enhanced runner:
```yaml
containers:
- name: migrate
  command: ["python", "/app/scripts/run_migrations.py", "auth"]
  env:
  - name: AUTH_DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: database-secrets
        key: AUTH_DATABASE_URL
  - name: DB_FORCE_RECREATE
    valueFrom:
      configMapKeyRef:
        name: development-config
        key: DB_FORCE_RECREATE
```
### Service Integration
Services automatically handle table initialization during startup:
```python
async def _handle_database_tables(self):
    # Check if we're in force recreate mode
    force_recreate = os.getenv("DB_FORCE_RECREATE", "false").lower() == "true"

    # Initialize database with automatic table creation
    result = await initialize_service_database(
        database_manager=self.database_manager,
        service_name=self.service_name,
        force_recreate=force_recreate
    )
```
## Troubleshooting
### Common Issues
#### 1. Migration Job Fails
```bash
# Check job logs
kubectl logs -l job-name=auth-migration -n bakery-ia
# Check database connectivity
kubectl exec auth-db-pod -n bakery-ia -- pg_isready
```
#### 2. Service Won't Start
```bash
# Check service logs
kubectl logs -l app.kubernetes.io/name=auth -n bakery-ia
# Check database state
./scripts/dev-workflow.sh status
```
#### 3. Tables Not Created
```bash
# Force recreate mode
./scripts/dev-reset-database.sh --service auth --yes
# Check migration job status
kubectl get jobs -n bakery-ia
```
### Debugging Commands
```bash
# Check all components
./scripts/dev-workflow.sh status
# View specific service logs
./scripts/dev-workflow.sh logs --service auth
# Check migration jobs
kubectl get jobs -l app.kubernetes.io/component=migration -n bakery-ia
# Check ConfigMaps
kubectl get configmaps -n bakery-ia
# View database pods
kubectl get pods -l app.kubernetes.io/component=database -n bakery-ia
```
## Benefits
1. **Zero Manual Setup**: Tables are created automatically on first deployment
2. **Development Friendly**: Easy reset capabilities for clean development
3. **Production Safe**: Handles existing databases gracefully
4. **Alembic Compatible**: Maintains proper migration history and versioning
5. **Service Agnostic**: Works identically across all 14 microservices
6. **Kubernetes Native**: Integrates seamlessly with Kubernetes workflows
## Migration from TODO State
If you have existing services with TODO migrations:
1. **Keep existing models**: Your SQLAlchemy models are the source of truth
2. **Deploy normally**: The system will create tables from models automatically
3. **Alembic versions**: Will be stamped with the latest migration version
4. **No data loss**: Existing data is preserved in production deployments
The system eliminates the need to manually fill in TODO migration files while maintaining proper Alembic version tracking.
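Should you ever need to stamp a database by hand, Alembic's Python API does the same thing the automatic path does; a minimal sketch, assuming the auth service's ini path:
```python
from alembic import command
from alembic.config import Config

# Mark an existing schema as current without running any migrations.
command.stamp(Config("services/auth/alembic.ini"), "head")
```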

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alert-processor-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/alert-processor-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: alert-processor-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: alert-processor-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/auth-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: auth-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: auth-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: external-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/external-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: external-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: external-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: forecasting-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/forecasting-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: forecasting-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: forecasting-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: inventory-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/inventory-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: inventory-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: inventory-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: notification-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/notification-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: notification-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: notification-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: orders-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/orders-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: orders-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: orders-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pos-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/pos-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: pos-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: pos-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: production-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/production-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: production-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: production-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recipes-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/recipes-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: recipes-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: recipes-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sales-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/sales-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: sales-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: sales-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: suppliers-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/suppliers-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: suppliers-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: suppliers-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tenant-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/tenant-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: tenant-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: tenant-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -1,3 +1,19 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: training-db-pv
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/bakery-data/training-db"
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -51,6 +67,8 @@ spec:
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            memory: "256Mi"
@@ -86,6 +104,9 @@ spec:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: training-db-pvc
      - name: init-scripts
        configMap:
          name: postgres-init-config
---
apiVersion: v1
@@ -117,8 +138,9 @@ metadata:
    app.kubernetes.io/name: training-db
    app.kubernetes.io/component: database
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # was 1Gi

View File

@@ -13,6 +13,9 @@ data:
ENVIRONMENT: "production" ENVIRONMENT: "production"
DEBUG: "false" DEBUG: "false"
LOG_LEVEL: "INFO" LOG_LEVEL: "INFO"
# Database initialization settings
DB_FORCE_RECREATE: "false"
BUILD_DATE: "2024-01-20T10:00:00Z" BUILD_DATE: "2024-01-20T10:00:00Z"
VCS_REF: "latest" VCS_REF: "latest"
IMAGE_TAG: "latest" IMAGE_TAG: "latest"

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: development-config
  namespace: bakery-ia
  labels:
    app.kubernetes.io/component: config
    app.kubernetes.io/part-of: bakery-ia
    environment: development
data:
  # Set to "true" to force recreate all tables from scratch (development mode)
  # This will drop all existing tables and recreate them from SQLAlchemy models
  DB_FORCE_RECREATE: "false"
  # Development mode flag
  DEVELOPMENT_MODE: "true"
  # Enable debug logging in development
  DEBUG_LOGGING: "true"
  # Skip migration version checking in development
  SKIP_MIGRATION_VERSION_CHECK: "false"

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-config
  namespace: bakery-ia
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: bakery-ia
data:
  init.sql: |
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
    CREATE EXTENSION IF NOT EXISTS "pg_stat_statements";

View File

@@ -11,6 +11,26 @@ resources:
- secrets.yaml
- ingress-https.yaml
# Additional configs
- configs/postgres-init-config.yaml
- configs/development-config.yaml
# Migration jobs
- migrations/auth-migration-job.yaml
- migrations/tenant-migration-job.yaml
- migrations/training-migration-job.yaml
- migrations/forecasting-migration-job.yaml
- migrations/sales-migration-job.yaml
- migrations/external-migration-job.yaml
- migrations/notification-migration-job.yaml
- migrations/inventory-migration-job.yaml
- migrations/recipes-migration-job.yaml
- migrations/suppliers-migration-job.yaml
- migrations/pos-migration-job.yaml
- migrations/orders-migration-job.yaml
- migrations/production-migration-job.yaml
- migrations/alert-processor-migration-job.yaml
# Infrastructure components
- components/databases/redis.yaml
- components/databases/rabbitmq.yaml

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for alert-processor service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: alert-processor-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: alert-processor-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alert-processor-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h alert-processor-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/alert-processor:dev
        command: ["python", "/app/scripts/run_migrations.py", "alert_processor"]
        env:
        - name: ALERT_PROCESSOR_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: ALERT_PROCESSOR_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for auth service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: auth-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: auth-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: auth-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h auth-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/auth-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "auth"]
        env:
        - name: AUTH_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: AUTH_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for external service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: external-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: external-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h external-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/external-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "external"]
        env:
        - name: EXTERNAL_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: EXTERNAL_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for forecasting service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: forecasting-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: forecasting-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: forecasting-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h forecasting-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/forecasting-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "forecasting"]
        env:
        - name: FORECASTING_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: FORECASTING_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for inventory service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: inventory-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: inventory-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inventory-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h inventory-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/inventory-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "inventory"]
        env:
        - name: INVENTORY_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: INVENTORY_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for notification service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: notification-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: notification-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: notification-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h notification-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/notification-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "notification"]
        env:
        - name: NOTIFICATION_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: NOTIFICATION_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for orders service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: orders-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: orders-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: orders-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h orders-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/orders-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "orders"]
        env:
        - name: ORDERS_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: ORDERS_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for pos service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: pos-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: pos-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pos-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h pos-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/pos-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "pos"]
        env:
        - name: POS_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: POS_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for production service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: production-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: production-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: production-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h production-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/production-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "production"]
        env:
        - name: PRODUCTION_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: PRODUCTION_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for recipes service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: recipes-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: recipes-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: recipes-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h recipes-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/recipes-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "recipes"]
        env:
        - name: RECIPES_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: RECIPES_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for sales service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: sales-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: sales-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sales-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h sales-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/sales-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "sales"]
        env:
        - name: SALES_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: SALES_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for suppliers service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: suppliers-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: suppliers-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: suppliers-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h suppliers-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/suppliers-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "suppliers"]
        env:
        - name: SUPPLIERS_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: SUPPLIERS_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for tenant service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: tenant-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: tenant-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tenant-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h tenant-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/tenant-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "tenant"]
        env:
        - name: TENANT_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: TENANT_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,55 @@
# Enhanced migration job for training service with automatic table creation
apiVersion: batch/v1
kind: Job
metadata:
  name: training-migration
  namespace: bakery-ia
  labels:
    app.kubernetes.io/name: training-migration
    app.kubernetes.io/component: migration
    app.kubernetes.io/part-of: bakery-ia
spec:
  backoffLimit: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/name: training-migration
        app.kubernetes.io/component: migration
    spec:
      initContainers:
      - name: wait-for-db
        image: postgres:15-alpine
        command: ["sh", "-c", "until pg_isready -h training-db-service -p 5432; do sleep 2; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      containers:
      - name: migrate
        image: bakery/training-service:dev
        command: ["python", "/app/scripts/run_migrations.py", "training"]
        env:
        - name: TRAINING_DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secrets
              key: TRAINING_DATABASE_URL
        - name: DB_FORCE_RECREATE
          valueFrom:
            configMapKeyRef:
              name: bakery-config
              key: DB_FORCE_RECREATE
              optional: true
        - name: LOG_LEVEL
          value: "INFO"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      restartPolicy: OnFailure

View File

@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for alert-processor database
set -e

SERVICE_NAME="alert-processor"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

echo "Starting backup for $SERVICE_NAME database..."

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=alert-processor-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find alert-processor database pod"
    exit 1
fi

echo "Backing up to: $BACKUP_FILE"
# Test the command directly so the failure branch is reachable under `set -e`
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
    echo "Backup completed successfully: $BACKUP_FILE"
    # Compress the backup
    gzip "$BACKUP_FILE"
    echo "Backup compressed: ${BACKUP_FILE}.gz"
else
    echo "Backup failed"
    exit 1
fi

View File

@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for alert-processor database
set -e

SERVICE_NAME="alert-processor"
BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
    echo "Usage: $0 <backup_file>"
    echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
    exit 1
fi

if [ ! -f "$BACKUP_FILE" ]; then
    echo "Error: Backup file not found: $BACKUP_FILE"
    exit 1
fi

echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=alert-processor-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find alert-processor database pod"
    exit 1
fi

# Check if file is compressed; guard each branch (a bare `$?` check never fires under `set -e`)
if [[ "$BACKUP_FILE" == *.gz ]]; then
    echo "Decompressing backup file..."
    zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || { echo "Restore failed"; exit 1; }
else
    kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || { echo "Restore failed"; exit 1; }
fi

echo "Restore completed successfully"

View File

@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for alert-processor database
set -e

SERVICE_NAME="alert-processor"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"

echo "Starting database seeding for $SERVICE_NAME..."

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ALERT_PROCESSOR_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=alert-processor-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find alert-processor database pod"
    exit 1
fi

# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
    echo "Warning: Seed file not found: $SEED_FILE"
    echo "Creating sample seed file..."
    mkdir -p "infrastructure/scripts/seeds"
    cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for alert-processor service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your alert-processor service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
    echo "Sample seed file created at: $SEED_FILE"
    echo "Please edit this file to add your actual seed data"
    exit 0
fi

echo "Applying seed data from: $SEED_FILE"
# Test the command directly so the failure branch is reachable under `set -e`
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
    echo "Seeding completed successfully"
else
    echo "Seeding failed"
    exit 1
fi

View File

@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for auth database
set -e

SERVICE_NAME="auth"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

echo "Starting backup for $SERVICE_NAME database..."

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.AUTH_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.AUTH_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=auth-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find auth database pod"
    exit 1
fi

echo "Backing up to: $BACKUP_FILE"
# Test the command directly so the failure branch is reachable under `set -e`
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
    echo "Backup completed successfully: $BACKUP_FILE"
    # Compress the backup
    gzip "$BACKUP_FILE"
    echo "Backup compressed: ${BACKUP_FILE}.gz"
else
    echo "Backup failed"
    exit 1
fi

View File

@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for auth database
set -e

SERVICE_NAME="auth"
BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
    echo "Usage: $0 <backup_file>"
    echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
    exit 1
fi

if [ ! -f "$BACKUP_FILE" ]; then
    echo "Error: Backup file not found: $BACKUP_FILE"
    exit 1
fi

echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.AUTH_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.AUTH_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=auth-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find auth database pod"
    exit 1
fi

# Check if file is compressed; guard each branch (a bare `$?` check never fires under `set -e`)
if [[ "$BACKUP_FILE" == *.gz ]]; then
    echo "Decompressing backup file..."
    zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || { echo "Restore failed"; exit 1; }
else
    kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || { echo "Restore failed"; exit 1; }
fi

echo "Restore completed successfully"

View File

@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for auth database
set -e

SERVICE_NAME="auth"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"

echo "Starting database seeding for $SERVICE_NAME..."

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.AUTH_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.AUTH_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=auth-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find auth database pod"
    exit 1
fi

# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
    echo "Warning: Seed file not found: $SEED_FILE"
    echo "Creating sample seed file..."
    mkdir -p "infrastructure/scripts/seeds"
    cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for auth service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your auth service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
    echo "Sample seed file created at: $SEED_FILE"
    echo "Please edit this file to add your actual seed data"
    exit 0
fi

echo "Applying seed data from: $SEED_FILE"
# Test the command directly so the failure branch is reachable under `set -e`
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
    echo "Seeding completed successfully"
else
    echo "Seeding failed"
    exit 1
fi

View File

@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for external database
set -e

SERVICE_NAME="external"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

echo "Starting backup for $SERVICE_NAME database..."

# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_NAME}')

# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=external-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
    echo "Error: Could not find external database pod"
    exit 1
fi

echo "Backing up to: $BACKUP_FILE"
# Test the command directly so the failure branch is reachable under `set -e`
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
    echo "Backup completed successfully: $BACKUP_FILE"
    # Compress the backup
    gzip "$BACKUP_FILE"
    echo "Backup compressed: ${BACKUP_FILE}.gz"
else
    echo "Backup failed"
    exit 1
fi

View File

@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for external database
set -e
SERVICE_NAME="external"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=external-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find external database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for external database
set -e
SERVICE_NAME="external"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.EXTERNAL_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=external-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find external database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for external service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your external service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for forecasting database
set -e
SERVICE_NAME="forecasting"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=forecasting-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find forecasting database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for forecasting database
set -e
SERVICE_NAME="forecasting"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=forecasting-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find forecasting database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for forecasting database
set -e
SERVICE_NAME="forecasting"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.FORECASTING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=forecasting-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find forecasting database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for forecasting service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your forecasting service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for inventory database
set -e
SERVICE_NAME="inventory"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=inventory-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find inventory database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for inventory database
set -e
SERVICE_NAME="inventory"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=inventory-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find inventory database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for inventory database
set -e
SERVICE_NAME="inventory"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.INVENTORY_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=inventory-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find inventory database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for inventory service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your inventory service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for notification database
set -e
SERVICE_NAME="notification"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=notification-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find notification database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for notification database
set -e
SERVICE_NAME="notification"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=notification-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find notification database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for notification database
set -e
SERVICE_NAME="notification"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.NOTIFICATION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=notification-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find notification database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for notification service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your notification service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for orders database
set -e
SERVICE_NAME="orders"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ORDERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ORDERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=orders-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find orders database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for orders database
set -e
SERVICE_NAME="orders"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ORDERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ORDERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=orders-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find orders database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for orders database
set -e
SERVICE_NAME="orders"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.ORDERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.ORDERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=orders-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find orders database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for orders service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your orders service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for pos database
set -e
SERVICE_NAME="pos"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.POS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.POS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=pos-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find pos database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for pos database
set -e
SERVICE_NAME="pos"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.POS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.POS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=pos-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find pos database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for pos database
set -e
SERVICE_NAME="pos"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.POS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.POS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=pos-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find pos database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for pos service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your pos service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for production database
set -e
SERVICE_NAME="production"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=production-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find production database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for production database
set -e
SERVICE_NAME="production"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=production-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find production database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for production database
set -e
SERVICE_NAME="production"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.PRODUCTION_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=production-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find production database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for production service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your production service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for recipes database
set -e
SERVICE_NAME="recipes"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.RECIPES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.RECIPES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=recipes-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find recipes database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for recipes database
set -e
SERVICE_NAME="recipes"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.RECIPES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.RECIPES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=recipes-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find recipes database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for recipes database
set -e
SERVICE_NAME="recipes"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.RECIPES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.RECIPES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=recipes-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find recipes database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for recipes service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your recipes service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for sales database
set -e
SERVICE_NAME="sales"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SALES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SALES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=sales-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find sales database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for sales database
set -e
SERVICE_NAME="sales"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SALES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SALES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=sales-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find sales database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for sales database
set -e
SERVICE_NAME="sales"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SALES_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SALES_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=sales-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find sales database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for sales service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your sales service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for suppliers database
set -e
SERVICE_NAME="suppliers"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=suppliers-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find suppliers database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for suppliers database
set -e
SERVICE_NAME="suppliers"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=suppliers-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find suppliers database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for suppliers database
set -e
SERVICE_NAME="suppliers"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.SUPPLIERS_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=suppliers-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find suppliers database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for suppliers service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your suppliers service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for tenant database
set -e
SERVICE_NAME="tenant"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TENANT_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TENANT_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=tenant-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find tenant database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for tenant database
set -e
SERVICE_NAME="tenant"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TENANT_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TENANT_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=tenant-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find tenant database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for tenant database
set -e
SERVICE_NAME="tenant"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TENANT_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TENANT_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=tenant-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find tenant database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for tenant service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your tenant service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi


@@ -0,0 +1,39 @@
#!/bin/bash
# Backup script for training database
set -e
SERVICE_NAME="training"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${SERVICE_NAME}_backup_${TIMESTAMP}.sql"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
echo "Starting backup for $SERVICE_NAME database..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TRAINING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TRAINING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=training-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find training database pod"
exit 1
fi
echo "Backing up to: $BACKUP_FILE"
# Run pg_dump inside the if condition so the failure branch stays reachable under set -e
if kubectl exec "$POD_NAME" -n bakery-ia -- pg_dump -U "$DB_USER" "$DB_NAME" > "$BACKUP_FILE"; then
echo "Backup completed successfully: $BACKUP_FILE"
# Compress the backup
gzip "$BACKUP_FILE"
echo "Backup compressed: ${BACKUP_FILE}.gz"
else
echo "Backup failed"
exit 1
fi


@@ -0,0 +1,47 @@
#!/bin/bash
# Restore script for training database
set -e
SERVICE_NAME="training"
BACKUP_FILE="$1"
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: $0 <backup_file>"
echo "Example: $0 ./backups/${SERVICE_NAME}_backup_20240101_120000.sql"
exit 1
fi
if [ ! -f "$BACKUP_FILE" ]; then
echo "Error: Backup file not found: $BACKUP_FILE"
exit 1
fi
echo "Starting restore for $SERVICE_NAME database from: $BACKUP_FILE"
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TRAINING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TRAINING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=training-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find training database pod"
exit 1
fi
# Check if file is compressed; capture the exit status explicitly so the
# failure branch below stays reachable under set -e
RESTORE_STATUS=0
if [[ "$BACKUP_FILE" == *.gz ]]; then
echo "Decompressing backup file..."
zcat "$BACKUP_FILE" | kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" || RESTORE_STATUS=$?
else
kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$BACKUP_FILE" || RESTORE_STATUS=$?
fi
if [ "$RESTORE_STATUS" -eq 0 ]; then
echo "Restore completed successfully"
else
echo "Restore failed"
exit 1
fi


@@ -0,0 +1,55 @@
#!/bin/bash
# Seeding script for training database
set -e
SERVICE_NAME="training"
SEED_FILE="${SEED_FILE:-infrastructure/scripts/seeds/${SERVICE_NAME}_seed.sql}"
echo "Starting database seeding for $SERVICE_NAME..."
# Get database credentials from Kubernetes secrets
DB_USER=$(kubectl get secret database-secrets -n bakery-ia -o jsonpath='{.data.TRAINING_DB_USER}' | base64 -d)
DB_NAME=$(kubectl get configmap bakery-config -n bakery-ia -o jsonpath='{.data.TRAINING_DB_NAME}')
# Get the pod name
POD_NAME=$(kubectl get pods -n bakery-ia -l app.kubernetes.io/name=training-db -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: Could not find training database pod"
exit 1
fi
# Check if seed file exists
if [ ! -f "$SEED_FILE" ]; then
echo "Warning: Seed file not found: $SEED_FILE"
echo "Creating sample seed file..."
mkdir -p "infrastructure/scripts/seeds"
cat > "$SEED_FILE" << 'SEED_EOF'
-- Sample seed data for training service
-- Add your seed data here
-- Example:
-- INSERT INTO sample_table (name, created_at) VALUES
-- ('Sample Data 1', NOW()),
-- ('Sample Data 2', NOW());
-- Note: Replace with actual seed data for your training service
SELECT 'Seed file created. Please add your seed data.' as message;
SEED_EOF
echo "Sample seed file created at: $SEED_FILE"
echo "Please edit this file to add your actual seed data"
exit 0
fi
echo "Applying seed data from: $SEED_FILE"
# Run psql inside the if condition so the failure branch stays reachable under set -e
if kubectl exec -i "$POD_NAME" -n bakery-ia -- psql -U "$DB_USER" "$DB_NAME" < "$SEED_FILE"; then
echo "Seeding completed successfully"
else
echo "Seeding failed"
exit 1
fi

scripts/dev-reset-database.sh (new executable file, 231 lines)

@@ -0,0 +1,231 @@
#!/bin/bash
# Development Database Reset Script
#
# This script helps developers reset their databases to a clean slate.
# It can reset individual services or all services at once.
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
NAMESPACE="bakery-ia"
SERVICES=("alert-processor" "auth" "external" "forecasting" "inventory" "notification" "orders" "pos" "production" "recipes" "sales" "suppliers" "tenant" "training")
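# All 14 service databases; reset_service expects a matching manifest at
# infrastructure/kubernetes/base/migrations/<service>-migration-job.yaml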
print_banner() {
echo -e "${BLUE}"
echo "╔═══════════════════════════════════════════════════════════════╗"
echo "║ Bakery-IA Development Database Reset ║"
echo "║ ║"
echo "║ This script will reset database(s) to a clean slate ║"
echo "║ WARNING: This will delete all existing data! ║"
echo "╚═══════════════════════════════════════════════════════════════╝"
echo -e "${NC}"
}
show_usage() {
echo "Usage: $0 [OPTIONS] [SERVICE]"
echo ""
echo "Options:"
echo " -a, --all Reset all services"
echo " -s, --service NAME Reset specific service"
echo " -l, --list List available services"
echo " -y, --yes Skip confirmation prompts"
echo " -h, --help Show this help"
echo ""
echo "Examples:"
echo " $0 --service auth # Reset only auth service"
echo " $0 --all # Reset all services"
echo " $0 auth # Reset auth service (short form)"
}
list_services() {
echo -e "${YELLOW}Available services:${NC}"
for service in "${SERVICES[@]}"; do
echo " - $service"
done
}
confirm_action() {
local service="$1"
local message="${2:-Are you sure you want to reset}"
if [[ "$SKIP_CONFIRM" == "true" ]]; then
return 0
fi
echo -e "${YELLOW}$message the database for service: ${RED}$service${YELLOW}?${NC}"
echo -e "${RED}This will delete ALL existing data!${NC}"
read -p "Type 'yes' to continue: " confirmation
if [[ "$confirmation" != "yes" ]]; then
echo -e "${YELLOW}Operation cancelled.${NC}"
return 1
fi
return 0
}
enable_force_recreate() {
echo -e "${BLUE}Enabling force recreate mode...${NC}"
# Update the development config
kubectl patch configmap development-config -n "$NAMESPACE" \
--patch='{"data":{"DB_FORCE_RECREATE":"true"}}' 2>/dev/null || \
kubectl create configmap development-config -n "$NAMESPACE" \
--from-literal=DB_FORCE_RECREATE=true \
--from-literal=DEVELOPMENT_MODE=true \
--from-literal=DEBUG_LOGGING=true || true
}
disable_force_recreate() {
echo -e "${BLUE}Disabling force recreate mode...${NC}"
kubectl patch configmap development-config -n "$NAMESPACE" \
--patch='{"data":{"DB_FORCE_RECREATE":"false"}}' 2>/dev/null || true
}
reset_service() {
local service="$1"
echo -e "${BLUE}Resetting database for service: $service${NC}"
# Delete existing migration job if it exists
kubectl delete job "${service}-migration" -n "$NAMESPACE" 2>/dev/null || true
# Wait a moment for cleanup
sleep 2
# Create new migration job
echo -e "${YELLOW}Creating migration job for $service...${NC}"
kubectl apply -f "infrastructure/kubernetes/base/migrations/${service}-migration-job.yaml"
# Wait for job to complete
echo -e "${YELLOW}Waiting for migration to complete...${NC}"
kubectl wait --for=condition=complete job/"${service}-migration" -n "$NAMESPACE" --timeout=300s
# Check job status
if kubectl get job "${service}-migration" -n "$NAMESPACE" -o jsonpath='{.status.succeeded}' | grep -q "1"; then
echo -e "${GREEN}✓ Database reset completed successfully for $service${NC}"
else
echo -e "${RED}✗ Database reset failed for $service${NC}"
echo "Check logs with: kubectl logs -l job-name=${service}-migration -n $NAMESPACE"
return 1
fi
}
reset_all_services() {
echo -e "${BLUE}Resetting databases for all services...${NC}"
local failed_services=()
for service in "${SERVICES[@]}"; do
echo -e "\n${BLUE}Processing $service...${NC}"
if ! reset_service "$service"; then
failed_services+=("$service")
fi
done
if [[ ${#failed_services[@]} -eq 0 ]]; then
echo -e "\n${GREEN}✓ All services reset successfully!${NC}"
else
echo -e "\n${RED}✗ Some services failed to reset:${NC}"
for service in "${failed_services[@]}"; do
echo -e " ${RED}- $service${NC}"
done
return 1
fi
}
cleanup_migration_jobs() {
echo -e "${BLUE}Cleaning up migration jobs...${NC}"
kubectl delete jobs -l app.kubernetes.io/component=migration -n "$NAMESPACE" 2>/dev/null || true
}
main() {
local action=""
local target_service=""
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
-a|--all)
action="all"
shift
;;
-s|--service)
action="service"
target_service="$2"
shift 2
;;
-l|--list)
list_services
exit 0
;;
-y|--yes)
SKIP_CONFIRM="true"
shift
;;
-h|--help)
show_usage
exit 0
;;
*)
if [[ -z "$action" && -z "$target_service" ]]; then
action="service"
target_service="$1"
fi
shift
;;
esac
done
print_banner
# Validate arguments
if [[ -z "$action" ]]; then
echo -e "${RED}Error: No action specified${NC}"
show_usage
exit 1
fi
if [[ "$action" == "service" && -z "$target_service" ]]; then
echo -e "${RED}Error: Service name required${NC}"
show_usage
exit 1
fi
if [[ "$action" == "service" ]]; then
# Validate service name
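# (space-padding both sides of the regex makes this an exact whole-name match)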
if [[ ! " ${SERVICES[*]} " =~ " ${target_service} " ]]; then
echo -e "${RED}Error: Invalid service name: $target_service${NC}"
list_services
exit 1
fi
fi
# Execute action
case "$action" in
"all")
if confirm_action "ALL SERVICES" "Are you sure you want to reset ALL databases? This will affect"; then
enable_force_recreate
trap disable_force_recreate EXIT
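# The EXIT trap switches force-recreate back off even if the reset aborts early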
reset_all_services
fi
;;
"service")
if confirm_action "$target_service"; then
enable_force_recreate
trap disable_force_recreate EXIT
reset_service "$target_service"
fi
;;
esac
}
# Run main function
main "$@"

scripts/dev-workflow.sh (new executable file, 235 lines)

@@ -0,0 +1,235 @@
#!/bin/bash
# Development Workflow Script for Bakery-IA
#
# This script provides common development workflows with database management
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
show_usage() {
echo "Development Workflow Script for Bakery-IA"
echo ""
echo "Usage: $0 [COMMAND] [OPTIONS]"
echo ""
echo "Commands:"
echo " start Start development environment"
echo " reset Reset database(s) and restart"
echo " clean Clean start (drop all data)"
echo " migrate Run migrations only"
echo " logs Show service logs"
echo " status Show deployment status"
echo ""
echo "Options:"
echo " --service NAME Target specific service (default: all)"
echo " --profile NAME Use specific Skaffold profile (minimal, full, dev)"
echo " --clean-slate Force recreate all tables"
echo " --help Show this help"
echo ""
echo "Examples:"
echo " $0 start --profile minimal # Start with minimal services"
echo " $0 reset --service auth # Reset auth service only"
echo " $0 clean --profile dev # Clean start with dev profile"
}
start_development() {
local profile="${1:-dev}"
local clean_slate="${2:-false}"
echo -e "${BLUE}Starting development environment with profile: $profile${NC}"
if [[ "$clean_slate" == "true" ]]; then
echo -e "${YELLOW}Enabling clean slate mode...${NC}"
kubectl create configmap development-config --dry-run=client -o yaml \
--from-literal=DB_FORCE_RECREATE=true \
--from-literal=DEVELOPMENT_MODE=true \
--from-literal=DEBUG_LOGGING=true | \
kubectl apply -f -
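# create --dry-run piped into apply makes the ConfigMap creation idempotent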
fi
# Start with Skaffold
echo -e "${BLUE}Starting Skaffold with profile: $profile${NC}"
skaffold dev --profile="$profile" --port-forward
}
reset_service_and_restart() {
local service="$1"
local profile="${2:-dev}"
echo -e "${BLUE}Resetting service: $service${NC}"
# Reset the database
./scripts/dev-reset-database.sh --service "$service" --yes
# Restart the deployment
kubectl rollout restart deployment "${service}-service" -n bakery-ia 2>/dev/null || \
kubectl rollout restart deployment "$service" -n bakery-ia 2>/dev/null || true
echo -e "${GREEN}Service $service reset and restarted${NC}"
}
clean_start() {
local profile="${1:-dev}"
echo -e "${YELLOW}Performing clean start...${NC}"
# Stop existing Skaffold process
pkill -f "skaffold" || true
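# || true keeps set -e from aborting when no skaffold process is running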
# Clean up all deployments
kubectl delete jobs -l app.kubernetes.io/component=migration -n bakery-ia 2>/dev/null || true
# Wait a moment
sleep 2
# Start with clean slate
start_development "$profile" "true"
}
run_migrations() {
local service="$1"
if [[ -n "$service" ]]; then
echo -e "${BLUE}Running migration for service: $service${NC}"
kubectl delete job "${service}-migration" -n bakery-ia 2>/dev/null || true
kubectl apply -f "infrastructure/kubernetes/base/migrations/${service}-migration-job.yaml"
kubectl wait --for=condition=complete job/"${service}-migration" -n bakery-ia --timeout=300s
else
echo -e "${BLUE}Running migrations for all services${NC}"
kubectl delete jobs -l app.kubernetes.io/component=migration -n bakery-ia 2>/dev/null || true
kubectl apply -f infrastructure/kubernetes/base/migrations/
# Wait for all migration jobs
for job in $(kubectl get jobs -l app.kubernetes.io/component=migration -n bakery-ia -o name); do
kubectl wait --for=condition=complete "$job" -n bakery-ia --timeout=300s
done
fi
echo -e "${GREEN}Migrations completed${NC}"
}
show_logs() {
local service="$1"
if [[ -n "$service" ]]; then
echo -e "${BLUE}Showing logs for service: $service${NC}"
kubectl logs -l app.kubernetes.io/name="${service}" -n bakery-ia --tail=100 -f
else
echo -e "${BLUE}Available services for logs:${NC}"
kubectl get deployments -n bakery-ia -o custom-columns="NAME:.metadata.name"
fi
}
show_status() {
echo -e "${BLUE}Deployment Status:${NC}"
echo ""
echo -e "${YELLOW}Pods:${NC}"
kubectl get pods -n bakery-ia
echo ""
echo -e "${YELLOW}Services:${NC}"
kubectl get services -n bakery-ia
echo ""
echo -e "${YELLOW}Jobs:${NC}"
kubectl get jobs -n bakery-ia
echo ""
echo -e "${YELLOW}ConfigMaps:${NC}"
kubectl get configmaps -n bakery-ia
}
main() {
local command=""
local service=""
local profile="dev"
local clean_slate="false"
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
start|reset|clean|migrate|logs|status)
command="$1"
shift
;;
--service)
service="$2"
shift 2
;;
--profile)
profile="$2"
shift 2
;;
--clean-slate)
clean_slate="true"
shift
;;
--help)
show_usage
exit 0
;;
*)
if [[ -z "$command" ]]; then
command="$1"
fi
shift
;;
esac
done
if [[ -z "$command" ]]; then
show_usage
exit 1
fi
case "$command" in
"start")
start_development "$profile" "$clean_slate"
;;
"reset")
if [[ -n "$service" ]]; then
reset_service_and_restart "$service" "$profile"
else
echo -e "${RED}Error: --service required for reset command${NC}"
exit 1
fi
;;
"clean")
clean_start "$profile"
;;
"migrate")
run_migrations "$service"
;;
"logs")
show_logs "$service"
;;
"status")
show_status
;;
*)
echo -e "${RED}Error: Unknown command: $command${NC}"
show_usage
exit 1
;;
esac
}
# Check if kubectl and skaffold are available
if ! command -v kubectl &> /dev/null; then
echo -e "${RED}Error: kubectl is not installed or not in PATH${NC}"
exit 1
fi
if ! command -v skaffold &> /dev/null; then
echo -e "${RED}Error: skaffold is not installed or not in PATH${NC}"
exit 1
fi
# Run main function
main "$@"
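For quick reference, a typical development loop with this script might look like the following (a sketch; the service and profile names are examples taken from `show_usage`, not required values):

```bash
# Start the stack with the minimal profile and let Skaffold watch for changes
./scripts/dev-workflow.sh start --profile minimal

# After changing auth models, wipe and rebuild that service's schema
./scripts/dev-workflow.sh reset --service auth

# Re-run migration jobs for every service, then inspect the result
./scripts/dev-workflow.sh migrate
./scripts/dev-workflow.sh status
```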

scripts/run_migrations.py Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Enhanced Migration Runner
Handles automatic table creation and Alembic migrations for Kubernetes deployments.
Supports both first-time deployments and incremental migrations.
"""
import os
import sys
import asyncio
import argparse
import structlog
from pathlib import Path
# Add the project root to the Python path
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))
from shared.database.base import DatabaseManager
from shared.database.init_manager import initialize_service_database
# Configure logging
structlog.configure(
processors=[
structlog.stdlib.filter_by_level,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.processors.JSONRenderer()
],
context_class=dict,
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
cache_logger_on_first_use=True,
)
logger = structlog.get_logger()
async def run_service_migration(service_name: str, force_recreate: bool = False) -> bool:
"""
Run migration for a specific service
Args:
service_name: Name of the service (e.g., 'auth', 'inventory')
force_recreate: Whether to force recreate tables (development mode)
Returns:
True if successful, False otherwise
"""
logger.info("Starting migration for service", service=service_name, force_recreate=force_recreate)
try:
# Get database URL from environment (try both constructed and direct approaches)
db_url_key = f"{service_name.upper().replace('-', '_')}_DATABASE_URL"
database_url = os.getenv(db_url_key) or os.getenv("DATABASE_URL")
# If no direct URL, construct from components
if not database_url:
host = os.getenv("POSTGRES_HOST")
port = os.getenv("POSTGRES_PORT")
db_name = os.getenv("POSTGRES_DB")
user = os.getenv("POSTGRES_USER")
password = os.getenv("POSTGRES_PASSWORD")
if all([host, port, db_name, user, password]):
database_url = f"postgresql+asyncpg://{user}:{password}@{host}:{port}/{db_name}"
logger.info("Constructed database URL from components", host=host, port=port, db=db_name)
else:
logger.error("Database connection details not found",
db_url_key=db_url_key,
host=bool(host),
port=bool(port),
db=bool(db_name),
user=bool(user),
password=bool(password))
return False
# Create database manager
db_manager = DatabaseManager(database_url=database_url)
# Initialize the database
result = await initialize_service_database(
database_manager=db_manager,
service_name=service_name,
force_recreate=force_recreate
)
logger.info("Migration completed successfully", service=service_name, result=result)
return True
except Exception as e:
logger.error("Migration failed", service=service_name, error=str(e))
return False
    finally:
        # Clean up database connections if a manager was created
        if db_manager is not None:
            try:
                await db_manager.close_connections()
            except Exception:
                pass
async def main():
"""Main migration runner"""
parser = argparse.ArgumentParser(description="Enhanced Migration Runner")
parser.add_argument("service", help="Service name (e.g., auth, inventory)")
parser.add_argument("--force-recreate", action="store_true",
help="Force recreate tables (development mode)")
parser.add_argument("--verbose", "-v", action="store_true", help="Verbose logging")
args = parser.parse_args()
if args.verbose:
logger.info("Starting migration runner", service=args.service,
force_recreate=args.force_recreate)
# Run the migration
success = await run_service_migration(args.service, args.force_recreate)
if success:
logger.info("Migration runner completed successfully")
sys.exit(0)
else:
logger.error("Migration runner failed")
sys.exit(1)
if __name__ == "__main__":
asyncio.run(main())
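As a usage sketch, the runner can be exercised by hand inside a service container before wiring it into a migration Job; the connection values below are placeholders and would normally be injected by the Job's pod spec:

```bash
# Placeholder connection details; real values come from Secrets/ConfigMaps
export POSTGRES_HOST=auth-db POSTGRES_PORT=5432 POSTGRES_DB=auth_db \
       POSTGRES_USER=auth_user POSTGRES_PASSWORD=change-me

# First-time or incremental migration
python /app/scripts/run_migrations.py auth --verbose

# Development only: drop and recreate all tables
python /app/scripts/run_migrations.py auth --force-recreate
```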


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os


@@ -35,8 +35,23 @@ except ImportError as e:
 # this is the Alembic Config object
 config = context.config
 
-# Set database URL from settings if not already set
-database_url = os.getenv('DATABASE_URL') or getattr(settings, 'DATABASE_URL', None)
+# Set database URL from environment variables or settings
+database_url = os.getenv('DATABASE_URL')
+
+# If DATABASE_URL is not set, construct from individual components
+if not database_url:
+    postgres_host = os.getenv('POSTGRES_HOST')
+    postgres_port = os.getenv('POSTGRES_PORT', '5432')
+    postgres_db = os.getenv('POSTGRES_DB')
+    postgres_user = os.getenv('POSTGRES_USER')
+    postgres_password = os.getenv('POSTGRES_PASSWORD')
+
+    if all([postgres_host, postgres_db, postgres_user, postgres_password]):
+        database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
+    else:
+        # Fallback to settings
+        database_url = getattr(settings, 'DATABASE_URL', None)
 
 if database_url:
     config.set_main_option("sqlalchemy.url", database_url)


@@ -26,6 +26,9 @@ COPY --from=shared /shared /app/shared
 # Copy application code
 COPY services/auth/ .
 
+# Copy scripts directory
+COPY scripts/ /app/scripts/
+
 # Add shared libraries to Python path
 ENV PYTHONPATH="/app:/app/shared:${PYTHONPATH:-}"


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os


@@ -3,6 +3,7 @@ Authentication Service Main Application
 """
 from fastapi import FastAPI
+from sqlalchemy import text
 
 from app.core.config import settings
 from app.core.database import database_manager
 from app.api import auth, users, onboarding
@@ -13,6 +14,27 @@ from shared.service_base import StandardFastAPIService
 class AuthService(StandardFastAPIService):
     """Authentication Service with standardized setup"""
 
+    expected_migration_version = "001_initial_auth"
+
+    async def on_startup(self, app):
+        """Custom startup logic including migration verification"""
+        await self.verify_migrations()
+        await super().on_startup(app)
+
+    async def verify_migrations(self):
+        """Verify database schema matches the latest migrations."""
+        try:
+            async with self.database_manager.get_session() as session:
+                result = await session.execute(text("SELECT version_num FROM alembic_version"))
+                version = result.scalar()
+                if version != self.expected_migration_version:
+                    self.logger.error(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                    raise RuntimeError(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                self.logger.info(f"Migration verification successful: {version}")
+        except Exception as e:
+            self.logger.error(f"Migration verification failed: {e}")
+            raise
+
     def __init__(self):
         # Define expected database tables for health checks
         auth_expected_tables = [
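The same `expected_migration_version` / `verify_migrations()` pattern recurs in each service below; only the class name and version string change. To see the value the startup check compares against, the stamped revision can be read straight from the service database. A sketch, assuming a per-service database deployment and credentials (all names here are placeholders):

```bash
# Read the stamped Alembic revision (deployment, user, and db names are placeholders)
kubectl exec -n bakery-ia deploy/auth-db -- \
  psql -U auth_user -d auth_db -c "SELECT version_num FROM alembic_version;"
```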


@@ -35,8 +35,24 @@ except ImportError as e:
 # this is the Alembic Config object
 config = context.config
 
-# Set database URL from settings if not already set
-database_url = os.getenv('DATABASE_URL') or getattr(settings, 'DATABASE_URL', None)
+# Set database URL from environment variables or settings
+# Try service-specific DATABASE_URL first, then fall back to generic
+database_url = os.getenv('AUTH_DATABASE_URL') or os.getenv('DATABASE_URL')
+
+# If DATABASE_URL is not set, construct from individual components
+if not database_url:
+    postgres_host = os.getenv('POSTGRES_HOST')
+    postgres_port = os.getenv('POSTGRES_PORT', '5432')
+    postgres_db = os.getenv('POSTGRES_DB')
+    postgres_user = os.getenv('POSTGRES_USER')
+    postgres_password = os.getenv('POSTGRES_PASSWORD')
+
+    if all([postgres_host, postgres_db, postgres_user, postgres_password]):
+        database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
+    else:
+        # Fallback to settings
+        database_url = getattr(settings, 'DATABASE_URL', None)
 
 if database_url:
     config.set_main_option("sqlalchemy.url", database_url)


@@ -20,6 +20,9 @@ COPY shared/ /app/shared/
 # Copy application code
 COPY services/external/app/ /app/app/
 
+# Copy scripts directory
+COPY scripts/ /app/scripts/
+
 # Set Python path to include shared modules
 ENV PYTHONPATH=/app


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os


@@ -4,6 +4,7 @@ External Service Main Application
 """
 from fastapi import FastAPI
+from sqlalchemy import text
 
 from app.core.config import settings
 from app.core.database import database_manager
 from app.services.messaging import setup_messaging, cleanup_messaging
@@ -16,6 +17,27 @@ from app.api.traffic import router as traffic_router
 class ExternalService(StandardFastAPIService):
     """External Data Service with standardized setup"""
 
+    expected_migration_version = "001_initial_external"
+
+    async def on_startup(self, app):
+        """Custom startup logic including migration verification"""
+        await self.verify_migrations()
+        await super().on_startup(app)
+
+    async def verify_migrations(self):
+        """Verify database schema matches the latest migrations."""
+        try:
+            async with self.database_manager.get_session() as session:
+                result = await session.execute(text("SELECT version_num FROM alembic_version"))
+                version = result.scalar()
+                if version != self.expected_migration_version:
+                    self.logger.error(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                    raise RuntimeError(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                self.logger.info(f"Migration verification successful: {version}")
+        except Exception as e:
+            self.logger.error(f"Migration verification failed: {e}")
+            raise
+
     def __init__(self):
         # Define expected database tables for health checks
         external_expected_tables = [


@@ -35,8 +35,24 @@ except ImportError as e:
 # this is the Alembic Config object
 config = context.config
 
-# Set database URL from settings if not already set
-database_url = os.getenv('DATABASE_URL') or getattr(settings, 'DATABASE_URL', None)
+# Set database URL from environment variables or settings
+# Try service-specific DATABASE_URL first, then fall back to generic
+database_url = os.getenv('EXTERNAL_DATABASE_URL') or os.getenv('DATABASE_URL')
+
+# If DATABASE_URL is not set, construct from individual components
+if not database_url:
+    postgres_host = os.getenv('POSTGRES_HOST')
+    postgres_port = os.getenv('POSTGRES_PORT', '5432')
+    postgres_db = os.getenv('POSTGRES_DB')
+    postgres_user = os.getenv('POSTGRES_USER')
+    postgres_password = os.getenv('POSTGRES_PASSWORD')
+
+    if all([postgres_host, postgres_db, postgres_user, postgres_password]):
+        database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
+    else:
+        # Fallback to settings
+        database_url = getattr(settings, 'DATABASE_URL', None)
 
 if database_url:
     config.set_main_option("sqlalchemy.url", database_url)


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os


@@ -7,6 +7,7 @@ Demand prediction and forecasting service for bakery operations
 """
 from fastapi import FastAPI
+from sqlalchemy import text
 
 from app.core.config import settings
 from app.core.database import database_manager
 from app.api import forecasts, predictions
@@ -18,6 +19,27 @@ from shared.service_base import StandardFastAPIService
 class ForecastingService(StandardFastAPIService):
     """Forecasting Service with standardized setup"""
 
+    expected_migration_version = "001_initial_forecasting"
+
+    async def on_startup(self, app):
+        """Custom startup logic including migration verification"""
+        await self.verify_migrations()
+        await super().on_startup(app)
+
+    async def verify_migrations(self):
+        """Verify database schema matches the latest migrations."""
+        try:
+            async with self.database_manager.get_session() as session:
+                result = await session.execute(text("SELECT version_num FROM alembic_version"))
+                version = result.scalar()
+                if version != self.expected_migration_version:
+                    self.logger.error(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                    raise RuntimeError(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                self.logger.info(f"Migration verification successful: {version}")
+        except Exception as e:
+            self.logger.error(f"Migration verification failed: {e}")
+            raise
+
     def __init__(self):
         # Define expected database tables for health checks
         forecasting_expected_tables = [


@@ -35,8 +35,24 @@ except ImportError as e:
 # this is the Alembic Config object
 config = context.config
 
-# Set database URL from settings if not already set
-database_url = os.getenv('DATABASE_URL') or getattr(settings, 'DATABASE_URL', None)
+# Set database URL from environment variables or settings
+# Try service-specific DATABASE_URL first, then fall back to generic
+database_url = os.getenv('FORECASTING_DATABASE_URL') or os.getenv('DATABASE_URL')
+
+# If DATABASE_URL is not set, construct from individual components
+if not database_url:
+    postgres_host = os.getenv('POSTGRES_HOST')
+    postgres_port = os.getenv('POSTGRES_PORT', '5432')
+    postgres_db = os.getenv('POSTGRES_DB')
+    postgres_user = os.getenv('POSTGRES_USER')
+    postgres_password = os.getenv('POSTGRES_PASSWORD')
+
+    if all([postgres_host, postgres_db, postgres_user, postgres_password]):
+        database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
+    else:
+        # Fallback to settings
+        database_url = getattr(settings, 'DATABASE_URL', None)
 
 if database_url:
     config.set_main_option("sqlalchemy.url", database_url)


@@ -0,0 +1,111 @@
"""Alembic environment configuration for forecasting service"""
import asyncio
import logging
import os
import sys
from logging.config import fileConfig
from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config
from alembic import context
# Add the service directory to the Python path
service_path = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if service_path not in sys.path:
sys.path.insert(0, service_path)
# Add shared modules to path
shared_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "shared"))
if shared_path not in sys.path:
sys.path.insert(0, shared_path)
try:
from app.core.config import settings
from shared.database.base import Base
# Import all models to ensure they are registered with Base.metadata
from app.models import * # Import all models
except ImportError as e:
print(f"Import error in migrations env.py: {e}")
print(f"Current Python path: {sys.path}")
raise
# this is the Alembic Config object
config = context.config
# Set database URL from environment variables or settings
database_url = os.getenv('FORECASTING_DATABASE_URL') or os.getenv('DATABASE_URL')
# If DATABASE_URL is not set, construct from individual components
if not database_url:
postgres_host = os.getenv('POSTGRES_HOST')
postgres_port = os.getenv('POSTGRES_PORT', '5432')
postgres_db = os.getenv('POSTGRES_DB')
postgres_user = os.getenv('POSTGRES_USER')
postgres_password = os.getenv('POSTGRES_PASSWORD')
if all([postgres_host, postgres_db, postgres_user, postgres_password]):
database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
else:
# Fallback to settings
database_url = getattr(settings, 'DATABASE_URL', None)
if database_url:
config.set_main_option("sqlalchemy.url", database_url)
# Interpret the config file for Python logging
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# Set target metadata
target_metadata = Base.metadata
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode."""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
compare_type=True,
compare_server_default=True,
)
with context.begin_transaction():
context.run_migrations()
def do_run_migrations(connection: Connection) -> None:
context.configure(
connection=connection,
target_metadata=target_metadata,
compare_type=True,
compare_server_default=True,
)
with context.begin_transaction():
context.run_migrations()
async def run_async_migrations() -> None:
"""Run migrations in 'online' mode."""
connectable = async_engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
async with connectable.connect() as connection:
await connection.run_sync(do_run_migrations)
await connectable.dispose()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode."""
asyncio.run(run_async_migrations())
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
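With this environment-driven URL resolution, Alembic can also be run by hand against a local database without touching any config file. A sketch, assuming this env.py lives under `services/forecasting` and a reachable dev database (the URL is a placeholder):

```bash
cd services/forecasting
export FORECASTING_DATABASE_URL="postgresql+asyncpg://forecasting_user:change-me@localhost:5432/forecasting_db"

# Autogenerate a revision from the models, then apply it
alembic revision --autogenerate -m "describe the change"
alembic upgrade head

# Or stamp an existing schema at the latest revision without running migrations
alembic stamp head
```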


@@ -19,6 +19,9 @@ COPY shared/ /app/shared/
 # Copy application code
 COPY services/inventory/app/ /app/app/
 
+# Copy scripts directory
+COPY scripts/ /app/scripts/
+
 # Set Python path to include shared modules
 ENV PYTHONPATH=/app


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os


@@ -5,6 +5,7 @@ Inventory Service FastAPI Application
 import os
 from fastapi import FastAPI
+from sqlalchemy import text
 
 # Import core modules
 from app.core.config import settings
@@ -21,6 +22,27 @@ from app.api.food_safety import router as food_safety_router
 class InventoryService(StandardFastAPIService):
     """Inventory Service with standardized setup"""
 
+    expected_migration_version = "001_initial_inventory"
+
+    async def on_startup(self, app):
+        """Custom startup logic including migration verification"""
+        await self.verify_migrations()
+        await super().on_startup(app)
+
+    async def verify_migrations(self):
+        """Verify database schema matches the latest migrations."""
+        try:
+            async with self.database_manager.get_session() as session:
+                result = await session.execute(text("SELECT version_num FROM alembic_version"))
+                version = result.scalar()
+                if version != self.expected_migration_version:
+                    self.logger.error(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                    raise RuntimeError(f"Migration version mismatch: expected {self.expected_migration_version}, got {version}")
+                self.logger.info(f"Migration verification successful: {version}")
+        except Exception as e:
+            self.logger.error(f"Migration verification failed: {e}")
+            raise
+
     def __init__(self):
         # Define expected database tables for health checks
         inventory_expected_tables = [


@@ -1,93 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = .
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version number format
# Uses Alembic datetime format
version_num_format = %%(year)d%%(month).2d%%(day).2d_%%(hour).2d%%(minute).2d_%%(second).2d
# version name format
version_path_separator = /
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = postgresql+asyncpg://inventory_user:inventory_pass123@inventory-db:5432/inventory_db
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S


@@ -35,8 +35,24 @@ except ImportError as e:
 # this is the Alembic Config object
 config = context.config
 
-# Set database URL from settings if not already set
-database_url = os.getenv('DATABASE_URL') or getattr(settings, 'DATABASE_URL', None)
+# Set database URL from environment variables or settings
+# Try service-specific DATABASE_URL first, then fall back to generic
+database_url = os.getenv('INVENTORY_DATABASE_URL') or os.getenv('DATABASE_URL')
+
+# If DATABASE_URL is not set, construct from individual components
+if not database_url:
+    postgres_host = os.getenv('POSTGRES_HOST')
+    postgres_port = os.getenv('POSTGRES_PORT', '5432')
+    postgres_db = os.getenv('POSTGRES_DB')
+    postgres_user = os.getenv('POSTGRES_USER')
+    postgres_password = os.getenv('POSTGRES_PASSWORD')
+
+    if all([postgres_host, postgres_db, postgres_user, postgres_password]):
+        database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
+    else:
+        # Fallback to settings
+        database_url = getattr(settings, 'DATABASE_URL', None)
 
 if database_url:
     config.set_main_option("sqlalchemy.url", database_url)


@@ -0,0 +1,111 @@
"""Alembic environment configuration for inventory service"""
import asyncio
import logging
import os
import sys
from logging.config import fileConfig
from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config
from alembic import context
# Add the service directory to the Python path
service_path = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if service_path not in sys.path:
sys.path.insert(0, service_path)
# Add shared modules to path
shared_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "shared"))
if shared_path not in sys.path:
sys.path.insert(0, shared_path)
try:
from app.core.config import settings
from shared.database.base import Base
# Import all models to ensure they are registered with Base.metadata
from app.models import * # Import all models
except ImportError as e:
print(f"Import error in migrations env.py: {e}")
print(f"Current Python path: {sys.path}")
raise
# this is the Alembic Config object
config = context.config
# Set database URL from environment variables or settings
database_url = os.getenv('INVENTORY_DATABASE_URL') or os.getenv('DATABASE_URL')
# If DATABASE_URL is not set, construct from individual components
if not database_url:
postgres_host = os.getenv('POSTGRES_HOST')
postgres_port = os.getenv('POSTGRES_PORT', '5432')
postgres_db = os.getenv('POSTGRES_DB')
postgres_user = os.getenv('POSTGRES_USER')
postgres_password = os.getenv('POSTGRES_PASSWORD')
if all([postgres_host, postgres_db, postgres_user, postgres_password]):
database_url = f"postgresql+asyncpg://{postgres_user}:{postgres_password}@{postgres_host}:{postgres_port}/{postgres_db}"
else:
# Fallback to settings
database_url = getattr(settings, 'DATABASE_URL', None)
if database_url:
config.set_main_option("sqlalchemy.url", database_url)
# Interpret the config file for Python logging
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# Set target metadata
target_metadata = Base.metadata
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode."""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
compare_type=True,
compare_server_default=True,
)
with context.begin_transaction():
context.run_migrations()
def do_run_migrations(connection: Connection) -> None:
context.configure(
connection=connection,
target_metadata=target_metadata,
compare_type=True,
compare_server_default=True,
)
with context.begin_transaction():
context.run_migrations()
async def run_async_migrations() -> None:
"""Run migrations in 'online' mode."""
connectable = async_engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
async with connectable.connect() as connection:
await connection.run_sync(do_run_migrations)
await connectable.dispose()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode."""
asyncio.run(run_async_migrations())
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()


@@ -29,7 +29,7 @@ revision_environment = false
 sourceless = false
 
 # version of a migration file's filename format
-version_num_format = %s
+version_num_format = %%s
 
 # version path separator
 version_path_separator = os

Some files were not shown because too many files have changed in this diff.