# =============================================================================
# Bakery IA - Tiltfile for Secure Local Development
# =============================================================================
# Features:
# - TLS encryption for PostgreSQL and Redis
# - Strong 32-character passwords with PersistentVolumeClaims
# - PostgreSQL pgcrypto extension and audit logging
# - Organized resource dependencies and live-reload capabilities
# - Local registry for faster image builds and deployments
#
# Build Optimization:
# - Services only rebuild when their specific code changes (not all services)
# - Shared folder changes trigger rebuild of ALL services (as they all depend on it)
# - Uses 'only' parameter to watch only relevant files per service
# - Frontend only rebuilds when frontend/ code changes
# - Gateway only rebuilds when gateway/ or shared/ code changes
# =============================================================================
# =============================================================================
# GLOBAL VARIABLES - DEFINED FIRST TO BE AVAILABLE FOR ALL RESOURCES
# =============================================================================
# Docker registry configuration
# Set USE_DOCKERHUB=true environment variable to push images to Docker Hub
# Otherwise, uses local kind registry for faster builds and deployments
use_dockerhub = False  # Use local kind registry by default
if 'USE_DOCKERHUB' in os.environ:
    use_dockerhub = os.environ['USE_DOCKERHUB'].lower() == 'true'
dockerhub_username = 'uals'  # Default username
if 'DOCKERHUB_USERNAME' in os.environ:
    dockerhub_username = os.environ['DOCKERHUB_USERNAME']
# Base image registry configuration for Dockerfile ARGs
# This controls where the base Python image is pulled from during builds
base_registry = 'localhost:5000'  # Default for local dev (kind registry)
python_image = 'python_3_11_slim'  # Local registry uses underscores (matches prepull naming)
if 'BASE_REGISTRY' in os.environ:
    base_registry = os.environ['BASE_REGISTRY']
if 'PYTHON_IMAGE' in os.environ:
    python_image = os.environ['PYTHON_IMAGE']
# For Docker Hub mode, use canonical image names
if use_dockerhub:
    base_registry = 'docker.io'
    python_image = 'python:3.11-slim'
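# Illustrative sketch of the Dockerfile preamble these build args are meant to
# feed; the real Dockerfiles live under services/*/Dockerfile and may differ:
#   ARG BASE_REGISTRY=localhost:5000
#   ARG PYTHON_IMAGE=python_3_11_slim
#   FROM ${BASE_REGISTRY}/${PYTHON_IMAGE}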
# =============================================================================
# PREPULL BASE IMAGES - RUNS AFTER SECURITY SETUP
# =============================================================================
# Dependency order: apply-k8s-manifests -> security-setup -> ingress-status-check
# -> kind-cluster-configuration -> prepull-base-images
# Prepull runs AFTER security setup to ensure registry is available
local_resource(
'prepull-base-images',
cmd='''#!/usr/bin/env bash
echo "=========================================="
echo "STARTING PRE PULL WITH PROPER DEPENDENCIES"
echo "=========================================="
echo ""
# Export environment variables for the prepull script
export USE_GITEA_REGISTRY=false
export USE_LOCAL_REGISTRY=true
# Run the prepull script
if ./scripts/prepull-base-images.sh; then
echo ""
echo "✓ Base images prepull completed successfully"
echo "=========================================="
echo "CONTINUING WITH TILT SETUP..."
echo "=========================================="
exit 0
else
echo ""
echo "⚠ Base images prepull had issues"
echo "This may affect image availability for services"
echo "=========================================="
# Continue execution - images are still available locally
exit 0
fi
''',
resource_deps=['kind-cluster-configuration'], # Runs AFTER kind cluster configuration
labels=['00-prepull'],
auto_init=True,
allow_parallel=False
)
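# To run the prepull step by hand with the same environment contract as above:
#   USE_GITEA_REGISTRY=false USE_LOCAL_REGISTRY=true ./scripts/prepull-base-images.sh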
# =============================================================================
# TILT CONFIGURATION
# =============================================================================
# Update settings
update_settings(
max_parallel_updates=2, # Reduce parallel updates to avoid resource exhaustion
k8s_upsert_timeout_secs=120 # Increase timeout for slower local builds
)
# Ensure we're running in the correct context
allow_k8s_contexts('kind-bakery-ia-local')
# =============================================================================
# DISK SPACE MANAGEMENT & CLEANUP CONFIGURATION
# =============================================================================
# Disk space management settings
disk_cleanup_enabled = True  # Default to True, can be disabled with TILT_DISABLE_CLEANUP=true
if 'TILT_DISABLE_CLEANUP' in os.environ:
    disk_cleanup_enabled = os.environ['TILT_DISABLE_CLEANUP'].lower() != 'true'
disk_space_threshold_gb = '10'
if 'TILT_DISK_THRESHOLD_GB' in os.environ:
    disk_space_threshold_gb = os.environ['TILT_DISK_THRESHOLD_GB']
disk_cleanup_frequency_minutes = '30'
if 'TILT_CLEANUP_FREQUENCY' in os.environ:
    disk_cleanup_frequency_minutes = os.environ['TILT_CLEANUP_FREQUENCY']
print("""
DISK SPACE MANAGEMENT CONFIGURATION
======================================
Cleanup Enabled: {}
Free Space Threshold: {}GB
Cleanup Frequency: Every {} minutes
To disable cleanup: export TILT_DISABLE_CLEANUP=true
To change threshold: export TILT_DISK_THRESHOLD_GB=20
To change frequency: export TILT_CLEANUP_FREQUENCY=60
""".format(
'YES' if disk_cleanup_enabled else 'NO (TILT_DISABLE_CLEANUP=true)',
disk_space_threshold_gb,
disk_cleanup_frequency_minutes
))
# Automatic cleanup scheduler (informational only - actual scheduling done externally)
if disk_cleanup_enabled:
    local_resource(
        'automatic-disk-cleanup-info',
        cmd='''
echo "Automatic disk cleanup is ENABLED"
echo "Settings:"
echo " - Threshold: ''' + disk_space_threshold_gb + ''' GB free space"
echo " - Frequency: Every ''' + disk_cleanup_frequency_minutes + ''' minutes"
echo ""
echo "Note: Actual cleanup runs via external scheduling (cron job or similar)"
echo "To run cleanup now: tilt trigger manual-disk-cleanup"
''',
        labels=['99-cleanup'],
        auto_init=True,
        allow_parallel=False
    )
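# Illustrative crontab entry for the external scheduling mentioned above (the
# repo path is a placeholder; check scripts/cleanup_disk_space.py --help for
# the flags it actually supports):
#   */30 * * * * cd /path/to/bakery-ia && python3 scripts/cleanup_disk_space.py --manual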
# Manual cleanup trigger (can be run on demand)
local_resource(
'manual-disk-cleanup',
cmd='''
echo "Starting manual disk cleanup..."
python3 scripts/cleanup_disk_space.py --manual --verbose
''',
labels=['99-cleanup'],
auto_init=False,
allow_parallel=False
)
# Disk space monitoring resource
local_resource(
'disk-space-monitor',
cmd='''
echo "DISK SPACE MONITORING"
echo "======================================"
# Get disk usage
df -h / | grep -v Filesystem | awk '{{print "Total: " $2 " | Used: " $3 " | Free: " $4 " | Usage: " $5}}'
# Get Docker disk usage
echo ""
echo "DOCKER DISK USAGE:"
docker system df
# Get Kubernetes disk usage (if available)
echo ""
echo "KUBERNETES DISK USAGE:"
kubectl get pvc -n bakery-ia --no-headers 2>/dev/null | awk '{{print "PVC: " $1 " | Status: " $2 " | Capacity: " $3 " | Used: " $4}}' || echo " Kubernetes PVCs not available"
echo ""
echo "Cleanup Status:"
if [ "{disk_cleanup_enabled}" = "True" ]; then
echo " Automatic cleanup: ENABLED (every {disk_cleanup_frequency_minutes} minutes)"
echo " Threshold: {disk_space_threshold_gb}GB free space"
else
echo " Automatic cleanup: DISABLED"
echo " To enable: unset TILT_DISABLE_CLEANUP or set TILT_DISABLE_CLEANUP=false"
fi
echo ""
echo "Manual cleanup commands:"
echo " tilt trigger manual-disk-cleanup # Run cleanup now"
echo " docker system prune -a # Manual Docker cleanup"
echo " kubectl delete jobs --all # Clean up completed jobs"
'''.format(
        disk_cleanup_enabled=disk_cleanup_enabled,
        disk_cleanup_frequency_minutes=disk_cleanup_frequency_minutes,
        disk_space_threshold_gb=disk_space_threshold_gb
    ),
labels=['99-cleanup'],
auto_init=False,
allow_parallel=False
)
# Use the registry configuration defined at the top of the file
if use_dockerhub:
    print("""
DOCKER HUB MODE ENABLED
Images will be pushed to Docker Hub: docker.io/%s
Base images will be pulled from: %s/%s
Make sure you're logged in: docker login
To disable: unset USE_DOCKERHUB or set USE_DOCKERHUB=false
""" % (dockerhub_username, base_registry, python_image))
    default_registry('docker.io/%s' % dockerhub_username)
else:
    print("""
LOCAL REGISTRY MODE (KIND)
Using local kind registry for faster builds: localhost:5000
Base images will be pulled from: %s/%s
This registry is created by the kubernetes_restart.sh script
To use Docker Hub: export USE_DOCKERHUB=true
To change base registry: export BASE_REGISTRY=<registry-url>
To change Python image: export PYTHON_IMAGE=<image:tag>
""" % (base_registry, python_image))
    default_registry('localhost:5000')
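# Note on default_registry(): image names stay as written in the manifests
# (e.g. 'bakery/gateway'); Tilt rewrites the refs at build/deploy time, so
# 'bakery/gateway' is pushed as something like 'localhost:5000/bakery_gateway'.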
# =============================================================================
# INGRESS HEALTH CHECK
# =============================================================================
# Check ingress status and readiness with improved logic
local_resource(
'ingress-status-check',
cmd='''
echo "=========================================="
echo "CHECKING INGRESS STATUS AND READINESS"
echo "=========================================="
# Wait for ingress controller to be ready
echo "Waiting for ingress controller to be ready..."
kubectl wait --for=condition=ready pod -l app.kubernetes.io/component=controller -n ingress-nginx --timeout=300s
# Check ingress controller status
echo ""
echo "INGRESS CONTROLLER STATUS:"
kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller
# Quick check: verify ingress controller is running
echo "Quick check: verifying ingress controller is running..."
if kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller | grep -q "Running"; then
echo "✓ Ingress controller is running"
else
echo "⚠ Ingress controller may not be running properly"
fi
# Brief pause to allow any pending ingress resources to be processed
sleep 2
# Check ingress resources (just report status, don't wait)
echo ""
echo "INGRESS RESOURCES:"
kubectl get ingress -A 2>/dev/null || echo "No ingress resources found yet"
# Check ingress load balancer status
echo ""
echo "INGRESS LOAD BALANCER STATUS:"
kubectl get svc -n ingress-nginx ingress-nginx-controller -o wide 2>/dev/null || echo "Ingress controller service not found"
# Verify ingress endpoints
echo ""
echo "INGRESS ENDPOINTS:"
kubectl get endpoints -n ingress-nginx 2>/dev/null || echo "Ingress endpoints not found"
# Test connectivity to the ingress endpoints
echo ""
echo "TESTING INGRESS CONNECTIVITY:"
# Test if we can reach the ingress controller
kubectl exec -n ingress-nginx deployment/ingress-nginx-controller --container controller -- \
/nginx-ingress-controller --version > /dev/null 2>&1 && echo "✓ Ingress controller accessible"
# In Kind clusters, ingresses typically don't get external IPs, so we just verify they exist
echo "In Kind clusters, ingresses don't typically get external IPs - this is expected behavior"
echo ""
echo "Ingress status check completed successfully!"
echo "Project ingress resources are ready for Gitea and other services."
echo "=========================================="
''',
resource_deps=['security-setup'], # According to requested order: security-setup -> ingress-status-check
labels=['00-ingress-check'],
auto_init=True,
allow_parallel=False
)
# =============================================================================
# SECURITY & INITIAL SETUP
# =============================================================================
print("""
======================================
Bakery IA Secure Development Mode
======================================
Security Features:
TLS encryption for PostgreSQL and Redis
Strong 32-character passwords
PersistentVolumeClaims (no data loss)
Column encryption: pgcrypto extension
Audit logging: PostgreSQL query logging
Object storage: MinIO with TLS for ML models
Monitoring:
Service metrics available at /metrics endpoints
Telemetry ready (traces, metrics, logs)
SigNoz deployment optional for local dev (see the signoz-deploy resource)
Applying security configurations...
""")
# Apply security configurations after applying manifests
# According to requested order: apply-k8s-manifests -> security-setup
security_resource_deps = ['apply-k8s-manifests'] # Depend on manifests first
local_resource(
'security-setup',
cmd='''
echo "=========================================="
echo "APPLYING SECRETS AND TLS CERTIFICATIONS"
echo "=========================================="
echo "Setting up security configurations..."
# First, ensure all required namespaces exist
echo "Creating namespaces..."
kubectl apply -f infrastructure/namespaces/bakery-ia.yaml
kubectl apply -f infrastructure/namespaces/tekton-pipelines.yaml
# Wait for namespaces to be ready
echo "Waiting for namespaces to be ready..."
for ns in bakery-ia tekton-pipelines; do
until kubectl get namespace $ns 2>/dev/null; do
echo "Waiting for namespace $ns to be created..."
sleep 2
done
echo "Namespace $ns is available"
done
# Apply common secrets and configs
echo "Applying common configurations..."
kubectl apply -f infrastructure/environments/common/configs/configmap.yaml
kubectl apply -f infrastructure/environments/common/configs/secrets.yaml
# Apply database secrets and configs
echo "Applying database security configurations..."
kubectl apply -f infrastructure/platform/storage/postgres/secrets/postgres-tls-secret.yaml
kubectl apply -f infrastructure/platform/storage/postgres/configs/postgres-init-config.yaml
kubectl apply -f infrastructure/platform/storage/postgres/configs/postgres-logging-config.yaml
# Apply Redis secrets
kubectl apply -f infrastructure/platform/storage/redis/secrets/redis-tls-secret.yaml
# Apply MinIO secrets and configs
kubectl apply -f infrastructure/platform/storage/minio/minio-secrets.yaml
kubectl apply -f infrastructure/platform/storage/minio/secrets/minio-tls-secret.yaml
# Apply Mail/SMTP secrets (already included in common/configs/secrets.yaml)
# Apply CI/CD secrets
# Note: infrastructure/cicd/tekton-helm/templates/secrets.yaml is a Helm template file
# and should be applied via the Helm chart deployment, not directly with kubectl
echo "Skipping infrastructure/cicd/tekton-helm/templates/secrets.yaml (Helm template file)"
echo "This file will be applied when the Tekton Helm chart is deployed"
# Apply self-signed ClusterIssuer for cert-manager (required before certificates)
echo "Applying self-signed ClusterIssuer..."
kubectl apply -f infrastructure/platform/cert-manager/selfsigned-issuer.yaml
# Wait for ClusterIssuer to be ready
echo "Waiting for ClusterIssuer to be ready..."
kubectl wait --for=condition=Ready clusterissuer/selfsigned-issuer --timeout=60s || echo "ClusterIssuer may still be provisioning..."
# Apply TLS certificates for ingress
echo "Applying TLS certificates for ingress..."
kubectl apply -f infrastructure/environments/dev/k8s-manifests/dev-certificate.yaml
# Wait for cert-manager to create the certificate
echo "Waiting for TLS certificate to be ready..."
kubectl wait --for=condition=Ready certificate/bakery-dev-tls-cert -n bakery-ia --timeout=120s || echo "Certificate may still be provisioning..."
# Verify TLS certificates are created
echo "Verifying TLS certificates..."
if kubectl get secret bakery-dev-tls-cert -n bakery-ia &>/dev/null; then
echo "✓ TLS certificate 'bakery-dev-tls-cert' found in bakery-ia namespace"
else
echo "⚠ TLS certificate 'bakery-dev-tls-cert' not found, may still be provisioning"
fi
# Verify other secrets are created
echo "Verifying security secrets..."
for secret in gitea-admin-secret; do
if kubectl get secret $secret -n gitea &>/dev/null; then
echo "✓ Secret '$secret' found in gitea namespace"
else
echo " Secret '$secret' not found in gitea namespace (will be created when Gitea is deployed)"
fi
done
echo ""
echo "Security configurations applied successfully!"
echo "TLS certificates and secrets are ready for use."
echo "=========================================="
''',
resource_deps=security_resource_deps,  # Runs after apply-k8s-manifests
labels=['00-security'],
auto_init=True
)
# Kind cluster configuration for registry access
local_resource(
'kind-cluster-configuration',
cmd='''
echo "=========================================="
echo "CONFIGURING KIND CLUSTER FOR REGISTRY ACCESS"
echo "=========================================="
echo "Setting up localhost:5000 access in Kind cluster..."
echo ""
# Wait for the TLS certificate to be available
echo "Waiting for TLS certificate to be ready..."
MAX_RETRIES=30
RETRY_COUNT=0
while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
if kubectl get secret bakery-dev-tls-cert -n bakery-ia &>/dev/null; then
echo "TLS certificate is ready"
break
fi
echo " Waiting for TLS certificate... (attempt $((RETRY_COUNT+1))/$MAX_RETRIES)"
sleep 5
RETRY_COUNT=$((RETRY_COUNT+1))
done
if [ $RETRY_COUNT -eq $MAX_RETRIES ]; then
echo "⚠ Warning: TLS certificate not ready after $MAX_RETRIES attempts"
echo " Proceeding with configuration anyway..."
fi
# Add localhost:5000 registry configuration to containerd
echo "Configuring containerd to access localhost:5000 registry..."
# Create the hosts.toml file for containerd to access localhost:5000 registry
if docker exec bakery-ia-local-control-plane sh -c 'cat > /etc/containerd/certs.d/localhost:5000/hosts.toml << EOF
server = "http://localhost:5000"
[host."http://kind-registry:5000"]
capabilities = ["pull", "resolve", "push"]
skip_verify = true
EOF'; then
echo "✓ Successfully created hosts.toml for localhost:5000 registry access"
else
echo "⚠ Failed to create hosts.toml for containerd"
echo " This may be because the Kind container is not running yet"
echo " The kubernetes_restart.sh script should handle this configuration"
fi
# Create the hosts.toml file for kind-registry:5000 (used by migration jobs)
if docker exec bakery-ia-local-control-plane sh -c 'cat > /etc/containerd/certs.d/kind-registry:5000/hosts.toml << EOF
server = "http://kind-registry:5000"
[host."http://kind-registry:5000"]
capabilities = ["pull", "resolve", "push"]
skip_verify = true
EOF'; then
echo "✓ Successfully created hosts.toml for kind-registry:5000 access"
else
echo "⚠ Failed to create hosts.toml for kind-registry:5000"
echo " This may be because the Kind container is not running yet"
echo " The kubernetes_restart.sh script should handle this configuration"
fi
echo ""
echo "Kind cluster configuration completed!"
echo "Registry access should now be properly configured."
echo "=========================================="
''',
resource_deps=['ingress-status-check'], # According to requested order: ingress-status-check -> kind-cluster-configuration
labels=['00-kind-config'],
auto_init=True,
allow_parallel=False
)
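# For reference, the same registry access can be granted declaratively at
# cluster-creation time in the kind config (sketch, assuming the kind-registry
# container created by kubernetes_restart.sh):
#   containerdConfigPatches:
#   - |-
#     [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
#       endpoint = ["http://kind-registry:5000"]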
# =============================================================================
# EXECUTE OVERLAYS KUSTOMIZATIONS
# =============================================================================
# Execute the main kustomize overlay for the dev environment with proper dependencies
k8s_yaml(kustomize('infrastructure/environments/dev/k8s-manifests'))
# Create a visible resource for applying Kubernetes manifests with proper dependencies
local_resource(
'apply-k8s-manifests',
cmd='''
echo "=========================================="
echo "EXECUTING OVERLAYS KUSTOMIZATIONS"
echo "=========================================="
echo "Loading all Kubernetes resources including ingress configuration..."
echo ""
echo "This step applies:"
echo "- All services and deployments"
echo "- Ingress configuration for external access"
echo "- Database configurations"
echo "- Security configurations"
echo "- CI/CD configurations"
echo ""
echo "Overlays kustomizations executed successfully!"
echo "=========================================="
''',
labels=['00-k8s-manifests'],
auto_init=True,
allow_parallel=False
)
# =============================================================================
# DOCKER BUILD HELPERS
# =============================================================================
# Helper function for Python services with live updates
# This function ensures services only rebuild when their specific code changes,
# but all services rebuild when shared/ folder changes
def build_python_service(service_name, service_path):
    docker_build(
        'bakery/' + service_name,
        context='.',
        dockerfile='./services/' + service_path + '/Dockerfile',
        # Build arguments for environment-configurable base images
        build_args={
            'BASE_REGISTRY': base_registry,
            'PYTHON_IMAGE': python_image,
        },
        # Only watch files relevant to this specific service + shared code
        only=[
            './services/' + service_path,
            './shared',
            './scripts',
        ],
        live_update=[
            # Fall back to full image build if Dockerfile or requirements change
            fall_back_on([
                './services/' + service_path + '/Dockerfile',
                './services/' + service_path + '/requirements.txt',
                './shared/requirements-tracing.txt',
            ]),
            # Sync service code
            sync('./services/' + service_path, '/app'),
            # Sync shared libraries
            sync('./shared', '/app/shared'),
            # Sync scripts
            sync('./scripts', '/app/scripts'),
            # Install new dependencies if requirements.txt changes
            run(
                'pip install --no-cache-dir -r requirements.txt',
                trigger=['./services/' + service_path + '/requirements.txt']
            ),
            # Restart the server on Python file changes (HUP signal triggers graceful reload)
            run(
                'kill -HUP 1',
                trigger=[
                    './services/' + service_path + '/**/*.py',
                    './shared/**/*.py'
                ]
            ),
        ],
        # Ignore common patterns that don't require rebuilds
        ignore=[
            '.git',
            '**/__pycache__',
            '**/*.pyc',
            '**/.pytest_cache',
            '**/node_modules',
            '**/.DS_Store'
        ]
    )
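# Note: the 'kill -HUP 1' step assumes PID 1 in the container reloads on
# SIGHUP (true for e.g. a gunicorn master managing uvicorn workers); a bare
# uvicorn process may need '--reload' instead. Check a service's Dockerfile
# CMD if hot reload does not kick in.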
# =============================================================================
# INFRASTRUCTURE IMAGES
# =============================================================================
# Frontend (React + Vite)
frontend_debug_env = 'false'  # Default to false
if 'FRONTEND_DEBUG' in os.environ:
    frontend_debug_env = os.environ['FRONTEND_DEBUG']
frontend_debug = frontend_debug_env.lower() == 'true'
if frontend_debug:
    print("""
FRONTEND DEBUG MODE ENABLED
Building frontend with NO minification for easier debugging.
Full React error messages will be displayed.
To disable: unset FRONTEND_DEBUG or set FRONTEND_DEBUG=false
""")
else:
    print("""
FRONTEND PRODUCTION MODE
Building frontend with minification for optimized performance.
To enable debug mode: export FRONTEND_DEBUG=true
""")
docker_build(
'bakery/dashboard',
context='./frontend',
dockerfile='./frontend/Dockerfile.kubernetes.debug' if frontend_debug else './frontend/Dockerfile.kubernetes',
live_update=[
sync('./frontend/src', '/app/src'),
sync('./frontend/public', '/app/public'),
],
build_args={
'NODE_OPTIONS': '--max-old-space-size=8192'
},
ignore=[
'playwright-report/**',
'test-results/**',
'node_modules/**',
'.DS_Store'
]
)
# Gateway
docker_build(
'bakery/gateway',
context='.',
dockerfile='./gateway/Dockerfile',
# Build arguments for environment-configurable base images
build_args={
'BASE_REGISTRY': base_registry,
'PYTHON_IMAGE': python_image,
},
# Only watch gateway-specific files and shared code
only=[
'./gateway',
'./shared',
'./scripts',
],
live_update=[
fall_back_on([
'./gateway/Dockerfile',
'./gateway/requirements.txt',
'./shared/requirements-tracing.txt',
]),
sync('./gateway', '/app'),
sync('./shared', '/app/shared'),
sync('./scripts', '/app/scripts'),
run('kill -HUP 1', trigger=['./gateway/**/*.py', './shared/**/*.py']),
],
ignore=[
'.git',
'**/__pycache__',
'**/*.pyc',
'**/.pytest_cache',
'**/node_modules',
'**/.DS_Store'
]
)
# =============================================================================
# MICROSERVICE IMAGES
# =============================================================================
# Core Services
build_python_service('auth-service', 'auth')
build_python_service('tenant-service', 'tenant')
# Data & Analytics Services
build_python_service('training-service', 'training')
build_python_service('forecasting-service', 'forecasting')
build_python_service('ai-insights-service', 'ai_insights')
# Operations Services
build_python_service('sales-service', 'sales')
build_python_service('inventory-service', 'inventory')
build_python_service('production-service', 'production')
build_python_service('procurement-service', 'procurement')
build_python_service('distribution-service', 'distribution')
# Supporting Services
build_python_service('recipes-service', 'recipes')
build_python_service('suppliers-service', 'suppliers')
build_python_service('pos-service', 'pos')
build_python_service('orders-service', 'orders')
build_python_service('external-service', 'external')
# Platform Services
build_python_service('notification-service', 'notification')
build_python_service('alert-processor', 'alert_processor')
build_python_service('orchestrator-service', 'orchestrator')
# Demo Services
build_python_service('demo-session-service', 'demo_session')
# Tell Tilt that demo-cleanup-worker uses the demo-session-service image
k8s_image_json_path(
'bakery/demo-session-service',
'{.spec.template.spec.containers[?(@.name=="worker")].image}',
name='demo-cleanup-worker'
)
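# The JSON path above matches a worker pod spec shaped roughly like this
# (sketch; the actual manifest lives with the demo-cleanup-worker resources):
#   spec:
#     template:
#       spec:
#         containers:
#         - name: worker
#           image: bakery/demo-session-service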
# =============================================================================
# INFRASTRUCTURE RESOURCES
# =============================================================================
# Redis & RabbitMQ
k8s_resource('redis', resource_deps=['security-setup'], labels=['01-infrastructure'])
k8s_resource('rabbitmq', resource_deps=['security-setup'], labels=['01-infrastructure'])
# MinIO Storage
k8s_resource('minio', resource_deps=['security-setup'], labels=['01-infrastructure'])
k8s_resource('minio-bucket-init', resource_deps=['minio'], labels=['01-infrastructure'])
# Unbound DNSSEC Resolver - Infrastructure component for Mailu DNS validation
local_resource(
'unbound-helm',
cmd='''
echo "Deploying Unbound DNS resolver via Helm..."
echo ""
# Check if Unbound is already deployed
if helm list -n bakery-ia | grep -q unbound; then
echo "Unbound already deployed, checking status..."
helm status unbound -n bakery-ia
else
echo "Installing Unbound..."
# Determine environment (dev or prod) based on context
ENVIRONMENT="dev"
if [[ "$(kubectl config current-context)" == *"prod"* ]]; then
ENVIRONMENT="prod"
fi
echo "Environment detected: $ENVIRONMENT"
# Install Unbound with appropriate values
if [ "$ENVIRONMENT" = "dev" ]; then
helm upgrade --install unbound infrastructure/platform/networking/dns/unbound-helm \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/networking/dns/unbound-helm/values.yaml \
-f infrastructure/platform/networking/dns/unbound-helm/dev/values.yaml \
--timeout 5m \
--wait
else
helm upgrade --install unbound infrastructure/platform/networking/dns/unbound-helm \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/networking/dns/unbound-helm/values.yaml \
-f infrastructure/platform/networking/dns/unbound-helm/prod/values.yaml \
--timeout 5m \
--wait
fi
echo ""
echo "Unbound deployment completed"
fi
echo ""
echo "Unbound DNS Service Information:"
echo " Service Name: unbound-dns.bakery-ia.svc.cluster.local"
echo " Ports: UDP/TCP 53"
echo " Used by: Mailu for DNS validation"
echo ""
echo "To check pod status: kubectl get pods -n bakery-ia | grep unbound"
''',
resource_deps=['security-setup'],
labels=['01-infrastructure'],
auto_init=True # Auto-deploy with Tilt startup
)
# Mail Infrastructure (Mailu) - Manual trigger for Helm deployment
local_resource(
'mailu-helm',
cmd='''
echo "Deploying Mailu via Helm..."
echo ""
# =====================================================
# Step 1: Ensure Unbound is deployed and get its IP
# =====================================================
echo "Checking Unbound DNS resolver..."
if ! kubectl get svc unbound-dns -n bakery-ia &>/dev/null; then
echo "ERROR: Unbound DNS service not found!"
echo "Please deploy Unbound first by triggering 'unbound-helm' resource"
exit 1
fi
UNBOUND_IP=$(kubectl get svc unbound-dns -n bakery-ia -o jsonpath='{.spec.clusterIP}')
echo "Unbound DNS service IP: $UNBOUND_IP"
# =====================================================
# Step 2: Configure CoreDNS to forward to Unbound
# =====================================================
echo ""
echo "Configuring CoreDNS to forward external queries to Unbound for DNSSEC validation..."
# Check current CoreDNS forward configuration
CURRENT_FORWARD=$(kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}' | grep -o 'forward \\. [0-9.]*' | awk '{print $3}')
if [ "$CURRENT_FORWARD" != "$UNBOUND_IP" ]; then
echo "Updating CoreDNS to forward to Unbound ($UNBOUND_IP)..."
# Change to the project root so relative file paths resolve (assumes a git checkout)
cd "$(git rev-parse --show-toplevel)"
# Create a temporary Corefile with the forwarding configuration
TEMP_COREFILE=$(mktemp)
cat > "$TEMP_COREFILE" << EOF
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . $UNBOUND_IP {
max_concurrent 1000
}
cache 30 {
disable success cluster.local
disable denial cluster.local
}
loop
reload
loadbalance
}
EOF
# Create a complete new configmap YAML with the updated Corefile content
cat > /tmp/coredns_updated.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
$(sed 's/^/    /' "$TEMP_COREFILE")
EOF
# Apply the updated configmap
kubectl apply -f /tmp/coredns_updated.yaml
# Clean up the temporary file
rm "$TEMP_COREFILE"
# Restart CoreDNS
kubectl rollout restart deployment coredns -n kube-system
echo "Waiting for CoreDNS to restart..."
kubectl rollout status deployment coredns -n kube-system --timeout=60s
echo "CoreDNS configured successfully"
else
echo "CoreDNS already configured to forward to Unbound"
fi
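# Optional sanity check once CoreDNS forwards to Unbound (illustrative pod
# name; requires an image that ships nslookup, e.g. busybox):
#   kubectl run dnssec-check --rm -it --image=busybox --restart=Never -- nslookup gitea.com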
# =====================================================
# Step 3: Create self-signed TLS certificate for Mailu Front
# =====================================================
echo ""
echo "Checking Mailu TLS certificates..."
if ! kubectl get secret mailu-certificates -n bakery-ia &>/dev/null; then
echo "Creating self-signed TLS certificate for Mailu Front..."
# Generate certificate in temp directory
TEMP_DIR=$(mktemp -d)
cd "$TEMP_DIR"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt \
-subj "/CN=mail.bakery-ia.dev/O=bakery-ia" 2>/dev/null
kubectl create secret tls mailu-certificates \
--cert=tls.crt \
--key=tls.key \
-n bakery-ia
rm -rf "$TEMP_DIR"
echo "TLS certificate created"
else
echo "Mailu TLS certificate already exists"
fi
# =====================================================
# Step 4: Deploy Mailu via Helm
# =====================================================
echo ""
# Check if Mailu is already deployed
if helm list -n bakery-ia | grep -q mailu; then
echo "Mailu already deployed, checking status..."
helm status mailu -n bakery-ia
else
echo "Installing Mailu..."
# Add Mailu Helm repository if not already added
helm repo add mailu https://mailu.github.io/helm-charts 2>/dev/null || true
helm repo update mailu
# Determine environment (dev or prod) based on context
ENVIRONMENT="dev"
if [[ "$(kubectl config current-context)" == *"prod"* ]]; then
ENVIRONMENT="prod"
fi
echo "Environment detected: $ENVIRONMENT"
# Install Mailu with appropriate values
# Ensure we're in the project root for correct file paths (assumes a git checkout)
cd "$(git rev-parse --show-toplevel)"
if [ "$ENVIRONMENT" = "dev" ]; then
helm upgrade --install mailu mailu/mailu \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/mail/mailu-helm/values.yaml \
-f infrastructure/platform/mail/mailu-helm/dev/values.yaml \
--timeout 10m
else
helm upgrade --install mailu mailu/mailu \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/mail/mailu-helm/values.yaml \
-f infrastructure/platform/mail/mailu-helm/prod/values.yaml \
--timeout 10m
fi
echo ""
echo "Mailu deployment completed"
fi
# =====================================================
# Step 5: Apply Mailu Ingress
# =====================================================
echo ""
echo "Applying Mailu ingress configuration..."
cd "$(git rev-parse --show-toplevel)"
kubectl apply -f infrastructure/platform/mail/mailu-helm/mailu-ingress.yaml
echo "Mailu ingress applied for mail.bakery-ia.dev"
# =====================================================
# Step 6: Wait for pods and show status
# =====================================================
echo ""
echo "Waiting for Mailu pods to be ready..."
sleep 10
echo ""
echo "Mailu Pod Status:"
kubectl get pods -n bakery-ia | grep mailu
echo ""
echo "Mailu Access Information:"
echo " Admin Panel: https://mail.bakery-ia.dev/admin"
echo " Webmail: https://mail.bakery-ia.ldev/webmail"
echo " SMTP: mail.bakery-ia.dev:587 (STARTTLS)"
echo " IMAP: mail.bakery-ia.dev:993 (SSL/TLS)"
echo ""
echo "To create admin user:"
echo " Admin user created automatically via initialAccount feature in Helm values"
echo ""
echo "To check pod status: kubectl get pods -n bakery-ia | grep mailu"
''',
resource_deps=['unbound-helm'], # Ensure Unbound is deployed first
labels=['01-infrastructure'],
auto_init=False, # Manual trigger only
)
# Nominatim Geocoding - Manual trigger for Helm deployment
local_resource(
'nominatim-helm',
cmd='''
echo "Deploying Nominatim geocoding service via Helm..."
echo ""
# Check if Nominatim is already deployed
if helm list -n bakery-ia | grep -q nominatim; then
echo "Nominatim already deployed, checking status..."
helm status nominatim -n bakery-ia
else
echo "Installing Nominatim..."
# Determine environment (dev or prod) based on context
ENVIRONMENT="dev"
if [[ "$(kubectl config current-context)" == *"prod"* ]]; then
ENVIRONMENT="prod"
fi
echo "Environment detected: $ENVIRONMENT"
# Install Nominatim with appropriate values
if [ "$ENVIRONMENT" = "dev" ]; then
helm upgrade --install nominatim infrastructure/platform/nominatim/nominatim-helm \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/nominatim/nominatim-helm/values.yaml \
-f infrastructure/platform/nominatim/nominatim-helm/dev/values.yaml \
--timeout 10m \
--wait
else
helm upgrade --install nominatim infrastructure/platform/nominatim/nominatim-helm \
-n bakery-ia \
--create-namespace \
-f infrastructure/platform/nominatim/nominatim-helm/values.yaml \
-f infrastructure/platform/nominatim/nominatim-helm/prod/values.yaml \
--timeout 10m \
--wait
fi
echo ""
echo "Nominatim deployment completed"
fi
echo ""
echo "Nominatim Service Information:"
echo " Service Name: nominatim-service.bakery-ia.svc.cluster.local"
echo " Port: 8080"
echo " Health Check: http://nominatim-service:8080/status"
echo ""
echo "To check pod status: kubectl get pods -n bakery-ia | grep nominatim"
echo "To check Helm release: helm status nominatim -n bakery-ia"
''',
labels=['01-infrastructure'],
auto_init=False, # Manual trigger only
)
# =============================================================================
# MONITORING RESOURCES - SigNoz (Unified Observability)
# =============================================================================
# Deploy SigNoz using Helm with automatic deployment and progress tracking
local_resource(
'signoz-deploy',
cmd='''
echo "Deploying SigNoz Monitoring Stack..."
echo ""
# Check if SigNoz is already deployed
if helm list -n bakery-ia | grep -q signoz; then
echo "SigNoz already deployed, checking status..."
helm status signoz -n bakery-ia
else
echo "Installing SigNoz..."
# Add SigNoz Helm repository if not already added
helm repo add signoz https://charts.signoz.io 2>/dev/null || true
helm repo update signoz
# Install SigNoz with custom values in the bakery-ia namespace
helm upgrade --install signoz signoz/signoz \
-n bakery-ia \
-f infrastructure/monitoring/signoz/signoz-values-dev.yaml \
--timeout 10m \
--wait
echo ""
echo "SigNoz deployment completed"
fi
echo ""
echo "SigNoz Access Information:"
echo " URL: https://monitoring.bakery-ia.local"
echo " Username: admin"
echo " Password: admin"
echo ""
echo "OpenTelemetry Collector Endpoints:"
echo " gRPC: localhost:4317"
echo " HTTP: localhost:4318"
echo ""
echo "To check pod status: kubectl get pods -n signoz"
''',
labels=['05-monitoring'],
auto_init=False,
)
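# Services point their OTLP exporters at the collector via the standard
# OpenTelemetry env vars; the service name below is assumed from SigNoz chart
# defaults, so verify with: kubectl get svc -n bakery-ia | grep otel
#   OTEL_EXPORTER_OTLP_ENDPOINT=http://signoz-otel-collector.bakery-ia.svc.cluster.local:4317
#   OTEL_SERVICE_NAME=<service-name>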
# Deploy Flux CD using Helm with automatic deployment and progress tracking
local_resource(
'flux-cd-deploy',
cmd='''
echo "Deploying Flux CD GitOps Toolkit..."
echo ""
# Check if Flux CLI is installed, install if missing
if ! command -v flux &> /dev/null; then
echo "Flux CLI not found, installing..."
# Determine OS and architecture
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | tr '[:upper:]' '[:lower:]')
# Convert architecture format
if [[ "$ARCH" == "x86_64" ]]; then
ARCH="amd64"
elif [[ "$ARCH" == "aarch64" ]]; then
ARCH="arm64"
fi
# Download and install Flux CLI to user's local bin
echo "Detected OS: $OS, Architecture: $ARCH"
FLUX_VERSION="2.7.5"
DOWNLOAD_URL="https://github.com/fluxcd/flux2/releases/download/v${FLUX_VERSION}/flux_${FLUX_VERSION}_${OS}_${ARCH}.tar.gz"
echo "Downloading Flux CLI from: $DOWNLOAD_URL"
mkdir -p ~/.local/bin
cd /tmp
curl -sL "$DOWNLOAD_URL" -o flux.tar.gz
tar xzf flux.tar.gz
chmod +x flux
mv flux ~/.local/bin/
# Add to PATH if not already there
export PATH="$HOME/.local/bin:$PATH"
# Verify installation
if command -v flux &> /dev/null; then
echo "Flux CLI installed successfully"
else
echo "ERROR: Failed to install Flux CLI"
exit 1
fi
else
echo "Flux CLI is already installed"
fi
# Check if Flux CRDs are installed, install if missing
if ! kubectl get crd gitrepositories.source.toolkit.fluxcd.io >/dev/null 2>&1; then
echo "Installing Flux CRDs..."
flux install --namespace=flux-system --network-policy=false
else
echo "Flux CRDs are already installed"
fi
# Check if Flux is already deployed
if helm list -n flux-system | grep -q flux-cd; then
echo "Flux CD already deployed, checking status..."
helm status flux-cd -n flux-system
else
echo "Installing Flux CD Helm release..."
# Create the namespace if it doesn't exist
kubectl create namespace flux-system --dry-run=client -o yaml | kubectl apply -f -
# Install Flux CD with custom values using the local chart
helm upgrade --install flux-cd infrastructure/cicd/flux \
-n flux-system \
--create-namespace \
--timeout 10m \
--wait
echo ""
echo "Flux CD deployment completed"
fi
echo ""
echo "Flux CD Access Information:"
echo "To check status: flux check"
echo "To check GitRepository: kubectl get gitrepository -n flux-system"
echo "To check Kustomization: kubectl get kustomization -n flux-system"
echo ""
echo "To check pod status: kubectl get pods -n flux-system"
''',
labels=['99-cicd'],
auto_init=False,
)
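# A minimal GitRepository Flux would reconcile once bootstrapped (sketch; the
# actual definitions live in infrastructure/cicd/flux, and the URL here is a
# placeholder):
#   apiVersion: source.toolkit.fluxcd.io/v1
#   kind: GitRepository
#   metadata:
#     name: bakery-ia
#     namespace: flux-system
#   spec:
#     interval: 1m
#     url: https://gitea.bakery-ia.local/<org>/bakery-ia.git
#     ref:
#       branch: main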
# Optional exporters (in monitoring namespace) - DISABLED since using SigNoz
# k8s_resource('node-exporter', labels=['05-monitoring'])
# k8s_resource('postgres-exporter', resource_deps=['auth-db'], labels=['05-monitoring'])
# =============================================================================
# DATABASE RESOURCES
# =============================================================================
# Core Service Databases
k8s_resource('auth-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('tenant-db', resource_deps=['security-setup'], labels=['06-databases'])
# Data & Analytics Databases
k8s_resource('training-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('forecasting-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('ai-insights-db', resource_deps=['security-setup'], labels=['06-databases'])
# Operations Databases
k8s_resource('sales-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('inventory-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('production-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('procurement-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('distribution-db', resource_deps=['security-setup'], labels=['06-databases'])
# Supporting Service Databases
k8s_resource('recipes-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('suppliers-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('pos-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('orders-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('external-db', resource_deps=['security-setup'], labels=['06-databases'])
# Platform Service Databases
k8s_resource('notification-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('alert-processor-db', resource_deps=['security-setup'], labels=['06-databases'])
k8s_resource('orchestrator-db', resource_deps=['security-setup'], labels=['06-databases'])
# Demo Service Databases
k8s_resource('demo-session-db', resource_deps=['security-setup'], labels=['06-databases'])
# =============================================================================
# MIGRATION JOBS
# =============================================================================
# Core Service Migrations
k8s_resource('auth-migration', resource_deps=['auth-db'], labels=['07-migrations'])
k8s_resource('tenant-migration', resource_deps=['tenant-db'], labels=['07-migrations'])
# Data & Analytics Migrations
k8s_resource('training-migration', resource_deps=['training-db'], labels=['07-migrations'])
k8s_resource('forecasting-migration', resource_deps=['forecasting-db'], labels=['07-migrations'])
k8s_resource('ai-insights-migration', resource_deps=['ai-insights-db'], labels=['07-migrations'])
# Operations Migrations
k8s_resource('sales-migration', resource_deps=['sales-db'], labels=['07-migrations'])
k8s_resource('inventory-migration', resource_deps=['inventory-db'], labels=['07-migrations'])
k8s_resource('production-migration', resource_deps=['production-db'], labels=['07-migrations'])
k8s_resource('procurement-migration', resource_deps=['procurement-db'], labels=['07-migrations'])
k8s_resource('distribution-migration', resource_deps=['distribution-db'], labels=['07-migrations'])
# Supporting Service Migrations
k8s_resource('recipes-migration', resource_deps=['recipes-db'], labels=['07-migrations'])
k8s_resource('suppliers-migration', resource_deps=['suppliers-db'], labels=['07-migrations'])
k8s_resource('pos-migration', resource_deps=['pos-db'], labels=['07-migrations'])
k8s_resource('orders-migration', resource_deps=['orders-db'], labels=['07-migrations'])
k8s_resource('external-migration', resource_deps=['external-db'], labels=['07-migrations'])
# Platform Service Migrations
k8s_resource('notification-migration', resource_deps=['notification-db'], labels=['07-migrations'])
k8s_resource('alert-processor-migration', resource_deps=['alert-processor-db'], labels=['07-migrations'])
k8s_resource('orchestrator-migration', resource_deps=['orchestrator-db'], labels=['07-migrations'])
# Demo Service Migrations
k8s_resource('demo-session-migration', resource_deps=['demo-session-db'], labels=['07-migrations'])
# =============================================================================
# DATA INITIALIZATION JOBS
# =============================================================================
k8s_resource('external-data-init', resource_deps=['external-migration', 'redis'], labels=['08-data-init'])
# =============================================================================
# APPLICATION SERVICES
# =============================================================================
# Core Services
k8s_resource('auth-service', resource_deps=['auth-migration', 'redis'], labels=['09-services-core'])
k8s_resource('tenant-service', resource_deps=['tenant-migration', 'redis'], labels=['09-services-core'])
# Data & Analytics Services
k8s_resource('training-service', resource_deps=['training-migration', 'redis'], labels=['10-services-analytics'])
k8s_resource('forecasting-service', resource_deps=['forecasting-migration', 'redis'], labels=['10-services-analytics'])
k8s_resource('ai-insights-service', resource_deps=['ai-insights-migration', 'redis', 'forecasting-service', 'production-service', 'procurement-service'], labels=['10-services-analytics'])
# Operations Services
k8s_resource('sales-service', resource_deps=['sales-migration', 'redis'], labels=['11-services-operations'])
k8s_resource('inventory-service', resource_deps=['inventory-migration', 'redis'], labels=['11-services-operations'])
k8s_resource('production-service', resource_deps=['production-migration', 'redis'], labels=['11-services-operations'])
k8s_resource('procurement-service', resource_deps=['procurement-migration', 'redis'], labels=['11-services-operations'])
k8s_resource('distribution-service', resource_deps=['distribution-migration', 'redis', 'rabbitmq'], labels=['11-services-operations'])
# Supporting Services
k8s_resource('recipes-service', resource_deps=['recipes-migration', 'redis'], labels=['12-services-supporting'])
k8s_resource('suppliers-service', resource_deps=['suppliers-migration', 'redis'], labels=['12-services-supporting'])
k8s_resource('pos-service', resource_deps=['pos-migration', 'redis'], labels=['12-services-supporting'])
k8s_resource('orders-service', resource_deps=['orders-migration', 'redis'], labels=['12-services-supporting'])
k8s_resource('external-service', resource_deps=['external-migration', 'external-data-init', 'redis'], labels=['12-services-supporting'])
# Platform Services
k8s_resource('notification-service', resource_deps=['notification-migration', 'redis', 'rabbitmq'], labels=['13-services-platform'])
k8s_resource('alert-processor', resource_deps=['alert-processor-migration', 'redis', 'rabbitmq'], labels=['13-services-platform'])
k8s_resource('orchestrator-service', resource_deps=['orchestrator-migration', 'redis'], labels=['13-services-platform'])
# Demo Services
k8s_resource('demo-session-service', resource_deps=['demo-session-migration', 'redis'], labels=['14-services-demo'])
k8s_resource('demo-cleanup-worker', resource_deps=['demo-session-service', 'redis'], labels=['14-services-demo'])
# =============================================================================
# FRONTEND & GATEWAY
# =============================================================================
k8s_resource('gateway', resource_deps=['auth-service'], labels=['15-frontend'])
k8s_resource('frontend', resource_deps=['gateway'], labels=['15-frontend'])
# =============================================================================
# CRONJOBS (Remaining K8s CronJobs)
# =============================================================================
k8s_resource('demo-session-cleanup', resource_deps=['demo-session-service'], labels=['16-cronjobs'])
k8s_resource('external-data-rotation', resource_deps=['external-service'], labels=['16-cronjobs'])
# =============================================================================
# WATCH SETTINGS
# =============================================================================
# Watch settings
watch_settings(
ignore=[
'.git/**',
'**/__pycache__/**',
'**/*.pyc',
'**/.pytest_cache/**',
'**/node_modules/**',
'**/.DS_Store',
'**/*.swp',
'**/*.swo',
'**/.venv/**',
'**/venv/**',
'**/.mypy_cache/**',
'**/.ruff_cache/**',
'**/.tox/**',
'**/htmlcov/**',
'**/.coverage',
'**/dist/**',
'**/build/**',
'**/*.egg-info/**',
'**/infrastructure/tls/**/*.pem',
'**/infrastructure/tls/**/*.cnf',
'**/infrastructure/tls/**/*.csr',
'**/infrastructure/tls/**/*.srl',
'**/*.tmp',
'**/*.tmp.*',
'**/migrations/versions/*.tmp.*',
'**/playwright-report/**',
'**/test-results/**',
]
)
# =============================================================================
# CI/CD INFRASTRUCTURE - MANUAL TRIGGERS
# =============================================================================
# Tekton Pipelines - Manual trigger for local development using Helm
local_resource(
'tekton-pipelines',
cmd='''
echo "Setting up Tekton Pipelines for CI/CD using Helm..."
echo ""
# Check if Tekton Pipelines CRDs are already installed
if kubectl get crd pipelines.tekton.dev >/dev/null 2>&1; then
echo " Tekton Pipelines CRDs already installed"
else
echo " Installing Tekton Pipelines..."
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
echo " Waiting for Tekton Pipelines to be ready..."
kubectl wait --for=condition=available --timeout=180s deployment/tekton-pipelines-controller -n tekton-pipelines
kubectl wait --for=condition=available --timeout=180s deployment/tekton-pipelines-webhook -n tekton-pipelines
echo " Tekton Pipelines installed and ready"
fi
# Check if Tekton Triggers CRDs are already installed
if kubectl get crd eventlisteners.triggers.tekton.dev >/dev/null 2>&1; then
echo " Tekton Triggers CRDs already installed"
else
echo " Installing Tekton Triggers..."
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml
echo " Waiting for Tekton Triggers to be ready..."
kubectl wait --for=condition=available --timeout=180s deployment/tekton-triggers-controller -n tekton-pipelines
kubectl wait --for=condition=available --timeout=180s deployment/tekton-triggers-webhook -n tekton-pipelines
echo " Tekton Triggers installed and ready"
fi
echo ""
echo "Installing Tekton configurations via Helm..."
# helm upgrade --install is idempotent, so a single invocation covers both the
# initial install and subsequent updates
echo " Installing/updating Tekton CICD deployment..."
helm upgrade --install tekton-cicd infrastructure/cicd/tekton-helm \
-n tekton-pipelines \
--create-namespace \
--timeout 10m \
--wait \
--set pipeline.build.baseRegistry="''' + base_registry + '''"
echo ""
echo "Tekton setup complete!"
echo "To check status: kubectl get pods -n tekton-pipelines"
echo "To check Helm release: helm status tekton-cicd -n tekton-pipelines"
''',
labels=['99-cicd'],
auto_init=False, # Manual trigger only
)
# Gitea - Simple Helm installation for dev environment
local_resource(
'gitea',
cmd='''
echo "Installing Gitea via Helm..."
# Create namespace
kubectl create namespace gitea --dry-run=client -o yaml | kubectl apply -f -
# Install Gitea using Helm
helm repo add gitea https://dl.gitea.io/charts 2>/dev/null || true
helm repo update gitea
helm upgrade --install gitea gitea/gitea -n gitea -f infrastructure/cicd/gitea/values.yaml --wait
echo ""
echo "Gitea installed!"
echo "Access: https://gitea.bakery-ia.local"
echo "Status: kubectl get pods -n gitea"
''',
labels=['99-cicd'],
auto_init=False,
)
# =============================================================================
# STARTUP SUMMARY
# =============================================================================
print("""
Security setup complete!
Database Security Features Active:
TLS encryption: PostgreSQL and Redis
Strong passwords: 32-character cryptographic
Persistent storage: PVCs for all databases
Column encryption: pgcrypto extension
Audit logging: PostgreSQL query logging
Internal Schedulers Active:
Alert Priority Recalculation: Hourly @ :15 (alert-processor)
Usage Tracking: Daily @ 2:00 AM UTC (tenant-service)
Disk Cleanup: Every {disk_cleanup_frequency_minutes} minutes (threshold: {disk_space_threshold_gb}GB)
Access your application:
Main Application: https://bakery-ia.local
API Endpoints: https://bakery-ia.local/api/v1/...
Local Access: https://localhost
Service Metrics:
Gateway: http://localhost:8000/metrics
Any Service: kubectl port-forward <service> 8000:8000
SigNoz (Unified Observability):
Deploy via Tilt: Trigger 'signoz-deploy' resource
Manual deploy: ./infrastructure/monitoring/signoz/deploy-signoz.sh dev
Access (if deployed): https://monitoring.bakery-ia.local
Username: admin
Password: admin
CI/CD Infrastructure:
Tekton: Trigger 'tekton-pipelines' resource
Flux: Trigger 'flux-cd-deploy' resource
Gitea: Trigger 'gitea' resource (manual)
Verify security:
kubectl get pvc -n bakery-ia
kubectl get secrets -n bakery-ia | grep tls
kubectl logs -n bakery-ia <db-pod> | grep SSL
Verify schedulers:
kubectl exec -it -n bakery-ia deployment/alert-processor -- curl localhost:8000/scheduler/status
kubectl logs -f -n bakery-ia -l app=tenant-service | grep "usage tracking"
Documentation:
docs/SECURITY_IMPLEMENTATION_COMPLETE.md
docs/DATABASE_SECURITY_ANALYSIS_REPORT.md
Build Optimization Active:
Services only rebuild when their code changes
Shared folder changes trigger ALL services (as expected)
Reduces unnecessary rebuilds and disk usage
Edit service code: only that service rebuilds
Edit shared/ code: all services rebuild (required)
Useful Commands:
# Work on specific services only
tilt up <service-name> [<service-name> ...]
# View logs by label
tilt logs 09-services-core
tilt logs 13-services-platform
DNS Configuration:
# To access the application via domain names, add these entries to your hosts file:
# sudo nano /etc/hosts
# Add these lines:
# 127.0.0.1 bakery-ia.local
# 127.0.0.1 monitoring.bakery-ia.local
======================================
""")