
Deployment & Operations

This document covers the deployment strategies, operational procedures, and production environment setup for the Effective Office system. It details the containerized deployment architecture, environment configuration management, monitoring setup, and ongoing operational tasks required to maintain the system in production.

For detailed information about individual Docker service configurations, see Docker Deployment. For application configuration patterns and environment variable management, see Configuration Management. For comprehensive instructions on setting up Google Workspace and Google Calendar integration, see Google Workspace & Calendar Integration.

Deployment Architecture Overview

The Effective Office system uses a containerized deployment approach with Docker Compose, featuring a multi-service architecture with reverse proxy, database persistence, and automated health monitoring.

[Diagram: Deployment architecture overview]

Deployment Infrastructure Components

Docker Containerization

The system uses a three-tier containerized architecture with dedicated containers for the application, database, and reverse proxy.

Container Architecture
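
The wiring between the three tiers can be summarized in a minimal Docker Compose sketch. The container names match those quoted elsewhere on this page; the service names and the untagged Caddy image are illustrative rather than copied from the repository:

services:
  caddy:                                   # reverse proxy: TLS termination and routing
    image: lucaslorentz/caddy-docker-proxy
    ports: ["80:80", "443:443"]
  app:                                     # Spring Boot backend, reachable only through the proxy
    container_name: effective-office-app
    expose: ["8080"]
    depends_on: [db]
  db:                                      # PostgreSQL persistence layer
    container_name: effective-office-db
    image: postgres:15-alpine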

Application Container Configuration

The Spring Boot application runs in an eclipse-temurin:17-jre container with the following key characteristics:

  • Base Image: Eclipse Temurin 17 (JRE-only) for a lean, production-grade Java runtime
  • Port Exposure: Internal port 8080 (not directly exposed to host)
  • Health Monitoring: Built-in Spring Actuator health checks at /api/actuator/health
  • Dependency Management: Waits for database health before starting

Key container configuration:

app:
  container_name: effective-office-app
  expose: ["8080"]
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8080/api/actuator/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 40s
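
The "waits for database health" behaviour noted above is normally expressed with a depends_on condition on the app service; a sketch of what that likely looks like (the exact form in the repository's Compose file may differ):

app:
  depends_on:
    db:
      condition: service_healthy   # start the app only after the db healthcheck passes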

Database Container Setup

PostgreSQL 15 Alpine provides the persistence layer with automated backup and health monitoring:

  • Container Name: effective-office-db (production) / effective-office-db-dev (development)
  • Port Mapping: 5432:5432 (production) / 5433:5432 (development)
  • Volume Persistence: Named volumes for data durability
  • Health Checks: Built-in pg_isready monitoring
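
Putting those points together, the database service definition likely resembles the following sketch (the volume name is illustrative; ports are omitted because they differ per environment, as described below):

db:
  container_name: effective-office-db
  image: postgres:15-alpine
  environment:
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: ${POSTGRES_DB}
  volumes:
    - postgres-data:/var/lib/postgresql/data   # named volume for data durability
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
    interval: 10s
    timeout: 5s
    retries: 5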

Environment Configuration

The system uses environment-based configuration with separate setups for development and production environments. Configuration is managed through .env files and Spring Boot's application configuration system.

Configuration Hierarchy

[Diagram: Configuration hierarchy]

Core Environment Variables

The system requires the following core environment variables:

Variable               Purpose                    Example Value
POSTGRES_USER          Database username          effective_office
POSTGRES_PASSWORD      Database password          secure_password
POSTGRES_DB            Database name              effective_office_db
GOOGLE_CLIENT_ID       OAuth client ID            123456789-abcdef.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET   OAuth client secret        GOCSPX-abcdefghijklmn
GOOGLE_REFRESH_TOKEN   OAuth refresh token        1//abcdefghijklmnopqrstuvwxyz
APPLICATION_URL        Production webhook URL     https://app.example.com/api/calendar/webhook
TEST_APPLICATION_URL   Test webhook URL           https://test.example.com/api/calendar/webhook
FIREBASE_CREDENTIALS   Firebase service account   Base64-encoded JSON
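
These values live in the environment-specific .env file and are interpolated into the Compose services. The sketch below shows the hand-off to the application container; the SPRING_* names are assumptions based on Spring Boot's relaxed environment binding, and "db" is the service name used in the sketches on this page:

app:
  environment:
    GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
    GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
    GOOGLE_REFRESH_TOKEN: ${GOOGLE_REFRESH_TOKEN}
    APPLICATION_URL: ${APPLICATION_URL}
    TEST_APPLICATION_URL: ${TEST_APPLICATION_URL}
    FIREBASE_CREDENTIALS: ${FIREBASE_CREDENTIALS}
    SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/${POSTGRES_DB}
    SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER}
    SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}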

Application Configuration Structure

The main application configuration file defines the following key sections (a sketch follows the list):

  • Server Configuration: Port 8080, context path /api
  • Database Configuration: JPA/Hibernate with Flyway migrations
  • Calendar Integration: Google Calendar provider settings
  • Monitoring: Spring Actuator endpoints for health checks
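
The configuration file itself is not reproduced here; the following YAML sketch shows one plausible layout of those sections. The calendar block in particular is an assumed key structure, not copied from the repository:

server:
  port: 8080
  servlet:
    context-path: /api

spring:
  jpa:
    hibernate:
      ddl-auto: validate    # assumption: schema changes are applied by Flyway, not Hibernate
  flyway:
    enabled: true

# Assumed application-specific key for the Google Calendar provider settings
calendar:
  provider: google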

Production vs Development Environments

The system maintains separate deployment configurations optimized for different environments, with distinct networking, security, and operational characteristics.

Environment Comparison

Aspect                  Production               Development
Database Port           Internal only            Exposed on 5433
Reverse Proxy           Integrated Caddy         External/manual
SSL/TLS                 Automatic via Caddy      Optional/manual
Volume Management       Named external volumes   Local volumes
Network Configuration   Isolated networks        Simplified networking
Logging Level           INFO                     DEBUG
Calendar Environment    Production calendars     Test calendars

Production-Specific Features

Production deployment includes additional operational features:

  • Integrated Caddy Proxy: lucaslorentz/caddy-docker-proxy for automatic SSL and routing
  • External Volume Management: Persistent volumes with external references
  • Enhanced Security: Docker socket access restrictions and network isolation
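
With lucaslorentz/caddy-docker-proxy, routing and automatic TLS are typically declared as Docker labels on the app service rather than in a Caddyfile; an illustrative sketch (the domain is a placeholder):

app:
  labels:
    caddy: app.example.com
    caddy.reverse_proxy: "{{upstreams 8080}}"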

Development Environment Optimizations

Development setup prioritizes ease of debugging and rapid iteration:

  • Port Accessibility: Database exposed on host port 5433 for direct access
  • External Caddy: Assumes existing Caddy proxy for simplified networking
  • Debug Logging: Enhanced logging levels for troubleshooting
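
In practice these differences amount to a few overrides in the development Compose file; an illustrative sketch (the logging variable name assumes Spring Boot's relaxed binding for logging.level.root):

db:
  ports:
    - "5433:5432"               # expose PostgreSQL to the host for local tooling

app:
  environment:
    LOGGING_LEVEL_ROOT: DEBUG   # verbose logging for troubleshooting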

Monitoring and Health Checks

The system implements comprehensive health monitoring at multiple levels, from individual container health to application-level service monitoring.

Health Check Architecture

[Diagram: Health check architecture]

Container Health Checks

Application container health monitoring configuration:

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080/api/actuator/health"]
  interval: 30s
  timeout: 10s
  retries: 5
  start_period: 40s

Database health verification:

healthcheck:
  test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
  interval: 10s
  timeout: 5s
  retries: 5

Spring Actuator Configuration

The following application monitoring endpoints are exposed through Spring Actuator:

  • Health Endpoint: /api/actuator/health with detailed status information
  • Metrics Endpoint: /api/actuator/metrics for performance monitoring
  • Info Endpoint: /api/actuator/info for application metadata

The health endpoint provides comprehensive status including database connectivity, external service availability, and application-specific health indicators.
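
A minimal sketch of the management configuration that would expose these endpoints with detailed health output, assuming standard Spring Boot Actuator properties (combined with the /api context path, the health endpoint resolves to /api/actuator/health):

management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics
  endpoint:
    health:
      show-details: always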

Operational Procedures

The system includes several automated operational procedures that maintain system functionality and external service integrations without manual intervention.

Calendar Subscription Management

The most critical operational procedure is the automated renewal of Google Calendar notification subscriptions, implemented in the CalendarSubscriptionScheduler.

[Diagram: Calendar subscription management]

Subscription Renewal Process

The CalendarSubscriptionScheduler implements a proactive renewal strategy:

  • Execution Frequency: Every 6 days (518,400,000 milliseconds)
  • Renewal Reason: Google Calendar subscriptions expire after 7 days
  • Dual Environment Support: Separate subscriptions for production and test environments
  • Automatic Initialization: Subscriptions established on application startup
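
A minimal Kotlin sketch of that schedule, assuming Spring's @Scheduled support (scheduling must also be enabled with @EnableScheduling on a configuration class). The googleCalendarService and config names match the configuration-validation snippet in the next subsection; the calendar-list constructor parameters are illustrative:

import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component

@Component
class CalendarSubscriptionScheduler(
    private val googleCalendarService: GoogleCalendarService,
    private val config: CalendarConfig,              // assumed holder of applicationUrl / testApplicationUrl
    private val productionCalendars: List<String>,   // illustrative: IDs of production calendars
    private val testCalendars: List<String>,         // illustrative: IDs of test calendars
) {
    // Runs at startup and then every 6 days (518,400,000 ms), one day before
    // Google Calendar notification channels expire.
    @Scheduled(initialDelay = 0, fixedRate = 518_400_000)
    fun renewSubscriptions() {
        if (config.applicationUrl.isNotBlank() && productionCalendars.isNotEmpty()) {
            googleCalendarService.subscribeToCalendarNotifications(config.applicationUrl, productionCalendars)
        }
        if (config.testApplicationUrl.isNotBlank() && testCalendars.isNotEmpty()) {
            googleCalendarService.subscribeToCalendarNotifications(config.testApplicationUrl, testCalendars)
        }
    }
}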

Configuration Validation

The scheduler performs configuration validation before attempting subscription renewal:

// Production calendars
if (config.applicationUrl.isNotBlank() && productionCalendars.isNotEmpty()) {
    googleCalendarService.subscribeToCalendarNotifications(config.applicationUrl, productionCalendars)
}

// Test calendars  
if (config.testApplicationUrl.isNotBlank() && testCalendars.isNotEmpty()) {
    googleCalendarService.subscribeToCalendarNotifications(config.testApplicationUrl, testCalendars)
}

Push Notification Infrastructure

The system implements a real-time notification system using Firebase Cloud Messaging (FCM) to provide immediate updates to tablet clients when calendar events change.

Notification Flow Architecture

[Diagram: Notification flow architecture]

Firebase Message Reception

The tablet client implements message reception through the ServerMessagingService class:

override fun onMessageReceived(message: RemoteMessage) {
    // Log every incoming FCM payload for debugging
    Log.i("ReceivedMessage", message.data.toString())
    // Derive the calendar ID from the topic name, strip the "-test" suffix used
    // by the test environment, and emit it through the injected collector
    collector.emit(message.from?.substringAfter("topics/")?.replace("-test", "") ?: "")
}

Message Processing Logic

The service processes incoming FCM messages with the following behavior:

  1. Message Logging: All received messages are logged for debugging
  2. Topic Extraction: Calendar ID extracted from message.from field
  3. Environment Normalization: Test environment suffix (-test) removed
  4. Event Distribution: Processed calendar ID emitted through dependency-injected Collector

Integration with Calendar Subscriptions

The push notification system works in conjunction with the calendar subscription scheduler to provide real-time updates:

  • Calendar subscriptions register webhook endpoints for change notifications
  • Backend receives webhook calls when calendar events are modified
  • Notification service translates calendar changes into FCM messages
  • Tablet clients receive push notifications and trigger data refresh
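
On the backend side, the webhook-to-FCM step might look like the following hypothetical Kotlin sketch using the Firebase Admin SDK. The topic naming (calendar ID, with a -test suffix for the test environment) mirrors what the tablet client strips off in onMessageReceived; the function name and data payload are illustrative, not the project's actual API:

import com.google.firebase.messaging.FirebaseMessaging
import com.google.firebase.messaging.Message

// Hypothetical helper: publish a "calendar changed" event to the FCM topic
// associated with the calendar, so subscribed tablets refresh their data.
fun notifyCalendarChanged(calendarId: String, isTestEnvironment: Boolean) {
    val topic = if (isTestEnvironment) "$calendarId-test" else calendarId
    val message = Message.builder()
        .setTopic(topic)
        .putData("calendarId", calendarId)
        .build()
    FirebaseMessaging.getInstance().send(message)
}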