Containerized Scheduling: Why I Ditched Ofelia for Supercronic
How I tried to use Ofelia as a Docker-native scheduler, hit its architectural limits, and switched to building a “dual-mode” container powered by Supercronic.
In the modern DevOps landscape, scheduling background tasks within isolated container environments presents a recurring architectural challenge. Recently, while deploying my MikroTik Backup Bot, I hit a classic wall: how do I elegantly schedule a containerized script without relying on the host machine’s legacy cron daemon?
This article breaks down my journey, exploring my initial attempt to use Ofelia as a Docker-native scheduler, the architectural limitations I encountered, and why I ultimately pivoted to building a “dual-mode” container powered by Supercronic.
The Initial Goal: Infrastructure as Code with Ofelia
Historically, running a containerized task on a schedule relied on host-level schedulers (like Vixie cron, which dates back to 1987). This approach breaks the “Infrastructure as Code” (IaC) paradigm because the schedule lives on the host OS, not within the deployment manifests (like Docker Compose or Ansible playbooks).
My initial instinct was to migrate to Ofelia. Ofelia is a highly regarded, low-footprint task scheduler written in Go. It integrates directly with the Docker Engine API, meaning it requires no additional agents installed inside the target containers.
The most appealing feature of Ofelia was its declarative configuration via Docker labels: add labels to a container, and Ofelia dynamically detects them and registers the job. Furthermore, it provides a crucial no-overlap parameter that prevents a task from running again if the previous instance is still executing, protecting the system from race conditions and CPU starvation.
The Ansible Implementation Attempt
To use Ofelia with my Ansible playbook, I mapped out a deployment using the job-exec method. This method executes a command inside an already running container.
The configuration looked beautiful on paper:
labels:
  ofelia.enabled: "true"
  ofelia.job-exec.mikrotik_backup.schedule: "0 0 3 * * *"
  ofelia.job-exec.mikrotik_backup.command: "/app/backup.sh"
  ofelia.job-exec.mikrotik_backup.no-overlap: "true"
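In Ansible, those labels slot straight into a container deployment task. A minimal sketch, assuming the community.docker collection is installed (the task and container names are illustrative):

- name: Deploy MikroTik backup container with Ofelia labels
  community.docker.docker_container:
    name: mikrotik-backup
    image: ghcr.io/olegstepura/mikrotik-backup:main
    restart_policy: unless-stopped
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.mikrotik_backup.schedule: "0 0 3 * * *"
      ofelia.job-exec.mikrotik_backup.command: "/app/backup.sh"
      ofelia.job-exec.mikrotik_backup.no-overlap: "true"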
The Architectural Roadblock
Despite its elegance, Ofelia presented a fundamental flaw for my specific use case: it cannot start a stopped container via label-based discovery. Because Ofelia reads labels from running containers to build its job registry, my ephemeral backup container (which was designed to run once and stop) was completely invisible to the scheduler.
To bypass this, I had to implement a common yet frustrating “hack”, daemonizing the container by overriding its entrypoint:
# The Daemonization Hack
entrypoint: ["/bin/bash", "-c", "tail -f /dev/null"]
This kept the container perpetually “sleeping” so Ofelia could read its labels and execute the script inside it. While functional, it felt like an anti-pattern. I also explored modern Ofelia successors like Chadburn, which fixes Ofelia’s memory leaks and reloads dynamically on Docker events without requiring daemon restarts. However, Chadburn inherited the same core job architecture: it lacks a job-start feature. There is also another fork of Ofelia that I reviewed which has plenty of additional features, though it has been criticized as vibe-coded.
I was forced to keep an idle container running 24/7 just to execute a script once a day. If I used Ofelia’s job-run feature (which spins up a new container and destroys it), I would lose the ability to configure the schedule via labels, forcing me to maintain a centralized, messy config.ini file.
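For reference, the job-run alternative would have looked roughly like this in Ofelia’s INI configuration (a sketch based on Ofelia’s documented format, with the job name and image carried over from my setup):

[job-run "mikrotik_backup"]
schedule = 0 0 3 * * *
image = ghcr.io/olegstepura/mikrotik-backup:main

Workable, but the schedule now lives in a file that has to be mounted into the Ofelia container and kept in sync separately from the compose manifests.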
I needed a better way.
The Pivot: Supercronic and the “Dual-Mode” Architecture
If managing cron from outside the container required hacks, the logical next step was to move the scheduling inside the container.
Enter Supercronic.
Supercronic is a cron implementation built explicitly for containers. Unlike standard Linux cron, it:
- Runs in the foreground and routes all logs directly to stdout/stderr, ensuring docker logs works flawlessly.
- Runs safely as a non-root user if needed.
- Has support for Sentry reporting of failed jobs (sounds cool!).
- Inherits all container environment variables (like BACKUP_PASSWORD and SENTRY_DSN) and passes them to the executing script.
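These properties are easy to verify even outside Docker. A minimal smoke test, assuming the supercronic binary is on your PATH (the crontab path and variable are illustrative):

cat > /tmp/test-crontab <<'EOF'
* * * * * echo "BACKUP_PASSWORD is ${BACKUP_PASSWORD:-unset}"
EOF
BACKUP_PASSWORD=secret supercronic /tmp/test-crontab

Supercronic stays in the foreground and streams each run’s output and exit status straight to the terminal, which is exactly what docker logs will show once it runs inside a container.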
Instead of deploying a separate scheduling container, I decided to bake Supercronic directly into the MikroTik Backup image, creating a Dual-Mode container.
Step 1: The Dockerfile Integration
I modified my build process to download and inject the Supercronic binary, utilizing Docker ARG variables to easily bump the version in the future:
ARG SUPERCRONIC_VERSION="0.2.44"
ARG SUPERCRONIC_ARCH="linux-amd64"
ENV SUPERCRONIC_URL="https://github.com/aptible/supercronic/releases/download/v${SUPERCRONIC_VERSION}/supercronic-${SUPERCRONIC_ARCH}" \
    SUPERCRONIC="supercronic-${SUPERCRONIC_ARCH}"
RUN curl -fsSLO "$SUPERCRONIC_URL" \
    && chmod +x "$SUPERCRONIC" \
    && mv "$SUPERCRONIC" "/usr/local/bin/supercronic"
Step 2: The Intelligent Entrypoint
To support every possible use case, I wrote an intelligent entrypoint.sh wrapper:
- If the user provides a CRON_SCHEDULE environment variable, the container dynamically generates a crontab in the /run directory and takes over as a daemon.
- If not, it executes the script once and exits.
- I also added a RUN_ON_STARTUP flag to allow immediate execution (vital for testing SSH keys without waiting for the cron trigger), and standard $@ passthrough for isolated debugging.
#!/bin/bash
set -e

# Allow arbitrary command execution (e.g., docker-compose run --rm ...)
if [ "$#" -gt 0 ]; then
    exec "$@"
fi

# Run synchronously on startup for immediate testing
if [ "${RUN_ON_STARTUP:-false}" = "true" ]; then
    echo "=> RUN_ON_STARTUP=true detected. Executing backup immediately..."
    /app/backup.sh
fi

# Autonomous Daemon Mode (Supercronic)
if [ -n "$CRON_SCHEDULE" ]; then
    echo "=> CRON_SCHEDULE detected: '$CRON_SCHEDULE'"
    echo "$CRON_SCHEDULE /app/backup.sh" > /run/crontab
    # Exec replaces the shell with Supercronic, ensuring proper SIGTERM handling
    exec supercronic /run/crontab
else
    # Legacy "Run Once and Exit" Mode
    if [ "${RUN_ON_STARTUP:-false}" != "true" ]; then
        exec /app/backup.sh
    else
        # The backup already ran in the RUN_ON_STARTUP block above
        exit 0
    fi
fi
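With that wrapper in place, all three modes are driven purely by how the container is invoked (commands shown against the published image):

# Daemon mode: Supercronic schedules the backup inside the container
docker run -d -e CRON_SCHEDULE="0 3 * * *" ghcr.io/olegstepura/mikrotik-backup:main

# Run-once mode: no CRON_SCHEDULE, so the script executes and the container exits
docker run --rm ghcr.io/olegstepura/mikrotik-backup:main

# Isolated debugging via the $@ passthrough
docker run --rm -it ghcr.io/olegstepura/mikrotik-backup:main /bin/bash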
The Result: A Frictionless User Experience
By abandoning the external scheduler approach, I achieved true Infrastructure as Code without the architectural compromises. Deploying an autonomous, self-scheduling backup bot now requires nothing more than a few lines in a docker-compose.yml:
services:
  mikrotik-backup:
    image: ghcr.io/olegstepura/mikrotik-backup:main
    restart: unless-stopped
    environment:
      - CRON_SCHEDULE=0 3 * * *
      - RUN_ON_STARTUP=true
      - SENTRY_DSN=https://yourPublicKey@o0.ingest.sentry.io/0
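And because Supercronic writes everything to stdout/stderr, the schedule’s behavior is observable with standard tooling, with no log files to hunt down inside the container:

# Follow the scheduler and job output for the service defined above
docker compose logs -f mikrotik-backup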
Key Takeaways
- Using tools like Ofelia or Chadburn for ephemeral tasks forces you into keeping idle processes alive (tail -f /dev/null). For strictly ephemeral scripts, external schedulers often cause more friction than they solve.
- Standard cron assumes a full Linux environment. Supercronic is engineered for the constraints and logging paradigms of Docker.
- By wrapping the execution logic in an intelligent entrypoint, you can offer built-in scheduling (Daemon mode) without deprecating support for users who rely on external orchestration (Run-Once mode).
In the end, shifting the scheduling responsibility from the Docker Engine tier (Ofelia) directly into the application tier (Supercronic) resulted in a cleaner, more observable, and vastly more portable deployment.