Introducing Scotty Part II: Behind the Curtain
You’ve seen how Scotty solves the preview environment challenge by turning Docker Compose apps into shareable URLs. But what’s happening behind the curtain when your checkout flow or dashboard preview suddenly appears online? In Part II, we’ll explore Scotty’s architecture, its built-in safeguards, and the boundaries it deliberately respects.
The power of thoughtful simplicity
Scotty focuses on getting a working application in front of stakeholders quickly and reliably. Load balancer integration reflects this philosophy. Scotty works with Traefik and HAProxy, generating the configuration needed to route traffic to your services. You do not need to become an expert in reverse proxies or certificate management. Routing, domain assignment, and basic authentication can be handled automatically. Your nginx service becomes nginx.your-app.yourdomain.com without manual tweaks.
Security is built in rather than bolted on. You can enable basic authentication for each app to prevent unauthorized access. Scotty instructs search engines not to index preview environments. These are sensible defaults that protect work-in-progress.
Automatic lifecycle management prevents forgotten apps from consuming resources indefinitely. Set a time-to-live when deploying, and Scotty stops the app when the deadline arrives. For truly temporary previews, you can configure automatic destruction instead.
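The expiry decision itself is simple arithmetic. Here is a minimal sketch of the kind of check a lifecycle scheduler performs (illustrative only; this is not Scotty's actual code, and the numbers are made up):

```shell
#!/bin/sh
# Decide whether an app has outlived its time-to-live.
# Arguments: start time (epoch seconds), ttl (seconds), current time (epoch seconds).
is_expired() {
  start=$1; ttl=$2; now=$3
  age=$((now - start))
  if [ "$age" -gt "$ttl" ]; then
    echo "expired"   # at this point Scotty would stop (or destroy) the app
  else
    echo "alive"
  fi
}

# An app started 8 days ago with a 7-day TTL is past its deadline:
is_expired 0 $((7 * 86400)) $((8 * 86400))
```

The same comparison, run periodically against each app's container age, is all that is needed to keep a preview server from silting up.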
Most importantly, Scotty integrates with existing workflows. The CLI works in GitLab CI, GitHub Actions, or any automation system. Deploy preview apps when pull requests are created, update them when commits are pushed, and clean them up when branches are merged. The REST API enables custom integrations, while the CLI covers most use cases with simple, memorable commands.
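As a sketch of what that integration can look like, here is a hypothetical GitLab CI fragment. The scottyctl subcommands and flags shown are placeholders for illustration, not Scotty's documented interface; check the Scotty documentation for the real names:

```yaml
# Hypothetical pipeline fragment (command and flag names are assumptions)
deploy-preview:
  stage: deploy
  script:
    - scottyctl create "mr-${CI_MERGE_REQUEST_IID}" --folder .
  rules:
    - if: $CI_MERGE_REQUEST_IID

stop-preview:
  stage: deploy
  script:
    - scottyctl destroy "mr-${CI_MERGE_REQUEST_IID}"
  when: manual
```

Naming the app after the merge request ID gives each open branch its own preview, and tearing it down becomes a one-line job.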
Architecture without the complexity
Scotty’s architecture is straightforward: a Rust-based server with a REST API and a command-line client (scottyctl). You can deploy apps from your local machine, CI pipelines, or other systems via the API.
The server monitors a dedicated directory for folders containing docker-compose.yml. When you create an app, scottyctl uploads and extracts your application folder on the server. Scotty reads your Compose file and generates a docker-compose.override.yml with the necessary load balancer configuration—labels for Traefik or environment variables for HAProxy—to make services accessible.
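To make the generated overlay concrete, for a Traefik setup it might look roughly like this. This is an illustrative sketch using standard Traefik v2 label conventions, not Scotty's verbatim output:

```yaml
# docker-compose.override.yml (illustrative; the actual generated content may differ)
services:
  nginx:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.my-blog-nginx.rule=Host(`nginx.my-blog.apps.yourdomain.com`)"
      - "traefik.http.services.my-blog-nginx.loadbalancer.server.port=80"
```

Because Compose merges the override with your original file, your docker-compose.yml stays untouched and portable.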
Domain management uses a simple wildcard pattern: *.apps.yourdomain.com points to your server. Each app gets its own subdomain namespace. A “my-blog” app with an “nginx” service becomes nginx.my-blog.apps.yourdomain.com. No extra DNS work required.
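The naming rule is purely mechanical, as a tiny helper makes clear (a sketch of the service.app.base pattern described above, not part of Scotty itself):

```shell
#!/bin/sh
# Build the public hostname for a service: <service>.<app>.<base-domain>
preview_host() {
  printf '%s.%s.%s\n' "$1" "$2" "$3"
}

# prints: nginx.my-blog.apps.yourdomain.com
preview_host nginx my-blog apps.yourdomain.com
```

Since every hostname falls under the one wildcard record, new apps need no DNS changes at all.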
Scotty tracks application lifecycles automatically. It stops apps based on container age and configured time-to-live, preventing a buildup of forgotten environments.
Knowing the boundaries
Clarity about scope matters. Scotty is not for production deployments. While previews may perform well, Scotty does not provide high availability, advanced monitoring, or multi-node scaling. It is a single-node solution. If your application must handle real user traffic and meet strict uptime targets, choose production-grade tooling.
Scotty is not a replacement for orchestrators like Kubernetes, Nomad, or OpenShift. Those tools solve complex, multi-server scheduling, resource management at scale, and advanced networking and security. Scotty operates in a different space: it is the tool you use before you need that complexity.
It also does not aim to be a full PaaS. It does not host databases or manage SSL beyond load balancer integration. It does not provide production monitoring or alerting. The trade-off is intentional: a focused feature set that makes Docker Compose apps instantly shareable without unused enterprise features.
Access control is basic by design. Scotty offers simple authentication and keeps apps private by default. If you need fine-grained permissions, role-based access control, or enterprise identity integration, layer those in or use a broader platform.
Security and scalability considerations
Scotty provides container-level isolation between apps, but every app shares the same server resources and the same Docker daemon. Run it behind a dedicated load balancer that can filter malicious traffic, and keep it on infrastructure isolated from production. Because it is a single-node solution, resource exhaustion on the host affects all apps, so monitor usage and plan capacity. The single node is also a single point of failure, but since apps keep minimal state they can be recreated in minutes or moved to another server. Ephemeral previews should not store critical data; use external services for anything that must persist. That stateless approach is what keeps migrations between servers straightforward.
Most importantly, Scotty does not pretend to solve problems beyond its core mission. It will not manage code deployment strategies, database migrations, or application performance monitoring. It makes Docker Compose applications accessible on the web with minimal configuration and automatic lifecycle management.
Ready to beam up your apps
The fastest way to see Scotty in action is to try it. Set up a server with Docker and Docker Compose, configure Traefik as the load balancer, and start the Scotty service; installation typically takes just a few minutes. Once the server is running, install the scottyctl command-line client (via Homebrew on macOS, or by downloading the binary for your platform). Deploying is as simple as pointing Scotty at your project folder, specifying which service should be public, and letting it generate a shareable URL.
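In pseudocode (shell style), the whole workflow is a handful of commands. Every command and flag below is a placeholder rather than Scotty's documented interface, so follow the installation guide for the real invocations:

```shell
# Placeholder commands only -- consult scotty.factorial.io for the actual CLI
brew install scottyctl                        # or download the binary for your platform
scottyctl create my-blog --folder ./my-blog   # upload the Compose project to the server
# the public service then appears at a URL like nginx.my-blog.apps.yourdomain.com
```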
From server setup to a live application usually takes less than five minutes, using only the Docker Compose knowledge you already have. Full documentation, installation guides, and examples are available at scotty.factorial.io. The source code and contribution guidelines are on GitHub at github.com/factorial-io/scotty.