First things first - I need to deploy an nginx server
using my shiny new deployment system.
I take it there’s an official nginx image on Docker Hub I can grab.
Also, I think I should start with docker-compose now instead of
migrating to it later on - that will save me time in the long
run, and it will make it much easier to deploy everything
together.
The deployment process will become:
- push some kind of update
- rebuild and restart the affected apps’ Docker images
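The rebuild-and-restart step could be sketched as a small deploy script (a sketch only - the repo layout is assumed, and I’m using the `docker compose` v2 CLI):

```shell
#!/bin/sh
# Sketch of the deploy hook: pull the latest changes, then
# rebuild and restart whatever changed.
set -e

git pull --ff-only

# Rebuild images; on "up", compose only recreates containers
# whose image or configuration actually changed.
docker compose build
docker compose up -d --remove-orphans
```

The nice part is that compose itself figures out which containers need recreating, so “affected apps” doesn’t need to be computed by hand.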
One slight difference is that all dependencies, configuration,
and shared files (e.g. static web files) will need to be shared
explicitly using Docker volumes.
For example, my blog is a statically generated Hugo site.
Once the build step is complete, these files need to be
available to the nginx server for hosting.
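As a sketch of how that sharing could look (service and volume names here are made up), a named volume lets the Hugo build container write the generated site somewhere the nginx container can read it:

```yaml
# docker-compose.yml (sketch - names are hypothetical)
services:
  blog-build:
    build: ./blog              # Hugo build stage; writes the site to /output
    volumes:
      - blog-static:/output

  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      # nginx serves the generated files read-only; it doesn't
      # need to know they came from Hugo.
      - blog-static:/usr/share/nginx/html:ro

volumes:
  blog-static: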
Ideally, each thing I deploy should, where possible, be unaware
of its dependents. In this situation, the Hugo blog shouldn’t
“know” or “require” that it is being hosted using nginx. That
way it can be reused in other stacks I might want to build in
the future.
The alternative to sharing static files between containers
(more entangled, less separation) would be to run one nginx
instance per app, but that seems unnecessary. Really, there
should be flexibility to choose per app between static file
hosting and proxying web requests directly to the app.
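Both options can live in a single nginx instance, one `server` block per hostname; a minimal sketch (the hostnames and the upstream port are assumptions):

```nginx
# nginx.conf fragment (sketch)

# Static file hosting for the blog:
server {
    listen 80;
    server_name blog.example.com;
    root /usr/share/nginx/html;      # the mounted static volume
}

# Reverse proxy for an app that serves its own requests:
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://app:3000;  # service name on the compose network
    }
}
```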
I feel like this is a solid plan with some good first steps:
- deploy nginx using an automated docker-compose.yml
- mount generated files into nginx container
- test that it works
- switch to the new auto-deployed nginx