So I moved a bunch of web services over to a containerized setup on AWS. Let me explain.
I started working at this place. They provide a piece of software to their customers, and manage the servers and sites that their clients connect to. Their deployment method was to manually set up each environment on EC2, start the servers, and forget about them. A fully manual process.
So I start at this place and I'm like, "Dang, that could be better," so I go to work.
The first thing I did was set up scripts in Visual Studio Code to start the entire server environment in one click. I wrote a bunch of tasks and launch configs, updated the docs, and eventually got my server and front-end to start simultaneously. Amusingly, multi-root workspaces were a very new feature in VS Code at the time, but first-hand experience: they're great, use them if it makes sense.
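The one-click trick is VS Code's compound launch configurations: define one launch entry per process, then group them. Here's a minimal sketch; the names and paths are made up for illustration, not the actual project layout:

```json
// .vscode/launch.json — entry names and program paths are hypothetical
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "API Server",
      "program": "${workspaceFolder}/server/index.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Front-End",
      "program": "${workspaceFolder}/frontend/index.js"
    }
  ],
  "compounds": [
    {
      // Selecting "Full Environment" starts both configs at once
      "name": "Full Environment",
      "configurations": ["API Server", "Front-End"]
    }
  ]
}
```

(VS Code's launch.json allows comments, unlike strict JSON.)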
This was a good start but I needed to go deeper. The servers were both Express servers, and Docker is a thing. Oh shit. I need to make a CI/CD system.
And thus I migrated the Node projects to Docker. This was ridiculously easy; any random Google result will walk you through it. Afterwards, I linked everything up with Docker Compose so the services start together and can talk to each other. Running this locally was a dream, because the server environment feels sanitized to a degree.
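For flavor, the Compose side of that looks roughly like this. This is a sketch, not the real file: the service names, directories, and ports are placeholders, and each directory is assumed to hold a small Dockerfile for its Express app:

```yaml
# docker-compose.yml — service names, paths, and ports are illustrative
version: "3"
services:
  api:
    build: ./api          # directory containing the API's Dockerfile
    ports:
      - "3000:3000"
  admin:
    build: ./admin        # the admin front-end's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - api               # start the API first
```

Compose puts both services on a shared network, so `admin` can reach the API at `http://api:3000` by service name; `docker-compose up` starts the lot.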
Of course this post is about AWS, though. So I want to run my docker images on AWS. How?
ECS and EKS are Elastic Container Service and Elastic Kubernetes Service respectively, and both are super complicated, difficult-to-use beasts. Because I was new at the company, building something that would take someone a four-week course to understand was out of the question. Also it would be hard and I'm lazy. So I settled on Elastic Beanstalk.
EB allows you to deploy your Docker images onto worker nodes in a load-balanced environment, with a helluva lot of goodies. Version management, environment configs, multiple environments per application, and high visibility into its operation made it a great candidate for our SaaS offering's admin tool.
And so, I made CodePipeline projects that build and tag our Docker images on Git hooks and push them to ECR, Amazon's container registry. I set up multiple environments. One of them is dubbed staging and pulls the latest image every time; the others only update when manually reconfigured.
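The build step of such a pipeline is typically a CodeBuild project driven by a buildspec. Here's a hedged sketch of what ours roughly amounts to; the account ID, region, and repository name are placeholders:

```yaml
# buildspec.yml — account ID, region, and repo name are placeholders
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against ECR
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # Build and tag the image for the ECR repository
      - docker build -t my-app:latest .
      - docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  post_build:
    commands:
      # Push to ECR so Elastic Beanstalk can pull it
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```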
Amazon's Dockerrun.aws.json format is great, and almost identical to a standard docker-compose file. So much so that it's almost insane that they didn't just use the docker-compose standard entirely, especially when tools like Container Transform exist.
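To show the resemblance, here's a minimal multi-container Dockerrun.aws.json (version 2); the image URI, container name, and ports are illustrative:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 3000 }
      ]
    }
  ]
}
```

Squint and it's a docker-compose service block with different key names, which is exactly why Container Transform can convert between the two.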
So anyway, that was all nice. How about we run Elasticsearch and Kibana with it, too? Seems good. Except there were niggles! But they were easily sorted out with the .ebextensions tools that AWS provides, which basically let you change low-level EB worker node configuration easily, as part of your run configuration. Super awesome!
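As an example of the kind of niggle .ebextensions can fix (I'm not claiming this was our exact one): Elasticsearch refuses to start unless the host's `vm.max_map_count` is at least 262144, which a tiny config file dropped into `.ebextensions/` can set on every worker node:

```yaml
# .ebextensions/01-elasticsearch.config — hypothetical example of a host-level tweak
commands:
  01_max_map_count:
    # Elasticsearch requires vm.max_map_count >= 262144 on the host
    command: "sysctl -w vm.max_map_count=262144"
```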
So yeah that's my story, don't know what else to tell you. Smoke weed everyday.