September 1, 2020
…deployed this blog to Azure Web Apps successfully. But it was a bumpy ride getting there, let me tell you.
I started this blog as a sandbox project, and the first things I learned back in 2017 were how to build a docker image, push that image to a repository, and build a dotnet core application. So for every new version I built, I manually ran these scripts:
SERVICE="hysite"
pushd ./Hysite.Web
dotnet publish -c Release
popd
docker build -t "$SERVICE" .
SERVICE="hysite"
docker tag "$SERVICE" hyston/"$SERVICE":latest
docker push hyston/"$SERVICE":latest
And after that I had to ssh into my Linode virtual machine and run a third script there:
```bash
# Stop and remove all containers, then delete all local images
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
docker rmi $(docker images -q)
# Pull the latest image and run it: map host port 80 to the app's port 5000, mount the logs folder
docker run --rm -d -p 80:5000 -v /root/logs:/app/logs hyston/hysite:latest
```
I think everything here is self-explanatory [1] and simple enough to be written by someone who is just starting to look for ways to deploy simple applications to the cloud. Deploying a new version was a cumbersome but reliable ritual of running all three of these scripts in order.
There were two things that I didn't like about this approach: the first two scripts had to be run by hand instead of being hooked to a git pull into master, and the third one had to be run by hand instead of being hooked to a new image being pushed to Docker Hub.
What I was looking for was a service that could take the dockerfile and access to the repository; restore, build and publish the app; create a docker image with the latest version; and then deploy it to some cloud – all of this automatically, on every new commit to master and without any manual steps. Ideally, it would also check beforehand that the tests pass and the build succeeds, and notify me if something is wrong.
Because I use only one image, I ignored all solutions with "Kubernetes" in the name. Dunno, it just scared me a little; I was looking for a simple way to start. I knew there are two major cloud providers, Azure and AWS, and for my purposes I decided to stick with Azure. They have a relatively new service called Container Instances, which seemed to do exactly what I needed – take a dockerfile, build the image, run it and ask no more questions.
Besides the Azure portal, Microsoft has a product called "DevOps Pipelines", whose purpose is still a little unclear to me. I created a pipeline that took the repository, built my project and ran it in a container instance – but then I hit a problem.
All posts on this site are stored in a private GitHub repository, and the first thing hyston.blog does on start is sign in to GitHub and load all posts. To do this, obviously, I need to store my GitHub credentials somewhere. When I built the site on my local machine, the username/password were stored in web.Production.config, which was gitignored. It is possible to add Azure pipeline secrets containing the GitHub username and password and pass them into docker environment variables, but then they end up stored as plain Linux environment variables, which is considered insecure. There is another way to do this, which I'll describe later, but at the time I decided this was a stop sign for using container instances. The other reason that pushed me away from container instances was this comment on Stack Overflow:
Azure Web App for Containers is targeted at long running stuff (always running) while ACI are aimed at scheduled\burstable\short lived workloads (similar to Azure Functions).
So, despite already having a running instance in ACI (without GitHub credentials, though), I switched to Azure Web App. And the problems started falling on my head.
At first, I decided to create the web app with deployment directly from code – so I wouldn't need to care about docker at all. There is a "deployment configuration" option in the web app that (if I remember correctly, it uses DevOps Pipelines under the hood) should work as a wizard. For me, it refused to log in to my GitHub account at first [2]. After a day or two the issue went away and I was able to deploy my code directly in a couple of clicks. But then another problem appeared: there is no way to choose which Linux distribution will be used to run my code. For GitHub integration, my blog uses LibGit2Sharp, which in turn relies on a very specific native git library that exists only on Ubuntu LTS. Some details about this issue can be found here. For the docker image, I just add '-bionic' to the dotnet runtime image tag and it runs on Ubuntu. But for the automatic deployment configuration the base image was Debian, and my site threw an exception when trying to use LibGit2Sharp. At this point, I was frustrated with the Azure portal functionality overall and confused by Azure DevOps Pipelines (as I understand, these are two different services that use different accounts). So I moved on to GitHub Actions.
GitHub Actions open up many possibilities. The simplest approach is to build the dotnet core app directly on the runner and deploy the binaries somewhere. Because I had already added a build layer to my dockerfile, I left the compilation step there, although I don't like this solution now – wherever I deploy from, it doesn't reuse the cache from previous builds. It is possible to register my own machine as a self-hosted runner, but I currently don't have any computer that runs all the time.
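The skeleton of that setup in a workflow file looks roughly like this – a minimal sketch, not my actual deploy.yaml; the image tag matches the scripts above, but the workflow, job and step names are made up:

```yaml
# Trigger on every push to master; compilation happens inside the dockerfile,
# so the workflow itself only checks out the code and runs docker build.
name: build-and-deploy
on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build docker image
        run: docker build -t hyston/hysite:latest .
```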
I'll show my deployment script later, but before that I still had one unresolved problem: I need the GitHub credentials at runtime, and I still don't want to store them in the open-source configuration. Since I was sticking with Azure, I decided to use Azure Key Vault. It is an overloaded and cumbersome service (as everything with Microsoft), but it turned out to be well documented (probably even over-documented, because I found several very similar tutorials), and with this tutorial I added support for it in my code. When I run locally, it takes the Azure credentials from my machine (because I have the Azure CLI installed and logged in), and in docker I added environment variables that are imported from build arguments. This is more secure than just storing the plain credentials in the environment, because with the Azure client id they can only be retrieved by the Azure web app. And to pass them into the dockerfile, I store them in GitHub secrets, which cannot be viewed once set. So, to recap:
- the GitHub credentials themselves live in Azure Key Vault;
- the Azure credentials needed to reach the vault are stored as GitHub secrets and end up as environment variables inside the docker image;
- in code, Azure.Security.KeyVault.Secrets.SecretClient takes the Azure credentials from the environment automatically.

It may sound overcomplicated (that is what stopped me from using it for a long time), but in hindsight it is the most secure solution I can see.
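As a sketch of how those credentials flow from GitHub secrets into the image – the secret and build-argument names here (AZURE_CLIENT_ID and so on) are illustrative, not necessarily the exact ones from my workflow:

```yaml
# Build step passing the Azure credentials from GitHub secrets as build arguments;
# the dockerfile is expected to turn these into environment variables via ARG/ENV.
- name: Build image with Key Vault credentials
  run: |
    docker build -t hyston/hysite:latest \
      --build-arg AZURE_CLIENT_ID=${{ secrets.AZURE_CLIENT_ID }} \
      --build-arg AZURE_CLIENT_SECRET=${{ secrets.AZURE_CLIENT_SECRET }} \
      --build-arg AZURE_TENANT_ID=${{ secrets.AZURE_TENANT_ID }} \
      .
```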
To finish my GitHub action, only one step was left: deploying the built image. After all the issues I had seen trying to create DevOps pipelines, this was easy (a sketch of both parts follows below):
- an azure/docker-login step in the GitHub action logs in to the registry using its admin credentials;
- azure/webapps-deploy does the actual deploy [3]. Here I need to store one more GitHub secret: the Azure publish profile.
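Roughly, those two steps look like this in the workflow – a sketch with placeholder names; the secret names, app name and action versions are assumptions, not copied from my actual deploy.yaml:

```yaml
# Log in to the container registry with the admin credentials stored as GitHub secrets
- uses: azure/docker-login@v1
  with:
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}

# Point the web app at the freshly pushed image, authenticated via the publish profile
- uses: azure/webapps-deploy@v2
  with:
    app-name: hysite
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
    images: hyston/hysite:latest
```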
And that's basically it. It took me more than an hour to write this post about a thing that took me almost a month to figure out [4]. In the end, it is just two scripts – the Dockerfile and deploy.yaml – a created Azure account, and several settings in the Azure portal and GitHub. I hope that in a year, when I have forgotten all of this and need to implement something similar, this blog post will help me refresh my memory.
P.S. Have I mentioned how much I hate YAML's indent-based structure? 🤯 It is good in theory and keeps things organised, but I made too many indentation errors during my development adventures.
[1] I know that using a separate SERVICE variable in each script and removing all images every time is redundant, and that other improvements are missing. I just copied my own three-year-old scripts.
[2] Even though my Microsoft account is directly linked to my GitHub account.
[3] I made a mistake here which went unnoticed for too long: I spent a day trying to figure out why I didn't have permission to pull the image, before I realised the image name was wrong.
[4] Keep in mind, this is a hobby project during the summer; I didn't spend an actual month of working time on it 😛