# hello-world-service
This service greets the entire world with hellos.

- Result: https://hello-zeal.uploadfilter24.eu/
- CI (Woodpecker): https://woodpecker.uploadfilter24.eu/repos/5
- CD (ArgoCD): https://argo.uploadfilter24.eu
- Grafana dashboard: https://grafana.uploadfilter24.eu/d/85a562078cdf77779eaa1add43ccec1easdas/zealview?orgId=1&refresh=5s
## Done
- Automated and reproducible provisioning of infrastructure
- Possibility to scale compute resources to more than just one instance
- Fault tolerance for single-VM outages
- Automated deployment process (CI/CD pipeline), including:
  - Building the application
  - Executing tests
  - Infrastructure provisioning
  - Deploying the application
  - Basic verification of a successful deployment
- Monitoring/alerting capabilities to detect future downtimes proactively
## Read Me

Everything is automated and operated via CI/CD.
To update the website, just edit the Dockerfile and push it to git.
Then create a new version tag, push the tag to git, and watch the CI/CD pipeline do the magic.
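The release flow above can be sketched as follows; the branch name, commit message, and tag are examples, not fixed by this repository:

```shell
# Hypothetical release flow -- branch and tag names are examples.
git add Dockerfile
git commit -m "Update website content"
git push origin main

# Cutting a release: the pipeline builds the image and Helm chart for this tag.
git tag v1.0.1
git push origin v1.0.1
```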
## Requirements
- Kubernetes
- Helm
- git
- Bash
## Conclusion

I completed the case study in a slightly modified way.
Given my skills, my trust in Kubernetes, my drive for continuous improvement, and the task to
"improve their setup for deploying and operating hello-world-service with the goal to protect the service against future downtimes",
I decided to build a fully automated pipeline: from building the image, through a security scan and building the Helm chart, to releasing everything, up to a fully automated deployment to Kubernetes.
Some things I added:

- Trivy scan of the image
- Multi-arch build
- Rootless nginx environment
- Tolerance for high-load scenarios, auto-scaling capability
- Image and Helm chart builds aligned to the tag version
- Trigger ArgoCD to roll out the new tag version
- Wait until ArgoCD reports a healthy state
- Test the public endpoint with the tests.sh script
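The steps above map onto a Woodpecker pipeline roughly like the sketch below. This is an assumption-laden illustration, not the repository's actual .woodpecker.yml: the step names, plugin images, and registry are all placeholders.

```yaml
# Sketch only -- images, registry, and step names are assumptions.
steps:
  build-multiarch:
    image: woodpeckerci/plugin-docker-buildx
    settings:
      repo: registry.example.com/hello-world-service
      platforms: linux/amd64,linux/arm64
      tags: ${CI_COMMIT_TAG}
    when:
      event: tag

  trivy-scan:
    image: aquasec/trivy
    commands:
      - trivy image --exit-code 1 registry.example.com/hello-world-service:${CI_COMMIT_TAG}
    when:
      event: tag

  deploy-and-verify:
    image: some-image-with-argocd-cli    # placeholder
    commands:
      # Trigger ArgoCD to roll out the new tag, wait for health, then smoke-test.
      - argocd app set hello-world-service --helm-set image.tag=${CI_COMMIT_TAG}
      - argocd app sync hello-world-service
      - argocd app wait hello-world-service --health
      - ./tests.sh hello-zeal.uploadfilter24.eu
    when:
      event: tag
```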
Information about the infrastructure:

- The Kubernetes cluster that presents you this complete case study is my own free-time project.
- The cluster is deployed via kOps and runs on 1 control plane and 3 nodes. All servers are ARM-based and hosted at Hetzner.
An alternative solution that was also on my mind:

- Build the container in CI/CD
- Use Terraform/Packer/CloudFormation to spin up 3 nodes at the cloud provider in an auto-scaling group
- Use Terraform/Packer/CloudFormation to place a public load balancer in front of those 3 nodes, with an active health check on the running application's port
- Provision the nodes with a provisioning/config-management tool (Ansible/Salt/Chef) and deploy the container to the nodes
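As a rough illustration of that alternative, the load-balanced auto-scaling group could be sketched in Terraform like this. AWS resource types are used purely as an example; the AMI, instance type, sizes, and variables are all assumptions:

```hcl
# Illustrative sketch only -- provider, AMI, and names are assumptions.
resource "aws_launch_template" "hello" {
  name_prefix   = "hello-world-service-"
  image_id      = var.ami_id          # image baked with Packer
  instance_type = "t3.small"
}

resource "aws_autoscaling_group" "hello" {
  desired_capacity    = 3
  min_size            = 3
  max_size            = 6
  vpc_zone_identifier = var.subnet_ids
  target_group_arns   = [aws_lb_target_group.hello.arn]

  launch_template {
    id      = aws_launch_template.hello.id
    version = "$Latest"
  }
}

resource "aws_lb_target_group" "hello" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path = "/"    # active health check against the running application
    port = "80"
  }
}
```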
## Local development

### Build

- Execute:

  ```shell
  docker build . -t hello-world-service
  ```

### Run

- Execute:

  ```shell
  docker run -it --rm -d -p 18080:80 --name hello-world-service hello-world-service
  ```

- Open your browser, navigate to http://localhost:18080, and be greeted with a hello.

### Test

- Execute:

  ```shell
  ./tests.sh localhost:18080
  ```
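The actual checks live in tests.sh; a minimal version of such a smoke test might look like the sketch below. This is an assumption for illustration, not the repository's script:

```shell
#!/usr/bin/env bash
# Hypothetical smoke test -- NOT the repository's tests.sh.
# Usage: ./smoke.sh host:port
set -euo pipefail
endpoint="${1:?usage: $0 host:port}"

# The endpoint must answer with HTTP 200 ...
status=$(curl -s -o /dev/null -w '%{http_code}' "http://${endpoint}/")
[ "$status" = "200" ] || { echo "FAIL: HTTP $status"; exit 1; }

# ... and the body must contain a greeting.
curl -s "http://${endpoint}/" | grep -qi "hello" || { echo "FAIL: no greeting in response"; exit 1; }
echo "OK: ${endpoint} serves a hello"
```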