About a year ago, we had a project come in that required automating infrastructure setup, configuration management, and change management. I happened to be in the right place at the right time and was put in charge of the whole thing.

The project was designed around a microservices style of architecture, and our teams decided on GitHub as the source code repository, JFrog for storing our build artifacts, a RHEL-based Kubernetes cluster as the deployment platform, and Jenkins to drive the whole thing through a continuous integration, testing, and deployment pipeline. This setup lets developers, QA, and automation engineers run code against different configurations that simulate our continuous integration (CI) environments without much friction. For enterprise teams, Atlassian provides a comparable suite of products geared toward this specific purpose.

Throughout my career, I have been on both sides of the product life cycle. I have written code all day and night, and I have also worked weekends trying to get someone else’s live production code back online before Monday morning. Slowly, I learned to respect the DevOps view of an organization, and how important and difficult it is to maintain the quality of a large product being developed by many groups and teams. My DevOps experience completely changed how I approach software development.

Always Automate
The first thing I did for my team was set up an end-to-end CI/CD pipeline that automates the building, testing, and deployment of code check-ins. Continuous deployment to production is something I never thought feasible, but as I learned more, I realized that continuously shipping code to customers, coupled with continuous testing, vastly reduces operations overhead in the long run. Jenkins is a great tool for this, with engines and plugins for different tech stacks (Java, Python, Scala, Go, …) that build and deploy your code continuously for you. I configured a webhook on each repository so that any commit to the master branch automatically triggers a build. Only after the build passes the CI tests does Jenkins save the artifacts to a dedicated server, from which they get deployed to dev/prod. I could go into much more detail about versioning your artifacts, GitHub branching, and CI tests, but that is for another blog, another time.
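To make that pipeline concrete, here is a minimal declarative Jenkinsfile sketch along those lines. It assumes a Maven-based Java service; the commands, Artifactory path, and manifest name are illustrative placeholders, not our actual configuration:

```groovy
// Illustrative Jenkinsfile: webhook-triggered build, test, archive, deploy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // assumes a Maven-based Java service
            }
        }
        stage('CI Tests') {
            steps {
                sh 'mvn -B verify'          // the pipeline stops here if tests fail
            }
        }
        stage('Archive') {
            steps {
                // push the versioned artifact to Artifactory (path is a placeholder)
                sh 'jfrog rt upload "target/*.jar" libs-release-local/myapp/'
            }
        }
        stage('Deploy') {
            when { branch 'master' }        // only master commits reach the cluster
            steps {
                sh 'kubectl apply -f k8s/deployment.yaml'
            }
        }
    }
}
```

With a GitHub webhook pointed at Jenkins, every push to master walks through these stages automatically.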

Remove dependencies as much as possible
This was easier for me because the project followed a microservices style of architecture. A year in DevOps gave me a much more intimate knowledge of CI, and in particular a better understanding of what happens to all those build modules that different developers commit to GitHub. The way one module uses dependencies can create issues for another module, so keep your repositories as loosely coupled as possible. For example, we had initially planned on using MongoDB as our primary database, and the backend devs wrote their code accordingly. When we shifted to Cassandra a few months later, all they had to do was swap in a different driver, because the rest of their code never talked to the database directly.
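As an illustration of that loose coupling, here is a minimal Python sketch (not our actual code) of hiding the database behind a small interface, so a MongoDB-to-Cassandra switch only touches one class:

```python
from abc import ABC, abstractmethod
from typing import Optional


class UserStore(ABC):
    """The only contract the rest of the service depends on."""

    @abstractmethod
    def save(self, user_id: str, profile: dict) -> None: ...

    @abstractmethod
    def find(self, user_id: str) -> Optional[dict]: ...


class InMemoryUserStore(UserStore):
    """Stand-in implementation; a real one would wrap the pymongo
    or cassandra-driver client behind the same two methods."""

    def __init__(self) -> None:
        self._rows: dict = {}

    def save(self, user_id: str, profile: dict) -> None:
        self._rows[user_id] = profile

    def find(self, user_id: str) -> Optional[dict]:
        return self._rows.get(user_id)


# Application code only ever sees UserStore, so swapping MongoDB
# for Cassandra means writing one new subclass, nothing more.
store: UserStore = InMemoryUserStore()
store.save("42", {"name": "Ada"})
print(store.find("42"))
```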

Clone environments across dev, staging and prod
Another thing that I do differently now is ensure that environments remain the same across all stages of code development. A developer may write code that works on their local system but breaks in CI. Breaking is good: it surfaces bugs before the code ever reaches deployment.

I used Ansible for configuration management, which gave me better control over infrastructure dependencies and failovers. I wrote a playbook that builds an environment mimicking the CI server’s for any developer who needs to debug their code. With that playbook handy, the developer can focus on fixing the defect rather than fiddling with the environment. We used the same approach for our deployment platform, and I kept reusing the Ansible playbooks and scripts to spin up new environments throughout the product life cycle.
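A trimmed-down playbook along these lines might look like the following; the host group, package names, and file paths are placeholders rather than our real inventory:

```yaml
# mirror-ci-env.yml -- illustrative playbook, not the real one
- hosts: dev_machines
  become: yes
  tasks:
    - name: Install the same JDK build the CI server runs
      yum:
        name: java-1.8.0-openjdk
        state: present

    - name: Install Docker, matching the CI server
      yum:
        name: docker
        state: present

    - name: Push the CI server's environment variables to the dev box
      copy:
        src: files/ci-profile.sh
        dest: /etc/profile.d/ci-profile.sh
        mode: "0644"
```

Because the same tasks configure dev, staging, and prod, the environments cannot quietly drift apart.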

Containerize your repositories
To further ensure that developers can work independently of each other, we used Docker to containerize each dev’s work. Devs would define their entire stack inside a container, push that to GitHub, and be confident their code would work as expected irrespective of the server’s environment. I had written an earlier blog on why teams should move towards Docker and virtualization in general (linked below).

Dockerizing anything and everything

As a developer, I used to write code, make sure it passed some unit tests, and push it to GitHub. At the end of the day, I was never sure whether my code would make or break the system. Docker eliminates that kind of uncertainty from your life.
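For a JVM-based microservice, the container definition can be as small as the sketch below (the base image and jar name are assumptions for illustration):

```dockerfile
# Illustrative Dockerfile for one microservice.
# The same base image runs in dev, CI, and prod.
FROM openjdk:8-jre-alpine
WORKDIR /app
# service.jar is the artifact produced by the CI build.
COPY target/service.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "service.jar"]
```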

Run tests more than code
Tests that pass on a developer’s machine often fail when they run on the CI server. My experience with DevOps has taught me to consider the differences between my own machine and the CI server before I commit my code. For example, when I test a piece of code on my machine, that test is all the machine is doing; when a CI server goes into action, it may be running many processes simultaneously.

This extra load can cause delays as processes are switched in and out of context, and it’s exactly this kind of timing variation that exposes bugs in untested corners of your code. Thinking through these real-life CI scenarios enables me to write more robust tests.
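To see why load matters, consider a non-atomic read-modify-write, a classic bug that often hides until threads interleave. This toy Python script (purely illustrative, not from our codebase) tends to lose updates once the machine is busy:

```python
import threading

counter = 0

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write: a context switch between the read and
                        # the write silently discards other threads' updates

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; under contention this usually prints less --
# exactly the kind of failure a loaded CI server surfaces.
print(counter)
```

A test around code like this passes in isolation and fails intermittently once the CI server runs it alongside everything else.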

My experience with DevOps and its tools has had a profound effect on how I code now. Instead of focusing only on whether my code passes unit tests, I take a more forward-thinking approach: I consider how my code will work in sync with other developers’ code. These are invaluable lessons I learned while in DevOps, and for any developer out there, I strongly recommend making this foray into the unknown, even if just for a short time.
