On implementing DevOps, too many tools, DevSecOps, and more with Nana Janashia
To excel in DevOps, the first step is to think like an architect – and yes, you heard it right!
This is the foundational rule that Nana Janashia, renowned DevOps Trainer and YouTuber instills in her students.
With her free courses and comprehensive bootcamps offered on YouTube and her proprietary training channel, Nana has facilitated the transition of hundreds of thousands of people into DevOps roles.
Additionally, her ‘IT Beginners’ course (for those without IT backgrounds) has attracted thousands of students as well, demonstrating her dedication to accessible tech education for people with different levels of IT expertise. Currently, about 5,000 students from all continents are enrolled in one of her premium courses.
Let’s delve into her approach and insights.
Thinking like an architect
First, the burning question: how do you define DevOps?
Nana: My own preferred definition of DevOps is this: It’s a combination of all processes and tools that remove any bottlenecks and slow-down points in the complete software release pipeline. And if we reverse engineer from that definition, we see that the way to remove bottlenecks and things that slow down the process is to automate them, and that’s why at the core of DevOps are automation tools, everything-as-code concepts, etc.
As a DevOps engineer, you need to think like an architect first, examine the existing systems, and identify any inefficiencies, manual steps, long-running steps, blockages, bottlenecks, and any points of human mistakes or issues slipping into production. Then, redesign the systems to improve those inefficiencies, solve the blockages with automation steps, etc.
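The architect's audit Nana describes can be made concrete with a small sketch. This is a hypothetical model, not any real tool: it represents a release pipeline as a list of steps and surfaces the manual or long-running ones as automation candidates. All step names and durations are made up for illustration.

```python
# Hypothetical sketch: model a release pipeline as a list of steps and
# flag the bottlenecks a DevOps engineer would target for automation.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    avg_minutes: float
    manual: bool  # human-in-the-loop steps are prime automation targets

pipeline = [
    Step("build", 5, False),
    Step("unit tests", 8, False),
    Step("QA sign-off", 240, True),
    Step("deploy to staging", 15, True),
    Step("deploy to production", 20, True),
]

def bottlenecks(steps, slow_threshold=30):
    """Return the names of steps that are manual or slower than the threshold."""
    return [s.name for s in steps if s.manual or s.avg_minutes > slow_threshold]

print(bottlenecks(pipeline))
# the manual sign-off and deploy steps surface as automation candidates
```

The point of the exercise is the mindset, not the code: enumerate every step, measure it, and anything manual or slow becomes the next thing to automate.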
Of course, this is a rather challenging task, especially when dealing with complex systems involving multiple engineering teams.
What is the biggest challenge when implementing DevOps within organizations?
Nana: The biggest challenge is that DevOps impacts the entire set of systems and the whole software development pipeline, which affects different engineering teams and roles.
And that can create a lot of resistance from other engineering teams, when they don’t fully understand the DevOps principles themselves and often see the implementation of DevOps tools as an interference in their already existing tasks and responsibilities.
The higher impact is on the Operations and System Administration side, because the tools and processes DevOps uses to efficiently manage infrastructure, cloud platforms, etc., are more aligned with software development practices and less familiar to Operations teams.
That’s why the best way to implement DevOps is by introducing the changes step by step, with smaller improvements instead of redesigning larger parts of the systems at once. In parallel, educate other engineering teams about DevOps to promote collaboration.
How should collaboration between development and operations teams be promoted?
Nana: Firstly, it’s essential to ensure everyone is on the same page regarding what DevOps achieves and its implications for each engineering team. This helps to reduce resistance and misinterpretation. However, in practice, things may differ from theory.
When implementing DevOps, it’s important to understand that even though DevOps engineers are also hands-on engineers implementing processes they designed and architected themselves, they often need collaboration from other engineers to build fully automated processes across all systems.
To achieve that, especially in an organization that isn’t fully open to these changes, DevOps engineers should start with improvements that make the existing tasks of the other engineers easier, faster, and more efficient.
DevSecOps: a multi-layered approach
How do you handle monitoring and metrics in your DevOps practices?
Nana: As systems grew more complex, automated monitoring became essential to understand and manage them effectively. Since monitoring should involve multiple levels of the systems, the monitoring tools used for each may be different. Cloud platforms usually have their own services to monitor the cloud infrastructure and services, as well as access to the platform. In the Kubernetes world, tools like Prometheus play a big role in monitoring clusters and the workloads inside them.
With monitoring, it’s important not only to have insights but also to have automated alerting in place whenever anything deviates from a norm within the systems to proactively address any issues.
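The "alert when anything deviates from a norm" idea boils down to evaluating rules against metric samples. Here is a minimal, hypothetical sketch of such a rule; the metric name, values, and thresholds are all illustrative, and real setups would express this as an alerting rule in a tool like Prometheus rather than hand-rolled code.

```python
# Hypothetical sketch of a threshold-based alert rule. Firing only on
# sustained breaches (not a single spike) mirrors how alerting rules are
# usually configured to cut down on noise.
def should_alert(samples, limit, sustained=3):
    """Fire only if the metric breaches `limit` for `sustained`
    consecutive samples."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > limit else 0
        if streak >= sustained:
            return True
    return False

cpu_percent = [42, 55, 91, 93, 95, 60]
print(should_alert(cpu_percent, limit=90))  # True: three consecutive breaches
```

The `sustained` parameter is the design choice worth noting: proactive alerting is only useful if engineers trust the alerts, so suppressing one-off spikes matters as much as detecting real deviations.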
How do you integrate security practices into your DevOps processes, especially in terms of code reviews and automated testing?
Nana: DevSecOps principles involve dozens of tools to integrate automated security checks in the DevOps processes.
Like monitoring, security is also applied on multiple layers. And since security is layered, it’s important to test all parts of the systems to identify any vulnerabilities. Therefore, DevSecOps involves tools for testing security issues in the application code, in third-party dependencies, and in the image layers, as well as dynamic security tests that tamper with the application to exploit it as hackers would, etc.
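To make one of those layers concrete, here is a hypothetical sketch of a dependency scan: it checks declared packages against an advisory feed and reports findings. The advisory data, package names, and CVE identifier below are entirely made up; real pipelines delegate this to dedicated scanners fed by public vulnerability databases.

```python
# Hypothetical sketch of one DevSecOps layer: checking third-party
# dependencies against a vulnerability feed. All data here is invented
# for illustration.
advisories = {
    ("examplelib", "1.2.0"): "CVE-XXXX-0001: remote code execution",
}

def scan(dependencies):
    """Return a list of (package, version, advisory) findings."""
    return [
        (pkg, ver, advisories[(pkg, ver)])
        for pkg, ver in dependencies
        if (pkg, ver) in advisories
    ]

findings = scan([("examplelib", "1.2.0"), ("otherlib", "2.0.1")])
for pkg, ver, advisory in findings:
    print(f"{pkg}=={ver}: {advisory}")
```

In a release pipeline, any non-empty findings list would fail the build, which is what turns a scan into an enforced security gate rather than a report nobody reads.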
Beyond those automated security scans we add in the DevSecOps release pipeline, we also have security on all other levels. Every tool and platform we introduce into the systems opens up a new attack surface if it isn’t secured. That means security on the cloud platform and infrastructure level, security in Kubernetes, compliance as code, automated policy enforcement, etc.
And last but not least, probably the most important of all, the access management for all the systems used, from cloud platforms to CI/CD platforms, etc.
DevOps tools: AI-driven features and fewer tools
How do you design and implement disaster recovery and high-availability solutions in your DevOps workflows?
Nana: Many factors are involved in creating efficient DR and HA solutions in DevOps. Some key components I would highlight are:
- Using Infrastructure as Code principles to define your infrastructure, networking, and configuration in code. Tools like Terraform, AWS CloudFormation, or Kubernetes YAML files enable you to recreate your infrastructure on demand.
- Deploying applications and services across multiple geographic regions or Availability Zones (AZs) to ensure HA. Many Cloud providers offer multi-region deployments for redundancy.
- Regularly backing up data and systems to meet RPO requirements. Using automated backup and snapshotting tools from the cloud providers themselves or third-party backup solutions. Automation is again key here.
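The RPO check in the last bullet reduces to simple timestamp arithmetic. The sketch below is hypothetical: in practice this logic runs against cloud snapshot APIs, but the timestamps and the four-hour RPO here are only illustrative.

```python
# Hypothetical sketch: verify that the newest backup satisfies the RPO
# (Recovery Point Objective), i.e. the maximum acceptable data-loss window.
from datetime import datetime, timedelta, timezone

def rpo_satisfied(last_backup, rpo, now=None):
    """True if the most recent backup is younger than the RPO window."""
    now = now or datetime.now(timezone.utc)
    return now - last_backup <= rpo

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
last = datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc)
print(rpo_satisfied(last, timedelta(hours=4), now))  # True: 2.5h old < 4h RPO
```

An automated job running a check like this, wired into the alerting described earlier, is what turns a backup policy from a document into something that actually fails loudly when backups fall behind.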
In your opinion, what are the emerging trends in DevOps in 2024? Are there any particular tools or technologies you find especially promising or exciting?
Nana: My focus is rarely the tools and technologies but rather the underlying concepts that are either changing or being standardized. So I believe we will see many existing DevOps tools introducing AI-driven features, or completely new AI tools built specifically for DevOps use cases, which will add to the already vast ecosystem of DevOps and Cloud tools. I believe these trends will continue into 2025 as well.
However, at some point, we will see the trend to standardize some of the technologies and concepts. So eventually, instead of a myriad of tools, we would have fewer tools that become established as the industry standard. But we still have a long way to go till then.