Erik Lundevall Zara

Originally published at cloudgnosis.org

Tidy Cloud AWS issue #39 - Bye 2022, welcome 2023

Hello all!

Welcome to the next issue of the Tidy Cloud AWS bulletin! In this issue, we start the new year with a few interesting blog posts, the State of DevOps report from 2022, and a bit on Clojure.

Enjoy!


Interesting blogs

Pulumi-Tailscale - connecti

Pulumi recently sent out a newsletter update with a reference to an interesting post on the Tailscale website - Tailscale for DevOps: Connect to any subnet in your tailnet with Connecti (by Pulumi). The idea is to provide a simple command-line tool that sets up connectivity to a private network for your computer on demand. The connection is set up when you need it and torn down when you are done with it: Tailscale provides the network connectivity, and Pulumi provisions the Tailscale resources and removes them when they are no longer used. All of this is packaged into a dedicated command-line tool.

This is a great example of extending infrastructure-as-software tools to build handy utilities for specific use cases and improve the user experience. Great work!

DevOps in diapers

This blog post looks at what is perhaps a common issue with today's infrastructure environments, be it cloud, hybrid, or something else: there are simply too many tools, and too much time is spent figuring out how to use them properly, for tasks that should be simple but are not.

I have experienced this myself more often than I would like: services and tools with bad documentation or simply bad interfaces, and esoteric exceptions that make your particular use case really difficult to handle.

We can do ourselves a favour by getting better at documenting our struggles and sharing that knowledge - how to make things work, and ways to make them easier and better.

ECS Service Connect

This blog post introduces an interesting service feature for AWS ECS, called Service Connect. It allows services running in ECS to connect to each other, even if they live in different VPCs and clusters. You have already been able to connect easily to other services in the same ECS cluster via AWS Cloud Map; this feature extends that beyond individual clusters and VPCs.

Services are assigned to namespaces, and services within the same Service Connect namespace can communicate with each other without worrying about networking details. Under the hood, Service Connect adds a sidecar container to the existing task definitions in ECS to handle the grunt work.
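
To make this a little more concrete, here is a rough sketch of the shape of a Service Connect configuration, written as Clojure-style data since that is the language used later in this issue. The field names follow my reading of the serviceConnectConfiguration block in the ECS CreateService API, and the namespace, port name and DNS alias are made up for illustration, so double-check against the AWS documentation before relying on it.

```clojure
;; Sketch of the Service Connect part of an ECS service definition, as Clojure data.
;; Field names mirror the serviceConnectConfiguration block of the CreateService API
;; (worth verifying); the concrete values are made up for illustration.
(def service-connect-config
  {:enabled   true
   :namespace "orders-namespace"         ;; the Cloud Map namespace the services share
   :services  [{:portName      "api"     ;; must match a named port in the task definition
                :clientAliases [{:port    8080
                                 :dnsName "orders-api"}]}]})
```

Other services registered in the same namespace should then be able to reach this one as orders-api:8080, with the injected sidecar taking care of the routing.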

More command-line glory

In issue 35 of the Tidy Cloud AWS bulletin, I wrote about a couple of alternatives to bash and zsh. One alternative that I did not include then is babashka, or bb for short. One reason it was not included is that it is not a shell per se: it aims to replace bash scripting, not to be a complete shell replacement.

Babashka is an implementation of Clojure, a powerful general-purpose programming language which excels at data processing. Regular Clojure runs on the Java Virtual Machine (JVM) and can have a startup time which may be a bit long for short-lived scripting tasks. Babashka has a much quicker startup time, making it well suited for scripting.

Why would you pick babashka for shell scripting? For similar reasons that you would pick Python or Ruby for scripts, except that it is a single binary rather than a whole runtime environment to set up. If you already do some work with Clojure or ClojureScript, then babashka becomes a natural extension into shell scripting.
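
As a small illustration, here is the kind of task where I would reach for babashka instead of bash. The script is my own sketch, not taken from any of the material above.

```clojure
#!/usr/bin/env bb
;; Summarize disk usage per file extension under the current directory -
;; the kind of task that quickly gets awkward in bash, but stays readable here.
(require '[babashka.fs :as fs])

(def totals
  (->> (fs/glob "." "**")                        ;; all paths under the current dir
       (filter fs/regular-file?)
       (map (juxt #(or (fs/extension %) "none") fs/size))
       (reduce (fn [acc [ext size]]
                 (update acc ext (fnil + 0) size))
               {})))

(doseq [[ext total] (sort-by val > totals)]
  (println (format "%12d bytes  %s" total ext)))
```

Save it as, say, du-by-extension.clj and run it with bb du-by-extension.clj - no JVM startup wait and no extra runtime to install.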

nbb for Node.js

If you are into the Node.js ecosystem as well, then nbb may also be a great option. It provides babashka-style scripting integrated with the Node/npm ecosystem - you can use any npm package in your nbb scripts.
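
As a tiny taste, here is a sketch of an nbb script that uses Node's built-in modules through ordinary ClojureScript interop; npm packages are pulled in the same way after an npm install.

```clojure
;; env-report.cljs - run with: nbb env-report.cljs
;; Node built-ins (and npm packages) are required by their string names.
(ns env-report
  (:require ["os" :as os]
            ["fs" :as fs]))

(println "Host:" (os/hostname))
(println "Node:" js/process.version)
(println "Files in cwd:" (count (fs/readdirSync ".")))
```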

One use case in AWS land is to write AWS Lambda functions using nbb.

A few Babashka and nbb resources:

Clojure turns 15

Clojure is a dynamic and very expressive language that was initially released to the public 15 years ago, in October 2007. It is part of the Lisp family of programming languages, where the first Lisp appeared in 1958. It is a functional language, with dynamic typing.

The original and main implementation runs on the Java Virtual Machine (JVM), but there are other implementations as well, including ClojureScript (JavaScript/web ecosystem), ClojureCLR (.NET ecosystem), babashka, nbb, ClojureDart (for use with Flutter), and others.

The syntax may feel odd initially, but it is simple to learn and very consistent. The rich set of libraries and functions takes longer to learn, and getting comfortable with functional programming and immutable data structures will also take some time if you are not already familiar with them.
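
To give a feel for it, here is a tiny taste of the syntax and of working with immutable data (my own example, not taken from any of the linked material):

```clojure
;; Everything is (function arg1 arg2 ...), and data is immutable by default.
(def team {:name "platform" :deploys 12})

(def updated (assoc team :deploys 13))  ;; returns a new map...

(:deploys team)     ;; => 12  ...the original is untouched
(:deploys updated)  ;; => 13

;; Data processing with higher-order functions and the threading macro:
(->> (range 1 11)
     (filter odd?)
     (map #(* % %))
     (reduce +))     ;; => 1 + 9 + 25 + 49 + 81 = 165
```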

I would say, though, that Clojure is one of those programming languages that will make you a better developer, even if you work with other languages after learning it.

Developing software in Clojure is not just about the syntax, though; it is also about the workflow and tooling you use. REPL-driven development is a workflow for interactively developing the software in a tight feedback loop - you are molding the software as you run it. It is one big reason people pick and enjoy developing with Clojure.
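
A common expression of this workflow is the "rich comment" block: small forms kept next to the code that you evaluate one at a time from your editor, against the running program, instead of restarting anything. A minimal sketch:

```clojure
(defn greet [name]
  (str "Hello, " name "!"))

(comment
  ;; Evaluated form by form from an editor connected to a live REPL:
  (greet "world")            ;; => "Hello, world!"
  (map greet ["Ada" "Bo"])   ;; => ("Hello, Ada!" "Hello, Bo!")

  ;; Change greet above, re-evaluate it, call it again - the feedback
  ;; loop is seconds, with no restart of the application.
  )
```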

One very nice presentation that introduces Clojure and the ideas behind the language is Clojure in a nutshell, by James Trunk.

The Clojure website has a good introduction to REPL-driven or REPL-aided development. It is a valuable read, although you do not need to read all of it right away.

To Clojure and beyond

I have been using Clojure from time to time over the years, but mostly, the work I have done has required or used other programming languages. I find this unfortunate, because it is a fantastic language.

I can only recommend trying it out, if you want to expand your views on software development. If I were forced to pick only one programming language to use from now on, that language would likely be Clojure.

This list of material to learn Clojure is a nice reference if you want to dig into this excellent language: https://gist.github.com/ssrihari/0bf159afb781eef7cc552a1a0b17786f

State of DevOps 2022

In the last couple of years, the DevOps Research and Assessment (DORA) team at Google Cloud has produced the Accelerate State of DevOps Report. This report has surveyed thousands of software delivery teams to understand what makes some teams low performing and some teams high performing. The latest report, for 2022, came out in September.

For the past few years, there have been four key metrics used to measure where a software team is on this performance scale:

  • Deployment Frequency—How often an organization successfully releases to production

  • Lead Time for Changes—The time it takes a commit to get into production

  • Change Failure Rate—The percentage of deployments causing a failure in production

  • Time to Restore Service—How long it takes an organization to recover from a failure in production

Google Cloud provides a quick check to help assess where your team may be: https://www.devops-research.com/quickcheck.html.
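
To make the metrics a bit more concrete, here is a small Clojure sketch that computes two of them from a hypothetical deployment log; the record shape and the numbers are made up for illustration.

```clojure
;; Hypothetical deployment log; the field names are made up for illustration.
(def deployments
  [{:failed? false}
   {:failed? true :minutes-to-restore 45}
   {:failed? false}
   {:failed? false}])

;; Change Failure Rate: the share of deployments causing a failure in production.
(defn change-failure-rate [ds]
  (/ (count (filter :failed? ds)) (double (count ds))))

;; Time to Restore Service: here simply the mean restore time of failed deployments.
(defn mean-time-to-restore [ds]
  (let [times (keep :minutes-to-restore ds)]
    (when (seq times)
      (/ (reduce + times) (double (count times))))))

(change-failure-rate deployments)   ;; => 0.25, i.e. 25%
(mean-time-to-restore deployments)  ;; => 45.0 minutes
```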

I think DORA has done very good work in this area, and these four key metrics are better than many other metrics that have been used to measure performance and efficiency of software delivery teams.

However, the results and conclusions beyond that have sometimes met with criticism, for example in Dave Farley’s video Did microservices break DORA?.

DORA is a private research group, and the full set of survey questions and answers is not available for public scrutiny, which raises some questions about the conclusions drawn.

Part of the problem may be that not all the terminology used is well defined, so different respondents may interpret the questions differently. Without seeing the actual questions and answers, it is hard to know.

This reminds me of the State of the Cloud report from HashiCorp, where many percentages are provided in favour of a multi-cloud strategy and usage. One sentence in that study says that multi-cloud means using more than one public or private cloud. This is a pretty fuzzy definition, and it is not clear what it means in practice. At what point is your on-premises IT considered a private cloud, and where do you draw the line between one or more private clouds in an organization? Is using services like GitHub, GitLab or Bitbucket considered using different clouds? What about Jira, Confluence, Azure Active Directory, Office 365, Google Workspace or Google Drive?

I really like much of the work by DORA, but it is also sound to review the conclusions with a critical eye.


You can find older bulletins and more at Tidy Cloud AWS. You will also find other useful articles around AWS automation and infrastructure-as-software.

Until next time,

/Erik

Top comments (4)

Ella (she/her/elle)

If I were forced to pick only one programming language to use from now on, that language would likely be Clojure.

High praise indeed, especially given your generally analytical approach to most of the tools and content you share.

Your discussion of the State of DevOps-type reports particularly resonated with me. I could easily fall into research rabbit holes, interrogating the meaning and methodology of most of the reports we see floating around.

Sometimes I wonder what the intention of them is, produced (as they so often are) by service or product vendors themselves.

Erik Lundevall Zara

I think that is a good point; most of the reports you see on a topic are often either conducted directly by a vendor, or indirectly by a vendor paying for the production of the report.

If you are in the position to define the questions asked, to whom they are presented and under what circumstances, you will of course affect the outcome - intentionally or unintentionally.

For many scientific reports, you can also access the raw data that was collected and do your own analysis from it. You also get very specific details about how the data was collected or measured.
All of this is so you can either reproduce the same results, or come to a different conclusion based on the same data.
But I do not know how much academic research is done on these topics.

Ella (she/her/elle)

You also have very specific details about how this data was collected or measured.

And, importantly to me, why that particular dataset was selected.

As you say, when the findings of the research can be influenced by the questions themselves, it's crucial that every question delivers data for a specified purpose. Extra questions for the sake of extra questions are a liability (or influence, depending on your perspective)...

Erik Lundevall Zara

Yes, you are right, that is quite crucial. It is not obvious whether that is the case when you look at a report without that insight.

It is much easier to see this pattern in product comparisons on a vendor's website, where they compare against the competition. The points of comparison chosen, and how they are phrased, generally tend to emphasize their product's strengths and put less focus on potential weaknesses.

Back in the days of big licensing deals, when I worked for a product vendor, it was a sales strategy to try to influence the questions a customer asked in bids sent out to multiple vendors - which was possible if you had already established trust with that customer. If the company was not in such a position, some other vendor had likely done the same thing.