The Puppet State of DevOps Report 2021 – A Summary

I get a bit confused every year about who is writing the State of DevOps Report and how that gets decided – in the past it’s been Puppet, Google, DORA and others – but this year, 2021, it was definitely Puppet.

[Edit: apparently there are two State of DevOps reports now… I’m staying out of that particular argument though!]

The State of DevOps Report each year attempts to synthesise and aggregate the current state of the technology industry across the world with respect to our collective transformation towards delivering value faster and more reliably. Or, as Jonathan Smart puts it, “Sooner, Safer, Happier”. The DevOps shift has been in progress for over a decade now, and whilst DevOps was always really about culture, the most recent reports are now emphasising the importance of culture, progressive leadership, inclusion, and diversity more than ever before.

Last year, in 2020, the core findings of the State of DevOps Report focussed on:

  1. The technology industry in general still had a long way to go and there remained significant areas for improvement across all sectors.
  2. Internal platforms and platform teams were a key enabler of performance, and more organisations were starting to adopt this approach.
  3. Adopting a long-term product approach over a short-term, project-oriented one improves performance and facilitates improved adoption of DevOps cultures and practices.
  4. Lean, automated, and people-oriented change management processes improve velocity and performance over traditional gated approaches.

This year (2021), there are a number of key findings building on previous DevOps reports:

1. Well defined and architected Team Topologies improve flow.

Clear organisational dynamics, including well-defined boundaries, responsibilities, and interactions, are critical to achieving fast flow of value. Whilst last year’s report highlighted the importance of internal platforms, this report emphasises the importance of Conway’s Law, and shows that well-defined team structures and interactions strongly influence the architecture and performance of the technology teams build: platform teams (which scale out the benefits of DevOps transformations across multiple teams), cross-functional value-stream-aligned teams, and enabling teams. Team “Interaction Modes”, as seen in the diagram below, are also critical to define, in the same way that we would define API specifications.

[Figure: DevOps and Team Topologies]

The book Team Topologies expands upon this concept in great detail, and its authors, Matthew Skelton and Manuel Pais, also provide excellent training to help you apply these concepts to your own context.
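
To make the “define team interactions like API specifications” analogy concrete, here is a minimal, purely illustrative sketch of how a team’s boundaries and interaction modes could be written down as explicitly as a service interface. The team names, fields and types below are invented for the example; they aren’t taken from the report or the book.

```python
from dataclasses import dataclass, field
from enum import Enum

class TeamType(Enum):
    STREAM_ALIGNED = "stream-aligned"
    PLATFORM = "platform"
    ENABLING = "enabling"
    COMPLICATED_SUBSYSTEM = "complicated-subsystem"

class InteractionMode(Enum):
    COLLABORATION = "collaboration"    # working closely together, e.g. for discovery
    X_AS_A_SERVICE = "x-as-a-service"  # consuming something with minimal coordination
    FACILITATING = "facilitating"      # helping another team learn or adopt a practice

@dataclass
class TeamAPI:
    """The published 'surface' of a team: what it owns and how others interact with it."""
    name: str
    team_type: TeamType
    owns: list[str] = field(default_factory=list)
    interactions: dict[str, InteractionMode] = field(default_factory=dict)

# A hypothetical stream-aligned team and its declared interactions.
payments = TeamAPI(
    name="payments",
    team_type=TeamType.STREAM_ALIGNED,
    owns=["checkout", "refunds"],
    interactions={
        "internal-platform": InteractionMode.X_AS_A_SERVICE,
        "security-enabling": InteractionMode.FACILITATING,
    },
)
```

The point is not the code itself, but that team boundaries and interactions become explicit, reviewable artefacts rather than tacit assumptions.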

[Figure: Clear team responsibilities]

What is also clear from this year’s State of DevOps Report, and has been for some time, is that siloing DevOps practices into separate “DevOps teams” is, in most cases, an antipattern. And there should still be no such thing as a “DevOps Engineer”.

2. Use of cloud technology remains immature in many organisations.

Whilst the majority of organisations are now using cloud technology such as IaaS (infrastructure-as-a-service), most are still using it in ways analogous to how we used to manage on-premise or datacentre technology. High performers are adopting “cloud-native” technologies and ways of working, including the NIST (National Institute of Standards and Technology) essential characteristics of cloud computing: “on-demand self-service, broad network access, resource pooling, rapid elasticity or expansion, and measured service.” How these are implemented is very context-specific, but includes the principles of platform(s) as a product or service, strong competencies in monitoring and alerting, and SRE (Site Reliability Engineering) capabilities, whether as dedicated SRE teams or SRE roles within cross-functional teams.

[Figure: Cloud-native capabilities and DevOps]
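
As a rough illustration of what “on-demand self-service” can look like in practice, the sketch below shows a delivery team provisioning its own environment through an internal platform API rather than raising a ticket for an ops team. The endpoint, payload and response fields are entirely hypothetical – a sketch of the idea, not any real platform’s API.

```python
import requests

def provision_environment(team: str, service: str) -> str:
    """Self-service provisioning via a (hypothetical) internal platform API."""
    response = requests.post(
        "https://platform.internal.example/api/v1/environments",  # invented endpoint
        json={
            "team": team,
            "service": service,
            "size": "small",    # resource pooling: shared, metered capacity
            "autoscale": True,  # rapid elasticity
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["environment_url"]  # measured, on-demand, no ticket queue
```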

3. Security is shifting left.

High performers in the technology space integrate security requirements early in the value chain, including security stakeholders in the design and build phases rather than only at the deploy, or even worse, run, phase. Traditional “inspection” approaches to security, governance and compliance significantly impact flow and quality, resulting in higher risk and lower reliability. Applying DevOps principles and practices to change management, security and compliance improves flow, reliability and performance, and keeps the auditors off your back.

[Figure: DevSecOps transformation]

Whilst some call this DevSecOps, many would simply call it DevOps the way it was always intended to be.
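
One common way to shift security left is to gate the build on scanner findings, so that serious vulnerabilities fail fast at build time rather than surfacing at deploy or run time. The sketch below is a minimal, generic example: the findings file format is an assumption made for illustration, not any particular scanner’s output.

```python
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings_path: str) -> int:
    """Return a non-zero exit code if the scan contains blocking findings."""
    with open(findings_path) as f:
        findings = json.load(f)  # assumed shape: [{"id": "...", "severity": "..."}, ...]
    blocking = [item for item in findings
                if item["severity"].lower() in BLOCKING_SEVERITIES]
    for item in blocking:
        print(f"BLOCKING: {item['id']} (severity: {item['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))  # e.g. python security_gate.py findings.json
```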

4. DevOps and Digital Transformation must be delivered from the bottom-up, and empowered from the top-down.

Culture is the reflection of what we do, the behaviours we manifest, the practices we perform, the way we interact and what we believe. Culture change is never successfully implemented only from the top-down, and must be driven and engaged with by those expected to actually change their behaviours and practices.

[Figure: DevOps transformation promotion]

Cultural barriers to change include unclear responsibilities (enter Team Topologies), insufficient feedback loops, fear of change and a low prioritisation for fast flow, and most importantly, a lack of psychological safety.

[Figure: Psychological safety and risk]

A lot of these findings, unsurprisingly, echo those of Google’s Project Aristotle, which showed that psychological safety, clarity, dependability, meaning and impact were crucial for high performance in teams.

Extra note on “Legacy” workloads.

The report highlighted the “dragging” effect that legacy workloads can have on flow and rate of change, whether due to their architecture, codebase, or infrastructure, or the fact that nobody in the organisation understands them any longer. Rather than leaving your legacy workloads alone, invest in them “so that they’re no longer an inhibitor of progress”. This could be as simple as virtualising physical hardware, or decomposing part of the system and moving certain components to cloud-native platforms such as Kubernetes or OpenShift. Even if you have to do something a bit “ugly”, such as creating 18GB containers, it’s still a step forward.

TL;DR

  1. Organisational dynamics must be considered crucial to transformation.
  2. Cloud-native approaches are critical. It is no good to simply move traditional workloads to the cloud.
  3. Shift security, compliance and change governance left, and include security stakeholders in all stages of value delivery.
  4. Culture change is key, and must be promoted from the very “top” as well as delivered from the “bottom”. Psychological safety is at the core of digital and cultural transformations.

If you’re interested in finding out more about DevOps and Digital Transformations, Psychological Safety, or Cloud Native approaches, please get in touch.

Thanks to Nigel Kersten, Kate McCarthy, Michael Stahnke and Caitlyn O’Connell for working on the 2021 State of DevOps Report and providing us with these insights.

View the 2021 Accelerate State of DevOps Report summary here.

Adoption vs Adaption in Resilience Engineering and DevOps

Photo by Chris Abney on Unsplash

DevOps was, and to a degree still is, a “ground-up” phenomenon. It came to be adopted, adapted and evolved by engineering teams before “management” even really understood what it was.

The openness and flexibility espoused by DevOps meant that it could be interpreted in different ways by different teams in different contexts. This was a key strength: unlike rigid frameworks such as ITIL, the people responsible for doing the work were able to modify and apply DevOps to their own work, in the ways that best suited them.

But this loose definition also proved to be a weakness. Because there were no limits to how DevOps could be interpreted and applied, it was often (and still is) interpreted as a technology solution rather than cultural change. This resulted in “DevOps engineers”, or “DevOps teams” whose remit is focussed on cloud technology, CI/CD pipelines, or automation. 

Because of this, we’re still far behind where we could be as an industry. Despite everyone in technology knowing the term “DevOps”, and almost every firm adopting some degree of DevOps practices, these transformations have often stuttered or even failed, in part because it’s unclear to many what DevOps really is and how to “do” it.

Resilience Engineering is a field of applied research that considers organisational-scale capability to anticipate, detect, respond to, and adapt to change. The principle of socio-technicality is core to RE: the premise that you can’t separate people from technology. If you change the technology, it will affect people; and if you change the way people work or communicate, or the way teams are structured, it will affect the technology those people create and consume.

RE as a field has been around for almost two decades, but only now (for various reasons, including the Covid pandemic) is it beginning to touch mainstream discussions and to be discussed in the same conversations as digital and organisational transformation.

Researchers and practitioners of RE are quick to clarify what RE is, and is not, during these discussions. Whilst it may seem dogmatic to be so strict about the remit of the field, I think this reflects a valuable lesson learned from one of the weaknesses of DevOps: in order for organisations to successfully adopt and adapt to a new operating model and set of principles such as RE, it’s essential to understand very clearly what it is.

Resilience Engineering, despite being nearly twenty years old as a field, is somewhat embryonic in its adoption outside of a narrow field of specialist researchers and practitioners, and as such, it’s crucial that we define accurately what it is, what it is not, and resist attempts (intentional or unintentional) to co-opt the term to mean something more akin to chaos engineering, automation, or system hardening efforts. 

A balance must be struck between defining accurately what RE is and tolerating (indeed, encouraging) a flexibility of interpretation and adoption in different contexts. DevOps was maybe too loose in this respect; other paradigms such as ITIL or SAFe were maybe too strict and dogmatic. Maybe with Resilience Engineering, the sweet spot will be found.

Digital Transformation and DevOps: Enterprise Resilience


Digital Transformation is having a real moment in industry, in part due to the huge changes as a result of the pandemic of 2020.  But as usual, there’s little agreement about what it means. In contrast to previous “transformations” such as ITIL, Lean, Agile, or DevOps, digital transformation doesn’t simply mean automating processes, becoming more efficient, offering your existing products and services online, creating an app, or shifting your infrastructure to the cloud. Even the annual State of DevOps Reports are beginning to focus more on digital and organisational transformation rather than a specific focus solely on DevOps.

What is digital transformation?

True digital transformation means transforming everything about your organisation, in respect of both people and technology, towards an engaged, agile, happy and high-performing organisation. DevOps was (and still is) one key aspect of this approach. The only way to truly achieve organisational resilience or enterprise agility is to fundamentally transform the foundations of the organisation. The list below describes just some of the aspects of digital transformation and the areas to address:

  • Culture, values and behaviours
  • Practices and ways of working
  • Communication structures
  • Hierarchies
  • How financial budgets and funding models are managed
  • How teams and people are measured and incentivised
  • How and what metrics are used
  • Cloud native architectures and practices
  • Moving from projects to products
  • Team structures, topologies and interactions
  • Recruitment and onboarding/offboarding practices
  • Value stream alignment
  • Breaking down silos
  • Embedding the ability to change and adapt
  • Reducing cognitive load
  • Psychological safety in delivery teams, senior leadership teams and functional teams
  • IT services and operational technologies
  • Facilities, colocation, office layouts (especially options for open-plan or not)
  • And many, many more – in fact, here is an (incomplete) list of organisational factors relevant to transformation.

Why digital transformation?

What’s your organisational goal? Maybe it’s increasing your speed to market for new products and features, maybe it’s reducing the risk of failure in production and improving reliability, or maybe it’s to keep doing what you’re doing but with less stress and improved flow. If you’re only looking to reduce costs, however, digital transformation is not for you: one of the core requirements for a transformation to succeed is for everyone in the organisation to be psychologically safe, engaged, and behind it, and reducing costs and potentially cutting workforce numbers is not going to create that movement.

What is Enterprise Resilience?

Resilience Engineering is a field of applied research, nearly two decades old, that focusses on the capacity to anticipate, detect, respond to, and adapt to change. Organisational “robustness” might mean being able to withstand massive disrupting events such as pandemics or competition, but enterprise agility represents the resilience engineering concept of true resilience: not just “coping” with change, but improving because of it, and being better prepared for future challenges. I believe that Resilience Engineering is the direction in which DevOps is evolving.

Why is digital transformation so complex?

Despite many attempts to simplify the concept of digital transformation, it remains one of the most challenging endeavours we could embark upon.

[Figure: Galbraith’s Star model]

I’m not a huge fan of over-simplifying organisational complexity into components, especially in models such as Galbraith’s Star that treat “people” as just one component among several (and certainly not in models that consider anything other than people to be the primary element). Whilst models such as this may help people compartmentalise the transformation challenge, in almost every case the fractures between the various components don’t actually exist in the way they’re presented.

Organisations are not simply jigsaw pieces of technology, tools, and people that react and function in predictable ways. As the Cynefin model shows us, systems exist in multiple different states. Complex states, such as the state in which most sociotechnical systems (the organisations we work in) reside, require a probe-sense-respond approach with built-in feedback loops to determine what effect the intervention you’re working on is having. Everything in digital transformation is an experiment.

[Figure: the Cynefin framework (via Wikimedia Commons)]
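
A probe-sense-respond loop can be expressed, very loosely, as code. The sketch below is purely illustrative: in a real transformation the probes, measures and responses are organisational interventions and observations, not function calls.

```python
def probe_sense_respond(probe, sense, amplify, dampen, iterations=6):
    """Run a series of safe-to-fail experiments with built-in feedback.

    probe:   apply a small, reversible intervention
    sense:   measure what actually changed (e.g. flow, lead time, morale)
    amplify: do more of what worked
    dampen:  stop or shrink what didn't
    """
    baseline = sense()
    for _ in range(iterations):
        probe()
        signal = sense()
        if signal > baseline:
            amplify()
            baseline = signal
        else:
            dampen()
```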

It’s also important to avoid localised optimisation: applying digital transformation approaches to one part of an organisation whilst ignoring other parts will only result in tension, bottlenecks, friction and failures elsewhere. We must observe and examine the entire system, even if we cannot change all of it. Ian Miell discusses in this excellent piece why we must address the money flows in an organisation.

Likewise, changing one small part of a system, especially a complex system, will have unintended and unanticipated effects elsewhere, so a complete, holistic view of the entire organisation is critical.

Digital transformation is a series of experiments

This is why, if anyone suggests that there is a detailed “roadmap”, or even worse, a Gantt chart, for a digital transformation project, at best it’s naive and at worst it’s fiction. Any digital transformation process must be made not of a fixed plan, but of a series of experiments that allow for iterative improvements to be made.

[Figure: Digital transformation – everything is an experiment]

When you think about digital transformation in this way, it also becomes clear why it will never be “finished”. Organisations, like the people they consist of, constantly change and evolve, just like the world we operate in, so whilst digital transformation is undoubtedly of huge value and an effective approach to organisational change, you will never, ever, be “done”.

In my role as Transformation Lead at Red Hat Open Innovation Labs, we use the Mobius Loop approach to provide structure to this experimental, feedback-based, continuous improvement and transformation journey.  If you’re interested in digital transformation, DevOps, Psychological Safety and how you can begin to set transformation in motion in your own organisation, get in touch.


Psychological Safety and Organisational Resilience

Psychological safety is the “shared belief that the team is safe for interpersonal risk taking” (Dr Amy Edmondson, 1999), and is the single most important factor in team performance, according to Google’s Project Aristotle (2016). People, teams, and organisations that possess high levels of psychological safety will innovate faster, suffer fewer major incidents, and will adapt to changes faster than the competition.

As 2020 has shown us, resilience – that is, the ability to withstand and learn from challenges – is one of the most important capabilities of an organisation.

In this talk, Jabe Bloom and I discuss how psychological safety is one of the most important factors in enabling organisations to be more resilient to challenges, and building a DevOps culture.

Critique of Personality Profiling (Myers-Briggs, DISC, Predictive Index, Tilt, etc)

I find that some of my ideas take a few weeks, months or even years to form. This one took almost exactly a year before coalescing (coagulating?) in my mind. I’ve been thinking about personality tests in the context of efficacy, equity and neurodiversity recently, and it troubles me.

I’ve always found personality testing problematic – indeed, I find any pseudo-Jungian approach to putting people into type categories highly distasteful and potentially harmful.

Critical literacy is sorely lacking in the business and management world. This is probably, in large part, because it’s not rewarded: we reward confidence, sticking by decisions, bullishness, and simple answers to complex problems.

In respect of diversity, inclusion, and equity, I just can’t square the desire to categorise people and their personalities with the very real need for inclusion and diversity of ways of thinking. It seems simply antithetical.

To summarise the flaws in personality testing:

  • There is very little evidential basis behind personality profiling, and significant evidence against it.
  • The models are usually based on false dichotomies of “big picture vs detail-oriented”, when there is no evidence that these exist.
  • The models are also based on WEIRD (Western, Educated, Individualistic, Rich, and Democratic) societies, and fail to recognise collectivist, holistic strengths.
  • They rarely address context and inter-relational behaviours, but instead make assumptions about behaviour from individualistic measures.
  • They tend to assume that our personalities are largely fixed and unchangeable.
  • These tools can lead to false and potentially harmful assumptions made about other people and the way they behave.
  • The tools may be used for unethical (and illegal) practices such as recruitment, selection for promotion, or other decisions made about someone without their consent.
  • In my experience, they are one of the most highly weaponised management tools ever created.
  • Because they lead people to believe that they can understand someone based upon a profile, they can prevent further discussion, examination and effort to understand people and their ever-evolving uniqueness.
  • The algorithms used are rarely open. Algorithms inherit the biases of those people that created them, and if we are making ourselves subject to analysis by algorithm, I want to know what it’s doing and who designed it.
  • Many tests are biased (see above) – for example, the Big Five has been shown to bias against women, categorising them as more aggressive when they answer identically to a man, because the original data model was flawed.
  • To avoid a critique of poor reliability, we’re often told to avoid doing the tests more than once.
  • When assigned a profile, we are generally not allowed to dispute it. Even though we have spent decades in our own minds, a five-minute test is assumed to know us better than we know ourselves.

Even scientists who are most concerned with assessing individual differences in personality would concede that our ability to predict how particular people will respond in particular situations is very limited.

Personality, strength, or psychometric models such as Myers-Briggs, DISC, Belbin, Predictive Index, Tilt and the myriad others available attempt to codify people and their preferences, personalities, behaviours and values into archetypes, using fixed (usually proprietary and opaque) algorithms. There is usually a commercial reason these tests are closed-source: companies don’t want someone copying and redistributing the code. But it also prevents detailed analysis and evaluation of the algorithm.


Repeatability and validity

These archetypes (such as “Maverick” or “Inventor”) are then categorised and collated into larger group types, and in many organisations are used to inform everything from role selection to management approach, and even hiring decisions (which is illegal in many cases).

In 20 years of management, I have never seen a psychometric analysis tool generate a constructive outcome, particularly from a diversity, equity and inclusion (DEI) perspective. I also find it interesting that personality testing *only* exists in the business world, not in the academic world of actual psychological study. Do business managers actually think they know something psychologists don’t?

In my opinion (somewhat backed up by many years of experience and study), categorising people and attempting to simplify the complexities of our nature, in an attempt to make other people and ourselves more predictable, is certainly a seductive proposition. But it is error-prone, and dangerous. Adam Grant, organisational psychologist at Wharton, agrees.


Psychometric analyses don’t work. Indeed, they are often damaging.

The reason they will never work is that they try to map a complicated framework onto a complex problem. You may be familiar with Carl Jung and his “12 Archetypes” of “Ruler, Sage, Explorer, etc.”, which are frequently criticised as mystical or metaphysical essentialism. Because the archetypes are defined so vaguely, and because archetypal images have been observed by Jungians in an essentially infinite variety of everyday phenomena, they are neither generalisable nor specific in a way that can be researched or demarcated with any kind of rigour. Hence they elude systematic study – as is true of many other domains of knowledge that seek to reduce complex problems and systems to simple, archetypal models and solutions.

As Cynefin shows us, complicated systems can be really big, and appear complex, but the laws of cause and effect don’t change. When you press the A/C button in your modern car (which is “complicated”), the A/C comes on, and the same thing happens every subsequent time you do it. This is rather obviously not the case with people.

In a complex system such as a human, asking a teammate to help you out with a task one day results in them helping you, but on another day they might tell you to stick it; maybe they’re hungover, stressed and busy, maybe they’re tired, or maybe they just don’t feel like helping. Cause and effect change in complex systems; and humans are complex. Really complex. Which is why “the soft stuff is the hard stuff“.

Complicated systems can seem messy, but an action produces the same result each time. People are not like that. They are complex, and groups of people even more so. Cause and effect change constantly – pressing the equivalent of that A/C button on a complex human has one effect today and a different effect tomorrow.
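
The distinction can be sketched in a few lines of illustrative code: a complicated system is a pure function of its inputs, whereas a complex one depends on hidden, shifting context. The examples below are toys mirroring the A/C button and teammate scenarios above, nothing more.

```python
import random

def complicated_system(press_ac_button: bool) -> str:
    # Complicated: same cause, same effect, every single time.
    return "cold air" if press_ac_button else "no change"

def complex_system(ask_for_help: bool) -> str:
    # Complex: the same cause has different effects, because the outcome
    # depends on hidden, shifting context (mood, workload, sleep, ...).
    if not ask_for_help:
        return "no change"
    return random.choice(["they help you out", "they tell you to stick it"])

assert all(complicated_system(True) == "cold air" for _ in range(100))
print({complex_system(True) for _ in range(20)})  # usually both outcomes appear
```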

And that is why personality, psychometric, “strength” tests etc will never work in the way people desire them to. People don’t fit into boxes, and neither should we try to.


All models are wrong. Some are useful.

The problem comes when you apply a model to a complex problem on the assumption that it’s right.

“It ain’t what you don’t know that hurts you, it’s what you know for sure that just ain’t so.”

The people selling these systems either know this, in which case they’re selling snake oil, or they’re simply being optimistically gullible, looking for simple answers to complex problems. To be fair, we humans are almost infinitely susceptible to the seductive simplicity of personality archetypes, even more so when they’re about us. This is known as the Barnum effect: it’s possible to give everyone the same description, and people will nevertheless rate that description as very accurate.


[Image: the Barnum effect, via Sketchplanations – https://sketchplanations.com/the-barnum-effect]

Flawed evidence of personality test reliability

MBTI fails on both validity and reliability tests, as do most other personality and psychometric tools. Proponents (usually people selling them) are keen to point out reliability measures which show, within a margin of error, that the same person taking the same test at a different time often obtains a similar result. This only serves to highlight the problem, however: just as I would tell you my favourite colour is yellow if you asked me today, and would usually give the same answer a month later, it doesn’t follow that my favourite colour has anything to do with my personality, nor that my personality is stable over time. Equally, I may be lying. My favourite colour is actually blue.
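
The favourite-colour analogy can be made concrete with a toy simulation: a perfectly reliable (repeatable) measure that is nonetheless completely invalid as a measure of personality. The data below is invented purely for illustration.

```python
import random

# 100 imaginary people, each with a stable "favourite colour".
random.seed(1)
people = [random.choice(["yellow", "blue", "red"]) for _ in range(100)]

test_today = list(people)        # answers given today
test_next_month = list(people)   # same stable answers a month later

# Test-retest agreement is perfect...
agreement = sum(a == b for a, b in zip(test_today, test_next_month)) / len(people)
print(f"test-retest agreement: {agreement:.0%}")  # 100%

# ...yet the measure tells us nothing about personality: reliability
# (consistency) is necessary for validity, but nowhere near sufficient.
```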

Most of these systems assume dichotomies, or even force them: you are either X or Y; you cannot be both, and you cannot change from one to the other. This has been disproven too.

When I did a “Predictive Index” test, I was told that I was far from empathetic, because I was evidence- and data-driven. According to PI, someone cannot be both evidence-oriented and empathetic. Not only is this offensive, it’s completely unfounded. In fact, research shows that people with more rigorous, evidence-driven thinking skills are also better at understanding and managing emotions. These are simply not valid tests.

We should all be suspicious of algorithms that describe us or make decisions about us that are closed source, and psychometric tests are no different. Predictive Index have repeatedly declined to open source their algorithm, ostensibly to protect their intellectual property.

The key to the Big 5 model is its simplicity. It doesn’t sort anybody into a “type”; it just tells them where they fall on a continuum of personality traits. There are no tricks and no surprises to be revealed, and it’s not a black box. However, even though it’s the most trusted psychological profiling test in academia, the “Big 5” has been found to be systematically sexist: women are told they’re significantly more disagreeable than men who answer questions identically.

Criticisms of MBTI and the others extend even further, often due to a highly westernised, English-language, neurotypical approach.


Dangerous tools?

Evidence shows that, far from being a “short-cut” to more insightful leadership, tools such as these can be harmful – they may convince managers that they’re doing “good management”, and discourage further effort to improve management and leadership behaviours. At worst, they’re actively discriminatory and detrimental to individual and team performance, reducing the quality of human interactions and decreasing levels of psychological safety.

Conversely, I’ve actually found value in doing “Which Hogwarts house are you in?” or “Which Sex and the City character are you?” quizzes with teams. They’re obviously nonsense, but they facilitated good discussions with team members about preferences and styles – and they were much more fun than MBTI!

(In fact, those quizzes have an advantage over some of the “official” tests because they make no pretence of scientific accuracy.)

Finally, I’ve never come across a strongly competent leader who used personality testing and categorisation. It seems to me (and I’m conscious of my own biases here) that these tests can sometimes risk replacing empathy: a way to feel like you’re understanding people and “doing the work” without actually putting in the effort to do so.

Personally, given all the flaws and limitations of personality profiling, I hope organisations stop using them, and businesses stop trying to make money out of them.  We try not to use flawed tools to do finance, accounting, software development, design, or data analysis. Why is it acceptable to use flawed tools to understand and manage the most important thing in organisations – people? And why, when we realise that they’re flawed, are they so “sticky”? Why can’t we seem to get rid of them?

What do you think? Are they a useful tool, or a potentially dangerous over-simplification of human nature?


Read more: https://adamgrant.substack.com/p/mbti-if-you-want-me-back-you-need and https://www.psychologytoday.com/gb/blog/give-and-take/201309/goodbye-to-mbti-the-fad-that-wont-die