Resilience Engineering and DevOps – A Deeper Dive


[This is a work in progress. If you spot an error, or would like to contribute, please get in touch]

The term “Resilience Engineering” is appearing more frequently in the DevOps and technology world, and there is some debate about what it really means. Resilience Engineering is a field in its own right. There is even a Resilience Engineering Association.

It addresses complexity, non-linearity, inter-dependencies, emergence, formal and informal social structures, threats and opportunities. A common refrain in the field of resilience engineering is “there is no root cause”, and blaming incidents on “human error” is also highly frowned upon, as Sidney Dekker explains so eloquently in “The Field Guide To Understanding Human Error”.

Resilience, per Prof. Erik Hollnagel, is “the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions.”

It is the “sustained adaptive capacity” of a system, organisation, or community.

Resilience engineering has the word “engineering” in it, which typically makes us think of machines, structures, or code, and this is maybe a little misleading. Instead, try to think of engineering as the process of response, creation, and change.

Systems

Resilience Engineering also refers to “systems”, which might also lead you down a certain mental path of mechanical or digital systems. Widen your concept of systems from software and machines, to organisations, societies, ecosystems, even solar systems. They’re all systems in the broader sense.

Resilience engineering refers in particular to complex systems, and typically, complex systems involve people. Human beings like you and me (I don’t wish to be presumptuous, but I’m assuming that you’re a human reading this).

Consider Dave Snowden’s Cynefin framework:

[Figure: the Cynefin framework]

Obvious systems are fairly easy to deal with. There are no unknowns – they’re fixed and repeatable in nature, and the same process achieves the same result each time, so that we humans can use things like Standard Operating Procedures to work with them.

Complicated systems are large – usually too large for us humans to hold in our heads in their entirety – but are finite and have fixed rules. They possess known unknowns, by which we mean that you can find the answer if you know where to look. A modern motorcar or a game of chess is complicated, but possesses fixed rules that do not change. With expertise and good practice, such as that employed by surgeons, engineers, or chess players, we can work with these systems.

Complex systems possess unknown unknowns, and include realms such as battlefields, ecosystems, organisations and teams, or humans themselves. The practice in complex systems is to probe, sense, and respond. Complex systems resist reductionist attempts at determining cause and effect because the rules are not fixed: the effects of changes can themselves change over time, and even the act of measuring or sensing in a complex system can affect the system. When working with complex systems, feedback loops that facilitate continuous learning about the changing system are crucial.

Chaotic systems are impossible to predict. Examples include emergency departments or crisis situations. There are no real rules to speak of, even ones that change. In these cases, acting first is necessary. Communication is rapid, and top-down or broadcast, because there is no time, or indeed any use, for debate.

Resilience

As Erik Hollnagel has said repeatedly since Resilience Engineering began (Hollnagel & Woods, 2006), resilience is about what a system can do — including its capacity

  • to anticipate — seeing developing signs of trouble ahead to begin to adapt early and reduce the risk of decompensation 
  • to synchronize — adjusting how different roles at different levels coordinate their activities to keep pace with the tempo of events and reduce the risk of working at cross purposes
  • to be ready to respond — developing deployable and mobilizable response capabilities in advance of surprises and reduce the risk of brittleness 
  • for proactive learning — learning about brittleness and sources of resilient performance before major collapses or accidents occur by studying how surprises are caught and resolved 

(From Resilience is a Verb by David D. Woods)

 

  • Anticipation – create foresight about future operating conditions; revise models of risk
  • Readiness to respond – maintain deployable reserve resources available to keep pace with demand
  • Synchronization – coordinate information flows and actions across the networked system
  • Proactive learning – search for brittleness, gaps in understanding, trade-offs, and re-prioritisations

Provan et al (2020) build upon Hollnagel’s four aspects of resilience to show that resilient people and organisations must possess a “readiness to respond”, and state: “This requires employees to have the psychological safety to apply their judgement without fear of repercussion.”

Resilience is therefore something that a system “does”, not “has”.

Systems comprise structures, technology, rules, inputs and outputs, and, most importantly, people.

“Resilience is about the creation and sustaining of various conditions that enable systems to adapt to unforeseen events. *People* are the adaptable element of those systems” – John Allspaw (@allspaw) of Adaptive Capacity Labs.

Resilience therefore is about “systems” adapting to unforeseen events, and the adaptability of people is fundamental to resilience engineering.

And if resilience is the potential to anticipate, respond, learn, and change, and people are part of the systems we’re talking about:

We need to talk about people: What makes people resilient?

Psychological safety

Psychological safety is the fundamental aspect of groups of people (whether that group is a team, organisation, community, or nation) that facilitates performance. It is the belief, within a group, “that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes.” – Edmondson, 1999.

Amy Edmondson also talks about the concept of a “learning organisation” – essentially a complex system operating in a vastly more complex, even chaotic, wider environment. In a learning organisation, employees continually create, acquire, and transfer knowledge, helping their company adapt to the unpredictable faster than rivals can (Garvin et al, 2008).

“A resilient organisation adapts effectively to surprise.” (Lorin Hochstein, Netflix)

In this sense, we can see that a “learning organisation” and a “resilient organisation” are fundamentally the same.

Learning, resilient organisations must possess psychological safety in order to respond to changes and threats. They must also have clear goals, vision, and processes and structures. According to Conway’s Law:

“Any organisation that designs a system (defined broadly) will produce a design whose structure is a copy of the organisation’s communication structure.”

In order for both the organisation and the systems it has built to respond quickly to change, the organisation must be structured in such a way that its response to change is as rapid as possible. In context, this will depend significantly on the organisation itself, but fundamentally, smaller, loosely-coupled, autonomous, expert teams will be able to respond to change faster than large, tightly-bound teams with low autonomy. Skelton and Pais’s Team Topologies explores this in much more depth.

Engineer the conditions for resilience engineering

“Before you can engineer resilience, you must engineer the conditions in which it is possible to engineer resilience.” – Rein Henrichs (@reinH)

As we’ve seen, an essential component of learning organisations is psychological safety. Psychological safety is a necessary (though not sufficient) condition for resilience to be created and sustained.

Therefore we must create psychological safety in our teams, our organisations, our human “systems”. Without this, we cannot engineer resilience. 

We create, build, and maintain psychological safety via three core behaviours:

  1. Framing work as a learning problem, not an execution problem. The primary outcome should be knowing how to do it even better next time.
  2. Acknowledging your own fallibility. You might be an expert, but you don’t know everything, and you get things wrong – if you admit it when you do, you allow others to do the same.
  3. Model curiosity by asking a lot of questions. This creates a need for voice: when you ask questions, people have to speak up.

Resilience engineering and psychological safety

Psychological safety enables these fundamental aspects of resilience – the sustained adaptive capacity of a team or organisation:

  • Taking risks and making changes that you don’t, or can’t, fully understand the outcomes of. 
  • Admitting when you made a mistake. 
  • Asking for help
  • Contributing new ideas
  • Detailed systemic cause* analysis (The ability to get detailed information about the “messy details” of work)

(*There is never a single root cause)

Let’s go back to that phrase at the start:

Sustained adaptive capacity.

What we’re trying to create is an organisation, a complex system, and sub systems (maybe including all that software we’re building) that possesses a capacity for sustained adaptation.

With DevOps we build systems that respond to demand and scale up and down; we implement redundancy and low dependency to allow for graceful failure, and we identify and react to security threats.

Pretty much all of these only contribute to robustness.

[Figure: robustness vs resilience – David Woods, Professor, Integrated Systems Engineering Faculty, Ohio State University]

You may want to think back to the Cynefin model, and think of robustness as being able to deal well with known unknowns (complicated systems), and resilience as being able to deal well with unknown unknowns (complex, even chaotic, systems). Technological or DevOps practices that primarily focus on systems, such as microservices, containerisation, autoscaling, or distribution of components, build robustness, not resilience.

However, if we are to build resilience, the sustained adaptive capacity for change, we can utilise DevOps practices for our benefit. None of them, like psychological safety, are sufficient on their own, but they are necessary. Using automation to reduce the cognitive load of people is important: by reducing the extraneous cognitive load, we maximise the germane, problem solving capability of people. The provision of other tools, internal platforms, automated testing pipelines, and increasing the observability of systems increases the ability of people and teams to respond to change, and increases their sustained adaptive capacity.

Observability

It is absolutely crucial to be able to observe what is happening inside the systems. This refers to anything from analysing system logs to identify errors or future problems, to managing Work In Progress (WIP) to highlight bottlenecks in a process.
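As an illustration of the simplest kind of internal observability, here is a minimal Python sketch (the log format and the function are invented for this example) that counts errors per minute, so that error spikes stand out before they become incidents:

```python
from collections import Counter

# Hypothetical log lines: "<ISO timestamp> <LEVEL> <message>"
def error_counts_per_minute(lines):
    """Count ERROR entries per minute so spikes stand out."""
    counts = Counter()
    for line in lines:
        timestamp, level, *_ = line.split()
        if level == "ERROR":
            minute = timestamp[:16]  # truncate to "YYYY-MM-DDTHH:MM"
            counts[minute] += 1
    return counts

logs = [
    "2020-11-17T10:04:31 ERROR payment timeout",
    "2020-11-17T10:04:45 INFO request ok",
    "2020-11-17T10:04:59 ERROR payment timeout",
    "2020-11-17T10:05:02 ERROR db connection refused",
]
print(error_counts_per_minute(logs))  # two errors in 10:04, one in 10:05
```

In practice this job belongs to a proper observability stack, but the principle is the same: aggregate signals over time so that people can see trouble developing and respond early.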

Too often, engineering and technology organisations look only inward, whilst many of the threats to systems are external to the system and the organisation. Observability must also concern external metrics and qualitative data: what is happening in the marketplace, the economy, and what are our competitors doing?

Resilience Engineering and DevOps

What must we do?

Create psychological safety – this means that people can ask for help, raise issues, highlight potential risks and “apply their judgement without fear of repercussion.”

Manage cognitive load – so people can focus on the real problems of value – such as responding to unanticipated events.

Apply DevOps practices to technology – use automation, internal platforms and observability, amongst other DevOps practices. 

Increase observability and monitoring – this applies to systems (internal) and the world (external). People and systems cannot respond to a threat if they don’t see it coming.

Embed practices and expertise in root cause analysis – whether you call it a post-mortem, retrospective or RCA, build the habits and expertise to routinely examine the component causes of failure.

Run “fire drills” and disaster exercises. Make it easier for humans to deal with emergencies and unexpected events by making it habit. Free up cognitive capacity for problem solving in emergencies.

Structure the organisation in a way that facilitates adaptation and change. Consider appropriate team topologies to facilitate adaptability.

In summary

Through facilitating learning, responding, monitoring, and anticipating threats, we can create resilient organisations. DevOps and psychological safety are two important components of resilience engineering.

 

References:

Conway, M. E. (1968) How Do Committees Invent? Datamation magazine. F. D. Thompson Publications, Inc. Available at: https://www.melconway.com/Home/Committees_Paper.html

Edmondson, A., 1999. Psychological safety and learning behavior in work teams. Administrative science quarterly, 44(2), pp.350-383.

Garvin, D.A., Edmondson, A.C. and Gino, F., 2008. Is yours a learning organization? Harvard Business Review, 86(3), pp.109–116.

Hochstein, L. (2019)  Resilience engineering: Where do I start? Available at: https://github.com/lorin/resilience-engineering/blob/master/intro.md (Accessed: 17 November 2020).

Hollnagel, E., Woods, D. D. & Leveson, N. C. (2006). Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate.

Hollnagel, E. Resilience Engineering (2020). Available at: https://erikhollnagel.com/ideas/resilience-engineering.html (Accessed: 17 November 2020).

Provan, D.J., Woods, D.D., Dekker, S.W. and Rae, A.J., 2020. Safety II professionals: how resilience engineering can transform safety practice. Reliability Engineering & System Safety, 195, p.106740. Available at https://www.sciencedirect.com/science/article/pii/S0951832018309864

Woods, D. D. (2018). Resilience is a verb. In Trump, B. D., Florin, M.-V., & Linkov, I.
(Eds.). IRGC resource guide on resilience (vol. 2): Domains of resilience for complex interconnected systems. Lausanne, CH: EPFL International Risk Governance Center. Available on irgc.epfl.ch and irgc.org.

Remote Working – What Have We Learned From 2020?

Remote working improves productivity.

Even way back in 2014, evidence showed that remote working enables employees to be more productive and take fewer sick days, and saves money for the organisation. The cat is out of the bag: remote working works, and it has obvious benefits.

Source: Forbes Global Workplace Analytics 2020

More and more organisations are adopting remote-first or fully remote practices, such as Zapier:

“It’s a better way to work. It allows us to hire smart people no matter where in the world, and it gives those people hours back in their day to spend with friends and family. We save money on office space and all the hassles that comes with that. A lot of people are more productive in remote setting, though it does require some more discipline too.”

We know, through empirical studies and longitudinal evidence such as Google’s Project Aristotle, that colocation of teams is not a factor in driving performance. Remote teams perform as well as, if not better than, colocated teams, if provided with appropriate tools and leadership.

Teams that are already used to more flexible, lightweight or agile approaches adapt to a high-performing and fully remote model even more easily than traditional teams.

The opportunity to work remotely, more flexibly, and save on time spent commuting helps to improve the lives of people with caring, parenting or other commitments too. Whilst some parents are undoubtedly keen to get into the office and away from the distractions of home schooling, the ability to choose remote and more flexible work patterns is a game changer for some, and many are actually considering refusing to go back to the old ways.

What works for some, doesn’t work for others, and it will change for all of us over time, as our circumstances change. But having that choice is critical.

However, remote working is still (even now, in 2020, with the effects of Covid and lockdowns) something that is “allowed” by an organisation and provided to the people who work there as a benefit.

Remote working is now an expectation.

What we are seeing now is that, for employees at least, particularly in technology, design, and other knowledge-economy roles, remote working is no longer a treat or a benefit. Just like holiday pay and lunch breaks, it’s an expectation.

Organisations that adopt and encourage remote working are able to recruit across a wider catchment area, unimpeded by geography, though still somewhat limited by timezones – because we also know that synchronous communication is important.

Remote work is also good for the economy, and for equality across geographies. Remote work is closing the wage gap between areas of the US and will likely have the same effect on the North-South divide in the UK. This means London firms can recruit top talent outside the South-East, and people in typically less affluent areas can find well paying work without moving away.

But that view isn’t shared by many organisations.

However, whilst employees increasingly see remote working as an expectation rather than a benefit, many organisations still want to bring employees back into the office, where they can see them – whether through pressure from command-and-control managers, difficulties in onboarding, process-oriented HR teams, or simply the most dangerous phrase in the English language: because “we’ve always done it this way”.

Indeed, the managers of an organisation may often see remote working as an exclusive benefit and an opportunity to slack off. The Taylorist approach to management is still going strong, it appears.

People are adopting remote faster than organisations.

In 1962, Everett Rogers came up with the principle he called “Diffusion of Innovations”.

It describes the adoption of new ideas and products over time as a bell curve, and categorises groups of people along its length as innovators, early adopters, early majority, late majority, and laggards. Spawned in the days of rapidly advancing agricultural technology, it was easy (and interesting) to study the adoption of new technologies such as hybrid seeds, equipment and methods.

 

Some organisations are even suggesting that remote workers could be paid less, since they no longer pay for their commute (in terms of costs and in time), but I believe the converse may become true – that firms who request regular attendance at the office will need to pay more to make up for it. As an employee, how much do you value your free time?

It seems that many people are further along Rogers’ adoption curve than the organisations they work for.

There are benefits of being in the office.

Of course, it’s important to recognise that there are benefits to being colocated in an office environment. Some types of work simply don’t suit remote working. Some people don’t have a suitable home environment to work from. Sometimes people need to work on a physical product, or collaborate and use tools and equipment in person. Much of the time, people just want to be in the same room as their colleagues – what Tom Cheesewright calls “The unbeatable bandwidth of being there.”

But is that benefit worth the cost? An average commute is 59 minutes, which totals nearly 40 hours per month, per employee. For a team of twenty people, is 800 hours per month worth the benefit of being colocated? What would you pay to obtain an extra 800 hours of time for your team in a single month?
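The arithmetic behind those figures can be checked directly. A quick sketch, assuming a 59-minute commute each way and roughly 20 working days per month (both assumptions, to match the figures quoted above):

```python
commute_minutes_each_way = 59
trips_per_day = 2
working_days_per_month = 20
team_size = 20

# Hours lost to commuting per employee per month
hours_per_employee = (
    commute_minutes_each_way * trips_per_day * working_days_per_month / 60
)
# Total for the whole team
team_hours = hours_per_employee * team_size

print(round(hours_per_employee))  # ~39 hours per employee
print(round(team_hours))          # ~787 hours for a team of twenty
```

Close to 40 hours per person and 800 hours per team, as claimed; the exact figures will obviously vary with commute length and working pattern.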

The question is one of motivation: are we empowering our team members to choose where they want to work and how they best provide value, or are we to revert to the Taylorist principles where “the manager knows best”? In Taylor’s words: “All we want of them is to obey the orders we give them, do what we say, and do it quick.”

We must use this as a learning opportunity.

Whilst 2020 has been a massive challenge for all of us, it’s also taught us a great deal, about change, about people and about the future of work. The worst thing that companies can do is ignore what they have learned about their workforce and how they like to operate. We must not mindlessly drift back to the old ways.

We know that remote working is more productive, but there are many shades of remoteness, and it takes strong leadership, management effort, good tools, and effective, high-cadence communication to really do it well.

There is no need for a binary choice: there is no one-size-fits-all for office-based or remote work. There are infinite operating models available to us, and the best we can do to prepare for the future of work is simply to be endlessly adaptable.

Root Cause Analysis using Rothmans Causal Pies

[Figure: Rothman’s causal pies]

It sometimes seems to me that in the tech industry – maybe because we’re often playing with new technologies and innovating in our organisations, or even our field, when we’re not trying to pay down tech debt and keep legacy systems running – we’re sometimes guilty of not looking outside our sphere for better practices and new (or even old) ideas.

Whilst studying for my Master’s degree in Global Health, I discovered the concept of “Rothman’s Causal Pies”.

The Epidemiological Triad

Epidemiology is the study of why and how diseases (including non-communicable diseases) occur. As a field, it encompasses the entire realm of human existence, from environmental and biological aspects to heuristics and even economics. It’s a real exercise in Systems Thinking, which is kinda why I love it.

In epidemiology, there is a concept known as the “epidemiological triad”, which describes the necessary relationship between agent, host, and environment. When all three are present, the disease can occur. Without one or more of those three factors, the disease cannot occur. It’s a very simplistic but useful model. As we know, all models are wrong, but some are useful.

This concept is useful because through understanding this triad, it’s possible to identify an intervention to reduce the incidence of, or even eradicate, a disease, such as by changing something in the environment (say, by providing clean drinking water) or a vaccination programme (changing something about the host).

What the triad doesn’t provide, however, is a description of the various factors necessary for the disease to occur, and this is especially relevant to non-infectious disease, such as back pain, coronary heart disease, or a mental health problem. In these cases, there may be many different components, or causal factors. Some of these may be “necessary”, whilst some may contribute. There may be many different combinations of causes that result in the disease.

To use heart disease as an example, the component causes, or “risk factors” could include poor diet, little or no exercise, genetic predisposition, smoking, alcohol, and many more. No single component is sufficient to cause the disease, and one (genetic predisposition, for example) may be necessary in all cases.

Rothman, in 1976, came up with a model that demonstrates the multifactorial nature of causation.

Rothman’s Causal Pies

An individual factor that contributes to cause disease is shown as a piece of a pie, like the triangles in the game Trivial Pursuit. After all the pieces of a pie fall into place, the pie is complete, and disease occurs.

The individual factors are called component causes. The complete pie, which is termed a causal pathway, is called a sufficient cause. A disease may have more than one sufficient cause, with each sufficient cause being composed of several component causes that may or may not overlap. A component that appears in every single pie or pathway is called a necessary cause, because without it, disease does not occur. An example of this is the role that genetic factors play in haemophilia in humans – haemophilia will not occur without a specific gene defect, but the gene defect is not believed to be sufficient in isolation to cause the disease.

An example: Note in the image below that component cause A is a necessary cause because it appears in every pie.
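The model translates naturally into sets: each pie (sufficient cause) is a set of component causes, and the necessary causes are whatever appears in the intersection of all the pies. A minimal sketch, with made-up component labels:

```python
# Each "pie" (sufficient cause) is a set of component causes.
pies = [
    {"A", "B", "C"},
    {"A", "D", "E"},
    {"A", "B", "F"},
]

# A necessary cause appears in every sufficient cause,
# i.e. in the intersection of all the pies.
necessary = set.intersection(*pies)
print(necessary)  # {'A'}
```

Here only component A is necessary: remove it and no pie can complete, so the disease (or incident) cannot occur.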

Root Cause Analysis

I’m a huge proponent of holding regular retrospectives (for incidents, failures, successes, and simply at regular intervals), but it seems that in technology, particularly when we’re carrying out a Root Cause Analysis due to an incident, there’s a tendency to assume one single “root cause” – the smoking gun that caused the problem.

We may tend towards assuming that once we’ve found this necessary cause, we’re finished. And whilst that’s certainly a useful exercise, it’s important to recognise that there are other component causes and there may be more than one sufficient cause.

The Five Whys model is a great example of this – it fails to probe into other component factors, and only looks for a single root cause. As any resilience engineer will tell you: there is no root cause.

The 5 whys takes the team down a single linear path, and will certainly find a root cause, but leaves the team blind to other potential component or sufficient causes – and even worse, it leads the team to believe that they’ve identified the problem. In the worst-case scenario, a team may identify “human error” as a root cause, which could re-affirm a faulty, overly-simplistic world view and result not only in the wrong cause being identified, but also in harm to the team’s ability to carry out RCAs in the future.

In reality, we’re dealing with complex, maybe even chaotic, systems, alongside human interactions. There exist multiple causal factors, some necessary for the “incident” to have occurred, and some simply component causes that together become sufficient – the completed pie!

Take Away: There is usually more than one causal pie.

An improved approach could be to use Ishikawa diagrams, but in my experience, when dealing with complex systems, these diagrams very quickly become visibly cluttered and complex, which makes them hard to use. Additionally, because each “fish bone” is treated as a separate pathway, interrelationships between causes may not be identified.

Instead of a complex fishbone diagram, try identifying all the component causes, and visually complete (on a whiteboard for example) all the pies that could (or did) result in the outcome. You almost certainly won’t identify all of them, but that doesn’t matter very much.

If we adopt Rothman’s causal pie model instead of approaches such as the 5 whys or Ishikawa, we gain an easy-to-use, easy-to-visualise tool that can model not only “what caused this incident?”, but “what factors, if present, could cause this incident to occur again?”.

In order to prevent the incident (the disease, in epidemiological terms), the key factor we’re looking for is the “necessary cause” – component A in the pies diagram. But we’re also looking for the other component causes.

Application: The prevention of future incidents.

Suppose we can’t easily solve component A – maybe it’s a third party system that’s outside our control – but we can control causal components B and C which occur in every causal pie. If we control for those instead, it’s clear that we don’t need to worry about component A anyway!
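That intervention logic can be sketched as a simple check: a set of components we can control prevents the incident if every pie contains at least one of them, because removing any single component leaves that pie incomplete. (The pies and labels below are invented for illustration; the function name is mine.)

```python
# Each sufficient cause ("pie") needs all of its components present.
pies = [
    {"A", "B", "C"},
    {"A", "B", "D"},
    {"A", "C", "E"},
]

def prevented(pies, controlled):
    """The incident is prevented if every pie loses at least one component."""
    return all(pie & controlled for pie in pies)

print(prevented(pies, {"A"}))       # True: A is necessary, so blocking it blocks every pie
print(prevented(pies, {"B", "C"}))  # True: every pie contains B or C
print(prevented(pies, {"D"}))       # False: two pies survive without D
```

So even when the necessary cause is out of our control, controlling a combination of component causes that touches every pie achieves the same preventive effect.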

Next time you’re carrying out a Root Cause Analysis or retrospective, try using Rothman’s Causal Pies, and please let me know how it goes.

Addendum: “Post-Mortem” exercises.

Even though the term “post-mortem” is ubiquitously used in the technology industry as a descriptor for analysis into root causes, I don’t like it.

Firstly, in the vast majority of tech incidents, nobody died – post-mortem literally means “after death”. It implies that a Very Bad Thing happened, but if we’re trying to hold constructive, open exercises where everyone present possesses enough psychological safety in order to contribute honestly and without fear, we should phrase the exercise in less morbid terms. The incident has already happened – we should treat it as a learning opportunity, not a punitive sounding exercise.

Secondly, we should run these root cause analysis exercises for successes, not just for failures. You don’t learn the secrets of a great marriage by studying divorce. The term “post-mortem” isn’t particularly appropriate for studying the root causes of successes.