Remote working: what have we learned from 2020?

Remote working improves productivity.

Even way back in 2014, evidence showed that remote working enables employees to be more productive and take fewer sick days, and saves money for the organisation. The cat is out of the bag: remote working works, and it has obvious benefits.

Source: Forbes Global Workplace Analytics 2020

More and more organisations are adopting remote-first or fully remote practices, such as Zapier:

“It’s a better way to work. It allows us to hire smart people no matter where in the world, and it gives those people hours back in their day to spend with friends and family. We save money on office space and all the hassles that comes with that. A lot of people are more productive in remote setting, though it does require some more discipline too.”

We know, through empirical studies and longitudinal evidence such as Google’s Project Aristotle, that colocation of teams is not a factor in driving performance. Remote teams perform as well as, if not better than, colocated teams, if provided with appropriate tools and leadership.

Teams that are already used to more flexible, lightweight or agile approaches adapt to a high-performing, fully remote model even more easily than traditional teams.

The opportunity to work remotely, more flexibly, and save on time spent commuting helps to improve the lives of people with caring, parenting or other commitments too. Whilst some parents are undoubtedly keen to get into the office and away from the distractions of home schooling, the ability to choose remote and more flexible work patterns is a game changer for some, and many are actually considering refusing to go back to the old ways.

What works for some, doesn’t work for others, and it will change for all of us over time, as our circumstances change. But having that choice is critical.

However, remote working is still (even now in 2020 with the effects of Covid and lockdowns) something that is “allowed” by an organisation and provided to the people that work there as a benefit.

Remote working is now an expectation.

What we are seeing now is that, for employees at least, particularly in technology, design, and other knowledge-economy roles, remote working is no longer a treat or a benefit. Just like holiday pay and lunch breaks, it’s an expectation.

Organisations that adopt and encourage remote working are able to recruit across a wider catchment area, unimpeded by geography, though still somewhat limited by timezones – because we also know that synchronous communication is important.

Remote work is also good for the economy, and for equality across geographies. Remote work is closing the wage gap between areas of the US and will likely have the same effect on the North-South divide in the UK. This means London firms can recruit top talent outside the South-East, and people in typically less affluent areas can find well-paying work without moving away.

But that view isn’t shared by many organisations.

However, whilst employees increasingly see remote working as an expectation rather than a benefit, many organisations still want to bring employees back into the office, where they can see them. The pressure comes from command-and-control managers, difficulties in onboarding, process-oriented HR teams, or simply the most dangerous phrase in the English language: because “we’ve always done it this way”.

Indeed, the managers of those organisations may see remote working as an exclusive benefit, or an opportunity to slack off. The Taylorist approach to management is still going strong, it appears.

People are adopting remote faster than organisations.

In 1962, Everett Rogers came up with the principle he called “Diffusion of Innovations”.

It describes the adoption of new ideas and products over time as a bell curve, and categorises groups of people along its length as innovators, early adopters, early majority, late majority, and laggards. Spawned in the days of rapidly advancing agricultural technology, it was easy (and interesting) to study the adoption of new technologies such as hybrid seeds, equipment and methods.


Some organisations are even suggesting that remote workers could be paid less, since they no longer pay for their commute (in terms of costs and in time), but I believe the converse may become true – that firms who request regular attendance at the office will need to pay more to make up for it. As an employee, how much do you value your free time?

It seems that many people are further along Rogers’ adoption curve than the organisations they work for.

There are benefits of being in the office.

Of course, it’s important to recognise that there are benefits of being colocated in an office environment. Some types of work simply don’t suit being done remotely. Some people don’t have a suitable home environment to work from. Sometimes people need to work on a physical product or collaborate and use tools and equipment in person. Much of the time, people just want to be in the same room as their colleagues – what Tom Cheesewright calls “The unbeatable bandwidth of being there.”

But is that benefit worth the cost? An average commute is 59 minutes each way, which totals nearly 40 hours per month, per employee. For a team of twenty people, is 800 hours per month worth the benefit of being colocated? What would you pay to obtain an extra 800 hours of time for your team in a single month?
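The arithmetic behind those figures is worth making explicit. A quick sketch, assuming a 59-minute one-way commute and roughly twenty working days a month:

```python
# Back-of-the-envelope check of the commuting figures above.
# Assumptions: 59-minute one-way commute, ~20 working days per month.
one_way_minutes = 59
working_days = 20
team_size = 20

hours_per_person = one_way_minutes * 2 * working_days / 60
print(round(hours_per_person, 1))           # 39.3 -- "nearly 40 hours"

print(round(hours_per_person * team_size))  # 787 -- roughly 800 hours
```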

The question is one of motivation: are we empowering our team members to choose where they want to work and how they best provide value, or are we to revert to the Taylorist principles where “the manager knows best”? In Taylor’s words: “All we want of them is to obey the orders we give them, do what we say, and do it quick.”

We must use this as a learning opportunity.

Whilst 2020 has been a massive challenge for all of us, it’s also taught us a great deal, about change, about people and about the future of work. The worst thing that companies can do is ignore what they have learned about their workforce and how they like to operate. We must not mindlessly drift back to the old ways.

We know that remote working is more productive, but there are many shades of remoteness, and it takes strong leadership, management effort, good tools, and effective, high-cadence communication to really do it well.

There is no need for a binary choice: there is no one-size-fits-all for office-based or remote work. There are infinite operating models available to us, and the best we can do to prepare for the future of work is simply to be endlessly adaptable.

Root Cause Analysis using Rothman’s Causal Pies


It sometimes seems to me that in the tech industry, by dint of playing with new technologies and working in innovation (when we’re not trying to pay down tech debt and keep legacy systems running), we’re sometimes guilty of not looking outside our world for better practices and new (or even old) ideas.

This week, in my studies for my Master’s degree in Global Health, I discovered the concept of “Rothman’s Causal Pies”.

The Epidemiological Triad

In epidemiology, there is a concept known as the “Epidemiological Triad”, which describes the necessary relationship between agent, host, and environment. When all three are present, the disease can occur. Without one or more of those three factors, the disease cannot occur.

This concept is useful because through understanding this triad, it’s possible to intervene to reduce or eradicate a disease, such as changing the environment or vaccinating the host.

What the triad doesn’t provide, however, is a description of the various factors necessary for the disease to occur, and this is especially relevant to non-infectious disease, such as back pain, coronary heart disease, or a mental health problem. In these cases, there may be many different components, or causal factors. Some of these may be “necessary”, whilst others merely contribute.

To use heart disease as an example, the component causes, or “risk factors” could include poor diet, little or no exercise, genetic predisposition, smoking, alcohol, and many more. No single component may be sufficient to cause the disease, and one (genetic predisposition, for example) may be necessary in all cases.

Rothman, in 1976, came up with a model that demonstrates the multifactorial nature of causation.

Rothman’s Causal Pies

An individual factor that contributes to cause disease is shown as a piece of a pie. After all the pieces of a pie fall into place, the pie is complete, and disease occurs. The individual factors are called component causes. The complete pie, which might be considered a causal pathway, is called a sufficient cause. A disease may have more than one sufficient cause, with each sufficient cause being composed of several component causes that may or may not overlap. A component that appears in every pie or pathway is called a necessary cause, because without it, disease does not occur. In Rothman’s classic diagram, component cause A is a necessary cause because it appears in every pie.

Root Cause Analysis

I’m a huge proponent of holding regular retrospectives (for incidents, failures, successes, and simply at regular intervals), but it seems that in technology, particularly when we’re carrying out a Root Cause Analysis due to an incident, there’s a tendency to assume one single “root cause” – the smoking gun that caused the problem.

The Five Whys model is a great example of this: it fails to probe into other component factors, and only looks for a single root cause.

In reality, we’re dealing with complex systems and with human interactions. There exist multiple causal factors, some necessary for the incident to have occurred, and some simply component causes that together become sufficient – the completed pie!

There is usually more than one causal pie.

If we adopt Rothman’s causal pie model instead, it provides us with a tool that can model not only “what caused this incident?”, but “what factors, if present, could cause this incident to occur again?”.

In order to prevent the incident (the disease, in epidemiological terms), the key factor we’re looking for is the “necessary cause” – component A in Rothman’s diagram. But we’re also looking for the other component causes.

Prevention of future incidents.

Suppose we can’t easily solve component A – maybe it’s a third-party system that’s outside our control – but we can control components B and C, which occur in every causal pie. If we control for those instead, no pie can complete, and we don’t need to worry about component A at all.
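To make this concrete, the pies can be sketched as plain sets of component causes. This is a minimal illustration with made-up components A–E, not a real incident analysis:

```python
# Minimal sketch: each sufficient cause ("pie") is a set of component
# causes. Components A-E here are purely hypothetical.
pies = [
    {"A", "B", "C"},        # sufficient cause 1
    {"A", "B", "C", "D"},   # sufficient cause 2
    {"A", "B", "C", "E"},   # sufficient cause 3
]

# A necessary cause is one that appears in every pie.
necessary = set.intersection(*pies)
print(sorted(necessary))  # ['A', 'B', 'C']

def incident_possible(pies, controlled):
    """The incident can still occur if at least one pie has none of
    its components controlled (i.e. that pie can still complete)."""
    return any(not (pie & controlled) for pie in pies)

# Controlling B (or C) blocks every pie, even though A is untouched:
print(incident_possible(pies, {"B"}))  # False
# Controlling only D leaves pies 1 and 3 able to complete:
print(incident_possible(pies, {"D"}))  # True
```

The same structure makes it easy to ask “what is the smallest set of components we can control that blocks every pie?” – which is often a more useful retrospective output than a single root cause.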

Next time you’re carrying out a Root Cause Analysis, or incident retrospective, try using Rothman’s Causal Pies, and let me know how it goes.




Simpson’s Paradox and the Ecological Fallacy in Data Science


I’m currently studying for a Master’s Degree in Global Health at The University of Manchester, and I’m absolutely loving it. Right now, we’re studying epidemiology and study design, which also involves a great deal of statistical analysis.

Some data was presented to us from an ecological study (a type of scientific study that looks at large-scale, population-level data) called The WHO MONICA Project, which showed mean cholesterol vs mean height, grouped by population centre (e.g. China-Beijing or UK-Glasgow).

Plotted this way, you can see a positive correlation between height and cholesterol, with a coefficient of 0.36, suggesting that height may be a potential risk factor for higher cholesterol.

However, when the analysis was re-run using raw data (not averaged for each of the population centres), the correlation coefficient was -0.11.

So, when using mean measures of each population centre, it appears that height could be a risk factor for higher cholesterol, whilst the raw data actually shows the opposite is slightly more likely to be true!

This is known as an “ecological fallacy” – because it takes population level data and makes erroneous assumptions about individual effects.
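The aggregation effect is easy to reproduce. Below is a sketch with entirely synthetic numbers (not the MONICA data): within each “population centre” the correlation is negative, yet the correlation of the centre means is strongly positive:

```python
# Illustrative sketch of the ecological fallacy with synthetic data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Three "population centres": within each, y falls as x rises.
groups = [
    ([0, 1, 2], [5, 3, 1]),
    ([1, 2, 3], [6, 4, 2]),
    ([2, 3, 4], [7, 5, 3]),
]

# Correlation of the group means: perfectly positive.
mean_x = [sum(xs) / len(xs) for xs, _ in groups]
mean_y = [sum(ys) / len(ys) for _, ys in groups]
print(round(pearson(mean_x, mean_y), 2))   # 1.0

# Correlation of the pooled raw data: negative.
raw_x = [x for xs, _ in groups for x in xs]
raw_y = [y for _, ys in groups for y in ys]
print(round(pearson(raw_x, raw_y), 2))     # -0.32
```

Inferring an individual-level relationship from the first number, when the second is what actually holds for individuals, is exactly the trap.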

This is a great example of Simpson’s Paradox.

Simpson’s paradox occurs when a trend appears in several different groups of data but disappears or reverses when the groups are combined.

Table 1 in Wang (2018) is a relatively easy example. (This is fictional test score data for two schools.)

(Also, please ignore for a moment the author’s possible bias in scoring male students higher – maybe this is a test about ability to grow facial hair.)

[Table 1 from Wang (2018): average test scores for the Alpha (1) and Beta (2) schools, broken down by pupil gender.]
It’s clear if you look at the numbers that the Beta school has higher average scores (85 and 81 for male students and female students respectively).

However, if you calculate the averaged scores for individuals in the schools, Alpha school has an average score of 83.8 and Beta has just 81.8.

So whilst Beta school *looks* like the higher-performing school when broken down by gender, it is actually Alpha school that has the higher average score.

In this case, it’s quite clear why: if you only look at the average scores by gender, it’s easy to assume that the proportion of male and female pupils for each school is roughly the same, when in fact 80 pupils at Alpha school are male (and 20 female), but only 20 are male at the Beta school, with 80 female.
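Using only the figures quoted above (Beta’s per-gender scores of 85 and 81, with 20 male and 80 female pupils), the overall average falls straight out of the weighting, and flipping the proportions to Alpha’s 80/20 mix shows how the ordering can reverse:

```python
# Weighted averages reproduce the schools example using the numbers
# quoted in the text: Beta scores 85 (male, 20 pupils) and 81 (female,
# 80 pupils).

def weighted_mean(scores_and_counts):
    """Overall average from (score, pupil_count) pairs."""
    total = sum(score * n for score, n in scores_and_counts)
    pupils = sum(n for _, n in scores_and_counts)
    return total / pupils

beta = [(85, 20), (81, 80)]
print(weighted_mean(beta))     # 81.8 -- Beta's overall average

# The same per-gender scores under Alpha's 80/20 male/female mix would
# average 84.2: the gender mix alone is enough to flip the ordering.
flipped = [(85, 80), (81, 20)]
print(weighted_mean(flipped))  # 84.2
```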

Using gender to segment the data hides this disproportion of gender between the schools. This may be appropriate to show in some cases, but can lead to false assumptions being made.

The same issue can be seen in Covid-19 Case Fatality Rate (CFR) data when comparing Italy and China. von Kügelgen et al. (2020) found that CFRs were lower in Italy for every age group, but higher overall (see table (a) in the paper).

The reason, when you see table (b), is clear. The CFR for the 70-79 and 80+ groups are far higher than for all other age groups, and these age groups are significantly over-represented in Italy’s confirmed cases of Covid-19. This means that Italy’s overall CFR is higher than China’s only by dint of recording a “much higher proportion of confirmed cases in older patients compared to China.” China simply didn’t report as many Covid-19 cases in older individuals, and the fatality rate is far higher in older individuals. Italy has a more elderly population (median age of 45.4 opposed to China’s 38.4), but other factors such as testing strategies and social dynamics may also be playing a part.

Another example of Simpson’s Paradox is in gender bias among graduate admissions to the University of California, Berkeley, where the paradox worked in reverse. In 1973, the admission figures appeared to show that men were more likely to be admitted than women, and the difference was significant enough that it was unlikely to be due to chance alone. However, the data for the individual departments showed a “small but statistically significant bias in favour of women” (Bickel et al, 1975). Bickel et al’s conclusion was that women were applying to more competitive departments such as English, whilst men were applying to departments such as engineering and chemistry, which typically had higher rates of admission.

(Whether this still constitutes bias is the subject of a different debate.)

The crux of Simpson’s Paradox is this: if you pool data without regard to the underlying causality, you can get the wrong results.


Wang, B. (2018) “Simpson’s Paradox: Examples”, Shanghai Archives of Psychiatry, 30(2), p. 139. Available at: (Accessed: 21 October 2020).

von Kügelgen, J., Gresele, L. and Schölkopf, B. (2020) “Simpson’s paradox in Covid-19 case fatality rates: a mediation analysis of age-related causal effects”. Available at: (Accessed: 21 October 2020).

Bickel, P.J., Hammel, E.A. and O’Connell, J.W. (1975) “Sex Bias in Graduate Admissions: Data From Berkeley”, Science, 187(4175), pp. 398–404. doi:10.1126/science.187.4175.398.

WHO MONICA Project Principal Investigators (1988) “The world health organization monica project (monitoring trends and determinants in cardiovascular disease): A major international collaboration” Journal of Clinical Epidemiology 41(2) 105-114. DOI: 10.1016/0895-4356(88)90084-4