How to Avoid Toxic Code Reviews

Chris Dwan
5 min read · Oct 26, 2018

I have experienced a lot of pain in code reviews over the years. I’ve had long arguments in them that were never resolved, and I’ve had simple changes blocked for ages because someone’s opinion couldn’t be swayed. I’ve also found it distressing when people just approve things despite obvious problems.

Honestly just want to work somewhere that code reviews aren’t toxic 💆‍♀️

Recently on LinkedIn, Lauren was complaining about toxic code reviews. I know they don’t have to be toxic. Ultimately, it is unresolved mismatches in expectations, direction, and values that make code reviews turn toxic.

I think we can start developing a deeper understanding of good code review by asking what is essential. I won’t cover everything here but I’ll start at the top.

  1. The person writing the code needs to expose their assumptions and intent.
  2. Reviewers need to focus on catching the most important classes of errors.

What’s Essential About Code Reviews?

You cannot overestimate the unimportance of practically everything.

— Greg McKeown

Why are code reviews important anyway?

They are a forcing function for the practice of critical thinking about code. This is an essential discipline for programmers, and one that is shockingly uncommon. We all think we’re practicing critical thinking about code, and often we are not. We can develop an illusion of doing it well if we fail to test our assumptions. Code review as a discipline forces a trial, a testing of those assumptions, and it creates an opportunity for growth.

They generate discussion. Teams of programmers become disconnected from each other and from what is happening in the code base. Every team I’ve been on starts to develop silos and blindness about the sections of the code its members didn’t write themselves. When we lose sight of what has gone before, we end up re-inventing every wheel. By participating in the discussion, we transmit knowledge, and that is a rising tide that lifts all boats.

They catch errors. You might argue with me on this one, since a lot of code reviews don’t catch any errors. If we look for the right things, they can catch important errors, but that is harder than we give it credit for, and there should be more of an onus on the person writing the code to expose their thinking and process so that errors are easier to catch.

Catching Errors in Code Review

Why do code reviews fail to catch errors? Let’s look at a hierarchy of the kinds of errors that could be present in some code:

  • A mistake in the starting assumptions
  • A mistake in the design of the solution
  • A mistake in the algorithms / implementation of the design
  • A mistake in the styling of the code

It is a hierarchy because the first kinds are far more problematic than the last, while the last are far easier to spot than the first. It quickly becomes apparent why code reviews are hard and sometimes completely ineffective: we spend a lot of time finding unimportant errors and little time catching the really important ones.

Imagine going to the doctor for a checkup and she says, “You really should try Nike runners. I find them to be a much better fit.” Meanwhile, you have a major problem with your blood lipids and are very likely to have a heart attack in the next few weeks.

Doc, you nailed it!

[Image: “man shooting ball on basketball hoop” by Matthew LeJune on Unsplash]

Is it any wonder that we get sore and angry when this happens to us over and over? Yet it is an easy trap to fall into. How do we find the more important errors?

It is Most Essential To Catch Errors in Assumptions

The most important class of errors to catch in code reviews is errors in the fundamental assumptions. If we have faulty assumptions about the problem we’re trying to solve, then even a perfect implementation of a perfect design is still 100% wrong.
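To make this concrete, here’s a contrived sketch (the scenario and names are mine, not from any real review): suppose the requirement is to report a “typical” response time, and the author quietly assumed “typical” means the arithmetic mean while the stakeholders meant the median. The code below has clean style and a correct implementation of the wrong assumption:

```python
def typical_response_time(samples_ms: list[float]) -> float:
    """Return the 'typical' response time in milliseconds.

    Buried assumption (the thing a good review should surface):
    'typical' here means the arithmetic mean. If stakeholders
    actually meant the median, this function is 100% wrong
    despite flawless style and a bug-free implementation.
    """
    if not samples_ms:
        raise ValueError("need at least one sample")
    return sum(samples_ms) / len(samples_ms)
```

With latencies like `[5, 6, 7, 400]`, the mean reports 104.5 ms while the median is 6.5 ms. A style-level review approves this code instantly; only a review that questions the assumption catches the real error.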

When I was studying philosophy, in every class someone would start an argument with the professor, trying to show how the philosopher was wrong. Every time, the professor shot them down.

It was impossible to find an error in the philosopher’s logic. It was perfectly encoded and well tested. The problems could only be found in the assumptions the philosopher began from. I found these recurring arguments annoying and toxic; the students just wanted to show the professor how smart they were. We can do the same thing in code review.

If you aim at the wrong target with perfect accuracy, then you’ll hit it, and most code reviewers will just yell “Nice SHOT!” and miss the fact that you just killed your neighbour’s dog.

How do we catch errors in assumptions? Many programmers make this impossible because they don’t tell you what their assumptions are. The first thing, then, is to make sure the assumptions are clear. This means the code review submitter has to share them.

How To Write a Pull Request That Can Be Reviewed

I’ve seen a lot of pull requests in my time with no description at all.

It’s really hard to review a pull request without a description. All I can do is evaluate the code, not the reasoning or assumptions behind it. I have to work really hard to uncover the intent, and I mostly get it wrong.

In order to even begin to question assumptions, there needs to be context in the pull request itself. Here’s how we do it: use the What, Why, How + Links technique for pull requests.

It is a minimalist technique for ensuring we have the context in our pull requests. We only really need What, Why, and How in our pull requests. It’s best to present them in that sequence so that our reviewer can digest them easily.

If you can follow this simple three-step process (What / Why / How) when writing your pull requests, then you will supercharge them and open them up to finding the really important errors.
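As a sketch of what this looks like in practice, a description written with the technique might read like the following (the feature, branch, and links here are entirely hypothetical, invented for illustration):

```markdown
## What
Add retry with exponential backoff to the payment client.

## Why
We assume the payment gateway fails transiently under load.
If that assumption is wrong, retries won't help; please challenge it.

## How
Wrapped the HTTP call in a retry loop (3 attempts, delay doubling from 100ms).
Considered a circuit breaker instead, but judged it overkill for now.

## Links
- Ticket: (link)
- Incident report: (link)
```

Notice that the Why section states the underlying assumption outright and invites the reviewer to attack it, which is exactly where the most important errors live.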

If we start by agreeing that we’re all here to catch the most important errors then we will have made huge strides away from toxic code reviews. Code reviews based on mutual cooperation and that go after essential problems will easily steer clear of the toxic zone. I will break this down more in future posts.


Chris Dwan

Crafting software and teams for 20 years. Committed to whole people belonging in whole teams as part of whole organizations. Also writes stuff sometimes.