Avoiding Layers of Misdirection

Posted by Jean-Nicolas Viens on Thursday, May 29, 2014 at 09:53

Have mocks led you to add frustrating layers of indirection? You are not alone. This argument is part of the ongoing debate between DHH (David Heinemeier Hansson), Martin Fowler and Kent Beck.

All three of them seem to be in agreement on one thing: mocks should be avoided. This is what I think brings you from layers of indirection to layers of misdirection.

Skippable summary

You may skip this section, even if you are not familiar with the topic. Refer to these links to continue the discussion!

Here are the main parts (I think), in order:

  • Is TDD Dead? David's blog post following his railsconf keynote. The subsequent articles are interesting as well.
  • Their first, second and third hangout sessions.
  • Google, Twitter

These people have a lot more experience than I do, but I still don't see mocks the same way they do. Hopefully, this will make some sense. As for everything pertaining to this debate: what works for me might not work for you. But, it doesn't mean you should not try it.

This is obviously a vast debate with various angles of approach. If you want test isolation with proper mocking, Gary Bernhardt nailed it in my opinion. Read his post. If the test speed argument is one that bothers you, I suggest you read Corey Haines' article. If you are still unsure that integration tests can harm you when not done properly, I could not explain it better than J. B. Rainsberger in this video.

What if mocks had nothing to do with it

The angle I want to tackle is one that has confused me a lot over the years. Sorry, there will be no definite answer at the end of this article, just a suggestion for a further conversation. I hope you chip in!

Over the years, I have seen people hate TDD because they had to do mocks. Doing mocks forced them to create new classes. Many of them got confused in their own design after three or four of these classes.

This phenomenon is also seen when you read someone else's code. If it's all in one place, you can read it in one go. Adding layers of indirection makes it harder to follow. DHH puts it this way:

[If I can look at something that fits within 10 lines of clean code], to me that's endlessly easier to understand than sixty lines that has three additional layers of indirection that I have to comprehend first.

I agree with this statement, up until he says "that I have to comprehend first". You don't. The name of the class or method you are calling should be enough to tell you what is happening there. You know the what, not the how. To me, this is key to making complex business models work. I never add layers of indirection; I add layers of I-don't-care-ness.

But this is where I get confused. This makes sense to me, but most people seem to disagree. Everyone wants to understand every single layer first, but that is just impossible to do when you live in a world of complex business rules (as opposed to simple CRUD operations).

Maybe this is just a naming problem? Of course, if a method is poorly named, this will become a layer of misdirection. But I think there is more to it.

What if you just made it hard on yourself

Just let go.

Adding a layer, or simply a private method, allows me to understand a small portion of the algorithm faster than anything else. When you write a complex algorithm without tests, what do you do? Write a part of it, refresh, does it work? Yes, keep going. You iterate. Test the algorithm bit by bit. Adding layers allows you to do just that, in a TDD flow that makes sure you get every part of the algorithm right.

Here is a simple example. What does this do?

Receipt receipt = new Receipt(employeeNumber);
for (int i = 0; i < items.size(); i++) {
    receipt.addItem(items.get(i));
}
return receipt;

You can most likely figure it out, but this version should be faster to understand:

receipt = receiptFactory.create(employeeNumber);

What happened? Did I really remove code? Not at all. In fact, I would have added code. The number of lines of code pertaining to the business logic would be the same, but code to make the compiler happy would be added: class declarations, brackets, etc. It would even add one more file.

Is it still worth it? Hell yeah! I've now stopped caring about two things: how to create a receipt and how to add items to it. Do you need to queue the new receipt somewhere? Don't know, don't care. Do you need to add something between each item? Don't know, don't care. Our brain is very limited; asking less of it each time is good. You get more lines of code, but less code to care about for each step: your brain is happy.
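To make the extracted layer concrete, here is a minimal sketch of what the factory could look like. The names (Receipt, ReceiptFactory, addItem, itemCount) and the fact that the items are passed to create are my assumptions; the post does not show this code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical Receipt: only knows how to hold items for an employee.
class Receipt {
    private final int employeeNumber;
    private final List<String> items = new ArrayList<>();

    Receipt(int employeeNumber) {
        this.employeeNumber = employeeNumber;
    }

    void addItem(String item) {
        items.add(item);
    }

    int itemCount() {
        return items.size();
    }
}

// The extracted layer: callers stop caring how a receipt is assembled.
class ReceiptFactory {
    Receipt create(int employeeNumber, List<String> items) {
        Receipt receipt = new Receipt(employeeNumber);
        for (String item : items) {
            receipt.addItem(item);
        }
        return receipt;
    }
}

public class FactoryExample {
    public static void main(String[] args) {
        Receipt receipt = new ReceiptFactory()
                .create(42, Arrays.asList("coffee", "muffin"));
        System.out.println(receipt.itemCount()); // prints 2
    }
}
```

The calling code shrinks to one line, and everything about receipt assembly now lives behind a name you can choose to not care about.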

Tests get better too. I can now have a test that says "you should create a receipt and add items to it" (hint: mocks required!). Obviously, the factory will now have to be tested: is the receipt created properly? Same goes for the receipt itself. 
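Here is one way such a collaboration test could look. To keep the sketch dependency-free I hand-roll the mock; in real code a library like Mockito would generate it. All names here (CheckoutService, ReceiptFactory) are my assumptions, not from the post.

```java
// Hypothetical collaborator contract: the test only cares that the
// code under test asks the factory for a receipt, not how one is built.
interface ReceiptFactory {
    Receipt create(int employeeNumber);
}

class Receipt { }

// The code under test: it delegates receipt creation, so the test
// verifies the collaboration instead of receipt internals.
class CheckoutService {
    private final ReceiptFactory factory;

    CheckoutService(ReceiptFactory factory) {
        this.factory = factory;
    }

    Receipt checkout(int employeeNumber) {
        return factory.create(employeeNumber);
    }
}

// Hand-rolled mock: records the interaction so the test can assert on it.
class MockReceiptFactory implements ReceiptFactory {
    int createdFor = -1;

    public Receipt create(int employeeNumber) {
        createdFor = employeeNumber;
        return new Receipt();
    }
}

public class MockExample {
    public static void main(String[] args) {
        MockReceiptFactory factory = new MockReceiptFactory();
        new CheckoutService(factory).checkout(7);
        System.out.println(factory.createdFor); // prints 7
    }
}
```

The test reads exactly like the sentence above: "checkout should create a receipt for this employee". Whether the receipt is created properly is someone else's test.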

I added a layer of indirection, but the resulting algorithm is simpler, in each and every one of these classes. +1 for readability, because you know where to look if you want to change how a receipt is created. If you don't care, you don't even need to look for it. +1 for test documentation too: I now know the tests fully document these pieces of code.

This code is not tested!

The code above has not been tested. It looks like Java, but who cares.

If I were to do it in TDD, I would probably have come up with better names, better collaborators. I have found that the best way to figure out the proper use of layers is through (mockist) TDD.

If a mock is hard to reason about, it is wrong. An integration test will not put enough pressure on my design to let me know about it. The slice which is being tested will grow, and grow, and grow... until the "in-between" becomes unreadable. What do we do? We split it! We add a layer of I-don't-care-ness, or at least that's what we think we are doing. We are using the code immediately, so it looks easy enough to understand. Here is the catch: how do you know you made the right separation? Is it viable in the long term?

This is where I leave it up to you

What do you think? Do you have to understand each part before changing an algorithm? How does it add complexity?

I know the example is an oversimplification, but in my experience, it scales properly.

Am I the only one to imagine software in my head and see all the moving parts interacting with each other? "No, I am not crazy; my mother had me tested." - Sheldon Cooper
