How Much Testing is the Right Amount of Testing for a Ruby on Rails App?

Early-career Rails developers ask seasoned developers one question again and again: how much testing is the right amount of testing?

Every experienced Rails developer has an opinion about how much testing is the “right” amount.

Sandi Metz says in 99 Bottles, “… sheepishly apologize for their code”

Martin Fowler says ….

Kent C. Dodds discusses the “trophy” model of testing (which I really like as a teaching model, though it is more geared toward JavaScript ecosystems with either siloed teams or serverless architectures). He says, of JavaScript testing: “It’s about writing just enough tests—the right tests. It’s not about reaching for 100% coverage.”

In the Node (JavaScript) world, the community has largely switched over to TypeScript, which adds a layer of static typing and linting to your JavaScript. That layer mostly doesn’t exist for Ruby developers, but it is a useful comparison.

The other night at the NYC.rb meetup, we were discussing SimpleCov, the code coverage tool used by nearly every Ruby team. Trace Wax mentioned that lately some people in the community have been “spitballing” the number 70%.

Many senior Rails developers will say things like: “Well, when you install Rails, it comes with the default Action Cable files, which will show as untested.”

All of these things are true. Before we discuss the “magic number,” whether it is 70% or 100%, it’s important to remember that SimpleCov has a feature to exclude any files you want from coverage: the add_filter feature, and you are at liberty to use it.

When you exclude a file from coverage, you are simply redefining what “100%” means. It’s just math, people.
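It really is just arithmetic. Here is the effect in plain Ruby, with line counts made up purely for illustration:

```ruby
# Coverage is covered lines divided by total measured lines.
covered_lines = 700
app_lines     = 1_000
vendor_lines  = 300 # untested vendored code, none of it covered

# With /vendor included in the denominator:
with_vendor = covered_lines.to_f / (app_lines + vendor_lines)

# Exclude /vendor, and the exact same test suite reports a higher number:
without_vendor = covered_lines.to_f / app_lines

puts format("with vendor: %.1f%%, without: %.1f%%",
            with_vendor * 100, without_vendor * 100)
```

Same tests, same code, two different percentages. The only honest move is to be explicit about what you excluded and why.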

Don’t be fooled by statisticians who lie with numbers. To say, “Well, we’re going to exclude the vendor directory from coverage” is simply to shrink the denominator. That’s fine, as long as you do it knowingly.

Here’s my default SimpleCov config. It goes at the top of rails_helper.rb and runs only when the suite is invoked with COVERAGE=on:

if ENV['COVERAGE'] == 'on'
  require 'simplecov'
  require 'simplecov-rcov'

  # Emit both the HTML and Rcov report formats in a single run
  class SimpleCov::Formatter::MergedFormatter
    def format(result)
      SimpleCov::Formatter::HTMLFormatter.new.format(result)
      SimpleCov::Formatter::RcovFormatter.new.format(result)
    end
  end

  SimpleCov.formatter = SimpleCov::Formatter::MergedFormatter
  SimpleCov.start 'rails' do
    add_filter "/vendor/"
  end
end

As you can see, I’ve simply excluded the /vendor folder.

Experienced Rails developers tend to waffle: well, it all depends. You should choose the amount of testing that’s right for you, based on where you actually get value. If you test too much, you might spend all your time testing, and the testing might not be as valuable as you think. Then again, anybody who writes no tests is a fool.

But new Rails developers keep asking: What’s the right amount of testing?

They kind of want a number, or a guide, or a yardstick. So here are the fully biased Jason Fleetwood-Boldt rules of testing Rails apps, written for early-career entrants seeking guidance and intended to amalgamate the best of the testing experts in the community.

First and foremost, it would be malpractice of me not to endorse my friend Sandi Metz’s book 99 Bottles right here. Most importantly, Sandi discusses how the process of good testing is also the process of good refactoring.

This essential point is probably one that needs extra underscoring. Indeed, many early-career Rails devs think “How do I write a test around this?” when they should be thinking “How do I write this code so that it is testable?”

When you invert the problem, you begin to see that the tests go hand-in-hand with a living organism: The Rails app itself. They allow the organism to grow and thrive.

The other good book I will plug (although a little less accessible to Rails devs) is James Coplien’s Lean Architecture.

Rule #1: If the code has business logic in it, you have no good excuse not to test it.

Rule #2: If rule #1 doesn’t apply, you might have code that has no business logic (Rails defaults, setups, initializers, empty shell objects, empty Singleton objects, etc). You can exclude those from code coverage. It’s fine.

Now we get to the paradox rules that are somewhat more specific to Rails:

Rule #3: Unit test your models and business objects. By keeping your objects to a single responsibility, your unit tests stay small and focused, and your code becomes naturally more testable. Unit tests are easy and cheap to write.
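To make that concrete, here is a sketch of a hypothetical business object (the class name and fee rules are invented for illustration). Because it is a plain Ruby object with a single responsibility, testing it requires no Rails boot at all:

```ruby
# A hypothetical business object with one job: computing a shipping fee.
# All amounts are in cents.
class ShippingCalculator
  FREE_SHIPPING_THRESHOLD = 100_00
  FLAT_RATE               = 5_00

  def initialize(order_total_cents)
    @order_total_cents = order_total_cents
  end

  # Orders at or above the threshold ship free; everything else pays flat rate
  def fee_cents
    @order_total_cents >= FREE_SHIPPING_THRESHOLD ? 0 : FLAT_RATE
  end
end
```

An RSpec unit spec for this is one line per case, e.g. `expect(ShippingCalculator.new(120_00).fee_cents).to eq(0)`, and the whole file runs in milliseconds.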

(If you wonder whether you are over-writing unit tests, your code probably has abstractions living in the wrong layer; pull the logic out into a layer that can be tested as its own abstraction.)

Rule #4: Your whole workflow should be end-to-end tested (also known as “system specs”). Capybara is (still) very tedious to work with and is worth spending several days getting to know in depth. In particular, mastering the fine art of timing and waiting is the name of the game. Once you master it, it will all make sense. But before you do, you will want to jump out a window.

What should you end-to-end test? Always test your happy paths. That means what a user does for a normal workflow. Then maybe test a couple of other paths. That’s pretty much it.
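As a sketch, a happy-path system spec for a hypothetical sign-up flow might look like this (the path helper, labels, and welcome text are all assumptions, not from any real app):

```ruby
# spec/system/signup_spec.rb
require "rails_helper"

RSpec.describe "Sign up", type: :system do
  it "lets a visitor create an account" do
    visit new_user_registration_path

    fill_in "Email",    with: "person@example.com"
    fill_in "Password", with: "s3cretpass"
    click_button "Sign up"

    # have_content retries until Capybara.default_max_wait_time elapses,
    # which is how you avoid hand-rolled sleeps and timing flakiness.
    expect(page).to have_content("Welcome")
  end
end
```

Note that the waiting behavior lives in Capybara’s matchers; leaning on them, rather than on sleep calls, is most of what “mastering timing” comes down to.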

If you have a complicated workflow or a game, for example, you want to write end-to-end tests that cover your JavaScript, your interactions, your Ruby, and the database layer. However, you don’t need to write Capybara tests that play out every possible game play (in a complex game, that would be prohibitive). With Capybara, you are testing the mechanisms of the web app, not the rules of the game.

Ideally, the rules of the game are abstracted away from the mechanism of game play (interaction elements), so that makes them testable with unit tests.
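For example, here is a sketch of game rules extracted into a plain Ruby object (tic-tac-toe win detection, invented for illustration) that can be unit tested exhaustively with no browser involved:

```ruby
# Game rules as a plain Ruby object, separate from any Capybara-driven UI.
class TicTacToeRules
  LINES = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8], # rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8], # columns
    [0, 4, 8], [2, 4, 6]             # diagonals
  ].freeze

  # board is a 9-element array of "X", "O", or nil.
  # Returns the winning mark, or nil if nobody has won.
  def winner(board)
    LINES.each do |a, b, c|
      mark = board[a]
      return mark if mark && board[b] == mark && board[c] == mark
    end
    nil
  end
end
```

One Capybara spec proves the board renders and clicks register; hundreds of cheap unit tests against TicTacToeRules prove the game is actually correct.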

I hope that last explanation is the one that resonates. To recap, it’s not that we try to fit our testing around our code architecture, it’s that we want to move our logic either into our business domain or abstract our interface/interaction into a layer that can be tested on its own.

Rule #5: If you think you need to test business logic in your controllers, you probably have too much business logic in your controller.

Testing the controller is the most controversial: a long time ago, we tested controllers directly with controller specs. With end-to-end tests and an active frontend codebase, you don’t need to test your controllers explicitly, because the E2E tests exercise them. However, if you have an API app with, for example, a React Native frontend client, then you may not have E2E tests at all.

In the absence of E2E tests, falling back to controller testing is the way to go. However, you should test only that the controller receives the parameters it expects and returns what it is expected to return. You should still avoid overloaded controllers, and you still should not put business logic in your controllers.
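As a sketch, a request spec along those lines for a hypothetical JSON API endpoint (the route and response shape are assumptions) might look like:

```ruby
# spec/requests/api/orders_spec.rb
require "rails_helper"

RSpec.describe "Orders API", type: :request do
  it "accepts the expected params and returns the expected shape" do
    post "/api/orders", params: { order: { sku: "ABC-1", quantity: 2 } }

    # Assert only the contract: status code and response shape,
    # not the business logic, which belongs in unit tests.
    expect(response).to have_http_status(:created)
    body = JSON.parse(response.body)
    expect(body).to include("id", "sku", "quantity")
  end
end
```

The discipline here is the same as Rule #5: the spec pins down the contract with the client, while the interesting logic stays in unit-tested objects.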

If you have no E2E tests and no other choice, then you will need to test your controllers (and even any dirty business logic in them).