Testing object allocations with RSpec

How to test object allocations with RSpec. Learn how to optimize your code and make sure it stays optimized.

Everyone is talking about Ruby performance lately, and with good reason. It turns out that with some smallish tweaks to your code it's possible to increase performance by up to 99.9%.

There are plenty of articles out there on how to optimize your code, but how can you make sure your code remains optimized?

It's easy to overlook the consequences of embedding a string literal rather than a frozen constant in a frequently called method, and it's just as easy to lose the savings of your optimizations when maintaining the code later.
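
For instance, here's a minimal sketch of the difference (the method and constant names are just for illustration):

# Allocates a new String object on every call to the method:
def separator
  ', '
end

# Reuses a single frozen String object across calls:
SEPARATOR = ', '.freeze

def separator
  SEPARATOR
end

(In newer Rubies, the # frozen_string_literal: true magic comment gives bare string literals a similar treatment.)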

These were my thoughts recently as I optimized some code for the second (or third) time in our Ruby gem at Honeybadger: "wouldn't it be great if there were a way to ensure that these optimizations don't regress?"

Regressions are something most of us are familiar with in software development, even if not by name. A regression happens when a bug or an issue which was resolved in the past reoccurs due to a future change to the same code. Nobody likes to do the same work more than once; regressions are like tracking dirt on the floor right after it's been swept.

Luckily, we have a secret weapon: tests. Whether you practice dogmatic TDD or not, tests are awesome for fixing bugs because they demonstrate the issue and the solution programmatically. Tests give us confidence that regressions won't happen when changes do.

Sound familiar? I thought so too, which made me wonder, "if performance optimizations can regress, why can't I catch those regressions with tests, too?"

There are a lot of great tools for profiling various aspects of Ruby performance, including object allocations, memory, CPU, and garbage collection; a few of them are ruby-prof, stackprof, and allocation_tracer.

I've recently been using allocation_stats to profile object allocations. Reducing allocations is a fairly easy optimization to make, and it yields a lot of low-hanging fruit for tuning memory consumption and speed.

For example, here's a basic Ruby class which stores an Array of 5 strings that default to 'foo':

class MyClass
  def initialize
    @values = Array.new(5)
    5.times { |i| @values[i] = 'foo' }
  end
end

The AllocationStats API is simple: give it a block to profile, and it records where objects are allocated so you can report on the results.

$ ruby -r allocation_stats -r ./lib/my_class
stats = AllocationStats.trace { MyClass.new } 
puts stats.allocations(alias_paths: true).group_by(:sourcefile, :sourceline, :class).to_text
^D
     sourcefile        sourceline   class   count
---------------------  ----------  -------  -----
/lib/my_class.rb           4       String       5
/lib/my_class.rb           3       Array        1
-                          1       MyClass      1

The #to_text method (called on a group of allocations) formats the results as a nice human-readable table, grouped by whatever criteria you ask for.
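
The grouping keys are flexible, too; for a quick roll-up by class alone, something like this should work (same trace as above, just grouped differently):

stats = AllocationStats.trace { MyClass.new }
puts stats.allocations.group_by(:class).to_text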

This output is great when profiling manually, but my goal was to create a test which could run alongside my normal unit test suite (which is written in RSpec). We can see that on line 4 of my_class.rb, 5 strings are being allocated, which seems unnecessary since I know they all contain the same value. I wanted my scenario to read something like: "when initializing MyClass, it allocates under 6 objects". In RSpec, this looks something like:

describe MyClass do
  context "when initializing" do
    specify { expect { MyClass.new }.to allocate_under(6).objects }
  end
end

This syntax gives me everything I need to test that object allocations stay under a given number for the described block of code (inside the expect block), using a custom RSpec matcher.

In addition to printing the trace results, AllocationStats provides a few methods for accessing the allocations via Ruby, including #allocations and #new_allocations. These are what I used to build my matcher:

begin
  require 'allocation_stats'
rescue LoadError
  puts 'Skipping AllocationStats.'
end

RSpec::Matchers.define :allocate_under do |expected|
  match do |actual|
    return skip('AllocationStats is not available: skipping.') unless defined?(AllocationStats)
    # Accept either a block to trace or an existing AllocationStats instance.
    @trace = actual.is_a?(Proc) ? AllocationStats.trace(&actual) : actual
    @trace.new_allocations.size < expected
  end

  # Allows the fluent allocate_under(n).objects syntax; returns self as a no-op.
  def objects
    self
  end

  # Needed so the matcher can be used with block expectations (expect { ... }).
  def supports_block_expectations?
    true
  end

  def output_trace_info(trace)
    trace.allocations(alias_paths: true).group_by(:sourcefile, :sourceline, :class).to_text
  end

  failure_message do |actual|
    "expected under #{ expected } objects to be allocated; got #{ @trace.new_allocations.size }:\n\n" << output_trace_info(@trace)
  end

  description do
    "allocates under #{ expected } objects"
  end
end

I'm rescuing LoadError in the initial require statement because I may not want to include AllocationStats on every test run (it tends to slow down the tests). I then define the :allocate_under matcher which performs the trace inside of the match block. The failure_message block is also important because it includes the to_text output from the AllocationStats trace right inside my failure message! The rest of the matcher is mostly standard RSpec configuration.

With my matcher loaded, I can now run my scenario from before, and watch it fail:

$ rspec spec/my_class_spec.rb 

MyClass
  when initializing
    should allocates under 6 objects (FAILED - 1)

Failures:

  1) MyClass when initializing should allocates under 6 objects
     Failure/Error: expect { MyClass.new }.to allocate_under(6).objects
       expected under 6 objects to be allocated; got 7:

               sourcefile           sourceline   class   count
       ---------------------------  ----------  -------  -----
       <PWD>/spec/my_class_spec.rb           6  MyClass      1
       <PWD>/lib/my_class.rb                 3  Array        1
       <PWD>/lib/my_class.rb                 4  String       5
     # ./spec/my_class_spec.rb:6:in `block (3 levels) in <top (required)>'

Finished in 0.15352 seconds (files took 0.22293 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/my_class_spec.rb:5 # MyClass when initializing should allocates under 6 objects

OK, so I've programmatically demonstrated the performance problem, which is that MyClass allocates extra string objects with the same value. Let's fix that issue by throwing those values into a frozen constant:

class MyClass
  DEFAULT = 'foo'.freeze

  def initialize
    @values = Array.new(5)
    5.times { |i| @values[i] = DEFAULT }
  end
end

Now that I've fixed the issue, I'll run my test again and watch it pass:

$ rspec spec/my_class_spec.rb

MyClass
  when initializing
    should allocates under 6 objects

Finished in 0.14952 seconds (files took 0.22056 seconds to load)
1 example, 0 failures

Next time I change the MyClass#initialize method, I can be confident that I'm not allocating too many objects.

Because profiling allocations can be relatively slow, it would be ideal to run these specs on demand rather than on every test run. Since I'm already gracefully handling a missing allocation_stats, I can use Bundler to create multiple gemfiles and then specify which gemfile I want to use with the BUNDLE_GEMFILE environment variable:

$ BUNDLE_GEMFILE=with_performance.gemfile bundle exec rspec spec/
$ BUNDLE_GEMFILE=without_performance.gemfile bundle exec rspec spec/
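
The performance gemfile is just a thin wrapper around the main Gemfile that adds the profiler (the file names above are only examples):

# with_performance.gemfile
eval_gemfile 'Gemfile'

gem 'allocation_stats'

The other gemfile simply omits allocation_stats; when it's missing, the require in the matcher file fails gracefully and the allocation specs are skipped.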

Another option is to use a library like the appraisal gem, which takes this same approach and solves some Bundler gotchas. Jason Clark gave an excellent presentation on how to do this at Ruby on Ales in March 2015; check out his slides to learn more.
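
With appraisal, the setup might look something like this (a sketch; appraisal generates the gemfiles for you):

# Appraisals
appraise 'performance' do
  gem 'allocation_stats'
end

Then appraisal install generates gemfiles/performance.gemfile, and appraisal performance rspec runs the suite with allocation_stats available.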

I also think it's a good idea to maintain these types of tests separately from my normal unit tests, so I'll create a new "performance" directory, keeping my unit test suite in spec/unit/ and my performance suite in spec/performance/:

spec/
|-- spec_helper.rb
|-- unit/
|-- features/
|-- performance/
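
This also makes it easy to run each suite on its own (the paths here assume the layout above):

$ rspec spec/unit spec/features
$ BUNDLE_GEMFILE=with_performance.gemfile bundle exec rspec spec/performance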

I am still refining my approach to profiling Ruby code for performance; my hope is that maintaining a performance test suite will help me improve the speed of my code now, keep it fast in the future, and create documentation for myself and others.
