
Flaky specs

Automated tests that do not pass consistently: they sometimes fail and sometimes pass on the same code (flaky)

Diagnosing flaky tests

We have a handy script for this, which makes it easy for testers to run a spec or example repeatedly, using all available CPU threads and recording the output in a generated log file.

Run the script

The script can be run from your local repo folder, following this syntax:
./script/rspec-slow-repeat <number of times to run the spec> <spec file or example>
For example:
./script/rspec-slow-repeat 100 ./spec/system/admin/configuration/taxonomies_spec.rb:45
Here we run the context on line 45 of taxonomies_spec.rb one hundred times.
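For reference, here is a simplified, sequential sketch of what such a repeat-runner could look like. This is an assumption for illustration, not the actual contents of script/rspec-slow-repeat; in particular, the real script runs across all available CPU threads, which this sketch omits for brevity:

#!/usr/bin/env bash
# Simplified sketch of a repeat-runner (not the actual script/rspec-slow-repeat).
# Usage: rspec-repeat <number of runs> <spec file or example>
runs="$1"
spec="$2"
log="./tmp/rspec.log"

mkdir -p ./tmp
: > "$log"  # start each session with a fresh log

passed=0
for i in $(seq 1 "$runs"); do
  # Append the full RSpec output to the log; count the run if it exits cleanly.
  if bundle exec rspec "$spec" >> "$log" 2>&1; then
    passed=$((passed + 1))
    echo "Pass."
  else
    echo "!!! Fail !!!"
  fi
done

echo "$passed of $runs passed ($(( passed * 100 / runs ))%)"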
The script displays passing runs as Pass., failing runs as !!! Fail !!!, and a pass percentage when it finishes, something like 97 of 100 passed (97%). However, it will not tell you which examples are failing or why. To find that out, we'll need to look at the generated log file.

Track the output with the log

The script above generates a log file in which the RSpec output is recorded. The log file is located at ./tmp/rspec.log.
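Once a run finishes (or while it is still going), you can also scan the log for RSpec's failure output directly. For example:

grep -n "Failure/Error" ./tmp/rspec.log

RSpec prints a Failure/Error: line for each failing example, along with the failure message and backtrace, so this points you straight at the offending examples.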

Catching failing specs like a pro
💪

Open two terminal windows:
  • In the left window, run the script following the syntax above
  • In the right window, follow the log file by running tail -f ./tmp/rspec.log
Now you can watch the specs passing/failing as they happen
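If you prefer a single terminal, a minimal variant is to run the script in the background and follow the log in the foreground (note that the script's Pass./Fail output will interleave with the tailed log lines):

./script/rspec-slow-repeat 100 ./spec/system/admin/configuration/taxonomies_spec.rb:45 &
tail -f ./tmp/rspec.log

Stop tail with Ctrl-C once the run finishes.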
🍿
Happy debugging!