
Test Execution Dashboard

TL; DR;

A tool to help monitor test execution time as you go through development or refactoring, intended to encourage writing faster tests and to catch the things that slow you down. Source here.

Problem

Well, not really a problem, necessarily. Imagine you are working in an org that’s been around for a few years. It’s the typical happy soup kind. There’s a decent amount of tests, and maybe they even use fairly similar approaches to creating test data. The problem is that the busiest business area requires tons of supporting data to exist (e.g. product catalogs, configuration structures, approvers, …). On top of that there is a whole lot happening in triggers, as is typical of soupy orgs like that. This all means that the unit tests are not only quite long, but also take super long to execute.

While you do try to spend enough time on making things a bit better, you also have to keep satisfying new business requirements. It’s very tempting to just copy-paste the base Test Setup or the general unit test structure from another existing test similar to what you need. Makes you go faster. But it’s not doing you any favours in terms of test execution time. Which makes the whole refactoring thing more and more painful. I mean, it’s not unheard of for the tests to take over an hour during a full deployment.

Of course the answer is to start writing different tests. More mocking. Smarter test data setup. That’s a topic (many, many topics really) for another day. Here we want a simple nudge every now and then to make sure we keep going in the right direction. Just the ability to follow the trend. Is every new test a little bit better than those before?

Dashboard to the Rescue

My idea of how to do this was to maintain a dashboard of how long the tests take to execute over time. This would allow the team to see if the trend is a good one.

I could not find any existing tool (not a free one anyway, yes that was a requirement for me this time) that did what I needed easily. SonarCloud (which we already use) can show how many tests were executed, but not how long they took.

I’ve decided to just build my own dashboard. All I have to do is make sure I have the data. Apex Test Run Result and Apex Test Result hold all I need, but they can disappear easily and you cannot report on them. Custom Objects that are straight-up clones of them will do the trick though.

Getting the Data

The easiest way to move the data from the Tooling objects into Custom objects? I’m sure there are plenty of ways, but I love the simplicity of using the CLI. I can run a SOQL query like SELECT Id.. FROM ApexTestResult and then use the Bulk API to import the results into my custom objects.
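As a rough sketch (not the exact script from the repo), the whole move can be two CLI calls. The org aliases, the TEST_RUN_ID variable and the Apex_Test_Result__c object with its fields are placeholders of mine, and the exact sf commands and flags depend on your CLI version.

#!/usr/bin/env bash
# Sketch only: export Apex test results from one org and bulk-load them into
# the org that hosts the dashboard. Aliases, TEST_RUN_ID and the custom
# object/field names are placeholders; adjust to your own setup.
set -euo pipefail

SOURCE_ORG=ci-sandbox        # org where the tests actually ran
DASHBOARD_ORG=dashboard-org  # org holding the cloned Custom Objects

# 1. Pull one row per executed test method.
sf data query \
  --target-org "$SOURCE_ORG" \
  --query "SELECT ApexClass.Name, MethodName, Outcome, RunTime, TestTimestamp FROM ApexTestResult WHERE ApexTestRunResultId = '$TEST_RUN_ID'" \
  --result-format csv > test-results.csv

# 2. Rename the CSV headers to match the custom fields (sed or a small script),
#    then load the file via the Bulk API.
sf data import bulk \
  --target-org "$DASHBOARD_ORG" \
  --sobject Apex_Test_Result__c \
  --file test-results.csv \
  --wait 10

Because the query and the import each take their own --target-org, the org the tests ran in and the org holding the dashboard don’t have to be the same one.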

It’s good to keep this process outside of the org. I will mostly run this from GitHub Actions, and I want to be able to maintain the dashboard in just one selected org while (potentially) measuring tests from multiple different environments.

Suitable Context

This isn’t going to be of any use for ad hoc test runs. It’s important to only ever compare test executions that are comparable. Think with/without code coverage, all tests vs selected tests. The ideal situation is a post-merge deployment into an integration sandbox. That’s always done the same way. Other good options would be PR validations (if running all tests) or regularly scheduled tests in a stable environment.

What to Track

What are reasonable measures? Overall Test Execution time is one. But it should not be the only one. We are adding new functionality after all, so we can’t reasonably expect the overall time to go down. Still, it’s a good idea to monitor it to make sure we’re not soaring into the sky. The number of tests doesn’t really tell us much. Not on its own anyway. I chose the average time per method; this helps to keep an eye on the general trend towards faster (mocked) tests, even when we are constantly adding new ones.
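If the results live in cloned Custom Objects, that average is one aggregate SOQL query away. Apex_Test_Result__c, Run_Time__c and Test_Run__c are just placeholder names I’m using for the sketch, not necessarily what the package uses.

# Average milliseconds per test method for each captured test run (placeholder names).
sf data query --target-org dashboard-org --query \
  "SELECT Test_Run__c, AVG(Run_Time__c), COUNT(Id) FROM Apex_Test_Result__c GROUP BY Test_Run__c"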

Choosing the right “target” is quite tricky. Since we are starting to do this with a thousand tests in place already, I’ve just chosen a number that’s smaller than the current average.

One final interesting measure in my opinion is listing out the slowest classes and how much of the overall test execution they represent. A nice target for quiet business periods when there isn’t as much going on.
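The same placeholder object can answer that with a grouped query; the share of the overall run can then be worked out against the total afterwards, either in a report or in a small script.

# Ten slowest test classes by total runtime (placeholder object and field names).
sf data query --target-org dashboard-org --query \
  "SELECT Class_Name__c, SUM(Run_Time__c) FROM Apex_Test_Result__c GROUP BY Class_Name__c ORDER BY SUM(Run_Time__c) DESC LIMIT 10"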

Pretty Colours

This is what it looks like. We’ve got it on a schedule to email everyone in the development team each week as a little nudge that we want to focus on the tests. (We run Local Tests with Code Coverage; that’s why it’s so slow. :-))

Screenshot of the Test Execution Dashboard

Caveat

Given the nature of Salesforce, even the same version of the same test isn’t always going to take exactly the same amount of time to execute. It’s important to remember that when looking at the results here and not get triggered by small deviations in either direction. Focus on the long-term trend. And the more test executions you compare, the more informative the results.

Verdict?

What do you think? Is this useful at all? Or is it just a bit obsessive?

I didn’t want to focus on code too much this time. There isn’t much of it anyway. If you would like to use this, I’ve made it available as an Unlocked Package. The source is of course available on GitHub, including a bash script to move the Test Results from one org to another. To make it truly easy though, I’ve also created a GitHub Action that you can add to your pipeline (example YAML included).
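Installing an unlocked package is the usual one-liner; the package version ID below is only a placeholder, so take the real one from the repo’s README.

# Placeholder package version ID; use the one published in the GitHub repo.
sf package install --package 04tXXXXXXXXXXXXXXX --target-org my-sandbox --wait 10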

Feel free to fork the repos and change things up a bit. Any contributions are of course welcome. Maybe you prefer different charts? I also need to add an easy way to include the Org Id in the import to make it support multiple orgs out of the box (I was a bit lazy, to be honest). Let me know if you’d like to get involved.
