Writing Instrumentation Tests
"If it's important enough to be instrumented, it's important enough to be tested."
(Old Glean Proverb)
All metrics and pings in the Glean SDK have well-documented APIs for testing.
You'll want to familiarize yourself with `TestGetValue()`
(here's an example JS (xpcshell) test of some metrics)
for metrics and `TestBeforeNextSubmit()`
(here's an example C++ (gtest) test of a custom ping)
for pings.
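In JavaScript those look like this. (A preview sketch only; `Glean.myCategory.myMetric` and `GleanPings.myPing` are hypothetical, and full examples follow below.)

```js
// Metrics: read the current value back (undefined if nothing was recorded).
Assert.equal(undefined, Glean.myCategory.myMetric.testGetValue());

// Pings: register assertions to run synchronously at the ping's next submit().
GleanPings.myPing.testBeforeNextSubmit(reason => {
  Assert.equal("expected-reason", reason);
});
```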
All test APIs are available in all three of FOG’s supported languages: Rust, C++, and JavaScript.
But how do you get into a position where you can even call these test APIs? How do they fit in with Firefox Desktop’s testing frameworks?
Manual Testing and Debugging
The Glean SDK has debugging capabilities for manually verifying that instrumentation makes it to Mozilla's Data Pipeline.
Firefox Desktop supports these via environment variables and via the interface on `about:glean`.
This is all well and good as a quick check that things are working now, but to make sure everything stays working in the future, you're going to want to write some automated tests.
General Things To Bear In Mind
- You may see values from previous tests persist across tests because the profile directory is shared between test cases. You can reset Glean before your test by calling `Services.fog.testResetFOG()` (in JS). If your instrumentation isn't on the parent process, you should call `await Services.fog.testFlushAllChildren()` before `testResetFOG()`. That will ensure all pending data makes it to the parent process to be cleared (see the sketch after this list). You shouldn't have to do this in C++ or Rust since there you should use the `FOGFixture` test fixture.
- If your metric is based on timing (`timespan`, `timing_distribution`), do not expect to be able to assert the exact timing value. Glean does a lot of timing for you deep in the SDK, so unless you mock the system's monotonic clock, do not expect the values to be predictable. Instead, check that a value is greater than 0 or that the number of samples is as expected (see the sketch after this list). You might be able to assert that the value is at least as much as a known, timed value, but beware of rounding.
- If your metric is a `timing_distribution` mirroring to a Telemetry probe via GIFFT, there may be small observed differences between the two systems that can cause equality assertions to fail.
- Errors in instrumentation APIs do not panic, throw, or crash, but Glean remembers that they happened. Test APIs, on the other hand, are permitted (some may say "encouraged") to panic, throw, or crash on bad behaviour. If you call a test API and it panics, throws, or crashes, that means your instrumentation did something wrong. Check your test logs for details about what went awry.
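For example, here is a minimal sketch of the reset, flush, and timing points above in a `browser-chrome` test, assuming a hypothetical `timing_distribution` metric `Glean.myMetricCategory.pageLoad` recorded on a content process:

```js
add_setup(async function () {
  // Flush child-process data first so that it is cleared too
  // and can't leak into this test from a previous one.
  await Services.fog.testFlushAllChildren();
  Services.fog.testResetFOG();
});

add_task(async function test_page_load_timing() {
  // ... express behaviour that records into the metric ...

  // Bring pending child-process data to the parent before asserting.
  await Services.fog.testFlushAllChildren();

  // Timing values aren't predictable, so assert on presence,
  // not on an exact duration.
  const data = Glean.myMetricCategory.pageLoad.testGetValue();
  Assert.greater(data.sum, 0, "Should have accumulated some time.");
});
```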
Tests and Artifact Builds
Artifact build support is provided by the JOG subsystem. It is able to register the latest versions of all metrics and pings at runtime. However, the compiled code is still running against the version of those metrics and pings that was current at the time the artifacts were compiled.
This isn’t a problem unless:
- You are changing a metric or ping that is used in instrumentation in the compiled code, or
- You are using `testBeforeNextSubmit` in JavaScript for a ping submitted in the compiled code.
When in doubt, simply test your new test in artifact mode (e.g. by passing `--enable-artifact-builds` to `mach try`) before submitting it.
If it doesn't pass in artifact mode because of one of these two cases, you may need to skip your test whenever FOG's artifact build support is enabled:
xpcshell:
```js
add_task(
  { skip_if: () => Services.prefs.getBoolPref("telemetry.fog.artifact_build", false) },
  function () {
    // ... your test ...
  }
);
```
mochitest:
```js
add_task(function () {
  if (Services.prefs.getBoolPref("telemetry.fog.artifact_build", false)) {
    Assert.ok(true, "Test skipped in artifact mode.");
    return;
  }
  // ... your test ...
});
```
The Usual Test Format
Instrumentation tests tend to follow the same three-part format:
1. Assert no value in the metric
2. Express behaviour
3. Assert correct value in the metric
Your choice of test suite will depend on how the instrumented behaviour can be expressed.
xpcshell Tests
If the instrumented behaviour is on the main or content process and can be called from privileged JS, `xpcshell` is an excellent choice.
`xpcshell` is so minimal an environment, however, that (pending bug 1756055) you'll need to manually tell it you need two things:

- A profile directory
- An initialized FOG
```js
/* Any copyright is dedicated to the Public Domain.
   http://creativecommons.org/publicdomain/zero/1.0/ */

"use strict";

add_setup(function test_setup() {
  // FOG needs a profile directory to put its data in.
  do_get_profile();

  // FOG needs to be initialized in order for data to flow.
  Services.fog.initializeFOG();
});
```
From there, just follow The Usual Test Format:
```js
add_task(function test_instrumentation() {
  // 1) Assert no value
  Assert.equal(undefined, Glean.myMetricCategory.myMetricName.testGetValue());

  // 2) Express behaviour
  // ...<left as an exercise to the reader>...

  // 3) Assert correct value
  Assert.equal(kValue, Glean.myMetricCategory.myMetricName.testGetValue());
});
```
If your new instrumentation includes a new custom ping, there are two small additions to The Usual Test Format:

1.1) Call `testBeforeNextSubmit` before your ping is submitted. The callback you register in `testBeforeNextSubmit` is called synchronously with the call to the ping's `submit()`.

3.1) Check that the ping actually was submitted. If all your Asserts are inside `testBeforeNextSubmit`'s closure, another way this test could pass is by not running any of them.
```js
add_task(function test_custom_ping() {
  // 1) Assert no value
  Assert.equal(undefined, Glean.myMetricCategory.myMetricName.testGetValue());

  // 1.1) Set up Step 3.
  let submitted = false;
  GleanPings.myPing.testBeforeNextSubmit(reason => {
    submitted = true;
    // 3) Assert correct value
    Assert.equal(kExpectedReason, reason, "Reason of submitted ping must match.");
    Assert.equal(kExpectedMetricValue, Glean.myMetricCategory.myMetricName.testGetValue());
  });

  // 2) Express behaviour that sends a ping with expected reason and contents
  // ...<left as an exercise to the reader>...

  // 3.1) Check that the ping actually was submitted.
  Assert.ok(submitted, "Ping was submitted, callback was called.");
});
```
(We acknowledge that this isn't the most ergonomic form. Please follow bug 1756637 for updates on a better design and implementation for ping tests.)
mochitest
`browser-chrome`-flavoured mochitests can be tested very similarly to `xpcshell`, though you do not need to request a profile or initialize FOG.
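For example, a minimal `browser-chrome` sketch, assuming a hypothetical counter metric `Glean.myMetricCategory.myCounter`:

```js
add_task(async function test_my_counter() {
  // No do_get_profile() or Services.fog.initializeFOG() needed:
  // the running browser already has both.
  Services.fog.testResetFOG();

  // 1) Assert no value
  Assert.equal(undefined, Glean.myMetricCategory.myCounter.testGetValue());

  // 2) Express behaviour
  Glean.myMetricCategory.myCounter.add(1);

  // 3) Assert correct value
  Assert.equal(1, Glean.myMetricCategory.myCounter.testGetValue());
});
```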
`plain`-flavoured mochitests aren't yet supported (follow bug 1799977 for updates and a workaround).

If you're testing in `mochitest`, your instrumentation (or your test) might not be running in the parent process.
This means you get to learn the IPC test APIs.
IPC
All test APIs must be called on the main process (they’ll assert otherwise). But your instrumentation might be on any process, so how do you test it?
In this case there’s a slight addition to the Usual Test Format:
1. Assert no value in the metric
2. Express behaviour
3. Flush all pending FOG IPC operations with `await Services.fog.testFlushAllChildren()`
4. Assert correct value in the metric
NOTE: We learned in bug 1843178 that the list of all content processes that `Services.fog.testFlushAllChildren()` uses is very quickly updated after the end of a call to `BrowserTestUtils.withNewTab(...)`.
If you are using `withNewTab`, you should consider calling `testFlushAllChildren()` within the callback.
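Putting it together, a minimal `browser-chrome` sketch of the IPC test format; the page URL and the event metric `Glean.myMetricCategory.myEvent` are hypothetical:

```js
add_task(async function test_content_process_instrumentation() {
  Services.fog.testResetFOG();

  // 1) Assert no value
  Assert.equal(undefined, Glean.myMetricCategory.myEvent.testGetValue());

  await BrowserTestUtils.withNewTab("https://example.com/", async browser => {
    // 2) Express behaviour in the content process
    // ...<left as an exercise to the reader>...

    // 3) Flush inside the callback, while the tab's content process is
    //    still in the list that testFlushAllChildren() uses (see the NOTE above).
    await Services.fog.testFlushAllChildren();
  });

  // 4) Assert correct value
  Assert.notEqual(undefined, Glean.myMetricCategory.myEvent.testGetValue());
});
```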
GTests/Google Tests
Please make use of the `FOGFixture` fixture when writing your tests, like:
```cpp
TEST_F(FOGFixture, MyTestCase) {
  // 1) Assert no value
  ASSERT_EQ(mozilla::Nothing(),
            my_metric_category::my_metric_name.TestGetValue());

  // 2) Express behaviour
  // ...<left as an exercise to the reader>...

  // 3) Assert correct value
  ASSERT_EQ(kValue,
            my_metric_category::my_metric_name.TestGetValue().unwrap().ref());
}
```
The fixture will take care of ensuring storage is reset between tests.
Rust rusttests
The general-purpose Testing & Debugging Rust Code in Firefox documentation is a good thing to review first.
Unfortunately, FOG requires Gecko (to tell it where the profile dir is, among other things), which means we need to use the GTest + FFI approach where GTest is the runner and Rust is just the language the test is written in.
This means your test will look like a GTest, like this:
extern "C" void Rust_MyRustTest();
TEST_F(FOGFixture, MyRustTest) { Rust_MyRustTest(); }
Plus a Rust test like this:
```rust
#[no_mangle]
pub extern "C" fn Rust_MyRustTest() {
    // 1) Assert no value
    assert_eq!(None,
               fog::metrics::my_metric_category::my_metric_name.test_get_value(None));

    // 2) Express behaviour
    // ...<left as an exercise to the reader>...

    // 3) Assert correct value
    assert_eq!(Some(value),
               fog::metrics::my_metric_category::my_metric_name.test_get_value(None));
}
```