Testing

The testing strategy for Perfetto is rather complex due to the wide variety of build configurations and embedding targets.

Common test targets (all platforms / checkouts):

perfetto_unittests:
Platform-agnostic unit-tests.

perfetto_integrationtests:
End-to-end tests, involving the protobuf-based IPC transport and ftrace integration (Linux/Android only).

perfetto_benchmarks:
Benchmarks tracking the performance of: (i) trace writing, (ii) trace readback and (iii) ftrace raw pipe -> protobuf translation.

Running tests

On Linux / MacOS

tools/ninja -C out/default perfetto_{unittests,integrationtests,benchmarks}
out/default/perfetto_unittests --gtest_help

perfetto_integrationtests requires that the ftrace debugfs directory is readable/writable by the current user on Linux:

sudo chown -R $USER /sys/kernel/debug/tracing

On Android

  1. Connect a device through adb.
  2. Alternatively, start the built-in emulator (supported on Linux and MacOS):
tools/install-build-deps --android
tools/run_android_emulator &
  3. Run the tests (either on the emulator or a physical device):
tools/run_android_test out/default perfetto_unittests

Continuous testing

Perfetto is tested in a variety of locations:

Perfetto CI: https://ci.perfetto.dev/
Builds and runs perfetto_{unittests,integrationtests,benchmarks} from the standalone checkout. Benchmarks are run in a reduced form for smoke testing. See this doc for more details.

Android CI (see go/apct and go/apct-guide):
Runs only perfetto_integrationtests.

Android presubmits (TreeHugger):
Runs before submission of every AOSP CL of external/perfetto.

Android CTS (Android test suite used to ensure API compatibility):
Rolling runs internally.

Note that Perfetto CI uses the standalone build system and the others build as part of the Android tree.

Unit tests

Unit tests exist for most of the code in Perfetto on the class level. They ensure that each class broadly works as expected.

Unit tests are currently run on ci.perfetto.dev and build.chromium.org. Running unit tests on APCT and Treehugger is WIP.

Integration tests

Integration tests ensure that subsystems (importantly ftrace and the IPC layer) and Perfetto as a whole work correctly end-to-end.

There are two configurations in which integration tests can be run:

1. Production mode (Android-only)
This mode assumes that both the tracing service (traced) and the OS probes service (traced_probes) are already running. In this mode the test enables only the consumer endpoint and tests the interaction with the production services. This is the way our Android CTS and APCT tests work.

2. Standalone mode:
Starting up the daemons in the test itself and then testing against them. This is how standalone builds are tested. This is the only supported way to run integration tests on Linux and MacOS.
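The standalone-mode pattern (the test owning the service lifecycle) can be pictured with the sketch below: start a daemon, wait for it to come up, test against it, then tear it down. The tiny one-shot echo server and port number are invented stand-ins; the real integration tests start traced/traced_probes and speak the Perfetto IPC protocol instead.

```python
import socket
import subprocess
import sys
import time

# A stand-in "daemon": a one-shot echo server (NOT the real traced service).
DAEMON_SRC = r'''
import socket
srv = socket.socket()
srv.bind(("127.0.0.1", 45678))
srv.listen(1)
conn, _ = srv.accept()
conn.sendall(conn.recv(64))  # echo once, then exit
'''

daemon = subprocess.Popen([sys.executable, '-c', DAEMON_SRC])
try:
  # Wait until the daemon accepts connections, as a test harness would.
  for _ in range(50):
    try:
      client = socket.create_connection(('127.0.0.1', 45678), timeout=1)
      break
    except OSError:
      time.sleep(0.1)
  else:
    raise RuntimeError('stand-in daemon never came up')
  # "Test against the daemon": a request must round-trip through it.
  client.sendall(b'ping')
  assert client.recv(64) == b'ping'
  client.close()
finally:
  daemon.terminate()
  daemon.wait()
```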

Trace Processor diff tests

Trace Processor is mainly tested using so called “diff tests” rather than unit tests. Unit tests have proven too brittle when dealing with code that parses traces — they require painful mechanical updates whenever the parsing logic is refactored — so they are reserved for the low-level building blocks the rest of Trace Processor is built on. Everything else (parsing events, table schemas, stdlib modules, dynamic tables) is covered by diff tests.

For these tests, Trace Processor parses a known trace and executes a query string or file. The output of these queries is then compared (i.e. “diff”ed) against an expected output file and discrepancies are highlighted.
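Conceptually, the comparison the harness performs can be sketched as: run the query, render its result as CSV, and diff it against the expected output. The in-memory SQLite table and its values below are invented stand-ins; the real harness runs trace_processor_shell against an actual trace.

```python
import csv
import difflib
import io
import sqlite3

def run_diff_test(query, expected_csv):
  # Stand-in for "parse a known trace": populate a toy slice table.
  conn = sqlite3.connect(':memory:')
  conn.execute('CREATE TABLE slice (id INTEGER, name TEXT)')
  conn.executemany('INSERT INTO slice VALUES (?, ?)',
                   [(0, 'launch'), (1, 'render')])
  # Execute the query and render its output as CSV.
  cur = conn.execute(query)
  out = io.StringIO()
  writer = csv.writer(out)
  writer.writerow([d[0] for d in cur.description])
  writer.writerows(cur.fetchall())
  # Diff against the expected output; a non-empty diff fails the test.
  return list(difflib.unified_diff(expected_csv.splitlines(),
                                   out.getvalue().splitlines(),
                                   lineterm=''))

diff = run_diff_test('SELECT id, name FROM slice ORDER BY id',
                     'id,name\r\n0,launch\r\n1,render\r\n')
assert diff == []  # an empty diff means the test passes
```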

Similar diff tests are also available when writing metrics — instead of a query, the metric name is used and the expected output string contains the expected result of computing the metric.

These tests (for both queries and metrics) can be run as follows:

tools/ninja -C <out directory>
tools/diff_test_trace_processor.py <out directory>/trace_processor_shell

TIP: Query diff tests expect exactly one query in the file to produce output (usually the last one). Calling SELECT RUN_METRIC('metric file') can trip up this check, as that query generates some hidden output. To work around this, if the only column a query returns is named suppress_query_output, its output is ignored even if it produces rows (for example, SELECT RUN_METRIC('metric file') AS suppress_query_output).

Adding a new diff test

All diff tests live under test/trace_processor in tests{_category_name}.py files as methods of a class. To add a new test, add a new method starting with test_ in the appropriate Python file.

Methods cannot take arguments and have to return a DiffTestBlueprint:

class DiffTestBlueprint:
  trace: Union[Path, Json, Systrace, TextProto]
  query: Union[str, Path, Metric]
  out: Union[Path, Json, Csv, TextProto]

Trace and Out: for every type apart from Path, the contents of the object are treated as the contents of a file of that type, so they have to follow the same formatting rules.

Query: For metric tests it is enough to provide the metric name. For query tests there can be a raw SQL statement, for example "SELECT * FROM SLICE", or a path to an .sql file.
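Putting this together, a new test method might look like the sketch below. The class name, trace contents, query, and expected output are all invented for illustration, and the small dataclasses merely stand in for the framework types (the DiffTestBlueprint shown above) that the real diff-test harness provides.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Union

# Self-contained stand-ins for the framework types; in the repo these come
# from the diff-test framework.
@dataclass
class TextProto:
  contents: str

@dataclass
class Csv:
  contents: str

@dataclass
class DiffTestBlueprint:
  trace: Union[Path, TextProto]
  query: Union[str, Path]
  out: Union[Path, Csv]

class SliceTests:  # hypothetical class in a tests{_category_name}.py file
  # Test methods start with test_, take no arguments and return a blueprint.
  def test_slice_names(self):
    return DiffTestBlueprint(
        trace=TextProto(r'''
          # Inline trace contents would go here; TextProto follows the same
          # rules as a .textproto trace file.
        '''),
        query='SELECT name FROM slice ORDER BY name;',
        out=Csv('''
          "name"
          "my_slice"
        '''))

blueprint = SliceTests().test_slice_names()
assert blueprint.query.startswith('SELECT')
```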

NOTE: trace_processor_shell and the associated proto descriptors need to be built before running tools/diff_test_trace_processor.py. The easiest way to do this is to run tools/ninja -C <out directory> both initially and on every change to Trace Processor code.

Choosing where to add diff tests

diff_tests/ contains directories corresponding to different areas of Trace Processor:

  1. stdlib: Tests focusing on the PerfettoSQL Standard Library, both the prelude and the regular modules. The subdirectories generally correspond to directories in perfetto_sql/stdlib.
  2. parser: Tests focusing on ensuring different trace formats are parsed correctly and the corresponding built-in tables are populated.
  3. syntax: Tests focusing on the core syntax of PerfettoSQL (e.g. CREATE PERFETTO TABLE, CREATE PERFETTO FUNCTION).

Scenario: A new stdlib module foo/bar.sql is being added.

Answer: Add the test to stdlib/foo/bar_tests.py.

Scenario: A new event is being parsed and the focus of the test is to ensure the event is parsed correctly.

Answer: Add the test in one of the parser subdirectories. Prefer adding the test to an existing related directory (e.g. sched, power) if one exists.

Scenario: A new dynamic table is being added and the focus of the test is to ensure the dynamic table is correctly computed.

Answer: Add the test to stdlib/dynamic_tables.

Scenario: The internals of Trace Processor are being modified and the test is to ensure Trace Processor is correctly filtering/sorting important built-in tables.

Answer: Add the test to parser/core_tables.

UI pixel diff tests

The pixel tests ensure that core user journeys keep working by verifying that the UI renders identically, pixel for pixel, to a golden screenshot. They use headless Chrome to load the page, take a screenshot, and compare it pixel by pixel against the golden screenshot. You can run these tests using ui/run-integrationtests.

These tests fail when a certain number of pixels differ. If they fail, you'll need to investigate the diff and determine whether it's intentional. If it's a desired change, you will need to update the screenshots on a Linux machine to get the CI to pass. You can update them by generating and uploading a new baseline (this requires access to a Google bucket through gcloud, which only Googlers have; Googlers can install gcloud here).
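The comparison itself boils down to counting mismatching pixels against a tolerance, along these lines. The threshold, image size, and pixel values here are invented for illustration; the real tests compare PNG screenshots rendered by headless Chrome.

```python
# Count pixels that differ between a golden and a freshly taken screenshot;
# the test fails once the count exceeds a tolerance. Lists of RGB tuples
# stand in for real decoded image data.
def count_diff_pixels(golden, actual):
  assert len(golden) == len(actual), 'screenshot dimensions must match'
  return sum(1 for g, a in zip(golden, actual) if g != a)

MAX_DIFF_PIXELS = 2  # hypothetical tolerance

golden = [(255, 255, 255)] * 100   # 10x10 all-white golden screenshot
actual = list(golden)
actual[3] = (200, 200, 200)        # one pixel changed in the new screenshot
assert count_diff_pixels(golden, actual) == 1
assert count_diff_pixels(golden, actual) <= MAX_DIFF_PIXELS  # test passes
```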

The tests are run in a docker container by default, unless --no-docker is passed. It's recommended to use the container for a stable and reproducible testing environment, especially for rebaselining; otherwise it's very likely the screenshots will not match when run on the CI.

ui/run-integrationtests --rebaseline
tools/test_data upload

Once finished you can commit and upload as part of your CL to cause the CI to use your new screenshots.

NOTE: If you see a failing diff test, you can inspect the pixel differences on the CI via the link ending with ui-test-artifacts/index.html. The report on that page contains the changed screenshots as well as a command to accept the changes if they are desirable.

Android CTS tests

CTS tests ensure that any vendors who modify Android remain compliant with the platform API.

These tests include a subset of the integration tests above, as well as more complex tests which ensure the interaction between the platform (e.g. Android apps) and Perfetto is not broken.

The relevant targets are CtsPerfettoProducerApp and CtsPerfettoTestCases. Once these are built, the following commands should be run:

adb push $ANDROID_HOST_OUT/cts/android-cts/testcases/CtsPerfettoTestCases64 /data/local/tmp/
adb install -r $ANDROID_HOST_OUT/cts/android-cts/testcases/CtsPerfettoProducerApp.apk

Next, the app named android.perfetto.producer should be run on the device.

Finally, the following command should be run:

adb shell /data/local/tmp/CtsPerfettoTestCases64

Chromium waterfall

Perfetto is constantly rolled into Chromium's //third_party/perfetto via this autoroller.

The Chromium CI runs the perfetto_unittests target, as defined in the buildbot config.

You can also test a pending Perfetto CL against Chromium's CI / TryBots before submitting it. This can be useful when making trickier API changes or to test on platforms that the Perfetto CI doesn't cover (e.g. Windows, MacOS), allowing you to verify the patch before you submit it (and before it eventually auto-rolls into Chromium).

To do this, first make sure you have uploaded your pull request to GitHub. Next, create a new Chromium CL that modifies Chromium's //src/DEPS file.

If you recently uploaded your change, it may be enough to modify the git commit hash in the DEPS entry for src/third_party/perfetto:

  'src/third_party/perfetto':
    Var('chromium_git') + '/external/github.com/google/perfetto/' + '@' + '8fe19f55468ee227e99c1a682bd8c0e8f7e5bcdb',

Replace the git hash with the commit hash of your most recent patch set.

Alternatively, you can add hooks to patch in the pending CL on top of Chromium's current third_party/perfetto revision. For this, add the following entries to the hooks array in Chromium's //src/DEPS file, changing refs/pull/XXXX/head to the appropriate value for your pull request.

  {
    'name': 'fetch_custom_patch',
    'pattern': '.',
    'action': [ 'git', '-C', 'src/third_party/perfetto/',
                'fetch', 'https://github.com/google/perfetto.git',
                'refs/pull/XXXX/head',
    ],
  },
  {
    'name': 'apply_custom_patch',
    'pattern': '.',
    'action': ['git', '-C', 'src/third_party/perfetto/',
               '-c', 'user.name=Custom Patch', '-c', 'user.email=custompatch@example.com',
               'cherry-pick', 'FETCH_HEAD',
    ],
  },

If you'd like to test your change against the SDK build of Chrome, you can add Cq-Include-Trybots: lines for the Perfetto SDK trybots to the change description in Gerrit (this won't be needed once Chrome's migration to the SDK is complete, see tracking bug):

Cq-Include-Trybots: luci.chromium.try:linux-perfetto-rel
Cq-Include-Trybots: luci.chromium.try:android-perfetto-rel
Cq-Include-Trybots: luci.chromium.try:mac-perfetto-rel
Cq-Include-Trybots: luci.chromium.try:win-perfetto-rel