This guide explains how to use Perfetto's trace summarization feature to extract structured, actionable data from your traces.
PerfettoSQL is a powerful tool for interactively exploring traces. You can write any query you want, and the results are immediately available. However, this flexibility presents a challenge for automation and large-scale analysis. The output of a SELECT statement has an arbitrary schema (column names and types), which can change from one query to the next. This makes it difficult to build generic tools, dashboards, or regression-detection systems that consume this data, as they cannot rely on a stable data structure.
Trace summarization solves this problem. It provides a way to define a stable, structured schema for the data you want to extract from a trace. Instead of producing arbitrary tables, it generates a consistent protobuf message (TraceSummary) that is easy for tools to parse and process.
This is especially powerful for cross-trace analysis. By running the same summary specification across hundreds or thousands of traces, you can reliably aggregate the results to track performance metrics over time, compare different versions of your application, and automatically detect regressions.
In short, use trace summarization when you need to:

- Extract data with a stable, machine-readable schema instead of ad-hoc query results.
- Aggregate the same metrics across many traces.
- Feed dashboards or automated regression-detection systems.
The easiest way to get started is by using the modules in the PerfettoSQL Standard Library.
Let's walk through an example. Suppose we want to compute the average memory usage (specifically, RSS + Swap) for each process in a trace. The linux.memory.process module already provides a table, memory_rss_and_swap_per_process, that is perfect for this.
We can define a TraceSummarySpec to compute this metric:
```textproto
// spec.textproto
metric_spec {
  id: "memory_per_process"
  dimensions: "process_name"
  value: "avg_rss_and_swap"
  query: {
    table: {
      table_name: "memory_rss_and_swap_per_process"
    }
    referenced_modules: "linux.memory.process"
    group_by: {
      column_names: "process_name"
      aggregates: {
        column_name: "rss_and_swap"
        op: DURATION_WEIGHTED_MEAN
        result_column_name: "avg_rss_and_swap"
      }
    }
  }
}
```
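The DURATION_WEIGHTED_MEAN op weights each sample by how long it was in effect, rather than treating all samples equally. The following is an illustrative Python sketch of that computation with made-up sample data; the real aggregation happens inside Trace Processor:

```python
# Illustrative sketch of a duration-weighted mean, the computation behind the
# DURATION_WEIGHTED_MEAN aggregate. Each (value, dur) pair is a counter value
# and how long it stayed at that value.
def duration_weighted_mean(samples):
    total_dur = sum(dur for _, dur in samples)
    if total_dur == 0:
        return None
    return sum(value * dur for value, dur in samples) / total_dur

# A process at 100 units for 900 time units, spiking to 500 for 100:
samples = [(100, 900), (500, 100)]
print(duration_weighted_mean(samples))  # 140.0, not the unweighted mean of 300
```

This is why a short-lived memory spike barely moves the average, while a sustained plateau dominates it.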
To run this, save the above content as spec.textproto and use your preferred tool.
Often, you'll want to compute several related metrics that share the same underlying query and dimensions. For example, for a given process, you might want to know the minimum, maximum, and average memory usage.
Instead of writing a separate metric_spec for each, which would involve repeating the same query and dimensions blocks, you can use a TraceMetricV2TemplateSpec. This is more concise, less error-prone, and more performant as the underlying query is only run once.
Let's extend our memory example to calculate the min, max, and duration-weighted average of RSS+Swap for each process.
```textproto
// spec.textproto
metric_template_spec {
  id_prefix: "memory_per_process"
  dimensions: "process_name"
  value_columns: "min_rss_and_swap"
  value_columns: "max_rss_and_swap"
  value_columns: "avg_rss_and_swap"
  query: {
    table: {
      table_name: "memory_rss_and_swap_per_process"
    }
    referenced_modules: "linux.memory.process"
    group_by: {
      column_names: "process_name"
      aggregates: {
        column_name: "rss_and_swap"
        op: MIN
        result_column_name: "min_rss_and_swap"
      }
      aggregates: {
        column_name: "rss_and_swap"
        op: MAX
        result_column_name: "max_rss_and_swap"
      }
      aggregates: {
        column_name: "rss_and_swap"
        op: DURATION_WEIGHTED_MEAN
        result_column_name: "avg_rss_and_swap"
      }
    }
  }
}
```
This single template generates three metrics:
- memory_per_process_min_rss_and_swap
- memory_per_process_max_rss_and_swap
- memory_per_process_avg_rss_and_swap

You can then run this, requesting any or all of the generated metrics.
To make automated analysis and visualization of metrics more powerful, you can add units and polarity (i.e., whether a higher or lower value is better) to your metrics.
This is done by using the value_column_specs field in a TraceMetricV2TemplateSpec instead of the simpler value_columns. This allows you to specify a unit and polarity for each metric generated by the template.
Let's adapt our previous memory example to include this information. We'll specify that the memory values are in BYTES and that a lower value is better.
```textproto
// spec.textproto
metric_template_spec {
  id_prefix: "memory_per_process"
  dimensions: "process_name"
  value_column_specs: {
    name: "min_rss_and_swap"
    unit: BYTES
    polarity: LOWER_IS_BETTER
  }
  value_column_specs: {
    name: "max_rss_and_swap"
    unit: BYTES
    polarity: LOWER_IS_BETTER
  }
  value_column_specs: {
    name: "avg_rss_and_swap"
    unit: BYTES
    polarity: LOWER_IS_BETTER
  }
  query: {
    table: {
      table_name: "memory_rss_and_swap_per_process"
    }
    referenced_modules: "linux.memory.process"
    group_by: {
      column_names: "process_name"
      aggregates: {
        column_name: "rss_and_swap"
        op: MIN
        result_column_name: "min_rss_and_swap"
      }
      aggregates: {
        column_name: "rss_and_swap"
        op: MAX
        result_column_name: "max_rss_and_swap"
      }
      aggregates: {
        column_name: "rss_and_swap"
        op: DURATION_WEIGHTED_MEAN
        result_column_name: "avg_rss_and_swap"
      }
    }
  }
}
```
This will add the specified unit and polarity to the TraceMetricV2Spec of each generated metric, making the output richer and more useful for automated tooling.
While the standard library is powerful, you will often need to analyze custom events specific to your application. You can achieve this by writing your own SQL modules and loading them into Trace Processor.
A SQL package is simply a directory containing .sql files. This directory can be loaded into Trace Processor, and its files become available as modules.
Let's say you have custom slices named game_frame and you want to calculate the average, minimum, and maximum frame duration.
1. Create your custom SQL module:
Create a directory structure like this:
my_sql_modules/
└── my_game/
└── metrics.sql
Inside metrics.sql, define a view that calculates the frame stats:
```sql
-- my_sql_modules/my_game/metrics.sql
CREATE PERFETTO VIEW game_frame_stats AS
SELECT
  'game_frame' AS frame_type,
  MIN(dur) AS min_duration_ns,
  MAX(dur) AS max_duration_ns,
  AVG(dur) AS avg_duration_ns
FROM slice
WHERE name = 'game_frame'
GROUP BY 1;
```
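If you want to sanity-check the aggregation logic before wiring it into Perfetto, you can run the equivalent plain SQL against a toy slice table in SQLite. Note that CREATE PERFETTO VIEW and the real slice schema are Perfetto-specific; this only mirrors the SELECT, with invented durations:

```python
import sqlite3

# Mirror the SELECT from metrics.sql against a toy `slice` table in SQLite.
# The real table lives inside Trace Processor; this is only a logic check.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slice (name TEXT, dur INTEGER)")
conn.executemany(
    "INSERT INTO slice VALUES (?, ?)",
    [("game_frame", 16_000_000), ("game_frame", 33_000_000), ("other", 5)],
)
row = conn.execute("""
    SELECT 'game_frame' AS frame_type,
           MIN(dur) AS min_duration_ns,
           MAX(dur) AS max_duration_ns,
           AVG(dur) AS avg_duration_ns
    FROM slice
    WHERE name = 'game_frame'
    GROUP BY 1
""").fetchone()
print(row)  # ('game_frame', 16000000, 33000000, 24500000.0)
```

The WHERE clause correctly excludes the unrelated "other" slice from the stats.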
2. Use a template in your summary spec:
Again, we can use a TraceMetricV2TemplateSpec to generate these related metrics from a single, shared configuration.
Create a spec.textproto that references your custom module and view:
```textproto
// spec.textproto
metric_template_spec {
  id_prefix: "game_frame"
  dimensions: "frame_type"
  value_columns: "min_duration_ns"
  value_columns: "max_duration_ns"
  value_columns: "avg_duration_ns"
  query: {
    table: {
      table_name: "game_frame_stats"
    }
    // The module name is the directory path relative to the package root,
    // with the .sql extension removed.
    referenced_modules: "my_game.metrics"
  }
}
```
3. Run the summary with your custom package:
You can now compute the summary using either the Python API or the command-line shell, telling Trace Processor where to find your custom package.
The select_columns field provides a powerful way to manipulate the columns of your query result. You can rename columns and perform transformations using SQL expressions.
Each SelectColumn message has two fields:
- column_name_or_expression: The name of a column from the source, or a SQL expression.
- alias: The new name for the column.

This example shows how to select the ts and dur columns from the slice table, rename ts to timestamp, and create a new column dur_ms by converting dur from nanoseconds to milliseconds.
```textproto
query: {
  table: {
    table_name: "slice"
  }
  select_columns: {
    column_name_or_expression: "ts"
    alias: "timestamp"
  }
  select_columns: {
    column_name_or_expression: "dur / 1000000.0"
    alias: "dur_ms"
  }
}
```
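A select_columns step behaves like wrapping the source in a SELECT that renames and transforms columns. Here is a toy SQLite equivalent of the rename and the ns-to-ms conversion, using an invented row:

```python
import sqlite3

# Toy equivalent of the select_columns step: rename `ts` and convert `dur`
# from nanoseconds to milliseconds. Data is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slice (ts INTEGER, dur INTEGER)")
conn.execute("INSERT INTO slice VALUES (1000, 2000000)")  # dur = 2 ms in ns
row = conn.execute(
    "SELECT ts AS timestamp, dur / 1000000.0 AS dur_ms FROM slice"
).fetchone()
print(row)  # (1000, 2.0)
```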
interval_intersect

A common analysis pattern is to analyze data from one source (e.g., CPU usage) within specific time windows from another (e.g., a “Critical User Journey” slice). The interval_intersect query makes this easy.
It works by taking a base query and one or more interval queries. The result includes only the rows from the base query that overlap in time with at least one row from each of the interval queries.
Use Cases:

- Computing metrics (e.g., CPU or memory usage) only within the bounds of specific slices, such as a “Critical User Journey”.
- Restricting an analysis to time windows defined by another data source.

This example demonstrates using interval_intersect to find the total CPU time for the thread “bar” within the duration of any “baz_*” slice from the “system_server” process.
```textproto
// In a metric_spec with id: "bar_cpu_time_during_baz_cujs"
query: {
  interval_intersect: {
    base: {
      // The base data is CPU time per thread.
      table: {
        table_name: "thread_slice_cpu_time"
      }
      referenced_modules: "slices.cpu_time"
      filters: {
        column_name: "thread_name"
        op: EQUAL
        string_rhs: "bar"
      }
    }
    interval_intersect: {
      // The intervals are the "baz_*" slices.
      simple_slices: {
        slice_name_glob: "baz_*"
        process_name_glob: "system_server"
      }
    }
  }
  group_by: {
    // We sum the CPU time from the intersected intervals.
    aggregates: {
      column_name: "cpu_time"
      op: SUM
      result_column_name: "total_cpu_time"
    }
  }
}
```
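Conceptually, the intersection keeps only the portions of base spans that overlap an interval span. This simplified Python sketch (a hypothetical helper, not Perfetto code) shows the overlap rule on invented span data:

```python
# Simplified sketch of interval intersection: keep the portion of each base
# span [ts, ts + dur) that overlaps any interval span. Spans are (ts, dur).
def intersect(base_spans, intervals):
    out = []
    for bts, bdur in base_spans:
        for its, idur in intervals:
            start = max(bts, its)
            end = min(bts + bdur, its + idur)
            if start < end:  # non-empty overlap
                out.append((start, end - start))
    return out

# Base spans clipped against a single interval [50, 150):
print(intersect([(0, 100), (120, 50)], [(50, 100)]))  # [(50, 50), (120, 30)]
```

Only the overlapping 50 and 30 units survive; time outside the interval is discarded, which is what makes a subsequent SUM meaningful.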
dependencies

The dependencies field in the Sql source allows you to build complex queries by composing them from other structured queries. This is especially useful for breaking down a complex analysis into smaller, reusable parts.
Each dependency is given an alias, which is a string that can be used in the SQL query to refer to the result of the dependency. The SQL query can then use this alias as if it were a table.
This example shows how to use dependencies to join slice data with track data. We define two dependencies, one for the slice table and one for the track table, and then join them in the main SQL query.
```textproto
query: {
  sql: {
    sql: "SELECT s.id, s.ts, s.dur, t.track_name FROM $slice_table s JOIN $track_table t ON s.track_id = t.id"
    column_names: "id"
    column_names: "ts"
    column_names: "dur"
    column_names: "track_name"
    dependencies: {
      alias: "slice_table"
      query: {
        table: {
          table_name: "slice"
        }
      }
    }
    dependencies: {
      alias: "track_table"
      query: {
        table: {
          table_name: "track"
        }
      }
    }
  }
}
```
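Each $alias in the SQL behaves like a table backed by its dependency's result. You can emulate this in SQLite by inlining each dependency as a named subquery; the data below is invented, and the real tables live inside Trace Processor:

```python
import sqlite3

# Emulate `dependencies`: each $alias acts like a table produced by the
# dependency's query. Here the dependencies are inlined as subqueries.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE slice (id INTEGER, ts INTEGER, dur INTEGER, track_id INTEGER)"
)
conn.execute("CREATE TABLE track (id INTEGER, track_name TEXT)")
conn.execute("INSERT INTO slice VALUES (1, 100, 50, 7)")
conn.execute("INSERT INTO track VALUES (7, 'main_thread')")
rows = conn.execute("""
    SELECT s.id, s.ts, s.dur, t.track_name
    FROM (SELECT * FROM slice) s   -- stands in for $slice_table
    JOIN (SELECT * FROM track) t   -- stands in for $track_table
      ON s.track_id = t.id
""").fetchall()
print(rows)  # [(1, 100, 50, 'main_thread')]
```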
You can add key-value metadata to your summary to provide context for the metrics, such as the device model or OS version. This is especially useful when analyzing multiple traces, as it allows you to group or filter results based on this metadata.
The metadata is computed alongside any metrics you request in the same run.
1. Define the metadata query in your spec:
This query must return “key” and “value” columns.
```textproto
// In spec.textproto, alongside your metric_spec definitions
query {
  id: "device_info_query"
  sql {
    sql: "SELECT 'device_name' AS key, 'Pixel Test' AS value"
    column_names: "key"
    column_names: "value"
  }
}
```
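Because the metadata query must produce exactly “key” and “value” columns, it can be useful to check a candidate statement's shape before putting it in the spec. The example statement is standard SQL, so SQLite can verify it:

```python
import sqlite3

# Check that the metadata SQL yields exactly "key" and "value" columns.
conn = sqlite3.connect(":memory:")
cur = conn.execute("SELECT 'device_name' AS key, 'Pixel Test' AS value")
print([d[0] for d in cur.description])  # ['key', 'value']
print(cur.fetchone())  # ('device_name', 'Pixel Test')
```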
2. Run the summary with both metrics and metadata:
When you run the summary, you specify both the metrics you want to compute and the query to use for metadata.
The result of a summary is a TraceSummary protobuf message. This message contains a metric_bundles field, which is a list of TraceMetricV2Bundle messages.
Each bundle can contain the results for one or more metrics that were computed together. Using a TraceMetricV2TemplateSpec is the most common way to create a bundle. All metrics generated from a single template are automatically placed in the same bundle, sharing the same specs and row structure. This is highly efficient as the dimension values, which are often repetitive, are only written once per row.
For the memory_per_process template example, the output TraceSummary would contain a TraceMetricV2Bundle like this:
```textproto
# In TraceSummary's metric_bundles field:
metric_bundles {
  # The specs for all three metrics generated by the template.
  specs {
    id: "memory_per_process_min_rss_and_swap"
    dimensions: "process_name"
    value: "min_rss_and_swap"
    # ... query details ...
  }
  specs {
    id: "memory_per_process_max_rss_and_swap"
    dimensions: "process_name"
    value: "max_rss_and_swap"
    # ... query details ...
  }
  specs {
    id: "memory_per_process_avg_rss_and_swap"
    dimensions: "process_name"
    value: "avg_rss_and_swap"
    # ... query details ...
  }
  # Each row contains one set of dimensions and three values, corresponding
  # to the three metrics in `specs`.
  row {
    values { double_value: 100000 }      # min
    values { double_value: 200000 }      # max
    values { double_value: 123456.789 }  # avg
    dimension { string_value: "com.example.app" }
  }
  row {
    values { double_value: 80000 }      # min
    values { double_value: 150000 }     # max
    values { double_value: 98765.432 }  # avg
    dimension { string_value: "system_server" }
  }
  # ...
}
```
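A consumer can unpack a bundle by zipping each row's values against the bundle's specs, since they share the same order. This hypothetical sketch uses plain dicts in place of the real protobuf classes:

```python
# Hypothetical sketch: unpack a TraceMetricV2Bundle-shaped structure into
# {metric_id: {dimension_tuple: value}}. Plain dicts stand in for the protos.
def unpack_bundle(bundle):
    ids = [spec["id"] for spec in bundle["specs"]]
    result = {metric_id: {} for metric_id in ids}
    for row in bundle["rows"]:
        dims = tuple(row["dimensions"])
        # The i-th value in a row belongs to the i-th spec in the bundle.
        for metric_id, value in zip(ids, row["values"]):
            result[metric_id][dims] = value
    return result

bundle = {
    "specs": [{"id": "memory_per_process_min_rss_and_swap"},
              {"id": "memory_per_process_max_rss_and_swap"},
              {"id": "memory_per_process_avg_rss_and_swap"}],
    "rows": [
        {"dimensions": ["com.example.app"], "values": [100000, 200000, 123456.789]},
        {"dimensions": ["system_server"], "values": [80000, 150000, 98765.432]},
    ],
}
print(unpack_bundle(bundle)["memory_per_process_min_rss_and_swap"])
# {('com.example.app',): 100000, ('system_server',): 80000}
```

This also illustrates why bundles are compact: the dimension values are stored once per row and shared by every metric in the bundle.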
Perfetto previously had a different system for computing metrics, often referred to as “v1 metrics.” Trace summarization is the successor to this system, designed to be more robust and easier to use.
Here are the key differences:
- Stable output schema: the output is always a single, consistent protobuf message (TraceSummary), ensuring that all summaries are structured consistently.
- No custom protos: you no longer need to write and compile custom .proto files for the output. You only need to define what data to compute (the query) and its shape (dimensions and value). Perfetto handles the rest.

You can compute summaries using different Perfetto tools.
TraceSummarySpec

The top-level message for configuring a summary. It contains:
- metric_spec (repeated TraceMetricV2Spec): Defines individual metrics.
- query (repeated PerfettoSqlStructuredQuery): Defines shared queries that can be referenced by metrics or used for trace-wide metadata.

TraceSummary

The top-level message for the output of a summary. It contains:
- metric_bundles (repeated TraceMetricV2Bundle): The computed results for each metric.
- metadata (repeated Metadata): Key-value pairs of trace-level metadata.

TraceMetricV2Spec

Defines a single metric.
- id (string): A unique identifier for the metric.
- dimensions (repeated string): Columns that act as dimensions.
- value (string): The column containing the metric's numerical value.
- unit (oneof): The unit of the metric's value (e.g. TIME_NANOS, BYTES). Can also be a custom_unit string.
- polarity (enum): Whether a higher or lower value is better (e.g. HIGHER_IS_BETTER, LOWER_IS_BETTER).
- query (PerfettoSqlStructuredQuery): The query to compute the data.

TraceMetricV2TemplateSpec

Defines a template for generating multiple, related metrics from a single, shared configuration. This is useful for reducing duplication when you have several metrics that share the same query and dimensions.
Using a template automatically bundles the generated metrics into a single TraceMetricV2Bundle in the output.
- id_prefix (string): A prefix for the IDs of all generated metrics.
- dimensions (repeated string): The shared dimensions for all metrics.
- value_columns (repeated string): A list of columns from the query. Each column generates a metric with the ID <id_prefix>_<value_column>.
- value_column_specs (repeated ValueColumnSpec): A list of value column specifications, allowing each to have its own unit and polarity.
- query (PerfettoSqlStructuredQuery): The shared query that computes the data for all metrics.

TraceMetricV2Bundle

Contains the results for one or more metrics which are bundled together.
- specs (repeated TraceMetricV2Spec): The specs for all the metrics in the bundle.
- row (repeated Row): Each row contains the dimension values and all the metric values for that set of dimensions.

PerfettoSqlStructuredQuery

The PerfettoSqlStructuredQuery message provides a structured way to define PerfettoSQL queries. It is built by defining a data source and then optionally applying filters, group_by operations, and select_columns transformations.
A query's source can be one of the following:
- table: A PerfettoSQL table or view.
- sql: An arbitrary SQL SELECT statement.
- simple_slices: A convenience for querying the slice table.
- inner_query: A nested structured query.
- inner_query_id: A reference to a shared structured query.
- interval_intersect: A time-based intersection of a base data source with one or more interval data sources.

These operations are applied sequentially to the data from the source:
- filters: A list of conditions to filter rows.
- group_by: Groups rows and applies aggregate functions.
- select_columns: Selects and optionally renames columns.