There are multiple ways to use Firebase Test Lab to run tests on your Android app, including the command-line interface, Android Studio, the Test Lab UI in the Firebase console, and the Testing API. However you choose to start your tests, the results are stored in the Firebase project that you specify. You can explore the results with any of the tools above, or programmatically through the ToolResults API. This page describes how to review and analyze these test results.
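For example, a test can be started from the command line with the gcloud CLI. This is a minimal sketch, not a complete reference; the APK paths and the device selection below are placeholders to adapt to your own project:

```shell
# Start an instrumentation test on one device from the command line.
# app-debug.apk, app-debug-androidTest.apk, and the device values
# are placeholders; substitute your own APKs and target device.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=Pixel2,version=28,locale=en,orientation=portrait
```

When the run finishes, gcloud prints a URL to the test matrix results page described later on this page.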
To see the results from all your previous test runs, select Test Lab in the left navigation panel of your project in the Firebase console. This page displays all the test runs from the apps that you have tested with your project using Test Lab.
To review test results, you first need to understand three concepts:
Devices × Test executions = Test matrix

- Device: A device you run a test on, such as a phone, a tablet, or a wearable device. Devices in a test matrix are identified by device model, OS version, locale, and screen orientation.
- Test execution: A test run on a single device. In a typical test matrix there is one test execution per selected device.
- Test matrix: A set of test executions. If any test execution in a matrix fails, the whole matrix fails as well.
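The matrix's size follows directly from these definitions: Test Lab creates one device per combination of the dimensions you select, and (in a typical matrix) one test execution per device. A quick sketch of the arithmetic, with hypothetical dimension counts:

```shell
# Hypothetical selection: 4 device models, 2 OS versions,
# 1 locale, 1 screen orientation.
models=4
versions=2
locales=1
orientations=1

# One test execution per device combination in a typical matrix.
executions=$((models * versions * locales * orientations))
echo "This matrix contains $executions test executions."
# prints: This matrix contains 8 test executions.
```

If any one of those eight executions fails, the whole matrix is marked as failed.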
The following sections explain how to navigate test results.
Interpret test history results
When you navigate to your test results by selecting Test Lab, you see the results of tests you have run so far.
Testing history is grouped by app. Only the most recent five test matrices are shown for each app; if more are available, you can click the All Matrices link at the bottom of the app test list to see the complete list for that app.
Interpret test matrix results
When starting a test through the Test Lab UI, you are redirected to a page where you can see your test matrix and click a specific test execution to view test results. Android Studio and the gcloud command provide a URL for the test matrix results page as well.
In a typical test matrix, you might run a test across a dozen or so different devices. Each test execution can have a different outcome. The possible outcomes for any test execution in a test matrix include the following:
- Passed: No failures were encountered.
- Failed: At least one failure was encountered.
- Inconclusive: Test results were inconclusive, possibly due to a Test Lab error.
- Skipped: The selected dimension values for some test executions in the matrix were incompatible. This occurs when the devices you selected are incompatible with one or more of the Android API levels that you selected.
To review aggregated test results for all test matrices for a given app in your Firebase project, click the name of the app, as shown in the following example:
Example test matrix results page with only four test executions
This takes you to the test matrix list for your app. From there, you can click the name of any test matrix to see its results, or click the name of the app (shown in the red box below) to view the test matrix lists for other apps associated with your Firebase project.
Example test matrix list page
A test matrix can pass, fail, or be inconclusive. A test matrix is shown as failed or inconclusive if any test executions in that matrix fail or are inconclusive.
Interpret Robo test results
If you ran your tests with Robo, your results include videos and screenshots of Robo crawling your UI, in addition to the usual test metrics. These videos and screenshots include visual indications of the actions Robo took during the crawl, similar to the 'Show touches' feature in Android. You can use these indications to follow Robo's progress and to reproduce any bugs it uncovers.
Example Robo test results video
Interpret results from a single test execution
From the test matrix results page, click on one of the test executions to see the result of that specific test execution.
Example test execution results page
On this page, you can see the time required for each test execution. You can also see results for the specific test cases that correspond to methods in your test APK (for instrumentation tests), along with detailed test results including test logs, screenshots, and videos. For Robo tests, detailed results also include an activity map that graphically shows the UI paths that Robo visited.
Partitioned instrumentation test results
To help you interpret instrumentation test results, Test Lab separates each test into its own detailed report page, complete with stack traces, logs, and videos. This works whether or not you use Android Test Orchestrator.
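If you run instrumentation tests from the command line, Android Test Orchestrator can be enabled with a flag on the same gcloud command. A hedged sketch; the APK names are placeholders:

```shell
# Run each test method in its own Instrumentation invocation via
# Android Test Orchestrator. APK paths are placeholders.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --use-orchestrator
```

With or without the flag, each test still gets its own detailed report page in the results UI.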
Example testcase results page
Interpret accessibility results
Robo tests use Android Accessibility Scanner to detect accessibility issues in your app (note that you can also run a scan locally on your device). For instructions on how to review and interpret the accessibility results of your Robo test, visit Get started with Accessibility Scanner.
For general information on how to improve the accessibility of your app, visit the Android Developer Accessibility documentation.
Tests run on physical devices also return performance metrics:
|Metric|Required device configuration|
|---|---|
|App startup time|API 19+|
|CPU usage|API 21+|
|Frames per second|API 21+ and includes a SurfaceView|
|Graphics performance|API 23+|
Graphics performance details
The graphics performance report contains stats on several key graphics metrics:
- Missed Vsync: The number of missed Vsync events, divided by the number of frames that took longer than 16 ms to render.
- High input latency: The number of input events that took longer than 24 ms, divided by the number of frames that took longer than 16 ms to render.
- Slow UI thread: The number of times the UI thread took more than 8 ms to complete, divided by the number of frames that took longer than 16 ms to render.
- Slow draw commands: The number of times that sending draw commands to the GPU took more than 12 ms, divided by the number of frames that took longer than 16 ms to render.
- Slow bitmap uploads: The number of times that a bitmap took longer than 3.2 ms to upload to the GPU, divided by the number of frames that took longer than 16 ms to render.
- Render time: The distribution of render times for each frame of the test run. Render times greater than 32 ms cause a perceptible slowdown of your UI. Render times of 700 ms or more indicate frozen frames. Render data is gathered from dumpsys gfxinfo.
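If you want to inspect comparable frame-timing data on a locally connected device, Android exposes the same counters through the dumpsys gfxinfo service. The package name below is a placeholder:

```shell
# Dump per-frame render timing for one app on a connected device.
# Replace com.example.app with your app's package name.
adb shell dumpsys gfxinfo com.example.app framestats
```

Comparing a local dump against the Test Lab graphics report can help you tell device-specific jank from app-wide rendering problems.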
Detailed test results
Detailed test results are available for 90 days after you run a test and are stored in a Google Cloud Storage (GCS) bucket (but are also visible in the Firebase console). You can view detailed test results in the GCS bucket when you click View Source Files on the test execution results page. When detailed test results are no longer available, you can still see which tests passed or failed.
To retain detailed test results for longer than 90 days, you need to send these test results to a GCS bucket that you own using the --results-bucket gcloud command-line option. You can then set the Age setting to determine how long results are stored in your GCS bucket. See Lifecycle conditions for information about how to change the Age setting.
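As a sketch of that option (the bucket name and APK paths below are placeholders, not values from this page):

```shell
# Store detailed results in a Cloud Storage bucket you own, so they
# outlive the 90-day default. All names here are placeholders.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --results-bucket my-test-results-bucket
```

After the run, apply a lifecycle rule with an Age condition on that bucket to control how long the results are kept.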