This page provides troubleshooting help and answers to frequently asked questions about running tests with Firebase Test Lab. Known issues are also documented. If you can't find what you're looking for or need additional help, join the #test-lab channel on Firebase Slack or contact Firebase support.
Troubleshooting
When you select a device with a high capacity level in the Test Lab catalog, tests may start faster. When a device has low capacity, tests might take longer to run. If the number of tests invoked is much larger than the capacity of the selected devices, tests can take longer to finish.
Tests running on devices of any capacity level may take longer due to the following factors:
- Traffic, which affects device availability and test speed.
- Device or infrastructure failures, which can happen at any time. To check whether there is a reported infrastructure issue for Test Lab, see the Firebase status dashboard.
To learn more about device capacity in Test Lab, see device capacity information for Android and iOS.
Inconclusive test outcomes commonly occur either because of canceled test runs or infrastructure errors.
Infrastructure errors are caused by internal Test Lab issues, like network errors or unexpected device behaviors. Test Lab internally retries test runs that produce infrastructure errors multiple times before reporting an inconclusive outcome; however, you can disable these retries using failFast.
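If you prefer the matrix to report a result immediately instead of waiting on these retries, the gcloud CLI exposes this behavior as a --fail-fast flag on the run command. The following is a minimal sketch, assuming an Android instrumentation test; the APK paths and device are placeholders:

# Make at most one attempt per execution instead of retrying infrastructure errors.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=oriole,version=33 \
  --fail-fast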
To determine the cause of the error, follow these steps:
- Check for known outages in the Firebase status dashboard.
- Retry the test in Test Lab to verify that it is reproducible.
- Try running the test on a different device or device type, if applicable.
- If the issue persists, contact the Test Lab team in the #test-lab channel on Firebase Slack.
Sharding can cause your tests to run longer when the number of shards you specified exceeds the number of devices available for use in Test Lab. To avoid this situation, try switching to a different device. For more information about choosing a different device, see Device Capacity.
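With gcloud, you choose the shard count yourself, so one mitigation is to keep it at or below the number of devices you expect to be available. A minimal sketch for an Android instrumentation test; the APK paths and device are placeholders:

# Uniform sharding splits test cases evenly across the requested number of shards.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=oriole,version=33 \
  --num-uniform-shards 4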
When you submit a test request, your app is first validated, re-signed, and otherwise prepared for running tests on a device. Normally, this process completes within a few seconds, but it can be affected by factors such as the size of your app.
After your app is prepared, test executions are scheduled and remain in a queue until a device is available to run them. Until all test executions finish running, the matrix status is "Pending" (regardless of whether test executions are in the queue or actively running).
After a test execution finishes, test artifacts are downloaded from the device, processed, and uploaded to Cloud Storage. The duration of this step can be affected by the number and size of the artifacts.
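If you want to control where those artifacts are stored, or inspect them after the matrix finishes, you can point the run at a Cloud Storage location you own and browse it with gsutil. A sketch with placeholder bucket and directory names:

# Send results to your own bucket and directory (names are placeholders).
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --results-bucket my-test-lab-results \
  --results-dir run-001

# Browse the uploaded artifacts (logs, videos, XML results) after the run.
gsutil ls gs://my-test-lab-results/run-001/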
Frequently asked questions
Firebase Test Lab offers no-cost quotas for testing on devices and for using Cloud APIs. Note that the testing quota depends on your Firebase pricing plan, while the Cloud API quotas do not.
Testing quota
Testing quotas are determined by the number of devices used to run tests. The Firebase Spark plan has a fixed testing quota at no cost to users. For the Blaze plan, your quotas might increase if your usage of Google Cloud increases over time. If you reach your testing quota, wait until the next day or upgrade to the Blaze plan if you are currently on the Spark plan. If you are already on the Blaze plan, you can request a quota increase. For more information, see Testing quota.
You can monitor your testing quota usage in the Google Cloud console.
Cloud Testing API quota
The Cloud Testing API comes with two quota limits: requests per day per project, and requests per every 100 seconds per project. You can monitor your usage in the Google Cloud console.
Cloud Tool Results API quota
The Cloud Tool Results API comes with two quota limits: queries per day per project, and queries per every 100 seconds per project. You can monitor your usage in the Google Cloud console.
Refer to Cloud API quotas for Test Lab for more information on API limits. If you've reached an API quota:
- Submit a request for higher quotas by editing your quotas directly in the Google Cloud console (note that most limits are set to maximum by default), or
- Request higher API quotas by filling out a request form in the Google Cloud console or by contacting Firebase support.
From your backend, you can determine if traffic is coming from Firebase-hosted test devices by checking the source IP address against our IP ranges.
Test Lab does not work with VPC Service Controls (VPC-SC), which blocks the copying of apps and other test artifacts between Test Lab's internal storage and users' results buckets.
To detect flaky behavior in your tests, we recommend using the --num-flaky-test-attempts option (see the example after the list below). Deflake reruns are billed or counted toward your daily quota in the same way as regular test executions.
Keep the following in mind:
- The entire test execution runs again when a failure is detected. There’s no support for retrying only failed test cases.
- Deflake retry runs are scheduled to run at the same time, but are not guaranteed to run in parallel, for example, when traffic exceeds the number of available devices.
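As a sketch of the option mentioned above, with placeholder APK paths and device:

# Rerun the entire execution up to two more times if any test case fails.
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=oriole,version=33 \
  --num-flaky-test-attempts 2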
While some of these items are on our roadmap, we're currently unable to commit to supporting these testing and app development platforms.
Detailed device information is available through the API and can be accessed from the gcloud client using the describe command:
gcloud firebase test ios models describe MODEL
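The MODEL argument is a model ID from the device catalog; you can list the catalog first to find one:

gcloud firebase test ios models list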
Sharding isn't natively supported within Test Lab for iOS. However, you can use the Flank client to shard iOS test cases.
This works by setting the OnlyTestIdentifiers key and values in the .xctestrun file. See the man page for xcodebuild.xctestrun for more details.
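For illustration, an .xctestrun test-target entry restricted to two test cases might look like the following sketch; the target name (MyAppUITests), the test identifiers, and the omitted keys are assumptions:

<key>MyAppUITests</key>
<dict>
    <key>OnlyTestIdentifiers</key>
    <array>
        <string>LoginTests/testValidLogin</string>
        <string>LoginTests/testInvalidPassword</string>
    </array>
    <!-- Other target keys (TestBundlePath, TestHostPath, ...) omitted. -->
</dict>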
Known issues
Robo test cannot bypass sign-in screens that require additional user action beyond entering credentials to sign in, for example, completing a CAPTCHA.
Robo test works best with apps that use UI elements from the Android UI framework (including View, ViewGroup, and WebView objects). If you use Robo test to exercise apps that use other UI frameworks, including apps that use the Unity game engine, the test may exit without exploring beyond the first screen.