Create Firebase Remote Config Experiments with A/B Testing

When you use Firebase Remote Config to push an update to an app with an active user base, you want to make sure you get it right. You might be uncertain about the following:

  • The best way to implement a feature to optimize the user experience. Too often, app developers don't learn that their users dislike a new feature or an updated user experience until their app's rating in the app store declines. A/B testing can help measure whether your users like new variants of features, or whether they prefer the app as it currently exists. Plus, keeping most of your users in a control group ensures that most of your user base can continue to use your app without experiencing any changes to its behavior or appearance until the experiment has concluded.
  • The best way to optimize the user experience for a business goal. Sometimes you’re implementing product changes to maximize a metric like revenue or retention. With A/B testing, you set your business objective, and Firebase does the statistical analysis to determine if a variant is outperforming the control group for your selected objective.

To A/B test feature variants with a control group, do the following:

  1. Create your experiment.
  2. Validate your experiment on a test device.
  3. Manage your experiment.

The video tutorials at the end of this guide are a valuable way to learn more about creating and running effective A/B tests.

Create an experiment

A Remote Config experiment lets you evaluate multiple variants on one or more Remote Config parameters.

  1. On the Firebase console navigation bar, click Grow, click Remote Config, and then click A/B testing on the right-hand side of the screen.
  2. Click Create next to the Remote Config experiments heading.
  3. Fill out the following fields to define Experiment basics:

    • Name: Enter a name for your experiment.
    • Description (optional): Enter additional text to describe the experiment.
    • Target users: Choose the app that uses your experiment. You can also target a subset of your users to participate in your experiment by choosing one or more of the following options:
      • Version: One or more versions of your app
      • User audience: Analytics audiences used to target users who might be included in the experiment
      • User property: One or more Analytics user properties for selecting users who might be included in the experiment
      • Device country: One or more countries or regions for selecting users who might be included in the experiment
      • Device language: One or more languages and locales used to select users who might be included in the experiment
    • Percentage of target users: Enter the percentage of your app's user base, matching the criteria set under Target users, that you want to divide evenly between the control group and the one or more variants in your experiment. This can be any percentage between 0.01% and 100%. Users are randomly reassigned for each experiment, including duplicated experiments.
  4. To choose or create one or more parameters to experiment with, click Add Parameter, and then type or select a parameter name that already exists in the Firebase console. You can also create a parameter that has not previously been used in the Firebase console, but it must exist in your app for it to have any effect. You can repeat this step to add multiple parameters to your experiment.

  5. (optional) To add more than one variant to your experiment, click Add Variant. By default, experiments have one control group and one variant.

  6. (optional) Enter a name for each variant in your experiment to replace the names Variant A, Variant B, etc.
  7. Change one or more parameters for specific variants. Any unchanged parameters are the same for users not included in the experiment.

  8. Define a goal for your experiment:

    • To optimize a metric, click Select goal metric, and choose a metric to use when evaluating the variants in your experiment. These include built-in objectives (engagement, purchases, revenue, retention, etc.), Analytics conversion events, and other Analytics events.
    • (optional) To set an activation event to ensure that only users who have first triggered some Analytics event are counted in your experiment, click Advanced options, and then choose an Analytics conversion event or another Analytics event.

    To learn more about the Analytics events used to measure how well variants achieve experiment goals, see Goal metrics.

  9. Define other metrics to track the results of your experiment and detect side-effects of experiment variants. To learn more about other metrics to track, see Other metrics.

  10. Click Review to save your experiment.
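Firebase assigns users to the control group and variants on its backend, so you never implement the split yourself. As a rough mental model of how a percentage of target users can be divided evenly between groups, here is a hypothetical sketch; the hashing scheme and method names are illustrative, not Firebase's actual algorithm:

```java
import java.util.HashMap;
import java.util.Map;

public class ExperimentBucketing {
    // Deterministically assigns a user to "none", "control", or a variant.
    // percent is the "Percentage of target users" (0.01-100); groups is the
    // total number of groups (control + variants). Illustrative only.
    static String assignGroup(String userId, double percent, int groups) {
        // Map the user id to a stable pseudo-random value in [0, 1).
        double u = Math.floorMod(userId.hashCode(), 10000) / 10000.0;
        if (u >= percent / 100.0) {
            return "none"; // user falls outside the experiment
        }
        // Split the in-experiment range evenly between the groups.
        int g = (int) (u / (percent / 100.0) * groups);
        return g == 0 ? "control" : "variant_" + g;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 100000; i++) {
            // 10% of users, split over a control group and one variant.
            String group = assignGroup("user-" + i, 10.0, 2);
            counts.merge(group, 1, Integer::sum);
        }
        System.out.println(counts); // most users fall outside the 10% experiment
    }
}
```

With a 10% target percentage and two groups, most users stay outside the experiment and the remainder is split evenly between control and variant, which is why a small percentage is usually a safe starting point.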

Validate your experiment on a test device

Each Firebase app installation has an instance ID token (or registration token) associated with it. You can use this token to test specific experiment variants on a test device with your app installed. To validate your experiment on a test device, do the following:

  1. Get the instance ID token as follows:

    iOS

    // Log the app's current instance ID token (may be nil if the token
    // has not been generated yet).
    NSLog(@"%@", [[FIRInstanceID instanceID] token]);

    Android

    // Log the app's current instance ID token (getToken() returns null if
    // the token has not been generated yet).
    Log.d("IID_TOKEN", FirebaseInstanceId.getInstance().getToken());

  2. On the Firebase console navigation bar, click Grow, click Remote Config or Notifications, and then click A/B testing.
  3. Click Draft, and then click the title of your experiment.
  4. Click Manage test devices, and then enter the instance ID token for a test device and choose the experiment variant to send to that test device.
  5. Run the app and confirm that the selected variant is being received on the test device.

To learn more about the instance ID token, see FIRInstanceID (iOS) or FirebaseInstanceId (Android).

Manage your experiment

Whether you create an experiment with Remote Config or the Notifications composer, you can then validate and start your experiment, monitor your experiment while it is running, and increase the number of users included in your running experiment.

When your experiment is done, you can take note of the settings used by the winning variant, and then roll out those settings to all users. Or, you can run another experiment.

Start an experiment

  1. On the Firebase console navigation bar, expand Grow, click Remote Config or Notifications, and then click A/B testing.
  2. Click Draft, and then click the title of your experiment.
  3. To validate that your app has users who would be included in your experiment, check for a number greater than 0% in the Experiment overview (for example, 1% of users matching the criteria).
  4. To change your experiment, click Edit.
  5. To start your experiment, click Start Experiment. You can run up to 6 experiments per project at a time.

Monitor an experiment

Once an experiment has been running for a while, you can check in on its progress and see what your results look like for the users who have participated in your experiment so far.

  1. On the Firebase console navigation bar, click Grow, click Remote Config or Notifications, and then click A/B testing.
  2. Click Running, and then click the title of your experiment. On this page, you can view various statistics about your running experiment, including your goal metric and other metrics. For each metric, the following information is available:

    • Improvement: A measure of the improvement of a metric for a given variant as compared to the baseline (or control group). Calculated by comparing the value range for the variant to the value range for the baseline.
    • Probability to beat baseline: The estimated probability that a given variant beats the baseline for the selected metric.
    • Probability to be the best variant: The estimated probability that a given variant beats other variants for the selected metric.
    • Value per user: Based on experiment results, this is the predicted range that the metric value will fall into over time.
    • Total value: The observed cumulative value for the control group or variant. The value is used to measure how well each experiment variant performs, and is used to calculate Improvement, Value per user, Probability to beat baseline, and Probability to be the best variant. Depending on the metric being measured, this column may be labeled "Duration per user," "Retention rate," or "Conversion rate."
  3. To increase the number of users included in your experiment, click Increase Distribution, and then select an increased percentage to add more eligible users to your experiment.

  4. After your experiment has run for a while (at least 24 hours), data on this page indicates which variant, if any, is the "leader." Some measurements are accompanied by a bar chart that presents the data in a visual format.
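Firebase computes "Probability to beat baseline" with a Bayesian analysis on its backend, which you cannot reproduce exactly from console data. As a toy illustration of the underlying idea for a conversion-rate metric, this parametric-bootstrap sketch (all counts hypothetical) estimates how often a resampled variant rate exceeds a resampled baseline rate:

```java
import java.util.Random;

public class BeatBaseline {
    // Toy estimate of P(variant rate > baseline rate) for a conversion
    // metric. Resamples binomial outcomes around the observed rates (a
    // parametric bootstrap); this only loosely approximates Firebase's
    // actual Bayesian analysis.
    static double probabilityToBeatBaseline(int baseUsers, int baseConv,
                                            int varUsers, int varConv,
                                            int iterations, long seed) {
        Random rng = new Random(seed);
        double pBase = (double) baseConv / baseUsers;
        double pVar = (double) varConv / varUsers;
        int wins = 0;
        for (int i = 0; i < iterations; i++) {
            if (sampleRate(rng, varUsers, pVar) > sampleRate(rng, baseUsers, pBase)) {
                wins++;
            }
        }
        return (double) wins / iterations;
    }

    // Draws one simulated conversion rate: n Bernoulli trials at rate p.
    static double sampleRate(Random rng, int n, double p) {
        int conversions = 0;
        for (int i = 0; i < n; i++) {
            if (rng.nextDouble() < p) conversions++;
        }
        return (double) conversions / n;
    }

    public static void main(String[] args) {
        // Hypothetical results: baseline converted 200/2000, variant 260/2000.
        double prob = probabilityToBeatBaseline(2000, 200, 2000, 260, 2000, 42L);
        System.out.printf("Probability to beat baseline: %.2f%n", prob);
    }
}
```

This also illustrates why an experiment needs to run for a while: with small user counts, the resampled rates overlap heavily and the probability hovers near 50% even when the variant is genuinely better.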

Roll out an experiment to all users

After an experiment has run long enough that you have a "leader," or winning variant, for your goal metric, you can roll out the experiment to 100% of users. This stops the experiment, if it's still running, and allows you to select a value to publish in Remote Config for all users moving forward. Even if your experiment has not created a clear winner, you can still choose to roll out one variant to all of your users.

  1. On the Firebase console navigation bar, click Grow, click Remote Config or Notifications, and then click A/B testing.
  2. Click Completed or Running, click an experiment that you want to roll out to all users, click the context menu (more_vert), and then click Roll out to 100%.
  3. Roll out your experiment to all users by doing one of the following:

    • For an experiment that uses the Notifications composer, use the Roll out message dialog to send the message to the remaining targeted users who were not part of the experiment.
    • For a Remote Config experiment, use the dialog to determine which Remote Config parameter values to change for all users.

Expand an experiment

If you find that an experiment isn't bringing in enough users for A/B testing to declare a leader, you can expand your experiment to reach a larger percentage of the app's user base.

  1. On the Firebase console navigation bar, click Grow, click Remote Config or Notifications, and then click A/B testing.
  2. Click Running, hover over your experiment, click the context menu (more_vert), and then click Increase Distribution.
  3. The console displays a dialog with an option to increase the percentage of users in the currently running experiment. Enter a percentage greater than the current one, and then click Send. The experiment is pushed out to the percentage of users you specify.

Duplicate or stop an experiment

  1. On the Firebase console navigation bar, click Grow, click Remote Config or Notifications, and then click A/B testing.
  2. Click Completed or Running, hover over your experiment, click the context menu (more_vert), and then click Duplicate or Stop.

User targeting

You can target the users to include in the control group or the variants in your experiment using the following user-targeting criteria.

  • Version
    Operators: contains, does not contain, matches exactly, contains regex
    Enter a value for one or more app versions that you want to include in the experiment. When using any of the contains, does not contain, or matches exactly operators, you can provide a comma-separated list of values. When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target version string. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string.
  • User audience(s)
    Operators: includes all of, includes at least one of, does not include all of, does not include at least one of
    Select one or more Analytics audiences to target users who might be included in your experiment.
  • User property
    Operators (text): contains, does not contain, exactly matches, contains regex
    Operators (numbers): <, ≤, =, ≥, >
    An Analytics user property is used to select users who might be included in an experiment, with a range of options for selecting user property values. When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target property value. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string.
  • Prediction
    Target groups of users defined by Firebase Predictions (for example, users who are likely to stop using your app, or users who are likely to make an in-app purchase). Select one of the values defined by the Firebase Predictions tool. If an option is not available, you may need to opt in to Firebase Predictions by visiting the Predictions section of the Firebase console.
  • Device country
    One or more countries or regions used to select users who might be included in the experiment.
  • Device language
    One or more languages and locales used to select users who might be included in the experiment. This targeting criterion is only available for Remote Config.
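The contains regex operator accepts RE2 syntax; for simple anchored patterns like the ones below, Java's java.util.regex behaves the same way, so this sketch uses it to show how the ^ and $ anchors scope a version match (the version strings are made up):

```java
import java.util.regex.Pattern;

public class VersionTargeting {
    public static void main(String[] args) {
        // An unanchored pattern matches anywhere in the version string.
        Pattern anywhere = Pattern.compile("2\\.3");
        System.out.println(anywhere.matcher("1.2.3").find());    // true: "2.3" occurs inside

        // ^ anchors the match to the start of the version string...
        Pattern series23 = Pattern.compile("^2\\.3\\.");
        System.out.println(series23.matcher("2.3.1").find());    // true
        System.out.println(series23.matcher("1.2.3.4").find());  // false

        // ...and ^...$ together require the entire string to match.
        Pattern exact = Pattern.compile("^2\\.3\\.1$");
        System.out.println(exact.matcher("2.3.1").find());       // true
        System.out.println(exact.matcher("2.3.10").find());      // false
    }
}
```

Without the trailing $, a pattern like ^2\.3\.1 would also target 2.3.10 and 2.3.11, which is a common source of over-broad targeting.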

A/B testing metrics

When you create your experiment, you choose a metric that is used to compare experiment variants, and you can also choose other metrics to track to help you to better understand each experiment variant and detect any significant side-effects (such as app crashes). The following tables provide details on how goal metrics and other metrics are calculated:

Goal metrics

  • Daily user engagement: The number of users who have your app in the foreground each day for long enough to trigger the user_engagement Analytics event.
  • Retention (1 day): The number of users who return to your app on a daily basis.
  • Retention (2-3 days): The number of users who return to your app within 2-3 days.
  • Retention (4-7 days): The number of users who return to your app within 4-7 days.
  • Retention (8-14 days): The number of users who return to your app within 8-14 days.
  • Retention (15+ days): The number of users who return to your app 15 or more days after they last used it.
  • Notification open: Tracks whether a user opens a notification sent by the Notifications composer.
  • Purchase revenue: A metric that measures revenue from in-app purchases.
  • first_open: An Analytics event that triggers when a user first opens an app after installing or reinstalling it. Used as part of a conversion funnel.
  • notification_open: An Analytics event that triggers when a user opens a notification sent by the Notifications composer. Used as part of a conversion funnel.
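The retention buckets above partition returning users by how many days have passed since they last used the app. A small sketch of that bucketing logic (the helper below is hypothetical, not a Firebase API):

```java
public class RetentionBuckets {
    // Maps days-since-last-use to the retention bucket names used above.
    // Illustrative only; Firebase computes these buckets on its backend.
    static String retentionBucket(int daysSinceLastUse) {
        if (daysSinceLastUse <= 1) return "Retention (1 day)";
        if (daysSinceLastUse <= 3) return "Retention (2-3 days)";
        if (daysSinceLastUse <= 7) return "Retention (4-7 days)";
        if (daysSinceLastUse <= 14) return "Retention (8-14 days)";
        return "Retention (15+ days)";
    }

    public static void main(String[] args) {
        System.out.println(retentionBucket(1));   // Retention (1 day)
        System.out.println(retentionBucket(5));   // Retention (4-7 days)
        System.out.println(retentionBucket(30));  // Retention (15+ days)
    }
}
```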

Other metrics

  • Crash-free users: The percentage of users who have not encountered errors in your app that were detected by the Firebase Crash Reporting SDK during the experiment. To learn more, see Crash-Free User Metrics.
  • notification_dismiss: An Analytics event that triggers when a notification sent by the Notifications composer is dismissed (Android only).
  • notification_receive: An Analytics event that triggers when a notification sent by the Notifications composer is received while the app is in the background (Android only).
  • os_update: An Analytics event that tracks when the device operating system is updated to a new version. To learn more, see Automatically collected events.
  • screen_view: An Analytics event that tracks screens viewed within your app. To learn more, see Track Screenviews.
  • session_start: An Analytics event that counts user sessions in your app. To learn more, see Automatically collected events.
  • user_engagement: An Analytics event that triggers periodically while your app is in the foreground. To learn more, see Automatically collected events.

Learn more from the A/B testing video series

The video series "A/B Test Like a Pro" explores A/B testing's details and complexities through an end-to-end example test.

Episode 1: Preparing for A/B testing

What exactly is A/B testing? Why should I care about it, and how do I get started? This video answers your questions about preparing A/B tests and walks through the setup for an example Remote Config test.

Episode 2: Creating an experiment

This video dives deep into the details of creating an effective A/B test. You'll learn about defining test groups, selecting variants, and all the other tasks for creating a successful test.

Episode 3: Understanding experiment results

This video is currently in production and coming soon.

Episode 4: A/B test notifications

Coming soon!

Episode 5: How I added Remote Config to my iOS app

Coming soon!
