Optimizely Feature Experimentation

Score: 8.6 out of 10

56 Reviews and Ratings

What is Optimizely Feature Experimentation?

Optimizely Feature Experimentation unites feature flagging, A/B testing, and built-in collaboration—so marketers can release, experiment, and optimize with confidence in one platform.

Media

Feature Flag Setup. Here users can run flexible A/B and multi-armed bandit tests. They can also:

- Set up a single feature flag to test multiple variations and experiment types
- Enable targeted deliveries and rollouts for more precise experimentation
- Roll back changes quickly when needed to ensure experiment accuracy and reduce risks
- Increase testing flexibility with control over experiment types and delivery methods
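
As a rough illustration of the flag-based delivery described above, here is a minimal sketch assuming Optimizely's Python SDK and its Decide API; the SDK key, the "checkout_redesign" flag key, and the "button_color" variable are hypothetical placeholders, not details taken from this page.

```python
# Minimal sketch of a feature-flag decision, assuming the Optimizely
# Python SDK (pip install optimizely-sdk). The SDK key, flag key, and
# variable name are hypothetical placeholders.
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

# Decisions are made per user, so a user context is created first.
user = client.create_user_context("user-123")

# decide() evaluates the flag's rules (experiments, targeted deliveries)
# and reports whether the flag is on and which variation applies.
decision = user.decide("checkout_redesign")

if decision.enabled:
    # A single flag can drive multiple variations via flag variables.
    color = decision.variables.get("button_color", "blue")
    print(f"Serving variation {decision.variation_key} with button color {color}")
else:
    # Flag off or rolled back: fall back to the existing experience.
    print("Flag disabled; serving the default experience.")
```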

Audience Setup. This is used to target specific user segments for personalized experiments. Users can:

- Create and customize audiences based on user attributes
- Refine audience segments to ensure the right users are included in tests
- Enhance experiment relevance by setting specific conditions for user groups
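
Audience conditions like these are evaluated against attributes supplied with the user. A hedged sketch, again assuming the Python SDK; the attribute names and flag key below are invented for illustration:

```python
# Sketch: supplying user attributes so audience conditions can match.
# Attribute names ("plan", "country") and the flag key are hypothetical.
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

# Attributes are compared against the audience conditions defined in the UI,
# e.g. an audience such as "premium customers in Australia".
user = client.create_user_context(
    "user-456",
    {"plan": "premium", "country": "AU"},
)

decision = user.decide("loyalty_banner")
print(decision.enabled, decision.rule_key)  # which experiment or rollout rule matched
```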

Experiment Results. These support the analysis and optimization of experiment outcomes. Viewers can:

- Examine detailed experiment results, including key metrics like conversion rates and statistical significance
- Compare variations side-by-side to identify winning treatments
- Use advanced filters to segment and drill down into specific audience or test data
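
The conversion rates and significance figures on the results page are computed from events reported by the SDKs. A minimal sketch of recording such a conversion, assuming the same Python SDK; the event key and revenue tag are placeholders:

```python
# Sketch: reporting a conversion event that feeds the results metrics.
# The "purchase" event key and revenue tag are hypothetical placeholders.
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")
user = client.create_user_context("user-123")

# The decision associates the user with a variation; the tracked event then
# counts toward that variation's conversion and revenue metrics.
decision = user.decide("checkout_redesign")
user.track_event("purchase", {"revenue": 4999})  # revenue in cents
```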

Program Overview. This offers insight into the overall experimentation program's performance, including:

- A comprehensive view of the entire experimentation program’s status and progress
- Monitoring for key performance metrics like test velocity, success rates, and overall impact
- Evaluation of the impact of experiments with easy-to-read visualizations and reporting tools
- Performance tracking of experiments over time to guide decision-making and optimize strategies

AI Variable Suggestions. These enhance experimentation with AI-driven insights, and can also help with:

- Generating multiple content variations with AI to speed up experiment design
- Improving test quality with content suggestions
- Increasing experimentation velocity and achieving better outcomes with AI-powered optimization

Schedule Changes. These streamline experimentation. Users can:

- Set specific times to toggle flags or rules on/off, ensuring precise control
- Schedule traffic allocation percentages for smooth experiment rollouts
- Increase test velocity and confidence by automating progressive changes

A project manager's perspective of Optimizely.

Use Cases and Deployment Scope

Our team uses it as a core part of our release and validation process across client projects, not just internally. Currently we are working with a whale client, a bank, and we've leveraged feature flags to test different credit card recommendation engines in production without risking the live customer base. That has solved the longstanding problem of all-or-nothing releases, where any newly introduced algorithm carried a huge rollback risk.

Pros

  • We can do feature-flag-based rollouts with surgical control.
  • It's good at progressive delivery in compliance-heavy sectors.

Cons

  • We currently can't correlate feature flagged experiments with Salesforce CRM data.

Return on Investment

  • We have a noteworthy ROI case study from a SaaS onboarding revamp early this year: our A/B test on a guided setup flow improved activation rates by 20 percent, which translated to over $1.2M in retained ARR.

Other Software Used

IBM App Connect, Atlassian Jira, TeamViewer

Optimizely Feature Experimentation Review

Use Cases and Deployment Scope

We use Feature Experimentation for our web and app experiments; currently we have iOS and Android on there, and whenever we make copy changes or roll out new product features, we use it for our experimentation. It addresses the fact that experimentation is usually quite company-wide: having other users and stakeholders who can access the charts and see how the primary and secondary metrics are performing is really helpful. We also use the multi-armed bandit, for instance when we're testing during peak periods like Mother's Day, because we're a flower company. It addresses quite a few problems in that sense and allows everyone to be unified around the experiments.

Pros

  • The multi-armed bandit is really helpful during peak periods. We also like being able to stay flexible with the different metrics we have: it's not something we do all the time, but sometimes we'll look at different primary metrics to optimize the new products that are out.
  • It brings us to statistical significance accurately and quickly, and we can trust and rely on the results it produces.

Cons

  • With apps, one of the things we find a bit difficult is that we obviously have different app versions, so when we start an experiment in one app version, a bug fix or something similar may force us to roll out other versions during the experiment. Being able to manage different versions, and the different user experiences on those versions, would be really helpful.
  • From an app perspective it's very difficult, to be fair. More generically, it's the results side of things: we'd really love to export the results so we have an overview of the experiments we've run and can understand our win rates and those sorts of things. It's hard to get a nice overview of the results and insights; having a single space to look at our insights is where we're struggling.

Great tool for experiment management.

Use Cases and Deployment Scope

The software has really helped us experiment at scale. It has a very powerful and rich analytics system, is AI-enabled, and offers high data output as well as real-time analytics. The implementation process is smooth and the pricing is reasonably affordable.

Pros

  • The software is very intuitive and easy to use.
  • Versatility of the tool as it supports cross team collaboration and real-time peer feedback.
  • Reporting features of the software are very comprehensive and clear.

Cons

  • Integration of the software with third-party applications requires specialist guidance.
  • Limited customization.

Return on Investment

  • The software has reduced risk by allowing features to be tested before they are rolled out.
  • Use of the software has enabled us to test new systems and make data backed decisions.
  • Optimizely Feature Experimentation has enabled us to handle A/B tests efficiently.

Other Software Used

Atlassian Bamboo, Hyland NilNexus, EventTitans

Optimizely Feature Experimentation Review

Use Cases and Deployment Scope

We use Optimizely Feature Experimentation to solve business problems around conversion funnels and conversion rate optimization. Our implementation is purely server-side, so we use it mainly to experiment and then roll out features on the web and in the app. For us, it helps test ideas and hypotheses before risking a rollout to one hundred percent.

Pros

  • It is easy to use: any of our product owners, marketers, or developers can set up experiments and roll them out with some developer support, so the key thing is that the front-end UI is easy to use. Maybe this will come later, but new features such as Opal and the analytics/database-centric engine are something we're interested in as well.

Cons

  • I think the way metrics and events are set up within each experiment could be better. We've had to come up with a bit of a workaround to make it more user-friendly; that's a con for now, but with the analytics database engine that should be resolved.

Alternatives Considered

Monetate

Other Software Used

Rudderstack

How we use Optimizely Feature Experimentation in Healthtech

Use Cases and Deployment Scope

Optimizely Feature Experimentation is the backbone of how we release and validate new product capabilities. It helps us de-risk client-facing changes. In our space, a poorly designed flow or a misstep in a clinical tool can erode trust quickly, so we always need to be certain before scaling any feature.

Pros

  • Feature flagging combined with metrics tracking.

Cons

  • I had to build custom connectors so that experiment results flowed into our analytics stack automatically, because of its poor integration with our legacy record systems.

Return on Investment

  • Average feature rollout time went down by a third.
  • Less engineering waste, because weak features are killed early.

Other Software Used

F5 Distributed Cloud App Stack, Microsoft Security Copilot