Optimizely Full Stack empowers our engineering teams to own their experiments, aligning them more closely with customer metrics and helping them articulate the value of what they do.
Optimizely Full Stack doesn't come with some of the bells-and-whistles features that you get with the client-side product. The setup is more complicated, and Optimizely doesn't give you out-of-the-box metrics; you have to calibrate and align everything yourself.
Optimizely Full Stack is extremely powerful for experimenting and A/B testing. As a developer, it provides a full experimentation framework for our product. It's highly customizable (since almost everything is done in code), and it offers detailed insight into how our customers are interacting with the product. With Optimizely Full Stack, I've been able to increase conversion rate, identify pain points in our product, reduce friction, and create a better UX for our customers. It's simply a necessary tool at this point. We use it every single day, and we can't live without it.
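For illustration, a minimal sketch of that code-driven flow using the JavaScript SDK; the SDK key, experiment key, variation key, event key, and user ID below are hypothetical, not taken from the review:

```typescript
import * as optimizelySdk from '@optimizely/optimizely-sdk';

// Initialize the client against the project's datafile (the SDK key is a placeholder).
const client = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

async function renderCheckout(userId: string): Promise<void> {
  if (!client) throw new Error('Optimizely client failed to initialize');
  await client.onReady();

  // Bucket the user into a variation of a hypothetical checkout experiment.
  const variation = client.activate('checkout_redesign', userId);

  if (variation === 'streamlined') {
    // ...serve the new flow...
  } else {
    // ...serve the control flow...
  }

  // Record the conversion event so it appears on the experiment's results page.
  client.track('purchase_completed', userId);
}
```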
The only thing I dislike about Optimizely Full Stack is its web interface. It sometimes takes a while to load and feels slightly clunky overall.
As a developer, I wish I could configure everything using code instead of having to configure it through the web app. This would be in line with today's trend of "Configuration as Code" (which is very popular in the developer world). That way, I'd only have to use the web app to see results.
Ultimately, these are nitpicks. It's a great product, and the issues above are not dealbreakers by any means.
My favorite capability is feature flags, since they're so versatile. A flag can transition a feature conceptually between an experiment and a rollout, which is great for any team that needs to test a feature and iterate over time.
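A sketch of why that transition is painless for the calling code (the flag key, user ID, and function are hypothetical): the same gate works whether the flag is currently wired to an experiment or to a rollout in the Optimizely UI.

```typescript
import * as optimizelySdk from '@optimizely/optimizely-sdk';

const client = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

function showSearch(userId: string): void {
  // The same check serves 'new_search' whether the flag is attached to an
  // A/B experiment or to a percentage rollout; only the UI config changes.
  if (client?.isFeatureEnabled('new_search', userId)) {
    // ...new feature code path...
  } else {
    // ...existing behavior...
  }
}
```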
Certain parts of the UI need some work to help teams move faster, e.g., showing environment toggles on the feature flag dashboard, or having a one-click button to transition a feature from experiment to feature rollout without costing more impressions.
One of my team's largest pain points, however, is that changes to an experiment's audience rollout and exposure must be made across every environment the experiment is part of. That has led my team to create separate projects for each environment and duplicate experiments across them.
It's easy to start a new experiment, and all the data around it is transparent to everyone on the team, not only devs.
- It's a bit hard to find where/how to configure each environment; everyone who joins the team has doubts about it, with questions like "Where do I change the value of variable x in development?" or "Where do I change the experiment's traffic percentage in development?"
- I don't like having to change the experiments in a shared environment when I'm developing or testing something locally; usually more than one person is using the same environment.
- Writing automated tests (we use Jest) is time-consuming, especially when the goal is to test all variations (see the sketch after this list).
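A minimal sketch of one way to make those per-variation tests deterministic in Jest without touching a shared environment; the local datafile path, experiment key, and variation keys are assumptions, not the reviewer's actual setup:

```typescript
import * as optimizelySdk from '@optimizely/optimizely-sdk';
import * as fs from 'fs';

// Build the client from a datafile committed to the repo, so tests never
// depend on (or mutate) a shared remote environment.
const datafile = JSON.parse(
  fs.readFileSync('./test/fixtures/datafile.json', 'utf8')
);
const client = optimizelySdk.createInstance({ datafile });

describe.each(['control', 'variation_a'])('checkout in %s', (variationKey) => {
  it('renders without errors', () => {
    // Pin the test user to one variation so the branch under test is deterministic.
    client?.setForcedVariation('checkout_redesign', 'test-user', variationKey);

    const variation = client?.activate('checkout_redesign', 'test-user');
    expect(variation).toBe(variationKey);
    // ...assert on the behavior specific to this variation...
  });
});
```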
I like that you can set up several secondary/other metrics in addition to the primary metric. Results are very clear, and I like that I can see everything on one page, including impressions, metrics, stat sig, conversion rates, and visualizations. I'm a Product Manager, and from a PM standpoint, the tool is easy to use.
I don't like that there isn't a way to filter on environment. Since Optimizely allows you to maintain separate Prod and other environments for each experiment, I would expect to have the option to filter for each. A developer on my team figured out a way to create a custom segment for this, but it's something I would have expected out of the box.
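One plausible shape of that workaround (the attribute name and values are assumptions; the reviewer's actual implementation isn't described): pass the running environment as a custom attribute on every decision, then define a segment on that attribute in the UI so results can be filtered by it.

```typescript
import * as optimizelySdk from '@optimizely/optimizely-sdk';

const client = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

// Tag every impression with the environment it came from; 'environment' is a
// custom attribute we define ourselves, so results can be segmented by it later.
const attributes = { environment: process.env.NODE_ENV ?? 'development' };

const variation = client?.activate('checkout_redesign', 'user-123', attributes);
```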
Adding feature flags for trunk-based development.
It can get quite complicated, depending on what you want to do.
Also, they have blog entries that correctly point out the benefits of testing with feature flags on and off, but have no examples of how you would use their SDK to do that. And when asked, they don't know how to actually do it either.
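One way to fill that gap ourselves (a sketch under our own assumptions, not Optimizely's documented approach) is to stub the client in tests and run the same suite with the flag forced both ways:

```typescript
// Run the same behavioral assertions with the flag stubbed on and off,
// instead of relying on real bucketing. The flag key is hypothetical.
describe.each([true, false])('billing page with new_billing_page=%s', (flagOn) => {
  it('renders the matching page', () => {
    const fakeClient = {
      isFeatureEnabled: jest.fn().mockReturnValue(flagOn),
    };

    // ...inject fakeClient into the code under test, then assert that the
    // on/off behavior (e.g. which page renders) matches flagOn...
    expect(fakeClient.isFeatureEnabled('new_billing_page', 'test-user')).toBe(flagOn);
  });
});
```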
Optimizely is a very powerful A/B testing platform, and their libraries are easy to integrate into our web servers.
The website for managing Optimizely feature flags and experiments is a little clunky. It's very slow if there are a lot of flags set up, and the controls for rolling out flags are poorly explained. Additionally, we had to build a separate tool in order to map Optimizely's audiences to our users.
It's easy to check A/B tests: no digging deep into data or writing queries. The results are there for everyone in the company to see and analyze. Also, the exclusion groups make it easy to run multiple tests at a time.
Not much. It only takes a bit more effort on the tech side than the web tests do. But we mostly try to do full-stack tests, since they seem more accurate for us.
It enables us to do all the analytical plumbing of connecting our back-end systems to our optimization toolkit. This means we can really get into the detail of how our audiences behaved before and after conversion.
It's not as nimble as their other product, X Web. It can take our dev/product teams a while to spin out tests, due to all the 'plumbing' we have to do. While it does sync with other analytics tools, it's not as easy to integrate as others (e.g., X Web).
Quick responses to the questions asked in optimizely-community.slack.com.
Even for a small issue, the help was there. Asa Schachar (Optimizely) replied:

"I think I see what's happening. This is good feedback, and I think we will be addressing this in an upcoming new version of targeted rollouts. Since the rollout for the first audience is 0%, that means 0% of those qualifying for the first audience will be in the rollout, so it evaluates the everyone-else rule. To get the behavior you are looking for, you might need to use a feature variable. To keep it simple, you could have a boolean variable, enabled, that's either true or false. Then for the targeted rollout, set it to 100% for all audiences but change the value of the variable for each audience. That way you can have a true experience for cypressON and a false experience for cypressOFF. Does that make sense?"
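For reference, reading the boolean variable suggested above looks roughly like this in the JavaScript SDK; the feature key 'my_feature', variable name 'enabled', and the audience attribute key are assumptions built around the thread, not confirmed details:

```typescript
import * as optimizelySdk from '@optimizely/optimizely-sdk';

const client = optimizelySdk.createInstance({ sdkKey: '<YOUR_SDK_KEY>' });

// With the targeted rollout at 100% for every audience, the flag itself is
// always on; the per-audience boolean variable carries the real decision.
async function isEnabledFor(
  userId: string,
  attributes: Record<string, string>
): Promise<boolean> {
  await client?.onReady();
  return (
    client?.getFeatureVariableBoolean('my_feature', 'enabled', userId, attributes) ??
    false
  );
}

// A user matching the cypressON audience would get true, cypressOFF false
// (the attribute key 'testGroup' is hypothetical).
isEnabledFor('user-123', { testGroup: 'cypressON' });
```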