We Failed, Yay! Experimentation at Unomaly

“An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated.”  - Wikipedia

One of the best things about working in our engineering organization is the number of solution ideas generated when we attempt to solve a problem. Often, after spending considerable time defining and agreeing on what problem we’re solving, we have too many candidate solutions and no clear way to determine the “right” path forward.

To find the right path, we enter into long discussions about the pros and cons of each option, how to get data to test it, and which assumptions we need to validate to move forward.

This all adds up to a lot of time spent talking.

We have tried various circular patterns of proposals, design sprints and customer research. These tools helped us decide what to validate with our customers and then understand what the feedback meant, but we kept asking whether we had chosen the right questions to begin with, restarting our process time and time again.

To break this pattern, we decided to stop talking so much and instead turn our hypotheses into designed experiments that our customers can actually use. This lets us see whether our customers are interested in our solutions and gives us the ability to improve them.

Our first hypothesis is that experiments are good. Very meta! What does this mean in practice? Over the past several months we’ve designed and tested several experiments, including Frequency Anomalies, log transforms and an alternative scoring method.

Setting Ourselves Up for Failure… and Success

Frequency Anomalies

Frequency Anomalies were initially released as an optional setting at the beginning of 2018. We immediately received feedback from our customers that the feature was too noisy. Over the next year we made a few alterations, which gave our team confidence in the feature.

Ultimately, Frequency Anomalies became a core part of Unomaly in our 3.1 release, where we turned the feature on by default for all new and existing customers. As all of our customers use the feature, we will continue to iterate on and improve it.

Alternative Scoring

Not all experiments are a success. In September of 2018 we released an alternative scoring method. The feature currently has very low adoption and will be pulled from our product in the next release. Our hypothesis was that a new scoring method, which considers all the anomalies in a situation rather than just the one with the highest impact, would give better insight into whether the situation is important.
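The difference between the two approaches can be sketched in a few lines. This is an illustrative reconstruction, not Unomaly’s actual implementation: the function names and the choice to combine impacts probabilistically are assumptions.

```python
# Hypothetical sketch of the two scoring approaches. "impacts" are
# per-anomaly impact scores (assumed to be in [0, 1]) for one situation.
# Names and the combination rule are illustrative, not Unomaly's API.

def score_highest_impact(impacts):
    """Original scoring: a situation scores as its single worst anomaly."""
    return max(impacts, default=0.0)

def score_all_anomalies(impacts):
    """Alternative scoring: every anomaly contributes, so several small
    anomalies can outweigh one large one. Here impacts are combined as
    independent evidence: 1 - product of (1 - impact)."""
    remainder = 1.0
    for p in impacts:
        remainder *= (1.0 - p)
    return 1.0 - remainder
```

With two medium anomalies of impact 0.5 each, the original approach scores the situation 0.5, while the alternative scores it 0.75, surfacing situations with many moderate anomalies.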

After spending a quarter on this experiment, we learned that we had not solved the problem. Though the experiment failed, we learned more about our customers and their needs, and opened up new paths to explore within our product.

Log Transforms

Sometimes experiments don’t have a hypothesis at all. Transforms were introduced to help our tokenizer cope with parameter changes in structured logs. The feature was not released to customers; instead, only our sales engineers could selectively use it with customers they were assisting during installs.
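To make the idea concrete, a transform of this kind might rewrite variable parameters into fixed placeholders before tokenization, so log lines that differ only in parameter values collapse into the same event profile. The patterns and placeholder names below are assumptions for illustration, not Unomaly’s actual transform rules:

```python
import re

# Hypothetical log transforms: normalize variable parameters so that
# structurally identical lines tokenize the same way. Order matters:
# UUIDs are replaced before bare numbers would split them apart.
TRANSFORMS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                r"[0-9a-f]{4}-[0-9a-f]{12}\b"), "<uuid>"),
    (re.compile(r"\b\d+\b"), "<num>"),
]

def apply_transforms(line):
    """Apply each transform in order, replacing matches with placeholders."""
    for pattern, placeholder in TRANSFORMS:
        line = pattern.sub(placeholder, line)
    return line
```

For example, `apply_transforms("request 42 took 17 ms")` yields `"request <num> took <num> ms"`, letting an anomaly detector treat both as the same recurring event.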

However, a change in our free trial business means that installations may now happen without the guidance of our sales engineers, so the feature needs to become available as an option in our settings for all customers.

Did we actually learn anything?

Yes! Of course, we learned specifics about each of our experiments and which provided the most value to our customers. More importantly, despite the different outcomes of the individual experiments, we learned how we need to handle experimentation differently in the future.

  1. We need to stop hiding our experiments and put them front and center for our customers to test, maximizing feedback and iteration.
  2. We need to improve how we collect feedback in our product, so we get more direct feedback and can make actionable decisions based on the data we gather.
  3. Balancing the cost of inaction against failing fast is tough, but well worth it, because we feel more confident moving forward to create a product that our customers have built with us.

Looking forward, we hope to serve our customers with experiments that give a view of an environment’s full state, not just its anomalies. Our next hypothesis is that anomalies are just one part of understanding how systems work, and we would like to give our customers more ways to understand change in their environments.

If you are running an experiment in your organization, don’t miss the opportunity to fail, and don’t be afraid of getting your experiments into the hands of your customers. Whether the feedback you receive is positive or negative, it’s a great tool to learn from.

Written by Ingrid Franck
on March 28, 2019