What does it do?

In short, Unomaly uses data produced by software entities (servers, services, applications, containers etc.) to learn their normal behavior and detect anomalies. It continuously analyzes streaming data such as logs and events without the need for specific parsers or predefined knowledge. Analyzed data is presented in a user interface with a workflow for ease of use. The timeline of anomalies shows the evolution of the environment and provides a simple and universal way for teams to stay in tune. It enables early detection of problems, resolution of root causes, mitigation of unwanted technical drift and debt, and detection of anomalous manipulations and malicious activity.

How does it work?

Unomaly does four things:

1) Integrates any data from software entities without parsing or requirements on data format or protocol. The product offers general data interfaces (for standard protocols such as Syslog) and highly flexible data integration frameworks, with prebuilt integrations for major data technologies such as the ELK stack, Logstash, Fluentd, AWS CloudFront, Splunk and Kafka.

2) Continuously analyzes and learns generated data, creating entity profiles. Normal events, their parameters and frequencies are captured and continuously adjusted, forming a current picture of normal conditions.

3) Automatically detects, scores and prioritizes anomalous events, including deviant structures, new parameters and changing frequencies. The solution also clusters related anomalous events together, allowing a larger set of anomalies to be examined at once. This helps reveal root cause events, propagation and effects in one view.

4) Offers a classification, knowledge and action framework that allows users to add expert knowledge to data by tagging it. An action and plugin engine allows setting up forwarders and automated actions when certain conditions arise. Unomaly offers integrations for many technologies, and the plugin framework for building integrations is simple.
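The analyze-and-learn loop described in steps 2 and 3 can be pictured with a short sketch. This is a hypothetical illustration, not Unomaly's proprietary algorithm: it reduces each event to a structural template by masking variable tokens (numbers, IPs), treats a never-before-seen template as an anomaly, and then learns it so repeats become part of the entity's normal profile.

```python
import re
from collections import Counter

def template(event: str) -> str:
    """Mask variable tokens (IPs, hex values, numbers) so events with the
    same structure but different parameters map to one template."""
    return re.sub(r"\b(?:\d+\.\d+\.\d+\.\d+|0x[0-9a-f]+|\d+)\b", "*", event)

class EntityProfile:
    """A learned picture of one entity's normal output: the set of event
    templates it has produced, with occurrence counts."""
    def __init__(self):
        self.counts = Counter()

    def analyze(self, event: str) -> bool:
        """Return True if the event is anomalous (new structure for this
        entity), then learn it so later repeats are treated as normal."""
        t = template(event)
        is_anomaly = t not in self.counts
        self.counts[t] += 1
        return is_anomaly

profile = EntityProfile()
print(profile.analyze("Connection from 10.0.0.5 port 22"))     # → True (first sighting)
print(profile.analyze("Connection from 10.0.0.9 port 40422"))  # → False (same structure)
```

Note how the second event differs only in its parameters, so it matches the learned template and is considered normal, which is the distinction between "new events" and "new parameters" that the text describes.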

How does Unomaly detect and describe an incident?

Detecting incidents is difficult because each one is largely unique. Often the impact might be the same, but the path there is usually a unique combination of many things. Traditional means of revealing incidents are primarily based on rules and queries that require the conditions of the incident to be known in advance. Effectively, this requires an incident to occur at least once before it can be detected and stopped from recurring, which means that many incidents will have caused impact before being detected. This creates a reactive loop with built-in risk for anything new or unknown.

Free of rules or predefined conditions, Unomaly detects incidents through their anomalous nature. It knows which data is normally generated by an entity, so it can determine and isolate data points with different characteristics. Unomaly ties related anomalies together into ‘situations’, effectively framing an incident from a data perspective. Approaching the problem this way, Unomaly can detect the progress of any incident.

A situation is defined by the first anomaly created. Any subsequent anomalies are added to the situation as it progresses. As it updates, the score that summarizes the situational risk is also updated. This score can be used as an alert, and to prioritize and visualize situations. A situation ends when there are no more anomalies added for a certain period of time. As such, a situation is the close representation of the incident from a data perspective.
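The situation life cycle described above (opened by the first anomaly, extended by subsequent anomalies, closed after a quiet period) can be sketched as time-windowed grouping. The quiet-period length and the additive scoring rule here are hypothetical placeholders, not Unomaly's actual scoring model:

```python
from dataclasses import dataclass, field

QUIET_PERIOD = 300  # seconds without anomalies before a situation ends (assumed value)

@dataclass
class Situation:
    opened_at: float
    last_seen: float
    anomalies: list = field(default_factory=list)
    score: float = 0.0

def group_into_situations(anomalies):
    """anomalies: iterable of (timestamp, description, weight), sorted by time.
    A new situation starts when the gap since the last anomaly exceeds
    QUIET_PERIOD; otherwise the current situation is extended and rescored."""
    situations = []
    current = None
    for ts, desc, weight in anomalies:
        if current is None or ts - current.last_seen > QUIET_PERIOD:
            current = Situation(opened_at=ts, last_seen=ts)
            situations.append(current)
        current.anomalies.append(desc)
        current.last_seen = ts
        current.score += weight  # placeholder: risk grows as the situation progresses
    return situations

sits = group_into_situations([(0, "new error", 1.0), (60, "restart", 2.0),
                              (1000, "unrelated event", 1.0)])
print(len(sits))          # → 2 (the 940 s gap closed the first situation)
print(sits[0].score)      # → 3.0
```

The rising score as anomalies accumulate is what makes a situation usable for alerting and prioritization, as described above.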

In this way, Unomaly has automated much of the work involved in understanding things like root cause, propagation and impact. Unomaly also adds context to the scoring. For example: was the entity normal for a long time prior to the anomaly? Has the user defined an event as particularly harmful?

What kind of incidents can Unomaly detect and explain?

Since Unomaly detects incidents by their anomalous nature, it is both far-reaching in the types of incidents it can detect and limitless in how much detail it can keep. Essentially, the Unomaly detection capability is boundless. The data, and the software producing it, are what matter. The more data Unomaly receives, the easier it is to draw quick and correct conclusions and gain intelligence.

Examples of incidents that Unomaly automatically detects are:

  • An administrator performs a configuration change that causes other software and components to start behaving unusually. This typically isn’t visible until impact, but Unomaly detects the unusual behavior through the anomalies created. These will be tied together into situations that are fully explained by the new data points, typically across components, servers etc.

  • A breach occurs. An intruder manipulates software components and performs anomalous commands to elevate privileges and install malicious software while concealing the actions. Since this is unusual behavior, the new data points created will highlight these activities. If the intruder terminates logging, this too will be noticed through the change in data flow.

  • A bug triggers due to a series of unfortunate circumstances. Software components become erratic and unpredictable. New data points will explain the origin and evolution of this scenario. Where did it start? In what entity? What entity can we rule out?

Moreover, Unomaly doesn’t base its approach on a definition of an incident, error or attack indicator. Rather, it bases it on the inverse of what is normal. And normal is defined by the entities themselves, in the data they generate under normal circumstances.

How does Unomaly learn?

The engine always starts afresh, without bias. Everything it learns is driven by the data generated by the entity. Learning uses a proprietary method of comparing incoming data to what has previously been seen. When Unomaly receives an event, this comparison begins. If the event is considered an anomaly, it is added to the learning database as a new occurrence, then incorporated and merged with existing learning.

This process can distinguish between new events and known events with differing parameters. For every event (anomaly or not), it updates statistical counters that are used to determine the frequency of data. Unomaly presents this as simple frequencies (e.g. by the second, minute, hour, day or month) or classifies the event as rare.
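The frequency classification described above (by the second, minute, hour, day, or rare) can be illustrated by bucketing an event's observed average recurrence interval. The bucket thresholds here are illustrative assumptions, not Unomaly's internals:

```python
def classify_frequency(timestamps):
    """Classify how often an event recurs, from its observed timestamps
    (in seconds). Threshold values are illustrative assumptions."""
    if len(timestamps) < 2:
        return "rare"  # a single sighting gives no recurrence interval
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = sum(intervals) / len(intervals)
    # Each bucket allows some slack above its nominal period.
    for label, limit in [("per second", 2), ("per minute", 120),
                         ("per hour", 7200), ("per day", 172800)]:
        if avg <= limit:
            return label
    return "rare"

print(classify_frequency([0, 1, 2, 3]))       # → per second
print(classify_frequency([0, 3600, 7200]))    # → per hour
```

A real engine would also need to handle jitter and bursts, but the counters-to-frequency idea is the same.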

How long does it take Unomaly to learn?

When an event is seen for the first time, it takes a series of subsequent occurrences (3 to 10, depending on various factors) to understand its normal frequency. As an example, Unomaly will need about a minute to understand that an event happens every second. A weekly event, however, may take one or two months to baseline thoroughly. Some events with multiple and varying parameters can take longer to baseline. For those cases, Unomaly offers various ways of manually overriding the baseline; for instance, an event can be classified manually in the user interface.
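The baselining times quoted above follow directly from needing a handful of occurrences: the wall-clock time to baseline an event is roughly its period times the number of occurrences required. A rough back-of-the-envelope check, using the upper end of the 3-to-10 range from the text:

```python
def time_to_baseline(period_seconds, occurrences_needed=10):
    """Rough upper bound on wall-clock time to baseline a periodic event:
    the event must recur `occurrences_needed` times at its natural period."""
    return period_seconds * occurrences_needed

# A per-second event baselines within seconds to about a minute.
print(time_to_baseline(1))                   # → 10
# A weekly event needs on the order of two months.
print(time_to_baseline(7 * 86400) / 86400)   # → 70.0 (days)
```

This is only arithmetic on the figures the text already gives; the real factors behind "3 to 10 occurrences" are not public.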

In our experience, more than 99.999% of all data is repetitive on a daily basis. With this amount of data reduction, a comprehensive baseline is typically established within 1 to 2 weeks.

Is there good and bad data?

From a security incident perspective (often the most obscure kind of incident), the best data is generated by the systems experiencing the actual incident. The victims of an incident will always have the most detailed and raw picture of what occurred. There is no replacement for this data, and without it, end users will ultimately work with an incomplete picture that may emphasize a less relevant part of an incident. Lacking this insight drastically increases the risk of missing the incident completely, or leads to misinterpretation through guesswork. This limits the ability to respond, as well as the quality of the response.

More often than not, incidents need to progress considerably (almost to the point of standstill) to be deemed worthy of action. Good data offers insight into the internal workings of important objects; inferior data offers eyewitness accounts that are not fully actionable. To exemplify: a firewall or an IDS (witness) can detect indications, but only the data from the server (victim) itself can tell, with certainty, whether data has been compromised.

What is unique about Unomaly?

Unomaly takes a unique approach to the problem of universally detecting incidents. While advanced at its core, it is extremely straightforward to use. This applies to how it manages the entire life cycle of data, from integration to actually working with it. Most other tools lack this capacity, and therefore produce far less real value.

Here are some key differentiators:

  • Simple: Users of Unomaly need little knowledge of the product itself. Just send data, and Unomaly will do the heavyweight analysis automatically and continuously. You can interact with the results through simple clicks and navigation; no coding skills are needed.

  • Applicability: Since the engine supports any raw text data, it has virtually unlimited support for analyzing data from any entity, system or container, with minimal integration work.

  • Speed: The engine is capable of analyzing tens of thousands of simultaneous, unstructured data points. And for ease of user interaction, Unomaly pre-computes the analysis results, ensuring swiftness to save you time.

  • Accuracy: Unomaly doesn’t match what is stated in rules; it detects anomalies. This means a very high signal-to-noise ratio, where risk is quantified primarily by how abnormal behavior is versus normal.

  • Automation: The algorithm is continuously self-learning and needs no manual input. It will automatically learn any entity in 3 to 10 days, and is operationally valuable within just a few days.

  • Workflow: This is an important part of the approach. Detection and investigation capabilities sit at the core, with collaboration and action features for sharing and commenting, and a conversion functionality for adding expert judgement to data.

  • Resources: By understanding the normality and frequency of data points, Unomaly optimizes storage of data in ways that reduce storage requirements. Since Unomaly keeps a condensed memory, it can process and store the entire history of daily gigabytes of data with minimal storage.

  • Integration: Unomaly offers custom actions to integrate analysis results with third party tools, such as external correlation engines, workflow solutions and ticketing systems.
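The Resources point above rests on the repetitiveness of log data: if more than 99.999% of events match an already-known structure, storing one template plus an occurrence counter can replace thousands of raw lines. A hypothetical sketch of that kind of reduction (the masking rule and data are invented for illustration):

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask numeric parameters so repetitive lines collapse to one template."""
    return re.sub(r"\d+", "*", line)

def condensed_size(lines):
    """Compare raw storage (every line) against condensed storage
    (each unique template once, plus a counter per template)."""
    counts = Counter(template(line) for line in lines)
    raw = sum(len(line) for line in lines)
    condensed = sum(len(t) for t in counts)  # counters add negligible bytes
    return raw, condensed

lines = [f"GET /item/{i} 200" for i in range(10000)]
raw, condensed = condensed_size(lines)
print(raw // condensed)  # reduction factor in the thousands for this repetitive data
```

Highly repetitive traffic collapses to a handful of templates, which is why a condensed profile can represent gigabytes of daily data.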

Will the engine learn existing errors?

Unomaly is completely unbiased and learns the entities’ actual behavior. If an entity has a problem that generates a data point, that data point will be added to the baseline, even though it may be potentially negative.

Unomaly gives attention to this fact in the following ways:

  • Entity profiles offer a condensed and summarized view of all the gigabytes of data produced by an entity over time. The unique events that occur are depicted in a baseline profile with their frequencies; usually, this is a set of 100 to 200 events.

  • Any progression of an incident (when an existing error develops) will result in anomalies and situational awareness. As the situation progresses, it will become more and more apparent through an increased score.

  • The knowledge base workflow enables the end user to tag important events that should be considered harmful in the future.

Conclusion

Unomaly offers a pioneering way of instrumenting your digital entities, targeting the underlying issue of incidents being largely unknown in advance. By utilizing existing raw data from software entities, it employs machine learning to master their normal behavior. The anomalies offer a way of understanding differences in entity behavior, together with a capability for early warning, alerting and reporting of unknown and known incidents. It can monitor any entity, from servers to services and containers, and learn the basis of its normal behavior in just a few days.

An automatic capability to detect issues early and fully understand entities’ normal behavior will allow your organization to increase uptime, stability and security, in addition to freeing up resources and reducing both risk and cost.