Log analysis - it's still important
We’ve spoken to engineers who say they find log data boring. Boring? We’d argue it’s far from boring: it’s the most contextually rich data we have, yet it’s often ignored because of its volume and how unmanageable it can be for humans to work with. Without automation and pre-analysis, though, it’s easy to see how that perception takes hold.
Logs are still valuable
The causes of many IT issues can be found in your log data. Logs show you what happened, when and where. They are extremely versatile and hold a lot of value when you are investigating a problem and working toward its root cause.
"If you have a busy system to which hundreds of people are logging in and out, you may miss an intruder trying to gain access to your servers due to the noise of information generated at this time" - from a Quora user
Syslog offers another example of why log data matters: if you never analyze it, you may miss the error messages your boot disk is logging, which could have a significant impact on your systems.
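Even a few lines of scripting can surface these messages before they scroll past unnoticed. The path and patterns below are assumptions, since syslog locations and message formats vary by distribution:

```python
import re

# Assumed location: many Linux distributions write here, but others
# use /var/log/messages; check your own system's configuration.
SYSLOG_PATH = "/var/log/syslog"

# Hypothetical patterns for disk trouble worth surfacing.
DISK_ERRORS = re.compile(r"I/O error|read error|failed command|bad sector",
                         re.IGNORECASE)

with open(SYSLOG_PATH, errors="replace") as f:
    for line in f:
        if DISK_ERRORS.search(line):
            print(line.rstrip())
```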
Log data can be mined for patterns that are not obvious when taken at face value. Looking for anomalies in log data can help you spot subtle clues of unwanted behaviour such as breaches, bugs or hardware failures.
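For instance, a simple frequency count over failed logins can flag a brute-force attempt that no human scanning raw lines would spot. This is a minimal sketch; the log format and threshold are assumptions:

```python
import re
from collections import Counter

# Hypothetical pattern for an sshd-style failure line; real formats vary.
FAILED = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")

lines = [
    "sshd[101]: Failed password for root from 203.0.113.5 port 4021",
    "sshd[102]: Failed password for admin from 203.0.113.5 port 4022",
    "sshd[103]: Failed password for alice from 198.51.100.8 port 4100",
    "sshd[104]: Failed password for root from 203.0.113.5 port 4023",
]

# Count failures per source address.
failures = Counter()
for line in lines:
    match = FAILED.search(line)
    if match:
        failures[match.group(1)] += 1

THRESHOLD = 3  # assumed cut-off; tune for your environment
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible brute-force attempt: {ip} failed {count} times")
```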
Log data can be difficult to tackle
We understand that large volumes of log data can be difficult to handle. That same data can provide really useful information, however, if you have a process that lets you dig into it without the noise.
For many teams, dealing with log data used to mean reading through it manually. This was such a lengthy, time-consuming process that it was more or less impossible to get through it all. A common result was that issues went undetected in the sheer volume of data, and by the time you caught one problem, another had already arisen.
What Unomaly does
We take a different approach to log analysis compared to the tools that are out on the market. We pre-analyze your log data, deduplicate common log events to remove anything repetitive, and then visualise changes in your log events anywhere within your IT environment. Unomaly creates a profile for each log source that sends data. These profiles continue to adapt as your environment changes, essentially providing a clustered overview of the change happening within your systems. The idea is to let engineers start an investigation from a pre-analyzed dataset rather than raw data, allowing software to do more of the work and putting engineers back in control no matter how much change is happening.
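The mechanics of our product are more involved, but a minimal sketch can illustrate the deduplication idea: mask the variable tokens in each line so repeated events collapse into one template, and flag any structure the profile hasn't seen before. Everything below (masks, names, sample lines) is our illustration, not Unomaly's implementation:

```python
import re
from collections import Counter

# Mask variable tokens so repeated events collapse into one template.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 addresses
    (re.compile(r"\b[0-9a-f]{8,}\b"), "<HEX>"),             # hashes / ids
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # bare numbers
]

def to_template(line: str) -> str:
    """Reduce a raw log line to its constant structure."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

profile: Counter = Counter()  # per-source profile of known templates

def ingest(line: str) -> bool:
    """Return True if this line's structure is new to the profile."""
    template = to_template(line)
    is_new = template not in profile
    profile[template] += 1
    return is_new

for raw in [
    "user 1041 logged in from 10.0.0.7",
    "user 2230 logged in from 10.0.0.9",        # same template: deduplicated
    "boot disk error: sector 4182 unreadable",  # new structure: flagged
]:
    if ingest(raw):
        print("NEW:", to_template(raw))
```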
How Unomaly fits into your ecosystem
Current log analysis tools focus on aggregation and search (log aggregation is the practice of gathering log files in order to organise the data and make it searchable), or, like many new AIOps tools, attempt prediction based on past data.
When it comes to observability, you should have a tool that gives you a clear and concise overview of the changes occurring within your environment. Components such as metrics and traces play a vital part in infrastructure and application monitoring. Metrics are useful for tracking overall performance and pointing out where the dependencies are, while tracing provides critical end-to-end visibility into user requests.

Where Unomaly fits into this ecosystem is that we specialise in log data and analyze it for you, rather than leaving you to comb through raw data. We do this by tokenizing the data, grouping profiles that are similar, and highlighting changes that are unknown to your dataset. The objective is to let engineers work from pre-analyzed data and act only on the insights flagged to them as change.
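To make "tokenizing and grouping" concrete, here is a hypothetical sketch that groups lines by token overlap and highlights anything that doesn't sufficiently resemble a known group. The function names and similarity threshold are ours, chosen for illustration:

```python
# Hypothetical sketch of tokenize-and-group: not Unomaly's API,
# just an illustration of similarity-based grouping.

def tokenize(line: str) -> set[str]:
    """Split a log line into a set of lowercase tokens."""
    return set(line.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared tokens over total distinct tokens."""
    return len(a & b) / len(a | b) if a | b else 0.0

known_groups: list[set[str]] = []   # token sets seen so far
THRESHOLD = 0.6                     # assumed tuning knob

def classify(line: str) -> str:
    tokens = tokenize(line)
    for group in known_groups:
        if similarity(tokens, group) >= THRESHOLD:
            group |= tokens          # let the group adapt over time
            return "known"
    known_groups.append(tokens)      # unfamiliar structure: flag it
    return "CHANGE"

for raw in [
    "session opened for user alice",
    "session opened for user bob",        # similar enough: grouped
    "kernel: I/O error on device sda1",   # unlike anything seen: CHANGE
]:
    print(classify(raw), "-", raw)
```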
To find out how we might fit into your ecosystem, click here.