We are making a major shift in how we release updates of our product to our customers and users: we are moving to weekly releases.
Where there is code, there will be bugs. Where there is change, there will be new bugs. And there is a limit to how much time can be spent making sure code is bug-free: it is impossible to prevent bugs entirely.
Our first hypothesis is that experiments are good. What does this mean in practice? Over the past several months we've designed and tested several experiments, including Frequency Anomalies, log transforms, and an alternate scoring method.
This post is a summary of a talk presented on 27 February 2018 at the Stockholm DevOps Meetup on patterns of behavior when remediating incidents.
For the Unomaly 2.28 release, we have completely reworked our tokenizer so that it picks up nested structures and key-value pairs in the unstructured log data we ingest, without any schema specification. To make this understandable, we'll first go through a bit of how Unomaly works and then delve into the technical details of the new structural tokenizer.
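To give a feel for what schema-free key-value extraction means, here is a minimal sketch in Python. This is an illustrative toy, not Unomaly's actual tokenizer: it only handles flat `key=value` pairs (optionally quoted), while the real structural tokenizer also deals with nested structures.

```python
import re

# Hypothetical sketch: pull key=value pairs out of an unstructured log line
# without any schema. Values may be quoted (to allow spaces) or bare tokens.
KV_PATTERN = re.compile(r'(\w+)=("[^"]*"|\S+)')

def extract_pairs(line):
    """Return the key-value pairs found in a raw log line."""
    return {k: v.strip('"') for k, v in KV_PATTERN.findall(line)}

line = 'ts=2018-02-27T10:00:00Z level=info msg="user login" user_id=42'
print(extract_pairs(line))
# {'ts': '2018-02-27T10:00:00Z', 'level': 'info', 'msg': 'user login', 'user_id': '42'}
```

The point of the real tokenizer is that none of these keys need to be declared up front; structure is discovered from the data itself.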
Think of logs as a journal for your program, one that should be both readable by humans and parsable by computers. It contains messages that describe what's going on, along with the relevant context as key-value pairs. Keep in mind that logs may also be used for data mining, so include relevant identifiers such as request IDs, PIDs, and user IDs.
The task of producing good software and making it run reliably is associated with a plethora of words and concepts: monitoring, log analysis, pen testing, auditing, metrics, reliability engineering, and so on. Central to all of these, however, is observability.