Five Logstash Alternatives

Shippers have their pros and cons, and ultimately it’s down to your specifications

When it comes to centralizing logs to Elasticsearch, the first log shipper that comes to mind is Logstash. People hear about it even before it's clear to them what it actually does:
- Bob: I'm looking to aggregate logs
- Alice: you mean... like... Logstash?

When you get into it, you realize centralizing logs often implies a bunch of things, and Logstash isn't the only log shipper that fits the bill:

  • fetching data from a source: a file, a UNIX socket, TCP, UDP...
  • processing it: appending a timestamp, parsing unstructured data, adding Geo information based on IP
  • shipping it to a destination. In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry

In this post, we'll describe Logstash and five alternative log shippers (Filebeat, Logagent, rsyslog, syslog-ng and Fluentd), so you know which one fits which use-case.

Logstash
It's not the oldest shipper on this list (that would be syslog-ng, ironically the only one with "new" in its name), but it's certainly the best known. That's because it has lots of plugins: inputs, codecs, filters and outputs. Basically, you can take pretty much any kind of data, enrich it as you wish, then push it to lots of destinations.

Strengths
Logstash's main strong point is flexibility, owing to its large number of plugins. Also, its clear documentation and straightforward configuration format mean it's used in a variety of use-cases. This leads to a virtuous cycle: you can find online recipes for doing pretty much anything. Here are a few examples from us: a 5-minute intro, reindexing data in Elasticsearch, parsing Elasticsearch logs, and rewriting Elasticsearch slowlogs so you can replay them with JMeter.
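To give you a feel for that configuration format, here is a minimal sketch of a pipeline that tails a file, parses Apache-style lines with grok, adds Geo information, and pushes to a local Elasticsearch. The file path, index name and grok pattern are assumptions for illustration:

    input {
      file {
        path => "/var/log/apache2/access.log"  # assumed log location
        start_position => "beginning"
      }
    }

    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }  # parse unstructured lines
      }
      geoip {
        source => "clientip"  # add Geo information based on the client IP
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "logs-%{+YYYY.MM.dd}"  # one index per day
      }
    }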

Weaknesses
Logstash's Achilles' heel has always been performance and resource consumption (the default heap size is 1GB). Though its performance improved a lot over the years, it's still a lot slower than the alternatives. We've done some benchmarks comparing Logstash to rsyslog and to Filebeat and Elasticsearch's Ingest node. This can be a problem for high-traffic deployments, where the Logstash servers would need to be comparable in size to the Elasticsearch ones.

Another problem is that Logstash currently doesn't buffer. A typical workaround is to use Redis or Kafka as a central buffer:

Logstash → Kafka → Elasticsearch
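As a hedged sketch of what the two ends might look like, here is a Logstash instance shipping to Kafka on the edge and another consuming from it centrally. The broker address, topic name and hosts are made up; the option names are from the 5.x-era Kafka plugins, so double-check them against your version:

    # edge instance: write to Kafka instead of Elasticsearch
    output {
      kafka {
        bootstrap_servers => "kafka1:9092"
        topic_id => "logs"
      }
    }

    # central instance: consume from Kafka, then index
    input {
      kafka {
        bootstrap_servers => "kafka1:9092"
        topics => ["logs"]
      }
    }
    output {
      elasticsearch { hosts => ["es1:9200"] }
    }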

Typical use-case
Because of the flexibility and abundance of recipes, Logstash is a great tool for prototyping, especially for more complex parsing. If you have big servers, you might as well install Logstash on each. You won't need buffering if you're tailing files, because the file itself can act as a buffer (i.e. Logstash remembers where it left off):

Logstash → Elasticsearch

If you have small servers, installing Logstash on each is a no-go, so you'll need a lightweight log shipper on them that can push data to Elasticsearch through one (or more) central Logstash servers:

Light shipper → Logstash → Elasticsearch

As your logging project moves forward, you may or may not need to change your log shipper for performance/cost reasons. When judging whether Logstash performs well enough, it's important to have a good estimate of your throughput needs, which in turn predicts how much you'd spend on Logstash hardware.

Filebeat
As part of the Beats "family", Filebeat is a lightweight log shipper that came to life precisely to address Logstash's weakness: it was made to be the lightweight shipper that pushes to Logstash.

With version 5.x, Elasticsearch gained some parsing capabilities (like Logstash's filters) called Ingest. This means you can push directly from Filebeat to Elasticsearch and have Elasticsearch do both the parsing and the storing. You shouldn't need a buffer when tailing files because, just like Logstash, Filebeat remembers where it left off:

Filebeat → Ingest → Elasticsearch
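As a sketch of this setup, assuming Elasticsearch 5.x and a hypothetical Ingest pipeline named apache_logs (paths and names are made up for illustration):

    # define the Ingest pipeline once in Elasticsearch (e.g. via curl or the Kibana Console)
    PUT _ingest/pipeline/apache_logs
    {
      "processors": [
        { "grok": { "field": "message", "patterns": ["%{COMBINEDAPACHELOG}"] } }
      ]
    }

    # filebeat.yml (5.x): tail files, send to Elasticsearch, apply the pipeline
    filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/apache2/*.log

    output.elasticsearch:
      hosts: ["localhost:9200"]
      pipeline: apache_logs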

If you need buffering (e.g. because you don't want to fill up the file system on logging servers), you can use Redis/Kafka, because Filebeat can talk to them:

Filebeat → Kafka → Elasticsearch
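The output swap is small. A sketch of the Kafka output in filebeat.yml, using 5.x option names (the broker and topic are made up):

    output.kafka:
      hosts: ["kafka1:9092"]
      topic: "logs"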

Strengths
Filebeat is just a tiny binary with no dependencies. It uses very few resources and, though it's young, I find it quite reliable, mainly because it's simple and there are few things that can go wrong. That said, you have lots of knobs regarding what it can do: for example, how aggressively it should search for new files to tail, and when to close the file handle for a file that hasn't changed in a while.
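Two of those knobs, as a sketch in filebeat.yml (5.x option names; the values and paths are arbitrary):

    filebeat.prospectors:
    - input_type: log
      paths: ["/var/log/app/*.log"]
      scan_frequency: 10s   # how often to look for new files matching the paths
      close_inactive: 5m    # close the file handle if the file hasn't changed for 5 minutes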

Weaknesses
Filebeat's scope is very limited, so you'll have a problem to solve somewhere else. For example, if you use Logstash down the pipeline, you'll hit roughly the same performance issues. Because of this, Filebeat's scope is growing: initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and with 5.x it also gains filtering capabilities.

Typical use-cases
Filebeat is great for solving a specific problem: you log to files, and you want to either:

  • ship directly to Elasticsearch. This works if you just want to "grep" them, if you log in JSON (Filebeat can parse JSON), or if you want to use Elasticsearch's Ingest node for parsing and enriching (assuming the performance and functionality of Ingest fit your needs)
  • put them in Kafka/Redis, so another shipper (e.g. Logstash, or a custom Kafka consumer) can do the enriching and shipping. This assumes that the chosen shipper fits your functionality and performance needs

Logagent
This is our log shipper, born out of the need to make it easy for someone who has never used a log shipper before to send logs to Logsene (our logging SaaS, which exposes the Elasticsearch API). And because Logsene exposes the Elasticsearch API, Logagent can just as easily be used to push data to your own Elasticsearch.

Strengths
The main one is ease of use: if Logstash is easy (you still need a bit of learning if you've never used it, which is natural), Logagent really gets you started in a minute. It tails everything in /var/log out of the box and parses various logging formats out of the box (Elasticsearch, Solr, MongoDB, Apache HTTPD...). It can mask sensitive data like PII, dates of birth and credit card numbers, and it will do GeoIP enrichment based on IPs (e.g., for access logs), updating the GeoIP database automatically.

It's also light and fast, so you'll be able to put it on most logging boxes (unless you have very small ones, like appliances). The new 2.x version added support for pluggable inputs and outputs in the form of third-party Node.js modules. Very importantly, Logagent has local buffering so, unlike Logstash, it will not lose your logs when the destination is not available.
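To illustrate the ease of use, here is a sketch of running Logagent from the command line. The flag names may differ between versions, so check the Logagent docs before relying on them; the URL and index are placeholders:

    # parse a log from stdin and print the structured result as YAML
    cat /var/log/httpd/access.log | logagent --yaml

    # tail a file and ship it to an Elasticsearch endpoint (or Logsene)
    logagent -u http://localhost:9200 -i my-logs /var/log/messages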

Weaknesses
Logagent is still young, though it is developing and maturing quickly. It has some interesting functionality (e.g. it accepts Heroku or CloudFoundry logs), but it is not yet as flexible as Logstash.

Typical use-cases
Logagent is a good choice for a shipper that can do everything (tail, parse, buffer - yes, it can buffer on disk - and ship) and that you can install on each logging server, especially if you want to get started quickly. Logagent is also embedded in Sematext Docker Agent to parse and ship Docker container logs. Sematext Docker Agent works with Docker Swarm, Docker Datacenter, Docker Cloud, as well as Amazon EC2, Google Container Engine, Kubernetes, Mesos, RancherOS, and CoreOS, so for Docker log shipping, this is the tool to use.

rsyslog
The default syslog daemon on most Linux distros, rsyslog can do so much more than just pick logs from the syslog socket and write them to /var/log/messages. It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch. You can find a howto for processing Apache and system logs here.
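A minimal sketch of such a configuration in the newer RainerScript format: tail a file, format messages as JSON via a template, and ship to Elasticsearch with an in-memory queue for buffering and retries. The file path, index name and queue size are assumptions:

    module(load="imfile")           # file tailing
    module(load="omelasticsearch")  # Elasticsearch output

    input(type="imfile" File="/var/log/app.log" Tag="app:")

    template(name="plain-json" type="list") {
      constant(value="{\"message\":\"") property(name="msg" format="json")
      constant(value="\",\"host\":\"")  property(name="hostname")
      constant(value="\"}")
    }

    action(type="omelasticsearch"
           server="localhost"
           template="plain-json"
           searchIndex="logs"
           bulkmode="on"                 # use the bulk API
           action.resumeRetryCount="-1"  # retry indefinitely if Elasticsearch is down
           queue.type="linkedList"
           queue.size="10000")           # in-memory buffer of 10k messages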

Strengths
rsyslog is the fastest shipper we've tested so far. If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth, but it really shines when you want it to parse with multiple rules. Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim). This means that with 20-30 rules, like you have when parsing Cisco logs, it can outperform regex-based parsers like grok by a factor of 100 (it can be more or less, depending on the grok implementation and the liblognorm version).
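Here is a sketch of how mmnormalize is wired in, with a one-rule liblognorm rulebase (the rule itself is a made-up example of the grammar-based style):

    # rsyslog side: load the module and run messages through a rulebase
    module(load="mmnormalize")
    action(type="mmnormalize" rulebase="/etc/rsyslog.d/app.rb")

    # /etc/rsyslog.d/app.rb: one grammar-based rule, matched at constant speed
    rule=:%client:ipv4% %method:word% %path:word% %status:number%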

It's also one of the lightest parsers you can find, depending on the configured memory buffers.

Weaknesses
rsyslog requires more work to get the configuration right (you can find some sample configuration snippets here on our blog) and this is made more difficult by two things:

  • documentation is hard to navigate, especially for somebody new to the terminology
  • versions up to 5.x had a different configuration format (expanded from the syslogd config format, which it still supports); newer versions can still work with the old format, but most newer features (like the Elasticsearch output) only work with the new configuration format, while some older plugins (for example, the Postgres output) only support the old one - see the two snippets below
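For a taste of the difference, here is the same "write everything to a file" action in both formats (the file path is made up):

    # old, sysklogd-inherited format
    *.*  /var/log/all.log

    # new RainerScript format, doing the same thing
    *.*  action(type="omfile" file="/var/log/all.log")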

Though rsyslog tends to be reliable once you get to a stable configuration (and it's rich enough that there are usually multiple ways of getting the same result), you're likely to find some interesting bugs along the way. Not all features are tested as part of the testbench.

Typical use-cases
rsyslog fits well in scenarios where you need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container). If you need to do the processing in another shipper (e.g. Logstash), you can forward JSON over TCP, for example, or connect them via a Kafka/Redis buffer.
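As a sketch, forwarding JSON over TCP just means pointing omfwd at the central shipper with a JSON template (reusing the plain-json template idea from the earlier sketch; the host and port are made up):

    action(type="omfwd"
           target="logstash.example.com"
           port="5514"
           protocol="tcp"
           template="plain-json")  # send each message as a JSON line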

rsyslog also works well when you need that ultimate performance, especially if you have multiple parsing rules; then it makes sense to invest the time to get the configuration working.

syslog-ng
You can think of syslog-ng as an alternative to rsyslog (though historically it was actually the other way around). It's also a modular syslog daemon that can do much more than just syslog. It recently received disk buffers and an Elasticsearch HTTP output. Equipped with a grammar-based parser (PatternDB), it has all you probably need to be a good log shipper to Elasticsearch.
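A sketch of a syslog-ng configuration that tails a file and ships to Elasticsearch over HTTP with a disk buffer. The elasticsearch2 destination and disk-buffer options reflect the OSE 3.7/3.8-era syntax, so double-check them against your version; paths and sizes are made up:

    source s_app {
      file("/var/log/app.log");
    };

    destination d_es {
      elasticsearch2(
        client-mode("http")
        cluster-url("http://localhost:9200")
        index("logs-${YEAR}.${MONTH}.${DAY}")
        type("logs")
        disk-buffer(
          mem-buf-length(10000)      # messages kept in memory
          disk-buf-size(1073741824)  # 1GB on disk
          reliable(no)
        )
      );
    };

    log {
      source(s_app);
      destination(d_es);
    };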

Advantages
Like rsyslog, it's a light log shipper and it also performs well. It used to be a lot slower than rsyslog, and while I haven't benchmarked the two recently, 570K logs/s two years ago isn't bad at all. Unlike rsyslog, it features a clear, consistent configuration format and nice documentation.

Disadvantages
The main reason distros originally switched to rsyslog was syslog-ng's Premium Edition, which used to be much more feature-rich than the Open Source Edition, which was somewhat restricted back then. We're concentrating on the Open Source Edition here, since all the log shippers in this post are open source. Things have changed in the meantime: for example, disk buffers, which used to be a PE feature, landed in OSE. Still, some features, like the reliable delivery protocol (with application-level acknowledgements), have not made it to OSE yet.

Typical use-cases
Similarly to rsyslog, you'd probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing. As with rsyslog, there's a Kafka output that allows you to use Kafka as a central queue and potentially do more processing in Logstash or a custom consumer:

syslog-ng → Kafka → Elasticsearch

The difference is that syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance: for example, only the outputs are buffered, so processing is done before buffering, meaning that a processing spike would put back-pressure up the logging stream.

Fluentd
Fluentd was built on the idea of logging in JSON wherever possible (a practice we totally agree with) so that log shippers down the line don't have to guess which substring is which field of which type. As a result, there are fluent libraries for virtually every language, meaning you can easily plug your custom applications into your logging pipeline.
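For instance, here is a sketch of an application logging structured events straight to a local Fluentd via the fluent-logger library for Python (the tag and fields are made up):

    from fluent import sender

    # point the logger at the local Fluentd instance (default forward port)
    logger = sender.FluentSender('app', host='localhost', port=24224)

    # emit a structured event; Fluentd receives it as the tag app.user.follow,
    # already structured, with no parsing needed
    logger.emit('user.follow', {'from': 'userA', 'to': 'userB'})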

Advantages
Like most Logstash plugins, Fluentd plugins are written in Ruby and are very easy to write. So there are lots of them: pretty much every source and destination has a plugin (with varying degrees of maturity, of course). This, coupled with the fluent libraries, means you can easily hook almost anything to anything using Fluentd.
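As a sketch, hooking a JSON log file to Elasticsearch takes two short blocks, using the in_tail input and the third-party fluent-plugin-elasticsearch output (paths and tags are made up):

    <source>
      @type tail
      path /var/log/app.log
      pos_file /var/log/fluentd/app.log.pos  # remembers where it left off
      tag app.logs
      format json                            # logs are already structured
    </source>

    <match app.**>
      @type elasticsearch
      host localhost
      port 9200
      logstash_format true                   # daily logstash-* style indices
    </match>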

Disadvantages
Because in most cases you'll get structured data through Fluentd, it's not made to have the flexibility of other shippers on this list (Filebeat excluded). You can still parse unstructured data via regular expressions and filter logs using tags, for example, but you don't get features such as local variables or full-blown conditionals. Also, while performance is fine for most use-cases, it's not at the top of this list: buffers exist only for outputs (as in syslog-ng), and the single-threaded core plus the Ruby GIL for plugins mean that ultimate performance on big boxes is limited. Resource consumption is acceptable for most use-cases, though. For small/embedded devices, you might want to look at Fluent Bit, which is to Fluentd what Filebeat is to Logstash.

Typical use-cases
Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the sheer number of plugins. Also, if most of the sources are custom applications, you may find it easier to work with the fluent libraries than to couple a logging library with a log shipper - especially if your applications are written in multiple languages, meaning you'd otherwise use multiple logging libraries, which may behave differently.

The conclusion?
First of all, the conclusion is that you're awesome for reading all the way to this point. If you did, you already get the nuances of an "it depends on your use-case" kind of answer. All these shippers have their pros and cons, and ultimately it's down to your specifications (and, in practice, also to your personal preferences) to choose the one that works best for you. If you need help deciding, integrating, or really any help with logging, don't be afraid to reach out - we offer Logging Consulting. Similarly, if you are looking for a place to ship your logs and avoid the costs/headaches of running the full ELK/Elastic Stack on your own servers, check out Logsene - it exposes the Elasticsearch API, so you can use it with all the shippers we covered here.


About the Author

Radu Gheorghe is a search consultant, software engineer and trainer at Sematext Group, working mainly with Elasticsearch, Solr and logging-related projects. He is the co-author of Elasticsearch in Action.
