Building a Highly Available ELK Stack for Drupal
Setting up your own ELK solution is easy; there are many resources available, for example the DrupalCon NOLA presentation Drupal and Logstash: centralised logging. But how do you make it cope with a massive spike in log volume, when you need to process hundreds of events per second? How do you make it scale? This session will help you answer these questions.
This will be a technical talk targeting sysadmins and systems-savvy developers, presenting a possible highly available ELK solution capable of receiving logs and metrics from different servers and Drupal environments.
We will cover some advanced topics and common problems, for example:
- designing a scalable ELK stack (see the message queue sketch after this list)
- Logstash indexer autoscaling (also illustrated by the sketch below)
- preventing Elasticsearch from running out of disk space (S3 backups, Curator; sketch below)
- securing log transmission with TLS/SSL, SSL offloading tricks, ELB (sketch below)
- upgrading your ELK stack without downtime
- different ways of getting logs from Drupal to Logstash (sketch below)
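To give a flavour of the scaling topics above: a common way to make Logstash indexers horizontally scalable is to put a message queue between the shippers and the indexers, so any number of indexers (for example, instances in an AWS auto-scaling group) can consume from the same buffer. Below is a minimal sketch using a Redis list as the broker; the hostnames (redis.internal, es.internal) and the list key are illustrative placeholders, not details from the talk.

```
# indexer.conf - a minimal Logstash indexer pipeline sketch.
# Shippers push events onto the "logstash" Redis list; every indexer
# instance pulls from the same list, so adding instances (e.g. via an
# auto-scaling group) increases indexing throughput.
input {
  redis {
    host      => "redis.internal"   # placeholder broker hostname
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["es.internal:9200"]   # placeholder Elasticsearch endpoint
  }
}
```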
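For the disk space problem, Curator is typically run from cron to delete or snapshot old indices. Here is a sketch of a Curator 4.x action file that deletes logstash- indices older than 30 days; the retention period and index prefix are assumptions you would tune to your own cluster.

```
# delete_old_indices.yml
# Run from cron, e.g.: curator --config curator.yml delete_old_indices.yml
actions:
  1:
    action: delete_indices
    description: Delete logstash- indices older than 30 days to bound disk usage
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```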
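Securing transport between the Beats shippers and Logstash usually means enabling TLS on the beats input (or, as the ELB offloading trick hints, terminating SSL on the load balancer and running a plain input behind it). A minimal sketch of the TLS variant; the certificate paths are placeholders.

```
# Logstash input with TLS: Filebeat/Topbeat connect over SSL on port 5044.
input {
  beats {
    port                => 5044
    ssl                 => true
    ssl_certificate     => "/etc/logstash/certs/logstash.crt"  # placeholder path
    ssl_certificate_key => "/etc/logstash/certs/logstash.key"  # placeholder path
  }
}
```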
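And on getting logs out of Drupal: one common route is Drupal's core syslog module, which writes pipe-delimited records that syslog or Filebeat can forward to Logstash. Below is a sketch of a filter splitting those records, assuming Drupal 7's default syslog format string; the column names and the "drupal" type tag are illustrative, not prescribed by the talk.

```
# Parse Drupal syslog entries of the form:
#   base_url|timestamp|type|ip|request_uri|referer|uid|link|message
filter {
  if [type] == "drupal" {
    csv {
      separator => "|"
      columns   => ["base_url", "timestamp", "log_type", "client_ip",
                    "request_uri", "referer", "uid", "link", "log_message"]
    }
    # Use Drupal's own timestamp (a Unix epoch) as the event time.
    date {
      match => ["timestamp", "UNIX"]
    }
  }
}
```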
Technologies: Logstash, Elasticsearch, Kibana, Beats, AWS, high availability, AMI, auto-scaling, message queues, syslog, Elastic Filebeat and Topbeat, server metrics, S3 backup, Curator.
About the presenter
Marji is a co-founder and the chief systems administrator at Morpht, interested in DevOps, Ansible, the ELK / Elastic stack, Puppet, Jenkins, server configuration and developer workflow – with Drupal at the centre of his attention.
Marji has been running ELK stacks for about two years, receiving and analysing logs from a couple of hundred Drupal servers.
Follow-up blog post: ha-elk-for-drupal