Building a Highly Available Logstash cluster on AWS
Setting up your own logstash / ELK solution is easy: there are many resources available, such as the excellent Logstash Book. But how do you make it cope with a massive spike in logs, when you need to process hundreds of events per second? How do you make it scale? This session will help you!
This will be a technical talk targeting sysadmins and systems-savvy developers, presenting a possible Highly Available logstash / ELK solution, capable of receiving logs and metrics from different server and Drupal environments (like your own LAMP/LEMP Rackspace server, Acquia subscription, or Aegir server).
We will cover some advanced topics and common problems, for example:
- designing scalable ELK solutions
- logstash indexer autoscaling
- preventing elasticsearch from running out of disk space (S3 backup, curator)
- securing log transmission with TLS/SSL, SSL offloading tricks, ELB
- upgrading your logstash solution without downtime
- advanced ways of getting logs from Drupal to logstash
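As a small taste of the TLS topic above, here is a minimal sketch of a logstash input that accepts encrypted traffic from filebeat shippers. The port and certificate paths are placeholders, not part of any specific setup discussed in the session:

```
# Minimal logstash pipeline input: receive events from filebeat over TLS.
# Certificate and key paths below are illustrative placeholders.
input {
  beats {
    port            => 5044
    ssl             => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key         => "/etc/logstash/certs/logstash.key"
  }
}
```

In an ELB-fronted setup, TLS can instead be terminated at the load balancer (SSL offloading), leaving the beats input to listen in plain TCP behind it.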
Technologies: logstash, elasticsearch, kibana, AWS, High Availability, AMI, auto-scaling, message queues, syslog, elastic filebeat and topbeat, server metrics, S3 backup, curator.
About the presenter
Marji is a co-founder and the chief systems administrator for Morpht, specialising in DevOps, Ansible, Logstash / ELK, Puppet, Jenkins, server configuration and developer workflow – with Drupal being the centre of his attention.
Marji has been using logstash / ELK for over 18 months, receiving and analysing logs from about a hundred Drupal servers.