The open source server monitoring agent is installed using cryptographically signed packages for your OS (.deb or .rpm).
The agent calls various OS level APIs and interacts with applications to collect key metrics. This ranges from parsing the /proc filesystem to querying MySQL servers to collect the data we need.
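As an illustration of the kind of /proc parsing involved, here is a minimal sketch; the function names and fields chosen are illustrative, not the agent's actual code:

```python
def read_load_average(path="/proc/loadavg"):
    """Return the 1, 5 and 15 minute load averages as floats."""
    with open(path) as f:
        fields = f.read().split()
    return [float(x) for x in fields[:3]]

def read_meminfo(path="/proc/meminfo"):
    """Return memory stats (in kB) as a dict, e.g. {'MemTotal': 16384, ...}."""
    stats = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if rest.strip():
                stats[key.strip()] = int(rest.split()[0])
    return stats
```

On a Linux host, `read_load_average()` reads the live kernel values; the `path` parameter just makes the functions easy to test.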
Once collected, the data is bundled into a JSON payload, checksummed, and then sent via HTTP POST back to Server Density.
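The bundle-and-checksum step can be sketched as follows; the field names and the choice of MD5 are assumptions for illustration, not the agent's documented wire format:

```python
import hashlib
import json

def build_payload(metrics):
    """Bundle collected metrics into JSON and attach a checksum.

    The metrics dict is serialised deterministically (sorted keys) so the
    checksum is stable for identical data.
    """
    body = json.dumps(metrics, sort_keys=True)
    checksum = hashlib.md5(body.encode()).hexdigest()
    return {"payload": body, "hash": checksum}
```

The agent would then POST the resulting form fields over HTTPS; the receiving end recomputes the hash over the payload body to detect corruption in transit.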
The agent only ever sends data over HTTPS, which means no special protocols or firewall rules are needed. It also means the data is encrypted in transit.
Server Density uses the Cloudflare network as a content delivery network, both for security and to speed up traffic. The DNS lookup for the HTTPS postback resolves, via Cloudflare's anycast DNS, to the closest of 71 POPs, which reduces latency and the number of hops between your systems and Server Density.
The payload enters the Cloudflare network, where it is proxied to Server Density over accelerated transit into a SoftLayer POP. From there, via an anycast-routed global IP, it reaches our Nginx load balancers.
The Nginx load balancers forward the payload into our processing backend via a Python-based, Tornado-powered API. The payload is then stored in a distributed Kafka queue for asynchronous retrieval. An Apache Storm spout picks the payload up from Kafka and passes it to a processing bolt, which runs validations and then sends it down two routes: into the alert processing bolt and into metrics storage.
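Storm bolts are normally written in Java, but the routing logic of the validation step can be sketched in Python; the validation checks and field names here are assumptions, not the actual topology code:

```python
def validate(payload):
    """Illustrative checks a validation bolt might run (assumed rules)."""
    return (
        isinstance(payload, dict)
        and "agentKey" in payload
        and "metrics" in payload
    )

def process(payload, alert_bolt, storage_bolt):
    """Fan a validated payload out to both downstream routes.

    alert_bolt and storage_bolt stand in for the emit targets of the
    Storm topology: alert processing and metrics storage.
    """
    if not validate(payload):
        return False
    alert_bolt(payload)    # alert processing branch
    storage_bolt(payload)  # metrics storage branch
    return True
```

The key design point mirrored here is that a single validated payload feeds two independent consumers, so a slow storage write never delays alert evaluation.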
The alert processing bolt takes the already validated payload and runs it through our rules engine, comparing the values against your alert configurations. If there's a match, the right action is taken: this could be opening a new alert, sending an email notification, triggering a webhook, closing a fixed alert or simply ignoring the value. The action depends on the configuration and may then trigger another bolt execution, e.g. for sending notifications.
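A minimal sketch of such a rules-engine decision, assuming a simple threshold-style rule format (the real configuration schema is not public):

```python
import operator

# Comparison operators an alert rule might support (assumed set).
OPS = {
    ">": operator.gt, "<": operator.lt,
    ">=": operator.ge, "<=": operator.le, "==": operator.eq,
}

def evaluate_rule(rule, value):
    """Return True when the metric value matches the alert rule."""
    return OPS[rule["op"]](value, rule["threshold"])

def action_for(rule, value, alert_open):
    """Decide the action, mirroring the cases described above."""
    matched = evaluate_rule(rule, value)
    if matched and not alert_open:
        return "open_alert"    # would also fan out to notification bolts
    if not matched and alert_open:
        return "close_alert"   # the condition has fixed itself
    return "ignore"
```

Tracking whether an alert is already open is what lets the same rule produce "open", "close" or "ignore" from successive payloads.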
The payload is posted via our internal metrics HTTP API. Here it is parsed into the format we use for storing the time series data and then written into our distributed MongoDB time series datastore. The data written to the master is replicated a few moments later to the secondary data centre and to an off-site, off-vendor real-time backup.
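The shape of a time series document might look like the sketch below; the schema (field names, minute-level bucketing) is an assumption for illustration, since the real storage format is internal:

```python
import datetime

def to_timeseries_doc(agent_key, metric, value, ts=None):
    """Shape one metric reading into a MongoDB-style document.

    Timestamps are truncated to the minute, a common bucketing choice
    for time series writes (assumed here, not confirmed).
    """
    ts = ts or datetime.datetime.utcnow()
    return {
        "agent": agent_key,
        "metric": metric,                           # e.g. "loadAvrg"
        "t": ts.replace(second=0, microsecond=0),   # minute bucket
        "v": value,
    }

# With pymongo this would be written roughly as:
#   db.metrics.insert_one(to_timeseries_doc("abc123", "loadAvrg", 0.42))
```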
Once the payload has entered our queue, we return an HTTP 200 OK response to the agent. Except for checksum validation, all processing is done asynchronously, and once it completes successfully the payload is discarded. The whole process takes no more than 150ms, which is our internal SLA from postback received to processing completed (including alerts triggered).
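The synchronous part of intake is therefore small: verify the checksum, enqueue, respond. A sketch, using an in-process queue as a stand-in for Kafka and assumed field names:

```python
import hashlib
import json
import queue

work_queue = queue.Queue()  # stand-in for the distributed Kafka queue

def handle_postback(raw_body, claimed_hash):
    """Synchronous intake: verify the checksum, enqueue, return a status.

    Everything after the enqueue (validation bolts, alerting, storage)
    happens asynchronously, as described above.
    """
    if hashlib.md5(raw_body.encode()).hexdigest() != claimed_hash:
        return 400  # checksum mismatch is the only synchronous failure
    work_queue.put(json.loads(raw_body))
    return 200
```

Returning 200 as soon as the payload is durably queued is what keeps the agent-facing latency low regardless of downstream processing load.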
HTTPS JSON REST with comprehensive documentation.
Widgetized. Auto updating, auto resizing.
Compatible with Windows, Android, OS X, Linux, TV browsers, iPad, Microsoft Surface.
Multi series line or stacked.
Fully customisable and correlation oriented.
Tag and per alert user targeting.
Wide OS support.
Doesn’t require root. Signed OS package deployment.
HTTP, TCP checks across multiple worldwide locations.
Deployed across 2 data centres, 3000 miles from each other (Washington DC, San Jose).
71 edge POPs to reduce latency both for the agent transmitting monitoring data and for users accessing our web UI and APIs.
Availability monitoring nodes in 27 cities within 20 countries across 6 continents.
No obligation or credit card required. See for yourself how the Server Density technical specifications hold up for you and your infrastructure. Sign up and start your trial.