Server Density.

Hosted website + server monitoring software.

Journey

The server monitoring journey.

1. Agent installed

The open source server monitoring agent is installed using cryptographically signed packages for your OS (.deb or .rpm).

2. Agent check cycle

The agent calls various OS-level APIs and interacts with applications to collect key metrics. This ranges from parsing the /proc filesystem to querying MySQL servers to collect the data we need.

Once collected, the data is bundled into a JSON payload, checksummed and then HTTP POST’d back to Server Density.
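In outline, that bundling step could look like the following Python sketch. The field names, agent key and endpoint URL are illustrative assumptions, not the agent's actual schema, and the checksum algorithm is assumed for the example:

```python
import hashlib
import json
import urllib.request

# Hypothetical snapshot of collected metrics (keys are illustrative only).
payload = {
    "agentKey": "example-agent-key",
    "loadAvrg": 0.42,
    "memPhysUsed": 2048,
    "plugins": {"mysql": {"queriesPerSecond": 117}},
}

# Serialize, then checksum the exact bytes that will go over the wire so the
# receiving end can verify the payload arrived intact.
body = json.dumps(payload).encode("utf-8")
checksum = hashlib.md5(body).hexdigest()

req = urllib.request.Request(
    "https://example.invalid/postback",  # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/json", "X-Checksum": checksum},
    method="POST",
)
# urllib.request.urlopen(req)  # network call omitted in this sketch
```

Checksumming the serialized bytes (rather than the dict) matters: it lets the server reject any payload that was corrupted in transit before doing further work.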

3. En route to Server Density

The agent only ever sends data over HTTPS, which means no special protocols or firewall rules are needed. It also means the data is encrypted in transit.

Server Density uses the Cloudflare network as a content delivery network, for security and to speed up traffic. The HTTPS postback DNS lookup finds the closest of 71 POPs using Cloudflare's anycast DNS, which reduces latency and hops between your systems and Server Density.

4. Entering the Server Density network

The payload enters the Cloudflare network, where it is proxied to Server Density via accelerated transit to a SoftLayer POP. Using an anycast-routed global IP, the payload hits our Nginx load balancers.

5. Processing the payload

The Nginx load balancers forward the payload to our processing backend via a Python-based, Tornado-powered API. The payload is then stored in a distributed Kafka queue for asynchronous retrieval. An Apache Storm spout picks up the payload from Kafka and sends it to a processing bolt, which runs some validations and then sends it down two routes: into the alert processing bolt and into metrics storage.
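The validate-then-fan-out step can be sketched in plain Python (the real pipeline runs as Storm bolts on the JVM; the function names, checksum scheme and payload fields here are assumptions for illustration):

```python
import hashlib
import json

def validate(raw_body, claimed_checksum):
    """Reject payloads whose checksum doesn't match the received bytes."""
    return hashlib.md5(raw_body).hexdigest() == claimed_checksum

def process(raw_body, claimed_checksum, alert_bolt, metrics_bolt):
    """Mimic the validation bolt: fan a valid payload out to both routes."""
    if not validate(raw_body, claimed_checksum):
        return False
    payload = json.loads(raw_body)
    alert_bolt(payload)    # route 1: alert processing
    metrics_bolt(payload)  # route 2: metrics storage
    return True

# Usage with stand-in bolts:
received = []
body = json.dumps({"loadAvrg": 0.42}).encode("utf-8")
ok = process(body, hashlib.md5(body).hexdigest(),
             alert_bolt=lambda p: received.append(("alerts", p)),
             metrics_bolt=lambda p: received.append(("metrics", p)))
```

The key property is that a payload is parsed once and then delivered to both downstream routes, so alerting and storage never disagree about what was received.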

5a. Alert processing

The alert processing bolt takes the already validated payload and runs it through our rules engine, comparing the values against your alert configurations. If there's a match, the right action is taken - this could be opening a new alert, sending an email notification, triggering a webhook, closing a fixed alert or even just ignoring the value. The action depends on the configuration and may then trigger another bolt execution, e.g. for sending notifications.
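A rules engine of this kind can be reduced to a small sketch: each configuration names a metric, a comparison and an action, and the engine returns the actions whose conditions match. The config shape, thresholds and action names below are hypothetical:

```python
# Hypothetical alert configurations: one threshold rule per entry.
ALERT_CONFIGS = [
    {"metric": "loadAvrg", "op": ">", "threshold": 4.0, "action": "open_alert"},
    {"metric": "memPhysUsed", "op": ">", "threshold": 7500, "action": "notify_email"},
]

# Supported comparison operators.
OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

def evaluate(payload):
    """Compare payload values against each rule; return the actions to take."""
    actions = []
    for rule in ALERT_CONFIGS:
        value = payload.get(rule["metric"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            actions.append(rule["action"])
    return actions

actions = evaluate({"loadAvrg": 5.1, "memPhysUsed": 2048})
```

Metrics absent from the payload are simply skipped, which corresponds to "just ignoring the value" in the description above.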

5b. Metrics storage

The payload is posted via our internal metrics HTTP API. Here it is parsed into the format we use for storing the time series data and then written into our distributed MongoDB time series datastore. The data written to the master is replicated moments later to the secondary data centre and to an off-site/off-vendor real-time backup.
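One common way to shape readings for a MongoDB time-series datastore is the pre-aggregated hourly document pattern: one document per metric per hour, with per-minute values nested inside. The schema below is an assumption for illustration, not Server Density's actual format:

```python
from datetime import datetime, timezone

def to_timeseries_doc(metric, value, now=None):
    """Shape one metric reading as an hourly time-series document
    (hypothetical schema: one document per metric per hour, keyed by minute)."""
    now = now or datetime.now(timezone.utc)
    return {
        "metric": metric,
        "hour": now.replace(minute=0, second=0, microsecond=0),
        "values": {"%02d" % now.minute: value},
    }

doc = to_timeseries_doc("loadAvrg", 0.42,
                        now=datetime(2024, 1, 1, 12, 34, tzinfo=timezone.utc))
# A real writer would upsert this, e.g. with pymongo:
# collection.update_one({"metric": doc["metric"], "hour": doc["hour"]},
#                       {"$set": {"values.34": 0.42}}, upsert=True)
```

Grouping sixty readings into one document keeps writes as in-place updates and makes range queries over an hour a single document fetch.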

6. Success and discard

Once the payload has been entered into our queue, we return an HTTP 200 OK response to the agent. Except for checksum validation, all processing is done asynchronously, and once it completes successfully the payload is discarded. The whole process takes no more than 150ms, which is our internal SLA from postback received to processing completed (including alerts triggered).

Features

We provide the following features.

API

API docs

HTTPS JSON REST with comprehensive documentation.

Dashboards

Dashboard actions

Widgetized. Auto updating, auto resizing.

Compatible with Windows, Android, OS X, Linux, TV browsers, iPad, Microsoft Surface.

Graphing

Line or stacked

Multi-series line or stacked.

Correlation oriented

Fully customisable and correlation oriented.

Snapshot


Alerting

Tag-based and per-alert user targeting.

  • SMS
  • Email
  • Webhook
  • Slack
  • PagerDuty
  • iOS
  • Android

Server monitoring

Wide OS support:

  • Linux
  • Windows
  • OS X
  • FreeBSD

Automatable:

  • Puppet
  • Chef
  • Ansible
  • Salt Stack

Doesn’t require root. Signed OS package deployment.

Website monitoring

HTTP, TCP checks across multiple worldwide locations.
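A single availability probe from one location amounts to timing a connection attempt. The sketch below is an assumed implementation of a TCP check (the function name and timeout are illustrative):

```python
import socket
import time

def tcp_check(host, port, timeout=5.0):
    """One availability probe: can we open a TCP connection, and how fast?

    Returns (True, latency_in_seconds) on success, (False, None) on failure.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None
```

Running the same probe from many worldwide locations is what distinguishes a local network blip from a real outage.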

Infrastructure

Our software has the following infrastructure behind it.

Redundancy

Deployed across 2 data centres 3,000 miles apart (Washington DC, San Jose).

Locations

71 POPs

71 edge POPs to reduce latency both for the agent transmitting monitoring data and for users accessing our web UI and APIs.

27 cities
20 countries
6 continents

Availability monitoring nodes in 27 cities within 20 countries across 6 continents.

  • Application stack made up of Python, Ubuntu Linux, Redis, Nginx, Kafka, Storm and Zookeeper.
  • Built using a microservices architecture.
  • Our engineering team deploys 5-10 times per day.
  • Built by a UK company with team members living in the UK, Italy, Spain and Portugal, with nationalities including British, Italian, Spanish, Portuguese and Swedish.

Pricing

Based on the number of payloads processed.
Each unit equals 1 monitored server + 1 web check.

  • 1: $10/mo
  • 2: $20/mo
  • 5: $45/mo
  • 10: $90/mo
  • 25: $150/mo
  • 50: $300/mo
  • 75: $400/mo
  • 100: $500/mo
  • Free solutions exist, such as Nagios, Munin and Zabbix, but they require a lot more time to set up and maintain. Server Density does not.

    Here's how we see the comparison.

Try us for free.

No obligation or credit card required. See for yourself how Server Density's technical specifications hold up for you and your infrastructure.

Sign up and start your trial.