By Ryan Gellis
If you’re like most companies, you probably spend much of your time and money on generating web traffic and very little on monitoring that traffic. Time and again, I’ve seen companies ask how they can increase their bottom line or monetize new programs without any baseline statistics on how their current efforts are performing.
Analytics can partially answer these questions, but without diving deeper into the full set of data, the answers stay partial. A common response I hear: “But I installed performance software on my website. Doesn’t that count as monitoring?”
Not exactly. I consider such software an essential first step to a full monitoring suite, but it is mainly useful for application performance monitoring. This is the type of monitoring that tells you how the application you’re running on your web servers is performing, what pages are slow, and what errors are being thrown. Relying on your customers to complain or spot-checking daily is a reactive plan at best, since neither will tip you off to issues before they become catastrophic. In the worst cases, your website could be down for hours (3 a.m. on a Saturday anyone?) before you even know there’s a problem, losing you sales and damaging your brand.
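To make the proactive-versus-reactive point concrete, here is a minimal sketch of how a synthetic uptime probe might classify a single check result. The thresholds and labels are illustrative assumptions, not values from any particular APM product.

```python
# Illustrative sketch: turn one synthetic probe result into an alert level.
# status_code=None models a connection failure; slow_ms is an assumed threshold.
def classify_probe(status_code, latency_ms, slow_ms=2000):
    """Classify a single health-check probe as 'down', 'degraded', or 'ok'."""
    if status_code is None or status_code >= 500:
        return "down"       # page someone, even at 3 a.m. on a Saturday
    if latency_ms > slow_ms:
        return "degraded"   # warn before it becomes catastrophic
    return "ok"
```

A real monitoring service runs probes like this on a schedule from multiple locations and alerts you the moment results change, rather than waiting for a customer complaint.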
Identifying the Root Cause
One recent issue where website monitoring software proved its worth involved a client who was seeing an increase in 404 traffic. It turned out the bad page requests were being caused by an integration with a behavioral marketing tool that was interacting poorly with the website’s caching mechanism, resulting in a customer experience that highlighted a big 404 message instead of beautiful product details. The quickest way to resolve issues like this is to be able to see what’s causing them in the first place, and your application performance monitoring is the main tool in your toolkit for that job.
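A simple way to spot a 404 spike like this before customers complain is to scan the web server’s access logs for the offending paths. The sketch below assumes a common Apache/Nginx “combined” log layout; the regex and sample format are illustrative, not a universal parser.

```python
import re
from collections import Counter

# Assumed log shape: ... "GET /some/path HTTP/1.1" 404 512
# This regex is a sketch; adapt it to your server's actual log format.
LOG_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def top_404_paths(log_lines, n=5):
    """Count which request paths return 404 most often."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group("status") == "404":
            counts[m.group("path")] += 1
    return counts.most_common(n)
```

Feeding a day of logs through a check like this, on a schedule with an alert on the count, is the difference between discovering the caching bug in hours versus weeks.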
The next tool you should consider implementing is a monitoring aggregation platform, which you can use to discover hardware and network issues quickly, sometimes before they cause a failure. You can also monitor the software stack that is running on top of your operating systems to ensure things are running as expected and be notified if they are not.
I’m excited by technology like this because of the industry-wide shift to cloud architecture. When you host your website in the cloud, you must plan for servers to go down and be able to quickly add healthy instances into rotation for maximum speed and uptime. I once had a client whose marketing department accidentally put the company on Slick Deals a day earlier than anticipated. The surge of traffic was overwhelming, and we would never have known the deal went live early if we hadn’t aggregated the data from the server cluster and been notified once certain thresholds were reached. Redundancy and high availability are only achievable if you know when servers in your architecture are approaching their limits. In the Slick Deals case, we were watching metrics like CPU load and network bandwidth usage.
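Threshold-based alerting of the kind described above can be sketched in a few lines. The metric names and limits here are illustrative assumptions, not the actual values we used with that client.

```python
# Illustrative thresholds: CPU load as a 0-1 fraction, bandwidth in Mbps.
THRESHOLDS = {"cpu_load": 0.85, "net_mbps": 800}

def breached(samples, thresholds=THRESHOLDS):
    """Return metric names whose worst recent reading exceeds its limit.

    `samples` maps a metric name to a list of recent readings aggregated
    from across the server cluster.
    """
    alerts = []
    for metric, limit in thresholds.items():
        readings = samples.get(metric, [])
        if readings and max(readings) > limit:
            alerts.append(metric)
    return sorted(alerts)
```

A monitoring aggregation platform does the hard parts (collection, retention, notification), but the core decision it makes on your behalf looks much like this comparison.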
You can even monitor for when your site is under attack. I had a client who was seeing a huge spike in traffic, but orders on their website did not increase in turn. By aggregating logs, we were able to quickly determine that traffic from outside the U.S. (they only sold products domestically) was flooding their servers. We put a stopgap in place to resolve the issue. It later turned out that a malicious actor overseas had been running automated scripts against the site, probing for entry points and vulnerabilities.
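The log-aggregation step can be illustrated with a toy “top talkers” check: count requests per source IP and flag any source sending an abnormal volume. A real setup would also join a GeoIP lookup to catch the out-of-country pattern described above; that part is elided here.

```python
from collections import Counter

def flag_flooders(client_ips, min_requests=1000):
    """Flag source IPs responsible for an abnormal share of requests.

    `client_ips` is one IP string per logged request; `min_requests` is an
    illustrative threshold you would tune to your normal traffic levels.
    """
    counts = Counter(client_ips)
    return [ip for ip, n in counts.most_common() if n >= min_requests]
```

Once a list like this exists, the stopgap can be as simple as firewall rules blocking the flagged sources while you investigate.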
Planning for Attacks
In addition to monitoring for attacks using data aggregation and alerts, it is critical to set up solutions that handle distributed denial-of-service (DDoS) attacks and provide operating-system-level threat intelligence. You can install software that keeps bad actors from flooding your servers with bogus requests and taking your website down, and that monitors filesystem changes and operating system updates for known security vulnerabilities. If you operate in an industry that handles sensitive data (such as e-commerce, medical, or human resources), tools like these are critical to maintaining compliance with security rules and regulations.
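The filesystem-change monitoring mentioned above boils down to comparing file fingerprints against a recorded baseline. This is a toy sketch of that idea, operating on in-memory bytes rather than reading from disk on a schedule as a real integrity monitor would.

```python
import hashlib

def fingerprint(files):
    """Map each filename to a SHA-256 digest of its contents.

    `files` maps name -> bytes; a real monitor would walk the filesystem.
    """
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def changed_files(baseline, current):
    """Report files whose digest differs from the recorded baseline,
    including files that are new since the baseline was taken."""
    return sorted(name for name, digest in current.items()
                  if baseline.get(name) != digest)
```

An unexpected entry in that changed list, say, a modified system binary or web server config, is exactly the kind of early warning these tools exist to raise.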
Finally, a robust monitoring suite helps you improve customer experience and retention. Monitoring your site involves more than making sure that the website functions as it should: It also includes collecting data regarding how visitors are using your site. You should be able to see what drives people to your site, the search terms visitors used to get there, and consumer behavior while they browse. Having information like this allows you to better understand the user experience you’re providing for your customers and optimize your website functionality to increase conversion on key performance goals.
If you’re trying to build a customer-centric strategy, my favorite tools include Google Analytics, Hotjar, and Optimizely. Tools like these help you build data about your customers, see their interactions on your website through heatmaps and session recordings, and A/B test new solutions to existing problems. They also build the habit of pursuing longer-term strategic goals by thinking tactically about the updates your customers want to see.
I think of website monitoring as an opportunity rather than a hidden cost. By introducing application performance monitoring, data and log aggregation, network and operating-system intelligence tools, and user analytics and testing tools, you can get a clear pulse on your website today. You might even be positioned to plan for a better tomorrow, too.
Ryan Gellis is the founding partner of Robofirm. He is responsible for leading the company’s growth, strategy, and vision.