Security is a well-known risk with cloud applications, but…what about bandwidth? More and more businesses are looking to cut costs and improve end customer experience by moving customer-facing applications, storage/backup solutions, and many other applications to distributed data centers in the cloud.
If users experience performance or latency problems due to insufficient network bandwidth (slow, stalling, or failed applications), the ride to the cloud could get bumpy in a hurry.
In the simplest terms, bandwidth describes the capacity or data rate of a network connection. In theory, the greater the bandwidth, the higher the probability of acceptable application performance. Many IT organizations throw more bandwidth at the problem and have learned the hard way that this does not resolve serious performance issues. This approach usually just wastes money and does little to address the root causes of poor performance (inefficient network topology, application architectures that aren’t cloud-friendly, too many employees streaming audio at work, etc.).
Factors like latency and packet loss also come into play – especially in the real world where available bandwidth is usually more relevant than theoretical maximum bandwidth.
The available bandwidth along a network path is critically important for cloud application performance. Available bandwidth is the capacity that a new application can use without impacting the transmission of other flows on that path.
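To make the distinction concrete, here is a toy sketch of available versus utilized capacity on a single link. The numbers are hypothetical; real tools measure these values actively along the full path.

```python
# Toy illustration of available vs. utilized capacity on a link.
# All figures below are hypothetical examples, not measured values.

link_capacity_mbps = 100.0        # provisioned (theoretical) capacity
current_utilization_mbps = 62.5   # traffic already flowing on the path

# Available bandwidth: the headroom a new application can use
# without displacing existing flows on that path.
available_mbps = link_capacity_mbps - current_utilization_mbps

print(f"Available bandwidth: {available_mbps:.1f} Mbps")  # → 37.5 Mbps
```

In practice the utilized figure fluctuates constantly, which is exactly why a one-time calculation like this is not enough.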
So when it comes to cloud applications, available bandwidth isn’t just any old bandwidth – it’s “clean capacity” with minimal packet loss, jitter and latency. And since applications use bandwidth differently, the amount and quality of available bandwidth must be ensured for each application. For instance, Virtual Desktop Infrastructure (VDI) and video conferencing applications are much more sensitive to bandwidth availability and quality than cloud-based backup systems are likely to be.
A recent Computerworld article, “Bandwidth Bottlenecks Loom Large in the Cloud,” discussed how IT hasn’t considered all the implications of bandwidth challenges. For example, only 54% of surveyed IT professionals using some form of cloud services report that they currently involve network engineering/operations staff in the process, down from 62% in 2009.
Traditional network best practices may be left by the wayside as well. In particular, the health of overall network traffic delivery may quietly fade away until poor cloud application performance unmasks the symptoms.
What is required is insight into utilized versus available versus potential network capacity. This level of awareness is particularly challenging to gain when public networks are involved, due to the dynamic, “bursty” nature of Internet traffic. Cloud application performance can degrade due to normal peak hour congestion, for instance. Proactive and continuous bandwidth monitoring is required to ensure application delivery and compliance with service levels.
Network managers need to monitor not only capacity and utilization, but also packet loss, latency and jitter, route changes and other real-time key performance indicators.
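As a simple sketch of what tracking those indicators involves, the snippet below derives loss, average latency, and a basic jitter estimate from a series of round-trip-time probes. The sample values are invented for illustration, and the jitter formula here is just the mean absolute difference between consecutive RTTs (production tools typically use more sophisticated estimators).

```python
# Hypothetical RTT samples in milliseconds from periodic probes.
# None represents a probe that never came back (a lost packet).
samples = [20.1, 22.4, None, 19.8, 25.0, 21.2, None, 20.5]

received = [s for s in samples if s is not None]

# Packet loss: fraction of probes with no response.
loss_pct = 100.0 * (len(samples) - len(received)) / len(samples)

# Latency: mean RTT over the probes that did return.
avg_latency = sum(received) / len(received)

# Jitter (simplified): mean absolute change between consecutive RTTs.
diffs = [abs(b - a) for a, b in zip(received, received[1:])]
jitter = sum(diffs) / len(diffs)

print(f"loss={loss_pct:.1f}% latency={avg_latency:.1f}ms jitter={jitter:.2f}ms")
# → loss=25.0% latency=21.5ms jitter=2.92ms
```

Each of these numbers can look healthy on its own while the combination still ruins a VDI or voice session, which is why they need to be watched together and continuously.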
SNMP instrumentation can only provide part of the data you need to guarantee cloud application performance. Besides requiring devices to be owned and configured, SNMP tools can’t offer real-time bandwidth data and other KPIs to illustrate the end-to-end performance of cloud applications from the standpoint of remote sites.
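To see why SNMP data is only part of the picture, consider how utilization is typically derived from it: two polls of an interface's octet counter (e.g., ifInOctets from the standard IF-MIB), averaged over the polling interval. The sketch below uses made-up counter values and omits the actual SNMP polling code; the point is that the result is a multi-minute average, not a real-time or end-to-end measurement.

```python
# Sketch: deriving average link utilization from two SNMP octet-counter
# readings. Counter values and interface speed are hypothetical; the
# SNMP polling itself is omitted.

def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, if_speed_bps: float) -> float:
    """Average inbound utilization (%) between two counter polls."""
    delta_bits = (octets_t1 - octets_t0) * 8
    return 100.0 * delta_bits / (interval_s * if_speed_bps)

# Two polls 300 seconds apart on a 100 Mbps interface.
pct = utilization_pct(1_000_000_000, 1_750_000_000, 300, 100_000_000)
print(f"{pct:.1f}% average utilization over 5 minutes")  # → 20.0%
```

A link that averages 20% over five minutes may still have saturated for ten seconds inside that window, and a counter average says nothing about loss, jitter, or the path beyond the devices you own.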
Do you know your network’s available bandwidth? How do you continuously monitor it? Check out PathTest, a new free tool and the most accurate bandwidth capacity test available. Even if you understand your bandwidth measurements today, it is critical to continuously and accurately monitor your network’s total (achievable) and available capacity and performance, without flooding the network with test traffic that can itself degrade application performance. Sign up for a free trial of PathView Cloud and get insight you simply don’t have today.