As the world has become more connected, those connections have changed our lives. Twenty-five years ago, mobile phones were just emerging and a 29-pound Kaypro was seen as the portable computer. The manufacturer described it as "luggable." You still needed to go to the bank to deposit a check, go to the store to buy birthday presents, and use a paper map to find your way somewhere new. A lot has happened in 25 years. The mobile phone is now far more powerful than the Kaypro, and with it your access to services is nearly limitless. You can pay bills while waiting in line for a restaurant table, close your garage door 10 minutes after you leave your house, turn the heat up in your home when you touch down at your local international airport, or even organize a flash mob. This capability has woven its way into our social and business fabrics, and with the Internet of Things (IoT) ramping up, adoption will only increase.
Given our increasing reliance on these services, even a few minutes of downtime can be costly. For good reason, the drive within the data center industry today is for mission-critical data centers at lower cost. This becomes possible as facility infrastructure evolves and the resiliency of applications moves up the stack, from the mechanical/electrical/plumbing (MEP) level to the IT plane. Our applications are becoming more fault tolerant as they take advantage of the multiple compute and storage resources available from the cloud. The cloud now houses multiple copies of the data you access, and processes your current request for it (that is, your session state) at various data centers around the world. Today, "mission critical" describes not just the resiliency of the facility that houses your data, but the redundancy of the data itself.
Hence the importance of what we call "the power of three." Assume you're in business and you want to make your application more resilient. Prior to cloud technology, you would have looked for robust data center infrastructure. Now, pairing replication services with advanced connectivity to create a dual-homed environment results not only in a more resilient application, but one that often places the content closer to the end user, reducing wait time and improving the user experience. The downside of this dual-homed approach is the need for twice as many servers, cabinets, and power strips: each of the two sites must carry the full load on its own, so you deploy 200% of your IT kit requirement.
But hypothetically, what happens if you deploy at three locations and put 50% of your compute/storage requirement at each location? Any two surviving sites together still cover 100% of the load, so this model tolerates a single site failure just as the dual-homed model does, but requires only 150% of your IT kit. Even if two sites fail, you're still in business with up to 50% functionality. You could take this a step further, go to four locations, and end up with 133.3% of your requirement, but the law of diminishing returns begins to kick in. While the move from two to three locations (200% down to 150%) yields a 25% savings in equipment and power costs, the step from three to four locations (150% down to 133.3%) saves only about 11%, while potentially increasing the cost of connectivity and multi-site synchronization, as well as the need for technical expertise.
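The arithmetic behind these figures is simple: to survive any single site failure, each of N sites must hold 1/(N-1) of the requirement, so total deployed capacity is N/(N-1) of what one site would need. A short sketch (function names are illustrative, not from any particular tool):

```python
def total_capacity_pct(sites: int) -> float:
    """Total IT capacity to deploy, as a percentage of the base requirement,
    so that losing any single site still leaves 100% of capacity.
    Each of the `sites` locations holds 1/(sites - 1) of the requirement."""
    if sites < 2:
        raise ValueError("need at least two sites to survive a site failure")
    return 100.0 * sites / (sites - 1)

def step_savings_pct(from_sites: int, to_sites: int) -> float:
    """Percentage reduction in deployed kit when adding sites."""
    before = total_capacity_pct(from_sites)
    after = total_capacity_pct(to_sites)
    return 100.0 * (before - after) / before

for n in (2, 3, 4):
    print(f"{n} sites -> deploy {total_capacity_pct(n):.1f}% of requirement")
# 2 sites -> deploy 200.0% of requirement
# 3 sites -> deploy 150.0% of requirement
# 4 sites -> deploy 133.3% of requirement

print(f"2 -> 3 sites saves {step_savings_pct(2, 3):.0f}%")   # 25%
print(f"3 -> 4 sites saves {step_savings_pct(3, 4):.0f}%")   # 11%
```

The diminishing returns are visible immediately: each additional site shaves a smaller slice off the total, while connectivity and synchronization costs grow with every new location.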
While some large cloud providers may opt for four or more locations, the best bang for most businesses' IT buck happens in the move from two to three locations. By deploying your IT application at three locations, you not only reduce the cost of the IT kit but also improve the resiliency of the application without adding expensive complexity. This 3X model is what many of the large cloud providers offer, ready to be deployed at the touch of a button. So if your strategy involves building your own cloud, the next time you are thinking of paying double to gain resiliency, you might want to think about the power of three instead.
– Jason Rafkind, Senior Sales Engineer