MIT researchers’ new system could reduce data center latency: Helping clients access the most advanced technology
Researchers from the Massachusetts Institute of Technology recently announced a new invention that could transform the way data centers process information. The system is called Fastpass and, if brought to market, this technology could offer a new way for the servers within a data center to communicate with one another.
How will Fastpass impact data center latency?
Currently, data center networks rely on decentralized protocols to manage how servers connect and exchange information with one another. Fastpass looks to replace this approach with a centralized strategy, which has the potential to reduce latency and eliminate several other issues associated with decentralized scheduling.
Today’s methods cause considerable latency: when servers attempt to “talk” to each other, routers can become overloaded as multiple connections are established across the servers housing the shared processing workloads. At an overwhelmed router, requests are “queued,” waiting to be processed once earlier requests have been handled.
In a recent Datacenter Dynamics article, contributor Nick Booth wrote: “The resulting traffic queue creates an average latency between servers in a data center of 3.56 microseconds, according to MIT researchers. Their Fastpass alternative, a centralized communications protocol, will cut that latency to 0.23 microseconds on average, its test figures claim.”
MIT News reported that during research tests, the Fastpass system was able to cut Facebook’s typical router queue length by 99.6 percent, all but eliminating the queues altogether.
How does Fastpass work?
As part of its new protocol approach, Fastpass utilizes a central server known as an arbiter. This hardware provides the decision-making abilities for the system, controlling which network nodes can send data to which other nodes during a certain window of time. When a node has data to transmit, it first puts in a request to the arbiter. The central server then sends back a routing and timing assignment that the node is programmed to follow.
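The request-and-assign flow above can be sketched in a few lines of code. The sketch below is illustrative only (the function and variable names are assumptions, not taken from the MIT implementation): an arbiter collects transmission requests and greedily assigns each one to the earliest timeslot in which neither the sender nor the receiver is already busy, reflecting the idea that each node can send or receive at most one transmission per window of time.

```python
def allocate_timeslots(requests):
    """Hypothetical arbiter sketch: assign each (src, dst) request to the
    earliest timeslot where both endpoints are free.
    Returns a dict mapping timeslot -> list of (src, dst) assignments."""
    schedule = {}  # timeslot -> list of (src, dst) pairs sent in that slot
    busy = {}      # timeslot -> set of nodes already sending or receiving
    for src, dst in requests:
        slot = 0
        # Walk forward until a slot is found where neither endpoint is in use
        while src in busy.get(slot, set()) or dst in busy.get(slot, set()):
            slot += 1
        schedule.setdefault(slot, []).append((src, dst))
        busy.setdefault(slot, set()).update((src, dst))
    return schedule

# A->B and C->D share slot 0; A->C conflicts with A->B, so it waits for slot 1
print(allocate_timeslots([("A", "B"), ("C", "D"), ("A", "C")]))
```

A real arbiter must also compute a path through the network for each assignment and do all of this fast enough to keep up with line rate, but the core scheduling decision has this shape.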
In the MIT News article, Jonathan Perry, one of the authors of the Fastpass paper stated: “If you have to pay these maybe 40 microseconds to go to the arbiter, can you really gain much from the whole scheme? …Surprisingly, you can.”
What does this mean for data center clients?
This new technology could not only improve the way data centers deal with incoming traffic, but could have a direct impact on data center clients as well. Reduced latency means faster service delivery, which translates into clients being able to access resources and exchange internet traffic at considerably higher speeds.
Game-changing technological innovations like Fastpass underscore both how quickly the data center and connectivity world is changing and the increasing importance of having access to a data center provider with a global footprint.
Digital Realty’s global footprint and broad connectivity offer our clients the ability to extend new technologies like this much more widely.
– John Sarkis, General Manager, Colocation & Connectivity