Researchers from the Massachusetts Institute of Technology recently announced that they have created what they are calling a 'no-wait data center'. According to ZDNet, the researchers were able to conduct experiments in which network transmission queue length was reduced by more than 99 percent. The technology, dubbed FastPass, will be fully explained in a paper being presented in August at the conference of the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM).
The MIT researchers were able to use one of Facebook's data centers to conduct testing, which showed reductions in latency that effectively eliminated normal request queues. The report states that even in heavy traffic, the latency of an average request dropped from 3.65 microseconds to just 0.23 microseconds.
While the system's increased speed is a benefit, the aim is not to use it for faster processing as such, but to simplify applications and switches and shrink the amount of bandwidth needed to run a data center. Because of the minuscule queue length, researchers believe FastPass could be used in the construction of highly scalable, centralized systems to deliver faster, more efficient networking models at decreased costs.
Centralizing traffic flow to make quicker decisions
In current network models, packets spend much of their time waiting for switches to decide when each packet can move on to its destination, and the switches must make those decisions with only limited, local information. Instead of this traditional decentralized model, FastPass works on a centralized system and uses an arbiter to make all routing decisions. This allows network traffic to be analyzed holistically and routing decisions to be made based on that analysis. In testing, researchers found that a single eight-core arbiter was able to handle 2.2 terabytes of data per second.
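The core idea can be illustrated with a minimal sketch (hypothetical names and toy data, not the researchers' actual code): in each time slot, a centralized arbiter grants at most one transmission per source and per destination, so granted packets never contend for the same endpoint and queues cannot build up at the switches.

```python
# Toy model of centralized timeslot allocation. This is an illustrative
# sketch of the concept, not the FastPass implementation.

def allocate_timeslot(requests):
    """Greedily grant (src, dst) requests so no endpoint is used twice.

    Returns (granted, remaining): requests granted in this timeslot,
    and requests that must wait for a later timeslot.
    """
    used_src, used_dst = set(), set()
    granted, remaining = [], []
    for src, dst in requests:
        if src not in used_src and dst not in used_dst:
            used_src.add(src)
            used_dst.add(dst)
            granted.append((src, dst))
        else:
            # Source or destination already busy this slot; retry later.
            remaining.append((src, dst))
    return granted, remaining

# Two flows contend for destination "B"; only one is granted this slot.
granted, remaining = allocate_timeslot([("A", "B"), ("C", "B"), ("D", "E")])
```

Because the arbiter sees every pending request at once, it can make this choice globally, rather than each switch guessing with partial information.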
The arbiter is able to process requests quickly because it divides the work of calculating transmission timing among its cores. FastPass arranges workloads by time slot and assigns requests to the first available core, passing the rest of the work on to the next core, which follows the same process.
"You want to allocate for many time slots into the future, in parallel," explained Hari Balakrishnan, an MIT professor of electrical engineering and computer science. According to Balakrishnan, each core searches the entire list of transmission requests, picks one to assign and then modifies the list. All of the cores work on the same list simultaneously, working through the traffic efficiently.
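The pipelined scheme Balakrishnan describes can be sketched as follows (an assumed toy model, not MIT's implementation): each core owns one future time slot, grants the requests it can without reusing a source or destination within that slot, and hands the leftovers to the core responsible for the next slot.

```python
# Illustrative sketch of pipelined multi-core timeslot allocation.
# Hypothetical names and data; not the actual FastPass arbiter code.

def pipeline_allocate(requests, num_timeslots):
    """Simulate cores allocating consecutive future timeslots.

    Returns (schedule, leftover): a per-timeslot list of granted
    (src, dst) pairs, plus any requests still unscheduled.
    """
    schedule = []
    pending = list(requests)  # (src, dst) pairs waiting to transmit
    for _ in range(num_timeslots):  # one core per future timeslot
        used = set()
        granted, leftover = [], []
        for src, dst in pending:
            if src in used or dst in used:
                leftover.append((src, dst))  # endpoint busy this slot
            else:
                used.update((src, dst))
                granted.append((src, dst))
        schedule.append(granted)
        pending = leftover  # passed on to the next core in the pipeline
    return schedule, pending

# Three requests, two of which conflict, spread across two timeslots.
schedule, leftover = pipeline_allocate(
    [("A", "B"), ("C", "B"), ("A", "D")], num_timeslots=2)
```

In this toy run, the conflicting requests are pushed into the second slot rather than queuing at a switch, which mirrors the article's point: scheduling decisions happen up front, in parallel, instead of packet-by-packet in the network.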
Arbiter provides benefits for all levels
Network architects will be able to use FastPass to make packets arrive on time and eliminate the need to overprovision data center links for traffic that can arrive in unpredictable bursts. Similarly, developers of distributed applications can benefit from the technology by using it to split up problems and farm the pieces out to different servers around the network.
"Developers struggle a lot with the variable latencies that current networks offer," said Jonathan Perry, a co-author of the paper. "It's much easier to develop complex, distributed programs like the one Facebook implements."
While the technology's inventors admit that processing requests in such a manner seems counterintuitive, they were able to show that using the arbiter dramatically improved overall network performance, even after accounting for the lag required for the cores to make scheduling decisions.
The FastPass software is planned for release as open source code, but the MIT researchers warn that it is not yet production-ready. They believe the technology will begin appearing in data centers sometime in the next two years.