Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process.
Our approach reduces the volume of exchanged data by aggregating the model updates from multiple workers inside the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 310%, and by at least 20% in most cases, on a number of real-world benchmark models.
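To give a flavor of the idea, here is a minimal, purely illustrative sketch (not the speaker's implementation) of in-network aggregation: rather than every worker exchanging its full update with every other worker, a switch sums the workers' updates chunk by chunk as packets stream through, then sends back a single aggregated result. The function name and chunking scheme below are hypothetical simplifications.

```python
def switch_aggregate(worker_updates, chunk_size=4):
    """Sum the workers' updates chunk by chunk, mimicking how a
    programmable switch with limited on-chip state might aggregate
    streamed packets (hypothetical helper, for illustration only)."""
    n_params = len(worker_updates[0])
    aggregated = [0] * n_params
    for start in range(0, n_params, chunk_size):
        # The switch only ever needs one chunk-sized slot of state:
        # it adds each worker's contribution for this chunk, then moves on.
        for update in worker_updates:
            for i in range(start, min(start + chunk_size, n_params)):
                aggregated[i] += update[i]
    return aggregated

# Three workers, each holding a 6-parameter model update.
updates = [[1, 2, 3, 4, 5, 6],
           [10, 20, 30, 40, 50, 60],
           [100, 200, 300, 400, 500, 600]]
print(switch_aggregate(updates))  # [111, 222, 333, 444, 555, 666]
```

Each worker sends and receives one update's worth of data instead of one per peer, which is where the reduction in communication volume comes from.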
About Marco Canini
Marco does not know what the next big thing will be. But he’s sure that our next-gen computing and networking infrastructure must be a viable platform for it and avoid stifling innovation. Marco’s research spans cloud computing, distributed systems, and networking. His current interest is in designing better systems support for AI/ML and providing practical implementations deployable in the real world.
Marco is an associate professor of Computer Science at KAUST. He obtained his Ph.D. in computer science and engineering from the University of Genoa in 2009, after spending the final year as a visiting student at the University of Cambridge Computer Laboratory. He was a postdoctoral researcher at EPFL and a senior research scientist at Deutsche Telekom Innovation Labs & TU Berlin. Before joining KAUST, he was an assistant professor at UCLouvain. He has also held positions at Intel, Microsoft, and Google.
This event will be conducted in English.