Embedding machine learning (ML) models in programmable switches realizes the vision of high-throughput, low-latency inference at line rate. Recent works have made breakthroughs in embedding Random Forest (RF) models in switches for either packet-level or flow-level inference. The former relies on features drawn from packet headers, which are simple to implement but limit accuracy in challenging use cases; the latter exploits richer flow features to improve accuracy, but leaves the early packets of each flow unclassified. We propose Jewel, an in-switch ML model based on a fully joint packet- and flow-level design, which takes the best of both worlds by classifying early flow packets individually and shifting to flow-level inference when possible. Our proposal involves (i) a single RF model trained to classify both packets and flows, and (ii) hardware-aware model selection and training techniques that minimize the resource footprint. We implement Jewel in P4 and deploy it in a testbed with Intel Tofino switches, where we run extensive experiments with a variety of real-world use cases. Results show that our solution outperforms four state-of-the-art benchmarks, with accuracy gains in the 2.0%-5.3% range.
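The announcement contains no code, but the joint packet- and flow-level idea can be illustrated with a short sketch. The Python below is a hypothetical illustration, not the authors' P4 implementation: a single trained RF (e.g., a scikit-learn RandomForestClassifier) classifies early packets from header features alone, with flow statistics zero-filled, then shifts to flow-level inference once enough packets of the flow have been seen. All names, the maturity threshold, and the feature layout are assumptions made for the example.

```python
# Hypothetical sketch of joint packet-/flow-level inference with a single RF.
# Names (FLOW_MATURITY, JointRFClassifier, ...) are illustrative assumptions,
# not taken from the Jewel paper.

from collections import defaultdict

FLOW_MATURITY = 4  # assumed: packets observed before shifting to flow-level inference


class JointRFClassifier:
    def __init__(self, model):
        # `model` is any trained random forest exposing predict(), e.g. a
        # scikit-learn RandomForestClassifier trained on vectors combining
        # per-packet header fields with (possibly zero-filled) flow statistics.
        self.model = model
        self.flow_state = defaultdict(lambda: {"count": 0, "bytes": 0, "label": None})

    def classify(self, flow_id, pkt_len, header_feats):
        st = self.flow_state[flow_id]
        if st["label"] is not None:
            # Flow already classified at flow level: reuse the cached label.
            return st["label"]
        st["count"] += 1
        st["bytes"] += pkt_len
        if st["count"] < FLOW_MATURITY:
            # Early packets: packet-level inference on header fields alone,
            # with flow statistics zero-filled (one way a single RF can
            # serve both inference modes).
            x = header_feats + [0, 0]
            return self.model.predict([x])[0]
        # Enough packets observed: shift to flow-level inference and cache it.
        x = header_feats + [st["count"], st["bytes"]]
        st["label"] = self.model.predict([x])[0]
        return st["label"]
```

In hardware, the cached per-flow label would live in a switch register rather than a Python dictionary, and the RF would be compiled into match-action tables; the sketch only conveys the control flow of the hybrid design.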
Beyza Bütün is a Ph.D. student in the Networks Data Science Group at IMDEA Networks Institute in Madrid, Spain. She is also a Ph.D. student in the Department of Telematics Engineering at Universidad Carlos III de Madrid, Spain. She holds bachelor's and master's degrees in Computer Engineering from Middle East Technical University in Ankara, Turkey.
During her master's, she worked on the optimal design of wireless data center networks. Beyza's current research interests include in-band network intelligence, distributed in-band programming, and energy consumption optimization in the data plane.
This event will be held in English.