Research

Welcome to Edge Networks Group!

Our research focuses on the design of edge networked systems that support emerging applications from Cyber-Physical Systems (CPS) and the Internet of Things (IoT). We investigate novel performance metrics and algorithms that cater to the sensing/data-collection, communication/offloading, and actuation/inference requirements of these applications. Our primary goal is to achieve low delay and low energy consumption for edge devices (operating over wireless) by collecting only the data that is useful to the application interacting with the environment, thereby also reducing bandwidth requirements over both the wireless and core networks. Topics of current interest include (but are not limited to): Learning at the Edge, Age of Information (AoI), Computation Offloading, and Wireless Communications.

Distributed ML Inference at the Edge

In the era of Edge Intelligence, i.e., the confluence of edge computing and artificial intelligence, an increasing number of monitoring applications at the network edge use Machine Learning (ML) inference, in particular Deep Neural Network (DNN) inference. On one hand, resource-constrained edge devices, such as IoT sensors, can only support small ML models, e.g., TinyML models, which provide lower inference accuracy on the data, albeit at lower energy cost. On the other hand, offloading the inference jobs to a computationally powerful edge server results in higher inference accuracy. However, several non-trivial aspects need to be considered carefully: 1) transmission energy consumption at the edge device, 2) energy consumption at the edge server (important as we aim for green solutions), and 3) the delay incurred in processing the data at the edge device versus offloading it and processing it at the edge server. Thus, we study a novel three-way trade-off between inference accuracy, delay, and total system energy consumption in this system.
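As a toy illustration of this trade-off (not an algorithm from the paper below; all accuracy, delay, and energy numbers are hypothetical), one can frame the per-job choice between local TinyML inference and offloading as picking the feasible option with the best accuracy-energy score:

```python
# Illustrative sketch: per-job choice between local inference on a TinyML
# model and offloading to an edge server. All numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Option:
    accuracy: float  # expected inference accuracy
    delay: float     # seconds, end-to-end for this job
    energy: float    # joules (for offloading: device tx + server compute)

def choose(local: Option, offload: Option, deadline: float,
           w_acc: float = 1.0, w_energy: float = 0.1) -> str:
    """Pick the deadline-feasible option with the best accuracy-energy score."""
    feasible = {name: o for name, o in [("local", local), ("offload", offload)]
                if o.delay <= deadline}
    if not feasible:
        return "drop"  # neither option meets the deadline
    return max(feasible, key=lambda n: w_acc * feasible[n].accuracy
                                       - w_energy * feasible[n].energy)

local = Option(accuracy=0.72, delay=0.05, energy=0.2)
offload = Option(accuracy=0.95, delay=0.30, energy=0.8)
print(choose(local, offload, deadline=0.5))  # offload wins on accuracy
print(choose(local, offload, deadline=0.1))  # only local meets the deadline
```

A real offloading algorithm must also account for contention at the server and time-varying wireless rates, which is what makes the scheduling problem non-trivial.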


  • Andrea Fresa, Jaya Prakash Champati. Offloading Algorithms for Maximizing Inference Accuracy on Edge Device Under a Time Constraint. In Proc. ACM MSWiM, 2022.


AoI Analysis and Optimization

AoI is a freshness metric that measures the time elapsed since the generation of the freshest packet available at the receiver. In contrast to system delay, AoI increases linearly between packet receptions, and thereby accounts for how frequently the information source is sampled. We analyze AoI for fundamental queueing systems and also study optimal sampling and transmission strategies for minimizing AoI in these systems.
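The resulting sawtooth evolution can be sketched in a few lines (hypothetical timestamps): AoI grows linearly between receptions and resets to the age of the newly received packet, so the time-average AoI is the area under the sawtooth divided by the observation window:

```python
# A minimal sketch of the AoI sawtooth. AoI grows linearly between packet
# receptions and drops to (reception time - generation time) at each
# reception; the time-average AoI is the area under this sawtooth.

def average_aoi(events):
    """Time-average AoI over [first reception, last reception].

    events: (generation_time, reception_time) pairs with both coordinates
    strictly increasing (FCFS, no packet overtaking)."""
    area = 0.0
    for (g0, r0), (_, r1) in zip(events, events[1:]):
        lo = r0 - g0                       # AoI right after the reception at r0
        hi = r1 - g0                       # AoI just before the next reception
        area += (r1 - r0) * (lo + hi) / 2  # trapezoid under the sawtooth
    return area / (events[-1][1] - events[0][1])

# Periodic sampling every T=2 with fixed delay D=1: average AoI = D + T/2 = 2.
events = [(2 * k, 2 * k + 1) for k in range(5)]
print(average_aoi(events))  # 2.0
```

The periodic example recovers the classic D + T/2 expression, which already shows why minimizing AoI differs from minimizing delay: sampling less often reduces queueing delay but inflates the sawtooth.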

  • Jaya Prakash Champati, Hussein Al-Zubaidy, James Gross. Statistical Guarantee Optimization for AoI in Single-Hop and Two-Hop FCFS Systems with Periodic Arrivals. IEEE Transactions on Communications, 69(1), pp. 365–381, 2021.
  • Jaya Prakash Champati, Ramana R. Avula, Tobias J. Oechtering, James Gross. Minimum Achievable Peak Age of Information Under Service Preemptions and Request Delay. IEEE Journal on Selected Areas in Communications, 39(5), pp. 1365–1379, 2021.
  • Jaya Prakash Champati, Hussein Al-Zubaidy, James Gross. On the Distribution of AoI for the GI/GI/1/1 and GI/GI/1/2* Systems: Exact Expressions and Bounds. In Proc. IEEE INFOCOM, May 2019.


Edge Computing Offloading Algorithms

Edge computing, or fog computing, where computational resources are placed close to (e.g., one hop away from) the entities that offload computational tasks or data for processing, is a key architectural component of 5G and future wireless networks. Offloading computational tasks from mobile devices to edge servers instead of the cloud saves internet bandwidth and circumvents the long delays involved in communicating the data load of the offloaded tasks to a cloud data centre residing somewhere on the internet. Above all, edge computing augments the compute and memory limitations of edge devices.
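A back-of-the-envelope sketch (with hypothetical link speeds, round-trip times, and compute times) illustrates why one-hop edge offloading can beat cloud offloading on delay even when the cloud computes faster:

```python
# Hypothetical delay comparison: offloading one task to a nearby edge server
# versus a distant cloud data centre. The edge has a short round trip but a
# slower processor; the cloud computes faster but sits many hops away.

def offload_delay(data_mb, link_mbps, rtt_s, compute_s):
    """Transmission delay + network round trip + processing time for one task."""
    return (data_mb * 8) / link_mbps + rtt_s + compute_s

edge = offload_delay(data_mb=5, link_mbps=100, rtt_s=0.005, compute_s=0.05)
cloud = offload_delay(data_mb=5, link_mbps=100, rtt_s=0.080, compute_s=0.02)
print(f"edge: {edge:.3f} s, cloud: {cloud:.3f} s")
```

With these numbers the edge wins despite its slower processor, because the cloud's extra round-trip delay dominates; for larger data loads the bandwidth saved on the core network grows as well.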


  • Jaya Prakash Champati, Ben Liang. Single Restart with Time Stamps for Parallel Task Processing with Known and Unknown Processors. IEEE Transactions on Parallel and Distributed Systems, 31(1), pp. 187–200, 2020.
  • Jaya Prakash Champati, Ben Liang. Semi-Online Algorithms for Computational Task Offloading with Communication Delay. IEEE Transactions on Parallel and Distributed Systems, 28(4), pp. 1189–1201, 2017.


Transient Delay Analysis and Optimization

Most research on network performance analysis using queueing theory considers systems in steady state. For example, for simple M/M/1 or more general Markovian queueing systems, the steady state is governed by the (conceptually simple) flow-balance equations. In contrast, transient analysis of these systems results in intractable differential equations. Using Stochastic Network Calculus, we derived the end-to-end delay-violation probability for a sequence of time-critical packets, given the transient network state (queue backlogs) when the time-critical packets enter the network. Leveraging this analysis, we compute good resource allocation strategies for wireless protocols such as WirelessHART to support the QoS requirements of time-critical industrial applications.
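A simple Monte Carlo sketch (not the Stochastic Network Calculus analysis; parameters are hypothetical) conveys why conditioning on the transient state matters: a packet arriving at an M/M/1 FCFS queue that finds `backlog` packets ahead of it waits for the sum of backlog + 1 exponential service times, so its delay-violation probability depends strongly on that initial backlog:

```python
# Monte Carlo estimate of the delay-violation probability P(delay > deadline)
# for a packet entering an M/M/1 FCFS queue with a given initial backlog.
# Its delay is the sum of (backlog + 1) i.i.d. exponential service times.

import random

def violation_prob(backlog, mu, deadline, n_samples=100_000, seed=1):
    random.seed(seed)
    violations = 0
    for _ in range(n_samples):
        delay = sum(random.expovariate(mu) for _ in range(backlog + 1))
        if delay > deadline:
            violations += 1
    return violations / n_samples

# Same service rate and deadline, two transient states: a larger initial
# backlog pushes the violation probability up sharply.
p_empty = violation_prob(backlog=0, mu=2.0, deadline=2.0)
p_busy = violation_prob(backlog=5, mu=2.0, deadline=2.0)
print(p_empty, p_busy)
```

A steady-state analysis would report one violation probability averaged over all backlogs, whereas the transient view exposes the order-of-magnitude gap between an empty and a busy queue at the moment the time-critical packets enter.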

  • Jaya Prakash Champati, Hussein Al-Zubaidy, James Gross. Transient Analysis for Multi-hop Wireless Networks Under Static Routing. IEEE/ACM Transactions on Networking, 28(2), pp. 722–735, 2020.