The tremendous capacity increases in radio access envisioned for 5G and beyond mobile networks have raised the requirements for efficient, ultra-high-speed wireless backhaul. Unfortunately, the spectral efficiency of SISO systems saturated long ago, and current MIMO systems suffer from overly complex processing and expensive deployment, which limits their commercial success.
The aim of this project is to leverage Deep Learning (DL) techniques to improve the performance and reduce the implementation complexity of LoS MIMO backhaul communication systems. Specifically, the goal is to define a learning approach that can cope with the diverse hardware imperfections and channel environments found in LoS MIMO links with antenna spacing much smaller than the optimal distance.
Most signal processing algorithms, channel coding and modulation schemes are based on linear, static and Gaussian models. However, real systems typically do not (always) have these properties. Practical hardware often exhibits imperfections and non-linearities due to cost and complexity considerations, as well as limitations in the manufacturing process. These imperfections can be specific to a device class or to an individual device, or be environment dependent and change the behavior of the system over time (e.g., with temperature). For these reasons, commonly used signal processing systems may rely on suboptimal solutions for communication. Moreover, conventional solutions to overcome specific imperfections (and combinations thereof) may not always be practical; e.g., extensive calibration of installed backhaul devices may prove too costly and complex.
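To make such imperfections concrete, the following minimal sketch applies two textbook impairment models to complex baseband samples: transmitter IQ gain/phase imbalance and a memoryless Saleh-type power-amplifier non-linearity (AM/AM and AM/PM distortion). The function names and all parameter values are illustrative assumptions, not part of the project design:

```python
import numpy as np

def iq_imbalance(x, gain_db=0.5, phase_deg=2.0):
    """Transmitter IQ gain/phase imbalance: y = a*x + b*conj(x)."""
    g = 10 ** (gain_db / 20)
    phi = np.deg2rad(phase_deg)
    a = (1 + g * np.exp(1j * phi)) / 2
    b = (1 - g * np.exp(1j * phi)) / 2
    return a * x + b * np.conj(x)

def saleh_pa(x, alpha_a=2.0, beta_a=1.0, alpha_p=np.pi / 3, beta_p=1.0):
    """Memoryless Saleh PA model: amplitude compression (AM/AM) plus
    amplitude-dependent phase rotation (AM/PM)."""
    r = np.abs(x)
    am = alpha_a * r / (1 + beta_a * r ** 2)        # AM/AM
    pm = alpha_p * r ** 2 / (1 + beta_p * r ** 2)   # AM/PM, in radians
    return am * np.exp(1j * (np.angle(x) + pm))

# Illustrative QPSK burst distorted by both impairments.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
distorted = saleh_pa(iq_imbalance(qpsk))
evm = np.sqrt(np.mean(np.abs(distorted - qpsk) ** 2))  # error vector magnitude
```

Note that neither impairment is captured by a linear Gaussian channel model, which is precisely the gap a learned approach aims to fill.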
Finally, DL may allow hardware to operate at the very limit of its specifications, where conventional compensation mechanisms struggle with non-linearities, since DL approaches implicitly take such imperfections into account in the communication design. In this project we propose to develop a DL framework that learns from the hardware imperfections present in the wireless backhaul system to improve communication. In this way, our DL framework can be optimized for specific system conditions, without considering each imperfection individually and without requiring a new mathematical model for each possible condition. These inherent capabilities also allow systems to be designed with even simpler and cheaper hardware, as long as the DL framework is able to cope with the resulting imperfections. Our deep learning approach uses real-time information obtained from the wireless channel to model a replica of it in a neural network, and then embeds this neural net in a larger DL structure designed to improve the backhaul link.
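The first step of this approach, fitting a neural surrogate of the channel from observed input/output pairs, can be sketched as follows. This is a toy illustration under loudly stated assumptions: the "measured" channel is a hypothetical cubic non-linearity standing in for real hardware, the one-hidden-layer network, learning rate and iteration count are arbitrary, and a real implementation would use a DL framework and complex-valued IQ data rather than hand-written numpy gradients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "measured" channel: mild cubic non-linearity plus noise,
# standing in for the real LoS MIMO hardware we cannot model in closed form.
def channel(x):
    return x - 0.1 * x ** 3 + 0.01 * rng.standard_normal(x.shape)

# Training data: transmitted samples and the observed channel output.
x = rng.uniform(-1, 1, (4096, 1))
y = channel(x)

# One-hidden-layer surrogate trained with plain full-batch gradient descent.
W1 = rng.standard_normal((1, 32)) * 0.5; b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                                  # MSE gradient at output
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once fitted, such a surrogate is differentiable, so it can be inserted between a learned transmitter and receiver and the whole chain optimized end to end, which is the role of the larger DL structure described above.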
Different parameters will be considered for our neural-net environment: initially, (i) those related to the IQ samples observed at the transmitter and receiver sides, and (ii) those related to imperfections that cannot be measured from the channel, such as those inherent to the transmitter and receiver hardware components. This is a challenging problem, since current mathematical models typically do not include hardware imperfections. Note that the quality of the algorithm can be verified by comparing how the Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER) improve with respect to a communication system built with conventional processing blocks.
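The conventional baseline for such a BER comparison can be made concrete with a small Monte-Carlo simulation of an uncoded, Gray-coded QPSK link over AWGN; the `qpsk_ber` helper, the bit count and the 6 dB operating point are our own illustrative choices:

```python
import numpy as np
from math import erfc, sqrt

def qpsk_ber(ebn0_db, n_bits=200_000, seed=0):
    """Monte-Carlo BER of Gray-coded QPSK over AWGN: the conventional
    processing baseline against which a learned link would be compared."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Bit pairs -> unit-energy QPSK symbols (Es = 1, so Eb = 1/2).
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    n0 = 1 / (2 * ebn0)                     # N0 = Eb / (Eb/N0), with Eb = 1/2
    noise = sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                            + 1j * rng.standard_normal(sym.size))
    rx = sym + noise
    # Per-quadrature hard decisions recover the two bits of each symbol.
    bhat = np.empty(n_bits, dtype=int)
    bhat[0::2] = (rx.real < 0).astype(int)
    bhat[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(bhat != bits))

theory = 0.5 * erfc(sqrt(10 ** (6 / 10)))   # analytic QPSK BER at Eb/N0 = 6 dB
measured = qpsk_ber(6.0)
```

A learned transceiver would be judged successful if, over the same impaired channel, its measured BER falls below this conventional baseline at equal SNR (or, equivalently, if it achieves the same BER at lower SNR).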
A deep learning framework for the physical layer will be designed and validated for different hardware platforms with the aim of optimizing the backhaul link quality.
Regarding implementation complexity, efficient use of concurrent compute architectures such as those provided by NPUs (or, alternatively, by GPUs) could enable a reduction in power consumption with respect to traditional processing based on a cascade of several independent blocks.