
Deep Learning Meets DSP: OFDM Signal Detection

Here at KickView, we get excited about building products that use AI and digital signal processing (DSP) algorithms to detect, classify and make sense of Radio Frequency (RF) signals and sensor data. In this blog post, we'll focus specifically on detection of RF signals modulated using Orthogonal Frequency Division Multiplexing (OFDM). OFDM is a digital multi-carrier modulation scheme that is employed in many fielded systems; WiFi, cable systems (e.g. DOCSIS 3.1) and cellular networks (e.g. 4G, 5G) either currently deploy or are moving toward OFDM as the PHY-layer standard. First, we will do a technical deep dive into OFDM to get a good understanding of the signal structure and its benefits. Next, we'll introduce the patent-pending signal detection system we've developed, which uses deep learning (DL) to robustly detect OFDM waveforms by capturing only a portion of the total RF signal bandwidth. More broadly, our approach can significantly improve detection of wide-bandwidth signals using narrower-bandwidth receivers. Contributors to this post include Krishna Karra, Robert North and David Ohm, PhD at KickView.

Orthogonal Frequency Division Multiplexing (OFDM)

OFDM is a digital multi-carrier modulation scheme where information is distributed across many orthogonal subcarriers closely spaced in frequency. Each subcarrier is individually modulated using single-carrier modulation schemes such as Quadrature Amplitude Modulation (QAM) or Binary Phase Shift Keying (BPSK). OFDM signals can occupy a wide frequency bandwidth and achieve a high data rate through the parallel transmission of multiple subcarriers, each of which transmits at a much lower data rate. The orthogonality among the subcarriers ensures that they do not interfere with each other. The primary advantage of OFDM over single-carrier modulation techniques is the ability to counter severe RF channel conditions, particularly multipath in wireless environments, without the use of complex time-domain equalization. Equalization is the process of reversing the distortion imparted onto a signal as it propagates through a channel (e.g. a multipath city environment). Intricate equalization filters increase the complexity of a communications system and are difficult to design for single-carrier modulation schemes. Utilizing OFDM eliminates the need for complex equalization to recover the data.

Figure 1 below shows the frequency domain representation of the OFDM subcarriers, as specified by IEEE 802.11g. IEEE 802.11g is a commonly used WiFi protocol, and the remainder of our discussion will focus specifically on OFDM signal parameters defined by this standard. However, all the concepts and technology introduced here naturally extend to any standard that employs OFDM as the PHY-layer modulation scheme. The PHY layer, or physical layer, defines the means of transmitting raw bits over a physical data link.

Figure 1: Frequency domain representation of IEEE 802.11g OFDM subcarriers

IEEE 802.11g defines 64 equally spaced subcarriers across a 20 MHz bandwidth. Note that of the 64 subcarriers, only 48 are used for data transmission. Four are pilot subcarriers used for synchronization and channel estimation on the receiver end, and 11 are guard subcarriers that reject adjacent-channel interferers. Finally, there is a null subcarrier at the center of the band to mitigate DC-level artifacts that are commonly caused by analog components in RF transmitters.
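To make the allocation concrete, here is a small NumPy sketch of the 802.11g subcarrier map. The pilot positions at ±7 and ±21 follow the 802.11a/g specification and are included as an assumption on our part rather than something derived in this post:

```python
import numpy as np

N = 64
k = np.arange(-N // 2, N // 2)               # subcarrier indices -32..31

dc     = (k == 0)                            # 1 null subcarrier at DC
pilots = np.isin(k, [-21, -7, 7, 21])        # 4 pilot subcarriers (assumed positions)
used   = (np.abs(k) <= 26) & ~dc             # 52 occupied subcarriers
data   = used & ~pilots                      # 48 data subcarriers
guard  = ~used & ~dc                         # 11 guard subcarriers at the band edges

print(data.sum(), pilots.sum(), guard.sum())  # 48 4 11
```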

Figure 2: Building blocks of a typical OFDM communications system

Figure 2 above shows how an OFDM signal is modulated, transmitted and demodulated in a typical communications system. The orthogonal subcarrier data streams are generated by a serial-to-parallel conversion of the input data followed by an Inverse Discrete Fourier Transform (IDFT). Another important component of OFDM is the Cyclic Prefix (CP), which is inserted into each time-domain symbol after the IDFT computation. Adding the CP is as simple as copying a certain percentage of the tail end of an OFDM symbol to the front of the symbol.

Why should we even bother with the CP? In a multipath wireless channel, the beginning of each symbol gets corrupted by the delayed tail of the previous symbol, but since we've conveniently added the CP, the corrupted portion is redundant information that can effectively be ignored! More generally, given sufficient CP length, the preceding symbols will not spill over into the DFT period, which preserves the data payload in the current symbol. Therefore, modern OFDM receivers simply strip away the CP and correct each subcarrier with a simple amplitude and/or phase adjustment. Later on, we'll see that the DFT and CP parameters, which are key to recovering the individual subcarriers, also play a key role in the design of our OFDM signal detector. Recall that IEEE 802.11g specifies 64 subcarriers in each OFDM symbol, meaning that a 64-point IDFT is computed at the transmitter (and subsequently a 64-point DFT on the receiver end).
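To tie these pieces together, here is a minimal NumPy sketch of a single 802.11g-style OFDM symbol (64-point IDFT, 16-sample CP at 20 Msps) passing through a toy multipath channel. The 3-tap channel and all-QPSK payload are illustrative assumptions; the point is simply that, after stripping the CP, each subcarrier is recovered with a single complex correction rather than a time-domain equalizer:

```python
import numpy as np

N, L = 64, 16                                # DFT size and CP length in samples
h = np.array([1.0, 0.4 + 0.3j, 0.2])         # toy 3-tap multipath channel (assumed)

# One OFDM symbol: QPSK on every subcarrier (ignoring pilots/guards for brevity).
X = (np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)              # time-domain symbol via 64-point IDFT
s = np.concatenate([x[-L:], x])              # cyclic prefix: copy the tail to the front

r = np.convolve(s, h)                        # propagate through the multipath channel
y = r[L:L + N]                               # strip the CP (and the channel transient)

H = np.fft.fft(h, N)                         # channel frequency response
X_hat = np.fft.fft(y) / np.sqrt(N) / H       # one amplitude/phase correction per subcarrier

print(np.allclose(X_hat, X))                 # True: data recovered, no time-domain equalizer
```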

OFDM Signal Detection

We now have an understanding of the PHY-layer building blocks in OFDM. Before we go into the technical design, let's circle back to motivate the problem. The OFDM modulation scheme is used primarily for wide bandwidth transmission; an IEEE 802.11g OFDM signal occupies 20 MHz, and newer standards such as IEEE 802.11ac specify signals as wide as 160 MHz. Since many receivers have collection bandwidths much less than this, signal detection and classification using conventional techniques is quite difficult. Many conventional techniques require that the complete in-band energy of the signal be captured along with explicit time and frequency synchronization to perform signal classification [1]. Our goal is to build an OFDM signal detector that can perform classification using partial bandwidth RF collects and does not require explicit time and frequency synchronization.

Given this background, let's define some goals:

  • Create an RF dataset with OFDM signal bursts for experimentation (we'll pick IEEE 802.11g as our protocol)
  • Build an algorithm to detect OFDM signals from full-band RF collections (bandwidth of 20 MHz as specified by IEEE 802.11g)
  • Expand this algorithm to detect OFDM signals from partial-band RF collections (let's arbitrarily choose a bandwidth of 5 MHz, which translates to capturing 1/4 of the total signal energy)
  • Evaluate the performance of the algorithm as a function of Signal-to-Noise Ratio (SNR)
  • Demonstrate the functionality of the algorithm in a realistic RF environment

Figure 3 below shows a block diagram of our high-level technical approach:

Figure 3: Technical approach to detect and classify OFDM signals

First, we'll walk through our process to generate an RF dataset with snapshots of OFDM signals by conducting an over-the-air collection from a WiFi access point in a controlled RF environment. Next, we'll do some pre-processing to exploit the inherent structure of the OFDM signal by channelizing the time-domain snapshots. This process generates a complex-valued time/frequency representation of each signal burst, which we'll refer to as a complex signal image. Finally, we'll train a deep neural network (DNN) on the complex signal images to learn the structure of the OFDM signal. To conclude, we'll show how well our detector performs for both the full- and partial-band cases as a function of SNR.

RF Data Collection

While large, freely accessible datasets are available for DL research in other domains (e.g. MNIST, KITTI, ImageNet), far fewer resources exist for the RF domain. Since we've chosen IEEE 802.11g as our protocol, we configured a LinkSys 802.11g WiFi router for use in creating a large dataset. Leveraging our KV-PRELUDE product, built around our patent-pending system for automated RF dataset creation, we generated several very large datasets for RF DL exploration. We utilized an Ettus USRP X300 software-defined radio to capture raw RF data.

Figure 4: We leverage our KV-AIOLOS tools to generate an RF dataset containing IEEE 802.11g OFDM signals

The KV-AIOLOS data generation tools can automate the creation of a labeled RF dataset for any modern communications protocol, either Over-The-Air (OTA) or wired. We used this tool to generate an RF dataset consisting of ~4000 OFDM signal snapshots, each ~1.4 ms in duration (~333 OFDM symbols) and critically sampled at 20 MHz. All the bursts are at reasonably high SNR. We'll vary the SNR across all bursts at the pre-processing stage, discussed in the next section.

Pre-Processing

In the pre-processing stage, we'll focus on a strategy to transform the time-domain OFDM snapshots into a representation that maximizes the structure of these signals for subsequent feature extraction. Channelization, a digital signal processing operation that extracts the individual carriers from a multi-carrier signal, is a natural strategy here. [2] is a great reference that describes various channelization strategies and their performance tradeoffs. We chose a polyphase channelizer because it minimizes out-of-band interference from other channels in each output channel. To design an appropriate polyphase channelizer for IEEE 802.11g OFDM signals, we'll need to dig into some signal-specific parameters of the IEEE 802.11g waveform.

We define a "blind detection" (BD) symbol (4 microseconds in duration) as nothing more than the OFDM symbol (3.2 microseconds in duration) plus the CP (0.8 microseconds in duration). Recalling our OFDM discussion above, a typical OFDM receiver strips away the CP before computing a DFT (64-point for IEEE 802.11g). However, since we're building an OFDM detector that does not utilize knowledge of the internals of the signal, we'll need to take the CP into account! This means that in the channelizer, we need to compute an 80-point DFT instead of a 64-point DFT to properly handle the longer symbol duration. Our channelization strategy is depicted below in Figure 5.

Figure 5: The polyphase channelizer breaks out individual channels from the multi-carrier OFDM signal
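For readers who want to experiment, below is a heavily simplified Python/NumPy sketch of a weighted overlap-add style channelizer built around the 80-point DFT described above. The prototype-filter length, hop size and the absence of oversampling and per-frame phase correction are our own simplifying assumptions; the channelizer in our actual system is more sophisticated:

```python
import numpy as np
from scipy.signal import firwin

def channelize(x, M=80, taps_per_branch=8, hop=80):
    """Turn a time-domain capture into a complex time/frequency image (sketch).

    For each frame: window M*taps_per_branch samples with a prototype lowpass
    filter, fold ("block pre-sum") into M samples, then take an M-point DFT.
    """
    L = M * taps_per_branch
    h = firwin(L, 1.0 / M)                                    # prototype lowpass filter
    frames = []
    for start in range(0, len(x) - L + 1, hop):
        seg = x[start:start + L] * h                          # apply the prototype filter
        presum = seg.reshape(taps_per_branch, M).sum(axis=0)  # block pre-sum
        frames.append(np.fft.fftshift(np.fft.fft(presum)))    # 80 channel outputs
    return np.array(frames)                                   # shape: (n_frames, M)
```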

Along with channelization, the pre-processing operation also accounts for uncertainty in time and frequency. This aspect of pre-processing generates ensembles of training examples, each with a unique instantiation of parameters. By training over this uncertainty, we will see that our DNN can detect the presence of OFDM without explicit time and frequency synchronization.

Below is a breakdown of the different uncertainty factors that we capture during pre-processing:

  • Sample phase: in a live RF collection, the initial sample phase (how many samples into a signal burst the capture begins) of a captured signal snapshot is ambiguous. We account for this uncertainty by generating versions of each time-domain snapshot in the RF dataset at every possible starting sample phase.
  • Oversampling ratio in channelizer: The block pre-sum operation (depicted above in Figure 5) can be thought of as a parallel-to-serial conversion of the BD symbols, which are summed together and transformed to channelized outputs through the DFT computation. We apply an oversampling ratio to generate multiple output ensembles of input samples shifted through the polyphase channelizer.
  • Frequency Offset: In a typical RF link, the transmitter and receiver each have a local oscillator (LO) that acts as a frequency reference. There is typically some disagreement between the LOs of two different radios, which results in a frequency shift in the received signal. Furthermore, harsh channel conditions and receiver motion can introduce additional frequency error. We account for these errors by generating Doppler-shifted versions of each snapshot, as sketched in the example after this list.
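As a concrete illustration of the first and third factors, the sketch below rolls the starting sample phase and applies a random frequency offset to a snapshot. The offset range and the 80-sample phase window are assumptions for illustration, not the values used in the actual KV-AIOLOS pre-processing pipeline:

```python
import numpy as np

def augment_snapshot(x, fs=20e6, max_offset_hz=50e3, rng=np.random):
    """Produce one augmented copy of a time-domain snapshot (illustrative sketch)."""
    # Random starting sample phase: drop up to one BD symbol (80 samples) from the front.
    start = rng.randint(0, 80)
    y = x[start:]
    # Random carrier frequency offset applied as a complex exponential.
    f_off = rng.uniform(-max_offset_hz, max_offset_hz)
    n = np.arange(len(y))
    return y * np.exp(2j * np.pi * f_off * n / fs)
```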

Finally, in our pre-processing step, we vary the SNR of the time-domain snapshots from -8 dB to +8 dB. This lets us evaluate how the detector performs under realistic noise conditions. Figure 6 below shows some example snapshots from our pre-processing operation, both in the time domain and as the complex signal images generated after channelization.
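One straightforward way to set the snapshot SNR is to scale complex white Gaussian noise against the measured signal power, as in the sketch below (our own formulation, not necessarily the exact scaling used in our pipeline):

```python
import numpy as np

def add_awgn(x, snr_db, rng=np.random):
    """Add complex AWGN so the snapshot sits at the requested SNR."""
    sig_power = np.mean(np.abs(x) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.randn(len(x)) + 1j * rng.randn(len(x)))
    return x + noise
```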

Figure 6: Example snapshots of OFDM and Noise in the time-domain and complex signal images after pre-processing

The subcarrier structure from the OFDM signal is strikingly evident in the magnitude of the complex signal image, shown on the top right. Note that the complex signal images are actually complex-valued (In-Phase & Quadrature components), but are visualized above in magnitude space.

Deep Neural Network Architecture

From our pre-processing step, we generated a whole bunch of complex signal images for both OFDM signals and noise. We're now ready to build and train a DNN to act as an OFDM signal detector. It's important to note that the complex signal images are somewhat analogous to images from a camera, with a few key differences. In computer vision applications, images can either be grayscale (single channel) or color (three channels). The complex signal images contain In-Phase and Quadrature components, making them two channels. We took the approach of treating the In-Phase and Quadrature components as separate channels in the DNN, to preserve both amplitude and phase information of the signal. We are also actively conducting research and development in methods for using complex-valued data natively within the formulation of the DNN with common frameworks such as TensorFlow. Initial academic research in this domain [3] has shown that complex-valued DNNs achieve excellent classification performance and can potentially generalize better.
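In NumPy terms, this channel split is a one-liner; the helper name below is ours, purely for illustration:

```python
import numpy as np

def to_iq_channels(image):
    """Split a complex signal image (time x frequency) into a 2-channel real tensor."""
    return np.stack([image.real, image.imag], axis=-1)   # shape: (time, freq, 2)
```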

Figure 8: Depiction of input data tensor and DNN architecture

Figure 8 above shows the input data for the DNN in tensor form, as well as a block diagram of the DNN architecture we chose. The first dimension represents time; we fixed each signal snapshot to contain 333 BD symbols, excluding a few symbols at the beginning and end as guard symbols. The second dimension represents frequency; recall that we computed an 80-point DFT in the polyphase channelizer to extract each individual subcarrier. The third dimension represents the split of complex-valued data into two separate channels. For the DNN, we employ multiple convolutional and fully connected layers to learn a rich feature representation of the OFDM signal structure. We performed hyperparameter optimization and utilized common regularization techniques such as Dropout to prevent overfitting.
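The sketch below shows what such a network might look like in TensorFlow/Keras. The layer counts, filter sizes and the 325-symbol time dimension (333 BD symbols minus a few guard symbols) are illustrative assumptions, not KickView's trained architecture:

```python
import tensorflow as tf

def build_detector(time_steps=325, n_bins=80):
    """Small convolutional classifier over complex signal images (illustrative only)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(time_steps, n_bins, 2)),       # I/Q as 2 channels
        tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),                               # regularization
        tf.keras.layers.Dense(1, activation="sigmoid"),             # OFDM vs. not-OFDM
    ])

model = build_detector()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```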

Experimental Results

We evaluated the performance of our DL-based OFDM signal detector under four different scenarios:

  • Full-band (20 MHz) detection of OFDM vs. Additive White Gaussian Noise (AWGN)
  • Full-band (20 MHz) detection of OFDM vs. Background (Noise, Bluetooth, other in-band interferers)
  • Partial-band (5 MHz) detection of OFDM vs. AWGN
  • Partial-band (5 MHz) detection of OFDM vs. Background (Noise, Bluetooth, other in-band interferers)

Let's take a look at the results, shown below in Figure 9.

Figure 9: Detection performance of KickView's DL-based signal detector for OFDM signals

In the full-band scenario of OFDM vs. AWGN (the most ideal case), we achieve a 90% detection rate down at -5.5 dB SNR. Looking at an example signal snapshot at -5 dB SNR in Figure 9, the signal is essentially invisible to the human eye! Performance suffers slightly (90% detection rate at -4.5 dB SNR) when other in-band interferers such as Bluetooth and Wireless USB are added into the RF dataset, but not by much. We can conclude that for a full-band collection, our DL-based signal detector can detect the presence of OFDM signaling below the noise floor.

The sensitivity of the detector isn't quite as high for the partial-band (5 MHz) scenarios. For partial-band detection of OFDM vs. Noise, we obtain a 90% detection rate at 0.5 dB SNR, and sensitivity decreases to 1.5 dB SNR when in-band interferers are added. This performance impact is expected: since we're only capturing 1/4 of the total signal energy, only some of the signal structure is present for inference. Furthermore, the structure of a 5 MHz snapshot can vary depending on which part of the signal band is collected; a complex signal image captured at the edge of the OFDM signal band will look very different from one captured at the center.
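As a rough back-of-the-envelope check on our part (not a result from the formal analysis): collecting 1/4 of the signal energy corresponds to roughly a 10·log10(4) ≈ 6 dB drop in captured signal power, which lines up with the ~6 dB shift in the 90% detection threshold between the full-band (-5.5 dB) and partial-band (0.5 dB) noise-only cases.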

The results above were all generated from our custom RF dataset. However, the real test is whether this signal detector works in a real RF environment. To validate it, we performed a live RF collection in our office with a known, cooperative WiFi access point transmitting an IEEE 802.11g OFDM signal. In the same band, there were many interfering signals, including other WiFi access points, Bluetooth and Wireless USB. We found the performance of our signal detector to be excellent, with a high detection rate and a very low false alarm rate. Although we do not present the detailed detection metrics for this scenario in this post, Figure 10 below shows some example snapshots of detector classifications on live-collected RF data.

Figure 10: Classification examples of our OFDM signal detector from a live RF collect conducted at KickView's office

Figure 11: Our detector successfully detects the presence of WiFi with only ~50 microseconds of captured energy

Interestingly, although our signal detector was trained on complete IEEE 802.11g signal pulses, it does an excellent job detecting OFDM even when only tiny bursts (in time) of energy are captured. An example of this is shown above in Figure 11. In contrast, conventional techniques typically require many symbols of energy to integrate over in order to make a classification decision and perform time/frequency synchronization.

Conclusion

In this blog post, we've introduced the technical concepts behind OFDM signaling and outlined a unique approach that innovatively combines DSP and DL to build a custom signal detector. We've shown that our signal detector can detect the presence of OFDM below the noise floor using only partial-bandwidth RF collections and short-duration bursts. We are leveraging this capability, along with our other AI-based anomaly detection methods, for applications in the cable and wireless telecommunications space, where the use of OFDM waveforms is becoming increasingly prevalent. Stay tuned for our tutorial that will include access to a full dataset and step-by-step instructions on building your own AI signal detector. If you are interested in our technology solutions or in working together, please contact us.

References

[1] Won-Gyu Song and Jong-Tae Lim, "Channel estimation and signal detection for MIMO-OFDM with time varying channels," in IEEE Communications Letters, vol. 10, no. 7, pp. 540-542, July 2006.

[2] M. Renfors, J. Yli-Kaakinen and F. J. Harris, "Analysis and Design of Efficient and Flexible Fast-Convolution Based Multirate Filter Banks," in IEEE Transactions on Signal Processing, vol. 62, no. 15, pp. 3768-3783, Aug. 2014.

[3] A. Hirose and S. Yoshida, "Generalization Characteristics of Complex-Valued Feedforward Neural Networks in Relation to Signal Coherence," in IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 4, pp. 541-551, April 2012.

KickView

KickView provides real-time AI solutions for extracting actionable information from sensor data at the edge.