Real-World RF Processing: Theory, Challenges & Machine Learning

by Robert North, Krishna Karra, and Dave Ohm

At KickView, analyzing Radio Frequency (RF) data to extract useful information is a key part of our multi-sensor processing solutions. In previous blog posts, we introduced ideas and concepts about applying deep learning techniques directly to RF data to perform signal detection and modulation recognition. The deep spectral detector operated on spectrograms computed from complex In-Phase/Quadrature (I/Q) samples, which are magnitude images of signal snapshots as a function of time and frequency. The modulation recognition system used a Convolutional Neural Network (CNN) that operated directly on the raw I/Q data, without any further transformation applied.

In this blog post, we want to take a step back and talk more generally about digital communications, and specifically about the challenges involved in processing RF data in the real world. RF data can be viewed generally as unstructured time-series data. However, as we will see, there are many challenges in extracting the structure of the signal from raw I/Q data. Understanding the theory and processing techniques of the digital communications world is critical before applying any sort of machine learning to the RF domain.

A Simplified View of Digital Communication

The goal of digital communications is to convey information from one point to another through a channel. More specifically, we want to impart discrete information (sampled data) from a binary data source (1s and 0s) onto a waveform for analog transmission. This process occurs by first mapping the binary data stream into a set of baseband symbols (pulses), which are then modulated onto a baseband waveform. Baseband is a fancy term for the original frequency range of a signal before it is up-converted to a different frequency for transmission. For example, a baseband audio signal (before FM modulation and transmission) has a typical range from 20 Hz to 20 kHz.

How do we construct a baseband waveform? It's done by convolving a shaping pulse with a sequence of complex symbols. To create the complex symbols, we map the source bits into amplitude and phase. Note that amplitude and phase can be trivially derived from a complex I/Q value, which is why signal data is represented as a time series of complex numbers. For example, in Binary Phase Shift Keying (BPSK), a single bit is mapped to a carrier phase of either 0 or 180 degrees. Figure 1 below depicts several complex constellation types with their associated source bit pattern to complex constellation mapping.
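To make the bit-to-symbol mapping concrete, here is a minimal NumPy sketch of a BPSK mapper and a Gray-coded QPSK mapper. The function names and the unit-energy normalization convention are our own illustrative choices, not a standard API:

```python
import numpy as np

def bits_to_bpsk(bits):
    """Map bits {0, 1} to BPSK symbols: carrier phase 0 -> +1, phase 180 deg -> -1."""
    return 1.0 - 2.0 * np.asarray(bits, dtype=float)

def bits_to_qpsk(bits):
    """Map bit pairs to Gray-coded QPSK symbols on the unit circle."""
    b = np.asarray(bits).reshape(-1, 2)
    i = 1.0 - 2.0 * b[:, 0]           # in-phase component from the first bit
    q = 1.0 - 2.0 * b[:, 1]           # quadrature component from the second bit
    return (i + 1j * q) / np.sqrt(2)  # normalize each symbol to unit energy

symbols = bits_to_bpsk([0, 1, 1, 0])  # -> [ 1., -1., -1.,  1.]
```

The two BPSK values correspond to the two carrier phases described above; the four QPSK points land on the corners of a square, one point per pair of source bits.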

Figure 1: Several common digital modulation constellations

Typically, some sort of pulse shaping occurs after modulation. The goal of pulse shaping is to change the properties of the waveform to make it better suited for transmission across a channel. The pulse shaping filter determines the spectrum and orthogonality characteristics of the signal to be transmitted. Pulse shaping is also often used to combat intersymbol interference, where symbols interfere with each other and cause distortion of the signal. Many pulse shaping filters are used in practice, but one of the most common is the Raised Cosine (RC) filter. Figure 2 below shows the time and frequency domain representation of this filter.

Figure 2: Time and Frequency Domain Representations of a Raised Cosine Pulse Shaping Filter
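As a sketch of how the RC impulse response can be generated in NumPy (the parameter names are our own, and a production system would likely use a library routine instead), note that the closed-form expression has a removable 0/0 singularity that must be handled explicitly:

```python
import numpy as np

def raised_cosine(beta, sps, span):
    """Raised Cosine impulse response.
    beta: roll-off factor in (0, 1], sps: samples per symbol,
    span: half-width of the filter in symbol periods."""
    t = np.arange(-span * sps, span * sps + 1) / sps   # time in symbol periods
    h = np.sinc(t) * np.cos(np.pi * beta * t)
    denom = 1.0 - (2.0 * beta * t) ** 2
    singular = np.isclose(denom, 0.0)
    h[~singular] /= denom[~singular]
    if singular.any():
        # At t = +/- 1/(2*beta) the closed form is 0/0; substitute the limit
        h[singular] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return h

h = raised_cosine(beta=0.35, sps=8, span=6)
# h peaks at 1.0 at t = 0 and is zero at every other integer symbol time
```

The zeros at nonzero integer symbol times are exactly the zero-ISI property that makes this filter attractive: at the optimal sampling instants, neighboring symbols contribute nothing.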

Figure 3 below shows the phase of the time-domain waveform in the case of BPSK modulation with an RC pulse shaping filter applied.

Figure 3: Time-domain representation of BPSK waveform

OK! We've mapped our source bits (binary data stream) onto a baseband modulated waveform. Before we proceed, let's take a step back and ask ourselves, why did we do this? By creating the baseband waveform, we've created an efficient representation of our data that can be recovered at the receiver. Our baseband waveform, when modulated up to a much higher carrier frequency, can be effectively transmitted over an RF channel. Different carrier frequencies are chosen based on their propagation characteristics and the amount of free spectrum available. The chosen carrier frequency also drives the design of the antenna and other front-end hardware. To summarize, we've created our complex baseband waveform, modulated it to a carrier frequency, and now we are ready to transmit the signal over an RF channel. In communication systems, this is achieved by amplifying and transmitting the real component of the up-converted waveform through an RF antenna or other transmission medium such as a coaxial cable.
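The up-conversion step itself can be sketched in a few lines of NumPy: multiply the complex baseband samples by a complex carrier and keep only the real part. The sample rate and carrier frequency below are arbitrary illustrative values:

```python
import numpy as np

def upconvert(baseband, fc, fs):
    """Mix a complex baseband waveform up to carrier frequency fc (Hz) and
    return the real part, as an RF front end would before amplification.
    fs is the sample rate in Hz."""
    n = np.arange(len(baseband))
    carrier = np.exp(2j * np.pi * fc * n / fs)
    return np.real(baseband * carrier)

# Example: a constant baseband value of 1.0 becomes a pure carrier tone
fs, fc = 1_000_000, 100_000   # 1 MHz sample rate, 100 kHz carrier (arbitrary)
rf = upconvert(np.ones(64, dtype=complex), fc, fs)
```

With a constant baseband input, the output is simply a cosine at the carrier frequency; any real modulated waveform is just this operation applied to a time-varying baseband signal.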

At the receiver, the opposite set of operations takes place. The up-converted real component of the signal is picked up by an antenna, and the signal is ultimately transformed back to a complex baseband waveform. This waveform is then demodulated to finally recover the original bit stream (binary data) from the modulated symbols.
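The receive-side mixing can be sketched the same way, assuming an ideal channel and using a simple windowed-sinc low-pass filter of our own choosing (a real receiver would use a properly designed filter chain):

```python
import numpy as np

def downconvert(rf, fc, fs, num_taps=101):
    """Mix a real RF capture back down to complex baseband, then low-pass
    filter away the double-frequency image created by the real-part operation."""
    n = np.arange(len(rf))
    mixed = rf * np.exp(-2j * np.pi * fc * n / fs)
    # Windowed-sinc low-pass with normalized cutoff fc/fs (illustrative, not optimized)
    t = np.arange(num_taps) - (num_taps - 1) / 2
    lp = np.sinc(2 * (fc / fs) * t) * np.hamming(num_taps)
    lp /= lp.sum()
    # Factor of 2 restores the amplitude halved when the transmitter kept only the real part
    return 2 * np.convolve(mixed, lp, mode="same")

# Round trip with a constant baseband of 1.0 (i.e., a pure carrier on the air)
fs, fc = 1_000_000, 100_000
n = np.arange(4096)
rf = np.cos(2 * np.pi * fc * n / fs)
bb = downconvert(rf, fc, fs)
# Away from the filter edges, bb is approximately 1.0 again
```

The low-pass filter is doing real work here: mixing a real signal down leaves a copy of the spectrum at twice the carrier frequency, which must be removed before demodulation.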

Real-World Effects

Digital communication seems pretty simple, doesn't it? In reality, everything described above is an incredibly simplified view of a real-world communications link between a transmitter and receiver. Let's discuss some typical impairments that we can expect to see when doing RF processing.

One set of perturbations has to do with frequency and phase error. Phase and frequency are inherently intertwined because frequency is nothing more than the rate of change of phase over time. The transmitter and receiver both know what carrier frequency they should be tuned to, but in reality they don't always agree, because their local oscillators are typically referenced to different clock sources. A fixed frequency offset between the transmitter and receiver leads to phase rotation in the constellation diagram, which can make extraction of hard decisions intractable unless the offset is corrected for. If the transmitter and/or receiver are in motion, there will also be Doppler shifts, which manifest as additional frequency error. To handle these sources of error, communication systems employ techniques such as phase-locked loops and frequency tracking algorithms.
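The effect of an uncorrected carrier frequency offset (CFO) is easy to reproduce in NumPy: each successive symbol accrues an extra increment of phase, which spins the constellation. The names and rates below are illustrative:

```python
import numpy as np

def apply_cfo(symbols, freq_offset_hz, symbol_rate_hz):
    """Rotate symbols by an uncorrected carrier frequency offset.
    Each successive symbol picks up an extra 2*pi*df/Rs radians of phase."""
    k = np.arange(len(symbols))
    return symbols * np.exp(2j * np.pi * freq_offset_hz * k / symbol_rate_hz)

bpsk = np.array([1, -1, 1, 1, -1], dtype=complex)
rotated = apply_cfo(bpsk, freq_offset_hz=100.0, symbol_rate_hz=10_000.0)
# The phase advances by 2*pi*(100/10000) rad (3.6 degrees) per symbol,
# so the two BPSK clusters slowly rotate around the origin
```

Amplitude is untouched; only phase drifts, which is exactly the smearing seen on a constellation diagram when frequency tracking is disabled.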

Side note: Other types of signal corruption can be read easily from a constellation diagram, making it a very useful tool. Gaussian noise presents itself as fuzzy constellation points centered around the correct values. Phase noise and frequency offset, as discussed above, show up as rotationally spread constellation points. Compression in the amplifier causes the outermost points to move toward the center.
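For example, the "fuzzy" appearance under Gaussian noise can be reproduced by adding complex white noise to ideal QPSK points (the SNR and sample count below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym = 500
# Ideal QPSK points with unit symbol energy
qpsk = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)
snr_db = 15.0
noise_power = 10 ** (-snr_db / 10)  # relative to unit symbol energy
noise = np.sqrt(noise_power / 2) * (
    rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)
)
noisy = qpsk + noise
# Scatter-plotting `noisy` would show four fuzzy clouds centered on the ideal points
```

Lowering `snr_db` grows the clouds until they merge, at which point symbol decisions start to fail; this is the visual intuition behind reading a constellation diagram.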

There is also uncertainty in the symbol rate, the rate at which information symbols are imparted onto the baseband waveform. Going back to Figure 3 above, there are clearly optimal points at which to sample the analog waveform. If there is a timing offset, sampling at the wrong instants can result in bit errors. There can also be an offset in the phase of the received symbols. To combat these issues, communication systems employ symbol timing recovery and synchronization algorithms.
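A quick NumPy experiment (using an illustrative windowed-sinc pulse rather than a full RC chain) shows how sampling away from the optimal instants turns directly into bit errors:

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 8                                     # samples per symbol (assumed)
bits = rng.integers(0, 2, 200)
symbols = 1.0 - 2.0 * bits                  # BPSK: 0 -> +1, 1 -> -1
up = np.zeros(len(symbols) * sps)
up[::sps] = symbols                         # upsample by zero stuffing
t = np.arange(-4 * sps, 4 * sps + 1) / sps
pulse = np.sinc(t) * np.hamming(len(t))     # zero ISI at integer symbol times
wave = np.convolve(up, pulse, mode="same")

def slice_bpsk(wave, sps, offset):
    """Recover bits by sampling every sps-th sample starting at `offset`."""
    return (wave[offset::sps] < 0).astype(int)[: len(bits)]

errors_on_time = np.sum(slice_bpsk(wave, sps, 0) != bits)
errors_late = np.sum(slice_bpsk(wave, sps, sps // 2) != bits)
# Sampling halfway between symbols produces many bit errors; on-time sampling produces none
```

Symbol timing recovery exists precisely to find and track that on-time sampling phase as it drifts.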

Finally, there can be impairments due to the RF channel itself. A fading RF channel can be caused by multipath propagation in the environment, weather, and/or shadowing from physical obstacles. Multipath, which occurs when a signal arrives at the receiver over two or more paths, can cause constructive and destructive interference as well as phase shifting of the signal. Figure 4 below shows how multipath effects can arise during transmission over an Over-The-Air (OTA) channel.

Figure 4: Various multipath sources from OTA transmission

Multipath is typically quantified by the delay spread, the difference between the arrival times of the earliest and latest multipath components. Delay spread directly influences the amount of Intersymbol Interference (ISI) that can be expected at the receiver, and a large delay spread has the potential to cause severe degradation in performance. These negative effects are often visualized with an eye diagram, shown below in Figure 5.
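A tapped-delay-line model makes the connection between delay spread and ISI concrete. In the hypothetical two-ray channel below, an echo arriving 3 samples late at half amplitude smears each symbol into its neighbors:

```python
import numpy as np

def multipath_channel(signal, delays, gains):
    """Apply a simple tapped-delay-line multipath model: the received signal
    is the sum of delayed, scaled copies of the transmitted one.
    delays are in samples; gains may be complex to model per-path phase."""
    taps = np.zeros(max(delays) + 1, dtype=complex)
    for d, g in zip(delays, gains):
        taps[d] += g
    return np.convolve(signal, taps)[: len(signal)]

# Two-ray channel: direct path plus an echo 3 samples later at half amplitude
tx = np.array([1, -1, 1, 1, -1], dtype=complex)
rx = multipath_channel(tx, delays=[0, 3], gains=[1.0, 0.5])
delay_spread = 3  # samples between earliest and latest arrival
```

Every received sample past the echo delay is now a mixture of two transmitted symbols; as the delay spread grows relative to the symbol period, that mixing (ISI) gets worse.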

Figure 5: Eye diagram of BPSK signal (clean, moderate multipath, severe multipath)

An eye diagram is generated by repetitively sampling a digital signal from a receiver and overlaying the resulting traces over a time window matched to the symbol rate. An eye diagram can be thought of as a way to visualize the trajectory that the received signal takes between successive optimal sampling points. Reading an eye diagram properly can provide insight into the error sources present in your system. In Figure 5, we see an "open eye" on the left, which shows up when error sources are largely mitigated, and the eye closing as the effects of multipath on the received signal are accentuated. Equalization is typically employed to combat perturbations to the signal due to channel effects.
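Generating the traces for an eye diagram is mostly a matter of slicing the oversampled waveform into overlapping symbol-length segments; plotting every row of the array below on the same axes yields the eye (the function name is our own):

```python
import numpy as np

def eye_traces(wave, sps, span_symbols=2):
    """Slice an oversampled waveform into overlapping segments spanning
    `span_symbols` symbol periods, advancing one symbol period per trace.
    Overlaying the rows of the result produces the eye diagram."""
    seg = span_symbols * sps
    n_traces = (len(wave) - seg) // sps
    return np.array([wave[i * sps : i * sps + seg] for i in range(n_traces)])

# Each row is one trace; a plotting library would draw all rows overlaid
```

On a clean signal all traces pass through the same two levels at the sampling instant, leaving the eye open; noise, timing error, and ISI each pull traces into the opening in their own characteristic way.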

Conclusion: Applying Machine Learning to Real-World RF Problems

If there's one takeaway from this post, it's that processing RF data in the real world is messy and challenging. At KickView, we believe that it's crucial to have a firm understanding of the tools and techniques grounded in communication theory in order to understand how to effectively apply machine learning to this domain. Tools from communication theory can be powerful additions to aid machine learning algorithms in this domain. Below are some questions that you need to ask yourself when applying machine learning to RF in the real world:

Can a deep neural net generalize over errors in frequency and time? How much capacity will that kind of network require, and does it have any hope of convergence? How do I generate representative training data to capture all these perturbations? What does the input space of my training data look like? What kind of pre-processing will allow the machine to extract salient features from signal data?

These are the types of problems we face daily at KickView when processing real-world RF sensor data using both Deep Learning and traditional signal processing techniques. What sort of issues have you experienced trying to process RF data using machine learning? Any lessons learned you'd like to share with us?