Signal Detection Using Deep Learning

There are many resources for learning how to use Deep Learning to process imagery. However, very few resources exist to demonstrate how to process data from other sensors such as acoustic, seismic, radio, or radar. In this blog post, we will introduce some basic methods for utilizing a Convolutional Neural Network (CNN) to process Radio Frequency (RF) signals. Our complete tutorial and lab can be found at

Introduction to signal detection

When monitoring radio frequency (RF) signals, or similar data from sensors such as biomedical or temperature sensors, we are often interested in detecting certain signal “markers” or features. This becomes a challenging problem when the signal-of-interest is degraded by noise. Traditional signal detection methods use a range of techniques such as energy detection, matched filtering, or other correlation-based processing. Short-duration RF events can be especially challenging to detect, since the useful data length is limited and long integration times are not possible. Weak signals that are short in duration are some of the most difficult to detect reliably (or even find). We will walk you through a simple approach that uses a Convolutional Neural Network (CNN) to tackle the traditional signal processing problem of detecting RF signals in noise.
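As a point of reference before the CNN approach, the energy detector mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration that assumes the noise power is known exactly (real systems must estimate it); the sample size and 3 dB threshold are illustrative choices, not values from the tutorial.

```python
import numpy as np

def energy_detector(x, noise_power, threshold_db=3.0):
    """Classical energy detection: declare a signal present when the
    measured average power exceeds the (assumed known) noise power
    by more than threshold_db decibels."""
    measured_db = 10.0 * np.log10(np.mean(np.abs(x) ** 2) / noise_power)
    return measured_db > threshold_db

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)                        # unit-power AWGN
tone = 3.0 * np.sin(2 * np.pi * 0.1 * np.arange(4096))   # strong sinusoid
print(energy_detector(noise, 1.0))         # noise only -> False
print(energy_detector(noise + tone, 1.0))  # tone well above noise -> True
```

Note how this already hints at the problem: for a weak, short-duration signal the measured power barely moves above the noise floor, which is exactly the regime this post targets.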

A little background information

Signal detection theory often assumes that a signal is corrupted with additive white Gaussian noise (AWGN). This type of noise is common in the real world, and the assumption makes mathematical analysis tractable. The detectability of a signal in noise depends on the signal duration, amplitude, and the corresponding noise process. Detection becomes more difficult if correlated noise, or interfering signals, occupy the same band as the signal you wish to detect.

In our tutorial, we will assume no a priori information about the signal-of-interest. As input to the CNN, we will use spectrograms computed from simulated RF data using a common Fast Fourier Transform (FFT) based method. Transforming the input data into the frequency domain as time-frequency grams, which are 2D representations just like pictures, allows us to visualize the energy of a signal over some pre-determined time duration and frequency bandwidth.
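The FFT-based gram computation can be sketched in NumPy as follows; the window length, hop size, and Hann window here are illustrative choices, not necessarily the tutorial's exact parameters.

```python
import numpy as np

def spectrogram(x, nfft=256, hop=128):
    """FFT-based time-frequency gram: slide a Hann window across the
    signal and take the squared magnitude of each segment's FFT.
    Rows are frequency bins, columns are time slices."""
    window = np.hanning(nfft)
    n_frames = 1 + (len(x) - nfft) // hop
    frames = np.stack([x[i * hop : i * hop + nfft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).T

# Example: a 1 kHz tone sampled at 8 kHz
fs = 8000.0
t = np.arange(8192) / fs
S = spectrogram(np.sin(2 * np.pi * 1000.0 * t))
print(S.shape)  # (129, 63): 129 frequency bins by 63 time slices
```

Each column of `S` is one time slice; a pure tone shows up as a single bright horizontal line across the gram.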

The difficulty with real-world signals

For a single sinusoid in AWGN, finding the frequency bin with the maximum amplitude is a simple method for estimating signal frequency in a spectrogram. But real-world signals are often more complex, with frequency components that change over time, making a generalized signal detection algorithm difficult to design. In this tutorial, we will look at one of these types of signals: Linear Frequency-Modulated (LFM) signals. In a follow-on tutorial we will explore Frequency-Hopped (FH) signals and multi-signal detection scenarios.
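The max-bin estimator for a single sinusoid is only a few lines. This sketch uses an illustrative sample rate, tone frequency, and noise level of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 8000.0, 4096
t = np.arange(n) / fs
f_true = 1234.0                   # tone frequency we want to recover
x = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(n)

# Estimate the frequency as the FFT bin with maximum magnitude
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_est = freqs[np.argmax(spectrum)]
print(f"estimated {f_est:.1f} Hz (bin resolution {fs / n:.2f} Hz)")
```

The estimate is accurate to within one bin for a stationary tone, but it breaks down as soon as the signal's frequency moves during the observation window.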

Linear Frequency-Modulated Signals

One classic example is the detection of a linear frequency-modulated (LFM), or chirp, signal. This is a signal that ramps up or down in frequency over some time frame; its frequency changes with time according to its chirp rate. Chirps are used in many different systems for frequency response measurements and timing. Radar systems use chirp signals because of the inherently large time-bandwidth product available with coherent processing. Another common use is automatic room equalization in home theater receivers, since chirps can excite a large frequency swath quickly. Chirps can also serve as “pilot” signals to denote the start of an incoming transmission, and more.
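An LFM chirp is easy to synthesize directly from its definition: the instantaneous frequency f(t) = f0 + k·t is integrated to obtain the phase. A minimal NumPy sketch, with all parameters chosen for illustration:

```python
import numpy as np

def lfm_chirp(f0, f1, duration, fs):
    """Generate an LFM (chirp) signal sweeping from f0 to f1 Hz.
    Instantaneous frequency f(t) = f0 + k*t with chirp rate
    k = (f1 - f0) / duration; the phase is its integral."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration
    phase = 2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2)
    return t, np.cos(phase)

# A 0.5 s up-chirp from 500 Hz to 2 kHz at an 8 kHz sample rate
t, x = lfm_chirp(500.0, 2000.0, 0.5, 8000.0)
```

In a spectrogram this signal appears as the diagonal line seen in Figure 1, with slope equal to the chirp rate k.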

Figure 1 shows a high-SNR chirp as seen in a grayscale spectrogram (the format we will be using). Since the spectrogram consists of nonnegative real numbers, we can map it to a picture file by scaling the values appropriately, so we only need a single grayscale image channel. In this plot, the x axis is time, the y axis is frequency, and brightness is proportional to signal power.

Fig1. High-SNR chirp spectrogram (grayscale)
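The scaling from nonnegative spectrogram values to a grayscale image can be done by converting power to decibels, clipping to a fixed dynamic range, and min-max scaling to 8 bits. This is one reasonable recipe, not necessarily the exact scaling used in the tutorial:

```python
import numpy as np

def spectrogram_to_grayscale(spec, floor_db=-80.0):
    """Map a nonnegative spectrogram to an 8-bit grayscale image:
    convert power to dB, clip to a dynamic-range floor below the
    peak, then min-max scale so bright pixels mean high power."""
    db = 10.0 * np.log10(spec + 1e-12)            # avoid log(0)
    db = np.clip(db, db.max() + floor_db, None)   # limit dynamic range
    lo, hi = db.min(), db.max()
    img = (db - lo) / (hi - lo) if hi > lo else np.zeros_like(db)
    return np.rint(img * 255.0).astype(np.uint8)

# Toy power values standing in for a real spectrogram
spec = (np.arange(1.0, 100.0).reshape(9, 11)) ** 2
gray = spectrogram_to_grayscale(spec)
```

The dB conversion matters: without it, a weak chirp near the noise floor would be invisible next to strong interferers on a linear scale.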

The above chirp (Figure 1) has a high SNR and is easy to detect with traditional signal processing algorithms. But when you are monitoring RF environments that contain other “offending” signals and high noise levels, reliable detection becomes more difficult. For example, Figure 2 shows an example spectrogram with some pulsed carrier waves (sinusoids) and a low-bitrate digital communication BPSK signal embedded in noise.

Fig2. Typical real-world noisy spectrum (x-axis is time, y-axis is frequency)

In this spectrogram there is no chirp signal, just noise and other comms-like signals. This is similar to what “real-world” RF signals look like – combinations of signal classes with different strengths, all embedded in noise. As an exemplar of the problem we will solve, Figure 3 shows another spectrogram containing noise, interfering signals, and a weak chirp signal.

Fig3. Weak chirp embedded in noise

In Figure 3, the chirp signal is 7 dB below the noise power in this frequency band; that is, the signal-to-noise ratio (SNR) for the chirp is -7 dB. It is barely visible to the human eye. Traditional detection methods, without large amounts of integration and/or a prior signal model, consistently fail to detect a weak signal like this. Moreover, since interfering signals share the same bandwidth as the chirp, the problem becomes even harder.
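Constructing a test signal at a prescribed SNR, such as the -7 dB chirp in Figure 3, amounts to scaling the signal's average power relative to the measured noise power. A small sketch (sample rate and chirp parameters are illustrative, not the figure's actual values):

```python
import numpy as np

def scale_to_snr(signal, noise_power, snr_db):
    """Scale `signal` so its average power sits snr_db decibels
    relative to noise_power (snr_db = -7 puts it 7 dB below)."""
    target_power = noise_power * 10.0 ** (snr_db / 10.0)
    return signal * np.sqrt(target_power / np.mean(signal ** 2))

rng = np.random.default_rng(1)
noise = rng.standard_normal(8192)            # ~unit-power AWGN
t = np.arange(8192) / 8000.0
chirp = np.cos(2.0 * np.pi * (500.0 * t + 0.5 * 3000.0 * t ** 2))
weak = scale_to_snr(chirp, np.mean(noise ** 2), -7.0)
observed = weak + noise                      # what the detector sees
```

Sweeping `snr_db` over a range of values is also how the detection-performance curves in the full tutorial are generated.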

When monitoring RF signals, we want accurate detection of these types of signals, as a human cannot visually inspect all the data manually. For example, in the case of intelligent spectral monitoring or cognitive radio, we want something to autonomously analyze extraordinary amounts of signal data all the time. The question arises: Can we design a better process to help detect these weak signals?

Deep Spectral Detection: Data and Network Creation

We created a two-output convolutional neural network that ingests an image of a time-frequency signal spectrogram. The network determines whether a chirp signal is present (class 0 - signal) or not present (class 1 - noise).
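Generating labeled training examples for the two classes can be sketched as follows; every parameter here (sample rate, chirp amplitude, frequency ranges) is an illustrative assumption rather than the tutorial's exact recipe. Each time series would then be rendered as a grayscale spectrogram image and written into per-class folders for ingestion:

```python
import numpy as np

rng = np.random.default_rng(42)
fs, n = 8000.0, 4096
t = np.arange(n) / fs

def make_example(with_chirp):
    """One labeled training series: class 0 = chirp present,
    class 1 = noise only (matching the network's two outputs)."""
    x = rng.standard_normal(n)               # AWGN background
    if with_chirp:
        f0 = rng.uniform(200.0, 1000.0)      # random start frequency (Hz)
        rate = rng.uniform(1000.0, 4000.0)   # random chirp rate (Hz/s)
        x += 0.5 * np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))
    return x, (0 if with_chirp else 1)

# Alternate signal/noise examples to keep the two classes balanced
dataset = [make_example(i % 2 == 0) for i in range(10)]
```

Randomizing the start frequency and chirp rate is what forces the network to learn the general "diagonal line" structure of a chirp rather than memorizing one specific signal.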

Creating DIGITS image database

We utilized the NVIDIA DIGITS framework to ingest our spectrogram dataset and train a model.


New CNN model creation

DIGITS allows you to use existing models or create custom models for classification. We started with AlexNet and pruned the number of fully-connected layers to two; those layers were also reduced in size (fewer neurons). We added regularization to help train a better network.


Models can also be visualized in DIGITS to see the various layers and activations.


Training the model is straightforward using DIGITS. It is important to experiment with and adjust hyperparameters in order to get the model to train appropriately. Once you have trained your CNN, DIGITS can save a copy of the network model at each epoch (it's one of the training parameters), so you can go back and analyze any epoch of the training process.


What can I do with this model?

Now you can test it out on a selected spectrogram image. We will select one of the training images for a quick example that shows how DIGITS can be used to visualize features within the model.


Where can you get more hands-on experience?

Our complete tutorial and lab can be found at . In that tutorial, we show how to use the model as a signal detector and how to analyze its detection performance with test signal data at various SNR levels.

In our next post, we will show how to do multi-signal detection for many signal types in a noisy RF environment.