1 of 24

Soft Demodulation Using Neural Networks

Tammie Yang

EE132A

Professor Lorenzelli

2 of 24

Overview

GOAL: Build neural networks for five modulation schemes (BPSK, 8PSK, 4QAM, 16QAM, and 64QAM) that softly demodulate symbols by computing the bit log-likelihood ratios (LLRs).

  • Generated data in MATLAB
  • Created the neural networks in Python
  • Calculated exact & approximate bit LLRs
  • Tested and chose optimal parameters
  • Trained with 15 dB SNR, evaluated with 20, 15, 10, 5 dB SNR data sets
  • Plotted accuracy and loss for each modulation scheme
  • Implemented a Rayleigh Fading Channel
  • Compared modulation schemes


3 of 24

Data Generation

MATLAB portion

4 of 24

Simulated Channel

  • Used randi to generate Gray-coded symbols from 0 to M-1 for the ‘c’ vector
  • Used pskmod and qammod to modulate the symbols
  • Added AWGN with variance calculated from the specified SNR to get the ‘s-hat’ vector
  • Demodulated by calculating exact and approximate LLRs to get the ‘l’ vector
  • Saved the data in .mat files to import into Python (the pipeline is sketched below)
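
The MATLAB steps above map closely onto NumPy; a minimal sketch for 4QAM, assuming a unit-energy constellation and the usual sigma² = 10^(-SNR/10) noise-variance conversion (the variable names are illustrative, not the project's):

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, snr_db = 4, 100_000, 15.0

    # 'c' vector: uniform symbols 0..M-1 (the equivalent of MATLAB's randi)
    c = rng.integers(0, M, size=N)

    # Gray-coded 4QAM: labels 0, 1, 3, 2 sit at successive angles, so
    # neighbouring constellation points differ in exactly one bit
    pos = np.array([0, 1, 3, 2])  # angular position of each symbol value
    s = np.exp(1j * (np.pi / 4 + (np.pi / 2) * pos[c]))

    # 's-hat' vector: add complex AWGN with variance set by the target SNR
    sigma2 = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    s_hat = s + noise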


5 of 24

LLR Calculations

(Log-likelihood ratio)

Exact log-MAP

  • Logarithmic maximum a posteriori (log-MAP) algorithm
  • More accurate, but slower to calculate
  • The logarithm of the ratio of the probability that a 0 bit was transmitted to the probability that a 1 bit was transmitted, given the received signal

  • Assumes all symbols are equally likely; the noise variance, denoted σ², is calculated from the specified SNR (formula below)
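
In symbols, with received sample ŝ and S_k^0, S_k^1 the sets of constellation points whose k-th bit is 0 or 1, this is the standard exact log-MAP expression:

    \[
    \ell_k = \ln\frac{\Pr(b_k = 0 \mid \hat{s})}{\Pr(b_k = 1 \mid \hat{s})}
           = \ln\frac{\sum_{s \in S_k^0} \exp\left(-\lvert \hat{s} - s \rvert^2 / \sigma^2\right)}
                     {\sum_{s \in S_k^1} \exp\left(-\lvert \hat{s} - s \rvert^2 / \sigma^2\right)}
    \]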

Approximate max-log-MAP

  • Max-log maximum a posteriori (max-log-MAP) algorithm
  • An approximation; faster to calculate
  • Computed using only the nearest constellation point to the received signal in each bit partition (formula below)
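
Replacing each log-sum with its single largest term (the nearest point in each bit partition) gives the standard max-log approximation:

    \[
    \ell_k \approx \frac{1}{\sigma^2}\left( \min_{s \in S_k^1} \lvert \hat{s} - s \rvert^2 - \min_{s \in S_k^0} \lvert \hat{s} - s \rvert^2 \right)
    \]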

Calculation

  • BPSK, 8PSK: both exact and approximate LLRs calculated manually
  • 4QAM, 16QAM: calculated manually and with qamdemod
  • 64QAM: calculated using qamdemod (a generic NumPy sketch follows)
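
Both quantities fall out of the same distance matrix, so one routine can return them together. A NumPy sketch of the generic calculation (the constellation points and their Gray bit labels are inputs; the sign convention is log P(bit = 0)/P(bit = 1), as above):

    import numpy as np
    from scipy.special import logsumexp

    def bit_llrs(s_hat, points, labels, sigma2):
        """Exact log-MAP and approximate max-log-MAP bit LLRs.

        s_hat  : (N,) complex received samples
        points : (M,) complex constellation points
        labels : (M, log2(M)) 0/1 Gray bit labels for each point
        Returns (exact, approx), each of shape (N, log2(M)).
        """
        # Log-domain metric -|s_hat - s|^2 / sigma^2 for every point
        metric = -np.abs(s_hat[:, None] - points[None, :]) ** 2 / sigma2
        n_bits = labels.shape[1]
        exact = np.empty((len(s_hat), n_bits))
        approx = np.empty_like(exact)
        for k in range(n_bits):
            m0 = metric[:, labels[:, k] == 0]  # points whose k-th bit is 0
            m1 = metric[:, labels[:, k] == 1]  # points whose k-th bit is 1
            exact[:, k] = logsumexp(m0, axis=1) - logsumexp(m1, axis=1)  # log-MAP
            approx[:, k] = m0.max(axis=1) - m1.max(axis=1)               # max-log-MAP
        return exact, approx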


6 of 24

Exact vs Approximate LLR

  • The linear one-to-one relation between exact and approximate LLRs confirms the calculations are correct
  • All modulation schemes are shown


7 of 24

LLRnet Overview

Input Layer

  • Takes in two inputs
  • Received symbol (s-hat) split into two components
    • In-phase
    • Quadrature

Hidden Layer

  • Has K neurons
  • Uses ReLU activation function

Output Layer

  • Outputs log2(M) bit LLRs for an M-ary modulation scheme (2 for 4QAM, 6 for 64QAM)

Python code (4QAM example)
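
The original code listing did not survive extraction; below is a minimal Keras sketch consistent with the architecture described above. The MSE loss on the exact-LLR targets and the hidden width K = 4 (the optimum from the later neuron sweep) are assumptions:

    import numpy as np
    from tensorflow import keras

    K = 4  # hidden-layer width (assumed from the Number of Neurons sweep)

    # LLRnet: 2 inputs (I and Q of s-hat) -> K ReLU neurons -> log2(M)
    # linear outputs, i.e. 2 bit LLRs for 4QAM
    inputs = keras.Input(shape=(2,))
    hidden = keras.layers.Dense(K, activation="relu")(inputs)
    outputs = keras.layers.Dense(2, activation="linear")(hidden)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")

    # x: (N, 2) array of [Re(s_hat), Im(s_hat)]; y: (N, 2) exact LLRs
    # exported from the MATLAB step (e.g. loaded with scipy.io.loadmat)
    # model.fit(x, y, epochs=50, validation_split=0.2)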


8 of 24

Optimal Parameters

Tested each parameter while keeping all the others constant

9 of 24

Optimizer Choice

  • Tried several different optimizers from the Keras options
  • Adam gave the best results for all modulation schemes
  • The data below is organized from worst to best performance

Performance data for BPSK modulation scheme


BPSK              adadelta      nadam         rmsprop   adam
Training Loss     64772.3865    27029.5765    0.0022    5.8525e-06
Validation Loss   64859.8067    25773.2068    0.0030    2.3995e-06
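
A sweep like the one above is a short loop in Keras; a sketch under the same assumptions as the model sketch earlier (the data arrays are random stand-ins for the BPSK set, which has one LLR per symbol):

    import numpy as np
    from tensorflow import keras

    def make_llrnet(n_out, k=4):
        inputs = keras.Input(shape=(2,))
        hidden = keras.layers.Dense(k, activation="relu")(inputs)
        outputs = keras.layers.Dense(n_out, activation="linear")(hidden)
        return keras.Model(inputs, outputs)

    # Stand-in data; real runs load (I, Q) pairs and exact LLRs from the .mat files
    x = np.random.randn(10_000, 2).astype("float32")
    y = np.random.randn(10_000, 1).astype("float32")

    for opt in ["adadelta", "nadam", "rmsprop", "adam"]:
        model = make_llrnet(n_out=1)
        model.compile(optimizer=opt, loss="mse")
        h = model.fit(x, y, epochs=50, validation_split=0.2, verbose=0)
        print(opt, h.history["loss"][-1], h.history["val_loss"][-1])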

10 of 24

Number of Symbols

  • Loss increased as the number of symbols decreased
  • A higher number of symbols would be preferable, but training took much longer to run
  • Used 10,000 symbols for BPSK
  • Used 100,000 symbols for the rest


4QAM                  1000 symbols   10,000 symbols   100,000 symbols
Training Accuracy     54.12%         99.99%           100%
Validation Accuracy   57.50%         100%             100%
Training Loss         58353.0739     0.0021           6.5823e-04
Validation Loss       57418.9239     0.0132           4.8585e-04

11 of 24

Validation Split


BPSK              0.1 validation split   0.2 validation split   0.4 validation split
Training Loss     5.3678e-07             5.8525e-06             27029.5765
Validation Loss   4.6101e-06             2.3995e-06             25773.2068

  • Denotes the fraction of the input data held out to validate, rather than train, the model
  • Found 0.2 to be the optimal split, as the training and validation losses were most similar there

12 of 24

Number of Neurons


4QAM                  2 neurons   4 neurons    6 neurons
Training Accuracy     99.98%      100%         100%
Validation Accuracy   99.94%      100%         100%
Training Loss         0.6469      6.5823e-04   2.8897e-05
Validation Loss       0.9448      4.8585e-04   6.0873e-08

  • More neurons meant:
    • Longer training/validation time
    • Training loss plateaued while validation loss kept decreasing
  • More complicated modulation schemes required more neurons

13 of 24

Modulation Schemes

14 of 24

BPSK

Binary Phase-shift Keying

Constellation Plots: clearly shows the two constellation points at +1 and -1.

Training: the model showed very low loss after just 50 epochs.

Evaluation: evaluating the BPSK Keras model at the same SNR it was trained at showed the least loss.

15 of 24

8PSK

8 Phase-shift Keying

Constellation Plots: shows the 8 constellation points around a circle, blurring out at lower SNR values.

Training: the model showed very low loss after just 10 epochs.

Evaluation: because the accuracy is not 100%, the linear relation is more spread out, showing the effect of the noise.

16 of 24

4QAM

4 Quadrature Amplitude Modulation

Constellation Plots: shows the 4 constellation points around a circle, blurring out at lower SNR values.

Training: the model showed very low loss after just 10 epochs.

Evaluation: the thin linear relation showcases the high accuracy.

17 of 24

16QAM

16 Quadrature Amplitude Modulation

Constellation Plots: shows the 16 constellation points on a square grid, blurring out at lower SNR values.

Training: the model showed very low loss after just 10 epochs.

Evaluation: because the accuracy is not 100%, the linear relation is more spread out, showing the effect of the noise.

18 of 24

64QAM

64 Quadrature Amplitude Modulation

Constellation Plots: shows the 64 constellation points as predicted.

Training: the model showed very low loss after around 20 epochs.

Evaluation: the thin linear relation showcases the high accuracy.

19 of 24

Rayleigh Fading Channel

20 of 24

Simulation

  • Rayleigh fading is a reasonable model when many objects in the environment scatter the radio signal before it arrives at the receiver
  • Generated Rayleigh-distributed random amplitudes to multiply the sent symbol vector by
    • Used a scale parameter of 1
  • Generated a uniform random phase from 0 to 2π to phase-shift all the sent symbols
  • Added AWGN (a NumPy sketch of the full fading step follows)
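
A NumPy sketch of the fading step, reusing the naming from the AWGN pipeline (QPSK symbols stand in for the modulated vector; the phase is read here as one common rotation per block, since the next slide notes the rotation is visible in the plotted constellation's angle):

    import numpy as np

    rng = np.random.default_rng(0)
    N, snr_db = 100_000, 15.0

    # Unit-energy QPSK symbols as a stand-in for the modulated 's' vector
    s = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, size=N)))

    # Per-symbol Rayleigh amplitudes (scale parameter 1) and one uniform
    # random phase in [0, 2*pi) applied to every sent symbol
    amplitude = rng.rayleigh(scale=1.0, size=N)
    phase = rng.uniform(0.0, 2 * np.pi)
    faded = s * amplitude * np.exp(1j * phase)

    # AWGN with variance set by the target SNR, as in the AWGN-only channel
    sigma2 = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    s_hat = faded + noise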


21 of 24

Rayleigh Results

  • The fading proved too severe for a neural network to train on
  • The symbols all overlap in the constellation plot
  • The random phase shift is very apparent in the angle of the plotted shape
  • As a further extension of the project, I would attempt phase estimation or receiver equalization


22 of 24

Overall Results

23 of 24

Modulation Scheme Comparison

BEST: 64QAM

  • High accuracy and low loss for all SNR values

WORST: 8PSK

  • High levels of loss compared to other modulation schemes at all SNR values


                       BPSK             8PSK        4QAM         16QAM        64QAM
SNR 15 dB  Loss        1.46874250e-06   134.1609    0.0229376    6.7020968    0.0223162
           Accuracy    N/A              0.9577      1.0          0.99994      0.98913
SNR 10 dB  Loss        31390.381        3016.3733   31311.2109   66160.913    0.0692035
           Accuracy    N/A              0.9562      0.99994      0.99644      0.99055
SNR 5 dB   Loss        57802.007        5907.9523   57840.686    116799.434   0.22164
           Accuracy    N/A              0.9471      0.99796      0.98808      0.99025

24 of 24

Thank you!
