1 of 20

TIME SERIES FORECASTING OF GENERATED POWER FROM TEXAS WIND TURBINE


2 of 20

PRESENTATION OUTLINE

  • Scope of Research & Abstract
  • Methodology
    • Data Pre-Processing
    • Data Normalization
    • LSTM
    • NAR
    • NARX
  • Results
    • LSTM MSE and prediction
    • NAR MSE and prediction
    • NARX MSE and prediction
  • Conclusion
    • Comparison and Conclusion

3 of 20

AUTHORS 

Sara Antonijevic, Department of Statistics at Texas A&M University

Nicholas A. Hegedus, Department of Mechanical Engineering at the University of Nevada, Las Vegas

Nuri J. Omolara, Department of Industrial and Systems Engineering at North Carolina Agricultural & Technical State University


4 of 20

RESEARCH ADVISORS

Kishore Bingi, Ph.D.


Om Prakash Yadav, Ph.D.

Rosdiazli Ibrahim, Ph.D.

5 of 20

SCOPE OF RESEARCH & ABSTRACT

  • Wind energy is one of the cleanest energy sources and can serve as an alternative to nonrenewable energy.
    • It is an abundant, effectively unlimited resource.
    • However, wind speed behaves dynamically, which makes energy output difficult to anticipate.
    • Time-series data are also intricate to work with and to predict accurately.

  • The scope of this study is to determine which of the NAR, NARX, and LSTM algorithms predicts the power generated from wind turbines with the highest accuracy.
  • Data from a simulated Texas wind turbine are separated into training and testing sets and run through the LSTM, NAR, and NARX algorithms.
  • After obtaining the mean squared error (MSE) on the testing data, the algorithms are compared to determine the best-predicting algorithm.
  • The results show which algorithm provides the most robust prediction of generated energy from wind turbines on time-series data.


6 of 20

RESEARCH AND METHODOLOGY


Wind Turbine Information

  • Year-long, hourly time series simulated using the National Renewable Energy Laboratory (NREL) software
  • Located in Texas, US
  • General Electric wind turbine installed onshore
  • Rotor diameter: 111 m
  • Rated output: 3,600 kW
  • Hub height: 80 m
  • Single wind turbine
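
As a rough illustration only (not taken from the slides), a year-long hourly export of this simulated turbine might be loaded in Python as shown below. The file name and column names are assumptions for illustration.

```python
import pandas as pd

# Hypothetical CSV export of the NREL simulation (file and column names assumed).
df = pd.read_csv("texas_wind_turbine.csv")

# Hourly index for the year-long series (8,760 samples); the start date is arbitrary.
df.index = pd.date_range("2020-01-01", periods=len(df), freq="h")

# Columns assumed to correspond to the five variables considered in this study.
print(df[["power_kw", "wind_speed_ms", "wind_dir_deg",
          "pressure_atm", "air_temp_c"]].describe())
```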

7 of 20

DATA PRE-PROCESSING: VISUAL ANALYSIS


Variables Considered:

  • System Power Generated (kW)
  • Wind Speed (m/s)
  • Wind Direction (deg)
  • Pressure (atm)
  • Air Temperature (°C)
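
A minimal sketch of this visual check, assuming the illustrative DataFrame `df` and column names from the loading sketch above:

```python
import matplotlib.pyplot as plt

cols = ["power_kw", "wind_speed_ms", "wind_dir_deg", "pressure_atm", "air_temp_c"]

# One stacked subplot per variable over the hourly index.
fig, axes = plt.subplots(len(cols), 1, figsize=(10, 12), sharex=True)
for ax, col in zip(axes, cols):
    ax.plot(df.index, df[col], linewidth=0.5)
    ax.set_ylabel(col)
axes[-1].set_xlabel("Time (hours)")
plt.tight_layout()
plt.show()
```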

8 of 20

DATA NORMALIZATION: CORRELATION ANALYSIS

Size of Correlation               Interpretation
0.90 to 1.00 | −0.90 to −1.00     Very high positive or negative correlation
0.70 to 0.90 | −0.70 to −0.90     High positive or negative correlation
0.50 to 0.70 | −0.50 to −0.70     Moderate positive or negative correlation
0.30 to 0.50 | −0.30 to −0.50     Low positive or negative correlation
0.00 to 0.30 |  0.00 to −0.30     Negligible/insignificant correlation
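
A short sketch of how min-max normalization and the correlation screening could be done, again using the assumed `df` and column names from the earlier sketches; the interpretation bands simply mirror the table above:

```python
# Min-max normalization to [0, 1], a common step before neural-network training.
df_norm = (df - df.min()) / (df.max() - df.min())

# Pearson correlation of every variable with system power generated.
corr = df_norm.corr()["power_kw"].drop("power_kw")

def interpret(r: float) -> str:
    """Map |r| onto the bands in the table above."""
    a = abs(r)
    if a >= 0.90:
        return "very high"
    if a >= 0.70:
        return "high"
    if a >= 0.50:
        return "moderate"
    if a >= 0.30:
        return "low"
    return "negligible"

for name, r in corr.items():
    print(f"{name}: r = {r:+.2f} ({interpret(r)})")
```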

9 of 20

ARCHITECTURE

Architecture diagrams shown on the slide: LSTM, NAR, NARX.

LSTM gates:
  • Input Gate: decides which data components to use for adjusting the network's memory.
  • Forget Gate: evaluates which data pieces are unnecessary for creating the next set of predictions.
  • Output Gate: uses the input and the weighted data memory to determine the network's output.
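
For reference, the standard textbook LSTM gate equations behind these descriptions (a generic formulation, not equations taken from the slides):

```latex
\begin{aligned}
i_t &= \sigma\!\left(W_i [h_{t-1}, x_t] + b_i\right) && \text{(input gate)}\\
f_t &= \sigma\!\left(W_f [h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
o_t &= \sigma\!\left(W_o [h_{t-1}, x_t] + b_o\right) && \text{(output gate)}\\
\tilde{c}_t &= \tanh\!\left(W_c [h_{t-1}, x_t] + b_c\right) && \text{(candidate memory)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell-state update)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state / output)}
\end{aligned}
```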

10 of 20

RESULTS


11 of 20

LSTM TRAINING

  • Training set: 75% of the data (6570 samples)
  • Loss function: RMSE (Root Mean Square Error), which measures the error of an individual prediction. Example: the network guesses value X while the actual value is Y; RMSE aggregates these differences over the training data.
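
A minimal NumPy sketch of the 75/25 chronological split and the RMSE loss, assuming `df` from the earlier sketches holds the hourly power column (the column name is an assumption):

```python
import numpy as np

series = df["power_kw"].to_numpy()       # 8,760 hourly power values (assumed column name)

split = int(0.75 * len(series))          # 75% for training, 25% held out for testing
train, test = series[:split], series[split:]

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Mean Square Error: the loss used during LSTM training."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```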

12 of 20

LSTM TESTING AND RESULTS

  • Testing set: 25% of the data (2190 samples)
  • Test MSE: 1.5757
  • The error distribution is right-skewed
  • Test: predicting the next 100 samples
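
A sketch of how the 100-step-ahead test might be run with a small Keras LSTM. The lookback window, layer sizes, and recursive forecasting scheme are assumptions rather than details from the slides, and the fit is shown only schematically; `train` comes from the split sketch above.

```python
import numpy as np
import tensorflow as tf

LOOKBACK = 24  # hypothetical input window length (hours)

# Windowed training pairs: 24 past values -> next value.
X = np.stack([train[i:i + LOOKBACK] for i in range(len(train) - LOOKBACK)])[..., None]
y = train[LOOKBACK:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

# Recursive forecast of the next 100 samples, starting from the last training window.
window = train[-LOOKBACK:].reshape(1, LOOKBACK, 1)
preds = []
for _ in range(100):
    nxt = float(model.predict(window, verbose=0)[0, 0])
    preds.append(nxt)
    window = np.append(window[:, 1:, :], [[[nxt]]], axis=1)
```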

13 of 20

NAR RESULTS

Type         MSE
Training     1.8410×10⁴
Validation   1.9018×10⁴
Testing      2.1093×10⁴

The graph at the top left depicts the targets and outputs for the training, validation, and testing data. The distance between target and output is the error margin, and the response plot traces these values as a line over the data set.
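
For orientation, a rough Python analogue of the NAR idea (the slides do not show the actual implementation): a small feed-forward network predicting the next power value purely from its own lagged values. Lag count, layer size, and the validation fraction are assumptions; `train` comes from the split sketch above.

```python
import numpy as np
import tensorflow as tf

LAGS = 2  # number of past power values used as inputs (assumed)

# Lagged pairs y(t-LAGS..t-1) -> y(t) built from the training series.
X_nar = np.stack([train[i:i + LAGS] for i in range(len(train) - LAGS)])
y_nar = train[LAGS:]

# Small feed-forward net playing the role of the nonlinear autoregressive map.
nar = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LAGS,)),
    tf.keras.layers.Dense(10, activation="tanh"),
    tf.keras.layers.Dense(1),
])
nar.compile(optimizer="adam", loss="mse")
nar.fit(X_nar, y_nar, epochs=20, validation_split=0.15, verbose=0)
```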

14 of 20

NARX RESULTS (WIND SPEED AS PREDICTOR)

Type         MSE
Training     2.8684×10⁻⁴
Validation   3.0183×10⁻⁴
Testing      3.1311×10⁻⁴

This training response plot demonstrates wind speed trained as a predictor with hours as the response. The plot compares the training, validation, and testing targets against their outputs. The overall training yields a significantly low MSE, suggesting an accurate forecast.
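
A companion sketch of a NARX-style setup, extending the NAR sketch above with an exogenous input. Here wind speed is used as the exogenous predictor and generated power as the target, which is one plausible reading of the slide rather than the study's exact configuration; all sizes are assumptions.

```python
import numpy as np
import tensorflow as tf

# Exogenous series aligned with the training portion of the power series.
wind = df["wind_speed_ms"].to_numpy()[:split]

# Inputs: lagged power values concatenated with lagged wind-speed values.
X_narx = np.stack(
    [np.concatenate([train[i:i + LAGS], wind[i:i + LAGS]])
     for i in range(len(train) - LAGS)])
y_narx = train[LAGS:]

narx = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * LAGS,)),
    tf.keras.layers.Dense(10, activation="tanh"),
    tf.keras.layers.Dense(1),
])
narx.compile(optimizer="adam", loss="mse")
narx.fit(X_narx, y_narx, epochs=20, validation_split=0.15, verbose=0)
```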

15 of 20

NARX RESULTS (SYSTEM POWER GENERATED AS PREDICTOR)

Type         MSE
Training     0.0013×10⁻⁴
Validation   0.0013×10⁻⁴
Testing      0.0013×10⁻⁴

Similar to the earlier response plot, this training uses system power generated as the predictor, with hours again set as the response. Compared with the earlier variable, this training also yields a notably low MSE.

Of the five variables, wind speed and system power generated were the two most strongly correlated variables for accurately forecasting the projected energy.

16 of 20

COMPARISON 

Technique   Data set      MSE
LSTM        Training      0.2823
            Testing       1.5757
NAR         Training      1.8410×10⁴
            Validation    1.9018×10⁴
            Testing       2.1093×10⁴
NARX        Training      2.8684×10⁻⁴
            Validation    3.0183×10⁻⁴
            Testing       3.1311×10⁻⁴

NARX has the lowest MSE scores, but it does not account for multivariate data. 

While the LSTM holds the second-smallest MSE (1.5757), it does account for all five variables.

The LSTM's error distribution is also right-skewed, meaning most errors are close to 0; outliers are the main reason its MSE is only the second-best performer.

17 of 20

CONCLUSION

  • In single-variable analysis and prediction, the best performer is the NARX algorithm, as it produces significantly small MSE values.
  • In multivariate analysis and prediction, the LSTM proves the stronger performer, as it yields a relatively small MSE while accounting for all five variables in the study.


18 of 20

FUTURE DIRECTIONS

  • Multivariable analysis
    • While the NAR and LSTM algorithms acknowledge all five variables to be implemented as predictors, NARX performed only single-variable analysis, employing system generated power and wind speed individually as predictors for the response variable.
  • R and RMSE
    • Additional performance measures would allow the three algorithms to be compared in greater depth.
    • The next study should focus on implementing an adjusted R² value (see the sketch after this list).
  • Open & closed loop
    • The LSTM study was stopped before it could be extended to closed- and open-loop forecasting.
    • The forecasting done in this study relies on the network employed in the testing stage to predict the next 100 steps.
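
A small sketch of the extra metrics mentioned above (R², adjusted R², RMSE), reusing `test`, `preds`, and `rmse` from the earlier sketches; treating the five study variables as the predictor count in the adjusted R² is an assumption about how it would be applied.

```python
import numpy as np

def r2_scores(y_true: np.ndarray, y_pred: np.ndarray, n_predictors: int) -> tuple:
    """Return (R^2, adjusted R^2) for a set of predictions."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(y_true)
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    return float(r2), float(r2_adj)

# Example: score the 100-step LSTM forecast against the held-out test values.
actual = test[:100]
forecast = np.asarray(preds)
r2, r2_adj = r2_scores(actual, forecast, n_predictors=5)
print(f"R^2 = {r2:.4f}, adjusted R^2 = {r2_adj:.4f}, RMSE = {rmse(actual, forecast):.4f}")
```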


19 of 20

ACKNOWLEDGMENTS 

  • This work was supported by the National Science Foundation's International Research Experiences for Students (IRES) Site grant (Grant Numbers: OISE #1952490-TAMU, #1952493-NC A&T State, and #1952497-UNLV). Any opinions, findings, conclusions, or recommendations presented are those of the authors and do not necessarily reflect the views of the National Science Foundation. Lastly, the PIs appreciate the work of Dr. Jessica Martone's team from The Mark USA in providing the evaluation data for this project.
  • Dr. Arun Mozhi Devan Panneer Selvam, Department of Electrical and Electronics Engineering at Universiti Teknologi PETRONAS


20 of 20

QUESTIONS?
