
# How To Tell If Your Neural Net Forecast Is Worth A Damn. Or If You’re Just Blowing Smoke.

## GETTING THE MOST OUT OF NEURAL PROPHET

This is part 2 of what will be a many-part series on how traders with little or no programming experience or formal computer language training can dive right into the deep end of the pool of stock, crypto and Forex time series forecasting with Python and Neural Prophet.

Somewhere down the road I will show you how to integrate your Neural Prophet output into your algorithmic trading strategies and your custom charting software coded in C# (should you so choose).

Just so you know who you’re dealing with: I am not a mathematician nor a data scientist. My favorite music is Bluegrass. Calculus is a scare word. At best, I am two steps ahead of you. I did take calculus my freshman year in college and all I can remember of my formal calculus education is that the professor was absolutely drop-dead gorgeous Miss Mannarino who lectured in a mini skirt. I got an F. I did make a comeback and Aced formal logic the next semester. So, there we are.

In the introduction to this series, *For Traders Who Don’t Code And Don’t Think They Can*, you learned that if you can cut & paste you can run a time series forecast that maps nonlinear functions to approximate the framework’s interpretable decomposition capabilities on synthetic data. Yup. That’s what you did, and you don’t even have to have a clue what any of that means.

This is the simple form of the Neural Prophet engine that does all the computing:

```python
m = NeuralProphet()
metrics = m.fit(df)
future = m.make_future_dataframe(df=df, periods=forecast_days_ahead)
forecast = m.predict(df=future)
```

The rest of the code in that earlier Python script, and indeed most of these forecasting scripts, is only about prepping the data and showing the result. This simplicity and ease of use is why I am focusing on Neural Prophet. It is a professional, industrial grade time series forecaster for the rest of us who do not have data science backgrounds. Neural Prophet is a black box, and that’s OK. We put data in and we get usable forecasts out.

That raises the question: how can we know whether the forecasts we get out of Neural Prophet are any good?

For reference I’m using Marco Peixeiro’s book Time Series Forecasting in Python. Peixeiro uses MAPE which stands for mean absolute percentage error to benchmark all his models. MAPE is an intuitive measure of how much a predicted value deviates from the actual value. The closer to 0 the better. This is the Python function for MAPE that we are going to implement in today’s model.

```python
def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true))
```

MAPE is an excellent benchmarking tool because it’s easy to understand. We can use both its absolute value and its relative value to tell us (1) if the forecast is usable, and (2) if the changes we are making to our model are actual improvements. If MAPE gets closer to 0 after changing the value of an input, then you are improving the model.

On an absolute basis these percentages answer the question of what is a good MAPE for forecasting:

• MAPE less than 5% — you’ve got a usable model.
• MAPE between 5% and 25% — marginally useful model.
• MAPE greater than 25% — forget about it.
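To make these thresholds concrete, here is a tiny worked example. The prices and predictions are made up for illustration; the function is the same MAPE formula shown above:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, returned as a fraction (0.05 == 5%)
    return np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical closing prices vs. model predictions
y_true = np.array([100.0, 102.0, 101.0, 105.0])
y_pred = np.array([101.0, 103.0, 100.0, 104.0])

# Every prediction is off by about 1%, so MAPE lands near 1% — a usable model
print("{:.1%}".format(mape(y_true, y_pred)))  # → 1.0%
```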

Note: For short time frames, minutes especially, MAPE can vary by large amounts from day to day. This variation arises because MAPE is calculated from the mean of the sample data. Volatile tickers, like crypto, can experience large variations in prices (and their mean) between chunks of training data and chunks of test data, and from variations in the initial random seed used by NeuralProphet. I suggest that you run a model several times before abandoning it or making any big changes in the input variables.
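One simple way to follow that advice is to record the MAPE from each run and judge the model on the average rather than any single value. A minimal sketch, using made-up MAPE values (in practice you would collect these by re-fitting the model several times):

```python
import numpy as np

# Hypothetical MAPE values from five runs of the same model
run_mapes = [0.042, 0.051, 0.038, 0.047, 0.044]

# The average smooths out run-to-run noise from random seeds and data splits
print("mean MAPE over runs: {:.1%}".format(np.mean(run_mapes)))
```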

From this point on all my Python scripts are going to focus on cryptocurrencies with data provided from Tiingo. Why? Cryptos trade worldwide 24/7 so you get to try things and get back real time results almost anytime and anywhere, and DOGE has doubled over the last few days.

There is one thing you must do to be able to run today’s script, and one thing I suggest you do if you plan on continuing through this ongoing Neural Prophet series.

If you want to use Google Colab that’s OK too. If you get an error message running the script and there is nothing obvious, then try this: Runtime → Restart and run all. Also, when using Google Colab I suggest that after you make a change you use Runtime → Run all (Ctrl+F9) instead of clicking the black circle arrows that execute each individual cell. That little change you made may ripple through the entire script, producing an unexpected result.

Copy and paste into Jupyter or Colab cells. Watch the indentations. Python needs those. Comment out the `!pip` installs with `#` after the first run.

```
!pip install tiingo
!pip install neuralprophet
!pip install pandas
!pip install numpy
```

Import the libraries

```python
import requests
from datetime import date, datetime, timedelta
import pandas as pd
import numpy as np
from tiingo import TiingoClient
from neuralprophet import NeuralProphet, set_log_level
set_log_level("CRITICAL")
import matplotlib.pyplot as plt
pd.plotting.register_matplotlib_converters()
%matplotlib inline
```

Insert the API key you got from Tiingo between the quotes

```python
config = {'api_key': 'YOUR API KEY', 'session': True}
client = TiingoClient(config)
```

You can change the ticker, the frequency, and the forecast period length.

```python
ticker = 'btcusd'
frequency = '15min'
end_date = date.today()
bars_ahead_forecast = 60
history = client.get_crypto_price_history([ticker],
                                          endDate=end_date,
                                          resampleFreq=frequency)
```

Prep the data into the required `ds`, `y` format.

```python
df_hist = pd.DataFrame.from_dict(history['priceData'])
df = pd.DataFrame(df_hist, columns=['date', 'close'])
df.rename(columns={'date': 'ds', 'close': 'y'}, inplace=True)
df["ds"] = df["ds"].astype("datetime64[ns]")
```
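If you want to see what that prep step does without calling Tiingo, here is a toy stand-in. The `price_data` rows are made up, but the rename and datetime conversion mirror the script above:

```python
import pandas as pd

# Toy stand-in for Tiingo's priceData: a list of dicts (values are made up)
price_data = [
    {'date': '2023-01-01T00:00:00Z', 'close': 16500.0, 'open': 16450.0},
    {'date': '2023-01-01T00:15:00Z', 'close': 16525.0, 'open': 16500.0},
]

# Same prep as the script: keep date/close, rename to ds/y, convert to datetime
df_toy = pd.DataFrame(price_data, columns=['date', 'close'])
df_toy = df_toy.rename(columns={'date': 'ds', 'close': 'y'})
df_toy['ds'] = pd.to_datetime(df_toy['ds']).dt.tz_localize(None)
print(df_toy)
```

Neural Prophet insists on exactly these two column names: `ds` for the timestamp and `y` for the value being forecast.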

The Neural Prophet engine. Visit https://neuralprophet.com/quickstart.html

```python
m = NeuralProphet()
df_train, df_val = m.split_df(df, freq='auto', valid_p=0.1)
metrics = m.fit(df_train, freq='auto', validation_df=df_val)
future = m.make_future_dataframe(df, periods=bars_ahead_forecast,
                                 n_historic_predictions=len(df))
forecast = m.predict(future)
```
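One detail worth knowing: because `n_historic_predictions=len(df)`, the future frame covers every historical bar plus the new forecast bars, which is why the plot at the end shows the model’s fit over history as well as the forecast. A quick sanity check with made-up sizes:

```python
# Hypothetical sizes: 500 historical bars, forecasting 60 bars ahead
n_history = 500
bars_ahead_forecast = 60

# make_future_dataframe with n_historic_predictions=len(df) asks the model
# to predict over all of history plus the forecast horizon
total_rows = n_history + bars_ahead_forecast
print(total_rows, "rows to predict")
```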

MAPE function

```python
# Mean Absolute Percentage Error (MAPE): the closer to 0 the better
def mape(y_a, y_f):
    return np.mean(np.abs((y_a - y_f) / y_a))
```

Display the MAPE

```python
# Display the MAPE for this model run: compare the validation set's actual
# prices against the model's predictions for those same bars
val_forecast = m.predict(df_val)
y_actual = df_val['y'].values
y_forecast = val_forecast['yhat1'].values
print("MEAN ABSOLUTE PERCENTAGE ERROR FOR THIS MODEL: "
      + "{:.1%}".format(mape(y_actual, y_forecast)))
```

Plot the forecast

```python
fig_forecast = m.plot(forecast)
```

These are the results from my last run. It is a usable model.

`MEAN ABSOLUTE PERCENTAGE ERROR FOR THIS MODEL: 4.2%`
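If you want the script to apply the rule-of-thumb thresholds automatically, a small helper could do it. The function name and labels here are mine, not part of Neural Prophet:

```python
def mape_verdict(mape_value):
    # Thresholds follow the rules of thumb given earlier (mape_value
    # is a fraction, so 0.05 == 5%)
    if mape_value < 0.05:
        return "usable"
    if mape_value < 0.25:
        return "marginal"
    return "forget about it"

print(mape_verdict(0.042))  # → usable
```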

I hope this is useful for you. Follow and subscribe to get notified of the next episode.
