Tools of the Trade: Time series metrics with a naïve approach


This is the eighth post in a series of blog posts using a theme of “Tools of the trade”. The series targets software tools, statistical concepts, data science techniques, or related items. In all cases, the topic will contribute to accomplishing data science tasks. The target audience for the posts is engineers, analysts, and managers who want to build their knowledge and skills in data science, particularly those in the Microsoft Dynamics ecosystem.

This post continues discussions of time series data. The focus of this post is an introduction to time series forecasting metrics and applying them to a naïve forecasting approach. Past posts related to time series are shown below:

What is it?

Time series metrics are used to measure the ‘goodness’ of a time series forecast. While there are many time series metrics, I will introduce two of the most common metrics in this post:

  • Root Mean Square Error (RMSE)
  • Mean Absolute Percentage Error (MAPE)

Both of these metrics are based on the difference between the forecasted value and the actual value (the error).
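Before digging into each one, here is a minimal sketch of both metrics in Python with NumPy. The helper names are my own, not from any particular library:

```python
import numpy as np

def rmse(actual, forecast):
    """Root Mean Square Error: typical error size, in the units of the series."""
    actual, forecast = np.asarray(actual, dtype=float), np.asarray(forecast, dtype=float)
    return np.sqrt(np.mean((forecast - actual) ** 2))

def mape(actual, forecast):
    """Mean Absolute Percentage Error: average error as a percent of the actual value."""
    actual, forecast = np.asarray(actual, dtype=float), np.asarray(forecast, dtype=float)
    return np.mean(np.abs((forecast - actual) / actual)) * 100
```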

RMSE measures the typical magnitude of the error in the units of the time series: it squares the errors, averages them, and takes the square root, so large errors are penalized more heavily. For example, if I’m forecasting the daily low temperature for the next month, RMSE tells me how many degrees I am off, on average, each day. This can be very useful when forecasting values with limited variability, but because RMSE is expressed in the series’ own units, the scale matters: an error of $100 is significant in my homeowner electric bill time series but would be insignificant for a business whose monthly bill runs to thousands of dollars.

As its name implies, MAPE is the average percentage error across all forecasts. In many cases this is exactly what you want, but when the actual values are near zero, MAPE produces misleadingly high percentages. For example, the current month of January in Minnesota has an average low temperature of 6 degrees. Being off on a forecast by 6 degrees would produce a 100% error; that same 6-degree error against an actual low of 60 degrees would be only 10%.
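Plugging the temperature example into the mape sketch above shows the near-zero problem:

```python
# The same 6-degree miss, judged against two different actual values.
print(mape([6], [12]))   # 100.0 -> a 6-degree error on a 6-degree actual
print(mape([60], [66]))  # 10.0  -> the same error on a 60-degree actual
```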

A naïve approach is a forecast built from a simple ‘best guess’ rather than a fitted model.

How do I use it?

Metrics are needed to compare results from different forecast techniques. Future posts will show that there are many different ways to do forecasting.

Using naïve approaches provides a baseline for comparison of other approaches. I also talked about using a naïve approach when discussing classification ML accuracy in https://lakedatainsights.com/2019/12/22/tools-of-the-trade-classification-ml-accuracy/.

Discussion

The first step in time series forecasting is to split the data set into train and test subsets. As I’ve done in previous time series posts, I will use my monthly electric bill as the time series.

Unlike other ML tasks, where this split is done with a randomly selected subset, the temporal nature of time series requires contiguous segments of the data for train and test. For my dataset, I will split so that the first three years are used to create a forecast for the fourth year, and then compare the forecasted values for the fourth year with the actual values. Here’s a plot of the split.
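Here is a sketch of that split, using stand-in monthly data since I’m not publishing the raw bills. The seasonal pattern and the variable names are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Hypothetical four years of monthly electric bills: high in winter, low in summer.
months = pd.date_range("2016-07-31", periods=48, freq="M")
rng = np.random.default_rng(42)
bills = pd.Series(
    150 + 100 * np.cos(2 * np.pi * months.month / 12) + rng.normal(0, 15, 48),
    index=months,
)

# Time series splits must be contiguous: first three years train, fourth year test.
train, test = bills.iloc[:36], bills.iloc[36:]
```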

Train / Test Dataset

The first naïve approach I will try is to simply use the last value of my train dataset as the forecast for each month of my test dataset. For a time series that is stable or trending consistently, this might be a good approach. For a dataset with as much seasonality as mine, it could be off significantly, especially if the last value falls on a peak or in a valley. Here’s what that looks like visually:

With the last value falling in the summer months, which have low electricity usage, this isn’t a very good forecast.
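Continuing the sketch above, that forecast is a single repeated value:

```python
# Repeat the final training observation across every test month.
last_value_forecast = pd.Series(train.iloc[-1], index=test.index)
```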

The second naïve approach is to use the mean of all values in the training set. This is a better approach for this dataset; if your goal were simply to establish a yearly budget with monthly values, it could be sufficient. Here’s what this looks like:

Simple Average
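In the same sketch, the average-value forecast is:

```python
# Repeat the training-set mean across every test month.
average_forecast = pd.Series(train.mean(), index=test.index)
```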

Here are the two metrics for each approach.

Approach        RMSE       MAPE
Last value      $167.83    30.3%
Average value   $119.88    38.6%
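These numbers come from applying the two helpers to each forecast; on my synthetic stand-in data the exact values will differ:

```python
for name, forecast in [("Last value", last_value_forecast),
                       ("Average value", average_forecast)]:
    print(f"{name}: RMSE = {rmse(test, forecast):.2f}, MAPE = {mape(test, forecast):.1f}%")
```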

The metrics and the visuals demonstrate how a single metric can be misleading. The Average value approach clearly looks more accurate and has a significantly better RMSE, yet its MAPE is actually worse than that of the Last value approach. What is going on here? Remember that with a percentage, the actual value is in the denominator. That means the Average value approach has very high percent errors for the summer month predictions, where the actual values are relatively small. The Last value approach happens to work well for those summer months, and that is why its MAPE comes out better.

I’ll use the Average value metrics as a baseline for more sophisticated forecasting techniques in future blog posts.


Picture details:  Winding trail, 10/5/20, Canon PowerShot G3 X, f/4.5, 1/1000, ISO-640