# 1 What is ARIMA?

Arima is the easternmost and second largest in area of the three boroughs of Trinidad and Tobago. It is geographically adjacent to – wait, just kidding!

ARIMA stands for Auto-Regressive Integrated Moving Average. It’s a method of modelling time series data for forecasting (i.e., for predicting future points in the series) in which:

• the series’ dependence on its own previous values is accounted for (hence the “auto-regressive” part)
• long-term growth/decline in the data is accounted for by differencing (hence the “integrated” part)
• noise between consecutive time points is accounted for via lagged error terms (hence the “moving average” part)

Just as a reminder, “time series data” = data that is made up of a sequence of data points taken at successive equally spaced points in time.
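In R, this kind of data is usually wrapped in a `ts` object, which records when the series starts and how many observations make up one time period. A tiny sketch with made-up numbers (the sales figures below are purely hypothetical):

```r
# Twelve monthly observations (hypothetical sales figures), starting January 2020
sales <- ts(c(12, 15, 14, 18, 21, 25, 30, 28, 24, 19, 16, 13),
            start = c(2020, 1), frequency = 12)
frequency(sales) # 12 observations per cycle, i.e., monthly data over a year
```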

## 1.1 A little historical background

Many decades ago, in a galaxy not too far away from here, statisticians analyzed time series data without taking into account how ‘nonstationarity’ (read: growth/decline over time) might affect their analyses. Then George E. P. Box and Gwilym Jenkins came along and published a famous monograph called “Time Series Analysis: Forecasting and Control”, in which they showed that nonstationary data could be made stationary (read: steady over time) by “differencing” the series. In this way, they could pull apart a juicy trend at a specific time period from the growth/decline that would be expected anyway, given the nonstationarity of the data.

More specifically, their approach involved modelling a value Y at time point t as a weighted sum of the Y values at previous time points (e.g., t-1, t-2, etc.), plus weighted error terms from previous time points.

The formula itself looks like this:

$Y_t = c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \theta_1 e_{t-1} + \dots + \theta_q e_{t-q} + e_t$

Where each “e” is an error term, the $\phi$ and $\theta$ values are fitted coefficients, and “c” is a constant.

ARIMA models are typically expressed like “ARIMA(p,d,q)”, with the three terms p, d, and q defined as follows:

• p means the number of preceding (“lagged”) Y values that are included in the model, so as to make better predictions based on local periods of growth/decline in our data. This captures the “autoregressive” nature of ARIMA.

• d represents the number of times that the data have to be “differenced” to produce a stationary signal (i.e., a signal that has a constant mean over time). This captures the “integrated” nature of ARIMA. If d=0, this means that our data does not tend to go up/down in the long term (i.e., the series is already “stationary”). In this case, technically you are performing just ARMA, not AR-I-MA. If d is 1, then it means that the data is going up/down roughly linearly. If d is 2, then it means that the data is following a curve, with the trend itself changing over time. More on this below…

• q represents the number of preceding/lagged values for the error term that are added/subtracted to Y. This captures the “moving average” part of ARIMA.
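To get a feel for what these three numbers mean in practice, we can simulate data from a known ARIMA process using R’s built-in `arima.sim()` function. (The coefficient values below are arbitrary, chosen purely for illustration.)

```r
set.seed(1)
# Simulate from an ARIMA(1,1,1) process:
# p=1 (one AR coefficient), d=1 (integrated once), q=1 (one MA coefficient)
simulated <- arima.sim(model = list(order = c(1, 1, 1), ar = 0.5, ma = 0.3),
                       n = 200)
plot(simulated, type = "l") # wanders up/down over time, because d=1 makes it nonstationary
```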

# 2 Our example data: Generating sine waves to play with

In order to visualize how ARIMA works, we’re going to generate some sine waves and run ARIMA on them. Of course, normally you wouldn’t be making predictions about such simple sine waves at future time points. The point here is just to demonstrate what ARIMA does.

Recall from high school math class that this is the equation for a sine wave:

$y(t) = A \sin(2 \pi f t + \phi)$ where:

• y(t) means the value of the wave at time t
• A means the amplitude, or the peak deviation of the function from zero. (Basically, how high or low the wave goes)
• f means the ordinary frequency, or the number of oscillations (cycles) that occur at each second of time. In other words, this is how “wide” or “narrow” the sine wave is.
• Greek letter phi (that rightmost symbol) means the phase, which specifies (in radians) where in its cycle the oscillation is at t = 0. In other words, this refers to what point in the wave we start at, at time point zero.

OK, now let’s generate our own sine waves in R:

#Our equation will be of the form:
#y=a*sin(b*t)

t <- seq(0, 4*pi, length.out=100) # 100 evenly spaced numbers from 0 to 4*pi
a <- 3 # amplitude
b <- 2 # frequency (how tightly packed the waves are)

#Here is a perfect, noiseless plot of the sine wave:

plot(a*sin(b*t),type="l")

But what if we want to add some noise to it? We can add this manually with R’s runif and rnorm functions.

n <- 100 #number of data points

set.seed(42) # set the seed so that random values come out consistently, for the sake of reproducibility of these results

c.unif <- runif(n)  ###here are 100 data points of uniform error
c.norm <- rnorm(n) ### here are 100 data points of  Gaussian/normal error
amp <- 2 ### the amplitude of the noise

plot(a*sin(b*t)+c.unif*amp,type="l") # sine wave with uniform error

plot(a*sin(b*t)+c.norm*amp,type="l") # sine wave with Gaussian/normal error

Another, easier way to add noise in R is by using the jitter function. Note that the factor argument determines how much noise is added.

jittery <- jitter(a*sin(b*t), factor=1000)

plot(jittery, type="l")

In order to demonstrate the beauty of ARIMA, we’re going to create a more complex time series that is made up of two sine waves added together that go up exponentially.

I’ll walk you through this one step at a time.

OK, so here’s our first, simple sine wave. Just for giggles, we’ll add some noise to it, too.

Sine1 <- jitter(30*sin(2*t), factor=200)
plot(t,Sine1,type="l")

Now we’ll make our second sine wave. This one is going to be a little bit different from the first one, though, in that it has a smaller amplitude and a higher frequency. We’ll throw off the phase a bit, too.

Sine2<-jitter(20*sin(5*t+2), factor=200)

plot(t,Sine2,type="l")

Looks different, right? Now let’s add our two sine waves together:

TwoSines<-Sine1+Sine2

plot(t,TwoSines,type="l")

Wow, looks funky, right? You can see that there’s a short-term trend and a longer-term trend involved.

Now let’s make a version of this double-sine-wave where it goes up exponentially.

LineGoingUp<-seq(0.5,50,0.5)
ExponentialLineGoingUp<-(3/5000000)*LineGoingUp^5 # strictly speaking a fifth-power curve, but it gives us the accelerating, exponential-looking growth we want
plot(ExponentialLineGoingUp, type="l") #This is the line that we'll add to our time series to make it go up ever more steeply

TwoSinesGoingUpExponentially<-TwoSines+ExponentialLineGoingUp
plot(t,TwoSinesGoingUpExponentially,type="l")

Now, imagine that these were our data and we wanted to predict future time points in this series. We can see that any analysis is going to have to account for several things…

• the fact that there is a slow cycle of oscillations (Sine1)
• the fact that there is a fast cycle of oscillations (Sine2)
• the fact that there’s noise in the data
• the fact that it goes up over time
• the fact that its growth rate INCREASES over time

# 3 An aside regarding log-adjusting…

Visual inspection of the plot will help you to determine whether or not an “additive model” would describe your data appropriately. If the size of the seasonal and random fluctuations in the time series increases as time goes on, then this indicates that an additive model is NOT appropriate.

For instance, check out the following line:

MultiplicativeTwoSinesGoingUp<-(TwoSines+100)*LineGoingUp
plot(t,MultiplicativeTwoSinesGoingUp,type="l")

You see what the problem is? The seasonal component at the beginning of the series is smaller than the seasonal component later in the series.

To account for this, you’d need to log-transform the data as follows:

LogTransformedSine<-log(MultiplicativeTwoSinesGoingUp)
plot(t,LogTransformedSine,type="l")

That way, your data could be described by an additive model rather than a multiplicative model.
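The reason this works is that the log turns multiplication into addition: a series built as trend × seasonal becomes log(trend) + log(seasonal) after the transform, so the seasonal swings no longer grow along with the trend. A tiny sketch with made-up components:

```r
trend <- 1:100
seasonal <- 1 + 0.5 * sin(2 * pi * (1:100) / 10) # always positive, so the log is defined
multiplicative <- trend * seasonal # seasonal swings grow as the trend grows
additive <- log(multiplicative)    # = log(trend) + log(seasonal): constant-size swings
all.equal(additive, log(trend) + log(seasonal))
```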

# 4 Determining stationarity

So, the first thing to do is to determine if our time series is stationary (i.e., if the mean is generally constant throughout the time series, as opposed to going up or down over time).

First, we’ll do this with a visual inspection. Let’s call up our previous plot.

plot(TwoSinesGoingUpExponentially, type="l")

OK, this doesn’t look stationary at all, as the mean tends to go up over time. We can do one better than a visual inspection, however, and use a formal test to determine stationarity (or lack thereof) in a more empirical way.

## 4.1 Determining stationarity with an augmented Dickey-Fuller test

For this, we can use the augmented Dickey-Fuller (ADF) test, which tests the null hypothesis that the series is non-stationary. This is included in the “tseries” package.

Without going into too much detail, the ADF test determines whether the change in Y can be explained by a lagged value (e.g., the value at a previous time point, $Y_{t-1}$) and by a linear trend. If there is a linear trend but the lagged value cannot explain the change in Y over time, then our data will be deemed non-stationary.

#install.packages("tseries") #if needed
library("tseries")
adf.test(TwoSinesGoingUpExponentially, alternative = "stationary")
##
##  Augmented Dickey-Fuller Test
##
## data:  TwoSinesGoingUpExponentially
## Dickey-Fuller = -0.6673, Lag order = 4, p-value = 0.9707
## alternative hypothesis: stationary

It looks like our p value is above .05, meaning we cannot reject the null hypothesis: our data is indeed NON-stationary. This confirms the results of our visual inspection.

## 4.2 Finding the d value - a.k.a, differencing the data to achieve stationarity

Given that we have non-stationary data, we will need to “difference” the data until we obtain a stationary time series. We can do this with the “diff” function in R.

This basically takes a vector and, for each value in the vector, subtracts the previous value. So if you have:

5 8 1 6 4

… then the “differenced” vector would be:

3 -7 5 -2

Of course, since the first value in the original vector did NOT have a previous number, this one doesn’t get a corresponding value in the new, differenced vector. So, the differenced vector will have one less data point.
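Using the hypothetical vector above, we can check both of these points directly with `diff()`:

```r
x <- c(5, 8, 1, 6, 4)
diff(x)         # each value minus the previous one: 3 -7 5 -2
length(diff(x)) # 4: one fewer data point than the original
```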

Now, our first step will only take what is known as the “first-order difference” – that is, the result of differencing the series just once. In more formal mathematical terms,

$Y_{d_t} = Y_t - Y_{t-1}$

Basically, for each time point in our data, this gives you the change in value from the previous time point. Here’s how we do this in R:

Diff1TwoSinesGoingUpExponentially<-diff(TwoSinesGoingUpExponentially, differences=1)

plot(Diff1TwoSinesGoingUpExponentially, type="l")

Uh-oh… our data don’t look quite stationary yet! See how they seem to go up subtly over time?

Let’s confirm this with an ADF test:

adf.test(Diff1TwoSinesGoingUpExponentially, alternative = "stationary")
##
##  Augmented Dickey-Fuller Test
##
## data:  Diff1TwoSinesGoingUpExponentially
## Dickey-Fuller = -2.8888, Lag order = 4, p-value = 0.2086
## alternative hypothesis: stationary

This lingering nonstationarity is because our original data involved an accelerating curve – that is, the “change of the change” changed! Differencing once only took out one level of change.

Let’s try taking the second-order difference instead. This basically takes our once-differenced data and differences it a second time. Put formally:

$Y_{d2_t} = Y_{d_t} - Y_{d_{t-1}} = (Y_t - Y_{t-1}) - (Y_{t-1} - Y_{t-2})$

In effect, this gives you:

$Y_{d2_t} = Y_t - 2 Y_{t-1} + Y_{t-2}$

In a sense, this gives us a measure of “the change of the change,” similar to the concept of “acceleration” (as opposed to “velocity”). One could even get a little crazy and take a third-order difference (corresponding to what physicists call “jerk”, or a change in acceleration over time), a fourth-order difference, etc.
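We can verify on the same little hypothetical vector from before that differencing twice is the same as differencing the once-differenced series:

```r
x <- c(5, 8, 1, 6, 4)
diff(x, differences = 2) # -10 12 -7
diff(diff(x))            # identical: the "change of the change"
```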

Anyway, let’s get our second order difference:

Diff2TwoSinesGoingUpExponentially<-diff(TwoSinesGoingUpExponentially, differences=2)

plot(Diff2TwoSinesGoingUpExponentially, type="l")

Notice that the data is now stationary — the values don’t go up or down over the long term. Let’s confirm this with the ADF test

adf.test(Diff2TwoSinesGoingUpExponentially, alternative = "stationary")
## Warning in adf.test(Diff2TwoSinesGoingUpExponentially, alternative =
## "stationary"): p-value smaller than printed p-value
##
##  Augmented Dickey-Fuller Test
##
## data:  Diff2TwoSinesGoingUpExponentially
## Dickey-Fuller = -9.9776, Lag order = 4, p-value = 0.01
## alternative hypothesis: stationary

OK, p is way below 0.05. Looks like our data is indeed stationary. Given that we had to difference the data twice, the d value for our ARIMA model is 2.

# 5 Using correlograms and partial correlograms to determine our p and q values

If/once you have a stationary time series, the next step is to select the appropriate ARIMA model. This means finding the most appropriate values for p and q in the ARIMA(p,d,q) model.

(Remember: p refers to how many previous/lagged Y values the model takes into account at each time point, and q refers to how many previous/lagged error values it takes into account.)

To do so, you need to examine the “correlogram” and “partial correlogram” of the stationary time series.

A correlogram shows the AUTOCORRELATION FUNCTION. It’s just like a correlation, except that, rather than correlating two completely different variables, it correlates a variable at time t with that same variable at time t-k.

A partial correlogram is basically the same thing, except that it removes the effect of shorter autocorrelation lags when calculating the correlation at longer lags. To be more precise, the partial correlation at lag k is the autocorrelation between $Y_t$ and $Y_{t-k}$ that is NOT accounted for by the autocorrelations from the 1st to the (k-1)st lags.
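One way to see the difference between the two is with a simulated AR(1) series (the coefficient 0.7 below is arbitrary, chosen purely for illustration): its autocorrelations decay gradually over many lags, while its partial autocorrelations drop to roughly zero after lag 1, because only one lag of “memory” is actually in the process:

```r
set.seed(7)
ar1 <- arima.sim(model = list(ar = 0.7), n = 500) # AR(1) with one lag of "memory"
partials <- pacf(ar1, plot = FALSE)
round(partials$acf[1:3], 2) # lag 1 is near 0.7; lags 2 and 3 are near zero
```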

To plot a correlogram and partial correlogram, we can use the “acf()” and “pacf()” functions in R, respectively. F.Y.I., if you just want the actual values of the autocorrelations and partial autocorrelations without the plot, you can set “plot=FALSE” in the “acf()” and “pacf()” functions.

For the purposes of this demonstration, let’s get the autocorrelations for our twice-differenced, stationary data.

Note that you can specify the maximum number of lags to be shown in the plot by specifying a “lag.max” value:

acf(Diff2TwoSinesGoingUpExponentially, lag.max=30)

The little dotted blue lines mark the significance bounds; any bar that crosses them represents a significant autocorrelation.

In our case, it looks like our time series data repeatedly exceeds these bounds at certain lag points. There’s a recurring pattern involved. Not good!

Let’s check the partial correlogram, too:

pacf(Diff2TwoSinesGoingUpExponentially, lag.max=30)

Again, our data seems to follow a pattern at regular lag intervals. This is a sign that our data involves some kind of seasonal component, which brings us to…

# 6 Decomposing data

## 6.1 Decomposing non-seasonal data

If our data just has a lot of noise, and we want to smooth this out, we can use R’s simple moving average function, SMA(), which is available in the TTR package.

#install.packages("TTR") #if needed
library("TTR")
SmoothedSines<-SMA(Diff2TwoSinesGoingUpExponentially, n=5) #specify averaging over 5 time points
plot(SmoothedSines,type="l")

## 6.2 Decomposing seasonal data.

Decomposing can also allow us to remove seasonal trends in our data. To illustrate why this might be useful:

Imagine that you want to track the amount of food that a restaurant sells over time. Now, if you have hourly data for this, you might find that the numbers will go up and down predictably based on typical mealtimes, e.g., food sales will peak around lunchtime and dinnertime but dip between these two. Removing this seasonal component would allow you to capture relevant, longer-term trends in the data without being thrown off by this predictable variation.

Sometimes our data will have a seasonal component whose frequency is easy to determine. For instance, if we have monthly observations and our natural time period is a year, then the frequency would simply be 12; if we have daily observations and our natural time period is a week, it’d be 7; and so on and so forth.

However, sometimes time series data might not lend itself to such pre-determined, cut-and-dried time periods. If this is the case, then we might try…

### 6.2.1 Determining frequency using Fourier transform

A Fourier Transform is a way of transforming some data from the “time domain” to the “frequency domain.”

To criminally oversimplify things: if you have a wave with frequency 5, then the Fourier-transform will consist of a “blip” at x=5, and little else. If you have some data that is made up of a wave of frequency 5 and a wave of frequency 12, then the Fourier transform will consist of a blip at x=5 and x=12, and little else.
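We can see this “blip” behaviour directly with base R’s `fft()` function, using a made-up wave whose frequency we already know:

```r
samples <- (0:99) / 100             # 100 evenly spaced time points
wave <- sin(2 * pi * 5 * samples)   # a wave that completes exactly 5 cycles
spec <- Mod(fft(wave))              # strength of each frequency component
which.max(spec[2:50])               # the biggest blip sits at frequency 5
```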

A Fourier Transform is not exactly an easy concept to grasp at first, and a thorough explanation is outside of the scope of this chapter. For anybody who wants an extremely accessible introduction, here is the link to a good explanation (warning: the analogy that they use to explain the Fourier Transform involves fruit smoothies, so avoid this link if you’re hungry!):

https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/

The reason that the Fourier Transform is relevant to us is that it can show us what the frequencies of the seasonal components of our data are, so that we can account for these components when we make predictions about it.

To perform a Fourier Transform, we will need to install and load the TSA package (don’t worry, you won’t have to take off your shoes):

#install.packages("TSA") #if needed
library("TSA")

We’re going to get a “periodogram” for our combined sine waves. Basically, this looks at every possible frequency and determines which ones best explain our data.

PGram<-periodogram(Diff2TwoSinesGoingUpExponentially)

See what it does? It basically gives us a little plot showing which frequencies seem to occur in the data.

To convert this into a useful format for our purposes, we will make a little data set from this periodogram and reshuffle it so the two highest values are on top. Then we’ll call for the values from the 1st and 2nd rows of the data set that we’ve created:

PGramAsDataFrame = data.frame(freq=PGram$freq, spec=PGram$spec)
order = PGramAsDataFrame[order(-PGramAsDataFrame$spec),]
top2 = head(order, 2)
top2 # display the 2 highest "power" frequencies
##    freq      spec
## 10 0.10 2817.7623
## 4  0.04  187.7125

Now we need to convert these frequencies to actual time periods by taking the inverse of each frequency.

TimePeriod<-1/top2[2,1]
TimePeriod
## [1] 25

TimePeriod2<-1/top2[1,1]
TimePeriod2
## [1] 10

These values – 25 and 10 – correspond exactly to the original sine waves that we put together. If you recall those plots, the first (slow) sine wave completed a cycle once every 25 time points, and the second (fast) sine wave completed its cycle once every 10 time points.

### 6.2.2 Decomposing data once we know the frequency

OK, now that we’ve determined the frequency of our data, let’s get down to actually decomposing it. To do this in R, we will first need to install and load the ‘tseries’ package:

#install.packages("tseries") #if needed
library('tseries')

First, we need to make our data into a time series object where frequency = 10 (as per the results of our Fourier Transform above) – that is to say, where 10 time periods constitute one cycle.

TwoSinesGoingUpExponentiallyFreq10 <- ts(Diff2TwoSinesGoingUpExponentially, frequency=10)

Now, let’s try out R’s simple “decompose” function, which uses simple moving averages.

RDecomp<-decompose(TwoSinesGoingUpExponentiallyFreq10)
plot(RDecomp)

A slightly more fine-grained method uses a Loess (“LOcal regrESSion”) model:

SineWavesDecomposed <- stl(TwoSinesGoingUpExponentiallyFreq10, s.window="periodic")
plot(SineWavesDecomposed)

You see how this pulls apart the different components of our data? Remember that we originally had two seasonal components: a higher-frequency sine wave and a lower-frequency sine wave. That’s why, after the higher-frequency component is pulled apart, the trend still looks sine-y.

The following code will actually remove that freq=10 seasonal component from the data.
For this, we will need to install the forecast package:

#install.packages("forecast") #if needed
library('forecast')

SineWavesDecomposedSeasonalAdjusted <- seasadj(SineWavesDecomposed)
plot(SineWavesDecomposedSeasonalAdjusted)

This new line has the high-frequency seasonal component removed.

Now let’s take care of that second, lower-frequency seasonal component. To do this, we will now assign the data a frequency of 25 (which we had found using our Fourier Transform above). Again, here “frequency=25” means that 25 time points comprise one cycle.

TwoSinesGoingUpExponentiallyFreq25 <- ts(SineWavesDecomposedSeasonalAdjusted, frequency=25)
SineWavesDecomposed2 <- stl(TwoSinesGoingUpExponentiallyFreq25, s.window="periodic")
plot(SineWavesDecomposed2)

SineWavesDecomposedSeasonalAdjusted2 <- seasadj(SineWavesDecomposed2)
plot(SineWavesDecomposedSeasonalAdjusted2)

Now all that’s left here is the noise, with the seasonal components from the sine waves removed.

If we now look at the autocorrelations and partial autocorrelations, we should see that our values no longer follow a pattern where they cross significance at recurring time lags as before:

acf(as.numeric(SineWavesDecomposedSeasonalAdjusted2), lag.max = 30) #Because we're using a time series object, we have to convert it to a numeric vector first. Otherwise the lags aren't presented as they should be.

pacf(as.numeric(SineWavesDecomposedSeasonalAdjusted2), lag.max = 30)

So what exactly do these plots mean? The maximum significant lag value of the correlogram gives you the possible q values for an ARMA model. For instance, if our maximum value is 3, then an ARMA(0,3) model is possible. The maximum significant lag value of the partial correlogram gives you the possible p values for an ARMA model. For instance, if our maximum value is 3, then an ARMA(3,0) model would also be possible.
There’s a faster way to find the best possible ARIMA model, though, and that’s…

# 7 The Auto-ARIMA function

The auto.arima() function can be used to find the appropriate ARIMA model in a jiffy:

auto.arima(TwoSinesGoingUpExponentially)
## Series: TwoSinesGoingUpExponentially
## ARIMA(2,1,5) with drift
##
## Coefficients:
##          ar1      ar2      ma1     ma2     ma3     ma4     ma5   drift
##       1.5894  -0.9599  -0.7602  0.8890  0.1727  0.0447  0.3596  2.2027
## s.e.  0.0287   0.0276   0.1130  0.1216  0.1392  0.1375  0.0985  1.0999
##
## sigma^2 estimated as 6.261:  log likelihood=-231.75
## AIC=481.5   AICc=483.52   BIC=504.85

The output tells us that the model that best fits our original data is ARIMA(2,1,5). What does this mean?

• 2 tells us that the model takes into account the Y values at 2 lags from a given time point t.

• 1 tells us that the time series is not stationary, so we need to take a first-order difference.

• 5 tells us that the model takes into account the error terms from 5 preceding/lagged time points.

On a more general note, the principle of parsimony would say that you’d want to choose the model with the fewest parameters/specifications. However, note that different criteria can be used to select a model (see the auto.arima() help page). For instance, the “bic” criterion penalises the number of parameters involved.

## 7.1 Forecasting with ARIMA

OK, now to the good part: we can use ARIMA to forecast future time points in a series.

For this, you’d need to first specify an ARIMA model, with a three-number vector corresponding to the p, d, and q values of your model. [Note that we’ll have to specify that we want the arima function from the stats package, and not the arima function from the TSA package.
To do this, we’d write “stats::arima( …” rather than just “arima(…” ]

TwoSinesAsArima<-stats::arima(SineWavesDecomposedSeasonalAdjusted, order=c(2,1,5))

Then you can use the forecast.Arima function in the “forecast” package to make predictions for the next h items in the time series:

SineWaveForecasts <- forecast.Arima(TwoSinesAsArima, h=3)
SineWaveForecasts
##       Point Forecast     Lo 80     Hi 80     Lo 95    Hi 95
## 10.80     -1.8926850 -4.666228 0.8808583 -6.134453 2.349083
## 10.90      1.1180806 -1.656260 3.8924213 -3.124907 5.361069
## 11.00      0.1699921 -2.900124 3.2401083 -4.525345 4.865330

By default, R will spit out the 80% and 95% prediction intervals. However, you can also specify a level of confidence as follows…

SineWaveForecasts <- forecast.Arima(TwoSinesAsArima, h=5, level=c(99.5))

You can conveniently plot these forecasts as follows…

plot.forecast(SineWaveForecasts)

## 7.2 Testing the distribution of errors in your ARIMA model

### 7.2.1 Are successive errors correlated?

Ideally, successive forecast errors should NOT be correlated. We can test this in the following ways…

We can use “$residuals” on our forecast object to call up these errors, and pull up an autocorrelation plot for them as follows:

acf(as.numeric(SineWaveForecasts$residuals), lag.max=20)

If any of the lagged autocorrelations are significant (read: if they cross the blue dotted lines), then we have a significant correlation. Ideally, this wouldn’t be the case!

This can also be tested formally with a Ljung-Box test, as follows:

Box.test(SineWaveForecasts$residuals, lag=20, type="Ljung-Box")
##
##  Box-Ljung test
##
## data:  SineWaveForecasts$residuals
## X-squared = 25.044, df = 20, p-value = 0.1998

Here, p is greater than .05, suggesting that there are NO significant autocorrelations between successive forecast errors.

### 7.2.2 Do our errors follow a normal distribution?

To determine whether the errors in our ARIMA forecast are normally distributed with a mean of zero and constant variance, we can also visualize our forecast errors with a time plot:

plot.ts(SineWaveForecasts$residuals)

This time plot should show that our forecast errors have generally equal variance over time, and that their mean is around zero.

Of course, we can directly calculate the mean rather than trying to eyeball it from the plot:

mean(SineWaveForecasts$residuals)
## [1] 0.3226414

Our mean is close to zero – a good sign!

# 8 Conclusion & Acknowledgments

I hope you’ve learned a thing or two! I sure have, in putting this together. Of course, I couldn’t have done it by myself.

Big shout-outs to Olivia Holmes and Kirk Mansen for their excellent feedback in putting this together.

Also, massive thanks to Alex Demos and Carlos Salas for introducing me to the sublime beauty of R.

Finally, I strongly recommend the following extremely helpful tutorials, on which this chapter was based: