Background
We present here a very brief discussion of the specification and estimation of a linear state space model. Those desiring greater detail are directed to Harvey (1989), Hamilton (1994a, Chapter 13; 1994b), and especially the excellent treatment of Koopman, Shephard, and Doornik (1999), whose approach we largely follow.
Specification
A linear state space representation of the dynamics of the $n \times 1$ vector $y_t$ is given by the system of equations:
$y_t = c_t + Z_t \alpha_t + \epsilon_t$ (50.1)
$\alpha_{t+1} = d_t + T_t \alpha_t + v_t$ (50.2)
where $\alpha_t$ is an $m \times 1$ vector of possibly unobserved state variables, where $c_t$, $Z_t$, $d_t$, and $T_t$ are conformable vectors and matrices, and where $\epsilon_t$ and $v_t$ are vectors of mean zero, Gaussian disturbances. Note that the unobserved state vector is assumed to move over time as a first-order vector autoregression.
We will refer to the first set of equations as the “signal” or “observation” equations and the second set as the “state” or “transition” equations. The disturbance vectors $\epsilon_t$ and $v_t$ are assumed to be serially independent, with contemporaneous variance structure:
$\Omega_t = \mathrm{var} \begin{pmatrix} \epsilon_t \\ v_t \end{pmatrix} = \begin{pmatrix} H_t & G_t \\ G_t' & Q_t \end{pmatrix}$ (50.3)
where $H_t$ is an $n \times n$ symmetric variance matrix, $Q_t$ is an $m \times m$ symmetric variance matrix, and $G_t$ is an $n \times m$ matrix of covariances.
Note that the updating equation for the states gives the states in period $t+1$ in terms of the states and errors dated period $t$. This particular timing convention, which follows Koopman, Shephard, and Doornik (1999), has important implications for the interpretation of correlations between errors in the signal and state equations as discussed in “A Note on Correlated Errors”.
In the discussion that follows, we will generalize the specification given in (50.1)–(50.3) by allowing the system matrices and vectors to depend upon observable explanatory variables $X_t$ and unobservable parameters $\theta$. Estimation of the parameters $\theta$ is discussed in “Estimation”.
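As a concrete illustration (not part of the original exposition), the familiar local level model is a minimal special case of (50.1)–(50.2), obtained by setting $c_t = d_t = 0$, $Z_t = T_t = 1$, $H_t = \sigma^2_{\epsilon}$, $Q_t = \sigma^2_{v}$, and $G_t = 0$:

```latex
y_t = \alpha_t + \epsilon_t ,
\qquad
\alpha_{t+1} = \alpha_t + v_t ,
\qquad
\begin{pmatrix} \epsilon_t \\ v_t \end{pmatrix}
\sim N\!\left( 0 ,\;
\begin{pmatrix} \sigma^2_{\epsilon} & 0 \\ 0 & \sigma^2_{v} \end{pmatrix}
\right)
```

Here the single state $\alpha_t$ is an unobserved random-walk level, and the signal is that level observed with noise.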
Filtering
Consider the conditional distribution of the state vector $\alpha_t$ given information available at time $s$. We can define the mean and variance matrix of the conditional distribution as:
$a_{t|s} \equiv E_s(\alpha_t)$ (50.4)
$P_{t|s} \equiv E_s\left[ (\alpha_t - a_{t|s})(\alpha_t - a_{t|s})' \right]$ (50.5)
where the subscript on the expectation operator indicates that expectations are taken using the conditional distribution for that period.
One important conditional distribution is obtained by setting $s = t-1$, so that we obtain the one-step ahead mean $a_{t|t-1}$ and one-step ahead variance $P_{t|t-1}$ of the states $\alpha_t$. Under the Gaussian error assumption, $a_{t|t-1}$ is also the minimum mean square error estimator of $\alpha_t$ and $P_{t|t-1}$ is the mean square error (MSE) of $a_{t|t-1}$. If the normality assumption is dropped, $a_{t|t-1}$ is still the minimum mean square linear estimator of $\alpha_t$.
Given the one-step ahead state conditional mean, we can also form the (linear) minimum MSE one-step ahead estimate of $y_t$:
$\tilde y_{t|t-1} \equiv E_{t-1}(y_t) = c_t + Z_t a_{t|t-1}$ (50.6)
The one-step ahead prediction error is given by,
$e_t = y_t - \tilde y_{t|t-1}$ (50.7)
and the prediction error variance is defined as:
$F_{t|t-1} \equiv \mathrm{var}(e_t) = Z_t P_{t|t-1} Z_t' + H_t$ (50.8)
The Kalman (Bucy) filter is a recursive algorithm for sequentially updating the one-step ahead estimate of the state mean and variance given new information. Details on the recursion are provided in the references above. For our purposes, it is sufficient to note that given initial values for the state mean and covariance, values for the system matrices $c_t$, $Z_t$, $H_t$, $d_t$, $T_t$, $Q_t$, $G_t$, and observations on $y_t$, the Kalman filter may be used to compute one-step ahead estimates of the state and the associated mean square error matrix, $a_{t|t-1}$ and $P_{t|t-1}$, the contemporaneous or filtered state mean and variance, $a_{t|t}$ and $P_{t|t}$, and the one-step ahead prediction, prediction error, and prediction error variance, $\tilde y_{t|t-1}$, $e_t$, and $F_{t|t-1}$. Note that we may also obtain the standardized prediction residual by dividing each element of $e_t$ by the square root of the corresponding diagonal element of $F_{t|t-1}$.
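The recursion can be sketched in a few lines. The following is an illustrative implementation under the timing convention of (50.1)–(50.3), with possibly correlated errors but time-invariant system matrices for simplicity; the function name and interface are our own, not the EViews routine:

```python
import numpy as np

def kalman_filter(y, c, Z, H, d, T, Q, G, a0, P0):
    """One-step ahead Kalman filter for
         y_t     = c + Z a_t + eps_t,   var(eps_t) = H
         a_{t+1} = d + T a_t + v_t,     var(v_t)  = Q,  cov(eps_t, v_t) = G
       with time-invariant system matrices (an illustrative simplification)."""
    a, P = np.asarray(a0, float).copy(), np.asarray(P0, float).copy()
    pred_a, pred_P, errs, Fs = [], [], [], []
    for yt in y:
        e = yt - c - Z @ a                      # prediction error (50.7)
        F = Z @ P @ Z.T + H                     # prediction error variance (50.8)
        Finv = np.linalg.inv(F)
        K = (T @ P @ Z.T + G) @ Finv            # gain, allowing correlated errors
        pred_a.append(a); pred_P.append(P)
        errs.append(e); Fs.append(F)
        a = d + T @ a + K @ e                   # a_{t+1|t}
        P = T @ P @ T.T + Q - K @ F @ K.T       # P_{t+1|t}
    return pred_a, pred_P, errs, Fs
```

Standardized prediction residuals follow by dividing each element of `errs[t]` by the square root of the corresponding diagonal element of `Fs[t]`.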
Fixed-Interval Smoothing
Suppose that we observe the sequence of data up to time period $T$. The process of using this information to form expectations at any time period $t$ up to $T$ is known as fixed-interval smoothing. Although there are a variety of other distinct forms of smoothing (e.g., fixed-point, fixed-lag), we will use the term smoothing to refer to fixed-interval smoothing.
Additional details on the smoothing procedure are provided in the references given above. For now, note that smoothing uses all of the information in the sample to provide smoothed estimates of the states, $\hat\alpha_t \equiv a_{t|T}$, and smoothed estimates of the state variances, $V_t \equiv P_{t|T}$. The matrix $V_t$ may also be interpreted as the MSE of the smoothed state estimate $\hat\alpha_t$.
As with the one-step ahead states and variances above, we may use the smoothed values to form smoothed estimates of the signal variables,
$\hat y_t = c_t + Z_t \hat\alpha_t$ (50.9)
and to compute the variance of the smoothed signal estimates:
$S_t \equiv \mathrm{var}(\hat y_t) = Z_t V_t Z_t'$ (50.10)
Lastly, the smoothing procedure allows us to compute smoothed disturbance estimates, $\hat\epsilon_t$ and $\hat v_t$, and a corresponding smoothed disturbance variance matrix:
$\hat\Omega_t = \mathrm{var}\left( \begin{pmatrix} \epsilon_t \\ v_t \end{pmatrix} \,\middle|\, Y_T \right)$ (50.11)
Dividing the smoothed disturbance estimates by the square roots of the corresponding diagonal elements of the smoothed variance matrix yields the standardized smoothed disturbance estimates $\hat\epsilon_t^{\,*}$ and $\hat v_t^{\,*}$.
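For intuition, a fixed-interval smoother can be sketched in the classical Rauch-Tung-Striebel form. This version assumes uncorrelated signal and state errors ($G_t = 0$) and a time-invariant transition matrix; KSD (1999) and EViews use a more general disturbance-smoothing recursion, so this is only an illustration with names of our own choosing:

```python
import numpy as np

def rts_smoother(filt_a, filt_P, pred_a, pred_P, T):
    """Fixed-interval (Rauch-Tung-Striebel) smoother. Inputs are the filtered
       moments a_{t|t}, P_{t|t} and the predicted moments a_{t+1|t}, P_{t+1|t}
       from a forward filter pass; T is the (time-invariant) transition matrix.
       Assumes uncorrelated signal/state errors (G = 0)."""
    n = len(filt_a)
    sm_a, sm_P = [None] * n, [None] * n
    sm_a[-1], sm_P[-1] = filt_a[-1], filt_P[-1]      # at t = T, smoothed = filtered
    for t in range(n - 2, -1, -1):
        J = filt_P[t] @ T.T @ np.linalg.inv(pred_P[t])             # smoother gain
        sm_a[t] = filt_a[t] + J @ (sm_a[t + 1] - pred_a[t])        # a_{t|T}
        sm_P[t] = filt_P[t] + J @ (sm_P[t + 1] - pred_P[t]) @ J.T  # V_t = P_{t|T}
    return sm_a, sm_P
```

Because each backward step folds in information from later observations, the smoothed variances $V_t$ are never larger than the corresponding filtered variances.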
Forecasting
There are a variety of types of forecasting which may be performed with state space models. These methods differ primarily in the information set used to form the forecasts. We will focus on the three methods that are supported by EViews built-in forecasting routines.
Earlier, we examined the notion of one-step ahead prediction. Consider now the notion of multi-step ahead prediction of observations, in which we take a fixed set of information available at a given period, and forecast several periods ahead. Modifying slightly the expressions in (50.4)–(50.8) yields the n-step ahead state conditional mean and variance:
$a_{t+n|t} \equiv E_t(\alpha_{t+n})$ (50.12)
$P_{t+n|t} \equiv E_t\left[ (\alpha_{t+n} - a_{t+n|t})(\alpha_{t+n} - a_{t+n|t})' \right]$ (50.13)
the n-step ahead forecast of the signal:
$\tilde y_{t+n|t} \equiv E_t(y_{t+n}) = c_{t+n} + Z_{t+n} a_{t+n|t}$ (50.14)
and the corresponding n-step ahead forecast MSE matrix:
$F_{t+n|t} = Z_{t+n} P_{t+n|t} Z_{t+n}' + H_{t+n}$ (50.15)
for $n = 1, 2, \ldots$. As before, $\tilde y_{t+n|t}$ may also be interpreted as the minimum MSE estimate of $y_{t+n}$ based on the information set available at time $t$, and $F_{t+n|t}$ is the MSE of the estimate.
It is worth emphasizing that the definitions given above for the forecast MSE matrices do not account for extra variability introduced in the estimation of any unknown parameters $\theta$. In this setting, the $F_{t+n|t}$ will understate the true variability of the forecast, and should be viewed as being computed conditional on the specific value of the estimated parameters.
It is also worth noting that the n-step ahead forecasts may be computed using a slightly modified version of the basic Kalman recursion (Harvey 1989). To forecast at period $t+n$, simply initialize a Kalman filter at time $t+1$ with the values of the predicted states and state covariances using information at time $t$, and run the filter forward $n-1$ additional periods using no additional signal information. This procedure is repeated for each observation in the forecast sample.
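With no new signal information the Kalman gain term drops out, so the modified recursion reduces to iterating the state equations. A minimal sketch, assuming time-invariant system matrices (interface and names are our own):

```python
import numpy as np

def nstep_forecast(a, P, c, Z, H, d, T, Q, n):
    """n-step ahead forecasts: starting from the predicted state mean and
       variance at the beginning of the forecast window, iterate the state
       recursion with no new signal information (the update term drops out).
       Returns [(signal forecast, forecast MSE) for horizons 1..n]."""
    a, P = np.asarray(a, float).copy(), np.asarray(P, float).copy()
    out = []
    for _ in range(n):
        y_hat = c + Z @ a                  # signal forecast (50.14)
        F = Z @ P @ Z.T + H                # forecast MSE (50.15)
        out.append((y_hat, F))
        a = d + T @ a                      # state mean propagation
        P = T @ P @ T.T + Q                # state variance propagation
    return out
```

Note that the forecast MSE grows with the horizon as state uncertainty accumulates through the $T P T' + Q$ recursion.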
Dynamic Forecasting
The concept of dynamic forecasting should be familiar to you from other EViews estimation objects. In dynamic forecasting, we start at the beginning of the forecast sample, say period $s$, and compute a complete set of n-period ahead forecasts for each period in the forecast interval. Thus, if we wish to start at period $s$ and forecast dynamically to $s+n-1$, we would compute a one-step ahead forecast for $s$, a two-step ahead forecast for $s+1$, and so forth, up to an $n$-step ahead forecast for $s+n-1$. It may be useful to note that as with n-step ahead forecasting, we simply initialize a Kalman filter at time $s$ and run the filter forward $n-1$ additional periods using no additional signal information. For dynamic forecasting, however, only one n-step ahead forecast is required to compute all of the forecast values since the information set is not updated from the beginning of the forecast period.
Smoothed Forecasting
Alternatively, we can compute smoothed forecasts which use all available signal data over the forecast sample. These forward-looking forecasts may be computed by initializing the states at the start of the forecast period, and performing a Kalman smooth over the entire forecast period using all relevant signal data. This technique is useful in settings where information on the entire path of the signals is used to interpolate values throughout the forecast sample.
We make one final comment about the forecasting methods described above. For traditional n-step ahead and dynamic forecasting, the states are typically initialized using the one-step ahead forecasts of the states and variances at the start of the forecast window. For smoothed forecasts, one would generally initialize the forecasts using the corresponding smoothed values of states and variances. There may, however, be situations where you wish to choose a different set of initial values for the forecast filter or smoother. The EViews forecasting routines (described in “State Space Procedures”) provide you with considerable control over these initial settings. Be aware, however, that the interpretation of the forecasts in terms of the available information will change if you choose alternative settings.
Estimation
To implement the Kalman filter and the fixed-interval smoother, we must first replace any unknown elements of the system matrices by their estimates. Under the assumption that the $\epsilon_t$ and $v_t$ are Gaussian, the sample log likelihood:
$\log L(\theta) = -\dfrac{nT}{2}\log 2\pi - \dfrac{1}{2}\sum_{t=1}^{T} \log\left|F_{t|t-1}(\theta)\right| - \dfrac{1}{2}\sum_{t=1}^{T} e_t(\theta)'\, F_{t|t-1}(\theta)^{-1}\, e_t(\theta)$ (50.16)
may be evaluated using the Kalman filter. Using numeric derivatives, standard iterative techniques may be employed to maximize the likelihood with respect to the unknown parameters (see Appendix C. “Estimation and Solution Options”).
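The prediction error decomposition in (50.16) is evaluated in a single forward pass of the filter, accumulating $\log|F_t|$ and the quadratic form in $e_t$ period by period. A hedged sketch (again with our own, time-invariant interface; real implementations use numerically safer determinant and inverse computations):

```python
import numpy as np

def kalman_loglik(y, c, Z, H, d, T, Q, G, a0, P0):
    """Sample Gaussian log likelihood via the prediction error decomposition
       of (50.16): one forward pass of the Kalman filter yields e_t and F_t."""
    a, P = np.asarray(a0, float).copy(), np.asarray(P0, float).copy()
    n = len(c)                                   # number of signal equations
    ll = 0.0
    for yt in y:
        e = yt - c - Z @ a                       # prediction error
        F = Z @ P @ Z.T + H                      # prediction error variance
        Finv = np.linalg.inv(F)
        ll -= 0.5 * (n * np.log(2 * np.pi)
                     + np.log(np.linalg.det(F))
                     + e @ Finv @ e)             # period-t likelihood contribution
        K = (T @ P @ Z.T + G) @ Finv             # filter update
        a = d + T @ a + K @ e
        P = T @ P @ T.T + Q - K @ F @ K.T
    return float(ll)
```

A numerical optimizer can then maximize this quantity over $\theta$ by mapping candidate parameter values into the system matrices before each evaluation.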
Initial Conditions
Evaluation of the Kalman filter, smoother, and forecasting procedures all require that we provide the initial one-step ahead predicted values for the states $a_{1|0}$ and variance matrix $P_{1|0}$. With some stationary models, steady-state conditions allow us to use the system matrices to solve for the values of $a_{1|0}$ and $P_{1|0}$. In other cases, we may have preliminary estimates of $a_{1|0}$, along with measures of uncertainty about those estimates. But in many cases, we may have no information, or diffuse priors, about the initial conditions.
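For a stationary, time-invariant transition, the steady-state values solve $a = d + Ta$ and $P = TPT' + Q$; the latter can be solved with the vectorization identity $\mathrm{vec}(TPT') = (T \otimes T)\,\mathrm{vec}(P)$. A small illustrative sketch (names are our own):

```python
import numpy as np

def steady_state_init(T, Q, d=None):
    """Steady-state initial conditions for a stationary transition equation:
       solves a = d + T a and P = T P T' + Q, the latter via
       vec(P) = (I - T kron T)^{-1} vec(Q). Requires all eigenvalues of T
       strictly inside the unit circle."""
    m = T.shape[0]
    d = np.zeros(m) if d is None else np.asarray(d, float)
    a0 = np.linalg.solve(np.eye(m) - T, d)                  # unconditional mean
    vecP = np.linalg.solve(np.eye(m * m) - np.kron(T, T), Q.reshape(-1))
    return a0, vecP.reshape(m, m)                           # unconditional variance
```

When the transition has a unit root, $I - T \otimes T$ is singular and no steady state exists; this is precisely the situation that calls for user-supplied or diffuse initial conditions.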