The Hamilton model assumes that the observed values x_t may be thought of as realisations of a process with, say, two 'states', each of which occurs randomly. For example, values of x_t in the first state are drawn from a normal distribution with mean μ_1 and variance σ_1², while in the second they come from a normal distribution with mean μ_2 and variance σ_2². The state which occurs is determined by a third process for which the probability of state one occurring is λ.
Hence the density of x_t will be

    f(x_t) = λ f_1(x_t) + (1 − λ) f_2(x_t),

where f_i(x_t) denotes the normal density with mean μ_i and variance σ_i². Each state of nature is assumed to follow a Markov process, with p_ii denoting the probability of being in state i at time t conditional upon the fact that the process was in state i at time t−1. The model's strength lies in its flexibility, being capable of capturing changes in the variance between state processes as well as changes in the mean. It has been applied with some success to other markets; for example, Engel [1994] uses a two-state model to study the behaviour of exchange rates.

Specifying the model in this manner differs from the well-known ARIMA and ARCH (or GARCH) models (see Bollerslev, Chou & Kroner [1992] for a survey of the latter). In the former, the variance of the process is assumed to be constant, but the expected value of the series follows a memory process; as a result, ARIMA-based models are highly restrictive, and inappropriate if the disturbances are heteroscedastic. In the latter, the mean of the process is assumed to be constant, whereas under Markov regime-switching models this restriction is relaxed.

Hamilton's basic model

Hamilton's basic model supposes that, conditional upon an unobserved state z_t taking the value zero or one, x_t is normally distributed:

    f(x_t | z_t) = (2π(σ² + θ z_t))^(−1/2) exp{−(x_t − μ_0 − μ_1 z_t)² / 2(σ² + θ z_t)}    (6.1)

The state follows a first-order Markov process with transition probabilities P(z_t = 1 | z_{t−1} = 1) = p and P(z_t = 0 | z_{t−1} = 0) = q. It can be shown by substitution that this scheme implies that

    z_t = (1 − q) + (p + q − 1) z_{t−1} + v_t    (6.2)

where v_t is a martingale difference: conditional upon z_{t−1} = 1, v_t = (1 − p) with probability p and v_t = −p with probability (1 − p), while conditional upon z_{t−1} = 0, v_t = −(1 − q) with probability q and v_t = q with probability (1 − q). The intention is that observed returns are generated as

    x_t = μ_0 + μ_1 z_t + ε_t    (6.3)

where, conditional upon z_t, ε_t ~ N(0, σ² + θ z_t). Equation (6.3) shows that the expected values of x_t in the two states are (μ_0, μ_0 + μ_1) respectively, while the variances are (σ², σ² + θ). Thus, from equation (6.2), the following can be derived:

    x_t = μ_0(2 − p − q) + μ_1(1 − q) + (p + q − 1) x_{t−1} + μ_1 v_t + ε_t − (p + q − 1) ε_{t−1},

so that x_t is an ARMA(1,1) process which is covariance stationary, since |p + q − 1| < 1.

Returning to equation (6.3), express it as

    x_t = μ_0 + μ_1(1 − q) + μ_1(p + q − 1) z_{t−1} + e_t    (6.4)

where e_t = μ_1 v_t + ε_t, with E(e_t | z_{t−1}, X_{t−1}) = 0; due to the independence of ε_t, the variance of e_t is a linear function of z_{t−1}.⁴ Combining (6.2) and (6.4) produces a two-equation system very similar to that used in generating the Kalman filter. There is an observation equation (6.4), a state dynamics equation (6.2), and the errors are jointly martingale differences, i.e.

    E(v_t | z_{t−1}, X_{t−1}) = 0 and E(e_t | z_{t−1}, X_{t−1}) = 0.
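The model and its autoregressive state representation can be checked by simulation. The sketch below is illustrative only: the parameter values, sample size and generator seed are assumptions, not taken from the text.

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the text)
p, q = 0.9, 0.8           # P(z_t=1 | z_{t-1}=1) = p, P(z_t=0 | z_{t-1}=0) = q
mu0, mu1 = 0.0, 2.0       # state means: mu0 and mu0 + mu1
sigma2, theta = 1.0, 3.0  # state variances: sigma2 and sigma2 + theta

rng = np.random.default_rng(0)
T = 200_000
z = np.empty(T, dtype=int)
z[0] = 1
for t in range(1, T):
    # first-order Markov chain for the unobserved state
    z[t] = rng.random() < (p if z[t - 1] == 1 else 1 - q)

eps = rng.normal(0.0, np.sqrt(sigma2 + theta * z))  # Var(eps_t | z_t) = sigma2 + theta z_t
x = mu0 + mu1 * z + eps                             # equation (6.3)

# Equation (6.2): v_t = z_t - (1-q) - (p+q-1) z_{t-1} is a martingale difference
v = z[1:] - (1 - q) - (p + q - 1) * z[:-1]
print("E(v_t | z_{t-1}=1) ~", v[z[:-1] == 1].mean())
print("state-1 mean       ~", x[z == 1].mean())   # should be near mu0 + mu1
print("state-0 variance   ~", x[z == 0].var())    # should be near sigma2
```

With a long simulated sample the conditional mean of v_t is close to zero and the state-conditional moments line up with (μ_0, μ_0 + μ_1) and (σ², σ² + θ).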
One difference, however, is that the error terms in both equations have time-varying conditional variances that depend on unobserved quantities, i.e.

    E(v_t² | z_{t−1}) and E(e_t² | z_{t−1}).

Both depend upon z_{t−1}; the Kalman filter allows the conditional variances of the errors to vary in a known way with the past history of x_t, but does not allow them to depend on the past unobserved states.

In the Kalman filter case the likelihood of the data X_T is built up from the sequence of conditional densities f(x_t | X_{t−1}), where X_T = {x_1, x_2, …, x_T}. The same construction is possible here. Suppose that the following is given:

    f(z_{t−1}, z_{t−2} | X_{t−1}).

When t = 1 this can either be set to the unconditional density or estimated: Hamilton [1989] uses the former, whereas in Hamilton [1990] he proceeds under the latter assumption. From the properties of conditional densities,
e.g.

    f(z_t, z_{t−1}, z_{t−2} | X_{t−1}) = f(z_t | z_{t−1}, z_{t−2}, X_{t−1}) f(z_{t−1}, z_{t−2} | X_{t−1}) = f(z_t | z_{t−1}) f(z_{t−1}, z_{t−2} | X_{t−1}),

and summing out z_{t−2} gives

    f(z_t, z_{t−1} | X_{t−1}) = Σ_{z_{t−2}} f(z_t | z_{t−1}) f(z_{t−1}, z_{t−2} | X_{t−1}).    (6.5)

Given this, and recognising that f(x_t | z_t, z_{t−1}, X_{t−1}) will be equation (6.1), the joint density of (z_t, z_{t−1}, x_t) conditional upon X_{t−1},

    f(x_t, z_t, z_{t−1} | X_{t−1}) = f(x_t | z_t, z_{t−1}, X_{t−1}) f(z_t, z_{t−1} | X_{t−1}),    (6.6)

is determined. The density of x_t conditional upon X_{t−1} can then be found by integrating out the states z_t, z_{t−1}; in this case the integration simply involves summation due to the discrete nature of the states, i.e.

    f(x_t | X_{t−1}) = Σ_{z_t} Σ_{z_{t−1}} f(x_t, z_t, z_{t−1} | X_{t−1}).    (6.7)

Of the two densities from equation (6.6),
f(x_t | z_t, z_{t−1}, X_{t−1}) and f(z_t, z_{t−1} | X_{t−1}), the first is found directly from the fact that it is the state-conditional normal density (6.1), while the second is delivered by equation (6.5). To continue the recursion at t + 1, however, f(z_t, z_{t−1} | X_t) is needed generally. This is achieved by using the formula for a conditional density,

    f(z_t, z_{t−1} | X_t) = f(x_t, z_t, z_{t−1} | X_{t−1}) / f(x_t | X_{t−1}),    (6.8)

as all the densities on the right-hand side of equation (6.8) have been previously determined.
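For the two-state case the recursion can be written out directly. The sketch below tracks the joint probabilities f(z_t, z_{t−1} | X_t) as a 2×2 array; the function and variable names are my own, and the starting density is set to the unconditional (stationary) joint distribution, as in Hamilton [1989].

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def hamilton_filter(x, p, q, mu0, mu1, sigma2, theta):
    """Return the sequence f(x_t | X_{t-1}) from equations (6.5)-(6.8)."""
    trans = np.array([[q, 1 - q],      # trans[i, k] = P(z_t = k | z_{t-1} = i)
                      [1 - p, p]])
    pi1 = (1 - q) / (2 - p - q)        # unconditional P(z_t = 1)
    # f(z_0, z_{-1}): unconditional joint density, joint[k, j] = P(z_0=k, z_{-1}=j)
    joint = trans.T * np.array([1 - pi1, pi1])
    means = mu0 + mu1 * np.arange(2)
    vars_ = sigma2 + theta * np.arange(2)
    dens = np.empty(len(x))
    for t, xt in enumerate(x):
        # (6.5): sum out the oldest state, then apply the transition probabilities
        pred = trans.T * joint.sum(axis=1)   # pred[k, i] = f(z_t=k, z_{t-1}=i | X_{t-1})
        # (6.6): multiply by the state-conditional normal density (6.1)
        g = normal_pdf(xt, means, vars_)[:, None] * pred
        # (6.7): integration is just summation over the discrete states
        dens[t] = g.sum()
        # (6.8): condition on x_t; this joint feeds the next iteration of (6.5)
        joint = g / dens[t]
    return dens

dens = hamilton_filter(np.array([0.1, 2.3, 1.9, -0.2]),
                       p=0.9, q=0.8, mu0=0.0, mu1=2.0, sigma2=1.0, theta=3.0)
```

Each pass delivers f(x_t | X_{t−1}), which is exactly the ingredient required to build the likelihood.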
Iteration of equations (6.5)–(6.8) for t = 1, …, T therefore yields the sequence of conditional densities f(x_t | X_{t−1}) (t = 1, …, T). To determine the log likelihood, the joint density of returns is then written as the product of a conditional density f(x_t | X_{t−1}) and a marginal density f(X_{t−1}):

    f(X_t) = f(x_t | X_{t−1}) f(X_{t−1}).

Building this up for all t gives f(X_T) = Π_{t=1}^{T} f(x_t | X_{t−1}), making the log likelihood of the sample

    L = Σ_{t=1}^{T} log f(x_t | X_{t−1}),

and this may be maximised with respect to the unknown parameters (p, q, μ_0, μ_1, σ², θ).
____________________________
⁴ f(z_t | z_{t−1}, z_{t−2}, X_{t−1}) = f(z_t | z_{t−1}), due to independence and the first-order Markov process assumed for the states, under which z_t depends upon z_{t−1} alone. The f(·) is used to indicate both the density of the continuous variable x_t and the probability function of the discrete random variable z_t.