Time Series Analysis / James Douglas Hamilton.

Bibliographic Details
Superior document:Title is part of eBook package: De Gruyter Princeton University Press eBook-Package Archive 1927-1999
Author:Hamilton, James Douglas
Place / Publishing House:Princeton, NJ : Princeton University Press, [2020]
©1994
Year of Publication:2020
Language:English
Online Access:https://doi.org/10.1515/9780691218632?locatt=mode:legacy
Physical Description:1 online resource (816 p.)
id 9780691218632
ctrlnum (DE-B1597)567570
(OCoLC)1229161176
collection bib_alma
record_format marc
spelling Hamilton, James Douglas, author. aut http://id.loc.gov/vocabulary/relators/aut
Time Series Analysis / James Douglas Hamilton.
Princeton, NJ : Princeton University Press, [2020]
©1994
1 online resource (816 p.)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
text file PDF rda
restricted access http://purl.org/coar/access_right/c_16ec online access with authorization star
The last decade has brought dramatic changes in the way that researchers analyze economic and financial time series. This book synthesizes these recent advances and makes them accessible to first-year graduate students. James Hamilton provides the first adequate text-book treatments of important innovations such as vector autoregressions, generalized method of moments, the economic and statistical consequences of unit roots, time-varying variances, and nonlinear time series models. In addition, he presents basic tools for analyzing dynamic systems (including linear representations, autocovariance generating functions, spectral analysis, and the Kalman filter) in a way that integrates economic theory with the practical difficulties of analyzing and interpreting real-world data. Time Series Analysis fills an important need for a textbook that integrates economic theory, econometrics, and new results. The book is intended to provide students and researchers with a self-contained survey of time series analysis. It starts from first principles and should be readily accessible to any beginning graduate student, while it is also intended to serve as a reference book for researchers.
Mode of access: Internet via World Wide Web.
In English.
Description based on online resource; title from PDF title page (publisher's Web site, viewed 30 Aug 2021)
Time-series analysis.
BUSINESS & ECONOMICS / Investments & Securities / General. bisacsh
Absolute summability.
Autocovariance.
Bartlett kernel.
Block exogeneity.
Cointegrating vector.
Consumption spending.
Cospectrum.
Dickey-Fuller test.
EM algorithm.
Exchange rates.
Filters.
Fundamental innovation.
Gamma distribution.
Global identification.
Gross national product.
Hessian matrix.
Inequality constraints.
Invertibility.
Jacobian matrix.
Joint density.
Khinchine's theorem.
Kronecker product.
Lagrange multiplier.
Loss function.
Mean-value theorem.
Mixingales.
Monte Carlo method.
Newton-Raphson.
Order in probability.
Orthogonal.
Permanent income.
Quadrature spectrum.
Recessions.
Reduced form.
Sample periodogram.
Stock prices.
Taylor series.
Vech operator.
Title is part of eBook package: De Gruyter Princeton University Press eBook-Package Archive 1927-1999 9783110442496
https://doi.org/10.1515/9780691218632?locatt=mode:legacy
https://www.degruyter.com/isbn/9780691218632
Cover https://www.degruyter.com/cover/covers/9780691218632.jpg
language English
format eBook
author Hamilton, James Douglas,
spellingShingle Hamilton, James Douglas,
Time Series Analysis /
author_facet Hamilton, James Douglas,
author_variant j d h jd jdh
author_role VerfasserIn
author_sort Hamilton, James Douglas,
title Time Series Analysis /
title_full Time Series Analysis / James Douglas Hamilton.
title_fullStr Time Series Analysis / James Douglas Hamilton.
title_full_unstemmed Time Series Analysis / James Douglas Hamilton.
title_auth Time Series Analysis /
title_new Time Series Analysis /
title_sort time series analysis /
publisher Princeton University Press,
publishDate 2020
physical 1 online resource (816 p.)
contents Frontmatter --
Contents --
Preface --
1 Difference Equations --
1.1. First-Order Difference Equations --
1.2. pth-Order Difference Equations --
APPENDIX 1.A. Proofs of Chapter 1 Propositions --
Chapter 1 References --
2 Lag Operators --
2.1. Introduction --
2.2. First-Order Difference Equations --
2.3. Second-Order Difference Equations --
2.4. pth-Order Difference Equations --
2.5. Initial Conditions and Unbounded Sequences --
Chapter 2 References --
3 Stationary ARMA Processes --
3.1. Expectations, Stationarity, and Ergodicity --
3.2. White Noise --
3.3. Moving Average Processes --
3.4. Autoregressive Processes --
3.5. Mixed Autoregressive Moving Average Processes --
3.6. The Autocovariance-Generating Function --
3.7. Invertibility --
APPENDIX 3.A. Convergence Results for Infinite-Order Moving Average Processes --
Chapter 3 Exercises --
Chapter 3 References --
4 Forecasting --
4.1. Principles of Forecasting --
4.2. Forecasts Based on an Infinite Number of Observations --
4.3. Forecasts Based on a Finite Number of Observations --
4.4. The Triangular Factorization of a Positive Definite Symmetric Matrix --
4.5. Updating a Linear Projection --
4.6. Optimal Forecasts for Gaussian Processes --
4.7. Sums of ARMA Processes --
4.8. Wold's Decomposition and the Box-Jenkins Modeling Philosophy --
APPENDIX 4.A. Parallel Between OLS Regression and Linear Projection --
APPENDIX 4.B. Triangular Factorization of the Covariance Matrix for an MA(1) Process --
Chapter 4 Exercises --
Chapter 4 References --
5 Maximum Likelihood Estimation --
5.1. Introduction --
5.2. The Likelihood Function for a Gaussian AR(1) Process --
5.3. The Likelihood Function for a Gaussian AR(p) Process --
5.4. The Likelihood Function for a Gaussian MA(1) Process --
5.5. The Likelihood Function for a Gaussian MA(q) Process --
5.6. The Likelihood Function for a Gaussian ARMA(p, q) Process --
5.7. Numerical Optimization --
5.8. Statistical Inference with Maximum Likelihood Estimation --
5.9. Inequality Constraints --
APPENDIX 5.A. Proofs of Chapter 5 Propositions --
Chapter 5 Exercises --
Chapter 5 References --
6 Spectral Analysis --
6.1. The Population Spectrum --
6.2. The Sample Periodogram --
6.3. Estimating the Population Spectrum --
6.4. Uses of Spectral Analysis --
APPENDIX 6.A. Proofs of Chapter 6 Propositions --
Chapter 6 Exercises --
Chapter 6 References --
7 Asymptotic Distribution Theory --
7.1. Review of Asymptotic Distribution Theory --
7.2. Limit Theorems for Serially Dependent Observations --
APPENDIX 7.A. Proofs of Chapter 7 Propositions --
Chapter 7 Exercises --
Chapter 7 References --
8 Linear Regression Models --
8.1. Review of Ordinary Least Squares with Deterministic Regressors and i.i.d. Gaussian Disturbances --
8.2. Ordinary Least Squares Under More General Conditions --
8.3. Generalized Least Squares --
APPENDIX 8.A. Proofs of Chapter 8 Propositions --
Chapter 8 Exercises --
Chapter 8 References --
9 Linear Systems of Simultaneous Equations --
9.1. Simultaneous Equations Bias --
9.2. Instrumental Variables and Two-Stage Least Squares --
9.3. Identification --
9.4. Full-Information Maximum Likelihood Estimation --
9.5. Estimation Based on the Reduced Form --
9.6. Overview of Simultaneous Equations Bias --
APPENDIX 9.A. Proofs of Chapter 9 Proposition --
Chapter 9 Exercise --
Chapter 9 References --
10 Covariance-Stationary Vector Processes --
10.1. Introduction to Vector Autoregressions --
10.2. Autocovariances and Convergence Results for Vector Processes --
10.3. The Autocovariance-Generating Function for Vector Processes --
10.4. The Spectrum for Vector Processes --
10.5. The Sample Mean of a Vector Process --
APPENDIX 10.A. Proofs of Chapter 10 Propositions --
Chapter 10 Exercises --
Chapter 10 References --
11 Vector Autoregressions --
11.1. Maximum Likelihood Estimation and Hypothesis Testing for an Unrestricted Vector Autoregression --
11.2. Bivariate Granger Causality Tests --
11.3. Maximum Likelihood Estimation of Restricted Vector Autoregressions --
11.4. The Impulse-Response Function --
11.5. Variance Decomposition --
11.6. Vector Autoregressions and Structural Econometric Models --
11.7. Standard Errors for Impulse-Response Functions --
APPENDIX 11.A. Proofs of Chapter 11 Propositions --
APPENDIX 11.B. Calculation of Analytic Derivatives --
Chapter 11 Exercises --
Chapter 11 References --
12 Bayesian Analysis --
12.1. Introduction to Bayesian Analysis --
12.2. Bayesian Analysis of Vector Autoregressions --
12.3. Numerical Bayesian Methods --
APPENDIX 12.A. Proofs of Chapter 12 Propositions --
Chapter 12 Exercise --
Chapter 12 References --
13 The Kalman Filter --
13.1. The State-Space Representation of a Dynamic System --
13.2. Derivation of the Kalman Filter --
13.3. Forecasts Based on the State-Space Representation --
13.4. Maximum Likelihood Estimation --
13.5. The Steady-State Kalman Filter --
13.6. Smoothing --
13.7. Statistical Inference with the Kalman Filter --
13.8. Time-Varying Parameters --
APPENDIX 13.A. Proofs of Chapter 13 Propositions --
Chapter 13 Exercises --
Chapter 13 References --
14 Generalized Method of Moments --
14.1. Estimation by the Generalized Method of Moments --
14.2. Examples --
14.3. Extensions --
14.4. GMM and Maximum Likelihood Estimation --
APPENDIX 14.A. Proof of Chapter 14 Proposition --
Chapter 14 Exercise --
Chapter 14 References --
15 Models of Nonstationary Time Series --
15.1. Introduction --
15.2. Why Linear Time Trends and Unit Roots? --
15.3. Comparison of Trend-Stationary and Unit Root Processes --
15.4. The Meaning of Tests for Unit Roots --
15.5. Other Approaches to Trended Time Series --
APPENDIX 15.A. Derivation of Selected Equations for Chapter 15 --
Chapter 15 References --
16 Processes with Deterministic Time Trends --
16.1. Asymptotic Distribution of OLS Estimates of the Simple Time Trend Model --
16.2. Hypothesis Testing for the Simple Time Trend Model --
16.3. Asymptotic Inference for an Autoregressive Process Around a Deterministic Time Trend --
APPENDIX 16.A. Derivation of Selected Equations for Chapter 16 --
Chapter 16 Exercises --
Chapter 16 References --
17 Univariate Processes with Unit Roots --
17.1. Introduction --
17.2. Brownian Motion --
17.3. The Functional Central Limit Theorem --
17.4. Asymptotic Properties of a First-Order Autoregression when the True Coefficient Is Unity --
17.5. Asymptotic Results for Unit Root Processes with General Serial Correlation --
17.6. Phillips-Perron Tests for Unit Roots --
17.7. Asymptotic Properties of a pth-Order Autoregression and the Augmented Dickey-Fuller Tests for Unit Roots --
17.8. Other Approaches to Testing for Unit Roots --
17.9. Bayesian Analysis and Unit Roots --
APPENDIX 17.A. Proofs of Chapter 17 Propositions --
Chapter 17 Exercises --
Chapter 17 References --
18 Unit Roots in Multivariate Time Series --
18.1. Asymptotic Results for Nonstationary Vector Processes --
18.2. Vector Autoregressions Containing Unit Roots --
18.3. Spurious Regressions --
APPENDIX 18.A. Proofs of Chapter 18 Propositions --
Chapter 18 Exercises --
Chapter 18 References --
19 Cointegration --
19.1. Introduction --
19.2. Testing the Null Hypothesis --
19.3. Testing Hypotheses About the Cointegrating Vector --
APPENDIX 19.A. Proofs of Chapter 19 Propositions --
Chapter 19 Exercises --
Chapter 19 References --
20 Full-Information Maximum Likelihood Analysis of Cointegrated Systems --
20.1. Canonical Correlation --
20.2. Maximum Likelihood Estimation --
20.3. Hypothesis Testing --
20.4. Overview of Unit Roots-To Difference or Not to Difference? --
APPENDIX 20.A. Proof of Chapter 20 Proposition --
Chapter 20 Exercises --
Chapter 20 References --
21 Time Series Models of Heteroskedasticity --
21.1. Autoregressive Conditional Heteroskedasticity (ARCH) --
21.2. Extensions --
APPENDIX 21.A. Derivation of Selected Equations for Chapter 21 --
Chapter 21 References --
22 Modeling Time Series with Changes in Regime --
22.1. Introduction --
22.2. Markov Chains --
22.3. Statistical Analysis of i.i.d. Mixture Distributions --
22.4. Time Series Models of Changes in Regime --
APPENDIX 22.A. Derivation of Selected Equations for Chapter 22 --
Chapter 22 Exercise --
Chapter 22 Reference --
A Mathematical Review --
A.1. Trigonometry --
A.2. Complex Numbers --
A.3. Calculus --
A.4. Matrix Algebra --
A.5. Probability and Statistics --
Appendix A References --
B Statistical Tables --
C Answers to Selected Exercises --
D Greek Letters and Mathematical Symbols Used in the Text --
Author Index --
Subject Index
isbn 9780691218632
9783110442496
callnumber-first Q - Science
callnumber-subject QA - Mathematics
callnumber-label QA280
callnumber-sort QA 3280 H264 41994
url https://doi.org/10.1515/9780691218632?locatt=mode:legacy
https://www.degruyter.com/isbn/9780691218632
https://www.degruyter.com/cover/covers/9780691218632.jpg
illustrated Not Illustrated
dewey-hundreds 500 - Science
dewey-tens 510 - Mathematics
dewey-ones 519 - Probabilities & applied mathematics
dewey-full 519.5/5
dewey-sort 3519.5 15
dewey-raw 519.5/5
dewey-search 519.5/5
doi_str_mv 10.1515/9780691218632?locatt=mode:legacy
oclc_num 1229161176
work_keys_str_mv AT hamiltonjamesdouglas timeseriesanalysis
status_str n
ids_txt_mv (DE-B1597)567570
(OCoLC)1229161176
carrierType_str_mv cr
hierarchy_parent_title Title is part of eBook package: De Gruyter Princeton University Press eBook-Package Archive 1927-1999
is_hierarchy_title Time Series Analysis /
container_title Title is part of eBook package: De Gruyter Princeton University Press eBook-Package Archive 1927-1999
_version_ 1770176322244444160
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>13874nam a22011655i 4500</leader><controlfield tag="001">9780691218632</controlfield><controlfield tag="003">DE-B1597</controlfield><controlfield tag="005">20210830012106.0</controlfield><controlfield tag="006">m|||||o||d||||||||</controlfield><controlfield tag="007">cr || ||||||||</controlfield><controlfield tag="008">210830t20201994nju fo d z eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9780691218632</subfield></datafield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1515/9780691218632</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-B1597)567570</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1229161176</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-B1597</subfield><subfield code="b">eng</subfield><subfield code="c">DE-B1597</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="044" ind1=" " ind2=" "><subfield code="a">nju</subfield><subfield code="c">US-NJ</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">QA280</subfield><subfield code="b">.H264 1994</subfield></datafield><datafield tag="072" ind1=" " ind2="7"><subfield code="a">BUS036000</subfield><subfield code="2">bisacsh</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">519.5/5</subfield><subfield code="2">20</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hamilton, James Douglas, </subfield><subfield code="e">author.</subfield><subfield code="4">aut</subfield><subfield code="4">http://id.loc.gov/vocabulary/relators/aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Time Series Analysis /</subfield><subfield code="c">James Douglas Hamilton.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Princeton, NJ : </subfield><subfield code="b">Princeton University Press, </subfield><subfield code="c">[2020]</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©1994</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (816 p.)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="347" ind1=" " ind2=" "><subfield code="a">text file</subfield><subfield code="b">PDF</subfield><subfield code="2">rda</subfield></datafield><datafield tag="505" ind1="0" ind2="0"><subfield code="t">Frontmatter -- </subfield><subfield code="t">Contents -- </subfield><subfield code="t">Preface -- </subfield><subfield code="t">1 Difference Equations -- </subfield><subfield code="t">1.1. First-Order Difference Equations -- </subfield><subfield code="t">1.2. pth-Order Difference Equations -- </subfield><subfield code="t">APPENDIX I.A. 
Proofs of Chapter 1 Propositions -- </subfield><subfield code="t">Chapter 1 References -- </subfield><subfield code="t">2 Lag Operators -- </subfield><subfield code="t">2.1. Introduction -- </subfield><subfield code="t">2.2. First-Order Difference Equations -- </subfield><subfield code="t">2.3. Second-Order Difference Equations -- </subfield><subfield code="t">2.4. pth-Order Difference Equations -- </subfield><subfield code="t">2.5. Initial Conditions and Unbounded Sequences -- </subfield><subfield code="t">Chapter 2 References -- </subfield><subfield code="t">3 Stationary ARMA Processes -- </subfield><subfield code="t">3.1. Expectations, Stationarity, and Ergodicity -- </subfield><subfield code="t">3.2. White Noise -- </subfield><subfield code="t">3.3. Moving Average Processes -- </subfield><subfield code="t">3.4. Autoregressive Processes -- </subfield><subfield code="t">3.5. Mixed Autoregressive Moving Average Processes -- </subfield><subfield code="t">3.6. The Autocovariance-Generating Function -- </subfield><subfield code="t">3.7. Invertibility -- </subfield><subfield code="t">APPENDIX 3.A. Convergence Results for Infinite-Order Moving Average Processes -- </subfield><subfield code="t">Chapter 3 Exercises -- </subfield><subfield code="t">Chapter 3 References -- </subfield><subfield code="t">4 Forecasting -- </subfield><subfield code="t">4.1. Principles of Forecasting -- </subfield><subfield code="t">4.2. Forecasts Based on an Infinite Number of Observations -- </subfield><subfield code="t">4.3. Forecasts Based on a Finite Number of Observations -- </subfield><subfield code="t">4.4. The Triangular Factorization of a Positive Definite Symmetric Matrix -- </subfield><subfield code="t">4.5. Updating a Linear Projection -- </subfield><subfield code="t">4.6. Optimal Forecasts for Gaussian Processes -- </subfield><subfield code="t">4.7. Sums of ARM A Processes -- </subfield><subfield code="t">4.8. Wold's Decomposition and the Box-Jenkins Modeling Philosophy -- </subfield><subfield code="t">APPENDIX 4.A. Parallel Between OLS Regression and Linear Projection -- </subfield><subfield code="t">APPENDIX 4.B. Triangular Factorization of the Covariance Matrix for an MA(1) Process -- </subfield><subfield code="t">Chapter 4 Exercises -- </subfield><subfield code="t">Chapter 4 References -- </subfield><subfield code="t">5 Maximum Likelihood Estimation -- </subfield><subfield code="t">5.1. Introduction -- </subfield><subfield code="t">5.2. The Likelihood Function for a Gaussian AR(7J Process -- </subfield><subfield code="t">5.3. The Likelihood Function for a Gaussian AR(p) Process -- </subfield><subfield code="t">5.4. The Likelihood Function for a Gaussian MA(1) Process -- </subfield><subfield code="t">5.5. The Likelihood Function for a Gaussian MA(q) Process -- </subfield><subfield code="t">5.6. The Likelihood Function for a Gaussian ARMA(p, q) Process -- </subfield><subfield code="t">5.7. Numerical Optimization -- </subfield><subfield code="t">5.8. Statistical Inference with Maximum Likelihood Estimation -- </subfield><subfield code="t">5.9. Inequality Constraints -- </subfield><subfield code="t">APPENDIX 5. A. Proofs of Chapter 5 Propositions -- </subfield><subfield code="t">Chapter 5 Exercises -- </subfield><subfield code="t">Chapter 5 References -- </subfield><subfield code="t">6 Spectral Analysis -- </subfield><subfield code="t">6.1. The Population Spectrum -- </subfield><subfield code="t">6.2. The Sample Periodogram -- </subfield><subfield code="t">6.3. 
Estimating the Population Spectrum -- </subfield><subfield code="t">6.4. Uses of Spectral Analysis -- </subfield><subfield code="t">APPENDIX 6. A. Proofs of Chapter 6 Propositions -- </subfield><subfield code="t">Chapter 6 Exercises -- </subfield><subfield code="t">Chapter 6 References -- </subfield><subfield code="t">7 Asymptotic Distribution Theory -- </subfield><subfield code="t">7.1. Review of Asymptotic Distribution Theory -- </subfield><subfield code="t">7.2. Limit Theorems for Serially Dependent Observations -- </subfield><subfield code="t">APPENDIX 7.A. Proofs of Chapter 7 Propositions -- </subfield><subfield code="t">Chapter 7 Exercises -- </subfield><subfield code="t">Chapter 7 Exercises -- </subfield><subfield code="t">8 Linear Regression Models -- </subfield><subfield code="t">8.1. Review of Ordinary Least Squares with Deterministic Regressors and i.i.d. Gaussian Disturbances -- </subfield><subfield code="t">8.2. Ordinary Least Squares Under More General Conditions -- </subfield><subfield code="t">8.3. Generalized Least Squares -- </subfield><subfield code="t">APPENDIX 8. A. Proofs of Chapter 8 Propositions -- </subfield><subfield code="t">Chapter 8 Exercises -- </subfield><subfield code="t">Chapter 8 References -- </subfield><subfield code="t">9 Linear Systems of Simultaneous Equations -- </subfield><subfield code="t">9.1. Simultaneous Equations Bias -- </subfield><subfield code="t">9.2. Instrumental Variables and Two-Stage Least Squares -- </subfield><subfield code="t">9.3. Identification -- </subfield><subfield code="t">9.4. Full-Information Maximum Likelihood Estimation -- </subfield><subfield code="t">9.5 Estimation Based on the Reduced Form -- </subfield><subfield code="t">9.6. Overview of Simultaneous Equations Bias -- </subfield><subfield code="t">APPENDIX 9.A. Proofs of Chapter 9 Proposition -- </subfield><subfield code="t">Chapter 9 Exercise -- </subfield><subfield code="t">Chapter 9 References -- </subfield><subfield code="t">10 Covariance-Stationary Vector Processes -- </subfield><subfield code="t">10.1. Introduction to Vector Autoregressions -- </subfield><subfield code="t">10.2. Autocovariances and Convergence Results for Vector Processes -- </subfield><subfield code="t">10.3. The Autocovariance-Generating Function for Vector Processes -- </subfield><subfield code="t">10.4. The Spectrum for Vector Processes -- </subfield><subfield code="t">10.5. The Sample Mean of a Vector Process -- </subfield><subfield code="t">APPENDIX 10.A. Proofs of Chapter 10 Propositions -- </subfield><subfield code="t">Chapter 10 Exercises -- </subfield><subfield code="t">Chapter 10 References -- </subfield><subfield code="t">11 Vector Autoregressions -- </subfield><subfield code="t">11.1. Maximum Likelihood Estimation and Hypothesis Testing for an Unrestricted Vector Autoregression -- </subfield><subfield code="t">11.2. Bivariate Granger Causality Tests -- </subfield><subfield code="t">11.3. Maximum Likelihood Estimation of Restricted Vector Autoregressions -- </subfield><subfield code="t">11.4. The Impulse-Response Function -- </subfield><subfield code="t">11.5. Variance Decomposition -- </subfield><subfield code="t">11.6. Vector Autoregressions and Structural Econometric Models -- </subfield><subfield code="t">11.7. Standard Errors for Impulse-Response Functions -- </subfield><subfield code="t">APPENDIX 11. A. Proofs of Chapter 11 Propositions -- </subfield><subfield code="t">APPENDIX 11.B. 
Calculation of Analytic Derivatives -- </subfield><subfield code="t">Chapter 11 Exercises -- </subfield><subfield code="t">Chapter 11 References -- </subfield><subfield code="t">12 Bayesian Analysis -- </subfield><subfield code="t">12.1. Introduction to Bayesian Analysis -- </subfield><subfield code="t">12.2. Bayesian Analysis of Vector Autoregressions -- </subfield><subfield code="t">12.3. Numerical Bayesian Methods -- </subfield><subfield code="t">APPENDIX 12.A. Proofs of Chapter 12 Propositions -- </subfield><subfield code="t">Chapter 12 Exercise -- </subfield><subfield code="t">Chapter 12 References -- </subfield><subfield code="t">13 The Kalman Filter -- </subfield><subfield code="t">13.1. The State-Space Representation of a Dynamic System -- </subfield><subfield code="t">13.2. Derivation of the Kalman Filter -- </subfield><subfield code="t">13.3. Forecasts Based on the State-Space Representation -- </subfield><subfield code="t">13.4. Maximum Likelihood Estimation -- </subfield><subfield code="t">13.5. The Steady-State Kalman Filter -- </subfield><subfield code="t">13.6. Smoothing -- </subfield><subfield code="t">13.7. Statistical Inference with the Kalman Filter -- </subfield><subfield code="t">13.8. Time-Varying Parameters -- </subfield><subfield code="t">APPENDIX 13. A. Proofs of Chapter 13 Propositions -- </subfield><subfield code="t">Chapter 13 Exercises -- </subfield><subfield code="t">Chapter 13 References -- </subfield><subfield code="t">14 Generalized Method of Moments -- </subfield><subfield code="t">14.1. Estimation by the Generalized Method of Moments -- </subfield><subfield code="t">14.2. Examples -- </subfield><subfield code="t">14.3. Extensions -- </subfield><subfield code="t">14.4. GMM and Maximum Likelihood Estimation -- </subfield><subfield code="t">APPENDIX 14. A. Proof of Chapter 14 Proposition -- </subfield><subfield code="t">Chapter 14 Exercise -- </subfield><subfield code="t">Chapter 14 References -- </subfield><subfield code="t">15 Models of Nonstationary Time Series -- </subfield><subfield code="t">15.1. Introduction -- </subfield><subfield code="t">15.2. Why Linear Time Trends and Unit Roots? -- </subfield><subfield code="t">15.3. Comparison of Trend-Stationary and Unit Root Processes -- </subfield><subfield code="t">15.4. The Meaning of Tests for Unit Roots -- </subfield><subfield code="t">15.5. Other Approaches to Trended Time Series -- </subfield><subfield code="t">APPENDIX 15. A. Derivation of Selected Equations for Chapter 15 -- </subfield><subfield code="t">Chapter 15 References -- </subfield><subfield code="t">16 Processes with Deterministic Time Trends -- </subfield><subfield code="t">16.1. Asymptotic Distribution of OLS Estimates of the Simple Time Trend Model -- </subfield><subfield code="t">16.2. Hypothesis Testing for the Simple Time Trend Model -- </subfield><subfield code="t">16.3. Asymptotic Inference for an Autoregressive Process Around a Deterministic Time Trend -- </subfield><subfield code="t">APPENDIX 16. A. Derivation of Selected Equations for Chapter 16 -- </subfield><subfield code="t">Chapter 16 Exercises -- </subfield><subfield code="t">Chapter 16 References -- </subfield><subfield code="t">17 Univariate Processes with Unit Roots -- </subfield><subfield code="t">17.1. Introduction -- </subfield><subfield code="t">17.2. Brownian Motion -- </subfield><subfield code="t">17.3. The Functional Central Limit Theorem -- </subfield><subfield code="t">17.4. 
Asymptotic Properties of a First-Order Autoregression when the True Coefficient Is Unity -- </subfield><subfield code="t">17.5. Asymptotic Results for Unit Root Processes with General Serial Correlation -- </subfield><subfield code="t">17.6. Phillips-Perron Tests for Unit Roots -- </subfield><subfield code="t">17.7. Asymptotic Properties of a pth-Order Autoregression and the Augmented Dickey-Fuller Tests for Unit Roots -- </subfield><subfield code="t">17.8. Other Approaches to Testing for Unit Roots -- </subfield><subfield code="t">17.9. Bayesian Analysis and Unit Roots -- </subfield><subfield code="t">APPENDIX 17.A. Proofs of Chapter 17 Propositions -- </subfield><subfield code="t">Chapter 17 Exercises -- </subfield><subfield code="t">Chapter 17 References -- </subfield><subfield code="t">18 Unit Roots in Multivariate Time Series -- </subfield><subfield code="t">18.1. Asymptotic Results for Nonstationary Vector Processes -- </subfield><subfield code="t">18.2. Vector Autoregressions Containing Unit Roots -- </subfield><subfield code="t">18.3. Spurious Regressions -- </subfield><subfield code="t">APPENDIX 18.A. Proofs of Chapter 18 Propositions -- </subfield><subfield code="t">Chapter 18 Exercises -- </subfield><subfield code="t">Chapter 18 References -- </subfield><subfield code="t">19 Cointegration -- </subfield><subfield code="t">19.1. Introduction -- </subfield><subfield code="t">19.2. Testing the Null Hypothesis -- </subfield><subfield code="t">19.3. Testing Hypotheses About the Cointegrating Vector -- </subfield><subfield code="t">APPENDIX 19. A. Proofs of Chapter 19 Propositions -- </subfield><subfield code="t">Chapter 19 Exercises -- </subfield><subfield code="t">Chapter 19 References -- </subfield><subfield code="t">20 Full-Information Maximum Likelihood Analysis of Cointegrated Systems -- </subfield><subfield code="t">20.1. Canonical Correlation -- </subfield><subfield code="t">20.2. Maximum Likelihood Estimation -- </subfield><subfield code="t">20.3. Hypothesis Testing -- </subfield><subfield code="t">20.4. Overview of Unit Roots-To Difference or Not to Difference? -- </subfield><subfield code="t">APPENDIX 20.A. Proof of Chapter 20 Proposition -- </subfield><subfield code="t">Chapter 20 Exercises -- </subfield><subfield code="t">Chapter 20 References -- </subfield><subfield code="t">21 Time Series Models of Heteroskedasticity -- </subfield><subfield code="t">21.1. Autoregressive Conditional Heteroskedasticity (ARCH) -- </subfield><subfield code="t">21.2. Extensions -- </subfield><subfield code="t">APPENDIX 21. A. Derivation of Selected Equations for Chapter 21 -- </subfield><subfield code="t">Chapter 21 References -- </subfield><subfield code="t">22 Modeling Time Series with Changes in Regime -- </subfield><subfield code="t">22.1. Introduction -- </subfield><subfield code="t">22.2. Markov Chains -- </subfield><subfield code="t">22.3. Statistical Analysis of i.i.d. Mixture Distributions -- </subfield><subfield code="t">22.4.</subfield></datafield><datafield tag="505" ind1="0" ind2="0"><subfield code="t">Time Series Models of Changes in Regime -- </subfield><subfield code="t">APPENDIX 22. A. Derivation of Selected Equations for Chapter 22 -- </subfield><subfield code="t">Chapter 22 Exercise -- </subfield><subfield code="t">Chapter 22 Reference -- </subfield><subfield code="t">A Mathematical Review -- </subfield><subfield code="t">A.1. Trigonometry -- </subfield><subfield code="t">A.2. Complex Numbers -- </subfield><subfield code="t">A.3. 
A.3. Calculus -- A.4. Matrix Algebra -- A.5. Probability and Statistics -- Appendix A References -- B Statistical Tables -- C Answers to Selected Exercises -- D Greek Letters and Mathematical Symbols Used in the Text -- Author Index -- Subject Index
Restricted access: online access with authorization (http://purl.org/coar/access_right/c_16ec).
The last decade has brought dramatic changes in the way that researchers analyze economic and financial time series. This book synthesizes these recent advances and makes them accessible to first-year graduate students. James Hamilton provides the first adequate text-book treatments of important innovations such as vector autoregressions, generalized method of moments, the economic and statistical consequences of unit roots, time-varying variances, and nonlinear time series models. In addition, he presents basic tools for analyzing dynamic systems (including linear representations, autocovariance generating functions, spectral analysis, and the Kalman filter) in a way that integrates economic theory with the practical difficulties of analyzing and interpreting real-world data. Time Series Analysis fills an important need for a textbook that integrates economic theory, econometrics, and new results. The book is intended to provide students and researchers with a self-contained survey of time series analysis. It starts from first principles and should be readily accessible to any beginning graduate student, while it is also intended to serve as a reference book for researchers.
Mode of access: Internet via World Wide Web.
In English.
Description based on online resource; title from PDF title page (publisher's Web site, viewed 30 Aug 2021).
Subjects: Time-series analysis. BUSINESS & ECONOMICS / Investments & Securities / General (bisacsh).
Keywords: Absolute summability, Autocovariance, Bartlett kernel, Block exogeneity, Cointegrating vector, Consumption spending, Cospectrum, Dickey-Fuller test, EM algorithm, Exchange rates, Filters, Fundamental innovation, Gamma distribution, Global identification, Gross national product, Hessian matrix, Inequality constraints, Invertibility, Jacobian matrix, Joint density, Khinchine's theorem, Kronecker product, Lagrange multiplier, Loss function, Mean-value theorem, Mixingales, Monte Carlo method, Newton-Raphson, Order in probability, Orthogonal, Permanent income, Quadrature spectrum, Recessions, Reduced form, Sample periodogram, Stock prices, Taylor series, Vech operator.
Title is part of eBook package: De Gruyter Princeton University Press eBook-Package Archive 1927-1999 (ISBN 9783110442496).
Online Access:
https://doi.org/10.1515/9780691218632?locatt=mode:legacy
https://www.degruyter.com/isbn/9780691218632
Cover: https://www.degruyter.com/cover/covers/9780691218632.jpg