Author Millar, Russell B
Title Maximum Likelihood Estimation and Inference : With Examples in R, SAS and ADMB
Imprint New York : John Wiley & Sons, Incorporated, 2011
©2011
Edition 1st ed
Descript 1 online resource (378 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
Series Statistics in Practice Ser. ; v.110
Note Maximum Likelihood Estimation and Inference: With Examples in R, SAS and ADMB -- Contents -- Preface -- Part I: PRELIMINARIES -- 1 A taste of likelihood -- 1.1 Introduction -- 1.2 Motivating example -- 1.2.1 ML estimation and inference for the binomial -- 1.2.2 Approximate normality versus likelihood ratio -- 1.3 Using SAS, R and ADMB -- 1.3.1 Software resources -- 1.4 Implementation of the motivating example -- 1.4.1 Binomial example in SAS -- 1.4.2 Binomial example in R -- 1.4.3 Binomial example in ADMB -- 1.5 Exercises -- 2 Essential concepts and iid examples -- 2.1 Introduction -- 2.2 Some necessary notation -- 2.2.1 MLEs of functions of the parameters -- 2.3 Interpretation of likelihood -- 2.4 IID examples -- 2.4.1 IID Bernoulli (i.e. binomial) -- 2.4.2 IID normal -- 2.4.3 IID uniform -- 2.4.4 IID Cauchy -- 2.4.5 IID binormal mixture model -- 2.5 Exercises -- Part II: PRAGMATICS -- 3 Hypothesis tests and confidence intervals or regions -- 3.1 Introduction -- 3.2 Approximate normality of MLEs -- 3.2.1 Estimating the variance of θ̂ -- 3.3 Wald tests, confidence intervals and regions -- 3.3.1 Test for a single parameter -- 3.3.2 Test of a function of the parameters -- 3.3.3 Joint test of two or more parameters -- 3.3.4 In R and SAS: Old Faithful revisited -- 3.4 Likelihood ratio tests, confidence intervals and regions -- 3.4.1 Using R and SAS: Another visit to Old Faithful -- 3.5 Likelihood ratio examples -- 3.5.1 LR inference from a two-dimensional contour plot -- 3.5.2 The G-test for contingency tables -- 3.6 Profile likelihood -- 3.6.1 Profile likelihood for Old Faithful -- 3.7 Exercises -- 4 What you really need to know -- 4.1 Introduction -- 4.2 Inference about g(θ) -- 4.2.1 The delta method -- 4.2.2 The delta method applied to MLEs -- 4.2.3 The delta method using R, SAS and ADMB -- 4.2.4 Delta method examples
4.3 Wald statistics - quick and dirty? -- 4.3.1 Wald versus likelihood ratio revisited -- 4.3.2 Pragmatic considerations -- 4.4 Model selection -- 4.4.1 AIC -- 4.5 Bootstrapping -- 4.5.1 Bootstrap simulation -- 4.5.2 Bootstrap confidence intervals -- 4.5.3 Bootstrap estimate of variance -- 4.5.4 Bootstrapping test statistics -- 4.5.5 Bootstrap pragmatics -- 4.5.6 Bootstrapping Old Faithful -- 4.5.7 How many bootstrap simulations is enough? -- 4.6 Prediction -- 4.6.1 The plug-in approach -- 4.6.2 Predictive likelihood -- 4.6.3 Bayesian prediction -- 4.6.4 Pseudo-Bayesian prediction -- 4.6.5 Bootstrap prediction -- 4.7 Things that can mess you up -- 4.7.1 Multiple maxima of the likelihood -- 4.7.2 Lack of convergence -- 4.7.3 Parameters on the boundary of the parameter space -- 4.7.4 Insufficient sample size -- 4.8 Exercises -- 5 Maximizing the likelihood -- 5.1 Introduction -- 5.2 The Newton-Raphson algorithm -- 5.3 The EM (Expectation-Maximization) algorithm -- 5.3.1 The simple form of the EM algorithm -- 5.3.2 Properties of the EM algorithm -- 5.3.3 Accelerating the EM algorithm -- 5.3.4 Inference -- 5.4 Multi-stage maximization -- 5.4.1 Efficient maximization via profile likelihood -- 5.4.2 Multi-stage optimization -- 5.5 Exercises -- 6 Some widely used applications of maximum likelihood -- 6.1 Introduction -- 6.2 Box-Cox transformations -- 6.2.1 Example: the Box and Cox poison data -- 6.3 Models for survival-time data -- 6.3.1 Notation -- 6.3.2 Accelerated failure-time model -- 6.3.3 Parametric proportional hazards model -- 6.3.4 Cox's proportional hazards model -- 6.3.5 Example in R and SAS: Leukaemia data -- 6.4 Mark-recapture models -- 6.4.1 Hypergeometric likelihood for integer valued N -- 6.4.2 Hypergeometric likelihood for N ∈ R+ -- 6.4.3 Multinomial likelihood -- 6.4.4 Closing remarks -- 6.5 Exercises
7 Generalized linear models and extensions -- 7.1 Introduction -- 7.2 Specification of a GLM -- 7.2.1 Exponential family distribution -- 7.2.2 GLM formulation -- 7.3 Likelihood calculations -- 7.4 Model evaluation -- 7.4.1 Deviance -- 7.4.2 Model selection -- 7.4.3 Residuals -- 7.4.4 Goodness of fit -- 7.5 Case study 1: Logistic regression and inverse prediction in R -- 7.5.1 Size-selectivity modelling in R -- 7.6 Beyond binomial and Poisson models -- 7.6.1 Quasi-likelihood and quasi-AIC -- 7.6.2 Zero inflation and the negative binomial -- 7.7 Case study 2: Multiplicative vs additive models of over-dispersed counts in SAS -- 7.7.1 Background -- 7.7.2 Poisson and quasi-Poisson fits -- 7.7.3 Negative binomial fits -- 7.8 Exercises -- 8 Quasi-likelihood and generalized estimating equations -- 8.1 Introduction -- 8.2 Wedderburn's quasi-likelihood -- 8.2.1 Quasi-likelihood analysis of barley blotch data in R -- 8.3 Generalized estimating equations -- 8.3.1 GEE analysis of multi-centre data in SAS -- 8.4 Exercises -- 9 ML inference in the presence of incidental parameters -- 9.1 Introduction -- 9.1.1 Analysis of paired data: an intuitive use of conditional likelihood -- 9.2 Conditional likelihood -- 9.2.1 Restricted maximum likelihood -- 9.3 Integrated likelihood -- 9.3.1 Justification -- 9.3.2 Uses of integrated likelihood -- 9.4 Exercises -- 10 Latent variable models -- 10.1 Introduction -- 10.2 Developing the likelihood -- 10.3 Software -- 10.3.1 Background -- 10.3.2 The Laplace approximation and Gauss-Hermite quadrature -- 10.3.3 Importance sampling -- 10.3.4 Separability -- 10.3.5 Overview of examples -- 10.4 One-way linear random-effects model -- 10.4.1 SAS -- 10.4.2 R -- 10.4.3 ADMB -- 10.5 Nonlinear mixed-effects model -- 10.5.1 SAS -- 10.5.2 ADMB -- 10.6 Generalized linear mixed-effects model -- 10.6.1 R -- 10.6.2 SAS -- 10.6.3 ADMB
10.6.4 GLMM vs GEE -- 10.7 State-space model for count data -- 10.8 ADMB template files -- 10.8.1 One-way linear random-effects model using REML -- 10.8.2 Nonlinear crossed mixed-effects model -- 10.8.3 Generalized linear mixed model using GREML -- 10.8.4 State-space model for count data -- 10.9 Exercises -- Part III: THEORETICAL FOUNDATIONS -- 11 Cramér-Rao inequality and Fisher information -- 11.1 Introduction -- 11.1.1 Notation -- 11.2 The Cramér-Rao inequality for θ ∈ R -- 11.3 Cramér-Rao inequality for functions of θ -- 11.4 Alternative formulae for I(θ) -- 11.5 The iid data case -- 11.6 The multi-dimensional case, θ ∈ Θ ⊂ R^s -- 11.6.1 Parameter orthogonality -- 11.6.2 Alternative formulae for I(θ) -- 11.6.3 Fisher information for re-parameterized models -- 11.7 Examples of Fisher information calculation -- 11.7.1 Normal(μ, σ²) -- 11.7.2 Exponential family distributions -- 11.7.3 Linear regression model -- 11.7.4 Nonlinear regression model -- 11.7.5 Generalized linear model with canonical link function -- 11.7.6 Gamma(α, β) -- 11.8 Exercises -- 12 Asymptotic theory and approximate normality -- 12.1 Introduction -- 12.2 Consistency and asymptotic normality -- 12.2.1 Asymptotic normality, θ ∈ R -- 12.2.2 Asymptotic normality: θ ∈ R^s -- 12.2.3 Asymptotic normality of g(θ) ∈ R^p -- 12.2.4 Asymptotic normality under model misspecification -- 12.2.5 Asymptotic normality of M-estimators -- 12.2.6 The non-iid case -- 12.3 Approximate normality -- 12.3.1 Estimation of the approximate variance -- 12.3.2 Approximate normality of M-estimators -- 12.4 Wald tests and confidence regions -- 12.4.1 Wald test statistics -- 12.4.2 Wald confidence intervals and regions -- 12.5 Likelihood ratio test statistic -- 12.5.1 Likelihood ratio test: θ ∈ R -- 12.5.2 Likelihood ratio test for θ ∈ R^s, and g(θ) ∈ R^p -- 12.6 Rao-score test statistic -- 12.7 Exercises
13 Tools of the trade -- 13.1 Introduction -- 13.2 Equivalence of tests and confidence intervals -- 13.3 Transformation of variables -- 13.4 Mean and variance conditional identities -- 13.5 Relevant inequalities -- 13.5.1 Jensen's inequality for convex functions -- 13.5.2 Cauchy-Schwarz inequality -- 13.6 Asymptotic probability theory -- 13.6.1 Convergence in distribution and probability -- 13.6.2 Properties -- 13.6.3 Slutsky's theorem -- 13.6.4 Delta theorem -- 13.7 Exercises -- 14 Fundamental paradigms and principles of inference -- 14.1 Introduction -- 14.2 Sufficiency principle -- 14.2.1 Finding sufficient statistics -- 14.2.2 Examples of the sufficiency principle -- 14.3 Conditionality principle -- 14.4 The likelihood principle -- 14.4.1 Relationship with sufficiency and conditionality -- 14.5 Statistical significance versus statistical evidence -- 14.6 Exercises -- 15 Miscellanea -- 15.1 Notation -- 15.2 Acronyms -- 15.3 Do you think like a frequentist or a Bayesian? -- 15.4 Some useful distributions -- 15.4.1 Discrete distributions -- 15.4.2 Continuous distributions -- 15.5 Software extras -- 15.5.1 R function Plkhci for likelihood ratio confidence intervals -- 15.5.2 R function Profile for calculation of profile likelihoods -- 15.5.3 SAS macro Plkhci for likelihood ratio confidence intervals -- 15.5.4 SAS macro Profile for calculation of profile likelihoods -- 15.5.5 SAS macro DeltaMethod for application of the delta method -- 15.6 Automatic differentiation -- Appendix: Partial solutions to selected exercises -- Bibliography -- Index
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material on the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statistical paradigm.
Key features:
- Provides an accessible introduction to pragmatic maximum likelihood modelling.
- Covers more advanced topics, including general forms of latent variable models (including non-linear and non-normal mixed-effects and state-space models) and the use of maximum likelihood variants, such as estimating equations, conditional likelihood, restricted likelihood and integrated likelihood.
- Adopts a practical approach, with a focus on providing the relevant tools required by researchers and practitioners who collect and analyze real data.
- Presents numerous examples and case studies across a wide range of applications, including medicine, biology and ecology, with implementation in R, SAS and/or ADMB.
- Provides all program code and software extensions on a supporting website.
- Confines supporting theory to the final chapters, to maintain a readable and pragmatic focus in the preceding chapters.
This book is not just an accessible and practical text about maximum likelihood; it is a comprehensive guide to modern maximum likelihood estimation and inference. It will be of interest to readers of all levels, from novice to expert, and of great benefit to researchers and to students of statistics from senior undergraduate to graduate level. For use as a course text, exercises are provided at the end of each chapter.
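As a flavour of the book's motivating example (Chapter 1, ML estimation for the binomial), the following minimal R sketch is illustrative only: it is not taken from the book, and the data values (y successes in n trials) are hypothetical.
    y <- 7; n <- 20                       # hypothetical data: 7 successes in 20 trials
    # Binomial negative log-likelihood as a function of the success probability p
    negloglik <- function(p) -dbinom(y, size = n, prob = p, log = TRUE)
    # Maximize the likelihood numerically by minimizing its negative over (0, 1)
    fit <- optimize(negloglik, interval = c(0.001, 0.999))
    fit$minimum                           # MLE of p; analytically this is y/n = 0.35
Here the analytic MLE y/n is recovered numerically; the book's Chapter 1 develops this example further, contrasting Wald (approximate normality) and likelihood ratio inference, with implementations in SAS, R and ADMB.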
Description based on publisher supplied metadata and other sources
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2020. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries
Link Print version: Millar, Russell B. Maximum Likelihood Estimation and Inference : With Examples in R, SAS and ADMB. New York : John Wiley & Sons, Incorporated, c2011. 9780470094822
Subject Estimation theory
Chance -- Mathematical models
Electronic books