
Books in the series Springer Series in Statistics

  • by Peter McCullagh
    114,00 €

  • by Jon A. Wellner
    124,00 €

    This book provides an account of weak convergence theory, empirical processes, and their application to a wide variety of problems in statistics. The first part of the book presents a thorough treatment of stochastic convergence in its various forms. Part 2 brings together the theory of empirical processes in a form accessible to statisticians and probabilists. In Part 3, the authors cover a range of applications in statistics including rates of convergence of estimators; limit theorems for M- and Z-estimators; the bootstrap; the functional delta-method and semiparametric estimation. Most of the chapters conclude with "problems and complements." Some of these are exercises to help the reader's understanding of the material, whereas others are intended to supplement the text. This second edition includes many of the new developments in the field since publication of the first edition in 1996: Glivenko-Cantelli preservation theorems; new bounds on expectations of suprema of empirical processes; new bounds on covering numbers for various function classes; generic chaining; definitive versions of concentration bounds; and new applications in statistics including penalized M-estimation, the lasso, classification, and support vector machines. The approximately 200 additional pages also round out classical subjects, including chapters on weak convergence in Skorokhod space, on stable convergence, and on processes based on pseudo-observations.

  • by Brajendra C. Sutradhar
    114,00 €

  • by Phillip I. Good
    157,00 €

    This text is intended to provide a strong theoretical background in testing hypotheses and decision theory for those who will be practicing in the real world or who will be participating in the training of real-world statisticians and biostatisticians. In previous editions of this text, my rhetoric was somewhat tentative. I was saying, in effect, "Gee guys, permutation methods provide a practical real-world alternative to asymptotic parametric approximations. Why not give them a try?" But today, the theory, the software, and the hardware have come together. Distribution-free permutation procedures are the primary method for testing hypotheses. Parametric procedures and the bootstrap are to be reserved for the few situations in which they may be applicable. Four factors have forced this change: 1. Desire by workers in applied fields to use the most powerful statistic for their applications. Such workers may not be aware of the fundamental lemma of Neyman and Pearson, but they know that the statistic they want to use - a complex score or a ratio of scores - does not have an already well-tabulated distribution. 2. Pressure from regulatory agencies for the use of methods that yield exact significance levels, not approximations. 3. A growing recognition that most real-world data are drawn from mixtures of populations. 4. A growing recognition that missing data is inevitable, and balanced designs the exception. Thus, it seems natural that the theory of testing hypotheses and the more general decision theory in which it is embedded should be introduced via permutation tests. On the other hand, certain relatively robust parametric tests such as Student's t continue to play an essential role in statistical practice.
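    The permutation approach this preface advocates is easy to state concretely. Below is a minimal sketch of a two-sample permutation test for a difference in means; the data, the helper name `permutation_test`, and the number of resamples are illustrative choices for this listing, not anything from the book.

    ```python
    import numpy as np

    def permutation_test(x, y, n_perm=10_000, seed=0):
        """Two-sample permutation test for a difference in means.

        Returns the Monte Carlo p-value for the two-sided alternative.
        """
        rng = np.random.default_rng(seed)
        observed = abs(np.mean(x) - np.mean(y))
        pooled = np.concatenate([x, y])
        n = len(x)
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)          # random relabeling of the pooled data
            stat = abs(perm[:n].mean() - perm[n:].mean())
            if stat >= observed:
                count += 1
        # Include the observed labeling itself so the p-value can never be zero
        return (count + 1) / (n_perm + 1)

    x = np.array([12.1, 11.4, 13.0, 12.8, 11.9])
    y = np.array([10.2, 10.9, 9.8, 10.5, 10.1])
    p = permutation_test(x, y)
    ```

    Because the null distribution is built from relabelings of the data themselves, the significance level is exact up to Monte Carlo error, which is the property the preface emphasizes over asymptotic parametric approximations.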

  • by Sadanori Konishi
    86,00 €

    The Akaike information criterion (AIC) derived as an estimator of the Kullback-Leibler information discrepancy provides a useful tool for evaluating statistical models, and numerous successful applications of the AIC have been reported in various fields of natural sciences, social sciences and engineering. One of the main objectives of this book is to provide comprehensive explanations of the concepts and derivations of the AIC and related criteria, including Schwarz's Bayesian information criterion (BIC), together with a wide range of practical examples of model selection and evaluation criteria. A secondary objective is to provide a theoretical basis for the analysis and extension of information criteria via a statistical functional approach. A generalized information criterion (GIC) and a bootstrap information criterion are presented, which provide unified tools for modeling and model evaluation for a diverse range of models, including various types of nonlinear models and model estimation procedures such as robust estimation, the maximum penalized likelihood method and a Bayesian approach.
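    As a concrete illustration of the criterion this blurb describes, the sketch below computes AIC = -2 log L + 2k for two nested Gaussian models and prefers the one with the smaller value; the simulated data and the model pair are invented for this example, not taken from the book.

    ```python
    import numpy as np
    from math import log, pi

    def normal_loglik(data, mu, sigma2):
        # Log-likelihood of an i.i.d. Normal(mu, sigma2) sample
        n = len(data)
        return -0.5 * n * log(2 * pi * sigma2) - 0.5 * np.sum((data - mu) ** 2) / sigma2

    def aic(loglik, k):
        # AIC = -2 * maximized log-likelihood + 2 * (number of free parameters)
        return -2 * loglik + 2 * k

    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=1.0, size=100)

    # Model 0: mean fixed at 0, variance estimated (k = 1)
    s2_0 = np.mean(data ** 2)                       # MLE of sigma^2 under mu = 0
    aic0 = aic(normal_loglik(data, 0.0, s2_0), k=1)

    # Model 1: mean and variance both estimated (k = 2)
    mu_hat = data.mean()
    s2_hat = np.mean((data - mu_hat) ** 2)
    aic1 = aic(normal_loglik(data, mu_hat, s2_hat), k=2)
    # The smaller AIC wins; here the extra mean parameter earns its 2-point penalty
    ```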

  • by Yves Tillé
    94,00 €

  • by Roger B. Nelsen
    147,00 €

    Copulas are functions that join multivariate distribution functions to their one-dimensional margins. The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions. With 116 examples, 54 figures, and 167 exercises, this book is suitable as a text or for self-study. The only prerequisite is an upper level undergraduate course in probability and mathematical statistics, although some familiarity with nonparametric statistics would be useful. Knowledge of measure-theoretic probability is not required. The revised second edition includes new sections on extreme value copulas, tail dependence, and quasi-copulas.
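    The joining of margins described in the first sentence can be sketched in a few lines: sample from a Gaussian copula, then push the uniform margins through the inverse CDFs of any target distributions. All concrete choices below (the correlation 0.7, the exponential and uniform target margins) are illustrative assumptions, not examples from the book.

    ```python
    import numpy as np
    from math import erf, sqrt

    def std_normal_cdf(z):
        # Vectorized standard normal CDF via the error function
        return 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

    rng = np.random.default_rng(0)
    rho, n = 0.7, 5000

    # Step 1: draw correlated bivariate normals
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

    # Step 2: push each margin through its own CDF -> uniform margins;
    # (u, v) is then a sample from the Gaussian copula with parameter rho
    u, v = std_normal_cdf(z[:, 0]), std_normal_cdf(z[:, 1])

    # Step 3: apply inverse CDFs of arbitrary one-dimensional margins,
    # e.g. exponential(1) and uniform(0, 1), to build a dependent pair
    x = -np.log(1.0 - u)   # exponential(1) via inverse-CDF transform
    y = v                  # uniform(0, 1)
    ```

    The margins of `(x, y)` are exactly the chosen one-dimensional distributions, while all the dependence is carried by the copula, which is the separation the book studies.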

  • by Charles F. Manski
    122,00 €

    Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a pre-condition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University.
He is author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.
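    The idea of a set-valued identification region for prediction with missing outcome data can be made concrete with the classic worst-case bounds: when outcomes are known to lie in a bounded interval, imputing all missing values at the two endpoints brackets the population mean using the data alone. The helper below is a hypothetical illustration of that calculation, not code from the book.

    ```python
    import numpy as np

    def worst_case_bounds(y_obs, n_missing, y_min=0.0, y_max=1.0):
        """No-assumption bounds on the population mean E[y] with missing outcomes.

        Every missing value is imputed at y_min for the lower bound and at
        y_max for the upper bound, so the interval holds under any missingness
        mechanism.
        """
        n = len(y_obs) + n_missing
        p_obs = len(y_obs) / n                 # fraction of outcomes observed
        mean_obs = np.mean(y_obs)
        lower = p_obs * mean_obs + (1 - p_obs) * y_min
        upper = p_obs * mean_obs + (1 - p_obs) * y_max
        return lower, upper

    # 80 observed binary outcomes with mean 0.6, 20 missing:
    y_obs = np.array([1.0] * 48 + [0.0] * 32)
    lo, hi = worst_case_bounds(y_obs, n_missing=20)
    ```

    The width of the interval equals the missing fraction times the outcome range, so stronger assumptions (e.g. on the missingness mechanism) are what shrink the region, exactly the progression the book follows.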

  • by Jiming Jiang
    106,00 €

    Over the past decade there has been an explosion of developments in mixed effects models and their applications. This book concentrates on two major classes of mixed effects models, linear mixed models and generalized linear mixed models, with the intention of offering an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The first two chapters are devoted to linear mixed models. We classify linear mixed models as Gaussian (linear) mixed models and non-Gaussian linear mixed models. There have been extensive studies in estimation in Gaussian mixed models as well as tests and confidence intervals. On the other hand, the literature on non-Gaussian linear mixed models is much less extensive, partially because of the difficulties in inference about these models. However, non-Gaussian linear mixed models are important because, in practice, one is never certain that normality holds. This book offers a systematic approach to inference about non-Gaussian linear mixed models. In particular, it has included recently developed methods, such as partially observed information, iterative weighted least squares, and jackknife in the context of mixed models. Other new methods introduced in this book include goodness-of-fit tests, prediction intervals, and mixed model selection. These are, of course, in addition to traditional topics such as maximum likelihood and restricted maximum likelihood in Gaussian mixed models.

  • by Geert Verbeke & Geert Molenberghs
    123,00 €

    This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commercially available packages are discussed as well. Great care has been taken in presenting the data analyses in a software-independent fashion. Geert Verbeke is Assistant Professor at the Biostatistical Centre of the Katholieke Universiteit Leuven in Belgium. He received the B.S. degree in mathematics (1989) from the Katholieke Universiteit Leuven, the M.S. in biostatistics (1992) from the Limburgs Universitair Centrum, and earned a Ph.D. in biostatistics (1995) from the Katholieke Universiteit Leuven. Dr. Verbeke wrote his dissertation, as well as a number of methodological articles, on various aspects of linear mixed models for longitudinal data analysis. He has held visiting positions at the Gerontology Research Center and the Johns Hopkins University. Geert Molenberghs is Assistant Professor of Biostatistics at the Limburgs Universitair Centrum in Belgium. He received the B.S. degree in mathematics (1988) and a Ph.D. in biostatistics (1993) from the Universiteit Antwerpen. Dr.
Molenberghs published methodological work on the analysis of non-response in clinical and epidemiological studies. He serves as an associate editor for Biometrics, Applied Statistics, and Biostatistics, and is an officer of the Belgian Statistical Society. He has held visiting positions at the Harvard School of Public Health.

  • by Michael Wolf, Dimitris N. Politis & Joseph P. Romano
    99,00 €

  • by Noel A. C. Cressie & Timothy R. C. Read
    45,00 €

  • - Methods for the Exploration of Posterior Distributions and Likelihood Functions
    by Martin A. Tanner
    105,00 - 106,00 €

    This book provides a unified introduction to a variety of computational algorithms for Bayesian and likelihood inference. In this third edition, I have attempted to expand the treatment of many of the techniques discussed. I have added some new examples, as well as included recent results. Exercises have been added at the end of each chapter. Prerequisites for this book include an understanding of mathematical statistics at the level of Bickel and Doksum (1977), some understanding of the Bayesian approach as in Box and Tiao (1973), some exposure to statistical models as found in McCullagh and Nelder (1989), and for Section 6.6 some experience with conditional inference at the level of Cox and Snell (1989). I have chosen not to present proofs of convergence or rates of convergence for the Metropolis algorithm or the Gibbs sampler since these may require substantial background in Markov chain theory that is beyond the scope of this book. However, references to these proofs are given. There has been an explosion of papers in the area of Markov chain Monte Carlo in the past ten years. I have attempted to identify key references, though due to the volatility of the field some work may have been missed.
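    For readers who want a concrete picture of the Metropolis algorithm mentioned in this preface, a minimal random-walk sampler looks like the following; the target (an unnormalized standard normal), the step size, and the function names are illustrative assumptions for this listing, not code from the book.

    ```python
    import numpy as np

    def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
        """Random-walk Metropolis sampler for a one-dimensional target density."""
        rng = np.random.default_rng(seed)
        x, lp = x0, log_target(x0)
        samples = np.empty(n_steps)
        for i in range(n_steps):
            prop = x + step * rng.normal()          # symmetric Gaussian proposal
            lp_prop = log_target(prop)
            # Accept with probability min(1, target(prop) / target(x));
            # the symmetric proposal density cancels in the ratio
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
            samples[i] = x
        return samples

    # Target: standard normal; an unnormalized log-density suffices
    draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20_000)
    ```

    After discarding a burn-in, the draws approximate the target; proofs of convergence rates are exactly what the author defers to the Markov chain literature.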

  • by Mark J. Schervish
    115,00 €

    The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.

  • by Samuel Kotz
    65,00 €

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Editors 1993. 600 pp. Softcover. ISBN 0-387-94039-1

  • by Jiming Jiang
    106,00 €

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models. It presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields.

  • by Nicolas Chopin & Omiros Papaspiliopoulos
    54,00 - 79,00 €

    This book provides a general introduction to Sequential Monte Carlo (SMC) methods, also known as particle filters. Applications beyond state-space filtering (such as Bayesian inference and rare-event problems) are also discussed. The book may be used either as a graduate text on Sequential Monte Carlo methods and state-space modeling, or as a general reference work on the area.
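    A bootstrap particle filter of the kind this book introduces can be sketched for a toy linear-Gaussian state-space model; the model parameters, particle count, and variable names below are arbitrary illustrative choices, not material from the book.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy state-space model:
    #   x_t = 0.9 * x_{t-1} + N(0, 0.5^2)   (state transition)
    #   y_t = x_t + N(0, 1)                 (observation)
    T, N = 50, 2000
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + 0.5 * rng.normal()
    y = x_true + rng.normal(size=T)

    # Bootstrap particle filter
    particles = rng.normal(size=N)          # initial particle cloud
    filt_mean = np.zeros(T)
    for t in range(T):
        if t > 0:
            # Propagate every particle through the transition density
            particles = 0.9 * particles + 0.5 * rng.normal(size=N)
        # Weight by the observation likelihood N(y_t; x_t, 1)
        logw = -0.5 * (y[t] - particles) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filt_mean[t] = np.sum(w * particles)   # weighted filtering estimate
        # Multinomial resampling to fight weight degeneracy
        particles = rng.choice(particles, size=N, p=w)
    ```

    The filtered means should track the hidden state more closely than the raw noisy observations, which is the basic payoff of the SMC recursion.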

  • - For Science and Data Science
    by Goran Kauermann
    95,00 €

    This textbook provides a comprehensive introduction to statistical principles, concepts and methods that are essential in modern statistics and data science.

  • - With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis
    by Frank E. Harrell, Jr.
    67,00 - 104,00 €

    Most of the methods in this text apply to all regression models, but special emphasis is given to multiple regression using generalised least squares for longitudinal data, the binary logistic model, models for ordinal responses, parametric survival regression models and the Cox semiparametric survival model.

  • - A General Unifying Theory
    by George Seber
    46,00 - 64,00 €

    This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory.

  • - Volume I: Density Estimation
    by Vincent N. LaRiccia & P.P.B. Eggermont
    174,00 €

    This book deals with parametric and nonparametric density estimation from the maximum (penalized) likelihood point of view, including estimation under constraints.

  • - Strategy, Method and Application
    by Grace Y. Yi
    131,00 - 132,00 €

  • by Anuj Srivastava & Eric P. Klassen
    87,00 - 120,00 €

    This textbook for courses on function data analysis and shape data analysis describes how to define, compare, and mathematically represent shapes, with a focus on statistical modeling and inference.

  • by Karl G. Joreskog, Ulf H. Olsson & Fan Y. Wallentin
    124,00 €

    This book presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis.

  • - Nonparametric Bayesian Estimation
    by Eswar G. Phadia
    106,00 - 107,00 €

    After an overview of different prior processes, the book examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process.

  • by Jan G. De Gooijer
    157,00 €

    This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications.

  • by Matthias Schmid & Gerhard Tutz
    80,00 - 81,00 €

    This book focuses on statistical methods for the analysis of discrete failure times. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale.
