Two nice answers were already given, but I'd like to add my two cents.

In the regression case we have some random variables $Y$ and $X_1,\dots,X_k$. The variables have some unknown joint distribution and a complicated covariance structure. We simplify the problem by focusing solely on the conditional distribution, or more precisely on the conditional expectation of $Y$ given the other variables,

$$E(Y \mid X_1,\dots,X_k) = f(X_1,\dots,X_k) = \mu,$$

where $f$ is a function of the predictors that can take different forms (linear, non-linear) depending on the particular regression model, and $\mu$ is the mean of some distribution when thinking of regression models in terms of generalized linear models. In GLMs, $\mu$ can be the location of a Poisson, Binomial, Gamma, etc. With $L_1$-regularized regression it is the location of a Laplace distribution; for a robust model minimizing the Huber loss, the so-called Huber density is used. In the case of quantile regression we focus on another feature of the distribution: we estimate a $\mu$ that is a quantile of the distribution rather than its expected value. So instead of looking at the full joint distribution, we focus on the conditional distribution of $Y$. This simplification is a key feature of regression models. Hayashi, in Chapter 1 of his classic graduate textbook "Econometrics" (2000), lists the assumptions that comprise the classical linear regression model.

I would say that "regression model" is a kind of meta-concept, in the sense that you will not find a definition of "regression model", but rather of more concrete concepts such as "linear regression", "non-linear regression", "robust regression" and so on. Definitions are tools, and essentialism, that is, discussing what the essence of a word is, what it really means, is seldom worthwhile. It doesn't really matter; what matters is which definition is used by the book or paper you are reading at the moment. This is the same way as in mathematics we usually do not define "number", but rather "natural number", "integer", "real number", "p-adic number" and so on, and if somebody wants to include the quaternions among the numbers, so be it!

So, what distinguishes a "regression model" from other kinds of statistical models? Mostly, that there is a response variable, which you want to model as influenced by (or determined by) some set of predictor variables. We are not interested in influence in the other direction, and we are not interested in relationships among the predictor variables. Mostly, we take the predictor variables as given and treat them as constants in the model, not as random variables. The relationship mentioned above can be linear or nonlinear, specified in a parametric or nonparametric way, and so on.

To delineate regression from other models, we had better look at some other terms often taken to denote something different from "regression models", like "errors in variables", where we accept the possibility of measurement errors in the predictor variables. That could well be included in my description of "regression model" above, but it is often taken as an alternative model. Also, what is meant might vary among fields; see the question "What is the difference between conditioning on regressors vs. …". To repeat: what matters is the definition used by the authors you are reading now, and not some metaphysics about what it "really is".
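The point that $\mu$ can be a mean (ordinary regression) or a quantile (quantile regression) comes down to which loss is minimized: squared loss is minimized by the mean, absolute loss by the median (the 0.5 quantile). A toy sketch in plain Python, with made-up data, illustrates this with a brute-force search over candidate values of $\mu$:

```python
# Illustrates that the "feature" of the distribution a regression targets
# is determined by the loss it minimizes: squared loss -> mean,
# absolute loss -> median. The data below are made up for illustration.

y = [1.0, 2.0, 2.5, 3.0, 10.0]  # note the outlier at 10.0

def squared_loss(mu, ys):
    return sum((v - mu) ** 2 for v in ys)

def absolute_loss(mu, ys):
    return sum(abs(v - mu) for v in ys)

def argmin(loss, ys, step=0.001):
    # brute-force grid search between min(ys) and max(ys); fine for a toy example
    mu = min(ys)
    best, best_val = mu, loss(mu, ys)
    while mu <= max(ys):
        val = loss(mu, ys)
        if val < best_val:
            best, best_val = mu, val
        mu += step
    return best

mean_hat = argmin(squared_loss, y)     # close to the mean, 3.7
median_hat = argmin(absolute_loss, y)  # close to the median, 2.5
```

Note how the outlier pulls the squared-loss minimizer (the mean) upward while the absolute-loss minimizer (the median) stays put, which is also the intuition behind robust losses such as Huber's.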