* __Generalised additive models__, e.g. `mgcv::gam()`, extend generalised
  linear models to incorporate arbitrary smooth functions. That means you can
  write a formula like `y ~ s(x)` which becomes an equation like
  `y = f(x)` and let `gam()` estimate what that function is (subject to some
  smoothness constraints to make the problem tractable).
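
  A minimal sketch of that interface, fitting a smooth to simulated data
  (the data and seed here are illustrative, not from the chapter):

  ```r
  library(mgcv)

  # Illustrative data: a noisy sine wave
  set.seed(1014)
  df <- data.frame(x = runif(100))
  df$y <- sin(2 * pi * df$x) + rnorm(100, sd = 0.2)

  # s(x) requests a smooth term; gam() estimates the function itself,
  # choosing the degree of smoothness as part of the fit
  mod <- gam(y ~ s(x), data = df)

  # Plot the estimated smooth f(x) with a confidence band
  plot(mod)
  ```
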
* __Penalised linear models__, e.g. `glmnet::glmnet()`, add a penalty term to
  the distance that penalises complex models (as defined by the distance
  between the parameter vector and the origin). This tends to make
  models that generalise better to new datasets from the same population.
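
  A minimal sketch with illustrative simulated data; note that `glmnet()`
  takes a matrix of predictors rather than a formula:

  ```r
  library(glmnet)

  # Illustrative data: only 3 of 20 predictors actually matter
  set.seed(1014)
  x <- matrix(rnorm(100 * 20), nrow = 100)
  y <- x[, 1] + 2 * x[, 2] - x[, 3] + rnorm(100)

  # alpha = 1 is the lasso penalty; alpha = 0 is ridge
  mod <- glmnet(x, y, alpha = 1)

  # Cross-validation picks the penalty strength (lambda) that
  # predicts best on held-out data
  cv <- cv.glmnet(x, y)
  coef(cv, s = "lambda.min")  # most irrelevant coefficients shrink to zero
  ```
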
* __Robust models__, e.g. `MASS::rlm()`, tweak the distance to downweight
  points that are very far away. This makes them less sensitive to the presence
  of outliers, at the cost of being not quite as good when there are no
  outliers.
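
  A short illustrative comparison (with made-up data), showing how a single
  gross outlier drags `lm()` around while `MASS::rlm()` mostly ignores it:

  ```r
  library(MASS)

  # Illustrative data with one gross outlier
  set.seed(1014)
  df <- data.frame(x = 1:20)
  df$y <- 2 * df$x + rnorm(20)
  df$y[20] <- 100

  coef(lm(y ~ x, data = df))   # slope and intercept distorted by the outlier
  coef(rlm(y ~ x, data = df))  # M-estimation downweights it
  ```
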
* __Trees__, e.g. `rpart::rpart()`, attack the problem in a completely different
  way than linear models. They fit a piece-wise constant model, splitting the
  data into progressively smaller and smaller pieces. Trees aren't terribly
  effective by themselves, but they are very powerful when used in aggregate
  by models like __random forests__ (e.g. `randomForest::randomForest()`) or
  __gradient boosting machines__ (e.g. `xgboost::xgboost()`).
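
  An illustrative sketch using the built-in `mtcars` data, fitting a single
  tree and then a random forest on the same predictors:

  ```r
  library(rpart)
  library(randomForest)

  # A single tree: recursive splits give piece-wise constant predictions
  tree <- rpart(mpg ~ wt + hp, data = mtcars)
  tree

  # Averaging many trees usually predicts much better than any one tree
  set.seed(1014)
  rf <- randomForest(mpg ~ wt + hp, data = mtcars)
  rf
  ```
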