Generalized Additive Models in R (2022)

Generalized additive models in R

GAMs in R are a nonparametric extension of GLMs, often used when you have no a priori reason for choosing a particular response function (such as linear, quadratic, etc.) and want the data to 'speak for themselves'. GAMs do this via a smoothing function, similar to what you may already know about locally weighted regressions. GAMs take each predictor variable in the model and separate it into sections (delimited by 'knots'), and then fit polynomial functions to each section separately, with the constraint that there are no kinks at the knots (second derivatives of the separate functions are equal at the knots). The number of parameters used for such fitting is obviously more than what would be necessary for a simpler parametric fit to the same data, but computational shortcuts mean the model degrees of freedom is usually lower than what you might expect from a line with so much 'wiggliness'. Indeed, this is the principal statistical issue associated with GAM modeling: minimizing residual deviance (goodness of fit) while maximizing parsimony (lowest possible degrees of freedom).

Since the model fit is based on deviance/likelihood, fitted models are directly comparable with GLMs using likelihood techniques (like AIC) or classical tests based on model deviance (Chi-squared or F tests, depending on the error structure). Even better, all the error and link structures of GLMs are available in GAMs (including poisson and binomial), as are the standard suite of lm or glm attributes (resid, fitted, summary, coef, etc.). A principal reason why GAMs are often less preferred than GLMs is that the results are often difficult to interpret because no parameter values are returned (although significance tests of each term are). They can be very good for prediction/interpolation, as well as for exploratory analyses of the functional nature of a response. Some researchers examine the shape of a curve with GAMs, then reconstruct the curve shape parametrically with GLMs for model building.

There are two common implementations of GAMs in R. The older version (originally made for S-PLUS) is available as the 'gam' package by Hastie and Tibshirani. The newer version that we will use below is the 'mgcv' package from Simon Wood. The basic modeling procedure for both packages is similar (the function is gam for both; be wary of having both libraries loaded at the same time), but the behind-the-scenes computational approaches differ, as do the arguments for optimization and the model output. Expect the results to be slightly different when used with the same model structure on the same dataset.

Let's first examine output from the built-in function loess, which is a locally weighted polynomial regression. This time we'll try modeling yellow birch, and we'll include plots of birch absences.

dat = read.csv(file.choose()) #grab the treedata.csv file

dat2 = subset(dat,dat$species=="Betula alleghaniensis") #a yellow birch subset

b.plots = unique(as.character(dat2$plotID)) #plots with birch

u.plots = unique(as.character(dat$plotID)) #all plots

nob.plots = u.plots[is.element(u.plots,b.plots)==F] #plots without birch

dat3 = subset(dat,is.element(as.character(dat$plotID),nob.plots)) #dataset of no birch

dat4 = subset(dat3,duplicated(dat3$plotID)==F) #one row per plot

dat4$cover = 0 #coverof birch is zero in these plots

dat5 = rbind(dat2,dat4) #new dataframe of presences and absences

As last time, dat5 is our new data frame with both presences and absences, this time for yellow birch.

ls1 = loess(as.numeric(cover>0)~elev,data=dat5)



loess(formula = as.numeric(cover > 0) ~ elev, data = dat5)

Number of Observations: 1009

Equivalent Number of Parameters: 4.69

Residual Standard Error: 0.3931

Trace of smoother matrix: 5.11

Control settings:

normalize: TRUE

span : 0.75

degree : 2

family : gaussian

surface : interpolate cell = 0.2

The summary of the loess model gives the precision (SE) of the fit and the (default) arguments used. We can plot the result using predict. (Note predict with loess requires newdata to be entered as a data.frame rather than a list.)


x = seq(0,2000)
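The plotting code itself was not preserved in this copy; a minimal sketch of what it presumably looked like (plotting the raw presence/absence data and overlaying loess predictions across an elevation grid):

```r
# assumes dat5 and ls1 from above
x = seq(0,2000) #elevation grid for prediction
plot(as.numeric(cover>0)~elev, data=dat5,
     ylab="Probability of birch presence") #raw 0/1 data
lines(x, predict(ls1, newdata=data.frame(elev=x)), lwd=2) #loess trend
```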


The curve looks oddly classical-niche-like. The loess function gives a straightforward way to create smooth responses for simple regressions, but its utility ends there: the fit is not based on likelihood and there is no easy way to compare whether this model fits better than other (e.g., parametric) models, nor is there a way to accommodate non-Gaussian error functions. The older lowess function (ported from S-PLUS) does essentially the same thing, and is used for plotting trend lines (rather than creating models).

Now we can examine what gam can do. The formulation of a gam model is nearly exactly the same as for glm; all the same families and link functions apply. The only difference is wrapping the predictors in a non-parametric smoother function, s().



library(mgcv) #make sure the mgcv version of gam is the one loaded

gam1 = gam(cover>0~s(elev),family=binomial,data=dat5)
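The call producing the output below was presumably just the summary method for the fitted gam object:

```r
summary(gam1)
```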


Family: binomial

Link function: logit


cover > 0 ~ s(elev)

Parametric coefficients:

Estimate Std. Error z value Pr(>|z|)

(Intercept) -0.37157    0.08805   -4.22 2.44e-05 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:

edf Ref.df Chi.sq p-value

s(elev) 8.266  8.85  298.6  <2e-16 ***


Signif. codes:0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.396   Deviance explained = 32.5%


UBRE score = -0.056009 Scale est. = 1 n = 1009


You can see right away that the gam fit here was more sensitive to minimizing deviance (higher wiggliness) than the default fit of the loess function. The gam was also able to minimize deviance based on the logit transformation. The model output shows that an overall (parametric) intercept was fit (the mean), -0.37157. But this is on the scale of the logit transformation. To get the value on the scale of actual probability we use the inverse of the logit function:
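The computation itself was lost from this copy; the inverse-logit back-transformation of the intercept would be:

```r
# inverse logit: p = exp(x)/(1+exp(x)), also available as plogis(x)
exp(-0.37157)/(1+exp(-0.37157)) #about 0.408
plogis(-0.37157)                #same thing, built in
```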




The parametric estimates are listed with tests of significance against a null of zero. A summary of the smoothed terms is listed next, which here is only s(elev). Reported statistics include the estimated degrees of freedom (edf) and a test of whether the smoothed function significantly reduces model deviance. Finally, a pseudo-R2 is available using deviance explained (here 32.5%); this statistic is just 1 – (residual deviance/null deviance).

Let's compare the fit of this model against a simpler parametric binary regression using glm:

glm1 = glm(cover>0~elev,family=binomial,data=dat5)
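Again the call producing the output below was presumably:

```r
summary(glm1)
```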




glm(formula = cover > 0 ~ elev, family = binomial, data = dat5)

Deviance Residuals:

     Min       1Q   Median       3Q      Max
 -2.4790  -0.7775  -0.4832   0.8654   2.3320


Estimate Std. Error z value Pr(>|z|)

(Intercept) -3.9132171  0.2558538  -15.29   <2e-16 ***
elev         0.0034832  0.0002306   15.11   <2e-16 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family takento be 1)

    Null deviance: 1382.7  on 1008  degrees of freedom

Residual deviance: 1057.5  on 1007  degrees of freedom

AIC: 1061.5

Number of Fisher Scoring iterations: 4

There are several ways to compare the gam1 and glm1 models. First, we can take the easy way out and compare the AICs:
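The table below presumably came from the AIC method applied to both fitted objects:

```r
AIC(gam1,glm1)
```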


           df       AIC
gam1 9.266309  952.4866
glm1 2.000000 1061.4910

Although the gam1 model faces a stiffer penalty due to the need for many more df, the lower residual deviance more than makes up for it and the AIC is substantially lower. What is the difference in explained deviance?
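The elided calculation is presumably the deviance explained by glm1, i.e. one minus residual over null deviance:

```r
1 - glm1$deviance/glm1$null.deviance #deviance explained for glm1
```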


[1] 0.2352225

The gam1 explained deviance was 32.5%, but that for glm1 is only 23.5%. Finally, the residual deviance and df for both models mean we can compare them via a Chi-square test using anova:
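The missing call is presumably anova on the two fitted models with a Chi-squared test:

```r
anova(gam1,glm1,test="Chisq")
```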


Analysis of Deviance Table

Model 1: cover > 0 ~ s(elev)

Model 2: cover > 0 ~ elev

  Resid. Df Resid. Dev      Df Deviance  P(>|Chi|)
1    999.73     933.95
2   1007.00    1057.49 -7.2663  -123.54  < 2.2e-16 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The clear winner is gam1. Looking at the plot, we suspect it is due to the apparently hump-shaped (or even bimodal) distribution of yellow birch along the elevation gradient: our glm1 model is strictly monotonic. Of course, we can allow for a unimodal possibility in the glm by adding a quadratic term:

glm2 = glm(cover>0~elev+I(elev^2),family=binomial,data=dat5)
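The deviance table below was presumably produced by comparing all three models at once:

```r
anova(gam1,glm1,glm2,test="Chisq")
```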



Analysis of Deviance Table

Model 1: cover > 0 ~ s(elev)

Model 2: cover > 0 ~ elev

Model 3: cover > 0 ~ elev + I(elev^2)


  Resid. Df Resid. Dev      Df Deviance  P(>|Chi|)
1    999.73     933.95
2   1007.00    1057.49 -7.2663 -123.537  < 2.2e-16 ***
3   1006.00    1011.99  1.0000   45.505  1.523e-11 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

It's better, but it still doesn't come close to the goodness of fit of the gam1. You can create a simple plot of the fitted gam with plot, including residuals. Unless you tell it otherwise, the fit is shown on the scale of the link function. The trans argument can be given the inverse link function (here the inverse logit) to scale the y axis for the desired probabilities like this:

windows() #this creates a new (second) plot window
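The plotting call did not survive in this copy; given the description, it was presumably something like:

```r
plot(gam1, residuals=TRUE, shade=TRUE,
     trans=function(x) exp(x)/(1+exp(x))) #inverse logit on the y axis
```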


The handy part of these plots is the display of error (2*SE; roughly 95% of the predicted values fall within the gray area, or have these drawn as dashed lines using shade=F). Also note what R calls the 'rug': the dashes on the x axis that show the distribution of x values (elevation). Here the areas with few plots have the highest associated error.

The specification of terms within GAMs can be a complicated assortment of smoothed terms, parametric terms, 'isotropic' smoothing of two or more parameters, and 'overlapping' smoothing. Let's take a second continuous variable like transformed aspect (Beers index: from zero [SW, or hottest] to 2 [NE, coolest]) and model yellow birch with respect to elevation and the heat-load index in several different ways:

gam2 = gam(cover>0~s(elev)+s(beers),family=binomial,data=dat5)

gam3 = gam(cover>0~s(elev)+beers,family=binomial,data=dat5)

gam4 = gam(cover>0~s(elev)+s(beers)+s(elev,by=beers),family=binomial,data=dat5)

gam5 = gam(cover>0~s(elev)+s(beers)+te(elev,beers),family=binomial,data=dat5)

The first model fits both predictor variables with their own independent smoothing functions. We get back a parametric intercept and two tests of significance for the smoothed variables:
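The output below presumably came from:

```r
summary(gam2)
```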



Link function: logit


cover > 0 ~ s(elev) + s(beers)


             Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.39007    0.08954  -4.357 1.32e-05 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:

edf Ref.df Chi.sq p-value

s(elev)  8.299  8.863  294.83  < 2e-16 ***
s(beers) 1.952  2.418   26.30 3.69e-06 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.421   Deviance explained = 34.6%

UBRE score = -0.081485  Scale est. = 1  n = 1009

Both variables appear to be significant, and the explained deviance has gone up to 34.6% (from 32.5%). We can test this model against gam1 specifically by using anova:
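The comparison below presumably came from:

```r
anova(gam1,gam2,test="Chisq")
```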


Analysis of Deviance Table

Model 1: cover > 0 ~ s(elev)

Model 2: cover > 0 ~ s(elev) + s(beers)

  Resid. Df Resid. Dev     Df Deviance  P(>|Chi|)
1    999.73     933.95
2    997.75     904.28 1.9849   29.675  3.51e-07 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The lower deviance with very little cost in model df (basically 2) leads to a much better model. What do the predicted responses look like?
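The figure was dropped from this copy; it was presumably generated with plot.gam, for example:

```r
plot(gam2, pages=1, shade=TRUE,
     trans=function(x) exp(x)/(1+exp(x))) #both smooths on one page, probability scale
```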


Yellow birch appears to prefer cooler and shadier spots in addition to mid-high elevations. But the response to the Beers-transformed aspect looks basically linear: should this term just be a (non-smoothed) parametric fit? Our gam3 model is a GAM-GLM hybrid that has a smoothed elevation term and a parametric beers term:
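The deviance table below presumably came from:

```r
anova(gam2,gam3,test="Chisq")
```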


Analysis of Deviance Table

Model 1: cover > 0 ~ s(elev) + s(beers)

Model 2: cover > 0 ~ s(elev) + beers

  Resid. Df Resid. Dev     Df Deviance  P(>|Chi|)
1    997.75     904.28
2    998.68     909.39 -0.936  -5.1077    0.02146 *

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Surprisingly, gam2 is actually the better model (lower residual deviance) even though it is more highly parameterized. You can check AIC to see if the likelihood-based assessment agrees.
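A quick check, assuming the fitted objects from above:

```r
AIC(gam2,gam3) #does the AIC ranking agree with the deviance test?
```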

The next two models are two different ways to assess interaction terms in a GAM model. Unfortunately, assessing interactions in GAMs is anything but straightforward. The first thing to understand is that this model structure is not usually what you want:
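The model being summarized below never appears in the preserved code; it is presumably a fit with a bivariate isotropic smooth added (the object name here is my own placeholder):

```r
# hypothetical reconstruction of the model whose summary follows
gam.iso = gam(cover>0~s(elev)+s(beers)+s(elev,beers),family=binomial,data=dat5)
summary(gam.iso)
```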



Link function: logit


cover > 0 ~ s(elev) + s(beers) + s(elev, beers)


             Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.39008    0.08954  -4.357 1.32e-05 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:

                    edf   Ref.df  Chi.sq  p-value
s(elev)        8.298849  8.86203  275.49  < 2e-16 ***
s(beers)       1.951841  2.41812   26.30 3.69e-06 ***
s(elev,beers)  0.002208  0.00384 3.4e-08       NA

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.421   Deviance explained = 34.6%

UBRE score = -0.081484  Scale est. = 1  n = 1009

In fact the fit here is equal to the fit of gam2 (the df and residual deviance are identical), and the 'interaction' term is not tested. What the third term does is scale the smoothing functions equally for the two variables (isotropic smoothing). This may come in handy for truly isotropic variables (e.g., modeling with spatial distances in 2D or 3D), but otherwise doesn't get you closer to understanding whether the significance of one variable in the model depends on values of another. Instead, you can use the form of either gam4 (interaction fit isotropically; note the "by" argument here) or gam5 (interaction fit non-isotropically using the "tensor product" smoothing function, te). The mgcv developer, Simon Wood, suggests "tensor product smooths often perform better than isotropic smooths when the covariates of a smooth are not naturally on the same scale, so that their relative scaling is arbitrary". In other words, if the variables are not in the same units, you're probably better off using the structure of gam5.
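The tensor-product model's summary (shown below) presumably came from:

```r
summary(gam5)
```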



Link function: logit


cover > 0 ~ s(elev) + s(beers) + te(elev, beers)


             Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.35987    0.09143  -3.936 8.29e-05 ***


Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:

                  edf Ref.df  Chi.sq p-value
s(elev)         8.311  8.868 243.283  <2e-16 ***
s(beers)        1.915  2.373   7.361  0.0364 *
te(elev,beers)  1.001  1.002   2.053  0.1523

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.422   Deviance explained = 34.8%

UBRE score = -0.081586  Scale est. = 1  n = 1009

Now we have a significance test associated with the interaction, which is here not significant. A plot of this model:
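The plot (missing from this copy) was presumably produced with something like:

```r
plot(gam5, pages=1) #two smooths plus a contour plot of the tensor product term
```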


shows the smoothed response of cover to each variable, plus contour lines of smoothed cover against both the elevation and beers variables.

The"by" formulation is also appropriate for an ANCOVA structure, wherewe want to assess whether the smoothed fit varies according to differentclasses of some factor variable. Here'san example using the distribution of yellow birch with respect to elevation anddisturbance history:

gam6 = gam(cover>0~s(elev)+disturb+te(elev,by=disturb,k=4),family=binomial,data=dat5)


The model was fit as a hybrid of a smoothed response to elevation, a parametric (ANOVA-type) response to disturbance class (see the standard contrasts below), and a tensor-product smoothed response to elevation separated by disturbance class (here not significant). The graph shows the overall elevation response and elevation responses restricted to disturbance classes. The large error associated with most of these is consistent with the interaction being insignificant. The "k=4" argument limits the number of knots in each smoothing of elevation within disturbance classes; I did this because this is the maximum number the 5-class disturbance variable can support (try it without k specified: you should get an error).
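The summary output below presumably came from:

```r
summary(gam6)
```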




Link function: logit


cover > 0 ~ s(elev) + disturb + te(elev, disturb, k = 4)


              Estimate Std. Error z value Pr(>|z|)
(Intercept)    0.04427    0.37272   0.119    0.905
disturbLT-SEL -0.49432    0.45897  -1.077    0.281
disturbSETTLE -0.25341    0.60576  -0.418    0.676
disturbVIRGIN -0.75584    0.68266  -1.107    0.268

Approximatesignificance of smooth terms:

                   edf Ref.df Chi.sq  p-value
s(elev)          8.203  8.816  80.45 1.06e-13 ***
te(elev,disturb) 2.792  3.633   2.02    0.678

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) = 0.4   Deviance explained = 33.1%

UBRE score = -0.054127  Scale est. = 1  n = 1009

Here's an example of the procedure used by Bio et al. (1998, JVS 9:5-16) to compare models fit by GLMs and GAMs, using the gam function in the gam package. Using our yellow birch dataset:



library(gam) #note: this section uses the gam package (Hastie), not mgcv

m0 = glm(cover>0~1,family=binomial,data=dat5)

m1 = glm(cover>0~elev,family=binomial,data=dat5)

m2 = glm(cover>0~elev+I(elev^2),family=binomial,data=dat5)

m3 = gam(cover>0~s(elev,3),family=binomial,data=dat5) #smoother with 3 df

m4 = gam(cover>0~s(elev,4),family=binomial,data=dat5) #smoother with 4 df
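The deviance table below presumably came from anova over all five models:

```r
anova(m0,m1,m2,m3,m4,test="Chisq")
```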


Analysis of Deviance Table

Model 1: cover > 0 ~ 1

Model 2: cover > 0 ~ elev

Model 3: cover > 0 ~ elev + I(elev^2)

Model 4: cover > 0 ~ s(elev, 3)

Model 5: cover > 0 ~ s(elev, 4)

  Resid. Df Resid. Dev     Df Deviance  P(>|Chi|)
1      1008    1382.74
2      1007    1057.49 1.0000   325.25  < 2.2e-16 ***
3      1006    1011.99 1.0000    45.50  1.523e-11 ***
4      1005     978.88 1.0000    33.11  8.712e-09 ***
5      1004     965.93 1.0002    12.95    0.00032 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

All the models are better than the null model (intercept only), and the lowest deviance is model 4 (the 4-df smoother). This term would thus be included in the model (s(elev,4)) and the next environmental variable would be tested using the current model 4 as the null model. Like Bio et al., we can also compare the 4-df smoother fit with a 4th-degree polynomial fit in a GLM:

m5 = glm(cover>0~elev+I(elev^2)+I(elev^3)+I(elev^4),family=binomial,data=dat5)
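The comparison below presumably came from:

```r
anova(m4,m5,test="Chisq")
```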





Analysis of Deviance Table

Model 1: cover > 0 ~ s(elev, 4)

Model 2: cover > 0 ~ elev + I(elev^2) + I(elev^3) + I(elev^4)

  Resid. Df Resid. Dev           Df Deviance  P(>|Chi|)
1      1004     965.93
2      1004     973.77 -0.00025783  -7.8438  5.358e-07 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Like Bio et al., we conclude that using high-order polynomial functions leads to much greater residual deviance than the nonparametric equivalent GAM.

Finally, you may wish to experiment with a GAM variant called BRUTO, a form of flexible discriminant analysis available as the bruto function in the "mda" package. Elith et al. (Ecography 29:129-151) suggest this method often works faster than the traditional GAM approach and has some convenient options for term selection. However, it only takes Gaussian error, so working with binary data is difficult. Unfortunately I was unable to get the bruto function to work with the above Smokies data; Elith et al. also had problems with the R implementation and used S-PLUS instead.

