
MICHAEL CLARK

CENTER FOR SOCIAL RESEARCH


UNIVERSITY OF NOTRE DAME

GENERALIZED ADDITIVE MODELS
Getting Started with GAMs in R

Contents

Preface 4

Introduction 5
Beyond the General Linear Model I 5
General Linear Model 5
Generalized Linear Model 5
Generalized Additive Model 6
Beyond the General Linear Model II 6
Fitting the Standard Linear Model 6
Polynomial Regression 6
Scatterplot Smoothing 7
Generalized Additive Models 7
Summary 8

Application Using R 8
Initial Examination 8
Single Predictor 10
Linear Fit 10
GAM 11
Graphical Display 13
Model Comparison 14
Multiple Predictors 15
Linear Fit 15
GAM 15
Graphical Display 17
Model Comparison 19

Other Issues 19
Estimation 19
Shrinkage & Variable Selection 20
Effective degrees of freedom again 20

Choice of Smoothing Function 20


Diagnostics & Choice of Basis Dimension 21
Prediction 22
Model Comparison Revisited 22

Miscellaneous 23
Relation to Some Other Nonlinear Approaches 23
Known Form 23
Response Transformation 24
The Black Box 24
Extensions 25

Conclusion 25

Appendix 26
R packages 26
Miscellaneous Code 27
ggopts 27
mod_gam1 plot 27
Penalized Estimation Example 27
R Session Info 29

Preface
The following provides a brief introduction to generalized additive models and some thoughts on getting started within the R environment (current draft last updated 2012-08-07). It doesn't assume much more than a basic exposure to regression, and maybe a general idea of R, though not necessarily any particular expertise. The presentation is of a very applied nature, such that the topics build upon the familiar and generalize to the less so, with the hope that one can bring the concepts they are comfortable with to the new material. The audience in mind is a researcher with typical social science training.

To familiarize yourself with R, I have an introduction to R here and a second part here, and they would provide all one would need for the following, though there is much to be gained from simple web browsing on R. And while it wasn't the intention starting out, this document could be seen as a vignette for the mgcv package.

With regard to the text, functions are in red text, packages in blue, and links are sort of orange. With regard to the graphs, they are meant to be zoomed in upon, and one can zoom as much or as little as one likes without loss of clarity. Having them smaller and in the margins allows the reader to focus on them as much as one would like while having them right where they need to be in relation to the text (for the most part). This document was created entirely with Rstudio.

Introduction
Beyond the General Linear Model I

General Linear Model

LET US BEGIN by considering the standard ordinary least squares (OLS) linear regression model. We have some response/variable we wish to study, and believe it to be some function of other variables. In the model below, y is the variable of interest, assumed to be normally distributed with mean µ and variance σ², and the Xs are the predictor variables/covariates in this scenario. The predictors are multiplied by the coefficients (beta) and summed, giving us the linear predictor, which in this case also directly provides us the estimated fitted values.

y = N(µ, σ²)
µ = b0 + b1X1 + b2X2 + ... + bpXp
And since we’ll be using R and might as well get into that mindset,
I’ll give an example of how the R code might look like.

mymod = lm(y ~ x1 + x2, data = mydata)

One of the issues with this model is that, in its basic form, it can be very limiting in its assumptions about the data generating process for the variable we wish to study. Also, while the above is one variant of the usual manner in which it is presented in introductory texts, it very typically will not capture what is going on in a great many data situations.

Generalized Linear Model


In that light, we may consider the generalized linear model. Generalized linear models incorporate other types of distributions (of the exponential family), and include a link function g(.) relating the mean µ, or stated differently, the estimated fitted values E(y), to the linear predictor Xβ, often denoted η. The general form is thus:

g(µ) = Xβ
g(µ) = η
E(y) = µ = g⁻¹(η)

Consider again the typical linear regression. We assume a Gaussian (i.e. Normal) distribution for the response, we assume equal variance for all observations, and that there is a direct link of the linear predictor and the expected value µ, i.e. Xβ = µ. In fact the typical linear regression model is a generalized linear model with a Gaussian distribution and identity link function.

To further illustrate the generalization, we consider a distribution other than the Gaussian. In the example below, we examine a Poisson distribution for some vector of count data. There is only one parameter to be considered, µ, since for the Poisson the mean and variance are equal.

y = Pois(µ)
g(µ) = b0 + b1X1 + b2X2 + ... + bpXp

For the Poisson, the (canonical) link function g(.) is the natural log, and so relates the log of µ to the linear predictor. As such we could also write it in terms of exponentiating the right hand side:

µ = exp(b0 + b1X1 + b2X2 + ... + bpXp)

While there is a great deal to further explore with regard to generalized linear models, the point here is simply to note them as a generalization of the typical linear model with which all would be familiar after an introductory course in statistics. As we eventually move to generalized additive models, we can see them as a subsequent step in the generalization.
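As a point of reference (this code is not in the original text, just a minimal sketch mirroring the earlier lm call and assuming the same hypothetical mydata), the Poisson model above would be fit as:

mypois = glm(y ~ x1 + x2, family = poisson, data = mydata)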

Generalized Additive Model


Now let us make another generalization to incorporate nonlinear forms of the predictors. The form below gives the new setup relating our new, now nonlinear predictor to the expected value, with whatever link function may be appropriate.

y = ExpoFam(µ, etc.)
µ = E(y)
g(µ) = b0 + f(x1) + f(x2) + ... + f(xp)

So what's the difference? In short, we are using smooth functions of our predictor variables, which can take on a great many forms, with more detail in the following section. For now, it is enough to note that the observed values y are assumed to be of some exponential family distribution, and µ is still related to the model predictors via a link function. The key difference is that the linear predictor now incorporates smooth functions f(x) of at least some (possibly all) covariates.
Beyond the General Linear Model II

Fitting the Standard Linear Model

In many data situations the relationship between some covariate(s) X and response y is not so straightforward. Consider the plot in Introduction Figure 1. Attempting to fit a standard OLS regression results in the blue line, which doesn't capture the relationship very well.

[Introduction Figure 1: scatterplot of y against x with a linear OLS fit.]

Polynomial Regression
One common approach we could undertake is to add a transformation of the predictor X, and in this case we might consider a quadratic term such that our model looks something like the following:

y = N(µ, σ²)
µ = b0 + b1X + b2X²

We haven't really moved from the general linear model in this case, but we have a much better fit to the data as evidenced by the graph.

[Introduction Figure 2: the same scatterplot with the quadratic fit.]
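For reference, such a quadratic fit is a one-line change to the earlier lm call; the following is a sketch (not from the original text), again assuming the hypothetical mydata, where I() or poly are two common ways to add the squared term.

mymod2 = lm(y ~ x + I(x^2), data = mydata)
# or, using orthogonal polynomials
mymod2b = lm(y ~ poly(x, 2), data = mydata)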
7 Generalized Additive Models

Scatterplot Smoothing
There are still other possible routes we could take. Many are probably familiar with lowess (or loess) regression, if only in the sense that often statistical packages may, either by default or with relative ease, add a nonparametric fit line to a scatterplot (see the scatterplots in the car package, for example). By default this is typically a lowess fit.

Now look at figure 3. For every (sorted) x0 value, we choose a neighborhood around it and, for example, fit a simple regression. As an example, let's look at x0 = 3.0, and choose a neighborhood of 100 X values below and 100 values above. Now, for just that range we fit a linear regression model and note the fitted value where x0 = 3.0 (Fig. 4). If we now do this for every X value, we'll have fitted values based on a rolling neighborhood that is relative to each X value being considered. Other options we could throw into this process, and usually do, would be to fit a polynomial regression for each neighborhood, and weight observations by how close they are to the X value, with less weight given to more distant observations.

[Introduction Figure 3: the full scatterplot of y against x. Introduction Figure 4: the local linear fit in the neighborhood around x0 = 3.0.]

Figure 5 shows the result from such a fitting process, specifically lowess, or locally weighted scatterplot smoothing. For comparison the regular regression fit is provided in the background. Even without using a lowess approach, we could have fit a model assuming a quadratic relationship, y = x + x² (y was in fact constructed as x + x² + noise), and it would result in a far better fit than the simpler model. While polynomial regression might suffice in some cases, the issue is that nonlinear relationships are generally not specified so easily (Bolker 2008 notes an example of fitting a polynomial to 1970s U.S. Census data that would have predicted a population crash to zero by 2015). One can see a variety of approaches in Figure 6, which is a version of that found in Venables and Ripley (1997 edition). The leftmost plots are from a polynomial regression and lowess fit.

[Introduction Figure 5: the lowess fit, with the simple linear regression fit in the background.]
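As a point of reference, this sort of local fitting is available in base R; the following is a sketch (not from the original text), using the hypothetical x and y from the figures, that overlays a lowess fit on the simple linear fit.

plot(x, y)
abline(lm(y ~ x), col = 'gray60')             # the straight-line fit
lines(lowess(x, y, f = 2/3), col = '#2957FF') # locally weighted scatterplot smoother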
Generalized Additive Models

Figure 6 regards a data set giving a series of measurements of head acceleration in a simulated motorcycle accident. Time is in milliseconds, acceleration in g. Here we have data that are probably not going to be captured with simple transformations of the predictors. In the figure one can compare various approaches, and the first is a straightforward cubic polynomial regression, which unfortunately doesn't help us much. We could try higher orders, which would help, but in the end we will need something more flexible, and generalized additive models can help us in such cases.

[Introduction Figure 6: Acceleration vs. Time for the motorcycle data, fit with a cubic polynomial regression, natural splines, smoothing splines, lowess, a kernel smoother, and a GAM.]

Summary

Let’s summarize our efforts up to this point.

Standard Linear Model
y = N(µ, σ²)
µ = b0 + b1X

Polynomial Regression
y = N(µ, σ²)
µ = b0 + b1X + b2X²

GLM formulation
y = N(µ, σ²)
g(µ) = b0 + b1X + b2X²   (identity link function)

GAM formulation
y = N(µ, σ²)
g(µ) = f(X)   (identity link function)

Now that some of the mystery will hopefully have been removed, let’s
take a look at GAMs in action.

Application Using R
Initial Examination

THE DATA SET HAS BEEN CONSTRUCTED using average Science scores by country from the Programme for International Student Assessment (PISA) 2006, along with GNI per capita (Purchasing Power Parity, 2005 dollars), Educational Index, Health Index, and Human Development Index from UN data. The key variables are as follows (variable abbreviations in bold):

Overall Science Score (average score for 15 year olds)
Interest in science
Support for scientific inquiry
Income Index
Health Index
Education Index
Human Development Index (composed of the Income Index, Health Index, and Education Index)

The education component is measured by mean of years of schooling for adults aged 25 years and expected years of schooling for children of school entering age, the health index by life expectancy at birth, and the wealth component is based on the gross national income per capita. The HDI sets a minimum and a maximum for each dimension, and values indicate where each country stands in relation to these endpoints, expressed as a value between 0 and 1. More information on the HDI measures can be found here.

But even before we get too far, it would be good to know our options in the world of GAMs. Near the end of this document is a list of some packages to be aware of, and there is quite a bit of GAM functionality available within R, even for just plotting (ggplot2 has basic gam functionality for scatterplot smoothing; examples are further in the text). For our purposes we will use mgcv.

The first thing to do is get the data in and do some initial inspections.

d = read.csv("http://www.nd.edu/~mclark19/learn/data/pisasci2006.csv")
library(psych)
describe(d)[-1, 1:9] #univariate

## var n mean sd median trimmed mad min max


## Overall 2 57 473.14 54.58 489.00 477.45 48.93 322.00 563.00
## Issues 3 57 469.91 53.93 489.00 474.28 41.51 321.00 555.00
## Explain 4 57 475.02 54.02 490.00 478.85 47.44 334.00 566.00
## Evidence 5 57 469.81 61.74 489.00 475.11 54.86 288.00 567.00
## Interest 6 57 528.16 49.84 522.00 525.72 53.37 448.00 644.00
## Support 7 57 512.18 26.08 512.00 512.15 25.20 447.00 569.00
## Income 8 61 0.74 0.11 0.76 0.75 0.11 0.41 0.94
## Health 9 61 0.89 0.07 0.89 0.89 0.08 0.72 0.99
## Edu 10 59 0.80 0.11 0.81 0.80 0.12 0.54 0.99
## HDI 11 59 0.81 0.09 0.82 0.81 0.09 0.58 0.94

library(car)
scatterplotMatrix(d[,-c(1,3:5)], pch=19, cex=.5, reg.line=F, lwd.smooth=1.25,
                  spread=F, ellipse=T, col=c('gray60','#2957FF','#FF8000'),
                  col.axis='gray50')

[Application Fig. 1: scatterplot matrix of Overall, Interest, Support, Income, Health, Edu, and HDI, with univariate densities on the diagonal and loess curves in the off-diagonal panels.]

The scatterplot matrix has quite a bit of information to spend some time with: univariate and bivariate densities, as well as loess curves. For our purposes we will ignore the issue regarding the haves vs. have-nots on the science scale and save that for another day. Note the surprising negative correlation between interest and overall score for science, which might remind us of Simpson's paradox, namely, that what occurs for the individual may not be the same for the group. One will also note that while there is a positive correlation between Income and Overall science scores, it flattens out after an initial rise. Linear fits might be difficult to swallow for human development subscales in general, but let's take a closer look (in the following plots I have a ggplot object called ggopts to clean up the default plotting scheme; see the appendix).

library(ggplot2); library(reshape2)

#get data into a form to take advantage of ggplot
dmelt = melt(d, id=c('Country','Overall'),
             measure=c('Interest','Support','Income','Health','Edu','HDI'))

#leave the smooth off for now
ggplot(aes(x=value,y=Overall), data=dmelt) +
  geom_point(color='#FF8000',alpha=.75) +
  #geom_smooth(se=F) +
  geom_text(aes(label=Country), alpha=.25, size=1, angle=30, hjust=-.2,
            vjust=-.2) +
  facet_wrap(~variable, scales='free_x') +
  ggopts

[Application Fig. 2: Overall score against each index with country labels, faceted by variable; zoom in to see country names.]

We can see again that linear fits aren't going to do so well for some, though it might be a close approximation for interest in science and support for science. Now let's run the smooth. By default ggplot2 will use a loess smoother for small data sets (i.e. < 1000 observations), but can use the mgcv gam function as a smoother, though you'll need to load either the gam or mgcv packages first.

library(mgcv)
ggplot(aes(x=value,y=Overall), data=dmelt) +
  geom_point(color='#FF8000',alpha=.75) +
  geom_smooth(se=F, method='gam', formula=y~s(x), color='#2957FF') +
  facet_wrap(~variable, scales='free_x') +
  ggopts

[Application Fig. 3: the faceted plots of Overall against each index, now with a GAM smooth.]

Often in these situations people will perform some standard transformation, such as a log, but it doesn't help nearly as often as it is used. For example, in this case one can log the overall score, Income, or both, and a linear relation will still not be seen.

Single Predictor

Linear Fit

We will start with the simple situation of a single predictor. Let’s begin
by using a typical linear regression to predict science scores by the
Income index. We could use the standard R lm function, but I’ll leave
that as an exercise for comparison. We can still do straightforward
linear models with the gam function, and again it is important to note
that the standard linear model can be seen as a special case of a GAM.
library(mgcv)
mod_lm <- gam(Overall ~ Income, data = d)
summary(mod_lm)

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ Income
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 204.3 35.4 5.78 4.3e-07 ***
## Income 355.9 46.8 7.61 5.4e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##

## R-sq.(adj) = 0.518 Deviance explained = 52.7%


## GCV score = 1504.5 Scale est. = 1448.8 n = 54

What are we getting here? The same thing you get from a regular linear model, because you just ran one. However there are a couple things to look at. The coefficient is statistically significant, but serves as a reminder that it is usually a good idea to scale predictor variables so that the effect is more meaningful. Here, moving one unit on Income is akin to going from broke to being the richest country. But in a more meaningful sense, if we moved from, say, .7 to .8, we'd expect an increase of about 35 points on the science score. We also see the deviance explained, which serves as a generalization of R-squared (for those more familiar with generalized linear models, this is calculated as (Dev_Null − Dev_Residual)/Dev_Null; one can verify this by running the same model via glm and using the values from the summary of the model object), and in this case it actually is equivalent to R-squared. Likewise there is the familiar adjusted version of it to account for small sample size and model complexity. The scale estimate is the scaled deviance, which here amounts to the residual variance (residual sum of squares over residual degrees of freedom). The GCV score we will save for when we run a GAM.
will save for when we run a GAM.

GAM

Let’s now try some nonlinear approaches, keeping in mind that


µ = f ( x ). As a point of comparison, we can start by trying a standard
polynomial regression, and it might do well enough8 . To start we 8
The following example is pretty much
must consider a basis for use, i.e. a set of basis functions b j that will be straight from Wood (2006, Chapter 3)
with little modification.
combined to produce f ( x ):
q
f (x) = ∑ bj (x) β j 470.8
470.6

b1=1
j =1 470.4
470.2
470.0

To better explain by example, if we had a cubic polynomial the basis


250

b2=x
200

is: b1 ( x ) = 1, b2 ( x ) = x, b3 ( x ) = x2 , b4 ( x ) = x3 , which, when


150

−20
−30

combined give us our f ( x ). For those familiar with the mechanics of

b3=x^2
−40
−50
−60
−70

linear modeling, this is just our ’model matrix’, which we can save out −20

b4=x^3
−40

with many model functions in R via the argument x=T or using the −60

−80

model.matrix function. With the graph at the right we can see the
500

450

f(x)
effects in action. It is based on the results extracted from running the
400

350
0.4 0.5 0.6 0.7 0.8 0.9

model9 and obtaining the coefficients (e.g. the first plot represents
Income

Application Fig. 4
the intercept of 470.44, the second plot, our b2 coefficient of 289.5
9
multiplied by Income and so forth). The bottom plot shows the final fit see ?poly for how to fit a polynomial in
R.
f ( x ), i.e. the linear combination of the basis functions.
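To make that concrete, here is a small sketch (not in the original text) of pulling the model matrix from a cubic polynomial fit and forming f(x) as the weighted sum of its columns; raw = TRUE is used so the columns are literally 1, x, x², x³.

polymod = lm(Overall ~ poly(Income, 3, raw = TRUE), data = d)
B = model.matrix(polymod)   # columns: intercept, Income, Income^2, Income^3
fx = B %*% coef(polymod)    # the fitted f(x), a linear combination of the basis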
At this point we have done nothing we couldn’t do in our regular
regression approach, but the take home message is that as we move
to GAMs we are going about things in much the same fashion; we are
simply changing the nature of the basis, and have a great deal more
flexibility in choosing the form.
In figure 5 I show the fit using a 'by-hand' cubic spline basis (see the appendix and Wood, 2006, p. 126-7). A cubic spline is essentially a connection of multiple cubic polynomial regressions. We choose points of the variable at which to create sections, and these points are referred to as knots. Separate cubic polynomials are fit at each section, and then joined at the knots to create a continuous curve. The graph represents a cubic spline with 8 knots (ten including the endpoints) between the first and third quartiles. The inset graph uses the GAM functionality within ggplot2's geom_smooth as a point of comparison.

[Application Fig. 5: the 'by-hand' cubic spline fit of Overall on Income, with a ggplot2 geom_smooth GAM fit inset for comparison.]

Let's now fit an actual generalized additive model using the same cubic spline as our smoothing function. We again use the gam function as before for basic model fitting, but now we are using a function s within the formula to denote the smooth terms. Within that function we also specify the type of smooth, though a default is available. I chose 'cr', denoting cubic regression splines, to keep consistent with our previous example.

mod_gam1 <- gam(Overall ~ s(Income, bs = "cr"), data = d)
summary(mod_gam1)

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ s(Income, bs = "cr")
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 470.44 4.08 115 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(Income) 6.9 7.74 16.7 1.2e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.7 Deviance explained = 73.9%
## GCV score = 1053.7 Scale est. = 899.67 n = 54

The first thing to note is, that aside from the smooth part, our model
code is similar to what we’re used to with core R functions such as lm
and glm. In the summary, we first see the distribution assumed as well
as the link function used, in this case normal and identity, respectively,
which, to reiterate, had we had no smoothing would result in a standard
regression model. After that we see that the output is separated into
parametric and smooth, or nonparametric parts. In this case, the only
parametric component is the intercept, but it’s good to remember that
you are not bound to smooth every effect of interest, and indeed, as
we will discuss in more detail later, part of the process may involve
refitting the model with terms that were found to be linear for the

most part anyway. The smooth component of our model regarding a


country’s income’s relationship with overall science score is statistically
significant, but there are a couple of things in the model summary that
would be unfamiliar.
We’ll start with the effective degrees of freedom, or edf. In typical OLS
regression the model degrees of freedom is equivalent to the number
of predictors/terms in the model. This is not so straightforward with
a GAM due to the smoothing process and the penalized regression es-
timation procedure, something that will be discussed more later11 . 11
In this example there are actually 9
In this situation, we are still trying to minimize the residual sums terms associated with this smooth, but
they are each ’penalized’ to some extent
of squares, but also have a built in penalty for ’wiggliness’ of the fit, and thus the edf does not equal 9.
where in general we try to strike a balance between an undersmoothed
fit and an oversmoothed fit. The default p-value for the test is based on
the effective degrees of freedom and the rank r of the covariance ma-
trix for the coefficients for a particular smooth, so here conceptually it
is the p-value associated with the F (r, n − ed f ). However there are still
other issues to be concerned about, and ?summary.gam will provide
your first step down that particular rabbit hole. For hypothesis testing
an alternate edf is actually used, which is the other one provided there
in the summary result12 . At this point you might be thinking these p- 12
Here it is noted ’Rel.df’ but if, for ex-
values are a bit fuzzy, and you’d be right. The gist is, they aren’t to be ample, the argument p.type = 5 is used,
it will be labeled ’Est.Rank’. Also, there
used for harsh cutoffs, say, at an arbitrary .05 level13 , but if they are are four p-value types one can choose
pretty low you can feel comfortable claiming statistical significance, from. The full story of edf, p-values and
related is scattered throughout Wood’s
which of course is the end all, be all, of the scientific endeavor. text. See also ?anova.gam
The GCV, or generalized cross validation score can be taken as an 13
But then, standard p-values shouldn’t
estimate of the mean square prediction error based on a leave-one-out be used that way either.

cross validation estimation process. We estimate the model for all ob-
servations except i, then note the squared residual predicting observa-
tion i from the model. Then we do this for all observations. However,
the GCV score is an efficient measure of this concept that doesn’t ac-
tually require fitting all those models and overcomes other issues. It is In this initial model the GCV can be
this score that is minimized when determining the specific nature of found as:
GCV = (n−ed f −[nno∗fscaled est.
parametric terms])2
the smooth. On its own it doesn’t tell us much, but we can use it simi-
lar to AIC as a comparative measure to choose among different models,
with lower being better.
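As a rough check (a sketch, not part of the original text; the component names below are standard pieces of an mgcv gam object), the reported GCV can be reproduced from the fitted model:

n = nrow(mod_gam1$model)
n * mod_gam1$sig2 / (n - sum(mod_gam1$edf))  # ~1053.7
mod_gam1$gcv.ubre                            # the GCV score mgcv stores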

Graphical Display

One can get a sense of the form of the fit by simply plotting the model object as follows:

plot(mod_gam1)

[Application Fig. 6: plot of the smooth s(Income, 6.9) over Income; the intervals are actually Bayesian credible intervals.]

Although in this single-predictor case one can revisit Application Figure 5, where the inset was constructed directly from ggplot. You can examine the code in the appendix.

Model Comparison

Let us now compare our regular regression fit to the GAM model fit.
The following shows how one can extract various measures of perfor-
mance and the subsequent table shows them gathered together.

AIC(mod_lm)

## [1] 550.2

summary(mod_lm)$sp.criterion

## [1] 1504

summary(mod_lm)$r.sq #adjusted R squared

## [1] 0.5175

Do the same to extract those same elements from the GAM. The following display makes for easy comparison (I just gathered the values into a data.frame object and used xtable from the eponymous package to produce the table).
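For instance (a brief sketch mirroring the calls just shown for the linear fit), the GAM values in the table can be pulled the same way:

AIC(mod_gam1)
summary(mod_gam1)$sp.criterion  # GCV
summary(mod_gam1)$r.sq          # adjusted R-squared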
aic_lm  aic_gam1   gcv_lm  gcv_gam1  rsq_lm  rsq_gam1
550.24    529.81  1504.50   1053.73    0.52      0.70

Comparing these various measures, it's safe to conclude that the GAM fits better. We can also perform the familiar statistical test via the anova function we apply to other R model objects. As with the previous p-value issue, we can't be too particular, and technically one could have a model with more terms but lower edf, where it just wouldn't even make sense (see ?anova.gam for more information). As it would be best to be conservative, we'll proceed cautiously.

anova(mod_lm, mod_gam1, test = "Chisq")

## Analysis of Deviance Table


##
## Model 1: Overall ~ Income
## Model 2: Overall ~ s(Income, bs = "cr")
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 52.0 75336
## 2 46.1 41479 5.9 33857 1.2e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

It would appear the anova results tell us what we have probably


come to believe already, that incorporating nonlinear effects has im-
proved the model considerably.

Multiple Predictors

Let’s now see what we can do with a more realistic case where we have
added model complexity.

Linear Fit

We’ll start with the typical linear model approach again, this time
adding the Health and Education indices.

mod_lm2 <- gam(Overall ~ Income + Edu + Health, data = d)


summary(mod_lm2)

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ Income + Edu + Health
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 121.2 79.0 1.53 0.131
## Income 182.3 85.3 2.14 0.038 *
## Edu 234.1 54.8 4.27 9.1e-05 ***
## Health 27.0 134.9 0.20 0.842
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## R-sq.(adj) = 0.616 Deviance explained = 63.9%
## GCV score = 1212.3 Scale est. = 1119 n = 52

It appears we have statistical effects for Income and Education, but not for Health, and the adjusted R-squared suggests a notable amount of the variance is accounted for (note that the difference in sample sizes does not make this directly comparable to the first model). Let's see about nonlinear effects.
GAM

As far as the generalized additive model goes, we can approach things in a similar manner as before. We will ignore the results of the linear model for now and look for nonlinear effects for each covariate. The default smoother for s() is bs='tp', a thin plate regression spline.

mod_gam2 <- gam(Overall ~ s(Income) + s(Edu) + s(Health), data = d)
summary(mod_gam2)

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ s(Income) + s(Edu) + s(Health)
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)

## (Intercept) 471.15 2.77 170 <2e-16 ***


## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(Income) 7.59 8.41 8.83 1.3e-07 ***
## s(Edu) 6.20 7.18 3.31 0.0073 **
## s(Health) 1.00 1.00 2.74 0.1066
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.863 Deviance explained = 90.3%
## GCV score = 573.83 Scale est. = 399.5 n = 52

There are again a couple things to take note of. First, statistically
speaking, we come to the same conclusion as the linear model regard-
ing the individual effects. One should take particular note of the effect
of Health index. The effective degrees of freedom with value 1 sug-
gests that it has essentially been reduced to a simple linear effect. The
following will update the model to explicitly model the effect as such,
but as one can see, the results are identical.

mod_gam2B = update(mod_gam2, . ~ . - s(Health) + Health)


summary(mod_gam2B)

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ s(Income) + s(Edu) + Health
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 640 102 6.26 3.1e-07 ***
## Health -190 115 -1.65 0.11
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(Income) 7.59 8.41 8.83 1.3e-07 ***
## s(Edu) 6.20 7.18 3.31 0.0073 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.863 Deviance explained = 90.3%
## GCV score = 573.83 Scale est. = 399.5 n = 52

We can also note that this model accounts for much of the variance
in Overall science scores, with an adjusted R-squared of .86. In short,
it looks like the living standards and educational resources of a country
are associated with overall science scores, even if we don’t really need
the Health index in the model.

Graphical Display

Now we examine the effects of interest visually. In the following code, aside from the basic plot we have changed the default options to put all the plots on one page, added the partial residuals (see e.g. John Fox's texts and chapters regarding regression diagnostics, and his crPlots function in the car package), and changed the symbol and its respective size, among other options.

plot(mod_gam2, pages=1, residuals=T, pch=19, cex=0.25,
     scheme=1, col='#FF8000', shade=T, shade.col='gray90')

Here we can see the effects of interest, and one might again note the penalized-to-linear effect of Health. We again see the tapering off of Income's effect at its highest level, and in addition, a kind of sweet spot for a positive effect of Education in the mid-range values, with a slight positive effect overall. Health, as noted, has been reduced to a linear, surprisingly negative effect, but again this is non-significant. One will also note the y-axis scale. The scale is on that of the linear predictor, but due to identifiability constraints, the smooths must sum to zero, and thus are presented in a mean-centered fashion.

[Application Fig. 7: partial effect plots, with partial residuals, for s(Income, 7.59), s(Edu, 6.2), and s(Health, 1).]

Previously, we had only one predictor and so we could let ggplot do the work to produce a plot on the response scale, although we could have added the intercept back in. Here we'll need to do a little more. The following will produce a plot for Income on the scale of the response, with the other predictors held at their mean (Application Fig. 8).
# Note that mod_gam2$model is the data that was used in the modeling process,
# so it will have NAs removed.
testdata = data.frame(Income=seq(.4,1, length=100),
                      Edu=mean(mod_gam2$model$Edu),
                      Health=mean(mod_gam2$model$Health))
fits = predict(mod_gam2, newdata=testdata, type='response', se=T)
predicts = data.frame(testdata, fits)

ggplot(aes(x=Income,y=fit), data=predicts) +
  geom_smooth(aes(ymin = fit - se.fit, ymax=fit + se.fit),
              fill='gray80', size=1, stat='identity') +
  ggopts

[Application Fig. 8: predicted Overall score across Income, with the other predictors held at their means.]
This gives us a sense for one predictor, but let's take a gander at Income and Education at the same time. Previously, when we called plot on the model it was actually plot.gam, the plot method for a GAM class object, doing the work. There is another plotting function, vis.gam, that will give us a bit more to play with. The following will produce a contour plot with Income on the x axis, Education on the y, values on the response scale given by the contours, and lighter color indicating higher values.

vis.gam(mod_gam2, type = "response", plot.type = "contour")

[Application Fig. 9: contour plot of the predicted response over Income and Edu.]

First and foremost Figure 9 reflects the individual plots (e.g. one can see the decrease in scores at the highest income levels), and we can see that middling on Education and high on Income generally produces the highest scores. Conversely, being low on both the Education and Income indices is associated with poor Overall science scores. However, while interesting, these respective smooths were created separately of one another, and there is another way we might examine how these two effects work together in predicting the response.

Let’s take a look at another approach, continuing the focus on visual response

display. It may not be obvious at all, but one can utilize smooths of
more than one variable, in effect, a smooth of the smooths of the vari-

0.9
ables that go into it. Let’s create a new model to play around with this

0.8
feature. After fitting the model, an example of the plot code is given

Edu
(figure 11), but I also provide a different perspective (figure 12) as

0.7
well as contour plot (Application Figure 10) for comparison with the

0.6
graph based on separate smooths.
0.5 0.6 0.7 0.8 0.9

mod_gam3 <- gam(Overall ~ te(Income, Edu), data = d)


Income

summary(mod_gam3) Application Fig. 10

##
## Family: gaussian
## Link function: identity
##
## Formula:
## Overall ~ te(Income, Edu)
## <environment: 0x000000000a575438>
##
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   471.15       3.35     141   <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
##                 edf Ref.df    F p-value
## te(Income,Edu) 10.1   12.2 16.9  <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) =   0.8   Deviance explained = 84%
## GCV score = 741.42  Scale est. = 583.17   n = 52

[Application Fig. 11: perspective plot of the tensor product smooth of Income and Edu.]

vis.gam(mod_gam3, type = "response", plot.type = "persp", phi = 30, theta = 30,
        n.grid = 500, border = NA)

[Application Fig. 12: perspective plot of the fitted response surface over Income and Edu.]

In the above we are using a type of smooth called a tensor product smooth, and by smoothing the marginal smooths of Income and Education, we see a bit clearer story. As we might suspect, wealthy countries with more of an apparent educational infrastructure are going to score higher on the Overall science score. However, wealth is not necessarily indicative of higher science scores (note the dark bottom right corner on the contour plot; this is likely due to Qatar, refer to Application Figure 2), though without at least moderate wealth hopes are fairly dim for a decent score.


One can also, for example, examine interactions between a smooth
and a linear term f ( x )z, and in a similar vein of thought look at
smooths at different levels of a grouping factor. We’ll leave that for
some other time.

Model Comparison

As before, we can examine indices such as GCV or perhaps adjusted


R-square, which both suggest our GAM performs considerably better.
Statistically we can compare the two models with the anova function
as before.
anova(mod_lm2, mod_gam2, test = "Chisq")

## Analysis of Deviance Table


##
## Model 1: Overall ~ Income + Edu + Health
## Model 2: Overall ~ s(Income) + s(Edu) + s(Health)
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 48.0 53713
## 2 36.2 14463 11.8 39250 9.8e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Not that we couldn’t have assumed as such already, but now we


have additional statistical evidence to suggest that incorporating non-
linear effects improves the model.

Other Issues
Estimation

AS NOTED PREVIOUSLY, estimation of GAMs in the mgcv package is conducted via a penalized likelihood approach. Conceptually this amounts to fitting the following model:

g(µ) = Xβ + f(x1) + f(x2) + ... + f(xp)

But note that each smooth has its own model matrix made up of the bases. So for each smooth covariate we have:

fj = X̃j β̃j

Given a penalty matrix Sj associated with each smooth, we can more formally note a penalized likelihood function:

lp(β) = l(β) − (1/2) ∑ λj βᵀ Sj β

where l(β) is the usual GLM likelihood function and the λj are the smoothing parameters. The part of the function including λ penalizes curvature in the function, where λ establishes a trade-off between the goodness of fit and the smoothness, and such an approach will allow for less overfitting (λ → ∞ would result in a straight-line estimate for fj, while λ = 0 would allow any f that interpolates the data; Wood, 2006, p. 128). Technically we could specify the smoothing parameters explicitly, and the appendix has some 'by-hand' code taken directly from Wood (2006) with only slight modifications, where the smoothing parameters are chosen and compared (we would have to choose them in order for the penalized maximum likelihood to work).

Smoothing parameters, however, are in fact estimated, and this brings us back to the cross-validation procedure mentioned before. Smoothing parameters are selected which minimize the GCV score by default, though one has other options. Note that there are other approaches to estimation, such as backfitting, generalized smoothing splines, and Bayesian estimation.
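As an aside (a hedged sketch, not from the original text), one can also fix the smoothing parameter rather than have it estimated, via the sp argument, and inspect the value that GCV actually selected:

gam(Overall ~ s(Income, bs = "cr", sp = .1), data = d)    # small penalty, wigglier fit
gam(Overall ~ s(Income, bs = "cr", sp = 1000), data = d)  # large penalty, smoother fit
mod_gam1$sp                                               # the smoothing parameter GCV selected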

Shrinkage & Variable Selection


Some smooths are such that no matter the smoothing parameter, there will always be non-zero coefficients. An extra penalty may be added such that if the smoothing parameter is large enough, the coefficients will shrink to zero, and some smoothing bases will have such alternative approaches available (see also the select argument to the gam function; in version 1.7-19, which came out during creation of this document, summary.gam and anova.gam had changes made to improve the p-value computation in penalized situations). In this manner one can assess whether a predictor is adding anything to the model, i.e. if its effective degrees of freedom is near zero, and perhaps use the approach as a variable selection technique.
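A hedged sketch of the two routes just described (the shrinkage bases bs="ts"/"cs" and the select argument are standard mgcv options; the particular calls here are just illustrative):

mod_shrink = gam(Overall ~ s(Income, bs="ts") + s(Edu, bs="ts") + s(Health, bs="ts"), data = d)
mod_select = gam(Overall ~ s(Income) + s(Edu) + s(Health), data = d, select = TRUE)
summary(mod_select)  # terms shrunk out of the model will show edf near zero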
Effective degrees of freedom again
If we define a matrix F that maps the unpenalized estimates of β to the penalized estimates such that

F = (XᵀX + S)⁻¹ XᵀX

and note

β̃ = (XᵀX)⁻¹ Xᵀy
β̂ = F β̃

the diagonal elements of F are where the effective degrees of freedom for each covariate come from. I doubt this helps much conceptually, but there it is anyway.

Choice of Smoothing Function

A number of smooths are available with the mgcv package, and one
can learn more by entering ?smooth.terms at the command line. In

our models we have used cubic regression splines and thin plate re-
gression splines (TPRS), the latter being the default for a GAM in this
package. As a brief summary, TPRS work well in general in terms of performance, otherwise have some particular advantages, and have a shrinkage alternative available. One should still feel free to play
around, particularly when dealing with multiple smooths, where the
tensor product smooths would be better for covariates of different
scales.

Diagnostics & Choice of Basis Dimension

We have some built-in abilities to examine whether there are any particular issues, and we can try it with our second GAM model.

gam.check(mod_gam2, k.rep = 1000)

[Issues Fig. 1: gam.check diagnostic plots: QQ plot of deviance residuals, residuals vs. linear predictor, histogram of residuals, and response vs. fitted values.]

##
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 21 iterations.
## The RMS GCV score gradiant at convergence was 2.499e-05 .
## The Hessian was positive definite.
## The estimated model rank was 28 (maximum possible: 28)
##
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
##
##              k'   edf k-index p-value
## s(Income) 9.000 7.593   1.258    0.95
## s(Edu)    9.000 6.204   1.008    0.45
## s(Health) 9.000 1.000   0.899    0.19

The plots are of the sort we're used to from a typical regression setting, though it's perhaps a bit difficult to make any grand conclusion based on such a small data set (one can inspect the quantile-quantile plot directly with the qq.gam function). The printed output on the other hand contains unfamiliar information, but is largely concerned with over-smoothing, and so has tests of whether the basis dimension for a smooth is too low. The p-values are based on simulation, so I bumped up the number with the additional argument. Guidelines are given in the output itself, and at least in this case it does not look like we have an issue. However, if there were a potential problem, it is suggested to double k (k can be set as an argument to s(var1, k=?)) and refit, and if the effective degrees of freedom increases quite a bit you would probably want to go with the updated model (I actually did this for Health, which is the only one of the predictors that could be questioned, and there was no change at all; it still reduced to a linear effect). Given the penalization process, the exact choice of k isn't too big of a deal, but the defaults are arbitrary. You want to set it large enough to get at the true effect as best as possible, but in some cases computational efficiency will also be of concern. The help on the function choose.k provides another approach to examining k based on the residuals from the model under consideration, and provides other useful information.
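A sketch of that suggestion (not from the original text; k = 20 doubles the default k = 10 for a univariate smooth):

mod_gam2_k = update(mod_gam2, . ~ . - s(Health) + s(Health, k = 20))
summary(mod_gam2_k)  # compare the edf for s(Health) with the original model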

Prediction

A previous example used the predict function on the data used to fit
the model to obtain fitted values on the response scale. Typically we’d
use this on new data. I do not cover it, because the functionality is the
same as the predict.glm function in base R, and one can just refer
to that. It is worth noting that there is an option, type=’lpmatrix’,
which will return the actual model matrix by which the coefficients
must be pre-multiplied to get the values of the linear predictor at the
supplied covariate values. This can be particularly useful towards
opening the black box as one learns the technique.
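A brief sketch of the lpmatrix idea (testdata is the prediction frame created earlier):

Xp = predict(mod_gam2, newdata = testdata, type = 'lpmatrix')
head(Xp %*% coef(mod_gam2))  # reproduces the linear predictor for testdata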

Model Comparison Revisited

We have talked about automated smoothing parameter and term selection, and in general potential models are selected based on estimation of the smoothing parameter. Using an extra penalty to allow coefficients to tend toward zero with the argument select=TRUE is an automatic way to go about it, where some terms could effectively drop out. Otherwise we could compare models' GCV/AIC scores (GCV scores are not useful for comparing fits of different families; AIC is still ok though), and in general either of these would be viable approaches. Consider the following comparison:
mod_1d = gam(Overall ~ s(Income) + s(Edu), data = d)
mod_2d = gam(Overall ~ te(Income, Edu, bs = "tp"), data = d)
AIC(mod_1d, mod_2d)

## df AIC
## mod_1d 15.59 476.1
## mod_2d 13.25 489.7

In some cases we might prefer to be explicit in comparing models


with and without particular terms, and we can go about comparing
models as we would with a typical GLM analysis of deviance. We have
demonstrated this previously using the anova.gam function, where we
compared linear fits to a model with an additional smooth function.
While we could construct a scenario that is identical to the GLM situ-
ation for a statistical comparison, it should be noted that in the usual
situation the test is actually an approximation, though it should be
close enough when it is appropriate in the first place. The following
provides an example that would nest the main effects of Income and
Education within the product smooth, i.e. sets their basis dimension
and smoothing function to the defaults employed by the te smooth.
mod_A = gam(Overall ~ s(Income, bs = "cr", k = 5) + s(Edu, bs = "cr", k = 5),
data = d)
mod_B = gam(Overall ~ s(Income, bs = "cr", k = 5) + s(Edu, bs = "cr", k = 5) +
te(Income, Edu), data = d)

anova(mod_A, mod_B, test = "Chi")

## Analysis of Deviance Table


##
## Model 1: Overall ~ s(Income, bs = "cr", k = 5) + s(Edu, bs = "cr", k = 5)
## Model 2: Overall ~ s(Income, bs = "cr", k = 5) + s(Edu, bs = "cr", k = 5) +
## te(Income, Edu)
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 46.3 36644
## 2 42.0 24964 4.26 11679 0.00075 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Again though, we could have just used the summary output from the
second model.
Instances where such an approach does not appear to be appropri-
ate within the context of the mgcv package are when terms are able
to be penalized to zero; in such a case p-values will be much too low.
In addition, when comparing GAMs, sometimes the nesting of models
would not be so clear when there are multiple smooths involved, and
additional steps may need to be taken to make sure they are nested.
We must make sure that each smooth term in the null model has no
more effective degrees of freedom than the same term in the alterna-
tive, otherwise it’s possible that the model with more terms can have
lower effective degrees of freedom but better fit, rendering the test
nonsensical. Wood (2006) suggests that if such model comparison is the
ultimate goal, an unpenalized approach would be best to have much
confidence in the p-values. (This can be achieved with the argument
s(..., fx=T), although now one has to worry more about the k value used,
as very high k will lead to low power. One might also examine the gss
package for an anova-based approach to generalized smoothing splines.)
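A sketch of that unpenalized approach, simply adding fx=TRUE to the
models above (the k values remain the same arbitrary choices, and the
nesting caveats just discussed still apply):

mod_A_fx = gam(Overall ~ s(Income, bs = "cr", k = 5, fx = TRUE) +
               s(Edu, bs = "cr", k = 5, fx = TRUE), data = d)
mod_B_fx = gam(Overall ~ s(Income, bs = "cr", k = 5, fx = TRUE) +
               s(Edu, bs = "cr", k = 5, fx = TRUE) + te(Income, Edu, fx = TRUE),
               data = d)
anova(mod_A_fx, mod_B_fx, test = "Chi")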
Miscellaneous

This section will discuss some ways to relate the generalized models
above to other forms of nonlinear approaches, some familiar and others
less so. In addition, I will note some extensions to GAMs to be aware of.

Relation to Some Other Nonlinear Approaches

Known Form
It should be noted that one can fit generalized additive models under a
general heading of nonlinear models whose focus may be on transformations
of the outcome (as with generalized linear models), the predictor
variables (polynomial regression and GAMs), or both (GAMs), in addition
to those whose effects are nonlinear in the parameters (for example,
various theoretically motivated models in economics and ecology). A
general form of such nonlinear models is y = f(X, β) + e. The difference
between the current presentation and those latter nonlinear models as
distinguished in typical introductory statistical texts that might cover
some nonlinear modeling is that we simply don't know the form beforehand.
In cases where the form may be known, one can use an approach
such as nonlinear least squares, and there is inherent functionality
within a standard R installation, such as the nls function. As is the
usual case, such functionality is readily extendable to a great many
other analytic situations, e.g. the gnm package for generalized nonlinear models
or nlme for nonlinear mixed effects models.
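As a toy illustration of the known-form case (the data and the
exponential decay form here are made up purely for this example):

set.seed(1)
x = runif(100, 0, 5)
y = 2 * exp(-1.5 * x) + rnorm(100, sd = .1)

# nonlinear least squares with the functional form specified explicitly
mod_nls = nls(y ~ a * exp(b * x), start = list(a = 1, b = -1))
summary(mod_nls)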

Response Transformation
It is common practice, perhaps too common, to manually transform the
response and go about things with a typical linear model. While there
might be specific reasons for doing so, the primary reason applied
researchers seem to do so is to make the distribution ’more normal’
so that regular regression methods can be applied. As an example, a
typical transformation is to take the log, particularly to tame ’outliers’
or deal with skewness.
While it was a convenience ’back in the day’ because we didn’t have
software or computing power to deal with a lot of data situations aptly,
this is definitely not the case now. In many situations it would be better
to, for example, conduct a generalized linear model with a log link or
perhaps assume a different distribution for the response directly (e.g.
skew-normal), and many tools allow researchers to do this with ease. (A
lot of 'outliers' tend to magically go away with an appropriate choice of
distribution for the data generating process.)
There are still cases where one might focus on response transformation,
just not so one can overcome some particular nuisance in trying to fit a
linear regression. An example might be in some forms of functional data
analysis, where we are concerned with some function of the response that
has been measured on many occasions over time.
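To make the contrast above concrete, compare transforming the response by
hand with modeling it directly through a log link. This is a minimal
sketch; mydata, y and x are placeholders, not objects from this document.

mod_logtrans = lm(log(y) ~ x, data = mydata)                               # models E[log(y)]
mod_loglink  = glm(y ~ x, family = gaussian(link = "log"), data = mydata)  # models log(E[y])

Note the two are not equivalent: the first models the mean of the logged
response, the second the log of the mean response, and only the latter
keeps the response on its original scale.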

The Black Box


Venables and Ripley (2002) make an interesting classification of
nonlinear models into those that are less flexible but under full user
control (fully parametric), and those that are black box techniques that
are highly flexible and fully automatic: stuff goes in, stuff comes out,
but we're not privy to the specifics. (One could probably make the case
that most modeling is 'black box' for a great many researchers. For an
excellent discussion of these different approaches to understanding data,
see Breiman (2001) and associated commentary. For some general packages
outside of R that incorporate a purely algorithmic approach to modeling,
you might check out RapidMiner or Weka; note also that RapidMiner allows
for R functionality via an add-on package, and the RWeka package brings
Weka to the R environment.)
Two examples of the latter that they provide are projection pursuit and
neural net models, though a great many would fall into such a heading.
Projection pursuit models are well suited to high dimensional data where
dimension reduction is a concern. One may think of an example where one
uses a technique such as principal components analysis on the predictor
set and then examines smooth functions of M principal components.
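A rough sketch of that last idea, using the data from this document (this
is not part of the original analysis, and the complete-casing is only for
illustration):

d_cc = na.omit(d[, c("Overall", "Income", "Edu", "Health")])
pcs  = prcomp(d_cc[, c("Income", "Edu", "Health")], scale. = TRUE)$x
d_cc$PC1 = pcs[, 1]
d_cc$PC2 = pcs[, 2]

# smooth functions of the first two principal components
mod_pp = gam(Overall ~ s(PC1) + s(PC2), data = d_cc)
summary(mod_pp)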

In the case of neural net models, one can imagine a model where the input
units (predictor variables) are weighted and summed to create hidden
layer units, which are then essentially put through the same process to
create outputs (see the simple example in Miscellaneous Fig. 1). One can
see projection pursuit models as an example where a smooth function is
taken of the components which make up the hidden layer. Neural networks
are highly flexible in that there can be any number of inputs, hidden
layers, and outputs. However, such models are very explicit in the black
box approach.

(Miscellaneous Fig. 1: A Neural Net Model, showing an input layer, a hidden layer, and an output.)

Projection pursuit and neural net models are usually found among data
mining/machine learning techniques, any number of which might be utilized
in a number of disciplines. Other more algorithmic/black box approaches
include k-nearest-neighbors, random forests, support vector machines, and
various tweaks or variations thereof, including boosting, bagging,
bragging and other alliterative shenanigans (see Hastie et al. (2009) for
an overview of such approaches).
As Venables and Ripley note, generalized additive models might be thought
of as falling somewhere in between the fully parametric and interpretable
models of linear regression and black box techniques. Indeed, there are
algorithmic approaches which utilize GAMs as part of their approach.

Extensions

Note that just as generalized additive models are an extension of the
generalized linear model, there are generalizations of the basic GAM
beyond the settings described. In particular, random effects can be dealt
with in this context as they can with linear and generalized linear
models, and there is an interesting connection between smooths and random
effects in general (Wood (2006) has a whole chapter devoted to the
subject). Generalized additive models for location, scale, and shape
(GAMLSS) allow for distributions beyond the exponential family (Rigby and
Stasinopoulos, 2005). In addition there are boosted, ensemble and other
machine learning approaches that apply GAMs as well (see the GAMens
package for example). In short, there's plenty to continue to explore
once one gets the hang of generalized additive models.
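For instance, mgcv itself can treat a simple random intercept as a smooth
term via the "re" basis. This is only a sketch; y, x and the grouping
factor g are placeholders, and gamm or gamm4 provide fuller mixed model
machinery.

# g must be a factor; s(g, bs = "re") adds a random intercept for its levels
mod_re = gam(y ~ s(x) + s(g, bs = "re"), data = mydata)
summary(mod_re)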

Conclusion
Generalized additive models are a conceptually straightforward tool
that allows one to incorporate nonlinear predictor effects into their
otherwise linear models. In addition, they allow one to keep within the
linear and generalized linear frameworks with which one is already fa-
miliar, while providing new avenues of model exploration and possibly
improved results. As was demonstrated, it is easy enough with just a
modicum of familiarity to pull them off within the R environment, and
as such it is hoped that this document provides a means to do so for
the uninitiated.

Appendix
R packages

The following is a non-exhaustive list of R packages which contain GAM
functionality. Each is linked to the CRAN page for the package. Note also
that several build upon the mgcv package used for this document.
amer: Fitting generalized additive mixed models based on the mixed
model algorithm of lme4 (gamm4 now includes this approach).
CausalGAM: This package implements various estimators for average
treatment effects.
COZIGAM: Constrained and Unconstrained Zero-Inflated Generalized
Additive Models.
CoxBoost: This package provides routines for fitting Cox models. See
also cph in rms package for nonlinear approaches in the survival con-
text.
gam: Functions for fitting and working with generalized additive mod-
els.
GAMBoost: This package provides routines for fitting generalized linear
and generalized additive models by likelihood based boosting.
gamboostLSS: Boosting models for fitting generalized additive models
for location, shape and scale (gamLSS models).
GAMens: This package implements the GAMbag, GAMrsm and GAMens
ensemble classifiers for binary classification.
gamlss: Generalized additive models for location, shape, and scale.
gamm4: Fit generalized additive mixed models via a version of mgcv’s
gamm function.
gammSlice: Bayesian fitting and inference for generalized additive
mixed models.
GMMBoost: Likelihood-based Boosting for Generalized mixed models.
gss: A comprehensive package for structural multivariate function
estimation using smoothing splines.
mboost: Model-Based Boosting.
mgcv: Routines for GAMs and other generalized ridge regression with
multiple smoothing parameter selection by GCV, REML or UBRE/AIC.

Also GAMMs.
VGAM: Vector generalized linear and additive models, and associated
models.

Miscellaneous Code

Here is code that may be useful to play around with but would have
unnecessarily cluttered sections of the paper.

ggopts
My ggplot2 default options: ggopts. Note that by actually adding the
opts line after ggopts, you can then override or change certain parts of
it on the fly while maintaining the gist. I have this in my Rprofile.site
file so that it is always available.

#set default options in case we use ggplot later


ggopts =
opts(
axis.text.x = theme_text(colour='gray50'),
axis.text.y = theme_text(colour='gray50'),
panel.background = theme_blank(),
panel.grid.minor = theme_blank(),
panel.grid.major = theme_blank(),
panel.border = theme_rect(colour='gray50'),
strip.background = theme_blank()
)

mod_gam1 plot

ggplot(aes(x=Income, y=Overall), data=d) +


geom_point(color="#FF8000") +
geom_smooth(se=F, method='gam', formula=y~s(x, bs="cr")) +
xlim(.4,1) +
ggopts

Penalized Estimation Example


Initial data set up and functions.
############################
### Wood by-hand example ###
############################

size = c(1.42,1.58,1.78,1.99,1.99,1.99,2.13,2.13,2.13,
2.32,2.32,2.32,2.32,2.32,2.43,2.43,2.78,2.98,2.98)
wear = c(4.0,4.2,2.5,2.6,2.8,2.4,3.2,2.4,2.6,4.8,2.9,
3.8,3.0,2.7,3.1,3.3,3.0,2.8,1.7)
x= size-min(size); x = x/max(x)

d = data.frame(wear, x)

#cubic spline function


rk <- function(x,z) {
((z-0.5)^2 - 1/12)*((x-0.5)^2 - 1/12)/4-
((abs(x-z)-0.5)^4-(abs(x-z)-0.5)^2/2 + 7/240) / 24
}

spl.X <- function(x,knots){


q <- length(knots) + 2 # number of parameters
n <- length(x) # number of observations
X <- matrix(1,n,q) # initialized model matrix
X[,2] <- x # set second column to x
X[,3:q] <- outer(x,knots,FUN=rk) # remaining to cubic spline
X
}

spl.S <- function(knots) {


q = length(knots) + 2
S = matrix(0,q,q) # initialize matrix
S[3:q,3:q] = outer(knots,knots,FUN=rk) # fill in non-zero part
S
}

#matrix square root function


mat.sqrt <- function(S){
d = eigen(S, symmetric=T)
rS = d$vectors%*%diag(d$values^.5)%*%t(d$vectors)
rS
}

#the fitting function


prs.fit <- function(y,x,knots,lambda){
q = length(knots) + 2 # dimension of basis
n = length(x) # number of observations
Xa = rbind(spl.X(x,knots), mat.sqrt(spl.S(knots))*sqrt(lambda)) # augmented model matrix
y[(n+1):(n+q)] = 0 #augment the data vector
lm(y ~ Xa-1) # fit and return penalized regression spline
}

Example 1.

knots = 1:4/5
X = spl.X(x, knots)        # generate model matrix
mod.1 = lm(wear ~ X - 1)   # fit model

xp <- 0:100/100            # x values for prediction
Xp <- spl.X(xp, knots)     # prediction matrix

# Base R plot
plot(x, wear, xlab='Scaled Engine size', ylab='Wear Index', pch=19,
     col="#FF8000", cex=.75, col.axis='gray50')
lines(xp, Xp%*%coef(mod.1), col='#2957FF')   # plot the fitted spline

# ggplot
library(ggplot2)
ggplot(aes(x=x, y=wear), data=data.frame(x, wear)) +
  geom_point(color="#FF8000") +
  geom_line(aes(x=xp, y=Xp%*%coef(mod.1)), data=data.frame(xp, Xp), color="#2957FF") +
  ggopts

(Margin figure: wear plotted against scaled engine size with the fitted regression spline.)

Example 2.

knots = 1:7/8
d2 = data.frame(x=xp)

for (i in c(.1, .01, .001, .0001, .00001, .000001)){
  mod.2 = prs.fit(wear, x, knots, i)   # fit penalized regression spline choosing lambda
  Xp = spl.X(xp, knots)                # matrix to map parameters to fitted values at xp
  d2[, paste('lambda = ', i, sep="")] = Xp %*% coef(mod.2)
}

(Margin figure: fitted splines for lambda = 0.1, 0.01, 0.001, 1e-04, 1e-05, 1e-06.)

### ggplot
library(ggplot2); library(reshape)
d3 = melt(d2, id='x')

ggplot(aes(x=x, y=wear), data=d) +


geom_point(col='#FF8000') +
geom_line(aes(x=x,y=value), col="#2957FF", data=d3) +
facet_wrap(~variable) +
ggopts

### Base R approach


par(mfrow=c(2,3))
for (i in c(.1,.01,.001,.0001,.00001,.000001)){
mod.2 = prs.fit(wear, x, knots,i)
Xp = spl.X(xp,knots)
plot(x,wear, main=paste('lambda = ',i), pch=19, col="#FF8000", cex=.75, col.axis='gray50')
lines(xp,Xp%*%coef(mod.2), col='#2957FF')
}

R Session Info

R version 2.15.1 (2012-06-22), x86_64-pc-mingw32

Base packages: base, datasets, graphics, grDevices, methods, stats,


utils

Other packages: car 2.0-12, ggplot2 0.9.1, MASS 7.3-19,


mgcv 1.7-19, nnet 7.3-4, psych 1.2.4, reshape2 1.2.1

Loaded via a namespace (and not attached): colorspace 1.1-1,


dichromat 1.2-4, digest 0.5.2, fortunes 1.5-0, grid 2.15.1,
labeling 0.1, lattice 0.20-6, Matrix 1.0-6, memoise 0.1, munsell 0.3,
nlme 3.1-104, plyr 1.7.1, proto 0.3-9.2, RColorBrewer 1.0-5,
scales 0.2.1, stringr 0.6, tools 2.15.1

References

Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3):199–231.

Fox, J. (2000a). Multiple and Generalized Nonparametric Regression. SAGE.

Fox, J. (2000b). Nonparametric Simple Regression: Smoothing Scatterplots. SAGE.

Hardin, J. W. and Hilbe, J. M. (2012). Generalized Linear Models and Extensions, Third Edition. Stata Press.

Hastie, T. and Tibshirani, R. (1990). Generalized Additive Models. CRC Press.

Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer.

Rigby, R. A. and Stasinopoulos, D. M. (2005). Generalized additive models for location, scale and shape. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54(3).

Ruppert, D., Wand, M. P., and Carroll, R. J. (2003). Semiparametric Regression. Cambridge University Press.

Venables, W. N. and Ripley, B. D. (2002). Modern Applied Statistics with S. Springer.

Wood, S. N. (2006). Generalized Additive Models: An Introduction with R, volume 66. CRC Press.
