  
  1. Of the 5 glm() models you ran, which appears to be the best model? Support your answer by providing the relevant information from your R output. You should also provide ΔAIC values for all 5 models in your answer.
  2. Of your 5 models, what was the model weight (Akaike weight) of the second best model? Show your calculation of this value.
  3. Provide the image of the graph showing the confidence and prediction limits for the role of latitude in explaining elevation of ponderosa pine.
  4. What are the 95% confidence and prediction limits, as well as the predicted point value, of elevation for ponderosa pine at 39.006 degrees N latitude?
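For question 2, the Akaike weight of model i is w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i is that model's ΔAIC. A minimal R sketch with made-up AIC values (substitute the AIC values from your own glm() fits):

```r
# Hypothetical AIC values for five candidate models -- replace with your own
aic   <- c(m1 = 812.4, m2 = 814.1, m3 = 820.7, m4 = 825.3, m5 = 831.0)
delta <- aic - min(aic)                       # delta-AIC for each model
w     <- exp(-delta/2) / sum(exp(-delta/2))   # Akaike weights (sum to 1)
round(cbind(deltaAIC = delta, weight = w), 3)
```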
[trees.csv data, not reproduced here: 499 sample points (~100 per species) with columns Type (species code: 1 = PIPO, 2 = ABLA, 3 = JUSC, 4 = PIED, 5 = CEGI), Long, lat, Elev, and MeanTemp. Download the trees.csv file for the assignment rather than copying values from this page.]

Assignment #2: Regression Models

For this assignment you will need to download the trees.csv file. This file contains data on the elevational distribution of five ‘tree’ species. For your analysis you will be looking at variables that may influence the elevation at which the species is found. During this process, you will hopefully learn a bit about linear regression models and aspects of evaluating models.

The data for this analysis were created by randomly selecting points (n=100 for most species) using GIS maps of the distribution of the species. These were then intersected with digital elevation models and the mean annual temperature (Bio_1) layer from WorldClim (http://www.worldclim.org/). We are looking at five species: PIPO (Ponderosa pine, Pinus ponderosa), ABLA (Subalpine fir, Abies lasiocarpa), JUSC (Rocky Mountain juniper, Juniperus scopulorum), PIED (Piñon pine, Pinus edulis), and CEGI (Saguaro, Carnegiea gigantea).


Types of Variables and Other Terminology

In regression, we consider two broad types of information used in modeling. We are interested in which features influence something about another feature. The feature being influenced is a variable that we call the 'response variable'. Another common term for this is the 'dependent variable', as its value depends on the values of other variables. The variables that we believe influence the response variable are called 'explanatory variables' because they explain the response variable. Other common terms for these variables are 'independent variable' or 'covariate'.

In a simple example of a regression model we have the equation of a line: Y = a + bX (which you may have seen before as Y = mX + b). We will make a simple change of notation to make this into standard regression 'language'. The a will be called β0 and the b becomes β1, so our equation is now Y = β0 + β1X. In this case, X represents a single explanatory variable and Y represents the response variable. What do β0 and β1 represent? These are called 'coefficients'. They are parameters we estimate during the regression modeling process. In this case, you probably already know that the slope is b (the m of mX+b) and the intercept is a. In the following you will see references to the 'intercept' and the 'slope coefficients', which indicate we are dealing with linear models. In many linear models the equation is more complex and may not produce a true line, but these are really expansions of this basic equation. You may also see the coefficients referred to as 'betas' at times, referring to the notation using the Greek letter β.

For this assignment, we are interested in what variables may explain the elevational distribution of these species, so elevation (in m) is our response variable. Our possible explanatory variables are latitude and longitude (in decimal degrees) and mean annual temperature (in 10ths of deg C). Because we may be asking about differences among the species, the species can also be considered an explanatory variable.

Now let’s open R and get to work. Remember to set your working directory following the procedure used in the Week 1 Assignment. Once you have done this, you can import the data in trees.csv into an R object called ‘trees’ and then we will perform some basic manipulations that will be necessary for our analysis. Remember that all items below that are in bold, italic are R commands.

Let’s import these data using the read.csv function, and then let R know that the Type column should be treated as a factor – or nominal (name) variable.

trees<-read.csv(file.choose(),header=T)

trees$Type<-factor(trees$Type, labels=c('PIPO','ABLA','JUSC','PIED','CEGI'))

Because the temperatures in these data are in tenths of a degree Celsius, we want to convert them to degrees Celsius by dividing them by 10 and placing that result back in the original column.

trees$MeanTemp<-trees$MeanTemp/10

Let's summarize and visualize these data to get a feel for them, using boxplots to see the distributions by species. We use the summary() and boxplot() functions you learned last week to obtain summary values for each variable and produce potentially useful boxplots. The par() function changes the graphics window so that all four boxplots are produced in the same window.

summary(trees)

par(mfrow=c(2,2),mar=c(5,4,4,4))

boxplot(trees$Elev~trees$Type, col=c(2,3,5,6,7), main='Elevational Distribution by Species', ylab='Elevation (in m)', xlab='Species')

boxplot(trees$Long~trees$Type, col=c(2,3,5,6,7), main='Longitudinal Distribution by Species', ylab='Longitude', xlab='Species')

boxplot(trees$lat~trees$Type, col=c(2,3,5,6,7), main='Latitudinal Distribution by Species', ylab='Latitude', xlab='Species')

boxplot(trees$MeanTemp~trees$Type, col=c(2,3,5,6,7), main='Mean Temperature by Species', ylab='Temperature (deg C)', xlab='Species')

We may also want to look at a scatterplot of these data to see how latitude, longitude, and species – our explanatory variables – are possibly related. We will essentially be creating a map of the distribution of these species. Keep in mind it will be somewhat distorted, because degrees of latitude and longitude will be drawn the same size when they really are not – the 'length' of a degree of longitude shrinks as you move away from the equator. We need to reset our graphics parameters to a single box, and then use the plot() function to create an empty plot (the type='n' argument), specifying that longitude is on the x-axis and latitude on the y-axis. We then use the points() function to add the locations of each species' samples with a different color and marker, and add a legend with the legend() function.

par(mfrow=c(1,1),mar=c(5,4,4,2)+0.1)

plot(trees$Long, trees$lat, type='n', bty='n', main='Geographic Distribution by Species', ylab='Latitude', xlab='Longitude')

points(trees$Long[trees$Type=='PIPO'], trees$lat[trees$Type=='PIPO'], col=2, pch=2)

points(trees$Long[trees$Type=='ABLA'], trees$lat[trees$Type=='ABLA'], col=3, pch=3)

points(trees$Long[trees$Type=='JUSC'], trees$lat[trees$Type=='JUSC'], col=4, pch=4)

points(trees$Long[trees$Type=='PIED'], trees$lat[trees$Type=='PIED'], col=5, pch=5)

points(trees$Long[trees$Type=='CEGI'], trees$lat[trees$Type=='CEGI'], col=6, pch=6)

legend('bottomleft', title='Species', c('Ponderosa','Subalpine Fir','Rocky Mtn Juniper','Pinon','Saguaro'), col=c(2,3,4,5,6), pch=c(2,3,4,5,6))

Okay, now let's build some regression models. In our first case we will use simple linear regression, so we are basically applying the Y = β0 + β1X formula. You may have noticed in one of your boxplots that there is a lot of overlap among most of these species in their elevation. However, remember that boxplots show the variation in the sample; maybe we want to see if the mean elevations are different, which is often the question of interest. We can do that by finding the confidence intervals for each species, and we'll use a linear model to find those intervals. In R there are multiple ways to fit regression models. The most basic uses the lm() function (which stands for 'linear model'), although the glm() function (for 'generalized linear model') can fit a wider range of models. We will use the lm() function here. Note that in the parentheses there is a formula similar to that used in the boxplot() function – this is specifying the regression model to fit. The response variable goes on the left and the explanatory variable(s) go on the right, with a ~ representing the equal sign. For Y = β0 + β1X we would write Y~X. Note that we do not specify the intercept term because it is fitted by default. The data argument just tells R that the data are in the object 'trees' so we don't have to write trees$Elev~trees$Type. The results of the model fit are placed in the object mod.type.

mod.type<-lm(Elev~Type, data=trees)

We can now look at the fitted model using the summary() function.

summary(mod.type)

Be sure to look at this output carefully and identify the features you recognize from previous statistics classes. Why do we have 5 coefficients reported (an intercept and one for each of 4 of the 5 tree species)? Shouldn't there be just two? Remember that the explanatory variable is a nominal variable (a factor). It is not the typical X variable of the equation of a line. In reality, we just fitted the following model:

Elev = β0 + β1(ABLA) + β2(JUSC) + β3(PIED) + β4(CEGI)

The β1, β2, β3, and β4 are the coefficients reported in the summary of this model for the ABLA, JUSC, PIED, and CEGI types, respectively. Remember that we wanted the mean elevation for each of the tree species and to know whether they differ from each other. How do we get that? In the equation above, the variables for the tree types (ABLA, JUSC, etc.) are really just 0's or 1's. For example, if we want the mean elevation for ABLA (subalpine fir) in our dataset, the equation becomes:

Elev = β0 + β1(1) + β2(0) + β3(0) + β4(0)

Because we're multiplying 3 of those coefficients by zero, they drop out, so the equation really is:

Elev = β0 + β1

The mean elevation of subalpine fir in our sample is 1368 meters. We can verify this using the mean() function:

mean(trees$Elev[trees$Type=='ABLA'])
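Equivalently, assuming mod.type has been fitted as above, the same mean can be recovered from the coefficients themselves (the intercept plus the ABLA slope coefficient):

```r
coef(mod.type)   # point estimates of all coefficients
# Mean ABLA elevation = intercept (PIPO mean) + ABLA slope coefficient
coef(mod.type)["(Intercept)"] + coef(mod.type)["TypeABLA"]
```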

So why did we do all this if we could have just used the mean()? Hopefully this has helped you understand what regression models are doing – in this case finding the coefficients that best fit the data to produce an estimate of a response variable based on combinations of explanatory variables. Take some time to make sure you understand this procedure as this is the basis of all regression models, including the more complex forms of generalized linear regression such as logistic regression, Poisson regression, etc.

Before we move on, you may notice that the tree type 'PIPO' is not present in the output. Why? Ponderosa pine is the intercept in this case, so the mean elevation of ponderosa pine is given by the intercept = 1429.18 m. The elevation of every other species is expressed relative to this value, so we know that subalpine fir has a mean elevation 61.26 m lower than ponderosa pine – each slope coefficient gives the amount by which that species is higher or lower than the reference (intercept) type. This gives us information we didn't get using just the mean. If a slope coefficient doesn't differ from zero, then that variable doesn't add any explanatory value to the intercept. The coefficients reported above are the point estimates of the coefficient values; we really want to see what the 95% confidence interval estimates of these coefficients are. We do this using the confint() function.

confint(mod.type)

Note that the confidence interval for the subalpine fir coefficient goes from -211 to 89 meters – it includes the value zero. Remember these are slope coefficients. What type of line has a slope of 0? A horizontal line. This means that we really cannot conclude that the mean elevation of subalpine fir differs from the mean elevation of ponderosa pine.

Question note: Make sure you record the values for the point and interval estimates of all coefficients from this model for use in answering the questions.

From your previous statistics classes you may have recognized that the above question could be asked using Analysis of Variance (ANOVA) because we have a continuous response variable and a categorical (factor) explanatory variable. We can show the ANOVA table using the anova() function specifying our model object in the parentheses.

anova(mod.type)

Compare the results of this table with those provided in the summary(mod.type) output. Note that you see an F value (93.066) and a p-value (< 2.2×10^-16) for the test of the null hypothesis "Tree species do not differ in elevation" in both the ANOVA table and the summary output. This is a key point to recognize – ANOVA is based on linear regression; it is not a separate type of analysis. The difference is the information you can utilize from the analyses – note the different outputs – which one produces more information?

The t-test you probably remember is just a simplified version as well, for the case where there are only two categories involved. If these data were in the right format (they weren't collected to meet the needed assumptions), we would find that the t.test() function would estimate the difference in mean elevation between ponderosa pine and subalpine fir exactly the same as the coefficient for subalpine fir, and it would report the same t-statistic and p-value reported in the summary of the linear model for the subalpine fir coefficient (i.e., t = -0.799, p = 0.42455). We see that the t-test and ANOVA are just special cases of linear regression that emphasize significance testing, whereas the regression approach allows us to work in both the significance-testing and information-theoretic (discussed shortly) frameworks, and provides substantially more information about the relationship than the other two approaches.

What if we wanted confidence intervals on the mean elevation of each tree species, rather than on the difference in elevation of the four tree species from ponderosa pine? To obtain those, we specify which tree type we want to be the intercept in our regression model using the relevel() function. Because ponderosa pine is already the intercept, we can get its 95% confidence interval directly using

(pipo.cl<-confint(mod.type)[1,])

The [1,] is telling R to extract the first listed confidence interval (row 1) from the output. We now use relevel() to set subalpine fir as the intercept, and then rerun the regression and extract the confidence interval on the intercept.

trees$Type<-relevel(trees$Type,'ABLA')

mod.type<-lm(Elev~Type, data=trees)

(abla.cl<-confint(mod.type)[1,])
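The same relevel-and-refit pattern extends to the other species. One possible sketch (each pass through the loop makes a different species the intercept and prints the interval on its mean elevation):

```r
for (sp in c('JUSC', 'PIED', 'CEGI')) {
  trees$Type <- relevel(trees$Type, sp)   # make this species the intercept
  m <- lm(Elev ~ Type, data = trees)      # refit the model
  print(confint(m)[1, ])                  # 95% CI on that species' mean elevation
}
```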

Question note: You should now be able to find the 95% confidence intervals on mean elevation for the remaining 3 tree species (JUSC, PIED, CEGI). Keep these confidence intervals for all 5 species for use in answers.

Maximum Likelihood Estimation (MLE) and the Information-Theoretic Approach with AIC

Maximum likelihood estimation underlies much of modern statistics. The concept was first described by Sir R.A. Fisher in 1912 when he was an undergraduate at Cambridge. It works on a really basic premise – the best estimate for a given parameter (e.g., the mean of the population being sampled) is the one that makes the presumed distribution fit the data better than other such estimates. For example, the normal distribution has a probability density function (pdf) given by f(x) = (1/(σ√(2π))) e^(-(x-μ)²/(2σ²)), which produces the readily recognizable bell-shaped curve. For a mean of 50 and standard deviation of 15, it should be intuitive that the highest density value, 0.0266 in the figure below, should be associated with the true mean of the distribution, i.e., the normal pdf is maximized at the value of the mean.

Figure 1. PDF of Normal Distribution w/mean=50 & sd=15
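You can verify this maximum density value directly in R:

```r
dnorm(50, mean = 50, sd = 15)   # density at the mean: 1/(15*sqrt(2*pi)) = 0.0266
```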

Similarly, if we have some data and a hypothesized model to fit them, we should pick the parameters of that model (mean and standard deviation in the above case) that make the model's density function fit the observed data the 'best'. Note that this is basically a probability statement, but with a twist. If we go with the standard notation of probability, we can make a statement like 'the probability of x given the model parameters', written Pr{x | parameters}. For the figure above, this would be Pr{50 | 50, 15} = 0.0266, which happens to be the maximum that can be obtained. Likelihoods just turn the statement around to reflect what researchers actually do – we have data and we want to know the underlying parameter values – so we refer to the 'likelihood of the model parameters given the data', or L(parameters | data). We want the parameter values that maximize the likelihood function built with the observed data, and we use those as our best estimates of the underlying population values we are trying to estimate.

Let's illustrate this with a simple probability example, which is directly comparable to flipping a coin. Let's assume we have 100 animals radiocollared and we follow them for some set period, say a year. During this period, 18 of the animals die. We want to know the best estimate of the yearly survival rate. Intuitively, we had 82 animals out of 100 survive, so we'd think 82/100 = 0.82 is our best guess. Now let's look at it from an MLE standpoint: what is the likelihood of 0.82 (= p) given N = 100 trials and y = 82 successes? This follows a classic binomial (or Bernoulli) distribution, which has a likelihood function of L(p | N, y) = [N!/(y!(N-y)!)] p^y (1-p)^(N-y). We see in the figure below that, in fact, this function is maximized when p is set to 0.82, resulting in a likelihood value of 0.103.

Figure 2. Binomial likelihood function w/N=100, y=82.
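A brute-force way to see this in R is to evaluate the binomial likelihood over a grid of candidate values of p:

```r
p   <- seq(0.01, 0.99, by = 0.001)
lik <- dbinom(82, size = 100, prob = p)   # likelihood of each candidate p
p[which.max(lik)]                         # maximized at p = 0.82
max(lik)                                  # likelihood value of about 0.103
```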

So why don't we just use the estimator p = y/N? Well, we do. This is the 'closed form' solution to finding the maximum of the above likelihood. For those of you with calculus backgrounds, you will remember that finding maxima (or minima) of functions is done by setting the first derivative (partial, with respect to the parameter of interest) of the function equal to 0 and solving. The following figure shows the value of the first derivative of this function as p varies, and we see that it is 0 when p = y/N = 0.82.


Figure 3. First derivative of binomial likelihood with N=100, y=82.

y/N is the solution when the first derivative is set equal to 0 and is known as the maximum likelihood estimate for a Bernoulli trial situation. Many of our common estimators are MLE’s for the same reason – they are the closed form solutions to the first derivative of the likelihood when it is set to zero. However, for many of our applications, there is no closed form estimator available, so we must use numerical optimization routines to develop our MLE of the parameter of interest.
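R's general-purpose one-dimensional optimizer can find the same maximum numerically, which is the approach needed when no closed-form solution exists:

```r
# Numerically maximize the binomial likelihood in p over (0, 1)
optimize(function(p) dbinom(82, 100, p), interval = c(0, 1), maximum = TRUE)
# $maximum is approximately 0.82; $objective is approximately 0.103
```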

For several reasons we often use log-likelihoods instead of likelihoods. If you look at Fig 2 above, the value of the likelihood is extremely small (essentially 0) for much of the range of p, which can create issues due to rounding to 0, especially in a numerical optimization. In addition, logarithms often simplify calculation by making multiplicative functions additive, which can also help in numerical optimization; e.g., p^y(1-p)^(N-y) becomes y·log(p) + (N-y)·log(1-p). This results in the following likelihood shape (compare to Fig 2).


Figure 4. Log-likelihood function of binomial w/N=100, y=82.

Why MLE’s?

Okay, so why do we care about using Maximum Likelihood Estimators (as opposed to other forms of estimators that aren’t MLE’s)? Well, for several reasons, all of which imply that there are no better estimators available given a large sample size. The likelihood principle states that the likelihood function contains all relevant information from a sample. Some reasons for using MLE’s are:

· They are asymptotically (i.e., at large sample sizes) normally distributed

· They have minimum variance (therefore provide more precise estimates)

· They are asymptotically unbiased.

· They are efficient (i.e., extract information most effectively, hence the minimum variance)

· They relate to Fisher information, which enables use in model selection, among other things.

Comparison of likelihoods and least squares

Most people are introduced to regression (and ANOVA) through the method of least squares; you have probably heard of 'Sum of Squares' and seen it calculated as SS = Σ(xᵢ - x̄)², where xᵢ is an individual data point and x̄ is the sample mean. This is the sum of the squared residual values. For data with normally-distributed errors, it happens that the MLE of the regression is the value that minimizes the sum of squares, which is why it is called 'least squares'. In simple linear regression, these can be solved with closed-form equations, which are the series of least squares equations you may have had to calculate at some point in previous statistics classes (SST, SSE, MSE). In log-likelihoods involving normally-distributed error, the deviance is equivalent to the sum of squares term in least squares.

Profile likelihood confidence intervals

Remember that deviance is a general form of the sum of squares – it gets at the amount of residual variation that is unexplained by the model. It is also approximately chi-square distributed with 1 degree of freedom. This leads to the ability to generate confidence intervals based on the likelihood function. The χ² value with 1 df for 95% confidence is 3.84, and half of that is 1.92. If we subtract 1.92 from the maximum value of the log-likelihood and find the parameter values that produce this log-likelihood value, we get the 95% confidence limits for our parameter estimate. In the example we've been using, we have a point estimate of 0.82, which was produced when the log-likelihood was maximized at -2.269711. Subtracting 1.92 from this gives us -4.189711. We then find that the log-likelihood equals -4.189711 where p = 0.737 and p = 0.887. Therefore our 95% profile likelihood confidence interval is [0.737, 0.887]. We can see this in a zoomed-in version of figure 4:


Figure 5. Profile likelihood confidence intervals.
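The interval can also be found numerically. A sketch, assuming the 82-successes-in-100-trials example and dropping the constant binomial-coefficient term (it cancels out of the comparison):

```r
# Profile likelihood 95% CI for a binomial proportion (82 successes, 18 failures)
ll <- function(p) 82 * log(p) + 18 * log(1 - p)   # log-likelihood, constant dropped
p.hat <- 0.82                                     # the MLE
target <- ll(p.hat) - qchisq(0.95, df = 1) / 2    # drop of 1.92 below the maximum
lower <- uniroot(function(p) ll(p) - target, c(0.5, p.hat))$root
upper <- uniroot(function(p) ll(p) - target, c(p.hat, 0.99))$root
round(c(lower, upper), 3)                         # approximately 0.737 and 0.887
```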

You may notice that the confidence interval is not symmetrical about the point estimate (0.82), extending further on the lower end than on the upper end. If you look at figure 3, you will notice that the log-likelihood function is not symmetrical: it is steeper on the right side than on the left side. It should not surprise us, then, that the confidence interval is asymmetric, and in fact we should not expect symmetry unless we assume the error is normally distributed. You should note that when you request confidence intervals on coefficients from models fit with glm() using confint(), you are getting profile likelihood intervals.

Information Theory and Akaike’s Information Criterion (AIC)

AIC stands for 'An Information Criterion' but is more commonly known as "Akaike's Information Criterion" because it was developed by Hirotugu Akaike. AIC is based on the Fisher information from the log-likelihood (the negative second derivative). Recall that this is related to the deviance. Specifically, AIC = -2 log L + 2K, where K is the number of parameters in the model and log L is the value of the log-likelihood function at its maximum. In our example above, if you remove the constant involving the factorials, the maximum value of the log-likelihood is -47.13935. There is only one parameter, p, to be estimated in the model, therefore AIC = -2(-47.13935) + 2(1) = 96.2787. This is the same AIC that R would report if you ran a GLM on a factored vector, success, consisting of 82 1's and 18 0's. Run the following code:

success<-factor(c(rep(1,82),rep(0,18)))

(glm(success~1,family=binomial(link='logit')))

We have used the glm() function to build our model, which presents a somewhat different output from the lm() function. Note the AIC is reported at the end, and that this matches the value shown above. Also note that our formula was success~1. We have not specified any explanatory variables. The 1 indicates we want to fit an intercept-only model to these data.
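You can verify the reported AIC by hand from the log-likelihood:

```r
# AIC = -2 log L + 2K for the intercept-only binomial model
logL <- 82 * log(0.82) + 18 * log(0.18)  # maximized log-likelihood, -47.13935
K <- 1                                   # one estimated parameter (the intercept)
-2 * logL + 2 * K                        # 96.2787, matching glm()'s reported AIC
```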

If models are based on the same underlying data, then they can be compared using AIC. Models with lower AIC values fit the data better than those with higher AIC values. Note that there are two reasons for this:

1. they have higher maximized log-likelihoods, so the -2 log L term is smaller, and

2. more complex models will have a higher ‘parameter penalty’ (the 2K part). AIC is trying to combine fit with model simplicity, with the goal of trying to find the simplest model that fits the data well – this follows the ‘Principle of Parsimony’ or ‘Occam’s Razor’.

We usually compare models using delta AIC (ΔAIC), which for each model is the difference ΔAIC = AICmodel - AICbest, so the best model has ΔAIC = 0. Note that there isn't a clear-cut rule for when a model is not competitive; however, a common recommendation (Burnham and Anderson 2002) is that if ΔAIC < 2 then the model is competitive, with support for the model dropping off quickly as ΔAIC gets larger, and ΔAIC > 7 indicating highly improbable models. Another common way of looking at models is based on Akaike weights, which express the weight of evidence in support of the given model. Akaike weights are given by wi = exp(-ΔAICi/2) / Σj exp(-ΔAICj/2), where the denominator is summed over all models in your model set. We can then compare two models, 1 and 2, with weights w1 and w2, using the evidence ratio = w1/w2. For example, let's look at the following 5 models with ΔAIC's of 0, 1.2, 2, 3.2, and 7. Based on these values we get the following, where the evidence ratio compares the best model with the model listed (e.g., model 1 is w1):

Model   ΔAIC   wi      Evidence Ratio
1       0      0.465   1.00
2       1.2    0.255   1.82
3       2      0.171   2.72
4       3.2    0.094   4.95
5       7      0.014   33.1

We see that model 1 has 46.5% of the weight of evidence, but model 2 still has 25.5% of the weight. The odds of model 1 being the best model over model 4 are nearly 5 to 1 (explicitly, it is 4.95 times more likely than model 4). The nice thing about these approaches is that they let us quantify the uncertainty involved in the model selection process and carry it forward, for example through model averaging, in which we don't have to select a single 'best model'.
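The weights and evidence ratios in the table can be computed directly:

```r
# Akaike weights and evidence ratios for the example Delta-AIC values
dAIC <- c(0, 1.2, 2, 3.2, 7)
w <- exp(-dAIC / 2) / sum(exp(-dAIC / 2))   # Akaike weights
round(w, 3)                                 # 0.465 0.255 0.171 0.094 0.014
round(w[1] / w, 2)                          # best model vs. each model in turn
```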

One last thing concerning AIC. It is recommended that we use the small sample size correction form of AIC, AICc, which slightly increases the weight of the 'parameter penalty'. Specifically, AICc = AIC + 2K(K+1)/(n - K - 1), where n is the sample size.
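Base R does not report AICc directly; a small helper function (an assumed convenience, not part of base R or the assignment) could implement the formula above:

```r
# Small-sample corrected AIC: AICc = AIC + 2K(K+1)/(n - K - 1)
AICc <- function(fit) {
  K <- attr(logLik(fit), "df")  # number of estimated parameters (includes sigma)
  n <- nobs(fit)                # sample size
  AIC(fit) + 2 * K * (K + 1) / (n - K - 1)
}
AICc(lm(dist ~ speed, data = cars))   # example use on a built-in dataset
```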

Model Selection

Now let's return to our tree dataset to compare several models and see which explanatory variables produce the model that best fits these data. We are going to use a generalized linear model with normally-distributed error and the 'identity' link function. We will have three different models. We have already run the model above (mod.type) that has tree species as the only explanatory variable; next we'll use both tree species and latitude as explanatory variables in an additive model. First let's rerun our first model using glm(), place the results in object 'gmod.type', and see how the results compare to the previous output.

gmod.type<-glm(Elev~Type, data=trees)

mod.type

gmod.type

Now let’s run a multiple linear regression with tree type and latitude as the explanatory variables.

gmod.typelat<-glm(Elev~Type+lat, data=trees)

summary(gmod.typelat)

We’ll also show the 95% confidence intervals on the coefficients. Note that there is a line at the top printed while the function was running that states “Waiting for profiling to be done …”. R is calculating the intervals using the profile likelihood approach as discussed above.

confint(gmod.typelat)

Then we'll use only latitude as an explanatory variable. (A side note on transformations: if a variable is heavily skewed, taking its natural logarithm can make the data more normally distributed. The values reported in the output are then on the log scale, so to report them on the original scale you must back-transform with e^value. The variables here are left untransformed, as in the code below.)

gmod3.lat<-glm(Elev~lat, data=trees)

summary(gmod3.lat)

confint(gmod3.lat)

So now we have three linear models, two with a single explanatory variable each, and one with these two variables both included in an additive model. Which one of these models best describes (or 'fits') these data? This is where AIC is useful. Remember that as AIC gets smaller the model appears to fit the data better. In addition, adhering to the 'principle of parsimony', large complex models with many terms (and therefore many parameters) are penalized relative to simpler models. AIC then provides a means of ranking models with a preference toward finding the simplest model that fits the data well. This is the basis of the information-theoretic approach to statistical inference that predominates ecological statistics today (see Burnham and Anderson 2002). Anderson et al. (2000) provide a short, accessible overview of the information-theoretic approach.

So let’s look at the AIC’s for our models, which are extracted using the AIC() function:

AIC(gmod.type)

AIC(gmod.typelat)

AIC(gmod3.lat)

We can clearly see that the addition of latitude as an explanatory variable made the model fit the data much better. In practice, we use delta AIC (ΔAIC) to compare each model with the model that has the lowest AIC. There is no hard and fast 'rule', but models with ΔAIC > 2 are generally considered not particularly competitive. However, there is some inferential uncertainty here, and we can capture that with 'model averaging', but that is beyond our scope today.
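As a self-contained illustration of the ΔAIC comparison (using R's built-in mtcars data as a stand-in for the trees dataset; the models here are illustrative):

```r
# Compare three candidate models with Delta-AIC; the best model gets 0
m1 <- glm(mpg ~ wt, data = mtcars)
m2 <- glm(mpg ~ wt + hp, data = mtcars)
m3 <- glm(mpg ~ hp, data = mtcars)
aics <- c(wt = AIC(m1), wt.hp = AIC(m2), hp = AIC(m3))
sort(aics - min(aics))   # Delta-AIC values, smallest (0) first
```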

Question note: You should now be able to run additional models. Run at least two additional models, one with a single explanatory variable, and one with two explanatory variables. Record the output for all 5 GLM models, including confidence intervals on the coefficients. In addition, note the AIC for all 5 models you have run. You will use this information to answer questions.

Let's conclude the assignment by looking at the confidence intervals and prediction intervals of our current best model (gmod.typelat). Remember that this additive model, with latitude and tree species as explanatory variables, was the best. We will focus on how well this model describes the elevational distribution of ponderosa pine.

First, we need to create an object that contains latitude values representing the range observed in our dataset for ponderosa pine. The following code does this by creating an object ‘lat’. It then adds a column for type that only has PIPO in it. We need this two column object, newpred, to have both of the explanatory variables in it. R will apply the coefficients from the model to the data in this prediction object to create new objects representing the confidence and prediction intervals.

lat<-seq(min(trees$lat[trees$Type=='PIPO']),max(trees$lat[trees$Type=='PIPO']),.1)

Type<-rep('PIPO',length(lat))

newpred<-data.frame(lat,Type)

Now we need to create objects containing the upper and lower confidence and prediction limits from our best model using the predict() function. The interval argument of predict() is only available for objects created by lm(), so we need to refit the best model in that form using

mod.typelat<-lm(Elev~Type+lat, data=trees)

Which we now can use to create the 95% limits using

pipo.cl<-predict(mod.typelat,newpred,interval='confidence')

pipo.pl<-predict(mod.typelat,newpred,interval='prediction')

We will create a graph showing the data for ponderosa pine with latitude on the x-axis and elevation on the y-axis.

plot(trees$lat[trees$Type=='PIPO'],trees$Elev[trees$Type=='PIPO'],type='p',pch=20,col=2,main='Relationship of Latitude and Elevation for Ponderosa',xlab='Latitude',ylab='Elevation')

We then add lines showing the point estimate and 95% confidence limits for elevation at each latitude.

lines(newpred[,1],pipo.cl[,1],lwd=2,col=4)

lines(newpred[,1],pipo.cl[,2],lty=2,col=4)

lines(newpred[,1],pipo.cl[,3],lty=2,col=4)

And similarly we add the 95% prediction limits for elevation of ponderosa pine at each latitude. We will also add a legend to the graph.

lines(newpred[,1],pipo.pl[,2],lty=4,col=6)

lines(newpred[,1],pipo.pl[,3],lty=4,col=6)

legend('bottomleft',c('Predicted Value','95% Confidence Limits','95% Prediction Limits'),col=c(4,4,6),lty=c(1,2,4),lwd=c(2,1,1))

Question note: Save an image of this graph for the questions.

Note that we are using this model both to explore the explanatory potential of variables (confidence limits) and to predict particular values (prediction limits). Observe the graph and see how the confidence and prediction intervals differ. For example, assume we are at latitude 33 deg N and want to know the mean elevation at which ponderosa pine might be found at this latitude. We can find the point estimate manually using the equation Elev = intercept + (-48.57)*latitude + 1278.44*PIPO, or Elev = 2130.09 - 48.57(33) + 1278.44(1) = 1806 m.

The following R code extracts each coefficient from the model output and then runs this equation. The [[1]] extracts the coefficient vector from the model object; the number in single brackets that follows indicates which coefficient. This follows the listing shown in the object display, so the intercept is [1], the PIPO coefficient is [5], and the latitude coefficient is last at [6] (there are 6 coefficients in total). Note that we have a PIPO coefficient because we have not releveled back to PIPO as the intercept.

(int<-mod.typelat[[1]][1])

(lat.coef<-mod.typelat[[1]][6])

(pipo.coef<-mod.typelat[[1]][5])

int+lat.coef*33+pipo.coef
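As a self-contained check of the same arithmetic, using the coefficient values quoted above (your own output may differ slightly in later decimal places):

```r
# Predicted mean elevation of ponderosa pine at 33 deg N latitude
int <- 2130.09; lat.coef <- -48.57; pipo.coef <- 1278.44
int + lat.coef * 33 + pipo.coef   # approximately 1806 m
```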

Question note: Make sure you have the coefficients for the mod.typelat model recorded. You will be asked to calculate the predicted elevation for a different tree species and different latitude as a question.

Note that this result is the predicted mean value on the graph for an x-axis value of 33 degrees latitude. We can see the values for the confidence and prediction limits by displaying these objects.

pipo.cl

pipo.pl

Our example latitude was very close to the latitude in the 99th row of these objects, so we can show them with

pipo.cl[99,]

pipo.pl[99,]

We believe the true mean elevation of ponderosa pine at 33 deg N latitude is between 1692 and 1919 meters. However, if someone asked us to guess the elevation of a single ponderosa pine that they found at 33 deg N latitude, we would say that it is probably between 854 and 2756 meters.

Question note: You will be asked to provide the 95% confidence and prediction limits for ponderosa pine at a different latitude based on the pipo.cl and pipo.pl objects. Know how to access these objects to find those values.

Literature Cited

Anderson, D. R., K. P. Burnham, and W. L. Thompson. 2000. Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64(4):912-923.

Burnham, K. P., and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach, 2nd edition. Springer-Verlag, New York, NY.

Questions: Based on the analysis you ran, answer the following:

1. What are the point estimate and 95% confidence intervals for the coefficients from the lm() regression of tree type explaining elevation?

2. What are the 95% confidence intervals on the mean elevation for the 5 tree species?

3. Based on your answers in #2, which species differ from subalpine fir in mean elevation? From piñon?

4. Of the 5 glm() models you ran, which appears to be the best model? Support your answer by providing the relevant information from your R output. You should also provide ΔAIC values for all 5 models in your answer.

5. Of your 5 models, what was the model weight (Akaike weight) of the second best model? Show your calculation of this value.

6. Provide the image of the graph showing the confidence and prediction limits for the role of latitude in explaining elevation of ponderosa pine.

7. What is the predicted elevation, in meters, for juniper (JUSC) at 38.5 deg N latitude?

8. Provide the 95% confidence and prediction limits, as well as the predicted point value, of elevation for ponderosa pine at 39.006 degrees N latitude.

9. What is the difference between confidence and prediction intervals? Provide an example of when you would use each.
