This tutorial will review how to import spatial points stored in .csv (Comma
Separated Value) format into
R as a spatial object - a SpatialPointsDataFrame. We will also
reproject data imported in a shapefile format, export a shapefile from an
R spatial object, and plot raster and vector data as
layers in the same plot.
Learning Objectives
After completing this tutorial, you will be able to:
Import .csv files containing x,y coordinate locations into R.
Convert a .csv to a spatial object.
Project coordinate locations provided in a Geographic
Coordinate System (Latitude, Longitude) to a projected coordinate system (UTM).
Plot raster and vector data in the same plot to create a map.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Spatial Data in Text Format
The HARV_PlotLocations.csv file contains x, y (point) locations for study
plots where NEON collects data on
vegetation and other ecological metrics.
We would like to:
Create a map of these plot locations.
Export the data in a shapefile format to share with our colleagues. This
shapefile can be imported into any GIS software.
Create a map showing vegetation height with plot locations layered on top.
Spatial data are sometimes stored in a text file format (.txt or .csv). If
the text file has an associated x and y location column, then we can
convert it into an R spatial object, which, in the case of point data,
will be a SpatialPointsDataFrame. The SpatialPointsDataFrame
allows us to store both the x,y values that represent the coordinate location
of each point and the associated attribute data, or columns describing each
feature in the spatial object.
**Data Tip:** There is a `SpatialPoints` object (not
`SpatialPointsDataFrame`) in R that does not allow you to store associated
attributes.
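As a minimal sketch of the difference (the coordinate values and the plotID attribute below are made up for illustration):

```r
library(sp)

# a SpatialPoints object stores coordinates only
pts <- SpatialPoints(cbind(c(731405, 731934), c(4713456, 4713415)))

# a SpatialPointsDataFrame pairs those coordinates with an attribute table
ptsDF <- SpatialPointsDataFrame(pts,
                                data = data.frame(plotID = c("A", "B")))

# attribute columns are accessible on the data frame version only
ptsDF$plotID
```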
We will use the rgdal and raster libraries in this tutorial.
# load packages
library(rgdal)  # for vector work; sp package should always load with rgdal
library(raster) # for metadata/attributes - vectors or rasters
# set working directory to data folder
# setwd("pathToDirHere")
Import .csv
To begin, let's import the .csv file that contains plot coordinate x, y
locations at the NEON Harvard Forest Field Site (HARV_PlotLocations.csv) into
R. Note that we set stringsAsFactors=FALSE so our text data import as the
character class rather than the factor class.
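The import call itself isn't shown above; a minimal sketch, assuming the file lives under NEON-DS-Site-Layout-Files/HARV/ in the working directory, might be:

```r
# read the .csv; stringsAsFactors = FALSE keeps text columns as characters
plot.locations_HARV <- read.csv(
  "NEON-DS-Site-Layout-Files/HARV/HARV_PlotLocations.csv",
  stringsAsFactors = FALSE)

# confirm what we imported
class(plot.locations_HARV)  # a data.frame
str(plot.locations_HARV)    # structure: rows, columns, column classes
```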
Also note that plot.locations_HARV is a data.frame that contains 21
locations (rows) and 15 variables (attributes).
Next, let's explore the data.frame to determine whether it contains
columns with coordinate values. If we are lucky, our .csv will contain columns
labeled:
"X" and "Y" OR
Latitude and Longitude OR
easting and northing (UTM coordinates)
Let's check out the column names of our file to look for these.
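A quick way to list the columns of the data.frame:

```r
# view all column names of the plot locations data.frame
names(plot.locations_HARV)
```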
Viewing the column names, we can see that our data.frame contains several
fields that might contain spatial information. The plot.locations_HARV$easting
and plot.locations_HARV$northing columns contain these coordinate values.
# view first 6 rows of the X and Y columns
head(plot.locations_HARV$easting)
## [1] 731405.3 731934.3 731754.3 731724.3 732125.3 731634.3
head(plot.locations_HARV$northing)
## [1] 4713456 4713415 4713115 4713595 4713846 4713295
# note that you can also call the same two columns using their COLUMN NUMBER
# view first 6 rows of the X and Y columns
head(plot.locations_HARV[,1])
## [1] 731405.3 731934.3 731754.3 731724.3 732125.3 731634.3
head(plot.locations_HARV[,2])
## [1] 4713456 4713415 4713115 4713595 4713846 4713295
So, we have coordinate values in our data.frame but in order to convert our
data.frame to a SpatialPointsDataFrame, we also need to know the CRS
associated with these coordinate values.
There are several ways to figure out the CRS of spatial data in text format.
We can explore the file itself to see if CRS information is embedded in the
file header or somewhere in the data columns.
Following the easting and northing columns, there is a geodeticDa and a
utmZone column. These appear to contain CRS information
(datum and projection), so let's view those next.
# view first 6 rows of the geodeticDa and utmZone columns
head(plot.locations_HARV$geodeticDa)
## [1] "WGS84" "WGS84" "WGS84" "WGS84" "WGS84" "WGS84"
head(plot.locations_HARV$utmZone)
## [1] "18N" "18N" "18N" "18N" "18N" "18N"
It is not typical to store CRS information in a column, but this particular
file contains CRS information this way. The geodeticDa and utmZone columns
contain the information that helps us determine the CRS:
To create the proj4 string associated with UTM Zone 18 WGS84, we could look up
the projection on the
spatial reference website,
which contains a list of CRS formats for each projection.
However, if we have other data in the UTM Zone 18N projection, it's much
easier to simply assign the crs() in proj4 format from that object to our
new spatial object. Let's import the roads layer from Harvard Forest and check
out its CRS.
Note: if you do not have a CRS to borrow from another spatial object, see
Option 2 in the next section for how to convert to a spatial object and assign
a CRS.
# Import the line shapefile
lines_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/", "HARV_roads")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARV_roads"
## with 13 features
## It has 15 fields
# view CRS
crs(lines_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view extent
extent(lines_HARV)
## class : Extent
## xmin : 730741.2
## xmax : 733295.5
## ymin : 4711942
## ymax : 4714260
Exploring the data above, we can see that the lines shapefile is in
UTM zone 18N. We can thus use the CRS from that spatial object to convert our
non-spatial data.frame into a SpatialPointsDataFrame.
Next, let's create a crs object that we can use to define the CRS of our
SpatialPointsDataFrame when we create it.
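The utm18nCRS object referenced below isn't created in the text shown here; one way to build it, borrowing the CRS from the lines_HARV object imported above, is:

```r
# borrow the CRS from an object already in UTM zone 18N
utm18nCRS <- crs(lines_HARV)
utm18nCRS

# confirm it's a CRS class object (defined by the sp package)
class(utm18nCRS)
```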
Let's convert our data.frame into a SpatialPointsDataFrame. To do
this, we need to specify:
The columns containing X (easting) and Y (northing) coordinate values
The CRS that the coordinate columns represent (units are included in the CRS).
Optional: the other columns stored in the data frame that you wish to append
as attributes to your spatial object.
We can add the CRS in two ways: borrow the CRS from another spatial object that
already has it assigned (Option 1), or add it directly using a proj4 string
(Option 2).
Option 1: Borrow CRS
We will use the SpatialPointsDataFrame() function to perform the conversion
and add the CRS from our utm18nCRS object.
# note that the easting and northing columns are in columns 1 and 2
plot.locationsSp_HARV <- SpatialPointsDataFrame(plot.locations_HARV[,1:2],
plot.locations_HARV, #the R object to convert
proj4string = utm18nCRS) # assign a CRS
# look at CRS
crs(plot.locationsSp_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
Option 2: Assigning CRS
If we didn't have an object from which to borrow the CRS, we can assign it
directly using either of two equivalent, but slightly different, syntaxes.
# first, convert the data.frame to spdf
r <- SpatialPointsDataFrame(plot.locations_HARV[,1:2],
                            plot.locations_HARV)
# second, assign the CRS in one of two ways
crs(r) <- "+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
+ellps=WGS84 +towgs84=0,0,0"
# or
proj4string(r) <- CRS("+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
+ellps=WGS84 +towgs84=0,0,0")
Plot Spatial Object
Now that we have a spatial R object, we can plot our newly created spatial object.
# plot spatial object
plot(plot.locationsSp_HARV,
main="Map of Plot Locations")
Define Plot Extent
In
Open and Plot Shapefiles in R
we learned about spatial object extent. When we plot several spatial layers in
R, the first layer that is plotted becomes the extent of the plot. If we add
additional layers that are outside of that extent, then the data will not be
visible in our plot. It is thus useful to know how to set the spatial extent of
a plot using xlim and ylim.
Let's first create a SpatialPolygon object from the
NEON-DS-Site-Layout-Files/HarClip_UTMZ18 shapefile. (If you have completed
Vector 00-02 tutorials in this
Introduction to Working with Vector Data in R
series, you can skip this code as you have already created this object.)
# create boundary object
aoiBoundary_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HarClip_UTMZ18")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
To begin, let's plot our aoiBoundary object with our vegetation plots.
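A sketch of that plotting attempt (the AOI polygon plotted first, then the points added on top):

```r
# plot the AOI boundary first
plot(aoiBoundary_HARV,
     main = "AOI Boundary with Plot Locations")

# then try to add the plot locations on top
plot(plot.locationsSp_HARV,
     pch = 8,
     add = TRUE)
```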
When we attempt to plot the two layers together, we can see that the plot
locations are not rendered. Our data are in the same projection,
so what is going on?
# view extent of each
extent(aoiBoundary_HARV)
## class : Extent
## xmin : 732128
## xmax : 732251.1
## ymin : 4713209
## ymax : 4713359
extent(plot.locationsSp_HARV)
## class : Extent
## xmin : 731405.3
## xmax : 732275.3
## ymin : 4712845
## ymax : 4713846
# add extra space to right of plot area;
# par(mar=c(5.1, 4.1, 4.1, 8.1), xpd=TRUE)
plot(extent(plot.locationsSp_HARV),
col="purple",
xlab="easting",
ylab="northing", lwd=8,
main="Extent Boundary of Plot Locations \nCompared to the AOI Spatial Object",
ylim=c(4712400,4714000)) # extend the y axis to make room for the legend
plot(extent(aoiBoundary_HARV),
add=TRUE,
lwd=6,
col="springgreen")
legend("bottomright",
#inset=c(-0.5,0),
legend=c("Layer One Extent", "Layer Two Extent"),
bty="n",
col=c("purple","springgreen"),
cex=.8,
lty=c(1,1),
lwd=6)
The extents of our two objects are different. plot.locationsSp_HARV is
much larger than aoiBoundary_HARV. When we plot aoiBoundary_HARV first, R
uses the extent of that object as the plot extent. Thus the points in the
plot.locationsSp_HARV object are not rendered. To fix this, we can manually
assign the plot extent using xlim and ylim. We can grab the extent
values from the spatial object that has the larger extent. Let's try it.
The spatial extent of a shapefile or R spatial object
represents the geographic edge or location that is the furthest
north, south, east and west. Thus it represents the overall geographic
coverage of the spatial object. Source: National Ecological Observatory
Network (NEON)
plotLoc.extent <- extent(plot.locationsSp_HARV)
plotLoc.extent
## class : Extent
## xmin : 731405.3
## xmax : 732275.3
## ymin : 4712845
## ymax : 4713846
# grab the x and y min and max values from the spatial plot locations layer
xmin <- plotLoc.extent@xmin
xmax <- plotLoc.extent@xmax
ymin <- plotLoc.extent@ymin
ymax <- plotLoc.extent@ymax
# adjust the plot extent using x and ylim
plot(aoiBoundary_HARV,
main="NEON Harvard Forest Field Site\nModified Extent",
border="darkgreen",
xlim=c(xmin,xmax),
ylim=c(ymin,ymax))
plot(plot.locationsSp_HARV,
pch=8,
col="purple",
add=TRUE)
# add a legend
legend("bottomright",
legend=c("Plots", "AOI Boundary"),
pch=c(8,NA),
lty=c(NA,1),
bty="n",
col=c("purple","darkgreen"),
cex=.8)
## Challenge - Import & Plot Additional Points
We want to add two phenology plots to our existing map of vegetation plot
locations.
Import the .csv: HARV/HARV_2NewPhenPlots.csv into R and do the following:
Find the X and Y coordinate locations. Which value is X and which value is Y?
These data were collected in a geographic coordinate system (WGS84). Convert
the data.frame into an R SpatialPointsDataFrame.
Plot the new points with the plot location points from above. Be sure to add
a legend. Use a different symbol for the 2 new points! You may need to adjust
the X and Y limits of your plot to ensure that both points are rendered by R!
If you have extra time, feel free to add roads and other layers to your map!
In this tutorial, we will create a base map of our study site using a United States
state and country boundary accessed from the
United States Census Bureau.
We will learn how to map vector data that are in different CRS and thus
don't line up on a map.
Learning Objectives
After completing this tutorial, you will be able to:
Identify the CRS of a spatial dataset.
Differentiate between geographic and projected coordinate reference systems.
Use the proj4 string format, which is one format used to
store & reference the CRS of a spatial object.
Working With Spatial Data From Different Sources
To support a project, we often need to gather spatial datasets from
different sources and/or data that cover different spatial extents. Spatial
data from different sources and that cover different extents are often in
different Coordinate Reference Systems (CRS).
Some reasons for data being in different CRS include:
The data are stored in a particular CRS convention used by the data
provider; perhaps a federal agency or a state planning office.
The data are stored in a particular CRS that is customized to a region.
For instance, many states prefer to use a State Plane projection customized
for that state.
Maps of the United States using data in different projections.
Notice the differences in shape associated with each different projection.
These differences are a direct result of the calculations used to "flatten"
the data onto a 2-dimensional map. Often data are stored purposefully in a
particular projection that optimizes the relative shape and size of
surrounding geographic boundaries (states, counties, countries, etc).
Source: M. Corey,
opennews.org
Check out this short video from
Buzzfeed
highlighting how map projections can make continents
seem proportionally larger or smaller than they actually are!
In this tutorial we will learn how to identify and manage spatial data
in different projections. We will learn how to reproject the data so that they
are in the same projection to support plotting / mapping. Note that these skills
are also required for any geoprocessing / spatial analysis, as data need to be in
the same CRS to ensure accurate results.
We will use the rgdal and raster libraries in this tutorial.
# load packages
library(rgdal)  # for vector work; sp package should always load with rgdal
library(raster) # for metadata/attributes - vectors or rasters
# set working directory to data folder
# setwd("pathToDirHere")
Import US Boundaries - Census Data
There are many good sources of boundary base layers that we can use to create a
basemap. Some R packages even have these base layers built in to support quick
and efficient mapping. In this tutorial, we will use boundary layers for the
United States, provided by the
United States Census Bureau.
It is useful to have shapefiles to work with because we can add additional
attributes to them if need be, for project-specific mapping.
Read US Boundary File
We will use the readOGR() function to import the
/US-Boundary-Layers/US-State-Boundaries-Census-2014 layer into R. This layer
contains the boundaries of all continental states in the U.S. Please note that
these data have been modified and reprojected from the original data downloaded
from the Census website to support the learning goals of this tutorial.
# Import the state boundary shapefile
State.Boundary.US <- readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers",
"US-State-Boundaries-Census-2014")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/US-Boundary-Layers", layer: "US-State-Boundaries-Census-2014"
## with 58 features
## It has 10 fields
## Integer64 fields read as strings: ALAND AWATER
## Warning in readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers", "US-
## State-Boundaries-Census-2014"): Z-dimension discarded
# look at the data structure
class(State.Boundary.US)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
Note: the Z-dimension warning is normal. The readOGR() function doesn't import
z (vertical dimension or height) data by default. This is because not all
shapefiles contain z dimension data.
Now, let's plot the U.S. states data.
# plot the state boundaries
plot(State.Boundary.US,
main="Map of Continental US State Boundaries\n US Census Bureau Data")
U.S. Boundary Layer
We can add a boundary layer of the United States to our map to make it look
nicer. We will import
NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-Boundary-Dissolved-States.
If we specify a thicker line width using lwd=4 for the border layer, it will
make our map pop!
# Import the dissolved US boundary shapefile
Country.Boundary.US <- readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers",
"US-Boundary-Dissolved-States")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/US-Boundary-Layers", layer: "US-Boundary-Dissolved-States"
## with 1 features
## It has 9 fields
## Integer64 fields read as strings: ALAND AWATER
## Warning in readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers", "US-
## Boundary-Dissolved-States"): Z-dimension discarded
# look at the data structure
class(Country.Boundary.US)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
# plot the state boundaries
plot(State.Boundary.US,
main="Map of Continental US State Boundaries\n US Census Bureau Data",
border="gray40")
# add the dissolved US boundary
plot(Country.Boundary.US,
lwd=4,
border="gray18",
add=TRUE)
Next, let's add the location of the flux tower at our study site.
As we are adding these layers, take note of the class of each object.
# Import a point shapefile
point_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HARVtower_UTM18N")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARVtower_UTM18N"
## with 1 features
## It has 14 fields
class(point_HARV)
## [1] "SpatialPointsDataFrame"
## attr(,"package")
## [1] "sp"
# plot point - looks ok?
plot(point_HARV,
pch = 19,
col = "purple",
main="Harvard Fisher Tower Location")
The plot above demonstrates that the tower point location data are readable and
will plot! Let's next add it as a layer on top of the U.S. states and boundary
layers in our basemap plot.
# plot state boundaries
plot(State.Boundary.US,
main="Map of Continental US State Boundaries \n with Tower Location",
border="gray40")
# add US border outline
plot(Country.Boundary.US,
lwd=4,
border="gray18",
add=TRUE)
# add point tower location
plot(point_HARV,
pch = 19,
col = "purple",
add=TRUE)
What do you notice about the resultant plot? Do you see the tower location in
purple in the Massachusetts area? No! So what went wrong?
Let's check out the CRS (crs()) of both datasets to see if we can identify any
issues that might cause the point location to not plot properly on top of our
U.S. boundary layers.
# view CRS of our site data
crs(point_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view crs of census data
crs(State.Boundary.US)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
crs(Country.Boundary.US)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
It looks like our data are in different CRS. We can tell this by looking at
the CRS strings in proj4 format.
Understanding CRS in Proj4 Format
The CRS for our data are given to us by R in proj4 format. Let's break
down the pieces of a proj4 string. The string contains all of the individual
CRS elements that R or another GIS might need. Each element is specified
with a + sign, similar to how a .csv file is delimited or broken up by
a ,. After each + we see the CRS element being defined. For example
projection (proj=) and datum (datum=).
Geographic (Lat/Long) Proj4 String
The proj4 string for State.Boundary.US specifies a geographic coordinate system as follows:
proj=longlat: the data are in a geographic (latitude and longitude)
coordinate system
datum=WGS84: the datum is WGS84
ellps=WGS84: the ellipsoid is WGS84
Note that there are no specified units above. This is because this geographic
coordinate reference system is in latitude and longitude which is most
often recorded in Decimal Degrees.
**Data Tip:** the last portion of some `proj4` strings
is `+towgs84=0,0,0`. This is a conversion factor that is used if a datum
conversion is required. We will not deal with datums in this tutorial series.
CRS Units - View Object Extent
Next, let's view the extent or spatial coverage for the point_HARV spatial
object compared to the State.Boundary.US object.
# extent for HARV in UTM
extent(point_HARV)
## class : Extent
## xmin : 732183.2
## xmax : 732183.2
## ymin : 4713265
## ymax : 4713265
# extent for object in geographic
extent(State.Boundary.US)
## class : Extent
## xmin : -124.7258
## xmax : -66.94989
## ymin : 24.49813
## ymax : 49.38436
Note the difference in the units for each object. The extent for
State.Boundary.US is in latitude and longitude, which yields smaller numbers
representing decimal degree units; however, our tower location point
is in UTM, which is represented in meters.
To view a list of datum conversion factors, type projInfo(type = "datum")
into the R console.
Reproject Vector Data
Now we know our data are in different CRS. To address this, we have to modify
or reproject the data so they are all in the same CRS. We can use the
spTransform() function to reproject our data. When we reproject the data, we
specify the CRS that we wish to transform our data to. This CRS contains
the datum, units and other information that R needs to reproject our data.
The spTransform() function requires two inputs:
The name of the object that you wish to transform
The CRS to which you wish to transform that object. In this case we can
use the crs() of the State.Boundary.US object as follows:
crs(State.Boundary.US)
**Data Tip:** `spTransform()` will only work if your
original spatial object has a CRS assigned to it AND if that CRS is the
correct CRS!
Next, let's reproject our point layer into the geographic latitude and
longitude WGS84 coordinate reference system (CRS).
# reproject data
point_HARV_WGS84 <- spTransform(point_HARV,
crs(State.Boundary.US))
# what is the CRS of the new object
crs(point_HARV_WGS84)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
# does the extent look like decimal degrees?
extent(point_HARV_WGS84)
## class : Extent
## xmin : -72.17266
## xmax : -72.17266
## ymin : 42.5369
## ymax : 42.5369
Once our data are reprojected, we can try to plot again.
# plot state boundaries
plot(State.Boundary.US,
main="Map of Continental US State Boundaries\n With Fisher Tower Location",
border="gray40")
# add US border outline
plot(Country.Boundary.US,
lwd=4,
border="gray18",
add=TRUE)
# add point tower location
plot(point_HARV_WGS84,
pch = 19,
col = "purple",
add=TRUE)
Reprojecting our data ensured that things line up on our map! It will also
allow us to perform any required geoprocessing (spatial calculations /
transformations) on our data.
## Challenge - Reproject Spatial Data
Create a map of the North Eastern United States as follows:
Import and plot Boundary-US-State-NEast.shp. Adjust line width as necessary.
Reproject the layer into UTM zone 18 north.
Layer the Fisher Tower point location point_HARV on top of the above plot.
Add a title to your plot.
Add a legend to your plot that shows both the state boundary (line) and
the Tower location point.
This tutorial explains what shapefile attributes are and how to work with
shapefile attributes in R. It also covers how to identify and query shapefile
attributes, as well as subset shapefiles by specific attribute values.
Finally, we will review how to plot a shapefile according to a set of attribute
values.
Learning Objectives
After completing this tutorial, you will be able to:
Query shapefile attributes.
Subset shapefiles using specific attribute values.
Plot a shapefile, colored by unique attribute values.
Shapefile Metadata & Attributes
When we import a shapefile into R, the readOGR() function automatically
stores metadata and attributes associated with the file.
Load the Data
To work with vector data in R, we can use the rgdal library. The raster
package also allows us to explore metadata using similar commands for both
raster and vector files.
We will import three shapefiles. The first is our AOI or area of
interest boundary polygon that we worked with in
Open and Plot Shapefiles in R.
The second is a shapefile containing the location of roads and trails within the
field site. The third is a file containing the Fisher tower location.
# load packages
# rgdal: for vector work; sp package should always load with rgdal.
library(rgdal)
# raster: for metadata/attributes- vectors or rasters
library(raster)
# set working directory to data folder
# setwd("pathToDirHere")
# Import a polygon shapefile
aoiBoundary_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV",
"HarClip_UTMZ18", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
# Import a line shapefile
lines_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV", "HARV_roads", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARV_roads"
## with 13 features
## It has 15 fields
# Import a point shapefile
point_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV",
"HARVtower_UTM18N", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARVtower_UTM18N"
## with 1 features
## It has 14 fields
Each spatial object has metadata that we can view with the following functions:
class() - Describes the type of vector data stored in the object.
length() - How many features are in this spatial object?
extent() - The spatial extent (geographic area covered by) the features in the object.
crs() - The coordinate reference system (spatial projection) that the data are in.
Let's explore the metadata for our point_HARV object.
# view class
class(x = point_HARV)
## [1] "SpatialPointsDataFrame"
## attr(,"package")
## [1] "sp"
# x= isn't actually needed; it just specifies which object
# view features count
length(point_HARV)
## [1] 1
# view crs - note - this only works with the raster package loaded
crs(point_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view extent- note - this only works with the raster package loaded
extent(point_HARV)
## class : Extent
## xmin : 732183.2
## xmax : 732183.2
## ymin : 4713265
## ymax : 4713265
# view metadata summary
point_HARV
## class : SpatialPointsDataFrame
## features : 1
## extent : 732183.2, 732183.2, 4713265, 4713265 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 14
## names : Un_ID, Domain, DomainName, SiteName, Type, Sub_Type, Lat, Long, Zone, Easting, Northing, Ownership, County, annotation
## value : A, 1, Northeast, Harvard Forest, Core, Advanced Tower, 42.5369, -72.17266, 18, 732183.193774, 4713265.041137, Harvard University, LTER, Worcester, C1
About Shapefile Attributes
Shapefiles often contain an associated database or spreadsheet of values called
attributes that describe the vector features in the shapefile. You can think
of this like a spreadsheet with rows and columns. Each column in the spreadsheet
is an individual attribute that describes an object. Shapefile attributes
include measurements that correspond to the geometry of the shapefile features.
For example, the HARV_Roads shapefile (lines_HARV object) contains an
attribute called TYPE. Each line in the shapefile has an associated TYPE
which describes the type of road (woods road, footpath, boardwalk, or
stone wall).
The shapefile format allows us to store attributes for each
feature (vector object) stored in the shapefile. The attribute table is
similar to a spreadsheet. There is a row for each feature. The first column
contains the unique ID of the feature. We can add additional columns that
describe the feature. Image Source: National Ecological Observatory Network
(NEON)
We can look at all of the associated data attributes by printing the contents of
the data slot with objectName@data. We can use the base R length
function to count the number of attributes associated with a spatial object too.
# just view the attributes & first 6 attribute values of the data
head(lines_HARV@data)
## OBJECTID_1 OBJECTID TYPE NOTES MISCNOTES RULEID
## 0 14 48 woods road Locust Opening Rd <NA> 5
## 1 40 91 footpath <NA> <NA> 6
## 2 41 106 footpath <NA> <NA> 6
## 3 211 279 stone wall <NA> <NA> 1
## 4 212 280 stone wall <NA> <NA> 1
## 5 213 281 stone wall <NA> <NA> 1
## MAPLABEL SHAPE_LENG LABEL BIKEHORSE RESVEHICLE
## 0 Locust Opening Rd 1297.35706 Locust Opening Rd Y R1
## 1 <NA> 146.29984 <NA> Y R1
## 2 <NA> 676.71804 <NA> Y R2
## 3 <NA> 231.78957 <NA> <NA> <NA>
## 4 <NA> 45.50864 <NA> <NA> <NA>
## 5 <NA> 198.39043 <NA> <NA> <NA>
## RECMAP Shape_Le_1 ResVehic_1
## 0 Y 1297.10617 R1 - All Research Vehicles Allowed
## 1 Y 146.29983 R1 - All Research Vehicles Allowed
## 2 Y 676.71807 R2 - 4WD/High Clearance Vehicles Only
## 3 <NA> 231.78962 <NA>
## 4 <NA> 45.50859 <NA>
## 5 <NA> 198.39041 <NA>
## BicyclesHo
## 0 Bicycles and Horses Allowed
## 1 Bicycles and Horses Allowed
## 2 Bicycles and Horses Allowed
## 3 <NA>
## 4 <NA>
## 5 <NA>
# how many attributes are in our vector data object?
length(lines_HARV@data)
## [1] 15
We can view the individual name of each attribute using the
names(lines_HARV@data) method in R. We could also view just the first 6 rows
of attribute values using head(lines_HARV@data).
Let's give it a try.
# view just the attribute names for the lines_HARV spatial object
names(lines_HARV@data)
## [1] "OBJECTID_1" "OBJECTID" "TYPE" "NOTES" "MISCNOTES"
## [6] "RULEID" "MAPLABEL" "SHAPE_LENG" "LABEL" "BIKEHORSE"
## [11] "RESVEHICLE" "RECMAP" "Shape_Le_1" "ResVehic_1" "BicyclesHo"
### Challenge: Attributes for Different Spatial Classes
Explore the attributes associated with the `point_HARV` and `aoiBoundary_HARV`
spatial objects.
How many attributes do each have?
Who owns the site in the point_HARV data object?
Which of the following is NOT an attribute of the point data object?
A) Latitude B) County C) Country
Explore Values within One Attribute
We can explore individual values stored within a particular attribute.
Again, comparing attributes to a spreadsheet or a data.frame is similar
to exploring values in a column. We can do this using the $ and the name of
the attribute: objectName$attributeName.
# view all attributes in the lines shapefile within the TYPE field
lines_HARV$TYPE
## [1] woods road footpath footpath stone wall stone wall stone wall
## [7] stone wall stone wall stone wall boardwalk woods road woods road
## [13] woods road
## Levels: boardwalk footpath stone wall woods road
# view unique values within the "TYPE" attributes
levels(lines_HARV@data$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
Notice that two of our TYPE attribute values consist of two separate words:
stone wall and woods road. There are really four unique TYPE values, not six
TYPE values.
Subset Shapefiles
We can use the objectName$attributeName syntax to select a subset of features
from a spatial object in R.
# select features that are of TYPE "footpath"
# this subsetting expression could also be passed to other functions so
# that they operate only on the "footpath" lines
lines_HARV[lines_HARV$TYPE == "footpath",]
## class : SpatialLinesDataFrame
## features : 2
## extent : 731954.5, 732232.3, 4713131, 4713726 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 15
## names : OBJECTID_1, OBJECTID, TYPE, NOTES, MISCNOTES, RULEID, MAPLABEL, SHAPE_LENG, LABEL, BIKEHORSE, RESVEHICLE, RECMAP, Shape_Le_1, ResVehic_1, BicyclesHo
## min values : 40, 91, footpath, NA, NA, 6, NA, 146.299844868, NA, Y, R1, Y, 146.299831389, R1 - All Research Vehicles Allowed, Bicycles and Horses Allowed
## max values : 41, 106, footpath, NA, NA, 6, NA, 676.71804248, NA, Y, R2, Y, 676.718065323, R2 - 4WD/High Clearance Vehicles Only, Bicycles and Horses Allowed
# save an object with only footpath lines
footpath_HARV <- lines_HARV[lines_HARV$TYPE == "footpath",]
footpath_HARV
## class : SpatialLinesDataFrame
## features : 2
## extent : 731954.5, 732232.3, 4713131, 4713726 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 15
## names : OBJECTID_1, OBJECTID, TYPE, NOTES, MISCNOTES, RULEID, MAPLABEL, SHAPE_LENG, LABEL, BIKEHORSE, RESVEHICLE, RECMAP, Shape_Le_1, ResVehic_1, BicyclesHo
## min values : 40, 91, footpath, NA, NA, 6, NA, 146.299844868, NA, Y, R1, Y, 146.299831389, R1 - All Research Vehicles Allowed, Bicycles and Horses Allowed
## max values : 41, 106, footpath, NA, NA, 6, NA, 676.71804248, NA, Y, R2, Y, 676.718065323, R2 - 4WD/High Clearance Vehicles Only, Bicycles and Horses Allowed
# how many features are in our new object
length(footpath_HARV)
## [1] 2
Our subsetting operation reduces the features count from 13 to 2. This means
that only two feature lines in our spatial object have the attribute
"TYPE=footpath".
We can plot our subsetted shapefiles.
# plot just footpaths
plot(footpath_HARV,
lwd=6,
main="NEON Harvard Forest Field Site\n Footpaths")
Interesting! Above, it appeared as if we had 2 features in our footpaths subset.
Why does the plot look like there is only one feature?
Let's adjust the colors used in our plot. If we have 2 features in our vector
object, we can plot each using a unique color by assigning unique colors (col=)
to our features. We use the syntax
col=c("colorOne","colorTwo")
to do this.
# plot just footpaths
plot(footpath_HARV,
col=c("green","blue"), # set color for each feature
lwd=6,
main="NEON Harvard Forest Field Site\n Footpaths \n Feature one = green, Feature two = blue")
Now, we see that there are in fact two features in our plot!
### Challenge: Subset Spatial Line Objects
Subset out all:
boardwalk from the lines layer and plot it.
stone wall features from the lines layer and plot it.
For each plot, color each feature using a unique color.
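A possible solution sketch (the color choices are arbitrary):

```r
# subset and plot the boardwalk feature(s)
boardwalk_HARV <- lines_HARV[lines_HARV$TYPE == "boardwalk",]
plot(boardwalk_HARV,
     col="green",
     lwd=6,
     main="NEON Harvard Forest Field Site\n Boardwalks")

# subset and plot the stone wall features, one unique color per feature
stonewall_HARV <- lines_HARV[lines_HARV$TYPE == "stone wall",]
plot(stonewall_HARV,
     col=rainbow(length(stonewall_HARV)), # one color per feature
     lwd=3,
     main="NEON Harvard Forest Field Site\n Stone Walls")
```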
Plot Lines by Attribute Value
To plot vector data with the color determined by a set of attribute values, the
attribute values must be of class factor. A factor is similar to a category:
we can group vector objects by a particular category value - for example, we
can group all lines of TYPE=footpath. However, in R, a factor can also have
a defined order.
By default, R will import spatial object attributes as factors.
**Data Tip:** If our data attribute values are not
read in as factors, we can convert the categorical
attribute values using `as.factor()`.
# view the original class of the TYPE column
class(lines_HARV$TYPE)
## [1] "factor"
# view the levels or categories of the TYPE factor - the unique
# character values that this attribute can take
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# Convert the TYPE attribute into a factor
# Only do this IF the data do not import as a factor!
# lines_HARV$TYPE <- as.factor(lines_HARV$TYPE)
# class(lines_HARV$TYPE)
# levels(lines_HARV$TYPE)
# how many features are in each category or level?
summary(lines_HARV$TYPE)
## boardwalk footpath stone wall woods road
## 1 2 6 4
When we use plot(), we can specify the colors to use for each attribute using
the col= element. To ensure that R renders each feature by its associated
factor / attribute value, we need to create a vector of colors - one for each
feature, according to its associated attribute value / factor value.
To create this vector we combine:
a vector of colors - one for each factor value (unique attribute value), and
the attribute itself ([object$factor]) of class factor.
Let's give this a try.
# Check the class of the attribute - is it a factor?
class(lines_HARV$TYPE)
## [1] "factor"
# how many "levels" or unique values does the factor have?
# view factor values
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# count the number of unique values or levels
length(levels(lines_HARV$TYPE))
## [1] 4
# create a color palette of 4 colors - one for each factor level
roadPalette <- c("blue","green","grey","purple")
roadPalette
## [1] "blue" "green" "grey" "purple"
# create a vector of colors - one for each feature in our vector object
# according to its attribute value
roadColors <- c("blue","green","grey","purple")[lines_HARV$TYPE]
roadColors
## [1] "purple" "green" "green" "grey" "grey" "grey" "grey"
## [8] "grey" "grey" "blue" "purple" "purple" "purple"
# plot the lines data, apply a diff color to each factor level)
plot(lines_HARV,
col=roadColors,
lwd=3,
main="NEON Harvard Forest Field Site\n Roads & Trails")
Adjust Line Width
We can also adjust the width of our plot lines using lwd. We can set all lines
to be thicker or thinner using lwd=.
# make all lines thicker
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails\n All Lines Thickness=6",
lwd=6)
Adjust Line Width by Attribute
If we want a unique line width for each factor level or attribute category
in our spatial object, we can use the same syntax that we used for colors, above:
Note that this requires the attribute to be of class factor. Let's give it a
try.
class(lines_HARV$TYPE)
## [1] "factor"
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# create vector of line widths
lineWidths <- (c(1,2,3,4))[lines_HARV$TYPE]
# adjust line width by level
# in this case, woods road (the last level) is the widest and
# boardwalk (the first level) is the thinnest.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails \n Line width varies by TYPE Attribute Value",
lwd=lineWidths)
### Challenge: Plot Line Width by Attribute
We can customize the width of each line, according to specific attribute value,
too. To do this, we create a vector of line width values, and map that vector
to the factor levels - using the same syntax that we used above for colors.
HINT: `lwd=(vector of line width thicknesses)[spatialObject$factorAttribute]`
Create a plot of roads using the following line thicknesses:
woods road lwd=8
Boardwalks lwd = 2
footpath lwd=4
stone wall lwd=3
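One way to map the requested widths onto the factor levels, which are ordered boardwalk, footpath, stone wall, woods road:

```r
# levels are ordered: boardwalk, footpath, stone wall, woods road -
# so the width vector must list the widths in that same order
plot(lines_HARV,
     col=roadColors,
     main="NEON Harvard Forest Field Site\n Roads & Trails \n Line width by TYPE",
     lwd=c(2,4,3,8)[lines_HARV$TYPE])
```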
**Data Tip:** Given we have a factor with 4 levels,
we can create a vector of numbers, each of which specifies the thickness of each
feature in our `SpatialLinesDataFrame` by factor level (category): `c(6,4,1,2)[lines_HARV$TYPE]`
Add Plot Legend
We can add a legend to our plot too. When we add a legend, we use the following
elements to specify labels and colors:
bottomright: We specify the location of our legend by using a default
keyword. We could also use top, topright, etc.
levels(objectName$attributeName): Label the legend elements using the
categories of levels in an attribute (e.g., levels(lines_HARV$TYPE) means use
the levels boardwalk, footpath, etc).
fill=: apply unique colors to the boxes in our legend. palette() is
the default set of colors that R applies to all plots.
Let's add a legend to our plot.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails\n Default Legend")
# we can use the color object that we created above to color the legend objects
roadPalette
## [1] "blue" "green" "grey" "purple"
# add a legend to our map
legend("bottomright", # location of legend
legend=levels(lines_HARV$TYPE), # categories or elements to render in
# the legend
fill=roadPalette) # color palette to use to fill objects in legend.
We can tweak the appearance of our legend too.
bty="n": turn off the legend border
cex: change the font size
Let's try it out.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails \n Modified Legend")
# add a legend to our map
legend("bottomright",
legend=levels(lines_HARV$TYPE),
fill=roadPalette,
bty="n", # turn off the legend border
cex=.8) # decrease the font / legend size
We can modify the colors used to plot our lines by creating a new color vector
directly in the plot code rather than creating a separate object.
col=(newColors)[lines_HARV$TYPE]
Let's try it!
# manually set the colors for the plot!
newColors <- c("springgreen", "blue", "magenta", "orange")
newColors
## [1] "springgreen" "blue" "magenta" "orange"
# plot using new colors
plot(lines_HARV,
col=(newColors)[lines_HARV$TYPE],
main="NEON Harvard Forest Field Site\n Roads & Trails \n Pretty Colors")
# add a legend to our map
legend("bottomright",
levels(lines_HARV$TYPE),
fill=newColors,
bty="n", cex=.8)
**Data Tip:** You can modify the default R color palette
using the palette method. For example `palette(rainbow(6))` or
`palette(terrain.colors(6))`. You can reset the palette colors using
`palette("default")`!
### Challenge: Plot Lines by Attribute
Create a plot that emphasizes only roads where bicycles and horses are allowed.
To emphasize this, make the lines where bicycles are not allowed THINNER than
the roads where bicycles are allowed.
NOTE: this attribute information is located in the `lines_HARV$BicyclesHo`
attribute.
Be sure to add a title and legend to your map! You might consider a color
palette that has all bike/horse-friendly roads displayed in a bright color. All
other lines can be grey.
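A sketch of one approach; the attribute value tested below matches the value shown in the attribute preview earlier in this tutorial, but check levels(lines_HARV$BicyclesHo) for the full set of values in your data:

```r
# flag the lines where bicycles and horses are allowed
bikeOK <- lines_HARV$BicyclesHo == "Bicycles and Horses Allowed"
bikeOK[is.na(bikeOK)] <- FALSE   # treat NA as "not allowed"

plot(lines_HARV,
     col=ifelse(bikeOK, "magenta", "grey"), # bright color for bike/horse roads
     lwd=ifelse(bikeOK, 6, 1),              # thicker lines where allowed
     main="NEON Harvard Forest Field Site\n Bicycle & Horse Friendly Roads")
legend("bottomright",
       legend=c("Bicycles/Horses Allowed", "Not Allowed"),
       fill=c("magenta", "grey"),
       bty="n", cex=.8)
```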
### Challenge: Plot Polygon by Attribute
Create a map of the State boundaries in the United States using the data
located in your downloaded data folder: NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-State-Boundaries-Census-2014.
Apply a fill color to each state using its region value. Add a legend.
Using the NEON-DS-Site-Layout-Files/HARV/PlotLocations_HARV.shp shapefile,
create a map of study plot locations, with each point colored by the soil type
(soilTypeOr). Question: How many different soil types are there at this particular field site?
BONUS -- modify the field site plot above. Plot each point,
using a different symbol. HINT: you can assign the symbol using pch= value.
You can create a vector object of symbols by factor level using the same
syntax that we used above to create a vector of line widths and colors:
pch=c(15,17)[objectName$soilTypeOr]. Type ?pch to learn more about pch or
use google to find a list of pch symbols that you can use in R.
About Vector Data
Vector data are composed of discrete geometric locations (x,y values) known as
vertices that define the "shape" of the spatial object. The organization
of the vertices determines the type of vector that we are working
with: point, line or polygon.
There are 3 types of vector objects: points, lines and
polygons. Each object type has a different structure.
Image Source: National Ecological Observatory Network (NEON)
Points: Each individual point is defined by a single x, y coordinate.
There can be many points in a vector point file. Examples of point data include:
sampling locations, the location of individual trees or the location of plots.
Lines: Lines are composed of many (at least 2) vertices, or points, that
are connected. For instance, a road or a stream may be represented by a line. This
line is composed of a series of segments; each "bend" in the road or stream
represents a vertex that has a defined x, y location.
Polygons: A polygon consists of 3 or more vertices that are connected and
"closed". Thus the outlines of plot boundaries, lakes, oceans, and states or
countries are often represented by polygons. Occasionally, a polygon can have a
hole in the middle of it (like a doughnut); this is something to be aware of but
not an issue we will deal with in this tutorial.
**Data Tip:** Sometimes boundary layers, such as
states and countries, are stored as lines rather than polygons. However, these
boundaries, when represented as lines, will not create a closed object with a defined "area" that can be "filled".
Shapefiles: Points, Lines, and Polygons
Geospatial data in vector format are often stored in a shapefile format.
Because the structure of points, lines, and polygons are different, each
individual shapefile can only contain one vector type (all points, all lines
or all polygons). You will not find a mixture of point, line and polygon
objects in a single shapefile.
Objects stored in a shapefile often have a set of associated attributes that
describe the data. For example, a line shapefile that contains the locations of
streams, might contain the associated stream name, stream "order" and other
information about each stream line object.
We will use the rgdal package to work with vector data in R. Notice that the
sp package automatically loads when rgdal is loaded. We will also load the
raster package so we can explore raster and vector spatial metadata using similar commands.
# load required libraries
# for vector work; sp package will load with rgdal.
library(rgdal)
# for metadata/attributes- vectors or rasters
library(raster)
# set working directory to the directory location on your computer where
# you downloaded and unzipped the data files for the tutorial
# setwd("pathToDirHere")
The shapefiles that we will import are:
A polygon shapefile representing our field site boundary (the AOI),
A line shapefile representing roads and trails (HARV_roads), and
A point shapefile representing the tower location (HARVtower_UTM18N).
The first shapefile that we will open contains the boundary of our study area
(or our Area Of Interest or AOI, hence the name aoiBoundary). To import
shapefiles we use the R function readOGR().
readOGR() requires two components:
The directory where our shapefile lives: NEON-DS-Site-Layout-Files/HARV
The name of the shapefile (without the extension): HarClip_UTMZ18
Let's import our AOI.
# Import a polygon shapefile: readOGR("path","fileName")
# no extension needed as readOGR only imports shapefiles
aoiBoundary_HARV <- readOGR(dsn=path.expand("NEON-DS-Site-Layout-Files/HARV"),
layer="HarClip_UTMZ18")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
**Data Tip:** The acronym, OGR, refers to the
OpenGIS Simple Features Reference Implementation.
Learn more about OGR.
Shapefile Metadata & Attributes
When we import the HarClip_UTMZ18 shapefile layer into R (as our
aoiBoundary_HARV object), the readOGR() function automatically stores
information about the data. We are particularly interested in the geospatial
metadata, describing the format, CRS, extent, and other components of
the vector data, and the attributes which describe properties associated
with each individual vector object.
**Data Tip:** The
*Shapefile Metadata & Attributes in R*
tutorial provides more information on both metadata and attributes
and using attributes to subset and plot data.
Spatial Metadata
Key metadata for all shapefiles include:
Object Type: the class of the imported object.
Coordinate Reference System (CRS): the projection of the data.
Extent: the spatial extent (geographic area that the shapefile covers) of
the shapefile. Note that the spatial extent for a shapefile represents the
extent for ALL spatial objects in the shapefile.
We can view shapefile metadata using the class, crs and extent methods:
# view just the class for the shapefile
class(aoiBoundary_HARV)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
# view just the crs for the shapefile
crs(aoiBoundary_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view just the extent for the shapefile
extent(aoiBoundary_HARV)
## class : Extent
## xmin : 732128
## xmax : 732251.1
## ymin : 4713209
## ymax : 4713359
# view all metadata at same time
aoiBoundary_HARV
## class : SpatialPolygonsDataFrame
## features : 1
## extent : 732128, 732251.1, 4713209, 4713359 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 1
## names : id
## value : 1
Our aoiBoundary_HARV object is a polygon of class SpatialPolygonsDataFrame,
in the CRS UTM zone 18N. The CRS is critical to interpreting the object
extent values as it specifies units.
The spatial extent of a shapefile or R spatial object represents
the geographic "edge" or location that is the furthest north, south, east and
west. Thus it represents the overall geographic coverage of the spatial object.
Image Source: National Ecological Observatory Network (NEON)
Spatial Data Attributes
Each object in a shapefile has one or more attributes associated with it.
Shapefile attributes are similar to fields or columns in a spreadsheet. Each row
in the spreadsheet has a set of columns associated with it that describe the row
element. In the case of a shapefile, each row represents a spatial object - for
example, a road, represented as a line in a line shapefile, will have one "row"
of attributes associated with it. These attributes can include different types
of information that describe objects stored within a shapefile. Thus, our road,
may have a name, length, number of lanes, speed limit, type of road and other
attributes stored with it.
Each spatial feature in an R spatial object has the same set of
associated attributes that describe or characterize the feature.
Attribute data are stored in a separate *.dbf file. Attribute data can be
compared to a spreadsheet. Each row in a spreadsheet represents one feature
in the spatial object.
Image Source: National Ecological Observatory Network (NEON)
We view the attributes of a SpatialPolygonsDataFrame using objectName@data
(e.g., aoiBoundary_HARV@data).
# alternate way to view attributes
aoiBoundary_HARV@data
## id
## 0 1
In this case, our polygon object only has one attribute: id.
Metadata & Attribute Summary
We can view a metadata & attribute summary of each shapefile by entering
the name of the R object in the console. Note that the metadata output
includes the class, the number of features, the extent, and the
coordinate reference system (crs) of the R object. The last two lines of
summary show a preview of the R object attributes.
# view a summary of metadata & attributes associated with the spatial object
summary(aoiBoundary_HARV)
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x 732128 732251.1
## y 4713209 4713359.2
## Is projected: TRUE
## proj4string :
## [+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs]
## Data attributes:
## id
## Length:1
## Class :character
## Mode :character
Plot a Shapefile
Next, let's visualize the data in our R SpatialPolygonsDataFrame object using
plot().
# create a plot of the shapefile
# 'lwd' sets the line width
# 'col' sets internal color
# 'border' sets line color
plot(aoiBoundary_HARV, col="cyan1", border="black", lwd=3,
main="AOI Boundary Plot")
### Challenge: Import Line and Point Shapefiles
Using the steps above, import the HARV_roads and HARVtower_UTM18N layers into
R. Call the HARV_roads object `lines_HARV` and the HARVtower_UTM18N object
`point_HARV`.
Answer the following questions:
What type of R spatial object is created when you import each layer?
What is the CRS and extent for each object?
Do the files contain points, lines or polygons?
How many spatial objects are in each file?
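A possible solution sketch, using the same readOGR() pattern as above:

```r
# import the roads (line) and tower (point) layers from the same directory
lines_HARV <- readOGR(dsn=path.expand("NEON-DS-Site-Layout-Files/HARV"),
                      layer="HARV_roads")
point_HARV <- readOGR(dsn=path.expand("NEON-DS-Site-Layout-Files/HARV"),
                      layer="HARVtower_UTM18N")

# inspect each object: class, CRS, extent, and number of features
class(lines_HARV)
crs(lines_HARV)
extent(lines_HARV)
length(lines_HARV)   # number of spatial objects (features)
```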
Plot Multiple Shapefiles
The plot() function can be used for basic plotting of spatial objects.
We use the add = TRUE argument to overlay shapefiles on top of each other, as
we would when creating a map in a typical GIS application like QGIS.
We can use main="" to give our plot a title. If we want the title to span two
lines, we use \n where the line should break.
# Plot multiple shapefiles
plot(aoiBoundary_HARV, col = "lightgreen",
main="NEON Harvard Forest\nField Site")
plot(lines_HARV, add = TRUE)
# use the pch element to adjust the symbology of the points
plot(point_HARV, add = TRUE, pch = 19, col = "purple")
### Challenge: Plot Raster & Vector Data Together
You can plot vector data layered on top of raster data using the add=TRUE
plot attribute. Create a plot that uses the NEON AOP Canopy Height Model NEON_RemoteSensing/HARV/CHM/HARV_chmCrop.tif as a base layer. On top of the
CHM, please add the site boundary (aoiBoundary_HARV), the roads (lines_HARV)
and the tower location (point_HARV).
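A sketch of one approach, reusing the vector objects created in this tutorial (the raster path is the one given in the challenge):

```r
# import the CHM raster, then layer the vector data on top of it
chm_HARV <- raster("NEON_RemoteSensing/HARV/CHM/HARV_chmCrop.tif")

plot(chm_HARV,
     main="NEON Harvard Forest Field Site\n CHM with Site Layout")
plot(aoiBoundary_HARV, border="black", add=TRUE)
plot(lines_HARV, add=TRUE)
plot(point_HARV, pch=19, col="purple", add=TRUE)
```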
Sometimes we want to perform a calculation, or a set of calculations, multiple
times in our code. We could write out the equation over and over in our code --
OR -- we could chose to build a function that allows us to repeat several
operations with a single command. This tutorial will focus on creating functions
in R.
Learning Objectives
After completing this tutorial, you will be able to:
Explain why we should divide programs into small, single-purpose functions.
Use a function that takes parameters (input values).
Return a value from a function.
Set default values for function parameters.
Write, or define, a function.
Test and debug a function. (This section in construction).
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
Set Working Directory: This lesson assumes that you have set your working
directory to the location of the downloaded and unzipped data subsets.
R Script & Challenge Code: NEON data lessons often contain challenges that
reinforce learned skills. If available, the code for challenge solutions is found
in the downloadable R script of the entire lesson, available in the footer of
each lesson page.
Creating Functions
Sometimes we want to perform a calculation, or a set of calculations, multiple
times in our code. For example, we might need to convert units from Celsius to
Kelvin across multiple datasets and save it for future use.
We could write out the equation over and over in our code -- OR -- we could choose
to build a function that allows us to repeat several operations with a single
command. This tutorial will focus on creating functions in R.
Getting Started
Let's start by defining a function fahr_to_kelvin that converts temperature
values from Fahrenheit to Kelvin:
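A minimal definition consistent with the description that follows and with the example output shown below (the conversion formula is the standard one):

```r
# convert a temperature from Fahrenheit to Kelvin
fahr_to_kelvin <- function(temp) {
    kelvin <- ((temp - 32) * (5/9)) + 273.15
    kelvin   # the last line is the value the function returns
}
```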
The definition begins with the name of your new function. Use a good descriptor
of what the function does and make sure it isn't the same as
a commonly used R function!
This is followed by the call to make it a function and a parenthesized list of
parameter names. The parameters are the input values that the function will use
to perform any calculations. In the case of fahr_to_kelvin, the input will be
the temperature value that we wish to convert from Fahrenheit to Kelvin. You can
have as many input parameters as you would like (but too many are poor style).
The body, or implementation, is surrounded by curly braces { }. Leaving the
initial curly bracket at the end of the first line and the final one on its own
line makes functions easier to read (for the human, the machine doesn't care).
In many languages, the body of the function - the statements that are executed
when it runs - must be indented, typically using 4 spaces.
**Data Tip:** While it is not mandatory in R to indent
your code 4 spaces within a function, it is strongly recommended as good
practice!
When we call the function, the values we pass to it are assigned to those
variables so that we can use them inside the function.
The last line within the function is what R will evaluate as the returned value.
Remember that the last line should be an expression that produces a value, not
an assignment; otherwise the function will return its result invisibly - it
will work, but will print no output. In our example the last line is simply the
object kelvin, so its value is returned.
Calling our own function is no different from calling any other built in R
function that you are familiar with. Let's try running our function.
# call function for F=32 degrees
fahr_to_kelvin(32)
## [1] 273.15
# We could use `paste()` to create a sentence with the answer
paste('The boiling point of water (212 Fahrenheit) is',
fahr_to_kelvin(212),
'degrees Kelvin.')
## [1] "The boiling point of water (212 Fahrenheit) is 373.15 degrees Kelvin."
We've successfully called the function that we defined, and we have access to
the value that we returned.
Question: What would happen if we instead wrote our function as:
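A sketch of such a version, where the last line is an assignment rather than a bare value (the name fahr_to_kelvin_test matches the call used below):

```r
# same conversion, but the last line is an assignment, so the result
# is returned invisibly and nothing prints when the function is called
fahr_to_kelvin_test <- function(temp) {
    kelvin <- ((temp - 32) * (5/9)) + 273.15
}
```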
Nothing is returned! This is because we didn't specify what the output was in
the final line of the function.
However, we can see that the function still worked by assigning the function's
output to an object "a" and calling "a".
# assign to a
a <- fahr_to_kelvin_test(32)
# value of a
a
## [1] 273.15
We can see that even though there was no output from the function, the function
was still operational.
### Variable Scope
In R, variables assigned a value within a function do not retain their values
outside of the function.
x <- 1:3
x
## [1] 1 2 3
# define a function to add 1 to the temporary variable 'input'
plus_one <- function(input) {
input <- input + 1
}
# run our function
plus_one(x)
# x has not actually changed outside of the function
x
## [1] 1 2 3
To change a variable outside of a function you must assign the function's output
to that variable.
plus_one <- function(input) {
output <- input + 1 # store results to output variable
output # return output variable
}
# assign the results of our function to x
x <- plus_one(x)
x
## [1] 2 3 4
### Challenge: Writing Functions
Now that we've seen how to turn Fahrenheit into Kelvin, try your hand at
converting Kelvin to Celsius. Remember, for the same temperature, the value in
Kelvin is 273.15 degrees more than the value in Celsius.
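One possible solution, following the same pattern as fahr_to_kelvin (this function is also used in the next section):

```r
# convert a temperature from Kelvin to Celsius
kelvin_to_celsius <- function(temp) {
    celsius <- temp - 273.15
    celsius
}

kelvin_to_celsius(273.15)  # freezing point of water: 0
```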
Compound Functions
What about converting Fahrenheit to Celsius? We could write out the formula as a
new function or we can combine the two functions we have already created. It
might seem a bit silly to do this just for converting from Fahrenheit to Celsius
but think about the other applications where you will use functions!
# use two functions (F->K & K->C) to create a new one (F->C)
fahr_to_celsius <- function(temp) {
temp_k <- fahr_to_kelvin(temp)
temp_c <- kelvin_to_celsius(temp_k)
temp_c
}
paste('freezing point of water (32 Fahrenheit) in Celsius:',
fahr_to_celsius(32.0))
## [1] "freezing point of water (32 Fahrenheit) in Celsius: 0"
This is our first taste of how larger programs are built: we define basic
operations, then combine them in ever-larger chunks to get the effect we want.
Real-life functions will usually be larger than the ones shown here—typically
half a dozen to a few dozen lines—but they shouldn't ever be much longer than
that, or the next person who reads them won't be able to understand what's
going on.
Recommended Tutorials
This tutorial uses both dplyr and ggplot2. If you are new to either of these
R packages, we recommend the following NEON Data Skills tutorials before
working through this one.
Normalized Difference Vegetation Index (NDVI) is an indicator of how green
vegetation is.
Watch this two and a half minute video from
Karen Joyce
that explains what NDVI is and why it is used.
NDVI is derived from remote sensing data, based on a ratio of the
reflectance of visible red light and near-infrared light. NDVI values
vary from -1.0 to 1.0.
The imagery data used to create this NDVI data were collected over the National
Ecological Observatory Network's
Harvard Forest
field site.
We need to read in two datasets: the 2009-2011 micrometeorological data and the
2011 NDVI data for the Harvard Forest.
# Remember it is good coding technique to add additional libraries to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
library(grid) # for arranging plots
library(dplyr) # for subsetting by season
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# read in the Harvard micro-meteorological data; if you don't already have it
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE
)
#check out the data
str(harMetDaily.09.11)
## 'data.frame': 1095 obs. of 47 variables:
## $ X : int 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 ...
## $ date : chr "2009-01-01" "2009-01-02" "2009-01-03" "2009-01-04" ...
## $ jd : int 1 2 3 4 5 6 7 8 9 10 ...
## $ airt : num -15.1 -9.1 -5.5 -6.4 -2.4 -4.9 -2.6 -3.2 -9.9 -11.1 ...
## $ f.airt : chr "" "" "" "" ...
## $ airtmax : num -9.2 -3.7 -1.6 0 0.7 0 -0.2 -0.5 -6.1 -8 ...
## $ f.airtmax: chr "" "" "" "" ...
## $ airtmin : num -19.1 -15.8 -9.5 -11.4 -6.4 -10.1 -5.1 -9.9 -12.5 -15.9 ...
## $ f.airtmin: chr "" "" "" "" ...
## $ rh : int 58 75 69 59 77 65 97 93 78 77 ...
## $ f.rh : chr "" "" "" "" ...
## $ rhmax : int 76 97 97 78 97 88 100 100 89 92 ...
## $ f.rhmax : chr "" "" "" "" ...
## $ rhmin : int 33 47 41 40 45 38 77 76 63 54 ...
## $ f.rhmin : chr "" "" "" "" ...
## $ dewp : num -21.9 -12.9 -10.9 -13.3 -6.2 -10.9 -3 -4.2 -13.1 -14.5 ...
## $ f.dewp : chr "" "" "" "" ...
## $ dewpmax : num -20.4 -6.2 -6.4 -9.1 -1.7 -7.5 -0.5 -0.6 -11.2 -10.5 ...
## $ f.dewpmax: chr "" "" "" "" ...
## $ dewpmin : num -23.5 -21 -14.3 -16.3 -12.1 -13 -7.6 -11.8 -15.2 -18 ...
## $ f.dewpmin: chr "" "" "" "" ...
## $ prec : num 0 0 0 0 1 0 26.2 0.8 0 1 ...
## $ f.prec : chr "" "" "" "" ...
## $ slrt : num 8.4 3.7 8.1 8.3 2.9 6.3 0.8 2.8 8.8 5.7 ...
## $ f.slrt : chr "" "" "" "" ...
## $ part : num 16.7 7.3 14.8 16.2 5.4 11.7 1.8 7 18.2 11.4 ...
## $ f.part : chr "" "" "" "" ...
## $ netr : num -39.4 -16.6 -35.3 -24.7 -19.4 -18.9 5.6 -21.7 -31.1 -16 ...
## $ f.netr : chr "" "" "" "" ...
## $ bar : int 1011 1005 1004 1008 1006 1009 991 987 1005 1015 ...
## $ f.bar : chr "" "" "" "" ...
## $ wspd : num 2.4 1.4 2.7 1.9 2.1 1 1.4 0 1.3 1 ...
## $ f.wspd : chr "" "" "" "" ...
## $ wres : num 2.1 1 2.5 1.6 1.9 0.7 1.3 0 1.1 0.6 ...
## $ f.wres : chr "" "" "" "" ...
## $ wdir : int 294 237 278 292 268 257 86 0 273 321 ...
## $ f.wdir : chr "" "" "" "" ...
## $ wdev : int 29 42 24 31 26 44 24 0 20 50 ...
## $ f.wdev : chr "" "" "" "" ...
## $ gspd : num 13.4 8.1 13.9 8 11.6 5.1 9.1 0 10.1 5 ...
## $ f.gspd : chr "" "" "" "" ...
## $ s10t : num 1 1 1 1 1 1 1.1 1.2 1.4 1.3 ...
## $ f.s10t : logi NA NA NA NA NA NA ...
## $ s10tmax : num 1.1 1 1 1 1.1 1.1 1.1 1.3 1.4 1.4 ...
## $ f.s10tmax: logi NA NA NA NA NA NA ...
## $ s10tmin : num 1 1 1 1 1 1 1 1.1 1.3 1.2 ...
## $ f.s10tmin: logi NA NA NA NA NA NA ...
# read in the NDVI CSV data; if you don't already have it
NDVI.2011 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/NDVI/meanNDVI_HARV_2011.csv"),
stringsAsFactors = FALSE
)
# check out the data
str(NDVI.2011)
## 'data.frame': 11 obs. of 6 variables:
## $ X : int 1 2 3 4 5 6 7 8 9 10 ...
## $ meanNDVI : num 0.365 0.243 0.251 0.599 0.879 ...
## $ site : chr "HARV" "HARV" "HARV" "HARV" ...
## $ year : int 2011 2011 2011 2011 2011 2011 2011 2011 2011 2011 ...
## $ julianDay: int 5 37 85 133 181 197 213 229 245 261 ...
## $ Date : chr "2011-01-05" "2011-02-06" "2011-03-26" "2011-05-13" ...
In the NDVI dataset, we have the following variables:
'X': an integer identifying each row
meanNDVI: the daily mean NDVI for the area (an average of all pixels in
the original raster).
site: "HARV", indicating all NDVI values are from the Harvard Forest.
year: "2011", indicating all values are from 2011.
julianDay: the numeric day of the year.
Date: the date in "YYYY-MM-DD" format; currently in chr class.
### Challenge: Class Conversion & Subset by Time
The goal of this challenge is to get our datasets ready so that we can work
with data from each, within the same plots or analyses.
Ensure that date fields within both datasets are in the Date class. If not,
convert the data to the Date class.
The NDVI data are limited to 2011, however, the meteorological data are from
2009-2011. Subset and retain only the 2011 meteorological data. Name it
harMet.daily2011.
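One possible approach, as a sketch (the met data object name `harMet.daily` is assumed from the earlier tutorials in this series; adjust it to match your own object):

```r
# convert the date fields to Date class, if they are not already
NDVI.2011$Date <- as.Date(NDVI.2011$Date)
harMet.daily$date <- as.Date(harMet.daily$date)

# retain only the 2011 meteorological data
harMet.daily2011 <- harMet.daily[harMet.daily$date >= as.Date("2011-01-01") &
                                 harMet.daily$date <= as.Date("2011-12-31"), ]
```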
Now that we have our datasets with Date class dates and limited to 2011, we can
begin working with both.
Plot NDVI Data from a .csv
These NDVI data were derived from a raster and are now numeric values in a
data.frame, therefore we can plot them like any of our other variables using
ggplot(). Here we plot meanNDVI by Date.
# plot NDVI by date
ggplot(NDVI.2011, aes(Date, meanNDVI))+
geom_point(colour = "forestgreen", size = 4) +
ggtitle("Daily NDVI at Harvard Forest, 2011")+
theme(legend.position = "none",
plot.title = element_text(lineheight=.8, face="bold",size = 20),
text = element_text(size=20))
Two y-axes or Side-by-Side Plots?
When we have different types of data, like NDVI (scale: 0-1 index units),
Photosynthetically Active Radiation (PAR, scale: 0-65.8 mol per square meter),
or temperature (scale: -20 to 30 C), that we want to plot over time, we cannot
simply plot them on the same plot as they have different y-axes.
One option would be to plot both data types in the same plot space, each
with its own axis (one on the left of the plot and one on the right). However,
one school of thought in graphical representation holds that this is not good
practice. The creator of ggplot2 subscribes to this dislike of dual y-axes,
and so neither qplot nor ggplot has this functionality.
Instead, plots of different types of data can be plotted next to each other to
allow for comparison. Depending on how the plots are being viewed, they can
have a vertical or horizontal arrangement.
### Challenge: Plot NDVI and PAR
Plot the NDVI vs Date (previous plot) and PAR vs Date (create a new plot) in the
same viewer so we can more easily compare them.
The figures from this Challenge are nice but a bit confusing as the dates on the
x-axis don't exactly line up. To fix this we can assign the same min and max
to both x-axes so that they align. The syntax for this is:
limits=c(min=VALUE,max=VALUE).
In our case we want the min and max values to
be based on the min and max of the NDVI.2011$Date so we'll use a function
specifying this instead of a single value.
We can also assign the date format for the x-axis and clearly label both axes.
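Putting those pieces together, the x-axis element might look like this (a sketch; it assumes the scales package is loaded and that NDVI.2011$Date has been converted to Date class in the earlier Challenge):

```r
# align the x-axis of both plots to the NDVI date range
scale_x_date(limits = c(min(NDVI.2011$Date), max(NDVI.2011$Date)),
             breaks = date_breaks("1 month"),
             labels = date_format("%b %y"))
```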
### Challenge: Plot Air Temperature and NDVI
Create a plot, complementary to those above, showing air temperature (`airt`)
throughout 2011. Choose colors and symbols that show the data well.
Second, plot PAR, air temperature and NDVI in a single pane for ease of
comparison.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Recommended Tutorials
This tutorial uses both dplyr and ggplot2. If you are new to either of these
R packages, we recommend the following NEON Data Skills tutorials before
working through this one.
In this tutorial we will learn how to create a panel of individual plots - known
as facets in ggplot2. Each plot represents a particular data_frame
time-series subset, for example a year or a season.
Load the Data
We will use the daily micro-meteorology data for 2009-2011 from the Harvard
Forest. If you do not have this data loaded into an R data_frame, please
load them and convert date-time columns to a date-time class now.
# Remember it is good coding technique to add additional libraries to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
library(grid) # for arranging plots
library(dplyr) # for subsetting by season
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# daily HARV met data, 2009-2011
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE
)
# convert date to Date class
harMetDaily.09.11$date <- as.Date(harMetDaily.09.11$date)
ggplot2 Facets
Facets allow us to plot subsets of data in one cleanly organized panel. We use
facet_grid() to create a plot of a particular variable subsetted by a
particular group.
Let's plot air temperature as we did previously. We will name the ggplot
object AirTempDaily.
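As a reminder, the plot object can be created like this (a sketch matching the plotting code from the earlier ggplot2 tutorial):

```r
# plot daily air temperature and save the ggplot object
AirTempDaily <- ggplot(harMetDaily.09.11, aes(date, airt)) +
  geom_point(na.rm = TRUE) +
  ggtitle("Daily Air Temperature\n NEON Harvard Forest\n 2009-2011") +
  xlab("Date") + ylab("Air Temperature (C)")

# render the plot
AirTempDaily
```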
**Data Tip:** If you are working with a date & time
class (e.g. POSIXct), you can use `scale_x_datetime` instead of `scale_x_date`.
This plot tells us a lot about the annual increase and decrease of temperature
at the NEON Harvard Forest field site. However, what if we want to plot each
year's worth of data individually?
We can use the facet_grid() and facet_wrap() functions in ggplot2 to create
facets, or a panel of plots that are grouped by a particular category or time
period. To create a
plot for each year, we will first need a year column in our data to use as a
subset factor. We created a year column using the year function in the
lubridate package in the
Subset and Manipulate Time Series Data with dplyr tutorial.
# add year column to daily values
harMetDaily.09.11$year <- year(harMetDaily.09.11$date)
# view year column head and tail
head(harMetDaily.09.11$year)
## [1] 2009 2009 2009 2009 2009 2009
tail(harMetDaily.09.11$year)
## [1] 2011 2011 2011 2011 2011 2011
Facet by Year
Once we have a column that can be used to group or subset our data, we can
create a faceted plot - plotting each year's worth of data in an individual,
labelled panel.
# run this code to plot the same plot as before but with one plot per year
AirTempDaily + facet_grid(. ~ year)
## Error: At least one layer must contain all faceting variables: `year`.
## * Plot is missing `year`
## * Layer 1 is missing `year`
Oops - what happened? The plot did not render because we added the year column
after creating the ggplot object AirTempDaily. Let's rerun the plotting code
to ensure our newly added column is recognized.
The faceted plot is interesting; however, the x-axis on each plot spans the
full date range, 2009 through 2011. This means that the data for 2009 sit at
the left end of the 2009 panel's x-axis, while the data for 2011 sit at the
right end of the 2011 panel's x-axis.
Our plots would be easier to visually compare if the days were formatted in
Julian or year days rather than date. We have Julian days stored in our
data_frame (harMetDaily.09.11) as jd.
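A sketch of the Julian-day version of the faceted plot, saved as `AirTempDaily_jd` (the object name reused in the next section):

```r
# plot air temperature by Julian day (jd), faceted by year
AirTempDaily_jd <- ggplot(harMetDaily.09.11, aes(jd, airt)) +
  geom_point(na.rm = TRUE) +
  ggtitle("Air Temperature\n NEON Harvard Forest") +
  xlab("Julian Day") + ylab("Air Temperature (C)")

AirTempDaily_jd + facet_grid(. ~ year)
```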
Using Julian day, our plots are easier to visually compare. Arranging our plots
this way, side by side, allows us to quickly scan for differences along the
y-axis. Notice any differences in min vs max air temperature across the three
years?
Arrange Facets
We can rearrange the facets in different ways, too.
# move labels to the RIGHT and stack all plots
AirTempDaily_jd + facet_grid(year ~ .)
If we use facet_wrap we can specify the number of columns.
# display in two columns
AirTempDaily_jd + facet_wrap(~year, ncol = 2)
Graph Two Variables on One Plot
Next, let's explore the relationship between two variables - air temperature
and soil temperature. We might expect soil temperature to fluctuate with changes
in air temperature over time.
We will use ggplot() to plot airt and s10t (soil temperature 10 cm below
the ground).
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature\n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
airSoilTemp_Plot
The plot above suggests a relationship between the air and soil temperature as
we might expect. However, it clumps all three years worth of data into one plot.
Let's create a stacked faceted plot of air vs. soil temperature grouped by year.
Lucky for us, we can do this quickly with one line of code while reusing the
plot we created above.
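That one line is just a facet element added to the saved plot object:

```r
# stack the air vs. soil temperature scatterplots, one panel per year
airSoilTemp_Plot + facet_grid(year ~ .)
```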
Have a close look at the data. Are there any noticeable min/max temperature
differences between the three years?
### Challenge: Faceted Plot
Create a faceted plot of air temperature vs soil temperature by month rather
than year.
HINT: To create this plot, you will want to add a month column to our
data_frame. We can use lubridate month in the same way we used year to add
a year column.
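A sketch of one approach (remember to rebuild `airSoilTemp_Plot` after adding the column, since a saved ggplot object does not see columns added later):

```r
# add a numeric month column using lubridate
harMetDaily.09.11$month <- month(harMetDaily.09.11$date)

# rebuild airSoilTemp_Plot (as above), then facet by month
airSoilTemp_Plot + facet_wrap(~month, ncol = 4)
```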
Faceted Plots & Categorical Groups
In the challenge above, we grouped our data by month - specified by a numeric
value between 1 (January) and 12 (December). However, what if we wanted to
organize our plots using a categorical (character) group such as month name?
Let's do that next.
If we want to group our data by month name, we first need to create a month
name column in our data_frame. We can create this column using the following
syntax:
format(harMetDaily.09.11$date,"%B"),
which tells R to extract the month name (%B) from the date field.
# add text month name column
harMetDaily.09.11$month_name <- format(harMetDaily.09.11$date,"%B")
# view head and tail
head(harMetDaily.09.11$month_name)
## [1] "January" "January" "January" "January" "January" "January"
tail(harMetDaily.09.11$month_name)
## [1] "December" "December" "December" "December" "December" "December"
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature \n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# create faceted panel
airSoilTemp_Plot + facet_wrap(~month_name, ncol = 3)
Great! We've created a nice set of plots by month. However, how are the plots
ordered? It looks like R is ordering things alphabetically, yet we know
that months are ordinal not character strings. To account for order, we can
reassign the month_name field to a factor. This will allow us to specify
an order to each factor "level" (each month is a level).
The syntax for this operation is:
Turn the field into a factor: factor(fieldName).
Designate the levels using a vector: c(level1, level2, level3).
In our case, each level will be a month.
# order the factors
harMetDaily.09.11$month_name = factor(harMetDaily.09.11$month_name,
levels=c('January','February','March',
'April','May','June','July',
'August','September','October',
'November','December'))
Once we have specified the factor column and its associated levels, we can plot
again. Remember, that because we have modified a column in our data_frame, we
need to rerun our ggplot code.
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature \n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# create faceted panel
airSoilTemp_Plot + facet_wrap(~month_name, ncol = 3)
Subset by Season - Advanced Topic
Sometimes we want to group data by custom time periods. For example, we might
want to group by season. However, the definition of various seasons may vary by
region which means we need to manually define each time period.
In the next coding section, we will add a season column to our data using a
manually defined query. Our field site is Harvard Forest (Massachusetts),
located in the northeastern portion of the United States. Based on the climate
of this region, we can divide the year into 4 seasons as follows:
Winter: December - February
Spring: March - May
Summer: June - August
Fall: September - November
In order to subset the data by season we will use the dplyr package. We
can use the numeric month column that we added to our data earlier in this
tutorial.
# add month to data_frame - note we already performed this step above.
harMetDaily.09.11$month <- month(harMetDaily.09.11$date)
# view head and tail of column
head(harMetDaily.09.11$month)
## [1] 1 1 1 1 1 1
tail(harMetDaily.09.11$month)
## [1] 12 12 12 12 12 12
We can use mutate() and a set of ifelse statements to create a new
categorical variable called season by grouping three months together.
Within the ifelse() statements, %in% is shorthand for "contained within". So the syntax
ifelse(month %in% c(12, 1, 2), "Winter",
can be read as "if the month column value is 12, 1, or 2, then assign the
value "Winter"".
Our ifelse statement ends with
ifelse(month %in% c(9, 10, 11), "Fall", "Error")
which we can translate as "if the month column value is 9, 10, or 11, then
assign the value "Fall"".
The last portion, "Error", tells R that if a month column value does not
fall within any of the criteria laid out in the previous ifelse statements,
it should assign the column the value "Error".
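Putting the statements described above together, the season column can be sketched with dplyr's mutate() and nested ifelse() calls:

```r
# add a season column based on the numeric month
harMetDaily.09.11 <- harMetDaily.09.11 %>%
  mutate(season =
           ifelse(month %in% c(12, 1, 2), "Winter",
           ifelse(month %in% c(3, 4, 5), "Spring",
           ifelse(month %in% c(6, 7, 8), "Summer",
           ifelse(month %in% c(9, 10, 11), "Fall", "Error")))))
```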
Now that we have a season column, we can plot our data by season!
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature\n 2009-2011\n NEON Harvard Forest Field Site") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# run this code to plot the same plot as before but with one plot per season
airSoilTemp_Plot + facet_grid(. ~ season)
Note, that once again, we re-ran our ggplot code to make sure our new column
is recognized by R. We can experiment with various facet layouts next.
# for a landscape orientation of the plots we change the order of arguments in
# facet_grid():
airSoilTemp_Plot + facet_grid(season ~ .)
Once again, R is arranging the plots in an alphabetical order not an order
relevant to the data.
### Challenge: Create Plots by Season
The goal of this challenge is to create plots that show the relationship between
air and soil temperature across the different seasons with seasons arranged in
an ecologically meaningful order.
Create a factor class season variable by converting the season column that
we just created to a factor, then organize the seasons chronologically as
follows: Winter, Spring, Summer, Fall.
Create a new faceted plot that is 2 x 2 (2 columns of plots). HINT: One can
neatly plot multiple variables using facets as follows:
facet_grid(variable1 ~ variable2).
Create a plot of air vs soil temperature grouped by year and season.
Work with Year-Month Data: base R and zoo Package
Some data will have month formatted in Year-Month
(e.g. met_monthly_HARV$date).
(Note: You will load this file in the Challenge below)
For many analyses, we might want to summarize this data into a yearly total.
Base R does NOT have a distinct year-month date class. Instead, to work with a
year-month field using base R, we need to convert it to a Date class, which
necessitates adding an associated day value. The syntax would be:
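A sketch of that conversion (`date_base` is the field name used in the Challenge below):

```r
# append an arbitrary day ("-01") and convert to Date class
met_monthly_HARV$date_base <- as.Date(
  paste(met_monthly_HARV$date, "-01", sep = ""),
  format = "%Y-%m-%d")
```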
The syntax above creates a Date column from the met_monthly_HARV$date column.
We then append an arbitrary day value - the first of the month ("-01"). The final
bit of code (sep="") designates the character string used to separate the month,
day, and year portions of the returned string (in our case nothing, as we have
included the hyphen with our inserted day value).
Alternatively, to work directly with a year-month data we could use the zoo
package and its included year-month date class - as.yearmon. With zoo the
syntax would be:
as.Date(as.yearmon(met_monthly_HARV$date))
### Challenge: Convert Year-Month Data
The goal of this challenge is to use both the base R and the `zoo` package
methods for working with year-month data.
Load the NEON-DS-Met-Time-Series/HARV/FisherTower-Met/hf001-04-monthly-m.csv
file and give it the name met_monthly_HARV. Then:
Convert the date field into a date/time class using both base R and the
zoo package. Name the new fields date_base and ymon_zoo respectively.
Look at the format and check the class of both new date fields.
Convert the ymon_zoo field into a new Date class field (date_zoo) so it
can be used in base R, ggplot, etc.
HINT: be sure to load the zoo package, if you have not already.
Do you prefer to use base R or zoo to convert these data to a date/time
class?
**Data Tip:** `zoo` date/time classes cannot be used
directly with ggplot2. If you deal with date formats that make sense to
primarily use `zoo` date/time classes, you can use ggplot2 with the addition of
other functions. For details see the
ggplot2.zoo documentation.
### Challenge: Plot Year-Month Data
Using the date field `date_base` (or `date_zoo`) that you created in the
previous challenge, create a faceted plot of annual air temperature for each
year (2001-2015) with month as the x-axis for the NEON Harvard Forest field
site.
This tutorial uses ggplot2 to create customized plots of time
series data. We will learn how to adjust x- and y-axis ticks using the scales
package, how to add trend lines to a scatter plot and how to customize plot
labels, colors and overall plot appearance using ggthemes.
Learning Objectives
After completing this tutorial, you will be able to:
Create basic time series plots using ggplot() in R.
Explain the syntax of ggplot() and know how to find out more about the
package.
Plot data using scatter and bar plots.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded on
your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Plotting our data allows us to quickly see general patterns including
outlier points and trends. Plots are also a useful way to communicate the
results of our research. ggplot2 is a powerful R package that we use to
create customized, professional plots.
Load the Data
We will use the lubridate, ggplot2, scales and gridExtra packages in
this tutorial.
Our data subset will be the daily meteorology data for 2009-2011 for the NEON
Harvard Forest field site
(NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv).
If this subset is not already loaded, please load it now.
# Remember it is good coding technique to add additional packages to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# daily HARV met data, 2009-2011
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE)
# convert date to Date class
harMetDaily.09.11$date <- as.Date(harMetDaily.09.11$date)
# monthly HARV temperature data, 2009-2011
harTemp.monthly.09.11<-read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Temp_HARV_Monthly_09_11.csv"),
stringsAsFactors=FALSE
)
# datetime field is actually just a date
#str(harTemp.monthly.09.11)
# convert datetime from chr to date class & rename date for clarification
harTemp.monthly.09.11$date <- as.Date(harTemp.monthly.09.11$datetime)
Plot with qplot
We can use the qplot() function in the ggplot2 package to quickly plot a
variable such as air temperature (airt) across all three years of our daily
average time series data.
# plot air temp
qplot(x=date, y=airt,
data=harMetDaily.09.11, na.rm=TRUE,
main="Air temperature Harvard Forest\n 2009-2011",
xlab="Date", ylab="Temperature (°C)")
The resulting plot displays the pattern of air temperature increasing and
decreasing over three years. While qplot() is a quick way to plot data, our
ability to customize the output is limited.
Plot with ggplot
The ggplot() function within the ggplot2 package gives us more control
over plot appearance. However, to use ggplot we need to learn a slightly
different syntax. Three basic elements are needed for ggplot() to work:
The data_frame: containing the variables that we wish to plot,
aes (aesthetics): which denotes which variables will map to the x-, y-
(and other) axes,
geom_XXXX (geometry): which defines the data's graphical representation
(e.g. points (geom_point), bars (geom_bar), lines (geom_line), etc).
The syntax begins with the base statement that includes the data_frame
(harMetDaily.09.11) and associated x (date) and y (airt) variables to be
plotted:
ggplot(harMetDaily.09.11, aes(date, airt))
To successfully plot, the last piece that is needed is the geometry type. In
this case, we want to create a scatterplot so we can add + geom_point().
Let's create an air temperature scatterplot.
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE)
Customize A Scatterplot
We can customize our plot in many ways. For instance, we can change the size,
shape, and color of the points using size=, pch=, and color= in the geom_point
element.
geom_point(na.rm=TRUE, color="blue", size=1)
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="blue", size=3, pch=18)
Modify Title & Axis Labels
We can modify plot attributes by adding elements using the + symbol.
For example, we can add a title using + ggtitle("TEXT"), and axis
labels using + xlab("TEXT") + ylab("TEXT").
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="blue", size=1) +
ggtitle("Air Temperature 2009-2011\n NEON Harvard Forest Field Site") +
xlab("Date") + ylab("Air Temperature (C)")
**Data Tip:** Use `help(ggplot2)` to review the many
elements that can be defined and added to a `ggplot2` plot.
Name Plot Objects
We can create a ggplot object by assigning our plot to an object name.
When we do this, the plot will not render automatically. To render the plot, we
need to call it in the code.
Assigning plots to an R object allows us to effectively add on to,
and modify the plot later. Let's create a new plot and call it AirTempDaily.
# plot Air Temperature Data across 2009-2011 using daily data
AirTempDaily <- ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="purple", size=1) +
ggtitle("Air Temperature\n 2009-2011\n NEON Harvard Forest") +
xlab("Date") + ylab("Air Temperature (C)")
# render the plot
AirTempDaily
Format Dates in Axis Labels
We can adjust the date display format (e.g. 2009-07 vs. Jul 09) and the number
of major and minor ticks for axis date values using scale_x_date. Let's
format the axis ticks so they read "month year" (%b %y). To do this, we will
use the syntax:
scale_x_date(labels=date_format("%b %y"))
Rather than re-coding the entire plot, we can add the scale_x_date element
to the plot object AirTempDaily that we just created.
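Adding the element to the existing object looks like this (date_format() comes from the scales package):

```r
# format x-axis tick labels as "month year"
AirTempDaily + scale_x_date(labels = date_format("%b %y"))
```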
**Data Tip:** You can type `?strptime` into the R
console to find a list of date format conversion specifications (e.g. %b = month).
Type `?scale_x_date` for a list of parameters that allow you to format dates
on the x-axis.
**Data Tip:** If you are working with a date & time
class (e.g. POSIXct), you can use `scale_x_datetime` instead of `scale_x_date`.
Adjust Date Ticks
We can adjust the date ticks too. In this instance, having 1 tick per year may
be enough. If we have the scales package loaded, we can use
breaks=date_breaks("1 year") within the scale_x_date element to create
a tick for every year. We can adjust this as needed (e.g. 10 days, 30 days, 1
month).
From R HELP (?date_breaks): width is an interval specification, one of "sec",
"min", "hour", "day", "week", "month", or "year". It can optionally be preceded
by an integer and a space, or followed by "s".
# format x-axis: dates
AirTempDaily_1y <- AirTempDaily +
(scale_x_date(breaks=date_breaks("1 year"),
labels=date_format("%b %y")))
AirTempDaily_1y
**Data Tip:** We can adjust the tick spacing and
format for x- and y-axes using `scale_x_continuous` or `scale_y_continuous` to
format a continuous variable. Check out `?scale_x_` (tab complete to view the
various x and y scale options)
ggplot - Subset by Time
Sometimes we want to scale the x- or y-axis to a particular time subset without
subsetting the entire data_frame. To do this, we can define start and end
times. We can then define the limits in the scale_x_date object as
follows:
scale_x_date(limits=start.end) +
# Define Start and end times for the subset as R objects that are the time class
startTime <- as.Date("2011-01-01")
endTime <- as.Date("2012-01-01")
# create a start and end time R object
start.end <- c(startTime,endTime)
start.end
## [1] "2011-01-01" "2012-01-01"
# View data for 2011 only
# We will replot the entire plot as the title has now changed.
AirTempDaily_2011 <- ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="purple", size=1) +
ggtitle("Air Temperature\n 2011\n NEON Harvard Forest") +
xlab("Date") + ylab("Air Temperature (C)")+
(scale_x_date(limits=start.end,
breaks=date_breaks("1 year"),
labels=date_format("%b %y")))
AirTempDaily_2011
ggplot() Themes
We can use the theme() element to adjust figure elements.
There are some nice pre-defined themes that we can use as a starting place.
# Apply a black and white stock ggplot theme
AirTempDaily_bw<-AirTempDaily_1y +
theme_bw()
AirTempDaily_bw
Using theme_bw(), we now have a white background rather than grey.
Import New Themes BonusTopic
There are externally developed themes built by the R community that are worth
mentioning. Feel free to experiment with the code below to install ggthemes.
A list of themes loaded in the ggthemes library is found here.
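For example (a sketch; run the install line only once):

```r
# install.packages("ggthemes")  # run once to install
library(ggthemes)

# apply an externally developed theme, e.g. The Economist style
AirTempDaily_1y + theme_economist()
```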
Customize ggplot Themes
We can customize theme elements manually too. Let's customize the font size and
style.
# format x axis with dates
AirTempDaily_custom<-AirTempDaily_1y +
# theme(plot.title) allows to format the Title separately from other text
theme(plot.title = element_text(lineheight=.8, face="bold",size = 20)) +
# theme(text) will format all text that isn't specifically formatted elsewhere
theme(text = element_text(size=18))
AirTempDaily_custom
### Challenge: Plot Total Daily Precipitation
Create a plot of total daily precipitation using data in the `harMetDaily.09.11`
`data_frame`.
Format the dates on the x-axis: Month-Year.
Create a plot object called PrecipDaily.
Be sure to add an appropriate title in addition to x and y axis labels.
Increase the font size of the plot text and adjust the number of ticks on the
x-axis.
Bar Plots with ggplot
We can use ggplot to create bar plots too. Let's create a bar plot of total
daily precipitation next. A bar plot might be a better way to represent a total
daily value. To create a bar plot, we change the geom element from
geom_point() to geom_bar().
The default setting for a ggplot bar plot - geom_bar() - is a histogram
designated by stat="bin". However, in this case, we want to plot actual
precipitation values. We can use geom_bar(stat="identity") to force ggplot to
plot actual values.
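A sketch of the bar plot, saved as `PrecipDailyBarA` (the object name reused later in this section; the precipitation units are assumed to be mm):

```r
# plot total daily precipitation as bars
PrecipDailyBarA <- ggplot(harMetDaily.09.11, aes(date, prec)) +
  geom_bar(stat = "identity", na.rm = TRUE) +
  ggtitle("Daily Precipitation\n NEON Harvard Forest") +
  xlab("Date") + ylab("Precipitation (mm)")

PrecipDailyBarA
```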
Note that some of the bars in the resulting plot appear grey rather than black.
This is because R does its best to adjust the colors of bars that are closely
spaced, to improve readability. If we zoom into the plot, all of the bars are
black.
### Challenge: Plot with scale_x_date()
Without creating a subsetted dataframe, plot the precipitation data for
*2010 only*. Customize the plot with:
a descriptive title and axis labels,
breaks every 4 months, and
x-axis labels as only the full month (spelled out).
HINT: you will need to rebuild the precipitation plot as you will have to
specify a new scale_x_date() element.
We can change the bar outline color using colour= within the
geom_bar() element. We can also specify separate fill and outline colors using
fill= and colour=. Colors can be specified by name (e.g.,
"blue") or hexadecimal color codes (e.g., #FF9999).
There are many color cheatsheets out there to help with color selection!
# specifying color by name
PrecipDailyBarB <- PrecipDailyBarA+
geom_bar(stat="identity", colour="darkblue")
PrecipDailyBarB
**Data Tip:** For more information on color,
including color blind friendly color palettes, checkout the
ggplot2 color information from Winston Chang's *Cookbook* *for* *R* site
based on the _R_ _Graphics_ _Cookbook_ text.
Figures with Lines
We can create line plots too using ggplot. To do this, we use geom_line()
instead of bar or point.
Note that lines may not be the best way to represent air temperature data given
lines suggest that the connecting points are directly related. It is important
to consider what type of plot best represents the type of data that you are
presenting.
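A sketch of a line plot of the daily air temperature data:

```r
# line plot of daily air temperature
AirTempDaily_line <- ggplot(harMetDaily.09.11, aes(date, airt)) +
  geom_line(na.rm = TRUE, color = "blue") +
  ggtitle("Air Temperature\n NEON Harvard Forest") +
  xlab("Date") + ylab("Air Temperature (C)")

AirTempDaily_line
```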
### Challenge: Combine Points & Lines
You can combine geometries within one plot. For example, you can have a
`geom_line()` and `geom_point` element in a plot. Add `geom_line(na.rm=TRUE)` to
the `AirTempDaily`, a `geom_point` plot. What happens?
Trend Lines
We can add a trend line, which is a statistical transformation of our data to
represent general patterns, using stat_smooth(). stat_smooth() requires a
statistical method as follows:
For data with fewer than 1,000 observations: the default model is a loess model
(a non-parametric regression model).
For data with 1,000 or more observations: the default model is a GAM (generalized
additive model).
A specific model/method can also be specified: for example, a linear regression (method="lm").
For this tutorial, we will use the default trend line model. The gam method will
be used given that we have 1,095 measurements.
**Data Tip:** Remember a trend line is a statistical
transformation of the data, so prior to adding the line one must understand if a
particular statistical transformation is appropriate for the data.
# add a trend line using the default method
AirTempDaily_trend <- AirTempDaily + stat_smooth(colour="green")
AirTempDaily_trend
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
### Challenge: A Trend in Precipitation?
Create a bar plot of total daily precipitation. Add a:
Trend line for total daily precipitation.
Make the bars purple (or your favorite color!).
Make the trend line grey (or your other favorite color).
Adjust the tick spacing and format the dates to appear as "Jan 2009".
Render the title in italics.
### Challenge: Plot Monthly Air Temperature
Plot the monthly air temperature across 2009-2011 using the
harTemp.monthly.09.11 data_frame. Name your plot "AirTempMonthly". Be sure to
label axes and adjust the plot ticks as you see fit.
Display Multiple Figures in Same Panel
It is often useful to arrange plots in a panel rather than displaying them
individually. In base R, we use par(mfrow = c(rows, cols)) to accomplish this. However
the grid.arrange() function from the gridExtra package provides a more
efficient approach!
grid.arrange requires 2 things:
the names of the plots that you wish to render in the panel.
the number of columns (ncol).
grid.arrange(plotOne, plotTwo, ncol=1)
Let's plot AirTempMonthly and AirTempDaily on top of each other. To do this,
we'll specify one column.
# note - be sure library(gridExtra) is loaded!
# stack plots in one column
grid.arrange(AirTempDaily, AirTempMonthly, ncol=1)
### Challenge: Create Panel of Plots
Plot AirTempMonthly and AirTempDaily next to each other rather than stacked
on top of each other.
Additional ggplot2 Resources
In this tutorial, we've covered the basics of ggplot. There are many great
resources that cover refining ggplot figures. A few are below:
In this tutorial, we will use the group_by, summarize and mutate functions
in the dplyr package to efficiently manipulate atmospheric data collected at
the NEON Harvard Forest Field Site. We will use pipes to efficiently perform
multiple tasks within a single chunk of code.
Learning Objectives
After completing this tutorial, you will be able to:
Explain several ways to manipulate data using functions in the dplyr
package in R.
Use the group_by(), summarize(), and mutate() functions.
Write and understand R code with pipes for cleaner, efficient coding.
Use the year() function from the lubridate package to extract year from a
date-time class variable.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded on
your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
The dplyr package simplifies and increases efficiency of complicated yet
commonly performed data "wrangling" (manipulation / processing) tasks. It uses
the data_frame object as both an input and an output.
Load the Data
We will need the lubridate and the dplyr packages to complete this tutorial.
We will also use the 15-minute average atmospheric data subsetted to 2009-2011
for the NEON Harvard Forest Field Site. This subset was created in the
Subsetting Time Series Data tutorial.
If this object isn't already created, please load the .csv version:
Met_HARV_15min_2009_2011.csv. Be
sure to convert the date-time column to a POSIXct class after the .csv is
loaded.
# it's good coding practice to load packages at the top of a script
library(lubridate) # work with dates
library(dplyr) # data manipulation (filter, summarize, mutate)
library(ggplot2) # graphics
library(gridExtra) # tile several plots next to each other
library(scales)   # format dates on plot axes
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# 15-min Harvard Forest met data, 2009-2011
harMet15.09.11<- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_15min_2009_2011.csv"),
stringsAsFactors = FALSE)
# convert datetime to POSIXct
harMet15.09.11$datetime<-as.POSIXct(harMet15.09.11$datetime,
format = "%Y-%m-%d %H:%M",
tz = "America/New_York")
Explore The Data
Remember that we are interested in the drivers of phenology including -
air temperature, precipitation, and PAR (photosynthetic active radiation - or
the amount of visible light). Using the 15-minute averaged data, we could easily
plot each of these variables.
However, summarizing the data at a coarser scale (e.g., daily, weekly, by
season, or by year) may be easier to visually interpret during initial stages of
data exploration.
Extract Year from a Date-Time Column
To summarize by year efficiently, it is helpful to have a year column that we
can use to group by. We can use the lubridate function year() to extract
the year only from a date-time class R column.
# create a year column
harMet15.09.11$year <- year(harMet15.09.11$datetime)
Using names() we see that we now have a year column in our data_frame.
Now that we have added a year column to our data_frame, we can use dplyr to
summarize our data.
Manipulate Data using dplyr
Let's start by extracting a yearly air temperature value for the Harvard Forest
data. To calculate a yearly average, we need to:
Group our data by year.
Calculate the mean air temperature value for each group (i.e., for each year).
We will use dplyr functions group_by and summarize to perform these steps.
# Create a group_by object using the year column
HARV.grp.year <- group_by(harMet15.09.11, # data_frame object
year) # column name to group by
# view class of the grouped object
class(HARV.grp.year)
## [1] "grouped_df" "tbl_df" "tbl" "data.frame"
The group_by function creates a "grouped" object. We can then use this
grouped object to quickly calculate summary statistics by group - in this case,
year. For example, we can count how many measurements were made each year using
the tally() function. We can then use the summarize function to calculate
the mean air temperature value each year. Note that "summarize" is a common
function name across many different packages. If you happen to have two packages
loaded at the same time that both have a "summarize" function, you may not get
the results that you expect. Therefore, we will "disambiguate" our function by
specifying which package it comes from using the double colon notation
dplyr::summarize().
# how many measurements were made each year?
tally(HARV.grp.year)
## # A tibble: 3 x 2
## year n
## <dbl> <int>
## 1 2009 35036
## 2 2010 35036
## 3 2011 35036
# what is the mean airt value per year?
dplyr::summarize(HARV.grp.year,
mean(airt) # calculate the annual mean of airt
)
## # A tibble: 3 x 2
## year `mean(airt)`
## <dbl> <dbl>
## 1 2009 NA
## 2 2010 NA
## 3 2011 8.75
Why did this return a NA value for years 2009 and 2010? We know there are some
values for both years. Let's check our data for NoData values.
# are there NoData values?
sum(is.na(HARV.grp.year$airt))
## [1] 2
# where are the no data values
# just view the first 6 columns of data
HARV.grp.year[is.na(HARV.grp.year$airt),1:6]
## # A tibble: 2 x 6
## X datetime jd airt f.airt rh
## <int> <dttm> <int> <dbl> <chr> <int>
## 1 158360 2009-07-08 14:15:00 189 NA M NA
## 2 203173 2010-10-18 09:30:00 291 NA M NA
It appears as if there are two NoData values (in 2009 and 2010) that are
causing R to return a NA for the mean for those years. As we learned
previously, we can use na.rm to tell R to ignore those values and calculate
the final mean value.
# calculate mean but remove NA values
dplyr::summarize(HARV.grp.year,
mean(airt, na.rm = TRUE)
)
## # A tibble: 3 x 2
## year `mean(airt, na.rm = TRUE)`
## <dbl> <dbl>
## 1 2009 7.63
## 2 2010 9.03
## 3 2011 8.75
Great! We've now used the group_by function to create groups for each year
of our data. We then calculated a summary mean value per year using summarize.
dplyr Pipes
The above approach required several lines of R code and created one intermediate
R object - HARV.grp.year. We can combine these steps using pipes from the
dplyr package.
We can use pipes to string functions or processing steps together. The output
of each step is fed directly into the next step using the syntax: %>%. This
means we don't need to save the output of each intermediate step as a new R
object.
A few notes about piping syntax:
A pipe begins with the object name that we will be manipulating, in our case
harMet15.09.11.
It then links that object with the first manipulation step using %>%.
Finally, the first function is called, in our case group_by(year). Note
that because we specified the object name in step one above, we can just use the
column name.
So, we have: harMet15.09.11 %>% group_by(year)
We can then add an additional function (or more functions!) to our pipe. For
example, we can tell R to tally or count the number of measurements per
year.
harMet15.09.11 %>% group_by(year) %>% tally()
Let's try it!
# how many measurements were made a year?
harMet15.09.11 %>%
group_by(year) %>% # group by year
tally() # count measurements per year
## # A tibble: 3 x 2
## year n
## <dbl> <int>
## 1 2009 35036
## 2 2010 35036
## 3 2011 35036
Piping allows us to efficiently perform operations on our data_frame in that:
It allows us to condense our code, without naming intermediate steps.
The dplyr package is optimized to ensure fast processing!
We can use pipes to summarize data by year too:
# what was the annual air temperature average
year.sum <- harMet15.09.11 %>%
group_by(year) %>% # group by year
dplyr::summarize(mean(airt, na.rm=TRUE))
# what is the class of the output?
year.sum
## # A tibble: 3 x 2
## year `mean(airt, na.rm = TRUE)`
## <dbl> <dbl>
## 1 2009 7.63
## 2 2010 9.03
## 3 2011 8.75
# view structure of output
str(year.sum)
## tibble [3 × 2] (S3: tbl_df/tbl/data.frame)
## $ year : num [1:3] 2009 2010 2011
## $ mean(airt, na.rm = TRUE): num [1:3] 7.63 9.03 8.75
### Challenge: Using Pipes
Use piping to create a `data_frame` called `jday.avg` that contains the average
`airt` per Julian day (`harMet15.09.11$jd`). Plot the output using `qplot`.
**Data Tip:** Older `dplyr` versions used the `%.%`
syntax to designate a pipe. Pipes are sometimes referred to as chains.
Other dplyr Functions
dplyr works based on a series of verb functions that allow us to manipulate
the data in different ways:
filter() & slice(): filter rows based on values in specified columns
group_by(): group all data by a column
arrange(): sort data by values in specified columns
select() & rename(): view and work with data from only specified columns
distinct(): view and work with only unique values from specified columns
mutate() & transmute(): add new data to a data frame
summarise(): calculate the specified summary statistics
sample_n() & sample_frac(): return a random sample of rows
For each of these functions, the first argument is the data_frame to operate on,
the subsequent arguments dictate what to do with that data_frame, and
the output is a new data frame.
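A quick sketch of a few of these verbs in action, on a small made-up data_frame (the column names mimic the Harvard Forest data but the values are invented):

```r
library(dplyr)

toy <- data.frame(year = c(2009, 2009, 2010, 2010),
                  airt = c(-5, 10, -2, 12))

# filter(): keep only rows matching a condition
filter(toy, airt > 0)

# arrange(): sort rows by values in a column
arrange(toy, airt)

# select(): keep only the named columns
select(toy, year)

# mutate(): add a new column computed from existing ones
mutate(toy, airt_f = airt * 9 / 5 + 32)
```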
Group by a Variable (or Two)
Our goal for this tutorial is to view drivers of annual phenology patterns.
Specifically, we want to explore daily average temperature throughout the year.
This means we need to calculate average temperature, for each day, across three
years. To do this we can use the group_by() function as we did earlier.
However this time, we can group by two variables: year and Julian Day (jd) as follows:
group_by(year, jd)
Let's begin by counting or tallying the total measurements per Julian day (or
year day) using the group_by() function and pipes.
harMet15.09.11 %>% # use the harMet15.09.11 data_frame
group_by(year, jd) %>% # group data by Year & Julian day
tally() # tally (count) observations per jd / year
## # A tibble: 1,096 x 3
## # Groups: year [3]
## year jd n
## <dbl> <int> <int>
## 1 2009 1 96
## 2 2009 2 96
## 3 2009 3 96
## 4 2009 4 96
## 5 2009 5 96
## 6 2009 6 96
## 7 2009 7 96
## 8 2009 8 96
## 9 2009 9 96
## 10 2009 10 96
## # … with 1,086 more rows
The output shows we have 96 values for each day. Is that what we expect?
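As a quick sanity check, with one measurement every 15 minutes we expect:

```r
# 15-minute intervals per day: 4 per hour * 24 hours
(60 / 15) * 24
## [1] 96
```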
**Data Tip:** If Julian days weren't already in our
data, we could use the `yday()` function from the `lubridate` package
to create a new column containing Julian day
values. More information in this
NEON Data Skills tutorial.
Summarize by Group
We can use the summarize() function to calculate a summary output value for each
"group" - in this case Julian day per year. Let's calculate the mean air
temperature for each Julian day per year. Note that we are still using
na.rm=TRUE to tell R to skip NA values.
harMet15.09.11 %>% # use the harMet15.09.11 data_frame
group_by(year, jd) %>% # group data by Year & Julian day
dplyr::summarize(mean_airt = mean(airt, na.rm = TRUE)) # mean airtemp per jd / year
## `summarise()` has grouped output by 'year'. You can override using the `.groups` argument.
## # A tibble: 1,096 x 3
## # Groups: year [3]
## year jd mean_airt
## <dbl> <int> <dbl>
## 1 2009 1 -15.1
## 2 2009 2 -9.14
## 3 2009 3 -5.54
## 4 2009 4 -6.35
## 5 2009 5 -2.41
## 6 2009 6 -4.92
## 7 2009 7 -2.59
## 8 2009 8 -3.23
## 9 2009 9 -9.92
## 10 2009 10 -11.1
## # … with 1,086 more rows
### Challenge: Summarization & Calculations with dplyr
We can use `sum` to calculate the total rather than mean value for each Julian
Day. Using this information, do the following:
Calculate total prec for each Julian Day over the 3 years - name your
data_frame total.prec.
Create a plot of Daily Total Precipitation for 2009-2011.
Add a title and x and y axis labels.
If you use qplot to create your plot, use
colour=as.factor(total.prec$year) to color the data points by year.
Mutate - Add data_frame Columns to dplyr Output
We can use the mutate() function of dplyr to add additional columns of
information to a data_frame. For instance, we added the year column
independently at the very beginning of the tutorial. However, we can add the
year using a dplyr pipe that also summarizes our data. To do this, we would
use the syntax:
mutate(year2 = year(datetime))
year2 is the name of the column that will be added to the output dplyr
data_frame.
**Data Tip:** The `mutate` function is similar to
`transform()` in base R. However, `mutate()` allows us to create and
immediately use the variable (`year2`).
Save dplyr Output as data_frame
We can save output from a dplyr pipe as a new R object to use for quick
plotting.
harTemp.daily.09.11<-harMet15.09.11 %>%
mutate(year2 = year(datetime)) %>%
group_by(year2, jd) %>%
dplyr::summarize(mean_airt = mean(airt, na.rm = TRUE))
## `summarise()` has grouped output by 'year2'. You can override using the `.groups` argument.
head(harTemp.daily.09.11)
## # A tibble: 6 x 3
## # Groups: year2 [1]
## year2 jd mean_airt
## <dbl> <int> <dbl>
## 1 2009 1 -15.1
## 2 2009 2 -9.14
## 3 2009 3 -5.54
## 4 2009 4 -6.35
## 5 2009 5 -2.41
## 6 2009 6 -4.92
Add Date-Time To dplyr Output
In the challenge above, we created a plot of daily precipitation data using
qplot. However, the x-axis ranged from 0-366 (Julian Days for the year). It
would have been easier to create a meaningful plot across all three years if we
had a continuous date variable in our data_frame representing the year and
date for each summary value.
We can add the datetime column value to our data_frame by adding an
additional argument to the summarize() function. In this case, we will add the
first datetime value that R encounters when summarizing data by group as
follows:
datetime = first(datetime)
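A minimal runnable sketch of first() inside summarize(), on a tiny made-up data_frame (the column names mirror the Harvard Forest data; the values are invented):

```r
library(dplyr)

toy <- data.frame(
  jd   = c(1, 1, 2, 2),
  airt = c(-15.0, -15.2, -9.1, -9.2),
  datetime = as.POSIXct(c("2009-01-01 00:00", "2009-01-01 00:15",
                          "2009-01-02 00:00", "2009-01-02 00:15"),
                        tz = "America/New_York"))

daily <- toy %>%
  group_by(jd) %>%
  dplyr::summarize(mean_airt = mean(airt, na.rm = TRUE),
                   datetime = first(datetime))  # first timestamp per group
daily
```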
Our new summarize statement in our pipe will look like this:
dplyr::summarize(mean_airt = mean(airt, na.rm = TRUE), datetime = first(datetime))
### Challenge: Combine dplyr Skills
Plot daily total precipitation from 2009-2011 as we did in the previous
challenge. However this time, use the new syntax that you learned (mutate and
summarize to add a datetime column to your output data_frame).
Create a data_frame of the average monthly air temperature for 2009-2011.
Name the new data frame harTemp.monthly.09.11. Plot your output.
This tutorial explores how to deal with NoData values encountered in a time
series dataset in R. It also covers how to subset large files by date and
export the results to a .csv (text) file.
Learning Objectives
After completing this tutorial, you will be able to:
Subset data by date.
Search for NA or missing data values.
Describe different possibilities on how to deal with missing data.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded on
your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Cleaning Time Series Data
It is common to encounter large files containing more
data than we need for our analysis. It is also common to encounter NoData
values that we need to account for when analyzing our data.
In this tutorial, we'll learn how to both manage NoData values and also
subset and export a portion of an R object as a new .csv file.
In this tutorial, we will work with atmospheric data, containing air temperature,
precipitation, and photosynthetically active radiation (PAR) data - metrics
that are key drivers of phenology. Our study area is the
NEON Harvard Forest Field Site.
Import Timeseries Data
We will use the lubridate and ggplot2 packages. Let's load those first.
If you have not already done so, import the hf001-10-15min-m.csv file, which
contains atmospheric data for Harvard Forest. Convert the datetime column
to a POSIXct class as covered in the tutorial:
Dealing With Dates & Times in R - as.Date, POSIXct, POSIXlt.
# Load packages required for entire script
library(lubridate) # work with dates
library(ggplot2) # plotting
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# Load csv file containing 15 minute averaged atmospheric data
# for the NEON Harvard Forest Field Site
# stringsAsFactors=FALSE so data are imported as numbers and characters
harMet_15Min <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/hf001-10-15min-m.csv"),
stringsAsFactors = FALSE)
# convert to POSIX date time class - US Eastern Time Zone
harMet_15Min$datetime <- as.POSIXct(harMet_15Min$datetime,
format = "%Y-%m-%dT%H:%M",
tz = "America/New_York")
Subset by Date
Our .csv file contains nearly a decade's worth of data which makes for a large
file. The time period we are interested in for our study is:
Start Time: 1 January 2009
End Time: 31 Dec 2011
Let's subset the data to only contain these three years. We can use the
subset() function, with the syntax:
NewObject <- subset(ObjectToBeSubset, CriteriaForSubsetting)
We will set our criteria to be any datetime that:
Is greater than or equal to 1 Jan 2009 at 0:00
AND
Is less than or equal to 31 Dec 2011 at 23:59.
We also need to specify the time zone so R can properly handle daylight saving
time and leap years.
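The subset call looks like this (sketched here on a tiny made-up data frame so it runs standalone; in the lesson the same call would be applied to harMet_15Min to create harMet15.09.11):

```r
# toy stand-in for harMet_15Min: four timestamps around the study window
df <- data.frame(
  datetime = as.POSIXct(c("2008-12-31 23:45", "2009-01-01 00:00",
                          "2011-12-31 23:45", "2012-01-01 00:00"),
                        tz = "America/New_York"))

# keep rows from 1 Jan 2009 00:00 through 31 Dec 2011 23:59
df.09.11 <- subset(df,
    datetime >= as.POSIXct("2009-01-01 00:00", tz = "America/New_York") &
    datetime <= as.POSIXct("2011-12-31 23:59", tz = "America/New_York"))

nrow(df.09.11)   # 2 of the 4 toy rows fall inside the window
```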
It worked! The first entry is 1 January 2009 at 00:00 and the last entry is 31
December 2011 at 23:45.
Export data.frame to .CSV
We can export this subset in .csv format to use in other analyses or to
share with colleagues using write.csv.
**Data Tip:** Remember to give your output files
concise yet descriptive names so you can identify what they contain in the
future. By default, the `.csv` file will be written to your working directory.
# write harMet15 subset data to .csv
write.csv(harMet15.09.11,
file=paste0(wd,"Met_HARV_15min_2009_2011.csv"))
### Challenge: Subset & Plot Data
Create a plot of precipitation for the month of July 2010 in Harvard
Forest. Be sure to label x and y axes. Also be sure to give your plot a title.
Create a plot of dew point (dewp) for the year 2011 at Harvard Forest.
Bonus challenge: Complete this challenge using the available daily data
instead of the 15-minute data. What will need to change in your subsetting code?
Managing Missing Data: NoData values
Find NoData Values
If we are lucky when working with external data, the NoData value is clearly
specified in the metadata. NoData values can be stored in several different
ways:
NA / NaN: Sometimes this value is NA or NaN (not a number).
A Designated Numeric Value (e.g., -9999): Character strings such as NA cannot
always be stored alongside numeric values in some file formats. Sometimes
you'll encounter numeric placeholders for NoData values such as
-9999 (a value often used in the GIS / Remote Sensing and Micrometeorology
domains).
Blank Values: sometimes NoData values are left blank. Blanks are
particularly problematic because we can't be certain if a data value is
purposefully missing (not measured that day or a bad measurement) or if someone
unintentionally deleted it.
Because the actual value used to designate missing data can vary depending upon
what data we are working with, it is important to always check the metadata for
the file's associated NoData value. If the value is NA, we are in luck, R
will recognize and flag this value as NoData. If the value is numeric (e.g.,
-9999), then we might need to assign this value to NA.
**Data Tip:** `NA` values will be ignored when
performing calculations in R. However, a `NoData` value of `-9999` will be
recognized as an integer and processed accordingly. If you encounter a numeric
`NoData` value, be sure to assign it to `NA` in R:
`objectName[objectName==-9999] <- NA`
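A minimal illustration of why that reassignment matters, using made-up values:

```r
x <- c(2.1, -9999, 3.4)
mean(x)                  # the numeric placeholder badly skews the mean
x[x == -9999] <- NA      # recode the NoData value as NA
mean(x, na.rm = TRUE)    # 2.75 - the mean of the real measurements
```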
Excerpt from the metadata: airt: average air temperature. Average of daily averages. (unit: celsius / missing value: NA)
Check For NoData Values
We can quickly check for NoData values in our data using the is.na()
function. By asking for the sum() of is.na() we can see how many NA / missing
values we have.
# Check for NA values
sum(is.na(harMet15.09.11$datetime))
## [1] 0
sum(is.na(harMet15.09.11$airt))
## [1] 2
# view rows where the air temperature is NA
harMet15.09.11[is.na(harMet15.09.11$airt),]
## datetime jd airt f.airt rh f.rh dewp f.dewp prec
## 158360 2009-07-08 14:15:00 189 NA M NA M NA M 0
## 203173 2010-10-18 09:30:00 291 NA M NA M NA M 0
## f.prec slrr f.slrr parr f.parr netr f.netr bar f.bar wspd
## 158360 290 485 139 NA M 2.1
## 203173 NA M NA M NA M NA M NA
## f.wspd wres f.wres wdir f.wdir wdev f.wdev gspd f.gspd s10t
## 158360 1.8 86 29 5.2 20.7
## 203173 M NA M NA M NA M NA M 10.9
## f.s10t
## 158360
## 203173
The results above tell us there are NoData values in the airt column.
However, there are NoData values in other variables as well.
### Challenge: NoData Values
How many NoData values are in the precipitation (prec) and PAR (parr)
columns of our data?
Deal with NoData Values
When we encounter NoData values (blank, NaN, -9999, etc.) in our data we
need to decide how to deal with them. By default R treats NoData values
designated with a NA as a missing value rather than a zero. This is good, as a
value of zero (no rain today) is not the same as missing data (e.g. we didn't
measure the amount of rainfall today).
How we deal with NoData values will depend on:
the data type we are working with
the analysis we are conducting
the significance of the gap or missing value
Many functions in R contain an na.rm= option which allows you to tell R
to ignore NA values in your data when performing calculations.
To Gap Fill? Or Not?
Sometimes we might need to "gap fill" our data. This means we will interpolate
or estimate missing values often using statistical methods. Gap filling can be
complex and is beyond the scope of this tutorial. The take away from this
is simply that it is important to acknowledge missing values in your data and to
carefully consider how you wish to account for them during analysis.
For this tutorial, we are exploring the patterns of precipitation,
and temperature as they relate to green-up and brown-down of vegetation at
Harvard Forest. To view overall trends during these early exploration stages, it
is okay for us to leave out the NoData values in our plots.
**Data Tip:** If we wanted to perform more advanced
statistical analysis, we might consider gap filling as our next step. Many data
products from tower networks, such as FluxNet, include a higher-level,
gap-filled product that we can download.
More on Gap Filling
NoData Values Can Impact Calculations
It is important to consider NoData values when performing calculations on our
data. For example, R will not properly calculate certain functions if there
are NA values in the data, unless we explicitly tell it to ignore them.
# calculate mean of air temperature
mean(harMet15.09.11$airt)
## [1] NA
# are there NA values in our data?
sum(is.na(harMet15.09.11$airt))
## [1] 2
R will not return a value for the mean because there are NA values in the air
temperature column. Because there are only 2 missing values (out of 105,108) for
air temperature, we aren't that worried about a skewed 3-year mean. We can tell
R to ignore NoData values in the mean calculation using na.rm=
(NA remove).
# calculate mean of air temperature, ignore NA values
mean(harMet15.09.11$airt,
na.rm=TRUE)
## [1] 8.467904
We now see that the 3-year average air temperature is 8.5°C.
### Challenge: Import, Understand Metadata, and Clean a Data Set
We have been using the 15-minute data from the Harvard Forest. However, overall
we are interested in larger scale patterns of greening-up and browning-down.
Thus a daily summary is sufficient for us to see overall trends.
Import the Daily Meteorological data from the Harvard Forest (if you haven't
already done so in the
Intro to Time Series Data in R tutorial.)
Check the metadata to see what the column names are for the variables of
interest (precipitation, air temperature, PAR, day and time).
If needed, convert the data class of different columns.
Check for missing data and decide what to do with any that exist.
Subset out the data for the duration of our study: 2009-2011. Name the object
"harMetDaily.09.11".
Export the subset to a .csv file.
Create a plot of Daily Air Temperature for 2009-2011. Be sure to label the
x- and y-axes. Also give the plot a title!