There are myriad resources out there to learn programming in R. After linking to
a tutorial on how to install R and RStudio on your computer, we then outline a
few different paths to learn R basics depending on how you enjoy learning, and
finally we include a few resources for intermediate and advanced learning.
Setting Up your Computer
Start out by installing R and, we recommend, RStudio, on your computer. RStudio
is an Integrated Development Environment (IDE) for the R program. It
is optional, but recommended when working with R. Directions
for installing can be found within the tutorial Install Git, Bash Shell, R & RStudio.
You will need administrator permissions on your computer.
Pathways to Learning the Basics of R
In-person trainings
If you prefer to learn through in-person trainings, consider local workshops
from The Carpentries Software Carpentry or Data Carpentry (generally ~$25 for a
2-day workshop), courses offered by a local college or university (prices vary),
or organize your colleagues to meet regularly to learn R together (free!).
Online interactive courses
If you prefer to learn in a semi-structured online environment, there are a wide
variety of online courses for learning R including Data Camp, Coursera, edX, and
Lynda.com. Many of these options include free introductory lessons or trial
periods as well as paid courses. We do not have personal experience with
these courses and do not recommend or specifically promote any course.
In program interactive course
Swirl
is a guided introduction to R where you code along with the instructions in R. You
get direct feedback when you type a command incorrectly. To use this package,
once you have R or RStudio open and running, use the following commands to start
the first lesson.
install.packages("swirl")
library(swirl)
swirl()
Online tutorials
If you prefer a less structured online environment, these tutorial series may be
better suited for you.
Learn R with a focus on data analysis. Beyond the basics, it covers dplyr for
data aggregation & manipulation, ggplot2 for plotting, and touches on
interacting with an SQL database. Designed to be taught by an instructor but the
materials also work for independent learning online.
This comprehensive course contains an R section. While the overall focus is on
data science skills, learning R is a portion of it (note, this is an extensive
course).
RStudio links to many other learning opportunities. Start with the 'Beginners'
learning path.
Video tutorials
A blend of having an instructor and self-paced, video tutorials may also be of
interest. New stand-alone video tutorials are out each day, so we aren’t going
to recommend a specific series. Find what works for you by searching
“R Programming video tutorials” on YouTube.
Books
Books are still a great way to learn R (and other languages). Many books are
available at local libraries (university or community) or online, if you want to
try them out before buying. Below are a few of the many, many books that data
scientists working on the NEON project have found useful.
Michael Crawley’s The R Book
is a classic that takes you from beginning steps to analyses and modelling.
Grolemund and Wickham’s R for Data Science
focuses on using R in data science applications using Hadley Wickham’s
“tidyverse”. It does assume some basic familiarity with R. Bonus: it is available
online or in book format!
(If you are completely new, they recommend starting with
Hands-on Programming with R).
Beyond the Basics
There are many intermediate and advanced courses, lessons, and tutorials linked
in the above resources. For example, the Swirl package offers intermediate and
advanced courses on specific topics, as does RStudio's list. See courses here;
development is ongoing so new courses may be added.
However, once the basics are handled, you will find that much of your learning
will happen through solving individual problems you encounter. To solve these
problems, your favorite search engine is your friend. Paste the error (without
specifics to your file/data) into the search menu and find answers from those
who have had similar questions.
For more on working with NEON data in particular, be sure to check out the other
NEON data tutorials.
This tutorial provides the basics on how to set up Docker on one's local computer
and then connect to an eddy4R Docker container in order to use the eddy4R R package.
There are no specific skills needed for this tutorial; however, you will need to
know how to access the command line tool for your operating system
(basic instructions are given).
Learning Objectives
After completing this tutorial, you will be able to:
Access Docker on your local computer.
Access the eddy4R package in an RStudio Docker environment.
Things You’ll Need To Complete This Tutorial
You will need internet access and an up-to-date browser.
Sources
The directions on how to install Docker are heavily borrowed from the authors
of CyVerse's Container Camp's
Intro to Docker, and we thank them for providing the information.
The directions for how to access eddy4R come from
Metzger, S., D. Durden, C. Sturtevant, H. Luo, N. Pingintha-Durden, and T. Sachs (2017). eddy4R 0.2.0: a DevOps model for community-extensible processing and analysis of eddy-covariance data based on R, Git, Docker, and HDF5. Geoscientific Model Development 10:3189–3206. doi: 10.5194/gmd-10-3189-2017.
The eddy4R versions within the tutorial have been updated to the 1.0.0 release that accompanied the following manuscript:
Metzger, S., E. Ayres, D. Durden, C. Florian, R. Lee, C. Lunch, H. Luo, N. Pingintha-Durden, J.A. Roberti, M. SanClements, C. Sturtevant, K. Xu, and R.C. Zulueta, 2019: From NEON Field Sites to Data Portal: A Community Resource for Surface–Atmosphere Research Comes Online. Bull. Amer. Meteor. Soc., 100, 2305–2325, https://doi.org/10.1175/BAMS-D-17-0307.1.
In the tutorial below, we give the very barest of information to get Docker set
up for use with the NEON R package eddy4R. For more information on using Docker,
consider reading through the content from CyVerse's Container Camp's
Intro to Docker.
Install Docker
To work with the eddy4R–Docker image, you first need to sign up for an
account at DockerHub.
Once logged in, getting Docker up and running on your favorite operating system
(Mac/Windows/Linux) is very easy. The "getting started" guide on Docker has
detailed instructions for setting up Docker. Unless you plan on being a very
active Docker user and developer, we recommend starting with the stable channel
(not the edge channel), as you are likely to encounter fewer problems.
If you're using Docker for Windows make sure you have
shared your drive.
If you're using an older version of Windows or MacOS, you may need to use
Docker Machine
instead.
Test Docker installation
Once you are done installing Docker, test your Docker installation by running
the following command to make sure you are using version 1.13 or higher.
You will need an open shell window (Linux; Mac=Terminal) or the Docker
Quickstart Terminal (Windows).
docker --version
When run, you will see which version of Docker you are currently running.
Note: If you run just the word docker you should see a whole bunch of
lines showing the different options available with docker. Alternatively
you can test your installation by running the following:
docker run hello-world
Notice that the first line states that the image can't be found locally. The
next few lines are pulling the image, so if you were to run the hello-world
prompt again, it would already be local and you'd see the message start at
"Hello from Docker!".
If these steps work, you are ready to go on to access the
eddy4R-Docker image that houses the suite of eddy4R R
packages. If these steps have not worked, follow the installation
instructions a second time.
Accessing eddy4R
Download of the eddy4R–Docker image and subsequent creation of a local container
can be performed by two simple commands in an open shell (Linux; Mac = Terminal)
or the Docker Quickstart Terminal (Windows).
The first command docker login will prompt you for your DockerHub ID and password.
The second command docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0 will
download the specified eddy4R–Docker image and start a Docker container that
uses port 8787 to establish a graphical interface via web browser.
docker run: tells Docker to run a process in an isolated container
-d: the container will start in detached mode, meaning the container
runs in the background and prints the container ID
-p: publish the container to the specified port (which follows)
8787:8787: specify which ports to use. The default 8787:8787
is great if you are running locally. The number before the colon is the
port on your machine; the number after it is the port RStudio uses
inside the Docker container. You can change the first number if you want
to use a different port on your machine, for example if you are running
many containers or are on a shared network, but the second number must
remain 8787.
-e PASSWORD=YOURPASSWORD: define a password environment variable to use upon login to the RStudio instance. YOURPASSWORD can be anything you want.
stefanmet/eddy4r:1.0.0: the image (and version tag) you want to run.
Now try it.
docker login
docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0
This last command will run a specified release version (eddy4r:1.0.0) of the
Docker image. Alternatively you can use eddy4r:latest to get the most up-to-date
development image of eddy4r.
If you are using data stored on your local machine, rather than cloud hosting, a
physical file system location on the host computer (local/dir) can be mounted
to a file system location inside the Docker container (docker/dir). This is
achieved with the Docker run option -v local/dir:docker/dir.
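As a sketch, a full run command with a mounted directory might look like the following; the local path /home/user/neonData and the container path /home/rstudio/neonData are example placeholders — substitute directories that exist on your machine.

```shell
# Start the eddy4R container with a local data directory mounted inside it.
# Both paths below are illustrative placeholders.
docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD \
  -v /home/user/neonData:/home/rstudio/neonData \
  stefanmet/eddy4r:1.0.0
```

Files placed in the local directory will then be visible from within the RStudio session inside the container.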
Access RStudio session
Now you can access the interactive RStudio session for using eddy4r by using any
web browser and going to http://host-ip-address:8787 where host-ip-address
is the internal IP address of the Docker host. For example, if your host IP address
is 10.100.90.169 then you should type http://10.100.90.169:8787 into your browser.
To determine the IP address of your Docker host, follow the instructions below
for your operating system.
Windows
Depending on the version of Docker, the older Docker Toolbox versus the newer Docker Desktop for Windows, there are different ways to get the Docker machine IP address:
Docker Toolbox - Type docker-machine ip default into cmd.exe window. The output will be your local IP address for the docker machine.
Docker Desktop for Windows - Type ipconfig into cmd.exe window. The output will include either DockerNAT IPv4 address or vEthernet IPv4 address that docker uses to communicate to the internet, which in most cases will be 10.0.75.1.
Mac
Type ifconfig | grep "inet " | grep -v 127.0.0.1 into your Terminal window.
The output will be one or more local IP addresses for the docker machine. Use
the numbers after the first inet output.
Linux
On Linux, Docker runs directly on your machine, so you can use localhost (i.e., http://localhost:8787) as the address.
Once in the web browser, you can log into this RStudio instance with the username rstudio and the password you defined as YOURPASSWORD. You are now in
an RStudio user interface with eddy4R installed and ready to use.
This tutorial provides an overview of functions in the
neonUtilities package in R and the
neonutilities package in Python. These packages provide a
toolbox of basic functionality for working with NEON data.
This tutorial is primarily an index of functions and their inputs;
for more in-depth guidance in using these functions to work with NEON
data, see the
Download
and Explore tutorial. If you are already familiar with the
neonUtilities package, and need a quick reference guide to
function inputs and notation, see the
neonUtilities
cheat sheet.
Function index
The neonUtilities/neonutilities package
contains several functions (use the R and Python tabs to see the syntax
in each language):
R
stackByTable(): Takes zip files downloaded from the
Data Portal or
downloaded by zipsByProduct(), unzips them, and joins the
monthly files by data table to create a single file per table.
zipsByProduct(): A wrapper for the
NEON
API; downloads data based on data product and site criteria. Stores
downloaded data in a format that can then be joined by
stackByTable().
loadByProduct(): Combines the functionality of
zipsByProduct(), stackByTable(), and readTableNEON(): Downloads
the specified data, stacks the files, and loads the files to the R
environment.
byFileAOP(): A wrapper for the NEON API; downloads
remote sensing data based on data product, site, and year criteria.
Preserves the file structure of the original data.
byTileAOP(): Downloads remote sensing data for the
specified data product, subset to tiles that intersect a list of
coordinates.
readTableNEON(): Reads NEON data tables into R, using
the variables file to assign R classes to each column.
getCitation(): Get a BibTeX citation for a particular
data product and release.
Python
stack_by_table(): Takes zip files downloaded from the
Data Portal or
downloaded by zips_by_product(), unzips them, and joins the
monthly files by data table to create a single file per table.
zips_by_product(): A wrapper for the
NEON
API; downloads data based on data product and site criteria. Stores
downloaded data in a format that can then be joined by
stack_by_table().
load_by_product(): Combines the functionality of
zips_by_product(), stack_by_table(), and read_table_neon():
Downloads the specified data, stacks the files, and loads the files to
the R environment.
by_file_aop(): A wrapper for the NEON API; downloads
remote sensing data based on data product, site, and year criteria.
Preserves the file structure of the original data.
by_tile_aop(): Downloads remote sensing data for the
specified data product, subset to tiles that intersect a list of
coordinates.
read_table_neon(): Reads NEON data tables into R, using
the variables file to assign R classes to each column.
get_citation(): Get a BibTeX citation for a particular
data product and release.
If you are only interested in joining data
files downloaded from the NEON Data Portal, you will only need to use
stackByTable(). Follow the instructions in the first
section of the
Download
and Explore tutorial.
Install and load packages
First, install and load the package. The installation step only needs
to be run once, and then periodically to update when new package
versions are released. The load step needs to be run every time you run
your code.
R
# install neonUtilities - can skip if already installed
install.packages("neonUtilities")
# load neonUtilities
library(neonUtilities)
Python
# install neonutilities - can skip if already installed
# do this in the command line
pip install neonutilities
# load neonutilities in working environment
import neonutilities as nu
Download files and load to working environment
The most popular function in neonUtilities is
loadByProduct() (or load_by_product() in
neonutilities). This function downloads data from the NEON
API, merges the site-by-month files, and loads the resulting data tables
into the programming environment, classifying each variable’s data type
appropriately. It combines the actions of the
zipsByProduct(), stackByTable(), and
readTableNEON() functions, described below.
This is a popular choice because it ensures you’re always working
with the latest data, and it ends with ready-to-use tables. However, if
you use it in a workflow you run repeatedly, keep in mind it will
re-download the data every time.
loadByProduct() works on most observational (OS) and
sensor (IS) data, but not on surface-atmosphere exchange (SAE) data,
remote sensing (AOP) data, and some of the data tables in the microbial
data products. For functions that download AOP data, see the
byFileAOP() and byTileAOP() sections in this
tutorial. For functions that work with SAE data, see the
NEON
eddy flux data tutorial. SAE functions are not yet available in
Python.
The inputs to loadByProduct() control which data to
download and how to manage the processing:
R
dpID: The data product ID, e.g. DP1.00002.001
site: Defaults to “all”, meaning all sites with
available data; can be a vector of 4-letter NEON site codes, e.g.
c("HARV","CPER","ABBY").
startdate and enddate: Defaults to NA,
meaning all dates with available data; or a date in the form YYYY-MM,
e.g. 2017-06. Since NEON data are provided in month packages, finer
scale querying is not available. Both start and end date are
inclusive.
package: Either basic or expanded data package.
Expanded data packages generally include additional information about
data quality, such as chemical standards and quality flags. Not every
data product has an expanded package; if the expanded package is
requested but there isn’t one, the basic package will be
downloaded.
timeIndex: Defaults to “all”, to download all data; or
the number of minutes in the averaging interval. See example below; only
applicable to IS data.
release: Specify a particular data Release, e.g.
"RELEASE-2024". Defaults to the most recent Release. For
more details and guidance, see the
Release and Provisional tutorial.
include.provisional: T or F: Should provisional data be
downloaded? If release is not specified, set to T to
include provisional data in the download. Defaults to F.
savepath: the file path you want to download to;
defaults to the working directory.
check.size: T or F: should the function pause before
downloading data and warn you about the size of your download? Defaults
to T; if you are using this function within a script or batch process
you will want to set it to F.
token: Optional API token for faster downloads. See the
API token tutorial.
nCores: Number of cores to use for parallel processing.
Defaults to 1, i.e. no parallelization.
Python
dpid: the data product ID, e.g. DP1.00002.001
site: defaults to “all”, meaning all sites with
available data; can be a list of 4-letter NEON site codes, e.g.
["HARV","CPER","ABBY"].
startdate and enddate: defaults to NA,
meaning all dates with available data; or a date in the form YYYY-MM,
e.g. 2017-06. Since NEON data are provided in month packages, finer
scale querying is not available. Both start and end date are
inclusive.
package: either basic or expanded data package.
Expanded data packages generally include additional information about
data quality, such as chemical standards and quality flags. Not every
data product has an expanded package; if the expanded package is
requested but there isn’t one, the basic package will be
downloaded.
timeindex: defaults to “all”, to download all data; or
the number of minutes in the averaging interval. See example below; only
applicable to IS data.
release: Specify a particular data Release, e.g.
"RELEASE-2024". Defaults to the most recent Release. For
more details and guidance, see the
Release and Provisional tutorial.
include_provisional: True or False: Should provisional
data be downloaded? If release is not specified, set to True
to include provisional data in the download. Defaults to False.
savepath: the file path you want to download to;
defaults to the working directory.
check_size: True or False: should the function pause
before downloading data and warn you about the size of your download?
Defaults to True; if you are using this function within a script or
batch process you will want to set it to False.
token: Optional API token for faster downloads. See the
API token tutorial.
cloud_mode: Can be set to True if you are working in a
cloud environment; provides more efficient data transfer from NEON cloud
storage to other cloud environments.
progress: Set to False to omit the progress bar during
download and stacking.
The dpID (dpid) is the data product
identifier of the data you want to download. The DPID can be found on
the
Explore Data Products page. It will be in the form DP#.#####.###.
Demo data download and read
Let’s get triple-aspirated air temperature data (DP1.00003.001) from
Moab and Onaqui (MOAB and ONAQ), from May–August 2018, and name the data
object triptemp:
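A call matching that description might look like the following sketch; the argument values come directly from the text above, and check.size is included here only to illustrate the download-size prompt.

```r
# load the package (skip if already loaded)
library(neonUtilities)

# download triple-aspirated air temperature (DP1.00003.001) from
# MOAB and ONAQ, May-August 2018, and load it to the environment
triptemp <- loadByProduct(dpID="DP1.00003.001",
                          site=c("MOAB","ONAQ"),
                          startdate="2018-05", enddate="2018-08",
                          check.size=T)
```

Answer y at the prompt to proceed with the download, or set check.size=F to skip the prompt in scripted workflows.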
The object returned by loadByProduct() is a named list
of data tables, or a dictionary of data tables in Python. To work with
each of them, select them from the list.
If you prefer to extract each table from the list and work with it as
an independent object, you can use list2env() in R:
list2env(triptemp, .GlobalEnv)
or globals().update() in Python:
globals().update(triptemp)
For more details about the contents of the data tables and metadata
tables, check out the
Download
and Explore tutorial.
Join data files: stackByTable()
The function stackByTable() joins the month-by-site
files from a data download. The output will yield data grouped into new
files by table name. For example, the single aspirated air temperature
data product contains 1 minute and 30 minute interval data. The output
from this function is one .csv with 1 minute data and one .csv with 30
minute data.
Depending on your file size this function may run for a while. For
example, in testing for this tutorial, 124 MB of temperature data took
about 4 minutes to stack. A progress bar will display while the stacking
is in progress.
Download the Data
To stack data from the Portal, first download the data of interest
from the NEON
Data Portal. To stack data downloaded from the API, see the
zipsByProduct() section below.
Your data will download from the Portal in a single zipped file.
The stacking function will only work on zipped Comma Separated Value
(.csv) files and not the NEON data stored in other formats (HDF5,
etc).
Run stackByTable()
The example data below are single-aspirated air temperature.
To run the stackByTable() function, input the file path
to the downloaded and zipped file.
R
# Modify the file path to the file location on your computer
stackByTable(filepath="~/neon/data/NEON_temp-air-single.zip")
Python
# Modify the file path to the file location on your computer
nu.stack_by_table(filepath="/neon/data/NEON_temp-air-single.zip")
In the same directory as the zipped file, you should now have an
unzipped directory of the same name. When you open this you will see a
new directory called stackedFiles. This directory
contains one or more .csv files (depends on the data product you are
working with) with all the data from the months & sites you
downloaded. There will also be a single copy of the associated
variables, validation, and sensor_positions files, if applicable
(validation files are only available for observational data products,
and sensor position files are only available for instrument data
products).
These .csv files are now ready for use with the program of your
choice.
To read the data tables, we recommend using
readTableNEON(), which will assign each column to the
appropriate data type, based on the metadata in the variables file. This
ensures time stamps and missing data are interpreted correctly.
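As a sketch, assuming the stacked single-aspirated air temperature download from above (the file paths and table names here are illustrative placeholders — check the contents of your own stackedFiles directory):

```r
library(neonUtilities)

# read a stacked data table, assigning column classes from the
# accompanying variables file
temp30 <- readTableNEON(
  dataFile="~/neon/data/NEON_temp-air-single/stackedFiles/SAAT_30min.csv",
  varFile="~/neon/data/NEON_temp-air-single/stackedFiles/variables_00002.csv")
```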
stackByTable() also has two optional inputs:
savepath: allows you to specify the file path where you want the stacked
files to go, overriding the default. Set to "envt" to load the files to
the working environment.
saveUnzippedFiles: allows you to keep the unzipped, unstacked files from
an intermediate stage of the process; by default they are discarded.
The function zipsByProduct() is a wrapper for the NEON
API; it downloads zip files for the data product specified and stores
them in a format that can then be passed on to
stackByTable().
Input options for zipsByProduct() are the same as those
for loadByProduct() described above.
Here, we’ll download single-aspirated air temperature (DP1.00002.001)
data from Wind River Experimental Forest (WREF) for April and May of
2019.
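A sketch of that download, with the parameters described above; the savepath value is a placeholder — point it wherever you want the zip files stored.

```r
library(neonUtilities)

# download single-aspirated air temperature (DP1.00002.001) from
# WREF for April-May 2019, to be stacked afterward by stackByTable()
zipsByProduct(dpID="DP1.00002.001", site="WREF",
              startdate="2019-04", enddate="2019-05",
              savepath="~/neon/data", check.size=T)
```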
For many sensor data products, download sizes can get very large, and
stackByTable() takes a long time. The 1-minute or 2-minute
files are much larger than the longer averaging intervals, so if you
don’t need high-frequency data, the timeIndex input option
lets you choose which averaging interval to download.
This option is only applicable to sensor (IS) data, since OS data are
not averaged.
Download by averaging interval
Download only the 30-minute data for single-aspirated air temperature
at WREF:
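A sketch of that call, adding timeIndex to the download from the previous section (savepath is again a placeholder):

```r
library(neonUtilities)

# download only the 30-minute averaging interval of single-aspirated
# air temperature at WREF, April-May 2019
zipsByProduct(dpID="DP1.00002.001", site="WREF",
              startdate="2019-04", enddate="2019-05",
              timeIndex=30, savepath="~/neon/data")
```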
The 30-minute files can be stacked and loaded as usual.
Download remote sensing files
Remote sensing data files can be very large, and NEON remote sensing
(AOP) data are stored in a directory structure that makes them easier to
navigate. byFileAOP() downloads AOP files from the API
while preserving their directory structure. This provides a convenient
way to access AOP data programmatically.
Be aware that downloads from byFileAOP() can take a VERY
long time, depending on the data you request and your connection speed.
You may need to run the function and then leave your machine on and
downloading for an extended period of time.
Here the example download is the Ecosystem Structure data product at
Hop Brook (HOPB) in 2017; we use this as the example because it’s a
relatively small year-site-product combination.
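A sketch of that download, assuming DP3.30015.001 is the data product ID for Ecosystem Structure (verify the ID on the Explore Data Products page before running):

```r
library(neonUtilities)

# download Ecosystem Structure files for Hop Brook (HOPB), 2017,
# preserving the original AOP directory structure
byFileAOP(dpID="DP3.30015.001", site="HOPB",
          year="2017", check.size=T)
```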
The files should now be downloaded to a new folder in your working
directory.
Download remote sensing files for specific
coordinates
Often when using remote sensing data, we only want data covering a
certain area - usually the area where we have coordinated ground
sampling. byTileAOP() queries for data tiles containing a
specified list of coordinates. It only works for the tiled, AKA
mosaicked, versions of the remote sensing data, i.e. the ones with data
product IDs beginning with “DP3”.
Here, we’ll download tiles of vegetation indices data (DP3.30026.001)
corresponding to select observational sampling plots. For more
information about accessing NEON spatial data, see the
API tutorial and the in-development
geoNEON package.
For now, assume we’ve used the API to look up the plot centroids of
plots SOAP_009 and SOAP_011 at the Soaproot Saddle site. You can also
look these up in the Spatial Data folder of the
document library. The coordinates of the two plots in UTMs are
298755,4101405 and 299296,4101461. These are 40x40m plots, so in looking
for tiles that contain the plots, we want to include a 20m buffer. The
“buffer” is actually a square: it’s a delta applied equally to both the
easting and northing coordinates.
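A sketch of the byTileAOP() call using the coordinates above; the year is an assumed example value — set it to the year of data you need.

```r
library(neonUtilities)

# download vegetation indices tiles (DP3.30026.001) intersecting the
# SOAP_009 and SOAP_011 plot centroids, with a 20 m buffer around each
byTileAOP(dpID="DP3.30026.001", site="SOAP", year="2018",
          easting=c(298755, 299296),
          northing=c(4101405, 4101461),
          buffer=20, check.size=T)
```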
This tutorial discusses ways to plot plant phenology (discrete time
series) and single-aspirated temperature (continuous time series) together.
It uses data frames created in the first two parts of this series,
Work with NEON OS & IS Data - Plant Phenology & Temperature.
If you have not completed these tutorials, please download the dataset below.
Objectives
After completing this tutorial, you will be able to:
plot multiple figures together with grid.arrange()
plot only a subset of dates
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
This tutorial is designed to have you download data directly from the NEON
portal API using the neonUtilities package. However, you can also directly
download this data, prepackaged, from FigShare. This data set includes all the
files needed for the Work with NEON OS & IS Data - Plant Phenology & Temperature
tutorial series. The data are in the format you would receive if downloading them
using the zipsByProduct() function in the neonUtilities package.
To start, we need to set up our R environment. If you're continuing from the
previous tutorial in this series, you'll only need to load the new packages.
# Install needed package (only uncomment & run if not already installed)
#install.packages("dplyr")
#install.packages("ggplot2")
#install.packages("scales")
# Load required libraries
library(ggplot2)
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(gridExtra)
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
library(scales)
options(stringsAsFactors=F) #keep strings as character type not factors
# set working directory to ensure R can find the file we wish to import and where
# we want to save our files. Be sure to move the download into your working directory!
wd <- "~/Documents/data/" # Change this to match your local environment
setwd(wd)
If you don't already have the R objects temp_day and phe_1sp_2018 loaded,
you'll need to load and format those data. If you do, you can skip this code.
# Read in data -> if in series this is unnecessary
temp_day <- read.csv(paste0(wd,'NEON-pheno-temp-timeseries/NEONsaat_daily_SCBI_2018.csv'))
phe_1sp_2018 <- read.csv(paste0(wd,'NEON-pheno-temp-timeseries/NEONpheno_LITU_Leaves_SCBI_2018.csv'))
# Convert dates
temp_day$Date <- as.Date(temp_day$Date)
# use dateStat - the date the phenophase status was recorded
phe_1sp_2018$dateStat <- as.Date(phe_1sp_2018$dateStat)
Separate Plots, Same Panel
In this dataset, we have phenology and temperature data from the Smithsonian
Conservation Biology Institute (SCBI) NEON field site. There are a variety of ways
we may want to look at this data, including aggregated at the site level, by
a single plot, or viewing all plots at the same time but in separate plots. In
the Work With NEON's Plant Phenology Data and the
Work with NEON's Single-Aspirated Air Temperature Data tutorials, we created
separate plots of the number of individuals who had leaves at different times
of the year and the temperature in 2018.
However, it is easier to compare the two datasets if we plot them next to each other. The grid.arrange()
function from the gridExtra package can help us do this.
# first, create one plot
phenoPlot <- ggplot(phe_1sp_2018, aes(dateStat, countYes)) +
geom_bar(stat="identity", na.rm = TRUE) +
ggtitle("Total Individuals in Leaf") +
xlab("") + ylab("Number of Individuals")
# create second plot of interest
tempPlot_dayMax <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point() +
ggtitle("Daily Max Air Temperature") +
xlab("Date") + ylab("Temp (C)")
# Then arrange the plots - this can be done with >2 plots as well.
grid.arrange(phenoPlot, tempPlot_dayMax)
Now, we can see both plots in the same window. But, hmmm... the x-axis on both
plots is kinda wonky. We want the same spacing in the scale across the year (e.g.,
July in one should line up with July in the other), plus we want the dates to
display in the same format (e.g., 2016-07 vs. Jul vs. Jul 2018).
Format Dates in Axis Labels
The date format parameter can be adjusted with scale_x_date(). Let's format the x-axis
ticks so they read "month" (%b) in both graphs. We will use the syntax:
scale_x_date(labels=date_format("%b"))
Rather than re-coding the entire plot, we can add the scale_x_date element
to the plot object phenoPlot we just created.
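A sketch of that approach, adding the date scale to both existing plot objects and re-arranging them (this assumes the scales and gridExtra packages are loaded, as above):

```r
# add month-formatted tick labels to the existing plot objects
phenoPlot <- phenoPlot +
  scale_x_date(labels = date_format("%b"), date_breaks = "1 month")

tempPlot_dayMax <- tempPlot_dayMax +
  scale_x_date(labels = date_format("%b"), date_breaks = "1 month")

# re-draw the two panels with the updated axes
grid.arrange(phenoPlot, tempPlot_dayMax)
```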
**Data Tip:**
You can type ?strptime into the R
console to find a list of date format conversion specifications (e.g. %b = month).
Type ?scale_x_date to see a list of parameters that allow you to format dates
on the x-axis.
If you are working with a date & time
class (e.g. POSIXct), you can use scale_x_datetime instead of scale_x_date.
But this only solves one of the problems: we still have a different range on the
x-axis, which makes it harder to see trends.
Align data sets with different start dates
Now let's work to align the values on the x-axis. We can do this in two ways:
1) by setting the x-axis to have the same date range, or 2) by filtering the
dataset itself to only include the overlapping data. Depending on what you are
trying to demonstrate, and whether you're doing additional analyses and want
only the overlapping data, you may prefer one over the other. Let's try both.
Set range of x-axis
First, we can set the x-axis range for both plots by adding the limits
parameter to the scale_x_date() function.
# first, let's recreate the full plot and add in the axis formatting and limits
phenoPlot_setX <- ggplot(phe_1sp_2018, aes(dateStat, countYes)) +
geom_bar(stat="identity", na.rm = TRUE) +
ggtitle("Total Individuals in Leaf") +
xlab("") + ylab("Number of Individuals") +
scale_x_date(breaks = date_breaks("1 month"),
labels = date_format("%b"),
limits = as.Date(c('2018-01-01','2018-12-31')))
# create second plot of interest
tempPlot_dayMax_setX <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point() +
ggtitle("Daily Max Air Temperature") +
xlab("Date") + ylab("Temp (C)") +
scale_x_date(date_breaks = "1 month",
labels=date_format("%b"),
limits = as.Date(c('2018-01-01','2018-12-31')))
# Plot
grid.arrange(phenoPlot_setX, tempPlot_dayMax_setX)
Now we can really see the pattern over the full year. This emphasizes the point
that during much of the late fall, winter, and early spring none of the trees
have leaves on them (or that data were not collected - this plot would not
distinguish between the two).
Subset one data set to match other
Alternatively, we can simply filter the dataset with the larger date range so
that we only plot the data from the overlapping dates.
# filter to only having overlapping data
temp_day_filt <- filter(temp_day, Date >= min(phe_1sp_2018$dateStat) &
Date <= max(phe_1sp_2018$dateStat))
# Check
range(phe_1sp_2018$dateStat)
## [1] "2018-04-13" "2018-11-20"
range(temp_day_filt$Date)
## [1] "2018-04-13" "2018-11-20"
#plot again
tempPlot_dayMaxFiltered <- ggplot(temp_day_filt, aes(Date, dayMax)) +
geom_point() +
scale_x_date(breaks = date_breaks("months"), labels = date_format("%b")) +
ggtitle("Daily Max Air Temperature") +
xlab("Date") + ylab("Temp (C)")
grid.arrange(phenoPlot, tempPlot_dayMaxFiltered)
With this plot, we focus on the area of overlap in the plotted data (though
this does cut out times when data were collected but are not plotted).
Same plot with two Y-axes
What about layering these plots and having two y-axes (left and right) with
different scales?
Some argue that you should not do this, as it can distort what is actually going
on with the data, and the author of the ggplot2 package is one of these
individuals. Therefore, you cannot use ggplot() to create a single plot with two
independent y-axis scales; a secondary axis can only be a one-to-one
transformation of the primary axis. You can read his own discussion of the topic
on this StackOverflow post.
However, individuals have found workarounds for these plots. The code below
is provided as a demonstration of this capability. Note that by showing this code
here, we don't necessarily endorse having plots with two y-axes.
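As one illustration of such a workaround, ggplot2's sec.axis argument displays a second axis that is a fixed transformation of the first. The sketch below overlays the phenology counts on the temperature plot; the rescaling coefficient (coeff) is an assumption chosen only to bring the two series into a similar range, not a value from the tutorial.

```r
# demonstration only: in ggplot2, a secondary y-axis must be a
# transformation of the primary axis
coeff <- 3  # assumed rescaling factor; tune for your data ranges

ggplot() +
  geom_point(data = temp_day, aes(Date, dayMax), color = "red") +
  geom_bar(data = phe_1sp_2018, aes(dateStat, countYes / coeff),
           stat = "identity", alpha = 0.4, na.rm = TRUE) +
  scale_y_continuous(
    name = "Temp (C)",
    sec.axis = sec_axis(~ . * coeff, name = "Number of Individuals")
  )
```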
In this tutorial, we explore the NEON single-aspirated air temperature data.
We then discuss how to interpret the variables, how to work with date-time and
date formats, and finally how to plot the data.
This tutorial is part of a series on how to work with both discrete and continuous
time series data with NEON plant phenology and temperature data products.
Objectives
After completing this activity, you will be able to:
work with "stacked" NEON Single-Aspirated Air Temperature data.
correctly format date-time data.
use dplyr functions to filter data.
plot time series data in scatter plots using the ggplot() function.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
Background Information About NEON Air Temperature Data
Air temperature is continuously monitored by NEON using two methods. At
terrestrial sites, temperature at the top of the tower is derived from a
triple-redundant aspirated air temperature sensor. This is provided as NEON data
product DP1.00003.001. Single Aspirated Air Temperature sensors (SAAT) are
deployed to develop temperature profiles at multiple levels on the tower at NEON
terrestrial sites and on the meteorological stations at NEON aquatic sites. This
is provided as NEON data product DP1.00002.001.
When designing a research project using this data, consult the
Data Product Details Page
for more detailed documentation.
Single-aspirated Air Temperature
Air temperature profiles are ascertained by deploying SAATs at various heights
on NEON tower infrastructure. Air temperature at aquatic sites is measured
using a single SAAT at a standard height of 3m above ground level. Air temperature
for this data product is provided as one- and thirty-minute averages of 1 Hz
observations. Temperature observations are made using platinum resistance
thermometers, which are housed in a fan aspirated shield to reduce radiative
heating. The temperature is measured in Ohms and subsequently converted to degrees
Celsius during data processing. Details on the conversion can be found in the
associated Algorithm Theoretic Basis Document (ATBD; see Product Details page
linked above).
Available Data Tables
The SAAT data product contains two data tables for each site and month selected,
consisting of the 1-minute and 30-minute averaging intervals. In addition, there
are several metadata files that provide additional useful information.
readme with information on the data product and the download
variables file that defines the terms, data types, and units
EML file with machine readable metadata in standardized Ecological Metadata Language
Access NEON Data
There are several ways to access NEON data: directly from the NEON data portal,
through a data partner (select data products only), by writing code to pull data
directly from the NEON API, or, as we'll do here, by using the neonUtilities
package, a wrapper for the API that makes working with the data easier.
Downloading from the Data Portal
If you prefer to download data from the data portal, please
review the Getting started and Stack the downloaded data sections of the
Download and Explore NEON Data tutorial.
This will get you to the point where you can download data from sites or dates
of interest and resume this tutorial.
Downloading Data Using neonUtilities
First, we need to set up our environment with the packages needed for this tutorial.
# Install needed package (only uncomment & run if not already installed)
#install.packages("neonUtilities")
#install.packages("ggplot2")
#install.packages("dplyr")
#install.packages("tidyr")
# Load required libraries
library(neonUtilities) # for accessing NEON data
library(ggplot2) # for plotting
library(dplyr) # for data munging
library(tidyr) # for data munging
# set working directory
# this step is optional, only needed if you plan to save the
# data files at the end of the tutorial
wd <- "~/data" # enter your working directory here
setwd(wd)
This tutorial is part of series working with discrete plant phenology data and
(nearly) continuous temperature data. Our overall "research" question is to see if
there is any correlation between plant phenology and temperature.
Therefore, we will want to work with data that
align with the plant phenology data that we worked with in the first tutorial.
If you are only interested in working with the temperature data, you do not need
to complete the previous tutorial.
Our data of interest will be the temperature data from 2018 from NEON's
Smithsonian Conservation Biology Institute (SCBI) field site located in Virginia
near the northern terminus of the Blue Ridge Mountains.
NEON single-aspirated air temperature data are available at two averaging
intervals, 1 minute and 30 minutes. Which data you want to work with depends on
your research questions. Here, we're going to download and work with only the
30-minute interval data, as we're primarily interested in longer-term (daily,
weekly, annual) patterns.
This will download 7.7 MB of data. Here check.size is set to false (F) to
improve the flow of the script, but it is always a good idea to view the size
with true (T) before downloading a new dataset.
# download data of interest - Single Aspirated Air Temperature
saat <- loadByProduct(dpID="DP1.00002.001", site="SCBI",
startdate="2018-01", enddate="2018-12",
package="basic", timeIndex="30",
check.size = F)
Explore Temperature Data
Now that you have the data, let's take a look at the structure and understand
what's in it. The data object (saat) comes in as a large list of several items.
View(saat)
So what exactly are these files and why would you want to use them?
data file(s): There will always be one or more dataframes that include the
primary data of the data product you downloaded. Since we downloaded only the 30
minute averaged data we only have one data table SAAT_30min.
readme_xxxxx: The readme file, with the corresponding 5 digits from the data
product number, provides you with important information relevant to the data
product and the specific instance of downloading the data.
sensor_positions_xxxxx: This table contains the spatial coordinates
of each sensor, relative to a reference location.
variables_xxxxx: This table contains all the variables found in the associated
data table(s). This includes full definitions, units, and rounding.
issueLog_xxxxx: This table contains records of any known issues with the
data product, such as sensor malfunctions.
scienceReviewFlags_xxxxx: This table may or may not be present. It contains
descriptions of adverse events that led to manual flagging of the data, and is
usually more detailed than the issue log. It only contains records relevant to
the sites and dates of data downloaded.
Since we want to work with the individual files, let's make the elements of the
list into independent objects.
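One common way to do this with neonUtilities output (a sketch; it assumes you are comfortable adding every list element to your global environment) is list2env():

```r
# unlist each element of saat into its own object in the global
# environment, e.g. SAAT_30min, variables_00002, readme_00002, ...
list2env(saat, .GlobalEnv)
```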
The sensor data undergo a variety of automated quality assurance and quality control
checks. You can read about them in detail in the Quality Flags and Quality Metrics ATBD, in the Documentation section of the product details page.
The expanded data package includes all of these quality flags, allowing you to
decide whether a failed check matters for your research and whether you should
therefore remove those data from your analysis. Here, we're using the
basic data package, which only includes the final quality flag (finalQF),
which is aggregated from the full set of quality flags.
A pass of the check is 0, while a fail is 1. Let's see what percentage
of the data we downloaded passed the quality checks.
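The calculation itself isn't shown here; a hedged sketch, using the finalQF field of the SAAT_30min table, might be:

```r
# proportion of 30-minute records that failed the final quality check
sum(SAAT_30min$finalQF == 1) / nrow(SAAT_30min)
```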
What should we do with the 23% of the data that are flagged?
This may depend on why it is flagged and what questions you are asking,
and the expanded data package would be useful for determining this.
For now, for demonstration purposes, we'll keep the flagged data.
What about null (NA) data?
sum(is.na(SAAT_30min$tempSingleMean))/nrow(SAAT_30min)
## [1] 0.2239269
mean(SAAT_30min$tempSingleMean)
## [1] NA
22% of the mean temperature values are NA. Note that this is not
additive with the flagged data! Empty data records are flagged, so this
indicates nearly all of the flagged data in our download are empty records.
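If you want to confirm this overlap yourself, one quick check (a sketch using the same fields as above) is to count the records that are both flagged and empty:

```r
# how many records are flagged AND missing a mean temperature?
sum(SAAT_30min$finalQF == 1 & is.na(SAAT_30min$tempSingleMean))
# compare to the total number of flagged records
sum(SAAT_30min$finalQF == 1)
```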
Why was there no output from the calculation of mean temperature?
The R programming language, by default, won't calculate a mean (and many other
summary statistics) in data that contain NA values. We could override this
using the input parameter na.rm=TRUE in the mean() function, or just
remove the empty values from our analysis.
# create new dataframe without NAs
SAAT_30min_noNA <- SAAT_30min %>%
drop_na(tempSingleMean) # tidyr function
# alternate base R
# SAAT_30min_noNA <- SAAT_30min[!is.na(SAAT_30min$tempSingleMean),]
# did it work?
sum(is.na(SAAT_30min_noNA$tempSingleMean))
## [1] 0
Scatterplots with ggplot
We can use ggplot to create scatter plots. But which variable should we plot?
We have several options:
tempSingleMean: the mean temperature for the interval
tempSingleMinimum: the minimum temperature during the interval
tempSingleMaximum: the maximum temperature for the interval
Depending on exactly what question you are asking you may prefer to use one over
the other. For many applications, the mean temperature of the 1- or 30-minute
interval will provide the best representation of the data.
Let's plot it. (This is a plot of a large amount of data, so it can take 1-2
minutes to render. Skipping it is fine if it takes too much of your computer's
memory; it is not essential for completing the next steps.)
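The plotting code itself isn't shown here; a minimal sketch of this scatter plot, using the NA-free 30-minute data created above, could look like:

```r
# plot mean 30-minute air temperature across the full download
# (large dataset - this can take a minute or two to render)
tempPlot <- ggplot(SAAT_30min_noNA, aes(startDateTime, tempSingleMean)) +
  geom_point(size = 0.3) +
  ggtitle("Single-Aspirated Air Temperature") +
  xlab("Date") + ylab("Temp (C)")

tempPlot
```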
Something odd seems to have happened in late April/May 2018. Since it is unlikely
Virginia experienced -50C during this time, these are probably erroneous sensor
readings, which is why we should remove the data flagged by those quality
checks.
Right now we are also looking at all the data points in the dataset. However, we may
want to view or aggregate the data differently:
aggregated data: min, mean, or max over some duration
the number of days since a freezing temperature
or some other segregation of the data.
Given that in the previous tutorial,
Work With NEON's Plant Phenology Data,
we were working with phenology data collected on a daily scale let's aggregate
to that level.
To make this plot better, let's do two things:
1) remove the flagged data, and
2) aggregate to a daily value.
Subset to remove quality flagged data
We already removed the empty records. Now we'll
subset the data to remove the remaining flagged data.
# subset and add C to name for "clean"
SAAT_30minC <- filter(SAAT_30min_noNA, finalQF==0)
# Do any quality flags remain?
sum(SAAT_30minC$finalQF==1)
## [1] 0
That looks better! But we're still working with the 30-minute data.
Aggregate Data by Day
We can use the dplyr package functions to aggregate the data. However, we have to
choose which data we want to aggregate. Again, you might want daily
minimum temps, mean temperature or maximum temps depending on your question.
In the context of phenology, minimum temperatures might be very important if you
are interested in a species that is very frost susceptible. Any days with a
minimum temperature below 0C could dramatically change the phenophase. For other
species or climates, maximum thresholds may be very important. Or you might be most
interested in the daily mean.
And note that you can combine different input values with different aggregation
functions - for example, you could calculate the minimum of the half-hourly
average temperature, or the average of the half-hourly maximum temperature.
For this tutorial, let's use maximum daily temperature, i.e. the maximum of the
tempSingleMax values for the day.
# convert to a date class - easier to work with
SAAT_30minC$Date <- as.Date(SAAT_30minC$startDateTime)
# maximum of the tempSingleMaximum values observed each day
# note: mutate() must come before distinct() so max() sees every
# row for the day, not just the one row distinct() would keep
temp_day <- SAAT_30minC %>%
  group_by(Date) %>%
  mutate(dayMax=max(tempSingleMaximum)) %>%
  distinct(Date, .keep_all=T)
Now we can plot the cleaned up daily temperature.
# plot Air Temperature Data across 2018 using daily data
tempPlot_dayMax <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point(size=0.5) +
ggtitle("Daily Max Air Temperature") +
xlab("") + ylab("Temp (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
theme(text = element_text(size=18))
tempPlot_dayMax
Thought questions:
What do we gain by this visualization?
What do we lose relative to the 30 minute intervals?
ggplot - Subset by Time
Sometimes we want to scale the x- or y-axis to a particular time subset without
subsetting the entire data frame. To do this, we can define start and end
times and then pass these limits to scale_x_date() as
follows:
scale_x_date(limits=start.end)
Let's plot just the first three months of the year.
# define start and end dates for the subset as Date-class R objects
startTime <- as.Date("2018-01-01")
endTime <- as.Date("2018-03-31")
# create a start and end time R object
start.end <- c(startTime,endTime)
str(start.end)
## Date[1:2], format: "2018-01-01" "2018-03-31"
# View data for first 3 months only
# And we'll add some color for a change.
tempPlot_dayMax3m <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point(color="blue", size=0.5) +
ggtitle("Air Temperature\n Jan - March") +
xlab("Date") + ylab("Air Temperature (C)")+
(scale_x_date(limits=start.end,
date_breaks="1 week",
date_labels="%b %d"))
tempPlot_dayMax3m
## Warning: Removed 268 rows containing missing values (`geom_point()`).
Now that we have temperature data matching our phenology data from the previous
tutorial, we can save it to our computer to use in future analyses (or the
next tutorial). This step is optional if you are continuing directly to the next
tutorial, as you already have the data in R.
# Write .csv - this step is optional
# This will write to the working directory we set at the start of the tutorial
write.csv(temp_day , file="NEONsaat_daily_SCBI_2018.csv", row.names=F)
Many organisms, including plants, show patterns of change across seasons -
the different stages of this observable change are called phenophases. In this
tutorial we explore how to work with NEON plant phenophase data.
Objectives
After completing this activity, you will be able to:
work with NEON Plant Phenology Observation data.
use dplyr functions to filter data.
plot time series data in a bar plot using the ggplot() function.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
This tutorial is designed to have you download data directly from the NEON
portal API using the neonUtilities package. However, you can also directly
download this data, prepackaged, from FigShare. This data set includes all the
files needed for the Work with NEON OS & IS Data - Plant Phenology & Temperature
tutorial series. The data are in the format you would receive if downloading them
using the zipsByProduct() function in the neonUtilities package.
Plants change throughout the year, and these observable stages are phenophases.
But why do they change when they do?
Explore Phenology Data
The following sections provide a brief overview of the NEON plant phenology
observation data. When designing a research project using this data, you
need to consult the
documents associated with this data product and not rely solely on this summary.
The following description of the NEON Plant Phenology Observation data is modified
from the data product user guide.
NEON Plant Phenology Observation Data
NEON collects plant phenology data and provides it as NEON data product
DP1.10055.001.
The plant phenology observations data product provides in-situ observations of
the phenological status and intensity of tagged plants (or patches) during
discrete observation events.
Sampling occurs at all terrestrial field sites at site- and season-specific
intervals. During Phase I (dominant species) sampling (pre-2021), three species
with 30 individuals each are sampled. In 2021, Phase II (community) sampling
will begin, in which up to 20 species, each with five or more individuals, will
be sampled.
Status-based Monitoring
NEON employs status-based monitoring, in which the phenological condition of an
individual is reported any time that individual is observed. At every observation
bout, records are generated for every phenophase that is occurring and for every
phenophase not occurring. With this approach, events that may occur multiple
times during a single year (such as leaf emergence in Mediterranean climates, or
flowering in many desert species) can be captured. Continuous reporting of
phenophase status enables quantification of the duration of phenophases, rather
than just their date of onset, while also enabling explicit quantification of
the uncertainty in phenophase transition dates that is introduced by monitoring
in discrete temporal bouts.
Specific products derived from this sampling include the observed phenophase
status (whether or not a phenophase is occurring) and the intensity of
phenophases for individuals in which phenophase status = ‘yes’. Phenophases
reported are derived from the USA National Phenology Network (USA-NPN) categories.
The number of phenophases observed varies by growth form and ranges from 1
phenophase (cactus) to 7 phenophases (semi-evergreen broadleaf).
In this tutorial we will focus only on the state of the phenophase, not the
phenophase intensity data.
Phenology Transects
Plant phenology observation occurs at all terrestrial NEON sites along an 800
meter square loop transect (primary) and within a 200 m x 200 m plot located
within view of a canopy-level, tower-mounted phenology camera.
Diagram of a phenology transect layout, with meter layout marked.
Point-level geolocations are recorded at eight reference points along the
perimeter, plot-level geolocation at the plot centroid (star).
Source: National Ecological Observatory Network (NEON)
Timing of Observations
At each site, there are:
~50 observation bouts per year.
no more than 100 sampling points per phenology transect.
no more than 9 sampling points per phenocam plot.
1 measurement bout per year to collect annual size and disease status
measurements from each sampling point.
Available Data Tables
In the downloaded data packet, data are available in three main files
phe_statusintensity: Plant phenophase status and intensity data
phe_perindividual: Geolocation and taxonomic identification for phenology plants
phe_perindividualperyear: recorded once a year, essentially the "metadata"
about the plant: DBH, height, etc.
There are other files in each download including a readme with information on
the data product and the download; a variables file that defines the
term descriptions, data types, and units; a validation file with data entry
validation and parsing rules; and an XML with machine readable metadata.
Stack NEON Data
NEON data are delivered in a site and year-month format. When you download data,
you will get a single zipped file containing a directory for each month and site that you've
requested data for. Dealing with these separate tables from even one or two sites
over a 12-month period can be a bit overwhelming. Luckily, NEON provides an R
package, neonUtilities, that takes the unzipped download and joins the data
files. The teaching data downloaded with this tutorial are already stacked. If you
are working with other NEON data, please go through the tutorial to stack the data
in
R or in Python
and then return to this tutorial.
Work with NEON Data
When we do this for phenology data we get three files, one for each data table,
with all the data from your site and date range of interest.
First, we need to set up our R environment.
# install needed package (only uncomment & run if not already installed)
#install.packages("neonUtilities")
#install.packages("dplyr")
#install.packages("ggplot2")
# load needed packages
library(neonUtilities)
library(dplyr)
library(ggplot2)
options(stringsAsFactors=F) #keep strings as character type not factors
# set working directory to ensure R can find the file we wish to import and where
# we want to save our files. Be sure to move the download into your working directory!
wd <- "~/Git/data/" # Change this to match your local environment
setwd(wd)
Let's start by loading our data of interest. For this series, we'll work with
data from the NEON Domain 02 sites:
Blandy Farm (BLAN)
Smithsonian Conservation Biology Institute (SCBI)
Smithsonian Environmental Research Center (SERC)
And we'll use data from January 2017 to December 2019. This downloads over 9MB
of data. If this is too large, use a smaller date range. If you opt to do this,
your figures and some output may look different later in the tutorial.
With this information, we can download our data using the neonUtilities package.
If you are not using a NEON token to download your data, remove the
token = Sys.getenv("NEON_TOKEN") line of code (learn more about NEON API tokens
in the
Using an API Token when Accessing NEON Data with neonUtilities tutorial).
If you are using the data downloaded at the start of the tutorial, use the
commented out code in the second half of this code chunk.
## Two options for accessing data - programmatic or from the example dataset
# Read data from data portal
phe <- loadByProduct(dpID = "DP1.10055.001", site=c("BLAN","SCBI","SERC"),
startdate = "2017-01", enddate="2019-12",
token = Sys.getenv("NEON_TOKEN"),
check.size = F)
## API token was not recognized. Public rate limit applied.
## Finding available files
##
# if you aren't sure you can handle the data file size use check.size = T.
# save dataframes from the downloaded list
ind <- phe$phe_perindividual #individual information
status <- phe$phe_statusintensity #status & intensity info
##If choosing to use example dataset downloaded from this tutorial:
# Stack multiple files within the downloaded phenology data
#stackByTable("NEON-pheno-temp-timeseries_v2/filesToStack10055", folder = T)
# read in data - readTableNEON uses the variables file to assign the correct
# data type for each variable
#ind <- readTableNEON('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/phe_perindividual.csv', 'NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')
#status <- readTableNEON('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/phe_statusintensity.csv', 'NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')
Let's explore the data, starting with what the ind dataframe looks like.
# What are the fieldnames in this dataset?
names(ind)
## [1] "uid" "namedLocation"
## [3] "domainID" "siteID"
## [5] "plotID" "decimalLatitude"
## [7] "decimalLongitude" "geodeticDatum"
## [9] "coordinateUncertainty" "elevation"
## [11] "elevationUncertainty" "subtypeSpecification"
## [13] "transectMeter" "directionFromTransect"
## [15] "ninetyDegreeDistance" "sampleLatitude"
## [17] "sampleLongitude" "sampleGeodeticDatum"
## [19] "sampleCoordinateUncertainty" "sampleElevation"
## [21] "sampleElevationUncertainty" "date"
## [23] "editedDate" "individualID"
## [25] "taxonID" "scientificName"
## [27] "identificationQualifier" "taxonRank"
## [29] "nativeStatusCode" "growthForm"
## [31] "vstTag" "samplingProtocolVersion"
## [33] "measuredBy" "identifiedBy"
## [35] "recordedBy" "remarks"
## [37] "dataQF" "publicationDate"
## [39] "release"
# Unsure of what some of the variables are? Look at the variables table!
View(phe$variables_10055)
# if using the pre-downloaded data, you need to read in the variables file
# or open and look at it on your desktop
#var <- read.csv('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')
#View(var)
# how many rows are in the data?
nrow(ind)
## [1] 433
# look at the first six rows of data.
#head(ind) #this is a good function to use but looks messy so not rendering it
# look at the structure of the dataframe.
str(ind)
## 'data.frame': 433 obs. of 39 variables:
## $ uid : chr "76bf37d9-c834-43fc-a430-83d87e4b9289" "cf0239bb-2953-44a8-8fd2-051539be5727" "833e5f41-d5cb-4550-ba60-e6f000a2b1b6" "6c2e348d-d19e-4543-9d22-0527819ee964" ...
## $ namedLocation : chr "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" ...
## $ domainID : chr "D02" "D02" "D02" "D02" ...
## $ siteID : chr "BLAN" "BLAN" "BLAN" "BLAN" ...
## $ plotID : chr "BLAN_061" "BLAN_061" "BLAN_061" "BLAN_061" ...
## $ decimalLatitude : num 39.1 39.1 39.1 39.1 39.1 ...
## $ decimalLongitude : num -78.1 -78.1 -78.1 -78.1 -78.1 ...
## $ geodeticDatum : chr NA NA NA NA ...
## $ coordinateUncertainty : num NA NA NA NA NA NA NA NA NA NA ...
## $ elevation : num 183 183 183 183 183 183 183 183 183 183 ...
## $ elevationUncertainty : num NA NA NA NA NA NA NA NA NA NA ...
## $ subtypeSpecification : chr "primary" "primary" "primary" "primary" ...
## $ transectMeter : num 491 464 537 15 753 506 527 305 627 501 ...
## $ directionFromTransect : chr "Left" "Right" "Left" "Left" ...
## $ ninetyDegreeDistance : num 0.5 4 2 3 2 1 2 3 2 3 ...
## $ sampleLatitude : num NA NA NA NA NA NA NA NA NA NA ...
## $ sampleLongitude : num NA NA NA NA NA NA NA NA NA NA ...
## $ sampleGeodeticDatum : chr "WGS84" "WGS84" "WGS84" "WGS84" ...
## $ sampleCoordinateUncertainty: num NA NA NA NA NA NA NA NA NA NA ...
## $ sampleElevation : num NA NA NA NA NA NA NA NA NA NA ...
## $ sampleElevationUncertainty : num NA NA NA NA NA NA NA NA NA NA ...
## $ date : POSIXct, format: "2016-04-20" ...
## $ editedDate : POSIXct, format: "2016-05-09" ...
## $ individualID : chr "NEON.PLA.D02.BLAN.06290" "NEON.PLA.D02.BLAN.06501" "NEON.PLA.D02.BLAN.06204" "NEON.PLA.D02.BLAN.06223" ...
## $ taxonID : chr "RHDA" "SOAL6" "RHDA" "LOMA6" ...
## $ scientificName : chr "Rhamnus davurica Pall." "Solidago altissima L." "Rhamnus davurica Pall." "Lonicera maackii (Rupr.) Herder" ...
## $ identificationQualifier : chr NA NA NA NA ...
## $ taxonRank : chr "species" "species" "species" "species" ...
## $ nativeStatusCode : chr "I" "N" "I" "I" ...
## $ growthForm : chr "Deciduous broadleaf" "Forb" "Deciduous broadleaf" "Deciduous broadleaf" ...
## $ vstTag : chr NA NA NA NA ...
## $ samplingProtocolVersion : chr NA "NEON.DOC.014040vJ" "NEON.DOC.014040vJ" "NEON.DOC.014040vJ" ...
## $ measuredBy : chr "jcoloso@neoninc.org" "jward@battelleecology.org" "alandes@field-ops.org" "alandes@field-ops.org" ...
## $ identifiedBy : chr "shackley@neoninc.org" "llemmon@field-ops.org" "llemmon@field-ops.org" "llemmon@field-ops.org" ...
## $ recordedBy : chr "shackley@neoninc.org" NA NA NA ...
## $ remarks : chr "Nearly dead shaded out" "no entry" "no entry" "no entry" ...
## $ dataQF : chr NA NA NA NA ...
## $ publicationDate : chr "20201218T103411Z" "20201218T103411Z" "20201218T103411Z" "20201218T103411Z" ...
## $ release : chr "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" ...
Notice that the neonUtilities package reads the data type from the variables
file and automatically converts the data to the correct data type in R.
(Note that if you first opened your data file in Excel, you might see 06/14/2014 as
the format instead of 2014-06-14. Excel can do some ~~weird~~ interesting things
to dates.)
Phenology status
Now let's look at the status data.
# What variables are included in this dataset?
names(status)
## [1] "uid" "namedLocation"
## [3] "domainID" "siteID"
## [5] "plotID" "date"
## [7] "editedDate" "dayOfYear"
## [9] "individualID" "phenophaseName"
## [11] "phenophaseStatus" "phenophaseIntensityDefinition"
## [13] "phenophaseIntensity" "samplingProtocolVersion"
## [15] "measuredBy" "recordedBy"
## [17] "remarks" "dataQF"
## [19] "publicationDate" "release"
nrow(status)
## [1] 219357
#head(status) #this is a good function to use but looks messy so not rendering it
str(status)
## 'data.frame': 219357 obs. of 20 variables:
## $ uid : chr "b69ada55-41d1-41c7-9031-149c54de51f9" "9be6f7ad-4422-40ac-ba7f-e32e0184782d" "58e7aeaf-163c-4ea2-ad75-db79a580f2f8" "efe7ca02-d09e-4964-b35d-aebdac8f3efb" ...
## $ namedLocation : chr "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" ...
## $ domainID : chr "D02" "D02" "D02" "D02" ...
## $ siteID : chr "BLAN" "BLAN" "BLAN" "BLAN" ...
## $ plotID : chr "BLAN_061" "BLAN_061" "BLAN_061" "BLAN_061" ...
## $ date : POSIXct, format: "2017-02-24" ...
## $ editedDate : POSIXct, format: "2017-03-31" ...
## $ dayOfYear : num 55 55 55 55 55 55 55 55 55 55 ...
## $ individualID : chr "NEON.PLA.D02.BLAN.06229" "NEON.PLA.D02.BLAN.06226" "NEON.PLA.D02.BLAN.06222" "NEON.PLA.D02.BLAN.06223" ...
## $ phenophaseName : chr "Leaves" "Leaves" "Leaves" "Leaves" ...
## $ phenophaseStatus : chr "no" "no" "no" "no" ...
## $ phenophaseIntensityDefinition: chr NA NA NA NA ...
## $ phenophaseIntensity : chr NA NA NA NA ...
## $ samplingProtocolVersion : chr NA NA NA NA ...
## $ measuredBy : chr "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" ...
## $ recordedBy : chr "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" ...
## $ remarks : chr NA NA NA NA ...
## $ dataQF : chr "legacyData" "legacyData" "legacyData" "legacyData" ...
## $ publicationDate : chr "20201217T203824Z" "20201217T203824Z" "20201217T203824Z" "20201217T203824Z" ...
## $ release : chr "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" ...
# date range
min(status$date)
## [1] "2017-02-24 GMT"
max(status$date)
## [1] "2019-12-12 GMT"
Clean up the Data
remove duplicates (full rows)
convert to date format
retain only the most recent editedDate in the perIndividual and status tables.
Remove Duplicates
The individual table (ind) file is included in each site-by-month-year file. As
a result, when all the tables are stacked there are many duplicates.
Let's remove any duplicates that exist.
# drop UID as that will be unique for duplicate records
ind_noUID <- select(ind, -(uid))
status_noUID <- select(status, -(uid))
# remove duplicates
## expect many
ind_noD <- distinct(ind_noUID)
nrow(ind_noD)
## [1] 433
status_noD <- distinct(status_noUID)
nrow(status_noD)
## [1] 216837
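As a quick check, we can count how many duplicate rows were dropped from each
table. This is a minimal sketch that assumes the ind_noUID, ind_noD,
status_noUID, and status_noD objects created above.

```r
# number of duplicate rows dropped from each table
nrow(ind_noUID) - nrow(ind_noD)
nrow(status_noUID) - nrow(status_noD)
```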
Variable Overlap between Tables
From the initial inspection of the data we can see there is overlap in variable
names between the two tables.
Let's see what they are.
# where is there an intersection of names
intersect(names(status_noD), names(ind_noD))
## [1] "namedLocation" "domainID"
## [3] "siteID" "plotID"
## [5] "date" "editedDate"
## [7] "individualID" "samplingProtocolVersion"
## [9] "measuredBy" "recordedBy"
## [11] "remarks" "dataQF"
## [13] "publicationDate" "release"
There are several fields that overlap between the datasets. Some of these are
expected to be the same and will be what we join on.
However, some of them will have different values in each table. We want to keep
those values distinct and not join on them. Therefore, we will rename these
fields before joining:
date
editedDate
measuredBy
recordedBy
samplingProtocolVersion
remarks
dataQF
publicationDate
Now we want to rename the variables that would have duplicate names. We can
rename all the variables in the status object to have "Stat" at the end of the
variable name.
# in Status table rename like columns
status_noD <- rename(status_noD, dateStat=date,
editedDateStat=editedDate, measuredByStat=measuredBy,
recordedByStat=recordedBy,
samplingProtocolVersionStat=samplingProtocolVersion,
remarksStat=remarks, dataQFStat=dataQF,
publicationDateStat=publicationDate)
Filter to last editedDate
The individual (ind) table contains a record each time the location or
taxonomy data of an individual was updated. Therefore there are many rows for
some individuals. We only want the row with the latest editedDate for each
individual in ind.
# retain only the max of the date for each individualID
ind_last <- ind_noD %>%
group_by(individualID) %>%
filter(editedDate==max(editedDate))
# duplicate rows can share the same editedDate; retain only one record per
# individualID and editedDate
ind_lastnoD <- ind_last %>%
group_by(editedDate, individualID) %>%
filter(row_number()==1)
Join Dataframes
Now we can join the two data frames on all the variables with the same name.
We use a left_join() from the dplyr package because we want to match all the
rows from the "left" (first) dataframe to any rows that also occur in the "right"
(second) dataframe.
# Create a new dataframe "phe_ind" with all the data from status and some from ind_lastnoD
phe_ind <- left_join(status_noD, ind_lastnoD)
## Joining, by = c("namedLocation", "domainID", "siteID", "plotID", "individualID", "release")
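As a sanity check on the join (a sketch assuming the phe_ind and status_noD
objects above), a left join should return exactly one row per row of the left
table; a larger count would indicate duplicate matches in ind_lastnoD.

```r
# a left join keeps every row of the left (status) table;
# extra rows would indicate duplicate matches in ind_lastnoD
nrow(phe_ind) == nrow(status_noD)
```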
Now that we have clean datasets we can begin looking into our particular data to
address our research question: do plants show patterns of changes in phenophase
across season?
Patterns in Phenophase
From our larger dataset (several sites, species, phenophases), let's create a
dataframe with only the data from a single site, species, and phenophase and
call it phe_1sp.
Select Site(s) of Interest
To do this, we'll first select our site of interest. Note how we set this up
with an object that holds our site of interest. This will allow us to more
easily change which site or sites we use if we adapt our code later.
# set site of interest
siteOfInterest <- "SCBI"
# use filter to select only the site of interest
## %in% allows you to supply a vector if you want more than one site;
## == would also work here, but only for a single site
phe_1st <- filter(phe_ind, siteID %in% siteOfInterest)
Select Species of Interest
Now we may want to view only a single species or a set of species. Let's first
look at the species that are present in our data. We could do this just by looking
at the taxonID field, which gives the four-letter USDA plant code for each
species. But if we don't know all the plant codes, we can get a bit fancier and
view both the code and the scientific name together.
# see which species are present - taxon ID only
unique(phe_1st$taxonID)
## [1] "JUNI" "MIVI" "LITU"
# or see which species are present with taxon ID + species name
unique(paste(phe_1st$taxonID, phe_1st$scientificName, sep=' - '))
## [1] "JUNI - Juglans nigra L."
## [2] "MIVI - Microstegium vimineum (Trin.) A. Camus"
## [3] "LITU - Liriodendron tulipifera L."
For now, let's choose only the flowering tree Liriodendron tulipifera (LITU).
By writing it this way, we could also add a list of species to the speciesOfInterest
object to select for multiple species.
speciesOfInterest <- "LITU"
#subset to just "LITU"
# here just use == but could also use %in%
phe_1sp <- filter(phe_1st, taxonID==speciesOfInterest)
# check that it worked
unique(phe_1sp$taxonID)
## [1] "LITU"
Select Phenophase of Interest
And, perhaps a single phenophase.
# see which phenophases are present
unique(phe_1sp$phenophaseName)
## [1] "Open flowers" "Breaking leaf buds"
## [3] "Colored leaves" "Increasing leaf size"
## [5] "Falling leaves" "Leaves"
phenophaseOfInterest <- "Leaves"
#subset to just the phenophase of interest
phe_1sp <- filter(phe_1sp, phenophaseName %in% phenophaseOfInterest)
# check that it worked
unique(phe_1sp$phenophaseName)
## [1] "Leaves"
Select only Primary Plots
NEON plant phenology observations are collected along two types of plots.
Primary plots: an 800 meter square phenology loop transect
Phenocam plots: a 200 m x 200 m plot located within view of a canopy level,
tower-mounted, phenology camera
In the data, these plots are differentiated by the subtypeSpecification.
Depending on your question you may want to use only one or both of these plot types.
For this activity, we're going to only look at the primary plots.
**Data Tip:** How do I learn this on my own? Read
the Data Product User Guide and use the variables files included with the data
download to find the corresponding variable names.
# what plots are present?
unique(phe_1sp$subtypeSpecification)
## [1] "primary" "phenocam"
# filter
phe_1spPrimary <- filter(phe_1sp, subtypeSpecification == 'primary')
# check that it worked
unique(phe_1spPrimary$subtypeSpecification)
## [1] "primary"
Total in Phenophase of Interest
The phenophaseStatus is recorded as "yes" or "no": whether the individual is in
that phenophase. The phenophaseIntensity values are categories for how much of
the individual is in that state. For now, we will stick with phenophaseStatus.
We can now calculate the total number of individuals with that state. We use
n_distinct(individualID) to count the individuals (and not the records) in case
there are duplicate records for an individual.
But later on we'll also want to calculate the percent of the observed individuals
in the "leaves" status, so we also add a step here to retain the
sample size for calculating percentages later.
Here we use pipes %>% from the dplyr package to "pass" objects on to the next
function.
# Calculate sample size for later use
sampSize <- phe_1spPrimary %>%
group_by(dateStat) %>%
summarise(numInd= n_distinct(individualID))
# Total in status by day for distinct individuals
inStat <- phe_1spPrimary %>%
group_by(dateStat, phenophaseStatus) %>%
summarise(countYes=n_distinct(individualID))
## `summarise()` has grouped output by 'dateStat'. You can override using the `.groups` argument.
inStat <- full_join(sampSize, inStat, by="dateStat")
# Retain only Yes
inStat_T <- filter(inStat, phenophaseStatus %in% "yes")
# check that it worked
unique(inStat_T$phenophaseStatus)
## [1] "yes"
Now that we have the data we can plot it.
Plot with ggplot
The ggplot() function within the ggplot2 package gives us considerable control
over plot appearance. Three basic elements are needed for ggplot() to work:
The data_frame: containing the variables that we wish to plot,
aes (aesthetics): which denotes which variables will map to the x-, y-
(and other) axes,
geom_XXXX (geometry): which defines the data's graphical representation
(e.g. points (geom_point), bars (geom_bar), lines (geom_line), etc).
The syntax begins with the base statement that includes the data_frame
(inStat_T) and associated x (dateStat) and y (countYes) variables to be
plotted.
To successfully plot, the last piece that is needed is the geometry type.
To create a bar plot, we set the geom element to geom_bar().
The default stat for geom_bar() counts rows (stat="count"). However, in this
case we have already calculated the values we want to plot. We can use
geom_bar(stat="identity") to force ggplot to plot those actual values.
# plot number of individuals in leaf
phenoPlot <- ggplot(inStat_T, aes(dateStat, countYes)) +
geom_bar(stat="identity", na.rm = TRUE)
phenoPlot
# Now let's make the plot look a bit more presentable
phenoPlot <- ggplot(inStat_T, aes(dateStat, countYes)) +
geom_bar(stat="identity", na.rm = TRUE) +
ggtitle("Total Individuals in Leaf") +
xlab("Date") + ylab("Number of Individuals") +
theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
theme(text = element_text(size=18))
phenoPlot
We could also convert this to a percentage and plot that.
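A sketch of converting the counts to a percentage, using the numInd sample size
we retained earlier (percentYes is a column name introduced here for
illustration):

```r
# percent of sampled individuals in the "yes" state on each date
inStat_T <- inStat_T %>%
  mutate(percentYes = countYes / numInd * 100)

phenoPlotPct <- ggplot(inStat_T, aes(dateStat, percentYes)) +
  geom_bar(stat="identity", na.rm = TRUE) +
  ggtitle("Percent of Individuals in Leaf") +
  xlab("Date") + ylab("% of Individuals")
phenoPlotPct
```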
The plots demonstrate the nice expected pattern of increasing leaf-out, peak,
and drop-off.
Drivers of Phenology
Now that we see there are differences and shifts in phenophases, what
are the drivers of those phenophases?
The NEON phenology measurements track sensitive and easily observed indicators
of biotic responses to climate variability by monitoring the timing and duration
of phenological stages in plant communities. Plant phenology is affected by forces
such as temperature, timing and duration of pest infestations and disease outbreaks,
water fluxes, nutrient budgets, carbon dynamics, and food availability and has
feedbacks to trophic interactions, carbon sequestration, community composition
and ecosystem function. (quoted from
Plant Phenology Observations user guide.)
Filter by Date
In the next part of this series, we will be exploring temperature as a driver of
phenology. Temperature data are quite large (NEON provides them in 1-minute or
30-minute intervals), so let's trim our phenology data down to a single year so
that we aren't working with such a large dataset.
Let's filter to just 2018 data.
# use filter to select only the date of interest
phe_1sp_2018 <- filter(inStat_T, dateStat >= "2018-01-01" & dateStat <= "2018-12-31")
# did it work?
range(phe_1sp_2018$dateStat)
## [1] "2018-04-13 GMT" "2018-11-20 GMT"
How does that look?
# Now let's make the plot look a bit more presentable
phenoPlot18 <- ggplot(phe_1sp_2018, aes(dateStat, countYes)) +
geom_bar(stat="identity", na.rm = TRUE) +
ggtitle("Total Individuals in Leaf") +
xlab("Date") + ylab("Number of Individuals") +
theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
theme(text = element_text(size=18))
phenoPlot18
Now that we've filtered down to just the 2018 data from SCBI for LITU in leaf,
we may want to save that subsetted data for another use. To do that you can write
the data frame to a .csv file.
You do not need to follow this step if you are continuing on to the next tutorials
in this series as you already have the data frame in your environment. Of course
if you close R and then come back to it, you will need to re-load this data and
instructions for that are provided in the relevant tutorials.
# Write .csv - this step is optional
# This will write to your current working directory, change as desired.
write.csv(phe_1sp_2018, file="NEONpheno_LITU_Leaves_SCBI_2018.csv", row.names=F)
#If you are using the downloaded example data, this code will write it to the
# pheno data directory. Note - this file is already a part of the download.
#write.csv( phe_1sp_2018 , file="NEON-pheno-temp-timeseries_v2/NEONpheno_LITU_Leaves_SCBI_2018.csv", row.names=F)
This is a tutorial in pulling data from the NEON API or Application
Programming Interface. The tutorial uses R and the R package httr, but the core
information about the API is applicable to other languages and approaches.
Objectives
After completing this activity, you will be able to:
Construct API calls to query the NEON API.
Access and understand data and metadata available via the NEON API.
Things You’ll Need To Complete This Tutorial
To complete this tutorial you will need the most current version of R and,
preferably, RStudio loaded on your computer.
If you are unfamiliar with the concept of an API, think of an API as a
'middle person' that provides a communication path for a software application
to obtain information from a digital data source. APIs are becoming a very
common means of sharing digital information. Many of the apps that you use on
your computer or mobile device to produce maps, charts, reports, and other
useful forms of information pull data from multiple sources using APIs. In
the ecological and environmental sciences, many researchers use APIs to
programmatically pull data into their analyses. (Quoted from the NEON Observatory
Blog story:
API and data availability viewer now live on the NEON data portal.)
What is accessible via the NEON API?
The NEON API includes endpoints for NEON data and metadata, including
spatial data, taxonomic data, and samples (see Endpoints below). This
tutorial explores these sources of information using a specific data
product as a guide. The principles and rule sets described below can
be applied to other data products and metadata.
Specifics are appended to the base URL in order to get the data or metadata
you're looking for, but all calls to an API will include that base URL. For the
NEON API, this is http://data.neonscience.org/api/v0 --
not clickable, because the base URL by itself will take you nowhere!
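An API call is assembled by joining the base URL, an endpoint, and a target with
slashes. A minimal sketch, using the /products endpoint and the data product ID
used later in this tutorial:

```r
# assemble an example API call from its pieces
base.url <- "http://data.neonscience.org/api/v0"
endpoint <- "products"          # which endpoint to query
target   <- "DP1.10098.001"     # the data product of interest
paste(base.url, endpoint, target, sep="/")
```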
Your first thought is probably to use the /data endpoint. And we'll get
there. But notice above that the API call for the /data endpoint includes
the site and month of data to download. You don't want to have to guess sites
and months at random - first, you need to see which sites and months have
available data for the product you're interested in. That can be done either
through the /sites or the /products endpoint; here we'll use
/products.
Note: Checking for data availability can sometimes be skipped for the
streaming sensor data products. In general, they are available continuously,
and you could theoretically query a site and month of interest and expect
there to be data by default. However, there can be interruptions to sensor
data, in particular at aquatic sites, so checking availability first is the
most reliable approach.
Use the products endpoint to query for Woody vegetation data. The target is
the data product identifier, noted above, DP1.10098.001:
# Load the necessary libraries
library(httr)
library(jsonlite)
# Request data using the GET function & the API call
req <- GET("http://data.neonscience.org/api/v0/products/DP1.10098.001")
req
## Response [https://data.neonscience.org/api/v0/products/DP1.10098.001]
## Date: 2021-06-16 01:03
## Status: 200
## Content-Type: application/json;charset=UTF-8
## Size: 70.1 kB
The object returned from GET() has many layers of information. Entering the
name of the object gives you some basic information about what you accessed.
Status: 200 indicates this was a successful query; the status field can be
a useful place to look if something goes wrong. These are HTTP status codes;
you can look them up to find out what a given value indicates.
The Content-Type parameter tells us we've accessed a json file. The easiest
way to translate this to something more manageable in R is to use the
fromJSON() function in the jsonlite package. It will convert the json into
a nested list, flattening the nesting where possible.
# Make the data readable by jsonlite
req.text <- content(req, as="text")
# Flatten json into a nested list
avail <- jsonlite::fromJSON(req.text,
simplifyDataFrame=T,
flatten=T)
A lot of the content here is basic information about the data product.
You can see all of it by running the line print(avail), but
this will result in a very long printout in your console. Instead, try viewing
list items individually. Here, we highlight a couple of interesting examples:
# View description of data product
avail$data$productDescription
## [1] "Structure measurements, including height, crown diameter, and stem diameter, as well as mapped position of individual woody plants"
# View data product abstract
avail$data$productAbstract
## [1] "This data product contains the quality-controlled, native sampling resolution data from in-situ measurements of live and standing dead woody individuals and shrub groups, from all terrestrial NEON sites with qualifying woody vegetation. The exact measurements collected per individual depend on growth form, and these measurements are focused on enabling biomass and productivity estimation, estimation of shrub volume and biomass, and calibration / validation of multiple NEON airborne remote-sensing data products. In general, comparatively large individuals that are visible to remote-sensing instruments are mapped, tagged and measured, and other smaller individuals are tagged and measured but not mapped. Smaller individuals may be subsampled according to a nested subplot approach in order to standardize the per plot sampling effort. Structure and mapping data are reported per individual per plot; sampling metadata, such as per growth form sampling area, are reported per plot. For additional details, see the user guide, protocols, and science design listed in the Documentation section in this data product's details webpage.\n\nLatency:\nThe expected time from data and/or sample collection in the field to data publication is as follows, for each of the data tables (in days) in the downloaded data package. See the Data Product User Guide for more information.\n\nvst_apparentindividual: 90\n\nvst_mappingandtagging: 90\n\nvst_perplotperyear: 300\n\nvst_shrubgroup: 90"
You may notice that some of this information is also accessible on the NEON
data portal. The portal uses the same data sources as the API, and in many
cases the portal is using the API on the back end, and simply adding a more
user-friendly display to the data.
We want to find which sites and months have available data. That is in the
siteCodes section. Let's look at what information is presented for each
site:
# Look at the first list element for siteCode
avail$data$siteCodes$siteCode[[1]]
## [1] "ABBY"
# And at the first list element for availableMonths
avail$data$siteCodes$availableMonths[[1]]
## [1] "2015-07" "2015-08" "2016-08" "2016-09" "2016-10" "2016-11" "2017-03" "2017-04" "2017-07" "2017-08"
## [11] "2017-09" "2018-07" "2018-08" "2018-09" "2018-10" "2018-11" "2019-07" "2019-09" "2019-10" "2019-11"
Here we can see the list of months with data for the site ABBY, which is
the Abby Road forest in Washington state.
The section $data$siteCodes$availableDataUrls provides the exact API
calls we need in order to query the data for each available site and month.
# Get complete list of available data URLs
wood.urls <- unlist(avail$data$siteCodes$availableDataUrls)
# Total number of URLs
length(wood.urls)
## [1] 535
# Show first 10 URLs available
wood.urls[1:10]
## [1] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2015-07"
## [2] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2015-08"
## [3] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2016-08"
## [4] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2016-09"
## [5] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2016-10"
## [6] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2016-11"
## [7] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-03"
## [8] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-04"
## [9] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-07"
## [10] "https://data.neonscience.org/api/v0/data/DP1.10098.001/ABBY/2017-08"
These URLs are the API calls we can use to find out what files are available
for each month where there are data. They are pre-constructed calls to the
/data endpoint of the NEON API.
Let's look at the woody plant data from the Rocky Mountain National Park
(RMNP) site from October 2019. We can do this by using the GET() function
on the relevant URL, which we can extract using the grep() function.
Note that if you want data from more than one site/month you need to iterate
this code; GET() fails if you give it more than one URL at a time.
# Get available data for RMNP Oct 2019
woody <- GET(wood.urls[grep("RMNP/2019-10", wood.urls)])
woody.files <- jsonlite::fromJSON(content(woody, as="text"))
# See what files are available for this site and month
woody.files$data$files$name
## [1] "NEON.D10.RMNP.DP1.10098.001.EML.20191010-20191017.20210123T023002Z.xml"
## [2] "NEON.D10.RMNP.DP1.10098.001.2019-10.basic.20210114T173951Z.zip"
## [3] "NEON.D10.RMNP.DP1.10098.001.vst_apparentindividual.2019-10.basic.20210114T173951Z.csv"
## [4] "NEON.D10.RMNP.DP1.10098.001.variables.20210114T173951Z.csv"
## [5] "NEON.D10.RMNP.DP0.10098.001.categoricalCodes.20210114T173951Z.csv"
## [6] "NEON.D10.RMNP.DP1.10098.001.readme.20210123T023002Z.txt"
## [7] "NEON.D10.RMNP.DP1.10098.001.vst_perplotperyear.2019-10.basic.20210114T173951Z.csv"
## [8] "NEON.D10.RMNP.DP1.10098.001.vst_mappingandtagging.basic.20210114T173951Z.csv"
## [9] "NEON.D10.RMNP.DP0.10098.001.validation.20210114T173951Z.csv"
If you've downloaded NEON data before via the data portal or the
neonUtilities package, this should look very familiar. The format
for most of the file names is:
NEON.[domain number].[site code].[data product ID].[file-specific name].
[year and month of data].[basic or expanded data package].
[date of file creation]
Some files omit the year and month, and/or the data package, since they're
not specific to a particular measurement interval; examples are the data product
readme and variables files. The date of file creation uses the
ISO 8601 basic format, for example 20210114T173951Z, and can be used to determine
whether data have been updated since you last downloaded them.
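If you want to compare file-creation dates in code, the timestamp can be parsed
into an R date-time. A sketch (the format string here is an assumption based on
the timestamps shown above):

```r
# parse a file-creation timestamp into POSIXct
gen.time <- as.POSIXct("20210114T173951Z",
                       format="%Y%m%dT%H%M%SZ", tz="UTC")
gen.time
```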
Available files in our query for October 2019 at Rocky Mountain are all of the
following (leaving off the initial NEON.D10.RMNP.DP1.10098.001):
~.vst_perplotperyear.2019-10.basic.20210114T173951Z.csv: data table of
measurements conducted at the plot level every year
~.vst_apparentindividual.2019-10.basic.20210114T173951Z.csv: data table
containing measurements and observations conducted on woody individuals
~.vst_mappingandtagging.basic.20210114T173951Z.csv: data table
containing mapping data for each measured woody individual. Note year and
month are not in file name; these data are collected once per individual
and provided with every month of data downloaded
~.categoricalCodes.20210114T173951Z.csv: definitions of the values in
categorical variables
~.readme.20210123T023002Z.txt: readme for the data product (not specific
to dates or location)
~.EML.20191010-20191017.20210123T023002Z.xml: Ecological Metadata
Language (EML) file, describing the data product
~.validation.20210114T173951Z.csv: validation file for the data product,
lists input data and data entry validation rules
~.variables.20210114T173951Z.csv: variables file for the data product,
lists data fields in downloaded tables
~.2019-10.basic.20210114T173951Z.zip: zip of all files in the basic
package. Pre-packaged zips are planned for removal, so this file may not
appear in the response to your query
This data product doesn't have an expanded package, so we only see the
basic package data files, and only one copy of each of the metadata files.
Let's get the data table for the mapping and tagging data. The list of files
doesn't return in the same order every time, so we shouldn't use the position
in the list to select the file name we want. Plus, we want code we can re-use
when getting data from other sites and other months. So we select file URLs
based on the data table name in the file names.
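A sketch of that selection and download, assuming the woody.files object from
above and that the files list returned by the /data endpoint includes a url
field for each file:

```r
# select the URL whose file name contains the mapping and tagging table name
maptag.url <- woody.files$data$files$url[
  grep("vst_mappingandtagging", woody.files$data$files$name)]

# read the csv directly from that URL
vst.maptag <- read.delim(maptag.url, sep=",")
```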
Note that if there were an expanded package, the code above would return
two URLs. In that case you would need to specify the package as well in
selecting the URL.
Now we have the data and can access it in R. Just to show that the file we
pulled has actual data in it, let's make a quick graphic of the species
present and their abundances:
# Get counts by species
countBySp <- table(vst.maptag$taxonID)
# Reorder so list is ordered most to least abundance
countBySp <- countBySp[order(countBySp, decreasing=T)]
# Plot abundances
barplot(countBySp, names.arg=names(countBySp),
ylab="Total", las=2)
This shows us that the two most abundant species are designated with the
taxon codes PICOL and POTR5. We can look back at the data table, check the
scientificName field corresponding to these values, and see that these
are lodgepole pine and quaking aspen, as we might expect in the eastern
foothills of the Rocky Mountains.
Let's say we're interested in how NEON defines quaking aspen, and
what taxon authority it uses for its definition. We can use the
/taxonomy endpoint of the API to do that.
Taxonomy
NEON maintains accepted taxonomies for many of the taxonomic identification
data we collect. NEON taxonomies are available for query via the API; they
are also provided via an interactive user interface, the Taxon Viewer.
NEON taxonomy data provides the reference information for how NEON
validates taxa; an identification must appear in the taxonomy lists
in order to be accepted into the NEON database. Additions to the lists
are reviewed regularly. The taxonomy lists also provide the author
of the scientific name, and the reference text used.
The taxonomy endpoint of the API has query options that are a bit more
complicated than what was described in the "Anatomy of an API Call"
section above. As described above, each endpoint has a single type of
target - a data product number, a named location name, etc. For taxonomic
data, there are multiple query options, and some of them can be used in
combination. Instead of entering a single target, we specify the query
type, and then the query parameter to search for. For example, a query
for taxa in the Pinaceae family:
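A sketch of what such a call might look like, with family as the query type and
Pinaceae as the query parameter:

```r
# taxonomy query for all taxa in the family Pinaceae
pine.req <- GET("http://data.neonscience.org/api/v0/taxonomy/?family=Pinaceae")
pine.list <- jsonlite::fromJSON(content(pine.req, as="text"))
```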
The available types of queries are listed in the taxonomy section
of the API web page. Briefly, they are:
taxonTypeCode: Which of the taxonomies maintained by NEON are you
looking for? BIRD, FISH, PLANT, etc. Cannot be used in combination
with the taxonomic rank queries.
each of the major taxonomic ranks from genus through kingdom
scientificname: Genus + specific epithet (+ authority). Search is
by exact match only, see final example below.
verbose: Do you want the short (false) or long (true) response?
offset: Skip this number of items at the start of the list.
limit: Result set will be truncated at this length.
For the first example, let's query for the loon family, Gaviidae, in the
bird taxonomy. Note that query parameters are case-sensitive.
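A sketch of the Gaviidae query; the resulting loon.list object is examined
below:

```r
# query the taxonomy endpoint for the loon family, Gaviidae
loon.req <- GET("http://data.neonscience.org/api/v0/taxonomy/?family=Gaviidae")
loon.list <- jsonlite::fromJSON(content(loon.req, as="text"))
```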
And look at the $data element of the results, which contains:
The full taxonomy of each taxon
The short taxon code used by NEON (taxonID/acceptedTaxonID)
The author of the scientific name (scientificNameAuthorship)
The vernacular name, if applicable
The reference text used (nameAccordingToID)
The terms used for each field are matched to Darwin Core (dwc) and
the Global Biodiversity Information Facility (gbif) terms, where
possible, and the matches are indicated in the column headers.
loon.list$data
## taxonTypeCode taxonID acceptedTaxonID dwc:scientificName dwc:scientificNameAuthorship dwc:taxonRank
## 1 BIRD ARLO ARLO Gavia arctica (Linnaeus) species
## 2 BIRD COLO COLO Gavia immer (Brunnich) species
## 3 BIRD PALO PALO Gavia pacifica (Lawrence) species
## 4 BIRD RTLO RTLO Gavia stellata (Pontoppidan) species
## 5 BIRD YBLO YBLO Gavia adamsii (G. R. Gray) species
## dwc:vernacularName dwc:nameAccordingToID dwc:kingdom dwc:phylum dwc:class dwc:order dwc:family
## 1 Arctic Loon doi: 10.1642/AUK-15-73.1 Animalia Chordata Aves Gaviiformes Gaviidae
## 2 Common Loon doi: 10.1642/AUK-15-73.1 Animalia Chordata Aves Gaviiformes Gaviidae
## 3 Pacific Loon doi: 10.1642/AUK-15-73.1 Animalia Chordata Aves Gaviiformes Gaviidae
## 4 Red-throated Loon doi: 10.1642/AUK-15-73.1 Animalia Chordata Aves Gaviiformes Gaviidae
## 5 Yellow-billed Loon doi: 10.1642/AUK-15-73.1 Animalia Chordata Aves Gaviiformes Gaviidae
## dwc:genus gbif:subspecies gbif:variety
## 1 Gavia NA NA
## 2 Gavia NA NA
## 3 Gavia NA NA
## 4 Gavia NA NA
## 5 Gavia NA NA
To get the entire list for a particular taxonomic type, use the
taxonTypeCode query. Be cautious with this query: the PLANT taxonomic
list has several hundred thousand entries.
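If you do need to work through a very long list, the limit and offset
parameters described above let you page through it in chunks. A sketch:

```r
# request the first 100 plant taxa; increase offset to get the next page
plant.req <- GET("http://data.neonscience.org/api/v0/taxonomy/?taxonTypeCode=PLANT&limit=100&offset=0")
plant.list <- jsonlite::fromJSON(content(plant.req, as="text"))
```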
For an example, let's look up the small mammal taxonomic list, which
is one of the shorter ones, and use the verbose=true option to see
a more extensive list of taxon data, including many taxon ranks that
aren't populated for these taxa. For space here, we'll display only
the first 10 taxa:
mam.req <- GET("http://data.neonscience.org/api/v0/taxonomy/?taxonTypeCode=SMALL_MAMMAL&verbose=true")
mam.list <- jsonlite::fromJSON(content(mam.req, as="text"))
mam.list$data[1:10,]
## taxonTypeCode taxonID acceptedTaxonID dwc:scientificName dwc:scientificNameAuthorship
## 1 SMALL_MAMMAL AMHA AMHA Ammospermophilus harrisii Audubon and Bachman
## 2 SMALL_MAMMAL AMIN AMIN Ammospermophilus interpres Merriam
## 3 SMALL_MAMMAL AMLE AMLE Ammospermophilus leucurus Merriam
## 4 SMALL_MAMMAL AMLT AMLT Ammospermophilus leucurus tersus Goldman
## 5 SMALL_MAMMAL AMNE AMNE Ammospermophilus nelsoni Merriam
## 6 SMALL_MAMMAL AMSP AMSP Ammospermophilus sp. <NA>
## 7 SMALL_MAMMAL APRN APRN Aplodontia rufa nigra Taylor
## 8 SMALL_MAMMAL APRU APRU Aplodontia rufa Rafinesque
## 9 SMALL_MAMMAL ARAL ARAL Arborimus albipes Merriam
## 10 SMALL_MAMMAL ARLO ARLO Arborimus longicaudus True
## dwc:taxonRank dwc:vernacularName taxonProtocolCategory dwc:nameAccordingToID
## 1 species Harriss Antelope Squirrel opportunistic isbn: 978 0801882210
## 2 species Texas Antelope Squirrel opportunistic isbn: 978 0801882210
## 3 species Whitetailed Antelope Squirrel opportunistic isbn: 978 0801882210
## 4 subspecies <NA> opportunistic isbn: 978 0801882210
## 5 species Nelsons Antelope Squirrel opportunistic isbn: 978 0801882210
## 6 genus <NA> opportunistic isbn: 978 0801882210
## 7 subspecies <NA> non-target isbn: 978 0801882210
## 8 species Sewellel non-target isbn: 978 0801882210
## 9 species Whitefooted Vole target isbn: 978 0801882210
## 10 species Red Tree Vole target isbn: 978 0801882210
## dwc:nameAccordingToTitle
## 1 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 2 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 3 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 4 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 5 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 6 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 7 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 8 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 9 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## 10 Wilson D. E. and D. M. Reeder. 2005. Mammal Species of the World; A Taxonomic and Geographic Reference. Third edition. Johns Hopkins University Press; Baltimore, MD.
## dwc:kingdom gbif:subkingdom gbif:infrakingdom gbif:superdivision gbif:division gbif:subdivision
## 1 Animalia NA NA NA NA NA
## 2 Animalia NA NA NA NA NA
## 3 Animalia NA NA NA NA NA
## 4 Animalia NA NA NA NA NA
## 5 Animalia NA NA NA NA NA
## 6 Animalia NA NA NA NA NA
## 7 Animalia NA NA NA NA NA
## 8 Animalia NA NA NA NA NA
## 9 Animalia NA NA NA NA NA
## 10 Animalia NA NA NA NA NA
## gbif:infradivision gbif:parvdivision gbif:superphylum dwc:phylum gbif:subphylum gbif:infraphylum
## 1 NA NA NA Chordata NA NA
## 2 NA NA NA Chordata NA NA
## 3 NA NA NA Chordata NA NA
## 4 NA NA NA Chordata NA NA
## 5 NA NA NA Chordata NA NA
## 6 NA NA NA Chordata NA NA
## 7 NA NA NA Chordata NA NA
## 8 NA NA NA Chordata NA NA
## 9 NA NA NA Chordata NA NA
## 10 NA NA NA Chordata NA NA
## gbif:superclass dwc:class gbif:subclass gbif:infraclass gbif:superorder dwc:order gbif:suborder
## 1 NA Mammalia NA NA NA Rodentia NA
## 2 NA Mammalia NA NA NA Rodentia NA
## 3 NA Mammalia NA NA NA Rodentia NA
## 4 NA Mammalia NA NA NA Rodentia NA
## 5 NA Mammalia NA NA NA Rodentia NA
## 6 NA Mammalia NA NA NA Rodentia NA
## 7 NA Mammalia NA NA NA Rodentia NA
## 8 NA Mammalia NA NA NA Rodentia NA
## 9 NA Mammalia NA NA NA Rodentia NA
## 10 NA Mammalia NA NA NA Rodentia NA
## gbif:infraorder gbif:section gbif:subsection gbif:superfamily dwc:family gbif:subfamily gbif:tribe
## 1 NA NA NA NA Sciuridae Xerinae Marmotini
## 2 NA NA NA NA Sciuridae Xerinae Marmotini
## 3 NA NA NA NA Sciuridae Xerinae Marmotini
## 4 NA NA NA NA Sciuridae Xerinae Marmotini
## 5 NA NA NA NA Sciuridae Xerinae Marmotini
## 6 NA NA NA NA Sciuridae Xerinae Marmotini
## 7 NA NA NA NA Aplodontiidae <NA> <NA>
## 8 NA NA NA NA Aplodontiidae <NA> <NA>
## 9 NA NA NA NA Cricetidae Arvicolinae <NA>
## 10 NA NA NA NA Cricetidae Arvicolinae <NA>
## gbif:subtribe dwc:genus dwc:subgenus gbif:subspecies gbif:variety gbif:subvariety gbif:form
## 1 NA Ammospermophilus <NA> NA NA NA NA
## 2 NA Ammospermophilus <NA> NA NA NA NA
## 3 NA Ammospermophilus <NA> NA NA NA NA
## 4 NA Ammospermophilus <NA> NA NA NA NA
## 5 NA Ammospermophilus <NA> NA NA NA NA
## 6 NA Ammospermophilus <NA> NA NA NA NA
## 7 NA Aplodontia <NA> NA NA NA NA
## 8 NA Aplodontia <NA> NA NA NA NA
## 9 NA Arborimus <NA> NA NA NA NA
## 10 NA Arborimus <NA> NA NA NA NA
## gbif:subform speciesGroup dwc:specificEpithet dwc:infraspecificEpithet
## 1 NA <NA> harrisii <NA>
## 2 NA <NA> interpres <NA>
## 3 NA <NA> leucurus <NA>
## 4 NA <NA> leucurus tersus
## 5 NA <NA> nelsoni <NA>
## 6 NA <NA> sp. <NA>
## 7 NA <NA> rufa nigra
## 8 NA <NA> rufa <NA>
## 9 NA <NA> albipes <NA>
## 10 NA <NA> longicaudus <NA>
Now let's go back to our question about quaking aspen. To get
information about a single taxon, use the scientificname
query. This query will not do a fuzzy match, so you need to query
the exact name of the taxon in the NEON taxonomy. Because of this,
the query will be most useful in cases like the current one, where
you already have NEON data in hand and are looking for more
information about a specific taxon. Querying on scientificname
is unlikely to be an efficient way to figure out if NEON recognizes
a particular taxon.
In addition, scientific names contain spaces, which are not
allowed in a URL. The spaces need to be replaced with their URL
encoding, %20.
Looking up the POTR5 data in the woody vegetation product, we
see that the scientific name is Populus tremuloides Michx.
This means we need to search for Populus%20tremuloides%20Michx.
to get the exact match.
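As a sketch, assuming the httr and jsonlite setup used above and the taxonomy endpoint's scientificname parameter, the encoded query could be built like this (object names here are arbitrary):

```r
# URL-encode the scientific name: spaces become %20
potr.name <- URLencode("Populus tremuloides Michx.")

# query the taxonomy endpoint for the exact name
req.potr <- GET(paste0("https://data.neonscience.org/api/v0/taxonomy/?scientificname=",
                       potr.name))
potr.tax <- jsonlite::fromJSON(content(req.potr, as="text"))
```

Using URLencode() avoids hand-typing the %20 replacements and works for any taxon name.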
This shows us the definition for Populus tremuloides Michx. does
not include a subspecies or variety, and the authority for the
taxon information (nameAccordingToID) is the USDA PLANTS
database. This means NEON taxonomic definitions are aligned with
the USDA, which is true for the large majority of plants in the
NEON taxon system.
Spatial data
How to get spatial data and what to do with it depends on which type of
data you're working with.
Instrumentation data (both aquatic and terrestrial)
The sensor_positions files, which are included in the list of available files,
contain spatial coordinates for each sensor in the data. See the final section
of the Geolocation tutorial for guidance on using these files.
Observational data - Aquatic
Latitude, longitude, elevation, and associated uncertainties are included in
data downloads. Most products also include an "additional coordinate uncertainty"
that should be added to the provided uncertainty. Additional spatial data, such
as northing and easting, can be downloaded from the API.
Observational data - Terrestrial
Latitude, longitude, elevation, and associated uncertainties are included in
data downloads. These are the coordinates and uncertainty of the sampling plot;
for many protocols it is possible to calculate a more precise location.
Instructions for doing this are in the respective data product user guides, and
code is in the geoNEON package on GitHub.
Querying a single named location
Let's look at a specific sampling location in the woody vegetation structure
data we downloaded above. To do this, look for the field called namedLocation,
which is present in all observational data products, both aquatic and
terrestrial. This field matches the exact name of the location in the NEON
database.
Here we see the first six entries in the namedLocation column, which tells us
the names of the Terrestrial Observation plots where the woody plant surveys
were conducted.
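Assuming the mapping and tagging table from the download above is in an object called vst.maptag, those entries could be viewed with:

```r
# first six named locations in the woody vegetation mapping and tagging table
head(vst.maptag$namedLocation)
```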
We can query the locations endpoint of the API for the first named location,
RMNP_043.basePlot.vst.
req.loc <- GET("https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst")
vst.RMNP_043 <- jsonlite::fromJSON(content(req.loc, as="text"))
vst.RMNP_043
## $data
## $data$locationName
## [1] "RMNP_043.basePlot.vst"
##
## $data$locationDescription
## [1] "Plot \"RMNP_043\" at site \"RMNP\""
##
## $data$locationType
## [1] "OS Plot - vst"
##
## $data$domainCode
## [1] "D10"
##
## $data$siteCode
## [1] "RMNP"
##
## $data$locationDecimalLatitude
## [1] 40.27683
##
## $data$locationDecimalLongitude
## [1] -105.5454
##
## $data$locationElevation
## [1] 2740.39
##
## $data$locationUtmEasting
## [1] 453634.6
##
## $data$locationUtmNorthing
## [1] 4458626
##
## $data$locationUtmHemisphere
## [1] "N"
##
## $data$locationUtmZone
## [1] 13
##
## $data$alphaOrientation
## [1] 0
##
## $data$betaOrientation
## [1] 0
##
## $data$gammaOrientation
## [1] 0
##
## $data$xOffset
## [1] 0
##
## $data$yOffset
## [1] 0
##
## $data$zOffset
## [1] 0
##
## $data$offsetLocation
## NULL
##
## $data$locationProperties
## locationPropertyName locationPropertyValue
## 1 Value for Coordinate source GeoXH 6000
## 2 Value for Coordinate uncertainty 0.09
## 3 Value for Country unitedStates
## 4 Value for County Larimer
## 5 Value for Elevation uncertainty 0.1
## 6 Value for Filtered positions 300
## 7 Value for Geodetic datum WGS84
## 8 Value for Horizontal dilution of precision 1
## 9 Value for Maximum elevation 2743.43
## 10 Value for Minimum elevation 2738.52
## 11 Value for National Land Cover Database (2001) evergreenForest
## 12 Value for Plot dimensions 40m x 40m
## 13 Value for Plot ID RMNP_043
## 14 Value for Plot size 1600
## 15 Value for Plot subtype basePlot
## 16 Value for Plot type tower
## 17 Value for Positional dilution of precision 1.8
## 18 Value for Reference Point Position 41
## 19 Value for Slope aspect 0
## 20 Value for Slope gradient 0
## 21 Value for Soil type order Alfisols
## 22 Value for State province CO
## 23 Value for UTM Zone 13N
##
## $data$locationParent
## [1] "RMNP"
##
## $data$locationParentUrl
## [1] "https://data.neonscience.org/api/v0/locations/RMNP"
##
## $data$locationChildren
## [1] "RMNP_043.basePlot.vst.41" "RMNP_043.basePlot.vst.43" "RMNP_043.basePlot.vst.49"
## [4] "RMNP_043.basePlot.vst.51" "RMNP_043.basePlot.vst.59" "RMNP_043.basePlot.vst.57"
## [7] "RMNP_043.basePlot.vst.25" "RMNP_043.basePlot.vst.21" "RMNP_043.basePlot.vst.23"
## [10] "RMNP_043.basePlot.vst.33" "RMNP_043.basePlot.vst.31" "RMNP_043.basePlot.vst.39"
## [13] "RMNP_043.basePlot.vst.61"
##
## $data$locationChildrenUrls
## [1] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.41"
## [2] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.43"
## [3] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.49"
## [4] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.51"
## [5] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.59"
## [6] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.57"
## [7] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.25"
## [8] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.21"
## [9] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.23"
## [10] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.33"
## [11] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.31"
## [12] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.39"
## [13] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst.61"
Note spatial information under $data$[nameOfCoordinate] and under
$data$locationProperties. These coordinates represent the centroid of
plot RMNP_043, and should match the coordinates for the plot in the
vst_perplotperyear data table. In rare cases, spatial data may be
updated, and if this has happened more recently than the data table
was published, there may be a small mismatch in the coordinates. In
those cases, the data accessed via the API will be the most up to date.
Also note $data$locationChildren: these are the
point and subplot locations within plot RMNP_043. And
$data$locationChildrenUrls provides pre-constructed API calls for
querying those child locations. Let's look up the location data for
point 31 in plot RMNP_043.
# anchor the pattern so we match only the URL ending in point ID 31
req.child.loc <- GET(grep("\\.31$",
                          vst.RMNP_043$data$locationChildrenUrls,
                          value=TRUE))
vst.RMNP_043.31 <- jsonlite::fromJSON(content(req.child.loc, as="text"))
vst.RMNP_043.31
## $data
## $data$locationName
## [1] "RMNP_043.basePlot.vst.31"
##
## $data$locationDescription
## [1] "31"
##
## $data$locationType
## [1] "Point"
##
## $data$domainCode
## [1] "D10"
##
## $data$siteCode
## [1] "RMNP"
##
## $data$locationDecimalLatitude
## [1] 40.27674
##
## $data$locationDecimalLongitude
## [1] -105.5455
##
## $data$locationElevation
## [1] 2741.56
##
## $data$locationUtmEasting
## [1] 453623.7
##
## $data$locationUtmNorthing
## [1] 4458616
##
## $data$locationUtmHemisphere
## [1] "N"
##
## $data$locationUtmZone
## [1] 13
##
## $data$alphaOrientation
## [1] 0
##
## $data$betaOrientation
## [1] 0
##
## $data$gammaOrientation
## [1] 0
##
## $data$xOffset
## [1] 0
##
## $data$yOffset
## [1] 0
##
## $data$zOffset
## [1] 0
##
## $data$offsetLocation
## NULL
##
## $data$locationProperties
## locationPropertyName locationPropertyValue
## 1 Value for Coordinate source GeoXH 6000
## 2 Value for Coordinate uncertainty 0.16
## 3 Value for Elevation uncertainty 0.28
## 4 Value for Geodetic datum WGS84
## 5 Value for National Land Cover Database (2001) evergreenForest
## 6 Value for Point ID 31
## 7 Value for UTM Zone 13N
##
## $data$locationParent
## [1] "RMNP_043.basePlot.vst"
##
## $data$locationParentUrl
## [1] "https://data.neonscience.org/api/v0/locations/RMNP_043.basePlot.vst"
##
## $data$locationChildren
## list()
##
## $data$locationChildrenUrls
## list()
Looking at the easting and northing coordinates, we can see that point
31 is about 10 meters away from the plot centroid in both directions.
Point 31 has no child locations.
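The offsets can be checked directly from the coordinates returned above:

```r
# easting and northing offsets of point 31 from the plot centroid;
# both come out to roughly -10 m, i.e. point 31 is southwest of the centroid
vst.RMNP_043.31$data$locationUtmEasting - vst.RMNP_043$data$locationUtmEasting
vst.RMNP_043.31$data$locationUtmNorthing - vst.RMNP_043$data$locationUtmNorthing
```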
For the records with pointID=31 in the vst.maptag table we
downloaded, these coordinates are the reference location that could be
used together with the stemDistance and stemAzimuth fields to
calculate the precise locations of individual trees. For detailed
instructions on how to do this, see the data product user guide.
Alternatively, the geoNEON package contains functions to make this
calculation for you, including accessing the location data from the
API. See below for details and links to tutorials.
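As an illustration of the underlying geometry, a simplified manual calculation might look like the following. The stemDistance and stemAzimuth values here are made-up examples standing in for fields from the vst_mappingandtagging table; this ignores coordinate uncertainty and is no substitute for the user guide or geoNEON:

```r
# example values; in practice take these from the vst_mappingandtagging records
stemDistance <- 8.5   # meters from the reference point
stemAzimuth  <- 120   # degrees clockwise from true north

# reference point coordinates from the locations endpoint (point 31 above)
ref.east  <- vst.RMNP_043.31$data$locationUtmEasting
ref.north <- vst.RMNP_043.31$data$locationUtmNorthing

# basic trigonometry: azimuth is measured clockwise from north
stem.east  <- ref.east + stemDistance * sin(stemAzimuth * pi/180)
stem.north <- ref.north + stemDistance * cos(stemAzimuth * pi/180)
```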
R Packages
NEON provides two customized R packages that can carry out many of the
operations described above, in addition to other data transformations.
neonUtilities
The neonUtilities package contains functions that download data
via the API, and that merge the individual files for each site and
month into a single continuous file for each type of data table in the
download.
For a guide to using neonUtilities to download and stack data files,
check out the Download and Explore tutorial.
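For example, the download-and-stack step can be done in a single call to loadByProduct(). The site and dates below are illustrative:

```r
library(neonUtilities)

# download and stack woody vegetation structure data for RMNP
vst <- loadByProduct(dpID="DP1.10098.001", site="RMNP",
                     startdate="2018-01", enddate="2018-12",
                     check.size=FALSE)
```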
geoNEON
The geoNEON package contains functions that access spatial data
via the API, and that calculate precise locations for terrestrial
observational data. As in the case of woody vegetation structure,
terrestrial observational data products are published with
spatial data at the plot level, but it is usually possible to
calculate more precise sampling locations.
For a guide to using geoNEON to calculate sampling locations,
check out the Geolocation tutorial.
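A minimal sketch of that workflow, assuming the mapping and tagging table from the download above is in an object called vst.maptag:

```r
library(geoNEON)

# calculate precise stem locations for the mapping and tagging table;
# getLocTOS() queries the locations endpoint and applies the
# stemDistance/stemAzimuth geometry for you
vst.loc <- getLocTOS(data=vst.maptag,
                     dataProd="vst_mappingandtagging")
```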