LiDAR, or Light Detection and Ranging, is an active remote sensing system that can be used to measure vegetation height across wide areas. This page will introduce fundamental LiDAR (or lidar) concepts, including:
What LiDAR data are.
The key attributes of LiDAR data.
How LiDAR data are used to measure trees.
The Story of LiDAR
Key Concepts
Why LiDAR
Scientists often need to characterize vegetation over large regions to answer
research questions at the ecosystem or regional scale. Therefore, we need tools
that can estimate key characteristics over large areas because
we don’t have the resources to measure each and every tree or shrub.
Conventional, on-the-ground methods to measure trees are resource
intensive and limit the amount of vegetation that can be characterized! Source:
National Geographic
Remote sensing means that we aren’t actually physically measuring things with our hands. We are using sensors which capture information about a landscape and
record things that we can use to estimate conditions and characteristics. To measure vegetation or other data across large areas, we need remote sensing
methods that can take many measurements quickly, using automated sensors.
LiDAR data collected at the Soaproot Saddle site by the National
Ecological Observatory Network's Airborne Observation Platform (NEON AOP).
LiDAR, or Light Detection and Ranging (sometimes also referred to as active laser scanning), is one remote sensing method that can be used to map structure, including vegetation height, density, and other characteristics, across a region. LiDAR directly measures the height and density of vegetation on the ground, making it an ideal tool for scientists studying vegetation over large areas.
How LiDAR Works
LiDAR is an active remote sensing system. An active system means that the system itself generates energy - in this case, light - to measure things on the
ground. In a LiDAR system, light is emitted from a rapidly firing laser. You can imagine light quickly strobing (or pulsing) from a laser light source. This light travels to the ground and reflects off of things like buildings and tree branches. The reflected light energy then returns to the LiDAR sensor where it is recorded.
A LiDAR system measures the time it takes for emitted light to travel to the ground and back, called the two-way travel time. That time is used to calculate distance traveled. Distance traveled is then converted to elevation. These measurements are made using the key components of a lidar system including a GPS that identifies the X,Y,Z location of the light energy and an Inertial Measurement Unit (IMU) that provides the orientation of the plane in the sky (roll, pitch, and yaw).
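The two-way travel time calculation can be made concrete with a quick sketch in R (the travel time below is a made-up value, for illustration only):

```r
# Speed of light in m/s
speed_of_light <- 299792458

# Hypothetical two-way travel time of one returned pulse, in seconds
two_way_time <- 6.7e-7

# Divide by 2 because the pulse travels to the target AND back
distance_m <- (speed_of_light * two_way_time) / 2
distance_m  # ~100.4 m between the sensor and the reflecting surface
```

Combined with the GPS position and IMU orientation of the aircraft, this distance is what gets converted into an elevation for each return.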
How Light Energy Is Used to Measure Trees
Light energy is a collection of photons. As the photons that make up light move towards the ground, they hit objects such as branches on a tree. Some of the
light reflects off of those objects and returns to the sensor. If the object is small, and there are gaps surrounding it that allow light to pass through, some
light continues down towards the ground. Because some photons reflect off of things like branches but others continue down towards the ground, multiple
reflections (or "returns") may be recorded from one pulse of light.
LiDAR waveforms
The distribution of energy that returns to the sensor creates what we call a waveform. The amount of energy that returned to the LiDAR sensor is known as
"intensity". The areas where more photons or more light energy returns to the sensor create peaks in the distribution of energy. Theses peaks in the waveform
often represent objects on the ground like - a branch, a group of leaves or a building.
An example LiDAR waveform returned from two trees and the ground.
Source: NEON.
How Scientists Use LiDAR Data
There are many different uses for LiDAR data.
LiDAR data have classically been used to derive high resolution elevation models.
LiDAR data have historically been used to generate high
resolution elevation datasets. Source: National Ecological Observatory
Network.
LiDAR data have also been used to derive information about vegetation structure including:
Canopy Height
Canopy Cover
Leaf Area Index
Vertical Forest Structure
Species identification (in less dense forests, with high point density LiDAR)
Cross section showing LiDAR point cloud data superimposed on the corresponding landscape profile. Source: National Ecological Observatory Network.
Discrete vs. Full Waveform LiDAR
A waveform or distribution of light energy is what returns to the LiDAR sensor. However, this return may be recorded in two different ways.
A Discrete Return LiDAR System records individual (discrete) points for the peaks in the waveform curve. Discrete return LiDAR systems identify peaks and record a point at each peak location in the waveform curve. These discrete or individual points are called returns. A discrete system may record 1-11+ returns from each laser pulse.
A Full Waveform LiDAR System records the full distribution of returned light energy. Full waveform LiDAR data are thus more complex to process; however, they can often capture more information than discrete return LiDAR systems. One example research application for full waveform LiDAR data is mapping or
modelling the understory of a canopy.
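To illustrate how a discrete return system reduces a waveform to individual points, here is a toy sketch in R. The waveform values are invented, and real systems perform far more sophisticated peak detection in hardware; this only shows the idea of keeping the peaks:

```r
# Invented waveform: returned intensity sampled along the pulse's travel path
waveform <- c(0, 1, 4, 9, 5, 2, 1, 3, 8, 12, 7, 3, 1, 0, 2, 6, 15, 9, 2, 0)

# A discrete return system keeps only the peaks:
# samples that are higher than both of their neighbors
n <- length(waveform)
peaks <- which(waveform[2:(n - 1)] > waveform[1:(n - 2)] &
               waveform[2:(n - 1)] > waveform[3:n]) + 1

peaks            # positions of the discrete "returns" along the pulse: 4 10 17
waveform[peaks]  # the intensity recorded for each return: 9 12 15
```

Each peak becomes one discrete return; here the three peaks might correspond to, say, the top of a canopy, a lower branch layer, and the ground.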
LiDAR File Formats
Whether it is collected as discrete points or full waveform, LiDAR data are most often made available as discrete points. A collection of discrete return LiDAR
points is known as a LiDAR point cloud.
The most commonly used file format to store LiDAR point cloud data is ".las", a format supported by the American Society for Photogrammetry and Remote
Sensing (ASPRS). More recently, the .laz format was developed by Martin Isenburg of LAStools; the difference is that .laz is a highly compressed version of .las.
Data products derived from LiDAR point cloud data are often raster files that may be in GeoTIFF (.tif) formats.
LiDAR Data Attributes: X, Y, Z, Intensity and Classification
LiDAR data attributes can vary, depending upon how the data were collected and processed. You can determine what attributes are available for each lidar point
by looking at the metadata. All lidar data points will have an associated X,Y location and Z (elevation) values. Most lidar data points will have an intensity value, representing the amount of light energy recorded by the sensor.
Some LiDAR data will also be "classified" -- not top secret, but with labels describing what the data represent. Classification of LiDAR point clouds is an additional processing step. Classification simply records the type of object that the laser return reflected off of. So if the light energy reflected off of a tree, it might be classified as a "vegetation" point, and if it reflected off of the ground, it might be classified as a "ground" point.
Some LiDAR products will be classified as "ground/non-ground". Some datasets will be further processed to determine which points reflected off of buildings
and other infrastructure. Some LiDAR data will be classified according to the vegetation type.
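If you want to inspect these point attributes yourself, the lidR R package can read a .las/.laz point cloud into R. This is a hedged sketch, not part of the lesson's own workflow: the file name below is a placeholder you would replace with a file of your own.

```r
library(lidR)  # install.packages("lidR") if needed

# Placeholder file name: substitute a .las/.laz file of your own
las <- readLAS("my_lidar_tile.laz")

# Every point has an X, Y, Z; most also carry Intensity and Classification
head(las@data[, c("X", "Y", "Z", "Intensity", "Classification")])

# Count points per class; in the ASPRS scheme, 2 = ground, 5 = high vegetation
table(las@data$Classification)
```

Checking the classification table is a quick way to see how far a given dataset has been processed (ground/non-ground only, or finer categories like buildings and vegetation).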
Exploring 3D LiDAR data in a free Online Viewer
Check out our tutorial on viewing LiDAR point cloud data using the Plas.io online viewer:
Plas.io: Free Online Data Viz to Explore LiDAR Data.
The Plas.io viewer used in this tutorial was developed by Martin Isenburg of LAStools and his colleagues.
Summary
A LiDAR system uses a laser, a GPS and an IMU to estimate the heights of objects on the ground.
Discrete LiDAR data are generated from waveforms -- each point represents a peak in the returned energy.
Discrete LiDAR points contain an x, y and z value. The z value is what is used to generate height.
LiDAR data can be used to estimate tree height and even canopy cover using various methods.
A common analysis using lidar data is to derive top of the canopy height values
from the lidar data. These values are often used to track changes in forest
structure over time, to calculate biomass, and even to estimate leaf area index (LAI). Let's
dive into the basics of working with raster formatted lidar data in R!
Learning Objectives
After completing this tutorial, you will be able to:
Work with digital terrain model (DTM) & digital surface model (DSM) raster files.
Create a canopy height model (CHM) raster from DTM & DSM rasters.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges to reinforce
skills. If available, the code for challenge solutions is found in the downloadable R
script of the entire lesson, available in the footer of each lesson page.
The National Ecological Observatory Network (NEON) will provide lidar-derived
data products as one of its many free ecological data products. These products
will come in the
GeoTIFF
format, which is a .tif raster format that is spatially located on the earth.
In this tutorial, we create a Canopy Height Model. The
Canopy Height Model (CHM)
represents the heights of the trees on the ground. We can derive the CHM
by subtracting the ground elevation from the elevation of the top of the surface
(or the tops of the trees).
We will use the terra R package to work with the lidar-derived Digital
Surface Model (DSM) and the Digital Terrain Model (DTM).
Set the working directory so you know where to download data.
wd="~/data/" #This will depend on your local environment
setwd(wd)
We can use the neonUtilities function byTileAOP to download a single DTM and DSM tile at SJER. Both the DTM and DSM are delivered under the Elevation - LiDAR (DP3.30024.001) data product.
You can run help(byTileAOP) to see more details on what the various inputs are. For this exercise, we'll specify the UTM Easting and Northing to be (257500, 4112500), which will download the tile with the lower left corner (257000,4112000). By default, the function will check the total size of the download and ask you whether you wish to proceed (y/n). You can set check.size=FALSE if you want to download without a prompt. This example will not be very large (~8MB), since it is only downloading two single-band rasters (plus some associated metadata).
byTileAOP(dpID='DP3.30024.001',
site='SJER',
year='2021',
easting=257500,
northing=4112500,
check.size=TRUE, # set to FALSE if you don't want to enter y/n
savepath = wd)
This file will be downloaded into a nested subdirectory under the ~/data folder, inside a folder named DP3.30024.001 (the Data Product ID). The files should show up in these locations: ~/data/DP3.30024.001/neon-aop-products/2021/FullSite/D17/2021_SJER_5/L3/DiscreteLidar/DSMGtif/NEON_D17_SJER_DP3_257000_4112000_DSM.tif and ~/data/DP3.30024.001/neon-aop-products/2021/FullSite/D17/2021_SJER_5/L3/DiscreteLidar/DTMGtif/NEON_D17_SJER_DP3_257000_4112000_DTM.tif.
Now we can read in the files. You can move the files to a different location (e.g. shorten the path), but make sure to change the path that points to the files accordingly.
# Define the DSM and DTM file names, including the full path
dsm_file <- paste0(wd,"DP3.30024.001/neon-aop-products/2021/FullSite/D17/2021_SJER_5/L3/DiscreteLidar/DSMGtif/NEON_D17_SJER_DP3_257000_4112000_DSM.tif")
dtm_file <- paste0(wd,"DP3.30024.001/neon-aop-products/2021/FullSite/D17/2021_SJER_5/L3/DiscreteLidar/DTMGtif/NEON_D17_SJER_DP3_257000_4112000_DTM.tif")
First, we will read in the Digital Surface Model (DSM). The DSM represents the elevation of the top of the objects on the ground (trees, buildings, etc).
# assign raster to object
dsm <- rast(dsm_file)
# view info about the raster.
dsm
## class : SpatRaster
## dimensions : 1000, 1000, 1 (nrow, ncol, nlyr)
## resolution : 1, 1 (x, y)
## extent : 257000, 258000, 4112000, 4113000 (xmin, xmax, ymin, ymax)
## coord. ref. : WGS 84 / UTM zone 11N (EPSG:32611)
## source : NEON_D17_SJER_DP3_257000_4112000_DSM.tif
## name : NEON_D17_SJER_DP3_257000_4112000_DSM
# plot the DSM
plot(dsm, main="Lidar Digital Surface Model \n SJER, California")
Note the resolution, extent, and coordinate reference system (CRS) of the raster.
For the later steps to work, our DTM will need to match these.
Next, we will import the Digital Terrain Model (DTM) for the same area. The
DTM
represents the ground (terrain) elevation.
# import the digital terrain model
dtm <- rast(dtm_file)
plot(dtm, main="Lidar Digital Terrain Model \n SJER, California")
With both of these rasters now loaded, we can create the Canopy Height Model
(CHM). The CHM
represents the difference between the DSM and the DTM, i.e., the height of all objects
on the surface of the earth.
To do this we perform some basic raster math to calculate the CHM. You can
perform the same raster math in a GIS program like
QGIS.
When you do the math, make sure to subtract the DTM from the DSM or you'll get
trees with negative heights!
# use raster math to create CHM
chm <- dsm - dtm
# view CHM attributes
chm
## class : SpatRaster
## dimensions : 1000, 1000, 1 (nrow, ncol, nlyr)
## resolution : 1, 1 (x, y)
## extent : 257000, 258000, 4112000, 4113000 (xmin, xmax, ymin, ymax)
## coord. ref. : WGS 84 / UTM zone 11N (EPSG:32611)
## source(s) : memory
## varname : NEON_D17_SJER_DP3_257000_4112000_DSM
## name : NEON_D17_SJER_DP3_257000_4112000_DSM
## min value : 0.00
## max value : 24.13
plot(chm, main="Lidar CHM - SJER, California")
We've now created a CHM from our DSM and DTM. What do you notice about the
canopy cover at this location in the San Joaquin Experimental Range?
Challenge: Basic Raster Math
Convert the CHM from meters to feet and plot it.
We can write out the CHM as a GeoTIFF using the writeRaster() function.
# write out the CHM in GeoTIFF format
writeRaster(chm, paste0(wd,"CHM_SJER.tif"), filetype="GTiff")
We've now successfully created a canopy height model using basic raster math -- in
R! We can bring the CHM_SJER.tif file into QGIS (or any GIS program) and look
at it.
Here we will provide an overview of the National Ecological Observatory
Network (NEON). Please carefully read through these materials and links that
discuss NEON’s mission and design.
Learning Objectives
At the end of this activity, you will be able to:
Explain the mission of the National Ecological Observatory Network (NEON).
Explain how sites are located within the NEON project design.
Explain the different types of data that will be collected and provided by NEON.
The NEON Project Mission & Design
To capture ecological heterogeneity across the United States, NEON’s design
divides the continent into 20 statistically different eco-climatic domains. Each
NEON field site is located within an eco-climatic domain.
The Science and Design of NEON
To gain a better understanding of the broad scope of NEON, watch this 4-minute
video.
Explore the NEON field site map. Do the following:
Zoom in on a study area of interest to see if there are any NEON field sites that are nearby.
Use the menu below the map to filter sites by name, type, domain, or state.
Select one field site of interest.
Click on the marker in the map.
Then click on Site Details to jump to the field site landing page.
Data Institute Participant -- Thought Questions:
Use the map above to answer these questions. Consider the research question that
you may explore as your Capstone Project at the Institute or about a current
project that you are working on and answer the following questions:
Are there NEON field sites that are in study regions of interest to you?
What domains are the sites located in?
What NEON field sites do your current research or Capstone Project ideas
coincide with?
Is each site core or relocatable?
Is each site terrestrial or aquatic?
Are there data available for the NEON field site(s) that you are most
interested in? What kind of data are available?
Watch this short video (3:06) exploring the data that NEON collects.
Read the
Data Collection Methods
page to learn more about the different types of data that NEON collects and
provides. Then, follow the links below to learn more about each collection method:
NEON also collects samples and specimens from which the other data products are based. These samples are also available for research and education purposes. Learn more:
NEON Biorepository.
Airborne Remote Sensing
Watch this 5 minute video to better understand the NEON Airborne Observation
Platform (AOP).
Data Institute Participant – Thought Questions:
Consider either your current or future research or the question you’d like to
address at the Institute.
Which types of NEON data may be more useful to address these questions?
What non-NEON data resources could be combined with NEON data to help address your question?
What challenges, if any, could you foresee when beginning to work with these data?
Data Tip: NEON also provides support for your own
research, including proposals to fly the AOP over other study sites, a mobile
tower/instrumentation setup, and more. Learn more about the
Assignable Assets programs.
Access NEON Data
NEON data are processed and go through quality assurance and quality control (QA/QC) checks at NEON headquarters in Boulder, CO.
NEON carefully documents every aspect of sampling design, data collection, processing and delivery. This documentation is freely available through the NEON data portal.
Explore NEON Data Products.
On the page for each data product in the catalog you can find the basic information
about the product, find the data collection and processing protocols, and link
directly to downloading the data.
Additionally, some types of NEON data are also available through the data portals
of other organizations. For example,
NEON Terrestrial Insect DNA Barcoding Data
is available through the
Barcode of Life Datasystem (BOLD).
Or NEON phenocam images are available from the
Phenocam network site.
More details on where else the data are available from can be found in the Availability and Download
section on the Product Details page for each data product (visit
Explore Data Products
to access individual Product Details pages).
Pathways to access NEON Data
There are several ways to access data from NEON:
Via the NEON data portal.
Explore and download data. Note that much of the tabular data is available in zipped
.csv files for each month and site of interest. To combine these files, use the
neonUtilities package (R tutorial, Python tutorial).
Use R or Python to programmatically access the data. NEON and community members
have created code packages to directly access the data through an API. Learn more
about the available resources by reading the Code Resources page or visiting the
NEONScience GitHub repo.
Using the NEON API. Access NEON data directly using a custom API call.
Access NEON data through partners' portals. Where NEON data directly overlap
with other community resources, NEON data can be accessed through those portals.
Examples include Phenocam, BOLD, Ameriflux, and others. You can learn more in the
documentation for individual data products.
Data Institute Participant – Thought Questions:
Use the Data Portal tools to investigate the data availability for the field
sites you’ve already identified in the previous Thought Questions.
What types of aquatic/terrestrial data are currently available? Remote sensing data?
Of these, what type of data are you most interested in working with for your project while at the Institute?
What time period do the data cover?
What format is the downloadable file available in?
Where is the metadata to support these data?
Data Institute Participants: Intro to NEON Culmination Activity
Write up a brief summary of a project that you might want to explore while at the
Data Institute in Boulder, CO. Include the types of NEON (and other data) that you
will need to implement this project. Save this summary as you will be refining
and adding to your ideas over the next few weeks.
The goal of this activity is for you to begin to think about a Capstone Project
that you wish to work on at the end of the Data Institute. This project will ideally be
performed in groups, so over the next few weeks you'll have a chance to view the other
project proposals and merge projects to collaborate with your colleagues.
Once you have Git and Bash installed, you are ready to configure Git.
On this page you will:
Create a directory for all future GitHub repositories created on your computer
To ensure Git is properly installed and to create a working directory for GitHub,
you will need to know a bit of shell -- brief crash course below.
Crash Course on Shell
The Unix shell has been around longer than most of its users have been alive.
It has survived so long because it’s a power tool that allows people to do
complex things with just a few keystrokes. More importantly, it helps them
combine existing programs in new ways and automate repetitive tasks so they
aren’t typing the same things over and over again. Use of the shell is
fundamental to using a wide range of other powerful tools and computing
resources (including “high-performance computing” supercomputers).
Set up the directory where we will store all of the GitHub repositories
during the Institute,
Make sure Git is installed correctly, and
Gain comfort using bash so that we can use it to work with Git & GitHub.
Accessing Shell
How one accesses the shell depends on the operating system being used.
OS X: The bash program is accessed through the Terminal application. You can search for it in Spotlight.
Windows: Git Bash came with your download of Git for Windows. Search Git Bash.
Linux: Default is usually bash, if not, type bash in the terminal.
Bash Commands
$
The dollar sign is a prompt, which shows us that the shell is waiting for
input; your shell may use a different character as a prompt and may add
information before the prompt.
When typing commands, either from these tutorials or from other sources, do not
type the prompt ($), only the commands that follow it.
In these tutorials, subsequent lines that follow a prompt and do not start with
$ are the output of the command.
Print working directory -- pwd
Next, let's find out where we are by running a command called pwd -- print
working directory. At any moment, our current working directory is our
current default directory. I.e., the directory that the computer assumes we
want to run commands in unless we explicitly specify something else. Here, the
computer's response is /Users/neon, which is NEON’s home directory:
$ pwd
/Users/neon
**Data Tip:** Home Directory Variation - The home
directory path will look different on different operating systems. On Linux it
may look like `/home/neon`, and on Windows it will be similar to
`C:\Documents and Settings\neon` or `C:\Users\neon`.
(It may look slightly different for different versions of Windows.)
In future examples, we use Mac output as the default; Linux and Windows
output may differ slightly, but should be generally similar.
If you are not in your home directory by default, you can get there by typing:
$ cd ~
Listing contents -- ls
Now let's learn the command that will let us see the contents of our own
file system. We can see what's in our home directory by running ls, short for "listing".
$ ls
Applications Documents Library Music Public
Desktop Downloads Movies Pictures
(Again, your results may be slightly different depending on your operating
system and how you have customized your filesystem.)
ls prints the names of the files and directories in the current directory in
alphabetical order, arranged neatly into columns.
**Data Tip:** What is a directory? That is a folder! Read the section on
Directory vs. Folder
if you find the wording confusing.
Change directory -- cd
Now we want to move into our Documents directory where we will create a
directory to host our GitHub repository (to be created in Week 2). The command
to change locations is cd followed by a directory name if it is a
sub-directory in our current working directory or a file path if not.
cd stands for "change directory", which is a bit misleading: the command
doesn't change the directory, it changes the shell's idea of what directory we
are in.
To move to the Documents directory, we can use the following series of commands
to get there:
$ cd Documents
These commands will move us from our home directory into our Documents
directory. cd doesn't print anything, but if we run pwd after it, we can
see that we are now in /Users/neon/Documents.
If we run ls now, it lists the contents of /Users/neon/Documents, because
that's where we now are:
$ pwd
/Users/neon/Documents
$ ls
data/ elements/ animals.txt planets.txt sunspot.txt
Now we can create a new directory called GitHub that will contain our GitHub
repositories when we create them later.
We can use the command mkdir NAME -- "make directory":
$ mkdir GitHub
There is no output.
Since GitHub is a relative path (i.e., doesn't have a leading slash), the
new directory is created in the current working directory:
$ ls
data/ elements/ GitHub/ animals.txt planets.txt sunspot.txt
**Data Tip:** This material is a much abbreviated form of the
Software Carpentry Unix Shell for Novices
workshop. Want a better understanding of shell? Check out the full series!
Is Git Installed Correctly?
All of the above commands are bash commands, not Git specific commands. We
still need to check that Git installed correctly. One of the easiest
ways is to check which version of Git we have installed.
Git commands start with git.
We can use git --version to see which version of Git is installed
$ git --version
git version 2.5.4 (Apple Git-61)
If you get a git version number, then Git is installed!
If you get an error, Git isn’t installed correctly. Reinstall and repeat.
Setup Git Global Configurations
Now that we know Git is correctly installed, we can get it set up to work with.
When we use Git on a new computer for the first time, we need to configure a
few things. Below are a few examples of configurations we will set as we get
started with Git:
our name and email address,
to colorize our output,
what our preferred text editor is,
and that we want to use these settings globally (i.e. for every project)
On a command line, Git commands are written as git verb, where verb is what
we actually want to do.
Set up your own Git configuration with the following commands, using your own
information in place of the placeholder values:
$ git config --global user.name "Your Name"
$ git config --global user.email "you@youremail.com"
$ git config --global color.ui "auto"
$ git config --global core.editor "nano"
The four commands we just ran above only need to be run once:
the flag --global tells Git to use the settings for every project in your user
account on this computer.
You can check your settings at any time:
$ git config --list
You can change your configuration as many times as you want; just use the
same commands to choose another editor or update your email address.
Now that Git is set up, you will be ready to start the Week 2 materials to learn
about version control and how Git & GitHub work.
**Data Tip:**
GitDesktop
is a GUI (one of many) for
using GitHub that is free and available for both Mac and Windows operating
systems. NEON Data Skills workshops & Data Institutes will only teach how to
use Git through the command line, and will not support use of GitDesktop
(or any other GUI); however, you are welcome to check it out and use it if you
would like to.
Run the installer and follow the steps below (these may look slightly different depending on Git version number):
Welcome to the Git Setup Wizard: Click on "Next".
Information: Click on "Next".
Select Destination Location: Click on "Next".
Select Components: Click on "Next".
Select Start Menu Folder: Click on "Next".
Adjusting your PATH environment:
Select "Use Git from the Windows Command Prompt" and click on "Next".
If you forget to do this, programs that you need for the event will not work properly.
If this happens, rerun the installer and select the appropriate option.
Configuring the line ending conversions: Click on "Next".
Keep "Checkout Windows-style, commit Unix-style line endings" selected.
Configuring the terminal emulator to use with Git Bash:
Select "Use Windows' default console window" and click on "Next".
Configuring experimental performance tweaks: Click on "Next".
Completing the Git Setup Wizard: Click on "Finish".
This will provide you with both Git and Bash in the Git Bash program.
Install Bash for Mac OS X
The default shell in all versions of Mac OS X is bash, so no
need to install anything. You access bash from the Terminal
(found in
/Applications/Utilities). You may want to keep
Terminal in your dock for this workshop.
Install Bash for Linux
The default shell is usually Bash, but if your
machine is set up differently you can run it by opening a
terminal and typing bash. There is no need to
install anything.
Git Setup
Git is a version control system that lets you track who made changes to what
when and has options for easily updating a shared or public version of your code
on GitHub. You will need a
supported
web browser (current versions of Chrome, Firefox or Safari, or Internet Explorer
version 9 or above).
Git installation instructions borrowed and modified from
Software Carpentry.
Git for Windows
Git should be installed on your computer as part of your Bash install.
Git on Macs
Install Git on Macs by downloading and running the most recent installer for
"mavericks" if you are using OS X 10.9 and higher -or- if using an
earlier OS X, choose the most recent "snow leopard" installer, from
this list.
After installing Git, there will not be anything in your
/Applications folder, as Git is a command line program.
**Data Tip:**
If you are running Mac OS X El Capitan, you might encounter errors when trying to
use Git. Make sure you update Xcode.
Read more in this Stack Overflow issue.
Git on Linux
If Git is not already available on your machine you can try to
install it via your distro's package manager. For Debian/Ubuntu run
sudo apt-get install git and for Fedora run
sudo yum install git.
Setting Up R & RStudio
Windows R/RStudio Setup
Please visit the CRAN website to download the latest version of R for Windows.
Download the latest version of RStudio for Windows.
Double click the file to install it.
Once R and RStudio are installed, click to open RStudio. If you don't get any error messages you are set. If there is an error message, you will need to re-install the program.
Linux R/RStudio Setup
R is available through most Linux package managers.
You can download the binary files for your distribution
from CRAN. Or
you can use your package manager (e.g. for Debian/Ubuntu
run sudo apt-get install r-base and for Fedora run
sudo yum install R).
Under Installers select the version for your distribution.
Once it's downloaded, double click the file to install it
Once R and RStudio are installed, click to open RStudio. If you don't get any error messages you are set. If there is an error message, you will need to re-install the program.
Once R and RStudio are installed (in
Install Git, Bash Shell, R & RStudio
), open RStudio to make sure it works and you don’t get any error messages. Then,
install the needed R packages.
Install/Update R Packages
Please make sure all of these packages are installed and up to date on your
computer prior to the Institute.
The rhdf5 package is not on CRAN and must be installed directly from
Bioconductor. This can be done using these two commands directly in your R
console:
install.packages("BiocManager")
BiocManager::install("rhdf5")
From the section titled HDF-Java 2.1x Pre-Built Binary Distributions
select the HDFView download option that matches the operating system and
computer setup (32 bit vs 64 bit) that you have. The download will start
automatically.
Open the downloaded file.
Mac - You may want to add the HDFView application to your Applications
directory.
Windows - Unzip the file, open the folder, run the .exe file, and follow
directions to complete installation.
Open HDFView to ensure that the program installed correctly.
**Data Tip:**
The HDFView application requires Java to be up to date. If you are having issues
opening HDFView, try to update Java first!
Install QGIS
QGIS is a free, open-source GIS program. Installation is optional for the 2018
Data Institute. We will not directly be working with QGIS; however, some past
participants have found it useful to have during the capstone projects.
To install QGIS:
Download the QGIS installer from the
QGIS download page. Then follow the installation directions below for your
operating system.
Windows
Select the appropriate QGIS Standalone Installer Version for your computer.
The download will automatically start.
Open the .exe file and follow prompts to install (installation may take a
while).
Open QGIS to ensure that it is properly downloaded and installed.
Mac
Select the current version of QGIS. The file download (.dmg format) should
start automatically.
Once downloaded, run the .dmg file. When you run the .dmg, it will create a
directory of installer packages that you need to run in a particular order.
IMPORTANT: read the READ ME BEFORE INSTALLING.rtf file!
Install the packages in the directory in the order indicated.
GDAL Complete.pkg
NumPy.pkg
matplotlib.pkg
QGIS.pkg - NOTE: you need to install GDAL, NumPy and matplotlib in order to
successfully install QGIS on your Mac!
**Data Tip:** If your computer doesn't allow you to
open these packages because they are from an unknown developer, right click on
the package and select Open With > Installer (default). You will then be asked
if you want to open the package. Select Open, and the installer will open.
Once all of the packages are installed, open QGIS to ensure that it is properly
installed.
Linux
Select the appropriate download for your computer system.
Note: if you have previous versions of QGIS installed on your system, you may
run into problems. Check out
Verifiability and reproducibility are among the cornerstones of the scientific
process. They are what allow scientists to "stand on the shoulders of giants".
Maintaining reproducibility requires that all data management, analysis, and
visualization steps behind the results presented in a paper are documented and
available in full detail. Reproducibility here means that someone else should
either be able to obtain the same results given all the documented inputs and
the published instructions for processing them, or if not, the reasons why
should be apparent.
From Reproducible Science Curriculum
## Learning Objectives
At the end of this activity, you will be able to:
Summarize the four facets of reproducibility.
Describe several ways that adopting reproducible workflows can improve your research.
Explain several ways you can incorporate reproducible science techniques into
your own research.
Getting Started with Reproducible Science
Please view the online slide-show below which summarizes concepts taught in the
Reproducible Science Curriculum.
Reproducibility spectrum for published research.
Source: Peng, RD Reproducible Research in Computational Science Science (2011): 1226–1227 via Reproducible Science Curriculum
The Nature Publishing group has also created a
Reporting Checklist
for its authors that focuses primarily on reporting issues but also includes
sections for sharing code.
A recent open-access issue of
Ecography
focusing on reproducible ecology and software packages available for use.
A nice short blog post with an annotated bibliography of "Top 10 papers discussing reproducible research in computational science" from Lorena Barba:
Barba group reproducibility syllabus.
After completing this tutorial, you will be able to:
Define hyperspectral remote sensing.
Explain the fundamental principles of hyperspectral remote sensing data.
Describe the key attributes that are required to effectively work with
hyperspectral remote sensing data in tools like R or Python.
Describe what a "band" is.
Mapping the Invisible
About Hyperspectral Remote Sensing Data
The electromagnetic spectrum can be divided into thousands of bands representing
different types of light energy. Imaging spectrometers (instruments that collect
hyperspectral data) break the electromagnetic spectrum into groups of bands
that support classification of objects by their spectral properties on the
earth's surface. Hyperspectral data consist of many bands -- up to hundreds of
bands -- covering the electromagnetic spectrum.
The NEON imaging spectrometer collects data across the 380 nm to 2510 nm portion
of the electromagnetic spectrum, in bands that are approximately 5 nm in
width. This results in a hyperspectral data cube that contains approximately
426 bands - which means big, big data.
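To get a feel for what a data cube means in practice, here is a minimal sketch (using made-up dimensions and random values, not actual NEON data) of how such a cube is typically stored and indexed as a 3D array:

```python
import numpy as np

# A toy hyperspectral cube: 100 x 100 pixels, 426 spectral bands
rows, cols, n_bands = 100, 100, 426
cube = np.random.rand(rows, cols, n_bands)

# One band across the whole scene -> a 2D image
band_50 = cube[:, :, 50]        # shape (100, 100)

# All bands for a single pixel -> that pixel's spectral signature
spectrum = cube[20, 30, :]      # shape (426,)
```

Slicing along the band axis yields an image; slicing at a pixel yields a spectrum. This is why hyperspectral files grow so quickly: every pixel carries hundreds of values instead of a handful.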
Key Metadata for Hyperspectral Data
Bands and Wavelengths
A band represents a group of wavelengths. For example, the wavelength values
between 695nm and 700nm might be one band as captured by an imaging spectrometer.
The imaging spectrometer collects reflected light energy in a pixel for light
in that band. Often when you work with a multi or hyperspectral dataset, the
band information is reported as the center wavelength value. This value
represents the center point value of the wavelengths represented in that band.
Thus, in a band spanning 695-700 nm, the center wavelength would be 697.5 nm.
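The center wavelength is simply the midpoint of the band's lower and upper edges; a quick sketch:

```python
def center_wavelength(lower_nm, upper_nm):
    """Midpoint of a band defined by its lower and upper wavelength edges (nm)."""
    return (lower_nm + upper_nm) / 2

print(center_wavelength(695, 700))  # -> 697.5
```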
Imaging spectrometers collect reflected light information within
defined bands or regions of the electromagnetic spectrum. Source: National
Ecological Observatory Network (NEON)
Spectral Resolution
The spectral resolution of a dataset that has more than one band refers to the
width of each band in the dataset. In the example above, a band was defined as
spanning 695-700 nm. The width, or spectral resolution, of that band is thus 5
nanometers. To see an example of this, check out the band widths for the
Landsat sensors.
Full Width Half Max (FWHM)
The full width half max (FWHM) will also often be reported in a multi or
hyperspectral dataset. This value represents the spread of the band around that
center point.
The Full Width Half Max (FWHM) of a band is the width, in nanometers, of
the band at half of its maximum response, measured around the band center. In this
case, the FWHM for Band C is 5 nm.
In the illustration above, the band that covers 695-700nm has a FWHM of 5 nm.
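If a band's response is modeled as roughly Gaussian (an illustrative assumption; real sensor responses vary), the FWHM relates to the Gaussian's standard deviation by FWHM = 2 * sqrt(2 * ln 2) * sigma, or about 2.355 * sigma:

```python
import math

def gaussian_fwhm(sigma_nm):
    """FWHM of a Gaussian spectral response with standard deviation sigma (nm)."""
    return 2 * math.sqrt(2 * math.log(2)) * sigma_nm

def sigma_from_fwhm(fwhm_nm):
    """Standard deviation implied by a reported FWHM (nm)."""
    return fwhm_nm / (2 * math.sqrt(2 * math.log(2)))

# A band with a reported FWHM of 5 nm implies sigma of about 2.12 nm
print(round(sigma_from_fwhm(5), 2))  # -> 2.12
```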
While a general spectral resolution of the sensor is often provided, not all
sensors create bands of uniform widths. For instance, bands 1-9 of Landsat 8 are
listed below (courtesy of USGS):