Workshop
NEON Data Institute 2018: Remote Sensing with Reproducible Workflows using Python
National Ecological Observatory Network
Data Institute Overview
The 2018 Institute focuses on remote sensing of vegetation using open source tools and reproducible science workflows -- the primary programming language will be Python.
Through data intensive live-coding, short presentations, and small group work, we will cover topics including:
- Background theoretical concepts related to LiDAR and hyperspectral remote sensing
- Fundamental concepts required to ingest, visualize, process, and analyze NEON hyperspectral and LiDAR data.
- Best practices on reproducible research workflows: the importance of documentation, organization, version control, and automation.
- Scientific spatio-temporal applications of remote sensing data using open-source tools, namely Python and Jupyter Notebooks.
- Machine learning for prediction of biophysical variables such as above-ground biomass using NEON LiDAR and ground measurements.
- Classification of hyperspectral data using deep-learning approaches.
- Using remote sensing data products with in situ data to quantify uncertainty associated with remote sensing observations.
This Institute will be held at the NEON project headquarters 9-14 July 2018. In addition to the six days of in-person training, there are three weeks of pre-institute materials to ensure that everyone comes to the Institute ready to work in a collaborative research environment. Pre-institute materials are online and individually paced; expect to spend 1-5 hrs/week depending on your familiarity with the topic.
Schedule
Please note that slight changes may be made to the schedule of the Data Institute.
Time | Day | Description |
---|---|---|
-- | 1 - 7 June | Computer Setup Materials |
-- | 8 -14 June | Intro to NEON & Reproducible Science |
-- | 15 - 21 June | Version Control & Collaborative Science with Git & GitHub |
-- | 22 June - 5 July | Documentation of Your Workflow with Jupyter Notebooks – due to 4 July holiday this will be a 2-week interval |
-- | 9 - 14 July | Data Institute |
7:50am - 6:30pm | Monday | Working with HDF5 & Hyperspectral Remote Sensing |
8:00am - 6:30pm | Tuesday | Reproducible & Automated Workflows, Working with LiDAR data |
8:00am - 6:30pm | Wednesday | Remote Sensing Uncertainty |
8:00am - 6:30pm | Thursday | Measuring Vegetation & Working at Scale with CyVerse |
8:00am - 6:30pm | Friday | Hyperspectral Classification/Waveform Lidar & Applications in Remote Sensing |
9:00am - 5:00pm | Saturday | Applications cont. & Presentations |
Instructors
Dr. Tristan Goulden, Associate Scientist-Airborne Platform, Battelle NEON Project: Tristan is a Remote Sensing Scientist at NEON specializing in LiDAR. He also co-leads NEON’s Remote Sensing IPT (integrated product team), which focuses on developing algorithms and associated documentation for all of NEON’s remote sensing data products. His past research focus has been on characterizing uncertainty in LiDAR observations/processing and propagating that uncertainty into downstream data products. During his PhD, he focused on developing uncertainty models for topographic attributes (elevation, slope, aspect) and hydrological products such as watershed boundaries and stream networks, as well as stream flow and erosion at the watershed scale. His past experience in LiDAR has included all aspects of the LiDAR workflow, including mission planning, airborne operations, processing of raw data, and development of higher-level data products. During his graduate research he applied these skills on LiDAR flights over several case study watersheds, as well as some amazing LiDAR flights over the Canadian Rockies for monitoring change in alpine glaciers. His software experience for LiDAR processing includes Applanix’s POSPac MMS, Optech’s LMS software, Riegl’s LMS software, LAStools, Pulsetools, TerraScan, QT Modeler, ArcGIS, QGIS, Surfer, and self-written scripts in Matlab for point-cloud, raster, and waveform processing.
Bridget Hass, Remote Sensing Data Processing Technician, Battelle NEON Project: Bridget is a Remote Sensing Data Scientist at NEON. Her daily work includes processing LiDAR and hyperspectral data collected by NEON's Aerial Observation Platform (AOP). Prior to joining NEON, Bridget worked in marine geophysics as a shipboard technician and research assistant. She is excited to be a part of producing NEON's AOP data and to share techniques for working with this data during the 2018 Data Institute.
Dr. Naupaka Zimmerman, Assistant Professor of Biology, University of San Francisco: Naupaka’s research focuses on the microbial ecology of plant-fungal interactions. Naupaka brings to the course experience and enthusiasm for reproducible workflows developed after discovering how challenging it is to keep track of complex analyses in his own dissertation and postdoctoral work. As a co-founder of the International Network of Next-Generation Ecologists and an instructor and lesson maintainer for Software Carpentry and Data Carpentry, Naupaka is very interested in providing and improving training experiences in open science and reproducible research methods.
Dr. Tyson Swetnam, Science Informatician with CyVerse and Research Associate with Bio5 Institute at the University of Arizona: Tyson’s recent research focuses on geomorphology and biogeochemical cycling and involves collaborations with many groups including the University of Utah, the Agricultural Research Service Southwest Watershed Research Center, the Arizona Remote Sensing Center, and Santa Rita Experimental Range. With CyVerse, Tyson is working to deploy the Spatial Data Infrastructure (SDI) for life science and agricultural research. He also works closely with the NSF Critical Zone Observatory Network, OpenTopography, and XSEDE deploying scalable GIS applications running on CyVerse resources. In the past he has collaborated with numerous state and federal agencies on geospatial research projects and is always on the lookout for new collaborations in both the public and private sector. You can follow Tyson on YouTube and Twitter!
Registration & Logistics
Registration for the Data Institute is now closed. Applications for future institutes open in January or February and are announced in the Upcoming Events section.
Read here for more information on the logistics of the Data Institute.
This page includes all of the materials needed for the Data Institute including the pre-institute materials. Please use the sidebar menu to find the appropriate week or day. If you have problems with any of the materials please email us or use the comments section at the bottom of the appropriate page.
Pre-Institute: Computer Set Up Materials
It is important that you have your computer set up prior to diving into the pre-institute materials in week 2 on 15 June 2018! Please review the links below to set up the laptop you will be bringing to the Data Institute.
Let's Get Your Computer Set Up!
Go to each of the following tutorials and complete the directions to set your computer up for the Data Institute. On Thursday of the Data Institute, we will focus on high performance/cloud computing with Tyson Swetnam from CyVerse. In order to have access to these materials, please also complete the necessary downloads and account creations in the Prerequisites -> Downloads, access, and services (only) section of Tyson's materials.
Install Git, Bash Shell, Python
This page outlines the tools and resources that you will need to install Git, Bash and Python applications onto your computer as the first step of our Python skills tutorial series.
Checklist
Detailed directions to accomplish each objective are below.
- Install Bash shell (or shell of preference)
- Install Git
- Install Python 3.x
Bash/Shell Setup
Install Bash for Windows
- Download the Git for Windows installer.
- Run the installer and follow the steps below:
- Welcome to the Git Setup Wizard: Click on "Next".
- Information: Click on "Next".
- Select Destination Location: Click on "Next".
- Select Components: Click on "Next".
- Select Start Menu Folder: Click on "Next".
- Adjusting your PATH environment: Select "Use Git from the Windows Command Prompt" and click on "Next". If you forget to do this, programs that you need for the event will not work properly. If this happens, rerun the installer and select the appropriate option.
- Configuring the line ending conversions: Click on "Next". Keep "Checkout Windows-style, commit Unix-style line endings" selected.
- Configuring the terminal emulator to use with Git Bash: Select "Use Windows' default console window" and click on "Next".
- Configuring experimental performance tweaks: Click on "Next".
- Completing the Git Setup Wizard: Click on "Finish".
This will provide you with both Git and Bash in the Git Bash program.
Install Bash for Mac OS X
The default shell in all versions of Mac OS X is Bash, so no need to install anything. You access Bash from the Terminal (found in /Applications/Utilities). You may want to keep Terminal in your dock for this workshop.
Install Bash for Linux
The default shell is usually Bash, but if your machine is set up differently you can run it by opening a terminal and typing bash. There is no need to install anything.
Git Setup
Git is a version control system that lets you track who made changes to what when and has options for easily updating a shared or public version of your code on GitHub. You will need a supported web browser (current versions of Chrome, Firefox or Safari, or Internet Explorer version 9 or above).
Git installation instructions borrowed and modified from Software Carpentry.
Git for Windows
Git should be installed on your computer as part of your Bash install.
Git on Mac OS X
Video Tutorial
Install Git on Macs by downloading and running the most recent installer for "mavericks" if you are using OS X 10.9 and higher -or- if using an earlier OS X, choose the most recent "snow leopard" installer, from this list. After installing Git, there will not be anything in your /Applications folder, as Git is a command line program.
Git on Linux
If Git is not already available on your machine you can try to install it via your distro's package manager. For Debian/Ubuntu run sudo apt-get install git and for Fedora run sudo yum install git.
Setting Up Python
Python is a popular language for scientific computing and data science, as well as a great language for general-purpose programming. Installing all of the scientific packages individually can be a bit difficult, so we recommend using an all-in-one installer, like Anaconda.
Regardless of how you choose to install it, **please make sure your environment is set up with Python version 3.7** (at the time of writing, the gdal package did not work with the newest Python release). Python 2.x is quite different from Python 3.x, so you do need to install 3.x and set up the 3.7 environment.
We will teach using Python in the Jupyter Notebook environment, a programming environment that runs in a web browser. For this to work you will need a reasonably up-to-date browser. The current versions of the Chrome, Safari and Firefox browsers are all supported (some older browsers, including Internet Explorer version 9 and below, are not). You can choose to not use notebooks in the course, however, we do recommend you download and install the library so that you can explore this tool.
Windows
Download and install Anaconda. Download the default Python 3 installer (3.7). Use all of the defaults for installation except make sure to check Make Anaconda the default Python.
Mac OS X
Download and install Anaconda. Download the Python 3.x installer, choosing either the graphical installer or the command-line installer (3.7). For the graphical installer, use all of the defaults for installation. For the command-line installer open Terminal, navigate to the directory with the download then enter:
bash Anaconda3-2020.11-MacOSX-x86_64.sh
(or whatever your file name is)
Linux
Download and install Anaconda. Download the installer that matches your operating system and save it in your home folder, choosing the default Python 3 installer. Open a terminal window and navigate to your downloads folder. Type
bash Anaconda3-2020.11-Linux-ppc64le.sh
and then press tab. The name of the file you just downloaded should appear. Press enter and follow the text-only prompts. When there is a colon at the bottom of the screen, press the down arrow to move down through the text. Type yes and press enter to approve the license. Press enter to approve the default location for the files. Type yes and press enter to prepend Anaconda to your PATH (this makes the Anaconda distribution the default Python).
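After installation, a quick sanity check (a sketch using only the standard library) confirms that the Anaconda distribution is now the default Python:

```python
# Confirm which Python interpreter is now the default on this machine.
import sys

print(sys.executable)  # should point inside your anaconda3 directory
print(sys.version)     # should report a Python 3.x version
```

If sys.executable does not point at your Anaconda install, re-run the installer and answer yes to the PATH prompt.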
Install Python packages
We need to install several packages into the Python environment to be able to work with the remote sensing data:
- gdal
- h5py
If you are new to working with the command line, you may wish to complete the next set of setup instructions, which provide an intro to the command line (Bash), prior to completing these package installation instructions.
Windows
Create a new Python 3.7 environment by opening Windows Command Prompt and typing
conda create -n py37 python=3.7 anaconda
When prompted, activate the py37 environment in Command Prompt by typing
activate py37
You should see (py37) at the beginning of the command line. You can also test that you are using the correct version by typing python --version.
Install Python package(s):
- gdal:
conda install gdal
- h5py:
conda install h5py
Note: You may only need to install gdal, as the other packages may already be included in the default Anaconda environment.
Mac OS X
Create a new Python 3.7 environment by opening Terminal and typing
conda create -n py37 python=3.7 anaconda
This may take a minute or two.
When prompted, activate the py37 environment in Terminal by typing
source activate py37
You should see (py37) at the beginning of the command line. You can also test that you are using the correct version by typing python --version.
Install Python package(s):
- gdal:
conda install gdal
- h5py:
conda install h5py
Linux
Open the default terminal application (on Ubuntu, that will be gnome-terminal).
Install Python package(s):
- gdal:
conda install gdal
- h5py:
conda install h5py
Set up Jupyter Notebook Environment
In your terminal application, navigate (cd) to the directory where you want the Jupyter Notebooks to be saved (or where they already exist). Open Jupyter Notebook with
jupyter notebook
Once the notebook is open, check which version of Python you are using by running the following commands:
# check what version of Python you are using.
import sys
sys.version
You should now be able to work in the notebook. The gdal package occasionally has problems with some versions of Python, so test loading it using import gdal.
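A small helper along these lines can run those checks for you before you start working. This is a sketch (the check_environment name is our own), and note that some gdal builds are instead importable as osgeo.gdal:

```python
# Check the Python version and whether required packages can be imported,
# without actually importing them (find_spec only locates the module).
import importlib.util
import sys

def check_environment(required=("gdal", "h5py")):
    """Return the (major, minor) Python version and a package -> available map."""
    available = {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}
    return sys.version_info[:2], available

version, packages = check_environment()
print("Python", version)
for name, ok in packages.items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```

If a package reports MISSING, return to the conda install steps above before continuing.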
Additional Resources
- Setting up the Python Environment section from the Python Bootcamp
- Conda Help: setting up an environment
- iPython documentation: Kernels
Set up GitHub Working Directory - Quick Intro to Bash
Checklist
Once you have Git and Bash installed, you are ready to configure Git.
On this page you will:
- Create a directory for all future GitHub repositories created on your computer
To ensure Git is properly installed and to create a working directory for GitHub, you will need to know a bit of shell -- brief crash course below.
Crash Course on Shell
The Unix shell has been around longer than most of its users have been alive. It has survived so long because it’s a power tool that allows people to do complex things with just a few keystrokes. More importantly, it helps them combine existing programs in new ways and automate repetitive tasks so they aren’t typing the same things over and over again. Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including “high-performance computing” supercomputers).
This section is an abbreviated form of Software Carpentry’s The Unix Shell for Novices workshop lesson series. Content and wording (including all the above) are heavily copied, and credit is due to those creators (full author list).
Our goal with shell is to:
- Set up the directory where we will store all of the GitHub repositories during the Institute,
- Make sure Git is installed correctly, and
- Gain comfort using bash so that we can use it to work with Git & GitHub.
Accessing Shell
How one accesses the shell depends on the operating system being used.
- OS X: The bash program is called Terminal. You can search for it in Spotlight.
- Windows: Git Bash came with your download of Git for Windows. Search Git Bash.
- Linux: The default is usually Bash; if not, type bash in the terminal.
Bash Commands
$
The dollar sign is a prompt, which shows us that the shell is waiting for input; your shell may use a different character as a prompt and may add information before the prompt.
When typing commands, either from these tutorials or from other sources, do not type the prompt ($), only the commands that follow it. In these tutorials, subsequent lines that follow a prompt and do not start with $ are the output of the command.
Print working directory -- pwd & listing contents -- ls
Next, let's find out where we are by running a command called pwd -- print working directory. At any moment, our current working directory is our current default directory, i.e., the directory that the computer assumes we want to run commands in unless we explicitly specify something else. Here, the computer's response is /Users/neon, which is NEON’s home directory:
$ pwd
/Users/neon
If you are not in your home directory by default, you can get there by typing:
$ cd ~
Now let's learn the command that will let us see the contents of our own file system. We can see what's in our home directory by running ls -- listing.
$ ls
Applications Documents Library Music Public
Desktop Downloads Movies Pictures
(Again, your results may be slightly different depending on your operating system and how you have customized your filesystem.)
ls prints the names of the files and directories in the current directory in alphabetical order, arranged neatly into columns.
Change directory -- cd
Now we want to move into our Documents directory, where we will create a directory to host our GitHub repository (to be created in Week 2). The command to change locations is cd, followed by a directory name if it is a sub-directory of our current working directory, or a file path if not. cd stands for "change directory", which is a bit misleading: the command doesn't change the directory, it changes the shell's idea of what directory we are in.
To move to the Documents directory, we can use the following command:
$ cd Documents
This command moves us from our home directory into our Documents directory. cd doesn't print anything, but if we run pwd after it, we can see that we are now in /Users/neon/Documents.
If we run ls now, it lists the contents of /Users/neon/Documents, because that's where we now are:
$ pwd
/Users/neon/Documents
$ ls
data/ elements/ animals.txt planets.txt sunspot.txt
To use cd, you need to be familiar with paths; if not, read the section on Full, Base, and Relative Paths.
Make a directory -- mkdir
Now we can create a new directory called GitHub that will contain our GitHub repositories when we create them later. We can use the command mkdir NAME -- "make directory":
$ mkdir GitHub
There is no output.
Since GitHub is a relative path (i.e., it doesn't have a leading slash), the new directory is created in the current working directory:
$ ls
data/ elements/ GitHub/ animals.txt planets.txt sunspot.txt
Is Git Installed Correctly?
All of the above commands are Bash commands, not Git-specific commands. We still need to check that Git is installed correctly. One of the easiest ways is to check which version of Git we have installed.
Git commands start with git
.
We can use git --version to see which version of Git is installed:
$ git --version
git version 2.5.4 (Apple Git-61)
If you get a git version number, then Git is installed!
If you get an error, Git isn’t installed correctly. Reinstall and repeat.
Setup Git Global Configurations
Now that we know Git is correctly installed, we can get it set up to work with.
The text below is modified slightly from Software Carpentry's Setting up Git lesson.
When we use Git on a new computer for the first time, we need to configure a few things. Below are a few examples of configurations we will set as we get started with Git:
- our name and email address,
- to colorize our output,
- what our preferred text editor is,
- and that we want to use these settings globally (i.e. for every project)
On a command line, Git commands are written as git verb, where verb is what we actually want to do.
Set up your own Git configuration with the following commands, using your own information instead of NEON's.
$ git config --global user.name "NEON Science"
$ git config --global user.email "neon@BattelleEcology.org"
$ git config --global color.ui "auto"
Then set up your favorite text editor following this table:
Editor | Configuration command |
---|---|
nano | $ git config --global core.editor "nano -w" |
Text Wrangler | $ git config --global core.editor "edit -w" |
Sublime Text (Mac) | $ git config --global core.editor "subl -n -w" |
Sublime Text (Win, 32-bit install) | $ git config --global core.editor "'c:/program files (x86)/sublime text 3/sublime_text.exe' -w" |
Sublime Text (Win, 64-bit install) | $ git config --global core.editor "'c:/program files/sublime text 3/sublime_text.exe' -w" |
Notepad++ (Win) | $ git config --global core.editor "'c:/program files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" |
Kate (Linux) | $ git config --global core.editor "kate" |
Gedit (Linux) | $ git config --global core.editor "gedit -s -w" |
emacs | $ git config --global core.editor "emacs" |
vim | $ git config --global core.editor "vim" |
The four commands we just ran above only need to be run once: the flag --global tells Git to use these settings for every project in your user account on this computer.
You can check your settings at any time:
$ git config --list
You can change your configuration as many times as you want; just use the same commands to choose another editor or update your email address.
Now that Git is set up, you will be ready to start the Week 2 materials to learn about version control and how Git & GitHub work.
Install QGIS & HDF5View
Install HDFView
The free HDFView application allows you to explore the contents of an HDF5 file.
To install HDFView:
-
Click to go to the download page.
-
From the section titled HDF-Java 2.1x Pre-Built Binary Distributions select the HDFView download option that matches the operating system and computer setup (32 bit vs 64 bit) that you have. The download will start automatically.
-
Open the downloaded file.
- Mac - You may want to add the HDFView application to your Applications directory.
- Windows - Unzip the file, open the folder, run the .exe file, and follow directions to complete installation.
- Open HDFView to ensure that the program installed correctly.
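The same exploration can also be done programmatically with the h5py package installed earlier. This sketch (the recursive walk below is our own illustration, not part of HDFView) mimics HDFView's tree view:

```python
# Print every group and dataset in an HDF5 file, similar to HDFView's tree view.
import h5py

def print_structure(path):
    """Recursively list the groups and datasets inside an HDF5 file."""
    def visitor(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"Dataset: {name}  shape={obj.shape}  dtype={obj.dtype}")
        else:
            print(f"Group:   {name}")
    with h5py.File(path, "r") as f:
        f.visititems(visitor)

# Usage: print_structure("NEON_example.h5")  # hypothetical file name
```

Either approach (HDFView or h5py) works for getting a first look at NEON's HDF5-formatted hyperspectral data.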
Install QGIS
QGIS is a free, open-source GIS program. Installation is optional for the 2018 Data Institute. We will not directly be working with QGIS, however, some past participants have found it useful to have during the capstone projects.
To install QGIS:
Download the QGIS installer on the QGIS download page here. Follow the installation directions below for your operating system.
Windows
- Select the appropriate QGIS Standalone Installer Version for your computer.
- The download will automatically start.
- Open the .exe file and follow prompts to install (installation may take a while).
- Open QGIS to ensure that it is properly downloaded and installed.
Mac OS X
- Go to the KyngChaos QGIS download page. This will take you to a new page.
- Select the current version of QGIS. The file download (.dmg format) should start automatically.
- Once downloaded, run the .dmg file. When you run the .dmg, it will create a directory of installer packages that you need to run in a particular order. IMPORTANT: read the READ ME BEFORE INSTALLING.rtf file!
Install the packages in the directory in the order indicated.
- GDAL Complete.pkg
- NumPy.pkg
- matplotlib.pkg
- QGIS.pkg - NOTE: you need to install GDAL, NumPy and matplotlib in order to successfully install QGIS on your Mac!
Once all of the packages are installed, open QGIS to ensure that it is properly installed.
LINUX
- Select the appropriate download for your computer system.
- Note: if you have previous versions of QGIS installed on your system, you may run into problems. Check out
Pre Week 1: Introduction to NEON & Reproducible Science
In the first week of the pre-institute activities, we will review the NEON project. We will also provide you with a general overview of reproducible science. Over the next few weeks, we will ask you to review materials and submit something that demonstrates you have mastered them.
Learning Objectives
After completing these activities, you will be able to:
- Describe the NEON project, the data collected, and where to access more information about the project.
- Know how to access other code resources for working with NEON data.
- Explain why reproducible workflows are useful and important in your research.
Week 1 Assignment
After reviewing the materials below, please write up a summary of a project that you are interested in working on at the Data Institute. Be sure to consider what data you will need (NEON or other). You will have time to refine your idea over the next few weeks. Save this document, as you will submit it next week as a part of the week 2 materials!
Deadline: Please complete this by Thursday, June 14th at 11:59pm MDT.
Week 1 Materials
Please carefully read and review the materials below:
NEON Introduction will be posted by 7 June
Introduction to the National Ecological Observatory Network (NEON)
Here we will provide an overview of the National Ecological Observatory Network (NEON). Please carefully read through these materials and links that discuss NEON’s mission and design.
Learning Objectives
At the end of this activity, you will be able to:
- Explain the mission of the National Ecological Observatory Network (NEON).
- Explain how sites are located within the NEON project design.
- Explain the different types of data that will be collected and provided by NEON.
The NEON Project Mission & Design
To capture ecological heterogeneity across the United States, NEON’s design divides the continent into 20 statistically different eco-climatic domains. Each NEON field site is located within an eco-climatic domain.
The Science and Design of NEON
To gain a better understanding of the broad scope of NEON, watch this 4-minute video.
Please read the following page about NEON's mission.
Data Institute Participants -- Thought Question: How might/does the NEON project intersect with your current research or future career goals?
NEON's Spatial Design
The Spatial Design of NEON
Watch this 4:22 minute video exploring the spatial design of NEON field sites.
Please read the following page about NEON's Spatial Design:
Read this primer on NEON's Sampling Design
Read about the different types of field sites - core and relocatable
NEON Field Site Locations
Explore the NEON Field Site map taking note of the locations of
- Aquatic & terrestrial field sites.
- Core & relocatable field sites.
Explore the NEON field site map. Do the following:
- Zoom in on a study area of interest to see if there are any NEON field sites that are nearby.
- Use the menu below the map to filter sites by name, type, domain, or state.
- Select one field site of interest.
- Click on the marker in the map.
- Then click on Site Details to jump to the field site landing page.
Data Institute Participant -- Thought Questions: Use the map above to answer these questions. Consider the research question that you may explore as your Capstone Project at the Institute or about a current project that you are working on and answer the following questions:
- Are there NEON field sites that are in study regions of interest to you?
- What domains are the sites located in?
- What NEON field sites do your current research or Capstone Project ideas coincide with?
- Is the site(s) core or relocatable?
- Is it/are they terrestrial or aquatic?
- Are there data available for the NEON field site(s) that you are most interested in? What kind of data are available?
Data Tip: You can download maps, kmz, or shapefiles of the field sites here.
NEON Data
How NEON Collects Data
Watch this 3:06 minute video exploring the data that NEON collects.
Read the Data Collection Methods page to learn more about the different types of data that NEON collects and provides. Then, follow the links below to learn more about each collection method:
- Aquatic Observation System (AOS)
- Aquatic Instrument System (AIS)
- Terrestrial Instrument System (TIS) -- Flux Tower
- Terrestrial Instrument System (TIS) -- Soil Sensors and Measurements
- Terrestrial Organismal System (TOS)
- Airborne Observation Platform (AOP)
All data collection protocols and processing documents are publicly available. Read more about the standardized protocols and how to access these documents.
Specimens & Samples
NEON also collects the samples and specimens on which other data products are based. These samples are also available for research and education purposes. Learn more: NEON Biorepository.
Airborne Remote Sensing
Watch this 5 minute video to better understand the NEON Airborne Observation Platform (AOP).
Data Institute Participant – Thought Questions: Consider either your current or future research or the question you’d like to address at the Institute.
- Which types of NEON data may be more useful to address these questions?
- What non-NEON data resources could be combined with NEON data to help address your question?
- What challenges, if any, could you foresee when beginning to work with these data?
Data Tip: NEON also provides support for your own research, including proposals to fly the AOP over other study sites, a mobile tower/instrumentation setup, and more. Learn more about the Assignable Assets programs here.
Access NEON Data
NEON data are processed and go through quality assurance and quality control (QA/QC) checks at NEON headquarters in Boulder, CO. NEON carefully documents every aspect of sampling design, data collection, processing, and delivery. This documentation is freely available through the NEON data portal.
- Visit the NEON Data Portal - data.neonscience.org
- Read more about the quality assurance and quality control processes for NEON data and how the data are processed from raw data to higher level data products.
- Explore NEON Data Products. On the page for each data product in the catalog you can find the basic information about the product, find the data collection and processing protocols, and link directly to downloading the data.
- Additionally, some types of NEON data are also available through the data portals of other organizations. For example, NEON Terrestrial Insect DNA Barcoding Data is available through the Barcode of Life Data System (BOLD), and NEON phenocam images are available from the Phenocam network site. More details on where else the data are available can be found in the Availability and Download section on the Product Details page for each data product (visit Explore Data Products to access individual Product Details pages).
Pathways to access NEON Data
There are several ways to access data from NEON:
- Via the NEON data portal. Explore and download data. Note that much of the tabular data is available in zipped .csv files for each month and site of interest. To combine these files, use the neonUtilities package (R tutorial, Python tutorial).
- Use R or Python to programmatically access the data. NEON and community members have created code packages to directly access the data through an API. Learn more about the available resources by reading the Code Resources page or visiting the NEONScience GitHub repo.
- Using the NEON API. Access NEON data directly using a custom API call.
- Access NEON data through partner's portals. Where NEON data directly overlap with other community resources, NEON data can be accessed through the portals. Examples include Phenocam, BOLD, Ameriflux, and others. You can learn more in the documentation for individual data products.
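The programmatic pathway above is built on the NEON API. A minimal sketch of how the v0 endpoint URLs are typically constructed follows; the endpoint layout shown here reflects the public API documentation around the time of the 2018 Institute, so verify current paths at data.neonscience.org before relying on them.

```python
# Sketch of programmatic access via the NEON API (v0). Endpoint paths are
# based on the public API docs; verify before use.

BASE = "https://data.neonscience.org/api/v0"

def product_url(product_code):
    """URL for a data product's metadata, e.g. DP3.30010.001 (camera mosaic)."""
    return f"{BASE}/products/{product_code}"

def data_url(product_code, site_code, year_month):
    """URL listing downloadable files for one product/site/month."""
    return f"{BASE}/data/{product_code}/{site_code}/{year_month}"

# Example request (requires the `requests` package and network access):
# import requests
# resp = requests.get(product_url("DP3.30010.001"))
# print(resp.json()["data"]["productName"])

print(data_url("DP3.30010.001", "SJER", "2018-03"))
```

The neonUtilities package wraps calls like these (plus the zip-file stacking) so you rarely need to build URLs by hand.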
Data Institute Participant – Thought Questions: Use the Data Portal tools to investigate the data availability for the field sites you’ve already identified in the previous Thought Questions.
- What types of aquatic/terrestrial data are currently available? Remote sensing data?
- Of these, what type of data are you most interested in working with for your project while at the Institute?
- What time period do the data cover?
- What format is the downloadable file available in?
- Where is the metadata to support this data?
Data Institute Participants: Intro to NEON Culmination Activity
Write up a brief summary of a project that you might want to explore while at the Data Institute in Boulder, CO. Include the types of NEON (and other) data that you will need to implement this project. Save this summary, as you will be refining and adding to your ideas over the next few weeks.
The goal of this activity is for you to begin to think about a Capstone Project that you wish to work on at the end of the Data Institute. This project will ideally be performed in groups, so over the next few weeks you'll have a chance to view the other project proposals and merge projects to collaborate with your colleagues.
The Importance of Reproducible Science
Verifiability and reproducibility are among the cornerstones of the scientific process. They are what allow scientists to "stand on the shoulders of giants". Maintaining reproducibility requires that all data management, analysis, and visualization steps behind the results presented in a paper are documented and available in full detail. Reproducibility here means that someone else should either be able to obtain the same results given all the documented inputs and the published instructions for processing them, or if not, the reasons why should be apparent. From Reproducible Science Curriculum
- Summarize the four facets of reproducibility.
- Describe several ways that reproducible workflows can improve your workflow and research.
- Explain several ways you can incorporate reproducible science techniques into your own research.
Getting Started with Reproducible Science
Please view the online slide-show below which summarizes concepts taught in the Reproducible Science Curriculum.
View Reproducible Science Slideshow
A Gap In Understanding
Reproducibility and Your Research
How reproducible is your current research?
View Reproducible Science Checklist
- Do you currently apply any of the items in the checklist to your research?
- Are there elements in the list that you are interested in incorporating into your workflow? If so, which ones?
Additional Readings (optional)
- Nature has collated and published (with open-access) a special archive on the Challenges of Irreproducible Science .
- The Nature Publishing group has also created a Reporting Checklist for its authors that focuses primarily on reporting issues but also includes sections for sharing code.
- Recent open-access issue of Ecography focusing on reproducible ecology and software packages available for use.
- A nice short blog post with an annotated bibliography of "Top 10 papers discussing reproducible research in computational science" from Lorena Barba: Barba group reproducibility syllabus.
Pre Week 2: Version Control & Collaborative Science
The goal of the pre-institute materials is to ensure that everyone comes to the Institute ready to work in a collaborative research environment. If you recall, from last week, the four facets of reproducibility are documentation, organization, automation, and dissemination.
This week we will focus on learning to use tools that support these facets: Git and GitHub. The GitHub environment supports a collaborative approach to science through code sharing and dissemination, while Git's version control supports efficient project organization and provides an effective way to save your work.
Learning Objectives
After completing these activities, you will be able to:
- Summarize the key components of a version control system
- Set up a GitHub account
- Set up Git locally
- Work in a collaborative workflow on GitHub
Week 2 Assignment
The assignment for this week is to revise the Data Institute capstone project summary that you developed last week. You will submit your project summary, with a brief biography to introduce yourself, to a shared GitHub repository.
Please complete this assignment by Thursday June 21st @ 11:59 PM MDT.
If you are familiar with forked repos and pull requests on GitHub, and with using Git in the command line, you may be able to complete the assignment without completing all tutorials in the series.
Assignment: Version Control with GitHub
DUE: 21 June 2018
During the NEON Data Institute, you will share the code that you create daily with everyone on the NEONScience/DI-NEON-participants repo.
Through this week’s tutorials, you have learned the basic skills needed to successfully share your work at the Institute including how to:
- Create your own GitHub user account,
- Set up Git on your computer (please do this on the computer you will be bringing to the Institute), and
- Create a Markdown file with a biography of yourself and the project you are interested in working on at the Institute. This biography was shared with the group via the Data Institute’s GitHub repo.
Checklist for this week’s Assignment:
You should have completed the following after Pre-institute week 2:
- Fork & clone the NEON-DataSkills/DI-NEON-participants repo.
- Create a .md file in the participants/2018-RemoteSensing/pre-institute2-git directory of the repo. Name the document LastName-FirstName.md.
- Write a biography that introduces yourself to the other participants. Please provide basic information including:
- name,
- domain of interest,
- one goal for the course,
- an updated version of your Capstone Project idea,
- and the list of data (NEON or other) to support the project that you created during last week’s materials.
- Push the document from your local computer to your GitHub repo.
- Create a Pull Request to merge this document back into the NEON-DataSkills/DI-NEON-participants repo.
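The add/commit/push steps in the checklist above can be illustrated locally. The sketch below drives Git from Python against a throwaway repository so it can run without a GitHub account; in the real assignment you would `git clone` your fork of NEON-DataSkills/DI-NEON-participants instead of `git init`, and the identity values are placeholders.

```python
# Illustration of the week 2 Git workflow against a throwaway local repo.
# Placeholders: the user.email/user.name values and the bio text.
import subprocess, tempfile, os

repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the demo repository."""
    subprocess.run(["git", *args], cwd=repo, check=True,
                   capture_output=True, text=True)

git("init")
git("config", "user.email", "you@example.com")   # placeholder identity
git("config", "user.name", "FirstName LastName")

# Create the biography file in the directory the assignment specifies.
bio_dir = os.path.join(repo, "participants", "2018-RemoteSensing",
                       "pre-institute2-git")
os.makedirs(bio_dir)
with open(os.path.join(bio_dir, "LastName-FirstName.md"), "w") as f:
    f.write("# Bio\n\nName, domain of interest, course goal, capstone idea.\n")

git("add", ".")
git("commit", "-m", "Add LastName-FirstName bio")
# In the real workflow: `git push origin <branch>`, then open a Pull Request
# on GitHub to merge back into NEON-DataSkills/DI-NEON-participants.
```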
NOTE: The Data Institute repository is a public repository, so all members of the Institute, as well as anyone in the general public who stumbles on the repo, can see the information. If you prefer not to share this information publicly, please submit the same document but use a pseudonym (cartoon character names would work well) and email us with the pseudonym so that we can connect the submitted document to you.
Have questions? No problem. Leave your question in the comment box below. It's likely some of your colleagues have the same question, too! And also likely someone else knows the answer.
Version Control with GitHub
Pre-Institute Week 3: Documentation of Your Workflow
In week 3, you will use Jupyter Notebooks (formerly IPython Notebooks) to document code and efficiently publish code results & outputs. You will practice your Git skills by publishing your work in the NEONScience/DI-NEON-participants GitHub repository. While working with the notebooks, you will also learn about NEON RGB imagery.
In addition, you will watch a video that provides an overview of the NEON Vegetation Indices that are available as data products in preparation for Monday's materials.
Learning Objectives
After completing these activities, you will be able to:
- Use Jupyter Notebooks to combine code with formatted explanatory text
- Describe the value of documented workflows
- Plot a NEON RGB Camera Tile
- Plot a Histogram of a Single Band of an RGB Camera Tile
NEON RGB Camera Imagery
To prepare for working with NEON's RGB camera imagery data in this week's lessons, please watch this 18-minute video by NEON project scientist Bill Gallery about these data.
Jupyter Notebooks
Please go through the following series to learn to work with Jupyter Notebooks.
Document Your Code with Jupyter Notebooks Tutorial Series
Assignment
The assignment this week is to create a Jupyter Notebook that shows how to plot NEON RGB camera data. Use the following tutorial (along with the previous series on Notebooks) to create this Notebook. Once done, you will convert your Notebook to a PDF for easy dissemination of your results. If you have challenges converting to PDF, try converting to HTML.
When done, submit your Notebook and PDF/HTML files to the GitHub participants/2018-RemoteSensing/pre-institute3-Jupyter directory. Review the week 2 materials if you would like a refresher on GitHub and the commands associated with adding, committing, pushing, and making a pull request.
The week 3 assignment is due at 11:59pm on 5 July 2018.
Plotting a NEON RGB Camera Image (GeoTIFF) in Python
Trouble starting Python? In your Command Prompt/Terminal window, activate your desired Python version. For the 2018 Data Institute, we will use Python 3.5; during setup we created this environment as "p35".
activate p35 (Windows)
source activate p35 (Mac/Linux)
jupyter notebook
You should now be able to navigate to where you want to save your new Jupyter Notebook for the tutorial. This tutorial is set up to use your Python 3.5 kernel. Directions for setting up this kernel were in the Introduction to Using Jupyter Notebooks tutorial.
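Once your notebook is running, the week's two plotting objectives (an RGB tile and a single-band histogram) can be sketched as below. Loading the real GeoTIFF uses gdal, as in the tutorial environment; here a random array stands in for a NEON tile so the plotting code is runnable anywhere, and the filename in the commented gdal lines is hypothetical.

```python
# Sketch: plot an RGB camera tile and a histogram of one band.
# A random array substitutes for real NEON data so this runs standalone.
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend for scripted runs
import matplotlib.pyplot as plt

# With real data (in the p35 environment), you would load the tile instead:
# from osgeo import gdal
# ds = gdal.Open("NEON_RGB_tile.tif")  # hypothetical filename
# rgb = np.dstack([ds.GetRasterBand(b + 1).ReadAsArray() for b in range(3)])
rgb = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(rgb)
ax1.set_title("RGB tile")
ax2.hist(rgb[:, :, 0].ravel(), bins=20)   # histogram of the red band
ax2.set_title("Red band histogram")
fig.savefig("rgb_tile_preview.png")
```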
Remote Sensing Indices
On Monday of the Data Institute, we will work with hyperspectral remote sensing data. In the afternoon, we will work in groups to create scripts to calculate various indices from the NEON hyperspectral data. To prepare for this section, you may want to watch the following presentation (29 minutes) by NEON project scientist David Hulslander on remote sensing indices, including those calculated as part of NEON Data Products.
Learning Objectives
After completing these activities, you will be able to:
- Open and work with raster data stored in HDF5 format in Python
- Explain the key components of the HDF5 data structure (groups, datasets and attributes)
- Open and use attribute data (metadata) from an HDF5 file in Python
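The three HDF5 components named above (groups, datasets, and attributes) can be seen in a toy example written and read back with h5py, the library used in the hyperspectral tutorials. The file layout here is invented for illustration; NEON reflectance files have their own, deeper group structure.

```python
# Toy illustration of HDF5 groups, datasets, and attributes with h5py.
# The layout is invented; NEON reflectance files are structured differently.
import h5py
import numpy as np

with h5py.File("toy.h5", "w") as f:
    grp = f.create_group("Reflectance")                        # a group
    dset = grp.create_dataset("values",
                              data=np.arange(12).reshape(3, 4))  # a dataset
    dset.attrs["units"] = "unitless"                           # attributes
    dset.attrs["scale_factor"] = 10000

with h5py.File("toy.h5", "r") as f:
    values = f["Reflectance/values"]
    print(values.shape, dict(values.attrs))
```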
All activities are held in the Classroom unless otherwise noted.
Time | Topic | Instructor/Location |
---|---|---|
7:45 | Arrive at NEON to be ready for start at 8:00 | |
8:00 | Welcome & Introductions | |
8:45 | NEON AOP Logistics | Tristan Goulden |
9:15 | BREAK | |
9:30 | NEON Tour | |
11:00 | SHORT BREAK | |
11:05 | Fundamentals of Hyperspectral Remote Sensing & HDF5 format (related video) | David Hulslander |
11:30 | Explore NEON HDF5 format with Viewer | Tristan Goulden |
12:00 | LUNCH | Classroom/Patio |
13:00 | Work with Hyperspectral Remote Sensing data & HDF5 | Bridget Hass |
NEON AOP Hyperspectral Data in HDF5 format with Python - Tiled Data | ||
Band Stacking, RGB & False Color Images, and Interactive Widgets in Python - Tiled Data | ||
Plot a Spectral Signature in Python - Tiled Data | ||
Calculate NDVI & Extract Spectra Using Masks in Python - Tiled Data | ||
15:15 | BREAK | |
15:30 | Calculate Other Indices; Small Group Coding | Megan Jones |
16:30 | Reproducible Workflows, part I (associated GitHub repo) | Naupaka Zimmerman |
17:30 | End of Day Wrap Up | Megan Jones |
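The NDVI calculation in Monday afternoon's session follows the standard definition NDVI = (NIR − Red) / (NIR + Red). A minimal NumPy sketch, using made-up reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), with divide-by-zero pixels masked as NaN.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for two reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom == 0, np.nan, (nir - red) / denom)

# Made-up 2x2 reflectance "tiles" for illustration.
nir = np.array([[0.5, 0.6], [0.4, 0.0]])
red = np.array([[0.1, 0.2], [0.1, 0.0]])
print(ndvi(nir, red))   # vegetated pixels approach 1; zero bands give NaN
```

The same masking idea carries over to the "Extract Spectra Using Masks" tutorial, where NDVI thresholds select vegetation pixels.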
Additional Information
- In the morning, we will be touring the NEON facilities including several labs. Please wear long pants and close-toed shoes to conform to lab safety standards.
- Many individuals find the temperature of the classroom where the Data Institute is held to be cooler than they prefer. We recommend you bring a sweater or light jacket with you.
- You will have the opportunity to eat your lunch on an outdoor patio - hats, sunscreen, and sunglasses may be appreciated.
Additional Resources
- Participants looking for more background on the HDF5 format may find this tutorial useful: Hierarchical Data Formats - What is HDF5? tutorial
- During the 2016 Data Institute, Dr. Dave Schimel gave a presentation on the importance of "Big Data, Open Data, and Biodiversity," which is very much related to the themes of this Data Institute. If interested, you can watch the video here.
Tuesday: Reproducible Workflows & Lidar
In the morning, we will focus on data workflows, organization and automation as a means to write more efficient, usable code. Later, we will review the basics of discrete return and full waveform lidar data. We will then work with some NEON lidar derived raster data products.
Learning Objectives
After completing these activities, you will be able to:
- Explain the difference between active and passive sensors.
- Explain the difference between discrete return and full waveform LiDAR.
- Describe applications of LiDAR remote sensing data in the natural sciences.
- Describe several NEON LiDAR remote sensing data products.
- Explain why modularization is important and supports efficient coding practices.
- How to modularize code using functions.
- Integrate basic automation into your existing data workflow.
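The objectives above can be tied together in one small example: the raster-threshold classification from Tuesday afternoon, written as a reusable function per the morning's modularization advice. The class breaks below are invented; the tutorial's canopy height model (CHM) example uses its own thresholds.

```python
# Threshold classification of a raster, wrapped in a function (modularized).
# Heights and class breaks are made-up illustration values.
import numpy as np

def classify_raster(arr, breaks):
    """Return an integer class raster: class i where breaks[i-1] <= v < breaks[i]."""
    return np.digitize(arr, bins=breaks)

chm = np.array([[0.0, 2.0, 7.0],
                [12.0, 25.0, 40.0]])                 # canopy heights (m)
classes = classify_raster(chm, breaks=[2, 10, 20])   # 4 classes: 0..3
print(classes)
```

Because the thresholds are a parameter rather than hard-coded, the same function can be reused across tiles or sites — the kind of automation the morning session emphasizes.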
Time | Topic | Instructor/Location |
---|---|---|
8:00 | Reproducible Workflows, part II (associated GitHub repo) | Naupaka Zimmerman |
10:00 | BREAK | |
10:15 | Reproducible Workflows, cont. | Naupaka Zimmerman |
12:00 | LUNCH | Classroom/Patio |
13:00 | An Introduction to Discrete Lidar (related video) | Tristan Goulden |
13:20 | An Introduction to Waveform Lidar (related video) | Keith Krause |
13:40 | Rasters & TIFF tags | Tristan Goulden |
14:00 | Working with Lidar Data | Bridget Hass |
Classify a Raster using Threshold Values | ||
Merge GeoTIFF Files | ||
Extra: Mask a Raster Using Threshold Values in Python | ||
Extra: Create a Hillshade from a Terrain Raster in Python | ||
15:00 | BREAK | |
15:15 | Lidar, cont. | |
16:30 | Lidar Small Group Coding Activity | Tristan & Bridget |
17:30 | End of Day Wrap Up | Megan Jones |
Wednesday: Uncertainty
Today, we will focus on the importance of uncertainty when using remote sensing data.
Learning Objectives
After completing these activities, you will be able to:
- Measure the differences between a metric derived from remote sensing data and the same metric derived from data collected on the ground.
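One concrete way to measure those differences is to compute the bias and RMSE between a lidar-derived canopy height and the field-measured height for the same trees. The paired values below are made up for illustration.

```python
# Bias and RMSE between remote-sensing and ground-based height estimates.
# The paired heights are invented illustration values.
import numpy as np

lidar_height = np.array([10.2, 15.1, 8.7, 20.3])   # meters, from CHM
field_height = np.array([10.0, 14.5, 9.0, 19.8])   # meters, measured on ground

diff = lidar_height - field_height
bias = diff.mean()                    # systematic offset
rmse = np.sqrt((diff ** 2).mean())    # typical magnitude of disagreement
print(f"bias = {bias:.2f} m, RMSE = {rmse:.2f} m")
```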
Time | Topic | Instructor/Location |
---|---|---|
8:00 | Uncertainty & Lidar Data Presentation (related video) | Tristan Goulden |
8:30 | Exploring Uncertainty in LiDAR Data | Tristan Goulden |
10:30 | BREAK | |
10:45 | Lidar Uncertainty cont. | Tristan Goulden |
12:00 | LUNCH | Classroom/Patio |
13:00 | Spectral Calibration & Uncertainty Presentation (video) | |
13:30 | Hyperspectral Variation Uncertainty Analysis in Python | Tristan Goulden |
Assessing Spectrometer Accuracy using Validation Tarps with Python | Tristan Goulden | |
15:00 | BREAK | |
15:15 | Hyperspectral Validation | Tristan Goulden |
16:15 | Small group coding activity | |
17:30 | End of Day Wrap Up | Megan Jones |
Thursday: Vegetation & HPC
On Thursday, we will begin to think about the different types of analysis that we can do by fusing LiDAR and hyperspectral data products.
Learning Objectives
After completing these activities, you will be able to:
- Classify different spectra from a hyperspectral data product
- Map the crown of trees from hyperspectral & lidar data
- Calculate biomass of vegetation
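A common biomass workflow fits a model between lidar canopy height and ground-measured biomass for a sample of plots, then applies it across the site. The linear fit below is a toy stand-in for the machine-learning approach used at the Institute, and every number in it is invented.

```python
# Toy biomass model: linear fit of field biomass against lidar canopy height.
# All values are made-up illustration data.
import numpy as np

height = np.array([5.0, 10.0, 15.0, 20.0])        # lidar canopy height (m)
biomass = np.array([20.0, 55.0, 88.0, 122.0])     # field biomass (kg)

slope, intercept = np.polyfit(height, biomass, 1)  # simple linear model
predicted = slope * 18.0 + intercept               # predict an unmeasured 18 m pixel
print(f"biomass ~ {slope:.2f} * height + {intercept:.2f}; 18 m -> {predicted:.1f} kg")
```

Real workflows typically use allometric (nonlinear) equations or ensemble models rather than a straight line, but the fit-then-predict structure is the same.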
Time | Topic | Instructor/Location |
---|---|---|
8:00 | Vertical scaling of remote sensing at NEON core sites: from handheld cameras to EOS | Tyson Swetnam |
9:15 | Phenocam Network & Data | Bijan Seyednasrollah |
9:40 | NEON Vegetation Structure Data | Natalie Robinson |
10:15 | BREAK | |
10:30 | Biomass Calc. & Tree crown mapping | Tristan Goulden |
12:00 | LUNCH | Classroom/Patio |
13:00 | Scaling Up: Using HPC/CyVerse platforms | Tyson Swetnam |
15:00 | BREAK | |
15:15 | Scaling Up, cont. | Tyson Swetnam |
17:45 | Capstone Brainstorm & Group Selection | Megan Jones |
Additional Resources
This year we will primarily focus on NEON Vegetation Structure data, however, NEON offers a wide variety of data on vegetation. For an overview of the observational sampling of vegetation data watch these videos:
Friday: Applications in Remote Sensing
Today, we will break into two groups in the morning to work with either waveform lidar or hyperspectral classification. Then in the afternoon, you will use all of the skills you've learned at the Institute to work on a group project that uses NEON or related data!
Learning Objectives
During this activity you will:
- Apply the skills that you have learned to process data using efficient coding practices.
- Apply your understanding of remote sensing data and use it to address a science question of your choice.
- Implement version control and collaborate with your colleagues through the GitHub platform.
Time | Topic | Location |
---|---|---|
8:00 | Concurrent sessions | |
Working with waveform lidar | Tristan Goulden - Longs Peak | |
Hyperspectral Classification | Bridget Hass - Classroom | |
Unsupervised Spectral Classification in Python: KMeans & PCA | ||
Unsupervised Spectral Classification in Python: Endmember Extraction | ||
10:30 | BREAK | |
10:45 | Accessing NEON data | Megan Jones |
12:00 | LUNCH | Classroom/Patio |
13:00 | Groups begin work on capstone project | Breakout rooms |
Instructors available on an as needed basis for consultation & help | ||
16:30 | End of day wrap up | Classroom |
18:00 | Time to leave the building (if group opts to work after wrap up) |
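The morning's unsupervised classification session pairs PCA (to compress the hyperspectral bands) with k-means (to cluster pixels). The tutorials use scikit-learn; this NumPy-only sketch shows the same two steps on random "spectra" so it runs without extra dependencies.

```python
# PCA (via SVD) followed by plain k-means (Lloyd's algorithm) on fake spectra.
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 50))       # 200 pixels x 50 bands (fake data)

# PCA: center, take SVD, keep the first 3 principal-component scores.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:3].T               # 200 x 3

# k-means with k=2: alternate assignment and centroid update.
k = 2
centers = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(20):
    dists = ((scores[:, None] - centers) ** 2).sum(axis=2)
    labels = np.argmin(dists, axis=1)
    centers = np.array([scores[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
print(np.bincount(labels))                 # cluster sizes
```

On real hyperspectral tiles the PCA step matters much more, since hundreds of correlated bands compress to a handful of informative components before clustering.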
Additional Resources
For additional information on Classification of Hyperspectral Data with a variety of methods, please refer to the content taught by Dr. Paul Gader during the 2017 Data Institute.
Saturday: Applications in Remote Sensing
Time | Topic | Instructor |
---|---|---|
9:00 | Groups continue on capstone projects | Breakout rooms |
Instructors available on an as needed basis for consultation & help | ||
12:00 | LUNCH | Classroom/Patio |
13:00 | Presentations Start | Classroom |
16:30 | Final Questions & Institute Debrief | Classroom |
17:00 | End |