Welcome to the Malcolm Knapp Research Forest! During your time in the MGEM program, you will be exposed to a wide range of remote sensing and GIS technologies, datasets, and workflows that equip you to answer questions about our environment. Remote sensing datasets can typically be characterized by three core elements: temporal resolution, spectral resolution, and spatial resolution. Temporal resolution refers to the revisit time of a sensor, i.e. how long a satellite-based sensor takes to complete full coverage of the Earth. Spectral resolution refers to the unique portions of the electromagnetic spectrum captured by a sensor. Finally, the spatial resolution of a sensor refers to the ground dimensions of a pixel captured by that sensor. Depending on instrument design, satellite-based remote sensing platforms may provide data at resolutions ranging from coarse (e.g. 50 km SMOS pixels) to fine (e.g. 3 m PlanetScope).
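To put those spatial resolutions in perspective, here is a minimal Python sketch (the pixel sizes come from the paragraph above; everything else is just for illustration) that computes the ground area covered by one pixel from each platform and how many PlanetScope pixels fit inside a single Landsat pixel.

    # Nominal pixel edge lengths (metres) for the platforms discussed above
    pixel_sizes_m = {"PlanetScope": 3, "Sentinel-2": 10, "Landsat": 30}

    for platform, size in pixel_sizes_m.items():
        area_m2 = size ** 2  # each pixel covers size x size metres on the ground
        print(f"{platform}: {size} m pixel -> {area_m2} m^2 ({area_m2 / 10000:.4f} ha)")

    # How many 3 m PlanetScope pixels cover the same ground as one 30 m Landsat pixel?
    ratio = (pixel_sizes_m["Landsat"] / pixel_sizes_m["PlanetScope"]) ** 2
    print(f"One Landsat pixel spans the same area as {ratio:.0f} PlanetScope pixels")

A single Landsat pixel therefore aggregates the ground covered by one hundred PlanetScope pixels, which is why the same patch of forest can look homogeneous at one resolution and mixed at another.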

Image Classification & Mixed Pixel Problems

Landscape-level analysis of satellite data often requires that pixels be classified using comprehensive categories or descriptors. For example, quantifying changes in forest cover over time requires identifying which pixels represent forest and which do not. Images can be classified into only a few classes (e.g. forest or non-forest), or into many classes representing more complex landscapes (e.g. coniferous, broadleaf, mixed-wood, treed wetland). Depending on the spatial resolution of the dataset you are working with, the land cover within a single pixel may comprise more than one of these classes. This is commonly referred to as the ‘Mixed Pixel Problem’, and it introduces uncertainty into classification tasks.
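To see why mixed pixels are hard to label, consider a simple linear mixing example (a sketch only; the reflectance values below are made up for illustration): a coarse pixel whose footprint is 60% conifer and 40% clearcut records a single value that is an area-weighted average of the two cover types.

    # Hypothetical near-infrared reflectances for two pure cover types (illustrative values)
    reflectance_nir = {"conifer": 0.30, "clearcut": 0.45}

    # Fraction of one coarse pixel's footprint occupied by each cover type
    fractions = {"conifer": 0.6, "clearcut": 0.4}

    # Simple linear mixing: the pixel records the area-weighted average reflectance
    mixed_value = sum(fractions[c] * reflectance_nir[c] for c in fractions)
    print(f"Mixed pixel NIR reflectance: {mixed_value:.2f}")  # 0.36, matching neither pure class

Because the recorded value matches neither pure class, any single land cover label assigned to that pixel is a compromise.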

In this exercise, you will simulate the spatial resolutions of three popular satellite remote sensing platforms: PlanetScope, Sentinel-2, and Landsat. By mapping out “pixels” on the landscape at MKRF, you will investigate the effect of the mixed pixel problem on your ability to classify the landscape into meaningful categories. The main goals for the day are a) to experience what the spatial resolution of some global satellite datasets looks like on the ground, and b) to understand the limitations of representing complex land cover through the classification of satellite data pixels.

Part 1

The first part of this exercise involves mapping out your own ‘pixels’ in the MKRF research forest and observing the landscape features that each of these pixels contains. For this exercise, you need to form 6 groups, each of which will be provided with a compass and a transect tape. You will also need to assign one note-taker to mark down your observations in the field. When you are ready:

  1. Locate your first study site, which will be marked with a cone. (It is also marked on the interactive map in Part 2.)
  2. Map out a 3-meter PlanetScope pixel around the cone, using the compass and transect tape provided. Orient your imaginary grid towards true north and mark the corners of the pixel with your group members. (HINT: the magnetic declination at Loon Lake is +16°, so you will have to adjust your compass accordingly. If you are using a compass app on your phone, make sure that true north is enabled. A desk-check sketch of the corner offsets and the declination conversion follows this list.)
  3. Repeat step 2 for a 10-meter Sentinel-2 pixel and a 30-meter Landsat pixel.
  4. Decide whether each pixel is mixed or homogeneous and note down your response.
  5. As a group, discuss and record the features visible on the landscape.
  6. Based on the recorded features, come up with a land cover class to assign to each pixel, for each platform. This step is somewhat subjective; you can disagree with your group members!
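As referenced in step 2, the following Python sketch is only a desk check (the field layout is done with the compass and tape): it lists the corner offsets of a true-north-aligned pixel relative to the cone at its centre, and shows the standard conversion from a true bearing to the magnetic bearing you would read on an unadjusted compass, assuming the +16° east declination given above.

    # Edge lengths (metres) of the simulated pixels laid out in this exercise
    pixel_sizes_m = {"PlanetScope": 3, "Sentinel-2": 10, "Landsat": 30}

    # Corner offsets (metres north, metres east) from the cone at the pixel centre,
    # for a pixel oriented to true north
    for platform, size in pixel_sizes_m.items():
        half = size / 2
        corners = [(half, half), (half, -half), (-half, -half), (-half, half)]
        print(platform, corners)

    # With an east (positive) declination, true bearing = magnetic bearing + declination,
    # so the magnetic bearing to set on an unadjusted compass is true bearing - declination.
    declination_deg = 16.0

    def magnetic_bearing(true_bearing_deg):
        return (true_bearing_deg - declination_deg) % 360

    print(magnetic_bearing(0))  # walking true north means reading about 344 degrees magnetic

In the field, setting your compass’s declination adjustment to +16° (or enabling true north in your phone’s compass app) accomplishes the same correction.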



Part 1 Discussion Questions

Add your answers to the table.

  1. Were there any sites dominated by one particular land cover class for all three resolutions / platforms?
  2. Imagine each pixel as it might have looked in the year 2000, and look for clues about the site’s history. Do you think you would have assigned it a different land cover class 20 years ago?
  3. Is the value of a pixel determined equally by reflectance from the center and reflectance from the corners? In other words, does the sensor “see” the entire area represented by a pixel?


Once you have finished filling out the table at the end of the lab, click the ‘pdf’ button to export your table.

Part 2

Compare your observations to real satellite imagery of the study area and consider the spectral values of the real pixel corresponding to each site.

  1. Locate each site on the images of the study area and identify the pixel in the imagery corresponding to the site.
  2. Describe the pixel in the datasheet. What is its color? Does it have high or low reflectance? (If you’re color blind, don’t worry about wavelength. Just consider how much light is being reflected.)
  3. Look at the NDVI images and estimate the value for the pixel at each site. (The NDVI calculation is sketched below.)
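NDVI is calculated from the red and near-infrared bands as NDVI = (NIR - Red) / (NIR + Red), giving values between -1 and 1. The short Python sketch below uses made-up reflectance values (not measurements from the MKRF sites) to show why dense vegetation pushes NDVI toward 1 while bare surfaces sit near 0.

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from NIR and red reflectance."""
        return (nir - red) / (nir + red)

    # Illustrative reflectance values only
    print(ndvi(nir=0.40, red=0.05))  # dense conifer canopy: ~0.78
    print(ndvi(nir=0.25, red=0.20))  # gravel road or bare soil: ~0.11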


Part 2 Discussion Questions

  1. Why do you think that the range of NDVI values differs so much between sensors?

  2. What are the brightest and darkest areas in each image?