Friday, December 9, 2016

Lab 8 - Spectral Signature Analysis & Resource Monitoring

GOAL AND BACKGROUND

The goal of this lab is to explore spectral signature analysis. This includes learning to graph and interpret reflectance information, along with learning simple band ratio analysis techniques.


METHODS

First, examples of various surfaces were found within the study area, Chippewa and Eau Claire Counties. These included standing water, moving water, forested areas, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highways, airport runways, and a concrete surface. The reflectance bands of each of these areas were then graphed and interpreted.

Next, band ratio analysis was explored. First, NDVI (Normalized Difference Vegetation Index) was computed, which divides the difference of the NIR and red bands by their sum. High values indicate areas of healthy vegetation. After this was performed, ferrous minerals were analyzed using a ratio of the MIR and NIR bands.
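For illustration, both ratios can be sketched in a few lines of numpy. The band arrays and values below are made up for the example, not taken from the lab data:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); high values mean healthy vegetation."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon guards against 0/0

def ferrous_ratio(mir, nir):
    """Ferrous-mineral ratio: MIR / NIR; high values suggest iron-bearing surfaces."""
    return mir.astype(float) / (nir.astype(float) + 1e-10)

# Toy 2x2 bands: vegetation in the top row, bare soil in the bottom
nir_band = np.array([[200, 180], [40, 50]])
red_band = np.array([[40, 50], [60, 70]])
print(ndvi(nir_band, red_band))  # top row well above 0, bottom row negative
```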


RESULTS

Shown below are the plotted bands of various surfaces. The IR bands (4-6) are the most effective at differentiating between the surfaces, as they show the most variation. Vegetation tends to have a high reflectance in these bands because healthy leaf tissue strongly reflects near-infrared energy, which would otherwise be damaging to plants. Water, on the other hand, readily absorbs IR, which can be used to determine the water content of surfaces such as soil. This is shown by dry soil having a lower IR absorbance (higher reflectance) than moist soil.



Shown below are maps of the study area after band ratios were used to determine areas of dense, healthy vegetation and ferrous minerals, respectively.






SOURCES

Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Tuesday, December 6, 2016

Lab 7 - Photogrammetry

GOAL AND BACKGROUND

The goal of this lab was to explore photogrammetric tools for use on satellite images. Specifically, stereoscopy and orthorectification in Erdas Imagine were used.


METHODS

Stereoscopy was first used to generate some anaglyphs. The data used was of Eau Claire County. The first anaglyph was created using a DEM elevation image, and the second was created using a DSM image generated from LiDAR data. A section of this is shown in the results section.
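Conceptually, an anaglyph offsets one color channel by an amount proportional to elevation to fake stereo parallax. The sketch below is a crude simplification of what Erdas actually does, with synthetic arrays standing in for the image and elevation data:

```python
import numpy as np

def simple_anaglyph(gray, dem, max_shift=5):
    """Shift the red channel horizontally in proportion to elevation,
    a rough stand-in for stereo parallax; view with red/blue glasses."""
    h, w = gray.shape
    relief = (dem - dem.min()) / (np.ptp(dem) + 1e-10)   # normalize to 0..1
    shift = (relief * max_shift).astype(int)
    cols = np.arange(w)
    red = np.empty_like(gray)
    for r in range(h):
        red[r] = gray[r, np.clip(cols + shift[r], 0, w - 1)]
    return np.dstack([red, gray, gray])                  # R, G, B channels
```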

The next part of the lab explored orthorectification. This was done with the Erdas Imagine Leica Photogrammetric Suite (LPS), a tool with a variety of applications, one of which is orthorectification. The study area for this part was Palm Springs, California.

First, a project was created within the tool and the first image was brought in. This image was then assigned 11 control points that matched features on a reference image. Control points were chosen based on easy-to-identify features with edges, such as intersections. Once the control points were taken, the second image was brought in, and its features were matched to the control points previously taken. A triangulation tool was then run, along with a resampling tool, so the images would line up well at the same spatial resolution. The final images were then output and checked.


RESULTS

Shown below is the anaglyph generated from the LiDAR-derived DSM, which was much more successful than the anaglyph generated from the DEM. You will need stereoscopic (blue and red) glasses to view it correctly.



The control points matched between the two images in the orthorectification process are shown below.



The final output is shown below, along with a larger-scale view of the border of the two images. The border is difficult to see, which means the orthorectification process was successful. It is slightly visible running at a diagonal through the center of the image.






SOURCES

National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.

Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.

Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa are from Eau Claire County and Chippewa County governments respectively.

Spot satellite images are from Erdas Imagine, 2009.

Digital elevation model (DEM) for Palm Springs, CA is from Erdas Imagine, 2009.

National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.

Tuesday, November 15, 2016

Lab 6 - Geometric Correction

GOAL AND BACKGROUND

The goal of this lab was to learn how to use geometric corrections to correct distortion. This was done by using polynomial transformations to adjust an image to match a reference image.


METHODS

First, an image of Chicago was corrected using a first-order polynomial. A first-order polynomial requires a minimum of three Ground Control Points (GCPs), since the transformation has six coefficients and each GCP supplies two equations. Four were used here, as additional GCPs increase accuracy. The GCPs were placed on the study image, then placed on the same feature on the reference image. The GCPs had to be spread out across the image to minimize potential distortion. They were adjusted to maximize accuracy and minimize the RMS error, with the goal of keeping the RMS error below 2. Lastly, the corrected image was exported as a .img file.
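Erdas hides the math, but a first-order (affine) fit from GCPs, and the RMS error it reports, can be reproduced with a numpy least-squares solve. The GCP coordinates below are invented for illustration, not the lab's actual points:

```python
import numpy as np

# Hypothetical GCPs: (x, y) in the distorted image -> (X, Y) in the reference
src = np.array([[10.0, 12.0], [200.0, 15.0], [20.0, 180.0], [190.0, 190.0]])
dst = np.array([[12.0, 10.0], [205.0, 14.0], [21.0, 183.0], [196.0, 192.0]])

# First-order fit: X = a0 + a1*x + a2*y and Y = b0 + b1*x + b2*y
A = np.column_stack([np.ones(len(src)), src])      # design matrix [1, x, y]
coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2: one column per output axis

pred = A @ coeffs
residuals = np.linalg.norm(pred - dst, axis=1)     # per-GCP error
rmse = np.sqrt(np.mean(residuals**2))
print(f"total RMS error: {rmse:.3f}")
```

With four GCPs and six unknowns, the system is overdetermined, which is why extra points let the residual error be measured at all.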

Second, an image of Sierra Leone was corrected with a third-order polynomial. Third-order polynomials require a minimum of ten GCPs; twelve were used to maximize accuracy. Once again, the RMS error was brought well below 2, and the image was exported as a .img file.


RESULTS

Shown below is the RMS error table from the GCPs in the Chicago image. Notice they are all well below the goal of 2.



Shown below is a screen capture of the final GCP locations. Notice how they are spread out.



Shown below is the RMS error table from the GCPs in the Sierra Leone image. These are also well below the goal of 2.



Shown below is the final screen capture of the GCP tool. Once again, they are spread out across the study area.




SOURCES

Earth Resources Observation and Science Center, United States Geological Survey.

Illinois Geospatial Data Clearing House. 

Thursday, November 10, 2016

Lab 5 - LiDAR

GOAL AND BACKGROUND

The goal of this lab was to explore the lidar analysis tools within ArcMap. This included projecting LAS (LiDAR) data and using tools to extract data.


METHODS

Projecting LAS data included reading through the metadata to determine the coordinate system and assigning it to the data.

The data was then displayed in ArcMap, and the symbology was adjusted to show elevation and the TIN surface. The contour display was also explored. To show contours without buildings, a filter was used to display only ground points.

The profile tool was also explored. This tool involves drawing a line through the study area; all points within a buffer of the line are then displayed in a profile window. An example of this is shown in the results section.
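The selection step behind the profile tool amounts to keeping points within some distance of a line segment and ordering them along it. A minimal sketch with invented points (columns are x, y, z):

```python
import numpy as np

def profile_points(pts, a, b, buffer):
    """Select points within `buffer` of the segment a-b and
    return them ordered by position along the line."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    t = np.clip(((pts[:, :2] - a) @ d) / (d @ d), 0.0, 1.0)  # fraction along segment
    nearest = a + t[:, None] * d                             # closest point on segment
    dist = np.linalg.norm(pts[:, :2] - nearest, axis=1)
    keep = dist <= buffer
    order = np.argsort(t[keep])
    return pts[keep][order]
```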

The last tools explored were for extracting DSM and DTM image rasters from the data. These are shown in the results section. First, an image of first-return points was created. A hillshade tool was then applied to the image to make it easier to read as a three-dimensional surface.
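The hillshade step can be approximated with the standard illumination formula, which lights each cell according to its slope and aspect relative to a sun position. The synthetic DEM in the test is an assumption; real tools also scale the result to 0-255:

```python
import numpy as np

def hillshade(dem, azimuth=315.0, altitude=45.0, cellsize=1.0):
    """Illumination of each cell from a sun at the given azimuth/altitude,
    computed from the surface gradient (one common formulation)."""
    az = np.radians(360.0 - azimuth + 90.0)   # compass azimuth -> math angle
    alt = np.radians(altitude)
    dzdy, dzdx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)          # flat ground -> sin(altitude)
```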

Next, another image raster was created, this time with ground points only, meaning all structures in the data were excluded. Once again, hillshade was applied to the result. This result was much smoother, since it lacked the structures that had created a rough texture and abrupt changes in elevation.
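The ground-only raster boils down to filtering points by classification (class 2 is ground in the LAS specification) and binning the survivors into cells. A sketch with a handful of invented points:

```python
import numpy as np

def grid_points(x, y, z, cell=1.0):
    """Bin points into a raster of mean elevation per cell
    (cells with no points become NaN)."""
    col = ((x - x.min()) / cell).astype(int)
    row = ((y.max() - y) / cell).astype(int)     # north-up: high y -> row 0
    total = np.zeros((row.max() + 1, col.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (row, col), z)
    np.add.at(count, (row, col), 1)
    with np.errstate(invalid="ignore"):
        return total / count                     # 0/0 -> NaN for empty cells

# Hypothetical points; class 6 is a building return and gets filtered out
x = np.array([0.2, 0.8, 1.4, 1.6])
y = np.array([0.3, 0.9, 0.2, 1.7])
z = np.array([100.0, 112.0, 101.0, 102.0])
cls = np.array([2, 6, 2, 2])
ground = cls == 2
dtm = grid_points(x[ground], y[ground], z[ground])
```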

Lastly, an intensity image was generated, which showed the intensity of returns.


RESULTS

Shown below is the result of using the contour option with only ground points. The options for which points could be selected are shown on the right of the screenshot.



An example of the profile tool is shown below. A line is visible in the LiDAR data across residential houses, and all points within the line's buffer are shown in the tool window.



The first-return images are shown below. First is the original result, followed by the image with the hillshade effect applied. The abnormalities in the water features are caused by the high absorbance of water, which results in very low point density. Noise points in these low-density areas tend to create the distortion that appears in the image.




Below are the results from the ground-only images. First is the original image, followed by the image with the hillshade applied. Notice that the image is much smoother with the absence of buildings.



Lastly, the intensity image that was generated is shown below.




SOURCES

Department of Planning and Development. Eau Claire Point Cloud. Collected May 13, 2013.

Tuesday, November 1, 2016

Lab 4 - Miscellaneous Image Functions

GOAL AND BACKGROUND

The goal of this lab was to explore various image functions in the Erdas software. These functions were: delineating a study area, image fusion (pansharpening), radiometric enhancement, linking satellite images to Google Earth, resampling images, image mosaicking, and using binary change detection.


METHODS

Delineating a study area was completed by using an inquire box to outline the study area, then creating a subset image using the coordinates of the inquire box. A subset was also created using an area of interest shapefile.

The next image function used was image fusion. This is done to optimize spatial resolution. The multiplicative function of pansharpening was used to raise an image with a 30 m spatial resolution to a 15 m resolution.
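The multiplicative method multiplies each multispectral band by the panchromatic band after resampling them to the same grid. The arrays below are toy values, and the square root is one common way to pull the product back toward the original radiometric range:

```python
import numpy as np

# Hypothetical 30 m multispectral band (2x2) and 15 m panchromatic band (4x4)
ms = np.array([[10.0, 20.0], [30.0, 40.0]])
pan = np.arange(16, dtype=float).reshape(4, 4) + 1

# Upsample the MS band to the pan grid (nearest neighbor), then fuse
ms_up = ms.repeat(2, axis=0).repeat(2, axis=1)
fused = np.sqrt(ms_up * pan)
```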

Next, radiometric enhancement techniques were performed. The technique used was haze reduction, which was done with a raster image function within the Erdas program.

Next a satellite image was linked to Google Earth. This was done using a tool within the Erdas program. The viewers were linked and synced so a movement in one was mimicked in the other.

The next function performed was resampling. A satellite image was first resampled to a higher spatial resolution (30 m to 15 m) using the nearest neighbor method, then bilinear interpolation was used.
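The difference between the two methods can be sketched in plain numpy: nearest neighbor copies the closest input pixel, while bilinear interpolation blends the four surrounding pixels, which is why its output looks smoother but blurrier. These are illustrative implementations, not what Erdas runs internally:

```python
import numpy as np

def resample_nearest(img, factor):
    """Nearest neighbor: each output pixel copies the closest input pixel."""
    rows = np.arange(img.shape[0] * factor) // factor
    cols = np.arange(img.shape[1] * factor) // factor
    return img[np.ix_(rows, cols)]

def resample_bilinear(img, factor):
    """Bilinear: each output pixel is a distance-weighted average of the
    four surrounding input pixels."""
    h, w = img.shape
    r = np.linspace(0, h - 1, h * factor)
    c = np.linspace(0, w - 1, w * factor)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, h - 1), np.minimum(c0 + 1, w - 1)
    fr, fc = (r - r0)[:, None], (c - c0)[None, :]
    top = img[r0][:, c0] * (1 - fc) + img[r0][:, c1] * fc
    bot = img[r1][:, c0] * (1 - fc) + img[r1][:, c1] * fc
    return top * (1 - fr) + bot * fr
```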

Image mosaicking was then performed using adjacent satellite images. First Mosaic Express was used, then Mosaic Pro. The Mosaic Pro mosaic used histogram matching to make the images more similar.
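The idea behind histogram matching is to remap each pixel value in one image to the value that sits at the same quantile of the other image's histogram. A minimal sketch (not Mosaic Pro's exact algorithm):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source pixel values so their cumulative distribution
    matches the reference image's."""
    src = source.ravel()
    src_vals, src_idx, src_counts = np.unique(src, return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    matched = np.interp(src_cdf, ref_cdf, ref_vals)  # same-quantile value
    return matched[src_idx].reshape(source.shape)
```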

The last image function used was binary change detection. A binary change image was first created using a tool within Erdas. The histogram was used to determine the threshold of values that would signify a change. Areas with values above this threshold were exported to create a binary change image.
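In numpy terms, this comes down to differencing the two dates and flagging pixels past a threshold. The brightness values are invented, and the mean-plus-1.5-standard-deviations rule is a common rule of thumb; the lab's actual threshold came from inspecting the histogram:

```python
import numpy as np

# Hypothetical brightness values for the same band on two dates
date1 = np.array([[50, 60], [70, 200]], dtype=float)
date2 = np.array([[52, 58], [71, 90]], dtype=float)

diff = np.abs(date1 - date2)

# Threshold the difference histogram at mean + 1.5 standard deviations
threshold = diff.mean() + 1.5 * diff.std()
change = diff > threshold          # True where change exceeds normal variation
```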


RESULTS

The study area subsets are shown below. The first is the subset created by the inquire box. The second was created by using an area of interest shapefile.





The results of pansharpening are shown below. The left window is the original image, and the right window is the pansharpened image.



The result from the haze reduction is shown below. The original image is on the left panel, and the product of haze reduction is on the right. Notice that the haze in the southeastern section was removed.



The results from resampling are shown below. The original image is first, followed by nearest neighbor and bilinear interpolation. There is no discernible difference between the original and nearest neighbor. However, bilinear interpolation is noticeably smoother at the price of being blurrier and less accurate.






The results from image mosaicking are shown below. The first was done with Mosaic Express. Notice how the colors of the images differ and the border is noticeable. The next image was created using Mosaic Pro. Notice how the colors were matched with histogram matching and the border between the two images is less pronounced.




The next images are from the binary change detection. The histogram with marked thresholds is shown first, followed by the binary change areas overlaid on the original study area. The red signifies areas that have changed.





SOURCES

Data obtained from Cyril Wilson for use in 338 course.