Tuesday, November 15, 2016

Lab 6 - Geometric Correction

GOAL AND BACKGROUND

The goal of this lab was to learn how to use geometric correction to remove distortion from satellite imagery. This was done by using polynomial transformations to rectify a distorted image against a reference image.


METHODS

First, an image of Chicago was corrected using a first-order polynomial. First-order polynomials require a minimum of three Ground Control Points (GCPs) to correct the image; four were used here, as additional GCPs increase accuracy. Each GCP was placed on the study image and then on the same feature in the reference image. The GCPs had to be spread out across the image to minimize potential distortion, and they were adjusted to maximize accuracy and minimize the root mean square (RMS) error. The goal was to bring the RMS error below 2. Lastly, the corrected image was exported as a .img file.

Second, an image of Sierra Leone was corrected with a third-order polynomial. Third-order polynomials require a minimum of ten GCPs; twelve were used to maximize accuracy. Once again, the RMS error was brought well below 2, and the image was exported as a .img file.
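The polynomial fit and the RMS error check can be reproduced outside Erdas. Below is a minimal NumPy sketch, with made-up GCP coordinates for illustration, that fits a first-order polynomial mapping the study image to the reference by least squares and reports the total RMS error at the GCPs. Raising `order` to 3 gives the ten-term model used for the Sierra Leone image.

```python
import numpy as np

def design_matrix(x, y, order):
    """Polynomial terms x**i * y**j for all i + j <= order."""
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_polynomial(src, ref, order=1):
    """Least-squares fit of a 2D polynomial mapping src (col, row) to ref (x, y)."""
    A = design_matrix(src[:, 0], src[:, 1], order)
    coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)
    return coef_x, coef_y

def rms_error(src, ref, coef_x, coef_y, order=1):
    """Total RMS error of the fitted transformation at the GCPs."""
    A = design_matrix(src[:, 0], src[:, 1], order)
    residuals = np.column_stack([A @ coef_x, A @ coef_y]) - ref
    return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))

# Hypothetical GCPs: study image (col, row) and reference (x, y) pairs
src = np.array([[120.0, 85.0], [930.0, 110.0], [150.0, 870.0], [900.0, 840.0]])
ref = np.array([[121.5, 84.0], [931.0, 111.5], [149.0, 871.0], [901.5, 839.0]])

cx, cy = fit_polynomial(src, ref, order=1)
print("RMS error:", rms_error(src, ref, cx, cy, order=1))
```

With order set to 1 the design matrix has three terms and with order set to 3 it has ten, which matches the three- and ten-GCP minimums mentioned above.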


RESULTS

Shown below is the RMS error table from the GCPs in the Chicago image. Notice they are all well below the goal of 2.



Shown below is a screen capture of the final GCP locations. Notice how they are spread out.



Shown below is the RMS error table from the GCPs in the Sierra Leone image. These are also well below the goal of 2.



Shown below is the final screen capture of the GCP tool. Once again, they are spread out across the study area.




SOURCES

Earth Resources Observation and Science Center, United States Geological Survey.

Illinois Geospatial Data Clearing House. 

Thursday, November 10, 2016

Lab 5 - LiDAR

GOAL AND BACKGROUND

The goal of this lab was to explore the LiDAR analysis tools within ArcMap. This included projecting LAS (LiDAR) data and using tools to extract surfaces and rasters from the point cloud.


METHODS

Projecting the LAS data involved reading through the metadata to determine the coordinate system and then assigning that coordinate system to the data.
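As an illustration, this step could be scripted with arcpy roughly as below. The file paths and the WKID are placeholders standing in for the values found in the metadata, not the actual settings from the lab.

```python
import arcpy

# Coordinate system identified in the metadata; WKID 3071
# (NAD 1983 HARN / Wisconsin TM) is used here only as an example.
sr = arcpy.SpatialReference(3071)

# Build a LAS dataset from the delivered .las tiles and assign
# the coordinate system in the same step (paths are hypothetical).
arcpy.management.CreateLasDataset(
    input=r"C:\lab5\las_tiles",
    out_las_dataset=r"C:\lab5\eau_claire.lasd",
    spatial_reference=sr,
    compute_stats="COMPUTE_STATS")
```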

The data was then projected in ArcMap, and the symbology was adjusted to show elevation and the TIN surface. The contour display was also explored. To generate contours without buildings, a filter was used to show only ground points.

The profile tool was also explored. This tool involves drawing a line through the study area; all points along that line are then displayed in a profile window. An example of this is shown in the results section.

The last tools explored were those for extracting digital surface model (DSM) and digital terrain model (DTM) rasters from the data. These are shown in the results section. First, a raster of first-return points was created. The hillshade tool was then applied to the result to make it easier to read as a three-dimensional surface.

Next, another raster was created, this time with ground points only, so all structures in the data were excluded. Once again, hillshade was applied to the result. This surface was much smoother, since it lacked the structures that had created a rough texture and abrupt changes in elevation.

Lastly, an intensity image was generated, which showed the intensity of returns.
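A rough arcpy equivalent of the DSM, DTM, and intensity raster workflow is sketched below. The layer filters, cell size, and file paths are assumptions meant to mirror the steps described, not the exact settings used in the lab.

```python
import arcpy

arcpy.CheckOutExtension("3D")  # HillShade requires the 3D Analyst extension
lasd = r"C:\lab5\eau_claire.lasd"  # hypothetical path

# DSM: keep only first returns, then interpolate to a raster
arcpy.management.MakeLasDatasetLayer(lasd, "first_returns", return_values=["1"])
arcpy.conversion.LasDatasetToRaster("first_returns", r"C:\lab5\dsm.tif",
                                    value_field="ELEVATION",
                                    sampling_type="CELLSIZE", sampling_value=2)

# DTM: keep only ground-classified points (class code 2)
arcpy.management.MakeLasDatasetLayer(lasd, "ground_only", class_code=[2])
arcpy.conversion.LasDatasetToRaster("ground_only", r"C:\lab5\dtm.tif",
                                    value_field="ELEVATION",
                                    sampling_type="CELLSIZE", sampling_value=2)

# Hillshade each surface to make relief easier to read
arcpy.ddd.HillShade(r"C:\lab5\dsm.tif", r"C:\lab5\dsm_hillshade.tif")
arcpy.ddd.HillShade(r"C:\lab5\dtm.tif", r"C:\lab5\dtm_hillshade.tif")

# Intensity image: same conversion, using the intensity attribute instead
arcpy.conversion.LasDatasetToRaster("first_returns", r"C:\lab5\intensity.tif",
                                    value_field="INTENSITY",
                                    sampling_type="CELLSIZE", sampling_value=2)
```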


RESULTS

Shown below is the result of using the contour option with only ground points. The options for which points could be selected are shown on the right of the screenshot.



An example of the profile tool is shown below. A line is visible in the LiDAR data across residential houses, and all points within the line's buffer are shown in the tool window.



The first-return images are shown below. First is the original result, followed by the image with the hillshade effect applied. The abnormalities in the water features are caused by the high absorbance of water, which results in very low point density; noise points in these low-density areas tend to create the distortion that appears in the image.




Below are the results from the ground-only images. First is the original image, followed by the image with the hillshade applied. Notice that the image is much smoother with the absence of buildings.



Lastly, the intensity image that was generated is shown below.




SOURCES

Department of Planning and Development. Eau Claire Point Cloud. Collected May 13, 2013.

Tuesday, November 1, 2016

Lab 4 - Miscellaneous Image Functions

GOAL AND BACKGROUND

The goal of this lab was to explore various image functions in the Erdas software. These functions were: delineating a study area, image fusion (pansharpening), radiometric enhancement, linking satellite images to Google Earth, resampling images, image mosaicking, and using binary change detection.


METHODS

Delineating a study area was completed by drawing an inquire box around the study area, then creating a subset image from the coordinates of the inquire box. A second subset was created using an area of interest (AOI) shapefile.
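Outside Erdas, the same inquire-box style subset can be approximated with GDAL by clipping to the box extent. The file names and coordinates below are placeholders, not values from the lab.

```python
from osgeo import gdal

# Clip the full scene to the inquire-box extent
# (projWin is [ulx, uly, lrx, lry]; these map coordinates are hypothetical)
gdal.Translate(
    "study_area_subset.img",
    "full_scene.img",
    projWin=[423000.0, 4652000.0, 455000.0, 4620000.0],
    format="HFA")  # HFA is the Erdas Imagine .img format
```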

The next image function used was image fusion, which is done to improve spatial resolution. The multiplicative pansharpening method was used to sharpen an image from a 30 m spatial resolution to a 15 m resolution.
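The multiplicative merge itself is simple band arithmetic: each multispectral band is multiplied by the panchromatic band once the multispectral bands have been resampled to the pan grid. Below is a minimal NumPy sketch using stand-in arrays; it illustrates the general multiplicative technique rather than Erdas' exact implementation.

```python
import numpy as np

def multiplicative_merge(ms_bands, pan):
    """Multiplicative pansharpening: multiply each resampled multispectral
    band by the panchromatic band, then rescale for an 8-bit display."""
    fused = []
    for band in ms_bands:
        product = band.astype(np.float64) * pan.astype(np.float64)
        scaled = 255.0 * (product - product.min()) / (product.max() - product.min())
        fused.append(scaled.astype(np.uint8))
    return np.stack(fused)

# Stand-in data; real bands would be read from the co-registered imagery
ms = np.random.randint(0, 256, (3, 400, 400), dtype=np.uint8)
pan = np.random.randint(0, 256, (400, 400), dtype=np.uint8)
sharpened = multiplicative_merge(ms, pan)
```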

Next, radiometric enhancement was performed. The technique used was haze reduction, which was done with a raster image function within the Erdas program.
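Erdas provides haze reduction as a built-in function; for comparison, a common do-it-yourself approximation is dark-object subtraction, sketched below. This is a stand-in technique for illustration, not necessarily what the Erdas tool does internally.

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Subtract the per-band 'dark object' value (a low percentile of the
    band histogram) to remove the additive haze component."""
    dark_value = np.percentile(band, percentile)
    corrected = band.astype(np.float64) - dark_value
    return np.clip(corrected, 0, None)

# Stand-in band; a real band would be read from the image file
band = np.random.gamma(2.0, 40.0, (500, 500))
dehazed = dark_object_subtraction(band)
```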

Next, a satellite image was linked to Google Earth using a tool within the Erdas program. The viewers were linked and synced so that movement in one was mimicked in the other.

The next function performed was resampling. A satellite image was resampled to a finer cell size (30 m to 15 m), first with the nearest neighbor method and then with bilinear interpolation.
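Both resampling runs can be reproduced with GDAL; only the resampling algorithm changes between them. The file names and cell sizes below are placeholders.

```python
from osgeo import gdal

# Nearest neighbor: pixel values are copied, so no new values are created
gdal.Warp("resampled_nn.img", "original_30m.img",
          xRes=15, yRes=15, resampleAlg="near", format="HFA")

# Bilinear interpolation: each output value is interpolated from the four
# nearest input pixels, which smooths (and slightly blurs) the result
gdal.Warp("resampled_bilinear.img", "original_30m.img",
          xRes=15, yRes=15, resampleAlg="bilinear", format="HFA")
```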

Image mosaicking was then performed using adjacent satellite images. First, Mosaic Express was used, and then Mosaic Pro. The Mosaic Pro mosaic used histogram matching to make the images more similar.
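The histogram matching step can be illustrated with scikit-image: one scene is adjusted so its band histograms match the adjacent reference scene before the two are mosaicked. The arrays below are stand-ins, and `match_histograms` is a generic implementation rather than Erdas' own.

```python
import numpy as np
from skimage.exposure import match_histograms

# Stand-in arrays; real scenes would be read with GDAL or rasterio
reference_scene = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
adjacent_scene = np.random.randint(30, 220, (512, 512, 3), dtype=np.uint8)

# Adjust the adjacent scene so its per-band histograms match the reference,
# reducing the visible seam when the two are mosaicked
matched = match_histograms(adjacent_scene, reference_scene, channel_axis=-1)
```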

The last image function used was binary change detection. A binary change image was first created using a tool within Erdas. The image histogram was used to determine the threshold of values that would signify a change, and areas with values beyond this threshold were exported to create the binary change image.
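The thresholding logic behind the binary change image is easy to reproduce with NumPy. The 1.5-standard-deviation rule below is a common rule of thumb assumed for illustration, not the exact threshold used in the lab.

```python
import numpy as np

def binary_change(band_t1, band_t2, k=1.5):
    """Difference two dates of the same band and flag pixels outside
    mean +/- k * std of the difference image as 'changed'."""
    diff = band_t2.astype(np.float64) - band_t1.astype(np.float64)
    lower = diff.mean() - k * diff.std()
    upper = diff.mean() + k * diff.std()
    return (diff < lower) | (diff > upper)

# Stand-in bands; real data would be the two image dates
t1 = np.random.normal(100, 20, (400, 400))
t2 = t1 + np.random.normal(0, 5, (400, 400))
change_mask = binary_change(t1, t2)
```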


RESULTS

The study area subsets are shown below. The first is the subset created with the inquire box; the second was created using the area of interest shapefile.





The results of pansharpening are shown below. The left window is the original image, and the right window is the pansharpened image.



The result from the haze reduction is shown below. The original image is in the left panel, and the product of haze reduction is on the right. Notice that the haze in the southeastern section has been removed.



The results from resampling are shown below. The original image is first, followed by nearest neighbor and bilinear interpolation. There is no discernible difference between the original and nearest neighbor. However, bilinear interpolation is noticeably smoother at the price of being blurrier and less accurate.






The results from image mosaicking are shown below. The first was done with Mosaic Express. Notice how the colors of the images differ and the border is noticeable. The next image was created using Mosaic Pro. Notice how the colors were matched with histogram matching and the border between the two images is less pronounced.




The next images are from the binary change detection. The histogram with the marked thresholds is shown first, followed by the binary change areas overlaid on the original study area. The red signifies areas that have changed.





SOURCES

Data obtained from Cyril Wilson for use in the 338 course.