jamesswartwood.github.io

Personal website for showcasing projects, etc.


Hydrology Research Tool - Pole Detection w/ Uwimg Library


Video Introduction


Contents

GitHub Repository


Background

Over the last decade, hydrology studies have installed red poles in front of remote cameras because the change in visible pole height in the images provides continuous monitoring of snow depth. So far, manual extraction of the snow depth is still recommended because current automated methods often produce inaccurate results: changes in daylight and weather alter the color and texture of the poles. Manual extraction involves measuring the length, in pixels, of the portion of the pole that is visible to the camera. The snow depth around the pole can then be estimated by subtracting that length from the known length of the pole and converting the pixel count into metric units. This process requires prior knowledge of the camera intrinsics and the placement of the pole.
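The manual calculation above can be sketched as a short function. This is an illustrative sketch only: the function name and the pixels-per-centimeter calibration input are assumptions standing in for the calibration normally derived from the camera intrinsics and pole placement.

```python
def estimate_snow_depth_cm(visible_pixels, pole_length_cm, pole_length_pixels):
    """Estimate snow depth from the visible portion of the pole.

    visible_pixels     -- measured pixel length of the pole above the snow
    pole_length_cm     -- known physical length of the pole
    pole_length_pixels -- pixel length of the full pole with no snow
                          (a stand-in for camera-intrinsics calibration)
    """
    cm_per_pixel = pole_length_cm / pole_length_pixels
    buried_pixels = pole_length_pixels - visible_pixels
    return buried_pixels * cm_per_pixel

# Example: a 300 cm pole spans 600 pixels when fully visible;
# only 450 pixels are visible now, so 150 pixels are buried.
print(estimate_snow_depth_cm(450, 300, 600))  # 75.0 cm of snow
```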

Example images:

Camera 1

Camera 2

Camera 3


Project Description

In this project, we explored methods to automate the detection of the top and bottom points of the red poles using the uwimg computer vision library that students develop while taking CSE 455 at the University of Washington, Seattle. This approach must account for differences in pole location across images and for weather/daylight conditions that change pixel color values. There are many ways to implement the detection of objects in an image.


Developed Algorithm

This is a general summary of the algorithm; small intermediate steps are glossed over. For a full understanding of the process, refer to the code itself.

The code for this project can be found in this dedicated GitHub repository: jamesswartwood/pole-detection

1. Sweep the image for a red pixel.
2. Expand sideways from the red pixel, identifying potential edges of the pole and measuring prospective pole width.
3. Travel either up or down from the original red pixel a distance equal to the measured width.
    - Make sure no edge is hit in the process; this ensures that we land on another pixel on the body of the pole. Otherwise, resume the sweep in step 1.
4. Expand sideways from this second red pixel, identifying potential edges of the pole and measuring prospective pole width.
    - If the second measured pole width matches the first, we have found the pole. Otherwise, resume the sweep in step 1.
5. Use the identified pixels and measured pole widths to find two points along the very center of the pole.
6. Calculate the tilt of the pole by finding the slope between the two points.
7. Project down the length of the pole using the measured slope to find the bottom edge.
    - Before the bottom is found, occasionally recalibrate to the center of the pole to account for any bend in the pole and recalculate the slope.
8. Project up the length of the pole to find the top edge of the red portion of the pole.
9. Project up further still to find the top edge of the yellow portion of the pole.
10. Output the top and bottom points of the pole. Update the image with annotations of the detection.
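Steps 1 through 6 can be sketched on a binary "red mask" of the image. This is a simplified illustration on a synthetic mask, not the project's uwimg implementation; the function names and the synthetic pole are assumptions made for the example.

```python
def measure_width(mask, row, col):
    """Expand sideways from (row, col) until non-red edges are hit.
    Returns (left, right) column indices of the pole edges."""
    left, right = col, col
    while left > 0 and mask[row][left - 1]:
        left -= 1
    while right < len(mask[row]) - 1 and mask[row][right + 1]:
        right += 1
    return left, right

def find_pole(mask):
    """Sweep for a red pixel, confirm it lies on the pole body by
    matching widths one width further down, then return two center
    points and the slope between them (steps 1-6)."""
    for row in range(len(mask)):
        for col in range(len(mask[row])):
            if not mask[row][col]:
                continue
            l1, r1 = measure_width(mask, row, col)
            width = r1 - l1 + 1
            row2 = row + width
            if row2 >= len(mask) or not mask[row2][col]:
                continue  # hit an edge: resume the sweep
            l2, r2 = measure_width(mask, row2, col)
            if (r2 - l2 + 1) != width:
                continue  # widths disagree: resume the sweep
            c1 = (row, (l1 + r1) / 2)
            c2 = (row2, (l2 + r2) / 2)
            slope = (c2[1] - c1[1]) / (c2[0] - c1[0])  # columns per row
            return c1, c2, slope
    return None

# Synthetic 12x10 mask with a vertical 3-pixel-wide "pole" in columns 4-6.
mask = [[4 <= c <= 6 and 2 <= r <= 10 for c in range(10)] for r in range(12)]
c1, c2, slope = find_pole(mask)
print(c1, c2, slope)  # centers land on column 5; slope 0.0 for a vertical pole
```

From the two center points and the slope, the real algorithm then projects down to the bottom edge (recalibrating along the way) and up to the red and yellow tops.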

Results

Statistical Results

Camera 3 was selected for large-scale testing because of the large variation in snow depth across its dataset. A total of 1,305 images were manually labeled for comparison. Here are the results in spreadsheet form: Statistical Results

Summary:

Runtime:

Classification:

Detection:

Visual Results

These results are a small sample of all the images this algorithm was tested on. Full results can be found on GitHub: jamesswartwood/pole-detection

Camera 1

Camera 2

Camera 3


Conclusion

This automated method of detecting the poles shows great promise. The algorithm was efficient, with a runtime of 0.1206 seconds per image. It was very consistent in classifying the presence of a pole, achieving an accuracy of 98.6% on the camera 3 dataset. The detection proved accurate on the vast majority of images from camera 3, but on a portion of the images the estimation greatly overshot or undershot the pole length, causing an average difference of 71 pixels per measurement across the dataset. There is still much work to be done in fine-tuning the algorithm to account for environmental factors, which were the main cause of discrepancy between the manual and automated labeling. Changes in the environment, including lighting, lens blur due to snow, and fallen objects covering the bottom of the pole, made it difficult for the algorithm to accurately detect the edges along the sides, top, and bottom of the poles. Given more time, the conditions governing these detection steps could be adjusted to improve the general accuracy of the program.


Credits

Author

James Swartwood is a third-year student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. He aspires to become a computer vision developer and machine learning engineer. - Last Updated 06/02/2022

Acknowledgements

Thank you to Catherine Breen, the environmental scientist consulted during this project.

Thank you to Dr. Joseph Redmon for teaching the CSE 455 class and providing insight into computer vision methods for this project.