Assessment of bridge engineers on output display size in automatic detection of free lime using deep learning

Mai Yoshikura, Takahiro Minami, Tomotaka Fukuoka, Makoto Fujiu, Junichi Takayama

Last modified: 2024-05-06

Abstract


Conventional close visual inspection of bridges is costly and suffers from a shortage of skilled engineers. New technologies such as AI, UAVs, and robots can support the inspection process and substitute for conventional inspection methods, saving labor and reducing costs. We have developed a damage detection system for bridge inspection that adopts image recognition technology based on deep learning. The system detects damage in bridge images and outputs an accurate outline of each damaged region. Such technology can reduce inspection workload by detecting damage in place of inspectors, allowing them to focus on important tasks such as damage evaluation. However, collecting and annotating training images is time-consuming. Whereas linear damage such as cracks requires a pixel-level outline, planar damage such as free lime is presumed to be acceptable even with low-precision boundaries. If low-precision boundaries are acceptable, training data can be prepared in less time. To evaluate damage with the same accuracy as close visual inspection, however, the limit of allowable low-precision display must be determined. This study examined that limit for free lime: bridge engineers compared detection outputs with gradually coarsened boundaries, and we investigated the lowest boundary precision they would accept.
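The abstract does not state how the gradually coarsened boundaries were produced. As a minimal illustrative sketch only, the snippet below assumes the detector outputs a binary free-lime segmentation mask and uses OpenCV's Douglas-Peucker polygon simplification (cv2.approxPolyDP) to render outlines at several precision levels for engineers to compare; the file names and epsilon values are hypothetical.

```python
# Hypothetical sketch, not the authors' pipeline: generate detection outlines
# of gradually reduced precision from a binary free-lime segmentation mask
# using Douglas-Peucker simplification, with the tolerance expressed as a
# fraction of each contour's perimeter.
import cv2
import numpy as np


def coarsen_outlines(mask: np.ndarray, eps_fraction: float) -> list:
    """Return simplified outline polygons for each detected region.

    mask         -- uint8 binary mask (255 = free lime) from the detector
    eps_fraction -- simplification tolerance as a fraction of the contour
                    perimeter; larger values give coarser outlines
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    simplified = []
    for cnt in contours:
        epsilon = eps_fraction * cv2.arcLength(cnt, True)
        simplified.append(cv2.approxPolyDP(cnt, epsilon, True))
    return simplified


if __name__ == "__main__":
    mask = cv2.imread("free_lime_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    image = cv2.imread("bridge_photo.png")                         # hypothetical file
    # Render one overlay per precision level for side-by-side comparison.
    for eps in (0.002, 0.01, 0.03, 0.08):
        overlay = image.copy()
        cv2.drawContours(overlay, coarsen_outlines(mask, eps), -1, (0, 0, 255), 2)
        cv2.imwrite(f"overlay_eps_{eps}.png", overlay)
```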

Keywords


bridge inspection, free lime, output display, automatic detection, AI, deep learning
