Session: 02-06-01: Machine Learning in Structural Dynamics and Aeroelasticity
Paper Number: 152541
152541 - Unsupervised Anomalous Sound Detection of Noisy Real-World Production Facility Using Deep Autoencoder and Spectrogram
Oil and gas production stands as the backbone of the global energy sector and is often referred to as the lifeblood of modern civilization. A production facility handles the field conditioning, treating, processing, and/or measuring of gas, oil, and/or water. The equipment used for oil or gas production on a production installation includes separation, treating, and processing facilities; equipment and facilities used in support of production operations; storage areas or tanks; and dependent personnel accommodations. This equipment needs to be appropriately maintained and inspected to assure sustainable, efficient, and safe production. High accuracy in fault detection is essential, as failure to detect faults in process equipment can result in higher repair costs, production downtime, nonproductive time, and health and safety implications for workers. For various reasons, it is highly desirable to limit the need for human physical presence and intervention in industrial process plants: inspection and maintenance operations should ideally be converted to condition-based predictive maintenance to reduce the overall in-operation failure rate, and remote inspection and intervention solutions such as robotics should handle as many tasks as possible.
Anomalous sound detection aims to identify whether the sound emitted by a piece of equipment is normal or anomalous. The emerging technologies of artificial intelligence and machine learning have opened new opportunities for automatically detecting process equipment malfunctions. One of the significant challenges in putting anomalous sound detection to practical use is detecting unknown abnormal sounds when only normal sounds make up the vast majority of the training data. In real-life production facilities, actual anomalous sounds occur rarely and can be highly diverse. It is therefore practically challenging, if not impossible, to collect all possible abnormal sound patterns as training data, yet unseen abnormal sounds must still be detected. This challenge can be overcome by framing the problem as unsupervised detection of anomalous sounds, as illustrated by the sketch below.
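The following is a minimal sketch of that unsupervised framing, not the authors' implementation: the detector and its decision threshold are fit using normal clips only. The function name score_clip and the 99th-percentile threshold are illustrative assumptions; any clip-level anomaly score (for example, the autoencoder reconstruction error sketched after the next paragraph) can take its place.

    # Minimal sketch: only normal clips are needed to calibrate the detector.
    import numpy as np

    def score_clip(clip):
        # Hypothetical placeholder: any function mapping an audio clip to a
        # scalar anomaly score, where higher means "more abnormal".
        raise NotImplementedError

    def fit_threshold(normal_clips, percentile=99.0):
        # Assumption: the threshold is taken from the score distribution of
        # held-out normal clips, so no labeled anomalies are required.
        normal_scores = np.array([score_clip(c) for c in normal_clips])
        return np.percentile(normal_scores, percentile)

    def is_anomalous(clip, threshold):
        # Decision rule: flag a clip whose score exceeds the normal-only threshold.
        return score_clip(clip) > threshold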
In this study, we develop an unsupervised deep learning methodology to automatically detect anomalous sounds recorded by a microphone carried by an inspection robot; all audio data are collected near a hot oil pump in a noisy real-world oil and gas production facility. Over 500 time-series audio clips, each 10 s in duration, are used for training and evaluating the model; they cover the normal condition and abnormal conditions such as gas leakage and pump contamination/clogging. We start by training a deep autoencoder architecture using only normal data, where the training inputs are generated by computing a spectrogram (i.e., a visual representation, or image, of the sound) from each signal and extracting features from it. Such a deep autoencoder takes spectrograms of the original time-series audio signals as inputs and acts as a classifier to discriminate between normal and abnormal sounds. Specifically, the reconstruction error produced by the autoencoder is used to measure the degree of abnormality of the sound event. On a test dataset that contains abnormal cases, the developed autoencoder achieves robust predictions with a recall of 1.0, a precision of 0.73, and an F1 score of 0.85. The developed methodology has broad impact and can be readily extended to detect audio anomalies for other types of rotating equipment in the process train, such as cooling fans and motors. The current model has been deployed to a cloud-based software platform as the compute engine to enable autonomous inspection and predictive maintenance for oil and gas production facilities.
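The sketch below illustrates a pipeline of this kind (log-mel spectrogram features, a dense autoencoder trained on normal clips only, clip-level reconstruction error as the anomaly score, and precision/recall/F1 evaluation). It is not the paper's implementation: the feature layout (stacked log-mel frames), layer widths, sampling rate, training settings, threshold percentile, and the variables normal_paths, test_paths, and test_labels are all assumptions introduced for illustration.

    import numpy as np
    import librosa
    from tensorflow import keras
    from sklearn.metrics import precision_score, recall_score, f1_score

    N_MELS, FRAMES = 64, 5          # assumed feature dimensions

    def clip_to_features(path, sr=16000):
        # Load a 10 s clip and stack consecutive log-mel frames into vectors.
        y, sr = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                             hop_length=512, n_mels=N_MELS)
        log_mel = librosa.power_to_db(mel, ref=np.max).T      # (time, n_mels)
        vecs = [log_mel[i:i + FRAMES].reshape(-1)
                for i in range(len(log_mel) - FRAMES + 1)]
        return np.array(vecs, dtype=np.float32)

    def build_autoencoder(input_dim):
        # Simple dense encoder-decoder; sizes are illustrative assumptions.
        return keras.Sequential([
            keras.layers.Input(shape=(input_dim,)),
            keras.layers.Dense(128, activation="relu"),
            keras.layers.Dense(8, activation="relu"),      # bottleneck
            keras.layers.Dense(128, activation="relu"),
            keras.layers.Dense(input_dim),
        ])

    def clip_score(model, feats):
        # Mean squared reconstruction error over all windows of one clip.
        recon = model.predict(feats, verbose=0)
        return float(np.mean((feats - recon) ** 2))

    # Train on normal clips only (normal_paths is assumed to exist).
    X_train = np.vstack([clip_to_features(p) for p in normal_paths])
    model = build_autoencoder(X_train.shape[1])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, X_train, epochs=50, batch_size=256, shuffle=True)

    # Score test clips and evaluate against labels (1 = anomalous).
    normal_scores = [clip_score(model, clip_to_features(p)) for p in normal_paths]
    threshold = np.percentile(normal_scores, 99)
    scores = np.array([clip_score(model, clip_to_features(p)) for p in test_paths])
    preds = (scores > threshold).astype(int)
    print("precision", precision_score(test_labels, preds),
          "recall", recall_score(test_labels, preds),
          "F1", f1_score(test_labels, preds))

A clip is flagged as anomalous when its mean reconstruction error exceeds a threshold calibrated on normal data, which is the decision rule implied by using reconstruction error as the degree of abnormality.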
Presenting Author: Fei Song, SLB
Presenting Author Biography: Fei Song is currently a senior data scientist with SLB, working on developing AI/ML-based autonomous solutions for SLB midstream production systems.
Paper Type: Technical Paper Publication