Sound as a new data source for Industry 4.0

The inflection in a voice tells us how someone is feeling. The rattle in a car reminds us something needs to be checked. The hum of a refrigerator assures us it’s working as it should. All day long, humans use sound as a data source, and we mostly do it without consciously thinking about it.
For some people, sound is part — though a rarely acknowledged one — of their jobs. Technicians, machine operators and maintenance teams use sounds as data points that give them clues about the health and operability of the machines they work with. Sometimes those sounds clearly reveal a problem, which other measures of equipment condition can then confirm. Other times, the sounds are more subtle, and operators can only guess that something may be wrong, especially when no measured parameter shows a problem.
Capturing missing data
The meaning behind sounds in an industrial setting has largely been locked inside the minds of experienced maintenance and operations personnel. The data has existed all along, but until now there has been no practical way to capture industrial sounds — or the knowledge of the people who understand them — and put them to use. Enter the application of AI to acoustic data. Using AI, acoustic data becomes a new source of information that can be harvested, learned from and applied to real-world problems.
Industrial noise can be broken down into discrete sounds and recorded. Relevant sounds are captured and turned into spectrograms, visualizations that show how a sound's frequency content changes over time. Subject matter experts — humans who understand what the sounds mean — annotate and label them. With this data, models are built that learn to distinguish a sound that is “good” or “normal” from one that is “bad” or “abnormal,” indicating a problem or defect. As more data is collected, the models are refined.
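The pipeline above — record clips, compute spectrograms, build a model of “normal,” flag deviations — can be sketched in a few lines. This is a minimal illustration, not IBM's actual method: the sample rate, frame sizes, synthetic “hum” and “rattle” signals, and the simple distance-from-baseline score are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

SR = 16_000          # sample rate in Hz; illustrative value
FRAME = 512          # STFT window length
HOP = 256            # hop between successive frames

def spectrogram(signal):
    """Magnitude spectrogram via a short-time Fourier transform."""
    window = np.hanning(FRAME)
    frames = [signal[i:i + FRAME] * window
              for i in range(0, len(signal) - FRAME, HOP)]
    return np.abs(np.fft.rfft(frames, axis=1))   # shape: (frames, freq bins)

def clip(freq, rattle=0.0, seconds=1.0, seed=0):
    """Synthesize a machine hum, optionally with broadband 'rattle' noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t) + rattle * rng.standard_normal(t.size)

# 1. Record known-good operation and build a baseline spectral profile
#    (this is where expert labeling of "normal" clips comes in).
normal_clips = [clip(120.0, seed=s) for s in range(5)]
baseline = np.mean([spectrogram(c).mean(axis=0) for c in normal_clips], axis=0)

def anomaly_score(signal):
    """Mean distance of each frame's spectrum from the normal baseline."""
    return float(np.linalg.norm(spectrogram(signal) - baseline, axis=1).mean())

# 2. Score incoming clips against a margin over the normal score.
THRESHOLD = 2 * anomaly_score(clip(120.0, seed=99))

good = anomaly_score(clip(120.0, seed=7))              # healthy hum
bad = anomaly_score(clip(120.0, rattle=0.5, seed=7))   # hum plus rattle
print(good < THRESHOLD, bad > THRESHOLD)
```

In production systems the hand-rolled baseline would be replaced by a trained classifier, but the shape of the workflow — labeled recordings in, normal/abnormal decision out — is the same.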
Real-world applications in industrial and manufacturing settings
IBM Acoustic Insights enables organizations to use plant and equipment sounds to automatically detect operational anomalies and defects and identify product quality issues. This AI application automates and speeds up the skilled process of sound inspection for product quality tracking, helping to increase yield, reduce scrap and cut human inspection time. Sound analysis helps quickly identify defects in finished products and monitor anomalies in in-process products. It is also being used for preventive maintenance, to determine when a machine or part may be close to failure and needs repair or replacement.
Currently, the best candidates for Acoustic Insights are businesses with repeatable processes that can be recorded and enough failure occurrences that anomalies and deviations from normal sounds can be captured.
A current real-world application is a car manufacturer that uses robotic welding equipment to assemble chassis. With robotics, a chassis can be assembled in about six minutes. Without AI, it can take hours or even days for a welding problem to become apparent — and that can mean hundreds of poorly welded chassis. With AI, microphones are placed around the welders, as close as possible to where the sound emanates. Using Acoustic Insights and edge computing to reduce latency, the sound of a poor weld triggers an alert as soon as it occurs. The system can stop welding immediately and call in maintenance to fix the welder, dramatically decreasing the number of poor welds produced.
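The low-latency monitoring loop described above can be sketched as follows. This is a simplified stand-in, not the product's implementation: the chunk size, the high-frequency energy score used in place of a trained weld-sound model, and the simulated audio stream are all assumptions for illustration.

```python
import numpy as np

SR = 16_000      # sample rate in Hz; illustrative
CHUNK = 2_048    # samples scored at a time (~128 ms), keeping alert latency low

def band_energy(chunk, lo_hz=2_000):
    """Energy above lo_hz — a crude stand-in for a trained weld-sound model,
    since weld defects here are simulated as broadband high-frequency noise."""
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(chunk.size, d=1 / SR)
    return float(np.sqrt(np.mean(spectrum[freqs >= lo_hz] ** 2)))

def monitor(stream, threshold):
    """Score each audio chunk as it arrives; return the index of the first
    chunk that trips the alert (stop the welder, dispatch maintenance)."""
    for i, chunk in enumerate(stream):
        if band_energy(chunk) > threshold:
            return i
    return None               # no anomaly during this weld

# Simulated stream: clean 150 Hz weld hum, then a chunk with 'crackle' noise.
rng = np.random.default_rng(1)
t = np.arange(CHUNK) / SR
clean = np.sin(2 * np.pi * 150 * t)
crackle = clean + rng.standard_normal(CHUNK)   # broadband defect noise
stream = [clean, clean, crackle, clean]

threshold = 5 * band_energy(clean)             # margin above the clean hum
print(monitor(stream, threshold))              # alerts on the crackle chunk
```

Scoring fixed-size chunks on hardware next to the welder, rather than shipping audio to a data center, is what keeps the alert within the same weld cycle.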
The application possibilities are only growing
Acoustic data is an emerging field. Right now, Acoustic Insights is mostly being applied in manufacturing and industrial settings. But IBM continues to push the boundaries of its application. For example, IBM has extracted acoustic data from tennis matches to build statistics on the match. Tennis sound data was also used to serve up the best Wimbledon highlights for fans. IBM built an AI system that scanned real-time tennis match clips and then ranked them by giving them an excitement score. This helped the digital team find the most exciting clips within minutes of each match’s completion.
IBM keeps exploring areas where this can be applied, including industrial livestock operations for early detection of animal disease. Efforts are also being made to capture vibrational and ultrasonic sounds outside the range of human hearing. The theory is that by the time a human hears an anomalous sound, the machine is likely already close to failure; capturing vibrational and ultrasonic signals could surface problems, and allow them to be corrected, that much sooner.
Could sound be an untapped source of data inside your business? Learn more about Acoustic Insights and IBM Operations Consulting, which helps manufacturers and industrial operators streamline operations and optimize assets through preventive maintenance.