Computer chips can fail in countless ways, from countless sources in the manufacturing process. Even with terabytes of sensor data pouring from the equipment used to make them, some chips still come off the line as lemons, depressing wafer yield, performance, and reliability, and driving up manufacturing and preventive maintenance costs. IBM and Tokyo Electron Limited (TEL) think injecting cognitive computing into chip-making equipment might be the solution. So, the two recently signed a Joint Development Program to add cognitive capabilities to TEL’s chip manufacturing equipment to track down anomalies – and keep that equipment running smoothly.
A typical semiconductor fab generates upwards of 1.5 terabytes of data per hour – more than 5 million transactions per day from over 50,000 sensors – not to mention the more than 400 TB of storage needed to support real-time applications, like the process control loops that keep each complex action in the fab consistent.
Detecting anomalies in noisy multivariate time-series sensor data from manufacturing equipment has been a focus for IBM Research across industrial products, from mining, steel, petroleum, and automotive to semiconductors. IBM is leading the development and deployment of AI techniques for anomaly detection in manufacturing: new machine learning techniques have succeeded not only in revealing and predicting anomalies, but also in root cause diagnostics.
IBM Research is developing advanced machine learning “pipelines” that construct prediction models from large volumes of structured data, as well as unstructured operational data from the tools, logs, and on-wafer measurements.¹ These AI pipelines are used to assess and predict anomalies, and to optimize yield, performance, and reliability of the chips we make at our semiconductor research fab in Albany, NY. With TEL, we’re taking the TBs of sensor data from manufacturing equipment and wafer measurements for anomaly detection – to identify hidden patterns that could be precursors of abnormal behavior.
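As a toy illustration of what such a pipeline involves (not IBM's actual method, and using synthetic values rather than real fab data), the sketch below extracts simple per-window features from a single sensor trace and scores each window against a healthy baseline with a robust z-score:

```python
import numpy as np

def window_features(series, width):
    """Summarize non-overlapping windows of a 1-D sensor trace by
    (mean, std) -- a toy stand-in for the feature-extraction stage
    of an anomaly-detection pipeline."""
    n = len(series) // width
    w = series[: n * width].reshape(n, width)
    return np.column_stack([w.mean(axis=1), w.std(axis=1)])

def score_windows(baseline_feats, feats):
    """Robust z-score of each window's features against a healthy
    baseline; larger means more anomalous."""
    med = np.median(baseline_feats, axis=0)
    mad = np.median(np.abs(baseline_feats - med), axis=0) + 1e-9
    return np.abs((feats - med) / mad).max(axis=1)

rng = np.random.default_rng(1)
healthy = rng.normal(10.0, 0.5, 1000)  # hypothetical in-spec sensor trace
faulty = healthy.copy()
faulty[800:] += 3.0                    # injected step change in tool state

baseline = window_features(healthy, 50)
scores = score_windows(baseline, window_features(faulty, 50))
# Windows after the injected shift score far above the pre-shift windows.
```

A production pipeline would replace these hand-picked features and threshold with learned models over thousands of correlated sensors, but the shape is the same: featurize, fit a baseline, score deviations.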
Detecting anomalies one nanometer at a time
Cognitive anomaly warnings would be far more sophisticated and sensitive than typical fault detection methods (like simple alarms on individual sensors), enabling tighter control of tool performance and feedback loops that correct or reset equipment to a desired state. Adding cognitive capabilities to the process will also identify and control for equipment “variations” – subtle changes to a tool’s state, influenced by its process history, wafer history, or other changes to fab facilities.
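To see why multivariate scoring can catch what per-sensor alarms miss, consider this illustrative sketch (synthetic data; the sensor names are hypothetical): two sensors that normally move together each stay within their individual 3-sigma limits, yet a reading that breaks their correlation stands out clearly in a joint Mahalanobis-distance score.

```python
import numpy as np

# Two hypothetical sensors that normally move together (say, chamber
# pressure and RF power on an etch tool); values are synthetic.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, 500)
healthy_data = np.column_stack([base, base + rng.normal(0.0, 0.1, 500)])

mean = healthy_data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy_data, rowvar=False))

def joint_score(x):
    """Mahalanobis distance of a reading from the healthy baseline."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

in_spec = np.array([1.5, 1.5])   # follows the usual correlation
broken = np.array([1.5, -1.5])   # each sensor in range, correlation broken

# A per-sensor alarm (e.g., 3-sigma limits) sees nothing wrong here...
sigma = healthy_data.std(axis=0)
assert np.all(np.abs(broken - mean) < 3 * sigma)
# ...but the joint score cleanly separates the two readings:
# joint_score(in_spec) is small, joint_score(broken) is large.
```

The Gaussian-model-based detection methods in the references below generalize this idea to many sensors with sparse, interpretable dependency structure.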
Any change in equipment state can induce minute changes in composition or dimension with dramatic impact on chip function. For example, our most advanced chips will not tolerate more than a 1nm variation – a level of accuracy beyond the resolution of the most advanced metrology techniques. Yesterday’s reasonable 3nm variation would now depress yield, performance, and reliability of a chip.
And whether a given piece of equipment or process can reach a specific process control target (like 1nm variation control) will drive many other elements of a chip, such as device architecture, material selection, integration strategies, and ultimately the value proposition of scaling to 7nm and eventually 5nm. Cognitive will be ‘de rigueur’ to extract value from semiconductor technology scaling.
Analyzing data at the edge for a better chip
The joint research with TEL will first focus on understanding the ontology of tool behaviors by ingesting structured data from sensors on TEL’s etching equipment, which is used in Extreme Ultraviolet-patterned interconnects (the wires that connect the transistor to other parts of the chip). This data will be coupled with IBM’s wafer measurements, alongside unstructured data, such as maintenance logs and images, to automatically learn patterns that are precursors of abnormality. These failure-mode patterns can subsequently be deployed on TEL’s etching equipment to identify abnormalities in real time, allowing mitigation while the tool stays in operation.
Ultimately, TEL could incorporate cognitive capabilities into streaming analytics on their equipment, at the “edge” (as in, computing where the data exists, versus needing to transport the data to a central compute hub). They could also use a hybrid cloud option with streaming analytics at the edge, and asset fleet analytics on a secure cloud. By developing anomaly detection models that link TEL’s equipment sensor data to downstream wafer performance and yield, they can show their own customers a path to anomaly-free chip manufacturing. For IBM, our research efforts on these AI pipelines will feed into the new Watson IoT for Manufacturing solution.
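A minimal sketch of the edge idea: keep only constant-size running statistics per sensor on the tool itself, so each new reading can be scored immediately without shipping raw data to a central hub. The class name and numbers below are illustrative, not any TEL or IBM API; the update rule is Welford's standard online algorithm for mean and variance.

```python
class EdgeScorer:
    """Edge-side streaming scorer: maintains a running mean and
    variance via Welford's online algorithm, so each reading is
    scored in O(1) time and memory at the equipment itself."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's update for running mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x):
        """Deviation of x from the learned band, in standard deviations."""
        if self.n < 2:
            return 0.0
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) / std if std else 0.0

scorer = EdgeScorer()
for reading in [5.0, 5.1, 4.9, 5.0, 5.2, 4.8]:  # hypothetical in-spec stream
    scorer.update(reading)
# A typical reading (5.0) scores near zero; a 9.0 reading
# sits many standard deviations outside the learned band.
```

In the hybrid option described above, lightweight scorers like this would run at the edge while heavier fleet-level models run on a secure cloud.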
TEL and IBM have partnered on joint development programs in semiconductor research and development for 18 years, and are expanding their strategic relationship toward cognitive projects, including cognitive manufacturing.
About Tokyo Electron
Tokyo Electron Limited (TEL), established in 1963, is a leading supplier of innovative semiconductor and flat panel display (FPD) production equipment worldwide.
All of TEL’s semiconductor and FPD production equipment product lines maintain high market shares in their respective global segments.
¹ J. Kalagnanam, Y. Lee, T. Ide, H. Hahn, “Cross Industry Analytics Solution Library for Resource and Operations Management,” KEIT PD Issue Report, Vol. 15-9, September 2015, pp. 33-51, ISSN 2234-3873.
T. Ide, D. T. Phan, J. Kalagnanam, “Change Detection Using Directional Statistics,” Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 16), 2016.
T. Ide, A. Khandelwal, J. Kalagnanam, “Sparse Gaussian Markov Random Field Mixtures for Anomaly Detection,” Proceedings of the 2016 IEEE International Conference on Data Mining (ICDM 16), December 13-15, 2016, Barcelona, Spain, pp. 955-960.