Improving chip yield rates with cognitive manufacturing


When we think about manufacturing in electronics, we usually think about the assembly of the final product rather than the manufacture of all of the individual components. Right at the start of the supply chain are the microchips themselves. As components, they are relatively cheap per unit, but the design complexity and manufacturing setup costs are staggering. Last month Intel announced it was investing $7 billion over the next four years to complete its Fab 42 facility in Arizona to produce 7nm chips, while in December TSMC announced plans to invest roughly $16 billion in a fab facility for 5nm and 3nm chips.

At such a small scale, the density of components can lead to unpredictable behavior from electrostatic and quantum effects that cannot be adequately modeled using existing tools and techniques. Designers must often make compromises to try to minimize these effects.

More importantly, it also means that previously insignificant imperfections in the silicon substrate or within the manufacturing process itself can significantly impact the wafer yield. While manufacturing yields are a closely guarded secret, broadly speaking, as density and clock speed go up, yield goes down. Thus, as chips shrink to 5nm and 3nm, yield rates decrease, which impacts profitability. Something needs to change.
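To see why yield falls so quickly as dies grow more complex, consider the classic Poisson yield model, Y = exp(-A·D0), where A is the die area and D0 the defect density. The sketch below uses purely illustrative numbers (the die areas and defect densities are assumptions, not industry figures); real fabs use more sophisticated models, but the exponential sensitivity is the point.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0).

    A   : critical die area in cm^2
    D0  : defect density in defects per cm^2
    """
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative values only -- not real process data.
die_area = 1.0  # cm^2
for d0 in (0.1, 0.3, 0.5):
    print(f"D0={d0} defects/cm^2 -> yield {poisson_yield(die_area, d0):.1%}")
```

Even a modest rise in defect density (or die area) cuts yield exponentially, which is why process imperfections that were negligible at larger nodes become economically significant at 5nm and below.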

Semiconductor manufacturing is already well advanced in autonomous operation and process automation, but there is always room for improvement.

In a recent IBM study, “Why cognitive manufacturing matters in electronics,” 92 percent of electronics executives indicated that they were planning or implementing big data analytics, with 83 percent seeing moderate to significant ROI. Meanwhile 100 percent indicated that they were planning for or implementing AI/cognitive computing with 83 percent seeing moderate to significant ROI.

While these numbers seem high, they are not that surprising. For example, AMD has been using Hadoop to improve yield predictions for the last several years, and at ISSCC 17, TSMC showed that the speed of a chip could be increased by 40 MHz through the use of cognitive (machine learning) techniques to predict congestion during place and route.

In addition, the EE Times article “AI Tapped to Improve Design” discusses a research program IBM has launched along with eight other companies and three universities to investigate the application of machine learning in electronics design.

In the same IBM study I noted earlier, 100 percent of respondents indicated they were implementing cloud computing, with 73 percent indicating they had attained a moderate to significant ROI.

Security is often cited as a barrier to cloud adoption, and in an industry dominated by design intellectual property (IP), such a high rate of adoption seems strange. However, many semiconductor companies run large compute farms that effectively operate as private clouds. The design and verification of chips takes significant compute power, and the desire to apply cognitive techniques will continue to drive demand for compute. There is significant interest in hybrid cloud, where specific design, big data, or cognitive workloads are sent off premises. Many of these private and hybrid cloud environments use automation solutions from IBM Spectrum Computing.

With today’s complex designs, millions of verification runs can be required. But what happens if you find a serious bug a few days before tape-out? Delaying tape-out can have significant revenue impact. This is one area where cognitive systems may be able to help. If, during the design process, we can capture knowledge about the various verification runs, we might be able to determine which of those millions of tests we really need to rerun to validate the fix, and which have no relevance, reducing the retest cycle and minimizing the market impact.
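The selection idea above can be sketched very simply: if we record which design blocks each verification test exercises, then after a late fix we only rerun tests whose coverage intersects the changed blocks. This is a hypothetical illustration (the test names, block names, and `select_tests` helper are all invented for the example); a production flow would derive coverage from the verification environment itself.

```python
def select_tests(coverage: dict[str, set[str]], changed_blocks: list[str]) -> list[str]:
    """Return the tests whose covered design blocks overlap the changed blocks.

    coverage       : test name -> set of design blocks that test exercises
    changed_blocks : blocks touched by the bug fix
    """
    changed = set(changed_blocks)
    return sorted(t for t, blocks in coverage.items() if blocks & changed)

# Invented example data: three tests with known block coverage.
coverage = {
    "t_alu_smoke": {"alu"},
    "t_cache_stress": {"l1_cache", "bus"},
    "t_full_soc": {"alu", "l1_cache", "bus", "io"},
}

# A fix lands in the L1 cache: only two of the three tests need to rerun.
print(select_tests(coverage, ["l1_cache"]))  # → ['t_cache_stress', 't_full_soc']
```

The cognitive part in practice is learning that coverage mapping (and failure likelihoods) from historical regression data, rather than maintaining it by hand.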

With such large investments in manufacturing, the combination of analytics, automation and cognitive computing offers significant competitive advantage and ROI. Learn how IBM Spectrum Computing solutions can help enhance agility and productivity here.
