Use ML to Extract Design Data to Accelerate and Improve SoC Designs

On the one hand, I’m extremely enthused and excited about all the amazing things I’m hearing right now about enterprise-level artificial intelligence (AI), machine learning (ML), and deep learning (DL) deployments. But (and there is always a “but”)…

In fact, before we jump headlong into the fray with gusto and abandon, someone asked me earlier today to explain the differences between AI, ML, and DL. Well, in a nutshell, AI refers to any technology that allows machines to simulate (some say “mimic”) human behaviors and decision-making abilities. Meanwhile, ML is a subset of AI that has the ability to examine data, automatically learn from that data, and then use what it learns to make assessments and decisions. In turn, DL is a subset of ML that uses layered, structured algorithms to implement artificial neural networks (ANNs). With architectures inspired by the neural networks in biological brains, ANNs can learn and make more sophisticated assessments and decisions than more traditional ML models.
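Just to make the “layered” part of that description a tad more tangible, here’s a minimal Python sketch of my own concoction showing a two-layer neural network forward pass (the layer sizes and random weights are purely illustrative, and real DL models learn their weights from data rather than conjuring them out of thin air):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: the classic nonlinearity applied between layers
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A toy "deep" model: two stacked layers, each a weight matrix plus a bias.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input (4 features) -> hidden (8 units)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden (8 units)  -> output (3 scores)

def forward(x):
    hidden = relu(x @ W1 + b1)   # layer 1
    return hidden @ W2 + b2      # layer 2

print(forward(rng.normal(size=4)))  # three output scores for one input sample
```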

Apropos of nothing, I usually ask Alexa to “tell me a stupid cat joke” every night shortly before closing my eyes and letting the sandman transport me to the Land of Nod. Alexa’s joke last night was:

Q: Why did the cat cross the road?

A: Because the chicken had a laser pointer!

I laughed. My wife (Gina the Magnificent) laughed. Alexa laughed…

By the way, did you see the HuffPost article about how a Google engineer was recently placed on administrative leave after he started telling people that the AI program he was working on had become sentient?

And, as a totally unrelated aside, have you heard of the Australian startup that grows living human neurons and then embeds them into traditional computer chips, as described in this video? (Now I can’t stop thinking about the “It’s alive! It’s alive!” scene, which has to be one of the most classic lines in horror movie history.)

When it comes to using live neurons, the term “mind-blowing” seems somewhat inappropriate. On the other hand, having the game of Pong (which was one of the first computer games ever made) serve as one of the first tasks for these bio-machine-brain hybrids seems rather fitting, in a wacky sort of way.

“But what has agitated your cogitations and piqued your ponderings?” I hear you cry. Well, I was just chatting with Mark Richards, who is the senior director of product marketing at Synopsys.

As a reminder, in 2020, Synopsys launched DSO.ai (Design Space Optimization AI), which is described as “the industry’s first autonomous artificial intelligence (AI) application for chip design.” As the folks at Synopsys put it, “DSO.ai searches for optimization targets in very large chip design solution spaces, using reinforcement learning to enhance power, performance, and area. By massively exploring design workflow options while automating less consequential decisions, the award-winning DSO.ai boosts engineering productivity while rapidly delivering results that you could previously only imagine.”
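Just to give a feel for what “using reinforcement learning to explore a design solution space” might look like in spirit (and I hasten to add that this is my own toy sketch, not Synopsys’s actual algorithm), here’s a simple epsilon-greedy search over a couple of made-up flow “knobs,” with a synthetic PPA-style reward standing in for a real tool run:

```python
import random

# Hypothetical flow "knobs" -- the names and values are illustrative only.
design_space = [
    {"util": u, "effort": e}
    for u in (0.60, 0.70, 0.80)
    for e in ("low", "medium", "high")
]

def run_flow(cfg):
    # Stand-in for a real place-and-route run: returns a synthetic
    # PPA score (higher is better) with some run-to-run noise.
    base = {"low": 0.7, "medium": 0.85, "high": 0.9}[cfg["effort"]]
    return base - abs(cfg["util"] - 0.70) + random.gauss(0, 0.02)

scores = {i: [] for i in range(len(design_space))}
epsilon = 0.2  # fraction of runs spent exploring new configurations

def average(i):
    return sum(scores[i]) / len(scores[i]) if scores[i] else float("-inf")

for trial in range(50):
    if random.random() < epsilon or not any(scores.values()):
        choice = random.randrange(len(design_space))   # explore something new
    else:
        choice = max(scores, key=average)              # exploit the best so far
    scores[choice].append(run_flow(design_space[choice]))

print("Best configuration found:", design_space[max(scores, key=average)])
```

A real design space, of course, has orders of magnitude more knobs and far costlier “runs,” which is precisely why an automated learner earns its keep.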

DSO.ai makes its presence felt early in the design process for system-on-chips (SoCs) and multi-chip modules (MCMs). Another tool in the Synopsys arsenal is SiliconDash, which resides in the post-silicon part of the process. SiliconDash is an industrial big data analytics solution for fabless companies. It provides comprehensive, real-time, end-to-end intelligence and control of SoC and MCM manufacturing and test operations for executives, managers, product engineers, test engineers, quality engineers, support engineers, device engineers, performance engineers, and test operators.

All of which brings us to Synopsys’ latest offering, DesignDash, which addresses the main digital design and analysis portion of the design process (think “RTL to GDSII”).

I will try to convey everything Mark told me as succinctly as possible. Let’s start with two of the biggest industry-wide challenges. The first is the growing design/designer productivity gap caused by increasing design/system complexity, ever more challenging PPA (power, performance, and area) goals, a designer resource crunch (there simply aren’t enough of them to go around), and inefficient debugging and optimization workflows, all of which conspire against time-to-market (TTM) objectives.

The second major challenge is the limited visibility and observability of the design process, which has long been opaque (if one is being generous), and whose opacity only increases as the size and complexity of designs grow. Designers and managers typically have a limited view of the overall design process, making it difficult to track exactly what’s going on and equally difficult to improve the situation.

One of the interesting things to contemplate is the sheer amount of data generated by the various tools. A typical project involves thousands of tool flow executions for tasks such as design space exploration, early feasibility analysis, architectural refinement, block hardening, signoff, regression testing… and the list goes on.

A dizzying amount of data is generated during the design process of an SoC or MCM (Image source: Synopsys)

As the Hitchhiker’s Guide says, “Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.” I feel much the same about the amount of data generated during an SoC or MCM design. It’s a lot. It’s really a lot. You simply wouldn’t believe just how much data there is.

In turn, this reminds me of the “holistic detective” Dirk Gently, who uses the “fundamental interconnectedness of all things” to solve the whole crime and find the whole person. What I mean here is that all of the data generated by all of the SoC/MCM design tools is fundamentally interconnected. The trick is to understand all of the connections and dependencies. Gaining this understanding is crucial when it comes to designing and debugging effectively and efficiently.

Like most things, this sounds great if you say it loud and gesticulate furiously, but how are we poor mortals to gain this deity-like understanding? Well, if you’ve been paying attention, you might remember I mentioned that the folks at Synopsys just introduced DesignDash, which provides a complete design optimization solution driven by data visibility and artificial intelligence (AI) to improve the efficiency, productivity, and efficacy associated with SoC/MCM design.

DesignDash: Product debugging and optimization, evolved (Image source: Synopsys)

DesignDash offers a combination of big data + analytics + machine learning. Fully integrated with the Synopsys digital design family (and including easy support for third-party tools), DesignDash offers native full-flow data extraction from RTL-to-GDSII implementation and signoff tools and provides comprehensive visualizations to display data from all of the tools together. Additionally, it deploys deep, ML-driven analytics to provide a real-time, unified, 360-degree view of design and implementation activities. It harnesses the vast potential hidden in the data to improve the quality and speed of decisions, and then it augments those decisions with intelligent guidance.
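To give a flavor of the sort of cross-tool aggregation involved (and, again, this is a back-of-the-envelope sketch of my own rather than a peek inside DesignDash, with field names and a log format that I’ve invented purely for illustration), consider gathering per-run quality-of-results metrics from synthesis, place-and-route, and signoff reports into one unified table:

```python
import pandas as pd

# Invented per-run records, as might be scraped from synthesis and
# place-and-route reports (the field names are illustrative only).
runs = [
    {"block": "cpu", "stage": "synth", "run": 1, "wns_ns": -0.12, "power_mw": 310, "area_um2": 1.2e6},
    {"block": "cpu", "stage": "synth", "run": 2, "wns_ns": -0.05, "power_mw": 305, "area_um2": 1.2e6},
    {"block": "cpu", "stage": "pnr",   "run": 1, "wns_ns": -0.20, "power_mw": 330, "area_um2": 1.3e6},
    {"block": "gpu", "stage": "pnr",   "run": 1, "wns_ns":  0.03, "power_mw": 540, "area_um2": 2.1e6},
]

df = pd.DataFrame(runs)

# A unified, cross-tool view: the latest QoR snapshot per block and stage.
latest = df.sort_values("run").groupby(["block", "stage"]).last()
print(latest[["wns_ns", "power_mw", "area_um2"]])

# A simple PPA trend: did worst negative slack improve between runs of a stage?
trend = df.groupby(["block", "stage"])["wns_ns"].agg(["first", "last"])
trend["improved"] = trend["last"] > trend["first"]
print(trend)
```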

This is where things start to get interesting, because this capability opens the doors to all sorts of analytical possibilities, including business intelligence (resource planning, resource utilization trends, overall resources), project intelligence (PPA trend/status, QoR causality and closure, design effectiveness profiling), and predictive analytics (historical pattern extraction, cross-project insights, prescriptive closure guidance).

One example that caught my attention is ML’s ability to detect patterns, identify anomalies, and determine relationships over time and between tools. This means that DesignDash will be able to provide root cause analysis (RCA), helping to identify and fix the origin of a problem that manifests itself further down the flow, both in the current project and in future projects. Things will only get more interesting as DesignDash gets more and more designs under its belt, allowing it to identify problems and predict results earlier in the development process.
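As a heavily simplified, entirely hypothetical illustration of the anomaly-spotting side of this, here’s a little Python snippet that flags a nightly tool run whose quality-of-results metric has strayed well outside its historical distribution; in a real RCA flow, an outlier like this would be the starting point for asking which commit, constraint change, or tool-version bump was responsible:

```python
from statistics import mean, stdev

# Hypothetical history of one QoR metric (say, worst negative slack in ns)
# for nightly runs of a block; the numbers are made up for illustration.
history = [-1.1, -1.0, -1.2, -0.9, -1.1, -1.0, -4.8, -1.1]

# Baseline statistics taken from the earlier (presumed-healthy) runs.
mu, sigma = mean(history[:-2]), stdev(history[:-2])

for i, value in enumerate(history):
    z = (value - mu) / sigma
    if abs(z) > 3:
        # An outlier like this is where a root-cause search would begin.
        print(f"run {i}: value {value} looks anomalous (z = {z:.1f})")
```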

Unfortunately, this is where my mind begins to run wild and free through the maze of possibilities. Could it be that, at some point in the future, managers will spend part of their day documenting the perceived mood and state of mind of each team member? For example, “Sally seems to be in a good mood,” “Henry seems to be a bit grumpy,” and “Jack and Jill were dating, but they just broke up.” What I’m thinking is that, when something in the design goes pear-shaped, the system could identify the fact that, six months earlier, Cuthbert (who is terrified of going to the dentist) discovered that he had a cavity, which led to a lack of attention and concentration. I can also imagine the system reporting that better results are achieved when certain designers work together (“You can expect a 5% reduction in area if A and B are paired up”), while other combinations of team members tend to have less salubrious effects.

And that brings me to… but no! I refuse to be drawn into the possibility of “Mood Forecasts” on the evening news, along the lines of: “A wave of low pressure is rolling in from the northeast. It will arrive in our area around 3:00 p.m. tomorrow, so that would be a good time to take out your happy pills.” Do you have any thoughts you’d care to share?

Abdul J. Gaspar