AI has helped design over 100 chips. Should we be worried?

Shawn Knight

The big picture: Artificial intelligence-powered chat and knowledge platforms like ChatGPT, Google Bard and Microsoft's new AI-assisted Bing are getting the lion's share of attention as of late, but chatbots are not the only field in which artificial intelligence is making great strides.

Electronic design automation firm Synopsys recently revealed that clients working with its Synopsys DSO.ai (Design Space Optimization AI) have completed 100 commercial tape-outs across a wide range of fields and process nodes.

One client, STMicroelectronics, achieved the first-ever commercial design tape-out using AI on the cloud (in this case, Microsoft Azure). According to Philippe d'Audigier, the SoC hardware design director at STMicroelectronics, the system increased their power, performance and area (PPA) exploration productivity by more than 3x.

Junhyun Chun, head of SoC at SK Hynix, said Synopsys' design system delivered a 15 percent cell area reduction and a five percent die shrink in a recent project. "Synopsys DSO.ai brings a huge amount of design team efficiency, giving our engineers more time to create differentiated features for our next generation of products," Chun added.

According to Synopsys, clients on average are seeing more than 3x productivity increases, up to 25 percent lower total power usage and significant reductions in die size, all while using fewer resources. Having Synopsys DSO.ai on the team also automates many menial tasks, freeing engineers to focus on higher-value work.
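Synopsys has not published DSO.ai's internals (it describes the system as reinforcement-learning based), but the basic idea of design space optimization can be sketched in a few lines: sample combinations of flow parameters, score each resulting design on power, performance and area, and keep the best. Everything below (the parameter names, the fake flow, the scoring weights) is invented purely for illustration.

```python
import random

# Hypothetical design-space exploration sketch (not Synopsys DSO.ai's
# actual method): randomly sample tool-parameter combinations and keep
# the one with the best power/performance/area (PPA) score.

SEARCH_SPACE = {
    "target_clock_mhz":  [800, 1000, 1200],
    "placement_density": [0.6, 0.7, 0.8],
    "vt_mix_high_pct":   [10, 20, 30],   # share of high-Vt (low-leakage) cells
}

def run_flow(params):
    """Stand-in for a synthesis/place-and-route run; returns fake PPA metrics."""
    power = 1.0 - 0.004 * params["vt_mix_high_pct"] + random.uniform(0, 0.05)
    area = 1.0 - 0.3 * (params["placement_density"] - 0.6) + random.uniform(0, 0.05)
    slack = (1200 - params["target_clock_mhz"]) / 1200 + random.uniform(-0.02, 0.02)
    return power, area, slack

def score(power, area, slack):
    """Lower is better; heavily penalise negative timing slack."""
    return power + area + 100 * max(0.0, -slack)

best = None
for _ in range(50):  # 50 trial runs of the (fake) flow
    params = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    result = score(*run_flow(params))
    if best is None or result < best[0]:
        best = (result, params)

print(f"best score {best[0]:.3f} with {best[1]}")
```

In a real flow, each trial is hours of synthesis and place-and-route rather than a cheap function call, which is why automating the search loop saves so much engineering time.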

Jean Boufarhat, corporate VP of engineering for the Azure hardware and infrastructure teams at Microsoft, said they are committed to democratizing advanced chip design, so it was a natural move to host the Synopsys DSO.ai design system on Azure.

Entertaining AI that creates amusing images or (attempts to) answer questions is one thing, but having artificial intelligence design hardware is a whole different can of worms. What happens if something designed by an AI system malfunctions and injures or kills someone? Who do you point the finger at?

Worse yet, what happens if AI starts designing covert backdoors or actually becomes sentient? It makes for great science fiction material, but we are rapidly approaching a point where it could become reality, and that is a bit alarming.


 
What happens if something designed by an AI system malfunctions and injures or kills someone? Who do you point the finger at?
Whoever manufactures and sells the product is responsible, naturally. The buyer isn't concerned with who designed it.
 
The backdoor issue has always existed regardless of whether an AI was involved, but that is probably something an AI could help us fight against. You could probably design an AI to identify the parts within a chip and flag anything it couldn't identify for a human to check out.
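As a very rough sketch of that idea, a first pass would not even need machine learning: compare every cell instance in a netlist against a catalogue of known parts and flag the rest for human review. The cell names and the "known good" catalogue below are invented for illustration.

```python
# Sketch of the commenter's idea: flag anything in a chip netlist that
# isn't a recognised part, so a human can inspect it. All names invented.

KNOWN_CELLS = {"NAND2_X1", "NOR2_X1", "INV_X1", "DFF_X1", "BUF_X2"}

netlist = [
    ("u1", "NAND2_X1"),
    ("u2", "DFF_X1"),
    ("u3", "XYZZY_X9"),   # unrecognised, should be flagged
    ("u4", "INV_X1"),
]

def flag_unknown(instances, known):
    """Return instances whose cell type is not in the known catalogue."""
    return [(name, cell) for name, cell in instances if cell not in known]

for name, cell in flag_unknown(netlist, KNOWN_CELLS):
    print(f"review needed: instance {name} uses unknown cell {cell}")
```

A real hardware-trojan check would need far more than name matching (layout comparison, formal equivalence, side-channel analysis), since a backdoor can hide inside perfectly ordinary-looking cells.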
 
The misunderstandings about 'AI' are unbelievable. For some reason, about five years ago, what we used to call modelling algorithms got rebranded and sexed up as 'AI'. Ever since, there has been widespread hysteria. There is no sentience, or ability to gain sentience, in the AI algorithms used to optimise chip layouts or in GPT. They are purely pattern recognition formulas based around pre-canned models and rules. They have no concept of generating backdoors or gaining sentience. If somebody coded something malicious to replicate in that way, then obviously it could replicate itself, but it still wouldn't be an AI with any kind of sentience or self-awareness. Maybe in 20 years though...
 
The misunderstandings about 'AI' are unbelievable. For some reason, about five years ago, what we used to call modelling algorithms got rebranded and sexed up as 'AI'. Ever since, there has been widespread hysteria. There is no sentience, or ability to gain sentience, in the AI algorithms used to optimise chip layouts or in GPT. They are purely pattern recognition formulas based around pre-canned models and rules. They have no concept of generating backdoors or gaining sentience. If somebody coded something malicious to replicate in that way, then obviously it could replicate itself, but it still wouldn't be an AI with any kind of sentience or self-awareness. Maybe in 20 years though...

Some good points. If some system had a greater knowledge of science, all tech, all research etc. and came up with new ways, new structures, new node structures, then colour me impressed. But if it's just a hard NP problem (e.g. travelling salesman), then yeah: super smart algorithms, a time-saving strategy solver.
Produce this result, using these required outputs, with these criteria, with these limitations, etc.

Plus, isn't "AI" sometimes seeded by skilled humans? Hey, try this or that?

I think it's exponentially easier to model world climate than particle physics (cue "why don't we try these compounds for superconductivity or polymer batteries"). We still cannot ask a supercomputer to pop out the optimum solutions, as those calculations are hard. Hell, calculating pure formulas (not applied) for multi-body gravity gets super hard when N increases by not much: sun and earth is easy; sun, earth and moon is harder, but still easy enough.
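The travelling salesman comparison is apt. A minimal sketch of why: exact search over tours grows factorially with the number of cities, while a cheap heuristic gets a reasonable answer in polynomial time. The random city coordinates here are just for illustration.

```python
import itertools
import math
import random

# Exact search is factorial in the number of cities, so clever heuristics
# (or AI-guided search) trade optimality for speed.

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try all (n-1)! orderings. Fine for 8 cities, hopeless for 80.
best_exact = min(itertools.permutations(range(1, len(cities))),
                 key=lambda rest: tour_length((0,) + rest))

# Heuristic: greedy nearest neighbour, O(n^2).
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    unvisited.remove(nxt)

print(f"exact:     {tour_length((0,) + best_exact):.3f}")
print(f"heuristic: {tour_length(tour):.3f}")
```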
 
First, no, sentience isn't coming any time soon; that's an *****ic interpretation caused by the mislabelling of pattern recognition software based on large datasets as 'intelligence'.

Second, like any other tool, the developer is responsible for it. There is no difference between applied machine learning and, say, finite element analysis. Both make it easier to make stuff, and simultaneously easier to miss errors, because direct design work is transferred to a model.
Guess what happens to a civil engineer if a building collapses because a FEM model spat out the wrong answer, even with the engineer inputting correct assumptions to the model. The engineer gets sued. It is their job to be capable of realising when a model has made a mistake. That's why we don't put random people with a software course in charge of FEA on construction projects, but ask someone who understands the underlying mathematical framework.
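Catching a model's mistake often comes down to a hand calculation. As a toy sketch of that habit (the "FEA result" below is a stand-in value, not real tool output), an engineer might cross-check a model's cantilever tip deflection against the closed-form formula delta = P * L^3 / (3 * E * I):

```python
# Cross-check a model's answer against a closed-form hand calculation.
# The fea_result value is a stand-in; the formula is the standard
# cantilever tip deflection under an end load.

P = 10_000.0   # end load, N
L = 2.0        # beam length, m
E = 210e9      # Young's modulus of steel, Pa
I = 8.0e-6     # second moment of area, m^4

hand_calc = P * L**3 / (3 * E * I)   # about 0.0159 m

fea_result = 0.047                   # pretend the model returned this, m

rel_error = abs(fea_result - hand_calc) / hand_calc
if rel_error > 0.10:                 # the 10% tolerance is a judgment call
    print(f"model disagrees with hand calc by {rel_error:.0%}, investigate")
else:
    print("model result is plausible")
```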
 
The misunderstandings about 'AI' are unbelievable. For some reason, about five years ago, what we used to call modelling algorithms got rebranded and sexed up as 'AI'. Ever since, there has been widespread hysteria. There is no sentience, or ability to gain sentience, in the AI algorithms used to optimise chip layouts or in GPT. They are purely pattern recognition formulas based around pre-canned models and rules. They have no concept of generating backdoors or gaining sentience. If somebody coded something malicious to replicate in that way, then obviously it could replicate itself, but it still wouldn't be an AI with any kind of sentience or self-awareness. Maybe in 20 years though...
Exactly. But it'll probably take far longer than that.

Until we get a quantum leap in computing power, they won't even be able to become sentient. Properly dealing with and learning from organic data (and not just in some very narrow range) in real time is beyond their scope, and that is something they would require to even come close to sentience...
 
AI is going to be able to do complicated and repetitive tasks much better than us, and it will be able to search a wider array of possible solutions to problems faster than we can. What we are seeing now is that it's not just the blue-collar workers being replaced by robots that we have to worry about, but any career in which the human element can be replaced by logic. In fact, the more rigid the discipline, the easier it will be for a computer to perform it. The more uncertainty and nuance a problem involves, the harder the job will be to replace (though such jobs are often not given as much respect by our society as rigid disciplines). The next two decades will be very interesting as we learn how to navigate this new world: what will humans do in the future? There will be resistance, of course, but eventually, just as with computers, everyone will adopt AI.

However, AI will never become sentient. It will always be algorithms. The algorithms may even be biased. For example, ChatGPT was asked what to do in a situation where many people would die unless it said a racial slur to defuse a time bomb. ChatGPT made it clear that using the racial slur was more dangerous than trying other ways to disarm the bomb! So don't think for a minute that "AI" is somehow able to think; ChatGPT was obviously programmed in a way that it could not supersede its programming, even by saying a racial slur to save lives.

It's hard to say what this means for civilization. It depends on what we decide to do with it and how much we allow it to do. AI could easily be the best or the worst technology humans have devised. But judging from how things are going so far, I'm leaning towards the worst.
 
I'm frankly more worried about the remaining portion designed by human intelligence, which, let's face it, has not exactly always been fully transparent, honest, ethical, or impervious to error or external control.

And that's basically 100% of it, since up to this point I think the "AI" part is more marketing than substance.
 
No need to worry. AI will never happen until we get down to the subatomic level. Our consciousness relies on the uncertainty principle, and AI does and will too; there can be no self-awareness without it.
 
The backdoor issue has always existed regardless of whether an AI was involved, but that is probably something an AI could help us fight against. You could probably design an AI to identify the parts within a chip and flag anything it couldn't identify for a human to check out.
This is circular. One AI does what you describe while, simultaneously, other AIs fight against that.

It's a big mess. The best that any power can hope for is to have as much control as possible over the design and production environments (as with today).
 