
Will AI invade chip design and kill engineers?

Published: 2023-05-06 11:37:14
Abstract: As the discussion has matured, artificial intelligence's entry point into the chip industry has increasingly focused on EDA, that is, on how to use AI's powerful capabilities to make chip design, verification and testing more efficient.


Within EDA, the area where AI gets the most attention is design optimization, which here means searching efficiently through an enormous design space. Specific tasks include placement and routing, as well as choosing combinations of inputs for verification and testing.

AI approaches to these problems currently fall roughly into two categories. The first comes from giants that are large enough to have both strong AI research capabilities and the ability to customize their own chip design flows. Such companies can develop the relevant AI technology in-house and apply it to their own design process to improve efficiency and chip quality. The most representative of these is Google. Google has a world-class artificial intelligence team as well as its own chip, the TPU; most importantly, the Google team is very keen to apply AI to all kinds of new scenarios, so it is only natural for Google to use AI to improve chip design. From the paper Google published in the journal Nature, "A graph placement methodology for fast chip design", we know that it has applied AI to greatly improve placement and routing: its reinforcement-learning placement algorithm achieves results that surpass manual placement and routing, and, crucially, the technology has already been used in a generation of Google's TPU. In other words, Google uses AI to design its own AI chip (the TPU), which in turn trains more powerful AI to design the next generation of AI chips: a virtuous cycle, at least on Google's side for now.


In addition to Google, Nvidia has also built up considerable expertise in AI-driven placement and routing. Its research team reported last month that its AI algorithms DREAMPlace / AutoDMP can complete the placement of a 256-core RISC-V processor in just 2.5 hours, with results surpassing other related algorithms. Of course, the AI models here run on Nvidia's own powerful multi-GPU servers. Although Nvidia has not said whether this in-house AI placement algorithm will be commercialized, we believe that once it is mature enough, it is very promising for improving the design efficiency and quality of the next generation of Nvidia GPUs.

Beyond the giants that develop their own chips and AI algorithms, the other important group is the traditional EDA vendors. Both Cadence and Synopsys have announced major multi-year investments in AI, and both have recently launched related products. At SNUG 2023, Synopsys announced Synopsys.ai, a new generation of AI-powered EDA tools that includes DSO.ai for design optimization, VSO.ai for efficiently generating verification vectors and improving debug efficiency, and TSO.ai for generating test vectors. According to official Synopsys data, DSO.ai is mainly used to optimize design-space exploration and thereby improve PPA; so far, 160 companies have used DSO.ai on their chips, and DSO.ai can reduce power by up to 15% while greatly shortening design time (by up to three times). Using VSO.ai and TSO.ai, users can likewise greatly reduce the time needed for verification and testing. Cadence, for its part, released Allegro X in early April, whose AI features automatically and efficiently generate PCB layouts and route key signals, thereby reducing design time.

If design and verification optimization is the area traditional EDA tools care about most, another important area that has not received enough attention from traditional EDA tools is design entry, especially assistance with writing the RTL code used in digital logic design. This field has long been considered something any text editor can handle, and it has stayed outside of EDA; but recently, with the rise of large language models and of copilots that use them to assist programming in Python and other languages, applying similar copilot technology to RTL code is becoming a potential hot trend. Copilot technology uses the context of the code the user is writing to automatically suggest and complete the likely next code, reducing how much the user has to type and cutting down on bugs introduced while writing, which greatly improves coding efficiency. In the future, such tools may even write more and more RTL code automatically, so that the user only needs to give a brief prompt and the AI produces a draft of the code for the user to work from.
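As a rough illustration of that workflow, here is a minimal sketch of how an editor plugin might call such an assistant: the partially written Verilog is sent as context and the model proposes the rest. The counter module, the llm_complete helper and its canned output are hypothetical stand-ins for illustration, not any real product's API.

```python
# Minimal sketch of a copilot-style RTL completion flow (hypothetical).
def llm_complete(context: str) -> str:
    """Placeholder: a real deployment would call an LLM backend here.
    For the sketch we return a canned answer of the kind such a model
    typically produces for this context."""
    return (
        "    if (!rst_n)\n"
        "        count <= 8'd0;\n"
        "    else if (en)\n"
        "        count <= count + 8'd1;\n"
        "end\n"
        "endmodule\n"
    )

# The engineer has typed the module header and the start of an always block;
# the assistant is asked to fill in the rest from this context.
rtl_context = """\
// 8-bit synchronous counter with enable
module counter8 (
    input  wire       clk,
    input  wire       rst_n,
    input  wire       en,
    output reg  [7:0] count
);
always @(posedge clk or negedge rst_n) begin
"""

suggestion = llm_complete(rtl_context)
print(rtl_context + suggestion)
```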

To sum up, the EDA industry has officially entered the AI era, and we expect to see more AI-powered EDA tools appear in the future.



01 The core technologies behind AI EDA

As mentioned earlier, two milestone technologies sit behind AI-enabled EDA: reinforcement learning and large language models.

First, reinforcement learning is mainly used for optimization problems in EDA, most notably placement and routing, as well as test and verification vector generation. The main challenge in these problems is that the parameter space is huge: brute-force search through every possible parameter combination is unrealistic (for example, in placement, every logic gate in a design can go almost anywhere on the die, and with designs now easily containing tens of millions of gates, a brute-force search would not finish before the Earth ends).
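To make the scale concrete, the following back-of-the-envelope sketch (our own illustration, with made-up toy numbers) counts the brute-force placement candidates for even a tiny design:

```python
# Why brute-force placement search is hopeless: the number of ways to place
# gates on a grid grows astronomically with design size (toy numbers).
from math import comb, factorial

def brute_force_placements(num_gates: int, grid_cells: int) -> int:
    """Number of distinct ways to assign gates to unique grid cells."""
    # choose which cells are occupied, then order the gates among them
    return comb(grid_cells, num_gates) * factorial(num_gates)

# Even a toy design is already intractable:
print(brute_force_placements(20, 100))   # ~1.3e39 candidate placements
# A realistic design has tens of millions of gates, so exhaustive search
# is out of the question, as the text notes.
```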

Traditional EDA relies on heuristic algorithms such as simulated annealing, which keep the computation time of the placement-and-routing problem under control and have been a great success, helping create today's chip boom. The main advantage of heuristics is that they are fast and computationally cheap, but they do not necessarily find the globally optimal design parameters. Reinforcement learning, by contrast, learns from data how different parameter combinations turn out, and thereby learns a more efficient, data-driven way to search the parameter space. With a well-designed algorithm and good enough training data, it can achieve better results than heuristics.
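For readers unfamiliar with the heuristic side, here is a minimal simulated-annealing sketch on a toy placement problem; the four-gate netlist, grid size and cooling schedule are invented for illustration and are not any production algorithm:

```python
# Toy simulated annealing for placement: minimise total Manhattan wirelength.
import math
import random

random.seed(0)

GRID = 8                                    # 8x8 placement grid (assumption)
nets = [(0, 1), (1, 2), (2, 3), (0, 3)]     # toy netlist: pairs of connected gates
pos = {g: (random.randrange(GRID), random.randrange(GRID)) for g in range(4)}

def wirelength(p):
    """Total Manhattan wirelength, the cost we try to minimise."""
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

temp = 10.0
while temp > 0.01:
    gate = random.choice(list(pos))
    old = pos[gate]
    pos[gate] = (random.randrange(GRID), random.randrange(GRID))   # random move
    delta = wirelength(pos) - wirelength({**pos, gate: old})
    # accept worse moves with a probability that shrinks as temperature drops
    if delta > 0 and random.random() > math.exp(-delta / temp):
        pos[gate] = old                      # reject the move
    temp *= 0.995                            # cooling schedule

print("final wirelength:", wirelength(pos))
```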

In 2016, DeepMind's AlphaGo beat Lee Sedol using a reinforcement learning model that learned from a massive body of existing Go games to surpass human players. Playing Go and optimizing in EDA are in fact similar problems: both must efficiently find a near-optimal solution in a huge search space (every move in Go has a very high degree of freedom, which makes the search space enormous, and placement or test/verification vector generation in EDA is much the same). So already at the time of AlphaGo's success there was plenty of academic work on applying reinforcement learning to EDA, and today we finally see reinforcement learning technology arriving in the EDA field.

Besides reinforcement learning, the other key AI technology is the large language model, which can mainly help the EDA industry by speeding up engineers' code writing and reducing the chance of errors. Large language models (Large Language Model, LLM), represented by ChatGPT, learn patterns from massive amounts of text, and can therefore understand needs that users express in natural language and generate natural-language text users can understand. "Natural language" here includes not only the languages we speak but also the programming languages we write, including Verilog, which is commonly used in circuit design. At present, the most successful LLM-based coding aid is GitHub Copilot, which can automatically complete the user's code (for example, after the first few characters of a line are typed, the copilot can guess what code the user intends and offer to complete it automatically) and automatically find bugs in the code. We believe that by fine-tuning a large language model on existing RTL code, it is very promising to build a tool that helps chip design engineers complete code quickly, greatly improving their efficiency.
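As a sketch of what such fine-tuning could look like, assuming a Hugging Face-style training stack, a small general-purpose base model ("gpt2") and a hypothetical in-house corpus file, the flow might be roughly:

```python
# Minimal sketch: fine-tune a small causal LM on an RTL corpus so it learns
# Verilog completion. Base model and corpus file name are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical corpus: one RTL snippet per line in a plain-text file.
raw = load_dataset("text", data_files={"train": "rtl_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rtl-copilot",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    # causal-LM objective: predict the next token, no masking
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```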


02 How will artificial intelligence affect the work of engineers?

AI-driven EDA will undoubtedly give the semiconductor industry a further boost, but will chip engineers have their jobs taken by artificial intelligence? We believe that, in general, just as the emergence of EDA itself did not take chip engineers' jobs, the next generation of AI-powered EDA is mainly a tool to improve efficiency and will not replace human engineers.

Let's start with front-end design. Here, AI EDA mainly uses large language models to improve the efficiency and quality of digital logic design, so the relationship is not one of replacement but of providing a more convenient tool. For digital design engineers, the most essential work is the circuit design itself: splitting a large system into smaller functional modules, defining each module's function and interfaces, and implementing the modules in code. For now, AI language models mainly help complete code rather than write it outright; and even if AI can write code automatically in the future, it cannot replace the essential work of digital design engineers, namely the definition and design of the digital modules themselves.

In back-end design, reinforcement-learning-based AI has greatly improved the efficiency and quality of placement and routing. Today, most chips are designed by having engineers first complete a high-level floorplan by hand; once the estimated performance looks able to meet the target, EDA tools carry out the detailed placement and routing, which engineers then verify and fine-tune. We believe that as the efficiency and quality of AI placement improve further, the floorplan itself may be handed over to EDA tools, and engineers' responsibilities will increasingly shift toward providing the tools with reasonable constraints and optimization goals and verifying the quality of the designs they generate. In this sense, AI may well take over more of the work engineers now do by hand, but that does not mean AI will replace these engineers; rather, their responsibilities shift (to providing reasonable inputs to the tools and verifying the outputs), and overall efficiency improves. In the rest of the place-and-route flow, AI mostly provides a higher-quality tool and will not replace engineers.
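Purely as an illustration of that shift in responsibility, the snippet below sketches the kind of constraints and optimization goals an engineer might hand to an AI-driven place-and-route flow, plus a sanity check of its output; every field name here is an assumption for the example, not a real tool's interface.

```python
# Hypothetical job description an engineer might provide instead of a
# hand-drawn floorplan: constraints plus weighted optimization goals.
placement_job = {
    "design": "npu_core_top",
    "constraints": {
        "max_die_area_mm2": 4.0,
        "clock_period_ns": 0.8,              # 1.25 GHz target
        "keep_together": [["mac_array", "accum_regs"]],
        "fixed_macros": {"sram_bank0": (0, 0)},
    },
    "optimization_goals": {                  # weights the tool trades off
        "wirelength": 0.4,
        "timing_slack": 0.4,
        "power": 0.2,
    },
}

def verify_result(report: dict, job: dict) -> bool:
    """The engineer's remaining responsibility: check that the tool's output
    actually meets the stated constraints before sign-off."""
    return (report["die_area_mm2"] <= job["constraints"]["max_die_area_mm2"]
            and report["worst_slack_ns"] >= 0.0)
```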

In fact, AI is likely to create more jobs in the chip industry. AI model training requires a large amount of data, and AI models may require different fine-tuning for different designs. As a result, the chip design industry may need more engineers who specialize in optimizing the AI itself.


03 Industry dynamics brought about by AI EDA

 Finally, we'll analyze how AI will further empower EDA in the future.

First of all, chip designs are getting larger and larger, which from another perspective means the search space is getting larger. In addition, as Moore's Law approaches its physical limits, the industry's PPA requirements for chip design keep rising. The use of AI to drive further improvements in chip design will therefore become more and more common, and we believe the more complex the design and the more freedom it has, the greater the role AI can play. These areas include advanced packaging, especially 3D packaging, as well as mobile chips, high-performance computing chips and other areas with very demanding PPA requirements. This is why we see high-performance computing chip companies such as Google and Nvidia investing heavily in AI EDA, and we expect more such companies to use AI EDA to improve PPA in the future.

Another noteworthy point is that artificial intelligence may change the structure of the industry itself. AI requires a great deal of data to train, and today chip design data is each company's intellectual property, so how to train models while ensuring that IP is not violated is a problem the industry must deal with. We believe that large companies with a deep accumulation of designs will be major customers of AI EDA, because they can already train well-performing models on their own design data. How smaller companies with less accumulated design data, or recently founded startups, can use AI EDA is something the whole industry will have to think about: for example, there may be data-sharing organizations that pool less-sensitive designs to train models jointly, or training methods based on encrypted computation that protect design IP while letting the model train on as much data as possible. Both are possible directions.
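One often-discussed direction for training without pooling raw designs is federated learning, where each company trains locally and only model weights are exchanged; the toy sketch below (our own illustration with random stand-in data, not a method the article commits to) shows the basic idea:

```python
# Toy federated averaging: each "company" trains a linear model on private
# data; only the averaged weights are shared, never the data itself.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, local_x, local_y, lr=0.1, steps=20):
    """Plain gradient descent on a linear model, using only local data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * local_x.T @ (local_x @ w - local_y) / len(local_y)
        w -= lr * grad
    return w

# Three companies, each with private data that never leaves the company.
true_w = np.array([1.0, -2.0, 0.5, 3.0])
companies = []
for _ in range(3):
    x = rng.normal(size=(50, 4))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    companies.append((x, y))

global_w = np.zeros(4)
for _ in range(10):
    # each participant fine-tunes the current global model locally ...
    local_ws = [local_update(global_w, x, y) for x, y in companies]
    # ... and only the averaged weights are shared back
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(global_w, 2))
```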