
Generative AI can design chips

Date: 2023-05-23 11:22:53
Summary: Google's PaLM 2 already has basic Verilog code generation capability and can generate basic modules and composite modules, although the quality of its generated code still needs to improve. Beyond PaLM 2, ChatGPT-like large language models from other companies are also likely to add support for Verilog.


Since last year, generative AI, represented by ChatGPT, has stood in the global spotlight. ChatGPT understands natural-language input and produces corresponding output. Built on large language model technology and trained on a massive corpus, it can answer all kinds of user questions and also help users complete simple tasks, including writing documents and even writing Python code.

On May 10, at its I/O conference, Google unveiled PaLM 2, its answer to ChatGPT. According to Google, one of the most important use cases of ChatGPT-class generative language models is helping users write code, and one of PaLM 2's highlights is its support for more than 20 programming languages. For chip design engineers, the biggest highlight is that PaLM 2 supports Verilog, the most commonly used programming language in digital circuit design.

The best way to judge is to try it. PaLM 2 is now in open beta on Google's Bard platform, so we used Bard to test PaLM 2's ability to generate Verilog code. In the trial, we asked Bard to generate two pieces of code: one implementing a FIFO (one of the most common modules in digital circuits), and another implementing a module containing two of the previously written FIFOs, with the output of the first FIFO connected to the input of the second FIFO. The generation process is very simple: we only need to give Bard a natural-language instruction (prompt), and Bard completes the corresponding code in just a few seconds. For example, in the first experiment we used the instruction "generate a piece of Verilog code to implement a FIFO", and the result is shown below:


[Figure: Verilog FIFO code generated by Bard]

 

Judging from the result, the syntax of the generated code is correct and the logic is basically right, but the signal logic for the FIFO full and empty flags is not completely correct (to be fair, FIFO full/empty logic is also a common interview question, so it is not easy to get entirely right). In terms of code style, we can also add further prompts to the instruction, such as "add more comments to the code" or "use parameters to define the interface width".
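For reference, here is a minimal sketch of what such a parameterized FIFO might look like. This is our own hand-written illustration rather than Bard's actual output, and it uses the classic extra-pointer-bit trick precisely for the full/empty logic mentioned above:

```verilog
// Minimal parameterized synchronous FIFO -- our own illustration, not Bard's output.
// An extra pointer bit distinguishes "full" from "empty", the detail the
// generated code got wrong.
module fifo #(
    parameter DATA_WIDTH = 8,
    parameter ADDR_WIDTH = 4                 // depth = 2**ADDR_WIDTH
) (
    input  wire                  clk,
    input  wire                  rst_n,      // active-low synchronous reset
    input  wire                  wr_en,
    input  wire [DATA_WIDTH-1:0] wr_data,
    input  wire                  rd_en,
    output reg  [DATA_WIDTH-1:0] rd_data,
    output wire                  full,
    output wire                  empty
);

    reg [DATA_WIDTH-1:0] mem [0:(1<<ADDR_WIDTH)-1];
    reg [ADDR_WIDTH:0]   wr_ptr, rd_ptr;     // one extra bit for full/empty

    // empty: pointers identical; full: addresses equal but MSBs differ
    assign empty = (wr_ptr == rd_ptr);
    assign full  = (wr_ptr[ADDR_WIDTH-1:0] == rd_ptr[ADDR_WIDTH-1:0]) &&
                   (wr_ptr[ADDR_WIDTH]     != rd_ptr[ADDR_WIDTH]);

    always @(posedge clk) begin
        if (!rst_n) begin
            wr_ptr <= 0;
            rd_ptr <= 0;
        end else begin
            if (wr_en && !full) begin                     // push
                mem[wr_ptr[ADDR_WIDTH-1:0]] <= wr_data;
                wr_ptr <= wr_ptr + 1'b1;
            end
            if (rd_en && !empty) begin                    // pop (1-cycle latency)
                rd_data <= mem[rd_ptr[ADDR_WIDTH-1:0]];
                rd_ptr <= rd_ptr + 1'b1;
            end
        end
    end

endmodule
```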

In the second experiment, we mainly looked at whether Bard could reuse the previously generated modules and build new, larger modules on top of them. The instruction we used was "Write a module that includes two of the FIFO modules you wrote before, with the output of the first FIFO connected to the input of the second FIFO".


[Figure: Verilog code generated by Bard for the two-FIFO module]

Here we can see that the generated code is basically correct, so we believe PaLM 2 essentially has the ability to build more complex code bottom-up from existing modules.
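Again as our own illustration rather than Bard's output, a wrapper along these lines would implement the second experiment: it instantiates the FIFO module above twice and feeds the read side of the first instance into the write side of the second, with a deliberately simplified one-word-in-flight handshake to absorb the FIFO's one-cycle read latency:

```verilog
// Two FIFOs in series -- our own sketch of the second experiment.
module two_fifos #(
    parameter DATA_WIDTH = 8,
    parameter ADDR_WIDTH = 4
) (
    input  wire                  clk,
    input  wire                  rst_n,
    // write side of the first FIFO
    input  wire                  wr_en,
    input  wire [DATA_WIDTH-1:0] wr_data,
    output wire                  full,
    // read side of the second FIFO
    input  wire                  rd_en,
    output wire [DATA_WIDTH-1:0] rd_data,
    output wire                  empty
);

    wire [DATA_WIDTH-1:0] data0;            // read data of FIFO 0
    wire                  empty0, full1;
    reg                   xfer_d;           // transfer in flight (read latency)

    // Pop FIFO 0 only when FIFO 1 has room and no transfer is already in
    // flight; the popped word is written into FIFO 1 one cycle later.
    // At most one transfer every other cycle keeps this sketch simple.
    wire xfer = !empty0 && !full1 && !xfer_d;

    always @(posedge clk) begin
        if (!rst_n) xfer_d <= 1'b0;
        else        xfer_d <= xfer;
    end

    fifo #(.DATA_WIDTH(DATA_WIDTH), .ADDR_WIDTH(ADDR_WIDTH)) u_fifo0 (
        .clk(clk), .rst_n(rst_n),
        .wr_en(wr_en),  .wr_data(wr_data), .full(full),
        .rd_en(xfer),   .rd_data(data0),   .empty(empty0)
    );

    fifo #(.DATA_WIDTH(DATA_WIDTH), .ADDR_WIDTH(ADDR_WIDTH)) u_fifo1 (
        .clk(clk), .rst_n(rst_n),
        .wr_en(xfer_d), .wr_data(data0),   .full(full1),
        .rd_en(rd_en),  .rd_data(rd_data), .empty(empty)
    );

endmodule
```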

 The evolution of generative AI in the chip design field

As we can see from the above experiments, Google's PaLM 2 already has basic Verilog code generation capabilities: it can generate basic modules and composite modules, although the quality of its generated code still needs improvement. Beyond PaLM 2, we believe the ChatGPT-like large language models launched by other companies are also likely to add support for the Verilog hardware description language.

According to the information Google released at the I/O conference, ChatGPT-class large language models have already become important assistants for many engineers when they write code. Taking software engineers in the IT field who use ChatGPT-class large language models to assist coding as a reference, we believe large language models are also very likely to play an important role in the chip industry. Based on their role in the development process, we can roughly divide the applications into three kinds. The first generates code directly from the user's instructions, as in the two examples we gave earlier in this article. The second helps the engineer automatically complete code while writing it: the engineer only needs to type the first few characters of a line, and the large language model completes the rest according to the surrounding code, saving development time. The third helps engineers analyze and debug code. Just as ChatGPT can help users optimize Python code and find bugs in it, large language models trained on the relevant data could achieve similar functions for Verilog.
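As a toy illustration of the third application, the snippet below shows the kind of code an engineer might paste into such an assistant with the prompt "find the bug". The module and signal names are hypothetical, and the bug is a classic blocking-assignment mistake that a model trained on enough Verilog should be able to flag:

```verilog
// Hypothetical buggy module for the "find the bug" use case: the blocking
// assignments inside a clocked block collapse the intended two-stage pipeline
// into a single stage. The fix is to use non-blocking assignments (<=).
module pipe2 (
    input  wire       clk,
    input  wire [7:0] data_in,
    output reg  [7:0] data_out
);
    reg [7:0] stage1;

    always @(posedge clk) begin
        stage1   = data_in;   // bug: should be "stage1   <= data_in;"
        data_out = stage1;    // bug: should be "data_out <= stage1;"
    end
endmodule
```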

Looking to the future, and referring to the trajectory of large language model applications in the IT industry, we think the help large language models bring to chip design is likely to start with automatic code completion, because that is also how large language models entered the IT industry: GitHub Copilot-style code completion products have already been adopted by many IT companies to help software engineers improve programming efficiency. Relatively speaking, code completion places lower demands on the model, and current models can already reach fairly high accuracy, so we expect Verilog code completion tools based on large language models to start helping chip design engineers improve efficiency quite soon (we suspect Google's internal chip teams have already begun using similar tools).

After code completion, as large language models develop further, models that automatically generate code from user instructions will also see wider and wider use. Such direct code generation still needs to be integrated more closely with the overall project development flow, and whether it is best suited to the underlying modules or to the higher-level integration between modules still needs further exploration. Nevertheless, the potential of ChatGPT-class models for automatic code writing, turning code that originally took hours to write by hand into code generated in a few seconds, will undoubtedly bring revolutionary changes to the industry and to the chip development process.

At present, ChatGPT-class large language models have already shown very good results when writing code in popular programming languages such as Python, which proves that automatic code writing, completion, and debugging by large language models are achievable in both theory and engineering. The main reason Google's PaLM 2 support for Verilog still needs further improvement is the insufficient amount of training data. There is a large amount of open-source Python code on the Internet with which to train large language models for high-quality code generation, but the amount of Verilog code available for training may be several orders of magnitude smaller than for popular languages like Python. It is not that humans have not written enough Verilog; rather, the vast majority of Verilog code is not open source but the intellectual property of chip companies. Google, for example, is unlikely to get access to Qualcomm's Verilog code when training PaLM.

Who will take the lead in developing large language models for chip design in the future? We believe there are several forces to be reckoned with:

The first are large technology companies with full-stack capabilities that have both large-language-model development expertise and successful chip businesses, including Google in the United States and Huawei in China. Technically, these companies have accumulated plenty of Verilog code with which to train large language models, and commercially they also have the motivation to use large language models to improve the efficiency of their chip design teams.

Next come the EDA giants, including Synopsys, Cadence, and others. These EDA companies have a strong business driver and sense of urgency, because large-language-model AI may well become the next revolutionary change in the EDA industry, and whoever takes the lead in this field will gain an advantage in the next generation of EDA competition. In terms of technology accumulation, these companies have good AI model capabilities as well as a huge amount of Verilog code available for training models (their quite successful IP businesses have accumulated plenty of high-quality code along the way).

Finally, the power of the open-source community cannot be ignored. On the large-language-model side, the open-source community has already made many meaningful explorations on top of ChatGPT and the open-source LLaMA family of models. In addition, as open-source projects such as RISC-V grow, the open-source community will have more and more data. We expect the open-source community to have the opportunity to build some small but elegant novel applications based on large language models, which will also push forward large-language-model technology in the chip design field.

How generative AI will affect the work of chip design engineers

So how will the daily work of chip engineers change as ChatGPT-class AI plays an increasingly important role in chip design? Since ChatGPT-class generative AI mainly targets front-end work such as code writing, our discussion here focuses on front-end digital design engineers.

First, for chip engineers whose main job is front-end module design and integration, we expect ChatGPT-class tools to help complete code and increase efficiency in the near term, and within the next three to five years the direct use of ChatGPT-class generative AI to write module code is expected to see real application. From this perspective, we do not believe the work of front-end engineers will be replaced; rather, the work of digital front-end engineers may increasingly focus on defining the function of each module and on how to describe the design to the generative AI. There may even emerge some standardized module-function description language that lets the AI produce reasonable code, as sketched below.
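Purely as a speculative sketch of what such a standardized description might look like, the engineer could write only the interface and a behavioral specification in comments, and expect the generative tool to fill in the body (the module and signal names here are hypothetical):

```verilog
// Speculative illustration: the engineer supplies the interface plus a
// behavioral SPEC, and a generative tool is expected to produce the body.
// SPEC: 8-bit up/down counter; "up" selects the direction; wraps on
//       overflow/underflow; synchronous active-low reset to zero.
module updown_counter (
    input  wire       clk,
    input  wire       rst_n,
    input  wire       up,
    output reg  [7:0] count
);
    // A body like the following is what the tool would be expected to generate:
    always @(posedge clk) begin
        if (!rst_n)  count <= 8'd0;
        else if (up) count <= count + 8'd1;
        else         count <= count - 8'd1;
    end
endmodule
```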

In addition, the work of chip verification engineers will become more and more important. Generative AI can generate code in a few seconds, but the quality of that code still needs to improve. From this angle, chip verification needs, on the one hand, to ensure that AI-generated code has no bugs; more importantly, verification needs to form a closed loop with code generation: for example, a workflow in which AI-generated code is quickly run against a testbench to confirm that the function is correct, and in which there is a way to tell the AI where the function is wrong so it can revise the code, such that after several iterations the AI converges on correct code automatically. Even if several iterations are needed, because each generation takes only seconds, the total time is still much shorter than handwriting the code. Furthermore, using generative AI to automatically generate testbenches and the required assertions will also change the verification workflow: verification engineers will spend more of their time teaching the AI to generate the right code, greatly improving efficiency.
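As one hedged illustration of this closed loop, a minimal self-checking testbench for the FIFO sketch shown earlier might look like the following. It pushes a known sequence, pops it back, and prints PASS or FAIL, which is exactly the kind of automatic check an AI-generation workflow could iterate against:

```verilog
// Minimal self-checking testbench for the hypothetical fifo module above --
// our own sketch of a harness that closes the loop around generated code.
`timescale 1ns/1ps
module fifo_tb;
    reg         clk = 0;
    reg         rst_n = 0;
    reg         wr_en = 0, rd_en = 0;
    reg  [7:0]  wr_data = 0;
    wire [7:0]  rd_data;
    wire        full, empty;
    integer     i, errors = 0;

    fifo #(.DATA_WIDTH(8), .ADDR_WIDTH(4)) dut (
        .clk(clk), .rst_n(rst_n),
        .wr_en(wr_en), .wr_data(wr_data), .full(full),
        .rd_en(rd_en), .rd_data(rd_data), .empty(empty)
    );

    always #5 clk = ~clk;                    // 100 MHz clock

    initial begin
        @(posedge clk); rst_n <= 1;          // release reset
        // push 8 incrementing words
        for (i = 0; i < 8; i = i + 1) begin
            @(posedge clk);
            wr_en   <= 1;
            wr_data <= i;
        end
        @(posedge clk); wr_en <= 0;          // last word written at this edge
        // pop them back and check the ordering
        for (i = 0; i < 8; i = i + 1) begin
            @(posedge clk); rd_en <= 1;      // request a pop
            @(posedge clk); rd_en <= 0;      // pop happens at this edge
            #1;                              // let rd_data update
            if (rd_data !== i) begin
                errors = errors + 1;
                $display("MISMATCH: expected %0d, got %0d", i, rd_data);
            end
        end
        if (errors == 0) $display("PASS");
        else             $display("FAIL: %0d mismatches", errors);
        $finish;
    end
endmodule
```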

