The AI industry chain behind ChatGPT

Recently, a chatbot called "ChatGPT" has become a talking point in the semiconductor industry. The AI program, launched by the artificial intelligence laboratory OpenAI at the end of November 2022, passed 1 million registered users within 5 days of launch and exceeded 100 million by the end of January 2023.


The popularity of ChatGPT has eased the gloomy mood in the semiconductor industry to a certain extent. Recent market reports indicate that demand for TSMC's 5nm process has suddenly increased and that capacity may be fully utilized in the second quarter. According to industry insiders in the semiconductor supply chain, TSMC's rush orders come from Nvidia, AMD, and Apple's AI and data-center platforms, and the explosion of ChatGPT has strengthened customers' purchasing momentum.


Officially, the "Chat" in ChatGPT refers to chatting, which is its form of presentation; "GPT" stands for Generative Pre-trained Transformer, the pre-trained model that powers it.


Generally speaking, ChatGPT is a highly capable dialogue AI product built on a large-scale language model. Whether the Internet is discussing AIGC (AI Generated Content), the latest approach to content creation, or the explosively popular ChatGPT, the essence of the discussion is the same: the AI industry chain behind them.
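As a concrete illustration of such a dialogue AI product, the minimal sketch below sends one chat turn to a GPT-style model through OpenAI's Python SDK (the 0.x-era interface). The model name and the environment-variable handling are assumptions for illustration, not details taken from the article.

# Minimal sketch: one chat turn via the OpenAI Python SDK (0.x-era interface).
# Model name and API-key handling are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed environment variable

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name for illustration
    messages=[
        {"role": "user", "content": "Explain what an AI accelerator chip does."}
    ],
)

print(response["choices"][0]["message"]["content"])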


AI chips: GPUs, FPGAs, and ASICs will benefit

The three major elements of artificial intelligence are data, algorithms, and computing power. ChatGPT is an upgrade of OpenAI's third-generation large model, GPT-3, and the ultimate source of its computing power is the chip. The explosion of ChatGPT therefore represents a new round of breakthroughs for AI chips. According to public information, AI computing chips broadly refer to chips that accelerate AI applications and are mainly divided into GPUs, FPGAs, and ASICs.

01

GPU

Because the computing power of a CPU is limited and it struggles with parallel operations, the CPU is generally paired with an acceleration chip. Among cloud training chips in the AI era, the GPU holds a large share and is regarded as the core of AI-era computing power. In terms of market structure, the revenue of Nvidia, AMD, and Intel accounts for almost the entire GPU industry.
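The division of labor described above can be sketched in a few lines of Python: the CPU orchestrates the work, while a large matrix multiplication, the kind of parallel operation that dominates deep learning, is offloaded to a GPU when one is available. The use of PyTorch here is an assumption for illustration; the article names no particular framework.

# Sketch of CPU-orchestrated, GPU-accelerated compute (PyTorch assumed for
# illustration; the article does not name a specific framework).
import torch

# Fall back to the CPU when no CUDA-capable GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# The matrix multiply is massively parallel: on a GPU it runs across
# thousands of cores, while the CPU merely launches the kernel.
c = a @ b

print(f"Ran a 4096x4096 matmul on: {device}")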


Currently, the computing cluster behind ChatGPT uses Nvidia AI chips. OpenAI has stated that ChatGPT is a super AI built in cooperation with Nvidia and Microsoft. Microsoft built a supercomputing cluster on its own Azure HPC cloud and provided it to OpenAI; the supercomputer is reported to have 285,000 CPU (central processing unit) cores and more than 10,000 AI chips.


Although Nvidia has the first-mover advantage this time, many companies are still catching up, such as Google with its TPU tensor processor, Baidu with its Kunlun series, Huawei HiSilicon with its Ascend series, and Alibaba's T-Head (Pingtouge) with its Hanguang 800.

02

FPGA

An FPGA (Field Programmable Gate Array) is a digital integrated circuit whose internal interconnect structure and logic units can be changed and configured by software to implement a given design.


FPGA chips have clear advantages in real-time performance (fast signal-processing speed) and flexibility; they can also be reprogrammed and parallelized, giving them an irreplaceable position in the field of deep learning.


Compared with CPUs, GPUs, and ASICs, FPGAs offer high speed and very low computing energy consumption, and they are often used as small-volume substitutes for dedicated chips. When building AI models, FPGAs are combined with CPUs to implement deep learning functions; applied jointly to deep learning models, they can also meet enormous computing-power requirements.


In terms of market structure, two companies, Xilinx and Intel, account for most of the global FPGA chip market. Because of the high technical and financial barriers of FPGA chips, Chinese companies still lag significantly in this field.

03

ASICs

An ASIC (Application Specific Integrated Circuit) is an integrated circuit whose computing power and efficiency can be customized to a user's specific needs. ASICs are widely used in smart terminals such as artificial-intelligence equipment, virtual-currency mining equipment, consumable printing equipment, and military defense equipment.


By terminal function, ASIC chips can be divided into TPUs, DPUs, and NPUs. The TPU (Tensor Processing Unit) is a tensor processor dedicated to machine learning. The DPU (Data Processing Unit) provides an engine for computing scenarios such as data centers. The NPU (Neural-network Processing Unit) is a neural-network processor that simulates human neurons and synapses at the circuit level and uses deep-learning instruction sets to directly process large-scale electronic neurons and synapses.
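As a small illustration of how software targets an ASIC such as a TPU, the sketch below uses JAX, which compiles numerical code for whatever accelerator backend is present (TPU, GPU, or CPU). The choice of JAX and the fallback behavior are assumptions for illustration; the article does not describe any specific programming interface.

# Sketch: the same JAX program runs on a TPU when one is attached,
# otherwise it falls back to GPU or CPU. JAX is assumed here only as
# an illustration of TPU-style programming.
import jax
import jax.numpy as jnp

# List whatever accelerator backend JAX detects (e.g. TpuDevice on a TPU host).
print(jax.devices())

@jax.jit  # XLA compiles this function for the detected backend
def dense_layer(x, w, b):
    return jnp.maximum(x @ w + b, 0.0)  # matmul + ReLU, a typical NN kernel

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 512))
w = jax.random.normal(key, (512, 256))
b = jnp.zeros(256)

print(dense_layer(x, w, b).shape)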


Compared with GPUs and FPGAs, ASICs lack flexibility; in fields such as AI and servers, where algorithms iterate constantly, this characteristic becomes a burden. However, Horizon Robotics CEO Yu Kai has publicly stated that once software algorithms are fixed, ASICs must be the future direction: in performance per watt, an ASIC can be 30-50 times better than a GPU, and this will be the industry's future competitive focus.


At present, Google, Intel, and Nvidia have successively released ASIC chips such as TPUs and DPUs. Chinese manufacturers have also begun to target this market and move quickly: Cambricon has launched a series of ASIC acceleration chips, and Huawei has designed the Ascend 310 and Ascend 910 series of ASIC chips.


HBM and Chiplets are expected to benefit

Overall, driven by AIGC (AI Generated Content), AI industrialization is shifting from software to hardware, the semiconductor + AI ecosystem is becoming clearer, and AI chip products will be deployed at scale. The core of the hardware side includes AI chips such as GPUs, CPUs, FPGAs, and AI SoCs. Within AI chips, computing power and data-transfer rate are the key technologies, and balancing chip performance against cost also drives the surrounding ecosystem, so industry chains such as HBM and Chiplets stand to benefit.

01

Emerging memory: HBM

According to public information, AI dialogue programs need large-capacity, high-speed memory support while performing their computations. The industry expects the development of AI chips to further expand demand for high-performance memory chips.


Samsung Electronics has said that demand for high-bandwidth memory (HBM), which feeds data to GPUs and artificial-intelligence accelerators, will expand. In the long run, as AI chatbot services grow, demand for high-performance HBM and for high-capacity server DRAM of 128GB or more is expected to increase.


Korean media recently reported that since the start of 2023, HBM orders at Samsung and SK Hynix, the two major memory manufacturers, have increased rapidly and prices have risen. Market sources say the price of HBM3 DRAM has recently increased as much as fivefold.

02

Chiplets

In addition, Chiplet technology cannot be ignored: it is a key technology for deploying advanced process nodes and accelerating computing-power upgrades. Chiplet-based heterogeneous integration can not only break through the blockade on advanced manufacturing processes, but also greatly improve the yield of large chips, reduce design complexity and design cost, and lower chip manufacturing cost, as the rough yield calculation below illustrates.
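To see why splitting a large die into chiplets helps yield, the calculation below applies the classic Poisson defect model, yield ≈ exp(-D·A), where D is defect density and A is die area. The specific numbers (0.1 defects/cm², a 600 mm² monolithic die versus four 150 mm² chiplets) are illustrative assumptions, not figures from the article.

# Rough yield comparison using the Poisson defect model: yield = exp(-D * A).
# Defect density and die sizes below are illustrative assumptions only.
import math

D = 0.1          # defects per cm^2 (assumed)
A_mono = 6.0     # one monolithic 600 mm^2 die, in cm^2
A_chiplet = 1.5  # each of four 150 mm^2 chiplets, in cm^2

yield_mono = math.exp(-D * A_mono)
# Each chiplet is tested before packaging, so the relevant silicon yield
# is per-chiplet rather than per-assembled-product.
yield_chiplet = math.exp(-D * A_chiplet)

print(f"monolithic die yield : {yield_mono:.1%}")    # ~54.9%
print(f"per-chiplet yield    : {yield_chiplet:.1%}") # ~86.1%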


Chiplets are already widely used in server chips, and AMD is the leader in chiplet-based server processors. In its first-generation chiplet-based AMD EPYC processor, each compute die integrated 8 "Zen" CPU cores, 2 DDR4 memory channels, and 32 PCIe lanes. In 2022, AMD officially released the fourth-generation EPYC processors, with up to 96 Zen 4 cores on a 5nm process, using a new generation of chiplet technology that combines 5nm and 6nm dies to reduce cost.


Intel's 14th-generation Core processor, Meteor Lake, is the first to use the Intel 4 process and the first to adopt a chiplet (tile) design. It is expected to launch in the second half of 2023, with a performance-per-watt target of at least 1.5 times that of the 13th-generation Raptor Lake.


In short, amid the global wave of digitalization and intelligence, applications such as smartphones, autonomous driving, data centers, and image recognition are driving rapid growth of the AI chip market, and more companies will focus on AI chip production in the future.

