Satya Nadella, chief executive officer of Microsoft Corp., during the company's Ignite Spotlight event in Seoul, South Korea, on Tuesday, Nov. 15, 2022.
SeongJoon Cho | Bloomberg | Getty Images
Microsoft is emphasizing to investors that graphics processing units are a critical raw material for its fast-growing cloud business. In its annual report released late Thursday, the software maker added language about GPUs to a risk factor for outages that can arise if it can't obtain the infrastructure it needs.
The language reflects the growing demand at top technology companies for the hardware that's necessary to provide artificial intelligence capabilities to smaller businesses.
AI, and particularly generative AI that involves producing human-like text, speech, videos and images in response to people's input, has become more popular this year, after startup OpenAI's ChatGPT chatbot became a hit. That has benefited GPU makers such as Nvidia and, to a lesser extent, AMD.
"Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units ('GPUs') and other components," Microsoft said in its report for the 2023 fiscal year, which ended June 30.
That's one of three passages mentioning GPUs in the regulatory filing. They weren't mentioned once in the previous year's report. Such language has not appeared in recent annual reports from other large technology companies, such as Alphabet, Apple, Amazon and Meta.
OpenAI relies on Microsoft's Azure cloud to perform the computations for ChatGPT and various AI models, as part of a complex partnership. Microsoft has also begun using OpenAI's models to enhance existing products, such as its Outlook and Word applications and the Bing search engine, with generative AI.
Those efforts and the interest in ChatGPT have led Microsoft to seek more GPUs than it had expected.
"I'm thrilled that Microsoft announced Azure is opening private previews to their H100 AI supercomputer," Jensen Huang, Nvidia's CEO, said at his company's GTC developer conference in March.
Microsoft has begun looking outside its own data centers to secure sufficient capacity, signing an agreement with Nvidia-backed CoreWeave, which rents out GPUs to third-party developers as a cloud service.
At the same time, Microsoft has spent years building its own custom AI processor. All the attention on ChatGPT has led Microsoft to speed up the deployment of its chip, The Information reported in April, citing unnamed sources. Alphabet, Amazon and Meta have all introduced their own AI chips over the past decade.
Microsoft expects to increase its capital expenditures sequentially this quarter to pay for data centers, standard central processing units, networking hardware and GPUs, Amy Hood, the company's finance chief, said Tuesday on a conference call with analysts. "It's overall increases of acceleration of overall capacity," she said.
WATCH: Nvidia's GPU and parallel processing remains critical for A.I., says T. Rowe's Dom Rizzo