Advanced Micro Devices (AMD.US) Q3 2025 Earnings Call
Meeting Summary
AMD achieved record revenue in Q3 2025, driven by strong AI and data center sales. The company is expanding its AI compute franchise through partnerships with OpenAI and Oracle and the launch of next-generation products, targeting significant revenue growth and market share expansion in 2026.
Call Highlights
AMD's Q3 2025 Financial Results and Forward-Looking Statements
The call discusses AMD's third quarter 2025 financial outcomes, focusing on non-GAAP measures. It previews upcoming analyst events, highlights forward-looking statements, and acknowledges risks impacting future performance. Key financial disclosures and strategic insights are shared.
AMD's Record Quarter Highlights Growth in Data Center and AI Businesses
AMD delivered record revenue and profitability, driven by strong sales in the data center AI and server CPU segments, with significant growth in EPYC processor and Instinct GPU sales reflecting robust demand from hyperscalers and enterprises.
EPYC Platforms Gain Enterprise Traction with 5th Gen Adoption and Upcoming Venice Processors
EPYC platforms, optimized for enterprise workloads, are seeing strong adoption across sectors, with over 170 5th gen platforms in market from multiple vendors and significant wins reported. Looking ahead, the upcoming 2nm Venice processors, currently in testing, promise enhanced performance and efficiency, backed by strong customer interest and pre-launch cloud deployments.
AMD's Instinct GPU Business Accelerates with Widespread Adoption and Software Advancements
AMD's Instinct GPU business is growing significantly, driven by the ramp of MI series GPU sales and deployments. The company highlights partnerships with major cloud and AI providers, including Oracle, Crusoe, DigitalOcean, and TensorWave, for MI350 series public cloud offerings. AMD also showcases software progress with the launch of ROCm 7, which delivers enhanced performance and introduces new features for AI development. The open software strategy is gaining traction with developer contributions, solidifying ROCm as a platform for AI development.
AMD's Data Center AI Business Enters New Growth Phase with MI400 Series and Helios Rack-Scale Solutions
AMD's data center AI business is poised for significant growth with the launch of MI400 series accelerators and Helios rack-scale solutions. The company has secured a multi-year agreement with OpenAI, establishing AMD as a core compute provider. Oracle and Cisco are also key partners, with Oracle deploying MI450 GPUs across its cloud infrastructure and Cisco building a large-scale AI cluster powered by Instinct MI350X GPUs. The US Department of Energy has selected AMD's upcoming MI430X GPUs and Venice CPUs for the next flagship supercomputer, solidifying AMD's leadership in powering the world's most powerful computers. This growth trajectory is expected to generate tens of billions in annual revenue by 2027.
AMD's Record-Quarter Growth Driven by PC, Gaming, and Embedded Segments
AMD reported record third-quarter sales, with significant year-over-year growth in PC processors and gaming and sequential improvement in the embedded segment. The company highlighted strong demand for its Ryzen CPUs, Radeon graphics, and embedded solutions, alongside robust design wins and market adoption, setting a positive outlook for future growth.
Record Revenue and Robust Growth Outlook for Q4 2025
Strong third-quarter financial results, with record revenue of $9.2 billion, driven by significant growth in the data center and client and gaming segments. The outlook for Q4 2025 includes revenue guidance of approximately $9.6 billion, reflecting continued momentum in the data center and embedded segments and strategic investments in AI opportunities.
Initiating Audience Question and Answer Session with Instructions
Announcement of a Q&A session initiation with instructions on how to join the queue for questions, emphasizing one question and one follow-up per participant.
Data Center Demand Outlook and Product Transition Strategy
Discusses strong Q3 performance in data center AI and server sales, anticipates continued demand growth, and outlines plans for the MI355 ramp and MI450 series launch in 2026.
Strategies for Enhancing Visibility and Allocation in AI Compute Demands
Discusses the strategic planning and collaboration with major AI clients to ensure adequate power and supply chain readiness, highlighting efforts to model allocation and enhance visibility in critical customer engagements.
Helios System's Market Potential and Customer Engagement Post-OCP
The dialogue highlights the strong market response and customer interest in the Helios system and MI450 following the OCP show. It discusses the crossover point for discrete versus system sales, noting increased engagement from customers eager to explore the system's features. Early adopters of the MI400 series are expected to focus on rack-scale solutions, with further form factor developments anticipated.
Industry Leader Discusses Power and Component Constraints for Next Year's Rack Scale Deployments
An industry leader emphasizes the need for collective planning among the ecosystem to address power requirements and component availability issues for upcoming large-scale deployments, highlighting both data center infrastructure and power as potential gating factors.
Ensuring Robust Supply Chain for High Compute Demand in Future Power Plans
The company is collaborating with supply chain partners to ensure sufficient capacity in silicon, memory, packaging, and components over the next two years. Despite tight conditions, significant growth in compute demand is anticipated, with efforts focused on increasing power and supply to meet the challenge.
Sustainability of CPU Demand Amid AI Growth and Supply Chain Readiness
The dialogue highlights the sustained CPU demand driven by AI workloads, predicting a positive environment into 2026. It reassures supply readiness for growth, particularly in 2026, and notes the broadening of demand among major clients, indicating a durable trend beyond seasonal fluctuations.
AMD's Progress in Enhancing ROCm with ROCm 7 for Developer Support and Competitive Edge
AMD has significantly improved ROCm with ROCm 7, enhancing performance and framework support and ensuring a smooth experience for new customers. The focus is on expanding libraries and environments for emerging workloads that combine training, inference, and reinforcement learning. Continued investment is pledged to maintain a competitive edge in developer support.
Data Center GPU Business: Framework for Gross Margin Improvement During Product Transition
Discussion on the framework for improving gross margins during the transition to new data center GPU products, emphasizing top line revenue growth and margin expansion.
High-Level Discussion on Customer Growth and OpenAI's Impact in 2026 and Beyond
The dialogue touches on growth expectations for 2026 and beyond, focusing on the role of OpenAI and other large customers in driving tens of billions in revenue. It hints at broader customer penetration strategies, promising more detailed insights at an upcoming analyst day.
Expanding Partnerships and Customer Engagements for Scalable Deliveries
Highlights significant customer engagements including OpenAI, OCI, and Department of Energy, emphasizing the company's focus on scalable deliveries and robust supply chain preparation for multiple large-scale partnerships.
Data Center Growth Comparison: Servers vs. Data Center AI, Year Over Year
The dialogue discusses the year-over-year growth of servers and data center AI within a data center context. It concludes that both areas grew nicely, with servers exhibiting slightly better growth compared to data center AI.
Clarifying Guidance on Data Center and Server Growth Projections
A discussion unfolds around the interpretation of 'strong double-digit' growth for servers and data center, aiming to align expectations with the provided guidance, while also addressing inquiries about GPU projections and the MI350 ramp and the need to clarify previously stated forecasts.
Strong Performance of Servers and Data Centers Highlighted
The dialogue emphasizes the robust growth in servers and data centers, with a focus on double-digit percentage increases, suggesting satisfaction with their performance.
OpenAI Deal Boosts Market Engagement and Diversification of AMD's GPU Revenue
The announcement of the long-term OpenAI collaboration, alongside the Helios rack showcase, has accelerated customer interest and engagement at higher scales. AMD aims to diversify its data center GPU revenue, targeting multiple customers of similar scale by 2027-2028 to mitigate risks associated with customer concentration.
AI-Driven Compute Demand Boosts Server CPU Sales and Future Prospects
Discussion highlights strong server CPU demand driven by AI workloads, predicting growth in ASPs and turnover. Early engagement with next-gen CPUs indicates market readiness for advanced compute solutions, expanding the AI silicon TAM outlook.
Collaboration with OpenAI Boosts Software Stack Development
The partnership with OpenAI is enhancing software stack robustness, with shared work on hardware, software, and systems. Feedback from large customers and AI native companies is strengthening the stack, focusing on training and inference improvements, and accelerating kernel development.
GPU Utilization Trends and MI308 Supply Update
The dialogue explores the practical useful life of GPUs beyond standard depreciation, noting a trend toward extending the use of older models for AI compute, especially inference. It also addresses the current status and potential impact of MI308 shipments, highlighting ongoing discussions with customers regarding demand and an expected update on the opportunity in the coming months.
Discussion on Inventory Readiness for Emerging Market Demand
The dialogue revolves around preparing inventory for potential market openings, with ongoing work in process and uncertainty regarding future demand shaping inventory strategies.
AMD's Strategy to Differentiate in the AI Compute Market and Creative Partnership Agreements
AMD discusses its competitive positioning in the AI compute market, highlighting the strengths of its MI400 series and emphasizing time to market, total cost of ownership, and deep partnerships. The company also reflects on a unique partnership structure involving warrants, signaling openness to creative equity-based agreements to address global demand for processing power while leveraging its capabilities in emerging AI opportunities.
Key Q&A
Q:How did the data center segment contribute to AMD's third quarter revenue?
A:The data center segment revenue increased 22% year over year to $4.3 billion, primarily due to the ramp of the Instinct MI350 series GPUs and server share gains. Server CPU revenue also reached an all-time high as adoption of 5th gen EPYC processors accelerated.
Q:What is the significance of the new 2nm Venice processors for AMD?
A:The significance of the new 2nm Venice processors for AMD lies in their substantial gains in performance, efficiency, and compute density. The processors are in the labs performing very well and have generated strong customer pull and engagement, indicating growing demand for more data center compute.
Q:What progress has AMD made in the data center AI market?
A:In the data center AI market, AMD's Instinct GPU business is growing, with a sharp ramp of MI350 series GPU sales and broader deployments. Multiple MI350 series deployments are ongoing with large cloud and AI providers, and MI300 series GPUs continue to be deployed by AI developers.
Q:What new release did AMD launch for AI development, and what improvements does it offer?
A:AMD launched ROCm 7, the most advanced and feature-rich release to date for AI development. It offers up to 4.6x higher inference and 3x higher training performance compared to the previous version, introduces seamless distributed inference and enhanced code portability across hardware, and adds new enterprise tools for solution deployment and management.
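As an illustration of the code portability described above, the sketch below shows a small PyTorch program that runs unchanged on ROCm or CUDA builds; the model, tensor sizes, and function names are placeholders chosen for this example, not anything presented on the call.

```python
# Minimal sketch (illustrative, not from the call): the same PyTorch code targets
# ROCm or CUDA builds without changes, the kind of portability ROCm 7 emphasizes.
import torch

def describe_backend() -> str:
    # On ROCm builds of PyTorch, torch.version.hip is set and the familiar
    # torch.cuda.* API is backed by HIP; on CUDA builds, torch.version.cuda is set.
    if torch.cuda.is_available():
        return f"ROCm/HIP {torch.version.hip}" if torch.version.hip else f"CUDA {torch.version.cuda}"
    return "CPU only"

def run_inference() -> torch.Tensor:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(1024, 1024).to(device)  # placeholder model
    batch = torch.randn(8, 1024, device=device)     # placeholder input
    with torch.no_grad():
        return model(batch)

if __name__ == "__main__":
    print("Backend:", describe_backend())
    print("Output shape:", tuple(run_inference().shape))
```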
Q:What are the details of the upcoming Mi 400 series and Helios platform?
A:The upcoming MI400 series combines a new compute engine with industry-leading memory capacity and advanced networking capabilities, aiming for a major leap in performance for AI training and inference workloads. The MI400 series will power Helios, AMD's rack-scale AI platform, designed to redefine performance and efficiency at data center scale. Helios integrates Instinct MI400 series GPUs, Venice EPYC CPUs, and Pensando Vulcano NICs in a double-wide rack solution that supports the performance, power, cooling, and serviceability needed for next-generation AI infrastructure.
Q:Who announced plans to deploy AMD's Mi 450 series GPUs and when?
A:Oracle announced that it will be a lead launch partner for the MI450 series, deploying tens of thousands of these GPUs across Oracle Cloud Infrastructure starting in 2026 and expanding through 2027.
Q:What are the prospects for AMD's AI business in 2027?
A:AMD's AI business is entering a new phase of growth and is on a clear trajectory toward tens of billions in annual revenue by 2027, driven by leadership in rack-scale technology, expanding customer adoption, and large-scale global deployments.
Q:What are the highlights of the embedded segment's revenue performance?
A:The embedded segment's revenue decreased 8% year over year to $857 million, but revenue grew sequentially and sell-through increased on strengthening demand across multiple markets. New solutions were introduced that extended leadership in adaptive and x86 embedded computing, and design momentum remained strong with record design wins.
Q:What trends are driving growth across AMD's businesses?
A:The growth across AMD's businesses is being driven by the expansion of the data center and client segment markets, accelerating adoption of Instinct platforms and EPYC and Ryzen CPUs, and the demand for more powerful, efficient, and intelligent computing in various business, science, and societal areas.
Q:What will be discussed at the financial analyst meeting next week?
A:At the financial analyst meeting next week, the company looks forward to providing details on data center AI growth plans, client and gaming segment performance, and an outlook for the fourth quarter of fiscal 2025.
Q:What were the revenue and operating income results for the client and gaming segment?
A:The client and gaming segment revenue was a record $4 billion, up 73% year over year and 12% sequentially. Operating income was $867 million, or 21% of revenue, compared to $288 million, or 12%, a year ago.
Q:What was the company's cash flow and share repurchase activity for the quarter?
A:The company generated $1.8 billion in cash from operating activities of continuing operations and record free cash flow of $1.5 billion. They returned $89 million to shareholders through share repurchases this quarter, bringing total repurchases for the first three quarters of 2025 to $1.3 billion. They have $9.4 billion of repurchase authorization remaining and ended the quarter with $7.2 billion in cash, cash equivalents, and short-term investments.
Q:What is the fourth quarter 2025 outlook in terms of revenue and other financial metrics?
A:The fourth quarter 2025 outlook includes expected revenue of approximately $9.6 billion, plus or minus $300 million; the midpoint represents about 25% year-over-year revenue growth. The company expects non-GAAP gross margin of approximately 54.5%, operating expenses of about $2.8 billion, net interest and other expenses of about $37 million, a non-GAAP effective tax rate of 22%, and a diluted share count of approximately 1.65 billion shares.
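As a back-of-envelope illustration only (not company-provided guidance), the sketch below applies the midpoint figures above to derive an implied non-GAAP operating income and EPS; EPS was not part of the stated outlook, so the derived numbers are purely a reader's computation.

```python
# Back-of-envelope: implied non-GAAP figures from the Q4 2025 guidance midpoints
# stated above. Illustrative only; EPS is not a number provided in the outlook.
revenue        = 9.6e9   # ~$9.6B revenue midpoint (+/- $300M)
gross_margin   = 0.545   # ~54.5% non-GAAP gross margin
opex           = 2.8e9   # ~$2.8B non-GAAP operating expenses
other_expense  = 37e6    # ~$37M net interest and other expense
tax_rate       = 0.22    # 22% non-GAAP effective tax rate
diluted_shares = 1.65e9  # ~1.65B diluted shares

gross_profit     = revenue * gross_margin
operating_income = gross_profit - opex
net_income       = (operating_income - other_expense) * (1 - tax_rate)
implied_eps      = net_income / diluted_shares

print(f"Implied operating income: ${operating_income / 1e9:.2f}B")
print(f"Implied non-GAAP EPS:     ${implied_eps:.2f}")
```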
Q:How is the company positioned to capitalize on AI opportunities and what is the expected revenue growth?
A:The company is strategically investing to capitalize on expanding AI opportunities across all end markets, which is expected to drive sustainable long-term revenue growth and earnings expansion for compelling shareholder value creation.
Q:Can you provide the CPU and GPU mix for Q3 and Q4, and how is the transition from the MI350 series to the MI450 series being managed?
A:The data center segment had a very strong Q3, with both the server and data center AI businesses outperforming, and without any MI308 sales. The MI350 has ramped well, and customers have visibility into elevated demand over the next few quarters. For the first half of 2026, the company expects MI300 and MI350 to continue ramping, with the MI450 series coming online in the second half of 2026. The company is working closely with OpenAI and CSP partners to ensure readiness for deployment.
Q:What is the level of visibility the company has with OpenAI regarding the engagement and the expansion of the relationship?
A:The company is excited about the relationship with OpenAI and is planning closely with them as well as with CSP partners. Visibility on the MI450 ramp is good, and preparations are underway for deployment. The company is using this time to ensure power availability and supply chain readiness, with the first gigawatt beginning deployment in the second half of 2026.
Q:What are the expectations for discrete sales versus system sales in the coming year and what has been the customer response to the new offerings?
A:There is excitement around the MI450 and Helios, with numerous customers engaging their engineering teams to understand the system better. While specific sales expectations are not provided, interest in the MI450 and Helios has grown with the recent announcements and customer interactions.
Q:How does the company view the role of its full rack scale solution and what is the anticipated demand from early adopters?
A:The full rack-scale solution is of significant interest, and early customers for the MI400 series are expected to center on the rack-scale form factor, with other form factors also available for the MI450 series.
Q:What are the concerns regarding the constraints on power requirements and component availability for data centers, and what steps are being taken to address these?
A:There is a concern about whether constraints will arise from the unavailability of components or from the data center's capacity, including infrastructure and power. The industry is being advised to plan together as an ecosystem, working with customers on power plans, supply chain partners, and more. There is confidence in the strong supply chain's ability to deliver the significant growth rates and large amounts of compute required.
Q:How does the current CPU demand trend look, and is there a seasonal pattern to expect in the first half of the next year?
A:The trend in CPU demand has been positive for some time, with broadening demand and significant forecasts from large hyperscale clients for 2026. The demand is attributed to the need for general-purpose compute for AI workloads. The demand environment is not expected to be seasonal and is considered durable. CPU demand for 2026 is anticipated to be positive, although precise guidance will be provided closer to the end of the year.
Q:What progress has been made with ROCm 7, and how is the developer community supported?
A:Great progress has been made with ROCm 7, which is a significant step forward in performance and supported frameworks, including day-zero support for the newest models and native support for the newest frameworks. The company continues to enhance libraries and the overall environment, particularly for newer workloads that combine training, inference, and reinforcement learning, and is committed to further improving the development experience for customers.
Q:What is the priority for the data center GPU business?
A:The priority for the data center GPU business is to expand top-line revenue growth and gross margin dollars while driving the gross margin percentage up.
Q:What customer engagements and partnerships are contributing to the company's roadmap?
A:The company has seen great traction among the largest customers, with significant engagements including OpenAI, Oracle Cloud Infrastructure (OCI), and partnerships with the Department of Energy. Multiple customers are expected to reach very significant scale in the MI450 generation, and the company has built engagements with a diverse set of partners to ensure supply chain capacity.
Q:How did the data center and server growth compare year over year?
A:Both the data center and servers grew nicely year over year in terms of dollar and percentage basis.
Q:What is the growth expectation for data center and servers in the current quarter?
A:The growth for the data center and servers in the current quarter is expected to be up strongly in a double-digit percentage range. Sequentially, data center will be up double digits and servers are projected to grow strongly as well.
Q:What is the expected growth contribution from data center GPUs in the 2027 to 2028 timeframe?
A:It is anticipated that data center GPU revenue could constitute half of the company's total revenue in the 2027 to 2028 timeframe.
Q:What is the risk associated with the OpenAI deal for the company?
A:While the OpenAI deal has been positive, the company has a broad set of customers and is dimensioning the supply chain to support multiple customers at a similar scale. They have always been engaged with a number of customers and are confident in their ability to manage customer concentration risk.
Q:What factors are driving the strength in server demand?
A:The strength in server demand is driven by a broad-based CPU demand across various workloads, including server refreshes and the expansion of AI workloads requiring more traditional compute. This is evidenced by the strong pull on the latest generation and early engagement with new products like Genoa and Venice.
Q:What are the updated views on the AI Silicon TAM opportunity for large deployments?
A:The updated view on the AI Silicon TAM opportunity for large deployments suggests that the market is going up, and the initial estimate of a $500 billion opportunity is considered conservative. The exact updated numbers will be provided next week.
Q:How is the collaboration with OpenAI expected to impact the development of the software stack?
A:The collaboration with OpenAI is expected to significantly contribute to the development of a broader and deeper software stack. The relationship with OpenAI, along with work on hardware, software, and future roadmaps, is intended to make significant progress, especially through the work being done together on Triton. This collaboration is also seen as valuable for strengthening the software stack and includes work with other AI native companies.
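For readers unfamiliar with Triton, the open-source GPU kernel language mentioned above, the minimal vector-add kernel below (following the standard Triton tutorial pattern) illustrates the kind of kernels that can target AMD GPUs through ROCm as well as other backends; it is a generic example, not code from the AMD and OpenAI collaboration.

```python
# Generic Triton example (tutorial-style vector add), shown only to illustrate
# what kernel work in Triton looks like; not code from the collaboration.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                     # which block this instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                     # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                  # one program instance per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(4096, device="cuda")            # "cuda" maps to the ROCm/HIP device on AMD builds
    b = torch.randn(4096, device="cuda")
    print(torch.allclose(add(a, b), a + b))
```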
Q:In what ways is the company enhancing its software stack, and what is the role of AI in this process?
A:The company is enhancing its software stack by incorporating feedback from large customers and actively working with AI-native companies that develop on its stack. This has led to significant progress in the training and inference stack, with plans to invest more in this area. Additionally, AI is being used to accelerate the development of ROCm kernels and the overall ecosystem.
Q:What indications are there regarding the potential for extending the useful life of GPUs?
A:There are indications that customers may be planning to extend the useful life of GPUs beyond the typical 3-5 year depreciation, as the latest GPUs are prioritized for new data centers and more AI compute is needed overall. This is reflected in older generations of GPUs, such as the MI300X, still being used for inference.
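To make the accounting effect of a longer useful life concrete, the sketch below computes straight-line depreciation for a hypothetical GPU fleet under a 3-year versus a 5-year schedule; the fleet cost is an arbitrary placeholder, not a figure from the call.

```python
# Illustrative only: straight-line depreciation (no salvage value) for a
# hypothetical $10B GPU fleet under the 3- and 5-year lives discussed above.
fleet_cost = 10e9  # placeholder purchase cost, not a figure from the call

for useful_life_years in (3, 5):
    annual_expense = fleet_cost / useful_life_years
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.2f}B depreciation per year")
```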
Q:What is the company's readiness regarding the potential shipment of MI308 GPUs, and how significant could this be?
A:The company's readiness for the potential shipment of MI308 GPUs is uncertain due to a dynamic situation with the product. While some licenses for MI308 have been received, the demand environment and overall opportunity are still being assessed. The ability to update the situation in the next couple of months will depend on how the demand environment shapes up.
Q:Does the company have a product ready for the potential opening of the market for MI308 GPUs?
A:The company has work in process for MI308 GPUs, but the extent of any inventory adjustment needed will depend on how the demand environment develops.
Q:How does the company plan to differentiate itself in the market when competing with other GPU and ASIC vendors, including OpenAI?
A:The company plans to differentiate itself by capitalizing on the global need for more AI compute, which has been driven by leaders like OpenAI but is shared by many large customers. Differentiation strategies include strong products like the MI450 series, time to market, total cost of ownership, deep partnerships, and future roadmap considerations. The company has gained insights from its AI roadmap and believes in its ability to capture significant market share.
Q:Is the company open to similar creative approaches in the future to meet the growing demand for processing power?
A:The company is open to conceptually similar creative approaches to meet the growing demand for processing power. The unique agreement with OpenAI focused on deep partnerships and long-term scale, aligning incentives for all parties. Future partnerships could involve other AI users or sovereign AI opportunities, and each would be considered a unique opportunity to bring the company's full capabilities to the ecosystem.