DigitalOcean (DOCN.US) Q3 2025 Earnings Call
Meeting Summary
DigitalOcean reported strong Q3 results with 16% revenue growth to $230 million and record incremental organic ARR. The company is raising its revenue and free cash flow outlooks for 2025 and 2026, driven by AI and digital native customers, and is accelerating strategic investments in data center and GPU capacity to support growth. Key achievements include signing multiple eight-figure contracts and launching new features such as Spaces Cold Storage, reinforcing DigitalOcean's position in the cloud market.
Call Overview
The call, moderated by the conference operator, focuses on reviewing DigitalOcean's Q3 2025 financial results. It highlights the disclaimer on forward-looking statements, emphasizing the risks and non-GAAP measures discussed, with details available in SEC filings and press releases. The CEO and CFO will provide further insights.
The company surpassed its Q3 revenue and profitability targets, achieved record incremental organic ARR, and expanded its AI platform capabilities. Accelerating growth is driven by AI native customers, higher-spending digital native enterprises, and new customer revenue. Further investments in data centers and GPU capacity are planned to sustain growth while maintaining margins.
The dialogue highlights the growth of AI native customers leveraging the unified agent cloud, showcasing a 41% year-over-year revenue increase. It emphasizes strategic partnerships with companies such as Fal and Newspick, which use the advanced AI infrastructure for scalable inference workloads, GPU Droplets, and comprehensive storage solutions, underscoring the platform's role in accelerating generative AI content creation and digital media.
DigitalOcean's AI platform is evolving to support agentic workflows and enterprise use cases, integrating serverless inferencing, knowledge base services, and AI agent orchestration. Highlights include a significant multi-year contract with a global digital systems integrator and the launch of the DigitalOcean AI Partner Program, aimed at simplifying AI infrastructure for AI and digital native enterprises.
The dialogue highlights the company's commitment to product innovation across its cloud stack, specifically addressing the needs of high-spending, digital native customers. It showcases two case studies: Bright Data, which uses the cloud for AI model training, and VPN Super, which migrated workloads for enhanced security and reliability. The company introduced new features such as Spaces Cold Storage and automated database scaling, which have been adopted by over 35% of high-ARR customers and have significantly boosted those customers' growth rates.
Company boosts 2025 and 2026 revenue guidance, accelerates investments in GPU capacity, data centers, and engineering to meet growing demand, aiming for 18%-20% revenue growth in 2026, a year ahead of schedule, while maintaining strong adjusted free cash flow margins.
Highlights robust Q3 revenue growth, driven by AI and the unified AI cloud, with record incremental organic ARR and strong NDR, setting a foundation for accelerated growth in 2026 and beyond.
The company achieved significant financial gains in Q3, with accelerated revenue, improved gross profit, and a 15% increase in adjusted EBITDA. Non-GAAP diluted net income per share saw a 4% increase, and adjusted free cash flow surged, partly due to an equipment financing arrangement. GAAP diluted net income per share jumped 358% year over year, driven by a one-time tax valuation allowance reversal and debt extinguishment gains.
Company reports robust Q3 financials with $237 million cash balance, $1.6 billion share repurchases since IPO, and a new $100 million authorization. Projects modest interest expenses, introduces unlevered adjusted free cash flow metric, and forecasts revenue and cash flow margin growth for 2025.
The dialogue outlines projected financial performance for 2025, including revenue growth of approximately 16% year over year to $896 to $897 million. Adjusted EBITDA margins are forecast between 38.5% and 39.5%, with non-GAAP diluted earnings per share expected at 35 to 40 cents for Q4 and $2.00 to $2.05 for the full year. Adjusted free cash flow margin is projected at 18% to 19%, factoring in the impact of recent refinancing actions.
The company discusses accelerating 2026 growth through strategic investments in data centers and GPU capacity, achieving its revenue growth target a year earlier than planned while maintaining strong financial health. Plans include leasing approximately 30 MW of incremental data center capacity, with expectations of high-30s% to 40% adjusted EBITDA margins and mid to high teens adjusted free cash flow margins. It also reiterates its commitment to a healthy balance sheet and net leverage in the mid-2x range in 2026.
The dialogue discusses the intersection of AI and private equity, focusing on large contracts with AI-native companies seeking infrastructure solutions, and contrasts this with the AI cohort's past slower growth patterns.
DigitalOcean integrates AI and cloud services into a unified platform, focusing on scalable inferencing workloads for growing customer bases. This strategy involves expanding data center capacity to support AI-native companies achieving traction in consumer and enterprise markets, emphasizing the crossover between AI and cloud capabilities.
The dialogue explores the influence of hyperscalers' migrations on the growth of multi-cloud offerings and partner networks, highlighting the significance of such shifts in driving platform advancements and market dynamics.
The discussion focuses on the increasing migration workloads due to dissatisfaction with incumbent cloud providers and the attractiveness of new capabilities such as advanced networking, auto-scaling databases, cold storage, virtual private cloud, and Direct Connect. These enhancements, along with the platform's readiness for AI native workloads, make it more appealing for incoming migrations from hyperscaler clouds.
Discussion covers the ramp-up of customer workloads using existing and new data center capacities, with a focus on AI inference scale-up. New capacity will progressively come online, starting early 2026, ensuring a smooth revenue ramp. Existing data centers will initially support workload demands, while new data centers will significantly enhance capacity, particularly in the first half of 2026.
Discusses the evolution of AI cloud services, emphasizing the provision of a unified agent cloud stack tailored for AI native companies, integrating GPU, storage, orchestration, and more, driven by customer feedback and market trends.
The discussion highlights the strategic importance of software differentiation and a mature full-stack cloud platform in attracting AI native companies. It emphasizes the shift in focus from hardware to software as companies evolve to meet enterprise needs, with plans to aggressively expand platform functionality based on customer and market opportunities.
The discussion focuses on how a company prioritizes customer demands while maintaining competitive positioning and considering long-term revenue opportunities in AI. The strategy involves being customer-centric, patient, and disciplined in pursuing AI advancements, particularly in inferencing and agent workflows. Confidence in the company's software expertise and platform depth is highlighted, aiming to be a first mover in agent development life cycle, resonating with AI natives and growing customer footprints.
Confidence in reaching 20% revenue growth by 2026 stems from successful growth with large customers, doubling AI business, and high incremental revenue from product-led growth. The company anticipates AI to become a significant revenue segment, reaching mid to high teens percentage of total revenue. Strong visibility from multiple committed contracts further bolsters confidence in future growth.
Discussion revolves around integrating AI revenue, particularly from scalable workloads, into net dollar retention (NDR) metrics. The approach considers AI's growing predictability and resemblance to traditional cloud services, aiming to enhance communication on business resilience and growth prospects.
Discusses the expansion of large customers as a key growth driver, contrasting it with lower NDR among smaller customers. Highlights the importance of looking at blended NDR metrics to understand true performance, emphasizing the strong growth from increased expansion among major clients.
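To illustrate how a blended NDR figure can mask different behavior across cohorts, here is a minimal sketch using the standard NDR definition; the cohort revenue figures are hypothetical and are not numbers from the call.

```python
# Minimal sketch: net dollar retention (NDR) per cohort and blended.
# The cohort revenue figures below are hypothetical, for illustration only.

def ndr(revenue_now: float, revenue_year_ago: float) -> float:
    """NDR = current revenue from a cohort that existed a year ago,
    divided by that same cohort's revenue a year ago."""
    return revenue_now / revenue_year_ago

cohorts = {
    # cohort: (revenue a year ago, revenue now), in $ millions (hypothetical)
    "large customers":   (50.0, 60.0),   # expanding strongly -> NDR 120%
    "smaller customers": (40.0, 38.0),   # slight contraction -> NDR 95%
}

for name, (prior, current) in cohorts.items():
    print(f"{name}: NDR = {ndr(current, prior):.0%}")

# Blended NDR effectively weights each cohort by its prior-period revenue.
total_prior = sum(prior for prior, _ in cohorts.values())
total_now = sum(current for _, current in cohorts.values())
print(f"blended NDR = {total_now / total_prior:.0%}")
```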
The dialogue discusses preliminary guidance for revenue growth, emphasizing the company's strategy for maintaining disciplined investment in durable revenue growth while aiming for mid-to-high teens in adjusted free cash flow margins, despite uncertainties in CapEx and market evolution.
A query explores the shift towards leasing, focusing on comparing gross margins between owned data centers/equipment and leased capacity.
A discussion on the financial implications of expanding data center capacity, emphasizing variable costs, initial lumpiness in expenses, and the timeline for achieving steady-state gross margins. The speaker highlights previous experiences with similar expansions and reassures stakeholders that these factors have been integrated into future financial guidance.
Key Q&A
Q:What are the key financial results for DigitalOcean's third quarter?
A:Key financial results for DigitalOcean's third quarter include exceeding revenue and profitability guidance with 16% revenue growth, the highest incremental organic ARR in the company's history at $44 million, a 21% trailing-twelve-month adjusted free cash flow margin, and a strong balance sheet, with adjusted EBITDA and non-GAAP earnings per share well above guidance.
Q:What does the increase in the largest customers' revenue signify for DigitalOcean?
A:The increase in the largest customers' revenue, which grew 41% year over year and now accounts for 26% of total revenue, signifies strong customer retention and growth on the DigitalOcean platform. It indicates that customers can scale without leaving the platform and are increasingly leveraging both AI and general-purpose cloud capabilities.
Q:How is the unified AI cloud platform performing and what are its capabilities?
A:The unified AI cloud platform is gaining increasing traction with larger, well-funded AI native companies that are in inference mode. It offers a full-stack inference platform for AI native customers running their own models, featuring a powerful lineup of GPUs in both bare metal and Droplet configurations for optimized inference workloads globally on DigitalOcean. The platform also includes a generative AI model platform serving major customers like Canva and Shopify, providing text-to-image and text-to-video models.
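For readers unfamiliar with what running GPUs "in a Droplet configuration" looks like operationally, below is a minimal sketch that provisions a GPU Droplet through DigitalOcean's public v2 API. The region, size, and image slugs and the SSH key ID are placeholder assumptions, not values confirmed on the call.

```python
# Minimal sketch: creating a GPU Droplet via the DigitalOcean v2 API.
# The size/region/image slugs and ssh_keys value are hypothetical placeholders;
# check the API's /v2/sizes and /v2/images listings for real values.
import os
import requests

API = "https://api.digitalocean.com/v2"
headers = {"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"}

payload = {
    "name": "inference-node-01",
    "region": "nyc2",                 # placeholder region slug
    "size": "gpu-h100x1-80gb",        # placeholder GPU Droplet size slug
    "image": "gpu-h100x1-base",       # placeholder GPU-ready base image
    "ssh_keys": [12345678],           # placeholder SSH key ID
}

resp = requests.post(f"{API}/droplets", json=payload, headers=headers, timeout=30)
resp.raise_for_status()
droplet = resp.json()["droplet"]
print(droplet["id"], droplet["status"])
```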
Q:What is the strategic partnership between DigitalOcean and Fal, and what are its benefits for customers?
A:The strategic partnership between DigitalOcean and Fal aims to accelerate generative AI content creation by making image and audio generation more accessible. Through the partnership, Fal will host and run its models on DigitalOcean's infrastructure, enabling applications across creative and enterprise use cases. Customers benefit by being able to create agents that understand and generate various forms of input, significantly expanding the range of problems they can solve.
Q:What is Newspick and what technology do they utilize to deliver local news and information?
A:Newspick is an AI-native customer leveraging a unified cloud to deliver timely and relevant local news and information to 40 million monthly active users. They utilize AI infrastructure to train and deploy complex recommendation systems and natural language processing models for personalization.
Q:What are the benefits of using DigitalOcean's AI native capabilities and full-stack general purpose cloud?
A:The benefits include optimizing cost and performance, enabling the use of vector search services, and providing high-throughput performance for both GPU and non-GPU tasks. Features such as NFS support and GPU Droplets make storage attachment and provisioning straightforward, and applications can be scaled while maintaining speed, reliability, and efficiency.
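To make the storage attachment and provisioning point concrete, here is a minimal sketch that creates a block storage volume and attaches it to an existing Droplet via the DigitalOcean v2 API; the volume name, region, size, and Droplet ID are hypothetical.

```python
# Minimal sketch: create a block storage volume and attach it to a Droplet
# using the DigitalOcean v2 API. Names, region, and IDs are placeholders.
import os
import requests

API = "https://api.digitalocean.com/v2"
headers = {"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"}

# 1) Create a 500 GiB volume in the same region as the target Droplet.
vol = requests.post(
    f"{API}/volumes",
    json={"name": "training-data", "region": "nyc1", "size_gigabytes": 500},
    headers=headers,
    timeout=30,
).json()["volume"]

# 2) Attach the volume to an existing Droplet (ID is a placeholder).
requests.post(
    f"{API}/volumes/{vol['id']}/actions",
    json={"type": "attach", "droplet_id": 123456789, "region": "nyc1"},
    headers=headers,
    timeout=30,
).raise_for_status()
```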
Q:What does the AI platform layer typically enable for companies?
A:The AI platform layer enables companies that are users or consumers of AI to build intelligent applications without directly managing the infrastructure.
Q:How does the AI platform support serverless inferencing and what are some of the key features?
A:The AI platform supports serverless inferencing across popular models and includes a knowledge base service that allows data integration and improves accuracy. It also offers built-in safety measures, visual agent orchestration, and enterprise-grade features such as observability, Git integration, and auto-scaling.
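As a rough illustration of what serverless inferencing means for a developer, the sketch below posts a chat-style request to an inference endpoint. The endpoint URL, model name, and request shape are assumptions for illustration only and are not confirmed details of DigitalOcean's API.

```python
# Minimal sketch of calling a serverless inference endpoint.
# The endpoint URL, model name, and request format are assumptions for
# illustration; consult the provider's documentation for the real interface.
import os
import requests

ENDPOINT = "https://example-inference.endpoint/v1/chat/completions"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['INFERENCE_API_KEY']}"}

resp = requests.post(
    ENDPOINT,
    json={
        "model": "example-open-model",   # placeholder model name
        "messages": [
            {"role": "user", "content": "Summarize our Q3 support tickets."}
        ],
        "max_tokens": 256,
    },
    headers=headers,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```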
Q:What significant customer agreement was signed post-Q3 for the AI platform and what is its focus?
A:A significant customer signed an eight-figure per year multi-year contract to leverage the AI cloud for AI transformation for its digital native enterprise customer base. The focus is on the software engineering lifecycle, including planning, backlog, and roadmap management, release planning, execution, and customer support.
Q:What is the purpose of the DigitalOcean AI Partner Program?
A:The purpose of the Digitalocean AI Partner Program is to empower AI and digital native enterprises with a unified cloud and AI platform to seamlessly build and scale intelligent applications using agents, without dealing with fragmented infrastructure.
Q:Which new features were introduced in Q3 to support digital native enterprises scaling on DigitalOcean?
A:New features include Spaces Cold Storage, an enterprise object storage solution for managing massive data sets, and enhanced managed databases with automated storage auto-scaling, enabling seamless scaling with no downtime across major database engines.
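Because Spaces exposes an S3-compatible API, archiving a large, rarely accessed object can be sketched with a standard S3 client. The bucket name, keys, and credentials are placeholders, and how the Cold Storage tier is selected is an assumption to verify against the product documentation.

```python
# Minimal sketch: uploading an archive object to a Spaces bucket via the
# S3-compatible API. Bucket name, keys, and region are placeholders; how the
# cold-storage tier is selected should be confirmed in the product docs.
import boto3

session = boto3.session.Session()
client = session.client(
    "s3",
    region_name="nyc3",                                  # placeholder region
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # Spaces S3 endpoint
    aws_access_key_id="SPACES_ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SPACES_SECRET_KEY",
)

# Upload a large, rarely-accessed artifact (e.g., a model checkpoint archive).
client.upload_file(
    Filename="checkpoints-2025-q3.tar.gz",
    Bucket="example-archive-bucket",
    Key="backups/checkpoints-2025-q3.tar.gz",
)
```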
Q:How do the new features introduced in the past year affect customers' growth rates?
A:Customers with over $100,000 in ARR who have adopted at least one feature released over the past year have seen their growth rates increase by several hundred basis points after adopting those products.
Q:How does the company plan to support its growth in 2026 and beyond?
A:To support growth, the company has ordered more GPU capacity to meet the needs from AI native customers and secured around 30 MW of incremental data center capacity. They have also added equipment financing and ramped engineering resources to accelerate the unified AI cloud roadmap, along with targeted investments in sales and marketing.
Q:What strategic investments have been made by the company to support growth, and what is their impact?
A:Strategic investments include increased equipment financing, ramped engineering resources for the unified AI cloud roadmap, and targeted investments in sales and marketing. These investments are expected to build on the company's recent success and set them up for strong growth in 2026 and 2027.
Q:What financial metrics is the company confident it will achieve, and when?
A:The company expects to deliver 18% to 20% growth in 2026, achieving its 2027 growth target a full year earlier than previously projected. They will maintain strong adjusted free cash flow margins in the mid to high teens.
Q:What was the company's performance in Q3, and how does it relate to the full year and Q4 outlook?
A:In Q3, the company delivered strong performance, exceeding guidance on both revenue and profitability. The unified AI cloud is gaining traction, driving strong revenue growth from high-spending customers, with demand exceeding current capacity. This momentum and visibility into demand led the company to raise its revenue and adjusted free cash flow outlook for 2025 and 2026.
Q:What are the key details regarding the company's Q3 revenue, profitability, and customer metrics?
A:Q3 revenue was $230 million, up 16% year over year, the highest revenue growth rate since Q3 2023. Growth was balanced across the unified AI cloud and was primarily driven by higher-spending AI and digital native customers. The company delivered the highest incremental organic ARR in its history at $44 million, with strong AI/ML revenue growth and customers whose annualized run-rate revenue grew 40% year over year. Gross profit was $137 million, adjusted EBITDA was $100 million, and non-GAAP diluted net income per share was 54 cents.
Q:What was the impact of recent financial activities on the company's adjusted free cash flow?
A:Recent financial activities, including equipment financing and balance sheet transactions, impacted adjusted free cash flow positively. Adjusted free cash flow was $85 million, or 37% of revenue, an increase from the prior year's $19 million. The company entered into an equipment financing arrangement to better align investments with future revenue generation, contributing to the increase in adjusted free cash flow.
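A quick arithmetic check of the quoted margin, using only the figures above:

```python
# Quick check of the quoted adjusted free cash flow margin.
q3_revenue = 230_000_000          # Q3 revenue from the call summary
adj_free_cash_flow = 85_000_000   # Q3 adjusted free cash flow from the call summary

margin = adj_free_cash_flow / q3_revenue
print(f"adjusted FCF margin: {margin:.0%}")   # ~37%, matching the quoted figure
```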
Q:What is the updated outlook for revenue and adjusted free cash flow margin?
A:The updated outlook for the fourth quarter of 2025 includes revenue in the range of $237 to $238 million, representing approximately 16% year-over-year growth. For the full year 2025, projected revenue is between $896 million and $897 million. The adjusted EBITDA margin is expected to be in the range of 38.5% to 39.5%, and the full-year adjusted free cash flow margin is projected to be between 18% and 19%.
Q:How will the new data centers and GPU capacity investments affect growth and financials?
A:The new data centers and investments in incremental GPU capacity are expected to further accelerate growth, with the potential to deliver 18% to 20% revenue growth a full year earlier than previously projected. While COGS and operating expenses will increase in early 2026 as the company ramps into new data center capacity, it anticipates delivering high-30s% to 40% adjusted EBITDA margins while maintaining mid to high teens adjusted free cash flow margins.
Q:What is the composition of the multiple eight-figure committed contracts mentioned?
A:The multiple eight-figure committed contracts primarily involve AI native companies looking to leverage the company's infrastructure as well as a series of agents for building AI experiences for software engineering, utilizing its AI platform layer. However, as explained in the earlier remarks, the lines between AI and cloud are becoming increasingly blurred, so the contracts reflect a more unified cloud platform approach, which the company refers to as the agentic cloud.
Q:How do these new contracts compare with previous engagements, especially those related to AI?
A:These new contracts are different from previous engagements as they are not just with AI companies; they also include traditional cloud customers that are increasingly using AI-related services like storage artifacts, networking capabilities, and others. This indicates a blurring of the line between AI and cloud services and reflects a more integrated approach to technology usage among the customers.
Q:What is the reason behind the scaling of customers' commitments and how is the company responding?
A:The scaling of customers' commitments is attributed to the growing use of AI among customers who initially started with the company's AI platform but are now expanding to include other cloud services. This expansion is prompting the company to scale with these customers by expanding its data center footprint to support their growth.
Q:How are the new contracts contributing to the company's financial outlook?
A:The new AI workloads are contributing positively to the company's financial outlook as they are durable, predictable, and offer a great opportunity to scale with customers as they achieve real-world traction globally. The company is focusing on capacity expansion to support marquee AI native companies that are finding success with real end customers.
Q:What factors are driving the migration of workloads to new cloud providers?
A:The migration of workloads to new cloud providers is driven by a combination of factors, including dissatisfaction with an incumbent cloud provider and the attractiveness of the new provider's offerings. Specific draws include advanced networking capabilities, various Droplet configurations, storage options such as cold storage, and features like database auto-scaling. These are fundamental building-block capabilities that are particularly attractive to customers with sophisticated workloads.
Q:What new capabilities are contributing to the attractiveness of the company's platform for migration and AI native workloads?
A:The platform is being enhanced with new capabilities such as virtual private cloud, Direct Connect, and others that make it more attractive for migration workloads. For AI native workloads, expanded data center capacity will come online progressively through 2026, with the associated revenue ramp expected to be relatively smooth over time. The company is also building a unified agent cloud stack for AI native companies that need not only GPU and inferencing capabilities but also agent workflows, storage, databases, authentication, authorization, and Kubernetes-based orchestration.
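As a concrete illustration of the Kubernetes orchestration piece of that stack, the sketch below uses the official Kubernetes Python client to deploy a GPU-backed inference service on a managed cluster; the container image, replica count, and GPU resource request are illustrative assumptions.

```python
# Minimal sketch: deploying a GPU-backed inference service with the official
# Kubernetes Python client. The image, namespace, and resource values are
# illustrative placeholders; the cluster is assumed to expose NVIDIA GPUs
# via the standard "nvidia.com/gpu" resource name.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for a managed cluster (e.g. DOKS)

container = client.V1Container(
    name="inference",
    image="registry.example.com/inference-server:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1", "memory": "16Gi", "cpu": "4"},
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="inference-server"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "inference-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference-server"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```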
Q:How does the company plan to accommodate the new 8-figure contracts signed after the quarter close?
A:The company plans to accommodate these new 8-figure contracts by leveraging existing data center capacity and expanding their data center footprint progressively through 2026. They have a build schedule from providers and coordinate closely to ensure they receive a warm shell in new data centers, allowing them to start racking servers as soon as the capacity is available. Some capacity is already available in existing data centers, which, combined with the new builds, will facilitate the ramp-up needed to support the workload from these contracts.
Q:What is the anticipated timeline for bringing new data center capacity online?
A:The new data center capacity is expected to come online progressively through 2026, with most of it anticipated to be available in the first half of next year. In the fourth quarter, some of the build-outs will be paid for, indicating that the capacity will become available in the months and quarters that follow. This early ramp of data center capacity allows time to deploy GPUs and for customers to ramp up, leading to a relatively smooth revenue ramp over the course of the year.
Q:How does the company view its competitive positioning and strategy in relation to AI capabilities and customer needs?
A:The company's strategy focuses on AI native companies building real businesses that run models in inference mode. These companies need a unified agent cloud stack that includes not only GPU and inferencing capabilities but also agent workflows, storage, databases, authentication, authorization, and Kubernetes-based orchestration. The strategy aligns with customer needs for AI applications and has been executed by providing a comprehensive software stack. Enhancements and customer feedback have been integrated into the platform rapidly, showcasing the power of co-inventing with customers and leveraging the company's strengths in software differentiation and a 12-year-old full-stack general-purpose cloud. This focus on software and AI differentiation positions the company favorably for scaling with AI native companies.






