OpenAI and Oracle (NYSE: ORCL) have announced a collaboration to expand data centre capacity for OpenAI’s long-horizon compute programme known as “Stargate”, with the aim of adding high-density capacity on Oracle Cloud Infrastructure to support the training and deployment of advanced AI models. Detailed financial terms, locations and a commissioning timetable were not disclosed.
The initiative is intended to secure additional GPU clusters, high-bandwidth networking and power-dense facilities for frontier model development and large-scale inference. OpenAI continues to run substantially on Microsoft Azure, and the Oracle arrangement is positioned to complement that footprint with additional capacity and geographic diversity.
The move follows earlier steps by the companies to work together at scale. In June 2024, Oracle said OpenAI would use Oracle Cloud Infrastructure to extend Microsoft Azure AI capacity, with workloads connected through the Oracle Interconnect for Microsoft Azure, according to a company statement and Reuters reporting at the time. That arrangement underscored an emerging multi-cloud pattern for the most compute-intensive AI applications.
Stargate has been described in prior reporting by the Financial Times and The Information as a code-name for a next-generation compute campus intended to support significantly larger AI models over the second half of the decade. Those reports indicated the project would likely involve multiple partners across chips, cloud infrastructure, power and real estate. Specific cost estimates and site decisions have not been confirmed by the companies.
Oracle has been investing in high-performance AI infrastructure, including bare metal servers, RDMA networking and large-scale clusters built around Nvidia (NASDAQ: NVDA) GPUs. Oracle has also said it plans to support systems based on Nvidia’s Blackwell architecture as they become available, part of a broader effort to offer capacity for AI training and inference. OpenAI, which is closely partnered with Microsoft (NASDAQ: MSFT), has sought to diversify access to compute and energy to meet rising demand for services such as ChatGPT and to develop future models.
Industry analysts expect that building out a project on the scale of Stargate would require long-term power procurement, potentially including nuclear and renewable sources, in addition to specialised cooling and grid interconnection. Data centre construction timelines, supply chain constraints for advanced GPUs and networking gear, and permitting processes are likely to be key factors influencing delivery.
For Oracle, deeper ties with OpenAI support its strategy to grow Oracle Cloud Infrastructure’s role in AI workloads and to monetise recent capital expenditure on GPU capacity. For OpenAI, a multi-cloud approach mitigates concentration risk, can reduce latency for enterprise customers across regions and may help align compute availability with the cadence of new model releases.
The broader market context remains fluid. Governments continue to scrutinise AI infrastructure, including export controls on advanced chips and incentives for domestic manufacturing. Power grid constraints in several regions have prompted developers to consider on-site generation and long-term power purchase agreements. The ultimate configuration of Stargate will depend on how these regulatory and supply factors evolve. Key points to watch include:
- Site and power: watch for confirmed locations and long-term power contracts, including potential low-carbon sources.
- Compute supply: availability timelines for next-generation GPUs and high-bandwidth memory, as well as potential use of alternative accelerators.
- Network design: details of high-speed interconnects and interoperability with Azure for multi-cloud scaling.
- Capital plan: disclosure on financing, partners and phased capacity additions.
- Governance and safety: commitments around model testing, reliability and responsible deployment at larger scales.
Neither OpenAI nor Oracle provided additional detail beyond the headline collaboration, and further announcements are expected as siting, supply and regulatory workstreams reach milestones. Previous cross-cloud work between Oracle and Microsoft provides a technical precedent for linking capacity across providers for AI workloads, a pattern that could feature in the Stargate design as it develops.