Key Takeaways
- AI infrastructure investment faces potential overheating despite long-term demand.
- Data center design requires flexibility for rapid GPU and power density advancements.
- Coordination gaps among industry players significantly slow AI infrastructure deployment.
- Addressing power grid limitations and community relations is critical for AI growth.
Deep Dive
- The discussion frames AI bubble talk as a recurring pattern seen with earlier general-purpose technologies such as the internet.
- While AI itself may not be in a bubble, segments of the market show overheating due to unprecedented speed of capital expenditure.
- A JP Morgan estimate suggests $5 trillion in AI investments by 2030 would require $650 billion in annual revenue for a 10% return.
- Current AI services generate only tens of billions in annual revenue, leaving a sizable gap relative to that requirement.
- Infrastructure build-outs must consider flexibility, as the timeline for artificial general intelligence (AGI) remains uncertain.
- Core markets, including Northern Virginia, Dallas, and London, report long queues and low data center vacancies.
- Graphics Processing Units (GPUs) have a short shelf life of roughly 5-6 years, and NVIDIA's rapid release cadence can render equipment obsolete even sooner.
- Data center building shells have a 20-40 year lifespan, contrasting sharply with customer-provided server equipment's 3-6 year refresh cycle.
- Evolving technology necessitates retooling existing data centers due to significantly increased power density per rack, requiring upgrades to power delivery and cooling systems.
- Future-proofing data center designs for AI, particularly around liquid cooling and higher power densities, is a major source of design complexity today.
- The cost for retooling data centers to accommodate hyperscalers' needs is typically covered by additional non-recurring charges within contracts.
- A 15-year take-or-pay data center contract can involve negotiations for non-recurring charges and lease rate adjustments if new designs are introduced.
- KKR has a track record of recontracting data center assets at potentially higher rates, driven by supply/demand imbalances and the difficulty of relocating established ecosystems.
- Older contracts are often well below current market rates, creating negotiation leverage on both sides: hyperscalers may threaten to leave, while operators may upgrade capacity to justify higher lease rates.
- KKR aims to re-engineer data center construction using a 'molecule to the rack' approach to address a 'coordination tax' within the industry.
- This inefficiency stems from a lack of coordination among data center operators, power companies, and capital providers, slowing deployment cycles.
- KKR integrates data center platforms with power generation, transmission, and utility relationships, including a partnership with ECP leveraging power generation assets.
- A pilot project in Bosque, Texas, co-locates a data center with two hyperscalers adjacent to a power plant, requiring negotiations with regulators and communities.
- During periods of grid strain, the power plant can redirect electricity from the data centers back to the grid while the data centers run on on-site generation, a complex engineering feat.
- Hyperscale data center costs range from $12 million to $18 million per megawatt, with state-of-the-art liquid cooling adding $2 million to $5 million per megawatt.
- The industry's current lack of standardization in liquid cooling technologies leads to varying costs for similar solutions.
- Long-term success depends on building customer relationships and providing innovative solutions, rather than just maximizing short-term EBITDA.
- The traditional view that hyperscalers require power 24/7 with no flexibility is evolving, with discussions around on-site batteries and generation.
- There is optimism that operators will innovate and hyperscalers will embrace flexibility for more efficient resource use, though fully utilizing renewable resources via battery storage is still maturing.
- Significant obstacles to infrastructure projects include supply chain issues, community opposition, and rising electricity costs.
- The U.S. faces a 'collision course' of ideas, balancing the need to win the AI race with protecting ratepayers and making long-term investments amid technological uncertainty.
- The U.S. needs approximately 70,000 miles of new high-voltage power lines in the next decade, a build-out that would take 200 years at the current pace.
- Optimism remains for solutions, including federal policies for grid efficiency, equitable cost-sharing, fast-tracking authority, and utilizing federal land for infrastructure.
- The next few years will test operators' ability to prudently allocate capital and deliver mega facilities, potentially leading to a split between successful and struggling companies.
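The JP Morgan figures above can be sanity-checked with simple arithmetic. A sketch, assuming the 10% figure is a required annual return on the cumulative investment and taking "tens of billions" of current revenue as roughly $50B (both framing assumptions, not stated in the source):

```python
# Back-of-the-envelope check of the JP Morgan estimate cited above.
# Assumption: $650B is the annual revenue needed to earn a 10% return
# on ~$5T of cumulative AI investment by 2030.
investment = 5_000_000_000_000       # ~$5 trillion in AI capex by 2030
required_return = 0.10               # 10% annual return target
required_profit = investment * required_return   # $500B/yr of profit

# Revenue must exceed profit by a margin factor; at ~77% margin,
# $650B of revenue would yield ~$500B of profit (illustrative only).
required_revenue = 650_000_000_000
implied_margin = required_profit / required_revenue

current_revenue = 50_000_000_000     # "tens of billions" today (rough)
gap = required_revenue - current_revenue
print(f"Implied margin: {implied_margin:.0%}")    # ~77%
print(f"Revenue gap: ${gap / 1e9:.0f}B per year")  # ~$600B
```

The point of the exercise is the order of magnitude: today's AI revenue would need to grow by more than an order of magnitude to justify the projected spend.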
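The per-megawatt cost ranges quoted above imply large absolute budgets for a single campus. A quick sketch, where the 100 MW facility size is an illustrative assumption not taken from the source:

```python
# Illustrative build cost for a hypothetical 100 MW hyperscale campus,
# using the per-megawatt figures cited above.
capacity_mw = 100                       # assumed facility size
base_low, base_high = 12e6, 18e6        # $/MW, hyperscale build cost
cooling_low, cooling_high = 2e6, 5e6    # $/MW, liquid-cooling premium

low = capacity_mw * (base_low + cooling_low)
high = capacity_mw * (base_high + cooling_high)
print(f"Estimated cost: ${low / 1e9:.1f}B - ${high / 1e9:.1f}B")
# 100 MW at $14M-$23M per MW is roughly $1.4B to $2.3B all-in
```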
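The transmission figures above imply a roughly twenty-fold gap between the current build pace and what the decade's need would require, which a few lines of arithmetic make explicit:

```python
# Implied build pace from the transmission figures cited above.
miles_needed = 70_000          # new high-voltage miles needed this decade
years_at_current_pace = 200    # time to build them at today's rate

current_pace = miles_needed / years_at_current_pace
needed_pace = miles_needed / 10   # to finish within the decade
print(f"Current pace: ~{current_pace:.0f} miles/year")  # ~350
print(f"Needed pace:  ~{needed_pace:.0f} miles/year")   # ~7,000
print(f"Shortfall:    {needed_pace / current_pace:.0f}x")  # 20x
```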