Data center and power market implications - III
AES already has significant relationships with Data Center Companies:
Number two, we identified data centers and technology companies very early on as really our sweet spot in terms of future deals, and we've been working with them. So we have innovated, coming up with things like hourly-matched carbon-free energy with the data centers.
So we've already done 6 gigawatts. So we're out ahead. These are actual projects that have been done. So we have very close relationships with them. And I would say that coming up with various ways of us helping them meet their needs, certainly things that we have been discussing for some time.
Enlight is mostly seeing demand through utility customers:
In the West, which is obviously power constrained and facing big data center builds over the next decade, that demand is contracted via the utility. So the corporates will meet their demand through a PPA direct with the utility. So we won't see it in an RFP with the data center providers directly; we'll see it as increased demand from the utilities. And given the interconnection positions we hold in the West, we think that puts us in a really strong position to deliver in the near term for those data center clients, which is a huge value to them.
Vistra optimistic on load growth, with ~35 GW of data center demand expected by 2030:
There has been much discussion in recent months about the substantial power demand growth forecast, including from the potential build-out of data centers and other sources of electricity demand. Third-party research indicates data center-related activity could approach 35 gigawatts of additional demand by 2030. However, our teams also see multiple additional potential drivers of demand in the geographies we serve. These drivers include: continued reshoring of industrial activity as evidenced by multiple large chip manufacturing site build-outs, partially due to the CHIPS Act; increased electrification of commercial, industrial, and residential load across the country, as evidenced by the expectation of approximately 20 gigawatts of additional power demand in West Texas by 2030; and strong population growth, particularly in the state of Texas, which has been steady at 1.5% to 2% per year.
With these drivers, we see the potential demand outcome skewing higher, albeit with a wider range. In their most recent report, PJM's load growth expectations through 2030 doubled from their 2023 estimate. In Texas, recent reports from ERCOT suggest load growth through 2030 in a wide range, from as low as 1.6% per year to as high as 6% per year, or even higher if more than half of the large loads recently discussed with ERCOT actually materialize.
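To see how wide that ERCOT range really is, here is a quick compounding sketch. The six-year horizon (roughly 2024 to 2030) is an assumption for illustration, not a figure from the call.

```python
# Illustrative only: compound ERCOT's quoted annual load growth rates
# over an assumed six-year horizon to 2030.
low_rate, high_rate = 0.016, 0.06
years = 6

low_total = (1 + low_rate) ** years - 1    # cumulative growth, low case
high_total = (1 + high_rate) ** years - 1  # cumulative growth, high case

print(f"low case:  {low_total:.1%} total load growth")
print(f"high case: {high_total:.1%} total load growth")
```

The same 2030 endpoint spans roughly a 10% versus a 40%-plus cumulative increase, which is why the quoted range is so consequential for planning.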
Instead of responding to RFPs, Vistra is running its own RFPs, flipping the procurement narrative around in some ways:
I would say gas has become as interesting to many of them as nuclear has, in fact, even a preference for some. So from our standpoint, all options are on the table with 40,000 megawatts. And we've got, obviously, 12 states and 40,000 megawatts that we can do some of our projects with. But we've actually flipped it a little bit so we've actually put out some RFPs ourselves. So instead of just responding to the inbounds, we've actually gone out to the marketplace to handle actually multiple conversations simultaneously and see what the best opportunity might be for us. And so that process has not concluded yet, but we're in the middle of that process. And we're very excited about the interest.
EPA rules are likely to be litigated per Vistra:
I actually think that means more load comes and then we might have to build more gas along with the wind, solar and battery that is, as you know, already heavily in the queue. But this gas from a reliability standpoint, I think, will play a role one way or the other. I just don't see it being combined -- brand new combined cycles for that purpose until there's more clarity about these EPA rules. They're likely to be litigated. I think it's tough to invest into an environment where you've got uncertainty with protracted litigation. And so I think it's going to be difficult to create new baseload assets with confidence. And that's why I think the existing baseload assets are getting as much attention as they are.
Capital Power states that ~40-60% of hyperscale data center load must be met by dispatchable power:
If you look at -- so if I took a hyper data center, 1,000 megawatts minimum, 1 million square feet. If you were to serve that by renewables, for example, at best, renewables are going to be able to serve 60% if you wanted to meet the latency requirement of that facility at best. At worst, it's going to be 40%. So for every hyper data center, you are going to need somewhere between 40% and 60% dispatchable. So if I back into the Microsoft transaction announced last week with Brookfield, if they're going to sign up 10.9 gigawatts, you're going to need at least 6 gigawatt of firm and you could need upwards of 10 gigawatts to be able to meet the needs of it. And that's assuming it's not Five Sigma availability.
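A quick check on the arithmetic above, applying the quoted 40-60% dispatchable share to the 10.9 GW Microsoft/Brookfield figure. The speaker's own numbers run higher, presumably adding margin for availability and redundancy.

```python
# Illustrative check of the quoted ratios: a hyperscale campus needs
# roughly 40-60% of its load served by dispatchable (firm) generation.
announced_gw = 10.9  # Microsoft/Brookfield figure quoted above

firm_low = announced_gw * 0.40   # lower bound on firm capacity needed
firm_high = announced_gw * 0.60  # upper bound on firm capacity needed

print(f"firm capacity needed: {firm_low:.1f} - {firm_high:.1f} GW")
# The speaker's "at least 6 GW ... upwards of 10 GW" sits above this
# band, presumably reflecting availability and redundancy margin.
```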
Beyond the US, there’s a considerable amount of interest in Data Centers in Canada as well:
The Desert Southwest also has strong forecast demand growth at 2% per annum, which equates to 7 gigawatts of demand growth by 2035, coupled with 7 gigawatts of thermal baseload retirements. There is also significant interest in the construction of large data centers in the Phoenix area. It should also be noted that there is considerable interest in data center builds in Alberta, which could result in longer-term offtake agreements for the repowered Genesee units.
Wheeling power from AZ to CA is an option:
Having said that, it doesn't take away from the fact that adding assets in California and Arizona, there are synergies from a commercial perspective, a trading perspective and an operational perspective. And right now, Arizona is a real hotbed on the data center side. So we feel good about both elements.
On the European side of the business, Encavis is going for lower-scale corporate PPAs:
In terms of prices, I would say that they are quite stable. Of course, they are down compared to one year ago. Just to give you a couple of figures: in Spain, again, we always talk about 10-year pay-as-produced, we see prices between EUR 30 and EUR 35.
In Germany, we see something around EUR 60 to EUR 65. Italy is about the same, maybe around EUR 60, still a little bit higher at the upper end, around EUR 60 to EUR 65. These are the countries where we are currently most active in terms of sourcing. Again, maybe a slight decrease, but marginal compared to the last quarter. This was the first one.
The second one was on off takers. Yes, tech companies are still keen to source green power. I think that the evolution around AI is sort of putting pressure on them. It seems like, AI is going to significantly require more energy than sort of the traditional business. This is definitely a good thing for us.
But to be honest, and as I mentioned before, we are more looking for players who are a little bit less price sensitive than the big corporates, who can really exert their power and negotiate hard. We see SMEs. We see a lot of industrial players, again not the biggest ones, but small and medium enterprises who still want to procure the power to stabilize their cost positions and want good visibility on costs over the long term.
In Spain, €40/MWh PPAs convert to 12% IRRs:
We announced last May that we closed the purchase of 435 megawatts of modules at 0.0911 million per megawatt, which is an absolute all-time record. At the same time, we continue working on the PPA market. We see prices in the range of the 40s, and this allows us to maintain our 12% IRR return target.
Still early in terms of energy reduction potential of models:
The second area in which we've been steadfast is machine learning. Lots of prior customer engagements have shown excellent results. One of the new things that we're doing this year, just getting started, is applying quantum to large language models. We're just at the beginning of that journey, but I hope to have results within a year or so. If we're successful there, we'll be able to offload significant workloads from GPUs and significantly reduce the energy requirements of data centers for large language models. So it's an exciting area for the company going forward.
Jabil sees legacy data centers going through a retrofit:
Let me hit the power and cooling question first. One of the things we are seeing in the data center space, the legacy data centers are going through a retrofit. And that retrofit, we're looking at capabilities there where we can offer services to data centers from a cooling distribution unit perspective.
If you look at new deployments, the legacy ones are more liquid to air, the newer deployments are liquid to liquid cooling. Well, we have internal capabilities around that, and we'll be continuing to look at small capability-driven transactions in that space as well. The key here is to expand our service around server integration and into the whole data center building infrastructure as well. And we -- I think that would be a big differentiator.
Data Centers need 100% redundancy:
There's a number of factors driving data center growth, as we've talked about. One is AI proliferation; that obviously carries a lot more significance. Secondly, the size of a data center is growing much, much larger. I think you put out a note today referencing 15 to 40 gen sets for a data center, and that's for a relatively modest data center today. There are bigger data centers today, and that is probably only going to get bigger as we see AI come on. So that is, again, a large upside.
Nvidia:
Certainly, there are some geographic benefits and differences between training and inference. Most folks can do training anywhere on the globe. So we see big training clusters being put up. Usually, it's a function of where they can get the data center space and can tap into the grid; having good access to power and the right economics there is very important.
This is a key tidbit - training doesn’t need to be localized!
But training doesn't need to be localized. If you've ever used a remote desktop that's halfway around the world, you can feel the lag and latency. Training is fine with that. But for inference, you kind of do need to be near the user. Some inference workloads might be fine: batch-processing inference, fine; a longer chat bot might be okay. But if you're doing gen AI search, asking your browser a question and expecting information back, you want that answer quickly. If it's too slow, your quality of service immediately plummets.
Blackwells can be deployed almost anywhere:
At the same time, we also make sure that they can take the same building blocks and vary the sizes and capabilities. GB200, the NVL72, is designed for trillion-parameter inference. For the more modest-sized 70B or 7B models, we have an NVL2, which is just 2 Grace Blackwells tied together, which fits nicely in a standard server design and could be deployed anywhere, including at the edge, the telco edge.
Water being a great way to transfer heat:
Water is a fantastic mover of heat. Your house is built with insulation that is nothing more than trapped air; air is actually an insulator. It's not a good transferrer of heat, but water is excellent at it. If you've ever jumped into a 70-degree pool from 70-degree air, it feels really cold. That's because water is sucking the heat right out of you. It's really good at moving heat around. And that efficiency goes right to more GPUs, more capabilities and denser, more capable AI systems.
Digital Realty points out that cloud-enabled DCs run around $1 billion in capex each, so it's massive scale:
So this transition from diversified REIT into multi-strat alternative asset manager, I think was timed correctly. It was a good time and a good decision for us to do it. And generally, a recognition that traditional REITs, traditional digital infrastructure REITs like American Tower, Equinix, Digital Realty, all of them are actually now creating joint ventures where they're out raising third-party capital to grow, because they recognize, particularly when you're dealing with cloud and AI, it's not a $20 million CapEx opportunity. It's not $100 million. If you're building a cloud-enabled data center, it's measured in $800 million, $900 million, $1 billion of CapEx for one location.
13 GW of power is around $3.8 trillion in spend:
So we're literally at the ground floor of AI. Customers that we've had for a decade or more are building these big large language models. And so I tell investors, look, AI is going to take a minimum of about 10 years to build, and the reference point I give people is public cloud: this year, public cloud will be 11 years old, and we've spent about $3.8 trillion of CapEx manifesting the public cloud, which is about 13 gigawatts of power. And that $3.8 trillion of CapEx is not only the data center spend, but the servers, the fiber optic cabling, the edge computing, all of the infrastructure required to make public cloud work. We believe AI follows a very similar flight path as public cloud has followed.
$6-7 trillion for around 50 GW; not sure how the math checks out, but still huge:
So look, we think it's 10 years, but there's only one big difference. We're going to spend twice the CapEx. We'll spend close to $6 trillion to $7 trillion in CapEx and the total amount of compute power will be about 50 gigawatts. So remember, again, we're 10 years into public cloud, and we're at about 13 gigawatts of power consumed at data centers. We're talking about that going up by a factor of almost 3.5x over the next 10 years.
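The header's skepticism about the math can be sanity-checked by computing the implied capex per gigawatt from the figures as quoted (the $6.5T midpoint is an assumption on my part):

```python
# Implied capex intensity from the quoted figures (illustrative only).
cloud_capex_usd = 3.8e12   # ~$3.8T spent building public cloud
cloud_gw = 13              # ~13 GW of data center power today

ai_capex_usd = 6.5e12      # assumed midpoint of the $6-7T AI forecast
ai_gw = 50                 # ~50 GW forecast for AI

cloud_per_gw = cloud_capex_usd / cloud_gw   # capex per GW, cloud era
ai_per_gw = ai_capex_usd / ai_gw            # capex per GW, AI forecast
power_multiple = ai_gw / cloud_gw           # GW growth factor

print(f"cloud: ${cloud_per_gw/1e9:.0f}B/GW, AI: ${ai_per_gw/1e9:.0f}B/GW, "
      f"power growth: {power_multiple:.2f}x")
```

On these numbers, the AI build-out implies materially lower capex per gigawatt than public cloud, and the power multiple comes out closer to 3.8x than the "almost 3.5x" quoted, which may explain the header's hesitation.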
Edge AI is still a bit away:
We're probably still 12 to 18 months away from being in true generative AI. And then once we do get to generative AI, it then moves to what we call edge AI. That's about 5 to 6 years away. And then we have sort of what I would call more consumer-based AI, which is down at the street level, which is probably 7, 8 years off, which where those applications are sitting at cell towers, small cells, mobile infrastructure that's more edge based.
Demand is >50GWs, but transmission capacity is ~10GWs:
And so when we look at the aging transmission grid in the U.S. and in Europe, there's literally less than 10 gigawatts of power available to power European and U.S. data center space.
Land banking is again a strategy, similar to renewables:
Data centers are starting to move in that direction based on scarcity. It's really hard to build a data center. Getting a will serve letter for power from a utility company, really hard as well. And so by having a land bank and having these global operations where we've got multiple shovels in the ground at the same time, when a big cloud customer shows up to us and says, look, we need you to go fast in Melbourne, can you do it? We say yes. When a particular cloud customer came to us in Johannesburg and said, nobody wants to build in Johannesburg, can you do it? We said, "yes, we'll do it". 100-megawatt data center turned up in 18 months.
Cloud providers still outsource most data center builds:
Now I'm not going to tell you they don't build data centers themselves. They do. Today, I would tell you that most of the cloud guys self-perform about 30% to 40% of the time, and they outsource 60% to 70% of the time. You go to towers, it's like 90% outsourced. You go to fiber, it's almost 100% outsourced. Nobody builds their own fiber anymore; it's too expensive. So shared infrastructure is the way you get to the right return at the end of the day.
Securitization of DC capacity boosts returns:
If I go build a great data center, again, I'll pick on Melbourne. If I go build a data center for cloud company A in Melbourne, and I turn up 50 megawatts for them, I'm only going to get an unlevered return of like 8%, 9% in a market like Melbourne, which is pretty tight. If I securitize it, that return moves to 14%, 15%. I pick up 400 to 500 basis points. But then if I build 2 more small data halls next to it, and I put in Cloud Guy G and Cloud Guy C, all of a sudden my returns jump. And then you get to that high-teens, low-20s IRR, and you get into that 2.5, 3 mark territory, which is really what makes digital infrastructure appealing for LPs today.
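The quoted jump from an 8-9% unlevered return to 14-15% is consistent with a standard leverage identity: equity earns the asset return minus the debt's share of cost, spread over the smaller equity slice. A minimal sketch, where the 60% loan-to-value and 5% debt cost are illustrative assumptions, not figures from the call:

```python
# Minimal sketch of how securitization can lift an 8-9% unlevered
# return into the mid-teens. LTV and debt cost are assumed.
unlevered_return = 0.085  # midpoint of the quoted 8-9%
ltv = 0.60                # assumed loan-to-value of the securitization
debt_cost = 0.05          # assumed all-in cost of debt

# Leverage identity: equity return = (asset return - LTV * debt cost)
# divided by the equity share (1 - LTV).
levered_return = (unlevered_return - ltv * debt_cost) / (1 - ltv)
print(f"levered equity return: {levered_return:.1%}")
```

Under these assumptions the equity return lands around 13.8%, right in the quoted band; the further jump to high-teens IRRs comes from adding tenants to the adjacent data halls.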
Nippon Technologies:
Also, with regards to ChatGPT: if you're going to deliver AI like ChatGPT, then you need real-time processing. You also need to consider learning capabilities as well, so batch processing is going to be required, and also storing this data in cold storage. Therefore, the functions of data centers could become distributed going forward. That is our expectation. And the batch processing could be done where there is surplus electricity, for example, data [indiscernible] when you have surplus solar power available. Then maybe there, you could do the learning for generative AI such as ChatGPT.
Source: Seeking Alpha