To conduct our analysis, we gathered data from five major cloud compute providers (Microsoft Azure, Amazon Web Services, Google Cloud Platform,
Scaleway Cloud, and OVH Cloud) about the price and nature of their AI-specific compute offerings, i.e. all instances that include GPUs.
For each instance, we recorded its characteristics: the type and number of GPUs and CPUs it contains, as well as its memory
and storage capacity. For each CPU and GPU model, we looked up its TDP (Thermal Design Power), i.e. its power consumption
under maximum theoretical load, which is an indicator of the operating expenses required to power it. For GPUs specifically, we also looked
at the manufacturer's suggested retail price (MSRP), i.e. how much that particular GPU model cost at the time of its launch, as an indicator
of the capital expenditure required for the compute provider to acquire the GPUs in the first place.
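The per-instance record described above can be sketched as a simple schema. This is a minimal illustration only: the field names are our own, and the example values (an AWS p4d-style instance) are assumptions for demonstration, not figures drawn from the dataset.

```python
from dataclasses import dataclass

@dataclass
class GPUInstance:
    """One cloud GPU instance offering (illustrative field names)."""
    provider: str        # cloud compute provider
    instance_name: str   # provider's instance identifier
    gpu_model: str
    gpu_count: int
    cpu_model: str
    cpu_count: int       # vCPU count as advertised by the provider
    memory_gb: float     # instance RAM
    storage_gb: float    # local storage capacity
    gpu_tdp_w: int       # thermal design power per GPU, watts (opex proxy)
    gpu_msrp_usd: float  # launch MSRP per GPU (capex proxy)

# Hypothetical example row (values are assumptions for illustration)
example = GPUInstance(
    provider="AWS", instance_name="p4d.24xlarge",
    gpu_model="A100", gpu_count=8,
    cpu_model="Xeon Platinum", cpu_count=96,
    memory_gb=1152, storage_gb=8000,
    gpu_tdp_w=400, gpu_msrp_usd=10000,
)

# Derived quantities used as cost indicators
total_gpu_tdp_w = example.gpu_count * example.gpu_tdp_w
total_gpu_msrp_usd = example.gpu_count * example.gpu_msrp_usd
```

Multiplying per-GPU TDP and MSRP by the GPU count gives instance-level proxies for operating and capital expenditure, respectively.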