AWS launches EC2 Capacity Blocks for short-term GPU compute

Amazon Web Services (AWS) has announced the general availability of EC2 Capacity Blocks for machine learning (ML), enabling customers to reserve GPU capacity for short-duration ML projects.

EC2 Capacity Blocks can be used with P5 instances, which are powered by Nvidia H100 Tensor Core GPUs and deployed in EC2 UltraClusters interconnected with second-generation Elastic Fabric Adapter (EFA) networking.

Customers can schedule their EC2 Capacity Blocks up to eight weeks in advance for a duration of one to 14 days, in cluster sizes ranging from one to 64 instances.
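In practice, a reservation is found and purchased through the EC2 API. The sketch below uses boto3, the AWS SDK for Python; the instance count, duration, and date window are illustrative values, and the request and response fields shown should be checked against the current EC2 API reference.

```python
"""Sketch: find and purchase an EC2 Capacity Block with boto3."""
import datetime

import boto3

# Capacity Blocks are initially offered in US East (Ohio), i.e. us-east-2.
ec2 = boto3.client("ec2", region_name="us-east-2")

# Look for offerings of 4 x p5.48xlarge for 2 days (48 hours), starting
# anywhere in the next eight weeks. All values here are illustrative.
now = datetime.datetime.now(datetime.timezone.utc)
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    StartDateRange=now,
    EndDateRange=now + datetime.timedelta(weeks=8),
    CapacityDurationHours=48,  # durations are whole days, 1 to 14
)["CapacityBlockOfferings"]

if offerings:
    # Purchase the first matching offering as an upfront reservation.
    result = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
    print(result["CapacityReservation"]["CapacityReservationId"])
else:
    print("No Capacity Block offerings matched the request.")
```

Once the reserved window begins, P5 instances can be launched into the block by targeting the returned capacity reservation ID.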

The price of an EC2 Capacity Block depends on the available supply of, and demand for, Capacity Blocks at the time the reservation is purchased; the operating system price is billed at per-second granularity.

EC2 Capacity Blocks are initially available in the AWS US East (Ohio) Region, with expansion to additional AWS Regions and Local Zones planned.

[Image courtesy: AWS]
