AWS releases P2 instances for Amazon EC2, targeting artificial intelligence, HPC, and big data processing



Amazon Web Services (AWS), an Amazon.com company, announced availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud (Amazon EC2) designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are among the most powerful GPU instances available in the cloud.

P2 instances are designed for general-purpose GPU compute applications using CUDA and OpenCL. They can use the Amazon Linux AMI, pre-installed with deep learning frameworks such as Caffe and MXNet, or the NVIDIA AMI with the GPU driver and CUDA toolkit pre-installed for rapid onboarding.

P2 instances provide up to 16 NVIDIA K80 GPUs, 64 vCPUs, and 732 GiB of host memory, with a combined 192 GB of GPU memory, 40,000 parallel processing cores, 70 teraflops of single-precision floating point performance, and over 23 teraflops of double-precision floating point performance. P2 instances also offer GPUDirect (peer-to-peer GPU communication) capabilities for up to 16 GPUs, so that multiple GPUs can work together within a single host.

P2 instances can be clustered in a scale-out fashion with Amazon EC2 ENA-based Enhanced Networking, enabling high-performance, low-latency compute grids. P2 is ideal for distributed deep learning frameworks, such as MXNet, that scale out with near-perfect efficiency.

P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments.

To offer ideal performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single-precision floating point performance, over 23 teraflops of double-precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
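The aggregate figures above follow from the per-GPU specifications of the Tesla K80. As a rough sanity check (a sketch assuming NVIDIA's published K80 numbers: each K80 board carries two GK210 GPUs, so the instance's "16 GPUs" correspond to 8 K80 boards; the per-GPU boost-clock throughput values are assumptions, not from this article):

```python
GPUS = 16                  # GK210 GPUs in a p2.16xlarge (8 K80 boards)
MEM_PER_GPU_GB = 12        # GDDR5 memory per GK210
CORES_PER_GPU = 2496       # CUDA cores per GK210
SP_TFLOPS_PER_GPU = 4.37   # single precision with GPU Boost (assumed)
DP_TFLOPS_PER_GPU = 1.46   # double precision with GPU Boost (assumed)

total_mem = GPUS * MEM_PER_GPU_GB    # 192 GB, as quoted
total_cores = GPUS * CORES_PER_GPU   # 39,936, i.e. roughly "40,000 cores"
total_sp = GPUS * SP_TFLOPS_PER_GPU  # ~70 teraflops single precision
total_dp = GPUS * DP_TFLOPS_PER_GPU  # ~23 teraflops double precision

print(total_mem, total_cores, round(total_sp, 1), round(total_dp, 1))
```

The per-GPU numbers multiply out to the 192 GB, ~40,000-core, 70/23-teraflop aggregates the release quotes.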

“Two years ago, we launched G2 instances to support customers running graphics and compute-intensive applications,” said Matt Garman, Vice President, Amazon EC2. “Today, as customers embrace heavier GPU compute workloads such as artificial intelligence, high-performance computing, and big data processing, they need even higher GPU performance than what was previously available. P2 instances offer seven times the computational capacity for single precision floating point calculations and 60 times more for double precision floating point calculations than the largest G2 instance, providing the best performance for compute-intensive workloads such as financial simulations, energy exploration and scientific computing.”

MapD is a GPU database for interactive SQL querying and visualization of multi-billion record datasets.

“As the leader in GPU-powered databases and visual analytics applications, we are deeply invested in the emergence of large, cloud-based GPU instances and P2 is the most powerful we have seen,” said Todd Mostak, CEO and founder, MapD. “Our performance on Amazon EC2 P2 instances is exceptional. On a dollar-to-dollar basis across a set of standard SQL benchmarks, MapD is 78 times faster on Amazon EC2 P2 instances than CPU-based solutions. Furthermore, these speedups were seen over multi-billion row datasets, speaking directly to our ability to deliver performance at scale with these instances. With this launch, our customers can now query and visualize billions of rows of data within milliseconds while enjoying the flexibility, scalability and reliability they have come to expect from AWS.”

Sonus delivers secure, cloud-optimized solutions for real-time communications used by the world’s leading service providers and enterprises. “Real time communications are rapidly evolving, and they require transcoding between formats for use on multiple devices,” said Mykola Konrad, vice president for product management and marketing, Sonus. “GPUs are becoming more of a disruptor for transcoding services and they offer a cost effective solution for scaling our Session Border Controller application in the cloud. Because of our collaboration with AWS, Sonus has developed the industry’s first GPU optimized session border controller by leveraging the GPU parallel computing power and Enhanced Networking of Amazon EC2 P2 instances, which decreases network costs for our customers.”

Customers can launch P2 instances using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and third-party libraries. P2 instances are available in three instance sizes: p2.16xlarge with 16 GPUs, p2.8xlarge with 8 GPUs, and p2.xlarge with 1 GPU. P2 instances are available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.
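Launching via the AWS CLI looks roughly like the following (a minimal sketch; the AMI ID and key-pair name are placeholders, not values from this article, and must be replaced with your own):

```shell
# Launch a single-GPU p2.xlarge in US East (N. Virginia).
# ami-xxxxxxxx and my-key-pair are placeholders -- substitute your own.
aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-xxxxxxxx \
    --instance-type p2.xlarge \
    --key-name my-key-pair \
    --count 1
```

The same `--instance-type` flag accepts `p2.8xlarge` or `p2.16xlarge` for the larger sizes.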

Amazon Machine Images (AMIs) from AWS, NVIDIA, and other sellers are available in the AWS Marketplace to help customers get started within minutes. The AWS Deep Learning AMI comes preinstalled with MXNet and Caffe deep learning frameworks to enable customers to reduce model training time from weeks to hours. It also lets them experiment with artificial intelligence without making large upfront capital expenditures. The AMI from NVIDIA includes pre-installed drivers and the CUDA toolkit and is designed for developers working on a range of GPU-intensive workloads.


WWPI – Covering the best in IT since 1980