High Performance Computing

Advances in high performance computing (HPC) are essential to keep pace with the increasingly large and complex computational needs of our rapidly evolving society. For example, modern research programs often involve computationally demanding, detailed multi-level simulations, big data analyses, and large-scale computations.

HPC scientists devise computing solutions at the absolute limits of scale and speed. In this compelling field, technical knowledge and ingenuity combine to drive systems that use the largest number of processors at the fastest speeds with the least storage and energy. Operating at these scales pushes the limits of the underlying technologies, including the capabilities of the programming environment and the reliability of the system.

HPC researchers develop efficient, reliable, and fast algorithms, software, tools, and applications. The software that runs on these systems must be carefully constructed, balancing many competing factors to achieve the best performance for the specific computing challenge.
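
As a minimal sketch of the kind of balancing involved, the C code below uses OpenMP shared-memory parallelism to spread a vector update and a reduction across the cores of a single node. The problem size, loop schedule, and thread count are illustrative assumptions; on a real system each would be tuned to the target architecture and workload.

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const size_t n = 1 << 24;            /* illustrative problem size */
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;

        for (size_t i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

        double dot = 0.0;
        double t0 = omp_get_wtime();

        /* Threads split the loop iterations; the reduction clause combines
         * the per-thread partial sums without a data race. */
        #pragma omp parallel for reduction(+:dot) schedule(static)
        for (size_t i = 0; i < n; ++i) {
            y[i] += 3.0 * x[i];              /* axpy-style update */
            dot  += x[i] * y[i];
        }

        printf("dot = %.1f, time = %.3f s, threads = %d\n",
               dot, omp_get_wtime() - t0, omp_get_max_threads());
        free(x);
        free(y);
        return 0;
    }

Compiled with an OpenMP-enabled compiler (for example, cc -fopenmp) and run with a chosen thread count (for example, OMP_NUM_THREADS=8), the same loop can be profiled under different schedules and data layouts, which is the sort of tuning the paragraph above refers to.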

The field involves parallel processing on tightly integrated clusters consisting of hundreds to hundreds of thousands of processors, terabytes to petabytes of high-performance storage, and high-bandwidth, low-latency interconnects, with the largest systems consuming megawatts of power. A wide variety of architectures is used, differing in the composition and balance of cores (such as CPU-only, GPU-accelerated, and hybrid designs).
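
To make the cluster-scale programming model concrete, here is a small sketch of a distributed computation written in C against the standard MPI interface: each process (rank) computes a partial sum over its own slice of the problem, and the partial results are combined with a single reduction across the interconnect. The problem size and the choice of a harmonic sum are illustrative only.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank works on its own contiguous slice of the index range. */
        const long N = 1000000;              /* illustrative problem size */
        long chunk = N / size;
        long start = rank * chunk;
        long end   = (rank == size - 1) ? N : start + chunk;

        double local_sum = 0.0;
        for (long i = start; i < end; ++i)
            local_sum += 1.0 / (double)(i + 1);

        /* Combine the partial sums from all ranks onto rank 0. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("Harmonic sum over %ld terms: %f (computed on %d ranks)\n",
                   N, global_sum, size);

        MPI_Finalize();
        return 0;
    }

Launched with, for example, mpirun -np 64 ./sum, the same source runs unchanged on a laptop or across many nodes of a cluster; on production systems, the engineering effort goes into how the problem is decomposed, how much communication the reduction and other exchanges generate, and where data is placed relative to the cores that use it.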