Broadly speaking, there are two kinds of practitioners in the scientific computing world: those who produce data and those who consume it. The former have models and generate data from those models, a process known as ‘simulation’; the latter have data and infer models from it (‘analytics’).
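As a toy illustration of the two directions, the short Python sketch below first simulates observations from a known model and then infers that model back from the data. The linear model, noise level, and NumPy-based fit are illustrative assumptions, not something prescribed by the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulation: a known model generates data ---
# Hypothetical model: y = 2.5 * x + noise
x = np.linspace(0.0, 10.0, 200)
y = 2.5 * x + rng.normal(scale=0.5, size=x.shape)

# --- Analytics: infer the model back from the data ---
# Fit a straight line through the simulated observations.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"recovered model: y ~ {slope:.2f} * x + {intercept:.2f}")
```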
Simulations often require large amounts of computation, so they are typically run on generic High-Performance Computing (HPC) infrastructures: clusters of powerful high-end machines linked by high-bandwidth, low-latency networks. Such a cluster is often augmented with hardware accelerators (co-processors such as GPUs or FPGAs) and a large, fast parallel filesystem, all set up and tuned by systems administrators. By contrast, analytics focuses on the storage and access of data, so it is usually performed on a Big Data infrastructure suited to the problem at hand. These infrastructures offer specialised data stores and are often provisioned in a more or less self-service way on a public or private ‘Cloud’, typically built on top of ‘commodity’ hardware.
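To make the simulation side concrete, here is a hedged sketch of how such a tightly coupled job might be handed to an HPC cluster. It assumes a Slurm batch scheduler; the node counts, GPU request, and the `./simulate` executable are purely hypothetical.

```python
import subprocess
from pathlib import Path

# Hypothetical batch script for a tightly coupled MPI simulation on a
# Slurm-managed cluster (scheduler, resource requests, and the simulate
# binary are assumptions for illustration only).
job_script = """#!/bin/bash
#SBATCH --job-name=sim
#SBATCH --nodes=16
#SBATCH --ntasks-per-node=32
#SBATCH --gres=gpu:4
#SBATCH --time=02:00:00
srun ./simulate --input params.cfg
"""

Path("sim.sbatch").write_text(job_script)
# Hand the job to the scheduler; it allocates nodes and runs the MPI ranks.
subprocess.run(["sbatch", "sim.sbatch"], check=True)
```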
Outline
- Data on the Evolution of Interests and Communities
- More on the Evolution of Interests and Communities
- MLaroundHPDC/HPC and MLAutotuning
- Learning Model Details and Agent-Based Simulations
- Challenges and Opportunities, Conclusions
Big Data analysis can be extremely time- and resource-consuming. Luckily, there is a solution that is no stranger to complex analysis and data evaluation: High-Performance Computing (HPC). In this course you’ll learn about HPC in a Big Data world.
Big Data requires the right HPC infrastructure and resources to support the high-performance data analytics that power artificial intelligence applications. Traditional enterprise IT technology can’t handle the complex and time-critical workloads that these applications require.
Deployed on premises, at the edge, or in the cloud, HPC solutions are used for a variety of purposes across many industries. In research labs, for example, HPC helps scientists find sources of renewable energy, understand the evolution of the universe, predict and track storms, and create new materials.
Cluster computing, the basis of most high-performance computing frameworks, is a form of computing in which a group of computers (often called nodes) is connected through a LAN (local area network) so that they behave like a single machine.
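A minimal sketch of that idea, assuming mpi4py is available and the script is launched across the nodes with mpirun or srun: each process works on its own slice of a computation, and the partial results are combined as if the cluster were one machine.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of cooperating processes

# Each process sums its own disjoint slice of the range...
local_sum = sum(range(rank, 1_000_000, size))

# ...and the partial sums are combined on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"total computed by {size} processes: {total}")
```

Launched with, say, `mpirun -n 64 python sum.py`, the same script spans however many nodes the cluster provides, which is exactly the “single machine” illusion described above.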