
9/10 - Distributed Memory and Parallel Computing
About this listen
In this episode, Marc Hartung and I discuss distributed memory in parallel computing, using tools like OpenMPI. We also cover some of the hardware aspects of HPC systems and how shared-memory and distributed-memory computations differ.
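To make the shared vs. distributed distinction concrete, here is a minimal sketch (my own, not from the episode) of a distributed-memory exchange with MPI in C. Each rank has its own private address space, so rank 0 must explicitly send a value that rank 1 receives; under a shared-memory model like OpenMP, both threads could simply read the same variable.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    int value = 0;
    if (rank == 0) {
        /* Rank 0 owns this data; no other rank can see it until it is sent. */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 has its own copy of 'value' and must receive the update as a message. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

With OpenMPI installed, this builds and runs with something like: mpicc send_recv.c -o send_recv && mpirun -np 2 ./send_recv (the file name is just an example).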
Links:
- https://www.open-mpi.org OpenMPI homepage
- https://docs.open-mpi.org/ the docs for OpenMPI
- https://www.mpi-forum.org The MPI Forum (who write the MPI standard)
- http://openshmem.org/site/ OpenSHMEM
- https://en.wikipedia.org/wiki/Distributed_memory summary page on distributed memory
- https://en.wikipedia.org/wiki/InfiniBand InfiniBand network solution
- https://www.nextplatform.com/2022/01/31/crays-slingshot-interconnect-is-at-the-heart-of-hpes-hpc-and-ai-ambitions/ Slingshot network solution
- https://en.wikipedia.org/wiki/Partitioned_global_address_space
- https://www.techtarget.com/whatis/definition/von-Neumann-bottleneck the bottleneck named after John von Neumann
- https://en.wikipedia.org/wiki/Floating_point_operations_per_second overview of FLOPS (floating point operations per second)
- https://www.openmp.org/wp-content/uploads/HybridPP_Slides.pdf OpenMP and OpenMPI working together in a hybrid solution (a rough sketch of this hybrid model follows the link list)
- https://blogs.fau.de/hager/hpc-book Georg Hager/Gerhard Wellein book on HPC for scientists and engineers
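As a rough sketch of the hybrid model from the slide deck linked above (my own example, assuming one MPI rank per node with OpenMP threads inside each rank): MPI handles the distributed memory between nodes, while OpenMP handles the shared memory within a node.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Ask MPI for thread support; FUNNELED means only the main thread makes MPI calls. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Distributed memory between ranks, shared memory between the threads of a rank. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

Built with something like mpicc -fopenmp hybrid.c -o hybrid and launched with OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid, this prints one line per thread on each rank.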
Don't be shy - say Hi
This podcast is brought to you by the Advanced Research Computing Centre at University College London, UK.
Producer and Host: Peter Schmidt