8% | 02.06.2025
… It was Beowulf. No, not the Scandinavian warrior, but an approach to high-performance computing (HPC) that uses common x86 processors, conventional Ethernet networking, the Message Passing Interface (MPI …
    
 
		    
				        
8% | 20.06.2012
… boot times.
Adding users to the compute nodes.
Adding a parallel shell tool, pdsh, to the master node.
Installing and configuring ntp (a key component for running MPI jobs).
These added …
    
 
		    
				        
8% | 05.04.2013
… is counterproductive. You are paying more and getting less. However, new workloads are being added to HPC all the time that might be very different from the classic MPI applications and have different …
    
 
		    
				        
8% | 12.08.2015
… in the name of better performance. Meanwhile, applications and tools have evolved to take advantage of the extra hardware, with applications using OpenMP to utilize the hardware on a single node or MPI to take …
    
 
		    
				        
8% | 15.12.2016
… implemented the HPF extensions, but others did not. While the compilers were being written, a Message Passing Interface (MPI) standard for passing data between processors, even if they weren’t on the same node …
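The snippet only names the standard, so a minimal, hedged sketch of MPI message passing in C may help; the file name, message value, and tag are illustrative choices, not details from the article.

/* Minimal MPI point-to-point sketch (illustrative only).
 * Build: mpicc -o send_recv send_recv.c
 * Run:   mpirun -np 2 ./send_recv
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                            /* data starts on rank 0 ...  */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value); /* ... and arrives on rank 1  */
    }

    MPI_Finalize();
    return 0;
}

The two ranks can sit on different nodes; MPI moves the data over the network, which is exactly the "passing data between processors, even if they weren't on the same node" role described above.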
    
 
		    
				        
8% | 21.03.2017
… and binary data, can be used by parallel applications (MPI), has a large number of language plugins, and is fairly easy to use.
In a previous article, I introduced HDF5, focusing on the concepts and strengths …
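As a rough illustration of the "fairly easy to use" claim, here is a minimal sketch that writes one dataset with the serial HDF5 C API; the file and dataset names are invented for the example, and parallel (MPI-IO based) access follows the same pattern through an MPI file access property list.

/* Minimal HDF5 write sketch (illustrative only, serial C API).
 * Build: h5cc -o write_h5 write_h5.c
 */
#include "hdf5.h"

int main(void)
{
    double  data[10];
    hsize_t dims[1] = {10};

    for (int i = 0; i < 10; i++)
        data[i] = (double)i;

    /* Create a file, a 1-D dataspace, and a double-precision dataset, then write. */
    hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/values", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}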
    
 
		    
				        
8% | 03.04.2019
… on, people integrated MPI (Message Passing Interface) with OpenMP for running code on distributed collections of SMP nodes (e.g., a cluster of four-core processors).
With the ever-increasing demand …
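For readers who have not seen the hybrid model the snippet refers to, the sketch below shows the usual MPI-plus-OpenMP pattern in C (one MPI rank per node, several OpenMP threads per rank); it is a generic illustration, not code from the article.

/* Hybrid MPI + OpenMP "hello" (illustrative only).
 * Build: mpicc -fopenmp -o hybrid hybrid.c
 * Run:   e.g., OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request threaded MPI; FUNNELED means only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}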
    
 
		    
				        
8% | 17.01.2023
… is more important than some people realize. For example, I have seen Message Passing Interface (MPI) applications that have failed because the clocks on two of the nodes were far out of sync.
Next, you …
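To make the clock-skew point concrete, here is a small, hedged sketch (not from the article) in which every rank reports its system clock and rank 0 prints the largest difference; in practice you would keep the clocks aligned with ntp or chrony rather than chase failures afterwards.

/* Report coarse clock skew across MPI ranks (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
    int rank, size;
    double now, *times = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    now = (double)time(NULL);            /* wall-clock seconds on this node */
    if (rank == 0)
        times = malloc(size * sizeof(double));

    MPI_Gather(&now, 1, MPI_DOUBLE, times, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        double min = times[0], max = times[0];
        for (int i = 1; i < size; i++) {
            if (times[i] < min) min = times[i];
            if (times[i] > max) max = times[i];
        }
        printf("max clock skew across %d ranks: %.0f s\n", size, max - min);
        free(times);
    }

    MPI_Finalize();
    return 0;
}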
    
 
		    
				        
8% | 26.02.2014
… ). In fact, that’s the subject of the next article.
The Author
Jeff Layton has been in the HPC business for almost 25 years (starting when he was 4 years old). He can be found lounging around at a nearby …
    
 
		    
				        
8% | 07.03.2019
… by thread, which is really useful if the code uses MPI, because MPI often uses extra threads. The second option lists the routines that use the most time first and the routines that use the least amount of time …