Admin Magazine
 

100%
Getting started with I/O profiling
30.11.2025
Home »  Archive  »  2012  »  Issue 08: FreeNAS  » 
Jeff Layton ... of the compute nodes that are running a particular application, perhaps as part of a job script. When the MPI job runs, you will get an output file for each node, but before collecting them together, be sure you
92%
openlava – Hot resource manager
31.10.2025
Home »  Archive  »  2012  »  Issue 12: NAS S...  » 
Jeff Layton ... . The output (if any) follows: host name: n0001 host name: n0001 PS: Read file for stderr output of this job. One final test – running a simple MPI program – computes the value of pi (see the job script
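The final test mentioned in this snippet is the classic demo that approximates pi by numerical integration; the MPI version splits the loop across ranks and combines the partial sums. A serial Python sketch of the same computation (a hypothetical illustration, not the article's job script):

```python
# Approximate pi by midpoint integration of 4/(1+x^2) on [0, 1].
# An MPI version would assign each rank a stride of the i-loop and
# combine partial sums with a reduction.

def compute_pi(n=100_000):
    """Midpoint-rule approximation of pi with n subintervals."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = h * (i + 0.5)            # midpoint of the i-th subinterval
        total += 4.0 / (1.0 + x * x)
    return h * total

print(compute_pi())
```

With 100,000 subintervals the midpoint rule is accurate to well under 1e-8, which is why this tiny program is a popular smoke test for a freshly configured cluster.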
92%
Monitor your nodes with collectl
30.11.2025
Home »  Archive  »  2012  »  Issue 09: Windo...  » 
Jeff Layton ... . If the cluster is running MPI codes, then perhaps measuring the interconnect (x in brief mode and X in detailed mode) is important. This could also include Lustre [7] if you are using it in your cluster, as well
90%
Introduction to OpenMP programming
31.10.2025
Home »  Archive  »  2012  »  Issue 12: NAS S...  » 
Jeff Layton ... the similar routines: "omp_get_wtime ( )" in OpenMP, "MPI_Wtime ( )" in MPI, and "tic" and "toc" in MATLAB. Licensing: This code is distributed
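The timing routines named in the snippet above (omp_get_wtime, MPI_Wtime, MATLAB's tic/toc) all return wall-clock time around a region of work; the same pattern in plain Python uses time.perf_counter(). A hypothetical illustration, not code from the article:

```python
import time

# Wall-clock timing of a work region, analogous to calling
# omp_get_wtime()/MPI_Wtime() before and after the loop.
t0 = time.perf_counter()                 # "tic"
total = sum(i * i for i in range(100_000))
elapsed = time.perf_counter() - t0       # "toc": elapsed seconds

print(f"sum={total}, elapsed={elapsed:.6f} s")
```

perf_counter() is the right clock for this job because it is monotonic; time.time() can jump if the system clock is adjusted mid-measurement.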
19%
Fishing with Remora
09.12.2025
Home »  Articles  » 
 
a specific step. This arrangement works like an MPI barrier function (MPI_Barrier ). Although collecting these timings gives you useful information, they won’t reveal which parallel process finishes last
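The MPI_Barrier behavior described in this snippet (no process proceeds until every process has arrived) can be sketched with Python's threading.Barrier, with threads standing in for MPI ranks. A hypothetical illustration, not code from the article:

```python
import threading

N = 3                                   # stand-ins for N MPI ranks
barrier = threading.Barrier(N)
arrived = []
results = []

def worker(rank):
    arrived.append(rank)                # work done before the barrier
    barrier.wait()                      # block until all N workers arrive
    results.append(rank)                # runs only after everyone arrived

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every worker passed the barrier, so both lists contain all ranks.
print(sorted(arrived), sorted(results))
```

As the article notes, the barrier tells you when the slowest step finished, but not which rank was slowest; per-rank timestamps taken before the wait are needed for that.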
18%
Julia: A new language for technical computing
30.11.2025
Home »  Archive  »  2012  »  Issue 10: Traff...  » 
... Applying these lessons to HPC, you might ask, "How do I tinker with HPC?" The answer is far from simple. In terms of hardware, a few PCs, an Ethernet switch, and MPI get you a small cluster; or, a video card
17%
Top three HPC roadblocks
30.11.2025
Home »  Archive  »  2012  »  Issue 08: FreeNAS  » 
MPI on a cluster, OpenMP on an SMP, CUDA/OpenCL on a GPU-assisted CPU, or any combination thereof. These choices have far-reaching economic and performance consequences. Those commercial software
17%
Monitoring Tools for Admins
04.11.2025
Home »  Articles  » 
... data virtualization service (DVS), nonuniform memory access (NUMA) properties, network topology, message passing interface (MPI) communication statistics, power consumption, and CPU temperatures
17%
Avoiding common mistakes in high-performance computing
30.11.2025
Home »  Archive  »  2012  »  Issue 09: Windo...  » 
... (i.e., MPI) and a batch scheduler. This arrangement offers a quick victory for the administrator but could cause serious upgrade issues and downtime in the future. For instance, upgrading to a new

© 2025 Linux New Media USA, LLC – Legal Notice