Admin Magazine


Monitor Your Nodes with collectl
28.03.2012
I/O. But measuring CPU and memory usage is very important, maybe even at the detailed level. If the cluster is running MPI codes, then perhaps measuring the interconnect (x for brief mode and X for detailed mode
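
As a quick illustration of that syntax (the sampling interval here is an arbitrary choice, not from the article), collectl selects subsystems with -s, lowercase for brief output and uppercase for detailed output:

  collectl -sx        # brief interconnect statistics
  collectl -sX -i 5   # detailed interconnect statistics, sampled every 5 seconds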

Moving Your Data – It’s Not Always Pleasant
08.05.2013
and many nodes, so why not use these cores to copy data? There is a project to do just this: dcp is a simple code that uses MPI and a library called libcircle to copy a file. This sounds exactly like what
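
As a usage sketch (the paths and process count here are hypothetical), dcp is launched like any other MPI program, so the copy work is spread across the ranks:

  mpirun -np 8 dcp /source/dir /destination/dir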

Stat-like Tools for Admins
19.11.2014
"Top-like tools for admins" by Jeff Layton, ADMIN, issue 23, pg. 86, http://www.admin-magazine.com/Archive/2014/23/Top-like-tools-for-admins; vmstat: http://en.wikipedia.org/wiki/Vmstat; vmstat man

Top Three HPC Roadblocks
12.01.2012
not move the overall market forward. For example, a user has many choices to express parallelism. They can use MPI on a cluster, OpenMP on an SMP, CUDA/OpenCL on a GPU-assisted CPU, or any combination

What's Ahead for OpenMP?
05.12.2011
; and FAQs. With that, we will try to knock down some of the myths people hold about OpenMP when compared with various other models, such as MPI, Cilk, or AMP. Somehow, we are the favorite comparison for all

Resource Management with Slurm
05.11.2018
Machine=slurm-ctrl
#
SlurmUser=slurm
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
StateSaveLocation=/var/spool/slurm/ctld
SlurmdSpoolDir=/var/spool/slurm/d
SwitchType=switch/none
Mpi
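
As a usage sketch against a configuration like this (the job name, node counts, and binary are hypothetical), an MPI job would be submitted with a short batch script:

  #!/bin/bash
  #SBATCH --job-name=mpi_test
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=4

  srun ./mpi_app

queued with sbatch mpi_test.sh.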

Determining CPU Utilization
25.02.2016
is causing the core to become idle. CPU utilization could drop while waiting on I/O (reads or writes) or because of network traffic from one node to another (possibly MPI communication
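
One quick way to spot this (assuming the sysstat package is installed) is to watch the %iowait column per core while the job runs:

  mpstat -P ALL 1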

The Resurrection of bWatch
02.06.2025
It was Beowulf. No, not the Scandinavian warrior, but an approach to high-performance computing (HPC) that uses common x86 processors, conventional Ethernet networking, the Message Passing Interface (MPI

Warewulf Cluster Manager – Completing the Environment
20.06.2012
boot times. Adding users to the compute nodes. Adding a parallel shell tool, pdsh, to the master node. Installing and configuring ntp (a key component for running MPI jobs). These added
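
As a small example of the parallel shell (the node list is hypothetical), pdsh runs one command across all compute nodes at once, which is also a handy way to confirm their clocks agree once ntp is running:

  pdsh -w n[0001-0004] date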

The Cloud’s Role in HPC
05.04.2013
is counterproductive. You are paying more and getting less. However, new workloads are being added to HPC all the time that might be very different from the classic MPI applications and have different
