Admin Magazine
 

Search


13%
Living with multiple and many cores
14.03.2013
...was particularly effective in HPC because clusters were composed of single- or dual-processor (one- or two-core) nodes and a high-speed interconnect. The Message-Passing Interface (MPI) mapped efficiently onto...
13%
The History of Cluster HPC
15.02.2012
...paths might be worth exploring. In particular, the software issue is troubling. Most traditional HPC code uses MPI (Message Passing Interface) to communicate between cores. Although MPI will work...
12%
REMORA
18.09.2017
...CPU utilization, I/O usage (Lustre, DVS), NUMA properties, network topology, MPI communication statistics, power consumption, CPU temperatures, and detailed application timing. To capture...
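
REMORA wraps the command it monitors, so capturing these metrics is a one-line change to a job script. A minimal sketch, assuming a module-based installation and an application launched with mpirun (the module name, sampling interval, and rank count are assumptions):

# Load REMORA (the module name can differ per site)
module load remora

# Sample every 10 seconds instead of the default interval
export REMORA_PERIOD=10

# Prefix the normal launch line; REMORA collects its reports
# in a per-job output directory after the run completes
remora mpirun -np 16 ./my_mpi_app
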
12%
Using a Bash script to mirror external monitors
10.06.2015
Like many of my colleagues, I use my own laptop to play back presentations at conferences. My Dell Latitude E6430 works perfectly on Ubuntu. However, one critical problem remains: when I connect... If you use your Linux laptop for public presentations – or other tasks that require an external display – you are probably familiar with the problem of making your computer's display resolution fit...
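
The script in the article is built around xrandr. A rough sketch of the underlying idea, assuming output names LVDS1 and HDMI1 (run xrandr with no arguments to see the names on your machine):

# Show connected outputs and the modes they support
xrandr | grep " connected"

# Drive both displays at a resolution they share and
# mirror the external output onto the laptop panel
xrandr --output LVDS1 --mode 1024x768 \
       --output HDMI1 --mode 1024x768 --same-as LVDS1

The article's script automates the step this sketch hard-codes: picking a resolution that both displays actually support.
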
12%
Getting Started with HPC Clusters
05.06.2013
...The developers of Warewulf routinely use VMs on their laptops for development and testing, as do many developers, so it’s not an unusual choice. Once the cluster is configured, you can also run your MPI...
12%
HPCCM with Docker and Podman
09.09.2024
...(MPI) library. Moreover, I want to take the resulting Dockerfile that HPCCM creates and use Docker and Podman to build the final container image. Development Container: One of the better ways to use...
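
The workflow the excerpt describes reduces to generating the Dockerfile and feeding it to either engine. A minimal sketch, assuming a recipe file named recipe.py and an image tag of mpi-dev (both hypothetical):

# Turn the HPCCM recipe into a Dockerfile
hpccm --recipe recipe.py --format docker > Dockerfile

# The same Dockerfile builds with either container engine
docker build -t mpi-dev .
podman build -t mpi-dev .
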
12%
HDF5 and Parallel I/O
17.05.2017
...improve application performance and the ability to run larger problems. The great thing about HDF5 is that, behind the scenes, it is performing MPI-IO. A great deal of time has been spent designing...
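
Tapping that MPI-IO layer requires an HDF5 library built with parallel support. A minimal sketch of building and launching such a program, assuming the h5pcc compiler wrapper from a parallel HDF5 build and a source file named h5par.c (hypothetical):

# h5pcc wraps the MPI C compiler and adds the HDF5 flags
h5pcc h5par.c -o h5par

# Every rank opens the same file; HDF5 translates the
# dataset writes into MPI-IO calls underneath
mpirun -np 8 ./h5par
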
12%
Listing 6
21.08.2012
Listing 6: Torque Job Script

[laytonjb@test1 TEST]$ more pbs-test_001
#!/bin/bash
###
### Sample script for running MPI example for computing PI (Fortran 90 code)
###
### Jeff Layton
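
The excerpt stops after the header comments. For orientation, a minimal sketch of how a Torque job script of this kind typically continues; the resource requests and program name are assumptions, not the article's listing:

#!/bin/bash
### Hypothetical Torque job script for an MPI run
#PBS -N mpi_pi                 # job name
#PBS -l nodes=2:ppn=4          # two nodes, four cores each
#PBS -l walltime=00:10:00      # ten-minute limit

cd $PBS_O_WORKDIR              # start where qsub was invoked

# Torque lists the allocated cores in $PBS_NODEFILE;
# the flag spelling varies by MPI implementation
mpirun -np 8 -machinefile $PBS_NODEFILE ./pi
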
12%
Embarrassingly parallel computation
01.06.2024
...used the second example (mpiPI.c) to test the approach [7] and compiled with

mpicc mpiPI.c -o mpiPI -lm

Take the time to study the code in Listing 1 to understand its operation and the basics...
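
Once compiled, the binary is started through the MPI launcher; the rank count here is an arbitrary choice:

# Run the pi example on four MPI ranks
mpirun -np 4 ./mpiPI
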
11%
HPC Software Road Gets a Bit Smoother
10.09.2013
...domains. Assuming that your application is scalable or that you might want to tackle larger data sets, what are the options to move beyond OpenMP? In a single word, MPI (okay, it is an acronym). MPI...
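
The practical difference is visible at build and launch time. A hedged sketch contrasting the two models (the source file names are hypothetical, and each source must be written for its model):

# OpenMP: one process whose threads share memory on one node
gcc -fopenmp solver_omp.c -o solver_omp
OMP_NUM_THREADS=8 ./solver_omp

# MPI: independent processes that exchange messages and
# can span many nodes
mpicc solver_mpi.c -o solver_mpi
mpirun -np 8 ./solver_mpi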
