HPC - Admin Magazine

Search results – 126 matches: Article (122), News (4)


Oak Ridge has a New Gigantic Supercomputer in the Works (News, 19.11.2014)
… performance without having to scale to hundreds or thousands of Message Passing Interface (MPI) tasks.” ORNL says it will use the Summit system to study combustion science, climate change, energy storage …

Top Three HPC Roadblocks (Article, 12.01.2012)
… Even a quad-core desktop or laptop can present a formidable parallel programming challenge. Throughout their long history, parallel programming tools and languages seem to have been troubled by a lack of progress. Just …

Determining CPU Utilization (Article, 25.02.2016)
… one class to the next) was used on a laptop with 8GB of memory using two cores (OMP_NUM_THREADS=2). Initial tests showed that the application finished in a bit less than 60 seconds. With an interval …

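The excerpt above describes timing a run on two cores by setting OMP_NUM_THREADS=2. As a rough illustration only (the article's actual application is not shown here), a minimal OpenMP program timed the same way might look like this sketch; the array size and loop body are hypothetical stand-ins:

/* utilization_demo.c -- minimal OpenMP timing sketch (not the article's code).
   Build:  gcc -fopenmp -O2 utilization_demo.c -o utilization_demo
   Run:    OMP_NUM_THREADS=2 ./utilization_demo                          */
#include <stdio.h>
#include <omp.h>

#define N 50000000                 /* hypothetical problem size */
static double a[N];

int main(void)
{
    double start = omp_get_wtime();        /* wall-clock timer */

    /* Work is split across however many threads OMP_NUM_THREADS requests. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = (double)i * 0.5;

    printf("threads=%d  elapsed=%.3f s\n",
           omp_get_max_threads(), omp_get_wtime() - start);
    return 0;
}

Controlling the thread count through the environment variable, rather than hard-coding it, is what makes the kind of before/after timing comparison described in the excerpt easy to script.
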
Interview with Gregory Kurtzer, Developer of Singularity (Article, 21.04.2016)
… At present, several dependency solvers have been developed, but Singularity already knows how to deal with linked libraries, script interpreters, Perl, Python, R, and OpenMPI. An example of this can be seen …

Why Good Applications Don’t Scale (Article, 13.10.2020)
… of programming. As an example, assume an application is using the Message Passing Interface (MPI) library to parallelize code. The first process in an MPI application is the rank 0 process, which handles any I …

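To make the rank 0 pattern from that excerpt concrete, here is a minimal, hypothetical MPI sketch (not code from the article) in which rank 0 alone performs the "I/O" and then broadcasts the result to the other ranks:

/* rank0_io.c -- illustrative MPI sketch of the rank 0 I/O pattern.
   Build:  mpicc rank0_io.c -o rank0_io
   Run:    mpirun -np 4 ./rank0_io                                       */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Only rank 0 touches files or stdout; here it simply
           fabricates a value instead of reading one from disk. */
        value = 42;
        printf("rank 0 of %d: 'read' input value %d\n", size, value);
    }

    /* Hand the value read by rank 0 to every other rank. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has %d\n", rank, value);

    MPI_Finalize();
    return 0;
}

Funneling input and output through rank 0 keeps file access simple, but, as the article's title suggests, it can also become a serial bottleneck as the number of ranks grows.
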
Parallel Programming with OpenMP (Article, 21.11.2012)
… it’s very easy to get laptops with at least two, if not four, cores. Desktops can easily have eight cores with lots of memory. You can also get x86 servers with 64 cores that access all of the memory …

Modern Fortran – Part 3 (Article, 25.01.2017)
… -dimensional array from one-dimensional arrays. The use of coarrays can be thought of as the opposite of the way distributed arrays are used in MPI. With MPI applications, each rank or process has a local array; then …

Parallel Versions of Familiar Serial Tools (Article, 28.08.2013)
… with libgpg-error 1.7. MPI library (optional but required for multinode MPI support). Tested with SGI Message-Passing Toolkit 1.25/1.26 but presumably any MPI library should work. Because these tools …

Container Best Practices (Article, 22.01.2020)
… provides the security of running containers as a user rather than as root. It also works well with parallel filesystems, InfiniBand, and Message Passing Interface (MPI) libraries, something that Docker has …

StackIQ Offers Enterprise HPC Product (News, 24.11.2012)
… + command-line interface. It includes updates to many modules, including the HPC Roll (which contains a preconfigured OpenMPI environment), as well as the Intel, Dell, Univa Grid Engine, Moab, Mellanox, Open …

