HPC - Admin Magazine

Search results (articles)

Profiling Is the Key to Survival (19.12.2012)
…’t cover it here. MPI Profiling and Tracing: For HPC, it’s appropriate to discuss how to profile and trace MPI (Message-Passing Interface) applications. A number of MPI profiling tools are available …
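Many such MPI profilers rely on MPI's standard profiling interface (PMPI): every MPI routine also exists under a PMPI_ name, so a tool can supply its own wrapper, do its bookkeeping, and then forward to the real call. A minimal sketch in C (not from the article; the counter and the report in MPI_Finalize are purely illustrative):

    #include <mpi.h>
    #include <stdio.h>

    static long send_count = 0;   /* illustrative counter kept by the "profiler" */

    /* Intercept MPI_Send, then forward to the real implementation via PMPI_Send. */
    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        send_count++;
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

    /* Wrap MPI_Finalize as well, to report per-rank totals at shutdown. */
    int MPI_Finalize(void)
    {
        printf("MPI_Send was called %ld times\n", send_count);
        return PMPI_Finalize();
    }

Linked ahead of the MPI library (or compiled directly into the application), these wrappers are picked up instead of the library's symbols, which is exactly how most tracing tools hook in without touching application source.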
Exploring the HPC Toolbox (04.11.2011)
… of parallel programming. It is always worthwhile to check whether a useful piece of software already exists for the problem. If a program needs the Message Passing Interface (MPI), or is at least capable …
Gathering Data on Environment Modules (10.09.2012)
… compilers/gcc/4.4.6  module-info  mpi/openmpi/1.6-open64-5.0  compilers/open64/5.0  modules  null  dot  mpi/mpich2/1.5b1-gcc-4.4.6  use …
Darshan I/O Analysis for Deep Learning Frameworks (18.08.2021)
… part, darshan-util, postprocesses the data. Darshan gathers its data either by compile-time wrappers or dynamic library preloading. For Message Passing Interface (MPI) applications, you can use …
hwloc: Which Processor Is Running Your Service? (07.11.2011)
… a sub-project of the larger Open MPI community [2], is a set of command-line tools and API functions that allows system administrators and C programmers to examine the NUMA topology and to provide details about each processor …
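On the API side, a short C sketch of the kind of query hwloc supports (the program itself is just an illustration; build with -lhwloc):

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;

        hwloc_topology_init(&topo);   /* create an empty topology object */
        hwloc_topology_load(topo);    /* discover the machine we run on  */

        int cores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
        int pus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
        printf("%d cores, %d hardware threads\n", cores, pus);

        hwloc_topology_destroy(topo);
        return 0;
    }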
Benchmarks Don’t Have to Be Evil (12.03.2015)
… definitely stress the processor(s) and memory, especially the bandwidth. I would recommend running single-core tests and tests that use all of the cores (i.e., MPI or OpenMP). A number of benchmarks …
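A toy example of the second kind of test, in the spirit of the snippet rather than taken from the article: a STREAM-like triad loop run across all cores with OpenMP, which mainly stresses memory bandwidth (compile with -fopenmp):

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 20000000L   /* ~160 MB per array; large enough to defeat caches */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];     /* triad: read b and c, write a */
        double t1 = omp_get_wtime();

        /* Three arrays of doubles move through memory once each. */
        printf("triad bandwidth: %.1f GB/s\n",
               3.0 * N * sizeof(double) / (t1 - t0) / 1e9);

        free(a); free(b); free(c);
        return 0;
    }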
Julia: A New Language For Technical Computing (31.05.2012)
… Applying these lessons to HPC, you might ask, “how do I tinker with HPC?” The answer is far from simple. In terms of hardware, a few PCs, an Ethernet switch, and MPI get you a small cluster; or, a video card …
Is Hadoop the New HPC? (23.04.2013)
… on a separate computer. The results can be combined when the job is finished because the map step has no dependencies. The popular mpiBLAST tool takes the same approach by breaking the human genome file …
Parallel Julia – Jumping Right In (29.06.2012)
… standard “MPI is still great” disclaimer. Higher-level languages often try to hide the details of low-level parallel communication. With this “feature” comes some loss of efficiency, similar to writing …
Introduction to HDF5 (22.02.2017)
… to build the HDF5 libraries since they will require an MPI library with MPI-IO support. MPI-IO is a low-level interface for carrying out parallel I/O. It gives you a great deal of flexibility but also …
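For a flavor of that low-level interface, here is a minimal MPI-IO sketch (not from the article; the filename and data are placeholders): every rank writes one integer at its own offset in a shared file.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Each rank writes its rank number at a disjoint offset. */
        int value = rank;
        MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(int)),
                          &value, 1, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Libraries such as parallel HDF5 build on exactly these calls, trading some of this flexibility for a much more convenient data model.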

