Admin Magazine
 
Search


81%
MPI Apps with Singularity and Docker
18.03.2020
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
...                 ...                 ...                 2 hours ago         9.83GB
...                 ...                 49cbd14ae32f        3 hours ago         269MB
ubuntu              18.04               72300a873c2c        3 weeks ago         64.2MB
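The excerpt shows docker images output from the article's container build. As a rough sketch of the workflow the title describes (the image URI, SIF file name, and program path below are placeholders, not taken from the article), a Docker image can be pulled into a Singularity image file and an MPI program launched inside it with the host's MPI:

  # Hypothetical sketch: image and program names are assumed; hybrid-mode
  # MPI also assumes a compatible MPI installation inside the container.
  $ singularity pull mpi_app.sif docker://ubuntu:18.04
  $ mpirun -np 4 singularity exec mpi_app.sif /opt/app/bin/mpi_hello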
80%
Go testing frameworks
05.08.2024
		os.Exit(1)
	}

	run(os.Args[1])
}

func row() {
	for i := 0; i < size; i++ {
		for j := 0; j < size; j++ {
			array[i][j]++
		}
	}
}
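A routine like row() would typically be exercised through Go's built-in testing framework; a minimal, assumed invocation (the package is presumed to provide Test*/Benchmark* functions in *_test.go files) is:

  # Standard 'go test' flags: -v verbose output, -bench runs benchmarks,
  # -benchmem adds allocation statistics.
  $ go test -v ./...
  $ go test -bench=. -benchmem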
80%
Data center networking with OpenSwitch
11.10.2016
Hub account and the Git [7] and Gerrit [8] tools. The project page [9] explains the individual steps. If you have dealt with Yocto [10] in the past, you should be able to find your way around pretty quickly
80%
Resource Management with Slurm
05.11.2018
One way to share HPC systems among several users is to use a software tool called a resource manager. Slurm, probably the most common job scheduler in use today, is open source, scalable, and easy ...

Default=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
ProctrackType=proctrack/cgroup
PluginDir=/usr/lib/slurm
ReturnToService=1
TaskPlugin=task/cgroup
# TIMERS ...
79%
Getting started with I/O profiling
30.11.2025
r/s: Number of read requests issued to the device per second.
w/s: Number of write requests issued to the device per second.
rMB/s: Number of megabytes read from the device per second.
wMB/s: Number of megabytes written to the device per second.
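These fields appear in iostat's extended, megabyte-scaled report; a plausible invocation (the one-second interval is chosen here for illustration) is:

  # -x extended statistics, -m values in megabytes, report every second
  $ iostat -xm 1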
79%
Resource Management with Slurm
13.12.2018
Listing 2: sinfo

$ sinfo -s
PARTITION AVAIL TIMELIMIT  NODES(A/I/O/T)  NODELIST
p100      up    infinite   4/9/3/16        node[212-213,215-218,220-229]

sbatch

To submit a batch serial ...
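Where the excerpt breaks off, a minimal serial batch script might look like the sketch below (job name, partition, and program are placeholders, not from the article):

  #!/bin/bash
  #SBATCH --job-name=serial_test   # placeholder job name
  #SBATCH --partition=p100         # partition shown in the sinfo output above
  #SBATCH --ntasks=1               # a serial job needs a single task
  #SBATCH --time=00:10:00          # wall-clock limit
  #SBATCH --output=%j.out          # %j expands to the job ID
  ./my_serial_app                  # placeholder program

It would be submitted with sbatch script.sh, after which squeue shows the queued job.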
79%
Monitor your nodes with collectl
30.11.2025
25-E SSD [9] courtesy of Intel is mounted as /dev/sdd
ext4 filesystem with the default options
Open MPI [10] v1.5.4
NAS Parallel Benchmarks 3.3.1-MPI [11]
Iozone [12]
Daemon ...
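For a disk-focused benchmark setup like this, collectl is usually pointed at the disk subsystem; an illustrative invocation (the flags are standard collectl options, chosen here as an assumption about the article's usage) is:

  # -sD detailed disk statistics, -oT timestamp each line, -i 1 sample every second
  $ collectl -sD -oT -i 1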
79%
Speed up your MySQL database
11.02.2016
| 2015-09-13 15:35:42 | 24 |  8 | Washing machine    | 41.1 C   |
| 2015-09-13 15:35:42 | 25 |  9 | Pot plant moisture | 74% rel. |
| 2015-09-13 15:35:42 | 26 | 10 | Refrigerator       | 6 ...    |
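The rows suggest a table of timestamped sensor readings; a hypothetical query that would produce output of this shape (the table and column names are assumptions, not from the article) is:

  $ mysql -u admin -p -e "SELECT updated, id, sensor, name, value
      FROM readings ORDER BY updated DESC LIMIT 3;" sensordb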
79%
Error-correcting code memory keeps single-bit errors at bay
14.11.2013
This translates to Google's experiencing about 25,000-75,000 correctable errors (CE) per billion device hours per megabit, which works out to 2,000-6,000 CE/GB-yr (or about 250-750 CE/Gb-yr). This is much higher
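The unit conversion is consistent: with 1 GB = 8,192 Mb and roughly 8,766 hours per year,

  25,000 CE per (10^9 device-hr x Mb) x 8,192 Mb/GB x 8,766 hr/yr ≈ 1,800 CE/GB-yr

at the low end, and about 5,400 CE/GB-yr at the high end; dividing by 8 gives the quoted CE/Gb-yr range.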
78%
ClusterHAT
10.07.2017
with the original Raspberry Pi Model A, ranging from two to more than 250 nodes. That early 32-bit system had a single core running at 700MHz with 256MB of memory. You can build a cluster of five RPi3 nodes with 20

