ADMIN Magazine

Search


HPC Storage strace Snippet
15.02.2012
 
  32KB  < x < 128KB    64
  128KB < x < 256KB     0
  256KB < x < 512KB     2
  512KB < x < 1MB       3
  1MB   < x < 10MB     87
  10MB  < x < 100MB     0
  100MB < x < 1GB       …
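
A histogram like this can be distilled from an strace log with a few lines of awk. A minimal sketch, assuming a log captured with strace -e trace=write -o app.strace ./app; the file name, the focus on write() calls, and the bucket edges are illustrative assumptions, not details from the article:

  # Bucket the return value (bytes written) of each write() call.
  # strace lines look like: write(3, "..."..., 1048576) = 1048576
  awk '/^write\(/ && / = [0-9]+$/ {
      n = $NF
      if      (n < 32768)     next    # below the smallest bucket shown
      if      (n < 131072)    b["32KB-128KB"]++
      else if (n < 262144)    b["128KB-256KB"]++
      else if (n < 524288)    b["256KB-512KB"]++
      else if (n < 1048576)   b["512KB-1MB"]++
      else if (n < 10485760)  b["1MB-10MB"]++
      else if (n < 104857600) b["10MB-100MB"]++
      else                    b["100MB-1GB"]++
  }
  END { for (k in b) print k, b[k] }' app.strace
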
Rethinking RAID (on Linux)
16.03.2021
….4MBps and random reads 1.9MBps. The good news is that whereas random writes dropped a tiny bit to 1.2MBps (Listing 6), random reads increased to almost double the throughput, with a rate of 3.3MBps…
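
Throughput figures like these typically come from a small random-I/O test. A hedged sketch with fio, assuming a filesystem-backed test file under a hypothetical mountpoint /mnt/raid; all job parameters here are illustrative, not the article's actual benchmark:

  # Random-read test against a 1GB file; direct I/O bypasses the page cache
  fio --name=randread --directory=/mnt/raid --size=1g \
      --rw=randread --bs=4k --ioengine=libaio --iodepth=16 \
      --direct=1 --runtime=60 --time_based --group_reporting
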
Listing 3
21.08.2012
 
Listing 3: Installing ganglia-gmond into the Master Node

  [root@test1 RPMS]# yum install ganglia-gmond-3.4.0-1.el6.i686.rpm
  Loaded plugins: fastestmirror, refresh-packagekit, security
  Loading mirror …

Linux device mapper writecache
30.01.2020
  /dev/nvme0n1  lvm2 ---  <232.89g  <232.89g
  /dev/sdb      lvm2 ---    <6.37t    <6.37t

Next, I add both volumes into a new volume group labeled vg-cache:

  $ sudo vgcreate vg-cache /dev/nvme0n1 /dev…
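
The usual continuation is to create a slow data volume on the hard drive, a fast cache volume on the NVMe device, and bind them together. A minimal sketch, assuming hypothetical LV names slow and cache0 and an LVM2 release (2.03+) that supports --type writecache:

  $ sudo lvcreate -n slow -L 6T vg-cache /dev/sdb           # data LV on the slow disk
  $ sudo lvcreate -n cache0 -L 200G vg-cache /dev/nvme0n1   # cache LV on the NVMe drive
  $ sudo lvconvert --type writecache --cachevol cache0 vg-cache/slow
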
Warewulf Cluster Manager – Administration and Monitoring
21.08.2012
…I ensure, with the use of chkconfig, that Ganglia always starts when the master node boots:

  [root@test1 ganglia]# chkconfig --list | more
  NetworkManager  0:off 1:off 2:on 3:on 4:on 5:on 6:off
  acpid           …
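
Ensuring that is one chkconfig call per service. A short sketch; gmond and gmetad are the standard Ganglia service names, so adjust to the packages actually installed:

  [root@test1 ganglia]# chkconfig gmond on    # monitoring daemon, every node
  [root@test1 ganglia]# chkconfig gmetad on   # meta daemon, master node only
  [root@test1 ganglia]# chkconfig --list gmond
  gmond  0:off 1:off 2:on 3:on 4:on 5:on 6:off
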
Distributed storage with Sheepdog
07.10.2014
  ….snap
  s  ntestvm1.img  5  8.0 GB  292 MB  2.4 GB  2014-03-01 11:42  982a3a  2  mar.snap
  s  ntestvm1.img  6  8.0 GB  128 MB  2.6 GB  2014-03-10 19:48  982a3b  2  mar2.snap
     ntestvm1.img  0  8.0 GB  …
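
Rows flagged s are snapshots of the base VDI. A hedged sketch of how such a snapshot is taken with Sheepdog's dog client; the tag name is illustrative:

  $ dog vdi snapshot -s mar3.snap ntestvm1.img   # tag a new snapshot of the VDI
  $ dog vdi list                                 # the snapshot appears as an s row
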
Fundamentals of I/O benchmarking
11.04.2016
…(512 MB) copied, 49.1424 s, 10.4 MB/s

If you want to empty the read and write cache for benchmark purposes, you can do so using:

  sync; echo 3 > /proc/sys/vm/drop_caches

Sequential access…
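
Combining the two gives a repeatable cold-cache sequential-read measurement. A minimal sketch; the file name and sizes are illustrative:

  $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush dirty pages, drop caches
  $ dd if=testfile of=/dev/null bs=1M count=512        # reads now hit the disk, not the page cache
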
Linux Writecache
19.11.2019
 volume "/dev/sdb" successfully created. Then, I verify that the volumes have been appropriately labeled: $ sudo pvs   PV           VG Fmt  Attr PSize    PFree      /dev/nvme0n1    lvm2 ---  <232.89g <232.89g   /dev/sdb        lvm2 ---    <6
Building a virtual NVMe drive
25.03.2020
  …] [raid6] [raid5] [raid4] [raid10]
  md0 : active raid5 sdd1[5] sde1[4] sdc1[2] sdb1[1] nvme0n1p1[0](J)
        20508171264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU…
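
The (J) flag marks nvme0n1p1 as the array's write journal. A hedged sketch of creating such an array; member devices and chunk size follow the mdstat lines above, and mdadm 3.4+ is assumed for --write-journal:

  $ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 \
         --write-journal /dev/nvme0n1p1 \
         /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
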
Best practices for KVM on NUMA servers
20.05.2014
Viewing Server Topology

  # numactl --hardware
  available: 8 nodes (0-7)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9
  node 0 size: 16373 MB
  node 0 free: 15837 MB
  node 1 cpus: 10 11 12 13 14 15 16 17 18 19
  node 1 …
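
With the topology known, a guest or benchmark can be kept node-local so its memory allocations stay next to its CPUs. A minimal sketch; the workload name is illustrative:

  # Confine both CPU scheduling and memory allocation to NUMA node 0
  $ numactl --cpunodebind=0 --membind=0 ./workload
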

