Admin Magazine
 


« Previous 1 2 3 4 5 6 7 8 9 10 11 12 13 ... 79 Next »

Server virtualization with Citrix XenServer
30.11.2025
the biggest rentable cloud today: the Amazon Web Service. On the basis of this extremely mature technology, Citrix launched version 5.6 of its XenServer product family in May 2010. XenServer, the product built ... Version 5.6 of Citrix XenServer is a feature-stripped version of the virtualization product and is available free, in addition to the commercial Advanced, Enterprise, and Platinum editions.
A simple approach to the OCFS2 cluster filesystem
30.11.2025
of development, the programmers released version 1.0 of OCFS2, and it made its way into the vanilla kernel (2.6.16) just a year later. Version 1.2 became more widespread, with a great deal of support from various ... The vanilla kernel includes two cluster filesystems: OCFS2 has been around since 2.6.16 and is thus senior to GFS2. Although OCFS2 is non-trivial under the hood, it is fairly simple to deploy.
Tuning I/O Patterns in C
31.07.2013
Code Example:

#include <stdio.h>

/* Our structure */
struct rec
{
    int x, y, z;
    float value;
};

int main()
{
    int counter;
    struct rec my
SDS configuration and performance
13.02.2017
10.6MBps. Ceph and Lizard presumably achieved a higher throughput here thanks to distribution over multiple servers. Figure 3: The result looks a little
Rethinking RAID (on Linux)
25.03.2021
a tiny bit to 1.2MBps (Listing 6), random reads increased to almost double the throughput with a rate of 3.3MBps (Listing 7).
Listing 6: Random Write to RAID
$ sudo fio --bs=4k --ioengine
vglibc
01.08.2012
Installing: glibc i686 2.12-1.47.el6_2.9 sl-security 4.3 M
Installing for dependencies: nss
glibc
01.08.2012
Installing: glibc-devel x86_64 2.12-1.47.el6_2.9 sl-security 966 k
Installing for dependencies: glibc
open64
01.08.2012
_2.3.4) for package: open64-5.0-0.x86_64
--> Running transaction check
---> Package glibc.i686 0:2.12-1.47.el6_2.9 will be installed
--> Processing Dependency: libfreebl3.so for package: glibc-2.12
Combining Directories on a Single Mountpoint
19.05.2014
FUSE kernel interface version 7.12 Because I’m interested in using SSHFS-MUX just as I would SSHFS, I’m not going to test the MUX features of SSHFS-MUX. SSHFS-MUX Performance on Linux As with SSHFS, I want to test
Nmon: All-Purpose Admin Tool
17.12.2014
read and write throughput and, for some reason, adds the throughput for sdb and sdb1 even though sdb only has one partition. As a result, nmon is reporting a total write throughput of 254.6MBps rather


© 2025 Linux New Media USA, LLC – Legal Notice