Admin Magazine
 

Search



26%
Server virtualization with Citrix XenServer
04.08.2011
Depending on the feature set, prices range from US$ 1,000 to US$ 5,000, including 12 months of free upgrades, news, and information. For newcomers to the world of server virtualization, Citrix
26%
VAX emulation with OpenVMS
18.07.2013
set rq2 ra92
set rq3 cdrom
attach rq0 d0.dsk
attach rq1 d1.dsk
attach rq2 d2.dsk
attach -r rq3 cdrom.iso
set rl disable
set ts disable
set xq mac=08-00-2B-AA-BB-CC
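The fragment is from a SIMH configuration script that defines the emulated VAX's MSCP disks, CD-ROM, and Ethernet MAC address. As a rough usage sketch, assuming the excerpt is saved in a startup script named vax.ini (the file name and paths are assumptions) and the OpenVMS disk images exist, the simulator would be started and the installed system booted like this:

$ ./vax vax.ini        # start the SIMH VAX simulator with the configuration script (binary and file name assumed)
sim> boot cpu          # boot OpenVMS from the attached system disk (rq0)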
26%
Review: Accelerator card by OCZ for ESX server
16.05.2013
on the iSCSI network, reaching a total of 500MBps. At 500MBps, the going would start to get tough, even for SATA 3.0 (and even older versions running at 150 and 300MBps would have long since given up
26%
Resource Management with Slurm
05.11.2018
# for your environment.
#
#
# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
ClusterName=compute-cluster
ControlMachine=slurm-ctrl
26%
Resource Management with Slurm
13.12.2018
# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
ClusterName=compute-cluster
ControlMachine=slurm-ctrl
#
SlurmUser=slurm
Slurmctld...
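The listing is the beginning of a slurm.conf generated with configurator.html. A hedged sketch of how such a file typically continues on a small cluster; everything beyond the lines shown in the excerpt (ports, paths, node and partition names) is an assumption rather than the article's configuration:

ClusterName=compute-cluster
ControlMachine=slurm-ctrl            # recent Slurm releases use SlurmctldHost instead
SlurmUser=slurm
AuthType=auth/munge                  # MUNGE provides cluster-wide authentication
SlurmctldPort=6817                   # default slurmctld port
SlurmdPort=6818                      # default slurmd port
StateSaveLocation=/var/spool/slurmctld
SlurmdSpoolDir=/var/spool/slurmd
SchedulerType=sched/backfill
SelectType=select/cons_tres
# Node and partition definitions (names and sizes assumed)
NodeName=node[01-04] CPUs=16 RealMemory=64000 State=UNKNOWN
PartitionName=normal Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP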
26%
Maintaining Android in the enterprise
21.08.2014
* daemon started successfully *
List of devices attached
015d8bed0d3c0814    device

If you use the commands from the SDK regularly, it makes sense to add its path, preferably like
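The device list comes from adb, which ships in the Android SDK's platform-tools directory. A brief sketch of what the truncated sentence is getting at; the SDK install location below is an assumption:

$ adb devices                          # list Android devices known to the adb daemon
List of devices attached
015d8bed0d3c0814        device

# Add platform-tools to the shell's search path, e.g. in ~/.bashrc (install path assumed)
export PATH=$PATH:$HOME/android-sdk/platform-tools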
26%
Warewulf 4 – Time and Resource Management
17.01.2023
 Package                    Arch     Version             Repository   Size
 ...                        ...      ...5.13-2.el8       appstream    29 k
 numactl-libs               x86_64   2.0.12-13.el8       baseos       35 k
 ohpc-filesystem            noarch   2.6-2.3.ohpc.2      ...          ...
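The package list is part of a dnf transaction pulling Slurm and its dependencies from the OpenHPC and distribution repositories. A hedged sketch of the kind of command that produces such output; the meta-package name comes from the OpenHPC 2.x install recipes and is an assumption with respect to this article:

$ sudo dnf -y install ohpc-slurm-server    # Slurm controller stack on the Warewulf head node (package name assumed)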
26%
Building a HPC cluster with Warewulf 4
04.04.2023
 Package                    Arch     Version             Repository   Size
 ...                        ...      ...5.13-2.el8       appstream    29 k
 numactl-libs               x86_64   2.0.12-13.el8       baseos       35 k
 ohpc-filesystem            noarch   2.6-2.3.ohpc.2      ...          ...
26%
Kubernetes k3s lightweight distro
25.03.2020
, according to the README file, requires "half the memory, all in a binary less than 40MB" to run. By design, it is authored with a healthy degree of foresight by the people at Rancher [3]. The GitHub page [4
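For context, the k3s project documents a single-command installer that sets up the server and bundles kubectl in the same binary; the commands below are the project's standard installation steps, not anything shown in this excerpt:

$ curl -sfL https://get.k3s.io | sh -      # install and start k3s as a single-node server
$ sudo k3s kubectl get nodes               # verify the node with the bundled kubectl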
26%
Tuning SSD RAID for optimal performance
09.08.2015
,000 IOPS mark. The latency test illustrates the differences between read and write access. About 0.12ms is added going from read-only, through 65/35 mixed, to write-only in an HWR. The increase is about 0.10ms per

