Admin Magazine
Search
« Previous 1 2 3 4 5 6 7 Next »

22%
Security first with the Hiawatha web server
11.04.2016
-fastcgi are running, as expected.
Listing 1: Process List
root 589 0.0 0.3 142492 3092 ? Ss 20:35 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www
22%
Automation Scripting with PHP
16.10.2012
:Ethernet HWaddr 08:00:27:b0:21:7e
inet addr:192.168.1.85 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feb0:217e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU
22%
Warewulf Cluster Manager – Completing the Environment
20.06.2012
was there. To test whether this worked, ssh to the node n0001 as root.
[root@test1 ~]# ssh n0001
Last login: Sat May 26 12:00:06 2012 from 10.1.0.250
The /etc/hosts on the master node works fine
22%
Working with the Lustre Filesystem
24.02.2022
.255.255.255  broadcast 0.0.0.0
inet6 fe80::bfd3:1a4b:f76b:872a  prefixlen 64  scopeid 0x20
ether 42:01:0a:80:00:02  txqueuelen 1000  (Ethernet)
RX packets 11919  bytes 61663030 (58.8 Mi
22%
Lustre HPC distributed filesystem
07.04.2022
,BROADCAST,RUNNING,MULTICAST> mtu 1460
inet 10.0.0.2 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::bfd3:1a4b:f76b:872a prefixlen 64 scopeid 0x20
ether 42:01:0a:80:00:02 txqueuelen 1000
21%
LXC 1.0
03.12.2015
/24 !10.0.3.0/24
root@ubuntu:~# ps -eaf | grep dnsmas
lxc-dns+ 1047 1 0 18:24 ? 00:00:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf ...
LXC 1.0, released in early 2014, was the first stable version for managing Linux containers. We check out the lightweight container solution to see whether it is now ready for production.
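The dnsmasq process in the snippet is the one LXC starts for its default bridge network. Purely as an illustration, a minimal sketch of the corresponding network defaults (the file path and every value here are the common Ubuntu defaults, assumed rather than quoted from the article, matching the 10.0.3.0/24 network visible above) would be:

```
# Hypothetical /etc/default/lxc-net sketch (assumed defaults, not from
# the article) for the bridge that dnsmasq serves
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
```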
21%
Real-World HPC: Setting Up an HPC Cluster
04.11.2011
through NAT. Listing 1 has some critical settings for the /etc/sysconfig/iptables file.
Listing 1: /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH
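The truncated listing shows only the *filter skeleton; the NAT the text mentions lives in a separate *nat section of the same file. As a sketch only (the interface name eth0 is an assumption, not taken from the article), that section might look like:

```
# Hypothetical sketch, not from the article: masquerade compute-node
# traffic out through the head node's public interface (eth0, assumed)
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
```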
21%
Warewulf Cluster Manager – Administration and Monitoring
21.08.2012
just two nodes: test1, which is the master node, and n0001, which is the first compute node):
[laytonjb@test1 ~]$ pdsh -w test1,n0001 uptime
test1: 18:57:17 up 2:40, 5 users, load average: 0.00, 0.00
21%
Tips and Tricks for Containers
12.05.2020
):
root@c31656cbd380:/# apt-get update
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
...
Fetched 18.0 MB in 9s (1960 kB/s)
After the package repositories are synced, I can
21%
Resource Management with Slurm
05.11.2018
Name=slurm-node-0[0-1] Gres=gpu:2 CPUs=10 Sockets=1 CoresPerSocket=10 \
ThreadsPerCore=1 RealMemory=30000 State=UNKNOWN
PartitionName=compute Nodes=ALL Default=YES MaxTime=48:00:00 DefaultTime=04:00:00 \
Max
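The truncated lines appear to come from a slurm.conf node and partition definition. A self-contained sketch in the same style (the NodeName spelling and the trailing MaxNodes/State parameters are assumptions filled in for illustration, not quoted from the article):

```
# Hypothetical slurm.conf fragment (completions assumed, not from the article)
NodeName=slurm-node-0[0-1] Gres=gpu:2 CPUs=10 Sockets=1 CoresPerSocket=10 \
   ThreadsPerCore=1 RealMemory=30000 State=UNKNOWN
PartitionName=compute Nodes=ALL Default=YES MaxTime=48:00:00 DefaultTime=04:00:00 \
   MaxNodes=2 State=UP
```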


© 2025 Linux New Media USA, LLC – Legal Notice