16%
22.06.2012
0-07:06:40
10  32400  0-09:00:00
11  40000  0-11:06:40
12  48400  0-13:26:40
13  57600  0-16:00:00
14  67600
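The third column is just the second column (a time in seconds) rewritten in a days-hours:minutes:seconds format; for example, 32400 seconds is 9 hours, written 0-09:00:00. A minimal sketch of that conversion in the shell (the variable name is illustrative, not from the article):

$ secs=32400
$ printf '%d-%02d:%02d:%02d\n' $((secs/86400)) $((secs%86400/3600)) $((secs%3600/60)) $((secs%60))
0-09:00:00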
16%
07.10.2014
. The second number is the percentage of CPU time the system (kernel) is using (0.3%sy), and the next is the percentage of CPU time spent on "nice" (reduced-priority) processes [2] (0.0%ni). After that, top lists the percentage of overall CPU time spent idle (86.3%id; four real cores
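These percentages come from the CPU summary line near the top of top's display; you can capture just that line non-interactively with batch mode (the numbers below are illustrative, not from the article):

$ top -b -n 1 | grep 'Cpu(s)'
Cpu(s): 13.4%us, 0.3%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st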
16%
24.02.2022
.255.255.255 broadcast 0.0.0.0
inet6 fe80::bfd3:1a4b:f76b:872a prefixlen 64 scopeid 0x20<link>
ether 42:01:0a:80:00:02 txqueuelen 1000 (Ethernet)
RX packets 11919 bytes 61663030 (58.8 MiB)
16%
07.04.2022
<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
inet 10.0.0.2 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::bfd3:1a4b:f76b:872a prefixlen 64 scopeid 0x20<link>
ether 42:01:0a:80:00:02 txqueuelen 1000
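This is the newer ifconfig output format; the same addresses can also be read with the iproute2 ip tool (the interface name eth0 is an assumption, and the output is abridged):

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 state UP qlen 1000
    link/ether 42:01:0a:80:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/32 brd 0.0.0.0 scope global eth0
    inet6 fe80::bfd3:1a4b:f76b:872a/64 scope link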
16%
16.10.2012
:Ethernet HWaddr 08:00:27:b0:21:7e
inet addr:192.168.1.85 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feb0:217e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU
16%
08.10.2015
224984 49332 pts/30 S+ 12:20 0:16 /usr/bin/python /usr/bin/magnum-api
stack 19844 0.0 1.4 228088 57308 pts/31 S+ 12:20 0:03 /usr/bin/python /usr/bin/magnum-conductor
$ which magnum
/bin
16%
20.06.2012
was there. To test whether this worked, ssh to the node n0001 as root:

[root@test1 ~]# ssh n0001
Last login: Sat May 26 12:00:06 2012 from 10.1.0.250

The /etc/hosts on the master node works fine.
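Resolving the short name n0001 this way relies on an entry in the master's /etc/hosts; a minimal sketch of what such a file might contain (test1's address matches the "Last login" line above, but n0001's address is an assumption):

127.0.0.1    localhost
10.1.0.250   test1
10.1.0.1     n0001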
15%
30.11.2025
are started and stopped by init scripts for the cluster framework and OCFS2.

Listing 5: OCFS2 Processes

# ps -ef | egrep '[d]lm|[o]cf|[o]2'
root 3460 7 0 20:07 ? 00:00:00 [user
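The brackets in the egrep pattern are a common shell idiom: '[d]lm' is a character class that still matches the string "dlm" in process names, but the egrep command line itself contains the literal text "[d]lm", which the pattern does not match, so grep no longer reports its own process. A quick illustration (the PID line is illustrative):

# Without brackets, egrep finds its own command line:
$ ps -ef | egrep 'dlm'
root 4242 4100 0 20:08 pts/0 00:00:00 egrep dlm
# With the bracket trick, the egrep process drops out of the output:
$ ps -ef | egrep '[d]lm'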
15%
21.08.2012
just two nodes: test1, which is the master node, and n0001, which is the first compute node):
[laytonjb@test1 ~]$ pdsh -w test1,n0001 uptime
test1: 18:57:17 up 2:40, 5 users, load average: 0.00, 0.00
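Besides comma-separated names, pdsh also accepts hostlist ranges, which scales better than typing every node; a sketch assuming compute nodes named n0001 through n0004 (the output is illustrative):

$ pdsh -w n[0001-0004] uptime
n0001: 18:57:20 up 2:40, 0 users, load average: 0.00, 0.00, 0.00
n0002: 18:57:20 up 2:40, 0 users, load average: 0.00, 0.00, 0.00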
15%
21.08.2012
will allocate 4 cores

### using 3 processors on 1 node.
#PBS -l nodes=1:ppn=3

### Tell PBS the anticipated run-time for your job, where walltime=HH:MM:SS
#PBS -l walltime=0:10:00

### Load
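The snippet shows only the resource directives; a minimal complete job script built around them might look like the following (the job name, working-directory logic, and executable are placeholders, not from the article):

#!/bin/bash
### Name the job (placeholder).
#PBS -N testjob
### using 3 processors on 1 node.
#PBS -l nodes=1:ppn=3
### Tell PBS the anticipated run-time for your job, where walltime=HH:MM:SS
#PBS -l walltime=0:10:00

### Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
### Launch the (placeholder) MPI program on the 3 requested cores.
mpirun -np 3 ./my_mpi_app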