Admin Magazine

Search

Results by content type: Article (Print) 114, Article 65, Blog post 1.


21%  Working with the Lustre Filesystem (HPC Articles, 24.02.2022)
Excerpt: ….255.255.255  broadcast 0.0.0.0  inet6 fe80::bfd3:1a4b:f76b:872a  prefixlen 64  scopeid 0x20  ether 42:01:0a:80:00:02  txqueuelen 1000  (Ethernet)  RX packets 11919  bytes 61663030 (58.8 Mi…

21%  Lustre HPC distributed filesystem (Archive, Issue 68, 07.04.2022)
Excerpt: …,BROADCAST,RUNNING,MULTICAST> mtu 1460 inet 10.0.0.2 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::bfd3:1a4b:f76b:872a prefixlen 64 scopeid 0x20 ether 42:01:0a:80:00:02 txqueuelen 1000…

21%  Automation Scripting with PHP (Articles, 16.10.2012)
Excerpt: …:Ethernet HWaddr 08:00:27:b0:21:7e inet addr:192.168.1.85 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feb0:217e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU…

21%  Warewulf Cluster Manager – Completing the Environment (HPC Articles, 20.06.2012)
Excerpt: …was there. To test whether this worked, ssh to the node n0001 as root. [root@test1 ~]# ssh n0001 Last login: Sat May 26 12:00:06 2012 from 10.1.0.250 The /etc/hosts on the master node works fine…

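The excerpt above checks that the master node can resolve and reach the first compute node. A minimal sketch of that test (the node name comes from the excerpt; the /etc/hosts entry shown is an assumption):

  # Confirm the name resolves via /etc/hosts (assumed entry: "10.1.0.1  n0001"):
  getent hosts n0001
  # Log in as root and run a command; with keys set up, no password prompt appears:
  ssh root@n0001 hostname
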
20%  Warewulf Cluster Manager – Administration and Monitoring (HPC Articles, 21.08.2012)
Excerpt: …just two nodes: test1, which is the master node, and n0001, which is the first compute node): [laytonjb@test1 ~]$ pdsh -w test1,n0001 uptime test1: 18:57:17 up 2:40, 5 users, load average: 0.00, 0.00…

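The pdsh pattern in this excerpt runs one command across several nodes in parallel. A short sketch of the same idea (host names taken from the excerpt; the host-range form and the dshbak step are assumptions about the setup):

  # Run uptime on the master and the first compute node at once:
  pdsh -w test1,n0001 uptime
  # For larger sets, a host range plus dshbak -c folds identical output together:
  pdsh -w n[0001-0016] uname -r | dshbak -c
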
20%  Listing 6 (HPC Articles, Warewulf 4 Code, 21.08.2012)
Excerpt: …will allocate 4 cores 15 ### using 3 processors on 1 node. 16 #PBS -l nodes=1:ppn=3 17 18 ### Tell PBS the anticipated run-time for your job, where walltime=HH:MM:SS 19 #PBS -l walltime=0:10:00 20 21 ### Load…

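The excerpt is a fragment of a numbered PBS batch script. For context, a minimal complete script in the same style (the job name and application are hypothetical, and an MPI environment is assumed; the resource requests mirror the excerpt):

  #!/bin/bash
  ### Request 3 processors on 1 node and 10 minutes of walltime:
  #PBS -N testjob
  #PBS -l nodes=1:ppn=3
  #PBS -l walltime=0:10:00
  ### Run from the directory the job was submitted from:
  cd $PBS_O_WORKDIR
  ### Launch the (hypothetical) program on the allocated cores:
  mpirun -np 3 ./my_app

Submitted with qsub, the #PBS lines take the place of command-line options to qsub.
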
20%  Tips and Tricks for Containers (HPC Articles, 12.05.2020)
Excerpt: …): root@c31656cbd380:/# apt-get update Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] ... Fetched 18.0 MB in 9s (1960 kB/s) After the package repositories are synced, I can…

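The excerpt installs packages inside a running container after syncing the repositories. A minimal sketch of that workflow (the image tag and package are assumptions):

  # Start a disposable Ubuntu container with an interactive shell:
  docker run -it --rm ubuntu:18.04 /bin/bash
  # Inside the container, sync the package index, then install a tool:
  apt-get update
  apt-get install -y wget
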
20%  Real-World HPC: Setting Up an HPC Cluster (HPC Articles, 04.11.2011)
Excerpt: …through NAT. Listing 1 has some critical settings for the /etc/sysconfig/iptables file. Listing 1: /etc/sysconfig/iptables *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :RH…

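The excerpt's Listing 1 opens the filter table of /etc/sysconfig/iptables on a master node that NATs traffic for its compute nodes. A sketch of rules in that spirit, entered by hand (interface names and the private subnet are assumptions):

  # Masquerade compute-node traffic leaving through the public interface:
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  # Allow forwarding from the private cluster network and return traffic:
  iptables -A FORWARD -i eth1 -s 10.1.0.0/24 -j ACCEPT
  iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
  # Persist the rules to /etc/sysconfig/iptables (RHEL/CentOS tooling):
  service iptables save
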
20%  Julia Distributed Arrays (HPC Articles, 15.08.2012)
Excerpt: …, elements of the distributed array, j (created above) can be changed (not just the local references; in this case, rows 1-2 and columns 1-8): julia> j[2,2]=0; julia> j[8,8]=0; julia> print(j); 8x8 Float…

20%  Machine learning and security (Archive, Issue 61, 02.02.2021)
Excerpt: ….sin(periods * 2 * np.pi * t) 12 return max(value, 0.0) 13 else: 14 value = np.sin(periods * 2 * np.pi * t) 15 return max(value, 0.0) 16 17 # building the data vector 18 my_data = [] 19 i = 0 20 while…

