11.02.2016
# rdiff-backup /etc /mnt/backup
# rdiff-backup --list-increments /mnt/backup/
Found 2 increments:
    increments.2015-03-15T09:15:19+01:00.dir   Sun Mar 15 09:15:19 2015
    increments.2015-03-19T20:15:46+01:00.dir   Thu Mar 19 20:15:46 2015
05.12.2019
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:05 ?        00:00:00 sleep 1000

Listing 2: Process on the Host

$ ps -ef | grep sleep
cherf    30328 29757  0 20:44 ?        00:00:00 sleep 1000
cherf    30396 3353
09.01.2013
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data
07.02.2019
for (int i=0; i < n; i++)
{
  c[i] = a[i] * b[i];
}
}
...
#pragma acc parallel loop copyin(e) present(a) copyout(f)
for (int j=0; j < n; j++)
{
  f[j] = 2.0*e[j] + (1.0/4.00*(a[j]*4
19.09.2019
b = np.array([10, 20, 30, 40])
print('a+b:\n', add_ufunc(a, b))
The answer should be:
a+b:
[11 22 33 44]
In the previous example, you had to put everything that was to run on the GPU into a single Numba ufunc.
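As a rough sketch, add_ufunc could have been defined with Numba's @vectorize decorator targeting CUDA; the type signature is an assumption, and the array a = [1, 2, 3, 4] is inferred here from the printed result:

import numpy as np
from numba import vectorize

# Compile an elementwise ufunc for the GPU (assumed int64 signature).
@vectorize(['int64(int64, int64)'], target='cuda')
def add_ufunc(x, y):
    return x + y

a = np.array([1, 2, 3, 4])    # inferred so that a + b gives [11 22 33 44]
b = np.array([10, 20, 30, 40])
print('a+b:\n', add_ufunc(a, b))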
11.06.2014
PID 38533; svchost.exe, PID 1560; and WScript.exe, PID 1744). Process jh (PID 38533) was spawned by a parent process, [Not Available] (528), with a start time of 1601-01-01 00:00:00Z (Figure 1
21.04.2015
-r--r-- 2 root root 6  3. Feb 18:36 .glusterfs/0d/19/0d19fa3e-5413-4f6e-abfa-1f344b687ba7
#
# ls -alid dir1 .glusterfs/fe/9d/fe9d750b-c0e3-42ba-b2cb-22ff8de3edf0 .glusterfs/00/00
16.05.2013
http://wiki.scilab.org/Linalg%20performances
Compiling
http://wiki.scilab.org/Compiling%20Scilab%205.x%20under%20GNU-Linux%20Unix
Parallel computing
http
19.10.2012
12-core AMD processors ranging in speed from 2.2 to 2.9GHz with 24 to 128GB of RAM per server and up to 1TB of scratch local storage per node.
Getting applications running on POD HPC clouds can be quite
17.02.2015
for x in rep.df.rx2(2)])

devs.png(file=path, width=512, height=512)
ro.r.plot(x_vals, y_vals, xlab=x_lab, ylab=y_lab, main=main)
devs.dev_off()

rep
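The rpy2 calls in this listing follow the usual pattern of opening an R graphics device, plotting, and closing the device. A minimal self-contained sketch, with made-up data and labels standing in for the listing's path, x_vals, y_vals, and label variables:

import rpy2.robjects as ro
from rpy2.robjects.packages import importr

devs = importr('grDevices')                        # R's graphics device functions
x_vals = ro.FloatVector([1.0, 2.0, 3.0, 4.0])      # made-up example data
y_vals = ro.FloatVector([10.0, 20.0, 30.0, 40.0])

devs.png(file='plot.png', width=512, height=512)   # open a PNG device
ro.r.plot(x_vals, y_vals, xlab='x', ylab='y', main='Example plot')
devs.dev_off()                                     # close the device, writing the file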