17%
16.03.2021
Version : 1.2
Creation Time : Sat Jan 9 15:52:00 2021
Raid Level : raid1
Array Size : 244065408 (232.76 GiB 249.92 GB)
Used Dev Size : 244065408 (232.76 GiB 249.92 GB)
Raid Devices : 2
31.10.2025
is now possible. To do this, modify the /etc/fstab file to point at the desired snapshot instead of subvol=@ (here, @apt-snapshot-2012-07-23_08:52:34). One update-grub and one
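As a sketch of that fstab change (the UUID is a placeholder; the snapshot name is the one from the text above), the root entry would change roughly like this:

```
# before: mount the default subvolume as /
# UUID=xxxx-xxxx  /  btrfs  defaults,subvol=@  0  1
# after: mount the snapshot as / instead
# UUID=xxxx-xxxx  /  btrfs  defaults,subvol=@apt-snapshot-2012-07-23_08:52:34  0  1
```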
30.11.2025
_is_true "$OCF_RESKEY_realtime"; then
    asterisk_extra_params="-F -p"
else
    asterisk_extra_params="-F"
fi

ocf_run ${OCF_RESKEY_binary} -G $OCF_RESKEY_group \
02.08.2021
Device    r/s    w/s    rkB/s    wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sda      10.91   6.97   768.20   584.64   4.87   18.20  30.85  72.31    13.16    20.40    0.26     70.44     83.89   1.97   3.52
nvme0n1  58.80  12.22  17720.47  48.71  230
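When only one or two of these columns matter, a short awk filter is enough; this sketch prints the device name and %util (the last field), using the sda line from above as sample input:

```shell
# Print device and %util (last field) from an iostat -x device line.
echo "sda 10.91 6.97 768.20 584.64 4.87 18.20 30.85 72.31 13.16 20.40 0.26 70.44 83.89 1.97 3.52" \
  | awk '{ printf "%s %s%%\n", $1, $NF }'
# → sda 3.52%
```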
25.03.2021
Version : 1.2
Creation Time : Sat Jan 9 15:52:00 2021
Raid Level : raid1
Array Size : 244065408 (232.76 GiB 249.92 GB)
Used Dev Size : 244065408 (232.76 GiB 249.92 GB)
Raid Devices : 2
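The two parenthesized figures are the same number in different units: mdadm reports the array size in 1 KiB blocks, so dividing by 1024² gives GiB, and multiplying by 1024 then dividing by 10⁹ gives decimal GB. A quick check of the values shown above:

```shell
# 244065408 KiB blocks -> binary GiB and decimal GB
awk 'BEGIN { kib = 244065408
             printf "%.2f GiB\n", kib / (1024 * 1024)
             printf "%.2f GB\n",  kib * 1024 / 1e9 }'
# → 232.76 GiB
# → 249.92 GB
```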
28.03.2012
20120310 13:40:10 sdb 136 93 6483 2 8 40 47 2 17 7 90 sda 0 0 0 0 0 0 0 0 0 0 0
20120310 13:40:20 sdb 60 69 2200 2 11 52 36 2 30 6 37 sda 0 0 0 0 0 0 0 0 0 0 0
20120310 13:40:30 sdb 2 0 16 7 37 175 21 1 59 6
04.11.2011
.width + kx];
        }
      }
      // Clamp values to {0, ..., 255} and store them
      out.data[y * out.width + x] = clampuchar((int) convolutionSum);
    }
}
18.08.2021
I/O operation sequences are presented. The chart shows roughly 54,000 total write and perhaps 6,000 total read operations. For the write I/O, most were sequential (about 52,000), with about 47,000 consecutive
07.11.2023
tmpfs 5.3M 8.2K 5.3M 1% /run/lock
/dev/nvme1n1p1 1.1T 488G 468G 52% /home
/dev/nvme0n1p1 536M 6.4M 530M 2% /boot/efi
/dev/sda1 6.0T 3.4T 2.4T 60% /home2
tmpfs
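A common follow-up on output like this is to flag filesystems above a usage threshold. Sketched with awk over df-style lines (the two sample lines are the real disks from above; the 50% cutoff is arbitrary):

```shell
# Print mount point and use% for filesystems above 50% usage.
printf '%s\n' \
  "/dev/nvme1n1p1 1.1T 488G 468G 52% /home" \
  "/dev/sda1 6.0T 3.4T 2.4T 60% /home2" \
  | awk '{ pct = $5; sub(/%/, "", pct); if (pct + 0 > 50) print $6, $5 }'
# → /home 52%
# → /home2 60%
```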
05.12.2014
/bro/bin/bro-cut -d ts id.resp_h assigned_ip lease_time
2014-10-30T04:04:04-0500 192.168.1.2 192.168.1.27 86400.000000
2014-10-30T15:54:52-0500 192.168.1.1 192.168.1.14 86400.000000
ls known
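The lease_time column in that bro-cut output is in seconds, so the 86400.000000 values are 24-hour leases. A small awk pass over the two lines above makes that readable (field positions match the columns selected with bro-cut):

```shell
# Convert the lease_time field (4th column, seconds) to hours.
printf '%s\n' \
  "2014-10-30T04:04:04-0500 192.168.1.2 192.168.1.27 86400.000000" \
  "2014-10-30T15:54:52-0500 192.168.1.1 192.168.1.14 86400.000000" \
  | awk '{ printf "%s lease=%dh\n", $3, $4 / 3600 }'
# → 192.168.1.27 lease=24h
# → 192.168.1.14 lease=24h
```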