Performance
Of course, DRBD's write performance depends on the performance of the network
and of the contributing disks. Because of Linux's buffer cache, the
read performance is hard to measure. The ratio of DRBD's read performance
to the disk's read performance is probably about the same as the ratio of
the DRBD uncon. write rate to the Disk write rate.
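As a back-of-the-envelope illustration of that rule of thumb, the sketch below
uses Node1's write figures from the first table and an assumed raw disk read
rate (read rates are not measured here, so that number is a placeholder):

    # Rough estimate of DRBD read throughput from the rule of thumb above.
    # disk_read is an assumed placeholder; the write numbers are Node1's
    # values from the first table below.
    disk_write = 4.49        # MB/s, raw disk sequential write
    drbd_uncon_write = 4.68  # MB/s, unconnected DRBD write
    disk_read = 5.00         # MB/s, raw disk sequential read (assumed)

    ratio = drbd_uncon_write / disk_write
    est_drbd_read = disk_read * ratio
    print("uncon./disk write ratio: %.2f" % ratio)
    print("estimated DRBD read:     %.2f MB/s" % est_drbd_read)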
It is quite easy to get such performance values from your own setup: just
run ~drbd/benchmark/run.sh. If you do so, please consider contributing
your results to this page. Benchmarks from a setup with high network
bandwidth and high latency would be especially interesting.
The values in the columns Disk write, DRBD uncon., Prot. A, Prot. B and
Prot. C are for sequential writing. The amount of data used for the write
tests and the network test is given in the Set Size field.
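For illustration only, the following sketch shows the kind of measurement
behind the write columns: write the set size sequentially in fixed-size
blocks, sync, and divide by the elapsed time. It is not the actual
~drbd/benchmark/run.sh, and the target path and sizes are placeholders.

    #!/usr/bin/env python
    # Minimal sequential-write throughput sketch, for illustration only.
    # NOT the real ~drbd/benchmark/run.sh; the target path is a placeholder
    # (point it at a DRBD device or a scratch file).
    import os
    import time

    TARGET = "/tmp/drbd-bench-testfile"   # hypothetical target
    SET_SIZE = 100 * 1024 * 1024          # 100M, as in the "Set Size" column
    BLOCK = 64 * 1024                     # write in 64 KB chunks

    buf = b"\0" * BLOCK
    fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT, 0o644)
    start = time.time()
    written = 0
    while written < SET_SIZE:
        written += os.write(fd, buf)
    os.fsync(fd)                          # make sure the data is really on disk
    elapsed = time.time() - start
    os.close(fd)
    print("%.2f MB/s" % (written / (1024.0 * 1024.0) / elapsed))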
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
0.95 MB/s | 0.5/0.5/0.8 ms | 10M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.13 | alpha | 528.48 | 4.49 MB/s | 4.68 MB/s | 0.70 MB/s | 0.73 MB/s | 0.65 MB/s |
Node2 | Linux | 2.2.13 | i586 | 47.82 | 2.09 MB/s | 2.11 MB/s | 0.95 MB/s | 0.94 MB/s | 0.91 MB/s |
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
1.04 MB/s | 0.5/0.5/0.7 ms | 55M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.13 | alpha | 528.48 | 4.94 MB/s | 4.81 MB/s | 0.74 MB/s | 0.74 MB/s | 0.73 MB/s |
Node2 | Linux | 2.2.13 | i586 | 47.82 | 2.05 MB/s | 1.16 MB/s | 0.95 MB/s | 0.94 MB/s | 0.93 MB/s |
- This is a 10baseT (10 Mbit/s, BNC) network, which limits DRBD's performance here.
- You can see that the benchmark script's results are largely independent of the set size used.
- These values also suggest that putting the faster node (Node1) on the receiving (secondary) side increases performance.
These are benchmarks from my machines,
on which I am developing DRBD.
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
4.33 MB/s | 0.1/0.8/36.4 ms | 10M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.12-20 | i686 | 265.42 | 2.27 MB/s | 1.99 MB/s | 1.30 MB/s | 1.29 MB/s | 1.51 MB/s |
Node2 | Linux | 2.2.12-20 | i686 | 265.42 | 2.23 MB/s | 2.04 MB/s | 1.30 MB/s | 1.31 MB/s | 1.59 MB/s |
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
9.70 MB/s | 0.1/0.8/34.3 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.12-20 | i686 | 265.42 | 2.26 MB/s | 1.96 MB/s | 1.34 MB/s | 1.34 MB/s | 1.48 MB/s |
Node2 | Linux | 2.2.12-20 | i686 | 265.42 | 2.25 MB/s | 1.99 MB/s | 1.31 MB/s | 1.30 MB/s | 1.49 MB/s |
- Here the disks limit DRBD's performance.
- The strange thing about these values is that protocol C was the fastest.
Thanks, Joe, for these values.
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
8.60 MB/s | 0.0/0.0/0.3 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.14-9cl | i586 | 897.84 | 11.17 MB/s | 10.21 MB/s | 3.67 MB/s | 3.59 MB/s | 3.56 MB/s |
Node2 | Linux | 2.2.14-14cl | i586 | 799.54 | 8.66 MB/s | 7.80 MB/s | 3.80 MB/s | 3.69 MB/s | 3.99 MB/s |
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
8.70 MB/s | 0.0/0.0/0.2 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.14-9cl | i586 | 897.84 | 14.88 MB/s | 12.90 MB/s | 4.94 MB/s | 4.68 MB/s | 4.57 MB/s |
Node2 | Linux | 2.2.14-14cl | i586 | 799.54 | 10.97 MB/s | 9.11 MB/s | 5.35 MB/s | 5.30 MB/s | 5.20 MB/s |
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
0.94 MB/s | 0.3/0.3/0.5 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.14-9cl | i586 | 897.84 | 15.79 MB/s | 14.18 MB/s | 1.06 MB/s | 1.04 MB/s | 1.01 MB/s |
Node2 | Linux | 2.2.14-14cl | i586 | 799.54 | 11.34 MB/s | 9.83 MB/s | 0.87 MB/s | 0.92 MB/s | 0.87 MB/s |
- Between the first and the second run Fábio used hdparm to tune the IDE
  interface, which is also visible in DRBD's performance.
- In the first two runs we again get the best performance when the faster node
  is in the secondary state.
- The third run used different (10 Mbit/s) network cards; interestingly, we now
  get better results when the faster node is in the primary state.
Fábio sent these. Thanks.
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
11.10 MB/s | 0.0/0.0/0.2 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.14 | i686 | 498.07 | 11.23 MB/s | 10.40 MB/s | 6.63 MB/s | 6.28 MB/s | 5.63 MB/s |
Node2 | Linux | 2.2.14 | i586 | 466.94 | 6.76 MB/s | 6.06 MB/s | 6.26 MB/s | 6.46 MB/s | 5.40 MB/s |
Network bandwidth | Network latency (min/avg/max) | Set Size | DRBD rev. |
11.10 MB/s | 0.0/0.0/0.2 ms | 100M | 0.5.3 |
Node | OS | Rev. | Arch | BogoMips | Disk write | DRBD uncon. | Prot. A | Prot. B | Prot. C |
Node1 | Linux | 2.2.14 | i686 | 498.07 | 11.30 MB/s | 9.65 MB/s | 6.14 MB/s | 6.18 MB/s | 5.29 MB/s |
Node2 | Linux | 2.2.14 | i586 | 466.94 | 6.62 MB/s | 6.34 MB/s | 6.27 MB/s | 6.27 MB/s | 6.08 MB/s |
- These are the best values so far.
- The network was a crossover cable.
- Here DRBD's write performance comes very close to that of the slowest disk in
  the setup.
Thanks to Clemens, who provided his machines
for these benchmark runs.
Philipp Reisner
Last modified: Thu Mar 28 15:29:09 CET 2000