Starting with FortiOS 5.x, Fortigates have a built-in iperf3 client, so we can load test connected lines. If you are new to iperf, you can read more at iperf.fr.
iperf on the Fortigate comes with some limitations and quirks, so let's take a closer look at them:
- The version used (in 5.x and 6.x firmware so far) is 3.0.9. This means it will not interoperate with iperf2 and its subversions.
- The tool works as a CLIENT only, i.e. it does not accept the -s option. This means we can NOT run an iperf test between 2 Fortigates; one of the peers has to be some Linux/Windows server with iperf3 -s running. It does NOT mean we can test only one direction, though - the command accepts the -R option for reverse traffic.
- As you will see below, the command asks for Client and Server interfaces. The Server interface is the interface behind which the remote server is located. The Client interface means ... something, but in my tests it didn't matter what I set, as long as it was an enabled interface with an IP address.
- The tool accepts most of the command line options of a regular iperf3, except those mentioned already.
So let's configure and run the test.
The default configuration looks like this; it would test the throughput between 2 interfaces of the Fortigate itself, which is not very interesting:
Show the current configuration:

diagnose traffictest show

server-intf: port1
client-intf: port3
port:        162
proto:       TCP
To run the test, let's set the port to 5201, the protocol to TCP, and the client interface to port2 (as port3 is down on this Fortigate):
diagnose traffictest port 5201
diagnose traffictest proto 0 // 0 is for TCP, 1 is for UDP.
diagnose traffictest client client-intf port2
We are ready to run the iperf test. On the remote server 188.8.131.52 I have iperf3 -s running.
diagnose traffictest run -c 188.8.131.52
Connecting to host 188.8.131.52, port 5201
[  5] local 172.31.44.106 port 50670 connected to 188.8.131.52 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  24.8 MBytes   208 Mbits/sec  271   2.22 MBytes
[  5]   1.00-2.00   sec  39.9 MBytes   335 Mbits/sec    5   1.03 MBytes
[  5]   2.00-3.00   sec  14.7 MBytes   123 Mbits/sec  131    619 KBytes
[  5]   3.00-4.00   sec  12.3 MBytes   103 Mbits/sec    1    594 KBytes
[  5]   4.00-5.00   sec  8.02 MBytes  67.2 Mbits/sec    1    361 KBytes
[  5]   5.00-6.00   sec  7.83 MBytes  65.7 Mbits/sec    0    385 KBytes
[  5]   6.00-7.00   sec  7.83 MBytes  65.7 Mbits/sec    0    397 KBytes
[  5]   7.00-8.00   sec  7.83 MBytes  65.7 Mbits/sec    0    403 KBytes
[  5]   8.00-9.00   sec  7.83 MBytes  65.7 Mbits/sec    0    404 KBytes
[  5]   9.00-10.00  sec  7.83 MBytes  65.7 Mbits/sec    0    419 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   139 MBytes   116 Mbits/sec  409   sender
[  5]   0.00-10.00  sec   137 MBytes   115 Mbits/sec        receiver
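A detail worth knowing when reading this output: iperf3 reports the Transfer column in binary MBytes (1 MByte = 1024 * 1024 bytes) but the Bitrate column in decimal Mbits/sec (1 Mbit = 10^6 bits). A minimal sanity check of the sender summary line, using the rounded 139 MBytes figure from above:

```python
# Transfer is in binary MBytes, Bitrate in decimal Mbits/sec.
transferred_bytes = 139 * 1024 * 1024   # 139 MBytes from the sender summary
seconds = 10                            # default test duration

mbit_per_sec = transferred_bytes * 8 / seconds / 1e6
print(f"{mbit_per_sec:.1f} Mbit/s")
# Comes out near the reported 116 Mbits/sec; the small gap is
# rounding in the MBytes column.
```

The same unit mix explains why naive "MB * 8 / seconds" math often disagrees slightly with iperf's own bitrate figures.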
Now let's run a load test using UDP with a bandwidth of 50 Mbit/sec.
diagnose traffictest proto 1 // this is not strictly needed if we use -u below, but why not ...
diagnose traffictest run -c 188.8.131.52 -u -b 50M
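Unlike TCP, which pushes as hard as the path allows, UDP is paced at the requested rate, so -b 50M tells us in advance roughly how much traffic the test will generate. A quick back-of-the-envelope check for the default 10-second run:

```python
# At -b 50M iperf3 paces the UDP stream at ~50 Mbit/s (decimal units).
target_bps = 50e6   # -b 50M
duration = 10       # default -t

expected_mbytes = target_bps * duration / 8 / 1e6
print(expected_mbytes)  # 62.5 (decimal MBytes on the wire)
```

If the receiver-side summary reports noticeably less than this, the difference is loss on the path, which is exactly what a UDP load test is for.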
To see all the available options of the FortiOS version of iperf3, run:
dia traffictest run -h
FG1 # dia traffictest run -h
  -f, --format    [kmgKMG]   format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #          seconds between periodic bandwidth reports
  -F, --file name            xmit/recv the specified file
  -A, --affinity n/n,m       set CPU affinity
  -V, --verbose              more detailed output
  -J, --json                 output in JSON format
  -d, --debug                emit debugging output
  -v, --version              show version information and quit
  -h, --help                 show this message and quit
  -b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
                             (default 1 Mbit/sec for UDP, unlimited for TCP)
                             (optional slash and packet count for burst mode)
  -t, --time      #          time in seconds to transmit for (default 10 secs)
  -n, --bytes     #[KMG]     number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]    number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len       #[KMG]     length of buffer to read or write (default 128 KB for TCP, 8 KB for UDP)
  -P, --parallel  #          number of parallel client streams to run
  -R, --reverse              run in reverse mode (server sends, client receives)
  -w, --window    #[KMG]     TCP window size (socket buffer size)
  -C, --linux-congestion <algo>  set TCP congestion control algorithm (Linux only)
  -M, --set-mss   #          set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay              set TCP no delay, disabling Nagle's Algorithm
  -4, --version4             only use IPv4
  -6, --version6             only use IPv6
  -S, --tos N                set the IP 'type of service'
  -L, --flowlabel N          set the IPv6 flow label (only supported on Linux)
  -Z, --zerocopy             use a 'zero copy' method of sending data
  -O, --omit N               omit the first n seconds
  -T, --title str            prefix every output line with this string
  --get-server-output        get results from server
[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-
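The -J option from the list above is handy if you want to collect results programmatically instead of scraping the table output. A minimal sketch of pulling the summary figures out of iperf3's JSON: the sample below is hand-written and heavily abbreviated (real -J output has many more fields), but the end / sum_sent / sum_received / bits_per_second structure is what iperf3 emits.

```python
import json

# Hand-written, abbreviated sample shaped like iperf3 -J output;
# only the end-of-test summary objects are kept.
sample = """
{
  "end": {
    "sum_sent":     {"bytes": 145752064, "seconds": 10.0, "bits_per_second": 116601651.2},
    "sum_received": {"bytes": 143654912, "seconds": 10.0, "bits_per_second": 114923929.6}
  }
}
"""

result = json.loads(sample)
for side in ("sum_sent", "sum_received"):
    bps = result["end"][side]["bits_per_second"]
    print(f"{side}: {bps / 1e6:.1f} Mbit/s")
# sum_sent: 116.6 Mbit/s
# sum_received: 114.9 Mbit/s
```

This makes it easy to feed periodic traffictest runs into monitoring, provided you capture the CLI output of a -J run into a file first.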