Azure VM Network Bandwidth

July 2017 – Updated with new VPN Gateway types 

In this blog post we look at some network bandwidth tests for a variety of Azure VM sizes.

The tests were run between two VMs in the same VNet. On Linux, network bandwidth was measured with iperf3 on CentOS 7.2; on Windows Server 2016, with the NTTTCP tool.

[Image: nettest1]

Both single-stream and multi-stream tests were used. Your actual throughput will of course vary from the numbers below due to a number of factors (OS type, workload characteristics, and so on).

Note: Microsoft is currently rolling out some Azure network optimisations:

    1. “Receive Side Scaling” – https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth
    2. “Accelerated Networking” – https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-accelerated-networking-portal
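
For reference, optimisation 2 is enabled per NIC and can be requested at NIC creation time with the az CLI. A minimal sketch; the resource group, VNet, subnet, and NIC names are placeholders, and the VM size must support Accelerated Networking:

$ az network nic create -g myRG -n myVMNic \
      --vnet-name vnet1 --subnet subnet1 \
      --accelerated-networking true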

The table below has been updated to include optimisation 1; further updates will follow later in the year.
Entries marked “(with 1)” include optimisation 1, “Receive Side Scaling”.

| OS | VM Type | # Cores | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
|---|---|---|---|---|
| CentOS (with 1) | A0 Basic | 0.25 | 10 Mbps | 10 Mbps |
| CentOS (with 1) | A1 Basic | 1 | 100 Mbps | 100 Mbps |
| CentOS (with 1) | A2 Basic | 2 | 200 Mbps | 200 Mbps |
| CentOS (with 1) | A3 Basic | 4 | 400 Mbps | 395 Mbps |
| CentOS (with 1) | A4 Basic | 8 | 805 Mbps | 816 Mbps |
| Windows | A4 Basic | 8 | 737 Mbps | 734 Mbps |
| CentOS (with 1) | A1 Standard | 1 | 500 Mbps | 500 Mbps |
| CentOS (with 1) | A2 Standard | 2 | 500 Mbps | 500 Mbps |
| CentOS (with 1) | A3 Standard | 4 | 1000 Mbps | 1000 Mbps |
| CentOS (with 1) | A4 Standard | 8 | 1990 Mbps | 2000 Mbps |
| CentOS (with 1) | A6 Standard | 4 | 1000 Mbps | 1000 Mbps |
| CentOS (with 1) | A7 Standard | 8 | 2000 Mbps | 2000 Mbps |
| CentOS (with 1) | A1 v2 | 1 | 495 Mbps | 488 Mbps |
| Windows | A1 v2 | 1 | 411 Mbps | 467 Mbps |
| CentOS (with 1) | A2 v2 | 2 | 500 Mbps | 492 Mbps |
| CentOS (with 1) | A4 v2 | 4 | 998 Mbps | 999 Mbps |
| CentOS (with 1) | A8 v2 | 8 | 1980 Mbps | 1910 Mbps |
| CentOS (with 1) | A8 | 8 | 4000 Mbps | 4200 Mbps |
| CentOS (with 1) | A9 | 16 | 4550 Mbps | 7850 Mbps |
| CentOS (with 1) | A10 | 8 | 3990 Mbps | 3995 Mbps |
| Windows | A10 | 8 | 1820 Mbps | 3942 Mbps |
| CentOS (with 1) | A11 | 16 | 4410 Mbps | 8000 Mbps |
| CentOS (with 1) | D1 v2 | 1 | 750 Mbps | 726 Mbps |
| CentOS (with 1) | D2 v2 | 2 | 1500 Mbps | 1500 Mbps |
| CentOS (with 1) | D3 v2 | 4 | 3000 Mbps | 3000 Mbps |
| CentOS (with 1) | D4 v2 | 8 | 4950 Mbps | 6000 Mbps |
| CentOS (with 1) | D5 v2 | 16 | 4840 Mbps | 12000 Mbps |
| CentOS (with 1) | D11 v2 | 2 | 1500 Mbps | 1500 Mbps |
| CentOS (with 1) | D12 v2 | 4 | 3000 Mbps | 3000 Mbps |
| CentOS (with 1) | D13 v2 | 8 | 4210 Mbps | 5990 Mbps |
| CentOS (with 1) | D14 v2 | 16 | 4990 Mbps | 11900 Mbps |
| CentOS (with 1) | D15 v2 | 20 | 4440 Mbps | 15500 Mbps |
| Windows | D15 v2 | 20 | 1002 Mbps | 12176 Mbps |
| CentOS (with 1) | F1 | 1 | 750 Mbps | 749 Mbps |
| CentOS (with 1) | F2 | 2 | 1500 Mbps | 1490 Mbps |
| CentOS (with 1) | F4 | 4 | 2990 Mbps | 2995 Mbps |
| Windows (with 1) | F4 | 4 | 928 Mbps | 2640 Mbps |
| CentOS (with 1) | F8 | 8 | 3490 Mbps | 5990 Mbps |
| Windows | F8 | 8 | 390 Mbps | 4388 Mbps |
| CentOS (with 1) | F16 | 16 | 4110 Mbps | 11800 Mbps |
| Windows (with 1) | F16 | 16 | 1096 Mbps | 8416 Mbps |
| CentOS (with 1) | G1 | 2 | 2000 Mbps | 2000 Mbps |
| CentOS (with 1) | G2 | 4 | 3270 Mbps | 4000 Mbps |
| CentOS (with 1) | G3 | 8 | 3160 Mbps | 8000 Mbps |
| CentOS (with 1) | G4 | 16 | 3970 Mbps | 8880 Mbps |
| Windows | G4 | 16 | 1602 Mbps | 9488 Mbps |
| Windows (with 1) | G4 | 16 | 1904 Mbps | 7856 Mbps |
| CentOS (with 1) | G5 | 32 | 3850 Mbps | 13700 Mbps |
| CentOS (with 1) | NV6 | 6 | 4850 Mbps | 5970 Mbps |
| CentOS (with 1) | NV12 | 12 | 4760 Mbps | 12200 Mbps |

Test method

In each CentOS VM:

$ sudo yum -y update          (a very important step!)

Ensure Receive Side Scaling is enabled; see: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth
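
One generic way to sanity-check that RSS is actually in effect (a minimal sketch; the interface name eth0 is an assumption, and ethtool -l may not report channels on every kernel):

$ ethtool -l eth0             # with RSS, the "Combined" channel count should be greater than 1
$ grep NET_RX /proc/softirqs  # under load, NET_RX counts should grow on several CPUs, not just CPU0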

$ wget https://iperf.fr/download/fedora/iperf3-3.1.3-1.fc24.x86_64.rpm
$ sudo yum install iperf3-3.1.3-1.fc24.x86_64.rpm

On one VM, run iperf3 in server mode:
$ iperf3 -s

On another VM, run the single-stream test:
$ iperf3 -c ip-of-server

For the multi-stream test:
$ iperf3 -c ip-of-server -P n

Where n = the number of cores in the VM.
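
The client runs can also be scripted so the stream count always matches the core count. A minimal sketch; SERVER_IP is a placeholder, and the explicit 30-second duration is an assumption (iperf3 defaults to 10 seconds):

$ SERVER_IP=10.0.0.4                        # private IP of the iperf3 server VM
$ iperf3 -c $SERVER_IP -t 30                # single-stream test
$ iperf3 -c $SERVER_IP -t 30 -P $(nproc)    # one stream per core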

In each Windows Server 2016 VM:

Apply the latest Windows updates, then download the NTTTCP tool from Microsoft.

On one VM (ip = w.x.y.z), run ntttcp in receiver mode:
C:> ntttcp -r -m 1,*,w.x.y.z

or, for the multi-thread test:
C:> ntttcp -r -m n,*,w.x.y.z

Where n = 8 x the number of cores in the VM.

On another VM, run the single-thread test with ntttcp in sender mode:
C:> ntttcp -s -m 1,*,w.x.y.z

For the multi-thread test:
C:> ntttcp -s -m n,*,w.x.y.z

Where n = the number of cores in the VM.
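
Rather than computing n by hand, the thread counts can be derived on the VM itself. A minimal sketch using the standard %NUMBER_OF_PROCESSORS% environment variable (w.x.y.z stands for the receiver's IP as above):

C:> rem Receiver: n = 8 x number of cores, as noted above
C:> set /a n=8*%NUMBER_OF_PROCESSORS%
C:> ntttcp -r -m %n%,*,w.x.y.z

C:> rem Sender (on the other VM): n = 1 x number of cores
C:> set /a n=%NUMBER_OF_PROCESSORS%
C:> ntttcp -s -m %n%,*,w.x.y.z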

Peering VNets (Directly connected)

[Image: nettest2]

Testing between VMs in directly peered VNets showed no noticeable difference.
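
For reference, direct peering between two VNets can be set up with a pair of az CLI calls. A minimal sketch: the resource group and VNet names are placeholders, peering must be created in both directions, and the --remote-vnet-id syntax reflects the CLI at the time of writing:

$ az network vnet peering create -g myRG -n vnet1-to-vnet2 \
      --vnet-name vnet1 \
      --remote-vnet-id $(az network vnet show -g myRG -n vnet2 --query id -o tsv) \
      --allow-vnet-access
$ az network vnet peering create -g myRG -n vnet2-to-vnet1 \
      --vnet-name vnet2 \
      --remote-vnet-id $(az network vnet show -g myRG -n vnet1 --query id -o tsv) \
      --allow-vnet-access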

Peered VNets via a VPN Gateway in the same region

Testing between VMs indirectly peered via the new gateway types (VPNGW1, 2 & 3) shows bandwidth of up to 2.8 Gbps, a big step forward from the previous gateway types, which returned a maximum of 980 Mbps when using the now deprecated ‘High Performance’ gateway.

[Image: nettest3]

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
|---|---|---|
| VPNGW1 | 700 Mbps | 717 Mbps |
| VPNGW2 | 1400 Mbps | 1430 Mbps |
| VPNGW3 | 1810 Mbps | 2880 Mbps |

For comparison, here are the results from the now deprecated gateway types, Standard and High Performance:

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
|---|---|---|
| Standard | 472 Mbps | 580 Mbps |
| High Performance | 720 Mbps | 980 Mbps |
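
To try the new SKUs yourself, a route-based gateway can be deployed with the az CLI roughly as follows. A minimal sketch: the resource group, VNet, and public IP names are placeholders, the SKU identifiers are VpnGw1/VpnGw2/VpnGw3, and gateway deployment can take 30 minutes or more:

$ az network vnet-gateway create -g myRG -n vnet1-gw \
      --vnet vnet1 --public-ip-address vnet1-gw-pip \
      --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1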

Peered VNets via two BGP gateways, one in each region

[Image: nettest4]

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput |
|---|---|---|
| VPNGW1 | 550 Mbps | 670 Mbps |
| VPNGW2 | 650 Mbps | 780 Mbps |
| VPNGW3 | 650 Mbps | 650 Mbps |

For comparison, here are the results from the now deprecated gateway types, Standard and High Performance:

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput |
|---|---|---|
| Standard | 200 Mbps | 280 Mbps |
| High Performance | 210 Mbps | 411 Mbps |
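
The cross-region links above are VNet-to-VNet connections between two BGP-enabled gateways. With the az CLI that looks roughly like this (a minimal sketch: gateway names and the shared key are placeholders, and each gateway must have been created with its own --asn):

$ az network vpn-connection create -g myRG -n gw1-to-gw2 \
      --vnet-gateway1 vnet1-gw --vnet-gateway2 vnet2-gw \
      --shared-key MySharedKey123 --enable-bgp
$ # repeat in the reverse direction (gw2-to-gw1) to complete the tunnel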
