Installing OMS on CentOS 5.11

If you're migrating old workloads to Azure, you occasionally may not be able to use the latest operating systems. In these circumstances it's still useful to be able to run the OMS agent to monitor your VM.

For CentOS you really need to get to version 5.8 or above, which has native support for Hyper-V, the hypervisor that runs under the covers on Azure.

Fix the CentOS repos to get the last released patches:

# cd /etc/yum.repos.d

Edit CentOS-Base.repo: in each section, comment out the mirrorlist line and change the baseurl to point at vault.centos.org/5.11. For example, in the [updates] section:

#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
baseurl=http://vault.centos.org/5.11/updates/$basearch/
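The [base] section follows the same pattern (a sketch; check the section names in your own repo file):

#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://vault.centos.org/5.11/os/$basearch/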

Set the date correctly and add the hostname/IP address to /etc/hosts.
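For example (a hypothetical date and address; substitute your own):

# date -s "01 Jun 2018 12:00:00"
# echo "10.0.0.4  centos5vm" >> /etc/hosts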

Get the repo for rsyslog (needed later):

# cd /etc/yum.repos.d
# wget http://rpms.adiscon.com/v8-stable/rsyslog.repo

Update the OS and install required software

# yum update
# yum groupinstall "Development Tools"
# yum groupinstall "Development Libraries"
# yum install python-ctypes
# yum install rsyslog

Update openssl (for TLS 1.2 support)

# wget https://www.openssl.org/source/openssl-1.0.2o.tar.gz --no-check-certificate
# tar -zxvf openssl-*.tar.gz
# cd openssl-*
# ./config -fpic shared && make && make install
# echo "/usr/local/ssl/lib" >> /etc/ld.so.conf
# ldconfig
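Check the new build is in place (OpenSSL 1.0.2o should report a banner like the one below; the path assumes the default /usr/local/ssl prefix):

# /usr/local/ssl/bin/openssl version
OpenSSL 1.0.2o  27 Mar 2018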

Install updated version of wget

# wget http://ftp.gnu.org/gnu/wget/wget-1.19.5.tar.gz --no-check-certificate
# tar -xzvf wget-1.19.5.tar.gz
# cd wget-1.19.5
# make clean
# ./configure --with-ssl=openssl --with-libssl-prefix=/usr/local/ssl
# make && make install
# yum -y remove wget
# mv /usr/bin/wget /usr/bin/wget.orig
# ln -s /usr/local/bin/wget /usr/bin/wget
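Confirm the replacement wget is the one found on the PATH and was built against the updated OpenSSL:

# wget --version | head -1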

Install an updated version of Python

A late 2.7 release (2.7.15) is needed for TLS 1.2 support.

# wget https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tgz --no-check-certificate
# tar -xzvf Python-2.7.15.tgz
# cd Python-2.7.15
# export CCFLAGS="-I/usr/local/ssl/include/openssl"
# export LD_LIBRARY_PATH="/usr/local/ssl/lib/"
# export LDFLAGS="-L/usr/local/ssl/lib"
# make clean
# ./configure --with-ensurepip=install --prefix=/usr/local

The output at the end of ./configure may look like the following:

Python build finished, but the necessary bits to build these modules were not found:
_tkinter           bsddb185           dl
gdbm               imageop            sunaudiodev

To find the necessary bits, look in setup.py in detect_modules() for the module’s name.
It will show you the modules that could not be built; note that some of them are unnecessary or deprecated:
 
_tkinter: For the Tkinter GUI library; unnecessary if you don't develop Tkinter programs.
bsddb185: Older version of Oracle Berkeley DB. Undocumented. Install version 4.8 instead.
dl: For 32-bit machines. Deprecated. Use ctypes instead.
imageop: For 32-bit machines. Deprecated. Use PIL instead.
sunaudiodev: For Sun hardware. Deprecated

# make
# make altinstall

Check we have TLS 1.2 support

# python2.7
>>> import ssl
>>> print ssl.OPENSSL_VERSION
OpenSSL 1.0.2o  27 Mar 2018

# pip2.7 uninstall cryptography
# pip2.7 install cryptography
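As an additional check, the PROTOCOL_TLSv1_2 attribute is only present when Python is linked against an OpenSSL build that supports TLS 1.2:

# python2.7 -c "import ssl; print hasattr(ssl, 'PROTOCOL_TLSv1_2')"
True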

Setup the python virtual environment

# /usr/local/bin/easy_install-2.7 virtualenv
# mkdir -p va/oms
# virtualenv va/oms
# cd va/oms/bin
# source activate

Edit /etc/sysconfig/selinux and set SELinux to disabled:

SELINUX=disabled
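The disabled setting only takes effect after a reboot; to stop enforcement immediately you can also switch to permissive mode:

# setenforce 0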

# wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/OMSAgent_v1.6.0-42/omsagent-1.6.0-42.universal.x64.sh --no-check-certificate

# ./omsagent-1.6.0-42.universal.x64.sh --install

(oms) [root@localhost bin]# ./omsadmin.sh -w 2 -s
………….
Starting Operations Management Suite agent (
info      Configured omsconfig

Start OMS and check the logs for any problems

# /opt/microsoft/omsagent/bin/service_control restart
# tail -100 /var/opt/microsoft/omsagent/log/omsagent.log
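A quick way to spot problems in the log:

# grep -i error /var/opt/microsoft/omsagent/log/omsagent.log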

If SELinux blocks the agent, clear the messages by generating and installing a local policy module:

# audit2allow -a
# audit2allow -a -M myallow
# semodule -i myallow.pp

Install updated curl Package

This installs curl in /usr/local/bin/curl:

# mkdir ~/src
# cd ~/src
# wget http://curl.haxx.se/download/curl-7.42.1.tar.gz
# tar -xzvf curl-*.tar.gz
# cd curl-*
# ./configure --with-ssl=/usr/local/ssl --disable-ldap && make && make install

Install updated openssh

# wget https://mirror.bytemark.co.uk/pub/OpenBSD/OpenSSH/portable/openssh-7.7p1.tar.gz --no-check-certificate
# tar -zxvf openssh-7.7p1.tar.gz
# cd openssh-7.7p1
# ./configure --with-ssl-dir=/usr/local/ssl
# make
# make install
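Verify the new build (the path assumes the default /usr/local prefix; the banner should show OpenSSH 7.7p1 linked against the updated OpenSSL):

# /usr/local/bin/ssh -V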

Install Azure waagent

# wget https://github.com/Azure/WALinuxAgent/archive/master.zip
# unzip master.zip
# cd WALinuxAgent-master
(oms) python setup.py install --register-service
# more /usr/lib/systemd/system/waagent.service
# python -u /usr/sbin/waagent -daemon
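Check the agent is running:

# ps -ef | grep -i waagent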

Azure to Terraform (az2tf)

‘Reverse Engineering’ Azure to Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform has great support for Azure, and its capabilities are extended frequently; see link.

Configuration files describe to Terraform the components needed to run a single application or your entire Azure subscription. Terraform generates an execution plan describing what it will do to reach the desired state (terraform plan), and then executes it (terraform apply) to build the described infrastructure in Azure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The above is taken from the Terraform website. Terraform is a great tool for creating new Azure infrastructure and has some real advantages, notably the terraform plan capability and the simple, elegant configuration language used to describe the infrastructure.

One problem with Terraform though: what if you already have an Azure deployment, built with ARM templates, PowerShell, cli2 or just the portal? How can you bring it under the control of Terraform?

Terraform Import Challenges

A partial answer to this problem is the "terraform import" command, which allows you to import the current state of an Azure resource, but there are a few problems with this approach:

  • Multiple terraform imports have to be performed, each referencing the correct full Azure resource ID.
  • You have to create a "stub" terraform configuration file for each of the resources before the import will work (see the sketch after this list).
  • The terraform stub file will have lots of information/parameters missing and so does not represent your current infrastructure.
  • You have to pay close attention to naming conventions as many Azure resources are interlinked. For example, for a single VM you may have NIC(s), Public IP(s), NSG(s), VNet, Subnet(s), Route Table(s), Managed Disk(s) and a Storage Account for diagnostics. All of these have to be consistently named as they will be cross-referenced in the terraform configuration files.
  • A terraform plan command will likely show there is much work to be done editing all the missing information into your terraform configuration files.
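For illustration, importing even a single resource group by hand needs a stub config plus an import command, along the lines of the sketch below (the resource names and location are hypothetical; richer resources such as VMs would be missing many more parameters):

$ cat > myrg.tf <<'EOF'
resource "azurerm_resource_group" "myrg" {
  name     = "myrg"
  location = "westeurope"
}
EOF
$ terraform import azurerm_resource_group.myrg /subscriptions/<your subscription id>/resourceGroups/myrg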

The ‘az2tf’ Tool

What is needed is a tool that reads your current Azure deployment, automatically creates fully populated terraform configuration files and performs the corresponding 'terraform import' commands.

A subsequent 'terraform plan' command should then report that zero additions/deletions are required to the infrastructure, and ideally zero 'changes'. (The latter is hard to achieve due to transient bugs and inconsistencies in how some string values are handled, but they are being worked through.)

This neutral output from the terraform plan command gives confidence that the automatically generated terraform configuration files are an accurate representation of the existing infrastructure within Azure.

A tool that can achieve the above, called "az2tf", is freely available on github here.

It is written as a number of bash shell scripts (one for each terraform provider) called from a master script 'az2tf.sh', and is intended to be run from the Terraform marketplace image:

https://docs.microsoft.com/en-us/azure/terraform/terraform-vm-msi

The tool is "use at your own risk", but as it makes no changes to Azure and simply reads information, generates config files and runs a terraform plan, it's safe to use on any Azure subscription.

Using the ‘az2tf’ tool

Ensure you have set up and authorised both terraform and Azure cli2 correctly for your subscription (you'll need read access, and also list/read access to any Key Vaults you use).

Download 'az2tf' or clone it from github into an empty directory on your terraform VM.

Run from the command line:

./az2tf.sh -s <your subscription id>

or

./az2tf.sh -s <your subscription id> -g <a resource group name>

Wait patiently; it's slow, but good 🙂

As it runs it will:

  • Create a new sub directory tf.<your subscription id>
  • You'll then see it looping through the terraform providers, generating terraform config files and performing terraform imports.
  • For the final step it runs a terraform plan command


More details are on the project page README.md on github.

As this is a 'hobby project' there will be issues; a tool like this needs extensive testing to find all the edge cases, so check back regularly on github for updates and for support of new azurerm terraform providers.

Part 2 (tbd) will go on to talk about how the tool was put together, using the primary tools: Azure cli2 and bash (particularly jq and printf).

Calling Linux Custom Script Extensions from PowerShell

If you run Linux VMs on Azure, then at some point you will want to call a Custom Script Extension that runs your own script (bash etc.) to perform some operations within the virtual machine.

There are a few options for doing this, including of course using Chef and Puppet, or, as documented here, simply calling a Custom Script Extension with your own script.

When experimenting with this I found a lot of the documentation describing how to do it was out of date, principally because it did not use the "customScript" extension type (publisher Microsoft.Azure.Extensions).

Note that for Windows VMs you use a different custom extension type, "CustomScriptExtension" (publisher Microsoft.Compute); see the commented section in the middle of the PowerShell script below.

Listed below is the PowerShell to execute a script "command2.sh", which is stored as a blob in an Azure storage account in a container named "myscripts". The PowerShell also grabs any output (stdout) from command2.sh; getting stderr would be done in a similar way.
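As a concrete illustration, command2.sh could be a minimal script like the one below (a sketch; in practice it is whatever you need to run inside the VM):

#!/bin/bash
# report this VM's hostname and first IP address on stdout
echo "hostname: $(hostname)"
echo "ipaddress: $(hostname -I | awk '{print $1}')"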

This script assumes you are using ARM and have previously logged into Azure from PowerShell (Login-AzureRmAccount) and, if required, set the default subscription (Select-AzureRmSubscription).

$rg='your-resource-group'
$vmname='yourvm'
$storageaccountname='your-storage-account-name'
$cont='myscripts'
$Extensionname='customScript'
$vm = Get-AzurermVM -Name $vmname -ResourceGroupName $rg
# get the storage key
$key = (Get-AzureRmStorageAccountKey -Name $storageaccountname -ResourceGroupName $rg).value[0]
if (!$key) {
    write-output "Could not find a storage key"
    exit
}
#
# check if there's an existing custom script extension
# if there is, remove it - you're only allowed one at a time
#
$extname = ($VM.Extensions | Where { $_.VirtualMachineExtensionType -eq 'customScript' }).name
if ($extname) {
    write-output "removing existing extension: $extname"
    remove-azurermvmextension -name $extname  -ResourceGroupName $rg  -VMName $vmname -force
    write-output "removed - waiting 10 seconds ...."
    start-sleep -Seconds 10
}
# get extension types
# for windows use:
# Get-AzureRmVMExtensionImage -Location westeurope -PublisherName Microsoft.Compute -Type CustomScriptExtension
# for Linux use:
# Get-AzureRmVMExtensionImage -Location westeurope -PublisherName Microsoft.Azure.Extensions -Type customScript
#
#
#For Linux:
#
# Setup for call to Set-AzureRmExtension
#
$TheURI = "https://$storageaccountname.blob.core.windows.net/$cont/command2.sh"
$Settings = @{"fileUris" = @($TheURI); "commandToExecute" = "./command2.sh"};
$ProtectedSettings = @{"storageAccountName" = $storageaccountname; "storageAccountKey" = $key};
#
Set-AzureRmVMExtension -ResourceGroupName $rg -Location $vm.location -VMName $vmname -Name $Extensionname -Publisher "Microsoft.Azure.Extensions" -Type "customScript" -TypeHandlerVersion "2.0" -Settings $Settings -ProtectedSettings $ProtectedSettings
#
if ($?) {
  write-output "set extension ok"
  #
  # Get script extension output
  #
  $extout=((Get-AzureRmVM -Name $VMName -ResourceGroupName $RG -Status).Extensions | Where-Object {$_.Name -eq $ExtensionName}).statuses.Message
  #
  # Parse the stdout 
  #
  $stdout=$extout.substring($extout.indexof('[stdout]')+8,$extout.indexof('[stderr]')-$extout.indexof('[stdout]')-8)
  $stdout=$stdout.trim()
  write-output "stdout from command: $settings.commandToExecute"
  $stdout
}
else {
    write-output "set extension problem?"
}
#
#


The above is particularly useful when developing runbooks that are called as steps in an Azure Site Recovery plan.

If multiple Linux VMs are involved in the recovery plan it's often necessary to query hostnames, assigned IP addresses or other information from virtual machine A and feed that information as a parameter into virtual machine B, so that B can correctly configure itself with A's information as part of the failover process.

E.g. as part of the failover plan, custom script extension 1 runs on VM1 (the database server) and returns its hostname/IP address, which is then passed as a parameter into custom script extension 2, run by VM2 (the web server). VM2 then reconfigures itself with the new hostname/IP address the database server was given as it failed over into Azure, as this may differ from what was used on-premises (or in the source Azure region if you're using Azure-to-Azure ASR).

Azure VM Network Bandwidth

July 2017: updated with the new VPN Gateway types.

In this blog post we look at some network bandwidth tests for a variety of Azure VM sizes.

The tests were run between two VMs in the same VNet. Network bandwidth testing was done on Linux using iperf3 on CentOS 7.2, and on Windows 2016 using the ntttcp tool.


Both single-stream and multi-stream tests were used. Of course your actual throughput numbers will vary from those seen in the tests below due to a number of factors (OS type, workload characteristics etc.).

Note:  Microsoft is currently in the process of implementing some Azure Network optimisations:

    1. “Receive Side Scaling” – https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth
    2. “Accelerated Networking” – https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-accelerated-networking-portal

The table below has been updated to include optimisation 1; further updates will follow later in the year.
Entries marked "(with 1)" include optimisation 1, "Receive Side Scaling".

| OS | VM Type | # Cores | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
| --- | --- | --- | --- | --- |
| CentOS (with 1) | A0 Basic | 0.25 | 10 Mbps | 10 Mbps |
| CentOS (with 1) | A1 Basic | 1 | 100 Mbps | 100 Mbps |
| CentOS (with 1) | A2 Basic | 2 | 200 Mbps | 200 Mbps |
| CentOS (with 1) | A3 Basic | 4 | 400 Mbps | 395 Mbps |
| CentOS (with 1) | A4 Basic | 8 | 805 Mbps | 816 Mbps |
| Windows | A4 Basic | 8 | 737 Mbps | 734 Mbps |
| CentOS (with 1) | A1 Standard | 4 | 500 Mbps | 500 Mbps |
| CentOS (with 1) | A2 Standard | 4 | 500 Mbps | 500 Mbps |
| CentOS (with 1) | A3 Standard | 4 | 1000 Mbps | 1000 Mbps |
| CentOS (with 1) | A4 Standard | 8 | 1990 Mbps | 2000 Mbps |
| CentOS (with 1) | A6 Standard | 8 | 1000 Mbps | 1000 Mbps |
| CentOS (with 1) | A6 Standard | 8 | 2000 Mbps | 2000 Mbps |
| CentOS (with 1) | A1 v2 | 1 | 495 Mbps | 488 Mbps |
| Windows | A1 v2 | 1 | 411 Mbps | 467 Mbps |
| CentOS (with 1) | A2 v2 | 2 | 500 Mbps | 492 Mbps |
| CentOS (with 1) | A4 v2 | 4 | 998 Mbps | 999 Mbps |
| CentOS (with 1) | A8 v2 | 8 | 1980 Mbps | 1910 Mbps |
| CentOS (with 1) | A8 | 8 | 4000 Mbps | 4200 Mbps |
| CentOS (with 1) | A9 | 16 | 4550 Mbps | 7850 Mbps |
| CentOS (with 1) | A10 | 8 | 3990 Mbps | 3995 Mbps |
| Windows | A10 | 8 | 1820 Mbps | 3942 Mbps |
| CentOS (with 1) | A11 | 16 | 4410 Mbps | 8000 Mbps |
| CentOS (with 1) | D1 v2 | 1 | 750 Mbps | 726 Mbps |
| CentOS (with 1) | D2 v2 | 2 | 1500 Mbps | 1500 Mbps |
| CentOS (with 1) | D3 v2 | 4 | 3000 Mbps | 3000 Mbps |
| CentOS (with 1) | D4 v2 | 8 | 4950 Mbps | 6000 Mbps |
| CentOS (with 1) | D5 v2 | 16 | 4840 Mbps | 12000 Mbps |
| CentOS (with 1) | D11 v2 | 2 | 1500 Mbps | 1500 Mbps |
| CentOS (with 1) | D12 v2 | 4 | 3000 Mbps | 3000 Mbps |
| CentOS (with 1) | D13 v2 | 8 | 4210 Mbps | 5990 Mbps |
| CentOS (with 1) | D14 v2 | 16 | 4990 Mbps | 11900 Mbps |
| CentOS (with 1) | D15 v2 | 20 | 4440 Mbps | 15500 Mbps |
| Windows | D15 v2 | 20 | 1002 Mbps | 12176 Mbps |
| CentOS (with 1) | F1 | 1 | 750 Mbps | 749 Mbps |
| CentOS (with 1) | F2 | 2 | 1500 Mbps | 1490 Mbps |
| CentOS (with 1) | F4 | 4 | 2990 Mbps | 2995 Mbps |
| Windows (with 1) | F4 | 4 | 928 Mbps | 2640 Mbps |
| CentOS (with 1) | F8 | 8 | 3490 Mbps | 5990 Mbps |
| Windows | F8 | 8 | 390 Mbps | 4388 Mbps |
| CentOS (with 1) | F16 | 16 | 4110 Mbps | 11800 Mbps |
| Windows (with 1) | F16 | 16 | 1096 Mbps | 8416 Mbps |
| CentOS (with 1) | G1 | 2 | 2000 Mbps | 2000 Mbps |
| CentOS (with 1) | G2 | 4 | 3270 Mbps | 4000 Mbps |
| CentOS (with 1) | G3 | 8 | 3160 Mbps | 8000 Mbps |
| CentOS (with 1) | G4 | 16 | 3970 Mbps | 8880 Mbps |
| Windows | G4 | 16 | 1602 Mbps | 9488 Mbps |
| Windows (with 1) | G4 | 16 | 1904 Mbps | 7856 Mbps |
| CentOS (with 1) | G5 | 32 | 3850 Mbps | 13700 Mbps |
| CentOS (with 1) | NV6 | 6 | 4850 Mbps | 5970 Mbps |
| CentOS (with 1) | NV12 | 12 | 4760 Mbps | 12200 Mbps |

Test method

In each CentOS VM:

$ sudo yum -y update    (a very important step!)

Ensure Receive Side Scaling is enabled see:  https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth

$ wget https://iperf.fr/download/fedora/iperf3-3.1.3-1.fc24.x86_64.rpm
$ sudo yum install iperf3-3.1.3-1.fc24.x86_64.rpm

On one VM run iperf in server mode
$ iperf3 -s

On another VM run single stream test:
$ iperf3 -c ip-of-server

For the multiple-streams test:
$ iperf3 -c ip-of-server -P n

Where n = number of cores in the VM
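For example, on a 4-core VM (10.0.0.4 is a hypothetical server address):

$ iperf3 -c 10.0.0.4 -P 4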

In each Windows 2016 VM:

Apply the latest Windows updates, then download ntttcp from here.

On one VM (ip=w.x.y.z) run ntttcp in receiver mode
C:> ntttcp -r -m 1,*,w.x.y.z
&
C:> ntttcp -r -m n,*,w.x.y.z   (for the multi-thread test)
Where n = 8x number of cores in VM

On another VM run the single-thread test with ntttcp in sender mode:
C:> ntttcp -s -m 1,*,w.x.y.z

For the multi-thread test:
C:> ntttcp -s -m n,*,w.x.y.z

Where n = the number of cores in the VM

Peering VNets (Directly connected)


Testing between VMs in directly peered VNets showed no noticeable difference.

Peered VNets in the same region

Testing between VMs indirectly peered via the new gateway types (VPNGW1, 2 & 3) shows bandwidth up to 2.8 Gbps, a big step forward from the previous gateway types, which returned a maximum of 980 Mbps when using the now deprecated 'High Performance' gateway.


| Gateway Type | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
| --- | --- | --- |
| VPNGW1 | 700 Mbps | 717 Mbps |
| VPNGW2 | 1400 Mbps | 1430 Mbps |
| VPNGW3 | 1810 Mbps | 2880 Mbps |

For comparison, here are the results from the now deprecated gateway types Standard and High Performance:

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput (1 x #Cores) |
| --- | --- | --- |
| Standard | 472 Mbps | 580 Mbps |
| High Performance | 720 Mbps | 980 Mbps |

Peered VNets via two BGP Gateways, one in each region


| Gateway Type | Single Stream Throughput | 'N' Streams Throughput |
| --- | --- | --- |
| VPNGW1 | 550 Mbps | 670 Mbps |
| VPNGW2 | 650 Mbps | 780 Mbps |
| VPNGW3 | 650 Mbps | 650 Mbps |

For comparison, here are the results from the now deprecated gateway types Standard and High Performance:

| Gateway Type | Single Stream Throughput | 'N' Streams Throughput |
| --- | --- | --- |
| Standard | 200 Mbps | 280 Mbps |
| High Performance | 210 Mbps | 411 Mbps |

Azure Application Proxy – In Action

This blog post demonstrates how to use Azure Application Proxy.

Azure Application Proxy enables you to take an internal web application and make it securely available outside of your organisation. A few different authentication options can be enabled for your internal application to help secure it:

  • If your application does not use any form of sign-in, then Azure Active Directory (AAD) sign-in can be added to the public endpoint Azure Application Proxy provides.
  • Pass-through, relying on your on-premises authentication.
  • If your application does use Active Directory sign-in, then you have the option to set up and use AAD-based single sign-on. This post demonstrates that option.

If you would like to test the scenario described above, you may want to first create a simple application rather than using a real one. Part 1 of this blog shows how to do that using Visual Studio. If you already have a web application that authenticates against your local Active Directory, you can skip Parts 1, 2 and 3. If you already have your Azure Active Directory synchronized with your local domain, you can skip Part 4 as well.

Part 1 – Creating a simple application with Visual Studio

Part 2 – Publishing the Application to a local IIS server

Part 3 – Setting up IIS for Authentication

Part 4 – Set up your Local Domain and Directory Synchronization

Part 5 – Enable Azure Application Proxy