Windows Azure

Understanding SLA vs Downtime

Understanding the SLA (Service Level Agreement) is very important when you choose to build from scratch or migrate your application to the cloud.

You can find the SLA provided by Microsoft for its Azure services here.

Usually the SLAs range from 99.9% (three 9s) to 99.999% (five 9s). At first glance, the difference may not seem major. However, when you are hosting a production application, it is very important to understand what the 9s really mean in terms of downtime.

Here is a table to help you map SLA percentages to downtime:

| SLA percentage | Downtime per week | Downtime per month | Downtime per year |
|----------------|-------------------|--------------------|-------------------|
| 99             | 1.68 hours        | 7.2 hours          | 3.65 days         |
| 99.9           | 10.1 minutes      | 43.2 minutes       | 8.76 hours        |
| 99.95          | 5 minutes         | 21.6 minutes       | 4.38 hours        |
| 99.99          | 1.01 minutes      | 4.32 minutes       | 52.56 minutes     |
| 99.999         | 6 seconds         | 25.9 seconds       | 5.26 minutes      |
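These downtime figures follow directly from the SLA percentage. As a quick PowerShell sketch that reproduces the 99.9% row of the table:

# Downtime implied by an SLA percentage (99.9% as an example)
$sla = 99.9
$downFraction = (100 - $sla) / 100            # fraction of time the service may be down

"{0:N1} minutes per week"  -f ($downFraction * 7 * 24 * 60)    # ~10.1 minutes
"{0:N1} minutes per month" -f ($downFraction * 30 * 24 * 60)   # ~43.2 minutes
"{0:N2} hours per year"    -f ($downFraction * 365 * 24)       # ~8.76 hours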


Azure – Expand OS Drive for Azure Virtual Machine

When you create an Azure virtual machine, it comes with an OS drive of a default size, which differs between Windows and Linux: roughly 128 GB for Windows and roughly 30 GB for Linux. As of this writing, the OS drive can be expanded to a maximum of 2 TB. Use the code below to allocate more space to your Azure VM's OS drive. After executing the code, log in to the virtual machine and expand the volume using the newly allocated space (via Disk Management on Windows, or the PowerShell alternative sketched after the scripts below).

Resize a Managed Disk:

## Resize OS disk size for Managed Disk

# Set Resource Group and Virtual Machine name
$resourceGroupName = 'my_resource_group_name'
$virtualMachineName = 'my_virtual_machine_name'

# Create VM reference object
$vm = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName

# Stop the Virtual Machine (-Force skips the confirmation prompt)
Stop-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName -Force

# Create a reference to the Managed disk and set the size as required
$disk= Get-AzureRmDisk -ResourceGroupName $resourceGroupName -DiskName $vm.StorageProfile.OsDisk.Name
$disk.DiskSizeGB = 1023
Update-AzureRmDisk -ResourceGroupName $resourceGroupName -Disk $disk -DiskName $disk.Name

# Start the Azure Virtual Machine
Start-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName

Resize an Unmanaged Disk:

## Resize OS disk size for Unmanaged Disk

# Set Resource Group and Virtual Machine name
$resourceGroupName = 'my-resource-group-name'
$virtualMachineName = 'my-vm-name'

# Create VM reference object
$vm = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName

# Stop the Virtual Machine (-Force skips the confirmation prompt)
Stop-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName -Force

# Create a reference to the Unmanaged disk and set the size as required
$vm.StorageProfile.OSDisk.DiskSizeGB = 1023
Update-AzureRmVM -ResourceGroupName $resourceGroupName -VM $vm

# Start the Azure Virtual Machine
Start-AzureRmVM -ResourceGroupName $resourceGroupName -Name $virtualMachineName
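After running either script, the extra space still has to be claimed inside the guest OS. On Windows Server 2012 and later you can use the Storage PowerShell module instead of the Disk Management GUI; a minimal sketch for the C: drive, run inside the VM:

# Run inside the VM: grow the C: partition into the newly allocated space
$maxSize = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $maxSize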


Azure – Tag Azure Virtual Machines

In a cloud environment, tagging your resources is an effective way to organize and manage your infrastructure. For example, if your client has multiple silos such as marketing and sales, each with a different cost model, tagging the resources provides an effective way to measure and manage cost per silo.

Use the code below to tag your Azure virtual machine:

# Set Resource Group and Virtual Machine name
$resource_group = "resourceGroupName"
$vm_name = "virtualMachineName"
# Fetch the existing tags (so they are preserved); default to an empty table
$tags = (Get-AzureRmResource -ResourceGroupName $resource_group -Name $vm_name).Tags
if ($null -eq $tags) { $tags = @{} }
# Indexed assignment adds the tag, or overwrites it if it already exists
$tags["toolsinstalled"] = "true"
Set-AzureRmResource -ResourceGroupName $resource_group -Name $vm_name -ResourceType "Microsoft.Compute/VirtualMachines" -Tag $tags -Force

You may tag other resources (Storage Account, Virtual Network, etc.) by providing the correct value for the -ResourceType parameter.

If you have a set of standard tags and you want to tag all virtual machines in a resource group, use the code below as a template (a sample invocation follows the script):

Param(
    # Resource group containing the virtual machines to tag
    [string]$resource_group,

    # Tag values passed as parameters
    [string]$rtb_status_tag_value,
    [string]$os_type_tag_value,
    [string]$environment_tag_value,
    [string]$build_date_tag_value,
    [string]$project_name_tag_value,
    [string]$support_type_tag_value
)

$azure_vm_list = Get-AzureRmVM -ResourceGroupName $resource_group

foreach ($azure_vm_list_iterator in $azure_vm_list) {

    # Fetch the existing tags so they are preserved
    $tags = (Get-AzureRmResource -ResourceGroupName $resource_group -Name $azure_vm_list_iterator.Name).Tags
    if ($null -eq $tags) { $tags = @{} }

    # Add the standard tags, overwriting any existing values for the same keys
    $tags["RTB Status"]   = $rtb_status_tag_value
    $tags["OS Type"]      = $os_type_tag_value
    $tags["Environment"]  = $environment_tag_value
    $tags["Build Date"]   = $build_date_tag_value
    $tags["Project Name"] = $project_name_tag_value
    $tags["Support Type"] = $support_type_tag_value

    "Setting tags for $($azure_vm_list_iterator.Name)" | Write-Output

    Set-AzureRmResource -ResourceGroupName $resource_group -Name $azure_vm_list_iterator.Name -ResourceType "Microsoft.Compute/virtualMachines" -Tag $tags -ApiVersion '2017-12-01' -Force
}
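Assuming the template above is saved as a script (the file name here is hypothetical), an invocation could look like this:

.\Set-AzureVmTags.ps1 -resource_group "my-resource-group" `
    -rtb_status_tag_value "Active" `
    -os_type_tag_value "Windows" `
    -environment_tag_value "Production" `
    -build_date_tag_value "2018-06-01" `
    -project_name_tag_value "Contoso" `
    -support_type_tag_value "24x7"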


Azure – Who de-allocated my virtual machine?

We often want to know the details of certain operations performed on our Azure resources.

One such case would be tracking which virtual machines are being de-allocated by users, so we can decide to stop monitoring them.

I have written a simple script that makes this tracking easy.

Download the script

This script fetches information about certain Azure operations performed against Azure resources and writes it to a CSV file. Specifically, it creates a CSV file containing the list of Azure operations that de-allocated an Azure virtual machine.

You may alter the IF condition to produce the desired results.

For example, fetch operational logs for Azure Storage only, or for VM restarts, or for any other operation on any Azure resource.

The CSV file will be saved in the folder from which you run the script, under the name "Azure_activity_logs.csv".
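If you would rather see the core idea before downloading, here is a minimal sketch of the approach, assuming the AzureRM module is installed and you are logged in:

# Pull the last 7 days of activity log entries
$logs = Get-AzureRmLog -StartTime (Get-Date).AddDays(-7)

$results = foreach ($entry in $logs) {
    # Keep only operations that de-allocate a virtual machine (alter this IF condition as needed)
    if ($entry.OperationName.Value -eq 'Microsoft.Compute/virtualMachines/deallocate/action') {
        [pscustomobject]@{
            Caller    = $entry.Caller
            Resource  = $entry.ResourceId
            Operation = $entry.OperationName.Value
            Status    = $entry.Status.Value
            TimeStamp = $entry.EventTimestamp
        }
    }
}

# Save next to the script, matching the file name mentioned above
$results | Export-Csv -Path '.\Azure_activity_logs.csv' -NoTypeInformation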


Azure – Audit report (Azure Automation runbook)

The PowerShell script is an Azure Automation runbook that pulls the data below and populates it into CSV files. The script then summarizes the data in the body of an email and sends it to the recipient with the CSV files as attachments.

If the runbook is scheduled to run every day, you will get a summary, high-level view of what is happening in your environment delivered to your mailbox. This email could be the first report an organization's senior management would want to look at.

1. Count of De-allocated Azure virtual machines

2. Count of Running Azure virtual machines

3. Count of Stopped Azure virtual machines

4. Count of Azure virtual machines that do not have native backup configured (Azure Backup and Recovery Services)

5. Count of Inbound Security rules that cause vulnerabilities

Download the script
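As a rough sketch of how the virtual machine counts (items 1-3 above) can be gathered, assuming a recent AzureRM.Compute module:

# List every VM in the subscription along with its power state
$vms = Get-AzureRmVM -Status

$deallocated = @($vms | Where-Object { $_.PowerState -eq 'VM deallocated' }).Count
$running     = @($vms | Where-Object { $_.PowerState -eq 'VM running' }).Count
$stopped     = @($vms | Where-Object { $_.PowerState -eq 'VM stopped' }).Count

"De-allocated: $deallocated | Running: $running | Stopped: $stopped" | Write-Output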

Sample Summary:

[Screenshot: sample audit summary email]

The email is sent via the SendGrid service. You need to update the script with your SendGrid credentials.

You may choose the "Free Tier" pricing for SendGrid. Below is the documentation for creating a SendGrid account:

https://docs.microsoft.com/en-us/azure/sendgrid-dotnet-how-to-send-email

Note: The script is an Azure Automation runbook. You have to run it from an Azure Automation account.

If you would like me to add more data that would be useful as an Azure audit report, please let me know.


Azure – Unable to ping Azure Virtual Machine from outside Azure

You buy a new Azure subscription and spin up an Azure virtual machine. Now you want to test whether it is working, so you pull up the trusty Command Prompt (or PowerShell) and ping the VIP (virtual/public IP) of your Azure virtual machine. Voila! The ping fails with 100% loss. But the Azure Portal shows that your virtual machine is up and running. To double-check, you even RDP to your VM, and it is all good. This is one of the many situations that confuse Azure newcomers. Let me break it down for you:

The explanation for this behavior is that the good old Windows Ping.exe uses the ICMP protocol to communicate, but the Azure Load Balancer does not support ICMP for connections made from an external source to Azure. This means your local computer will not be able to "ping" (probe using Ping.exe) Azure virtual machines. However, the Azure Load Balancer does allow ICMP inside Azure (internally), which means two Azure virtual machines can talk to each other.

The solution is to probe a port of your virtual machine.

Example: Ping xx.xx.xx.xx:1234

Since Ping.exe does not support probing a port, we have to use other tools, such as PSPing or tcping, to achieve this.

This explains most of it. Now I will demonstrate what I just explained.

Below are the details of my virtual machine:

[Screenshot: VM details]

When I ping the VIP (13.76.247.67) using the default Ping.exe, you can observe that we end up with 100% packet loss.

[Screenshot: ping to the VIP failing with 100% packet loss]

This behavior occurs because the Azure Load Balancer does not allow ICMP communication between Azure and an external source, and Microsoft's Ping.exe uses the ICMP protocol.

The solution is to use PSPing (among many other options) and ping a port of the virtual machine. Please note that you have to add the relevant entry to the NSG (Network Security Group) to allow incoming traffic to your virtual machine.
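For reference, the NSG entry can also be added with PowerShell. A minimal sketch with hypothetical resource names, opening port 3389:

# Fetch the NSG, append an inbound allow rule for port 3389, and save the change
$nsg = Get-AzureRmNetworkSecurityGroup -Name 'my-nsg' -ResourceGroupName 'my-resource-group'

$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name 'Allow-RDP-Inbound' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 3389 |
    Set-AzureRmNetworkSecurityGroup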

Since this is just a demo, I have allowed all traffic to my virtual machine via port 3389. In a production environment, you should apply appropriate NSG rules and ACLs to your virtual machine and subnet.

[Screenshot: NSG inbound rule allowing all traffic on port 3389]

PSPing.exe comes as part of the PSTools bundle. The toolset can be downloaded here.

Copy PsPing onto your executable path. Typing “psping” displays its usage syntax.

[Screenshot: psping usage syntax]

Note: If you are using the PSPing tool for the first time, you may have to agree to the terms and conditions before using it.

Since I have port 3389 open for all incoming traffic, I will go ahead and use the PSPing tool to ping that port from my local computer. As you can see, it works like a charm!
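The command is simply the VIP plus the port (substitute your own values):

psping 13.76.247.67:3389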

[Screenshot: successful psping to port 3389]

Finally, note that you can probe only a port for which you have enabled incoming traffic. Since I have not enabled port 80, I expect those packets to be dropped.

[Screenshot: psping to port 80 with dropped packets]

Azure – High-level discussion of Azure Storage architecture

Windows Azure Storage is a cloud solution that lets customers store a seemingly limitless amount of data for any duration of time. It has been in production since November 2008. It is used for storing application data, live streaming, social-network search, gaming and music content, and more.

Once your data is in Azure Storage, you can access it at any time and from anywhere, and you pay only for what you use and store. Thousands of customers already use Azure Storage services.

Visit the Azure Portal to create your free subscription and try out Azure Storage. Also, check out the Microsoft documentation for a jump start and the prerequisites for using Azure Storage.

Why use Azure Storage?

Disaster Recovery: Azure Storage stores your data in data centres that are miles apart (a minimum of 400 miles). This provides a strong guard against natural calamities such as earthquakes and tornadoes. Replication options (LRS, ZRS, GRS, RA-GRS) are provided and can be chosen to suit the business needs.

Multi-Tenancy: As with other services, Azure Storage uses the concept of shared tenancy. To reduce storage costs, data from multiple customers is served from the same storage infrastructure, taking advantage of customers' varying workloads. This requires less provisioned storage at any one time than running each service on its own dedicated hardware.

Global Name-space: For ease of use, Azure Storage implements a global namespace that allows data to be stored and accessed in a consistent manner from any location in the world.

Global Partitioned Name-space:

Exabytes of data and beyond are stored in Azure Storage. Azure had to come up with a solution that allowed its clients to store and retrieve data without much hassle. To provide this capability, Azure leverages DNS as part of the storage namespace and breaks the namespace into three parts: an Account Name, a Partition Name, and an Object Name.

Syntax: http(s)://AccountName.<service>.core.windows.net/PartitionName/ObjectName

Account Name: This is the customer-selected storage account name (entered while creating the storage account via the Azure Portal or Azure PowerShell). The account name is used to locate the primary storage cluster and the data centre where the requested data is stored. This primary location is where all requests go to reach the data for that account.

Partition Name: This name locates the data once a request reaches the primary cluster. It is also used to scale out access to the data across storage nodes based on traffic.

Object Name: This identifies the individual objects within that partition.

For Blobs, the full blob name is the PartitionName.

For Tables, each entity (row) in the table has a primary key that consists of two properties: the PartitionName and the ObjectName. This distinction allows applications using Tables to group rows into the same partition to perform atomic transactions across them.

For Queues, the queue name is the PartitionName and each message has an ObjectName to uniquely identify it within the queue.
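For example, for a hypothetical blob stored in an account named contoso, the three parts line up as follows:

URL: https://contoso.blob.core.windows.net/images/logo.png
  AccountName   = contoso (locates the primary storage cluster holding the account)
  PartitionName = images/logo.png (for Blobs, the full blob name is the PartitionName)
  ObjectName    = not used separately for Blobs, since the full blob name is the partition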

Architectural Components:

[Figure: Azure Storage architectural components]

Storage Stamp: This is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. The goal is to keep a stamp around 70% utilized in terms of capacity, transactions, and bandwidth. About 20% is kept as a reserve (a) for disk short-stroking, which gains better seek times and higher throughput by utilizing the outer tracks of the disks, and (b) to continue providing storage capacity and availability in the presence of a rack failure within a stamp. When a storage stamp reaches 70% utilization, the location service migrates accounts to different stamps using inter-stamp replication.

Location Service (LS): Manages all the storage stamps and is also responsible for managing the account namespaces across all stamps. The LS itself is distributed across two geographic locations for its own disaster recovery.

Azure Storage provides storage from multiple locations. Each location is a data centre that holds multiple storage stamps. To provision additional capacity, the LS can add new regions, new locations to a region, and new stamps to a location. The LS can then allocate new storage accounts to those new stamps, as well as load-balance (migrate) existing storage accounts from older stamps to new stamps.

As shown in the figure, when an application requests a new storage account for storing data, it specifies a location affinity for the storage (for example, US North). The LS chooses a storage stamp within that location as the primary stamp for the account and stores the account metadata in the chosen stamp, which tells the stamp to start taking traffic for the assigned account. The LS then updates DNS so that requests route from the name https://AccountName.service.core.windows.net/ to that storage stamp's virtual IP (VIP, an IP address the storage stamp exposes for external traffic).

Three Layers within Storage Stamp:

Stream Layer: This layer stores the bits on disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. The stream layer can be thought of as a distributed file system layer within a stamp. It understands files, called "streams", how to store them, and how to replicate them, but it does not understand higher-level object constructs or their semantics.

Partition Layer: The partition layer is built for (a) managing and understanding higher-level data abstractions (Blob, Table, Queue), (b) providing a scalable object namespace, (c) providing transaction ordering and strong consistency for objects, (d) storing object data on top of the stream layer, and (e) caching object data to reduce disk I/O.

Front End Layer: The Front-End (FE) layer consists of a set of stateless servers that take incoming requests. Upon receiving a request, an FE looks up the AccountName, authenticates and authorizes the request, then routes it to a partition server in the partition layer (based on the PartitionName). The system maintains a Partition Map that keeps track of the PartitionName ranges and which partition server is serving which PartitionNames. The FE servers cache the Partition Map and use it to determine which partition server to forward each request to. The FE servers also stream large objects directly from the stream layer and cache frequently accessed data for efficiency.

Two Replication Engines:

Intra-Stamp Replication (Stream Layer): This system provides synchronous replication and is focused on making sure all the data written into a stamp is kept durable within that stamp. It keeps enough replicas of the data across different nodes in different fault domains to keep data durable within the stamp in the face of disk, node, and rack failures. Intra-stamp replication is done completely by the stream layer and is on the critical path of the customer's write requests. Once a transaction has been replicated successfully with intra-stamp replication, success can be returned to the customer.

Inter-Stamp Replication (Partition Layer): This system provides asynchronous replication and is focused on replicating data across stamps. Inter-stamp replication is done in the background and is off the critical path of the customer’s request. This replication is at the object level, where either the whole object is replicated or recent delta changes are replicated for a given account. Inter-stamp replication is used for (a) keeping a copy of an account’s data in two locations for disaster recovery and (b) migrating an account’s data between stamps. Inter-stamp replication is configured for an account by the location service and performed by the partition layer.

Note: The above content is summarized from a Microsoft technical paper titled "Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency". You can download the PDF here.

Azure – RDP from a Linux box to an Azure Virtual Machine (Windows Server)

This topic is pretty straightforward. Once you have your Azure Windows Server spun up, you can connect to it using the RDP tool if your source machine runs a Windows operating system.

But when you are on Linux, the story is different, as Linux does not ship with the RDP tool. Fortunately, there are plenty of open-source tools we can use to connect from a Linux box to a Windows Server.

Since we are discussing Azure, I am using an Azure VM. Below are the details of that machine:

[Screenshot: VM details]

We are going to use the "rdesktop" tool to connect from Linux to Windows. Installing rdesktop is pretty simple. You may google/bing for assistance, or check this link for the complete installation guide for RedHat/CentOS/Fedora operating systems.
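On RedHat-family systems, the installation is usually a single package install, assuming the rdesktop package is available in your configured repositories (for example, via EPEL):

# Enable the EPEL repository, then install rdesktop
sudo yum install -y epel-release
sudo yum install -y rdesktop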

Once you have rdesktop set up, enable the RDP port (3389) on your Azure virtual machine. You can achieve this by adding an "Inbound Security Rule" to your Network Security Group via the Azure Portal.

For demo purposes, I am allowing all incoming traffic on port 3389. In a production environment, you should provide appropriate rules for your NSG (Network Security Group) and ACLs (Access Control Lists) for better security.

[Screenshot: NSG inbound rule allowing all traffic on port 3389]

Once rdesktop is installed and the RDP port is configured, you can run the rdesktop command to connect to your virtual machine.

Command: sudo rdesktop <IP Address>

[Screenshot: rdesktop connection failure]

As you can see in the above screenshot, the connection fails. This is expected behavior, because by default the RDP connection is set to "Allow connections only from computers running Remote Desktop with Network Level Authentication".

[Screenshot: Remote Desktop settings with NLA enabled]

In order to connect from Linux machines, we have to disable the NLA.

[Screenshot: Remote Desktop settings with NLA disabled]

Now give it another try; the remote connection works like a charm!

[Screenshot: successful rdesktop connection]