Azure

Azure – Configure Storage Spaces for Azure VM for increased disk performance

This blog will walk you through configuring Storage Spaces for an Azure virtual machine (Windows). At the end, we will look at some IOPS benchmarks.

Each data disk (in a Standard storage account) provides about 500 IOPS. In this example, we are going to create a Storage Space by attaching 4 data disks to a Standard A2-sized Azure VM. In theory, this should increase the available IOPS to 2,000 (500 x 4 = 2,000).

 

Configuring Storage Spaces for Azure windows VM

Step 1: Attach four data disks to your virtual machine.

From the Azure portal, select your virtual machine >> click on “Disks” >> click on “+ Add data disk” >> fill out the details accordingly >> save the disk.


Repeat this process 3 more times and we will have 4 data disks attached to our virtual machine as shown below:

4_disk_attached_azure_portal.PNG

 

Inside the VM, we can see the disks attached:

4_disk_not_initialized

 

 

Step 2: Log in to the virtual machine and run the following PowerShell cmdlets. This will configure the Storage Space and create a drive for you.

 

In our example, we will configure a single volume, and hence only one storage pool. If you are implementing SQL Server or any other architecture, you may need more than one storage pool.

Create a new virtual disk using all the space available in the storage pool, with a Simple (striped) configuration. The interleave is set to 256KB, and the number of columns is set equal to the number of disks in the pool.

Format the disk with the NTFS filesystem and a 64KB allocation unit size.
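As a rough sketch, the cmdlets involved look like the following (this assumes the four new data disks are the only disks available for pooling on the VM; the pool, virtual disk and volume names are illustrative):

# Pool all disks that are available for pooling (the four attached data disks)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Simple (striped) virtual disk: 256KB interleave, one column per disk, all available space
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataVDisk" `
    -ResiliencySettingName Simple -Interleave 256KB `
    -NumberOfColumns $disks.Count -UseMaximumSize

# Initialize, partition and format with NTFS and a 64KB allocation unit size
Get-VirtualDisk -FriendlyName "DataVDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB -Confirm:$false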

Below is a snippet of the PowerShell console after executing the above cmdlets.

create_storage_space.PNG

Finally, we can see the drive: a new volume, “E”, is created with ~4TB of free space.

e_drive_created.png

 

Benchmark Tests

Obviously, this works. However, I have run IOPS tests to put some numbers behind it. You may choose any standard benchmarking tool; to keep it simple, I have used a PowerShell script authored by Mikael Nystrom, Microsoft MVP. The script is a wrapper around SQLIO.exe. You may download the PowerShell script and the SQLIO.exe file HERE.

 

Download the archive file to your local system and copy it to the server. Extract the contents to any folder.

 

Below is a sample script to estimate IOPS:

.\DiskPerformance.ps1 -TestFileName test.dat -TestFileSizeInGB 1 -TestFilepath F:\temp -TestMode Get-SmallIO -FastMode True -RemoveTestFile True -OutputFormat Out-GridView

Feel free to tweak the parameter values for different results.

Explanation of parameters:

-TestFileName test.dat

The name of the test file. The script creates the file using FSUTIL, but it first checks whether the file already exists and stops if it does; you can override that with -RemoveTestFile True.

-TestFileSizeInGB 1

The size of the test file. It accepts a fixed set of values; use the TAB key to cycle through them.

-TestFilepath C:\VMs

The folder in which the test file is created. It can also be a UNC path; the script creates the folder, so it does not need to exist.

-TestMode Get-SmallIO

There are two test modes, Get-LargeIO and Get-SmallIO. Use Get-LargeIO to measure the transfer rate and Get-SmallIO to measure IOPS.

-FastMode True

Fast mode runs each test for just 10 seconds, which gives you a rough indication. If you do not set it, or set it to False, each test runs for 60 seconds (with a 10-second break between runs).

-RemoveTestFile True

Removes the test file if it exists

-OutputFormat Out-GridView

Choose between Out-GridView and Format-Table.
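For completeness, you can run the same script in Get-LargeIO mode to measure throughput instead of IOPS; only the -TestMode (and, if you like, the -OutputFormat) value changes:

.\DiskPerformance.ps1 -TestFileName test.dat -TestFileSizeInGB 1 -TestFilepath F:\temp -TestMode Get-LargeIO -FastMode True -RemoveTestFile True -OutputFormat Format-Table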

 

IOPS for C drive on Azure VM [OS Disk]:

C_drive

 

IOPS for D drive on Azure VM [Temporary Disk]:

D_drive

 

IOPS for E drive on Azure VM [Standard data disk]:

E_drive

 

IOPS for F drive on Azure VM [Storage Spaces]:

F_drive

 

We can use this storage strategy when we have a small amount of data but the IOPS requirement is huge.

Example scenario:

You have 500GB of data, and the IOPS requirement for that data exceeds 1,000. Storing the 500GB on one data disk will create IOPS problems, since each data disk has a 500 IOPS limit. But if we combine 4 disks and create a storage space, the available IOPS increases to ~2,000 [we have to factor in latency etc. to get an exact figure]. Since we are using the same Standard A2 virtual machine, and Azure charges for the overall data stored and not per disk, the pricing stays the same.

 

 

AWS – Monitor AWS Windows EC2 instance using Microsoft OMS (Operations Management Suite)

Microsoft is investing a lot of money and effort into OMS (Operations Management Suite). OMS can be used to monitor Windows/Linux machines, not just in Azure but also in AWS, or any other cloud platform for that matter. You can even monitor servers hosted in your on-premises environment.

Configuring OMS for an Azure instance is pretty straightforward. Here, I will walk you through configuring OMS on a Windows AWS instance.

I already have an OMS workspace (created with an Azure subscription).

Step 1: Create and connect to your AWS Windows instance, using the link below as guidance:

http://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-windows-launch-instance.html

Step 2: Download the OMS direct agent for the Windows machine.

Option 1: If you are using an Azure subscription to manage OMS, you can find the link to download the direct agent by navigating to:

Select your OMS workspace >> select “Quick Start” >> select “Computers” >> select “Download Windows Agent (64 bit)”


 


 

Option 2: You can download the OMS direct agent, from the OMS portal as well.

Click on the “gear button” (third icon from left) located at the top right-hand corner of the portal >> select “Connected Sources” >> Select “Windows Servers” >> Click on “Download Windows Agent (64 bit)”


Once the “MMASetup-AMD64.exe” file is downloaded to your local desktop, copy it to the AWS Windows instance on which you are configuring the OMS agent.

Now, double-click “MMASetup-AMD64.exe” to start the installation.
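As an alternative to clicking through the wizard, the agent can also be installed silently. A rough sketch (the setup properties below are taken from the agent's documented command-line options; double-check them against the documentation for your agent version, and substitute your own workspace ID and key):

# Extract the installer, then run setup silently and attach it to the OMS workspace
.\MMASetup-AMD64.exe /c /t:C:\MMA
C:\MMA\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID="<your workspace ID>" OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" AcceptEndUserLicenseAgreement=1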

 


 

Click “Next”


Click on “I Agree” once you have read the legal terms.


Select the installation folder if you are not happy with the default location, then click “Next”.


Select the checkbox “Connect the agent to Azure Log Analytics (OMS)” and then click “Next”.


Enter your OMS workspace details. You can find this information in the Azure portal or the OMS portal; it is on the same page from which we downloaded the direct agent for Windows.

[Optional] Click on the “Advanced” button if your server has to go through a proxy server, make the necessary changes and click “Next”. Since I do not use a proxy server to connect to OMS, I am leaving the fields blank.


Clicking “Next” on the above page takes you back to the page where you entered the OMS workspace ID and key. Click “Next” again to proceed.


Choose whether or not you want to use Microsoft Update, then click “Next”.


Review your settings/data. Click on “Install”.


Now click on “Finish”.

 

Step 3: Verify connectivity to OMS workspace

Open Control Panel >> Select “Microsoft Monitoring Agent”


Select the “Azure Log Analytics (OMS)” tab. You can see that your Windows agent has successfully connected to the Microsoft Operations Management Suite service.

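You can also confirm the connection from an elevated PowerShell prompt on the instance. Here is a small sketch using the agent's COM configuration object (verify the class and property names against your agent build):

# List the Log Analytics (OMS) workspaces this agent reports to, with their connection status
$mma = New-Object -ComObject AgentConfigManager.MgmtSvcCfg
$mma.GetCloudWorkspaces() | Format-Table workspaceId, ConnectionStatusText

# The agent itself runs as the HealthService ("Microsoft Monitoring Agent") service
Get-Service HealthService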

 

Step 4: Verify logs from the AWS Windows instance in OMS

From the OMS portal, we can see that our AWS Windows instance is connected. [WIN-PQ69983CQ24 is my AWS Windows instance name]


 

A simple Log Search will give us data fetched from the instance.


 

 

Azure – Create a Windows VM from a generalized image

This blog shows you how to create a Windows VM from a generalized image. It uses unmanaged Azure disks.

For this example, I will be using resources deployed on Azure, i.e., generalize an Azure VM, create an image out of it, and then create a new Azure VM using that image.

Below are the steps:

  1. Generalize the VM
  2. Capture a VM image from the generalized Azure VM obtained in step 1
  3. Create a VM from the generalized VHD image in a storage account, obtained in step 2

Part 1: Generalize the VM

  1. Remote Desktop to the Azure virtual machine
  2. **Important** Before running Sysprep.exe, delete the “unattend.xml” file from the “C:\Windows\Panther” folder. If you do not do this, you will encounter an “OS Provision time out” exception while creating a VM from this image.

    This is because, when an image is deployed, the unattend.xml file must come from the ISO image that Azure attaches to the virtual machine as part of provisioning the VM from an image.


  3. From the Command Prompt or PowerShell, change the directory to “C:\Windows\System32\Sysprep”
  4. Run “Sysprep.exe”
  5. In the System Preparation Tool, select the option “Enter System Out-of-Box Experience (OOBE)” and select the “Generalize” check box.
  6. Select “Shutdown” from the drop-down list in Shutdown Options.
  7. Once the Sysprep process completes, the VM will shutdown.
  8. From the Azure portal, you can see the status of the VM as “Stopped (Shutdown)”. Use the below powershell cmdlet to fetch the VM status.

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses


Part 2: Capture a VM image from a generalized Azure Virtual Machine

  1. De-allocate the VM

    Stop-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine

    Confirm the status of the VM:

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses


  2. Set the status of the VM to “Generalized”

    Set-AzureRmVm -ResourceGroupName manjug_test -Name windowsmachine -Generalized


    Confirm the status of the VM:

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses


  3. Create the image by running the below command:

    Save-AzureRmVMImage -ResourceGroupName manjug_test -Name windowsmachine -DestinationContainerName images -VHDNamePrefix windowsmachineimage -Path C:\Filename.json

    -DestinationContainerName is the name of the container where the image will be stored.
    -VHDNamePrefix is the prefix given to the image.
    -Path is the path of the JSON file that contains the details of the image that gets created.

    You can get the URL of your image from the JSON file template. Go to the resources > storageProfile > osDisk > image > uri section for the complete path of your image. The URL of the image looks like:

    https://<storageAccountName>.blob.core.windows.net/system/Microsoft.Compute/Images/<imagesContainer>/<templatePrefix-osDisk>.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd

    You can also verify the URI in the portal. The image is copied to a container named system in your storage account.

Part 3: Create a VM from a generalized VHD image in a storage account

  1. Obtain the image URI from the JSON file (Part 2, step 3), or fetch it from the Azure portal.

    From the portal:


    From the JSON file:


  2. Set the VHD URI to a variable. Example:

    $imageURI = "https://manjugtestdisks.blob.core.windows.net/system/Microsoft.Compute/Images/images/windowsmachineimage-osDisk.04a4f0cb-268a-49ea-a0d9-a203c8fa8c51.vhd"

  3. Create a Virtual Network
    Create the subnet. The following sample creates a subnet named mySubnet in our resource group (manjug_test) with an address prefix of 10.0.0.0/24.

    $rgName = "manjug_test"
    $subnetName = "mySubnet"
    $singleSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix 10.0.0.0/24

    Create the virtual network. The following sample creates a virtual network named myVnet in the Southeast Asia location with an address prefix of 10.0.0.0/16.

    $location = "Southeast Asia"
    $vnetName = "myVnet"
    $vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location `
    -AddressPrefix 10.0.0.0/16 -Subnet $singleSubnet

  4. Create a public IP address and network interface

    Create a public IP address. This example creates a public IP address named myPip.

    $ipName = "myPip"
    $pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName -Location $location `
    -AllocationMethod Dynamic

    Create the NIC. This example creates a NIC named myNic.

    $nicName = "myNic"
    $nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $location `
    -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

  5. Create the network security group and an RDP rule

    To be able to log in to your VM using RDP, you need to have a security rule that allows RDP access on port 3389

    This example creates an NSG named myNsg that contains a rule called myRdpRule that allows RDP traffic over port 3389.

    $nsgName = "myNsg"

    $rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

    $nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location `
    -Name $nsgName -SecurityRules $rdpRule

  6. Create a variable for the virtual network

    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName $rgName -Name $vnetName

  7. Create the Virtual Machine

    The following PowerShell script shows how to set up the virtual machine configurations and use the uploaded VM image as the source for the new installation.

    # Enter a new user name and password to use as the local administrator account
    # for remotely accessing the VM.
    $cred = Get-Credential

    # Name of the storage account where the VHD is located. This example uses the
    # storage account "manjugtestdisks".
    $storageAccName = "manjugtestdisks"

    # Name of the virtual machine. This example sets the VM name as "winmachimage".
    $vmName = "winmachimage"

    # Size of the virtual machine. This example creates a "Standard_D2_v2" sized VM.
    # See the VM sizes documentation for more information:
    # https://azure.microsoft.com/documentation/articles/virtual-machines-windows-sizes/
    $vmSize = "Standard_D2_v2"

    # Computer name for the VM. This example sets the computer name as "winmachimage".
    $computerName = "winmachimage"

    # Name of the disk that holds the OS. This example sets the
    # OS disk name as "myOsDisk".
    $osDiskName = "myOsDisk"

    # Assign a SKU name. This example sets the SKU name as "Standard_LRS".
    # Valid values for -SkuName are: Standard_LRS - locally redundant storage, Standard_ZRS - zone redundant
    # storage, Standard_GRS - geo redundant storage, Standard_RAGRS - read access geo redundant storage,
    # Premium_LRS - premium locally redundant storage.
    $skuName = "Standard_LRS"

    # Get the storage account where the uploaded image is stored
    $storageAcc = Get-AzureRmStorageAccount -ResourceGroupName $rgName -AccountName $storageAccName

    # Set the VM name and size
    $vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize

    # Set the Windows operating system configuration and add the NIC
    $vm = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName $computerName `
    -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
    $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

    # Create the OS disk URI
    $osDiskUri = '{0}vhds/{1}-{2}.vhd' `
    -f $storageAcc.PrimaryEndpoints.Blob.ToString(), $vmName.ToLower(), $osDiskName

    # Configure the OS disk to be created from the existing VHD image (-CreateOption fromImage).
    $vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri `
    -CreateOption fromImage -SourceImageUri $imageURI -Windows

    # Create the new VM
    New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

  8. Verify that the Virtual Machine was created.

~ If this post helps at least one person, the purpose is served. ~

 

PowerShell – Script to check the Azure VHD lease status

A common misconception while working with Azure compute is to assume that no billing charges will be incurred once the Azure VM is deleted. This is true only to a certain extent: once you delete the VM, the billing for compute hours stops, but billing continues for the VHD (which was previously associated with the VM) that is still sitting in the Azure storage account.

As the title of the post states, the idea behind this script is to list the lease status of the Azure VHDs in all the storage accounts under your subscription. This is particularly helpful for finding and deleting unused VHDs, and thus saving money for your organization.

The complete script is uploaded in the Microsoft Script Center. Use the below link to download it.

Check the Lease status of VHDs
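If you just want the gist of it, the core loop boils down to something like the sketch below (a minimal sketch assuming the AzureRM and Azure.Storage modules; the published script is more complete):

# Walk every storage account, container and blob; report the lease status of .vhd blobs
Login-AzureRmAccount
foreach ($sa in Get-AzureRmStorageAccount) {
    $ctx = $sa.Context
    foreach ($container in Get-AzureStorageContainer -Context $ctx) {
        Get-AzureStorageBlob -Container $container.Name -Context $ctx |
            Where-Object { $_.Name -like "*.vhd" } |
            Select-Object @{n="StorageAccount";e={$sa.StorageAccountName}},
                          @{n="Container";e={$container.Name}},
                          Name,
                          @{n="LeaseStatus";e={$_.ICloudBlob.Properties.LeaseStatus}}
    }
}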

 

 

PowerShell – Script to Monitor Azure VM Availability

The idea behind this script is to have an automated solution for monitoring the availability of Azure VMs. The script fetches the current status of each server and saves it in an Azure Table. Each script execution is one poll, so on the next run the script fetches the current VM status again and compares it with the previous value. Any servers whose status changed between polls are written to a hash table, and the details of those servers can finally be sent out in an email.

Since we are only monitoring for the VM status changing from “RUNNING” to “VM STOPPED”, this eliminates the scenarios where VMs are stopped manually or by a scheduled shutdown automation script; in those cases the VM status changes from “RUNNING” to “VM DEALLOCATED”.

Feel free to customize the script to add logic if you want to monitor the status of De-allocated VMs as well.
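To give a feel for the comparison logic, one polling pass boils down to something like this sketch ($previousStatus stands in for the values loaded from the Azure Table on the previous run; the real script also persists the current values and sends the email):

# Current power state of every VM in the subscription
$current = Get-AzureRmVM -Status | Select-Object Name, PowerState

# Hashtable of Name -> PowerState from the previous poll (loaded from the Azure Table in the real script)
$previousStatus = @{}

# Collect only the VMs that went from running to stopped (deallocated VMs are ignored)
$changed = @{}
foreach ($vm in $current) {
    if ($previousStatus[$vm.Name] -eq "VM running" -and $vm.PowerState -eq "VM stopped") {
        $changed[$vm.Name] = $vm.PowerState
    }
}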

This script uses SendGrid as an email server. Feel free to add your SMTP address if you have one.

This script is useful when you do not yet have a fully fledged monitoring solution like Nagios or OMS. Maybe you have a couple of servers that you want to monitor and do not want to spend more money on custom monitoring. Simply create a runbook using this script as a baseline and schedule it in an Azure Automation account.

The script is uploaded to the Microsoft Script Center. Please download it using the below link:

Monitor Azure VM Availability

Azure – Unable to ping Azure Virtual Machine from outside Azure

You buy a new Azure subscription and spin up an Azure virtual machine. Now you want to test whether it is working, so you pull up the trusty Command Prompt (or PowerShell) and ping the VIP (virtual/public IP) of your Azure virtual machine. Voila!! The ping fails with 100% loss. But the Azure portal shows that your virtual machine is up and running, and to double-check, you even RDP to your VM and it is all good. This is one of the many situations where Azure newcomers get confused. Let me break it down for you:

The explanation for this behaviour is that good old Windows Ping.exe uses the ICMP protocol to communicate, but the Azure Load Balancer does not support ICMP when a connection is made from an external source into Azure. This means your local computer will not be able to “ping” (probe using Ping.exe) Azure virtual machines. However, the Azure Load Balancer does allow ICMP inside Azure (internally), so two Azure virtual machines are able to talk to each other.

The solution is to ping the port of your Virtual Machine.

Example: Ping xx.xx.xx.xx:1234

Since Ping.exe does not support probing a port, we have to use other tools like PSPing, TCPPing etc. to achieve this.
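If you are on Windows 8/Server 2012 or later and would rather not download anything, the built-in Test-NetConnection cmdlet can also do a TCP probe of a port (using the VM and port from this demo):

Test-NetConnection 13.76.247.67 -Port 3389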

That explains the theory. Now let me demonstrate what I just explained.

Below are the details of my virtual machine:

VM Details

When I ping the VIP (13.76.247.67) using the default Ping.exe, you can observe that we end up with 100% packet loss.

packet_loss

This behaviour occurs because the Azure Load Balancer does not allow ICMP communication between Azure and external sources, and Microsoft’s Ping.exe uses the ICMP protocol.

The solution is to use PSPing (among many other options) and probe a port of the virtual machine. Please note that you have to add the relevant entry in the NSG (Network Security Group) to allow incoming traffic to your virtual machine.

Since this is just a demo, I have allowed all traffic to my virtual machine on port 3389. In your production environment, you should apply appropriate NSG rules and ACLs to your virtual machine and subnet.

NSG_Allow_All

PSPing.exe comes as part of the PsTools bundle. This toolset can be downloaded here.

Copy PsPing onto your executable path. Typing “psping” displays its usage syntax.

psping_syntax

Note: If you are using the PSPing tool for the first time, you may have to agree to the terms and conditions before using it.

Since I have port 3389 open for all incoming traffic, I will go ahead and use the PSPing tool to probe that port from my local computer. As you can see, it works like a charm!
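For reference, the command is simply the VIP and the port together:

psping 13.76.247.67:3389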

ping_success

Finally, note that you can only probe a port for which you have enabled incoming traffic. Since I have not enabled port 80, I expect those packets to be dropped.

packet_loss_wrong_port

Azure – High level discussion of Azure Storage Architecture

Windows Azure Storage is a cloud solution that lets customers seamlessly store a virtually limitless amount of data for any duration of time. It has been in production since November 2008, and it is used for storing application data, live streaming, social networking search, gaming and music content, etc.

Once your data is stored in Azure Storage, you can access it any time and from anywhere, and you only pay for what you use and store. Thousands of customers are already using the Azure Storage services.

Visit the Azure portal to create your free subscription and try out Azure Storage. Also, check out the Microsoft documentation for a jump start and the prerequisites for using Azure Storage.

Why use Azure Storage?

Disaster Recovery: Azure Storage can store your data in data centres that are miles apart (a minimum of 400 miles). This provides a strong guard against natural calamities like earthquakes and tornadoes. Replication options (LRS, ZRS, GRS, RA-GRS) are provided and can be chosen as per the business needs.

Multi-tenancy: As with other services, Azure Storage uses the concept of shared tenancy. To reduce storage costs, data from multiple customers with varying workloads is served from the same storage infrastructure. This reduces the amount of storage that needs to be provisioned at any one time compared with running each service on its own dedicated hardware.

Global Name-space: For ease of use, Azure Storage implements a Global Namespace that allows the data to be stored and accessed in a consistent manner from any location in the world.

Global Partitioned Namespace:

Exabytes of data and beyond are stored in Azure Storage, so Azure had to come up with a solution that allows its clients to store and retrieve data without much hassle. To provide this capability, Azure leverages DNS as part of the storage namespace and breaks the namespace into three parts: an Account Name, a Partition Name and an Object Name.

Syntax: http(s)://AccountName.<service>.core.windows.net/PartitionName/ObjectName

Account Name: This is the customer-selected storage account name (entered while creating the storage account in the Azure portal or Azure PowerShell). The account name is used to locate the primary storage cluster and the data centre where the requested data is stored. This primary location is where all requests for that account’s data are routed.

Partition Name: This name locates the data once the request reaches the primary cluster. It is also used to scale out the access to data across the nodes depending on the traffic.

Object Name: This identifies the individual objects within that partition.

For Blobs, the full blob name is the PartitionName.

For Tables, each entity (row) in the table has a primary key that consists of two properties: the PartitionName and the ObjectName. This distinction allows applications using Tables to group rows into the same partition to perform atomic transactions across them.

For Queues, the queue name is the PartitionName and each message has an ObjectName to uniquely identify it within the queue.
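As a purely illustrative example, for a blob at https://myaccount.blob.core.windows.net/photos/trip/beach.jpg, “myaccount” is the AccountName and the full blob name “photos/trip/beach.jpg” serves as the PartitionName.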

Architectural Components:

Azure_Storage

Storage Stamp: This is a cluster of N racks of storage nodes. Each rack is built out as a separate fault domain with redundant networking and power. The goal is to keep the stamp around 70% utilized in terms of capacity, transactions and bandwidth. This is because ~20% is kept as a reserve (a) for disk short-stroking to gain better seek times and higher throughput by utilizing the outer tracks of the disks and (b) to continue providing storage capacity and availability in the presence of a rack failure within a stamp. When the storage stamp reaches 70% utilization, the location service migrates accounts to different stamps using inter-stamp replication.

Location Service: Manages all the storage stamps and is also responsible for managing the account namespaces across all stamps. The LS itself is distributed across two geographical locations for its own disaster recovery.

Azure Storage provides storage from multiple locations. Each location is a data centre, which holds multiple storage stamps. To provision additional capacity, the LS has the ability to add new regions, new locations to regions and new stamps to locations. The LS can then allocate new storage accounts to those new stamps for customers, as well as load balance (migrate) existing storage accounts from older stamps to new stamps.

As shown in the figure, when an application requests a new storage account for storing data, it specifies the location affinity for the storage (for example, US North). The LS then chooses a storage stamp within that location as the primary stamp for the account and stores the account metadata in the chosen storage stamp, which tells the stamp to start taking traffic for the assigned account. The LS then updates DNS so that requests for the name https://AccountName.service.core.windows.net/ route to that storage stamp’s virtual IP (VIP, an IP address the storage stamp exposes for external traffic).

Three Layers within Storage Stamp:

Stream Layer: This layer stores the bits on disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. The stream layer can be thought of as a distributed file system layer within a stamp. It understands files, called “streams” (how to store them, how to replicate them, etc.), but it does not understand higher level object constructs or their semantics.

Partition Layer: The partition layer is built for (a) managing and understanding higher level data abstractions (Blob, Table, Queue), (b) providing a scalable object namespace, (c) providing transaction ordering and strong consistency for objects, (d) storing object data on top of the stream layer, and (e) caching object data to reduce disk I/O.
Front End Layer: The Front-End (FE) layer consists of a set of stateless servers that take incoming requests. Upon receiving a request, an FE looks up the AccountName, authenticates and authorizes the request, then routes the request to a partition server in the partition layer (based on the PartitionName). The system maintains a Partition Map that keeps track of the PartitionName ranges and which partition server is serving which PartitionNames. The FE servers cache the Partition Map and use it to determine which partition server to forward each request to. The FE servers also stream large objects directly from the stream layer and cache frequently accessed data for efficiency.

Two Replication Engines:

Intra-Stamp Replication (Stream Layer): This system provides synchronous replication and is focused on making sure all the data written into a stamp is kept durable within that stamp. It keeps enough replicas of the data across different nodes in different fault domains to keep data durable within the stamp in the face of disk, node, and rack failures. Intra-stamp replication is done completely by the stream layer and is on the critical path of the customer’s write requests. Once a transaction has been replicated successfully with intra-stamp replication, success can be returned back to the customer.

Inter-Stamp Replication (Partition Layer): This system provides asynchronous replication and is focused on replicating data across stamps. Inter-stamp replication is done in the background and is off the critical path of the customer’s request. This replication is at the object level, where either the whole object is replicated or recent delta changes are replicated for a given account. Inter-stamp replication is used for (a) keeping a copy of an account’s data in two locations for disaster recovery and (b) migrating an account’s data between stamps. Inter-stamp replication is configured for an account by the location service and performed by the partition layer.

Note: The above content has been summarized from a technical paper titled: “Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency” released by Microsoft.
You can download the PDF here.

Azure – RDP from Linux box to Azure Virtual Machine (Windows Server)

This topic is pretty straightforward. Once you have your Azure Windows Server spun up, you can connect to it using the built-in RDP tool if your source machine runs a Windows operating system.

But when you are on Linux the story is different, as it does not have the RDP tool. Fortunately, there are plenty of open-source tools that we can use to connect from a Linux box to a Windows server.

Since we are discussing Azure, I am using an Azure VM. Below are the details of that machine:

VM Details

We are going to use the “RDesktop” tool to connect from Linux to Windows. Installation of RDesktop is pretty simple. You may google/bing for assistance, or check this link for the complete installation guide for Red Hat/CentOS/Fedora operating systems.

Once you have RDesktop set up, enable the RDP port (3389) on your Azure virtual machine. You can achieve this by adding an “Inbound security rule” to your Network Security Group via the Azure portal.

For demo purposes, I am allowing all incoming traffic on port 3389. However, in a production environment you should configure appropriate NSG (Network Security Group) rules and ACLs (Access Control Lists) for your virtual machine and subnet for better security.

NSG_Allow_All

Once RDesktop and the RDP port are configured, you can run the rdesktop command to connect to your virtual machine.

Command: sudo rdesktop <IP Address>

rdesktop_fail

As you can see in the above screenshot, the connection fails. This is expected behaviour, because by default the RDP connection is set to “Allow connections only from computers running Remote Desktop with Network Level Authentication”.

NLA_Enabled

In order to connect from Linux machines, we have to disable the NLA.
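If you prefer to script the change rather than use the System Properties dialog, NLA is controlled by the UserAuthentication value under the RDP-Tcp key. Here is a quick sketch, to be run in an elevated PowerShell session on the VM (set the value back to 1 to re-enable NLA):

# Disable Network Level Authentication for incoming RDP connections
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" `
    -Name "UserAuthentication" -Value 0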

NLA_Disabled

Now give it another try; the remote connection works like a charm!

rdesktop_success