Azure Storage

PowerShell – Delete Azure blobs older than X number of days

As a cost optimization strategy, organizations often retain data only for a fixed number of days and delete anything older.

The same strategy can be implemented in Azure Storage. Say our application only needs the last 60 days of data; the approach is then to retain 60 days of data and delete any blob that is older than 60 days.

This script deletes Azure blobs that are older than X days, where ‘X’ is the number of days of data you want to retain (60, as stated in my example).
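For reference, here is a minimal sketch of the idea using the Azure.Storage cmdlets. The storage account name, key and container below are placeholders; the downloadable script is the complete, parameterized version.

# Retention period in days
$retentionDays = 60
$cutoff = (Get-Date).AddDays(-$retentionDays)

# Build a storage context (placeholder account name and key)
$context = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"

# Delete every blob in the container whose last-modified time is older than the cutoff
Get-AzureStorageBlob -Container "mycontainer" -Context $context |
    Where-Object { $_.LastModified -lt $cutoff } |
    ForEach-Object { Remove-AzureStorageBlob -Blob $_.Name -Container "mycontainer" -Context $context }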

Download the script

You can create an Azure Automation Runbook from this script and schedule it to run every day. So, you will not be billed for the unwanted data.

Click here to download my PowerShell scripts for Free !!

Click here for Azure tutorial videos !!


PowerShell – Fetch Azure Page Blobs from an Azure subscription

This script fetches the details of every PAGE BLOB across the Azure subscription and saves them as a CSV file. The CSV file is saved in the location from which the script was run.

The general use case is to understand how many VHD files are present in your subscription. These could be your OS disks, data disks, or VM snapshots.
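The published script is the complete version; a rough sketch of the approach (the resource names and output file name below are illustrative) looks like this:

# Loop over every storage account in the current subscription and list its page blobs
$results = foreach ($sa in Get-AzureRmStorageAccount) {
    # First key of the account (older AzureRM versions expose .Key1 instead of .Value)
    $key = (Get-AzureRmStorageAccountKey -ResourceGroupName $sa.ResourceGroupName -Name $sa.StorageAccountName)[0].Value
    $ctx = New-AzureStorageContext -StorageAccountName $sa.StorageAccountName -StorageAccountKey $key
    foreach ($container in Get-AzureStorageContainer -Context $ctx) {
        Get-AzureStorageBlob -Container $container.Name -Context $ctx |
            Where-Object { $_.BlobType -eq 'PageBlob' } |
            Select-Object @{n='StorageAccount';e={$sa.StorageAccountName}},
                          @{n='Container';e={$container.Name}},
                          Name, Length, LastModified
    }
}

# Save the report next to the script
$results | Export-Csv -Path .\page_blobs.csv -NoTypeInformation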

Download Script Link

If you are looking for a script that generates a report for “unattached” managed and unmanaged disks, then please visit the link below:

AZURE – GENERATE REPORT FOR UNATTACHED AZURE DISKS (MANAGED AND UN-MANAGED)



Azure – Install exe files (BigFix) on an Azure Windows virtual machine using the Azure Custom Script Extension (CSE)

What is the Custom Script Extension?

The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post-deployment configuration, software installation, or any other configuration/management task. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension runtime. The Custom Script extension integrates with Azure Resource Manager templates, and can also be run using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API.

This document details how to use the Custom Script Extension, via the Azure PowerShell module, to install the BigFix client on an already provisioned Azure Windows virtual machine.

Pre-requisites:

Operating System

The Custom Script Extension for Windows can be run on Windows 10 Client, Windows Server 2008 R2, 2012, 2012 R2, and 2016 releases.

Script Location

The script needs to be stored in Azure Blob storage, or any other location accessible through a valid URL.

Internet Connectivity

The Custom Script Extension for Windows requires that the target virtual machine is connected to the internet.

The BigFix client files are stored in the storage account:

[Screenshot: BigFix client files in the storage account container]

We shall name the extension “bigfixinstallextension.” Make sure that an extension with the same name does not already exist.

Step 1: Get the Azure virtual machine config object

$vm = Get-AzureRmVM -ResourceGroupName "datadog-test" -Name "dg-private-1"

Step 2: Query the Virtual Machine object for existing extensions:

$vm.Extensions

You should see output similar to the screenshot below if the VM does not have any custom extensions.

[Screenshot: $vm.Extensions output with no custom extensions]

Note: every Azure virtual machine will have one default extension, “MicrosoftMonitoringAgent,” because Azure installs the Microsoft Monitoring Agent on every virtual machine. Make sure the virtual machine does not already have an extension named “bigfixinstallextension.” If it does, we have to remove that extension first.

The link below documents the Azure PowerShell cmdlet used to remove an extension:

https://docs.microsoft.com/en-us/powershell/module/azurerm.compute/remove-azurermvmextension?view=azurermps-5.5.0
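If the extension does already exist, a call along the lines of the sketch below removes it (using the resource group, VM and extension names from this walkthrough):

# Remove a previously added custom script extension from the VM
Remove-AzureRmVMExtension -ResourceGroupName "datadog-test" -VMName "dg-private-1" -Name "bigfixinstallextension" -Force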

Once we have confirmed that a custom extension named “bigfixinstallextension” does not exist, we can proceed to add one. Below is the PowerShell code:

# Resource group of the virtual machine
$resource_group = "datadog-test"

# Location of the virtual machine
$location = "East US 2"

# Azure virtual machine name
$vm_name = "dg-private-1"

# Storage account name where the custom script is stored
$storage_account_name = "xxxx"

# Storage account key of the account where the custom script is stored
$storage_account_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Custom script file name
$file_name = "azure_custom_script_execution_install_bigfix.ps1"

# Container name where the custom script is stored
$container_name = "msifiles"

# Extension name for the custom script extension
$extension_name = "bigfixinstallextension"

# Add the custom script extension to the VM and execute the PowerShell file
Set-AzureRmVMCustomScriptExtension -ResourceGroupName $resource_group -Location $location -VMName $vm_name -Name $extension_name -TypeHandlerVersion "1.1" -StorageAccountName $storage_account_name -StorageAccountKey $storage_account_key -FileName $file_name -ContainerName $container_name

Output:

[Screenshot: Set-AzureRmVMCustomScriptExtension output]

Now log in to the Azure Windows virtual machine to confirm that the BigFix client is installed and running:

[Screenshot: BigFix client installed and running on the VM]

The downloaded files can be found inside the virtual machine at the following path:

C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.9\Downloads\1

For troubleshooting, extension execution output is logged to files under the following directory on the target virtual machine:

C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension
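On the VM, a quick way to open the most recent log file under that directory (a convenience snippet, not part of the original walkthrough) is:

# Show the newest Custom Script Extension log file
Get-ChildItem 'C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension' -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content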

Explaining the PowerShell script: azure_custom_script_execution_install_bigfix.ps1

This script is executed as part of the Custom Script Extension run and is responsible for installing the BigFix agent.

Below is the code:

# Create a directory to hold the BigFix files
New-Item 'c:\bigfix' -ItemType Directory

# Copy the BigFix files from Azure storage to the local directory
Invoke-WebRequest -Uri https://customsc.blob.core.windows.net/msifiles/clientsettings.cfg -OutFile 'c:\bigfix\clientsettings.cfg'
Invoke-WebRequest -Uri https://customsc.blob.core.windows.net/msifiles/masthead.afxm -OutFile 'c:\bigfix\masthead.afxm'
Invoke-WebRequest -Uri https://customsc.blob.core.windows.net/msifiles/setup.exe -OutFile 'c:\bigfix\setup.exe'

# Execute the setup file in silent mode
$arguments = "/S /v/qn"
$filepath = "c:\bigfix\setup.exe"
Start-Process $filepath $arguments -Wait

Execution Flow:

1. Create a directory to hold the BigFix files.

2. Copy the three files associated with BigFix installation to the directory created in Step 1.

3. Execute the setup file in silent mode.



Azure – Configure Storage Spaces for Azure VM for increased disk performance

This blog will walk you through how to configure Storage Spaces for an Azure Virtual Machine (Windows). Finally, we get to see some IOPS benchmarks.

Each data disk (on a Standard storage account) provides about 500 IOPS. In this example, we are going to create a Storage Space by attaching 4 data disks to a Standard A2 sized Azure VM. In theory, this should increase the IOPS to 2,000 (500 x 4 = 2000).

 

Configuring Storage Spaces for an Azure Windows VM

Step 1: Attach four data disks to your virtual machine.

From the Azure portal, select your virtual machine >> click on “Disks” >> click on “+ Add data disk” >> fill out the details accordingly >> save the disk.

[Screenshot: adding a data disk in the Azure portal]

Repeat this process three more times, and we will have four data disks attached to our virtual machine, as shown below:

[Screenshot: four data disks attached, as shown in the Azure portal]
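If you prefer scripting over the portal, the same disks can be attached with PowerShell. Below is a sketch assuming unmanaged disks on a standard storage account; the resource group, VM, storage account and disk names are placeholders:

# Attach four empty 1023 GB data disks to the VM
$vm = Get-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myVM"
for ($i = 0; $i -lt 4; $i++) {
    $vhdUri = "https://mystorageaccount.blob.core.windows.net/vhds/datadisk$i.vhd"
    $vm = Add-AzureRmVMDataDisk -VM $vm -Name "datadisk$i" -VhdUri $vhdUri -Lun $i -DiskSizeInGB 1023 -CreateOption Empty -Caching None
}
Update-AzureRmVM -ResourceGroupName "myResourceGroup" -VM $vm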

 

Inside the VM, we can see the disks attached:

[Screenshot: the four attached disks, not yet initialized, inside the VM]

 

 

Step 2: Log in to the virtual machine and run the following PowerShell cmdlets. This will configure the Storage Space and create a drive for you.

 

In our example, we will configure one volume, and hence only one storage pool. If you are implementing SQL Server or any other architecture, you may need more than one storage pool.

Create a new virtual disk using all the space available in the storage pool, with a Simple configuration. The interleave is set to 256 KB, and the number of columns is set equal to the number of disks in the pool.

Format the disk with the NTFS filesystem and a 64 KB allocation unit size.
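The cmdlets below are a sketch of that configuration (the pool, disk and volume names are illustrative; adjust them as needed):

# Create one storage pool from all poolable disks
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = (Get-StorageSubSystem | Select-Object -First 1).FriendlyName
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName $subsystem -PhysicalDisks $disks

# Create a virtual disk using all available space, Simple resiliency, 256 KB interleave and one column per disk
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -UseMaximumSize -Interleave 262144 -NumberOfColumns $disks.Count

# Initialize the new disk, create a partition, and format it with NTFS and a 64 KB allocation unit size
Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false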

Below is a snippet of the PowerShell console after executing the above cmdlets.

[Screenshot: PowerShell console output after creating the storage space]

Finally, we can see the drive. A drive named “E” will be created with ~4 TB of free space.

[Screenshot: the new E drive with ~4 TB free]

 

Benchmark Tests

Obviously, this works. However, I ran IOPS tests to get a visual confirmation. You may choose any standard benchmarking tool; to keep it simple, I used a PowerShell script authored by Mikael Nystrom, Microsoft MVP, which is a wrapper around SQLIO.exe. You may download the PowerShell script and the SQLIO.exe file HERE.

 

Download the archive file to your local system and copy it to the server. Extract the contents to any folder.

 

Below is a sample script to estimate IOPS:

.\DiskPerformance.ps1 -TestFileName test.dat -TestFileSizeInGB 1 -TestFilepath F:\temp -TestMode Get-SmallIO -FastMode True -RemoveTestFile True -OutputFormat Out-GridView

Feel free to tweak the parameter values for different results.

Explanation of parameters:

-TestFileName test.dat

The name of the test file. The script creates the file using FSUTIL, but it first checks whether the file already exists; if it does, the script stops. You can override that with -RemoveTestFile True.

-TestFileSizeInGB 1

The size of the file. It accepts a fixed set of values; use the TAB key to cycle through them.

-TestFilepath C:\VMs

The folder can also be a UNC path; the script creates the folder, so it does not need to exist.

-TestMode Get-SmallIO

There are two test modes, Get-LargeIO and Get-SmallIO. Use Get-LargeIO to measure the transfer rate and Get-SmallIO to measure IOPS.

-FastMode True

Fast mode runs each test for just 10 seconds, which gives you a rough indication. If you do not set it, or set it to false, each test runs for 60 seconds (with a 10-second break between runs).

-RemoveTestFile True

Removes the test file if it exists

-OutputFormat Out-GridView

Choose between Out-GridView and Format-Table.

 

IOPS for C drive on Azure VM [OS Disk]:

[Screenshot: IOPS results for the C drive]

 

IOPS for D drive on Azure VM [Temporary Disk]:

[Screenshot: IOPS results for the D drive]

 

IOPS for E drive on Azure VM [Standard data disk]:

[Screenshot: IOPS results for the E drive]

 

IOPS for F drive on Azure VM [Storage Spaces]:

[Screenshot: IOPS results for the F drive]

 

We can use this storage strategy when we have a small amount of data but the IOPS requirement is huge.

Example scenario:

You have 500 GB of data, and the IOPS requirement for that data exceeds 1K. Storing 500 GB on a single data disk creates an IOPS bottleneck, since each data disk has a 500 IOPS limit. But if we combine 4 disks into a storage space, the available IOPS increases to ~2K [we have to consider latency, etc., to get an exact figure]. Since we are using the same Standard A2 virtual machine and Azure charges for the overall data stored rather than per disk, the price stays the same.

 

 

Azure – Create a Windows VM from a generalized image

This blog shows you how to create a Windows VM from a generalized image. This uses unmanaged Azure disks.

For this example, I will be using resources deployed on Azure, i.e., generalize an Azure VM, create an image out of it, and then create a new Azure VM from that image.

Below are the steps:

  1. Generalize the VM
  2. Capture a VM image from the generalized Azure VM obtained in Step 1
  3. Create a VM from the generalized VHD image in a storage account, obtained in Step 2

Part 1: Generalize the VM

  1. Remote Desktop to the Azure virtual machine
  2. **Important** Before running “Sysprep.exe”, delete the “unattend.xml” file from the “C:\Windows\Panther” folder. If you do not, you will encounter an “OS Provision time out” exception while creating a VM from this image.

    This is because, when an image is deployed, the unattend.xml file must come from the ISO image that Windows Azure attaches to the virtual machine as part of provisioning a VM from an image.

    [Screenshot: unattend.xml in the C:\Windows\Panther folder]

  3. From the command prompt / PowerShell, change the directory to: “C:\Windows\System32\Sysprep”
  4. Run “Sysprep.exe”
  5. In the System Preparation Tool, select the option “Enter System Out-of-Box Experience (OOBE)” and check the “Generalize” check box.
  6. Select “Shutdown” from the drop-down list in Shutdown Options.
    [Screenshot: System Preparation Tool options]
  7. Once the Sysprep process completes, the VM will shutdown.
  8. From the Azure portal, you can see the status of the VM as “Stopped (Shutdown)”. Use the PowerShell cmdlet below to fetch the VM status.

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses

    [Screenshot: VM status output]

Part 2: Capture a VM image from a generalized Azure Virtual Machine

  1. De-allocate the VM

    Stop-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine

    Confirm the status of the VM:

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses

    [Screenshot: VM status after deallocation]

  2. Set the status of the VM to “Generalized”

    Set-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Generalized

    [Screenshot: output of Set-AzureRmVM -Generalized]

    Confirm the status of the VM:

    (Get-AzureRmVM -ResourceGroupName manjug_test -Name windowsmachine -Status).Statuses

    [Screenshot: VM status showing the generalized state]

  3. Create the image by running the below command:

    Save-AzureRmVMImage -ResourceGroupName manjug_test -Name windowsmachine -DestinationContainerName images -VHDNamePrefix windowsmachineimage -Path C:\Filename.json

    -DestinationContainerName is the container name where the image will be stored.
    -VHDNamePrefix is the prefix given to the image.
    -Path is the path of the JSON file that contains the details of the image that gets created.

    You can get the URL of your image from the JSON template file. Go to resources > storageProfile > osDisk > image > uri for the complete path of your image. The URL of the image looks like:

    https://<storageAccountName>.blob.core.windows.net/system/Microsoft.Compute/Images/<imagesContainer>/<templatePrefix-osDisk>.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd

    You can also verify the URI in the portal. The image is copied to a container named system in your storage account.
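    The URI can also be pulled out of the JSON file with a couple of lines of PowerShell. This is only a sketch; the exact property nesting can vary, so inspect the file if this path does not match:

    # Read the template written by Save-AzureRmVMImage and pull out the OS disk image URI
    $template = Get-Content C:\Filename.json -Raw | ConvertFrom-Json
    $template.resources.properties.storageProfile.osDisk.image.uri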

Part 3: Create a VM from a generalized VHD image in a storage account

  1. Obtain the image URI from the JSON file (Part 2, step 3), or fetch it from the Azure portal.

    From the portal:

    [Screenshot: image VHD URI shown in the Azure portal]

    From the JSON file:

    [Screenshot: image URI in the JSON file]

  2. Set the VHD URI to a variable. Example:

    $imageURI = "https://manjugtestdisks.blob.core.windows.net/system/Microsoft.Compute/Images/images/windowsmachineimage-osDisk.04a4f0cb-268a-49ea-a0d9-a203c8fa8c51.vhd"

  3. Create a Virtual Network
    Create the subnet. The following sample creates a subnet named mySubnet in the resource group manjug_test with the address prefix 10.0.0.0/24.

    $rgName = "manjug_test"
    $subnetName = "mySubnet"
    $singleSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix 10.0.0.0/24

    Create the virtual network. The following sample creates a virtual network named myVnet in the Southeast Asia location with the address prefix 10.0.0.0/16.

    $location = "Southeast Asia"
    $vnetName = "myVnet"
    $vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location `
    -AddressPrefix 10.0.0.0/16 -Subnet $singleSubnet

  4. Create a public IP address and network interface

    Create a public IP address. This example creates a public IP address named myPip.

    $ipName = "myPip"
    $pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName -Location $location `
    -AllocationMethod Dynamic

    Create the NIC. This example creates a NIC named myNic.

    $nicName = "myNic"
    $nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $location `
    -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

  5. Create the network security group and an RDP rule

    To be able to log in to your VM using RDP, you need a security rule that allows RDP access on port 3389.

    This example creates an NSG named myNsg that contains a rule called myRdpRule that allows RDP traffic over port 3389.

    $nsgName = "myNsg"

    $rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

    $nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location `
    -Name $nsgName -SecurityRules $rdpRule

  6. Create a variable for the virtual network

    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName $rgName -Name $vnetName

  7. Create the Virtual Machine

    The following PowerShell script shows how to set up the virtual machine configurations and use the uploaded VM image as the source for the new installation.

    # Enter a new user name and password to use as the local administrator account
    # for remotely accessing the VM.
    $cred = Get-Credential

    # Name of the storage account where the VHD is located. This example uses the
    # storage account "manjugtestdisks"
    $storageAccName = "manjugtestdisks"

    # Name of the virtual machine. This example sets the VM name to "winmachimage".
    $vmName = "winmachimage"

    # Size of the virtual machine. This example creates a "Standard_D2_v2" sized VM.
    # See the VM sizes documentation for more information:
    # https://azure.microsoft.com/documentation/articles/virtual-machines-windows-sizes/
    $vmSize = "Standard_D2_v2"

    # Computer name for the VM. This example sets the computer name to "winmachimage".
    $computerName = "winmachimage"

    # Name of the disk that holds the OS. This example sets the
    # OS disk name as "myOsDisk"
    $osDiskName = "myOsDisk"

    # Assign a SKU name. This example sets the SKU name as "Standard_LRS"
    # Valid values for -SkuName are: Standard_LRS (locally redundant storage), Standard_ZRS (zone redundant
    # storage), Standard_GRS (geo redundant storage), Standard_RAGRS (read access geo redundant storage),
    # Premium_LRS (premium locally redundant storage).
    $skuName = "Standard_LRS"

    # Get the storage account where the uploaded image is stored
    $storageAcc = Get-AzureRmStorageAccount -ResourceGroupName $rgName -AccountName $storageAccName

    # Set the VM name and size
    $vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize

    # Set the Windows operating system configuration and add the NIC
    $vm = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName $computerName `
    -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
    $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

    # Create the OS disk URI
    $osDiskUri = '{0}vhds/{1}-{2}.vhd' `
    -f $storageAcc.PrimaryEndpoints.Blob.ToString(), $vmName.ToLower(), $osDiskName

    # Configure the OS disk to be created from the existing VHD image (-CreateOption fromImage).
    $vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri `
    -CreateOption fromImage -SourceImageUri $imageURI -Windows

    # Create the new VM
    New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

  8. Verify that the Virtual Machine was created.

~ If this post helps at least one person, the purpose is served. ~

 

PowerShell – Script to check the Azure VHD lease status

A common misconception while working with Azure compute is to assume that no billing charges are incurred once the Azure VM is deleted. This is true only to a certain extent: once you delete the VM, billing for compute hours stops, but billing continues for the VHD (previously associated with the VM) that is still sitting in the Azure storage account.

As the title of the post states, the idea behind this script is to get the “lease status” of the Azure VHDs in all the storage accounts under your subscription. This is particularly helpful for identifying and deleting unused VHDs, which can save your organization a lot of money.
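At its core, the check looks something like the sketch below for a single container; the published script wraps this in a loop over every storage account and container in the subscription (the account, key and container names here are placeholders):

# Report the lease status of every VHD blob in a container
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
Get-AzureStorageBlob -Container "vhds" -Context $ctx |
    Where-Object { $_.Name -like '*.vhd' } |
    Select-Object Name, @{n='LeaseStatus';e={$_.ICloudBlob.Properties.LeaseStatus}}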

The complete script is uploaded in the Microsoft Script Center. Use the below link to download it.

Check the Lease status of VHDs

 

 

Azure – High level discussion of Azure Storage Architecture

Windows Azure Storage is a cloud solution that lets customers seamlessly store a virtually limitless amount of data for any duration of time. It has been in production since November 2008. It is used for storing application data, live streaming, social networking search, gaming and music content, and more.

Once your data is stored in Azure Storage, you can access it any time and from anywhere, and you only pay for what you use and store. Thousands of customers are already using Azure Storage services.

Visit the Azure Portal to create your free subscription and try out Azure Storage. Also, check out the Microsoft article for a jump start and the prerequisites for using Azure Storage.

Why use Azure Storage?

Disaster Recovery: Azure Storage stores your data in data centres that are miles apart (a minimum of 400 miles). This provides a strong guard against natural calamities like earthquakes, tornadoes, etc. Replication options (LRS, ZRS, GRS, RA-GRS) are provided and can be chosen per business needs.

Multi-Tenancy: As with other services, Azure Storage uses the concept of shared tenancy. What this means is that, to reduce storage cost, data from multiple customers is served from the same storage infrastructure, depending on the customers’ varying workloads. This reduces the amount of storage space that must be provisioned at a time compared to running each service on its own dedicated hardware.

Global Namespace: For ease of use, Azure Storage implements a global namespace that allows data to be stored and accessed in a consistent manner from any location in the world.

Global Partitioned Name-space:

Exabytes of data and beyond are stored in Azure Storage. Azure had to come up with a solution that allows its clients to store and retrieve data without much hassle. To provide this capability, Azure leverages DNS as part of the storage namespace and breaks the namespace down into three parts: an Account Name, a Partition Name, and an Object Name.

Syntax: http(s)://AccountName.<service>.core.windows.net/PartitionName/ObjectName

Account Name: This is the customer-selected storage account name (entered while creating the storage account via the Azure portal or Azure PowerShell). The account name is used to locate the primary storage cluster and the data centre where the requested data is stored. This primary location is where all subsequent requests for that account’s data go.

Partition Name: This name locates the data once the request reaches the primary cluster. It is also used to scale out the access to data across the nodes depending on the traffic.

Object Name: This identifies the individual objects within that partition.

For Blobs, the full blob name is the PartitionName.

For Tables, each entity (row) in the table has a primary key that consists of two properties: the PartitionName and the ObjectName. This distinction allows applications using Tables to group rows into the same partition to perform atomic transactions across them.

For Queues, the queue name is the PartitionName and each message has an ObjectName to uniquely identify it within the queue.
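As an illustration (the account and blob names here are made up): for a blob at https://mydata.blob.core.windows.net/photos/beach.jpg, the AccountName is “mydata” and the PartitionName is the full blob name “photos/beach.jpg”; for a table entity under the same account, the PartitionName is the entity’s PartitionKey and the ObjectName is its RowKey.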

Architectural Components:

[Diagram: Azure Storage architectural components]

Storage Stamp: This is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. The goal is to keep a stamp around 70% utilized in terms of capacity, transactions, and bandwidth. Roughly 20% is kept as a reserve (a) for disk short-stroking, to gain better seek times and higher throughput by utilizing the outer tracks of the disks, and (b) to continue providing storage capacity and availability in the presence of a rack failure within a stamp. When a storage stamp reaches 70% utilization, the location service migrates accounts to different stamps using inter-stamp replication.

Location Service (LS): Manages all the storage stamps and is also responsible for managing the account namespaces across all stamps. The LS itself is distributed across two geographical locations for its own disaster recovery.

Azure Storage provides storage from multiple locations. Each location is a data centre, which holds multiple storage stamps. To provision additional capacity, the LS can add new regions, new locations to a region, and new stamps to a location. The LS can then allocate new storage accounts on those new stamps, as well as load balance (migrate) existing storage accounts from older stamps to new stamps.

As shown in the figure, when an application requests a new storage account for storing data, it specifies the location affinity for the storage (for example, US North). The LS then chooses a storage stamp within that location as the primary stamp for the account and stores the account metadata in the chosen stamp, which tells the stamp to start taking traffic for the assigned account. The LS then updates DNS to route requests for https://AccountName.service.core.windows.net/ to that storage stamp’s virtual IP (VIP, an IP address the storage stamp exposes for external traffic).

Three Layers within Storage Stamp:

Stream Layer: This layer stores the bits on disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. The stream layer can be thought of as a distributed file system layer within a stamp. It understands files, called “streams”, how to store them, and how to replicate them, but it does not understand higher-level object constructs or their semantics.

Partition Layer: The partition layer is built for (a) managing and understanding higher level data abstractions (Blob, Table, Queue), (b) providing a scalable object namespace, (c) providing transaction ordering and strong consistency for objects, (d) storing object data on top of the stream layer, and (e) caching object data to reduce disk I/O.

Front End Layer: The Front-End (FE) layer consists of a set of stateless servers that take incoming requests. Upon receiving a request, an FE looks up the AccountName, authenticates and authorizes the request, then routes the request to a partition server in the partition layer (based on the PartitionName). The system maintains a Partition Map that keeps track of the PartitionName ranges and which partition server is serving which PartitionNames. The FE servers cache the Partition Map and use it to determine which partition server to forward each request to. The FE servers also stream large objects directly from the stream layer and cache frequently accessed data for efficiency.

Two Replication Engines:

Intra-Stamp Replication (Stream Layer): This system provides synchronous replication and is focused on making sure all the data written into a stamp is kept durable within that stamp. It keeps enough replicas of the data across different nodes in different fault domains to keep data durable within the stamp in the face of disk, node, and rack failures. Intra-stamp replication is done completely by the stream layer and is on the critical path of the customer’s write requests. Once a transaction has been replicated successfully with intra-stamp replication, success can be returned back to the customer.

Inter-Stamp Replication (Partition Layer): This system provides asynchronous replication and is focused on replicating data across stamps. Inter-stamp replication is done in the background and is off the critical path of the customer’s request. This replication is at the object level, where either the whole object is replicated or recent delta changes are replicated for a given account. Inter-stamp replication is used for (a) keeping a copy of an account’s data in two locations for disaster recovery and (b) migrating an account’s data between stamps. Inter-stamp replication is configured for an account by the location service and performed by the partition layer.

Note: The above content has been summarized from a technical paper titled: “Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency” released by Microsoft.
You can download the PDF here.