{"id":1214,"date":"2023-08-20T00:50:00","date_gmt":"2023-08-20T00:50:00","guid":{"rendered":"https:\/\/labiol.xyz\/?p=1214"},"modified":"2024-01-19T05:07:41","modified_gmt":"2024-01-19T05:07:41","slug":"migration-vm-between-environments-and-storage-traps","status":"publish","type":"post","link":"https:\/\/www.labiol.xyz\/index.php\/2023\/08\/20\/migration-vm-between-environments-and-storage-traps\/","title":{"rendered":"VM migration between environments and storage traps."},"content":{"rendered":"\n<p><strong>Problem definition:<\/strong><\/p>\n\n\n\n<p>VM migration between environments can have various subtle consequences. One of them is the behavior of the disk when migrating, for example with HCX, from a traditional environment based on SAN storage to a vSAN environment. Traditional SAN environments usually (though not always) use thick-provisioned disks. Some arrays, such as advanced HPE arrays, have zero-detection capabilities, so thin provisioning happens only on the array and double thin provisioning is avoided. However, this approach may result in significant overprovisioning (exposing volumes beyond the physical capacity of the disk array). The choice will, of course, depend on the administrator and their best practices, or on the capabilities of the specific hardware.<\/p>\n\n\n\n<p>In fact, what we anticipate from such calculations is that there are only two entries in the table: occupied space and free space. The occupied space includes not only data but also metadata, such as file system and logical volume structures, which amount to a few percent and can typically be omitted as long as a safety margin is added to the final allocations.<\/p>\n\n\n\n<p>So, in general, our expectation can be represented (in a very simplified form) as below:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"749\" height=\"503\" src=\"http:\/\/3.127.215.50\/wp-content\/uploads\/2024\/01\/image-7.png\" alt=\"\" class=\"wp-image-1222\" srcset=\"https:\/\/www.labiol.xyz\/wp-content\/uploads\/2024\/01\/image-7.png 749w, https:\/\/www.labiol.xyz\/wp-content\/uploads\/2024\/01\/image-7-300x201.png 300w\" sizes=\"auto, (max-width: 749px) 100vw, 749px\" \/><\/figure>\n\n\n\n<p>In reality, however, what we are dealing with also includes all the files that were created and then deleted from the disk during its lifetime. This is because older operating systems (such as Windows Server 2003 and 2008, and older Red Hat releases) do not support the UNMAP command (a SCSI command) that informs the ESXi host&#8217;s storage layer that a file has been deleted from the file system and that the corresponding blocks should also be released at the VMDK file level. 
The current situation can be illustrated by the following diagram:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"749\" height=\"517\" src=\"http:\/\/3.127.215.50\/wp-content\/uploads\/2024\/01\/image-6.png\" alt=\"\" class=\"wp-image-1221\" srcset=\"https:\/\/www.labiol.xyz\/wp-content\/uploads\/2024\/01\/image-6.png 749w, https:\/\/www.labiol.xyz\/wp-content\/uploads\/2024\/01\/image-6-300x207.png 300w\" sizes=\"auto, (max-width: 749px) 100vw, 749px\" \/><\/figure>\n\n\n\n<p>The &#8220;trash&#8221; portion can be quite huge. Based on my observations, it can account for as much as 80-90% of the free space.<\/p>\n\n\n\n<p>As you can imagine, migrating such a disk structure means we will be migrating everything! This includes data that was deleted, perhaps years ago, but still occupies space at the VMDK file level (even though it is gone from the file system).<\/p>\n\n\n\n<p><strong>Solution<\/strong>:<\/p>\n\n\n\n<p>As usual, there is no simple solution, and as usual, there is no single solution. There are two options: we can try to clean up before the migration or after it, depending on our organizational, technological, and time constraints. The approach I took involved migrating the &#8216;junk&#8217; to the cloud environment and then cleaning it up there.<\/p>\n\n\n\n<p><strong>Cleaning up<\/strong><\/p>\n\n\n\n<p>The algorithm for each disk looked as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>run a script that creates a large file filled with zeros, deletes it, and repeats the process to improve efficiency;<\/li>\n\n\n\n<li>copy the VM (Storage vMotion) between vSAN datastores;<\/li>\n\n\n\n<li>copy the VM (Storage vMotion) back to where it should be.<\/li>\n<\/ul>\n\n\n\n<p>Fortunately, creating a file filled with zeros doesn&#8217;t take too long, especially in a fast cloud environment. 
Below is a PowerShell script that can be used for this purpose; it is based on one found on the internet and has worked successfully even on older Windows systems.<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-powershell\" data-lang=\"PowerShell\"><code>&lt;#\n.SYNOPSIS\n Writes a large file full of zeroes to a volume in order to allow a storage\n appliance to reclaim unused space.\n\n.DESCRIPTION\n Creates a file called ThinSAN.tmp on the specified volume that fills the\n volume up to leave only the percent free value (default is 5%) with zeroes.\n This allows a storage appliance that is thin provisioned to mark that drive\n space as unused and reclaim the space on the physical disks.\n \n.PARAMETER Root\n The folder to create the zeroed out file in.  This can be a drive root (c:\\)\n or a mounted folder (m:\\mounteddisk).  This must be the root of the mounted\n volume, it cannot be an arbitrary folder within a volume.\n \n.PARAMETER PercentFree\n A float representing the percentage of total volume space to leave free.  The\n default is .05 (5%)\n\n.EXAMPLE\n PS&gt; Write-ZeroesToFreeSpace -Root &quot;c:\\&quot;\n \n This will create a file of all zeroes called c:\\ThinSAN.tmp that will fill the\n c drive up to 95% of its capacity.\n \n.EXAMPLE\n PS&gt; Write-ZeroesToFreeSpace -Root &quot;c:\\MountPoints\\Volume1&quot; -PercentFree .1\n \n This will create a file of all zeroes called\n c:\\MountPoints\\Volume1\\ThinSAN.tmp that will fill up the volume that is\n mounted to c:\\MountPoints\\Volume1 to 90% of its capacity.\n\n.EXAMPLE\n PS&gt; Get-WmiObject Win32_Volume -filter &quot;drivetype=3&quot; | Write-ZeroesToFreeSpace\n \n This will get a list of all local disks (type=3) and fill each one up to 95%\n of their capacity with zeroes.\n \n.NOTES\n You must be running as a user that has permissions to write to the root of the\n volume you are running this script against. 
This requires elevated privileges\n using the default Windows permissions on the C drive.\n#&gt;\nparam(\n  [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]\n  [ValidateNotNullOrEmpty()]\n  [Alias(&quot;Name&quot;)]\n  $Root,\n  [Parameter(Mandatory=$false)]\n  [ValidateRange(0,1)]\n  $PercentFree =.05\n)\nprocess{\n  #Convert the $Root value to a valid WMI filter string\n  $FixedRoot = ($Root.Trim(&quot;\\&quot;) -replace &quot;\\\\&quot;,&quot;\\\\&quot;) + &quot;\\\\&quot;\n  $FileName = &quot;ThinSAN.tmp&quot;\n  $FilePath = Join-Path $Root $FileName\n  \n  #Check and make sure the file doesn&#39;t already exist so we don&#39;t clobber someone&#39;s data\n  if( (Test-Path $FilePath) ) {\n    Write-Error -Message &quot;The file $FilePath already exists, please delete the file and try again&quot;\n  } else {\n    #Get a reference to the volume so we can calculate the desired file size later\n    $Volume = gwmi win32_volume -filter &quot;name=&#39;$FixedRoot&#39;&quot;\n    if($Volume) {\n      #I have not tested for the optimum IO size ($ArraySize), 64kb is what sdelete.exe uses\n      $ArraySize = 64kb\n      #Calculate the amount of space to leave on the disk\n      $SpaceToLeave = $Volume.Capacity * $PercentFree\n      #Calculate the file size needed to leave the desired amount of space\n      $FileSize = $Volume.FreeSpace - $SpacetoLeave\n      #Create an array of zeroes to write to disk\n      $ZeroArray = new-object byte[]($ArraySize)\n      \n      #Open a file stream to our file \n      $Stream = [io.File]::OpenWrite($FilePath)\n      #Start a try\/finally block so we don&#39;t leak file handles if any exceptions occur\n      try {\n        #Keep track of how much data we&#39;ve written to the file\n        $CurFileSize = 0\n        while($CurFileSize -lt $FileSize) {\n          #Write the entire zero array buffer out to the file stream\n          $Stream.Write($ZeroArray,0, $ZeroArray.Length)\n          #Increment our file size by the 
amount of data written to disk\n          $CurFileSize += $ZeroArray.Length\n        }\n      } finally {\n        #always close our file stream, even if an exception occurred\n        if($Stream) {\n          $Stream.Close()\n        }\n        #always delete the file if we created it, even if an exception occurred\n        if( (Test-Path $FilePath) ) {\n          del $FilePath\n        }\n      }\n    } else {\n      Write-Error &quot;Unable to locate a volume mounted at $Root&quot;\n    }\n  }\n}<\/code><\/pre><\/div>\n\n\n\n<p>After copying the VM between vSAN datastores, its size is reduced almost immediately after the migration.<\/p>\n\n\n\n<p>There is another option with the newest VMware public cloud implementations (referring to Azure VMware Solution, AVS), where the UNMAP function can be triggered from the AVS interface.<\/p>\n\n\n\n<p>For testing purposes, a script that creates a large, non-empty (!) file can also be useful. After many tests (again, on old versions of Windows), I can recommend the one below because of its speed and simplicity:<\/p>\n\n\n\n<div class=\"hcb_wrap\"><pre class=\"prism line-numbers lang-powershell\" data-lang=\"PowerShell\"><code># Set the size of the file in bytes\n$size = 10GB\n$chunkSize = 1GB\n\n# Allocate the chunk buffer and the random generator once, outside the loop\n$chunk = New-Object byte[] $chunkSize\n$rand = New-Object System.Random\n\n# Create the file and write random data to it in chunks\n$file = New-Item -ItemType File e:\\file7.txt -Force\n$stream = $file.OpenWrite()\n$bytesWritten = 0\nwhile ($bytesWritten -lt $size) {\n    # Fill the buffer with fresh random data for the current chunk\n    $rand.NextBytes($chunk)\n\n    # Write the chunk to the file\n    $stream.Write($chunk, 0, $chunk.Length)\n    $stream.Flush()\n\n    # Update the number of bytes written\n    $bytesWritten += $chunkSize\n}\n\n# Close the file stream\n$stream.Close()\n\n# Verify the file size\n(Get-Item e:\\file7.txt).length<\/code><\/pre><\/div>\n\n\n\n<p><strong>Lessons learned:<\/strong><\/p>\n\n\n\n<p>Migration operations, especially in large, complex environments, 
are not trivial. Beyond simple tasks, such as calculating the target space, one must anticipate and identify potential risks and traps. I hope this article has clearly highlighted, or perhaps reminded you of, some old challenges, and that it aids you in a successful migration.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Problem definition: VM migration between environments can have various subtle consequences. One of them is the behavior of the disk when migrating, for example, using HCX, from a traditional environment based on SAN storage to a vSAN environment. Traditional SAN environments usually (though not always) utilize thick-provisioned disks. Some arrays, &hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-1214","post","type-post","status-publish","format-standard","hentry","category-vmware"],"_links":{"self":[{"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/posts\/1214","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/comments?post=1214"}],"version-history":[{"count":11,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/posts\/1214\/revisions"}],"predecessor-version":[{"id":1227,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/posts\/1214\/revisions\/1227"}],"wp:attachment":[{"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/media?parent=1214"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/categories?post=1214"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.labiol.xyz\/index.php\/wp-json\/wp\/v2\/tags?post=1214"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}