
Scale-Out File Server Improvements in Windows Server 2019

This blog discusses new features in the upcoming release of Windows Server 2019.  Windows Insiders currently receive preview builds of Windows Server 2019.  We urge you to become an Insider and play a part in making Windows Server 2019 the best that it can be.  To do so, go to this link and sign up.

Failover Clustering Scale-Out File Server (SOFS) was first introduced in Windows Server 2012 to take advantage of Cluster Shared Volumes (CSV).  SOFS works in conjunction with Server Message Block (SMB), so as SMB has been updated in newer releases, Scale-Out File Server has been updated along with it.  There are several enhancements I want to bring to light in this post.

SMB Connections move on connect

Scale-Out File Server (SOFS) relies on DNS round robin for inbound connections sent to cluster nodes.  When using Storage Spaces on Windows Server 2016 and older, this behavior can be inefficient: if the connection is routed to a cluster node that is not the owner of the Cluster Shared Volume (aka the coordinator node), all data redirects over the network to another node before returning to the client. The SMB Witness service detects this lack of direct I/O and moves the connection to a coordinator.  This can lead to delays.
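
To see which node currently owns (coordinates) a given CSV, you can query the cluster from PowerShell on any node.  This is just a minimal sketch; the output will show whatever volumes your cluster has:

# List each Cluster Shared Volume and the node that currently coordinates it
Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State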

In Windows Server 2019, we are much more efficient.  The SMB Server service determines if direct I/O on the volume is possible.  If direct I/O is possible, it passes the connection on.  If it is redirected I/O, it will move the connection to the coordinator before I/O starts.  Synchronous client redirection required changes in the SMB client, so only Windows Server 2019 clients can use this new functionality when talking to a Windows 2019 Failover Cluster.  SMB clients from older OS versions will continue relying upon the SMB Witness to move to a more optimal server.
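
If you want to confirm where a client's SMB sessions actually landed, a quick check from both sides can help.  This is only an illustrative sketch using the built-in SMB cmdlets:

# On the SMB client: show which server name each share connection is using
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

# On a file server cluster node: show clients registered with the SMB Witness service
Get-SmbWitnessClient | Select-Object ClientName, FileServerNodeName, WitnessNodeName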

SMB Bypass of the CSV File System

In a Windows Server 2016 SOFS using Storage Spaces, a client connects to the SMB Server, the SMB Server talks to the CSV File System, and the CSV File System talks to ReFS.  All I/Os from the remote SMB client go through SMB Server, CSVFS, ReFS, and the rest of the storage stack.  Since direct I/O on ReFS is not possible, the CSV File System only helps with hiding storage failures.  The same applies to SMB Continuous Availability.  We can therefore keep only one layer that hides storage failures and bypass the CSV File System.  To do that, the SMB Server queries the path from CSVFS to ReFS and opens files directly on ReFS.  All I/Os from these opens bypass CSVFS and go from the SMB Server directly to ReFS.
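
You can check whether a CSV is currently doing direct or redirected I/O from PowerShell.  This is a minimal sketch, run from any cluster node:

# Show whether each CSV is in Direct, FileSystemRedirected, or BlockRedirected mode per node
Get-ClusterSharedVolumeState | Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason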

Infrastructure Scale-Out File Server

There is a new Scale-Out File Server role in Windows Server 2019 called Infrastructure File Server.  When you create an Infrastructure File Server, it will automatically create a single namespace share for the CSV drive (i.e. \\InfraSOFSName\Volume1, etc.).  In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (the Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server.  There can be at most one Infrastructure SOFS cluster role on a Failover Cluster.

To create the Infrastructure SOFS, you would need to use PowerShell.  For example:

Add-ClusterScaleOutFileServerRole -Cluster MyCluster -Infrastructure -Name InfraSOFSName
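
Once the role is online, the automatically created namespace share should be visible on the cluster nodes.  As a follow-up check (the share name will match your CSV volume), something like this can be used:

# Run on a cluster node: confirm the auto-created, continuously available share exists
Get-SmbShare | Select-Object Name, Path, ContinuouslyAvailable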

SMB Loopback

Server Message Block (SMB) has been enhanced to properly support local loopback to itself, which was previously not supported.  This hyper-converged SMB loopback CA is achieved by virtual machines accessing their virtual disk (VHDX) files, where the owning VM's identity is forwarded between the client and server.

This is a role that Cluster Sets takes advantage of, with the path to the VHD/VHDX placed as \\InfraSOFSName\Volume1.  This \\InfraSOFSName\Volume1 path can then be used by the virtual machine whether it is local or remote.
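
For illustration, attaching a virtual disk to a VM through that namespace path might look like the following.  The VM, folder, and file names here are hypothetical:

# Hypothetical example: attach a VHDX to a VM via the Infrastructure SOFS namespace path
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -Path "\\InfraSOFSName\Volume1\VM01\VM01.vhdx"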

Identity Tunneling

In Windows Server 2016, if Hyper-V virtual machines are hosted on a SOFS share, you must grant the machine accounts of the Hyper-V compute nodes permission to access the VHD/VHDX files.  If the virtual machines and the VHD/VHDX files are running on the same cluster, the user must also have rights.  This can make management difficult, as two sets of permissions are needed.
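
As an illustration of the Windows Server 2016 approach, granting a compute node's machine account access to the share might look like this.  The domain, share, and node names are hypothetical:

# Hypothetical example (Windows Server 2016 model): grant a Hyper-V node's machine account access to the SOFS share
Grant-SmbShareAccess -Name "VMShare" -AccountName "CONTOSO\HV-Node1$" -AccessRight Full -Force
# File-level permissions on the VHD/VHDX files need to be granted separately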

In Windows Server 2019 when using SOFS, we now have “identity tunneling” on Infrastructure shares.  When you access an Infrastructure share from the same cluster or Cluster Set, the application token is serialized and tunneled to the server, and VM disk access is done using that token.  This works even if your identity is Local System, a service, or a virtual machine account.

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage

