System Center Virtual Machine Manager 2019 Update Rollup 1 available

The first update rollup for SCVMM 2019 was released by Microsoft on February 4th.

This fixes several issues:

  • Unable to add Windows Server 2019 hosts in an untrusted domain to SCVMM.
  • Changes to VM network adapter or VM network overwrites associated ACL.
  • Unable to pull LLDP information on pNICs bound to a vSwitch.
  • Long-running service template deployments time out after 3 hours. The timeout can now be raised above 3 hours by setting the HKLM\Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\GuestCommunicatorStatusTimeoutSecs registry key to the desired value (see the sketch after this list).
  • VMM service experiences high memory usage when a large number of objects is created in tbl_ADHC_HostVolume.
  • Unable to assign a VM network to VMs on the hosts.
  • Automatic Dynamic Optimization fails on clusters in an untrusted domain.
  • VMM jobs take a long time to run whenever the VMM server fails over to another node.
  • Storage Provider Refresh fails when the NIC has no MAC address present.
  • Unable to create a file share with the same name on different file servers through SCVMM console.
  • Cluster creation fails with an ‘Access denied’ exception when the VMM service is running under a gMSA account.
  • In addition to these, all the issues fixed in System Center 2016 VMM UR8 and prior URs for VMM 2016 are also fixed in System Center VMM 2019 UR1. 
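
For the service template timeout fix above, here's a minimal sketch of setting the registry value with PowerShell; the 14400-second (4-hour) value and the service restart step are illustrative assumptions:

# Raise the guest communication timeout for long-running service template
# deployments (value in seconds; 14400 = 4 hours is just an example).
$key = 'HKLM:\Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings'
New-ItemProperty -Path $key -Name 'GuestCommunicatorStatusTimeoutSecs' -PropertyType DWord -Value 14400 -Force

# Restart the VMM service so the new value is picked up (assumption).
Restart-Service -Name SCVMMService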

New features have been added in SCVMM 2019:

  • Ability to deploy Ubuntu 18.04 VMs
  • Nested virtualization can be enabled via VM templates, service templates and also when creating a new VM from the console
  • Cluster rolling upgrade is now supported for S2D clusters.
  • Deduplication is supported on ReFS volumes for hyperconverged and SOFS deployments.
  • Storage DO (Dynamic Optimization) helps prevent cluster shared storage (CSVs and file shares) from becoming full due to expansion or new VHDs being placed on it. You can now set a threshold value that triggers a warning when free space on the cluster shared storage falls below it during a new disk placement or automatic migration of VHDs to other shared storage in the cluster.
  • Support for storage health monitoring
    Storage health monitoring helps you monitor the health and operational status of storage pools, LUNs, and physical disks in the VMM fabric. You can monitor storage health on the Fabric page of the VMM console.
  • VMM 2019 supports configuration of SLB VIPs while deploying multi-tier application by using the service templates
  • VMM 2019 supports encryption of VM networks. Using the new encrypted networks feature, end-to-end encryption can easily be configured on VM networks by using the network controller (NC). This encryption prevents traffic between two VMs on the same network and subnet from being read and manipulated. Encryption is controlled at the subnet level and can be enabled or disabled for each subnet of the VM network.
  • In VMM 2019, you can configure a Layer 3 forwarding gateway using the VMM console
  • Support for Static MAC address on VMs deployed on a VMM cloud
    This feature allows you to set a static MAC address on VMs deployed on a cloud. You can also change the MAC address from static to dynamic and vice versa for already deployed VMs (see the sketch after this list).
  • Azure Integration: VM update management through VMM using an Azure Automation subscription. VMM 2019 introduces the ability to patch and update on-prem VMs (managed by VMM) by integrating VMM with an Azure Automation subscription.
  • New RBAC Role – Virtual Machine Administrator
    In a scenario where enterprises want to create a user role for troubleshooting, the user needs access to all the VMs so they can make any required changes to resolve the issue. The user also needs access to the fabric to identify the root cause of the issue. However, for security reasons, this user should not be given privileges to make changes on the fabric (such as adding storage or hosts). The current role-based access control (RBAC) in VMM does not have a role defined for this persona, and the existing Delegated Admin and Fabric Admin roles have either too few or more than the necessary permissions for troubleshooting alone. To address this, VMM 2019 supports a new role called Virtual Machine Administrator. A user in this role has read and write access to all VMs but read-only access to the fabric.
  • Group Managed Service Account (gMSA) helps improve the security posture and provides convenience through automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators. VMM 2019 supports the use of gMSA for the management server service account.
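
As a sketch of the static MAC capability mentioned above, the VMM cmdlets below switch an adapter between MAC address types; the VM name and MAC address are placeholders:

# Hypothetical VM name; grab the VM and its network adapter.
$vm = Get-SCVirtualMachine -Name 'MyVM01'
$adapter = Get-SCVirtualNetworkAdapter -VM $vm

# Switch the adapter to a static MAC address (placeholder value)...
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $adapter -MACAddressType Static -MACAddress '00:1D:D8:B7:1C:00'

# ...or back to a dynamically assigned one.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $adapter -MACAddressType Dynamic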

New features added by Update Rollup 1:

  • Support for management of replicated library shares
    Large enterprises usually have multi-site datacenter deployments to cater to offices across the globe. These enterprises typically use a locally available library server to access files for VM deployment rather than accessing library shares from a remote location, to avoid network-related issues. However, library files need to be consistent across all datacenters to ensure uniform VM deployments, so organizations use replication technologies to keep library contents in sync. VMM now supports the management of replicated library servers. You can use any replication technology, such as DFSR, and manage the replicated shares through VMM.
  • Configuration of DCB settings on S2D clusters
    Remote Direct Memory Access (RDMA) in conjunction with Data Center Bridging (DCB) helps achieve a level of performance and losslessness in an Ethernet network similar to that of Fibre Channel networks. VMM 2019 UR1 supports configuration of DCB on S2D clusters.

Note

You must configure the DCB settings consistently across all the hosts and the fabric network (switches). A misconfigured DCB setting on any host or fabric device is detrimental to S2D performance.
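
For context, here's a minimal host-side sketch of the kind of DCB configuration that has to match on every S2D node; the policy name, priority 3, and bandwidth percentage are illustrative assumptions:

# Tag SMB Direct (RDMA) traffic on port 445 with priority 3 (example value).
New-NetQosPolicy -Name 'SMB' -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for that priority only.
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and apply QoS on the relevant pNIC (name is a placeholder).
New-NetQosTrafficClass -Name 'SMB' -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name 'pNIC1'

# Keep the host authoritative instead of accepting DCB settings from the switch.
Set-NetQosDcbxSetting -Willing $false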

  • User experience improvements in logical networks
    In VMM 2019 UR1, the experience of creating logical networks has been enhanced. Logical networks are now grouped based on use cases, with a product description, an illustration for each logical network type, and a dependency graph.
  • Additional options to enable nested virtualization
    You can now enable nested virtualization while creating a new VM and when deploying VMs through VM templates and service templates. In earlier releases, nested virtualization was supported only on already deployed VMs (see the sketch below).
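
As an illustration of the options above, enabling nested virtualization on an existing VM through the VMM PowerShell module might look like this; the VM name is a placeholder:

# Hypothetical VM name; the VM must be powered off before changing the setting.
$vm = Get-SCVirtualMachine -Name 'NestedHost01'
Stop-SCVirtualMachine -VM $vm

# Enable nested virtualization, then start the VM again.
Set-SCVirtualMachine -VM $vm -EnableNestedVirtualization $true
Start-SCVirtualMachine -VM $vm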

System Center Virtual Machine Manager ports and protocols

Port and protocol exceptions

Connection | Port/protocol | Details | Configure
---------- | ------------- | ------- | ---------
VMM server to VMM agent on Windows Server-based hosts/remote library server | 80: WinRM; 135: RPC; 139: NetBIOS; 445: SMB (over TCP) | Used by the VMM agent | Inbound rule on hosts; can't modify
VMM server to VMM agent on Windows Server-based hosts/remote library server | 443: HTTPS | BITS data channel for file transfers | Inbound rule on hosts; modify in VMM setup
VMM server to VMM agent on Windows Server-based hosts/remote library server | 5985: WinRM | Control channel | Inbound rule on hosts; modify in VMM setup
VMM server to VMM agent on Windows Server-based hosts/remote library server | 5986: WinRM | Control channel (SSL) | Inbound rule on hosts; can't modify
VMM server to VMM guest agent (VM data channel) | 443: HTTPS | BITS data channel for file transfers. The VMM guest agent is a special version of the VMM agent; it's installed on VMs that are part of a service template, and on Linux VMs (with or without a service template) | Inbound rule on machines running the agent; can't modify
VMM server to VMM guest agent (VM control channel) | 5985: WinRM | Control channel | Inbound rule on machines running the agent; can't modify
VMM host to host | 443: HTTPS | BITS data channel for file transfers | Inbound rule on hosts and VMM server; modify in VMM setup
VMM server to VMware ESXi servers/Web Services | 22: SFTP | | Inbound rule on hosts; can't modify
VMM server to load balancer | 80: HTTP; 443: HTTPS | Channel used for load balancer management | Modify in load balancer provider
VMM server to remote SQL Server database | 1433: TDS | SQL Server listener | Inbound rule on SQL Server; modify in VMM setup
VMM server to WSUS update servers | 80/8530: HTTP; 443/8531: HTTPS | Data and control channels | Inbound rule on WSUS server; can't modify from VMM
VMM library server to Hyper-V hosts | 443: HTTPS | BITS data channel for file transfers | Inbound rule on hosts (443); modify in VMM setup
VMM console to VMM | WCF: 8100 (HTTP); WCF: 8101 (HTTPS); Net.TCP: 8102 | | Inbound rule on VMM console machine; modify in VMM setup
VMM server to storage management service | WMI | Local call |
Storage management service to SMI-S provider | CIM-XML | | Provider-specific
VMM server to Baseboard Management Controller (BMC) | 443: HTTPS (SMASH over WS-Management) | | Inbound rule on BMC device; modify on BMC device
VMM server to Baseboard Management Controller (BMC) | 623: IPMI | | Inbound rule on BMC device; modify on BMC device
VMM server to Windows PE agent | 8101: WCF; 8103: WCF | 8101 is used for the control channel; 8103 is used for time sync | Modify in VMM setup
VMM server to WDS PXE provider | 8102: WCF | | Inbound rule on PXE server
VMM server to Hyper-V host in untrusted/perimeter domain | 443: HTTPS (BITS) | BITS data channel for file transfers | Inbound rule on VMM server
Library server to Hyper-V host in untrusted/perimeter domain | 443: HTTPS | BITS data channel for file transfers | Inbound rule on VMM library
VMM server to Windows file server | 80: WinRM; 135: RPC; 139: NetBIOS; 445: SMB (over TCP) | Used by the VMM agent | Inbound rule on file server
VMM server to Windows file server | 443: HTTPS | BITS used for file transfer | Inbound rule on file server
VMM server to Windows file server | 5985/5986: WinRM | Control channel | Inbound rule on file server
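
To quickly spot a blocked port from the table above, here's a small connectivity check run from the VMM server with Test-NetConnection; the host name and the 443/5985 pair are examples:

# Hypothetical Hyper-V host; test the BITS data channel and the WinRM control channel.
foreach ($port in 443, 5985) {
    Test-NetConnection -ComputerName 'hyperv01.contoso.com' -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}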

For more information, read the Microsoft docs here: https://docs.microsoft.com/en-us/system-center/vmm/plan-ports-protocols?view=sc-vmm-2019

Hyper-V: Updating integration components for Windows Server 2016

The way to do things before was painful: you had to use Windows Update to get the updated VMGuest.ISO, mount it inside the guest, run the update from the ISO, and reboot the VM. This had to be done manually on each VM.

You could use System Center Virtual Machine Manager (SCVMM) which allowed for batch reboots.

In Windows Server 2016, things have changed for the better: Windows Update will automatically update the integration components inside the VM if the guest is running any of the OSes below (a quick check follows the list):

  • Windows Server 2016
  • Windows 10
  • Windows Server 2012 R2
  • Windows 8.1
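
To verify what each guest currently reports, the following host-side one-liner might help; IntegrationServicesVersion and IntegrationServicesState are standard properties of the Hyper-V VM object:

# List the integration services version and state per VM, as reported to the host.
Get-VM | Select-Object Name, IntegrationServicesVersion, IntegrationServicesState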

If you are running an older OS like those below, you need to enable the Data Exchange integration service and make sure it is running (see the sketch after this list):

  • Windows Server 2012
  • Windows 8
  • Windows 7
  • Windows Vista SP2
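
Here's a minimal sketch of enabling and verifying Data Exchange from the host; the VM name is a placeholder, and Data Exchange appears under its Hyper-V name 'Key-Value Pair Exchange':

# Hypothetical VM name; enable the Data Exchange (Key-Value Pair Exchange) service.
Enable-VMIntegrationService -VMName 'LegacyVM01' -Name 'Key-Value Pair Exchange'

# Confirm that it is enabled and running.
Get-VMIntegrationService -VMName 'LegacyVM01' -Name 'Key-Value Pair Exchange'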

But now we have another scenario: what if I live migrated my VMs from Windows Server 2012/2012 R2 to Windows Server 2016? Will Windows Update work from the start? Not really. What we need to do is update the integration services manually, by downloading the latest version as a .cab file from the Microsoft Download Center here: https://support.microsoft.com/en-us/help/3071740/hyper-v-integration-components-update-for-windows-virtual-machines-tha and running a PowerShell cmdlet:

Add-WindowsPackage -Online -PackagePath <path to .CAB file>

This can now be automated with PowerShell to run in batches on all VMs, as sketched below.
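
A minimal sketch using PowerShell remoting, assuming the .cab has been downloaded locally, that remoting is enabled in the guests, and that the guests run Windows 8/Server 2012 or later (where Add-WindowsPackage is available); all names and paths are placeholders:

# Placeholder path to the downloaded .cab and placeholder guest names.
$cab = 'C:\Temp\IntegrationComponents.cab'
$vms = 'VM01', 'VM02'

foreach ($vm in $vms) {
    $session = New-PSSession -ComputerName $vm

    # Stage the .cab in the guest and install it; the reboot is suppressed so
    # all restarts can be batched at the end.
    Invoke-Command -Session $session { New-Item -ItemType Directory -Path 'C:\Temp' -Force | Out-Null }
    Copy-Item -Path $cab -Destination 'C:\Temp\' -ToSession $session
    Invoke-Command -Session $session {
        Add-WindowsPackage -Online -PackagePath 'C:\Temp\IntegrationComponents.cab' -NoRestart
    }
    Remove-PSSession $session
}

# Reboot the guests in one controlled batch.
Restart-Computer -ComputerName $vms -Wait -For PowerShell -Force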