Azure Stack Hub hotfix 1.2002.69.179

Summary

  • Fixed a bug in which BCDR runner logs filled up MASLogs folders on physical hosts.

Fixes rolled up from previous hotfix releases

  • Patched SDN-related binaries on the physical nodes.
  • Fixed an invalid state in the Storage Resource Provider for suspended storage accounts migrated from 1910.
  • Improved resiliency of VM provisioning and extension operations.
  • Improved SDN network reliability on the physical nodes.
  • Fixed an issue in which a virtual subnet was not being cleaned up if the tunnel was moved to a different GW VM and then the VGW was deleted.
  • Fixed an issue that could cause registration and internal secret rotation to fail.
  • Added memory-specific settings to crash dump settings.
  • Remediated SMB handle invalidation issue triggered by ESENT error 59 event in TableServer.
  • Fixed an issue which impacted the reliability of downloading subsequent updates.
  • Improved reliability of NuGet package installation after unexpected failure.
  • Fixed an issue where subscription dropdown validation failed when the user only had resource group write permission.
  • Fixed an issue in which the blob download page failed when downloading large items.
  • Fixed an issue in which the configured retention period for deleted storage accounts was reverted.
  • Improved Network Controller stability.
  • Increased Network Controller log retention to aid in diagnosis.
  • Fixed an issue where marketplace downloads could fail due to a certificate validation error.
  • Included the deployment provider identity certificate in internal secret rotation.
  • Fixed Windows storage WMI calls to remain responsive, improving the reliability of storage management operations.
  • Added TPM status monitor for physical hosts.
  • Restarted SQL VMs to mitigate a potential database access issue that affected access to the portal.
  • Improved reliability of storage blob and table service.
  • Fixed an issue in which virtual machine scale set creation with the Standard_DS2_v2 SKU through the UI always failed.
  • Configuration update improvements.
  • Fixed KVS enumerator leak in DiskRP to improve reliability of disk operations.
  • Re-enabled the ability to generate host crash dumps and trigger NMI crashes for hangs.
  • Addressed DNS server vulnerability described in CVE-2020-1350.
  • Changes that addressed cluster instability.
  • Improved reliability of JEA endpoint creation.
  • Fixed a bug to unblock concurrent VM creation in batches of 20 or more.
  • Improved the reliability and stability of the portal, adding a monitoring capability to restart the hosting service if it experiences any downtime.
  • Addressed an issue where some alerts were not paused during update.
  • Improved diagnostics around failures in DSC resources.
  • Improved the error message generated by an unexpected failure in the bare metal deployment script.
  • Added resiliency during physical node repair operations.
  • Fixed a code defect that sometimes caused HRP SF app to become unhealthy. Also fixed a code defect that prevented alerts from being suspended during update.
  • Added resiliency to image creation code when the destination path is unexpectedly not present.
  • Added disk cleanup interface for ERCS VMs and ensured that it runs prior to attempting to install new content to those VMs.
  • Improved quorum check for Service Fabric node repair in the auto-remediation path.
  • Improved logic around bringing cluster nodes back online in rare cases where outside intervention puts them into an unexpected state.
  • Improved resiliency of engine code to ensure typos in machine name casing do not cause unexpected state in the ECE configuration when manual actions are used to add and remove nodes.
  • Added a health check to detect VM or physical node repair operations that were left in a partially completed state from previous support sessions.
  • Improved diagnostic logging for installation of content from NuGet packages during update orchestration.
  • Fixed the internal secret rotation failure for customers who use Azure AD as the identity system and block ERCS outbound internet connectivity.
  • Increased the default timeout of Test-AzureStack for AzsScenarios to 45 minutes.
  • Improved HealthAgent update reliability.
  • Fixed an issue where VM repair of ERCS VMs was not being triggered during remediation actions.
  • Made host update resilient to issues caused by a silent failure to clean up stale infrastructure VM files.
  • Added a preventative fix for certutil parsing errors when using randomly generated passwords.
  • Added a round of health checks prior to the engine update, so that failed admin operations can be allowed to continue running with their original version of orchestration code.
  • Fixed ACS backup failure when the ACSSettingsService backup finished first.
  • Upgraded Azure Stack AD FS farm behavior level to v4. Azure Stack Hubs deployed with 1908 or later are already on v4.
  • Improved reliability of the host update process.
  • Fixed a certificate renewal issue that could have caused internal secret rotation to fail.
  • Fixed the new time server sync alert to correct an issue where it incorrectly detected a time sync issue when the time source was specified with the 0x8 flag.
  • Corrected a validation constraint error that occurred when the new automatic log collection interface incorrectly flagged https://login.windows.net/ as an invalid Azure AD endpoint.
  • Fixed an issue that prevented the use of SQL auto backup via the SQLIaaSExtension.
  • Corrected the alerting used in Test-AzureStack when validating the network controller certificates.
  • Reduced alert triggers in order to avoid unnecessary proactive log collections.
  • Improved reliability of storage upgrade by eliminating Windows Health Service WMI call timeout.

Hotfix information

To apply this hotfix, you must have version 1.2002.0.35 or later.
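One way to confirm the stamp version is from the operator (admin management) endpoint with the Azure Stack admin PowerShell modules. This is a minimal sketch; the cmdlet and property names follow the Azs.Update.Admin module as published for the 2002 branch, so verify them against the module version you have installed.

```powershell
# Assumes you are already signed in to the operator endpoint with the
# Azure Stack admin PowerShell modules installed.
Get-AzsUpdateLocation | Select-Object -Property CurrentVersion, State
# CurrentVersion should be 1.2002.0.35 or a later 1.2002.x build
# before you apply this hotfix.
```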

Important

As outlined in the release notes for the 2002 update, make sure that you refer to the update activity checklist on running Test-AzureStack (with specified parameters), and resolve any operational issues that are found, including all warnings and failures. Also, review active alerts and resolve any that require action.
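As a sketch of that validation step, the following connects to the privileged endpoint and runs Test-AzureStack with the UpdateReadiness group, which is the pre-update validation set; the endpoint address and credential are placeholders for your environment.

```powershell
# Placeholders: replace the privileged endpoint IP and supply your
# CloudAdmin credential for your environment.
$cred = Get-Credential -Message "CloudAdmin credential"
$pep  = New-PSSession -ComputerName "<PEP IP address>" `
          -ConfigurationName PrivilegedEndpoint -Credential $cred

# Run the pre-update validation group; resolve all warnings and
# failures it reports before applying the hotfix.
Invoke-Command -Session $pep -ScriptBlock { Test-AzureStack -Group UpdateReadiness }

Remove-PSSession $pep
```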

File information

Download the following files. Then, follow the instructions in Apply updates in Azure Stack to apply this update.

Download the zip file now.

Download the hotfix xml file now.

More information

Azure Stack Hub update resources

Apply updates in Azure Stack

Monitor updates in Azure Stack by using the privileged endpoint
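As an example of the monitoring linked above, update progress can be checked from a privileged endpoint session. This is a minimal sketch; the endpoint address and credential are placeholders for your environment.

```powershell
$cred = Get-Credential -Message "CloudAdmin credential"
$pep  = New-PSSession -ComputerName "<PEP IP address>" `
          -ConfigurationName PrivilegedEndpoint -Credential $cred

# Summarized status of the current or most recent update run.
Invoke-Command -Session $pep -ScriptBlock { Get-AzureStackUpdateStatus }

# Full verbose log, useful if an update run stalls or fails.
Invoke-Command -Session $pep -ScriptBlock { Get-AzureStackUpdateVerboseLog }

Remove-PSSession $pep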