Over the past week I've learned a bit more about this issue from tech support.
There is apparently a workaround available to customers when the "Spark History Server" fails to show the Spark UI in a Databricks workspace.
The workaround is only possible if you first configure the cluster to deliver logs to a DBFS location. Those logs include an "eventlog" file, which is what is used to render the Spark UI.
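For reference, log delivery is configured with a `cluster_log_conf` block in the cluster spec (via the UI's "Logging" tab or the Clusters API). The destination path below is only an example; use whatever DBFS location suits you:

```json
{
  "cluster_log_conf": {
    "dbfs": {
      "destination": "dbfs:/cluster-logs/my-cluster"
    }
  }
}
```

Once this is set, Databricks periodically copies the driver logs, including the eventlog, under that destination.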
The workaround replays the eventlog file found in the delivered logs. The eventlog can only be replayed on a freshly started all-purpose cluster; once the events have been replayed, the Spark UI can be reviewed as normal. The folks in Databricks engineering said they would publish the workaround in a KB article once it has been tested by a sufficient number of customers. They call this approach "replaying" the events. The signature of their method looks like so:
def replaySparkEvents(pathToEventLogs: String): Unit = { ... }
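Assuming the helper has been loaded into a notebook on a freshly started all-purpose cluster, usage would presumably look something like the sketch below. The path is purely illustrative; point it at wherever your cluster delivered its logs:

```scala
// Hypothetical usage sketch — replaySparkEvents is the support-provided
// helper described above, not a public API, and the log path is only
// an example of a DBFS log-delivery location.
val pathToEventLogs = "dbfs:/cluster-logs/my-cluster/eventlog"
replaySparkEvents(pathToEventLogs)
// After this returns, open the Spark UI on this cluster as usual
// to browse the replayed jobs, stages, and tasks.
```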
If you are unable to use the Spark UI in your Azure Databricks workspace, contact tech support. They are likely to provide you with this workaround, especially since the problems with the Spark UI seem to be persistent and unpredictable.