From the error, it looks like two of your nodes are marked unreachable (they carry the node.kubernetes.io/unreachable taint) and your pod has node affinity requirements that none of your existing nodes satisfy. To troubleshoot:
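To confirm which nodes are affected, you can list node status and inspect their taints; <node-name> below is a placeholder:
kubectl get nodes
kubectl describe node <node-name> | grep -i taints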
Check that your cluster autoscaler is properly configured with valid min/max node counts:
kubectl get deployment cluster-autoscaler -n kube-system -o yaml
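If you run the autoscaler yourself, the min/max counts usually appear in the --nodes flag of the container command in that deployment; the node group name and values below are just an example:
containers:
- name: cluster-autoscaler
  command:
  - ./cluster-autoscaler
  - --nodes=1:5:my-node-pool   # min:max:node-group-name (example values)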
Check your pod spec for any node affinity or nodeSelector requirements:
kubectl get pod <pod-name> -o yaml
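To pull out just the scheduling constraints instead of scanning the whole manifest, something like this works; then compare the results against your node labels:
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'
kubectl get pod <pod-name> -o jsonpath='{.spec.affinity.nodeAffinity}'
kubectl get nodes --show-labels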
Though less likely, it's also worth checking whether you've hit any compute quotas or limits for your subscription or region.
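Assuming this is an AKS cluster (going by the subscription/region wording), you can compare vCPU usage against quota with the Azure CLI, for example:
az vm list-usage --location <region> --output table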
As a temporary workaround you can also add a toleration to your pod deployments (a full Deployment sketch follows the snippet below). It would look something like this:
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"       # tolerationSeconds is only valid with the NoExecute effect
  tolerationSeconds: 30     # how long the pod stays bound to an unreachable node before eviction
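A minimal sketch of where the block sits in a Deployment, with hypothetical names and a placeholder image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
      containers:
      - name: my-app
        image: nginx:1.25       # placeholder image
With tolerationSeconds: 30 the pod is evicted from an unreachable node after 30 seconds instead of the default 300, so it can be rescheduled sooner; fixing the node affinity or letting the autoscaler add matching nodes is still the longer-term fix.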
If this helped, please mark it as 'Accept Answer' and 'Upvote'.
Regards,
Abiola