I was also using an Azure private AKS cluster and hit this issue when trying to reach it with kubectl from a jump server in the hub Virtual Network, while the private AKS cluster itself was deployed in a spoke Virtual Network. Azure Firewall managed the network traffic between them.
In my case this happened because a private AKS cluster uses a private DNS zone and a private endpoint, and my hub-VNet jump box could not resolve the DNS name of the control plane's kube-apiserver. To work around it, I manually added a DNS record to the jump server's C:\Windows\System32\drivers\etc\hosts file, like this:
10.0.1.11 aks-xxx-test-dns-xxxxxx.xxxx-xx-axx-2xxxxxxx.privatelink.eastus.azmk8s.io
Replace the IP address and the API server's private DNS FQDN with the values from your own environment.
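As a quick sketch of the workaround, the snippet below appends such a record and confirms it landed. The IP and FQDN are hypothetical placeholders; on the jump server the target would be C:\Windows\System32\drivers\etc\hosts, edited from an elevated prompt (here a scratch file is used so the commands are safe to try anywhere).

```shell
# HOSTS points at a scratch copy for illustration; on the real jump server
# it would be C:\Windows\System32\drivers\etc\hosts (requires admin rights).
HOSTS=./hosts.test

# Hypothetical values -- substitute your API server's private IP and FQDN.
echo "10.0.1.11 aks-example.privatelink.eastus.azmk8s.io" >> "$HOSTS"

# Confirm the record is present (prints the matching line).
grep "privatelink.eastus.azmk8s.io" "$HOSTS"
```

After the real hosts file is updated, `kubectl get nodes` from the jump server should reach the API server, provided the firewall and routing allow the traffic.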
Note that this is only suitable for testing. In a production scenario you should instead make the private DNS of the AKS control-plane API server endpoint resolvable from the hub (typically by linking the AKS private DNS zone to the hub VNet or by using a DNS forwarder), add the appropriate firewall rules, and configure the correct routes in any route tables applied to the AKS and jump-server subnets.
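For the production approach, linking the AKS private DNS zone to the hub VNet can be done with the Azure CLI. This is a sketch with hypothetical resource names (the zone name comes from the AKS node resource group, and the VNet ID must be your hub VNet's full resource ID); it is not runnable without an Azure subscription.

```shell
# Sketch: link the AKS private DNS zone (lives in the node resource group)
# to the hub VNet so the jump server can resolve the API server FQDN.
# All names below are placeholders for your environment.
az network private-dns link vnet create \
  --resource-group MC_myRG_myAKS_eastus \
  --zone-name example.privatelink.eastus.azmk8s.io \
  --name hub-vnet-link \
  --virtual-network "/subscriptions/<sub-id>/resourceGroups/hub-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet" \
  --registration-enabled false
```

With the zone linked, the jump server resolves the API server's private IP through Azure DNS and the hosts-file entry is no longer needed.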