Hello @Igor Veselinovic!
Thank you for your message.
Secrets in Kubernetes are namespaced, meaning each Secret belongs to a specific namespace. To access a Secret from a Deployment, the Deployment must be deployed in that same namespace.
There are ways to sync/replicate them across namespaces, for example:
https://cert-manager.io/docs/devops-tips/syncing-secrets-across-namespaces/
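To make the namespacing point concrete, here is a minimal sketch: a Secret exists in exactly one namespace, and only a Deployment in that same namespace can reference it directly. Names like `team-a`, `my-app`, and `db-credentials` are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: team-a          # the Secret lives only in this namespace
type: Opaque
stringData:
  DB_PASSWORD: s3cr3t
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: team-a          # must match the Secret's namespace
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      containers:
        - name: app
          image: nginx:1.25
          envFrom:
            - secretRef:
                name: db-credentials   # resolved within team-a only
```

A Deployment in a different namespace (say `team-b`) referencing `db-credentials` would fail to start its containers (the missing Secret surfaces as `CreateContainerConfigError`), which is exactly why replication tools like the one linked above exist.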
A managed service is supposed to be simple and straightforward !
For Airflow running in a Kubernetes environment, especially when managed by a service like Azure Data Factory, it's best to follow the service's guidelines for secret management. If Airflow launches tasks with the KubernetesPodOperator, it will typically expect the secrets to live in the same namespace where the operator's pods run. If your Airflow tasks need secrets across different namespaces, you would typically replicate those secrets into the namespaces where the tasks will run.
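As a sketch of that replication approach, assuming the emberstack reflector described on the cert-manager page linked above (the annotation names below are taken from its docs and are worth double-checking against the current release), you annotate the source Secret so the reflector mirrors it into the task namespaces:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: task-credentials
  namespace: airflow          # source namespace, where Airflow runs
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    # restrict mirroring to the namespace(s) where task pods run
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "airflow-tasks"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
type: Opaque
stringData:
  API_TOKEN: replace-me       # placeholder value
```

The reflector then keeps a copy of `task-credentials` in `airflow-tasks` up to date, so pods launched there can mount it like a local Secret.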
The dropdown you're seeing for the secret type (Private registry auth, Basic auth, Generic, etc.) specifies the format or usage of the secret.
For instance, "Private registry auth" could be used to store credentials for accessing private Docker registries.
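For reference, the plain-Kubernetes equivalent of a "Private registry auth" secret is a Secret of type `kubernetes.io/dockerconfigjson`. The registry URL, secret name, and encoded value below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: airflow
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config; typically generated with:
  # kubectl create secret docker-registry regcred \
  #   --docker-server=myregistry.azurecr.io \
  #   --docker-username=<user> --docker-password=<password>
  .dockerconfigjson: eyJhdXRocyI6e319   # placeholder
```

Pods then pull from the private registry by listing the Secret under `imagePullSecrets` in their spec.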
You are grasping the concept correctly. Be aware that this is a Preview service, meaning it isn't intended for production use and you may run into issues! Take into account the limitations:
Limitations
Managed Airflow in other regions will be available by GA.
Data sources connecting through Airflow should be accessible through a public endpoint (network).
DAGs stored in Blob Storage inside a VNet or behind a firewall are currently not supported. Instead, we recommend using the Git sync feature of Managed Airflow. See Sync a GitHub repository in Managed Airflow.
Importing DAGs from Azure Key Vault isn't supported in LinkedServices.
Finally, follow the documentation for updates!
I hope this helps!
Kindly mark the answer as Accepted and upvote it in case it helped! Regards