The issue was resolved with help from Microsoft support.
The issue was on the Databricks side: the High Concurrency cluster setting spark.databricks.pyspark.enableProcessIsolation was set to true.
When spark.databricks.pyspark.enableProcessIsolation is set to true, Databricks blocks all outbound connections apart from port 443.
The solution was to add the configuration spark.databricks.pyspark.iptable.outbound.whitelisted.ports 10800 in the cluster's advanced settings, since Apache Ignite thin clients connect on port 10800.
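For reference, the whitelist entry is added alongside the existing setting in the cluster's Advanced Options > Spark config box, roughly like this (a sketch based on the settings named above):

spark.databricks.pyspark.enableProcessIsolation true
spark.databricks.pyspark.iptable.outbound.whitelisted.ports 10800

After the cluster is restarted with this configuration, the pyignite thin client in the notebook can reach the Ignite service on port 10800.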
Pyignite Client.connect failed: Connection refused
Porsche Me | 136 Reputation points
- vNet address: 10.106.0.0/15
- Databricks public subnet address prefix: 10.106.3.0/24
- Databricks private subnet address prefix: 10.106.4.0/24
- Databricks Runtime 7.4 (Apache Spark 3.0.1) deployed with delegated subnets
- Kubernetes subnet address prefix: 10.106.8.0/22
- Kubernetes version 1.19.7
- Apache Ignite 2.9.1 deployed in the Kubernetes cluster

$ kubectl get svc -n ayushman
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                          AGE
cohortstore   LoadBalancer   10.0.163.252   10.106.8.255   8080:31751/TCP,10800:30608/TCP,10900:31092/TCP   29h
We are getting a 'Connection refused' error when trying to connect to the Apache Ignite cluster from Databricks Notebooks. Below is our simple code snippet.
from pyignite import Client
client = Client(timeout=40.0)
client.connect('10.106.8.255', 10800)
We are able to connect to Apache Ignite using ./sqlline.sh from a Linux machine within the vNet (outside the K8S subnet).
Any help to resolve this issue?
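A raw TCP probe from the notebook, along the lines of the sketch below (same host and port as the pyignite call), can help confirm whether port 10800 itself is reachable from the Databricks cluster, independently of Ignite:

import socket

# Rough connectivity check: if outbound traffic on 10800 is blocked at the
# cluster level, this fails just like pyignite does, independent of Ignite.
host, port = '10.106.8.255', 10800
try:
    with socket.create_connection((host, port), timeout=10):
        print(f'TCP connection to {host}:{port} succeeded')
except OSError as exc:
    print(f'TCP connection to {host}:{port} failed: {exc}')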