Hello
I am receiving continuous TCP connection requests on port 5432 on my Pod, coming from the load balancer. When I changed the Service type to ClusterIP, the logs stopped, but I want to keep using a LoadBalancer for external connectivity. Is there any way to stop these connections, or divert them to some other port?
I even exposed one more port, but I am still getting TCP connections on 5432:
ports:
  - name: database
    protocol: TCP
    port: 5432
    targetPort: 5432
    nodePort: 31217
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30232
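I suspect these connections are the Azure load balancer's health probes rather than real clients. If that is the case, the per-port annotations from the cloud-provider-azure docs might let me point the probe for 5432 at the HTTP port instead — I have not tested this and am not certain the annotation name is right for my cluster version, so this is just a guess:

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'
    # Guess (untested): have the LB rule for port 5432 use port 8080
    # for its health probe, so the probe no longer opens TCP
    # connections to 5432 itself. Annotation name taken from the
    # cloud-provider-azure documentation; it may differ by version.
    service.beta.kubernetes.io/port_5432_health-probe_port: "8080"
```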
Pod logs:
My application prints this line for every TCP connection made to port 5432. Why is the load balancer sending these connection requests, and how can I stop it?
2024-08-01 13:06:20,071 INFO [com.roc.pos.pga.TcpServer] (vert.x-eventloop-thread-0) New connection request
2024-08-01 13:06:20,278 INFO [com.roc.pos.pga.TcpServer] (vert.x-eventloop-thread-0) New connection request
2024-08-01 13:06:20,468 INFO [com.roc.pos.pga.TcpServer] (vert.x-eventloop-thread-0) New connection request
2024-08-01 13:06:20,719 INFO [com.roc.pos.pga.TcpServer] (vert.x-eventloop-thread-0) New connection request
2024-08-01 13:06:20,944 INFO [com.roc.pos.pga.TcpServer] (vert.x-eventloop-thread-0) New connection request
Below is my load balancer Service configuration.
kind: Service
apiVersion: v1
metadata:
  name: server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
spec:
  ports:
    - name: database
      protocol: TCP
      port: 5432
      targetPort: 5432
      nodePort: 30277
  selector:
    app.kubernetes.io/instance: server
    app.kubernetes.io/name: server
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
status:
  loadBalancer:
    ingress:
      - ip:
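One thing I am trying in order to figure out who is opening these connections is logging the remote address of each connection: Azure load balancer health probes always originate from 168.63.129.16, so if that address shows up in the logs, the traffic is the LB's probe and not a real user. My real server uses Vert.x, but here is a minimal self-contained sketch of the idea using plain java.net (the class and method names are my own, not from my app):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ProbeLogger {
    // Accept one connection and return the remote host string.
    // Azure LB health probes always come from 168.63.129.16, so
    // seeing that address in the logs confirms probe traffic.
    static String acceptAndLogOne(ServerSocket server) throws IOException {
        try (Socket socket = server.accept()) {
            String remote = ((InetSocketAddress) socket.getRemoteSocketAddress())
                    .getAddress().getHostAddress();
            System.out.println("New connection request from " + remote);
            return remote;
        }
    }

    public static void main(String[] args) throws Exception {
        // Listen on an ephemeral port; in the real pod this would be 5432.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            // Simulate one client so the example runs on its own.
            new Thread(() -> {
                try (Socket c = new Socket("127.0.0.1", port)) {
                    // connect and close immediately, like a TCP probe does
                } catch (IOException ignored) {
                }
            }).start();
            String remote = acceptAndLogOne(server);
            System.out.println("Azure probe? " + remote.equals("168.63.129.16"));
        }
    }
}
```

In Vert.x the equivalent would be reading `socket.remoteAddress()` inside the `connectHandler` before logging "New connection request".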