The issue is that Socket.IO transmits the auth data in the initial WebSocket handshake. If a proxy or load balancer strips those headers or modifies the handshake, the auth data may be lost.
Below is an example of how I used proxy-set headers in the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    appgw.ingress.kubernetes.io/backend-protocol: "http"
    appgw.ingress.kubernetes.io/request-timeout: "60"
    appgw.ingress.kubernetes.io/proxy-set-header: "Upgrade $http_upgrade"
    appgw.ingress.kubernetes.io/proxy-set-header.Connection: "upgrade"
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: my-server-app.cloudapp.azure.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-server-service
                port:
                  number: 3000
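Once the manifest is saved, the Ingress can be applied and inspected with kubectl (this sketch assumes the file is named ingress.yaml):

```shell
# Apply the Ingress and confirm the AGIC controller has picked it up.
kubectl apply -f ingress.yaml
kubectl get ingress my-ingress
# Inspect the annotations and backend rules that were actually applied:
kubectl describe ingress my-ingress
```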
Refer to this link for details about Kubernetes annotations.
For the service.yaml, use the following configuration. Make sure the service name matches the backend service name referenced in the Ingress:
apiVersion: v1
kind: Service
metadata:
  name: socketio-service
spec:
  selector:
    app: socketio
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  type: ClusterIP
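After applying the Service, it is worth verifying that its selector actually matches the pods; an empty ENDPOINTS column means the selector does not match any pod labels (sketch assumes the manifest is saved as service.yaml):

```shell
# Apply the Service and check that it resolves to backing pods.
kubectl apply -f service.yaml
kubectl get service socketio-service
# An empty ENDPOINTS column here indicates a selector/label mismatch:
kubectl get endpoints socketio-service
```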
Make sure CORS is enabled on the Socket.IO server and configured to allow requests from all domains:
// Assumes an existing Node HTTP server instance named httpServer
const io = new Server(httpServer, {
  path: "/nodeserver/socket.io/",
  cors: {
    origin: "*",
    methods: ["GET", "POST"],
    allowedHeaders: ["my-custom-header"],
    // Note: browsers reject a wildcard origin when credentials is true;
    // list explicit origins if you rely on cookies or auth headers.
    credentials: true,
  },
});
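One way to sanity-check the CORS setup end to end is a preflight request through the ingress. This is only a sketch: the host name is taken from the Ingress example above, and the Origin value is a placeholder for your client's domain.

```shell
# Send a CORS preflight request and inspect the Access-Control-Allow-*
# headers in the response (requires the server to be reachable).
curl -i -X OPTIONS \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: GET" \
  "http://my-server-app.cloudapp.azure.com/nodeserver/socket.io/?EIO=4&transport=polling"
```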
Build the image, tag it for Azure Container Registry (ACR), and push it to the registry.
Then, create the Kubernetes service using this guide, and deploy the application to the AKS cluster from the ACR image via a YAML file.
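The build-and-push step can be sketched as follows. The registry and image names are taken from the image reference in my deployment.yaml; adjust them to your own registry.

```shell
# Compose the fully qualified ACR image reference used in deployment.yaml.
ACR_NAME="samopath"        # registry name; login server becomes samopath.azurecr.io
IMAGE="newfolder2-app"
TAG="latest"
FULL_IMAGE="${ACR_NAME}.azurecr.io/${IMAGE}:${TAG}"
echo "${FULL_IMAGE}"

# Build, log in to ACR, tag, and push (requires docker and the Azure CLI):
# docker build -t "${IMAGE}:${TAG}" .
# az acr login --name "${ACR_NAME}"
# docker tag "${IMAGE}:${TAG}" "${FULL_IMAGE}"
# docker push "${FULL_IMAGE}"
```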
Below is the sample deployment.yaml I used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: socketio-server
  labels:
    app: socketio
spec:
  replicas: 2
  selector:
    matchLabels:
      app: socketio
  template:
    metadata:
      labels:
        app: socketio
    spec:
      containers:
        - name: socketio
          image: samopath.azurecr.io/newfolder2-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
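Applying the manifest and waiting for the rollout can be sketched as (assuming the file is saved as deployment.yaml):

```shell
# Deploy and block until all replicas are up to date and available.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/socketio-server
```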
Connect to your AKS cluster using Azure Cloud Shell, and run the application using Kubernetes. Follow this tutorial for more details.
Check the deployment status with kubectl get deployment, and the pod status with kubectl get pods. List all services in the current namespace with kubectl get services.
To expose the deployment as a NodePort service, run:
kubectl expose deployment socketio-server --type=NodePort --port=3000 --target-port=3000
To update the socketio-server service to use a LoadBalancer, run:
kubectl patch service socketio-server -p '{"spec": {"type": "LoadBalancer"}}'
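After patching, Azure needs a short while to provision a public IP; the EXTERNAL-IP column moves from `<pending>` to a real address once the load balancer is ready. A quick way to watch for it:

```shell
# Watch the service until EXTERNAL-IP is assigned (Ctrl+C to stop).
kubectl get service socketio-server --watch

# Or extract the external IP directly once it has been assigned:
kubectl get service socketio-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```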
To view logs from a specific pod, use:
kubectl logs socketio-server-865b857564-c2mxx