Assign Kubernetes External IP Address

EDIT: The whole point of my installation is to achieve (if possible) the following results:

  • I have several k8s nodes
  • When I contact that IP address from my company network, the traffic should be routed to one of my containers / Pods / Services / whatever.
  • I should be able to easily configure this IP address (e.g. in my .yml service definition).

I am running a small Kubernetes cluster (built with kubeadm) to evaluate whether I can migrate my old Docker Swarm installation to k8s. A feature that I absolutely need is the ability to assign IP addresses to containers, for example using MacVlan.

In my current Docker setup, I use MacVlan to assign IP addresses from my company network to some containers, so that I can reach them directly (without a reverse proxy), as if they were ordinary physical servers. I am trying to achieve something similar with k8s.
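Roughly, the Docker side of this looks like the following sketch (the subnet, gateway, parent interface and IP below are placeholders, not my real values):

 # macvlan network bridged onto the company LAN
 docker network create -d macvlan \
   --subnet=10.0.0.0/24 \
   --gateway=10.0.0.1 \
   -o parent=eth0 company-net

 # container with a fixed company-network IP, reachable like a physical host
 docker run -d --name web --network company-net --ip 10.0.0.42 nginx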

I found out that:

  • I need to use a Service
  • I cannot use the LoadBalancer type, since that relies on a compatible cloud provider (e.g. GCE or AWS)
  • I have to use externalIPs
  • Are Ingress Resources some kind of reverse proxy?

My yaml file:

 apiVersion: apps/v1beta1
 kind: Deployment
 metadata:
   name: nginx-deployment
 spec:
   template:
     metadata:
       labels:
         app: nginx
     spec:
       containers:
       - name: nginx
         image: nginx:1.7.9
         ports:
         - containerPort: 80
       nodeSelector:
         kubernetes.io/hostname: k8s-slave-3
 ---
 kind: Service
 apiVersion: v1
 metadata:
   name: nginx-service
 spec:
   type: ClusterIP
   selector:
     app: nginx
   ports:
   - name: http
     protocol: TCP
     port: 80
     targetPort: 80
   externalIPs:
   - ABCD

I was hoping my Service would get the IP ABCD (an address from one of my company networks). The Deployment itself works, since I can reach my nginx container from inside the k8s cluster using its ClusterIP.

What am I missing? Or at least, where can I look at my network traffic to see whether packets are arriving at all?
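One way I can think of to check this is to capture traffic directly on the node that is supposed to answer for the external IP (assuming tcpdump is installed there; ABCD again stands in for the real address):

 # watch for traffic addressed to the external IP on the service port
 sudo tcpdump -ni any host ABCD and tcp port 80

 # check whether kube-proxy has installed iptables rules for the external IP
 sudo iptables-save | grep ABCD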

EDIT :

 $ kubectl get svc
 NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
 kubernetes      10.96.0.1      <none>        443/TCP   6d
 nginx-service   10.102.64.83   ABCD          80/TCP    23h

Thanks.

+14
8 answers

If this is for testing only, then try

 kubectl port-forward service/nginx-service 80:80 

Then you can

 curl http://localhost:80 
+9

A solution that can work (and not only for testing, although it has its drawbacks) is to configure your Pod to use the host's network, by setting hostNetwork: true in the Pod spec.

This means that you do not need a Service to expose your Pod, since it will always be reachable on the host through the port you specified as containerPort in the manifest. In this case there is no need for a DNS record either.

It also means that you can only run one instance of this Pod on a given node (speaking of drawbacks...), which makes it a good candidate for a DaemonSet.

If your Pod still needs to access / resolve internal Kubernetes hostnames, set dnsPolicy to ClusterFirstWithHostNet in the Pod spec. This setting allows the Pod to reach the cluster DNS service.

Example:

 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: nginx
 spec:
   selector:          # required for apps/v1; must match the template labels
     matchLabels:
       app: nginx-reverse-proxy
   template:
     metadata:
       labels:
         app: nginx-reverse-proxy
     spec:
       hostNetwork: true
       dnsPolicy: ClusterFirstWithHostNet
       # allow a Pod instance to run on the master - optional
       tolerations:
       - key: node-role.kubernetes.io/master
         effect: NoSchedule
       containers:
       - image: nginx
         name: nginx
         ports:
         - name: http
           containerPort: 80
         - name: https
           containerPort: 443
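To verify it worked, apply the manifest and check that the Pod IPs are now the node IPs themselves (the file name below is just an example):

 kubectl apply -f nginx-daemonset.yaml
 kubectl get pods -o wide
 # from the company network, any node should now answer directly
 curl http://<node-ip>/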

EDIT: I was put on this track thanks to the ingress-nginx documentation

+4

You can just set an external IP on the Service directly:

 $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'

For example:

 $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
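To check the result, and to clear the field again if needed (with a strategic merge patch, setting the field to null removes it):

 $ kubectl get svc svc_name -o jsonpath='{.spec.externalIPs}'
 $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":null}}'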

+3

First of all, run this command:

 kubectl get -n namespace services 

The above command will return output something like this:

  NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
  backend    NodePort   10.100.44.154   <none>        9400:3003/TCP    13h
  frontend   NodePort   10.107.53.39    <none>        3000:30017/TCP   13h

From the above output, you can see that external IP addresses have not been assigned to the services yet. To assign an external IP address to the backend service, run the following command:

  kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}' 

and to assign an external IP address to the frontend service, run this command:

  kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}' 

Now list the services again to verify that the external IP addresses have been assigned:

 kubectl get -n namespace services 

We get the following output:

 NAME       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
 backend    NodePort   10.100.44.154   192.168.0.194   9400:3003/TCP    13h
 frontend   NodePort   10.107.53.39    192.168.0.194   3000:30017/TCP   13h

Hooray! Kubernetes external IPs are now assigned.
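The same thing can also be done declaratively instead of patching, by putting externalIPs into the Service manifest. A rough sketch for the backend service above (the selector and targetPort are assumptions; use whatever your Deployment actually uses):

 apiVersion: v1
 kind: Service
 metadata:
   name: backend
 spec:
   type: NodePort
   selector:
     app: backend        # assumed label
   ports:
   - port: 9400
     targetPort: 9400    # assumed container port
   externalIPs:
   - 192.168.0.194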

+2

You can try a kube-keepalived-vip configuration to route traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
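If I remember the project README correctly, the controller is pointed at a ConfigMap that maps each virtual IP to a namespace/service pair, roughly like this (the IP and service name are placeholders):

 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: vip-configmap
 data:
   10.4.0.50: default/nginx-service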

+1

You can try adding type: NodePort to the yaml file for this service; then every node gets a port through which you can reach it from a web browser or from outside the cluster. In my case it helped.
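A sketch based on the service from the question (the nodePort value is just an illustration from the default 30000-32767 range; omit it to let Kubernetes pick one):

 kind: Service
 apiVersion: v1
 metadata:
   name: nginx-service
 spec:
   type: NodePort
   selector:
     app: nginx
   ports:
   - name: http
     protocol: TCP
     port: 80
     targetPort: 80
     nodePort: 30080
 # afterwards: curl http://<any-node-ip>:30080 from outside the cluster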

0

Just pass the extra --external-ip option when exposing the deployment:

 kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1 
-1

I don't know if this will help in your particular case, but what I did (and I run a bare-metal cluster) was to use a LoadBalancer Service and set both loadBalancerIP and externalIPs to the IP of my server, as you did.

After that, the correct external IP address appeared for the load balancer.
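In manifest form, what this describes looks roughly like the following (ABCD stands for the server IP, as in the question):

 kind: Service
 apiVersion: v1
 metadata:
   name: nginx-service
 spec:
   type: LoadBalancer
   loadBalancerIP: ABCD
   externalIPs:
   - ABCD
   selector:
     app: nginx
   ports:
   - port: 80
     targetPort: 80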

-1

Source: https://habr.com/ru/post/1268803/

