Kubernetes DNS Error in Kubernetes 1.2

I am trying to configure DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to "supported kubernetes cluster setups" and involves setting environment variables:

    ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
    DNS_SERVER_IP="10.0.0.10"
    DNS_DOMAIN="cluster.local"
    DNS_REPLICAS=1

I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I do not have a supported cluster setup (what counts as one?), or something else is required to set up its environment.
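For reference, a sketch of what that edit looks like, assuming the variables are simply appended to the stock CentOS config file (the KUBE_* lines are the package defaults; only the last four lines are new):

    # /etc/kubernetes/config -- tail of the file after the edit (illustrative)
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    KUBE_MASTER="--master=http://centos:8080"

    # DNS settings appended per the docs (had no visible effect)
    ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
    DNS_SERVER_IP="10.0.0.10"
    DNS_DOMAIN="cluster.local"
    DNS_REPLICAS=1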

The second approach requires more manual setup. It adds two flags to the kubelets, which I set by updating /etc/kubernetes/kubelet to include:

    KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
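As a sanity check (mine, not from the docs), the flags can be confirmed on the running kubelet process after a restart:

    systemctl restart kubelet
    # confirm the flags made it onto the kubelet command line
    ps -o args= -C kubelet | tr ' ' '\n' | grep -E -- '--cluster-(dns|domain)'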

With the kubelet restarted, the next step is to start the replication controller and the service. The documentation linked above provides several template files for these, which required some editing for local differences (my Kubernetes API server listens on the host's actual IP address, not 127.0.0.1, which makes it necessary to add --kube-master-url) and the removal of some Salt dependencies. When I do this, the replication controller starts its four containers successfully, but the kube2sky container exits about a minute after initialization completes:

    [david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
    I0325 20:58:18.516905       1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
    I0325 20:58:19.518337       1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
    I0325 20:58:19.518364       1 kube2sky.go:530] Using kubernetes API v1
    I0325 20:58:19.518468       1 kube2sky.go:598] Waiting for service: default/kubernetes
    I0325 20:58:19.533597       1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
    F0325 20:59:25.698507       1 kube2sky.go:625] Received signal terminated
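To find where the signal came from, each container's log in the pod can be checked in turn (a convenience loop of mine; the container names come from the RC spec below):

    for c in etcd kube2sky skydns healthz; do
      echo "=== $c ==="
      kubectl --server="http://centos:8080" --namespace="kube-system" \
          logs kube-dns-v11-t7nlb -c "$c" | tail -n 5
    done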

That is how I determined that the termination is triggered by the healthz container, right after this message:

    2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
    2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
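The failing check can be reproduced by hand inside the pod with kubectl exec, running the same lookup the probe runs:

    kubectl --server="http://centos:8080" --namespace="kube-system" \
        exec kube-dns-v11-t7nlb -c healthz -- \
        nslookup kubernetes.default.svc.cluster.local 127.0.0.1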

Apart from that, all the other logs look normal. There is one anomaly, however: when creating the replication controller, I had to specify --validate=false, because the command otherwise fails with:

    error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false

Could this be related? Those fields come straight from the Kubernetes documentation. If not, what else is needed to get this working?

Here is the skydns-rc.yaml I am using:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-dns-v11
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      replicas: 1
      selector:
        k8s-app: kube-dns
        version: v11
      template:
        metadata:
          labels:
            k8s-app: kube-dns
            version: v11
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: etcd
            image: gcr.io/google_containers/etcd-amd64:2.2.1
            resources:
              # TODO: Set memory limits when we've profiled the container for large
              # clusters, then set request = limit to keep this container in
              # guaranteed class. Currently, this container falls into the
              # "burstable" category so the kubelet doesn't backoff from restarting it.
              limits:
                cpu: 100m
                memory: 500Mi
              requests:
                cpu: 100m
                memory: 50Mi
            command:
            - /usr/local/bin/etcd
            - -data-dir
            - /var/etcd/data
            - -listen-client-urls
            - http://127.0.0.1:2379,http://127.0.0.1:4001
            - -advertise-client-urls
            - http://127.0.0.1:2379,http://127.0.0.1:4001
            - -initial-cluster-token
            - skydns-etcd
            volumeMounts:
            - name: etcd-storage
              mountPath: /var/etcd/data
          - name: kube2sky
            image: gcr.io/google_containers/kube2sky:1.14
            resources:
              # TODO: Set memory limits when we've profiled the container for large
              # clusters, then set request = limit to keep this container in
              # guaranteed class. Currently, this container falls into the
              # "burstable" category so the kubelet doesn't backoff from restarting it.
              limits:
                cpu: 100m
                # Kube2sky watches all pods.
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 50Mi
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /readiness
                port: 8081
                scheme: HTTP
              # we poll on pod startup for the Kubernetes master service and
              # only setup the /readiness HTTP server once that is available.
              initialDelaySeconds: 30
              timeoutSeconds: 5
            args:
            # command = "/kube2sky"
            - --domain="cluster.local"
            - --kube-master-url=http://192.168.87.159:8080
          - name: skydns
            image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
            resources:
              # TODO: Set memory limits when we've profiled the container for large
              # clusters, then set request = limit to keep this container in
              # guaranteed class. Currently, this container falls into the
              # "burstable" category so the kubelet doesn't backoff from restarting it.
              limits:
                cpu: 100m
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 50Mi
            args:
            # command = "/skydns"
            - -machines=http://127.0.0.1:4001
            - -addr=0.0.0.0:53
            - -ns-rotate=false
            - -domain="cluster.local"
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          - name: healthz
            image: gcr.io/google_containers/exechealthz:1.0
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
            args:
            - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
            - -port=8080
            ports:
            - containerPort: 8080
              protocol: TCP
          volumes:
          - name: etcd-storage
            emptyDir: {}
          dnsPolicy: Default  # Don't use cluster DNS.

and skydns-svc.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "KubeDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: "10.0.0.10"
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
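For completeness, this is the end-to-end test that should pass once DNS works, run from a throwaway client pod (the pod name and image are my choice), with a minimal manifest like dns-test.yaml:

    # dns-test.yaml -- minimal client pod for testing cluster DNS
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-test
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]

and then:

    kubectl --server="http://centos:8080" create -f dns-test.yaml
    # resolve a service name through the cluster DNS (10.0.0.10)
    kubectl --server="http://centos:8080" exec dns-test -- nslookup kubernetes.default.svc.cluster.local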
1 answer

I just commented out the lines containing successThreshold and failureThreshold in skydns-rc.yaml (the edited stanza is shown after the commands below), then re-ran the kubectl commands:

    kubectl create -f skydns-rc.yaml
    kubectl create -f skydns-svc.yaml
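For clarity, the edited livenessProbe stanza in skydns-rc.yaml; only the two commented lines changed, the rest is as in the question:

    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 60
      timeoutSeconds: 5
      # successThreshold: 1   # rejected by this API server's validation
      # failureThreshold: 5   # rejected by this API server's validation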
