I am trying to configure DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I do not have a supported kubernetes cluster setup (what counts as one?), or something more is required to set up its environment.
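A quick way to check whether anything on the machine even reads those variables (assuming the packaged unit files would reference them the same way they reference the other KUBE_* settings):

grep -r "ENABLE_CLUSTER_DNS\|DNS_SERVER_IP" /etc/kubernetes /usr/lib/systemd/system
# if only /etc/kubernetes/config itself matches, nothing is consuming these variables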
The second approach requires more manual setup. It adds two flags to the kubelet, which I set by updating /etc/kubernetes/kubelet to include:
KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
and restarting the kubelet with systemctl restart kubelet.
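To confirm the flags were actually picked up after the restart, the running command line can be checked:

ps -ef | grep [k]ubelet
# the output should now include --cluster-dns=10.0.0.10 --cluster-domain=cluster.local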
Then you need to start the replication controller and the service. The documentation cited above provides several template files for this, which require some editing, both for local changes (my Kubernetes API server listens on the host's actual IP address, not 127.0.0.1, which makes it necessary to add --kube-master-url) and to remove some Salt dependencies. When I do this, the replication controller successfully starts its four containers, but the kube2sky container terminates about a minute after it finishes initializing:
[david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905       1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337       1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364       1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468       1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597       1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507       1 kube2sky.go:625] Received signal terminated
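For reference, after my edits the kube2sky container spec in the template looks roughly like this (the image tag is the one the v11 template shipped with; the master URL is my host's address):

      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
        # added locally because the apiserver does not listen on 127.0.0.1:
        - --kube-master-url=http://192.168.87.159:8080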
I have determined that the termination is triggered by the healthz container, after it logs:
2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
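That probe command can be reproduced by hand from inside the pod (it is exactly what the healthz container runs, per its log line above), and it fails the same way:

kubectl --server="http://centos:8080" --namespace="kube-system" \
    exec kube-dns-v11-t7nlb -c healthz -- \
    nslookup kubernetes.default.svc.cluster.local 127.0.0.1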
Aside from this, all the other logs look normal. However, there is one anomaly: when creating the replication controller, I had to specify --validate=false, since otherwise the command fails with the message:
error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false
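For context, the rejected fields come from the probe stanzas in the template; the relevant part looks roughly like this (trimmed, with the two offending fields at the end):

        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
          # these two fields are what --validate rejects here:
          successThreshold: 1
          failureThreshold: 5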
Could this be related? These arguments come straight from the Kubernetes documentation. If not, what is needed to get this working?
Here is the skydns-rc.yaml in use:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
and skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: "10.0.0.10"
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
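Once the pod stops getting killed, my plan for verifying resolution is the standard busybox test pod from the DNS debugging docs:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]

followed by:

kubectl --server="http://centos:8080" create -f busybox.yaml
kubectl --server="http://centos:8080" exec busybox -- nslookup kubernetes.default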