How to access the Kubernetes API from within a pod container?

I used to use

https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/ 

as my base URL, but in Kubernetes 0.18.0 it gives me "Unauthorized". The strange thing is that if I use the external IP address of the API machine ( http://172.17.8.101:8080/api/v1beta3/namespaces/default/ ), it works fine.

+93
kubernetes
Jun 07 '15 at 4:55
11 answers

In the official documentation, I found this:

https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod

Apparently I was missing a service account token that was not required in earlier versions of Kubernetes. Based on this, I came up with what seems to me a simpler solution than running a proxy or installing Go in my container. Take a look at this example, which fetches the API information for the current pod:

 KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
 curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
   https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME

I also include the small jq binary ( http://stedolan.imtqy.com/jq/download/ ) to parse the JSON for use in bash scripts.
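As an illustration only (not from the original answer), the same kind of extraction can be done without jq, for example with Python's standard json module; the sample document below is a truncated, hypothetical pod API response:

```python
import json

# Hypothetical pod API response, truncated to the fields we extract.
pod_json = '{"metadata": {"name": "my-pod"}, "status": {"podIP": "10.1.2.3"}}'

pod = json.loads(pod_json)
print(pod["metadata"]["name"])   # analogous to: jq -r .metadata.name
print(pod["status"]["podIP"])    # analogous to: jq -r .status.podIP
```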

+99
Jun 09 '15 at 17:55

Each pod has a service account applied automatically, which allows it to access the API server. The service account provides both client credentials, in the form of a bearer token, and the certificate of the certificate authority that was used to sign the certificate presented by the apiserver. With these two pieces of information, you can create a secure, authenticated connection to the apiserver without using curl -k (aka curl --insecure ):

 curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
   -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
   https://kubernetes.default.svc/
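The same header construction can be sketched in Python (a hypothetical helper, not part of any client library; the path is the standard service account mount):

```python
import os

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def bearer_headers(token_path=os.path.join(SA_DIR, "token")):
    """Read a mounted service account token and build the
    Authorization header used in the curl command above."""
    with open(token_path) as f:
        return {"Authorization": "Bearer " + f.read().strip()}
```

Pass the CA bundle (ca.crt from the same directory) to your HTTP client's certificate-verification option rather than disabling TLS verification.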
+59
Oct 13 '15 at 18:12

Using the Python kubernetes client:

 from kubernetes import client, config

 config.load_incluster_config()  # uses the pod's service account token and CA cert
 v1_core = client.CoreV1Api()
+9
Aug 24 '17 at 16:26

wget version:

 KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
 wget -vO- --ca-certificate /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
   --header "Authorization: Bearer $KUBE_TOKEN" \
   https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME
+7
Nov 09 '17 at 10:26

The most important addition to the details already mentioned is that the pod from which you are trying to access the API server must have the RBAC permissions to do so.

Every entity in the k8s system is identified by a service account (much like users are identified by user accounts). Based on its RBAC permissions, a service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) is populated. Kube-api bindings (e.g. pykube) can take this token as input when creating a connection to the kube-api server. If the pod has the right RBAC permissions, it will be able to establish a connection to the kube-api server.
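As an illustration, a minimal least-privilege sketch (resource names here are hypothetical) that lets the default service account read pods in its own namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods         # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A namespaced Role like this is preferable to binding cluster-admin, as some answers below do.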

+4
Jun 24 '18 at 15:10

From within a pod, the Kubernetes API server can be reached directly at https://kubernetes.default. By default, the pod uses the default service account to access the API server.

So we also need to pass the CA cert and the default service account token to authenticate with the API server.

The certificate file is stored in the following location inside the container: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

and default service account token: /var/run/secrets/kubernetes.io/serviceaccount/token

You can use godaddy's kubernetes-client for Node.js.

 // requires added for completeness; 'kubernetes-client' is godaddy's client
 const fs = require('fs');
 const Api = require('kubernetes-client');

 let getRequestInfo = () => {
     return {
         url: "https://kubernetes.default",
         ca: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt').toString(),
         auth: {
             bearer: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token').toString(),
         },
         timeout: 1500
     };
 }

 let initK8objs = () => {
     k8obj = getRequestInfo();
     k8score = new Api.Core(k8obj);
     k8s = new Api.Api(k8obj);
 }
+3
Jul 16 '17 at 8:53

For those using the Google Container Engine (powered by Kubernetes):

A simple call to https://kubernetes from within the cluster using this kubernetes client for Java works.

+2
Jun 26 '17 at 13:56

I ran into this problem while trying to access the API from inside a pod using Go code. Below is what I implemented to get it working, in case someone comes across this question and wants to use Go too.

The example uses the pod resource, for which you should really use the client-go library if you are working with native Kubernetes objects. The code is more useful for those working with CustomResourceDefinitions.

 serviceHost := os.Getenv("KUBERNETES_SERVICE_HOST")
 servicePort := os.Getenv("KUBERNETES_SERVICE_PORT")
 apiVersion := "v1"           // for example
 namespace := "default"       // for example
 resource := "pods"           // for example
 httpMethod := http.MethodGet // for example

 // Note: core resources (pods, services, ...) live under /api/v1/...;
 // the /apis/<group>/<version>/... form used here is for API groups such as CRDs.
 endpoint := fmt.Sprintf("https://%s:%s/apis/%s/namespaces/%s/%s",
     serviceHost, servicePort, apiVersion, namespace, resource)
 u, err := url.Parse(endpoint)
 if err != nil {
     panic(err)
 }

 // payload is the request body ([]byte) defined elsewhere; may be nil for GET
 req, err := http.NewRequest(httpMethod, u.String(), bytes.NewBuffer(payload))
 if err != nil {
     return err
 }

 caToken, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
 if err != nil {
     panic(err) // cannot find token file
 }
 req.Header.Set("Content-Type", "application/json")
 req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", string(caToken)))

 caCertPool := x509.NewCertPool()
 caCert, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
 if err != nil {
     panic(err) // cannot find cert file
 }
 caCertPool.AppendCertsFromPEM(caCert)

 client := &http.Client{
     Transport: &http.Transport{
         TLSClientConfig: &tls.Config{
             RootCAs: caCertPool,
         },
     },
 }

 resp, err := client.Do(req)
 if err != nil {
     log.Printf("sending helm deploy payload failed: %s", err.Error())
     return err
 }
 defer resp.Body.Close()
 // check resp.StatusCode and resp.Status
+2
Apr 12 '18 at 16:28

If RBAC is enabled, the default service account does not have any privileges.

It is better to create a separate service account for your needs and use it when creating your pod:

 spec:
   serviceAccountName: secret-access-sa
   containers:
   ...

This is well explained here: https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/
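For completeness, a minimal sketch of the service account referenced above (assuming the default namespace; bind it to a Role or ClusterRole as needed):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-access-sa
  namespace: default
```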

+1
Dec 10 '17 at 3:48

I had a similar authentication problem in GKE, where Python scripts unexpectedly threw exceptions. The solution that worked for me was to grant the pods permissions through a role:

 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: fabric8-rbac
 subjects:
 - kind: ServiceAccount
   name: default        # the service account to bind
   namespace: default   # its namespace
 roleRef:
   kind: ClusterRole
   name: cluster-admin
   apiGroup: rbac.authorization.k8s.io


+1
May 22 '18 at 8:44
 curl -v --cacert <path to>/ca.crt --cert <path to>/kubernetes-node.crt \
   --key <path to>/kubernetes-node.key https://<ip:port>

My k8s version is 1.2.0, and it should work in other versions too ^ ^

0
Mar 17 '16 at 3:34


