Helm list: cannot list configmaps in the namespace "kube-system"

I installed Helm 2.6.2 on a Kubernetes 1.8 cluster. helm init worked fine, but when I ran helm list it gave this error:

 helm list
 Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

How do I fix this RBAC error message?

+87
source share
8 answers

Running these commands resolved the problem:

 kubectl create serviceaccount --namespace kube-system tiller
 kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
 kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
 helm init --service-account tiller --upgrade
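Whether the fix took effect can be checked without re-running helm list, using kubectl auth can-i with impersonation. A minimal sketch, assuming the tiller service account and cluster role binding created above:

```shell
# Ask the API server whether the tiller service account may now list configmaps;
# it should answer "yes" once the cluster-admin binding is in place.
kubectl auth can-i list configmaps \
  --namespace kube-system \
  --as system:serviceaccount:kube-system:tiller

# Confirm the tiller-deploy deployment was actually patched
# to run under the new service account.
kubectl get deploy tiller-deploy --namespace kube-system \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'
```

Both commands only read cluster state, so they are safe to run at any point while debugging.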

+196
source

Safer answer

The accepted answer gives Tiller full admin access to the cluster, which is not the best solution from a security point of view. With a little more work, we can limit Tiller's access to a specific namespace. More details in the Helm documentation.

 $ kubectl create namespace tiller-world
 namespace "tiller-world" created
 $ kubectl create serviceaccount tiller --namespace tiller-world
 serviceaccount "tiller" created

Define a role that allows Tiller to manage all resources in tiller-world , as in role-tiller.yaml :

 kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: tiller-manager
   namespace: tiller-world
 rules:
 - apiGroups: ["", "batch", "extensions", "apps"]
   resources: ["*"]
   verbs: ["*"]

Then run:

 $ kubectl create -f role-tiller.yaml
 role "tiller-manager" created

In rolebinding-tiller.yaml ,

 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: tiller-binding
   namespace: tiller-world
 subjects:
 - kind: ServiceAccount
   name: tiller
   namespace: tiller-world
 roleRef:
   kind: Role
   name: tiller-manager
   apiGroup: rbac.authorization.k8s.io

Then run:

 $ kubectl create -f rolebinding-tiller.yaml
 rolebinding "tiller-binding" created

After that, you can run helm init to install Tiller in the tiller-world namespace.

 $ helm init --service-account tiller --tiller-namespace tiller-world 

Now add --tiller-namespace tiller-world to every helm command, or set TILLER_NAMESPACE=tiller-world in your environment.
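With Tiller installed in its own namespace, every client command must target it. A short usage sketch; the stable/mysql chart name is only an illustration:

```shell
# Option 1: pass the flag on each command
helm install --tiller-namespace tiller-world --namespace tiller-world stable/mysql

# Option 2: set it once for the shell session
export TILLER_NAMESPACE=tiller-world
helm list
```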

A more future-proof answer

Stop using Tiller. Helm 3 removes the need for Tiller entirely. If you are using Helm 2, you can use helm template to generate YAML from your Helm chart, then run kubectl apply to apply the objects to your Kubernetes cluster.

 helm template --name foo --namespace bar --output-dir ./output ./chart-template
 kubectl apply --namespace bar --recursive --filename ./output -o yaml
+33
source

Helm runs as the default service account, and you must grant that account the permissions it needs.

For read-only access:

 kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system 

For admin access, for example to install packages:

 kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default 
+19
source

The default service account does not have API permissions. Helm should probably be assigned a dedicated service account, with API permissions granted to that service account. See the RBAC documentation on granting permissions to service accounts: https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions

+3
source

As you said, this is an RBAC problem. I think this page is useful!

0
source
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: tiller
   namespace: kube-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: tiller
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: tiller
   namespace: kube-system

 kubectl apply -f your-config-file-name.yaml

and then update your Helm installation to use the service account:

 helm init --service-account tiller --upgrade

0
source

I got this error while trying to install Tiller offline. I thought the Tiller service account was misconfigured, but it turned out that a network policy was blocking communication between Tiller and the API server.

The solution was to create a network policy for Tiller allowing all of its egress traffic.
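For reference, such an egress-allowing policy might look like the sketch below. The pod selector is an assumption based on the default labels (app: helm, name: tiller) that helm init puts on the tiller-deploy pod; adjust it to match your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tiller-allow-egress
  namespace: kube-system
spec:
  # Selector assumes the default tiller-deploy pod labels; verify with
  # `kubectl get pods -n kube-system --show-labels`.
  podSelector:
    matchLabels:
      app: helm
      name: tiller
  policyTypes:
  - Egress
  egress:
  - {}   # an empty rule allows all outgoing traffic
```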

0
source

I used the following method provided by *suresh Palemoni*. How can I revert these changes?

For admin access, for example to install packages:

 kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

(actually I would like to comment, but I can’t because of reputation ...)
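A clusterrolebinding created with kubectl create is an ordinary API object, so it can be reverted by deleting it by name; a sketch, assuming the add-on-cluster-admin name used above:

```shell
# Removes the binding; the default service account loses cluster-admin rights again.
kubectl delete clusterrolebinding add-on-cluster-admin
```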

-1
source

Source: https://habr.com/ru/post/1272511/
