Using Kubernetes with Artifactory / Nginx Proxy

In short: how do I configure my YAML / Kubernetes setup to pull images through my Artifactory?

I am new to Kubernetes and have already got what I am trying to do working on my personal machine, based on this tutorial.

At work, I would like to use Kubernetes so that developers can quickly deploy and scale docker images, but we work in an isolated environment. We have Nginx and Artifactory, which already work with our docker clients, and they can pull all the images needed for this when I do it manually.

Installing Kubernetes, creating/linking containers and submitting jobs all go well; however, when I try to run selenium-hub-rc.yaml I get errors. When I check the pod description for that pod, I see:

"Startcontainer for POD failed with ErrImagePull:" image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure, this may be due to lack of credentials for this request. Details: (failed to check the ping registry endpoints https://registry.access.redhat.com/v0/\nv2 ping attempt failed with error: get https://regitry.access.redhat.com/v2: bad request \ n v1 Error ping attemp with error: Get https://registry.access.redhat.com/v1/_ping: invalid request)

What I believe is happening is that the dependency of the selenium image, the pod-infrastructure image, is being pulled from the public registry rather than through my Artifactory remote.

This is where my question comes in: I have been trying to find a way to make the image pulls from my nodes go through my Artifactory. I have tried several approaches so far, but all of them failed.

Here are some configurations that I use.

My nginx is configured to listen on 8088 for plain docker requests and forward them to the Artifactory repository called docker-remote. It also listens on 8089 for registry.access.redhat.com requests and forwards them to the red-remote repository.
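
Roughly, the two server blocks look like the sketch below. The Artifactory backend address (artifactory.mylab.lab:8081) and the /artifactory/api/docker/... rewrite path are illustrative placeholders for how my Artifactory exposes its docker API, not copied from the real config:

server {
    listen 8088;
    server_name myproxy.mylab.lab;

    location / {
        # map registry API calls (/v1, /v2) onto the docker-remote repository
        rewrite ^/(v1|v2)/(.*)$ /artifactory/api/docker/docker-remote/$1/$2 break;
        proxy_pass http://artifactory.mylab.lab:8081;
        proxy_set_header Host $host;
    }
}

server {
    listen 8089;
    server_name red-myproxy.mylab.lab;

    location / {
        # registry.access.redhat.com requests go to the red-remote repository
        rewrite ^/(v1|v2)/(.*)$ /artifactory/api/docker/red-remote/$1/$2 break;
        proxy_pass http://artifactory.mylab.lab:8081;
        proxy_set_header Host $host;
    }
}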

To be clear, I have Artifactory remote docker repositories: docker-remote proxying docker.io and red-remote proxying the Red Hat registry.

The nodes/pods have a hosts file entry pointing red-myproxy.mylab.lab at the nginx proxy server myproxy.mylab.lab, and a docker.conf file that includes:

INSECURE_REGISTRY='--insecure-registry myproxy.mylab.lab:8088'
INSECURE_REGISTRY='--insecure-registry red-myproxy.mylab.lab:8089'
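
For completeness, the hosts entry on each node looks something like this (the IP address is just a placeholder, both names resolve to the nginx proxy box):

# /etc/hosts on each node
192.168.1.10   myproxy.mylab.lab   red-myproxy.mylab.lab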

Everything works fine up to this point. Then, to create the containers, I use selenium-hub-rc.yaml:

# selenium-hub-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-hub
spec:
  replicas: 1
  selector:
    name: selenium-hub
  template:
    metadata:
      labels:
        name: selenium-hub
    spec:
      containers:
      - name: selenium-hub
        image: myproxy.mylab.lab:8088/selenium/hub
        ports:
        - containerPort: 4444

That's all. I launch it and it gives me the error above. I also tried setting a proxy in the docker.conf and flannel.conf files, to see whether I could point any default requests at my repo:

HTTP_PROXY = "HTTP: //red-proxy.mylab.lab: 8089"

but that failed too. I tried to figure out whether the YAML itself could do it, but could not find anything, and wondered whether I might need to edit the selenium image directly to make this work.

I'm really confused by this, but I'm sure other people out there are making good use of a sandboxed Kubernetes with multiple repositories for the docker images it has to pull.

Thank you for taking the time to look at my problem, and I hope you can help!

1 answer

I found the answer: edit the file /etc/kubernetes/kubelet on the Kubernetes nodes themselves. It contains the line

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

Update it so that the registry.access.redhat.com part points to my proxy server red-proxy.mylab.lab:8089 instead. It works! I am so glad this headache is over :)
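
For concreteness, the edited line ends up looking roughly like this (assuming the image path behind the proxy mirrors the original repository layout), followed by a kubelet restart so the new setting is picked up:

# /etc/kubernetes/kubelet on each node
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=red-proxy.mylab.lab:8089/rhel7/pod-infrastructure:latest"

# then restart the kubelet so it pulls the pod-infra image through the proxy
systemctl restart kubelet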

Credit goes to wiki.christophchamp.com/index.php?title=Kubernetes

