add prune and remove unused packages
25 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/HACKING.md generated vendored
@@ -1,25 +0,0 @@
# Kubernetes Worker

### Building from the layer

You can clone the kubernetes-worker layer with git and build locally if you
have the charm package/snap installed.

```shell
# Install the snap
sudo snap install charm --channel=edge

# Set the build environment
export JUJU_REPOSITORY=$HOME

# Clone the layer and build it to our JUJU_REPOSITORY
git clone https://github.com/juju-solutions/kubernetes
cd kubernetes/cluster/juju/layers/kubernetes-worker
charm build -r
```
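
Once built, the charm can be deployed from the local repository. A minimal
sketch, assuming `charm build` placed its output under `$JUJU_REPOSITORY/builds`
(the usual layout when `JUJU_REPOSITORY` is set; adjust the path to wherever
your build actually landed):

```shell
# Deploy the locally built charm rather than the store version
juju deploy $JUJU_REPOSITORY/builds/kubernetes-worker
```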

### Contributing

TBD
131 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/README.md generated vendored
@@ -1,131 +0,0 @@
# Kubernetes Worker

## Usage

This charm deploys a container runtime, and additionally stands up the Kubernetes
worker applications: kubelet and kube-proxy.

In order for this charm to be useful, it should be deployed with its companion
charm [kubernetes-master](https://jujucharms.com/u/containers/kubernetes-master)
and linked with an SDN-Plugin.
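
If you prefer to deploy the pieces by hand, the following is an abbreviated
sketch using flannel as the SDN plugin; the relation endpoints are taken from
this charm's `metadata.yaml`, and a full deployment also needs etcd and a CA,
which the bundle below provides:

```shell
juju deploy kubernetes-master
juju deploy kubernetes-worker
juju deploy flannel

# Wire the worker to the master and to the SDN plugin
juju add-relation kubernetes-master:kube-control kubernetes-worker:kube-control
juju add-relation kubernetes-master:kube-api-endpoint kubernetes-worker:kube-api-endpoint
juju add-relation flannel:cni kubernetes-master:cni
juju add-relation flannel:cni kubernetes-worker:cni
```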

This charm has also been bundled up for your convenience so you can skip the
above steps, and deploy it with a single command:

```shell
juju deploy canonical-kubernetes
```

For more information about [Canonical Kubernetes](https://jujucharms.com/canonical-kubernetes)
consult the bundle `README.md` file.


## Scale out

To add additional compute capacity to your Kubernetes workers, you may use
`juju add-unit` to scale the cluster of applications. They will automatically
join any related kubernetes-master, and enlist themselves as ready once the
deployment is complete.
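
For example, a minimal sketch adding three more workers to the existing
application:

```shell
# Add three more kubernetes-worker units
juju add-unit kubernetes-worker -n 3
```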

## Snap Configuration

The kubernetes resources used by this charm are snap packages. When not
specified during deployment, these resources come from the public store. By
default, the `snapd` daemon will refresh all snaps installed from the store
four (4) times per day. A charm configuration option is provided for operators
to control this refresh frequency.

> NOTE: this is a global configuration option and will affect the refresh
time for all snaps installed on a system.

Examples:

```sh
## refresh kubernetes-worker snaps every tuesday
juju config kubernetes-worker snapd_refresh="tue"

## refresh snaps at 11pm on the last (5th) friday of the month
juju config kubernetes-worker snapd_refresh="fri5,23:00"

## delay the refresh as long as possible
juju config kubernetes-worker snapd_refresh="max"

## use the system default refresh timer
juju config kubernetes-worker snapd_refresh=""
```

For more information on the possible values for `snapd_refresh`, see the
*refresh.timer* section in the [system options][] documentation.

[system options]: https://forum.snapcraft.io/t/system-options/87

## Operational actions

The kubernetes-worker charm supports the following Operational Actions:

#### Pause

Pausing the workload enables administrators to both [drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/) and [cordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_cordon/)
a unit for maintenance.


#### Resume

Resuming the workload will [uncordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_uncordon/) a paused unit. Workloads will automatically migrate unless otherwise directed via their application declaration.
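
Both actions are invoked with `juju run-action`; a minimal sketch against the
first worker unit, using the parameters defined in the charm's `actions.yaml`:

```shell
# Cordon and drain the unit (delete-local-data forces removal of local storage)
juju run-action kubernetes-worker/0 pause delete-local-data=true

# Uncordon the unit once maintenance is complete
juju run-action kubernetes-worker/0 resume
```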

## Private registry

With the "registry" action that is part of the kubernetes-worker charm, you can easily create a private Docker registry, with authentication, available over TLS. Please note that the registry deployed with the action is not HA, and it uses storage tied to the Kubernetes node where the pod is running. So if the registry pod is migrated from one node to another for whatever reason, you will need to re-publish the images.

### Example usage

Create the relevant authentication files. Let's say you want user `userA` to authenticate with the password `passwordA`. Then you'll do:

    echo -n "userA:passwordA" > htpasswd-plain
    htpasswd -c -b -B htpasswd userA passwordA

(the `htpasswd` program comes with the `apache2-utils` package)

Supposing your registry will be reachable at `myregistry.company.com`, and that you already have your TLS key in the `registry.key` file, and your TLS certificate (with `myregistry.company.com` as Common Name) in the `registry.crt` file, you would then run:

    juju run-action kubernetes-worker/0 registry domain=myregistry.company.com htpasswd="$(base64 -w0 htpasswd)" htpasswd-plain="$(base64 -w0 htpasswd-plain)" tlscert="$(base64 -w0 registry.crt)" tlskey="$(base64 -w0 registry.key)" ingress=true

If you then decide that you want to delete the registry, just run:

    juju run-action kubernetes-worker/0 registry delete=true ingress=true
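
Once the registry is up, Docker clients authenticate with the credentials
created above; a hypothetical sketch using the example user and domain:

```shell
# Log in to the private registry, then tag and push an image to it
docker login -u userA -p passwordA myregistry.company.com
docker tag busybox myregistry.company.com/busybox
docker push myregistry.company.com/busybox
```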

## Known Limitations

Kubernetes workers currently only support 'phaux' HA scenarios. Even when
configured with an HA cluster string, they will only ever contact the first
unit in the cluster map. To enable a proper HA story, kubernetes-worker units
are encouraged to proxy through a
[kubeapi-load-balancer](https://jujucharms.com/kubeapi-load-balancer)
application. This enables an HA deployment without the need to re-render
configuration and disrupt the worker services.

External access to pods must be performed through a [Kubernetes
Ingress Resource](http://kubernetes.io/docs/user-guide/ingress/).

When using NodePort type networking, there is no automation in exposing the
ports selected by kubernetes or chosen by the user. They will need to be
opened manually, which can be done across an entire worker pool.

If your selected NodePort service port is `30510`, you can open it across all
members of a worker pool named `kubernetes-worker` like so:

```
juju run --application kubernetes-worker open-port 30510/tcp
```

Don't forget to expose the kubernetes-worker application if it's not already
exposed, as this can cause confusion once the port has been opened and the
service is not reachable.

Note: When debugging connection issues with NodePort services, it's important
to first check the kube-proxy service on the worker units. If kube-proxy is not
running, the associated port-mapping will not be configured in the iptables
rule chains.
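
A quick way to perform that check, assuming the snap-based service name this
charm installs (`snap.kube-proxy.daemon`, as used by its debug scripts):

```shell
# Inspect kube-proxy on every worker unit
juju run --application kubernetes-worker 'systemctl status snap.kube-proxy.daemon'
```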

If you need to close the NodePort once a workload has been terminated, you can
follow the same steps in reverse.

```
juju run --application kubernetes-worker close-port 30510/tcp
```
56 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions.yaml generated vendored
@@ -1,56 +0,0 @@
pause:
  description: |
    Cordon the unit, draining all active workloads.
  params:
    delete-local-data:
      type: boolean
      description: Force deletion of local storage to enable a drain
      default: False
    force:
      type: boolean
      description: |
        Continue even if there are pods not managed by an RC, RS, Job, DS or SS
      default: False

resume:
  description: |
    Uncordon the unit, enabling workload scheduling.
microbot:
  description: Launch microbot containers
  params:
    replicas:
      type: integer
      default: 3
      description: Number of microbots to launch in Kubernetes.
    delete:
      type: boolean
      default: False
      description: Remove a microbots deployment, service, and ingress if True.
upgrade:
  description: Upgrade the kubernetes snaps
registry:
  description: Create a private Docker registry
  params:
    htpasswd:
      type: string
      description: base64 encoded htpasswd file used for authentication.
    htpasswd-plain:
      type: string
      description: base64 encoded plaintext version of the htpasswd file, needed by docker daemons to authenticate to the registry.
    tlscert:
      type: string
      description: base64 encoded TLS certificate for the registry. Common Name must match the domain name of the registry.
    tlskey:
      type: string
      description: base64 encoded TLS key for the registry.
    domain:
      type: string
      description: The domain name for the registry. Must match the Common Name of the certificate.
    ingress:
      type: boolean
      default: false
      description: Create an Ingress resource for the registry (or delete the resource object if "delete" is True)
    delete:
      type: boolean
      default: false
      description: Remove a registry replication controller, service, and ingress if True.
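
These actions are invoked with `juju run-action`; for instance, a minimal
sketch exercising the microbot action defined above (unit name assumed):

```shell
# Launch five microbot replicas, then tear them down again
juju run-action kubernetes-worker/0 microbot replicas=5
juju run-action kubernetes-worker/0 microbot delete=true
```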
76 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions/microbot generated vendored
@@ -1,76 +0,0 @@
#!/usr/local/sbin/charm-env python3

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

from charmhelpers.core.hookenv import action_get
from charmhelpers.core.hookenv import action_set
from charmhelpers.core.hookenv import unit_public_ip
from charms.templating.jinja2 import render
from subprocess import call, check_output

os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')

context = {}
context['replicas'] = action_get('replicas')
context['delete'] = action_get('delete')
context['public_address'] = unit_public_ip()

arch = check_output(['dpkg', '--print-architecture']).rstrip()
context['arch'] = arch.decode('utf-8')

if not context['replicas']:
    context['replicas'] = 3

# Declare a kubectl template when invoking kubectl
kubectl = ['kubectl', '--kubeconfig=/root/.kube/config']

# Remove deployment if requested
if context['delete']:
    service_del = kubectl + ['delete', 'svc', 'microbot']
    service_response = call(service_del)
    deploy_del = kubectl + ['delete', 'deployment', 'microbot']
    deploy_response = call(deploy_del)
    ingress_del = kubectl + ['delete', 'ing', 'microbot-ingress']
    ingress_response = call(ingress_del)

    if ingress_response != 0:
        action_set({'microbot-ing':
                    'Failed removal of microbot ingress resource.'})
    if deploy_response != 0:
        action_set({'microbot-deployment':
                    'Failed removal of microbot deployment resource.'})
    if service_response != 0:
        action_set({'microbot-service':
                    'Failed removal of microbot service resource.'})
    sys.exit(0)

# Creation request

render('microbot-example.yaml', '/root/cdk/addons/microbot.yaml',
       context)

create_command = kubectl + ['create', '-f',
                            '/root/cdk/addons/microbot.yaml']

create_response = call(create_command)

if create_response == 0:
    action_set({'address':
                'microbot.{}.xip.io'.format(context['public_address'])})
else:
    action_set({'microbot-create': 'Failed microbot creation.'})
28 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions/pause generated vendored
@@ -1,28 +0,0 @@
#!/usr/bin/env bash

set -ex

export PATH=$PATH:/snap/bin

DELETE_LOCAL_DATA=$(action-get delete-local-data)
FORCE=$(action-get force)

# placeholder for additional flags to the command
export EXTRA_FLAGS=""

# Determine if we have extra flags
if [[ "${DELETE_LOCAL_DATA}" == "True" || "${DELETE_LOCAL_DATA}" == "true" ]]; then
  EXTRA_FLAGS="${EXTRA_FLAGS} --delete-local-data=true"
fi

if [[ "${FORCE}" == "True" || "${FORCE}" == "true" ]]; then
  EXTRA_FLAGS="${EXTRA_FLAGS} --force"
fi


# Cordon and drain the unit
kubectl --kubeconfig=/root/.kube/config cordon $(hostname)
kubectl --kubeconfig=/root/.kube/config drain $(hostname) ${EXTRA_FLAGS}

# Set status to indicate the unit is paused and under maintenance.
status-set 'waiting' 'Kubernetes unit paused'
139 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions/registry generated vendored
@@ -1,139 +0,0 @@
#!/usr/local/sbin/charm-env python3
#
# For usage examples, see README.md
#
# TODO
#
# - make the action idempotent (i.e. if you run it multiple times, the first
# run will create/delete the registry, and the rest will be a no-op and won't
# error out)
#
# - take only a plain authentication file, and create the encrypted version in
# the action
#
# - validate the parameters (make sure tlscert is a certificate, that tlskey is a
# proper key, etc)
#
# - when https://bugs.launchpad.net/juju/+bug/1661015 is fixed, handle the
# base64 encoding of the parameters in the action itself

import os
import sys

from base64 import b64encode

from charmhelpers.core.hookenv import action_get
from charmhelpers.core.hookenv import action_set
from charms.templating.jinja2 import render
from subprocess import call, check_output

os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')

deletion = action_get('delete')

context = {}

arch = check_output(['dpkg', '--print-architecture']).rstrip()
context['arch'] = arch.decode('utf-8')

# These config options must be defined in the case of a creation
param_error = False
for param in ('tlscert', 'tlskey', 'domain', 'htpasswd', 'htpasswd-plain'):
    value = action_get(param)
    if not value and not deletion:
        key = "registry-create-parameter-{}".format(param)
        error = "failure, parameter {} is required".format(param)
        action_set({key: error})
        param_error = True

    context[param] = value

# Create the dockercfg template variable
dockercfg = '{"%s": {"auth": "%s", "email": "root@localhost"}}' % \
    (context['domain'], context['htpasswd-plain'])
context['dockercfg'] = b64encode(dockercfg.encode()).decode('ASCII')

if param_error:
    sys.exit(0)

# This one is either true or false, no need to check if it has a "good" value.
context['ingress'] = action_get('ingress')

# Declare a kubectl template when invoking kubectl
kubectl = ['kubectl', '--kubeconfig=/root/.kube/config']

# Remove deployment if requested
if deletion:
    resources = ['svc/kube-registry', 'rc/kube-registry-v0', 'secrets/registry-tls-data',
                 'secrets/registry-auth-data', 'secrets/registry-access']

    if action_get('ingress'):
        resources.append('ing/registry-ing')

    delete_command = kubectl + ['delete', '--ignore-not-found=true'] + resources
    delete_response = call(delete_command)
    if delete_response == 0:
        action_set({'registry-delete': 'success'})
    else:
        action_set({'registry-delete': 'failure'})

    sys.exit(0)

# Creation request
render('registry.yaml', '/root/cdk/addons/registry.yaml',
       context)

create_command = kubectl + ['create', '-f',
                            '/root/cdk/addons/registry.yaml']

create_response = call(create_command)

if create_response == 0:
    action_set({'registry-create': 'success'})

    # Create a ConfigMap if it doesn't exist yet, else patch it.
    # A ConfigMap is needed to change the default value for nginx's client_max_body_size.
    # The default is 1MB, and this is the maximum size of images that can be
    # pushed on the registry. 1MB images aren't useful, so we bump this value to 1024MB.
    cm_name = 'nginx-load-balancer-conf'
    check_cm_command = kubectl + ['get', 'cm', cm_name]
    check_cm_response = call(check_cm_command)

    if check_cm_response == 0:
        # There is an existing ConfigMap, patch it
        patch = '{"data":{"body-size":"1024m"}}'
        patch_cm_command = kubectl + ['patch', 'cm', cm_name, '-p', patch]
        patch_cm_response = call(patch_cm_command)

        if patch_cm_response == 0:
            action_set({'configmap-patch': 'success'})
        else:
            action_set({'configmap-patch': 'failure'})

    else:
        # No existing ConfigMap, create it
        render('registry-configmap.yaml', '/root/cdk/addons/registry-configmap.yaml',
               context)
        create_cm_command = kubectl + ['create', '-f', '/root/cdk/addons/registry-configmap.yaml']
        create_cm_response = call(create_cm_command)

        if create_cm_response == 0:
            action_set({'configmap-create': 'success'})
        else:
            action_set({'configmap-create': 'failure'})

    # Patch the "default" serviceaccount with an imagePullSecret.
    # This will allow the docker daemons to authenticate to our private
    # registry automatically
    patch = '{"imagePullSecrets":[{"name":"registry-access"}]}'
    patch_sa_command = kubectl + ['patch', 'sa', 'default', '-p', patch]
    patch_sa_response = call(patch_sa_command)

    if patch_sa_response == 0:
        action_set({'serviceaccount-patch': 'success'})
    else:
        action_set({'serviceaccount-patch': 'failure'})


else:
    action_set({'registry-create': 'failure'})
8 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions/resume generated vendored
@@ -1,8 +0,0 @@
#!/usr/bin/env bash

set -ex

export PATH=$PATH:/snap/bin

kubectl --kubeconfig=/root/.kube/config uncordon $(hostname)
status-set 'active' 'Kubernetes unit resumed'
5 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/actions/upgrade generated vendored
@@ -1,5 +0,0 @@
#!/bin/sh
set -eux

charms.reactive set_state kubernetes-worker.snaps.upgrade-specified
exec hooks/config-changed
108 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/config.yaml generated vendored
@@ -1,108 +0,0 @@
options:
  ingress:
    type: boolean
    default: true
    description: |
      Deploy the default http backend and ingress controller to handle
      ingress requests.
  labels:
    type: string
    default: ""
    description: |
      Labels can be used to organize and to select subsets of nodes in the
      cluster. Declare node labels in key=value format, separated by spaces.
  allow-privileged:
    type: string
    default: "true"
    description: |
      Allow privileged containers to run on worker nodes. Supported values are
      "true", "false", and "auto". If "true", kubelet will run in privileged
      mode by default. If "false", kubelet will never run in privileged mode.
      If "auto", kubelet will not run in privileged mode by default, but will
      switch to privileged mode if GPU hardware is detected. Pod security
      policies (PSP) should be used to restrict container privileges.
  channel:
    type: string
    default: "1.11/stable"
    description: |
      Snap channel to install Kubernetes worker services from
  require-manual-upgrade:
    type: boolean
    default: true
    description: |
      When true, worker services will not be upgraded until the user triggers
      it manually by running the upgrade action.
  kubelet-extra-args:
    type: string
    default: ""
    description: |
      Space separated list of flags and key=value pairs that will be passed as arguments to
      kubelet. For example a value like this:
      runtime-config=batch/v2alpha1=true profiling=true
      will result in kubelet being run with the following options:
      --runtime-config=batch/v2alpha1=true --profiling=true
  proxy-extra-args:
    type: string
    default: ""
    description: |
      Space separated list of flags and key=value pairs that will be passed as arguments to
      kube-proxy. For example a value like this:
      runtime-config=batch/v2alpha1=true profiling=true
      will result in kube-proxy being run with the following options:
      --runtime-config=batch/v2alpha1=true --profiling=true
  docker-logins:
    type: string
    default: "[]"
    description: |
      Docker login credentials. Setting this config allows Kubelet to pull images from
      registries where auth is required.

      The value for this config must be a JSON array of credential objects, like this:
      [{"server": "my.registry", "username": "myUser", "password": "myPass"}]
  ingress-ssl-chain-completion:
    type: boolean
    default: false
    description: |
      Enable chain completion for TLS certificates used by the nginx ingress
      controller. Set this to true if you would like the ingress controller
      to attempt auto-retrieval of intermediate certificates. The default
      (false) is recommended for all production kubernetes installations, and
      any environment which does not have outbound Internet access.
  nginx-image:
    type: string
    default: "auto"
    description: |
      Docker image to use for the nginx ingress controller. Auto will select an image
      based on architecture.
  default-backend-image:
    type: string
    default: "auto"
    description: |
      Docker image to use for the default backend. Auto will select an image
      based on architecture.
  snapd_refresh:
    default: "max"
    type: string
    description: |
      How often snapd handles updates for installed snaps. Setting an empty
      string will check 4x per day. Set to "max" to delay the refresh as long
      as possible. You may also set a custom string as described in the
      'refresh.timer' section here:
      https://forum.snapcraft.io/t/system-options/87
  kubelet-extra-config:
    default: "{}"
    type: string
    description: |
      Extra configuration to be passed to kubelet. Any values specified in this
      config will be merged into a KubeletConfiguration file that is passed to
      the kubelet service via the --config flag. This can be used to override
      values provided by the charm.

      Requires Kubernetes 1.10+.

      The value for this config must be a YAML mapping that can be safely
      merged with a KubeletConfiguration file. For example:
      {evictionHard: {memory.available: 200Mi}}

      For more information about KubeletConfiguration, see upstream docs:
      https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
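
As defined above, these options are set with `juju config`; a minimal sketch
using the formats given in the option descriptions:

```shell
# Label the worker nodes and override a kubelet eviction threshold
juju config kubernetes-worker labels="node-class=workload"
juju config kubernetes-worker kubelet-extra-config="{evictionHard: {memory.available: 200Mi}}"
```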
8 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/debug-scripts/inotify generated vendored
@@ -1,8 +0,0 @@
#!/bin/sh
set -ux

# We had to bump inotify limits once in the past, which is why this oddly
# specific script lives here in kubernetes-worker.

sysctl fs.inotify > $DEBUG_SCRIPT_DIR/sysctl-limits
ls -l /proc/*/fd/* | grep inotify > $DEBUG_SCRIPT_DIR/inotify-instances
15 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/debug-scripts/kubectl generated vendored
@@ -1,15 +0,0 @@
#!/bin/sh
set -ux

export PATH=$PATH:/snap/bin

alias kubectl="kubectl --kubeconfig=/root/cdk/kubeconfig"

kubectl cluster-info > $DEBUG_SCRIPT_DIR/cluster-info
kubectl cluster-info dump > $DEBUG_SCRIPT_DIR/cluster-info-dump
for obj in pods svc ingress secrets pv pvc rc; do
  kubectl describe $obj --all-namespaces > $DEBUG_SCRIPT_DIR/describe-$obj
done
for obj in nodes; do
  kubectl describe $obj > $DEBUG_SCRIPT_DIR/describe-$obj
done
@@ -1,9 +0,0 @@
#!/bin/sh
set -ux

for service in kubelet kube-proxy; do
  systemctl status snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-systemctl-status
  journalctl -u snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-journal
done

# FIXME: get the snap config or something
@@ -1,2 +0,0 @@
# This stubs out charm-pre-install coming from layer-docker as a workaround for
# offline installs until https://github.com/juju/charm-tools/issues/301 is fixed.
@@ -1,17 +0,0 @@
#!/usr/bin/env bash
MY_HOSTNAME=$(hostname)

: ${JUJU_UNIT_NAME:=`uuidgen`}


if [ "${MY_HOSTNAME}" == "ubuntuguest" ]; then
  juju-log "Detected broken vsphere integration. Applying hostname override"

  FRIENDLY_HOSTNAME=$(echo $JUJU_UNIT_NAME | tr / -)
  juju-log "Setting hostname to $FRIENDLY_HOSTNAME"
  if [ ! -f /etc/hostname.orig ]; then
    mv /etc/hostname /etc/hostname.orig
  fi
  echo "${FRIENDLY_HOSTNAME}" > /etc/hostname
  hostname $FRIENDLY_HOSTNAME
fi
362 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/icon.svg generated vendored
File diff suppressed because one or more lines are too long
42 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/layer.yaml generated vendored
@@ -1,42 +0,0 @@
repo: https://github.com/kubernetes/kubernetes.git
includes:
  - 'layer:basic'
  - 'layer:debug'
  - 'layer:snap'
  - 'layer:leadership'
  - 'layer:docker'
  - 'layer:metrics'
  - 'layer:nagios'
  - 'layer:tls-client'
  - 'layer:cdk-service-kicker'
  - 'interface:http'
  - 'interface:kubernetes-cni'
  - 'interface:kube-dns'
  - 'interface:kube-control'
  - 'interface:aws-integration'
  - 'interface:gcp-integration'
  - 'interface:openstack-integration'
  - 'interface:vsphere-integration'
  - 'interface:azure-integration'
  - 'interface:mount'
config:
  deletes:
    - install_from_upstream
options:
  basic:
    packages:
      - 'cifs-utils'
      - 'ceph-common'
      - 'nfs-common'
      - 'socat'
      - 'virt-what'
  tls-client:
    ca_certificate_path: '/root/cdk/ca.crt'
    server_certificate_path: '/root/cdk/server.crt'
    server_key_path: '/root/cdk/server.key'
    client_certificate_path: '/root/cdk/client.crt'
    client_key_path: '/root/cdk/client.key'
  cdk-service-kicker:
    services:
      - 'snap.kubelet.daemon'
      - 'snap.kube-proxy.daemon'
@@ -1,35 +0,0 @@
#!/usr/bin/env python

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import subprocess


def get_version(bin_name):
    """Get the version of an installed Kubernetes binary.

    :param str bin_name: Name of binary
    :return: 3-tuple version (maj, min, patch)

    Example::

        >>> get_version('kubelet')
        (1, 6, 0)

    """
    cmd = '{} --version'.format(bin_name).split()
    version_string = subprocess.check_output(cmd).decode('utf-8')
    return tuple(int(q) for q in re.findall("[0-9]+", version_string)[:3])
73 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/metadata.yaml generated vendored
@@ -1,73 +0,0 @@
name: kubernetes-worker
summary: The workload bearing units of a kubernetes cluster
maintainers:
  - Tim Van Steenburgh <tim.van.steenburgh@canonical.com>
  - George Kraft <george.kraft@canonical.com>
  - Rye Terrell <rye.terrell@canonical.com>
  - Konstantinos Tsakalozos <kos.tsakalozos@canonical.com>
  - Charles Butler <Chuck@dasroot.net>
  - Matthew Bruzek <mbruzek@ubuntu.com>
  - Mike Wilson <mike.wilson@canonical.com>
description: |
  Kubernetes is an open-source platform for deploying, scaling, and operations
  of application containers across a cluster of hosts. Kubernetes is portable
  in that it works with public, private, and hybrid clouds. It is extensible
  through a pluggable infrastructure, and self-healing in that it will
  automatically restart and place containers on healthy nodes if a node ever
  goes away.
tags:
  - misc
series:
  - xenial
  - bionic
subordinate: false
requires:
  kube-api-endpoint:
    interface: http
  kube-dns:
    # kube-dns is deprecated. Its functionality has been rolled into the
    # kube-control interface. The kube-dns relation will be removed in
    # a future release.
    interface: kube-dns
  kube-control:
    interface: kube-control
  aws:
    interface: aws-integration
  gcp:
    interface: gcp-integration
  openstack:
    interface: openstack-integration
  vsphere:
    interface: vsphere-integration
  azure:
    interface: azure-integration
  nfs:
    interface: mount
provides:
  cni:
    interface: kubernetes-cni
    scope: container
resources:
  cni-amd64:
    type: file
    filename: cni.tgz
    description: CNI plugins for amd64
  cni-arm64:
    type: file
    filename: cni.tgz
    description: CNI plugins for arm64
  cni-s390x:
    type: file
    filename: cni.tgz
    description: CNI plugins for s390x
  kubectl:
    type: file
    filename: kubectl.snap
    description: kubectl snap
  kubelet:
    type: file
    filename: kubelet.snap
    description: kubelet snap
  kube-proxy:
    type: file
    filename: kube-proxy.snap
    description: kube-proxy snap
2 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/metrics.yaml generated vendored
@@ -1,2 +0,0 @@
metrics:
  juju-units: {}
1500 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py generated vendored
File diff suppressed because it is too large
@@ -1,6 +0,0 @@
apiVersion: v1
data:
  body-size: 1024m
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
@@ -1,44 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: {{ defaultbackend_image }}
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  # namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: default-http-backend
@@ -1,179 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-{{ juju_application }}-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-{{ juju_application }}-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-{{ juju_application }}-role
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-{{ juju_application }}-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-{{ juju_application }}-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-{{ juju_application }}-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-{{ juju_application }}-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-{{ juju_application }}-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-{{ juju_application }}-serviceaccount
    namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-{{ juju_application }}-conf
---
apiVersion: {{ daemonset_api_version }}
kind: DaemonSet
metadata:
  name: nginx-ingress-{{ juju_application }}-controller
  labels:
    juju-application: nginx-ingress-{{ juju_application }}
spec:
  selector:
    matchLabels:
      name: nginx-ingress-{{ juju_application }}
  template:
    metadata:
      labels:
        name: nginx-ingress-{{ juju_application }}
    spec:
      nodeSelector:
        juju-application: {{ juju_application }}
      terminationGracePeriodSeconds: 60
      # hostPort doesn't work with CNI, so we have to use hostNetwork instead
      # see https://github.com/kubernetes/kubernetes/issues/23920
      hostNetwork: true
      serviceAccountName: nginx-ingress-{{ juju_application }}-serviceaccount
      containers:
      - image: {{ ingress_image }}
        name: nginx-ingress-{{ juju_application }}
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        # use downward API
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --enable-ssl-chain-completion={{ ssl_chain_completion }}
@@ -1,63 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: microbot
  name: microbot
spec:
  replicas: {{ replicas }}
  selector:
    matchLabels:
      app: microbot
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: microbot
    spec:
      containers:
      - image: cdkbot/microbot-{{ arch }}:latest
        imagePullPolicy: ""
        name: microbot
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 30
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: microbot
  labels:
    app: microbot
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: microbot
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microbot-ingress
spec:
  rules:
  - host: microbot.{{ public_address }}.xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: microbot
          servicePort: 80
@@ -1,39 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: {{ hostname }}
            - name: NFS_PATH
              value: {{ mountpoint }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ hostname }}
            path: {{ mountpoint }}
118 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/templates/registry.yaml generated vendored
@@ -1,118 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: registry-tls-data
type: Opaque
data:
  tls.crt: {{ tlscert }}
  tls.key: {{ tlskey }}
---
apiVersion: v1
kind: Secret
metadata:
  name: registry-auth-data
type: Opaque
data:
  htpasswd: {{ htpasswd }}
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  labels:
    k8s-app: kube-registry
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: cdkbot/registry-{{ arch }}:2.6
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: basic_realm
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        - name: auth-dir
          mountPath: /auth
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        hostPath:
          path: /srv/registry
      - name: auth-dir
        secret:
          secretName: registry-auth-data
---
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: kube-registry
  type: LoadBalancer
  ports:
  - name: registry
    port: 5000
    protocol: TCP
---
apiVersion: v1
kind: Secret
metadata:
  name: registry-access
data:
  .dockercfg: {{ dockercfg }}
type: kubernetes.io/dockercfg
{%- if ingress %}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry-ing
spec:
  tls:
    - hosts:
        - {{ domain }}
      secretName: registry-tls-data
  rules:
  - host: {{ domain }}
    http:
      paths:
      - backend:
          serviceName: kube-registry
          servicePort: 5000
        path: /
{% endif %}
1 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/wheelhouse.txt generated vendored
@@ -1 +0,0 @@
charms.templating.jinja2>=0.0.1,<2.0.0