Bumping k8s dependencies to 1.13
35 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/README.md (generated, vendored)
@@ -27,6 +27,38 @@ To add additional compute capacity to your Kubernetes workers, you may
 join any related kubernetes-master, and enlist themselves as ready once the
 deployment is complete.
 
+## Snap Configuration
+
+The kubernetes resources used by this charm are snap packages. When not
+specified during deployment, these resources come from the public store. By
+default, the `snapd` daemon will refresh all snaps installed from the store
+four (4) times per day. A charm configuration option is provided for operators
+to control this refresh frequency.
+
+>NOTE: this is a global configuration option and will affect the refresh
+time for all snaps installed on a system.
+
+Examples:
+
+```sh
+## refresh kubernetes-worker snaps every tuesday
+juju config kubernetes-worker snapd_refresh="tue"
+
+## refresh snaps at 11pm on the last (5th) friday of the month
+juju config kubernetes-worker snapd_refresh="fri5,23:00"
+
+## delay the refresh as long as possible
+juju config kubernetes-worker snapd_refresh="max"
+
+## use the system default refresh timer
+juju config kubernetes-worker snapd_refresh=""
+```
+
+For more information on the possible values for `snapd_refresh`, see the
+*refresh.timer* section in the [system options][] documentation.
+
+[system options]: https://forum.snapcraft.io/t/system-options/87
+
 ## Operational actions
 
 The kubernetes-worker charm supports the following Operational Actions:
@@ -89,7 +121,7 @@ service is not reachable.
 Note: When debugging connection issues with NodePort services, it's important
 to first check the kube-proxy service on the worker units. If kube-proxy is not
 running, the associated port-mapping will not be configured in the iptables
-rulechains.
+rulechains.
 
 If you need to close the NodePort once a workload has been terminated, you can
 follow the same steps inversely.
@@ -97,4 +129,3 @@ follow the same steps inversely.
 ```
 juju run --application kubernetes-worker close-port 30510
 ```
-
42 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/config.yaml (generated, vendored)
@@ -13,16 +13,17 @@ options:
       cluster. Declare node labels in key=value format, separated by spaces.
   allow-privileged:
     type: string
-    default: "auto"
+    default: "true"
     description: |
      Allow privileged containers to run on worker nodes. Supported values are
      "true", "false", and "auto". If "true", kubelet will run in privileged
      mode by default. If "false", kubelet will never run in privileged mode.
      If "auto", kubelet will not run in privileged mode by default, but will
-     switch to privileged mode if gpu hardware is detected.
+     switch to privileged mode if gpu hardware is detected. Pod security
+     policies (PSP) should be used to restrict container privileges.
   channel:
     type: string
-    default: "1.10/stable"
+    default: "1.11/stable"
    description: |
      Snap channel to install Kubernetes worker services from
  require-manual-upgrade:
@@ -58,6 +59,15 @@ options:
 
      The value for this config must be a JSON array of credential objects, like this:
        [{"server": "my.registry", "username": "myUser", "password": "myPass"}]
+  ingress-ssl-chain-completion:
+    type: boolean
+    default: false
+    description: |
+      Enable chain completion for TLS certificates used by the nginx ingress
+      controller. Set this to true if you would like the ingress controller
+      to attempt auto-retrieval of intermediate certificates. The default
+      (false) is recommended for all production kubernetes installations, and
+      any environment which does not have outbound Internet access.
   nginx-image:
     type: string
     default: "auto"
@@ -70,3 +80,29 @@ options:
    description: |
      Docker image to use for the default backend. Auto will select an image
      based on architecture.
+  snapd_refresh:
+    default: "max"
+    type: string
+    description: |
+      How often snapd handles updates for installed snaps. Setting an empty
+      string will check 4x per day. Set to "max" to delay the refresh as long
+      as possible. You may also set a custom string as described in the
+      'refresh.timer' section here:
+      https://forum.snapcraft.io/t/system-options/87
+  kubelet-extra-config:
+    default: "{}"
+    type: string
+    description: |
+      Extra configuration to be passed to kubelet. Any values specified in this
+      config will be merged into a KubeletConfiguration file that is passed to
+      the kubelet service via the --config flag. This can be used to override
+      values provided by the charm.
+
+      Requires Kubernetes 1.10+.
+
+      The value for this config must be a YAML mapping that can be safely
+      merged with a KubeletConfiguration file. For example:
+        {evictionHard: {memory.available: 200Mi}}
+
+      For more information about KubeletConfiguration, see upstream docs:
+      https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
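Because `kubelet-extra-config` travels through Juju as a string and is only parsed on the worker, a malformed value fails late. A quick pre-flight check of a candidate value is easy to run locally; this is a hedged sketch (PyYAML assumed, not part of the charm):

```python
import yaml

# Candidate value for the kubelet-extra-config option, as you would pass it
# via `juju config kubernetes-worker kubelet-extra-config=...`.
value = '{evictionHard: {memory.available: 200Mi}}'

parsed = yaml.safe_load(value)
assert isinstance(parsed, dict), 'kubelet-extra-config must be a YAML mapping'
print(parsed)  # {'evictionHard': {'memory.available': '200Mi'}}
```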
8 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/layer.yaml (generated, vendored)
@@ -3,6 +3,7 @@ includes:
   - 'layer:basic'
   - 'layer:debug'
   - 'layer:snap'
+  - 'layer:leadership'
   - 'layer:docker'
   - 'layer:metrics'
   - 'layer:nagios'
@@ -12,8 +13,11 @@ includes:
   - 'interface:kubernetes-cni'
   - 'interface:kube-dns'
   - 'interface:kube-control'
-  - 'interface:aws'
-  - 'interface:gcp'
+  - 'interface:aws-integration'
+  - 'interface:gcp-integration'
+  - 'interface:openstack-integration'
+  - 'interface:vsphere-integration'
+  - 'interface:azure-integration'
   - 'interface:mount'
 config:
   deletes:
11 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/metadata.yaml (generated, vendored)
@@ -18,6 +18,7 @@ tags:
   - misc
 series:
   - xenial
+  - bionic
 subordinate: false
 requires:
   kube-api-endpoint:
@@ -30,9 +31,15 @@ requires:
   kube-control:
     interface: kube-control
   aws:
-    interface: aws
+    interface: aws-integration
   gcp:
-    interface: gcp
+    interface: gcp-integration
+  openstack:
+    interface: openstack-integration
+  vsphere:
+    interface: vsphere-integration
+  azure:
+    interface: azure-integration
   nfs:
     interface: mount
 provides:
424 vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py (generated, vendored)
@@ -14,12 +14,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import hashlib
 import json
 import os
 import random
 import shutil
 import subprocess
 import time
+import yaml
+
+from charms.leadership import leader_get, leader_set
 
 from pathlib import Path
 from shlex import split
@@ -36,7 +40,7 @@ from charms.reactive import when, when_any, when_not, when_none
 
 from charms.kubernetes.common import get_version
 
-from charms.reactive.helpers import data_changed, any_file_changed
+from charms.reactive.helpers import data_changed
 from charms.templating.jinja2 import render
 
 from charmhelpers.core import hookenv, unitdata
@@ -52,6 +56,7 @@ kubeconfig_path = '/root/cdk/kubeconfig'
 kubeproxyconfig_path = '/root/cdk/kubeproxyconfig'
 kubeclientconfig_path = '/root/.kube/config'
 gcp_creds_env_key = 'GOOGLE_APPLICATION_CREDENTIALS'
+snap_resources = ['kubectl', 'kubelet', 'kube-proxy']
 
 os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')
 db = unitdata.kv()
@@ -59,11 +64,21 @@ db = unitdata.kv()
 
 @hook('upgrade-charm')
 def upgrade_charm():
+    # migrate to new flags
+    if is_state('kubernetes-worker.restarted-for-cloud'):
+        remove_state('kubernetes-worker.restarted-for-cloud')
+        set_state('kubernetes-worker.cloud.ready')
+    if is_state('kubernetes-worker.cloud-request-sent'):
+        # minor change, just for consistency
+        remove_state('kubernetes-worker.cloud-request-sent')
+        set_state('kubernetes-worker.cloud.request-sent')
+
     # Trigger removal of PPA docker installation if it was previously set.
     set_state('config.changed.install_from_upstream')
     hookenv.atexit(remove_state, 'config.changed.install_from_upstream')
 
     cleanup_pre_snap_services()
+    migrate_resource_checksums()
     check_resources_for_upgrade_needed()
 
     # Remove the RC for nginx ingress if it exists
@@ -88,12 +103,56 @@ def upgrade_charm():
     set_state('kubernetes-worker.restart-needed')
 
 
+def get_resource_checksum_db_key(resource):
+    ''' Convert a resource name to a resource checksum database key. '''
+    return 'kubernetes-worker.resource-checksums.' + resource
+
+
+def calculate_resource_checksum(resource):
+    ''' Calculate a checksum for a resource '''
+    md5 = hashlib.md5()
+    path = hookenv.resource_get(resource)
+    if path:
+        with open(path, 'rb') as f:
+            data = f.read()
+            md5.update(data)
+    return md5.hexdigest()
+
+
+def migrate_resource_checksums():
+    ''' Migrate resource checksums from the old schema to the new one '''
+    for resource in snap_resources:
+        new_key = get_resource_checksum_db_key(resource)
+        if not db.get(new_key):
+            path = hookenv.resource_get(resource)
+            if path:
+                # old key from charms.reactive.helpers.any_file_changed
+                old_key = 'reactive.files_changed.' + path
+                old_checksum = db.get(old_key)
+                db.set(new_key, old_checksum)
+            else:
+                # No resource is attached. Previously, this meant no checksum
+                # would be calculated and stored. But now we calculate it as if
+                # it is a 0-byte resource, so let's go ahead and do that.
+                zero_checksum = hashlib.md5().hexdigest()
+                db.set(new_key, zero_checksum)
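One detail of the migration above worth calling out: a missing resource is now recorded as the MD5 of zero bytes rather than no checksum at all, so a later attach is detected as a change. A small standalone sketch of that behavior (illustrative only):

```python
import hashlib

# MD5 of an empty byte string, i.e. what gets stored when no resource
# has ever been attached.
EMPTY_MD5 = hashlib.md5().hexdigest()
assert EMPTY_MD5 == 'd41d8cd98f00b204e9800998ecf8427e'

def checksum(data=b''):
    # A freshly attached resource with real content necessarily differs
    # from EMPTY_MD5, which is what triggers set_upgrade_needed().
    return hashlib.md5(data).hexdigest()

assert checksum() == EMPTY_MD5
assert checksum(b'kubelet snap bytes') != EMPTY_MD5
```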
 
 
 def check_resources_for_upgrade_needed():
     hookenv.status_set('maintenance', 'Checking resources')
-    resources = ['kubectl', 'kubelet', 'kube-proxy']
-    paths = [hookenv.resource_get(resource) for resource in resources]
-    if any_file_changed(paths):
-        set_upgrade_needed()
+    for resource in snap_resources:
+        key = get_resource_checksum_db_key(resource)
+        old_checksum = db.get(key)
+        new_checksum = calculate_resource_checksum(resource)
+        if new_checksum != old_checksum:
+            set_upgrade_needed()
+
+
+def calculate_and_store_resource_checksums():
+    for resource in snap_resources:
+        key = get_resource_checksum_db_key(resource)
+        checksum = calculate_resource_checksum(resource)
+        db.set(key, checksum)
 
 
 def set_upgrade_needed():
@@ -142,16 +201,8 @@ def channel_changed():
     set_upgrade_needed()
 
 
-@when('kubernetes-worker.snaps.upgrade-needed')
-@when_not('kubernetes-worker.snaps.upgrade-specified')
-def upgrade_needed_status():
-    msg = 'Needs manual upgrade, run the upgrade action'
-    hookenv.status_set('blocked', msg)
-
-
 @when('kubernetes-worker.snaps.upgrade-specified')
 def install_snaps():
-    check_resources_for_upgrade_needed()
     channel = hookenv.config('channel')
     hookenv.status_set('maintenance', 'Installing kubectl snap')
     snap.install('kubectl', channel=channel, classic=True)
@@ -159,6 +210,7 @@ def install_snaps():
     snap.install('kubelet', channel=channel, classic=True)
     hookenv.status_set('maintenance', 'Installing kube-proxy snap')
     snap.install('kube-proxy', channel=channel, classic=True)
+    calculate_and_store_resource_checksums()
     set_state('kubernetes-worker.snaps.installed')
     set_state('kubernetes-worker.restart-needed')
     remove_state('kubernetes-worker.snaps.upgrade-needed')
@@ -173,7 +225,7 @@ def shutdown():
     '''
     try:
         if os.path.isfile(kubeconfig_path):
-            kubectl('delete', 'node', gethostname().lower())
+            kubectl('delete', 'node', get_node_name())
     except CalledProcessError:
         hookenv.log('Failed to unregister node.')
     service_stop('snap.kubelet.daemon')
@@ -243,26 +295,73 @@ def set_app_version():
 
 
-@when('kubernetes-worker.snaps.installed')
-@when_not('kube-control.dns.available')
-def notify_user_transient_status():
-    ''' Notify to the user we are in a transient state and the application
-    is still converging. Potentially remotely, or we may be in a detached loop
-    wait state '''
-
-    # During deployment the worker has to start kubelet without cluster dns
-    # configured. If this is the first unit online in a service pool waiting
-    # to self host the dns pod, and configure itself to query the dns service
-    # declared in the kube-system namespace
-
-    hookenv.status_set('waiting', 'Waiting for cluster DNS.')
+@when('snap.refresh.set')
+@when('leadership.is_leader')
+def process_snapd_timer():
+    ''' Set the snapd refresh timer on the leader so all cluster members
+    (present and future) will refresh near the same time. '''
+    # Get the current snapd refresh timer; we know layer-snap has set this
+    # when the 'snap.refresh.set' flag is present.
+    timer = snap.get(snapname='core', key='refresh.timer').decode('utf-8')
+
+    # The first time through, data_changed will be true. Subsequent calls
+    # should only update leader data if something changed.
+    if data_changed('worker_snapd_refresh', timer):
+        hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+        leader_set({'snapd_refresh': timer})
 
 
-@when('kubernetes-worker.snaps.installed',
-      'kube-control.dns.available')
-@when_not('kubernetes-worker.snaps.upgrade-needed')
-def charm_status(kube_control):
+@when('kubernetes-worker.snaps.installed')
+@when('snap.refresh.set')
+@when('leadership.changed.snapd_refresh')
+@when_not('leadership.is_leader')
+def set_snapd_timer():
+    ''' Set the snapd refresh.timer on non-leader cluster members. '''
+    # NB: This method should only be run when 'snap.refresh.set' is present.
+    # Layer-snap will always set a core refresh.timer, which may not be the
+    # same as our leader. Gating with 'snap.refresh.set' ensures layer-snap
+    # has finished and we are free to set our config to the leader's timer.
+    timer = leader_get('snapd_refresh')
+    hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+    snap.set_refresh_timer(timer)
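Between them, process_snapd_timer and set_snapd_timer form the usual leader-publishes, followers-converge pattern for cluster-wide settings. The sketch below models just that data flow, with plain dicts standing in for Juju leadership data and the data_changed cache (illustrative, not charm code):

```python
published = {}   # stands in for Juju leadership data (leader_set/leader_get)
seen = {}        # stands in for charms.reactive's data_changed() cache

def leader_publish(timer):
    # Leader: publish the authoritative timer, but only when it changes.
    if seen.get('worker_snapd_refresh') != timer:
        seen['worker_snapd_refresh'] = timer
        published['snapd_refresh'] = timer

def follower_apply():
    # Follower: converge on whatever the leader published; the real
    # handler feeds this value to snap.set_refresh_timer().
    return published.get('snapd_refresh')

leader_publish('max')
leader_publish('max')            # repeat publishes are no-ops
assert follower_apply() == 'max'
```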
 
 
+@hookenv.atexit
+def charm_status():
     '''Update the status message with the current status of kubelet.'''
-    update_kubelet_status()
+    vsphere_joined = is_state('endpoint.vsphere.joined')
+    azure_joined = is_state('endpoint.azure.joined')
+    cloud_blocked = is_state('kubernetes-worker.cloud.blocked')
+    if vsphere_joined and cloud_blocked:
+        hookenv.status_set('blocked',
+                           'vSphere integration requires K8s 1.12 or greater')
+        return
+    if azure_joined and cloud_blocked:
+        hookenv.status_set('blocked',
+                           'Azure integration requires K8s 1.11 or greater')
+        return
+    if is_state('kubernetes-worker.cloud.pending'):
+        hookenv.status_set('waiting', 'Waiting for cloud integration')
+        return
+    if not is_state('kube-control.dns.available'):
+        # During deployment the worker has to start kubelet without cluster dns
+        # configured. If this is the first unit online in a service pool
+        # waiting to self host the dns pod, and configure itself to query the
+        # dns service declared in the kube-system namespace
+        hookenv.status_set('waiting', 'Waiting for cluster DNS.')
+        return
+    if is_state('kubernetes-worker.snaps.upgrade-specified'):
+        hookenv.status_set('waiting', 'Upgrade pending')
+        return
+    if is_state('kubernetes-worker.snaps.upgrade-needed'):
+        hookenv.status_set('blocked',
+                           'Needs manual upgrade, run the upgrade action')
+        return
+    if is_state('kubernetes-worker.snaps.installed'):
+        update_kubelet_status()
+        return
+    else:
+        pass  # will have been set by snap layer or other handler
 
 
 def update_kubelet_status():
@@ -347,6 +446,8 @@ def watch_for_changes(kube_api, kube_control, cni):
       'kube-control.dns.available', 'kube-control.auth.available',
       'cni.available', 'kubernetes-worker.restart-needed',
       'worker.auth.bootstrapped')
+@when_not('kubernetes-worker.cloud.pending',
+          'kubernetes-worker.cloud.blocked')
 def start_worker(kube_api, kube_control, auth_control, cni):
     ''' Start kubelet using the provided API and DNS info.'''
     servers = get_kube_api_servers(kube_api)
@@ -403,8 +504,8 @@ def sdn_changed():
 @when('kubernetes-worker.config.created')
 @when_not('kubernetes-worker.ingress.available')
 def render_and_launch_ingress():
-    ''' If configuration has ingress daemon set enabled, launch the ingress load
-    balancer and default http backend. Otherwise attempt deletion. '''
+    ''' If configuration has ingress daemon set enabled, launch the ingress
+    load balancer and default http backend. Otherwise attempt deletion. '''
     config = hookenv.config()
     # If ingress is enabled, launch the ingress controller
     if config.get('ingress'):
@@ -470,8 +571,9 @@ def apply_node_labels():
 
 
 @when_any('config.changed.kubelet-extra-args',
-          'config.changed.proxy-extra-args')
-def extra_args_changed():
+          'config.changed.proxy-extra-args',
+          'config.changed.kubelet-extra-config')
+def config_changed_requires_restart():
     set_state('kubernetes-worker.restart-needed')
@@ -592,6 +694,20 @@ def configure_kubernetes_service(service, base_args, extra_args_key):
     db.set(prev_args_key, args)
 
 
+def merge_kubelet_extra_config(config, extra_config):
+    ''' Updates config to include the contents of extra_config. This is done
+    recursively to allow deeply nested dictionaries to be merged.
+
+    This is destructive: it modifies the config dict that is passed in.
+    '''
+    for k, extra_config_value in extra_config.items():
+        if isinstance(extra_config_value, dict):
+            config_value = config.setdefault(k, {})
+            merge_kubelet_extra_config(config_value, extra_config_value)
+        else:
+            config[k] = extra_config_value
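merge_kubelet_extra_config is easiest to understand with a worked input: nested mappings are merged key by key, while scalars from the operator's extra config simply win. A standalone sketch (the input values are invented for illustration):

```python
def merge_kubelet_extra_config(config, extra_config):
    # Copy of the charm's helper above, for a self-contained demonstration.
    for k, extra_config_value in extra_config.items():
        if isinstance(extra_config_value, dict):
            config_value = config.setdefault(k, {})
            merge_kubelet_extra_config(config_value, extra_config_value)
        else:
            config[k] = extra_config_value

charm_config = {
    'clusterDomain': 'cluster.local',
    'authentication': {'anonymous': {'enabled': False}},
}
extra = {
    'clusterDomain': 'example.local',
    'authentication': {'webhook': {'enabled': True}},
}
merge_kubelet_extra_config(charm_config, extra)

# Nested mappings merge key by key; scalars from extra win outright.
assert charm_config == {
    'clusterDomain': 'example.local',
    'authentication': {'anonymous': {'enabled': False},
                       'webhook': {'enabled': True}},
}
```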
 
 
 def configure_kubelet(dns, ingress_ip):
     layer_options = layer.options('tls-client')
     ca_cert_path = layer_options.get('ca_certificate_path')
@@ -603,35 +719,93 @@ def configure_kubelet(dns, ingress_ip):
     kubelet_opts['kubeconfig'] = kubeconfig_path
     kubelet_opts['network-plugin'] = 'cni'
     kubelet_opts['v'] = '0'
-    kubelet_opts['address'] = '0.0.0.0'
-    kubelet_opts['port'] = '10250'
-    kubelet_opts['cluster-domain'] = dns['domain']
-    kubelet_opts['anonymous-auth'] = 'false'
-    kubelet_opts['client-ca-file'] = ca_cert_path
-    kubelet_opts['tls-cert-file'] = server_cert_path
-    kubelet_opts['tls-private-key-file'] = server_key_path
     kubelet_opts['logtostderr'] = 'true'
-    kubelet_opts['fail-swap-on'] = 'false'
     kubelet_opts['node-ip'] = ingress_ip
 
-    if (dns['enable-kube-dns']):
-        kubelet_opts['cluster-dns'] = dns['sdn-ip']
-
     # set --allow-privileged flag for kubelet
     kubelet_opts['allow-privileged'] = set_privileged()
 
-    if is_state('kubernetes-worker.gpu.enabled'):
-        hookenv.log('Adding '
-                    '--feature-gates=DevicePlugins=true '
-                    'to kubelet')
-        kubelet_opts['feature-gates'] = 'DevicePlugins=true'
-
     if is_state('endpoint.aws.ready'):
         kubelet_opts['cloud-provider'] = 'aws'
     elif is_state('endpoint.gcp.ready'):
         cloud_config_path = _cloud_config_path('kubelet')
         kubelet_opts['cloud-provider'] = 'gce'
         kubelet_opts['cloud-config'] = str(cloud_config_path)
+    elif is_state('endpoint.openstack.ready'):
+        cloud_config_path = _cloud_config_path('kubelet')
+        kubelet_opts['cloud-provider'] = 'openstack'
+        kubelet_opts['cloud-config'] = str(cloud_config_path)
+    elif is_state('endpoint.vsphere.joined'):
+        # vsphere just needs to be joined on the worker (vs 'ready')
+        cloud_config_path = _cloud_config_path('kubelet')
+        kubelet_opts['cloud-provider'] = 'vsphere'
+        # NB: vsphere maps node product-id to its uuid (no config file needed).
+        uuid_file = '/sys/class/dmi/id/product_uuid'
+        with open(uuid_file, 'r') as f:
+            uuid = f.read().strip()
+        kubelet_opts['provider-id'] = 'vsphere://{}'.format(uuid)
+    elif is_state('endpoint.azure.ready'):
+        azure = endpoint_from_flag('endpoint.azure.ready')
+        cloud_config_path = _cloud_config_path('kubelet')
+        kubelet_opts['cloud-provider'] = 'azure'
+        kubelet_opts['cloud-config'] = str(cloud_config_path)
+        kubelet_opts['provider-id'] = azure.vm_id
+
+    if get_version('kubelet') >= (1, 10):
+        # Put together the KubeletConfiguration data
+        kubelet_config = {
+            'apiVersion': 'kubelet.config.k8s.io/v1beta1',
+            'kind': 'KubeletConfiguration',
+            'address': '0.0.0.0',
+            'authentication': {
+                'anonymous': {
+                    'enabled': False
+                },
+                'x509': {
+                    'clientCAFile': ca_cert_path
+                }
+            },
+            'clusterDomain': dns['domain'],
+            'failSwapOn': False,
+            'port': 10250,
+            'tlsCertFile': server_cert_path,
+            'tlsPrivateKeyFile': server_key_path
+        }
+        if dns['enable-kube-dns']:
+            kubelet_config['clusterDNS'] = [dns['sdn-ip']]
+        if is_state('kubernetes-worker.gpu.enabled'):
+            kubelet_config['featureGates'] = {
+                'DevicePlugins': True
+            }
+
+        # Add kubelet-extra-config. This needs to happen last so that it
+        # overrides any config provided by the charm.
+        kubelet_extra_config = hookenv.config('kubelet-extra-config')
+        kubelet_extra_config = yaml.load(kubelet_extra_config)
+        merge_kubelet_extra_config(kubelet_config, kubelet_extra_config)
+
+        # Render the file and configure Kubelet to use it
+        os.makedirs('/root/cdk/kubelet', exist_ok=True)
+        with open('/root/cdk/kubelet/config.yaml', 'w') as f:
+            f.write('# Generated by kubernetes-worker charm, do not edit\n')
+            yaml.dump(kubelet_config, f)
+        kubelet_opts['config'] = '/root/cdk/kubelet/config.yaml'
+    else:
+        # NOTE: This is for 1.9. Once we've dropped 1.9 support, we can remove
+        # this whole block and the parent if statement.
+        kubelet_opts['address'] = '0.0.0.0'
+        kubelet_opts['anonymous-auth'] = 'false'
+        kubelet_opts['client-ca-file'] = ca_cert_path
+        kubelet_opts['cluster-domain'] = dns['domain']
+        kubelet_opts['fail-swap-on'] = 'false'
+        kubelet_opts['port'] = '10250'
+        kubelet_opts['tls-cert-file'] = server_cert_path
+        kubelet_opts['tls-private-key-file'] = server_key_path
+        if dns['enable-kube-dns']:
+            kubelet_opts['cluster-dns'] = dns['sdn-ip']
+        if is_state('kubernetes-worker.gpu.enabled'):
+            kubelet_opts['feature-gates'] = 'DevicePlugins=true'
+
+    if get_version('kubelet') >= (1, 11):
+        kubelet_opts['dynamic-config-dir'] = '/root/cdk/kubelet/dynamic-config'
 
     configure_kubernetes_service('kubelet', kubelet_opts, 'kubelet-extra-args')
@@ -696,6 +870,7 @@ def create_kubeconfig(kubeconfig, server, ca, key=None, certificate=None,
 
 
 @when_any('config.changed.default-backend-image',
+          'config.changed.ingress-ssl-chain-completion',
          'config.changed.nginx-image')
 @when('kubernetes-worker.config.created')
 def launch_default_ingress_controller():
@@ -738,17 +913,16 @@ def launch_default_ingress_controller():
         return
 
     # Render the ingress daemon set controller manifest
+    context['ssl_chain_completion'] = config.get(
+        'ingress-ssl-chain-completion')
     context['ingress_image'] = config.get('nginx-image')
     if context['ingress_image'] == "" or context['ingress_image'] == "auto":
-        if context['arch'] == 's390x':
-            context['ingress_image'] = \
-                "docker.io/cdkbot/nginx-ingress-controller-s390x:0.9.0-beta.13"
-        elif context['arch'] == 'arm64':
-            context['ingress_image'] = \
-                "k8s.gcr.io/nginx-ingress-controller-arm64:0.9.0-beta.15"
-        else:
-            context['ingress_image'] = \
-                "k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15"  # noqa
+        images = {'amd64': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1',  # noqa
+                  'arm64': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.16.1',  # noqa
+                  's390x': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller-s390x:0.16.1',  # noqa
+                  'ppc64el': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller-ppc64le:0.16.1',  # noqa
+                  }
+        context['ingress_image'] = images.get(context['arch'], images['amd64'])
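The table-driven replacement also changes the fallback behavior: where unknown architectures previously fell into a generic else branch, dict.get now defaults to the amd64 image explicitly. A tiny sketch of the lookup (image tags copied from the diff):

```python
images = {
    'amd64': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.1',
    'arm64': 'quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.16.1',
}

def ingress_image(arch):
    # dict.get with an explicit default: unknown architectures fall back
    # to the amd64 image instead of raising or needing an else branch.
    return images.get(arch, images['amd64'])

assert ingress_image('arm64').endswith('-arm64:0.16.1')
assert ingress_image('riscv64') == images['amd64']
```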
     if get_version('kubelet') < (1, 9):
         context['daemonset_api_version'] = 'extensions/v1beta1'
     else:
@@ -1008,10 +1182,20 @@ def missing_kube_control():
     missing.
 
     """
-    hookenv.status_set(
-        'blocked',
-        'Relate {}:kube-control kubernetes-master:kube-control'.format(
-            hookenv.service_name()))
+    try:
+        goal_state = hookenv.goal_state()
+    except NotImplementedError:
+        goal_state = {}
+
+    if 'kube-control' in goal_state.get('relations', {}):
+        hookenv.status_set(
+            'waiting',
+            'Waiting for kubernetes-master to become ready')
+    else:
+        hookenv.status_set(
+            'blocked',
+            'Relate {}:kube-control kubernetes-master:kube-control'.format(
+                hookenv.service_name()))
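hookenv.goal_state() only exists on Juju controllers new enough to support goal-state, hence the NotImplementedError fallback to an empty dict, which safely lands in the blocked branch. The decision itself is small enough to test in isolation; a hedged sketch (the hardcoded application name stands in for the real code's hookenv.service_name()):

```python
def kube_control_status(goal_state):
    # An empty goal_state (from controllers without goal-state support)
    # reads as 'relation missing' and lands in the blocked branch.
    if 'kube-control' in goal_state.get('relations', {}):
        return ('waiting', 'Waiting for kubernetes-master to become ready')
    return ('blocked', 'Relate kubernetes-worker:kube-control '
                       'kubernetes-master:kube-control')

assert kube_control_status({})[0] == 'blocked'
assert kube_control_status({'relations': {'kube-control': {}}})[0] == 'waiting'
```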
 
 
 @when('docker.ready')
@@ -1040,11 +1224,17 @@ def get_node_name():
     if is_state('endpoint.aws.ready'):
         cloud_provider = 'aws'
     elif is_state('endpoint.gcp.ready'):
-        cloud_provider = 'gcp'
+        cloud_provider = 'gce'
+    elif is_state('endpoint.openstack.ready'):
+        cloud_provider = 'openstack'
+    elif is_state('endpoint.vsphere.ready'):
+        cloud_provider = 'vsphere'
+    elif is_state('endpoint.azure.ready'):
+        cloud_provider = 'azure'
     if cloud_provider == 'aws':
-        return getfqdn()
+        return getfqdn().lower()
     else:
-        return gethostname()
+        return gethostname().lower()
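The .lower() calls added here matter because Kubernetes node names must be valid lowercase RFC 1123 names, while AWS private DNS hostnames can surface mixed case; and since shutdown() now deregisters via get_node_name(), registration and deregistration agree. A standalone sketch of the selection:

```python
from socket import getfqdn, gethostname

def node_name(cloud_provider=None):
    # AWS nodes register under their FQDN (the EC2 private DNS name);
    # everyone else uses the short hostname. Both are lowercased, since
    # Kubernetes compares node names case-sensitively and expects
    # lowercase RFC 1123 names.
    if cloud_provider == 'aws':
        return getfqdn().lower()
    return gethostname().lower()
```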
 
 
 class ApplyNodeLabelFailed(Exception):
@@ -1084,9 +1274,29 @@ def remove_label(label):
 
 
 @when_any('endpoint.aws.joined',
-          'endpoint.gcp.joined')
+          'endpoint.gcp.joined',
+          'endpoint.openstack.joined',
+          'endpoint.vsphere.joined',
+          'endpoint.azure.joined')
+@when_not('kubernetes-worker.cloud.ready')
+def set_cloud_pending():
+    k8s_version = get_version('kubelet')
+    k8s_1_11 = k8s_version >= (1, 11)
+    k8s_1_12 = k8s_version >= (1, 12)
+    vsphere_joined = is_state('endpoint.vsphere.joined')
+    azure_joined = is_state('endpoint.azure.joined')
+    if (vsphere_joined and not k8s_1_12) or (azure_joined and not k8s_1_11):
+        set_state('kubernetes-worker.cloud.blocked')
+    else:
+        remove_state('kubernetes-worker.cloud.blocked')
+    set_state('kubernetes-worker.cloud.pending')
 
 
 @when_any('endpoint.aws.joined',
-          'endpoint.gcp.joined')
+          'endpoint.gcp.joined',
+          'endpoint.azure.joined')
 @when('kube-control.cluster_tag.available')
-@when_not('kubernetes-worker.cloud-request-sent')
+@when_not('kubernetes-worker.cloud.request-sent')
 def request_integration():
     hookenv.status_set('maintenance', 'requesting cloud integration')
     kube_control = endpoint_from_flag('kube-control.cluster_tag.available')
@@ -1109,26 +1319,47 @@ def request_integration():
             'k8s-io-cluster-name': cluster_tag,
         })
         cloud.enable_object_storage_management()
-    set_state('kubernetes-worker.cloud-request-sent')
-    hookenv.status_set('waiting', 'waiting for cloud integration')
+    elif is_state('endpoint.azure.joined'):
+        cloud = endpoint_from_flag('endpoint.azure.joined')
+        cloud.tag_instance({
+            'k8s-io-cluster-name': cluster_tag,
+        })
+        cloud.enable_object_storage_management()
+        cloud.enable_instance_inspection()
+        cloud.enable_dns_management()
+    set_state('kubernetes-worker.cloud.request-sent')
+    hookenv.status_set('waiting', 'Waiting for cloud integration')
 
 
 @when_none('endpoint.aws.joined',
-           'endpoint.gcp.joined')
-def clear_requested_integration():
-    remove_state('kubernetes-worker.cloud-request-sent')
+           'endpoint.gcp.joined',
+           'endpoint.openstack.joined',
+           'endpoint.vsphere.joined',
+           'endpoint.azure.joined')
+def clear_cloud_flags():
+    remove_state('kubernetes-worker.cloud.pending')
+    remove_state('kubernetes-worker.cloud.request-sent')
+    remove_state('kubernetes-worker.cloud.blocked')
+    remove_state('kubernetes-worker.cloud.ready')
 
 
 @when_any('endpoint.aws.ready',
-          'endpoint.gcp.ready')
-@when_not('kubernetes-worker.restarted-for-cloud')
-def restart_for_cloud():
+          'endpoint.gcp.ready',
+          'endpoint.openstack.ready',
+          'endpoint.vsphere.ready',
+          'endpoint.azure.ready')
+@when_not('kubernetes-worker.cloud.blocked',
+          'kubernetes-worker.cloud.ready')
+def cloud_ready():
+    remove_state('kubernetes-worker.cloud.pending')
     if is_state('endpoint.gcp.ready'):
         _write_gcp_snap_config('kubelet')
-    set_state('kubernetes-worker.restarted-for-cloud')
-    set_state('kubernetes-worker.restart-needed')
+    elif is_state('endpoint.openstack.ready'):
+        _write_openstack_snap_config('kubelet')
+    elif is_state('endpoint.azure.ready'):
+        _write_azure_snap_config('kubelet')
+    set_state('kubernetes-worker.cloud.ready')
+    set_state('kubernetes-worker.restart-needed')  # force restart
 
 
 def _snap_common_path(component):
@@ -1176,6 +1407,37 @@ def _write_gcp_snap_config(component):
     daemon_env_path.write_text(daemon_env)
 
 
+def _write_openstack_snap_config(component):
+    # openstack requires additional credentials setup
+    openstack = endpoint_from_flag('endpoint.openstack.ready')
+
+    cloud_config_path = _cloud_config_path(component)
+    cloud_config_path.write_text('\n'.join([
+        '[Global]',
+        'auth-url = {}'.format(openstack.auth_url),
+        'username = {}'.format(openstack.username),
+        'password = {}'.format(openstack.password),
+        'tenant-name = {}'.format(openstack.project_name),
+        'domain-name = {}'.format(openstack.user_domain_name),
+    ]))
+
+
+def _write_azure_snap_config(component):
+    azure = endpoint_from_flag('endpoint.azure.ready')
+    cloud_config_path = _cloud_config_path(component)
+    cloud_config_path.write_text(json.dumps({
+        'useInstanceMetadata': True,
+        'useManagedIdentityExtension': True,
+        'subscriptionId': azure.subscription_id,
+        'resourceGroup': azure.resource_group,
+        'location': azure.resource_group_location,
+        'vnetName': azure.vnet_name,
+        'vnetResourceGroup': azure.vnet_resource_group,
+        'subnetName': azure.subnet_name,
+        'securityGroupName': azure.security_group_name,
+    }))
+
+
 def get_first_mount(mount_relation):
     mount_relation_list = mount_relation.mounts()
     if mount_relation_list and len(mount_relation_list) > 0:
@@ -176,3 +176,4 @@ spec:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
         - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
+        - --enable-ssl-chain-completion={{ ssl_chain_completion }}