If the Dockerfile needs to run some command, that step fails unless
QEMU is set up properly first:
    failed to solve: rpc error: code = Unknown desc = failed to load
    LLB: runtime execution on platform linux/ppc64le not supported
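One common way to register the QEMU binfmt handlers on the build host
is the multiarch/qemu-user-static helper image; this is a sketch of
that general technique, not necessarily the exact invocation used in
our CI:

    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes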
The approach taken here extends the existing support for
cross-compiling binaries on the build host and specifying the Go
compiler: Go is installed if needed (as in Prow testing), binaries are
built on the host, then one image is created for each platform, and
finally those are combined into a single multi-architecture image.
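As a rough sketch of those last two steps (image names and Dockerfiles
are illustrative, and "docker manifest" requires the experimental
docker CLI):

    # one image per platform
    docker build -t example.com/driver:canary-amd64 .
    docker build -t example.com/driver:canary-ppc64le -f Dockerfile.ppc64le .
    # combine them into a single multi-architecture image
    docker manifest create example.com/driver:canary \
        example.com/driver:canary-amd64 \
        example.com/driver:canary-ppc64le
    docker manifest push example.com/driver:canary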
Developers should not be forced to build for all platforms by
default. We also don't want to copy-and-paste the go invocation for
each new platform.
To address both, the target platform(s) are now configurable via
BUILD_PLATFORMS, and additional platforms are only enabled in Prow CI.
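A minimal sketch of the idea, assuming a semicolon-separated list of
"os arch" pairs; the actual value format and the package path are
whatever the repo's build rules define:

    BUILD_PLATFORMS="linux amd64; linux ppc64le"
    echo "$BUILD_PLATFORMS" | tr ';' '\n' | while read -r os arch; do
        # cross-compile on the host, one binary per platform
        CGO_ENABLED=0 GOOS="$os" GOARCH="$arch" \
            go build -o "bin/hostpathplugin-$os-$arch" ./cmd/hostpathplugin
    done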
For now this serves as a test that the source actually compiles for
multiple platforms. Building images for different target platforms is a
different problem.
The final 1.3.0 release of the hostpath driver really uses the 1.3.0
driver image in its deployment, in contrast to the previous -rc
releases, which still used 1.2.0.
This relies on a slightly different deployment script: a "deploy.sh"
must exist which knows that it has to dump a test driver configuration
into the file pointed to by CSI_PROW_TEST_DRIVER, if that env
variable is set.
That way, we no longer need to know what capabilities the installed
driver has.
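A minimal sketch of such a deploy.sh hook, assuming the YAML format
expected by the Kubernetes external storage e2e tests; the driver name
and capabilities here are illustrative:

    if [ -n "${CSI_PROW_TEST_DRIVER}" ]; then
        cat > "${CSI_PROW_TEST_DRIVER}" <<EOF
    ShortName: csiprow
    StorageClass:
      FromName: true
    DriverInfo:
      Name: hostpath.csi.k8s.io
      Capabilities:
        persistence: true
    EOF
    fi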
This requires adding one more parallel e2e test run with
a special focus flag because snapshot tests are still guarded
with a "[Feature:VolumeSnapshotDataSource]" tag. The setting that
skips all tests with "[Feature:.*]" has to be removed because it
overrides the focus.
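Sketched as two ginkgo invocations (flags abbreviated; the actual
wrapper code in prow.sh differs):

    # normal parallel run: all [Feature:...] tests are skipped
    ginkgo -p -focus='External.Storage' \
        -skip='\[Feature:.*\]|\[Disruptive\]|\[Serial\]' \
        e2e.test -- -storage.testdriver="$CSI_PROW_TEST_DRIVER"
    # extra parallel run just for the snapshot feature
    ginkgo -p -focus='External.Storage.*\[Feature:VolumeSnapshotDataSource\]' \
        -skip='\[Disruptive\]|\[Serial\]' \
        e2e.test -- -storage.testdriver="$CSI_PROW_TEST_DRIVER"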
We don't have serial snapshot tests yet. This needs to be modified
again if we add any in the future.
Inside a real Prow job it is better to clean up at runtime instead of
leaving that to the Prow job cleanup code, because the latter sometimes
times out.
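A sketch of runtime cleanup via a shell trap (the cluster name is
illustrative):

    trap 'kind delete cluster --name csi-prow || true' EXIT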
Signed-off-by: Mucahit Kurt <mucahitkurt@gmail.com>
There are three kind-related changes made to prow.sh (sketched as
shell commands after the list):
1. Use a master commit of kind that includes the fix for Kubernetes
master.
2. Use a full git clone instead of a shallow git checkout to fetch the
Kubernetes source. This lets kind correctly figure out the Kubernetes
release tag.
3. Build kind with make install. The kind fix was not working correctly
when built with go build.
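Roughly, as shell commands (the commit placeholder and paths are
illustrative):

    # 1. kind from a master commit that includes the fix
    git clone https://github.com/kubernetes-sigs/kind
    cd kind && git checkout <master-commit-with-the-fix>
    # 3. build via the Makefile instead of plain "go build"
    make install
    # 2. full clone instead of a shallow checkout, so that kind can
    #    derive the Kubernetes release tag from the git history
    git clone https://github.com/kubernetes/kubernetes \
        "$GOPATH/src/k8s.io/kubernetes"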
Kubernetes is deployed with the specific patch versions that kind
0.4.0 supports. Also, feature
gate setting is only supported on 1.15+ due to
kind.sigs.k8s.io/v1alpha3 and kubeadm.k8s.io/v1beta2 dependencies.
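A sketch of how a feature gate could be passed through on 1.15+, using
the APIs mentioned above (the gate and file names are illustrative):

    cat > kind-config.yaml <<EOF
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    kubeadmConfigPatches:
    - |
      apiVersion: kubeadm.k8s.io/v1beta2
      kind: ClusterConfiguration
      metadata:
        name: config
      apiServer:
        extraArgs:
          feature-gates: "VolumeSnapshotDataSource=true"
    EOF
    kind create cluster --config kind-config.yaml --image kindest/node:v1.15.0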
By moving the code into a separate function, other CSI drivers have a
chance to override it. For the hostpath driver itself we need the
ability to set the driver name depending on which revision is being
installed.
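For example, a default hook along these lines (all names illustrative,
deploy_driver is a hypothetical helper) can be redefined by a driver
repo before it runs:

    install_hostpath () {
        local version="$1"
        case "$version" in
            v1.0.*) driver_name=csi-hostpath;;        # older revisions
            *)      driver_name=hostpath.csi.k8s.io;; # current revisions
        esac
        deploy_driver "$driver_name"
    }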
Whether sanity testing is useful varies by component. For example,
csi-driver-host-path enables it because it
makes sense there (and only there). Letting the prow.sh script decide
whether it actually runs simplifies the job definitions in test-infra.
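A sketch of the opt-in, with variable and helper names illustrative:

    # in the component's Prow wrapper: opt in to sanity testing
    CSI_PROW_TESTS="sanity parallel"
    # in prow.sh: only run it when enabled
    if tests_enabled "sanity"; then
        run_sanity_tests
    fi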
The previous logic failed for canary jobs, which also deploy a recent
driver. Instead of guessing what driver gets installed based on job
parameters, check what really runs in the cluster and base the
decision on that.
We only need to maintain this blacklist for 1.0.x until we replace it
with 1.1.0; then this entire hostpath_supports_block can be removed.
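A sketch of checking the cluster instead of guessing, with label,
image pattern, and logic all illustrative; hostpath_supports_block is
the function named above:

    hostpath_supports_block () {
        local image
        image=$(kubectl get pods -l app=csi-hostpathplugin \
            -o jsonpath='{.items[0].spec.containers[*].image}')
        case "$image" in
            *hostpathplugin:v1.0.*) return 1;; # 1.0.x: no block support
            *) return 0;;
        esac
    }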
This ensures that new, currently unknown alpha gates are also enabled
when testing against future Kubernetes versions. For all currently
known Kubernetes versions we just use the minimal set of alpha gates,
which ensures that we don't miss any of them in our documentation.
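A minimal sketch of that policy; the gate lists are illustrative, only
the intent (minimal sets for known versions, everything for future
ones) matters:

    case "$CSI_PROW_KUBERNETES_VERSION" in
        1.14.*) gates="VolumeSnapshotDataSource=true";;
        1.15.*) gates="VolumeSnapshotDataSource=true,ExpandCSIVolumes=true";;
        *)      gates="AllAlpha=true";; # future: unknown gates included
    esac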
Instead of always using the latest E2E tests for all Kubernetes
versions, the plan now is to use the tests that match the Kubernetes
version. However, for 1.13 we have to make an exception because the
suite for that version did not support the necessary
--storage.testdriver parameter.
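Sketch of the version matching with that exception (variable names
illustrative):

    case "$CSI_PROW_KUBERNETES_VERSION" in
        1.13.*) e2e_branch="master";; # release-1.13 lacks --storage.testdriver
        latest) e2e_branch="master";;
        *)      e2e_branch="release-${CSI_PROW_KUBERNETES_VERSION%.*}";;
    esac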
While switching back and forth between release-1.13 and release-1.14
locally, the local kind image kept using the wrong kubelet binary
despite rebuilding from source. The problem went away after cleaning
the Bazel cache. The exact root cause is unknown, but perhaps using
unique tags and properly cleaning the repo helps. If not, the unique
tags will at least show what the problem is.
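For example, a unique tag per build (the scheme is illustrative) makes
it obvious which image a cluster actually runs:

    tag="csiprow/node:$(date +%s)"
    kind build node-image --image "$tag" \
        --kube-root "$GOPATH/src/k8s.io/kubernetes"
    kind create cluster --image "$tag"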