```toml
concurrent = 1
check_interval = 1
log_level = "debug"
shutdown_timeout = 0
listen_address = ':9252'

[session_server]
  session_timeout = 1800

[[runners]]
  name = ""
  url = "https://gitlab.com/"
  id = 0
  token = "__REDACTED__"
  token_obtained_at = "0001-01-01T00:00:00Z"
  token_expires_at = "0001-01-01T00:00:00Z"
  executor = "kubernetes"
  shell = "bash"
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "alpine"
    namespace = ""
    namespace_overwrite_allowed = ""
    pod_labels_overwrite_allowed = ""
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    node_selector_overwrite_allowed = "kubernetes.io/arch=.*" # <--- allows overwrite of the architecture
```
```yaml
job:
  image: IMAGE_NAME
  variables:
    KUBERNETES_NODE_SELECTOR_ARCH: 'kubernetes.io/arch=amd64' # <--- select the architecture
```
Define a list of node affinities to add to a pod specification at build time. `node_affinities` does not determine which operating system a build should run with; only `node_selectors` does. For more information, see Operating system, architecture, and Windows kernel version.

Example configuration in the `config.toml`:
```toml
concurrent = 1

[[runners]]
  name = "myRunner"
  url = "gitlab.example.com"
  executor = "kubernetes"
  [runners.kubernetes]
    [runners.kubernetes.affinity]
      [runners.kubernetes.affinity.node_affinity]
        [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution]]
          weight = 100
          [runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference]
            [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference.match_expressions]]
              key = "cpu_speed"
              operator = "In"
              values = ["fast"]
            [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference.match_expressions]]
              key = "mem_speed"
              operator = "In"
              values = ["fast"]
        [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution]]
          weight = 50
          [runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference]
            [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference.match_expressions]]
              key = "core_count"
              operator = "In"
              values = ["high", "32"]
            [[runners.kubernetes.affinity.node_affinity.preferred_during_scheduling_ignored_during_execution.preference.match_fields]]
              key = "cpu_type"
              operator = "In"
              values = ["arm64"]
      [runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution]
        [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms]]
          [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms.match_expressions]]
            key = "kubernetes.io/e2e-az-name"
            operator = "In"
            values = [
              "e2e-az1",
              "e2e-az2"
            ]
```
Use pod affinity and anti-affinity to constrain the nodes your pod is eligible to be scheduled on, based on labels on other pods.

Example configuration in the `config.toml`:
```toml
concurrent = 1

[[runners]]
  name = "myRunner"
  url = "gitlab.example.com"
  executor = "kubernetes"
  [runners.kubernetes]
    [runners.kubernetes.affinity]
      [runners.kubernetes.affinity.pod_affinity]
        [[runners.kubernetes.affinity.pod_affinity.required_during_scheduling_ignored_during_execution]]
          topology_key = "failure-domain.beta.kubernetes.io/zone"
          namespaces = ["namespace_1", "namespace_2"]
          [runners.kubernetes.affinity.pod_affinity.required_during_scheduling_ignored_during_execution.label_selector]
            [[runners.kubernetes.affinity.pod_affinity.required_during_scheduling_ignored_during_execution.label_selector.match_expressions]]
              key = "security"
              operator = "In"
              values = ["S1"]
        [[runners.kubernetes.affinity.pod_affinity.preferred_during_scheduling_ignored_during_execution]]
          weight = 100
          [runners.kubernetes.affinity.pod_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term]
            topology_key = "failure-domain.beta.kubernetes.io/zone"
            [runners.kubernetes.affinity.pod_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector]
              [[runners.kubernetes.affinity.pod_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector.match_expressions]]
                key = "security_2"
                operator = "In"
                values = ["S2"]
      [runners.kubernetes.affinity.pod_anti_affinity]
        [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution]]
          topology_key = "failure-domain.beta.kubernetes.io/zone"
          namespaces = ["namespace_1", "namespace_2"]
          [runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector]
            [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.label_selector.match_expressions]]
              key = "security"
              operator = "In"
              values = ["S1"]
          [runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.namespace_selector]
            [[runners.kubernetes.affinity.pod_anti_affinity.required_during_scheduling_ignored_during_execution.namespace_selector.match_expressions]]
              key = "security"
              operator = "In"
              values = ["S1"]
        [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution]]
          weight = 100
          [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term]
            topology_key = "failure-domain.beta.kubernetes.io/zone"
            [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector]
              [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector.match_expressions]]
                key = "security_2"
                operator = "In"
                values = ["S2"]
            [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.namespace_selector]
              [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.namespace_selector.match_expressions]]
                key = "security_2"
                operator = "In"
                values = ["S2"]
```
Use container lifecycle hooks to run code configured for a handler when the corresponding lifecycle hook is executed. You can configure two types of hooks: `PreStop` and `PostStart`. Each of them allows only one type of handler to be set.

Example configuration in the `config.toml` file:
```toml
[[runners]]
  name = "kubernetes"
  url = "https://gitlab.example.com/"
  executor = "kubernetes"
  token = "yrnZW46BrtBFqM7xDzE7dddd"
  [runners.kubernetes]
    image = "alpine:3.11"
    privileged = true
    namespace = "default"
    [runners.kubernetes.container_lifecycle.post_start.exec]
      command = ["touch", "/builds/postStart.txt"]
    [runners.kubernetes.container_lifecycle.pre_stop.http_get]
      port = 8080
      host = "localhost"
      path = "/test"
      [[runners.kubernetes.container_lifecycle.pre_stop.http_get.http_headers]]
        name = "header_name_1"
        value = "header_value_1"
      [[runners.kubernetes.container_lifecycle.pre_stop.http_get.http_headers]]
        name = "header_name_2"
        value = "header_value_2"
```
Use the following settings to configure each lifecycle hook:
| Option | Type | Required | Description |
|---|---|---|---|
| `exec` | `KubernetesLifecycleExecAction` | No | `Exec` specifies the action to take. |
| `http_get` | `KubernetesLifecycleHTTPGet` | No | `HTTPGet` specifies the HTTP request to perform. |
| `tcp_socket` | `KubernetesLifecycleTcpSocket` | No | `TCPSocket` specifies an action involving a TCP port. |
`KubernetesLifecycleExecAction` options:

| Option | Type | Required | Description |
|---|---|---|---|
| `command` | `string` list | Yes | The command line to execute inside the container. |
`KubernetesLifecycleHTTPGet` options:

| Option | Type | Required | Description |
|---|---|---|---|
| `port` | `int` | Yes | The number of the port to access on the container. |
| `host` | string | No | The host name to connect to. Defaults to the pod IP. |
| `path` | string | No | The path to access on the HTTP server. |
| `scheme` | string | No | The scheme used for connecting to the host. Defaults to HTTP. |
| `http_headers` | `KubernetesLifecycleHTTPGetHeader` list | No | Custom headers to set in the request. |
`KubernetesLifecycleHTTPGetHeader` options:

| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | HTTP header name. |
| `value` | string | Yes | HTTP header value. |
`KubernetesLifecycleTcpSocket` options:

| Option | Type | Required | Description |
|---|---|---|---|
| `port` | `int` | Yes | The number of the port to access on the container. |
| `host` | string | No | The host name to connect to. Defaults to the pod IP. |
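For illustration, a minimal sketch of a `PreStop` hook that uses a `tcp_socket` handler instead of the `http_get` handler shown above. The TOML table path and port are assumptions based on the option names in these tables, not taken from the original examples:

```toml
# Hedged sketch: probe a TCP port before the container stops
[runners.kubernetes.container_lifecycle.pre_stop.tcp_socket]
  port = 8080
  host = "localhost"
```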
Use the following options to configure the DNS settings of the pods.
| Option | Type | Required | Description |
|---|---|---|---|
| `nameservers` | `string` list | No | A list of IP addresses that are used as DNS servers for the pod. |
| `options` | `KubernetesDNSConfigOption` | No | An optional list of objects where each object may have a `name` property (required) and a `value` property (optional). |
| `searches` | `string` list | No | A list of DNS search domains for hostname lookup in the pod. |
Example configuration in the `config.toml` file:
```toml
concurrent = 1
check_interval = 30

[[runners]]
  name = "myRunner"
  url = "https://gitlab.example.com"
  token = "__REDACTED__"
  executor = "kubernetes"
  [runners.kubernetes]
    image = "alpine:latest"
    [runners.kubernetes.dns_config]
      nameservers = [
        "1.2.3.4",
      ]
      searches = [
        "ns1.svc.cluster-domain.example",
        "my.dns.search.suffix",
      ]
      [[runners.kubernetes.dns_config.options]]
        name = "ndots"
        value = "2"
      [[runners.kubernetes.dns_config.options]]
        name = "edns0"
```
`KubernetesDNSConfigOption` options:

| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Configuration option name. |
| `value` | `*string` | No | Configuration option value. |
GitLab Runner drops the following capabilities by default. User-defined `cap_add` has priority over the default list of dropped capabilities. If you want to add a capability that is dropped by default, add it to `cap_add`:

- `NET_RAW`
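As a minimal sketch, re-adding the dropped `NET_RAW` capability in `config.toml` (the runner name and URL are placeholders):

```toml
[[runners]]
  name = "myRunner"
  url = "https://gitlab.example.com"
  executor = "kubernetes"
  [runners.kubernetes]
    # Re-add NET_RAW, which GitLab Runner drops by default
    cap_add = ["NET_RAW"]
```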
This feature is available in Kubernetes 1.7 and higher. Configure host aliases to instruct Kubernetes to add entries to the `/etc/hosts` file in the container.
Use the following options:
| Option | Type | Required | Description |
|---|---|---|---|
| `IP` | string | Yes | The IP address you want to attach hosts to. |
| `Hostnames` | `string` list | Yes | A list of host name aliases that are attached to the IP. |
Example configuration in the `config.toml` file:
```toml
concurrent = 4

[[runners]]
  # usual configuration
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.host_aliases]]
      ip = "127.0.0.1"
      hostnames = ["web1", "web2"]
    [[runners.kubernetes.host_aliases]]
      ip = "192.168.1.1"
      hostnames = ["web14", "web15"]
```
You can also configure host aliases by using the command-line parameter `--kubernetes-host_aliases` with JSON input. For example:

```shell
gitlab-runner register --kubernetes-host_aliases '[{"ip":"192.168.1.100","hostnames":["myservice.local"]},{"ip":"192.168.1.101","hostnames":["otherservice.local"]}]'
```
When the cache is used with the Kubernetes executor, a volume called `/cache` is mounted on the pod. During job execution, if cached data is needed, the runner checks if cached data is available. Cached data is available if a compressed file is available on the cache volume.

To set the cache volume, use the `cache_dir` setting in the `config.toml` file. After the job completes, the cached data is saved into the cache dir as a compressed file. The compressed file is then extracted into the `build` folder when a later job needs it.
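A minimal sketch of wiring the cache dir to the cache volume (the path is an assumption; match it to the volume mounted on the pod):

```toml
[[runners]]
  executor = "kubernetes"
  # Compressed cache archives are stored here; matches the /cache volume on the pod
  cache_dir = "/cache"
```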
You can mount the following volume types:

- `hostPath`
- `persistentVolumeClaim`
- `configMap`
- `secret`
- `emptyDir`
- `csi`
Example of a configuration with multiple volume types:
```toml
concurrent = 4

[[runners]]
  # usual configuration
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.host_path]]
      name = "hostpath-1"
      mount_path = "/path/to/mount/point"
      read_only = true
      host_path = "/path/on/host"
    [[runners.kubernetes.volumes.host_path]]
      name = "hostpath-2"
      mount_path = "/path/to/mount/point_2"
      read_only = true
    [[runners.kubernetes.volumes.pvc]]
      name = "pvc-1"
      mount_path = "/path/to/mount/point1"
    [[runners.kubernetes.volumes.config_map]]
      name = "config-map-1"
      mount_path = "/path/to/directory"
      [runners.kubernetes.volumes.config_map.items]
        "key_1" = "relative/path/to/key_1_file"
        "key_2" = "key_2"
    [[runners.kubernetes.volumes.secret]]
      name = "secrets"
      mount_path = "/path/to/directory1"
      read_only = true
      [runners.kubernetes.volumes.secret.items]
        "secret_1" = "relative/path/to/secret_1_file"
    [[runners.kubernetes.volumes.empty_dir]]
      name = "empty-dir"
      mount_path = "/path/to/empty_dir"
      medium = "Memory"
    [[runners.kubernetes.volumes.csi]]
      name = "csi-volume"
      mount_path = "/path/to/csi/volume"
      driver = "my-csi-driver"
      [runners.kubernetes.volumes.csi.volume_attributes]
        size = "2Gi"
```
`hostPath` volume

Configure the `hostPath` volume to instruct Kubernetes to mount a specified host path in the container.

Use the following options in the `config.toml` file:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume. |
| `mount_path` | string | Yes | The path where the volume is mounted in the container. |
| `sub_path` | string | No | The sub-path inside the mounted volume instead of its root. |
| `host_path` | string | No | The path on the host mounted as a volume. If you don't specify a value, it defaults to the same path as `mount_path`. |
| `read_only` | boolean | No | Sets the volume in read-only mode. Defaults to `false`. |
| `mount_propagation` | string | No | Share mounted volumes between containers. For more information, see Mount Propagation. |
`persistentVolumeClaim` volume

Configure the `persistentVolumeClaim` volume to instruct Kubernetes to use a `persistentVolumeClaim` defined in a Kubernetes cluster and mount it in the container.

Use the following options in the `config.toml` file:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume and, at the same time, the name of the `PersistentVolumeClaim` that should be used. Supports variables. For more information, see Persistent per-concurrency build volumes. |
| `mount_path` | string | Yes | Path in the container where the volume is mounted. |
| `read_only` | boolean | No | Sets the volume to read-only mode (defaults to false). |
| `sub_path` | string | No | Mount a sub-path in the volume instead of the root. |
| `mount_propagation` | string | No | Set the mount propagation mode for the volume. For more details, see Kubernetes mount propagation. |
`configMap` volume

Configure the `configMap` volume to instruct Kubernetes to use a `configMap` defined in a Kubernetes cluster and mount it in the container.

Use the following options in the `config.toml`:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume and, at the same time, the name of the `configMap` that should be used. |
| `mount_path` | string | Yes | Path in the container where the volume is mounted. |
| `read_only` | boolean | No | Sets the volume to read-only mode (defaults to false). |
| `sub_path` | string | No | Mount a sub-path in the volume instead of the root. |
| `items` | `map[string]string` | No | Key-to-path mapping for keys from the `configMap` that should be used. |
Each key from the `configMap` is changed into a file and stored in the mount path. By default, the `configMap` key is used as the filename. To change the default key and value storage, use the `items` option. If you use the `items` option, only specified keys are added to the volumes and all other keys are skipped. If you use a key that doesn't exist, the job fails at the pod creation stage.
`secret` volume

Configure a `secret` volume to instruct Kubernetes to use a `secret` defined in a Kubernetes cluster and mount it in the container.

Use the following options in the `config.toml` file:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume and, at the same time, the name of the `secret` that should be used. |
| `mount_path` | string | Yes | Path inside the container where the volume should be mounted. |
| `read_only` | boolean | No | Sets the volume in read-only mode (defaults to false). |
| `sub_path` | string | No | Mount a sub-path in the volume instead of the root. |
| `items` | `map[string]string` | No | Key-to-path mapping for keys from the `secret` that should be used. |
Each key from the selected `secret` is changed into a file stored in the selected mount path. By default, the `secret` key is used as the filename. To change the default key and value storage, use the `items` option. If you use the `items` option, only specified keys are added to the volumes and all other keys are skipped. If you use a key that doesn't exist, the job fails at the pod creation stage.
`emptyDir` volume

Configure an `emptyDir` volume to instruct Kubernetes to mount an empty directory in the container.

Use the following options in the `config.toml` file:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume. |
| `mount_path` | string | Yes | Path inside the container where the volume should be mounted. |
| `sub_path` | string | No | Mount a sub-path in the volume instead of the root. |
| `medium` | string | No | "Memory" provides a `tmpfs`, otherwise it defaults to the node disk storage (defaults to ""). |
| `size_limit` | string | No | The total amount of local storage required for the `emptyDir` volume. |
`csi` volume

Configure a Container Storage Interface (`csi`) volume to instruct Kubernetes to use a custom `csi` driver to mount an arbitrary storage system in the container.

Use the following options in the `config.toml`:
| Option | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | The name of the volume. |
| `mount_path` | string | Yes | Path inside the container where the volume should be mounted. |
| `driver` | string | Yes | A string value that specifies the name of the volume driver to use. |
| `fs_type` | string | No | A string value that specifies the name of the file system type (for example, `ext4`, `xfs`, `ntfs`). |
| `volume_attributes` | `map[string]string` | No | Key-value pair mapping for attributes of the `csi` volume. |
| `sub_path` | string | No | Mount a sub-path in the volume instead of the root. |
| `read_only` | boolean | No | Sets the volume in read-only mode (defaults to false). |
Volumes defined for the build container are also automatically mounted for all services containers. You can use this functionality as an alternative to `services_tmpfs` (available only to the Docker executor) to mount database storage in RAM to speed up tests.

Example configuration in the `config.toml` file:
```toml
[[runners]]
  # usual configuration
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.empty_dir]]
      name = "mysql-tmpfs"
      mount_path = "/var/lib/mysql"
      medium = "Memory"
```
To store the builds directory for the job, define custom volume mounts to the configured `builds_dir` (`/builds` by default). If you use `pvc` volumes, based on the access mode, you might be limited to running jobs on one node.

Example configuration in the `config.toml` file:
```toml
concurrent = 4

[[runners]]
  # usual configuration
  executor = "kubernetes"
  builds_dir = "/builds"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.empty_dir]]
      name = "repo"
      mount_path = "/builds"
      medium = "Memory"
```
Variable support in `pvc.name` introduced in GitLab 16.3.
The build directories in Kubernetes CI jobs are ephemeral by default. If you want to persist your Git clone across jobs (to make `GIT_STRATEGY=fetch` work), you must mount a persistent volume claim for your build folder. Because multiple jobs can run concurrently, you must either use a `ReadWriteMany` volume, or have one volume for each potential concurrent job on the same runner. The latter is likely to be more performant. Here is an example of such a configuration:
```toml
concurrent = 4

[[runners]]
  executor = "kubernetes"
  builds_dir = "/mnt/builds"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.pvc]]
      # CI_CONCURRENT_ID identifies parallel jobs of the same runner.
      name = "build-pvc-$CI_CONCURRENT_ID"
      mount_path = "/mnt/builds"
```
In this example, create the persistent volume claims named `build-pvc-0` to `build-pvc-3` yourself. Create as many as the runner's `concurrent` setting dictates.
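A hedged sketch of one such claim (the storage class and size are assumptions; repeat for `build-pvc-1` through `build-pvc-3`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-pvc-0
spec:
  accessModes:
    - ReadWriteOnce        # one volume per concurrent job, so ReadWriteMany is not needed
  storageClassName: standard   # assumption: adjust to your cluster
  resources:
    requests:
      storage: 10Gi        # assumption: size to fit your repository and build artifacts
```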
After you set the security policy, the helper image must conform to the policy. The image does not receive privileges from the root group, so you must ensure that the user ID is part of the root group.
If you only need the `nonroot` environment, you can use the GitLab Runner UBI OpenShift Container Platform images instead of a helper image. You can also use the GitLab Runner Helper UBI OpenShift Container Platform images.
The following example creates a user and group called `nonroot` and sets the helper image to run as that user.
```dockerfile
ARG tag
FROM registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-helper-ocp:${tag}
USER root
RUN groupadd -g 59417 nonroot && \
    useradd -u 59417 nonroot -g nonroot
WORKDIR /home/nonroot
USER 59417:59417
```
When you use Docker in your builds, there are several considerations you should be aware of.

Exposing `/var/run/docker.sock`

There is risk involved if you use the `runners.kubernetes.volumes.host_path` option to expose `/var/run/docker.sock` of your host into your build container. Be careful when you run builds in the same cluster as your production containers. The node's containers are accessible from the build container.
Using `docker:dind`

If you run the `docker:dind` image, also called docker-in-docker, containers must run in privileged mode. This may have potential risks and cause additional issues.

The Docker daemon runs as a separate container in the pod because it is started as a `service`, typically in the `.gitlab-ci.yml` file. Containers in pods only share volumes assigned to them and an IP address, which they use to communicate with each other over `localhost`. The `docker:dind` container does not share `/var/run/docker.sock`, and the `docker` binary tries to use it by default.
To configure the client to use TCP to contact the Docker daemon in the other container, include the environment variables of the build container:

- `DOCKER_HOST=tcp://docker:2375` for a non-TLS connection.
- `DOCKER_HOST=tcp://docker:2376` for a TLS connection.

In Docker 19.03 and later, TLS is enabled by default but you must map certificates to your client. You can enable a non-TLS connection for Docker-in-Docker or mount certificates. For more information, see Use Docker In Docker Workflow with Docker executor.
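For illustration, a minimal `.gitlab-ci.yml` sketch of a non-TLS docker-in-docker job (the image tags are assumptions; setting `DOCKER_TLS_CERTDIR` to an empty string disables TLS in the dind service):

```yaml
build:
  image: docker:24.0
  services:
    - docker:24.0-dind
  variables:
    # Talk to the daemon in the dind service container over TCP, without TLS
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker info
```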
If you use `docker:dind` or `/var/run/docker.sock`, the Docker daemon has access to the underlying kernel of the host machine. This means that any `limits` set in the pod do not work when Docker images are built. The Docker daemon reports the full capacity of the node, regardless of limits imposed on the Docker build containers spawned by Kubernetes.
If you run build containers in privileged mode, or if `/var/run/docker.sock` is exposed, the host kernel may become exposed to build containers. To minimize exposure, specify a label in the `node_selector` option. This ensures that the node matches the labels before any containers can be deployed to the node. For example, if you specify the label `role=ci`, the build containers only run on nodes labeled `role=ci`, and all other production services run on other nodes.

To further separate build containers, you can use node taints. Taints prevent other pods from scheduling on the same nodes as the build pods, without extra configuration for the other pods.
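As a hedged sketch of that approach (the node name and taint key are assumptions; build pods then need a matching toleration):

```shell
# Taint a CI node so that only pods tolerating role=ci:NoSchedule land on it
kubectl taint nodes ci-node-1 role=ci:NoSchedule
```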
You can restrict the Docker images that are used to run your jobs. To do this, you specify wildcard patterns. For example, to allow images from your private Docker registry only:
```toml
[[runners]]
  (...)
  executor = "kubernetes"
  [runners.kubernetes]
    (...)
    allowed_images = ["my.registry.tld:5000/*:*"]
    allowed_services = ["my.registry.tld:5000/*:*"]
```
Or, to restrict to a specific list of images from this registry:
```toml
[[runners]]
  (...)
  executor = "kubernetes"
  [runners.kubernetes]
    (...)
    allowed_images = ["my.registry.tld:5000/ruby:*", "my.registry.tld:5000/node:*"]
    allowed_services = ["postgres:9.4", "postgres:latest"]
```
In the `.gitlab-ci.yml` file, you can specify a pull policy. This policy determines how a CI/CD job should fetch images. To restrict which pull policies can be used from those specified in the `.gitlab-ci.yml` file, use `allowed_pull_policies`. For example, to allow only the `always` and `if-not-present` pull policies:
```toml
[[runners]]
  (...)
  executor = "kubernetes"
  [runners.kubernetes]
    (...)
    allowed_pull_policies = ["always", "if-not-present"]
```
- If you don't specify `allowed_pull_policies`, the default is the value in the `pull_policy` keyword.
- If you don't specify `pull_policy`, the cluster's image default pull policy is used.

When a job uses both `pull_policy` and `allowed_pull_policies`, the effective pull policy is determined by comparing the policies in the `pull_policy` keyword and `allowed_pull_policies`. GitLab uses the intersection of these two policy lists.
For example, if `pull_policy` is `["always", "if-not-present"]` and `allowed_pull_policies` is `["if-not-present"]`, the job uses only `if-not-present` because it's the only pull policy defined in both lists. The `pull_policy` keyword must include at least one pull policy specified in `allowed_pull_policies`. The job fails if none of the `pull_policy` values match `allowed_pull_policies`.
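For illustration, a hedged `.gitlab-ci.yml` sketch that stays within `allowed_pull_policies = ["always", "if-not-present"]` (the image name is a placeholder):

```yaml
job:
  image:
    name: ruby:3.1
    # Intersection with allowed_pull_policies is non-empty, so the job runs
    pull_policy: [always, if-not-present]
  script:
    - bundle install
```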
GitLab Runner uses `kube attach` instead of `kube exec` by default. This should avoid problems like when a job is marked successful midway in environments with an unstable network. Follow issue #27976 for progress on legacy execution strategy removal.
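If you need the legacy `kube exec` strategy while it is still available, a minimal sketch is to set the feature flag in the runner environment (one common placement; adapt to your setup):

```toml
[[runners]]
  executor = "kubernetes"
  # Opt back into the legacy kube exec execution strategy
  environment = ["FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=true"]
```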
By default, the Kubernetes executor retries specific requests to the Kubernetes API after five failed attempts. The delay is controlled by a backoff algorithm with a 500-millisecond floor and a customizable ceiling with a default value of two seconds. To configure the number of retries, use the `retry_limit` option in the `config.toml` file. Similarly, for the backoff ceiling, use the `retry_backoff_max` option.
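A minimal sketch of both options in `config.toml` (the values are assumptions; the ceiling is given in milliseconds, matching the two-second default described above):

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    retry_limit = 8          # retry failed API requests up to 8 times instead of 5
    retry_backoff_max = 5000 # raise the backoff ceiling from 2 s to 5 s
```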
The following failures are automatically retried:

- `error dialing backend`
- `TLS handshake timeout`
- `read: connection timed out`
- `connect: connection timed out`
- `Timeout occurred`
- `http2: client connection lost`
- `connection refused`
- `tls: internal error`
- `io.unexpected EOF`
- `syscall.ECONNRESET`
- `syscall.ECONNREFUSED`
- `syscall.ECONNABORTED`
- `syscall.EPIPE`
To control the number of retries for each error, use the `retry_limits` option. The `retry_limits` option specifies the number of retries for each error separately, and is a map of error messages to the number of retries. The error message can be a substring of the error message returned by the Kubernetes API. The `retry_limits` option has precedence over the `retry_limit` option.

For example, configure the `retry_limits` option to retry TLS-related errors in your environment 10 times instead of the default five times:
```toml
[[runners]]
  name = "myRunner"
  url = "https://gitlab.example.com/"
  executor = "kubernetes"
  [runners.kubernetes]
    retry_limit = 5
    [runners.kubernetes.retry_limits]
      "TLS handshake timeout" = 10
      "tls: internal error" = 10
```
To retry an entirely different error, such as `exceeded quota`, 20 times:
```toml
[[runners]]
  name = "myRunner"
  url = "https://gitlab.example.com/"
  executor = "kubernetes"
  [runners.kubernetes]
    retry_limit = 5
    [runners.kubernetes.retry_limits]
      "exceeded quota" = 20
```
In GitLab Runner 15.0, the entrypoint defined in a Docker image is used with the Kubernetes executor when `kube attach` is used. In GitLab 15.1 and later, the entrypoint defined in a Docker image is used with the Kubernetes executor when `FF_KUBERNETES_HONOR_ENTRYPOINT` is set.
The container entrypoint has the following known issues:

- File type CI/CD variables are not written to disk when the entrypoint is executed. The file is only accessible in the job during script execution.
- Some CI/CD variables are not accessible in the entrypoint. You can use `before_script` to make any setup changes before running script commands.
- Before GitLab Runner 17.4: Regardless of whether `kube attach` or `kube exec` was used, GitLab Runner did not wait for the entrypoint to open a shell (see above).
Starting with GitLab Runner 17.4, the entrypoint logs are forwarded. The system waits for the entrypoint to run and spawn the shell. This has the following implications:

- If `FF_KUBERNETES_HONOR_ENTRYPOINT` is set, and the image's entrypoint takes longer than `poll_timeout` (default: 180 s), the build fails. The `poll_timeout` value (and potentially `poll_interval`) must be adapted if the entrypoint is expected to run longer.
- If `FF_KUBERNETES_HONOR_ENTRYPOINT` and `FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY` are set, the system adds a startup probe to the build container so that it knows when the entrypoint is spawning the shell. If a custom entrypoint uses the provided `args` to spawn the expected shell, the startup probe is resolved automatically. However, if the container image spawns the shell without using the command passed in through `args`, the entrypoint must resolve the startup probe itself by creating a file named `.gitlab-startup-marker` inside the root of the build directory.

The startup probe checks every `poll_interval` for the `.gitlab-startup-marker` file. If the file is not present within `poll_timeout`, the pod is considered unhealthy and the system aborts the build.
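As an illustration, a hedged sketch of a custom entrypoint that spawns its own shell and therefore resolves the startup probe itself (the build directory path is an assumption; adjust it to your `builds_dir`):

```shell
#!/bin/sh
# ... custom setup work the entrypoint needs to do ...

# Signal the startup probe that the shell is about to become available.
touch /builds/.gitlab-startup-marker

# Spawn the shell without using the args passed to the container.
exec /bin/bash
```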
When using the Kubernetes executor, users with access to the Kubernetes cluster can read variables used in the job. By default, job variables are stored in the pod's environment section.
To restrict access to job variable data, use role-based access control (RBAC). When you use RBAC, only GitLab administrators have access to the namespace used by GitLab Runner. If you need other users to access the GitLab Runner namespace, set the following verbs to restrict their access in the GitLab Runner namespace:
- For `pods` and `configmaps`, use `get`, `watch`, and `list`.
- For `pods/exec` and `pods/attach`, use `create`.
Example RBAC definition for authorized users:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-runner-authorized-users
rules:
- apiGroups: [""]
  resources: ["configmaps", "pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods/exec", "pods/attach"]
  verbs: ["create"]
```
Prerequisites:

- `image_pull_secrets` or `service_account` is set.
- `resource_availability_check_max_attempts` is set to a number greater than zero.
- The Kubernetes `serviceAccount` used has the `get` and `list` permissions.
- GitLab Runner checks if the new service accounts or secrets are available, with a 5-second interval between each try.
- The number of attempts defaults to `5` when a negative value is set.
- To enable this feature, set `resource_availability_check_max_attempts` to any value other than `0`. The value you set defines the number of times the runner checks for service accounts or secrets.
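A minimal sketch of this in `config.toml` (the secret name is a placeholder):

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    image_pull_secrets = ["my-registry-secret"]  # placeholder secret name
    # Check for the secret/service account up to 10 times, 5 seconds apart
    resource_availability_check_max_attempts = 10
```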
Prerequisites:

- In the `values.yml` file for GitLab Runner Helm charts, `rbac.clusterWideAccess` is set to `true`.

You can overwrite Kubernetes namespaces to designate a namespace for CI purposes, and deploy a custom set of pods to it.
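As a hedged sketch (the regular expression and variable value are assumptions), the runner can allow jobs to pick a `ci-*` namespace through `KUBERNETES_NAMESPACE_OVERWRITE`:

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "ci"
    # Jobs may overwrite the namespace when it matches this regular expression
    namespace_overwrite_allowed = "ci-.*"
```

A job then sets the variable in `.gitlab-ci.yml`:

```yaml
variables:
  KUBERNETES_NAMESPACE_OVERWRITE: ci-my-project
```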