# The intended namespace where the validation test will be run. This should be where the
# Rook-Ceph cluster will be installed.
namespace: "{{ .Namespace }}"

# These fields should be set to the names of the Network Attachment Definitions (NADs) that will be
# used for the Ceph cluster's public and cluster networks, respectively. Each should be a namespaced
# name in the form <namespace>/<name> if the NAD is defined in a different namespace from the
# CephCluster namespace. One or both of the fields may be set.
publicNetwork: "{{ .PublicNetwork }}"
clusterNetwork: "{{ .ClusterNetwork }}"
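# For example, with hypothetical NAD names (adjust these to the NADs defined for your cluster):
#   publicNetwork: "rook-ceph/public-net"
#   clusterNetwork: "rook-ceph/cluster-net"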

# This is the time to wait for resources to change to the expected state. For example, the time for
# the test web server to start, for test clients to become ready, or for test resources to be
# deleted. At its longest, this may need to reflect the time it takes for client pods to get address
# assignments and then for each client to determine that its network connection is stable.
# This should be at least 1 minute. 2 minutes or more is recommended.
resourceTimeout: "{{ .ResourceTimeout }}"
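# For example, a hypothetical value matching the 2-minute recommendation above (assuming Go-style
# duration strings are accepted):
#   resourceTimeout: "2m0s"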

# This is the time window in which validation clients are all expected to become "Ready" together.
# Validation clients are all started at approximately the same time, and they should all stabilize
# at approximately the same time. Once the first validation client becomes "Ready," the tool checks
# that all of the remaining clients become "Ready" before this threshold duration elapses.
# In networks that have connectivity issues, limited bandwidth, or high latency, clients will contend
# for network traffic with each other, causing some clients to randomly fail and become "Ready" later
# than others. These randomly-failing clients are considered "flaky." Adjust this value to reflect
# expectations for the underlying network. For fast and reliable networks, this can be set to a smaller
# value. For networks that are intended to be slow, this can be set to a larger value. Additionally, for
# very large Kubernetes clusters, it may take longer for all clients to start, and it therefore may take
# longer for all clients to become "Ready"; in that case, this value can be set slightly higher.
flakyThreshold: "{{ .FlakyThreshold }}"
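# For example, a hypothetical starting point for a fast, reliable network (assuming Go-style
# duration strings are accepted):
#   flakyThreshold: "30s"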

# The Nginx image which will be used for the web server and clients.
nginxImage: "{{ .NginxImage }}"
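# For example, a hypothetical image reference (substitute an Nginx image reachable from your cluster):
#   nginxImage: "nginxinc/nginx-unprivileged:stable-alpine"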

# Specify validation test config for groups of nodes. A Ceph cluster may span more than one type of
# node, but usually not more than 3. Node types defined here should not overlap with each other.
# Common examples of different node types are below:
# - General, converged nodes that are used for both Ceph daemons and user applications
# - Dedicated storage nodes for hosting Ceph storage daemons
# - Non-storage nodes running workloads other than Ceph (running only Ceph CSI plugin daemons)
# - Dedicated nodes for hosting Ceph arbiter mons
# - Dedicated nodes for hosting only Ceph core (RADOS) components: mons, mgrs, OSDs
# - Dedicated nodes for hosting non-core storage components: MDS, RGW, CSI provisioner, etc.
nodeTypes:
{{- range $type, $config := .NodeTypes }}
  {{ $type }}: # Unique name for this node type (lower-case alphanumeric characters and dashes only)
    # This should reflect the maximum number of OSDs that may run on any given node of this type
    # during a worst-case failure scenario. This may be a static number expected by the
    # StorageClassDeviceSets planned for the CephCluster. For portable OSDs, this may be larger to
    # account for OSDs being rescheduled during node failures.
    osdsPerNode: {{ $config.OSDsPerNode }}
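    # Worked example (hypothetical layout, not a recommendation): if each node of this type normally
    # runs 3 OSDs and portable OSDs from one failed node could be rescheduled onto it, plan for
    # 3 + 3 = 6:
    #   osdsPerNode: 6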
    # This should reflect the maximum number of Ceph and CSI daemons that may run on any given node
    # during a worst-case failure scenario. CSI requires 2 Multus addresses per plugin daemon.
    #
    # In the converged case, a Rook cluster with a single instance of all resource types might run
    # these daemons on a single node during worst-case failure:
    # 1 mon, 1 mgr, 1 MDS, 1 NFS server, 1 RGW, 1 rbdmirror, 1 cephfsmirror, plus
    # 1 CSI provisioner and 2 CSI plugins for all 3 CSI types (RBD, CephFS, NFS) -- 16 daemons
    #
    # A node that is running only client workloads would run only the CSI plugin daemons per node:
    # 2 CSI plugins for all 3 CSI types (RBD, CephFS, NFS) -- 6 daemons
    otherDaemonsPerNode: {{ $config.OtherDaemonsPerNode }}
    # Placement options for informing the validation tool how to identify this type of node.
    # Affinities are not supported because their behavior is too relaxed for ensuring a specific
    # number of daemons per node.
    placement:
      # Kubernetes Pod spec node selector for assigning validation Pods to this node type.
      # See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
      nodeSelector:
        {{- range $k, $v := $config.Placement.NodeSelector }}
        {{ $k }}: {{ $v }}
        {{- end }}
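        # Rendered example with a hypothetical node label (use labels that exist on your nodes):
        #   storage-node: "true"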
      # Kubernetes tolerations for assigning validation Pods to this node type.
      # See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
      tolerations:
        {{- range $idx, $toleration := $config.Placement.Tolerations }}
        - {{ $toleration.ToJSON }}
        {{- end }}
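        # Rendered example with a hypothetical taint; each toleration is emitted as a JSON object:
        #   - {"key":"storage-node","operator":"Exists","effect":"NoSchedule"}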
{{ end }}
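# For reference, a single rendered nodeTypes entry might look like the following. The values are
# hypothetical and shown only to illustrate the final YAML shape (16 matches the converged
# worst-case daemon count described above):
#   nodeTypes:
#     converged:
#       osdsPerNode: 6
#       otherDaemonsPerNode: 16
#       placement:
#         nodeSelector:
#           storage-node: "true"
#         tolerations:
#         - {"key":"storage-node","operator":"Exists","effect":"NoSchedule"}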