
Commit e839bf7

Fix spelling mistake in scheduling section
1 parent: 33dcba8

10 files changed: +29 / -29 lines

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 6 additions & 6 deletions
@@ -254,13 +254,13 @@ the node label that the system uses to denote the domain. For examples, see
 [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).

 {{< note >}}
-Inter-pod affinity and anti-affinity require substantial amount of
+Inter-pod affinity and anti-affinity require substantial amounts of
 processing which can slow down scheduling in large clusters significantly. We do
 not recommend using them in clusters larger than several hundred nodes.
 {{< /note >}}

 {{< note >}}
-Pod anti-affinity requires nodes to be consistently labelled, in other words,
+Pod anti-affinity requires nodes to be consistently labeled, in other words,
 every node in the cluster must have an appropriate label matching `topologyKey`.
 If some or all nodes are missing the specified `topologyKey` label, it can lead
 to unintended behavior.
@@ -364,7 +364,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi

 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `matchLabelKeys` field is a alpha-level field and is disabled by default in
+The `matchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
@@ -415,7 +415,7 @@ spec:

 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `mismatchLabelKeys` field is a alpha-level field and is disabled by default in
+The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
@@ -561,7 +561,7 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |

-The overall effect is that each cache instance is likely to be accessed by a single client, that
+The overall effect is that each cache instance is likely to be accessed by a single client that
 is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.

 You might have other reasons to use Pod anti-affinity.
@@ -589,7 +589,7 @@ Some of the limitations of using `nodeName` to select nodes are:
 {{< note >}}
 `nodeName` is intended for use by custom schedulers or advanced use cases where
 you need to bypass any configured schedulers. Bypassing the schedulers might lead to
-failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
+failed Pods if the assigned Nodes get oversubscribed. You can use the [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
 {{</ note >}}

 Here is an example of a Pod spec using the `nodeName` field:
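The last hunk above points at the page's `nodeName` example, which is not part of this diff. As a hedged sketch of what such a Pod spec looks like (the node name `kube-01` and the image are illustrative, not taken from this commit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Binds the Pod directly to this node and bypasses any configured scheduler,
  # which is the behavior the corrected note warns about.
  nodeName: kube-01   # illustrative node name
```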

content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md

Lines changed: 2 additions & 2 deletions
@@ -41,14 +41,14 @@ ResourceClass
 driver.

 ResourceClaim
-: Defines a particular resource instances that is required by a
+: Defines a particular resource instance that is required by a
 workload. Created by a user (lifecycle managed manually, can be shared
 between different Pods) or for individual Pods by the control plane based on
 a ResourceClaimTemplate (automatic lifecycle, typically used by just one
 Pod).

 ResourceClaimTemplate
-: Defines the spec and some meta data for creating
+: Defines the spec and some metadata for creating
 ResourceClaims. Created by a user when deploying a workload.

 PodSchedulingContext
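For readers of the glossary entries above, a hedged sketch of a ResourceClaimTemplate, assuming the alpha `resource.k8s.io/v1alpha2` API that this page describes; all names are illustrative and the alpha schema may change between releases:

```yaml
apiVersion: resource.k8s.io/v1alpha2   # assumed alpha API version, subject to change
kind: ResourceClaimTemplate
metadata:
  name: example-claim-template         # illustrative name
spec:
  # The control plane uses this inner spec to create one ResourceClaim per Pod
  # that references the template (automatic lifecycle, as described above).
  spec:
    resourceClassName: example-resource-class   # illustrative; points at a driver's ResourceClass
```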

content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md

Lines changed: 1 addition & 1 deletion
@@ -171,7 +171,7 @@ The kubelet has the following default hard eviction thresholds:
 - `nodefs.inodesFree<5%` (Linux nodes)

 These default values of hard eviction thresholds will only be set if none
-of the parameters is changed. If you changed the value of any parameter,
+of the parameters is changed. If you change the value of any parameter,
 then the values of other parameters will not be inherited as the default
 values and will be set to zero. In order to provide custom values, you
 should provide all the thresholds respectively.
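As the corrected paragraph notes, changing any one threshold means you should set all of them. A hedged sketch of a kubelet configuration that does so, using the documented Linux defaults as the values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  # Spell out every hard eviction threshold, because defaults are not
  # inherited once any single parameter is customized.
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
  nodefs.inodesFree: "5%"
```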

content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md

Lines changed: 6 additions & 6 deletions
@@ -182,8 +182,8 @@ When Pod priority is enabled, the scheduler orders pending Pods by
 their priority and a pending Pod is placed ahead of other pending Pods
 with lower priority in the scheduling queue. As a result, the higher
 priority Pod may be scheduled sooner than Pods with lower priority if
-its scheduling requirements are met. If such Pod cannot be scheduled,
-scheduler will continue and tries to schedule other lower priority Pods.
+its scheduling requirements are met. If such Pod cannot be scheduled, the
+scheduler will continue and try to schedule other lower priority Pods.

 ## Preemption

@@ -199,7 +199,7 @@ the Pods are gone, P can be scheduled on the Node.
 ### User exposed information

 When Pod P preempts one or more Pods on Node N, `nominatedNodeName` field of Pod
-P's status is set to the name of Node N. This field helps scheduler track
+P's status is set to the name of Node N. This field helps the scheduler track
 resources reserved for Pod P and also gives users information about preemptions
 in their clusters.

@@ -209,8 +209,8 @@ After victim Pods are preempted, they get their graceful termination period. If
 another node becomes available while scheduler is waiting for the victim Pods to
 terminate, scheduler may use the other node to schedule Pod P. As a result
 `nominatedNodeName` and `nodeName` of Pod spec are not always the same. Also, if
-scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
-arrives, scheduler may give Node N to the new higher priority Pod. In such a
+the scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
+arrives, the scheduler may give Node N to the new higher priority Pod. In such a
 case, scheduler clears `nominatedNodeName` of Pod P. By doing this, scheduler
 makes Pod P eligible to preempt Pods on another Node.

@@ -288,7 +288,7 @@ enough demand and if we find an algorithm with reasonable performance.

 ## Troubleshooting

-Pod priority and pre-emption can have unwanted side effects. Here are some
+Pod priority and preemption can have unwanted side effects. Here are some
 examples of potential problems and ways to deal with them.

 ### Pods are preempted unnecessarily
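To give the priority and preemption discussion above something concrete, a hedged sketch of a PriorityClass and a Pod that uses it; the class name, value, and image are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative name
value: 1000000                 # larger values sort earlier in the scheduling queue
globalDefault: false
description: "Illustrative class for Pods that may preempt lower-priority Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # resolved to an integer priority at admission time
  containers:
  - name: app
    image: nginx
```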

content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ The output is:
 ```

 To inform scheduler this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
-by re-applying a modified manifest:
+by reapplying a modified manifest:

 {{% code_sample file="pods/pod-without-scheduling-gates.yaml" %}}
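The referenced `pod-without-scheduling-gates.yaml` sample is not shown in this diff; as a hedged sketch, its gated counterpart might look like the following, with the gate name purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulingGates:
  - name: example.com/foo      # illustrative gate; the Pod stays unschedulable while any gate is present
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```

Reapplying the same manifest with the `schedulingGates` field removed is what the corrected sentence describes as marking the Pod ready for scheduling.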
content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md

Lines changed: 3 additions & 3 deletions
@@ -57,9 +57,9 @@ the `NodeResourcesFit` score function can be controlled by the
 Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatio` and
 `resources`. The `shape` in the `requestedToCapacityRatio`
 parameter allows the user to tune the function as least requested or most
-requested based on `utilization` and `score` values. The `resources` parameter
-consists of `name` of the resource to be considered during scoring and `weight`
-specify the weight of each resource.
+requested based on `utilization` and `score` values. The `resources` parameter
+comprises both the `name` of the resource to be considered during scoring and
+its corresponding `weight`, which specifies the weight of each resource.

 Below is an example configuration that sets
 the bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
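The corrected paragraph describes the `shape` and `resources` parameters; a hedged sketch of a kube-scheduler configuration along those lines, with shape points and weights chosen purely for illustration:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: RequestedToCapacityRatio
        requestedToCapacityRatio:
          shape:                     # score rises with utilization: most-requested (bin packing)
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        resources:                   # each entry pairs a resource name with its weight
        - name: intel.com/foo
          weight: 3
        - name: intel.com/bar
          weight: 3
```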

content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ If you don't specify a threshold, Kubernetes calculates a figure using a
 linear formula that yields 50% for a 100-node cluster and yields 10%
 for a 5000-node cluster. The lower bound for the automatic value is 5%.

-This means that, the kube-scheduler always scores at least 5% of your cluster no
+This means that the kube-scheduler always scores at least 5% of your cluster no
 matter how large the cluster is, unless you have explicitly set
 `percentageOfNodesToScore` to be smaller than 5.
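For reference, a hedged sketch of setting the threshold this paragraph is about explicitly, rather than relying on the linear formula; the value is illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# Score 50% of feasible nodes instead of letting the scheduler derive a figure
# from cluster size.
percentageOfNodesToScore: 50
```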

content/en/docs/concepts/scheduling-eviction/scheduling-framework.md

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ called for that node. Nodes may be evaluated concurrently.

 ### PostFilter {#post-filter}

-These plugins are called after Filter phase, but only when no feasible nodes
+These plugins are called after the Filter phase, but only when no feasible nodes
 were found for the pod. Plugins are called in their configured order. If
 any postFilter plugin marks the node as `Schedulable`, the remaining plugins
 will not be called. A typical PostFilter implementation is preemption, which
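As a hedged sketch of where the PostFilter extension point appears in scheduler configuration, the profile below keeps the default preemption plugin (the typical PostFilter implementation mentioned in the hunk) enabled at that point; the profile layout is a minimal illustration, not the full default configuration:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    postFilter:
      enabled:
      - name: DefaultPreemption    # runs only when Filter leaves no feasible nodes
```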

content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md

Lines changed: 2 additions & 2 deletions
@@ -84,7 +84,7 @@ An empty `effect` matches all effects with key `key1`.

 {{< /note >}}

-The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
+The above example used the `effect` of `NoSchedule`. Alternatively, you can use the `effect` of `PreferNoSchedule`.


 The allowed values for the `effect` field are:
@@ -227,7 +227,7 @@ are true. The following taints are built in:
 * `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
 * `node.kubernetes.io/unschedulable`: Node is unschedulable.
 * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
-with "external" cloud provider, this taint is set on a node to mark it
+with an "external" cloud provider, this taint is set on a node to mark it
 as unusable. After a controller from the cloud-controller-manager initializes
 this node, the kubelet removes this taint.

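To ground the `PreferNoSchedule` sentence in the first hunk, a hedged sketch of a matching taint and toleration; `node1`, the key, and the value are illustrative:

```yaml
# Taint applied beforehand with:
#   kubectl taint nodes node1 key1=value1:PreferNoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "PreferNoSchedule"   # the "soft" variant: the scheduler tries, but is not required, to avoid the node
  containers:
  - name: app
    image: nginx
```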
content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md

Lines changed: 6 additions & 6 deletions
@@ -71,7 +71,7 @@ spec:
 ```

 You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or
-refer to [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
+refer to the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.

 ### Spread constraint definition

@@ -254,7 +254,7 @@ follows the API definition of the field; however, the behavior is more likely to
 confusing and troubleshooting is less straightforward.

 You need a mechanism to ensure that all the nodes in a topology domain (such as a
-cloud provider region) are labelled consistently.
+cloud provider region) are labeled consistently.
 To avoid you needing to manually label nodes, most clusters automatically
 populate well-known labels such as `kubernetes.io/hostname`. Check whether
 your cluster supports this.
@@ -263,7 +263,7 @@ your cluster supports this.

 ### Example: one topology spread constraint {#example-one-topologyspreadconstraint}

-Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
+Suppose you have a 4-node cluster where 3 Pods labeled `foo: bar` are located in
 node1, node2 and node3 respectively:

 {{<mermaid>}}
@@ -290,7 +290,7 @@ can use a manifest similar to:
 {{% code_sample file="pods/topology-spread-constraints/one-constraint.yaml" %}}

 From that manifest, `topologyKey: zone` implies the even distribution will only be applied
-to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
+to nodes that are labeled `zone: <any value>` (nodes that don't have a `zone` label
 are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
 incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.

@@ -494,7 +494,7 @@ There are some implicit conventions worth noting here:
 above example, if you remove the incoming Pod's labels, it can still be placed onto
 nodes in zone `B`, since the constraints are still satisfied. However, after that
 placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
-having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as
+having 2 Pods labeled as `foo: bar`, and zone `B` having 1 Pod labeled as
 `foo: bar`. If this is not what you expect, update the workload's
 `topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.

@@ -618,7 +618,7 @@ section of the enhancement proposal about Pod topology spread constraints.
 because, in this case, those topology domains won't be considered until there is
 at least one node in them.

-You can work around this by using an cluster autoscaling tool that is aware of
+You can work around this by using a cluster autoscaling tool that is aware of
 Pod topology spread constraints and is also aware of the overall set of topology
 domains.

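The `one-constraint.yaml` sample referenced in these hunks is not part of the diff; as a hedged sketch, a single-constraint manifest of that kind might look like this, with the Pod name and image illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone                  # only nodes labeled zone=<any value> are considered
    whenUnsatisfiable: DoNotSchedule   # leave the Pod pending rather than violate the constraint
    labelSelector:
      matchLabels:
        foo: bar                       # counts existing Pods labeled foo: bar per zone
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```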