Commit 4726921

Fix typo in docs/proposals

Signed-off-by: chymy <[email protected]>
1 parent 56f4f9a commit 4726921

11 files changed (+31 −31 lines)

docs/proposals/20181121-machine-api.md

Lines changed: 3 additions & 3 deletions

@@ -42,9 +42,9 @@ performed in-place or via Node replacement.
 ## Proposal
 
 This proposal introduces a new API type: Machine. See the full definition in
-[types.go](types.go).
+[machine_types.go](../../api/v1alpha4/machine_types.go).
 
-A "Machine" is the declarative spec for a Node, as represented in Kuberenetes
+A "Machine" is the declarative spec for a Node, as represented in Kubernetes
 core. If a new Machine object is created, a provider-specific controller will
 handle provisioning and installing a new host to register as a new Node matching
 the Machine spec. If the Machine's spec is updated, a provider-specific
@@ -143,4 +143,4 @@ revisit the specifics when new patterns start to emerge in core.
 
 ## Types
 
-Please see the full types [here](https://github.com/kubernetes-sigs/cluster-api/tree/release-0.2/pkg/apis/deprecated/v1alpha1/machine_types.go).
+Please see the full types [here](../../api/v1alpha4/machine_types.go).

docs/proposals/20191016-clusterctl-redesign.md

Lines changed: 2 additions & 2 deletions

@@ -2,8 +2,8 @@
 title: Clusterctl redesign - Improve user experience and management across Cluster API providers
 authors:
 - "@timothysc"
-- @frapposelli
-- @fabriziopandini
+- "@frapposelli"
+- "@fabriziopandini"
 reviewers:
 - "@detiber"
 - "@ncdc"
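The quoting change above is not just cosmetic: in YAML, `@` is a reserved indicator character, so a plain (unquoted) scalar may not begin with it and `- @frapposelli` fails to parse. A minimal sketch of the difference, assuming the third-party PyYAML library is available:

```python
import yaml  # PyYAML, assumed installed (pip install pyyaml)

# Unquoted handle: '@' is a reserved indicator, so this document is invalid YAML.
try:
    yaml.safe_load("authors:\n- @frapposelli\n")
except yaml.YAMLError as err:
    print("unquoted form rejected:", type(err).__name__)

# Quoted handle: parses into an ordinary list of strings.
doc = yaml.safe_load('authors:\n- "@frapposelli"\n')
print(doc)  # {'authors': ['@frapposelli']}
```

The same quoting rule explains the identical fixes in the front matter of the etcd-data-disk and clusterctl-extensible-template-processing proposals below.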

docs/proposals/20191017-kubeadm-based-control-plane.md

Lines changed: 2 additions & 2 deletions

@@ -131,10 +131,10 @@ Non-Goals listed in this document are intended to scope bound the current v1alph
 - To mutate the configuration of live, running clusters (e.g. changing api-server flags), as this is the responsibility of the [component configuration working group](https://git.k8s.io/community/wg-component-standard).
 - To provide configuration of external cloud providers (i.e. the [cloud-controller-manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)).This is deferred to kubeadm.
 - To provide CNI configuration. This is deferred to external, higher level tooling.
-- To provide the upgrade logic to handle changes to infrastructure (networks, firewalls etc…) that may need to be done to support a control plane on a newer version of Kubernetes (e.g. a cloud controller manager requires updated permissions against infrastructure APIs). We expect the work on [add-on components](https://git.k8s.io/community/sig-cluster-lifecycle#cluster-addons)) to help to resolve some of these issues.
+- To provide the upgrade logic to handle changes to infrastructure (networks, firewalls etc…) that may need to be done to support a control plane on a newer version of Kubernetes (e.g. a cloud controller manager requires updated permissions against infrastructure APIs). We expect the work on [add-on components](https://git.k8s.io/community/sig-cluster-lifecycle#cluster-addons) to help to resolve some of these issues.
 - To provide automation around the horizontal or vertical scaling of control plane components, especially as etcd places hard performance limits beyond 3 nodes (due to latency).
 - To support upgrades where the infrastructure does not rely on a Load Balancer for access to the API Server.
-- To implement a fully modeled state machine and/or Conditions, a larger effort for Cluster API more broadly is being organized on [this issue](https://github.com/kubernetes-sigs/cluster-api/issues/1658))
+- To implement a fully modeled state machine and/or Conditions, a larger effort for Cluster API more broadly is being organized on [this issue](https://github.com/kubernetes-sigs/cluster-api/issues/1658)
 
 ## Proposal
 

docs/proposals/20191030-machine-health-checking.md

Lines changed: 3 additions & 3 deletions

@@ -90,7 +90,7 @@ MHC requests a remediation in one of the following ways:
 - Creating a CR based on a template which signals external component to remediate the machine
 
 It provides a short-circuit mechanism and limits remediation when the number of unhealthy machines is not within `unhealthyRange`, or has reached `maxUnhealthy` threshold for a targeted group of machines with `unhealthyRange` taking precedence.
-This is similar to what the node life cycle controller does for reducing the eviction rate as nodes become unhealthy in a given zone. E.g a large number of nodes in a single zone are down due to a networking issue.
+This is similar to what the node life cycle controller does for reducing the eviction rate as nodes become unhealthy in a given zone. E.g. a large number of nodes in a single zone are down due to a networking issue.
 
 The machine health checker is an integration point between node problem detection tooling expressed as node conditions and remediation to achieve a node auto repairing feature.
 
@@ -104,7 +104,7 @@ If any of those criteria are met for longer than the given timeouts and the numb
 
 Timeouts:
 - For the node conditions the time outs are defined by the admin.
-- For a machine with no nodeRef an opinionated value could be assumed e.g 10 min.
+- For a machine with no nodeRef an opinionated value could be assumed e.g. 10 min.
 
 ### Remediation:
 - Remediation is not an integral part or responsibility of MachineHealthCheck. This controller only functions as a means for others to act when a Machine is unhealthy in the best way possible.
@@ -376,7 +376,7 @@ For failing early testing we could consider a test suite leveraging kubemark as
 [testing-guidelines]: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
 
 ### Graduation Criteria [optional]
-This propose the new CRD to belong to the same API group than other cluster-api resources, e.g machine, machineSet and to follow the same release cadence.
+This propose the new CRD to belong to the same API group than other cluster-api resources, e.g. machine, machineSet and to follow the same release cadence.
 
 ### Version Skew Strategy [optional]
 
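The short-circuit rule quoted in this file's first hunk (remediation limited unless the unhealthy count is within `unhealthyRange` or under the `maxUnhealthy` threshold, with `unhealthyRange` taking precedence) can be sketched as follows. `remediation_allowed` is a hypothetical helper for illustration, not the actual MachineHealthCheck controller code:

```python
from typing import Optional, Tuple

def remediation_allowed(unhealthy: int,
                        unhealthy_range: Optional[Tuple[int, int]] = None,
                        max_unhealthy: Optional[int] = None) -> bool:
    """Hypothetical sketch of the MHC short-circuit described in the proposal."""
    if unhealthy_range is not None:
        # unhealthyRange takes precedence when both limits are configured.
        low, high = unhealthy_range
        return low <= unhealthy <= high
    if max_unhealthy is not None:
        # Remediation short-circuits once the threshold is exceeded.
        return unhealthy <= max_unhealthy
    return True  # no limits configured

print(remediation_allowed(2, max_unhealthy=3))           # True: under threshold
print(remediation_allowed(5, max_unhealthy=3))           # False: short-circuited
print(remediation_allowed(5, unhealthy_range=(0, 7)))    # True: range takes precedence
```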

docs/proposals/20200330-spot-instances.md

Lines changed: 4 additions & 4 deletions

@@ -102,7 +102,7 @@ Allow users to cut costs of running Kubernetes clusters on cloud providers by mo
 
 - Any logic for choosing instances types based on availability from the cloud provider
 
-- A one to one map for each provider available mechanism for deploying spot instances, e.g aws fleet.
+- A one to one map for each provider available mechanism for deploying spot instances, e.g. aws fleet.
 
 - Support Spot instances via MachinePool for any cloud provider that doesn't already support MachinePool
 
@@ -315,7 +315,7 @@ could introduce instability to the cluster or even result in a loss of quorum fo
 Running control-plane instances on top of spot instances should be forbidden.
 
 There may also be limitations within cloud providers that restrict the usage of spot instances within the control-plane,
-eg. Azure Spot VMs do not support [ephemeral disks](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/spot-vms#limitations) which may be desired for control-plane instances.
+e.g. Azure Spot VMs do not support [ephemeral disks](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/spot-vms#limitations) which may be desired for control-plane instances.
 
 This risk will be documented and it will be strongly advised that users do not attempt to create control-plane instances on spot instances.
 To prevent it completely, an admission controller could be used to verify that Infrastructure Machines do not get created with the control-plane label,
@@ -459,7 +459,7 @@ Spot VMs are available in two forms in Azure.
 ###### Scale Sets
 
 Scale sets include support for Spot VMs by indicating when created, that they should be backed by Spot VMs.
-At this point, a eviction policy should be set and a maximum price you wish to pay.
+At this point, an eviction policy should be set and a maximum price you wish to pay.
 Alternatively, you can also choose to only be preempted in the case that there are capacity constraints,
 in which case, you will pay whatever the market rate is, but will be preempted less often.
 
@@ -468,7 +468,7 @@ Once support is added, enabling Spot backed Scale Sets would be a case of modify
 
 ###### Single Instances
 Azure supports Spot VMs on single VM instances by indicating when created, that the VM should be a Spot VM.
-At this point, a eviction policy should be set and a maximum price you wish to pay.
+At this point, an eviction policy should be set and a maximum price you wish to pay.
 Alternatively, you can also choose to only be preempted in the case that there are capacity constraints,
 in which case, you will pay whatever the market rate is, but will be preempted less often.

docs/proposals/20200423-etcd-data-disk.md

Lines changed: 3 additions & 3 deletions

@@ -5,7 +5,7 @@ authors:
 reviewers:
 - "@bagnaram"
 - "@vincepri"
-- @detiber
+- "@detiber"
 - "@fabrizio.pandini"
 creation-date: 2020-04-23
 last-updated: 2020-05-11
@@ -80,9 +80,9 @@ As a user of a Workload Cluster, I want provision and mount additional data stor
 
 ### Implementation Details/Notes/Constraints
 
-### Changes required in the bootstrap provider (ie. CABPK)
+### Changes required in the bootstrap provider (i.e. CABPK)
 
-1. Add a two new fields to KubeadmConfig for disk setup and mount points
+1. Add two new fields to KubeadmConfig for disk setup and mount points
 
 ```go
 // DiskSetup specifies options for the creation of partition tables and file systems on devices.

docs/proposals/20200506-conditions.md

Lines changed: 4 additions & 4 deletions

@@ -254,7 +254,7 @@ ControlPlaneReady=False, Reason=ScalingUp, Severity=Info
 ```
 
 In other cases, the combination of `Reason` and `Severity` allows to detect when a failure is due to a catastrophic
-error or to other events that are transient or can be eventually remediated by an user intervention
+error or to other events that are transient or can be eventually remediated by a user intervention
 
 ```
 MachineReady=False, Reason=MachineNotHealthy, Severity=Error
@@ -456,7 +456,7 @@ time an upgrade starts.
 Then, those new conditions will be then captured by the summary in `KubeadmControlPlane.Status.Conditions[Ready]`
 and be reflected to `Cluster.Status.Conditions[ControlPlaneReady]`.
 
-However, please note that during upgrades, some rules that are be used to evaluate the
+However, please note that during upgrades, some rules that are been used to evaluate the
 operational state of a control plane should be temporary changed e.g. during upgrades:
 
 - It is acceptable to have a number of replicas higher than the desired number of replicas
@@ -482,11 +482,11 @@ enhance the condition utilities to handle those situations in a generalized way.
 - Mitigation: Ensure all the implementations comply with the defined set of constraints/design principles.
 
 - Risk: Having a consistent polarity ensures a simple and clear contract with the consumers, and it allows
-processing conditions in a simple and consistent way without being forced to implements specific logic
+processing conditions in a simple and consistent way without being forced to implement specific logic
 for each condition type. However, we are aware about the fact that enforcing of consistent polarity (truthy)
 combined with the usage of recommended suffix for condition types can lead to verbal contortions to express
 conditions, especially in case of conditions designed to signal problems or in case of conditions
-that might exists or not.
+that might exist or not.
 - Mitigation: We are relaxing the rule about recommended suffix and allowing usage of custom suffix.
 - Mitigation: We are recommending the condition adhere to the design principle to express the operational state
 of the component, and this should help in avoiding conditions name to surface internal implementation details.
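The `Reason`/`Severity` pairs quoted in this file's first hunk drive how consumers react to a `False` condition. A minimal sketch of that idea, written in Python rather than the project's Go and using hypothetical names (this is not the actual Cluster API condition type):

```python
from dataclasses import dataclass

@dataclass
class Condition:
    type: str
    status: str    # "True", "False", "Unknown"
    reason: str
    severity: str  # "Error", "Warning", "Info"

def needs_intervention(cond: Condition) -> bool:
    # Only a False condition with Error severity is treated as catastrophic;
    # Info/Warning severities mark transient or self-remediating states.
    return cond.status == "False" and cond.severity == "Error"

scaling = Condition("ControlPlaneReady", "False", "ScalingUp", "Info")
broken = Condition("MachineReady", "False", "MachineNotHealthy", "Error")
print(needs_intervention(scaling))  # False: transient scale-up
print(needs_intervention(broken))   # True: needs user intervention
```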

docs/proposals/20200511-clusterctl-extensible-template-processing.md

Lines changed: 7 additions & 7 deletions

@@ -3,10 +3,10 @@ title: Extensible Templating Processing for clusterctl
 authors:
 * "@wfernandes"
 reviewers:
-* @timothysc
-* @ncdc
-* @fabriziopandini
-* @vincepri
+* "@timothysc"
+* "@ncdc"
+* "@fabriziopandini"
+* "@vincepri"
 
 
 creation-date: 2020-04-27
@@ -233,12 +233,12 @@ libraries so the issue of support should be solved with this contract.
 - Currently, clusterctl relies on the conformance of file name conventions
 such as `infrastructure-components.yaml` and
 `cluster-template-<flavor>.yaml`. Other templating tools might require other
-conventions to be defined and followed to allow the same day 1 experience.
+conventions to be defined and followed to allow the same "day 1" experience.
 - Some templating tools will require multiple files to be defined rather than
-a single yaml file. These artifacts will need to be grouped together to
+a single yaml file. These artifacts will need to be "grouped" together to
 support current retrieval mechanisms. Currently, `clusterctl config cluster`
 retrieves templates from multiple sources such as ConfigMaps within a
-cluster, URL, Github Repository, Local Repository and even the overrides'
+cluster, URL, Github Repository, Local Repository and even the overrides
 directory. To ensure compatibility, we’ll need to establish a compression
 format like tar.gz
 

docs/proposals/20200602-machine-deletion-phase-hooks.md

Lines changed: 1 addition & 1 deletion

@@ -109,7 +109,7 @@ lifecycle hook.
 - Create a mechanism to signal what lifecycle point a machine is at currently.
 - Dictate implementation of controllers that respond to the hooks.
 - Implement ordering in the machine-controller.
-- Require anyone uses these hooks for normal machine operations, these are
+- Require anyone to use these hooks for normal machine operations, these are
 strictly optional and for custom integrations only.
 
 
docs/proposals/20210203-externally-managed-cluster-infrastructure.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -56,7 +56,7 @@ Refer to the [Cluster API Book Glossary](https://cluster-api.sigs.k8s.io/referen
5656
### Managed cluster infrastructure
5757

5858
Cluster infrastructure whose lifecycle is managed by a provider InfraCluster controller.
59-
E.g in AWS:
59+
E.g. in AWS:
6060
- Network
6161
- VPC
6262
- Subnets

docs/proposals/20210210-insulate-users-from-kubeadm-API-changes.md

Lines changed: 1 addition & 1 deletion

@@ -63,7 +63,7 @@ should take this opportunity to stop relying on the assumption that the kubeadm
 KubeadmConfig/KubeadmControlPlane specs are supported(1) by all the Kubernetes/kubeadm version
 in the support range.
 
-This would allow to separate what users fill in in the KubeadmConfig/KubeadmControlPlane
+This would allow to separate what users fill in the KubeadmConfig/KubeadmControlPlane
 from which kubeadm API version Cluster API end up using in the bootstrap data.
 
 (1) Supported in this context means that the serialization format of the types is the same,
