2 - Patch Releases
Warning: This content is auto-generated and links may not function. The source of the document is located
here.
Kubernetes Patch Releases
Schedule and team contact information for Kubernetes patch releases.
For general information about the Kubernetes release cycle, see the
release process description.
Cadence
Our typical patch release cadence is monthly. It is
commonly a bit faster (1 to 2 weeks) for the earliest patch releases
after a 1.X minor release. Critical bug fixes may trigger a more
immediate release outside of the normal cadence. We also aim to avoid
making releases during major holiday periods.
See the Release Managers page for full contact details on the Patch Release Team.
Please give us a business day to respond - we may be in a different timezone!
Between releases, the team reviews incoming cherry-pick requests on a
weekly basis. If there are questions on a PR, the team will contact
submitters via the GitHub PR, SIG channels in Slack, direct messages
in Slack, or email.
Cherry-Picks
Please follow the cherry-pick process.
Cherry-picks must be merge-ready in GitHub with proper labels (e.g.
approved, lgtm, release-note) and passing CI tests ahead of the
cherry-pick deadline. This is typically two days before the target
release, but may be more. Earlier PR readiness is better, as we
need time to get CI signal after merging your cherry-picks ahead
of the actual release.
Cherry-pick PRs which miss merge criteria will be carried over and tracked
for the next patch release.
Support Period
In accordance with the yearly support KEP, the Kubernetes
Community will support active patch release series for a period of roughly
fourteen (14) months.
The first twelve months of this timeframe will be considered the standard
period.
Towards the end of the twelve-month standard period, the following will happen:
- Release Managers will cut a release
- The patch release series will enter maintenance mode
During the two-month maintenance mode period, Release Managers may cut
additional maintenance releases to resolve:
- CVEs (under the advisement of the Product Security Committee)
- dependency issues (including base image updates)
- critical core component issues
At the end of the two-month maintenance mode period, the patch release series
will be considered EOL (end of life) and cherry-picks to the associated branch
are to be closed soon afterwards.
Note that the 28th of the month was chosen for maintenance mode and EOL target
dates for simplicity (every month has a 28th).
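As an illustration (an informal sketch, not an official SIG Release tool), the maintenance-mode and EOL targets can be derived from a minor release date like this:

```python
from datetime import date

def add_months(d, months):
    """Return the 28th of the month `months` months after d's month."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 28)

def support_dates(minor_release_date):
    """Maintenance mode begins ~12 months after the minor release and
    EOL lands ~2 months after that, both targeting the 28th."""
    return add_months(minor_release_date, 12), add_months(minor_release_date, 14)

# 1.21 was released on 2021-04-08; this yields maintenance mode on
# 2022-04-28 and EOL on 2022-06-28, matching the dates listed below.
maintenance, eol = support_dates(date(2021, 4, 8))
```

The helper simply counts months and pins the day to the 28th, mirroring the simplicity rationale noted above.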
Upcoming Monthly Releases
Timelines may vary with the severity of bug fixes, but for easier planning we
will target the following monthly release points. Unplanned, critical
releases may also occur in between these.
| Monthly Patch Release | Target Date |
|-----------------------|-------------|
| May 2021 | 2021-05-12 |
| June 2021 | 2021-06-16 |
| July 2021 | 2021-07-14 |
Detailed Release History for Active Branches
1.21
1.21 enters maintenance mode on 2022-04-28
End of Life for 1.21 is 2022-06-28
| Patch Release | Cherry Pick Deadline | Target Date |
|---------------|----------------------|-------------|
| 1.21.1 | 2021-05-07 | 2021-05-12 |
1.20
1.20 enters maintenance mode on 2021-12-28
End of Life for 1.20 is 2022-02-28
| Patch Release | Cherry Pick Deadline | Target Date |
|---------------|----------------------|-------------|
| 1.20.7 | 2021-05-07 | 2021-05-12 |
| 1.20.6 | 2021-04-09 | 2021-04-14 |
| 1.20.5 | 2021-03-12 | 2021-03-17 |
| 1.20.4 | 2021-02-12 | 2021-02-18 |
| 1.20.3 | Conformance Tests Issue | 2021-02-17 |
| 1.20.2 | 2021-01-08 | 2021-01-13 |
| 1.20.1 | Tagging Issue | 2020-12-18 |
1.19
1.19 enters maintenance mode on 2021-08-28
End of Life for 1.19 is 2021-10-28
| Patch Release | Cherry Pick Deadline | Target Date |
|---------------|----------------------|-------------|
| 1.19.11 | 2021-05-07 | 2021-05-12 |
| 1.19.10 | 2021-04-09 | 2021-04-14 |
| 1.19.9 | 2021-03-12 | 2021-03-17 |
| 1.19.8 | 2021-02-12 | 2021-02-17 |
| 1.19.7 | 2021-01-08 | 2021-01-13 |
| 1.19.6 | Tagging Issue | 2020-12-18 |
| 1.19.5 | 2020-12-04 | 2020-12-09 |
| 1.19.4 | 2020-11-06 | 2020-11-11 |
| 1.19.3 | 2020-10-09 | 2020-10-14 |
| 1.19.2 | 2020-09-11 | 2020-09-16 |
| 1.19.1 | 2020-09-04 | 2020-09-09 |
1.18
1.18 enters maintenance mode on 2021-04-28
End of Life for 1.18 is 2021-05-12
| Patch Release | Cherry Pick Deadline | Target Date |
|---------------|----------------------|-------------|
| 1.18.19 | 2021-05-07 | 2021-05-12 |
| 1.18.18 | 2021-04-09 | 2021-04-14 |
| 1.18.17 | 2021-03-12 | 2021-03-17 |
| 1.18.16 | 2021-02-12 | 2021-02-17 |
| 1.18.15 | 2021-01-08 | 2021-01-13 |
| 1.18.14 | Tagging Issue | 2020-12-18 |
| 1.18.13 | 2020-12-04 | 2020-12-09 |
| 1.18.12 | N/A | 2020-11-12 |
| 1.18.11 | No-op release | 2020-11-11 |
| 1.18.10 | 2020-10-09 | 2020-10-14 |
| 1.18.9 | 2020-09-11 | 2020-09-16 |
| 1.18.8 | N/A | 2020-08-13 |
| 1.18.7 | 2020-08-07 | 2020-08-12 |
| 1.18.6 | 2020-07-10 | 2020-07-15 |
| 1.18.5 | 2020-06-25 | 2020-06-26 |
| 1.18.4 | 2020-06-12 | 2020-06-17 |
| 1.18.3 | 2020-05-15 | 2020-05-20 |
| 1.18.2 | 2020-04-13 | 2020-04-16 |
| 1.18.1 | 2020-04-06 | 2020-04-08 |
Non-Active Branch History
These releases are no longer supported.
| Minor Version | Final Patch Release | EOL Date |
|---------------|---------------------|----------|
| 1.17 | 1.17.17 | 2021-01-13 |
| 1.16 | 1.16.15 | 2020-09-02 |
| 1.15 | 1.15.12 | 2020-05-06 |
| 1.14 | 1.14.10 | 2019-12-11 |
| 1.13 | 1.13.12 | 2019-10-15 |
| 1.12 | 1.12.10 | 2019-07-08 |
| 1.11 | 1.11.10 | 2019-05-01 |
| 1.10 | 1.10.13 | 2019-02-13 |
| 1.9 | 1.9.11 | 2018-09-29 |
| 1.8 | 1.8.15 | 2018-07-12 |
| 1.7 | 1.7.16 | 2018-04-04 |
| 1.6 | 1.6.13 | 2017-11-23 |
| 1.5 | 1.5.8 | 2017-10-01 |
| 1.4 | 1.4.12 | 2017-04-21 |
| 1.3 | 1.3.10 | 2016-11-01 |
| 1.2 | 1.2.7 | 2016-10-23 |
4 - The Release Cycle
Targeting Enhancements, Issues, and PRs to Release Milestones
This document is focused on Kubernetes developers and contributors who need to
create an enhancement, issue, or pull request which targets a specific release
milestone.
The process for shepherding enhancements, issues, and pull requests into a
Kubernetes release spans multiple stakeholders:
- the enhancement, issue, and pull request owner(s)
- SIG leadership
- the Release Team
Information on workflows and interactions is described below.
As the owner of an enhancement, issue, or pull request (PR), it is your
responsibility to ensure release milestone requirements are met. Automation and
the Release Team will be in contact with you if updates are required, but
inaction can result in your work being removed from the milestone. Additional
requirements exist when the target milestone is a prior release (see
cherry pick process for more information).
TL;DR
If you want your PR to get merged, it needs the following required labels and
milestones, represented here by the Prow /commands it would take to add them:
Normal Dev (Weeks 1-8)
- /sig {name}
- /kind {type}
- /lgtm
- /approved
Code Freeze (Weeks 9-11)
- /milestone {v1.y}
- /sig {name}
- /kind {bug, failing-test}
- /lgtm
- /approved
Post-Release (Weeks 11+)
Return to 'Normal Dev' phase requirements:
- /sig {name}
- /kind {type}
- /lgtm
- /approved
Merges into the 1.y branch are now via cherry picks, approved
by Release Managers.
In the past, milestone-targeted pull requests were required to have an
associated GitHub issue opened, but this is no longer the case.
Features or enhancements are effectively GitHub issues or KEPs which
lead to subsequent PRs.
The general labeling process should be consistent across artifact types.
Definitions
- issue owners: creator, assignees, and the user who moved the issue into a
  release milestone
- Release Team: each Kubernetes release has a team doing project management
  tasks described here. The contact info for the team associated with any
  given release can be found here.
- Y days: refers to business days
- enhancement: see "Is My Thing an Enhancement?"
- Enhancements Freeze: the deadline by which KEPs have to be completed in
  order for enhancements to be part of the current release
- Exception Request: the process of requesting an extension on the deadline
  for a particular Enhancement
- Code Freeze: the period of ~4 weeks before the final release date, during
  which only critical bug fixes are merged into the release
- Pruning: the process of removing an Enhancement from a release milestone if
  it is not fully implemented or is otherwise considered not stable
- release milestone: semantic version string or GitHub milestone referring to
  a release MAJOR.MINOR vX.Y version. See also release versioning.
- release branch: Git branch release-X.Y created for the vX.Y milestone.
  Created at the time of the vX.Y-rc.0 release and maintained after the
  release for approximately 12 months with vX.Y.Z patch releases.
  Note: releases 1.19 and newer receive 1 year of patch release support, and
  releases 1.18 and earlier received 9 months of patch release support.
The Release Cycle
[Diagram: the Kubernetes release cycle]
Kubernetes releases currently happen approximately four times per year.
The release process can be thought of as having three main phases:
- Enhancement Definition
- Implementation
- Stabilization
But in reality, this is an open source and agile project, with feature planning
and implementation happening at all times. Given the project's scale and
globally distributed developer base, it is critical to project velocity not to
rely on a trailing stabilization phase. Instead, continuous integration testing
ensures the project is always stable, so that individual commits can be flagged
as having broken something.
With ongoing feature definition through the year, some set of items will bubble
up as targeting a given release. Enhancements Freeze
starts ~4 weeks into the release cycle. By this point all intended feature work
for the given release has been defined in suitable planning artifacts in
conjunction with the Release Team's Enhancements Lead.
After Enhancements Freeze, tracking milestones on PRs and issues is important.
Items within the milestone are used as a punchdown list to complete the
release. On issues, milestones must be applied correctly, via triage by the
SIG, so that the Release Team can track bugs and enhancements (any
enhancement-related issue needs a milestone).
There is some automation in place to help automatically assign milestones to
PRs.
This automation currently applies to the following repos:
- kubernetes/enhancements
- kubernetes/kubernetes
- kubernetes/release
- kubernetes/sig-release
- kubernetes/test-infra
At creation time, PRs against the master branch need humans to hint at which
milestone they might want the PR to target. Once merged, PRs against the
master branch have milestones auto-applied, so from that time onward human
management of that PR's milestone is less necessary. On PRs against release
branches, milestones are auto-applied when the PR is created, so no human
management of the milestone is ever necessary.
Any other effort that should be tracked by the Release Team that doesn't fall
under that automation umbrella should have a milestone applied.
Implementation and bug fixing are ongoing across the cycle, but culminate in a
code freeze period.
Code Freeze starts in week ~10 and continues for ~2 weeks.
Only critical bug fixes are accepted into the release codebase during this
time.
Approximately two weeks follow Code Freeze and precede the release, during
which all remaining critical issues must be resolved. This also gives time for
documentation finalization.
When the code base is sufficiently stable, the master branch re-opens for
general development and work begins there for the next release milestone. Any
remaining modifications for the current release are cherry picked from master
back to the release branch. The release is built from the release branch.
Each release is part of a broader Kubernetes lifecycle:
[Diagram: the Kubernetes release lifecycle]
Removal Of Items From The Milestone
Before getting too far into the process for adding an item to the milestone,
please note:
Members of the Release Team may remove issues from the
milestone if they or the responsible SIG determine that the issue is not
actually blocking the release and is unlikely to be resolved in a timely
fashion.
Members of the Release Team may remove PRs from the milestone for any of the
following, or similar, reasons:
- PR is potentially de-stabilizing and is not needed to resolve a blocking
issue
- PR is a new, late feature PR and has not gone through the enhancements
process or the exception process
- There is no responsible SIG willing to take ownership of the PR and resolve
any follow-up issues with it
- PR is not correctly labelled
- Work has visibly halted on the PR and delivery dates are uncertain or late
While members of the Release Team will help with labelling and contacting
SIG(s), it is the responsibility of the submitter to categorize PRs, and to
secure support from the relevant SIG to guarantee that any breakage caused by
the PR will be rapidly resolved.
Where additional action is required, the Release Team will attempt
human-to-human escalation through the following channels:
- Comment in GitHub mentioning the SIG team and SIG members as appropriate for
the issue type
- Emailing the SIG mailing list
- bootstrapped with group email addresses from the
community sig list
- optionally also directly addressing SIG leadership or other SIG members
- Messaging the SIG's Slack channel
- bootstrapped with the Slack channel and SIG leadership from the
community sig list
- optionally directly "@" mentioning SIG leadership or others by handle
Adding An Item To The Milestone
Milestone Maintainers
The members of the milestone-maintainers
GitHub team are entrusted with the responsibility of specifying the release
milestone on GitHub artifacts.
This group is maintained
by SIG Release and has representation from the various SIGs' leadership.
Feature additions
Feature planning and definition takes many forms today, but a typical example
might be a large piece of work described in a KEP, with associated task
issues in GitHub. When the plan has reached an implementable state and work is
underway, the enhancement or parts thereof are targeted for an upcoming milestone
by creating GitHub issues and marking them with the Prow "/milestone" command.
For the first ~4 weeks into the release cycle, the Release Team's Enhancements
Lead will interact with SIGs and feature owners via GitHub, Slack, and SIG
meetings to capture all required planning artifacts.
If you have an enhancement to target for an upcoming release milestone, begin a
conversation with your SIG leadership and with that release's Enhancements
Lead.
Issue additions
Issues are marked as targeting a milestone via the Prow "/milestone" command.
The Release Team's Bug Triage Lead
and overall community watch incoming issues and triage them, as described in
the contributor guide section on
issue triage.
Marking issues with the milestone provides the community better visibility
regarding when an issue was observed and by when the community feels it must be
resolved. During Code Freeze, a milestone must be set to merge
a PR.
An open issue is no longer required for a PR, but open issues and associated
PRs should have synchronized labels. For example, a high-priority bug issue
might not have its associated PR merged if the PR is only marked as lower
priority.
PR Additions
PRs are marked as targeting a milestone via the Prow "/milestone" command.
This is a blocking requirement during Code Freeze as described above.
Other Required Labels
Here is the list of labels and their use and purpose.
SIG Owner Label
The SIG owner label defines the SIG to which we escalate if a milestone issue
is languishing or needs additional attention. If there are no updates after
escalation, the issue may be automatically removed from the milestone.
These are added with the Prow "/sig" command. For example, to add the label
indicating SIG Storage is responsible, comment with /sig storage.
Priority Label
Priority labels are used to determine an escalation path before moving issues
out of the release milestone. They are also used to determine whether or not a
release should be blocked on the resolution of the issue.
priority/critical-urgent: Never automatically move out of a release milestone;
continually escalate to the contributor and SIG through all available channels.
- considered a release blocking issue
- requires daily updates from issue owners during Code Freeze
- would require a patch release if left undiscovered until after the minor
release
priority/important-soon: Escalate to the issue owners and SIG owner; move
out of milestone after several unsuccessful escalation attempts.
- not considered a release blocking issue
- would not require a patch release
- will automatically be moved out of the release milestone at Code Freeze
after a 4 day grace period
priority/important-longterm: Escalate to the issue owners; move out of the
milestone after 1 attempt.
- even less urgent / critical than priority/important-soon
- moved out of milestone more aggressively than priority/important-soon
Issue/PR Kind Label
The issue kind is used to help identify the types of changes going into the
release over time. This may allow the Release Team to develop a better
understanding of what sorts of issues we would miss with a faster release
cadence.
For release targeted issues, including pull requests, one of the following
issue kind labels must be set:
kind/api-change: Adds, removes, or changes an API.
kind/bug: Fixes a newly discovered bug.
kind/cleanup: Adding tests, refactoring, fixing old bugs.
kind/design: Related to design.
kind/documentation: Adds documentation.
kind/failing-test: CI test case is failing consistently.
kind/feature: New functionality.
kind/flake: CI test case is showing intermittent failures.
5 - Version Skew Policy
The maximum version skew supported between various Kubernetes components.
This document describes the maximum version skew supported between various Kubernetes components.
Specific cluster deployment tools may place additional restrictions on version skew.
Supported versions
Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology.
For more information, see Kubernetes Release Versioning.
The Kubernetes project maintains release branches for the most recent three minor releases (1.21, 1.20, 1.19). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support.
Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
Patch releases are cut from those branches at a regular cadence, plus additional urgent releases, when required.
The Release Managers group owns this decision.
For more information, see the Kubernetes patch releases page.
Supported version skew
kube-apiserver
In highly-available (HA) clusters, the newest and oldest kube-apiserver
instances must be within one minor version.
Example:
- newest kube-apiserver is at 1.21
- other kube-apiserver instances are supported at 1.21 and 1.20
kubelet
kubelet must not be newer than kube-apiserver, and may be up to two minor
versions older.
Example:
- kube-apiserver is at 1.21
- kubelet is supported at 1.21, 1.20, and 1.19
Note: If version skew exists between kube-apiserver instances in an HA
cluster, this narrows the allowed kubelet versions.
Example:
- kube-apiserver instances are at 1.21 and 1.20
- kubelet is supported at 1.20 and 1.19 (1.21 is not supported because that
would be newer than the kube-apiserver instance at version 1.20)
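The kubelet rule, including the HA narrowing, can be sketched as a small check. This is an illustrative sketch only (the function names and "X.Y" version-string format are assumptions, not a Kubernetes API); it encodes the rule exactly as the examples above state it:

```python
def minor(version):
    """Return the minor number from an "X.Y" version string."""
    return int(version.split(".")[1])

def kubelet_supported(kubelet, apiservers):
    """A kubelet is supported if it is not newer than the oldest
    kube-apiserver instance, and at most two minor versions older
    than the newest one."""
    minors = [minor(v) for v in apiservers]
    return minor(kubelet) <= min(minors) and minor(kubelet) >= max(minors) - 2

# HA example from above: with apiservers at 1.21 and 1.20,
# kubelet 1.20 and 1.19 are supported; 1.21 is not.
```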
kube-controller-manager, kube-scheduler, and cloud-controller-manager
kube-controller-manager, kube-scheduler, and cloud-controller-manager must
not be newer than the kube-apiserver instances they communicate with. They
are expected to match the kube-apiserver minor version, but may be up to one
minor version older (to allow live upgrades).
Example:
- kube-apiserver is at 1.21
- kube-controller-manager, kube-scheduler, and cloud-controller-manager are
supported at 1.21 and 1.20
Note: If version skew exists between kube-apiserver instances in an HA
cluster, and these components can communicate with any kube-apiserver
instance in the cluster (for example, via a load balancer), this narrows the
allowed versions of these components.
Example:
- kube-apiserver instances are at 1.21 and 1.20
- kube-controller-manager, kube-scheduler, and cloud-controller-manager
communicate with a load balancer that can route to any kube-apiserver
instance
- kube-controller-manager, kube-scheduler, and cloud-controller-manager are
supported at 1.20 (1.21 is not supported because that would be newer than
the kube-apiserver instance at version 1.20)
kubectl
kubectl is supported within one minor version (older or newer) of
kube-apiserver.
Example:
- kube-apiserver is at 1.21
- kubectl is supported at 1.22, 1.21, and 1.20
Note: If version skew exists between kube-apiserver instances in an HA
cluster, this narrows the supported kubectl versions.
Example:
- kube-apiserver instances are at 1.21 and 1.20
- kubectl is supported at 1.21 and 1.20 (other versions would be more than
one minor version skewed from one of the kube-apiserver components)
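The kubectl rule differs from the kubelet one in that skew is allowed in both directions. As an illustrative sketch (not an official tool; names and the "X.Y" string format are assumptions), it amounts to a within-one check against every kube-apiserver instance:

```python
def kubectl_supported(kubectl, apiservers):
    """kubectl must be within one minor version (older or newer)
    of every kube-apiserver instance it may talk to."""
    c = int(kubectl.split(".")[1])
    return all(abs(c - int(v.split(".")[1])) <= 1 for v in apiservers)

# Single apiserver at 1.21: kubectl 1.22, 1.21, and 1.20 are supported.
# HA apiservers at 1.21 and 1.20: only kubectl 1.21 and 1.20 are supported.
```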
Supported component upgrade order
The supported version skew between components has implications on the order in which components must be upgraded.
This section describes the order in which components must be upgraded to transition an existing cluster from version 1.20 to version 1.21.
kube-apiserver
Pre-requisites:
- In a single-instance cluster, the existing kube-apiserver instance is 1.20
- In an HA cluster, all kube-apiserver instances are at 1.20 or 1.21 (this
ensures maximum skew of 1 minor version between the oldest and newest
kube-apiserver instance)
- The kube-controller-manager, kube-scheduler, and cloud-controller-manager
instances that communicate with this server are at version 1.20 (this
ensures they are not newer than the existing API server version, and are
within 1 minor version of the new API server version)
- kubelet instances on all nodes are at version 1.20 or 1.19 (this ensures
they are not newer than the existing API server version, and are within 2
minor versions of the new API server version)
- Registered admission webhooks are able to handle the data the new
kube-apiserver instance will send them:
  - ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects
  are updated to include any new versions of REST resources added in 1.21
  (or use the matchPolicy: Equivalent option available in v1.15+)
  - The webhooks are able to handle any new versions of REST resources that
  will be sent to them, and any new fields added to existing versions in 1.21
Upgrade kube-apiserver to 1.21
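As a hedged illustration of the matchPolicy: Equivalent option mentioned in the prerequisites, a webhook configuration might opt in as follows; the webhook name, service, and rules here are hypothetical examples, not values from this document:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook            # hypothetical name
webhooks:
  - name: check.example.com        # hypothetical webhook
    matchPolicy: Equivalent        # requests for equivalent API versions are
                                   # converted to the version listed in rules
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: default
        name: example-webhook-svc  # hypothetical service
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

With matchPolicy: Equivalent, the webhook keeps seeing the resource version it declared even when clients use a newer equivalent version added in the upgraded release.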
kube-controller-manager, kube-scheduler, and cloud-controller-manager
Pre-requisites:
- The kube-apiserver instances these components communicate with are at 1.21
(in HA clusters in which these control plane components can communicate with
any kube-apiserver instance in the cluster, all kube-apiserver instances
must be upgraded before upgrading these components)
Upgrade kube-controller-manager, kube-scheduler, and cloud-controller-manager
to 1.21
kubelet
Pre-requisites:
- The kube-apiserver instances the kubelet communicates with are at 1.21
Optionally upgrade kubelet instances to 1.21 (or they can be left at 1.20 or
1.19)
Note: Before performing a minor version kubelet upgrade, drain pods from that
node. In-place minor version kubelet upgrades are not supported.
Warning: Running a cluster with kubelet instances that are persistently two
minor versions behind kube-apiserver is not recommended:
- they must be upgraded within one minor version of kube-apiserver before the
control plane can be upgraded
- it increases the likelihood of running kubelet versions older than the
three maintained minor releases
kube-proxy
kube-proxy must be the same minor version as kubelet on the node.
kube-proxy must not be newer than kube-apiserver.
kube-proxy must be at most two minor versions older than kube-apiserver.
Example:
If kube-proxy version is 1.19:
- kubelet version must be at the same minor version as 1.19.
- kube-apiserver version must be between 1.19 and 1.21, inclusive.
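The three kube-proxy constraints above combine into one predicate. As an illustrative sketch (function name and "X.Y" string format are assumptions, not a Kubernetes API):

```python
def kube_proxy_supported(kube_proxy, kubelet, apiserver):
    """kube-proxy must match the node's kubelet minor version, must not
    be newer than kube-apiserver, and must be at most two minor
    versions older than kube-apiserver."""
    p, k, a = (int(v.split(".")[1]) for v in (kube_proxy, kubelet, apiserver))
    return p == k and a - 2 <= p <= a

# Example from above: kube-proxy 1.19 requires kubelet 1.19 and a
# kube-apiserver between 1.19 and 1.21 inclusive.
```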