
@subhtk (Contributor) commented Feb 26, 2024

Version(s): 4.12+

Issue: OCPBUGS-26050

Link to docs preview: Preview

QE review:

  • QE has approved this change.

Additional information:

openshift-ci bot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) on Feb 26, 2024
@ocpdocs-previewbot commented Feb 26, 2024

@patrickdillon (Contributor) commented:

This LGTM, but it should also have an ack from QE or the engineer responsible for this change.

@openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and become stale) on Aug 16, 2024
openshift-merge-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Aug 16, 2024
@openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Sep 15, 2024
@openshift-bot commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this on Oct 16, 2024
@openshift-ci bot commented Oct 16, 2024

@openshift-bot: Closed this PR.


In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@subhtk (Contributor, Author) commented Oct 16, 2024

/reopen

@subhtk (Contributor, Author) commented Oct 16, 2024

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label on Oct 16, 2024
openshift-ci bot reopened this on Oct 16, 2024
@openshift-ci bot commented Oct 16, 2024

@subhtk: Reopened this PR.


In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Jan 15, 2025
@openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 15, 2025
@subhtk (Contributor, Author) commented Mar 5, 2025

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label on Mar 5, 2025
@subhtk force-pushed the ocpbug26050 branch 2 times, most recently from a2d8edb to 8bce5c5, on April 7, 2025 at 11:34
openshift-merge-robot removed the needs-rebase label on Apr 7, 2025

include::modules/installation-azure-preparing-diskencryptionsets.adoc[leveloffset=+1]

include::modules/installation-azure-day2-operations-diskencryptionsets.adoc[leveloffset=+1]
A reviewer (Contributor) commented:

Suggested change:
- include::modules/installation-azure-day2-operations-diskencryptionsets.adoc[leveloffset=+1]
+ // Preparing an Azure Disk Encryption Set for Day2 Operator
+ include::modules/installation-azure-day2-operations-diskencryptionsets.adoc[leveloffset=+1]

Day 2 or post-installation? From my time on the installation team, they preferred post-installation.

@subhtk (Contributor, Author) replied:

@jinyunma can you take a look?

@jinyunma replied:

This section is applied optionally on a running cluster, so I think day2 should be okay. But the section title looks confusing.

During installation (day1), the installer supports enabling disk encryption with both platform-managed keys and customer-managed keys. When enabling it with customer-managed keys, the user needs to create a disk encryption set resource to provide those keys; that is what the section "Preparing an Azure Disk Encryption Set" describes.

For day2 operations, only encryption at host with platform-managed keys is verified by the steps described in this PR. How about updating the title to "Enable disk encryption with platform-managed keys on day2"?
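
For reference, the customer-managed-key path described above is wired through `install-config.yaml`; a minimal sketch of the relevant fields, with placeholder values (the names here are illustrative, not taken from this PR):

[source,yaml]
----
controlPlane:
  platform:
    azure:
      osDisk:
        diskEncryptionSet:
          # Existing disk encryption set that wraps the customer-managed key
          subscriptionId: <subscription_id>
          resourceGroup: <resource_group_name>
          name: <disk_encryption_set_name>
----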

$ oc adm uncordon <node_name>
----

. Make sure that all the operators are available.
A reviewer (Contributor) commented:

On the line:

> . Make sure that all the operators are available.

How?

@subhtk (Contributor, Author) replied:

@jinyunma can you verify if this is the correct step:

. Verify that all cluster Operators are available:
+
[source,terminal]
----
$ oc get clusteroperators
----
+
All Operators should show `AVAILABLE=True`, `PROGRESSING=False`, and `DEGRADED=False`.
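
For reference, on a healthy cluster the output looks roughly like this (names truncated, versions and timings illustrative):

[source,terminal]
----
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.12.0    True        False         False      24m
console          4.12.0    True        False         False      27m
...
----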

@jinyunma replied:

How about checking the node status here instead of the Operators' status? Similar to the hardware update for a VM on vSphere:

Power on the VM -> wait for the node to report as "Ready" (`$ oc wait --for=condition=Ready node/<node_name>`) -> mark the node as schedulable again (`$ oc adm uncordon <node_name>`).

The same procedure is also verified in our automation script.
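
In the module's `[source,terminal]` style, that sequence would read roughly as follows (the `<node_name>` placeholder and the timeout value are illustrative):

[source,terminal]
----
$ oc wait --for=condition=Ready node/<node_name> --timeout=10m
$ oc adm uncordon <node_name>
----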

@openshift-ci bot commented Apr 11, 2025

@subhtk: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

[id="preparing-disk-encryption-sets-day2-operator_{context}"]
= Preparing an Azure Disk Encryption Set for Day2 Operator

The {product-title} installation program can use an existing Disk Encryption Set with a user-managed key. To enable this feature, create a `DiskEncryptionSet` in Azure and provide the key to the installation program.
@jinyunma commented Apr 22, 2025:

Based on #72130 (comment), I suggest updating here too, something like:

If disk encryption is not configured during installation, you can enable disk encryption with platform-managed keys on each node once the cluster is up and running.
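
For context on the module text quoted above, the disk encryption set and the key it wraps are typically created with the Azure CLI; a minimal sketch with placeholder names (the exact commands shown here are a suggestion, not what this PR documents):

[source,terminal]
----
$ az keyvault create -n <keyvault_name> -g <resource_group> --enable-purge-protection true
$ az keyvault key create --vault-name <keyvault_name> -n <key_name> --protection software
$ az disk-encryption-set create -n <des_name> -g <resource_group> \
    --source-vault <keyvault_name> \
    --key-url "$(az keyvault key show --vault-name <keyvault_name> -n <key_name> --query key.kid -o tsv)"
----

The disk encryption set's identity also needs read access to the vault key (for example, via `az keyvault set-policy`) before the installation program can use it.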

@bergerhoffer (Contributor) commented:
The branch/enterprise-4.20 label has been added to this PR.

This is because your PR targets the main branch and is labeled for enterprise-4.19. And any PR going into main must also target the latest version branch (enterprise-4.20).

If the update in your PR does NOT apply to version 4.20 onward, please re-target this PR to go directly into the appropriate version branch or branches (enterprise-4.x) instead of main.

@openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Sep 15, 2025
@openshift-bot commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 15, 2025
bergerhoffer added this to the Continuous Release milestone on Oct 20, 2025
@bergerhoffer (Contributor) commented:
The branch/enterprise-4.21 label has been added to this PR.

This is because your PR targets the main branch and is labeled for enterprise-4.20. And any PR going into main must also target the latest version branch (enterprise-4.21).

If the update in your PR does NOT apply to version 4.21 onward, please re-target this PR to go directly into the appropriate version branch or branches (enterprise-4.x) instead of main.

@mburke5678 (Contributor) commented:
Closed in favor of #101966

@openshift-bot commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this on Dec 7, 2025
@openshift-ci bot commented Dec 7, 2025

@openshift-bot: Closed this PR.


In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
