Answer (Correct Order):
1. Navigate in VCF Operations to Fleet Management → Tasks, select the instance, and review the Tasks
2. Check for errors or failed ClusterConfig objects
3. SSH into the Supervisor Cluster to review logs / attempt manual cleanup if required
4. Restart the WCP Service
To determine why a Supervisor upgrade failed, you start where the platform records the most actionable failure context: the management-plane task history. The Fleet Management → Tasks view surfaces the upgrade workflow steps, timestamps, and the specific operation that failed, which gives you the fastest pointer to the failing phase (precheck, image staging, reconciliation, add-on updates, etc.). Next, you validate the Kubernetes-side reconciliation state by checking for failed ClusterConfig objects, because Supervisor enablement and upgrade actions are commonly driven by controller reconciliation; a failed ClusterConfig often contains the clearest "reason" and "message" fields that explain what blocked progress (validation errors, dependency issues, unreachable endpoints, incompatible components).
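As a minimal sketch of that second check, the snippet below pulls failing status conditions out of a ClusterConfig dump. It assumes the objects were fetched with something like `kubectl get clusterconfigs -A -o json` and that they follow the usual Kubernetes condition schema (`status.conditions[].reason/message`); the resource name, sample object, and condition fields here are illustrative assumptions, not a guaranteed API for your Supervisor version.

```python
import json

def failed_conditions(clusterconfig_json: str):
    """Extract failing status conditions from a ClusterConfig JSON dump.

    Assumes standard Kubernetes-style conditions; the resource name and
    exact schema are assumptions and vary by Supervisor release.
    """
    doc = json.loads(clusterconfig_json)
    items = doc.get("items", [doc])  # accept a list result or a single object
    failures = []
    for obj in items:
        name = obj.get("metadata", {}).get("name", "<unknown>")
        for cond in obj.get("status", {}).get("conditions", []):
            # A condition with status "False" usually carries the most
            # useful "reason"/"message" pair for a blocked upgrade.
            if cond.get("status") == "False":
                failures.append((name, cond.get("reason"), cond.get("message")))
    return failures

# Illustrative (fabricated) sample mimicking a failed reconciliation:
sample = json.dumps({
    "items": [{
        "metadata": {"name": "domain-c8"},
        "status": {"conditions": [
            {"type": "Ready", "status": "False",
             "reason": "ComponentUpgradeFailed",
             "message": "image staging endpoint unreachable"},
        ]},
    }]
})
print(failed_conditions(sample))
```

The point of the sketch is the triage pattern: filter to `status: "False"` conditions first, since their reason/message pair is usually the shortest path to the blocking component.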
If the management-plane task view and ClusterConfig status still don't isolate the root cause, the next escalation is to SSH into the Supervisor to inspect component logs and events and identify low-level failures that may not bubble up cleanly (for example, admission failures, controller crashes, or connectivity/auth issues). Finally, restarting WCP is a remediation step that can clear stuck states, but it is placed last because it can alter evidence and should not be your first action while you are still trying to identify the original failure cause.
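Once you are on the Supervisor, log triage is mostly pattern matching. The sketch below flags lines matching the failure classes named above (admission denials, crashing controllers, connectivity/auth errors); the exact message formats and log locations vary by release, so treat these signatures and the sample lines as illustrative assumptions, not a fixed list.

```python
import re

# Failure signatures worth flagging in Supervisor component logs.
# These patterns are illustrative assumptions; real message formats
# differ across releases.
ERROR_SIGNATURES = [
    re.compile(r"admission webhook .* denied", re.IGNORECASE),
    re.compile(r"CrashLoopBackOff"),
    re.compile(r"(connection refused|i/o timeout|TLS handshake error)", re.IGNORECASE),
    re.compile(r"(unauthorized|certificate has expired)", re.IGNORECASE),
]

def triage(log_lines):
    """Return (line_number, line) pairs that match a known failure signature."""
    hits = []
    for i, line in enumerate(log_lines, start=1):
        if any(p.search(line) for p in ERROR_SIGNATURES):
            hits.append((i, line.rstrip()))
    return hits

# Fabricated sample lines for demonstration:
sample_log = [
    "I0301 10:02:11 reconcile started for supervisor upgrade",
    'E0301 10:02:14 admission webhook "vmoperator" denied the request',
    "E0301 10:02:20 dial tcp 10.0.0.5:443: connection refused",
]
for lineno, line in triage(sample_log):
    print(f"{lineno}: {line}")
```

Collecting hits like this before restarting anything preserves exactly the evidence the paragraph warns a WCP restart can destroy.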