problem
While downscaling a Kubernetes cluster, the async job fails with an IllegalArgumentException during execution of ScaleKubernetesClusterCmd.
After the failure, the Kubernetes cluster remains stuck in the SCALE state and does not recover automatically.
This issue is reproducible when reducing the number of worker nodes.
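For context, the error text returned in the failed job result below, fromIndex(4) > toIndex(3), matches the message produced by the JDK's sub-list range check: java.util.List#subList throws an IllegalArgumentException with exactly this wording when the requested start index is greater than the end index. A minimal sketch (plain JDK code, not taken from the CloudStack source) that reproduces the same message:

import java.util.Arrays;
import java.util.List;

public class SubListRangeDemo {
    public static void main(String[] args) {
        // Three entries, mirroring the 3-node cluster in the reproduction below.
        List<String> nodes = Arrays.asList("node-1", "node-2", "node-3");
        try {
            // A start index past the end of the list trips the subList range
            // check before any element is touched.
            nodes.subList(4, nodes.size());
        } catch (IllegalArgumentException e) {
            // Prints: fromIndex(4) > toIndex(3)
            System.out.println(e.getMessage());
        }
    }
}

This suggests the scale-down path computes a sub-list range against more nodes than the fetched list actually contains, though the exact index math would need to be confirmed in the ScaleKubernetesClusterCmd / scale-worker code.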
cmk listKubernetesClusters id=cff9d134-82a9-4383-a953-db8098536d68 filter=size
{
  "count": 1,
  "kubernetescluster": [
    {
      "size": 3
    }
  ]
}
cloud@acsm1:~$ cmk scaleKubernetesCluster id=cff9d134-82a9-4383-a953-db8098536d68 size=2
{
  "account": "admin",
  "accountid": "67aa9f23-d520-11f0-b69a-525400aa0210",
  "cmd": "org.apache.cloudstack.api.command.user.kubernetes.cluster.ScaleKubernetesClusterCmd",
  "completed": "2025-12-10T16:25:57+0000",
  "created": "2025-12-10T16:25:57+0000",
  "domainid": "e89c5713-d51f-11f0-b69a-525400aa0210",
  "domainpath": "ROOT",
  "jobid": "560b7207-4c03-4d86-b6f5-f4822ad6c314",
  "jobprocstatus": 0,
  "jobresult": {
    "errorcode": 530,
    "errortext": "fromIndex(4) > toIndex(3)"
  },
  "jobresultcode": 530,
  "jobresulttype": "object",
  "jobstatus": 2,
  "userid": "67ab52c3-d520-11f0-b69a-525400aa0210"
}
🙈 Error: async API failed for job 560b7207-4c03-4d86-b6f5-f4822ad6c314
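A hedged reading of why the cluster then stays stuck (an illustrative sketch with hypothetical names, not the actual CloudStack scale worker): the scale operation moves the cluster into a transitional state before resizing, and since IllegalArgumentException is unchecked, it can bypass whatever would normally return the cluster to a stable state when the resize fails. In outline, the defensive shape that avoids a stuck cluster is:

// Hypothetical illustration of the stuck-state symptom, not CloudStack code:
// if the transition back to a stable state only happens on the expected
// failure paths, an unchecked exception such as the IllegalArgumentException
// above leaves the cluster in the transitional scaling state.
final class ClusterScalerSketch {
    enum State { RUNNING, SCALING }

    private State state = State.RUNNING;

    void scale(Runnable resize) {
        state = State.SCALING;      // cluster enters the transitional state
        try {
            resize.run();           // may throw an unchecked exception
        } catch (RuntimeException e) {
            state = State.RUNNING;  // roll back so the cluster is not left stuck
            throw e;                // still report the async job as failed
        }
        state = State.RUNNING;      // normal completion
    }
}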
versions
Apache CloudStack 4.22
Management Server running on Ubuntu 24.04
Kubernetes Server Version: v1.33.1
The steps to reproduce the bug
...
What to do about it?
No response