
To support multiple ports for autoscaling #8619

Closed
palashbiswas-git opened this issue Feb 7, 2024 · 6 comments

@palashbiswas-git

ISSUE TYPE
  • Bug Report
  • Improvement Request
  • Enhancement Request
COMPONENT NAME
Autoscaling
CLOUDSTACK VERSION
4.19.0.0
SUMMARY
Could you please enhance CloudStack to support multiple ports in autoscaling? The current limit of a single load balancer port per autoscaling group restricts the deployment of real-world production infrastructure with autoscaling groups.
EXPECTED RESULTS
Support for multiple load balancer ports per autoscaling group in the next release.
ACTUAL RESULTS
An autoscaling VM group can be bound to only one load balancer rule, i.e. a single public port.
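
For context, this is roughly how an autoscale VM group is wired up today with CloudMonkey; the group is tied to exactly one load balancer rule, i.e. one public/private port pair. UUIDs are placeholders, and the exact parameter sets should be verified against the 4.19 API reference:

```sh
# Today: one LB rule covers a single public/private port pair
cmk create loadbalancerrule publicipid=<ip-uuid> name=web-lb \
    algorithm=roundrobin publicport=80 privateport=8080

# createAutoScaleVmGroup accepts a single lbruleid -- there is no way
# to pass a list of rules, which is the limitation described above
cmk create autoscalevmgroup lbruleid=<lb-rule-uuid> \
    minmembers=2 maxmembers=10 \
    scaleuppolicyids=<scaleup-policy-uuid> \
    scaledownpolicyids=<scaledown-policy-uuid> \
    vmprofileid=<vm-profile-uuid>
```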

@DaanHoogland (Contributor)

At first sight this looks like a valid feature request @palashbiswas-git, but can you elaborate on the configurations/requests/settings you expect and how they would behave, including error handling?

@palashbiswas-git (Author)

Hi @DaanHoogland ,

I'm currently working with a container-based application that supports autoscaling. However, I'm blocked by the limitation of a single load balancer with a single port, which prevents me from placing it in an autoscaling group, especially when I run multiple containers in a single VM/instance. As you know, apart from frontend web applications, most applications use multiple ports, since the VM needs to connect to other nodes, containers, and services.

Because of this limitation, I believe many CloudStack users are unable to put those critical backend systems/applications behind autoscaling groups.

Therefore, it would be greatly appreciated if this feature could be included as an enhancement in an upcoming release.

I hope this clarifies the issue and the configuration/settings we are looking for; a purely hypothetical sketch of one possible API shape follows below.

Thank you.
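
To make the ask concrete, one possible shape would be letting the group reference several load balancer rules, one per exposed port. This is purely hypothetical syntax: no lbruleids (plural) parameter exists in CloudStack today.

```sh
# HYPOTHETICAL: lbruleids (plural) does not exist in CloudStack today;
# it only illustrates the kind of API extension being requested
cmk create autoscalevmgroup \
    lbruleids=<web-rule-uuid>,<api-rule-uuid>,<service-rule-uuid> \
    minmembers=2 maxmembers=10 \
    scaleuppolicyids=<scaleup-policy-uuid> \
    scaledownpolicyids=<scaledown-policy-uuid> \
    vmprofileid=<vm-profile-uuid>
```

On scale-up, the new VM would need to be assigned to every listed rule, so the error handling Daan asks about would have to define what happens on partial failure (for example, removing the VM from all rules and retrying, rather than leaving it half-wired).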

@kiranchavala (Contributor)

Hi @palashbiswas-git

Have you tried the CloudStack autoscaler for Kubernetes?

https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/cloudstack

When pods scale up and the Kubernetes cluster needs more capacity, CloudStack automatically provisions a new node.

apache/cloudstack-kubernetes-provider#52 (comment)
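
For anyone evaluating that route, the CloudStack provider for the cluster autoscaler is configured roughly as below. This is a sketch from the linked README's pattern; the exact flags and config keys should be verified there.

```sh
# cloud-config read by the autoscaler (API keys from a CloudStack account)
cat > cloud-config <<'EOF'
[Global]
api-url    = https://<management-server>/client/api
api-key    = <api-key>
secret-key = <secret-key>
EOF

# Scale a CKS cluster's worker pool between 1 and 5 nodes
cluster-autoscaler \
    --cloud-provider=cloudstack \
    --cloud-config=cloud-config \
    --nodes=1:5:<cks-cluster-id>
```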

@palashbiswas-git (Author)

Hi @kiranchavala,

You are correct that this can be achieved through Kubernetes. However, our requirement is to use either containers or plain VMs, where we cannot attach multiple load balancers, or a single load balancer with multiple ports, to an autoscaling group.

@btzq commented Mar 4, 2024

This is a great feature for us. Right now, to work around this limitation, we deploy an NGINX instance in the VM, configured with static proxy routes that forward traffic to the multiple Docker containers in the VM.

Then, we scale the entire VM altogether.
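
A minimal sketch of that kind of single-entry-point NGINX config, assuming (purely for illustration) an app container published on local port 8080 and an API container on local port 9090:

```nginx
# nginx.conf: one NGINX in the VM fronts all containers, so the
# autoscale group's single LB port (80) is enough to reach every service
events {}

http {
    server {
        listen 80;

        # app container published on 127.0.0.1:8080
        location /app/ {
            proxy_pass http://127.0.0.1:8080/;
        }

        # API container published on 127.0.0.1:9090
        location /api/ {
            proxy_pass http://127.0.0.1:9090/;
        }
    }
}
```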

It's a pretty hacky workaround, but it helps simplify our deployment.

Autoscaling with multiple ports would be really great! I fully support it.

@DaanHoogland (Contributor)

@palashbiswas-git, I am sure you have a valid use case, and I think something like this could be implemented in the future. However, I read this as a generic problem statement rather than a very explicit one. I am moving this to a discussion to distill it further and create (probably multiple) issues once it is clearer.

@apache apache locked and limited conversation to collaborators May 31, 2024
@DaanHoogland DaanHoogland converted this issue into discussion #9152 May 31, 2024
