This repository has been archived by the owner on Feb 9, 2022. It is now read-only.

Load-based scaling #194

Open
RAnders00 opened this issue Dec 22, 2020 · 4 comments
Labels
enhancement New feature or request

Comments

@RAnders00
Contributor

Create new connections when lots of messages need to be sent

@RAnders00 RAnders00 added the enhancement New feature or request label Dec 22, 2020
@5E7EN
Contributor

5E7EN commented Dec 23, 2020

Is this in order to send messages faster than the default Twitch-enforced interval of 100ms by distributing the messages across multiple connections? Why can't messages just be queued on a single connection?

@RAnders00
Contributor Author

Yes, it's something larger bots might need if they consistently have to send messages faster than 100ms intervals. On a single connection, the messages could back up faster than they can be sent. This also seems useful for use cases like mass-banning bots or similar.
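A minimal sketch of what load-based scaling could look like: a pool that opens a new connection whenever every existing connection's send queue is already full. The `Connection` and `Pool` names and the queue-depth threshold are illustrative assumptions, not this library's actual API.

```typescript
// Illustrative sketch only, not the library's real types.
class Connection {
  queue: string[] = [];
  send(msg: string): void {
    // In a real client this would rate-limit and write to the socket.
    this.queue.push(msg);
  }
}

class Pool {
  private connections: Connection[] = [new Connection()];
  // If every connection already has this many messages pending,
  // open another connection instead of queueing further.
  constructor(private maxPendingPerConnection = 10) {}

  send(msg: string): void {
    // Pick the least-loaded connection.
    let best = this.connections.reduce((a, b) =>
      a.queue.length <= b.queue.length ? a : b);
    // All connections are saturated: scale out.
    if (best.queue.length >= this.maxPendingPerConnection) {
      best = new Connection();
      this.connections.push(best);
    }
    best.send(msg);
  }

  get size(): number {
    return this.connections.length;
  }
}
```

A production version would also need to tear idle connections back down and respect Twitch's per-account join/auth rate limits when opening new connections.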

@5E7EN
Contributor

5E7EN commented Dec 23, 2020

This is a fair request. However, there's one main issue I've experienced in the past while working with similar configurations: messages being sent out of order. I assume this is because of clients fluctuating in latency, along with not actually confirming message delivery. See here: https://i.imgur.com/qaNe9JN.gif (supposed 10-width pyramid).

@TroyKomodo
Contributor

TroyKomodo commented Dec 27, 2020

@5E7EN @RAnders00 Perhaps create message groups where all messages within a group must be processed on a single connection, and create a new connection if no existing connection can support the size of the group?
That solves the out-of-order issue, since you could then just wrap all the messages that need to be sent in order in a MessageGroup.
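The MessageGroup idea could be sketched like this: assign a whole group atomically to the first connection with enough spare queue capacity, or open a fresh connection when none fits. `Conn`, `assignGroup`, and the capacity numbers are hypothetical names for illustration.

```typescript
// Hypothetical sketch of the MessageGroup assignment, not real library code.
interface Conn {
  pending: string[]; // messages queued on this connection
  capacity: number;  // max messages we allow pending at once
}

function assignGroup(conns: Conn[], group: string[], capacity = 10): Conn {
  // Find a connection that can take the entire group at once,
  // so the group's messages stay in order on one connection.
  let target = conns.find(c => c.pending.length + group.length <= c.capacity);
  // No existing connection can support the size of the group: open a new one.
  if (!target) {
    target = { pending: [], capacity: Math.max(capacity, group.length) };
    conns.push(target);
  }
  target.pending.push(...group);
  return target;
}
```

Since one connection sends its queue sequentially, ordering within a group is preserved without cross-connection coordination; only ordering between different groups would remain unspecified.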
