Why does SetIdleTimeout call SetKeepAlive? #61
Yes, we haven't found a good implementation for this; the main problem is that the cost is high. We'll evaluate it again, or consider removing it.
Indeed, kicking idle connections has nothing to do with keep-alive. Kicking idle connections needs something like a timer to support it; see section 7.10 of the muduo book, "Using a timing wheel to kick idle connections".
Do you have an example of this implementation we can use as a workaround in the meantime?
You may refer to http://www.cs.columbia.edu/~nahum/w6998/papers/ton97-timing-wheels.pdf. And Google something like heap-based timer, timing-wheel, etc.
Actually it's just
cool
There's real talent in this comment section.
Early versions used a self-implemented heap timer. But in the deadline use case, where the post-timeout callback is just a fast close callback, the standard-library timer is good enough, so the self-implemented heap timer from the early versions was dropped and the standard-library timer is used now. The earlier link is dead; the new one:
Also, TCP keepalive and application-layer keepalive are different things; someone asked about this in gnet before and it was discussed there:
In scenarios with few connections there's really no need to set this at all, so any scenario worth considering has at least tens of thousands of connections. And with tens of thousands of connection objects, the cost of maintaining idle time can be very high. That's why it has never been implemented. Even with Go timers, having tens of thousands of timers in the Go runtime, each needing constant modtimer calls (because connections keep seeing new reads and writes), is still quite expensive.
@joway The standard-library heap timer isn't that expensive; ten thousand connections is not large at all.
Is this issue still open? We have this need now as well. Specifically, we use netpoll to implement the WebSocket protocol. On the server side, when the server actively closes a connection, it needs to wait either for the client to respond with a close frame or for a timeout, and clean up the underlying connection when either event happens.
@someview I'll note a TODO and implement it later.
(Original issue body) The concrete implementation of SetIdleTimeout in the Connection interface calls SetKeepAlive. My understanding is that a connection should be closed once its idle time exceeds the idle timeout, whereas KeepAlive periodically sends packets to keep the connection alive. How can that close a connection whose idle time has exceeded the timeout?