The server is async, so there are no blocking functions exposed.

It passes all Autobahn tests, meaning it properly handles close frames & pings etc.

Timers are used to force close connections. The C++ HTTP server does not currently time out, but the Node.js HTTP server does, so this is one issue that needs to be fixed, yes.



Just looked at the code. You seem to queue everything and therefore never block. That can be OK for some use cases, but it means the send call provides no backpressure, which becomes a problem with slow receivers. In Node it also breaks the stream semantics: if someone pipes 1 GB of messages into your socket (or sends them directly where Writable streams aren't supported), they will think the data has been sent immediately and won't know it is still buffered at a lower layer. Piping a fast source into a slow receiver then lets that buffer grow without bound, which is exactly what Node streams are designed to prevent.
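
To make the concern concrete, here is a minimal sketch (illustrative JavaScript, not this library's actual API; ws, send, and the file name are made up). Node's Writable contract signals backpressure through write()'s return value and the 'drain' event; a send that always queues internally and returns at once never produces that signal, so a piped fast source is never paused:

    // Hypothetical Writable wrapper around a ws.send() that always queues.
    const fs = require('fs');
    const { Writable } = require('stream');

    const sink = new Writable({
      write(chunk, encoding, callback) {
        ws.send(chunk); // returns immediately; bytes sit in an internal queue
        callback();     // signals "done", so the pipe never pauses the source
      }
    });

    // With a raw TCP socket, write() would return false once its buffer is
    // full and the pipe would wait for 'drain'. Here the whole 1 GB file is
    // read as fast as the disk allows and piles up in the send queue if the
    // peer is slow.
    fs.createReadStream('huge-1gb-file').pipe(sink);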

And another thing I saw there: your SHORT_SEND optimization looks broken, since it does not seem to track whether the buffer is already in use by another message that is still queued for sending. So short messages could corrupt each other.


I can assure you, the short send optimization is not broken.


OK, the buffer does seem to be copied in the queuing code when it needs to be queued. That's fine.
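
For readers following along, a rough sketch of why that copy matters (illustrative JavaScript, not the actual C++ code; scratch, queue, socket and sendShort are made-up names): if the shared scratch buffer were handed to the queue without a copy, the next short send would overwrite bytes that are still waiting to go out.

    // Shared scratch buffer reused for every short send.
    const scratch = Buffer.alloc(64);
    const queue = [];

    function sendShort(msg, canWriteNow) {
      const len = scratch.write(msg);           // frame the message in scratch
      if (canWriteNow) {
        socket.write(scratch.subarray(0, len)); // written out right away
      } else {
        // Copying here is what prevents the corruption described above:
        // without it, a later sendShort() would scribble over bytes that
        // are still queued.
        queue.push(Buffer.from(scratch.subarray(0, len)));
      }
    }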

But it would have been more helpful to point to those lines of code instead of just "assuring".



