Infuse does not close outstanding WebDAV HTTPS connections when seeking, causing slow streaming speeds

Tested on Apple TV and iPhone.

When seeking, Infuse abandons the previous streaming TCP socket (i.e. stops calling read(2)) without closing it (close(2)). When using ss to monitor open TCP connections on the WebDAV server, an open TCP socket with a backed-up send queue (Send-Q) is observed, one for each seek.
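For context, the difference between abandoning a connection and closing it, as seen from the server side, can be reproduced with a minimal loopback sketch (this is illustrative Python, not Infuse's code):

```python
# Minimal sketch (not Infuse's code): contrast a client that abandons a
# TCP connection (stops reading) with one that closes it, from the
# server's point of view. Uses a loopback connection.
import socket
import time

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

# "Abandon": the client never reads. The client's receive buffer and the
# server's send buffer both fill, and the connection sits in ESTAB with
# a non-draining Send-Q -- exactly what ss shows on the server.
conn.setblocking(False)
queued = 0
try:
    while True:
        queued += conn.send(b"x" * 65536)
except BlockingIOError:
    pass  # buffers are full; the server is now stalled
print(f"server stalled with {queued} bytes queued, connection still open")

# "Close": the client calls close(2). Because unread data is pending,
# the kernel typically sends an RST; either way the server can detect
# the close on its next read and free the socket.
cli.close()
saw_close = False
for _ in range(200):
    try:
        if conn.recv(4096) == b"":
            saw_close = True  # graceful FIN from the peer
            break
    except BlockingIOError:
        time.sleep(0.01)  # close not visible yet; poll again
    except ConnectionResetError:
        saw_close = True  # RST because unread data was discarded
        break
print("server saw the close" if saw_close else "server never noticed")
conn.close()
srv.close()
```

In the "abandon" half the server has no way to know the client is gone; in the "close" half it finds out on its very next read.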

There seem to be other cases as well. For example, upon starting a new video, two HTTP requests are made on two different sockets with different Range headers, but then one of them (the first one?) is similarly abandoned (backed-up send queue on the server side, never closed).

Upon backing out of the video, all sockets (including the stale ones) are closed at once.

It should be Infuse’s responsibility to close these sockets rather than leaving the server to time them out. I’m convinced this is an Infuse issue because I’ve never observed it with VLC or mpv (old sockets are always closed immediately on seek), yet it happens every time I play a video with Infuse.

This can have a large impact on streaming speed as these sockets pile up.

By default, Infuse uses an intelligent disk cache which allows different segments of a file to be downloaded independently. This is especially helpful with network connections that provide intermittent speeds.

If you prefer not to use this, you can change the ‘Streaming Cache’ option found in Infuse > Settings > Playback from Auto to Legacy. Legacy uses a more traditional memory-based approach to streaming, which may help avoid these extra connections.

Thank you for replying! I tried switching to “Legacy” but still see the same problem.

I can appreciate that downloading different segments independently can improve network speeds, but these look like stale connections that are never reused after seeking. After a seek, the connection for the previously downloading segment is kept open with unread data available, and unless I forcefully close them from the server side, they remain open until playback stops or I exit the video. I can see that the kernel send queue size for each such connection stays exactly the same, without changing.

For example, this is the output of my last test after a seek:

ESTAB 0 3862536 timer:(persist,6.628ms,0) uid:1000 ino:1059673 sk:46e <->
ESTAB 0 3689128 timer:(persist,4.388ms,0) uid:1000 ino:1059890 sk:470 <->
ESTAB 0 3530172 timer:(persist,9.124ms,0) uid:1000 ino:1059858 sk:46c <->
ESTAB 0 4039248 timer:(on,436ms,0) uid:1000 ino:1059906 sk:472 <->

Three of the connections are left open with ~3.5 MB each in the Send-Q, never reused or closed, and only one connection is actively downloading.
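As a quick tally of that output (a throwaway sketch; it assumes the trimmed column layout shown above, where the third field is Send-Q and a persist timer means the peer’s receive window has closed because nothing is reading):

```python
# Sketch: tally Send-Q bytes from the trimmed `ss` output lines above.
# Field layout assumed: State Recv-Q Send-Q timer:(...) ...
sample = """\
ESTAB 0 3862536 timer:(persist,6.628ms,0) uid:1000 ino:1059673 sk:46e <->
ESTAB 0 3689128 timer:(persist,4.388ms,0) uid:1000 ino:1059890 sk:470 <->
ESTAB 0 3530172 timer:(persist,9.124ms,0) uid:1000 ino:1059858 sk:46c <->
ESTAB 0 4039248 timer:(on,436ms,0) uid:1000 ino:1059906 sk:472 <->"""

# A persist timer fires zero-window probes: the peer stopped reading.
stalled = [int(line.split()[2]) for line in sample.splitlines()
           if "persist" in line]
print(f"{len(stalled)} stalled connections, "
      f"{sum(stalled) / 1e6:.1f} MB stuck in Send-Q")
```

That’s roughly 11 MB of data queued toward sockets nothing is reading, after a single round of seeks.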

One more thought: the Range request Infuse sends when downloading a segment always extends to the end of the file (i.e. Range: bytes=[segment start position]- or Range: bytes=[segment start position]-[eof position], I'm not sure which), so the chance of reusing such a connection seems very slim (and perhaps why I’ve never seen it happen). It could basically only be reused if the user seeks back to the original position, or if you’re willing to read a lot of unnecessary data to reach the start of another upcoming segment.

There may have been a good reason for this, as IIRC there are cases where Infuse needs to access metadata which is present at both ends of a file in order to stream it.

It’s possible there is some room for improvement though, and we can take a peek at this.


Thanks for looking!