Massive Bandwidth Usage (Plex + rclone)

Using Version 7.8 (7.8.4860)

Hi there
I have Plex linked, with the library disabled (as library mode makes this issue even worse).

What I’ve noticed is that when media is playing from the Plex server, Infuse periodically (seemingly every 2 minutes) requests the full file again from Plex.

I tracked the issue in iftop.

My Plex instance is served via Nginx Proxy Manager, and I could see massive requests going out.
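For anyone wanting to reproduce the measurement, a filter along these lines narrows iftop down to the Plex traffic (the interface name and the default Plex port are assumptions for illustration):

  # show per-connection rates for Plex traffic only, with port numbers visible
  iftop -i eth0 -P -f "port 32400"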

Ultimately my Plex instance is backed by zurg, rclone, and Real-Debrid.

If I watch in Plex directly, all is fine: my RD bandwidth usage increases gradually, in line with the amount of data streamed in Plex.

If I watch in Infuse, RD bandwidth usage skyrockets to roughly (data streamed + the total size of the file * however many 1-2 minute intervals have passed).

This really makes it unusable for me, as RD are now imposing daily bandwidth limits on users.

Infuse includes a read-ahead cache that will work to cache a larger portion of the video on device. This helps provide reliable streaming in unstable networks, but if you prefer not to use this you can switch the ‘Streaming Cache’ option found in Infuse > Settings > Playback from ‘Auto’ to ‘Memory Only’.

Hi there

Thanks for the reply

I understand that, and that’s fine. The video in question is 800 MB.

It loads and caches the entire video on my 10 Gb line in about a second. I see the playback bar fill completely with grey shading to show the download state, and I can see the file cached on disk.

Yet even though it’s cached, the entire file is requested again over the Plex connection.

I’ve also tested the same thing with Jellyfin this afternoon, and the same issue occurs.

Yet when playing directly in either Jellyfin or Plex, no such issue occurs. The bandwidth usage in iftop shows chunks being fetched as and when they’re played. In Plex this is chunked; in Jellyfin it’s the entire video, but only once.

In Infuse it’s the full 800 MB on the wire every two minutes. For an 800 MB file that works out to 16 GB of bandwidth for a 40-minute episode.

This is even worse if I enable library mode, as it disregards the fact that Up Next is disabled and fetches file data from the server, which has shares mounted for everything in the library.

These issues are apparent on my Mac, my iPhone and my Apple TV

I did a few quick tests here streaming from a remote Jellyfin server and Dropbox, and monitored bandwidth with Little Snitch. There was bandwidth activity from Infuse for the initial caching, but once the file was fully cached network activity dropped to zero.

Hi James

Thanks for testing. It has to be related to the rclone mount then, and the way Infuse reads the file from it / from Plex.

When I stream in Plex the mount is fine. I tried switching the read-ahead cache to memory only, and that sent it through the roof: it requested the whole file over and over in a matter of seconds. I had to force close Infuse quickly as I’d used 24 GB of bandwidth in less than a minute.

I can record all this in a video if required.

The rclone mount is mounted as:

rclone mount zurg: /data \
  --allow-non-empty \
  --allow-other \
  --uid=568 \
  --gid=568 \
  --no-modtime=true \
  --no-checksum=true \
  --use-mmap=true \
  --buffer-size=16M \
  --dir-cache-time=10s \
  --contimeout=10s \
  --timeout=10s \
  --bwlimit=500M
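To see exactly what the mount is being asked to serve, verbose logging can be appended to the command above (the log path is just an example); it records every open and ranged read rclone handles:

  # append to the mount command above to log each open and read request
  -vv --log-file=/var/log/rclone-mount.log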

It works fine in Plex, and in Plex on mobile and Apple TV. That’s direct play on all devices, no transcoding.

I can also monitor bandwidth usage using the Plex dashboard, as I have Plex Pass.

See here. This is the Apple TV Plex app: only one load in the first two minutes, covering the entire episode, then no bandwidth used.

This is Infuse with read-ahead and memory caching on (Auto).
You can see that less than two minutes later it does another fetch of the entire file, and it will repeat that on cue at the same interval.

More on this.
So you were right about Little Snitch: it isn’t on the Plex-to-Infuse connection that the bandwidth issue occurs, but on the Plex server itself; however, it’s directly triggered by Infuse.

Playing in anything else is fine

Something Infuse does causes Plex, Emby, or Jellyfin (tried with all three) to re-pull the whole file instead of streaming the current chunk.

Right now I’ve enabled VFS caching with a 300 GB cache and cache mode full.
This means every 1.5 minutes I’m getting buffered reads to and from rclone, which correlate directly with CPU spikes.
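For reference, the setup described here maps onto rclone mount flags roughly like this (flag names per the rclone docs; the values are the ones mentioned in this post):

  # added to the mount command above to enable full VFS caching
  --vfs-cache-mode=full \
  --vfs-cache-max-size=300G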


It’s minimal on this box as it’s my dev lab, but I want to move it to my smaller production system. It’s unusable for me to be spiking 1 GB of traffic across my internal load balancer every 1.5 minutes, so unfortunately I can’t continue to use Infuse.

It’s a real shame, as this truly is the best video player in the Apple ecosystem, especially with Dolby Vision support.

Will try it again in the future, see if things change with v8.

Update on this:
When setting the streaming cache to “Legacy” this doesn’t appear to happen.
On Auto and Memory, does Infuse periodically check that the file is still there, or something like that?
Because that check is what ends up triggering rclone to re-serve the full file to Plex.

Auto: happens every 1.5 minutes
Memory Only: happens every 2 minutes
Legacy: doesn’t seem to occur

Tests: M1 Max Mac Studio with 64 GB RAM

The newer caching options include a server ‘tickling’ feature, which will access a few bytes of data from a random portion of the file during playback. This ensures the source device/drive does not go into sleep mode during streaming, which could lead to extended buffering or playback errors.

Using the Legacy option will avoid this behavior, which may work better with your setup.
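To illustrate why that pattern is so expensive against an rclone-backed source: a tiny read at a random offset, as in the sketch below (the path is made up), still forces the mount to fetch an entire read chunk from the remote when that data isn’t already in a local VFS cache:

  # read 64 bytes at a random 1 MiB-aligned offset within an ~800 MB file on the mount;
  # without a local cache, rclone still pulls a full chunk from the remote to satisfy it
  dd if=/data/Shows/episode.mkv of=/dev/null bs=64 count=1 \
     skip=$(( (RANDOM % 800) * 16384 ))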

Ahh that’ll be it then - Thanks

Yeah, because my media files are kept entirely cloud-side, and only the watched segments are cached by rclone (then recycled when new sections are accessed), the tickling will force chunk downloads sized by my rclone chunk size, read-ahead, etc.
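If it helps anyone with a similar setup, the cost of each of those random reads should scale with the rclone chunk-size settings, so something like the values below (untested guesses, not a recommendation from this thread) would shrink how much each ‘tickle’ pulls down:

  # smaller starting chunk and a cap on chunk growth = less data per random access
  --vfs-read-chunk-size=8M \
  --vfs-read-chunk-size-limit=64M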

Hopefully Legacy isn’t going to go away anytime soon, like with the release of v8?

Maybe we could get a separate toggle to disable tickling, and still benefit from disk-based caching etc.? (I’ve a tickling phobia anyhow lol)

There are no plans to remove the Legacy streaming option.
