I want to host security cameras and a plex server. Does this mean that my server needs a GPU? (Or would benefit from one). I heard plex does fine with just a CPU
Plex would benefit from using a GPU.
I use Jellyfin, which is similar to Plex. I have it on a Raspberry Pi 4 (8 GB). It’s perfectly fine if I’m sending H264, but most modern browsers do not support H265, so that forces the server to transcode. That will consume almost all processing power if it’s CPU-only and is a very slow process.
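If you’re curious which files in a library would trigger that transcode, you can check the codec up front. A rough sketch using ffprobe (the file name is just a placeholder):

```python
import subprocess

def video_codec(path: str) -> str:
    """Return the codec of the first video stream, e.g. 'h264' or 'hevc' (H265)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

codec = video_codec("movie.mkv")  # placeholder path
print("browser will likely force a transcode" if codec == "hevc" else "should direct play")
```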
This is a complicated topic and the terminology is a bit ambiguous.
Yes, non-hardware-accelerated transcoding is slow and will consume the CPU.
However, you don’t necessarily need an external GPU to do hardware-accelerated transcoding. When you use Intel QuickSync, for example, the codec hardware is part of the CPU. On the other hand, it is only present in CPUs that have integrated graphics, so you could still say the transcoding is done “by the GPU”, just not by the additional one that you put in. In fact, putting in a dedicated graphics card often disables the integrated graphics, and you have to use tricks to re-enable it before you can use it for transcoding again.
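On Linux you can quickly check whether QuickSync (or any other hardware encoder) is actually exposed, because it shows up as a DRI render node. A small sketch, assuming the libva-utils package is installed for vainfo:

```python
import glob
import shutil
import subprocess

# QuickSync / VAAPI hardware shows up as a render node under /dev/dri.
render_nodes = glob.glob("/dev/dri/renderD*")
if not render_nodes:
    print("No render node found - hardware transcoding is probably unavailable.")
else:
    print("Render nodes:", render_nodes)
    if shutil.which("vainfo"):
        # vainfo lists the codec profiles the hardware can decode/encode.
        subprocess.run(["vainfo"])
```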
If you use a third-party app that can direct play, like Infuse, or your CPU has Intel QuickSync, then you’ll likely be fine. For security cameras, unless you need any sort of facial recognition or object detection, just a CPU will still be fine. If you need anything more advanced, then a GPU is necessary.
Bought an Intel A750 for transcoding on my Jellyfin server. The CPU did fine for just 1 or 2 people watching at the same time, but it pegged the CPU at 100% and had limited support for which formats could be transcoded. The A750 can easily handle several people watching at the same time and can transcode just about any format, with very little CPU usage, so everything else running stays fast.
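For context, this is roughly the kind of ffmpeg invocation Jellyfin issues once hardware acceleration is turned on, just with the QSV decoder/encoder instead of the CPU codecs. A sketch with placeholder file names, assuming an ffmpeg build with QSV support:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "qsv",            # decode on the GPU
    "-c:v", "hevc_qsv",
    "-i", "input_hevc.mkv",       # placeholder input
    "-c:v", "h264_qsv",           # encode on the GPU
    "-preset", "fast",
    "-c:a", "copy",               # leave audio untouched
    "output_h264.mp4",            # placeholder output
], check=True)
```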
If you want object detection with your IP cameras, you can use Frigate, and for good performance you can buy a Google Coral to handle the object detection part.
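Under the hood, Frigate feeds decoded camera frames to a TensorFlow Lite detection model running on the Coral through the Edge TPU delegate. A minimal sketch of that path (assumes the tflite_runtime and libedgetpu packages; the model file name is a placeholder):

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a detection model compiled for the Edge TPU; the delegate runs inference on the Coral.
interpreter = tflite.Interpreter(
    model_path="detection_model_edgetpu.tflite",  # placeholder model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Normally this would be a decoded camera frame resized to the model's input shape.
inp = interpreter.get_input_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

# SSD-style models output boxes/classes/scores; here we just show the first output tensor.
out = interpreter.get_output_details()[0]
print("output tensor shape:", interpreter.get_tensor(out["index"]).shape)
```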
Transcoding
As long as you have Plex Pass, hardware transcoding is extremely good with modern QuickSync Intel processors, and especially good if you run Linux.
In fact, putting in a dedicated graphics card often disables the integrated graphics, including QuickSync, and you might have to set up a virtual screen to re-enable it.
I have found transcoding to work noticeably better when using QuickSync (Intel’s on-chip encoder) rather than a dedicated GPU.
At this point, I think the only real reason you would want a GPU is for LLMs.
A decent, recent Nvidia GPU is going to beat most CPUs. I wouldn’t shell out extra for a GPU just for transcoding, though. Good enough is good enough.
Plex server or running your own LLM
Just about anything machine learning or AI, transcoding for a media server, a render farm for something like Blender perhaps?
Get an Intel CPU with an iGPU (most have one) and you’ll be good to go for anything that you’re doing there.
Yes, a GPU for transcoding is reason enough, but object detection using Frigate, or AI stuff, is nice also. Buy an Nvidia.
With Plex, an Intel QuickSync iGPU will be fine unless you plan to do more than 10 transcodes at a time.
My main server has a 3070 and I use it to stream games through Moonlight to all my TVs and computers around the house. That way I get the most value from the card instead of it being locked into one machine.
I have two GPUs in a single tower.
A GTX 750 that I share with my LXCs. It does Jellyfin transcoding, Frigate NVR for 3 cameras, Kasm accelerated desktops, XFCE4 acceleration on the PVE host, Jupyter TensorFlow, ErsatzTV transcoding, and I plan to use it for Immich. At most it is taxed about 25 percent, but I plan to have a lot more NVR and Jellyfin streams.
I also have a 1660 Ti passed through to a Windows 11 VM for gaming. I use Sunshine and Moonlight for remote gaming, but I also run Easy Diffusion for some image generation. I had an LLM (https://github.com/oobabooga/text-generation-webui), but it was too slow for what I’m used to - I just use Bing Chat, and now Meta on WhatsApp, for my personal stuff, and an LLM I have access to at work.
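If you’re sharing one card across that many containers and VMs, it’s handy to keep an eye on how taxed it really is. A quick sketch using nvidia-smi’s query mode:

```python
import subprocess

# Prints one CSV line per GPU: name, overall utilisation, memory in use, temperature.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,utilization.gpu,memory.used,temperature.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())
```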
How do you do GPU passthrough?
There’s the possibility of self-hosted speech recognition for use with Mycroft or other personal assistants. Saves you from having Amazon listen to your every word.
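One way to do that today is to run an open-source speech-to-text model such as Whisper locally (my example, not something Mycroft requires). A minimal sketch, with a placeholder audio file, assuming the openai-whisper package:

```python
import whisper  # pip install openai-whisper

# The smaller models run tolerably on CPU; a GPU makes transcription much faster.
model = whisper.load_model("base")
result = model.transcribe("voice_clip.wav")  # placeholder recording from your assistant's mic
print(result["text"])
```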