src/data/blog/2026/plex-lxc-hardware-transcoding.md
Every device is different, and the numbers matter. So I start by first enumerating them:

```bash
ls -la /dev/dri
```
The DRI in `/dev/dri` stands for Direct Rendering Infrastructure. It's a directory in Linux that contains device files used for direct GPU access.
Typically you'll find `/dev/dri/card0`, `card1`, etc. These represent the GPU(s) installed in the machine.
Alongside them are the render nodes: `/dev/dri/renderD128`, `renderD129`, etc. These are also important for transcoding. They allow unprivileged GPU access for compute and rendering tasks (3D rendering, video encoding/decoding, GPU-accelerated computing) without requiring display output. APIs like Vulkan, OpenGL, VA-API, and CUDA can use these.
```bash
drwxr-xr-x 3 root root 100 Mar 16 15:04 .
crw-rw---- 1 root video 226, 1 Mar 16 15:05 card1
crw-rw---- 1 root render 226, 128 Mar 16 15:04 renderD128
```
You'll note on my machine there is no `card0`, but there is a `card1`, even though I only have 1 GPU. Also note the numbers following both the card and render node, `226:1` and `226:128`. These are the device driver major and minor numbers, which identify what driver to use.
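If you want to grab that major:minor pair in a script, it can be pulled out of the `ls -l` output with a little text processing. A minimal sketch (the sample line is hard-coded from the listing above; on a real host you'd pipe in `ls -l /dev/dri/renderD128` instead):

```shell
# Extract the major:minor pair from an `ls -l` device line.
# Field 5 is the major number (with a trailing comma), field 6 the minor.
line='crw-rw---- 1 root render 226, 128 Mar 16 15:04 renderD128'
majmin=$(echo "$line" | awk '{gsub(",", "", $5); print $5 ":" $6}')
echo "$majmin"   # 226:128
```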
> [!NOTE]
> In Proxmox, the user running PVE is usually root, so there shouldn't be a permissions issue.
## Granting access to the LXC
Stop your LXC in the GUI and then edit the configuration from the CLI. For this example I'm going to use LXC ID 100.
```bash
cd /etc/pve/lxc
```

[…]
So the full line is telling the cgroup to allow the LXC container to read, write, and create the character device with major:minor 226:128, which is my renderD128 render node.
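The cgroup lines themselves are elided from this excerpt, but for the device numbers enumerated earlier they would look something like this (a sketch only; check your own major:minor values, which may differ):

```
# /etc/pve/lxc/100.conf, cgroup device allowances (sketch)
lxc.cgroup2.devices.allow: c 226:1 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
```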
### Device Permissions
Next, under the cgroup lines, I've added the paths to the hardware on the host. You'll notice they match from before: `/dev/dri/card1` and `/dev/dri/renderD128`.
Breaking it down by token:
- `dev0` - the Proxmox config key, just an index for the device passthrough entry: dev0, dev1, dev2, etc. if you have multiple devices.
- `/dev/dri/card1` - the path on the host to the device being passed through. Proxmox will bind-mount this into the container.
- `gid=44` - the group ID that will own the device node inside the container. This is what makes it accessible to the video group (GID 44) inside the LXC, so the plex user can actually use it.
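Put together, the passthrough entries for my two device nodes look like this (a sketch using my GIDs, 44 and 105; verify yours inside the container before trusting these values):

```
# /etc/pve/lxc/100.conf, device passthrough entries (sketch)
dev0: /dev/dri/card1,gid=44
dev1: /dev/dri/renderD128,gid=105
```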
One important thing to understand is the GID here is applied to the device node as seen from **_inside_** the container. That's why it needs to match the GID of the video/render group inside the LXC specifically, not necessarily the host's GID. Often they are the same, though sometimes they are not, _which was my problem_.
Once all that is configured and saved, I can start the LXC and run the following command inside the container to find the correct GID.
```bash
getent group video render
```
For my container, I got back the following:
```bash
video:x:44:plex
render:x:105:plex
```
This is where the GIDs in my configuration are derived, `44` and `105` respectively. If they don't match, shut down the container, go back to the LXC configuration on the host, and update them.
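If you ever need just the numeric GID in a script, it's the third colon-separated field of the `getent` output. A small sketch (the entry is hard-coded from the output above; on a live container you'd use `getent group render` instead):

```shell
# Pull the numeric GID out of a getent group entry.
# Format is name:password:GID:members, so the GID is field 3.
entry='render:x:105:plex'
gid=$(echo "$entry" | cut -d: -f3)
echo "$gid"   # 105
```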
> [!IMPORTANT]
> The device GID refers to the device node as seen from **_inside_** the container.