Have any of you had success setting up an arr stack with rootless Podman Quadlets? I really like the idea of Quadlets, but I can’t make it work.

Any guide and/or experience sharing would be greatly appreciated.

I set up Rocky Linux 10 with Podman 5.4.2, but after the images were pulled the quadlet services kept crashing.

Should I keep digging down this rabbit hole, or should I switch back to Docker Compose?

  • Melusine@tarte.nuage-libre.fr · 24 hours ago

    I currently run my services as Quadlets, though not servarr. My strategy for writing them was to start from the podman CLI, setting options as I went, and when I was done I would use that command line to generate the Quadlet files.
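    As a sketch of that workflow (hypothetical container and paths, not from the comment above): the options of a working `podman run` line translate almost one-to-one into Quadlet keys.

    ```ini
    # Hypothetical example: the Quadlet equivalent of a CLI prototype like
    #   podman run -d --name uptime-kuma -p 3001:3001 \
    #     -v ~/containers/uptime-kuma:/app/data:Z docker.io/louislam/uptime-kuma:1
    # saved as ~/.config/containers/systemd/uptime-kuma.container
    [Unit]
    Description=Uptime Kuma Container

    [Container]
    ContainerName=uptime-kuma
    Image=docker.io/louislam/uptime-kuma:1
    PublishPort=3001:3001
    Volume=%h/containers/uptime-kuma:/app/data:Z

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target
    ```

    After a `systemctl --user daemon-reload`, the generated unit shows up as `uptime-kuma.service`.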

  • Eldaroth@lemmy.world · 2 days ago (edited)

    Nice, I did the move from docker to podman a couple of months ago myself. Now running the arr stack, nextcloud, immich and some other services as Quadlets. File permissions, due to podman's rootless nature, were usually the culprit when something wasn't working properly.

    I can share the Quadlet systemd files I use for the arr stack. I deployed it as a pod:

    [Unit]
    Description=Arr-stack pod
    
    [Pod]
    PodName=arr-stack
    # Jellyseerr Port Mapping
    PublishPort=8055:5055
    # Sonarr Port Mapping
    PublishPort=8089:8989
    # Radarr Port Mapping
    PublishPort=8078:7878
    # Prowlarr Port Mapping
    PublishPort=8096:9696
    # Flaresolverr Port Mapping
    PublishPort=8091:8191
    # qBittorrent Port Mapping
    PublishPort=8080:8080
    ---
    [Unit]
    Description=Gluetun Container
    
    [Container]
    ContainerName=gluetun
    EnvironmentFile=global.env
    EnvironmentFile=gluetun.env
    Environment=FIREWALL_INPUT_PORTS=8080
    Image=docker.io/qmcgaw/gluetun:v3.40.0
    Pod=arr-stack.pod
    AutoUpdate=registry
    PodmanArgs=--privileged
    AddCapability=NET_ADMIN
    AddDevice=/dev/net/tun:/dev/net/tun
    
    Volume=%h/container_volumes/gluetun/conf:/gluetun:Z,U
    
    Secret=openvpn_user,type=env,target=OPENVPN_USER
    Secret=openvpn_password,type=env,target=OPENVPN_PASSWORD
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=qBittorrent Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=qbittorrent
    EnvironmentFile=global.env
    Environment=WEBUI_PORT=8080
    Image=lscr.io/linuxserver/qbittorrent:5.1.2
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    Volume=%h/container_volumes/qbittorrent/conf:/config:Z,U
    Volume=%h/Downloads/completed:/downloads:z,U
    Volume=%h/Downloads/incomplete:/incomplete:z,U
    Volume=%h/Downloads/torrents:/torrents:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Prowlarr Container
    Requires=gluetun.service
    After=gluetun.service
    
    [Container]
    ContainerName=prowlarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/prowlarr:2.0.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:9696/prowlarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    Volume=%h/container_volumes/prowlarr/conf:/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Flaresolverr Container
    
    [Container]
    ContainerName=flaresolverr
    EnvironmentFile=global.env
    Image=ghcr.io/flaresolverr/flaresolverr:v3.4.0
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Radarr Container
    
    [Container]
    ContainerName=radarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/radarr:5.27.5
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:7878/radarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/radarr/conf:/config:Z,U
    Volume=/mnt/movies:/movies
    Volume=%h/Downloads/completed/radarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Sonarr Container
    
    [Container]
    ContainerName=sonarr
    EnvironmentFile=global.env
    Image=lscr.io/linuxserver/sonarr:4.0.15
    AutoUpdate=registry
    UserNS=keep-id:uid=1000,gid=1000
    Pod=arr-stack.pod
    Network=container:gluetun
    
    HealthCmd=["curl","--fail","http://127.0.0.1:8989/sonarr/ping"]
    HealthInterval=30s
    HealthRetries=10
    
    # Disable SecurityLabels due to SMB share
    SecurityLabelDisable=true
    Volume=%h/container_volumes/sonarr/conf:/config:Z,U
    Volume=/mnt/tv:/tv
    Volume=%h/Downloads/completed/sonarr:/downloads:z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    ---
    [Unit]
    Description=Jellyseerr Container
    
    [Container]
    ContainerName=jellyseerr
    EnvironmentFile=global.env
    Image=docker.io/fallenbagel/jellyseerr:2.7.3
    AutoUpdate=registry
    Pod=arr-stack.pod
    Network=container:gluetun
    
    Volume=%h/container_volumes/jellyseerr/conf:/app/config:Z,U
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=default.target
    

    I run my podman containers in a VM running Alma Linux. Works pretty great so far.

    I had the same issue when debugging systemctl errors; journalctl wasn't very helpful. At one point I just ran podman logs -f <container> in a while loop in another terminal just to catch the application's logs. Not the most sophisticated approach, but it works 😄
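    The loop from that approach looks roughly like this (the container name is a placeholder):

    ```shell
    # Keep re-attaching to the container's stdout so output from the
    # crash/restart cycle isn't lost; "sonarr" is just an example name.
    while true; do
        podman logs -f sonarr 2>/dev/null
        sleep 1
    done
    ```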

    • filister@lemmy.worldOP · 2 days ago

      Nice, thanks for sharing. How did you solve the file permission issue?

      Also, I see you put all your services into a single pod Quadlet. What I'm trying to achieve is to have every service as a separate systemd unit file that I can control independently. In that case the network setup also gets more complicated.

      • Eldaroth@lemmy.world · 2 days ago

        That’s where UserNS=keep-id:uid=1000,gid=1000 comes into play. It “maps” the container’s user to your local user on the host, to some extent; there is a deeper explanation of what exactly it does in this GitHub issue: https://github.com/containers/podman/issues/24934
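        A quick way to see the effect (assuming your host user is 1000:1000) is to check what `id` reports inside a throwaway container:

        ```shell
        # With keep-id the host user's UID/GID appear inside the container
        # as 1000:1000, so files written to bind mounts stay owned by you.
        podman run --rm --userns=keep-id:uid=1000,gid=1000 \
            docker.io/library/alpine:3 id
        ```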

        Well, the pod only links the containers together; it’s not one systemd file. Every container has its own file, and so do the pod and the network (separated by ‘---’ in my code block above). You can still start and stop each container as a separate service, or the whole pod with all containers linked to it. Pods have the advantage that the containers in them can talk to each other more easily.
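        Concretely, with the file names from my block above, that separation looks like this:

        ```shell
        # Each Quadlet file becomes its own systemd user unit:
        #   arr-stack.pod    -> arr-stack-pod.service
        #   sonarr.container -> sonarr.service
        systemctl --user start arr-stack-pod.service  # whole pod incl. containers
        systemctl --user stop sonarr.service          # just one container
        systemctl --user status gluetun.service       # inspect a single service
        ```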

        The network I had created just to separate my services from each other. Thinking about it, that was the old setup; since I started using gluetun and running it as a privileged container, it uses the host network anyway. I edited my post above and removed the network unit file.

  • greybeard@feddit.online · 2 days ago

    I’m really glad to see quadlets taking off. I’ve been playing with them myself and am really happy with the results. They pair well with Ansible, letting you write your Quadlet files in a way that makes them highly portable.

  • thenorthernmist@lemmy.world · 2 days ago

    Heya, I managed to set up the *arr stack as separate quadlets. The main problem I had was getting the correct permissions for the files inside the containers, and that seemed to be because of the way linuxserver.io handles the filesystem (don’t quote me on this). Anyway, this is how I set up the container section in the .container file (located in /home/USER/.config/containers/systemd/):

    [Container]
    Image=lscr.io/linuxserver/radarr:latest
    Timezone=Europe/Stockholm
    Environment=PUID=1002
    Environment=PGID=1002
    # Map container ID 1002 to the host user itself (intermediate ID 0),
    # and container IDs 0-1001 to the first 1002 subordinate IDs
    UIDMap=1002:0:1
    UIDMap=0:1:1002
    GIDMap=1002:0:1
    GIDMap=0:1:1002
    AutoUpdate=registry
    Volume=/mnt/docker/radarr:/config:Z
    Volume=/mnt/media/movies:/data/movies:z
    #PublishPort=7878:7878
    Network=proxy.network
    

    The thing that made it work for me was the UID-/GIDMaps, which map your user's UID/GID on the host to the user inside the container. All you need to do is change the 1002 ID, which represents the UID and GID of the user that owns the files and directories.

    I also have a proxy.network file placed in the same directory with the content:

    [Unit]
    Description=Proxy network for containers
    [Network]
    

    So I can use that for container-container communication (and a caddy container for external access).
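    For illustration (a hypothetical caddy Quadlet, not my actual file), another container joins that network the same way and can then reach its peers by container name:

    ```ini
    # Hypothetical caddy.container on the same network; other containers
    # on proxy.network are resolvable by their ContainerName.
    [Container]
    ContainerName=caddy
    Image=docker.io/library/caddy:2
    Network=proxy.network
    PublishPort=8443:443
    Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
    ```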

    Also notice the AutoUpdate=registry, which auto-updates the container (if you want that). However, you first need to enable the update timer: systemctl --user enable podman-auto-update.timer

    Also also, remember to create a file named after the user running podman in /var/lib/systemd/linger, so that your containers don’t exit when you log out: touch /var/lib/systemd/linger/USERNAME

    And full disclosure: I ended up switching back to docker and docker-compose for my arr stack. However, I still strongly prefer podman and run podman containers on my externally accessible servers (VPS).

    Hope it helps.

    • filister@lemmy.worldOP · 2 days ago

      You can actually set your user to linger with

      sudo loginctl enable-linger $USER
      

      I will test your setup and report back if it works.

      By the way what was the reason to switch back to Docker Compose?

      • thenorthernmist@lemmy.world · 2 days ago

        Cool, didn’t know that :)

        The reason was that I kept finding myself fixing weird issues, like the one with the UID map, and also an issue where containers couldn’t talk to each other outside of the container network (a container couldn’t reach another container that used host networking).

        I was happy to figure out how to do quadlets, and I still prefer them from a security point of view, but I found myself spending more time than I wanted fixing things when I already had a fully working arr stack compose file (which has something like 18 containers in it that I would need to port).

        Now granted, I could probably just have run podman-compose, and knowing myself I’ll probably try that later as well :)

        Let me know how it goes!

  • k_rol@lemmy.ca · 2 days ago

    I’m curious to see your setup and logs as well. I’m going full steam ahead with Quadlets too, but I haven’t done the arr stack yet.

    • filister@lemmy.worldOP · 2 days ago

      I can try to upload my container services and network tomorrow and share the link here.

  • @filister I don’t have an arr stack running, but I’m successfully running several podman quadlets, e.g. PostgreSQL, Nextcloud, HomeAssistant and some more.
    Did you check the journal with
    journalctl --identifier=<container name> for possible errors?
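    Expanded a bit (service and container names are placeholders), these are the usual places quadlet errors end up:

    ```shell
    # Where a crashing quadlet usually leaves traces:
    systemctl --user status uptime-kuma.service  # exit code, restart counter
    journalctl --user -eu uptime-kuma.service    # unit log incl. container output
    journalctl --identifier uptime-kuma -e       # logs tagged with the container name
    # Check whether the generator even accepts the .container files:
    /usr/libexec/podman/quadlet -dryrun -user
    ```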

    • filister@lemmy.worldOP · 2 days ago

      I don’t know; I tried even with uptime-kuma and Homepage, but as soon as I start the service, systemd kills it after 6 unsuccessful restarts. Maybe I will spin up a completely new VM tomorrow and start from scratch.

      I think the problem might be with the data directory permissions, even though I have added subuid and subgid ranges for my user and enabled lingering for the user.

      But I did so many things that there is a chance it is already quite messed up.

  • Tinkerer@lemmy.ca · 2 days ago

    I’m going down this rabbit hole right now and porting all my docker containers to quadlets on Rocky Linux 10 as well. I haven’t done the arr stack yet, but everything else has been a pretty smooth transition.

    Don’t give up, it’s worth it to be able to run rootless!

    • filister@lemmy.worldOP · 2 days ago (edited)

      Absolutely, plus I love the idea of having them as separate services. Apparently I just don’t know how to configure them.

      Did you create a separate systemd network for your Quadlets or are you using a bridge or host network?

    • filister@lemmy.worldOP · 2 days ago (edited)

      There are no logs in journalctl; when I check the status of the systemd services I just see that the container service crashed and gave up after 5-6 restarts.

      I was thinking of installing the latest Podman, 5.7.0, and trying that, as there are quite a few changes between it and the 5.4.2 that ships as standard on Rocky.