  • This happens because when a stack is created with compose via the CLI, there is no back-reference from the resulting stack to the compose file that created it. Portainer therefore has no way of knowing how the stack was created, and for safety we flag these as limited stacks. If the stack is deployed through Portainer, we save the compose file and reference it against that stack in our database.

    The directories within the Portainer data volume are given numerical identifiers rather than stack names because stack names can change, and because we let you manage multiple environments from one Portainer instance, we also need to allow for the same stack name to exist on more than one environment. The directories weren’t initially intended for direct access outside of Portainer.
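
    For illustration, each stack deployed through Portainer gets its own numbered directory inside the data volume, keyed by the stack’s internal ID rather than its name - so a stack with ID 1 would typically have its compose file at compose/1/docker-compose.yml within the volume (the exact layout can vary between Portainer versions).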


  • At present, Portainer is built on a one server, multiple agents model: one environment acts as your “management” interface and runs the Portainer Server, and the others use the Agent to interface with the Server. You can only log into the environment running the Portainer Server container, not the Agents. We don’t currently support multi-tenancy of the Portainer Server.

    In production setups we generally recommend running the Portainer Server on a separate environment that is purely for management and doesn’t run any workloads, with all workload environments using Agents. That way, if one of your workload environments goes down, you can still manage the others in the meantime.
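
    As a rough sketch, deploying the Agent on a workload environment with compose looks something like the below - the image tag and port are the commonly documented defaults, so check the deployment instructions for your Portainer version before relying on them:

    services:
      agent:
        image: portainer/agent:latest
        restart: always
        ports:
          - '9001:9001'
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /var/lib/docker/volumes:/var/lib/docker/volumes

    The Server is then pointed at that host on port 9001 when you add the environment.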


  • If you’ve mounted your share to /media/nfsshare1 on your host OS and you can write to it from within Linux, you should just be able to bind mount /media/nfsshare1 to a directory within your container in the same way you would a non-NFS local path - under the Volumes tab in Advanced container settings when creating a container, or in your stack yaml. As far as Docker is concerned, it’s a local path - since the mounting is done at the OS level through fstab, Docker has no idea what it actually is underneath.

    If on the other hand you want to create an NFS volume in Portainer, you wouldn’t do the mounting via fstab; instead, you’d do it all in the Create volume page (or in your stack yaml).
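
    As a rough sketch of the second approach, an NFS named volume defined entirely in a stack yaml might look something like the below (the server address, export path and mount options are placeholders to replace with your own):

    services:
      app:
        image: 'nginx:latest'
        volumes:
          - nfsshare1:/data

    volumes:
      nfsshare1:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.10,rw
          device: ':/export/share1'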


  • You shouldn’t need to build the image - the image already exists on Docker Hub. What you want to do is create a container (or stack) that uses the existing image on Docker Hub. Here’s a slightly modified version of the stack file from the Nginx Proxy Manager (NPM) documentation that you can deploy in Portainer (the only change I’ve made is to turn the relative path bind mounts into named volumes):

    version: '3.8'
    services:
      app:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'
          - '443:443'
          - '81:81'
    
        volumes:
          - data:/data
          - letsencrypt:/etc/letsencrypt
    
    volumes:
      data:
      letsencrypt:
    

    In Portainer, go to Stacks, click Add stack, give the stack a name, then paste this into the Web editor and deploy.

    Building an image is generally reserved for when you are either creating an image from scratch or are extending an existing image with your own modifications. Simply deploying a container from an image that someone else has created doesn’t require any image building.
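
    To illustrate the difference in compose terms: image tells compose to pull a ready-made image from a registry, while build tells it to build one from a Dockerfile (the build path below is just a placeholder):

    services:
      # deploys an existing image from Docker Hub - no build step needed
      proxy:
        image: 'jc21/nginx-proxy-manager:latest'

      # builds a custom image from a local Dockerfile - only needed when
      # creating or extending an image yourself
      custom:
        build: ./my-custom-image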



  • You’re using relative paths in your volumes:

    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    

    When using compose from the command line, these would be mounted as bind mounts at paths relative to the directory where you ran the docker compose up command - for example, if you were in /tmp/pihole they would be at /tmp/pihole/etc-pihole and /tmp/pihole/etc-dnsmasq.d. In Portainer, because the compose commands are run from within the Portainer container’s file system rather than on the host environment, those relative paths resolve inside the Portainer container’s file system instead.

    If these file paths exist on your host filesystem, you could change the relative paths to absolute paths instead:

    volumes:
      - /path/to/etc-pihole:/etc/pihole
      - /path/to/etc-dnsmasq.d:/etc/dnsmasq.d
    

    Or, if they don’t exist and are populated by the container image on first deploy, you could create them as named volumes:

    services:
      pihole:
        ...
        volumes:
          - etc-pihole:/etc/pihole
          - etc-dnsmasq.d:/etc/dnsmasq.d
    
    volumes:
      etc-pihole:
      etc-dnsmasq.d:
    

    Which method you choose will depend on your requirements and on what the image itself expects and provides.


  • Docker doesn’t allow you to mount a subpath of a named volume - you can only mount the named volume itself:

    volumes:
      - my_data:/path/in/container
    

    When mounting an existing volume in a stack file, you will also need to flag that volume as external, otherwise compose will try to create it for you. This is done in a separate volumes section outside of the services section:

    volumes:
      my_data:
        external: true
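
    Putting both pieces together, a minimal sketch of a stack that mounts an already-existing volume might look like the below (my_data and the container path are just examples):

    services:
      app:
        image: 'nginx:latest'
        volumes:
          - my_data:/path/in/container

    volumes:
      my_data:
        external: true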
    

    Do I need to have one Volume for every stack so I would just say my_data:/datastore?

    You can share volumes between containers, however I would generally advise a separate volume for each. This keeps the container configurations independent of each other, whereas sharing volumes couples them together.

    Am I better off just ignoring Volumes and putting my persistent files somewhere like /data/my_data/ ?

    It depends. If you only intend to deploy your stack on one environment and don’t need the ability to redeploy it on another, then bind mounts (mounting to a path on the host) are fine. When portability is a concern, named volumes are generally better.

    Am I asking the wrong questions :)

    No such thing :)