I'm currently hosting more than 10 services, but only Nextcloud periodically sends me errors, and only Nextcloud is extremely, painfully slow. I quit this sh*t. No more troubleshooting and optimization.

I mainly use four of Nextcloud's services:

  • Files: a simple server for uploading and downloading binaries
  • Calendar (with DAVx5): a sync server, no web UI needed
  • Notes: simple note-taking
  • Network folder: mounted in Dolphin on Linux

Could you recommend alternatives for these? All services are supposed to be exposed over HTTPS, so authentication (a login) is needed. And I've tried note-taking apps like Joplin and Trilium but didn't like them.

Thanks in advance.

  • r3dk0w@alien.topB · 3 points · 1 year ago

    If Nextcloud is slow and throwing errors for you, it's probably because the machine you're running it on is low on RAM and/or CPU.

    I bring this up because whatever replacement you try would likely have the same issues.

    My Nextcloud instance was nearly unusable when I had it on a Raspberry Pi 3, but since I moved it to a container on my faster machine (AMD Ryzen 7 4800U with 16GB of RAM), it has worked flawlessly.

    • sachingopal@alien.topB · 2 points · 1 year ago

      I agree with this. It needs a good amount of CPU cycles and RAM. A Raspi struggled for me too.

      • lannistersstark@alien.topB · 1 point · 1 year ago

        My NC instance runs on a 24GB RAM, 4-CPU Ampere A1 host (Oracle), and still struggles. YMMV.

        And it struggles as a photo backup host on an i5-7xxx with 16GB RAM at home.


        It’s not absurdly slow, it’s just…irritating sometimes.

      • benjiro3000@alien.topB · 1 point · 1 year ago

        Even if you ran a basic SQLite Nextcloud, properly optimized, you could deal with millions of files like it's nothing. And that is the issue: the bugs and the lacking optimization…

        4650G + 64GB RAM + MySQL, and it was constantly file-locking on a mere 21k-file, 10GB folder.

        I have written apps (in Go) that do similar work and process data 100 times faster than Nextcloud. Hell, my scrapers are faster than Nextcloud on a local network, and those are dealing with external data over the internet.

        It's BADLY designed software that puts the blame on the consumer to get bigger and better hardware for what is, essentially, early-2000s functionality.

        • r3dk0w@alien.topB · 1 point · 1 year ago

          MySQL, and it was constantly file-locking on a mere 21k-file, 10GB folder

          It’ll definitely do that if you keep your database on a network share with spinning disks.

          Spin up a container with SQLite on a RAM disk and point it at the same data location. Most of the problems go away.
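The diagnostic above might look something like this compose fragment. The image tag, paths, and the `admin` user directory are placeholders, and since tmpfs is wiped on restart, this is only for isolating a slow-database problem, not for production:

```yaml
# Sketch of "SQLite in a RAM disk, same data location" (illustrative values).
services:
  nextcloud:
    image: nextcloud:27
    environment:
      SQLITE_DATABASE: nextcloud      # the official image's switch for SQLite
    tmpfs:
      - /var/www/html/data            # SQLite's owncloud.db lives in the data dir
    volumes:
      # real files stay on disk, bind-mounted beneath the tmpfs data dir
      - /srv/files:/var/www/html/data/admin/files
```

Docker mounts destinations in path order, so the bind mount lands inside the tmpfs; verify that holds for your engine version before relying on it.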

          • benjiro3000@alien.topB · 1 point · 1 year ago

            It’ll definitely do that if you keep your database on a network share with spinning disks.

            The database and Nextcloud were on a 4TB NVMe drive … in MySQL with plenty of cache/memory assigned to it. Not my first rodeo, …

            • EuroRob@alien.topB · 1 point · 1 year ago

              I'm running it on an SSD as a VM on a 10-year-old laptop and have had very few issues compared to running on Raspis in the past. It's not my first rodeo either; I found that Debian with the NextcloudPi setup script worked best, followed by a restore from backup. The web UI performs great, as do bookmarks, contacts, calendar, video chats, and most things I've thrown at it. NVMe may be overkill, but the combination of a solid CPU, RAM, and disk I/O should alleviate any problems. My hunch is that there are other resource constraints or bottlenecks at play, perhaps DDoS or other attacks (I've experienced that for sure; you can test by dropping your firewall ingress rules to confirm).

              Also, this is FOSS, and I find the features and usability better than anything else out there, especially with Let's Encrypt.

  • forwardslashroot@alien.topB · 2 points · 1 year ago

    I was in the same boat when I was running NC in a container. I switched to a VM, and most of my issues have been resolved, except Collabora. I'm currently using the built-in Collabora server, which is slow.

  • nick_ian@alien.topB · 2 points · 1 year ago

    I have my issues with Nextcloud, but it’s still, by far, the best solution I’ve come across.

  • MiddledAgedGuy@beehaw.org · 2 points · 1 year ago
    • Syncthing for files.
    • Proton Calendar (so not self-hosted)
    • Joplin, using file-based sync with the aforementioned Syncthing. I saw you didn't like it, though.
    • I occasionally use scp
    • rglullisA · 1 point · 1 year ago

      For calendaring, I also went with Syncthing via DecSync. I get my contacts and calendar on Android and in Thunderbird, so I can avoid yet another unnecessary webapp.

      • MiddledAgedGuy@beehaw.org · 1 point · edited · 1 year ago

        This does look cool! But I notice there's really only one contributor (technically two, but the second made only one tiny commit), and they haven't contributed any code in over a year. I don't want to invest too much time migrating to a stale, if not dead, project.

        • rglullisA · 1 point · 1 year ago

          Honestly, I think the lack of commits is more due to the application being feature-complete than it being "dead". I've been using it for at least 3 years now and it works quite well.

  • shittywhopper@alien.topB · 2 points · 1 year ago

    Sorry to hear you've had a bad experience. I've been running the linuxserver.io (LSIO) Nextcloud Docker container for 4 years without any issues at all.

  • alt_and_f4_for_Admin@alien.topB · 1 point · 1 year ago

    For calendar, I highly recommend Radicale. Super easy to set up, with a non-bloated management UI. It has worked flawlessly for me over the last few years.
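Since OP wants everything exposed over HTTPS with a login, a minimal Radicale config with htpasswd authentication could look roughly like this; the paths and the bcrypt choice are assumptions, and TLS could instead be terminated at a reverse proxy:

```ini
; /etc/radicale/config (illustrative paths)
[server]
hosts = 0.0.0.0:5232
; built-in TLS; alternatively terminate HTTPS at a reverse proxy
ssl = True
certificate = /etc/radicale/cert.pem
key = /etc/radicale/key.pem

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

[storage]
filesystem_folder = /var/lib/radicale/collections
```

Users can then be added with `htpasswd -B -c /etc/radicale/users alice` (from apache2-utils), and DAVx5 points at `https://host:5232/`.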

  • CountZilch@alien.topB · 1 point · 1 year ago

    Synology Drive is rock solid. It's not open source, though, if that's important to you, and it technically requires Synology hardware.

  • djbon2112@alien.topB · 1 point · 1 year ago

    Owncloud.

    I personally never caught the Nextcloud hype and stuck with the original. So far I've heard (and seen, having tried it twice) nothing but trouble from Nextcloud, while my Owncloud install continues to be rock solid going on 10 years (regularly updated, of course!).

    • natriusaut@alien.topB · 1 point · 1 year ago

      Dunno; I've been running my Nextcloud for a long time now, even updating the lazy way over the web UI rather than the suggested CLI, and not once have I had a problem that was Nextcloud's fault.

    • AnApexBread@alien.topB · 1 point · 1 year ago

      Same. I ran OwnCloud and Nextcloud in parallel for a while, until a Nextcloud update nuked it and my wife lost some of her college work.

      After that, I've appreciated OwnCloud's slower, more deliberate pace.

    • Discommodian@alien.topB · 1 point · 1 year ago

      I always recommend OwnCloud. It even has a raw-photo viewer plugin, and if you know anything about 24-megapixel RAW photos, they are tough to load. But with OwnCloud, a folder of 30 pictures loads within 10–15 seconds.

    • Theon@alien.topB · 1 point · 1 year ago

      I personally never caught the Nextcloud hype

      The "hype" being simply that Nextcloud is not OwnCloud, which turned proprietary, no?

      • djbon2112@alien.topB · 1 point · 1 year ago

        Owncloud is not proprietary (it's AGPLv3), and I'm really not sure where people get that idea.

        The original Nextcloud/Owncloud fork was due to disagreements over development direction, not (say) like Jellyfin/Emby, where there was an actual license change. Nextcloud wanted to "move fast"; Owncloud wanted stability. Around the time of the fork there was concern that, perhaps, hypothetically, some day Owncloud might "go proprietary", but going on close to 10 years, that has not happened.

  • sachingopal@alien.topB · 1 point · 1 year ago

    You haven't stated the hardware you're running this on; it makes a huge difference. Hope this is not a Raspi?

    • monnef@alien.topB · 1 point · 1 year ago

      Hope this is not Raspi?

      What is wrong with the RPi? I thought an RPi 4 would be plenty for two calendars (one calendar per user) on Nextcloud, looking at the requirements:

      A 64-bit CPU, OS and PHP is required for Nextcloud to run well. …
      Nextcloud needs a minimum of 128MB RAM per process, and we recommend a minimum of 512MB RAM per process.

      Also, how resource-intensive could or should syncing two personal calendars (via Thunderbird) be? I don't understand why Nextcloud struggles so much with this virtually negligible task. The Pi has 7+GB of free memory, CPU load under a few percent, rarely one core with some load, and most of the time nothing accesses the card or disk (virtually zero iowait, with only a short spike once every 5 minutes). Why does Nextcloud take half a minute to several minutes to sync one calendar in Thunderbird?
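One way to tell whether the delay is server-side would be timing a raw CalDAV request and comparing it with what Thunderbird experiences. The URL and user below are placeholders (Nextcloud serves calendars under `remote.php/dav/calendars/<user>/<calendar>/`), so the command is only printed here rather than executed:

```shell
# Placeholder URL and user; substitute your own instance before running.
CAL_URL="https://cloud.example.org/remote.php/dav/calendars/alice/personal/"
# curl's -w '%{time_total}' reports the end-to-end request time. If it shows
# whole seconds while CPU and iowait stay idle, the time is going into PHP.
CMD="curl -s -o /dev/null -u alice -X PROPFIND -H 'Depth: 1' -w '%{time_total}s\n' $CAL_URL"
# Printed rather than executed, since the URL above is a placeholder:
echo "$CMD"
```

Running the real command a few times against the Pi would show whether the server itself is slow or whether the stall is on the Thunderbird side.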

      • drpepper@alien.topB · 1 point · 1 year ago

        It's underpowered, especially for an application based on PHP, which is single-threaded per request and so wants a core with a fast clock. The RPi 4 at 1.5GHz is woefully underpowered to drive anything PHP-backed.

        • monnef@alien.topB · 1 point · 1 year ago

          I see 1.8GHz in glances (actively cooled in my case, but since no core seems maxed, it probably doesn't matter). I have other RPi 4s, and I wonder why a backend in Java (well, Scala) is OK and a backend in Haskell is OK, but a backend in PHP wouldn't be. I still don't understand how Nextcloud can lock up for so long (tens of seconds) on a simple write of a calendar event. A hacky, unoptimized Java backend that does joins manually and inserts sequentially (so, from a DB perspective, just awful) handles 5–10 times more data and still does it an order of magnitude faster. My old phone, weaker than even an RPi 4, could handle dozens of such small operations in one second (I believe that was SQLite + Java). There must be something seriously wrong with Nextcloud (including the PHP runtime) and/or the RPi if such an insignificant amount of data (a one-word title, one date, one reminder option, most likely a few dozen bytes) takes so incredibly long to process and write to the DB…

          • drpepper@alien.topB · 1 point · 1 year ago

            I can't comment on the differences between languages, but it probably also has a lot to do with how Nextcloud is written. Unoptimized software is always going to be slower than its counterpart.

    • Successful_Try543@feddit.de · 1 point · edited · 1 year ago

      NC on an RPi 4 with 8GB RAM works fine for me. The RPi 3 turned out to lack sufficient RAM (1GB) after an NC version update.

  • xiongmao1337@alien.topB · 1 point · 1 year ago

    This is concerning to me, because I've been considering ditching Synology and spinning up Nextcloud. I like Synology Drive, but I'm tired of the underpowered hardware, the dumb roadblocks, and the vendor lock-in nonsense. I'm very curious what you end up doing!

    • jimheim@alien.topB · 1 point · 1 year ago

      Nextcloud is great. I don't doubt that OP is having problems, and I understand how frustration can set in and one might throw in the towel and look for alternatives, but OP's experience is atypical. I've been running it for years without any issues. I should point out that I only use it for small-scale personal stuff, but it's good for me. I have it syncing on eight devices, including Linux, macOS, and Windows desktops; an Android phone; an iPad; and a Raspberry Pi. My phone auto-uploads new camera photos. I'm using WebDAV/FUSE mounts on some machines. Everything is solid.

    • dangernoodle01@alien.topB · 1 point · 1 year ago

      A confirmed yet still unresolved bug caused me and about 200 other people to lose data (metadata) for tons of files. Well, at least 200 reacted to the GitHub bug report I filed. I think you can easily find it; it's the most upvoted yet unresolved issue.

      Besides this, it'd often give random errors and just not function properly. My favorites are the unexplained file locks: my brother in Christ, what do you mean, error while deleting a file? It's 2023, holy shit, just delete the damn file. It's ridiculously unreliable and fragile. They have thousands of bug reports open, yet they focus on pushing new, unwanted social features to become the new Facebook and Zoom. They should definitely focus on fixing the foundation first.

    • rangerelf@alien.topB · 1 point · 1 year ago

      Not OP, but I run it in Docker with Postgres and Redis, behind a reverse proxy. All the apps on NC perform pretty well, and I haven't had any weird issues. It's on an old Xeon with 32GB, on spinning rust.

      • ilikepie71@alien.topB · 1 point · 1 year ago

        Do you have Redis talking to Nextcloud over a unix socket or just regular TCP? The former is apparently another way to speed up Nextcloud, but I'm struggling to figure out how to get containers to use the unix socket.

        • rangerelf@alien.topB · 1 point · 1 year ago

          I have both Postgres and Redis talking to Nextcloud through their respective unix sockets; I store the sockets in a named volume, so I can mount it on whatever containers need to reach them.

            • rangerelf@alien.topB · 1 point · 1 year ago

              Sure:

              POSTGRES

              ---
              version: '3.8'
              services:
                postgres:
                  container_name: postgres
                  image: postgres:14-alpine
                  environment:
                    POSTGRES_PASSWORD: "XXXXXXXXXXXXXXXX"
                    PGDATA: "/var/lib/postgresql/data/pgdata"
                  volumes:
                    - type: bind
                      source: ./data
                      target: /var/lib/postgresql/data
                    - type: volume
                      source: postgres-socket
                      target: /run/postgresql
                  logging:
                    driver: json-file
                    options:
                      max-size: 2m
                  restart: unless-stopped
              networks:
                default:
                  external:
                    name: backend
              volumes:
                postgres-socket:
                  name: postgres-socket
              

              REDIS

              ---
              version: '3.8'
              services:
                redis:
                  image: redis:7.2-alpine
                  command:
                    - /data/redis.conf
                    - --loglevel
                    - verbose
                  volumes:
                    - type: bind
                      source: ./data
                      target: /data
                    - type: volume
                      source: redis-socket
                      target: /var/run
                  logging:
                    driver: json-file
                    options:
                      max-size: 2m
                  restart: unless-stopped
              networks:
                default:
                  external:
                    name: backend
              volumes:
                redis-socket:
                  name: redis-socket
              

              Here’s redis.conf, it took me a couple of tries to get it just right:

              # create a unix domain socket to listen on
              unixsocket /var/run/redis/redis.sock
              unixsocketperm 666
              # protected-mode no
              requirepass rrrrrrrrrrrrr
              bind 0.0.0.0
              port 6379
              tcp-keepalive 300
              daemonize no
              stop-writes-on-bgsave-error no
              rdbcompression yes
              rdbchecksum yes
              # maximum memory allowed for redis
              maxmemory 50M
              # how redis will evict old objects - least recently used
              maxmemory-policy allkeys-lru
              # logging
              # levels: debug verbose notice warning
              loglevel notice
              logfile ""
              always-show-logo yes
              

              NEXTCLOUD

              ---
              version: '3.8'
              services:
                nextcloud:
                  image: nextcloud:27-fpm
                  env_file:
                    - data/environment.txt
                  volumes:
                    - type: bind
                      source: ./data/html
                      target: /var/www/html
                    - type: volume
                      source: redis-socket
                      target: /redis
                    - type: volume
                      source: postgres-socket
                      target: /postgres
                    - type: tmpfs
                      target: /tmp:exec
                    - type: bind
                      source: ./data/zz-docker.conf
                      target: /usr/local/etc/php-fpm.d/zz-docker.conf
                    - type: bind
                      source: ./data/opcache_cli.conf
                      target: /usr/local/etc/php/conf.d/opcache_cli.conf
                  networks:
                    - web
                    - backend
                  logging:
                    driver: json-file
                    options:
                      max-size: 2m
                  restart: unless-stopped
                crond:
                  image: nextcloud:27-fpm
                  entrypoint: /cron.sh
                  env_file:
                    - data/environment.txt
                  volumes:
                    - type: bind
                      source: ./data/html
                      target: /var/www/html
                    - type: bind
                      source: ./data/zz-docker.conf
                      target: /usr/local/etc/php-fpm.d/zz-docker.conf
                    - type: volume
                      source: redis-socket
                      target: /redis
                    - type: volume
                      source: postgres-socket
                      target: /postgres
                    - type: tmpfs
                      target: /tmp:exec
                  networks:
                    - web
                    - backend
                  logging:
                    driver: json-file
                    options:
                      max-size: 2m
                  restart: unless-stopped
                collabora:
                  image: collabora/code:23.05.5.4.1
                  privileged: true
                  environment:
                    extra_params: "--o:ssl.enable=false --o:ssl.termination=true"
                    aliasgroup1: 'https://my.nextcloud.domain.org:443'
                  cap_add:
                    - MKNOD
                  networks:
                    - web
                  logging:
                    driver: json-file
                    options:
                      max-size: 2m
                  restart: unless-stopped
              networks:
                backend:
                  external:
                    name: backend
                web:
                  external:
                    name: web
              volumes:
                redis-socket:
                  name: redis-socket
                postgres-socket:
                  name: postgres-socket
              

              The environment.txt file holds the hostnames, logins, passwords, etc.:

              POSTGRES_DB=nextcloud
              POSTGRES_USER=xxxxxxx
              POSTGRES_PASSWORD=yyyyyyyyyyyyyyyyyyy
              POSTGRES_SERVER=postgres
              POSTGRES_HOST=/postgres/.s.PGSQL.5432
              NEXTCLOUD_ADMIN_USER=aaaaa
              NEXTCLOUD_ADMIN_PASSWORD=hhhhhhhhhhhhhhhhhhh
              REDIS_HOST=redis
              REDIS_HOST_PORT=6379
              REDIS_HOST_PASSWORD=rrrrrrrrrrrrr
              
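Worth noting: in this environment file, Postgres already connects through its unix socket (POSTGRES_HOST is the socket path), while Redis is still addressed over TCP (REDIS_HOST=redis). To answer the socket question upthread for Redis as well, Nextcloud's config.php accepts a socket path as the Redis host together with port 0. With the volumes above, the socket should surface at /redis/redis/redis.sock inside the Nextcloud container (that path is my reading of the mounts, so verify it):

```php
// config.php fragment (illustrative; verify the socket path against your mounts)
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => '/redis/redis/redis.sock', // unix socket path instead of a hostname
  'port' => 0,                         // 0 tells Nextcloud to use the socket
  'password' => 'rrrrrrrrrrrrr',       // matches requirepass in redis.conf
],
```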

              The zz-docker.conf file sets some process tuning and log format options; some of them might not even be necessary:

              [global]
              daemonize = no
              error_log = /proc/self/fd/2
              log_limit = 8192
              
              [www]
              access.log = /proc/self/fd/2
              access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"
              catch_workers_output = yes
              decorate_workers_output = no
              clear_env = no
              
              user = www-data
              group = www-data
              
              listen = 9000
              listen = /var/www/html/.fpm-sock
              listen.owner = www-data
              listen.group = www-data
              listen.mode = 0666
              listen.backlog = 512
              
              pm = dynamic
              pm.max_children = 16
              pm.start_servers = 6
              pm.min_spare_servers = 4
              pm.max_spare_servers = 6
              pm.process_idle_timeout = 30s
              pm.max_requests = 512
              
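A quick sanity check on the pm.* tuning above: with Nextcloud's recommended 512MB per process (quoted earlier in the thread), pm.max_children = 16 bounds the worst-case PHP-FPM memory, which you can estimate in one line:

```shell
# Worst-case PHP-FPM memory if every worker hits the recommended budget.
MAX_CHILDREN=16    # pm.max_children from the zz-docker.conf above
MB_PER_WORKER=512  # Nextcloud's recommended minimum RAM per process
echo "worst-case FPM memory: $((MAX_CHILDREN * MB_PER_WORKER)) MB"
# prints: worst-case FPM memory: 8192 MB
```

So these settings assume roughly 8GB headroom for FPM alone; on smaller hosts, pm.max_children would need to shrink accordingly.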

              The opcache_cli.conf file has a single line:

              opcache.enable_cli=1
              

              I don’t remember why it’s there but it’s working so I’m not touching it :-D

              Good luck :-)

    • qfla@alien.topB · 1 point · 1 year ago

      Also not OP. I run Nextcloud on a 10th-gen i3 on spinning rust, and performance is good. I run it in an LXC container, though, so without Docker.

    • spokale@alien.topB · 1 point · 1 year ago

      I dumped Synology and just use Proxmox for the automatic ZFS support; then I can run my apps in either containers or VMs, and even do GPU passthrough if needed.

  • Charming-Molasses-22@alien.topB · 1 point · 1 year ago

    I use linuxserver.io's Nextcloud Docker image. While I've seen people struggle to set up Nextcloud properly to the point of just giving up and installing the snap version, I can count the number of times I've needed to intervene manually with LSIO's Nextcloud image. It works like a charm.

  • murdaBot@alien.topB · 1 point · 1 year ago

    PSA: saying “I run Nextcloud and don’t have any problems” doesn’t help anyone or contribute anything useful to the conversation. It just makes you look like an insecure fanboy.

    • primalbluewolf@alien.topB · 2 points · 1 year ago

      Disagree. Seeing as OP has not posted anything beyond "I run Nextcloud and have problems", providing a counterexample is straightforward and expected.

        • primalbluewolf@alien.topB · 2 points · 1 year ago

          Well, the comments were helpful to me in trying to determine whether I want to put effort into setting up Nextcloud. A post full of alternatives, with people saying Nextcloud is buggy? Obviously, look at the alternatives.

          A post full of comments saying "you shouldn't have those issues, want some help troubleshooting your config?" plus a couple of alternatives? Probably worth looking into Nextcloud rather than writing it off.

    • HammyHavoc@alien.topB · 1 point · 1 year ago

      No, it makes you look insecure about your objectivity. Spreading FUD about a FOSS project isn't helpful, and it's usually misconfiguration or poor hardware that keeps it from running properly.

      I see plenty of folks who think they've got Redis set up but are following crap guides, so it isn't actually working.

  • xristiano@alien.topB · 1 point · 1 year ago

    I keep my homelab open source as much as I can. But when it comes to backups of my family's photos, servers, and laptops, I don't want to troubleshoot bugs that could cost me valuable data and time; that's why I gladly pay for a Synology NAS.