I’m currently watching the progress of a 4 TB rsync file transfer, and I’m curious why the speeds are lower than the theoretical read/write maximums of the drives involved. I know there’s a lot that can affect transfer speeds, so I guess I’m not asking why my transfer itself isn’t going faster; I’m more just curious what the typical bottlenecks could be.

Assuming a file transfer between two physical drives, where:

  • Both drives are internal SATA III drives with 210 MB/s read/write speeds (I originally wrote 5.0 GB/s, then 5.0 Gb/s; that was the mistake: I was reading the SATA III protocol speed as the disk speed)
  • Files are being transferred using a simple rsync command
  • There are no other processes running

What would the likely bottlenecks be? Could the motherboard or processor limit the speed? The available memory? Or the on-disk layout of the files themselves (whether they are fragmented on the volumes or not)?

  • mozz@mbin.grits.dev · 11 months ago

    I mean, yeah, at this point letting it finish regardless seems like the right play. You could Ctrl-Z and then do little experiments and then resume it if you feel confident mucking around with that and you’re curious.
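
    A minimal sketch of that suspend/resume dance, assuming rsync is running in the foreground of your shell (the %1 job number is just the usual default, check with jobs):

        # press Ctrl-Z in the terminal running rsync to suspend it (sends SIGTSTP)
        jobs       # confirm rsync shows up as a stopped job
        # ...run your experiments here...
        fg %1      # resume rsync in the foreground right where it left off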

    You can estimate the current speed pretty accurately with something like “df; sleep 30; df” and then do the math.
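
    For example, a rough sketch of that math, assuming the destination is mounted at /mnt/dst (the path is just a placeholder):

        # two readings of bytes used on the destination, 30 seconds apart
        df --output=used -B1 /mnt/dst | tail -1; sleep 30; df --output=used -B1 /mnt/dst | tail -1
        # (second reading - first reading) / 30 = average bytes per second over that window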

    It’s useful to mess around with tar, because it will try to saturate its pipes without waiting, so even if switching to tar doesn’t fix anything on its own, you can start to eliminate possibilities for where the issue might be: you know for sure it won’t wait for anything from the other end before continuing to do its reads. “time dd if=/dev/zero of=file” or similar commands can also measure the speed of individual parts of the pipeline.
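
    A rough sketch of the tar-to-tar pipe, assuming the source and destination are mounted at /mnt/src and /mnt/dst (adjust the paths to your setup):

        # read everything under /mnt/src and unpack it under /mnt/dst, with no rsync protocol in the way
        time tar -cf - -C /mnt/src . | tar -xf - -C /mnt/dst
        # if pv happens to be installed, dropping it in the middle shows live throughput:
        # tar -cf - -C /mnt/src . | pv | tar -xf - -C /mnt/dst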

    (Edit: If you’re doing the dd test, make sure you write or read a ton of data, so that you’re measuring the physical disk and not the memory cache)
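
    A sketch of those dd tests with enough data to get past the page cache; the paths and the 16 GiB size are assumptions, tune them to your drives and RAM:

        # write test on the destination: 16 GiB of zeros, flushed to disk before dd reports its timing
        time dd if=/dev/zero of=/mnt/dst/ddtest bs=1M count=16384 conv=fdatasync
        # read test on the source: pick a file much larger than your RAM (or drop caches first, as root)
        time dd if=/mnt/src/somebigfile of=/dev/null bs=1M
        rm /mnt/dst/ddtest   # clean up the test file afterwards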

    Best of luck