• Rimu@piefed.social · 3 days ago

    Wow, very interesting to see Sync doing so well in comparison!

    But I thought a lot of the advantage of async is its ability to handle many concurrent requests? What happens if you run the same tests, but this time with 100 requests in parallel?
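
    Not the article’s harness, just a minimal sketch of what such a test could look like, assuming a hypothetical endpoint at http://localhost:8000/items and the httpx client:

        import asyncio
        import time

        import httpx  # third-party async-capable HTTP client (one option of many)


        async def run(n: int = 100, url: str = "http://localhost:8000/items") -> None:
            # Fire n requests at the endpoint under test and await them all at once.
            async with httpx.AsyncClient(timeout=30.0) as client:
                start = time.perf_counter()
                responses = await asyncio.gather(*(client.get(url) for _ in range(n)))
                elapsed = time.perf_counter() - start
            ok = sum(1 for r in responses if r.status_code == 200)
            print(f"{ok}/{n} requests succeeded in {elapsed:.2f}s")


        asyncio.run(run())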

    • Ephera@lemmy.ml · 2 days ago

      I mean, it’s a bit of a weird comparison to begin with. Web servers have always been able to parallelize without async, because they’d just spawn a new thread per request. The real advantage of async is in programs where working with threads isn’t as trivial…
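
      A minimal sketch of that thread-per-request model, using only the standard library (not how any particular framework implements it):

          from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


          class Handler(BaseHTTPRequestHandler):
              def do_GET(self) -> None:
                  # ThreadingHTTPServer hands each request its own thread, so a slow
                  # (blocking) handler does not stall other clients.
                  self.send_response(200)
                  self.send_header("Content-Type", "text/plain")
                  self.end_headers()
                  self.wfile.write(b"hello\n")


          ThreadingHTTPServer(("127.0.0.1", 8000), Handler).serve_forever()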

      • hackeryarn@lemmy.world (OP) · 1 day ago

        I totally agree with you. This article was really a response to a lot of hype around async web servers in Python.

        I kind of knew what to expect, but wanted to throw real numbers against it. I was surprised to see a 10x slowdown with the async switch in Django.

    • solrize@lemmy.ml · 3 days ago

      I’ve run Python programs with 1000+ threads and it’s fine. 10k might be harder. Async is overrated. Erlang and Elixir are preferable.
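
      A toy sketch to sanity-check that yourself, just parking 1000 threads on blocking sleeps (numbers are illustrative):

          import threading
          import time


          def worker() -> None:
              time.sleep(10)  # stand-in for blocking I/O; a sleeping thread is cheap


          threads = [threading.Thread(target=worker) for _ in range(1000)]
          for t in threads:
              t.start()
          print(f"{threading.active_count()} threads running")
          for t in threads:
              t.join()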

      • logging_strict@programming.dev · 2 days ago

        Overrated or not, the choice is between sync and async drivers. Actually, there is no choice, just an illusion of choice.

        So async or async … choose. Without the web router running multithreaded, concurrency will have minimal effect. But everything is step by step. Free-threaded CPython hasn’t been out for very long.

  • sherbang@chaos.social · 3 days ago

    @hackeryarn It’s not clear from this writeup how SQLAlchemy is set up. If you’re using a sync postgres driver then you’re doing async-to-sync in your code and not testing what you think you’re testing.

    A test of different async SQLAlchemy configurations would be a helpful follow-up, including verifying that the SQLAlchemy setup is async all the way through.

    • logging_strict@programming.dev · 2 days ago

      I live and breathe this stuff

      SQLAlchemy’s AsyncSession calls greenlet_spawn, which wraps the corresponding sync Session method. For an async dialect+driver, the SQLAlchemy DBAPI driver will make async connections.

      Hey, let’s make an async call. You mean rewrite the exact same code, except with await sprinkled all over? Fuck that! Once is enough. Instead, wrap the sync call in a greenlet_spawn. And then return to the gulag of static type checking hell forever.

      So is it async all the way through? No. It’s async enough™
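
      Roughly what that looks like from the caller’s side; a sketch assuming a hypothetical asyncpg DSN, with the greenlet bridging happening inside SQLAlchemy rather than in user code:

          import asyncio

          from sqlalchemy import text
          from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine

          # "postgresql+asyncpg" selects an async dialect/driver; with a sync driver
          # (e.g. psycopg2) this engine could not be awaited at all.
          engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")


          async def main() -> None:
              async with AsyncSession(engine) as session:
                  # Under the hood, AsyncSession.execute() runs the sync Session.execute()
                  # inside greenlet_spawn(), which yields back to the event loop whenever
                  # the driver would block on the socket.
                  result = await session.execute(text("SELECT 1"))
                  print(result.scalar_one())


          asyncio.run(main())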

    • hackeryarn@lemmy.world (OP) · 3 days ago

      It is using the async driver. I’m using FastAPI’s thin wrapper around SQLAlchemy, which also does some slight tuning so it works better with FastAPI in async mode.

  • loweffortname@lemmy.blahaj.zone · 3 days ago

    It wasn’t clear whether they have a connection proxy in front of the Postgres instance. PostgreSQL connections are expensive, so something like PgBouncer could also make a big difference here.

    (I realize the point was to test python web servers, but it would have been an interesting additional metric.)

    • hackeryarn@lemmy.world (OP) · 3 days ago

      No connection proxy in this case. The pooled sync test uses client-side pooling, which shows better performance. Using a proxy would have the same effect; it just moves the pooling to the server side.
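
      For context, a sketch of what client-side pooling configuration looks like in SQLAlchemy (illustrative numbers and DSN, not the article’s actual settings); a proxy like PgBouncer would instead pool on the server side behind the DSN:

          from sqlalchemy.ext.asyncio import create_async_engine

          # Client-side pooling: the app process keeps connections open and reuses
          # them across requests.
          engine = create_async_engine(
              "postgresql+asyncpg://user:pass@localhost/db",  # hypothetical DSN
              pool_size=20,       # steady-state connections held by the pool
              max_overflow=10,    # extra connections allowed under bursts
              pool_timeout=30,    # seconds to wait for a free connection
          )

          # Server-side pooling would instead point this DSN at PgBouncer
          # (e.g. localhost:6432) and let the proxy share real Postgres connections.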