Wow, very interesting to see Sync doing so well in comparison!
But I thought a lot of the advantage of async is its ability to handle many concurrent requests? What happens if you run the same tests, but this time with 100 requests in parallel?
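Firing that kind of parallel load is easy to sketch with a thread pool. A hedged stand-in (not from the article; `fetch` and its `sleep` are placeholders for a real HTTP call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i: int) -> int:
    time.sleep(0.01)  # simulate network latency of one request
    return i

start = time.perf_counter()
# 100 workers -> all 100 "requests" are in flight at the same time
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(fetch, range(100)))
elapsed = time.perf_counter() - start
# Because the sleeps overlap, elapsed is far below the serial 100 * 0.01s
```

Swapping `fetch` for a real client call against each server under test would give the 100-parallel-requests comparison.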
I mean, it’s a bit of a weird comparison to begin with. Web servers were always able to parallelize without async, because they’d just spawn a new thread per request. The real advantage of async is programs where working with threads isn’t as trivial…

I totally agree with you. This article was really a response to a lot of hype around async web servers in Python.
I kind of knew what to expect, but wanted to throw real numbers against it. I was surprised to see a 10x slowdown with the async switch in Django.
This is running with concurrent requests: 64 workers firing requests, to be exact.
That sounds like plenty. Cool!
I’ve run python progs with 1000+ threads and it’s fine. 10k might be harder. Async is overrated. Erlang and elixir are preferable.
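The "1000+ threads is fine" claim is easy to sanity-check with a toy sketch: spawn 1000 threads that each do a tiny bit of locked work, then join them all.

```python
import threading

counter = 0
lock = threading.Lock()

def work() -> None:
    global counter
    with lock:  # serialize the increment so no updates are lost
        counter += 1

threads = [threading.Thread(target=work) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 1000 once every thread has run and been joined
```

On typical hardware this starts and finishes in well under a second; the pain points with thousands of threads are memory per stack and scheduler overhead, not correctness.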
Overrated or not, the choice is between sync and async drivers. Actually there is no choice, just an illusion of choice.
So async or async … choose. Without the web router running multithreaded, concurrency will have minimal effect. But everything is step by step; free-threaded Python hasn’t been out for very long.
Only reliable web server is an Erlang web server.
@hackeryarn It’s not clear from this writeup how SQLAlchemy is set up. If you’re using a sync postgres driver then you’re doing async-to-sync in your code and not testing what you think you’re testing.
A test of different async SQLAlchemy configurations would be a helpful complement to this, including verifying that the SQLAlchemy setup is async all the way through.
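For reference, "async all the way through" comes down to the dialect+driver in the URL. A configuration sketch (hedged; the URL, credentials, and query are placeholders, not the article's setup):

```python
from sqlalchemy import text
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

# postgresql+asyncpg -> an async driver: connections are non-blocking
# and every query is awaited on the event loop.
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
Session = async_sessionmaker(engine, expire_on_commit=False)

async def ping() -> None:
    async with Session() as session:
        await session.execute(text("SELECT 1"))

# By contrast, "postgresql+psycopg2" is a sync driver: using it behind an
# async endpoint means every query blocks the loop (async-to-sync).
```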
I live and breathe this stuff
SQLAlchemy’s AsyncSession calls greenlet_spawn, which wraps a sync Session method. For an async dialect+driver, the SQLAlchemy DBAPI adapter makes async connections underneath.
Hey, let’s make an async call. You mean rewrite the exact same code except with await sprinkled about? Fuck that! Once is enough. Instead, wrap the sync call in a greenlet_spawn. And then return to the gulag of static type checking hell forever.
So is it async all the way thru? No. It’s async enough™
It is using the async driver. I am using FastAPI’s thin wrapper around SQLAlchemy which also does some slight tuning for it to work better with FastAPI in an async mode.
It wasn’t clear whether they have a connection proxy in front of the Postgres instance. PostgreSQL connections are expensive, so something like PgBouncer could also make a big difference here.
(I realize the point was to test python web servers, but it would have been an interesting additional metric.)
No connection proxy in this case. The pooled sync test uses client-side pooling, which shows better performance. Using a proxy would have the same effect; it just moves the pooling to the server side.
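For anyone curious, client-side pooling in SQLAlchemy is just a few `create_engine` parameters. A configuration sketch (the kwargs are real; the URL and values are illustrative, not the article's):

```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/db",
    pool_size=20,        # persistent connections kept open in the pool
    max_overflow=10,     # extra connections allowed under burst load
    pool_pre_ping=True,  # validate a connection before handing it out
)
```

A server-side proxy like PgBouncer does the same job one hop later, which mainly pays off when many separate app processes share one database.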