The “why” is missing an important one: putting less load on your primary DB for frequent gets. If you can replace your cache with your primary DB, then yeah, you probably don’t need a cache. Just fix your indexing, data normalization, and/or access patterns.
There are a few things here I’m a bit… eh on:
- If the UNLOGGED tables are unavailable on replicas, you’re now putting read load back on your primary, which sounds like it defeats the point of having replicas to begin with (see the sketch after this list).
- The last claim, “…[with] better persistence than traditional caching services…”, seems like a weird one to make if these tables get truncated on DB crashes. Redis with AOF (append-only file) persistence seems far more robust.
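To make the replica point concrete, here’s a minimal sketch; the `kv_cache` table is a hypothetical stand-in, but the error shown is what Postgres raises when you try to read an unlogged table on a hot standby:

```sql
-- On a hot-standby replica, reads of an unlogged table fail outright,
-- so every one of these cache lookups has to hit the primary instead:
SELECT value FROM kv_cache WHERE key = 'user:42';
-- ERROR:  cannot access temporary or unlogged relations during recovery
```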
The point of using a cache is to have data in memory and not on disk. From what I can tell, Postgres unlogged tables are still written to (and read from) disk; the writes just skip the write-ahead log (WAL), which is what makes them fast but not crash-safe.
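For reference, here’s what defining such a table looks like (the schema is an illustrative guess, not taken from the article):

```sql
-- UNLOGGED skips WAL writes, but the heap still lives in on-disk files
-- and pages move through shared_buffers like any other table.
CREATE UNLOGGED TABLE kv_cache (
    key        text PRIMARY KEY,
    value      jsonb,
    expires_at timestamptz
);
```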
The article explains how one would go about benchmarking performance but forgets to actually include performance metrics. Luckily, it links to another write-up that does: using an unlogged table vs. a regular table reduces write times by about 45% and gives you about 3× as many transactions per second. It’s not nothing, but it’s probably not worth the code complexity vs. writing directly to a persistent table.
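If you want a rough version of that comparison yourself, a quick psql sketch (with `\timing` on) looks something like this; the table names are made up, and `pgbench -i --unlogged-tables` gives a more rigorous version of the same test:

```sql
-- Identical schemas, one WAL-logged and one not; compare the times
-- psql reports for the two bulk inserts with \timing enabled.
CREATE TABLE logged_t (id int, payload text);
CREATE UNLOGGED TABLE unlogged_t (id int, payload text);

INSERT INTO logged_t   SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;
INSERT INTO unlogged_t SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;
```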
Even the “no persistence” behavior of a cache is not strictly true: an unlogged table is only truncated if Postgres shuts down unexpectedly (kill -9 on the process, or killing the VM). If you shut the process down in a controlled manner, the unlogged table is properly persisted and still has its data when the server comes back up.
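Related, and worth knowing if you go this route: durability can be toggled after the fact with `ALTER TABLE` (using the hypothetical `kv_cache` from the earlier sketch):

```sql
-- Rewrite the table into crash-safe, WAL-logged form:
ALTER TABLE kv_cache SET LOGGED;

-- ...or trade durability back for faster writes:
ALTER TABLE kv_cache SET UNLOGGED;
```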