I find running an OpenAI-style API endpoint (llama.cpp directly when I want fine control, or LM Studio when I need something quick and easy) is the best way to go, in combination with a good chat UI designed to interface with OpenAI models.
To that end, I point Chatbox at my local LLM server, and I LOVE IT. Clean but powerful interface, markdown support, the ability to save different agents for quick recall, and more. Highly, HIGHLY recommend it.
It’s open source and available on pretty much every platform – and you can use it to interface with both local LLMs and OpenAI’s models.
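If you'd rather script against the local server instead of (or alongside) a chat UI, the same OpenAI-style endpoint works from code too. Here's a minimal sketch of the chat-completion request body that an OpenAI-compatible server accepts at `/v1/chat/completions` – the port, base URL, and model name below are assumptions (llama.cpp and LM Studio generally let you use any placeholder model name for a single loaded model):

```python
import json

# Assumed local endpoint; llama.cpp's server and LM Studio both expose an
# OpenAI-compatible API, but your host/port may differ.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build the JSON body for a POST to {BASE_URL}/chat/completions.

    The "model" value is a placeholder – local servers typically serve
    whichever model is loaded, regardless of this field.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

body = build_chat_request("Hello!")
print(json.dumps(body, indent=2))
```

You'd send that body as JSON (with `Content-Type: application/json`) to `BASE_URL + "/chat/completions"` using whatever HTTP client you like; the official `openai` Python package also works if you point its `base_url` at the local server.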
VITS2 was published recently by the authors of VITS. If I understand correctly, it incorporates transformers, runs more efficiently than VITS, and is capable of better voices too, given a good dataset. Some folks have made an open-source implementation of it, with help from the authors of the paper. See the GitHub repo.