For small loads it’s perfectly fine. A big load would push the transformer’s duty cycle and the excess heat would become an issue, but for something like a router or a NUC it’s more than adequate.
For anything bigger, like a desktop PC or a server in active use, I’d upgrade to a larger UPS so the components run at a more comfortable duty cycle, but that’s also an additional expense.
Nobody seems to consider that:
A: AI should be trained to be truthful and accurate, not gagged from speaking by arbitrary rules.
B: AI will not have exponential impact because it won’t be trusted any more than humans already are. We already know how to gate untrusted user input behind limited access, and the same applies to language model outputs (see the sketch below).
I already have to deal with regular humans who wouldn’t obey these rules, so why should I expect an LLM to act any differently?
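To make point B concrete, here is a minimal Python sketch of what "gating" means in practice: the model’s output is treated exactly like untrusted user input and checked against an allowlist before anything acts on it. The function name and the allowlist contents are hypothetical, just for illustration, not taken from any particular library.

    # Treat LLM output like untrusted user input: nothing acts on it until
    # it passes an explicit allowlist check. Names here are hypothetical.

    ALLOWED_ACTIONS = {"summarize", "translate", "search"}  # hypothetical allowlist

    def gate_model_output(raw_output: str) -> str:
        """Validate a model-proposed action; refuse anything not explicitly allowed."""
        action = raw_output.strip().lower()
        if action not in ALLOWED_ACTIONS:
            raise PermissionError(f"Action {action!r} is not permitted")
        return action

    if __name__ == "__main__":
        # The model's text is validated the same way form input from a human would be.
        try:
            print(gate_model_output("search"))      # allowed
            print(gate_model_output("delete_all"))  # rejected
        except PermissionError as err:
            print(f"Blocked: {err}")

The point isn’t the specific allowlist; it’s that the output of a language model sits on the untrusted side of the same boundary we already use for human input, so the existing access-control machinery carries over unchanged.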