This tendency of AI to just outright lie when it doesn’t have a real answer is mildly upsetting. It suggests that the people building these systems have no idea how (or no interest in how) to implement basic ethical guardrails. That doesn’t bode well for how these systems will evolve and what they will be capable of.
What it boils down to is that they’re not really AI. They’re large language models, and the chatbots built around them are essentially predictive autofill on steroids: they generate the statistically most plausible next words, with no internal notion of truth, so a confident-sounding fabrication is just as natural an output as a fact.
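For the skeptical, here’s a minimal sketch of what “predictive autofill” means mechanically. It uses an invented toy bigram table in place of a real trained model; the words and probabilities are made up for illustration, but the core move is the same: repeatedly pick a likely next word, with no truth check anywhere.

```python
# Toy "learned" probabilities: P(next_word | current_word).
# These numbers are invented for illustration, not from any real model.
bigram_probs = {
    "the":    {"cat": 0.5, "answer": 0.3, "moon": 0.2},
    "cat":    {"sat": 0.7, "is": 0.3},
    "answer": {"is": 0.9, "was": 0.1},
    "is":     {"42": 0.6, "unknown": 0.4},
}

def generate(start: str, max_words: int = 4) -> str:
    """Greedily extend a prompt one word at a time, always taking the
    highest-probability continuation -- autofill, applied repeatedly."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no learned continuation; a real LLM rarely halts like this
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> "the cat sat": the most *likely* output, not the most *true*
```

Real LLMs do this over tens of thousands of tokens with a neural network instead of a lookup table, but the objective is the same: plausibility, not accuracy.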