I had a discussion in class with one of my teachers. He says that AI is, and can only be, deterministic, because “even a deep learning neural network is a set of equations running on a computer, and the stochastic factor is added at the beginning. But the output of a model is always deterministic, even if it’s not interpretable by humans.”
How would you reply? (Possibly with examples and papers)
Tysm!
If you keep dropout active at inference time, you don’t get deterministic results, even if the input stays constant. People sometimes think you can use this to derive uncertainty estimates (I don’t).
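A minimal PyTorch sketch of that point (the toy model and sizes below are just assumptions for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny hypothetical network with a dropout layer
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(16, 1))
x = torch.randn(1, 4)

model.train()   # keeps dropout active, as it would be during training
with torch.no_grad():
    print(model(x).item(), model(x).item())   # two different outputs for the same input

model.eval()    # disables dropout
with torch.no_grad():
    print(model(x).item(), model(x).item())   # identical outputs: the usual deterministic case
```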
He is right tho
Your teacher’s argument is based on the fact that pseudo-random generators are deterministic, which is entirely irrelevant to ML theory.
If you want to make the point that the “can only be” part is extremely far-reaching, just bring quantum physicists into the discussion.
A single AI network is deterministic. If you apply the same input, you get the same output. If you train on the same dataset, in the same order, with the same initial weights and hyperparameters, you will get an identical training result.
The tricky thing is that AI is high-dimensional and non-linear, so what appears to be a very small change to the input can cause a large change in the output. I think the clearest illustration of this is adversarial examples.
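Both halves of that are easy to show in a quick PyTorch sketch (the toy network and the FGSM-style perturbation are purely illustrative assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 2))
x = torch.randn(1, 2)

# Determinism: same weights + same input -> same output (on the same hardware/kernels).
print(torch.equal(net(x), net(x)))   # True

# Sensitivity: a tiny, targeted nudge to the input (FGSM-style) can flip the prediction.
x = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(net(x), torch.tensor([0]))
loss.backward()
x_adv = x + 0.5 * x.grad.sign()      # small step in the worst-case direction
print(net(x).argmax().item(), net(x_adv).argmax().item())   # may differ
```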
It also has to be the same hardware…
Human output is the same - deterministic with a stochastic factor, although you may prefer to call the latter free will.
The joke is that so are you, and so is he.
Floating-point arithmetic is “non-deterministic”.
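What that usually refers to: float addition isn’t associative, so the summation order matters, and parallel reductions (GPU kernels, multi-threaded BLAS) don’t guarantee an order. A quick illustration:

```python
# Floating-point addition is not associative, so the order of operations changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False

# Same three numbers, different summation order, different answer.
print(sum([1e16, 1.0, -1e16]))      # 0.0
print(sum([1e16, -1e16, 1.0]))      # 1.0
```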
I will give a different answer: systems that do online learning are certainly not deterministic in the common sense of the word, as their internal state changes based on non-deterministic behaviour.
Systems that rely on noise generation via non-deterministic processes are also non-deterministic.
This non-determinism is rooted in changes to parts of the state or the input, but for identical state and inputs, the systems are deterministic as long as no bitflips or quantum effects occur in the silicon.
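A small sketch of the online-learning point (the learner and the input stream here are made-up stand-ins):

```python
import random

class OnlineMean:
    """Running estimate updated from a stream; its internal state is never final."""
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def predict(self):
        return self.mean

model = OnlineMean()
for _ in range(100):
    # Unseeded noise stands in for external, non-deterministic inputs (users, sensors, the web).
    model.update(random.gauss(0.0, 1.0))
print(model.predict())   # depends on what the stream happened to contain this time
```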
Basically yes. As far as we know, human brains don’t employ quantum randomness in any meaningful manner, so they’re also deterministic.
What difference does it make? It doesn’t say much about what AI systems or humans can or can’t do.
Ultimately it depends on whether the system is closed or open. If the system is closed (a model with inputs and outputs), then it’s deterministic. If the system is open (it reaches out to the internet, it asks you for your own opinion, it hires Mechanical Turk workers from Amazon to fine-tune it, etc.), then it might not be deterministic (if any of the inputs are not deterministic).
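In code, the closed/open distinction is roughly this (the wall clock below is just a stand-in for any external dependency):

```python
import time

def closed_model(x):
    # Closed system: output is a fixed function of the input -> deterministic.
    return 3 * x + 1

def open_model(x):
    # Open system: output also depends on something outside the model
    # (the wall clock here, standing in for the internet or human feedback).
    return 3 * x + 1 + time.time() % 2

print(closed_model(5) == closed_model(5))   # True, every time
print(open_model(5) == open_model(5))       # almost always False
```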
I think MCMC is a family of methods that puts the lie to the claim of determinism. Unless his point is “if you set the random seed the same, then this code block will produce the same result with perfect fidelity”. In which case, sure, okay.
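Both readings are easy to see with a toy Metropolis sampler (targeting a standard normal; purely illustrative):

```python
import math
import random

def metropolis(n_steps, seed=None):
    """Tiny Metropolis chain targeting p(x) proportional to exp(-x^2 / 2)."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, 1.0)
        # Accept with probability min(1, p(proposal)/p(x)), computed in log space.
        if math.log(rng.random()) < (x * x - proposal * proposal) / 2:
            x = proposal
        chain.append(x)
    return chain

print(metropolis(5, seed=42) == metropolis(5, seed=42))   # True: a fixed seed makes the chain reproducible
print(metropolis(5, seed=1) == metropolis(5, seed=2))     # almost surely False: otherwise the output looks random
```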