Maybe a kind of RISC-like language could be used initially and expanded over time, although ChatGPT is already doing some amazing things with more complex languages.
Code can do things like launch servers, build out system architectures, read in a file, write pixels to the screen, make system calls, call a calculator app or talk to a quantum computer, move a robot, etc… significantly more than math.
There was a paper, which I can't find at the moment, that says that in the early stages of GPT, when they added code to its training data, it got better at reasoning. I think that math might help in some other ways, but code can be used to solve math problems and do more than math in any case.
I think we’ll get better models by having LLMs start to filter lower-quality data out of the training set, and also by using more machine-generated data, particularly in areas like code where an AI can run billions of experiments and use the successes to better train the LLM. All of this is going to cost a lot more compute.
i.e., for coding: the LLM proposes an experiment, it is run, and it keeps trying until it's successful; good results are fed back into the LLM's training and it is penalized for bad results. Learning how to code has actually seemed to help the LLM reason better in other ways, so improving that, I would expect, would help it significantly. At some point, if the coding is good enough, it might be able to write its own better LLM system.
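The propose-run-reward loop described above can be sketched roughly like this. Everything here is a made-up illustration: `propose_patch` stands in for the LLM generating a candidate, `run_tests` stands in for actually executing it, and the reward values are arbitrary.

```python
import random

def propose_patch(attempt):
    # Stand-in for the LLM proposing a candidate program.
    return f"candidate_{attempt}"

def run_tests(candidate):
    # Stand-in for compiling/running the candidate; success is random here.
    return random.random() < 0.3

def experiment_loop(max_attempts=100):
    history = []  # (candidate, reward) pairs that would feed back into training
    for attempt in range(max_attempts):
        candidate = propose_patch(attempt)
        success = run_tests(candidate)
        reward = 1.0 if success else -0.1  # penalize bad results
        history.append((candidate, reward))
        if success:
            break  # good result found; stop and keep it as a training signal
    return history

trace = experiment_loop()
print(f"attempts: {len(trace)}, final reward: {trace[-1][1]}")
```

The point is only the shape of the loop: keep generating, score each attempt against an executable check, and feed the scored trace back as training data.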
Determinism in computational models, including binary systems, relies on the ability to reproduce results given the same initial conditions and operations. In a deterministic system, if you run the same sequence of instructions with the same inputs (including the seed for random number generation), you should expect the same output every time, assuming the system is isolated from external non-deterministic factors.
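A minimal demonstration of that point: with the same seed and the same sequence of operations, a seeded pseudo-random generator reproduces the same output on every run.

```python
import random

def sample(seed):
    rng = random.Random(seed)           # isolated generator with a fixed seed
    return [rng.randint(0, 99) for _ in range(5)]

assert sample(1234) == sample(1234)     # same seed, same inputs -> same output
assert sample(1234) != sample(5678)     # different seed -> different stream
print(sample(1234))
```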
Multithreading introduces non-determinism when threads operate in a shared environment and their execution order affects the outcome. It’s the responsibility of the programmer to manage this through synchronization mechanisms to ensure deterministic behavior if required.
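As a sketch of that responsibility: four threads below increment a shared counter, and the lock is the synchronization mechanism that makes the final count deterministic. Without it, the read-modify-write interleavings can differ between runs.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # remove this lock and `counter += 1` can race:
            counter += 1  # the final total may vary from run to run

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock held
```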
There are also analog and quantum computers. Analog computers work on the principle of approximation and continuous variable manipulation, which can introduce non-deterministic elements due to physical variations. Quantum computers operate on quantum bits (qubits) and can produce non-deterministic results because they exploit quantum superposition and entanglement.
In the context of machine learning models and AI, these principles apply as well. Binary-based AI will be deterministic if the conditions are controlled, while quantum and analog AI might introduce non-deterministic elements by their nature or design for efficiency.
Logging the creation time would help establish when the text was created or recreated. If someone were to, for instance, write the text before the creation time, they could argue that they came up with it before the AI. If someone used the text years later for an assignment, it would be a hard argument to make that AI came up with it, unless it had propagated online.
If it happens within the timeframe between the assignment being set and, say, a month later, then that is a much stronger correlation that it might have come from AI, increasing the 9s in the probability.
Note: the AI would actually have to log every time it generated the text.
The only way to do it closely for text would be to log every message for every generation with a creation time.
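A toy sketch of that logging scheme: every generated message is recorded with a content hash and a creation timestamp, so a later match can be dated against, say, an assignment deadline. The in-memory list and the function names here are purely hypothetical illustrations.

```python
import hashlib
import time

LOG = []  # stand-in for a persistent per-generation log store

def log_generation(text):
    # Record a hash of the generated text plus when it was generated.
    entry = {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "created_at": time.time(),  # Unix timestamp of this generation
    }
    LOG.append(entry)
    return entry

def was_generated_before(text, deadline):
    # Did any logged generation of exactly this text predate the deadline?
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return any(e["sha256"] == digest and e["created_at"] <= deadline
               for e in LOG)

log_generation("some generated essay text")
print(was_generated_before("some generated essay text", time.time() + 1))
```

One obvious limitation: an exact hash only matches verbatim text, so even a lightly paraphrased copy would slip past this check.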
Much of what AI writes is exactly what a human would say.
At Meta, if you have a master's and then get 5 years of experience: 200k to 500k. After 10 years, maybe 700k, depending on how well you do and how popular ML is then. ML is hugely popular right now at FAANG. Although you have to do well in the interview and have interesting research.