For sure. OAI also mentioned using a similar process. They probably failed at implementing it and will copy it as soon as they can.
It has about 18 thousand samples.
GAN - Great if it works, but get used to praying, because it's about as difficult to train as reinforcement learning. After all the pain you end up with either a complete piece of garbage or a miracle that's extremely efficient: generation is O(1), a single forward pass. Look at GigaGAN. Images are sharper and more detailed, sometimes almost impossible to tell from real ones.
Diffusion - Slow, but gets high-quality results and is super easy to train. It will probably improve in the future as we get better noise schedulers and other breakthroughs. Generation is O(n), where n is the number of denoising timesteps. Images are smoother, but still good enough to fool most people.
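To make the complexity comparison concrete, here's a minimal PyTorch sketch. The tiny MLPs, sizes, and the crude denoising update are placeholder assumptions (not GigaGAN or any real sampler); the point is only that GAN sampling is one forward pass while diffusion sampling loops over n steps.

```python
# Sketch: GAN generation is O(1) forward passes, diffusion is O(n) in timesteps.
# The MLPs below are toy stand-ins, not real architectures.
import torch
import torch.nn as nn

latent_dim, img_dim, n_steps = 64, 128, 50  # arbitrary toy sizes

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
denoiser = nn.Sequential(nn.Linear(img_dim + 1, 256), nn.ReLU(), nn.Linear(256, img_dim))

# GAN: a single forward pass maps noise straight to a sample.
z = torch.randn(1, latent_dim)
gan_sample = generator(z)

# Diffusion: start from pure noise and refine over n_steps denoising passes.
x = torch.randn(1, img_dim)
for t in reversed(range(n_steps)):
    t_embed = torch.full((1, 1), t / n_steps)               # crude timestep conditioning
    predicted_noise = denoiser(torch.cat([x, t_embed], dim=1))
    x = x - predicted_noise / n_steps                        # simplified update, not a real scheduler
```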
So, are you solving a Markov decision problem here?
Yes. I am thinking of just using a metric that checks whether it made the optimal decision, based on the amount of value it delivers per capita.
The main flaw of my previous metric is that the way it's calculated biases it towards naive algorithms, which makes the results misleading. Skipping turns is sometimes the optimal decision; the metric said it was bad, but in reality it isn't.
When I dug deeper into the data, it turned out the AI was destroying the naive algorithms on this metric and on the total results we were aiming for.
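To illustrate the kind of bias I mean, here's a toy sketch with made-up reward traces. The "per-turn hit rate" metric and the numbers are purely hypothetical, not the actual evaluation; they just show how a naive per-turn score can favor an always-act policy while total value favors the policy that skips.

```python
# Hypothetical: a naive per-turn metric treats every skipped turn as a miss,
# so it favors the simple always-act policy, while total value delivered
# (what we actually care about) favors the policy that sometimes skips.

def per_turn_hit_rate(rewards):
    """Naive metric: fraction of turns where any value was delivered."""
    return sum(1 for r in rewards if r > 0) / len(rewards)

def total_value(rewards):
    """Target metric: total value delivered over the episode."""
    return sum(rewards)

naive_policy = [2, 2, 2, 2, 2]   # acts every turn, small steady gains
rl_policy    = [0, 5, 0, 5, 4]   # skips two turns (0) to set up bigger payoffs

print(per_turn_hit_rate(naive_policy), total_value(naive_policy))  # 1.0, 10
print(per_turn_hit_rate(rl_policy), total_value(rl_policy))        # 0.6, 14
```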
There's way too much speculation. Speculation is like a fetish on that sub.
Ayyayyay, the sourcing on this. The news must be desperate to cash in on the drama. You have all these anonymous people pretending they work at OAI. The last one said they had AGI internally. They used to do this with Google too, with conspiracy theories that Google was locking an AGI away from everyone.
"Given vast computing resources, the new model was able to solve certain mathematical problems," the person said on condition of anonymity because they were not authorized to speak on behalf of the company.
I would take this with a heavy grain of salt.
Ubuntu. Debian is also a good alternative if you are fine with dealing with an ancient Linux kernel.
After training, AI actually emits less CO2 than humans working on the same task, so the initial power draw is a sacrifice for longer-term eco-friendliness. It's not like GPU manufacturing dumping waste into ponds, which actually decimates ecosystems and water sources.
The only problem I have is the first point. You are limited by compute, and any large language model now needs to be reported to the government.
They are also weighing in on open-source weights, so this seems like a warning shot at open-source models within 270 days, which may get open-source AI banned under the guise of national security. We need to be loud and speak our thoughts now!
FU, Mr. Worldcoin. You benefit from open source and now you're trying to bite the hand that feeds you.
They have around the same number of CUDA cores. Normally, the more CUDA cores, the faster the inference.
An A100 is like a 3070 Ti with 80 GB of VRAM. An H100 is like a 4090 with 80 GB of VRAM plus hardware optimized for transformers.
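If you want to sanity-check the "more CUDA cores, faster inference" intuition on your own hardware, a rough sketch is to time fp16 matmuls (the core transformer-inference workload) and compare the number across cards. The matrix sizes and iteration count below are arbitrary, and this ignores memory bandwidth and the H100's transformer-specific hardware.

```python
# Rough fp16 matmul throughput benchmark; run the same script on each GPU to compare.
import time
import torch

assert torch.cuda.is_available(), "needs a CUDA GPU"
device = torch.device("cuda")
n, iters = 8192, 50
a = torch.randn(n, n, device=device, dtype=torch.float16)
b = torch.randn(n, n, device=device, dtype=torch.float16)

torch.cuda.synchronize()
start = time.time()
for _ in range(iters):
    c = a @ b
torch.cuda.synchronize()
elapsed = time.time() - start

# Each n x n matmul is about 2 * n^3 floating-point ops.
tflops = iters * 2 * n**3 / elapsed / 1e12
print(f"{torch.cuda.get_device_name(device)}: {tflops:.1f} TFLOP/s (fp16 matmul)")
```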
What are his thoughts on the societal impacts of AI in fictional universes like Star Wars, compared to the doomer narrative?
What would he tell people who don't know much about AI about the AI risks they hear about from Sam Altman and the media?
What advice would he give to people doing research on challenging problems? Also, any tips and tricks he has learned about training RL models.