• 0 Posts
  • 14 Comments
Joined 11 months ago
Cake day: October 27th, 2023




  • Well, it depends on what you are building. If you are actually doing ML research, i.e. you want to publish papers, then evaluation is standard practice and you won’t get published without it. There’s a bunch of tricks for evaluating generative models that you can find in these papers. I remember in grad school our TA made us read a paper, and then in the discussion he said he thought the method it proposed was not good at all; he had wanted us to read it to learn about its evaluation metric, which he deemed “very clever”.


  • It’s basically impossible to be completely caught up, so don’t feel bad. I’m not sure it’s all that useful either: you should know of technologies / techniques / architectures and what they are used for, but you don’t need to know the details of how they work or how to implement them from scratch. Just being aware means you know what to research when the appropriate problem comes your way.

    Also, a lot of the newest stuff is just hype and won’t stick. If you’ve been in ML research since 2017 (when transformers came out), you should know that. How many different CNN architectures came out between ResNet in 2015 and now? And still most people simply use ResNet.




  • I mean, it’s obviously a power play. But the characters involved are these:

    • Altman: entrepreneur extraordinaire, head of YC, CEO, investor, etc.

    • Brockman: entrepreneur extraordinaire, former CTO of Stripe (e-commerce infra company), investor, etc.

    • Nadella: CEO of Microsoft, nuff said.

    • Sutskever: researcher extraordinaire, academic

    • McCauley: RAND Corp scientist (no idea what that means but it has scientist in the name)

    • Toner: Georgetown academic.

    All of the tech entrepreneur people and investors – the people who are obsessed with just making money – are on one side. And all of the academic, science people are on the other side. Recall that OpenAI was founded by a bunch of researchers and academics who explicitly made it a nonprofit, which Altman changed once he became CEO in 2019.

    Idk if the academics really care about the betterment of mankind, but I know for a fact that the other guys are driven by pure greed.







  • I interviewed there and got rejected in the final round in the spring. The people interviewing me seemed competent, and I liked their interview questions (I’m not exactly sure if I was rejected because my answers were wrong; it didn’t seem that way to me, but I did kinda feel like the last interviewer and I didn’t really click). Though after the fact I did do some research into the founders, and yes, I was surprised to find that she’s more of a Silicon Valley person than an AI person.

    But Sam Altman is a Silicon Valley person and he’s running OpenAI just fine (in terms of raising the valuation of the company and increasing returns to investors, which is the job of a CEO). Although he had Ilya; I don’t think there is anyone that impressive running Generally Intelligent’s research.