• hokage@lemmy.world · 1 year ago

    What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail for lack of funding; it's basically MS's LLM play at this point.
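The runway math above can be sanity-checked with a quick back-of-the-envelope sketch (assuming only the reported $700,000/day burn and the $10 billion investment; training runs, salaries, and other costs are ignored):

```python
# Back-of-the-envelope runway check, using only the figures quoted
# in the thread: $700,000/day in running costs vs. Microsoft's
# reported $10 billion investment.

DAILY_COST = 700_000             # USD per day (reported)
FUNDING = 10_000_000_000         # USD (reported MS investment)

annual_cost = DAILY_COST * 365   # yearly burn at that daily rate
runway_years = FUNDING / annual_cost

print(f"Annual cost: ${annual_cost:,}")     # $255,500,000, i.e. ~$256M
print(f"Runway: {runway_years:.1f} years")  # roughly 39 years
```

So even rounding the annual burn up to ~$256M, the $10B covers nearly four decades of inference at the quoted rate.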

      • Altima NEO@lemmy.zip · 1 year ago

        Yeah where the hell do these posters find these articles anyway? It’s always from blogs that repost stuff from somewhere else

    • Wats0ns@sh.itjust.works · 1 year ago

      OpenAI's biggest expense is infrastructure, which is rented from… Microsoft. Even if the company folds, it will have given most of the invested money back to Microsoft.

      • fidodo@lemm.ee · 1 year ago

        MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.

    • monobot@lemmy.ml · 1 year ago

      While the title is clickbait, they do say right at the beginning:

      *Right now, it is pulling through only because of Microsoft's $10 billion funding*

      Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. $700k is only one model; there are others, plus building new models and running the company. It is easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

    • lemmyvore@feddit.nl · 1 year ago

      I mean, you’re correct in the sense that Microsoft basically owns their ass at this point, and that Microsoft doesn’t care if they make a loss because it’s sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it’s still true that OpenAI’s business plan was unsustainable, hyped hogwash.

      • chiliedogg@lemmy.world · 1 year ago

        Their business plan got Microsoft to drop 10 billion dollars on them.

        None of my shitty plans have pulled that off.

        • lemmyvore@feddit.nl · 1 year ago

          If they got any of that into their own pockets kudos to them.

          Mainly they used it to pay for the tech and research and it’s all reverting back to Microsoft eventually. Going bankrupt is not quite the same as being acquired.

      • fidodo@lemm.ee · 1 year ago

        Also, their biggest expenses are cloud expenses, and they use the MS cloud, so that basically means that Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits which is a ridiculously good deal for MS. Zero chance MS would let them fail.

    • R0cket_M00se@lemmy.world · 1 year ago

      Almost every company uses either Google or Microsoft Office products, and we already know Microsoft is working on an AI offering for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure, or invent a new tier that includes AI. Then they’ll recoup that investment in months.

  • simple@lemm.ee · 1 year ago

    There’s no way Microsoft is going to let it go bankrupt.

  • Elderos@lemmings.world · 1 year ago

    That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

    • glockenspiel@lemmy.world · 1 year ago

      I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

      What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.

      I’ve noticed it has become worse at rubber-ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr. devs.

      A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

      • Windex007@lemmy.world · 1 year ago

        As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.

        It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.

        Would you hire a dev with no comprehension of the task, who can not reliably communicate what their code does, can not be tasked with finding and fixing their own bugs, is incapable of having accountability, can not be reliably coached, is often wrong and refuses to accept or admit it, can not comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

        ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

        • JackbyDev@programming.dev · 1 year ago

          Search engines aren’t truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it’s just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

          • Windex007@lemmy.world · 1 year ago

            I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I still place ChatGPT in a league of its own.

            I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from stack overflow and it just worked” memes. Maybe I underestimate how many people in the industry are that fundamentally different from how we work.

            Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.

            If you have like… Design or patterning questions, bring that to the team. We’ll run through it together with the benefits of having the contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.

            The additional time required to validate a chatGPT generated piece of code could have instead been spent invested in the dev to just do it right and to properly fit within our context the first time, and the dev will be smarter for it and that investment in the dev will pay out every moment forward.

            • JackbyDev@programming.dev · 1 year ago

              I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except for once ages ago but even then I didn’t really check it and it was a niche piece of software without many examples online.

        • SupraMario@lemmy.world · 1 year ago

          Don’t underestimate C levels who read a Bloomberg article about AI to try and run their entire company off of it…then wonder why everything is on fire.

        • ewe@lemmy.world · 1 year ago

          Would you hire a dev with no comprehension of the task, who can not reliably communicate what their code does, can not be tasked with finding and fixing their own bugs, is incapable of having accountability, can not be reliably coached, is often wrong and refuses to accept or admit it, can not comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

          Not me, but my boss would… wait a minute…

        • flameguy21@lemm.ee · 1 year ago

          Honestly once ChatGPT started giving answers that consistently don’t work I just started googling stuff again because it was quicker and easier than getting the AI to regurgitate stack overflow answers.

      • bmovement@lemmy.world · 1 year ago

        Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.

        Edit: shit, maybe I’m too dependent on it.

        • JimmyMcGill@lemmy.world · 1 year ago

          I’m also having a good time with copilot

          I’m considering asking my company to pay for the subscription, as I can justify that it’s worth it.

          Yes, many times it is wrong, but even if it’s only 80% correct, at least I get a suggestion on how to solve an issue. Many times it suggests a function and the code snippet has something missing, but I can easily fix or improve it. Without it I would probably not know about that function at all.

          I also want to start using it for documentation and unit tests. I think that’s where it will really be useful.

          Btw if you aren’t in the chat beta I really recommend it

          • Jerkface@lemmy.world · 1 year ago

            Just started using it for documentation, really impressed so far. Produced better docstrings for my functions than I ever do in a fraction of the time. So far all valid, thorough and on point. I’m looking forward to asking it to help write unit tests.

            • JimmyMcGill@lemmy.world · 1 year ago

              It honestly seems better suited for those tasks because it really doesn’t need to know anything that you’d otherwise have to tell it.

              The code is already there, so it can get literally all the info it needs, and it is quite good at grasping what a function does, even if it sometimes lacks the context of the why. But that’s not relevant for unit tests, and for documentation that’s where the user comes in. It’s also why it’s called Copilot: you still make the decisions.

    • Gsus4@feddit.nl · 1 year ago

      But what did they expect would happen, that more people would subscribe to Pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were, and then sell that information or repackage use cases as individual added-value services.

    • Immersive_Matthew@sh.itjust.works · 1 year ago

      I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. It feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does, it almost always fixes it in the next code edit. It has saved me untold hours.

  • Billy_Gnosis@lemmy.world · 1 year ago

    If AI were so great, it would find a way to operate at a fraction of its current cost.

    • Death_Equity@lemmy.world · 1 year ago

      Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

      • vrighter@discuss.tchncs.de · 1 year ago

        If we don’t know, it doesn’t know.

        If we know, but there’s no public text about it, it doesn’t know either.

        It is trained on text that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.

      • Buffaloaf@lemmy.world · 1 year ago

        I asked Bard how ChatGPT can fix itself, here’s what it said:

        ChatGPT can fix itself in a number of ways, including:

        Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.

        Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.

        Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.

        Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.

        Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.

        • lagomorphlecture@lemm.ee · 1 year ago

          “Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”

          So basically create its own Fox News and see how that goes.

    • Zeth0s@lemmy.world · 1 year ago

      DeepMind is actually working on an AI that improves the performance of low-level programs. It started by improving sorting algorithms.

      It’s an RL algorithm.

      Main issue is that everything takes time, and expectations on current AI are artificially inflated.

      It will reach the point most are discussing now, it’ll simply take a bit longer than people expect

      Source: https://www.nature.com/articles/d41586-023-01883-4

    • pachrist@lemmy.world · 1 year ago

      ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they’ll absolutely keep throwing cash at it.

      • XTornado@lemmy.ml · 1 year ago

        They seem to be killing Cortana… so I expect a new assistant at least partially based on this, tbh.

    • Zeth0s@lemmy.world · 1 year ago

      It clearly makes no sense. But it satisfies the irrational need of the masses to hate on AI.

      Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, one which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.

      • MajorHavoc@lemmy.world · 1 year ago

        I probably sound like I hate it, but I’m just giving my annual “this new tech isn’t the miracle it’s being sold as” warning, before I go back to charging folks good money to clean up the mess they made going “all in” on the last one.

      • BetaDoggo_@lemmy.world · 1 year ago

        People are scared because it will make consolidation of power much easier, and make many of the comfier jobs irrelevant. You can’t strike for better wages when your employer is already trying to get rid of you.

        The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.

        Hunger shouldn’t be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.

        • Zeth0s@lemmy.world · 1 year ago

          I agree with this. People should fight to democratize AI: public models, public data, public fair research. And they should fight its misuse by the business-school types.

  • TimeMuncher@lemmy.world · 1 year ago

    Indian newspapers publish anything without any sort of verification, from Reddit videos to WhatsApp forwards. More than news outlets, they are like an endless game of Chinese whispers. So take this with a huge grain of salt.

  • figaro@lemdro.id · 1 year ago

    Pretty sure Microsoft will be happy to come save the day and just buy out the company.

      • NuanceDemon@lemmy.world · 1 year ago

        It works if you ask it for small specific components, the bigger the scope of the request, the less likely it will give you anything worthwhile.

        So basically you still need to know what you’re doing and how to design a script/program anyway, and you’re just using chatgpt to figure out the syntax.

        It’s a bit of a time-saver at times, but it’s not replacing anyone in the immediate future.

      • SocialMediaRefugee@lemmy.world · 1 year ago

        I’ve tried using it myself and the responses I get, no matter how I phrase them, are too vague in most places to be useful. I have yet to get anything better than what I’ve found in documentation.

        • sfgifz@lemmy.world · 1 year ago

          My experience is different: the response I get is not perfect, but it’s a good enough start for any decent dev to refactor and build upon with less effort than starting from scratch. Maybe it depends on what language or framework you’re asking about.

        • Tony Bark@pawb.social · 1 year ago

          I have problems with it repeating certain words over and over again no matter how much I adjust the style and tone.

    • BetaDoggo_@lemmy.world · 1 year ago

      No sources, and even given their numbers they could continue running ChatGPT for another 30 years. I doubt they’re anywhere near a net profit, but they’re far from bankruptcy.

    • subversive_dev@lemmy.ml · 1 year ago

      Right!? I believe it has the hallmark repetitive blandness indicating AI wrote it (because ouroboros).

    • pexavc@lemmy.world · 1 year ago

      The flow of the writing style felt kind of off, like someone speaking really fast, spewing random trivia, and then leaving.

  • theneverfox@pawb.social · 1 year ago

    This is alarming…

    One of the things companies have started doing lately is signaling “we could go bankrupt,” then jumping ahead a stage on enshittification.

  • Cheesus@lemmy.world · 1 year ago

    A company that just raised $10B from Microsoft is struggling with $260M a year? That’s almost 40 years of runway.

  • Browning@lemmy.world · 1 year ago

    They are choosing to spend that much. That doesn’t suggest that they expect financial problems.

    • stealthnerd@lemmy.world · 1 year ago

      True. They could close it off to the public at any time and only offer a subscription service.

      However, they are probably afraid to do that for fear of losing out to competitors. Offering the service for free was the key to their popularity and to bringing AI technology into the hands of average users. If they cut that off, someone else will quickly take their place.

    • Ragdoll X@lemmy.world · 1 year ago

      I’d guess it’s one of those “make people dependent on our free/cheap product then increase the price later” kind of deal

  • banneryear1868@lemmy.world · 1 year ago

    Of course it will; all these companies are funded by tech giants and venture capital firms. They don’t make money, they cost money.