I prompted ChatGPT to write and adjust a linear Python script for a repetitive task I needed to automate. It took 30 minutes versus the 6-12 hours it would have taken me to code it myself.
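The kind of linear script in question can be tiny. A minimal sketch of one such repetitive task, assuming a hypothetical folder of messily named files that need normalizing (the folder path and naming rule are illustrative, not from the original comment):

```python
import re
from pathlib import Path

def normalize_name(name: str) -> str:
    """Lowercase a filename and collapse runs of whitespace into underscores."""
    return re.sub(r"\s+", "_", name.strip().lower())

def normalize_folder(folder: str) -> None:
    """Rename every file in `folder` to its normalized form (hypothetical task)."""
    for path in Path(folder).iterdir():
        if path.is_file():
            path.rename(path.with_name(normalize_name(path.name)))
```

Trivial to write by hand, but multiplied across edge cases and adjustments, this is exactly the sort of thing an LLM drafts in seconds.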
It’s a huge force multiplier when used properly.
It can speed up the process, but it’s not like it would replace a programmer. It still requires someone with enough knowledge to check its output and correct its mistakes or call its bullshit.
It won’t replace us yet. This is the first technology over my entire career that has me a little concerned about the future.
I don’t know. The speed at which these things blew up into The Next Big Thing™️ kind of sets off my bullshit detectors.
I’m certainly not an expert in machine learning, but I suspect LLMs will never be able to output complex code that doesn’t require a lot of modification and verification.
While it may not eliminate positions entirely, it will greatly reduce the number of positions needed.
See any advancements in automation from farming to manufacturing.
See, this is the kind of thing that makes my bullshit detectors go off. The comparison elevates this new tech to the same level of importance as past revolutionary shifts in industry. But this only seems justified if you can assume the rapid advancements in LLMs will continue at the same rate going forward, which is not a given at all. Fundamentally, these models are trained to produce convincing output, not accurate output. There is no guarantee that high accuracy will be achieved with this approach.
For programming, I don’t see these LLMs any differently than previous advancements in tooling and in high-level programming languages and frameworks. They will make it easier to rapidly prototype and deploy (shoddy) apps, but they will not be replacing devs who work at a low level, on high-performance code, or in critical areas, nor will they be drastically reducing the workforce needed - at least not any more than other tooling advancements have.
All just my opinion, of course. We shall see.
Exactly. A year or two ago I said that knowledge of obscure and obsolete languages won’t be as saleable a skill soon because of the ability to convert code automatically into a more widely used language. Everyone laughed at me and said that would never happen - already some big companies have started doing it.
I was talking to a friend recently about AI coding and realised that beyond a certain point a huge portion of the industry will be made obsolete entirely, and honestly it’s probably not very far away - pretty much all the coding either of us has worked on won’t be needed if you can simply ask your computer to do it without needing a special program.
I’ve created a lot of GUI tools, for example, and tools for configuration, but being able to just talk to your computer would erase the need for almost all of them, and a lot of stuff you won’t even need to do in the first place - why would I install an app to monitor network connectivity and locate newly added devices when I can just say ‘computer, how’s the network been working today? Is my printer working?’ and it just tells me.
How we interact with computers has done nothing but change, I really think we’re going to see a real game change soon, like not a game changing move, literally switching from chess to buckaroo.
Those two examples you’ve given (asking the computer about the network and printer) don’t need AI (LLMs in this context) to exist. They need to be pre-programmed absolute functions. Suggesting that these LLMs are a step towards that not only ignores that we already have voice assistants built into computers, but also ignores the fact that LLM outputs are volatile and can’t be trusted.
What I’m getting at is you won’t need absolute functions to pre-exist when you can just ask your computer and it’s able to poll the relevant resources and format a reply. Of course current models can’t do this, but if you think history has ended and there will be no more developments in AI then you’re not being serious.
LLMs have the spotlight at the moment because natural language has been a holy grail of AI research for a long time, but all the other types of models are amazing at other things. It’s only a matter of time before the various pieces are combined to make some really useful and powerful tools.
I used to do some coding in high school and early college. I’ve since moved on to other things but it’s fun to have ChatGPT write me a little python script or something and debug my way through it.
Users will flood back in the next few weeks when school comes back. I’d like to see another breakdown in December.
Higher ed, primary ed, and homework were all subcategories ChatGPT classified sessions into, and together, these make up ~10% of all use cases. That’s not enough to account for the ~29% decline in traffic from April/May to July, and thus, I think we can put a nail in the coffin of Theory B.
It’s addressed in the article. First, use started to decline in April, before school was out. Second, only 23 percent of prompts were related to education, which includes both homework-type prompts and personal/professional knowledge-seeking. Only about 10 percent were strictly homework. So school work isn’t a huge slice of ChatGPT’s use.
Combine that with schools cracking down on kids using ChatGPT (in classroom assignments and tests, etc.), and I don’t think you’re going to see a major bounce back in traffic when school starts. Maybe a little.
I’m starting to think generative AI might be a bit of a fad. Personally I was very excited about it and used ChatGPT, Bing, and Bard all the time. But over time I realized they just weren’t very good: inaccurate answers, bland writing, just not much help to me, a non-programmer. I still use them, but now it’s maybe once a day or less, not all day like I used to. Generative AI seems more like a tool that is helpful in some limited cases, not the major transformation it felt like early in the year. Who knows, maybe they’ll get better and more useful.
Also, not super related, but I saw a statistic the other day that only about a third of the US has even tried ChatGPT. It feels like a huge thing to us tech-nerdy people, but your average person hasn’t bothered to even try it out.
This is still year one. I think it’s way too early to infer a trend, which may well be cyclical, not linear. I’d hardly say it’s a nail in the coffin for the theory.
People had a huge surge of interest in it at first because they wanted to know what it was about. It was fascinating and exciting, especially exploring its limits and playing around - it was fun spending hours just messing with it. The numbers are bound to drop as the novelty wears off; the number of people actually using it to get stuff done might still be going up even if traffic falls by half.
Certainly some teachers fear new technology, but most realize that they have to teach kids to live in the world the kids will be growing up into, not the world the teacher grew up in. Teachers will be educating kids on how to use it as a research tool, how to use it in projects, and how to use it to improve writing quality - eventually we will see more kids getting marked down with ‘ChatGPT could have helped reword this’ than we see ‘this is written too well, you must have used modern tools to help’.
Not that I have any faith in teachers being sensible; when I was at school they wouldn’t accept typed homework on the premise that ‘when you get a job your boss isn’t going to allow you to type up your work’ - and I’m only talking about the mid-’90s here lol
Really though, I think most people are going to be interacting with LLMs via tools built into other things - it’ll be one of those things we only really notice when it’s annoying. Daily use will be things like being able to refine searches when online shopping: ‘I need a plug for my bath’ returning a selection of bath plugs rather than electrical connectors, music by the Bang Plug, and pluggano pasta. Especially when it can show a selection, then refine it by saying ‘like that but in pink’, or even answer ‘what’s the difference between these two’ with ‘this one is three dollars and made from a softer material for a more effective seal’.
Oof. I’ve tried it with a few Powershell things and it has recommended cmdlets that don’t exist, parameters that don’t exist, or the wrong usage of cmdlets.
It’s really limited to basic, junior level programming assistance, and even then it’s not 100% reliable. Any time I’ve tried asking it something more advanced it takes a lot of coaxing to get it to output reasonable code. But it’s helpful for boilerplating basic code sometimes.
Have you tried 3.5 or 4?
I haven’t had many issues in 4. Occasionally it does what you’re saying and I just say “bro, that doesn’t exist” and it’s like “oh, my bad, here you go.” And gives me something that works.
Just yesterday I had 4 make up a Jinja filter that didn’t exist. I told it that and it returned something new that also didn’t work but had the same basic form. 4 sucks now for anything that I’d like to be accurate.
Both models have definitely decreased in quality over time.
What kind of prompts are you giving?
I find results can be improved quite easily with better prompt engineering.
It makes things up whole cloth and it’s the user’s fault for not prompting it correctly? Come on.
It’s not a person. It’s a tool.
I don’t remember what version. I just gave up trying.
Well don’t expect it to just give magical results without learning prompt engineering and understanding the tools you’re working with.
Set-MailboxAddressBook doesn’t exist.
Set-ADAttribute doesn’t exist.
Asking for a simple command and expecting to receive something that actually exists is magical?
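One cheap guard against hallucinated command names is to check that the name actually resolves before trying to run it. A minimal sketch in Python using `shutil.which` (this only covers external executables on PATH; cmdlets live inside PowerShell, where `Get-Command <name>` does the equivalent check):

```python
import shutil

def command_exists(name: str) -> bool:
    """Return True if `name` resolves to an executable on PATH."""
    return shutil.which(name) is not None

# A hallucinated name like "Set-MailboxAddressBook" simply won't
# resolve to anything, so it can be flagged before you waste time on it.
```

It doesn’t validate parameters or usage, but it catches the most blatant class of hallucination: commands that don’t exist at all.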
I used gpt4 for terraform and it was kind of all over the place in terms of fully deprecated methods. It felt like a nice jumping off point but honestly probably would’ve been less work to just write it up from the docs in the first place.
I can definitely see how it could help someone fumble through it and come up with something working without knowing what to look for though.
Was also having weird issues with it truncating outputs and needing to split it, but even telling it to split would cause it to kind of stall.
I used it today to write a formal letter. I hate all the blah blah blah. It spewed out exactly the kind of bullshit I was looking for. Cut it down to half and corrected some weirdness. It took just as long as if I was good at that kind of thing.
I love using it while programming but I almost never use it besides that. Not even sure what I would use it for besides that on a day to day basis.
Sometimes I use it for laughs. I usually use it as a search engine on steroids when I can’t find the answer to a problem. Not having data past 2021 is a huge limiting factor for real productivity though.
I know it’s not popular, but Bing Chat works surprisingly well if you need a GPT response that can hit the Internet. It’s not perfect but anytime I need current information I generally use it and it’s worked pretty well for me!
I also use it for programming and today is the first day that I experienced the degradation that everyone has been talking about. It was spitting out the same code over and over, saying it was changing it, and then it slowed to a crawl and barely responded. Most of its answers were wrong and unhelpful. I have really enjoyed using it instead of stackoverflow for a few months now, so I hope this isn’t something that’s going to continue.
As for other uses, my wife and I used it to find a movie to watch a few days ago. We described the type of movie we wanted to see and asked it to recommend 10. We picked one and it was exactly what we wanted to watch. That was really neat.
I used it to write a resume and cover letter for me, which I then punched up. I figure that since companies are using AI to review resumes, I should use one to write one.
Recently, I used it for book/author recommendations. At first I also used it for coding, but now I just ask it to explain concepts to me (what’s the difference between… / what are some ways to approach…).
Basically how non-tech people thought search engines worked at the beginning of this century.
When I’m feeling blue I ask it to say nice things about me.
I write a lot of emails for work, but I’m not the most eloquent writer, so I get wordy.
I sometimes feed my email into ChatGPT and ask it to rewrite it to be more concise, but keep a friendly yet professional tone. Boom, done.
It’s abysmal at this point… Whatever they did to it, the results are now awful and far more inaccurate than they were a few months back.
Idk man, I’ve been having a blast with the API and GPT-4. Once I get it working you basically get access to GPT-4 for pennies. Plus if you’re real wacky and pay for a per-token subscription from ElevenLabs you’ll have a voice assistant too.
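The "pennies" claim is easy to sanity-check with back-of-envelope math. A minimal sketch, assuming the mid-2023 GPT-4 list prices of $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens (rates are assumptions here; check current pricing before relying on them):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate: float = 0.03, completion_rate: float = 0.06) -> float:
    """Estimate one request's cost in dollars from token counts.

    Rates are dollars per 1,000 tokens (assumed mid-2023 GPT-4 pricing).
    """
    return (prompt_tokens / 1000 * prompt_rate
            + completion_tokens / 1000 * completion_rate)

# A typical chat turn of 500 prompt + 300 completion tokens
# comes out to roughly three cents at these rates.
```

So for occasional per-request use, pay-as-you-go API access really can be far cheaper than a flat monthly subscription.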
Well, I even subscribed to it at some point. But they really dumbed down the v4 model, so it’s basically on par with the v3 model. And since open source models have become good, there’s no point in using ChatGPT anymore.
What open source models? Can you recommend one?
HuggingChat. Basically the state-of-the-art conversational LLMs are hosted there for free.
Especially Facebook’s Llama2 and any checkpoints of that model are solid.
And if you are a coder, I think there was one called “Starchart” just aimed at code autocomplete, which seems to be a good start.
But ofc as alehc already mentioned, these all exist on Hugging Face, and you will find a treasure trove of AI models on there for every possible use case.
Here too
I actually just started using it as a way to flesh out information for world-building a tabletop role-playing game.
Not all the information tracks, but it is OK for brainstorming. When asked to generate a character it gets things wrong: it invents skills and classes, and when asked to distribute points it forgets about all the skills. It is best to give it the task of creating a background, but even then it is very bland and doesn’t stray outside certain ideas.
It’s amazing for DMing. You can use it to whip up a random town, create NPCs, and generate plot hooks.
You still need to work it into the context of your game and be ready to improvise, but it is a very nice alternative to random tables.
It’s moderate at making up NPC stat blocks if I give it a description and a CR. I do find myself having to tweak the numbers a bit, but it’s great for coming up with special abilities or unique spell-like effects.
Yes, I agree! It is able to determine a class’s primary stats and what it is weak at. I’m using it for PTU, which suggests multiclassing up to a maximum of 4 times over about 100 levels, so there is a bit of consideration when creating an NPC. I’ll ask it to come up with a backstory, a name, six suggested classes, and the top six stats vs the bottom six.
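Point allocation is exactly the bookkeeping LLMs fumble, so one workable split is to let the model handle the flavor text and do the arithmetic deterministically. A minimal sketch with hypothetical skill names and weights (nothing here is from PTU’s actual rules):

```python
def distribute_points(total: int, weights: dict[str, int]) -> dict[str, int]:
    """Split `total` points across skills proportionally to integer weights,
    handing any leftover points to the highest-weighted skills first."""
    weight_sum = sum(weights.values())
    alloc = {skill: total * w // weight_sum for skill, w in weights.items()}
    leftover = total - sum(alloc.values())
    for skill in sorted(weights, key=weights.get, reverse=True)[:leftover]:
        alloc[skill] += 1
    return alloc

# Hypothetical NPC build: 20 points over three skills weighted 3/2/1.
# distribute_points(20, {"athletics": 3, "stealth": 2, "lore": 1})
```

Every point is accounted for by construction, which is precisely the guarantee a generative model can’t give you.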
Same, I use it for ideas and inspiration, never for anything fully built out.
I always knew it was going to be a meme for most people.
Well, if they hadn’t nerfed it, maybe usage wouldn’t have gone down so much.
Who are these people who have Datos installed and opted in to having their net usage surveyed?
I used it today and the answer it gave me was right, but the explanation for how it got there was ridiculously bad.
I use it when I need it to speed up python scripting for CG applications, but I don’t need it on a constant basis. It could be weeks or more between when I’ll dig into it.
Is there any chance that this is fallout from the Reddit API changes? Lots of people were training LLMs using Reddit. If you can’t do that anymore, then that would cause a decrease in use. Right?
That was way too recent. And it wouldn’t affect the users of GPT directly, only the training, which wasn’t using super-recent data to begin with anyway.
A ChatGPT browser extension is what’s needed for news: something to summarize BS articles lengthened with meaningless filler words.
Yeah this is exactly the sort of use I think we’re going to see become most common, especially tools that allow you to ask about news, events, social media, etc. ‘what have my friends been talking about recently? Did Janet ever post a resolution to the thing with her mom?’
‘What happened with those ships that crashed in the canal, did they get cleared?’ or ‘im thinking of going to Paris in July, what’s been happening there and what’s the weather like’ and it’s able to sum it up, tell you about things you might want to know more about based on your interests.
Also being able to actually sort and filter: ‘just miss me with the football news’, ‘just ignore any posts about my friends’ kids or bullshit about anniversaries, birthdays and stuff’ - it might make social media enjoyable to use.