• 1 Post
  • 8 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • Putting a name on a century-old concept isn’t the worst idea, because now we can easily refer to it when it happens once again. And yes, the old age of that problem is why I consider it a bit of a rabbit hole. It’s not just something Twitter does now, or that tech companies do now because they copy from each other. It’s a quite old concept you’ll hear about again and again, and you can read up on it quite a bit if you’re interested in more than the basic concept, or in why companies keep trying even though the outcome does not always seem positive (from an outside, user’s perspective).


  • Look up enshittification, it’s an interesting rabbit hole.

    Basically, the idea is that companies follow a path: first they please users to build a user base. Once users are bound to the platform and don’t want to leave (because “everyone” is there), the company shifts towards pleasing advertisers instead, until those also feel trapped (because “everyone” advertises there). The final move is squeezing as much as possible out of all these trapped people and companies. It’s not just social media, although social media makes it most obvious, at least for a trapped user base; it also applies to any other big thing that “everyone” uses.



  • I don’t disagree. The topics are a bit hit or miss, and yes, my newest free ebook from them is from 2020, so all content should be taken with a grain of salt. I did manage to grab some on C++, Machine Learning and different Pentesting tools, so not everything is completely obscure, but as you said, they usually don’t choose their most recent books. I see it more as a nice free resource on some of the topics covered, as of course not everything in the books will be entirely out of date. It’s also not necessarily worse than buying their 2023 books today and using them for the next 3 years… That’s just a general problem with tech books; at least these outdated books are free.





  • No. Chat-GPT is not sourcing its claims. I think that would also change its (usually misunderstood) purpose. Chat-GPT is a language model made to create responses that appear natural. Its purpose was never to recite facts, although it often does so as a side effect of how it was trained; it simply creates likely word combinations. The researchers fed it millions of texts, and Chat-GPT ran some maths to figure out which word is most likely to come next after each other word. So, simplified, it operates on a likelihood table of word relationships, generated when it was trained. This includes following up “Super Mario is” with “a video game character”, as most texts it saw refer to him as that.

    People mistake this for generating facts (because when asked about things, a factual response is likely, since that’s what Chat-GPT usually saw), but this was never the purpose of Chat-GPT. A response like “Super Mario is an orange cat who loves lasagne” would also be valid output, as it perfectly resembles natural language. It’s factually wrong, but a correct sentence, and after the first switch from “video game character” to cat, following up “orange cat” with a love for lasagne is again a likely sentence. This is also what happens in most “made up” sentences: Chat-GPT takes a wrong turn somewhere (maybe because it has no facts on a person or thing, maybe because the prompt is ambiguous, or maybe because the user was actually trying to steer it into that sentence) but then continues to follow up with likely words.

    And once you realise that Chat-GPT always tries to create a natural, logical response, you can easily trick it into making things up. If you ask about lawsuits regarding a certain person, it’ll create a natural sentence describing a lawsuit this person was involved in that is entirely made up. But most importantly: the text will be grammatically correct and appear natural. And if Chat-GPT responds “This person was never involved in a lawsuit”, you can often simply say “OK, but let’s pretend this person was”, and Chat-GPT will happily make something up.

    Bing’s purpose is different. It of course also has that “natural language” approach, but it might additionally run a web search to be able to quote and source its claims, whereas Chat-GPT does not even know what’s written on today’s websites. It has access to its initial training data from September 2021 and a few additional datasets used since then for refinement, but it has no live access to the Internet to look up up-to-date information. So Bing will likely be able to summarise the latest Apple announcements, where Chat-GPT will just say “sorry, I do not have that information”. If pushed, Chat-GPT might make up correct natural-language sentences about that conference, but the statements will just be likely word combinations, not facts.
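    The “likelihood table” described above can be sketched as a toy bigram model. This is a gross oversimplification (real models condition on far longer context and sample from probabilities, not just raw counts), and the mini-corpus here is entirely made up for illustration:

```python
# Toy sketch of the "likelihood table" idea: count which word follows
# which in a corpus, then always emit the most likely successor.
# The corpus below is a hypothetical stand-in for "millions of texts".
from collections import Counter, defaultdict

corpus = (
    "super mario is a video game character . "
    "super mario is a plumber . "
    "mario is a video game icon ."
).split()

# Likelihood table: word -> counts of the words seen directly after it
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def generate(word, steps=6):
    """Follow the most likely next word, step by step."""
    out = [word]
    for _ in range(steps):
        if word not in table:
            break
        word = table[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("super"))  # → "super mario is a video game character"
```

    The table only knows “likely next words”, so a grammatical but false continuation (swap in an “orange cat” corpus and it will cheerfully follow that path too) is exactly as reachable as a true one.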