i think you’ve got it backwards. the very same people (and their money) who were deep into crypto moved on to the next buzzword, which turns out to be AI now. this includes altman and zucc for starters, but there’s more
is the evil funding man going to eat the gimp pepper
it’s maybe because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.
Here’s Eyal’s “Hook Model”:
First, the trigger is what gets you in. e.g., you see a chatbot prompt and it suggests you type in a question.

Second is the action: you ask the bot a question.

Third is the reward, and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer, but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction.

Fourth is the investment: the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in.

Then the user loops back to the beginning. The user will be more likely to follow an external trigger, or they’ll come to your site themselves looking for the dopamine rush from that variable reward.
Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
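the intermittent-reward step is just operant conditioning, and you can see why it works in a toy simulation. this is my own sketch, not anything from Eyal’s book: a simple reward-prediction-error learner (the “dopamine hit” is usually modeled as the gap between reward and expectation). on a fixed schedule the surprise decays to nothing; on a variable schedule it never goes away, which is the hook.

```python
import random

def mean_surprise(p_reward, trials=10_000, lr=0.1, seed=0):
    """Toy prediction-error learner: expectation chases the reward rate,
    and 'surprise' is the absolute reward prediction error per trial."""
    rng = random.Random(seed)
    expectation = 0.0
    total_surprise = 0.0
    for _ in range(trials):
        reward = 1.0 if rng.random() < p_reward else 0.0
        error = reward - expectation   # prediction error: the dopamine proxy
        expectation += lr * error      # learn what the schedule pays out
        total_surprise += abs(error)
    return total_surprise / trials

fixed = mean_surprise(1.0)      # every answer is great: surprise decays to ~0
variable = mean_surprise(0.3)   # only sometimes great: surprise never decays
print(f"fixed schedule surprise:    {fixed:.3f}")
print(f"variable schedule surprise: {variable:.3f}")
```

with a predictable reward the learner converges and stops caring; with a 30% hit rate the average surprise stays high forever, so every prompt is another pull of the lever.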
chatbot users are also attracted to their terminally sycophantic and agreeable responses. some users form parasocial relationships with motherfucking spicy autocomplete. and chatbots were marketed to management types as a kind of futuristic status symbol: if you don’t use it you’ll fall behind, and then you’ll all see. people get a mix of gambling addiction, fomo, parasocial attachment, and being dupes of a multibillion-dollar advertising scheme, and that’s why they get so unserious about their chatbot use
and also, separately, the cores of openai and anthropic (and probably some other companies) are made of cultists who want to build a machine god, but that’s an entirely different rabbit hole
like with any other bubble, the money for it won’t last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good
iirc L-amino acids and D-sugars, that is, the ones observed in nature, are very slightly more stable than their mirror images because of the weak interaction

probably it’s just down to a specific piece of quartz or soot that got lucky, and chiral amplification gets you from there

also it’s not physics, or more precisely it’s a very physics-y subbranch of chemistry, and it’s done by chemists because physicists suck at doing chemistry for some reason (i’ve seen it firsthand)
fullsquare@awful.systems to Science Memes@mander.xyz • Eating shit is for alphas, am I rite guise • 4 days ago

sounds suspiciously like something a rabbit would say
For a slightly earlier instance of this, there’s also real-time bidding
taking a couple of steps back and looking at the bigger picture, something that you might have never done in your entire life judging by the tone of your post: people want to automate things that they don’t want to do. nobody wants to hand-craft elaborate spam that will evade detection, but if you can automate it, somebody will use it this way. this is why spam, ads, certain kinds of propaganda, and deepfakes are among the big actual use cases of genai that likely won’t go away (isn’t the future bright?)
this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful in black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, and will shut up about it, and will have good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also covers things like nonconsensual porn, for which there are fewer barriers now due to genai
then, again, people automate things they don’t want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, who don’t want to code and have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don’t like or understand and likely have to pay people money for, all while the chatbot will never say such sacrilegious things as “no” or “your idea is physically impossible” or “there is no reason for any of this”. people who don’t like coding vibecode. people who don’t like painting generate images. people who don’t like understanding things cram text through chatbots to summarize it. maybe you don’t see a problem with this, but that’s entirely a you problem
this leads to three further points. chatbots allow, for the low low price of selling your thoughts to saltman & co, offloading all your “thinking” to them. this makes cheating in some cases exceedingly easy, something that schools have to adjust to, while destroying any ability to learn for students who use them this way. another thing is that in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won’t repeat the mistake; the chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there’s no junior to promote in their place, because the junior was replaced by a chatbot.
this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or their mid-sized-euro-country power demands, or whatever monstrosities palantir is cooking, and a couple of other things that i’m surely forgetting right now
and also
Is the backlash due to media narratives about AI replacing software engineers?
it’s you getting swept up in an outsized ad campaign for the most bloated startup in history, not “backlash in media”. what you see as “backlash” is everyone else who’s not parroting the openai marketing brochure
While I don’t defend them,
are you suure
e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you because Computer Says No (pay no attention to the oligarch behind the curtain)
e2: also this is partially a side effect of silicon valley running out of ideas. crypto crashed and burned, then the metaverse crashed and burned, and after all that, the same people (the ones who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can’t imagine anything else that will bring them that promised infinite growth. their having money is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing because vcs somehow unlearned how to make money
congratulations on offloading your critical thinking skills to a chatbot that you most likely don’t own. what are you gonna do when the bubble is over, or when the datacenter running it burns down?
chatbot datacenters burn enough electricity to power a mid-sized euro country, all for seven-fingered hands and glue-and-rock pizza
But but, now the idea man can vibecode. this shit destroys the separation between management and the codebase, making it the perfect antiproductivity tool
it’s not ai taking your job, it’s your boss. all they need to believe is that a language-shaped noise generator can make it work; it doesn’t matter whether it actually does (it doesn’t). then the business either suffers greatly or hires people back (like klarna)
and also chatbot-generated bug reports (like the ones flooding curl) and entire chatbot-generated open source projects (i guess for some stupid crypto scheme)
fullsquare@awful.systems to Ask Lemmy@lemmy.world • Just woke up vomiting, choking on vomit, not drunk or on drugs, heart beating slightly off. Is this ER worthy? • 9 days ago

this could be a number of things, some of which can be fatal. lab tests or imaging for these are cheap, but only available at a hospital, and what else do you expect from random internet people than “haul your ass to the hospital and ask someone irl who actually knows”
fullsquare@awful.systems to Fediverse@lemmy.world • Putting ads for old.Lemmy.world on reddit would make this site perfect. • 9 days ago

“proper migration”? like what, spammy ghost town that is/was alien[.]top?
why does trump have two different pfps
This is standard geopolitics. You find dissatisfied people in the population of countries you don’t like, and then you support those people in various ways in order to destabilize your competitors.
no, not really. it’s only the last 20-30 years, and mostly used like this by russians (it also includes things like setting up conspiracy theory websites without an obvious ideological tilt (at first), maybe to groom a suggestible population for later use). the old way was to try to bring these people over to your cause, not just to try to cause non-directed disorder
fullsquare@awful.systems to News@lemmy.world • Chinese couple charged with smuggling a biological pathogen into the U.S. • 11 days ago

giving them the benefit of the doubt: they still fucked up. they should have gotten a permit with the help of their PI and brought it in overtly with all the proper paperwork
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines
no
not yet, at least, but this might change soon