Trying to build up a new subreddit community, Accelerating A.I. Thought you might find it interesting. It’s just a space to chat about the brighter, optimistic side of AI and its rapid growth.
There has been a strong and very common cynical, borderline doomer take on A.I. in seemingly every tech and even AI-specific subreddit, to the point that I figured there needed to be a space to discuss positive impacts, developments in LLMs, LMMs, AGI, and more - without the “what about Skynet!” natives taking over every thread.
Genuine question: why? Everyone is optimistic about AI. The market loves it, and people already in the industry are enjoying the new funding and opportunities. The doomerism comes from a fringe group that may just be using it as a marketing gimmick (cough open cough ai). Why do you see the need to cater specifically to optimism?
What’s wrong with r/singularity? Folks over there are optimistic, perhaps a little too eager and optimistic. In fact most opinions that aren’t optimistic get downvoted pretty quickly.
There’s way too much speculation. Speculation is like a fetish on that sub.
Forgive the gatekeeping, but people with no technical background (e.g. knowledge of linear algebra, multivariate calculus, probability and mathematical statistics, numerical optimization, data structures and algorithms) shouldn’t be making sweeping generalizations about machine learning.
When deep learning got popular less than 10 years ago due to applications in computer vision, I remember people having nonsensical discussions about whether YOLOv3 was sentient because it was a neural network. Open-ended questions based on words without a defined meaning don’t contribute to anything.
This is a subject where I think gatekeeping is appropriate tbh.
Oh, don’t get me wrong, the dominant sentiment on r/singularity is not for me and I am no fan of the reverence certain public figures get from members of that community. I was going for polite understatement with my comment, but perhaps failed 😅
Yah, it’s hard to balance gate-keeping with moderation.
I really wish the average poster here and across other AI and ML subreddits knew a little more about the distinction between the two. I know it can get murky, but anecdotally I feel like a lot of people think AI is just a bunch of random forests, neural nets, and LLMs.
Hello, fellow machine learners. Well, I just did a 6-month linear regression boot camp, so I’m basically ready to use the GPT-4 API. Anyone have any app ideas?
Looks like they already have issues… they should be using LLMs to automate moderation…