MemoryCache is an experimental developer project to turn a local desktop environment into an on-device AI agent.
The website does a bad job explaining what its current state actually is. Here’s the GitHub repo’s explanation:
Memory Cache is a project that allows you to save a webpage while you’re browsing in Firefox as a PDF, and save it to a synchronized folder that can be used in conjunction with privateGPT to augment a local language model.
So it’s just a way to get data from the browser into privateGPT, which is:
PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an Internet connection. The project provides an API offering all the primitives required to build private, context-aware AI applications.
So basically it’s something you can ask questions like “how much butter is needed for that recipe I saw last week?” or “what are the big trends across the news sites I’ve looked at recently?”. Eventually, though, the idea is for it to automatically summarize and data-mine everything you look at to help you learn and explore.
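Under the hood that’s just retrieval-augmented generation over the pages you’ve saved. Here’s a rough sketch of the flow, assuming the saved PDFs have already been extracted to plain text, with placeholder folder and model names — my illustration, not MemoryCache’s or privateGPT’s actual code:

```python
from pathlib import Path

import numpy as np
from llama_cpp import Llama
from sentence_transformers import SentenceTransformer

SAVED_PAGES = Path("~/MemoryCache").expanduser()  # assumed location of the synced folder

embedder = SentenceTransformer("all-MiniLM-L6-v2")
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)  # placeholder model file

# Assume each saved page has already been converted from PDF to plain text.
docs = [p.read_text(errors="ignore") for p in SAVED_PAGES.glob("*.txt")]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def ask(question: str, top_k: int = 3) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    # Vectors are normalized, so a plain dot product is cosine similarity.
    best = np.argsort(doc_vecs @ q_vec)[::-1][:top_k]
    context = "\n---\n".join(docs[i][:2000] for i in best)
    prompt = (
        f"Answer the question using only this context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt, max_tokens=256)["choices"][0]["text"]

print(ask("How much butter did that recipe I saved last week need?"))
```

In practice you’d chunk the pages and persist the embeddings in a vector store, but the shape of the pipeline is the same.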
Neat.
They should find a better fitting name than MemoryCache. Thanks for this comment.
Damn, this is EXACTLY the kind of project I’ve been seeking out and trying to figure out.
I want all my browsing habits stored locally for AI to tell me what I saw, or to find something I read, or to dig up a citation I swear exists.
Interesting project. Terrible name.
Instead of saying “I googled it”, we can say “I memcached it” 🤔
This thing sounds mostly like a way to explore the possibilities with AI. Which I’m all for.
It sounds like it will learn from what you do in your browser, and since we humans have a lot of habits, we might find this tool useful!
We’re not breaking ground on AI innovation (in fact, we’re using an old, “deprecated” file format from a whole six months ago)
The ggml format isn’t “deprecated”, it’s completely dead. In those 6 months we’ve also seen 2-4x speedups on some systems, not to mention improved accuracy via k-quants. I don’t know why they would build out a new extension with such an ancient dependency.
deleted
This seems interesting.
Why does it have such a generic name that has nothing to do with the project? Also, as soon as I see “privacy and terms and conditions” it raises red flags, when this is supposed to be on-device.
I’ve been holding back my Mozilla funding because they are shirking work on Firefox and Servo
what in the actual fuck is this stupid shit and why is mozilla anywhere near it?
An open source project focused on giving people features they want, but in a privacy-respecting and censorship-resistant way. Classic Moz
Seriously, what’s with all the Mozilla hate on Lemmy? People bitch about almost everything they do. Sometimes it feels like, because it’s non-profit/open-source, people have this idealized vision of a monastery full of impoverished, but zealous, single-minded monks working feverishly and never deviating from a very tiny mission.
Cards on the table, I remain an AI skeptic, but I also recognize that it’s not going anywhere anytime soon. I vastly prefer to see folks like Mozilla branching out into the space a little than to have them ignore it entirely and cede the space to corporate interests/advertisers.
because “oh no Mozilla foundation bad” “they take google money”
deleted
deleted
That seems more aligned with their mission of fighting misinformation on the web. It looks like Fakespot was an acquisition, so hopefully efforts like the ones mentioned in this post help better align it with their other goals.
deleted
What I’m saying is that Mozilla, from my understanding, didn’t set out to do that, but instead acquired a business that already did, in order to use its services to fight misinformation. We should pressure them to reform that new part of the business to better align with the rest of Mozilla’s goals.
deleted
But it is not a feature I want. Not now, not ever. An inbuilt bullshit generator, now with less training and more bullshit, is not something I ever asked for.
Training one of these AIs requires huge datacenters, insanely large datasets, and millions of dollars in resources. And I’m supposed to believe one will be effectively trained on the pittance of data generated by browsing?
Yes but I like it, so where do we go from here?
You clearly are wrong and you should feel bad /s
Fine-tuning is much more feasible on end-user hardware. You also have projects like Hivemind and Petals working on distributed training and inference systems to deal with the concentration effects you described for base models.
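To make that concrete, here’s a rough sketch of parameter-efficient fine-tuning (LoRA via the peft library) on a couple of locally saved pages. The base model name and the data are placeholders, and this isn’t anything MemoryCache actually ships:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Only the low-rank adapter matrices get gradients; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of the parameters

pages = ["text of one saved page ...", "text of another saved page ..."]  # placeholder data
ds = Dataset.from_dict({"text": pages}).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=512), batched=True
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

Because only the small adapter weights are trained, this kind of thing fits on a single consumer GPU, unlike pretraining a base model from scratch.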
What in the actual wow is this positive comment and how did this radiantly joyful kid even get onto our evil Fediverse?
Dunno, this seems like an interesting idea. If I’ve read through a bunch of engineering papers, maybe I could use this as a sort of flashcard system to double-check my understanding.
deleted