https://reddit.com/link/17rzqfm/video/fqtexzq5fhzb1/player
Heard Apple’s working on an on-device Siri powered by LLMs, but these models are memory-intensive, especially given the iPhone’s limited RAM. This isn’t just an Apple issue; any big tech company that wants to run ML models on device, like Samsung, Google, or Meta, will face the same problem.
What if models could run directly from storage instead of RAM?
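Not saying this is what Apple is actually building, but here’s a minimal sketch of the software side of "run from storage" using memory-mapped weights. The OS only pages in the chunks that inference actually touches, so the whole model never has to sit in RAM at once. The file name, offsets, and shapes below are made up for illustration.

```python
import numpy as np

# Hypothetical weights file sitting on flash storage (name is made up)
WEIGHTS_PATH = "model_weights.bin"

# np.memmap gives an array-like view backed by the file on disk;
# nothing is read into RAM until a slice is actually accessed
weights = np.memmap(WEIGHTS_PATH, dtype=np.float16, mode="r")

def load_layer(offset: int, shape: tuple[int, int]) -> np.ndarray:
    """Pull only one layer's weights into RAM, on demand."""
    count = shape[0] * shape[1]
    # Slicing the memmap reads just the needed bytes from storage
    return np.asarray(weights[offset:offset + count]).reshape(shape)

# e.g. fetch a single 4096x4096 projection matrix only when its layer runs
layer_w = load_layer(offset=0, shape=(4096, 4096))
```

You still pay the latency of reading from flash every time a layer is needed, which is exactly where faster non-volatile memory like MRAM would help.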
Samsung seems to be onto something with their MRAM tech: it’s non-volatile, power-efficient, and can handle some logic and AI processing in memory. Imagine your phone running models straight from storage!
Not an ML expert, but this evolution is intriguing. Are there other attempts like this?
Spoiler alert: MRAM is just RAM, but more expensive. You’d be better off just buying more RAM.
RAM is just faster storage.
None of that is relevant to LLMs.