- cross-posted to:
- technews@radiation.party
Please create a comment or react with an emoji there.
(IMO, they should’ve limited comments and gone with a reaction count there; it looks a mess right now.)
Yeah, I get what you mean – if I can throw 128GB or 256GB of system memory together with parallel compute hardware, that would enable running large models, letting you do things that currently can’t be done other than (a) slowly, on a CPU, or (b) with far more expensive GPU or GPU-like hardware. You could run a huge model on parallel compute hardware in a performance middle ground that doesn’t exist today.
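As a rough sketch of why the system-memory numbers matter (back-of-the-envelope arithmetic only; the 70B model size, quantization levels, and GB figures are illustrative assumptions, and real usage adds KV cache and runtime overhead on top of the weights):

```python
# Rough estimate of model weight memory at different quantization levels.
# Numbers are illustrative, not benchmarks.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB for a given parameter count."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"70B model @ {bits}-bit: ~{weights_gb(70, bits):.0f} GB")

# 70B @ 16-bit: ~140 GB -> 256GB-of-system-RAM territory
# 70B @ 8-bit:  ~70 GB  -> fits in 128GB RAM, far beyond a 24GB consumer GPU
# 70B @ 4-bit:  ~35 GB  -> fits comfortably in 64GB RAM
```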
It doesn’t really sound to me like that’s the goal, though.
https://www.tomshardware.com/news/amd-demoes-ryzen-ai-at-computex-2023
That sounds like the goal is providing low-power parallel compute capability. I’m guessing stuff like local speech recognition on laptops would be a useful local, low-power application that could take advantage of parallel compute.
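For a concrete sense of what local speech recognition looks like, here’s a minimal sketch using the open-source `whisper` package (assuming `pip install openai-whisper`; the model size and audio filename are placeholders):

```python
# Minimal local speech-to-text sketch using the open-source Whisper model.
# Everything runs on-device; no network calls after the model is downloaded.
import whisper

model = whisper.load_model("base")        # small model, modest memory use
result = model.transcribe("meeting.wav")  # placeholder audio file
print(result["text"])
```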
The demo has it doing facial recognition, though I don’t really know where there’s much demand for doing that at low power today.
I can imagine that there are people who do want cheap, low-power parallel compute, but speaking for myself, I’ve got no particular use for that today. Personally, if they have resources available for Linux work, I’d rather they go towards improving support for beefier systems like their GPUs – doing parallel compute on Radeons. That’s definitely an area where I’ve seen people complain about things being under-resourced on the dev side.
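For what it’s worth, ROCm builds of PyTorch expose Radeons through the same `torch.cuda` interface as NVIDIA cards, so a quick “is compute-on-Radeon actually working” check looks something like this (a sketch, assuming a ROCm build of PyTorch is installed):

```python
# Quick check that a ROCm build of PyTorch can see and use a Radeon GPU.
# On ROCm, PyTorch reuses the torch.cuda namespace for HIP devices.
import torch

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
else:
    print("No ROCm/HIP device visible; falling back to CPU.")
```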
I have no idea if it makes business sense for them, but if they can do something like an 80GB GPU (well, compute accelerator, whatever) that costs a lot less than $43k, that’d probably do more to enable the kind of thing that @fhein@lemmy.world is talking about.
The demo is just face detection, not recognition. But usually, if hardware can run one, it can run the other.
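To illustrate the distinction: detection just finds bounding boxes, and even classical methods do it cheaply. A minimal sketch with OpenCV’s bundled Haar cascade (assuming `opencv-python` is installed; `photo.jpg` is a placeholder path):

```python
# Face detection (bounding boxes only, no identity matching) with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
img = cv2.imread("photo.jpg")  # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s):", faces)
# Recognition would add an embedding/matching step on each detected box.
```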
IMO they’d be idiots not to go hard on this. More efficiency in the compute needed for AI quickly scales to any application developed in the future.
We’re heading into a future of limited resources and expensive energy; that’s a near-term problem, not a distant one.