Considering switching to Mac from Windows, is it honestly worth it? I don’t plan on running heavy-load models, but I’m hoping it’s decent enough to handle midsize models.
I use an M1 as my daily driver, which was given to me by work. I used to be hardline anti-Mac, but I have been thoroughly converted. I will say, though, that MPS and PyTorch do not seem to go together very well, and I stick to using the CPU when running models locally.
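If you do want to try MPS anyway, the usual pattern is to fall back to CPU when it is unavailable or untrusted. A minimal sketch of that decision logic (the `pick_device` helper and the `trust_mps` flag are my own invention, not part of PyTorch; in real code you would feed it `torch.cuda.is_available()` and `torch.backends.mps.is_available()`):

```python
def pick_device(cuda_available: bool, mps_available: bool,
                trust_mps: bool = False) -> str:
    """Choose a torch device string: prefer CUDA, then MPS
    (only if explicitly trusted), then CPU as the safe default."""
    if cuda_available:
        return "cuda"
    if mps_available and trust_mps:
        return "mps"
    return "cpu"

# In real PyTorch code (assuming torch is installed) you'd call:
# device = pick_device(torch.cuda.is_available(),
#                      torch.backends.mps.is_available())
# model.to(device)
```

With `trust_mps=False` by default, an M1 machine quietly lands on CPU, which matches my experience that it's the less surprising backend.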
It’s good enough to play around with certain models. For example, I’m currently using BERT and T5-large for inference (on different projects) and they run OK. This is generally the case for inference on small to medium language models. However, for training, fine-tuning, or running bigger language models (or vision models), I work on a remote server with a GPU. Access to an Nvidia GPU, whether locally or remotely, is simply a must-have for training or bigger models.
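A quick back-of-the-envelope check of why those models are comfortable on a laptop: the fp32 weights of BERT-base (~110M parameters) and T5-large (~770M parameters) come to well under a gigabyte and a few gigabytes respectively. The parameter counts below are the commonly cited figures, so treat the numbers as approximations:

```python
def fp32_size_gb(n_params: int) -> float:
    """Rough size of a model's weights in fp32 (4 bytes per parameter)."""
    return n_params * 4 / 1024**3

print(round(fp32_size_gb(110_000_000), 2))  # BERT-base: ~0.41 GB
print(round(fp32_size_gb(770_000_000), 2))  # T5-large: ~2.87 GB
```

Activations and framework overhead add more on top, but inference for models this size fits easily in 16 GB of unified memory.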
For learning and small models, a MacBook and Google Colab are more than sufficient.
M2 Pro or M3 Pro/Max
You would never run any serious work on a laptop anyway. Money better spent: a MacBook Air plus a 3090 workstation.
A MacBook Air is not money well spent. Get a workstation with a 4090, and rent an A100 in the cloud with the money you saved.
Yeah, and I suppose you are able to carry it everywhere and work for hours without a power supply?
My exact setup: a MacBook Air to SSH into my Linux machine with an Nvidia RTX 3090.
I would not recommend it unless you only focus on smaller models and small experiments. The biggest advantage is the huge amount of memory available, but the bottleneck is memory bandwidth.
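To see why bandwidth matters so much, note that batch-1 autoregressive decoding has to stream essentially all the weights once per generated token, so memory bandwidth divided by model size gives a rough upper bound on tokens per second. A back-of-the-envelope sketch (the ~200 GB/s M1 Pro and ~936 GB/s RTX 3090 figures are published spec-sheet numbers, used here purely for illustration):

```python
def bandwidth_bound_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """Rough upper bound on decode speed: each generated token
    reads all the weights once, so speed <= bandwidth / model size."""
    return bandwidth_gb_s / model_gb

model_gb = 14.0  # a 7B-parameter model in fp16: 7e9 params * 2 bytes ~= 14 GB
print(round(bandwidth_bound_tokens_per_s(200.0, model_gb), 1))  # M1 Pro: ~14.3 tok/s
print(round(bandwidth_bound_tokens_per_s(936.0, model_gb), 1))  # RTX 3090: ~66.9 tok/s
```

The huge unified memory lets big models *fit*, but the bandwidth gap is why they run noticeably slower than on a discrete Nvidia GPU.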
We did some tests just for fun (as there were not many benchmarks available). You can find the results here:
https://www.lightly.ai/post/apple-m1-and-m2-performance-for-training-ssl-models
Support has gotten better, but back when we ran the tests there was still no proper half-precision support, and torch.compile wouldn’t work either. There is hope that the software support will catch up. I’m curious to see other results. We definitely need more benchmarks :)
No. I recently participated in a hackathon where I did a small ML project. I had my M2 MacBook Pro with me as well as an i5 laptop with an RTX 3050, and the latter was way quicker (it finished all the tasks in about a third of the time, some even faster). Other than that, I never really used a MacBook for a similar purpose, but based on this experience I would avoid it; if I really needed a laptop for this, I’d pick a strong “gamer laptop” for a similar price.
Oh thank you!
I have something different to say. First, I agree that any serious work should be done on a workstation, either a beefy desktop or a cloud server (with an A100 40GB/80GB configuration). However, for prototyping or just playing with models, a Mac with its large shared memory is excellent: not many laptops or even desktop GPUs have more than 16 GB of VRAM, meaning that when prototyping you are very much limited in batch size or to smaller backbones. I have an M1 Pro with 32 GB, and it can fit most of the models I want to play with. After I finish prototyping, I simply change device = 'mps' to 'cuda' and run it on the cloud. I use PyTorch mainly; I have encountered some issues with MPS, but nothing major. There are workarounds.
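Rather than editing the source when moving between machines, one option is to read the device from the environment, so the same script runs with mps locally and cuda on the cloud box. A minimal sketch (the `resolve_device` helper and the `TORCH_DEVICE` variable name are my own convention, not a PyTorch one):

```python
import os

def resolve_device(default: str = "cpu") -> str:
    """Pick the torch device string from the TORCH_DEVICE env var,
    falling back to a default. Launch as, e.g.:
        TORCH_DEVICE=mps python train.py    # on the Mac
        TORCH_DEVICE=cuda python train.py   # on the cloud server
    """
    return os.environ.get("TORCH_DEVICE", default)

# In real PyTorch code (assuming torch is installed):
# model.to(resolve_device())
```

This keeps the prototyping-on-Mac, training-on-cloud workflow down to zero code changes per machine.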
How many GPU cores do you have?
Oh wow, that’s amazing and reassuring xD I was considering the M3 Pro, but might switch to the M2 Pro with more RAM, SSD, and GPU cores. Depends on my budget. Thank you so much for this!
I am using my M1 Pro for small and medium model tests. While my main server with a 3090 runs experiments, I prototype on my M1 laptop, then change mps to cuda to run the long experiments on the desktop.
While your model is eating all the resources on the server, you still want to be able to work without experiencing lag.
I wanted to buy something more powerful, but I don’t really see any benefit other than getting more memory (16GB is not enough).