OK, so I lost the draft, and I'm going to slim the post down, because I'm not retyping all of that… The constraints:
- BRANCHING WILL be at least moderately important in the model architecture
- Multiple boards (servers in a cluster)
- NOT using Python (too slow, and I hate that language).
- PROBABLY not using CUDA (that's still up for debate if it fits the purpose); heterogeneous OpenCL is the expected method (see the device-enumeration sketch after this list).
- MANY models involved: 10s to 100s of primary models, plus 100s to 1000s of sliding-window meta-models with complex interrelated processing. The training is context-heavy, operative, multi-modal mass cross-training with only very limited pre-training; the majority of the learning is scheduled in real time under operating conditions.
- The learning process is intended to be very heavily spiked (a spiking neural network); a toy sketch of what I mean is below.
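To make the heterogeneous-OpenCL point concrete, here is a minimal sketch in plain C (link with -lOpenCL) that just enumerates every platform and device the runtime exposes, CPUs, GPUs, and accelerator cards alike. It's a generic sanity check, not tied to any particular vendor or card:

```c
/* Minimal OpenCL device-enumeration sketch: confirms that all devices
 * in a heterogeneous setup (CPUs, GPUs, accelerator cards) are visible
 * to the runtime. Compile with: cc probe.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id platforms[16];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(16, platforms, &num_platforms);
    if (num_platforms > 16) num_platforms = 16;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof pname, pname, NULL);
        printf("Platform: %s\n", pname);

        cl_device_id devices[16];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       16, devices, &num_devices);
        if (num_devices > 16) num_devices = 16;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof dname, dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof type, &type, NULL);
            printf("  Device: %s (%s)\n", dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_ACCELERATOR) ? "accelerator" :
                   "CPU");
        }
    }
    return 0;
}
```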
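And for the spiking part, a toy leaky integrate-and-fire (LIF) update loop, the kind of per-timestep kernel a spiking network runs. Every number here (threshold, decay, input currents, neuron count) is a made-up placeholder for illustration, not a value from the actual project:

```c
/* Toy leaky integrate-and-fire (LIF) neuron update: each timestep the
 * membrane potential leaks, integrates input current, and fires/resets
 * when it crosses the threshold. All parameters are illustrative. */
#include <stdio.h>

#define N 4 /* hypothetical neuron count */

int main(void) {
    float v[N] = {0};                              /* membrane potentials */
    const float v_thresh = 1.0f;                   /* firing threshold    */
    const float v_reset  = 0.0f;                   /* post-spike reset    */
    const float decay    = 0.95f;                  /* leak per timestep   */
    const float input[N] = {0.12f, 0.30f, 0.05f, 0.25f}; /* input current */

    for (int t = 0; t < 20; ++t) {
        for (int i = 0; i < N; ++i) {
            v[i] = decay * v[i] + input[i];        /* leak + integrate */
            if (v[i] >= v_thresh) {                /* spike            */
                printf("t=%d: neuron %d spiked\n", t, i);
                v[i] = v_reset;                    /* reset            */
            }
        }
    }
    return 0;
}
```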
Does anyone out there have experience with the best type of expansion card for this sort of thing?
Unfortunately, I have spent a very long time looking for a modern equivalent of the old Xeon Phi cards, but nothing comparable seems to exist.
Since I don't use Intel boards anyway, that's halfway a moot point, but I would really like to know if anyone has an actual recommendation.