You want to run some heavy tasks in the cloud using GPUs. What would you do?

  • Which features matter most to you in a GPU cloud provider? Is it price, availability, GPU models, or something else?
  • How do you choose an instance type to run? Each provider typically offers dozens of different instance types (a rough cost-comparison sketch follows this list).
  • Do you regularly use one or more providers?
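One way to approach the instance-type question is to normalize on-demand prices against the resources you actually need (e.g. total VRAM) instead of picking by name. Here is a minimal sketch of that idea; the provider labels, prices, and specs are made-up placeholders, not real quotes:

```python
# Rough cost comparison of GPU instance options.
# All prices and specs below are illustrative placeholders, not real quotes.

options = [
    # (label, usd_per_hour, gpu_count, vram_gb_per_gpu)
    ("provider-a-1x-a100-40gb", 3.20, 1, 40),
    ("provider-b-1x-a100-80gb", 4.10, 1, 80),
    ("provider-c-4x-l4-24gb",   3.60, 4, 24),
]

def cost_per_gb_hour(usd_per_hour, gpu_count, vram_gb):
    """USD per hour for one GB of GPU memory (a crude proxy for value)."""
    return usd_per_hour / (gpu_count * vram_gb)

required_vram_gb = 60  # e.g. estimated model weights + activations

for label, price, gpus, vram in options:
    total_vram = gpus * vram
    fits = total_vram >= required_vram_gb
    print(f"{label:28s} total VRAM {total_vram:4d} GB  "
          f"${cost_per_gb_hour(price, gpus, vram):.4f}/GB-hour  "
          f"{'fits' if fits else 'too small'}")
```

The same pattern works with a throughput metric (tokens/s or TFLOPS per dollar) in place of VRAM per dollar, if you have benchmark numbers for your workload.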
  • chief167@alien.top · 1 year ago

    I find Azure terrible for ML in general. They basically force you onto Databricks, and Azure ML Studio just sucks compared to GCP Vertex.

    We’re now on Teradata for MLOps, and it’s surprisingly OK. High entry cost, but overall it’s a lot cheaper than what we used to have on Azure/Databricks, and faster and better.

    We’re forced onto Azure at work, but I use Vertex for hobby projects.
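
    For a hobby-scale GPU job on Vertex, a custom training job is only a few lines with the google-cloud-aiplatform SDK. A minimal sketch, assuming you have a local train.py and a GCS staging bucket; the project ID, bucket, and container image tag below are placeholders to check against the current docs:

    ```python
    # Minimal Vertex AI custom training job sketch (placeholders throughout).
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-hobby-project",               # placeholder project ID
        location="us-central1",
        staging_bucket="gs://my-staging-bucket",  # placeholder bucket
    )

    job = aiplatform.CustomTrainingJob(
        display_name="hobby-gpu-training",
        script_path="train.py",                   # local training script
        # Prebuilt PyTorch GPU training container; verify the current tag in the docs.
        container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-1.py310:latest",
    )

    job.run(
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",       # a cheap single-GPU option
        accelerator_count=1,
    )
    ```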