• NVIDIA released a demo version of a chatbot that runs locally on your PC, giving it access to your files and documents.

• The chatbot, called Chat with RTX, can answer queries and create summaries based on personal data fed into it.

• It supports various file formats and can pull in YouTube video transcripts for contextual queries, making it useful for data research and analysis (a toy sketch of the retrieval step follows below).
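This is the retrieval-augmented generation (RAG) pattern: local files are chunked and embedded, the chunks most similar to a query are retrieved, and those chunks are placed into the prompt before the model answers. Below is a toy sketch of the retrieval step in Python, using bag-of-words cosine similarity as a stand-in for a real embedding model; the file contents and prompt format are invented for illustration, not taken from NVIDIA's implementation.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real app would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical local documents the chatbot has been pointed at.
docs = {
    "notes.txt": "Q3 revenue grew 12 percent, driven by data center sales.",
    "todo.txt": "Book flights for the conference and update the GPU drivers.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    ranked = sorted(docs.values(),
                    key=lambda d: cosine(vectorize(query), vectorize(d)),
                    reverse=True)
    return ranked[:k]

query = "How did revenue do last quarter?"
context = "\n".join(retrieve(query))
print(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
# This assembled prompt is what would be sent to the local LLM.
```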

    • Dojan
      5 months ago

      There were CUDA cores before RTX. I can run LLMs on my CPU just fine.

    • @Steve
      5 months ago

      There are a number of local AI LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
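      That claim holds up in practice: quantized GGUF models run on plain CPUs through llama.cpp. A minimal sketch using the llama-cpp-python bindings, which build CPU-only by default; the model path is a placeholder for whichever GGUF file you have downloaded.

      ```python
      from llama_cpp import Llama  # pip install llama-cpp-python

      # Any quantized GGUF model works; this path is a placeholder.
      llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

      out = llm("Q: Name the planets in the solar system. A:",
                max_tokens=64, stop=["Q:"])
      print(out["choices"][0]["text"])
      ```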

    • halfwaythere
      5 months ago

      This statement is so wrong. I have Ollama with the llama2 model running decently on a 970 card. Is it super fast? No. Is it usable? Yes, absolutely.
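      For anyone who wants to try the same thing, Ollama serves a local REST API on port 11434, so a quick test is a few lines of Python. This assumes the Ollama daemon is running and `ollama pull llama2` has already been done.

      ```python
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
      )
      print(resp.json()["response"])  # Uses the GPU when available, falls back to CPU
      ```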