Hi, I'm a newbie C# dev. I'm working on a home project and until recently I was using LLamaSharp, but there's little support for it, and since the latest updates I haven't been able to get it to work at all.

I'm trying to build a little chat WPF application that can load either AWQ or GGUF LLM files. Are there any simple, easy-to-use libraries out there that I can use from C#?

I have an RTX 3060 and I'd prefer to use its VRAM if that's faster than going through DDR4 system RAM. I admit I'm under a few misconceptions. Ideally I'd like to be able to load the Mistral models in C#.

[screenshot attachment: https://preview.redd.it/6tx5ij2imm2c1.jpg?width=877&format=pjpg&auto=webp&s=53e2a07f53e5d7e15ebbe727d6930bfd3bbea25b]

  • ThisGonBHard@alien.topB · 1 year ago

    The whole AI ecosystem was pretty much designed for Python from the ground up.

    I'm guessing you can run C# as the front end and Python as the back end.
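    One way to wire that up, as a rough sketch: have the WPF front end launch a Python inference server as a child process and talk to it over HTTP. The server.py script and the port here are placeholders, not anything that exists in your project yet:

        using System.Diagnostics;

        // Launch a (hypothetical) Python inference backend next to the WPF app.
        // "server.py" and the port are placeholders for whatever you write.
        var psi = new ProcessStartInfo
        {
            FileName = "python",
            Arguments = "server.py --port 5000",
            UseShellExecute = false,
            RedirectStandardOutput = true,
        };

        using var backend = Process.Start(psi);
        // ...talk to http://localhost:5000 from the C# side, and make sure
        // the backend process is killed when the WPF app shuts down.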

  • TheTerrasque@alien.topB · 1 year ago

    I don't know of an alternative, but I did some experimenting with it. I more or less rewrote large parts of it, and I also used a custom build of the llama.cpp DLLs. I'm pretty sure it'll still work with the newest llama.cpp build; you might need to update some native calls if they've been expanded or renamed.

    My changes are at https://github.com/TheTerrasque/LLamaSharp/tree/feature/clblast - I haven’t really documented it much, but maybe the git history will help

  • laca_komputilulo@alien.topB · 1 year ago

    This answer is somewhat OT, but it may be the best one for your situation. Take it from someone who started coding C# in 2001.

    The worst mistake a dev can make is calling themselves "I'm a ___ dev". It's an option-limiting mindset.

    Way back, I sank all my interest in the Semantic Web into porting Jena to NJena. I almost finished the conversion but never built anything useful.

    For your problem: dockerize Ooba, llama.cpp, etc. so they expose an API endpoint, then call that API from your WPF app via MS Semantic Kernel. Profit…

    Better to spend your time learning containerisation than coping with missing options in your chosen ecosystem.
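    To make the "call the API from your WPF app" step concrete, here's a minimal sketch assuming the dockerized backend exposes an OpenAI-compatible chat completions endpoint (llama.cpp's server and Ooba can both do this); the URL and model name are placeholders for whatever your container actually serves:

        using System;
        using System.Net.Http;
        using System.Net.Http.Json;
        using System.Text.Json;

        // Placeholder endpoint: a local OpenAI-compatible server, e.g. a
        // dockerized llama.cpp server listening on port 8080.
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };

        var request = new
        {
            model = "mistral-7b-instruct",   // whatever the backend was started with
            messages = new[]
            {
                new { role = "user", content = "Hello from WPF!" }
            }
        };

        var response = await http.PostAsJsonAsync("/v1/chat/completions", request);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var reply = doc.RootElement
            .GetProperty("choices")[0]
            .GetProperty("message")
            .GetProperty("content")
            .GetString();
        Console.WriteLine(reply);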

  • mrjackspade@alien.topB · 1 year ago

    Gonna be honest, you can totally just skip LLamaSharp and call the llama.dll methods using interop from C#.

    It's really not difficult to do, and it cuts an entire layer of dependency out of your project.
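    As a rough illustration (the exported names and signatures below match late-2023 llama.cpp builds and do change between versions, so verify them against the llama.h your llama.dll was built from):

        using System;
        using System.Runtime.InteropServices;

        internal static class NativeLlama
        {
            // Keep these in sync with llama.h; model/context functions such as
            // llama_load_model_from_file can be declared the same way.
            [DllImport("llama")]
            public static extern void llama_backend_init(bool numa);

            [DllImport("llama")]
            public static extern void llama_backend_free();

            [DllImport("llama")]
            public static extern IntPtr llama_print_system_info();
        }

        internal static class Demo
        {
            private static void Main()
            {
                NativeLlama.llama_backend_init(false);

                // llama_print_system_info returns a const char*; marshal it by
                // hand so the runtime doesn't try to free native memory.
                Console.WriteLine(Marshal.PtrToStringAnsi(NativeLlama.llama_print_system_info()));

                NativeLlama.llama_backend_free();
            }
        }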

  • _Lee_B_@alien.topB · 1 year ago

    You DO NOT NEED TO LOAD AND RUN MODELS to use AI. Run a server like text-generation-webui, then use its API.