I know the typical answer is “no, because all the libs are in Python”… but I’m kind of baffled why more porting isn’t going on, especially to Go, given that Go, like Python, is stupid easy to learn and yet much faster to run. Truly not trying to start a flame war or anything. I’m just a bigger fan of Go than Python and was thinking, coming into 2024, especially with all the huge money in AI now, we’d see a LOT more movement toward Go’s much faster runtime while keeping code that’s largely as easy, if not easier, to write and maintain. Not sure about Rust… it may run a little faster than Go, but the language is much more difficult to learn/use. Still, it has been growing in popularity, so I was curious if that’s a potential option.
There are some Go libs I’ve found, but the few I have found seem to be three, four or more years old. I was hoping there would be things like PyTorch and the like converted to Go.
I was even curious, with the power of GPT-4 or DeepSeek Coder or similar, how hard it would be to convert Python libraries to Go, whether anyone is working on that, or whether it’s pretty much impossible to do.
Yah… the thing is… do I have to learn very fancy, advanced Python to do this, or can I use simpler Python that then makes use of, as you said, more optimized libraries? I’m wondering how much time it’s going to take to figure out Python well enough to be of use, and hence was thinking Go and Rust might work well, as I know those well enough.
If it’s just calling APIs, even to a locally running model, I can do that in Go just fine. If it’s writing advanced AI code in Python, then I’d likely have to spend months or longer learning the language well enough for that sort of work, in which case I’m not sure I’m up to the task. I’m terrible with math/algos, so I’m not sure how much of all that I’m going to have to somehow “master” to be a decent-to-good AI engineer.
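For what it’s worth, here’s a minimal sketch of what that “just calling APIs from Go” path looks like, assuming a local server exposing an OpenAI-compatible chat-completions endpoint (several local runners do); the URL, port, and model name below are placeholders for whatever you actually run:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request/response shapes for an OpenAI-style chat completions endpoint.
// The local URL and model name are assumptions/placeholders, not a specific product's API.
type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message chatMessage `json:"message"`
	} `json:"choices"`
}

func main() {
	// Build the JSON request body.
	reqBody, err := json.Marshal(chatRequest{
		Model: "local-model", // placeholder model name
		Messages: []chatMessage{
			{Role: "user", Content: "Summarize why Go is fast."},
		},
	})
	if err != nil {
		panic(err)
	}

	// POST to a locally running server (placeholder URL/port).
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode and print the first completion, if any.
	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```

That’s really all the “AI code” there is on the consumer side: marshal JSON, hit an HTTP endpoint, read the answer. No Python required for that workflow.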
I think the question is: am I a developer using AI (via APIs or CLI tools) in some capacity, or am I writing the AI code itself that will be used for training models, etc.? I don’t know which path to go down, but I’d lean toward using models and API calls over having to become a deep AI expert.
My sense of it is that most training is still just using the APIs to talk to the GPU, and the art is more in the assembly of the training set than in optimizing those APIs. There are serious researchers working on improving AI, but they’re figuring out how to get data in and out of the GPU faster in a way that doesn’t hurt later quality. That’s not a code-optimization problem; it’s much more of an “understanding the math” kind of problem, and whether you’re using Python or Go to tell the GPU to execute this slightly different math isn’t much of a concern.
I think it’s a lot like data science. Getting the good clean data to work with is actually the hard part. For training, getting great training sets is the hard part.
If you just wish to write code that uses AI, or train models for that purpose, the current Python toolkit is more than sufficient, especially given how quickly everything is moving right now; we might have totally different architectures in three years, and Python will be quicker for that R&D iteration than Go is.
Finally, on your personal question - I’ve been coding for 39 years. I’ve worked in BASIC, Assembly, C, C++, Perl, Python, and Go (and 37 other languages here and there). Go to Python isn’t going to be a difficult jump. Especially now that you can use an AI to help you if you’re at all confused about how to turn a Go concept into a Python one.