Well, the code quality has gotten pretty bad, so I think it’s time to cancel my ChatGPT Plus subscription. I have an RX 6600 and a GTX 1650 Super, so I don’t think local models are a viable choice (at least not for the same style of coding I do with GPT-4). But I decided to post here anyway since you guys are very knowledgeable. I was looking at cursor.ai and it seemed pretty good. I don’t know how good it is now that OpenAI has decreased GPT-4’s performance, but I’ve heard the API is still OK. There is also refract, which could be a similar choice. What do you recommend? I mostly code in Python and sometimes C++.
I think the key to using any LLM for writing code is to not overwhelm the model with too many asks or feed it too much code at once, even if the context window is large. Baby steps and EXTREMELY clear requirements for each one of those steps are absolutely critical to getting the most out of any of these models.
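Just to make that concrete, here’s a rough sketch of what the "baby steps" workflow can look like, assuming the OpenAI v1 Python client. The task, function names, and step wording are all placeholders, not a real recipe; the point is that each request is small and explicitly scoped instead of one giant "rewrite my module" prompt.

```python
# Minimal sketch of the "baby steps" approach, assuming the openai v1 Python
# client and a made-up CSV-processing task. Each step is a small, explicitly
# scoped request sent on its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

steps = [
    "Write a Python function load_records(path) that reads a CSV file and "
    "returns a list of dicts. Use only the standard library. No other code.",
    "Write a Python function validate_record(record) that returns True only "
    "if the dict has non-empty 'id' and 'email' keys. No other code.",
    "Write a unittest TestCase for validate_record. No other code.",
]

for i, step in enumerate(steps, 1):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": step}],
    )
    print(f"--- step {i} ---")
    print(response.choices[0].message.content)
```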
One hack I’ve found is to have the LLM (or yourself) comment your code very explicitly. When you feed that commented code back in a later prompt, the quality of the responses improves substantially.
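A toy example of the kind of commenting I mean (the function itself is hypothetical): every block states its intent before you paste the code back into the prompt, so the model doesn’t have to guess what you were trying to do.

```python
def dedupe_orders(orders):
    # orders is a list of dicts, each with an 'order_id' and a 'timestamp'.
    # Goal: keep only the most recent entry per order_id.
    latest = {}
    for order in orders:
        oid = order["order_id"]
        # Replace the stored entry only if this one is newer.
        if oid not in latest or order["timestamp"] > latest[oid]["timestamp"]:
            latest[oid] = order
    # No particular ordering is preserved; the caller sorts if it matters.
    return list(latest.values())
```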