Hello after a long time :)

I am TokenBender.
Some of you may remember my previous model, codeCherryPop.
It was very kindly received, so I am hoping this one won't get me killed either.

Releasing EvolvedSeeker-1.3B v0.0.1
A 1.3B model with 68.29% on HumanEval.
The base model is quite cracked; I just did with it what I usually try to do with every coding model.

Here is the model - https://huggingface.co/TokenBender/evolvedSeeker_1_3
I will post this in TheBloke's server for GGUF, but I find that Deepseek coder's GGUF conversions suck for some reason, so let's see.

EvolvedSeeker v0.0.1 (First phase)

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs.

I have mostly curated instructions from evolInstruct datasets and some portions of glaive coder.

Around 3k answers were modified via self-instruct.
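
The exact training code isn't in this post, but the general shape of the run is plain supervised fine-tuning. A rough sketch with TRL's SFTTrainer is below; the base model and the 3 epochs are the only stated facts, and everything else (file name, batch size, learning rate) is a placeholder.

```python
# Hypothetical sketch of the fine-tune described above. TRL's SFTTrainer is an
# assumption; the actual training stack and hyperparameters were not shared.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed: a JSONL file with a "text" column of ChatML-formatted examples.
dataset = load_dataset("json", data_files="evolved_seeker_50k.jsonl", split="train")

args = SFTConfig(
    output_dir="evolvedseeker-1.3b-sft",
    num_train_epochs=3,              # stated in the post
    per_device_train_batch_size=4,   # placeholder
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=2e-5,              # placeholder
    bf16=True,
)

trainer = SFTTrainer(
    model="deepseek-ai/deepseek-coder-1.3b-base",  # stated base model
    train_dataset=dataset,
    args=args,
)
trainer.train()
```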

The recommended prompt format is ChatML; Alpaca will work too, but take care with the EOT token.
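
For reference, a ChatML prompt looks like this (a minimal sketch; the system/user text is just an example, and <|im_start|>/<|im_end|> are the standard ChatML markers):

```python
# Minimal ChatML prompt assembly (illustrative strings only).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the model continues from here
    )

print(chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
))
```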

This is a very early version of the 1.3B-sized model in my main project, PIC (Partner-in-Crime).
Going to teach this model json/md adherence next.

https://preview.redd.it/jhvz3xoj7y1c1.png?width=1500&format=png&auto=webp&s=3c0ec081768293885a9953766950758e9bf6db7d

I will just focus on the simple things I can do for now, but anything you all suggest will be taken into consideration for fixes.

  • alchemist1e9@alien.top

    Thank you. Really interesting. I have a question for you: do you happen to know of any trained-from-scratch coding model projects? The reason I ask is that I have a very specific idea about how best to teach an LLM to program, but it requires changing some details at the very base encoding level and a change in how the training data is presented. I've been programming for over 30 years now and I strongly suspect there is a fairly simple trick to improving coding models, so I'd like to look at something open source that starts from the very beginning. Then I can investigate how hard it would be to implement what I'm thinking. The design I have in mind should result in very small but capable models.

  • BrainSlugs83@alien.top

    Interesting, is Partner in Crime (PIC) like an open source co-pilot type project? I haven’t heard of it before (did you coin this phrase yourself, or is it well known)?

    I ask because the tasks you describe (json/md/function calling/empathy), and the name itself, all basically make it sound like the open-source equivalent of a co-pilot model.

  • naptastic@alien.top

    Ok, it finally downloaded and I've spent a few minutes with it. It keeps getting into endless pathways of jargon (e.g., "fair play make world communal environment tolerant embraces diversity embrace equity promote unity instill resilience proactive leadership"), and it just goes on like that, with no punctuation and no connecting words, until it reaches the token limit. What loader and settings work best with this model?

    • AfterAte@alien.top

      Try using the Alpaca template, turn the temperature down to 0.1 or 0.2, and set the repetition penalty to 1. I haven't tested this model yet, but those settings work for Deepseek-coder. If you're using oobabooga, the StarChat preset works for me.
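
      Outside oobabooga, a rough plain-transformers equivalent of those settings would look something like this sketch (the Alpaca-style prompt and token counts are just examples):

      ```python
      # Sketch: low temperature, repetition penalty effectively disabled (1.0).
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "TokenBender/evolvedSeeker_1_3"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.bfloat16, device_map="auto"
      )

      prompt = (
          "### Instruction:\nWrite a Python function that checks whether a "
          "number is prime.\n\n### Response:\n"
      )
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      out = model.generate(
          **inputs,
          max_new_tokens=256,
          do_sample=True,
          temperature=0.1,        # as suggested above
          repetition_penalty=1.0,
      )
      print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
      ```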

    • ahm_rimer@alien.top (OP)

      Try the chat inference code mentioned in the model card if you’re running it on GPU. The size is good enough to test on free colab as well.
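
      Roughly, the idea is something like this sketch (not the exact model-card code; it assumes the tokenizer ships a ChatML chat template):

      ```python
      # Sketch of GPU chat inference; see the model card for the canonical snippet.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "TokenBender/evolvedSeeker_1_3"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).cuda()

      messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
      input_ids = tokenizer.apply_chat_template(
          messages, add_generation_prompt=True, return_tensors="pt"
      ).to(model.device)

      out = model.generate(input_ids, max_new_tokens=512, do_sample=False)
      print(tokenizer.decode(out[0][input_ids.shape[1]:], skip_special_tokens=True))
      ```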

      • naptastic@alien.top

        That definitely works better. I wouldn’t trust it too far though. It just told me I can remove the first part of a file with one seek() and one truncate() call…
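
        For the record, that doesn't work: truncate() only cuts from the end of a file, so dropping a prefix means shifting the remainder forward first, something like:

        ```python
        # seek()+truncate() alone can only shorten the tail; removing a prefix
        # requires reading the rest and writing it back at offset 0 first.
        def drop_prefix(path: str, nbytes: int) -> None:
            with open(path, "r+b") as f:
                f.seek(nbytes)        # skip the part to discard
                remainder = f.read()  # keep everything after it
                f.seek(0)
                f.write(remainder)    # shift it to the start of the file
                f.truncate()          # cut off the now-stale tail
        ```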

  • AfterAte@alien.top

    Wow, that's amazing. On the eval+ leaderboards, Deepseek-coder-1.3B-instruct gets 64.6, so that's roughly a 4-point increase. It's only about 3 points below Phind-v2's result, which is amazing.

  • AfterAte@alien.top

    Btw, does your dataset include coding examples? If so, do you include Rust? I find current models really suck at Rust, but can make a pretty good Snake game in Python 😂

    • ahm_rimer@alien.top (OP)

      There's not enough instruct data for Rust, and I'm also not familiar with Rust, so I can't test it out.

      I usually only test things out with C, C++, and Python at my level.

      Though if you know of a good source, I'll use it for Rust fine-tuning.

      • AfterAte@alien.top

        I don’t know much about Rust, but Easy Rust is a good source for learning: https://github.com/Dhghomon/easy_rust

        But in a useful format for fine-tuning… no idea where to get that, and I'm not qualified to make it either. I don't want to burden you with extra work, so I guess C++ will have to do for now :) Thank you for the model, from me and everyone else with a potato PC m(_ _)m

  • FullOf_Bad_Ideas@alien.top

    Do you plan to release the dataset? Have you checked for data contamination with the benchmarks? I am overall pretty confused by the HumanEval scores of these models, not just your finetune. DeepSeek AI got very weird scaling in benchmarks, since their 6.7B model scores really close to the 33B one, which usually doesn't work this way: the 6.7B instruct scores 78.6% while the 33B instruct scores 79.3%. I am now using the 33B model daily at work and it's really good. I have no evidence to support my claim, but I totally wouldn't be surprised if they were pre-training on a contaminated dataset.