Hi, I have very little experience with logic-based AI algorithms. My understanding is that these machines use propositional logic (which isn't something I had come across before, even at uni).

My background is in statistical and neural-network-based approaches, and I understand the pros and cons of those.

I am just trying to understand the advantages and disadvantages of logic-based algorithms, specifically Tsetlin Machines. Is this something I should learn more about? What are some good resources?

Thanks

  • olegranmo@alien.topB · 10 months ago

    Hi u/Loud-Consideration-2! I am currently writing a book on this, and some of the chapters are already available: https://tsetlinmachine.org. There is also source code for many of the latest advances here: https://github.com/cair/tmu.

    Logical learning with the Tsetlin machine is fully transparent. Still, it is similar to neural networks in that it learns non-linear patterns, supports convolution, and learns online, one example at a time.
    The Tsetlin machine is only 5 years old, and our biggest challenge is actually not inductive bias, but too much expressive power, which leads to overfitting, just like neural networks before us. There is lots of ongoing research and progress here, and I think we have only seen the beginning.
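    To make the transparency concrete, here is a toy sketch (my own illustration, not code from the tmu library) of the core building block: each learned pattern is a clause, a plain conjunction of input literals that you can read off directly.

    ```python
    # Toy sketch: a Tsetlin machine pattern is a conjunctive clause over binary
    # literals (input features and their negations), so it is human-readable.
    def evaluate_clause(included_literals, x):
        """Fires (returns True) only if every included literal holds.
        included_literals: list of (feature_index, negated) pairs.
        x: list of 0/1 input features."""
        return all(1 - x[i] if negated else x[i] for i, negated in included_literals)

    # Example: a learned clause "x0 AND NOT x2"
    clause = [(0, False), (2, True)]
    print(evaluate_clause(clause, [1, 1, 0]))  # True: x0=1 and x2=0
    print(evaluate_clause(clause, [1, 0, 1]))  # False: x2=1 violates NOT x2
    ```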
    Here is a recent paper that illustrates the benefits of Tsetlin machines in natural language processing and image analysis: https://ojs.aaai.org/index.php/AAAI/article/view/26588. Here is a paper on medical image analysis: https://arxiv.org/abs/2301.10181.
    The Tsetlin machine currently excels at energy-constrained edge machine learning, where you can get up to 10000x lower energy consumption and 1000x faster inference (https://www.mignon.ai).
    My goal is to create an alternative to BigTech’s black boxes: free, green, transparent, and logical (http://cair.uia.no).

    • JustAnotherRedUser1@alien.topB · 10 months ago

      I actually stumbled upon your research by accident while searching for a job. Pretty interesting model!

      When is the time-series chapter of your book coming out?

      • olegranmo@alien.topB · 10 months ago

        Thanks, u/JustAnotherRedUser1 - the convolution chapter is coming next, in a few weeks. After that, the one on regression. I aim to complete the book in the next six months, so time series will be covered sometime before then.

        • JustAnotherRedUser1@alien.topB · 10 months ago

          Looking forward to it, great job so far!

          I have a question about applying test sets to models. I had planned to email you, but I found you here by chance!

          My master's thesis in Finance was based on Easley et al.'s 2020 paper "Microstructure in the Machine Age" from the Review of Financial Studies, which used Random Forests (RF) to analyze financial market microstructure. The researchers built an RF model from a dataset of several microstructure indicators, then evaluated which features were most significant in classifying new, unseen data (the test set). We do not look at prediction in this case; we look at which features split most effectively on the test set.

          In our tests, several indicators had high feature importance when building the RF model, but when the model was applied to new, unseen data (the test set), the importance of these features often changed. Features that were crucial in the training phase were not as important for splitting the new data, a pattern we observed consistently.

          Easley et al. (2020) suggest that this occurs because financial microstructures are traditionally constructed on “in-sample” data, which has limited predictive power for new, unseen data.
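          If it helps, here is a minimal sketch of how one could measure that in-sample vs. out-of-sample importance shift, using scikit-learn's permutation importance on synthetic data (my own toy setup; Easley et al. (2020) may use a different importance measure):

          ```python
          # Compare feature importance on the training set vs. a held-out test set.
          # Features whose importance collapses out-of-sample were fit to in-sample noise.
          from sklearn.datasets import make_classification
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.inspection import permutation_importance
          from sklearn.model_selection import train_test_split

          X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                                     random_state=0)
          X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                              random_state=0)
          rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

          imp_train = permutation_importance(rf, X_train, y_train, n_repeats=10, random_state=0)
          imp_test = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)

          for i in range(X.shape[1]):
              print(f"feature {i}: train={imp_train.importances_mean[i]:.3f} "
                    f"test={imp_test.importances_mean[i]:.3f}")
          ```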

          I believe this finding can be important, as it can serve as an alternative to backtesting. Backtesting makes little sense for financial data, since all the random states of the market are already known.

          I’m wondering if the Tsetlin Machine could be used to measure and analyze the same idea?

          • olegranmo@alien.topB · 10 months ago

            Sounds like an exciting problem! I guess leveraging the Tsetlin machine clauses could give a fresh take on the task. Tsetlin machines also support reasoning by elimination; that is, they can learn what the target isn't instead of what it is, for increased robustness: https://www.ijcai.org/proceedings/2022/616
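            For a rough picture of what I mean by leveraging the clauses, here is a toy sketch (my own illustration, not the method from the linked paper): clauses describing what the class is vote for it, while "elimination" clauses describing what it isn't vote against it.

            ```python
            # Toy clause voting: "for" clauses add votes; "against" clauses
            # (reasoning by elimination) subtract votes when they match the input.
            def class_score(x, for_clauses, against_clauses):
                return sum(c(x) for c in for_clauses) - sum(c(x) for c in against_clauses)

            # Hand-written clauses over a 3-bit input [whiskers, purrs, feathers]:
            is_cat = [lambda x: x[0] and not x[2]]  # "whiskers AND NOT feathers"
            isnt_cat = [lambda x: x[2]]             # rule out anything with feathers
            print(class_score([1, 0, 0], is_cat, isnt_cat))  # 1  -> consistent with cat
            print(class_score([1, 0, 1], is_cat, isnt_cat))  # -1 -> eliminated
            ```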

    • Citizen_of_Danksburg@alien.topB · 10 months ago

      This is so cool. I've never heard of your creation before this post. I'm a pure mathematician turned statistician just getting my feet wet with neural networks and other more modern approaches to regression and classification. This is very cool work you're doing.

    • Loud-Consideration-2@alien.topOPB · 10 months ago

      Great reply, and thank you for that info. I have actually come across Mignon; really cool stuff they're doing for edge AI applications!

      Can’t wait to see how this progresses.

  • currentscurrents@alien.topB · 10 months ago

    One key difference is that they are not trained with end-to-end optimization, but rather with a hand-crafted learning rule. This rule has strong inductive biases that work well for small datasets with pre-extracted features, like tabular data. (See the sketch below for intuition.)
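    The building block of that hand-crafted rule is the Tsetlin automaton, a tiny finite-state machine that decides whether to include or exclude each literal from a clause based on reward/penalty feedback. A minimal sketch (my simplification; the full clause feedback tables are more involved):

    ```python
    # Two-action Tsetlin automaton with 2n states: states 1..n choose action 0
    # (exclude the literal), states n+1..2n choose action 1 (include it).
    class TsetlinAutomaton:
        def __init__(self, n=100):
            self.n = n
            self.state = n  # start at the boundary, weakly favouring "exclude"

        def action(self):
            return 1 if self.state > self.n else 0

        def reward(self):
            # Reinforce the current action: move deeper into its half.
            if self.action() == 1:
                self.state = min(self.state + 1, 2 * self.n)
            else:
                self.state = max(self.state - 1, 1)

        def penalize(self):
            # Weaken the current action: move toward, and possibly across,
            # the boundary, which flips the action.
            self.state += -1 if self.action() == 1 else 1
    ```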

    Their big disadvantage (and this applies to logical/symbolic approaches in general) is that they don't work well with raw data, even on easy datasets like CIFAR-10. The world is too messy for perfect logical rules; neural networks can capture this complexity, but simpler models struggle to.

    > statistical

    Note that learning is a fundamentally statistical process, so Tsetlin Machines are also statistics-based.