Hi guys, I am new to ML and was experimenting with kNN search algorithms.

I have very high-dimensional data (1000+ dimensions). What data structure is best suited for such high-dimensional data?

I can bring the dimensionality down to approximately 150 using PCA without being too lossy.
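
For reference, the reduction step is just standard PCA. A minimal sketch of what I’m doing, assuming scikit-learn and a samples-by-features NumPy matrix (random data here as a stand-in for mine):

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.rand(8000, 1000)    # stand-in for my real data
    pca = PCA(n_components=150)
    X_reduced = pca.fit_transform(X)  # shape (8000, 150)
    # fraction of variance kept -- a rough measure of how lossy this is
    print(pca.explained_variance_ratio_.sum())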

Even then, I am having a hard time finding techniques that work with such high-dimensional data. I am not looking for approximate NN search using LSH.

What is the best technique to use here? A k-d tree doesn’t work well with high-dimensional data; would an R-tree or a ball tree be a better choice, or something else entirely?

  • eamonnkeogh@alien.top · 10 months ago

    Have you tried simply using highly optimized brute-force search?

    You can optimize with early abandoning, reordering, etc.

    It is surprising how competitive that can be.
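
    A minimal sketch of the early-abandoning idea in Python (plain loops for clarity; you would compile this, e.g. with numba or C, to make it genuinely fast):

        import numpy as np

        def knn_early_abandon(data, query, k):
            # Exact kNN (squared Euclidean) with early abandoning.
            # "reordering": visit high-variance dimensions first so the
            # partial sum grows quickly and abandoning kicks in earlier.
            order = np.argsort(-data.var(axis=0))
            best = []            # (squared_dist, index), sorted, len <= k
            threshold = np.inf   # squared distance of the current k-th best
            for i in range(data.shape[0]):
                acc = 0.0
                for j in order:
                    diff = data[i, j] - query[j]
                    acc += diff * diff
                    if acc > threshold:  # cannot beat the k-th best: abandon
                        break
                else:
                    best = sorted(best + [(acc, i)])[:k]
                    if len(best) == k:
                        threshold = best[-1][0]
            return best

        data = np.random.rand(8000, 150)
        print(knn_early_abandon(data, data[0], k=5))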

  • resented_ape@alien.top · 10 months ago

    You don’t say how many items are in your dataset, or whether this is something you need to do a lot or only a few times. Despite the horrible scaling, you can get a surprisingly long way by just doing a brute-force search if you have a beefy enough CPU with lots of threads.
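
    For scale: a fully vectorized brute-force kNN is only a few lines of NumPy, and the big matrix product inside it is exactly what a multi-threaded BLAS parallelizes across your cores (a sketch, not tuned):

        import numpy as np

        def brute_knn(data, queries, k):
            # squared distances via |a-b|^2 = |a|^2 - 2*a.b + |b|^2;
            # the queries @ data.T product is the threaded BLAS part
            d2 = ((queries ** 2).sum(axis=1)[:, None]
                  - 2.0 * queries @ data.T
                  + (data ** 2).sum(axis=1)[None, :])
            idx = np.argpartition(d2, k, axis=1)[:, :k]   # unordered top-k
            order = np.take_along_axis(d2, idx, axis=1).argsort(axis=1)
            return np.take_along_axis(idx, order, axis=1)  # sorted top-k

        data = np.random.rand(8000, 150)
        print(brute_knn(data, data[:5], k=10))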

    If you have access to a GPU, then look at Faiss. On my now-puny 1080 laptop GPU it does a fine job for my purposes, although admittedly it does run out of memory on very large datasets (but that’s almost certainly just me not having yet paid enough attention to its API on how to deal with that).
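
    A sketch of the Faiss route; note that IndexFlatL2 is an exact brute-force index, so it involves no approximation at all (the commented-out GPU lines assume the faiss-gpu build):

        import numpy as np
        import faiss

        data = np.random.rand(8000, 150).astype("float32")  # Faiss wants float32
        index = faiss.IndexFlatL2(data.shape[1])  # exact (brute-force) L2 search
        # GPU version, if faiss-gpu is installed:
        # res = faiss.StandardGpuResources()
        # index = faiss.index_cpu_to_gpu(res, 0, index)
        index.add(data)
        distances, neighbors = index.search(data[:5], 10)  # k = 10
        print(neighbors)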

    Finally, you say you are not looking for approximate neighbors using LSH, but the difficulty you are encountering is the reason why people use approximate nearest neighbors. Moreover, LSH is not the only or even the best choice for approximate nearest neighbor search. I have happily used PyNNDescent and HNSW. Annoy will also work fine with 150 dimensions and maybe the full 1000 dimensions. Voyager seems interesting but I haven’t spent any time with it.
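
    For instance, an HNSW index via the hnswlib package takes only a few lines (the parameter values below are common defaults, not tuned for your data):

        import numpy as np
        import hnswlib

        data = np.random.rand(8000, 150).astype("float32")
        index = hnswlib.Index(space="l2", dim=data.shape[1])
        index.init_index(max_elements=data.shape[0], ef_construction=200, M=16)
        index.add_items(data)
        index.set_ef(50)  # query-time speed/recall trade-off; keep ef >= k
        labels, distances = index.knn_query(data[:5], k=10)
        print(labels)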

    • xero786@alien.top (OP) · 10 months ago

      My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNN methods work, what I can do with them, and what works best among them in high-dimensional space…

      • resented_ape@alien.top · 10 months ago

        I understand your investigative spirit, but I think you are going to discover that all methods of finding exact kNNs are different flavors of “slow” at high dimensions. There might be some exotic variations that can shave off some of the constant factor, but they are usually restricted to Euclidean or inner-product-like distances (although that’s usually what people want). FWIW, 8000 points doesn’t seem like a huge dataset.

        Here’s some R code and timings using the FNN package, which has a few options:

        data8k <- matrix(rnorm(n=8000 * 1000), nrow = 8000)
        system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "brute"))
           user  system elapsed 
          70.19    0.06   75.87 
        system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "kd_tree"))
           user  system elapsed 
          78.51    0.14   85.08 
        system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "cover_tree"))
           user  system elapsed 
         129.52    0.14  134.01 
        system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "CR"))
           user  system elapsed 
          70.41    0.08   74.40
        

        That’s single-threaded on my very-much no-longer-impressive laptop. The fact that brute-force search does so well compared to the others suggests that there aren’t better options for your data.

  • PM_ME_YOUR_BAYES@alien.top · 10 months ago

    1k features is a lot, but not really A LOT. Also, you didn’t mention how many samples you have. Without any other knowledge, off the top of my head I would try to fit a self-organizing map, use it as an “index” to retrieve the samples most similar to the query, and finish with a kNN only on those.
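
    A sketch of that idea using the minisom package (the grid size and training length here are arbitrary, and a real version should also probe neighboring units so true neighbors in adjacent buckets aren’t missed):

        import numpy as np
        from minisom import MiniSom

        data = np.random.rand(8000, 150)
        som = MiniSom(10, 10, data.shape[1], sigma=1.5, learning_rate=0.5)
        som.train_random(data, 1000)

        # the "index": bucket every sample under its best-matching unit (BMU)
        buckets = {}
        for i, x in enumerate(data):
            buckets.setdefault(som.winner(x), []).append(i)

        # query: exact kNN restricted to the samples in the query's bucket
        query = data[0]
        candidates = np.array(buckets[som.winner(query)])
        d = np.linalg.norm(data[candidates] - query, axis=1)
        print(candidates[np.argsort(d)[:10]])  # k = 10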

    • xero786@alien.top (OP) · 10 months ago

      My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNN methods work, what I can do with them, and what works best among them in high-dimensional space…

      • PM_ME_YOUR_BAYES@alien.top · 10 months ago

        SOMs are not like the neural network predictors you usually see around here, in the sense that they do not learn a new feature space. It would have been the same if I had suggested using k-means to reduce the search space and then doing kNN.
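
        That k-means variant looks like this (a sketch; it is essentially the coarse quantizer of an IVF-style index, and probing several nearby clusters instead of just one improves recall):

            import numpy as np
            from sklearn.cluster import KMeans

            data = np.random.rand(8000, 150)
            km = KMeans(n_clusters=50, n_init=10).fit(data)

            query = data[0]
            # coarse step: the cluster whose centroid is nearest the query
            c = np.linalg.norm(km.cluster_centers_ - query, axis=1).argmin()
            candidates = np.where(km.labels_ == c)[0]
            # fine step: exact kNN among that cluster's members only
            d = np.linalg.norm(data[candidates] - query, axis=1)
            print(candidates[np.argsort(d)[:10]])  # k = 10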