Hi guys, I am new to ML and was experimenting with kNN search algorithms.
I have very high-dimensional data (1000+ dimensions). What data structure is best suited for such high-dimensional data?
I can bring the dimensionality down to approximately 150 using PCA without being too lossy.
Even then, I am having a hard time finding techniques that work with such high-dimensional data. I am not looking for approximate NN search using LSH.
What is the best technique here? A k-d tree doesn't work well with high-dimensional data, so would an R-tree or a ball tree be a better choice, or something different?
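For reference, this is roughly what I'm trying right now (a minimal sketch with scikit-learn; the data and parameters are just placeholders for my setup):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Placeholder data standing in for my real dataset: ~8000 samples, 1000+ dimensions
rng = np.random.default_rng(0)
X = rng.standard_normal((8000, 1024))

# Reduce to ~150 dimensions with PCA; the kNN afterwards is still exact, just in the reduced space
X_reduced = PCA(n_components=150).fit_transform(X)

# Exact kNN; 'algorithm' can be 'ball_tree', 'kd_tree', or 'brute'
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree').fit(X_reduced)
distances, indices = nn.kneighbors(X_reduced[:5])
print(indices)
```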
1k features are a lot, but not really A LOT. Also, you didn't mention how many samples you have. Without any other knowledge, off the top of my head I would try to fit a self-organizing map (SOM), use it as an "index" to retrieve the samples most similar to the query, and finish with an exact kNN on only those candidates (rough sketch below).
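Something like this, using the MiniSom library (grid size, training length, etc. are arbitrary placeholder values here, and in practice you'd also want to search the neighbouring SOM cells, not just the winning one, so you don't miss true neighbours at cell boundaries):

```python
import numpy as np
from collections import defaultdict
from minisom import MiniSom
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((8000, 150))  # e.g. your PCA-reduced data

# Fit a small SOM grid to act as a coarse index over the data
som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# Map every sample to its best-matching unit (BMU)
buckets = defaultdict(list)
for i, x in enumerate(X):
    buckets[som.winner(x)].append(i)

def knn_via_som(query, k=10):
    """Exact kNN restricted to the samples sharing the query's BMU."""
    candidates = buckets[som.winner(query)]
    nn = NearestNeighbors(n_neighbors=min(k, len(candidates))).fit(X[candidates])
    dist, idx = nn.kneighbors(query.reshape(1, -1))
    return dist[0], [candidates[j] for j in idx[0]]

print(knn_via_som(X[0]))
```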
My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNN works, what I can do with it, and what works best in high-dimensional space…
SOMs are not like the neural network predictors you usually see around here, in the sense that they do not learn a new feature space. It would have been the same if I had suggested using k-means to reduce the search space and then doing kNN on the candidates (sketch below).
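For comparison, here is the same pattern with k-means from scikit-learn (the number of clusters is arbitrary; a real setup would also probe the next-nearest clusters so true neighbours near cluster boundaries aren't missed):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((8000, 150))

# Partition the data into coarse cells, exactly as the SOM does above
km = KMeans(n_clusters=64, n_init=10, random_state=0).fit(X)
labels = km.labels_

def knn_via_kmeans(query, k=10):
    """Exact kNN restricted to the query's nearest cluster."""
    cluster = km.predict(query.reshape(1, -1))[0]
    candidates = np.where(labels == cluster)[0]
    nn = NearestNeighbors(n_neighbors=min(k, len(candidates))).fit(X[candidates])
    dist, idx = nn.kneighbors(query.reshape(1, -1))
    return dist[0], candidates[idx[0]]

print(knn_via_kmeans(X[0]))
```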