  • Is there a materials scientist who can explain the significance of this to a reformed computational chemist?

    What does it actually mean when they say they have “found” 2.2 million new stable materials and think 380,000 of them are candidates for validation? They say 736 were experimentally validated, but from looking at the paper, those 736 were found independently while GNoME was being developed, not selected by the authors as a deliberate test of the model’s predictions.

    How many of the 380,000 candidates have been experimentally validated by the authors of the paper?

    To put it in small-molecule drug discovery terms: if I said I had created 2.2 million new drugs in silico and suggested 380,000 of them for validation, I don’t think anyone would be very impressed; pretty much any virtual screening tool can do that, and has done for decades, without the need for deep learning. If 736 of my molecules turned up in the Journal of Medicinal Chemistry as having been synthesized and found to really be biologically active (or whatever) during the course of my research, that’s cool, but that still leaves 379,264 to go before we know my hit rate.

    I assume what they have done is actually more impressive than the analogy I have made above…


    I understand your investigative spirit, but I think you are going to discover that all methods of finding exact kNNs are different flavors of “slow” at high dimensions. There might be some exotic variations that can shave off some of the constant factor, but they are usually restricted to Euclidean or inner-product-like distances (although that’s usually what people want). FWIW, 8000 points doesn’t seem like a huge dataset.

    Here’s some R code and timings using the FNN package, which has a few options:

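    # 8000 random points in 1000 dimensions, standard normal entries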
    data8k <- matrix(rnorm(n=8000 * 1000), nrow = 8000)
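    # exact 30-nearest-neighbor search, timed with each algorithm FNN offers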
    system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "brute"))
       user  system elapsed 
      70.19    0.06   75.87 
    system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "kd_tree"))
       user  system elapsed 
      78.51    0.14   85.08 
    system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "cover_tree"))
       user  system elapsed 
     129.52    0.14  134.01 
    system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "CR"))
       user  system elapsed 
      70.41    0.08   74.40
    

    That’s single-threaded on my very-much no-longer-impressive laptop. The fact that brute-force search does so well compared to the others suggests that there aren’t better options than brute force for your data.


    You don’t say how many items are in your dataset, or whether this is something you need to do a lot or only a few times. Despite the horrible scaling, you can get a surprisingly long way by just doing a brute-force search if you have a beefy enough CPU with lots of threads.
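
    For example, here is a minimal sketch in Python with scikit-learn; the library choice and the 8000-point, 1000-dimension data shape are just assumptions on my part, since you haven’t said what you’re working with. algorithm="brute" forces exhaustive search and n_jobs=-1 spreads the distance computations over all cores:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical stand-in for your data: 8000 points in 1000 dimensions.
    X = np.random.rand(8000, 1000).astype(np.float32)

    # Exhaustive search, parallelized across all available CPU cores.
    nn = NearestNeighbors(n_neighbors=30, algorithm="brute", n_jobs=-1)
    nn.fit(X)
    # Each row of idx holds the indices of that point's 30 nearest neighbors
    # (note that each point comes back as its own nearest neighbor here).
    dist, idx = nn.kneighbors(X)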

    If you have access to a GPU, then look at Faiss. On my now-puny 1080 laptop GPU it does a fine job for my purposes, although admittedly it does run out of memory on very large datasets (but that’s almost certainly just me not yet having paid enough attention to the parts of its API that deal with that).
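
    To give a flavor of what that looks like, here is a sketch using the simplest exact flat L2 index moved onto the GPU (the data shape is made up, and this isn’t necessarily the index type you’d settle on):

    import numpy as np
    import faiss

    X = np.random.rand(8000, 1000).astype(np.float32)  # hypothetical data

    index = faiss.IndexFlatL2(X.shape[1])              # exact L2 search
    res = faiss.StandardGpuResources()
    gpu_index = faiss.index_cpu_to_gpu(res, 0, index)  # move the index to GPU 0
    gpu_index.add(X)
    D, I = gpu_index.search(X, 30)                     # 30-NN distances and indices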

    Finally, you say you are not looking for approximate neighbors using LSH, but the difficulty you are encountering is exactly why people use approximate nearest neighbors. Moreover, LSH is not the only, or even the best, choice for approximate nearest neighbor search. I have happily used PyNNDescent and HNSW. Annoy will also work fine with 150 dimensions and maybe even the full 1000. Voyager seems interesting, but I haven’t spent any time with it.
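
    As an illustration of how little code the approximate route takes, here is a minimal PyNNDescent sketch (the data shape and the Euclidean metric are assumptions on my part):

    import numpy as np
    from pynndescent import NNDescent

    X = np.random.rand(8000, 1000).astype(np.float32)  # hypothetical data

    # Build an approximate 30-nearest-neighbor graph over X.
    index = NNDescent(X, n_neighbors=30, metric="euclidean")
    neighbors, distances = index.neighbor_graph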