I have some dashcam footage from my car and want to see how a model could embed the frames in an unsupervised (or self-supervised) way so I don't have to label everything.
The idea is that scenarios which are semantically similar but different in pixel space (pulling out of the driveway during the day versus pulling in at night) would be clustered close-ish together in latent space, so I could label fewer images and have the model infer the rest with something like k-nearest neighbors. Rough sketch of what I mean below.
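This is just a sketch of the label-propagation part, assuming I already have an (N, D) array of frame embeddings from some encoder; the indices and label names are made up.

```python
# k-nearest-neighbour label propagation over precomputed frame embeddings.
# Assumes `frame_embeddings.npy` holds an (N, D) array from some encoder;
# the indices and label strings below are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

embeddings = np.load("frame_embeddings.npy")              # (N, D)

labeled_idx = np.array([0, 17, 42, 310])                   # frames labelled by hand
labels = np.array(["driveway_day", "driveway_night", "highway", "parking_lot"])

# fit on the handful of labelled frames, using cosine similarity in embedding space
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(embeddings[labeled_idx], labels)

# predict labels for every frame I didn't label
unlabeled_idx = np.setdiff1d(np.arange(len(embeddings)), labeled_idx)
pred = knn.predict(embeddings[unlabeled_idx])
```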
I'm starting off with frame-level embeddings before I try to tackle videos as sequences of images (I'll probably lose interest by that point, so I want to get images dealt with first). I looked into VAEs and tried training one from scratch on my data, but I don't have enough compute for that.
Does anyone here have any ideas about this? Any pretrained, off-the-shelf models I could use? Any leads for a literature survey?
I've personally used SimCLR with some success, but unless you doctor the embedding scheme, similarity will primarily be a function of pixel likeness.
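Either way, pulling frame embeddings out of an off-the-shelf backbone is only a few lines. The sketch below uses torchvision's ImageNet-pretrained ResNet-50 purely as a stand-in; if you get hold of a self-supervised checkpoint (SimCLR, DINO, etc.) the extraction code is the same, only the weights change.

```python
# Extract a fixed-length embedding per frame from a pretrained backbone.
# torchvision's ImageNet ResNet-50 is used as a stand-in encoder here.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()          # drop the classifier head, keep 2048-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """Return a (2048,) numpy embedding for one dashcam frame."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return model(img).squeeze(0).numpy()
```

Cosine distance between those vectors gives you a first-pass nearest-neighbour search; whether day/night versions of the same scenario actually land near each other depends heavily on what the backbone was trained on, which is where the self-supervised checkpoints tend to help.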