I have a collection of audio files from comedy skits, and I’m looking to train a neural network to autonomously decide when to trigger a “laughing” sound effect. The catch? I want to avoid manually setting cue points for laughter. Instead, I’m aiming for the neural network to determine the right moments to insert laughter, based on the content of the skit.
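One way to set this up without manual cue points is to use the laughter already present in the recordings as weak labels: detect the laughter segments automatically, mark the frame where each one starts, and train a frame-level classifier on the surrounding audio. Below is a minimal sketch of that framing, not a definitive recipe; it assumes some upstream laughter detector produces the per-frame `label_frames` targets, and all names here are illustrative.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

def features(path, sr=16000, hop=512):
    """Log-mel spectrogram frames, shape (time, n_mels)."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop, n_mels=64)
    return librosa.power_to_db(mel).T.astype(np.float32)

class LaughCueNet(nn.Module):
    """Tiny GRU mapping mel frames to P(laughter should start at this frame)."""
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_mels)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)   # per-frame logits

model = LaughCueNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Onset frames are rare, so weight the positive class heavily (value is a guess).
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(50.0))
```

At inference time you would threshold the per-frame probabilities (plus some minimum-gap logic so a single punchline doesn't trigger several overlapping laughs).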
Does this need to work in real time, or does your model have access to the entire sequence, so it can use context from both before and after the current time point?
You also have to be careful about label leakage when you preprocess the training data: if you remove the laughter and leave a silent interval behind, the network can learn to detect the gap itself rather than the comedic content that preceded it.
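One way to avoid that artifact is to splice the laughter span out entirely and crossfade across the cut, so there is no telltale hole in the audio. A sketch, assuming you already know the laughter boundaries; the fade length is an arbitrary choice:

```python
import numpy as np

def splice_out(y, sr, start_s, end_s, fade_s=0.05):
    """Remove y[start:end] and crossfade the boundary instead of leaving silence."""
    start, end = int(start_s * sr), int(end_s * sr)
    fade = int(fade_s * sr)
    head, tail = y[:start], y[end:]
    n = min(fade, len(head), len(tail))
    ramp = np.linspace(1.0, 0.0, n)
    # Overlap-add: last n samples of head fade out while first n of tail fade in.
    blended = head[-n:] * ramp + tail[:n] * (1.0 - ramp)
    return np.concatenate([head[:-n], blended, tail[n:]])
```

Note that the cue-point label has to be recorded before splicing and remapped onto the edited timeline, since the cut shifts everything after it earlier.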
A text-based approach (transcribing the skit and predicting laugh points from the transcript) may work, but it may not give you precise timing.
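That is the usual weakness of going through a transcript: the text model tells you which line should get a laugh, not exactly when. One way to close the gap is to keep word-level timestamps from the ASR pass and map the predicted punchline word back to an audio time. A sketch assuming the openai-whisper package; `pick_punchline` is a hypothetical placeholder for whatever text model does the actual prediction:

```python
import whisper

model = whisper.load_model("base")
result = model.transcribe("skit.wav", word_timestamps=True)

# Flatten segment-level word lists into one timeline of word dicts.
words = [w for seg in result["segments"] for w in seg["words"]]

def pick_punchline(words):
    """Placeholder: a real system would run a text model over the transcript.
    For illustration, pick the last word before a sentence-final punctuation."""
    for i in reversed(range(len(words))):
        if words[i]["word"].rstrip().endswith((".", "!", "?")):
            return i
    return len(words) - 1

idx = pick_punchline(words)
cue_time = words[idx]["end"]   # trigger the laugh right after that word ends
```

Even then, word timestamps from ASR are only accurate to a few tens of milliseconds at best, so you may still want a small audio-side model to refine the final cue.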