Sports highlights are traditionally created by hand. While this remains the best method for major sporting events that attract large audiences, it is prohibitively expensive for minor sports. In addition, even when interesting segments of a sporting event are detected, they are currently clipped into one or a few fixed videos that are broadcast to all users.
This project aims to develop an adaptive video player that automatically speeds up playback until it reaches interesting segments, so that users can customize the viewing experience to their own tastes and time constraints. To do this, the “interest” of each video segment will be detected using audio cues from the commentary feeds. Because this detection is automatic, it reduces the cost of generating highlights and particularly improves the broadcasting of minor sports. As such, this project combines elements of human-computer interaction, signal processing and machine learning, and involves scientific experts from each of these disciplines.
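As a rough illustration of the idea, the sketch below uses the loudness of the commentary track as a stand-in for segment interest and maps that score to a playback speed. This is only a minimal, assumed baseline: the project does not specify which audio cues or models are used, and all function names, thresholds, and the energy-based heuristic here are illustrative.

```python
"""Illustrative sketch (not the project's actual method): estimate segment
"interest" from commentary loudness and map it to a playback speed."""

import numpy as np


def segment_interest(audio: np.ndarray, sample_rate: int,
                     segment_seconds: float = 5.0) -> np.ndarray:
    """One interest score per segment: the normalized RMS energy of the
    commentary audio within that segment (a crude proxy for excitement)."""
    seg_len = int(segment_seconds * sample_rate)
    n_segments = max(1, len(audio) // seg_len)
    trimmed = audio[: n_segments * seg_len].reshape(n_segments, seg_len)
    rms = np.sqrt(np.mean(trimmed ** 2, axis=1))
    # Normalize to [0, 1] so scores are comparable across matches.
    return (rms - rms.min()) / (rms.max() - rms.min() + 1e-9)


def playback_speed(interest: np.ndarray,
                   min_speed: float = 1.0,
                   max_speed: float = 4.0) -> np.ndarray:
    """Map interest to speed: interesting segments play at normal speed,
    uninteresting ones are sped up (linearly, as a placeholder policy)."""
    return max_speed - interest * (max_speed - min_speed)


if __name__ == "__main__":
    # Synthetic 60 s commentary track: mostly quiet, with a loud burst
    # (e.g. crowd and commentator reacting to a goal) around 40-45 s.
    sr = 16_000
    audio = 0.05 * np.random.randn(60 * sr)
    audio[40 * sr:45 * sr] += 0.5 * np.random.randn(5 * sr)

    interest = segment_interest(audio, sr)
    for i, (score, speed) in enumerate(zip(interest, playback_speed(interest))):
        print(f"segment {i:2d}: interest={score:.2f} -> speed x{speed:.1f}")
```

In a real player, the per-segment speeds would drive the video element's playback rate, and the user could tune the speed range (or a time budget) to trade viewing time against completeness.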
Project presentations: video available on YouTube.