Video showcases new results on image/video phylogeny

Image and video phylogeny is the task of reconstructing the “genealogical tree” of derivative works in a corpus (i.e., finding which videos/images derive from which). The video below showcases new results obtained by RECOD, in a collaboration between the University of Campinas (Brazil) and Politecnico di Milano (Italy), that use phylogeny to reconstruct sequences of videos from partially matching shots:

The work above comes from the article “Who is my parent? Reconstructing video sequences from partially matching shots”, which will be presented at the upcoming ICIP’2014 in Paris, next October 27–30. Here’s the abstract:

Nowadays, a significant fraction of the available video content is created by reusing already existing online videos. In these cases, the source video is seldom reused as is. Conversely, it is typically time clipped, so as to extract only a subset of the original frames (e.g., one or more video shots), and other transformations are commonly applied while reusing the content including, e.g., color correction, logo insertion, spatial cropping, etc. In this paper we analyze a pool of video sequences related to the same event or topic. We propose a method that aims at automatically reconstructing the content of the original source video sequences, i.e., the parent sequences, by splicing together sets of near-duplicate shots seemingly extracted from the same parent sequence. The result of the analysis shows how content is reused, thus revealing the intent of content creators, and enables us to reconstruct a parent sequence also when it is no longer available online. In doing so, we make use of a robust-hash algorithm that allows us to detect whether a group of frames is near-duplicate of another group of frames. Based on that, we developed an algorithm to automatically find near-duplicate matchings between multiple parts of multiple sequences. All the near-duplicate parts are finally temporally aligned to reconstruct the parent sequence. The proposed method is validated with both synthetically generated and real world datasets downloaded from YouTube.
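To give an intuition of the pipeline the abstract describes, here is a minimal, illustrative sketch (not the authors’ implementation): a toy “robust hash” reduces each frame to a bit signature, and a sliding comparison finds the temporal offset at which a reused shot best aligns inside a candidate parent sequence. The hash, the Hamming tolerance, and the frame representation below are all simplified assumptions for illustration.

```python
# Illustrative sketch of near-duplicate shot matching (NOT the paper's method).
# Frames are toy tuples of pixel intensities; a real system would hash actual
# video frames with a robust perceptual hash.

def robust_hash(frame, threshold=128):
    """Toy robust hash: one bit per pixel, set when intensity >= threshold."""
    return tuple(1 if p >= threshold else 0 for p in frame)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def match_shot(parent_hashes, shot_hashes, max_dist_per_frame=1):
    """Slide the shot over the parent timeline; return (offset, distance)
    for the best alignment within tolerance, or None if no match."""
    best = None
    for off in range(len(parent_hashes) - len(shot_hashes) + 1):
        d = sum(hamming(parent_hashes[off + i], h)
                for i, h in enumerate(shot_hashes))
        if d <= max_dist_per_frame * len(shot_hashes) and (best is None or d < best[1]):
            best = (off, d)
    return best

# Usage: a "child" shot clipped from frames 1-2 of the parent, with slight
# pixel noise, still aligns at offset 1 with zero hash distance.
parent_frames = [(10, 10, 200, 200), (200, 10, 10, 200),
                 (200, 200, 10, 10), (10, 200, 200, 10)]
shot_frames = [(210, 20, 5, 190), (190, 210, 15, 5)]

parent_hashes = [robust_hash(f) for f in parent_frames]
shot_hashes = [robust_hash(f) for f in shot_frames]
print(match_shot(parent_hashes, shot_hashes))  # → (1, 0)
```

Once every shot is assigned an offset like this, splicing the matched shots in offset order yields a reconstruction of the parent timeline, which is the spirit of the temporal-alignment step mentioned in the abstract.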

The fulltext is already available in our internal repository.

This work was supported by the European Union under the 7th Framework Program (FP7).


About eduardovalle

Professor at FEEC/UNICAMP, Brazil. Researcher on Machine Learning/Computer Vision, with emphasis on Health & Education applications.
