User-Centric Coordinates for Applications Leveraging 3-Axis Accelerometer Data

Mobile devices are becoming ubiquitous and, sometimes, even extensions of ourselves. These devices are growing fast in terms of delivered computational power, storage capacity, battery life, and built-in sensors. Time and again, we see headlines advertising novel applications that leverage this power, especially the sensors, to solve diverse problems, including fall detection, user activity recognition, location identification, and even user authentication based on the way a person walks (gait).

In this paper, the authors focus on motion sensors and discuss how the provided data can be interpreted and transformed to better serve different purposes. They propose a method to process the data from such sensors that reduces acquisition noise and possible artifacts, and makes the data invariant to the device’s position and the user’s movement direction. A new coordinate system, referred to as user-centric, is introduced, in contrast to the two coordinate systems most commonly used: the device and world coordinate systems. The results show the importance of properly pre-processing the acquired data to enable more reliable applications underpinned by mobile sensors.

The figure shows the acceleration data represented in the world coordinate system (left side) and in the user-centric coordinate system (right side). The samples are divided into three straight-line segments, represented by slots A, B, and C, separated by two left turns. In the world coordinate system, after the user’s first left turn, the acceleration data moves from axis x in slot A to axis y in slot B, as the blue box shows. The same phenomenon happens again after the user’s second left turn, from slot B to slot C. This indicates that the direction the user is following is somehow encoded in the acceleration data. Because the goal is to be as independent as possible from external factors, such as the user’s path, the coordinate system should not capture such variations. This is exactly what happens in the proposed user-centric coordinate system, in which the acceleration samples stay on the same axis even after the user’s left turns.
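To give a concrete flavor of what such a transformation involves, here is a minimal sketch (our illustration, not the authors’ algorithm): it estimates the user’s horizontal heading per window of world-frame samples with a simple PCA and rotates the data so the walking direction always lands on the same axis. The function name, the windowed PCA heading estimate, and the window size are assumptions.

```python
import numpy as np

def to_user_centric(acc_world, window=256):
    """Rotate world-frame acceleration (N x 3; columns x, y, z, with z vertical)
    so that the estimated walking direction always maps to the first axis.
    Illustrative only: the heading is taken as the dominant horizontal
    direction within each window (sign ambiguity ignored)."""
    out = acc_world.copy()
    for start in range(0, len(acc_world), window):
        horiz = acc_world[start:start + window, :2]
        horiz = horiz - horiz.mean(axis=0)          # remove horizontal bias
        _, _, vt = np.linalg.svd(horiz, full_matrices=False)
        fwd = vt[0]                                 # dominant horizontal direction
        side = np.array([-fwd[1], fwd[0]])          # perpendicular (lateral) axis
        out[start:start + window, :2] = horiz @ np.stack([fwd, side]).T
    return out                                      # vertical axis left unchanged
```

Within each window, the two horizontal components become “forward” and “lateral” regardless of which way the user is heading in the world frame, which is what keeps slots A, B, and C on the same axis in the figure.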


Ferreira, A., Santos, G., Rocha, A., & Goldenstein, S. (2017). User-Centric Coordinates for Applications Leveraging 3-Axis Accelerometer Data. IEEE Sensors Journal, 17(16), 5231-5243.


RECOD in the news

With the advances in artificial intelligence (AI), especially machine learning, many areas can take advantage of these emerging technologies. The learning experience, for instance, could become much more personalized with algorithms able to identify a particular student’s difficulties with specific topics and then provide additional material or extra exercises.

In the following interviews, Prof. Anderson Rocha from RECOD highlights some important points about the relationship between AI and modern society.

1) Pais do milênio confiam na inteligência artificial para educação dos filhos (“Millennial parents trust artificial intelligence for their children’s education”) (O Globo, 20 Jul. 2017, in Portuguese)

2) Pais jovens confiam na inteligência artificial para educação dos filhos (“Young parents trust artificial intelligence for their children’s education”) (Pequenas Empresas & Grandes Negócios, 20 Jul. 2017, in Portuguese)


RECOD at ICIP 2017

The 2017 IEEE International Conference on Image Processing (ICIP 2017), to be held in Beijing, China, in September, will feature three papers from RECOD. These papers are examples of the results obtained from the collaboration between the University of Campinas and the University of Notre Dame through research projects supported by DARPA (FA8750-16-2-0173), FAPESP (2015/19222-9), CAPES (DeepEyes), and CNPq (304472/2015-8).

The accepted papers are:

1. Spotting the Difference: Context Retrieval and Analysis for Improved Forgery Detection and Localization. Joel Brogan, Paolo Bestagini, Aparna Bharati, Allan Pinto, Daniel Moreira, Kevin Bowyer, Patrick Flynn, Anderson Rocha, Walter Scheirer.

2. U-Phylogeny: Undirected Provenance Graph Construction in the Wild. Aparna Bharati, Daniel Moreira, Allan Pinto, Joel Brogan, Kevin Bowyer, Patrick Flynn, Walter Scheirer, Anderson Rocha (oral presentation).

3. Provenance Filtering for Multimedia Phylogeny. Allan Pinto, Daniel Moreira, Aparna Bharati, Joel Brogan, Kevin Bowyer, Patrick Flynn, Walter Scheirer, Anderson Rocha (oral presentation).


RECOD paper wins the 2017 Best Paper Award at the Elsevier JVCI

We are happy to announce that the paper entitled “Going deeper into copy-move forgery detection: Exploring image telltales via multi-scale analysis and voting processes”, published in the Elsevier Journal of Visual Communication and Image Representation (JVCI), has received the 2017 Best Paper Award.

Silva, E., Carvalho, T., Ferreira, A., & Rocha, A. (2015). Going deeper into copy-move forgery detection: Exploring image telltales via multi-scale analysis and voting processes. Journal of Visual Communication and Image Representation, 29.

Congratulations!


RECODists have their Ph.D. theses awarded

Two Ph.D. theses from the RECOD lab have recently received awards. First, the thesis entitled “Multi-Analysis Techniques for Digital Image Forensics” by Anselmo Castelo Branco Ferreira, supervised by Prof. Anderson Rocha and co-supervised by Prof. Jefersson dos Santos, was chosen as the best Ph.D. thesis of 2016 at the Institute of Computing (IC/Unicamp).

Then, the thesis entitled “Sensitive-video analysis” by Daniel Henriques Moreira, supervised by Prof. Anderson Rocha and co-supervised by Prof. Siome Goldenstein, won best Ph.D. thesis at the XXXVII Brazilian Computer Society Congress (2017). The thesis will be published in the SpringerBriefs in Computer Science series.

Congratulations!


Talk: Sensitive-Video Analysis

This week, on July 6th, Daniel Moreira will talk about his doctoral research, entitled “Sensitive-Video Analysis”, developed at RECOD under Prof. Anderson Rocha’s supervision. The talk will take place in the Unicamp/IC-3 main auditorium at 2 pm. Daniel is currently a postdoctoral research scholar at the Computer Science and Engineering Department of the University of Notre Dame, US, under the supervision of Prof. Walter Scheirer, where he investigates topics in media forensics.

Sensitive videos that may be inappropriate for some audiences (e.g., pornography and violence, in the case of minors) are constantly shared over the Internet. Employing humans to filter them is daunting: the huge amount of data and the tediousness of the task call for computer-aided sensitive-video analysis. In this talk, we will discuss how to tackle this problem in two ways. In the first one (sensitive-video classification), we explore methods to decide whether or not a video contains sensitive material. In the second one (sensitive-content localization), we explore ways to find the moments at which a video starts and ceases to display sensitive content. For both cases, we will explain in detail how we have designed and developed effective and efficient methods, with a low memory footprint, short runtime, and suitability for deployment on mobile devices. We start with a novel Bag-of-Visual-Words-based pipeline for efficient motion-aware sensitive-video classification. Then, we move to a novel high-level multimodal fusion pipeline for sensitive-content localization. Finally, we introduce a novel spatiotemporal video interest-point detector and video content descriptor, which we call Temporal Robust Features (TRoF).
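As a rough illustration of the classification stage only (our sketch, not the thesis code; the motion-aware local-descriptor extraction that feeds it is assumed and not shown), a Bag-of-Visual-Words video classifier can be put together with scikit-learn as follows:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bovw_histograms(per_video_descriptors, codebook):
    """Quantize each video's local descriptors against a visual codebook and
    return one L1-normalized 'bag of visual words' histogram per video."""
    hists = []
    for desc in per_video_descriptors:            # desc: (n_i, d) array
        words = codebook.predict(desc)            # nearest visual word per descriptor
        h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.vstack(hists)

def train_bovw_classifier(train_descriptors, labels, k=256):
    """Build the codebook on the pooled training descriptors and fit a linear
    SVM (sensitive vs. non-sensitive) on the resulting histograms."""
    codebook = KMeans(n_clusters=k, n_init=4).fit(np.vstack(train_descriptors))
    classifier = LinearSVC().fit(bovw_histograms(train_descriptors, codebook), labels)
    return codebook, classifier
```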

The slides of this talk are available here.



Patent: “Two-tiered method and apparatus for fast and accurate face verification on mobile devices”

Yesterday the BioLive team celebrated the approval of their patent request by Motorola. The patent, entitled “Two-tiered method and apparatus for fast and accurate face verification on mobile devices”, is an important result of a collaborative research project between RECOD and Motorola. It discloses a new face verification method, which uses state-of-the-art techniques and can run efficiently on mobile devices.

The BioLive team is formed by Anderson Rocha, Jacques Wainer, Ricardo Torres, Fernanda Andaló, Rafael Padilha, Waldir Rodrigues, William Dias, Gabriel Bertocco, and Thiago Resek.


Workshop on the Ethics of Data at IC/Unicamp

Aiming to present and discuss new directions in the ethics of data, the Institute of Computing (Unicamp) hosted the first edition of its workshop on the ethics of data.

Organized by CEPID CCES (Prof. Munir Skaf and Prof. Cláudia Bauzer Medeiros), the workshop sought to shed light on the topic from different perspectives, which is why it featured presentations from the fields of computer science, social science, and biology.

Prof. Eduardo Valle (RECOD) gave a talk about the relationship between society and artificial intelligence. All presentations (in Portuguese) are available on the IC/Unicamp YouTube channel.


History matching through dynamic decision-making

High-level workflow of the proposed optimization framework, including the two dynamic decision-making optimization components.

History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy.

In this paper, the authors present a dynamic decision-making optimization framework for history matching. The term ‘dynamic decision-making’ reflects the fact that, during the framework execution, the decision to generate a particular new solution is always guided and supported by the results of a continuous and dynamic analysis of the data from available solutions.

The proposed framework differs from previous approaches reported in the literature in the following aspects: it is not a stochastic method, since there is no randomness in its execution, nor does it require a large number of simulations to converge; it does not use a proxy model to substitute for the flow simulator, so the results obtained with the framework are accurate at any moment of the execution; and it is neither a geostatistical process nor primarily concerned with reducing the uncertainty of the reservoir attributes. Rather, it is an optimization framework that follows a learning approach in which the strategy is to dynamically analyze a set of observations (available solutions) to uncover input patterns (values of the uncertain reservoir attributes) that lead to desired responses (a good history match for one or more wells) in the available solutions.
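To make the flavor of such a loop concrete, here is a deliberately simplified sketch (ours, not the paper’s algorithm): each new candidate is assembled deterministically from the solutions simulated so far, with a naive per-well recombination standing in for the paper’s pattern analysis, and numeric attribute values are assumed.

```python
def dynamic_history_matching(initial_pool, run_simulator, n_iter=20):
    """Illustrative loop only: every new candidate is derived deterministically
    from an analysis of the solutions evaluated so far; no random perturbations
    and no proxy model.

    initial_pool : list of (attrs: dict, well_misfits: dict) pairs
    run_simulator: attrs -> well_misfits, i.e., the actual flow simulator
    """
    pool = list(initial_pool)
    for _ in range(n_iter):
        wells = pool[0][1].keys()
        attrs = pool[0][0].keys()
        # For each well, pick the evaluated solution that matches it best.
        best_per_well = {w: min(pool, key=lambda s: s[1][w])[0] for w in wells}
        # Recombine: average, over wells, the attribute values that produced
        # the best match for each well (a simple stand-in for the paper's
        # dynamic analysis of input patterns).
        candidate = {a: sum(b[a] for b in best_per_well.values()) / len(wells)
                     for a in attrs}
        pool.append((candidate, run_simulator(candidate)))
    # Return the solution with the lowest total misfit across all wells.
    return min(pool, key=lambda s: sum(s[1].values()))
```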


Cavalcante, C. C. B., Maschio, C., Santos, A. A., Schiozer, D., & Rocha, A. (2017). History matching through dynamic decision-making. PLOS ONE, 12(6), e0178507. https://doi.org/10.1371/journal.pone.0178507


Data-Driven Feature Characterization Techniques for Laser Printer Attribution

Laser printer attribution is an increasingly important problem with several applications, such as identifying the source printer of criminal evidence and authenticating printed documents. However, most of the existing methods are limited by modeling assumptions about printing artifacts.

In this paper, the authors explore solutions able to learn discriminative printing patterns directly from the data available during an investigation, without any further feature engineering, and propose the first deep-learning-based approach to laser printer attribution. Experimental results show that the proposed method is robust to noisy data and outperforms existing counterparts in the literature for this problem.

In summary, the main contributions of this paper are:
1) The design and development of an ad-hoc CNN for laser printer attribution based on the analysis of small patches representing text characters;
2) The use of CNNs on multiple representations of the same character to learn complementary features that are fused together for increased recognition accuracy;
3) The use of a late-fusion paradigm to merge results coming from the analysis of different characters within the same document. In this way, each character is classified separately, and the individual results contribute to the final document label, as the sketch after this list illustrates.
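The late-fusion step in the last contribution can be pictured with a short sketch (ours, not the paper’s code); the simple probability-averaging rule is an assumption standing in for the paper’s actual combination strategy.

```python
import numpy as np

def document_label(char_probabilities):
    """Late fusion over the characters of one document: average the
    per-character printer-probability vectors produced upstream (e.g., by a
    CNN) and take the arg-max as the document-level label."""
    probs = np.vstack(char_probabilities)       # shape: (n_characters, n_printers)
    return int(np.argmax(probs.mean(axis=0)))   # index of the attributed printer
```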


Ferreira, A., et al. (2017). Data-Driven Feature Characterization Techniques for Laser Printer Attribution. IEEE Transactions on Information Forensics and Security. https://doi.org/10.1109/TIFS.2017.2692722
