3 results for Enunciation scene

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

20.00%

Publisher:

Abstract:

The aim of this research is to estimate the impact of violent film excerpts on university students (30 f, 30 m) presented in two different sequences, a "justified" violent (JV) scene followed by an "unjustified" (UV) one, or vice versa. Measurements were taken as follows: 1) before and after each sequence, using the Aggressive Behaviour I-R Questionnaire, the Self Depression Scale and the ASQ-IPAT Anxiety Scale; 2) after every excerpt, using a self-report to evaluate the intensity and hedonic tone of emotions and the level of justification of the violence. Emotion regulation processes (suppression, reappraisal, self-efficacy) were also considered. Compared with the "unjustified" violent scene, the "justified" one elicited a higher justification level and lower intensity and unpleasantness of negative emotions. Anxiety (total and latent) and rumination diminished after both types of sequence. Rumination decreased less after the JV-UV sequence than after the UV-JV sequence. Self-efficacy in controlling negative emotions reduced rumination, whereas suppression reduced irritability. Reappraisal, self-efficacy in expressing positive emotions and perceived empathic self-efficacy had no effect.

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that depth cameras will remain, in the near future, a cheap and low-power 3D sensing alternative that is also suitable for mobile devices. Therefore, our contributions build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large-scale surface reconstruction and semantic modeling. First, we will describe an effective approach to Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state of the art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large-scale surface modeling from depth maps exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets as well as on data recorded with our RGB-D camera, even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework to effectively combine object detection and SLAM in a unified system. Although the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will again refer only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of jointly performing object detection, camera tracking and environment mapping.
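
The camera tracking described in the second contribution builds on KinectFusion, whose frame-to-model alignment is typically posed as a point-to-plane ICP between the incoming depth frame and a prediction of the reconstructed surface. The Python/NumPy sketch below illustrates only that linearized alignment step, assuming correspondences between frame and model points are already established; the function and variable names (icp_step, src_pts, dst_pts, dst_nrm) are illustrative and not taken from the thesis.

```python
# Minimal sketch of one point-to-plane ICP update, the kind of linearized
# rigid alignment used by KinectFusion-style camera tracking.
# Assumption: correspondences are already established; names are illustrative.
import numpy as np

def icp_step(src_pts, dst_pts, dst_nrm):
    """One Gauss-Newton step of point-to-plane ICP.

    src_pts : (N, 3) points from the current depth frame (already matched)
    dst_pts : (N, 3) corresponding points predicted from the model surface
    dst_nrm : (N, 3) unit normals at the model points
    Returns a 4x4 rigid transform that moves src_pts towards the model.
    """
    # Linearized residual: ((p + r x p + t) - q) . n  for a small rotation r.
    A = np.hstack([np.cross(src_pts, dst_nrm), dst_nrm])   # (N, 6) Jacobian rows
    b = -np.sum((src_pts - dst_pts) * dst_nrm, axis=1)     # (N,)  negative residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)              # [rx ry rz tx ty tz]

    # Recompose the first-order rotation and the translation into an SE(3) matrix.
    rx, ry, rz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -rz,  ry],
                          [rz,  1.0, -rx],
                          [-ry,  rx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```

In a real-time system this update is computed on the GPU, iterated a few times per frame, and correspondences come from projective data association rather than being given in advance.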