3 results for motion sensors

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

60.00%

Publisher:

Abstract:

The circadian behavior associated with the 24 h light-dark (LD) cycle (T24) is driven by a circadian clock, which in mammals is located in the hypothalamic suprachiasmatic nucleus (SCN). Under experimental conditions in which rats are exposed to a symmetric 22 h LD cycle (T22), the two SCN regions, ventrolateral (vl) and dorsomedial (dm), can be functionally isolated, suggesting that each region regulates distinct physiological and behavioral components. The vl region regulates the locomotor activity and slow wave sleep (SWS) rhythms, while the dm region regulates the body temperature and paradoxical sleep (PS) rhythms. This research aimed to deepen knowledge of the functional properties of circadian rhythmicity, specifically the internal desynchronization process and its consequences for the locomotor activity and body temperature rhythms, as well as for the sleep-wake cycle pattern, in rats. We used infrared motion sensors, implanted body temperature sensors, and a telemetry system to record the electrocorticogram (ECoG) and electromyogram (EMG) in two groups of rats: a control group under a 24 h LD cycle (T24: 12hL-12hD) for the baseline recording, and an experimental group under a 22 h LD cycle (T22: 11hL-11hD), under which the circadian locomotor activity rhythm is known to uncouple, with the animals showing two distinct locomotor activity rhythms: one synchronized to the external LD cycle, and another expressed in free-running course with a period greater than 24 h. Under the 22 h cycle, characteristic locomotor activity moments appear: coincidence moments (T22C) and non-coincidence moments (T22NC), which were the main focus of our study. Our results show an increase in locomotor activity, especially in coincidence moments, and the inversion of the locomotor activity, body temperature, and sleep-wake cycle patterns in non-coincidence moments.
We also observed an increase in SWS and a decrease in PS, in both coincidence and non-coincidence moments. The increase in locomotor activity, probably a way of promoting coupling between the circadian oscillators, may generate increased homeostatic sleep pressure and thus increase SWS, promoting the decrease in PS.
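For context, the entrained and free-running rhythm components described above are commonly separated by folding the motion-sensor activity counts at candidate periods, in the spirit of the chi-square periodogram used in chronobiology. The sketch below is purely illustrative (the function names and the synthetic two-rhythm signal are assumptions, not the study's actual analysis pipeline):

```python
import numpy as np

def folded_power(activity, period):
    """Fold an hourly activity series at `period` hours and return the
    fraction of total variance explained by the folded mean waveform."""
    t = np.arange(len(activity))
    bins = t % period
    means = np.array([activity[bins == b].mean() for b in range(period)])
    return means.var() / activity.var()

def periodogram(activity, periods):
    """Power at each candidate period (hours); rhythmic components show
    up as peaks, so two simultaneous rhythms give two peaks."""
    return {p: folded_power(activity, p) for p in periods}

# Synthetic example: a 22 h entrained rhythm plus a ~25 h free-running
# rhythm superimposed, sampled hourly over 2200 h.
t = np.arange(2200)
activity = 2 + np.sin(2 * np.pi * t / 22) + np.sin(2 * np.pi * t / 25)
power = periodogram(activity, range(20, 29))
```

With this synthetic input, the power peaks at 22 h and 25 h, mirroring how the two dissociated locomotor rhythms would appear in a periodogram of the recorded data.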

Relevance:

20.00%

Publisher:

Abstract:

Visual Odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the continuing advance of Computer Vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for composing localization systems for robotics and Augmented Reality applications, in contrast with its original purpose of inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models for camera state transition unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
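The core SFM relation behind such a calibrated-camera pipeline is the epipolar constraint x₂ᵀ E x₁ = 0 between normalized image coordinates of the same landmark in two views, where E is the essential matrix encoding the relative pose. A minimal sketch of the classic linear eight-point estimator for E, shown here as a standard illustration of the technique rather than the specific pipeline of this work:

```python
import numpy as np

def eight_point(x1, x2):
    """Linear eight-point algorithm: estimate the essential matrix E
    (up to scale) from >= 8 correspondences given in normalized,
    homogeneous image coordinates, so that x2^T E x1 = 0 holds."""
    # Each correspondence gives one linear equation in the 9 entries of E;
    # kron(p2, p1) matches the row-major flattening of E.
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)              # null vector of A
    # Project onto the essential-matrix manifold: two equal singular
    # values and one zero.
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def epipolar_residuals(E, x1, x2):
    """|x2^T E x1| for each correspondence; near zero for exact data."""
    return np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
```

In practice the estimated E is then decomposed into a rotation and a translation direction (scale is unobservable from two views alone), which is the relative-pose output the abstract refers to.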

Relevance:

20.00%

Publisher:

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and simultaneously localize itself within it, using only information captured by the robot's sensors and known control signals. Recently, driven by advances in computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. It has several approaches, and the vast majority work by extracting features from the environment, computing the necessary correspondences, and using these to estimate the required parameters. This work presents a monocular Visual SLAM system that uses direct image registration to compute the image reprojection error, together with optimization methods that minimize this error, thereby obtaining the robot pose parameters and the map of the environment directly from the pixels of the images. The feature extraction and matching steps are therefore not needed, enabling our system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem as proposed in this work, we avoid a problem very common in traditional approaches, known as error propagation. Because of the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work show the success of this system in different environments.
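Direct methods of the kind described skip feature extraction and matching entirely and instead optimize an intensity (photometric) error over raw pixels. A minimal 1-D sketch of that idea, aligning two signals by Gauss-Newton minimization of the photometric residual (the function name, translation-only warp, and synthetic signal are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def align_photometric(I_ref, I_cur, n_iters=50):
    """Estimate a 1-D shift d such that I_cur sampled at x + d matches
    I_ref, by Gauss-Newton minimization of the photometric error
    sum_x (I_ref(x) - I_cur(x + d))^2 -- no features, only intensities."""
    x = np.arange(len(I_ref), dtype=float)
    d = 0.0
    for _ in range(n_iters):
        warped = np.interp(x + d, x, I_cur)   # current image warped by d
        J = np.gradient(warped)               # Jacobian d(warped)/dd
        r = I_ref - warped                    # photometric residual
        d += (J @ r) / (J @ J)                # Gauss-Newton update
    return d
```

In a real direct Visual SLAM system the unknown is a full camera pose (and the map) rather than a single scalar shift, but the structure is the same: warp, compute the pixel-wise residual, and take Gauss-Newton steps until the photometric error is minimized.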