985 results for Projector-Camera system

Relevance: 90.00%

Mono-camera tracking systems have proved their capabilities for moving object trajectory analysis and scene monitoring, but their robustness and semantic possibilities are strongly limited by their local and monocular nature and are often insufficient for realistic surveillance applications. This thesis is aimed at extending the possibilities of moving object tracking systems to a higher level of scene understanding. The proposed extension comprises two separate directions.
The first one is local, since it is aimed at enriching the inferred positions of the moving objects within the area of the monitored scene directly covered by the cameras of the system; this task is achieved through the development of a multi-camera system for robust 3D tracking, able to provide 3D tracking information on multiple simultaneous moving objects from the observations reported by a set of calibrated cameras with semi-overlapping fields of view. The second extension is global, as it is aimed at providing local observations performed within the field of view of one camera with a global context relating them to a much larger scene; to this end, an automatic camera positioning system relying only on observed object trajectories and a scene map is designed. The two lines of research in this thesis are addressed using Bayesian estimation as a general unifying framework. Its suitability for these two applications is justified by the flexibility and versatility of that stochastic framework, which allows the combination of multiple sources of information about the parameters to be estimated in a natural and elegant way, while addressing the uncertainty associated with those sources through the inclusion of models designed to this end. In addition, it opens multiple possibilities for the creation of different numerical methods for achieving satisfactory and efficient practical solutions to each addressed application. The proposed multi-camera 3D tracking method is specifically designed to work on schematic descriptions of the observations performed by each camera of the system: this choice allows the use of generic off-the-shelf 2D detection and/or tracking subsystems running independently at each sensor, and makes the proposal suitable for real surveillance networks with moderate computational and transmission capabilities.
The robust combination of such noisy, incomplete and possibly unreliable schematic descriptors relies on a Bayesian association method, based on geometry and color, whose results allow the tracking of the targets in the scene with a particle filter. The main features exhibited by the proposal are, first, a remarkable accuracy in terms of target 3D positioning, and second, a great recovery ability after tracking losses due to insufficient input data. The proposed system for vision-based camera self-positioning uses the observations of moving objects and a schematic map of the passable areas of the environment to infer the absolute sensor position. To this end, a new Bayesian framework combining trajectory observations and map-induced dynamic models for moving objects is designed, an approach to camera positioning never addressed before in the literature. This task is divided into two sub-tasks, namely ambiguity analysis and approximate position estimation, on the one hand, and position refinement, on the other, since each requires the design of specific sampling algorithms to correctly exploit the discriminative features of the developed framework. This system, designed for camera positioning and demonstrated in urban traffic environments, can also be applied to different environments and to sensors of other modalities after certain required adaptations.
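The abstract describes tracking scene targets with a particle filter fed by fused multi-camera observations. As a rough, self-contained sketch of that idea only — not the thesis's actual algorithm; the constant-position motion model, Gaussian likelihood, noise levels and synthetic 3D observations below are all invented for illustration — a bootstrap particle filter for a single 3D target could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_3d(observations, n_particles=500, motion_std=0.2, obs_std=0.5):
    """Bootstrap particle filter for one 3D target (toy constant-position model)."""
    particles = rng.normal(observations[0], obs_std, size=(n_particles, 3))
    estimates = []
    for z in observations:
        # Predict: diffuse particles with the (assumed) motion model.
        particles += rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight particles by a Gaussian observation likelihood.
        sq_dist = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * sq_dist / obs_std**2)
        weights /= weights.sum()
        estimates.append(weights @ particles)  # posterior mean
        # Resample (multinomial) to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Noisy fused observations of a target moving along a line in 3D.
truth = np.linspace([0.0, 0.0, 0.0], [5.0, 5.0, 1.0], 50)
obs = truth + rng.normal(0, 0.3, truth.shape)
est = particle_filter_3d(obs)
print(np.mean(np.linalg.norm(est - truth, axis=1)))  # mean 3D error
```

The resampling step is what gives the filter its recovery ability after weight collapse, loosely mirroring the recovery property highlighted in the abstract.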

Relevance: 90.00%

The dynamic characteristics of reflex eye movements were measured in two strains of chronically prepared mice using an infrared television camera system. The horizontal vestibulo-ocular reflex (HVOR) and horizontal optokinetic response (HOKR) were induced by sinusoidal oscillations of a turntable, in darkness, by 10° (peak to peak) at 0.11–0.50 Hz, and of a checked-pattern screen, in light, by 5–20° at 0.11–0.17 Hz, respectively. The gains and phases of the HVOR and HOKR of the C57BL/6 mice were nearly equivalent to those of rabbits and rats, whereas the 129/Sv mice exhibited very low gains in the HVOR and moderate phase lags in the HOKR, suggesting an inherent sensory-motor anomaly. Adaptability of the HOKR was examined in C57BL/6 mice by sustained screen oscillation. When the screen was oscillated by 10° at 0.17 Hz, which induced sufficient retinal slips, the gain of the HOKR increased by 0.08 in 1 h on average, whereas stimuli that induced relatively small or no retinal slips affected the gain very little. Lesions of the flocculi induced by local applications of 0.1% ibotenic acid, and lesions of the inferior olivary nuclei induced by i.p. injection of 3-acetylpyridine, in C57BL/6 mice hardly affected the dynamic characteristics of the HVOR and HOKR, but abolished the adaptation of the HOKR. These results indicate that the olivo-floccular system plays an essential role in the adaptive control of the ocular reflex in mice, as suggested in other animal species. The data presented provide the basis for analyzing the reflex eye movements of genetically engineered mice.
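Gain and phase of the HVOR/HOKR under sinusoidal stimulation are conventionally obtained by fitting sinusoids at the stimulus frequency to the stimulus and response traces. A minimal sketch of such an analysis on synthetic traces (the paper's actual processing pipeline is not specified; the 0.2 Hz frequency, 0.8 gain and 5° lag below are invented values):

```python
import numpy as np

def gain_phase(t, stimulus, response, freq_hz):
    """Least-squares sine fit at the stimulus frequency; returns (gain, phase_deg)."""
    w = 2 * np.pi * freq_hz
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a_s, b_s), _, _, _ = np.linalg.lstsq(X, stimulus, rcond=None)
    (a_r, b_r), _, _, _ = np.linalg.lstsq(X, response, rcond=None)
    amp_s, amp_r = np.hypot(a_s, b_s), np.hypot(a_r, b_r)
    phase = np.degrees(np.arctan2(b_r, a_r) - np.arctan2(b_s, a_s))
    return amp_r / amp_s, (phase + 180) % 360 - 180  # wrap to (-180, 180]

# 10° peak-to-peak turntable oscillation; eye responds with gain 0.8, 5° lag.
t = np.linspace(0, 10, 1000)
table = 5 * np.sin(2 * np.pi * 0.2 * t)
eye = 0.8 * 5 * np.sin(2 * np.pi * 0.2 * t - np.radians(5))
g, p = gain_phase(t, table, eye, 0.2)
print(round(g, 2), round(p, 1))  # gain ≈ 0.80, phase ≈ -5.0 (lag)
```

A negative phase here denotes a lag of the eye behind the stimulus, matching the "phase lags" reported for the 129/Sv strain.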

Relevance: 90.00%

Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer’s preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of the devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is important in this regard. The ‘Dead Leaves’ target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r⁻³, and having uniformly distributed gray level, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the ‘Dead Leaves’ chart. These include variations in illumination, distance, exposure time and ISO sensitivity among others. We discuss the main differences of this method with the existing resolution measurement techniques and identify the advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High frequency residual noise in the processed image contains the same frequency content as fine texture detail, and is sometimes reported as such, thereby leading to inaccurate results.
A wavelet thresholding based denoising technique is utilized for modeling the noise present in the final captured image. This updated noise model is then used for calculating an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
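The Dead Leaves target described above — overlapping occluding disks with radii distributed as r⁻³ and uniformly distributed gray levels — is straightforward to synthesize by inverse-CDF sampling of the radius distribution. A small sketch (chart size, disk count and radius bounds are arbitrary choices, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def dead_leaves(size=256, n_disks=2000, r_min=2.0, r_max=64.0):
    """Render a dead-leaves chart: occluding disks with radius pdf ~ r^-3
    and uniformly distributed gray levels."""
    # Inverse-CDF sampling from p(r) proportional to r^-3 on [r_min, r_max]:
    # F(r) = (r_min^-2 - r^-2) / (r_min^-2 - r_max^-2), inverted below.
    u = rng.random(n_disks)
    radii = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    for r in radii:  # later disks occlude earlier ones, as in the model
        cx, cy = rng.uniform(0, size, 2)
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = rng.random()
    return img

chart = dead_leaves()
print(chart.shape, float(chart.min()), float(chart.max()))
```

Imaging such a chart and comparing output to input power spectra is the basis of the texture MTF measurement the abstract refines.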

Relevance: 80.00%

Objectives: To assess the neuromotor changes, after an intervention based on the Bobath concept, in the postural adjustments during functional upper-limb reaching in three children with cerebral palsy; also to assess the effect of this approach on activities and participation, and to highlight each child's individual capacity for change after the intervention. Methods: Assessment was performed before and three months after the physiotherapy intervention according to the Bobath concept. Observational records were made with a digital camera, a video camera system and a force platform, together with instruments such as the Gross Motor Function Measure (88-item version), the Gross Motor Function Classification System, the Modified Functional Reach Test and the International Classification of Functioning, Disability and Health – children and youth version. Results: Progress was observed in the postural adjustments and in overall function, which was reflected in participation restriction and activity limitation. Sitting posture, centre-of-pressure displacement, forward-reach capacity and gross motor abilities changed in all children, with child B showing the greatest and child A the smallest capacity for change after the intervention. Conclusion: The intervention according to the Bobath concept promoted neuromotor changes that led to improvements in the child's overall function, mobility and postural control, reflected in the postural adjustments during functional upper-limb reaching in the sitting position. Improvements in participation restriction and daily activity limitation were also observed.

Relevance: 80.00%

This master's thesis presents a multi-purpose camera system primarily intended to be integrated behind an image intensifier. The aim is to improve a soldier's night-operation capability and performance in urban environments. One module of the system is an eyepiece display based on a self-emissive organic LED display element and conventional ocular optics. The thesis reviews the display element technology and its possible limitations, and uses optical measurements to determine whether the conventional lens eyepiece could be replaced with a plastic prism eyepiece. This would enable the development of a smaller and lighter eyepiece display, improving the usability and competitiveness of the camera system.

Relevance: 80.00%

The objectives of this master’s thesis were to understand the importance of bubbling fluidized bed (BFB) conditions and to find out how digital image processing and acoustic emission technology can help in monitoring the bed quality. An acoustic emission (AE) measurement system and a bottom ash camera system were evaluated as means of acquiring information about the bed conditions. The theory part of the study describes the fundamentals of a BFB boiler and evaluates the characteristics of the bubbling bed. Causes and effects of bed material coarsening are explained, and the ways and methods to monitor the behaviour of a BFB are determined. The study introduces the operating principles of AE technology and digital image processing. The empirical part of the study describes the experimental arrangement and results of a case study at an industrial BFB boiler. Sand consumption of the boiler was reduced by optimizing bottom ash handling and sand feeding. Furthermore, data from the AE measurement system and the bottom ash camera system were collected, and the feasibility of these two systems was evaluated. The particle size of bottom ash and the changes in particle size distribution were monitored during the test period. Neither of the evaluated systems was yet able to serve in bed quality control with sufficient accuracy or speed. Particle size distributions according to the bottom ash camera did not correspond to the results of manual sieving, and comprehensive interpretation of the collected AE data requires considerable experience. Both technologies do have potential, however, and with further research and development they may enable reliable, real-time information about the bed conditions to be acquired. This information could help to maintain a disturbance-free combustion process and to optimize the bottom ash handling system.

Relevance: 80.00%

Industrialized countries such as Canada face an ageing population. In particular, most elderly people live at home, often alone, and are exposed to risk situations such as falls. In this context, video surveillance is an innovative solution that can allow them to live normally in a secure environment. The idea is to place a network of cameras in the person's apartment to detect a fall automatically. When a problem occurs, a message can be sent, depending on the urgency, to the emergency services or to the family through a secure Internet connection. To keep the system low-cost, we limited the number of cameras to one per room, which led us to explore monocular fall-detection methods. We first addressed the problem from a 2D (image) point of view, focusing on the large changes in the person's silhouette during a fall. Normal activity data of an elderly person were modeled by a Gaussian mixture, allowing the detection of any abnormal event. Our method was validated on a video library of simulated falls and realistic normal activities. However, 3D information, such as the person's location relative to the environment, can be very useful for a behaviour-analysis system. Although a multi-camera system is preferable for obtaining 3D information, we showed that with a single calibrated camera it is possible to localize a person in the environment through the head. Concretely, the person's head, modeled as an ellipsoid, is tracked in the image sequence with a particle filter. The accuracy of the 3D head localization was evaluated on a library of video sequences containing ground-truth 3D locations obtained with a motion capture system.
An example application using the 3D head trajectory is proposed for fall detection. In conclusion, a video surveillance system for fall detection with a single camera per room is entirely feasible. To minimize the risk of false alarms, a hybrid method combining 2D and 3D information could be considered.

Relevance: 80.00%

This thesis addresses the active reconstruction of 3D models using a camera and a projector. Standard reconstruction methods use coded-light patterns, which have their strengths and weaknesses. We introduce new patterns based on unstructured light to overcome the shortcomings of existing methods. The work is organized around three axes: robustness, accuracy and, finally, the comparison of unstructured-light patterns with other methods. Unstructured-light patterns are distinguished first by their robustness to interreflections and depth discontinuities. They are designed to homogenize the amount of indirect illumination caused by projection onto difficult surfaces. In return, matching the projected and captured images is more complex than with so-called structured methods. An efficient probabilistic matching method is proposed to solve this problem. Another important aspect of unstructured-light reconstruction is the ability to recover sub-pixel correspondences, that is, at a finer level of accuracy than the pixel. We present a method for generating very long codes from unstructured-light patterns. These codes have the double advantage of allowing the extraction of more accurate correspondences while requiring fewer images. This contribution places our method among the best in terms of accuracy while guaranteeing very good robustness. Finally, the last part of this thesis examines the comparison of existing methods, in particular the relationship between the number of projected images and the quality of the reconstruction.
Although some methods require a constant number of images, others, such as ours, can use fewer at the cost of lower quality. We propose a simple method for establishing an optimal correspondence that can serve as a reference for comparison purposes. Finally, we present hybrid methods that give very good results with few images.
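The core of unstructured-light matching is comparing, per camera pixel, the temporal sequence of observed intensities against each projector pixel's sequence across the projected patterns. A one-dimensional toy sketch using normalized cross-correlation (not the thesis's probabilistic matcher; pattern counts, gain, offset and noise levels are simulated assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

n_patterns, n_proj = 20, 100                 # pattern count, projector columns (1-D toy)
patterns = rng.random((n_patterns, n_proj))  # unstructured (random) patterns

# Simulate a camera: each camera pixel sees one projector column,
# with unknown gain, offset and sensor noise.
true_match = rng.integers(0, n_proj, size=50)
camera = 0.7 * patterns[:, true_match] + 0.1 + rng.normal(0, 0.02, (n_patterns, 50))

def normalize(c):
    """Zero-mean, unit-norm temporal codes (columns)."""
    c = c - c.mean(axis=0)
    return c / np.linalg.norm(c, axis=0)

# Match by maximum normalized cross-correlation between temporal codes,
# which is invariant to per-pixel gain and offset.
scores = normalize(camera).T @ normalize(patterns)   # (50, n_proj)
matches = scores.argmax(axis=1)
print((matches == true_match).mean())  # fraction of correct correspondences
```

Longer codes (more patterns) make spurious correlations less likely, which is the intuition behind the very-long-code contribution summarized above.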

Relevance: 80.00%

The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system so that agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields. Farmers may also make special efforts to control them. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually-mapped weed patches.
The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
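The reported linear relation between machine-vision density estimates and actual density suggests reading the fitted slope as an estimate of the mean detection rate. A toy illustration with simulated per-image counts (the 47% rate and the 13–1230 heads m⁻² range come from the abstract; the counts themselves are synthetic, assuming each head is detected independently):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-image counts: actual black-grass head density (heads per m^2)
# and the machine-vision count, assuming each head is detected with p = 0.47.
actual = rng.uniform(13, 1230, 40)
detected = rng.binomial(actual.astype(int), 0.47)

# Slope of the through-origin least-squares fit estimates the detection rate.
slope = (actual @ detected) / (actual @ actual)
print(round(slope, 2))  # close to the assumed 0.47 detection rate
```

Under this simple model the estimated density is an unbiased linear function of the true density, consistent with the linear relationship the project observed.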

Relevance: 80.00%

Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project.

1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, and in particular Bayesian Networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable using our machine vision system, we will determine those that can be used to distinguish weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology.
Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security.
The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and also damage to crops by, for example, spring tines.

Objectives: (i) to develop a prototype machine vision system for automated image capture during agricultural field operations; (ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; (iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).

Relevance: 80.00%

Image acquisition systems based on a multi-head arrangement of digital frame cameras, such as the commercial systems DMC and UltraCam, among others, are attractive alternatives enabling a larger imaging area when compared to a single frame camera. Considering that in these systems the cameras are tightly attached to an external mount, it can be assumed that the relative position and orientation between cameras are stable during image acquisition and, consequently, this constraint can be included in the calibration step. The constraint is acceptable because estimates of the relative orientation (RO) parameters between cameras, derived from independently estimated exterior orientation parameters, present deviations significantly larger than the expected physical variations, due to error propagation. In order to solve this problem, this work presents an approach based on the simultaneous calibration of two or more cameras using constraints stating that the relative rotation matrix and the distance between the camera heads are stable. Experiments with images acquired by an arrangement of two Hasselblad H2D cameras were accomplished, with and without the mentioned constraints. The experiments showed that the calibration process with RO constraints yields better results than the approach based on single-camera calibration, provided that only images with good target distribution are included in the estimation.
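The stability constraint exploited here rests on the fact that, for a rigid mount, the relative rotation and base distance computed from any pair of exterior orientations are constant. A small numeric sketch of that invariance (the Euler-angle convention, mount offsets and exposure-station poses are arbitrary illustrative values, not the paper's data):

```python
import numpy as np

def rot(ax, ay, az):
    """Rotation matrix from three Euler angles (radians)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def relative_orientation(R1, C1, R2, C2):
    """RO of camera 2 w.r.t. camera 1: relative rotation and base distance."""
    return R1.T @ R2, np.linalg.norm(C2 - C1)

# Fixed mount: camera 2 offset from camera 1 by a rigid transform.
R_mount, t_mount = rot(0.01, 0.02, 0.5), np.array([1.0, 0.0, 0.1])

ro = []
for pose in [(0.0, 0.0, 0.0), (0.2, -0.1, 1.0)]:  # two exposure stations
    R1, C1 = rot(*pose), np.array(pose)           # arbitrary EO of camera 1
    R2, C2 = R1 @ R_mount, C1 + R1 @ t_mount      # camera 2 rigidly attached
    ro.append(relative_orientation(R1, C1, R2, C2))

(R_rel_a, d_a), (R_rel_b, d_b) = ro
print(np.allclose(R_rel_a, R_rel_b), np.isclose(d_a, d_b))
```

In the noise-free case the RO is exactly invariant across stations; in practice it is error propagation from the independently estimated EO parameters that makes the jointly constrained calibration preferable.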

Relevance: 80.00%

One of the main environmental cues for the adjustment of the temporal organization of animals is the light-dark (LD) cycle, whose phase durations change throughout the seasons. Photoperiod signalling by melatonin in mammals allows behavioral changes over the year, such as in the activity-rest cycle, in mood states and in cognitive performance. The aim of this study was to investigate whether the common marmoset (Callithrix jacchus) exhibits behavioral changes under short and long photoperiods within a 24 h cycle, assessing individual behaviors, vocal repertoire, exploratory activity (EA), recognition memory (RM) and the circadian rhythm of locomotor activity (CRA). Eight adult marmosets were exposed sequentially to light-dark cycles of LD 12:12, LD 08:16, LD 12:12 and LD 16:08, for four weeks in each condition. Locomotor activity was recorded 24 h/day by passive infrared motion detectors placed above the individual cages. A video camera system was programmed to record each animal twice a week during the first two light hours. From the videos, the frequency of behaviors (anxiety-like, grooming, alert, hanging position, staying in the nest box and feeding) was registered using the continuous focal-animal sampling method. Simultaneously, the calls emitted in the experimental room were recorded by a single, centrally located microphone and categorized as affiliative (whirr, chirp), contact (phee), long-distance (loud shrill), agonistic (twitter) and alarm (tsik, seep, see) calls. EA was assessed in the third hour after lights-on during the last week of each condition. In a first session, marmosets were exposed to one unfamiliar object for 15 min; 24 h later, in a second session, a novel object was added to evaluate RM. Results showed that long days, but not short days, decreased the amplitude and period variance of the CRA. Short days decreased total daily activity and active-phase duration.
On long days, active-phase duration increased owing to an advance of activity onset relative to symmetric days, although not all subjects started their activity earlier. Activity offset was similar to that on symmetric days for most marmosets. The EA results showed that RM was not affected by short or long days, and that the marmosets showed a decrease in EA duration on long days. The frequency and type of calls, and the frequencies of anxiety-like behaviors, staying in the nest box and grooming, were lower during the first two light hours on long days. Considering the whole active phase of the marmosets when interpreting the vocalization and behavior results, these changes in the first two light hours may reflect a shift in the temporal distribution of marmoset activities, since some animals did not advance their activity onset on long days; consequently, the group mean decreased because sampling was not possible for those animals. In conclusion, the marmosets synchronized their CRA to the tested photoperiods, and since the phase angle varied considerably among individuals, it is suggested that they can use different strategies. Long days also affected the activity-rest cycle and exploratory behaviors.
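The rhythm parameters discussed above (activity onset, activity offset, active-phase duration and total daily activity) can be extracted from binned actogram data. The sketch below assumes hourly PIR counts and a simple threshold rule for scoring the active phase; this is a hypothetical simplification for illustration, not the authors' scoring method:

```python
import numpy as np

def activity_summary(counts, threshold):
    """Onset hour, offset hour, total daily activity and active-phase
    duration (h) from 24 hourly activity counts, using a threshold rule."""
    counts = np.asarray(counts)
    active = counts >= threshold        # hours scored as "active"
    hours = np.where(active)[0]
    onset, offset = int(hours[0]), int(hours[-1])
    return onset, offset, int(counts.sum()), offset - onset + 1
```

Comparing `onset` across photoperiod conditions is what reveals the phase advance on long days, while `counts.sum()` and the duration term capture the reductions reported for short days.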

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

This study investigated kinematic patterns in clinically normal German Shepherd dogs (GSDs) compared with dogs with hip dysplasia but no clinical signs of lameness. Two groups of GSDs, 10 clinically healthy dogs (G1) and 10 with hip dysplasia (G2), were trotted on a treadmill at a constant speed. Kinematic data were collected by a three-camera system and analysed with a motion-analysis program. Flexion and extension joint angles and angular velocities were determined for the shoulder, elbow, carpal, hip, stifle and tarsal joints. Within each group, the differences between the right and left limbs were not significant for any kinematic variable. Minimum angle, angular displacement and minimum angular velocity did not differ between groups. Significant differences were observed in the maximum angular velocity and maximum angle of the hip joint (dysplastic > healthy), and in the maximum angular velocity of the carpal joint (healthy > dysplastic). It was concluded that, when trotting on a treadmill, dysplastic dogs with no signs of lameness may present joint kinematic alterations in the hind limbs as well as the forelimbs. © 2012 Elsevier Ltd.
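The joint angles and angular velocities mentioned above are typically computed from marker coordinates reconstructed by the camera system: the angle at a joint is the included angle between the two limb segments meeting there, and angular velocity follows by differentiating the angle time series at the capture frame rate. A minimal sketch, with illustrative names and no claim to match the commercial motion-analysis program used in the study:

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Included angle (degrees) at a joint marker between the proximal
    and distal limb segments, from 2-D or 3-D marker coordinates."""
    u = p_prox - p_joint
    v = p_dist - p_joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def angular_velocity(angles_deg, fps):
    """Angular velocity (deg/s) from an angle time series, by
    central finite differences at the camera frame rate."""
    return np.gradient(np.asarray(angles_deg)) * fps
```

Group comparisons such as "maximum angle of the hip joint" then reduce to taking the extrema of these per-stride series for each dog.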

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)