972 results for AVT Prosilica GC2450C camera system
Abstract:
High-cadence, synchronized, multiwavelength optical observations of a solar active region (NOAA 10794) are presented. The data were obtained with the Dunn Solar Telescope at the National Solar Observatory/Sacramento Peak using a newly developed camera system, the rapid dual imager. Wavelet analysis is undertaken to search for intensity-related oscillatory signatures, and periodicities ranging from 20 to 370 s are found with significance levels exceeding 95%. Observations in the H-α blue wing show more penumbral oscillatory phenomena than simultaneous G-band observations. The H-α oscillations are interpreted as the signatures of plasma motions with a mean velocity of 20 km s⁻¹. The strong oscillatory power over H-α blue-wing and G-band penumbral bright grains is an indication of the Evershed flow, with frequencies higher than previously reported.
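The wavelet step above lends itself to a compact illustration. The Python sketch below computes Morlet wavelet power over the 20–370 s period range and a Torrence & Compo style 95% significance level against a white-noise background; the cadence, the wavelet parameter w0 = 6, and the background model are assumptions for the demonstration, not details taken from the observations.

```python
import numpy as np
from scipy.stats import chi2

def morlet_power(series, dt, periods, w0=6.0):
    """Morlet wavelet power of a regularly sampled series (cadence dt)."""
    series = series - series.mean()
    # Fourier factor relating period to wavelet scale (Torrence & Compo 1998).
    fourier_factor = 4 * np.pi / (w0 + np.sqrt(2 + w0**2))
    power = np.empty((periods.size, series.size))
    for i, p in enumerate(periods):
        s = p / fourier_factor
        # Kernel support: ~8 scales, truncated so it never exceeds the series.
        half = min(4 * s, (series.size - 1) * dt / 2)
        x = np.arange(-half, half, dt) / s
        psi = np.pi**-0.25 * np.exp(1j * w0 * x - x**2 / 2) * np.sqrt(dt / s)
        power[i] = np.abs(np.convolve(series, psi, mode="same"))**2
    return power

# Synthetic test: a 90 s oscillation in noise, sampled at 0.5 s cadence.
dt = 0.5
t = np.arange(0, 1800, dt)
series = np.sin(2 * np.pi * t / 90) + np.random.default_rng(0).normal(size=t.size)
power = morlet_power(series, dt, periods=np.linspace(20, 370, 120))
# White-noise 95% level: series variance times the chi-squared (2 d.o.f.)
# quantile divided by 2.
sig95 = series.var() * chi2.ppf(0.95, df=2) / 2
print("fraction of power map above 95% level:", (power > sig95).mean())
```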
Abstract:
This document describes best-practice and evidence-based recommendations for the use of FDG-PET/CT for radiotherapy target volume delineation (TVD) in the curative-intent treatment of non-small cell lung cancer (NSCLC). These recommendations were written by an expert advisory group convened by the International Atomic Energy Agency (IAEA) to facilitate a Coordinated Research Project (CRP) aiming to improve the application of PET-based radiation treatment planning (RTP) in low- and middle-income countries. These guidelines can be applied in the routine clinical practice of radiotherapy TVD for NSCLC patients treated with concurrent chemoradiation or radiotherapy alone, where FDG is used and a calibrated PET camera system equipped for RTP patient positioning is available. Recommendations are provided for PET and CT image visualization and interpretation, and for tumor delineation using planning CT with and without breathing-motion compensation.
Abstract:
We present a large data set of high-cadence dMe flare light curves obtained with custom continuum filters on the triple-beam, high-speed camera system ULTRACAM. The measurements provide constraints for models of the near-ultraviolet (NUV) and optical continuum spectral evolution on timescales of ≈1 s. We provide a robust interpretation of the flare emission in the ULTRACAM filters using simultaneously obtained low-resolution spectra during two moderate-sized flares in the dM4.5e star YZ CMi. By avoiding the spectral complexity within the broadband Johnson filters, the ULTRACAM filters are shown to characterize bona fide continuum emission in the NUV, blue, and red wavelength regimes. The NUV/blue flux ratio in flares is equivalent to a Balmer jump ratio, and the blue/red flux ratio provides an estimate for the color temperature of the optical continuum emission. We present a new “color-color” relationship for these continuum flux ratios at the peaks of the flares. Using the RADYN and RH codes, we interpret the ULTRACAM filter emission using the dominant emission processes from a radiative-hydrodynamic flare model with a high nonthermal electron beam flux, which explains a hot, T ≈ 10⁴ K, color temperature at blue-to-red optical wavelengths and a small Balmer jump ratio, as observed in moderate-sized and large flares alike. We also discuss the high-time-resolution, high signal-to-noise continuum color variations observed in YZ CMi during a giant flare, which increased the NUV flux from this star by over a factor of 100. Based on observations obtained with the Apache Point Observatory 3.5 m telescope, which is owned and operated by the Astrophysical Research Consortium; on observations made with the William Herschel Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias; and on observations made with the ESO Telescopes at the La Silla Paranal Observatory under programme ID 085.D-0501(A).
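The blue/red colour-temperature estimate mentioned above can be illustrated with a short calculation: under a blackbody assumption, the ratio of Planck functions at the two filter wavelengths is monotonic in temperature, so the ratio can be inverted numerically. This is only a sketch of the idea; the wavelengths below are stand-in values, not the actual ULTRACAM bandpasses.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T) at wavelength lam (m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def color_temperature(blue_red_ratio, lam_blue=4170e-10, lam_red=6010e-10):
    """Solve B(lam_blue, T) / B(lam_red, T) = ratio for T.

    The filter wavelengths are stand-ins, not the real bandpasses; the
    ratio is monotonic in T, so a bracketing root-finder suffices.
    """
    f = lambda T: planck(lam_blue, T) / planck(lam_red, T) - blue_red_ratio
    return brentq(f, 2000.0, 1e6)

# A ratio near 2 corresponds to the hot, ~10^4 K continuum discussed.
print(round(color_temperature(2.0)), "K")
```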
Abstract:
Objectives: To assess the neuromotor changes in postural adjustments during functional upper-limb reaching after an intervention based on the Bobath concept in three children with cerebral palsy; to assess the effect of this approach on activities and participation; and to highlight each child's individual capacity for change after the intervention. Methods: The children were assessed before and three months after physiotherapy intervention according to the Bobath concept. Observational records were made with a digital still camera, a video camera system, and a force platform; the instruments used also included the Gross Motor Function Measure (88-item version), the Gross Motor Function Classification System, the Modified Functional Reach Test, and the International Classification of Functioning, Disability and Health for Children and Youth. Results: Postural adjustments and overall functionality improved, with a corresponding reduction in participation restriction and activity limitation. Sitting posture, displacement of the centre of pressure, anterior reaching ability, and gross motor abilities changed in all children, with child B showing the greatest and child A the smallest capacity for change after the intervention. Conclusion: Intervention according to the Bobath concept promoted neuromotor changes that improved the children's overall functionality, mobility, and postural control, reflected in the postural adjustments made during functional upper-limb reaching in the sitting position. Participation restriction and limitation of daily activity also improved.
Abstract:
This master's thesis addresses computer vision applied to technological-art projects. The topic is the calibration of camera and projector systems for tracking and 3D-reconstruction applications in the visual and performing arts. The thesis is built around two collaborations with the Quebec artists Daniel Danis and Nicolas Reeves. Projective geometry and classical calibration methods, such as planar calibration and calibration from epipolar geometry, are presented to introduce the techniques used in these two projects. The collaboration with Nicolas Reeves consisted of calibrating a camera-projector system on a robotic pan-tilt head to project video in real time onto mobile cubic screens. In addition to applying classical calibration methods, we propose a new technique for calibrating the pose of a camera mounted on a robotic head. This technique uses elliptical planes, generated by observing a single point in the world, to determine the pose of the camera with respect to the centre of rotation of the robotic head. The project with stage director Daniel Danis addresses calibration techniques for multi-camera systems. For his theatre project, we developed a calibration algorithm for a network of wiimote cameras. This technique, based on epipolar geometry, enables 3D reconstruction of a trajectory in a large volume at minimal cost. The results of the calibration techniques developed are presented, along with their use in real performance contexts in front of an audience.
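As a rough illustration of the epipolar-geometry calibration the thesis builds on, the sketch below recovers the relative pose of two cameras from matched image points and triangulates 3D points with OpenCV; the intrinsics and the synthetic scene are assumptions for the demonstration, not data from the projects described.

```python
import numpy as np
import cv2

K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])  # assumed intrinsics

def relative_pose_and_triangulate(pts1, pts2, K):
    """Pose of camera 2 w.r.t. camera 1, plus 3D points up to scale."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4xN
    return R, t, (X[:3] / X[3]).T

# Synthetic scene: 50 points seen by two cameras with a known relative pose.
rng = np.random.default_rng(0)
X_true = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))
t_true = np.array([[1.0], [0.0], [0.0]])

def project(X, R, t):
    x = (K @ (R @ X.T + t)).T
    return x[:, :2] / x[:, 2:]

pts1 = project(X_true, np.eye(3), np.zeros((3, 1)))
pts2 = project(X_true, R_true, t_true)
R, t, X = relative_pose_and_triangulate(pts1, pts2, K)
print("rotation recovered:", np.allclose(R, R_true, atol=1e-4))
```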
Abstract:
Industrialized countries such as Canada face an aging population. In particular, most elderly people living at home, often alone, face risky situations such as falls. In this context, video surveillance is an innovative solution that can allow them to live normally in a secure environment. The idea is to place a network of cameras in the person's apartment to detect a fall automatically. When a problem occurs, a message can be sent, depending on the urgency, to emergency services or to the family via a secure Internet connection. For a low-cost system, we limited the number of cameras to one per room, which led us to explore monocular fall-detection methods. We first explored the problem from a 2D (image) point of view, focusing on the large changes in a person's silhouette during a fall. Data from the normal activities of an elderly person were modeled by a Gaussian mixture, allowing us to detect any abnormal event. Our method was validated with a video library of simulated falls and realistic normal activities. However, 3D information, such as the person's location relative to their environment, can be very useful for a behaviour-analysis system. Although a multi-camera system is preferable for obtaining 3D information, we showed that with a single calibrated camera it is possible to locate a person in their environment from their head. Concretely, the person's head, modeled as an ellipsoid, is tracked through the image sequence with a particle filter. The accuracy of the 3D head localization was evaluated with a library of video sequences containing ground-truth 3D locations obtained with a motion-capture system. An example application using the 3D head trajectory is proposed for fall detection. In conclusion, a video surveillance system for fall detection with a single camera per room is entirely feasible. To minimize the risk of false alarms, a hybrid method combining 2D and 3D information could be considered.
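The Gaussian-mixture anomaly detection described above can be sketched compactly: fit a mixture to features extracted during normal activity, then flag frames whose log-likelihood falls below a threshold. The features and threshold below are illustrative assumptions, not the thesis's actual descriptors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Training features from "normal" activity (synthetic): silhouette ellipse
# orientation (rad, mostly upright), aspect ratio, and head speed (px/frame).
normal = np.column_stack([rng.normal(1.4, 0.2, 2000),
                          rng.normal(2.5, 0.4, 2000),
                          rng.normal(2.0, 1.0, 2000)])
gmm = GaussianMixture(n_components=4, random_state=0).fit(normal)

# Flag frames whose log-likelihood under "normal" falls below, e.g., the
# 1st percentile of the training scores.
threshold = np.percentile(gmm.score_samples(normal), 1)

def is_abnormal(features):
    """True when the frame's features are unlikely under normal activity."""
    return gmm.score_samples(np.atleast_2d(features))[0] < threshold

# A fall-like frame: near-horizontal silhouette moving fast.
print(is_abnormal([0.1, 0.9, 9.0]))   # expected: True
```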
Abstract:
This thesis was completed under a joint-supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was carried out in the 3D vision laboratory (DIRO, UdM) and at PERCEPTION-INRIA (Grenoble).
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system with which agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields and which farmers may also make special efforts to control. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in the sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked, computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
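The reported linear relation between machine-vision counts and actual density suggests a simple calibration: fit the line once, then invert it to correct detected counts. The sketch below uses made-up numbers consistent with the quoted 47% mean detection rate, not the project's data.

```python
import numpy as np

# Illustrative head counts: detected ~= 47% of the true density plus noise.
true_density = np.array([13.0, 60, 150, 400, 800, 1230])       # heads m^-2
detected = 0.47 * true_density + np.random.default_rng(2).normal(0, 5, 6)

# Fit the linear relation once, then invert it to correct future counts.
slope, intercept = np.polyfit(true_density, detected, 1)

def corrected_density(detected_count):
    """Invert the fitted line to estimate true density from a count."""
    return (detected_count - intercept) / slope

print(np.round(corrected_density(detected)))   # should track true_density
```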
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. The hardware components of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as against different crop backgrounds. Having captured geo-referenced images in the field, image analysis software to identify weed species will be developed by Murray State and Reading Universities, with advice from The Arable Group. A wide range of pattern recognition techniques, in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project, as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: (i) to develop a prototype machine vision system for automated image capture during agricultural field operations; (ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; (iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
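As a toy illustration of the proposed classification step, the sketch below replaces the proposal's Bayesian networks with a Gaussian naive Bayes classifier over per-plant colour/shape features, and biases the decision threshold toward weed recall since false negatives are deemed highly undesirable; all feature values and class statistics are invented for the demonstration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# Synthetic per-plant features: excess-green index, elongation, area (cm^2).
crop = rng.normal([0.30, 1.5, 40], [0.05, 0.3, 10], (300, 3))
weed = rng.normal([0.22, 4.0, 15], [0.05, 0.8, 5], (300, 3))
X, y = np.vstack([crop, weed]), np.repeat([0, 1], 300)   # 1 = black-grass

clf = GaussianNB().fit(X, y)
# Favour recall on weeds: call "weed" whenever P(weed) exceeds a low cutoff,
# since a missed weed patch (false negative) is the costly error here.
flagged = clf.predict_proba(X)[:, 1] > 0.2
print("weed recall:", flagged[y == 1].mean())
```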
Abstract:
Image acquisition systems based on multi-head arrangements of digital frame cameras, such as the commercial DMC and UltraCam systems, among others, are attractive alternatives that enable a larger imaging area than a single frame camera. Because the cameras in these systems are rigidly attached to an external mount, the relative position and orientation between cameras can be assumed stable during image acquisition, and this constraint can therefore be included in the calibration step. The constraint is needed because estimates of the relative orientation (RO) parameters between cameras, when derived from previously estimated exterior orientation parameters, show deviations that are larger and more significant than the expected physical variations, owing to error propagation. To solve this problem, this work presents an approach based on the simultaneous calibration of two or more cameras, using constraints stating that the relative rotation matrix and the distance between the camera heads are stable. Experiments with images acquired by an arrangement of two Hasselblad H2D cameras were carried out with and without these constraints. The experiments showed that calibration with RO constraints yields better results than single-camera calibration, provided that the estimation includes only images with a good target distribution.
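The stability constraint can be made concrete with a short sketch: from the exterior orientations of the two camera heads at each exposure, compute the relative rotation and base distance, whose residuals against reference values are what the constrained calibration holds small. The exterior orientations below are synthetic; this is not the authors' adjustment code.

```python
import numpy as np

def relative_orientation(R1, C1, R2, C2):
    """Relative rotation of head 2 w.r.t. head 1, and the base distance."""
    return R2 @ R1.T, np.linalg.norm(C2 - C1)

def rotation_residual(R_rel, R_ref):
    """Angle (rad) separating a relative rotation from the reference one."""
    cos_a = (np.trace(R_ref.T @ R_rel) - 1) / 2
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Two exposures of a rigid two-head mount (synthetic exterior orientations):
# the mount moves and rotates, but the head-to-head geometry barely changes.
R1a, C1a = rot_z(0.10), np.array([0.0, 0.0, 100.0])
R2a, C2a = rot_z(0.15), np.array([0.5, 0.0, 100.0])
R1b, C1b = rot_z(0.80), np.array([30.0, 5.0, 100.0])
R2b, C2b = rot_z(0.85), np.array([30.49, 5.06, 100.0])

RA, dA = relative_orientation(R1a, C1a, R2a, C2a)
RB, dB = relative_orientation(R1b, C1b, R2b, C2b)
# Both residuals should be near zero; in a constrained calibration they
# would enter the adjustment as pseudo-observations.
print(rotation_residual(RB, RA), dA - dB)
```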
Abstract:
One of the main environmental cues for the adjustment of the temporal organization of animals is the light-dark (LD) cycle, whose phase durations change over the seasons. Photoperiod signaling by melatonin in mammals allows behavioral changes throughout the year, for example in the activity-rest cycle, in mood states, and in cognitive performance. The aim of this study was to investigate whether the common marmoset (Callithrix jacchus) exhibits behavioral changes under short and long photoperiods within a 24 h cycle, by assessing individual behaviors, vocal repertoire, exploratory activity (EA), recognition memory (RM), and the circadian rhythm of locomotor activity (CRA). Eight adult marmosets were exposed sequentially to LD 12:12, LD 08:16, LD 12:12, and LD 16:08, for four weeks in each condition. Locomotor activity was recorded 24 h/day by passive infrared motion detectors above the individual cages. A video camera system was programmed to record each animal twice a week during the first two hours of light. From the videos, the frequencies of anxiety-like behavior, grooming, alert behavior, hanging position, staying in the nest box, and feeding were registered using the continuous focal-animal sampling method. Simultaneously, the calls emitted in the experimental room were recorded by a single, centrally located microphone and categorized as affiliative (whirr, chirp), contact (phee), long distance (loud shrill), agonistic (twitter), and alarm (tsik, seep, see). EA was assessed in the third hour after lights-on during the last week of each condition: in a first session, marmosets were exposed to an unfamiliar object for 15 min, and 24 h later, in a second session, a novel object was added to evaluate RM. Results showed that long days, but not short days, caused a decrease in the amplitude and in the period variance of the CRA. Short days decreased total daily activity and active-phase duration. On long days, active-phase duration increased owing to an advance of activity onset relative to symmetric days, although not all subjects started their activity earlier; activity offset was similar to symmetric days for most marmosets. RM was not affected by short or long days, and the marmosets showed a decrease in the duration of EA on long days. The frequencies and types of calls, and the frequencies of anxiety-like behaviors, staying in the nest box, and grooming, were lower during the first two light hours on long days. Considering the whole active phase of the marmosets when interpreting the vocalization and behavior results, these changes in the first two light hours may reflect a shift in the temporal distribution of the marmosets' activities, since some animals did not advance their activity onset on long days; consequently, group means decreased because those animals could not be sampled during that interval. In conclusion, the marmosets synchronized the CRA to the tested photoperiods, and since the phase angle varied widely among individuals, they may use different strategies. Long days also affected the activity-rest cycle and exploratory behaviors.
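For the circadian-rhythm side of the analysis, period estimation from an activity record can be sketched with a Lomb-Scargle periodogram; the binning, period range, and synthetic record below are assumptions rather than the study's actual procedure.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)
# Synthetic 4-week activity record in 10-min bins with a ~24 h rhythm.
dt_h = 1 / 6
t = np.arange(0, 28 * 24, dt_h)                       # time in hours
activity = ((np.sin(2 * np.pi * t / 24) > 0)
            * rng.poisson(20, t.size)).astype(float)  # counts per bin

periods = np.linspace(20, 28, 400)                    # candidate periods (h)
power = lombscargle(t, activity - activity.mean(), 2 * np.pi / periods)
print("estimated period (h):", periods[np.argmax(power)])
```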
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This study investigated kinematic patterns in clinically normal German Shepherd dogs (GSDs) compared to GSDs with hip dysplasia but no clinical signs of lameness. Two groups of GSDs, 10 clinically healthy dogs (G1) and 10 with hip dysplasia (G2), were trotted on a treadmill at a constant speed. Kinematic data were collected by a 3-camera system and analysed with a motion-analysis program. Flexion and extension joint angles and angular velocities were determined for the shoulder, elbow, carpal, hip, stifle, and tarsal joints. Within each group, differences between the right and left limbs were not significant for any kinematic variable. Minimum angle, angular displacement, and minimum angular velocity did not differ between groups. Significant differences were observed in the maximum angular velocity and maximum angle of the hip joint (dysplastic > healthy) and in the maximum angular velocity of the carpal joint (healthy > dysplastic). It was concluded that, when trotting on a treadmill, dysplastic dogs with no signs of lameness may show joint kinematic alterations in the hindlimbs as well as the forelimbs.
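The two kinematic quantities compared between groups, joint angles and angular velocities, reduce to a short computation from marker positions, sketched below with illustrative coordinates; the marker placement and frame rate are assumptions, not the study's setup.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (deg) at p_joint between the two adjoining limb segments."""
    u, v = p_prox - p_joint, p_dist - p_joint
    cos_a = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def angular_velocity(angles_deg, fps):
    """Deg/s from a per-frame angle series, via central differences."""
    return np.gradient(angles_deg) * fps

# Illustrative 2D marker positions (m) for hip, stifle and tarsus.
hip, stifle, tarsus = map(np.array, ([0.00, 0.40], [0.05, 0.20], [0.00, 0.00]))
print("stifle angle (deg):", joint_angle(hip, stifle, tarsus))
print("angular velocity (deg/s):",
      angular_velocity(np.array([110.0, 114.0, 121.0, 130.0, 138.0]), fps=60))
```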
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)