970 results for Tracking performance
Abstract:
Maximum Power Point Tracking (MPPT) in photovoltaic (PV) systems may be achieved by controlling either the voltage or the current of the PV device. There is no consensus in the technical literature about which is the better choice. This paper provides a comparative performance analysis of current and voltage control using two different MPPT strategies: the perturb and observe (P&O) and incremental conductance techniques. © 2011 IEEE.
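For illustration only, a minimal sketch of the perturb-and-observe loop named in this abstract, in its voltage-controlled form; the function name, variable names and step size are assumptions, not taken from the paper:

```python
# Minimal sketch of the perturb-and-observe (P&O) MPPT loop in its
# voltage-controlled form; variable names and step size are illustrative,
# not taken from the paper.
def perturb_and_observe(v_ref, v_prev, p_prev, v_meas, i_meas, step=0.5):
    """Return the next voltage reference and the latest (voltage, power) pair."""
    p_meas = v_meas * i_meas           # instantaneous PV power
    dp = p_meas - p_prev               # change in power since the last step
    dv = v_meas - v_prev               # change in voltage since the last step
    # If the last perturbation increased the power, keep moving in the same
    # direction; otherwise reverse the perturbation.
    if dp == 0:
        direction = 0
    elif (dp > 0) == (dv > 0):
        direction = +1
    else:
        direction = -1
    return v_ref + direction * step, v_meas, p_meas
```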
Abstract:
The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 pb⁻¹ of data collected in pp collisions at √s = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV/c is above 95% over the whole region of pseudorapidity covered by the CMS muon system, |η| < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV/c is higher than 90% over the full η range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV/c and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV/c. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. © 2012 IOP Publishing Ltd and Sissa Medialab srl.
Abstract:
A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly-produced charged particles with transverse momenta of pT > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT and, respectively, 10 μm and 30 μm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10-12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.
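As a hedged illustration (not CMS code) of how a track-reconstruction efficiency such as the 94% quoted above is typically defined, simulated charged particles can be matched to reconstructed tracks and the matched fraction reported; the field names and matching cut below are assumptions:

```python
# Toy illustration of a track-reconstruction efficiency: simulated charged
# particles in the fiducial region are matched to reconstructed tracks in
# (eta, phi), and the matched fraction is reported.  Cuts and field names
# are illustrative only.
def tracking_efficiency(sim_particles, reco_tracks, max_dr=0.02):
    matched = 0
    denom = 0
    for sim in sim_particles:
        # Denominator: prompt charged particles with pT > 0.9 GeV, |eta| < 2.5.
        if sim["pt"] < 0.9 or abs(sim["eta"]) > 2.5:
            continue
        denom += 1
        # Numerator: at least one reconstructed track close enough in (eta, phi).
        if any(((sim["eta"] - t["eta"])**2 +
                (sim["phi"] - t["phi"])**2) ** 0.5 < max_dr
               for t in reco_tracks):
            matched += 1
    return matched / denom if denom else float("nan")
```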
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of results in speech perception to environments outside of the laboratory. The effect of bottom-up information was evaluated with natural, cell phone and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed in order to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while doing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than synthetic speech and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology was found to provide independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it can be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures such as attentional demands and performance of simultaneous tasks were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
This session reports on a first-year program designed to assist students of color in adjusting to higher education. Session participants will have the opportunity to view the overall structure of the program, including training components, academic tracking methodology, assessment and technology, enhancement programs, and additional services that S.T.A.R.S. provides.
Abstract:
Background: The high and increasing prevalence of Dilated Cardiomyopathy (DCM) represents a serious public health issue. Novel technologies have been used aiming to improve diagnosis and the therapeutic approach. In this context, speckle tracking echocardiography (STE) uses natural myocardial markers to analyze the systolic deformation of the left ventricle (LV). Objective: To measure the longitudinal transmural global strain (GS) of the LV through STE in patients with severe DCM, comparing the results with normal individuals and with echocardiographic parameters established for the analysis of LV systolic function, in order to validate the method in this population. Methods: Seventy-one patients with severe DCM (53 ± 12 years, 72% men) and 20 controls (30 ± 8 years, 45% men) were studied. The following variables were studied: LV volumes and ejection fraction calculated by two- and three-dimensional echocardiography, Doppler parameters, Tissue Doppler Imaging systolic and diastolic LV velocities, and GS obtained by STE. Results: Compared with controls, LV volumes were higher in the DCM group; however, LVEF and peak E-wave velocity were lower in the latter. The myocardial performance index was higher in the patient group. Tissue Doppler myocardial velocities (S', e', a') were significantly lower and the E/e' ratio was higher in the DCM group. GS was decreased in the DCM group (-5.5% ± 2.3%) when compared with controls (-14.0% ± 1.8%). Conclusion: In this study, GS was significantly lower in patients with severe DCM, bringing new perspectives for therapeutic approaches in this specific population. (Arq Bras Cardiol 2012;99(3):834-842)
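For reference, a minimal sketch of the Lagrangian longitudinal strain that speckle tracking averages over myocardial segments to obtain GS; the inputs are illustrative, since real STE software tracks speckle patterns frame by frame to measure segment lengths:

```python
# Minimal sketch of Lagrangian longitudinal strain, the quantity that
# speckle-tracking echocardiography averages over myocardial segments to
# obtain the global strain (GS).  Inputs are illustrative.
def longitudinal_strain(l_end_diastole, l_end_systole):
    """Strain in percent: negative values indicate systolic shortening."""
    return 100.0 * (l_end_systole - l_end_diastole) / l_end_diastole

def global_strain(segment_lengths):
    """Average the segmental strains, e.g. over a multi-segment LV model."""
    strains = [longitudinal_strain(ld, ls) for ld, ls in segment_lengths]
    return sum(strains) / len(strains)
```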
Abstract:
BACKGROUND: The high and increasing prevalence of Dilated Cardiomyopathy (DCM) represents a serious public health problem. New technologies have been used with the aim of more sophisticated diagnoses that improve the therapeutic approach. In this setting, Speckle Tracking echocardiography (STE) uses natural myocardial markers to analyze the systolic deformation of the Left Ventricle (LV). OBJECTIVE: To measure the longitudinal transmural global strain (GS) of the LV through STE in patients with severe DCM, comparing the results with normal individuals and with established echocardiographic parameters for the analysis of LV systolic function, thereby validating the method in this population. METHODS: Seventy-one patients with severe DCM (53 ± 12 years, 72% men) and 20 controls (30 ± 8 years, 45% men) were studied. LV volumes and LVEF were obtained by two- and three-dimensional echocardiography, along with Doppler parameters, tissue Doppler velocities and the GS obtained by STE. RESULTS: Compared with the control group, LV volumes were larger in the DCM group; however, LVEF and peak E-wave velocity were lower in the latter. The myocardial performance index was higher among the patients. Tissue Doppler myocardial velocities (S', e', a') were considerably lower and the E/e' ratio was higher in the DCM group. GS was decreased in the DCM group (-5.5% ± 2.3%) relative to controls (-14.0% ± 1.8%). CONCLUSION: In the present study, GS was significantly lower in patients with severe DCM, opening new perspectives for therapeutic approaches in this specific population.
Abstract:
In this paper we present a new method for image primitive tracking based on a CART (Classification and Regression Tree). The tracking procedure uses lines and circles as primitives. We have applied the proposed method to sport event scenarios, specifically soccer matches. We estimate the CART parameters using a learning procedure based on the RGB image channels. To illustrate its performance, the method has been applied to real HD (High Definition) video sequences and some numerical experiments are shown. The quality of the primitive tracking with the decision tree is validated by the percentage error rates obtained and by comparison with other techniques, such as a morphological method. We also present applications of the proposed method to camera calibration and graphic object insertion in real video sequences.
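As a rough sketch of the kind of pixel classifier this abstract describes, a CART trained on RGB channels can be emulated with scikit-learn's DecisionTreeClassifier; the helper names, depth and training data are assumptions, not the paper's implementation:

```python
# Sketch of a CART trained on RGB channels to separate candidate primitive
# pixels (field lines, circles) from the rest of the frame.  scikit-learn's
# DecisionTreeClassifier stands in for the CART; data are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_pixel_cart(rgb_samples, labels, max_depth=5):
    """rgb_samples: (N, 3) array of RGB values; labels: 1 = primitive pixel."""
    cart = DecisionTreeClassifier(max_depth=max_depth)
    cart.fit(rgb_samples, labels)
    return cart

def classify_frame(cart, frame):
    """frame: (H, W, 3) image -> boolean mask of candidate primitive pixels."""
    h, w, _ = frame.shape
    pred = cart.predict(frame.reshape(-1, 3))
    return pred.reshape(h, w).astype(bool)
```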
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at the estimation of planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once the solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10⁻¹⁵ at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapour content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel to these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 analyses the status and the performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) package developed to estimate the troposphere calibration to be used for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken in order to evaluate its performance. A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis was carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the requirements of MORE. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
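For context, a minimal sketch of the non-overlapping Allan deviation used to express stability requirements such as 3×10⁻¹⁵ at 1000 s integration time, computed from fractional frequency samples; the variable names and sampling interval are assumptions:

```python
# Sketch of the non-overlapping Allan deviation of fractional frequency
# data y sampled at a fixed interval tau0; inputs are illustrative.
import numpy as np

def allan_deviation(y, tau0, tau):
    """Allan deviation of fractional frequency data y at averaging time tau."""
    m = int(round(tau / tau0))          # samples per averaging interval
    n_bins = len(y) // m
    if n_bins < 2:
        raise ValueError("not enough data for this averaging time")
    # Average the data in consecutive, non-overlapping bins of length tau.
    y_bar = np.asarray(y[:n_bins * m]).reshape(n_bins, m).mean(axis=1)
    # Allan variance: half the mean squared difference of adjacent averages.
    avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
    return np.sqrt(avar)
```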
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the step-wise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at √s = 7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at √s = 900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented. They agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
Abstract:
Detection, localization and tracking of non-collaborative objects moving inside an area is of great interest to many surveillance applications. An ultra-wideband (UWB) multistatic radar is considered a good infrastructure for such anti-intruder systems, due to the high range resolution provided by the UWB impulse radio and the spatial diversity achieved with a multistatic configuration. Detection of targets, which are typically human beings, is a challenging task due to reflections from unwanted objects in the area, shadowing, antenna cross-talk, low transmit power, and the blind zones arising from intrinsic peculiarities of UWB multistatic radars. Hence, we propose more effective detection, localization, and clutter removal techniques for these systems. However, the majority of the thesis effort is devoted to the tracking phase, which is essential for improving the localization accuracy, predicting the target position and filling in missed detections. Since UWB radars are not linear Gaussian systems, the widely used tracking filters, such as the Kalman filter, are not expected to provide satisfactory performance. Thus, we propose the Bayesian filter as an appropriate candidate for UWB radars. In particular, we develop tracking algorithms based on particle filtering, the most common approximation of Bayesian filtering, for both single- and multiple-target scenarios. We also propose some effective detection and tracking algorithms based on image processing tools. We evaluate the performance of our proposed approaches by numerical simulations. Moreover, we provide experimental results from channel measurements for tracking a person walking in an indoor area in the presence of significant clutter. We discuss the existing practical issues and address them by proposing more robust algorithms.
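As an illustration of the particle-filtering approach mentioned above, a minimal bootstrap (sequential importance resampling) step for a single target observed through noisy range measurements; the motion model, measurement model and noise levels are assumptions, not those of the thesis:

```python
# Minimal bootstrap particle filter step for tracking a target from a noisy
# range measurement z to a known sensor position; models are illustrative.
import numpy as np

def particle_filter_step(particles, weights, z, sensor_pos,
                         motion_std=0.1, meas_std=0.3, rng=np.random):
    """particles: (N, 2) positions, weights: (N,), z: measured range."""
    n = len(particles)
    # 1) Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 2) Update: weight each particle by the likelihood of the measurement.
    ranges = np.linalg.norm(particles - sensor_pos, axis=1)
    weights = weights * np.exp(-0.5 * ((z - ranges) / meas_std) ** 2)
    weights /= weights.sum()
    # 3) Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # Point estimate: weighted mean of the particle cloud.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```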
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in the provision of customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
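As a much simplified stand-in for the segmentation and tracking algorithm described above (which adds safeguards against under- and over-segmentation), threshold-based 3D connected-component labelling with overlap-based tracking can be sketched as follows; the threshold value and helper names are assumptions:

```python
# Simplified sketch of threshold-based 3D feature segmentation and
# overlap-based tracking between two time steps; a reduced stand-in for
# the algorithm in the thesis.  The threshold is illustrative.
import numpy as np
from scipy import ndimage

def segment_features(field_3d, threshold=30.0):
    """Label connected 3D regions where the field exceeds the threshold."""
    mask = field_3d >= threshold
    labels, n_features = ndimage.label(mask)   # 3D connected components
    return labels, n_features

def feature_tracking_overlap(labels_t0, labels_t1):
    """Associate features between two time steps by voxel overlap."""
    links = {}
    for f in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == f]
        overlap = overlap[overlap > 0]
        if overlap.size:
            # Link to the feature at t1 sharing the most voxels.
            links[f] = int(np.bincount(overlap).argmax())
    return links
```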
Abstract:
In the present multi-modal study we aimed to investigate the role of visual exploration in relation to neuronal activity and performance during visuospatial processing. To this end, event-related functional magnetic resonance imaging (er-fMRI) was combined with simultaneous eye-tracking recording and transcranial magnetic stimulation (TMS). Two groups of twenty healthy subjects each performed an angle discrimination task with different levels of difficulty during er-fMRI. The number of fixations, as a measure of visual exploration effort, was chosen to predict blood oxygen level-dependent (BOLD) signal changes using the general linear model (GLM). Without TMS, a positive linear relationship between visual exploration effort and the BOLD signal was found in a bilateral fronto-parietal cortical network, indicating that these regions reflect the increased number of fixations and the higher brain activity due to higher task demands. Furthermore, the relationship found between the number of fixations and performance demonstrates the relevance of visual exploration for visuospatial task solving. In the TMS group, offline theta-burst TMS (TBS) was applied over the right posterior parietal cortex (PPC) before the fMRI experiment started. Compared to controls, TBS led to a reduced correlation between visual exploration and BOLD signal change in regions of the fronto-parietal network of the right hemisphere, indicating a disruption of the network. In contrast, an increased correlation was found in regions of the left hemisphere, suggesting an attempt to compensate for the function of the disturbed areas. TBS led to fewer fixations and faster response times while keeping accuracy at the same level, indicating that subjects had explored more than was actually needed.
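For illustration, a minimal sketch of a GLM relating trial-wise fixation counts to a voxel's BOLD response via ordinary least squares; in a real er-fMRI analysis the regressors would be convolved with a hemodynamic response function, and the names and data here are assumptions:

```python
# Minimal sketch of a GLM relating fixation counts to a voxel's BOLD
# response, fitted by ordinary least squares.  Purely illustrative: real
# er-fMRI regressors are convolved with a hemodynamic response function.
import numpy as np

def fit_fixation_glm(bold, n_fixations):
    """bold: (T,) voxel time course; n_fixations: (T,) fixation regressor."""
    design = np.column_stack([np.ones_like(n_fixations), n_fixations])
    beta, _, _, _ = np.linalg.lstsq(design, bold, rcond=None)
    return beta   # beta[1] > 0 -> BOLD rises with visual exploration effort
```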
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits. However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. The estimation performance can be enhanced if the measurements are chosen according to criteria that direct the sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within the Bayesian inference model often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is changed at each time step. The information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
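As a hedged sketch of one step of the adaptive sensing idea described above: under a Gaussian prior, the measurement row can be chosen from a candidate set to maximise the expected information gain and the posterior updated in closed form; the candidate rows, noise level and helper names are assumptions:

```python
# Sketch of one adaptive sensing step with a Gaussian prior N(mu, sigma):
# choose the most informative measurement row, then update the posterior
# with the observed scalar measurement y = a^T x + noise.  Illustrative only.
import numpy as np

def choose_row(candidates, sigma_prior, noise_var):
    """Pick the row a maximising 0.5*log(1 + a^T Sigma a / noise_var)."""
    gains = [a @ sigma_prior @ a / noise_var for a in candidates]
    return candidates[int(np.argmax(gains))]

def posterior_update(mu, sigma, a, y, noise_var):
    """Gaussian posterior update for the scalar measurement y = a^T x + noise."""
    s = a @ sigma @ a + noise_var          # innovation variance
    k = sigma @ a / s                      # gain vector
    mu_post = mu + k * (y - a @ mu)
    sigma_post = sigma - np.outer(k, a @ sigma)
    return mu_post, sigma_post
```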