7 results for Uncertainty in measurement
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The effect of event background fluctuations on charged particle jet reconstruction in Pb-Pb collisions at √s_NN = 2.76 TeV has been measured with the ALICE experiment. The main sources of non-statistical fluctuations are characterized based purely on experimental data with an unbiased method, as well as by using single high-p_T particles and simulated jets embedded into real Pb-Pb events and reconstructed with the anti-k_T jet finder. The influence of a low transverse momentum cut-off on particles used in the jet reconstruction is quantified by varying the minimum track p_T between 0.15 GeV/c and 2 GeV/c. For embedded jets reconstructed from charged particles with p_T > 0.15 GeV/c, the uncertainty in the reconstructed jet transverse momentum due to the heavy-ion background is measured to be 11.3 GeV/c (standard deviation) for the 10% most central Pb-Pb collisions, slightly larger than the value of 11.0 GeV/c measured using the unbiased method. For a higher particle transverse momentum threshold of 2 GeV/c, which generates a stronger bias towards hard fragmentation in the jet finding process, the standard deviation of the fluctuations in the reconstructed jet transverse momentum is reduced to 4.8-5.0 GeV/c for the 10% most central events. A non-Gaussian tail of the momentum uncertainty is observed and its impact on the reconstructed jet spectrum is evaluated for varying particle momentum thresholds by folding the measured fluctuations with steeply falling spectra.
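The folding procedure mentioned at the end of this abstract can be sketched numerically: draw jets from a steeply falling spectrum and smear them with a background-fluctuation term. A minimal sketch, assuming a power-law index of 5 and an 80 GeV/c threshold (both illustrative); only σ = 11.3 GeV/c comes from the abstract, and a real analysis would fold in the full measured δp_T distribution, including its non-Gaussian tail:

```python
import random

random.seed(0)

def sample_pt(pt_min=20.0, index=5.0):
    """Draw from a falling spectrum dN/dpT ~ pT**(-index) above pt_min by CDF inversion."""
    return pt_min * (1.0 - random.random()) ** (-1.0 / (index - 1.0))

pt_true = [sample_pt() for _ in range(100_000)]
# Gaussian approximation of the background fluctuations (sigma from the abstract).
pt_reco = [pt + random.gauss(0.0, 11.3) for pt in pt_true]

threshold = 80.0
frac_true = sum(pt > threshold for pt in pt_true) / len(pt_true)
frac_reco = sum(pt > threshold for pt in pt_reco) / len(pt_reco)
# Upward fluctuations of the abundant low-pT jets inflate the high-pT yield.
print(frac_reco > frac_true)  # True
```

The comparison illustrates why the fluctuations matter: smearing a steeply falling spectrum is asymmetric in effect, feeding low-p_T jets upward into the high-p_T tail.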
Abstract:
Photons scattered by the Compton effect can be used to characterize the physical properties of a given sample, owing to the influence that the electron density exerts on the number of scattered photons. However, scattering measurements involve experimental and physical factors that must be carefully analyzed to predict the uncertainty in the detection of Compton photons. This paper presents a method for optimizing the geometrical parameters of an experimental arrangement for Compton scattering analysis, based on their relations with the energy and incident flux of the X-ray photons. In addition, the tool enables statistical analysis of the displayed information and includes the coefficient of variation (CV) as a measure for comparative evaluation of the physical parameters of the model established for the simulation.
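The coefficient of variation used in this paper is simply the sample standard deviation normalized by the mean; a minimal sketch, with hypothetical photon counts rather than data from the paper:

```python
import statistics

# Coefficient of variation: CV = sigma / mu (dimensionless, comparable across scales).
# The counts below are hypothetical detected-photon counts, for illustration only.
counts = [1020, 980, 1005, 995, 1010]
cv = statistics.stdev(counts) / statistics.mean(counts)
print(round(cv, 3))  # 0.015
```

Because the CV is dimensionless, it allows parameters measured on different scales to be compared on equal footing, which is what makes it useful for the comparative evaluation described above.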
Abstract:
Background: The rapid shallow breathing index (RSBI) is the most widely used index within intensive care units as a predictor of the outcome of weaning, but differences in measurement techniques have generated doubts about its predictive value. Objective: To investigate the influence of low levels of pressure support (PS) on the RSBI value of critically ill patients. Method: Prospective study including 30 patients on mechanical ventilation (MV) for 72 hours or more, ready for extubation. Prior to extubation, the RSBI was measured with the patient connected to the ventilator (Dräger™ Evita XL) receiving pressure support ventilation (PSV) with 5 cmH2O of positive end-expiratory pressure (PEEP) (RSBI_MIN), and then disconnected from the MV and connected to a Wright spirometer, with respiratory rate and exhaled tidal volume recorded for 1 min (RSBI_ESP). Patients were divided into groups according to outcome: successful extubation group (SG) and failed extubation group (FG). Results: Of the 30 patients, 11 (37%) failed the extubation process. In the within-group comparison (RSBI_MIN versus RSBI_ESP), the values for RSBI_MIN were lower in both groups: SG (34.79 +/- 14.67 and 60.95 +/- 24.64) and FG (38.64 +/- 12.31 and 80.09 +/- 20.71; p<0.05). In the between-group comparison, there was no difference in RSBI_MIN (34.79 +/- 14.67 and 38.64 +/- 12.31); however, RSBI_ESP was higher in patients with extubation failure: SG (60.95 +/- 24.64) and FG (80.09 +/- 20.71; p<0.05). Conclusion: In critically ill patients on MV for more than 72 h, low levels of PS overestimate the RSBI, and the index needs to be measured with the patient breathing spontaneously, without the aid of pressure support.
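The RSBI itself is just the ratio of respiratory rate to exhaled tidal volume (f/VT); a minimal sketch, with illustrative patient values rather than data from the study:

```python
def rsbi(resp_rate: float, tidal_volume_l: float) -> float:
    """Rapid shallow breathing index f/VT, in breaths/min/L.

    Values above roughly 105 breaths/min/L are the classical cutoff
    associated with weaning failure.
    """
    return resp_rate / tidal_volume_l

# Hypothetical patient: 25 breaths/min with a 350 mL exhaled tidal volume.
print(round(rsbi(25, 0.35), 1))  # 71.4
```

The study's point is that pressure support raises VT and so lowers this ratio, which is why RSBI_MIN underestimates the spontaneous value RSBI_ESP.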
Abstract:
Aboveground tropical tree biomass and carbon storage estimates commonly ignore tree height (H). We estimate the effect of incorporating H on tropics-wide forest biomass estimates in 327 plots across four continents using 42 656 H and diameter measurements and harvested trees from 20 sites to answer the following questions: 1. What is the best H-model form and geographic unit to include in biomass models to minimise site-level uncertainty in estimates of destructive biomass? 2. To what extent does including H estimates derived in (1) reduce uncertainty in biomass estimates across all 327 plots? 3. What effect does accounting for H have on plot- and continental-scale forest biomass estimates? The mean relative error in biomass estimates of destructively harvested trees when including H (mean 0.06) was half that when excluding H (mean 0.13). Power- and Weibull-H models provided the greatest reduction in uncertainty, with regional Weibull-H models preferred because they reduce uncertainty in smaller-diameter classes (< 40 cm D) that store about one-third of biomass per hectare in most forests. Propagating the relationships from destructively harvested tree biomass to each of the 327 plots from across the tropics shows that including H reduces errors from 41.8 Mg ha⁻¹ (range 6.6 to 112.4) to 8.0 Mg ha⁻¹ (-2.5 to 23.0).
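The Weibull height-diameter model form favoured by the study can be sketched as follows; the three coefficients are illustrative placeholders, not the fitted regional values:

```python
import math

def weibull_height(d_cm: float, a: float = 50.0, b: float = 0.03, c: float = 0.8) -> float:
    """Weibull height-diameter model H = a * (1 - exp(-b * D**c)).

    a is the asymptotic height (m); b and c control the rate of the rise.
    All three coefficient values here are assumptions for illustration.
    """
    return a * (1.0 - math.exp(-b * d_cm ** c))

# Height rises with diameter but saturates toward the asymptote a,
# which is the behaviour that makes the form suitable for small-diameter classes.
print(weibull_height(100.0) < weibull_height(200.0) < 50.0)  # True
```

The saturating shape is the design point: unlike an unbounded power law, it does not extrapolate to implausible heights for the largest stems while still fitting the small-diameter classes that store much of the biomass.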
Abstract:
OBJECTIVE: This study proposes a new approach that considers uncertainty in predicting and quantifying the presence and severity of diabetic peripheral neuropathy. METHODS: A rule-based fuzzy expert system was designed by four experts in diabetic neuropathy. The model variables were used to classify neuropathy in diabetic patients, defining it as mild, moderate, or severe. System performance was evaluated by means of the Kappa agreement measure, comparing the results of the model with those generated by the experts in an assessment of 50 patients. Accuracy was evaluated by an ROC curve analysis based on 50 other cases; the results of those clinical assessments were considered the gold standard. RESULTS: According to the Kappa analysis, the model was in moderate agreement with expert opinions. The ROC analysis (evaluation of accuracy) yielded an area under the curve of 0.91, demonstrating very good consistency in classifying patients with diabetic neuropathy. CONCLUSION: The model efficiently classified diabetic patients with different degrees of neuropathy severity. In addition, it provides a way to quantify diabetic neuropathy severity and allows a more accurate assessment of the patient's condition.
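Rule-based fuzzy systems of this kind rest on membership functions that map a crisp measurement to a degree of membership in a linguistic class such as "moderate". A minimal sketch; the variable, breakpoints, and label are hypothetical, not the experts' actual model:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a hypothetical symptom score of 6 belongs to a "moderate"
# class with support [2, 8] and peak 5.
print(round(triangular(6.0, 2.0, 5.0, 8.0), 2))  # 0.67
```

Graded membership like this is what lets such a system express uncertainty at class boundaries instead of forcing a hard mild/moderate/severe cut.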
Abstract:
The intensification of innovation activities in organizations has triggered the emergence of high-risk projects with considerable levels of complexity, and has stimulated the search for models that can make the uncertainties and risks of such projects manageable. Traditional methodologies are no longer sufficient to ensure the success of these projects. The premise that a standardized set of tools and techniques can be applied to all types of projects has been strongly questioned, given the fundamental differences between them. This article presents a review and analysis of the literature on risk management in innovative projects from a contingency perspective. Articles were retrieved from scientific databases along two main lines: project typology, and methodologies for managing the risks of innovative and complex projects. Based on a critical analysis of the literature, a structured model for managing the uncertainties and risks of innovative and complex projects is proposed.
Abstract:
Due to the growing interest in social networks, link prediction has received significant attention. Link prediction is mostly based on graph-based features, with some recent approaches focusing on domain semantics. We propose algorithms for link prediction that use a probabilistic ontology to enhance the analysis of the domain and to handle the unavoidable uncertainty in the task (the ontology is specified in the probabilistic description logic crALC). The scalability of the approach is investigated through a combination of semantic assumptions and graph-based features. We empirically evaluate our proposal and compare it with standard solutions in the literature.
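Among the standard graph-based baselines such work compares against is the Adamic-Adar score, which weights common neighbours inversely by the log of their degree. A minimal sketch on a toy graph (the graph and node names are illustrative):

```python
import math
from collections import defaultdict

# Build an undirected toy graph as an adjacency-set map.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def adamic_adar(u: str, v: str) -> float:
    """Adamic-Adar score: sum of 1/log(deg(w)) over common neighbours w of u and v."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

# a and d share neighbours b and c, each of degree 3, so the score is 2 / ln 3.
print(round(adamic_aar_score := adamic_adar("a", "d"), 3))
```

Low-degree common neighbours count for more than hubs, which is the intuition the score encodes; pairs are then ranked by score and the top pairs are predicted as future links.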