983 results for 2-sigma error
Abstract:
The production of W bosons in association with two jets in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV has been analysed for the presence of double-parton interactions using data corresponding to an integrated luminosity of 36 pb−1, collected with the ATLAS detector at the Large Hadron Collider. The fraction of events arising from double-parton interactions, f_DP^(D), has been measured through the pT balance between the two jets and amounts to f_DP^(D) = 0.08 ± 0.01 (stat.) ± 0.02 (sys.) for jets with transverse momentum pT > 20 GeV and rapidity |y| < 2.8. This corresponds to a measurement of the effective area parameter for hard double-parton interactions of σ_eff = 15 ± 3 (stat.) +5/−3 (sys.) mb.
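As a side note for readers comparing quoted uncertainties: if the statistical and systematic components are treated as independent, they can be combined in quadrature. A minimal sketch of that standard combination (the independence assumption and the crude symmetrisation of the asymmetric σ_eff error are ours, not the paper's):

```python
# Quadrature combination of the quoted uncertainties, assuming independence
# between the statistical and systematic components (our assumption).
f_dp, stat, syst = 0.08, 0.01, 0.02
total = (stat**2 + syst**2) ** 0.5
print(f"f_DP = {f_dp:.2f} ± {total:.3f}")          # ≈ 0.08 ± 0.022

# Crudely symmetrise the asymmetric systematic error on σ_eff before combining.
sigma_eff, stat_e, sys_up, sys_dn = 15.0, 3.0, 5.0, 3.0
sys_sym = 0.5 * (sys_up + sys_dn)
print(f"σ_eff ≈ {sigma_eff:.0f} ± {(stat_e**2 + sys_sym**2) ** 0.5:.0f} mb")   # ≈ 15 ± 5 mb
```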
Abstract:
Accurate assessments of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere are important to better understand the global carbon cycle, support the climate policy process, and project future climate change. Present-day analysis requires the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. Here we describe datasets and a methodology developed by the global carbon cycle science community to quantify all major components of the global carbon budget, including their uncertainties. We discuss changes compared to previous estimates, consistency within and among components, and methodology and data limitations. CO2 emissions from fossil fuel combustion and cement production (EFF) are based on energy statistics, while emissions from Land-Use Change (ELUC), including deforestation, are based on combined evidence from land cover change data, fire activity in regions undergoing deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. Finally, the global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms. For the last decade available (2002–2011), EFF was 8.3 ± 0.4 PgC yr−1, ELUC 1.0 ± 0.5 PgC yr−1, GATM 4.3 ± 0.1 PgC yr−1, SOCEAN 2.5 ± 0.5 PgC yr−1, and SLAND 2.6 ± 0.8 PgC yr−1. For year 2011 alone, EFF was 9.5 ± 0.5 PgC yr−1, 3.0 percent above 2010, reflecting a continued trend in these emissions; ELUC was 0.9 ± 0.5 PgC yr−1, approximately constant throughout the decade; GATM was 3.6 ± 0.2 PgC yr−1, SOCEAN was 2.7 ± 0.5 PgC yr−1, and SLAND was 4.1 ± 0.9 PgC yr−1. GATM was low in 2011 compared to the 2002–2011 average because of high uptake by the land, probably in response to natural climate variability associated with La Niña conditions in the Pacific Ocean. The global atmospheric CO2 concentration reached 391.31 ± 0.13 ppm at the end of year 2011. We estimate that EFF will have increased by 2.6% (1.9–3.5%) in 2012 based on projections of gross world product and recent changes in the carbon intensity of the economy. All uncertainties are reported as ±1 sigma (68% confidence, assuming Gaussian error distributions, that the real value lies within the given interval), reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. This paper is intended to provide a baseline to keep track of annual carbon budgets in the future.
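Since the residual terrestrial sink SLAND is obtained by closing the budget, the 2011 numbers above can be checked directly. A minimal worked example (the quadrature combination assumes independent Gaussian uncertainties, consistent with the ±1 sigma convention stated in the abstract):

```python
# Budget closure for 2011 using the values quoted in the abstract above:
# S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN, with uncertainties combined in
# quadrature under the assumption of independent Gaussian errors.
E_FF, dE_FF = 9.5, 0.5          # fossil fuel and cement emissions (PgC/yr)
E_LUC, dE_LUC = 0.9, 0.5        # land-use change emissions
G_ATM, dG_ATM = 3.6, 0.2        # atmospheric growth
S_OCEAN, dS_OCEAN = 2.7, 0.5    # ocean sink

S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN
dS_LAND = (dE_FF**2 + dE_LUC**2 + dG_ATM**2 + dS_OCEAN**2) ** 0.5
print(f"S_LAND = {S_LAND:.1f} ± {dS_LAND:.1f} PgC/yr")   # 4.1 ± 0.9 PgC/yr, as quoted
```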
Abstract:
We derive the fermion loop formulation for the supersymmetric nonlinear O(N) sigma model by performing a hopping expansion using Wilson fermions. In this formulation the fermionic contribution to the partition function becomes a sum over all possible closed non-oriented fermion loop configurations. The interaction between the bosonic and fermionic degrees of freedom is encoded in the constraints arising from the supersymmetry and induces flavour-changing fermion loops. For N ≥ 3 this leads to fermion loops which are no longer self-avoiding and hence to a potential sign problem. Since we use Wilson fermions, the bare mass needs to be tuned to the chiral point. For N = 2 we determine the critical point and present boson and fermion masses in the critical regime.
Abstract:
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the climate policy process, and project future climate change. Present-day analysis requires the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. Here we describe datasets and a methodology developed by the global carbon cycle science community to quantify all major components of the global carbon budget, including their uncertainties. We discuss changes compared to previous estimates, consistency within and among components, and methodology and data limitations. Based on energy statistics, we estimate that the global emissions of CO2 from fossil fuel combustion and cement production were 9.5 ± 0.5 PgC yr−1 in 2011, 3.0 percent above 2010 levels. We project these emissions will increase by 2.6% (1.9–3.5%) in 2012 based on projections of Gross World Product and recent changes in the carbon intensity of the economy. Global net CO2 emissions from Land-Use Change, including deforestation, are more difficult to update annually because of data availability, but combined evidence from land cover change data, fire activity in regions undergoing deforestation, and models suggests those net emissions were 0.9 ± 0.5 PgC yr−1 in 2011. The global atmospheric CO2 concentration is measured directly and reached 391.38 ± 0.13 ppm at the end of year 2011, increasing by 1.70 ± 0.09 ppm yr−1 or 3.6 ± 0.2 PgC yr−1 in 2011. Estimates from four ocean models suggest that the ocean CO2 sink was 2.6 ± 0.5 PgC yr−1 in 2011, implying a global residual terrestrial CO2 sink of 4.1 ± 0.9 PgC yr−1. All uncertainties are reported as ±1 sigma (68% confidence, assuming Gaussian error distributions, that the real value lies within the given interval), reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. This paper is intended to provide a baseline to keep track of annual carbon budgets in the future.
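The abstract quotes the 2011 atmospheric growth both in ppm yr−1 and in PgC yr−1; the two are linked by the mass of carbon corresponding to one ppm of atmospheric CO2. A minimal sketch of that conversion, using the commonly adopted factor of about 2.12 PgC per ppm (the factor itself is our assumption; it is not given in the abstract):

```python
# Convert the observed growth rate from ppm/yr to PgC/yr. The conversion factor
# of ~2.12 PgC per ppm of CO2 is a commonly used approximation, assumed here.
PGC_PER_PPM = 2.12
growth_ppm, dgrowth_ppm = 1.70, 0.09
print(f"G_ATM ≈ {growth_ppm * PGC_PER_PPM:.1f} ± {dgrowth_ppm * PGC_PER_PPM:.1f} PgC/yr")
# ≈ 3.6 ± 0.2 PgC/yr, matching the value quoted above
```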
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
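A minimal sketch of the workflow described above, under simplifying assumptions of our own: the responses are curves discretized on a common grid, FPCA is approximated by ordinary PCA on the discretized curves, the score-to-score error model is a plain linear regression, and the data are synthetic stand-ins for the geostatistical realizations:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_learn, n_new, n_t = 50, 200, 100           # learning set, new realizations, grid size
t = np.linspace(0.0, 1.0, n_t)

# Synthetic "exact" responses and biased, smoothed "proxy" responses.
params = rng.normal(size=(n_learn + n_new, 3))
exact = np.array([p[0]*np.sin(2*np.pi*t) + p[1]*t + 0.2*p[2] for p in params])
proxy = 0.8*exact + 0.1*np.cos(2*np.pi*t) + 0.05*rng.normal(size=exact.shape)

exact_learn, proxy_learn = exact[:n_learn], proxy[:n_learn]
proxy_new, exact_new = proxy[n_learn:], exact[n_learn:]

# FPCA approximated by PCA on the discretized curves of the learning set.
pca_proxy = PCA(n_components=3).fit(proxy_learn)
pca_exact = PCA(n_components=3).fit(exact_learn)

# Error model: map proxy-curve scores to exact-curve scores on the learning set.
reg = LinearRegression().fit(pca_proxy.transform(proxy_learn),
                             pca_exact.transform(exact_learn))

# Predict the exact response of new realizations from the proxy response alone.
pred = pca_exact.inverse_transform(reg.predict(pca_proxy.transform(proxy_new)))
print("RMSE of predicted exact curves:", float(np.sqrt(np.mean((pred - exact_new)**2))))
```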
Abstract:
This dissertation was written in the format of three journal articles. Paper 1 examined the influence of change and fluctuation in body mass index (BMI) over an eleven-year period, on changes in serum lipid levels (total, HDL, and LDL cholesterol, triglyceride) in a population of Mexican Americans with type 2 diabetes. Linear regression models containing initial lipid value, BMI and age, BMI change (slope of BMI), and BMI fluctuation (root mean square error) were used to investigate associations of these variables with change in lipids over time. Increasing BMI over time was associated with gains in total and LDL cholesterol and triglyceride levels in women. Fluctuation of BMI was not associated with detrimental lipid profiles. These effects were independent of age and were not statistically significant in men. In Mexican-American women with type 2 diabetes, weight reduction is likely to result in more favorable levels of total and LDL cholesterol and triglyceride, without concern for possible detrimental effects of weight fluctuation. Weight reduction may not be as effective in men, but does not appear to be harmful either.

Paper 2 examined the associations of upper and total body fat with total cholesterol, HDL and LDL cholesterol, and triglyceride levels in the same population. Multilevel analysis was used to predict serum lipid levels from total body fat (BMI and triceps skinfold) and upper body fat (subscapular skinfold), while controlling for the effects of sex, age and self-correlations across time. Body fat was not strikingly associated with trends in serum lipid levels. However, upper body fat was strongly associated with triglyceride levels. This suggests that loss of upper body fat may be more important than weight loss in management of the hypertriglyceridemia commonly seen in type 2 diabetes.

Paper 3 was a review of the literature reporting associations between weight fluctuation and lipid levels. Few studies have reported associations between weight fluctuation and total, LDL, and HDL cholesterol and triglyceride levels. The body of evidence to date suggests that weight fluctuation does not strongly influence levels of total, LDL and HDL cholesterol and triglyceride.
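A minimal sketch of the Paper 1 analysis as described above, with synthetic data and illustrative variable names of our own: per-subject BMI change is the slope of a linear fit of BMI on time, BMI fluctuation is the root mean square error around that fit, and both enter a linear regression for lipid change together with the initial lipid value and age:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subjects, n_visits = 200, 5
years = np.arange(n_visits) * 2.75                 # roughly eleven years of follow-up

def bmi_slope_and_rmse(bmi_series):
    """Per-subject fit of BMI on time: slope = BMI change, RMSE = BMI fluctuation."""
    fit = sm.OLS(bmi_series, sm.add_constant(years)).fit()
    return fit.params[1], np.sqrt(np.mean(fit.resid**2))

# Synthetic subject-level data (illustrative only).
bmi = 28 + rng.normal(0, 0.3, (n_subjects, 1))*years + rng.normal(0, 1.0, (n_subjects, n_visits))
slope_rmse = np.array([bmi_slope_and_rmse(b) for b in bmi])
age0 = rng.normal(55, 8, n_subjects)
ldl0 = rng.normal(130, 25, n_subjects)
ldl_change = 0.5*slope_rmse[:, 0]*11 + rng.normal(0, 10, n_subjects)   # synthetic outcome

# Linear regression of lipid change on initial value, age, BMI change and BMI fluctuation.
X = sm.add_constant(np.column_stack([ldl0, age0, slope_rmse]))
model = sm.OLS(ldl_change, X).fit()
print(model.summary(xname=["const", "LDL_0", "age_0", "BMI_slope", "BMI_rmse"]))
```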
Abstract:
Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
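A minimal illustration of the modelling step described above, not the dissertation's actual code: Poisson regression of stratified death counts on dose and demographic covariates with a person-year offset, after scaling nominal doses by an assumed reduction factor for DS86 random error. All data and the reduction-factor value are synthetic and illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_strata = 400
dose = rng.gamma(1.0, 0.5, n_strata)            # nominal dose (Sv), synthetic
reduction_factor = 0.85                         # illustrative correction for random dose error
dose_corr = dose * reduction_factor
age_atb = rng.uniform(5, 60, n_strata)          # age at time of bombing
age = age_atb + rng.uniform(5, 40, n_strata)    # attained age
sex = rng.integers(0, 2, n_strata)
pyr = rng.uniform(500, 5000, n_strata)          # person-years per stratum

# Synthetic cancer-death counts with a linear-in-dose excess relative risk.
baseline = 1e-3 * np.exp(0.04 * (age - 50))
deaths = rng.poisson(pyr * baseline * (1 + 0.5 * dose_corr))

X = sm.add_constant(np.column_stack([dose_corr, age_atb, age, sex]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(pyr)).fit()
print(fit.params)   # the dose coefficient in this log-linear fit approximates the ERR/Sv at low doses
```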
Abstract:
Each year, hospitalized patients experience 1.5 million preventable injuries from medication errors and hospitals incur an additional $3.5 billion in cost (Aspden, Wolcott, Bootman, & Cronenwatt, 2007). It is believed that error reporting is one way to learn about factors contributing to medication errors. And yet, an estimated 50% of medication errors go unreported. This period of medication error pre-reporting, with few exceptions, is underexplored. The literature focuses on error prevention and management, but lacks a description of the period of introspection and inner struggle over whether to report an error and the resulting likelihood to report. Reporting makes a nurse vulnerable to reprimand, legal liability, and even threat to licensure. For some nurses this state may invoke a disparity between a person's belief about him or herself as a healer and the undeniable fact of the error.

This study explored the medication error reporting experience. Its purpose was to inform nurses, educators, organizational leaders, and policy-makers about the medication error pre-reporting period, and to contribute to a framework for further investigation. From a better understanding of factors that contribute to or detract from the likelihood of an individual to report an error, interventions can be identified to help the nurse come to a psychologically healthy resolution and help increase reporting of error in order to learn from error and reduce the possibility of future similar error.

The research question was: "What factors contribute to a nurse's likelihood to report an error?" The specific aims of the study were to: (1) describe participant nurses' perceptions of medication error reporting; (2) describe participant explanations of the emotional, cognitive, and physical reactions to making a medication error; (3) identify pre-reporting conditions that make it less likely for a nurse to report a medication error; and (4) identify pre-reporting conditions that make it more likely for a nurse to report a medication error.

A qualitative research study was conducted to explore the medication error experience, and in particular the pre-reporting period, from the perspective of the nurse. A total of 54 registered nurses from a large private free-standing not-for-profit children's hospital in the southwestern United States participated in group interviews. The results describe the experience of the nurse as well as the physical, emotional, and cognitive responses to the realization of the commission of a medication error. The results also reveal factors that make it more and less likely to report a medication error.

It is clear from this study that upon realization that he or she has made a medication error, a nurse's foremost concern is for the safety of the patient. Fear was also described by each group of nurses. The nurses described a fear of several things, including physician reaction, manager reaction, peer reaction, as well as family reaction and possible lack of trust as a result. Another universal response was the description of a struggle with guilt, shame, imperfection, blaming oneself, and questioning one's competence.
Abstract:
Strontium isotope stratigraphy was used to date 16 discrete horizons within the CRP-2/2A drillhole. Reworked Quaternary (<1.7 Ma) and possible Pliocene (<2.4 Ma) sediments overlie a major sequence boundary at 25.92 meters below sea floor (mbsf). This hiatus is estimated to account for c. 16 Myr of missing section. Early Miocene to ?earliest Oligocene (c. 18.6 to >31 Ma) deposits below this boundary were cut by multiple erosion surfaces of uncertain duration. Strontium isotope ages are combined with 40Ar/39Ar dates, diatom and calcareous nannofossil datums, and a palaeomagnetic polarity zonation to produce an age model for the core.
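A minimal sketch of how such dated horizons feed an age model: linear interpolation between depth-age tie points, treating the sequence boundary as a hiatus. The tie points below are hypothetical placeholders, not the CRP-2/2A Sr, 40Ar/39Ar, biostratigraphic or palaeomagnetic datums:

```python
import numpy as np

# Hypothetical (depth in mbsf, age in Ma) tie points for the section below the
# 25.92 mbsf sequence boundary; a full model would handle each erosion surface
# as a separate hiatus with its own age-depth segment.
tie_depth = np.array([30.0, 100.0, 300.0, 600.0])
tie_age   = np.array([18.6, 21.0, 24.0, 31.0])

def age_at(depth_mbsf):
    """Linearly interpolate an age (Ma) between dated horizons."""
    return float(np.interp(depth_mbsf, tie_depth, tie_age))

print(age_at(150.0))   # interpolated age for a sample at 150 mbsf
```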
Abstract:
In locations of rapid sediment accumulation that receive substantial amounts of laterally transported material, the timescales of transport and the accurate quantification of the transported material are the focus of intense research. Here we present radiocarbon data obtained on co-occurring planktic foraminifera, marine haptophyte biomarkers (alkenones) and total organic carbon (TOC), coupled with excess Thorium-230 (230Thxs) measurements, on four sediment cores retrieved from water depths of 1649–2879 m from two such high-accumulation drift deposits in the Northeast Atlantic, Björn and Gardar Drifts. While 230Thxs inventories imply strong sediment focussing, no age offsets are observed between planktic foraminifera and alkenones, suggesting that redistribution of sediments is rapid and occurs soon after formation of marine organic matter, or that transported material contains negligible amounts of alkenones. An isotopic mass balance calculation based on radiocarbon concentrations of co-occurring sediment components leads us to estimate that transported sediment components contain up to 12% of fossil organic matter that is free of or very poor in alkenones, but nevertheless appears to consist of a mixture of fresh and eroded fossil material. Considering all available constraints to characterize transported material, our results show that although focussing factors calculated from bulk sediment 230Thxs inventories may allow useful approximations of bulk redeposition, they do not provide a unique estimate of the amount of each laterally transported sediment component. Furthermore, our findings provide evidence that the occurrence of lateral sediment redistribution alone does not always hinder the use of multiple proxies, but that individual sediment fractions are affected to variable extents by sediment focussing.
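Two back-of-the-envelope calculations behind the quantities discussed above, with illustrative input values of our own (the abstract does not list the underlying measurements): a focusing factor as the ratio of the measured excess 230Th inventory to the inventory expected from production, and a two-endmember radiocarbon mass balance giving the fraction of 14C-free (fossil) organic carbon in the transported material:

```python
# Focusing factor: ratio of measured to production-expected 230Thxs inventory.
measured_230Thxs = 18.0        # dpm cm-2, illustrative
expected_230Thxs = 6.0         # dpm cm-2 expected from production in the overlying water column
print(f"focusing factor ~ {measured_230Thxs / expected_230Thxs:.1f}  (>1 implies lateral supply)")

# Two-endmember mass balance: F_mix = (1 - f_fossil) * F_fresh, since fossil carbon is 14C-dead.
F_fresh = 1.00                 # fraction modern of freshly produced organic matter
F_mix = 0.88                   # fraction modern of the bulk TOC, illustrative value
f_fossil = 1.0 - F_mix / F_fresh
print(f"fossil organic carbon fraction ~ {f_fossil:.0%}")   # ~12% with these illustrative numbers
```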