973 results for "1 sigma standard deviation for the average"


Relevance: 100.00%

Abstract:

In recent years, many problems have emerged from aircraft noise in the areas surrounding airports. The problem is not easy to solve: residents ask for a reduction in the number of aircraft operations, while airlines face growing demand for operations at major airports. Airport and regulatory authorities therefore attempt a compromise by fining aircraft whose actual trajectory differs from the nominal one by more than a given lateral deviation. But what should the value of this deviation be? At present, many operators pay substantial fines for exceeding a deviation that was established without operational criteria. This paper presents the results of an ongoing research program by the authors that aims to determine the "delta" deviation to be used for this purpose. In addition, a customized method per SID and per airport is proposed for determining the maximum allowed lateral deviation, such that no fine is imposed if the aircraft remains within it.
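
As a concrete illustration of the kind of criterion such a program might produce, the sketch below derives a per-SID threshold from recorded cross-track deviations as mean + k·σ. This is a minimal sketch under assumed data, not the authors' method; the function name and the simulated deviations are hypothetical.

```python
# Hypothetical sketch (not the authors' method): derive a per-SID lateral
# deviation threshold from recorded tracks as mean |cross-track| + k sigma.
import numpy as np

def lateral_threshold(cross_track_m, k=3.0):
    """Threshold in metres from observed |cross-track| deviations."""
    dev = np.abs(np.asarray(cross_track_m, dtype=float))
    return dev.mean() + k * dev.std(ddof=1)

# Example: simulated deviations (metres) for one SID at one airport
rng = np.random.default_rng(42)
deviations = rng.normal(0.0, 150.0, size=500)   # illustrative data only
print(f"allowed deviation: {lateral_threshold(deviations):.0f} m")
```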

Relevance: 100.00%

Abstract:

Photogrammetric reanalysis of 1985 aerial photos has revealed substantial submarine melting of the floating ice tongue of Jakobshavn Isbrae, west Greenland. The thickness of the floating tongue determined from hydrostatic equilibrium tapers from ~940 m near the grounding zone to ~600 m near the terminus. Feature tracking on orthophotos shows speeds on the July 1985 ice tongue to be nearly constant (~18.5 m/d), indicating negligible dynamic thinning. The thinning of the ice tongue is mostly due to submarine melting with average rates of 228 ± 49 m/yr (0.62 ± 0.13 m/d) between the summers of 1984 and 1985. The cause of the high melt rate is the circulation of warm seawater (thermal forcing of up to 4.2°C) beneath the tongue with convection driven by the substantial discharge of subglacial freshwater from the grounding zone. We believe that this buoyancy-driven convection is responsible for a deep channel incised into the sole of the floating tongue. A dramatic thinning, retreat, and speedup began in 1998 and continues today. The timing of the change is coincident with a 1.1°C warming of deep ocean waters entering the fjord after 1997. Assuming a linear relationship between thermal forcing and submarine melt rate, average melt rates should have increased by ~25% (~57 m/yr), sufficient to destabilize the ice tongue and initiate the ice thinning and the retreat that followed.
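
The two quantitative relations in this abstract (tongue thickness from hydrostatic equilibrium, and the linear melt/thermal-forcing scaling) can be checked in a few lines. This is a rough illustration with assumed ice and seawater densities and a hypothetical freeboard value; it lands close to the quoted figures (~600 m thickness near the terminus, and a ~25% or ~57 m/yr melt-rate increase).

```python
# Small worked check (illustrative values; densities assumed:
# ice ~917 kg/m^3, seawater ~1028 kg/m^3).
rho_i, rho_w = 917.0, 1028.0

# Hydrostatic equilibrium: thickness H from freeboard h,
# H = h * rho_w / (rho_w - rho_i)
h_freeboard = 65.0                        # m, hypothetical photogrammetric value
H = h_freeboard * rho_w / (rho_w - rho_i)
print(f"thickness ~ {H:.0f} m")           # ~602 m, the terminus-scale value

# Linear melt/thermal-forcing scaling: 1.1 C warming on a ~4.2 C forcing
melt_1985 = 228.0                         # m/yr, 1984-1985 average melt rate
increase = melt_1985 * 1.1 / 4.2
print(f"melt-rate increase ~ {increase:.0f} m/yr (~{100 * 1.1 / 4.2:.0f}%)")
```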

Relevance: 100.00%

Abstract:

The total deposition of environmental tobacco smoke (ETS), diesel and petrol smoke in the respiratory tract of 14 non-smokers between the ages of 20 and 30 was determined experimentally. A scanning mobility particle sizer (SMPS) measuring a size range of 0.016-0.626 μm was used to characterise the inhaled and exhaled aerosol during relaxed nasal breathing over a period of 10 min. The ETS, diesel, and petrol particles had average count median diameters (and geometric standard deviations) of 0.183 μm (1.7), 0.125 μm (1.7), and 0.069 μm (1.7), respectively. The average total number deposition of ETS was 36% (standard deviation 10%), of diesel smoke 30% (standard deviation 9%), and of petrol smoke 41% (standard deviation 8%). The analysis of the deposition patterns as a function of particle size for the three aerosols in each individual showed a significant difference between each aerosol for a majority of individuals (12 out of 14). This is an important result, as it indicates that differences persist regardless of inter-subject variability.
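
A minimal sketch of the deposition calculation implied above: total number deposition is one minus the ratio of exhaled to inhaled particle counts, per SMPS size bin and overall. The channel counts below are illustrative placeholders, not the study's data.

```python
import numpy as np

# Illustrative SMPS channel counts (particles/cm^3), not measured data
inhaled = np.array([1200., 2400., 3100., 2800., 1900., 900., 300.])
exhaled = np.array([ 700., 1500., 2000., 1900., 1350., 680., 240.])

df_per_bin = 1.0 - exhaled / inhaled             # deposition fraction per bin
total_df = 1.0 - exhaled.sum() / inhaled.sum()   # total number deposition
print(np.round(df_per_bin, 2), f"total: {total_df:.0%}")
```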

Relevance: 100.00%

Abstract:

BACKGROUND: The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. METHODS: To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. RESULTS: The Log-Normal distribution provided a poor fit for the survey data; the Gamma and Weibull distributions provided better fits. Additionally, our analyses showed no marked differences between the alcohol PAF estimates based on the Gamma or Weibull distributions and PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean: a unit increase in mean consumption was associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R2 = 0.9207) for women and of 1.171 (95% CI: 1.144 to 1.197) (R2 = 0.9474) for men. CONCLUSIONS: Although the Gamma and Weibull distributions provided similar results, the Gamma distribution is recommended for modeling alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large part of the variance of the standard deviation of the Gamma alcohol consumption distribution is explained by the mean consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only the average consumption.
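
The practical upshot of the conclusion is that a Gamma consumption model can be built from the mean alone, using SD = c·mean with c = 1.258 (women) or 1.171 (men). A minimal sketch, with an assumed mean intake:

```python
# Sketch: parameterise a Gamma consumption model from the mean alone,
# using SD = c * mean (c = 1.258 for women, 1.171 for men, per the paper).
from scipy import stats

def gamma_from_mean(mean_gday, c):
    """Gamma(shape k, scale theta): mean = k*theta, sd = sqrt(k)*theta."""
    sd = c * mean_gday
    shape = (mean_gday / sd) ** 2      # k = (mean/sd)^2 = 1/c^2
    scale = sd ** 2 / mean_gday        # theta = sd^2 / mean
    return stats.gamma(a=shape, scale=scale)

dist = gamma_from_mean(12.0, c=1.258)  # e.g. women, mean 12 g/day (assumed)
print(dist.mean(), dist.std())         # recovers 12.0 and 15.096
print(1 - dist.cdf(40.0))              # share drinking above 40 g/day
```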

Relevance: 100.00%

Abstract:

PURPOSE: Neurophysiological monitoring aims to improve the safety of pedicle screw placement, but few quantitative studies assess specificity and sensitivity. In this study, screw placement within the pedicle was measured (post-op CT scan; horizontal and vertical distance from the screw edge to the surface of the pedicle) and correlated with intraoperative neurophysiological stimulation thresholds. METHODS: A single surgeon placed 68 thoracic and 136 lumbar screws in 30 consecutive patients during instrumented fusion under EMG control. The female-to-male ratio was 1.6 and the average age was 61.3 years (SD 17.7). Radiological measurements, blinded to stimulation threshold, were done on reformatted CT reconstructions using OsiriX software. A standard deviation of the screw position of 2.8 mm was determined from pilot measurements, and a screw-pedicle edge distance of 1 mm was considered the difference of interest (standardised difference of 0.35), giving the study a power of 75% (significance level 0.05). RESULTS: Correct placement and stimulation thresholds above 10 mA were found in 71% of screws. Twenty-two percent of screws caused cortical breach; 80% of these had stimulation thresholds above 10 mA (sensitivity 20%, specificity 90%). Correct prediction of screw position was more frequent for lumbar than for thoracic screws. CONCLUSION: A screw stimulation threshold of >10 mA does not indicate correct pedicle screw placement. A hypothesised gradual decrease in screw stimulation threshold as screw placement approaches the nerve root was not observed. Aside from a robust threshold of 2 mA indicating direct contact with nervous tissue, a secondary threshold appears to depend on the patient's pathology and surgical conditions.
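
The power statement can be roughly reconstructed. The sketch below assumes a two-sample t-test framing (an assumption; the abstract does not name the test) and solves for the group size consistent with a standardised difference of 0.35 at 75% power and a 0.05 significance level.

```python
# Rough reconstruction (assumed two-sample t-test framing) of the power
# statement: standardised difference 0.35 = 1 mm / 2.8 mm SD, alpha 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.35, alpha=0.05, power=0.75)
print(f"~{n_per_group:.0f} screws per group for 75% power")
```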

Relevance: 100.00%

Abstract:

Transmission of drug-resistant pathogens presents an almost-universal challenge for fighting infectious diseases. Transmitted drug resistance mutations (TDRM) can persist in the absence of drugs for considerable time. It is generally believed that differential TDRM persistence is caused, at least partially, by variations in TDRM fitness costs. However, in vivo epidemiological evidence for the impact of fitness costs on TDRM persistence is rare. Here, we studied the persistence of TDRM in HIV-1 using longitudinally sampled nucleotide sequences from the Swiss HIV Cohort Study (SHCS). All treatment-naïve individuals with TDRM at baseline were included. Persistence of TDRM was quantified via reversion rates (RR) determined with interval-censored survival models. Fitness costs of TDRM were estimated in the genetic background in which they occurred using a previously published and validated machine-learning algorithm (based on in vitro replicative capacities) and were included in the survival models as explanatory variables. In 857 sequential samples from 168 treatment-naïve patients, 17 TDRM were analyzed. RR varied substantially, ranging from 174.0 per 100 person-years (CI [51.4, 588.8]) for 184V to 2.7 per 100 person-years (CI [0.7, 10.9]) for 215D. RR increased significantly with fitness cost (an increase by a factor of 1.6 [1.3, 2.0] per standard deviation of fitness cost). When subdividing fitness costs into the average fitness cost of a given mutation and the deviation from that average in a given genetic background, we found that both components were significantly associated with reversion rates. Our results show that the substantial variation in TDRM persistence in the absence of drugs is associated with fitness-cost differences both among mutations and among different genetic backgrounds for the same mutation.
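
As a simplified illustration of how rates per 100 person-years are reported with confidence intervals, the sketch below computes an exact Poisson CI from event and follow-up counts. The actual study used interval-censored survival models, which additionally handle the unknown reversion time between two visits; the counts here are hypothetical.

```python
# Minimal sketch: reversion-rate point estimate with an exact (Garwood)
# Poisson confidence interval. Not the study's interval-censored model.
from scipy.stats import chi2

def rate_per_100py(events, person_years, alpha=0.05):
    rate = 100.0 * events / person_years
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 100.0 * lo / person_years, 100.0 * hi / person_years

print(rate_per_100py(events=5, person_years=2.9))  # hypothetical counts
```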

Relevance: 100.00%

Abstract:

Individual identification via DNA profiling is important in molecular ecology, particularly in the case of noninvasive sampling. A key quantity in determining the number of loci required is the probability of identity (PI_ave), the probability of observing two copies of any profile in the population. Previously, this has been calculated assuming no inbreeding or population structure. Here we introduce formulae that account for these factors, whilst also accounting for relatedness structure in the population. These formulae are implemented in API-CALC 1.0, which calculates PI_ave for either a specified value, or a range of values, of F_IS and F_ST.
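
For orientation, the classical no-inbreeding baseline that the paper generalizes fits in a few lines: at a single locus under Hardy-Weinberg equilibrium, PI = 2(Σp_i²)² − Σp_i⁴, multiplied across independent loci. The paper's F_IS/F_ST-corrected formulae are not reproduced here; the allele frequencies below are made up.

```python
# Classical single-locus probability of identity under Hardy-Weinberg,
# no inbreeding: PI = 2*(sum p_i^2)^2 - sum p_i^4, product over loci.
import numpy as np

def pi_locus(p):
    p = np.asarray(p, dtype=float)
    return 2.0 * (p @ p) ** 2 - np.sum(p ** 4)

loci = [[0.5, 0.3, 0.2], [0.4, 0.4, 0.2], [0.7, 0.2, 0.1]]  # allele freqs
pi_total = np.prod([pi_locus(p) for p in loci])
print(f"PI over {len(loci)} loci: {pi_total:.4f}")
```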

Relevance: 100.00%

Abstract:

We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry-breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and 2-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous at α = 0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown not to be valid and a square-root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results. Finally, for α > 1, the ensemble means of the cell area and perimeter restricted to the hexagonal cells agree remarkably well with the full ensemble means; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
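
A minimal sketch of the ensemble experiment described above (not the authors' code): perturb a regular triangular lattice, whose Voronoi cells form the honeycomb hexagonal tessellation, with Gaussian noise of standard deviation α times the mean point spacing, and tally cell side counts.

```python
# Perturbed-lattice Voronoi sketch: noise std = alpha / sqrt(areal density),
# i.e. alpha times the typical inter-point spacing; count sides of cells.
import numpy as np
from scipy.spatial import Voronoi

def triangular_lattice(nx, ny, a=1.0):
    pts = [(i * a + (j % 2) * a / 2.0, j * a * np.sqrt(3) / 2.0)
           for j in range(ny) for i in range(nx)]
    return np.array(pts)

rng = np.random.default_rng(0)
alpha = 0.5
pts = triangular_lattice(30, 30)
area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])
sigma = alpha / np.sqrt(len(pts) / area)      # alpha x mean point spacing
noisy = pts + rng.normal(0.0, sigma, pts.shape)

vor = Voronoi(noisy)
sides = [len(r) for r in (vor.regions[i] for i in vor.point_region)
         if r and -1 not in r]                # keep bounded cells only
print(f"mean sides: {np.mean(sides):.2f}")    # ~6: hexagons dominate
```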

Relevance: 100.00%

Abstract:

This paper presents an assessment of the impacts of climate change on a series of indicators of hydrological regimes across the global domain, using a global hydrological model run with climate scenarios constructed using pattern-scaling from 21 CMIP3 (Coupled Model Intercomparison Project Phase 3) climate models. Changes are compared with natural variability, with a significant change being defined as greater than the standard deviation of the hydrological indicator in the absence of climate change. Under an SRES (Special Report on Emissions Scenarios) A1b emissions scenario, substantial proportions of the land surface (excluding Greenland and Antarctica) would experience significant changes in hydrological behaviour by 2050; under one climate model scenario (Hadley Centre HadCM3), average annual runoff increases significantly over 47% of the land surface and decreases over 36%; only 17% therefore sees no significant change. There is considerable variability between regions, depending largely on projected changes in precipitation. Uncertainty in projected river flow regimes is dominated by variation in the spatial patterns of climate change between climate models (hydrological model uncertainty is not included). There is, however, a strong degree of consistency in the overall magnitude and direction of change. More than two-thirds of climate models project a significant increase in average annual runoff across almost a quarter of the land surface, and a significant decrease over 14%, with considerably higher degrees of consistency in some regions. Most climate models project increases in runoff in Canada and high-latitude eastern Europe and Siberia, and decreases in runoff in central Europe, around the Mediterranean, the Mashriq, central America and Brazil. There is some evidence that projected change in runoff at the regional scale is not linear with change in global average temperature. The effect of uncertainty in the rate of future emissions is relatively small.
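
The significance criterion is easy to state in code: a grid cell changes significantly where the projected change exceeds the indicator's unforced standard deviation. The sketch below uses randomly generated placeholder fields, not the study's model output.

```python
# Sketch of the significance criterion: |projected change| > natural
# (unforced) standard deviation of the indicator, per grid cell.
import numpy as np

rng = np.random.default_rng(7)
natural_runs = rng.normal(500.0, 40.0, size=(30, 180, 360))  # unforced years
future = rng.normal(505.0, 60.0, size=(180, 360))            # 2050 projection

baseline = natural_runs.mean(axis=0)
sigma_nat = natural_runs.std(axis=0, ddof=1)
significant = np.abs(future - baseline) > sigma_nat
print(f"{100.0 * significant.mean():.0f}% of cells change significantly")
```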

Relevance: 100.00%

Abstract:

Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences of up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values of these quantities are residuals of sums of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. This effect arises largely from the construction of the input ozone fields, which overestimate the true ensemble spread. Hence, while the average of multi-model fields is normally appropriate for calculating mean RF, GWP and GTP, it is not a reliable method for calculating the uncertainty in these fields, and in general overestimates the uncertainty.
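
The spread-overestimation mechanism can be demonstrated with a toy ensemble: for a metric that sums terms of opposing signs over a field, pushing mean ± standard-deviation fields through the metric adds the per-point standard deviations linearly, whereas the true ensemble standard deviation of the metric grows only as their quadrature sum. The numbers below are synthetic, not the paper's data.

```python
# Toy demonstration of the averaging-order effect on estimated spread.
import numpy as np

rng = np.random.default_rng(1)
ensemble = rng.normal(0.0, 1.0, size=(21, 500))  # 21 "models" x 500 grid points

per_model_metric = ensemble.sum(axis=1)          # net of +/- contributions
true_std = per_model_metric.std(ddof=1)          # grows ~ sqrt(n_points)

mean_field = ensemble.mean(axis=0)
std_field = ensemble.std(axis=0, ddof=1)
approx_std = ((mean_field + std_field).sum()     # mean+/-std fields add the
              - (mean_field - std_field).sum()) / 2  # stds linearly (~n_points)

print(f"true std: {true_std:.1f}, mean-field estimate: {approx_std:.1f}")
```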

Relevance: 100.00%

Abstract:

Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between ~50°N and ~80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. That gives the least squares stochastic model used for position computation a more realistic representation, vis-à-vis the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement of between 17 and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches that can be used to improve the least squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (σ_CCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation it was observed that problems related to ambiguity resolution can be reduced by the use of the proposed mitigated solution.
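
A minimal sketch of the weighting idea (not the paper's implementation): a weighted least-squares position solution in which each satellite's observation is weighted by the inverse of its modeled tracking-error variance. The geometry matrix and variances below are hypothetical.

```python
# Weighted least squares with per-satellite weights 1/sigma_i^2,
# x = (A^T W A)^-1 A^T W y, as in the 'scintillation-mitigated' scheme.
import numpy as np

def wls_solution(A, y, sigma2):
    W = np.diag(1.0 / np.asarray(sigma2, dtype=float))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

A = np.random.default_rng(3).normal(size=(8, 4))     # 8 satellites, 4 unknowns
truth = np.array([1.0, -2.0, 0.5, 3.0])
sigma2 = np.array([1, 1, 1, 4, 4, 9, 9, 25], float)  # degraded satellites last
y = A @ truth + np.random.default_rng(4).normal(0, np.sqrt(sigma2))
print(wls_solution(A, y, sigma2))                    # close to `truth`
```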

Relevance: 100.00%

Abstract:

A method for the direct determination of Ni in soft drinks by graphite furnace atomic absorption spectrometry using a transversely heated graphite atomizer (THGA), Zeeman-effect background correction, and Co as the internal standard (IS) is proposed. Magnesium nitrate was used to stabilize both Ni and Co. All diluted samples (1+1) in 0.2% (v/v) HNO3 and reference solutions [5.0–50 μg L⁻¹ Ni in 0.2% (v/v) HNO3] were spiked with 50 μg L⁻¹ Co. For a 20 μL sample dispensed into the atomizer, correlations between the ratio of absorbance of Ni to absorbance of Co and the analyte concentration were close to 0.9996. The relative standard deviation of the measurements varied from 0.5 to 3.4% and from 1.0 to 7.0% (n=12) with and without IS, respectively. Recoveries within 98–104% for Ni spikes were obtained using IS. The characteristic mass was calculated as 43 pg Ni and the limit of detection was 1.4 μg L⁻¹. The accuracy of the method was checked by the direct determination of Ni in soft drinks, and the results obtained with IS were better than those without IS.
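
A small sketch of the internal-standard calibration described above: regress the Ni/Co absorbance ratio on the Ni concentration of the reference solutions, then invert the fit for a diluted sample and correct for the (1+1) dilution. The numbers are illustrative placeholders, not the paper's measurements.

```python
# Internal-standard calibration sketch: ratio = slope*conc + intercept.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])    # ug/L Ni standards
ratio = np.array([0.11, 0.21, 0.43, 0.62, 0.83, 1.04])  # A_Ni / A_Co (made up)

slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]
print(f"r = {r:.4f}")                                   # ~0.9996 in the paper

sample_ratio = 0.37                                     # diluted (1+1) sample
ni = (sample_ratio - intercept) / slope
print(f"Ni in sample: {2 * ni:.1f} ug/L (x2 for dilution)")
```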

Relevance: 100.00%

Abstract:

We present a search for the standard model Higgs boson in final states with a charged lepton (electron or muon), missing transverse energy, and two or three jets, at least one of which is identified as a b-quark jet. The search is primarily sensitive to WH→ℓνbb̄ production and uses data corresponding to 9.7 fb⁻¹ of integrated luminosity collected with the D0 detector at the Fermilab Tevatron pp̄ Collider at √s = 1.96 TeV. We observe agreement between the data and the expected background. For a Higgs boson mass of 125 GeV, we set a 95% C.L. upper limit on the production of a standard model Higgs boson of 5.2×σ_SM, where σ_SM is the standard model Higgs boson production cross section, while the expected limit is 4.7×σ_SM.

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Objective: To determine variables that predict the rate of decline in fetal hemoglobin levels in alloimmune disease. Method: Retrospective review of singleton pregnancies that underwent first and second intrauterine transfusions for treatment of fetal anemia due to maternal Rh alloimmunization in a tertiary referral center. Results: Forty-one first intrauterine transfusions were performed at 26.1 weeks (standard deviation, SD, 4.6); the mean volume of blood transfused was 44.4 mL (SD 23.5) and the estimated feto-placental volume expansion was 51.3% (SD 14.5%). Between the first and second transfusions, hemoglobin levels fell on average by 0.40 g/dL/day (SD 0.25). Stepwise multiple regression analysis demonstrated that this rate correlated significantly with the hemoglobin level after the first transfusion, the interval between the two procedures, and the middle cerebral artery systolic velocity before the second transfusion. Conclusion: The rate of decline in fetal hemoglobin levels between the first and second transfusions in alloimmune disease can be predicted by a combination of the hemoglobin level after the first transfusion, the interval between the two procedures, and the middle cerebral artery systolic velocity before the second transfusion.
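
A sketch of how such a predictive model could be fitted, using ordinary least squares as a stand-in for the stepwise procedure; all variable names, values, and coefficients below are simulated placeholders, not the study's data.

```python
# OLS stand-in for the stepwise regression: predict decline rate (g/dL/day)
# from the three retained predictors. Simulated data for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 41
hb_post = rng.normal(14.0, 1.5, n)       # Hb after first transfusion (g/dL)
interval = rng.normal(21.0, 5.0, n)      # days between the two transfusions
mca_psv = rng.normal(1.6, 0.2, n)        # MCA peak systolic velocity (MoM)
decline = (0.02 * hb_post - 0.005 * interval + 0.15 * mca_psv
           + rng.normal(0.0, 0.05, n))   # synthetic outcome

X = sm.add_constant(np.column_stack([hb_post, interval, mca_psv]))
model = sm.OLS(decline, X).fit()
print(model.params, model.rsquared)
```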