973 results for "Error in substance"


Relevance: 90.00%

Abstract:

Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule, a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model, leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as a step between the mean-delay and the peak-delay constrained capacities. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link in order to achieve the variance-constrained capacity with perfect synchronization.
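The delay model behind such timing channels can be sketched in a few lines. This is a hedged illustration, not the paper's exact model: it assumes the first-arrival delay of a molecule drifting a distance `dist` at velocity `v` with diffusion coefficient `D` is inverse-Gaussian (Wald) distributed, with all parameter values made up.

```python
# Hypothetical sketch of a molecular timing channel: information is the
# release time, and the channel adds an inverse-Gaussian (Wald)
# first-passage delay. All numbers here are illustrative.
import numpy as np

def arrival_times(release, dist, v, D, rng):
    """Release times plus IG first-passage delays (drift v, diffusion D)."""
    mean = dist / v            # mean first-passage time
    shape = dist**2 / (2 * D)  # Wald shape parameter
    return release + rng.wald(mean, shape, size=len(release))

rng = np.random.default_rng(0)
release = np.arange(100.0)  # transmitter's intended release times
rx = arrival_times(release, dist=1.0, v=2.0, D=0.5, rng=rng)
delays = rx - release
print(delays.mean())  # close to dist/v = 0.5
```

A faster drift velocity shrinks both the mean and the variance of the delay, which is the intuition behind using a high-drift clock link to limit synchronization error.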

Relevance: 90.00%

Abstract:

Basal ice samples were collected from ice exposures in a natural subglacial cavity beneath an outlet glacier of Øksfjordjøkelen, North Norway. Sediment and cation (Ca2+, Mg2+, Na+, K+) concentrations were then determined, and indicate stacking of basal ice units producing a repeating pattern of 'clean firnification ice' overlying sediment-rich ice. All measured cations correlate with sediment concentration, indicating that weathering reactions are the dominant contributor of cations. Regressions of specific sediment surface area per unit volume against cation concentration are performed and used to predict cation concentrations. These predicted values provide an indication of cation relocation within the basal ice sequence. The results suggest limited melting and refreezing, resulting in the relocation of predominantly monovalent cations downward through the profile. Exchange of cations into solution during the melting of sediment-rich ice samples has previously been suggested as a source of error in such investigations. Analyses of sediment-free regelation ice spicules formed at the bed show cation concentrations above firnification-ice levels and comparable, in many instances, to those of the basal ice samples.
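The regression-and-predict step described here can be sketched as follows, with entirely synthetic surface-area and concentration values (the study's data are not reproduced):

```python
# Illustrative sketch: regress cation concentration on specific sediment
# surface area per unit volume, then use the fit to predict concentrations;
# residuals hint at cation relocation. Numbers are made up.
import numpy as np

surface_area = np.array([0.5, 1.0, 2.0, 3.5, 5.0])   # hypothetical m^2/l
ca_conc = np.array([12.0, 21.0, 41.0, 70.0, 101.0])  # hypothetical ueq/l

slope, intercept = np.polyfit(surface_area, ca_conc, 1)
predicted = slope * surface_area + intercept
residuals = ca_conc - predicted  # deviations suggest relocation
print(round(slope, 2), round(intercept, 2))
```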

Relevance: 90.00%

Abstract:

OBJECTIVE:

To estimate the prevalence of refractive errors in persons 40 years and older.

METHODS:

Counts of persons with phakic eyes with and without spherical equivalent refractive error in the worse eye of +3 diopters (D) or greater, -1 D or less, and -5 D or less were obtained from population-based eye surveys in strata of gender, race/ethnicity, and 5-year age intervals. Pooled age-, gender-, and race/ethnicity-specific rates for each refractive error were applied to the corresponding stratum-specific US, Western European, and Australian populations (years 2000 and projected 2020).

RESULTS:

Six studies provided data from 29 281 persons. In the US, Western European, and Australian year 2000 populations 40 years or older, the estimated crude prevalence for hyperopia of +3 D or greater was 9.9%, 11.6%, and 5.8%, respectively (11.8 million, 21.6 million, and 0.47 million persons). For myopia of -1 D or less, the estimated crude prevalence was 25.4%, 26.6%, and 16.4% (30.4 million, 49.6 million, and 1.3 million persons), respectively, of whom 4.5%, 4.6%, and 2.8% (5.3 million, 8.5 million, and 0.23 million persons), respectively, had myopia of -5 D or less. Projected prevalence rates in 2020 were similar.

CONCLUSIONS:

Refractive errors affect approximately one third of persons 40 years or older in the United States and Western Europe, and one fifth of Australians in this age group.
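The pooling step in the methods above amounts to multiplying stratum-specific rates by census counts. A minimal sketch with two made-up strata (the study's actual rates and populations are not restated here):

```python
# Hedged sketch of applying pooled stratum-specific prevalence rates to
# the corresponding population strata; rates and counts are illustrative.
strata = {  # (gender, age band): (myopia rate at -1 D or less, population)
    ("female", "40-44"): (0.30, 9_500_000),
    ("male",   "60-64"): (0.18, 5_200_000),
}

total_cases = sum(rate * pop for rate, pop in strata.values())
total_pop = sum(pop for _, pop in strata.values())
crude_prevalence = total_cases / total_pop
print(round(total_cases), round(crude_prevalence, 3))
```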

Relevance: 90.00%

Abstract:

OBJECTIVES: To evaluate different refractive cutoffs for spectacle provision with regard to their impact on visual improvement and spectacle compliance. DESIGN: Prospective study of visual improvement and spectacle compliance. PARTICIPANTS: South African school children aged 6-19 years receiving free spectacles in a programme supported by Helen Keller International. METHODS: Refractive error, age, gender, urban versus rural residence, and presenting and best-corrected vision were recorded for participants. Spectacle wear was observed directly at an unannounced follow-up examination 4-11 months after initial provision of spectacles. The association between five proposed refractive cutoff protocols and visual improvement and spectacle compliance was examined in separate multivariate models. MAIN OUTCOMES: Refractive cutoffs for spectacle distribution that would effectively identify children with improved vision, and those more likely to comply with spectacle wear. RESULTS: Among 8520 children screened, 810 (9.5%) received spectacles, of whom 636 (79%) were aged 10-14 years, 530 (65%) were girls, 324 (40%) had vision improvement of ≥ 3 lines, and 483 (60%) were examined 6.4 ± 1.5 (range 4.6 to 10.9) months after spectacle dispensing. Among examined children, 149 (31%) were wearing or carrying their glasses. Children meeting cutoffs of ≤ -0.75 D of myopia, ≥ +1.00 D of hyperopia, and ≥ +0.75 D of astigmatism had significantly greater improvement in vision than children failing to meet these criteria, when adjusting for age, gender, and urban versus rural residence. None of the proposed refractive protocols discriminated between children wearing and not wearing spectacles. Presenting vision and improvement in vision were unassociated with subsequent spectacle wear, but girls (p ≤ 0.0006 for all models) were more likely to be wearing glasses than were boys.
CONCLUSIONS: To the best of our knowledge, this is the first suggested refractive cutoff for glasses dispensing validated with respect to key programme outcomes. The lack of association between spectacle retention and either refractive error or vision may have been due to the relatively modest degree of refractive error in this African population.
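The three reported cutoffs reduce to a simple eligibility rule. A minimal sketch, assuming spherical-equivalent and cylinder values in dioptres (the paper's exact sign conventions are not restated here, so this is an illustration only):

```python
# Hypothetical helper implementing the dispensing cutoffs quoted above;
# the negative-cylinder convention is an assumption of this sketch.
def meets_cutoff(sphere_eq, cylinder):
    """True if the refraction meets any of the three reported cutoffs."""
    myopic = sphere_eq <= -0.75        # myopia of -0.75 D or worse
    hyperopic = sphere_eq >= 1.00      # hyperopia of +1.00 D or more
    astigmatic = abs(cylinder) >= 0.75 # astigmatism of 0.75 D or more
    return myopic or hyperopic or astigmatic

print(meets_cutoff(-1.25, 0.0), meets_cutoff(-0.25, -0.50))
```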

Relevance: 90.00%

Abstract:

PURPOSE: To evaluate the prevalence and causes of visual impairment among Chinese children aged 3 to 6 years in Beijing. DESIGN: Population-based prevalence survey. METHODS: Presenting and pinhole visual acuity were tested using picture optotypes or, in children with pinhole vision < 6/18, a Snellen tumbling E chart. Comprehensive eye examinations and cycloplegic refraction were carried out for children with pinhole vision < 6/18 in the better-seeing eye. RESULTS: All examinations were completed on 17,699 children aged 3 to 6 years (95.3% of sample). Subjects with bilateral correctable low vision (presenting vision < 6/18 correctable to ≥ 6/18) numbered 57 (0.322%; 95% confidence interval [CI], 0.237% to 0.403%), while 14 (0.079%; 95% CI, 0.038% to 0.120%) had bilateral uncorrectable low vision (best-corrected vision of < 6/18 and ≥ 3/60), and 5 subjects (0.028%; 95% CI, 0.004% to 0.054%) were bilaterally blind (best-corrected acuity < 3/60). The etiology of 76 cases of visual impairment included: refractive error in 57 children (75%), hereditary factors (microphthalmos, congenital cataract, congenital motor nystagmus, albinism, and optic nerve disease) in 13 children (17.1%), amblyopia in 3 children (3.95%), and cortical blindness in 1 child (1.3%). The cause of visual impairment could not be established in 2 (2.63%) children. The prevalence of visual impairment did not differ by gender, but correctable low vision was significantly (P < .0001) more common among urban as compared with rural children. CONCLUSION: The leading causes of visual impairment among Chinese preschool-aged children are refractive error and hereditary eye diseases. A higher prevalence of refractive error is already present among urban as compared with rural children in this preschool population.
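Figures like "57 (0.322%; 95% CI, 0.237% to 0.403%)" follow from a standard proportion confidence interval. A sketch using the normal approximation (the study may have used an exact method, so the bounds match only approximately):

```python
# Normal-approximation (Wald) confidence interval for a prevalence
# estimate; illustrative, not necessarily the study's exact method.
import math

def prevalence_ci(cases, n, z=1.96):
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(57, 17_699)
print(f"{100*p:.3f}% ({100*lo:.3f}%-{100*hi:.3f}%)")
```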

Relevance: 90.00%

Abstract:

A growing literature considers the impact of uncertainty using SVAR models that include proxies for uncertainty shocks as endogenous variables. In this paper we consider the impact of measurement error in these proxies on the estimated impulse responses. We show via a Monte Carlo experiment that measurement error can result in attenuation bias in impulse responses. In contrast, the proxy SVAR that uses the uncertainty shock proxy as an instrument does not suffer from this bias. Applying this latter method to the Bloom (2009) data set results in impulse responses to uncertainty shocks that are larger in magnitude and more persistent than those obtained from a recursive SVAR.
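The attenuation-bias mechanism the paper studies can be demonstrated in a few lines. This is a stylized regression sketch, not the paper's SVAR: a response is regressed on a noisily measured shock, and the coefficient shrinks toward zero by the signal-to-total-variance ratio.

```python
# Monte-Carlo sketch of attenuation bias: measurement error in the
# regressor shrinks the estimated coefficient. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 1.0
shock = rng.standard_normal(n)          # true uncertainty shock
proxy = shock + rng.standard_normal(n)  # proxy = shock + measurement error
y = beta * shock + 0.1 * rng.standard_normal(n)

b_true = np.cov(y, shock)[0, 1] / np.var(shock)
b_proxy = np.cov(y, proxy)[0, 1] / np.var(proxy)
print(round(b_true, 2), round(b_proxy, 2))  # second is attenuated toward 0.5
```

With equal signal and noise variances the slope is biased toward half its true value, which is the sense in which impulse responses built on the mismeasured proxy are too small.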

Relevance: 90.00%

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
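The replicate-based idea reduces to counting disagreements between paired calls for the same individual. A toy sketch with made-up genotype strings (not Stacks output):

```python
# Hedged sketch: with two genotype calls per locus for the same sample,
# any mismatch is a genotyping error, so the locus error rate is the
# mismatch fraction across loci. Calls below are illustrative.
replicate_a = ["AA", "AG", "GG", "AA", "CT", "CC"]
replicate_b = ["AA", "AG", "GG", "AG", "CT", "CC"]

mismatches = sum(a != b for a, b in zip(replicate_a, replicate_b))
error_rate = mismatches / len(replicate_a)
print(error_rate)  # 1 mismatch out of 6 loci
```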

Relevance: 90.00%

Abstract:

Next-generation sequencing (NGS) technologies have become the standard for data generation in studies of population genomics, such as the 1000 Genomes Project (1000G). However, these techniques are known to be problematic when applied to highly polymorphic genomic regions, such as the human leukocyte antigen (HLA) genes. Because accurate genotype calls and allele frequency estimations are crucial to population genomics analyses, it is important to assess the reliability of NGS data. Here, we evaluate the reliability of genotype calls and allele frequency estimates of the single-nucleotide polymorphisms (SNPs) reported by 1000G (phase I) at five HLA genes (HLA-A, -B, -C, -DRB1, and -DQB1). We take advantage of the availability of HLA Sanger sequencing of 930 of the 1092 1000G samples and use this as a gold standard to benchmark the 1000G data. We document that 18.6% of SNP genotype calls in HLA genes are incorrect and that allele frequencies are estimated with an error greater than ±0.1 at approximately 25% of the SNPs in HLA genes. We found a bias toward overestimation of reference allele frequency in the 1000G data, indicating that mapping bias is an important cause of error in frequency estimation in this dataset. We provide a list of sites that have poor allele frequency estimates and discuss the outcomes of including those sites in different kinds of analyses. Because the HLA region is the most polymorphic in the human genome, our results provide insights into the challenges of using NGS data at other genomic regions of high diversity.

Relevance: 90.00%

Abstract:

The influence of peak-dose drug-induced dyskinesia (DID) on manual tracking (MT) was examined in 10 dyskinetic Parkinson's disease patients (DPD), and compared to 10 age/gender-matched non-dyskinetic patients (NDPD) and 10 healthy controls. Whole-body movement (WBM) and MT were recorded with a 6-degrees-of-freedom magnetic motion tracker and forearm rotation sensors, respectively. Subjects were asked to match the length of a computer-generated line with a line controlled via wrist rotation. Results show that DPD patients had greater WBM displacement and velocity than the other groups. All groups displayed increased WBM from rest to MT, but only DPD and NDPD patients demonstrated a significant increase in WBM displacement and velocity. In addition, DPD patients exhibited an excessive increase in WBM, suggesting overflow DID. When two distinct target pace segments were examined (FAST/SLOW), all groups had slight increases in WBM displacement and velocity from SLOW to FAST, but only DPD patients showed significantly increased WBM displacement and velocity from SLOW to FAST. Therefore, it can be suggested that overflow DID was further increased with increased task speed. DPD patients also showed significantly greater ERROR matching target velocity, but no significant difference in ERROR in displacement, indicating that the significantly greater WBM displacement in the DPD group did not have a direct influence on tracking performance. Individual target and performance traces demonstrated this relatively good tracking performance, with the exception of distinct deviations from the target trace that occurred suddenly, followed by quick returns to the target, coherent in time with increased performance velocity. In addition, performance hand velocity was not correlated with WBM velocity in DPD patients, suggesting that the increased ERROR in velocity was not a direct result of WBM velocity.
In conclusion, we propose that over-excitation of motor cortical areas, reported to be present in DPD patients, resulted in overflow DID during voluntary movement. Furthermore, we propose that the increased ERROR in velocity was the result of hypermetric voluntary movements, also originating from the over-excitation of motor cortical areas.

Relevance: 90.00%

Abstract:

We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
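The core subsampling idea can be illustrated on toy i.i.d. data (a deliberate simplification of the paper's high-frequency setting): the dispersion of a statistic across subsamples of size b, rescaled by b/n, estimates the variance of the full-sample statistic.

```python
# Generic subsampling sketch for the variance of a statistic (here the
# sample mean of i.i.d. data); not the paper's estimator, just the idea.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)
n, b = len(x), 100

theta_n = x.mean()
blocks = x.reshape(n // b, b)           # non-overlapping subsamples
theta_b = blocks.mean(axis=1)           # statistic on each subsample
var_hat = (b / n) * np.mean((theta_b - theta_n) ** 2)
print(var_hat)  # should be near Var(mean) = 1/n = 1e-4
```

No analytic variance formula is needed, which is the appeal when the asymptotic variance of a high-frequency estimator is unknown or unreliable.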

Relevance: 90.00%

Abstract:

It has become clear over the last few years that many deterministic dynamical systems described by simple but nonlinear equations with only a few variables can behave in an irregular or random fashion. This phenomenon, commonly called deterministic chaos, is essentially due to the fact that we cannot deal with infinitely precise numbers. In these systems, trajectories emerging from nearby initial conditions diverge exponentially as time evolves, and therefore any small error in the initial measurement spreads considerably with time, leading to unpredictable and chaotic behaviour. The thesis work is mainly centered on the asymptotic behaviour of nonlinear and nonintegrable dissipative dynamical systems. It is found that completely deterministic nonlinear differential equations describing such systems can exhibit random or chaotic behaviour. Theoretical studies of this chaotic behaviour can enhance our understanding of various phenomena such as turbulence, nonlinear electronic circuits, erratic behaviour of the heart and brain, fundamental molecular reactions involving DNA, meteorological phenomena, fluctuations in the cost of materials, and so on. Chaos is studied mainly under two different approaches: the nature of the onset of chaos, and the statistical description of the chaotic state.
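The exponential divergence of nearby trajectories described above is easy to demonstrate with the logistic map, a standard textbook example (not drawn from this thesis):

```python
# Sensitive dependence on initial conditions: two logistic-map orbits
# x -> r*x*(1-x) starting 1e-10 apart separate to order one within a
# few dozen iterations at r = 4.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)
gap = [abs(u - v) for u, v in zip(a, b)]
print(gap[0], max(gap))  # tiny initial error grows to order one
```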

Relevance: 90.00%

Abstract:

The present work deals with the errors that can arise in the analysis of load-bearing structures: the discretization error and the model error. A central tool for examining the local error in an FE computation is the Green's function, which, as can be shown, also plays a key role in other areas of structural analysis. To ensure the correct use of the Green's function with the FE technique, its properties and its consistent generation are presented. With the proposed procedure, the Lagrange method, it becomes possible to determine a Green's function even for nonlinear problems. A logical consequence of these considerations is the improvement of the influence function through the use of fundamental solutions. The Green's function is thereby split into the fundamental solution and a regular part, which is determined by the FE technique. With this method, applied here to the Kirchhoff plate, one obtains considerably more accurate results than with the FE method at comparable computational cost, as the numerical studies show. The Lagrange method offers a general approach to the second type of error, the model error, and can be applied to linear and nonlinear problems. Here too, the Green's function again plays a central role, making it possible to examine the effects of parameter changes on selected quantities of interest.
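The splitting described above can be written compactly. The notation here is a hedged sketch (the symbols are not taken from the thesis): the Green's function for a target quantity is separated into the singular fundamental solution and a regular remainder that the FE method approximates.

```latex
% Sketch of the splitting: G carries the singularity in g_0, while the
% regular part u_R is computed by the FE method, so the FE error no
% longer has to resolve the singularity at the source point x.
\[
  G(y, x) = g_0(y, x) + u_R(y, x), \qquad
  J(u) = \int_\Omega G(y, x)\, p(y)\, \mathrm{d}\Omega_y ,
\]
```

Because only \(u_R\) is approximated numerically, the influence function \(J(u)\) inherits the analytic accuracy of \(g_0\) near the load point.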

Relevance: 90.00%

Abstract:

Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.

Relevance: 90.00%

Abstract:

In networks with small buffers, such as optical packet switching (OPS) based networks, the convolution approach is one of the most accurate methods used for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach is a good solution. However, both methods (CA and ECA) present a high computational cost for a large number of connections. Two new mechanisms (UMCA and ISCA), based on the Monte Carlo method, are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
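The flavour of a Monte Carlo admission-control estimate can be sketched as follows. This is a generic stand-in, not the UMCA/ISCA algorithms (whose details are not given in the abstract): in a bufferless link, sample each admitted connection's on/off state and estimate the probability that the aggregate rate exceeds capacity.

```python
# Monte-Carlo overflow-probability sketch for a bufferless link;
# on-probabilities, rates and capacity are illustrative.
import random

random.seed(3)
connections = [(0.3, 10.0)] * 40  # (on-probability, peak rate) per flow
capacity = 160.0
trials = 20_000

overflow = 0
for _ in range(trials):
    load = sum(rate for p_on, rate in connections if random.random() < p_on)
    overflow += load > capacity
print(overflow / trials)  # estimated overflow probability
```

The convolution approach computes this probability exactly but at a cost that grows with the number of connections; the sampling estimate trades exactness for a cost that depends only on the number of trials.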

Relevance: 90.00%

Abstract:

In the Radiative Atmospheric Divergence Using ARM Mobile Facility GERB and AMMA Stations (RADAGAST) project we calculate the divergence of radiative flux across the atmosphere by comparing fluxes measured at each end of an atmospheric column above Niamey, in the African Sahel region. The combination of broadband flux measurements from geostationary orbit and the deployment for over 12 months of a comprehensive suite of active and passive instrumentation at the surface eliminates a number of sampling issues that could otherwise affect divergence calculations of this sort. However, one sampling issue that challenges the project is the fact that the surface flux data are essentially measurements made at a point, while the top-of-atmosphere values are taken over a solid angle that corresponds to an area at the surface of some 2500 km2. Variability of cloud cover and aerosol loading in the atmosphere means that the downwelling fluxes, even when averaged over a day, will not exactly match the area-averaged value over that larger area, although we might expect it to be an unbiased estimate thereof. The heterogeneity of the surface, for example fixed variations in albedo, further means that there is a likely systematic difference in the corresponding upwelling fluxes. In this paper we characterize and quantify this spatial sampling problem. We bound the root-mean-square error in the downwelling fluxes by exploiting a second set of surface flux measurements from a site that was run in parallel with the main deployment. The differences in the two sets of fluxes lead us to an upper bound on the sampling uncertainty, and their correlation leads to another, which is probably optimistic as it requires certain other conditions to be met.
For the upwelling fluxes we use data products from a number of satellite instruments to characterize the relevant heterogeneities and so estimate the systematic effects that arise from the flux measurements having to be taken at a single point. The sampling uncertainties vary with the season, being higher during the monsoon period. We find that the sampling errors for the daily average flux are small for the shortwave irradiance, generally less than 5 W m−2, under relatively clear skies, but these increase to about 10 W m−2 during the monsoon. For the upwelling fluxes, again taking daily averages, systematic errors are of order 10 W m−2 as a result of albedo variability. The uncertainty on the longwave component of the surface radiation budget is smaller than that on the shortwave component, in all conditions, but a bias of 4 W m−2 is calculated to exist in the surface-leaving longwave flux.
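The two-site bounding argument amounts to a root-mean-square difference between paired daily means. A sketch with synthetic flux values in W m−2 (not RADAGAST data):

```python
# Hedged sketch: the RMS difference between daily-mean fluxes at two
# nearby stations bounds the point-to-area sampling error. Values below
# are made up for illustration.
import math

site_main = [210.0, 195.0, 220.0, 205.0, 188.0]
site_aux = [214.0, 191.0, 226.0, 203.0, 194.0]

diffs = [a - b for a, b in zip(site_main, site_aux)]
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
print(round(rms, 1))  # upper bound on the point-sampling uncertainty
```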