936 results for errors-in-variables model


Relevance: 100.00%

Publisher:

Abstract:

The truncation errors associated with finite difference solutions of the advection-dispersion equation with first-order reaction are formulated from a Taylor analysis. The error expressions are based on a general form of the corresponding difference equation, and a temporally and spatially weighted parametric approach is used for differentiating among the various finite difference schemes. The numerical truncation errors are defined using Peclet and Courant numbers and a new Sink/Source dimensionless number. It is shown that all of the finite difference schemes suffer from truncation errors. In particular, it is shown that the Crank-Nicolson approximation scheme does not have second-order accuracy for this case. The effects of these truncation errors on the solution of an advection-dispersion equation with a first-order reaction term are demonstrated by comparison with an analytical solution. The results show that these errors are not negligible and that correcting the finite difference scheme for them results in a more accurate solution. (C) 1999 Elsevier Science B.V. All rights reserved.
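As a toy illustration of the dimensionless groups in which these truncation errors are expressed, the grid Peclet and Courant numbers can be computed directly; the function names and sample values below are illustrative only, and the paper's new Sink/Source number is not reproduced here:

```python
# Grid Peclet and Courant numbers, the dimensionless groups that govern the
# truncation error of finite-difference advection-dispersion schemes.
def peclet(v, dx, D):
    """Grid Peclet number Pe = v*dx/D: advection vs. dispersion strength."""
    return v * dx / D

def courant(v, dt, dx):
    """Courant number Cr = v*dt/dx: distance advected per step, in cells."""
    return v * dt / dx

# e.g. v = 1.0 m/d, dx = 0.5 m, D = 0.1 m^2/d, dt = 0.2 d (made-up values)
Pe = peclet(1.0, 0.5, 0.1)   # -> 5.0
Cr = courant(1.0, 0.2, 0.5)  # -> 0.4
```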

Abstract:

Background: Organs from so-called marginal donors have been used with a significantly higher risk of primary non-function than organs retrieved from optimal donors. We investigated the early metabolic changes and blood flow redistribution in the splanchnic territory in an experimental model that mimics the marginal brain-dead (BD) donor. Material/Methods: Ten dogs (21.3 +/- 0.9 kg) were subjected to a brain death protocol induced by subdural balloon inflation and observed for 30 min thereafter without any additional interventions. Mean arterial and intracranial pressures, heart rate, cardiac output (CO), portal vein and hepatic artery blood flows (PVBF and HABF, ultrasonic flowprobe), and O(2)-derived variables were evaluated. Results: An increase in arterial pressure, CO, PVBF and HABF was observed after BD induction. At the end, intense hypotension with normalization of CO (3.0 +/- 0.2 vs. 2.8 +/- 2.8 L/min) and PVBF (687 +/- 114 vs. 623 +/- 130 ml/min) was observed, whereas HABF (277 +/- 33 vs. 134 +/- 28 ml/min, p<0.005) remained lower than baseline values. Conclusions: Despite the severe hypotension induced by a sudden increase of intracranial pressure, systemic and splanchnic blood flows were partially preserved without signs of severe hypoperfusion (i.e. hyperlactatemia). Additionally, HABF was the most negatively affected in this model of the marginal BD donor. Our data suggest that not only cardiac output but also an intrinsic hepatic microcirculatory mechanism plays a role in hepatic blood flow control after BD.

Abstract:

We analyze the sequences of round-off errors of the orbits of a discretized planar rotation, from a probabilistic angle. It was shown [Bosio & Vivaldi, 2000] that for a dense set of parameters, the discretized map can be embedded into an expanding p-adic dynamical system, which serves as a source of deterministic randomness. For each parameter value, these systems can generate infinitely many distinct pseudo-random sequences over a finite alphabet, whose average period is conjectured to grow exponentially with the bit-length of the initial condition (the seed). We study some properties of these symbolic sequences, deriving a central limit theorem for the deviations between round-off and exact orbits, and obtain bounds concerning repetitions of words. We also explore some asymptotic problems computationally, verifying, among other things, that the occurrence of words of a given length is consistent with that of an abstract Bernoulli sequence.
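A minimal sketch of such a discretized rotation and its round-off deviations, assuming the common lattice-map form (x, y) → (⌊λx⌋ − y, x) with λ = 2 cos θ; the paper's p-adic embedding is not reproduced here:

```python
import math

# Compare the round-off (lattice) orbit with its exact counterpart and
# record the accumulated deviation at each step. The map form is a common
# presentation of discretized planar rotations, not the paper's exact setup.
def orbits(theta, seed, n):
    lam = 2.0 * math.cos(theta)
    xd, yd = seed                              # discretized (integer) orbit
    xe, ye = float(seed[0]), float(seed[1])    # exact orbit
    devs = []
    for _ in range(n):
        xd, yd = math.floor(lam * xd) - yd, xd  # round-off map
        xe, ye = lam * xe - ye, xe              # exact elliptic rotation
        devs.append(xd - xe)                    # deviation between orbits
    return devs

devs = orbits(2 * math.pi / 5, (1, 0), 100)
```

The sequence of deviations is the object whose statistics (central limit behaviour, word repetitions) the paper studies.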

Abstract:

This paper compares methods for calculating Input-Output (IO) Type II multipliers. These are formulations of the standard Leontief IO model which endogenise elements of household consumption. An analytical comparison of the two basic IO Type II multiplier methods with the Social Accounting Matrix (SAM) multiplier approach identifies the treatment of non-wage income generated in production as a central problem. The multiplier values for each of the IO and SAM methods are calculated using Scottish data for 2009. These results can be used to choose which Type II IO multiplier to adopt where SAM multiplier values are unavailable.
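A hedged numerical sketch of the two multiplier types, with made-up coefficients rather than the Scottish 2009 data, and using one common Type II convention (summing only the industry rows of the closed inverse):

```python
import numpy as np

# Type I vs. Type II output multipliers. Type II closes the Leontief model
# with respect to households by bordering A with a wage-income row w and a
# household-consumption column c. All coefficients below are illustrative.
A = np.array([[0.20, 0.30],
              [0.10, 0.25]])      # technical coefficients (assumed)
w = np.array([0.30, 0.25])        # wage income per unit of output (assumed)
c = np.array([0.40, 0.35])        # household spending per unit income (assumed)

# Type I: column sums of the open Leontief inverse (I - A)^-1
type1 = np.linalg.inv(np.eye(2) - A).sum(axis=0)

# Type II: close the model, then sum the industry rows of the enlarged inverse
A2 = np.block([[A, c[:, None]],
               [w[None, :], np.zeros((1, 1))]])
type2 = np.linalg.inv(np.eye(3) - A2)[:2, :2].sum(axis=0)

print(type1, type2)  # Type II exceeds Type I: induced household spending
```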

Abstract:

Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.

Abstract:

This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
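For intuition on the fluctuating-velocity regime, a sketch of the Baumol (1952) square-root rule cited above; the functional form and numbers are illustrative, not the paper's calibrated model:

```python
import math

# Baumol square-root rule: with transaction cost b, spending Y and nominal
# interest rate i, average money holdings are M = sqrt(b*Y/(2*i)), so
# velocity Y/M = sqrt(2*i*Y/b) rises with the interest rate.
def velocity(Y, b, i):
    M = math.sqrt(b * Y / (2.0 * i))  # average money balance
    return Y / M

v_low = velocity(100.0, 1.0, 0.02)   # ~= 2.0
v_high = velocity(100.0, 1.0, 0.08)  # ~= 4.0: higher i, faster circulation
```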

Abstract:

According to the working memory model, the phonological loop is the component of working memory specialized in processing and manipulating limited amounts of speech-based information. The Children's Test of Nonword Repetition (CNRep) is a suitable measure of phonological short-term memory for English-speaking children, which was validated by the Brazilian Children's Test of Pseudoword Repetition (BCPR) as a Portuguese-language version. The objectives of the present study were: i) to investigate developmental aspects of phonological memory processing by error analysis in the nonword repetition task, and ii) to examine phoneme (substitution, omission and addition) and order (migration) errors made in the BCPR by 180 normal Brazilian children of both sexes aged 4-10, from preschool to 4th grade. The dominant error was substitution [F(3,525) = 180.47; P < 0.0001]. The performance was age-related [F(4,175) = 14.53; P < 0.0001]. The length effect, i.e., more errors in long than in short items, was observed [F(3,519) = 108.36; P < 0.0001]. In 5-syllable pseudowords, errors occurred mainly in the middle of the stimuli, before the syllabic stress [F(4,16) = 6.03; P = 0.003]; substitutions appeared more at the end of the stimuli, after the stress [F(12,48) = 2.27; P = 0.02]. In conclusion, the BCPR error analysis supports the idea that phonological loop capacity is relatively constant during development, although school learning increases the efficiency of this system. Moreover, there are indications that long-term memory contributes to holding the memory trace. The findings were discussed in terms of the distinctiveness, clustering and redintegration hypotheses.

Abstract:

We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
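A generic, univariate subsampling sketch in the Politis-Romano spirit, not the paper's multivariate high-frequency procedure; the statistic, block size and data below are illustrative:

```python
import random
import statistics

# Subsampling idea: the sampling variance of a statistic computed on n points
# is approximated by the rescaled variance of the same statistic computed on
# overlapping blocks of length b.
def subsample_variance(x, stat, b):
    n = len(x)
    sub = [stat(x[i:i + b]) for i in range(n - b + 1)]
    return (b / n) * statistics.pvariance(sub)

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
est = subsample_variance(x, statistics.fmean, 50)  # variance of the mean
ref = statistics.pvariance(x) / len(x)             # analytic benchmark
```

For the sample mean of i.i.d. data the analytic benchmark is available, so the two estimates can be compared; the point of the method is that it also works when no such closed form exists.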

Abstract:

The results of an investigation on the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors that is relevant in the context of the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables (density in situ, σt, specific volume and specific volume anomaly) due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, have been estimated and presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new method which is more reliable. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
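The propagation machinery referred to above is standard first-order (Gaussian) error propagation; a sketch applied to a simplified linear equation of state, whose form and coefficients are illustrative rather than the thesis's formulas:

```python
import math

# First-order propagation of independent random errors:
# sigma_f = sqrt(sum((df/dx_i * sigma_i)^2)).
def propagate(partials, sigmas):
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# Illustrative linear equation of state rho = rho0*(1 - alpha*dT' + beta*dS')
alpha, beta = 2e-4, 8e-4   # assumed expansion/contraction coefficients
rho0 = 1025.0              # reference density, kg/m^3
dT, dS = 0.02, 0.01        # assumed random errors in T (degC) and S (psu)

# Partial derivatives of rho with respect to T and S for the linear form
sigma_rho = propagate([-rho0 * alpha, rho0 * beta], [dT, dS])
print(sigma_rho)  # error in computed density, kg/m^3
```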

Abstract:

Data assimilation provides techniques for combining observations and prior model forecasts to create initial conditions for numerical weather prediction (NWP). The relative weighting assigned to each observation in the analysis is determined by its associated error. Remote sensing data usually has correlated errors, but the correlations are typically ignored in NWP. Here, we describe three approaches to the treatment of observation error correlations. For an idealized data set, the information content under each simplified assumption is compared with that under correct correlation specification. Treating the errors as uncorrelated results in a significant loss of information. However, retention of an approximated correlation gives clear benefits.
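One facet of this comparison can be sketched with the Shannon information content 0.5 ln(det B / det A), where A = (B⁻¹ + HᵀR⁻¹H)⁻¹ is the analysis-error covariance; the dimensions and covariances below are illustrative, not the paper's idealized data set:

```python
import numpy as np

def information(B, R, H):
    """Shannon information content of an analysis with obs-error covariance R."""
    A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
    return 0.5 * np.log(np.linalg.det(B) / np.linalg.det(A))

n = 4
B = np.eye(n)                    # background (prior) error covariance
H = np.eye(n)                    # direct observations of every state element
rho = 0.6
R_corr = np.array([[rho ** abs(i - j) for j in range(n)] for i in range(n)])
R_diag = np.eye(n)               # same variances, correlations discarded

info_corr = information(B, R_corr, H)
info_diag = information(B, R_diag, H)
print(info_corr, info_diag)      # correlated errors carry more information
```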

Abstract:

This paper analyzes the performance of Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under transmission errors. The idea of ErDCF is to use high data rate nodes to work as relays for the low data rate nodes. ErDCF achieves higher throughput and reduces energy consumption compared to IEEE 802.11 Distributed Coordination Function (DCF) in an ideal channel environment. However, there is a possibility that this expected gain may decrease in the presence of transmission errors. In this work, we modify the saturation throughput model of ErDCF to accurately reflect the impact of transmission errors under different rate combinations. It turns out that the throughput gain of ErDCF can still be maintained under reasonable link quality and distance.

Abstract:

Assimilation of temperature observations into an ocean model near the equator often results in a dynamically unbalanced state with unrealistic overturning circulations. The way in which these circulations arise from systematic errors in the model or its forcing is discussed. A scheme is proposed, based on the theory of state augmentation, which uses the departures of the model state from the observations to update slowly evolving bias fields. Results are summarized from an experiment applying this bias correction scheme to an ocean general circulation model. They show that the method produces more balanced analyses and a better fit to the temperature observations.
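A minimal scalar sketch of the state-augmentation idea, with illustrative gains and a constant systematic forecast error standing in for the slowly evolving bias fields:

```python
# Toy assimilation cycles: each forecast carries a constant systematic error
# beta; the bias estimate b is updated slowly from observation-minus-model
# departures. k_state and k_bias are illustrative gains, not from the paper.
def cycle(n, beta=1.0, k_state=0.5, k_bias=0.05):
    truth, b = 0.0, 0.0
    xa = 0.0
    for _ in range(n):
        xf = truth + beta            # biased model forecast (obs = truth)
        d = truth - (xf - b)         # departure of bias-corrected forecast
        xa = (xf - b) + k_state * d  # analysis of the bias-corrected state
        b = b - k_bias * d           # slow update of the bias estimate
    return xa, b

xa, b = cycle(200)
print(xa, b)  # b converges toward beta; the analysis approaches the truth
```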

Abstract:

SST errors in the tropical Atlantic are large and systematic in current coupled general-circulation models. We analyse the growth of these errors in the region of the south-eastern tropical Atlantic in initialised decadal hindcast integrations for three of the models participating in the Coupled Model Intercomparison Project 5. A variety of causes for the initial bias development are identified, but in all cases considered ocean-atmosphere coupling is found to be crucially involved in their maintenance. This involves an oceanic “bridge” between the Equator and the Benguela-Angola coastal seas which communicates sub-surface ocean anomalies and constitutes a coupling between SSTs in the south-eastern tropical Atlantic and the winds over the Equator. The resulting coupling between SSTs, winds and precipitation represents a positive feedback for warm SST errors in the south-eastern tropical Atlantic.

Abstract:

Aim: To examine the causes of prescribing and monitoring errors in English general practices and provide recommendations for how they may be overcome. Design: Qualitative interview and focus group study with purposive sampling and thematic analysis informed by Reason’s accident causation model. Participants: General practice staff participated in a combination of semi-structured interviews (n=34) and six focus groups (n=46). Setting: Fifteen general practices across three primary care trusts in England. Results: We identified seven categories of high-level error-producing conditions: the prescriber, the patient, the team, the task, the working environment, the computer system, and the primary-secondary care interface. Each of these was further broken down to reveal various error-producing conditions. The prescriber’s therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and their physical and emotional health, were all identified as possible causes. The patient’s characteristics and the complexity of the individual clinical case were also found to have contributed to prescribing errors. The importance of feeling comfortable within the practice team was highlighted, as well as the safety of general practitioners (GPs) in signing prescriptions generated by nurses when they had not seen the patient for themselves. The working environment with its high workload, time pressures, and interruptions, and computer related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts, were all highlighted as possible causes of prescribing errors and often interconnected. Conclusion: This study has highlighted the complex underlying causes of prescribing and monitoring errors in general practices, several of which are amenable to intervention.

Abstract:

Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias advent, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step toward moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.