960 results for Error in essence


Relevance:

90.00%

Publisher:

Abstract:

For certain observation types, such as those that are remotely sensed, the observation errors are correlated and these correlations are state- and time-dependent. In this work, we develop a method for diagnosing and incorporating spatially correlated and time-dependent observation error in an ensemble data assimilation system. The method combines an ensemble transform Kalman filter with an approach that uses statistical averages of background and analysis innovations to estimate the observation error covariance matrix. To evaluate the performance of the method, we perform identical twin experiments using the Lorenz ’96 and Kuramoto-Sivashinsky models. Using our approach, a good approximation to the true observation error covariance can be recovered in cases where the initial estimate of the error covariance is incorrect. Spatial observation error covariances in which the length scale of the true covariance changes slowly in time can also be captured. We find that using the estimated correlated observation error in the assimilation improves the analysis.
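
As a concrete illustration of the innovation-statistics idea described above, the sketch below implements a Desroziers-style diagnostic in Python under simplifying assumptions (a linear optimal analysis and a synthetic innovation record); it is not the authors' code, and the toy covariances and variable names are placeholders.

```python
import numpy as np

def estimate_obs_error_cov(background_innovations, analysis_innovations):
    """Desroziers-style diagnostic: R is approximated by the statistical
    average E[d_a d_b^T] of products of analysis innovations
    d_a = y - H(x_a) and background innovations d_b = y - H(x_b),
    accumulated over many assimilation cycles."""
    d_b = np.asarray(background_innovations)   # shape (n_cycles, n_obs)
    d_a = np.asarray(analysis_innovations)     # shape (n_cycles, n_obs)
    R_est = d_a.T @ d_b / d_b.shape[0]
    return 0.5 * (R_est + R_est.T)             # symmetrise away sampling noise

# Synthetic demo: true R has exponentially decaying spatial correlations,
# and HBH^T is set equal to R, so the innovation covariance is 2R.
rng = np.random.default_rng(0)
n_cycles, n_obs = 5000, 40
idx = np.arange(n_obs)
true_R = 0.5 ** np.abs(np.subtract.outer(idx, idx))
S = true_R + true_R                            # HBH^T + R
d_b = rng.multivariate_normal(np.zeros(n_obs), S, size=n_cycles)
d_a = d_b @ (true_R @ np.linalg.inv(S)).T      # (I - HK) d_b for the optimal gain
R_hat = estimate_obs_error_cov(d_b, d_a)       # should approximate true_R
```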

Relevance:

90.00%

Publisher:

Abstract:

We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (re) and 2-D column mean droplet number concentration (Nd). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds via a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in re, but the true error can be greater because of uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and re are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
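
A minimal sketch of a generic stochastic ensemble Kalman filter analysis step of the kind the abstract adapts, assuming a placeholder linear forward model in place of the full 3-D radiative transfer; all names, shapes, and values are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_error_sd, forward_model, rng):
    """Stochastic EnKF analysis step: update an ensemble of cloud-state
    vectors (e.g. gridded LWC) using zenith-radiance observations.
    `forward_model` maps a state vector to simulated radiances; here it
    stands in for the full 3-D radiative transfer used in the retrieval."""
    n_ens = ensemble.shape[1]
    Y = np.column_stack([forward_model(ensemble[:, i]) for i in range(n_ens)])
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    P_xy = Xp @ Yp.T / (n_ens - 1)
    P_yy = Yp @ Yp.T / (n_ens - 1) + np.diag(np.full(len(obs), obs_error_sd**2))
    K = P_xy @ np.linalg.inv(P_yy)
    # Perturbed observations, one draw per ensemble member
    obs_pert = obs[:, None] + rng.normal(0.0, obs_error_sd, size=(len(obs), n_ens))
    return ensemble + K @ (obs_pert - Y)

# Illustrative use with a random linear stand-in for the radiance model
rng = np.random.default_rng(0)
state_dim, n_obs, n_ens = 50, 6, 40
H = rng.normal(size=(n_obs, state_dim))
prior = rng.normal(size=(state_dim, n_ens))
truth = rng.normal(size=state_dim)
y = H @ truth + rng.normal(0.0, 0.05, size=n_obs)
posterior = enkf_update(prior, y, 0.05, lambda x: H @ x, rng)
```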

Relevance:

90.00%

Publisher:

Abstract:

We use the elliptic reconstruction technique in combination with a duality approach to prove a posteriori error estimates for the fully discrete backward Euler scheme for linear parabolic equations. As an application, we combine our result with the residual-based estimators from the a posteriori estimation of elliptic problems to derive space-error indicators and thus a fully practical version of the estimators bounding the error in the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$ norm. These estimators, which are of optimal order, extend those introduced by Eriksson and Johnson in 1991 by taking into account the error induced by mesh changes and by allowing for a more flexible use of the elliptic estimators. For comparison with previous results, we also derive an energy-based a posteriori estimate for the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$-error which simplifies a previous one given by Lakkis and Makridakis in 2006. We then compare both estimators (duality vs. energy) in practical situations and draw conclusions.
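
For orientation only, a schematic of the setting (not the paper's exact estimator): the fully discrete backward Euler scheme for a linear parabolic problem and the norm in which the error is bounded, with the estimator split into generic space, time, mesh-change and data contributions, as the abstract indicates.

```latex
\begin{align}
  \partial_t u + \mathcal{A}(t)\,u &= f \qquad \text{in } \varOmega \times (0,T],\\
  \frac{U^n - U^{n-1}}{k_n} + \mathcal{A}_n U^n &= P^n f(t_n), \qquad n = 1,\dots,N,\\
  \|u - U\|_{\mathrm{L}_{\infty}(0,T;\,\mathrm{L}_2(\varOmega))}
    &\le \eta_{\mathrm{space}} + \eta_{\mathrm{time}} + \eta_{\mathrm{mesh}} + \eta_{\mathrm{data}}.
\end{align}
```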

Relevance:

90.00%

Publisher:

Abstract:

Ruminant production is a vital part of the food industry, but it raises environmental concerns, partly because of the associated methane outputs. Efficient methane mitigation and estimation of emissions from ruminants require accurate prediction tools. Equations recommended by international organizations or scientific studies have been developed with animals fed conserved forages and concentrates and should therefore be used with caution for grazing cattle. The aim of the current study was to develop prediction equations with animals fed fresh grass, in order to be more suitable for pasture-based systems and for animals at lower feeding levels. A study with 25 nonpregnant, nonlactating cows fed solely fresh-cut grass at maintenance energy level was performed over two consecutive grazing seasons. Grass of broad feeding quality, arising from contrasting harvest dates, maturity, fertilisation and grass varieties, from eight swards was offered. Cows were offered the experimental diets for at least 2 weeks before being housed in calorimetric chambers for 3 consecutive days, with feed intake measurements and total urine and faeces collections performed daily. Methane emissions were measured over the last 2 days. Prediction models were developed from 100 3-day averaged records. Internal validation of these equations, and of those recommended in the literature, was performed. The models currently used in greenhouse gas inventories under-estimated methane emissions from animals fed fresh-cut grass at maintenance, while the new models, using the same predictors, improved prediction accuracy. The error in predicted methane output decreased when grass nutrient, metabolisable energy and digestible organic matter concentrations were added as predictors to equations already containing dry matter or energy intakes, possibly because they explain feed digestibility and the type of energy-supplying nutrients more efficiently. Predictions based on readily available farm-level data, such as liveweight and grass nutrient concentrations, were also generated and performed satisfactorily. The new models may be recommended for predicting methane emissions from grazing cattle at maintenance or low feeding levels.
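
To make the idea of a prediction equation concrete, here is a minimal least-squares sketch with entirely hypothetical predictor names and placeholder values (dry matter intake, metabolisable energy intake, digestible organic matter concentration); it mimics the general form of such equations rather than reproducing the study's fitted models.

```python
import numpy as np

# Placeholder data: dry matter intake (kg/d), metabolisable energy intake
# (MJ/d), digestible organic matter concentration (g/kg DM) and methane
# output (g/d). These values are illustrative, not from the study.
dmi = np.array([ 7.2,  8.1,  6.5,  9.0,  7.8,  8.4])
mei = np.array([ 78.,  88.,  70.,  99.,  84.,  91.])
dom = np.array([640., 655., 610., 670., 645., 660.])
ch4 = np.array([148., 166., 132., 183., 158., 171.])

# Ordinary least squares fit of CH4 = b0 + b1*DMI + b2*MEI + b3*DOM
X = np.column_stack([np.ones_like(dmi), dmi, mei, dom])
coef, *_ = np.linalg.lstsq(X, ch4, rcond=None)
pred = X @ coef
rmspe = np.sqrt(np.mean((ch4 - pred) ** 2))  # root mean square prediction error
```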

Relevance:

90.00%

Publisher:

Abstract:

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits, the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant-variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models – the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing them with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
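
A minimal simulation sketch of the OU process discussed above, assuming a single discretised line of descent rather than a full phylogeny; setting the pull parameter alpha to zero recovers the Brownian motion model, which is why the two are compared with nested likelihood ratio tests.

```python
import numpy as np

def simulate_ou(alpha, theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
    dX = alpha*(theta - X)*dt + sigma*dW (single lineage, not a tree)."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + alpha * (theta - x[i]) * dt \
                   + sigma * np.sqrt(dt) * rng.normal()
    return x

rng = np.random.default_rng(1)
path_ou = simulate_ou(alpha=2.0, theta=0.0, sigma=1.0, x0=1.0,
                      dt=0.01, n_steps=1000, rng=rng)
path_bm = simulate_ou(alpha=0.0, theta=0.0, sigma=1.0, x0=1.0,
                      dt=0.01, n_steps=1000, rng=rng)  # alpha = 0 gives Brownian motion
```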

Relevance:

90.00%

Publisher:

Abstract:

Objective: Underreporting of energy intake is prevalent in food surveys, but there is controversy about which dietary assessment method yields greater underreporting rates. Our objective was to compare the validity of self-reported energy intake obtained by three dietary assessment methods against total energy expenditure (TEE) obtained by doubly labeled water (DLW) among Brazilian women. Design: Cross-sectional study. Subjects/setting: Sixty-five females aged 18 to 57 years (28 normal-weight, 10 overweight, and 27 obese) were recruited from two universities to participate. Main outcome measures: TEE determined by DLW, and energy intake estimated by three 24-hour recalls, a 3-day food record, and a food frequency questionnaire (FFQ). Statistical analyses performed: Regression and analysis of variance with repeated measures compared TEE and energy intake values, as well as energy intake-to-TEE ratios and energy intake minus TEE values, between dietary assessment methods. Bland-Altman plots were produced for each method. A chi-square test compared the proportion of underreporters between the methods. Results: Mean TEE was 2,622 kcal (standard deviation [SD]=490 kcal), while mean energy intake was 2,078 kcal (SD=430 kcal) for the diet recalls, 2,044 kcal (SD=479 kcal) for the food record, and 1,984 kcal (SD=832 kcal) for the FFQ (all energy intake values differed significantly from TEE; P<0.0001). Bland-Altman plots indicated great dispersion, negative mean differences between measurements, and wide limits of agreement. Obese subjects underreported more than normal-weight subjects in the diet recalls and in the food records, but not in the FFQ. Years of education, income and ethnicity were associated with reporting accuracy. Conclusions: The FFQ produced greater under- and overestimation of energy intake. Underreporting of energy intake is a serious and prevalent error in dietary self-reports provided by Brazilian women, as has been described in studies conducted in developed countries.
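
As a sketch of the reporting-accuracy quantities used in such comparisons, the snippet below computes the EI:TEE ratio and the Bland-Altman bias and limits of agreement from placeholder values; the underreporting cutoff shown is an illustrative assumption, not the study's criterion.

```python
import numpy as np

# Placeholder data: total energy expenditure from doubly labeled water (kcal/d)
# and self-reported energy intake (kcal/d) from one assessment method.
tee = np.array([2500., 2700., 2300., 2900., 2600.])
ei  = np.array([2000., 2600., 1700., 2200., 2500.])

ratio = ei / tee                        # EI:TEE reporting accuracy ratio
diff = ei - tee                         # Bland-Altman differences
bias = diff.mean()                      # mean difference (reporting bias)
sd = diff.std(ddof=1)
limits_of_agreement = (bias - 1.96 * sd, bias + 1.96 * sd)
underreporter = ratio < 0.78            # illustrative cutoff, not the paper's
```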

Relevance:

90.00%

Publisher:

Abstract:

The skew-normal distribution is a class of distributions that includes the normal distribution as a special case. In this paper, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of a multivariate, null-intercept, measurement error model [R. Aoki, H. Bolfarine, J.A. Achcar, and D. Leao Pinto Jr, Bayesian analysis of a multivariate null intercept error-in-variables regression model, J. Biopharm. Stat. 13(4) (2003b), pp. 763-771] in which the unobserved value of the covariate (latent variable) follows a skew-normal distribution. The results and methods are applied to a real dental clinical trial presented in [A. Hadgu and G. Koch, Application of generalized estimating equations to a dental randomized clinical trial, J. Biopharm. Stat. 9 (1999), pp. 161-178].
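
A small data-generating sketch of the model class described above (skew-normal latent covariate, additive measurement error, null-intercept regression), with illustrative parameter values; the full Bayesian MCMC analysis of the paper is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200

# Latent covariate follows a skew-normal distribution (shape parameter a != 0)
x_latent = stats.skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=n, random_state=rng)

# Observed covariate with additive measurement error, and a null-intercept
# regression for the response (parameter values are illustrative)
x_obs = x_latent + rng.normal(0.0, 0.3, size=n)
beta = 1.5
y = beta * x_latent + rng.normal(0.0, 0.5, size=n)

# Naive regression on the error-prone covariate attenuates beta, which is
# what measurement error models are designed to correct for.
beta_naive = np.sum(x_obs * y) / np.sum(x_obs ** 2)
```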

Relevance:

90.00%

Publisher:

Abstract:

We document a novel type of international financial contagion whose driving force is shared financial intermediation. In the London peripheral sovereign debt market during the pre-1914 period, financial intermediation played a major informational role for investors, most likely because of the absence of international monitoring agencies and the substantial agency costs. Using two events of financial distress – the Brazilian Funding Loan of 1898 and the Greek Funding Loan of 1893 – as quasi-natural experiments, we document that, following the crises, the bond prices of countries that had no meaningful economic links to the distressed countries but shared the same financial intermediary suffered a reduction relative to the rest of the market. This result holds for the mean, the median and the whole distribution of bond prices, and is robust to an extensive sensitivity analysis. We interpret it as evidence that the identity of the financial intermediary was informative, i.e., investors extracted information about the soundness of a debtor from the existence of financial relationships. This spillover, informational in essence, arises as the flip side of the relational lending coin: contagion occurs for the same reason that relational finance, in this case underwriting, helps alleviate informational and incentive problems.

Relevance:

90.00%

Publisher:

Abstract:

The role of maritime transportation within international trade was drastically revamped during the inception of the globalization process, which enhanced the contribution of ports to the world economy as main logistics gateways for global production and trade. As a result, the relationship between ports and governments has changed. Devolution ideologies that had been applied in other industries decades ago were now being considered by governments for the port industry. Many central governments sought to extract themselves from the commercial activities of ports, devolving this responsibility to local governments, communities or private entities. The institution of devolution programs also changed the governance structures of ports, further influencing port performance. Consequently, the recent worldwide trend towards devolution in the port industry has spawned a considerable variety of governance models that are now in place around the world. While some countries opt for more decentralized structures, others prefer to retain a centralization of powers. In this way, some governments consider local features and national integration more than others, which ultimately influences the success of a port reform implementation. Nevertheless, the prime intent of governments is now to maximize the efficiency and performance of their domestic ports. This study examines the changed port governance environment in Brazil by determining how and why the port reforms imposed by the Brazilian federal government have affected the overall performance of the national port system over the last decades, using the Port of Santos as the case for an exploratory study. For that, the study uses a contingency theory-based framework – the Matching Framework – that views port performance as a function of the fit among the dimensions of the external operating environment, strategy and structure of a port organization. In essence, the greater the fit among these dimensions, the better the expected performance of a port, and vice versa. Port managers, government officials and academics alike should be interested in this document.

Relevance:

90.00%

Publisher:

Abstract:

In the absence of selective availability, which was turned off on May 1, 2000, the ionosphere can be the largest source of error in GPS positioning and navigation. Its effects on the GPS observables cause code delays and phase advances. The magnitude of this error is affected by the local time of day, season, solar cycle, geographical location of the receiver and Earth's magnetic field. As is well known, the ionosphere is the main drawback for high-accuracy positioning when using single-frequency receivers, either for point positioning or for relative positioning over medium and long baselines. The ionospheric effects were investigated in point positioning and relative positioning using single-frequency data. A model represented by a Fourier series was implemented and its parameters were estimated from data collected at the active stations of the RBMC (Brazilian Network for Continuous Monitoring of GPS satellites). The input data were the pseudorange observables smoothed by the carrier phase. Quality control was implemented in order to analyse the adjustment and to validate the significance of the estimated parameters. Experiments were carried out in the equatorial region using data collected from dual-frequency receivers. In order to validate the model, the estimated values were compared with ground truth. For point positioning and for relative positioning of baselines of approximately 100 km, the discrepancies indicated an error reduction better than 80% and 50%, respectively, compared with processing without the ionospheric model. These results indicate that more research has to be done in order to provide support to L1 GPS users in the equatorial region.
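
A minimal sketch of fitting a Fourier-series representation of ionospheric delay as a function of local time by least squares; the delay values below are synthetic placeholders, whereas the study estimated its parameters from carrier-smoothed pseudoranges collected at RBMC stations.

```python
import numpy as np

def fourier_design(local_time_hours, n_harmonics=4):
    """Design matrix for a Fourier series in local time (24 h period),
    the type of ionospheric delay representation the abstract describes."""
    t = 2.0 * np.pi * np.asarray(local_time_hours) / 24.0
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

# Synthetic placeholder L1 ionospheric delays (metres) over one day
rng = np.random.default_rng(3)
lt = np.linspace(0.0, 24.0, 97)
delay = 5.0 + 3.0 * np.sin(2.0 * np.pi * (lt - 14.0) / 24.0) \
        + 0.3 * rng.normal(size=lt.size)

A = fourier_design(lt)
coef, *_ = np.linalg.lstsq(A, delay, rcond=None)
delay_model = A @ coef   # modelled delay used to correct single-frequency data
```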

Relevance:

90.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

90.00%

Publisher:

Abstract:

Objective: To examine the correlation between the clinical diagnosis and autopsy findings in adult patients who died in an intensive care unit (ICU), and to determine the rate of agreement on the basic and terminal causes of death and the types of errors, in order to improve quality control of future care. Design: Retrospective study. Setting: Adult ICU in a university hospital. Patients: 30 adult patients who died in the ICU, with the exclusion of medicolegal cases. Methods and main results: Anatomoclinical meetings were held to analyze the pre- and postmortem correlations in 30 consecutive autopsies at the ICU of the University Hospital, School of Medicine of Botucatu/UNESP, from January 1994 to January 1997. The rate of correct clinical diagnoses of the basic cause was 66.7%; in 23.3% of cases, if the correct diagnosis had been made, management would have been different, as would the evolution of the patient's course (Class I error); in 10% of the cases the error would not have led to a change in management (Class II error). The rate of correct clinical diagnoses of the terminal cause was 80%. Conclusions: The rate of recognition of the basic cause was 66.7%, which is consistent with the literature, but the Class I error rate was higher than that reported in the literature.

Relevance:

90.00%

Publisher:

Abstract:

Objectives: To evaluate the use of the center of the incisive papilla as a guide for the selection of the proper width of maxillary dentures in 4 racial groups. Method and materials: One hundred sixty stone casts were obtained from impressions of the maxillary arch of white, black, mixed, and Asian subjects. The occlusal surfaces of the casts were photocopied and the images placed on a digitizer. The most anterior and posterior points of the papilla and the cusp tips of the canines were digitized. Dentofacial Planner Plus software was used to calculate the distance from a line passing through the cusp tips of the canines to the center of the papilla, defined as the midpoint between the anterior and posterior points of the papilla. The selection error (in millimeters) due to the clinical application of the incisive papilla method was calculated and analyzed. Results: In all of the racial groups studied, the center of the incisive papilla did not coincide with the canine line. Using the center of the papilla would lead to the selection of wider artificial teeth. In 24.9% of the white, 19.3% of the mixed, 32.9% of the black, and 15.5% of the Asian populations, errors greater than 4 mm would be present with the utilization of the papilla. Conclusion: The incisive papilla center method is not accurate, but it may aid in the initial selection of artificial teeth for the racial groups studied. (Quintessence Int 2008;39:841-845)

Relevance:

90.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)