906 results for Classical measurement error model


Relevance:

40.00%

Publisher:

Abstract:

In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fit the model to a dimensional measurement method in which testicular volume is calculated with a caliper and calibrated against ultrasonography as the standard method. With this methodology, there is no need to transform the variables to obtain symmetric errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion, and the Hannan-Quinn criterion.
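Since the model comparison in this abstract rests on the three information criteria, a minimal sketch of how they are typically computed from a fitted model's maximized log-likelihood may help; the function name and arguments below are illustrative, not taken from the paper.

```python
import numpy as np

def information_criteria(loglik, k, n):
    """Generic information criteria from a maximized log-likelihood.
    loglik: maximized log-likelihood of the fitted model
    k: number of estimated parameters; n: sample size."""
    aic = -2.0 * loglik + 2.0 * k                      # Akaike
    bic = -2.0 * loglik + k * np.log(n)                # Schwarz (Bayesian)
    hqc = -2.0 * loglik + 2.0 * k * np.log(np.log(n))  # Hannan-Quinn
    return {"AIC": aic, "BIC": bic, "HQC": hqc}

# Lower values favor a model; e.g. compare the usual calibration model
# against the skew-normal calibration model fitted to the same data.
```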

Relevance:

40.00%

Publisher:

Abstract:

We consider a Bayesian approach to the nonlinear regression model in which the normal distribution of the error term is replaced by skewed distributions that account for skewness together with heavy tails, or for skewness alone. The type of data considered in this paper concerns repeated measurements taken over time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes, so the model additionally allows for correlation between observations made on the same individual. We illustrate the procedure with a data set on the growth curves of a clinical measurement in a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. One conclusion of this study is that the DIC criterion is not reliable for this kind of complex model.
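As a rough illustration of the two comparison criteria mentioned above, the sketch below computes DIC and CPO from generic MCMC output; the variable names and array shapes are assumptions, not the paper's code.

```python
import numpy as np

def dic(loglik_draws, loglik_at_posterior_mean):
    """Deviance information criterion from posterior draws.
    loglik_draws: total log-likelihood of the data at each MCMC draw (shape S,)."""
    d_bar = -2.0 * np.mean(loglik_draws)     # posterior mean deviance
    d_hat = -2.0 * loglik_at_posterior_mean  # deviance at the posterior mean
    p_d = d_bar - d_hat                      # effective number of parameters
    return d_bar + p_d                       # lower is better

def cpo(pointwise_lik_draws):
    """Conditional predictive ordinate for each observation.
    pointwise_lik_draws: likelihoods p(y_i | theta^(s)), shape (S, n)."""
    return 1.0 / np.mean(1.0 / pointwise_lik_draws, axis=0)  # harmonic-mean estimator
```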

Relevance:

40.00%

Publisher:

Abstract:

This paper discusses distribution and the historical phases of capitalism. It assumes that technical progress and growth are taking place and, given that, asks how income is functionally distributed between labor and capital, taking as reference the classical theory of distribution and Marx's falling tendency of the rate of profit. Based on historical experience, it first inverts the model, making the rate of profit the constant variable in the long run and the wage rate the residuum; second, it distinguishes three types of technical progress (capital-saving, neutral and capital-using) and applies them to the history of capitalism, with the UK and France as reference. Given these three types of technical progress, it distinguishes four phases of capitalist growth, of which only the second is consistent with Marx's prediction. The last phase, after World War II, should in principle be capital-saving, consistent with growth of wages above productivity. Instead, since the 1970s wages have been kept stagnant in rich countries because of, first, the fact that the Information and Communication Technology Revolution proved to be highly capital-using, opening room for a new wave of substitution of capital for labor; second, the new competition coming from developing countries; third, the emergence of the technobureaucratic or professional class; and, fourth, the new power of the neoliberal class coalition associating rentier capitalists and financiers.

Relevance:

40.00%

Publisher:

Abstract:

The real exchange rate is an important macroeconomic price in the economy and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
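The misalignment calculation can be pictured with a standard (same-frequency) cointegrated VECM; the mixed-frequency extension proposed in the paper is not reproduced here, and the file name and lag choices below are purely illustrative.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical data set: the US real exchange rate and its long-run fundamentals.
data = pd.read_csv("us_rer_fundamentals.csv", index_col=0, parse_dates=True)

model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co")
res = model.fit()

# res.beta holds the cointegrating (long-run equilibrium) relation; the deviation
# of the observed series from it is one way to read off the misalignment.
equilibrium_error = data.values @ res.beta
```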

Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell, considering its geometry, the materials used in the sensor assembly, the range of liquid-sample properties, and the signal analysis techniques. The sensor performance under non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers and the effect of liquid-sample residue that can adhere to the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.
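A one-dimensional transmission-line model of this kind is usually assembled by cascading 2×2 (ABCD) matrices, one per layer; the sketch below shows the idea with made-up layer properties, not the cell design of the paper.

```python
import numpy as np

def acoustic_layer_matrix(freq, thickness, density, speed):
    """ABCD (transmission-line) matrix of a lossless acoustic layer."""
    k = 2.0 * np.pi * freq / speed  # wavenumber in the layer
    z = density * speed             # characteristic acoustic impedance
    kd = k * thickness
    return np.array([[np.cos(kd), 1j * z * np.sin(kd)],
                     [1j * np.sin(kd) / z, np.cos(kd)]])

# Cascade the layers between transducer and liquid sample (illustrative values):
freq = 5e6  # Hz
layers = [
    acoustic_layer_matrix(freq, 20e-6, 1100.0, 2500.0),  # adhesive layer
    acoustic_layer_matrix(freq, 5e-6, 19300.0, 3240.0),  # metallization
    acoustic_layer_matrix(freq, 10e-3, 998.0, 1480.0),   # liquid sample (water)
]
total = np.linalg.multi_dot(layers)  # overall matrix of the stack
```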

Relevance:

40.00%

Publisher:

Abstract:

The Lagrangian formalism for the N = 2 supersymmetric sinh-Gordon model with a jump defect is considered. The modified conserved momentum and energy are constructed in terms of border functions. The supersymmetric Bäcklund transformation is given and a one-soliton solution is obtained. The Lax formulation based on the affine super Lie algebra sl(2, 2), within the space split by the defect, leads to the integrability of the model and hence to the existence of an infinite number of constants of motion.

Relevance:

40.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

40.00%

Publisher:

Abstract:

We present the first model-independent measurement of the helicity of W bosons produced in top quark decays, based on a 1 fb⁻¹ sample of candidate tt̄ events in the dilepton and lepton plus jets channels collected by the D0 detector at the Fermilab Tevatron pp̄ Collider. We reconstruct the angle θ* between the momenta of the down-type fermion and the top quark in the W boson rest frame for each top quark decay. A fit of the resulting cos θ* distribution finds that the fraction of longitudinal W bosons is f₀ = 0.425 ± 0.166 (stat) ± 0.102 (syst) and the fraction of right-handed W bosons is f₊ = 0.119 ± 0.090 (stat) ± 0.053 (syst), which is consistent with the standard model at the 30% C.L.
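For context, the angular distribution that such a fit uses is fixed by the helicity fractions; a hedged sketch of the standard textbook form (not the D0 analysis code) is given below.

```python
import numpy as np

def w_helicity_pdf(cos_theta, f0, f_plus):
    """Angular distribution of the down-type fermion in the W rest frame for
    top-quark decay; f0 and f_plus are the longitudinal and right-handed
    helicity fractions, with f_minus = 1 - f0 - f_plus."""
    f_minus = 1.0 - f0 - f_plus
    return (0.75 * f0 * (1.0 - cos_theta**2)
            + 0.375 * f_minus * (1.0 - cos_theta)**2
            + 0.375 * f_plus * (1.0 + cos_theta)**2)

# The quoted central values correspond roughly to w_helicity_pdf(c, 0.425, 0.119).
```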

Relevance:

40.00%

Publisher:

Abstract:

Dynamical properties of the U-238 + U-238 system at the classical turning point, specifically the distance of closest approach, the relative orientations of the nuclei, and the deformations, have been studied at the sub-Coulomb energy of E_lab = 6.07 MeV/nucleon using a classical dynamical model with a variable moment of inertia. The probability of favorable alignment for anomalous positron-electron pair emission through vacuum decay is calculated. The calculated favorable-alignment probability, 0.116, although small, is enhanced by about 16% compared with the results of a similar study using a fixed moment of inertia, as well as with the results of a semiquantal calculation reported earlier.

Relevance:

40.00%

Publisher:

Abstract:

Measurement-based quantum computation is an efficient model for performing universal computation. Nevertheless, theoretical questions have been raised, mainly with respect to realistic noise conditions. In order to shed some light on this issue, we evaluate the exact dynamics of some single-qubit-gate fidelities in the measurement-based quantum computation scheme when the qubits used as a resource interact with a common dephasing environment. We report a necessary condition for the fidelity dynamics of a general pure N-qubit state interacting with this type of error channel to present oscillatory behavior, and we show that for the initial canonical cluster state the fidelity oscillates as a function of time. This oscillatory behavior of the state fidelity brings significant variations to the computational results of a generic gate acting on that state, depending on the instants at which we choose to apply our set of projective measurements. As we shall see, for some specific gates frequently found in the literature, fast application of the set of projective measurements does not necessarily imply high gate fidelity, and likewise slow application does not necessarily imply low gate fidelity. Our condition for the occurrence of the oscillatory behavior shows that the oscillation presented by the cluster state is due exclusively to its initial geometry. Other states that can be used as resources for measurement-based quantum computation can present the same initial geometrical condition. Therefore, it is very important for this scheme to know when the fidelity of a particular resource state will oscillate in time and, if this is the case, which are the best times to perform the measurements.

Relevance:

40.00%

Publisher:

Abstract:

The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, from a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured with a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Experiments were performed to test the proposed method, and the results achieved are within a relative error of about 1 percent in areas and distances in the object space. This accuracy fulfills the requirements of the intended applications.

Relevance:

40.00%

Publisher:

Abstract:

Background: Early trauma care depends on subjective assessments and sporadic vital sign measurements. We hypothesized that near-infrared spectroscopy-measured cerebral oxygenation (regional oxygen saturation, rSO2) would provide a tool to detect cardiovascular compromise during active hemorrhage. We compared rSO2 with invasively measured mixed venous oxygen saturation (SvO2), mean arterial pressure (MAP), cardiac output, heart rate, and calculated pulse pressure. Methods: Six propofol-anesthetized, instrumented swine were subjected to a fixed-rate hemorrhage until cardiovascular collapse. rSO2 was monitored noninvasively with cerebral oximetry; SvO2 was measured with a fiber-optic pulmonary arterial catheter. As an assessment of the time responsiveness of each variable, we recorded the minutes from the start of the hemorrhage at which each variable reached a 5%, 10%, 15%, and 20% change compared with baseline. Results: Mean time to cardiovascular collapse was 35 ± 11 minutes (54 ± 17% of total blood volume). Cerebral rSO2 began a steady decline at an average MAP of 78 ± 17 mm Hg, well above the expected autoregulatory threshold of cerebral blood flow. The 5%, 10%, and 15% decreases in rSO2 during hemorrhage occurred at similar times to those in SvO2, but rSO2 lagged 6 minutes behind the equivalent percentage decreases in MAP. The correlation between rSO2 and MAP (R = 0.72) was higher than that between SvO2 and MAP (R = 0.55). Conclusions: Near-infrared spectroscopy-measured rSO2 provided reproducible decreases during hemorrhage that were similar in time course to invasively measured cardiac output and SvO2, but delayed by 5 to 9 minutes compared with MAP and pulse pressure. rSO2 may provide an earlier warning of worsening hemorrhagic shock, allowing prompt intervention in trauma patients when continuous arterial blood pressure measurements are unavailable.
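The time-responsiveness metric described above (minutes until a variable falls by 5%, 10%, 15%, or 20% from baseline) can be computed with a small helper like the one below; the function and variable names are assumptions for illustration only.

```python
import numpy as np

def minutes_to_fractional_drop(time_min, signal, fraction):
    """First time (minutes from the start of hemorrhage) at which `signal`
    falls by `fraction` (e.g. 0.05 for a 5% drop) below its baseline value;
    returns None if the threshold is never reached."""
    baseline = signal[0]
    below = np.nonzero(signal <= baseline * (1.0 - fraction))[0]
    return float(time_min[below[0]]) if below.size else None

# Example: compare the responsiveness of rSO2 and MAP at the 10% threshold.
# t_rso2 = minutes_to_fractional_drop(t, rso2, 0.10)
# t_map  = minutes_to_fractional_drop(t, map_mmhg, 0.10)
```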

Relevance:

40.00%

Publisher:

Abstract:

In most studies on beef cattle longevity, only cows reaching a given number of calvings by a specific age are considered in the analyses. With the aim of evaluating all cows with a productive life in the herds, taking into consideration the different forms of management on each farm, it was proposed to measure cow longevity by age at last calving (ALC), that is, the most recent calving registered in the files. The objective was to characterize this trait in order to study the longevity of Nellore cattle, using Kaplan-Meier estimators and the Cox model. The covariables and class effects considered in the models were age at first calving (AFC), year and season of birth of the cow, and farm. The variable studied (ALC) was classified as presenting complete information (uncensored = 1) or incomplete information (censored = 0), using as criterion the difference between the date of each cow's last calving and the date of the latest calving on her farm. If this difference was greater than 36 months, the cow was considered to have failed; if not, the cow was censored, indicating that future calvings remained possible. The records of 11 791 animals from 22 farms within the Nellore Breed Genetic Improvement Program ('Nellore Brazil') were used. In the estimation process using the Kaplan-Meier model, AFC was classified into three age groups. In individual analyses, the log-rank and Wilcoxon tests in the Kaplan-Meier model showed that all covariables and class effects had significant effects (P < 0.05) on ALC. In the analysis considering all covariables and class effects, using the Wald test in the Cox model, only the season of birth of the cow was not significant for ALC (P > 0.05). This analysis indicated that each month added to AFC diminished the risk of the cow's failure in the herd by 2%. Nonetheless, this does not imply that animals with a younger AFC had lower profitability. Cows with greater numbers of calvings were more precocious than those with fewer calvings.
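A minimal sketch of the censoring rule and the two survival models, using the lifelines package and hypothetical file and column names (the 'Nellore Brazil' records themselves are not reproduced here):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("nellore_cows.csv", parse_dates=["last_calving_date"])
# expected columns: ALC, AFC, birth_year, season, farm, last_calving_date

# Censoring rule from the study: a cow fails (event = 1) if her last calving is
# more than 36 months before the latest calving recorded on her farm.
latest_on_farm = df.groupby("farm")["last_calving_date"].transform("max")
gap_months = (latest_on_farm - df["last_calving_date"]).dt.days / 30.44
df["event"] = (gap_months > 36).astype(int)

kmf = KaplanMeierFitter().fit(df["ALC"], event_observed=df["event"])

cph = CoxPHFitter()
cph.fit(df[["ALC", "event", "AFC"]], duration_col="ALC", event_col="event")
# A hazard ratio near 0.98 for AFC would correspond to the reported ~2% drop in
# failure risk per extra month of age at first calving.
```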

Relevance:

40.00%

Publisher:

Abstract:

Some dynamical properties of a bouncing ball model are studied. We show that when dissipation is introduced, the structure of the phase space changes and attractors appear. As the amount of dissipation increases, the edges of the basin of attraction of an attracting fixed point touch the chaotic attractor. Consequently, the chaotic attractor and its basin of attraction are destroyed, giving place to a transient described by a power law with exponent -2. The parameter space is also studied, and we show that it presents a rich structure with infinitely many self-similar shrimp-shaped structures.
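One commonly studied simplified form of the dissipative bouncing-ball (bouncer) map is sketched below, only to illustrate the kind of two-dimensional dissipative mapping the abstract refers to; the paper's exact equations and parameter values may differ.

```python
import numpy as np

def bouncer_map(v0, phi0, eps=0.01, gamma=0.9, n_iter=100_000):
    """Iterate a simplified dissipative bouncer map.
    eps: dimensionless amplitude of the moving wall; gamma: restitution
    coefficient (gamma = 1 recovers the non-dissipative case)."""
    v, phi = v0, phi0
    traj = np.empty((n_iter, 2))
    for n in range(n_iter):
        phi = (phi + 2.0 * v) % (2.0 * np.pi)
        v = abs(gamma * v - eps * (1.0 + gamma) * np.cos(phi))
        traj[n] = v, phi
    return traj

# With gamma < 1, long iterations settle onto attractors; the transients toward
# them can be characterized through the survival statistics of the orbits.
```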