902 results for Multivariate measurement model


Relevance:

40.00%

Publisher:

Abstract:

The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the multifractal spectrum dependence with the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of the parameter values. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

A way of coupling digital image correlation (to measure displacement fields) and the boundary element method (to compute displacements and tractions along a crack surface) is presented herein. It allows for the identification of Young's modulus and the fracture parameters associated with a cohesive model. The procedure is illustrated by identifying the latter for an ordinary concrete in a three-point bend test on a notched beam. In view of measurement uncertainties, the results are deemed trustworthy because numerous measurement points are available and used as inputs to the identification procedure. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

The CASMIN Project is arguably the most influential contemporary study of class mobility in the world. However, the CASMIN finding of weak vertical status effects on class mobility has been extensively criticized. Drawing on arguments about how to model vertical mobility, Hout and Hauser (1992) show that class mobility is strongly determined by vertical socioeconomic differences. This paper extends these arguments by estimating the CASMIN model while explicitly controlling for individual determinants of socioeconomic attainment. Using the 1972 Oxford Mobility Data and the 1979 and 1983 British Election Studies, the paper employs mixed logit models to show how individual socioeconomic factors and categorical differences between classes shape intergenerational mobility. The findings highlight the multidimensionality of class mobility and its irreducibility to vertical movement up and down a stratification hierarchy.

Relevance:

40.00%

Publisher:

Abstract:

Objective. The purpose of this study was to estimate the Down syndrome detection and false-positive rates for second-trimester sonographic prenasal thickness (PT) measurement alone and in combination with other markers. Methods. Multivariate log Gaussian modeling was performed using numerical integration. Parameters for the PT distribution, in multiples of the normal gestation-specific median (MoM), were derived from 105 Down syndrome and 1385 unaffected pregnancies scanned at 14 to 27 weeks. The data included a new series of 25 cases and 535 controls combined with 4 previously published series. The means were estimated by the median and the SDs by the 10th to 90th centile range divided by 2.563. Parameters for other markers were obtained from the literature. Results. A log Gaussian model fitted the distribution of PT values well in both Down syndrome and unaffected pregnancies. The distribution parameters were as follows: Down syndrome, mean, 1.334 MoM, and log10 SD, 0.0772; unaffected pregnancies, 0.995 and 0.0752, respectively. The model-predicted detection rates for 1%, 3%, and 5% false-positive rates for PT alone were 35%, 51%, and 60%, respectively. The addition of PT to a four-serum-marker protocol increased detection by 14% to 18% compared with serum alone. The simultaneous sonographic measurement of PT and nasal bone length increased detection by 19% to 26%, and with a third sonographic marker, nuchal skin fold, performance was comparable with first-trimester protocols. Conclusions. Second-trimester screening with sonographic PT and serum markers is predicted to have a high detection rate, and further sonographic markers could perform comparably with first-trimester screening protocols.
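As a rough illustration of the log Gaussian screening model described above, the sketch below computes the detection rate for prenasal thickness alone at a fixed false-positive rate from the reported distribution parameters. It is a simplified single-marker cutoff calculation; the published rates come from the full multivariate numerical-integration model, typically standardized over the maternal age distribution, so this sketch will not reproduce them exactly.

    # Minimal sketch: single-marker log Gaussian screening performance for
    # prenasal thickness (PT), using the log10 MoM parameters reported above.
    # The published detection rates come from multivariate numerical
    # integration, so this simplified cutoff calculation only approximates them.
    import numpy as np
    from scipy.stats import norm

    mean_aff, sd_aff = np.log10(1.334), 0.0772      # Down syndrome pregnancies
    mean_unaff, sd_unaff = np.log10(0.995), 0.0752  # unaffected pregnancies

    for fpr in (0.01, 0.03, 0.05):
        cutoff = norm.ppf(1 - fpr, loc=mean_unaff, scale=sd_unaff)  # upper-tail PT cutoff
        detection = norm.sf(cutoff, loc=mean_aff, scale=sd_aff)     # P(PT > cutoff | affected)
        print(f"FPR {fpr:.0%}: modeled detection rate {detection:.0%}")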

Relevance:

40.00%

Publisher:

Abstract:

Websites are nowadays the face of institutions, but they are often neglected, especially when it comes to content. In the present paper, we describe ongoing research whose final goal is the development of a model for measuring data quality in the institutional websites of health units. To that end, we have carried out a literature review of the available approaches for evaluating website content quality, in order to identify the most recurrent dimensions and attributes, and we are conducting a Delphi process, currently in its second stage, with the purpose of reaching an adequate set of attributes for measuring content quality.

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE - The aim of our study was to assess the performance of a wrist monitor, the Omron Model HEM-608, compared with the indirect method of blood pressure measurement. METHODS - Our study population consisted of 100 subjects, 29 normotensive and 71 hypertensive. Participants had their blood pressure checked 8 times with alternating techniques, 4 by the indirect method and 4 with the Omron wrist monitor. The validation criteria used to test the device were based on internationally recognized protocols. RESULTS - The Omron HEM-608 reached classification B for systolic and A for diastolic blood pressure according to one such protocol. The mean differences between blood pressure values obtained with the two methods were -2.3 ± 7.9 mmHg for systolic and 0.97 ± 5.5 mmHg for diastolic blood pressure. We therefore considered this type of device approved according to the selected criteria. CONCLUSION - Our study leads us to conclude that this wrist monitor is not only easy to use but also produces results very similar to those obtained by the standard indirect method.
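The agreement figures quoted above are paired mean differences with their standard deviations. A minimal sketch of that calculation is given below; the readings are illustrative placeholders, not the study's data.

    # Sketch: mean difference +/- SD between wrist-monitor and indirect-method
    # readings for paired systolic measurements. Values are illustrative
    # placeholders, not data from the study.
    import numpy as np

    wrist = np.array([128, 134, 141, 122, 150, 118])     # wrist monitor (mmHg)
    indirect = np.array([131, 135, 144, 124, 152, 121])  # indirect method (mmHg)

    diff = wrist - indirect
    print(f"mean difference {diff.mean():+.1f} mmHg, SD {diff.std(ddof=1):.1f} mmHg")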

Relevance:

40.00%

Publisher:

Abstract:

Software engineering, software measurement, software process engineering, capability, maturity

Relevance:

40.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Process and Systems Engineering (Fak. für Verfahrens- und Systemtechnik), doctoral dissertation, 2012

Relevance:

40.00%

Publisher:

Abstract:

In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. Regression calibration is commonly used to adjust for this attenuation, but it requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart, using vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
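To make the two-part calibration concrete, the sketch below shows one way the two steps can be combined: a logistic model for the probability that the single 24-hour recall is non-zero, and a linear model for the (log) consumed amount among consumers, whose product gives the calibrated intake. Column names, covariates, and functional forms are illustrative assumptions, not the EPIC variables or the paper's exact specification (which also examines heteroscedasticity and nonlinearity).

    # Sketch of a two-part regression calibration for an episodically consumed
    # food. Part 1: probability of any consumption on the recall day.
    # Part 2: consumed amount given consumption (log scale for skewness).
    # Calibrated intake = P(consume) * E[amount | consume].
    # Column names (recall, ffq, age, sex) are illustrative placeholders.
    import numpy as np
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def two_part_calibration(df):
        df = df.copy()
        df["consumed"] = (df["recall"] > 0).astype(int)

        # Part 1: logistic model for the probability of a non-zero recall
        part1 = smf.glm("consumed ~ ffq + age + sex", data=df,
                        family=sm.families.Binomial()).fit()
        p_consume = part1.predict(df)

        # Part 2: linear model for the log amount among consumers only
        part2 = smf.ols("np.log(recall) ~ ffq + age + sex",
                        data=df[df["consumed"] == 1]).fit()
        amount = np.exp(part2.predict(df) + 0.5 * part2.scale)  # lognormal back-transform

        # Expected (calibrated) intake for every subject
        return p_consume * amount

The calibrated intake would then replace the error-prone dietary variable in the hazard model for all-cause mortality.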

Relevance:

40.00%

Publisher:

Abstract:

The goal of this paper is to estimate time-varying covariance matrices. Since the covariance matrix of financial returns is known to change through time and is an essential ingredient in risk measurement, portfolio selection, and tests of asset pricing models, this is a very important problem in practice. Our model of choice is the Diagonal-Vech version of the Multivariate GARCH(1,1) model. The problem is that the estimation of the general Diagonal-Vech model is numerically infeasible in dimensions higher than 5. The common approach is to estimate more restrictive models which are tractable but may not conform to the data. Our contribution is to propose an alternative estimation method that is numerically feasible, produces positive semi-definite conditional covariance matrices, and does not impose unrealistic a priori restrictions. We provide an empirical application in the context of international stock markets, comparing the new estimator to a number of existing ones.
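For reference, the Diagonal-Vech GARCH(1,1) recursion itself is simple once parameters are given: each element of the conditional covariance matrix follows its own GARCH(1,1)-type equation. The sketch below illustrates the recursion only; the parameter matrices are placeholders, and this is not the paper's estimator (estimating C, A, and B is exactly the hard part the paper addresses).

    # Sketch of the Diagonal-Vech GARCH(1,1) recursion:
    #   H_t = C + A * (e_{t-1} e_{t-1}') + B * H_{t-1}   (elementwise products)
    # C, A, B are symmetric parameter matrices supplied by the user; positive
    # semi-definiteness of H_t is not guaranteed for arbitrary choices, which is
    # one of the issues the paper's estimation method addresses.
    import numpy as np

    def diagonal_vech_covariances(returns, C, A, B):
        """returns: (T, n) demeaned returns; C, A, B: (n, n) symmetric matrices."""
        T, n = returns.shape
        H = np.empty((T, n, n))
        H[0] = np.cov(returns, rowvar=False)        # initialize at the sample covariance
        for t in range(1, T):
            outer = np.outer(returns[t - 1], returns[t - 1])
            H[t] = C + A * outer + B * H[t - 1]     # Hadamard (elementwise) products
        return H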

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Left atrial (LA) dilatation is associated with a large variety of cardiac diseases. Current cardiovascular magnetic resonance (CMR) strategies to measure LA volumes are based on multi-breath-hold multi-slice acquisitions, which are time-consuming and susceptible to misregistration. AIM: To develop a time-efficient single breath-hold 3D CMR acquisition and reconstruction method to precisely measure LA volumes and function. METHODS: A highly accelerated compressed-sensing multi-slice cine sequence (CS-cineCMR) was combined with a non-model-based 3D reconstruction method to measure LA volumes with high temporal and spatial resolution during a single breath-hold. This approach was validated in LA phantoms of different shapes and applied in 3 patients. In addition, the influence of slice orientation on accuracy was evaluated in the LA phantoms for the new approach in comparison with a conventional model-based biplane area-length reconstruction. As a reference in patients, a self-navigated high-resolution whole-heart 3D dataset (3D-HR-CMR) was acquired during mid-diastole to yield accurate LA volumes. RESULTS: Phantom studies: LA volumes were accurately measured by CS-cineCMR, with a mean difference of -4.73 ± 1.75 ml (-8.67 ± 3.54%, r2 = 0.94). For the new method, the calculated volumes were not significantly different when different orientations of the CS-cineCMR slices were applied to cover the LA phantoms. Long-axis slices "aligned" vs. "not aligned" with the phantom long axis yielded similar differences vs. the reference volume (-4.87 ± 1.73 ml vs. -4.45 ± 1.97 ml, p = 0.67), as did short-axis slices "perpendicular" vs. "not perpendicular" to the LA long axis (-4.72 ± 1.66 ml vs. -4.75 ± 2.13 ml; p = 0.98). The conventional biplane area-length method was susceptible to slice orientation (p = 0.0085 for the interaction of "slice orientation" and "reconstruction technique", 2-way ANOVA for repeated measures). To use 3D-HR-CMR as the reference for LA volumes in patients, it was validated in the LA phantoms (mean difference: -1.37 ± 1.35 ml, -2.38 ± 2.44%, r2 = 0.97). Patient study: The CS-cineCMR LA volumes of the mid-diastolic frame matched closely with the reference LA volumes (measured by 3D-HR-CMR), with a difference of -2.66 ± 6.5 ml (3.0% underestimation; true LA volumes: 63 ml, 62 ml, and 395 ml). Finally, high intra- and inter-observer agreement for maximal and minimal LA volume measurement was also shown. CONCLUSIONS: The proposed method combines a highly accelerated single-breath-hold compressed-sensing multi-slice CMR technique with a non-model-based 3D reconstruction to accurately and reproducibly measure LA volumes and function.

Relevance:

40.00%

Publisher:

Abstract:

We present the results obtained with a ureterovesical implant after ipsilateral ureteral obstruction in the rat, suitable for the study of renal function after deobstruction in these animals. Thirty-seven male Wistar rats weighing 260 to 300 g were submitted to distal right ureteral ligation and divided into 3 groups: A (N = 13, 1 week of obstruction), B (N = 14, 2 weeks of obstruction) and C (N = 10, 3 weeks of obstruction). The animals were then submitted to ureterovesical implantation on the right side and nephrectomy on the left side. During the 4-week follow-up period, serum levels of urea and creatinine were measured on the 2nd, 7th, 14th, 21st and 28th day and compared with preoperative levels. The ureterovesical implantation included a psoas hitch procedure, and the ureter was pulled into the bladder using a transvesical suture. During the first week of the postoperative period, 8 animals died: 4/13 in group A (1 week of obstruction) and 4/14 in group B (2 weeks of obstruction). When compared to preoperative serum levels, urea and creatinine showed a significant increase (P<0.05) on the 2nd postoperative day in groups A and B, with a gradual return to lower levels. However, the values in group B animals were higher than those in group A at the end of the follow-up. In group C (3 weeks of obstruction), 2/10 animals were sacrificed at the time of ureterovesical implantation due to infection of the obstructed kidneys. The remaining animals in this group were operated upon, but all of them died during the first week of follow-up due to renal failure. This technique of ureterovesical implantation in the rat provides effective drainage of the upper urinary tract, permitting the development of an experimental model for the study of long-term renal function after a period of ureteral obstruction.

Relevance:

40.00%

Publisher:

Abstract:

Power consumption is still an issue in wearable computing applications. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study: we analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses, and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. Power consumption is measured using PowerTutor, an Android energy-profiler application with a logging option; because some of its model parameters are unknown, the readings are adjusted against a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters, or processes might affect power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. Finally, the paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
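A scenario-based energy model of the kind mentioned above can be as simple as summing, over the device components active in a scenario, the average power draw times the active time. The sketch below is a hypothetical illustration; the component names and power figures are placeholders, not measurements from the paper.

    # Sketch of a scenario-based energy model: energy per scenario is the sum,
    # over components, of average power draw times active time.
    # Component names and power values are illustrative placeholders.
    def scenario_energy_joules(power_mw, active_s):
        """power_mw: {component: avg power in mW}; active_s: {component: seconds active}."""
        return sum(power_mw[c] * active_s.get(c, 0) / 1000.0 for c in power_mw)

    # Hypothetical 60-second "Display + Transfer by Wi-Fi" scenario
    power = {"screen": 400, "wifi": 250, "cpu": 150}   # mW (illustrative)
    active = {"screen": 60, "wifi": 20, "cpu": 60}     # seconds
    print(f"{scenario_energy_joules(power, active):.1f} J")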