881 results for N-based linear spacers


Relevance:

30.00%

Publisher:

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches to meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a bivariate binomial model, analyze pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' (BUGS) implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent between the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix.
The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the correlation between the two; and (3) it can be applied directly to sparse data without ad hoc correction.
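As a loose, self-contained illustration of the bivariate idea described above, the sketch below pools hypothetical per-study 2x2 counts on the logit scale. The study counts, and the use of crude method-of-moments estimates in place of the MCMC fitting described in the abstract, are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical per-study 2x2 counts (TP, FN, TN, FP) -- illustrative only.
studies = np.array([
    [45,  5, 80, 20],
    [30, 10, 60, 15],
    [55,  5, 90, 10],
    [20,  8, 40, 12],
])

tp, fn, tn, fp = studies.T
sens = tp / (tp + fn)
spec = tn / (tn + fp)

# Work on the logit scale, as the bivariate normal model does.
logit = lambda p: np.log(p / (1 - p))
inv_logit = lambda x: 1 / (1 + np.exp(-x))
ls, lp = logit(sens), logit(spec)

# Crude method-of-moments analogues of the quantities the Bayesian models
# estimate: pooled accuracy and the sensitivity/specificity correlation.
mu_sens = inv_logit(ls.mean())
mu_spec = inv_logit(lp.mean())
rho = np.corrcoef(ls, lp)[0, 1]

print(f"pooled sensitivity ~ {mu_sens:.3f}, specificity ~ {mu_spec:.3f}, rho ~ {rho:.2f}")
```

A full analysis would place the priors described above on these parameters and sample the posterior rather than reading off point estimates.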


The healthcare industry spends billions of dollars on worker injury and employee turnover. Hospitals and healthcare settings have among the highest rates of lost workdays due to injury. The occupational hazards faced by healthcare workers can be classified as biological, chemical, ergonomic, physical, organizational, and psychosocial. Interventions addressing this range of occupational health risks are therefore needed to prevent injuries, reduce turnover, and contain costs.

The Sacred Vocation Program (SVP) seeks to change the content of work, that is, its meaningfulness, in order to improve work environments. The SVP intervenes at both the individual and the organizational level. First, the SVP attempts to connect healthcare workers with the meaning in their work through a series of five self-discovery group sessions. In a sixth session, the graduates take an oath recommitting them to do their work as a vocation. Once employees are motivated to connect with meaning in their work, a representative employee group meets in a second set of five sessions and suggests organizational changes to create a culture that supports employees in their calling. The employees present their plan to management in a twelfth session, beginning a new phase in the existing dialogue between employees and management.

The SVP was implemented in a large Dallas hospital (almost 1,000 licensed beds). The Baylor University Medical Center (BUMC) Pastoral Care department invited front-line caregivers (primarily Patient Care Assistants, PCAs, or Patient Care Technicians, PCTs) to participate in the SVP. Participants completed SVP questionnaires before and after SVP implementation. Following implementation, employer records on injury, absence, and turnover were collected to further evaluate the program's effectiveness on metrics that are meaningful to managers in assessing organizational performance. This provided an opportunity to perform an epidemiological evaluation of the intervention using two sources of information: employee self-reports and employer administrative data.

The ability to evaluate the effectiveness of the SVP on program outcomes could be limited by the strength of the measures used. An ordinal confirmatory factor analysis (CFA) of the baseline SVP questionnaire measurements examined the construct validity and reliability of the SVP scales. Scales whose item-factor structure was confirmed by the ordinal CFA were evaluated for their psychometric properties (reliability, mean, and ceiling and floor effects). The CFA supported the construct validity of six of the proposed scales: blocks to spirituality, meaning at work, work satisfaction, affective commitment, collaborative communication, and the MHI-5. Five of the six confirmed scales had acceptable reliability (all but the MHI-5 had α > 0.7). All six scales had a high percentage (>30%) of scores at the ceiling. These findings supported the use of these items in the evaluation of change, although strong ceiling effects may hinder discerning change.

Next, the confirmed SVP scales were used to evaluate whether the intervention improved the program constructs. A one-group pretest-posttest design compared participants' self-reports before and after the intervention. It was hypothesized that reduced blocks to spirituality (α = 0.76), meaning at work (α = 0.86), collaborative communication (α = 0.67), and SVP job tasks (α = 0.97) would improve following SVP implementation. The SVP job tasks scale was included even though it was excluded from the ordinal CFA analysis because of a limited sample and high inter-item correlations. Changes in the scaled measurements were assessed using multilevel linear regression methods.

All post-intervention measurements increased (by less than 0.28 points), but only reduced blocks to spirituality was statistically significant (0.22 points on a scale from 1 to 7, p < 0.05) after adjustment for covariates. Stratifying on intervention intensity (high-participation units) strengthened the effects, but they remained statistically non-significant. The findings provide preliminary support for the hypothesis that meaning in work can be improved and, importantly, lend greater credence to any observed improvements in the outcomes. (Abstract shortened by UMI.)
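Several of the reliability figures quoted above are Cronbach's alpha values. A minimal sketch of how such a coefficient is computed from item-level scale data follows; the respondent scores are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of item scores for one scale.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 5-item scale scored 1-7 by 8 respondents.
scores = np.array([
    [6, 6, 5, 6, 7],
    [4, 5, 4, 4, 5],
    [7, 7, 6, 7, 7],
    [3, 4, 3, 3, 4],
    [5, 5, 5, 6, 6],
    [2, 3, 2, 2, 3],
    [6, 5, 6, 6, 6],
    [4, 4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Values above 0.7, as for five of the six confirmed scales, are conventionally taken as acceptable internal consistency.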


The interaction effect is an important scientific interest in many areas of research. A common approach to investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general methods, not specifically on the interaction effect.

In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when an interaction effect exists because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in a model where the continuous covariates are replaced with their discretized levels. Naïve application of the ANOVA method may therefore lead to incorrect conclusions.

Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived from the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. In a simulation study, the proposed method was more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution.

The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and the baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at month 3 among patients who received t-PA therapy.
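A rough sketch of the combine-the-p-values idea is given below. The simulated data, the per-level slope tests, and the use of Fisher's method to pool the p-values are stand-in assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate a trivariate-normal-like setting: response y depends on x1, x2
# and their product (the interaction of interest).
n = 600
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.4 * x1 * x2 + rng.normal(size=n)

# Discretize x2 into tercile levels (as the ANOVA-type approach does) and,
# within each level, run a test; an interaction makes the slope of y on x1
# vary across the levels.
levels = np.digitize(x2, np.quantile(x2, [1 / 3, 2 / 3]))
pvals, slopes = [], []
for lev in np.unique(levels):
    m = levels == lev
    res = stats.linregress(x1[m], y[m])
    slopes.append(res.slope)
    pvals.append(res.pvalue)

# Fisher's combined statistic: -2 * sum(log p) ~ chi-squared with 2k df
# under the global null hypothesis.
chi2 = -2.0 * np.sum(np.log(pvals))
p_combined = stats.chi2.sf(chi2, df=2 * len(pvals))
print(f"slopes per level: {np.round(slopes, 2)}, combined p = {p_combined:.2e}")
```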


Purpose: School districts in the U.S. regularly offer foods that compete with the USDA reimbursable meal, known as 'a la carte' foods. These foods must adhere to state nutritional regulations; however, the implementation of these regulations often differs across districts. The purpose of this study was to compare two methods of offering a la carte foods with respect to students' lunch intake: (1) an extensive a la carte program, in which schools have a separate area for a la carte food sales that includes non-reimbursable entrees; and (2) a moderate a la carte program, which offers a la carte foods on the same serving line as reimbursable meals.

Methods: Direct observation was used to assess children's lunch consumption in six schools across two districts in Central Texas (n = 373 observations). Schools were matched on socioeconomic status. Data collectors were randomly assigned to students and recorded foods obtained, foods consumed, source of food, gender, grade, and ethnicity. Observations were entered into a nutrient database program, FIAS Millennium Edition, to obtain nutritional information. Differences in energy and nutrient intake across lunch sources and districts were assessed using ANOVA and independent t-tests. A linear regression model was applied to control for potential confounders.

Results: Students at schools with extensive a la carte programs consumed significantly more calories, carbohydrates, total fat, saturated fat, calcium, and sodium than students at schools with moderate a la carte offerings (p < .05). Students in the extensive a la carte program consumed approximately 94 more calories than students in the moderate program. There was no significant difference in energy consumption between students who consumed any amount of a la carte food and students who consumed none.

In both districts, students who consumed a la carte offerings were more likely to consume sugar-sweetened beverages, sweets, chips, and pizza than students who consumed no a la carte foods.

Conclusion: The amount, type, and method of offering a la carte foods can significantly affect student dietary intake. This pilot study indicates that when a la carte foods are more available, students consume more calories. The findings underscore the need for further investigation of how the availability of a la carte foods affects children's diets. Guidelines for school a la carte offerings should be strengthened to encourage the consumption of healthful foods and appropriate energy intake.
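The unadjusted district comparison reported above rests on independent t-tests. A minimal sketch with invented intake data follows; the means, spread, and sample sizes are assumptions chosen only to mirror the roughly 94-calorie difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-student lunch energy intake (kcal) under the two programs.
extensive = rng.normal(650.0, 120.0, size=180)   # extensive a la carte
moderate = rng.normal(556.0, 120.0, size=193)    # moderate a la carte

# Welch's independent-samples t-test (variances not assumed equal).
t, p = stats.ttest_ind(extensive, moderate, equal_var=False)
diff = extensive.mean() - moderate.mean()
print(f"difference = {diff:.0f} kcal, t = {t:.2f}, p = {p:.2e}")
```

The study's adjusted estimates additionally controlled for confounders via linear regression.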


The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in the HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Reporting and interpreting effect sizes in HLGM analyses has therefore received increasing emphasis in recent years, owing to the limitations of, and growing criticism directed at, statistical hypothesis testing. Most researchers nevertheless fail to report these model-implied effect sizes for comparing group trajectories, along with their corresponding confidence intervals, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in the HLGM are lacking, as are computing packages in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in the HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in the HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between those trajectories can still be large at some time points. Effect sizes between group trajectories in an HLGM analysis therefore provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral-t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they are not met.
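A minimal sketch of the two ingredients discussed above, a standardized difference between groups at a fixed time point and a bootstrap confidence interval around it, using invented group outcomes rather than model-implied trajectory values. A simple percentile bootstrap stands in here for the bias-corrected and accelerated (BCa) interval the project recommends.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented group outcomes at one time point (stand-ins for model-implied
# trajectory values from a fitted HLGM).
g1 = rng.normal(10.0, 2.0, size=60)
g2 = rng.normal(11.0, 2.0, size=60)

def cohens_d(a, b):
    # Pooled-SD standardized mean difference between two groups.
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    return (b.mean() - a.mean()) / sp

def bootstrap_ci(a, b, stat, n_boot=2000, alpha=0.05):
    # Percentile bootstrap CI: resample each group with replacement and
    # take quantiles of the resampled statistic.
    reps = np.empty(n_boot)
    for i in range(n_boot):
        reps[i] = stat(rng.choice(a, size=len(a), replace=True),
                       rng.choice(b, size=len(b), replace=True))
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

d = cohens_d(g1, g2)
lo, hi = bootstrap_ci(g1, g2, cohens_d)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```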


It is still an open question how equilibrium warming in response to increasing radiative forcing - the specific equilibrium climate sensitivity S - depends on background climate. We here present palaeodata-based evidence on the state dependency of S, by using CO2 proxy data together with a 3-D ice-sheet-model-based reconstruction of land ice albedo over the last 5 million years (Myr). We find that the land ice albedo forcing depends non-linearly on the background climate, while any non-linearity of CO2 radiative forcing depends on the CO2 data set used. This non-linearity has not, so far, been accounted for in similar approaches due to previously more simplistic approximations, in which land ice albedo radiative forcing was a linear function of sea level change. The latitudinal dependency of ice-sheet area changes is important for the non-linearity between land ice albedo and sea level. In our set-up, in which the radiative forcing of CO2 and of the land ice albedo (LI) is combined, we find a state dependence in the calculated specific equilibrium climate sensitivity, S[CO2,LI], for most of the Pleistocene (last 2.1 Myr). During Pleistocene intermediate glaciated climates and interglacial periods, S[CO2,LI] is on average ~ 45 % larger than during Pleistocene full glacial conditions. In the Pliocene part of our analysis (2.6-5 Myr BP) the CO2 data uncertainties prevent a well-supported calculation for S[CO2,LI], but our analysis suggests that during times without a large land ice area in the Northern Hemisphere (e.g. before 2.82 Myr BP), the specific equilibrium climate sensitivity, S[CO2,LI], was smaller than during interglacials of the Pleistocene. We thus find support for a previously proposed state change in the climate system with the widespread appearance of northern hemispheric ice sheets. 
This study points, for the first time, to a previously overlooked non-linearity in the land ice albedo radiative forcing, which is important for similar palaeodata-based approaches to calculating climate sensitivity. However, the implications of this study for the warming expected under CO2 doubling are not yet entirely clear, since the details of the necessary corrections for other slow feedbacks are not fully known and the uncertainties in the ice-sheet simulations and global temperature reconstructions are large.
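The specific equilibrium climate sensitivity is the ratio of the temperature response to the combined radiative forcing, S[CO2,LI] = ΔT / (ΔR_CO2 + ΔR_LI). A back-of-the-envelope sketch using the standard logarithmic approximation for CO2 forcing is shown below; all numerical values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def co2_forcing(c, c0=278.0):
    # Standard logarithmic approximation for CO2 radiative forcing (W m^-2).
    return 5.35 * np.log(c / c0)

dT = 4.5                                            # temperature change (K), assumed
dR_co2 = co2_forcing(280.0) - co2_forcing(190.0)    # glacial-interglacial CO2 forcing
dR_li = 3.2                                         # land ice albedo forcing (W m^-2), assumed

# Specific equilibrium climate sensitivity: warming per unit forcing.
S = dT / (dR_co2 + dR_li)
print(f"S[CO2,LI] ~ {S:.2f} K per W m^-2")
```

The paper's point is that ΔR_LI is a non-linear function of the background climate state, so S computed this way varies between glacial and interglacial conditions.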


We report the intercalibration of paleomagnetic secular variation (PSV) and radiocarbon dates of two expanded postglacial sediment cores from geographically proximal, but oceanographically and sedimentologically contrasting settings. The objective is to improve relative correlation and chronology over what can be achieved with either method alone. Core MD99-2269 was taken from the Húnaflóaáll Trough on the north Iceland shelf. Core MD99-2322 was collected from the Kangerlussuaq Trough on the east Greenland margin. Both cores are well dated, with 27 and 20 accelerator mass spectrometry 14C dates for cores 2269 and 2322, respectively. Paleomagnetic measurements made on u channel samples document a strong, stable, single-component magnetization. The temporal similarities of paleomagnetic inclination and declination records are shown using each core's independent calibrated radiocarbon age model. Comparison of the PSV records reveals that the relative correlation between the two cores could be further improved. Starting in the depth domain, tie points initially based on calibrated 14C dates are either adjusted or added to maximize PSV correlations. Radiocarbon dates from both cores are then combined on a common depth scale resulting from the PSV correlation. Support for the correlation comes from the consistent interweaving of dates, correct alignment of the Saksunarvatn tephra, and the improved correlation of paleoceanographic proxy data (percent carbonate). These results demonstrate that PSV correlation used in conjunction with 14C dates can improve relative correlation and also regional chronologies by allowing dates from various stratigraphic sequences to be combined into a single, higher dating density, age-to-depth model.
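The core of the tie-point procedure is mapping dated depths from one core onto the other's depth scale so that the radiocarbon dates can be merged into a single age-depth model. A minimal sketch with invented tie points and dates (none of these numbers are the published values):

```python
import numpy as np

# PSV-based tie points mapping depths in core 2322 onto the depth scale of
# core 2269 (hypothetical values, metres).
ties_2322 = np.array([1.0, 4.0, 9.0, 15.0])
ties_2269 = np.array([0.8, 3.5, 8.0, 14.2])

# 14C dates from core 2322: (depth in m, calibrated age in cal yr BP),
# illustrative numbers only.
dates_2322 = np.array([[2.0, 1500.0],
                       [6.0, 4200.0],
                       [12.0, 8900.0]])

# Transfer each dated depth onto the common (2269) depth scale by
# piecewise-linear interpolation between tie points, so dates from both
# cores can be combined into one higher-density age-depth model.
depths_on_2269 = np.interp(dates_2322[:, 0], ties_2322, ties_2269)
merged = np.column_stack([depths_on_2269, dates_2322[:, 1]])
print(merged)
```

Consistent interweaving of the transferred dates with the host core's own dates is one of the checks the study uses to support the correlation.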


This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on the reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach which, in contrast to classical approaches to on-board motion estimation, does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited for challenging environments, in particular traffic scenarios, in which information is scarce and the homography computed from the images is often inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct inaccurate ones, hence producing a reliable homography estimate at each instant. It is based on the evaluation of the difference between the predicted and the observed transformations, measured by the spectral norm of the associated matrix of differences. Moreover, an example is provided of how to use the information extracted from ground plane estimation to achieve object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
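The gating step described above compares the predicted and observed transformations via the spectral norm of their difference. A minimal sketch follows; the normalization convention and the threshold value are assumptions for illustration.

```python
import numpy as np

def spectral_norm(m):
    # Spectral norm = largest singular value of m.
    return np.linalg.svd(m, compute_uv=False)[0]

def accept_homography(h_pred, h_obs, threshold=0.1):
    # Scale both homographies to a common normalization, then gate on the
    # spectral norm of the matrix of differences.
    h_pred = h_pred / h_pred[2, 2]
    h_obs = h_obs / h_obs[2, 2]
    return spectral_norm(h_pred - h_obs) < threshold

h_pred = np.eye(3)                        # predicted ground plane transform
h_good = np.array([[1.01, 0.00, 0.02],    # measurement close to prediction
                   [0.00, 0.99, -0.01],
                   [0.00, 0.00, 1.00]])
h_bad = np.array([[1.5, 0.2, 4.0],        # grossly erroneous measurement
                  [0.1, 0.7, -2.0],
                  [0.0, 0.0, 1.0]])

print(accept_homography(h_pred, h_good), accept_homography(h_pred, h_bad))
```

Rejected measurements would be replaced by the prediction, keeping the estimate reliable at each instant.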


A rapid, economic and sensitive chemiluminescent method involving flow-injection analysis was developed for the determination of dipyrone in pharmaceutical preparations. The method is based on the chemiluminescent reaction between quinolinic hydrazide and hydrogen peroxide in a strongly alkaline medium, in which vanadium(IV) acts as a catalyst. Principal chemical and physical variables involved in the flow-injection system were optimized using a modified simplex method. The variations in the quantum yield observed when dipyrone was present in the reaction medium were used to determine the concentration of this compound. The proposed method requires no preconcentration steps and reliably quantifies dipyrone over the linear range 1–50 µg/mL. In addition, a sample throughput of 85 samples/h is possible. Copyright © 2011 John Wiley & Sons, Ltd.
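Quantification over a linear range reduces to a straight-line calibration and its inversion. A minimal sketch with invented calibration points spanning the reported 1-50 µg/mL range (the signal values are assumptions, not measured data):

```python
import numpy as np

# Hypothetical calibration data: chemiluminescence intensity (arbitrary
# units) versus dipyrone concentration (ug/mL) over the linear range.
conc = np.array([1.0, 5.0, 10.0, 20.0, 35.0, 50.0])
signal = np.array([2.1, 10.3, 20.4, 40.8, 71.5, 101.9])

# Least-squares straight line: signal = slope * conc + intercept.
slope, intercept = np.polyfit(conc, signal, 1)

def quantify(s):
    # Invert the calibration to estimate concentration from a signal.
    return (s - intercept) / slope

unknown_signal = 55.0
print(f"estimated dipyrone: {quantify(unknown_signal):.1f} ug/mL")
```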


Instability of the orthogonal swept attachment-line boundary layer has received attention from local [1, 2] and global [3-5] analysis methods over several decades, owing to the significance of this model for transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and establish the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack (AoA) on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond its computational advantages, the ODE-based model allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connection with the disturbances of the orthogonal stability problem.


We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral-domain modeling techniques, with the aim of improving accuracy rates. This new information is only weakly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous speech modeling scheme. Tests were performed to evaluate the improvement for our detection task, based on sustained speech alone as well as combined with a continuous speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. The results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to consider tools that could be used to improve short-time analysis.


We show a procedure for constructing a probabilistic atlas based on affine moment descriptors. It uses a normalization procedure over the labeled atlas. The proposed linear registration is defined by closed-form expressions involving only geometric moments. This procedure applies both to atlas construction and to atlas-based segmentation. We model the likelihood term for each voxel and each label using parametric or nonparametric distributions, and the prior term is determined by applying the vote rule. The probabilistic atlas is built with the variability of our linear registration. We have two segmentation strategies: (a) the proposed affine registration brings the target image into the coordinate frame of the atlas, or (b) the probabilistic atlas is non-rigidly aligned with the target image, after first being aligned to it with our affine registration. Finally, we adopt a graph-cut Bayesian framework for implementing the atlas-based segmentation.
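A closed-form moment-based normalization of the kind described above can be sketched as follows: translate by the centroid (first-order moments) and whiten by the second-order central moment matrix. The 2-D synthetic blob, the binary-mask treatment, and the specific whitening convention are assumptions for illustration, not the paper's exact registration.

```python
import numpy as np

def moment_normalization(img):
    # Closed-form affine normalization from geometric moments: translate by
    # the centroid and whiten by the second-order central moment matrix,
    # bringing shapes into a common coordinate frame. img is a binary mask.
    ys, xs = np.nonzero(img)
    coords = np.stack([xs, ys], axis=1).astype(float)
    centroid = coords.mean(axis=0)
    centered = coords - centroid
    cov = centered.T @ centered / len(centered)
    # Whitening transform A = cov^(-1/2) via eigendecomposition.
    vals, vecs = np.linalg.eigh(cov)
    A = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return A, centroid

# A small synthetic blob: an axis-aligned rectangle of ones.
img = np.zeros((40, 40))
img[10:30, 5:35] = 1.0
A, c = moment_normalization(img)
print("centroid:", c, "\nwhitening matrix:\n", np.round(A, 3))
```

Applying x -> A (x - centroid) maps the shape to zero mean and identity second moments, which is the sense in which such a registration needs no iterative optimization.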


Instability analysis of the compressible orthogonal swept leading-edge boundary layer flow was performed in the context of BiGlobal linear theory [1, 2]. An algorithm was developed exploiting the sparsity of the matrix discretizing the PDE-based eigenvalue problem. This allowed use of the MUMPS sparse linear algebra package [3] to obtain a direct solution of the linear systems associated with the Arnoldi iteration. The developed algorithm was then applied to efficiently analyze the effect of compressibility on the stability of the swept leading-edge boundary layer and to obtain neutral curves of this flow as a function of the Mach number in the range 0 ≤ Ma ≤ 1. The present numerical results fully confirm the asymptotic theory results of Theofilis et al. [4]. Up to the maximum Mach number studied, an increase in this parameter was found to reduce the critical Reynolds number and the range of unstable spanwise wavenumbers.
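The sparse shift-invert Arnoldi strategy described above can be sketched on a model problem. In the sketch below, SciPy's built-in sparse direct solver stands in for MUMPS, and a 1-D Dirichlet Laplacian stands in for the PDE-based stability operator; both substitutions are simplifying assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigs

# Sparse discretization of a model eigenvalue problem: the 1-D Laplacian
# with Dirichlet boundaries on (0, 1).
n = 200
h = 1.0 / (n + 1)
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
lap = sparse.diags([off, main, off], [-1, 0, 1], format='csc') / h**2

# Shift-invert Arnoldi: (A - sigma*I) is factorized once by a sparse direct
# solver (the role MUMPS plays in the paper) and the iteration returns the
# eigenvalues nearest the shift.
sigma = 0.0
vals = eigs(lap, k=4, sigma=sigma, which='LM', return_eigenvectors=False)
vals = np.sort(vals.real)[::-1]

# Analytic eigenvalues of the discrete Dirichlet Laplacian for comparison:
# lambda_k = -(4 / h^2) * sin^2(k * pi * h / 2).
exact = np.array([-4.0 / h**2 * np.sin(np.pi * k * h / 2) ** 2
                  for k in range(1, 5)])
print(np.round(vals, 2))
print(np.round(exact, 2))
```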


This paper proposes an interleaved multiphase buck converter with a minimum-time control strategy for envelope amplifiers in high-efficiency RF power amplifiers. The envelope amplifier combines the proposed converter with a linear regulator in series. High system efficiency can be obtained by modulating the supply voltage of the envelope amplifier with the fast output-voltage variation of the converter, which operates at several particular duty cycles that achieve total ripple cancellation. The transient model for minimum-time control is explained, and the calculation of the transient times, which are pre-calculated and stored in a look-up table, is presented. The filter design trade-off that limits the envelope-modulation capability is also discussed. Experimental results verify the fast voltage transient obtained with a 4-phase buck prototype.
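The ripple-cancellation property behind the "particular duty cycles" can be checked numerically: for an N-phase interleaved buck, the summed inductor-current ripple vanishes at duty cycles D = m/N (m = 1..N-1). The normalized triangular ripple model below is a simplifying assumption (component values and switching frequency are abstracted away).

```python
import numpy as np

def total_ripple(d, n_phases=4, n_samples=4000):
    # Sum the normalized triangular inductor-current ripple of n_phases
    # legs, phase-shifted by 1/n_phases of a switching period, and return
    # the peak-to-peak value of the sum.
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    total = np.zeros_like(t)
    for k in range(n_phases):
        ph = (t + k / n_phases) % 1.0
        # Triangle: rises for a fraction d of the period, falls for 1 - d.
        rise = ph < d
        ripple = np.where(rise, ph / d, (1.0 - ph) / (1.0 - d))
        total += ripple - 0.5
    return total.max() - total.min()

for d in [0.25, 0.5, 0.6, 0.75]:
    print(f"D = {d:.2f}: summed ripple (peak-to-peak) = {total_ripple(d):.4f}")
```

At D = 1/4, 1/2, and 3/4 the four shifted triangles sum to a constant, so the output filter only has to handle the fast transients, which is what makes these operating points attractive for envelope tracking.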