879 results for model testing
Abstract:
BACKGROUND The noble gas xenon is considered a neuroprotective agent, but availability of the gas is limited. Studies on neuroprotection with the abundant noble gases helium and argon have demonstrated mixed results, and data regarding neuroprotection after cardiac arrest are scant. We tested the hypothesis that administration of 50% helium or 50% argon for 24 h after resuscitation from cardiac arrest improves clinical and histological outcome in our 8 min rat cardiac arrest model. METHODS Forty animals had cardiac arrest induced with intravenous potassium/esmolol and were randomized to post-resuscitation ventilation with helium/oxygen, argon/oxygen or air/oxygen for 24 h. Eight additional animals without cardiac arrest served as a reference; these animals were not randomized and were not included in the statistical analysis. The primary outcome was assessment of neuronal damage in histology of region I of the hippocampus proper (CA1) in those animals surviving until day 5. Secondary outcomes were evaluation of neurobehavior by daily testing with a Neurodeficit Score (NDS), the Tape Removal Test (TRT), a simple vertical pole test (VPT) and the Open Field Test (OFT). Because of the non-parametric distribution of the data, the histological assessments were compared with the Kruskal-Wallis test. The treatment effect in repeatedly measured assessments was estimated with linear regression with clustered robust standard errors (SE), for which normality is less important. RESULTS Twenty-nine of 40 rats survived until day 5, with significant initial neurobehavioral deficits but rapid improvement within all groups randomized to cardiac arrest. There were no statistically significant differences between groups in either the histological or the neurobehavioral assessments. CONCLUSIONS The replacement of air with either helium or argon in a 50:50 air/oxygen mixture for 24 h did not improve histological or clinical outcome in rats subjected to 8 min of cardiac arrest.
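The Kruskal-Wallis comparison used for the histological assessments in this abstract is a short rank-based computation. A minimal sketch of the tie-corrected H statistic, run on purely illustrative damage scores (not the study's data):

```python
from collections import Counter
from itertools import chain

def kruskal_wallis_h(groups):
    """Tie-corrected Kruskal-Wallis H statistic for k independent samples."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    counts = Counter(data)
    # average rank of value v: ranks (smaller[v]+1) .. (smaller[v]+counts[v])
    smaller, running = {}, 0
    for v in sorted(counts):
        smaller[v] = running
        running += counts[v]
    rank_of = {v: smaller[v] + (counts[v] + 1) / 2.0 for v in counts}
    h = (12.0 / (n * (n + 1))
         * sum(sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups)
         - 3.0 * (n + 1))
    ties = sum(t ** 3 - t for t in counts.values())  # tie correction term
    return h / (1.0 - ties / float(n ** 3 - n))

# hypothetical CA1 damage scores per ventilation group (illustrative only)
helium, argon, air = [1, 2, 3], [4, 5, 6], [7, 8, 9]
print(round(kruskal_wallis_h([helium, argon, air]), 2))  # 7.2 on this toy data
```

With no ties the correction factor is 1 and the statistic reduces to the familiar rank-sum formula; the H value is then referred to a chi-square distribution with k − 1 degrees of freedom.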
Abstract:
The current analysis examined the association of several demographic and behavioral variables with prior HIV testing within a population of injection drug users (IDUs) living in Harris County, Texas in 2005 (n=563). After completing the initial univariate analyses of all potential predictors, a multivariable model was created. This model was designed to guide future intervention efforts. Data used in this analysis were collected by the University of Texas School of Public Health in association with the Houston Department of Health and Human Services for the first IDU cycle of the National HIV Behavioral Surveillance System. About 76% of the IDUs reported previously being tested for HIV. Demographic variables that displayed a significant association with prior testing during the univariate analyses included age, race/ethnicity, birth outside the United States, education level, recent arrest, and current health insurance coverage. Several drug-related and sexual behaviors also demonstrated significant associations with prior testing, including age of first injection drug use, heroin use, methamphetamine use, source of needles or syringes, consistent use of new needles, recent visits to a shooting gallery or similar location, previous alcohol or drug treatment, condom use during the most recent sexual encounter, and having sexual partners who also used injection drugs. Additionally, the univariate analyses revealed that recent use of health or HIV prevention services was associated with prior HIV testing. The final multivariable model included age, race/ethnicity, recent arrest, previous alcohol or drug treatment, and heroin use.
Abstract:
Tuberculosis (TB) is an infectious disease of great public health importance, particularly to institutions that provide health care to large numbers of TB patients, such as Parkland Hospital in Dallas, TX. The purpose of this retrospective chart review was to analyze differences between TB positive and TB negative patients and to determine whether there were variables that could be used to build a predictive model for the emergency department, reducing the overall number of suspected TB patients sent to respiratory isolation for TB testing. This study included patients who presented to the Parkland Hospital emergency department between November 2006 and December 2007 and were isolated and tested for TB. Outcome of TB was defined as a positive sputum AFB test or a positive M. tuberculosis culture result. Data were collected using the UT Southwestern Medical Center computerized database OACIS and included demographic information, TB risk factors, physical symptoms, and clinical results. Only two variables were significantly (P<0.05) related to TB outcome: dyspnea (shortness of breath) (P<0.001) and abnormal x-ray (P<0.001). Marginally significant variables included hemoptysis (P=0.06), weight loss (P=0.11), night sweats (P=0.20), history of homelessness or incarceration (P=0.15), and history of positive skin PPD (P=0.19). Using a combination of significant and marginally significant variables, a predictive model was designed that demonstrated a specificity of 24% and a sensitivity of 70%. In conclusion, a predictive model for TB outcome based on patients who presented to the Parkland Hospital emergency department between November 2006 and December 2007 was unsuccessful, given the limited number of variables that differed significantly between TB positive and TB negative patients. It is suggested that a future prospective cohort study be implemented to collect data on TB positive and TB negative patients.
A more thorough prospective collection of data may lead to clearer comparisons between TB positive and TB negative patients and, ultimately, to the design of a more sensitive predictive model for TB outcome.
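The reported 70% sensitivity and 24% specificity are simple ratios from the 2×2 table of predicted versus actual TB status. A minimal sketch, with hypothetical counts chosen only to produce ratios of that size (not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 confusion-table counts."""
    sensitivity = tp / (tp + fn)  # fraction of TB-positive patients flagged
    specificity = tn / (tn + fp)  # fraction of TB-negative patients cleared
    return sensitivity, specificity

# hypothetical counts for illustration only
se, sp = sens_spec(tp=7, fn=3, tn=24, fp=76)
print(se, sp)  # 0.7 0.24
```

A low specificity like 24% means most TB-negative patients would still be flagged by the model, which is consistent with the abstract's conclusion that the model could not usefully reduce isolation testing.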
Abstract:
Background. At present, prostate cancer screening (PCS) guidelines require a discussion of risks, benefits, alternatives, and personal values, making decision aids an important tool to help convey information and clarify values. Objective. The overall goal of this study was to provide evidence of the reliability and validity of a PCS anxiety measure and the Decisional Conflict Scale (DCS). Methods. Using data from a randomized, controlled PCS decision aid trial that measured PCS anxiety at baseline and DCS at baseline (T0) and at two weeks (T2), four psychometric properties were assessed: (1) internal consistency reliability, indicated by factor analysis, intraclass correlations, and Cronbach's α; (2) construct validity, indicated by patterns of Pearson correlations among subscales; (3) discriminant validity, indicated by the measure's ability to discriminate between undecided men and those with a definite screening intention; and (4) factor validity and invariance, using confirmatory factor analyses (CFA). Results. The PCS anxiety measure had adequate internal consistency reliability and good construct and discriminant validity. CFAs indicated that the 3-factor model did not have adequate fit. CFAs for a general PCS anxiety measure and a PSA anxiety measure indicated adequate fit. The general PCS anxiety measure was invariant across clinics. The DCS had adequate internal consistency reliability, except for the support subscale, and adequate discriminant validity. Good construct validity was found at the private clinic, but only for the feeling-informed subscale at the public clinic. The traditional DCS did not have adequate fit at T0 or T2. The alternative DCS had adequate fit at T0 but was not identified at T2. Factor loadings indicated that two subscales, feeling informed and feeling clear about values, were not distinct factors. Conclusions. Our general PCS anxiety measure can be used in PCS decision aid studies.
The alternative DCS may be appropriate for men eligible for PCS. Implications. More emphasis needs to be placed on the development of PCS anxiety items relating to testing procedures. We recommend that the two DCS versions be validated in other samples of men eligible for PCS and in other health care decisions that involve uncertainty.
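Internal consistency reliability via Cronbach's α, assessed throughout this abstract, is a short computation over item-score columns. A minimal sketch with hypothetical item responses (not trial data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns of equal length."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(sample_var(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / sample_var(totals))

# hypothetical anxiety-item responses from 4 respondents (illustrative only)
item1 = [1, 2, 3, 4]
item2 = [1, 2, 3, 4]
item3 = [1, 2, 3, 4]
print(round(cronbach_alpha([item1, item2, item3]), 6))  # approx 1.0: perfectly correlated items
```

Perfectly correlated items give α = 1; uncorrelated items drive α toward 0, which is why a weak subscale (like the support subscale here) shows up as low internal consistency.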
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been utilized to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on the mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution.
The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
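The abstract does not specify how the per-level p-values are combined into one overall test; Fisher's method is one standard choice for combining independent p-values and is assumed here purely for illustration:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: overall p-value from k independent p-values.

    Under H0, -2 * sum(log p_i) is chi-square with 2k degrees of freedom;
    for even degrees of freedom the survival function has a closed form.
    """
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # = x/2 with x ~ chi2(2k)
    term, total = 1.0, 1.0
    for i in range(1, k):                      # sum_{i=0}^{k-1} half^i / i!
        term *= half / i
        total += term
    return math.exp(-half) * total

# combine hypothetical p-values from three discretized covariate levels
print(round(fisher_combine([0.04, 0.20, 0.15]), 4))
```

With a single p-value the formula returns that p-value unchanged, which is a convenient sanity check on the closed-form survival function.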
Abstract:
Conventional designs of animal bioassays allocate the same number of animals to control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purpose of such bioassays is (a) to determine whether or not the substance exhibits carcinogenic properties, and (b) if so, to estimate the human response at relatively low doses. In this study, it has been found that the optimal allocation to the experimental groups which, in some sense, minimizes the error of the estimated response for low dose extrapolation is associated with the dose level and tumor risk. The number of dose levels has been investigated at an affordable experimental cost. The administered dose pattern of 1 MTD, 1/2 MTD, 1/4 MTD, and so on, plus control, gives the most reasonable arrangement for low dose extrapolation. An arrangement of five dose groups may make the highest dose trivial. A four-dose design can circumvent this problem and also has one degree of freedom for testing the goodness-of-fit of the response model. The methodology is illustrated with data on liver tumors induced in mice in a lifetime study of feeding dieldrin (Walker et al., 1973). The results are compared with conclusions drawn from other studies.
Abstract:
The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and medical sciences in recent decades. Mostly, researchers who conduct HLGM are interested in the treatment effect on individual trajectories, which can be indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes in HLGM has received increasing emphasis in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons and their corresponding confidence intervals in HLGM analyses, because appropriate, standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM are lacking, as are routines in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes.
We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals with the suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even though the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analysis provide additional and meaningful information for assessing the group effect on individual trajectories. In addition, we compared the three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
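The recommended bias-corrected and accelerated (BCa) bootstrap also needs a jackknife-based acceleration term; the simpler percentile bootstrap below only illustrates the core idea of bootstrapping a standardized group difference (Cohen's d), using made-up trajectory gains rather than the simulated datasets:

```python
import random
import statistics as st

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) with pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / pooled_var ** 0.5

def percentile_ci(a, b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for Cohen's d (simpler than the BCa method)."""
    rng = random.Random(seed)
    reps = sorted(
        cohens_d([rng.choice(a) for _ in a], [rng.choice(b) for _ in b])
        for _ in range(n_boot)
    )
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

# hypothetical trajectory gains for two treatment groups (illustrative only)
g1 = [5.1, 6.0, 5.5, 6.2, 5.8, 6.4, 5.3, 6.1]
g2 = [4.2, 4.9, 4.5, 5.0, 4.4, 4.8, 4.1, 4.7]
lo, hi = percentile_ci(g1, g2)
print(round(cohens_d(g1, g2), 2), round(lo, 2), round(hi, 2))
```

The BCa method replaces the raw percentiles with bias- and skewness-adjusted ones, which matters most for small samples and skewed bootstrap distributions like the one above.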
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true values of the parameters and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed in the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias of the estimator of the coefficient was smallest when all of the updated measurements were used in the model, compared with two models that used selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
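The type II censoring mechanism in this design stops follow-up once a fixed number of failures has occurred. A simplified sketch of that mechanism alone (unit-hazard exponential failure times, as in the simulation design; the piecewise-interval structure of the full model is omitted):

```python
import random

def simulate_type2(n, failure_rate, seed=42):
    """Type II censoring: follow-up stops once n * failure_rate failures occur.

    Failure times are exponential with hazard 1. Returns (time, event) pairs,
    where event = 1 marks an observed failure and event = 0 a censored subject.
    """
    rng = random.Random(seed)
    times = sorted(rng.expovariate(1.0) for _ in range(n))
    n_events = int(n * failure_rate)
    cutoff = times[n_events - 1]  # time of the last observed failure
    return [(min(t, cutoff), int(t <= cutoff)) for t in times]

data = simulate_type2(n=500, failure_rate=0.10)
print(sum(e for _, e in data))  # 50: exactly n * failure_rate observed failures
```

Unlike type I (fixed follow-up time) censoring, the number of events here is fixed by design, which is how the simulation pins the failure rate at exactly 0.05, 0.10 or 0.20.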
Abstract:
This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur's factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions, satisfying this requirement, for registering rigid and nonrigid 3D targets. The second type comprises the compositional registration algorithms, where the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function, and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data.
We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are suitable for image registration but not for tracking.
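The brightness-error minimization at the core of these registration algorithms can be illustrated in one dimension. The toy sketch below uses the additive (Lucas-Kanade style) Gauss-Newton update to recover a translation of a synthetic 1-D "image"; it is only an illustration of the brightness-error idea, not the thesis's factorization-based or compositional algorithms:

```python
import math

def image(x):
    """Continuous 1-D 'image': a Gaussian bump."""
    return math.exp(-x * x / 2.0)

def d_image(x):
    """Analytic brightness gradient of the image."""
    return -x * math.exp(-x * x / 2.0)

def register_additive(template_shift, n_iter=100):
    """Additive estimation of a 1-D translation parameter p.

    Minimizes sum_x [T(x) - I(x + p)]^2 with Gauss-Newton updates
    delta_p = sum(r * g) / sum(g * g), where g is the brightness gradient.
    """
    xs = [i * 0.1 for i in range(-30, 31)]
    template = [image(x + template_shift) for x in xs]  # T(x) = I(x + p_true)
    p = 0.0
    for _ in range(n_iter):
        num = den = 0.0
        for x, t in zip(xs, template):
            g = d_image(x + p)
            num += (t - image(x + p)) * g  # residual times gradient
            den += g * g
        p += num / den
    return p

p_hat = register_additive(0.4)
print(round(p_hat, 4))  # 0.4: recovers the true shift
```

The same structure generalizes to 2D/3D warps with more parameters, where the scalar update becomes a normal-equations solve; the efficiency question the thesis studies is how to avoid recomputing the gradient terms at every iteration.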
Abstract:
This paper presents a simple mathematical model to estimate shading losses on PV arrays. The model is applied directly to power calculations, without the need to consider the whole current–voltage curve. This allows the model to be used with common yield estimation software. The model takes into account both the shaded fraction of the array area and the number of blocks (a group of solar cells protected by a bypass diode) affected by shade. The results of an experimental testing campaign on several shaded PV arrays to check the validity of the model are also reported.
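The abstract says output power depends on both the shaded fraction of the array area and the number of bypass-diode blocks affected. The derating form below is a hypothetical illustration of such a two-factor model, not the paper's actual fitted expression:

```python
def shaded_power(p_unshaded, shaded_fraction, blocks_shaded, blocks_total):
    """Hypothetical two-factor derating (illustrative form only).

    Power is reduced once by the shaded area fraction and again by the
    fraction of bypass-diode blocks touched by shade.
    """
    if not 0.0 <= shaded_fraction <= 1.0:
        raise ValueError("shaded_fraction must be in [0, 1]")
    area_term = 1.0 - shaded_fraction
    block_term = 1.0 - blocks_shaded / (blocks_total + 1.0)
    return p_unshaded * area_term * block_term

print(shaded_power(1000.0, 0.0, 0, 6))  # 1000.0: no shade, full power
print(shaded_power(1000.0, 0.2, 2, 6))  # 20% of area shaded, 2 of 6 blocks hit
```

Whatever its exact form, a model of this shape operates directly on power, which is what lets it plug into yield estimation software without simulating the full current–voltage curve.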
Abstract:
Belief propagation (BP) is a technique for distributed inference in wireless networks and is often used even when the underlying graphical model contains cycles. In this paper, we propose a uniformly reweighted BP scheme that reduces the impact of cycles by weighting messages by a constant "edge appearance probability" ρ ≤ 1. We apply this algorithm to distributed binary hypothesis testing problems (e.g., distributed detection) in wireless networks with Markov random field models. We demonstrate that in the considered setting the proposed method outperforms standard BP, while maintaining similar complexity. We then show that the optimal ρ can be approximated as a simple function of the average node degree, and can hence be computed in a distributed fashion through a consensus algorithm.
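For reference, the ρ = 1 case of such a scheme is standard sum-product BP, which is exact on cycle-free graphs; the paper's reweighting modifies the message updates using the constant edge appearance probability ρ. A minimal check of the standard (ρ = 1) case on a binary three-node chain:

```python
from itertools import product

phi = [[0.7, 0.3], [0.5, 0.5], [0.2, 0.8]]  # node potentials (binary states)
psi = [[1.2, 0.8], [0.8, 1.2]]              # symmetric pairwise potential

def bp_marginals():
    """Sum-product BP on the chain 0-1-2: messages in, beliefs out.

    This is the rho = 1 special case; it is exact here because the chain
    has no cycles.
    """
    m01 = [sum(phi[0][a] * psi[a][b] for a in (0, 1)) for b in (0, 1)]
    m21 = [sum(phi[2][a] * psi[a][b] for a in (0, 1)) for b in (0, 1)]
    m10 = [sum(phi[1][a] * m21[a] * psi[a][b] for a in (0, 1)) for b in (0, 1)]
    m12 = [sum(phi[1][a] * m01[a] * psi[a][b] for a in (0, 1)) for b in (0, 1)]
    beliefs = [
        [phi[0][x] * m10[x] for x in (0, 1)],
        [phi[1][x] * m01[x] * m21[x] for x in (0, 1)],
        [phi[2][x] * m12[x] for x in (0, 1)],
    ]
    return [[b / sum(bel) for b in bel] for bel in beliefs]

def brute_marginals():
    """Exact marginals by enumerating all 2**3 joint configurations."""
    marg = [[0.0, 0.0] for _ in range(3)]
    for x0, x1, x2 in product((0, 1), repeat=3):
        w = (phi[0][x0] * phi[1][x1] * phi[2][x2]
             * psi[x0][x1] * psi[x1][x2])
        for node, x in enumerate((x0, x1, x2)):
            marg[node][x] += w
    return [[m / sum(row) for m in row] for row in marg]

bp, exact = bp_marginals(), brute_marginals()
print(all(abs(a - b) < 1e-9 for r, s in zip(bp, exact) for a, b in zip(r, s)))  # True
```

On loopy graphs this equivalence breaks down, which is exactly the regime where downweighting messages (ρ < 1) is intended to help.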
Abstract:
This paper reports a packaging and calibration procedure for surface mounting of fiber Bragg grating (FBG) sensors to measure strain in rocks. The packaging of the FBG sensors is made with glass fiber and polyester resin and then subjected to tensile loads in order to obtain the strength and deformability parameters necessary to assess the mechanical performance of the sensor packaging. For a specific package, an optimal curing condition has been found, showing good repeatability and adaptability to non-planar surfaces, such as occur in rock engineering. The successfully packaged sensors and electrical strain gages were attached to standard rock specimens of gabbro. Longitudinal and transversal strains under compression loads were measured with both techniques, showing that the response of the FBG sensors is linear and reliable. An analytical model is used to characterize the influences of the rock substrate and the FBG packaging on strain transmission. As a result, we obtained a sensor packaging for non-planar surfaces of a complex natural material, with sensitivity suitable for the very small strains that occur in hard rocks.
Abstract:
Satellites and space equipment are exposed to diffuse acoustic fields during the launch process. The use of adequate techniques to model the response to the acoustic loads is a fundamental task during the design and verification phases. Considering the modal density of each element is necessary to identify the correct methodology. In this report, selection criteria are presented for choosing the correct modelling technique depending on the frequency range. A model satellite's response to acoustic loads is presented, determining the modal densities of each component in different frequency ranges. The paper proposes how to select the mathematical method for each modal density range and examines the differences in the response estimates due to the different techniques used. In addition, methodologies for analysing the intermediate frequency range of the system are discussed. The results are compared with data obtained in an experimental modal test.
Abstract:
Padding materials are commonly used in fruit packing lines with the objective of diminishing impact damage during postharvest handling. Two sensors, the instrumented sphere IS 100 and an impact tester, have been compared to analyze the performance of six different padding materials used in Spanish fruit packing lines. The padding materials tested have been classified according to their capability to decrease the impact intensities inflicted on fruit in packing lines. A procedure to test padding materials has been developed for "Golden" apples. Its basis is a logistic regression to predict bruise probability in fruit. The model combines two kinds of parameters: padding material parameters measured with the IS, and fruit properties.
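A logistic regression for bruise probability, as described in this abstract, maps a linear combination of predictors through the logistic function. The coefficients and predictor names below are made up for illustration; they are not the study's fitted model:

```python
import math

def bruise_probability(impact_accel, firmness, coef=(-8.0, 0.06, 0.02)):
    """Hypothetical logistic model: p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))).

    impact_accel stands in for a padding-material impact parameter and
    firmness for a fruit property; both, and the coefficients, are
    illustrative assumptions only.
    """
    b0, b1, b2 = coef
    z = b0 + b1 * impact_accel + b2 * firmness
    return 1.0 / (1.0 + math.exp(-z))

low = bruise_probability(impact_accel=50, firmness=60)    # gentle impact
high = bruise_probability(impact_accel=150, firmness=60)  # harsh impact
print(low < high)  # True: harder impacts give higher bruise probability
```

Ranking padding materials then amounts to comparing the bruise probabilities their measured impact parameters imply for a given fruit.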