354 results for b-hCG regression curve
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as to data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended towards slightly increased accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the MR for time-segmented summary data (dataset F) being 9.8 and that for raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
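The pre-processing steps described above (outlier removal, moving averages, per-case summarisation) can be illustrated with a minimal sketch; the column names, window size and z-score threshold below are assumptions for illustration, not the thesis's actual settings or data.

```python
# Minimal sketch (not the thesis code): z-score outlier removal,
# moving-average smoothing, and per-case summarisation of a
# physiological time series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
ts = pd.DataFrame({
    "case_id": np.repeat([1, 2], 120),
    "heart_rate": np.concatenate([70 + rng.normal(0, 3, 120),
                                  85 + rng.normal(0, 5, 120)]),
})

def preprocess(group, window=5, z_thresh=3.0):
    x = group["heart_rate"]
    z = (x - x.mean()) / x.std()
    x = x.where(z.abs() < z_thresh)              # drop outliers
    x = x.rolling(window, min_periods=1).mean()  # moving average
    return pd.Series({"hr_mean": x.mean(), "hr_sd": x.std(),
                      "hr_min": x.min(), "hr_max": x.max()})

summary = ts.groupby("case_id").apply(preprocess)
print(summary)
```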
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared with the use of risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
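As a rough illustration of the modelling and evaluation approach described above: the thesis used WEKA algorithms (Cfs subset evaluation, J48, SMO, NNge, logistic regression), so the scikit-learn components below are stand-ins rather than the original pipeline, and the imbalanced synthetic data are purely hypothetical. The sketch shows majority-class under-sampling, filter-type feature selection, and evaluation by misclassification rate (MR), Kappa and AUC.

```python
# Hedged sketch of the evaluation pipeline, not the thesis implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score, accuracy_score

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class in the training data only.
maj, mino = np.where(y_tr == 0)[0], np.where(y_tr == 1)[0]
keep = np.concatenate([np.random.default_rng(0).choice(maj, mino.size, False), mino])
X_bal, y_bal = X_tr[keep], y_tr[keep]

selector = SelectKBest(f_classif, k=8).fit(X_bal, y_bal)   # filter-type selection
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    selector.transform(X_bal), y_bal)

pred = clf.predict(selector.transform(X_te))
prob = clf.predict_proba(selector.transform(X_te))[:, 1]
print("MR   :", 1 - accuracy_score(y_te, pred))
print("Kappa:", cohen_kappa_score(y_te, pred))
print("AUC  :", roc_auc_score(y_te, prob))
```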
Abstract:
Purpose: Progression to the castration-resistant state is the incurable and lethal end stage of prostate cancer, and there is strong evidence that the androgen receptor (AR) still plays a central role in this process. We hypothesize that knocking down AR will have a major effect on inhibiting growth of castration-resistant tumors. Experimental Design: Castration-resistant C4-2 human prostate cancer cells stably expressing a tetracycline-inducible AR-targeted short hairpin RNA (shRNA) were generated to directly test the effects of AR knockdown in C4-2 cells and tumors. Results: In vitro expression of AR shRNA resulted in decreased levels of AR mRNA and protein, decreased expression of prostate-specific antigen (PSA), reduced activation of the PSA-luciferase reporter, and growth inhibition of C4-2 cells. Gene microarray analyses revealed that AR knockdown under hormone-deprived conditions resulted in activation of genes involved in apoptosis, cell cycle regulation, protein synthesis, and tumorigenesis. To ensure that tumors were truly castration-resistant in vivo, inducible AR shRNA-expressing C4-2 tumors were grown in castrated mice to an average volume of 450 mm³. In all of the animals, serum PSA decreased, and in 50% of them, there was complete tumor regression and disappearance of serum PSA. Conclusions: Whereas castration is ineffective in castration-resistant prostate tumors, knockdown of AR can decrease serum PSA, inhibit tumor growth, and frequently cause tumor regression. This study is the first direct evidence that knockdown of AR is a viable therapeutic strategy for treatment of prostate tumors that have already progressed to the castration-resistant state.
Abstract:
This study was designed to derive central and peripheral oxygen transmissibility (Dk/t) thresholds for soft contact lenses to avoid hypoxia-induced corneal swelling (increased corneal thickness) during open-eye wear. Central and peripheral corneal thicknesses were measured in a masked and randomized fashion for the left eye of each of seven subjects before and after 3 h of afternoon wear of five conventional hydrogel and silicone hydrogel contact lens types offering a range of Dk/t from 2.4 units to 115.3 units. Curve fitting for plots of change in corneal thickness versus central and peripheral Dk/t found threshold values of 19.8 and 32.6 units to avoid corneal swelling during open-eye contact lens wear for a typical wearer. Although some conventional hydrogel soft lenses are able to meet this criterion for either central or peripheral lens areas (depending on lens power), in general, no conventional hydrogel soft lenses meet both the central and peripheral thresholds. Silicone hydrogel contact lenses typically meet both the central and peripheral thresholds, and use of these lenses therefore avoids swelling in all regions of the cornea. © 2009 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater 92B: 361–365, 2010
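A hedged sketch of the kind of curve fitting described above: an assumed exponential-decay relationship between swelling and Dk/t is fitted to made-up data points and solved for the Dk/t at which predicted swelling reaches zero. Neither the model form nor the numbers come from the study; only the thresholds quoted in the abstract (19.8 and 32.6 units) are its actual results.

```python
# Illustrative only: fit an assumed swelling-versus-Dk/t model and find
# the Dk/t at which predicted swelling falls to zero.
import numpy as np
from scipy.optimize import curve_fit, brentq

dk_t = np.array([2.4, 10.0, 25.0, 60.0, 115.3])        # lens Dk/t (units)
swelling = np.array([6.0, 2.5, 0.8, -0.1, -0.3])       # % change in thickness (made up)

def model(x, a, b, c):
    return a * np.exp(-x / b) + c                       # assumed functional form

params, _ = curve_fit(model, dk_t, swelling, p0=(6.0, 20.0, -0.5))
threshold = brentq(lambda x: model(x, *params), 2.4, 115.3)
print(f"Dk/t threshold for zero swelling ≈ {threshold:.1f} units")
```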
Abstract:
Gel dosimeters are of increasing interest in the field of radiation oncology as the only truly three-dimensional integrating radiation dosimeter. There is a range of ferrous-sulphate and polymer gel dosimeters. To be of use, they must be water-equivalent. For the gel itself, water equivalence relates to its radiological properties as determined by its composition. In the context of calibration of gel dosimeters, there is the added complexity of the calibration geometry; the presence of containment vessels may influence the dose absorbed. Five such calibration methods are modelled here using the Monte Carlo method. It is found that the Fricke gel best matches water for most of the calibration methods, and that the best calibration method involves the use of a large tub into which multiple fields of different dose are directed. The least accurate calibration method involves the use of a long test tube along which a depth-dose curve yields multiple calibration points.
Abstract:
Focuses on a study which introduced an iterative modeling method that combines properties of ordinary least squares (OLS) with hierarchical tree-based regression (HTBR) in transportation engineering. Information on OLS and HTBR; Comparison and contrast of OLS and HTBR; Conclusions.
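A minimal sketch of the kind of comparison described: ordinary least squares versus a tree-based regressor fitted to the same synthetic transportation-style data. The variables, data-generating process and scikit-learn models are assumptions standing in for the study's OLS and HTBR implementations.

```python
# Hedged OLS-versus-tree comparison on simulated data (not the study's method).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
aadt = rng.uniform(1_000, 40_000, 500)             # traffic volume (hypothetical)
lanes = rng.integers(2, 7, 500)
y = 0.0004 * aadt + 2.0 * (lanes > 4) + rng.normal(0, 2, 500)  # nonlinear lane effect
X = np.column_stack([aadt, lanes])

for name, model in [("OLS ", LinearRegression()),
                    ("tree", DecisionTreeRegressor(max_depth=3, random_state=1))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```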
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how the crash process gives rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
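The simulation argument above can be reproduced in miniature: sites with heterogeneous, small crash probabilities and low exposure produce more zeros than a single fitted Poisson predicts, with no dual-state process involved. The site counts, trial counts and probability distribution below are illustrative assumptions, not the paper's experiment.

```python
# Sketch: "excess" zeros from Poisson trials under low exposure and
# site-to-site heterogeneity, compared with a single fitted Poisson.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
n_sites, n_trials = 5_000, 50                    # low exposure: few trials per site

# Heterogeneous per-site crash probabilities (unequal-probability Bernoulli trials).
p_site = rng.gamma(shape=0.25, scale=0.04, size=n_sites).clip(0, 1)
counts = rng.binomial(n_trials, p_site)          # observed crash count per site

lam = counts.mean()                              # single Poisson fitted to all sites
print("observed share of zeros      :", np.mean(counts == 0))
print("share implied by Poisson(lam):", poisson.pmf(0, lam))
```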
Abstract:
Understanding the expected safety performance of rural signalized intersections is critical for (a) identifying high-risk sites where the observed safety performance is substantially worse than the expected safety performance, (b) understanding influential factors associated with crashes, and (c) predicting the future performance of sites and helping plan safety-enhancing activities. These three critical activities are routinely conducted for safety management and planning purposes in jurisdictions throughout the United States and around the world. This paper aims to develop baseline expected safety performance functions of rural signalized intersections in South Korea, which have not yet been established or reported in the literature. Data are examined from numerous locations within South Korea for both three-legged and four-legged configurations. The safety effects of a host of operational and geometric variables on the safety performance of these sites are also examined. In addition, supplementary tables and graphs are developed for comparing the baseline safety performance of sites with various geometric and operational features. These graphs identify how various factors are associated with safety. The expected safety prediction tables offer advantages over regression prediction equations by allowing the safety manager to isolate specific features of the intersections and examine their impact on expected safety. The examination of the expected safety performance tables through illustrated examples highlights the need to correct for regression-to-the-mean effects, emphasizes the negative impacts of multicollinearity, shows why multivariate models do not translate well to accident modification factors, and illuminates the need to examine road safety carefully and methodically. Caveats are provided on the use of the safety performance prediction graphs developed in this paper.
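A hedged sketch of fitting a baseline safety performance function of the common form crashes = exp(b0 + b1*ln(AADT_major) + b2*ln(AADT_minor)) with a negative binomial GLM; the coefficients and traffic volumes are simulated, and this is not the paper's Korean dataset or model specification.

```python
# Illustrative baseline SPF fit on synthetic intersection data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
aadt_major = rng.uniform(5_000, 30_000, n)
aadt_minor = rng.uniform(500, 8_000, n)
mu = np.exp(-7.0 + 0.55 * np.log(aadt_major) + 0.35 * np.log(aadt_minor))
crashes = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))  # NB via gamma-Poisson

X = sm.add_constant(np.column_stack([np.log(aadt_major), np.log(aadt_minor)]))
spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(spf.params)   # intercept and the two elasticity-style coefficients
```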
Abstract:
Recent epidemiologic studies have suggested that ultraviolet radiation (UV) may protect against non-Hodgkin lymphoma (NHL), but few, if any, have assessed multiple indicators of ambient and personal UV exposure. Using the US Radiologic Technologists study, we examined the association between NHL and self-reported time outdoors in summer, as well as average year-round and seasonal ambient exposures based on satellite estimates for different age periods, and sun susceptibility in participants who had responded to two questionnaires (1994–1998, 2003–2005) and who were cancer-free as of the earlier questionnaire. Using unconditional logistic regression, we estimated odds ratios (OR) and 95% confidence intervals for 64,103 participants with 137 NHL cases. Self-reported time outdoors in summer was unrelated to risk. Lower risk was somewhat related to higher average year-round and winter ambient exposure for the age period closest in time to, and prior to, diagnosis (ages 20–39). Relative to 1.0 for the lowest quartile of average year-round ambient UV, the estimated ORs for successively higher quartiles were 0.68 (0.42–1.10), 0.82 (0.52–1.29), and 0.64 (0.40–1.03) (p-trend = 0.06) for this age period. The lower NHL risk associated with higher year-round average and winter ambient UV provides modest additional support for a protective relationship between UV and NHL.
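A minimal sketch of the unconditional logistic regression used to estimate odds ratios by UV quartile, run on simulated data; the prevalence, effect size and variable names are assumptions, not the cohort's values.

```python
# Hedged sketch: ORs and 95% CIs by exposure quartile from a logistic model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20_000
uv = rng.normal(0, 1, n)                                # ambient UV score (simulated)
quartile = pd.qcut(uv, 4, labels=["q1", "q2", "q3", "q4"])
codes = quartile.codes                                  # 0..3, lowest quartile = 0
logit_p = -4.5 - 0.15 * codes                           # mild assumed protective trend
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(pd.get_dummies(pd.Series(quartile), drop_first=True).astype(float))
fit = sm.Logit(y, X).fit(disp=0)
result = np.exp(fit.conf_int())                         # 95% CI on the OR scale
result["OR"] = np.exp(fit.params)
print(result.loc[["q2", "q3", "q4"]])                   # ORs relative to lowest quartile
```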