783 results for "open data value chain"
Abstract:
BACKGROUND The population-based effectiveness of thoracic endovascular aortic repair (TEVAR) versus open surgery for descending thoracic aortic aneurysm remains in doubt.
METHODS Patients aged over 50 years, without a history of aortic dissection, undergoing repair of a thoracic aortic aneurysm between 2006 and 2011 were assessed using mortality-linked individual patient data from Hospital Episode Statistics (England). The principal outcomes were 30-day operative mortality, long-term survival (5 years) and aortic-related reinterventions. TEVAR and open repair were compared using crude and multivariable models that adjusted for age and sex.
RESULTS Overall, 759 patients underwent thoracic aortic aneurysm repair, mainly for intact aneurysms (618, 81·4 per cent). Median ages of the TEVAR and open cohorts were 73 and 71 years respectively (P < 0·001), with more men undergoing TEVAR (P = 0·004). For intact aneurysms, the operative mortality rate was similar for TEVAR and open repair (6·5 versus 7·6 per cent; odds ratio 0·79, 95 per cent confidence interval (c.i.) 0·41 to 1·49), but the 5-year survival rate was significantly worse after TEVAR (54·2 versus 65·6 per cent; adjusted hazard ratio (HR) 1·45, 95 per cent c.i. 1·08 to 1·94). After 5 years, aortic-related mortality was similar in the two groups, but cardiopulmonary mortality was higher after TEVAR. TEVAR was associated with more aortic-related reinterventions (23·1 versus 14·3 per cent; adjusted HR 1·70, 95 per cent c.i. 1·11 to 2·60). There were 141 procedures for ruptured thoracic aneurysm (97 TEVAR, 44 open), with TEVAR showing no significant advantage in terms of operative mortality.
CONCLUSION In England, operative mortality for degenerative descending thoracic aneurysm was similar after either TEVAR or open repair. Patients who had TEVAR appeared to have a higher reintervention rate and worse long-term survival, possibly owing to cardiopulmonary morbidity and other selection bias.
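The crude comparison of operative mortality reported above amounts to an odds ratio with a Wald confidence interval computed from a 2×2 table. A minimal sketch of that calculation, assuming hypothetical cell counts for illustration (the abstract does not break out the per-arm denominators):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table:
    a/b = deaths/survivors in one arm, c/d = deaths/survivors in the other.
    Uses the standard log-odds-ratio standard error sqrt(1/a+1/b+1/c+1/d)."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not taken from the paper
or_, lo, hi = odds_ratio_ci(25, 360, 17, 207)
```

The adjusted estimates in the abstract come from multivariable models (adjusting for age and sex), which this crude calculation does not reproduce.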
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and of those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming the n-cycle raw data into n-1 cycle data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections, namely the mean of cycles 1-3, the mean of cycles 3-7, and the minimum fluorescence. Three criteria, including threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, giving an accurate estimate of the initial DNA amount and a reasonable estimate of PCR amplification efficiency. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background, and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
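The taking-difference step described above can be sketched briefly. Under the exponential model F_i = N0·E^i + B, the difference d_i = F_{i+1} - F_i = N0·E^i·(E - 1) no longer contains the background B, so ln(d_i) is linear in the cycle index i. A minimal Python sketch on simulated noiseless data, assuming this model (function name and simulation parameters are illustrative, not from the paper):

```python
import numpy as np

def taking_difference_qpcr(fluorescence, cycles):
    """Estimate amplification efficiency E and initial amount N0 from
    raw qPCR fluorescence by the taking-difference strategy: consecutive
    differences cancel any constant background, and their logarithm is
    linear in the cycle number during exponential amplification."""
    d = np.diff(fluorescence)           # n-1 differences; background cancels
    valid = d > 0                       # keep cycles with measurable growth
    x = cycles[:-1][valid]
    y = np.log(d[valid])
    slope, intercept = np.polyfit(x, y, 1)
    E = np.exp(slope)                   # per-cycle amplification efficiency
    N0 = np.exp(intercept) / (E - 1.0)  # since d_i = N0 * E**i * (E - 1)
    return E, N0

# Simulated exponential-phase data with a constant background of 500
true_E, true_N0, background = 1.9, 100.0, 500.0
cycles = np.arange(1, 16)
F = true_N0 * true_E**cycles + background
E_hat, N0_hat = taking_difference_qpcr(F, cycles)
```

On this noiseless simulation the method recovers E and N0 essentially exactly; with real data, the regression would be restricted to the exponential phase selected by one of the criteria above.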
Abstract:
In this dissertation, we propose a continuous-time Markov chain model to examine longitudinal data whose outcome variable has three categories. The advantage of this model is that it permits a different number of measurements for each subject, and the duration between two consecutive measurement times can be irregular. Using the maximum likelihood principle, we can estimate the transition probabilities between any two time points. By using the information provided by the independent variables, the model can also estimate transition probabilities for each subject. The Monte Carlo simulation method will be used to investigate goodness of fit compared with that obtained from other models. A public health example will be used to demonstrate the application of this method.
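The irregular-interval property rests on a standard fact about continuous-time Markov chains: given a generator matrix Q (non-negative off-diagonal rates, rows summing to zero), the transition probability matrix over any interval t is P(t) = exp(Qt). A minimal sketch with a hypothetical 3-state generator (the rates are illustrative, not estimates from the dissertation; the matrix exponential is computed by a truncated Taylor series, adequate for small ||Qt||):

```python
import numpy as np

def transition_matrix(Q, t, n_terms=60):
    """P(t) = exp(Q t) via a truncated Taylor series.
    Q is a CTMC generator: off-diagonal entries >= 0, rows sum to zero,
    so each row of P(t) is a probability distribution over the 3 states."""
    A = Q * t
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k   # accumulate (Qt)^k / k!
        P += term
    return P

# Hypothetical generator for a 3-category outcome
Q = np.array([[-0.40,  0.30,  0.10],
              [ 0.20, -0.50,  0.30],
              [ 0.05,  0.15, -0.20]])

P = transition_matrix(Q, t=2.0)   # transition probabilities over a 2-unit gap
```

Because P(t) is available for any t, each subject's likelihood contribution is simply the product of P(t_k)[s_k, s_{k+1}] over that subject's observed gaps, whatever their spacing, which is what makes irregular measurement schedules tractable.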