955 results for Gold standard creation
Abstract:
OBJECTIVE: To evaluate the validity of hemoglobin A1C (A1C) as a diagnostic tool for type 2 diabetes and to determine the most appropriate A1C cutoff point for diagnosis in a sample of Haitian-Americans. SUBJECTS AND METHODS: Subjects (n = 128) were recruited from Miami-Dade and Broward counties, FL. Receiver operating characteristic (ROC) analysis was performed to measure the sensitivity and specificity of A1C for detecting diabetes at different cutoff points. RESULTS: The area under the ROC curve was 0.86 using fasting plasma glucose ≥ 7.0 mmol/L as the gold standard. An A1C cutoff point of 6.26% had a sensitivity of 80% and a specificity of 74%, whereas an A1C cutoff point of 6.50% (recommended by the American Diabetes Association, ADA) had a sensitivity of 73% and a specificity of 89%. CONCLUSIONS: A1C is a reliable alternative to fasting plasma glucose for detecting diabetes in this sample of Haitian-Americans. A cutoff point of 6.26% was the optimal value for detecting type 2 diabetes.
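A minimal sketch of this kind of ROC-based cutoff analysis is shown below (synthetic data; scikit-learn assumed as the toolkit; the Youden index is only one common way to choose an "optimal" cutoff and is not necessarily the criterion the authors used):

```python
# Sketch: ROC analysis of A1C against an FPG-based gold standard.
# Hypothetical data and library choice (scikit-learn); illustrative only.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Gold standard: diabetes defined as fasting plasma glucose >= 7.0 mmol/L
fpg = rng.normal(6.0, 1.2, 128)
diabetes = (fpg >= 7.0).astype(int)
# Simulated A1C values, loosely correlated with FPG
a1c = 5.5 + 0.35 * (fpg - 6.0) + rng.normal(0, 0.3, 128)

fpr, tpr, thresholds = roc_curve(diabetes, a1c)
print("AUC:", auc(fpr, tpr))

# Candidate "optimal" cutoff by the Youden index (sensitivity + specificity - 1)
youden = tpr - fpr
best = np.argmax(youden)
print("Cutoff %.2f%%: sensitivity %.2f, specificity %.2f"
      % (thresholds[best], tpr[best], 1 - fpr[best]))
```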
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on quantitation, diagnosis, and clinical management of lung tumors. However, PET images collected in discrete gated bins can be significantly affected by noise, since each bin contains fewer activity counts unless the total PET acquisition time is prolonged; gating methods should therefore be combined with image-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing registration algorithms that process the information from the entire respiratory cycle and integrate the tumor from the different gated bins into a single reference bin. Four registration/integration algorithms (centroid-based, intensity-based, rigid-body, and optical flow registration) were compared, as well as two registration schemes (a direct scheme and a successive scheme). Validation was performed with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged on a GE PET/CT system. Experiments were repeated for simulated tumors of different sizes and for different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level, and computation time. After correction, tumor activity values and tumor volumes were closer to those of the static (gold standard) tumors, and higher correlation values and lower noise were achieved. With this method, a compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
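As an illustration of the simplest of the four compared approaches, a centroid-based registration/integration of gated bins might look roughly like the following sketch (synthetic volumes; scipy.ndimage assumed; this is not the study's implementation):

```python
# Sketch: centroid-based integration of a tumor across gated PET bins into a reference bin.
# Synthetic volumes with Poisson count noise; simplified illustration only.
import numpy as np
from scipy.ndimage import center_of_mass, shift

rng = np.random.default_rng(0)

def make_bin(center, shape=(64, 64, 64), radius=5, activity=10.0):
    zz, yy, xx = np.indices(shape)
    dist = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    tumor = np.where(dist <= radius, activity, 0.1)   # spherical tumor on a low background
    return rng.poisson(tumor).astype(float)           # count noise in each gated bin

# Simulate gated bins with the tumor displaced by respiratory motion along z
bins = [make_bin((32 + dz, 32, 32)) for dz in (-4, -2, 0, 2, 4)]
reference = bins[2]

def centroid(vol):
    mask = vol > 0.5 * vol.max()                      # crude tumor segmentation
    return np.array(center_of_mass(mask))

ref_c = centroid(reference)
aligned = [shift(v, ref_c - centroid(v), order=1) for v in bins]  # translate onto reference centroid
integrated = np.mean(aligned, axis=0)                 # pooled, motion-corrected volume (lower noise)
```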
Abstract:
Inaccurate diagnosis of vulvovaginitis leads to inadequate treatments that damage women's health. Objective: to evaluate the effectiveness of the available methods for diagnosing vulvovaginitis. Method: a cross-sectional study was performed with 200 women who complained of vaginal discharge. Vaginal smears were collected for microbiological testing, with the Gram stain method used as the gold standard. The performance of the available methods for the diagnosis of vaginal discharge was assessed (sensitivity, specificity, positive predictive value, and negative predictive value). Data were entered into GraphPad Prism 6 for statistical analysis. Results: wet mount for vaginal candidiasis: sensitivity = 31%, specificity = 97%, positive predictive value (PPV) = 54%, negative predictive value (NPV) = 93%, accuracy = 91%. Wet mount for bacterial vaginosis: sensitivity = 80%, specificity = 95%, PPV = 80%, NPV = 95%, accuracy = 92%. Syndromic approach for bacterial vaginosis: sensitivity = 95%, specificity = 43%, PPV = 30%, NPV = 97%, accuracy = 54%. Syndromic approach for vaginal candidiasis: sensitivity = 75%, specificity = 91%, PPV = 26%, NPV = 98%, accuracy = 90%. Pap smear for vaginal candidiasis: sensitivity = 68%, specificity = 98%, PPV = 86%, NPV = 96%, accuracy = 96%. Pap smear for bacterial vaginosis: sensitivity = 75%, specificity = 100%, PPV = 100%, NPV = 94%, accuracy = 95%. Only one case of vaginal trichomoniasis was reported, diagnosed by oncological cytology and wet mount and confirmed by Gram stain; the syndromic approach classified it as bacterial vaginosis. Based on these data and on the international literature, the Maternidade Escola Januário Cicco vulvovaginitis protocol was constructed. Conclusion: Pap smear and wet mount showed low and very low sensitivity, respectively, for vaginal candidiasis. The syndromic approach presented very low specificity and accuracy for bacterial vaginosis, which implies that a large number of patients are diagnosed or treated incorrectly.
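All of the reported metrics derive from a 2x2 table of each index test against the Gram stain gold standard; a small sketch with hypothetical counts (not the study's data):

```python
# Sketch: diagnostic performance of an index test against a gold standard (Gram stain).
# Hypothetical 2x2 counts; the formulas are the standard ones behind the values above.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Example: an index test vs. Gram stain (illustrative counts only, n = 200)
print(diagnostic_metrics(tp=7, fp=6, fn=16, tn=171))
```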
Abstract:
Chronic hepatitis C is the leading cause of advanced chronic liver disease, hepatocellular carcinoma (HCC), and liver-related death. It evolves progressively over 20-30 years, at rates that vary with viral, host, and behavioral factors. This study evaluated the impact of hepatitis C on the lives of patients treated at a referral Hepatology service of the University Hospital Onofre Lopes (Liver Study Group) from May 1995 to December 2013. A retrospective review of 10,304 records was performed to build a cohort of patients with hepatitis C in which all individuals had their diagnosis confirmed by the gold standard molecular biology test. Data were obtained directly from patient charts and recorded in a previously built Excel spreadsheet, coded according to the study variables, which comprise individual data and prognostic factors for the progression of chronic hepatitis C defined in the literature. The Research Ethics Committee approved the project. The chi-square test and Fisher's exact test were used to assess associations between variables, and binomial logistic regression was used for the multivariate analysis; significance was set at p < 0.05 with 95% confidence. The results showed that the prevalence of chronic hepatitis C in the Liver Study Group (NEF) was 4.96%. The prevalence of cirrhosis due to hepatitis C was 13.7%. The prevalence of diabetes was 8.78% among patients with hepatitis C and 38.0% among cirrhotic patients with hepatitis C. The prevalence of HCC was 5.45%. The clinical follow-up discontinuation rate was 67.5%. Mortality was 4.10% in confirmed cases without cirrhosis and 32.1% in cirrhotic patients. The factors associated with the development of cirrhosis were genotype 1 (p = 0.0015) and bilirubin > 1.3 mg% (p = 0.0017). Factors associated with mortality were age over 35 years, treatment abandonment, diabetes, insulin use, AST > 60 IU, ALT > 60 IU, high total bilirubin, prolonged prothrombin time, high INR, low albumin, treatment withdrawal, cirrhosis, and hepatocellular carcinoma. The occurrence of diabetes mellitus increased the mortality of patients with hepatitis C six-fold. Variables inversely associated with the diagnosis of cirrhosis were being a blood donor (odds ratio 0.24, p = 0.044) and being a professional athlete (odds ratio 0.18, p = 0.35). It is reasonable to consider re-evaluating the currently proposed screening models for chronic hepatitis C. Cirrhosis and diabetes modify the clinical course of patients with chronic hepatitis C, increasing its mortality, whereas being a blood donor or a professional athlete appears to be a protective factor that reduces the risk of cirrhosis, independent of alcohol consumption. Public policies providing more efficient access, reception, and resolution of care are needed for this population.
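A rough sketch of the named analyses on synthetic data (scipy and statsmodels assumed; the variable names are hypothetical stand-ins for the study variables):

```python
# Sketch: chi-square, Fisher's exact, and binomial logistic regression on synthetic data.
# Hypothetical variables; illustrative of the analysis types named in the abstract.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "genotype1": rng.integers(0, 2, n),          # hypothetical prognostic factor
    "diabetes":  rng.integers(0, 2, n),
    "cirrhosis": rng.integers(0, 2, n),          # outcome
})

table = pd.crosstab(df["genotype1"], df["cirrhosis"])
print("chi-square p =", chi2_contingency(table)[1])
print("Fisher exact p =", fisher_exact(table.values)[1])

X = sm.add_constant(df[["genotype1", "diabetes"]])
model = sm.Logit(df["cirrhosis"], X).fit(disp=0)  # binomial logistic regression
print(np.exp(model.params))                       # odds ratios
```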
Abstract:
Over the last two decades, heart transplantation has evolved into the gold standard treatment for end-stage heart failure. It remains an extremely complex procedure, and two aspects are fundamental to its success: correct preservation and the method of transport. Two different preservation and transport methods exist: the first relies on the traditional method of myocardial protection and transport of the heart in refrigerated containers, while the second, more innovative one uses the Organ Care System Heart, a device specifically designed to hold the heart and maintain it in an active, normothermic, physiological state, simulating the normal conditions present inside the human body. The Organ Care System Heart technology enables a completely different approach from traditional methods: it not only preserves and transports the heart, but also allows continuous ex-vivo monitoring of its function from the moment the heart is removed from the donor's chest until implantation in the recipient. The main reason research is investing heavily in improving organ protection methods is the possibility of reducing the risk of cold ischemia, the condition in which an organ is deprived of blood supply, causing progressively more severe and irreversible damage and consequent loss of function. In the case of the heart, cold ischemic injury is significantly reduced by the use of the Organ Care System Heart, with resulting benefits in terms of longer acceptable transport times, organ optimization and, more generally, better patient outcomes.
Abstract:
To investigate the loads borne by the body in everyday life, a simple method for estimating ground reaction forces (GRFs) is needed. This study presents a model for estimating GRF waveforms during running from accelerations measured with inertial sensors. The two subjects who participated in the experiment ran at four predefined speeds wearing five sensors: one on the pelvis, two on the tibiae (right and left) and two on the feet (right and left). From the collected data, a model was developed that estimates the vertical and anteroposterior GRF waveforms and the contact and flight times of each step from the axial tibial acceleration. The reaction forces are estimated with a model based on contact and flight times, combined with a model that predicts the presence and magnitude of the impact peaks from two negative peaks identified in the axial tibial accelerations. Two force platforms were used as the gold standard to evaluate the quality of the estimates. The model correctly predicts the presence of the impact peak in 85% of cases, with an error on its magnitude between 6% and 9%. The peak vertical GRFs are approximated with an error between 1% and 5%, while the anteroposterior GRFs show an error between 8% and 14% of the signal's maximum-minimum range.
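For context, a commonly used sine-wave approximation relates peak vertical GRF to contact and flight times; the sketch below illustrates that idea with hypothetical values and is not necessarily the model developed in this thesis:

```python
# Sketch: a sine-wave approximation linking contact/flight times to peak vertical GRF
# in running. Illustrative only; not necessarily the thesis's estimation model.
import numpy as np

def vertical_grf_waveform(mass, t_contact, t_flight, fs=1000):
    """Half-sine vertical GRF over one contact phase, scaled so that the impulse
    over a full step (contact + flight) balances body weight."""
    g = 9.81
    f_max = mass * g * (np.pi / 2.0) * (1.0 + t_flight / t_contact)  # peak force
    t = np.arange(0, t_contact, 1.0 / fs)
    return t, f_max * np.sin(np.pi * t / t_contact)

# Example: 70 kg runner, 0.22 s contact time, 0.12 s flight time (hypothetical)
t, grf = vertical_grf_waveform(70.0, 0.22, 0.12)
print("peak vGRF [N]:", grf.max())
```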
Abstract:
Among currently recognized bone diseases, osteoporosis plays a leading role given its global prevalence and the multifactorial nature of its causes. It is characterized by a quantitative decrease in bone mass and by qualitative alterations of the bone tissue microarchitecture, with a consequent increase in bone fragility and fracture risk. In the medical and scientific field, X-ray imaging, and tomographic imaging in particular, has for decades provided excellent support for bone characterization; specifically, microtomography, currently regarded as the gold standard because of its high spatial resolution, provides valuable information on the trabecular and cortical structure of the tissue. However, micro-CT is applicable only in vitro, so the aim of this thesis is to verify whether, and how, a different imaging technique, cone-beam CT (which can instead be applied in vivo), can provide comparable results despite its lower spatial resolution. The processing of the tomographic images, aimed at analyzing the most important morphostructural parameters of bone tissue, involves segmenting the images with an ad hoc threshold. The results obtained in this thesis, carried out at the Medical Technology Laboratory of the Istituto Ortopedico Rizzoli in Bologna, show a good correlation between the two techniques when "ideal" samples are analyzed, i.e. small portions of a single tissue type (trabecular or cortical) embedded in PMMA, and a fixed threshold is used for image segmentation. Conversely, in "real" cases (human vertebrae scanned in air) the same correlation is not found and, in particular, the use of a fixed threshold for image segmentation must be ruled out.
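As a simple illustration of fixed-threshold segmentation and one basic morphometric parameter (bone volume fraction), the following sketch uses a synthetic volume and an arbitrary threshold; the actual analysis pipeline and parameter set are more extensive:

```python
# Sketch: fixed-threshold segmentation of a tomographic volume and the bone volume
# fraction (BV/TV). Synthetic data and an arbitrary threshold; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(80, 20, (128, 128, 128))          # stand-in for a reconstructed CT volume
volume[40:90, 40:90, 40:90] += 120                    # brighter "bone" region

threshold = 150                                        # fixed grey-level threshold
bone_mask = volume >= threshold                        # segmentation

bv_tv = bone_mask.mean()                               # bone volume / total volume
print("BV/TV = %.3f" % bv_tv)
```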
Abstract:
The work presented in this thesis was carried out at the Department of Computer Science, University of Oxford, during my period abroad in the Computational Biology Group. The aim of this work was to develop a mathematical model of the action potential of human cardiac Purkinje cells. These cells belong to the electrical conduction system of the heart and are considered very important in the genesis of arrhythmias. The model, implemented in Matlab, was designed using the Population of Models technique, an innovative approach to cellular modelling recently developed by the Computational Biology Group itself. The model was developed in three phases:
• First, a new mathematical model of the human cardiac Purkinje cell was developed, taking into account the previous models available in the literature and the most recent publications on the electrophysiological characteristics of the human cardiac Purkinje cell. This model was built starting from the current gold standard of human ventricular cardiac modelling, the model published by T. O'Hara and Y. Rudy in 2011, modifying both its specific ionic currents and its internal cellular structure.
• The resulting model was then used as the baseline model for the construction of a population of 3000 models, obtained by varying some of the model parameters within a specific range. The population thus generated was calibrated against experimental data from human Purkinje cells; after the calibration process, a population of 76 models was obtained (a schematic sketch of this workflow is given below).
• From the remaining population, a new mean-value model was derived, which reproduces the main characteristics of the action potential of a human cardiac Purkinje cell and represents the experimental dataset used in the calibration process.
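A schematic sketch of the population-of-models calibration step (hypothetical parameter names, scaling ranges, and biomarker ranges; simulate_ap() is a placeholder standing in for running the actual Purkinje cell model):

```python
# Sketch of the Population of Models workflow: scale selected parameters of a baseline
# model, simulate, and keep only models whose biomarkers fall within experimental ranges.
import numpy as np

rng = np.random.default_rng(0)
PARAMS = ["g_Na", "g_CaL", "g_Kr", "g_Ks", "g_K1"]                   # conductances to vary (illustrative)
EXPERIMENTAL_RANGES = {"APD90": (280, 470), "dVdt_max": (200, 900)}  # hypothetical calibration ranges

def simulate_ap(scalings):
    """Placeholder: would run the baseline model with scaled conductances and
    return action-potential biomarkers (APD90 in ms, dV/dt_max in V/s)."""
    apd90 = 380 * scalings["g_Kr"] ** -0.3 * scalings["g_CaL"] ** 0.2
    dvdt = 500 * scalings["g_Na"] ** 0.8
    return {"APD90": apd90, "dVdt_max": dvdt}

population, accepted = [], []
for _ in range(3000):
    scalings = {p: rng.uniform(0.5, 2.0) for p in PARAMS}  # vary parameters within a range
    biomarkers = simulate_ap(scalings)
    population.append(scalings)
    if all(lo <= biomarkers[k] <= hi for k, (lo, hi) in EXPERIMENTAL_RANGES.items()):
        accepted.append(scalings)                           # model passes calibration

print(f"{len(accepted)} of {len(population)} models accepted after calibration")
```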
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for particular radiotherapy assessment. Thus, the study is naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple PK model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: (1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; and (2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical analyses were performed to evaluate the accuracy of the PK maps generated from the undersampled data relative to those generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. These results suggest that DCE-MRI acceleration using the investigated image reconstruction method is feasible and promising.
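A toy sketch of the radial multi-ray undersampling idea, with a zero-filled reconstruction as the baseline that an iterative algorithm would improve on (numpy only; the TGV-regularized reconstruction itself is not reproduced here):

```python
# Sketch: radial multi-ray undersampling of Cartesian k-space and a zero-filled
# reconstruction baseline. Illustrates the acceleration idea only, not the study's code.
import numpy as np

def radial_mask(shape, n_rays=32):
    """Binary mask selecting k-space samples along n_rays radial spokes."""
    ny, nx = shape
    cy, cx = ny // 2, nx // 2
    mask = np.zeros(shape, dtype=bool)
    radius = np.hypot(cy, cx)
    for angle in np.linspace(0, np.pi, n_rays, endpoint=False):
        t = np.linspace(-radius, radius, 4 * max(ny, nx))
        ys = np.clip(np.round(cy + t * np.sin(angle)).astype(int), 0, ny - 1)
        xs = np.clip(np.round(cx + t * np.cos(angle)).astype(int), 0, nx - 1)
        mask[ys, xs] = True
    return mask

image = np.zeros((128, 128)); image[40:90, 50:80] = 1.0          # toy "DCE frame"
kspace = np.fft.fftshift(np.fft.fft2(image))
mask = radial_mask(image.shape, n_rays=32)
print("acceleration factor ~", image.size / mask.sum())

undersampled = kspace * mask
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))  # starting point for iterative recon
```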
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for the PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based transformation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolution, its calculation efficiency was superior to current methods by a factor on the order of 10². In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
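For reference, the underlying Tofts model and a standard linearized least-squares fit of Ktrans and kep can be sketched as below (synthetic arterial input function and tissue curve; this is the conventional integral formulation, not the thesis's derivative-based, KZ-filtered method):

```python
# Sketch: forward Tofts model Ct(t) = Ktrans * conv(Cp, exp(-kep*t)) and a linear
# least-squares fit via the integral form Ct = Ktrans*int(Cp) - kep*int(Ct).
# Synthetic data; illustrative of the PK model only.
import numpy as np
from scipy.integrate import cumulative_trapezoid

dt, t = 1.0, np.arange(0, 300, 1.0)                     # 1 s temporal resolution, 5 min
cp = 5.0 * (np.exp(-t / 100.0) - np.exp(-t / 10.0))     # toy arterial input function
cp[t < 10] = 0.0                                        # bolus arrival at t = 10 s

ktrans_true, kep_true = 0.25 / 60, 0.60 / 60            # 1/s
ct = ktrans_true * np.convolve(cp, np.exp(-kep_true * t))[: len(t)] * dt
ct += np.random.default_rng(0).normal(0, 0.002, len(t)) # measurement noise

int_cp = cumulative_trapezoid(cp, t, initial=0)
int_ct = cumulative_trapezoid(ct, t, initial=0)
A = np.column_stack([int_cp, -int_ct])                  # linear system A @ [Ktrans, kep] = Ct
ktrans_fit, kep_fit = np.linalg.lstsq(A, ct, rcond=None)[0]
print("Ktrans (1/min):", ktrans_fit * 60, " kep (1/min):", kep_fit * 60)
```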
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, inspired by the rationale that radiotherapy-induced functional change may be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained with conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the same small-animal experiment, the selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction, whereas metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, treatment/control group classification after the first treatment fraction was improved compared with conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
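A minimal sketch of a Rényi (generalized) dimension estimate on a 2D parameter map by box counting (synthetic map; illustrative of the heterogeneity measure only, not the study's exact pipeline):

```python
# Sketch: Rényi (generalized) dimensions D_q of a 2D parameter map by box counting.
# Synthetic gamma-distributed map standing in for a PK rate-constant map.
import numpy as np

def renyi_dimension(density, q, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate D_q from the scaling of the generalized box 'mass' with box size."""
    n = density.shape[0]
    logs_eps, logs_mass = [], []
    for s in box_sizes:
        # sum the (non-negative) map inside each s x s box
        boxes = density[: n // s * s, : n // s * s].reshape(n // s, s, -1, s).sum(axis=(1, 3))
        p = boxes / boxes.sum()                               # box probabilities
        p = p[p > 0]
        if q == 1:
            mass = np.exp(np.sum(p * np.log(p)))              # Shannon (q -> 1) limit
        else:
            mass = np.sum(p ** q) ** (1.0 / (q - 1.0))
        logs_eps.append(np.log(s / n))
        logs_mass.append(np.log(mass))
    return np.polyfit(logs_eps, logs_mass, 1)[0]              # slope = D_q

rng = np.random.default_rng(0)
ktrans_map = rng.gamma(shape=2.0, scale=0.1, size=(64, 64))   # toy PK rate-constant map
for q in (0, 1, 2):
    print(f"D_{q} ~ {renyi_dimension(ktrans_map, q):.2f}")
```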
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences at both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit the future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Purpose
The objective of our study was to test a new approach to approximating organ dose by using the effective energy of the combined 80 kV/140 kV beam used in fast kV-switch dual-energy (DE) computed tomography (CT). The two primary aims of the study were, first, to validate experimentally the dose equivalency between MOSFET and ion chamber (as the gold standard) measurements in a fast kV-switch DE environment and, second, to estimate the effective dose (ED) of DECT scans using MOSFET detectors and an anthropomorphic phantom.
Materials and Methods
A GE Discovery 750 CT scanner was employed using a fast kV-switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. The specific aims of our study were to (1) characterize the effective energy of the dual-energy environment; (2) estimate the f-factor for soft tissue; (3) calibrate the MOSFET detectors using a beam with effective energy equal to that of the combined DE beam; (4) validate our calibration by using MOSFET detectors and an ion chamber to measure dose at the center of a CTDI body phantom; (5) measure ED for an abdomen/pelvis scan using an anthropomorphic phantom and applying ICRP 103 tissue weighting factors; and (6) estimate ED using the AAPM Dose Length Product (DLP) method. The effective energy of the combined beam was calculated by measuring dose with an ion chamber under varying thicknesses of aluminum to determine the half-value layer (HVL).
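A small numerical sketch of the HVL determination described above and of the DLP method for ED (hypothetical ion-chamber readings; the conversion coefficient is a nominal adult abdomen/pelvis value assumed for illustration, not taken from the study):

```python
# Sketch: (i) half-value layer (HVL) from ion-chamber readings under increasing Al
# thickness, and (ii) the DLP method for effective dose. Hypothetical values throughout.
import numpy as np

# (i) HVL: thickness of Al that halves the beam intensity
al_mm = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])              # added filtration (mm Al)
dose = np.array([1.00, 0.82, 0.68, 0.57, 0.48, 0.41])          # normalized readings
hvl = np.interp(np.log(0.5), np.log(dose[::-1]), al_mm[::-1])  # log-linear interpolation
print("HVL ~ %.1f mm Al" % hvl)
# The effective energy is then the monoenergetic photon energy whose attenuation
# coefficient in Al gives this HVL (looked up from published attenuation tables).

# (ii) DLP method: ED ~ k * DLP, where k is a body-region conversion coefficient
dlp = 975.0          # mGy*cm, hypothetical abdomen/pelvis scan
k = 0.015            # mSv/(mGy*cm), nominal adult abdomen/pelvis coefficient (assumption)
print("ED ~ %.1f mSv" % (k * dlp))
```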
Results
The effective energy of the combined dual-energy beam was found to be 42.8 keV. After calibration, tissue dose at the center of the CTDI body phantom was measured as 1.71 ± 0.01 cGy with the ion chamber, and as 1.73 ± 0.04 cGy and 1.69 ± 0.09 cGy with two separate MOSFET detectors, corresponding to differences of -0.93% and 1.40%, respectively, between ion chamber and MOSFET. ED from the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method.
Abstract:
Background: Depression-screening tools exist and are widely used in Western settings, but few studies have explored whether existing tools are valid and effective for use in sub-Saharan Africa. Our study aimed to develop and validate a perinatal depression-screening tool in rural Kenya.
Methods: We conducted free listing and card sorting exercises with a purposive sample of 12 women and 38 community health volunteers (CHVs) living in a rural community to explore the manifestations of perinatal depression in that setting. We used the information obtained to produce a locally relevant depression-screening tool comprising existing Western psychiatric concepts and locally derived items. Subsequently, we administered the novel depression-screening tool and two existing screening tools (the Edinburgh Postnatal Depression Scale and the Patient Health Questionnaire-9) to 193 women and compared the results of each screening tool with those of a gold standard structured clinical interview to determine validity.
Results: The free listing and card sorting exercises produced a set of 60 screening items, from which we identified the 10 items that most accurately classified cases and non-cases. This 10-item scale had a sensitivity of 100.0% and a specificity of 81.2%, compared with a sensitivity and specificity of 90.0% and 31.5% for the EPDS and 90.0% and 49.7% for the PHQ-9. Overall, we found a depression prevalence of 5.2%.
Conclusions: The new scale performs very well in terms of diagnostic validity, having the highest scores in this domain compared with the EPDS, EPDS-R, and PHQ-9. The adapted scale also performs very well with regard to convergent validity, showing a clear distinction between mean scores across the different categories. It performs well with regard to discriminant validity, internal consistency reliability, and test-retest reliability, not securing top scores in those domains but still yielding satisfactory results.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when the analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
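A small sketch of the data-augmentation idea with pandas (hypothetical variables and margins; the concatenated data would then be passed to the latent class MCMC, which treats the missing entries in the synthetic rows as ordinary missing data):

```python
# Sketch: encoding a prior belief about a marginal distribution by appending synthetic
# records whose other variables are left missing. The number of appended rows controls
# the strength of the prior. Hypothetical variables and margins; illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
survey = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], size=500, p=[0.5, 0.35, 0.15]),
    "employed":  rng.choice(["yes", "no"], size=500, p=[0.7, 0.3]),
})

prior_margin = {"HS": 0.40, "BA": 0.40, "Grad": 0.20}   # believed education margin
n_aug = 200                                              # prior "sample size"

augmented = pd.DataFrame({
    "education": rng.choice(list(prior_margin), size=n_aug, p=list(prior_margin.values())),
    "employed":  pd.Series([pd.NA] * n_aug),             # remaining variables left missing
})
combined = pd.concat([survey, augmented], ignore_index=True)
print(combined["education"].value_counts(normalize=True))
```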
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
The ability of systemically administered bacteria to target and replicate to high numbers within solid tumours is well established. Tumour-localising bacteria can be exploited as biological vehicles for the delivery of nucleic acid, protein or therapeutic payloads to tumour sites, and present researchers with a highly targeted and safe vehicle for tumour imaging and cancer therapy. This work aimed to utilise bacteria to activate imaging probes or prodrugs specifically within target tissue in order to facilitate the development of novel imaging and therapeutic strategies. The vast majority of existing bacterial-mediated cancer therapy strategies rely on the use of bacteria that have been genetically modified (GM) to express genes of interest. While these approaches have been shown to be effective in a preclinical setting, GM presents extra regulatory hurdles in a clinical context. Also, many strains of bacteria are not genetically tractable and hence cannot currently be engineered to express genes of interest. For this reason, the development of imaging and therapeutic systems that utilise unengineered bacteria for the activation of probes or drugs represents a significant improvement on the current gold standard. Endogenously expressed bacterial enzymes that are not found in mammalian cells can be used for the targeted activation of imaging probes or prodrugs whose activation is only achieved in the presence of these enzymes. Exploitation of the intrinsic enzymatic activity of bacteria allows the use of a wider range of bacteria and presents a more clinically relevant system than those currently in use. The nitroreductase (NTR) enzymes, found only in bacteria, represent one such option. Chapter 2 introduces the novel concept of utilising native bacterial NTRs for the targeted activation of the fluorophore CytoCy5S. Bacterial-mediated probe activation allowed non-invasive fluorescence imaging of bacteria in vivo in models of infection and cancer. Chapter 3 extends the concept of using native bacterial enzymes to activate a novel luminescent, NTR-activated probe. The use of luminescence-based imaging improved the sensitivity of the system and provides researchers with a more accessible modality for preclinical imaging. It also represents an improvement over existing caged-luciferin probe systems described to date. Chapter 4 focuses on the use of endogenous bacterial enzymes in a therapeutic setting. Native bacterial enzymatic activity (including NTR enzymes) was shown to be capable of activating multiple prodrugs, in isolation and in combination, and eliciting therapeutic responses in murine models of cancer. Overall, the data presented in this thesis advance the fields of bacterial therapy and imaging and introduce novel strategies for disease diagnosis and treatment. These preclinical studies demonstrate potential for clinical translation in multiple fields of research and medicine.
Abstract:
The work described in this thesis focuses on the development of an innovative bioimpedance device for the detection of breast cancer using electrical impedance as the detection method. The ability of clinicians to detect and treat cancerous lesions as early as possible results in improved patient outcomes and can reduce the severity of the treatment the patient has to undergo. Therefore, new technologies and devices are continually required to improve the specificity and sensitivity of the accepted detection methods. The gold standard for breast cancer detection is digital x-ray mammography, but it has some significant downsides. The development of an adjunct technology to aid in the detection of breast cancers could therefore represent a significant patient and economic benefit. In this project, silicon substrates were patterned with two gold microelectrodes that allowed electrical impedance measurements to be recorded from intact tissue structures. These probes were tested and characterised using a range of in vitro and ex vivo experiments. The end application of this novel sensor device was in a first-in-human clinical trial. The initial results of this study showed that the silicon impedance device was capable of differentiating between normal and abnormal (benign and cancerous) breast tissue; the mean separation between the two tissue types was 4,340 Ω (p < 0.001). The cancer type and grade at the site of the probe recordings were confirmed histologically and correlated with the electrical impedance measurements to determine whether the different subtypes of cancer could be differentiated. The results presented in this thesis showed that the novel impedance device demonstrated excellent electrochemical recording potential, was biocompatible with the growth of cultured cell lines, and was capable of differentiating between intact biological tissues. They demonstrate the potential feasibility of using electrical impedance for the differentiation of biological tissue samples. The novelty of this thesis lies in the development of a new method of tissue determination with an application in breast cancer detection.
Abstract:
Hypoxic ischaemic encephalopathy (HIE) is a devastating neonatal condition which affects 2-3 per 1000 infants annually. The current gold standard of treatment, induced hypothermia, can reduce neonatal mortality and improve neonatal morbidity. However, to be effective it must be initiated within the therapeutic window, which extends from the initial insult until approximately 6 hours after birth. The current methods of assessment relied upon to identify infants with HIE are subjective and unreliable. To overcome this issue, an early and reliable biomarker of HIE severity must be identified. MicroRNAs (miRNAs) are a class of small non-coding RNA molecules which have potential as biomarkers of disease state and as therapeutic targets. These tiny molecules can modulate gene expression by inhibiting translation of messenger RNA (mRNA) and, as a result, can regulate protein synthesis. miRNAs are understood to be released into the circulation during cellular stress, where they are highly stable and relatively easy to quantify. Therefore, they may be ideal candidates for biomarkers of HIE severity and may aid in directing the clinical management of these infants. By using both transcriptomic and proteomic approaches to analyse the expression of miRNAs and their potential targets in umbilical cord blood (UCB), I have confirmed that infants with perinatal asphyxia and HIE have a significantly different UCB miRNA signature compared with UCB samples from healthy controls. Finally, I have identified and investigated two individual miRNAs, both of which show some potential as classifiers of HIE severity and predictors of long-term outcome, particularly when coupled with their downstream targets. While this work will need to be validated and expanded in a new and larger cohort of infants, it suggests the potential of miRNAs as biomarkers of neonatal pathological conditions such as HIE.