924 results for Accuracy and precision
Abstract:
Motivation: In any macromolecular polyprotic system - for example protein, DNA or RNA - the isoelectric point - commonly referred to as the pI - can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge - and thus the electrophoretic mobility - of the ampholyte sums to zero. Different modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel, and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analyses. While such pI calculations are widely used, they remain largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset and their resulting performance strongly depends on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to incorporate new features to improve prediction accuracy. Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
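To make the kind of pI calculation being benchmarked concrete, here is a minimal sketch of the standard iterative (bisection) approach based on the Henderson-Hasselbalch equation. The pKa values below are illustrative only; the benchmarked tools differ mainly in their pKa basis sets and, for the learning-based methods, in trained corrections.

```python
# Minimal sketch of an iterative (bisection) pI calculation for a peptide,
# assuming a generic, illustrative pKa basis set.
from collections import Counter

# Example pKa values; real basis sets used by the benchmarked tools differ.
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "N_term": 9.0}
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "C_term": 3.1}

def net_charge(seq: str, ph: float) -> float:
    counts = Counter(seq)
    counts["N_term"] += 1
    counts["C_term"] += 1
    pos = sum(n / (1.0 + 10 ** (ph - PKA_POS[r])) for r, n in counts.items() if r in PKA_POS)
    neg = sum(n / (1.0 + 10 ** (PKA_NEG[r] - ph)) for r, n in counts.items() if r in PKA_NEG)
    return pos - neg

def isoelectric_point(seq: str, tol: float = 1e-4) -> float:
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(seq, mid) > 0:  # still positively charged -> pI lies higher
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(isoelectric_point("ACDEFGHIKLMNPQRSTVWY"), 2))
```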
Abstract:
Contrary to interviewing guidelines, a considerable portion of witness interviews are not recorded. Investigators' memory, their interview notes, and any subsequent interview reports therefore become important pieces of evidence; the accuracy of interviewers' memory or such reports is of crucial importance when interviewers testify in court regarding witness interviews. A detailed recollection of the actual exchange during such interviews and how information was elicited from the witness allows for a better assessment of statement veracity in court.

Two studies were designed to examine interviewers' memory for a prior witness interview. Study One varied interviewer note-taking and the type of subsequent interview report written by interviewers, using a sample of undergraduates and implementing a two-week delay between interview and recall. Study Two varied level of interviewing experience in addition to report type and note-taking by comparing experienced police interviewers to a student sample. Participants interviewed a mock witness about a crime, while taking notes or not, and wrote an interview report two weeks later (Study One) or immediately afterward (Study Two). Interview reports were written either in a summarized format, which asked interviewers for a summary of everything that occurred during the interview, or a verbatim format, which asked interviewers to record in transcript format the questions they asked and the witness's responses. Interviews were videotaped and transcribed. Transcriptions were compared to interview reports to score for accuracy and omission of interview content.

Results from both studies indicate that much interview information is lost between interview and report, especially after a two-week delay. The majority of information reported by interviewers is accurate, although even interviewers who recalled the interview immediately afterward still reported a troubling amount of inaccurate information. Note-taking was found to increase the accuracy and completeness of interviewer reports, especially after a two-week delay. Report type only influenced recall of interviewer questions. Experienced police interviewers were not any better at recalling a prior witness interview than student interviewers. Results emphasize the need to record witness interviews to allow for more accurate and complete interview reconstruction by interviewers, even if interview notes are available.
Abstract:
This study investigated the effects of word prediction and text-to-speech on the narrative composition writing skills of six fifth-grade Hispanic boys with specific learning disabilities (SLD). A multiple baseline design across subjects was used to explore the efficacy of word prediction and text-to-speech, alone and in combination, on four dependent variables: writing fluency (words per minute), syntax (T-units), spelling accuracy, and overall organization (holistic scoring rubric). Data were collected and analyzed during baseline, the assistive technology interventions, and at 2-, 4-, and 6-week maintenance probes.

Participants were equally divided into Cohorts A and B, and two separate but related studies were conducted. Throughout all phases of the study, participants wrote narrative compositions in 15-minute sessions. During baseline, participants used word processing only. During the assistive technology intervention condition, Cohort A participants used word prediction followed by word prediction with text-to-speech. Concurrently, Cohort B participants used text-to-speech followed by text-to-speech with word prediction.

The results of this study indicate that word prediction, alone or in combination with text-to-speech, has a positive effect on the narrative writing compositions of students with SLD. Overall, participants in Cohorts A and B wrote more words, wrote more T-units, and spelled more words correctly. A sign test indicated that these observed effects were not likely due to chance. Additionally, the quality of writing improved as measured by holistic rubric scores. When participants in Cohort B used text-to-speech alone, inconsequential results were observed on all dependent variables except spelling accuracy.

This study demonstrated that word prediction, alone or in combination with text-to-speech, helps students with SLD write longer, higher-quality narrative compositions. These results suggest that word prediction or word prediction with text-to-speech be considered as a writing support to facilitate the production of a first draft of a narrative composition. However, caution should be given to the use of text-to-speech alone, as its effectiveness has not been established. Recommendations for future research include investigating the use of these technologies in other phases of the writing process, with other student populations, and with other writing styles. Further, these technologies should be investigated while integrated into classroom composition instruction.
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach, in which the learning phase requires a minimal amount of training data. It also overcomes a limitation of traditional systems, whose decision process for newly acquired images continues to rely on the same old training data set and therefore cannot adapt during the testing phase. Consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be generated again. However, as a means to speed up this computational process, the proposed method uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first. The results of the comparative analysis supported the initial use of multilinear PCA in this research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to append the newly added data to the end of the previous data. Results of the incremental process based on these two methods bear out these theoretical improvements. Object tracking results on video images are also provided, as another challenging task, to demonstrate the soundness of this incremental multilinear learning method.
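To make the incremental idea concrete, the sketch below shows a generic sequential Karhunen-Loeve (SKL)-style subspace update that folds a new batch of vectorized images into an existing eigensystem without revisiting the old training data. It is a standard linear-PCA illustration under assumed shapes and conventions, not the dissertation's exact multilinear/fast-PCA algorithm.

```python
# Generic SKL-style incremental subspace update (sketch).
import numpy as np

def skl_update(mean, U, S, n_seen, X_new, k=None):
    """mean: (d,) current mean; U: (d, r) eigenvectors; S: (r,) singular values;
    n_seen: samples already folded in; X_new: (m, d) new batch of vectorized images."""
    m, d = X_new.shape
    new_mean = (n_seen * mean + X_new.sum(axis=0)) / (n_seen + m)
    # Center the new data and add a correction vector for the mean shift.
    B = np.vstack([X_new - new_mean,
                   np.sqrt(n_seen * m / (n_seen + m)) * (mean - new_mean)]).T  # (d, m+1)
    # Project out the existing subspace and orthogonalize the residual.
    B_res = B - U @ (U.T @ B)
    Q, _ = np.linalg.qr(B_res)
    # Small SVD on an augmented coefficient matrix instead of all past data.
    R = np.block([[np.diag(S), U.T @ B],
                  [np.zeros((Q.shape[1], S.size)), Q.T @ B]])
    Ur, Sr, _ = np.linalg.svd(R, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Ur
    if k is not None:
        U_new, Sr = U_new[:, :k], Sr[:k]
    return new_mean, U_new, Sr, n_seen + m

# Usage: start from an SVD of a small initial training set, then call
# skl_update once per newly acquired batch of face images.
```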
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that need to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. There is a need for new architectures that keep pace with the fast innovations in video and imaging. This dissertation presents dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance.

The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
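The background-subtraction idea behind the RAMT approach can be sketched in software as a running-average background with a globally adapted threshold. The specific thresholding rule below (mean plus a multiple of the standard deviation of the difference image) and the update rate are assumptions for illustration; the actual system is implemented as an FPGA datapath.

```python
# Software sketch of running-average background subtraction with a globally
# adapted threshold, in the spirit of the RAMT approach described above.
import numpy as np

def detect_targets(frames, alpha=0.05, k=2.5):
    """frames: iterable of grayscale images (H, W) as float arrays.
    Yields a binary foreground mask per frame."""
    background = None
    for frame in frames:
        if background is None:
            background = frame.astype(np.float64)
            yield np.zeros(frame.shape, dtype=bool)
            continue
        diff = np.abs(frame - background)
        # Global threshold adapted from the statistics of the difference image
        # (assumed rule: mean + k * standard deviation).
        threshold = diff.mean() + k * diff.std()
        mask = diff > threshold
        # Update the background only where no target was detected.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]
        yield mask
```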
Abstract:
This study was a qualitative investigation to ascertain and describe two of the current issues at the International Community School of Abidjan (ICSA), examine their historical bases, and analyze their impact on the school environment.

Two issues emerged during the inquiry phase of this study: (1) the relationship between local-hired and overseas-hired teachers in light of the January 1994 devaluation, which polarized the staff by negating a four-year salary scale that had established equity; and (2) the school community's wide variance in the perceived power that the U.S. Embassy has over school operations, based on its role as ICSA's founding sponsor.

A multiple-studies approach was used in gathering data. An extensive examination of the school's archives was used to reconstruct an historical overview of ICSA. An initial questionnaire was distributed to teachers and administrators at an educational conference to determine the scope of the January 1994 devaluation of the West and Central African CFA franc and its impact on school personnel in West African American-sponsored overseas schools (ASOS). Personal interviews were conducted with the school staff, administration, school board members, and relevant historical participants to determine the principal issues at ICSA at that time. The researcher, an overseas-hired teacher, also used participant observation to collect data. Findings based on these sources were used to analyze the two issues from an historical perspective and to form conclusions.

Findings pertaining to the events induced by the French and African governments' decision to implement a currency devaluation in January 1994 are presented in ex post facto chronological narrative form to describe the events which transpired, describe the perceptions of the school personnel involved, examine the final resolution, and interpret these events within a historical framework for analysis.

The topic of the U.S. Embassy and its role at ICSA emerged inductively from open-ended personal interviews conducted over the course of a year. Contradictory perspectives were examined and researched for accuracy and cause. The results of this inquiry present the U.S. Embassy's role at ICSA from a two-sided perspective, examine the historical role of the Embassy, and present means by which the role and responsibility of the U.S. Embassy could best be communicated to the school community.

The final chapter provides specific actions for mediation of problems stemming from these issues, implications for administrators and teachers currently involved in overseas schools or considering the possibility, and suggestions for future inquiries.

Examination of a two-tier salary scale for local-hired and overseas-hired teachers generated the following recommendations: movement towards a single salary scale when feasible, clearly stated personnel policies and full disclosure of benefits, a uniform certification standard, professional development programs, and awareness of the impact of this issue on staff morale.

Divergent perceptions of and attitudes toward the role of the U.S. Embassy produced these recommendations: a view towards limiting the number of Americans on ASOS school boards, open school board meetings, selection of Embassy Administrative Officers who can educate school communities on the exact role of the Embassy, educating parents through outreach activities that communicate American educational philosophy and involve all segments of the international community, and a firm effort on the part of the ASOS to establish the school's autonomy from special interests.
Abstract:
Recently, researchers have begun to investigate the benefits of cross-training teams. It has been hypothesized that cross-training should help improve team processes and team performance (Cannon-Bowers, Salas, Blickensderfer, & Bowers, 1998; Travillian, Volpe, Cannon-Bowers, & Salas, 1993). The current study extends previous research by examining different methods of cross-training (positional clarification and positional modeling) and the impact they have on team process and performance in both more complex and less complex environments. One hundred and thirty-five psychology undergraduates were placed in 45 three-person teams. Participants were randomly assigned to roles within teams. Teams were asked to "fly" a series of missions on a PC-based helicopter flight simulation.

Results suggest that cross-training improves team mental model accuracy and similarity. Accuracy of team mental models was found to be a predictor of coordination quality, but similarity of team mental models was not. Neither similarity nor accuracy of team mental models was found to be a predictor of backup behavior (quality and quantity). As expected, both team coordination (quality) and backup behaviors (quantity and quality) were significant predictors of overall team performance. Contrary to expectations, there was no interaction between cross-training and environmental complexity. Results from this study further cross-training research by establishing positional clarification and positional modeling as training strategies for improving team performance.
Abstract:
Rated trust in intuitive efficacy (measured as trust, belief, use, accuracy, and weighting of intuition) was investigated as a predictor of self-designated use of intuitive (hunch and hunch plus evidential belief) vs. deliberative (evidential belief and evidential belief plus hunch) deception detection judgments and of actual accuracy. Twenty-nine student participants were filmed as they made true and deceptive statements about their everyday activities on a given evening (last Friday night), and college students (N=238) judged 20 of these filmed statements (10 true, 10 deceptive) as truthful or deceptive. Participants provided ratings of reliance on hunches vs. evidential belief, confidence in film judgments, intuitive efficacy, accuracy in deception detection, reliance on cues to deception, and experiences with intuition. Generalized estimating equation (GEE) modeling with a binary logistic link demonstrated that accuracy in identifying true vs. deceptive statements was predicted by film number, hunch-evidence ratings, weighting of intuition, and total cues cited. Weighting of intuition was predictive of accuracy across participants, with higher weighting predictive of higher accuracy in general. Participants who cited evidential belief plus hunch and moderate to high weighting incorrectly reversed their true vs. deceptive judgments. Accuracy for true statements was higher for hunches and hunch plus evidential belief, whereas accuracy for deceptive statements was higher for evidential belief. Accuracy for participants who relied on evidential belief plus hunch was at chance. Subjective experiences underlying judgments differed by participant and type of film viewed (true vs. deceptive) and were predicted by hunch-evidence ratings, trust, use, intuitive accuracy, and total cues cited. Trust predicted increases in judging films to be true, whereas use and accuracy predicted increases in judging films as deceptive; none were predictive of accuracy. An increased number of cues cited predicted judgments of deception, whereas a decreased number of cues cited predicted judgments of truth. The study concluded that participants have the capacity to self-define their judgments as subjectively vs. deliberately based, can provide subjective assessments of the influence of intuitive vs. objective information on their judgments, and can apply this self-knowledge, through effective weighting of intuition vs. other types of information, in making accurate judgments of true and deceptive everyday statements.
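As an illustration of the analysis described above, a generalized estimating equation model with a binary logistic link can be fit with statsmodels; the data frame, file name, and column names below are hypothetical placeholders rather than the study's actual variables.

```python
# Illustrative GEE sketch: repeated binary judgments clustered within participants.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("judgments.csv")  # hypothetical long-format file: one row per participant x film

model = smf.gee(
    "correct ~ film_number + hunch_evidence_rating + intuition_weighting + total_cues",
    groups="participant_id",            # repeated judgments clustered within participants
    data=df,
    family=sm.families.Binomial(),      # binary outcome: judgment correct vs. incorrect
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```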
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the tradeoff between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
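To make the single-variable, two-thread access patterns concrete, the sketch below scans an observed event trace for the classic unserializable interleavings (local access, remote access, local access). This is only an illustration of the pattern such predictors target; McPatom itself works on extracted partial-order models and uses model checking to predict violations in interleavings that were never observed. The trace format and helper names are assumptions.

```python
# Scan a shared-variable access trace for unserializable two-thread interleavings.
from typing import List, Tuple

Event = Tuple[str, str, str]  # (thread_id, "R" or "W", variable)

UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"), ("R", "W", "W"), ("W", "R", "W")}

def find_violations(trace: List[Event], var: str):
    """Report local access pairs on `var` with an interleaved remote access
    forming an unserializable (local, remote, local) pattern."""
    violations = []
    accesses = [(i, t, op) for i, (t, op, v) in enumerate(trace) if v == var]
    for a_idx, (i1, t1, op1) in enumerate(accesses):
        # Pair this access with the same thread's next access to the variable.
        nxt = next(((i2, op2) for i2, t2, op2 in accesses[a_idx + 1:] if t2 == t1), None)
        if nxt is None:
            continue
        i2, op2 = nxt
        for j, tj, opj in accesses:
            if i1 < j < i2 and tj != t1 and (op1, opj, op2) in UNSERIALIZABLE:
                violations.append((trace[i1], trace[j], trace[i2]))
    return violations

trace = [("T1", "R", "x"), ("T2", "W", "x"), ("T1", "W", "x")]  # classic lost update
print(find_violations(trace, "x"))
```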
Abstract:
The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with partial turbulence simulation. In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes, one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predictions derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings such as roof pavers are not well addressed in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.
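The quasi-steady combination of the two processes can be illustrated with a toy Monte Carlo calculation: peak pressure coefficients measured under the high-frequency (partial) turbulence simulation are rescaled by the squared instantaneous velocity ratio implied by a low-frequency gust. The Gaussian gust model, the low-frequency turbulence intensity, and the percentile used below are assumptions for illustration only; the dissertation derives the joint probability of load analytically.

```python
# Toy Monte Carlo illustration of quasi-steady low-frequency / high-frequency combination.
import numpy as np

rng = np.random.default_rng(0)

def full_scale_peak_cp(cp_hf_peaks, sigma_lf=0.12, n_draws=200_000, quantile=0.78):
    """cp_hf_peaks: peak Cp values from repeated wind-tunnel runs (high-frequency
    process only). sigma_lf: assumed low-frequency turbulence intensity (sigma_u / U)."""
    g = rng.normal(0.0, sigma_lf, n_draws)          # quasi-steady gust ratio u/U
    cp = rng.choice(cp_hf_peaks, n_draws)           # independent high-frequency peak draw
    cp_full = (1.0 + g) ** 2 * cp                   # quasi-steady pressure rescaling
    return np.quantile(np.abs(cp_full), quantile)   # representative design peak estimate

cp_hf_peaks = np.array([-1.8, -2.1, -1.9, -2.3, -2.0])  # hypothetical suction peaks
print(full_scale_peak_cp(cp_hf_peaks))
```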
Abstract:
A representative and consistent circumpolar wetland map is required for applications ranging from upscaling of carbon fluxes and pools to climate modelling and wildlife habitat assessment. Currently available data sets lack sufficient accuracy and/or thematic detail in many regions of the Arctic. Synthetic aperture radar (SAR) data from satellites have already been shown to be suitable for wetland mapping. Envisat Advanced SAR (ASAR) provides global medium-resolution data, which are examined here with particular focus on spatial wetness patterns. It was found that winter minimum backscatter values, as well as their differences from summer minimum values, reflect vegetation physiognomy units of certain wetness regimes. Low winter backscatter values are mostly found in areas vegetated by plant communities typical of wet regions in the tundra biome, due to low roughness and low volume scattering caused by the predominant vegetation. Summer-to-winter difference backscatter values, which in contrast to the winter values depend almost solely on soil moisture content, show the expected higher values for wet regions. While the approach using difference values would seem more reasonable for delineating wetness patterns, considering its direct link to soil moisture, it was found that a classification of winter minimum backscatter values is more applicable in tundra regions due to its better separability into wetness classes. Previous approaches to wetland detection have investigated the impact of liquid water in the soil on backscatter conditions; in this study the absence of liquid water is utilized. Owing to a lack of comparable regional to circumpolar data with respect to thematic detail, a potential wetland map cannot be validated directly; however, the validity of such a product can be supported by comparison with vegetation maps, which hold some information on the wetness status of certain classes. It was shown that the Envisat ASAR-derived classes are related to wetland classes of conventional vegetation maps, indicating the applicability of the approach; 30% of the land area north of the treeline was identified as wetland, while conventional maps record 1-7%.
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we address the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of nonlinear spectral data (the direct PNFT), taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
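As a sketch of the direct PNFT step discussed above, the Floquet discriminant of the periodic Zakharov-Shabat problem can be evaluated by multiplying Ablowitz-Ladik transfer matrices over one period; the main spectrum then consists of the values of the spectral parameter at which the discriminant equals plus or minus one. Sign and normalization conventions differ across the literature, so this follows one common convention rather than the paper's exact implementation.

```python
# Floquet discriminant via Ablowitz-Ladik transfer matrices (one common convention).
import numpy as np

def floquet_discriminant(q, dt, lam):
    """q: complex samples of one signal period; dt: sample spacing;
    lam: (possibly complex) spectral parameter."""
    M = np.eye(2, dtype=complex)
    z = np.exp(-1j * lam * dt)
    for qn in q:
        Q = qn * dt
        T = np.array([[z, Q], [-np.conj(Q), 1.0 / z]], dtype=complex)
        T /= np.sqrt(1.0 + np.abs(Q) ** 2)
        M = T @ M
    return 0.5 * np.trace(M)

# Example: single-tone periodic waveform, discriminant on a real-lambda grid.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
q = 0.8 * np.exp(2j * np.pi * t)
lams = np.linspace(-10.0, 10.0, 401)
delta = np.array([floquet_discriminant(q, t[1] - t[0], l) for l in lams])
# Spectral bands lie where |Re(delta)| <= 1; band edges (main spectrum) where delta = +/-1.
```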
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft-tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of the tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment. The study is therefore divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and improvements regarding DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel iterative MR image reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm builds on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data in reference to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
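To illustrate the two undersampling strategies described above, the sketch below generates (1) a radial multi-ray mask and (2) a series of Cartesian random masks that vary from frame to frame. The golden-angle ray spacing, the sampled fraction, and the fully sampled center region are common choices assumed here, not the study's exact sampling grids or spatiotemporal constraints.

```python
# Sketch of radial and Cartesian-random k-space undersampling masks (N x N k-space).
import numpy as np

def radial_mask(N, n_rays, angle_step=np.deg2rad(111.25)):
    mask = np.zeros((N, N), dtype=bool)
    c = (N - 1) / 2.0
    radii = np.linspace(-c, c, 2 * N)
    for k in range(n_rays):
        theta = k * angle_step  # golden-angle-like spacing (assumed)
        x = np.clip(np.round(c + radii * np.cos(theta)).astype(int), 0, N - 1)
        y = np.clip(np.round(c + radii * np.sin(theta)).astype(int), 0, N - 1)
        mask[y, x] = True
    return mask

def cartesian_random_masks(N, n_frames, keep_frac=0.25, center_lines=16, seed=0):
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_frames, N, N), dtype=bool)
    center = slice(N // 2 - center_lines // 2, N // 2 + center_lines // 2)
    for f in range(n_frames):
        lines = rng.choice(N, size=int(keep_frac * N), replace=False)  # random phase-encode lines
        masks[f, lines, :] = True
        masks[f, center, :] = True  # always keep the fully sampled low-frequency center
    return masks

# Acceleration factor is roughly the inverse of the sampled fraction.
m = radial_mask(256, 32)
print("radial sampling fraction:", m.mean())
```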
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of the new method was superior to current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method can be used for accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
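For reference, one standard way to carry out the linearization described above (the study's exact formulation, including the KZ filtering step, may differ) is to start from the usual integral form of the Tofts model, differentiate it, and integrate again so that the problem becomes linear in the unknown parameters:

$$C_t(t) = K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau \quad\Longrightarrow\quad \frac{dC_t}{dt} = K^{\mathrm{trans}}\, C_p(t) - k_{ep}\, C_t(t),$$

$$C_t(t_i) = K^{\mathrm{trans}} \int_0^{t_i} C_p(\tau)\, d\tau \;-\; k_{ep} \int_0^{t_i} C_t(\tau)\, d\tau, \qquad i = 1, \dots, N.$$

Stacking the sampled integrals row-wise gives an ordinary least-squares problem $A\,[K^{\mathrm{trans}},\, k_{ep}]^{T} = \mathbf{c}$ with rows $A_i = [\int_0^{t_i} C_p\, d\tau,\ -\int_0^{t_i} C_t\, d\tau]$ and $\mathbf{c}_i = C_t(t_i)$, which is what makes a matrix-form, high-frame-rate fit practical.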
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, the randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than the accuracy obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor computational efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using dynamic FSD parameters, treatment/control group classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
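For context, the Rényi (generalized) dimensions referred to above follow the standard definition; applying them to a PK parameter map amounts to partitioning the map into boxes of side $\varepsilon$ and treating the normalized parameter values as a measure (this mapping is the usual convention, not a detail taken from the study):

$$D_q = \frac{1}{q-1}\,\lim_{\varepsilon \to 0} \frac{\log \sum_i p_i(\varepsilon)^q}{\log \varepsilon},$$

where $p_i(\varepsilon)$ is the fraction of the total map intensity falling in box $i$. The case $q = 0$ recovers the box-counting dimension, $q \to 1$ the information dimension, and $q = 2$ the correlation dimension, so the spectrum of $D_q$ values summarizes spatial heterogeneity under different weightings.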
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine the information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from these two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems of DCE-MRI application in radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Visual Inspection with Acetic Acid (VIA) and Visual Inspection with Lugol's Iodine (VILI) are increasingly recommended in various cervical cancer screening protocols in low-resource settings. Although VIA is more widely used, VILI has been advocated as an easier and more specific screening test. VILI has not been well validated as a stand-alone screening test in comparison with VIA, nor has it been validated for use in HIV-infected women. We carried out a randomized clinical trial to compare the diagnostic accuracy of VIA and VILI among HIV-infected women. Women attending the Family AIDS Care and Education Services (FACES) clinic in western Kenya were enrolled and randomized to undergo either VIA or VILI with colposcopy. Lesions suspicious for cervical intraepithelial neoplasia 2 or greater (CIN2+) were biopsied. Between October 2011 and June 2012, 654 women were randomized to undergo VIA or VILI. The test positivity rates were 26.2% for VIA and 30.6% for VILI (p = 0.22). The rate of detection of CIN2+ was 7.7% in the VIA arm and 11.5% in the VILI arm (p = 0.10). There was no significant difference in the diagnostic performance of VIA and VILI for the detection of CIN2+. Sensitivity and specificity were 84.0% and 78.6%, respectively, for VIA, and 84.2% and 76.4% for VILI. The positive and negative predictive values were 24.7% and 98.3% for VIA, and 31.7% and 97.4% for VILI. Among women with CD4+ count < 350, VILI had significantly decreased specificity (66.2%) compared to VIA in the same group (83.9%, p = 0.02) and compared to VILI performed among women with CD4+ count ≥ 350 (79.7%, p = 0.02). VIA and VILI had similar diagnostic accuracy and rates of CIN2+ detection among HIV-infected women.
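For reference, the reported predictive values are linked to sensitivity ($Se$), specificity ($Sp$), and the true CIN2+ prevalence $p$ in the screened group by the standard relations (shown only to make the connection between the reported metrics explicit; no study numbers are re-derived here):

$$\mathrm{PPV} = \frac{Se\,p}{Se\,p + (1 - Sp)(1 - p)}, \qquad \mathrm{NPV} = \frac{Sp\,(1 - p)}{Sp\,(1 - p) + (1 - Se)\,p}.$$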
Abstract:
Minimally invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, resulting in a steep learning curve for achieving microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and successfully visualized the pathology of interest in concordance with the preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery, which could expand the surgeon's capabilities.