7 results for Density-based Scanning Algorithm

at Duke University


Relevance:

100.00%

Publisher:

Abstract:

Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition modality with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
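
(For reference, using the standard definition, which is not stated in the abstract: HU = 1000 × (μ − μ_water) / μ_water, where μ is the voxel's linear attenuation coefficient; water is therefore 0 HU, air is roughly −1000 HU, and metals lie far above the soft-tissue and bone range, which is why their presence perturbs the reconstruction.)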

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans (multi-arc and single-arc), using the Volumetric Modulated Arc Therapy (VMAT) technique, were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared, and the homogeneity index (HI) was calculated.
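
As a hedged illustration of the two figures of merit reported in the Results, the sketch below (not the thesis code) computes a per-fraction percent error against a measured dose and a homogeneity index; the HI is assumed here to take the common (Dmax − Dmin)/Dprescribed form, which may differ from the thesis definition, and all dose values are hypothetical.

```python
# Minimal sketch (not the thesis code) of percent error and homogeneity index.
# HI form and all dose values below are illustrative assumptions.
import numpy as np

def percent_error(calculated_mean_dose, measured_dose):
    """Percent error per fraction between calculated and measured dose."""
    return 100.0 * abs(calculated_mean_dose - measured_dose) / measured_dose

def homogeneity_index(ptv_doses, prescribed_dose):
    """Homogeneity index of the PTV; 0 indicates a perfectly uniform dose."""
    return (np.max(ptv_doses) - np.min(ptv_doses)) / prescribed_dose

# Example with hypothetical per-fraction doses (Gy) sampled in the PTV.
ptv = np.array([1.98, 2.00, 2.05, 2.10])
print(percent_error(np.mean(ptv), 2.00))   # percent error vs. a measured 2.00 Gy
print(homogeneity_index(ptv, 2.00))
```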

Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. The percent difference between plans with and without the GSI-based MAR algorithm ranged from 1.7-4.2% per fraction. (2) Differences of 0.1-3.2% were observed for the maximum doses, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indices (HI) of 0.065-0.068 for the dual-energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.

Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. This error undermines the goal of radiation therapy to deliver precise treatment. Thus, the GSI-based MAR algorithm is desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than either with or without the GE MAR algorithm.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides, amino acids, indels, and transposable elements, as well as tree files containing gene trees and species trees. Inferring an accurate phylogeny required generating: 1) A well annotated data set across species based on genome synteny; 2) Alignments with unaligned or incorrectly overaligned sequences filtered out; and 3) Diverse data sets, including genes and their inferred trees, indels, and transposable elements. Our total evidence nucleotide tree (TENT) data set (consisting of exons, introns, and UCEs) gave what we consider our most reliable species tree when using the concatenation-based ExaML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is the largest vertebrate phylogenomics project to date that we are aware of. The sequence, alignment, and tree data are expected to accelerate analyses in phylogenomics and other related areas.
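
To make the concatenation step concrete, here is a minimal sketch (not the project's actual pipeline) of assembling per-gene alignments into a single supermatrix of the kind used as input to concatenation-based tools such as ExaML; the gene names and sequences are hypothetical, and real data would come from the published alignment files.

```python
# Minimal supermatrix-building sketch; gene names and sequences are hypothetical.
gene_alignments = {
    "gene1": {"taxonA": "ACGT", "taxonB": "ACGA", "taxonC": "ACGT"},
    "gene2": {"taxonA": "GGTT", "taxonB": "GGT-", "taxonC": "GATT"},
}

taxa = sorted({t for aln in gene_alignments.values() for t in aln})
supermatrix = {t: "" for t in taxa}
for gene, aln in sorted(gene_alignments.items()):
    length = len(next(iter(aln.values())))
    for t in taxa:
        # Pad taxa missing from a gene with gaps so every row stays aligned.
        supermatrix[t] += aln.get(t, "-" * length)

for t in taxa:
    print(t, supermatrix[t])
```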

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE: A projection onto convex sets reconstruction of multiplexed sensitivity encoded MRI (POCSMUSE) is developed to reduce motion-related artifacts, including respiration artifacts in abdominal imaging and aliasing artifacts in interleaved diffusion-weighted imaging. THEORY: Images with reduced artifacts are reconstructed with an iterative projection onto convex sets (POCS) procedure that uses the coil sensitivity profile as a constraint. This method can be applied to data obtained with different pulse sequences and k-space trajectories. In addition, various constraints can be incorporated to stabilize the reconstruction of ill-conditioned matrices. METHODS: The POCSMUSE technique was applied to abdominal fast spin-echo imaging data, and its effectiveness in respiratory-triggered scans was evaluated. The POCSMUSE method was also applied to reduce aliasing artifacts due to shot-to-shot phase variations in interleaved diffusion-weighted imaging data corresponding to different k-space trajectories and matrix condition numbers. RESULTS: Experimental results show that the POCSMUSE technique can effectively reduce motion-related artifacts in data obtained with different pulse sequences, k-space trajectories and contrasts. CONCLUSION: POCSMUSE is a general post-processing algorithm for reduction of motion-related artifacts. It is compatible with different pulse sequences, and can also be used to further reduce residual artifacts in data produced by existing motion artifact reduction methods.
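
As a conceptual illustration only (not the published POCSMUSE implementation), the sketch below shows a POCS-style loop that alternates between two convex constraints: consistency with the acquired k-space samples and consistency with the coil sensitivity profiles. The array shapes, the Cartesian sampling mask, and all variable names are assumptions.

```python
# Conceptual POCS sketch (not POCSMUSE itself): alternate between the coil
# sensitivity constraint and k-space data consistency. Shapes are assumed.
import numpy as np

def pocs_recon(kspace, mask, sens, n_iter=50):
    """kspace: (ncoil, ny, nx) acquired data (zeros where unsampled)
    mask:   (ny, nx) boolean sampling mask
    sens:   (ncoil, ny, nx) coil sensitivity profiles"""
    coil_imgs = np.fft.ifft2(kspace, axes=(-2, -1))
    for _ in range(n_iter):
        # Constraint 1: combine coil images into one image weighted by the
        # sensitivity profiles (least-squares, SENSE-like combination), then
        # project back onto the sensitivity model.
        num = np.sum(np.conj(sens) * coil_imgs, axis=0)
        den = np.sum(np.abs(sens) ** 2, axis=0) + 1e-8
        image = num / den
        coil_imgs = sens * image
        # Constraint 2: restore the acquired k-space samples (data consistency).
        k = np.fft.fft2(coil_imgs, axes=(-2, -1))
        k = np.where(mask, kspace, k)
        coil_imgs = np.fft.ifft2(k, axes=(-2, -1))
    return image
```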

Relevance:

40.00%

Publisher:

Abstract:

Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with a mesh-triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
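
For the fitting step, here is a minimal sketch (not the paper's one-pass algorithm) of a standard least-squares plane fit applied to points that have already been segmented as one planar region; the input points below are hypothetical.

```python
# Minimal plane-fitting sketch for an already-segmented planar region.
import numpy as np

def fit_plane(points):
    """Fit n . x = d to an (N, 3) array of points; returns unit normal n and offset d."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

pts = np.array([[0, 0, 0], [1, 0, 0.01], [0, 1, -0.02], [1, 1, 0.0]], float)
n, d = fit_plane(pts)
print("normal:", n, "offset:", d)
```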

Relevance:

40.00%

Publisher:

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-dependent diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, which is known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high-quality microscopic images of the tissue must be obtained in rapid timeframes in order for a pathologic assessment to be useful for guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e., unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology using fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, thereby enabling automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA. Additionally, acriflavine shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density, and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after sarcoma resection, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.
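
As a small, hedged illustration of how margin-level sensitivity and specificity figures like those above are computed (this is not the study's code, and the labels below are hypothetical):

```python
# Minimal sensitivity/specificity sketch; ground-truth and predicted labels are hypothetical.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 0, 0, 1, 0]   # 1 = positive margin, 0 = negative margin
calls = [1, 0, 0, 0, 1, 1]   # model predictions
print(sensitivity_specificity(truth, calls))
```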

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy (SIM) and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption associated with SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, an optimal frequency of the illumination pattern used in SIM was carefully selected because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error associated with MSER segmentation.
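
The sketch below shows MSER detection with OpenCV, intended only to illustrate the kind of segmentation applied to the SIM images here; it is not the thesis implementation, the input file name is hypothetical, and detector parameters are left at OpenCV defaults rather than tuned for nuclear-scale features.

```python
# Minimal MSER detection sketch (not the thesis implementation).
import cv2

img = cv2.imread("sim_tumor_margin.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
mser = cv2.MSER_create()                 # parameters (delta, area limits) left at defaults
regions, bboxes = mser.detectRegions(img)

# Draw the convex hull of each detected region for visual inspection.
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
hulls = [cv2.convexHull(r.reshape(-1, 1, 2)) for r in regions]
cv2.polylines(vis, hulls, isClosed=True, color=(0, 255, 0), thickness=1)
cv2.imwrite("mser_overlay.png", vis)
print(f"{len(regions)} MSER regions detected")
```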

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins that were imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region. The logistic regression model yielded an output in terms of probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis that revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which demonstrates a challenge in implementing this automated algorithm based on AO staining alone.
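
A minimal sketch (not the thesis model) of the tissue-type classification step: a logistic regression on per-region features derived from the segmentation output, evaluated with an ROC curve. The feature values below are randomly generated stand-ins, not study data.

```python
# Minimal logistic-regression + ROC sketch; features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Hypothetical per-region features, e.g. APF density and mean APF size.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
tumor_prob = model.predict_proba(X)[:, 1]      # probability that a region is tumor
fpr, tpr, thresholds = roc_curve(y, tumor_prob)
print("AUC:", auc(fpr, tpr))
print("regions above 50% tumor probability:", int((tumor_prob > 0.5).sum()))
```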

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach for improving the specificity of the diagnostic model was to investigate a fluorophore that stained tumor more specifically. Tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining has promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model and that further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevance:

40.00%

Publisher:

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density of I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
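
As a generic, hedged illustration only (not the dissertation's test-path design algorithm), enumerating candidate paths through an interconnect graph between two test access points can be expressed as a simple depth-first search; the net names and the graph below are hypothetical.

```python
# Generic path-enumeration sketch over a hypothetical interposer interconnect graph.
def all_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:                  # avoid revisiting nets (no cycles)
            yield from all_paths(graph, nxt, dst, path)

interconnects = {                            # adjacency list of interposer nets
    "pin_in": ["net_a", "net_b"],
    "net_a": ["net_c"],
    "net_b": ["net_c"],
    "net_c": ["pin_out"],
}
for p in all_paths(interconnects, "pin_in", "pin_out"):
    print(" -> ".join(p))
```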

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
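
A minimal sketch (not the dissertation's formulation) of what a composite cost function of this kind could look like: a weighted sum of test time and DfT overhead counted in added TSVs and micro-bumps. The weights and all numbers below are hypothetical.

```python
# Hypothetical composite test-cost function: weighted test time plus DfT overhead.
def composite_cost(test_time_cycles, extra_tsvs, extra_microbumps,
                   alpha=1.0, beta=100.0):
    dft_overhead = extra_tsvs + extra_microbumps
    return alpha * test_time_cycles + beta * dft_overhead

# Compare two hypothetical test-path designs for the same interposer.
print(composite_cost(test_time_cycles=120_000, extra_tsvs=8, extra_microbumps=16))
print(composite_cost(test_time_cycles=150_000, extra_tsvs=2, extra_microbumps=4))
```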

To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
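
For orientation, here is a minimal behavioral sketch (not the proposed BIST hardware) of the two blocks named in the architecture: an LFSR that generates pseudo-random test patterns and a MISR that compresses responses into a signature. The 4-bit width and tap positions are illustrative assumptions.

```python
# Behavioral LFSR/MISR sketch; width and taps are illustrative, not the design's values.
def lfsr_patterns(seed=0b1001, taps=(3, 0), width=4, count=8):
    """Fibonacci-style LFSR producing `count` pseudo-random patterns."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def misr_signature(responses, taps=(3, 0), width=4):
    """Compress a stream of response words into a single signature."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

patterns = list(lfsr_patterns())
print([format(p, "04b") for p in patterns])
print(format(misr_signature(patterns), "04b"))   # fault-free reference signature
```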

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second solution minimizes the number of output test pins. In addition, two subgroup configuration methods are further proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in the 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
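
A minimal sketch (not the dissertation's programmable method or its mathematical model) of the core constraint: neighboring blocks that share a power rail must not receive the same shift-clock stagger value. A greedy assignment over a hypothetical block adjacency list illustrates the idea, much like graph coloring.

```python
# Greedy stagger assignment over a hypothetical block adjacency; illustration only.
def assign_stagger(neighbors, n_stagger_values):
    assignment = {}
    for block in sorted(neighbors):                          # deterministic order
        used = {assignment[b] for b in neighbors[block] if b in assignment}
        for s in range(n_stagger_values):
            if s not in used:
                assignment[block] = s
                break
        else:
            raise ValueError("not enough stagger values for this adjacency")
    return assignment

adjacency = {            # blocks of an SoC die that share power-rail boundaries
    "blockA": ["blockB", "blockC"],
    "blockB": ["blockA", "blockC"],
    "blockC": ["blockA", "blockB", "blockD"],
    "blockD": ["blockC"],
}
print(assign_stagger(adjacency, n_stagger_values=3))
```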

In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.

Relevance:

40.00%

Publisher:

Abstract:

Testing for two-sample differences is challenging when the differences are local and only involve a small portion of the data. To solve this problem, we apply a multi-resolution scanning framework that performs dependent local tests on subsets of the sample space. We use a nested dyadic partition of the sample space to obtain a collection of windows and test for sample differences within each window. We put a joint prior on the states of the local hypotheses that allows both vertical and horizontal message passing across the partition tree, reflecting the spatial dependency among windows. This information-passing framework is critical for detecting local sample differences. We use both the loopy belief propagation algorithm and MCMC to obtain the posterior null probability for each window. These probabilities are then used to report sample differences based on decision procedures. Simulation studies are conducted to illustrate the performance. Multiple testing adjustment and convergence of the algorithms are also discussed.
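
As a small illustration only (not the paper's implementation), the sketch below generates the nested dyadic partition that produces the collection of scanning windows for a one-dimensional sample space [0, 1); each level halves its parent windows, and a local two-sample test would then be run within every window, with the joint prior linking parents and children in this tree.

```python
# Minimal nested dyadic partition sketch for a 1-D sample space [0, 1).
def dyadic_windows(max_depth):
    windows = []
    for level in range(max_depth + 1):
        n = 2 ** level
        windows.extend((level, k / n, (k + 1) / n) for k in range(n))
    return windows

for level, lo, hi in dyadic_windows(2):
    print(f"level {level}: [{lo:.3f}, {hi:.3f})")
```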