902 results for Density-based Scanning Algorithm
Abstract:
BACKGROUND: Suction-based wound healing devices with open-pore foam interfaces are widely used to treat complex tissue defects. The impact of changes in physicochemical parameters of the wound interfaces has not been investigated. METHODS: Full-thickness wounds in diabetic mice were treated with an occlusive dressing or a suction device with a polyurethane foam interface varying in mean pore diameter. Wound surface deformation on day 2 was measured on fixed tissues. Histologic cross-sections were analyzed for granulation tissue thickness (hematoxylin and eosin), myofibroblast density (α-smooth muscle actin), blood vessel density (platelet endothelial cell adhesion molecule-1), and cell proliferation (Ki67) on day 7. RESULTS: Polyurethane foam-induced wound surface deformation increased with polyurethane foam pore diameter: 15 percent (small pore size), 60 percent (medium pore size), and 150 percent (large pore size). The extent of wound strain correlated with granulation tissue thickness, which increased 1.7-fold in small pore size foam-treated wounds, 2.5-fold in medium pore size foam-treated wounds, and 4.9-fold in large pore size foam-treated wounds (p < 0.05) compared with wounds treated with an occlusive dressing. All polyurethane foams increased the number of myofibroblasts over occlusive dressing, with maximal presence in large pore size foam-treated wounds compared with all other groups (p < 0.05). CONCLUSIONS: The pore size of the interface material of suction devices has a significant impact on the wound healing response. Larger pores increased wound surface strain, tissue growth, and transformation of contractile cells. Modification of the pore size is a powerful approach for meeting the biological needs of specific wounds.
Abstract:
Geometric parameters of binary (1:1) PdZn and PtZn alloys with the CuAu (L1₀) structure were calculated with a density functional method. Based on the total energies, the alloys are predicted to feature equal formation energies. Calculated surface energies of the PdZn and PtZn alloys show that (111) and (100) surfaces exposing stoichiometric layers are more stable than (001) and (110) surfaces comprising alternating Pd (Pt) and Zn layers. The surface energy values of the alloys lie between the surface energies of the individual components, but they differ from their composition-weighted averages. Compared with the pure metals, the valence d-band widths and the Pd or Pt partial densities of states at the Fermi level are dramatically reduced in the PdZn and PtZn alloys. The local valence d-band density of states of Pd and Pt in the alloys resembles that of metallic Cu, suggesting that a similar catalytic performance of these systems can be related to this similarity in the local electronic structures.
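For reference, the composition-weighted average that the alloy surface energies are compared against can be written as below; the notation is introduced here for illustration only.

```latex
% Composition-weighted reference surface energy of a binary alloy
% (illustrative notation, not taken verbatim from the paper)
\[
\bar{\gamma} = x_{\mathrm{Pd(Pt)}}\,\gamma_{\mathrm{Pd(Pt)}} + x_{\mathrm{Zn}}\,\gamma_{\mathrm{Zn}},
\qquad
\Delta\gamma = \gamma_{\mathrm{alloy}} - \bar{\gamma},
\]
```

where a nonzero Δγ expresses the reported deviation of the calculated alloy surface energies from the weighted average of the pure-metal values.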
Abstract:
The genotyping of human papillomaviruses (HPV) is essential for the surveillance of HPV vaccines. We describe and validate a low-cost PGMY-based PCR assay (PGMY-CHUV) for the genotyping of 31 HPV by reverse blotting hybridization (RBH). Genotype-specific detection limits were 50 to 500 genome equivalents per reaction. RBH was 100% specific and 98.61% sensitive using DNA sequencing as the gold standard (n = 1,024 samples). PGMY-CHUV was compared to the validated and commercially available linear array (Roche) on 200 samples. Both assays identified the same positive (n = 182) and negative samples (n = 18). Seventy-six percent of the positives were fully concordant after restricting the comparison to the 28 genotypes shared by both assays. At the genotypic level, agreement was 83% (285/344 genotype-sample combinations; κ of 0.987 for single infections and 0.853 for multiple infections). Fifty-seven of the 59 discordant cases were associated with multiple infections and with the weakest genotypes within each sample (P < 0.0001). PGMY-CHUV was significantly more sensitive for HPV56 (P = 0.0026) and could unambiguously identify HPV52 in mixed infections. PGMY-CHUV was reproducible on repeat testing (n = 275 samples; 392 genotype-sample combinations; κ of 0.933) involving different reagents lots and different technicians. Discordant results (n = 47) were significantly associated with the weakest genotypes in samples with multiple infections (P < 0.0001). Successful participation in proficiency testing also supported the robustness of this assay. The PGMY-CHUV reagent costs were estimated at $2.40 per sample using the least expensive yet proficient genotyping algorithm that also included quality control. This assay may be used in low-resource laboratories that have sufficient manpower and PCR expertise.
Abstract:
In this work, a previously developed, statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The damage-detection approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient trucks. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted to the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were being violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for the improvement of the methodology were developed and preliminarily evaluated. These recommendations are believed to improve the efficacy of the damage-detection approach.
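To make the residual/control-chart idea concrete, a minimal sketch is given below; the variable names, synthetic data and threshold multiplier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def control_chart_flags(residuals_baseline, residuals_monitoring, k=3.0):
    """Flag monitoring-period residuals falling outside Shewhart-style
    control limits derived from the undamaged (baseline) period."""
    mu = np.mean(residuals_baseline)
    sigma = np.std(residuals_baseline, ddof=1)
    upper, lower = mu + k * sigma, mu - k * sigma
    return (residuals_monitoring > upper) | (residuals_monitoring < lower)

# Example: residual = measured response minus response predicted by the
# statistics-based model (synthetic numbers for illustration).
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 500)      # residuals from the undamaged bridge
monitoring = rng.normal(0.8, 1.0, 200)    # residuals after a simulated change
print(control_chart_flags(baseline, monitoring).mean())  # fraction flagged
```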
Abstract:
A magnetic resonance imaging (MRI) pulse sequence and a corresponding image processing algorithm to localize prostate brachytherapy seeds during or after therapy are presented. Inversion-Recovery with ON-resonant water suppression (IRON) is an MRI methodology that generates positive contrast in regions of magnetic field susceptibility, such as those created by prostate brachytherapy seeds. Phantoms comprising several materials found in brachytherapy seeds were created to assess the usability of the IRON pulse sequence for imaging seeds. The resulting images show that seed materials are clearly visible with high contrast using IRON, in agreement with theoretical predictions. A seed localization algorithm to process IRON images demonstrates the potential of this imaging technique for seed localization and dosimetry.
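As an illustration of the final localization step only (not the algorithm presented in the paper), bright positive-contrast regions can be reduced to candidate seed positions by thresholding and taking connected-component centroids:

```python
import numpy as np
from scipy import ndimage

def localize_bright_seeds(image, threshold):
    """Return centroids of connected regions brighter than `threshold`
    (a stand-in for positive-contrast seed signal in an IRON-type image)."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, list(range(1, n + 1)))

# Synthetic example: two bright 'seeds' on a dark background
img = np.zeros((64, 64))
img[10:13, 20:23] = 1.0
img[40:43, 50:53] = 1.0
print(localize_bright_seeds(img, 0.5))
```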
Abstract:
A novel approach for the identification of tumor antigen-derived sequences recognized by CD8+ cytolytic T lymphocytes (CTL) consists of using synthetic combinatorial peptide libraries. Here we have screened a library composed of 3.1 × 10^11 nonapeptides arranged in a positional scanning format, in a cytotoxicity assay, to search for the antigen recognized by melanoma-reactive CTL of unknown specificity. The results of this analysis enabled the identification of several optimal peptide ligands, as most of the individual nonapeptides deduced from the primary screening were efficiently recognized by the CTL. The results of the library screening were also analyzed with a mathematical approach based on a model of independent and additive contributions of individual amino acids to antigen recognition. This biometrical data analysis enabled the retrieval, in public databases, of the native antigenic peptide SSX-2(41-49), whose sequence is highly homologous to those deduced from the library screening, among the sequences with the highest stimulatory scores. These results underline the high predictive value of positional scanning synthetic combinatorial peptide library analysis and encourage its use for the identification of CTL ligands.
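The biometrical analysis assumes independent and additive per-position contributions, so a candidate peptide can be ranked by summing the activities measured with each of its residues fixed in the positional scanning screen. A minimal sketch of that scoring idea, with invented activity values and a truncated example peptide, is:

```python
# Illustrative additive scoring of candidate peptides against per-position
# activity values from a positional scanning screen (numbers are invented).
position_scores = [
    {"S": 2.1, "A": 0.3},   # activities with each amino acid fixed at position 1
    {"S": 1.8, "V": 0.4},   # position 2
    {"E": 1.2, "K": 0.2},   # position 3 (and so on up to position 9)
]

def additive_score(peptide):
    """Score = sum of independent per-position contributions (0 if unmeasured)."""
    return sum(scores.get(aa, 0.0) for scores, aa in zip(position_scores, peptide))

# Database sequences would be ranked by this score to retrieve likely CTL ligands.
print(additive_score("SSK"))
```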
Abstract:
FRAX® is a fracture risk assessment algorithm developed by the World Health Organization in cooperation with other medical organizations and societies. Using easily available clinical information and femoral neck bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA), when available, FRAX® is used to predict the 10-year probability of hip fracture and major osteoporotic fracture. These values may be included in country-specific guidelines to aid clinicians in determining when fracture risk is sufficiently high that the patient is likely to benefit from pharmacological therapy to reduce that risk. Since the introduction of FRAX® into clinical practice, many practical clinical questions have arisen regarding its use. To address such questions, the International Society for Clinical Densitometry (ISCD) and the International Osteoporosis Foundation (IOF) assigned task forces to review the best available medical evidence and make recommendations for optimal use of FRAX® in clinical practice. Questions were identified and divided into three general categories. A task force was assigned to investigate the medical evidence in each category and develop clinically useful recommendations. The BMD Task Force addressed issues that included the potential use of skeletal sites other than the femoral neck, the use of technologies other than DXA, and the deletion or addition of clinical data for FRAX® input. The evidence and recommendations were presented to a panel of experts at the ISCD-IOF FRAX® Position Development Conference, resulting in the development of ISCD-IOF Official Positions addressing FRAX®-related issues.
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MUs and the dose per MU of every beamlet. Owing to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow this technique to be used for other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
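The MU determination amounts to a non-negative least-squares fit of the target dose to the per-beamlet Monte Carlo dose-per-MU matrix. A minimal sketch of that step is shown below; the matrix and dose values are synthetic stand-ins, not EGSnrc/BEAMnrc output.

```python
import numpy as np
from scipy.optimize import nnls

# B[i, j]: dose to water per MU of beamlet j at dose point i (synthetic here)
# d_target[i]: dose at point i that the plan should reproduce
rng = np.random.default_rng(1)
B = np.abs(rng.normal(size=(200, 10)))         # stand-in for per-beamlet dose/MU
mu_true = np.abs(rng.normal(size=10)) * 50.0   # 'true' monitor units
d_target = B @ mu_true

mu_fit, residual_norm = nnls(B, d_target)      # enforces MU >= 0
d_mc_plan = B @ mu_fit                         # resulting Monte Carlo plan dose
print(np.allclose(mu_fit, mu_true, atol=1e-6), residual_norm)
```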
Abstract:
Voxel-based morphometry from conventional T1-weighted images has proved effective for quantifying Alzheimer's disease (AD)-related brain atrophy and enables fairly accurate automated classification of AD patients, patients with mild cognitive impairment (MCI), and elderly controls. Little is known, however, about the classification power of volume-based morphometry, where the features of interest consist of a few brain structure volumes (e.g. hippocampi, lobes, ventricles) as opposed to hundreds of thousands of voxel-wise gray matter concentrations. In this work, we experimentally evaluate two distinct volume-based morphometry algorithms (FreeSurfer and an in-house algorithm called MorphoBox) for automatic disease classification on a standardized data set from the Alzheimer's Disease Neuroimaging Initiative. The results indicate that both algorithms achieve classification accuracy comparable to the conventional whole-brain voxel-based morphometry pipeline using SPM for AD vs elderly controls and MCI vs controls, and higher accuracy for classification of AD vs MCI and early vs late AD converters, thereby demonstrating the potential of volume-based morphometry to assist in the diagnosis of mild cognitive impairment and Alzheimer's disease.
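Because volume-based morphometry reduces each scan to a handful of structure volumes, classification can be performed with any standard method. The sketch below is a generic illustration on synthetic volume features; it is not FreeSurfer, MorphoBox, or the pipeline evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per subject, columns = structure volumes
# (e.g. left/right hippocampus, ventricles); labels: 1 = patient, 0 = control.
rng = np.random.default_rng(42)
volumes_controls = rng.normal(loc=[3.5, 3.5, 25.0], scale=0.3, size=(40, 3))
volumes_patients = rng.normal(loc=[3.0, 3.0, 32.0], scale=0.4, size=(40, 3))
X = np.vstack([volumes_controls, volumes_patients])
y = np.array([0] * 40 + [1] * 40)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```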
Abstract:
This case study deals with rock face monitoring in urban areas using a Terrestrial Laser Scanner (TLS). The pilot study area is an almost vertical, fifty-meter-high cliff, on top of which the village of Castellfollit de la Roca is located. Rockfall activity is currently causing a retreat of the rock face, which may endanger the houses located at its edge. The TLS datasets consist of high-density 3-D point clouds acquired from five stations, nine times in a time span of 22 months (from March 2006 to January 2008). The change detection, i.e. rockfalls, was performed through a sequential comparison of datasets. Two types of mass movement were detected in the monitoring period: (a) detachment of single basaltic columns, with magnitudes below 1.5 m³, and (b) detachment of groups of columns, with magnitudes of 1.5 to 150 m³. Furthermore, the historical record revealed (c) the occurrence of slab failures with magnitudes higher than 150 m³. Displacements of a likely slab failure were measured, suggesting an apparently stationary stage. Even though failures are clearly episodic, our results, together with the study of the historical record, enabled us to estimate a mean detachment of material of 46 to 91.5 m³ per year. The application of TLS considerably improved our understanding of the rockfall phenomena in the study area.
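Sequential comparison of point clouds can be illustrated, in much simplified form, by computing for each later-epoch point its distance to the nearest earlier-epoch point and flagging large differences as candidate rockfall areas; the sketch below uses synthetic data and is not the processing chain applied in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def change_distances(cloud_earlier, cloud_later):
    """Distance from each later-epoch point to the closest earlier-epoch point.
    Large values indicate geometry change (e.g. material detached by a rockfall)."""
    tree = cKDTree(cloud_earlier)
    d, _ = tree.query(cloud_later)
    return d

# Toy example: a planar face in epoch 1; in epoch 2 a patch has receded by 0.8 m
rng = np.random.default_rng(3)
epoch1 = np.column_stack([rng.uniform(0, 10, 5000),
                          rng.uniform(0, 10, 5000),
                          np.zeros(5000)])
epoch2 = epoch1.copy()
epoch2[:500, 2] -= 0.8                     # simulated detachment depth
print(change_distances(epoch1, epoch2).max())
```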
Abstract:
In this paper, we present the segmentation of the head and neck lymph node regions using a new active contour-based atlas registration model. We propose to segment the lymph node regions without directly including them in the atlas registration process; instead, they are segmented using the dense deformation field computed from the registration of the atlas structures with distinct boundaries. This approach results in robust and accurate segmentation of the lymph node regions even in the presence of significant anatomical variations between the atlas image and the patient image to be segmented. We also present a quantitative evaluation of the lymph node region segmentation using various statistical as well as geometrical metrics: sensitivity, specificity, Dice similarity coefficient, and Hausdorff distance. A comparison of the proposed method with two other state-of-the-art methods is presented. The robustness of the proposed method to the atlas selection, in segmenting the lymph node regions, is also evaluated.
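The quoted evaluation metrics have standard definitions on binary masks and point sets; the sketch below (illustrative only, not the authors' evaluation code) computes sensitivity, specificity, the Dice similarity coefficient and the symmetric Hausdorff distance.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_metrics(seg, ref):
    """Sensitivity, specificity and Dice coefficient between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.sum(seg & ref); tn = np.sum(~seg & ~ref)
    fp = np.sum(seg & ~ref); fn = np.sum(~seg & ref)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice

def hausdorff(seg_points, ref_points):
    """Symmetric Hausdorff distance between two point sets (e.g. surface voxels)."""
    return max(directed_hausdorff(seg_points, ref_points)[0],
               directed_hausdorff(ref_points, seg_points)[0])

seg = np.zeros((32, 32), bool); seg[8:20, 8:20] = True
ref = np.zeros((32, 32), bool); ref[10:22, 10:22] = True
print(overlap_metrics(seg, ref))
print(hausdorff(np.argwhere(seg).astype(float), np.argwhere(ref).astype(float)))
```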
Abstract:
For radiotherapy treatment planning of retinoblastoma in childhood, Computed Tomography (CT) represents the standard method for tumor volume delineation, despite some inherent limitations. The CT scan is very useful in providing information on physical density for dose calculation and morphological volumetric information, but presents a low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is not currently integrated into treatment planning, being used only for diagnosis and follow-up. Our ultimate goal is an automatic segmentation of the gross tumor volume (GTV) in the 3D US, the segmentation of the organs at risk (OAR) in the CT, and the registration of both. In this paper, we present some preliminary results in this direction. We present 3D active contour-based segmentation of the eyeball and the lens in CT images; the presented approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. Then, for the fusion of 3D CT and US images, we present two approaches: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eyeball contour information on CT and US images.
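Landmark-based fusion estimates the transform mapping corresponding points between the two modalities; a common least-squares rigid solution (Kabsch/Procrustes via SVD) is sketched below as a generic illustration, not as the method used in the paper.

```python
import numpy as np

def rigid_landmark_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i
    for paired landmarks (e.g. points identified in both CT and 3D US)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy check: recover a known rotation/translation from 4 paired landmarks
src = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([5., -2, 1])
R, t = rigid_landmark_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [5, -2, 1]))
```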
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as the Dirichlet/Laplacian energy, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
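For context, accelerated first-order schemes that attain the optimal O(1/n²) rate take the following generic (FISTA-type) form for an energy f(x) + λ·TV(x); this is shown for illustration and is not necessarily the exact scheme proposed in the paper.

```latex
% Generic accelerated proximal-gradient (FISTA-type) iteration for
% E(x) = f(x) + \lambda\,\mathrm{TV}(x); L is a Lipschitz constant of \nabla f.
% Illustrative only; not claimed to be the paper's exact algorithm.
\begin{align*}
x_{n}   &= \operatorname{prox}_{\frac{\lambda}{L}\mathrm{TV}}\!\Big(y_{n} - \tfrac{1}{L}\nabla f(y_{n})\Big),\\
t_{n+1} &= \frac{1 + \sqrt{1 + 4t_{n}^{2}}}{2},\\
y_{n+1} &= x_{n} + \frac{t_{n} - 1}{t_{n+1}}\,(x_{n} - x_{n-1}),
\end{align*}
% yielding E(x_n) - E^{*} = O(1/n^{2}), i.e. O(1/\sqrt{\varepsilon}) iterations
% to reach accuracy \varepsilon, versus O(1/n) and O(1/\varepsilon) for plain
% explicit gradient descent.
```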
Abstract:
Plants forming a rosette during their juvenile growth phase, such as Arabidopsis thaliana (L.) Heynh., are able to adjust the size, position and orientation of their leaves. These growth responses are under the control of the plant's circadian clock and follow a characteristic diurnal rhythm. For instance, increased leaf elongation and hyponasty - defined here as the increase in leaf elevation angle - can be observed when plants are shaded. Shading can either be caused by a decrease in the fluence rate of photosynthetically active radiation (direct shade) or a decrease in the fluence rate of red compared with far-red radiation (neighbour detection). In this paper we report on a phenotyping approach based on laser scanning to measure the diurnal pattern of leaf hyponasty and the increase in rosette size. In short days, leaves showed constitutively increased leaf elevation angles compared with long days, but the overall diurnal pattern and the magnitude of upward and downward leaf movement were independent of daylength. Shade treatment led to elevated leaf angles during the first day of application, but did not affect the magnitude of upward and downward leaf movement on the following day. Using our phenotyping device, individual plants can be non-invasively monitored over several days under different light conditions. Hence, it represents a suitable tool for phenotyping light- and circadian clock-mediated growth responses in order to better understand the underlying regulatory genetic network.
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption can decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find a single optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than traditional approaches that use a fixed number of jobs or a fixed memory threshold.
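The adaptive-threshold idea can be illustrated with a toy fuzzy-style controller that nudges the job-admission threshold according to memory pressure; the membership functions, step size and threshold values below are invented for the example and are not the algorithm developed in the study.

```python
def memory_pressure(used_fraction):
    """Fuzzy memberships for 'low', 'ok' and 'high' memory pressure
    (piecewise-linear membership functions chosen arbitrarily here)."""
    low = max(0.0, min(1.0, (0.6 - used_fraction) / 0.2))
    high = max(0.0, min(1.0, (used_fraction - 0.8) / 0.15))
    ok = max(0.0, 1.0 - low - high)
    return low, ok, high

def adapt_threshold(threshold_gb, used_fraction, step_gb=2.0):
    """Raise the admission memory threshold when pressure is low,
    lower it when pressure is high; no change while pressure is 'ok'."""
    low, ok, high = memory_pressure(used_fraction)
    return threshold_gb + step_gb * (low - high)

threshold = 32.0
for load in (0.45, 0.70, 0.92):          # observed memory utilisation per cycle
    threshold = adapt_threshold(threshold, load)
    print(load, round(threshold, 1))
```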