937 results for iterative multitier ensembles
Abstract:
PURPOSE To investigate whether the effects of hybrid iterative reconstruction (HIR) on coronary artery calcium (CAC) measurements using the Agatston score lead to changes in the assignment of patients to cardiovascular risk groups compared with filtered back projection (FBP). MATERIALS AND METHODS 68 patients (mean age, 61.5 years; 48 male, 20 female) underwent prospectively ECG-gated, non-enhanced cardiac 256-MSCT for coronary calcium scoring. Scanning parameters were as follows: tube voltage, 120 kV; mean tube current-time product, 63.67 mAs (range, 50 - 150 mAs); collimation, 2 × 128 × 0.625 mm. Images were reconstructed with FBP and with HIR at all levels (L1 to L7). Two independent readers measured Agatston scores of all reconstructions and assigned patients to cardiovascular risk groups. Scores of HIR and FBP reconstructions were correlated (Spearman). Interobserver agreement and variability were assessed with κ-statistics and Bland-Altman plots. RESULTS Agatston scores of HIR reconstructions correlated closely with those of FBP reconstructions (L1, R = 0.9996; L2, R = 0.9995; L3, R = 0.9991; L4, R = 0.986; L5, R = 0.9986; L6, R = 0.9987; and L7, R = 0.9986). Compared with FBP, HIR reduced Agatston scores to between 97 % (L1) and 87.4 % (L7) of the FBP values. Using HIR iterations L1 - L3, all patients were assigned to the same risk groups as after FBP reconstruction. In 5.4 % of patients, the risk group after HIR at the maximum iteration level differed from the group after FBP reconstruction. CONCLUSION There was an excellent correlation of Agatston scores after HIR and FBP, with identical risk group assignment at levels 1 - 3 for all patients. The application of HIR in routine calcium scoring therefore appears to entail no disadvantages. Future studies are needed to demonstrate whether HIR is a reliable method for reducing radiation dose in coronary calcium scoring.
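The Agatston metric at the core of this comparison is simple to state: per slice, each lesion with attenuation ≥ 130 HU contributes its area multiplied by a density factor (1 to 4) keyed to the lesion's peak HU. The following is a minimal sketch of that scoring rule, not the study's own software; the array layout, the 1 mm² minimum-lesion convention, and the function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def density_factor(peak_hu):
    # Standard Agatston weighting by the lesion's peak attenuation
    if peak_hu < 200: return 1
    if peak_hu < 300: return 2
    if peak_hu < 400: return 3
    return 4

def agatston_score(slices, pixel_area_mm2):
    # slices: iterable of 2D HU arrays from the non-enhanced cardiac CT
    total = 0.0
    for hu in slices:
        labels, n = ndimage.label(hu >= 130)        # candidate calcifications
        for i in range(1, n + 1):
            region = labels == i
            area = region.sum() * pixel_area_mm2
            if area < 1.0:                          # drop sub-mm^2 specks (a common convention)
                continue
            total += area * density_factor(hu[region].max())
    return total
```

Because iterative reconstruction lowers lesion HU values and smooths noise, both the 130 HU mask and the density factors can shift, which is exactly how score reductions of the kind reported above arise.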
Abstract:
OBJECTIVE The aim of this study was to directly compare metal artifact reduction (MAR) of virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (CT) with iterative MAR (iMAR) from single-energy CT in pelvic CT with hip prostheses. MATERIALS AND METHODS A human pelvis phantom with unilateral or bilateral metal inserts of different materials (steel and titanium) was scanned with third-generation dual-source CT using single-energy (120 kVp) and dual-energy (100/150 kVp) acquisitions at similar radiation dose (CT dose index, 7.15 mGy). Three image series were reconstructed for each phantom configuration: uncorrected, VME, and iMAR. Two independent, blinded radiologists assessed image quality quantitatively (noise and attenuation) and subjectively (5-point Likert scale). Intraclass correlation coefficients (ICCs) and Cohen κ were calculated to evaluate interreader agreement. Repeated-measures analysis of variance and the Friedman test were used to compare quantitative and qualitative image quality. Post hoc testing was performed using a Bonferroni-corrected P < 0.017. RESULTS Agreement between readers was high for noise (all, ICC ≥ 0.975) and attenuation (all, ICC ≥ 0.986); agreement for the qualitative assessment was good to perfect (all, κ ≥ 0.678). Compared with uncorrected images, VME showed significant noise reduction only in the phantom with titanium (P < 0.017), whereas iMAR showed significantly lower noise in all regions and phantom configurations (all, P < 0.017). In all phantom configurations, deviations of attenuation were smallest in images reconstructed with iMAR. For VME, there was a tendency toward higher subjective image quality in phantoms with titanium compared with uncorrected images, although without reaching statistical significance (P > 0.017). Subjective image quality was rated significantly higher for images reconstructed with iMAR than for uncorrected images in all phantom configurations (all, P < 0.017). CONCLUSIONS Iterative MAR showed better MAR capabilities than VME in settings with bilateral hip prostheses or a unilateral steel prosthesis. In settings with a unilateral titanium hip prosthesis, VME and iMAR performed similarly well.
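As a small illustration of the interreader statistics used here, Cohen κ on paired 5-point Likert ratings can be computed directly with scikit-learn; the ratings below are invented placeholders, not study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Likert scores (1-5) from two blinded readers
reader1 = [5, 4, 4, 3, 2, 5, 4, 3, 3, 2]
reader2 = [5, 4, 3, 3, 2, 5, 4, 4, 3, 2]

print(cohen_kappa_score(reader1, reader2))                    # unweighted kappa
print(cohen_kappa_score(reader1, reader2, weights="linear"))  # ordinal variant
```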
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
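The ordered-subsets EM update that such a reconstruction iterates is compact enough to sketch. The version below is the generic OSEM step over a precalculated sparse system matrix, as the abstract describes, but the subset scheme and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import sparse

def osem(A, y, n_subsets=8, n_iter=4):
    """Generic OSEM: A is the (bins x voxels) sparse system matrix,
    y the measured coincidence counts; returns the voxel estimate."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]                                 # subset of LORs
            sens = np.asarray(As.sum(axis=0)).ravel()    # subset sensitivity
            proj = As @ x                                # forward projection
            ratio = np.where(proj > 0, y[rows] / proj, 0.0)
            back = As.T @ ratio                          # back projection
            x *= np.where(sens > 0, back / np.maximum(sens, 1e-12), 1.0)
    return x

# tiny smoke test with a random sparse system and a known activity vector
rng = np.random.default_rng(0)
A = sparse.random(240, 60, density=0.1, random_state=0, format="csr")
x_true = rng.uniform(0.5, 2.0, 60)
recon = osem(A, A @ x_true)
```

The symmetry exploitation described above would enter at matrix-construction time: only the unique elements are computed with Monte Carlo, and the remaining rows are filled in by reflecting and rotating the stored values.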
Abstract:
Finite element hp-adaptivity is a technology that allows for very accurate numerical solutions. When applied to open-region problems such as radar cross-section prediction or antenna analysis, a mesh truncation method needs to be used. This paper compares the following mesh truncation methods in the context of hp-adaptive methods: Infinite Elements, Perfectly Matched Layers, and an iterative boundary-element-based methodology. These methods have been selected because they are exact at the continuous level (a desirable feature given the extreme accuracy delivered by the hp-adaptive strategy) and because they are easy to integrate with the logic of hp-adaptivity. The comparison is mainly based on the number of degrees of freedom each method needs to achieve a given level of accuracy. Computational times are also included. Two-dimensional examples are used, but the conclusions can be directly extrapolated to the three-dimensional case.
Abstract:
The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, the phase space (PHSP), which is not usually available. A method is presented for deriving a phase space model from reference measurements that does not rely heavily on a detailed model of the accelerator head. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.
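In spirit, the optimization is a fit of a low-dimensional beam parameterization to measured dose curves. The toy below uses a fabricated analytic surrogate in place of an MC dose engine, and every parameter name and formula is an assumption, included purely to show the iterative-fitting shape of such a method.

```python
import numpy as np
from scipy.optimize import least_squares

def dose_model(params, depths):
    # Stand-in for a Monte Carlo dose engine: a made-up depth-dose shape
    # controlled by a mean energy (MeV) and a focal-spot size (cm).
    mean_E, spot = params
    return np.exp(-depths / (8.0 * mean_E)) * (1.0 - np.exp(-depths / spot))

depths = np.linspace(0.5, 30.0, 60)            # cm, water phantom
measured = dose_model([6.0, 1.2], depths)      # pretend reference measurements

fit = least_squares(lambda p: dose_model(p, depths) - measured,
                    x0=[4.0, 2.0], bounds=([1.0, 0.3], [25.0, 5.0]))
print(fit.x)   # recovers ~[6.0, 1.2]
```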
Abstract:
In this contribution, a novel iterative bit- and power-allocation (IBPA) approach is developed for transmitting a given bit/s/Hz data rate over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource-allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. To achieve this goal, the available bits are iteratively allocated to the active MIMO layers that present the minimum transmit-power requirement per time slot.
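One common way to realize such minimum-power iterative bit loading is a greedy (Hughes-Hartogs-style) scheme: per iteration, one bit goes to the layer whose transmit-power increment is smallest. The sketch below assumes a square-QAM power law of (2^b - 1)/g² for b bits on a layer with gain g; the paper's exact cost function may differ.

```python
import numpy as np

def iterative_bit_allocation(layer_gains, total_bits, gamma=1.0):
    # Greedy loading: each iteration grants one more bit to the MIMO layer
    # whose required transmit-power increment is smallest.
    bits = np.zeros(len(layer_gains), dtype=int)
    power = lambda b, g: gamma * (2.0 ** b - 1.0) / g ** 2
    for _ in range(total_bits):
        inc = [power(b + 1, g) - power(b, g) for b, g in zip(bits, layer_gains)]
        bits[int(np.argmin(inc))] += 1
    return bits, np.array([power(b, g) for b, g in zip(bits, layer_gains)])

# Example: layer gains of a (4 x 4) MIMO channel after SVD, 8 bits per slot
bits, powers = iterative_bit_allocation(np.array([1.9, 1.2, 0.7, 0.3]), 8)
print(bits, powers.sum())
```

Weak layers naturally receive few or no bits under this rule, which is why the approach concentrates the rate on the strongest active layers.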
Abstract:
To date, crop models have been little used for characterising the types of cultivars suited to a changed climate, though simulations of altered management (e.g. sowing) are often reported. However, in neither case are model uncertainties evaluated at the same time.
Abstract:
The cortex of the brain is organized into clear horizontal layers, laminae, which subserve much of the connectional anatomy of the brain. We hypothesize that there is also a vertical anatomical organization that might subserve local interactions of neuronal functional units, in accord with longstanding electrophysiological observations. We develop and apply a general quantitative method, inspired by analogous methods in condensed matter physics, to examine the anatomical organization of the cortex in human brain. We find, in addition to obvious laminae, anatomical evidence for tightly packed microcolumnar ensembles containing approximately 11 neurons, with a periodicity of about 80 μm. We examine the structural integrity of this new architectural feature in two common dementing illnesses, Alzheimer disease and dementia with Lewy bodies. In Alzheimer disease, there is a dramatic, nearly complete loss of microcolumnar ensemble organization. The relative degree of loss of microcolumnar ensembles is directly proportional to the number of neurofibrillary tangles, but not related to the amount of amyloid-β deposition. In dementia with Lewy bodies, a similar disruption of microcolumnar ensemble architecture occurs despite minimal neuronal loss. These observations show that quantitative analysis of complex cortical architecture can be applied to analyze the anatomical basis of brain disorders.
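A condensed-matter-style analysis of this kind typically reduces to a pair-correlation (density-correlation) function over neuron coordinates, in which a microcolumnar period of about 80 μm appears as a peak. The sketch below computes a basic 2D g(r) under a uniform-density null, ignoring edge corrections; it is an illustrative stand-in, not the authors' exact estimator.

```python
import numpy as np

def pair_correlation_2d(xy, r_edges, area):
    # g(r) over neuron coordinates xy (n x 2, in micrometres): pairwise
    # distance histogram normalized by annulus area and mean density.
    # A peak near ~80 um would signal periodic microcolumnar spacing.
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unordered pairs
    counts, _ = np.histogram(d, bins=r_edges)
    shell = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = 0.5 * n * (n / area) * shell        # uniform-density null
    return counts / expected
```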
Abstract:
We present a method (ENERGI) for extracting energy-like quantities from a database of protein structures. In this paper, we use the method to generate pairwise additive amino acid "energy" scores. These scores are obtained by iteration until they correctly discriminate a set of known protein folds from decoy conformations. The method succeeds in lattice model tests and in the gapless threading problem as defined by Maiorov and Crippen [Maiorov, V. N. & Crippen, G. M. (1992) J. Mol. Biol. 227, 876-888]. A more challenging test of threading a larger set of test proteins derived from the representative set of Hobohm and Sander [Hobohm, U. & Sander, C. (1994) Protein Sci. 3, 522-524] is used as a "workbench" for exploring how the ENERGI scores depend on their parameter sets.
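The iteration the abstract describes can be pictured as a perceptron-like update over pairwise contact features: whenever a decoy scores as well as its native fold, the pair "energies" are nudged so the native structure is favored. The feature encoding, learning rate, and names below are all assumptions for illustration, not the ENERGI procedure itself.

```python
import numpy as np

def iterate_scores(native_feats, decoy_feats, lr=0.01, max_epochs=500):
    # native_feats: (n_proteins x n_pair_types) contact counts of native folds
    # decoy_feats: per protein, an array of contact counts for its decoys
    w = np.zeros(native_feats.shape[1])            # pairwise "energy" scores
    for _ in range(max_epochs):
        clean = True
        for nat, decs in zip(native_feats, decoy_feats):
            for dec in decs:
                if nat @ w >= dec @ w:             # native not strictly lowest
                    w += lr * (dec - nat)          # push native energy down
                    clean = False
        if clean:
            break                                  # all folds discriminated
    return w
```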
Abstract:
We develop a heuristic model for chaperonin-facilitated protein folding, the iterative annealing mechanism, based on theoretical descriptions of "rugged" conformational free energy landscapes for protein folding, and on experimental evidence that (i) folding proceeds by a nucleation mechanism whereby correct and incorrect nucleation lead to fast and slow folding kinetics, respectively, and (ii) chaperonins optimize the rate and yield of protein folding by an active ATP-dependent process. The chaperonins GroEL and GroES catalyze the folding of ribulose bisphosphate carboxylase at a rate proportional to the GroEL concentration. Kinetically trapped folding-incompetent conformers of ribulose bisphosphate carboxylase are converted to the native state in a reaction involving multiple rounds of quantized ATP hydrolysis by GroEL. We propose that chaperonins optimize protein folding by an iterative annealing mechanism; they repeatedly bind kinetically trapped conformers, randomly disrupt their structure, and release them in less folded states, allowing substrate proteins multiple opportunities to find pathways leading to the most thermodynamically stable state. By this mechanism, chaperonins greatly expand the range of environmental conditions in which folding to the native state is possible. We suggest that the development of this device for optimizing protein folding was an early and significant evolutionary event.
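The quantitative heart of iterative annealing is that each chaperonin round gives a trapped molecule a fresh, roughly independent chance to commit to the native state, so the yield approaches 1 - (1 - p)^n after n rounds. The toy simulation below illustrates only that arithmetic; the probabilities are invented and the model is deliberately minimal.

```python
import numpy as np

rng = np.random.default_rng(0)

def folded_yield(p_correct, n_rounds, n_molecules=100_000):
    # Each round, molecules still trapped are unfolded and retry folding;
    # an attempt commits to the native state with probability p_correct.
    folded = np.zeros(n_molecules, dtype=bool)
    for _ in range(n_rounds):
        retry = ~folded
        folded[retry] = rng.random(retry.sum()) < p_correct
    return folded.mean()

for n in (1, 5, 20, 50):
    print(n, folded_yield(0.05, n))   # approaches 1 - (1 - 0.05)**n
```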
Abstract:
The GroE proteins are molecular chaperones involved in protein folding. The general mechanism by which they facilitate folding is still enigmatic. One of the central open questions is the conformation of the GroEL-bound nonnative protein. Several suggestions have been made concerning the folding stage at which a protein can interact with GroEL. Furthermore, the possibility exists that binding of the nonnative protein to GroEL results in its unfolding. We have addressed these issues that are basic for understanding the GroE-mediated folding cycle by using folding intermediates of an Fab antibody fragment as molecular probes to define the binding properties of GroEL. We show that, in addition to binding to an early folding intermediate, GroEL is able to recognize and interact with a late quaternary-structured folding intermediate (Dc) without measurably unfolding it. Thus, the prerequisite for binding is not a certain folding stage of a nonnative protein. In contrast, general surface properties of nonnative proteins seem to be crucial for binding. Furthermore, unfolding of a highly structured intermediate does not necessarily occur upon binding to GroEL. Folding of Dc in the presence of GroEL and ATP involves cycles of binding and release. Because in this system no off-pathway reactions or kinetic traps are involved, a quantitative analysis of the reactivation kinetics observed is possible. Our results indicate that the association reaction of Dc and GroEL in the presence of ATP is rather slow, whereas in the absence of ATP association is several orders of magnitude more efficient. Therefore, it seems that ATP functions by inhibiting reassociation rather than promoting release of the bound substrate.
Abstract:
The so-called parallel multisplitting nonstationary iterative Model A was introduced by Bru, Elsner, and Neumann [Linear Algebra and its Applications 103:175-192 (1988)] for solving a nonsingular linear system Ax = b using a weak nonnegative multisplitting of the first type. In this paper new results are introduced when A is a monotone matrix using a weak nonnegative multisplitting of the second type and when A is a symmetric positive definite matrix using a P-regular multisplitting. Also, nonstationary alternating iterative methods are studied. Finally, combining Model A and alternating iterative methods, two new models of parallel multisplitting nonstationary iterations are introduced. When matrix A is monotone and the multisplittings are weak nonnegative of the first or of the second type, both models lead to convergent schemes. Also, when matrix A is symmetric positive definite and the multisplittings are P-regular, the schemes are also convergent.
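A minimal sketch of the stationary special case of such a parallel multisplitting iteration may help fix notation: with splittings A = M_l - N_l and nonnegative weighting matrices E_l summing to the identity, the update is x^{k+1} = Σ_l E_l M_l^{-1}(N_l x^k + b). The concrete matrix, splittings, and weights below are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

# A monotone (M-)matrix example and two splittings A = M_l - N_l
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
Ms = [np.diag(np.diag(A)),    # Jacobi-type splitting
      np.tril(A)]             # Gauss-Seidel-type splitting
Es = [0.5 * np.eye(3), 0.5 * np.eye(3)]   # E_l >= 0, summing to the identity

x = np.zeros(3)
for k in range(100):
    # each local solve could run on its own processor; a nonstationary
    # variant would vary the number of inner sweeps q(l, k) with l and k
    local = [np.linalg.solve(M, (M - A) @ x + b) for M in Ms]
    x = sum(E @ xl for E, xl in zip(Es, local))

print(x, np.allclose(A @ x, b, atol=1e-10))
```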
Abstract:
The Iterative Closest Point algorithm (ICP) is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbor search. Despite reducing complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems of the kinds described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy, and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios, and initial situations.
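For reference, the basic point-to-point ICP loop under study is short: find closest correspondences, solve the rigid alignment in closed form (Kabsch/SVD), apply it, and repeat until the error stalls. The sketch below is a generic 3D variant, not the work's own implementation; note that SciPy's k-d tree exposes the Minkowski order p, which is one place a cheaper metric (e.g. p=1 or p=inf) can be swapped in for the Euclidean distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    # Kabsch: least-squares rotation R and translation t mapping P onto Q
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=50, tol=1e-8, p=2):
    # src, dst: (n x 3) point sets; p is the Minkowski order of the metric
    tree = cKDTree(dst)                            # closest-neighbor search
    prev = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src, p=p)          # correspondence step
        R, t = best_rigid_transform(src, dst[idx]) # alignment step
        src = src @ R.T + t
        err = float((dists ** 2).mean())
        if abs(prev - err) < tol:
            break
        prev = err
    return src, err
```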
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, dissertation, 2016
Abstract:
"April 1985."