977 results for Short Loadlength, Fast Algorithms
Abstract:
Two-dimensional (2D)-breath-hold coronary magnetic resonance angiography (MRA) has been shown to be a fast and reliable method to depict the proximal coronary arteries. Recent developments, however, allow for free-breathing navigator gated and navigator corrected three-dimensional (3D) coronary MRA. These 3D approaches have potential for improved signal-to-noise ratio (SNR) and allow for the acquisition of adjacent thin slices without the misregistration problems known from 2D approaches. Still, a major impediment of a 3D acquisition is the increased scan time. The purpose of this study was the implementation of a free-breathing navigator gated and corrected ultra-fast 3D coronary MRA technique, which allows for scan times of less than 5 minutes. Twelve healthy adult subjects were examined in the supine position using a navigator gated and corrected ECG triggered ultra-fast 3D interleaved gradient echo planar imaging sequence (TFE-EPI). A 3D slab, consisting of 20 slices with a reconstructed slice thickness of 1.5 mm, was acquired with free-breathing. The diastolic TFE-EPI acquisition block was preceded by a T2prep pre-pulse, a diaphragmatic navigator pulse, and a fat suppression pre-pulse. With a TR of 19 ms and an effective TE of 5.4 ms, the duration of the data acquisition window was 38 ms. The in-plane spatial resolution was 1.0-1.3 mm × 1.5-1.9 mm. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and right coronary artery (RCA) could be visualized, with an average scan time for the entire 3D-volume data set of 2:57 +/- 0:51 minutes. Average contiguous vessel length visualized was 53 +/- 11 mm (range: 42 to 75 mm) for the LAD and 84 +/- 14 mm (range: 62 to 112 mm) for the RCA. Contrast-to-noise between coronary blood and myocardium was 5.0 +/- 2.3 for the LM/LAD and 8.0 +/- 2.9 for the RCA, resulting in an excellent suppression of myocardium.
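As a rough illustration of where the short scan time comes from, the sketch below estimates the nominal acquisition time from the gating parameters. Apart from the 19 ms TR and the 38 ms acquisition window quoted above, every number here (EPI factor, phase-encode counts, heart rate, navigator efficiency) is an illustrative assumption, not a protocol value from the study.

```python
# Back-of-envelope scan-time estimate for navigator-gated 3D coronary MRA.
# Only TR (19 ms) and the 38 ms acquisition window come from the abstract;
# the remaining numbers are illustrative assumptions.

def scan_time_s(ky_lines, kz_encodes, profiles_per_beat, rr_s, nav_efficiency):
    """Nominal scan time: heartbeats needed, scaled by gating efficiency."""
    heartbeats = ky_lines * kz_encodes / profiles_per_beat
    return heartbeats * rr_s / nav_efficiency

# A 38 ms window at TR = 19 ms fits 2 excitations per heartbeat; with an
# assumed EPI factor of 9, each excitation collects 9 k-space lines.
profiles_per_beat = 2 * 9
t = scan_time_s(ky_lines=90, kz_encodes=20,
                profiles_per_beat=profiles_per_beat,
                rr_s=1.0, nav_efficiency=0.5)
print(f"{t / 60:.1f} min")  # -> 3.3 min, on the order of the reported time
```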
We present a new approach for free-breathing 3D coronary MRA, which allows for scan times superior to corresponding 2D coronary MRA approaches, and which takes advantage of the enhanced SNR of 3D acquisitions and the post-processing benefits of thin adjacent slices. The robust image quality and the short average scanning time suggest that this approach may be useful for screening the major coronary arteries or identification of anomalous coronary arteries. J. Magn. Reson. Imaging 1999;10:821-825.
Abstract:
At high magnetic field strengths (≥ 3T), the radiofrequency wavelength used in MRI is of the same order of magnitude as (or smaller than) the typical sample size, making transmit magnetic field (B1+) inhomogeneities more prominent. Methods such as radiofrequency-shimming and transmit SENSE have been proposed to mitigate these undesirable effects. A prerequisite for such approaches is an accurate and rapid characterization of the B1+ field in the organ of interest. In this work, a new phase-sensitive three-dimensional B1+-mapping technique is introduced that allows the acquisition of a 64 × 64 × 8 B1+-map in ≈ 20 s, yielding an accurate mapping of the relative B1+ with a 10-fold dynamic range (0.2-2 times the nominal B1+). Moreover, the predominant use of low flip angle excitations in the presented sequence minimizes specific absorption rate, which is an important asset for in vivo B1+-shimming procedures at high magnetic fields. The proposed methodology was validated in phantom experiments and demonstrated good results in phantom and human B1+-shimming using an 8-channel transmit-receive array.
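For readers unfamiliar with B1+ mapping, the sketch below illustrates the classic double-angle method, a simpler magnitude-based alternative to the phase-sensitive technique described above. It is not the paper's method, only a minimal statement of what a B1+ map is: the ratio of the actual to the nominal flip angle at each voxel.

```python
import numpy as np

# Double-angle B1+ mapping: two gradient-echo images at nominal flip angles
# alpha and 2*alpha satisfy
#   S1 = M0 * sin(a),  S2 = M0 * sin(2a) = 2 * M0 * sin(a) * cos(a),
# so the actual flip angle is a = arccos(S2 / (2 * S1)).

def double_angle_b1(s1, s2, alpha_nominal_rad):
    actual = np.arccos(np.clip(s2 / (2.0 * s1), -1.0, 1.0))
    return actual / alpha_nominal_rad   # relative B1+ (1.0 = nominal)

# Synthetic check: a voxel receiving 80% of the nominal 60-degree pulse.
a_nom = np.deg2rad(60.0)
a_true = 0.8 * a_nom
s1, s2 = np.sin(a_true), np.sin(2 * a_true)
print(round(float(double_angle_b1(s1, s2, a_nom)), 3))  # -> 0.8
```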
Abstract:
OBJECT: To determine whether glycine can be measured at 7 T in human brain with (1)H magnetic resonance spectroscopy (MRS). MATERIALS AND METHODS: The glycine singlet is overlapped by the larger signal of myo-inositol. Density matrix simulations were performed to determine the TE at which the myo-inositol signal was reduced the most, following a single spin-echo excitation. (1)H MRS was performed on an actively shielded 7 T scanner, in five healthy volunteers. RESULTS: At the TE of 30 ms, the myo-inositol signal intensity was substantially reduced. Quantification using LCModel yielded a glycine-to-creatine ratio of 0.14 +/- 0.01, with a Cramer-Rao lower bound (CRLB) of 7 +/- 1%. Furthermore, quantification of metabolites other than glycine was possible as well, with a CRLB mostly below 10%. CONCLUSION: It is possible to detect glycine at 7 T in human brain, at the short TE of 30 ms with a single spin-echo excitation scheme.
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data-term and the regularization-term are combined through multiplication in a single, parametrization invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach; data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. Thus, we can show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
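The splitting idea can be illustrated on a toy 1D problem: a quadratic data term is minimized over one field u, a smoothness term over a second field v, and the constraint u = v is enforced with an augmented Lagrangian. This is a minimal sketch under simplified assumptions; a quadratic (Tikhonov) regularizer stands in for the multiplicative, anisotropic TV-like term of the actual GAF functional.

```python
import numpy as np

# Toy 1D illustration of the splitting scheme: data term over u, regularity
# term over v, with u = v enforced via an augmented Lagrangian (ADMM-style).

rng = np.random.default_rng(0)
n = 64
d = np.sin(np.linspace(0, 2 * np.pi, n)) + 0.2 * rng.standard_normal(n)

mu, rho = 5.0, 1.0          # smoothness weight and penalty weight (assumed)
# 1D Laplacian (Neumann-like ends) for the smoothness term mu/2 * v' L v.
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = -1.0

u = d.copy()
v = d.copy()
lam = np.zeros(n)
for _ in range(200):
    # u-step: min_u 1/2|u - d|^2 + rho/2|u - v + lam|^2  (closed form)
    u = (d + rho * (v - lam)) / (1.0 + rho)
    # v-step: min_v -mu/2 v'Lv + rho/2|u - v + lam|^2  (linear solve)
    v = np.linalg.solve(rho * np.eye(n) - mu * L, rho * (u + lam))
    lam += u - v            # dual ascent on the constraint u = v

print(float(np.max(np.abs(u - v))) < 1e-6)  # -> True: fields agree
```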
Abstract:
When dealing with nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation), complex mathematical estimations must be performed, resulting in very slow algorithms. This is the case, for example, in speech processing, spike signal deconvolution, or microarray data analysis. In this paper, we propose a simple method to reduce the computational time for the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum mutual information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while computational time is dramatically decreased. Cubic spline interpolation also obtains similarly good results but, due to its intrinsic complexity, makes the overall algorithm much slower and hence not useful for our purpose.
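A minimal sketch of the approximation being advocated: tabulate a smooth nonlinearity once on a coarse knot grid and evaluate it afterwards as a linear spline. The tanh function here is only an illustrative stand-in for the nonlinearity estimated inside the algorithm.

```python
import numpy as np

# Linear-spline approximation of a smooth nonlinearity: build the knot
# table once, then evaluate it cheaply via piecewise-linear interpolation.

def make_linear_spline(f, lo, hi, n_knots):
    xs = np.linspace(lo, hi, n_knots)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)   # piecewise-linear evaluation

approx = make_linear_spline(np.tanh, -4.0, 4.0, 128)
x = np.linspace(-4.0, 4.0, 10_000)
err = float(np.max(np.abs(approx(x) - np.tanh(x))))
print(err < 1e-3)  # -> True: accuracy close to the exact function
```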
Abstract:
Iowa state, county, and city engineering offices expend considerable effort monitoring the state’s approximately 25,000 bridges, most of which span small waterways. In fact, the need for monitoring is actually greater for bridges over small waterways because scour processes are exacerbated by the close proximity of abutments, piers, channel banks, approach embankments, and other local obstructions. The bridges are customarily inspected biennially by the county road department’s bridge inspectors. It is extremely time consuming and difficult to obtain consistent, reliable, and timely information on bridge-waterway conditions for so many bridges. Moreover, the current approaches to gathering survey information are not uniform, complete, or quantitative. The methodology and associated software (DIGIMAP) developed through the present project enable a non-intrusive means to conduct fast, efficient, and accurate inspection of the waterways in the vicinity of bridges and culverts using a single technique. The technique combines image registration and velocimetry algorithms using images acquired with conventional devices at the inspection site. Comparison of the current bridge inspection and monitoring methods with the DIGIMAP methodology leads to the conclusion that the new procedure assembles quantitative information on waterway hydrodynamic and morphologic features with considerably reduced effort, time, and cost. It also improves the safety of bridge and culvert inspections conducted during normal and extreme hydrologic events. The data and information are recorded in a digital format, enabling immediate and convenient tracking of waterway changes over short or long time intervals.
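The velocimetry ingredient can be sketched in one dimension: the displacement of a tracer pattern between two frames is recovered from the peak of their cross-correlation. Image velocimetry does the same on 2D interrogation windows; this toy example, with a synthetic random pattern, is only an illustration of the principle.

```python
import numpy as np

# Recover a pattern's displacement between two frames from the peak of
# their circular cross-correlation, computed via FFT (1D for brevity).

rng = np.random.default_rng(1)
frame1 = rng.standard_normal(256)          # random tracer pattern
shift = 7                                  # true displacement in pixels
frame2 = np.roll(frame1, shift)            # second frame, pattern advected

# Cross-correlation of frame2 against frame1; its argmax is the shift.
xcorr = np.fft.ifft(np.fft.fft(frame2) * np.conj(np.fft.fft(frame1))).real
print(int(np.argmax(xcorr)))  # -> 7
```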
Abstract:
In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP, which is then used for calculating the local displacement gradient and thus the local strain tensor. There are three main advantages of this method. First, the 3-D strain tensor is calculated for every pixel without interpolation; this is unprecedented in cardiac MR imaging. Second, this method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in-vivo validation.
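The strain computation the method rests on can be stated compactly: given the local 3-D displacement gradient, form the deformation gradient F = I + grad(u) and the Green-Lagrange strain tensor E = 1/2 (F^T F - I). The displacement gradient below is a made-up example; in the method it is obtained from zHARP displacements.

```python
import numpy as np

# Green-Lagrange strain from a local 3D displacement gradient:
#   F = I + grad(u),  E = 1/2 (F^T F - I)

def green_lagrange_strain(grad_u):
    F = np.eye(3) + grad_u
    return 0.5 * (F.T @ F - np.eye(3))

# Sanity check: simple shear of magnitude gamma in the x-y plane gives
# E[0,1] = gamma/2 and E[1,1] = gamma^2/2.
gamma = 0.1
grad_u = np.zeros((3, 3))
grad_u[0, 1] = gamma
E = green_lagrange_strain(grad_u)
print(round(float(E[0, 1]), 3), round(float(E[1, 1]), 3))  # -> 0.05 0.005
```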
Abstract:
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the eventual amplitude and frequency of the signals, within the sensitivity band of the detector. Such signals are most likely generated in the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which will provide us with useful information on the performance of each technique. The selection of candidate events will then be established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, it will be shown that our approach, based on phase estimation, presents a better signal-to-noise ratio than does pure spectral analysis, the most common approach.
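As a point of reference, the baseline spectral detector that the phase-estimation approach is compared against can be sketched as a periodogram with a Neyman-Pearson threshold set from a target false-alarm probability. All numbers below (sample count, injected frequency, amplitude, noise level) are illustrative, not parameters of the antenna.

```python
import numpy as np

# Periodogram detection of a monochromatic signal with a Neyman-Pearson
# style threshold: under white Gaussian noise each periodogram bin is
# approximately exponential, so a per-bin false-alarm probability p_fa
# maps to the threshold -mean * ln(p_fa).

rng = np.random.default_rng(2)
n, f_true = 4096, 300                 # samples, injected frequency bin
x = 0.3 * np.sin(2 * np.pi * f_true * np.arange(n) / n)
x += rng.standard_normal(n)           # unit-variance noise

pgram = np.abs(np.fft.rfft(x)) ** 2 / n
p_fa = 1e-3
thresh = np.mean(pgram) * (-np.log(p_fa))
candidates = np.flatnonzero(pgram > thresh)
print(f_true in candidates)  # -> True: injected line crosses threshold
```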
Abstract:
There is now growing evidence that the hippocampus generates theta rhythms that can phase bias fast neural oscillations in the neocortex, allowing coordination of widespread fast oscillatory populations outside limbic areas. A recent magnetoencephalographic study showed that maintenance of configural-relational scene information in a delayed match-to-sample (DMS) task was associated with replay of that information during the delay period. The periodicity of the replay was coordinated by the phase of the ongoing theta rhythm, and the degree of theta coordination during the delay period was positively correlated with DMS performance. Here, we reanalyzed these data to investigate which brain regions were involved in generating the theta oscillations that coordinated the periodic replay of configural-relational information. We used a beamformer algorithm to produce estimates of regional theta rhythms and constructed volumetric images of the phase-locking between the local theta cycle and the instances of replay (in the 13-80 Hz band). We found that individual differences in DMS performance for configural-relational associations were related to the degree of phase coupling of instances of cortical reactivations to theta oscillations generated in the right posterior hippocampus and the right inferior frontal gyrus. This demonstrates that the timing of memory reactivations in humans is biased toward hippocampal theta phase.
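The phase-locking computation underlying such analyses can be sketched as follows: extract the instantaneous theta phase with an analytic signal (Hilbert transform computed in the Fourier domain) and measure how tightly a set of event times clusters on that phase. The signal and event definition below are synthetic, not the MEG data.

```python
import numpy as np

# Phase-locking value (PLV) of events relative to an oscillation's phase,
# with the analytic signal built via an FFT-domain Hilbert transform.

def analytic_signal(x):
    """Analytic signal of a real array (length assumed even)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs, f_theta = 1000, 6                    # sampling rate and theta freq (Hz)
t = np.arange(4 * fs) / fs               # 4 s -> an integer number of cycles
theta = np.cos(2 * np.pi * f_theta * t)
phase = np.angle(analytic_signal(theta))

# Events locked near the theta peak (phase ~ 0) yield PLV near 1;
# uniformly scattered events would yield a value near 0.
events = np.flatnonzero(np.isclose(phase, 0.0, atol=0.05))
plv = float(np.abs(np.mean(np.exp(1j * phase[events]))))
print(plv > 0.99)  # -> True
```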
Abstract:
Here we discuss two consecutive MERLIN observations of the X-ray binary LS I +61°303. The first observation shows a double-sided jet extending up to about 200 AU on both sides of a central source. The jet shows a bent S-shaped structure similar to the one displayed by the well-known precessing jet of SS 433. The precession suggested in the first MERLIN image becomes evident in the second one, showing a one-sided bent jet significantly rotated with respect to the jet of the day before. We conclude that the derived precession of the relativistic (beta=0.6) jet explains puzzling previous VLBI results. Moreover, the fast precession could explain the never-understood short-term (days) variability of the associated gamma-ray source 2CG 135+01 / 3EG J0241+6103.
Abstract:
The aim of this project is to get used to another kind of programming. Until now, I have used very complex programming languages to develop applications or even to program microcontrollers, but the PicoCricket system is evidence that we do not need such complex development tools to build functional devices. The PicoCricket system is a clear example of simple programming that makes devices work the way we program them. There is an easy but effective way to program small devices simply by saying what we want them to do. We cannot implement complex algorithms and mathematical operations, but we can program these devices in a short time. Nowadays, the easier and faster we produce, the more we earn. So the tendency is to develop quickly, cheaply, and easily, and the PicoCricket system delivers exactly that.
Abstract:
The influence of voltage on the conductance of toad skin was studied to identify the time course of the activation/deactivation dynamics of voltage-dependent Cl- channels located in the apical membrane of mitochondrion-rich cells in this tissue. Positive apical voltage induced an important conductance inhibition which took a few seconds to fully develop and was instantaneously released by pulse inversion to negative voltage, indicating a short-duration memory of the inhibiting factors. Sinusoidal stimulation at 23.4 mM [Cl-] showed hysteresis in the current versus voltage curves, even at very low frequency, suggesting that the rate of voltage application was also relevant for the inhibition/releasing effect to develop. We conclude that the voltage modulation of apical Cl- permeability is essentially a fast process and the apparent slow components of activation/deactivation obtained in the whole skin are a consequence of a gradual voltage build-up across the apical membrane due to voltage sharing between apical and basolateral membranes.
Abstract:
Variations in different types of genomes have been found to be responsible for a large degree of physical diversity such as appearance and susceptibility to disease. Identification of genomic variations is difficult and can be facilitated through computational analysis of DNA sequences. Newly available technologies are able to sequence billions of DNA base pairs relatively quickly. These sequences can be used to identify variations within their specific genome but must be mapped to a reference sequence first. In order to align these sequences to a reference sequence, we require mapping algorithms that make use of approximate string matching and string indexing methods. To date, few mapping algorithms have been tailored to handle the massive amounts of output generated by newly available sequencing technologies. In order to handle this large amount of data, we modified the popular mapping software BWA to run in parallel using OpenMPI. Parallel BWA matches the efficiency of multithreaded BWA functions while providing efficient parallelism for BWA functions that do not currently support multithreading. Parallel BWA shows significant wall time speedup in comparison to multithreaded BWA on high-performance computing clusters, and will thus facilitate the analysis of genome sequencing data.
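The data-parallel pattern is easy to sketch: partition the read set into chunks, process chunks independently, and merge results in order. The stand-in "aligner" and thread pool below are illustrative only; Parallel BWA distributes work across MPI processes and runs the real BWA algorithms on each chunk.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the data-parallel mapping pattern: chunk the reads, run an
# independent (here: trivial stand-in) aligner per chunk, merge in order.

def align_chunk(reads, reference="ACGTACGTACGT"):
    """Stand-in aligner: report the first match position of each read."""
    return [reference.find(r) for r in reads]

def parallel_align(reads, n_workers=4):
    size = -(-len(reads) // n_workers)            # ceiling division
    chunks = [reads[i:i + size] for i in range(0, len(reads), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(align_chunk, chunks)   # order-preserving map
    return [pos for chunk in results for pos in chunk]

reads = ["ACGT", "CGTA", "GTAC", "TACG", "TTTT"]
print(parallel_align(reads))  # -> [0, 1, 2, 3, -1]
```

Because the chunks are independent, the same structure ports directly to process- or MPI-based parallelism; only the executor changes.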
Abstract:
Diatoms are renowned for their robust ability to perform NPQ (Non-Photochemical Quenching of chlorophyll fluorescence) as a dissipative response to heightened light stress on photosystem II, plausibly explaining their dominance over other algal groups in turbulent light environs. Their NPQ mechanism has been principally attributed to a xanthophyll cycle involving the lumenal pH regulated reversible de-epoxidation of diadinoxanthin. The principal goal of this dissertation is to reveal the physiological and physical origins and consequences of the NPQ response in diatoms during short-term transitions to excessive irradiation. The investigation involves diatom species from different originating light environs to highlight the diversity of diatom NPQ and to facilitate the detection of core mechanisms common among the diatoms as a group. A chiefly spectroscopic approach was used to investigate NPQ in diatom cells. Prime methodologies include: the real time monitoring of PSII excitation and de-excitation pathways via PAM fluorometry and pigment interconversion via transient absorbance measurements, the collection of cryogenic absorbance spectra to measure pigment energy levels, and the collection of cryogenic fluorescence spectra and room temperature picosecond time resolved fluorescence decay spectra to study excitation energy transfer and dissipation. Chemical inhibitors that target the trans-thylakoid pH gradient, the enzyme responsible for diadinoxanthin de-epoxidation, and photosynthetic electron flow were additionally used to experimentally manipulate the NPQ response. Multifaceted analyses of the NPQ responses from two species not previously characterised photosynthetically, Nitzschia curvilineata and Navicula sp., were used to identify an excitation pressure relief ‘strategy’ for each species.
Three key areas of NPQ were examined: (i) the NPQ activation/deactivation processes, (ii) how NPQ affects the collection, dissipation, and usage of absorbed light energy, and (iii) the interdependence of NPQ and photosynthetic electron flow. It was found that Nitzschia cells regulate excitation pressure via performing a high amplitude, reversible antenna based quenching which is dependent on the de-epoxidation of diadinoxanthin. In Navicula cells excitation pressure could be effectively regulated solely within the PSII reaction centre, whilst antenna based, diadinoxanthin de-epoxidation dependent quenching was implicated to be used as a supplemental, long-lasting source of excitation energy dissipation. These strategies for excitation balance were discussed in the context of resource partitioning under these species’ originating light climates. A more detailed investigation of the NPQ response in Nitzschia was used to develop a comprehensive model describing the mechanism for antenna centred non-photochemical quenching in this species. The experimental evidence was strongly supportive of a mechanism whereby: an acidic lumen triggers the diadinoxanthin de-epoxidation and protonation mediated aggregation of light harvesting complexes leading to the formation of quencher chlorophyll a-chlorophyll a dimers with short-lived excited states; quenching relaxes when a rise in lumen pH triggers the dispersal of light harvesting complex aggregates via deprotonation events and the input of diadinoxanthin. This model may also be applicable for describing antenna based NPQ in other diatom species.
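The PAM-fluorometry quantity at the centre of these measurements is the standard Stern-Volmer NPQ parameter, NPQ = (Fm - Fm')/Fm', where Fm is the maximal fluorescence of the dark-adapted state and Fm' the maximal fluorescence under actinic light. A minimal sketch with illustrative values (not data from the study):

```python
# Stern-Volmer NPQ from PAM fluorometry: quenching of maximal fluorescence
# under light (Fm') relative to the dark-adapted maximum (Fm).

def npq(fm_dark, fm_light):
    if fm_light <= 0:
        raise ValueError("Fm' must be positive")
    return (fm_dark - fm_light) / fm_light

print(npq(2.0, 0.8))  # -> 1.5
```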
Abstract:
The capacitated location-routing problem (CLRP) arises as a key problem in the design of freight distribution networks. It generalizes both the capacitated facility location problem, by adding routing decisions, and the multi-depot vehicle routing problem, by adding depot location decisions. In this thesis we develop tools for solving the CLRP using mathematical programming. In Chapter 3, we introduce three new models for the CLRP based on vehicle flows and commodity flows, and we show that they dominate, in terms of lower-bound quality, the original two-index formulation [19]. New valid inequalities were developed and added to the models, along with known ones. New separation algorithms were also developed, which in most cases generalize those found in the literature. Numerical results show that these flow models are indeed useful for solving small- to medium-size instances. In Chapter 4, we present a new column generation method based on a set-partitioning formulation. The subproblem is a capacitated shortest-path problem. In particular, we use a relaxation of this problem in which routes may contain cycles of length three or more. This is complemented by new cuts that further reduce the integrality gap while discouraging the appearance of cycles in the routes. The results suggest that this method is the best exact method available for the CLRP. In Chapter 5, we introduce a new heuristic method for the CLRP.
First, we run a randomized GRASP-type method to find an initial set of good-quality solutions. The solutions in this set are then combined so as to improve them. Finally, we run a destroy-and-repair method based on solving a new location-reallocation model that generalizes the reallocation problem [48].
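The GRASP construction phase of the heuristic can be sketched on a toy depot-selection instance: at each step, candidates are ranked by marginal cost and the next depot is drawn at random from a restricted candidate list, and the best solution over many restarts is kept. Routing, the solution-combination step, and the destroy-and-repair phase are omitted; the cost matrix is made up.

```python
import random

# GRASP-style randomized greedy construction for picking p depots that
# minimize total client-to-nearest-depot assignment cost (toy instance).

def grasp_depots(costs, p, iters=50, rcl_size=3, seed=0):
    rng = random.Random(seed)
    n_depots = len(costs)
    best, best_cost = None, float("inf")

    def total_cost(depots):
        return sum(min(costs[d][c] for d in depots)
                   for c in range(len(costs[0])))

    for _ in range(iters):
        chosen = []
        while len(chosen) < p:
            cands = [d for d in range(n_depots) if d not in chosen]
            # Rank by marginal cost, then draw from the restricted list.
            cands.sort(key=lambda d: total_cost(chosen + [d]))
            chosen.append(rng.choice(cands[:rcl_size]))
        cost = total_cost(chosen)
        if cost < best_cost:
            best, best_cost = sorted(chosen), cost
    return best, best_cost

# 4 candidate depots x 5 clients; assignment costs are illustrative.
costs = [[4, 5, 1, 9, 7],
         [2, 8, 6, 3, 5],
         [9, 1, 7, 2, 8],
         [6, 7, 3, 1, 2]]
depots, cost = grasp_depots(costs, p=2)
print(depots, cost)
```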