898 results for Additional Numerical Acceleration
Abstract:
See the abstract in the file attached at the beginning of the research work.
Abstract:
BACKGROUND: Up to 60% of syncopal episodes remain unexplained. We report the results of a standardized, stepwise evaluation of patients referred to an ambulatory clinic for unexplained syncope. METHODS AND RESULTS: We studied 939 consecutive patients referred for unexplained syncope, who underwent a standardized evaluation, including history, physical examination, electrocardiogram, head-up tilt testing (HUTT), carotid sinus massage (CSM) and hyperventilation testing (HYV). Echocardiogram and stress test were performed when underlying heart disease was initially suspected. Electrophysiological study (EPS) and implantable loop recorder (ILR) were used only in patients with underlying structural heart disease or major unexplained syncope. We identified a cause of syncope in 66% of patients, including 27% vasovagal, 14% psychogenic, 6% arrhythmias, and 6% hypotension. Noninvasive testing identified 92% and invasive testing an additional 8% of the causes. HUTT yielded 38%, CSM 28%, HYV 49%, EPS 22%, and ILR 56% of diagnoses. On average, patients with arrhythmic causes were older, had a lower functional capacity, longer P-wave duration, and presented with fewer prodromes than patients with vasovagal or psychogenic syncope. CONCLUSIONS: A standardized stepwise evaluation emphasizing noninvasive tests yielded 2/3 of causes in patients referred to an ambulatory clinic for unexplained syncope. Neurally mediated and psychogenic mechanisms were behind >50% of episodes, while cardiac arrhythmias were uncommon. Sudden syncope, particularly in older patients with functional limitations or a prolonged P-wave, suggests an arrhythmic cause.
Abstract:
Thermal systems that exchange heat and mass by conduction, convection, and radiation (solar and thermal) occur in many engineering applications, such as energy storage with solar collectors, window glazing in buildings, cooling of plastic moulds, and air-handling units. These thermal systems are often composed of several elements, for example a building with its walls, windows, and rooms. It would therefore be of particular interest to have a modular thermal system that is built by connecting separate modules for the elements, giving the flexibility to use and change models for individual elements and to add or remove elements without changing the entire code. A numerical approach to the heat transfer and fluid flow in such systems saves the time and cost of full-scale experiments and also aids the optimisation of the system parameters. The following sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow, the work in progress, and the future work.
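A minimal sketch of how such a modular structure could look, assuming a simple lumped-capacitance formulation; the class names, interfaces, and parameter values are illustrative only and do not come from the thesis. Each element exposes the same heat-flow interface, so modules can be added, removed, or swapped without touching the driver.

# Hypothetical sketch of a modular thermal-system layout; class names,
# interfaces and parameter values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Wall:
    """Conductive element: heat flow (W) into the room for a given temperature pair."""
    area: float     # m^2
    u_value: float  # W/(m^2 K)

    def heat_flow(self, t_room: float, t_out: float) -> float:
        return self.u_value * self.area * (t_out - t_room)

@dataclass
class Room:
    """Lumped air node; any element with a heat_flow() method can be attached."""
    capacity: float                        # J/K
    temperature: float                     # K
    elements: list = field(default_factory=list)

    def step(self, t_out: float, dt: float) -> None:
        q = sum(e.heat_flow(self.temperature, t_out) for e in self.elements)
        self.temperature += q * dt / self.capacity

room = Room(capacity=5e5, temperature=293.0,
            elements=[Wall(area=12.0, u_value=0.8)])
for _ in range(3600):                      # one hour in 1 s steps
    room.step(t_out=278.0, dt=1.0)
print(f"Room temperature after 1 h: {room.temperature:.2f} K")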
Abstract:
Sixty d,l- or l-methadone-treated patients in maintenance therapy were interviewed for additional drug abuse and psychiatric comorbidity; 51.7% of the entire population had a comorbid Axis-I disorder, with a higher prevalence in females (P=0.05). Comorbid patients tended to have higher abuse of benzodiazepines, alcohol, cannabis, and cocaine, but not of heroin. They had received a significantly lower d,l- (P<0.05) and l-methadone dose than non-comorbid subjects. The duration of maintenance treatment showed an inverse relationship to the frequency of additional heroin intake (P<0.01). Patients with additional heroin intake over the past 30 days had been treated with a significantly lower l-methadone dosage (P<0.05) than patients without. Axis-I comorbidity appears to be lower when relatively higher dosages of d,l- (and l-)methadone are administered; comorbid individuals, however, were on significantly lower dosages. Finally, l-methadone, but not d,l-methadone, seems to be more effective in reducing additional heroin abuse.
Abstract:
In this paper, we develop numerical algorithms with small requirements of storage and operations for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist of solving, with a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical issues of the algorithms (speed, storage and stability) and refer to the papers above for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for the iteration of quasi-periodic cocycles and the computation of the invariant bundles, which is a preliminary step for the computation of whiskered invariant tori. Since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here allow us to compute primary and secondary invariant KAM tori in a unified way. Secondary tori are invariant tori that can be contracted to a periodic orbit. We present some preliminary results showing that the methods are indeed implementable and fast. We postpone optimized implementations and results on the breakdown of invariant tori to a future paper.
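For reference, the functional equation mentioned above can be stated as follows for an exact symplectic map F with frequency vector omega (a standard formulation consistent with the parameterization method of [LGJV05] and [FLS07]; the vector-field case is analogous): the torus is sought as the image of an embedding K of the d-dimensional torus satisfying

\[ F\bigl(K(\theta)\bigr) = K(\theta + \omega), \qquad \theta \in \mathbb{T}^{d}. \]

Each Newton step takes the current error \( E(\theta) = F(K(\theta)) - K(\theta + \omega) \) and solves the linearized equation

\[ DF\bigl(K(\theta)\bigr)\,\Delta K(\theta) - \Delta K(\theta + \omega) = -E(\theta), \]

updating \( K \leftarrow K + \Delta K \); the geometric identities referred to in the abstract are what allow this linear equation to be solved with small storage and operation counts.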
Abstract:
PURPOSE: To introduce a new k-space traversal strategy for segmented three-dimensional echo planar imaging (3D EPI) that encodes two partitions per radiofrequency excitation, effectively halving the number of excitations used to acquire a 3D EPI dataset. METHODS: The strategy was evaluated in the context of functional MRI applications for image quality (compared with segmented 3D EPI), temporal signal-to-noise ratio (tSNR) and the ability to detect resting-state networks (compared with multislice two-dimensional (2D) EPI and segmented 3D EPI), and temporal resolution (the ability to separate cardiac- and respiration-related fluctuations from the desired blood oxygen level-dependent signal of interest). RESULTS: Whole-brain images with a nominal voxel size of 2 mm isotropic could be acquired with a temporal resolution under half a second using conventional parallel-imaging acceleration of up to 4× in the partition-encode direction together with the novel 2× data-acquisition speed-up and a 32-channel coil. With an 8× data-acquisition speed-up in the partition-encode direction, 3D reduced-excitations (RE)-EPI produced acceptable image quality without noticeable additional artifacts. Owing to increased tSNR and better characterization of physiological fluctuations, the new strategy allowed detection of more resting-state networks than multislice 2D EPI and segmented 3D EPI. CONCLUSION: 3D RE-EPI provided a significant increase in temporal resolution for whole-brain acquisitions and improved physiological-noise characterization compared with 2D EPI and segmented 3D EPI. Magn Reson Med 72:786-792, 2014.
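The back-of-the-envelope effect of encoding two partitions per excitation can be illustrated with a short sketch; the acquisition parameters below are assumed for illustration and are not taken from the paper.

# Illustrative arithmetic only; partition count, acceleration factor and
# per-excitation time are assumed values, not the paper's protocol.
n_partitions      = 96    # partition-encode steps for whole-brain coverage (assumed)
r_parallel        = 4     # parallel-imaging acceleration in the partition direction
partitions_per_rf = 2     # the proposed scheme encodes two partitions per excitation
tr_per_excitation = 0.04  # seconds per excitation (assumed)

excitations_segmented = n_partitions / r_parallel
excitations_re_epi    = n_partitions / (r_parallel * partitions_per_rf)

print(f"Segmented 3D EPI: {excitations_segmented:.0f} excitations, "
      f"volume TR ~ {excitations_segmented * tr_per_excitation:.2f} s")
print(f"3D RE-EPI:        {excitations_re_epi:.0f} excitations, "
      f"volume TR ~ {excitations_re_epi * tr_per_excitation:.2f} s")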
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles that end up at the surface and combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use has widened to address different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
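The governing equation referred to above is, in a commonly used form (notation assumed here rather than quoted from the paper),

\[ \rho c \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right) = \nabla \cdot \bigl( k \, \nabla T \bigr) + \rho H, \]

where T is temperature, \(\mathbf{v}\) the kinematic (rock-advection) velocity field, k the thermal conductivity, \(\rho c\) the volumetric heat capacity, and H the radiogenic heat production, solved subject to the temporally varying surface-topography boundary condition.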
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methods minimizing the l2 or l1-TV norm using prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil-combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were varied systematically in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstructions of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF=0.95, r(2)MCF=0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
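The single-orientation reconstructions compared here are regularized dipole inversions; a generic form of the functionals being minimized (notation assumed, and details differ between the implementations) is

\[ \chi_{\ell_2} = \arg\min_{\chi} \, \lVert W ( d * \chi - \phi ) \rVert_2^2 + \lambda \, \lVert M \nabla \chi \rVert_2^2, \qquad \chi_{\ell_1\text{-}TV} = \arg\min_{\chi} \, \lVert W ( d * \chi - \phi ) \rVert_2^2 + \lambda \, \lVert M \nabla \chi \rVert_1, \]

where \(\phi\) is the unwrapped, background-corrected field map, d the unit dipole kernel, W a data-consistency weighting, M an edge mask derived from the magnitude image (the prior knowledge of object edges), and \(\lambda\) the regularization weight that the L-curve analysis is used to select.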
Abstract:
A family of nonempty closed convex sets is built using the data of the Generalized Nash equilibrium problem (GNEP). The sets are selected iteratively so that the intersection of the selected sets contains solutions of the GNEP. The algorithm introduced by Iusem-Sosa (2003) is adapted to obtain solutions of the GNEP. Finally, some numerical experiments are given to illustrate the numerical behavior of the algorithm.
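For reference, the problem being solved can be stated in its standard form (not quoted from the paper): find \( x^{*} = (x^{*}_{1}, \dots, x^{*}_{N}) \) such that, for every player \(\nu\),

\[ x^{*}_{\nu} \in \arg\min_{x_{\nu} \in X_{\nu}(x^{*}_{-\nu})} \; \theta_{\nu}\bigl( x_{\nu}, x^{*}_{-\nu} \bigr), \]

where \(\theta_{\nu}\) is player \(\nu\)'s objective and the feasible set \(X_{\nu}(x_{-\nu})\), unlike in a standard Nash problem, depends on the rivals' strategies; the closed convex sets built by the algorithm are chosen so that their intersection contains such points \(x^{*}\).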
Abstract:
To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic-theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of the probability distribution function, and thus of all macroscopic quantities given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where increasing the strength of the excitatory coupling uncovers a bifurcation scenario: asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment-closure assumption used in [13] to further simplify the kinetic theory.
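The type of equation being discretized can be sketched as follows (a schematic one-dimensional form; the full model of [13, 12] also carries the synaptic-conductance variable):

\[ \frac{\partial \rho}{\partial t}(v,t) + \frac{\partial}{\partial v}\Bigl[ \mu\bigl(v, N(t)\bigr)\, \rho(v,t) \Bigr] = \frac{\partial}{\partial v}\Bigl[ D\bigl(N(t)\bigr)\, \frac{\partial \rho}{\partial v}(v,t) \Bigr], \]

where \(\rho(v,t)\) is the probability density of membrane potentials, the drift \(\mu\) and diffusion D depend on the population firing rate N(t) through the excitatory coupling, and N(t) is itself recovered from the probability flux at the firing threshold, which makes the equation nonlinear and is the source of the bistability discussed above.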
Abstract:
In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed depending on whether the vegetation is dominated by bending or tensile forces. For bending plants, a model structured on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that, for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem scale, which leads to the drag discontinuity and the associated mixing layer.
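For the bending-dominated vegetation, the dynamic Euler-Bernoulli beam equation referred to above has the standard form (notation assumed here)

\[ \rho_{s} A \, \frac{\partial^{2} w}{\partial t^{2}} + E I \, \frac{\partial^{4} w}{\partial z^{4}} = f(z,t), \]

where w(z,t) is the stem deflection, \(\rho_{s} A\) the mass per unit stem length, EI the flexural rigidity, and f(z,t) the hydrodynamic drag loading per unit length imposed by the resolved flow, which is how the stem-scale energy extraction enters the coupled simulation.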
Abstract:
OBJECTIVES: Age- and height-adjusted spirometric lung function of South Asian children is lower than that of white children. It is unclear whether this is purely genetic or partly explained by the environment. In this study, we assessed whether cultural factors, socioeconomic status, intrauterine growth, environmental exposures, or a family and personal history of wheeze contribute to explaining the ethnic differences in spirometric lung function. METHODS: We studied children aged 9 to 14 years from a population-based cohort, including 1088 white children and 275 UK-born South Asians. Log-transformed spirometric data were analyzed using multiple linear regressions, adjusting for anthropometric factors. Five additional models further adjusted for (1) cultural factors, (2) indicators of socioeconomic status, (3) perinatal data reflecting intrauterine growth, (4) environmental exposures, and (5) personal and family history of wheeze. RESULTS: Height- and gender-adjusted forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1) were lower in South Asian than in white children (relative differences -11% and -9%, respectively; P < .001), but PEF and FEF50 were similar (P ≥ .5). FEV1/FVC was higher in South Asians (by 1.8%, P < .001). These differences remained largely unchanged in all 5 alternative models. CONCLUSIONS: Our study confirmed important differences in lung volumes between South Asian and white children. These were not attenuated after adjustment for cultural and socioeconomic factors and intrauterine growth, nor were they explained by differences in environmental exposures or a personal or family history of wheeze. This suggests that the differences in lung function may be mainly genetic in origin. The implication is that ethnicity-specific predicted values remain important, specifically for South Asian children.
Abstract:
Human perception of bitterness displays pronounced interindividual variation. This phenotypic variation is mirrored by equally pronounced genetic variation in the family of bitter taste receptor genes. To better understand the effects of common genetic variation on human bitter taste perception, we conducted a genome-wide association study on a discovery panel of 504 subjects and a validation panel of 104 subjects from the general population of São Paulo in Brazil. Correction for general taste sensitivity allowed us to identify a SNP in the cluster of bitter taste receptors on chr12 (10.88-11.24 Mb, build 36.1) significantly associated (best SNP: rs2708377, P = 5.31 × 10(-13), r(2) = 8.9%, β = -0.12, s.e. = 0.016) with the perceived bitterness of caffeine. This association overlaps with, but is statistically distinct from, the previously identified SNP rs10772420 influencing the perception of quinine bitterness, which falls in the same bitter taste cluster. We replicated this association with quinine perception (P = 4.97 × 10(-37), r(2) = 23.2%, β = 0.25, s.e. = 0.020) and additionally found the effect of this genetic locus to be concentration specific, with a strong impact on the perception of low, but no impact on the perception of high, concentrations of quinine. Our study thus furthers our understanding of the complex genetic architecture of bitter taste perception.