64 results for Parallel Computations
Abstract:
Two new approaches to quantitatively analyze diffuse diffraction intensities from faulted layer stacking are reported. The parameters of a probability-based growth model are determined with two iterative global optimization methods: a genetic algorithm (GA) and particle swarm optimization (PSO). The results are compared with those from a third global optimization method, a differential evolution (DE) algorithm [Storn & Price (1997). J. Global Optim. 11, 341–359]. The algorithm efficiencies in the early and late stages of iteration are compared. The accuracy of the optimized parameters improves with increasing size of the simulated crystal volume. The wall-clock time for computing quite large crystal volumes can be kept within reasonable limits by the parallel calculation of many crystals (clones) generated for each model parameter set on a supercomputer or grid computer. The faulted layer stacking in single crystals of trigonal three-pointed-star-shaped tris(bicyclo[2.1.1]hexeno)benzene molecules serves as an example for the numerical computations. Based on numerical values of seven model parameters (reference parameters), nearly noise-free reference intensities of 14 diffuse streaks were simulated from 1280 clones, each consisting of 96 000 layers (reference crystal). The parameters derived from the reference intensities with GA, PSO and DE were compared with the original reference parameters as a function of the simulated total crystal volume. The statistical distribution of structural motifs in the simulated crystals is in good agreement with that in the reference crystal. The results found with the growth model for layer stacking disorder are applicable to other disorder types and modeling techniques, Monte Carlo in particular.
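To make the optimization setup concrete, here is a minimal sketch of fitting a seven-parameter model by differential evolution, using SciPy's implementation of the Storn & Price algorithm with its built-in parallel population evaluation. The `simulate_streaks` function and the parameter bounds are hypothetical stand-ins: in the paper, each objective evaluation would grow many crystal clones in parallel and compare the simulated diffuse streak intensities with the reference intensities.

```python
# Minimal sketch (not the authors' code): recover model parameters by
# minimizing the squared difference between simulated and reference
# streak intensities with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

def simulate_streaks(params):
    # Hypothetical placeholder for simulating 14 diffuse streak
    # intensities from a set of stacking-model parameters.
    k = np.arange(1, 15)
    return np.cos(k[:, None] * params).sum(axis=1) ** 2

true_params = np.linspace(0.1, 0.7, 7)     # stand-in "reference parameters"
reference = simulate_streaks(true_params)  # stand-in reference intensities

def residual(params):
    return np.sum((simulate_streaks(params) - reference) ** 2)

result = differential_evolution(
    residual,
    bounds=[(0.0, 1.0)] * 7,    # one (low, high) pair per model parameter
    updating="deferred",        # required for parallel evaluation
    workers=-1,                 # evaluate the population on all CPU cores
    seed=1,
)
print(result.x, result.fun)
```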
Abstract:
The analytic continuation needed for the extraction of transport coefficients requires, in principle, a continuous function of the Euclidean time variable. We report on progress towards achieving the continuum limit for 2-point correlator measurements in thermal SU(3) gauge theory, with specific attention paid to scale setting. In particular, we improve upon the determination of the critical lattice coupling and the critical temperature of pure SU(3) gauge theory, estimating r0Tc ≃ 0.7470(7) after a continuum extrapolation. As an application, the determination of the heavy quark momentum diffusion coefficient from a correlator of colour-electric fields attached to a Polyakov loop is discussed.
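As a purely illustrative aside, the continuum extrapolation mentioned above is, at its simplest, a fit of a dimensionless observable linearly in the squared lattice spacing followed by reading off the intercept at a → 0. The sketch below uses invented placeholder values, not data from the paper.

```python
# Minimal sketch of a continuum extrapolation: fit r0*Tc against (a/r0)^2
# and take the a -> 0 intercept. All numbers are hypothetical placeholders.
import numpy as np

a2 = np.array([0.100, 0.050, 0.025])        # hypothetical (a/r0)^2 values
r0Tc = np.array([0.7560, 0.7516, 0.7493])   # hypothetical measurements

slope, intercept = np.polyfit(a2, r0Tc, deg=1)
print(f"continuum estimate: r0*Tc = {intercept:.4f}")
```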
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed-form formulas. We illustrate their performance in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
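For orientation, the sketch below sets up the basic ingredients: a Gaussian process model of f, the pointwise excursion probability above a threshold, and a naive batch rule that simply picks the q candidates with the largest excursion uncertainty. This is not one of the paper's closed-form multipoint SUR criteria, only an illustration of how a batch for parallel evaluation might be selected; the simulator f and all numbers are placeholders.

```python
# Minimal sketch: GP model, excursion probability, naive batch selection.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # hypothetical expensive simulator
    return np.sin(3 * x) + 0.5 * x

T = 0.8                                      # excursion threshold
X = np.array([[0.1], [0.4], [0.9], [1.6]])   # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
m, s = gp.predict(candidates, return_std=True)
p = norm.cdf((m - T) / np.maximum(s, 1e-12))  # P(f(x) > T) under the GP

q = 4                                         # batch size = available CPUs
batch = candidates[np.argsort(p * (1 - p))[-q:]]
# `batch` can now be evaluated in parallel, e.g. multiprocessing.Pool.map(f, batch)
```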
Abstract:
Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller–Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
Abstract:
PURPOSE: The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved k-space-based parallel imaging methods. THEORY: An analytical formalism of the noise distribution is derived, extending the existing g-factor formulation for non-time-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis considers temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS: All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding window reconstruction and non-time-resolved GRAPPA are taken as a reference. Statistical validation is based on series of pseudo-replica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS: The superior signal-to-noise performance of time-resolved over non-time-resolved parallel imaging methods, at the expense of temporal frequency filtering, is analytically confirmed. Further, different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and sliding window are revealed. CONCLUSION: The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for a quantitative evaluation of time-resolved reconstruction methods. The presented theory therefore allows for comparison between time-resolved parallel imaging methods as well as with non-time-resolved methods.
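The pseudo-replica validation mentioned in METHODS admits a compact sketch: inject independent complex Gaussian noise into the measured k-space many times, reconstruct each replica, and take the pixel-wise standard deviation across replicas as an empirical noise map. The `reconstruct` function below is a hypothetical stand-in (a plain inverse FFT) for any (k-t-)GRAPPA-type reconstruction.

```python
# Minimal sketch of pseudo-replica noise estimation.
import numpy as np

def reconstruct(kspace):
    # Hypothetical stand-in for a (k-t-)GRAPPA reconstruction.
    return np.fft.ifft2(kspace)

rng = np.random.default_rng(0)
kspace = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
sigma = 0.1                      # measured k-space noise level (placeholder)
n_replicas = 200

replicas = np.empty((n_replicas, 64, 64), dtype=complex)
for r in range(n_replicas):
    noise = sigma * (rng.standard_normal(kspace.shape)
                     + 1j * rng.standard_normal(kspace.shape)) / np.sqrt(2)
    replicas[r] = reconstruct(kspace + noise)

noise_map = replicas.std(axis=0)  # pixel-wise noise after reconstruction
# Dividing by the noise map of a fully sampled reference (scaled by the
# square root of the acceleration factor) would yield a g-factor map.
```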
Abstract:
The efficient recognition of the pyrimidine base uracil by hypoxanthine or thymine in the parallel DNA triplex motif is based on the interplay of a conventional N−H⋅⋅⋅O and an unconventional C−H⋅⋅⋅O hydrogen bond.
Abstract:
BACKGROUND: Biomarkers are a promising tool for the management of patients with atherosclerosis, but their variation is largely unknown. We assessed within-subject and between-subject biological variation of biomarkers in peripheral artery disease (PAD) patients and healthy controls, and defined which biomarkers have a favorable variation profile for future studies. METHODS: Prospective, parallel-group cohort study, including 62 patients with stable PAD (79% men, 65 ± 7 years) and 18 healthy control subjects (44% men, 57 ± 7 years). Blood samples were taken at baseline and after 3, 6, and 12 months. We calculated within-subject (CVI) and between-subject (CVG) coefficients of variation and the intra-class correlation coefficient (ICC). RESULTS: Mean levels of D-dimer, hs-CRP, IL-6, IL-8, MMP-9, MMP-3, S100A8/A9, PAI-1, sICAM-1, and sP-selectin were higher in PAD patients than in healthy controls (P ≤ .05 for all). CVI and CVG of the different biomarkers varied considerably in both groups. An ICC ≥ 0.5 (indicating moderate-to-good reliability) was found for hs-CRP, D-dimer, E-selectin, IL-10, MCP-1, MMP-3, oxLDL, sICAM-1 and sP-selectin in both groups, for sVCAM in healthy controls, and for MMP-9, PAI-1 and sCD40L in PAD patients. CONCLUSIONS: Single biomarker measurements are of limited utility due to large within-subject variation, both in PAD patients and healthy subjects. D-dimer, hs-CRP, MMP-9, MMP-3, PAI-1, sP-selectin and sICAM-1 have both higher mean levels in PAD patients and a favorable variation profile, making them most suitable for future studies.
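The variation statistics named above follow from a standard one-way random-effects decomposition of repeated measurements. The sketch below computes CVI, CVG and ICC from a subjects-by-visits matrix; the data array is a random placeholder, not study data.

```python
# Minimal sketch: within-subject CV (CVI), between-subject CV (CVG) and ICC
# from repeated biomarker measurements (rows = subjects, columns = visits).
import numpy as np

rng = np.random.default_rng(0)
data = 5 + rng.standard_normal((62, 4))   # hypothetical biomarker values

n_sub, n_rep = data.shape
grand_mean = data.mean()
subj_means = data.mean(axis=1)

# Mean squares from a one-way ANOVA with subject as the random factor
ms_between = n_rep * np.sum((subj_means - grand_mean) ** 2) / (n_sub - 1)
ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n_sub * (n_rep - 1))

var_within = ms_within
var_between = max((ms_between - ms_within) / n_rep, 0.0)

cvi = 100 * np.sqrt(var_within) / grand_mean    # within-subject CV (%)
cvg = 100 * np.sqrt(var_between) / grand_mean   # between-subject CV (%)
icc = var_between / (var_between + var_within)  # intra-class correlation

print(f"CVI={cvi:.1f}%  CVG={cvg:.1f}%  ICC={icc:.2f}")
```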
Abstract:
We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
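A common matrix-free way to realize such a Newton/GMRES combination is a Jacobian-free Newton-Krylov iteration, sketched below on a tiny nonlinear system that stands in for the discretized space-time residual. The finite-difference Jacobian-vector product avoids assembling the Jacobian; the preconditioner used in the paper is omitted here.

```python
# Minimal Jacobian-free Newton-Krylov sketch: each Newton correction solves
# J(u) du = -F(u) with GMRES, approximating J(u) v by a finite difference.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # Placeholder nonlinear residual standing in for the discretized
    # space-time Navier-Stokes system; exact solution is u = (1, 2).
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 2 - 5.0])

u = np.array([1.0, 1.0])
for it in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    eps = 1e-7
    # Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps
    J = LinearOperator((2, 2), matvec=lambda v: (F(u + eps * v) - r) / eps)
    du, info = gmres(J, -r)
    u = u + du
print(u, F(u))
```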
Abstract:
Specialization to nectarivory is associated with radiations within different bird groups, including parrots. One of these, the Australasian lories, was shown to be unexpectedly species rich. Their shift to nectarivory may have created an ecological opportunity promoting species proliferation. Several morphological specializations of the feeding tract to nectarivory have been described for parrots. However, they have never been assessed in a quantitative framework that accounts for phylogenetic nonindependence. Using a phylogenetic comparative approach with broad taxon sampling and 15 continuous characters of the digestive tract, we demonstrate that nectarivorous parrots differ in several traits from the remaining parrots. These trait changes indicate phenotype–environment correlations and parallel evolution, and may reflect adaptations for feeding effectively on nectar. Moreover, the diet shift was associated with significant trait shifts at the base of the radiation of the lories, as shown by an alternative statistical approach. Their diet shift might be considered an evolutionary key innovation which promoted significant non-adaptive lineage diversification through allopatric partitioning of the same new niche. The lack of increased rates of cladogenesis in other nectarivorous parrots indicates that evolutionary innovations need not be associated one-to-one with diversification events.
Abstract:
In this paper, we question the homogeneity of a large parallel corpus by measuring the similarity between various sub-parts. We compare results obtained using a general measure of lexical similarity based on χ2 and by counting the number of discourse connectives. We argue that discourse connectives provide a more sensitive measure, revealing differences that are not visible with the general measure. We also provide evidence for the existence of specific characteristics defining translated texts as opposed to non-translated ones, due to a universal tendency for explicitation.
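For readers unfamiliar with the lexical measure, a χ2 comparison of two sub-parts amounts to building a contingency table of word counts and testing whether both parts draw words from the same distribution; a smaller statistic indicates greater similarity. The sketch below is illustrative only, with toy word lists as placeholders.

```python
# Minimal sketch of a chi-squared lexical comparison of two corpus sub-parts.
from collections import Counter
from scipy.stats import chi2_contingency

part_a = "the court adopted the resolution because the vote passed".split()
part_b = "the committee rejected the motion although the debate continued".split()

vocab = sorted(set(part_a) | set(part_b))
ca, cb = Counter(part_a), Counter(part_b)
# Add-one smoothing so no expected cell count is zero.
table = [[ca[w] + 1 for w in vocab],
         [cb[w] + 1 for w in vocab]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}  dof={dof}  p={p:.3f}")
```

The same table could be restricted to discourse connectives (because, although, ...) to mimic the paper's more sensitive measure.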
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
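The center-selection step described above can be sketched compactly: rank previously evaluated points by the two objectives, function value and negative minimum distance to the other evaluated points (both minimized), then take P points from the first non-dominated fronts as centers. Surrogate fitting, the tabu tenure and the perturbation step are omitted; all sample data below are placeholders.

```python
# Minimal sketch of SOP-style non-dominated sorting and center selection.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.random((30, 2))                  # previously evaluated points
fvals = np.sum((X - 0.5) ** 2, axis=1)   # placeholder expensive values

d = cdist(X, X)
np.fill_diagonal(d, np.inf)
min_dist = d.min(axis=1)                 # distance to nearest evaluated point

obj = np.column_stack([fvals, -min_dist])  # both objectives to be minimized

def nondominated_fronts(obj):
    """Simple O(n^2) non-dominated sorting; returns a list of index lists."""
    remaining = list(range(len(obj)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

P = 8                                    # number of processors / centers
centers = [i for front in nondominated_fronts(obj) for i in front][:P]
# Each center would now spawn candidate points by random perturbation; the
# best candidate per center (by surrogate value) goes to its own processor.
print(X[centers])
```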
Abstract:
BACKGROUND: Oesophageal clearance has scarcely been studied. AIMS: Oesophageal clearance in endoscopy-negative heartburn was assessed to detect differences in bolus clearance time among patients sub-grouped according to impedance-pH findings. METHODS: In 118 consecutive endoscopy-negative heartburn patients, impedance-pH monitoring was performed off therapy. Acid exposure time, number of refluxes, baseline impedance, post-reflux swallow-induced peristaltic wave index, and both automated and manual bolus clearance time were calculated. Patients were sub-grouped into pH/impedance positive (abnormal acid exposure and/or number of refluxes) and pH/impedance negative (normal acid exposure and number of refluxes), the former further subdivided on the basis of abnormal/normal acid exposure time (pH+/-) and abnormal/normal number of refluxes (impedance+/-). RESULTS: A poor correlation (r = 0.35) between automated and manual bolus clearance time was found. Manual bolus clearance time progressively decreased from pH+/impedance+ (42.6 s) through pH+/impedance- (27.1 s) and pH-/impedance+ (17.8 s) to pH-/impedance- (10.8 s). There was an inverse correlation between manual bolus clearance time and both baseline impedance and the post-reflux swallow-induced peristaltic wave index, and a direct correlation between manual bolus clearance time and acid exposure time. A manual bolus clearance time of 14.8 s had an accuracy of 93% in differentiating pH/impedance-positive from pH/impedance-negative patients. CONCLUSIONS: When manually measured, bolus clearance time reflects reflux severity, confirming the pathophysiological relevance of oesophageal clearance in reflux disease.