Abstract:
The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence gives us confidence that the magnetic field behaves similarly in computer simulations and in turbulent astrophysical environments, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic fields in astrophysical environments. Our studies of magnetic field diffusion in a turbulent medium reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlated magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations in de-correlating magnetic field and density, we obtain a permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show a decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both when we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing, dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities. Increasing the gravitational potential as well as the magnetization of the gas increases the segregation of mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field plus gas system in the gravitational field to its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of accretion onto the forming protostar. In addition, we benchmark our codes by studying heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.
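For context (the abstract itself does not define the terms), the subcritical/supercritical terminology refers to the dimensionless mass-to-flux ratio, conventionally normalized as

\[ \lambda \;\equiv\; \frac{M/\Phi}{(M/\Phi)_{\rm crit}}, \qquad (M/\Phi)_{\rm crit} \simeq \frac{1}{2\pi\sqrt{G}} \quad \text{(disk-like geometry)}, \]

so that clouds with \( \lambda < 1 \) (subcritical) are magnetically supported against collapse while clouds with \( \lambda > 1 \) (supercritical) are not; the turbulent, reconnection-enabled removal of flux relative to mass described above raises \( \lambda \) in the central regions of the gravitational potential.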
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to indicate quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
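A minimal sketch of the kind of performance function described above (sum of squared residuals between an observed map and a sum of elliptical Gaussian components). The parameterization below (peak position, peak intensity, major-axis width, eccentricity, position angle) and the function names are illustrative assumptions, not the authors' actual code.

    import numpy as np

    def elliptical_gaussian(x, y, x0, y0, peak, a, ecc, theta):
        """One elliptical Gaussian component.
        a     : width (sigma) along the major axis
        ecc   : eccentricity, so the minor-axis sigma is b = a*sqrt(1 - ecc**2)
        theta : orientation angle of the major axis (radians)"""
        b = a * np.sqrt(1.0 - ecc**2)
        xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
        yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
        return peak * np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))

    def model_image(params, x, y):
        """Sum of N_s components; params is a flat array of 6 values per source."""
        img = np.zeros_like(x, dtype=float)
        for p in params.reshape(-1, 6):
            img += elliptical_gaussian(x, y, *p)
        return img

    def performance(params, observed, x, y):
        """Objective of the type minimized by the cross-entropy search:
        sum of squared residuals between model and observed images."""
        return np.sum((model_image(params, x, y) - observed) ** 2)

A cross-entropy optimizer would then repeatedly draw candidate parameter vectors from a sampling distribution, rank them by this objective, and refit the distribution to the best-performing (elite) fraction until it concentrates around a solution.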
Abstract:
Evolutionary change in New World monkey (NWM) skulls occurred primarily along the line of least resistance defined by size (including allometric) variation (g_max). Although the direction of evolution was aligned with this axis, it was not clear whether this macroevolutionary pattern results from the conservation of within-population genetic covariance patterns (long-term constraint), from long-term selection along a size dimension, or whether both constraints and selection were inextricably involved. Furthermore, G-matrix stability can itself be a consequence of selection, which implies that both the constraints embodied in g_max and the evolutionary changes observed in the trait averages would be influenced by selection. Here, we describe a combination of approaches that allows one to test whether any particular instance of size evolution is a correlated by-product of constraints (g_max) or is due to direct selection on size, and we apply it to NWM lineages as a case study. The approach is based on comparing the direction and amount of evolutionary change produced by two different simulated sets of net-selection gradients (β): a size set (isometric and allometric size) and a non-size set. Using this approach it is possible to distinguish between the two hypotheses (indirect size evolution due to constraints versus direct selection on size), because although both may produce an evolutionary response aligned with g_max, the amount of change produced by random selection operating through the variance/covariance patterns (constraints hypothesis) will be much smaller than that produced by selection on size (selection hypothesis). Furthermore, the alignment of simulated evolutionary changes with g_max when selection is not on size is not as tight as when selection is actually on size, allowing a statistical test of whether a particular observed case of evolution along the line of least resistance is the result of selection along it or not. Also, with matrix diagonalization (principal components [PC]) it is possible to directly calculate the net-selection gradient on size alone (the first PC [PC1]) by dividing the amount of phenotypic difference between any two populations by the amount of variation in PC1, which allows one to benchmark whether selection was on size or not.
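A compact way to state the PC1 calculation mentioned above, in our own notation (the abstract gives no symbols): under the multivariate response-to-selection equation \( \Delta\bar{z} = G\beta \), diagonalizing the covariance matrix makes each principal component respond independently, so the net-selection gradient along the size axis follows from

\[ \beta_{\mathrm{PC1}} \;=\; \frac{\Delta\bar{z}_{\mathrm{PC1}}}{\lambda_{\mathrm{PC1}}}, \]

where \( \Delta\bar{z}_{\mathrm{PC1}} \) is the difference in trait means between the two populations projected onto PC1 and \( \lambda_{\mathrm{PC1}} \) is the variance (eigenvalue) associated with PC1.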
Abstract:
The importance of the HSO2 system in atmospheric and combustion chemistry has motivated several works dedicated to the study of the associated structures and chemical reactions. Nevertheless, controversy still exists in connection with the reaction SH + O2 -> H + SO2 and with the role of the HSOO isomers in the potential energy surface (PES). Here we report high-level ab initio calculations for the electronic ground state of the HSO2 system. Energetic, geometric, and frequency properties for the major stationary points of the PES are reported at the same level of calculation: CASPT2/aug-cc-pV(T+d)Z. This study introduces three new stationary points (two saddle points and one minimum). These structures allow the connection of the skewed HSOO isomers with the HSO2 minima, defining new reaction paths for SH + O2 -> H + SO2 and SH + O2 -> OH + SO. In addition, the location of the HSOO isomers in the reaction pathways has been clarified.
Abstract:
The thermodynamic properties of a selected set of benchmark hydrogen-bonded systems (the acetic acid dimer and the complexes of acetic acid with acetamide and methanol) were studied with the goal of obtaining detailed information on solvent effects on the hydrogen-bonded interactions, using water, chloroform, and n-heptane as representatives of a wide range of dielectric constants. Solvent effects were investigated using both explicit and implicit solvation models. For the explicit description of the solvent, molecular dynamics and Monte Carlo simulations in the isothermal-isobaric (NpT) ensemble, combined with the free energy perturbation technique, were performed to determine solvation free energies. Within the implicit solvation approach, the polarizable continuum model and the conductor-like screening model were applied. Combining gas-phase results with the results obtained from the different solvation models through an appropriate thermodynamic cycle allows estimation of complexation free energies, enthalpies, and the respective entropic contributions in solution. Owing to the strong solvation effects of water, the cyclic acetic acid dimer is not stable in aqueous solution. In less polar solvents the doubly hydrogen-bonded structure of the acetic acid dimer remains stable. This finding is in agreement with previous theoretical and experimental results. A similar trend to that of the acetic acid dimer is also observed for the acetamide complex. The methanol complex was found to be thermodynamically unstable in the gas phase as well as in all three solvents.
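The thermodynamic cycle referred to above has, in its standard form (our notation, not taken from the paper), the complexation free energy in solution obtained from the gas-phase complexation free energy and the solvation free energies of the complex AB and the monomers A and B:

\[ \Delta G_{\mathrm{complex}}^{\mathrm{sol}} \;=\; \Delta G_{\mathrm{complex}}^{\mathrm{gas}} \;+\; \Delta G_{\mathrm{solv}}(\mathrm{AB}) \;-\; \Delta G_{\mathrm{solv}}(\mathrm{A}) \;-\; \Delta G_{\mathrm{solv}}(\mathrm{B}), \]

with the enthalpic and entropic contributions separated analogously through \( \Delta G = \Delta H - T\Delta S \).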
Abstract:
We use a new technique to investigate the systematic behavior of near-barrier complete fusion, total fusion, and total reaction cross sections of weakly bound systems. A dimensionless fusion excitation function is used as a benchmark against which renormalized fusion data are compared, so that dynamic breakup effects can be disentangled from static effects. The same reduction procedure is used to study the effect of direct reaction mechanisms on the total reaction cross section.
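For orientation, a widely used reduction of this kind (likely the one meant here, though the abstract does not spell it out) maps the collision energy and the fusion cross section onto dimensionless variables via Wong's formula,

\[ x = \frac{E_{\mathrm{c.m.}} - V_B}{\hbar\omega}, \qquad F(x) = \frac{2E_{\mathrm{c.m.}}}{\hbar\omega R_B^{2}}\,\sigma_F, \]

so that data for systems obeying Wong's expression collapse onto the universal fusion function \( F_0(x) = \ln\!\left[1 + e^{2\pi x}\right] \), which then serves as the benchmark curve; deviations of the reduced data from \( F_0 \) signal dynamic effects such as breakup.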
Abstract:
High-energy nuclear collisions create an energy density similar to that of the Universe microseconds after the Big Bang(1); in both cases, matter and antimatter are formed with comparable abundance. However, the relatively short-lived expansion in nuclear collisions allows antimatter to decouple quickly from matter and avoid annihilation. Thus, a high-energy accelerator of heavy nuclei provides an efficient means of producing and studying antimatter. The antimatter helium-4 nucleus ($^{4}\overline{\mathrm{He}}$), also known as the anti-alpha ($\bar{\alpha}$), consists of two antiprotons and two antineutrons (baryon number B = -4). It has not been observed previously, although the alpha-particle was identified a century ago by Rutherford and is present in cosmic radiation at the ten per cent level(2). Antimatter nuclei with B < -1 have been observed only as rare products of interactions at particle accelerators, where the rate of antinucleus production in high-energy collisions decreases by a factor of about 1,000 with each additional antinucleon(3-5). Here we report the observation of $^{4}\overline{\mathrm{He}}$, the heaviest antinucleus observed to date. In total, 18 $^{4}\overline{\mathrm{He}}$ counts were detected at the STAR experiment at the Relativistic Heavy Ion Collider (RHIC; ref. 6) in $10^{9}$ recorded gold-on-gold (Au+Au) collisions at centre-of-mass energies of 200 GeV and 62 GeV per nucleon-nucleon pair. The yield is consistent with expectations from thermodynamic(7) and coalescent nucleosynthesis(8) models, providing an indication of the production rate of even heavier antimatter nuclei and a benchmark for possible future observations of $^{4}\overline{\mathrm{He}}$ in cosmic radiation.
Abstract:
Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods were applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interaction network of the yeast Saccharomyces cerevisiae, and the Roget thesaurus network. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabási-Albert model yielding singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, both the number of different types of singular node motifs and the number of their instances were found to be considerably higher in the real-world networks than in any of the benchmark networks.
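A minimal sketch of the general procedure described above: compute several per-node connectivity measurements and flag multivariate outliers. The specific measurements and the Mahalanobis-distance criterion below are our illustrative choices, not necessarily the six measurements or the statistical method used in the paper.

    import networkx as nx
    import numpy as np

    def node_features(G):
        """A few per-node connectivity measurements (illustrative, not the paper's six)."""
        deg = dict(G.degree())
        clust = nx.clustering(G)
        avg_nbr_deg = nx.average_neighbor_degree(G)
        btw = nx.betweenness_centrality(G)
        nodes = list(G.nodes())
        X = np.array([[deg[n], clust[n], avg_nbr_deg[n], btw[n]] for n in nodes])
        return nodes, X

    def singular_nodes(G, threshold=3.0):
        """Flag nodes whose feature vector is a multivariate outlier
        (Mahalanobis distance above `threshold`)."""
        nodes, X = node_features(G)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        inv_cov = np.linalg.pinv(cov)   # pseudo-inverse guards against singular covariance
        d = np.sqrt(np.einsum('ij,jk,ik->i', X - mu, inv_cov, X - mu))
        return [n for n, di in zip(nodes, d) if di > threshold]

    # Example: in a Barabasi-Albert graph the flagged singular nodes tend to be hubs.
    G = nx.barabasi_albert_graph(1000, 2, seed=1)
    print(singular_nodes(G))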
Abstract:
Global optimization seeks a minimum or maximum of a multimodal function over a discrete or continuous domain. In this paper, we propose a hybrid heuristic, based on the CGRASP and GENCAN methods, for finding approximate solutions to continuous global optimization problems subject to box constraints. Experimental results illustrate the relative effectiveness of CGRASP-GENCAN on a set of benchmark multimodal test functions.
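As a rough illustration of the problem class (not of CGRASP-GENCAN itself), the sketch below minimizes a standard multimodal benchmark function under box constraints with a simple random-multistart plus local-search scheme; the test function and the solver are our assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def rastrigin(x):
        """Classic multimodal benchmark: global minimum 0 at the origin."""
        x = np.asarray(x)
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def multistart(f, bounds, n_starts=50, seed=0):
        """Random restarts + local refinement inside the box (a crude stand-in for
        hybrids such as CGRASP-GENCAN that pair global sampling with local search)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(lo, hi)
            res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
            if best is None or res.fun < best.fun:
                best = res
        return best

    bounds = [(-5.12, 5.12)] * 5
    print(multistart(rastrigin, bounds).x)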
Abstract:
The assessment of routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least for those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
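For readers unfamiliar with the model, a journey in an evolving graph is a path whose edges are traversed at non-decreasing time steps, and least-cost routing typically means computing foremost (earliest-arrival) journeys. The sketch below is a simple illustration of that computation over a time-indexed edge list; the data layout and function names are our own, not those of the NS2 implementation discussed in the paper.

    def foremost_journeys(edges_by_time, source):
        """Earliest-arrival (foremost) journey times from `source`.
        edges_by_time: dict mapping time step t -> iterable of undirected edges (u, v)
                       present at t; a node reached by time t can forward at any t' >= t.
        Returns: dict node -> earliest time step at which the node can be reached."""
        arrival = {source: min(edges_by_time) - 1 if edges_by_time else 0}
        for t in sorted(edges_by_time):
            # Relax repeatedly within the time step so multi-hop forwarding at the
            # same step is allowed (drop the inner loop to forbid it).
            changed = True
            while changed:
                changed = False
                for u, v in edges_by_time[t]:
                    for a, b in ((u, v), (v, u)):
                        if a in arrival and arrival[a] <= t and arrival.get(b, float("inf")) > t:
                            arrival[b] = t
                            changed = True
        return arrival

    # Toy evolving graph: edge sets at time steps 1..3.
    g = {1: [("s", "a")], 2: [("a", "b")], 3: [("b", "d"), ("s", "d")]}
    print(foremost_journeys(g, "s"))   # e.g. {'s': 0, 'a': 1, 'b': 2, 'd': 3}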
Abstract:
The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the completion time of the program has been extensively studied as an academic optimization problem that arises when optimizing the execution time of parallel algorithms on parallel computers. In this paper, we propose an application of Ant Colony Optimization (ACO) to the multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines. The problem seeks to compose a schedule that minimizes the total completion time. We therefore rely on heuristics to find solutions, since exact solution methods are not feasible for most problems of this kind. In this novel heuristic approach to multiprocessor scheduling based on the ACO algorithm, a collection of agents cooperates to effectively explore the search space. A computational experiment was conducted on a suite of benchmark applications. Comparing the results obtained by our algorithm with those of previous heuristic algorithms shows that the ACO algorithm exhibits competitive performance with a small error ratio.
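A compact sketch of how an ACO-based list scheduler for a task DAG can be organized: ants build precedence-respecting task orders guided by pheromone values, tasks are assigned greedily to the earliest-available processor, and pheromones are reinforced on orderings that yield short makespans. The representation (pheromone on task-position pairs) and the parameter values are our illustrative assumptions, not necessarily those of the paper.

    import random

    def aco_schedule(n_tasks, duration, preds, n_procs, n_ants=20, n_iters=50,
                     evap=0.1, seed=0):
        """ACO for DAG scheduling on homogeneous processors (illustrative sketch).
        duration[i] : processing time of task i
        preds[i]    : list of predecessors of task i (all must finish before i starts)
        Returns the best makespan found and the corresponding task order."""
        rng = random.Random(seed)
        tau = [[1.0] * n_tasks for _ in range(n_tasks)]   # pheromone: tau[position][task]

        def build_order():
            done, order = set(), []
            for pos in range(n_tasks):
                ready = [t for t in range(n_tasks)
                         if t not in done and all(p in done for p in preds[t])]
                task = rng.choices(ready, weights=[tau[pos][t] for t in ready])[0]
                order.append(task)
                done.add(task)
            return order

        def makespan(order):
            proc_free = [0.0] * n_procs
            finish = {}
            for t in order:
                est = max([finish[p] for p in preds[t]], default=0.0)
                k = min(range(n_procs), key=lambda j: max(proc_free[j], est))
                finish[t] = max(proc_free[k], est) + duration[t]
                proc_free[k] = finish[t]
            return max(finish.values())

        best_order, best_ms = None, float("inf")
        for _ in range(n_iters):
            for _ in range(n_ants):
                order = build_order()
                ms = makespan(order)
                if ms < best_ms:
                    best_order, best_ms = order, ms
            for pos in range(n_tasks):                    # evaporate, then reinforce the best
                for t in range(n_tasks):
                    tau[pos][t] *= (1.0 - evap)
                tau[pos][best_order[pos]] += 1.0 / best_ms
            # (setup times, heuristic visibility and local search are omitted for brevity)
        return best_ms, best_order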
Abstract:
Dynamic system test methods for heating systems were developed and applied by the institutes SERC and SP from Sweden, INES from France and SPF from Switzerland already before the MacSheep project started. These test methods followed the same principle: a complete heating system, including heat generators, storage, control, etc., is installed on the test rig; the test rig software and hardware simulates and emulates the heat load for space heating and domestic hot water of a single-family house, while the unit under test has to act autonomously to cover the heat demand during a representative test cycle. Within work package 2 of the MacSheep project these similar, but different, test methods were harmonized and improved. The work undertaken includes:
• Harmonization of the physical boundaries of the unit under test.
• Harmonization of the boundary conditions of climate and load.
• Definition of an approach to reach identical space heat load in combination with an autonomous control of the space heat distribution by the unit under test.
• Derivation and validation of new six-day and twelve-day test profiles for direct extrapolation of test results.
The new harmonized test method combines the advantages of the different methods that existed before the MacSheep project. The new method is a benchmark test, which means that the load for space heating and domestic hot water preparation is identical for all tested systems, and that the result is representative of the performance of the system over a whole year. Thus, no modelling and simulation of the tested system is needed in order to obtain the benchmark results for a yearly cycle. The method is therefore also applicable to products for which simulation models are not yet available. Some of the advantages of the new whole-system test method and performance rating compared to the testing and energy rating of single components are:
• Interactions between the different components of a heating system, e.g. storage, solar collector circuit, heat pump, control, etc., are included and evaluated in this test.
• Dynamic effects are included and influence the result just as they influence the annual performance in the field.
• Heat losses influence the results in a more realistic way, since they are evaluated under "real installed" and representative part-load conditions rather than under single-component steady-state conditions.
The described method is also suited to the development process of new systems, where it replaces time-consuming and costly field testing, with the advantages of higher accuracy of the measured data (compared to the measurement equipment typically used in field tests) and identical, thus comparable, boundary conditions. The method can therefore be used for system optimization on the test bench under realistic operating conditions, i.e. under a relevant operating environment in the lab. This report describes the physical boundaries of the tested systems, as well as the test procedures and the requirements for both the unit under test and the test facility. The new six-day and twelve-day test profiles are also described, as are the validation results.
Abstract:
In this paper, we propose a new method for solving large-scale p-median problem instances based on real data. We compare different approaches in terms of runtime, memory footprint, and quality of the solutions obtained. In order to test the different methods on real data, we introduce a new benchmark for the p-median problem based on real Swedish data. Because of the size of the problem addressed, up to 1938 candidate nodes, a number of algorithms, both exact and heuristic, are considered. We also propose an improved hybrid version of a genetic algorithm called impGA. Experiments show that impGA behaves as well as other methods on the standard set of medium-size problems taken from Beasley’s benchmark, while producing comparatively good results in terms of quality, runtime and memory footprint on our specific benchmark based on real Swedish data.
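For reference, the p-median problem underlying these benchmarks can be stated in its standard form (our notation): given a set F of candidate facility nodes, a set D of demand nodes with weights w_i, and distances d_ij, choose p facilities minimizing the total weighted distance from each demand node to its nearest open facility,

\[ \min_{S \subseteq F,\; |S| = p} \;\; \sum_{i \in D} w_i \, \min_{j \in S} d_{ij}. \]

Both the exact and heuristic methods compared in the paper, including impGA, are alternative ways of searching for such a set S on instances with up to 1938 candidate nodes.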