218 results for Radial Distribution Functions
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Recently, it has been proposed that there are two types of type Ia supernova progenitor: short-lived and long-lived. On the basis of this idea, we develop a theory of a unified mechanism for the formation of the bimodal radial distribution of iron and oxygen in the Galactic disc. The underlying cause of the formation of the fine structure of the radial abundance pattern is the influence of the spiral arms, specifically the combined effect of the corotation resonance and turbulent diffusion. From our modelling, we conclude that, in order to explain the bimodal radial distributions simultaneously for oxygen and iron and to obtain approximately equal total iron output from the different types of supernovae, the mean ejected iron mass per supernova event should be the same as quoted in the literature if the maximum mass of stars that eject heavy elements is 50 M_sun. For an upper mass limit of 70 M_sun, the production of iron by a type II supernova explosion should increase by about 1.5 times.
Abstract:
We analyze the intrinsic polarization of two classical Be stars in the process of losing their circumstellar disks via a Be to normal B star transition originally reported by Wisniewski et al. During each of five polarimetric outbursts which interrupt these disk-loss events, we find that the ratio of the polarization across the Balmer jump (BJ+/BJ-) versus the V-band polarization traces a distinct loop structure as a function of time. Since the polarization change across the Balmer jump is a tracer of the innermost disk density whereas the V-band polarization is a tracer of the total scattering mass of the disk, we suggest that such correlated loop structures in Balmer jump-V-band polarization diagrams (BJV diagrams) provide a unique diagnostic of the radial distribution of mass within Be disks. We use the three-dimensional Monte Carlo radiation transfer code HDUST to reproduce the observed clockwise loops simply by turning "on/off" the mass decretion from the disk. We speculate that counterclockwise loop structures we observe in BJV diagrams might be caused by the mass decretion rate changing between subsequent "on/off" sequences. Applying this new diagnostic to a larger sample of Be disk systems will provide insight into the time-dependent nature of each system's stellar decretion rate.
Abstract:
Aims. We derive lists of proper motions and kinematic membership probabilities for 49 open clusters and possible open clusters in the zone of the Bordeaux PM2000 proper motion catalogue (+11 degrees <= delta <= +18 degrees). We test different parametrisations of the proper motion and position distribution functions and select the most successful one. In the light of those results, we analyse some objects individually. Methods. We differentiate between cluster and field member stars, and assign membership probabilities, by applying a new and fully automated method based on parametrisations of both the proper motion and position distribution functions, together with genetic algorithm optimisation heuristics combined with a derivative-based hill climbing algorithm for the likelihood optimisation. Results. We present a catalogue comprising kinematic parameters and associated membership probability lists for 49 open clusters and possible open clusters in the Bordeaux PM2000 catalogue region. We note that this is the first determination of proper motions for five open clusters. We confirm the absence of two kinematic populations in the regions of 15 objects previously suspected to be non-existent.
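The core of such kinematic membership assignment can be illustrated as a two-component mixture in proper-motion space: a narrow cluster Gaussian superposed on a broad field Gaussian, with the posterior ratio giving each star's membership probability. The sketch below uses invented parameters and isotropic dispersions; the method described above additionally parametrises positions and optimises the likelihood with a genetic algorithm and hill climbing.

```python
import math

def gauss2d(mu, center, sigma):
    """Isotropic 2-D Gaussian density at proper-motion point mu (mas/yr)."""
    d2 = (mu[0] - center[0])**2 + (mu[1] - center[1])**2
    return math.exp(-d2 / (2.0 * sigma**2)) / (2.0 * math.pi * sigma**2)

def membership_probability(mu, f_c, cluster, field):
    """Posterior probability that a star with proper motion mu belongs to
    the cluster, given mixing fraction f_c and (center, sigma) for each
    of the two mixture components."""
    phi_c = gauss2d(mu, *cluster)
    phi_f = gauss2d(mu, *field)
    return f_c * phi_c / (f_c * phi_c + (1.0 - f_c) * phi_f)

# invented parameters, for illustration only
cluster = ((2.0, -5.0), 0.5)   # concentrated cluster component
field = ((0.0, 0.0), 5.0)      # broad field component
p_near = membership_probability((2.1, -4.9), 0.3, cluster, field)
p_far = membership_probability((10.0, 8.0), 0.3, cluster, field)
```

Stars whose proper motions fall inside the cluster concentration receive probabilities near 1, while kinematic outliers receive probabilities near 0.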
Abstract:
The structural and dynamical properties of liquid trimethylphosphine (TMP), (CH3)3P, as a function of temperature are investigated by molecular dynamics (MD) simulations. The force field used in the MD simulations, which has been proposed from molecular mechanics and quantum chemistry calculations, is able to reproduce the experimental density of liquid TMP at room temperature. The equilibrium structure is investigated by the usual radial distribution function, g(r), and also in reciprocal space by the static structure factor, S(k). On the basis of center-of-mass distances, liquid TMP behaves like a simple liquid of almost spherical particles, but orientational correlation due to dipole-dipole interactions is revealed at short-range distances. Single-particle and collective dynamics are investigated by several time correlation functions. At high temperatures, diffusion and reorientation occur on the same time scale as relaxation of the liquid structure. Decoupling of these dynamic properties starts below ca. 220 K, when the rattling dynamics of a given TMP molecule due to the cage effect of neighbouring molecules becomes important. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3624408]
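As a concrete illustration of the first tool mentioned above, a minimal g(r) estimator for a periodic cubic box can be sketched as follows; the box size, particle count and binning are placeholders, not the TMP simulation settings.

```python
# Sketch: estimating the radial distribution function g(r) from particle
# positions in a cubic box with periodic boundary conditions.
import numpy as np

def radial_distribution(positions, box_length, n_bins=50, r_max=None):
    """Histogram pair distances into g(r), normalised by the ideal-gas
    shell count so that g(r) -> 1 at large r for a homogeneous fluid."""
    n = len(positions)
    if r_max is None:
        r_max = box_length / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        # minimum-image pair separations to particles j > i
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)
        dist = np.linalg.norm(d, axis=1)
        counts += np.histogram(dist, bins=edges)[0]
    rho = n / box_length**3                        # number density
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * rho * shell_vol              # expected pair counts
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / ideal

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 3))        # ideal-gas configuration
r, g = radial_distribution(pos, box_length=10.0)
```

For the uncorrelated configuration above, g(r) fluctuates around 1; a real liquid such as TMP would instead show a first coordination peak and decaying oscillations.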
Abstract:
Power distribution automation and control are important tools in the current restructured electricity markets. Unfortunately, due to their stochastic nature, distribution system faults are hardly avoidable. This paper proposes a novel fault diagnosis scheme for power distribution systems, composed of three different processes: fault detection and classification, fault location, and fault section determination. The fault detection and classification technique is wavelet based. The fault-location technique is impedance based and uses local voltage and current fundamental phasors. The fault section determination method is artificial neural network based and uses the local current and voltage signals to estimate the faulted section. The proposed hybrid scheme was validated through Alternative Transients Program/Electromagnetic Transients Program simulations and was implemented as embedded software. It is currently used as a fault diagnosis tool in a southern Brazilian power distribution company.
Abstract:
In this paper, a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. As for the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are herein analyzed. A number of selection procedures are examined, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies of the initial population are randomly generated so that radial configurations are produced through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
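The spanning-tree idea behind the initial population can be sketched as a Kruskal-style pass over randomly shuffled branches: the branches kept closed form a spanning tree, i.e. a radial (loop-free) operating topology. The node and branch data below are invented for the example; this is not the authors' implementation.

```python
import random

class DisjointSet:
    """Union-find structure used by Kruskal's algorithm."""
    def __init__(self, nodes):
        self.parent = {n: n for n in nodes}
    def find(self, n):
        while self.parent[n] != n:
            self.parent[n] = self.parent[self.parent[n]]  # path halving
            n = self.parent[n]
        return n
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # closing this branch would create a loop
        self.parent[ra] = rb
        return True

def random_radial_configuration(nodes, branches, rng=random):
    """Kruskal over randomly ordered branches: the accepted branches form
    a spanning tree of the (connected) branch graph."""
    shuffled = list(branches)
    rng.shuffle(shuffled)
    ds = DisjointSet(nodes)
    return [(u, v) for u, v in shuffled if ds.union(u, v)]

nodes = list(range(6))
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
tree = random_radial_configuration(nodes, branches)
```

Repeating the call with different shuffles yields a diverse set of radial topologies to seed the EA population.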
Abstract:
The reconstruction of Extensive Air Showers (EAS) observed by particle detectors at the ground is based on the characteristics of observables like the lateral particle density and the arrival times. The lateral densities, inferred for different EAS components from detector data, are usually parameterised by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities like the total number of particles or the density at particular radial distances. Typical expressions for LDFs anticipate azimuthal symmetry of the density around the shower axis. The deviations of the lateral particle density from this assumption, which arise for various reasons, are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which only sample a smaller part of the azimuthal variation. KASCADE-Grande, an extension of the former KASCADE experiment, is a multi-component EAS experiment located at the Karlsruhe Institute of Technology (Campus North), Germany. The lateral distributions of charged particles are deduced from the basic information provided by the Grande scintillators - the energy deposits - first in the observation plane, then in the intrinsic shower plane. In all steps, azimuthal dependences should be taken into account. As the energy deposit in the scintillators depends on the angles of incidence of the particles, azimuthal dependences are already involved in the first step: the conversion from the energy deposits to the charged particle density. This is done by using the Lateral Energy Correction Function (LECF), which evaluates the mean energy deposited by a charged particle taking into account the contribution of other particles (e.g. photons) to the energy deposit.
By using a very fast procedure for the evaluation of the energy deposited by various particles we prepared realistic LECFs depending on the angle of incidence of the shower and on the radial and azimuthal coordinates of the location of the detector. Mapping the lateral density from the observation plane onto the intrinsic shower plane does not remove the azimuthal dependences arising from geometric and attenuation effects, in particular for inclined showers. Realistic procedures for applying correction factors are developed. Specific examples of the bias due to neglecting the azimuthal asymmetries in the conversion from the energy deposit in the Grande detectors to the lateral density of charged particles in the intrinsic shower plane are given. (C) 2011 Elsevier B.V. All rights reserved.
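One widely used azimuthally symmetric LDF of the kind discussed above is the Nishimura-Kamata-Greisen (NKG) function. The sketch below evaluates it with purely illustrative parameters; the shower size, age and Moliere radius chosen are not KASCADE-Grande fit results.

```python
import math

def nkg_density(r, n_e, s, r_m):
    """NKG charged-particle density (per m^2) at core distance r, for
    shower size n_e, shower age s, and Moliere radius r_m (metres)."""
    c = (math.gamma(4.5 - s)
         / (2.0 * math.pi * r_m**2 * math.gamma(s) * math.gamma(4.5 - 2.0 * s)))
    x = r / r_m
    return n_e * c * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

# illustrative values: a 10^6-particle shower of age 1.2
rho_100 = nkg_density(r=100.0, n_e=1.0e6, s=1.2, r_m=89.0)
```

Fitting such a function to the azimuthally corrected densities then yields the shower size and age; the azimuthal corrections described above are what make the symmetric fit meaningful for an array like Grande.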
Abstract:
In recent years, there has been increasing interest in understanding the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium, ICM) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low laws closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We find that the subsonic and supersonic CGL-MHD turbulence shows small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines.
The results indicate that in some cases the instabilities significantly increase the anisotropy of the fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been widely believed.
Abstract:
Measurements of double-helicity asymmetries in inclusive hadron production in polarized p + p collisions are sensitive to helicity-dependent parton distribution functions, in particular to the gluon helicity distribution, Delta g. This study focuses on the extraction of the double-helicity asymmetry in eta production (polarized p + polarized p -> eta + X), the eta cross section, and the eta/pi0 cross-section ratio. The cross section and ratio measurements provide essential input for the extraction of the fragmentation functions that are needed to access the helicity-dependent parton distribution functions.
Abstract:
We have performed ab initio molecular dynamics simulations to generate an atomic structure model of amorphous hafnium oxide (a-HfO2) via a melt-and-quench scheme. This structure is analyzed via bond-angle and partial pair distribution functions. These results give a Hf-O average nearest-neighbor distance of 2.2 angstrom, which should be compared to the bulk values, which range from 1.96 to 2.54 angstrom. We have also investigated the neutral O vacancy and a substitutional Si impurity for various sites, as well as the amorphous phase of Hf(1-x)Si(x)O2 for x = 0.25, 0.375, and 0.5.
Abstract:
We report the first measurement of the parity-violating single-spin asymmetries for midrapidity decay positrons and electrons from W+ and W- boson production in longitudinally polarized proton-proton collisions at sqrt(s) = 500 GeV by the STAR experiment at RHIC. The measured asymmetries, A_L(W+) = -0.27 +/- 0.10 (stat.) +/- 0.02 (syst.) +/- 0.03 (norm.) and A_L(W-) = 0.14 +/- 0.19 (stat.) +/- 0.02 (syst.) +/- 0.01 (norm.), are consistent with theory predictions, which are large and of opposite sign. These predictions are based on polarized quark and antiquark distribution functions constrained by polarized deep-inelastic scattering measurements.
Abstract:
Forward-backward multiplicity correlation strengths have been measured with the STAR detector for Au + Au and p + p collisions at sqrt(s_NN) = 200 GeV. Strong short- and long-range correlations (LRC) are seen in central Au + Au collisions. The magnitude of these correlations decreases with decreasing centrality until only short-range correlations are observed in peripheral Au + Au collisions. Both the dual parton model (DPM) and the color glass condensate (CGC) predict the existence of the long-range correlations. In the DPM, the fluctuation in the number of elementary (parton) inelastic collisions produces the LRC. In the CGC, longitudinal color flux tubes generate the LRC. The data are in qualitative agreement with the predictions of the DPM and indicate the presence of multiple parton interactions.
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time tau(G_N) of a planar graph G_N of N vertices and maximal degree d is bounded below by tau(G_N) >= C_d N (ln N)^2, with C_d = (d/(4 pi)) tan(pi/d), and with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
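The conjectured constant and bound are easy to evaluate numerically; the degrees below correspond to the lattices listed in the abstract, and N = 10000 is an arbitrary example size.

```python
import math

def cover_time_lower_bound(n_vertices, degree):
    """Conjectured lower bound C_d * N * (ln N)^2 on the cover time,
    with C_d = (d/(4 pi)) tan(pi/d)."""
    c_d = (degree / (4.0 * math.pi)) * math.tan(math.pi / degree)
    return c_d * n_vertices * math.log(n_vertices)**2

bounds = {d: cover_time_lower_bound(10_000, d) for d in (3, 4, 5, 6)}
```

For the square lattice (d = 4) the constant reduces to C_4 = tan(pi/4)/pi = 1/pi, i.e. roughly 0.318.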
Abstract:
Aims. Given that in most cases just thermal pressure is taken into account in the hydrostatic equilibrium equation to estimate galaxy cluster mass, the main purpose of this paper is to consider the contribution of all three non-thermal components to total mass measurements. The non-thermal pressure is composed of cosmic-ray, turbulent and magnetic pressures. Methods. To estimate the thermal pressure we used public XMM-Newton archival data of five Abell clusters to derive temperature and density profiles. To describe the magnetic pressure, we assume a radial distribution for the magnetic field, B(r) proportional to rho_g^alpha. To seek generality we assume alpha within the range of 0.5 to 0.9, as indicated by observations and numerical simulations. Turbulent motions and bulk velocities add a turbulent pressure, which is considered using an estimate from numerical simulations. For this component, we assume an isotropic pressure, P_turb = (1/3) rho_g (sigma_r^2 + sigma_t^2). We also consider the contribution of cosmic-ray pressure, P_cr proportional to r^(-0.5). Thus, besides the gas (thermal) pressure, we include these three non-thermal components in the magnetohydrostatic equilibrium equation and compare the total mass estimates with the values obtained without them. Results. A consistent description of the non-thermal components could yield a variation in mass estimates that extends from 10% to ~30%. We verified that in the inner parts of cool-core clusters the cosmic-ray component is comparable to the magnetic pressure, while in non-cool-core clusters the cosmic-ray component is dominant. For cool-core clusters the magnetic pressure is the dominant component, contributing more than 50% of the total mass variation due to non-thermal pressure components. However, for non-cool-core clusters, the major influence comes from the cosmic-ray pressure, which accounts for more than 80% of the total mass variation due to non-thermal pressure effects.
For our sample, the maximum influence of the turbulent component on the total mass variation is almost 20%. Although all of the assumptions agree with previous works, it is important to notice that our results rely on the specific parametrization adopted in this work. We show that this analysis can be regarded as a starting point for a more detailed and refined exploration of the influence of non-thermal pressure in the intra-cluster medium (ICM).
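The kind of comparison described above can be sketched numerically: in (magneto)hydrostatic equilibrium the enclosed mass is M(<r) = -r^2/(G rho_g) dP_tot/dr, so adding non-thermal terms to P_tot changes the estimate. The profiles below are toy power laws (not the XMM-Newton fits of the paper), only the thermal and cosmic-ray terms are included, and the turbulent and magnetic terms would simply be added to p_tot in the same way.

```python
G = 6.674e-11  # gravitational constant, SI units

def enclosed_mass(r, rho_g, p_tot, dr=1.0e19):
    """Finite-difference hydrostatic mass inside radius r (all SI)."""
    dp_dr = (p_tot(r + dr) - p_tot(r - dr)) / (2.0 * dr)
    return -r**2 / (G * rho_g(r)) * dp_dr

# toy profiles, assumptions for this example only
rho0, r0 = 1.0e-23, 3.086e22                    # kg/m^3 at ~1 Mpc
rho_g = lambda r: rho0 * (r / r0)**-1.5         # gas density
p_th  = lambda r: 1.0e-12 * (r / r0)**-2.0      # thermal pressure (Pa)
p_cr  = lambda r: 1.0e-13 * (r / r0)**-0.5      # cosmic-ray term, ~ r^-0.5
p_tot = lambda r: p_th(r) + p_cr(r)

m_th  = enclosed_mass(r0, rho_g, p_th)          # thermal-only estimate
m_all = enclosed_mass(r0, rho_g, p_tot)         # thermal + cosmic rays
```

With these toy slopes the cosmic-ray term steepens the total pressure gradient only slightly, so m_all exceeds m_th by a few per cent; the paper's 10-30% variations come from its fitted profiles and the additional turbulent and magnetic terms.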
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions to be taken, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
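For reference, the two kernel families compared in this study can be written down directly. The radius parameter and feature vectors below are placeholders, and one common parameterisation of the exponential RBF is shown; the exact form used in the study may differ.

```python
import math

def gaussian_rbf(x, y, radius):
    """Gaussian RBF kernel: exp(-||x - y||^2 / (2 r^2))."""
    d2 = sum((a - b)**2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * radius**2))

def exponential_rbf(x, y, radius):
    """Exponential RBF kernel: exp(-||x - y|| / (2 r^2))."""
    d = math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))
    return math.exp(-d / (2.0 * radius**2))

# placeholder feature vectors (not EEG features)
x, y = [0.0, 0.0], [2.0, 0.0]
k_gauss = gaussian_rbf(x, y, radius=1.0)   # exp(-4/2) = exp(-2)
k_exp = exponential_rbf(x, y, radius=1.0)  # exp(-2/2) = exp(-1)
```

The Gaussian kernel decays with the squared distance, the exponential one with the distance itself, which is one reason the two families show different sensitivity to the radius parameter.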