860 results for Large-scale Structure
Abstract:
Large earthquakes, such as the 1960 Chile earthquake and the Sumatra-Andaman earthquake of 26 December 2004 in Indonesia, have excited the Earth's free oscillations. The eigenfrequencies of the Earth's free oscillations are closely related to the Earth's internal structure. Conventional methods, which mainly compute the eigenfrequencies analytically, and analyses of observations cannot easily capture the whole process from earthquake occurrence to the excitation of the Earth's free oscillations. We therefore use numerical methods combined with large-scale parallel computing to study the Earth's free oscillations excited by giant earthquakes. We first review research on the Earth's free oscillations and the basic theory in a spherical coordinate system. We then review the numerical simulation of seismic wave propagation and the basic theory of the spectral element method for simulating global seismic wave propagation. As a first step towards studying the Earth's free oscillations, we use a finite element method to simulate the propagation of elastic waves and the generation of oscillations in the chime bell of Marquis Yi of Zeng, which has an oval cross-section, when different parts of the bell are struck. The bronze chime bells of Marquis Yi of Zeng are precious cultural relics of China. The bells have a two-tone acoustic characteristic, i.e., striking different parts of a bell generates different tones. By analysing the vibration of the bell and its spectrum, we further the understanding of the mechanism behind the two-tone acoustic characteristic of the chime bell of Marquis Yi of Zeng. The preliminary calculations clearly show that two different modes of oscillation can be generated by striking different parts of the bell, and indicate that finite element simulation of the processes of wave propagation and two-tone generation in the chime bell of Marquis Yi of Zeng is feasible. These analyses provide a new quantitative and visual way to explain the mystery of the two-tone acoustic characteristic. The method suggested by this study can be applied to simulate free oscillations excited by great earthquakes in a complex Earth structure. Given the large scale of the Earth's structure, small-scale, low-precision numerical simulation cannot meet the requirements. Exploiting the increasing capacity of high-performance parallel computing and progress on fully numerical solutions for seismic wave fields in realistic three-dimensional spherical models, we combined the spectral element method with high-performance parallel computing to simulate seismic wave propagation in the Earth's interior, without the effects of the Earth's gravitational potential. The numerical simulation shows that the calculated toroidal modes agree well with the theoretical values, although the accuracy of our results is limited and the calculated peaks are slightly distorted by three-dimensional effects. There are large differences between our calculated spheroidal modes and the theoretical values because the numerical model does not include the Earth's gravitation, which makes our values smaller than the theoretical ones; at low angular orders, the Earth's gravitation makes the periods of the spheroidal modes shorter.
At present we cannot include the effects of the Earth's gravitational potential in the numerical model to simulate the spheroidal oscillations, but the results still demonstrate that numerical simulation of the Earth's free oscillations is feasible. We simulate the processes of the Earth's free oscillations for a spherically symmetric Earth model using several specific source mechanisms. The results quantitatively show that the free oscillations excited by different earthquakes differ, and that, for the same earthquake, the oscillations differ from location to location. We also explore how attenuation in the Earth's medium affects the free oscillations and compare with observations. Medium attenuation influences the free oscillations, though its effect on the lower-frequency fundamental oscillations is weak. Finally, taking the 2008 Wenchuan earthquake as an example, we employ the spectral element method together with large-scale parallel computing to investigate the characteristics of the seismic wave propagation it excited. We calculate synthetic seismograms with a one-point source model and a three-point source model. Full 3-D visualization of the numerical results displays the evolution of the seismic wave field with time. The three-point source, which was proposed by recent investigations based on field observation and inversion, captures the spatial and temporal characteristics of the source rupture process better than the one-point source. Preliminary results show that the synthetic signals calculated from the three-point source agree well with the observations. This further indicates that the rupture of the Wenchuan earthquake was a multi-stage process composed of at least three rupture episodes. In conclusion, numerical simulation can not only handle problems involving the Earth's ellipticity and anisotropy, which can be treated by conventional methods, but can ultimately also address topography and lateral heterogeneity. In future work we will try to fully implement self-gravitation in the spectral element method and continue investigating, through numerical simulation, how the Earth's lateral heterogeneity affects its free oscillations. This will make it possible to bring modal spectral data increasingly to bear on furthering our understanding of the Earth's three-dimensional structure.
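As a minimal illustration of how free-oscillation peaks such as those discussed in this abstract can be extracted from a long synthetic seismogram, the sketch below tapers the record, takes an FFT, and reports the strongest low-frequency peaks. It is not the authors' code; the record length, sampling interval, and injected frequencies are illustrative assumptions.

```python
import numpy as np

def free_oscillation_peaks(displacement, dt, n_peaks=5):
    """Return the n_peaks strongest spectral peaks (Hz) of a long seismic record.

    displacement : 1-D array of ground-displacement samples
    dt           : sampling interval in seconds
    """
    # Taper to reduce spectral leakage, then take the one-sided FFT.
    taper = np.hanning(len(displacement))
    spec = np.abs(np.fft.rfft(displacement * taper))
    freqs = np.fft.rfftfreq(len(displacement), d=dt)

    # Keep only the band where free oscillations live (roughly 0.3-10 mHz).
    idx = np.where((freqs > 3e-4) & (freqs < 1e-2))[0]
    # Simple local-maximum test, then sort by amplitude.
    peaks = [i for i in idx[1:-1] if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return freqs[peaks[:n_peaks]]

# Example with a synthetic two-mode signal (frequencies are illustrative only).
dt = 10.0                                    # 10 s sampling
t = np.arange(0, 3 * 86400, dt)              # three days of record
signal = np.sin(2 * np.pi * 3.09e-4 * t) + 0.5 * np.sin(2 * np.pi * 9.4e-4 * t)
print(free_oscillation_peaks(signal, dt))
```

In practice the recovered peak frequencies would be compared against theoretical eigenfrequencies of a reference Earth model, which is the comparison the abstract describes for the toroidal and spheroidal modes.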
Abstract:
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question as to how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on design principles which include representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g. hierarchical structure, degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely-used simulation and visualization applications such as ns and SSF. We call such a tool a universal topology generator. In this paper we discuss the design, implementation and usage of the BRITE universal topology generation tool that we have built. We also describe the BRITE Analysis Engine, BRIANA, which is an independent piece of software designed and built upon BRITE design goals of flexibility and extensibility. The purpose of BRIANA is to act as a repository of analysis routines along with a user-friendly interface that allows its use on different topology formats.
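As a rough illustration of the "inclusiveness" design goal, namely several generation models behind one interface, the sketch below wraps two classic router-level models using networkx. The wrapper function and its parameters are hypothetical and are not BRITE's actual API.

```python
import networkx as nx

def generate_topology(model: str, n_nodes: int, **params) -> nx.Graph:
    """Dispatch to one of several topology-generation models.

    A toy illustration of combining generation models behind a single
    interface (hypothetical wrapper, not BRITE's API).
    """
    if model == "waxman":
        # Waxman: edge probability decays with Euclidean distance.
        return nx.waxman_graph(n_nodes, beta=params.get("beta", 0.4),
                               alpha=params.get("alpha", 0.1))
    if model == "barabasi_albert":
        # Preferential attachment: heavy-tailed degree distribution.
        return nx.barabasi_albert_graph(n_nodes, params.get("m", 2))
    raise ValueError(f"unknown model: {model}")

g = generate_topology("barabasi_albert", 1000, m=2)
print(g.number_of_nodes(), g.number_of_edges())
```

A universal generator in the sense described above would additionally export such graphs in formats that simulators like ns or SSF can consume.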
Abstract:
The visible matter in the universe is turbulent and magnetized. Turbulence in galaxy clusters is produced by mergers and by jets of the central galaxies and is believed to be responsible for the amplification of magnetic fields. We report on experiments looking at the collision of two laser-produced plasma clouds, mimicking, in the laboratory, a cluster merger event. By measuring the spectrum of the density fluctuations, we infer developed, Kolmogorov-like turbulence. From spectral line broadening, we estimate a level of turbulence consistent with turbulent heating balancing radiative cooling, as it likely does in galaxy clusters. We show that the magnetic field is amplified by turbulent motions, reaching a nonlinear regime that is a precursor to turbulent dynamo. Thus, our experiment provides a promising platform for understanding the structure of turbulence and the amplification of magnetic fields in the universe.
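For reference, "Kolmogorov-like" refers to the standard inertial-range scaling of the fluctuation power spectrum (a textbook relation, not a result quoted from the paper):

```latex
E(k) \propto k^{-5/3}
\qquad \Longleftrightarrow \qquad
\delta v_\ell \propto \ell^{1/3},
```

so a measured density-fluctuation spectrum with a slope close to -5/3 over the inertial range is the signature of developed turbulence referred to above.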
Abstract:
As the ESA Rosetta mission approached, orbited, and sent a lander to comet 67P/Churyumov-Gerasimenko in 2014, a large campaign of ground-based observations also followed the comet. We constrain the total activity level of the comet by photometry and spectroscopy to place Rosetta results in context and to understand the large-scale structure of the comet's coma pre-perihelion. We performed observations using a number of telescopes, but concentrate on results from the 8m VLT and Gemini South telescopes in Chile. We use R-band imaging to measure the dust coma contribution to the comet's brightness and UV-visible spectroscopy to search for gas emissions, primarily using VLT/FORS. In addition we imaged the comet in near-infrared wavelengths (JHK) in late 2014 with Gemini-S/Flamingos 2. We find that the comet was already active in early 2014 at heliocentric distances beyond 4 au. The evolution of the total activity (measured by dust) followed previous predictions. No gas emissions were detected despite sensitive searches. The comet maintains a similar level of activity from orbit to orbit, and is in that sense predictable, meaning that Rosetta results correspond to typical behaviour for this comet. The gas production (for CN at least) is highly asymmetric with respect to perihelion, as our upper limits are below the measured production rates for similar distances post-perihelion in previous orbits.
Abstract:
A survey of the non-radial flows (NRFs) during nearly five years of interplanetary observations revealed the average non-radial speed of the solar wind flows to be ~30 km/s, with approximately one-half of the large (>100 km/s) NRFs associated with ICMEs. Conversely, the average non-radial flow speed upstream of all ICMEs is ~100 km/s, with just over one-third preceded by large NRFs. These upstream flow deflections are analysed in the context of the large-scale structure of the driving ICME. We chose 5 magnetic clouds with relatively uncomplicated upstream flow deflections. Using variance analysis it was possible to infer the local axis orientation, and to qualitatively estimate the point of interception of the spacecraft with the ICME. For all 5 events the observed upstream flows were in agreement with the point of interception predicted by variance analysis. Thus we conclude that the upstream flow deflections in these events are in accord with the current concept of the large-scale structure of an ICME: a curved axial loop connected to the Sun, bounded by a curved (though not necessarily circular) cross-section.
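The local axis orientation referred to above is commonly obtained from minimum/maximum variance analysis of the magnetic field time series. The sketch below is a minimal version of that eigen-decomposition with synthetic data; it is not the authors' pipeline, and the synthetic rotating field is purely illustrative.

```python
import numpy as np

def variance_axes(B):
    """Eigen-decomposition of the magnetic variance (covariance) matrix.

    B : array of shape (N, 3), field samples in spacecraft coordinates.
    Returns eigenvalues (ascending) and unit eigenvectors (columns of v);
    for a magnetic cloud the intermediate-variance direction is usually
    taken as the local flux-rope axis estimate.
    """
    dB = B - B.mean(axis=0)
    M = dB.T @ dB / len(B)          # 3x3 variance matrix
    w, v = np.linalg.eigh(M)        # eigh returns ascending eigenvalues
    return w, v

# Synthetic example: a smoothly rotating field, as seen inside a cloud.
t = np.linspace(0.0, np.pi, 200)
B = np.c_[np.cos(t), np.sin(t), 0.3 * np.ones_like(t)]  # illustrative only
w, v = variance_axes(B)
print("axis estimate (intermediate-variance eigenvector):", v[:, 1])
```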
Abstract:
The heliospheric magnetic field (HMF) is the extension of the coronal magnetic field carried out into the solar system by the solar wind. It is the means by which the Sun interacts with planetary magnetospheres and channels charged particles propagating through the heliosphere. As the HMF remains rooted at the solar photosphere as the Sun rotates, the large-scale HMF traces out an Archimedean spiral. This pattern is distorted by the interaction of fast and slow solar wind streams, as well as the interplanetary manifestations of transient solar eruptions called coronal mass ejections. On the smaller scale, the HMF exhibits an array of waves, discontinuities, and turbulence, which give hints to the solar wind formation process. This review aims to summarise observations and theory of the small- and large-scale structure of the HMF. Solar-cycle and cycle-to-cycle evolution of the HMF is discussed in terms of recent spacecraft observations and pre-space-age proxies for the HMF in geomagnetic and galactic cosmic ray records.
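The Archimedean spiral arises because the field remains rooted in the rotating photosphere while the solar wind flows out radially; in its standard form the spiral angle ψ between the HMF and the radial direction is

```latex
\tan\psi \;=\; \frac{\Omega_\odot\,(r - r_0)\,\sin\theta}{V_{\mathrm{sw}}},
```

where Ω_⊙ is the solar rotation rate, r_0 the source-surface radius, θ the colatitude, and V_sw the solar wind speed; for a typical ~400 km/s wind this gives ψ ≈ 45° at 1 au.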
Abstract:
We present the first comprehensive intercomparison of currently available satellite ozone climatologies in the upper troposphere/lower stratosphere (UTLS) (300–70 hPa) as part of the Stratosphere-troposphere Processes and their Role in Climate (SPARC) Data Initiative. The Tropospheric Emission Spectrometer (TES) instrument is the only nadir-viewing instrument in this initiative, as well as the only instrument with a focus on tropospheric composition. We apply the TES observational operator to ozone climatologies from the more highly vertically resolved limb-viewing instruments. This minimizes the impact of differences in vertical resolution among the instruments and allows identification of systematic differences in the large-scale structure and variability of UTLS ozone. We find that the climatologies from most of the limb-viewing instruments show positive differences (ranging from 5 to 75%) with respect to TES in the tropical UTLS, and comparison to a “zonal mean” ozonesonde climatology indicates that these differences likely represent a positive bias for p ≤ 100 hPa. In the extratropics, there is good agreement among the climatologies regarding the timing and magnitude of the ozone seasonal cycle (differences in the peak-to-peak amplitude of <15%) when the TES observational operator is applied, as well as very consistent midlatitude interannual variability. The discrepancies in ozone temporal variability are larger in the tropics, with differences between the data sets of up to 55% in the seasonal cycle amplitude. However, the differences among the climatologies are everywhere much smaller than the range produced by current chemistry-climate models, indicating that the multiple-instrument ensemble is useful for quantitatively evaluating these models.
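The observational operator referred to above is, in the usual retrieval formalism (commonly applied in log volume mixing ratio for TES; the exact conventions may differ from the paper's),

```latex
\hat{x} \;=\; x_a + \mathbf{A}\,(x - x_a),
```

where x is the (interpolated) high-resolution limb-sounder profile, x_a the TES a priori, and A the TES averaging-kernel matrix; applying the same smoothing to every climatology removes most of the vertical-resolution mismatch before the data sets are differenced.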
Abstract:
A new inflationary scenario whose exponential potential V(Φ) has a quadratic dependence on the field Φ in addition to the standard linear term is confronted with the five-year observations of the Wilkinson Microwave Anisotropy Probe and the Sloan Digital Sky Survey data. The number of e-folds (N), the ratio of tensor-to-scalar perturbations (r), the spectral scalar index of the primordial power spectrum (n_s) and its running (dn_s/d ln k) depend on the dimensionless parameter α multiplying the quadratic term in the potential. In the limit α → 0 all the results of the exponential potential are fully recovered. For values of α ≠ 0, we find that the model predictions are in good agreement with the current observations of the Cosmic Microwave Background (CMB) anisotropies and Large-Scale Structure (LSS) in the Universe. Copyright (C) EPLA, 2008.
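For orientation, the quoted observables are related to the potential through the standard leading-order slow-roll expressions (standard relations, not formulas quoted from the paper):

```latex
\epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\!\left(\frac{V'}{V}\right)^{2},\quad
\eta = M_{\mathrm{Pl}}^2\,\frac{V''}{V},\quad
\xi^2 = M_{\mathrm{Pl}}^4\,\frac{V'\,V'''}{V^2};\qquad
n_s \simeq 1 - 6\epsilon + 2\eta,\quad
r \simeq 16\epsilon,\quad
\frac{dn_s}{d\ln k} \simeq 16\epsilon\eta - 24\epsilon^2 - 2\xi^2,
```

so the parameter α enters n_s, r and the running through the derivatives of the modified exponential potential.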
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = -21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we have chosen two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to ~120 h^-1 Mpc (D2). We have performed a morphological analysis, using Minkowski Functionals, based on a parameter which increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both observed and mock catalogues. We have also used the mock samples to compare supercluster morphologies identified in position and velocity spaces, concluding that our morphological classification is not biased by the peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.
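One widely used Minkowski-functional-based shape parameter of this kind is built from the Sahni–Sathyaprakash–Shandarin shapefinders (shown here only as an illustration; the paper's exact definition may differ):

```latex
T = \frac{3V}{S},\quad B = \frac{S}{C},\quad L = \frac{C}{4\pi};\qquad
P = \frac{B-T}{B+T},\quad F = \frac{L-B}{L+B},
```

where V, S and C are the enclosed volume, surface area and integrated mean curvature of the supercluster surface; ratios such as P/F grow as the shape runs from filament-like to pancake-like, matching the monotonic behaviour described above.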
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates inside models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we encounter no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
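The way the LOS external convergence enters can be seen from the standard projected-mass relation inside the Einstein radius (a textbook relation assuming an approximately uniform external convergence, not one quoted from the paper):

```latex
M_E \;=\; \left(1-\kappa_{\mathrm{ext}}\right)\,\Sigma_{\mathrm{cr}}\,\pi R_E^2,
\qquad
\Sigma_{\mathrm{cr}} \;=\; \frac{c^2}{4\pi G}\,\frac{D_s}{D_d\,D_{ds}},
```

so neglecting κ_ext biases the lensing mass, and hence the jointly inferred density-profile slope, at the level of the external convergence itself.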
Abstract:
We recently predicted the existence of random primordial magnetic fields (RPMFs) in the form of randomly oriented cells with dipole-like structure, with a cell size L_0 and an average magnetic field B_0. Here, we investigate models for the primordial magnetic field with a similar web-like structure, and other geometries, differing perhaps in L_0 and B_0. The effect of RPMFs on the formation of the first galaxies is investigated. The filtering mass, M_F, is the halo mass below which baryon accretion is severely depressed. We show that these RPMFs could influence the formation of galaxies by altering the filtering mass and the baryon gas fraction of a halo, f_g. The effect is particularly strong in small galaxies. We find, for example, that for a comoving B_0 = 0.1 μG, a reionization epoch that starts at z_s = 11 and ends at z_e = 8, and L_0 = 100 pc, f_g at z = 12 becomes severely depressed for M < 10^7 M_⊙, whereas for B_0 = 0 it becomes severely depressed only for much smaller masses, M < 10^5 M_⊙. We suggest that observations of M_F and f_g at high redshifts can give information on the intensity and structure of primordial magnetic fields.
Abstract:
We investigate the impact of the existence of a primordial magnetic field on the filter mass, which characterizes the minimum baryonic mass that can form in dark matter (DM) haloes. For masses below the filter mass, the baryon content of DM haloes is severely depressed. The filter mass is the mass at which the baryon-to-DM mass ratio in a halo equals half the baryon-to-DM ratio of the Universe. The filter mass has previously been used in semi-analytic calculations of galaxy formation without taking into account the possible existence of a primordial magnetic field. We examine here its effect on the filter mass. For homogeneous comoving primordial magnetic fields of B_0 ~ 1 or 2 nG and a reionization epoch that starts at a redshift z_s = 11 and is completed at z_r = 8, the filter mass at redshift 8 is increased, for example, by factors of 4.1 and 19.8, respectively. The dependence of the filter mass on the parameters describing the reionization epoch is investigated. Our results are particularly important for the formation of low-mass galaxies in the presence of a homogeneous primordial magnetic field. For example, for B_0 ~ 1 nG and a reionization epoch of z_s ~ 11 and z_r ~ 7, our results indicate that galaxies of total mass M ~ 5 x 10^8 M_⊙ need to form at redshifts z_F ≳ 2.0, and galaxies of total mass M ~ 10^8 M_⊙ at redshifts z_F ≳ 7.7.
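The filter-mass definition above is often used together with a fitting form for the halo gas fraction (for example the Gnedin 2000 parametrization; the paper may adopt a variant):

```latex
f_g(M) \;\simeq\; f_b \left[\,1 + \left(2^{1/3}-1\right)\frac{M_F}{M}\,\right]^{-3},
\qquad f_b = \frac{\Omega_b}{\Omega_m},
```

which satisfies f_g(M_F) = f_b/2 by construction; raising M_F, here through the primordial magnetic field and the reionization history, therefore suppresses the gas content most strongly in the lowest-mass haloes.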
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
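A minimal numerical illustration of the colour dependence described above, using toy power-law SEDs and a purely diffraction-limited size law σ(λ) ∝ λ/D (the band, throughput, telescope diameter and SEDs are assumptions, not the paper's PSF model):

```python
import numpy as np

def effective_psf_size(wavelengths_nm, sed, throughput, diameter_m=1.2):
    """Photon-weighted PSF size over a broad band, with sigma(lambda) ~ lambda/D."""
    sigma = wavelengths_nm * 1e-9 / diameter_m      # proportional to lambda/D (radians)
    weights = sed * throughput                       # photon weighting across the band
    return np.sum(weights * sigma) / np.sum(weights)

# Toy broad band (550-900 nm) with flat throughput.
lam = np.linspace(550.0, 900.0, 200)
throughput = np.ones_like(lam)

blue_star = lam ** -2.0      # toy blue (stellar) spectrum, falling with wavelength
red_galaxy = lam ** 2.0      # toy red (galaxy) spectrum, rising with wavelength

s_star = effective_psf_size(lam, blue_star, throughput)
s_gal = effective_psf_size(lam, red_galaxy, throughput)
print(f"relative PSF size difference: {(s_gal / s_star - 1) * 100:.1f}%")
```

The per-cent-level size difference between the star-derived and galaxy-appropriate PSFs is the kind of residual that, left uncorrected, propagates into the cosmic-shear biases quantified in the paper.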
Abstract:
A new accelerating cosmology driven only by baryons plus cold dark matter (CDM) is proposed in the framework of general relativity. In this scenario the present accelerating stage of the Universe is powered by the negative pressure describing the gravitationally induced particle production of cold dark matter particles. This kind of scenario has only one free parameter, and the differential equation governing the evolution of the scale factor is exactly the same as in the ΛCDM model. For a spatially flat Universe, as predicted by inflation (Ω_dm + Ω_baryon = 1), it is found that the effectively observed matter density parameter is Ω_m,eff = 1 - α, where α is the constant parameter specifying the CDM particle creation rate. The supernovae test based on the Union data (2008) requires α ≈ 0.71, so that Ω_m,eff ≈ 0.29, as independently derived from weak gravitational lensing, the large-scale structure and other complementary observations.
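Reading the abstract's statements together (the scale-factor equation is that of ΛCDM, with Ω_m,eff = 1 - α), the implied expansion history has the form (a reading consistent with the abstract, not a formula quoted from the paper):

```latex
H^2(a) \;=\; H_0^2\left[(1-\alpha)\,a^{-3} + \alpha\right],
\qquad
\Omega_{m,\mathrm{eff}} = 1-\alpha \;\approx\; 1 - 0.71 = 0.29,
```

so α plays the role that Ω_Λ plays in flat ΛCDM, which is why the Union supernova fit returns α ≈ 0.71.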
Abstract:
Based on perturbation theory, we study how dark matter and dark energy in a collapsing system approach dynamical equilibrium when they interact. We find that the interaction between the dark sectors cannot ensure that dark energy fully clusters along with dark matter. When dark energy does not trace dark matter, we present a new treatment for studying structure formation in the spherical collapsing system. Furthermore, we examine the dependence of cluster number counts on the interaction between the dark sectors and analyze how dark energy inhomogeneities affect cluster abundances. It is shown that cluster number counts can provide a specific signature of the dark-sector interaction and of dark energy inhomogeneities.