987 results for cosmology, numerical simulations, dark matter, dark energy, initial conditions
Abstract:
We explore the statistical properties of grain boundaries in the vortex polycrystalline phase of type-II superconductors. Treating grain boundaries as arrays of dislocations interacting through linear elasticity, we show that self-interaction of a deformed grain boundary is equivalent to a nonlocal long-range surface tension. This affects the pinning properties of grain boundaries, which are found to be less rough than isolated dislocations. The presence of grain boundaries has an important effect on the transport properties of type-II superconductors as we show by numerical simulations: our results indicate that the critical current is higher for a vortex polycrystal than for a regular vortex lattice. Finally, we discuss the possible role of grain boundaries in vortex lattice melting. Through a phenomenological theory we show that melting can be preceded by an intermediate polycrystalline phase.
Abstract:
In this work, we present an atomistic-continuum model for simulations of ultrafast laser-induced melting processes in semiconductors, using silicon as an example. The kinetics of transient non-equilibrium phase transition mechanisms is addressed with the MD method on the atomic level, whereas the laser light absorption, the strong generated electron-phonon nonequilibrium, fast heat conduction, and photo-excited free carrier diffusion are accounted for with a continuum TTM-like model (called nTTM). First, we independently consider the applications of nTTM and MD for the description of silicon, and then construct the combined MD-nTTM model. Its development and thorough testing are followed by a comprehensive computational study of fast nonequilibrium processes induced in silicon by ultrashort laser irradiation. The new model allowed us to investigate the effect of laser-induced pressure and lattice temperature on the melting kinetics. Two competing melting mechanisms, heterogeneous and homogeneous, were identified in our large-scale simulations. Apart from the classical heterogeneous melting mechanism, the homogeneous nucleation of the liquid phase inside the material significantly contributes to the melting process. The simulations showed that, due to the open diamond structure of the crystal, the laser-generated internal compressive stresses reduce the crystal's stability against homogeneous melting. Consequently, the latter can become massive within several picoseconds of laser heating. Due to the large negative volume of melting of silicon, the material contracts upon the phase transition, relaxing the compressive stresses, and the subsequent melting proceeds heterogeneously until the excess thermal energy is consumed.
A series of simulations over a range of absorbed fluences allowed us to find the threshold fluence at which homogeneous liquid nucleation starts contributing to the classical heterogeneous propagation of the solid-liquid interface. A series of simulations over a range of material thicknesses showed that the sample width chosen in our simulations (800 nm) corresponds to a thick sample. Additionally, in order to support the main conclusions, the results were verified with a different interatomic potential. Possible improvements of the model to account for nonthermal effects are discussed, and certain restrictions on suitable interatomic potentials are identified. As a first step towards the inclusion of these effects into MD-nTTM, we performed nanometer-scale MD simulations with a new interatomic potential designed to reproduce ab initio calculations at a laser-induced electronic temperature of 18946 K. The simulations demonstrated that, similarly to thermal melting, the nonthermal phase transition occurs through nucleation. A series of simulations showed that higher (lower) initial pressure reinforces (hinders) the creation and growth of nonthermal liquid nuclei. Using Si as an example, the laser melting kinetics of semiconductors was found to be noticeably different from that of metals with a face-centered cubic crystal structure. The results of this study therefore have important implications for the interpretation of experimental data on the melting kinetics of semiconductors.
Abstract:
Strong vertical gradients at the top of the atmospheric boundary layer affect the propagation of electromagnetic waves and can produce radar ducts. A three-dimensional, time-dependent, nonhydrostatic numerical model was used to simulate the propagation environment in the atmosphere over the Persian Gulf when aircraft observations of ducting had been made. A division of the observations into high- and low-wind cases was used as a framework for the simulations. Three sets of simulations were conducted with initial conditions of varying degrees of idealization and were compared with the observations taken in the Ship Antisubmarine Warfare Readiness/Effectiveness Measuring (SHAREM-115) program. The best results occurred with the initialization based on a sounding taken over the coast modified by the inclusion of data on low-level atmospheric conditions over the Gulf waters. The development of moist, cool, stable marine internal boundary layers (MIBL) in air flowing from land over the waters of the Gulf was simulated. The MIBLs were capped by temperature inversions and associated lapses of humidity and refractivity. The low-wind MIBL was shallower and the gradients at its top were sharper than in the high-wind case, in agreement with the observations. Because it is also forced by land–sea contrasts, a sea-breeze circulation frequently occurs in association with the MIBL. The size, location, and internal structure of the sea-breeze circulation were realistically simulated. The gradients of temperature and humidity that bound the MIBL cause perturbations in the refractivity distribution that, in turn, lead to trapping layers and ducts. The existence, location, and surface character of the ducts were well captured. Horizontal variations in duct characteristics due to the sea-breeze circulation were also evident. The simulations successfully distinguished between high- and low-wind occasions, a notable feature of the SHAREM-115 observations. 
The modeled magnitudes of duct depth and strength, although leaving scope for improvement, were most encouraging.
Abstract:
Cloud-resolving numerical simulations of airflow over a diurnally heated mountain ridge are conducted to explore the mechanisms and sensitivities of convective initiation under high pressure conditions. The simulations are based on a well-observed convection event from the Convective and Orographically Induced Precipitation Study (COPS) during summer 2007, where an isolated afternoon thunderstorm developed over the Black Forest mountains of central Europe, but they are idealized to facilitate understanding and reduce computational expense. In the conditionally unstable but strongly inhibited flow under consideration, sharp horizontal convergence over the mountain acts to locally weaken the inhibition and moisten the dry midtroposphere through shallow cumulus detrainment. The onset of deep convection occurs not through the deep ascent of a single updraft but rather through a rapid succession of thermals that are vented through the mountain convergence zone into the deepening cloud mass. Emerging thermals rise through the saturated wakes of their predecessors, which diminishes the suppressive effects of entrainment and allows for rapid glaciation above the freezing level as supercooled cloud drops rime onto preexisting ice particles. These effects strongly enhance the midlevel cloud buoyancy and enable rapid ascent to the tropopause. The existence and vigor of the convection is highly sensitive to small changes in background wind speed U0, which controls the strength of the mountain convergence and the ability of midlevel moisture to accumulate above the mountain. Whereas vigorous deep convection develops for U0 = 0 m s−1, deep convection is completely eliminated for U0 = 3 m s−1. Although deep convection is able to develop under intermediate winds (U0 = 1.5 m s−1), its formation is highly sensitive to small-amplitude perturbations in the initial flow.
Abstract:
A cloud-resolving model is modified to implement the weak temperature gradient approximation in order to simulate the interactions between tropical convection and the large-scale tropical circulation. The instantaneous domain-mean potential temperature is relaxed toward a reference profile obtained from a radiative–convective equilibrium simulation of the cloud-resolving model. For homogeneous surface conditions, the model state at equilibrium is a large-scale circulation with its descending branch in the simulated column. This is similar to the equilibrium state found in some other studies, but not all. For this model, the development of such a circulation is insensitive to the relaxation profile and the initial conditions. Two columns of the cloud-resolving model are fully coupled by relaxing the instantaneous domain-mean potential temperature in both columns toward each other. This configuration is energetically closed in contrast to the reference-column configuration. No mean large-scale circulation develops over homogeneous surface conditions, regardless of the relative area of the two columns. The sensitivity to nonuniform surface conditions is similar to that obtained in the reference-column configuration if the two simulated columns have very different areas, but it is markedly weaker for columns of comparable area. The weaker sensitivity can be understood as being a consequence of a formulation for which the energy budget is closed. The reference-column configuration has been used to study the convection in a local region under the influence of a large-scale circulation. The extension to a two-column configuration is proposed as a methodology for studying the influence on local convection of changes in remote convection.
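A minimal schematic of the relaxation described above (the relaxation time scale τ is an assumed parameter; the abstract does not quote one):

```latex
\frac{\partial \overline{\theta}}{\partial t}\bigg|_{\mathrm{relax}}
  = -\,\frac{\overline{\theta}(z,t) - \theta_{\mathrm{ref}}(z)}{\tau},
```

with the two-column configuration obtained, schematically, by replacing θ_ref in each column with the instantaneous domain-mean potential temperature of the other column, which is what closes the energy budget.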
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ,θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is generalized vorticity, and ψ̂(k) = k^(−α)θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy turnover time and strain rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
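To make the governing equation concrete, here is a minimal pseudo-spectral sketch of the α-turbulence model on a doubly periodic box. The grid size, viscosity, time step, and the omission of forcing and dealiasing are illustrative choices for demonstration, not the study's numerical setup.

```python
import numpy as np

# d(theta)/dt + J(psi, theta) = nu * Laplacian(theta), with
# theta = (-Laplacian)^(alpha/2) psi, i.e. psi_hat = k^(-alpha) * theta_hat.
# Illustrative parameters; forcing f and dealiasing are omitted for brevity.
N, alpha, nu, dt = 64, 2.0, 1e-3, 1e-3

k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers on a 2*pi box
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
Kmag = np.where(K2 > 0, np.sqrt(K2), np.inf)  # inf at k=0 kills the mean mode
k_minus_alpha = Kmag**(-alpha)                # psi_hat = k^(-alpha) theta_hat

def rhs(theta_hat):
    """Tendency -J(psi, theta) + nu*Laplacian(theta), pseudo-spectrally."""
    psi_hat = k_minus_alpha * theta_hat
    u = np.fft.ifft2(-1j * KY * psi_hat).real   # u = -d(psi)/dy
    v = np.fft.ifft2(1j * KX * psi_hat).real    # v =  d(psi)/dx
    tx = np.fft.ifft2(1j * KX * theta_hat).real
    ty = np.fft.ifft2(1j * KY * theta_hat).real
    jac_hat = np.fft.fft2(u * tx + v * ty)      # J(psi, theta) = u.grad(theta)
    return -jac_hat - nu * K2 * theta_hat

rng = np.random.default_rng(0)
theta_hat = np.fft.fft2(rng.standard_normal((N, N)))
for _ in range(10):                             # a few forward-Euler steps
    theta_hat = theta_hat + dt * rhs(theta_hat)
```

Setting alpha to 1, 2 or 3 selects the surface quasigeostrophic, regular two-dimensional, or rotating shallow flow cases mentioned in the abstract.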
Abstract:
Numerical simulations are presented of the ion distribution functions seen by middle-altitude spacecraft in the low-latitude boundary layer (LLBL) and cusp regions when reconnection is, or has recently been, taking place at the equatorial magnetopause. From the evolution of the distribution function with time elapsed since the field line was opened, both the observed energy/observation-time and pitch-angle/energy dispersions are well reproduced. Distribution functions showing a mixture of magnetosheath and magnetospheric ions, often thought to be a signature of the LLBL, are found on newly opened field lines as a natural consequence of the magnetopause effects on the ions and their flight times. In addition, it is shown that the extent of the source region of the magnetosheath ions that are detected by a satellite is a function of the sensitivity of the ion instrument. If the instrument one-count level is high (and/or solar-wind densities are low), the cusp ion precipitation detected comes from a localised region of the mid-latitude magnetopause (around the magnetic cusp), even though the reconnection takes place at the equatorial magnetopause. However, if the instrument sensitivity is high enough, then ions injected from a large segment of the dayside magnetosphere (in the relevant hemisphere) will be detected in the cusp. Ion precipitation classed as LLBL is shown to arise from the low-latitude magnetopause, irrespective of the instrument sensitivity. Adoption of threshold flux definitions has the same effect as instrument sensitivity in artificially restricting the apparent source region.
Abstract:
Multiple alternating zonal jets are a ubiquitous feature of planetary atmospheres and oceans. However, most studies to date have focused on the special case of barotropic jets. Here, the dynamics of freely evolving baroclinic jets are investigated using a two-layer quasigeostrophic annulus model with sloping topography. In a suite of 15 numerical simulations, the baroclinic Rossby radius and baroclinic Rhines scale are sampled by varying the stratification and root-mean-square eddy velocity, respectively. Small-scale eddies in the initial state evolve through geostrophic turbulence and accelerate zonally as they grow in horizontal scale, first isotropically and then anisotropically. This process leads ultimately to the formation of jets, which take about 2500 rotation periods to equilibrate. The kinetic energy spectrum of the equilibrated baroclinic zonal flow steepens from a −3 power law at small scales to a −5 power law near the jet scale. The conditions most favorable for producing multiple alternating baroclinic jets are large baroclinic Rossby radius (i.e., strong stratification) and small baroclinic Rhines scale (i.e., weak root-mean-square eddy velocity). The baroclinic jet width is diagnosed objectively and found to be 2.2–2.8 times larger than the baroclinic Rhines scale, with a best estimate of 2.5 times larger. This finding suggests that Rossby wave motions must be moving at speeds of approximately 6 times the turbulent eddy velocity in order to be capable of arresting the isotropic inverse energy cascade.
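The closing scaling argument can be checked on the back of an envelope. The sketch below uses one common convention for the Rhines scale, L_R = sqrt(U/β) (prefactors of 2 and π vary between authors), under which L_R is the scale where the Rossby-wave speed β L² matches the eddy speed; jets equilibrating at r times L_R then imply wave speeds r² times the eddy speed. The values of β and U_rms are assumed illustrative numbers, not parameters from the study.

```python
import numpy as np

beta = 2.0e-11                      # background PV gradient, 1/(m s) -- assumed
U_rms = 0.05                        # rms eddy velocity, m/s -- assumed
L_R = np.sqrt(U_rms / beta)         # Rhines scale (one convention), m
r = 2.5                             # diagnosed jet width / Rhines scale ratio
wave_over_eddy = r**2               # wave speed at the jet scale / eddy speed
print(f"L_R = {L_R / 1e3:.0f} km, wave/eddy speed ratio = {wave_over_eddy}")
```

The resulting ratio of 6.25 is the origin of the "approximately 6 times the turbulent eddy velocity" figure quoted above.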
Abstract:
Westerly wind bursts (WWBs) that occur in the western tropical Pacific are believed to play an important role in the development of El Niño events. Here, following the study of Lengaigne et al. (Clim Dyn 23(6):601–620, 2004), we conduct numerical simulations in which we reexamine the response of the climate system to an observed wind burst added to a coupled general circulation model. Two sets of twin ensemble experiments are conducted (each set has control and perturbed experiments). In the first set, the initial ocean heat content of the system is higher than the model climatology (recharged), while in the second set it is nearly normal (neutral). For the recharged state, in the absence of WWBs, a moderate El Niño with a maximum warming in the central Pacific (CP) develops in about a year. In contrast, for the neutral state, a weak La Niña develops. However, when the WWB is imposed, the situation changes dramatically: the recharged state slides into an El Niño with a maximum warming in the eastern Pacific, while the neutral set produces a weak CP El Niño instead of the previous La Niña conditions. The different response of the system to identical perturbations is controlled by the initial state of the ocean and the subsequent ocean–atmosphere interactions involving the interplay between the eastward shift of the warm pool and the warming of the eastern equatorial Pacific. Consequently, the observed diversity of El Niño, including the occurrence of extreme events, may depend on stochastic atmospheric processes, modulating El Niño properties within a broad continuum.
Abstract:
Idealized explicit convection simulations of the Met Office Unified Model exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen in other models in previous studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapor field. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy (MSE), following Wing and Emanuel [2014], reveals that the direct radiative feedback (including significant cloud longwave effects) is dominant in both the initial development of self-aggregation and the maintenance of an aggregated state. A low-level circulation at intermediate stages of aggregation does appear to transport MSE from drier to moister regions, but this circulation is mostly balanced by other advective effects of opposite sign and is forced by horizontal anomalies of convective heating (not radiation). Sensitivity studies with either fixed prescribed radiative cooling, fixed prescribed surface fluxes, or both do not show full self-aggregation from homogeneous initial conditions, though fixed surface fluxes do not disaggregate an initialized aggregated state. A sensitivity study in which rain evaporation is turned off shows more rapid self-aggregation, while a run with this change plus fixed radiative cooling still shows strong self-aggregation, supporting a “moisture memory” effect found in Muller and Bony [2015]. Interestingly, self-aggregation occurs even in simulations with sea surface temperatures (SSTs) of 295 K and 290 K, with direct radiative feedbacks dominating the budget of MSE variance, in contrast to results in some previous studies.
Abstract:
In this paper, a power management strategy (PMS) has been developed for the control of energy storage in a system subjected to loads of random duration. The PMS minimises the costs associated with the energy consumption of specific systems powered by a primary energy source and equipped with energy storage, under the assumption that the statistical distribution of load durations is known. By including the variability of the load in the cost function, it was possible to define the optimality criteria for the power flow of the storage. Numerical calculations have been performed obtaining the control strategies associated with the global minimum in energy costs, for a wide range of initial conditions of the system. The results of the calculations have been tested on a MATLAB/Simulink model of a rubber tyre gantry (RTG) crane equipped with a flywheel energy storage system (FESS) and subjected to a test cycle, which corresponds to the real operation of a crane in the Port of Felixstowe. The results of the model show increased energy savings and reduced peak power demand with respect to existing control strategies, indicating considerable potential savings for port operators in terms of energy and maintenance costs.
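To illustrate the kind of optimal power-flow control described above, here is a toy dynamic-programming dispatch for a storage device serving a known load profile. This is a minimal sketch of cost-optimal storage scheduling, not the paper's PMS (which optimises against a statistical distribution of load durations); all prices, loads and storage limits are illustrative.

```python
# Minimise grid energy cost over a short horizon by choosing, at each step,
# how much the store charges or discharges. States are discretised energy
# levels; transitions respect power and no-export constraints.
load = [2.0, 5.0, 1.0, 6.0, 2.0]     # demand per time step, kWh (illustrative)
price = [1.0, 3.0, 1.0, 3.0, 1.0]    # grid cost per kWh in each step
levels = [0.0, 1.0, 2.0, 3.0, 4.0]   # discretised state of charge, kWh
p_max = 2.0                          # max charge/discharge per step, kWh

INF = float("inf")
cost = {s: (0.0 if s == 2.0 else INF) for s in levels}   # start half-full
for demand, pr in zip(load, price):
    nxt = {s: INF for s in levels}
    for s, c in cost.items():
        if c == INF:
            continue
        for s2 in levels:
            flow = s - s2            # > 0: storage discharges into the load
            if abs(flow) > p_max:
                continue
            grid = demand - flow     # remainder bought from the grid
            if grid < 0:             # no export allowed in this toy model
                continue
            nxt[s2] = min(nxt[s2], c + pr * grid)
    cost = nxt

best = min(cost.values())
no_storage = sum(d * p for d, p in zip(load, price))
print(f"optimal cost {best} vs. no-storage cost {no_storage}")  # 28.0 vs. 38.0
```

The optimal policy buys energy in the cheap steps to charge the store and discharges during the expensive steps, which is the same peak-shaving behaviour the RTG-crane FESS exploits.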
Abstract:
A possible slowing down of the cosmic expansion is investigated through a cosmographic approach. By expanding the luminosity distance to fourth order and fitting the SN Ia data from the most recent compilations (Union, Constitution and Union 2), the marginal likelihood distributions for the deceleration parameter today suggest a recent reduction of the cosmic acceleration and indicate that there is a considerable probability for q_0 > 0. Also, in contrast to the prediction of the Lambda CDM model, the cosmographic q(z) reconstruction permits a cosmic expansion history in which the cosmic acceleration could already have peaked and be presently slowing down, which would imply that the recent accelerated expansion of the universe is a transient phenomenon. It is also shown that, to describe a transient acceleration, the luminosity distance needs to be expanded at least to fourth order. The present cosmographic results depend neither on the validity of general relativity nor on the matter-energy contents of the universe.
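For reference, a sketch of the cosmographic series underlying such an analysis, in one common flat-space convention (Visser's); the abstract does not spell out its own coefficients, so this is a standard textbook form rather than a quotation:

```latex
d_L(z) = \frac{cz}{H_0}\Big[\,1 + \tfrac{1}{2}\,(1-q_0)\,z
 - \tfrac{1}{6}\,\big(1 - q_0 - 3q_0^2 + j_0\big)\,z^2
 + \tfrac{1}{24}\,\big(2 - 2q_0 - 15q_0^2 - 15q_0^3 + 5j_0 + 10\,q_0 j_0 + s_0\big)\,z^3
 + \mathcal{O}(z^4)\,\Big]
```

The snap parameter s_0 first enters at this order, which is consistent with the abstract's point that capturing a transient (non-monotonic) acceleration requires going at least to fourth order.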
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
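The size of the colour-dependence effect can be sketched with a one-line model: for a diffraction-limited telescope the effective PSF width in a broad band is roughly the SED-weighted mean of the ~λ/D diffraction scale, so a red galaxy sees a broader PSF than a blue star. The aperture, band edges and power-law SEDs below are illustrative assumptions, not the paper's instrument or source model.

```python
import numpy as np

D = 1.2                                  # aperture diameter, m (assumed)
lam = np.linspace(550e-9, 900e-9, 400)   # broad-band wavelengths, m (assumed)

def eff_fwhm(sed):
    """SED-weighted diffraction FWHM, assuming flat filter throughput."""
    return np.average(1.025 * lam / D, weights=sed)  # Airy-core FWHM ~ 1.03 lambda/D

blue_star = lam**-3.0     # toy blue (stellar) spectrum
red_galaxy = lam**2.0     # toy red (galaxy) spectrum
mismatch = eff_fwhm(red_galaxy) / eff_fwhm(blue_star) - 1.0
print(f"fractional PSF-size error from using the star's PSF: {mismatch:.1%}")
```

A mismatch at the ten-per-cent level in PSF size, uncorrected, is far above the per-cent-level shape-measurement tolerances discussed above, which is why the colour correction matters.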
Abstract:
The thermodynamic properties of dark energy fluids described by an equation of state parameter ω = p/ρ are rediscussed in the context of FRW-type geometries. Contrary to previous claims, it is argued here that the phantom regime ω < −1 is not physically possible, since both the temperature and the entropy of any physical fluid must always be positive definite. This means that one cannot appeal to negative temperature in order to save the phantom dark energy hypothesis, as has recently been done in the literature. This result remains true as long as the chemical potential is zero. However, if the phantom fluid is endowed with a non-null chemical potential, the phantom field hypothesis becomes thermodynamically consistent; that is, there are macroscopic equilibrium states with T > 0 and S > 0 in the course of the universe's expansion.
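The core of this argument can be stated via the Euler relation. For a fluid with energy density ρ, pressure p = ωρ, temperature T, entropy density s and chemical potential μ (particle density n),

```latex
\rho + p = T s + \mu n
\quad\Longrightarrow\quad
T s = (1+\omega)\,\rho \qquad (\mu = 0),
```

so for ρ > 0 the phantom regime ω < −1 makes the product Ts negative, and T > 0 and S > 0 cannot hold simultaneously; with μ ≠ 0 the extra term μn can absorb the sign, so equilibrium states with both T > 0 and S > 0 become possible.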
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union compilation set. Three simple parameterizations for the deceleration parameter (constant, linear and abrupt transition) and two different models explicitly parameterized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare models among themselves and with the flat Lambda CDM model. Analytical expressions and estimates for the present-day deceleration and cosmic jerk parameters (q_0 and j_0) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration are given. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernovae data. For the most realistic kinematic models, the 1σ confidence limits imply the following ranges of values: q_0 ∈ [−0.96, −0.46], j_0 ∈ [−3.2, −0.3] and z_t ∈ [0.36, 0.84], which are compatible with the Lambda CDM predictions q_0 = −0.57 ± 0.04, j_0 = −1 and z_t = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance Lambda CDM model, and that the current observations are not powerful enough to discriminate among all of them.
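As an illustration of how the transition redshift follows from the simplest of these parameterizations, take the linear model for the deceleration parameter (the exact convention is assumed here, since the abstract does not spell it out):

```latex
q(z) = q_0 + q_1\,z , \qquad q(z_t) = 0 \;\Longrightarrow\; z_t = -\,\frac{q_0}{q_1},
```

which is positive, as required, for a universe that accelerates today (q_0 < 0) and decelerated in the past (q_1 > 0).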