984 results for homogeneous cosmological models


Relevance: 30.00%

Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of one or more scalar fields whose energy density comes to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of structure in the universe. Moreover, inflation naturally explains the high degree of homogeneity and the spatial flatness of the early universe. The real challenge of inflationary cosmology lies in establishing a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry.
Another low-scale model considered in the thesis is the curvaton scenario, in which the primordial perturbations originate from quantum fluctuations of a curvaton field that is distinct from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. Separating the fields driving inflation from the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively high level of non-Gaussian features in the statistics of the primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
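For orientation, in the simplest curvaton scenario with a quadratic potential, the non-Gaussianity parameter is controlled by the curvaton's share of the energy density at decay; the standard result (quoted here as background, not a result of the thesis) is

```latex
f_{\mathrm{NL}} \simeq \frac{5}{4\,r_{\mathrm{dec}}} - \frac{5}{3} - \frac{5\,r_{\mathrm{dec}}}{6},
\qquad
r_{\mathrm{dec}} = \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\mathrm{decay}}.
```

A subdominant curvaton (small r_dec) thus yields large f_NL, and self-interaction terms in the curvaton potential modify this prediction, which is why the form of the potential matters for the observable non-Gaussian statistics.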


Lamination-dependent shear corrective terms in the analysis of bending of laminated plates are derived from a priori assumed linear thickness-wise distributions for the gradients of the transverse shear stresses, obtained by using the CLPT in-plane stresses in the two in-plane equilibrium equations of elasticity in each ply. In developing a general model for angle-ply laminated plates, special cases such as cylindrical bending of laminates in either direction, symmetric laminates, cross-ply laminates, antisymmetric angle-ply laminates, and homogeneous plates are taken into consideration. Adding these corrective terms to the assumed displacements in (i) Classical Laminate Plate Theory (CLPT) and (ii) Classical Laminate Shear Deformation Theory (CLSDT), two new refined lamination-dependent shear deformation models are developed. Closed-form solutions from these models are obtained for antisymmetric angle-ply laminates under sinusoidal load for a type of simply supported boundary conditions. Results obtained from the present models and from Ren's model (1987) are compared with each other.
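The origin of such corrective terms can be sketched from the three-dimensional equilibrium equations of elasticity: substituting the CLPT in-plane stresses into the two in-plane equations gives the thickness-wise gradients of the transverse shear stresses in each ply (a schematic outline in standard elasticity notation, not the paper's full derivation):

```latex
\frac{\partial \sigma_{xz}}{\partial z}
  = -\left(\frac{\partial \sigma_{xx}}{\partial x} + \frac{\partial \sigma_{xy}}{\partial y}\right),
\qquad
\frac{\partial \sigma_{yz}}{\partial z}
  = -\left(\frac{\partial \sigma_{xy}}{\partial x} + \frac{\partial \sigma_{yy}}{\partial y}\right).
```

Assuming a linear thickness-wise distribution for these gradients and integrating through each ply then yields the lamination-dependent corrections that are added to the CLPT and CLSDT displacement fields.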


We review some advances in the theory of homogeneous, isotropic turbulence. Our emphasis is on the new insights that have been gained from recent numerical studies of the three-dimensional Navier-Stokes equation and of simpler shell models for turbulence. In particular, we examine the status of multiscaling corrections to Kolmogorov scaling, extended self-similarity, generalized extended self-similarity, and non-Gaussian probability distributions for velocity differences and related quantities. We recount our recent proposal of a wave-vector-space version of generalized extended self-similarity and show how it allows us to explore an intriguing and apparently universal crossover from inertial- to dissipation-range asymptotics.
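The central quantities can be fixed with standard definitions (stated here for orientation): the order-p velocity structure function and its inertial-range exponents, the Kolmogorov 1941 prediction, and the ESS procedure of plotting one structure function against another:

```latex
S_p(\ell) \equiv \langle |\delta u(\ell)|^p \rangle \sim \ell^{\zeta_p},
\qquad
\zeta_p^{\mathrm{K41}} = \frac{p}{3},
\qquad
S_p(\ell) \sim \left[S_3(\ell)\right]^{\zeta_p/\zeta_3} \quad \text{(ESS)}.
```

Multiscaling corrections appear as a nonlinear deviation of the exponents ζ_p from p/3, and ESS extends the scaling range over which the ratios ζ_p/ζ_3 can be measured reliably.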


A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term, based on the condition that the instantaneous fluctuation of the excess temperature is small. With this model, it is shown that the CMC equations describe the autoignition process all the way up to near the equilibrium limit. The effect of second-order terms, namely the conditional variance of the temperature excess, σ², and the conditional correlations of species, q_ij, in the modeling is examined. Comparison with DNS data shows that σ² has little effect on the predicted conditional mean temperature evolution if the average conditional scalar dissipation rate is properly modeled. Using DNS data, a correction factor is introduced in the modeling of the nonlinear terms to include the effect of species fluctuations. Computations including such a correction factor show that the species conditional correlations q_ij have little effect on model predictions with the one-step reaction, but the q_ij involving intermediate species are found to be crucial when the four-step reduced kinetics is considered. The "most reactive mixture fraction" is found to vary with time when the four-step kinetics is considered. First-order CMC results are found to be qualitatively wrong if the conditional mean scalar dissipation rate is not modeled properly. The autoignition delay time predicted by the CMC model compares very well with DNS results and shows a trend similar to experimental data over a range of initial temperatures.
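For orientation, in homogeneous turbulence the first-order CMC equation for the conditional mean composition Q_i(η, t) = ⟨Y_i | ξ = η⟩ takes the standard form

```latex
\frac{\partial Q_i}{\partial t}
  = \langle N \mid \eta \rangle \frac{\partial^2 Q_i}{\partial \eta^2}
  + \langle \dot{\omega}_i \mid \eta \rangle,
```

where ⟨N | η⟩ is the conditional mean scalar dissipation rate and ⟨ω̇_i | η⟩ the conditional mean chemical source term; the submodels assessed above are precisely the closures for these two conditional averages.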


Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c·sqrt(ln n / n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), h·r(n)], for any ε > 0, with probability approaching unity as n → ∞. This result shows that a node with hop distance h from the anchor is more likely to lie within this annulus, centred at the anchor location and of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
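The deterministic half of this annulus is easy to check numerically: each hop covers a Euclidean distance of at most r(n), so any node at hop distance h lies within h·r(n) of the anchor by the triangle inequality. A minimal sketch (illustrative n and c, not the paper's simulation code):

```python
import math
import random
from collections import deque

random.seed(0)
n = 400
c = 2.0
r = c * math.sqrt(math.log(n) / n)  # connectivity radius r(n) = c*sqrt(ln n / n)

# uniform i.i.d. deployment on the unit square
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# geometric graph: nodes within Euclidean distance r are neighbours
adj = [[j for j in range(n) if j != i and dist(pts[i], pts[j]) <= r]
       for i in range(n)]

# BFS hop distances from anchor node 0 (None = unreachable)
hops = [None] * n
hops[0] = 0
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if hops[v] is None:
            hops[v] = hops[u] + 1
            queue.append(v)

# upper bound d <= h * r(n) holds deterministically for every reachable node
for i in range(n):
    if hops[i] is not None:
        assert dist(pts[0], pts[i]) <= hops[i] * r + 1e-12
```

The probabilistic lower bound (1−ε)(h−1)r(n) only emerges for large n; the assertion above checks just the deterministic half of the annulus.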


A careful comparison of the experimental results reported in the literature reveals different variations of the melting temperature, even for the same materials. Though there are different theoretical models, the thermodynamic model has been used extensively to understand the different variations of the size-dependent melting of nanoparticles. Different hypotheses, such as the homogeneous melting hypothesis (HMH), liquid nucleation and growth (LNG), and liquid skin melting (LSM), have been proposed to resolve the different variations of melting temperature reported in the literature. HMH and LNG account for a linear variation, whereas LSM is applied to understand nonlinear behaviour in the plot of melting temperature against the reciprocal of particle size. However, a bird's-eye view reveals that either HMH or LSM has been used extensively by experimentalists. It has also been observed that no single hypothesis can explain the size-dependent melting over the complete range. We therefore describe an approach that can predict the plausible hypothesis for a given data set of size-dependent melting temperatures. A variety of data have been analyzed to ascertain the hypothesis and to test the approach.
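The linear/nonlinear distinction can be made concrete with a common thermodynamic parametrisation of the two hypotheses (schematic forms with a material-dependent constant β and liquid-skin thickness t₀; not necessarily the notation of the paper):

```latex
T_m^{\mathrm{HMH}}(d) = T_{mb}\left(1 - \frac{\beta}{d}\right),
\qquad
T_m^{\mathrm{LSM}}(d) = T_{mb}\left(1 - \frac{\beta}{d - 2t_0}\right),
```

where T_mb is the bulk melting temperature and d the particle diameter: plotted against 1/d, the HMH form is a straight line, while the LSM form bends at small d, which is the nonlinear behaviour referred to above.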


Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes, and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes separated by a hop distance h. This approximation is shown, through simulation, to match the true density function very closely. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms in the literature. Belief-propagation-based message passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first use of message passing for hop-count-based self-localization.


This thesis is divided into two parts: interacting dark matter and fluctuations in cosmology. There is an incongruence between the properties that dark matter is expected to possess in the early universe and those it is expected to possess in the late universe. Weakly interacting dark matter yields the observed dark matter relic density and is consistent with large-scale structure formation; however, there is strong astrophysical evidence in favor of the idea that dark matter has large self-interactions. The first part of this thesis presents two models in which the nature of dark matter fundamentally changes as the universe evolves. In the first model, the dark matter mass and couplings depend on the value of a chameleonic scalar field that changes as the universe expands. In the second model, dark matter is charged under a hidden SU(N) gauge group and eventually undergoes confinement. These models introduce very different mechanisms to explain the separation between the physics relevant for freeze-out and that relevant for small-scale dynamics.

As the universe continues to evolve, it will asymptote to a de Sitter vacuum phase. Since there is a finite temperature associated with de Sitter space, the universe is typically treated as a thermal system, subject to rare thermal fluctuations, such as Boltzmann brains. The second part of this thesis begins by attempting to escape this unacceptable situation within the context of known physics: vacuum instability induced by the Higgs field. The vacuum decay rate competes with the production rate of Boltzmann brains, and the cosmological measures that have a sufficiently low occurrence of Boltzmann brains are given more credence. Upon further investigation, however, there are certain situations in which de Sitter space settles into a quiescent vacuum with no fluctuations. This reasoning not only provides an escape from the Boltzmann brain problem, but it also implies that vacuum states do not uptunnel to higher-energy vacua and that perturbations do not decohere during slow-roll inflation, suggesting that eternal inflation is much less common than often supposed. Instead, decoherence occurs during reheating, so this analysis does not alter the conventional understanding of the origin of density fluctuations from primordial inflation.


Novel statistical models are proposed and developed in this paper for automated multiple-pitch estimation problems. Point estimates of the parameters of partial frequencies of a musical note are modeled as realizations from a non-homogeneous Poisson process defined on the frequency axis. When several notes are combined, the processes for the individual notes combine to give a new Poisson process whose likelihood is easy to compute. This model avoids the data-association step of linking the harmonics of each note with the corresponding partials and is ideal for efficient Bayesian inference of unknown multiple fundamental frequencies in a signal. © 2011 IEEE.
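The superposition property that makes the likelihood easy to compute can be sketched directly: the intensity of a union of independent Poisson processes is the sum of their intensities, so the log-likelihood of observed partial frequencies under a combination of notes requires no data-association step. A toy sketch under assumed Gaussian-bump intensity functions (hypothetical parameters and note model, not the paper's):

```python
import math

def note_intensity(f, f0, n_harmonics=8, width=3.0, height=5.0):
    """Assumed intensity (partials per Hz) for a note with fundamental f0:
    Gaussian bumps centred on the harmonics k*f0, plus a small background."""
    lam = 0.01  # background rate for spurious partials
    for k in range(1, n_harmonics + 1):
        lam += height * math.exp(-0.5 * ((f - k * f0) / width) ** 2)
    return lam

def log_likelihood(partials, fundamentals, f_lo=0.0, f_hi=4000.0, step=1.0):
    """Poisson-process log-likelihood: sum of log-intensities at the observed
    partials minus the integrated total intensity over the frequency band.
    The combined intensity is just the sum over notes (superposition)."""
    total = lambda f: sum(note_intensity(f, f0) for f0 in fundamentals)
    ll = sum(math.log(total(f)) for f in partials)
    # crude Riemann approximation of the integral term
    ll -= sum(total(f_lo + i * step) for i in range(int((f_hi - f_lo) / step))) * step
    return ll

# observed partials: harmonics of 220 Hz and 330 Hz mixed together
obs = [220 * k for k in range(1, 6)] + [330 * k for k in range(1, 6)]
right = log_likelihood(obs, [220.0, 330.0])  # correct pair of fundamentals
wrong = log_likelihood(obs, [220.0, 415.0])  # one fundamental misidentified
assert right > wrong
```

The correct pair of fundamentals scores higher because the observed partials fall on its intensity bumps, without ever deciding which note "owns" each partial.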


We analyze in this paper the general covariant energy-momentum tensor of the gravitational system in general five-dimensional cosmological brane-world models. Then, by calculating this energy-momentum tensor for the cosmological generalization of the Randall-Sundrum model, which includes the original RS model as its static limit, we show that the weakness of gravitation on the "visible" brane is a general feature of this model. This is the origin of the gauge hierarchy from a gravitational point of view. Our results are also consistent with the fact that a gravitational system has vanishing total energy.


There is a natural norm associated with a starting point of the homogeneous self-dual (HSD) embedding model for conic convex optimization. In this norm, two measures of the HSD model's behavior are precisely controlled, independent of the problem instance: (i) the sizes of ε-optimal solutions, and (ii) the maximum distance of ε-optimal solutions to the boundary of the cone of the HSD variables. This norm is also useful in developing a stopping-rule theory for HSD-based interior-point methods such as SeDuMi. Under mild assumptions, we show that a standard stopping rule implicitly involves the sum of the sizes of the ε-optimal primal and dual solutions, as well as the size of the initial primal and dual infeasibility residuals. This theory suggests possible criteria for developing starting points for the homogeneous self-dual model that might improve the resulting solution time in practice.
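For orientation, the homogeneous self-dual embedding of the conic primal-dual pair min{cᵀx : Ax = b, x ∈ K} augments it with homogenizing variables τ, κ ≥ 0 (the standard construction used by solvers such as SeDuMi; notation ours, not necessarily the paper's):

```latex
Ax - b\tau = 0,
\qquad
-A^{\top} y + c\tau - s = 0,
\qquad
b^{\top} y - c^{\top} x - \kappa = 0,
\qquad
x \in K,\ s \in K^{*},\ \tau \ge 0,\ \kappa \ge 0.
```

A solution with τ > 0 recovers an optimal primal-dual pair (x/τ, y/τ, s/τ), while κ > 0 certifies infeasibility; the "sizes of ε-optimal solutions" above are measured in the norm tied to the chosen starting point of this system.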


In a model commonly used in dynamic traffic assignment, the link travel time for a vehicle entering a link at time t is taken as a function of the number of vehicles on the link at time t. In an alternative, recently introduced model, the travel time for a vehicle entering a link at time t is taken as a function of an estimate of the flow in the immediate neighbourhood of the vehicle, averaged over the time the vehicle is traversing the link. Here we compare the solutions obtained from these two models when applied to various inflow profiles. We also divide the link into segments, apply each model sequentially to the segments, and again compare the results. As the number of segments is increased and the discretisation is refined towards the continuous limit, the solutions from the two models converge to the same solution, which is the solution of the Lighthill-Whitham-Richards (LWR) model for traffic flow. We illustrate the results for different travel time functions and patterns of inflows to the link. In the numerical examples, the solutions from the second of the two models are closer to the limit solutions. We also show that the models converge even when the link segments are not homogeneous, and we introduce a correction scheme in the second model to compensate for an approximation error, hence improving the approximation to the LWR model.
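Schematically, writing x(t) for the number of vehicles on the link, the first model takes the travel time of a vehicle entering at time t as τ(t) = f(x(t)); in the continuum limit of many short segments, both models approach the LWR conservation law (standard notation, not the paper's exact symbols):

```latex
\tau(t) = f\big(x(t)\big),
\qquad
\frac{\partial \rho}{\partial t} + \frac{\partial q(\rho)}{\partial x} = 0,
```

where ρ(x, t) is the traffic density along the link and q(ρ) the flow-density relation.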


Structural and thermodynamic properties of spherical particles carrying classical spins are investigated by Monte Carlo simulations. The potential energy is the sum of short-range, purely repulsive pair contributions and spin-spin interactions. The latter are of the dipole-dipole form, with, however, a crucial change of sign. At low density and high temperature the system is a homogeneous fluid of weakly interacting particles with short-range spin correlations. With decreasing temperature, the particles condense into an equilibrium population of free-floating vesicles. The comparison with the electrostatic case, which gives rise to predominantly one-dimensional aggregates under similar conditions, is discussed. In both cases condensation is a continuous transformation, provided the isotropic part of the interatomic potential is purely repulsive. At low temperature the model allows us to investigate thermal and mechanical properties of membranes. At intermediate temperatures it provides a simple model for investigating equilibrium polymerization in a system giving rise to predominantly two-dimensional aggregates.
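One plausible way to write the spin-spin coupling described above is as a dipole-dipole term with the overall sign reversed relative to the electrostatic case (our schematic notation, with coupling strength ε and particle diameter σ):

```latex
U_{ij} = \frac{\varepsilon\,\sigma^{3}}{r_{ij}^{3}}
  \Big[\, 3\,(\mathbf{s}_i \cdot \hat{\mathbf{r}}_{ij})(\mathbf{s}_j \cdot \hat{\mathbf{r}}_{ij})
        - \mathbf{s}_i \cdot \mathbf{s}_j \,\Big].
```

With this sign, the head-to-tail alignment that drives chain formation for true dipoles is penalised, while side-by-side parallel spins are favoured, consistent with the predominantly two-dimensional, membrane-like aggregates described above.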


The process of accounting for heterogeneity has seen significant advances in statistical research, primarily in the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested here to determine its ability to delineate heterogeneity from airborne magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively 'noisy' magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchy system in which previously published density-function and upscaling methods are applied to derive regional distributions of the equivalent hydraulic conductivity tensor K. Several K models, as determined by several stochastic realisations of MPS dyke locations, are computed within groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration when compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by the airborne magnetic data. The best model is obtained when normal- and reverse-polarity dykes are computed separately within the MPS simulations and when a probability threshold of 0.7 is applied. The presented stochastic approach also provides an improvement when compared to a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS to airborne geophysical data for regional groundwater modelling.