214 results for Minimal models


Relevance: 20.00%

Abstract:

Lepton masses and mixing angles via localization of 5-dimensional fields in the bulk are revisited in the context of Randall-Sundrum models. The Higgs is assumed to be localized on the IR brane. Three cases for neutrino masses are considered: (a) the higher-dimensional neutrino mass operator (LH.LH), (b) Dirac masses, and (c) type I seesaw with bulk Majorana mass terms. Neutrino masses and mixing as well as charged lepton masses are fit in the first two cases using chi-squared minimization for the bulk mass parameters, while varying the O(1) Yukawa couplings between 0.1 and 4. Lepton flavor violation is studied for all three cases. It is shown that large negative bulk mass parameters are required for the right-handed fields to fit the data in the LH.LH case. This case is characterized by a very large Kaluza-Klein (KK) spectrum and relatively weak flavor-violating constraints at leading order. The zero modes for the charged singlets are composite in this case, and their corresponding effective 4-dimensional Yukawa couplings to the KK modes could be large. For the Dirac case, good fits can be obtained for bulk mass parameters c_i lying between 0 and 1. However, most of the "best-fit regions" are ruled out by flavor-violating constraints. In the case with bulk Majorana terms, we have solved the profile equations numerically. We give example points for inverted and normal hierarchies of neutrino masses. Lepton flavor violating rates are large for these points. We then discuss various minimal flavor violation schemes for the Dirac and bulk Majorana cases. In the Dirac case with the minimal-flavor-violation hypothesis, it is possible to simultaneously fit leptonic masses and mixing angles and alleviate lepton flavor violating constraints for KK modes with masses of around 3 TeV. Similar examples are also provided in the Majorana case.
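The chi-squared fitting strategy described in this abstract can be sketched generically. The snippet below is a toy illustration only: the observables, the exponential dependence of the effective couplings on the bulk parameters c_i, and all numerical values are hypothetical stand-ins, not the paper's Randall-Sundrum computation.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy chi-squared fit: charged lepton masses (GeV) with assumed 1%
# uncertainties, fit by varying hypothetical bulk mass parameters c_i.
obs = np.array([0.511e-3, 0.1057, 1.777])
sigma = 0.01 * obs

def model(c):
    # Hypothetical stand-in for zero-mode overlap factors: exponential
    # sensitivity of effective 4D masses to the bulk parameters c_i.
    return 1.777 * np.exp(-8.0 * (1.0 - c))

def residuals(c):
    # chi^2 is the sum of squares of these weighted residuals.
    return (model(c) - obs) / sigma

fit = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]))
chi2 = np.sum(fit.fun ** 2)
```

With as many parameters as observables the minimum chi-squared is essentially zero here; in the paper's setup the O(1) Yukawa couplings are additionally scanned between 0.1 and 4.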

Relevance: 20.00%

Abstract:

The pivotal point of the paper is to discuss the behavior of temperature, pressure, and energy density as functions of volume, along with the determination of the caloric EoS, for two models, the first being w(z) = w_0 + w_1 ln(1+z). The time scale of instability for these two models is discussed. We then generalize our result and arrive at a general expression for the energy density irrespective of the model. The thermodynamical stability for both models and for the general case is discussed from this viewpoint. We also arrive at a condition on the limiting behavior of the thermodynamic parameters to validate the third law of thermodynamics, and we interpret the integration constant U_0 (obtained while integrating the energy conservation equation) physically, relating it to the number of microstates. The constraints on the allowed values of the parameters of the models, which ascertain the stability of the universe, are discussed. The validity of the thermodynamical laws within the apparent and event horizons is discussed.
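The energy density follows from integrating the energy conservation equation for a given w(z). A minimal sketch for the first model, with illustrative values of w_0 and w_1 (assumptions, not the paper's fitted parameters):

```python
import numpy as np
from scipy.integrate import quad

# For w(z) = w0 + w1*ln(1+z), integrating d(rho)/rho = 3(1+w) dz/(1+z)
# gives rho(z) = rho0 * (1+z)^{3(1+w0)} * exp(1.5 * w1 * ln(1+z)^2).
w0, w1, rho0 = -0.9, 0.1, 1.0

def rho_closed(z):
    x = np.log1p(z)
    return rho0 * np.exp(3.0 * (1.0 + w0) * x + 1.5 * w1 * x * x)

def rho_numeric(z):
    # Direct numerical integration of the conservation equation.
    integrand = lambda s: 3.0 * (1.0 + w0 + w1 * np.log1p(s)) / (1.0 + s)
    integral, _ = quad(integrand, 0.0, z)
    return rho0 * np.exp(integral)
```

The closed form and the direct quadrature agree, which is the kind of consistency check that precedes the stability analysis described above.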

Relevance: 20.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
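The hop-distance approximation analyzed above is easy to probe in simulation. The following sketch (illustrative parameters, not the paper's analysis) deploys nodes uniformly in the unit square, builds the random geometric graph, and compares BFS hop counts from an anchor node with Euclidean distances:

```python
import numpy as np
from collections import deque

# Random geometric graph: nodes uniform in the unit square, edges
# between pairs closer than radius r.
rng = np.random.default_rng(0)
n, r = 600, 0.12
pts = rng.random((n, 2))

# Pairwise Euclidean distances and adjacency lists.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
adj = [np.flatnonzero((d[i] < r) & (d[i] > 0)) for i in range(n)]

def hops_from(src):
    # Breadth-first search giving hop counts from the anchor node.
    h = np.full(n, -1)
    h[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if h[v] < 0:
                h[v] = h[u] + 1
                q.append(v)
    return h

h = hops_from(0)
reach = h > 0
```

A node h hops away is at Euclidean distance at most h*r (triangle inequality along the path), and in a dense deployment the two quantities are strongly correlated, which is what makes hop counts usable for localization.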

Relevance: 20.00%

Abstract:

Since a universally accepted dynamo model of grand minima does not exist at the present time, we concentrate on the physical processes which may be behind the grand minima. After summarizing the relevant observational data, we make the point that, while the usual sources of irregularities of solar cycles may be sufficient to cause a grand minimum, the solar dynamo has to operate somewhat differently from the normal to bring the Sun out of the grand minimum. We then consider three possible sources of irregularities in the solar dynamo: (i) nonlinear effects; (ii) fluctuations in the poloidal field generation process; (iii) fluctuations in the meridional circulation. We conclude that (i) is unlikely to be the cause behind grand minima, but a combination of (ii) and (iii) may cause them. If fluctuations make the poloidal field fall much below the average or make the meridional circulation significantly weaker, then the Sun may be pushed into a grand minimum.

Relevance: 20.00%

Abstract:

The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.106015]
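For context, the baseline method the paper compares against can be sketched on a synthetic linear problem: Tikhonov-regularized inversion with the parameter chosen by generalized cross-validation (GCV). The MRM-based selection proposed in the paper is not reproduced here, and all problem data below are hypothetical.

```python
import numpy as np

# Synthetic ill-conditioned linear inverse problem A x = b with noise.
rng = np.random.default_rng(1)
m, p = 40, 20
A = rng.standard_normal((m, p)) @ np.diag(0.8 ** np.arange(p))
x_true = rng.standard_normal(p)
b = A @ x_true + 0.01 * rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b

def gcv(lam):
    # Tikhonov filter factors for penalty lam^2 * ||x||^2.
    f = s ** 2 / (s ** 2 + lam ** 2)
    resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return resid / (m - np.sum(f)) ** 2

lams = np.logspace(-6, 1, 200)
lam_opt = lams[np.argmin([gcv(l) for l in lams])]
x_reg = Vt.T @ ((s / (s ** 2 + lam_opt ** 2)) * beta)
```

GCV picks the parameter minimizing a predictive-risk surrogate; the paper's point is that an MRM-based criterion can do this selection at least as well for diffuse optical tomography.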

Relevance: 20.00%

Abstract:

The presence of new matter fields charged under the Standard Model gauge group at intermediate scales below the Grand Unification scale modifies the renormalization group evolution of the gauge couplings. This can in turn significantly change the running of the Minimal Supersymmetric Standard Model parameters, in particular the gaugino and the scalar masses. In the absence of new large Yukawa couplings we can parameterise all the intermediate scale models in terms of only two parameters controlling the size of the unified gauge coupling. As a consequence of the modified running, the low energy spectrum can be strongly affected with interesting phenomenological consequences. In particular, we show that scalar over gaugino mass ratios tend to increase and the regions of the parameter space with neutralino Dark Matter compatible with cosmological observations get drastically modified. Moreover, we discuss some observables that can be used to test the intermediate scale physics at the LHC in a wide class of models.
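At one loop, the renormalization group evolution in question is just a logarithmic running of the inverse couplings. The sketch below uses the standard GUT-normalized SM and MSSM one-loop coefficients; the intermediate-scale matter thresholds studied in the paper are not modeled, and the input values of 1/alpha_i at M_Z are rounded.

```python
import numpy as np

# One-loop RGE: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2 pi) ln(mu / MZ).
MZ = 91.19                                   # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # rounded 1/alpha_i at MZ (GUT norm)
b_SM = np.array([41 / 10, -19 / 6, -7])      # SM one-loop coefficients
b_MSSM = np.array([33 / 5, 1, -3])           # MSSM one-loop coefficients

def alpha_inv(mu, b):
    return alpha_inv_MZ - b / (2 * np.pi) * np.log(mu / MZ)

# With MSSM content the three couplings nearly meet around 2e16 GeV,
# while with SM content alone they clearly do not.
spread_mssm = np.ptp(alpha_inv(2e16, b_MSSM))
spread_sm = np.ptp(alpha_inv(2e16, b_SM))
```

Extra matter at an intermediate scale shifts the b_i above that scale, which changes the unified coupling and, through it, the gaugino and scalar mass running; that is the effect the two-parameter description in the abstract captures.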

Relevance: 20.00%

Abstract:

Epoxy resin bonded mica splittings are the insulation of choice for machine stators. However, this system is seen to be relatively weak under time-varying mechanical stress, in particular vibration, which causes delamination of the mica and debonding of the mica from the resin matrix. The situation is accentuated under the combined action of electrical, thermal, and mechanical stress. Physical and probabilistic models for failure of such systems have been proposed earlier by one of the authors of this paper. This paper presents a pragmatic accelerated-failure data acquisition and analysis paradigm under multifactor coupled stress (electrical and thermal). The parameters of the phenomenological model so developed are estimated based on sound statistical treatment of the failure data.
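As a generic illustration of the statistical treatment of failure data mentioned above (not the authors' phenomenological model), times to failure from an accelerated test are commonly fit with a two-parameter Weibull distribution by maximum likelihood; the data, shape, and scale below are synthetic assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic accelerated-aging data: hypothetical hours to failure drawn
# from a two-parameter Weibull distribution.
rng = np.random.default_rng(5)
shape_true, scale_true = 2.5, 1000.0
data = weibull_min.rvs(shape_true, scale=scale_true, size=500,
                       random_state=rng)

# Maximum-likelihood fit with the location fixed at zero
# (two-parameter Weibull).
shape_hat, loc_hat, scale_hat = weibull_min.fit(data, floc=0)
```

A shape parameter above 1 indicates wear-out (increasing hazard rate), the regime relevant to insulation aging under combined stresses.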

Relevance: 20.00%

Abstract:

We report on the status of supersymmetric seesaw models in the light of recent experimental results on mu -> e + gamma, theta(13), and the light Higgs mass at the LHC. SO(10)-like relations are assumed for the neutrino Dirac Yukawa couplings, and two cases of mixing, one large, PMNS-like, and another small, CKM-like, are considered. It is shown that for the large mixing case, only a small range of parameter space with moderate tan beta is still allowed. This remaining region can be ruled out by an order-of-magnitude improvement in the current limit on BR(mu -> e + gamma). We also explore a model with non-universal Higgs mass boundary conditions at the high scale. It is shown that the renormalization-group-induced flavor-violating slepton mass terms are highly sensitive to the Higgs boundary conditions. Depending on the choice of the parameters, they can lead either to strong enhancements of or to cancellations within the flavor-violating terms. Such cancellations might relax the severe constraints imposed by lepton flavor violation compared to mSUGRA. Nevertheless, for a large region of parameter space the predicted rates lie within the reach of future experiments once the light Higgs mass constraint is imposed. We also update the potential of ongoing and future experimental searches for lepton flavor violation to constrain the supersymmetric parameter space.

Relevance: 20.00%

Abstract:

Recently it has been shown that the fidelity of the ground state of a quantum many-body system can be used to detect its quantum critical points (QCPs). If g denotes the parameter in the Hamiltonian with respect to which the fidelity is computed, we find that for one-dimensional models with large but finite size, the fidelity susceptibility chi_F can detect a QCP provided that the correlation length exponent satisfies nu < 2. We then show that chi_F can be used to locate a QCP even if nu >= 2 if we introduce boundary conditions labeled by a twist angle N*theta, where N is the system size. If the QCP lies at g = 0, we find that if N is kept constant, chi_F has the scaling form chi_F ~ theta^(-2/nu) f(g/theta^(1/nu)) if theta << 2*pi/N. We illustrate this both in a tight-binding model of fermions with a spatially varying chemical potential with amplitude h and period 2q, in which nu = q, and in an XY spin-1/2 chain, in which nu = 2. Finally, we show that when q is very large, the model has two additional QCPs at h = +/-2 which cannot be detected by studying the energy spectrum but are clearly detected by chi_F. The peak value and width of chi_F seem to scale as nontrivial powers of q at these QCPs. We argue that these QCPs mark a transition between extended and localized states at the Fermi energy. DOI: 10.1103/PhysRevB.86.245424
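A minimal free-fermion sketch of the fidelity-susceptibility diagnostic (a toy stand-in: a staggered rather than period-2q potential, open boundaries, no twist angle): chi_F = 2(1 - F)/dg^2, computed from the overlap determinant of the occupied single-particle orbitals, is strongly enhanced at the gapless point h = 0 of this toy chain.

```python
import numpy as np

def ground_orbitals(h, N=64):
    # Tight-binding chain with hopping -1 and a staggered potential of
    # amplitude h; returns the N/2 lowest single-particle orbitals
    # (half filling).
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = -1.0
    H += np.diag(h * (-1.0) ** np.arange(N))
    _, vecs = np.linalg.eigh(H)
    return vecs[:, : N // 2]

def fidelity(h, dg=1e-3):
    # Many-body ground-state overlap of free fermions = determinant of
    # the overlap matrix of the occupied orbitals.
    U1, U2 = ground_orbitals(h), ground_orbitals(h + dg)
    return abs(np.linalg.det(U1.T @ U2))

def chi_F(h, dg=1e-3):
    return 2.0 * (1.0 - fidelity(h, dg)) / dg ** 2
```

The staggered potential opens a gap for h != 0, so chi_F is small there and peaks near the gapless point, which is the behavior the abstract exploits to locate QCPs.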

Relevance: 20.00%

Abstract:

The study extends the first order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM and inverse reliability based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab based reliability modules with finite element models residing on the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
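The exactly solvable linear Gaussian case mentioned above can be sketched directly: for a performance function g(X) = a.X + b with X ~ N(mu, Sigma), the reliability index is beta = E[g]/sd[g] and the failure probability is Phi(-beta). The numbers below are arbitrary illustrations, not from the paper.

```python
import numpy as np
from scipy.stats import norm

# Linear performance function g(X) = a.X + b with Gaussian X:
# P(failure) = P(g < 0) = Phi(-beta), beta = (a.mu + b)/sqrt(a' Sigma a).
rng = np.random.default_rng(2)
a = np.array([2.0, -1.0, 0.5])
b = 1.5
mu = np.array([1.0, 0.5, 0.0])
Sigma = np.diag([0.25, 0.25, 1.0])

beta = (a @ mu + b) / np.sqrt(a @ Sigma @ a)
pf_exact = norm.cdf(-beta)

# Crude Monte Carlo check of the closed form.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
pf_mc = np.mean(X @ a + b < 0.0)
```

For nonlinear performance functions or measured-response updating, this closed form no longer applies, which is where the FORM/inverse-FORM and MCMC machinery of the paper comes in.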

Relevance: 20.00%

Abstract:

We address the problem of identifying the constituent sources in a single-sensor mixture signal consisting of contributions from multiple simultaneously active sources. We propose a generic framework for mixture signal analysis based on a latent variable approach. The basic idea of the approach is to detect known sources represented as stochastic models, in a single-channel mixture signal without performing signal separation. A given mixture signal is modeled as a convex combination of known source models and the weights of the models are estimated using the mixture signal. We show experimentally that these weights indicate the presence/absence of the respective sources. The performance of the proposed approach is illustrated through mixture speech data in a reverberant enclosure. For the task of identifying the constituent speakers using data from a single microphone, the proposed approach is able to identify the dominant source with up to 8 simultaneously active background sources in a room with RT60 = 250 ms, using models obtained from clean speech data for a Source to Interference Ratio (SIR) greater than 2 dB.
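The weight-estimation idea can be shown in a scalar toy problem where fixed one-dimensional Gaussians stand in for the trained source models (the paper's speaker models, data, and reverberant conditions are not reproduced): only the convex-combination weights are updated by EM, and the weight of an absent source drops toward zero.

```python
import numpy as np
from scipy.stats import norm

# Three fixed "source models" (scalar Gaussians); the mixture signal is
# drawn with source 1 absent.
rng = np.random.default_rng(3)
means = np.array([-4.0, 0.0, 5.0])
stds = np.array([1.0, 1.0, 1.0])
true_w = np.array([0.6, 0.0, 0.4])
comp = rng.choice(3, size=4000, p=true_w)
x = rng.normal(means[comp], stds[comp])

# Likelihood of every sample under each fixed source model.
L = norm.pdf(x[:, None], means[None, :], stds[None, :])

# EM updates on the convex-combination weights only; the source models
# themselves stay fixed, as in the abstract.
w = np.full(3, 1 / 3)
for _ in range(200):
    resp = w * L
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)
```

The estimated weights track the true source proportions, so thresholding them indicates which known sources are present without performing signal separation.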

Relevance: 20.00%

Abstract:

This paper presents advanced analytical methodologies, namely the double-G and double-K models, for fracture analysis of concrete specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization and experimentation of HSC, HSC1, and UHSC are provided. The double-G model is based on an energy concept and couples Griffith's brittle fracture theory with the bridging softening property of concrete. The double-K fracture model is based on the stress intensity factor approach. Various fracture parameters, namely the cohesive fracture toughness (K-Ic(c)), unstable fracture toughness (K-Ic(un)), and initiation fracture toughness (K-Ic(ini)), have been evaluated based on linear elastic fracture mechanics and nonlinear fracture mechanics principles. Both the double-G and double-K methods use the secant compliance at the peak point of the measured P-CMOD curves to determine the effective crack length. A bilinear tension softening model has been employed to account for cohesive stresses ahead of the crack tip. From the studies, it is observed that the fracture parameters obtained using the double-G and double-K models are in good agreement with each other. The crack extension resistance has been estimated using the fracture parameters obtained through the double-K model. It is observed that the values of the crack extension resistance at the critical unstable point are almost equal to the values of the unstable fracture toughness K-Ic(un) of the materials. The computed fracture parameters will be useful for crack growth studies and for remaining life and residual strength evaluation of concrete structural components.

Relevance: 20.00%

Abstract:

We consider the asymptotics of the invariant measure for the process of spatial distribution of N coupled Markov chains in the limit of a large number of chains. Each chain reflects the stochastic evolution of one particle. The chains are coupled through the dependence of transition rates on the spatial distribution of particles in the various states. Our model is a caricature for medium access interactions in wireless local area networks. Our model is also applicable in the study of spread of epidemics in a network. The limiting process satisfies a deterministic ordinary differential equation called the McKean-Vlasov equation. When this differential equation has a unique globally asymptotically stable equilibrium, the spatial distribution converges weakly to this equilibrium. Using a control-theoretic approach, we examine the question of a large deviation from this equilibrium.
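A minimal simulation in the spirit of the epidemic application (an SIS caricature with invented rates, not the paper's model): N two-state chains whose infection rate depends on the empirical fraction infected. For large N, the fraction tracks the McKean-Vlasov ODE x' = lam*x*(1-x) - mu*x, whose stable equilibrium is x* = 1 - mu/lam when lam > mu.

```python
import numpy as np

# N coupled two-state chains; per-step transition probabilities depend
# on the current empirical distribution (fraction infected x).
rng = np.random.default_rng(4)
N, lam, mu, dt, steps = 5000, 2.0, 1.0, 0.01, 5000
state = np.zeros(N, dtype=bool)
state[: N // 10] = True                       # 10% initially infected

for _ in range(steps):
    x = state.mean()
    infect = (~state) & (rng.random(N) < lam * x * dt)
    recover = state & (rng.random(N) < mu * dt)
    state = (state | infect) & ~recover

x_star = 1.0 - mu / lam                       # McKean-Vlasov equilibrium
```

The empirical fraction settles near x* with fluctuations of order 1/sqrt(N); the large-deviation question in the abstract concerns the exponentially rare excursions away from this equilibrium.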

Relevance: 20.00%

Abstract:

We present a novel approach to represent transients using spectral-domain amplitude-modulated/frequency-modulated (AM-FM) functions. The model is applied to the real and imaginary parts of the Fourier transform (FT) of the transient. The suitability of the model lies in the observation that, since transients are well-localized in time, the real and imaginary parts of the Fourier spectrum have a modulation structure. The spectral AM is the envelope, and the spectral FM is the group delay function. The group delay is estimated using spectral zero-crossings, and the spectral envelope is estimated using a coherent demodulator. We show that the proposed technique is robust to additive noise. We present applications of the proposed technique to castanets and stop-consonants in speech.
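The modulation structure can be seen in a toy example (a Gaussian transient, with a Hilbert-envelope estimate standing in for the paper's coherent demodulator): a transient at delay t0 makes the real part of its Fourier transform oscillate along the frequency axis at a rate given by the group delay t0, under an envelope equal to |X(f)|.

```python
import numpy as np
from scipy.signal import hilbert

# Gaussian transient centered at t0; its spectrum has constant group
# delay t0, so Re X(f) ~ |X(f)| cos(2*pi*f*t0).
fs, n, t0 = 1000, 4096, 0.5
t = np.arange(n) / fs
x = np.exp(-0.5 * ((t - t0) / 0.01) ** 2)

X = np.fft.rfft(x)
env_true = np.abs(X)
# Spectral AM estimated as the magnitude of the analytic signal of
# Re X(f), taken along the frequency axis.
env_est = np.abs(hilbert(np.real(X)))
```

Away from the spectral edges the estimated envelope matches |X(f)| closely; the paper's demodulator additionally estimates the spectral FM (the group delay) from the zero-crossings of Re X(f).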