996 results for Averaged models
Abstract:
Since a universally accepted dynamo model of grand minima does not exist at the present time, we concentrate on the physical processes which may be behind the grand minima. After summarizing the relevant observational data, we make the point that, while the usual sources of irregularities of solar cycles may be sufficient to cause a grand minimum, the solar dynamo has to operate somewhat differently from normal to bring the Sun out of the grand minimum. We then consider three possible sources of irregularities in the solar dynamo: (i) nonlinear effects; (ii) fluctuations in the poloidal field generation process; (iii) fluctuations in the meridional circulation. We conclude that (i) is unlikely to be the cause behind grand minima, but a combination of (ii) and (iii) may cause them. If fluctuations make the poloidal field fall much below the average or make the meridional circulation significantly weaker, then the Sun may be pushed into a grand minimum.
Abstract:
Epoxy-resin-bonded mica splittings are the insulation of choice for machine stators. However, this system is relatively weak under time-varying mechanical stress; in particular, vibration causes delamination of the mica and debonding of the mica from the resin matrix. The situation is accentuated under the combined action of electrical, thermal and mechanical stress. Physical and probabilistic models for failure of such systems were proposed earlier by one of the authors of this paper. This paper presents a pragmatic accelerated-failure data acquisition and analysis paradigm under multi-factor coupled (electrical and thermal) stress. The parameters of the phenomenological model so developed are estimated through sound statistical treatment of the failure data.
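The abstract does not give the phenomenological life model itself. As a hedged illustration only, a commonly used combined electro-thermal life law (inverse power law in electrical stress, Arrhenius in absolute temperature; all constants and the functional form are assumptions, not the authors' model) can be sketched as:

```python
from math import exp

def life_hours(E, T, A=1e9, n=9.0, B=5000.0):
    # Hypothetical combined electro-thermal life model:
    # inverse power law in electric stress E (kV/mm), Arrhenius in
    # absolute temperature T (K). A, n, B are illustrative constants.
    return A * E**(-n) * exp(B / T)

def acceleration_factor(E_test, T_test, E_use, T_use, n=9.0, B=5000.0):
    # Ratio of use-condition life to test-condition life; A cancels,
    # which is why accelerated tests can be designed without knowing A.
    return (E_test / E_use)**n * exp(B / T_use - B / T_test)
```

For example, doubling the electrical stress at constant temperature with n = 9 accelerates failure by a factor of 2^9 = 512, which is the kind of compression of test time an accelerated paradigm relies on.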
Abstract:
We report on the status of supersymmetric seesaw models in the light of recent experimental results on mu -> e + gamma, theta(13) and the light Higgs mass at the LHC. SO(10)-like relations are assumed for neutrino Dirac Yukawa couplings and two cases of mixing, one large, PMNS-like, and another small, CKM-like, are considered. It is shown that for the large mixing case, only a small range of parameter space with moderate tan beta is still allowed. This remaining region can be ruled out by an order of magnitude improvement in the current limit on BR(mu -> e + gamma). We also explore a model with non-universal Higgs mass boundary conditions at the high scale. It is shown that the renormalization group induced flavor violating slepton mass terms are highly sensitive to the Higgs boundary conditions. Depending on the choice of the parameters, they can either lead to strong enhancements or cancellations within the flavor violating terms. Such cancellations might relax the severe constraints imposed by lepton flavor violation compared to mSUGRA. Nevertheless for a large region of parameter space the predicted rates lie within the reach of future experiments once the light Higgs mass constraint is imposed. We also update the potential of the ongoing and future experimental searches for lepton flavor violation in constraining the supersymmetric parameter space.
Abstract:
Recently it has been shown that the fidelity of the ground state of a quantum many-body system can be used to detect its quantum critical points (QCPs). If g denotes the parameter in the Hamiltonian with respect to which the fidelity is computed, we find that for one-dimensional models with large but finite size, the fidelity susceptibility chi_F can detect a QCP provided that the correlation length exponent satisfies nu < 2. We then show that chi_F can be used to locate a QCP even if nu >= 2 if we introduce boundary conditions labeled by a twist angle N*theta, where N is the system size. If the QCP lies at g = 0, we find that if N is kept constant, chi_F has the scaling form chi_F ~ theta^(-2/nu) f(g/theta^(1/nu)) if theta << 2*pi/N. We illustrate this both in a tight-binding model of fermions with a spatially varying chemical potential of amplitude h and period 2q, in which nu = q, and in an XY spin-1/2 chain, in which nu = 2. Finally, we show that when q is very large, the model has two additional QCPs at h = +/-2 which cannot be detected by studying the energy spectrum but are clearly detected by chi_F. The peak value and width of chi_F seem to scale as nontrivial powers of q at these QCPs. We argue that these QCPs mark a transition between extended and localized states at the Fermi energy. DOI: 10.1103/PhysRevB.86.245424
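As a hedged illustration of how the fidelity susceptibility is computed numerically for the tight-binding model mentioned above (not the authors' code; the chain size, finite-difference step, and open boundary conditions are assumptions), the many-body ground-state overlap of free fermions reduces to a determinant of occupied orbitals:

```python
import numpy as np

def hopping_chain(N, h, q):
    # Single-particle Hamiltonian: nearest-neighbour hopping plus a
    # cosine chemical potential of amplitude h and period 2q (open chain).
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = -1.0
    for i in range(N):
        H[i, i] += h * np.cos(np.pi * i / q)
    return H

def ground_state_orbitals(H, n_f):
    # Occupied single-particle orbitals (here: half filling).
    _, V = np.linalg.eigh(H)
    return V[:, :n_f]

def fidelity(N, q, h, dh, n_f):
    # Overlap of two free-fermion Slater determinants at h -/+ dh.
    V1 = ground_state_orbitals(hopping_chain(N, h - dh, q), n_f)
    V2 = ground_state_orbitals(hopping_chain(N, h + dh, q), n_f)
    return abs(np.linalg.det(V1.T @ V2))

def chi_F(N, q, h, dh=1e-3, n_f=None):
    # chi_F from F(g, g + delta) ~ 1 - delta^2 chi_F / 2.
    n_f = n_f or N // 2
    F = fidelity(N, q, h, dh, n_f)
    return -2.0 * np.log(F) / (2 * dh) ** 2
```

Scanning `chi_F` over h and looking for peaks is the sketch of how a QCP would be located this way; the twist-angle boundary conditions of the paper are not reproduced here.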
Abstract:
This study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
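The exactly solvable linear-Gaussian case mentioned in the abstract can be sketched as follows (an illustrative closed form, not the authors' implementation): for a performance function g(x) = b - a.x with x ~ N(mu, Sigma), the Hasofer-Lind reliability index and failure probability follow directly.

```python
import numpy as np
from math import erfc, sqrt

def form_linear_gaussian(a, b, mu, Sigma):
    # Exact solution for g(x) = b - a.x, failure when g(x) <= 0,
    # x ~ N(mu, Sigma). Illustrative sketch of the solvable case.
    a, mu, Sigma = map(np.asarray, (a, mu, Sigma))
    beta = (b - a @ mu) / np.sqrt(a @ Sigma @ a)  # Hasofer-Lind index
    pf = 0.5 * erfc(beta / sqrt(2.0))             # Phi(-beta)
    return beta, pf
```

A Monte Carlo simulation of the same limit state reproduces this failure probability, which is what makes the case a convenient benchmark for the more general FORM-based procedures.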
Abstract:
We address the problem of identifying the constituent sources in a single-sensor mixture signal consisting of contributions from multiple simultaneously active sources. We propose a generic framework for mixture signal analysis based on a latent variable approach. The basic idea of the approach is to detect known sources represented as stochastic models, in a single-channel mixture signal without performing signal separation. A given mixture signal is modeled as a convex combination of known source models and the weights of the models are estimated using the mixture signal. We show experimentally that these weights indicate the presence/absence of the respective sources. The performance of the proposed approach is illustrated through mixture speech data in a reverberant enclosure. For the task of identifying the constituent speakers using data from a single microphone, the proposed approach is able to identify the dominant source with up to 8 simultaneously active background sources in a room with RT60 = 250 ms, using models obtained from clean speech data for a Source to Interference Ratio (SIR) greater than 2 dB.
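A minimal sketch of the latent-variable idea described above: the mixture is modeled as a convex combination of fixed, known source models, and only the weights are estimated (no separation). Here toy 1-D Gaussian densities stand in for the paper's trained speaker models, which is an assumption for illustration.

```python
import numpy as np

def estimate_weights(X, densities, n_iter=200):
    # EM for the convex weights of a mixture of *fixed* source models.
    # densities: list of callables returning per-sample likelihoods.
    K = len(densities)
    P = np.column_stack([d(X) for d in densities])  # (n, K) likelihoods
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        R = P * w                        # E-step: responsibilities
        R /= R.sum(axis=1, keepdims=True)
        w = R.mean(axis=0)               # M-step: update convex weights
    return w

def gaussian(mu, sigma):
    # Toy stand-in for a trained source model.
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```

Thresholding the estimated weights is then the sketch of the presence/absence decision the paper reports.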
Abstract:
This paper presents advanced analytical methodologies, namely the double-G and double-K models, for fracture analysis of concrete specimens made of high-strength concrete (HSC, HSC1) and ultra-high-strength concrete (UHSC). Brief details about the characterization and testing of HSC, HSC1 and UHSC are provided. The double-G model is based on an energy concept and couples Griffith's brittle fracture theory with the bridging-softening property of concrete. The double-K fracture model is based on a stress intensity factor approach. Various fracture parameters, such as the cohesive fracture toughness (K-Ic(c)), unstable fracture toughness (K-Ic(un)) and initiation fracture toughness (K-Ic(ini)), have been evaluated based on linear elastic fracture mechanics and nonlinear fracture mechanics principles. Both the double-G and double-K methods use the secant compliance at the peak point of the measured P-CMOD curves to determine the effective crack length. A bilinear tension-softening model has been employed to account for cohesive stresses ahead of the crack tip. From the studies, it is observed that the fracture parameters obtained using the double-G and double-K models are in good agreement with each other. The crack extension resistance has been estimated using the fracture parameters obtained through the double-K model. It is observed that the values of the crack extension resistance at the critical unstable point are almost equal to the values of the unstable fracture toughness K-Ic(un) of the materials. The computed fracture parameters will be useful for crack growth studies, remaining life and residual strength evaluation of concrete structural components.
Abstract:
We consider the asymptotics of the invariant measure for the process of spatial distribution of N coupled Markov chains in the limit of a large number of chains. Each chain reflects the stochastic evolution of one particle. The chains are coupled through the dependence of transition rates on the spatial distribution of particles in the various states. Our model is a caricature for medium access interactions in wireless local area networks. Our model is also applicable in the study of spread of epidemics in a network. The limiting process satisfies a deterministic ordinary differential equation called the McKean-Vlasov equation. When this differential equation has a unique globally asymptotically stable equilibrium, the spatial distribution converges weakly to this equilibrium. Using a control-theoretic approach, we examine the question of a large deviation from this equilibrium.
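As a toy illustration of the McKean-Vlasov limit mentioned above (assuming the simplest SIS epidemic caricature, with each chain flipping susceptible-to-infected at a rate proportional to the fraction of infected chains and recovering at a constant rate; this is not necessarily the paper's exact model), the limiting ODE and its globally asymptotically stable equilibrium can be sketched:

```python
def mckean_vlasov_sis(beta, gamma, x0=0.5, dt=0.01, T=100.0):
    # Euler integration of the N -> infinity (McKean-Vlasov) limit of an
    # SIS epidemic: x is the fraction of infected chains,
    #   dx/dt = beta * x * (1 - x) - gamma * x.
    # For beta > gamma the unique stable equilibrium is x* = 1 - gamma/beta;
    # otherwise the epidemic dies out (x* = 0).
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (beta * x * (1.0 - x) - gamma * x)
    return x
```

The spatial distribution of the N coupled chains concentrates around this equilibrium; the large-deviation question studied in the paper concerns the exponentially rare excursions away from it, which this deterministic sketch does not capture.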
Abstract:
The SUSY Les Houches Accord (SLHA) 2 extended the first SLHA to include various generalisations of the Minimal Supersymmetric Standard Model (MSSM) as well as its simplest next-to-minimal version. Here, we propose further extensions to it, to include the most general and well-established see-saw descriptions (types I/II/III, inverse, and linear) in both an effective and a simple gauged extension of the MSSM framework. In addition, we generalise the PDG numbering scheme to reflect the properties of the particles. (c) 2012 Elsevier B.V. All rights reserved.
Abstract:
We present a novel approach to representing transients using spectral-domain amplitude-modulated/frequency-modulated (AM-FM) functions. The model is applied to the real and imaginary parts of the Fourier transform (FT) of the transient. The suitability of the model lies in the observation that, since transients are well localized in time, the real and imaginary parts of the Fourier spectrum have a modulation structure. The spectral AM is the envelope and the spectral FM is the group delay function. The group delay is estimated using spectral zero-crossings, and the spectral envelope is estimated using a coherent demodulator. We show that the proposed technique is robust to additive noise. We present applications of the proposed technique to castanets and stop consonants in speech.
Abstract:
Parabolized stability equation (PSE) models are being developed to predict the evolution of low-frequency, large-scale wavepacket structures and their radiated sound in high-speed turbulent round jets. Linear PSE wavepacket models were previously shown to be in reasonably good agreement with the amplitude envelope and phase measured using a microphone array placed just outside the jet shear layer [1, 2]. Here we show that they are also in very good agreement with hot-wire measurements at the jet centerline in the potential core, for a different set of experiments [3]. When used as a model source for an acoustic analogy, the predicted far-field noise radiation is in reasonably good agreement with microphone measurements for aft angles, where contributions from large-scale structures dominate the acoustic field. Nonlinear PSE is then employed in order to determine the relative importance of mode interactions on the wavepackets. A series of nonlinear computations with randomized initial conditions is used to obtain bounds for the evolution of the modes in the natural turbulent jet flow. It was found that nonlinearity has a very limited impact on the evolution of the wavepackets for St >= 0.3. Finally, the nonlinear mechanism for the generation of a low-frequency mode as the difference-frequency mode [4, 5] of two forced frequencies is investigated in the scope of the high-Reynolds-number jets considered in this paper.
Abstract:
N-gram language models and lexicon-based word recognition are popular methods in the literature for improving recognition accuracies of online and offline handwritten data. However, there are very few works that deal with the application of these techniques to online Tamil handwritten data. In this paper, we explore methods of developing symbol-level language models and a lexicon from a large Tamil text corpus, and their application to improving symbol and word recognition accuracies. On a test database of around 2000 words, we find that bigram language models improve symbol (3%) and word recognition (8%) accuracies, and while lexicon methods offer much greater improvements (30%) in terms of word recognition, there is a large dependency on choosing the right lexicon. For comparison with lexicon- and language-model-based methods, we have also explored re-evaluation techniques, which involve the use of expert classifiers to improve symbol and word recognition accuracies.
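One standard way to combine a symbol classifier with a symbol-level bigram language model is Viterbi rescoring; a hedged sketch follows (illustrative only — the abstract does not specify the paper's recognizer, symbol set, or weighting scheme, so the log-score combination and `lm_weight` are assumptions):

```python
import numpy as np

def rescore(class_scores, bigram, lm_weight=0.5):
    # class_scores: (T, V) per-position symbol log-likelihoods from the
    #               classifier (T positions, V symbols).
    # bigram:       (V, V) log P(symbol_t | symbol_{t-1}).
    # Viterbi search for the best symbol sequence under the combined score.
    T, V = class_scores.shape
    delta = class_scores[0].copy()
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending in symbol j with predecessor i.
        cand = delta[:, None] + lm_weight * bigram + class_scores[t]
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With `lm_weight=0` this reduces to per-position classification; a positive weight lets strong bigram statistics overturn weak classifier decisions, which is the mechanism behind the symbol-accuracy gains reported above.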
Abstract:
There are many popular models available for the classification of documents, such as the Naïve Bayes classifier, k-nearest neighbors and support vector machines. In all these cases, the representation is based on the "Bag of Words" model. This model does not capture the actual semantic meaning of a word in a particular document. Semantics are better captured by the proximity of words and their occurrence in the document. We propose a new "Bag of Phrases" model to capture this discriminative power of phrases for text classification. We present a novel algorithm to extract phrases from the corpus using the well-known topic model Latent Dirichlet Allocation (LDA), and to integrate them into a vector space model for classification. Experiments show better performance of classifiers with the new Bag of Phrases model compared with related representation models.
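The representation step can be sketched as follows, assuming the phrase list has already been extracted (in the paper that list comes from LDA topics; here it is supplied by hand, and greedy longest-match tokenization is an illustrative choice, not necessarily the authors'):

```python
from collections import Counter

def bag_of_phrases(doc, phrases):
    # Greedily match the longest known phrase at each position; anything
    # not covered by a phrase falls back to a single-word count, giving a
    # sparse "Bag of Phrases" vector instead of a plain bag of words.
    tokens = doc.lower().split()
    phrase_tuples = sorted((tuple(p.lower().split()) for p in phrases),
                           key=len, reverse=True)
    counts = Counter()
    i = 0
    while i < len(tokens):
        for p in phrase_tuples:
            if tuple(tokens[i:i + len(p)]) == p:
                counts[" ".join(p)] += 1
                i += len(p)
                break
        else:
            counts[tokens[i]] += 1
            i += 1
    return counts
```

The resulting counts drop into any vector-space classifier exactly where bag-of-words counts would, which is how the two representations are compared in the experiments.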
Abstract:
Using continuous and near-real-time measurements of the mass concentrations of black carbon (BC) aerosols near the surface, for a period of 1 year (from January to December 2006) from a network of eight observatories spread over different environments of India, a space-time synthesis is generated. The strong seasonal variations observed, with a winter high and summer low, are attributed to the combined effects of changes in synoptic air mass types, modulated strongly by the atmospheric boundary layer dynamics. The spatial distribution shows much higher BC concentrations over the Indo-Gangetic Plain (IGP) than over the peninsular Indian stations. These were examined against simulations using two chemical transport models, GOCART (Goddard Global Ozone Chemistry Aerosol Radiation and Transport) and CHIMERE, for the first time over the Indian region. Both model simulations deviated significantly from the measurements at all the stations, more so during the winter and pre-monsoon seasons and over mega cities. However, the CHIMERE simulations show better agreement with the measurements. Notwithstanding this, both models captured the temporal variations, at seasonal and subseasonal timescales, and the natural variabilities (intra-seasonal oscillations) fairly well, especially at the off-equatorial stations. It is hypothesized that an improvement in the atmospheric boundary layer (ABL) parameterization scheme for the tropical environment might lead to better results with GOCART.
Abstract:
Transient signals, such as plosives in speech or castanets in audio, do not have a specific modulation or periodic structure in the time domain. However, in the spectral domain they exhibit a prominent modulation structure, which is a direct consequence of their narrow time localization. Based on this observation, a spectral-domain AM-FM model for transients is proposed. The spectral AM-FM model is built starting from real spectral zero-crossings. The AM and FM correspond to the spectral envelope (SE) and group delay (GD), respectively. Taking into account the modulation structure and spectral continuity, a local polynomial regression technique is proposed to estimate the GD function from the real spectral zeros. The SE is estimated based on the phase function computed from the estimated GD. Since the GD estimation is parametric, the degree of smoothness can be controlled directly. Simulation results based on synthetic transient signals generated using a beta density function are presented to analyze the noise robustness of the SEGD model. Three specific applications are considered: (1) SEGD-based modeling of castanet sounds; (2) the appropriateness of the model for transient compression; and (3) determining glottal closure instants in speech using a short-time SEGD model of the linear prediction residue.
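The core observation behind the SEGD model, that a time-localized transient has a smooth spectral envelope and a group delay sitting at the transient's time of occurrence, can be sketched as follows (illustrative only: this uses direct phase differentiation, not the paper's zero-crossing and local polynomial regression estimators):

```python
import numpy as np

def spectral_envelope_and_gd(x, fs=1.0):
    # FT view of a transient: the spectral envelope is |X(f)| and the
    # group delay is -d(phase)/d(omega). For a well-localised transient
    # the group delay is flat and equals the transient's time location.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    envelope = np.abs(X)
    phase = np.unwrap(np.angle(X))
    gd = -np.gradient(phase, 2 * np.pi * f[1])  # group delay in seconds
    return f, envelope, gd
```

For a Gaussian pulse centred at t0, the estimated group delay comes out near-constant at t0 across the frequencies where the envelope is significant, which is the modulation structure the SEGD model parameterizes.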