959 results for Minimal Realizations
Abstract:
In view of the recent measurement of the reactor mixing angle theta_13 and the updated limit on BR(mu -> e gamma) from the MEG experiment, we reexamine charged lepton flavor violation in the framework of the supersymmetric type II seesaw mechanism. The supersymmetric type II seesaw predicts a strong correlation between BR(mu -> e gamma) and BR(tau -> mu gamma), determined mainly by the neutrino mixing angles. We show that this correlation can be pinned down accurately after the measurement of theta_13. We compute the different factors that can affect this correlation and show that minimal supergravity-like scenarios, in which slepton masses are taken to be universal at the high scale, predict 3.5 <= BR(tau -> mu gamma)/BR(mu -> e gamma) <= 30 for normal hierarchical neutrino masses. Any experimental indication of a deviation from this prediction would rule out the minimal models of the supersymmetric type II seesaw. We show that the current MEG limit puts severe constraints on a light sparticle spectrum in the minimal supergravity model if the seesaw scale lies within 10^13-10^15 GeV. These constraints can be relaxed, and a relatively light sparticle spectrum obtained, in a class of models in which the soft mass of the triplet scalar is taken to be nonuniversal at the high scale.
Abstract:
This paper proposes a common-mode (CM) filter based on the LCL filter topology that provides a parallel path for ground currents and restricts the magnitude of the EMI noise injected into the grid. The CM filter reuses the components of a line-to-line LCL filter, which is modified to address the CM voltage with minimal additional components, leading to a compact filtering solution. The CM voltage of an adjustable speed drive using a PWM rectifier is analyzed for this purpose. The filter design is based on the CM equivalent circuit of the drive system, and the filter addresses the adverse effects of the PWM rectifier in an adjustable speed drive. Guidelines are provided for the selection of the filter components. Different variants of the filter topology are evaluated to establish the effectiveness of the proposed circuit. Experimental results based on EMI measurements on the grid side and CM current measurements on the motor side validate the effectiveness of the filter.
Abstract:
We theoretically explore the annihilation of vortex dipoles, generated when an obstacle moves through an oblate Bose-Einstein condensate, and examine the energetics of the annihilation event. We show that the grey soliton, which results from the vortex dipole annihilation, is lower in energy than the vortex dipole. We also investigate the annihilation events numerically and observe that annihilation occurs only when the vortex dipole overtakes the obstacle and comes closer than the coherence length. Furthermore, we find that noise reduces the probability of annihilation events. This may explain the lack of annihilation events in experimental realizations.
Abstract:
Automated security is one of the major concerns of modern times, and secure, reliable authentication systems are in great demand. A biometric trait such as the finger knuckle print (FKP) of a person is unique and secure. The FKP is a novel biometric trait that has not been explored much for real-time implementation. In this paper, three different algorithms based on this trait are proposed. The first approach uses the Radon transform for feature extraction; two levels of security are provided, based on the eigenvalues and the peak points of the Radon graph. In the second approach, the Gabor wavelet transform is used for extracting the features; again, two levels of security are provided, based on the magnitude values of the Gabor wavelet and the peak points of the Gabor wavelet graph. The third approach is intended to authenticate a person even if the finger knuckle is damaged due to injury: the FKP image is divided into modules, and module-wise feature matching is done for authentication. The performance of these algorithms compares favorably with the few existing works. Moreover, the algorithms are designed for real-time implementation with minimal changes.
Abstract:
We consider supersymmetric models in which the lightest Higgs scalar can decay invisibly, consistent with the constraints on the 126 GeV state discovered at the CERN LHC. We consider the invisible decay in the minimal supersymmetric standard model (MSSM), as well as in its extension containing an additional chiral singlet superfield, the so-called next-to-minimal or nonminimal supersymmetric standard model (NMSSM). For the MSSM with both universal and nonuniversal gaugino masses at the grand unified scale, we find that only an E_6 grand unified model with an unnaturally large representation can give rise to neutralinos light enough to allow the invisible decay h_0 -> chi~_1^0 chi~_1^0. We then consider the NMSSM in detail, where we also find that the invisible decay of the lightest Higgs scalar is not possible with universal gaugino masses at the grand unified scale. We delineate the regions of the NMSSM parameter space where the lightest Higgs boson can have a mass of about 126 GeV, and then concentrate on the region where this Higgs can decay into light neutralinos, treating the soft gaugino masses M_1 and M_2 as two independent parameters unconstrained by grand unification. We also study in detail the other important invisible Higgs decay channel in the NMSSM, the decay into the lightest CP-odd scalars, h_1 -> a_1 a_1. With the invisible Higgs branching ratio constrained by the present LHC results, we find that mu_eff < 170 GeV and M_1 < 80 GeV are disfavored in the NMSSM for fixed values of the other input parameters. The dependence of our results on the parameters of the NMSSM is discussed in detail.
Abstract:
Electrical switching studies on amorphous Si15Te75Ge10 thin film devices reveal the existence of two distinct, stable low-resistance SET states, achieved by varying the electrical input to the device. The multiple resistance levels can be attributed to multi-stage crystallization, as observed from temperature-dependent resistance studies. The devices are tested for their ability to be RESET with minimal resistance degradation; further, they exhibit minimal drift in the SET resistance value even after several months of switching.
Abstract:
Let M be the completion of the polynomial ring C[z_1, ..., z_m] with respect to some inner product, and for any ideal I subset of C[z_1, ..., z_m], let [I] be the closure of I in M. For a homogeneous ideal I, the joint kernel of the submodule [I] subset of M is shown, after imposing some mild conditions on M, to be the linear span of the set of vectors { p_i(∂/∂w̄_1, ..., ∂/∂w̄_m) K_[I](., w)|_{w=0}, 1 <= i <= t }, where K_[I] is the reproducing kernel for the submodule [I] and p_1, ..., p_t is some minimal "canonical set of generators" for the ideal I. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem" using the sheaf model for Hilbert modules over polynomial rings is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form [I]. We show that the curvature, or even its restriction to the exceptional set, of this line bundle is an invariant for the unitary equivalence class of [I]. Several examples are given to illustrate the explicit computation of these invariants.
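Typeset in LaTeX, the spanning set for the joint kernel described above reads (this is a restatement of the formula in the abstract, not additional content):

```latex
\operatorname{span}\left\{\, p_i\!\left(\frac{\partial}{\partial\bar w_1},\dots,\frac{\partial}{\partial\bar w_m}\right) K_{[I]}(\cdot,w)\Big|_{w=0} \;:\; 1\le i\le t \,\right\}
```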
Abstract:
Transmit antenna selection (AS) has been adopted in contemporary wideband wireless standards such as Long Term Evolution (LTE). We analyze a comprehensive new model for AS that captures several key features of its operation in wideband orthogonal frequency division multiple access (OFDMA) systems. These include the use of channel-aware frequency-domain scheduling (FDS) in conjunction with AS, the hardware constraint that a user must transmit using the same antenna over all its assigned subcarriers, and the scheduling constraint that the subcarriers assigned to a user must be contiguous. The model also captures the novel dual pilot training scheme used in LTE, in which a coarse, system bandwidth-wide sounding reference signal (SRS) is used to acquire relatively noisy channel state information (CSI) for AS and FDS, and a dense narrow-band demodulation reference signal is used to acquire accurate CSI for data demodulation. We analyze the symbol error probability when AS is done in conjunction with channel-unaware, but fair, round-robin scheduling and with channel-aware greedy FDS. Our results quantify how effective joint AS-FDS is in dispersive environments, the interactions between the above features, and the ability of the user to lower the SRS power with minimal performance degradation.
Abstract:
A droplet residing on a vibrating surface at the pressure antinode of an asymmetric standing wave can spread radially outward and atomize. In this work, proper orthogonal decomposition of high-speed imaging data is shown to predict the likelihood of atomization for various viscous fluids based on prior information from the droplet spreading phase. Capillary instabilities are seen to affect ligament rupture. Viscous dissipation plays an important role in determining the wavelength of the most unstable mode during the inception phase of the ligaments. However, the highest ligament capillary number achieved was less than 1, and the influence of viscosity in the ligament growth and breakup phases is quite minimal. It is inferred from the data that the growth of a typical ligament is governed by a balance between the inertial force acquired during the inception phase and capillary forces. By including the effect of the acoustic pressure field around the droplet, the dynamics of the ligament growth phase are revealed, and the ligament growth profiles for different fluids are shown to collapse onto a straight line using a new characteristic time scale.
Interaction of Silver Nanoparticles with Serum Proteins Affects Their Antimicrobial Activity In Vivo
Abstract:
The emergence of multidrug-resistant bacteria is a global threat to human society. Recorded evidence indicates that silver was used as an antimicrobial agent by the ancient Greeks and Romans during the 8th century. Silver nanoparticles (AgNPs) are of potential interest because of their effective antibacterial and antiviral activities, with minimal cytotoxic effects on cells. However, very few reports have demonstrated the use of AgNPs for antibacterial therapy in vivo. In this study, we deciphered the importance of the chosen synthesis and capping methods for AgNPs for improved activity in vivo. The interaction of AgNPs with serum albumin has a significant effect on their antibacterial activity. Uncapped AgNPs exhibited no antibacterial activity in the presence of serum proteins, due to their interaction with bovine serum albumin (BSA), which was confirmed by UV-Vis spectroscopy. However, AgNPs capped with citrate or poly(vinylpyrrolidone) (PVP) exhibited antibacterial properties, owing to minimized interactions with serum proteins. Damage to the bacterial membrane was assessed by flow cytometry, which also showed that only capped AgNPs exhibited antibacterial properties, even in the presence of BSA. To understand the in vivo relevance of the antibacterial activities of the different AgNPs, a murine salmonellosis model was used. It was conclusively shown that AgNPs capped with citrate or PVP exhibited significant antibacterial activity in vivo against Salmonella infection compared to uncapped AgNPs. These results clearly demonstrate the importance of the capping agent and synthesis method in the use of AgNPs as antimicrobial agents for therapeutic purposes.
Abstract:
The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered, with a protocol that employs two phases: a multiple access (MA) phase and a broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of the MA interference that occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the exclusive law. We show that every network coding map that satisfies the exclusive law is representable by a Latin square and, conversely, that this relationship can be used to obtain the network coding maps satisfying the exclusive law. The channel fade states for which the minimum distance of the effective constellation at the relay becomes zero are referred to as singular fade states. For M-PSK modulation (M any power of 2), it is shown that there are (M^2/4 - M/2 + 1)M singular fade states. It is also shown that the constraints the network coding maps should satisfy to remove the harmful effects of the singular fade states can be viewed equivalently as partially filled Latin squares (PFLS). The problem of finding all the required maps is reduced to finding a small set of maps for M-PSK constellations (M any power of 2), obtained by the completion of PFLS. Even though the completability of an M x M PFLS using M symbols is an open problem, specific cases where such a completion is always possible are identified, and explicit construction procedures are provided. Having obtained the network coding maps, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map chosen in each region.
It is shown that the complex plane can be partitioned into two regions: a region in which any network coding map that satisfies the exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance. The quantization obtained analytically, when specialized to M = 4, leads to the same quantization as the one obtained using computer search for the 4-PSK signal set by Koike-Akino et al. Simulation results show that the proposed scheme performs better than conventional exclusive-OR (XOR) network coding and in some cases outperforms the scheme proposed by Koike-Akino et al.
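The equivalence between the exclusive law and Latin squares has a simple computational check. The sketch below (illustrative, not from the paper; the bitwise-XOR map and M = 4 are example choices) verifies the equivalence for the conventional XOR map and evaluates the singular-fade-state count quoted above:

```python
# The exclusive law for a relay map f(x1, x2) requires
# f(x1, x2) != f(x1', x2) whenever x1 != x1', and similarly in x2.
# Equivalently, the M x M table of f-values is a Latin square:
# every row and every column contains each of the M symbols exactly once.

def is_latin_square(table):
    symbols = set(range(len(table)))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all(set(col) == symbols for col in zip(*table))
    return rows_ok and cols_ok

M = 4  # 4-PSK; in the paper M is any power of 2
xor_map = [[x1 ^ x2 for x2 in range(M)] for x1 in range(M)]
assert is_latin_square(xor_map)  # XOR satisfies the exclusive law

def num_singular_fade_states(M):
    # Count quoted in the abstract for M-PSK: (M^2/4 - M/2 + 1) * M
    return (M * M // 4 - M // 2 + 1) * M

print(num_singular_fade_states(4))  # 12 singular fade states for 4-PSK
```

Any Latin square (not only XOR) gives a valid map, which is what makes the PFLS-completion viewpoint useful: constraints from singular fade states pre-fill some cells, and a completion to a full Latin square yields a map satisfying the exclusive law.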
Abstract:
We consider extremal limits of the recently constructed "subtracted geometry". We show that extremality makes the horizon attractive against scalar perturbations, but radial evolution of such perturbations changes the asymptotics: from a conical box to flat Minkowski space. Thus these are black holes that retain their near-horizon geometry under perturbations that drastically change their asymptotics. We also show that this extremal subtracted solution (the "subttractor") can arise as a boundary of the basin of attraction for flat-space attractors. We demonstrate this using a fairly minimal action (with connections to the STU model) in which the equations of motion are integrable, allowing us to find analytic solutions that capture the flow from the horizon to the asymptotic region. The subttractor is a boundary between two qualitatively different flows. We expect that these results generalize to other theories with charged dilatonic black holes.
Abstract:
Estimating a program's worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested using the phases of such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which applies to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in the estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further by {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of the WCET estimate improves to 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
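Chebyshev's inequality, P(|X - mu| >= k*sigma) <= 1/k^2, holds for any distribution with finite variance, which is what makes it applicable to arbitrary CPI samples. A minimal sketch of such a bound (the function and sample values are illustrative, not taken from the paper's analyzer):

```python
import math

def cpi_upper_bound(samples, p):
    """Upper bound on CPI holding with probability at least p, via
    Chebyshev's inequality: with k = 1/sqrt(1 - p),
    P(CPI >= mu + k*sigma) <= P(|CPI - mu| >= k*sigma) <= 1/k^2 = 1 - p."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    k = 1.0 / math.sqrt(1.0 - p)  # p = 0.99 gives k = 10
    return mu + k * sigma

# Illustrative CPI samples for one phase; a high-variance phase inflates
# sigma and hence the bound, motivating refinement into sub-phases.
samples = [1.2, 1.3, 1.25, 1.4, 1.1, 1.35]
bound = cpi_upper_bound(samples, 0.99)
```

At p = 0.99 the standard deviation is multiplied by k = 10, so a phase with high CPI variance receives a very pessimistic bound; capping the variance within sub-phases, as the abstract describes, directly tightens the estimate.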
Abstract:
We give explicit constructions of vertex-transitive tight triangulations of d-manifolds for d >= 2. More explicitly, for each d >= 2, we construct two (d^2 + 5d + 5)-vertex neighborly triangulated d-manifolds whose vertex links are stacked spheres. The only other non-trivial series of such tight triangulated manifolds currently known is the series of non-simply connected triangulated d-manifolds with 2d + 3 vertices constructed by Kühnel. The manifolds we construct are strongly minimal. For d >= 3, they are also tight neighborly, as defined by Lutz, Sulanke and Swartz. Like Kühnel's complexes, our manifolds are orientable in even dimensions and non-orientable in odd dimensions.
Abstract:
The problem addressed in this paper concerns an important issue faced by any green-aware global company: keeping its emissions within a prescribed cap. The specific problem is to allocate carbon reductions to the company's different divisions and supply chain partners so as to achieve a required target of reductions in its carbon reduction program. The problem is challenging because the divisions and supply chain partners, being autonomous, may exhibit strategic behavior. We use a standard mechanism design approach to solve this problem. In designing a mechanism for the emission reduction allocation problem, the key properties to be satisfied are dominant strategy incentive compatibility (DSIC) (also called strategy-proofness), strict budget balance (SBB), and allocative efficiency (AE). Mechanism design theory has shown that it is not possible to achieve these three properties simultaneously. In the literature, a mechanism that satisfies DSIC and AE, keeping the budget imbalance minimal, has recently been proposed in this context. Motivated by the observation that SBB is an important requirement, we propose in this paper a mechanism that satisfies DSIC and SBB with a slight compromise in allocative efficiency. Our experiments with a stylized case study show that the proposed mechanism performs satisfactorily and provides an attractive alternative mechanism for carbon footprint reduction by global companies.