167 results for Hasse Theorem
Abstract:
Let G be the group . For this group we prove a version of Schwartz's theorem on spectral analysis for the group G. We find the sharp range of Lebesgue spaces L^p(G) for which a smooth function is not mean periodic unless it is a cusp form. Failure of the Schwartz-like theorem is also proved when C^a(G) is replaced by L^p(G) with suitable p. We show that the last result is linked with the failure of the Wiener-Tauberian theorem for G.
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using Markov chain Monte Carlo simulation with a Metropolis-Hastings algorithm has proved highly efficient at addressing this issue; it is also quite flexible and applicable to any type of prior information. In this paper, this method is used to back-analyze the parameters that influence pavement life and to account for the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using a response surface methodology model developed from the results of linear elastic analysis. The failure criterion adopted for the analysis was based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. For the case considered, the posterior statistics show that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the elastic moduli of the affected layers. An analysis of the posterior statistics indicated that the parameters contributing most significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings of other studies.
After the back analysis, the mean value of the base modulus shows a significant decrease of 15.8% and that of the surface layer modulus a decrease of 3.12%. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, reliabilities of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be used effectively to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455. (C) 2013 American Society of Civil Engineers.
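The Metropolis-Hastings step of such a probabilistic back analysis can be sketched in miniature. Everything below is hypothetical and illustrative only: the response model (a simple reciprocal deflection law), the prior, the noise level, and all numbers are stand-ins, not values from the paper; the sketch only shows the accept/reject mechanics and the role of the jumping distribution.

```python
import math
import random

# Hypothetical back analysis: infer a layer modulus E from one noisy
# deflection observation, with a Gaussian prior and Gaussian noise.

def log_posterior(E, obs, prior_mean=3000.0, prior_sd=600.0, noise_sd=5.0):
    predicted = 1e5 / E                       # illustrative response model
    log_prior = -0.5 * ((E - prior_mean) / prior_sd) ** 2
    log_like = -0.5 * ((obs - predicted) / noise_sd) ** 2
    return log_prior + log_like

def metropolis_hastings(obs, n_samples=20000, step=100.0, start=3000.0, seed=1):
    random.seed(seed)
    E, samples = start, []
    lp = log_posterior(E, obs)
    for _ in range(n_samples):
        cand = E + random.gauss(0.0, step)    # symmetric jumping distribution
        lp_cand = log_posterior(cand, obs)
        if math.log(random.random()) < lp_cand - lp:
            E, lp = cand, lp_cand             # accept the candidate
        samples.append(E)
    return samples

samples = metropolis_hastings(obs=40.0)       # hypothetical observed deflection
burn = samples[5000:]                         # discard burn-in
print(sum(burn) / len(burn))                  # posterior mean of the modulus
```

The posterior mean moves from the prior mean toward the value implied by the measurement, which is the qualitative behaviour (a drop in the back-analyzed modulus) described above.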
Abstract:
The study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
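The exactly solvable linear-Gaussian case mentioned above admits a closed-form Bayesian update. A minimal sketch with illustrative numbers (the scalar model y = a·x + noise and all values are assumptions, not from the paper):

```python
# Closed-form posterior for a Gaussian prior x ~ N(mu0, var0) observed
# through a linear response y_obs = a*x + N(0, noise_var).

def gaussian_linear_update(mu0, var0, a, noise_var, y_obs):
    """Return posterior (mean, variance) of x given the measured response."""
    post_var = 1.0 / (1.0 / var0 + a * a / noise_var)
    post_mu = post_var * (mu0 / var0 + a * y_obs / noise_var)
    return post_mu, post_var

# prior belief x ~ N(10, 4); measurement y = 24 with gain a = 2, noise var 1
mu, var = gaussian_linear_update(mu0=10.0, var0=4.0, a=2.0, noise_var=1.0, y_obs=24.0)
print(mu, var)   # posterior shifts toward x = 12 and tightens
```

The posterior precision is the sum of prior and data precisions, which is the scalar version of the exact solution for linear systems with Gaussian uncertainties.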
Abstract:
Automated image segmentation techniques are useful tools in biological image analysis and an essential step in tracking applications. Typically, snakes or active contours are used for segmentation, and they evolve under the influence of certain internal and external forces. Recently, a new class of shape-specific active contours has been introduced, known as Snakuscules and Ovuscules. These contours use a pair of concentric circles or ellipses, respectively, as the shape template, and the optimization is carried out by maximizing a contrast function between the outer and inner templates. In this paper, we present a unified approach to the formulation and optimization of Snakuscules and Ovuscules by considering a specific form of affine transformations acting on a pair of concentric circles. We show how the parameters of the affine transformation may be optimized to generate either Snakuscules or Ovuscules. Our approach allows for a unified formulation and relies only on generic regularization terms rather than shape-specific regularization functions. We show how the calculation of the partial derivatives may be made efficient thanks to Green's theorem. Results on synthetic as well as real data are presented.
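Green's theorem is what makes such contour computations efficient: it converts an integral over the region enclosed by the contour into an integral along the boundary alone. A toy illustration of the same identity, computing an enclosed area purely from boundary points via (1/2)∮(x dy − y dx), i.e. the shoelace formula (the polygon here is illustrative):

```python
# Area of a closed polygonal contour from its boundary only, using
# Green's theorem: area = (1/2) * sum over edges of (x0*y1 - x1*y0).

def polygon_area(pts):
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]     # wrap around to close the contour
        s += x0 * y1 - x1 * y0
    return 0.5 * s

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(polygon_area(square))           # 4.0
```

No interior pixels are visited, which is exactly the saving Green's theorem provides when evaluating region energies and their partial derivatives on a contour.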
Abstract:
The classical approach to A/D conversion has been uniform sampling, where perfect reconstruction of bandlimited signals is guaranteed by satisfying the Nyquist sampling theorem. We propose a non-uniform sampling scheme based on level crossing (LC) time information. We show stable reconstruction of bandpass signals with the correct scale factor, and hence a unique reconstruction, from the non-uniform time information alone. For reconstruction from the level crossings we use sparsity-based optimization, constraining the bandpass signal to be sparse in its frequency content. While the literature resorts to an overdetermined system of equations, we use an underdetermined approach along with a sparse reconstruction formulation. We obtain a reconstruction SNR > 20 dB and perfect support recovery with probability close to 1 in the noiseless case, and with lower probability in the noisy case. Random picking of LCs from different levels, over the same limited signal duration and for the same amount of information, is seen to be advantageous for reconstruction.
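The first step of such a scheme, extracting level-crossing times from a waveform, can be sketched as follows. The sinusoidal test signal, sampling grid, and level are illustrative assumptions, and the sparse reconstruction stage is omitted; crossing instants are located by linear interpolation between the two bracketing samples.

```python
import math

def level_crossings(t, x, level):
    """Return interpolated time instants where x crosses the given level."""
    times = []
    for i in range(len(x) - 1):
        a, b = x[i] - level, x[i + 1] - level
        if a == 0.0:
            times.append(t[i])                 # sample sits exactly on the level
        elif a * b < 0.0:                      # sign change brackets a crossing
            frac = a / (a - b)                 # linear interpolation
            times.append(t[i] + frac * (t[i + 1] - t[i]))
    return times

t = [i / 1000.0 for i in range(1001)]                  # 1 s at 1 kHz
x = [math.sin(2 * math.pi * 5 * ti) for ti in t]       # 5 Hz tone
zc = level_crossings(t, x, 0.0)
print(len(zc))                                         # about 10 zero crossings
```

The list of crossing times is the only information retained, which is the non-uniform "sample set" that the sparse reconstruction then works from.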
Abstract:
This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors as well as at the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric that can be used to compare different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of the error exponent performance. The error exponents with a certain probability at both the individual sensors and at the fusion center are derived, with both Rayleigh as well as log-normal shadow fading. Numerical results are used to illustrate and provide a visual feel for the theoretical expressions obtained.
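The K-out-of-N fusion rule above can be illustrated numerically. Assuming identical, independent sensors with hypothetical per-sensor detection and false-alarm probabilities (the values below are stand-ins, not from the paper), the overall Bayesian error probability and the best voting threshold K are:

```python
from math import comb

def at_least_k(n, k, p):
    """P(at least k of n independent sensors fire, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def fusion_error(n, k, pd, pf, prior_busy=0.5):
    """Overall probability of error of the K-out-of-N rule under equal priors."""
    p_miss = 1.0 - at_least_k(n, k, pd)   # spectrum busy but declared idle
    p_fa = at_least_k(n, k, pf)           # spectrum idle but declared busy
    return prior_busy * p_miss + (1 - prior_busy) * p_fa

# sweep the voting threshold K for N = 10 sensors, pd = 0.9, pf = 0.1
errors = {k: fusion_error(10, k, pd=0.9, pf=0.1) for k in range(1, 11)}
best_k = min(errors, key=errors.get)
print(best_k, errors[best_k])
```

With symmetric pd and pf the optimum sits near the majority rule, and the error probability falls steeply with N, the behaviour that the error-exponent analysis quantifies.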
Abstract:
We show that Riesz transforms associated to the Grushin operator G = -Δ - |x|² ∂²/∂t² are bounded on L^p(R^(n+1)). We also establish an analogue of the Hörmander-Mihlin multiplier theorem and study Bochner-Riesz means associated to the Grushin operator. The main tools used are Littlewood-Paley theory and an operator-valued Fourier multiplier theorem due to L. Weis.
Abstract:
Let M be the completion of the polynomial ring C[z] with respect to some inner product, and for any ideal I ⊂ C[z], let [I] be the closure of I in M. For a homogeneous ideal I, the joint kernel of the submodule [I] ⊂ M is shown, after imposing some mild conditions on M, to be the linear span of the set of vectors {p_i(∂/∂w̄_1, ..., ∂/∂w̄_m) K_[I](·, w)|_{w=0}, 1 ≤ i ≤ t}, where K_[I] is the reproducing kernel for the submodule [I] and p_1, ..., p_t is some minimal "canonical set of generators" for the ideal I. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem" using the sheaf model for Hilbert modules over polynomial rings is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form [I]. We show that the curvature, or even its restriction to the exceptional set, of this line bundle is an invariant for the unitary equivalence class of [I]. Several examples are given to illustrate the explicit computation of these invariants.
Abstract:
Recent data from high-statistics experiments that have measured the modulus of the pion electromagnetic form factor from threshold to relatively high energies are used as input in a suitable mathematical framework of analytic continuation to find stringent constraints on the shape parameters of the form factor at t = 0. The method also uses as input a precise description of the phase of the form factor in the elastic region, based on the Fermi-Watson theorem and the analysis of the ππ scattering amplitude with dispersive Roy equations, as well as some information on the spacelike region coming from recent high-precision experiments. Our analysis confirms the inconsistency of several data on the modulus, especially from low energies, with analyticity and the input phase, noted in our earlier work. Using the data on the modulus from energies above 0.65 GeV, we obtain, with no specific parametrisation, the prediction ⟨r_π²⟩ ∈ (0.42, 0.44) fm² for the charge radius. The same formalism also leads to very narrow allowed ranges for the higher-order shape parameters at t = 0, with a strong correlation among them.
Abstract:
Entropy is a fundamental thermodynamic property that has attracted wide attention across domains, including chemistry. Inference of the entropy of chemical compounds using various approaches has been a widely studied topic. However, many aspects of entropy in chemical compounds remain unexplained. In the present work, we propose two new information-theoretical molecular descriptors for the prediction of the gas phase thermal entropy of organic compounds. The descriptors reflect the bulk and size of the compounds as well as the gross topological symmetry in their structures, all of which are believed to determine entropy. A high correlation between the entropy values and our information-theoretical indices has been found, and the entropy values predicted by the corresponding statistically significant regression model are within acceptable approximation. We provide an additional mathematical result, in the form of a theorem and proof, that might further help in assessing changes in gas phase thermal entropy values with changes in molecular structure. The proposed information-theoretical molecular descriptors, regression model, and mathematical result are expected to augment predictions of gas phase thermal entropy for a large number of chemical compounds.
Abstract:
The von Neumann entropy of a generic quantum state is not unique unless the state can be uniquely decomposed as a sum of extremal or pure states. As pointed out to us by Sorkin, this happens if the GNS representation (of the algebra of observables in some quantum state) is reducible, and some representations in the decomposition occur with non-trivial degeneracy. This non-unique entropy can occur at zero temperature. We will argue elsewhere in detail that the degeneracies in the GNS representation can be interpreted as an emergent broken gauge symmetry, and play an important role in the analysis of emergent entropy due to non-Abelian anomalies. Finally, we establish the analogue of an H-theorem for this entropy by showing that its evolution is Markovian, determined by a stochastic matrix.
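The H-theorem property claimed above, entropy evolving monotonically under a Markovian map, can be checked on a toy example. The sketch below assumes a doubly stochastic matrix (rows and columns summing to 1), for which Shannon entropy is provably non-decreasing; the matrix and initial state are illustrative, not taken from the paper.

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (0 log 0 taken as 0)."""
    return -sum(x * log(x) for x in p if x > 0.0)

def step(P, p):
    """One Markovian step p' = P p."""
    return [sum(P[i][j] * p[j] for j in range(len(p))) for i in range(len(P))]

P = [[0.8, 0.1, 0.1],          # doubly stochastic evolution matrix
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
p = [1.0, 0.0, 0.0]            # start in a pure (zero-entropy) state
hs = []
for _ in range(5):
    hs.append(entropy(p))
    p = step(P, p)
hs.append(entropy(p))
print(hs[0], hs[-1])           # entropy climbs from 0 toward log 3
```

Each step can only increase (or preserve) the entropy, which is the discrete analogue of the H-theorem behaviour described for the GNS-degeneracy entropy.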
Abstract:
Given a smooth, projective variety Y over an algebraically closed field of characteristic zero and a smooth, ample hyperplane section X ⊂ Y, we study the question of when a bundle E on X extends to a bundle ε on a Zariski open set U ⊂ Y containing X. The main ingredients used are explicit descriptions of various obstruction classes in the deformation theory of bundles, together with Grothendieck-Lefschetz theory. As a consequence, we prove a Noether-Lefschetz theorem for higher-rank bundles, which recovers and unifies the Noether-Lefschetz theorems of Joshi and of Ravindra-Srinivas.
Abstract:
By applying the lower bound theorem of limit analysis in conjunction with finite elements and nonlinear optimization, the bearing capacity factor N has been computed for a rough strip footing by incorporating pseudostatic horizontal seismic body forces. As compared with different existing approaches, the present analysis is more rigorous, because it does not require an assumption of either the failure mechanism or the variation of the ratio of the shear to the normal stress along the footing-soil interface. The magnitude of N decreases considerably with an increase in the horizontal seismic acceleration coefficient k_h. With an increase in k_h, a continuous spread in the extent of the plastic zone toward the direction of the horizontal seismic body force is noted. The results obtained in this paper have been found to compare well with the solutions reported in the literature. (C) 2013 American Society of Civil Engineers.
Abstract:
We study the structure constants of the N = 1 beta-deformed theory perturbatively and at strong coupling. We show that the planar one-loop corrections to the structure constants of single-trace gauge-invariant operators in the scalar sector are determined by the anomalous dimension Hamiltonian. This result implies that three-point functions of the chiral primaries of the theory do not receive corrections at one loop. We then study the structure constants at strong coupling using the Lunin-Maldacena geometry. We explicitly construct the supergravity mode dual to the chiral primary with three equal U(1) R-charges in the Lunin-Maldacena geometry. We show that the three-point function of this supergravity mode with semi-classical states representing two other similar chiral primary states, but with large U(1) charges, is independent of the beta deformation and identical to that found in the AdS_5 x S^5 geometry. This, together with the one-loop result, indicates that these structure constants are protected by a non-renormalization theorem. We also show that the three-point function of U(1) R-currents with classical massive strings is proportional to the R-charge carried by the string solution. This is in accordance with the prediction of the R-symmetry Ward identity.
Abstract:
Is the Chandrasekhar mass limit for white dwarfs (WDs) set in stone? Not anymore: recent observations of over-luminous, peculiar type Ia supernovae can be explained if significantly super-Chandrasekhar WDs exist as their progenitors, thus barring their use as cosmic distance indicators. However, there is as yet no estimate of a mass limit for these super-Chandrasekhar WD candidates. Can they be arbitrarily large? In fact, the answer is no! We arrive at this revelation by exploiting the flux-freezing theorem in observed, accreting, magnetized WDs, which brings in Landau quantization of the underlying electron degenerate gas. This essay presents the calculations that pave the way for the ultimate (significantly super-Chandrasekhar) mass limit of WDs, heralding a paradigm shift 80 years after Chandrasekhar's discovery.