955 results for theorem
Abstract:
In the simple theory of flexure of beams, the slope, bending moment, shearing force, load and other quantities are functions of a derivative of y with respect to x. It is shown that the elastic curve of a transversely loaded beam can be represented by a Maclaurin series. Substitution of the values of the derivatives gives a direct solution of beam problems. In this paper the method is applied to derive the theorem of three moments and the slope-deflection equations. The method is extended to the solution of a rigid portal frame. Finally, the method is applied to deduce the results on which the moment distribution method of analyzing rigid frames is based.
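In outline (a standard Euler-Bernoulli sketch with constant EI and the usual sign conventions, which the paper may state differently), the Maclaurin coefficients of the elastic curve are fixed by the physical quantities at the origin:

```latex
EI\,y'' = M, \quad EI\,y''' = V, \quad EI\,y'''' = -w
\;\Longrightarrow\;
y(x) = y(0) + \theta(0)\,x + \frac{M(0)}{2!\,EI}\,x^{2}
     + \frac{V(0)}{3!\,EI}\,x^{3} - \frac{w(0)}{4!\,EI}\,x^{4} + \cdots
```

Substituting the known end conditions of a given beam into this series then yields its deflection directly.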
Abstract:
A formal chemical nomenclature system WISENOM, based on a context-free grammar and graph coding, is described. The system is unique, unambiguous, easily pronounceable, encodable, and decodable for organic compounds. Because it is a formal system, every name is provable as a theorem, i.e. derivable as a terminal sentence, by using the basic axioms and rewrite rules. The syntax in Backus-Naur form, examples of name derivations, and the corresponding derivation trees are provided. Encoding procedures to convert connectivity tables to WISENOM, parsing, and decoding are described.
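As a toy illustration of the derivability claim (the grammar below is invented for illustration and is not the WISENOM grammar), a name is well formed exactly when it can be rewritten from the start symbol into a terminal string:

```python
# Invented toy grammar; NOT the actual WISENOM rules.
RULES = {
    'NAME':   [['PREFIX', 'PARENT'], ['PARENT']],
    'PREFIX': [['methyl'], ['ethyl']],
    'PARENT': [['methane'], ['ethane'], ['propane']],
}

def derivable(symbols, target):
    """Can `symbols` be rewritten, leftmost-first, into the token list `target`?"""
    if all(s not in RULES for s in symbols):            # all terminals
        return symbols == target
    if len([s for s in symbols if s not in RULES]) > len(target):
        return False                                    # too many terminals already
    i = next(j for j, s in enumerate(symbols) if s in RULES)
    return any(derivable(symbols[:i] + rhs + symbols[i + 1:], target)
               for rhs in RULES[symbols[i]])

print(derivable(['NAME'], ['methyl', 'propane']))  # True: a provable "name"
print(derivable(['NAME'], ['propane', 'methyl']))  # False: not derivable
```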
Abstract:
A computational algorithm (based on Smullyan's analytic tableau method) that verifies whether a given well-formed formula in propositional calculus is a tautology has been implemented on a DECsystem-10. The stepwise-refinement approach to program development used for this implementation forms the subject matter of this paper. The top-down design has resulted in a modular and reliable program package. This computational algorithm compares favourably with the algorithm based on the well-known resolution principle used in theorem provers.
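A minimal sketch of a signed-formula tableau checker (an illustration of the method, not the DECsystem-10 package described): a formula is a tautology iff every branch of the tableau for its falsity closes on a contradiction.

```python
# Formulas are nested tuples:
# ('var', 'p'), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).

def expand(branch):
    """Apply one tableau rule; return child branches, or None if fully expanded."""
    for i, (sign, f) in enumerate(branch):
        if f[0] == 'var':
            continue
        rest = branch[:i] + branch[i + 1:]
        op = f[0]
        if op == 'not':                                   # T~X -> FX, F~X -> TX
            return [rest + [('F' if sign == 'T' else 'T', f[1])]]
        if (sign, op) in (('T', 'and'), ('F', 'or')):     # non-branching rules
            return [rest + [(sign, f[1]), (sign, f[2])]]
        if (sign, op) in (('F', 'and'), ('T', 'or')):     # branching rules
            return [rest + [(sign, f[1])], rest + [(sign, f[2])]]
        if op == 'imp':
            if sign == 'T':                               # T(X->Y): FX | TY
                return [rest + [('F', f[1])], rest + [('T', f[2])]]
            return [rest + [('T', f[1]), ('F', f[2])]]    # F(X->Y): TX, FY
    return None

def closed(branch):
    """A branch closes when some variable occurs with both signs."""
    lits = {(s, f) for s, f in branch if f[0] == 'var'}
    return any(('T', f) in lits and ('F', f) in lits for _, f in lits)

def tautology(formula):
    """Tautology iff every branch of the tableau for 'formula is false' closes."""
    stack = [[('F', formula)]]
    while stack:
        branch = stack.pop()
        if closed(branch):
            continue
        children = expand(branch)
        if children is None:        # open, fully expanded branch = countermodel
            return False
        stack.extend(children)
    return True

p, q = ('var', 'p'), ('var', 'q')
print(tautology(('imp', p, ('imp', q, p))))   # True
print(tautology(('imp', p, q)))               # False
```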
Abstract:
The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, "in," and "out" eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give rise to other eigenvectors of the Hamiltonian: the singularities of the "out" eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy's theorem then provides expansions in terms of "complete" sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the "out" eigenvectors. The free, "in," and "out" eigenvectors with complex eigenvalues and those corresponding to resonances can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for having some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee-Friedrichs model and to the scattering of a spinless particle by a local central potential.
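Schematically (a standard textbook illustration, not the paper's rigged-Hilbert-space derivation), deforming the energy integration contour in the survival amplitude past a resonance pole separates a discrete exponentially decaying term from the background:

```latex
A(t) = \langle \psi \,|\, e^{-iHt} \,|\, \psi \rangle
     = \int_{0}^{\infty} dE \; e^{-iEt} \, \bigl| \langle E | \psi \rangle \bigr|^{2}
\;\longrightarrow\;
\underbrace{R \, e^{-iE_{R}t} \, e^{-\Gamma t/2}}_{\text{pole at } z_{R} = E_{R} - i\Gamma/2}
\;+\; \text{background}.
```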
Abstract:
The recently introduced generalized pencil of Sudarshan, which gives an exact ray picture of wave optics, is analysed in some situations of interest to wave optics. A relationship between ray dispersion and statistical inhomogeneity of the field is obtained. A paraxial approximation that preserves the rectilinear propagation character of the generalized pencils is presented. Under this approximation the pencils can be computed directly from the field conditions on a plane, without the need to compute the cross-spectral density function in the entire space as an intermediate quantity. The paraxial results are illustrated with examples. The pencils are shown to exhibit an interesting scaling behaviour in the far zone. This scaling leads to a natural generalization of the Fraunhofer range criterion and of the classical van Cittert-Zernike theorem to planar sources of arbitrary state of coherence. The recently derived results of radiometry with partially coherent sources are shown to be simple consequences of this scaling.
Abstract:
The vertical uplift resistance of two interfering rigid rough strip anchors embedded horizontally in sand at shallow depths has been examined. The analysis is performed by using an upper bound theorem of limit analysis in combination with finite elements and linear programming. It is assumed that both anchors are loaded to failure simultaneously, with the same magnitude of the failure load. For different clear spacings (S) between the anchors, the magnitude of the efficiency factor (ξγ) is determined. On account of interference, the magnitude of ξγ is found to reduce continuously with a decrease in the spacing between the anchors. The results from the numerical analysis were found to compare reasonably well with the available theoretical data from the literature.
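For orientation (the abstract does not define ξγ, so the usual geotechnical convention is assumed here), the efficiency factor compares the failure load of an interfering anchor with that of an isolated one:

```latex
\xi_{\gamma} \;=\; \frac{P_{u}\,(\text{anchor at clear spacing } S)}
                        {P_{u}\,(\text{single isolated anchor})},
\qquad 0 < \xi_{\gamma} \le 1 .
```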
Abstract:
A cut (A, B) (where B = V − A) in a graph G = (V, E) is called internal if and only if there exists a vertex x in A that is not adjacent to any vertex in B, and there exists a vertex y in B that is not adjacent to any vertex in A. In this paper, we present a theorem regarding the arrangement of cliques in a chordal graph with respect to its internal cuts. Our main result is that given any internal cut (A, B) in a chordal graph G, there exists a clique with κ(G) + 1 vertices (where κ(G) is the vertex connectivity of G) that is (approximately) bisected by the cut (A, B). In fact we give a stronger result: for any internal cut (A, B) of a chordal graph and for each i, 0 ≤ i ≤ κ(G) + 1, there exists a clique K_i such that |K_i| = κ(G) + 1, |A ∩ K_i| = i and |B ∩ K_i| = κ(G) + 1 − i. An immediate corollary of the above result is that the number of edges in any internal cut (of a chordal graph) must be Ω(k²), where κ(G) = k. Prompted by this observation, we investigate the size of internal cuts in terms of the vertex connectivity of chordal graphs. As a corollary, we show that in chordal graphs, if the edge connectivity is strictly less than the minimum degree, then the size of the mincut is at least κ(G)(κ(G) + 1)/2, where κ(G) denotes the vertex connectivity. In contrast, in a general graph the size of the mincut can be equal to κ(G). This result is tight.
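The Ω(k²) corollary follows in one line: a clique with i vertices in A and κ + 1 − i vertices in B contributes i(κ + 1 − i) edges to the cut, so the approximately bisected clique alone gives

```latex
e(A,B) \;\ge\; \max_{0 \le i \le \kappa+1} i\,(\kappa + 1 - i)
       \;=\; \Bigl\lceil \tfrac{\kappa+1}{2} \Bigr\rceil
             \Bigl\lfloor \tfrac{\kappa+1}{2} \Bigr\rfloor
       \;=\; \Omega(\kappa^{2}).
```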
Abstract:
This paper describes a switching-theoretic algorithm for the folding of programmable logic arrays (PLAs). The algorithm is valid for both column and row folding, although it is presented here considering only simple column folding. The pairwise compatibility relations among all pairs of columns of the PLA are mapped into a square matrix, called the compatibility matrix of the PLA. A foldable compatibility matrix (FCM), a new concept introduced by the author, is then derived from the compatibility matrix. A new theorem, called the folding theorem, is then proved. The theorem states that the existence of an m by 2m FCM is both necessary and sufficient to fold 2m columns of the n-column PLA (2m ≤ n). Once an FCM is obtained, the ordered pairs of foldable columns and the re-ordering of the rows are readily determined.
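A rough sketch of the pairwise-compatibility step only (a simplification: the FCM construction and row re-ordering of the paper are not reproduced here). In simple column folding, two columns that share no row of the PLA's personality matrix are candidates for folding onto one physical column.

```python
import numpy as np

# Toy personality matrix: entry 1 means the column is used in that row.
P = np.array([[1, 0, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 1]])

n_rows, n_cols = P.shape
compat = np.zeros((n_cols, n_cols), dtype=bool)
for a in range(n_cols):
    for b in range(a + 1, n_cols):
        # Columns sharing no row are simple-fold candidates.
        if not np.any(P[:, a] & P[:, b]):
            compat[a, b] = compat[b, a] = True

print(compat.astype(int))   # the compatibility matrix of this toy PLA
```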
Abstract:
The stress problem of two equal circular elastic inclusions in a pressurised cylindrical shell has been solved by using single-inclusion solutions together with Graf's addition theorem. The effect of the inter-inclusion distance on the interface stresses in the shell as well as in the inclusion is studied. The results obtained for small values of the curvature parameter β (β² = (a²/8Rt)[12(1 − ν²)]^(1/2), a, R, t being the inclusion radius and the shell radius and thickness), when compared with the flat-plate results, show good agreement. The results, obtained in non-dimensional form, are presented graphically.
Abstract:
Brooks' theorem says that if for a graph G, Δ(G) = n, then G is n-colourable, unless (1) n = 2 and G has an odd cycle as a component, or (2) n > 2 and K_{n+1} is a component of G. In this paper we prove that if a graph G has none of three graphs (K_{1,3}, K_5 − e and H) as an induced subgraph and if Δ(G) ≥ 6 and d(G) < Δ(G), then χ(G) < Δ(G). Also we give examples to show that the hypothesis Δ(G) ≥ 6 cannot be non-trivially relaxed and that the graph K_5 − e cannot be removed from the hypothesis. Moreover, for a graph G with none of K_{1,3}, K_5 − e and H as an induced subgraph, we verify Borodin and Kostochka's conjecture that if Δ(G) ≥ 9 and d(G) < Δ(G), then χ(G) < Δ(G).
Abstract:
We carried out a discriminant analysis with identity by descent (IBD) at each marker as inputs, and the sib-pair type (affected-affected versus affected-unaffected) as the output. Using simple logistic regression for this discriminant analysis, we illustrate the importance of comparing models with different numbers of parameters. Such model comparisons are best carried out using either the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). When AIC (or BIC) stepwise variable selection was applied to the German Asthma data set, a group of 25-26 markers was selected which provides the best fit to the data (assuming an additive effect). Interestingly, these markers were not identical to those with the highest (in magnitude) single-locus lod scores.
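A minimal sketch of the model-comparison step (invented stand-in data; statsmodels is assumed, and the German Asthma data set is not reproduced): AIC = 2k − 2 log L and BIC = k log n − 2 log L are computed for logistic models with different numbers of markers, and the smaller value indicates the better model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # stand-in per-marker IBD inputs
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)  # sib-pair type

def information_criteria(cols):
    """Fit a logistic model on the given columns and return (AIC, BIC)."""
    fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
    k, n = len(cols) + 1, len(y)                 # parameters incl. intercept
    return 2 * k - 2 * fit.llf, k * np.log(n) - 2 * fit.llf

print(information_criteria([0]))                 # one-marker model
print(information_criteria([0, 1, 2]))           # lower AIC/BIC = better fit
```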
Abstract:
Efforts to combine quantum theory with general relativity have been great and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing for the first time that the light-wedge microcausality condition is compatible with the twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.
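For orientation, the noncommutativity and the twist in question are of the standard Moyal type (standard expressions quoted here, rather than taken from the thesis itself):

```latex
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu}, \qquad
\mathcal{F} = \exp\!\Bigl( \tfrac{i}{2}\, \theta^{\mu\nu}\, \partial_{\mu} \otimes \partial_{\nu} \Bigr).
```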
Abstract:
The parametric resonance in a system having two modes of the same frequency is studied. The simultaneous occurrence of instabilities of the first and second kind is examined by using a generalized perturbation procedure. The region of instability in the first approximation is obtained by using Sturm's theorem for the roots of a polynomial equation.
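A small illustration (invented polynomial, not the paper's stability polynomial) of how Sturm's theorem counts real roots in an interval, which is what delimits the instability region:

```python
from sympy import symbols, sturm, Poly

x = symbols('x')
p = Poly(x**4 - 5*x**2 + 4, x)          # real roots at -2, -1, 1, 2
chain = sturm(p)                        # the Sturm sequence of p

def sign_changes(values):
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def roots_in(a, b):
    """Number of distinct real roots in (a, b]: V(a) - V(b)."""
    return (sign_changes([q.eval(a) for q in chain])
            - sign_changes([q.eval(b) for q in chain]))

print(roots_in(0, 3))                   # 2 (namely x = 1 and x = 2)
```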
Abstract:
This paper describes the application of vector spaces over Galois fields for obtaining a formal description of a picture in the form of a very compact, non-redundant, unique syntactic code. Two different methods of encoding are described. Both methods consist in identifying the given picture with a matrix (called the picture matrix) over a finite field. In the first method, the eigenvalues and eigenvectors of this matrix are obtained. The eigenvector expansion theorem is then used to reconstruct the original matrix. If several of the eigenvalues happen to be zero, this scheme results in a considerable compression. In the second method, the picture matrix is reduced to a primitive diagonal form (Hermite canonical form) by elementary row and column transformations. These sequences of elementary transformations constitute a unique and unambiguous syntactic code, called the Hermite code, for reconstructing the picture from the primitive diagonal matrix. A good compression of the picture results if the rank of the matrix is considerably lower than its order. An important aspect of this code is that it preserves the neighbourhood relations in the picture, and the primitive remains invariant under translation, rotation, reflection, enlargement and replication. It is also possible to derive the codes for these transformed pictures from the Hermite code of the original picture by simple algebraic manipulation. This code will find extensive applications in picture compression, storage, retrieval, transmission and in designing pattern recognition and artificial intelligence systems.
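A toy sketch of the second method (assumptions: a binary picture matrix over GF(2); only row operations are shown, so the result is an echelon form rather than the full primitive diagonal form): the recorded sequence of elementary operations plays the role of the Hermite code.

```python
import numpy as np

def hermite_code(M):
    A = M.copy() % 2
    ops = []                                   # the syntactic code
    n, m = A.shape
    r = 0
    for c in range(m):
        rows = [i for i in range(r, n) if A[i, c]]
        if not rows:
            continue
        if rows[0] != r:
            A[[r, rows[0]]] = A[[rows[0], r]]
            ops.append(('swap_rows', r, rows[0]))
        for i in range(n):
            if i != r and A[i, c]:
                A[i] ^= A[r]                   # add row r to row i (mod 2)
                ops.append(('add_row', r, i))
        r += 1
    return A, ops

pic = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
reduced, code = hermite_code(pic)
print(reduced)   # rank 2 of 3: the zero row is the compression gain
print(code)      # operations linking pic and `reduced`; each is self-inverse over GF(2)
```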
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox provided herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.