944 results for "Liapunov convexity theorem"


Relevance:

10.00%

Publisher:

Abstract:

We give an explicit, direct, and fairly elementary proof that the radial energy eigenfunctions for the hydrogen atom in quantum mechanics, bound and scattering states included, form a complete set. The proof uses only some properties of the confluent hypergeometric functions and the Cauchy residue theorem from analytic function theory; therefore it would form useful supplementary reading for a graduate course on quantum mechanics.

Relevance:

10.00%

Publisher:

Abstract:

Some theorems derived recently by the authors on the stability of multidimensional linear time-varying systems are reported in this paper. To begin with, criteria based on Liapunov's direct method are stated. These are followed by conditions on the asymptotic behaviour and boundedness of solutions. Finally, the L2 and L∞ stabilities of these systems are discussed. In conclusion, mention is made of some of the problems in aerospace engineering to which these theorems have been applied.
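As background for Liapunov's direct method, here is a minimal numerical sketch. It treats a time-invariant linear system x' = Ax (simpler than the time-varying systems of the paper): one solves the Lyapunov equation AᵀP + PA = −Q for P and uses V(x) = xᵀPx as a stability certificate. The matrix A below is an arbitrary illustrative example, not taken from the paper.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q via the Kronecker-product formulation."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
P = solve_lyapunov(A, np.eye(2))

# P is symmetric positive definite, so V(x) = x^T P x > 0, while
# dV/dt = -x^T Q x < 0 along solutions: asymptotic stability certified.
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
```

For a stable A such a positive definite solution always exists; for time-varying systems the paper's criteria replace this constant P with time-dependent constructions.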

Relevance:

10.00%

Publisher:

Abstract:

In this thesis I study various quantum coherence phenomena and lay some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics, and the related terminology. It is worth noticing that quantum coherence is a conserved quantity that can be exactly defined. I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence: their entropy is small and their coherence is large, so they exhibit strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate mainly on collective association of atoms into molecules, Rabi oscillations, and decoherence. It turns out that collective association and oscillations do not depend on the spin-statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept even in systems that may experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: studying the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum physical universe appears very classical-like to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem in logic.
According to this principle, it is impossible to describe a closed system completely from within the system, and the inside and outside descriptions of the system can differ remarkably. Understanding this feature may make it possible to comprehend coarse-graining better and to define uniquely the mutual entanglement of quantum systems.
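The thesis proposes its own density-matrix definition of coherence, which the abstract does not spell out. Purely to illustrate the idea that coherence resides in the off-diagonal elements of the density matrix, here is a sketch using the generic l1-norm coherence measure; this particular measure is my assumption for illustration, not necessarily the thesis's definition.

```python
import numpy as np

def l1_coherence(rho):
    """Sum of absolute values of the off-diagonal elements of rho
    (the l1-norm coherence measure, basis-dependent by construction)."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition state
rho_pure = np.outer(plus, plus)            # coherent pure state
rho_mixed = np.diag([0.5, 0.5])            # fully decohered mixture
```

The pure superposition has coherence 1, while the decohered mixture, with the same diagonal populations, has coherence 0, which is exactly the distinction decoherence erases.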

Relevance:

10.00%

Publisher:

Abstract:

The relationship between site characteristics and understorey vegetation composition was analysed with quantitative methods, especially from the viewpoint of site quality estimation. Theoretical models were applied to an empirical data set collected from the upland forests of southern Finland comprising 104 sites dominated by Scots pine (Pinus sylvestris L.), and 165 sites dominated by Norway spruce (Picea abies (L.) Karsten). Site index H100 was used as an independent measure of site quality. A new model for the estimation of site quality at sites with a known understorey vegetation composition was introduced. It is based on the application of Bayes' theorem to the density function of site quality within the study area combined with the species-specific presence-absence response curves. The resulting posterior probability density function may be used for calculating an estimate for the site variable. Using this method, a jackknife estimate of site index H100 was calculated separately for pine- and spruce-dominated sites. The results indicated that the cross-validation root mean squared error (RMSEcv) of the estimates improved from 2.98 m down to 2.34 m relative to the "null" model (standard deviation of the sample distribution) in pine-dominated forests. In spruce-dominated forests RMSEcv decreased from 3.94 m down to 3.16 m. In order to assess these results, four other estimation methods based on understorey vegetation composition were applied to the same data set. The results showed that none of the methods was clearly superior to the others. In pine-dominated forests, RMSEcv varied between 2.34 and 2.47 m, and the corresponding range for spruce-dominated forests was from 3.13 to 3.57 m.
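The Bayes-theorem estimator described above can be sketched as follows. The logistic response curves, the Gaussian prior, and all numbers below are invented for illustration; they are not the fitted curves or distributions of the study.

```python
import numpy as np

def logistic_response(H, midpoint, slope):
    """P(species present | site index H): a hypothetical presence-absence curve."""
    return 1.0 / (1.0 + np.exp(-slope * (H - midpoint)))

def posterior_site_index(observations, H_grid, prior):
    """Posterior density of site index given (midpoint, slope, present) per species."""
    post = prior.copy()
    for midpoint, slope, present in observations:
        p = logistic_response(H_grid, midpoint, slope)
        post = post * (p if present else 1.0 - p)   # Bayes: prior times likelihood
    return post / post.sum()                        # normalize on the grid

H = np.linspace(10.0, 35.0, 501)                    # site index H100 grid (m)
prior = np.exp(-0.5 * ((H - 24.0) / 4.0) ** 2)      # Gaussian prior for the area
prior = prior / prior.sum()

# Three hypothetical species observations: (midpoint, slope, present)
obs = [(20.0, 0.8, True), (28.0, 0.5, False), (18.0, 1.2, True)]
post = posterior_site_index(obs, H, prior)
estimate = (H * post).sum()                         # posterior-mean H100 estimate
```

The study's own estimator combines the prior density of site quality in the study area with species-specific presence-absence response curves in exactly this multiplicative way; the posterior mean is one natural point estimate derived from the resulting density.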

Relevance:

10.00%

Publisher:

Abstract:

We derive the Langevin equations for a spin interacting with a heat bath, starting from a fully dynamical treatment. The equations obtained are non-Markovian, with multiplicative fluctuations and concomitant dissipative terms obeying the fluctuation-dissipation theorem. In the Markovian limit our equations reduce to the phenomenological equations proposed by Kubo and Hashitsume. A perturbative treatment of our equations leads to the Landau-Lifshitz equations and to other known results in the literature.
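For orientation, the Landau-Lifshitz form that the perturbative treatment reduces to can be written in its standard textbook form (the authors' notation and conventions may differ):

```latex
\frac{d\mathbf{M}}{dt} = -\gamma\,\mathbf{M}\times\mathbf{H}
  - \frac{\lambda}{|\mathbf{M}|}\,\mathbf{M}\times\left(\mathbf{M}\times\mathbf{H}\right)
```

where γ is the gyromagnetic ratio and λ a phenomenological damping parameter; the first term describes precession about the field and the second the damped relaxation toward it.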

Relevance:

10.00%

Publisher:

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed a sampling design for the Current Population Survey (CPS) in the 1940s. Another significant factor was that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, published in a memoir in 1774 that is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he represented by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples: the sample should be a miniature of the population. This idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed a theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ingredients are repeated sampling from the same population and the assumption that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method whose data collection costs were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.

Relevance:

10.00%

Publisher:

Abstract:

Photoelectron spectroscopy (PES) provides valuable information on the ionization energies of atoms and molecules. The ionization energy (IE) is given by the relation hν = IE + T, where hν is the energy of the radiation and T is the kinetic energy of the electron. The IEs are directly related to the orbital energies (Koopmans' theorem). By employing UV radiation (HeI, 21.2 eV, or HeII, 40.8 eV), extensive data on the ionization of valence electrons in organic molecules have been obtained in recent years. These studies of UV photoelectron spectroscopy, originated by Turner, have provided a direct probe into the energy levels of organic molecules. Molecular orbital calculations of various degrees of sophistication are generally employed to make assignments of the PES bands. Analysis of the vibrational structure of PES bands has not only provided structural information on the molecular ions, but has also been of value in band assignments. Dewar and co-workers [1, 2] presented summaries of available PES data on organic molecules in 1969 and 1970. Turner et al. [3] published a handbook of HeI spectra of organic molecules in 1970. Since then, a few books [4-7] discussing the principles and applications of UV photoelectron spectroscopy have appeared, of which special mention should be made of the recent article by Heilbronner and Maier [7]. There has, however, been no comprehensive review of the vast amount of data on the UV-PES of organic molecules published in the literature since 1970.
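The bookkeeping relation hν = IE + T above can be turned into a one-line helper. The kinetic energy in the example is an invented illustrative number; only the HeI photon energy of 21.2 eV comes from the text.

```python
def ionization_energy(photon_energy_eV, kinetic_energy_eV):
    """From hv = IE + T: the ionization energy is IE = hv - T."""
    return photon_energy_eV - kinetic_energy_eV

# A photoelectron ejected by HeI radiation (21.2 eV) and detected with a
# kinetic energy of 10.9 eV corresponds to an ionization energy of 10.3 eV.
ie = ionization_energy(21.2, 10.9)
```

Scanning T over all detected electrons and histogramming the resulting IEs is precisely what produces the photoelectron spectrum whose bands are then assigned with molecular orbital calculations.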

Relevance:

10.00%

Publisher:

Abstract:

A BEM formulation to obtain the inelastic response of R.C. Beam-Column joints subjected to sinusoidal loading along the boundary is presented. The equations of motion are written along with kinematical and constitutive equations. The dynamic reciprocal theorem is presented and the temporal dependence is removed by assuming steady state response.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis was to apply the theoretical and empirical foundations of repeated games to a Finnish data set. The operating dynamics of a cartel are modelled as a repeated game, a branch of game theory in which the same one-shot game is played for several rounds. The general theory of repeated games (the Folk Theorem) arises from the infinitely repeated game, in which each player has an individually rational behaviour cycle; cooperation with the other player increases the total payoff a player accumulates over that cycle. Cartel research cannot ignore the legal perspective, so it is also included in condensed form. In a silent or implicit cartel ("tacit collusion") there is, unlike in an open cartel, no communication between the parties, but the outcome is the same. For this reason a tacit cartel is prohibited as a concerted practice. Since the indicators are also partly the same, cartel research has obtained valuable measurement data from the behaviour of exposed cartels. Even research based solely on price data has a solid theoretical and empirical foundation. In the legal literature and in practice, price uniformity, together with other indicators, has been regarded as circumstantial evidence of a cartel. The retail gasoline market is structurally a favourable field for a repeated game. The empirical part of the thesis examined the retail gasoline market of the Helsinki metropolitan area; the data set contained samples of price time series from 1 August 2004 to 30 June 2005 for a total of 116 filling stations in Espoo, Helsinki and Vantaa. The research method was repeated-measures analysis of variance with post hoc comparisons. Statistically significant pricing uniformity among nearby stations was found at 47 stations; these stations thus exhibit one of the indicators of a cartel.
The stations exhibiting pricing uniformity formed clusters within competition areas delimited by traffic connections; in all there were 21 such uniformly pricing clusters. Of these clusters, 9 were so-called mixed pairs, i.e. consisting of an unmanned station and a full-service station. In most cases the unmanned station was the most expensively priced in its area. The main sources of the thesis: Abrantes-Metz, Rosa M. – Froeb, Luke M. – Geweke, John F. – Taylor, Cristopher T. (2005): A Variance Screen for Collusion. Working paper no. 275, Bureau of Economics, Federal Trade Commission, Washington DC 20580. Dutta, Prajit K. (1999): Strategies and Games: Theory and Practice. The MIT Press, Cambridge, Massachusetts, London, England. Harrington, Joseph E. (2004): Detecting Cartels. Working paper, Johns Hopkins University. Ivaldi, Marc – Jullien, Bruno – Rey, Patric – Seabright, Paul – Tirole, Jean (2003): The Economics of Tacit Collusion. Publication of the Directorate-General for Competition of the European Commission. Phlips, Louis (1996): On the detection of collusion and predation. European Economic Review 40 (1996), 495–510.
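A minimal price-based screen in the spirit of the sources above (e.g., the variance screen of Abrantes-Metz et al.) can be sketched as follows. This is a hypothetical illustration, not the repeated-measures ANOVA procedure actually used in the thesis; all thresholds and prices are invented.

```python
import numpy as np

def screen_station_pairs(prices, corr_min=0.95, cv_max=0.02):
    """Flag station pairs whose prices co-move strongly and vary little.

    prices: array of shape (n_stations, n_days) of daily pump prices.
    Returns index pairs (i, j) flagged for closer inspection.
    """
    cv = prices.std(axis=1) / prices.mean(axis=1)   # coefficient of variation
    corr = np.corrcoef(prices)
    n = prices.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if corr[i, j] > corr_min and cv[i] < cv_max and cv[j] < cv_max]

# Invented example: stations 0 and 1 track each other with tiny variance,
# while station 2 prices independently with larger swings.
t = np.linspace(0.0, 2.0 * np.pi, 200)
prices = np.vstack([
    1.40 + 0.005 * np.sin(t),
    1.41 + 0.005 * np.sin(t),
    1.40 + 0.100 * np.cos(3.0 * t),
])
flagged = screen_station_pairs(prices)   # flags the pair (0, 1)
```

A flag from such a screen is only circumstantial, which matches the legal point made above: price uniformity counts as an indicator of a cartel only together with other evidence.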

Relevance:

10.00%

Publisher:

Abstract:

Hamiltonian systems in stellar and planetary dynamics are typically near-integrable. For example, Solar System planets are almost in two-body orbits, and in simulations of the Galaxy the orbits of stars seem regular. For such systems, sophisticated numerical methods can be developed through integrable approximations. Following this theme, we discuss three distinct problems. We start by considering numerical integration techniques for planetary systems. Perturbation methods (which utilize the integrability of the two-body motion) are preferred over conventional "blind" integration schemes. We introduce perturbation methods formulated with Cartesian variables. In our numerical comparisons, these are superior to their conventional counterparts but, by definition, lack the energy-preserving properties of symplectic integrators. However, they are exceptionally well suited for relatively short-term integrations in which moderately high positional accuracy is required. The next exercise falls into the category of stability questions in solar systems. Traditionally, the interest has been in the orbital stability of planets, quantified, e.g., by Liapunov exponents. We offer a complementary aspect by considering the protective effect that massive gas giants, like Jupiter, can offer to Earth-like planets inside the habitable zone of a planetary system. Our method produces a single quantity, called the escape rate, which characterizes the system of giant planets. We obtain some interesting results by computing escape rates for the Solar System. Galaxy modelling is our third and final topic. Because of the sheer number of stars (about 10^11 in the Milky Way), galaxies are often modelled as smooth potentials hosting distributions of stars. Unfortunately, only a handful of suitable potentials are integrable (the harmonic oscillator, the isochrone and the Stäckel potential). This severely limits the possibilities of finding an integrable approximation for an observed galaxy.
A solution to this problem is torus construction: a method for numerically creating a foliation of invariant phase-space tori corresponding to a given target Hamiltonian. Canonically, the invariant tori are constructed by deforming the tori of some existing integrable toy Hamiltonian. Our contribution is to demonstrate how this can be accomplished by using a Stäckel toy Hamiltonian in ellipsoidal coordinates.
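To illustrate the energy-preservation point made above about symplectic integrators, here is a textbook kick-drift-kick leapfrog for the planar Kepler problem (with GM = 1). This is a generic sketch, not the Cartesian perturbation methods developed in the thesis.

```python
import numpy as np

def accel(r):
    """Keplerian acceleration toward the origin, GM = 1."""
    return -r / np.linalg.norm(r) ** 3

def leapfrog(r, v, dt, steps):
    """Symplectic kick-drift-kick integration of r'' = accel(r)."""
    for _ in range(steps):
        v = v + 0.5 * dt * accel(r)   # half kick
        r = r + dt * v                # drift
        v = v + 0.5 * dt * accel(r)   # half kick
    return r, v

def energy(r, v):
    """Specific orbital energy: kinetic plus potential."""
    return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit, E = -0.5
r, v = leapfrog(r0, v0, dt=0.01, steps=10_000)
drift = abs(energy(r, v) - energy(r0, v0))            # remains bounded and small
```

Over many orbital periods the leapfrog's energy error oscillates but does not grow secularly, which is the property the perturbation methods above trade away in exchange for higher short-term positional accuracy.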

Relevance:

10.00%

Publisher:

Abstract:

We study the Segal-Bargmann transform on a motion group R^n ⋊ K, where K is a compact subgroup of SO(n). A characterization of the Poisson integrals associated with the Laplacian on R^n × K is given. We also establish a Paley-Wiener type theorem using complexified representations.

Relevance:

10.00%

Publisher:

Abstract:

Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems in both the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results, utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is concerned with the area of vector-valued Harmonic Analysis, where the central theme is to determine how results from classical Harmonic Analysis generalize to functions with values in an infinite-dimensional Banach space. The work consists of three articles and an introduction. The first article studies the Rademacher maximal function, originally defined by T. Hytönen, A. McIntosh and P. Portal in 2008 in order to prove a vector-valued version of Carleson's embedding theorem. The boundedness of the corresponding maximal operator on Lebesgue-Bochner spaces defines the RMF-property of the range space. It is shown that the RMF-property is equivalent to a weak type inequality, which does not depend, for instance, on the integrability exponent, hence providing more flexibility for the RMF-property. The second article, written in collaboration with T. Hytönen, studies a vector-valued Carleson's embedding theorem with respect to filtrations. An earlier proof of the dyadic version assumed that the range space satisfies a certain geometric type condition, which this article shows to be also necessary. The third article deals with vector-valued generalizations of tent spaces, originally defined by R. R. Coifman, Y. Meyer and E. M. Stein in the 1980s, and concerns especially those related to square functions. A natural assumption on the range space is then the UMD-property. The main result is an atomic decomposition for tent spaces with integrability exponent one. In order to suit the stochastic integrals appearing in the vector-valued formulation, the proof is based on a geometric lemma for cones and differs essentially from the classical proof. Vector-valued tent spaces have also found applications in functional calculi for bisectorial operators. In the introduction these three themes come together when studying paraproduct operators for vector-valued functions.
The Rademacher maximal function and Carleson's embedding theorem were applied already by Hytönen, McIntosh and Portal in order to prove boundedness for the dyadic paraproduct operator on Lebesgue-Bochner spaces, assuming that the range space satisfies both the UMD- and RMF-properties. Whether UMD implies RMF is thus an interesting question. Tent spaces, on the other hand, provide a method to study continuous-time paraproduct operators, although the RMF-property is not yet understood in the framework of tent spaces.

Relevance:

10.00%

Publisher:

Abstract:

Two identities involving quarter-wave plates and half-wave plates are established. These are used to improve on an earlier gadget involving four wave plates, leading to a new gadget involving just three plates, a half-wave plate and two quarter-wave plates, which can realize all SU(2) polarization transformations. This gadget is shown to involve the minimum number of quarter-wave and half-wave plates. The analysis leads to a decomposition theorem for SU(2) matrices in terms of factors which are symmetric fourth and eighth roots of the identity.
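The algebra behind such plate gadgets can be checked numerically with standard Jones matrices. This is a sketch under the usual Jones-calculus conventions, not necessarily the paper's notation, angle parametrization, or plate ordering.

```python
import numpy as np

def wave_plate(delta, theta):
    """Jones matrix of a retarder (retardance delta, fast axis at angle theta),
    normalized so that det = 1, i.e. an SU(2) element."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])
    D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return R.T @ D @ R

def qwp(theta):  # quarter-wave plate: an eighth root of the identity
    return wave_plate(np.pi / 2, theta)

def hwp(theta):  # half-wave plate: a fourth root of the identity
    return wave_plate(np.pi, theta)

# One instance of a three-plate arrangement (two quarter-wave, one half-wave),
# with arbitrary illustrative angles:
U = qwp(0.3) @ hwp(1.1) @ qwp(-0.7)

assert np.allclose(U.conj().T @ U, np.eye(2))      # unitary
assert np.isclose(np.linalg.det(U), 1.0)           # det 1, hence U in SU(2)
assert np.allclose(qwp(0.3) @ qwp(0.3), hwp(0.3))  # a QWP squared is an HWP
```

In this normalization hwp(θ)² = −I and qwp(θ)⁴ = −I, so half-wave and quarter-wave plates are fourth and eighth roots of the identity, which is the structure the decomposition theorem above exploits.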

Relevance:

10.00%

Publisher:

Abstract:

We study stochastic games with countable state space, compact action spaces, and limiting average payoff. For N-person games, the existence of an equilibrium in stationary strategies is established under a certain Liapunov stability condition. For two-person zero-sum games, the existence of a value and of optimal strategies for both players is established under the same stability condition.