954 results for Delsarte-McEliece Theorem
Abstract:
The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data.
The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
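The linear-systems procedure above combines a discrete Kalman filter with level crossing statistics. As a minimal illustration of the filtering ingredient only, the following sketch implements one measurement-update step of a standard discrete Kalman filter; the matrices and numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

def kalman_update(x_prior, P_prior, z, H, R):
    """One measurement update of a discrete Kalman filter.

    x_prior, P_prior : prior state mean and covariance
    z : measurement vector
    H : observation matrix, R : measurement-noise covariance
    """
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)  # updated mean
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior  # updated covariance
    return x_post, P_post

# Illustrative 1-D case: a unit-variance prior updated with one equally
# noisy measurement; the posterior variance is halved.
x, P = kalman_update(np.array([0.0]), np.array([[1.0]]),
                     np.array([0.5]), np.array([[1.0]]), np.array([[1.0]]))
```

In the study itself this update would be iterated over the measured response samples, with the level-crossing statistics evaluated on the conditioned response.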
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines the consistency proofs for arithmetic by Gentzen from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter consistency proof in standard natural deduction has been an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
Abstract:
We study the Segal-Bargmann transform on M(2). The range of this transform is characterized as a weighted Bergman space. In a similar fashion, Poisson integrals are investigated. Using a Gutzmer-type formula, we characterize the range as a class of functions extending holomorphically to an appropriate domain in the complexification of M(2). We also prove a Paley-Wiener theorem for the inverse Fourier transform.
Abstract:
We give an explicit, direct, and fairly elementary proof that the radial energy eigenfunctions for the hydrogen atom in quantum mechanics, bound and scattering states included, form a complete set. The proof uses only some properties of the confluent hypergeometric functions and the Cauchy residue theorem from analytic function theory; therefore it would form useful supplementary reading for a graduate course on quantum mechanics.
Abstract:
In the thesis I study various quantum coherence phenomena and create some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics and the terminology related to them. It is worth noticing that quantum coherence is a conserved quantity that can be exactly defined. I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory to study coherence, since their entropy is small and coherence is large, and thus they possess strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate in my thesis mainly on collective association from atoms to molecules, Rabi oscillations and decoherence. It appears that collective association and oscillations do not depend on the spin-statistics of particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin-model. I argue that decoherence is a valid concept also in systems with a possibility to experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: to study the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum physical universe appears to cosmologists and other scientists very classical-like. The study of the logical structure of closed systems also reveals that complex enough closed (physical) systems obey a principle that is similar to Gödel's incompleteness theorem of logic. 
According to the theorem it is impossible to describe completely a closed system within the system, and the inside and outside descriptions of the system can be remarkably different. Via understanding this feature it may be possible to comprehend coarse-graining better and to define uniquely the mutual entanglement of quantum systems.
Abstract:
The relationship between site characteristics and understorey vegetation composition was analysed with quantitative methods, especially from the viewpoint of site quality estimation. Theoretical models were applied to an empirical data set collected from the upland forests of southern Finland comprising 104 sites dominated by Scots pine (Pinus sylvestris L.), and 165 sites dominated by Norway spruce (Picea abies (L.) Karsten). Site index H100 was used as an independent measure of site quality. A new model for the estimation of site quality at sites with a known understorey vegetation composition was introduced. It is based on the application of Bayes' theorem to the density function of site quality within the study area combined with the species-specific presence-absence response curves. The resulting posterior probability density function may be used for calculating an estimate for the site variable. Using this method, a jackknife estimate of site index H100 was calculated separately for pine- and spruce-dominated sites. The results indicated that the cross-validation root mean squared error (RMSEcv) of the estimates improved from 2.98 m down to 2.34 m relative to the "null" model (standard deviation of the sample distribution) in pine-dominated forests. In spruce-dominated forests RMSEcv decreased from 3.94 m down to 3.16 m. In order to assess these results, four other estimation methods based on understorey vegetation composition were applied to the same data set. The results showed that none of the methods was clearly superior to the others. In pine-dominated forests, RMSEcv varied between 2.34 and 2.47 m, and the corresponding range for spruce-dominated forests was from 3.13 to 3.57 m.
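The estimator described above applies Bayes' theorem to a prior density of site quality combined with species-specific presence-absence response curves. A minimal sketch of the idea on a discretized site-index axis; the prior, the logistic response curves and the two observations are all invented for illustration and are not the models fitted in the study.

```python
import numpy as np

# Discretized site-index axis (in metres) and a hypothetical Gaussian prior.
h = np.linspace(10.0, 35.0, 251)
dh = h[1] - h[0]
prior = np.exp(-0.5 * ((h - 24.0) / 4.0) ** 2)
prior /= prior.sum() * dh  # normalize to a density

def presence_prob(h, midpoint, slope):
    """Hypothetical logistic presence-probability response curve."""
    return 1.0 / (1.0 + np.exp(-slope * (h - midpoint)))

# Invented observations: (midpoint, slope, observed_present).
# A fertile-site species is present; a poor-site species (negative slope,
# presence probability decreasing with site index) is absent.
observations = [(28.0, 0.8, True), (18.0, -0.6, False)]

# Bayes' theorem: multiply the prior by each presence/absence likelihood.
posterior = prior.copy()
for midpoint, slope, present in observations:
    p = presence_prob(h, midpoint, slope)
    posterior *= p if present else (1.0 - p)
posterior /= posterior.sum() * dh  # renormalize

h_hat = (h * posterior).sum() * dh  # posterior-mean estimate of site index
```

Both observations here favour fertile sites, so the posterior mean lands above the prior mean of 24 m; a jackknife or cross-validation loop over sites, as in the study, would then assess the estimator's RMSE.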
Abstract:
We derive the Langevin equations for a spin interacting with a heat bath, starting from a fully dynamical treatment. The obtained equations are non-Markovian, with multiplicative fluctuations and concomitant dissipative terms obeying the fluctuation-dissipation theorem. In the Markovian limit our equations reduce to the phenomenological equations proposed by Kubo and Hashitsume. A perturbative treatment of our equations leads to the Landau-Lifshitz equations and to other known results in the literature.
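The fluctuation-dissipation theorem invoked above ties the noise strength to the damping. The spin equations themselves are multiplicative and non-Markovian, but the constraint is easy to see numerically in a much simpler toy model: an overdamped scalar Langevin equation with additive noise, dx = -gamma*x dt + sqrt(2D) dW, whose stationary variance must equal D/gamma. This sketch (parameters arbitrary) checks that with Euler-Maruyama sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped scalar Langevin equation  dx = -gamma*x dt + sqrt(2*D) dW.
# This additive-noise toy model is far simpler than the spin equations
# in the paper; it only illustrates the fluctuation-dissipation balance.
gamma, D = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 2000, 4000

x = np.zeros(n_paths)
for _ in range(n_steps):
    # Euler-Maruyama step: deterministic damping plus Gaussian kick.
    x += -gamma * x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

# Fluctuation-dissipation: stationary variance should approach D/gamma = 1.
var = x.var()
```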
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census developed a sampling design for the Current Population Survey (CPS) in the 1940s. A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample would be a miniature of the population, and it is still prevailing. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science. He revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw samples repeatedly from the same population and to assume that the population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the central idea for statisticians at the U.S. Census Bureau to develop the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
Photoelectron spectroscopy (PES) provides valuable information on the ionization energies of atoms and molecules. The ionization energy (IE) is given by the relation hν = IE + T, where hν is the energy of the radiation and T is the kinetic energy of the electron. The IEs are directly related to the orbital energies (Koopmans' theorem). By employing UV radiation (HeI, 21.2 eV, or HeII, 40.8 eV), extensive data on the ionization of valence electrons in organic molecules have been obtained in recent years. These studies of UV photoelectron spectroscopy, originated by Turner, have provided a direct probe into the energy levels of organic molecules. Molecular orbital calculations of various degrees of sophistication are generally employed to make assignments of the PES bands. Analysis of the vibrational structure of PES bands has not only provided structural information on the molecular ions, but has also been of value in band assignments. Dewar and co-workers [1, 2] presented summaries of available PES data on organic molecules in 1969 and 1970. Turner et al. [3] published a handbook of HeI spectra of organic molecules in 1970. Since then, a few books [4-7] discussing the principles and applications of UV photoelectron spectroscopy have appeared, of which special mention should be made of the recent article by Heilbronner and Maier [7]. There has, however, been no comprehensive review of the vast amount of data on the UV-PES of organic molecules published in the literature since 1970.
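The energy balance hν = IE + T above is simple enough to evaluate directly. A tiny sketch (the measured kinetic energy is an invented illustrative value):

```python
# Photoelectron energy balance:  h*nu = IE + T,  so  IE = h*nu - T.
def ionization_energy(photon_energy_eV, kinetic_energy_eV):
    """Ionization energy from photon energy and electron kinetic energy."""
    return photon_energy_eV - kinetic_energy_eV

# Illustrative numbers: HeI radiation (21.2 eV) ejecting an electron
# with a hypothetical measured kinetic energy of 10.7 eV.
ie = ionization_energy(21.2, 10.7)  # 10.5 eV
```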
Abstract:
A BEM formulation to obtain the inelastic response of R.C. Beam-Column joints subjected to sinusoidal loading along the boundary is presented. The equations of motion are written along with kinematical and constitutive equations. The dynamic reciprocal theorem is presented and the temporal dependence is removed by assuming steady state response.
Abstract:
The purpose of the thesis was to apply the theory and empirics of repeated games to Finnish research data. The operating dynamics of a cartel are modelled as a repeated game, a branch of game theory. In a repeated game, the same one-shot game is played for several rounds. An infinitely repeated game gives rise to the general theory of repeated games (the Folk Theorem), in which each player has an individually rational cycle of behaviour. Cooperation with another player increases the total payoff a player accumulates over this cycle. Cartel research cannot bypass the legal perspective, so it is also briefly included in the presentation. In a silent or implicit cartel ("tacit collusion") there is, unlike in an open cartel, no communication between the parties, but the outcome is the same. For this reason tacit collusion is prohibited as a concerted practice. Since the indicators are also partly the same, cartel research has obtained valuable measurement data from the behaviour of exposed cartels. Research based solely on price data has a solid theoretical and empirical foundation. In legal literature and practice, price uniformity, together with other indicators, has been regarded as circumstantial evidence of a cartel. The retail gasoline market is structurally fertile ground for a repeated game. The empirical part of the thesis examined the retail gasoline market in the Helsinki metropolitan area; the data set contained samples of price time series from 1 August 2004 to 30 June 2005 from a total of 116 service stations in Espoo, Helsinki and Vantaa. The research method was repeated-measures analysis of variance with post hoc comparisons. Statistically significant pricing uniformity among nearby stations was found at 47 stations, and these stations thus exhibit one of the indicators of a cartel.
The stations exhibiting pricing uniformity formed clusters within competition areas delineated by traffic connections, and there were 21 such uniformly pricing clusters in all. Of these clusters, 9 were so-called mixed pairs, i.e. the parties were an unmanned station and a staffed service station. In most cases the unmanned station was the most expensively pricing station in its area. The most important sources of the thesis: Abrantes-Metz, Rosa M. – Froeb, Luke M. – Geweke, John F. – Taylor, Christopher T. (2005): A variance screen for collusion. Working Paper No. 275, Bureau of Economics, Federal Trade Commission, Washington DC 20580. Dutta, Prajit K. (1999): Strategies and Games: Theory and Practice. The MIT Press, Cambridge, Massachusetts, London, England. Harrington, Joseph E. (2004): Detecting cartels. Working paper, Johns Hopkins University. Ivaldi, Marc – Jullien, Bruno – Rey, Patrick – Seabright, Paul – Tirole, Jean (2003): The Economics of Tacit Collusion. Publication of the European Commission's Directorate-General for Competition. Phlips, Louis (1996): On the detection of collusion and predation. European Economic Review 40, 495–510.
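The Folk Theorem reasoning used in the thesis rests on comparing the discounted value of continued cooperation against a one-period deviation. A minimal sketch of the standard grim-trigger sustainability condition from repeated-game theory; the per-period profit numbers are hypothetical and are not taken from the thesis data.

```python
def collusion_sustainable(pi_collude, pi_deviate, pi_punish, delta):
    """Grim-trigger test: deviate once, then be punished forever.

    Collusion is sustainable when the discounted stream of collusive
    profits beats a one-period deviation followed by punishment forever:
        pi_collude / (1 - delta) >= pi_deviate + delta * pi_punish / (1 - delta)
    """
    return pi_collude / (1 - delta) >= pi_deviate + delta * pi_punish / (1 - delta)

# Hypothetical per-period profits: collusion 10, deviation 15, punishment 5.
# The critical discount factor is (15 - 10) / (15 - 5) = 0.5.
ok_patient = collusion_sustainable(10, 15, 5, 0.6)    # patient players: True
ok_impatient = collusion_sustainable(10, 15, 5, 0.4)  # impatient: False
```

With these numbers, players who value the future at delta = 0.6 sustain the cartel, while delta = 0.4 makes deviation profitable; this is the individually rational behaviour cycle the abstract refers to.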
Abstract:
We study the Segal-Bargmann transform on a motion group R^n ⋊ K, where K is a compact subgroup of SO(n). A characterization of the Poisson integrals associated to the Laplacian on R^n ⋊ K is given. We also establish a Paley-Wiener type theorem using complexified representations.
Abstract:
Various Tb theorems play a key role in the modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the used testing conditions are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele. 
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties introduced by the non-linearity of maximal truncations.
Abstract:
This thesis is concerned with the area of vector-valued Harmonic Analysis, where the central theme is to determine how results from classical Harmonic Analysis generalize to functions with values in an infinite dimensional Banach space. The work consists of three articles and an introduction. The first article studies the Rademacher maximal function that was originally defined by T. Hytönen, A. McIntosh and P. Portal in 2008 in order to prove a vector-valued version of Carleson's embedding theorem. The boundedness of the corresponding maximal operator on Lebesgue-(Bochner) -spaces defines the RMF-property of the range space. It is shown that the RMF-property is equivalent to a weak type inequality, which does not depend for instance on the integrability exponent, hence providing more flexibility for the RMF-property. The second article, which is written in collaboration with T. Hytönen, studies a vector-valued Carleson's embedding theorem with respect to filtrations. An earlier proof of the dyadic version assumed that the range space satisfies a certain geometric type condition, which this article shows to be also necessary. The third article deals with vector-valued generalizations of tent spaces, originally defined by R. R. Coifman, Y. Meyer and E. M. Stein in the 1980s, and concerns especially the ones related to square functions. A natural assumption on the range space is then the UMD-property. The main result is an atomic decomposition for tent spaces with integrability exponent one. In order to suit the stochastic integrals appearing in the vector-valued formulation, the proof is based on a geometric lemma for cones and differs essentially from the classical proof. Vector-valued tent spaces have also found applications in functional calculi for bisectorial operators. In the introduction these three themes come together when studying paraproduct operators for vector-valued functions.
The Rademacher maximal function and Carleson's embedding theorem were applied already by Hytönen, McIntosh and Portal in order to prove boundedness for the dyadic paraproduct operator on Lebesgue-Bochner -spaces assuming that the range space satisfies both UMD- and RMF-properties. Whether UMD implies RMF is thus an interesting question. Tent spaces, on the other hand, provide a method to study continuous time paraproduct operators, although the RMF-property is not yet understood in the framework of tent spaces.
Abstract:
Two identities involving quarter-wave plates and half-wave plates are established. These are used to improve on an earlier gadget involving four wave plates, leading to a new gadget involving just three plates, a half-wave plate and two quarter-wave plates, which can realize all SU(2) polarization transformations. This gadget is shown to involve the minimum number of quarter-wave and half-wave plates. The analysis leads to a decomposition theorem for SU(2) matrices in terms of factors which are symmetric fourth and eighth roots of the identity.
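That any train of wave plates lands in SU(2) is easy to check numerically with Jones matrices: in the symmetric phase convention, a retarder with retardance Γ and fast-axis angle θ is unitary with unit determinant, and so is any product of them. The sketch below verifies this for one quarter-half-quarter train at arbitrary illustrative angles; it only checks SU(2) membership, not the minimality result of the paper.

```python
import numpy as np

def wave_plate(theta, gamma):
    """Jones matrix of a retarder: retardance gamma, fast axis at angle theta.

    Symmetric phase convention diag(exp(-i*gamma/2), exp(i*gamma/2)),
    which makes the matrix an SU(2) element (unitary, determinant 1).
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                       # axis rotation
    D = np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)])
    return R @ D @ R.T

HWP = lambda t: wave_plate(t, np.pi)        # half-wave plate
QWP = lambda t: wave_plate(t, np.pi / 2)    # quarter-wave plate

# A quarter-half-quarter train at arbitrary (illustrative) angles:
U = QWP(0.3) @ HWP(1.1) @ QWP(-0.7)

unitary = np.allclose(U.conj().T @ U, np.eye(2))
det_one = np.isclose(np.linalg.det(U), 1.0)
```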