954 results for Liapunov convexity theorem
Abstract:
BACKGROUND: Major factors influencing the phenotypic diversity of a lineage can be recognized by characterizing the extent and mode of trait evolution between related species. Here, we compared the evolutionary dynamics of traits associated with floral morphology and climatic preferences in a clade composed of the genera Codonanthopsis, Codonanthe and Nematanthus (Gesneriaceae). To test the mode and specific components that lead to phenotypic diversity in this group, we performed a Bayesian phylogenetic analysis of combined nuclear and plastid DNA sequences and modeled the evolution of quantitative traits related to flower shape and size and to climatic preferences. We propose an alternative approach to graphically display the complex dynamics of trait evolution along a phylogenetic tree under a wide range of evolutionary scenarios. RESULTS: Our results demonstrated heterogeneous trait evolution. Floral shapes were displaced into separate regimes selected by the different pollinator types (hummingbirds versus insects), while floral size underwent clade-specific evolution. Rates of evolution were higher for the hummingbird-pollinated clade that experienced flower resupination, compared with species pollinated by bees, suggesting a relevant role of plant-pollinator interactions in the lowland rainforest. The evolution of temperature preferences is best explained by a model with distinct selective regimes between the Brazilian Atlantic Forest and the other biomes, whereas differentiation along the precipitation axis was characterized by higher rates, compared with temperature, and no regime- or clade-specific patterns. CONCLUSIONS: Our study shows different selective regimes and clade-specific patterns in the evolution of morphological and climatic components during the diversification of Neotropical species. Our new graphical visualization tool allows the representation of trait trajectories under parameter-rich models, thus contributing to a better understanding of complex evolutionary dynamics.
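The abstract does not name the trait-evolution model, but "distinct selective regimes" is the standard language of Ornstein-Uhlenbeck (Hansen-type) models, in which a trait is pulled toward a regime-specific optimum. Below is a minimal sketch of that idea only, assuming a two-regime OU process; the function name simulate_ou and all parameter values are ours, not the paper's.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama simulation of a trait evolving under an
# Ornstein-Uhlenbeck process with two selective regimes (e.g. hummingbird-
# vs insect-pollinated). All parameter values are hypothetical.
rng = np.random.default_rng(0)

def simulate_ou(theta, alpha=2.0, sigma=0.5, t_max=10.0, dt=0.01, x0=0.0):
    """Trait trajectory pulled toward the regime optimum `theta`."""
    n = int(t_max / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = alpha * (theta - x[i])  # pull toward the regime's optimum
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

insect_branch = simulate_ou(theta=-1.0)       # regime 1: insect pollination
hummingbird_branch = simulate_ou(theta=1.0)   # regime 2: hummingbird pollination
print(insect_branch[-1], hummingbird_branch[-1])
```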
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
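To illustrate what "simulating pairs of dependent positions" means, here is a toy sketch; it is not the Coev model itself, only a caricature in which substitutions at a mismatched site pair occur faster than at a compensated pair. The favoured pairs and rates are made up for illustration.

```python
import random

# Toy sketch of jointly evolving two dependent nucleotide positions.
# NOT the Coev model: favoured (compensated) pairs and rates are invented.
NUCS = "ACGT"
FAVOURED = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}  # hypothetical

def step(pair, rate_match=0.1, rate_mismatch=1.0):
    """One event: mismatched pairs mutate faster, so the chain drifts
    toward (and lingers in) compensated combinations."""
    a, b = pair
    rate = rate_match if (a, b) in FAVOURED else rate_mismatch
    if random.random() < rate:
        if random.random() < 0.5:
            a = random.choice(NUCS)
        else:
            b = random.choice(NUCS)
    return (a, b)

random.seed(0)
pair = ("A", "A")
for _ in range(1000):
    pair = step(pair)
print(pair)  # tends to end in a favoured, compensated state
```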
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular the frequentist and the Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debates in the literature. More recently, a controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to expose the discussion of this journal's decision within the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
Abstract:
One of the global targets for non-communicable diseases is to halt, by 2025, the rise in the age-standardised adult prevalence of diabetes at its 2010 levels. We aimed to estimate worldwide trends in diabetes, how likely it is for countries to achieve the global target, and how changes in prevalence, together with population growth and ageing, are affecting the number of adults with diabetes. We pooled data from population-based studies that had collected data on diabetes through measurement of its biomarkers. We used a Bayesian hierarchical model to estimate trends in diabetes prevalence (defined as fasting plasma glucose of 7.0 mmol/L or higher, a history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs) in 200 countries and territories in 21 regions, by sex, from 1980 to 2014. We also calculated the posterior probability of meeting the global diabetes target if post-2000 trends continue. We used data from 751 studies including 4,372,000 adults from 146 of the 200 countries for which we make estimates. Global age-standardised diabetes prevalence increased from 4.3% (95% credible interval 2.4-7.0) in 1980 to 9.0% (7.2-11.1) in 2014 in men, and from 5.0% (2.9-7.9) to 7.9% (6.4-9.7) in women. The number of adults with diabetes in the world increased from 108 million in 1980 to 422 million in 2014 (28.5% due to the rise in prevalence, 39.7% due to population growth and ageing, and 31.8% due to the interaction of these two factors). Age-standardised adult diabetes prevalence in 2014 was lowest in northwestern Europe, and highest in Polynesia and Micronesia, at nearly 25%, followed by Melanesia and the Middle East and north Africa. Between 1980 and 2014 there was little change in age-standardised diabetes prevalence in adult women in continental western Europe, although crude prevalence rose because of ageing of the population. By contrast, age-standardised adult prevalence rose by 15 percentage points in men and women in Polynesia and Micronesia. In 2014, American Samoa had the highest national prevalence of diabetes (>30% in both sexes), with age-standardised adult prevalence also higher than 25% in some other islands in Polynesia and Micronesia. If post-2000 trends continue, the probability of meeting the global target of halting the rise in the prevalence of diabetes by 2025 at the 2010 level worldwide is lower than 1% for men and 1% for women. Only nine countries for men and 29 countries for women, mostly in western Europe, have a 50% or higher probability of meeting the global target. Since 1980, age-standardised diabetes prevalence in adults has increased, or at best remained unchanged, in every country. Together with population growth and ageing, this rise has led to a near quadrupling of the number of adults with diabetes worldwide. The burden of diabetes, both in terms of prevalence and number of adults affected, has increased faster in low-income and middle-income countries than in high-income countries. Funding: Wellcome Trust.
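The attribution of the rise in case numbers to prevalence, population growth/ageing, and their interaction follows the usual decomposition of a change in a product N = population × prevalence. A minimal sketch with hypothetical round numbers (the study's actual decomposition is age-structured and model-based, so these figures will not reproduce its 28.5/39.7/31.8 split exactly):

```python
import math

# Decompose the change in case numbers N = population * prevalence into a
# prevalence component, a population component, and their interaction.
# All inputs are hypothetical round numbers, not the study's data.
pop0, prev0 = 2.5e9, 0.043   # 1980-like values (hypothetical)
pop1, prev1 = 4.7e9, 0.090   # 2014-like values (hypothetical)

delta = pop1 * prev1 - pop0 * prev0
due_to_prevalence = pop0 * (prev1 - prev0)
due_to_population = (pop1 - pop0) * prev0
interaction = (pop1 - pop0) * (prev1 - prev0)

# The three components sum exactly to the total change.
assert math.isclose(due_to_prevalence + due_to_population + interaction, delta)
for name, part in [("prevalence", due_to_prevalence),
                   ("population growth/ageing", due_to_population),
                   ("interaction", interaction)]:
    print(f"{name}: {100 * part / delta:.1f}%")
```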
Abstract:
Let $X$ be a smooth complex algebraic variety. Morgan showed that the rational homotopy type of $X$ is a formal consequence of the differential graded algebra defined by the first term $E_{1}(X,W)$ of its weight spectral sequence. In the present work, we generalize this result to arbitrary nilpotent complex algebraic varieties (possibly singular and/or non-compact) and to algebraic morphisms between them. In particular, our results generalize the formality theorem of Deligne, Griffiths, Morgan and Sullivan for morphisms of compact Kähler varieties, filling a gap in Morgan's theory concerning functoriality over the rationals. As an application, we study the Hopf invariant of certain algebraic morphisms using intersection theory.
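For context, the compact Kähler statement being generalized is the Deligne-Griffiths-Morgan-Sullivan formality theorem: for $X$ compact Kähler there is a chain of quasi-isomorphisms of differential graded algebras

$$(A_{PL}(X;\mathbb{Q}),\, d) \;\simeq\; (H^{\bullet}(X;\mathbb{Q}),\, 0),$$

so the rational homotopy type of $X$ is determined by its cohomology ring. Morgan's theorem quoted above plays the analogous role for smooth varieties, with $(E_{1}(X,W), d_{1})$ in place of the cohomology ring.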
Abstract:
In this paper we propose an approach to homotopical algebra where the basic ingredient is a category with two classes of distinguished morphisms: strong and weak equivalences. These data determine the cofibrant objects by an extension property analogous to the classical lifting property of projective modules. We define a Cartan-Eilenberg category as a category with strong and weak equivalences such that there is an equivalence of categories between its localisation with respect to weak equivalences and the relative localisation of the subcategory of cofibrant objects with respect to strong equivalences. This equivalence of categories allows us to extend the classical theory of derived additive functors to this non-additive setting. The main examples include Quillen model categories and categories of functors defined on a category endowed with a cotriple (comonad) and taking values in a category of complexes of an abelian category. In the latter case there are examples in which the class of strong equivalences is not determined by a homotopy relation. Among other applications of our theory, we establish a very general acyclic models theorem.
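Schematically (the notation is ours, and we gloss over the relative localisation used in the precise statement): writing $\mathcal{S} \subseteq \mathcal{W}$ for the strong and weak equivalences of a category $\mathcal{C}$ and $\mathcal{C}_{\mathrm{cof}}$ for its full subcategory of cofibrant objects, the Cartan-Eilenberg condition is that the induced functor

$$\mathcal{C}_{\mathrm{cof}}[\mathcal{S}^{-1}] \;\longrightarrow\; \mathcal{C}[\mathcal{W}^{-1}]$$

between the localisations is an equivalence of categories.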
Abstract:
We introduce a new notion for the deformation of Gabor systems. Such deformations are in general nonlinear and, in particular, include the standard jitter error and linear deformations of phase space. With this new notion we prove a strong deformation result for Gabor frames and Gabor Riesz sequences that covers the known perturbation and deformation results. Our proof of the deformation theorem requires a new characterization of Gabor frames and Gabor Riesz sequences. It is in the style of Beurling's characterization of sets of sampling for bandlimited functions and significantly extends the known characterization of Gabor frames 'without inequalities' from lattices to non-uniform sets.
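For reference, given a window $g \in L^{2}(\mathbb{R})$ and a set $\Lambda \subseteq \mathbb{R}^{2}$, the Gabor system and the frame property discussed here are the standard ones:

$$\mathcal{G}(g,\Lambda) = \{\, \pi(\lambda)g := e^{2\pi i b t}\, g(t-a) : \lambda = (a,b) \in \Lambda \,\}, \qquad A\,\|f\|_{2}^{2} \;\le\; \sum_{\lambda \in \Lambda} |\langle f, \pi(\lambda)g \rangle|^{2} \;\le\; B\,\|f\|_{2}^{2}$$

for all $f \in L^{2}(\mathbb{R})$ and some constants $0 < A \le B < \infty$; a Gabor Riesz sequence satisfies the analogous inequalities on the coefficient side.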
Abstract:
We show the existence of families of hip-hop solutions in the equal-mass 2N-body problem which are close to highly eccentric planar elliptic homographic motions of 2N bodies plus small perpendicular non-harmonic oscillations. By introducing a parameter ϵ, the homographic motion and the small-amplitude oscillations can be uncoupled into a purely Keplerian homographic motion of fixed period and a vertical oscillation described by a Hill-type equation. Small changes in the eccentricity induce large variations in the period of the perpendicular oscillation and give rise, via a Bolzano argument, to resonant periodic solutions of the uncoupled system in a rotating frame. For small ϵ ≠ 0, the topological transversality persists and Brouwer's fixed point theorem shows the existence of such solutions in the full system.
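A Hill-type equation is a second-order linear equation with a periodic coefficient,

$$\ddot{z} + \phi(t)\, z = 0, \qquad \phi(t+T) = \phi(t),$$

whose solutions' oscillation behaviour depends sensitively on $\phi$; this is how the small eccentricity changes mentioned above can produce the large variations in the period of the perpendicular oscillation that drive the resonance (Bolzano) argument.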
Abstract:
A sensitivity analysis of the potential parameters for the helium-containing heteronuclear molecules HeNe, HeAr, HeKr and HeXe is the subject of this work. The number of bound states these rare-gas dimers can support, for different angular momenta, will be presented and discussed. The variable phase method, together with Levinson's theorem, is used to explore the quantum scattering process at very low collision energy using the Tang and Toennies potential. These diatomic dimers can support a bound state even for relative angular momentum equal to five, as in HeXe. Vibrationally excited states, with zero angular momentum, are also possible for HeKr and HeXe. Results from the sensitivity analysis give acceptable orders of magnitude for the potential parameters.
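The two standard ingredients named here can be stated compactly. Levinson's theorem ties the zero-energy phase shift $\delta_{\ell}$ to the number $n_{\ell}$ of bound states with angular momentum $\ell$, and the variable phase method obtains the phase shift by integrating a first-order phase equation outward in $r$; for $\ell = 0$ and reduced potential $U(r) = 2\mu V(r)/\hbar^{2}$ it reads

$$\delta_{\ell}(k \to 0) - \delta_{\ell}(k \to \infty) = n_{\ell}\, \pi, \qquad \frac{d\delta_{0}}{dr} = -\frac{U(r)}{k}\, \sin^{2}\!\big(kr + \delta_{0}(r)\big), \quad \delta_{0}(0) = 0,$$

which makes this machinery convenient for the bound-state counts reported above.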
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not been evaluated in large-scale studies yet. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces, with the aid of the tool, a large verification problem into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula.
Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
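To make the invariant-first workflow described in this abstract concrete, here is a minimal sketch in plain Python rather than Socos/PVS notation: the invariant is stated before the loop body is written, and every added statement is re-checked against it, with runtime assertions standing in for discharged verification conditions.

```python
def invariant_sum(xs: list[int]) -> int:
    """Sum a list while maintaining the invariant:
    after processing i elements, total == sum(xs[:i])."""
    total, i = 0, 0
    assert total == sum(xs[:i])        # invariant established initially
    while i < len(xs):
        total += xs[i]
        i += 1
        assert total == sum(xs[:i])    # invariant preserved by each step
    # invariant plus exit condition (i == len(xs)) yield the postcondition
    assert total == sum(xs)
    return total

print(invariant_sum([3, 1, 4, 1, 5]))  # prints 14
```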
Abstract:
We investigate expressiveness and definability issues with respect to minimal models, particularly in the scope of Circumscription. First, we give a proof of the failure of the Löwenheim-Skolem Theorem for Circumscription. Then we show that, if the class of P; Z-minimal models of a first-order sentence is Δ-elementary, then it is elementary. That is, whenever the circumscription of a first-order sentence is equivalent to a first-order theory, then it is equivalent to a finitely axiomatizable one. This means that classes of models of circumscribed theories are either elementary or not Δ-elementary. Finally, using the previous result, we prove that, whenever a relation Pi is defined in the class of P; Z-minimal models of a first-order sentence Φ and whenever such a class of P; Z-minimal models is Δ-elementary, then there is an explicit definition ψ for Pi such that the class of P; Z-minimal models of Φ is the class of models of Φ ∧ ψ. In other words, the circumscription of P in Φ with Z varied can be replaced by Φ plus this explicit definition ψ for Pi.
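For reference, the (second-order) circumscription of $P$ in $\Phi$ with $Z$ varied is standardly written as

$$\mathrm{Circ}(\Phi; P; Z) \;=\; \Phi(P, Z) \,\wedge\, \forall p\, \forall z\, \big[\Phi(p, z) \rightarrow \neg (p < P)\big],$$

where $p < P$ means that the extension of $p$ is strictly contained in that of $P$; its models are exactly the P; Z-minimal models of $\Phi$ discussed above.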
Abstract:
The three main topics of this work are independent systems and chains of word equations, parametric solutions of word equations on three unknowns, and unique decipherability in the monoid of regular languages. The most important result about independent systems is a new method giving an upper bound for their sizes in the case of three unknowns. The bound depends on the length of the shortest equation. This result has generalizations for decreasing chains and for more than three unknowns. The method also leads to shorter proofs and generalizations of some old results. Hmelevskii's theorem states that every word equation on three unknowns has a parametric solution. We give a significantly simplified proof for this theorem. As a new result, we estimate the lengths of parametric solutions and get a bound for the length of the minimal nontrivial solution and for the complexity of deciding whether such a solution exists. The unique decipherability problem asks whether given elements of some monoid form a code, that is, whether they satisfy no nontrivial equation. We give characterizations for when a collection of unary regular languages is a code. We also prove that it is undecidable whether a collection of binary regular languages is a code.
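To illustrate what a parametric solution is, consider the classical example (not a result of this thesis): for nonempty $x, y$, the word equation $xz = zy$ holds exactly when

$$x = uv, \qquad y = vu, \qquad z = (uv)^{i}u$$

for some words $u, v$ and integer $i \ge 0$, so every solution is obtained by instantiating the word parameters $u, v$ and the numerical parameter $i$. Hmelevskii's theorem asserts that every equation on three unknowns admits such a finite parametric description.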
Abstract:
Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing, while at the same time the mathematical background necessary for using formal techniques is a barrier that keeps network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable network development. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, i.e., wireless sensor network architectures, as well as to a pervasive class of static networks, i.e., network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).
Abstract:
After introducing the no-cloning theorem and the most common forms of approximate quantum cloning, universal quantum cloning is considered in detail. The connections it has with the universal NOT gate, quantum cryptography and state estimation are presented and briefly discussed. The state estimation connection is used to show that the amount of extractable classical information and the total Bloch vector length are conserved in universal quantum cloning. The 1 → 2 qubit cloner is also shown to obey a complementarity relation between local and nonlocal information. These results are interpreted as a consequence of the conservation of total information in cloning. Finally, the performance of the 1 → M cloning network discovered by Bužek, Hillery and Knight is studied in the presence of decoherence using the Barenco et al. approach, where random phase fluctuations are attached to 2-qubit gates. The expression for average fidelity is calculated for three cases and is found to depend on the optimal fidelity and the average of the phase fluctuations in a specific way. It is conjectured to be the form of the average fidelity in the general case. While the cloning network is found to be rather robust, it is nevertheless argued that the scalability of the quantum network implementation is poor, by studying the effect of decoherence during the preparation of the initial state of the cloning machine in the 1 → 2 case and observing that the loss in average fidelity can be large. This affirms the result of Maruyama and Knight, who reached the same conclusion in a slightly different manner.
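For reference, the optimal universal cloning fidelities relevant to this discussion are well known: the Bužek-Hillery 1 → 2 qubit cloner and the symmetric universal 1 → M qubit cloner achieve

$$F_{1 \to 2} = \frac{5}{6}, \qquad F_{1 \to M} = \frac{2M + 1}{3M},$$

which recovers $5/6$ at $M = 2$ and approaches the state-estimation limit $2/3$ as $M \to \infty$, consistent with the state-estimation connection mentioned above.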