Abstract:
In this paper we continue Feferman's unfolding program initiated in Feferman (vol. 6 of Lecture Notes in Logic, 1996), which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates, and principles concerning them that are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA of finitist arithmetic (with and without the Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which was introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012).
Abstract:
Weak radiative decays of the B mesons are among the most important flavor-changing processes that provide constraints on physics at the TeV scale. In the derivation of such constraints, accurate standard model predictions for the inclusive branching ratios play a crucial role. In the current Letter we present an update of these predictions, incorporating all our results for the O(α_s^2) and lower-order perturbative corrections that have been calculated after 2006. New estimates of nonperturbative effects are taken into account, too. For the CP- and isospin-averaged branching ratios, we find B_{sγ} = (3.36 ± 0.23) × 10^{-4} and B_{dγ} = (1.73^{+0.12}_{−0.22}) × 10^{-5} for E_γ > 1.6 GeV. Both results remain in agreement with the current experimental averages. Normalizing their sum to the inclusive semileptonic branching ratio, we obtain R_γ ≡ (B_{sγ} + B_{dγ})/B_{cℓν} = (3.31 ± 0.22) × 10^{-3}. A new bound from B_{sγ} on the charged Higgs boson mass in the two-Higgs-doublet model II reads M_{H±} > 480 GeV at 95% C.L.
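As a quick plausibility check on the quoted numbers (not part of the Letter itself), the definition of R_γ can be inverted to recover the semileptonic branching ratio that the central values imply; the sketch below uses only the figures stated above.

```python
# Plausibility check using only the central values quoted in the abstract.
B_s_gamma = 3.36e-4   # B(B -> X_s gamma) for E_gamma > 1.6 GeV
B_d_gamma = 1.73e-5   # B(B -> X_d gamma) for E_gamma > 1.6 GeV
R_gamma   = 3.31e-3   # (B_s_gamma + B_d_gamma) / B(B -> X_c l nu)

# Inverting the normalization gives the implied semileptonic branching ratio.
B_c_lnu = (B_s_gamma + B_d_gamma) / R_gamma
print(f"implied B(B -> X_c l nu) ~ {B_c_lnu:.3f}")   # ~0.107, i.e. roughly 10.7%
```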
Abstract:
The interaction of a comet with the solar wind undergoes various stages as the comet's activity varies along its orbit. For a comet like 67P/Churyumov–Gerasimenko, the target comet of ESA's Rosetta mission, the various features include the formation of a Mach cone, the bow shock, and, close to perihelion, even a diamagnetic cavity. There are different approaches to simulate this complex interplay between the solar wind and the comet's extended neutral gas coma, including magnetohydrodynamic (MHD) and hybrid-type models. The former treats the plasma as fluids (one fluid in basic single-fluid MHD), whereas the latter treats the ions as individual particles under the influence of the local electric and magnetic fields; the electrons are treated as a charge-neutralizing fluid in both cases. Owing to these different approaches, the two models yield different results, in particular for a comet with a low production rate. In this paper we show that these differences can be reduced by using a multifluid instead of a single-fluid MHD model and by increasing the resolution of the hybrid model. We show that some major features obtained with the hybrid-type approach, such as the gyration of the cometary heavy ions and the formation of the Mach cone, can be partially reproduced with the multifluid-type model.
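One way to see why a single-fluid description struggles at low activity is to estimate the gyroradius of a cometary heavy ion; the sketch below assumes a water-group ion picked up at a few hundred km/s in a ~5 nT interplanetary field (illustrative numbers, not values from the paper).

```python
# Back-of-the-envelope gyroradius of a water-group pickup ion.
# Field strength and pickup speed are illustrative assumptions, not values from the paper.
m_p = 1.6726e-27   # proton mass [kg]
q   = 1.6022e-19   # elementary charge [C]

m_ion  = 18 * m_p  # water-group ion (e.g. H2O+), ~18 amu
v_perp = 4.0e5     # assumed pickup speed perpendicular to B [m/s]
B      = 5.0e-9    # assumed interplanetary magnetic field strength [T]

r_g = m_ion * v_perp / (q * B)   # gyroradius r_g = m v_perp / (q B)
print(f"gyroradius ~ {r_g/1e3:.0f} km")   # ~15,000 km
```

When this gyroradius is comparable to the size of the interaction region, the individual gyration of the heavy ions matters; it is captured naturally by the hybrid (particle) treatment and only partially by a multifluid description.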
Abstract:
Aim: The usual hypothesis about the relationship between niche breadth and range size posits that species with the capacity to use a wider range of resources or to tolerate a greater range of environmental conditions should be more widespread. In plants, broader niches are often hypothesized to be due to pronounced phenotypic plasticity, and more plastic species are therefore predicted to be more common. We examined the relationship between the magnitude of phenotypic plasticity in five functional traits, mainly related to leaves, and several measures of abundance in 105 Central European grassland species. We further tested whether mean values of traits, rather than their plasticity, better explain the commonness of species, possibly because they are pre-adapted to exploiting the most common resources. Location: Central Europe. Methods: In a multispecies experiment with 105 species we measured leaf thickness, leaf greenness, specific leaf area, leaf dry matter content and plant height, and the plasticity of these traits in response to fertilization, waterlogging and shading. For the same species we also obtained five measures of commonness, ranging from plot-level abundance to range size in Europe. We then examined whether these measures of commonness were associated with the magnitude of phenotypic plasticity, expressed as composite plasticity of all traits across the experimental treatments. We further estimated the relative importance of trait plasticity and trait means for abundance and geographical range size. Results: More abundant species were less plastic. This negative relationship was fairly consistent across several spatial scales of commonness, but it was weak. Indeed, compared with trait means, plasticity was relatively unimportant for explaining differences in species commonness. Main conclusions: Our results do not indicate that larger phenotypic plasticity of leaf morphological traits enhances species abundance. Furthermore, possession of a particular trait value, rather than of trait plasticity, is a more important determinant of species commonness.
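The abstract does not spell out how the composite plasticity index is computed, so the sketch below uses one common simple choice, the per-trait range of treatment means divided by the maximum, averaged over traits; it illustrates the kind of quantity involved, not the study's exact index.

```python
import numpy as np

def composite_plasticity(trait_means):
    """Per-trait plasticity = (max - min) / max of the treatment means;
    the composite index is the average over traits. One common choice,
    not necessarily the index used in the study."""
    m = np.asarray(trait_means, dtype=float)   # shape: (n_traits, n_treatments)
    per_trait = (m.max(axis=1) - m.min(axis=1)) / m.max(axis=1)
    return per_trait.mean()

# Hypothetical species: rows = traits (e.g. SLA, leaf thickness),
# columns = treatments (control, fertilization, waterlogging, shading).
example = [[20.0, 24.0, 18.0, 30.0],
           [0.25, 0.22, 0.27, 0.20]]
print(round(composite_plasticity(example), 3))
```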
Abstract:
The goal of this paper is to revisit the influential work of Mauro [1995], focusing on the strength of his results under weak identification. He finds a negative impact of corruption on investment and economic growth that appears to be robust to endogeneity when using two-stage least squares (2SLS). Since the publication of Mauro [1995], a substantial literature on 2SLS methods has revealed the dangers of estimation, and hence of inference, under weak identification. We reproduce the original results of Mauro [1995] with a high level of confidence and show that the instrument used in the original work is in fact 'weak' as defined by Staiger and Stock [1997]. We therefore update the analysis using a test statistic robust to weak instruments. Our results suggest that, under Mauro's original model, there is a high probability that the parameters of interest are locally almost unidentified in multivariate specifications. To address this problem, we also investigate other instruments commonly used in the corruption literature and obtain similar results.
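Staiger and Stock's practical diagnostic for a weak instrument is the first-stage F statistic on the excluded instrument; the sketch below illustrates that check on simulated data (not Mauro's dataset), with a deliberately weak first stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 68                                        # hypothetical cross-country sample size
z = rng.normal(size=n)                        # instrument (e.g. an index used in this literature)
corruption = 0.2 * z + rng.normal(size=n)     # endogenous regressor, only weakly related to z
growth = -0.5 * corruption + rng.normal(size=n)   # outcome equation (for context only)

# First stage: regress the endogenous regressor on a constant and the instrument.
Z = np.column_stack([np.ones(n), z])
b, *_ = np.linalg.lstsq(Z, corruption, rcond=None)
resid = corruption - Z @ b
sigma2 = resid @ resid / (n - Z.shape[1])
se_b1 = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
F_first_stage = (b[1] / se_b1) ** 2           # with a single instrument, F = t^2

# Rule of thumb: a first-stage F well below ~10 signals a weak instrument, so 2SLS
# point estimates and conventional standard errors become unreliable and
# identification-robust tests (e.g. Anderson-Rubin) should be used instead.
print(f"first-stage F = {F_first_stage:.1f}")
```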
Abstract:
A problem with a practical application of Varian's Weak Axiom of Cost Minimization (WACM) is that an observed violation may be due to random variation in the output quantities produced by firms rather than to inefficiency on the part of the firm. In this paper, unlike in Varian (1985), the output quantities rather than the input quantities are treated as random, and an alternative statistical test of a violation of WACM is proposed. We assume that there is no technical inefficiency and provide a test of the hypothesis that an observed violation of WACM is merely due to random variation in the output levels of the firms being compared. We suggest an intuitive approach for specifying the value of the variance of the noise term that is needed for the test. The paper includes an illustrative example utilizing a data set relating to a number of U.S. airlines.
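For context, the deterministic check that the proposed statistical test builds on can be written in a few lines: under WACM, if firm j produces at least as much output as firm i, then firm i's observed cost at its own prices should not exceed the cost of buying firm j's input bundle at those prices. The sketch below uses hypothetical data, not the airline data set.

```python
import numpy as np

def wacm_violations(prices, inputs, outputs):
    """Deterministic WACM check: report pairs (i, j) with outputs[j] >= outputs[i]
    but p_i . x_j < p_i . x_i, i.e. firm i could apparently have produced at least
    its output more cheaply. The statistical test then asks whether such violations
    could be explained by random variation in the outputs alone."""
    p = np.asarray(prices, float)
    x = np.asarray(inputs, float)
    y = np.asarray(outputs, float)
    violations = []
    for i in range(len(y)):
        own_cost = p[i] @ x[i]
        for j in range(len(y)):
            if j != i and y[j] >= y[i] and p[i] @ x[j] < own_cost:
                violations.append((i, j))
    return violations

# Hypothetical data: three firms, two inputs.
p = [[1.0, 2.0], [1.5, 1.0], [1.0, 1.0]]   # input prices faced by each firm
x = [[4.0, 3.0], [2.0, 5.0], [3.0, 3.0]]   # observed input bundles
y = [10.0, 12.0, 11.0]                     # observed outputs
print(wacm_violations(p, x, y))            # [(0, 2)]: firm 0 appears cost-inefficient
```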
Abstract:
As teachers of English Phonetics and Phonology, we have noticed that our students, who are Spanish speakers, face a number of difficulties in the perception and production of the weak forms of structure words. We therefore assessed our students' perception and production of weak and strong forms, by means of a listening test and a speaking test, once the first-year students of the Facultad de Filosofía, Humanidades y Artes, Universidad Nacional de San Juan (UNSJ), had completed the training period devoted to these forms. For data collection we used two types of tests: one of perception and one of production. In general, the results corroborated our impression: even after explicit training, the students continued to have greater difficulty with the perception and production of the weak forms of structure words than with the strong forms.
Abstract:
The goal of the AEgIS experiment is to measure the gravitational acceleration of antihydrogen – the simplest atom consisting entirely of antimatter – with the ultimate precision of 1%. We plan to verify the Weak Equivalence Principle (WEP), one of the fundamental laws of nature, with an antimatter beam. The experiment consists of a positron accumulator, an antiproton trap and a Stark accelerator in a solenoidal magnetic field to form and accelerate a pulsed beam of antihydrogen atoms towards a free-fall detector. The antihydrogen beam passes through a moiré deflectometer to measure the vertical displacement due to the gravitational force. A position and time sensitive hybrid detector registers the annihilation points of the antihydrogen atoms and their time-of-flight. The detection principle has been successfully tested with antiprotons and a miniature moiré deflectometer coupled to a nuclear emulsion detector.
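To get a feel for the scale of the measurement, the gravitational deflection over a horizontal flight path of length L at beam velocity v is roughly g(L/v)²/2; the numbers below are illustrative assumptions, not AEgIS design values.

```python
# Illustrative free-fall deflection of a horizontal antihydrogen beam.
# Beam velocity and flight distance are assumed values, not AEgIS design parameters.
g = 9.81      # m/s^2, assuming the WEP holds exactly for antimatter
v = 500.0     # assumed horizontal beam velocity [m/s]
L = 1.0       # assumed flight distance to the detector [m]

t = L / v                 # time of flight
dy = 0.5 * g * t ** 2     # vertical displacement accumulated during the flight
print(f"time of flight = {t*1e3:.1f} ms, deflection = {dy*1e6:.1f} micrometres")
```

Deflections of tens of micrometres over millisecond flight times are why a position- and time-sensitive detector combined with the moiré deflectometer is required.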
Abstract:
This paper analyzes the newly institutionalized political system in democratizing Indonesia, with particular reference to the presidential system. Consensus has not yet been reached among scholars on whether the Indonesian president is strong or weak. This paper tries to answer this question by analyzing the legislative and partisan powers of the Indonesian president. It must be acknowledged, however, that these two kinds of power do not on their own explain the strengths and weaknesses of the president. This paper suggests that in order to fully understand the presidential system in Indonesia, we need to take into account not just the president's legislative and partisan powers, but also the legislative process and the characteristics of coalition government.
Abstract:
In weak grids, an important problem exists with voltage stability and with the coordination of power plant protections. This problem appears because all the generation units are connected to the same bus bar. As a result, if a fault occurs in any of the generation units, or in the bus bar that connects them, the system voltage will exhibit large oscillations. Hence, in weak grids the correct adjustment of the AVR (Automatic Voltage Regulator) is critical. In this work, an experimental study of different AVR adjustments under faults in weak grids is described.
Abstract:
In this work, an improvement of the results presented by Abellanas et al. [1] (Weak Equilibrium in a Spatial Model, International Journal of Game Theory, 40(3), 449–459) is discussed. Concretely, this paper investigates an abstract game of competition between two players who want to earn the maximum number of points from a finite set of points in the plane. It is assumed that the distribution of these points is not uniform, so an appropriate weight is assigned to each position. A definition of equilibrium that is weaker than the classical one is introduced in order to avoid the uniqueness of the equilibrium position that is typical of the Nash equilibrium in these kinds of games. The existence of this approximated equilibrium in the game is analyzed by means of computational geometry techniques.
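As we read the abstract, the underlying payoff structure is that each weighted point is won by the player whose chosen position lies nearer to it; the sketch below implements only this payoff evaluation with hypothetical weights, not the computational-geometry machinery used to analyze the approximated equilibrium.

```python
import numpy as np

def payoffs(pos_a, pos_b, points, weights):
    """Each weighted point is won by the nearer player; ties split the weight.
    Returns (score_a, score_b)."""
    pts = np.asarray(points, float)
    w = np.asarray(weights, float)
    d_a = np.linalg.norm(pts - np.asarray(pos_a, float), axis=1)
    d_b = np.linalg.norm(pts - np.asarray(pos_b, float), axis=1)
    score_a = w[d_a < d_b].sum() + 0.5 * w[d_a == d_b].sum()
    score_b = w[d_b < d_a].sum() + 0.5 * w[d_a == d_b].sum()
    return score_a, score_b

# Hypothetical weighted points in the plane and two candidate player positions.
pts = [(0, 0), (2, 1), (4, 0), (1, 3)]
w = [3.0, 1.0, 2.0, 1.5]
print(payoffs((1, 1), (3, 0), pts, w))   # (5.5, 2.0)
```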
Abstract:
Domestic animals have played a key role in human history. Despite their importance, however, the origins of most domestic species remain poorly understood. We assessed the phylogenetic history and population structure of domestic goats by sequencing a hypervariable segment (481 bp) of the mtDNA control region from 406 goats representing 88 breeds distributed across the Old World. Phylogeographic analysis revealed three highly divergent goat lineages (estimated divergence >200,000 years ago), with one lineage occurring only in eastern and southern Asia. A remarkably similar pattern exists in cattle, sheep, and pigs. These results, combined with recent archaeological findings, suggest that goats and other farm animals have multiple maternal origins with a possible center of origin in Asia, as well as in the Fertile Crescent. The pattern of goat mtDNA diversity suggests that all three lineages have undergone population expansions, but that the expansion was relatively recent for two of the lineages (including the Asian lineage). Goat populations are surprisingly less genetically structured than cattle populations. In goats only ≈10% of the mtDNA variation is partitioned among continents. In cattle the amount is ≥50%. This weak structuring suggests extensive intercontinental transportation of goats and has intriguing implications about the importance of goats in historical human migrations and commerce.
Abstract:
Oscillating electric fields can be rectified by proteins in cell membranes to give rise to a dc transport of a substance across the membrane or a net conversion of a substrate to a product. This provides a basis for signal averaging and may be important for understanding the effects of weak extremely low frequency (ELF) electric fields on cellular systems. We consider the limits imposed by thermal and "excess" biological noise on the magnitude and exposure duration of such electric field-induced membrane activity. Under certain circumstances, the excess noise leads to an increase in the signal-to-noise ratio in a manner similar to processes labeled "stochastic resonance." Numerical results indicate that it is difficult to reconcile biological effects with low field strengths.
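A minimal numerical illustration of the stochastic-resonance-like behaviour mentioned above: a sub-threshold periodic drive produces threshold crossings that track its phase best at an intermediate noise level. All parameters are arbitrary, and the model is a bare threshold detector, not a membrane-protein model.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 20000)            # 50 periods of the drive
drive = 0.5 * np.sin(2 * np.pi * t)          # weak periodic signal, below threshold
threshold = 1.0

def transmitted_component(noise_std):
    """Magnitude of the drive-frequency component of the threshold-crossing output."""
    crossings = (drive + rng.normal(scale=noise_std, size=t.size)) > threshold
    return np.abs(np.mean(crossings * np.exp(2j * np.pi * t)))

# Too little noise: almost no crossings. Too much: crossings at all phases.
# An intermediate noise level transmits the periodic signal best.
for sigma in (0.1, 0.8, 5.0):
    print(f"noise sigma = {sigma}: transmitted component ~ {transmitted_component(sigma):.3f}")
```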
Abstract:
This paper proposes a new feature representation method based on the construction of a Confidence Matrix (CM). This representation consists of the posterior probability values provided by several weak classifiers, each one trained on and applied to a different set of features extracted from the original sample. The CM allows the final classifier to abstract away from discovering underlying groups of features. In this work the CM is applied to isolated character image recognition, for which several sets of features can be extracted from each sample. Experimentation has shown that the use of the CM yields a significant improvement in accuracy in most cases, while in the remaining cases accuracy stays the same. The results were obtained on four well-known corpora, using evolved meta-classifiers with the k-Nearest Neighbor rule as the weak classifier, and were assessed with statistical significance tests.
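A minimal sketch of the representation as we understand it from the abstract: several weak classifiers, each trained on a different subset of the features, output per-class posterior probabilities that are stacked into the Confidence Matrix and passed to a final classifier. scikit-learn is assumed for brevity, the feature partition is arbitrary, and a plain logistic regression stands in for the evolved meta-classifier of the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Isolated character images (digits) as a stand-in corpus.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Disjoint feature subsets, one per weak k-NN classifier (arbitrary partition).
subsets = np.array_split(np.arange(X.shape[1]), 4)
weak = [KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, s], y_tr) for s in subsets]

def confidence_matrix(X_part):
    """Stack the posterior probabilities produced by every weak classifier."""
    return np.hstack([clf.predict_proba(X_part[:, s]) for clf, s in zip(weak, subsets)])

# Final classifier trained on the CM (stand-in for the evolved meta-classifier).
final = LogisticRegression(max_iter=1000).fit(confidence_matrix(X_tr), y_tr)
print("accuracy on the CM representation:",
      round(final.score(confidence_matrix(X_te), y_te), 3))
```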