975 results for GRAVITATIONAL LENSING: WEAK
Abstract:
Food security is under growing pressure: a rising world population coupled with climate change strains global food supplies. States alleviate this pressure domestically by attracting agricultural foreign direct investment (agri-FDI). This is a high-risk strategy for weak states: the state may gain valuable foreign currency, technology and debt-free growth; but equally, investors may fail to deliver on their commitments and exploit weak domestic legal infrastructure to 'grab' large areas of prime agricultural land, leaving only marginal land for domestic production. The result is a net loss to local food security and to the national economy. This is problematic because the state must continue to guarantee its citizens' right to food and property. Agri-FDI therefore needs close regulation to maximise its benefit. This article maps the multilevel system of governance covering agri-FDI. We show how this system creates asymmetric rights in favour of the investor, to the detriment of the host state's food security, and how these problems might be alleviated.
Abstract:
We tested a core assumption of the bidirectional model of executive function (EF) (Blair & Ursache, 2011), namely that EF is dependent on arousal. From a bottom-up perspective, performance on EF tasks is assumed to be curvilinearly related to arousal, with very high or very low levels of arousal impairing EF. The performance of N = 107 4- and 6-year-olds on EF tasks was explored as a function of a weak stress manipulation aiming to raise children's emotional arousal. EF (Stroop, Flanker, Go/No-go, and Backwards Color Recall) was assessed, and stress was induced in half of the children by imposing a mild social-evaluative threat. Furthermore, children's temperament was assessed as a potential moderator. We found that stress effects on children's EF performance were moderated by age and temperament: 4-year-olds with high Inhibitory Control and high Attentional Focusing were negatively affected by the stressor. However, it is unclear whether these effects were mediated by self-reported arousal. Our findings disconfirmed the hypothesis that adverse effects of the stressor are particularly strong in children high on emotional-reactivity aspects of temperament and low on self-regulatory aspects. Further, 6-year-olds did not show any stress effects. Results are discussed within the framework of the Yerkes–Dodson law and with regard to stress manipulations in children.
Abstract:
The main goal of the AEgIS experiment is to measure the local gravitational acceleration of antihydrogen, ḡ, and thus perform a direct test of the weak equivalence principle with antimatter. In the first phase of the experiment the aim is to measure ḡ with 1% relative precision. This paper presents the antihydrogen production method and a description of some components of the experiment which are necessary for the gravity measurement. The current status of the AEgIS experimental apparatus is presented and recent commissioning results with antiprotons are outlined. In conclusion we discuss the short-term goals of the AEgIS collaboration that will pave the way for the first gravity measurement in the near future.
Abstract:
Antihydrogen holds the promise to test, for the first time, the universality of free fall with a system composed entirely of antiparticles. The AEgIS experiment at CERN's Antiproton Decelerator aims to measure the gravitational interaction between matter and antimatter by measuring the deflection of a beam of antihydrogen in the Earth's gravitational field (g). The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning–Malmberg trap, are Stark accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is the spatial precision of the position-sensitive detector. We propose a novel free-fall detector based on a hybrid of two technologies: emulsion detectors, which have an intrinsic spatial resolution of 50 nm but no temporal information, and a silicon strip / scintillating fiber tracker to provide timing and positional information. In 2012 we tested emulsion films in vacuum with antiprotons from CERN's Antiproton Decelerator. The annihilation vertices could be observed directly on the emulsion surface using the microscope facility available at the University of Bern. The annihilation vertices were successfully reconstructed with a resolution of 1–2 μm on the impact parameter. If such a precision can be realized in the final detector, Monte Carlo simulations suggest that of order 500 antihydrogen annihilations will be sufficient to determine g with 1% accuracy. This paper presents current research towards the development of this technology for use in the AEgIS apparatus and prospects for the realization of the final detector.
AEgIS Experiment: Measuring the acceleration g of the Earth's gravitational field on an antihydrogen beam
Abstract:
The precise measurement of forces is one way to obtain deep insight into the fundamental interactions present in nature. In the context of neutral antimatter, the gravitational interaction is of high interest, potentially revealing new forces that violate the weak equivalence principle. Here we report on a successful extension of a tool from atom optics—the moiré deflectometer—for a measurement of the acceleration of slow antiprotons. The setup consists of two identical transmission gratings and a spatially resolving emulsion detector for antiproton annihilations. Absolute referencing of the observed antimatter pattern with a photon pattern experiencing no deflection allows the direct inference of forces present. The concept is also straightforwardly applicable to antihydrogen measurements as pursued by the AEgIS collaboration. The combination of these very different techniques from high energy and atomic physics opens a very promising route to the direct detection of the gravitational acceleration of neutral antimatter.
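The deflectometer described above turns a force measurement into a pattern-shift measurement. A minimal sketch of the underlying kinematics, under the standard three-plane moiré relation (the numbers below are illustrative only, not AEgIS parameters):

```python
def moire_shift(a, tau):
    """Parabolic sag of a trajectory sampled at three equally spaced
    times 0, tau, 2*tau: x(0) - 2*x(tau) + x(2*tau) = a * tau**2.
    This finite-difference identity is what lets a moire deflectometer
    turn an acceleration into a fringe-pattern displacement."""
    return a * tau**2

def infer_acceleration(shift, tau):
    # Invert the relation: the measured shift of the particle pattern
    # relative to the undeflected photon reference pattern gives a.
    return shift / tau**2

# Illustrative numbers only: 10 ms between planes under Earth's gravity
tau = 0.01
shift = moire_shift(9.81, tau)          # ~1 mm sag
print(infer_acceleration(shift, tau))   # recovers the input acceleration
```

The absolute referencing with a photon pattern, as described in the abstract, is what removes alignment systematics from the inferred shift.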
Abstract:
A measurement of the B⁰_s → J/ψφ decay parameters, updated to include flavor tagging, is reported using 4.9 fb⁻¹ of integrated luminosity collected by the ATLAS detector from √s = 7 TeV pp collisions recorded in 2011 at the LHC. The values measured for the physical parameters are:

ϕ_s = 0.12 ± 0.25 (stat) ± 0.05 (syst) rad
ΔΓ_s = 0.053 ± 0.021 (stat) ± 0.010 (syst) ps⁻¹
Γ_s = 0.677 ± 0.007 (stat) ± 0.004 (syst) ps⁻¹
|A_∥(0)|² = 0.220 ± 0.008 (stat) ± 0.009 (syst)
|A_0(0)|² = 0.529 ± 0.006 (stat) ± 0.012 (syst)
δ_⊥ = 3.89 ± 0.47 (stat) ± 0.11 (syst) rad

where the parameter ΔΓ_s is constrained to be positive. The S-wave contribution was measured and found to be compatible with zero. Results for ϕ_s and ΔΓ_s are also presented as 68% and 95% likelihood contours, which show agreement with the Standard Model expectations.
Abstract:
A well-developed theoretical framework is available in which paleofluid properties, such as chemical composition and density, can be reconstructed from fluid inclusions in minerals that have undergone no ductile deformation. The present study extends this framework to encompass fluid inclusions hosted by quartz that has undergone weak ductile deformation following fluid entrapment. Recent experiments have shown that such deformation causes inclusions to become dismembered into clusters of irregularly shaped relict inclusions surrounded by planar arrays of tiny, newly formed (neonate) inclusions. Comparison of the experimental samples with a naturally sheared quartz vein from Grimsel Pass, Aar Massif, Central Alps, Switzerland, reveals striking similarities. This strong concordance justifies applying the experimentally derived rules of fluid inclusion behaviour to nature. Thus, planar arrays of dismembered inclusions defining cleavage planes in quartz may be taken as diagnostic of small amounts of intracrystalline strain. Deformed inclusions preserve their pre-deformation concentration ratios of gases to electrolytes, but their H2O contents typically have changed. Morphologically intact inclusions, in contrast, preserve the pre-deformation composition and density of their originally trapped fluid. The orientation of the maximum principal compressive stress (σ1) at the time of shear deformation can be derived from the pole to the cleavage plane within which the dismembered inclusions are aligned. Finally, the density of neonate inclusions is commensurate with the pressure value of σ1 at the temperature and time of deformation. This last rule offers a means to estimate magnitudes of shear stresses from fluid inclusion studies.
Application of this new paleopiezometer approach to the Grimsel vein yields a differential stress (σ1 − σ3) of ∼300 MPa at 390 ± 30 °C during late Miocene NNW–SSE orogenic shortening and regional uplift of the Aar Massif. This differential stress resulted in strain-hardening of the quartz at very low total strain (<5%) while nearby shear zones were accommodating significant displacements. Further implementation of these experimentally derived rules should provide new insight into processes of fluid–rock interaction in the ductile regime within the Earth's crust.
Abstract:
In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996) which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).
Abstract:
Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
Abstract:
Weak radiative decays of the B mesons belong to the most important flavor-changing processes that provide constraints on physics at the TeV scale. In the derivation of such constraints, accurate Standard Model predictions for the inclusive branching ratios play a crucial role. In the current Letter we present an update of these predictions, incorporating all our results for the O(α_s²) and lower-order perturbative corrections that have been calculated after 2006. New estimates of nonperturbative effects are taken into account, too. For the CP- and isospin-averaged branching ratios, we find B_sγ = (3.36 ± 0.23) × 10⁻⁴ and B_dγ = (1.73 +0.12/−0.22) × 10⁻⁵, for E_γ > 1.6 GeV. Both results remain in agreement with the current experimental averages. Normalizing their sum to the inclusive semileptonic branching ratio, we obtain R_γ ≡ (B_sγ + B_dγ)/B_cℓν = (3.31 ± 0.22) × 10⁻³. A new bound from B_sγ on the charged Higgs boson mass in the Two-Higgs-Doublet Model II reads M_H± > 480 GeV at 95% C.L.
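The quoted central values are mutually consistent, which can be checked with one line of arithmetic. The semileptonic branching ratio B_cℓν is not given in the abstract; the ≈10.7% used below is an assumed nominal value for illustration:

```python
# Sanity-check the quoted ratio R_gamma = (B_sgamma + B_dgamma) / B_clnu.
B_s_gamma = 3.36e-4   # B(B -> X_s gamma), central value from the abstract
B_d_gamma = 1.73e-5   # B(B -> X_d gamma), central value from the abstract
B_c_lnu   = 0.107     # B(B -> X_c l nu); assumed nominal value, not in the abstract

R_gamma = (B_s_gamma + B_d_gamma) / B_c_lnu
print(f"{R_gamma:.2e}")  # close to the quoted central value 3.31e-3
```

The agreement is well within the quoted ±0.22 × 10⁻³ uncertainty.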
Abstract:
The interaction of a comet with the solar wind undergoes various stages as the comet's activity varies along its orbit. For a comet like 67P/Churyumov–Gerasimenko, the target comet of ESA's Rosetta mission, the various features include the formation of a Mach cone, the bow shock, and, close to perihelion, even a diamagnetic cavity. There are different approaches to simulate this complex interplay between the solar wind and the comet's extended neutral gas coma, including magnetohydrodynamics (MHD) and hybrid-type models. The former treats the plasma as fluids (one fluid in basic single-fluid MHD), and the latter treats the ions as individual particles under the influence of the local electric and magnetic fields. The electrons are treated as a charge-neutralizing fluid in both cases. Given the different approaches, the two models yield different results, in particular for a low-production-rate comet. In this paper we show that these differences can be reduced by using a multifluid instead of a single-fluid MHD model and by increasing the resolution of the hybrid model. We show that some major features obtained with a hybrid-type approach, such as the gyration of the cometary heavy ions and the formation of the Mach cone, can be partially reproduced with the multifluid-type model.
Abstract:
Aim The usual hypothesis about the relationship between niche breadth and range size posits that species with the capacity to use a wider range of resources or to tolerate a greater range of environmental conditions should be more widespread. In plants, broader niches are often hypothesized to be due to pronounced phenotypic plasticity, and more plastic species are therefore predicted to be more common. We examined the relationship between the magnitude of phenotypic plasticity in five functional traits, mainly related to leaves, and several measures of abundance in 105 Central European grassland species. We further tested whether mean values of traits, rather than their plasticity, better explain the commonness of species, possibly because they are pre-adapted to exploiting the most common resources. Location Central Europe. Methods In a multispecies experiment with 105 species we measured leaf thickness, leaf greenness, specific leaf area, leaf dry matter content and plant height, and the plasticity of these traits in response to fertilization, waterlogging and shading. For the same species we also obtained five measures of commonness, ranging from plot-level abundance to range size in Europe. We then examined whether these measures of commonness were associated with the magnitude of phenotypic plasticity, expressed as composite plasticity of all traits across the experimental treatments. We further estimated the relative importance of trait plasticity and trait means for abundance and geographical range size. Results More abundant species were less plastic. This negative relationship was fairly consistent across several spatial scales of commonness, but it was weak. Indeed, compared with trait means, plasticity was relatively unimportant for explaining differences in species commonness. Main conclusions Our results do not indicate that larger phenotypic plasticity of leaf morphological traits enhances species abundance. 
Furthermore, possession of a particular trait value, rather than of trait plasticity, is a more important determinant of species commonness.
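The abstract's "composite plasticity of all traits across the experimental treatments" can be made concrete with a common simplicity index, (max − min)/max per trait, averaged over traits. The index choice and the species data below are illustrative assumptions, not the study's actual method or measurements:

```python
import numpy as np

def trait_plasticity(values):
    """Plasticity of one trait across treatments, using the common
    index (max - min) / max. Returns 0 for no plastic response."""
    values = np.asarray(values, dtype=float)
    return float((values.max() - values.min()) / values.max())

def composite_plasticity(trait_matrix):
    """Average per-trait indices into one composite score per species.
    Rows = traits, columns = treatments (e.g. control, fertilization,
    waterlogging, shading)."""
    return float(np.mean([trait_plasticity(row) for row in trait_matrix]))

# Hypothetical species: two traits measured under four treatments
species = [
    [22.0, 30.0, 25.0, 28.0],   # specific leaf area
    [0.40, 0.38, 0.42, 0.35],   # leaf thickness (mm)
]
print(round(composite_plasticity(species), 3))
```

A score like this, computed per species, is the kind of predictor that would then be regressed against the five measures of commonness.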
Abstract:
The goal of this paper is to revisit the influential work of Mauro [1995], focusing on the strength of his results under weak identification. He finds a negative impact of corruption on investment and economic growth that appears to be robust to endogeneity when using two-stage least squares (2SLS). Since the inception of Mauro [1995], much of the literature has focused on 2SLS methods, revealing the dangers of estimation, and thus inference, under weak identification. We reproduce the original results of Mauro [1995] with a high level of confidence and show that the instrument used in the original work is in fact 'weak' as defined by Staiger and Stock [1997]. We therefore update the analysis using a test statistic robust to weak instruments. Our results suggest that under Mauro's original model there is a high probability that the parameters of interest are locally almost unidentified in multivariate specifications. To address this problem, we also investigate other instruments commonly used in the corruption literature and obtain similar results.
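The Staiger–Stock weak-instrument diagnostic referenced above is, in the single-instrument case, just the first-stage F statistic with the rule of thumb F < 10. A minimal sketch on simulated data (variable names and coefficients are illustrative, not Mauro's):

```python
import numpy as np

def first_stage_F(z, x):
    """First-stage F statistic for one endogenous regressor x and one
    instrument z (intercept included). By the Staiger-Stock rule of
    thumb, F < 10 signals a weak instrument."""
    n = len(x)
    Z = np.column_stack([np.ones(n), z])
    beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ beta
    rss = float(resid @ resid)
    tss = float(np.sum((x - x.mean()) ** 2))
    # With one instrument: F = (TSS - RSS) / (RSS / (n - 2))
    return (tss - rss) / (rss / (n - 2))

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
x_strong = 0.8 * z + rng.normal(size=n)    # instrument explains x well
x_weak = 0.05 * z + rng.normal(size=n)     # instrument barely explains x
print(first_stage_F(z, x_strong), first_stage_F(z, x_weak))
```

When F falls below the threshold, 2SLS point estimates and standard errors become unreliable, which is why the paper switches to a test statistic robust to weak instruments.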