946 results for "rotated to zero"
Abstract:
We try to explain why economic conflicts and illegal business often take place in poor countries. We use the concept of a subsistence level of consumption (d) and assume a regular concave utility function for consumption levels higher than d. For consumption levels lower than d, utility is constant and equal to zero. Under this framework, poor agents are risk-loving. This result helps to explain why economic conflicts are more likely to appear in poor economies and why poor agents are more willing to undertake illegal business.
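A minimal numerical sketch of the mechanism described above, assuming a specific concave form (logarithmic) above the subsistence level d; the functional form, the gamble and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def utility(c, d=1.0):
    """Utility is zero for consumption at or below the subsistence level d and
    concave above it (a log form is assumed here purely for illustration)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= d, 0.0, np.log(c / d))

d = 1.0
c0 = 0.8 * d                                      # a poor agent, below subsistence
gamble = np.array([c0 - 0.5 * d, c0 + 0.5 * d])   # fair 50/50 gamble around c0

u_certain = float(utility(c0, d))                 # utility of the sure outcome
eu_gamble = float(utility(gamble, d).mean())      # expected utility of the gamble

print(f"U(certain) = {u_certain:.2f}, E[U(gamble)] = {eu_gamble:.2f}")
# The flat, zero-utility segment below d makes the downside of the gamble
# costless in utility terms, so the poor agent prefers the fair gamble:
# the agent is locally risk-loving, as the abstract argues.
```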
Abstract:
Traditionally, school efficiency has been measured as a function of educational production. In the last two decades, however, studies in the economics of education have indicated that more is required to improve school efficiency: researchers must explore how significant changes in school organization affect the performance of at-risk students. In this paper we introduce Henry Levin's adaptation of the X-efficiency approach to education and we describe the efficient and cost-effective characteristics of one Learning Communities Project school that significantly improved its student outcomes and enrollment numbers and reduced its absenteeism rate to zero. The organizational change that facilitated these improvements defined specific issues to address. Students' school success became the focus of the school project, which also offered specific incentives, selected teachers, involved parents and community members in decisions, and used the most efficient technologies and methods. This case analysis reveals two new elements—family training and community involvement—that were not explicit parts of Levin's adaptation. The case of the Antonio Machado Public School should attract the attention of both social scientists and policy makers.
Abstract:
Without corrective measures, Greek public debt will exceed 190 percent of GDP, instead of peaking at the already too-high target ratio of 167 percent of GDP set in the March 2012 financial assistance programme. The rise is largely due to a negative feedback loop between high public debt and the collapse in GDP, and it endangers Greek membership of the euro area. But a Greek exit would have devastating impacts both inside and outside Greece. A small reduction in the interest rate on bilateral loans, the exchange of European Central Bank holdings, a buy-back of privately-held debt, and frontloading of some privatisation receipts are unlikely to be sufficient. A credible resolution should involve the reduction of the official lending rate to zero until 2020, an extension of the maturity of all official lending, and indexing the notional amount of all official loans to Greek GDP. Thereby, the debt ratio would fall below 100 percent of GDP by 2020, and if the economy deteriorates further there would be no need for new arrangements; but if growth is better than expected, official creditors would also benefit. In exchange for such help, the fiscal sovereignty of Greece should be curtailed further. An extended privatisation plan and future budget surpluses may be used to pay back the debt relief. The Greek fiscal tragedy highlights the need for a formal debt restructuring mechanism.
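The feedback loop between high debt and collapsing GDP can be made explicit with the standard debt-dynamics identity; the notation below is a textbook sketch, not taken from the abstract:

```latex
% Standard debt-dynamics identity (textbook sketch; symbols are not the paper's):
%   d_t  : debt-to-GDP ratio,   r : effective nominal interest rate on the debt,
%   g    : nominal GDP growth,  pb_t : primary budget balance as a share of GDP.
\[
  d_{t} \;=\; \frac{1+r}{1+g}\, d_{t-1} \;-\; pb_{t}
\]
% When d_{t-1} is large and g turns sharply negative, the factor (1+r)/(1+g)
% exceeds one and inflates the ratio even without new borrowing. Cutting the
% official lending rate r towards zero and indexing the loans to GDP both act
% on this term, which is why the proposed package targets it directly.
```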
Abstract:
Turbulence statistics obtained by direct numerical simulations are analysed to investigate spatial heterogeneity within regular arrays of building-like cubical obstacles. Two different array layouts are studied, staggered and square, both at a packing density of $\lambda_p = 0.25$. The flow statistics analysed are mean streamwise velocity ($\overline{u}$), shear stress ($\overline{u'w'}$), turbulent kinetic energy ($k$) and dispersive stress fraction ($\tilde{u}\tilde{w}$). The spatial flow patterns and spatial distribution of these statistics in the two arrays are found to be very different. Local regions of high spatial variability are identified. The overall spatial variances of the statistics are shown to be generally very significant in comparison with their spatial averages within the arrays. Above the arrays the spatial variances as well as dispersive stresses decay rapidly to zero. The heterogeneity is explored further by separately considering six different flow regimes identified within the arrays, described here as: channelling region, constricted region, intersection region, building wake region, canyon region and front-recirculation region. It is found that the flow in the first three regions is relatively homogeneous, but that spatial variances in the latter three regions are large, especially in the building wake and canyon regions. The implication is that, in general, the flow immediately behind (and, to a lesser extent, in front of) a building is much more heterogeneous than elsewhere, even in the relatively dense arrays considered here. Most of the dispersive stress is concentrated in these regions. Considering the experimental difficulties of obtaining enough point measurements to form a representative spatial average, the error incurred by degrading the sampling resolution is investigated. It is found that a good estimate for both area and line averages can be obtained using a relatively small number of strategically located sampling points.
Abstract:
The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to the researcher to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods to analyze weather/climate modes of variability and to reduce the dimensionality of the system. Simple structure rotation of EOFs can enhance the interpretability of the obtained patterns but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. The ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond the simple noncorrelation observed in EOFs. The methodology is presented and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. There is the suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
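A short sketch of the two-step procedure described above (EOFs first, then a rotation of the retained components towards temporal independence), using scikit-learn on synthetic data; the data, library choice and component count are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Synthetic stand-in for a (time x grid-point) anomaly field built from two
# independent, non-Gaussian source time series mixed over space.
rng = np.random.default_rng(0)
n_time, n_grid = 600, 100
sources = np.column_stack([np.sign(rng.standard_normal(n_time)),
                           rng.laplace(size=n_time)])
X = sources @ rng.standard_normal((2, n_grid)) \
    + 0.1 * rng.standard_normal((n_time, n_grid))

# Step 1: conventional EOF analysis (PCA) for dimensionality reduction.
pca = PCA(n_components=2)
pcs = pca.fit_transform(X)        # principal-component (EOF) time series
eofs = pca.components_            # EOF spatial loadings

# Step 2: ICA applied to the retained PCs, i.e. a rotation of the EOF solution
# chosen to maximise temporal independence rather than spatial simplicity.
ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(pcs)                 # independent time coefficients
rotated_patterns = ica.mixing_.T @ eofs      # corresponding spatial patterns

# Both PCs and ICs are uncorrelated, but only the ICs line up with the
# underlying independent sources.
def best_match(comps):
    return [max(abs(np.corrcoef(comps[:, j], sources[:, i])[0, 1])
                for j in range(2)) for i in range(2)]

print("PCs vs true sources:", np.round(best_match(pcs), 2))
print("ICs vs true sources:", np.round(best_match(ics), 2))
```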
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined.
Abstract:
On the time scale of a century, the Atlantic thermohaline circulation (THC) is sensitive to the global surface salinity distribution. The advection of salinity toward the deep convection sites of the North Atlantic is one of the driving mechanisms for the THC. There are both northward and southward contributions. The northward salinity advection (Nsa) is related to evaporation in the subtropics and contributes to increased salinity in the convection sites. The southward salinity advection (Ssa) is related to the Arctic freshwater forcing and tends, on the contrary, to diminish salinity in the convection sites. THC changes result from a delicate balance between these opposing mechanisms. In this study we evaluate these two effects using the IPSL-CM4 ocean-atmosphere-sea-ice coupled model (used for IPCC AR4). Perturbation experiments have been integrated for 100 years under modern insolation and trace gases. River runoff and evaporation minus precipitation are successively set to zero for the ocean during the coupling procedure. This allows the effects of processes Nsa and Ssa to be estimated with their specific time scales. It is shown that the convection sites in the North Atlantic exhibit various sensitivities to these processes. The Labrador Sea exhibits a dominant sensitivity to local forcing and Ssa with a typical time scale of 10 years, whereas the Irminger Sea is mostly sensitive to Nsa with a 15-year time scale. The GIN Seas respond to both effects with a time scale of 10 years for Ssa and 20 years for Nsa. It is concluded that, in the IPSL-CM4, the global freshwater forcing damps the THC on centennial time scales.
Abstract:
Ab initio calculations of the energy have been made at approximately 150 points on the two lowest singlet A′ potential energy surfaces of the water molecule, 1 ¹A′ and 2 ¹A′, covering structures having D∞h, C∞v, C2v and Cs symmetries. The object was to obtain an ab initio surface of uniform accuracy over the whole three-dimensional coordinate space. Molecular orbitals were constructed from a double-zeta plus Rydberg basis, and correlation was introduced by single and double excitations from multiconfiguration states which gave the correct dissociation behaviour. A two-valued analytical potential function has been constructed to fit these ab initio energy calculations. The adiabatic energies are given in our analytical function as the eigenvalues of a 2 × 2 matrix, whose diagonal elements define two diabatic surfaces. The off-diagonal element goes to zero for those configurations corresponding to surface intersections, so that our adiabatic surface exhibits the correct Σ/Π conical intersections for linear configurations, and singlet/triplet intersections of the O + H2 dissociation fragments. The agreement between our analytical surface and experiment has been improved by using empirical diatomic potential curves in place of those derived from ab initio calculations.
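The two-valued construction described above can be written compactly; the symbols below are generic notation for a 2 × 2 diabatic matrix, not the paper's own:

```latex
% Generic two-valued (diabatic -> adiabatic) construction; V_{11}, V_{22} are the
% diabatic surfaces and V_{12} the coupling (notation assumed, not the paper's).
\[
  \mathbf{V}(\mathbf{R}) =
  \begin{pmatrix}
    V_{11}(\mathbf{R}) & V_{12}(\mathbf{R}) \\
    V_{12}(\mathbf{R}) & V_{22}(\mathbf{R})
  \end{pmatrix},
  \qquad
  E_{\pm}(\mathbf{R}) = \tfrac{1}{2}\left(V_{11}+V_{22}\right)
  \pm \sqrt{\tfrac{1}{4}\left(V_{11}-V_{22}\right)^{2} + V_{12}^{2}} .
\]
% The adiabatic sheets E_- and E_+ can only touch where V_{12} = 0 and
% V_{11} = V_{22}; making the off-diagonal element vanish at the appropriate
% linear configurations is what reproduces the Sigma/Pi conical intersections.
```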
Abstract:
The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation. The parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors disposing towards accurate estimation of theta by simulating populations regulated according to the theta-logistic model. Factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero. Where estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical conclusions for the design of long-term population surveys. Estimation of the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for estimation of theta.
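A compact simulation sketch of the kind of experiment described above: populations are generated from the theta-logistic model with environmental noise and optional lognormal measurement error, and theta is re-estimated from the noisy counts. The model form is standard; all parameter values, the fitting method and the noise levels are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def simulate(theta, r_max=0.5, K=100.0, n_years=30, sigma_env=0.1, sigma_obs=0.0):
    """Theta-logistic dynamics with environmental noise and (optionally)
    lognormal measurement error applied to the reported counts."""
    N = np.empty(n_years)
    N[0] = 20.0
    for t in range(n_years - 1):
        r = r_max * (1.0 - (N[t] / K) ** theta) + sigma_env * rng.standard_normal()
        N[t + 1] = N[t] * np.exp(r)
    return N * np.exp(sigma_obs * rng.standard_normal(n_years))

def growth_model(N, r_max, K, theta):
    # Expected per-capita growth rate as a function of population size.
    return r_max * (1.0 - (N / K) ** theta)

def fit_theta(counts):
    growth = np.diff(np.log(counts))          # observed per-capita growth rates
    popt, _ = curve_fit(growth_model, counts[:-1], growth,
                        p0=[0.5, counts.max(), 1.0],
                        bounds=([0.01, 1.0, 0.01], [5.0, 1000.0, 10.0]),
                        maxfev=20000)
    return popt[2]

true_theta = 1.0
for sigma_obs in (0.0, 0.3):
    est = [fit_theta(simulate(true_theta, sigma_obs=sigma_obs)) for _ in range(200)]
    print(f"sigma_obs = {sigma_obs}: median estimated theta = {np.median(est):.2f}")
# The abstract reports that large measurement errors bias the estimates towards
# zero; comparing the two runs above is the kind of simulation experiment used
# to quantify that bias and the mitigating effect of environmental perturbation.
```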
Abstract:
Three new metal-organic polymeric complexes, [Fe(N₃)₂(bpp)₂] (1), [Fe(N₃)₂(bpe)] (2), and [Fe(N₃)₂(phen)] (3) [bpp = 1,3-bis(4-pyridyl)propane, bpe = 1,2-bis(4-pyridyl)ethane, phen = 1,10-phenanthroline], have been synthesized and characterized by single-crystal X-ray diffraction studies and low-temperature magnetic measurements in the range 300-2 K. Complexes 1 and 2 crystallize in the monoclinic system, space group C2/c, with the following cell parameters: a = 19.355(4) Å, b = 7.076(2) Å, c = 22.549(4) Å, β = 119.50(3)°, Z = 4, and a = 10.007(14) Å, b = 13.789(18) Å, c = 10.377(14) Å, β = 103.50(1)°, Z = 4, respectively. Complex 3 crystallizes in the triclinic system, space group P1̄, with a = 7.155(12) Å, b = 10.066(14) Å, c = 10.508(14) Å, α = 109.57(1)°, β = 104.57(1)°, γ = 105.10(1)°, and Z = 2. All coordination polymers exhibit octahedral Fe(II) nodes. The structural determination of 1 reveals a parallel interpenetrated structure of 2D layers of (4,4) topology, formed by Fe(II) nodes linked through bpp ligands, while mono-coordinated azide anions are pendant from the corrugated sheet. Complex 2 has a 2D arrangement constructed through 1D double end-to-end azide-bridged iron(II) chains interconnected through bpe ligands. Complex 3 shows a polymeric arrangement where the metal ions are interlinked through pairs of end-on and end-to-end azide ligands exhibiting a zigzag arrangement of metals (Fe-Fe-Fe angle of 111.18°) and an intermetallic separation of 3.347 Å (through the EO azide) and of 5.229 Å (EE azide). Variable-temperature magnetic susceptibility data suggest that there is no magnetic interaction between the metal centers in 1, whereas in 2 there is an antiferromagnetic interaction through the end-to-end azide bridge. Complex 3 shows ferro- as well as antiferromagnetic interactions between the metal centers generated through the alternating end-on and end-to-end azide bridges. Complex 1 has been modeled using the D parameter (considering distorted octahedral Fe(II) geometry and with any possible J value equal to zero), and complex 2 has been modeled as a one-dimensional system with classical and/or quantum spin, where we have used two possible full diagonalization processes: without and with the D parameter, considering the important distortions of the Fe(II) ions. For complex 3, the alternating coupling model impedes a mathematical solution for the modeling as classical spins. With quantum spin, the modeling has been carried out as in 2.
Abstract:
A remote haploscopic photorefractor was used to assess objective binocular vergence and accommodation responses in 157 full-term healthy infants aged 1-6 months while they fixated a brightly coloured target moving between fixation distances of 2, 1, 0.5 and 0.33 m. Vergence and accommodation response gains matured rapidly from 'flat' neonatal responses at an intercept of approximately 2 dioptres (D) for accommodation and 2.5 metre angles (MA) for vergence, reaching adult-like values at 4 months. Vergence gain was marginally higher in females (p = 0.064), but accommodation gain was higher (p = 0.034) and the accommodative intercept closer to zero (p = 0.004) in males in the first 3 months, as they relaxed accommodation more appropriately for distant targets. More females showed flat accommodation responses (p = 0.029). More males behaved hypermetropically in the first two months of life, but when these hypermetropic infants were excluded from the analysis, the gender difference remained. Gender differences disappeared after three months. Data showed variable responses, and infants could behave appropriately on both measures simultaneously, on neither, or on only one, at all ages. If accommodation was appropriate (gain between 0.7 and 1.3; r² > 0.7) but vergence was not, males over- and under-converged equally, while the females who accommodated appropriately were more likely to overconverge (p = 0.008). The apparent earlier maturity of the male accommodative responses may be due to refractive error differences, but could also reflect a gender-specific male preference for blur cues while females show an earlier preference for disparity, which may underpin the earlier-emerging, disparity-dependent stereopsis and full vergence found in females in other studies.
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than that of the real-number line.
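A toy sketch of the arithmetic rules described above (totalised division, signed infinities, and a nullity that absorbs). The class below is an illustrative reconstruction from the abstract, not Anderson's published axiomatisation, and it covers only the operations used in the example:

```python
import math

class Transreal:
    """Toy model of transreal arithmetic as described in the abstract:
    totalised reals with +inf, -inf and nullity (0/0). Nullity is stored as
    NaN purely for convenience; the rule set is a sketch, not the full axioms."""

    def __init__(self, value):
        self.v = float(value)

    def __repr__(self):
        return "nullity" if math.isnan(self.v) else repr(self.v)

    def __mul__(self, other):
        a, b = self.v, other.v
        if math.isnan(a) or math.isnan(b):
            return Transreal(math.nan)                    # nullity absorbs
        if (a == 0 and math.isinf(b)) or (b == 0 and math.isinf(a)):
            return Transreal(math.nan)                    # 0 x infinity = nullity
        return Transreal(a * b)

    def __truediv__(self, other):
        a, b = self.v, other.v
        if math.isnan(a) or math.isnan(b):
            return Transreal(math.nan)
        if b == 0:
            if a == 0:
                return Transreal(math.nan)                # 0/0 = nullity
            return Transreal(math.copysign(math.inf, a))  # a/0 = +/- infinity
        return Transreal(a / b)

zero, one = Transreal(0), Transreal(1)
inf = one / zero                      # positive infinity
nullity = zero / zero                 # nullity, the number off the number line

# The product of zero, an infinity, or nullity with its reciprocal is nullity, not one:
print(zero * (one / zero))            # nullity
print(inf * (one / inf))              # nullity
print(nullity * (one / nullity))      # nullity
```

Representing nullity with NaN is only a storage convenience; unlike IEEE NaN, nullity here is treated as a single, well-defined number with fixed semantics, in line with the abstract's contrast with floating-point arithmetic.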
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, when Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, that indicate that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air–sea interface, and that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise fluctuations suppressed. Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking at the air–sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations, which are considerably enhanced at the interface. After a time of order half the eddy decorrelation time, nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis then provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget found by previous authors.
Abstract:
The problem of reconstructing the (otherwise unknown) source and sink field of a tracer in a fluid is studied by developing and testing a simple tracer transport model of a single-level global atmosphere and a dynamic data assimilation system. The source/sink field (taken to be constant over a 10-day assimilation window) and the initial tracer field are analysed together by assimilating imperfect tracer observations over the window. Experiments show that useful information about the source/sink field may be determined from relatively few observations when the initial tracer field is known very accurately a priori, even when the a priori source/sink information is biased (the a priori source/sink field is set to zero). In this case each observation provides information about the source/sink field at positions upstream, and the assimilation of many observations together can reasonably determine the location and strength of a test source.
Abstract:
Using 4 years of radar and lidar observations of layer clouds from the Chilbolton Observatory in the UK, we show that almost all (95%) ice particles formed at temperatures > -20°C appear to originate from supercooled liquid clouds. At colder temperatures, there is a monotonic decline in the fraction of liquid-topped ice clouds: 50% at -27°C, falling to zero at -37°C (where homogeneous freezing of water droplets occurs). This strongly suggests that deposition nucleation plays a relatively minor role in the initiation of ice in mid-level clouds. It also means that the initial growth of the ice particles occurs predominantly within a liquid cloud, a situation which promotes rapid production of precipitation via the Bergeron-Findeisen mechanism.