882 results for Gleason-Pierce Theorem
Abstract:
This paper reviews the Latin American literature on the concept of urban land ownership. It first traces the history of the concept, recovering the most important points of the debate and the particularities that each context contributed to the formation of the current concept; it then analyses the state of the debate in the Latin American context since 1990, explaining the points of contention and the authors' positions on them, before attempting to anticipate the future development of the concept on our continent. The paper finds that the particular problems of Latin American cities have an important influence on the course of the discussion, giving the treatment of the concept a distinctive focus.
Abstract:
Bank managers often claim that equity is expensive, which contradicts the Modigliani-Miller irrelevance theorem. An opaque bank must signal its solvency by paying high and stable dividends in order to keep depositors calm. This signalling may require costly liquidations if the return on assets has been poor, but not paying the dividend might trigger a run. A strongly capitalized bank should keep substantial amounts of risk-free yet non-productive currency because its number of shares is high, which is costly. The dividend is informative about the state of the bank, and rational depositors react to it.
Abstract:
Previous research has shown that often there is clear inertia in individual decision making---that is, a tendency for decision makers to choose a status quo option. I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. I find that inertia is quite large and that both mechanisms are equally important.
Abstract:
Attitudes toward risk influence the decision to diversify among uncertain options. Yet, because in most situations the options are ambiguous, attitudes toward ambiguity may also play an important role. I conduct a laboratory experiment to investigate the effect of ambiguity on the decision to diversify. I find that diversification is more prevalent and more persistent under ambiguity than under risk. Moreover, excess diversification under ambiguity is driven by participants who stick with a status quo gamble when diversification among gambles is not feasible. This behavioral pattern cannot be accommodated by major theories of choice under ambiguity.
Abstract:
This is a psychological study of women and of the values and qualities they develop in the process of socialization. Through the different cultural experiences and expectations of women and men, it brings out the problems and conflicts that both groups face in interpersonal relationships. Workplace conflicts, acquired roles, and the situation of the home are among the topics addressed, together with the affirmation of women as efficient and independent persons within society.
Abstract:
A novel statistic for the local wave amplitude of the 500-hPa geopotential height field is introduced. The statistic uses a Hilbert transform to define a longitudinal wave envelope and dynamical latitude weighting to define the latitudes of interest. Here it is used to detect the existence, or otherwise, of multimodality in its distribution function. The empirical distribution function for the 1960-2000 period is close to a Weibull distribution with shape parameters between 2 and 3. There is substantial interdecadal variability but no apparent local multimodality or bimodality. The zonally averaged wave amplitude, akin to the more usual wave amplitude index, is close to being normally distributed. This is consistent with the central limit theorem, which applies to the construction of the wave amplitude index. For the period 1960-70 apparent bimodality is found in this index. However, the different amplitudes are realized at different longitudes, so there is no bimodality at any single longitude. As a corollary, it is found that many statistics commonly used to detect multimodality in atmospheric fields potentially satisfy the assumptions underlying the central limit theorem and therefore can only show approximately normal distributions. The author concludes that these techniques may therefore be suboptimal for detecting multimodality.
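A minimal sketch of the envelope construction described above, assuming synthetic data on a regular longitude grid (the grid size, wavenumber, and field values below are illustrative, not the paper's):

```python
import numpy as np
from scipy.signal import hilbert

nlon = 144                                   # hypothetical 2.5-degree longitude grid
lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)

# Illustrative 500-hPa height anomaly: a zonal wavenumber-5 packet localized near 180 deg.
z500 = 120.0 * np.cos(5.0 * lon) * np.exp(-((lon - np.pi) ** 2))
anomaly = z500 - z500.mean()                 # remove the zonal mean first

# The local wave amplitude is the modulus of the analytic signal in longitude;
# the FFT-based Hilbert transform treats the record as periodic, which suits longitude.
envelope = np.abs(hilbert(anomaly))

print(float(envelope.max()))                 # peak local wave amplitude
```

The modulus of the analytic signal generalizes wave amplitude from single-wavenumber fits to localized wave packets, which is what makes a local (per-longitude) amplitude statistic possible.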
Abstract:
We study generalised prime systems $P$ ($1 < p_1 \le p_2 \le \cdots$, with $p_j \in \mathbb{R}$ tending to infinity) and the associated Beurling zeta function $\zeta_P(s) = \prod_{j=1}^{\infty} (1 - p_j^{-s})^{-1}$. Under appropriate assumptions, we establish various analytic properties of $\zeta_P(s)$, including its analytic continuation, and we characterise the existence of a suitable generalised functional equation. In particular, we examine the relationship between a counterpart of the Prime Number Theorem (with error term) and the properties of the analytic continuation of $\zeta_P(s)$. Further, we study 'well-behaved' g-prime systems, namely systems for which both the prime and the integer counting functions are asymptotically well behaved. Finally, we show that there exists a natural correspondence between generalised prime systems and suitable orders on $\mathbb{N}^2$. Some of the above results are relevant to the second author's theory of 'fractal membranes', whose spectral partition functions are given by Beurling-type zeta functions, as well as to joint work of that author and R. Nest on zeta functions attached to quasicrystals.
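As a companion to the definition above, a short numerical sketch (my own illustration, not from the paper) of the truncated Euler product defining the Beurling zeta function; taking the ordinary primes as the g-prime system recovers a truncation of the Riemann zeta function:

```python
import numpy as np

def beurling_zeta(s, gprimes):
    """Truncated Euler product prod_j (1 - p_j**(-s))**(-1) over a finite g-prime list."""
    p = np.asarray(gprimes, dtype=float)
    return float(np.prod(1.0 / (1.0 - p ** (-s))))

# Ordinary primes: approximates the Riemann zeta function.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(beurling_zeta(2.0, primes))        # tends to pi**2/6 ≈ 1.6449 as more primes are added

# Any non-decreasing real sequence > 1 tending to infinity defines a g-prime system.
gprimes = [1.5 + 0.7 * j for j in range(40)]
print(beurling_zeta(2.0, gprimes))
```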
Abstract:
Observations show the oceans have warmed over the past 40 yr, with appreciable regional variation and more warming at the surface than at depth. Comparing the observations with results from two coupled ocean-atmosphere climate models [the Parallel Climate Model version 1 (PCM) and the Hadley Centre Coupled Climate Model version 3 (HadCM3)] that include anthropogenic forcing shows remarkable agreement between the observed and model-estimated warming. In this comparison the models were sampled at the same locations as gridded yearly observed data. In the top 100 m of the water column the warming is well separated from natural variability, including both variability arising from internal instabilities of the coupled ocean-atmosphere climate system and that arising from volcanism and solar fluctuations. Between 125 and 200 m the agreement is not significant, but it increases again below this level and remains significant down to 600 m. Analysis of PCM's heat budget indicates that the warming is driven by an increase in net surface heat flux that reaches 0.7 W m⁻² by the 1990s; the downward longwave flux increases by 3.7 W m⁻², which is not fully compensated by an increase in the upward longwave flux of 2.2 W m⁻². Latent and net solar heat fluxes each decrease by about 0.6 W m⁻². The changes in the individual longwave components are distinguishable from the preindustrial mean by the 1920s, but, due to cancellation of components, changes in the net surface heat flux do not become well separated from zero until the 1960s. Changes in advection can also play an important role in local ocean warming due to anthropogenic forcing, depending on the location. The observed sampling of ocean temperature is highly variable in space and time, but sufficient to detect the anthropogenic warming signal in all basins, at least in the surface layers, by the 1980s.
Abstract:
A suite of climate change indices derived from daily temperature and precipitation data, with a primary focus on extreme events, were computed and analyzed. By setting an exact formula for each index and using specially designed software, analyses done in different countries have been combined seamlessly. This has enabled the presentation of the most up-to-date and comprehensive global picture of trends in extreme temperature and precipitation indices, using results from a number of workshops held in data-sparse regions and high-quality station data supplied by numerous scientists worldwide. Seasonal and annual indices for the period 1951-2003 were gridded. Trends in the gridded fields were computed and tested for statistical significance. Results showed widespread significant changes in temperature extremes associated with warming, especially for those indices derived from daily minimum temperature. Over 70% of the global land area sampled showed a significant decrease in the annual occurrence of cold nights and a significant increase in the annual occurrence of warm nights. Some regions experienced a more than doubling of these indices. This implies a positive shift in the distribution of daily minimum temperature throughout the globe. Daily maximum temperature indices showed similar changes but with smaller magnitudes. Precipitation changes showed a widespread and significant increase, but the changes are much less spatially coherent compared with temperature change. Probability distributions of indices derived from approximately 200 temperature and 600 precipitation stations, with near-complete data for 1901-2003 and covering a very large region of the Northern Hemisphere midlatitudes (and parts of Australia for precipitation), were analyzed for the periods 1901-1950, 1951-1978, and 1979-2003. Results indicate a significant warming throughout the 20th century. Differences in the distributions of temperature indices are particularly pronounced between the two most recent periods and for those indices related to minimum temperature. An analysis of those indices for which seasonal time series are available shows that these changes occur for all seasons, although they are generally least pronounced for September to November. Precipitation indices show a tendency toward wetter conditions throughout the 20th century.
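To make the index construction concrete, here is a minimal sketch of a percentile-based "cold nights" count in the spirit described above; the station data, base period, and the simple (non-calendar-day) percentile are all hypothetical simplifications of the exact formulas the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily minimum temperatures for one station: 53 years x 365 days (1951-2003).
tmin = rng.normal(loc=5.0, scale=8.0, size=(53, 365))

# Threshold: 10th percentile over an assumed base period (real indices use
# calendar-day percentiles over a fixed base period such as 1961-1990).
threshold = np.percentile(tmin[:10], 10)

# Annual occurrence of cold nights: days per year with tmin below the threshold.
cold_nights = (tmin < threshold).sum(axis=1)
print(cold_nights[:5])

# Least-squares linear trend in the annual counts (days per year per year).
years = np.arange(tmin.shape[0])
slope = np.polyfit(years, cold_nights, 1)[0]
print(slope)
```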
Abstract:
The perturbed Hartree–Fock theory developed in the preceding paper is applied to LiH, BH, and HF, using limited basis-set SCF–MO wavefunctions derived by previous workers. The calculated values for the force constant $k_e$ and the dipole-moment derivative $\mu^{(1)}$ are (experimental values in parentheses): LiH, $k_e$ = 1.618 (1.026) mdyn/Å, $\mu^{(1)}$ = −18.77 (−2.0 ± 0.3) D/Å; BH, $k_e$ = 5.199 (3.032) mdyn/Å, $\mu^{(1)}$ = −1.03 (−) D/Å; HF, $k_e$ = 12.90 (9.651) mdyn/Å, $\mu^{(1)}$ = −2.15 (+1.50) D/Å. The values of the force on the proton were calculated exactly and according to the Hellmann–Feynman theorem in each case, and the discrepancies show that none of the wavefunctions used is close to the Hartree–Fock limit, so the large errors in $k_e$ and $\mu^{(1)}$ are not surprising. However, no difficulties arose in the perturbed Hartree–Fock calculation, so the application of the theory to more accurate wavefunctions appears quite feasible.
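For reference, the Hellmann–Feynman theorem invoked above states, in its standard form (generic notation, not the paper's):

```latex
% Hellmann-Feynman theorem, standard statement (generic notation, not the paper's):
\frac{\partial E}{\partial \lambda}
  = \left\langle \psi_\lambda \,\middle|\, \frac{\partial \hat{H}}{\partial \lambda} \,\middle|\, \psi_\lambda \right\rangle
```

With $\lambda$ taken as a nuclear coordinate, the right-hand side reduces to a classical electrostatic force on that nucleus. The identity holds exactly only for exact or fully variational wavefunctions, which is why the gap between the exact and Hellmann–Feynman forces diagnoses how far a wavefunction is from the Hartree–Fock limit.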
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection, and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow one to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations with general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
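Since the Lorenz 96 model is central to the paper, a minimal integration sketch may help; the number of sites, forcing value, time step, and observable below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def lorenz96_rhs(x, forcing):
    """Tendency dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic boundaries."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

n_sites, forcing, dt = 40, 8.0, 0.05                 # illustrative choices
rng = np.random.default_rng(1)
x = forcing + 0.01 * rng.standard_normal(n_sites)    # perturbed fixed point x_i = F

for _ in range(2000):                                # spin up onto the attractor
    x = rk4_step(x, dt, forcing)

# A global observable: the average energy (1/2) * mean(x_i**2) over the domain.
print(0.5 * np.mean(x ** 2))
```

Response experiments in the paper's spirit would then perturb `forcing` (locally or globally) and compare expectation values of such observables between perturbed and unperturbed ensembles.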
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes's theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of the theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the "true" conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect in the Bayesian formalism; rather, it represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
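A minimal one-channel sketch of the Bayesian retrieval idea, with an entirely hypothetical prior, forward model, and noise level (the real algorithm combines multichannel TMI brightness temperatures):

```python
import numpy as np

rain = np.linspace(0.0, 30.0, 301)                 # candidate rain rates (mm/h)
prior = np.exp(-rain / 5.0)                        # assumed exponential prior
prior /= prior.sum()

def forward_tb(r):
    """Hypothetical forward model: brightness temperature saturating with rain rate."""
    return 180.0 + 100.0 * (1.0 - np.exp(-r / 10.0))

observed_tb, sigma = 245.0, 3.0                    # assumed observation and noise level
likelihood = np.exp(-0.5 * ((observed_tb - forward_tb(rain)) / sigma) ** 2)

posterior = likelihood * prior                     # Bayes's theorem on a discrete grid
posterior /= posterior.sum()                       # full posterior distribution, not one value

print(rain[posterior.argmax()])                    # MAP estimate
print((rain * posterior).sum())                    # posterior-mean estimate
```

Note how the saturating forward model flattens the likelihood at high rain rates; that is exactly the weak-constraint regime in which the abstract reports biased retrievals.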
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges to a graph or hypergraph so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension of Mader's classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such "good" split, and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges that must be added to any given hypergraph to ensure that in the resulting hypergraph λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called "local-edge-connectivity augmentation problem" for hypergraphs. We also provide an extension of a theorem of Szigeti about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly, we concern ourselves with an augmentation problem that includes a locational constraint. The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in either Pi. We consider the splitting technique and describe the obstacles that prevent us from forming "good" splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
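A minimal sketch of the splitting operation as defined above, using a toy hypergraph representation (a list of frozensets) of my own choosing:

```python
def split_off(edges, s, u, v):
    """Replace the edges {s, u} and {s, v} by the single edge {u, v}."""
    edges = list(edges)
    edges.remove(frozenset({s, u}))
    edges.remove(frozenset({s, v}))
    edges.append(frozenset({u, v}))
    return edges

# V + s with s = 0: two size-two edges incident to s, plus one genuine hyperedge.
E = [frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2, 3})]
print(split_off(E, s=0, u=1, v=2))   # [frozenset({1, 2, 3}), frozenset({1, 2})]
```

A "good" split is one that additionally preserves the prescribed local edge-connectivity within V; the work above characterises exactly when no such choice of u and v exists.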