635 results for Vanishing Theorems
Abstract:
Lecture notes in PDF
Abstract:
Exam questions and solutions in LaTeX
Abstract:
Exercises and solutions in PDF
Abstract:
A model of biased innovations is used to study the effects of exogenous changes in labour supply. In a setting with biased innovations, as economies accumulate capital, labour becomes relatively scarcer and more expensive; this creates incentives to adopt labour-saving technologies. Likewise, a change in labour supply affects factor abundance and relative factor prices. In general, a reduction in labour supply makes labour more expensive and generates incentives for labour-saving technological change. Thus, the initial effect of the change in labour supply on factor prices is mitigated by technological change. Finally, movements in factor payments affect saving decisions and, therefore, the dynamics of growth. This paper explores the consequences of a reduction in labour supply in two different theoretical settings: an infinite-horizon model with homogeneous agents and an overlapping-generations model.
Abstract:
Desterrando minas is at once a documentary of context and a documentary of process. Through immersion in the very camps where the deminers live, it tells the story of Nariño as its inhabitants lived it, showing how fear was the principal force shaping the relationship between the community and its territory, and how demining has transformed that fear.
Abstract:
Baroclinic wave development is investigated for unstable parallel shear flows in the limit of vanishing normal-mode growth rate. This development is described in terms of the propagation and interaction mechanisms of two coherent structures, called counter-propagating Rossby waves (CRWs). It is shown that, in this limit of vanishing normal-mode growth rate, arbitrary initial conditions produce sustained linear amplification of the marginally neutral normal mode (mNM). This linear excitation of the mNM is subsequently interpreted in terms of a resonance phenomenon. Moreover, while the mathematical character of the normal-mode problem changes abruptly as the bifurcation point in the dispersion diagram is encountered and crossed, it is shown that from an initial-value viewpoint, this transition is smooth. Consequently, the resonance interpretation remains relevant (albeit for a finite time) for wavenumbers slightly different from the ones defining cut-off points. The results are further applied to a two-layer version of the classic Eady model in which the upper rigid lid has been replaced by a simple stratosphere.
Abstract:
Separation of stratified flow over a two-dimensional hill is inhibited or facilitated by acceleration or deceleration of the flow just outside the attached boundary layer. In this note, an expression is derived for this acceleration or deceleration in terms of streamline curvature and stratification. The expression is valid for linear as well as nonlinear deformation of the flow. For hills of vanishing aspect ratio, a linear theory can be derived and a full regime diagram for separation can be constructed. For hills of finite aspect ratio, scaling relationships can be derived that indicate the presence of a critical aspect ratio, proportional to the stratification, above which separation will occur, as well as a second critical aspect ratio above which separation will always occur irrespective of stratification.
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local-edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader’s classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such “good” split, and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that, in the resulting hypergraph, λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called “local-edge-connectivity augmentation problem” for hypergraphs. We also provide an extension to a theorem of Szigeti about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly, we concern ourselves with an augmentation problem that includes a locational constraint.
The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in some Pi. We consider the splitting technique and describe the obstacles that prevent us from forming “good” splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
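As a concrete illustration of the basic splitting operation described above (not of the thesis's structural theorems), here is a minimal sketch in Python; the representation and all names are invented for the sketch:

```python
from collections import deque

def split(edges, s, u, v):
    """Replace the two edges {s,u} and {s,v} with the single edge {u,v}."""
    e = list(edges)
    e.remove(frozenset({s, u}))
    e.remove(frozenset({s, v}))
    e.append(frozenset({u, v}))
    return e

def connected(edges, vertices):
    """BFS over hyperedges: two vertices are adjacent if they share an edge."""
    if not vertices:
        return True
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for e in edges:
            if x in e:
                for y in e:
                    if y in vertices and y not in seen:
                        seen.add(y)
                        queue.append(y)
    return seen == set(vertices)

# Toy example: edges a-s, s-b, a-b; splitting off at s keeps V = {a, b} connected.
E = [frozenset({'a', 's'}), frozenset({'s', 'b'}), frozenset({'a', 'b'})]
E2 = split(E, 's', 'a', 'b')
```

A "good" split in the thesis's sense is one that additionally preserves the prescribed local edge-connectivities in V, which the toy `connected` check only hints at.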
Abstract:
This paper presents several new families of cumulant-based linear equations with respect to the inverse filter coefficients for deconvolution (equalisation) and identification of nonminimum-phase systems. Based on noncausal autoregressive (AR) modeling of the output signals and three theorems, these equations are derived for the cases of 2nd-, 3rd-, and 4th-order cumulants, respectively, and can be expressed in identical or similar forms. The algorithms constructed from these equations are simpler in form, but can offer more accurate results than the existing methods. Since the inverse filter coefficients are simply the solution of a set of linear equations, their uniqueness can normally be guaranteed. Simulations are presented for the cases of skewed series, unskewed continuous series and unskewed discrete series. The results of these simulations confirm the feasibility and efficiency of the algorithms.
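The paper's cumulant equations are not reproduced here, but the underlying idea, that inverse filter (equaliser) coefficients drop out of a set of linear equations, can be illustrated with a simple second-order, supervised least-squares stand-in. This is not the paper's blind method; the channel `h` and all names are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])          # unknown channel (invented for the sketch)
x = rng.standard_normal(4000)          # white input series
y = np.convolve(x, h)                  # observed, distorted output

# Convolution matrix of y for an L-tap inverse filter w: A @ w implements conv(y, w).
L = 8
n = len(y)
A = np.zeros((n + L - 1, L))
for j in range(L):
    A[j:j + n, j] = y

# Linear equations: conv(y, w) ~ x (zero-padded), solved in the least-squares sense.
d = np.zeros(n + L - 1)
d[:len(x)] = x
w, *_ = np.linalg.lstsq(A, d, rcond=None)

# If w inverts the channel, the cascade h * w approximates a unit impulse.
cascade = np.convolve(h, w)
```

The cumulant-based formulations in the abstract replace the known input `x` with higher-order statistics of `y` alone, but they share this structure: the unknown inverse filter appears linearly, so the solution reduces to solving a linear system.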
Abstract:
We introduce the perspex machine which unifies projective geometry and Turing computation and results in a supra-Turing machine. We show two ways in which the perspex machine unifies symbolic and non-symbolic AI. Firstly, we describe concrete geometrical models that map perspexes onto neural networks, some of which perform only symbolic operations. Secondly, we describe an abstract continuum of perspex logics that includes both symbolic logics and a new class of continuous logics. We argue that an axiom in symbolic logic can be the conclusion of a perspex theorem. That is, the atoms of symbolic logic can be the conclusions of sub-atomic theorems. We argue that perspex space can be mapped onto the spacetime of the universe we inhabit. This allows us to discuss how a robot might be conscious, feel, and have free will in a deterministic, or semi-deterministic, universe. We ground the reality of our universe in existence. On a theistic point, we argue that preordination and free will are compatible. On a theological point, we argue that it is not heretical for us to give robots free will. Finally, we give a pragmatic warning as to the double-edged risks of creating robots that do, or alternatively do not, have free will.
Abstract:
We present an analysis of the oceanic heat advection and its variability in the upper 500 m in the southeastern tropical Pacific (100W–75W, 25S–10S) as simulated by the global coupled model HiGEM, which has one of the highest resolutions currently used in long-term integrations. The simulated climatology represents a temperature advection field arising from transient small-scale (<450 km) features, with structures and transport that appear consistent with estimates based on available observational data for the mooring at 20S, 85W. The transient structures are very persistent (>4 months), and in specific locations they generate an important contribution to the local upper-ocean heat budget, characterised by scales of a few hundred kilometres and periods of over a year. The contribution from such structures to the local, long-term oceanic heat budget, however, can be of either sign, or vanishing, depending on the location; and, although there appears to be some organisation in preferential areas of activity, the average over the entire region is small. While several different mechanisms may be responsible for the temperature advection by transients, we find that a significant, and possibly dominant, component is associated with vortices embedded in the large-scale, climatological salinity gradient associated with the fresh intrusion of mid-latitude intermediate water which penetrates north-westward beneath the tropical thermocline.
Abstract:
A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary-scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum are understood using wave-activity conservation laws that are derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities in the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which represents a generalization of the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
Abstract:
We study the boundedness of Toeplitz operators $T_a$ with locally integrable symbols on Bergman spaces $A^p(\mathbb{D})$, $1 < p < \infty$. Our main result gives a sufficient condition for the boundedness of $T_a$ in terms of some “averages” (related to hyperbolic rectangles) of its symbol. If the averages satisfy an $o$-type condition on the boundary of $\mathbb{D}$, we show that the corresponding Toeplitz operator is compact on $A^p$. Both conditions coincide with the known necessary conditions in the case of nonnegative symbols and $p=2$. We also show that Toeplitz operators with symbols of vanishing mean oscillation are Fredholm on $A^p$ provided that the averages are bounded away from zero, and derive an index formula for these operators.
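For orientation, a standard special case (not the paper's average condition): when the symbol is radial, $T_a$ acts diagonally on the normalized monomial basis $e_n(z)=\sqrt{n+1}\,z^n$ of $A^2(\mathbb{D})$, so boundedness reduces to boundedness of the eigenvalue sequence. A small numerical sketch, with invented function names:

```python
import numpy as np

def toeplitz_eigenvalue(a_radial, n, samples=200001):
    """For a radial symbol a(|z|), the diagonal entry of T_a on A^2(D)
    in the basis e_n(z) = sqrt(n+1) z^n is
        lambda_n = 2 (n + 1) * integral_0^1 a(r) r^(2n+1) dr,
    computed here with a trapezoidal rule."""
    r = np.linspace(0.0, 1.0, samples)
    f = a_radial(r) * r ** (2 * n + 1)
    dr = r[1] - r[0]
    return 2 * (n + 1) * dr * (f.sum() - 0.5 * (f[0] + f[-1]))

# Example symbol a(z) = |z|^2, bounded by 1, so sup_n lambda_n <= 1.
lams = [toeplitz_eigenvalue(lambda r: r ** 2, n) for n in range(6)]
# Closed form for this symbol: lambda_n = (n + 1) / (n + 2).
```

For this symbol the eigenvalues increase toward 1 without reaching it, matching the bound given by the supremum of the symbol; the paper's averages generalise this kind of control to non-radial, merely locally integrable symbols.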
Abstract:
The Fredholm properties of Toeplitz operators on the Bergman space $A^2$ have been well known for continuous symbols since the 1970s. We investigate the case $p=1$ with continuous symbols under a mild additional condition, namely that of logarithmic vanishing mean oscillation in the Bergman metric. Most differences are related to boundedness properties of Toeplitz operators acting on $A^p$ that arise when we no longer have $1 < p < \infty$.
Abstract:
Equilibrium theory occupies an important position in chemistry, and it is traditionally based on thermodynamics. A novel mathematical approach to chemical equilibrium theory for gaseous systems at constant temperature and pressure is developed. Six theorems that illustrate the power of mathematics to explain chemical observations are presented and combined logically into a coherent system. This mathematical treatment provides more insight into chemical equilibrium and creates more tools that can be used to investigate complex situations. Although some of the issues covered have previously been given in the literature, new mathematical representations are provided. Compared to traditional treatments, the new approach relies on straightforward mathematics and less on thermodynamics, thus giving a new and complementary perspective on equilibrium theory. It provides a new theoretical basis for a thorough and deep presentation of traditional chemical equilibrium. This work demonstrates that new research in a traditional field such as equilibrium theory, generally thought to have been completed many years ago, can still offer new insights, and that more efficient ways to present the contents can be established. The work presented here can be considered appropriate as part of a mathematical chemistry course at university level.
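As a minimal worked example of the kind of gas-phase equilibrium problem such a mathematical treatment addresses (the reaction and the value of Kp are illustrative assumptions, not taken from the paper): for N2O4 ⇌ 2 NO2 at constant total pressure P, starting from 1 mol of N2O4 with extent of reaction ξ, the mole-fraction expressions collapse to Kp = 4ξ²P / (1 − ξ²), which solves in closed form:

```python
from math import sqrt

def equilibrium_extent(Kp, P):
    """Extent xi for N2O4 <-> 2 NO2 at total pressure P:
    moles are N2O4 = 1 - xi, NO2 = 2*xi, total = 1 + xi, so
        Kp = (2*xi/(1+xi) * P)**2 / ((1-xi)/(1+xi) * P)
           = 4*xi**2 * P / (1 - xi**2),
    which rearranges to xi = sqrt(Kp / (Kp + 4*P))."""
    return sqrt(Kp / (Kp + 4.0 * P))

Kp, P = 0.15, 1.0   # illustrative values (P in bar, Kp relative to 1 bar)
xi = equilibrium_extent(Kp, P)
residual = 4 * xi**2 * P / (1 - xi**2) - Kp   # should vanish at equilibrium
```

The closed form also makes qualitative behaviour transparent: raising P at fixed Kp shrinks ξ, which is Le Chatelier's principle read directly off the algebra.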