169 results for Resolvent Convergence
Abstract:
We present a KAM theory for some dissipative systems (geometrically, these are conformally symplectic systems, i.e. systems that transform a symplectic form into a multiple of itself). For systems with n degrees of freedom depending on n parameters we show that it is possible to find solutions with n-dimensional (Diophantine) frequencies by adjusting the parameters. We do not assume that the system is close to integrable, but we use an a-posteriori format. Our unknowns are a parameterization of the solution and a parameter. We show that if there is a sufficiently approximate solution of the invariance equation, which also satisfies some explicit non-degeneracy conditions, then there is a true solution nearby. We present results both in Sobolev norms and in analytic norms. The a-posteriori format has several consequences: A) smooth dependence on the parameters, including the singular limit of zero dissipation; B) estimates on the measure of parameters covered by quasi-periodic solutions; C) convergence of perturbative expansions in analytic systems; D) bootstrap of regularity (i.e., that all tori which are smooth enough are analytic if the map is analytic); E) a numerically efficient criterion for the break-down of the quasi-periodic solutions. The proof is based on an iterative quadratically convergent method and on suitable estimates on the (analytical and Sobolev) norms of the approximate solution. The iterative step takes advantage of some geometric identities, which give a very useful coordinate system in the neighborhood of invariant (or approximately invariant) tori. This system of coordinates has several other uses: A) it shows that for dissipative conformally symplectic systems the quasi-periodic solutions are attractors, B) it leads to efficient algorithms, which have been implemented elsewhere.
Details of the proof are given mainly for maps, but we also explain the slight modifications needed for flows and we devote the appendix to present explicit algorithms for flows.
Abstract:
Despite the EU's efforts to promote democracy and a shared commitment to democracy and human rights in the EMP, there are no signs of convergence toward the liberal democratic model advocated by the EU. Nevertheless, the scope and intensity of multilateral, transnational, and bilateral cooperation have grown steadily across the region since the mid-1990s. Cooperation in the field of democracy promotion is characterized by strong sectoral normative dynamics and geographic differentiation, but it is clearly situated within a regional and highly standardized framework. While policy or political convergence seems unlikely in the short or medium term, democracy and human rights are firmly established on a common regional agenda.
Abstract:
Next Generation Access Networks (NGAN) are the new step forward to deliver broadband services and to facilitate the integration of different technologies. It is plausible to assume that, from a technological standpoint, the Future Internet will be composed of long-range high-speed optical networks; a number of wireless networks at the edge; and, in between, several access technologies, among which the Passive Optical Networks (xPON) are very likely to succeed, due to their simplicity, low cost, and increased bandwidth. Among the different PON technologies, the Ethernet-PON (EPON) is the most promising alternative to satisfy operator and user needs, due to its cost, flexibility and interoperability with other technologies. One of the most interesting challenges in such technologies relates to the scheduling and allocation of resources in the upstream (shared) channel. The aim of this research project is to study and evaluate current contributions and propose new efficient solutions to address the resource allocation issues in Next Generation EPON (NG-EPON). Key issues in this context are future end-user needs, integrated quality of service (QoS) support and optimized service provisioning for real time and elastic flows. This project will unveil research opportunities, issue recommendations and propose novel mechanisms associated with the convergence within heterogeneous access networks, and will thus serve as a basis for long-term research projects in this direction. The project has served as a platform for the generation of new concepts and solutions that were published in national and international conferences, scientific journals and also in a book chapter. We expect additional research publications to follow in the coming months.
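The upstream arbitration problem this abstract refers to is typically solved by a dynamic bandwidth allocation (DBA) loop at the OLT. As a minimal illustration of the idea (the function name, cap value, and limited-service discipline below are our own sketch, not a mechanism proposed by the project):

```python
# Minimal limited-service DBA sketch for one EPON upstream polling cycle.
# Each ONU reports its queue occupancy; the OLT grants the request capped
# at a per-ONU maximum so a single heavy ONU cannot starve the others.

MAX_GRANT_BYTES = 15500  # illustrative per-ONU cap per cycle

def limited_service_grants(requests):
    """Map per-ONU queue reports (bytes) to upstream grants (bytes)."""
    return [min(req, MAX_GRANT_BYTES) for req in requests]

# A light ONU gets all it asked for, a heavy ONU is capped, an idle ONU gets 0.
grants = limited_service_grants([4000, 40000, 0])
```

Real NG-EPON proposals refine this loop with QoS classes and inter-ONU fairness, which is where the project's contributions sit.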
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration-state-functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E - H_KK) B_K^2 / (1 - B_K^2), with B_K determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K's of all discarded Ks. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near spectroscopic accuracy (0.5 cm^-1) is achieved in a model space M of 1.4 × 10^9 CSFs (1.1 × 10^12 determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10^12 CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
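Brown's selection formula quoted above is simple enough to evaluate directly; a small sketch (the function and variable names are ours, and the sample numbers are invented placeholders):

```python
def brown_delta_e(E, H_KK, B_K):
    """Brown's energy formula: ΔE_K = (E - H_KK) * B_K^2 / (1 - B_K^2).

    E     -- current CI energy estimate
    H_KK  -- diagonal Hamiltonian matrix element of configuration K
    B_K   -- approximate CI coefficient of K (requires |B_K| < 1)
    """
    return (E - H_KK) * B_K**2 / (1.0 - B_K**2)

# A configuration with a tiny coefficient contributes almost nothing, so it
# can be discarded and its ΔE_K accumulated into the truncation error ΔE_dis.
small = brown_delta_e(-128.9, -120.0, 0.01)
```

Because ΔE_K shrinks quadratically with B_K, the sum of discarded contributions stays controllable, which is what makes the ΔE_dis estimate practical.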
Abstract:
Linear response functions are implemented for a vibrational configuration interaction state allowing accurate analytical calculations of pure vibrational contributions to dynamical polarizabilities. Sample calculations are presented for the pure vibrational contributions to the polarizabilities of water and formaldehyde. We discuss the convergence of the results with respect to various details of the vibrational wave function description as well as the potential and property surfaces. We also analyze the frequency dependence of the linear response function and the effect of accounting phenomenologically for the finite lifetime of the excited vibrational states. Finally, we compare the analytical response approach to a sum-over-states approach.
Abstract:
To obtain a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the methane C-H bond to the palladium atom, we have explored this PES using a hierarchical series of ab initio methods (Hartree-Fock, second-order Møller-Plesset perturbation theory, fourth-order Møller-Plesset perturbation theory with single, double and quadruple excitations, coupled cluster theory with single and double excitations (CCSD), and with triple excitations treated perturbatively [CCSD(T)]) and hybrid density functional theory using the B3LYP functional, in combination with a hierarchical series of ten Gaussian-type basis sets, up to g polarization. Relativistic effects are taken into account either through a relativistic effective core potential for palladium or through a full four-component all-electron approach. Counterpoise-corrected relative energies of stationary points are converged to within 0.1-0.2 kcal/mol as a function of the basis-set size. Our best estimate of kinetic and thermodynamic parameters is -8.1 (-8.3) kcal/mol for the formation of the reactant complex, 5.8 (3.1) kcal/mol for the activation energy relative to the separate reactants, and 0.8 (-1.2) kcal/mol for the reaction energy (zero-point vibrational energy-corrected values in parentheses). This agrees well with available experimental data. Our work highlights the importance of sufficient higher angular momentum polarization functions, f and g, for correctly describing metal-d-electron correlation and, thus, for obtaining reliable relative energies. We show that standard basis sets, such as LANL2DZ+1f for palladium, are not sufficiently polarized for this purpose and lead to erroneous CCSD(T) results. B3LYP is associated with smaller basis set superposition errors and shows faster convergence with basis-set size, but yields relative energies (in particular, a reaction barrier) that are ca. 3.5 kcal/mol higher than the corresponding CCSD(T) values.
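The counterpoise correction used above is plain arithmetic on component energies; a sketch with invented numbers (the function name and values are ours, only the counterpoise recipe itself is standard):

```python
def counterpoise_interaction(e_complex, e_frag_a_ghost, e_frag_b_ghost):
    """Counterpoise-corrected interaction energy (Boys-Bernardi recipe):

        E_int = E_AB - E_A(AB basis) - E_B(AB basis)

    Both fragment energies are evaluated in the full complex basis (with
    'ghost' functions on the partner's atoms), which removes most of the
    basis set superposition error (BSSE).
    """
    return e_complex - e_frag_a_ghost - e_frag_b_ghost

# Fictitious energies in hartree: a weakly bound reactant complex.
e_int = counterpoise_interaction(-100.0, -60.0, -39.5)
```

The abstract's observation that B3LYP shows smaller BSSE than CCSD(T) means the ghost-basis and plain fragment energies differ less for the DFT calculations.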
Abstract:
Standard practice in Bayesian VARs is to formulate priors on the autoregressive parameters, but economists and policy makers actually have priors about the behavior of observable variables. We show how this kind of prior can be used in a VAR under strict probability theory principles. We state the inverse problem to be solved and we propose a numerical algorithm that works well in practical situations with a very large number of parameters. We prove various convergence theorems for the algorithm. As an application, we first show that the results in Christiano et al. (1999) are very sensitive to the introduction of various priors that are widely used. These priors turn out to be associated with undesirable priors on observables. But an empirical prior on observables helps clarify the relevance of these estimates: we find much higher persistence of output responses to monetary policy shocks than the one reported in Christiano et al. (1999) and a significantly larger total effect.
Abstract:
The article presents and discusses estimates of social and economic indicators for Italy's regions in benchmark years roughly from Unification to the present day: life expectancy, education, GDP per capita at purchasing power parity, and the new Human Development Index (HDI). A broad interpretative hypothesis, based on the distinction between passive and active modernization, is proposed to account for the evolution of regional imbalances over the long run. In the absence of active modernization, Southern Italy converged thanks to passive modernization, i.e., State intervention: this was more effective for life expectancy, less successful in education, and expensive and on the whole ineffective for GDP. As a consequence, convergence in the HDI occurred from the late nineteenth century to the 1970s, but came to a sudden halt in the last decades of the twentieth century.
Abstract:
We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.
Abstract:
This research report covers the post-doctoral activities conducted between September 2010 and March 2011 at Pompeu Fabra University, Barcelona. It seeks to identify the consequences of the convergence phenomenon for photojournalism. Thus, in a more general approach, the effort is to recover the structural elements of the concept of convergence in journalism. It also aims to map the current debates on the repositioning of photographic practices linked to news production amid the widespread adoption of digital devices in the contemporary workflow. The report further covers the analysis of photographic collectives as a result of the convergence framework applied to photojournalism; the debate on modes of funding; alternatives to the alleged crisis of press photography; and, finally, it proposes qualifying stages in the development of photojournalism in the digital age, along with hypotheses concerning the structure of productive routines. In addition, we present three cases analyzed in order to explore and verify the occurrence of characteristics that may identify the object of research in the state of practice. Finally, we work through a series of conclusions, revisiting the main hypotheses. With this strategy, it is possible to define a sequence of analysis capable of addressing the characteristics present in the cases studied, and in other cases in the future, and thus to affirm this stage as a step in the continuous historical course of photojournalism.
Abstract:
In this paper we attempt to describe the main reasons behind the world population explosion during the 20th century. In general, we argue that if, according to some, at the end of the 20th century there were too many people, this was a consequence of scientific innovation, circulation of information, and economic growth, leading to a dramatic improvement in life expectancies. Nevertheless, a crucial variable shaping differences in demographic growth is fertility. In this paper we identify female education levels, infant mortality, and racial identity and diversity as important exogenous variables affecting fertility. It is estimated that three additional years of schooling for mothers leads on average (at the world level) to one child less per couple. Even if we can identify a worldwide trend towards convergence in demographic trends, the African case needs to be given more attention, not only because of its different demographic patterns, but also because this is the continent where the worldwide movement towards a higher quality of life has not yet been achieved for an important share of the world's population.
Abstract:
The view of an 1870-1913 expanding European economy providing increasing welfare to everybody has been challenged by many, then and now. We focus on the amazing growth that was experienced, its diffusion and its sources, in the context of the permanent competition among European nation states. During 1870-1913 the globalized European economy reached a "silver age". GDP growth was quite rapid (2.15% per annum) and diffused all over Europe. Even discounting the high rate of population growth (1.06%), per capita growth was left at a respectable 1.08%. Income per capita was rising in every country, and the rates of improvement were quite similar. This was a major achievement after two generations of highly localized growth, both geographically and socially. Growth was based on the increased use of labour and capital, but a good part of it (73 per cent for the weighted average of the best documented European countries) came out of total factor productivity: efficiency gains arising from ultimate sources of growth that are not well specified. This proportion suggests that the European economy was growing at full capacity, at its production frontier. It would have been very difficult to improve its performance. Within Europe, convergence was limited, and it was only set in motion after 1900. What happened was more the end of the era of big divergence than an era of convergence.
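The arithmetic behind these figures is standard growth accounting; a small sketch (only the GDP and population rates come from the abstract, the capital/labour inputs and the 0.35 capital share in the example are illustrative):

```python
def per_capita_growth(g_gdp, g_pop):
    """Compound per-capita growth rate (%) from GDP and population rates (%)."""
    return ((1 + g_gdp / 100) / (1 + g_pop / 100) - 1) * 100

def solow_residual(g_y, g_k, g_l, alpha):
    """TFP growth (%) as the residual g_Y - alpha*g_K - (1-alpha)*g_L,
    with alpha the capital share."""
    return g_y - alpha * g_k - (1 - alpha) * g_l

# 2.15% GDP growth against 1.06% population growth leaves about 1.08% per head,
# matching the abstract's figures.
g_pc = per_capita_growth(2.15, 1.06)
```

The "73 per cent from TFP" claim is of this residual type: whatever output growth is not explained by measured factor accumulation is attributed to efficiency gains.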
Abstract:
I study the role of internal migration in income convergence across regions in Japan. Neoclassical theory predicts that migration should have been an important source of convergence. Regression results, however, suggest that migration did not contribute to convergence. I investigate the possibility that this discrepancy is explained by taking into account the effects of migration on population composition, especially on educational attainment. I propose an empirical approach to quantify this "educational composition effect". It is shown that, although this effect did slow down convergence, its magnitude was too small to account for the discrepancy between theory and empirics.
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
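An estimator of the kind studied here replaces the single bandwidth h with a per-point bandwidth h_i; a minimal Gaussian-kernel sketch in pure Python (the data and the hand-picked bandwidths are arbitrary illustrations, not a rule the paper endorses):

```python
import math

def gaussian_kernel(u):
    """Standard normal density, used as the kernel K."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def variable_kde(x, data, bandwidths):
    """Variable-bandwidth estimate:
    f_hat(x) = (1/n) * sum_i K((x - X_i)/h_i) / h_i, one h_i per data point."""
    n = len(data)
    return sum(gaussian_kernel((x - xi) / hi) / hi
               for xi, hi in zip(data, bandwidths)) / n

data = [0.0, 0.1, -0.1, 2.0]
hs = [0.5, 0.5, 0.5, 1.5]  # wider kernel on the isolated point
density_at_mode = variable_kde(0.0, data, hs)
```

The paper's negative result concerns exactly this extra freedom: no data-driven rule for choosing the h_i can stay within a constant factor of the oracle L1 error for every density.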
Abstract:
In this paper we consider dynamic processes, in repeated games, that are subject to the natural informational restriction of uncoupledness. We study the almost sure convergence to Nash equilibria, and present a number of possibility and impossibility results. Basically, we show that if in addition to random moves some recall is introduced, then successful search procedures that are uncoupled can be devised. In particular, to get almost sure convergence to pure Nash equilibria when these exist, it suffices to recall the last two periods of play.
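The flavor of such a procedure can be illustrated with a toy uncoupled rule with two periods of recall, in the spirit of (though much cruder than) the ones the paper constructs: each player repeats its action when the last two joint plays coincided and its own action was a best reply, and otherwise randomizes. In a coordination game this locks into a pure Nash equilibrium almost surely. Every detail below (the game, the rule, the names) is our illustration:

```python
import random

# 2x2 coordination game: PAYOFF[i][a][b] is player i's payoff when player 0
# plays a and player 1 plays b. Pure Nash equilibria: (0, 0) and (1, 1).
PAYOFF = [
    [[1, 0], [0, 1]],  # player 0
    [[1, 0], [0, 1]],  # player 1
]

def best_reply(player, my_action, opp_action):
    """True if my_action maximizes player's payoff against opp_action."""
    if player == 0:
        payoffs = [PAYOFF[0][a][opp_action] for a in (0, 1)]
    else:
        payoffs = [PAYOFF[1][opp_action][b] for b in (0, 1)]
    return payoffs[my_action] == max(payoffs)

def play(steps, seed=0):
    """Uncoupled dynamics with 2-period recall; returns the final profile."""
    rng = random.Random(seed)
    history = [(rng.randrange(2), rng.randrange(2)) for _ in range(2)]
    for _ in range(steps):
        prev, last = history[-2], history[-1]
        profile = tuple(
            last[i] if prev == last and best_reply(i, last[i], last[1 - i])
            else rng.randrange(2)
            for i in (0, 1)
        )
        history.append(profile)
    return history[-1]
```

Once random play produces the same pure equilibrium twice in a row, both conditions hold for both players forever, so the state is absorbing; uncoupledness is respected because each player consults only its own payoffs and the observed history.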