76 results for Finite difference simulations
Abstract:
An abundant scientific literature on climate change economics points out that the future participation of developing countries in international environmental policies will depend on their payoffs inside and outside specific agreements. These studies analyze coalition stability, typically through a game-theoretical approach. Although these contributions are a cornerstone of the research field investigating plausible future international coalitions and the reasons behind the difficulties incurred over time in implementing emissions-stabilizing actions, they cannot satisfactorily disentangle the role that equality plays in inducing poor regions to tackle global warming. If we focus on the Stern Review findings stressing that climate change will generate heavy damages and that policy actions will be costly over a finite time horizon, we understand why there is a great incentive to free ride in order to exploit the benefits of others' emissions reduction efforts. The reluctance of poor countries to join international agreements rests mainly on the historical responsibility of rich regions in generating atmospheric carbon concentration, whereas rich countries claim that emissions-stabilizing policies will be effective only when developing countries join them. Scholars have recently pointed out that a perceived fairness in the distribution of emissions would facilitate widespread participation in international agreements. In this paper we survey the literature on the distributional aspects of emissions, focusing on contributions that investigate past trends of emissions distribution through empirical data and future trajectories through simulations obtained from integrated assessment models. We explain the methodologies used to elaborate the data and the link between real data and data coming from simulations. Results from this strand of research are interpreted in order to discuss future negotiations for post-Kyoto agreements, which will be the focus of the next Conference of the Parties in Copenhagen at the end of 2009. Particular attention is devoted to the role that technological change will play in affecting the distribution of emissions over time, and to how spillovers and experience diffusion could influence equality issues and future outcomes of policy negotiations.
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to a multi-stable scenario for the basic derived dynamical systems, noise is an important feature of the model, accounting for finite-size effects and the robustness of decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state, which represents the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme allowing for simulations of the Fokker-Planck equation which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
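The abstract's scheme itself is not reproduced here, but the following minimal sketch shows the general finite-difference approach to a one-dimensional Fokker-Planck equation of the form dp/dt = -d/dx[f(x)p] + D d2p/dx2. The bistable drift f and the noise intensity D are illustrative assumptions standing in for the firing-rate dynamics, not the paper's model.

```python
import numpy as np

# Minimal sketch: explicit finite-difference solver for a 1D Fokker-Planck
# equation  dp/dt = -d/dx [f(x) p] + D d2p/dx2.  The drift f and diffusion D
# below are hypothetical stand-ins, not the model analyzed in the paper.
f = lambda x: x - x**3          # assumed bistable drift (two "decision" wells)
D = 0.05                        # assumed noise intensity

x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D            # conservative explicit time step
p = np.exp(-x**2)               # initial density
p /= p.sum() * dx

for _ in range(20000):
    flux = f(x) * p
    dflux = np.gradient(flux, dx)                 # central difference of drift flux
    lap = (np.roll(p, -1) - 2*p + np.roll(p, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                        # crude no-flux boundaries
    p = p + dt * (-dflux + D * lap)
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx                             # renormalize total mass to 1

# p now approximates a stationary density concentrated near the two wells.
```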
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that, as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
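As a hedged illustration of the kernel-based idea (not the paper's estimator), the sketch below simulates a hypothetical latent AR(1) model observed with noise, recovers the conditional mean E[y_t | y_{t-1}] by Nadaraya-Watson smoothing of one long simulation, and minimizes a simple method-of-moments criterion. All model choices, bandwidths and names are assumptions.

```python
import numpy as np

# Sketch of kernel-based simulated method of moments.  The model (a latent
# AR(1) observed with noise) and every tuning constant are illustrative
# assumptions, not the paper's specification.
rng = np.random.default_rng(0)

def simulate(theta, n):
    """Simulate y_t from a hypothetical latent-variable model with parameter theta."""
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = theta * h[t-1] + rng.standard_normal()
    return h + 0.5 * rng.standard_normal(n)     # latent state observed with noise

def kernel_conditional_mean(y, x, x_grid, bw):
    """Nadaraya-Watson estimate of E[y | x] evaluated on x_grid."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bw) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

y_obs = simulate(0.6, 500)                      # pretend these are the data
x_obs, y_next = y_obs[:-1], y_obs[1:]           # condition on the lagged value

def smm_objective(theta, n_sim=5000, bw=0.3):
    # One long simulation at the trial parameter; conditional moments are then
    # recovered by kernel smoothing rather than by simulating conditionally.
    y_sim = simulate(theta, n_sim)
    m_sim = kernel_conditional_mean(y_sim[1:], y_sim[:-1], x_obs, bw)
    g = y_next - m_sim                          # moment residuals at the data points
    return np.mean(g) ** 2                      # scalar GMM-style criterion

thetas = np.linspace(0.1, 0.9, 17)
theta_hat = thetas[np.argmin([smm_objective(t) for t in thetas])]
print("estimated theta ~", theta_hat)
```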
Abstract:
The objective of this paper is to re-examine risk and effort attitudes in the context of strategic dynamic interactions, stated as a discrete-time finite-horizon Nash game. The analysis is based on the assumption that players are endogenously risk- and effort-averse. Each player is characterized by distinct risk- and effort-aversion types that are unknown to his opponent. The goal of the game is the optimal risk- and effort-sharing between the players. It generally depends on the individual strategies adopted and, implicitly, on the players' types or characteristics.
Abstract:
We show that the product of a subparacompact C-scattered space and a Lindelöf D-space is D. In addition, we show that every regular locally D-space which is the union of a finite collection of subparacompact spaces and metacompact spaces has the D-property. Also, we extend this result from the class of locally D-spaces to the wider class of D-scattered spaces. All the results are shown in a direct way.
Abstract:
In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given the fact that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered relevant for the assessment can only be measured and aggregated after having agreed on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference relates to the characteristics of the phase associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that both nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemic low economic competitiveness of nuclear energy in the production of electricity, because of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations, as it does not take into account aspects relevant to networking such as the heterogeneity in link capacity or the differences between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other more advanced algorithms in terms of blocking ratio.
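The paper's algorithm itself is not given in the abstract; the sketch below merely illustrates the underlying idea of correcting plain betweenness for capacity and traffic heterogeneity, using networkx with invented demands and capacities.

```python
import itertools
import networkx as nx

# Illustrative sketch (not the paper's algorithm): a link-centrality score that
# weights each node pair by an assumed traffic demand and normalizes by link
# capacity, instead of counting shortest paths uniformly as plain betweenness.
G = nx.Graph()
G.add_edge("a", "b", capacity=10.0)
G.add_edge("b", "c", capacity=2.5)
G.add_edge("a", "c", capacity=10.0)
G.add_edge("c", "d", capacity=5.0)

demand = {(s, t): 1.0 for s, t in itertools.combinations(G.nodes, 2)}
demand[("a", "d")] = 4.0        # hypothetical heavy-traffic node pair

score = {e: 0.0 for e in G.edges}
for (s, t), load in demand.items():
    path = nx.shortest_path(G, s, t)              # static routing assumption
    for u, v in zip(path, path[1:]):
        e = (u, v) if (u, v) in score else (v, u)
        # Demand carried per unit of capacity: critical links are those that
        # would carry much traffic relative to what they can accommodate.
        score[e] += load / G.edges[u, v]["capacity"]

for e, s_ in sorted(score.items(), key=lambda kv: -kv[1]):
    print(e, round(s_, 3))
```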
Abstract:
This article analyzes Folner sequences of projections for bounded linear operators and their relationship to the class of finite operators introduced by Williams in the 1970s. We prove that each essentially hyponormal operator has a proper Folner sequence (i.e., a Folner sequence of projections strongly converging to 1). In particular, any quasinormal, any subnormal, any hyponormal and any essentially normal operator has a proper Folner sequence. Moreover, we show that an operator is finite if and only if it has a proper Folner sequence or a non-trivial finite-dimensional reducing subspace. We also analyze the structure of operators which have no Folner sequence and give examples of them. For this analysis we introduce the notion of strongly non-Folner operators, which are far from finite block-reducible operators in a uniform sense, and show that this class coincides with the class of non-finite operators.
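For orientation, the notion can be stated as follows; this is the form commonly used in the literature on Folner sequences for operators, given here as background rather than quoted from the article.

```latex
% Background definition (standard form; not quoted from the article).
% A sequence (P_n) of non-zero finite-rank orthogonal projections on a
% Hilbert space H is a Folner sequence for T in B(H) if
\[
  \lim_{n\to\infty} \frac{\| T P_n - P_n T \|_2}{\| P_n \|_2} = 0,
\]
% where \|\cdot\|_2 denotes the Hilbert-Schmidt norm; the sequence is
% proper if, in addition, P_n converges strongly to the identity.
```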
Abstract:
We present a study of the continuous-time equations governing the dynamics of a susceptible-infected-susceptible model on heterogeneous metapopulations. These equations have recently been proposed as an alternative formulation for the spread of infectious diseases in metapopulations in a continuous-time framework. Individual-based Monte Carlo simulations of epidemic spread in uncorrelated networks are also performed, revealing good agreement with analytical predictions under the assumption of simultaneous transmission or recovery and migration processes.
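A generic individual-based Monte Carlo sketch of SIS dynamics on a random contact network is given below. It is a stand-in for the metapopulation simulations described above; the rates, time step and network parameters are all chosen arbitrarily.

```python
import numpy as np

# Generic individual-based SIS Monte Carlo on a random (Erdos-Renyi-like)
# contact network: an illustrative stand-in, not the metapopulation model
# analyzed in the paper.  beta, mu and the network are assumed values.
rng = np.random.default_rng(1)
N, p_edge = 500, 0.02
adj = rng.random((N, N)) < p_edge
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(np.int32)            # symmetric 0/1 adjacency, no self-loops

beta, mu, dt = 0.05, 0.2, 0.1                   # infection/recovery rates, time step
infected = rng.random(N) < 0.05                 # 5% initially infected

prevalence = []
for _ in range(1000):
    n_inf_neighbors = adj @ infected            # number of infected contacts per node
    p_inf = 1.0 - (1.0 - beta * dt) ** n_inf_neighbors
    new_inf = (~infected) & (rng.random(N) < p_inf)
    recov = infected & (rng.random(N) < mu * dt)
    infected = (infected | new_inf) & ~recov
    prevalence.append(infected.mean())

print("endemic prevalence ~", round(float(np.mean(prevalence[-200:])), 3))
```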
Abstract:
A simple extended finite-field nuclear relaxation procedure for calculating vibrational contributions to degenerate four-wave mixing (also known as the intensity-dependent refractive index) is presented. As a by-product, one also obtains the static vibrationally averaged linear polarizability, as well as the first and second hyperpolarizability. The methodology is validated by illustrative calculations on the water molecule. Further possible extensions are suggested.
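The essence of any finite-field procedure is numerical differentiation of the dipole moment with respect to a static field. The sketch below applies standard central-difference formulas to a hypothetical closed-form dipole function that stands in for an electronic-structure calculation, assuming the usual expansion mu(F) = mu0 + alpha F + (beta/2) F^2 + (gamma/6) F^3.

```python
import numpy as np

# Generic finite-field differentiation sketch.  mu(F) below is a hypothetical
# model function standing in for a quantum-chemical dipole calculation; the
# "true" coefficients let us check that the differences recover them.
alpha_true, beta_true, gamma_true = 9.8, 12.4, 1800.0

def mu(F):
    # mu(F) = mu0 + alpha F + (beta/2) F^2 + (gamma/6) F^3  (model expansion)
    return 0.7 + alpha_true*F + 0.5*beta_true*F**2 + (1/6)*gamma_true*F**3

h = 0.002  # field step in atomic units (a typical finite-field magnitude)

# Central-difference estimates of the first three field derivatives of mu:
alpha = (mu(h) - mu(-h)) / (2*h)
beta  = (mu(h) - 2*mu(0) + mu(-h)) / h**2
gamma = (mu(2*h) - 2*mu(h) + 2*mu(-h) - mu(-2*h)) / (2*h**3)

print(alpha, beta, gamma)   # recovers alpha_true, beta_true, gamma_true to O(h^2)
```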
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite-field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite-field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
Abstract:
A new practical method to generate a subspace of active coordinates for quantum dynamics calculations is presented. These reduced coordinates are obtained as the normal modes of an analytical quadratic representation of the energy difference between excited and ground states within the complete active space self-consistent field method. At the Franck-Condon point, the largest negative eigenvalues of this Hessian correspond to the photoactive modes: those that reduce the energy difference and lead to the conical intersection; eigenvalues close to 0 correspond to bath modes, while modes with large positive eigenvalues are photoinactive vibrations, which increase the energy difference. The efficacy of quantum dynamics run in the subspace of the photoactive modes is illustrated with the photochemistry of benzene, where theoretical simulations are designed to assist optimal control experiments.
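The mode-classification step can be illustrated in a few lines: diagonalize a Hessian of the excited/ground energy difference and split the normal modes by eigenvalue sign. The matrix below is synthetic, not a CASSCF Hessian for benzene, and the threshold is an arbitrary assumption.

```python
import numpy as np

# Illustrative sketch of the mode-selection idea: diagonalize a (model) Hessian
# of the excited/ground-state energy difference and classify modes by the sign
# of their eigenvalues.  The 6x6 matrix is random synthetic data.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
hessian = (A + A.T) / 2                       # any symmetric matrix for the demo

eigvals, modes = np.linalg.eigh(hessian)      # columns of `modes` are normal modes
tol = 0.3                                     # assumed "close to zero" threshold
photoactive   = modes[:, eigvals < -tol]      # reduce the gap: lead toward the CI
bath          = modes[:, np.abs(eigvals) <= tol]
photoinactive = modes[:, eigvals > tol]       # increase the energy difference

print(photoactive.shape[1], "photoactive,",
      bath.shape[1], "bath,",
      photoinactive.shape[1], "photoinactive modes")
```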
Abstract:
In the finite field (FF) treatment of vibrational polarizabilities and hyperpolarizabilities, the field-free Eckart conditions must be enforced in order to prevent molecular reorientation during geometry optimization. These conditions are implemented here for the first time. Our procedure facilitates identification of the field-induced internal coordinates that make the major contribution to the vibrational properties. Using only two of these coordinates, quantitative accuracy for nuclear relaxation polarizabilities and hyperpolarizabilities is achieved in π-conjugated systems. From these two coordinates a single most efficient natural conjugation coordinate (NCC) can be extracted. The limitations of this one-coordinate approach are discussed. It is shown that the Eckart conditions can lead to an isotope effect that is comparable to the isotope effect on zero-point vibrational averaging, but with a different mass dependence.
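For reference, the Eckart conditions in their standard textbook form (the paper's field-free implementation details are not reproduced here):

```latex
% The Eckart conditions, standard form (for reference; not quoted from the
% paper).  For nuclei of mass m_i at positions r_i, with reference geometry
% r_i^0:
\[
  \sum_i m_i \,(\mathbf{r}_i - \mathbf{r}_i^0) = \mathbf{0},
  \qquad
  \sum_i m_i \,\mathbf{r}_i^0 \times (\mathbf{r}_i - \mathbf{r}_i^0) = \mathbf{0},
\]
% removing overall translation and infinitesimal rotation, and thereby
% preventing molecular reorientation during the field-dependent geometry
% optimization.
```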
Abstract:
Morales–Ramis theory is Galois theory in the context of dynamical systems, and it relates two different notions of integrability: integrability in the sense of Liouville of a Hamiltonian system, and integrability in the sense of differential Galois theory of a differential equation. This article presents some applications of Morales–Ramis theory to non-integrability problems for Hamiltonian systems whose normal variational equation along a particular integral curve is a second-order linear differential equation with rational function coefficients. The integrability of the normal variational equation is analyzed by means of Kovacic's algorithm.
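As background (standard material, not quoted from the article): Kovacic's algorithm applies to second-order linear equations with rational coefficients brought to reduced form, and decides whether Liouvillian solutions exist.

```latex
% Background, standard form (not quoted from the article): Kovacic's
% algorithm takes a second-order linear ODE with rational coefficients,
% reduced to the form
\[
  y'' = r(x)\, y, \qquad r \in \mathbb{C}(x),
\]
% and decides whether it admits Liouvillian (closed-form) solutions, which
% settles the differential Galois integrability of the normal variational
% equation.
```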
Abstract:
I joined the research group of Prof. McCammon (University of California San Diego) as a postdoctoral researcher with a Beatriu de Pinós fellowship on 1 December 2010, and I carried out my research there until 1 April 2012. Prof. McCammon is a world reference in the application of molecular dynamics (MD) simulations to biological systems of human interest. His most important contribution to the simulation of biological systems is the development of the accelerated molecular dynamics (AMD) method. Conventional MD simulations, which are limited to the nanosecond time scale (~10⁻⁹ s), are not suitable for studying biological systems relevant at longer time scales (μs, ms, ...). AMD makes it possible to explore molecular events that are rare but key to understanding many biological systems, events that could not be observed otherwise. During my stay at the University of California San Diego, I worked on different applications of AMD simulations, including photochemistry and computer-aided drug design. Specifically, I first successfully developed a combination of AMD and Car-Parrinello simulations to improve the exploration of deactivation pathways (conical intersections) in photoactivated chemical reactions. Second, I applied statistical techniques (Replica Exchange) together with AMD to the description of protein-ligand interactions. Finally, I carried out a computer-aided drug design study of the Rho G-protein (involved in the development of human cancer), combining structural analyses and AMD simulations. The projects I participated in have been published (or are still under review) in different scientific journals and have been presented at several international conferences. The report included below contains further details of each of these projects.