120 results for Spinodal decomposition
Abstract:
This paper is concerned with the modeling and analysis of quantum dissipation phenomena in the Schrödinger picture. More precisely, we investigate in detail a dissipative, nonlinear Schrödinger equation accounting for quantum Fokker–Planck effects, and how it reduces drastically to a simpler logarithmic equation via a nonlinear gauge transformation, in such a way that the physics underlying both problems remains unaltered. From a mathematical viewpoint, this allows for a more tractable analysis of the local well-posedness of the initial–boundary value problem. The simplification requires the polar (modulus–argument) decomposition of the wavefunction, which is carried out rigorously (for the first time, to the best of our knowledge) under quite reasonable assumptions.
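As an illustrative sketch only (the notation below is ours, not the paper's): the polar decomposition writes the wavefunction in modulus–argument form, and a nonlinear gauge transformation acting on the phase can reduce the dissipative problem to an equation with a logarithmic nonlinearity of the general type

\psi(x,t) = \sqrt{n(x,t)}\, e^{i S(x,t)/\hbar}, \qquad i\hbar\, \partial_t \psi = -\frac{\hbar^2}{2m}\,\Delta\psi + \lambda\, \psi \ln|\psi|^2 ,

where n is the position density, S the phase, and \lambda a dissipation-related coefficient; the equation and transformation actually studied in the paper are more involved.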
Abstract:
This work proposes work breakdown structures (WBS) for a set of IT project models that, without claiming to be an exhaustive compilation, is representative enough of the wide variety of existing types.
Abstract:
The use of orthonormal coordinates in the simplex and, particularly, balance coordinates, has suggested the use of a dendrogram for the exploratory analysis of compositional data. The dendrogram is based on a sequential binary partition of a compositional vector into groups of parts. At each step of a partition, one group of parts is divided into two new groups, and a balancing axis in the simplex between both groups is defined. The set of balancing axes constitutes an orthonormal basis, and the projections of the sample on them are orthogonal coordinates. They can be represented in a dendrogram-like graph showing: (a) the way of grouping parts of the compositional vector; (b) the explanatory role of each subcomposition generated in the partition process; (c) the decomposition of the total variance into balance components associated with each binary partition; (d) a box-plot of each balance. This representation is useful to help the interpretation of balance coordinates; to identify which are the most explanatory coordinates; and to describe the whole sample in a single diagram independently of the number of parts of the sample.
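For reference (standard notation from the compositional-data literature, not quoted from the paper), the balance associated with splitting a group of parts into subgroups R and S, with r and s parts respectively, is usually written as

b = \sqrt{\frac{rs}{r+s}}\, \ln \frac{g(x_R)}{g(x_S)} ,

where g(\cdot) denotes the geometric mean of the parts in each subgroup; the balances obtained along the sequential binary partition are the coordinates displayed in the dendrogram.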
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to go on or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and we do not need further refinement; otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. Also, a discrimination measure based on the entropy of the samples for refinement in ray-tracing is introduced. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
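A minimal sketch of the general idea (the generator, threshold and sample values below are invented for illustration; they are not the paper's oracles): an f-divergence between the normalized samples and a uniform reference serves as a refinement test.

import numpy as np

def f_divergence(p, q, f):
    # D_f(p || q) = sum_i q_i * f(p_i / q_i)
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(q * f(p / q)))

def needs_refinement(samples, threshold=0.05):
    # Treat the (positive) sample values as a distribution and compare it
    # with a uniform reference: a large divergence means high variability,
    # so the region should be subdivided and sampled further.
    p = np.asarray(samples, float)
    q = np.full(len(p), 1.0 / len(p))
    kl = f_divergence(p, q, lambda t: t * np.log(t))  # f(t) = t ln t -> Kullback-Leibler
    return kl > threshold

print(needs_refinement([0.9, 0.95, 1.0, 1.05]))  # homogeneous samples -> False
print(needs_refinement([0.1, 2.0, 0.05, 3.0]))   # highly variable     -> True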
Abstract:
This paper surveys control architectures proposed in the literature and describes a control architecture that is being developed for a semi-autonomous underwater vehicle for intervention missions (SAUVIM) at the University of Hawaii. Conceived as a hybrid, this architecture is organized in three layers: planning, control and execution. The mission is planned as a sequence of subgoals. Each subgoal has a related task supervisor responsible for arranging a set of pre-programmed task modules in order to achieve the subgoal. Task modules are the key concept of the architecture. They are the main building blocks and can be dynamically re-arranged by the task supervisor. In our architecture, deliberation takes place at the planning layer, while reaction is handled through the parallel execution of the task modules. Hence, the system presents both a hierarchical and a heterarchical decomposition, and is able to show a predictable response while remaining rapidly reactive to the dynamic environment.
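A minimal, hypothetical sketch of the task-supervisor idea (all names and modules are invented for illustration; in SAUVIM the modules run in parallel and are re-arranged in reaction to events):

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TaskModule:
    # Pre-programmed building block that a task supervisor can (re-)arrange.
    name: str
    step: Callable[[], bool]   # returns True once the module has completed

@dataclass
class TaskSupervisor:
    subgoal: str
    modules: List[TaskModule] = field(default_factory=list)

    def run(self) -> None:
        # Execute the current arrangement; a real supervisor would also
        # re-arrange or interrupt modules in reaction to environment events.
        for module in self.modules:
            while not module.step():
                pass
        print(f"subgoal '{self.subgoal}' achieved")

# Hypothetical usage: navigate to a waypoint, then hold station.
supervisor = TaskSupervisor("reach waypoint", [
    TaskModule("navigate", lambda: True),
    TaskModule("hold_station", lambda: True),
])
supervisor.run()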
Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decomposition of a sample changes its composition with time. Some of the involved isotopes decompose into other isotopes of the sample, thus producing a transfer of mass from some components to other ones, but preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, despite some subcompositions behaving linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted both in coordinates of the simplex and in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration, one of them a radioactive decay.
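As a generic illustration of the stability test described here (the system below is invented and is not the paper's example): linearise the coordinate-space equations at a stationary point and inspect the eigenvalues of the Jacobian.

import numpy as np

def jacobian(f, x, eps=1e-6):
    # Numerical Jacobian of f at x (central differences).
    x = np.asarray(x, float)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        d = np.zeros_like(x)
        d[j] = eps
        J[:, j] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

# Hypothetical nonlinear system written in ilr coordinates of a
# three-part composition, with a stationary point at the origin.
f = lambda z: np.array([-z[0] + z[0] * z[1], -2.0 * z[1] + z[0] ** 2])

eigenvalues = np.linalg.eigvals(jacobian(f, [0.0, 0.0]))
print(eigenvalues)  # all real parts negative -> asymptotically stable stationary point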
Abstract:
Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, the analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms either of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, at reviewing methodological aspects of the inequality measurement of certain environmental data and, on the other, at extending the scarce empirical evidence on the international distribution of the Ecological Footprint (EF) by using a longer EF time series. Most of the techniques currently important in the literature are reviewed and then tested on EF data, with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices which fit best with environmental inequality measurement are CV2 and GE(2) because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting factor decomposition method is proposed in order to isolate weighting factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV2, which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the environmental inequality measurement of EF: this inequality has been quite stable, and its change over time is due to per capita vector changes rather than population changes. Almost the entirety of EF inequality is explained by differences in means between the World Bank country groups. This finding suggests that international environmental agreements should be attempted on a regional basis in an attempt to achieve greater consensus between the parties involved. Additionally, source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies because of the implications for basic needs satisfaction.
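As a small numerical aside with made-up figures (not the paper's data): the two indices singled out here are closely related, since GE(2) equals half the squared coefficient of variation, which is what makes their "natural" decompositions by source and by subgroup attractive.

import numpy as np

def cv2(x):
    # Squared coefficient of variation.
    x = np.asarray(x, float)
    return x.var() / x.mean() ** 2

def ge2(x):
    # Generalized entropy index with sensitivity parameter 2.
    x = np.asarray(x, float)
    return 0.5 * (np.mean((x / x.mean()) ** 2) - 1.0)

ef = np.array([1.2, 2.5, 4.8, 7.9, 9.6])  # hypothetical per-capita footprints
print(cv2(ef), 2.0 * ge2(ef))             # the two numbers coincide: CV2 = 2 * GE(2)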
Abstract:
Recently, White (2007) analysed international inequalities in Ecological Footprints per capita (EF hereafter) based on a two-factor decomposition of an index from the Atkinson family (Atkinson, 1970). Specifically, that paper evaluated the separate roles of environmental intensity (EF/GDP) and average income as explanatory factors for these global inequalities. However, in addition to other issues regarding its appeal, this decomposition suffers from the serious limitation of omitting the role played by probable factorial correlation (York et al., 2005). This paper proposes, as an alternative, a decomposition of a conceptually similar index, Theil's (Theil, 1967), which permits a clean decomposition in terms of the roles of both factors plus an inter-factor correlation, in line with Duro and Padilla (2006). This decomposition can, in turn, be extended to group inequality components (Shorrocks, 1980), an analysis that cannot be conducted with the Atkinson indices. The proposed methodology is implemented empirically with the aim of analysing international inequalities in EF per capita for the 1980-2007 period and, amongst other results, we find that the interactive component explains, to a significant extent, the apparent pattern of stability observed in overall international inequalities.
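For orientation, a sketch of the group-decomposition extension mentioned above (Shorrocks, 1980), with an invented toy dataset rather than the paper's figures: the Theil index decomposes exactly into a between-group term and a weighted sum of within-group indices.

import numpy as np

def theil(x):
    # Theil T index: mean of (x/mu) * ln(x/mu).
    x = np.asarray(x, float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def theil_by_groups(x, groups):
    # Exact decomposition T = T_between + sum_g weight_g * T_g,
    # with weight_g = (population share of g) * (mean of g / overall mean).
    x = np.asarray(x, float)
    groups = np.asarray(groups)
    mu, n = x.mean(), len(x)
    between = within = 0.0
    for g in np.unique(groups):
        xg = x[groups == g]
        weight = (len(xg) / n) * (xg.mean() / mu)
        between += weight * np.log(xg.mean() / mu)
        within += weight * theil(xg)
    return between, within

ef = np.array([1.0, 1.5, 2.0, 6.0, 7.5, 9.0])                 # hypothetical EF per capita
grp = np.array(["low", "low", "low", "high", "high", "high"])
b, w = theil_by_groups(ef, grp)
print(theil(ef), b + w)                                       # the totals coincide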
Abstract:
A conceptually new approach is introduced for the decomposition of the molecular energy calculated at the density functional theory (DFT) level into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and gives values unexpectedly close to those calculated with the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
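Schematically (our notation, not quoted from the paper), such an atomic energy decomposition has the form

E = \sum_A E_A + \sum_{A<B} E_{AB}, \qquad w_A(\mathbf{r}) \ge 0, \quad \sum_A w_A(\mathbf{r}) = 1 \ \text{for all } \mathbf{r},

where the w_A are the fuzzy-atom weight functions that share every point of space among the atoms without sharp boundaries.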
Abstract:
In this paper, we characterize the non-emptiness of the equity core (Selten, 1978) and provide an easy-to-implement method for computing the Lorenz-maximal allocations in the equal division core (Dutta-Ray, 1991). Both results are based on a geometric decomposition of the equity core as a finite union of polyhedra. Keywords: Cooperative game, equity core, equal division core, Lorenz domination. JEL classification: C71
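For context, a commonly used statement of the equal division core of a game (N, v) (the paper's precise definitions may differ) is

EDC(v) = \{\, x \in \mathbb{R}^N : \sum_{i \in N} x_i = v(N), \ \nexists\, S \subseteq N \ \text{with}\ v(S)/|S| > x_i \ \text{for all}\ i \in S \,\},

i.e. the efficient allocations that no coalition can block by dividing its own worth equally among its members.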
Abstract:
The Computational Biophysics Group at the Universitat Pompeu Fabra (GRIB-UPF) hosts two unique computational resources dedicated to the execution of large-scale molecular dynamics (MD) simulations: (a) the ACEMD molecular-dynamics software, used on standard personal computers with graphics processing units (GPUs); and (b) the GPUGRID.net computing network, supported by users distributed worldwide who volunteer GPUs for biomedical research. We leveraged these resources and developed studies, protocols and open-source software to elucidate the energetics and pathways of a number of biomolecular systems, with a special focus on flexible proteins with many degrees of freedom. First, we characterized ion permeation through the bactericidal model protein Gramicidin A, conducting one of the largest studies to date with the steered MD biasing methodology. Next, we addressed an open problem in structural biology, the determination of drug-protein association kinetics; we reconstructed the binding free energy and the association and dissociation rates of a drug-like model system through a spatial decomposition and a Markov-chain analysis. The work was published in the Proceedings of the National Academy of Sciences and became one of the few landmark papers elucidating a ligand-binding pathway. Furthermore, we investigated the unstructured Kinase Inducible Domain (KID), a 28-residue peptide central to signalling and transcriptional response; the kinetics of this challenging system were modelled with a Markovian approach in collaboration with Frank Noe's group at the Freie Universität Berlin. The impact of the funding includes three peer-reviewed publications in high-impact journals; three more papers under review; four MD analysis components released as open-source software; MD protocols; didactic material; and code for the hosting group.
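A minimal sketch of the Markov-chain ingredient mentioned above (the discretized trajectory and states are an invented toy; the published protocol builds a proper Markov state model from much longer simulations): estimate a lag-time transition matrix from a discretized trajectory and read equilibrium populations off its stationary distribution.

import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    # Count transitions at the chosen lag time and row-normalize.
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def stationary_distribution(T):
    # Left eigenvector of T with eigenvalue 1, normalized to sum to one.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Hypothetical discretized trajectory over 3 states (0 = unbound, 2 = bound).
dtraj = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 1, 0])
T = transition_matrix(dtraj, 3, lag=1)
print(stationary_distribution(T))   # equilibrium populations of the three states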
Abstract:
In this work we describe the use of bilinear statistical models as a means of factoring shape variability into two components attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, we can use the bilinear model to do so. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
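A toy sketch of one common way to fit an asymmetric bilinear factorization with an SVD (data, dimensions and variable names are invented; the paper's model and its temporal/spatial alignment steps are more involved):

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_phases, n_points = 5, 4, 30

# Hypothetical dataset: one shape vector per subject and cardiac phase
# (in practice these would be aligned landmark coordinates).
shapes = rng.normal(size=(n_subjects, n_phases, n_points))

# Asymmetric bilinear factorization: rows index subjects and columns index
# (phase, coordinate) pairs, so the SVD separates subject-specific factors
# from phase-dependent basis shapes.
Y = shapes.reshape(n_subjects, n_phases * n_points)
U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)

subject_factors = U * s                           # inter-subject variation
phase_basis = Vt.reshape(-1, n_phases, n_points)  # cardiac-cycle dynamics

print(subject_factors.shape, phase_basis.shape)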
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, and in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
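A small sketch of the generic SVD-based construction that biplots build on (illustrative data and scaling choice; this is not the paper's "standard biplot" itself):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))   # hypothetical cases (rows) x variables (columns)

# Centre (and, if the variables are not comparable, standardize) the data,
# then compute its SVD, X_c = U diag(s) V'.  Biplot coordinates are obtained
# by distributing diag(s) between the row points and the column points.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

row_coords = U * s             # principal coordinates for the cases
col_coords = Vt.T              # standard coordinates for the variables

# Scalar products of the leading two dimensions approximate the centred data.
approx = row_coords[:, :2] @ col_coords[:, :2].T
print(np.round(Xc - approx, 2))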