997 results for "Problem Decomposition"
Abstract:
In a recent paper [Phys. Rev. B 50, 3477 (1994)], P. Fratzl and O. Penrose present the results of a Monte Carlo simulation of the spinodal decomposition problem (phase separation) using the vacancy dynamics mechanism. They observe that the t^{1/3} growth regime is reached faster than with the standard Kawasaki dynamics. In this Comment we provide a simple explanation for this phenomenon based on the role of interface diffusion, which they claim is irrelevant to the observed behavior.
Abstract:
The integral representation of the electromagnetic two-form, defined on Minkowski space-time, is studied from a new point of view. The aim of the paper is to obtain an invariant criterion for defining the radiative field. This criterion generalizes the well-known structureless charge case. We begin with the curvature two-form, because its field equations incorporate the motion of the sources. Gauge theory methods (connection one-forms) are not suitable because their field equations do not incorporate the motion of the sources. We obtain an integral solution of the Maxwell equations in the case of a flow of charges in irrotational motion. This solution leads us to propose a new method of solving the problem of the nature of the retarded radiative field. The method is based on a projection tensor operator which, being local, is suited to implementation in general relativity. We propose field equations for the pair {electromagnetic field, projection tensor}. These field equations form an algebraic differential first-order system of one-forms which automatically verifies the integrability conditions.
Abstract:
The study analyses the evolution of greenhouse gas (GHG) and acidification emissions for Italy over the period 1995-2005. The data show that while emissions contributing to acidification have fallen steadily, GHG emissions have risen, driven by the increase in carbon dioxide. The aim of this study is to highlight how different economic factors, in particular economic growth, the development of less-polluting technology and the structure of consumption, have driven the evolution of emissions. The proposed methodology is structural decomposition analysis (SDA), a method that decomposes changes in the variable of interest among the different driving forces and reveals the importance of each factor. In addition, this study considers the importance of international trade and attempts to address the "responsibility problem": through international trade relations, a country may be exporting polluting production processes without any real reduction of the pollution embodied in its consumption pattern. To this end, following first a "producer responsibility" approach, the SDA is applied to emissions caused by domestic production. The analysis then moves to a "consumer responsibility" approach, and the decomposition is applied to emissions linked to the domestic or foreign production that satisfies domestic demand. In this way, the exercise provides a first check of the importance of international trade and highlights some results at the global and sectoral levels.
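The core of a structural decomposition analysis can be illustrated with a minimal two-factor example (hypothetical numbers; the study itself works with full input-output data and more driving forces): total emissions E = intensity × output, and the change in E is split exactly between the two drivers using midpoint (polar-average) weights.

```python
# Minimal two-factor structural decomposition (hypothetical data).
# Emissions E = intensity i * output q; the change in E between two
# years is attributed to an intensity effect and an output effect
# using average (polar-mean) weights, so the two effects sum exactly
# to the observed change.

i0, q0 = 0.80, 100.0   # base-year emission intensity and output
i1, q1 = 0.65, 140.0   # end-year emission intensity and output

dE = i1 * q1 - i0 * q0                        # total change in emissions
intensity_effect = (i1 - i0) * (q0 + q1) / 2  # technology driver
output_effect = (q1 - q0) * (i0 + i1) / 2     # economic-growth driver

assert abs(dE - (intensity_effect + output_effect)) < 1e-9
```

With more factors (e.g. trade shares or consumption structure), the same idea applies term by term, which is what makes the method attractive for attributing emission changes to individual forces.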
Abstract:
Decomposition and side reactions of, and the synthetic use of, pentafluorophenylmagnesium bromide and pentafluorophenyllithium have been investigated using G.C./M.S. techniques. Their reactions with reagents such as C6F5X (X = H, F, Cl, Br, I), C6F4X2 (X = H, Cl), C6F3Cl3, C6H6, (C6X5)3P (X = H, F), (C6X5)3P=O (X = H, F), (C6X5)Si(CH3)3 (X = H, F) and (CH3)nSiCl4-n (n = 1, 2), in ether or ether/n-hexane, were studied. In addition to the principal reaction of synthetic use, namely the replacement of a halogen by a pentafluorophenyl group, two types of side reaction were observed. These were (i) intermolecular loss of LiF via a nucleophilic substitution, and (ii) intramolecular loss of LiF, followed by the addition of either inorganic salts such as lithium or magnesium halides, or organometallic compounds such as organolithium or organo-Grignard reagents. G.C./M.S. techniques were routinely employed to study complicated reaction mixtures. Although mass spectrometry alone has disadvantages for the identification of isomers, deduction of the most probable pathway often helps overcome this problem.
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, since it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
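The conventional Gram-Schmidt step that the extended algorithm builds on can be sketched as follows (synthetic data; this is the plain orthogonal decomposition of a regression matrix, not the authors' extended rule-base-subspace version):

```python
import numpy as np

# Classical Gram-Schmidt decomposition of a regression matrix P into
# P = W A, with W having mutually orthogonal columns and A unit upper
# triangular.  Orthogonality decouples the least-squares problem, so
# each direction's contribution ("energy") can be read off on its own.
# The data here are made up for illustration.

rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))                 # regression matrix
y = P @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(50)

n = P.shape[1]
W = P.astype(float).copy()
A = np.eye(n)
for k in range(n):
    for j in range(k):
        A[j, k] = W[:, j] @ P[:, k] / (W[:, j] @ W[:, j])
        W[:, k] -= A[j, k] * W[:, j]             # remove earlier directions

# orthogonal columns give decoupled coefficients, then back-substitute
g = np.array([W[:, k] @ y / (W[:, k] @ W[:, k]) for k in range(n)])
theta = np.linalg.solve(A, g)                    # model parameter vector

# fraction of output energy explained by each orthogonal direction
energy = g**2 * np.einsum('ij,ij->j', W, W) / (y @ y)
```

The "energy level" interpretation in the abstract corresponds to inspecting quantities like `energy` per decomposed subspace rather than per single regressor.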
Abstract:
Solutions of a two-dimensional dam break problem are presented for two tailwater/reservoir height ratios. The numerical scheme used is an extension of one previously given by the author [J. Hyd. Res. 26(3), 293–306 (1988)] and is based on numerical characteristic decomposition. Approximate solutions are thus obtained via linearised problems, and upwind differencing is used for the resulting scalar problems, together with a flux limiter to obtain a second-order scheme that avoids non-physical, spurious oscillations.
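The scalar building block of such a scheme can be sketched on linear advection u_t + a u_x = 0, a stand-in for the linearised characteristic subproblems (the grid, CFL number and minmod limiter here are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

# First-order upwind flux plus a minmod-limited second-order correction
# for u_t + a u_x = 0 with a > 0.  The limiter switches the correction
# off near steep gradients, which is what suppresses the spurious
# oscillations mentioned in the abstract.

def minmod(r, s):
    return np.where(r * s > 0, np.sign(r) * np.minimum(np.abs(r), np.abs(s)), 0.0)

a, nx, nt = 1.0, 200, 100
dx = 1.0 / nx
dt = 0.4 * dx / a                          # CFL number 0.4
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

for _ in range(nt):
    du = np.diff(u, append=u[-1])          # forward differences
    dm = np.diff(u, prepend=u[0])          # backward differences
    slope = minmod(dm, du)                 # limited slope per cell
    # numerical flux at i+1/2: upwind part + limited antidiffusive part
    f = a * u + 0.5 * a * (1 - a * dt / dx) * slope
    u[1:] -= dt / dx * (f[1:] - f[:-1])

# the pulse advects right without over- or undershoots
assert u.min() > -1e-9 and u.max() < 1.0 + 1e-9
```

Without the `slope` term the scheme is first-order and diffusive; without the limiter (pure Lax-Wendroff correction) it oscillates at the discontinuities.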
Abstract:
A technique is derived for solving a non-linear optimal control problem by iterating on a sequence of simplified problems in linear quadratic form. The technique is designed to achieve the correct solution of the original non-linear optimal control problem in spite of these simplifications. A mixed approach with a discrete performance index and continuous state variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local sub-system problems and a co-ordinator within a hierarchical framework. An analysis of the optimality and convergence properties of the algorithm is presented and the effectiveness of the technique is demonstrated using a simulation example with a non-separable performance index.
Abstract:
Several popular machine learning techniques were originally designed for the solution of two-class problems, yet many classification problems have more than two classes. One approach to dealing with multiclass problems using binary classifiers is to decompose the multiclass problem into multiple binary sub-problems arranged in a binary tree. This approach requires a binary partition of the classes at each node of the tree, which defines the tree structure. This paper presents two algorithms that determine the tree structure taking into account information collected from the dataset used. This approach allows the tree structure to be determined automatically for any multiclass dataset.
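A minimal sketch of the tree-of-binary-subproblems idea (the centroid-based partition rule below is a hypothetical illustration, not either of the paper's two algorithms):

```python
import numpy as np

# Decompose a multiclass problem into a binary tree of two-class
# subproblems.  At each node the remaining classes are split into two
# groups by a simple data-driven rule -- here, sorting class centroids
# along their principal direction and cutting in half (an illustrative
# choice).  A binary classifier would then be trained at each internal
# node to separate the two groups.

def build_tree(classes, centroids):
    if len(classes) == 1:
        return classes[0]                      # leaf: final class label
    C = np.array([centroids[c] for c in classes])
    axis = np.linalg.svd(C - C.mean(0))[2][0]  # principal direction of centroids
    order = np.argsort(C @ axis)               # sort classes along that axis
    half = len(classes) // 2
    left = [classes[i] for i in order[:half]]
    right = [classes[i] for i in order[half:]]
    return (build_tree(left, centroids), build_tree(right, centroids))

centroids = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.1]),
             2: np.array([5.0, 5.0]), 3: np.array([6.0, 4.9])}
tree = build_tree([0, 1, 2, 3], centroids)
print(tree)   # nested pairs of class labels
```

Classifying a sample then means walking the tree, asking one binary question per level, so a K-class problem needs only about log2(K) classifier evaluations per prediction.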
Abstract:
Foundries can be found all over Brazil and they are very important to its economy. In 2008, a mixed integer programming model for small market-driven foundries was published, attempting to minimize delivery delays. We undertook a study of that model. Here, we present a new approach based on the decomposition of the problem into two sub-problems: production planning of alloys and production planning of items. Both sub-problems are solved using a Lagrangian heuristic based on transferences. An important aspect of the proposed heuristic is its ability to take into account a secondary objective of practical interest: reducing furnace waste. Computational tests show that the approach proposed here is able to generate good quality solutions that outperform prior results. Journal of the Operational Research Society (2010) 61, 108-114. doi:10.1057/jors.2008.151
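The Lagrangian machinery such heuristics build on can be sketched on a toy problem (illustrative data; the paper's foundry model, with alloys, items and furnaces, is far richer): relax the single coupling constraint of a 0-1 knapsack into the objective, solve the now-trivial subproblem, and update the multiplier by diminishing subgradient steps.

```python
# Toy Lagrangian relaxation of a 0-1 knapsack capacity constraint.
# Each dual evaluation gives an upper bound on the optimum (16 for
# this instance); a full Lagrangian heuristic would also repair each
# relaxed solution into a feasible one.

values = [10, 7, 6, 3]
weights = [5, 4, 3, 1]
cap = 8

lam = 0.0                     # multiplier for the capacity constraint
best_bound = float('inf')
for t in range(1, 101):
    # relaxed subproblem: take item i iff its reduced profit v_i - lam*w_i > 0
    x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
    load = sum(w * xi for w, xi in zip(weights, x))
    bound = sum(v * xi for v, xi in zip(values, x)) + lam * (cap - load)
    best_bound = min(best_bound, bound)        # best dual (upper) bound so far
    lam = max(0.0, lam + (load - cap) / t)     # subgradient step, size 1/t
```

In the paper's setting the relaxation instead decouples the alloy and item sub-problems, and the transference step plays the role of the feasibility repair omitted here.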
Abstract:
Suppose that X, Y, A and B are Banach spaces such that X is isomorphic to Y ⊕ A and Y is isomorphic to X ⊕ B. Are X and Y necessarily isomorphic? In this generality, the answer is no, as proved by W. T. Gowers in 1996. In the present paper, we provide a very simple necessary and sufficient condition on the 10-tuples (k, l, m, n, p, q, r, s, u, v) in N with p+q+u ≥ 3, r+s+v ≥ 3, uv ≥ 1, (p, q) ≠ (0, 0), (r, s) ≠ (0, 0) and u = 1 or v = 1 or (p, q) = (1, 0) or (r, s) = (0, 1), which guarantees that X is isomorphic to Y whenever these Banach spaces satisfy X^u ≅ X^p ⊕ Y^q, Y^v ≅ X^r ⊕ Y^s, and A^k ⊕ B^l ≅ A^m ⊕ B^n. Namely: δ = ±1, or ◊ ≠ 0, gcd(◊, δ(p + q − u)) divides p + q − u and gcd(◊, δ(r + s − v)) divides r + s − v, where δ = k − l − m + n is the characteristic number of the 4-tuple (k, l, m, n) and ◊ = (p − u)(s − v) − rq is the discriminant of the 6-tuple (p, q, r, s, u, v). We conjecture that this result is in some sense a maximal extension of the classical Pelczynski decomposition method in Banach spaces: the case (1, 0, 1, 0, 2, 0, 0, 2, 1, 1). (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
Suppose that X and Y are Banach spaces isomorphic to complemented subspaces of each other. In 1996, W. T. Gowers solved the Schroeder-Bernstein problem for Banach spaces by showing that X is not necessarily isomorphic to Y. However, if X² is complemented in X with supplement A and Y² is complemented in Y with supplement B, that is, X ≅ X² ⊕ A and Y ≅ Y² ⊕ B, then the classical Pelczynski decomposition method for Banach spaces shows that X is isomorphic to Y whenever we can assume that A = B = {0}. But unfortunately, this is not always possible. In this paper, we show that it is possible to find all finite relations of isomorphism between A and B which guarantee that X is isomorphic to Y. In order to do this, we say that a quadruple (p, q, r, s) in N is a P-quadruple for Banach spaces if X is isomorphic to Y whenever the supplements A and B satisfy A^p ⊕ B^q ≅ A^r ⊕ B^s. Then we prove that (p, q, r, s) is a P-quadruple for Banach spaces if and only if p − r = s − q = ±1.
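For reference, the classical Pelczynski decomposition argument invoked above (the case A = B = {0}, so that X ≅ X² and Y ≅ Y², with X ≅ Y ⊕ E and Y ≅ X ⊕ F for some complemented subspaces E and F) runs in three lines:

```latex
% Pelczynski's decomposition method, assuming
%   X \cong X^2, \quad Y \cong Y^2, \quad X \cong Y \oplus E, \quad Y \cong X \oplus F.
\begin{align*}
Y &\cong X \oplus F \cong X^2 \oplus F \cong X \oplus (X \oplus F) \cong X \oplus Y,\\
X &\cong Y \oplus E \cong Y^2 \oplus E \cong Y \oplus (Y \oplus E) \cong Y \oplus X,\\
X &\cong Y \oplus X \cong X \oplus Y \cong Y.
\end{align*}
```

The whole difficulty addressed by the paper is what replaces this argument when A and B cannot be taken trivial.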
Abstract:
This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker. More important, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the one played by the technology gap.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)