997 results for Problem Decomposition
Abstract:
In this paper we address the new reduction method called Proper Generalized Decomposition (PGD), a discretization technique based on the use of separated representations of the unknown fields, especially well suited for solving multidimensional parametric equations. Here it is applied to the solution of dynamics problems. We focus on the dynamic analysis of a one-dimensional rod with a unit harmonic load of frequency ω applied at a point of interest. In what follows, we present the application of the PGD methodology to this problem in order to approximate the displacement field as a sum of separated functions. As new variables of the problem we consider, in addition to the frequency, model parameters associated with the material characteristics. Finally, the quality of the results is assessed on an example.
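A minimal sketch of the separated-representation idea (our illustration, not the authors' solver): the displacement of a fixed-free rod under a unit harmonic tip load is evaluated over an (x, ω) grid and then compressed greedily into a sum of products F_i(x)G_i(ω) by PGD-style alternating fixed-point updates. The material constants, grid sizes and load case are assumptions made for the example; a genuine PGD implementation would build the modes without ever assembling the full field.

```python
# Illustrative sketch (not the authors' code): build a separated representation
# u(x, w) ~ sum_i F_i(x) G_i(w) by greedy rank-one enrichment with alternating
# fixed-point updates, the algebraic core of PGD.
import numpy as np

E, rho, L, n = 1.0, 1.0, 1.0, 200          # assumed material/grid values
x = np.linspace(0.0, L, n)
omegas = np.linspace(0.5, 20.0, 100)

# Reference field for a fixed-free rod with unit harmonic tip load:
# u(x) = sin(k x) / (E k cos(k L)), k = w * sqrt(rho / E), unit section area.
U = np.empty((n, omegas.size))
for j, w in enumerate(omegas):
    k = w * np.sqrt(rho / E)
    U[:, j] = np.sin(k * x) / (E * k * np.cos(k * L))

rng = np.random.default_rng(0)
R = U.copy()                               # residual to be enriched against
for mode in range(5):
    F, G = rng.random(n), rng.random(omegas.size)
    for _ in range(50):                    # alternating (fixed-point) updates
        G = R.T @ F / (F @ F)
        F = R @ G / (G @ G)
    R -= np.outer(F, G)                    # deflate, then add the next mode
    print(f"mode {mode + 1}: rel. residual "
          f"{np.linalg.norm(R) / np.linalg.norm(U):.2e}")
```

A few modes typically capture the frequency sweep away from resonances, which is the point of the separated representation.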
Abstract:
The existence of the RNA world, in which RNA acted as a catalyst as well as an informational macromolecule, assumes a large prebiotic source of ribose or the existence of pre-RNA molecules with backbones different from ribose-phosphate. The generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity. Even if there were a selective synthesis of ribose, there is still the problem of stability. Sugars are known to be unstable in strong acid or base, but there are few data for neutral solutions. Therefore, we have measured the rate of decomposition of ribose between pH 4 and pH 8 from 40 °C to 120 °C. The ribose half-lives are very short (73 min at pH 7.0 and 100 °C, and 44 years at pH 7.0 and 0 °C). The other aldopentoses and aldohexoses have half-lives within an order of magnitude of these values, as do 2-deoxyribose, ribose 5-phosphate, and ribose 2,4-bisphosphate. These results suggest that the backbone of the first genetic material could not have contained ribose or other sugars because of their instability.
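The two half-lives quoted above are enough to estimate an apparent activation energy and hence half-lives at intermediate temperatures. The sketch below is our back-of-the-envelope illustration, assuming simple Arrhenius behaviour; it is not a calculation taken from the paper.

```python
# Back-of-the-envelope check (illustrative, assuming Arrhenius behaviour):
# the two reported half-lives at pH 7.0 fix an apparent activation energy.
import math

R = 8.314                                   # gas constant, J mol^-1 K^-1
t1, T1 = 73.0, 373.15                       # 73 min at 100 °C
t2, T2 = 44 * 365.25 * 24 * 60.0, 273.15    # 44 years at 0 °C, in minutes

# t_1/2 is proportional to 1/k, so ln(t2/t1) = (Ea/R) * (1/T2 - 1/T1)
Ea = R * math.log(t2 / t1) / (1.0 / T2 - 1.0 / T1)
print(f"apparent activation energy: {Ea / 1000:.0f} kJ/mol")

def half_life_minutes(T_celsius):
    """Interpolated ribose half-life at pH 7.0 (illustrative only)."""
    T = T_celsius + 273.15
    return t1 * math.exp(Ea / R * (1.0 / T - 1.0 / T1))

print(f"half-life at 25 °C: about {half_life_minutes(25.0) / (60 * 24):.0f} days")
```

The resulting estimate (on the order of 100 kJ/mol, and a half-life of months at room temperature) is consistent with the paper's conclusion that ribose is too unstable to have accumulated prebiotically.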
Abstract:
Polyvinyl chloride (PVC) is one of the most extensively used plastics due to its versatility. The demand for PVC resin in Europe during 2012 reached 5000 ktonnes [1]. PVC waste management is a major problem because of the high volume generated all over the world and its chlorine content. End-of-life PVC is mainly mixed with municipal solid waste (MSW), and one common disposal option for this is waste-to-energy (WtE) incineration. The presence of plastics such as PVC in the fuel mix increases the heating value of the fuel: PVC has roughly twice the energy content of MSW, around 20 MJ/kg vs. 10 MJ/kg, respectively. However, the high chlorine content of PVC resin, 57 wt.%, may be a source of hazardous chlorinated organic pollutants in thermal processes. Chlorine present in the feedstock of WtE plants plays an important role in the formation of (i) chlorine gas (Cl2) and (ii) hydrogen chloride (HCl), both responsible for corrosion, and (iii) chlorinated organic pollutants [2]. In this work, pyrolytic and oxidative thermal degradation of PVC resin were carried out in a laboratory-scale reactor at 500 °C in order to analyze the influence of the reaction atmosphere on the emissions evolved. Special emphasis was put on the analysis of chlorinated organic pollutants such as polychlorodibenzo-p-dioxins (PCDDs), polychlorodibenzofurans (PCDFs) and related compounds such as polychlorobenzenes (PCBzs), polychlorophenols (PCPhs) and polycyclic aromatic hydrocarbons (PAHs). Another objective of this work was to compare the results with those of a previous work [3] in which emissions at different temperatures in both pyrolysis and combustion of another PVC resin had been studied; in that case, experiments on PCDD/F emissions had been performed only at 850 °C.
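The heating-value and chlorine figures quoted above lend themselves to simple mixing arithmetic. The sketch below is our illustration, not part of the study: it estimates how a hypothetical PVC fraction in the MSW feed changes the heating value and the chlorine input of a WtE furnace.

```python
# Mixing arithmetic (illustrative): uses the figures quoted in the abstract,
# PVC ~20 MJ/kg with 57 wt.% Cl, MSW ~10 MJ/kg with chlorine neglected.
def fuel_mix(pvc_fraction):
    lhv = pvc_fraction * 20.0 + (1.0 - pvc_fraction) * 10.0  # MJ/kg of mix
    cl = pvc_fraction * 0.57                                  # kg Cl / kg mix
    return lhv, cl

for f in (0.00, 0.01, 0.05, 0.10):
    lhv, cl = fuel_mix(f)
    print(f"{f:5.0%} PVC: LHV = {lhv:4.1f} MJ/kg, Cl input = {cl*1000:5.1f} g/kg")
```

Even a few percent of PVC barely moves the heating value but multiplies the chlorine load, which is why the chlorinated emissions are the focus of the work.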
Abstract:
Necessary conditions for the complete graph on n vertices to have a decomposition into 5-cubes are that 5 divides n − 1 and 80 divides n(n − 1)/2. These are known to be sufficient when n is odd. We prove that they are also sufficient for n even, thus completing the spectrum problem for the 5-cube and lending further weight to a long-standing conjecture of Kotzig.
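The two divisibility conditions come from counting: the 5-cube Q5 is 5-regular, so 5 must divide the degree n − 1 of K_n, and Q5 has 80 edges, so 80 must divide the edge count n(n − 1)/2. A quick enumeration (our illustration) lists the smallest admissible orders:

```python
# Enumerate the orders n <= 500 satisfying the stated necessary conditions
# for decomposing K_n into 5-cubes: 5 | n-1 (Q5 is 5-regular) and
# 80 | n(n-1)/2 (Q5 has 80 edges).
admissible = [n for n in range(2, 501)
              if (n - 1) % 5 == 0 and (n * (n - 1) // 2) % 80 == 0]
print(admissible)
```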
Abstract:
A hard combinatorial problem with useful applications in the design of discrete devices is investigated: the two-block decomposition of a partial Boolean function. The key task is to find a weak partition on the set of arguments at which the considered function can be decomposed. Solving this task is substantially sped up by a preliminary discovery of traces of the sought-for partition, using efficient combinatorial operations based on the parallel execution of operations over adjacent units in the Boolean space.
Abstract:
The problem of sequent two-block decomposition of a Boolean function is regarded in case when a good solution does exist. The problem consists mainly in finding an appropriate weak partition on the set of arguments of the considered Boolean function, which should be decomposable at that partition. A new fast heuristic combinatorial algorithm is offered for solving this task. At first the randomized search for traces of such a partition is fulfilled. The recognized traces are represented by some "triads" - the simplest weak partitions corresponding to non-trivial decompositions. After that the whole sought-for partition is restored from the discovered trace by building a track initialized by the trace and leading to the solution. The results of computer experiments testify the high practical efficiency of the algorithm.
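For readers unfamiliar with decomposability at a partition of the arguments, which both abstracts above rely on, the sketch below shows the classical decomposition-chart test (the standard Ashenhurst criterion, not the randomized trace search proposed in these papers): a function f admits a disjoint decomposition h(g(bound variables), free variables) with single-output g exactly when its chart has at most two distinct columns.

```python
# Classical decomposition-chart test (a sketch of the standard Ashenhurst
# criterion, not the papers' algorithm).
from itertools import product

def column_multiplicity(f, n_vars, bound):
    free = [i for i in range(n_vars) if i not in bound]
    columns = set()
    for b in product((0, 1), repeat=len(bound)):
        column = []
        for fr in product((0, 1), repeat=len(free)):
            x = [0] * n_vars
            for i, v in zip(bound, b):
                x[i] = v
            for i, v in zip(free, fr):
                x[i] = v
            column.append(f(x))
        columns.add(tuple(column))
    return len(columns)

# Example: f = (x0 XOR x1) AND x2 decomposes with bound set {x0, x1}.
f = lambda x: (x[0] ^ x[1]) & x[2]
print(column_multiplicity(f, 3, bound=[0, 1]))   # prints 2 -> decomposable
```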
Abstract:
2000 Mathematics Subject Classification: 35L15, Secondary 35L30.
Abstract:
MSC 2010: 05C50, 15A03, 15A06, 65K05, 90C08, 90C35
Abstract:
As the efficiency of parallel software increases, it is becoming common to measure near-linear speedup for many applications. For a problem size N on P processors, with the solver running at O(N/P), the performance restrictions due to file I/O systems and mesh decomposition running at O(N) become increasingly apparent, especially for large P. For distributed-memory parallel systems, an additional limit to scalability results from the finite memory available for I/O scatter/gather operations. Simple strategies developed to address the scalability of scatter/gather operations for unstructured-mesh-based applications have been extended to provide scalable mesh decomposition through the development of a parallel graph partitioning code, JOSTLE [8]. The focus of this work is directed towards the development of generic strategies that can be incorporated into the Computer Aided Parallelisation Tools (CAPTools) project.
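The interplay between the O(N/P) and O(N) terms is essentially Amdahl's law. The toy model below (our illustration, with an assumed 2% of the work remaining O(N)) shows how quickly the unpartitioned stages dominate as P grows, which is what motivates a parallel partitioner such as JOSTLE.

```python
# Amdahl-style model (illustrative): solver work scales as O(N/P) while mesh
# decomposition and file I/O stay O(N); the O(N) share caps the speedup.
def speedup(P, serial_fraction):
    s = serial_fraction                 # fraction of work that stays O(N)
    return 1.0 / (s + (1.0 - s) / P)

for P in (4, 16, 64, 256, 1024):
    print(f"P = {P:5d}: speedup = {speedup(P, 0.02):7.1f}")
```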
Abstract:
This paper presents our work on decomposing a specific nurse rostering problem by cyclically assigning blocks of shifts, which are designed considering both hard and soft constraints, to groups of nurses. The rest of the shifts are then assigned to the nurses to construct a schedule based on the one cyclically generated by blocks. The schedules obtained by decomposition and construction can be further improved by a variable neighborhood search. Significant results are obtained and compared with a genetic algorithm and a variable neighborhood search approach on a problem that was presented to us by our collaborator, ORTEC bv, The Netherlands. We believe that the approach has the potential to be further extended to solve a wider range of nurse rostering problems.
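As a toy illustration of the cyclic stage (our sketch, with made-up shift blocks and groups rather than the real ORTEC constraints), each nurse group starts one position later in a fixed cycle of blocks and advances one block per week; the remaining shifts would then be assigned on top of this skeleton and improved by local search.

```python
# Hypothetical weekly shift blocks (E = early, L = late, N = night, D = day,
# '-' = off); real blocks are designed against hard and soft constraints.
BLOCKS = ["EEE--", "LLNN-", "NN---", "-DDDD"]
GROUPS = ["G1", "G2", "G3", "G4"]

def cyclic_roster(weeks):
    roster = {}
    for g, group in enumerate(GROUPS):
        # group g starts at offset g in the cycle and advances weekly
        roster[group] = [BLOCKS[(g + w) % len(BLOCKS)] for w in range(weeks)]
    return roster

for group, plan in cyclic_roster(4).items():
    print(group, " ".join(plan))
```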
A class of domain decomposition preconditioners for hp-discontinuous Galerkin finite element methods
Abstract:
In this article we address the question of efficiently solving the algebraic linear system of equations arising from the discretization of a symmetric, elliptic boundary value problem using hp-version discontinuous Galerkin finite element methods. In particular, we introduce a class of domain decomposition preconditioners based on the Schwarz framework, and prove bounds on the condition number of the resulting iteration operators. Numerical results confirming the theoretical estimates are also presented.
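As a concrete, much simplified instance of the Schwarz framework, the sketch below applies a one-level additive Schwarz preconditioner with non-overlapping subdomains (i.e. block Jacobi) to a 1D finite-difference model problem inside preconditioned CG. The paper's hp-DG discretization, overlapping subdomains and condition-number bounds are beyond this toy.

```python
# One-level additive Schwarz with non-overlapping subdomains (= block Jacobi)
# for a 1D Poisson matrix, used as a preconditioner in CG. Illustrative only.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n, nsub = 256, 8
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
m = n // nsub
local_inv = [np.linalg.inv(A[i*m:(i+1)*m, i*m:(i+1)*m].toarray())
             for i in range(nsub)]

def apply_schwarz(r):
    # sum of subdomain solves: M^{-1} r = sum_i R_i^T A_i^{-1} R_i r
    z = np.empty_like(r)
    for i, Ainv in enumerate(local_inv):
        z[i*m:(i+1)*m] = Ainv @ r[i*m:(i+1)*m]
    return z

M = LinearOperator((n, n), matvec=apply_schwarz)
x, info = cg(A, np.ones(n), M=M)
print("CG converged" if info == 0 else f"info = {info}")
```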
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. The uncertainty in part of the problem data is modeled by random vectors with finite, stage-specific supports, and each realization represents a scenario. Using scenarios, one can study simpler versions (subproblems) of the original problem. Among scenario decomposition techniques, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programs. Despite the complete decomposition by scenario, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multistage linear stochastic problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, our intuition is that it will reduce certain numerical and theoretical difficulties of progressive hedging.
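A toy progressive hedging loop (our sketch, with made-up scenario data) shows the structure the abstract refers to: scenario subproblems are solved with multiplier and penalty terms, a consensus value is averaged, and the multipliers are updated. Here the subproblems min_x ½(x − t_s)² admit closed-form solves; ρ is the penalty parameter whose tuning the thesis studies.

```python
# Progressive hedging on a toy problem (illustrative): scenario subproblems
# are min_x 0.5*(x - t_s)^2; nonanticipativity forces a single consensus x.
probs   = [0.2, 0.5, 0.3]        # hypothetical scenario probabilities
targets = [1.0, 2.0, 4.0]        # hypothetical scenario data t_s
rho = 1.0                        # penalty parameter -- the tuning knob

w, xbar = [0.0] * len(probs), 0.0
for _ in range(100):
    # closed-form argmin of 0.5*(x - t)^2 + w*x + (rho/2)*(x - xbar)^2
    xs = [(t - wi + rho * xbar) / (1.0 + rho) for t, wi in zip(targets, w)]
    xbar = sum(p * x for p, x in zip(probs, xs))
    w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]
print(f"consensus x = {xbar:.4f}")   # tends to the weighted mean, 2.4
```

Too small a ρ makes consensus slow and too large a ρ stalls progress on the true objective, which is the sensitivity the adaptive strategy addresses.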
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD that adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality without explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain-reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel-based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
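A skeletal Benders loop (our sketch, far simpler than the CD/ECD schemes above; the problem data are made up and the subproblem dual is available in closed form) illustrates the bounding pattern of alternating master and subproblem solves that these decomposition methods build on:

```python
# Skeletal Benders decomposition (illustrative): two-stage problem
# min c*x + sum_s p_s * q * max(0, d_s - x), x >= 0, with exact cuts
# theta_s >= lam*(d_s - x) from the closed-form subproblem dual lam.
from scipy.optimize import linprog

c, q = 1.0, 3.0
probs, demand = [0.3, 0.4, 0.3], [2.0, 5.0, 8.0]
cuts = []                                    # list of (scenario, lam, d)

for it in range(20):
    # master: min c*x + sum_s p_s*theta_s over vars [x, theta_0..theta_2]
    obj = [c] + probs
    A_ub, b_ub = [], []
    for s, lam, d in cuts:                   # -lam*x - theta_s <= -lam*d
        row = [0.0] * 4
        row[0], row[1 + s] = -lam, -1.0
        A_ub.append(row)
        b_ub.append(-lam * d)
    res = linprog(obj, A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, None)] * 4)
    x, lower = res.x[0], res.fun
    # subproblems: Q_s(x) = q*max(0, d_s - x), dual lam = q if d_s > x else 0
    upper = c * x + sum(p * q * max(0.0, d - x) for p, d in zip(probs, demand))
    if upper - lower < 1e-6:
        break                                # bounds met: optimal
    for s, d in enumerate(demand):
        cuts.append((s, q if d > x else 0.0, d))
print(f"x* = {x:.3f}, cost = {upper:.3f}")
```

The restricted masters supply lower bounds and the subproblem evaluations supply upper bounds, the same roles the BD and DWD components play in the cross decomposition framework.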
Abstract:
The main contribution of this thesis is the proposal of novel strategies for selecting the parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly reduced dose of X-rays in Computed Tomography (CT), and the consequent need for advanced reconstruction techniques due to the high level of noise in the data, we focus on parameter selection principles especially for low photon counts, i.e. low-dose Computed Tomography. For completeness, since such strategies can be adopted in various scenarios where the noise in the data typically follows a Poisson distribution, we also show their performance in other applications such as photography, astronomical and microscopy imaging. More specifically, in the first part of the thesis we focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches are shown to outperform the state-of-the-art competitors, especially in the low-count regime. Moreover, we extend the best-performing strategy to the hard task of multi-parameter selection, with promising results. Finally, in the last part of the thesis, we introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways according to the specific energy. We conduct a preliminary comparative study aimed at accurate material decomposition starting from a few noisy projections.
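One classical baseline among such selection rules (our sketch on a 1D denoising toy; the thesis develops more refined criteria) is the discrepancy principle for Poisson data: choose the regularisation strength at which twice the Kullback-Leibler divergence between the data and the estimate matches the number of samples.

```python
# Poisson discrepancy principle (illustrative): pick the smoothing strength
# sigma at which 2*KL(data, estimate) is closest to the sample count.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
truth = 50 + 40 * np.sin(np.linspace(0.0, 4 * np.pi, 512))
y = rng.poisson(truth).astype(float)

def kl_discrepancy(u):
    # 2 * sum_i [ y_i log(y_i/u_i) - y_i + u_i ], with the y_i = 0 convention
    u = np.maximum(u, 1e-9)
    m = y > 0
    return (2 * np.sum(y[m] * np.log(y[m] / u[m]) - y[m] + u[m])
            + 2 * np.sum(u[~m]))

best = min((abs(kl_discrepancy(gaussian_filter1d(y, s)) - y.size), s)
           for s in np.linspace(0.1, 20.0, 60))
print(f"selected smoothing parameter: sigma = {best[1]:.2f}")
```

Here the "regularisation parameter" is just a Gaussian smoothing width; in the variational models of the thesis it would weight the regulariser against the Kullback-Leibler data-fidelity term.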