968 results for Solution-process


Relevance:

60.00%

Abstract:

In the present paper we concentrate on solving sequences of nonsymmetric linear systems with block structure arising from compressible flow problems. We attempt to improve the solution process by sharing part of the computational effort throughout the sequence. This is achieved by applying a cheap updating technique for preconditioners, which we have adapted for our applications; a generic version of the idea is sketched below. Tested on three benchmark compressible flow problems, the strategy speeds up the entire computation, with the acceleration being particularly pronounced in phases of unsteady behavior.
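
The following is a minimal SciPy sketch of sharing preconditioner work across a sequence of systems. It uses a simple "freeze and refresh" ILU strategy rather than the paper's cheap update formula; `solve_sequence` and `refresh_threshold` are illustrative names, not the authors' code:

```python
import scipy.sparse.linalg as spla

def solve_sequence(matrices, rhss, refresh_threshold=50):
    """Solve A_k x = b_k for a slowly varying sequence of sparse systems,
    reusing one ILU preconditioner until GMRES iteration counts degrade.
    Illustrative stand-in for a cheap preconditioner-updating technique."""
    ilu = spla.spilu(matrices[0].tocsc())      # factor the first matrix once
    solutions = []
    for A, b in zip(matrices, rhss):
        iters = []
        M = spla.LinearOperator(A.shape, ilu.solve)
        x, info = spla.gmres(A, b, M=M, callback=iters.append)
        if info != 0 or len(iters) > refresh_threshold:  # frozen factors too stale
            ilu = spla.spilu(A.tocsc())                  # refresh and re-solve
            M = spla.LinearOperator(A.shape, ilu.solve)
            x, info = spla.gmres(A, b, M=M)
        solutions.append(x)
    return solutions
```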

Relevance:

60.00%

Abstract:

Reactions of the 1:2 condensate (L) of benzil dihydrazone and 2-acetylpyridine with Hg(ClO4)2·xH2O and HgI2 yield yellow [HgL2](ClO4)2 (1) and HgLI2 (2), respectively. Homoleptic 1 is an 8-coordinate double helical complex with a Hg(II)N8 core, crystallising in the space group Pbca with cell dimensions a = 16.2250(3), b = 20.9563(7), c = 31.9886(11) Å. Complex 2 is a 4-coordinate single helical complex with a Hg(II)N2I2 core, crystallising in the space group P21/n with cell dimensions a = 9.8011(3), b = 17.6736(6), c = 16.7123(6) Å and β = 95.760(3)°. In complex 1, the N-donor ligand L uses all of its binding sites to act as a tetradentate ligand. In 2, by contrast, it acts as a bidentate N-donor ligand, giving rise to a dangling part. Variable-temperature ¹H NMR studies show that both complexes are stereochemically non-rigid in solution. In the case of 2, the solution process involves wrapping of the dangling part of L around the metal. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Abstract:

We present a Galerkin method with continuous piecewise polynomial elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer, 2011, which allows us to work directly with the strong form of a linear PDE. An added benefit of this discretization is that a recovered (finite element) Hessian is a byproduct of the solution process; its defining relation is recalled below. We build on the linear method and ultimately construct two different methodologies for the solution of second order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems as well as the Monge–Ampère equation and the Pucci equation.
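
For reference, the recovered Hessian is defined weakly, componentwise, by integration by parts against the finite element test space. This is a sketch in generic notation (V_h the continuous finite element space, n the outward normal), as we recall the construction of Lakkis and Pryer:

```latex
\int_\Omega H[u_h]\,\varphi \,\mathrm{d}x
  \;=\; -\int_\Omega \nabla u_h \otimes \nabla\varphi \,\mathrm{d}x
  \;+\; \int_{\partial\Omega} \bigl(\nabla u_h \otimes \boldsymbol{n}\bigr)\,\varphi \,\mathrm{d}s
  \qquad \text{for all } \varphi \in V_h .
```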

Relevance:

60.00%

Abstract:

Four-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation (the latter is sketched below). The 4DVAR objective function is traditionally minimised using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to the input data; it can also indicate the rate of convergence and the solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters that compose the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the balance of error variances, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
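
For reference, the state estimation formulation of the wc4DVAR objective has the following generic form, a sketch in standard notation rather than necessarily the thesis's (x_b the background, B, R_i and Q_i the background, observation and model error covariances, H_i the observation operators, M_i the model); dropping the third term and enforcing x_{i+1} = M_i(x_i) recovers sc4DVAR:

```latex
J(x_0,\dots,x_N)
 \;=\; \tfrac12\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
 \;+\; \tfrac12 \sum_{i=0}^{N} \bigl(\mathcal{H}_i(x_i) - y_i\bigr)^{\mathsf T} R_i^{-1}\bigl(\mathcal{H}_i(x_i) - y_i\bigr)
 \;+\; \tfrac12 \sum_{i=1}^{N} \bigl(x_i - \mathcal{M}_{i-1}(x_{i-1})\bigr)^{\mathsf T} Q_i^{-1}\bigl(x_i - \mathcal{M}_{i-1}(x_{i-1})\bigr).
```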

Relevance:

60.00%

Abstract:

Pb1-xCaxTiO3 (0.10 ≤ x ≤ 0.40) thin films on Pt/Ti/SiO2/Si(100) substrates were prepared by the soft solution process and their characteristics were investigated as a function of the calcium content (x). The structural modifications in the films were studied using x-ray diffraction and micro-Raman scattering techniques. Lattice parameters calculated from the x-ray data indicate a decrease in lattice tetragonality with increasing calcium content in these films. Raman spectra exhibited the characteristic features of pure PbTiO3 thin films. Variations in the phonon mode wave numbers of Pb1-xCaxTiO3 thin films as a function of composition, especially those of the lower wave numbers, corroborate the decrease in tetragonality caused by the calcium doping. As the Ca content (x) increases from 0.10 to 0.40, the room-temperature dielectric constant at 1 kHz increases anomalously from 148 to 430. Calcium substitution also decreased the remanent polarization and coercive field, from 28.0 to 5.3 μC/cm² and from 124 to 58 kV/cm, respectively. These properties can be explained in terms of the shift of the ferroelectric-paraelectric phase transition resulting from the substitution of nonvolatile calcium on the lead site of PbTiO3. (C) 2002 American Institute of Physics.

Relevance:

60.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

60.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

60.00%

Abstract:

This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic that has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is ever more urgent. Metaheuristic techniques have been developed over the last decades to handle effectively the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows one to easily model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process while maintaining the full generality of the approach (the core constraint is sketched below). We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
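
A minimal sketch of the local branching constraint — restricting search to a Hamming ball around an incumbent 0/1 solution, as introduced for MIP by Fischetti and Lodi — written here against Google OR-Tools CP-SAT as a stand-in CP solver; the toy model, variable names and radius k are illustrative assumptions, not the thesis's implementation:

```python
from ortools.sat.python import cp_model

def add_local_branching(model, x, incumbent, k):
    """Constrain the Hamming distance between the Boolean variables x and
    an incumbent 0/1 assignment to at most k (the local branching cut)."""
    distance = sum(1 - xi if bar == 1 else xi for xi, bar in zip(x, incumbent))
    model.Add(distance <= k)

# Toy model: find an incumbent, then search only its k-neighbourhood.
model = cp_model.CpModel()
x = [model.NewBoolVar(f"x{i}") for i in range(8)]
model.Add(sum(x) >= 3)                                 # placeholder constraints
model.Maximize(sum(i * xi for i, xi in enumerate(x)))
solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    incumbent = [int(solver.Value(xi)) for xi in x]
    add_local_branching(model, x, incumbent, k=2)      # intensification step
    solver.Solve(model)                                # re-solve nearby
```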

Relevance:

60.00%

Abstract:

Decomposition-based approaches are recalled from the primal and dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated versus disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives, with different trade-offs between the number of iterations and the computation required to solve each master problem. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm (sketched below). In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is also addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
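
The following is a generic Frank-Wolfe loop, shown on the probability simplex so that it is self-contained; in the route assignment setting the linear minimization oracle would instead be the min-cost flow subproblem described above (the function names and toy objective are illustrative assumptions):

```python
import numpy as np

def frank_wolfe(grad, x0, n_iters=100):
    """Generic Frank-Wolfe iteration over the probability simplex.
    `grad` returns the gradient of the convex objective at x."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization oracle: best vertex
        gamma = 2.0 / (k + 2.0)          # classic diminishing step size
        x = (1 - gamma) * x + gamma * s  # move toward the oracle point
    return x

# Example: minimize ||x - c||^2 over the simplex.
c = np.array([0.1, 0.7, 0.2])
x = frank_wolfe(lambda x: 2 * (x - c), np.full(3, 1 / 3))
```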

Relevance:

60.00%

Abstract:

A dynamic environment requires the permanent adaptation of (intra-)logistics processes in order to maintain the performance and competitiveness of companies. The standardization of processes and cross-company process models are regarded as key factors for efficient business process management, particularly in networks. In practice, however, scientifically grounded and detailed reference process models for (intra-)logistics have been lacking. The research and development of a reference process model, together with the prototype implementation of a process workbench for the generic construction of adaptable process chains, contributes to process standardization in logistics. The solution path taken is presented in the following; it consisted, first, of the development of a metamodel for reference modelling, second, of the empirical demonstration that a reference process model can be generated from "atomic" elements, and third, of the evaluation of the model.

Relevance:

60.00%

Abstract:

Composition, grain-size distribution, and areal extent of Recent sediments from the Northern Adriatic Sea along the Istrian coast have been studied. Thirty-one stations in four sections perpendicular to the coast were investigated; for comparison, 58 samples from five small bays were also analyzed. Biogenic carbonate sediments are deposited on the shallow North Adriatic shelf off the Istrian coast. Only at a greater distance from the coast are these carbonate sediments mixed with siliceous material brought in by the Alpine rivers Po, Adige, and Brenta. Graphical analysis of the grain-size distribution curves shows a sediment composed of normally three, and only in the most seaward area four, major constituents. Constituent 1 represents the washed-in terrestrial material of clay size (Terra Rossa) from the Istrian coastal area. Constituent 2 consists of fine to medium sand. Constituent 3 contains the heterogeneous biogenic material. Crushing by organisms and by sediment eaters reduces the coarse biogenic material into small pieces, generating constituent 2; between these two constituents there is a dynamic equilibrium. Depending on where the equilibrium lies between the extremes of production and crushing, the resulting constituent 2 is finer or coarser. Constituent 4 is composed of the fine sandy material from the Alpine rivers. In the most seaward area, constituents 2 and 4 are mixed. The total carbonate content of the samples depends on the distance from the coast. In the near-coastal area, in high-energy environments, the carbonate content is about 80%. At a distance of 2 to 3 km from the coast there is a carbonate minimum, because of the higher rate of sedimentation of clay-sized terrestrial, noncarbonate material in extremely low-energy environments. In the area between 5 and 20 km off the coast, the carbonate content is about 75%. More than 20 km from the shore, the carbonate content diminishes rapidly to values of about 30% through mixing with siliceous material from the Alpine rivers. The carbonate content of the individual fractions increases with increasing grain size to a maximum of about 90% within the coarse sand fractions. Beyond 20 km from the coast, the samples show a carbonate minimum of about 13% within the sand-size classes from 1.5 to 0.7 ζ, through mixing with siliceous material from the Alpine rivers. By means of grain-size distribution and carbonate content, four sediment zones parallel to the coast were distinguished; genetically they are closely connected with the zonation of the benthic fauna. Two cores show a characteristic vertical distribution of the sediment. The surface zone is inversely graded, that is, the coarse fractions are at the top and the fine fractions at the bottom. This is the effect of crushing, by predatory organisms and by sediment eaters, of the biogenic material produced at the surface. It is proposed that at a depth of about 30 cm a chemical solution process begins which leads to diminution of the original sediment from a fine to medium sand to a silt. The carbonate content decreases from about 75% at the surface to 65% at a depth of 100 cm. The increase of the noncarbonate components by 10% corresponds to a decrease in the initial amount of sediment (CaCO3 = 75%) of roughly 30% through solution. With increasing depth the carbonate content of the individual fractions becomes more and more uniform: at the surface it varies from 30% to 90%, at the bottom only between 50% and 75%.
Comparable investigations of the small-bay sediments showed a clear dependence of the sediment and faunal zonation on the energy of the environment. The investigations show that the composition and three-dimensional distribution of the Istrian coastal sediments cannot be predicted from only one or a few measurable factors. Sedimentation and syngenetic changes must be considered as a complex interaction between external factors and the actions of producing and destroying organisms that are in dynamic equilibrium. The results obtained from these Recent sediments may therefore be of value for interpreting fossil sediments only with strong limitations.

Relevance:

60.00%

Abstract:

In this paper we show the possibility of applying adaptive procedures as an alternative to the well-known philosophy of standard Boundary Elements. The three characteristic steps of adaptive procedures, i.e. hierarchical shape function families, indicator criteria, and a posteriori estimation, can be defined so as to govern the automatic refinement and stopping of the solution process (a generic version of this loop is sketched below). A computer program to treat potential problems, called QUEIMADA, has been developed to show the capabilities of the new idea.
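
A self-contained toy version of the solve-estimate-mark-refine loop that such procedures automate, shown here on 1-D piecewise-linear interpolation rather than on a boundary element problem (the target function and marking fraction are illustrative assumptions):

```python
import numpy as np

def adaptive_refine(f, tol=1e-3, max_steps=30):
    """Generic adaptive loop: estimate local errors, mark the worst
    elements, refine, and stop automatically once the indicators are small."""
    nodes = np.array([0.0, 1.0])
    for _ in range(max_steps):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        # a posteriori indicator: interpolation error at element midpoints
        eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        if eta.max() < tol:
            break                              # automatic stopping criterion
        marked = eta >= 0.5 * eta.max()        # maximum-type marking strategy
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))
    return nodes

grid = adaptive_refine(lambda x: np.sqrt(x))   # grades toward the singularity at 0
```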

Relevance:

60.00%

Abstract:

The efficient set in Multicriteria Decision Theory plays a fundamental role in the solution process, since the Decision Maker's most preferred choice should lie in this set. However, the computation of that set may be difficult, especially in continuous and/or nonlinear problems. Chapter one introduces Multicriteria Decision-Making; we review the basic concepts and tools used in later developments. Chapter two studies decision-making problems under certainty. The basic tool and starting point is the vector value function, which represents imprecision in the Decision Maker's preferences. We propose a characterization of the value efficient set and different approximations with nesting and convergence properties; several interactive solution algorithms complement the theoretical results. Chapter three is devoted to problems under uncertainty. Its development is partially parallel to the former and uses vector utility functions to model the Decision Maker's preferences. Starting from simple distributions, we introduce utility efficiency, its characterization and approximations, which we then extend to discrete and continuous classes of distributions. Chapter four studies the problem under fuzziness, at an introductory level. We conclude by suggesting several open problems.
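
For reference, the efficiency notion at the heart of the memoir is the standard Pareto one — a sketch in generic notation (values v_1, ..., v_m to be maximised over the feasible set X; the thesis states it via value and utility functions):

```latex
E(X) \;=\; \bigl\{\, x^\ast \in X \;:\; \nexists\, x \in X \text{ such that }
  v_i(x) \ge v_i(x^\ast)\ \text{for all } i
  \text{ and } v_j(x) > v_j(x^\ast)\ \text{for some } j \,\bigr\}.
```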

Relevance:

60.00%

Abstract:

Time-harmonic methods are required for the accurate design of RF coils as the operating frequency increases. This paper presents such a method to find a current density solution on the coil that will induce a desired magnetic field upon an asymmetrically located target region within it. This inverse method appropriately considers the geometry of the coil via a Fourier series expansion, and incorporates some new regularization penalty functions in the solution process (the generic shape of such a regularised problem is recalled below). A new technique is introduced by which the complex, time-dependent current density solution is approximated by a static coil winding pattern. Several winding pattern solutions are given, with more complex winding patterns corresponding to more desirable induced magnetic fields.
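
Regularised coil design problems of this kind typically take the following generic form — a sketch, not the paper's exact formulation (c the Fourier coefficients of the current density, A the map from current modes to the field on the target region, P_k the penalty functions with weights λ_k):

```latex
\min_{c}\; \bigl\|\, A\,c - \mathbf{b}_{\mathrm{target}} \bigr\|^{2}
  \;+\; \sum_{k} \lambda_k \, P_k(c).
```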

Relevance:

60.00%

Abstract:

This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a variant of the common solver techniques used to obtain numerical solutions of the algebraic equations associated with fire field modelling; its purpose is to reduce the computational overheads associated with the traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing (a toy version of such a rule appears below). Initial results for a two-dimensional fire simulation are presented which demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability, due to acceptable convergence being obtained within each time step, unlike in some of the comparison simulations.
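
A minimal sketch of what a single relaxation-control production rule might look like; this is illustrative only, and `adjust_relaxation`, its thresholds and the mock residual trace are assumptions — SMARTFIRE's actual rule base and blackboard architecture are not reproduced here:

```python
def adjust_relaxation(alpha, residuals, grow=1.05, shrink=0.7):
    """Toy production rule for dynamic solution control: relax more
    aggressively while residuals fall, back off when they grow."""
    if len(residuals) >= 2 and residuals[-1] > residuals[-2]:
        return max(0.1, alpha * shrink)   # residual grew: damp the update
    return min(1.0, alpha * grow)         # residual fell: speed up

alpha, history = 0.5, []
for res in [1.0, 0.6, 0.4, 0.5, 0.3]:     # mock residual trace
    history.append(res)
    alpha = adjust_relaxation(alpha, history)
```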