973 results for effective linear solver


Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Abstract:

The conventional implementation of the 3D finite-difference migration method uses the inline and crossline splitting technique to improve the computational efficiency of the algorithm. This approach makes the algorithm computationally efficient, but it introduces numerical anisotropy, which in turn can misposition dipping reflectors, especially reflectors with steep dip angles. In this work, in order to avoid numerical anisotropy, we implement the downward wavefield extrapolation operator without the inline and crossline splitting technique, in the frequency-space domain, via an implicit finite-difference method using the complex Padé approximation. We compare the performance of the iterative stabilized biconjugate gradient algorithm (Bi-CGSTAB) with that of the multifrontal massively parallel solver (MUMPS) for solving the linear system arising from the finite-difference migration method. We verify that, by using the complex Padé expansion instead of the real Padé expansion, the iterative Bi-CGSTAB algorithm becomes more computationally efficient; in other words, the complex Padé expansion acts as a preconditioner for this iterative algorithm. As a consequence, the iterative Bi-CGSTAB algorithm is considerably more efficient than MUMPS for solving the linear system when only one term of the complex Padé expansion is used. For wide-angle approximations, direct methods are required. To validate and evaluate the properties of these migration algorithms, we compute their impulse response on the SEG/EAGE salt model.
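The abstract above compares the iterative Bi-CGSTAB solver against the direct solver MUMPS for complex-valued linear systems. As a minimal, self-contained sketch of the iterative side, here is an unpreconditioned BiCGSTAB (van der Vorst's formulation) in NumPy applied to a small well-conditioned complex system; the matrix, sizes and tolerances are illustrative stand-ins, not the frequency-domain migration operator of the thesis.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=200):
    """Unpreconditioned BiCGSTAB for a (possibly complex) square system.

    Returns (x, iterations). A minimal sketch: no preconditioner, no
    breakdown safeguards.
    """
    n = b.shape[0]
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    r = b - A @ x
    r_hat = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0 + 0j
    v = p = np.zeros(n, dtype=complex)
    for it in range(maxiter):
        rho_new = np.vdot(r_hat, r)       # conjugated inner product
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / np.vdot(r_hat, v)
        s = r - alpha * v
        t = A @ s
        omega = np.vdot(t, s) / np.vdot(t, t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
    return x, maxiter

# Diagonally dominant complex test system (an illustrative stand-in for
# the banded frequency-space migration matrix).
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A += n * np.eye(n)                        # make the system well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x, iters = bicgstab(A, b)
print(iters, np.linalg.norm(A @ x - b))
```

For the badly conditioned systems arising from higher-order Padé terms, a direct factorization (the role MUMPS plays in the abstract) is typically preferred over such an iteration.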

Abstract:

Implementations of finite-difference and Fourier finite-difference (FFD) migration methods use directional splitting to speed up execution and save computational cost. However, this technique introduces numerical anisotropy, which can misposition dipping reflectors along the directions in which splitting was not applied in the migration operator. We implement 3D FFD migration without the directional-splitting technique, in the frequency domain, using the complex Padé approximation. This approximation eliminates the numerical anisotropy at the price of a higher computational cost: the wavefield must be obtained by solving a wide-banded linear system. Numerical experiments, on both homogeneous and heterogeneous models, show that directional splitting produces noticeable reflector-positioning errors in media with strong lateral velocity variation. We compare the performance of the FFD algorithm using the iterative stabilized biconjugate gradient method (BICGSTAB) and the multifrontal massively parallel direct solver (MUMPS), showing that the complex Padé approximation is an efficient preconditioner for BICGSTAB, reducing the number of iterations relative to the real Padé approximation. The iterative BICGSTAB method is more efficient than the direct MUMPS method when only one term of the complex Padé expansion is used. For wider operator aperture angles, more terms of the series are required in the migration operator, and in that case the direct method performs better. The validation of the algorithm and its computational properties were evaluated on the impulse response of the SEG/EAGE salt model.

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:

Erosion is a natural process of detachment, transport and deposition of soil and rock particles from one place to another. Human activities carried out without prior planning may accelerate this process, causing considerable damage to the environment and to society. To control the human-induced acceleration of erosion, prevention and remediation initiatives have emerged. For works that interfere directly with natural resources, these initiatives must respect the intrinsic physical properties of the area of interest if they are to achieve effective results. Against this background, this work proposes methods for the prevention, control and recovery of accelerated linear erosion in a specific area of the municipality of Ipeúna (SP). To that end, the study is based on a physiographic compartmentalization of the area, considering and integrating the soil, relief, geology and land use and land cover properties of the study area. In addition, a flowchart with general guidelines for the management of eroded areas was produced, focused on the control and recovery of linear erosion. The results demonstrate the importance of careful erosion control that respects the physical properties of each physiographic unit. Vegetative and mechanical conservation methods, together with the disciplining of water flow, found wide applicability in the study area.

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Abstract:

Mixed integer programming is to this day one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers are published every year and new intriguing questions continually arise. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a somewhat structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insight into both of the situations mentioned above. The first part of the work focuses on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas on multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts from the simplex tableau, based on lattice-free triangles, together with some preliminary computational results. The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation with a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow those that have proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the use of general-purpose cutting planes) can help improve on branch-and-cut methods proposed in the literature.
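The abstract's recurring ingredient is an LP relaxation used to bound an integer program inside branch-and-bound or branch-and-cut. As a toy, self-contained illustration of relaxation-based bounding (not the author's algorithm), the 0/1 knapsack is convenient because its LP relaxation has a closed-form greedy optimum, so no LP solver is needed:

```python
def lp_bound(items, cap):
    """Value of the fractional (Dantzig) relaxation: fill the remaining
    capacity greedily by value/weight ratio; items must be ratio-sorted."""
    total = 0.0
    for value, weight in items:
        if weight <= cap:
            cap -= weight
            total += value
        else:
            total += value * cap / weight   # take a fractional piece
            break
    return total

def branch_and_bound(items, cap):
    """Branch on x_i in {1, 0}; prune a node when its relaxation bound
    cannot beat the incumbent. items is a list of (value, weight)."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def recurse(i, cap, value):
        nonlocal best
        if value > best:
            best = value                    # new incumbent
        if i == len(items) or cap == 0:
            return
        if value + lp_bound(items[i:], cap) <= best:
            return                          # prune: bound cannot beat incumbent
        v, w = items[i]
        if w <= cap:
            recurse(i + 1, cap - w, value + v)   # branch x_i = 1
        recurse(i + 1, cap, value)               # branch x_i = 0

    recurse(0, cap, 0)
    return best

items = [(60, 10), (100, 20), (120, 30)]    # (value, weight)
print(branch_and_bound(items, 50))          # classic instance, optimum 220
```

A real MIP solver replaces the greedy bound by a simplex solve of the LP relaxation and strengthens that relaxation with cutting planes, but the prune-when-bound-cannot-beat-incumbent logic is the same.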

Abstract:

The collapse of linear polyelectrolyte chains in a poor solvent: when does a collapsing polyelectrolyte collect its counterions? The collapse of polyions in a poor solvent is a complex process and an active research subject in the theoretical polyelectrolyte community. The complexity is due to the subtle interplay between hydrophobic effects, electrostatic interactions, entropy elasticity, intrinsic excluded volume, and specific counterion and co-ion properties. Long-range Coulomb forces can obscure single-molecule properties. The approach presented here is to use just a small amount of screening salt in combination with very high sample dilution, in order to screen intermolecular interactions while preserving intramolecular interactions as much as possible (polyelectrolyte concentration cp ≤ 12 mg/L, salt concentration Cs = 10^-5 mol/L). This approach has not previously been described in the literature. During collapse, the polyion undergoes a drastic change in size along with a strong reduction of free counterions in solution. Light scattering was therefore used to obtain the size of the polyion, while a conductivity setup was developed to monitor the progress of counterion collection by the polyion. Partially quaternized PVPs below and above the Manning limit were investigated and compared with the collapse of their uncharged precursor. The collapses were induced by an isorefractive solvent/non-solvent mixture consisting of 1-propanol and 2-pentanone, with nearly constant dielectric constant. The solvent quality for the uncharged polyion could be quantified, which, for the first time, allowed the experimental investigation of the effect of electrostatic interaction prior to and during polyion collapse. Given that the Manning parameter M for QPVP4.3 is as low as lB / c = 0.6 (with lB the Bjerrum length and c the mean contour distance between two charges), no counterion binding should occur.
However, the Walden product decreases upon the first addition of non-solvent, and the decrease accelerates when the structural collapse sets in. Since the dielectric constant of the solvent remains virtually constant during the chain collapse, the counterion binding is caused entirely by the reduction in the polyion chain dimensions. The collapse is shifted to lower wns with higher degrees of quaternization, as the samples QPVP20 and QPVP35 show (M = 2.8 and 4.9, respectively). The combination of light scattering and conductivity measurements revealed for the first time that polyion chains already collect their counterions well above the theta-dimension, as soon as the dimensions start to shrink. Because only small amounts of screening salt are present, strong electrostatic interactions bias both dynamic and static light scattering measurements. An extended Zimm formula was derived to account for this interaction and to obtain the real chain dimensions. The effective degree of dissociation g could be obtained semi-quantitatively by combining these extrapolated static light scattering data with the conductivity measurements. One can conclude that the expansion factor a and the effective degree of ionization of the polyion are mutually dependent. In the good solvent regime, g for QPVP4.3, QPVP20 and QPVP35 decreased in the order 1 > g4.3 > g20 > g35. The low values of g for QPVP20 and QPVP35 are assumed to be responsible for the earlier collapse of the more highly quaternized samples. Collapse theory predicts dipole-dipole attraction to increase accordingly, and even predicts a collapse in the good solvent regime; exactly this was observed for the QPVP35 sample. The experimental results were compared with a theory of uniform spherical collapse induced by concomitant counterion binding, newly developed by M. Muthukumar and A. Kundagrami. The theory agrees qualitatively with the location of the phase boundary as well as with the trend of increasing expansion with increasing degree of quaternization. However, the experimentally determined g for the samples QPVP4.3, QPVP20 and QPVP35 decreases linearly with the degree of quaternization, whereas the theory predicts an almost constant value.
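The Manning parameter quoted above is M = lB / c, the ratio of the Bjerrum length to the mean contour distance between charges; classical counterion condensation is expected only for M > 1. A small sketch of the calculation follows, using a water-like dielectric constant for illustration (the thesis' propanol/pentanone mixture has its own dielectric constant, not reproduced here, and the charge spacings below are hypothetical):

```python
import math

# Physical constants (SI)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kB = 1.380649e-23          # Boltzmann constant, J/K

def bjerrum_length(eps_r, T=298.15):
    """Bjerrum length l_B = e^2 / (4 pi eps0 eps_r kB T), in metres."""
    return e**2 / (4 * math.pi * eps0 * eps_r * kB * T)

def manning_parameter(eps_r, charge_spacing_nm, T=298.15):
    """M = l_B / c; condensation is classically expected for M > 1."""
    return bjerrum_length(eps_r, T) / (charge_spacing_nm * 1e-9)

# Water at room temperature: l_B is about 0.71 nm.
print(f"l_B = {bjerrum_length(78.5) * 1e9:.2f} nm")

# Hypothetical charge spacings mimicking low vs high quaternization:
print(manning_parameter(78.5, 1.2))    # widely spaced charges, M < 1
print(manning_parameter(78.5, 0.25))   # densely spaced charges, M > 1
```

The experimental point of the abstract is precisely that binding was observed even for M = 0.6 < 1, i.e., below where this classical criterion predicts it.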

Abstract:

A Thermodynamic Bethe Ansatz analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct, we obtain a novel class of TBA models which fits into the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. Knowledge of these physical quantities has made it possible to conjecture a perturbed CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow; they can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potentially catastrophic consequences. We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is, in a certain sense, the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
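The core idea above, exact arithmetic only at critical places to certify an answer computed in floats, can be sketched in a few lines. This is an illustrative exact re-check of A x <= b using Python's `fractions.Fraction` (whose construction from a float is exact), not the authors' implementation:

```python
from fractions import Fraction

def certify_feasible(A, b, x):
    """Exactly verify A x <= b, row by row, in rational arithmetic.

    A float LP solver's candidate point can be wrong by rounding; one
    exact pass over the candidate yields a rigorous certificate without
    an exact solve. Fraction(float) is exact, so no rounding can occur
    inside the check itself.
    """
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * Fraction(xj) for a, xj in zip(row, x))
        if lhs > Fraction(bi):
            return False
    return True

# A point that looks feasible in floats but exactly violates a constraint:
A = [[1.0, 1.0]]
b = [0.3]
x = [0.1, 0.2]        # in binary floating point, 0.1 + 0.2 > 0.3 exactly
print(certify_feasible(A, b, x))          # the exact check catches it

x_interior = [0.1, 0.19]                  # strictly inside the feasible set
print(certify_feasible(A, b, x_interior))
```

Checking a strictly interior point, as the method described above does, makes such a certificate robust: a point with slack in every row remains feasible under small perturbations of the data.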

Abstract:

The most important property controlling the physicochemical behaviour of polyelectrolytes and their applicability in different fields is the charge density on the macromolecular chain. A polyelectrolyte molecule in solution may have an effective charge density that is smaller than the actual charge density determined from its chemical structure. In the present work, an attempt is made to quantitatively determine this effective charge density for a model polyelectrolyte using light scattering techniques. Flexible linear polyelectrolytes with a poly(2-vinylpyridine) (2-PVP) backbone are used in this study. The polyelectrolytes are synthesized by quaternizing the pyridine groups of 2-PVP with ethyl bromide to different degrees of quaternization. The effects of molar mass, degree of quaternization and solvent polarity on the effective charge are studied. The results show that the effective charge does not vary much with the polymer molar mass or the degree of quaternization, but a significant increase in the effective charge is observed when the solvent polarity is increased. The results do not obey the counterion condensation theory proposed by Manning. Based on the very low effective charges determined in this study, a new mechanism for counterion condensation, arising from a specific polyelectrolyte-counterion interaction, is proposed.

Abstract:

In this thesis, functional renormalization group techniques are applied to the study of scalar quantum field theory with O(N) symmetry, both in flat (Euclidean) spacetime and in the case of coupling to a gravitational field within the asymptotic safety paradigm. The first chapter briefly reviews some basic concepts of field theory in a Euclidean space of arbitrary dimension. The second chapter discusses extensively the functional renormalization method devised by Wetterich and provides a first simple example of its application, the scalar model. The third chapter studies the O(N) model in flat spacetime in detail, deriving analytically the evolution equations for the relevant quantities of the model, and then specializes to the large-N limit. The fourth chapter begins the analysis of the fixed-point equations in the large-N limit, starting from the case of vanishing anomalous dimension and constant wave-function renormalization (the LPA approximation), already studied in the literature; the NLO case of the derivative expansion is then considered. The fifth chapter introduces the non-minimal coupling to a gravitational field, whose quantum nature is treated at the QFT level according to the asymptotic-safety renormalizability paradigm. For this model, the fixed-point equations for the main observables are derived and their behaviour is studied for different values of N.

Abstract:

The self-regeneration capacity of articular cartilage is limited, due to its avascular and aneural nature. Loaded explants and cell cultures have demonstrated that chondrocyte metabolism can be regulated via physiological loading. However, the explicit ranges of mechanical stimuli that correspond to a favourable metabolic response, associated with extracellular matrix (ECM) synthesis, remain elusive; unsystematic protocols lacking this knowledge produce inconsistent results. This study aims to determine the intrinsic ranges of physical stimuli that increase ECM synthesis and simultaneously inhibit nitric oxide (NO) production in chondrocyte-agarose constructs, by numerically re-evaluating the experiments performed by Tsuang et al. (2008). Twelve loading patterns were simulated with poroelastic finite element models in ABAQUS. Pressure on the solid matrix, von Mises stress, maximum principal stress and pore pressure were selected as intrinsic mechanical stimuli. Their development rates and magnitudes at the steady state of cyclic loading were calculated with MATLAB at the construct level. A concurrent increase in glycosaminoglycan and collagen was observed at a pressure of 2300 Pa and a pressure rate of 40 Pa/s. Between 0-1500 Pa and 0-40 Pa/s, NO production was consistently positive with respect to controls, whereas ECM synthesis was negative in the same range. A linear correlation was found between pressure rate and NO production (R = 0.77). The stress states identified in this study are generic and could be used to develop predictive algorithms for matrix production in agarose-chondrocyte constructs of arbitrary shape, size and agarose concentration. They could also help increase the efficacy of loading protocols for avascular tissue engineering. Copyright (c) 2010 John Wiley & Sons, Ltd.
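The reported R = 0.77 is a Pearson correlation coefficient between pressure rate and NO production. For reference, it is computed as the covariance of the two samples normalized by the product of their standard deviations; the data below are hypothetical, not taken from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pressure-rate (Pa/s) vs NO-production pairs, for illustration:
rates = [0, 10, 20, 30, 40]
no_prod = [0.1, 0.3, 0.2, 0.5, 0.6]
print(round(pearson_r(rates, no_prod), 2))
```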