969 results for Decomposition method.


Relevance: 60.00%

Abstract:

Graduate program in Computer Science - IBILCE

Relevance: 60.00%

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying, in part, graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularized, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. Using a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces, which disentangles the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in the Laurent coefficients of certain dimensionally regularized Feynman integrals. Using the extended sector decomposition algorithm, we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
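To make the matrix-tree construction concrete, the following sympy sketch (ours, for illustration; it is not the thesis' code) builds the first Symanzik polynomial from a cofactor of the edge-weighted graph Laplacian and then dualizes it; the function and variable names are chosen freely:

```python
import sympy as sp

def first_symanzik(num_vertices, edges):
    """First Symanzik polynomial U of a graph via the matrix-tree theorem.
    edges: list of (u, v) vertex pairs, one Feynman parameter x_e per edge."""
    xs = sp.symbols(f'x1:{len(edges) + 1}')
    L = sp.zeros(num_vertices, num_vertices)   # edge-weighted graph Laplacian
    for x_e, (u, v) in zip(xs, edges):
        L[u, u] += x_e
        L[v, v] += x_e
        L[u, v] -= x_e
        L[v, u] -= x_e
    # Matrix-tree theorem: any cofactor of L gives the Kirchhoff polynomial
    # K(x) = sum over spanning trees T of prod_{e in T} x_e.
    kirchhoff = L[1:, 1:].det()
    # U(x) = sum over spanning trees of prod_{e NOT in T} x_e
    #      = (prod_e x_e) * K(1/x1, ..., 1/xn)
    U = sp.prod(xs) * kirchhoff.subs({x: 1 / x for x in xs}, simultaneous=True)
    return sp.expand(sp.cancel(U))

# One-loop bubble: two vertices joined by two edges -> U = x1 + x2
print(first_symanzik(2, [(0, 1), (0, 1)]))
```

For the one-loop bubble the sketch returns x1 + x2, the expected sum over the Feynman parameters of the edges not contained in each spanning tree.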

Relevance: 60.00%

Abstract:

García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main theoretical motivation is that, under some circumstances, finite convergence can be achieved in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. Such an algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group investigates the role and the computation of the prolongation of the generated columns to the relative boundary. The last group carries out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. For this investigation we consider two types of test problems: the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and a combined traffic assignment model.
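For orientation, the following Python sketch shows the classic simplicial decomposition loop with linear (LP-generated) columns, the scheme whose subproblems the paper's nonlinear CG variants generalize; the solver choices, names and toy instance are ours, not the authors':

```python
import numpy as np
from scipy.optimize import linprog, minimize

def simplicial_decomposition(f, grad_f, bounds, x0, tol=1e-6, max_iter=50):
    """min f(x) over a box (stand-in for a general polytope): alternate an
    LP column generation step with a restricted master problem over the
    convex hull of the generated columns (extreme points)."""
    columns = [np.asarray(x0, dtype=float)]
    x = columns[0]
    for _ in range(max_iter):
        g = grad_f(x)
        # Column generation subproblem: minimize the linearization over X
        y = linprog(g, bounds=bounds, method='highs').x
        if g @ (y - x) >= -tol:                 # no descent column left
            return x
        columns.append(y)
        # Restricted master: minimize f over the hull of the columns
        C = np.column_stack(columns)
        w0 = np.full(len(columns), 1.0 / len(columns))
        res = minimize(lambda w: f(C @ w), w0,
                       jac=lambda w: C.T @ grad_f(C @ w),
                       constraints=[{'type': 'eq',
                                     'fun': lambda w: w.sum() - 1.0}],
                       bounds=[(0.0, 1.0)] * len(columns), method='SLSQP')
        x = C @ res.x
    return x

# Toy instance: project the point (2, 2) onto the unit box [0, 1]^2
target = np.array([2.0, 2.0])
f = lambda x: 0.5 * np.sum((x - target)**2)
grad_f = lambda x: x - target
print(simplicial_decomposition(f, grad_f, bounds=[(0, 1), (0, 1)], x0=[0.0, 0.0]))
```

Replacing the LP step by a nonlinear column generation subproblem, and prolonging the generated columns toward the relative boundary, gives the class of methods studied in the paper.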

Relevance: 60.00%

Abstract:

The main objective of this thesis is the development of numerical tools based on full-wave techniques for the computer-aided design (CAD) of microwave devices. In this context, a numerical technique based on the finite element method (FEM) has been developed for the design and analysis of printed antennas by means of optimization algorithms. The technique divides the analysis of an antenna into two stages. In the first stage, the regions of the antenna that do not need to be modified during the CAD process are characterized, only once per frequency point of the operating band, through a full 3-D analysis that yields their generalized admittance matrix (GAM); the regions that will be modified, namely those containing the conducting surfaces of the printed antenna, are replaced by artificial ports. In the second stage, the surface supporting the metallization is inserted between the artificial ports and a 2-D analysis characterizes the behaviour of the antenna; the contour of the conducting surfaces is iteratively modified to achieve the desired electromagnetic performance, and a new GAM that accounts for the current patch shape is computed after each iteration. The technique can be embedded in an optimization algorithm (here, a genetic algorithm) to find the patch profile that meets the design objectives, and it is validated experimentally by designing wideband printed antennas for several applications through optimization of the patch profiles.

In addition, a procedure based on the domain decomposition method and the finite element method has been developed for the design of passive microwave devices; in particular, it is applied to the design and tuning of microwave filters. In its first stage, the structure to be analyzed is divided into subdomains, so that each subdomain can be analyzed separately with the most suitable method: subdomains that admit analytical treatment are handled analytically, which reduces the analysis time, while the remaining subdomains are analyzed numerically, in this thesis with the FEM. Beyond the domain decomposition itself, a frequency-sweep procedure based on a reduced-order model, the reduced-basis technique, is applied to further cut the analysis time (a minimal sketch of such a sweep follows below). The procedure has been used to design and tune several example filters in order to verify its validity; the results demonstrate its usefulness and confirm its rigour, accuracy and efficiency for the design of microwave filters.
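To illustrate the role of the reduced-basis sweep, here is a minimal numpy sketch for a generic frequency-domain system (K - w^2 M)x = b; the matrices, snapshot choice and plain Galerkin projection are illustrative assumptions, not the formulation used in the thesis:

```python
import numpy as np

def reduced_basis_sweep(K, M, b, snap_freqs, sweep_freqs):
    """Reduced-basis frequency sweep for (K - w^2 M) x = b: a few full-order
    snapshot solves span a small basis; every other frequency point is then
    solved in the projected (reduced) space and lifted back."""
    snaps = [np.linalg.solve(K - w**2 * M, b) for w in snap_freqs]
    Q, _ = np.linalg.qr(np.column_stack(snaps))        # orthonormal basis
    Kr, Mr, br = Q.T @ K @ Q, Q.T @ M @ Q, Q.T @ b     # Galerkin projection
    return [Q @ np.linalg.solve(Kr - w**2 * Mr, br) for w in sweep_freqs]

# Toy usage: random SPD stand-ins for FEM stiffness and mass matrices
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
K = A @ A.T + 200 * np.eye(200)
M = np.eye(200)
b = rng.standard_normal(200)
sols = reduced_basis_sweep(K, M, b, snap_freqs=[1.0, 5.0, 9.0],
                           sweep_freqs=np.linspace(1, 9, 81))
```

The saving comes from replacing 81 full-order solves by 3 full solves plus 81 tiny (here 3-by-3) projected solves.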

Relevance: 60.00%

Abstract:

In this work, we present a systematic method for the optimal development of bioprocesses that relies on the combined use of simulation packages and optimization tools. One of the main advantages of our method is that it allows for the simultaneous optimization of all the individual components of a bioprocess, including the main upstream and downstream units. The design task is mathematically formulated as a mixed-integer dynamic optimization (MIDO) problem, which is solved by a decomposition method that iterates between primal and master sub-problems. The primal dynamic optimization problem optimizes the operating conditions, bioreactor kinetics and equipment sizes, whereas the master level entails the solution of a tailored mixed-integer linear programming (MILP) model that decides on the values of the integer variables (i.e., the number of parallel equipment units and topological decisions). The dynamic optimization primal sub-problems are solved via a sequential approach that integrates the process simulator SuperPro Designer® with an external NLP solver implemented in Matlab®. The capabilities of the proposed methodology are illustrated through its application to a typical fermentation process and to the production of the amino acid L-lysine.
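The structure of the iteration can be sketched as follows; this toy Python version is ours and heavily simplified: the dynamic optimization primal is replaced by a static NLP, and the tailored MILP master by a naive candidate-selection rule, so it only illustrates the flow of information between the two levels (all names and the cost model are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def primal_master_loop(int_candidates, primal_nlp, max_iter=10):
    best = (np.inf, None, None)
    cuts = {}                                   # integer choice -> primal cost
    for _ in range(max_iter):
        untried = [y for y in int_candidates if y not in cuts]
        if not untried:
            break                               # all integer choices explored
        y = untried[0]                          # stand-in master step; a real
                                                # MIDO master solves an MILP
                                                # built from the cuts
        cost, x = primal_nlp(y)                 # primal step: NLP for fixed y
        cuts[y] = cost
        if cost < best[0]:
            best = (cost, y, x)
    return best

# Toy primal: choose the number of parallel units (integer) and their size
# (continuous); capex grows with the unit count, opex falls with capacity.
def primal_nlp(n_units):
    obj = lambda x: n_units * (10.0 + x[0]**2) + 100.0 / (n_units * x[0])
    res = minimize(obj, x0=[1.0], bounds=[(0.1, 10.0)])
    return res.fun, res.x

print(primal_master_loop([1, 2, 3, 4], primal_nlp))
```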

Relevance: 60.00%

Abstract:

Purpose: In this paper the authors aim to show the advantages of using the decomposition method introduced by Adomian to solve Emden's equation, a classical non-linear equation that appears in the study of the thermal behaviour of a spherical cloud and of the gravitational potential of a polytropic fluid at hydrostatic equilibrium.

Design/methodology/approach: The authors first review Emden's equation and its possible solutions using the Frobenius and power series methods; then, Adomian polynomials are introduced. Afterwards, Emden's equation is solved using Adomian's decomposition method and, finally, they conclude with a comparison of the solution given by Adomian's method with the solutions obtained by the other methods, for certain cases where the exact solution is known.

Findings: Solving Emden's equation for n in the interval [0, 5] is of great interest for several scientific applications, such as astronomy. However, the exact solution is known only for n=0, n=1 and n=5. The experiments show that Adomian's method achieves an approximate solution that overlaps with the exact solution for n=0 and coincides with the Taylor expansion of the exact solutions for n=1 and n=5, so the authors obtained quite satisfactory results.

Originality/value: The main classical methods for obtaining approximate solutions of Emden's equation have serious computational drawbacks. The authors provide a new, efficient numerical implementation for solving this equation, constructing the Adomian polynomials iteratively, which leads to a solution of Emden's equation that extends the range of variation of the parameter n compared with the solutions given by both the Frobenius and the power series methods.
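A minimal sympy sketch of the construction (ours, not the authors' implementation): for the Lane-Emden form y'' + (2/x)y' + y^n = 0 with y(0)=1, y'(0)=0, the Adomian polynomials of the nonlinearity N(y) = y^n are generated from a lambda-expansion and pushed through the inverse operator:

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)

def lane_emden_adomian(n, terms=4):
    """Series solution of y'' + (2/x) y' + y^n = 0, y(0)=1, y'(0)=0,
    built from `terms` Adomian components."""
    ys = [sp.Integer(1)]                      # y_0 carries the initial data
    for k in range(terms - 1):
        # Adomian polynomial A_k of N(y) = y^n:
        # A_k = (1/k!) d^k/dlam^k N(sum_i lam^i y_i) evaluated at lam = 0
        y_lam = sum(lam**i * y_i for i, y_i in enumerate(ys))
        A_k = sp.diff(y_lam**n, lam, k).subs(lam, 0) / sp.factorial(k)
        # Inverse operator L^{-1} f = int_0^x s^{-2} int_0^s t^2 f(t) dt ds
        inner = sp.integrate(x**2 * A_k, (x, 0, x))
        ys.append(sp.expand(-sp.integrate(inner / x**2, (x, 0, x))))
    return sum(ys)

print(lane_emden_adomian(1))   # 1 - x**2/6 + x**4/120 - ...
print(lane_emden_adomian(5))   # matches the series of (1 + x**2/3)**(-1/2)
```

For n=1 the computed components already reproduce 1 - x^2/6 + x^4/120, the Taylor expansion of the exact solution sin(x)/x.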

Relevance: 60.00%

Abstract:

Currently, one of the most attractive and desirable ways to address the energy challenge is to harvest energy directly from sunlight through so-called artificial photosynthesis. Among the ternary oxides based on earth-abundant metals, bismuth vanadate has recently emerged as a promising photoanode. Herein, BiVO4 thin-film photoanodes have been successfully synthesized by a modified metal-organic precursor decomposition method, followed by an annealing treatment. In an attempt to improve the photocatalytic properties of this semiconductor material for photoelectrochemical water oxidation, the electrodes have been modified (i) by doping with La and Ce (by adjusting the composition of the BiVO4 precursor solution to the desired concentration of the doping element), and (ii) by surface modification with potentiostatically electrodeposited Au nanoparticles. La and Ce doping at concentrations of 1 and 2 at%, respectively, in the BiVO4 precursor solution significantly enhances the photoelectrocatalytic performance of BiVO4 without introducing important changes in either the material structure or the electrode morphology, according to XRD and SEM characterization. In addition, surface modification of the electrodes with Au nanoparticles further enhances the photocurrent, as such metallic nanoparticles act as co-catalysts, promoting charge transfer at the semiconductor/solution interface. The combination of these two complementary ways of modifying the electrodes results in a significant increase in the photoresponse, facilitating their potential application in artificial-photosynthesis devices.

Relevance: 60.00%

Abstract:

Measuring Job Openings: Evidence from Swedish Plant-Level Data. In modern macroeconomic models, "job openings" are a key component, so when taking these models to the data we need an empirical counterpart to the theoretical concept. To this end, the literature relies on job vacancies measured either in survey or register data. Insofar as this concept captures job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function than the traditional vacancy measure.

Firm-Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure constitutes a quarter of the survey-based measure in the aggregate, the latter is not a superset of the former. To obtain the full set of unique vacancies in the two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement rather than changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business-cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that distinguishes between permanent and temporary employment (a simplified sketch follows this abstract). We also allow for the slow convergence to steady state that is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows to and outflows from unemployment are roughly 60/30; the remaining 10% is due to flows not involving unemployment. Even though temporary contracts cover only 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a two-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
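As a rough illustration of what a flow decomposition delivers, here is a two-flow, steady-state variant in the spirit of Shimer and Fujita-Ramey; the thesis' method is richer (temporary/permanent states, non-steady-state dynamics), and the series below are synthetic:

```python
import numpy as np

def flow_decomposition(s, f):
    """Two-flow steady-state decomposition: with u*_t = s_t/(s_t + f_t),
    dlog u* ~ (1 - u*)(dlog s - dlog f); each flow's contribution is the
    covariance of its term with dlog u*, normalized by var(dlog u*)."""
    s, f = np.asarray(s, float), np.asarray(f, float)
    u = s / (s + f)
    dev = lambda z: np.log(z) - np.log(z).mean()
    du, w = dev(u), 1.0 - u.mean()
    beta_s = np.cov(du, w * dev(s))[0, 1] / du.var(ddof=1)
    beta_f = np.cov(du, -w * dev(f))[0, 1] / du.var(ddof=1)
    return beta_s, beta_f            # inflow and outflow shares, sum ~ 1

# Synthetic monthly separation (s) and job-finding (f) rates
rng = np.random.default_rng(0)
s = 0.02 * np.exp(0.10 * rng.standard_normal(120))
f = 0.30 * np.exp(0.20 * rng.standard_normal(120))
print(flow_decomposition(s, f))
```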

Relevance: 60.00%

Abstract:

This paper examines the sources of structural changes in the output growth of South Africa's economy over 1975-93, using a decomposition method within the input-output (IO) framework to analyse output changes from a demand-side perspective. It decomposes output growth into private consumption, government consumption, investment and export components, and also measures the impact of import substitution and of changes in intermediate input use (as indicated by changes in IO coefficients). It is found that, before 1981, overall output growth was driven by multiple components, with all of the above contributing positively to economic growth. However, the collapse of investment demand is by far the single largest factor contributing to the economic stagnation that characterizes the post-1981 period.
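For readers unfamiliar with IO structural decomposition analysis, the numpy sketch below shows a standard two-polar form (a textbook variant, not necessarily the paper's exact formulas); splitting the final-demand vector into consumption, investment and export columns would yield the component-wise results the paper reports:

```python
import numpy as np

def sda_two_polar(A0, A1, F0, F1):
    """Decompose dx = L1 F1 - L0 F0, with L = (I - A)^-1, into a
    final-demand effect and a technology (IO-coefficient) effect,
    averaging the two polar forms so the pieces sum exactly to dx."""
    I = np.eye(A0.shape[0])
    L0, L1 = np.linalg.inv(I - A0), np.linalg.inv(I - A1)
    demand = 0.5 * (L0 + L1) @ (F1 - F0)
    technology = 0.5 * (L1 - L0) @ (F0 + F1)
    return demand, technology

# Toy 2-sector economy
A0 = np.array([[0.20, 0.30], [0.10, 0.25]])
A1 = A0 + 0.02                                  # changed IO coefficients
F0, F1 = np.array([100.0, 80.0]), np.array([120.0, 85.0])
d, t = sda_two_polar(A0, A1, F0, F1)
dx = np.linalg.inv(np.eye(2) - A1) @ F1 - np.linalg.inv(np.eye(2) - A0) @ F0
print(np.allclose(d + t, dx))                   # exact additive split: True
```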

Relevance: 60.00%

Abstract:

The contributions of this dissertation are the development of two new, interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method that overcomes the deficiency of the decimation process in estimating motion, which stems from the shift-variant nature of the decimated wavelet transform.

The enormous data volumes generated by digital video call for efficient compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the inter-pixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function at reasonable cost, hierarchical motion estimation with coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It realizes the possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift-variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
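The shift-variance problem and the sub-decimation remedy can be seen in one dimension with a Haar sketch (ours, purely illustrative, with circular extension standing in for the boundary handling discussed above): keeping both decimation phases turns a one-sample shift of the input into a phase swap plus a cyclic index shift instead of scrambling the coefficients:

```python
import numpy as np

def haar_subdecimation(x):
    """Haar low/high-pass pairs for BOTH decimation phases of a 1-D signal
    (circularly extended), so no phase information is discarded."""
    x = np.asarray(x, float)
    out = {}
    for phase in (0, 1):
        s = np.roll(x, -phase)
        lo = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation subband
        hi = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail subband
        out[phase] = (lo, hi)
    return out

sig = np.sin(np.linspace(0.0, 3.0, 16))
d0 = haar_subdecimation(sig)
d1 = haar_subdecimation(np.roll(sig, 1))        # input shifted by one sample
# Phase-1 subbands of the original equal phase-0 subbands of the shifted
# signal, up to a one-slot cyclic index shift: motion stays trackable.
print(np.allclose(np.roll(d0[1][0], 1), d1[0][0]))   # True
```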

Relevance: 60.00%

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time-dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.

We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme is the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
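To illustrate why dynamic programming enters, here is a toy value-iteration sketch of the logsum recursion behind a recursive (dynamic discrete choice) route model; the network, utilities and function names are ours, not the thesis' estimator:

```python
import numpy as np

def route_choice_values(util, dest, tol=1e-10, max_iter=1000):
    """Value iteration for V(k) = log sum_a exp(util[k, a] + V(a)), with
    V(dest) = 0; link-choice probabilities then follow from V."""
    n = util.shape[0]
    V = np.zeros(n)
    for _ in range(max_iter):
        M = np.exp(util + V[None, :])            # -inf utilities become 0
        row = M.sum(axis=1)
        with np.errstate(divide='ignore'):
            V_new = np.where(np.arange(n) == dest, 0.0, np.log(row))
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    P = np.divide(M, row[:, None], out=np.zeros_like(M),
                  where=row[:, None] > 0)        # choice probabilities
    return V, P

# Toy network, nodes 0..3, destination 3; util[i, j] = arc utility (-inf: no arc)
ninf = -np.inf
util = np.array([[ninf, -1.0, -1.5, ninf],
                 [ninf, ninf, ninf, -1.0],
                 [ninf, ninf, ninf, -0.5],
                 [ninf, ninf, ninf, ninf]])
V, P = route_choice_values(util, dest=3)
print(V)      # V[0] = log(e**-2 + e**-2) = -2 + log 2
print(P[0])   # 50/50 split between the two equally costly routes
```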

Relevance: 60.00%

Abstract:

Self-assembly of nanoparticles is a promising route to complex, nanostructured materials with functional properties. Nanoparticle assemblies characterized by crystallographic alignment of the nanoparticles on the atomic scale, i.e. mesocrystals, are commonly found in nature and show outstanding functional and mechanical properties. This thesis investigates the formation mechanisms of mesocrystals formed by self-assembling iron oxide nanocubes. We used the thermal decomposition method to synthesize monodisperse, oleate-capped iron oxide nanocubes with average edge lengths between 7 nm and 12 nm, and studied evaporation-induced self-assembly in dilute toluene-based nanocube dispersions.

The influence of packing constraints on the alignment of the nanocubes in nanofluidic containers was investigated with small- and wide-angle X-ray scattering (SAXS and WAXS, respectively). We found that the nanocubes preferentially orient one of their {100} faces toward the confining channel wall and display mesocrystalline alignment irrespective of the channel width.

We manipulated the solvent evaporation rate of drop-cast dispersions on fluorosilane-functionalized silica substrates in a custom-designed cell. The growth stages of the assembly process were investigated using light microscopy and quartz crystal microbalance with dissipation monitoring (QCM-D). We found that particle transport phenomena, e.g. the coffee-ring effect and Marangoni flow, result in complex-shaped arrays near the three-phase contact line of a drying colloidal drop when the nitrogen flow rate is high. Diffusion-driven nanoparticle assembly into large mesocrystals with a well-defined morphology dominates at much lower nitrogen flow rates. Analysis of time-resolved video microscopy data was used to quantify mesocrystal growth and establish a particle-diffusion-based, three-dimensional growth model. The dissipation obtained from the QCM-D signal reached its maximum value when the microscopy-observed lateral growth of the mesocrystals ceased, which we attribute to the fluid-like behavior of the mesocrystals and their weak binding to the substrate. Analysis of electron microscopy images and diffraction patterns showed that the formed arrays display significant nanoparticle ordering, regardless of the distinctive formation process.

We followed the two-stage formation mechanism of mesocrystals in levitating colloidal drops with real-time SAXS. Modelling of the SAXS data with a square-well potential, together with calculations of the van der Waals interactions, suggests that the nanocubes initially form disordered clusters, which quickly transform into an ordered phase.

Relevance: 60.00%

Abstract:

This paper describes a parallel semi-Lagrangian finite difference approach to the pricing of early-exercise Asian options on assets with stochastic volatility. A multigrid procedure is described for the fast iterative solution of the discrete linear complementarity problems that result. The accuracy and performance of this approach are improved considerably by a strike-price-related analytic transformation of asset prices.

Asian options are contingent claims with payoffs that depend on the average price of an asset over some time interval. The payoff may depend on this average and a fixed strike price (Fixed Strike Asians) or on the average and the asset price (Floating Strike Asians). The option may also permit early exercise (American contract) or confine the holder to a fixed exercise date (European contract). The Fixed Strike Asian with early exercise and continuous arithmetic averaging is considered here. Pricing such an option when the asset price has stochastic volatility requires the solution of a tri-variate partial differential inequality in the three state variables: asset price, average price and volatility (or, equivalently, variance). The similarity transformations [6] used with Floating Strike Asian options to reduce the dimensionality of the problem are not applicable to Fixed Strikes, so the numerical solution of a tri-variate problem is necessary. The computational challenge is to provide accurate solutions sufficiently quickly to support real-time trading activities at a reasonable cost in terms of hardware requirements.
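The discrete linear complementarity problems mentioned above have the generic form Ax >= b, x >= g, (x - g)^T(Ax - b) = 0, with g the payoff (obstacle). As a point of reference only (the paper itself uses multigrid on a tri-variate problem), here is a minimal projected SOR solver applied to a 1-D toy obstacle problem; all names are illustrative:

```python
import numpy as np

def projected_sor(A, b, g, omega=1.2, tol=1e-8, max_iter=5000):
    """Projected SOR for the LCP  A x >= b,  x >= g,
    (x - g)^T (A x - b) = 0: an SOR sweep followed by projection onto g."""
    x = g.copy()
    for _ in range(max_iter):
        err = 0.0
        for i in range(len(b)):
            gs = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]   # GS update
            x_new = max(g[i], (1.0 - omega) * x[i] + omega * gs)
            err = max(err, abs(x_new - x[i]))
            x[i] = x_new
        if err < tol:
            break
    return x

# Toy obstacle problem: -x'' = -5 on (0, 1), x(0) = x(1) = 0, x >= obstacle
n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.full(n, -5.0)
t = np.linspace(h, 1.0 - h, n)
g = 0.25 - np.abs(t - 0.5)          # obstacle active near the middle
x = projected_sor(A, b, g)
print(x.min(), np.all(x >= g - 1e-12))
```

In the pricing setting, one such complementarity solve (there via multigrid rather than the plain sweep above) is required at every time step of the semi-Lagrangian scheme, which is what makes fast iterative solvers essential.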