974 results for Linear semi-infinite optimization
Abstract:
This paper describes a design methodology for piezoelectric energy harvesters that thinly encapsulate the mechanical devices and exploit resonances from higher-order vibrational modes. The direction of polarization determines the sign of the piezoelectric tensor, which must be chosen to avoid cancellation of electric fields from opposite polarizations in the same circuit. The resulting modified equations of state are solved by the finite element method (FEM). Combining this method with the solid isotropic material with penalization (SIMP) method for piezoelectric material, we have developed an optimization methodology that optimizes both the piezoelectric material layout and the polarization direction. The density function of the SIMP method is updated based on sensitivity analysis, using sequential linear programming in the early stage of the optimization and the phase field method in the later stage.
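As a point of reference, SIMP-type interpolations for piezoelectric design typically penalize both the elastic and the coupling tensors, and published extensions add a second design variable for the polarization sign (the exponents and the polarization variable $\phi$ below are illustrative, not necessarily the authors' exact interpolation):

$$c(\rho) = \rho^{p_c}\, c_0, \qquad e(\rho,\phi) = \rho^{p_e}\,(2\phi - 1)^{p_\phi}\, e_0, \qquad \rho,\phi \in [0,1],$$

so that $\phi > 1/2$ and $\phi < 1/2$ select opposite polarization directions, which is how cancellation between oppositely polarized regions wired into the same circuit can be avoided.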
Abstract:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of inferences in semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
Abstract:
The importance of mechanical aspects of cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its absence may be related to the formation of cancer, which exhibits higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are truly essential and which have only a marginal effect. A systematic method to reconstruct the elements involved in the cell's perception of mechanical cues could substantially accelerate the validation of such models. This work proposes a routine capable of reconstructing the topology of the focal adhesions and the actomyosin portion of the cytoskeleton from the displacement field the cell generates on a flexible substrate. Equivalently, the goal is an algorithm that reconstructs the forces applied by the cell from measurements of the substrate displacement, which characterizes an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is a suitable solution strategy. TOM consists of the iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to experimentally obtain the substrate displacement is Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Along with systematically generating focal adhesion and actomyosin distributions for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented by a two-dimensional linear Finite Element Method model. Actin contraction is modeled as an initial stress in the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data from experiments on cytoskeletal prestress and micropatterning, and the numerical results were compared to the experimental ones.
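In output-least-squares form, the inverse problem described here can be written generically as follows (a sketch consistent with the description, with $K(\rho)$ the FEM stiffness, $f(\rho)$ the load including the actin initial stress, and $u^{\mathrm{meas}}$ the measured substrate displacements; the exact functional and constraints used in the work may differ):

$$\min_{\rho}\; \big\| u(\rho) - u^{\mathrm{meas}} \big\|^2 \quad \text{s.t.} \quad K(\rho)\, u(\rho) = f(\rho), \qquad 0 \le \rho_e \le 1,$$

where the element-wise design variables $\rho_e$ encode the presence of focal adhesions and actomyosin fibers.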
Abstract:
Small-scale fluid flow systems have been studied for various applications, such as chemical reagent dosing and cooling devices for compact electronic components. This work presents the complete development cycle of an optimized heat sink designed using the Topology Optimization Method (TOM) for best performance, including minimization of the fluid-flow pressure drop and maximization of heat dissipation, aimed at small-scale applications. TOM is applied to a design domain to obtain an optimized channel topology according to a given multi-objective function that combines pressure drop minimization and heat transfer maximization. The Stokes flow hypothesis is adopted, and both conduction and forced convection effects are included in the steady-state heat transfer model. The topology optimization procedure combines the Finite Element Method (for the physical analysis) with Sequential Linear Programming (as the optimization algorithm). Two-dimensional topology optimization results of channel layouts obtained for a heat sink design are presented as examples to illustrate the design methodology. 3D computational simulations and prototype manufacturing have been carried out to validate the proposed design methodology.
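A generic weighted form of such a multi-objective function (illustrative; the weights, normalizations, and exact dissipation measures used in the work may differ) is

$$\min_{\gamma}\; F(\gamma) = w\,\Phi_{\mathrm{flow}}(\gamma) - (1-w)\,\Phi_{\mathrm{heat}}(\gamma), \qquad 0 \le \gamma \le 1, \qquad \int_\Omega \gamma \, d\Omega \le V_{\max},$$

where $\gamma$ is the design variable distinguishing fluid from solid, $\Phi_{\mathrm{flow}}$ measures viscous dissipation (pressure drop) under the Stokes equations, and $\Phi_{\mathrm{heat}}$ measures heat transfer under the steady conduction-convection equation.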
Abstract:
The need for convergence between semi-structured data management and Information Retrieval techniques is manifest to the scientific community. To meet this growing demand, the W3C has recently proposed XQuery Full Text, an IR-oriented extension of XQuery. However, the issue of query optimization requires the study of important properties such as query equivalence and containment; to this end, a formal representation of documents and queries is needed. The goal of this thesis is to establish such a formal background. We define a data model for XML documents and propose an algebra able to represent most XQuery Full Text expressions. We show how an XQuery Full Text expression can be translated into an algebraic expression and how an algebraic expression can be optimized.
Abstract:
Many combinatorial problems coming from the real world may not have a clear, well-defined structure, typically being dirtied by side constraints or being composed of two or more sub-problems that are usually not disjoint. Such problems are not suitable for pure approaches based on a single programming paradigm, because a paradigm that effectively handles one problem characteristic may behave inefficiently on others. In these cases, modelling the problem using different programming techniques, trying to take the best from each, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, and Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
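As an illustration of the decomposition pattern named here, a minimal logic-based Benders skeleton looks like the following (a sketch: the `master` and `subproblem` objects and their methods are hypothetical interfaces, not the thesis's solvers):

```python
def logic_based_benders(master, subproblem, max_iters=1000):
    """Alternate between a relaxed master and a checking subproblem.

    `master.solve(cuts)` is assumed to return an optimal assignment of
    the master variables under the accumulated Benders cuts;
    `subproblem.check(x)` either certifies x as globally feasible or
    returns a logic cut excluding (at least) the assignment x.
    """
    cuts = []
    for _ in range(max_iters):
        x = master.solve(cuts)         # e.g. an ILP over assignment variables
        ok, cut = subproblem.check(x)  # e.g. a CP feasibility/scheduling check
        if ok:
            return x                   # master optimum passed the check
        cuts.append(cut)               # refine the master and iterate
    raise RuntimeError("iteration limit reached without convergence")
```

The division of labor mirrors the hybridization discussed above: the ILP master exploits relaxation strength, while the CP subproblem exploits propagation on the hard combinatorial core.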
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e., ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, yielding a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution on the parameter of interest of the g-prior type. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a frequentist sense. Having set out the general theory, I apply this Bayesian nonparametric methodology to different estimation problems. First, I apply the estimator to deconvolution and to hazard rate, density, and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in micro-econometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
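For orientation, the standard Tikhonov scheme underlying the first approach is the classical construction (textbook form, not the thesis's specific regularized posterior): for a linear ill-posed equation $Kx = y$ between Hilbert spaces, the regularized solution is

$$x_\alpha = (\alpha I + K^* K)^{-1} K^* y, \qquad \alpha > 0,$$

which converges to the minimum-norm solution as $\alpha \to 0$ when the noise level and $\alpha$ are suitably balanced; the regularized posterior distribution plays the analogous role when the estimate is replaced by its Bayesian counterpart.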
Abstract:
This thesis investigates Decomposition and Reformulation for solving Integer Linear Programming problems. This method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median, and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, applicable as a black-box solution algorithm and working as a complement and alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedra obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we choose to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
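In standard notation (a textbook statement of DWD with a bounded slave, consistent with the description above), given $\min\{c^\top x : Ax \ge b,\; Dx \ge d,\; x \in \mathbb{Z}^n_+\}$ with the slave $\{Dx \ge d\}$ convexified over its extreme points $\{x^k\}_{k \in K}$, the master and the pricing reduced cost read

$$\min_{\lambda \ge 0} \sum_{k \in K} (c^\top x^k)\, \lambda_k \quad \text{s.t.} \quad \sum_{k \in K} (A x^k)\, \lambda_k \ge b, \qquad \sum_{k \in K} \lambda_k = 1,$$

$$\bar{c}_k = (c - A^\top \pi)^\top x^k - \sigma,$$

where $\pi$ and $\sigma$ are the duals of the master constraints and the convexity constraint; the pricing problem minimizes $\bar{c}$ over the slave polyhedron, and a negative value signals a column to add.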
Abstract:
The present thesis is a contribution to the theory of algebras of pseudodifferential operators in singular settings. In particular, we focus on the $b$-calculus and the calculus on conformally compact spaces in the sense of Mazzeo and Melrose, in connection with the notion of spectrally invariant transmission operator algebras. We summarize results given by Gramsch et al. on the construction of $\Psi_0$- and $\Psi^*$-algebras and the corresponding scales of generalized Sobolev spaces, using commutators of certain closed operators and derivations. In the case of a manifold with corners $Z$ we construct a $\Psi^*$-completion $A_b(Z,{}^b\Omega^{1/2})$ of the algebra of zero-order $b$-pseudodifferential operators $\Psi^0_{b,cl}(Z,{}^b\Omega^{1/2})$ in the corresponding $C^*$-closure $B(Z,{}^b\Omega^{1/2}) \hookrightarrow \mathcal{L}(L^2(Z,{}^b\Omega^{1/2}))$. The construction also shows that, localized to the (smooth) interior of $Z$, the operators in $A_b(Z,{}^b\Omega^{1/2})$ can be represented as ordinary pseudodifferential operators. In connection with the notion of solvable $C^*$-algebras, introduced by Dynin, we calculate the length of the $C^*$-closure of $\Psi^0_{b,cl}(F,{}^b\Omega^{1/2},\mathbb{R}^{E(F)})$ in $B(F,{}^b\Omega^{1/2},\mathbb{R}^{E(F)})$ by localizing $B(Z,{}^b\Omega^{1/2})$ along the boundary face $F$ using the (extended) indicial family $I^B_{FZ}$. Moreover, we discuss how one can localize a certain solving ideal chain of $B(Z,{}^b\Omega^{1/2})$ in neighbourhoods $U_p$ of arbitrary points $p \in Z$. This localization process recovers the singular structure of $U_p$; further, the induced length function $l_p$ is shown to be upper semi-continuous. We give construction methods for $\Psi^*$- and $C^*$-algebras admitting only infinitely long solving ideal chains. These algebras are first realized as unconnected direct sums of (solvable) $C^*$-algebras and then refined so that the resulting algebras have arcwise connected spaces of one-dimensional representations. In addition, we recall the notion of transmission algebras on manifolds with corners $(Z_i)_{i \in \mathbb{N}}$, following an idea of Ali Mehmeti, Gramsch et al. Thereby, we connect the underlying $C^\infty$-function spaces using point evaluations in the smooth parts of the $Z_i$ and use generalized Laplacians to generate an appropriate scale of Sobolev spaces. Moreover, it is possible to associate generalized (solving) ideal chains to these algebras, such that for every $n \in \mathbb{N}$ there exists an ideal chain of length $n$ within the algebra. Finally, we discuss the $K$-theory for algebras of pseudodifferential operators on conformally compact manifolds $X$ and give an index theorem for these operators. In addition, we prove that the Dirac operator associated to the metric of a conformally compact manifold $X$ is not a Fredholm operator.
Abstract:
The collapse of linear polyelectrolyte chains in a poor solvent: when does a collapsing polyelectrolyte collect its counterions? The collapse of polyions in a poor solvent is a complex process and an active research subject in the theoretical polyelectrolyte community. The complexity is due to the subtle interplay between hydrophobic effects, electrostatic interactions, entropy elasticity, intrinsic excluded volume, and specific counterion and co-ion properties. Long-range Coulomb forces can obscure single-molecule properties. The approach presented here uses just a small amount of screening salt combined with very high sample dilution in order to screen intermolecular interactions while retaining intramolecular interactions as far as possible (polyelectrolyte concentration c_p ≤ 12 mg/L; salt concentration C_s = 10^-5 mol/L); this regime has not previously been described in the literature. During collapse, the polyion undergoes a drastic change in size along with a strong reduction of free counterions in solution. Therefore, light scattering was used to obtain the size of the polyion, while a conductivity setup was developed to monitor the progress of counterion collection by the polyion. Partially quaternized PVPs below and above the Manning limit were investigated and compared to the collapse of their uncharged precursor. The collapses were induced by an isorefractive solvent/non-solvent mixture consisting of 1-propanol and 2-pentanone, with nearly constant dielectric constant. The solvent quality for the uncharged polyion could be quantified, which, for the first time, allowed the experimental investigation of the effect of electrostatic interaction prior to and during polyion collapse. Given that the Manning parameter M for QPVP4.3 is as low as l_B / c = 0.6 (l_B the Bjerrum length and c the mean contour distance between two charges), no counterion binding should occur. However, the Walden product decreases upon the first addition of non-solvent, and the decrease accelerates when the structural collapse sets in. Since the dielectric constant of the solvent remains virtually constant during the chain collapse, the counterion binding is entirely caused by the reduction in the polyion chain dimensions. The collapse shifts to lower non-solvent contents w_ns with higher degrees of quaternization, as the samples QPVP20 and QPVP35 show (M = 2.8 and 4.9, respectively). The combination of light scattering and conductivity measurements revealed for the first time that polyion chains already collect their counterions well above the theta dimension, as soon as the dimensions start to shrink. Because only small amounts of screening salt are present, strong electrostatic interactions bias dynamic as well as static light scattering measurements. An extended Zimm formula was derived to account for this interaction and to obtain the real chain dimensions. The effective degree of dissociation g could be obtained semi-quantitatively by combining the extrapolated static measurements with the conductivity measurements. One can conclude that the expansion factor a and the effective degree of ionization of the polyion are mutually dependent. In the good solvent regime, g for QPVP4.3, QPVP20, and QPVP35 decreased in the order 1 > g4.3 > g20 > g35. The low values of g for QPVP20 and QPVP35 are assumed to be responsible for the earlier collapse of the more highly quaternized samples. Collapse theory predicts dipole-dipole attraction to increase accordingly and even predicts a collapse in the good solvent regime. Exactly this was observed for the QPVP35 sample.
The experimental results were compared to a newly developed theory of uniform spherical collapse induced by concomitant counterion binding, developed by M. Muthukumar and A. Kundagrami. The theory agrees qualitatively with the location of the phase boundary as well as with the trend of increasing expansion with increasing degree of quaternization. However, the experimentally determined g for the samples QPVP4.3, QPVP20, and QPVP35 decreases linearly with the degree of quaternization, whereas the theory predicts an almost constant value.
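For context, the classical Zimm relation that the extended formula generalizes is the standard static light scattering expression (textbook form; the thesis's extension adds terms for the unscreened electrostatic interactions):

$$\frac{Kc}{R(q)} = \frac{1}{M_w}\left(1 + \frac{q^2 R_g^2}{3}\right) + 2A_2 c,$$

where $K$ is the optical contrast factor, $R(q)$ the Rayleigh ratio at scattering vector $q$, $M_w$ the weight-average molar mass, $R_g$ the radius of gyration, and $A_2$ the second virial coefficient.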
Abstract:
The diagnosis, grading, and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series data is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for the segmentation and classification of breast lesions.
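A widely used instance of the classic bi-compartmental model referenced here is the standard Tofts model (a standard formulation from the DCE-MRI literature; the thesis's exact parametrizations may differ), in which the tissue concentration follows

$$C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t - \tau)}\, d\tau, \qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e},$$

where $C_p$ is the arterial plasma concentration, $K^{\mathrm{trans}}$ the transfer constant related to permeability and flow, $v_e$ the extravascular-extracellular volume fraction, and $k_{ep}$ the rate constant; different choices of which of these parameters to fit directly constitute the parametrization choice studied here.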
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry whose basic element can be duplicated, it can represent any topology of winding distribution. Compared with FEA, it reduces computing time and allows parameter values to be changed directly in a user interface without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for radial force calculation in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current, and the rotor position. It is based on the space vector method, enabling analysis of the machine during transients as well. The calculations are carried out by developing the analytical functions in Fourier series, taking all possible interactions between stator and rotor mmf harmonic components into account and, being parametrized, allowing the effects of the electrical and geometrical quantities of the machine to be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model in which one end of the motor shaft is constrained, to simulate the presence of a mechanical bearing, while the other end is free, supported only by the radial forces developed by the interacting magnetic fields, realizing a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
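The core computation in an equivalent-magnetic-circuit model is the solution of a reluctance network, which a minimal linear sketch illustrates (illustrative node-potential formulation with a toy data layout; the actual algorithm is non-linear and models the machine's modular geometry):

```python
import numpy as np

def solve_magnetic_circuit(n_nodes, branches, ref=0):
    """Solve a linear reluctance network for nodal magnetic potentials.

    branches: iterable of (i, j, reluctance, mmf) describing a branch from
    node i to node j with flux = (u_i - u_j + mmf) / reluctance.
    """
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for i, j, R, mmf in branches:
        P = 1.0 / R                     # branch permeance
        A[i, i] += P; A[j, j] += P      # flux conservation at both nodes
        A[i, j] -= P; A[j, i] -= P
        b[i] -= P * mmf                 # mmf source drives flux i -> j
        b[j] += P * mmf
    A[ref, :] = 0.0; A[ref, ref] = 1.0  # ground the reference node
    b[ref] = 0.0
    return np.linalg.solve(A, b)

# toy two-branch loop: a magnet mmf driving flux through an airgap reluctance
u = solve_magnetic_circuit(2, [(0, 1, 1e5, 100.0), (1, 0, 5e5, 0.0)])
print(u)  # nodal magnetic scalar potentials
```

Each iteration of a non-linear analysis of this kind reduces to a small linear solve over the network, which suggests why the method runs much faster than a full finite element analysis.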
Abstract:
This thesis aims at building and discussing applications of mathematical models to energy problems, on both the thermal and the electrical side. The objective is to show how the mathematical programming techniques developed within Operational Research can give useful answers in the energy sector, how they can provide tools to support the decision-making processes of companies operating in energy production and distribution, and how they can be successfully used in simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology compared with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review on mathematical modelling in the energy sector. The second part presents mathematical models for district heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs, and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As the available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic. We also investigate how the system design should change when battery degradation is taken into account.
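As a sketch of the kind of LP involved (toy data, and a simple per-kWh throughput proxy for the degradation cost; the thesis's models and degradation cost structure are more detailed):

```python
import numpy as np
from scipy.optimize import linprog

# Toy off-grid solar + battery dispatch LP. Degradation is charged as a
# cost per kWh of battery throughput (an assumption for illustration).
T = 4
solar = np.array([0.0, 3.0, 4.0, 0.5])   # kWh of PV energy per period
demand = np.array([1.0, 1.5, 2.0, 2.5])  # kWh of load per period
cap, rate, eta, soc0 = 5.0, 2.0, 0.95, 2.0
p_short, p_deg = 10.0, 0.05              # cost per kWh unserved / throughput

# variable layout: [charge(T) | discharge(T) | soc(T) | unserved(T) | spill(T)]
n = 5 * T
def idx(block, t):
    return block * T + t

c = np.zeros(n)
c[0:2*T] = p_deg            # degradation cost on charge and discharge
c[3*T:4*T] = p_short        # penalty on unserved demand

A_eq, b_eq = [], []
for t in range(T):
    # energy balance: solar + discharge + unserved = demand + charge + spill
    row = np.zeros(n)
    row[idx(1, t)] = 1.0; row[idx(3, t)] = 1.0
    row[idx(0, t)] = -1.0; row[idx(4, t)] = -1.0
    A_eq.append(row); b_eq.append(demand[t] - solar[t])
    # state of charge: soc_t = soc_{t-1} + eta*charge_t - discharge_t/eta
    row = np.zeros(n)
    row[idx(2, t)] = 1.0
    row[idx(0, t)] = -eta; row[idx(1, t)] = 1.0 / eta
    if t > 0:
        row[idx(2, t - 1)] = -1.0
    A_eq.append(row); b_eq.append(soc0 if t == 0 else 0.0)

bounds = ([(0, rate)] * 2 * T) + ([(0, cap)] * T) + ([(0, None)] * 2 * T)
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print("charge:   ", res.x[0:T].round(3))
print("discharge:", res.x[T:2*T].round(3))
```

Varying `p_deg` and re-solving is exactly the kind of sensitivity analysis of operational patterns to the degradation cost structure described above.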
Abstract:
In this thesis, we consider the problem of solving large, sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods; new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.
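For reference, the archetypal structure in question (a standard example, not necessarily the thesis's specific systems) is

$$\mathcal{A} = \begin{pmatrix} A & B^\top \\ B & 0 \end{pmatrix}, \qquad \mathcal{P} = \begin{pmatrix} A & 0 \\ 0 & B A^{-1} B^\top \end{pmatrix},$$

with $A$ symmetric positive definite and $B$ of full row rank. A classical result of Murphy, Golub, and Wathen shows that $\mathcal{P}^{-1}\mathcal{A}$ has at most three distinct eigenvalues ($1$ and $(1 \pm \sqrt{5})/2$), so a Krylov method converges in at most three iterations with this ideal preconditioner; practical strategies replace $A$ and the Schur complement $B A^{-1} B^\top$ by cheap approximations.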
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and results in lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
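A generic form of the second-order linear dynamic constraint models described here (illustrative canonical form; the identified model structures in the study may differ) maps a commanded parameter $u(t)$ to the actually achieved parameter $y(t)$ via

$$\ddot{y}(t) + 2\zeta\omega_n\, \dot{y}(t) + \omega_n^2\, y(t) = \omega_n^2\, u(t),$$

with damping ratio $\zeta$ and natural frequency $\omega_n$; imposing such dynamics between commanded and achieved values is what prevents the optimizer from returning quasi-static solutions that each operating point could achieve only at steady state.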