948 results for Non-convex optimization


Relevance: 30.00%

Abstract:

R. J. Douglas, 'Non-existence of polar factorisations and polar inclusion of a vector-valued mapping', International Journal of Pure and Applied Mathematics (IJPAM) 41, no. 3 (2007).

Relevance: 30.00%

Abstract:

Iantchenko, A. (2007) 'Scattering poles near the real axis for two strictly convex obstacles', Annales Henri Poincaré 8, pp. 513-568. RAE2008

Relevance: 30.00%

Abstract:

Wood, I.; Hieber, M. (2007) 'The Dirichlet problem in convex bounded domains for operators with L∞-coefficients', Differential and Integral Equations 20, pp. 721-734. RAE2008

Relevance: 30.00%

Abstract:

We wish to construct a realization theory of stable neural networks and use this theory to model the variety of stable dynamics apparent in natural data. Such a theory should have numerous applications to constructing specific artificial neural networks with desired dynamical behavior. The networks used in this theory should have well-understood dynamics yet be as diverse as possible to capture natural diversity. In this article, I describe a parameterized family of higher-order, gradient-like neural networks which have arbitrary, prescribed equilibria with unstable manifolds of specified dimension. Moreover, any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is either non-constructive, lacks generality, or has unspecified attracting equilibria. More specifically, we construct a parameterized family of gradient-like neural networks with a simple feedback rule which will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
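
A minimal numerical sketch of the gradient-like setting described above (our own toy construction, not the paper's higher-order family): a quadratic potential whose single equilibrium has an unstable manifold of chosen dimension, with the potential itself acting as a strict Lyapunov function. The name make_gradient_system and the parameter values are illustrative assumptions.

    import numpy as np

    def make_gradient_system(p, lambdas):
        """Potential V(x) = 0.5 * sum_i lambdas[i] * (x[i] - p[i])**2 and the flow
        dx/dt = -grad V(x).  The point p is an equilibrium; the dimension of its
        unstable manifold equals the number of negative entries in lambdas.
        Toy construction only, not the paper's parameterized family."""
        p = np.asarray(p, dtype=float)
        lam = np.asarray(lambdas, dtype=float)

        def V(x):
            return 0.5 * np.sum(lam * (x - p) ** 2)

        def f(x):
            return -lam * (x - p)   # right-hand side: -grad V(x)

        return V, f

    # Euler integration from an arbitrary start; with all lambdas positive the
    # unstable manifold is 0-dimensional and V decreases along the trajectory.
    V, f = make_gradient_system(p=[0.0, 0.0], lambdas=[1.0, 2.0])
    x = np.array([1.0, -1.0])
    for _ in range(1000):
        x = x + 0.01 * f(x)
    print(x, V(x))   # x approaches p and V(x) approaches 0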

Relevance: 30.00%

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
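
As a small illustration of the dominance tests discussed above, the sketch below implements plain Pareto dominance for maximisation together with a multiplicative epsilon-relaxation of the kind used to build epsilon-coverings. The exact relaxation and the trade-off-induced preference relation in the thesis may differ, so treat the convention here as an assumption.

    import numpy as np

    def dominates(u, v, eps=0.0):
        """Pareto dominance for maximisation: u dominates v if it is at least as
        good in every objective and strictly better in at least one.  With eps > 0
        this is a multiplicative epsilon-dominance (one common convention for
        eps-coverings; the thesis's exact definition may differ)."""
        u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
        scaled = (1.0 + eps) * u
        return bool(np.all(scaled >= v) and np.any(scaled > v))

    def undominated(vectors, eps=0.0):
        """Keep only the utility vectors not dominated by any other vector."""
        return [v for i, v in enumerate(vectors)
                if not any(dominates(u, v, eps)
                           for j, u in enumerate(vectors) if j != i)]

    utilities = [(3, 1), (1, 3), (2, 2), (1, 1)]
    print(undominated(utilities))            # [(3, 1), (1, 3), (2, 2)]
    print(undominated(utilities, eps=0.6))   # coarser covering: fewer vectors kept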

Relevance: 30.00%

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at each location represents the density of the data traffic transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of our vector-field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the parts of the network where nodes have high residual energy, and lowering it in the parts where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted-shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector-field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of our vector-field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
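
One schematic way to write the electrostatics analogy developed above, in our own notation (the paper's exact functional and boundary conditions may differ): D is the traffic-density vector field, k the permittivity-like weight (large where nodes have plenty of residual energy), and rho the net rate of data generation over the network area A (sources positive, destinations negative).

    \[
      \min_{D}\; \int_{A} \frac{\lVert D(x)\rVert^{2}}{2\,k(x)}\,dx
      \quad\text{subject to}\quad \nabla\cdot D(x) = \rho(x)\ \text{on } A,
      \qquad D\cdot n = 0\ \text{on } \partial A .
    \]

At the optimum the scaled field is curl-free, so D(x) = -k(x) \nabla\phi(x) for a scalar potential \phi satisfying the Poisson-type equation \nabla\cdot(k\,\nabla\phi) = -\rho, mirroring electrostatics; routing then follows D, and raising k where residual energy is high steers traffic through those regions, consistent with the energy-aware scheme in the abstract.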
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates and call the resulting scheme the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion-control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy of our problem with multiple-access communication to find signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.

Relevance: 30.00%

Abstract:

In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find, in every case we explore, that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
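
A schematic contrast between the two notions, in our notation (the paper's precise definitions, which it ties to convex risk measures, may differ): let {U(ε)}, ε ≥ 0, be a nested family of distribution sets that grows with ε, and u(x, ξ) the payoff of decision x under outcome ξ.

    % Standard robustness: a single guarantee over one uncertainty set U
    \[
      \inf_{P \in U} \mathbb{E}_{P}\bigl[u(x,\xi)\bigr] \;\ge\; 0 .
    \]
    % Soft robustness: the guarantee is relaxed smoothly as the set grows
    \[
      \inf_{P \in U(\varepsilon)} \mathbb{E}_{P}\bigl[u(x,\xi)\bigr] \;\ge\; -\varepsilon
      \qquad \text{for all } \varepsilon \ge 0 .
    \]

Each value of ε corresponds to one standard robust constraint, which is consistent with the abstract's observation that the approach costs roughly a small number of standard robust solves.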

Relevance: 30.00%

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.

The technically interesting aspect of our work lies in the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online, non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.

3. Energy-Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing total flow-time plus energy, in the online resource-augmentation model, for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality for bounding the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
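
For readers unfamiliar with the objective that recurs throughout the contributions above, a tiny sketch of (weighted) flow-time: the flow-time of a job is its completion time minus its release time. The data below are made up purely for illustration, not taken from the thesis.

    def weighted_flow_time(jobs, completion):
        """jobs: job_id -> (release_time, weight); completion: job_id -> C_j.
        Returns sum_j w_j * (C_j - r_j), the weighted flow-time objective."""
        return sum(w * (completion[j] - r) for j, (r, w) in jobs.items())

    # Two unit-weight, unit-length jobs released at time 0 and run back to back
    # on one machine: flow-times are 1 and 2, so the total is 3 (average 1.5).
    jobs = {"a": (0.0, 1.0), "b": (0.0, 1.0)}
    completion = {"a": 1.0, "b": 2.0}
    print(weighted_flow_time(jobs, completion))   # 3.0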

Relevance: 30.00%

Abstract:

This paper describes the application of computational fluid dynamics (CFD) to simulate the macroscopic bulk motion of solder paste ahead of a moving squeegee blade in the stencil printing process during the manufacture of electronic components. The successful outcome of the stencil printing process is dependent on the interaction of numerous process parameters. A better understanding of these parameters is required to determine their relation to print quality and to improve guidelines for process optimization. Various modelling techniques have arisen to analyse the flow behaviour of solder paste, including macroscopic studies of the whole mass of paste as well as microstructural analyses of the motion of individual solder particles suspended in the carrier fluid. This work builds on the knowledge gained to date from earlier analytical models and CFD investigations by considering the important non-Newtonian rheological properties of solder pastes, which have been neglected in previous macroscopic studies. Pressure and velocity distributions are obtained from both Newtonian and non-Newtonian CFD simulations and evaluated against each other as well as against existing established analytical models. Significant differences between the results are observed, which demonstrate the importance of modelling non-Newtonian properties for realistic representation of the flow behaviour of solder paste.
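
To make the Newtonian/non-Newtonian distinction concrete, the sketch below contrasts a constant viscosity with a generic shear-thinning power-law viscosity. The functional form and the parameter values are placeholders, not the constitutive model or paste data used in the paper.

    import numpy as np

    def newtonian_viscosity(shear_rate, mu=100.0):
        """Constant apparent viscosity (Pa.s), as assumed in earlier macroscopic models."""
        return np.full_like(np.asarray(shear_rate, dtype=float), mu)

    def power_law_viscosity(shear_rate, k=300.0, n=0.4):
        """Shear-thinning power law: mu_app = k * gamma_dot**(n - 1), with n < 1.
        Placeholder constitutive model, not the paste rheology used in the paper."""
        gamma = np.maximum(np.asarray(shear_rate, dtype=float), 1e-6)  # avoid 0**negative
        return k * gamma ** (n - 1.0)

    shear_rates = np.array([0.1, 1.0, 10.0, 100.0])   # 1/s
    print(newtonian_viscosity(shear_rates))           # flat viscosity profile
    print(power_law_viscosity(shear_rates))           # apparent viscosity falls as shear rises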

Relevance: 30.00%

Abstract:

This paper presents an analysis of biofluid behavior in a T-shaped microchannel device and a design optimization for improved biofluid performance in terms of particle-liquid separation. The biofluid is modeled as a single-phase, shear-rate-dependent non-Newtonian flow with the properties of blood. The separation of red blood cells from plasma is evident from the biofluid distribution in the microchannels and is assessed against various relevant effects and findings, including the Zweifach-Fung bifurcation law, the Fahraeus effect, the Fahraeus-Lindqvist effect, and the cell-free layer phenomenon. Modeling of the initial device shows that this T-microchannel device can separate red blood cells from plasma, but the separation efficiency varies widely among the different bifurcations. To address this imbalanced performance, a design optimization is conducted. This includes a series of simulations to investigate the effect of the lengths of the main and branch channels on biofluid behavior, and a search for an improved design with optimal separation performance. It is found that changing the relative lengths of the branch channels is effective both for the uniformity of the flow-rate ratio among bifurcations and for reducing the difference in flow velocities between the branch channels, whereas extending the length of the main channel beyond the bifurcation region is only effective for the uniformity of the flow-rate ratio.
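
A rough back-of-the-envelope view of how branch geometry controls the flow-rate ratio, the quantity the optimization above tries to balance: treating each shallow rectangular branch as a laminar hydraulic resistance, the inlet flow splits in inverse proportion to those resistances. Both the resistance formula and the dimensions below are generic textbook approximations, not the paper's CFD model.

    def hydraulic_resistance(length, width, height, mu=3.5e-3):
        """Approximate laminar resistance of a shallow rectangular microchannel,
        R ~ 12 * mu * L / (w * h**3); mu is an assumed plasma-like viscosity (Pa.s)."""
        return 12.0 * mu * length / (width * height ** 3)

    def branch_flow_fractions(branch_resistances):
        """Fraction of the incoming flow taken by each parallel branch
        (inversely proportional to its resistance)."""
        g = [1.0 / r for r in branch_resistances]
        return [gi / sum(g) for gi in g]

    # A branch twice as long draws half as much flow as the short one -- the kind
    # of imbalance the design optimization corrects by tuning relative branch lengths.
    r_short = hydraulic_resistance(2e-3, 50e-6, 20e-6)
    r_long = hydraulic_resistance(4e-3, 50e-6, 20e-6)
    print(branch_flow_fractions([r_short, r_long]))   # ~[0.67, 0.33]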

Relevance: 30.00%

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
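
As a minimal concrete instance of the ideas reviewed above, the sketch below fits a linear-in-the-parameters model by regularised least squares and picks the regularisation weight by leave-one-out cross-validation. The polynomial basis, the brute-force LOO loop, and the candidate values are generic choices, not any specific algorithm from the review.

    import numpy as np

    def fit_ridge(Phi, y, lam):
        """Regularised least squares for y ~ Phi @ theta:
        theta = (Phi^T Phi + lam * I)^{-1} Phi^T y."""
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    def loo_error(Phi, y, lam):
        """Leave-one-out cross-validation error (computed by brute force)."""
        n = len(y)
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            theta = fit_ridge(Phi[mask], y[mask], lam)
            err += (y[i] - Phi[i] @ theta) ** 2
        return err / n

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 40)
    y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(40)
    Phi = np.vander(x, 6)                               # generic polynomial basis
    best = min([1e-4, 1e-2, 1.0], key=lambda lam: loo_error(Phi, y, lam))
    print("selected regularisation weight:", best)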

Relevance: 30.00%

Abstract:

The present work is focused on demonstrating the advantages of miniaturized reactor systems, which are essential for processes where there is potential for considerable heat-transfer intensification, as well as for kinetic studies of highly exothermic reactions at near-isothermal conditions. The heat transfer characteristics of four different cross-flow designs of a microstructured reactor/heat-exchanger (MRHE) were studied by CFD simulation using ammonia oxidation on a platinum catalyst as a model reaction. An appropriate distribution of the nitrogen flow used as a coolant can drastically decrease the axial temperature gradient in the reaction channels. In the case of a microreactor made of a highly conductive material, the temperature non-uniformity in the reactor depends strongly on the distance between the reaction and cooling channels. Appropriate design of a single periodic reactor/heat-exchanger unit, combined with a non-uniform inlet coolant distribution, reduces the temperature gradients in the complete reactor to less than 4 °C, even at conditions corresponding to an adiabatic temperature rise of about 1400 °C, which are generally not accessible in conventional reactors because of the danger of runaway reactions. To obtain the required coolant flow distribution, an optimization study was performed to determine the particular geometry of the inlet and outlet chambers in the microreactor/heat-exchanger. The predicted temperature profiles are in good agreement with experimental data from temperature sensors located along the reactant and coolant flows. The results demonstrate the clear potential of microstructured devices as reliable instruments for kinetic research as well as for proper heat management in the case of highly exothermic reactions. (C) 2002 Elsevier Science B.V. All rights reserved.

Relevance: 30.00%

Abstract:

We report the optimization of a series of non-MPEP-site metabotropic glutamate receptor 5 (mGlu5) positive allosteric modulators (PAMs) based on a simple acyclic ether series. Modifications led to a gain of MPEP-site interaction through incorporation of a chiral amide in conjunction with a nicotinamide core. A highly potent PAM, 8v (VU0404251), was shown to be efficacious in a rodent model of psychosis. These studies suggest that potent PAMs within topologically similar chemotypes can be developed to preferentially interact, or not interact, with the MPEP allosteric binding site.

Relevance: 30.00%

Abstract:

A simple, non-seeded, high-yield synthesis of convex gold octahedra with a size of ca. 50 nm in aqueous solution is described. The octahedral nanoparticles were systematically prepared by reduction of HAuCl4 using ascorbic acid (AA) in the presence of cetyltrimethylammonium bromide (CTAB) as the stabilizing surfactant, while the concentration of Au3+ was fixed. The synthesis differs notably from other wet syntheses of metallic nanoparticles in that it is mediated by H2O2; the mechanism of this H2O2-mediated process is described in detail. The gold octahedra were shown to be single crystals with all eight faces belonging to the {111} family. Moreover, the single-crystalline particles also showed attractive optical properties related to localized surface plasmon resonance (LSPR) that should find use as labels for microscopic imaging, as materials for colorimetric biosensing, or in nanosensor development.

Relevance: 30.00%

Abstract:

Non-Volatile Memory (NVM) technology holds promise to replace SRAM and DRAM at various levels of the memory hierarchy. The interest in NVM is motivated by the difficulty of scaling DRAM beyond 22 nm and, in the long term, by a lower cost per bit. While offering higher density and negligible static power (leakage and refresh), NVM suffers increased latency and energy per memory access. This paper develops energy and performance models of memory systems and applies them to understand the energy efficiency of replacing or complementing DRAM with NVM. Our analysis focuses on the application of NVM in main memory. We demonstrate that NVM such as STT-RAM and RRAM is energy-efficient for memory sizes commonly employed in servers and high-end workstations, but PCM is not. Furthermore, the model is well suited to quickly evaluating the impact of changes to the model parameters, which may be achieved through optimization of the memory architecture, and to determining the key parameters that affect system-level energy and performance.
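
A deliberately simple energy model in the spirit of the analysis above: total energy is background (leakage plus refresh) power times runtime plus dynamic energy per access times the number of accesses. The parameter values below are placeholders, not the paper's technology numbers.

    def memory_energy(static_power_w, energy_per_access_j, accesses, runtime_s):
        """Background energy plus dynamic access energy for one memory technology."""
        return static_power_w * runtime_s + energy_per_access_j * accesses

    runtime_s = 60.0
    accesses = 5e9     # main-memory accesses during the run

    dram = memory_energy(static_power_w=2.0, energy_per_access_j=20e-9,
                         accesses=accesses, runtime_s=runtime_s)
    nvm = memory_energy(static_power_w=0.05, energy_per_access_j=60e-9,
                        accesses=accesses, runtime_s=runtime_s)
    print(f"DRAM ~ {dram:.0f} J, NVM ~ {nvm:.0f} J")
    # Here the access-heavy workload favours DRAM; because DRAM background power
    # grows with capacity while NVM's stays negligible, larger memories or less
    # access-intensive workloads tip the balance toward NVM -- the trade-off the
    # paper's model quantifies.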