974 results for Objective functions


Relevance:

60.00%

Publisher:

Abstract:

An efficient strategy for the identification of delamination in composite beams and connected structures is presented. A spectral finite-element model incorporating a damaged spectral element is used for model-based prediction of the damaged structural response in the frequency domain. A genetic algorithm (GA) specially tailored for damage identification is derived and integrated with the finite-element code for automation. For best application of the GA, the sensitivities of various objective functions with respect to the delamination parameters are studied, and important conclusions are presented. Model-based simulations of increasing complexity illustrate attractive features of the strategy in terms of both accuracy and computational cost, showing the potential of such strategies for the development of smart structural health monitoring software and systems.
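As a hedged illustration of this model-based strategy, the sketch below runs a simple GA over delamination (location, size) to minimize the misfit between a measured frequency-domain response and a forward model. Here `predict_frf` is a hypothetical stand-in for the damaged spectral-element model, not the paper's formulation.

```python
# Minimal GA sketch for delamination identification. `predict_frf` is a
# hypothetical stand-in for the damaged spectral-element model, mapping
# delamination (location, size) to a frequency-domain response.
import numpy as np

rng = np.random.default_rng(0)

def predict_frf(params, freqs):
    loc, size = params
    return np.sin(freqs * 10.0 * loc) * np.exp(-size * freqs)

freqs = np.linspace(0.1, 1.0, 50)
measured = predict_frf((0.6, 0.3), freqs)          # synthetic "measurement"

def objective(params):
    # Misfit between model-predicted and measured frequency response.
    return np.sum((predict_frf(params, freqs) - measured) ** 2)

pop = rng.uniform(0.0, 1.0, size=(40, 2))          # (location, size) in [0, 1]
for gen in range(100):
    fitness = np.array([objective(p) for p in pop])
    elite = pop[np.argsort(fitness)[:20]]          # selection (minimization)
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    pop = parents.mean(axis=1)                     # arithmetic crossover
    pop = np.clip(pop + rng.normal(0, 0.02, pop.shape), 0.0, 1.0)  # mutation

print("estimated (location, size):", pop[np.argmin([objective(p) for p in pop])])
```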

Relevance:

60.00%

Publisher:

Abstract:

The topology optimization problem for the synthesis of compliant mechanisms has been formulated in many different ways in the last 15 years, but there is not yet a definitive formulation that is universally accepted. Furthermore, there are two unresolved issues in this problem. In this paper, we present a comparative study of five distinctly different formulations that are reported in the literature. Three benchmark examples are solved with these formulations using the same input and output specifications and the same numerical optimization algorithm. A total of 35 different synthesis examples are implemented. The examples are limited to a desired instantaneous output direction for a prescribed input force direction. Hence, this study is limited to linear elastic modeling with small deformations. Two design parameterizations, namely the frame-element-based ground structure and the density approach using continuum elements, are used. The obtained designs are evaluated with all other objective functions and are compared with each other. The checkerboard patterns, point flexures, the ability to converge from an unbiased uniform initial guess, and the computation time are analyzed. Some observations are noted based on the extensive implementation done in this study. Complete details of the benchmark problems and the results are included. The computer codes related to this study are made available on the internet for ready access.
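A minimal sketch of the cross-evaluation step, in which each synthesized design is scored under every objective function. The surrogate objectives `mechanical_advantage` and `geometric_advantage` and all numbers below are illustrative placeholders, not the paper's five formulations.

```python
# Cross-evaluation sketch: score each synthesized design under every
# objective function, as in the comparative study (all values hypothetical).
def mechanical_advantage(d):
    return d["f_out"] / d["f_in"]

def geometric_advantage(d):
    return d["u_out"] / d["u_in"]

objectives = {"MA": mechanical_advantage, "GA": geometric_advantage}
designs = {
    "frame_ground_structure": {"f_in": 1.0, "f_out": 0.4, "u_in": 1.0, "u_out": 2.1},
    "density_continuum":      {"f_in": 1.0, "f_out": 0.6, "u_in": 1.0, "u_out": 1.5},
}

# Rows: designs obtained with one formulation; columns: all objectives.
for name, d in designs.items():
    print(name, {obj: round(f(d), 3) for obj, f in objectives.items()})
```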

Relevance:

60.00%

Publisher:

Abstract:

Stirred tank bioreactors, employed in the production of a variety of biologically active chemicals, are operated in batch, fed-batch, and continuous modes. The optimal design of a bioreactor depends on the kinetics of the biological process as well as on the performance criteria (yield, productivity, etc.) under consideration. In this paper, a general framework is proposed for addressing the two key issues in optimal bioreactor design: (i) the choice of the best operating mode and (ii) the corresponding flow-rate trajectories. The optimal design problem is formulated with the initial conditions and the inlet and outlet flow-rate trajectories as decision variables, and with multiple performance criteria (yield, productivity, etc.) as objective functions to be maximized. A computational methodology based on a genetic algorithm is developed to solve this challenging multiobjective optimization problem with multiple decision variables. The applicability of the algorithm is illustrated by solving two challenging problems from the bioreactor optimization literature.
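A toy sketch of the formulation, assuming invented Monod-type kinetics and a piecewise-constant feed-rate trajectory as the decision variable; a simple Pareto-elitist GA traces the yield/productivity trade-off. This is a sketch of the problem structure, not the paper's algorithm or kinetic model.

```python
# Toy multiobjective bioreactor sketch: decision variables are a 5-segment
# feed-rate trajectory; objectives are (yield-like, productivity-like).
import numpy as np

rng = np.random.default_rng(1)

def simulate(feed):                        # crude Monod-type fed-batch model
    X, S, P, V, dt = 0.1, 10.0, 0.0, 1.0, 0.1
    for F in np.repeat(feed, 20):          # each gene held for 20 time steps
        mu = 0.4 * S / (0.5 + S)
        X += (mu * X - F / V * X) * dt
        S = max(S + (F / V * (20.0 - S) - mu * X / 0.5) * dt, 0.0)
        P += (0.3 * mu * X - F / V * P) * dt
        V += F * dt
    t_final = len(feed) * 20 * dt
    return P * V, P * V / t_final          # total product, rate of production

def nondominated(objs):
    return [i for i, f in enumerate(objs)
            if not any(np.all(g >= f) and np.any(g > f) for g in objs)]

pop = rng.uniform(0.0, 0.2, size=(60, 5))  # candidate feed-rate trajectories
for gen in range(50):
    objs = np.array([simulate(ind) for ind in pop])
    front = nondominated(objs)             # keep the current Pareto front
    parents = pop[rng.choice(front, size=60)]
    pop = np.clip(parents + rng.normal(0, 0.01, parents.shape), 0.0, 0.2)

objs = np.array([simulate(ind) for ind in pop])
print("Pareto-front objective vectors:\n", objs[nondominated(objs)][:5])
```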

Relevance:

60.00%

Publisher:

Abstract:

Three algorithms for reactive power optimization, each with a different objective function, are proposed in this paper. The objectives are minimization of the sum of the squares of the voltage deviations of the load buses, minimization of the sum of the squares of the voltage-stability L-indices of the load buses (the ΣL² algorithm), and minimization of the system real power loss (Ploss). The approach adopted is an iterative scheme with successive power-flow analysis using the decoupled technique and solution of the linear programming problem using the upper-bound optimization technique. Results obtained with these objectives are compared, and an analysis of the objective functions is presented to illustrate their relative advantages. It is observed that, by comparing the different objective functions, it is possible to identify critical on-load tap changers (OLTCs) that should be switched to manual operation to avoid possible voltage instability caused by their voltage-improvement-based operation under heavy load conditions. The algorithms have been tested under simulated conditions on a few test systems. Results for a practical 24-node equivalent of the EHV Indian power network and for a 205-bus EHV system are presented for illustration.
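A minimal sketch of the three objective functions, assuming the load-bus voltages, L-indices, and line losses are already given by a decoupled power-flow solution (the values below are placeholders):

```python
# Sketch of the three objectives compared in the paper, for a toy system
# state assumed to come from a decoupled power-flow solution.
import numpy as np

V_load = np.array([0.97, 1.04, 0.95])       # load-bus voltage magnitudes (p.u.)
L = np.array([0.21, 0.18, 0.30])            # voltage-stability L-indices
line_loss = np.array([0.012, 0.008, 0.02])  # per-line real power losses (p.u.)

def sum_sq_voltage_deviation(V, V_ref=1.0):
    return float(np.sum((V - V_ref) ** 2))

def sum_sq_L_indices(L):                    # the sum-of-L-squares objective
    return float(np.sum(L ** 2))

def real_power_loss(losses):                # the Ploss objective
    return float(np.sum(losses))

print(sum_sq_voltage_deviation(V_load), sum_sq_L_indices(L),
      real_power_loss(line_loss))
```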

Relevance:

60.00%

Publisher:

Abstract:

Mobile P2P technology provides a scalable approach for content delivery to a large number of users on their mobile devices. In this work, we study the dissemination of a single item of content (e.g., an item of news, a song, or a video clip) among a population of mobile nodes. Each node in the population is either a destination (interested in the content) or a potential relay (not yet interested in the content). There is an interest evolution process by which nodes not yet interested in the content (i.e., relays) can become interested (i.e., become destinations) on learning about the popularity of the content (i.e., the number of already interested nodes). In our work, the interest in the content evolves under the linear threshold model. The content is copied between nodes when they make random contact; for this we employ a controlled epidemic spread model. We model the joint evolution of the copying process and the interest evolution process, and derive the joint fluid-limit ordinary differential equations. We then study the selection of the parameters under the content provider's control to optimize various objective functions that aim at maximizing content popularity and efficient content delivery.
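A hedged sketch of such joint fluid-limit ODEs. The rate forms below are invented for illustration (the paper derives its own limits under the linear threshold model); `beta` plays the role of a copying parameter under the provider's control.

```python
# Joint fluid-limit sketch: copying (epidemic) process plus interest
# evolution, with invented rate forms.
import numpy as np
from scipy.integrate import odeint

beta, alpha = 0.5, 0.3   # copying rate; interest-conversion rate

def rhs(y, t):
    r, d, c = y                          # relays, destinations w/o copy, copies
    become_interested = alpha * r * c    # popularity-driven interest evolution
    copying = beta * c * d               # content copied on random contact
    return [-become_interested, become_interested - copying, copying]

t = np.linspace(0.0, 50.0, 500)
y = odeint(rhs, [0.89, 0.10, 0.01], t)   # initial fractions of the population
print("final interested fraction:", y[-1, 1] + y[-1, 2])
```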

Relevance:

60.00%

Publisher:

Abstract:

In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given the individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents, and determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed from real-world data obtained through a survey on human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property we call expected weak insensitivity, and we demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties that make our methodology attractive for scalable preference aggregation over large-scale social networks. We conclude that our approach is superior to random polling while aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
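A toy sketch of the idea, using Borda aggregation over a sampled node subset versus the full population. Here the preferences are random and the subset is chosen uniformly; the paper instead elicits preferences from critical nodes selected using network structure and homophily.

```python
# Borda aggregation from a small subset vs. the full agent population.
import numpy as np

rng = np.random.default_rng(2)
n_agents, m = 1000, 4
prefs = np.array([rng.permutation(m) for _ in range(n_agents)])

def borda(rankings, m):
    scores = np.zeros(m)
    for r in rankings:                  # r[k] = alternative ranked in place k
        for pos, alt in enumerate(r):
            scores[alt] += m - 1 - pos
    return np.argsort(-scores)          # aggregate ranking, best first

full = borda(prefs, m)
subset = borda(prefs[rng.choice(n_agents, 50, replace=False)], m)
print("full:", full, "subset estimate:", subset)
```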

Relevance:

60.00%

Publisher:

Abstract:

Multiobjective fuzzy methodology is applied to a case study of the Khadakwasla complex irrigation project located near the city of Pune in Maharashtra State, India. Three objectives are considered, namely maximization of net benefits, crop production, and labour employment. The effect of wastewater reuse on the planning scenario is also studied. Three membership functions, namely nonlinear, hyperbolic, and exponential, are analyzed for multiobjective fuzzy optimization. In the present study, the objective functions are treated as fuzzy whereas inflows are taken at dependable levels. It is concluded that the exponential and hyperbolic membership functions produce similar cropping patterns for most situations, whereas the nonlinear membership function produces a different cropping pattern. In all three cases, however, the resulting irrigation intensities exceed the existing irrigation intensity.
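A sketch of common exponential and hyperbolic membership forms from the fuzzy multiobjective literature (the paper's exact forms may differ); each maps an objective value between its worst and best bounds to a satisfaction degree in [0, 1].

```python
# Standard exponential and hyperbolic membership forms (illustrative only).
import numpy as np

def exponential_membership(z, z_min, z_max, s=1.0):
    # 1 at the best value (z_max, for maximization), 0 at the worst (z_min).
    psi = (z_max - z) / (z_max - z_min)
    return (np.exp(-s * psi) - np.exp(-s)) / (1.0 - np.exp(-s))

def hyperbolic_membership(z, z_min, z_max, alpha=3.0):
    mid = 0.5 * (z_min + z_max)
    return 0.5 * np.tanh(alpha * (z - mid) / (z_max - z_min)) + 0.5

z = np.linspace(100.0, 200.0, 5)       # e.g., net benefits between its bounds
print(exponential_membership(z, 100.0, 200.0))
print(hyperbolic_membership(z, 100.0, 200.0))
```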

Relevance:

60.00%

Publisher:

Abstract:

The present study simulates a two-stage silica gel + water adsorption desalination (AD) and chiller system. The adsorber system thermally compresses the low-pressure steam generated in the evaporator to the condenser pressure in two stages. Unlike a standalone adsorption chiller, which operates in a closed cycle, the present system operates in an open cycle in which the condensed desalinated water is not fed back to the evaporator. The mathematical relations formulated in the current study are based on conservation of mass and energy, together with the isotherm relation and kinetics for the RD-type silica gel + water pair. Constitutive relations for each component, namely the evaporator, adsorber, and condenser, are integrated in the model. The heat-exchanger dynamics are modeled using the LMTD method, and the linear driving force (LDF) model is used to predict the dynamic characteristics of the adsorber bed. The system performance indicators, namely specific cooling capacity (SCC), specific daily water production (SDWP), and coefficient of performance (COP), are used as objective functions to optimize the system. The novelty of the present work lies in the introduction of the inter-stage pressure as a new parameter for optimizing the two-stage operation of the AD chiller system. (C) 2014 Elsevier Ltd. All rights reserved.
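A minimal sketch of the LDF bed dynamics. The parameter values below are representative placeholders for silica gel + water, not the correlations used in the paper, and the equilibrium uptake `q_star` would come from the isotherm relation.

```python
# Linear driving force (LDF) sketch for adsorbent uptake:
# dq/dt = k (q* - q), with an Arrhenius-type mass-transfer coefficient.
import numpy as np
from scipy.integrate import odeint

Ds0, Ea, Rp, R = 2.54e-4, 4.2e4, 1.7e-4, 8.314   # m^2/s, J/mol, m, J/(mol K)

def dq_dt(q, t, q_star, T):
    Ds = Ds0 * np.exp(-Ea / (R * T))   # surface diffusivity (Arrhenius form)
    k = 15.0 * Ds / Rp**2              # LDF mass-transfer coefficient (1/s)
    return k * (q_star - q)

t = np.linspace(0.0, 600.0, 200)                  # one half-cycle, seconds
q = odeint(dq_dt, 0.05, t, args=(0.25, 303.0))    # q* from isotherm; T in K
print("uptake after 600 s:", q[-1, 0], "kg/kg")
```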

Relevance:

60.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments with incomplete information. Problems associated with these systems are typically large-scale and computationally intractable, yet they are also well-structured, with features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit those structures to obtain computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but managing them and optimizing their operation poses daunting technical challenges. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization methods that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we show how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least information possible. Our work achieves this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
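A toy sketch of one standard instance of this design methodology: marginal-contribution ("wonderful life") utilities, which yield a potential game whose equilibria align with a system-level welfare function. The welfare function below is invented for illustration.

```python
# Marginal-contribution utility design: each agent's local objective is its
# marginal contribution to an invented global welfare (resource coverage),
# which produces an exact potential game aligned with that welfare.
def global_welfare(actions):
    return len(set(actions))            # toy objective: cover distinct resources

def local_utility(i, actions):
    others = actions[:i] + actions[i + 1:]
    return global_welfare(actions) - global_welfare(others)

# Best-response dynamics, which converge to equilibrium in potential games.
actions, choices = [0, 0, 0], [0, 1, 2]
for _ in range(10):
    for i in range(len(actions)):
        actions[i] = max(choices, key=lambda a: local_utility(
            i, actions[:i] + [a] + actions[i + 1:]))
print("equilibrium actions:", actions)  # agents spread over distinct resources
```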

Relevance:

60.00%

Publisher:

Abstract:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor and network levels can provide scalable techniques for data collection and event detection. First, it considers the abstract problem of distributed algorithms for data collection and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
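A minimal sketch of greedy selection for a submodular objective, the setting in which the classic (1 - 1/e) approximation guarantee applies. Coverage sets are used here as a stand-in objective; the thesis's sensor-selection objectives and its distributed, online variant are more general.

```python
# Greedy maximization of a monotone submodular objective (set coverage).
def greedy_select(sensors, coverage, k):
    """sensors: list of ids; coverage[s]: set of regions sensor s observes."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sensors, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:    # no marginal gain left
            break
        chosen.append(best)
        covered |= coverage[best]
        sensors = [s for s in sensors if s != best]
    return chosen

coverage = {0: {1, 2}, 1: {2, 3, 4}, 2: {4, 5}, 3: {1}}
print(greedy_select(list(coverage), coverage, k=2))   # -> [1, 0]
```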

Relevance:

60.00%

Publisher:

Abstract:

Energy and sustainability are among the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into them. IT is among the fastest-growing sectors in energy usage and greenhouse gas emissions. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but these do not necessarily reduce total energy consumption, because demand for servers keeps growing. Further, little effort has gone into making IT more sustainable, and most improvements come from better "engineering" rather than better "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to respond adaptively to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is challenging, however, because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.
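A toy sketch of "follow the renewables" routing: a simple greedy split of demand toward sites with spare renewable supply, subject to capacity. This is an illustrative heuristic under invented numbers, not the thesis's distributed algorithms or their guarantees.

```python
# Toy geographical load balancing: route demand to minimize brown
# (non-renewable) energy use across data centers.
import numpy as np

renewable = np.array([30.0, 10.0, 50.0])   # renewable supply per site (kW)
capacity  = np.array([60.0, 60.0, 60.0])   # compute capacity (requests/s)
kw_per_req = 1.0                            # energy per unit of load
demand = 100.0

# Greedy: fill the sites with the most renewable supply first.
alloc, remaining = np.zeros(3), demand
for i in np.argsort(-renewable):
    alloc[i] = min(capacity[i], remaining)
    remaining -= alloc[i]

brown = np.maximum(alloc * kw_per_req - renewable, 0.0).sum()
print("allocation:", alloc, "brown energy:", brown, "kW")
```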

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges in integrating more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially address these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive, and the potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance that let data center operators handle uncertainties under popular demand response programs. Based on customers' local control rules, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society to improve social welfare.

Relevance:

60.00%

Publisher:

Abstract:

Modern Engineering Design involves the deployment of many computational tools. Research on challenging real-world design problems is focused on developing improvements for the engineering design process through the integration and application of advanced computational search/optimization and analysis tools. Successful application of these methods generates vast quantities of data on potential optimum designs. To gain maximum value from the optimization process, designers need to visualise and interpret this information, leading to better understanding of the complex and multimodal relations between parameters, objectives and decision-making of multiple and strongly conflicting criteria. Initial work by the authors has identified that the Parallel Coordinates interactive visualisation method has considerable potential in this regard. This methodology involves significant levels of user-interaction, making the engineering designer central to the process, rather than the passive recipient of a deluge of pre-formatted information. In the present work we have applied and demonstrated this methodology in two different aerodynamic turbomachinery design cases; a detailed 3D shape design for compressor blades, and a preliminary mean-line design for the whole compressor core. The first case comprises 26 design parameters for the parameterisation of the blade geometry, and we analysed the data produced from a three-objective optimization study, thus describing a design space with 29 dimensions. The latter case comprises 45 design parameters and two objective functions, hence developing a design space with 47 dimensions. In both cases the dimensionality can be managed quite easily in Parallel Coordinates space, and most importantly, we are able to identify interesting and crucial aspects of the relationships between the design parameters and the optimum level of the objective functions under consideration. These findings guide the human designer to find answers to questions that could not even be addressed before. In this way, understanding the design leads to more intelligent decision-making and design space exploration. © 2012 AIAA.
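A minimal sketch of a static parallel-coordinates view with pandas/matplotlib on stand-in data (the study used an interactive tool over its 29- and 47-dimensional design spaces; the column names below are hypothetical):

```python
# Parallel-coordinates sketch: each design is a polyline across parameter
# and objective axes, colored by objective level so regions separate visually.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.uniform(size=(50, 4)),
                  columns=["param_1", "param_2", "obj_efficiency", "obj_loss"])
df["class"] = np.where(df["obj_loss"] < 0.5, "low-loss", "high-loss")

parallel_coordinates(df, "class", alpha=0.4)
plt.title("Design space in parallel coordinates")
plt.show()
```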

Relevance:

60.00%

Publisher:

Abstract:

The primary approaches to understanding the inner properties of the earth and the distribution of mineral resources are surface geological survey and the inversion and interpretation of geophysical/geochemical data. The purpose of seismic inversion is to extract information about subsurface structural geometry and the distribution of material properties from seismic waves, for use in resource prospecting and exploitation and in the study of the earth's inner structure and dynamic processes. Although the study of seismic parameter inversion has achieved a great deal since the 1950s, problems persist in real-data applications owing to nonlinearity and ill-posedness. Most methods for inverting geophysical parameters are iterative and depend strongly on the initial model and the constraint conditions, and it is difficult to obtain a credible result in the presence of environmental and equipment noise introduced during seismic wave excitation, propagation, and acquisition. Seismic inversion on real data is a typical nonlinear problem whose objective functions usually have multiple minima, which makes it formidable for commonly used methods such as generalized-linearization and quasi-linearization inversion because of their local convergence. Global nonlinear search methods, which do not rely heavily on the initial model, seem more promising, but the amount of computation they require for real-data processing is unacceptable. To address these problems, this paper presents a global nonlinear inversion approach that brings the Quantum Monte Carlo (QMC) method into geophysical inverse problems. QMC has been an effective numerical method for studying quantum many-body systems governed by the Schrödinger equation, and its methods can be categorized into zero-temperature and finite-temperature variants. This paper is subdivided into four parts. In the first, we briefly review the theory of the QMC method, draw its connections to nonlinear geophysical inversion, and give the flow chart of the algorithm. In the second part, we apply four QMC inversion methods to 1D wave-equation impedance inversion and compare their convergence rates and accuracy; the feasibility, stability, and anti-noise capacity of the algorithms are also discussed. Numerical results demonstrate that geophysical nonlinear inversion and other nonlinear optimization problems can be solved by means of the QMC method, and that Green's function Monte Carlo (GFMC) and diffusion Monte Carlo (DMC) are better suited to real data than path-integral Monte Carlo (PIMC) and variational Monte Carlo (VMC). The third part provides parallel versions of the serial QMC algorithms, applies them to 2D acoustic velocity inversion and real seismic data processing, and further discusses their global-search and anti-noise capabilities. The inversion results show the robustness of these algorithms and their feasibility for 2D inversion and real-data processing; the parallel inversion algorithms are also applicable to other optimization problems. Finally, useful conclusions are drawn in the last section. The analysis and comparison of the results indicate that bringing QMC into geophysical inversion is successful: QMC yields a nonlinear inversion method that is stable, efficient, and noise-tolerant. Its most appealing property is that it does not rely heavily on the initial model and is well suited to nonlinear, multi-minimum geophysical inverse problems. The method can also be used in other fields involving nonlinear optimization.
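A toy sketch of a diffusion-Monte-Carlo-style global search on a multi-minimum misfit, illustrating the walker-diffusion and branching/resampling idea. This is an invented one-dimensional toy, not the thesis's GFMC/DMC implementations or its wave-equation forward models.

```python
# Walkers diffuse by Gaussian steps and are resampled with weights that
# favor low misfit, so the population concentrates near the global minimum.
import numpy as np

rng = np.random.default_rng(4)

def misfit(m):                        # multi-minimum objective, global min at 0
    return m**2 + 2.0 * np.sin(5.0 * m) ** 2

walkers = rng.uniform(-4.0, 4.0, size=200)
tau = 0.3                             # effective "temperature" of the weights
for step in range(300):
    walkers += rng.normal(0.0, 0.1, walkers.shape)       # diffusion step
    w = np.exp(-(misfit(walkers) - misfit(walkers).min()) / tau)
    idx = rng.choice(walkers.size, size=walkers.size, p=w / w.sum())
    walkers = walkers[idx]                               # branching/resampling
    tau = max(0.02, tau * 0.99)                          # slowly sharpen

print("best model estimate:", walkers[np.argmin(misfit(walkers))])
```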

Relevance:

60.00%

Publisher:

Abstract:

We consider challenges associated with application domains in which a large number of distributed, networked sensors must perform a sensing task repeatedly over time. For the tasks we consider, there are three significant challenges to address. First, nodes have resource constraints imposed by their finite power supply, which motivates energy-conserving computations. Second, for the applications we describe, the utility derived from a sensing task may vary depending on the placement and size of the set of nodes that participate, which often involves complex objective functions for nodes to target. Finally, nodes must attempt to realize these global objectives with only local information. We present a model for such applications, in which we define appropriate global objectives based on utility functions and specify a cost model for energy consumption. Then, for an important class of utility functions, we present distributed algorithms that attempt to maximize the utility derived from the sensor network over its lifetime. The algorithms and experimental results we present enable nodes to adaptively change their roles over time and to use dynamic reconfiguration of routes to load-balance energy consumption in the network.
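A toy sketch of energy-aware role rotation, a simple stand-in for the utility-driven distributed algorithms described above (the costs and lifetime criterion below are invented, and the scheduling here is centralized for brevity):

```python
# Each round, the k nodes with the most residual energy take the sensing
# role, balancing battery depletion across the network.
import numpy as np

rng = np.random.default_rng(5)
energy = rng.uniform(0.5, 1.0, size=10)    # residual battery per node
k, sense_cost = 3, 0.05

rounds = 0
while np.sort(energy)[-k] > sense_cost:    # k nodes can still afford to sense
    active = np.argsort(-energy)[:k]       # rotate roles toward high energy
    energy[active] -= sense_cost
    rounds += 1
print("network lifetime (rounds):", rounds)
```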

Relevance:

60.00%

Publisher:

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios: in the first, the current state of the world, under which the decisions are to be made, is known in advance; in the second, it is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multiobjective constraint optimization and focus on extending the solution algorithms for these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches: the first is based on linear programming; the second uses a distance-based algorithm (measuring the distance between a point and a convex cone); the third uses matrix multiplication, which yields much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
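A minimal sketch of Pareto (undominated) filtering and a simple ϵ-covering of the maximal set, the basic operations the thesis generalizes with trade-off-induced dominance and faster matrix-based checks (the covering rule below is one simple variant, not necessarily the thesis's):

```python
# Pareto filtering and greedy epsilon-covering of maximal utility vectors.
import numpy as np

def pareto_max(U):
    """Keep utility vectors not dominated by any other (maximization)."""
    return [u for u in U
            if not any(np.all(v >= u) and np.any(v > u) for v in U)]

def eps_cover(front, eps):
    """Keep a vector only if nothing already kept (1 + eps)-covers it."""
    kept = []
    for u in sorted(front, key=lambda v: -v.sum()):
        if not any(np.all((1 + eps) * k >= u) for k in kept):
            kept.append(u)
    return kept

U = [np.array(v) for v in [(4, 1), (3, 3), (1, 4), (2, 2), (3.1, 2.9)]]
front = pareto_max(U)
print(len(front), "maximal;", len(eps_cover(front, 0.1)), "after eps-cover")
```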