964 results for Near-optimal solutions
Abstract:
We are concerned with two-level optimization problems called strong-weak Stackelberg problems, which generalize the class of Stackelberg problems in the strong and weak sense. To handle the fact that such two-level optimization problems may fail to have a solution even under mild assumptions, we consider a regularization involving ε-approximate optimal solutions of the lower-level problems. We prove the existence of optimal solutions for the regularized problems and present some approximation results as the parameter ε goes to zero. Finally, as an example, we consider an optimization problem associated with a best bound given in [2] for a system of nondifferentiable convex inequalities.
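As a hedged sketch of the construction the abstract describes (the notation below follows standard bilevel conventions and is ours, not necessarily the paper's): for a leader decision x with upper-level objective F and lower-level objective f, the ε-regularized lower-level solution set and the resulting strong and weak upper-level values are

\[
S_\varepsilon(x) = \bigl\{\, y \in Y : f(x,y) \le \inf_{z \in Y} f(x,z) + \varepsilon \,\bigr\},
\]
\[
v_s(\varepsilon) = \inf_{x \in X}\, \inf_{y \in S_\varepsilon(x)} F(x,y),
\qquad
v_w(\varepsilon) = \inf_{x \in X}\, \sup_{y \in S_\varepsilon(x)} F(x,y).
\]

Since S_ε(x) is nonempty for every ε > 0 whenever the lower-level infimum is finite, the regularized problems remain well posed even when the exact solution set S_0(x) is badly behaved.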
Abstract:
Let v be an array. The range query problem concerns the design of data structures for implementing the following operations: update(j, x) has the effect v_j ← v_j + x, and the query retrieve(i, j) returns the partial sum v_i + ... + v_j. These tasks are to be performed on-line. We define an algebraic model, based on the use of matrices, for the study of the problem. We also establish a lower bound on the sum of the average complexities of the two kinds of operations, and demonstrate that this lower bound is near optimal in terms of asymptotic complexity.
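As an illustration of the two operations (the data structure below is our choice for the sketch, not the paper's matrix-based model), a binary indexed tree supports both update and retrieve in O(log n) time, which matches lower bounds of this kind up to constant factors:

```python
class FenwickTree:
    """Binary indexed tree over v[1..n]; both operations cost O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, j, x):
        """v[j] <- v[j] + x."""
        while j <= self.n:
            self.tree[j] += x
            j += j & -j  # jump to the next covering node

    def _prefix(self, j):
        """Return v[1] + ... + v[j]."""
        s = 0
        while j > 0:
            s += self.tree[j]
            j -= j & -j  # strip the lowest set bit
        return s

    def retrieve(self, i, j):
        """Return v[i] + ... + v[j]."""
        return self._prefix(j) - self._prefix(i - 1)
```

For example, after `update(3, 5)` on a fresh tree, `retrieve(1, 4)` returns 5.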
Abstract:
* This paper is partially supported by the National Science Fund of the Bulgarian Ministry of Education and Science under contract № I–1401\2004, "Interactive Algorithms and Software Systems Supporting Multicriteria Decision Making".
Abstract:
In this paper a genetic algorithm (GA) is applied to the Maximum Betweenness Problem (MBP). The maximum of the objective function is obtained by finding a permutation that satisfies the maximal number of betweenness constraints. Each permutation considered is genetically coded with an integer representation, and standard operators are used in the GA. The instances in the experimental results are randomly generated. For smaller dimensions, optimal solutions of MBP are obtained by total enumeration, and on those instances the GA reached all optimal solutions except one. The GA also obtained results for larger instances of up to 50 elements and 1000 triples. The running time needed to find optimal results is quite short.
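As a sketch of the objective being maximized (the scoring function below is our illustration; the GA's exact encoding and operators are as summarized above): a triple (a, b, c) is satisfied when b lies between a and c in the permutation, in either direction.

```python
def betweenness_score(perm, triples):
    """Count the triples (a, b, c) for which b lies between a and c
    in the permutation `perm` -- the quantity the GA maximizes."""
    pos = {element: i for i, element in enumerate(perm)}  # element -> position
    satisfied = 0
    for a, b, c in triples:
        if pos[a] < pos[b] < pos[c] or pos[c] < pos[b] < pos[a]:
            satisfied += 1
    return satisfied
```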
Abstract:
In this paper a variable neighborhood search (VNS) approach to the task assignment problem (TAP) is considered. An appropriate neighborhood scheme, a shaking operator, and a local search procedure are constructed specifically for this problem. Computational results are presented for instances from the literature and compared with optimal solutions obtained by the CPLEX solver and with heuristic solutions generated by a genetic algorithm. The proposed VNS approach reaches all optimal solutions in short computational time.
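The generic VNS loop underlying such an approach looks as follows (a minimal sketch; `shake`, `local_search`, and `cost` are problem-specific stand-ins, not the paper's actual TAP procedures):

```python
def vns(initial, shake, local_search, cost, k_max, max_iters):
    """Generic VNS: perturb the incumbent in neighborhood k, improve it
    with local search, and reset to k = 1 whenever it improves."""
    best, best_cost = initial, cost(initial)
    for _ in range(max_iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            candidate_cost = cost(candidate)
            if candidate_cost < best_cost:
                best, best_cost = candidate, candidate_cost
                k = 1  # improvement: restart from the smallest neighborhood
            else:
                k += 1  # no improvement: widen the shaking radius
    return best
```

The characteristic VNS design choice is visible in the inner loop: a successful local search resets the neighborhood index, while failures progressively widen the shaking radius.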
Abstract:
In this paper a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. CSAHLP consists of two subproblems: choosing a set of hubs from all nodes in a network, and finding the optimal allocation of non-hub nodes to hubs once the set of hubs is known. The VNS algorithm is used for the first subproblem, while the CPLEX solver is used for the second. Computational results demonstrate that the proposed algorithm reaches optimal solutions, in short computational time, on all 20 test instances for which optimal solutions are known.
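A plausible realization of that decomposition, with the inner allocation delegated to an exact solver as in the paper (the swap-based shaking move and the `solve_allocation` stand-in are illustrative assumptions, not the paper's exact design):

```python
import random

def shake_hub_set(hubs, nodes, k):
    """Outer-level shaking: swap k randomly chosen hubs with k randomly
    chosen non-hub nodes (one plausible neighborhood for hub selection)."""
    hubs = set(hubs)
    non_hubs = [v for v in nodes if v not in hubs]
    for h, v in zip(random.sample(sorted(hubs), k), random.sample(non_hubs, k)):
        hubs.remove(h)
        hubs.add(v)
    return hubs

def evaluate(hubs, nodes, solve_allocation):
    """Outer objective: cost of the best allocation of non-hubs to the
    given hubs; `solve_allocation` stands in for the exact (CPLEX) solve."""
    return solve_allocation(hubs, [v for v in nodes if v not in hubs])
```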
Abstract:
Insulated-gate bipolar transistor (IGBT) power modules find widespread use in numerous power conversion applications where their reliability is of significant concern. Standard IGBT modules are fabricated for general-purpose applications, while few are designed for bespoke applications; conventional IGBT design can, however, be improved by multiobjective optimization. This paper proposes a novel design method that considers die-attachment solder failures induced by short power cycling and baseplate solder fatigue induced by thermal cycling, which are among the major failure mechanisms of IGBTs. Thermal resistance is calculated analytically, and the plastic work is obtained with a high-fidelity finite-element model that has been validated experimentally. The objective of minimizing the plastic work, together with the constraint functions, is formulated with a surrogate model. The nondominated sorting genetic algorithm II (NSGA-II) is used to search for the Pareto-optimal solutions and the best design. This combination yields an effective approach to optimizing the physical structure of power electronic modules, taking account of historical environmental and operational conditions in the field.
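NSGA-II's ranking is built on Pareto dominance; a minimal sketch of that core relation follows (objective vectors are assumed to be minimized; this shows the generic mechanism, not the paper's surrogate-assisted setup):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset -- the set ranked first by
    NSGA-II's nondominated sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```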
Abstract:
The classical economic order quantity model involves two major types of inventory cost: ordering and inventory holding costs. In this paper we investigate the effect of purchasing activity on the cash flow of a firm. In the analysis we use a cash flow identity that closely resembles the inventory balance equations, and we examine the purchasing and ordering process with discounted costs. The cost function of the model consists of linear cash holding costs, a linear opportunity cost of spending cash, and linear interest costs. We derive the optimal solution of the proposed model and illustrate it with a numerical example.
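For reference, the classical model the paper extends trades off exactly those two costs: with demand rate $D$, fixed ordering cost $K$, and unit holding cost $h$, the average cost per unit time and the well-known optimal lot size are (the standard textbook result, not the paper's discounted cash-flow variant)

\[
C(Q) = \frac{KD}{Q} + \frac{hQ}{2},
\qquad
Q^* = \sqrt{\frac{2KD}{h}} .
\]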
Abstract:
The study presents three economic applications of the calculus of variations, all based on the Leontief model. After examining the optimal paths, we ask whether the solutions obtained from the Euler–Lagrange system of differential equations are really optimal solutions of the models. The study concludes that the optimal solutions can only be determined by introducing additional economic conditions; at the same time, these conditions allow the models presented to be fitted into a more general framework. The final result of the study is that the optimal solution of all three models corresponds to the von Neumann ray.
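The necessary condition in question is the standard Euler–Lagrange equation: for a functional $J[x] = \int_0^T L(t, x, \dot{x})\,dt$, a smooth optimal path must satisfy

\[
\frac{\partial L}{\partial x} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} = 0 .
\]

The condition is necessary but not sufficient, which is precisely why additional economic conditions are needed to single out genuinely optimal solutions.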
Abstract:
A return guarantee constitutes a key ingredient of classical life insurance premium calculation. In the current low interest rate environment, insurers face increasingly strong financial incentives to reduce the guaranteed returns embedded in life insurance contracts. However, efforts to lower return guarantees are restrained by the associated demand effects, since a higher guaranteed return lowers the net price of the insurance cover. This tradeoff between possibly higher future insurance obligations and possibly larger demand for life insurance products can, in theory, also be taken into account when determining optimal guaranteed returns. In this paper, the optimality of return guarantee levels is analyzed from a solvency point of view. The existence and some other properties of optimal guaranteed returns are explored and compared in a simple model for two measures of solvency risk (company-level and contract-level VaR). The paper concludes that a solvency-risk-minimizing optimal guaranteed return may exist in theory, although its practical attainability can be impeded by economic and regulatory constraints.
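Both solvency measures are built on Value-at-Risk. For a loss variable $L$ and confidence level $\alpha$, the standard definition is

\[
\mathrm{VaR}_\alpha(L) = \inf\bigl\{\, \ell \in \mathbb{R} : \mathbb{P}(L > \ell) \le 1 - \alpha \,\bigr\},
\]

applied either to the insurer's aggregate loss (company-level) or to the loss on a single contract (contract-level), as the paper distinguishes.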
Abstract:
In recent years the internet has grown exponentially and become more complex, and this increased complexity potentially introduces more network-level instability. For any end-to-end internet connection, however, maintaining throughput and reliability at a certain level is very important, because both directly affect the connection's normal operation. A challenging research task is therefore to improve a network connection's performance by optimizing its throughput and reliability. This dissertation proposes an efficient and reliable transport layer protocol, called concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol aggregates the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where CDT is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's round-trip time (RTT) to optimize CDT performance, was developed for the proposed cTCP protocol. It primarily comprises an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability, together with an RTT-based congestion control and retransmission policy. Experimental results show that the RTT-based CDT mechanism achieves good CDT performance under different network conditions. Finally, a CWND-based CDT mechanism, which uses a path's congestion window (CWND) to optimize CDT performance, is introduced. It primarily comprises a CWND-based load allocation scheme, which assigns data to paths based on their CWNDs to achieve aggregate bandwidth; CWND-based path management, used to optimize the connection's fault tolerance; and a congestion control and retransmission management policy that handles each path separately, similarly to regular TCP. The corresponding experimental results show that this mechanism achieves near-optimal CDT performance under different network conditions.
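The abstract does not spell out the load-distribution rule; a common heuristic consistent with the description is to give each path a share of the data inversely proportional to its RTT (purely illustrative, not the dissertation's exact scheme):

```python
def rtt_based_shares(rtts):
    """Split a send window across paths in inverse proportion to their
    RTTs, so shorter (faster) paths carry more data. Illustrative only."""
    weights = [1.0 / rtt for rtt in rtts]
    total = sum(weights)
    return [w / total for w in weights]

# Paths with RTTs of 20 ms, 40 ms, and 80 ms receive roughly
# 57%, 29%, and 14% of the data, respectively:
print(rtt_based_shares([0.020, 0.040, 0.080]))
```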
Abstract:
The total time a customer spends in a business process system, called the customer cycle time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes, in which machine servers dominate the system. In a business process the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for quick analysis of business processes, together with optimization models that yield optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network (MOQN) models so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.

The main findings of the research are as follows. First, parallelization can reduce the cycle time for those customer classes that require more than one parallel activity; under highly utilized servers, however, the coordination time due to parallelization overwhelms the savings, since the waiting time increases significantly and thus the cycle time increases. Second, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology is high, more division is required to improve the cycle time; when the required coordination time is high, consolidation is required to improve the cycle time.
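As a flavor of the arithmetic such models build on (the textbook M/M/c station below is a standard building block, not the dissertation's extended MOQN model): the cycle time at a station is the queueing delay plus the service time.

```python
from math import factorial

def mmc_waiting_time(lam, mu, c):
    """Expected time in queue at an M/M/c station with arrival rate lam,
    per-server service rate mu, and c (human) servers."""
    rho = lam / (c * mu)
    assert rho < 1, "station must be stable"
    a = lam / mu  # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    prob_wait = a**c / (factorial(c) * (1 - rho)) * p0  # Erlang C formula
    return prob_wait / (c * mu - lam)

# Cycle time at the station = waiting time + mean service time:
print(mmc_waiting_time(8.0, 3.0, 4) + 1.0 / 3.0)
```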
Abstract:
The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These considerations for environmental monitoring platforms using wireless sensor networks (WSNs) are applied to the detection of methylmercury (MeHg) and of the environmental parameters affecting its formation (methylation) and breakdown (demethylation).

The sampling methodology investigates a proof of concept for monitoring MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using quartz crystal microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies the lessons learned to organic Hg (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies are enhanced by using spatial analysis techniques, namely geostatistical analysis (classical variography and ordinary point kriging), to help predict the phenomenon of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy). The methodology improves the precision of control by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated for near-optimal placement in a WSN: (1) environmental sensors (e.g., humidity, moisture, temperature) and (2) visual sensors (e.g., cameras). Near-optimal placement of environmental sensors is found with a strategy that minimizes the variance of the spatial analysis over randomly chosen points representing candidate sensor locations; the spatial analysis uses geostatistics, and the optimization uses Monte Carlo analysis. Visual sensor placement for omnidirectional cameras operating in a WSN uses an optimal placement metric (OPM), calculated for each grid point from the line-of-sight (LOS) in a defined number of directions, with known obstacles taken into consideration. Optimal camera placement areas are those generating the largest OPMs. The statistics are examined by Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
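A minimal sketch of the Monte Carlo placement search for the environmental sensors (the `placement_variance` callable stands in for the kriging-variance evaluation, which the dissertation computes geostatistically):

```python
import random

def monte_carlo_placement(candidate_points, n_sensors, placement_variance,
                          trials=1000):
    """Sample random sensor placements and keep the one with the lowest
    spatial-prediction variance, mirroring the strategy in the abstract."""
    best, best_var = None, float("inf")
    for _ in range(trials):
        placement = random.sample(candidate_points, n_sensors)
        var = placement_variance(placement)
        if var < best_var:
            best, best_var = placement, var
    return best, best_var
```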
Abstract:
Environmentally conscious construction has received a significant amount of research attention during the last decades. Even though the construction literature is rich in studies that emphasize the importance of environmental impact during the construction phase, most previous studies have failed to combine environmental analysis with other project performance criteria, mainly because they have overlooked the multi-objective nature of construction projects. To achieve environmentally conscious construction, multiple objectives and their relationships need to be successfully analyzed in the complex construction environment. The complex construction system is composed of changing project conditions that have an impact on the relationship between the time, cost, and environmental impact (TCEI) of construction operations, yet this impact is still unknown to construction professionals. Studying it is vital to fulfilling multiple project objectives and achieving environmentally conscious construction. This research proposes an analytical framework to analyze the impact of changing project conditions on the TCEI relationship, with greenhouse gas (GHG) emissions as the environmental impact category. The methodology utilizes multi-agent systems, multi-objective optimization, the analytic network process, and system dynamics tools to study the TCEI relationships and support decision-making under the influence of project conditions. Life cycle assessment (LCA) is applied to evaluate environmental impact in terms of GHG. The mixed-method approach allowed for the collection and analysis of qualitative and quantitative data: structured interviews with professionals in the highway construction field were conducted to gain their perspectives on decision-making under certain project conditions, while quantitative data were collected from the Florida Department of Transportation (FDOT) for highway resurfacing projects. The data collected were used to test the framework, which yielded statistically significant results in simulating project conditions and optimizing TCEI. The results showed that changes in project conditions had a significant impact on the TCEI-optimal solutions, and the correlations within TCEI suggested that its components affect each other positively, but with different strengths. The findings of the study will assist contractors in visualizing the impact of their decisions on the TCEI relationship.
Abstract:
With developments in computing and communication technologies, wireless sensor networks have become popular in a wide range of application areas such as health, military, environment, and habitat monitoring. Wireless acoustic sensor networks in particular have been widely used for target tracking applications due to their passive nature, reliability, and low cost. Traditionally, acoustic sensor arrays built in linear, circular, or other regular shapes are used for tracking acoustic sources. Maintaining the relative geometry of the acoustic sensors in the array is vital for accurate target tracking, which greatly reduces the flexibility of the sensor network. To overcome this limitation, we propose using only a single acoustic sensor at each sensor node. This design greatly improves the flexibility of the sensor network and makes it possible to deploy the network in remote or hostile regions through air-drop or other stealth approaches. Acoustic arrays are capable of performing target localization or generating bearing estimates on their own; with only a single acoustic sensor, however, the sensor nodes cannot generate such measurements, so self-organization of sensor nodes into virtual arrays that perform target localization is essential. We developed an energy-efficient, distributed self-organization algorithm for target tracking in wireless acoustic sensor networks. The major error sources of the localization process were studied, and an energy-aware node selection criterion was developed to minimize the target localization errors. Using this criterion, the self-organization algorithm selects a near-optimal localization sensor group to minimize the target tracking errors. In addition, a message passing protocol was developed to implement the self-organization algorithm in a distributed manner. To achieve extended sensor network lifetime, energy conservation was incorporated into the self-organization algorithm through a sleep-wakeup management mechanism with a novel cross-layer adaptive wakeup probability adjustment scheme. The simulation results confirm that the developed self-organization algorithm provides satisfactory target tracking performance, and the energy saving analysis confirms the effectiveness of the cross-layer power management scheme in extending sensor network lifetime without degrading target tracking performance.
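A minimal sketch of what an energy-aware node selection criterion can look like (the scoring rule below is an illustrative assumption; the dissertation derives its criterion from the actual localization error sources):

```python
def select_localization_group(nodes, target_estimate, group_size):
    """Pick a sensor group for target localization, trading off expected
    localization error (proxied here by distance to the current target
    estimate) against residual battery energy."""
    tx, ty = target_estimate

    def score(node):
        dist = ((node["x"] - tx) ** 2 + (node["y"] - ty) ** 2) ** 0.5
        return dist / max(node["energy"], 1e-9)  # prefer close, energetic nodes

    return sorted(nodes, key=score)[:group_size]
```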