962 results for Constrained Minimization
Abstract:
In this paper we present algorithms that operate on pairs of 0,1-matrices whose product is again a matrix with zero and one entries. When applied to a pair, the algorithms change the number of non-zero entries in the matrices while leaving their product unchanged. We establish the conditions under which the number of 1s decreases. We also recursively define pairs of matrices whose product is a given matrix and for which applying these algorithms minimizes the total number of non-zero entries in both matrices. Such matrices may be interpreted as solutions to a well-known information retrieval problem, in which case the number of 1 entries represents the complexity of the retrieval and update operations.
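To make the setting concrete, the sketch below (illustrative only, not the paper's algorithms) checks the two ingredients the abstract describes, assuming the ordinary matrix product: a rewrite of a pair (A, B) is admissible when the product AB is unchanged and still a 0,1 matrix, and the quantity being minimized is the total number of 1 entries in the pair.

```python
# Illustrative sketch only, not the paper's algorithms.
import numpy as np

def is_valid_pair(A, B):
    """A and B are 0,1 matrices and their product AB again has only 0,1 entries."""
    C = A @ B
    return all(set(np.unique(M)) <= {0, 1} for M in (A, B, C))

def total_ones(A, B):
    """Cost measure from the abstract: total number of non-zero entries in the pair."""
    return int(A.sum() + B.sum())

def preserves_product(A, B, A2, B2):
    """A rewrite (A, B) -> (A2, B2) is admissible only if the product is unchanged."""
    return np.array_equal(A @ B, A2 @ B2)

# Two factorizations of the same 3x3 all-ones matrix with different costs.
A1, B1 = np.ones((3, 3), dtype=int), np.eye(3, dtype=int)        # 9 + 3 = 12 ones
A2, B2 = np.ones((3, 1), dtype=int), np.ones((1, 3), dtype=int)  # 3 + 3 = 6 ones
assert is_valid_pair(A1, B1) and is_valid_pair(A2, B2)
assert preserves_product(A1, B1, A2, B2)
print(total_ones(A1, B1), total_ones(A2, B2))  # 12 6
```

The example shows two factorizations of the same matrix with costs 12 and 6, which is the kind of reduction in the number of 1 entries the paper's algorithms aim for.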
Abstract:
The problem of finite automata minimization is important for software and hardware design. Different types of automata are used to model systems or machines with a finite number of states, and limiting the number of states saves resources and time. In this article we consider a specific type of probabilistic automaton, the reactive probabilistic finite automaton with accepting states (in brief, the reactive probabilistic automaton), and define the languages it accepts. We define a bisimulation relation on the automaton's states and a relation of indistinguishability of states, on the basis of which the automaton can be minimized. We then present a detailed minimization algorithm for reactive probabilistic automata, determine its complexity, and analyse an example solved with this algorithm.
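As an illustration of minimization through an indistinguishability relation, the sketch below performs a generic partition refinement under the usual probabilistic-bisimulation idea (states are merged when they agree on acceptance and assign the same probability mass to every block for every input symbol); it is not the article's exact algorithm, and the example automaton is hypothetical.

```python
# Generic partition-refinement sketch, not the article's exact algorithm.
from collections import defaultdict

def minimize(states, symbols, accepting, delta):
    """delta[(s, a)] is a dict {target_state: probability}; returns state -> block id."""
    block = {s: int(s in accepting) for s in states}   # initial split: accepting vs. not
    while True:
        sigs = {}
        for s in states:
            sig = [block[s]]
            for a in symbols:
                mass = defaultdict(float)
                for t, p in delta.get((s, a), {}).items():
                    mass[block[t]] += p
                sig.append(tuple(sorted((b, round(m, 9)) for b, m in mass.items())))
            sigs[s] = tuple(sig)
        # Renumber the distinct signatures as the blocks of the refined partition.
        ids = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
        new_block = {s: ids[sigs[s]] for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block                           # partition is stable
        block = new_block

# Tiny hypothetical example: states q0 and q1 are indistinguishable and get merged.
delta = {("q0", "a"): {"q2": 1.0}, ("q1", "a"): {"q2": 1.0}, ("q2", "a"): {"q2": 1.0}}
print(minimize(["q0", "q1", "q2"], ["a"], {"q2"}, delta))
```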
Abstract:
2000 Mathematics Subject Classification: 90C26, 90C20, 49J52, 47H05, 47J20.
Abstract:
2000 Mathematics Subject Classification: 90C48, 49N15, 90C25
Abstract:
Ivan Ginchev - The class of functions ℓ-stable at a point, defined in [2] and extending the class of C1,1 functions, is generalized from scalar to vector functions. Some properties of ℓ-stable vector functions are proved. It is shown that constrained vector optimization problems admit second-order conditions expressed in terms of directional derivatives, which generalizes results from [2] and [5].
Abstract:
AMS subject classification: 65K10, 49M07, 90C25, 90C48.
Abstract:
Using data from the 2004 wave of the Afrobarometer survey, this study examines correlates of household hardship in three countries of sub-Saharan Africa: Tanzania, Zambia, and Zimbabwe. Findings provide partial support for the hypothesized relationship. Specifically, poverty reduction initiatives and informal assistance are associated with reduced hardship while civic engagement is related to an increase in household hardship. We also note that certain demographic characteristics are linked to hardship. Policy and practice implications are suggested. © The Author(s) 2011.
Abstract:
We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block-code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the codes by presenting original encoding and decoding algorithms whose complexity is log-linear in the block size and which have modest table memory requirements. We also show that when such codes are used to mitigate patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (10%). © 2006 IEEE.
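As a point of reference for the Markov-model prediction mentioned above, the entropy rate of a Markov source gives the attainable rate in bits per symbol; the sketch below computes it for a generic two-state chain whose transition probabilities are placeholders, not values from the paper.

```python
# Illustrative only; the transition matrix below is a placeholder, not from the paper.
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/symbol) of an ergodic Markov chain with transition matrix P."""
    # Stationary distribution: left eigenvector of P for the eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    # Sum of -pi_i * P_ij * log2(P_ij), skipping zero transition probabilities.
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# A two-state chain that discourages long runs of one symbol (placeholder numbers).
P = np.array([[0.5, 0.5],
              [0.9, 0.1]])
print(entropy_rate(P))
```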
Abstract:
Using the risk measure CVaR in financial analysis has become more and more popular recently. In this paper we apply CVaR to portfolio optimization. The problem is formulated as a two-stage stochastic programming model, and the SRA algorithm, a recently developed heuristic algorithm, is applied to minimize CVaR.
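For background, scenario-based CVaR minimization is commonly written as the Rockafellar-Uryasev linear program sketched below, with w the portfolio weights, r_s the return vector in scenario s out of S, and alpha the confidence level; the papers above instead solve a two-stage stochastic programming model with the SRA heuristic, so this is only the standard underlying formulation, not theirs.

```latex
% Rockafellar--Uryasev scenario LP for minimizing CVaR_\alpha of the loss -w^\top r
% (background formulation only; not the two-stage model or the SRA heuristic).
\begin{aligned}
\min_{w,\ \zeta,\ u}\quad & \zeta + \frac{1}{(1-\alpha)\,S}\sum_{s=1}^{S} u_s\\
\text{s.t.}\quad & u_s \ \ge\ -w^{\top} r_s - \zeta, \qquad u_s \ \ge\ 0, \qquad s = 1,\dots,S,\\
& \sum_{i} w_i = 1, \qquad w \ \ge\ 0 .
\end{aligned}
```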
Abstract:
The CVaR risk measure is gaining ever greater importance in assessing the risk of portfolios. Minimizing the CVaR risk measure for the portfolio as a whole can be formulated as a two-stage stochastic programming problem. The SRA algorithm is a recently developed solution algorithm for optimizing stochastic programming problems. In this paper, I solve the minimization of the CVaR risk measure with the SRA algorithm. ___________ The risk measure CVaR is becoming more and more popular in recent years. In this paper we use CVaR for portfolio optimization. We formulate the problem as a two-stage stochastic programming model. We apply the SRA algorithm, which is a recently developed heuristic algorithm, to minimize CVaR.
Abstract:
A job shop with one batch-processing machine and several discrete machines is analyzed. Given a set of jobs, their process routes, processing requirements, and sizes, the objective is to schedule the jobs so that the makespan is minimized. The batch-processing machine can process a batch of jobs as long as the machine capacity is not violated, and the processing time of a batch equals the longest processing time among the jobs in it. The problem under study can be represented as Jm:batch:Cmax. If no batches are formed, it reduces to the classical job shop scheduling problem (i.e., Jm::Cmax), which is known to be NP-hard. This research extends the scheduling literature by combining Jm::Cmax with batch processing. The primary contributions are a mathematical formulation, a new network representation, and several solution approaches. The problem is observed widely in metal working and other industries but has received little attention due to its complexity. A novel network representation of the problem using disjunctive and conjunctive arcs, and a mathematical formulation, are proposed to minimize the makespan. In addition, several algorithms, including batch-forming heuristics, dispatching rules, a Modified Shifting Bottleneck procedure, Tabu Search (TS), and Simulated Annealing (SA), were developed and implemented. An experimental study was conducted to evaluate the proposed heuristics, and the results were compared to those from a commercial solver (CPLEX). TS and SA, combined with MWKR-FF as the initial solution, gave the best solutions among all the proposed heuristics. Their results were close to those of CPLEX, and for some larger instances, with more than 225 total operations, they were competitive in terms of solution quality and runtime. For some larger problem instances, CPLEX was unable to report a feasible solution even after running for several hours. Between TS and SA, the experimental study indicated that SA produced a better average Cmax over all instances. The proposed solution approaches will help practitioners schedule a job shop with both discrete and batch-processing machines more efficiently; they are easy to implement and require short run times to solve large problem instances.
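As a concrete illustration of the batching rules stated above (batch capacity must not be exceeded; a batch's processing time is the longest job in it), below is a minimal first-fit batch-forming sketch. It is not the dissertation's MWKR-FF rule or its metaheuristics, and the job sizes, processing times, and capacity are hypothetical.

```python
# Minimal first-fit batch-forming sketch; not the dissertation's heuristics.
def form_batches(jobs, capacity):
    """jobs: list of (job_id, size, proc_time); returns a list of batches."""
    batches = []  # each batch: {"jobs": [...], "size": used capacity, "time": max proc_time}
    for job_id, size, proc_time in sorted(jobs, key=lambda j: -j[2]):  # longest jobs first
        for b in batches:
            if b["size"] + size <= capacity:          # first existing batch it fits into
                b["jobs"].append(job_id)
                b["size"] += size
                b["time"] = max(b["time"], proc_time)  # batch time = longest job in batch
                break
        else:                                          # no existing batch fits: open a new one
            batches.append({"jobs": [job_id], "size": size, "time": proc_time})
    return batches

# Hypothetical example: capacity 10, jobs given as (id, size, processing time).
print(form_batches([(1, 4, 7), (2, 5, 3), (3, 6, 5), (4, 2, 2)], capacity=10))
```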
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has led directly to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impact the system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit and logic levels to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge circuit- and architecture-level research on power and thermal behavior into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
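As a toy illustration of the kind of coupled leakage-temperature model the abstract refers to, the sketch below steps a lumped RC thermal model with a leakage term that grows with temperature using forward Euler; it is not the dissertation's model, and every constant is a made-up placeholder.

```python
# Toy coupled power/thermal model; all constants are placeholders, not the dissertation's.
def simulate(p_dyn_trace, dt=0.01, R=0.8, C=5.0, T_amb=45.0, k0=2.0, k1=0.05):
    """p_dyn_trace: dynamic power per step (W); returns the temperature trace (deg C)."""
    T = T_amb
    trace = []
    for p_dyn in p_dyn_trace:
        p_leak = k0 + k1 * (T - T_amb)        # leakage rises with temperature
        p_total = p_dyn + p_leak
        dT = (p_total - (T - T_amb) / R) / C  # lumped RC thermal model
        T += dT * dt
        trace.append(T)
    return trace

# Hypothetical workload: 20 W of dynamic power for 1000 steps, then idle.
temps = simulate([20.0] * 1000 + [0.0] * 1000)
print(round(max(temps), 1))
```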
Abstract:
Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve network performance, focusing on physical-layer issues. During the past few years, researchers have started to take cooperative transmission into consideration in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search-space reduction (branch and bound with space reduction). The proposed strategy builds the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we solve the problem by decoupling the optimal power allocation from the optimal route selection; the problem is then formulated as an Integer Non-Linear Programming problem, which is solved using the space-reduced branch and bound method. In the second, the cooperative routing problem is solved by decoupling the transmission power and relay node selection from the route selection; after the routing problem is solved, power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
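The MINLP itself is not reproduced here, but the branch-and-bound strategy named in the abstract follows a generic pattern; the sketch below uses hypothetical callback functions (lower_bound, is_complete, cost, branch) and simply shows how pruning against the incumbent is what shrinks the search space.

```python
# Generic branch-and-bound skeleton; an illustration of the named strategy,
# not the authors' MINLP formulation or their space-reduction rules.
import heapq

def branch_and_bound(root, lower_bound, is_complete, cost, branch):
    """root: initial partial solution; the four callables define the problem."""
    best_cost, best_sol = float("inf"), None
    frontier = [(lower_bound(root), 0, root)]   # best-first ordering on the bound
    counter = 1                                  # tie-breaker so nodes are never compared
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= best_cost:
            continue                             # prune: this subtree cannot beat the incumbent
        if is_complete(node):
            if cost(node) < best_cost:
                best_cost, best_sol = cost(node), node
            continue
        for child in branch(node):
            b = lower_bound(child)
            if b < best_cost:                    # keep only promising children
                heapq.heappush(frontier, (b, counter, child))
                counter += 1
    return best_sol, best_cost
```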
Abstract:
We quantify the error statistics and patterning effects in a 5×40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding scheme for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER (BER > 10^-1). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
Abstract:
Patient awareness and concern regarding the potential health risks from ionizing radiation have peaked recently (Coakley et al., 2011) following widespread press and media coverage of the projected cancer risks from the increasing use of computed tomography (CT) (Berrington et al., 2007). The typical young and educated patient with inflammatory bowel disease (IBD) may be particularly conscious of his/her exposure to ionizing radiation as a result of diagnostic imaging. Cumulative effective doses (CEDs) in patients with IBD have been reported as being high and are rising, primarily due to the more widespread and repeated use of CT (Desmond et al., 2008). Radiologists, technologists, and referring physicians have a responsibility, firstly, to counsel their patients accurately regarding the actual risks of ionizing radiation exposure; secondly, to limit the use of imaging modalities that involve ionizing radiation to clinical situations where they are likely to change management; and thirdly, to ensure that a diagnostic-quality imaging examination is acquired with the lowest possible radiation exposure. In this paper, we synopsize the available evidence related to radiation exposure and risk, report advances in low-dose CT technology, and examine the role of alternative imaging modalities, such as ultrasonography and magnetic resonance imaging, which avoid radiation exposure.