912 results for "optimal solution"
Abstract:
In this paper, we study a problem of scheduling and batching on two machines in a flow-shop and open-shop environment. Each machine processes operations in batches, and the processing time of a batch is the sum of the processing times of the operations in that batch. A setup time, which depends only on the machine, is required before a batch is processed on a machine, and all jobs in a batch remain at the machine until the entire batch is processed. The aim is to make batching and sequencing decisions, which specify a partition of the jobs into batches on each machine, and a processing order of the batches on each machine, respectively, so that the makespan is minimized. The flow-shop problem is shown to be strongly NP-hard. We demonstrate that there is an optimal solution with the same batches on the two machines; we refer to these as consistent batches. A heuristic is developed that selects the best schedule among several with one, two, or three consistent batches, and is shown to have a worst-case performance ratio of 4/3. For the open-shop, we show that the problem is NP-hard in the ordinary sense. By proving the existence of an optimal solution with one, two or three consistent batches, a close relationship is established with the problem of scheduling two or three identical parallel machines to minimize the makespan. This allows a pseudo-polynomial algorithm to be derived, and various heuristic methods to be suggested.
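The stated link between consistent batches and identical parallel machines can be made concrete. Minimizing the makespan on two identical parallel machines admits the classic pseudo-polynomial subset-sum dynamic program sketched below; this is a minimal illustration of that standard technique, not the paper's algorithm, and the function name and job data are hypothetical.

```python
def two_machine_makespan(times):
    """Minimum makespan when partitioning jobs onto two identical
    parallel machines: classic pseudo-polynomial subset-sum DP over
    the achievable loads of the first machine."""
    total = sum(times)
    reachable = {0}  # loads achievable on machine 1
    for t in times:
        reachable |= {load + t for load in reachable}
    # best split: the machine-1 load closest to half the total work
    return min(max(load, total - load) for load in reachable)
```

For example, jobs with processing times [5, 4, 3] give a minimum makespan of 7 (split {4, 3} versus {5}); a perfectly balanceable instance like [3, 3, 2, 2, 2] achieves half the total work, 6.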
Abstract:
The paper focuses on the development of an aircraft design optimization methodology that models uncertainty and sensitivity analysis in the tradeoff between manufacturing cost, structural requirements, and aircraft direct operating cost. Specifically, rather than only looking at manufacturing cost, direct operating cost is also considered in terms of the impact of weight on fuel burn, in addition to the acquisition cost to be borne by the operator. Ultimately, there is a tradeoff between driving design according to minimal weight and driving it according to reduced manufacturing cost. The analysis of cost is facilitated with a genetic-causal cost-modeling methodology, and the structural analysis is driven by numerical expressions of appropriate failure modes that use ESDU International reference data. However, a key contribution of the paper is to investigate the modeling of uncertainty and to perform a sensitivity analysis to investigate the robustness of the optimization methodology. Stochastic distributions are used to characterize manufacturing cost distributions, and Monte Carlo analysis is performed in modeling the impact of uncertainty on the cost modeling. The results are then used in a sensitivity analysis that incorporates the optimization methodology. In addition to investigating manufacturing cost variance, the sensitivity of the optimization to fuel burn cost and structural loading is also investigated. It is found that the consideration of manufacturing cost does make an impact and results in a different optimal design configuration from that delivered by the minimal-weight method. However, it was shown that at lower applied loads there is a threshold fuel burn cost at which the optimization process needs to reduce weight, and this threshold decreases with increasing load.
The new optimal solution results in lower direct operating cost, with a predicted savings of $640/m2 of fuselage skin over the life, relating to a rough order-of-magnitude direct operating cost savings of $500,000 for the fuselage alone of a small regional jet. Moreover, it was found through the uncertainty analysis that the principle was not sensitive to cost variance, although the margins do change.
Abstract:
Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or when deployed in systems of limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem, (2) forward and backward selections are combined to approach the final globally optimal solution, and (3) a criterion is introduced for identification of support vectors leading to a much reduced support vector set. In addition to introducing this method the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
Abstract:
A wireless relay network with one source, one relay and one destination is considered, where nodes communicate via N orthogonal channels. We develop optimal power allocation strategies at both the source and relay for maximizing the overall source-destination capacity under individual power constraints at the source and relay. Some properties of the optimal solution are studied. © 2012 IEEE.
Abstract:
The use of radars to detect low-flying, small targets has been explored for several decades now. However, radars with counter-stealth abilities, namely passive, multistatic, low-frequency radars, have come into focus recently. Passive radar that uses Digital Video Broadcast Terrestrial (DVB-T) signals as an illuminator of opportunity is a major contender in this area. A DVB-T-based passive radar requires the development of an antenna array that performs satisfactorily over the entire DVB-T band. At Fraunhofer FHR, there is currently a need for an array antenna designed for operation over the 450-900 MHz range with wideband beamforming and null-steering capabilities. This would add to the ability of the passive radar to detect covert targets and would improve the performance of the system. The array should require no mechanical adjustments to the inter-element spacing to match the DVB-T carrier frequency used for any particular measurement. Such an array would have increased flexibility of operation in different environments or locations.
The design of such an array antenna and the applied techniques for wideband beamforming and null steering are presented in this thesis. The interaction between the inter-element spacing, the grating lobes, and the mutual coupling had to be carefully studied, and an optimal solution had to be reached that meets all the specifications of the antenna array for wideband applications. Directional beams, nulls along interference directions, low sidelobe levels, polarization aspects, and operation over a wide bandwidth of 450-900 MHz were some of the key considerations.
Abstract:
We consider a wireless relay network with one source, one relay, and one destination, where communications between nodes are performed over N orthogonal channels. This, for example, is the case when orthogonal frequency division multiplexing is employed for data communications. Since the power available at the source and relay is limited, we study optimal power allocation strategies at the source and relay in order to maximize the overall source-destination capacity. Depending on whether channel state information is available at both the source and relay or only at the relay, power allocation is performed at both the source and relay or only at the relay. Considering different setups for the problem, various optimization problems are formulated and solved. Some properties of the optimal solution are also proved.
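For context on what such an allocation typically looks like: with full channel state information and a single sum-power constraint, the capacity-maximizing allocation over N parallel channels is the classical water-filling solution. The sketch below illustrates that standard technique only; it does not reproduce the paper's formulation with separate source and relay power constraints, and the function name and channel gains are illustrative.

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Water-filling over parallel channels: maximize
    sum(log2(1 + g_i * p_i)) subject to sum(p_i) <= p_total.
    The water level mu is found by bisection; p_i = max(0, mu - 1/g_i)."""
    inv = 1.0 / np.asarray(gains, dtype=float)  # inverse channel gains
    lo, hi = 0.0, p_total + inv.max()           # bracket for mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > p_total:
            hi = mu  # too much power used: lower the water level
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)
```

Stronger channels receive more power, and channels whose inverse gain lies above the water level receive none.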
Abstract:
In this paper, we propose a multiuser cognitive relay network, where multiple secondary sources communicate with a secondary destination through the assistance of a secondary relay in the presence of secondary direct links and multiple primary receivers. We consider the two relaying protocols of amplify-and-forward (AF) and decode-and-forward (DF), and take into account the availability of direct links from the secondary sources to the secondary destination. With this in mind, we propose an optimal solution for cognitive multiuser scheduling by selecting the optimal secondary source, which maximizes the received signal-to-noise ratio (SNR) at the secondary destination using maximal ratio combining. This is done by taking into account both the direct link and the relay link in the multiuser selection criterion. For both AF and DF relaying protocols, we first derive closed-form expressions for the outage probability and then provide the asymptotic outage probability, which determines the diversity behavior of the multiuser cognitive relay network. Finally, the analysis is corroborated by representative numerical examples.
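The selection rule described, choosing the source that maximizes the combined SNR of direct and relay links after maximal ratio combining, can be sketched as follows. MRC of independent branches adds their SNRs; here the relay-link value is taken as an already-computed equivalent end-to-end SNR (for AF relaying it would itself be derived from the two hops), and all names are illustrative rather than from the paper.

```python
def select_user(direct_snrs, relay_snrs):
    """Pick the secondary source whose MRC-combined SNR
    (direct-link SNR plus equivalent relay-link SNR) is largest."""
    combined = [d + r for d, r in zip(direct_snrs, relay_snrs)]
    best = max(range(len(combined)), key=combined.__getitem__)
    return best, combined[best]
```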
Abstract:
The present research investigates the uptake of phosphate ions from aqueous solutions using acidified laterite (ALS), a by-product from the production of ferric aluminium sulfate using laterite. Phosphate adsorption experiments were performed in batch systems to determine the amount of phosphate adsorbed as a function of solution pH, adsorbent dosage, and thermodynamic parameters per fixed P concentration. Kinetic studies were also carried out to study the effect of adsorbent particle sizes. The maximum removal capacity of ALS, observed at pH 5, was 3.68 mg P g-1. It was found that as the adsorbent dosage increases, the equilibrium pH decreases, so an adsorbent dosage of 1.0 g L-1 of ALS was selected. The adsorption capacity (qm) calculated from the Langmuir isotherm was found to be 2.73 mg g-1. Kinetic experimental data were mathematically well described using the pseudo-first-order model over the full range of adsorbent particle sizes. The adsorption reactions were endothermic, and the process of adsorption was favoured at high temperature; the ΔG and ΔH values implied that the main adsorption mechanism of P onto ALS is physisorption. The desorption studies indicated the need to consider a 0.1 M NaOH solution as an optimal solution for practical regeneration applications.
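As an illustration of how a qm value like the one reported could be extracted from batch equilibrium data, the linearized Langmuir isotherm C/q = C/qm + 1/(qm*KL) can be fitted by least squares. The sketch below is generic: the synthetic data in the usage example are generated with qm = 2.73 mg g-1 (matching the reported value) and an assumed KL, purely for demonstration.

```python
import numpy as np

def langmuir_fit(c_eq, q_eq):
    """Estimate the Langmuir parameters qm (monolayer capacity) and
    KL (affinity constant) from equilibrium concentrations c_eq and
    adsorbed amounts q_eq, using the linearization
    C/q = C/qm + 1/(qm*KL) and a least-squares line fit."""
    c = np.asarray(c_eq, dtype=float)
    q = np.asarray(q_eq, dtype=float)
    slope, intercept = np.polyfit(c, c / q, 1)
    qm = 1.0 / slope
    kl = slope / intercept  # = KL, since intercept = 1/(qm*KL)
    return qm, kl
```

Fitting data generated from the isotherm itself recovers the parameters exactly, since the linearization is exact for ideal Langmuir behavior.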
Abstract:
The design optimization of a cold-formed steel portal frame building is considered in this paper. The proposed genetic algorithm (GA) optimizer considers both the topology (i.e., frame spacing and pitch) and the cross-sectional sizes of the main structural members as the decision variables. Previous GAs in the literature were characterized by poor convergence, including slow progress, usually resulting in excessive computation times and/or frequent failure to achieve an optimal or near-optimal solution. This is the main issue addressed in this paper. In an effort to improve the performance of the conventional GA, a niching strategy is presented that is shown to be an effective means of enhancing the dissimilarity of the solutions in each generation of the GA. Thus, population diversity is maintained and premature convergence is reduced significantly. Through benchmark examples, it is shown that the proposed efficient GA generates optimal solutions more consistently. A parametric study was carried out, and the results are included. They show significant variation in the optimal topology, in terms of pitch and frame spacing, for a range of typical column heights. They also show that the optimized design achieves large savings based on the cost of the main structural elements; the inclusion of knee braces at the eaves yields further significant cost savings.
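One common way to realize the diversity-preserving niching described above is fitness sharing, in which each individual's raw fitness is divided by a niche count that grows with the number of similar solutions nearby, so clusters of near-identical designs are penalized. The 1-D sketch below is a generic illustration of that technique under assumed parameters, not the paper's specific operator.

```python
def shared_fitness(population, raw_fitness, sigma_share=1.0, alpha=1.0):
    """Fitness sharing for a GA: divide each individual's raw fitness
    by its niche count, so crowded regions of the search space are
    penalized and population diversity is maintained."""
    shared = []
    for i, xi in enumerate(population):
        niche = 0.0
        for xj in population:
            d = abs(xi - xj)  # 1-D genotype distance, for illustration
            if d < sigma_share:
                # triangular sharing function: 1 at d=0, 0 at d=sigma
                niche += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche)
    return shared
```

With equal raw fitness, an isolated individual keeps its full fitness while two near-duplicates each lose roughly half of theirs, which is what discourages premature convergence onto one region.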
Abstract:
The design optimization of cold-formed steel portal frame buildings is considered in this paper. The objective function is based on the cost of the members for the main frame and secondary members (i.e., purlins, girts, and cladding for walls and roofs) per unit area on the plan of the building. A real-coded niching genetic algorithm is used to minimize the cost of the frame and secondary members, which are designed on the basis of the ultimate limit state. It is shown that the proposed algorithm is effective and robust in generating the optimal solution, owing to the population's diversity being maintained by the niching method. In the optimal design, the cost of purlins and side rails is shown to account for 25% of the total cost, the main frame members account for 27%, and the cladding for the walls and roofs accounts for 27%.
Abstract:
We investigate the cell coverage optimization problem for the massive multiple-input multiple-output (MIMO) uplink. By deploying tilt-adjustable antenna arrays at the base stations, cell coverage optimization becomes a promising technique for striking a compromise between covering cell-edge users and suppressing pilot contamination. We formulate a detailed description of this optimization problem by maximizing the cell throughput, which is shown to be mainly determined by the user distribution within several key geometrical regions. Then, the formulated problem is applied to different example scenarios: for a network with hexagonally shaped cells and uniformly distributed users, we derive an analytical lower bound on the ergodic throughput in the objective cell, based on which it is shown that the optimal choice of cell coverage should ensure that the coverage of different cells does not overlap; for a more generic network with sectorally shaped cells and non-uniformly distributed users, we propose an analytical approximation of the ergodic throughput. After that, a practical coverage optimization algorithm is proposed, in which the optimal solution can be easily obtained through a simple one-dimensional line search within a confined searching region. Our numerical results show that, compared with traditional schemes where the cell coverage is fixed, the proposed coverage optimization method greatly increases the system throughput in macrocells for massive MIMO uplink transmission.
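A simple one-dimensional line search within a confined region, of the kind alluded to above, can be realized for a unimodal objective with a standard golden-section search. This is a sketch of that generic technique under the assumption of unimodality; the function and interval are illustrative, not the paper's throughput objective.

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Maximize a unimodal function f on [a, b] via golden-section
    search: the interval is repeatedly shrunk around the better of
    two interior probe points until it is narrower than tol."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0  # inverse golden ratio, ~0.618
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):
            b, d = d, c          # maximum lies in [a, d]
            c = b - gr * (b - a)
        else:
            a, c = c, d          # maximum lies in [c, b]
            d = a + gr * (b - a)
    return 0.5 * (a + b)
```

For example, maximizing -(x - 2)^2 on the confined interval [0, 5] converges to x = 2.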
Abstract:
This paper proposes a series of variations in designing the location of a wind farm on Monti della Tolfa. These project solutions aim at mitigating the visual impact caused by the wind turbines. Besides the usual location of the wind turbines on the skyline, the alternatives within the project design relate to the placement of wind turbines in the middle and at the bottom of the hillside. Other possible mitigation measures relate to the dimensions and the colour of the wind towers. This study proposes both a non-monetary and a monetary analysis of the visual impact related to each project proposal. The final aim of the paper is to analyze the economic and financial costs and benefits of each alternative in order to identify the economically optimal solution.
Abstract:
Network virtualisation is seen as a promising approach to overcome the so-called “Internet impasse” and bring innovation back into the Internet, by allowing easier migration towards novel networking approaches as well as the coexistence of complementary network architectures on a shared infrastructure in a commercial context. Recently, interest from operators and mainstream industry in network virtualisation has grown quite significantly, as the potential benefits of virtualisation became clearer, both from an economic and an operational point of view. In the beginning, the concept was mainly a research topic and was materialised in small-scale testbeds and research network environments. This PhD thesis aims to provide the network operator with a set of mechanisms and algorithms capable of managing and controlling virtual networks. To this end, we propose a framework that aims to allocate, monitor, and control virtual resources in a centralised and efficient manner. In order to analyse the performance of the framework, we implemented and evaluated it on a small-scale testbed. To enable the operator to make an efficient, real-time, on-demand allocation of virtual networks onto the substrate network, a heuristic algorithm is proposed to perform the virtual network mapping. For the network operator to obtain the highest profit from the physical network, a mathematical formulation is also proposed that aims to maximize the number of virtual networks allocated onto the physical network. Since the power consumption of the physical network is very significant in the operating costs, it is important to allocate virtual networks onto fewer physical resources, and onto physical resources that are already active. To address this challenge, we propose a mathematical formulation that aims to minimize the energy consumption of the physical network without affecting the efficiency of the allocation of virtual networks.
To minimize fragmentation of the physical network while increasing the revenue of the operator, the initial formulation is extended to contemplate the re-optimization of previously mapped virtual networks, so that the operator makes better use of its physical infrastructure. It is also necessary to address the migration of virtual networks, whether for load balancing or because of imminent failure of physical resources, without affecting the proper functioning of the virtual network. To this end, we propose a method based on cloning techniques to migrate virtual networks across the physical infrastructure transparently and without affecting the virtual network. In order to assess the resilience of virtual networks to physical network failures, while obtaining the optimal solution for the migration of virtual networks in case of imminent failure of physical resources, the mathematical formulation is extended to minimize the number of migrated nodes and the relocation of virtual links. In comparison with our optimization proposals, we found that existing heuristics for mapping virtual networks perform poorly. We also found that it is possible to minimize energy consumption without penalizing allocation efficiency. By applying re-optimization to the virtual networks, it was shown that it is possible to obtain more free resources as well as better-balanced physical resources. Finally, it was shown that virtual networks are quite resilient to failures in the physical network.
Abstract:
Supervised Teaching Practice report, Master's degree in Mathematics Teaching (Mestrado em Ensino da Matemática), Universidade de Lisboa, 2015
Abstract:
Energy resource scheduling becomes increasingly important as the use of distributed resources is intensified and massive gridable-vehicle use is envisaged. The present paper proposes a methodology for day-ahead energy resource scheduling for smart grids, considering the intensive use of distributed generation and of gridable vehicles, usually referred to as Vehicle-to-Grid (V2G). This method considers that the energy resources are managed by a Virtual Power Player (VPP) which establishes contracts with V2G owners. It takes into account these contracts, the users' requirements submitted to the VPP, and several discharge price steps. A full AC power flow calculation included in the model allows network constraints to be taken into account. The influence of the successive day's requirements on the day-ahead optimal solution is discussed and considered in the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.