856 results for allocation and extraction
Abstract:
To allocate and size capacitors in a distribution system, an optimization algorithm called Discrete Particle Swarm Optimization (DPSO) is employed in this paper. The objective is to minimize the transmission line loss cost plus the capacitor cost. During the optimization procedure, the bus voltages, the feeder currents and the reactive power flowing back to the source side must be kept within standard limits. To validate the proposed method, the semi-urban distribution system connected to bus 2 of the Roy Billinton Test System (RBTS) is used. This 37-bus distribution system has 22 load points located on the secondary side of a 33/11 kV distribution substation. By reducing the transmission line loss in a standard system in which that loss accounts for only about 6.6 percent of the total power, the capabilities of the proposed technique are validated.
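The abstract states the objective and constraints but not the implementation. A minimal sketch of how such a DPSO fitness evaluation could look is given below; the power-flow routine, cost coefficients and limits are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a DPSO fitness evaluation for capacitor allocation.
# run_power_flow, the cost coefficients and the limits are hypothetical.

def fitness(capacitor_sizes_kvar, run_power_flow,
            loss_cost_per_kw=168.0, cap_cost_per_kvar=4.9,
            v_min=0.95, v_max=1.05, penalty=1e6):
    """Objective: line-loss cost + capacitor cost, penalised for
    voltage, feeder-current and reverse-reactive-flow violations."""
    result = run_power_flow(capacitor_sizes_kvar)      # hypothetical solver call
    cost = loss_cost_per_kw * result["loss_kw"]
    cost += cap_cost_per_kvar * sum(capacitor_sizes_kvar.values())

    # Constraint handling via penalties (one common choice in DPSO).
    if any(v < v_min or v > v_max for v in result["bus_voltages_pu"]):
        cost += penalty
    if any(i > imax for i, imax in zip(result["feeder_currents_a"],
                                       result["feeder_limits_a"])):
        cost += penalty
    if result["q_back_to_source_kvar"] > 0:            # no reverse reactive flow
        cost += penalty
    return cost
```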
Abstract:
In this paper, the placement and sizing of Distributed Generators (DGs) in distribution networks are determined optimally. The objectives are to minimize the loss and to improve the reliability. The constraints are the bus voltages, the feeder currents and the reactive power flowing back to the source side. The placement and size of the DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA). The GA operators increase the diversity of the optimization variables so that the DPSO does not become trapped in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), located on the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
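As a rough illustration of how a DPSO update can be interleaved with GA operators to preserve diversity, consider the sketch below; the update rule, crossover scheme and rates are assumptions for illustration only and do not reproduce the paper's algorithm.

```python
import random

# Rough sketch of one DPSO iteration interleaved with GA crossover/mutation
# to keep the swarm diverse. Particles encode candidate DG buses; fitness is
# the loss/reliability objective. All operators and rates here are assumed.

def hybrid_step(swarm, best, fitness, candidate_buses, mutation_rate=0.05):
    # DPSO-style move: each position jumps to the global best with probability 0.5.
    for p in swarm:
        for i in range(len(p)):
            if random.random() < 0.5:
                p[i] = best[i]
    # GA step: one-point crossover of two random particles plus mutation,
    # replacing the worst particle (assumes particles have length >= 2).
    a, b = random.sample(swarm, 2)
    cut = random.randrange(1, len(a))
    child = [random.choice(candidate_buses) if random.random() < mutation_rate else g
             for g in a[:cut] + b[cut:]]
    worst = max(range(len(swarm)), key=lambda k: fitness(swarm[k]))
    swarm[worst] = child
    return min(swarm + [best], key=fitness)   # updated global best
```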
Abstract:
In this paper, the optimal allocation and sizing of distributed generators (DGs) in a distribution system is studied. To achieve this goal, an optimization problem is solved whose main objective is to minimize the DG cost and to maximize the reliability simultaneously. The active power balance between loads and DGs during the isolation time is used as a constraint. Load shedding is also considered: if the total active power of the DGs in a zone isolated by the sectionalizers after a fault is less than the total active power of the loads located in that zone, the program sheds the loads one by one, following a priority rule, until the active power balance is satisfied. This assumption lowers the reliability index SAIDI compared with the case in which all loads in a zone are shed whenever the total DG power is less than the total load power. To validate the proposed method, a 17-bus distribution system is employed and the results are analysed.
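A minimal sketch of the priority-rule shedding loop described above might look like the following; the data layout and priority ordering are illustrative assumptions.

```python
# Minimal sketch of priority-rule load shedding in an isolated zone: loads
# are dropped one at a time, lowest priority first, until the DG active
# power covers the remaining demand. The data layout is an assumption.

def shed_loads(zone_loads_kw, zone_dg_kw):
    """zone_loads_kw: dict ordered from highest to lowest priority {name: kW}."""
    shed, served = [], dict(zone_loads_kw)
    while served and sum(served.values()) > zone_dg_kw:
        name = list(served)[-1]          # lowest-priority load still connected
        shed.append(name)
        del served[name]
    return shed, served

# Example: 600 kW of DG cannot cover 950 kW of load, so "mall" is shed first,
# then "factory", leaving only "hospital" supplied.
shed, served = shed_loads({"hospital": 400, "factory": 300, "mall": 250}, 600)
```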
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds and some fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points of time. In addition, Quality-of-Service issues, such as execution time and running costs, must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
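The abstract does not detail the encoding, but a random-key GA typically evolves a vector of real numbers that is decoded into a discrete solution. One possible decoding for this problem, given purely as an assumption for illustration, is sketched below.

```python
# Sketch of a random-key decoding: one real-valued key per task is mapped to
# (a) the resource the task runs on and (b) the task's dispatch priority.
# Any standard GA can then evolve the key vector. This decoding scheme is an
# illustrative assumption, not the paper's.

def decode(keys, n_resources):
    """keys: list of floats in [0, 1), one per task."""
    assignment = [int(k * n_resources) for k in keys]           # resource index
    priority = sorted(range(len(keys)), key=lambda t: keys[t])  # dispatch order
    return assignment, priority

assignment, order = decode([0.12, 0.87, 0.45, 0.33], n_resources=3)
# assignment -> [0, 2, 1, 0]; order -> tasks sorted by key: [0, 3, 2, 1]
```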
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
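A compact sketch of the cooperative coevolution idea follows, assuming the problem is decomposed into an allocation subpopulation and a schedule subpopulation that are evaluated jointly; the decomposition and the toy operators are assumptions rather than the paper's exact design.

```python
import random

# Cooperative coevolution sketch: two subpopulations (resource allocations
# and task schedules) are evolved separately, and each individual is scored
# together with the current best partner from the other subpopulation.
# eval_joint and the populations are supplied by the caller; all operators
# here are placeholders.

def ccga(eval_joint, init_allocs, init_scheds, generations=50):
    allocs, scheds = list(init_allocs), list(init_scheds)
    best_a = min(allocs, key=lambda a: eval_joint(a, scheds[0]))
    best_s = min(scheds, key=lambda s: eval_joint(best_a, s))
    for _ in range(generations):
        allocs = evolve(allocs, lambda a: eval_joint(a, best_s))
        scheds = evolve(scheds, lambda s: eval_joint(best_a, s))
        best_a = min(allocs, key=lambda a: eval_joint(a, best_s))
        best_s = min(scheds, key=lambda s: eval_joint(best_a, s))
    return best_a, best_s

def evolve(pop, fit):
    """Toy (mutation-only) GA step: keep the better half, mutate copies."""
    pop = sorted(pop, key=fit)[: len(pop) // 2]
    return pop + [mutate(p) for p in pop]

def mutate(individual):
    p = list(individual)
    i = random.randrange(len(p))
    p[i] = random.choice(p)   # placeholder mutation; domain-specific in practice
    return p
```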
Abstract:
The principles relating to the passing of risk under a contract for the sale of real property would seem to have been long settled. The rule under the general law is that the risk of loss of the subject matter under a contract for the sale of real property passes to the buyer upon the creation of a valid and binding contract. This article considers the origin of that rule, how it developed with the growth of equity, and advances the view that it is anomalous in a modern context of property dealings. In doing so, the article adverts to the variety of statutory mechanisms used to subvert the rule, few of which are of practical value. It concludes that the rule is outmoded in many respects and suggests a number of reforms which might be implemented nationally to bring consistency and simplicity to the issue of damage or destruction of improvements which are the subject of a land contract.
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The tap of the substation Load Tap Changer is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function is composed of the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple discrete levels. The constraints are the bus voltages and the feeder currents, which must be maintained within their standard ranges. For validation of the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
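To make the multi-level load-duration-curve approximation concrete, the sketch below computes an annual energy-loss cost from a few discrete load levels; the loss model, level data and energy price are assumptions, not values from the paper.

```python
# Sketch of a multi-level load-duration-curve loss calculation: the yearly
# curve is approximated by a few discrete load levels, the loss is computed
# at each level, and the energy-loss cost is the duration-weighted sum.
# compute_loss_kw and the level data are illustrative assumptions.

def annual_loss_cost(levels, compute_loss_kw, energy_cost_per_kwh=0.06):
    """levels: list of (load_fraction, hours_per_year) pairs."""
    total_kwh = 0.0
    for load_fraction, hours in levels:
        total_kwh += compute_loss_kw(load_fraction) * hours
    return total_kwh * energy_cost_per_kwh

# Example: three-level approximation of the load duration curve.
cost = annual_loss_cost(
    [(1.0, 1000), (0.7, 5260), (0.5, 2500)],
    compute_loss_kw=lambda f: 120.0 * f ** 2,   # losses scale roughly with load^2
)
```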
Abstract:
Dendrocalamus strictus and Bambusa arundinacea are monocarpic, gregariously flowering species of bamboo, common in the deciduous forests of the State of Karnataka in India. Their populations have significantly declined, especially since the last flowering. This decline parallels the increasing incidence of grazing, fire and extraction in recent decades. Results of an experiment in which the intensities of grazing and fire were varied indicate that while grazing significantly depresses the survival of seedlings and the recruitment of new culms to bamboo clumps, fire appeared to enhance seedling survival, presumably by reducing competition from less fire-resistant species. New shoots of bamboo are destroyed by insects and a variety of herbivorous mammals. In areas of intense herbivore pressure, a bamboo clump initiates the production of a much larger number of new culms but ends up with many fewer and shorter intact culms. Extraction renders the new shoots more susceptible to herbivore pressure by removing the protective covering of branches at the base of a bamboo clump. Hence, regular and extensive extraction by the paper mills, in conjunction with intense grazing pressure, strongly depresses the addition of new culms to bamboo clumps. Regulation of grazing by domestic livestock in the forest, along with maintenance of the cover at the base of the clumps by extracting the culms at a higher level, should reduce the rate of decline of the bamboo stocks.
Abstract:
A method for the delipidation of egg yolk plasma using phospholipase-C, n-heptane, and 1-butanol has been described. An aggregating protein fraction and a soluble protein fraction were separated by the action of phospholipase-C. The aggregating protein fraction freed of most of the lipids by treatment with n-heptane and 1-butanol was shown to be the apolipoproteins of yolk plasma, whereas the soluble proteins were identified as the livetins. Carbohydrate and the N-terminal amino acid analysis of these protein fractions are reported. A comparison of these protein fractions with the corresponding fractions obtained by formic acid delipidation of yolk plasma has been made. The gelation of yolk plasma by the action of phospholipase-C has been interpreted as an aggregation of lipoproteins caused by ionic interactions. The role of lecithin in maintaining the structural integrity of lipoproteins has been discussed.
Abstract:
We propose two texture-based approaches, one involving Gabor filters and the other employing log-polar wavelets, for separating text from non-text elements in a document image. Both algorithms compute local energy at information-rich points marked by the Harris corner detector. The advantage of this approach is that the local energy is calculated only at the selected points rather than over the entire image, which saves considerable computation time. The algorithms have been tested on a large set of scanned text pages and yield better results than existing algorithms. Of the two proposed schemes, the Gabor-filter-based scheme marginally outperforms the wavelet-based scheme.
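As an illustration of computing Gabor local energy only at Harris corner points, a possible OpenCV-based sketch follows; the filter-bank parameters, window size and number of points are assumptions and do not come from the paper.

```python
import cv2
import numpy as np

# Sketch: detect Harris corner points, then compute Gabor local energy only
# in windows around those points instead of over the whole image. The Gabor
# parameters, window size and point count are illustrative assumptions.

def local_gabor_energy(gray, n_points=500, win=16):
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.unravel_index(np.argsort(corners.ravel())[-n_points:], corners.shape)

    kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                                  lambd=10.0, gamma=0.5)
               for t in np.arange(0, np.pi, np.pi / 4)]   # 4 orientations
    energies = []
    for y, x in zip(ys, xs):
        patch = gray[max(0, y - win): y + win,
                     max(0, x - win): x + win].astype(np.float32)
        # Local energy: sum of squared filter responses over the patch.
        e = sum(float((cv2.filter2D(patch, cv2.CV_32F, k) ** 2).sum()) for k in kernels)
        energies.append(((x, y), e))
    return energies   # threshold/classify these to separate text from non-text
```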
Abstract:
A two-dimensional correlation experiment for the measurement of short- and long-range homo- and heteronuclear residual dipolar couplings (RDCs) from broad and featureless proton NMR spectra, including C-13 satellites, is proposed. The method employs a single natural-abundance C-13 spin as a spy nucleus to probe all the coupled protons and permits the determination of RDCs of negligible strength. The technique is demonstrated on organic chiral molecules aligned in a chiral liquid crystal, where an additional challenge is to unravel the overlapped spectra of the enantiomers. A significant advantage of the method is better chiral discrimination using homonuclear RDCs as additional parameters.
Abstract:
An improved flux draining technique for the extraction of grown YBCO crystals from its solvent is reported. This simple and efficient technique facilitates in-situ flux separation in the isothermal region of the furnace. Consequently, the crystals are spared from thermal shock and subsequent damage. Flux-free surfaces of these crystals were studied by optical microscopy. Transmission X-ray topographs of the crystals reveal the dislocations present in them as well as the stresses developed as a result of ferroelastic phase transition occurring during cooling.
Abstract:
In achieving higher instruction-level parallelism, software pipelining increases the register pressure in the loop. The usefulness of the generated schedule may be restricted to cases where the register pressure is less than the number of available registers; otherwise, spill instructions need to be introduced. Scheduling these spill instructions within the compact schedule is a difficult task. Several heuristics have been proposed to schedule spill code; they may generate more spill code than necessary, and scheduling it may require increasing the initiation interval (II). We model the problem of register allocation with spill code generation and scheduling in software-pipelined loops as a 0-1 integer linear program. The formulation minimizes the increase in II by optimally placing spill code and simultaneously minimizes the amount of spill code produced. To the best of our knowledge, this is the first integrated formulation for register allocation, optimal spill code generation and scheduling for software-pipelined loops. The proposed formulation performs better than existing heuristics, preventing an increase in II in 11.11% of the loops and generating 18.48% less spill code on average over loops extracted from the Perfect Club and SPEC benchmarks, with a moderate increase in compilation time.
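To give a flavour of how such a 0-1 formulation is built, the PuLP sketch below selects which live ranges to spill so that register pressure never exceeds the register count while minimizing spill cost; it is a heavily simplified stand-in with made-up data and omits the spill-instruction scheduling and initiation-interval terms of the actual formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

# Simplified 0-1 ILP in the spirit of the formulation above: binary variables
# decide which live ranges are spilled so that register pressure at every
# cycle stays within the register count, while weighted spill cost is
# minimized. Live-range data and costs are made up for illustration.

live_ranges = {          # name: (set of cycles where the value is live, spill cost)
    "a": ({0, 1, 2, 3}, 4),
    "b": ({1, 2, 3}, 2),
    "c": ({2, 3, 4}, 3),
    "d": ({0, 4}, 1),
}
num_registers = 2

prob = LpProblem("spill_selection", LpMinimize)
spill = {v: LpVariable(f"spill_{v}", cat=LpBinary) for v in live_ranges}

# Objective: minimize total spill cost.
prob += lpSum(live_ranges[v][1] * spill[v] for v in live_ranges)

# Register pressure constraint at every cycle: unspilled live values <= registers.
for cycle in range(5):
    live_here = [v for v, (cycles, _) in live_ranges.items() if cycle in cycles]
    prob += lpSum(1 - spill[v] for v in live_here) <= num_registers

prob.solve()
print({v: int(spill[v].value()) for v in live_ranges})   # expects only "b" spilled
```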