911 results for MINIMIZING EARLINESS
Abstract:
Parallel-connected photovoltaic inverters are required in large solar plants where it is not economically or technically reasonable to use a single inverter. Currently, parallel inverters require individual isolating transformers to cut the path for the circulating current. In this doctoral dissertation, the problem is approached by attempting to minimize the generated circulating current. The circulating current is a function of the common-mode voltages generated by the parallel inverters and can be minimized by synchronizing the inverters. The synchronization has previously been achieved with a communication link. However, in photovoltaic systems the inverters may be located far apart from each other, so a communication-free control is desired. It is shown in this doctoral dissertation that the circulating current can also be obtained from a common-mode voltage measurement. A control method based on a short-time switching-frequency transition is developed and tested in an actual photovoltaic environment of two parallel inverters connected to two 5 kW solar arrays. Controls based on the measurement of the circulating current and of the common-mode voltage are implemented and tested. A communication-free method of controlling the circulating current between parallel-connected inverters is developed and verified.
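A toy numerical sketch (not the dissertation's control method) of the relationship the abstract describes: the zero-sequence circulating current is driven by the difference between the two inverters' common-mode voltages, so synchronizing the carriers shrinks it. The square-wave common-mode voltage, loop inductance, resistance, carrier frequency and phase offset below are all illustrative assumptions.

```python
# Toy zero-sequence loop model: L * di_c/dt = v_cm1(t) - v_cm2(t) - R * i_c(t).
# All parameter values are illustrative assumptions, not taken from the dissertation.
import numpy as np

def cm_voltage(t, f_carrier, phase, vdc=700.0):
    """Square-wave stand-in for the PWM common-mode voltage of one inverter."""
    return (vdc / 2.0) * np.sign(np.sin(2 * np.pi * f_carrier * t + phase))

def peak_circulating_current(phase_offset, f_carrier=10e3, L=2e-3, R=0.1,
                             t_end=2e-3, dt=1e-7):
    """Integrate the circulating-current loop with forward Euler and return the peak."""
    i_c, peak = 0.0, 0.0
    for tk in np.arange(0.0, t_end, dt):
        dv = cm_voltage(tk, f_carrier, 0.0) - cm_voltage(tk, f_carrier, phase_offset)
        i_c += dt * (dv - R * i_c) / L
        peak = max(peak, abs(i_c))
    return peak

# Synchronized carriers (zero offset) versus a 90-degree carrier offset.
print("peak i_c, synchronized :", peak_circulating_current(0.0))
print("peak i_c, 90 deg offset:", peak_circulating_current(np.pi / 2))
```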
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its "distance" to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
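A minimal sketch of the weighting and importance-resampling step described above, applied to a scalar toy ensemble rather than the KdV or quasigeostrophic models; the ensemble size, noise levels and observation value are illustrative assumptions.

```python
# Weighting and importance resampling on a scalar toy state.
import numpy as np

rng = np.random.default_rng(0)

ensemble = rng.normal(loc=0.0, scale=1.0, size=500)   # prior (forecast) ensemble
obs, obs_std = 1.2, 0.5                                # observation and its error std

# Bayes's theorem: each member is weighted by its likelihood, i.e. its
# "distance" to the observation under the observation-error density.
log_w = -0.5 * ((obs - ensemble) / obs_std) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Importance resampling: members with large weights are duplicated,
# low-weight members are largely discarded.
idx = rng.choice(ensemble.size, size=ensemble.size, replace=True, p=w)
posterior = ensemble[idx]

print("prior mean/var    :", ensemble.mean(), ensemble.var())
print("posterior mean/var:", posterior.mean(), posterior.var())
```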
Abstract:
Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular and also strip objects are considered. Moreover, 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach. (C) 2007 Elsevier Ltd. All rights reserved.
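An illustrative sketch (not the paper's solver or its complexity-reduction strategy) of a smooth formulation for one of the variants mentioned above, packing a few circles into the smallest enclosing circle; the item radii, starting point and choice of SciPy's SLSQP solver are assumptions.

```python
# Smooth (twice-differentiable) formulation of packing circles into the
# smallest enclosing circle; made-up instance, local solver only.
import numpy as np
from scipy.optimize import minimize

r = np.array([1.0, 0.8, 0.6])          # item radii (made-up instance)
n = r.size

def objective(z):
    # z = [x1, y1, ..., xn, yn, R]; minimize the container radius R
    return z[-1]

def constraints(z):
    xy, R = z[:-1].reshape(n, 2), z[-1]
    cons = []
    for i in range(n):
        # item i stays inside the container: (R - r_i)^2 - |c_i|^2 >= 0
        cons.append((R - r[i]) ** 2 - xy[i] @ xy[i])
        for j in range(i + 1, n):
            # items i and j do not overlap: |c_i - c_j|^2 - (r_i + r_j)^2 >= 0
            d = xy[i] - xy[j]
            cons.append(d @ d - (r[i] + r[j]) ** 2)
    return np.array(cons)

z0 = np.concatenate([np.random.default_rng(1).uniform(-1.0, 1.0, 2 * n), [3.0]])
bounds = [(None, None)] * (2 * n) + [(r.max(), None)]   # container no smaller than largest item
res = minimize(objective, z0, method="SLSQP", bounds=bounds,
               constraints={"type": "ineq", "fun": constraints})
print("container radius:", res.x[-1])
print("item centers:\n", res.x[:-1].reshape(n, 2))
```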
Abstract:
Granting economic development incentives (or "EDIs") has become commonplace throughout the United States, but the presumed efficiency of these mechanisms is generally unwarranted. Both the politicians granting, and the companies seeking, EDIs have incentives to overestimate the EDIs' benefits. For politicians, ribbon-cutting ceremonies can be a highly desirable opportunity to please political allies and financiers, at the same time that they demonstrate to the population that they are successful in promoting economic growth, even when the population would be better off otherwise. In turn, businesses are naturally prone to seek governmental aid. This explains in part why EDIs often "fail" (i.e., do not pay off). To increase transparency and mitigate the risk of EDI failure, local and state governments across the country have created a number of accountability mechanisms. The general trait of these accountability mechanisms is that they apply controls to some of the sub-risks that underlie the risk of EDI failure. These sub-risks include the companies receiving EDIs not generating the expected number of jobs, not investing enough in their local facilities, not attracting the expected additional business investments to the jurisdiction, etc. The problem with such schemes is that they tackle the problem of EDI failure very loosely. They are too narrow and leave multiplier effects uncontrolled. I propose a novel contractual framework for implementing accountability mechanisms. My suggestion is to establish controls on the risk of EDI failure itself, leaving its underlying sub-risks uncontrolled. I call this mechanism "Contingent EDIs", because the EDIs are made contingent on the government achieving a preset target that benchmarks the risk of EDI failure. If the target is met, the EDIs kick in ex post; if not, the EDIs never kick in.
Abstract:
The effect of competition is an important source of variation in breeding experiments. This study aimed to compare the selection of plants from open-pollinated Eucalyptus families with and without the use of competition covariables. Genetic values were determined for each family and tree for the traits height, diameter at breast height and timber volume in a randomized block design, yielding the variance components, genetic parameters, selection gains, effective size and selection coincidence, with and without the use of covariables. Intergenotypic competition is an important factor of environmental variation. The use of competition covariables generally reduces the estimates of variance components and influences genetic gains in the studied traits. Intergenotypic competition biases the selection of open-pollinated eucalypt progenies and can result in an erroneous choice of superior genotypes; the inclusion of covariables in the model reduces this influence.
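A hedged illustration of what including a competition covariable in the analysis model can look like, using simulated data, a crude neighbour-mean covariable and ordinary least squares standing in for the mixed-model machinery the study actually uses; all names and values are assumptions.

```python
# Fitting a trait model with and without a competition covariable (toy data).
import numpy as np

rng = np.random.default_rng(42)
n = 200
genetic = rng.normal(0.0, 1.0, n)            # stand-in genetic values of the trees
neighbour_mean = np.roll(genetic, 1)         # crude "mean of neighbours" covariable
# Simulated diameter: genetic effect minus a competition effect plus noise
diameter = 15 + 2 * genetic - 1.5 * neighbour_mean + rng.normal(0.0, 1.0, n)

# Model without the competition covariable
X0 = np.column_stack([np.ones(n), genetic])
b0, *_ = np.linalg.lstsq(X0, diameter, rcond=None)

# Model with the competition covariable added
X1 = np.column_stack([np.ones(n), genetic, neighbour_mean])
b1, *_ = np.linalg.lstsq(X1, diameter, rcond=None)

print("genetic coefficient, no covariable  :", b0[1])
print("genetic coefficient, with covariable:", b1[1])
```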
Abstract:
In the minimization of tool switches problem we seek a sequence in which to process a set of jobs so that the number of tool switches required is minimized. In this work, different variations of a heuristic based on partially ordered job sequences are implemented and evaluated. All variations adopt a depth-first strategy for exploring the enumeration tree. The computational test results indicate that good results can be obtained by a variation that keeps the best three branches at each node of the enumeration tree and randomly chooses, among all active nodes, the next node to branch on when backtracking.
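A hedged sketch of how a candidate job sequence can be evaluated once it is fixed: the "keep tool needed soonest" (KTNS) rule, a standard subroutine for this problem, gives the number of tool switches for a given sequence and magazine capacity. The jobs, tool requirements and capacity below are made-up data, and the enumeration heuristic itself is not shown.

```python
# Count tool switches for a fixed job sequence using the KTNS eviction rule.
def tool_switches(sequence, job_tools, capacity):
    """Number of tool loads needed to process `sequence` with a magazine of `capacity` slots."""
    magazine = set()
    switches = 0
    for pos, job in enumerate(sequence):
        needed = job_tools[job]
        missing = needed - magazine
        # Make room by evicting the tool whose next use is furthest in the future.
        while len(magazine) + len(missing) > capacity:
            def next_use(tool):
                for later, j in enumerate(sequence[pos + 1:]):
                    if tool in job_tools[j]:
                        return later
                return float("inf")
            evict = max(magazine - needed, key=next_use)
            magazine.remove(evict)
        switches += len(missing)        # each loaded tool counts as one switch
        magazine |= missing
    return switches

job_tools = {0: {1, 2}, 1: {2, 3, 4}, 2: {1, 4}, 3: {3, 5}}
print(tool_switches([0, 2, 1, 3], job_tools, capacity=3))
```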
Abstract:
Includes bibliography
Abstract:
In tropical climates, heat is one of the major constraints on broiler production and induces high mortality, especially in the finishing phase. Thus, the objective of this study was to compare early thermal conditioning (TC) and feed formulation using dietary electrolytes (DE). The dietary electrolyte balance (K + Na - Cl) was set at 350 mEq/kg and the electrolyte ratio ((K + Cl)/Na) at 3:1 using the PPFR program (http://www.fmva.unesp.br/ppfr). A total of 300 Cobb 500 1-d-old male broiler chicks were randomly allocated to 24 floor pens, with six replicates per treatment, in a 2x2 factorial arrangement (with and without TC and with and without DE). Treatments consisted of: (T1) a traditional diet without TC; (T2) a traditional diet with TC; (T3) dietary electrolytes without TC; and (T4) dietary electrolytes with TC. Thermal conditioning was conducted at 5 d of age (36°C for 24 h) on only half of the batch (150 birds). After this period, all birds were transferred to 1.5 x 3 m boxes (12 birds/box), with wood shavings reused as litter. Birds in all treatments were exposed to acute heat stress (36°C) for 8 h at 36 d of age, with the temperature and humidity of the birds' microclimate monitored electronically. Feed and water were provided ad libitum, even during the stress periods. Performance data (weight gain, feed intake and feed conversion) and mortality rate were measured. Early thermal conditioning (T2) and dietary electrolytes (T3) were effective in minimizing the mortality of broilers subjected to acute heat stress (P<0.05), without impairing broiler performance. The results also showed a more favorable effect when dietary electrolytes and thermal conditioning were applied simultaneously (T4). However, in the treatment in which neither strategy was applied (T1), the mortality rate was 83% higher than in the treatment in which both were applied (T4). It was concluded that both techniques, early thermal conditioning and dietary electrolytes, are efficacious in minimizing the damaging effects of heat on broilers.
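A back-of-the-envelope sketch of the dietary electrolyte balance mentioned above (K + Na - Cl in mEq/kg of feed); only the standard conversion from percentage composition to mEq/kg is shown, and the ingredient percentages are invented rather than taken from the study's diets.

```python
# Dietary electrolyte balance: DEB = K + Na - Cl, expressed in mEq/kg of feed.
def deb_meq_per_kg(na_pct, k_pct, cl_pct):
    # 1% of an element = 10 g per kg of feed; divide by the equivalent weight
    # in g/mEq (Na 0.0230, K 0.0391, Cl 0.0355) to convert to mEq/kg.
    return 10 * (na_pct / 0.0230 + k_pct / 0.0391 - cl_pct / 0.0355)

# Invented diet composition, not from the study.
print(round(deb_meq_per_kg(na_pct=0.22, k_pct=0.95, cl_pct=0.25)))  # about 268 mEq/kg
```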
Abstract:
Optical networks based on passive star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. Amplifiers are required in such networks to compensate for the power losses due to splitting and attenuation. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus optical amplifier placement becomes a challenging problem. The general problem of minimizing the total amplifier count, subject to the device constraints, is a mixed-integer nonlinear problem. Previous studies have attacked the amplifier placement problem by adding the "artificial" constraint that all wavelengths present at a particular point in a fiber be at the same power level. In this paper, we present a method to solve the minimum amplifier placement problem while avoiding the equally-powered-wavelength constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required in several small to medium-sized networks.
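A greedy illustration only, not the paper's mixed-integer nonlinear formulation: walk along a single fiber path and insert an amplifier whenever the signal would drop below the receiver sensitivity, limited by the device's maximum gain. All dB figures are invented.

```python
# Greedy amplifier placement along one fiber path (illustrative only).
def place_amplifiers(span_losses_db, launch_dbm, sensitivity_dbm, max_gain_db):
    """Return (span index, gain) pairs for amplifiers placed before each span."""
    power = launch_dbm
    placements = []
    for i, loss in enumerate(span_losses_db):
        if power - loss < sensitivity_dbm:
            gain = min(max_gain_db, launch_dbm - power)   # restore toward launch power
            placements.append((i, gain))
            power += gain
        power -= loss
    return placements

spans = [6.0, 4.5, 7.0, 5.5, 6.5]        # per-span loss in dB (splitting + attenuation)
print(place_amplifiers(spans, launch_dbm=0.0, sensitivity_dbm=-20.0, max_gain_db=15.0))
```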
Abstract:
The clustering problem consists in finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD and to evaluate the suitability of the results of the best proposed heuristic against the real classification of some data sets. Regarding the first objective, the experimental results indicate good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most widely used method in the literature for this problem). Nevertheless, regarding the suitability of the results with respect to the real classification of the data sets, the proposed heuristic achieved better-quality results than the C-Means algorithm, but worse than Complete Linkage.
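A minimal sketch of the objective described above, the largest within-cluster distance (diameter) over all clusters, evaluated here for labels produced by Complete Linkage on random 2-D points; the data set and number of clusters are illustrative.

```python
# Evaluate the max within-cluster diameter (the MMD objective) for a clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 2))
k = 4

dist = squareform(pdist(X))                       # full pairwise Euclidean distance matrix
labels = fcluster(linkage(pdist(X), method="complete"), t=k, criterion="maxclust")

def max_diameter(dist, labels):
    """Largest within-cluster pairwise distance across all clusters."""
    return max(dist[np.ix_(labels == c, labels == c)].max() for c in np.unique(labels))

print("max within-cluster diameter:", max_diameter(dist, labels))
```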