987 results for General Algorithm
Abstract:
The neutral wire in most power flow software is usually merged into the phase wires using Kron's reduction. Since the neutral wire and the ground are not explicitly represented, the neutral wire and ground currents and voltages remain unknown. In some applications, such as power quality and safety analyses or loss analysis, knowing the neutral wire and ground currents and voltages can be of special interest. In this paper, a general power flow algorithm for three-phase four-wire radial distribution networks, considering neutral grounding and based on the backward-forward sweep technique, is proposed. In this novel use of the technique, both the neutral wire and the ground are explicitly represented. The problem of a three-phase distribution system with earth return, as a special case of a four-wire network, is also elucidated. Results obtained from several case studies using medium- and low-voltage test feeders with unbalanced load are presented and discussed.
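As a rough illustration of the backward-forward sweep idea on which the proposed algorithm builds, the sketch below runs the two sweeps on a single-phase radial feeder. All line impedances, loads, and the single-phase simplification are made-up assumptions; the paper's four-wire formulation would instead propagate full 4x4 segment impedance matrices and keep explicit neutral and ground return currents.

```python
import numpy as np

# Minimal backward-forward sweep on a single-phase radial feeder (toy data).
# Feeder: node 0 (slack) - 1 - 2 - 3; branch k feeds node k+1.
z = np.array([0.01 + 0.02j, 0.015 + 0.03j, 0.02 + 0.04j])      # series impedances (p.u.)
s_load = np.array([0.0, 0.3 + 0.1j, 0.2 + 0.05j, 0.4 + 0.15j])  # complex power demands (p.u.)
v = np.ones(4, dtype=complex)                                    # flat start, slack = 1.0 p.u.

for _ in range(30):
    # Backward sweep: accumulate branch currents from the feeder end toward the source.
    i_inj = np.conj(s_load / v)                 # load currents at every node
    i_branch = np.zeros(3, dtype=complex)
    for k in range(2, -1, -1):
        downstream = i_branch[k + 1] if k + 1 < 3 else 0.0
        i_branch[k] = i_inj[k + 1] + downstream
    # Forward sweep: update voltages from the slack bus outward.
    v_new = v.copy()
    for k in range(3):
        v_new[k + 1] = v_new[k] - z[k] * i_branch[k]
    if np.max(np.abs(v_new - v)) < 1e-8:        # converged when updates agree
        v = v_new
        break
    v = v_new

print(np.abs(v))  # node voltage magnitudes after convergence
```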
Abstract:
In this paper, a method for solving the Short Term Transmission Network Expansion Planning (STTNEP) problem is presented. The STTNEP is a very complex mixed integer nonlinear programming problem that presents a combinatorial explosion of the search space. In this work, we present a constructive heuristic algorithm to find a solution of excellent quality for the STTNEP. In each step of the algorithm, a sensitivity index is used to add a circuit (transmission line or transformer) to the system. This sensitivity index is obtained by solving the STTNEP problem with the number of circuits to be added treated as a continuous variable (relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved with an interior point method that combines the multiple predictor-corrector and multiple centrality corrections methods, both belonging to the family of higher order interior point methods (HOIPM). Tests were carried out using a modified Garver system, and the results show the good performance of both the constructive heuristic algorithm used to solve the STTNEP problem and the HOIPM used in each step.
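A schematic sketch of the constructive-heuristic loop described above. The functions solve_relaxed() and is_feasible() are hypothetical placeholders: the paper obtains the sensitivity index from the relaxed continuous STTNEP solved by a higher order interior point method, which is abstracted away here.

```python
# Schematic constructive heuristic: repeatedly solve a relaxed (continuous)
# expansion problem, rank candidate circuits by a sensitivity index, and add
# the best one until the plan is feasible.

def solve_relaxed(network, added):
    """Placeholder: return {candidate: continuous circuit count} from the relaxed problem."""
    raise NotImplementedError

def is_feasible(network, added):
    """Placeholder: check operational feasibility (e.g. no overloads) of the current plan."""
    raise NotImplementedError

def constructive_heuristic(network, candidates):
    added = {c: 0 for c in candidates}            # integer circuits committed so far
    while not is_feasible(network, added):
        n_continuous = solve_relaxed(network, added)
        # Sensitivity index: here simply the continuous circuit count suggested
        # by the relaxed problem (an assumption; other indices are possible).
        best = max(candidates, key=lambda c: n_continuous.get(c, 0.0))
        if n_continuous.get(best, 0.0) <= 0.0:
            break                                  # nothing useful left to add
        added[best] += 1                           # commit one circuit on that corridor
    return added
```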
Abstract:
This work presents a branch-and-bound algorithm to solve the multi-stage transmission expansion planning problem. The well-known transportation model is employed; nevertheless, the algorithm can be extended to hybrid models or to more complex ones such as the DC model. Tests with a realistic power system were carried out in order to show the performance of the algorithm for expansion plans executed over different time frames. © 2005 IEEE.
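For orientation only, the skeleton below shows a generic best-first branch-and-bound over build/no-build decisions. The functions lower_bound() and branch_variable() are placeholders (in the paper the bound comes from the relaxed transportation-model problem), and restricting each corridor to a single candidate circuit is a simplification made here.

```python
import heapq

def lower_bound(fixed):
    """Placeholder: optimal cost of the relaxation with some decisions fixed."""
    raise NotImplementedError

def branch_variable(fixed):
    """Placeholder: a corridor whose relaxed value is fractional, or None if integral."""
    raise NotImplementedError

def branch_and_bound(root_fixed):
    best_cost, best_plan = float("inf"), None
    heap = [(lower_bound(root_fixed), 0, root_fixed)]   # best-first on the bound
    counter = 1                                          # tie-breaker for the heap
    while heap:
        bound, _, fixed = heapq.heappop(heap)
        if bound >= best_cost:
            continue                     # prune: cannot improve the incumbent
        var = branch_variable(fixed)
        if var is None:                  # integer solution: update the incumbent
            best_cost, best_plan = bound, fixed
            continue
        for value in (0, 1):             # branch: forbid or force building the circuit
            child = {**fixed, var: value}
            heapq.heappush(heap, (lower_bound(child), counter, child))
            counter += 1
    return best_cost, best_plan
```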
Abstract:
In this paper, the genetic algorithm of Chu and Beasley (GACB) is applied to solve the static and multistage transmission expansion planning problem. The characteristics of the GACB, together with some modifications made to solve the problem above efficiently, are also presented. Results on several well-known systems show that the GACB is very efficient. To validate the GACB, we compare its results with those obtained using other metaheuristics such as tabu search, simulated annealing, an extended genetic algorithm, and hybrid algorithms. © 2006 IEEE.
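A toy illustration of the structural feature that distinguishes the Chu-Beasley GA: one offspring per generation, which replaces the worst individual only if it improves on it and is not a duplicate. The bit-string problem and its fitness function are stand-ins, not the transmission expansion formulation of the paper.

```python
import random

random.seed(1)
N_BITS, POP_SIZE, GENERATIONS = 20, 30, 2000

def fitness(ind):
    # Toy objective: maximize the number of ones (stand-in for plan quality).
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]

def tournament():
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

for _ in range(GENERATIONS):
    p1, p2 = tournament(), tournament()
    cut = random.randint(1, N_BITS - 1)
    child = p1[:cut] + p2[cut:]                  # single-point crossover
    i = random.randrange(N_BITS)
    child[i] = 1 - child[i]                      # single-bit mutation
    worst = min(range(POP_SIZE), key=lambda k: fitness(pop[k]))
    # Chu-Beasley rule: replace the worst only if the child is better and
    # not already in the population (diversity preservation).
    if fitness(child) > fitness(pop[worst]) and child not in pop:
        pop[worst] = child

print(max(fitness(ind) for ind in pop))
```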
Abstract:
In this paper, a method for solving the short term transmission network expansion planning problem is presented. This is a very complex mixed integer nonlinear programming problem that presents a combinatorial explosion of the search space. In order to find a solution of excellent quality for this problem, a constructive heuristic algorithm is presented in this paper. In each step of the algorithm, a sensitivity index is used to add a circuit (transmission line or transformer) or a capacitor bank (fixed or variable) to the system. This sensitivity index is obtained by solving the problem with the numbers of circuits and capacitor banks to be added treated as continuous variables (relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved with a higher order interior point method. The paper reports the results of several tests performed on three well-known electric energy systems in order to show the feasibility and the advantages of using the AC model. ©2007 IEEE.
Abstract:
Network reconfiguration is an important tool to optimize the operating conditions of a distribution system. It is accomplished by modifying the structure of the distribution feeders, that is, by changing the open/closed status of the sectionalizing switches. This not only reduces the power losses but also relieves the overloading of the network components. Network reconfiguration belongs to a complex family of problems because of its combinatorial nature and multiple constraints. This paper proposes a solution to this problem using a specialized evolutionary algorithm with a novel codification and a new way of implementing the genetic operators that takes the problem characteristics into account. The algorithm is presented and tested on a real distribution system, showing excellent results and computational efficiency. © 2007 IEEE.
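The central difficulty such operators must respect is radiality. The sketch below shows a toy branch-exchange search on a small made-up grid: a configuration is coded as the set of closed branches, a move swaps a closed branch for an open one, and a union-find check rejects non-radial candidates. The six-node grid, the resistance-sum objective used as a crude loss proxy, and the hill-climbing acceptance rule are all illustrative assumptions, not the paper's codification or its evolutionary operators.

```python
import random

random.seed(0)
NODES = range(6)
# (u, v, resistance) for every switchable branch of a small meshed toy grid.
BRANCHES = [(0, 1, 0.2), (1, 2, 0.3), (2, 3, 0.25), (3, 4, 0.4),
            (4, 5, 0.35), (5, 0, 0.3), (1, 4, 0.15), (2, 5, 0.2)]

def is_radial(closed):
    """Closed branches must form a spanning tree: |V|-1 edges and no loops."""
    if len(closed) != len(NODES) - 1:
        return False
    parent = list(NODES)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v, _) in closed:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False              # closing this branch would create a loop
        parent[ru] = rv
    return True

def cost(closed):
    return sum(r for (_, _, r) in closed)   # toy loss proxy

config = list(BRANCHES[:5])                 # first five branches already form a tree
assert is_radial(config)
for _ in range(500):
    open_branches = [b for b in BRANCHES if b not in config]
    candidate = config.copy()
    candidate.remove(random.choice(candidate))      # open one closed switch
    candidate.append(random.choice(open_branches))  # close one open switch
    if is_radial(candidate) and cost(candidate) < cost(config):
        config = candidate

print(sorted((u, v) for (u, v, _) in config), round(cost(config), 3))
```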
Abstract:
An optimization technique to solve the distribution network planning (DNP) problem is presented. This is a very complex mixed binary nonlinear programming problem. A constructive heuristic algorithm (CHA) aimed at obtaining an excellent-quality solution for this problem is presented. In each step of the CHA, a sensitivity index is used to add a circuit or a substation to the distribution network. This sensitivity index is obtained by solving the DNP problem with the numbers of circuits and substations to be added treated as continuous variables (relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved with an efficient nonlinear optimization solver. A local improvement phase and a branching technique were implemented in the CHA. Results of two tests using a distribution network are presented in order to show the capability of the proposed algorithm. ©2009 IEEE.
Abstract:
In this work, the multiarea optimal power flow (OPF) problem is decoupled into areas, creating a set of regional OPF subproblems. The objective is to solve the optimal dispatch of active and reactive power for a given area without interfering with the neighboring areas. The regional OPF subproblems are modeled as large-scale nonlinear constrained optimization problems with both continuous and discrete variables. Violated constraints are handled as objective functions of the problem. In this way, the original problem is converted into a multiobjective optimization problem, and a specially designed multiobjective evolutionary algorithm is proposed for solving the regional OPF subproblems. The proposed approach has been examined and tested on the RTS-96 and IEEE 354-bus test systems. Good-quality suboptimal solutions were obtained, demonstrating the effectiveness and robustness of the proposed approach. ©2009 IEEE.
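A minimal sketch of the idea of treating constraint violations as extra objectives. The function evaluate_power_flow() is a hypothetical placeholder for the regional power flow evaluation; the dominance test is the standard Pareto comparison used by multiobjective evolutionary algorithms.

```python
def evaluate_power_flow(solution):
    """Placeholder: return (generation_cost, voltage_violation, branch_overload)."""
    raise NotImplementedError

def objectives(solution):
    cost, v_viol, flow_viol = evaluate_power_flow(solution)
    # Violated constraints become additional objectives to be driven toward zero.
    return (cost, max(0.0, v_viol), max(0.0, flow_viol))

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```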
Abstract:
Resistant hypertension (RH) is characterized by blood pressure above 140 × 90 mmHg despite the use of three antihypertensive drug classes, including a diuretic, in appropriate doses, or by the need for four classes to control blood pressure. Patients with resistant hypertension are at greater risk of having secondary causes of hypertension and may benefit from a therapeutic approach directed at this diagnosis. However, RH is currently little studied, and more knowledge of this clinical condition is needed. In addition, few studies have evaluated this issue in emerging countries. Therefore, we proposed the analysis of specific causes of RH using a standardized protocol in Brazilian patients diagnosed in a center for the evaluation and treatment of hypertension. These patients were managed with a preformulated protocol aimed at identifying the causes of resistant hypertension in each patient through standardized management. The data obtained suggest that patients with resistant hypertension have a higher prevalence of secondary hypertension than general hypertensive patients, as well as a higher prevalence of sleep apnea. There is also a predominance of obesity, noncompliance with diet, and frequent use of drugs that raise blood pressure. These latter factors can likely be addressed at the primary health care level, provided that detailed anamneses directed at the causes of resistant hypertension are performed. © 2012 Livia Beatriz Santos Limonta et al.
Abstract:
OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4) as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish malignant from benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignancy potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
Abstract:
OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This was a quality-improvement study. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into old-algorithm (n = 93) and new-algorithm (n = 113) groups and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p < 0.01). The patients in the new algorithm group were older (41 +/- 1 vs. 46.3 +/- 16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0 (0-3) vs. 2 (0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation, even though more severely ill patients were subjected to videothoracoscopic surgery.
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as the heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm is demonstrated. A sedimentation scheme based on first-order approximations of the vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on the vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes, in accordance with the most widely used approaches, have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of individual PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
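For readers unfamiliar with column sedimentation, the fragment below is a deliberately crude, first-order upwind redistribution of a particle mixing ratio between vertical layers. The layer thicknesses, fall speed, and time step are arbitrary assumptions, and the scheme shown is not the profile-based algorithm of the submodel; it only illustrates what "redistributing material downward by one layer per step" looks like.

```python
import numpy as np

dz = np.full(10, 500.0)            # layer thickness [m], index 0 = top of the column
q = np.zeros(10); q[2] = 1.0       # particle mixing ratio, one loaded layer (toy value)
w = np.full(10, 0.05)              # sedimentation velocity [m/s], directed downward
dt = 3600.0                        # time step [s]

# Fraction of each layer that falls into the layer below during one step
# (capped at 1 so a layer cannot lose more than it holds).
frac = np.minimum(w * dt / dz, 1.0)
flux_out = frac * q
q_new = q - flux_out
q_new[1:] += flux_out[:-1]         # material arrives in the layer below

# The column total is unchanged here because nothing reaches the bottom boundary.
print(q_new, q_new.sum())
```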
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first order extrapolation methods, the reduced rank extrapolation (RRE1) and the minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1, while avoiding both near breakdowns and stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete data log-likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
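Because a squared scheme needs nothing beyond the basic EM map, its structure fits in a short sketch. The code below implements a generic squared-extrapolation step around a toy EM map that fits the two means of an equal-weight, unit-variance Gaussian mixture. The steplength used here, -||r||/||v||, is a commonly used SQUAREM choice and not the exact RRE1/MPE1 steplengths studied in the paper; the data and starting values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

def em_step(theta):
    """One EM update for (mu1, mu2) with known weights 0.5 and unit variances."""
    mu1, mu2 = theta
    d1 = np.exp(-0.5 * (y - mu1) ** 2)
    d2 = np.exp(-0.5 * (y - mu2) ** 2)
    w = d1 / (d1 + d2)                         # responsibilities for component 1
    return np.array([np.sum(w * y) / np.sum(w),
                     np.sum((1 - w) * y) / np.sum(1 - w)])

def squarem(theta, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        t1 = em_step(theta)                    # first EM map application
        t2 = em_step(t1)                       # second ("squaring") application
        r = t1 - theta
        v = (t2 - t1) - r
        if np.linalg.norm(v) < tol or np.linalg.norm(r) < tol:
            return t2
        alpha = -np.linalg.norm(r) / np.linalg.norm(v)   # squared-scheme steplength
        theta_sq = theta - 2 * alpha * r + alpha ** 2 * v
        theta = em_step(theta_sq)              # stabilizing EM step after extrapolation
    return theta

print(squarem(np.array([0.0, 1.0])))           # should approach roughly (-2, 3)
```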
Abstract:
This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases of our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although this equivalence does not hold in general, with our methods profiling and backfitting are asymptotically equivalent. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
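To fix ideas, the sketch below runs backfitting on the simplest special case, a partially linear model y = x*beta + theta(z) + error, alternating an OLS update for the parametric part with a Nadaraya-Watson smoother for the nonparametric part. The simulated data, the smoother, and the bandwidth are assumptions made for illustration; the paper treats far more general repeated-evaluation semiparametric models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
z = rng.uniform(0, 1, size=n)
y = 1.5 * x + np.sin(2 * np.pi * z) + rng.normal(scale=0.3, size=n)

def nw_smooth(z_obs, resid, bandwidth=0.05):
    """Nadaraya-Watson estimate of E[resid | z], evaluated at the observed z."""
    w = np.exp(-0.5 * ((z_obs[:, None] - z_obs[None, :]) / bandwidth) ** 2)
    return (w @ resid) / w.sum(axis=1)

beta, theta = 0.0, np.zeros(n)
for _ in range(50):
    beta_new = np.sum(x * (y - theta)) / np.sum(x * x)   # OLS step given theta
    theta = nw_smooth(z, y - beta_new * x)               # smoother step given beta
    if abs(beta_new - beta) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print(round(beta, 3))   # should be close to the true slope 1.5
```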
Abstract:
The problem of re-sampling spatially distributed data organized on regular or irregular grids to a finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in a way that conserves the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set onto a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data subject to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter is introduced by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
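The sketch below conveys the general idea of re-binning through a parametrized Hermite curve fitted to the integrated (cumulative) data. The function name rebin and the tension parameter are hypothetical: tension scales the edge slopes between zero (a monotone smoothstep between cumulative values, hence no overshoot and no negative output for non-negative input) and full finite-difference slopes (a standard cubic Hermite). The details of the published algorithm may differ.

```python
import numpy as np

def rebin(edges_old, values_old, edges_new, tension=0.5):
    # Cumulative integral of the histogrammed quantity at the old bin edges.
    cum = np.concatenate(([0.0], np.cumsum(values_old)))
    # Edge slopes: finite differences, damped by the tension parameter.
    slopes = np.gradient(cum, edges_old) * tension
    # Evaluate the cubic Hermite interpolant of `cum` at the new edges.
    idx = np.clip(np.searchsorted(edges_old, edges_new) - 1, 0, len(edges_old) - 2)
    h = edges_old[idx + 1] - edges_old[idx]
    t = (edges_new - edges_old[idx]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    cum_new = (h00 * cum[idx] + h10 * h * slopes[idx]
               + h01 * cum[idx + 1] + h11 * h * slopes[idx + 1])
    # New bin contents are differences of the interpolated cumulative curve.
    return np.diff(cum_new)

edges_old = np.linspace(0.0, 10.0, 11)
values_old = np.array([0, 1, 4, 9, 7, 3, 1, 0, 2, 5], dtype=float)
edges_new = np.linspace(0.0, 10.0, 21)
values_new = rebin(edges_old, values_old, edges_new)
print(values_new.sum(), values_old.sum())   # the overall integral is preserved
```

Because the interpolant passes through the cumulative values at the old edges, the total integral over the common range is preserved regardless of the tension setting; the parameter only trades smoothness against overshoot, in the spirit described in the abstract.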