Abstract:
Aerodynamic balances are employed in wind tunnels to estimate the forces and moments acting on the model under test. This paper proposes a methodology for the assessment of uncertainty in the calibration of an internal multi-component aerodynamic balance. In order to obtain a suitable model to provide aerodynamic loads from the balance sensor responses, a calibration is performed prior to the tests by applying known weights to the balance. A multivariate polynomial fitting by the least squares method is used to interpolate the calibration data points. The uncertainties of both the applied loads and the readings of the sensors are considered in the regression. The data reduction includes the estimation of the calibration coefficients, the predicted values of the load components and their corresponding uncertainties, as well as the goodness of fit.
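As a rough illustration of the regression step described above, the following Python sketch fits a quadratic calibration polynomial for a single load component by weighted least squares, propagating only the load uncertainties (the paper also accounts for the sensor-reading uncertainties). The data, polynomial order and function names are illustrative assumptions, not taken from the paper.

import numpy as np

def design_matrix(readings):
    # quadratic polynomial terms: constant, linear, squares and cross terms
    r = np.asarray(readings)                     # shape (n_points, n_sensors)
    n = r.shape[1]
    cols = [np.ones(len(r))]
    cols += [r[:, i] for i in range(n)]
    cols += [r[:, i] * r[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

def fit_component(readings, loads, load_sigma):
    # weighted least-squares estimate of the calibration coefficients
    X = design_matrix(readings)
    w = 1.0 / np.asarray(load_sigma) ** 2        # inverse-variance weights
    Xt_W = X.T * w
    cov = np.linalg.inv(Xt_W @ X)                # coefficient covariance matrix
    coef = cov @ (Xt_W @ np.asarray(loads))
    return coef, cov

# synthetic example: 2 sensors, 50 calibration points (illustrative only)
rng = np.random.default_rng(0)
readings = rng.normal(size=(50, 2))
true_load = 3.0 + 2.0 * readings[:, 0] - 0.5 * readings[:, 0] * readings[:, 1]
loads = true_load + rng.normal(scale=0.01, size=50)
coef, cov = fit_component(readings, loads, load_sigma=np.full(50, 0.01))
print(coef)                    # recovered calibration coefficients
print(np.sqrt(np.diag(cov)))   # their standard uncertainties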
Abstract:
This paper describes an investigation of the hybrid PSO/ACO algorithm to automatically classify the stages of well drilling operations. The method's feasibility is demonstrated by its application to a real mud-logging dataset. The results are compared with those of bio-inspired methods and with rule induction and decision tree algorithms for data mining. © 2009 Springer Berlin Heidelberg.
Abstract:
An optimization technique to solve the distribution network planning (DNP) problem is presented. This is a very complex mixed binary nonlinear programming problem. A constructive heuristic algorithm (CHA), aimed at obtaining an excellent quality solution for this problem, is presented. In each step of the CHA, a sensitivity index is used to add a circuit or a substation to the distribution network. This sensitivity index is obtained by solving the DNP problem with the numbers of circuits and substations to be added treated as continuous variables (the relaxed problem). The relaxed problem is a large and complex nonlinear programming problem and was solved through an efficient nonlinear optimization solver. A local improvement phase and a branching technique were implemented in the CHA. Results of two tests using a distribution network are presented in the paper in order to show the ability of the proposed algorithm. ©2009 IEEE.
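A minimal sketch of the constructive loop described above is given below; solve_relaxed_problem and is_feasible are hypothetical placeholders standing in for the relaxed DNP solution and the feasibility check, and the local improvement and branching phases are omitted.

def constructive_heuristic(candidates, solve_relaxed_problem, is_feasible):
    # hypothetical placeholders: solve_relaxed_problem(plan) returns a
    # {candidate: sensitivity index} dict from the relaxed DNP solution,
    # is_feasible(plan) checks whether the current network is adequate
    plan = set()                                 # circuits/substations added so far
    while not is_feasible(plan):
        index = solve_relaxed_problem(plan)
        # add the unused candidate with the largest sensitivity index
        best = max((c for c in candidates if c not in plan),
                   key=lambda c: index.get(c, 0.0))
        plan.add(best)
    return plan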
Abstract:
In this paper, the short term transmission network expansion planning (STTNEP) is solved through a specialized genetic algorithm (SGA). A complete AC model of the transmission network is used, which permits the formulation of an integrated power system transmission network expansion planning problem (real and reactive power planning). The characteristics of the proposed SGA to solve the STTNEP problem are detailed and an interior point method is employed to solve nonlinear programming problems during the solution steps of the SGA. Results of tests carried out with two electrical energy systems show the capabilities of the SGA and also the viability of using the AC model to solve the STTNEP problem. © 2009 IEEE.
Abstract:
This paper presents a methodology to solve the transmission network expansion planning (TNEP) problem considering reliability and uncertainty in the demand. The proposed methodology provides an optimal expansion plan that allows the power system to operate adequately, with an acceptable level of reliability, in an environment with uncertainty. The reliability criterion limits the expected value of the reliability index (LOLE - Loss of Load Expectation) of the expanded system. The reliability of the transmission system is evaluated using an analytical technique based on enumeration. The mathematical model is solved efficiently using a modified Chu-Beasley specialized genetic algorithm. Detailed results from an illustrative example are presented and discussed. © 2009 IEEE.
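As a rough illustration of the enumeration-based reliability evaluation, the sketch below computes an LOLE-type index by enumerating component outage states; it uses a simple capacity-versus-peak-load check (whereas the paper evaluates the transmission system), and the capacities, forced outage rates and load level are illustrative assumptions.

from itertools import product

def lole(capacities, fors, peak_load, hours=8760):
    # enumerate every in-service/out-of-service combination of the components
    lolp = 0.0
    for state in product((1, 0), repeat=len(capacities)):    # 1 = in service
        prob, avail = 1.0, 0.0
        for up, cap, q in zip(state, capacities, fors):
            prob *= (1.0 - q) if up else q
            avail += cap if up else 0.0
        if avail < peak_load:                    # load cannot be fully supplied
            lolp += prob
    return lolp * hours                          # expected hours of load loss per year

# illustrative data: three components with forced outage rates (FOR)
print(lole(capacities=[100, 100, 80], fors=[0.05, 0.05, 0.08], peak_load=180))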
Abstract:
In this work the multi-area optimal power flow (OPF) problem is decoupled into areas, creating a set of regional OPF subproblems. The objective is to solve the optimal dispatch of active and reactive power for a given area without interfering with the neighboring areas. The regional OPF subproblems are modeled as a large-scale nonlinear constrained optimization problem with both continuous and discrete variables. Violated constraints are handled as objective functions of the problem. In this way the original problem is converted into a multiobjective optimization problem, and a specifically designed multiobjective evolutionary algorithm is proposed for solving the regional OPF subproblems. The proposed approach has been examined and tested on the RTS-96 and IEEE 354-bus test systems. Good quality suboptimal solutions were obtained, proving the effectiveness and robustness of the proposed approach. ©2009 IEEE.
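A minimal sketch of the constraint-to-objective conversion mentioned above: a candidate operating point is scored as a vector of generation cost plus one aggregated violation term per constraint group; the limits and quantities below are illustrative, not the paper's formulation.

def objective_vector(gen_cost, voltages, v_min, v_max, line_flows, flow_limits):
    # violated constraints become extra objectives: one aggregated term per group
    v_violation = sum(max(0.0, v_min - v) + max(0.0, v - v_max) for v in voltages)
    f_violation = sum(max(0.0, abs(f) - lim) for f, lim in zip(line_flows, flow_limits))
    return (gen_cost, v_violation, f_violation)  # minimized jointly by the evolutionary algorithm

# illustrative candidate: cost 1250, one over-voltage bus, one overloaded line
print(objective_vector(1250.0, [0.97, 1.08], 0.95, 1.05, [80.0, 120.0], [100.0, 100.0]))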
Abstract:
pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers face some difficulties: only a few data sets are available, and the existing data sets were acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e. the range-energy dependence normalized on the range scale by the full projected CSDA range for the given initial proton energy in the given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies. © 2010 American Institute of Physics.
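A minimal sketch of the reduced calibration curve normalization, assuming the full projected CSDA range at the initial energy is taken from the NIST PSTAR database; the numerical values below are placeholders rather than measured data.

def reduced_curve(points, e0, csda_range_e0):
    # points: iterable of (residual_energy, range) pairs for initial energy e0;
    # each is mapped to (E / E0, range / R_CSDA(E0))
    return [(e / e0, r / csda_range_e0) for e, r in points]

# placeholder example: 200 MeV protons in water, assumed CSDA range ~26 cm
print(reduced_curve([(150.0, 10.0), (100.0, 18.0), (50.0, 23.5)],
                    e0=200.0, csda_range_e0=26.0))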
Abstract:
GEANT4 simulations are essential for the development of medical tomography with proton beams (pCT). In the case of thin absorbers, the latest releases of GEANT4 generate very similar final spectra which agree well with the results of other popular Monte Carlo codes like TRIM/SRIM or MCNPX. For thick absorbers, however, the disagreements become evident. In part, these disagreements are due to the known contradictions between the NIST PSTAR and SRIM reference data. Therefore, it is interesting to compare the GEANT4 results with each other, with experiment, and with the results of diverse codes in a reduced form, which is free from this kind of doubt. In this work such a comparison is done within the Reduced Calibration Curve concept elaborated for proton beam tomography. © 2010 IEEE.
Abstract:
This paper proposes a heuristic constructive multi-start algorithm (HCMA) for distribution system restoration in real time, considering distributed generators installed in the system. The problem is modeled as a nonlinear mixed-integer problem and considers the two main goals of distribution network restoration: minimizing the number of consumers without power and the number of switching operations. The proposed algorithm is implemented in the C++ programming language and tested using a large real-life distribution system. The results show that the proposed algorithm is able to provide a set of feasible, good quality solutions in a time suitable for the problem. © 2011 IEEE.
Abstract:
Despite the widespread use of the differential scanning calorimetry (DSC) technique in the characterization of advanced polymer materials, a new methodology called high heating rate DSC was developed. In conventional DSC experiments the heating rate varies from 10 to 20°C·min-1, the sample mass from 10 to 15 mg, and the standard aluminum sample pan weighs approximately 27 mg. In order to contribute to a better comprehension of DSC behavior at different heating rates, this work correlates the influence of high heating rates with the thermal events observed in DSC experiments. Samples of metallic standards (In, Pb, Sn and Zn) with masses varying from 0.570 mg to 20.9 mg were analyzed at multiple heating rates from 4 to 324°C·min-1. In order to perform all those experiments properly, precise and careful temperature and enthalpy calibrations were carried out and are discussed in depth. Thus, this work presents a DSC methodology able to generate good and reliable results in experiments at any heating rate chosen by the researcher to characterize the advanced materials used, for example, in the aerospace industry. It also helps DSC users to obtain, on their already installed instruments, better and more accurate DSC test results, improving both the sensitivity and the resolution of the analysis in a single run. The melting and enthalpy thermal events of polypropylene are also studied using both the conventional DSC method and the high heating rate method.
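As a rough illustration of the temperature calibration step, the sketch below fits a straight line between observed melting onsets of the metallic standards and their reference melting temperatures; the reference values are the usual literature ones, while the "observed" onsets are placeholders.

import numpy as np

reference = {"In": 156.6, "Sn": 231.9, "Pb": 327.5, "Zn": 419.5}   # literature melting onsets, deg C
observed = {"In": 157.4, "Sn": 233.0, "Pb": 328.9, "Zn": 421.3}    # placeholder measured onsets

x = np.array([observed[m] for m in reference])
y = np.array([reference[m] for m in reference])
slope, intercept = np.polyfit(x, y, 1)           # linear calibration T_true = a * T_obs + b

def correct(temperature_obs):
    # apply the calibration line to an observed temperature
    return slope * temperature_obs + intercept

print(round(correct(165.0), 2))                  # corrected value for one reading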
Abstract:
Transmission expansion planning (TEP) is a non-convex optimization problem that can be solved via different heuristic algorithms. A variety of classical as well as heuristic algorithms have been addressed in the literature to solve the TEP problem. In this paper a modified constructive heuristic algorithm (CHA) is proposed for solving this crucial problem. Most research papers handle the TEP problem by linearizing the non-linear mathematical model, whereas in this research the TEP problem is solved via the CHA using the non-linear model. The proposed methodology is based upon Garver's algorithm and is applicable to a DC model. Simulation studies and test results on well-known transmission networks, namely the Garver and IEEE 24-bus systems, are presented to show the significant performance as well as the effectiveness of the proposed algorithm. © 2011 IEEE.
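A minimal sketch of a Garver-style sensitivity index, assuming the relaxed DC planning problem has already been solved with candidate circuits treated as continuous variables; the fractional circuit numbers and flows below are placeholders, not results from the paper.

def garver_index(fractional_circuits, flows):
    # rank each candidate branch by (fractional number of added circuits) x |flow|
    return {br: fractional_circuits[br] * abs(flows[br]) for br in fractional_circuits}

relaxed = {("1", "5"): 0.7, ("2", "6"): 1.4, ("3", "5"): 0.2}    # placeholder n_ij from relaxed solution
flows = {("1", "5"): 90.0, ("2", "6"): 160.0, ("3", "5"): 35.0}  # placeholder flows in MW
index = garver_index(relaxed, flows)
best = max(index, key=index.get)                 # branch to add in this CHA step
print(best, round(index[best], 1))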
Abstract:
A significant amount of information stored in different databases around the world can be shared through peer-to-peer databases. This yields a large knowledge base without the need for large investments, because existing databases and infrastructure are used. However, the structural characteristics of peer-to-peer systems make the process of finding such information complex. On the other hand, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow the user to access information from databases scattered across the network and receive only the information that is really related to his or her topic of interest. This paper proposes the use of ontologies in peer-to-peer database queries to represent the semantics inherent to the data. The main contributions of this work are enabling integration between heterogeneous databases, improving the performance of such queries, and using the Ant Colony optimization algorithm to solve the problem of locating information in peer-to-peer networks, which yields an improvement of 18% in the results. © 2011 IEEE.
Abstract:
In a peer-to-peer network, the nodes interact with each other by sharing resources, services and information. Many applications have been developed using such networks, one class of which is peer-to-peer databases. Peer-to-peer database systems allow the sharing of unstructured data and are able to integrate data from several sources without the need for large investments, because existing repositories are used. However, the high flexibility and dynamicity of the network, as well as the absence of centralized management of information, make the process of locating information among the various participants in the network complex. In this context, this paper presents original contributions through a proposed architecture for a routing system that uses the Ant Colony algorithm to optimize the search for desired information, supported by ontologies that add semantics to the shared data, enabling integration among heterogeneous databases while seeking to reduce the message traffic on the network without losses in the number of responses, as confirmed by an improvement of 22.5% in this number. © 2011 IEEE.
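A minimal sketch of ant-colony query routing in a peer-to-peer overlay, with a generic similarity score standing in for the ontology-based semantic component described above; the topology, parameters and class name are illustrative assumptions.

import random

class AntRouter:
    # one routing table per peer: pheromone per neighbor link
    def __init__(self, neighbors, alpha=1.0, beta=2.0, rho=0.1):
        self.pheromone = {n: 1.0 for n in neighbors}
        self.alpha, self.beta, self.rho = alpha, beta, rho

    def choose_next_hop(self, similarity):
        # probability proportional to pheromone^alpha * similarity^beta
        weights = {n: self.pheromone[n] ** self.alpha * similarity[n] ** self.beta
                   for n in self.pheromone}
        r, acc = random.uniform(0.0, sum(weights.values())), 0.0
        for n, w in weights.items():
            acc += w
            if r <= acc:
                return n
        return n

    def reinforce(self, neighbor, answers_found):
        # evaporate all trails, then deposit pheromone on the link that answered
        for n in self.pheromone:
            self.pheromone[n] *= (1.0 - self.rho)
        self.pheromone[neighbor] += answers_found

router = AntRouter(["peerA", "peerB", "peerC"])
hop = router.choose_next_hop({"peerA": 0.9, "peerB": 0.3, "peerC": 0.5})
router.reinforce(hop, answers_found=4)
print(hop, router.pheromone)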
Abstract:
Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed algorithm, MR-Radix, presents a framework that promotes the optimization of memory usage. It also uses the concept of partitioning to handle large volumes of data. The original contribution of this proposal is to enable superior performance when compared to other related algorithms and, moreover, to successfully conclude the task of mining association rules in large databases, bypassing the problem of available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
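A minimal sketch of the generic partitioning strategy referred to above: the data are processed in memory-sized chunks, locally frequent itemsets become global candidates, and a second pass counts the candidates over all transactions. It does not reproduce the MR-Radix structures themselves, and the data are illustrative.

from itertools import combinations
from collections import Counter

def local_frequent(partition, min_support, max_size=2):
    # itemsets frequent within a single memory-sized partition
    counts = Counter()
    for t in partition:
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
    return {s for s, c in counts.items() if c >= min_support * len(partition)}

def partitioned_frequent(transactions, n_parts, min_support):
    size = max(1, len(transactions) // n_parts)
    parts = [transactions[i:i + size] for i in range(0, len(transactions), size)]
    candidates = set().union(*(local_frequent(p, min_support) for p in parts))
    counts = Counter()
    for t in transactions:                       # second pass over the full data
        for s in candidates:
            if set(s) <= set(t):
                counts[s] += 1
    return {s for s in candidates if counts[s] >= min_support * len(transactions)}

data = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}, {"a", "b"}]
print(partitioned_frequent(data, n_parts=2, min_support=0.6))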
Abstract:
Aiming to ensure greater reliability and consistency of the data stored in a database, the data cleaning stage is placed early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially for the data mining stage. Such problems occur at the instance and schema levels, namely missing values, null values, duplicate tuples, values outside the domain, among others. Several algorithms have been developed to perform the cleaning step in databases, some of them designed specifically to work with the phonetics of words, since a word can be written in different ways. Within this perspective, this work presents as its original contribution an optimized algorithm for the detection of duplicate tuples in databases through phonetics, based on multithreading, without the need for training data and independent of the language to be supported. © 2011 IEEE.
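A minimal sketch of phonetic duplicate detection, using the classic Soundex encoding (the specific phonetic algorithm used in the paper is not stated in the abstract) and a thread pool for the encoding step; the names are illustrative.

from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def soundex(word):
    # classic American Soundex: first letter kept, consonants mapped to digits
    codes = {c: d for d, letters in
             {"1": "BFPV", "2": "CGJKQSXZ", "3": "DT",
              "4": "L", "5": "MN", "6": "R"}.items() for c in letters}
    word = word.upper()
    encoded, prev = word[0], codes.get(word[0], "")
    for c in word[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            encoded += code
        if c not in "HW":                        # H and W do not separate equal codes
            prev = code
    return (encoded + "000")[:4]

def find_phonetic_duplicates(names, workers=4):
    # encode in parallel, then group records that share the same phonetic code
    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = list(pool.map(soundex, names))
    groups = defaultdict(list)
    for name, code in zip(names, codes):
        groups[code].append(name)
    return {code: group for code, group in groups.items() if len(group) > 1}

print(find_phonetic_duplicates(["Smith", "Smyth", "Robert", "Rupert", "Silva"]))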