942 results for Current Density Mapping Method
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
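For readers unfamiliar with the bilinear matrix inequality (BMI) formulation referred to above, the prototypical case is static output-feedback stabilization; the display below is a generic illustration, not the authors' exact problem statement. The closed-loop stability condition

\[
P \succ 0, \qquad (A + B K C)^{\mathsf{T}} P + P\,(A + B K C) \prec 0,
\]

is bilinear in the unknowns $P$ (Lyapunov matrix) and $K$ (controller gains) because their product appears in the inequality. Fixing either unknown turns the condition into a linear matrix inequality in the other, which is what allows widely available LMI solvers to be applied in an alternating, iterative fashion, as the abstract describes.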
Abstract:
This work presents, with the aid of the natural approach, an extension of the force density method for the initial shape finding of cable and membrane structures, which leads to the solution of a system of linear equations. This method, here called the natural force density method, preserves the linearity that characterizes the original force density method. At the same time, it overcomes the difficulties that the original procedure faces when coping with irregular triangular finite element meshes. Furthermore, if this method is applied iteratively along the lines prescribed herein, it leads to a viable initial configuration with a uniform, isotropic plane Cauchy stress state. This means that a minimal surface for the membrane can be achieved through a succession of equilibrated configurations. Several numerical examples illustrate the simplicity and robustness of the method. (C) 2008 Elsevier B.V. All rights reserved.
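As background, the original (linear) force density method on which this extension builds fits in a few lines of code. The sketch below is a minimal NumPy implementation of the classical method (Schek) for a pin-jointed cable net, assuming a user-supplied edge list, per-edge force densities q (axial force divided by element length), fixed-node coordinates and nodal loads; the function name and arguments are hypothetical, and it does not reproduce the natural force density extension for membranes described in the abstract.

    import numpy as np

    def force_density_shape(edges, q, fixed, fixed_xyz, loads, n_nodes):
        # Classical force density method: once a force density q = force/length is
        # fixed per edge, nodal equilibrium becomes linear, so the free-node
        # coordinates follow from one linear solve per coordinate direction.
        loads = np.asarray(loads, dtype=float)          # (n_nodes, 3) nodal loads
        fixed_xyz = np.asarray(fixed_xyz, dtype=float)  # (n_fixed, 3) support coordinates
        m = len(edges)
        Cs = np.zeros((m, n_nodes))                     # branch-node incidence matrix
        for k, (i, j) in enumerate(edges):
            Cs[k, i], Cs[k, j] = 1.0, -1.0
        free = [i for i in range(n_nodes) if i not in set(fixed)]
        C, Cf = Cs[:, free], Cs[:, list(fixed)]
        Q = np.diag(np.asarray(q, dtype=float))
        D, Df = C.T @ Q @ C, C.T @ Q @ Cf               # D is the force density matrix
        xyz = np.zeros((n_nodes, 3))
        xyz[list(fixed)] = fixed_xyz
        xyz[free] = np.linalg.solve(D, loads[free] - Df @ fixed_xyz)
        return xyz

Repeating such a solve while updating the force densities from the current geometry is, in spirit, how the abstract's iterative procedure drives the membrane toward a uniform, isotropic stress state and hence toward a minimal surface.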
Abstract:
High-density polyethylene resins have increasingly been used in the production of pipes for pressurized water and gas distribution systems and are expected to remain in service for several years, but they may eventually fail prematurely by creep fracture. The standard methods used to rank resins in terms of their resistance to fracture are expensive and impractical for quality control purposes, justifying the search for alternative methods. The essential work of fracture (EWF) method provides a relatively simple procedure to characterize the fracture behavior of ductile polymers, such as polyethylene resins. In the present work, six resins were analyzed using the EWF methodology. The results show that the plastic work dissipation factor, βw_p, is the most reliable parameter to evaluate performance. Attention must be given to specimen preparation, which might otherwise result in excessive dispersion in the results, especially for the essential work of fracture, w_e.
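In the standard EWF data reduction referred to above, the measured specific total work of fracture $w_f$ (total work divided by ligament area) is plotted against the ligament length $\ell$ and fitted with a straight line,

\[
w_f = w_e + \beta w_p\,\ell ,
\]

so that the intercept gives the essential work of fracture $w_e$ and the slope gives the plastic term $\beta w_p$ discussed in the abstract.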
Abstract:
The effect of different precracking methods on the results of linear elastic K_Ic fracture toughness testing of medium-density polyethylene (MDPE) was investigated. Cryogenic conditions were imposed in order to obtain valid K_Ic values from specimens of suitable size. The most conservative K_Ic values were obtained by slowly pressing a fresh razor blade into the notch root of the specimen. Due to the low deformation level imposed on the crack tip region, the slow razor-blade pressing technique also produced less scatter in the fracture toughness results. It has been shown that the slow stable crack growth preceding catastrophic brittle failure during K_Ic tests in MDPE under cryogenic conditions should not be disregarded, as it has relevant physical meaning and may affect the fracture toughness results. (C) 2010 Elsevier Ltd. All rights reserved.
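For context, the usual linear elastic fracture mechanics data reduction behind such tests computes a provisional toughness from the failure load and then checks a size criterion before accepting it as a valid $K_{Ic}$; for a generic single-edge-notched geometry this reads

\[
K_Q = \frac{P_Q}{B\sqrt{W}}\, f\!\left(\frac{a}{W}\right), \qquad
B,\; a,\; (W-a) \;\ge\; 2.5\left(\frac{K_Q}{\sigma_y}\right)^{2},
\]

where $P_Q$ is the critical load, $B$ and $W$ are the specimen thickness and width, $a$ is the crack length (notch plus precrack), $f(a/W)$ is the geometry calibration function, and $\sigma_y$ is the yield stress. Cryogenic testing raises $\sigma_y$ and thus shrinks the required specimen size, which is why it permits valid values from specimens of practical size, as noted in the abstract.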
Abstract:
We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with excellent properties. The approach is inspired by the principles of the generalized cross entropy method. The proposed density estimation procedure has numerous advantages over the traditional kernel density estimator methods. Firstly, for the first time in the nonparametric literature, the proposed estimator allows for a genuine incorporation of prior information in the density estimation procedure. Secondly, the approach provides the first data-driven bandwidth selection method that is guaranteed to provide a unique bandwidth for any data. Lastly, simulation examples suggest the proposed approach outperforms the current state of the art in nonparametric density estimation in terms of accuracy and reliability.
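As a point of reference for the comparison made in the abstract, the traditional kernel density estimator with a simple rule-of-thumb bandwidth can be written in a few lines. The sketch below is generic NumPy code (the function name and the Silverman-rule choice are illustrative) and is not the generalized-cross-entropy estimator proposed by the authors.

    import numpy as np

    def gaussian_kde(x_grid, data, bandwidth=None):
        # Traditional Gaussian kernel density estimate evaluated on x_grid.
        data = np.asarray(data, dtype=float)
        n = data.size
        if bandwidth is None:
            # Silverman's rule of thumb: one classical data-driven bandwidth choice.
            bandwidth = 1.06 * data.std(ddof=1) * n ** (-1.0 / 5.0)
        u = (np.asarray(x_grid, dtype=float)[:, None] - data[None, :]) / bandwidth
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2.0 * np.pi))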
Abstract:
The construction industry continues to demand huge quantities of natural resources, mainly minerals for mortar and concrete production. The depletion of many quarries and environmental concerns about reducing the dumping of construction and demolition waste in quarries have led to increased procurement and use of recycled aggregates from this type of waste. If these aggregates are to be incorporated in concrete and mortars, it is essential to know their properties in order to guarantee the adequate performance of the end products, in both mechanical and durability-related terms. However, the existing standardized tests were developed for natural aggregates, and several problems arise when they are applied to recycled aggregates, especially fine recycled aggregates (FRA). This paper describes the main problems encountered with these tests and proposes an alternative method to determine the density and water absorption of FRA that avoids them. The use of sodium hexametaphosphate solutions in the water absorption test has proven to improve its efficiency, minimizing cohesion between particles and helping to release entrained air.
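For reference, the quantities the proposed test determines are defined in the usual way; with the oven-dry mass $m_{dry}$ and the saturated-surface-dry mass $m_{ssd}$ of the aggregate sample,

\[
WA = \frac{m_{ssd} - m_{dry}}{m_{dry}} \times 100\% ,
\qquad
\rho_{rd} = \frac{m_{dry}}{V},
\]

where $WA$ is the water absorption, $\rho_{rd}$ the oven-dried particle density and $V$ the particle volume measured by water displacement. Obtaining reliable measurements for fine recycled aggregates is hampered by particle cohesion and entrained air, which is what the hexametaphosphate dispersant reported in the abstract addresses.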
Abstract:
Master's dissertation in Urban Engineering
Abstract:
Purpose: New treatments for long-lasting uveitis need to be tested. Our aim was to develop a six-week model of uveitis in rabbits. Methods: Rabbits were presensitized with an s.c. injection of Mycobacterium tuberculosis H37RA emulsified with TiterMax® Gold adjuvant. Uveitis was induced at days 28 and 50 by intravitreal challenges with the antigen suspension. Ocular inflammation was assessed until euthanasia at day 71 after the s.c. injection of M. tuberculosis H37RA by: (a) the number of inflammatory cells in the aqueous humor (AH); (b) the protein concentration in the AH; (c) the clinical score (mean of conjunctival hyperaemia, conjunctival chemosis, oedema and secretion); (d) the microscopic score (mean presence of fibrin and synechiae, aqueous cell density and aqueous flare grade, as scored by slit lamp). Results: At the presensitization injection sites, rabbits presented flat nodules that progressively vanished. The first challenge induced a significant increase in the four parameters (p < 0.05, Wilcoxon/Kruskal-Wallis test). The AH contained 764 ± 82 cells/µl and 32 ± 0.77 mg protein/ml. During the following days, the inflammatory parameters decreased slightly. The second intravitreal challenge increased inflammation (3564 ± 228 cells/µl AH and 31 ± 1 mg protein/ml), which remained at a high level for a longer period of time. Conclusion: We developed a model of long-term uveitis that could be maintained in rabbits for at least six weeks. Such a model could be used to test the efficacy of either new drugs or various drug delivery systems intended to deliver active agents over a few months.
Abstract:
The contributions of the correlated and uncorrelated components of the electron-pair density to atomic and molecular intracule I(r) and extracule E(R) densities and their Laplacian functions ∇²I(r) and ∇²E(R) are analyzed at the Hartree-Fock (HF) and configuration interaction (CI) levels of theory. The topologies of the uncorrelated components of these functions can be rationalized in terms of the corresponding one-electron densities. In contrast, by analyzing the correlated components of I(r) and E(R), namely I_C(r) and E_C(R), the effect of electron Fermi and Coulomb correlation can be assessed at the HF and CI levels of theory. Moreover, the contribution of Coulomb correlation can be isolated by means of difference maps between the I_C(r) and E_C(R) distributions calculated at the two levels of theory. As application examples, the He, Ne, and Ar atomic series, the C₂²⁻, N₂, O₂²⁺ molecular series, and the C₂H₄ molecule have been investigated. For these atoms and molecules, it is found that Fermi correlation accounts for the main characteristics of I_C(r) and E_C(R), with Coulomb correlation slightly increasing the locality of these functions at the CI level of theory. Furthermore, I_C(r), E_C(R), and the associated Laplacian functions reveal the short-ranged nature and high isotropy of Fermi and Coulomb correlation in atoms and molecules.
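For reference, the intracule and extracule densities analyzed above are the standard reductions of the electron-pair density to relative and center-of-mass coordinates,

\[
I(\mathbf{r}) = \int\!\!\int \Gamma(\mathbf{r}_1,\mathbf{r}_2)\,
\delta\bigl(\mathbf{r}_1-\mathbf{r}_2-\mathbf{r}\bigr)\, d\mathbf{r}_1\, d\mathbf{r}_2 ,
\qquad
E(\mathbf{R}) = \int\!\!\int \Gamma(\mathbf{r}_1,\mathbf{r}_2)\,
\delta\!\left(\tfrac{\mathbf{r}_1+\mathbf{r}_2}{2}-\mathbf{R}\right) d\mathbf{r}_1\, d\mathbf{r}_2 ,
\]

where $\Gamma(\mathbf{r}_1,\mathbf{r}_2)$ is the electron-pair density; in the usual decomposition, the correlated components $I_C$ and $E_C$ discussed in the abstract are what remains after subtracting the uncorrelated part constructed from the one-electron densities.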
Abstract:
A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a ‘coding statistic’ is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C. elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
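The decomposition step described above, a mixture of two normals fitted to window-wise coding statistics, can be illustrated with a short sketch. The code below is a generic two-component Gaussian mixture fit using scikit-learn, with a hypothetical function name; it does not reproduce the authors' particular coding statistic or their error-bound analysis.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def estimate_coding_fraction(coding_stat_per_window):
        # Fit a two-component normal mixture to the window-wise coding statistic and
        # read the mixing weight of the higher-mean (putatively coding) component
        # as a rough estimate of the fraction of coding windows.
        x = np.asarray(coding_stat_per_window, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(x)
        coding_component = int(np.argmax(gmm.means_.ravel()))
        return float(gmm.weights_[coding_component])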
Abstract:
Tone mapping is the problem of compressing the range of a high-dynamic-range image so that it can be displayed on a low-dynamic-range screen without losing details or introducing new ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
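As an illustration of what a global visual-adaptation first stage can look like, the sketch below applies a generic Naka-Rushton-style photoreceptor response; it is not the authors' operator, and the exponent and semi-saturation choice are assumptions.

    import numpy as np

    def global_adaptation(luminance, n=0.74):
        # Naka-Rushton-style response curve: compresses scene luminance into [0, 1).
        L = np.asarray(luminance, dtype=float)
        sigma = np.exp(np.mean(np.log(L + 1e-6)))   # semi-saturation tied to the log-average luminance
        return L ** n / (L ** n + sigma ** n)

A local contrast-enhancement stage, such as the variational one described in the abstract, would then be applied on top of the globally adapted image.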
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for all densities, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
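In symbols, the guarantee stated above is of the form

\[
\|\,\hat f_n - f\,\|_1 \;\le\; 3\,\min_{h,K}\|\,f_{n,h,K} - f\,\|_1 \;+\; c\,\sqrt{\frac{\log n}{n}} ,
\]

holding for all densities $f$, where $\hat f_n$ is the selected kernel estimate, the minimum runs over the allowed smoothing factors $h$ and kernels $K$, and the constant $c$ depends only on the complexity of the kernel family; the display is a paraphrase of the statement in the abstract, with the precise constants given in the paper.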
Abstract:
Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures (sand (Sa) 0.07-0.84; silt 0.03-0.24; clay 0.13-0.78 kg kg⁻¹) and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m⁻³ (2 %). The model y = a + b·BD + c·Sa, named model 2, was selected for its simplicity to estimate macroporosity (Ma), microporosity (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values are expressed in m³ m⁻³, BD in kg dm⁻³ and Sa in kg kg⁻¹. The model was tested against a set of 76 data points from several other authors. An error of about 0.04 m³ m⁻³ (4 %) was observed. Simulations of variations in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was rearranged to obtain other compaction indexes: (a) to simulate the maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; (b) to simulate the relative bulk density (RBD) as a function of BD and Sa (Equation 13); (c) another model to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable in relation to Sa for a fixed value of macroporosity and also confirming the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m⁻³).
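The selected model 2 regressions quoted above translate directly into code; the sketch below (hypothetical function name) simply evaluates the three fitted equations, with BD in kg dm⁻³, Sa in kg kg⁻¹ and porosities in m³ m⁻³.

    def porosity_model2(bd, sa):
        # Model 2 regressions from the abstract: macro-, micro- and total porosity.
        ma = 0.693 - 0.465 * bd + 0.212 * sa   # macroporosity, Ma
        mi = 0.337 + 0.120 * bd - 0.294 * sa   # microporosity, Mi
        tp = 1.030 - 0.345 * bd - 0.082 * sa   # total porosity, TP = Ma + Mi
        return ma, mi, tp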