432 results for PARAMETERIZATION
Abstract:
The conventional power flow method is considered inadequate for obtaining the maximum loading point because of the singularity of the Jacobian matrix. Continuation methods are efficient tools for solving this kind of problem, since different parameterization schemes can be used to avoid such ill-conditioning. This paper presents the details of new schemes for the parameterization step of the continuation power flow method. The new parameterization options are based on physical parameters, namely the total power losses (real and reactive), the power at the slack bus (real or reactive), the reactive power at generation buses, and the transmission line power losses (real and reactive). The simulation results obtained with the new approach for the IEEE test systems (14, 30, 57, and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are not only preserved but also improved.
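To make the idea concrete, a minimal sketch of a parameterized continuation power flow follows (generic notation, not necessarily the authors' exact formulation): the standard mismatch equations G are augmented with one equation that fixes the chosen physical quantity W (total losses, slack-bus power, reactive generation, or a line loss) at a prescribed value, and the loading factor is treated as an unknown, so the augmented Jacobian stays nonsingular at the maximum loading point.

```latex
% Sketch of an augmented continuation power flow system (generic notation).
% G: power flow mismatch equations; (theta, V): bus angles and voltages;
% lambda: loading factor; W: chosen physical parameter fixed at the value W_k.
\begin{aligned}
  G(\theta, V, \lambda) &= 0, \\
  W(\theta, V, \lambda) - W_k &= 0 .
\end{aligned}
```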
Abstract:
New parameterization schemes were proposed by the authors in Part I of this paper. In this part, these new options for the parameterization of the power flow equations are tested, namely the total power losses (real and reactive), the power at the slack bus (real or reactive), the reactive power at generation buses, and the transmission line power losses (real and reactive). These different parameterization schemes can be used to obtain the maximum loading point without ill-conditioning problems, since the singularity of the Jacobian matrix is avoided. The results obtained with the new approach for the IEEE test systems (14, 30, 57, and 118 buses) show that the characteristics of the conventional method are not only preserved but also improved. In addition, it is shown that it is possible to switch between the proposed method and the conventional one during the tracing of P-V curves to determine, with few iterations, all points of the P-V curve. Several tests were also carried out to compare the performance of the proposed parameterization schemes for the continuation power flow method with both the secant and the tangent predictors.
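For reference, the two predictor types compared here are commonly written as follows (standard textbook forms, not necessarily the authors' notation); z collects the state variables and the loading factor, sigma is the step size, and e_p selects the continuation parameter.

```latex
% Tangent predictor: solve for the tangent vector t of the augmented system,
% then step along it. Secant predictor: extrapolate through the two previous
% corrected solutions. G_z denotes the Jacobian of the mismatches w.r.t. z.
\begin{aligned}
  \text{tangent:}\quad & \begin{bmatrix} G_z \\ e_p^{\mathsf T} \end{bmatrix} t
      = \begin{bmatrix} 0 \\ \pm 1 \end{bmatrix},
      \qquad z^{\text{pred}} = z_k + \sigma\, t, \\
  \text{secant:}\quad & z^{\text{pred}} = z_k + \sigma\,(z_k - z_{k-1}).
\end{aligned}
```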
Abstract:
Continuation methods have been shown to be efficient tools for solving ill-conditioned cases with near-singular Jacobian matrices, such as the maximum loading point of power systems. Some parameterization techniques have been proposed to avoid matrix singularity and successfully solve those cases. This paper presents a new geometric parameterization scheme that allows the complete tracing of the P-V curves without ill-conditioning problems. The proposed technique combines robustness with simplicity and is easy to understand. The singularity of the Jacobian matrix is avoided by the addition of a line equation, which passes through a point in the plane determined by the total real power losses and the loading factor, two parameters with clear physical meaning. The application of this new technique to the IEEE systems (14, 30, 57, 118, and 300 buses) shows that the best characteristics of the conventional Newton's method are not only preserved but also improved.
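A minimal sketch of the geometric idea (generic symbols; the paper's notation may differ): the added equation is a straight line in the plane of the loading factor lambda and the total real power losses P_a, passing through a chosen point (lambda_0, P_a^0) with angular coefficient alpha. Stepping alpha instead of lambda selects successive intersections of the line with the P-V curve, so the whole curve, including the nose point, can be traced with a nonsingular augmented Jacobian.

```latex
% G: power flow mismatch equations; P_a: total real power losses;
% (lambda_0, P_a^0): chosen point in the (lambda, P_a) plane; alpha: line slope.
\begin{aligned}
  G(\theta, V, \lambda) &= 0, \\
  P_a(\theta, V) - P_a^{0} - \alpha\,(\lambda - \lambda_0) &= 0 .
\end{aligned}
```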
Abstract:
This paper presents efficient geometric parameterization techniques for the continuation power flow, using the tangent and the trivial predictors, developed from observation of the trajectories of the load flow solutions. The parameterization technique eliminates the singularity of the load flow Jacobian matrix, and therefore all the consequent ill-conditioning problems, by adding line equations that pass through points in the plane determined by the loading factor and the real power generated by the slack bus, two parameters with clear physical meaning. This paper also provides automatic step-size control around the maximum loading point. Thus, the resulting method enables not only the calculation of the maximum loading point, but also the complete tracing of the P-V curves of electric power systems. The technique combines robustness with ease of understanding. The results for the IEEE 300-bus system and for large real systems show the effectiveness of the proposed method.
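Purely as an illustration of a predictor-corrector loop with automatic step-size reduction (a toy scalar model with a trivial predictor, not the authors' implementation or a real power flow), a short sketch follows.

```python
# Minimal predictor-corrector continuation sketch with automatic step-size
# control (illustrative only). The toy equation g(x, lam) = 0 has a fold
# (maximum "loading" point) at lam = 1, mimicking the nose of a P-V curve.

def g(x, lam):
    # Toy model: x**2 = 1 - lam, so real solutions exist only for lam <= 1.
    return x**2 - (1.0 - lam)


def corrector(x0, lam, tol=1e-10, max_iter=20):
    """Newton correction in x for a fixed lam (a stand-in for the corrector step)."""
    x = x0
    for _ in range(max_iter):
        f = g(x, lam)
        dfdx = 2.0 * x
        if abs(dfdx) < 1e-12:          # near-singular "Jacobian"
            return None
        x -= f / dfdx
        if abs(g(x, lam)) < tol:
            return x
    return None                        # corrector failed to converge


def trace_curve(lam0=0.0, x0=1.0, step=0.2, min_step=1e-4):
    """Trace the upper branch, halving the step whenever the corrector fails."""
    points = [(lam0, x0)]
    lam, x = lam0, x0
    while step > min_step:
        x_new = corrector(x, lam + step)   # trivial (flat) predictor
        if x_new is None:                  # failure near the nose: reduce step
            step *= 0.5
            continue
        lam, x = lam + step, x_new
        points.append((lam, x))
    return points


if __name__ == "__main__":
    for lam, x in trace_curve():
        print(f"lam = {lam:.4f}  x = {x:.4f}")
```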
Abstract:
We present a new strategy, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary with a desired tolerance...
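As a rough, self-contained analogue of the parameterization task (a much simpler stand-in, not the meccano/T-spline algorithm itself), the sketch below maps the boundary of the unit square onto a hypothetical closed curve and places the interior grid nodes by plain Laplacian smoothing; the actual method instead builds an adapted T-mesh and optimizes its node positions.

```python
# Illustrative sketch only: structured-grid parameterization of a convex
# "physical" domain (an ellipse) over the unit square, with interior nodes
# placed by iterative Laplacian smoothing. The boundary curve is hypothetical.
import numpy as np


def boundary_point(s):
    """Map a boundary parameter s in [0, 4) of the unit square to a point on a
    hypothetical physical boundary (an ellipse traversed counterclockwise)."""
    t = 2.0 * np.pi * s / 4.0
    return np.array([2.0 * np.cos(t), np.sin(t)])


def parameterize(n=17, iterations=500):
    """Return an (n, n, 2) array: physical positions of the n x n parametric grid."""
    xy = np.zeros((n, n, 2))
    for k in range(n):
        u = k / (n - 1)
        xy[0, k] = boundary_point(u)                  # bottom edge, left to right
        xy[k, n - 1] = boundary_point(1 + u)          # right edge, bottom to top
        xy[n - 1, n - 1 - k] = boundary_point(2 + u)  # top edge, right to left
        xy[n - 1 - k, 0] = boundary_point(3 + u)      # left edge, top to bottom
    for _ in range(iterations):
        # Each interior node moves to the average of its four grid neighbours.
        xy[1:-1, 1:-1] = 0.25 * (xy[:-2, 1:-1] + xy[2:, 1:-1] +
                                 xy[1:-1, :-2] + xy[1:-1, 2:])
    return xy


if __name__ == "__main__":
    grid = parameterize()
    print(grid.shape, grid[8, 8])   # centre node of the mapped grid
```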
Abstract:
We have recently introduced a new strategy, based on the meccano method [1, 2], to construct a T-spline parameterization of 2D and 3D geometries for the application of isogeometric analysis [3, 4]. The proposed method only demands a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between the objects and the parametric domain, i.e., the meccano. The key of the method lies in defining an isomorphic transformation between the parametric and physical T-meshes and finding the optimal position of the interior nodes, once the meccano boundary nodes are mapped to the boundary of the physical domain…
Abstract:
We present a new method, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary with a desired tolerance…
Abstract:
Eutrophication is a persistent problem in many fresh water lakes. Delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in fresh water ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, the application of management tools to sediment diagenesis is often neglected due to conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while being accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. SED2K has previously been applied to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P, and Residual-P. Chemical fractionation data from this study showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile: sorption has been shown to contribute substantially to P burial within the profile. A new kinetic approach involving partitioning of P into process-based fractions is applied here. Results from this approach indicate that labile P (Ca Mineral and Organic P) is contributing to internal P loading to Onondaga Lake, through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains stable. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus, to quantify the remaining phosphorus that will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. The results presented here also allowed estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
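To illustrate the depth-varying first-order kinetics described above (not the SED2K code; all parameter values are hypothetical), a short sketch of a steady-state labile-P profile follows.

```python
# Sketch: remaining labile P versus depth for first-order decay whose rate
# coefficient decreases with depth, under constant burial velocity.
# v, k0, beta, and P0 are illustrative placeholders, not fitted values.
import numpy as np

v = 0.3     # burial (sedimentation) velocity, cm/yr      (assumed)
k0 = 0.05   # decay rate coefficient at the surface, 1/yr (assumed)
beta = 0.2  # attenuation of the rate with depth, 1/cm    (assumed)
P0 = 1.0    # labile P at the time of deposition (normalized)

z = np.linspace(0.0, 30.0, 301)   # depth below the sediment-water interface, cm
# Remaining labile P at depth z: P0 * exp(-(1/v) * integral_0^z k0*exp(-beta*z') dz')
integral_k = (k0 / beta) * (1.0 - np.exp(-beta * z))
P_labile = P0 * np.exp(-integral_k / v)

for depth in (0, 5, 10, 20, 30):
    i = np.argmin(np.abs(z - depth))
    print(f"z = {depth:4.1f} cm   remaining labile P fraction = {P_labile[i]:.3f}")
```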
Abstract:
Methods for optical motion capture often require time-consuming manual processing before the data can be used for subsequent tasks such as retargeting or character animation. These processing steps restrict the applicability of motion capture, especially for dynamic VR environments with real-time requirements. To solve these problems, we present two additional, fast and automatic processing stages based on our motion capture pipeline presented in [HSK05]. A normalization step aligns the recorded coordinate systems with the skeleton structure to yield a common and intuitive data basis across different recording sessions. A second step computes a parameterization based on automatically extracted main movement axes to generate a compact motion description. Our method restricts neither the placement of marker bodies nor the recording setup, and only requires a short calibration phase.
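A minimal sketch of the second stage, extracting main movement axes and projecting the motion onto them, is given below using plain PCA on a synthetic motion array; the actual pipeline and data layout of [HSK05] may differ.

```python
# Compact motion parameterization via PCA: the main movement axes are the
# leading right singular vectors of the centered frame matrix, and the
# per-frame coefficients along those axes form the compact description.
import numpy as np


def parameterize_motion(frames, n_axes=3):
    """frames: (n_frames, n_dofs) array of marker/joint coordinates per frame."""
    mean_pose = frames.mean(axis=0)
    centered = frames - mean_pose
    _u, _s, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:n_axes]                  # (n_axes, n_dofs) main movement axes
    coefficients = centered @ axes.T    # (n_frames, n_axes) compact parameters
    return coefficients, axes, mean_pose


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_motion = rng.normal(size=(120, 45))      # 120 frames, 15 markers x 3 coords
    coeffs, axes, mean_pose = parameterize_motion(fake_motion)
    reconstructed = coeffs @ axes + mean_pose     # low-dimensional approximation
    print(coeffs.shape, axes.shape, float(np.abs(reconstructed - fake_motion).mean()))
```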
Abstract:
Localized short-echo-time ¹H-MR spectra of the human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. The gain from basis parameterization, i.e., describing basis spectra as sums of parametric lineshapes, was investigated. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content values for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
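As a highly simplified sketch of linear-combination modeling (fitting amplitudes only; real tools additionally fit lineshape, phase, and frequency-shift parameters, and all arrays below are synthetic placeholders):

```python
# Fit a measured spectrum as a least-squares combination of metabolite basis
# spectra plus an optional measured macromolecular baseline component.
import numpy as np


def fit_linear_combination(spectrum, basis, baseline=None):
    """spectrum: (n_points,); basis: (n_metabolites, n_points);
    baseline: optional (n_points,) macromolecular component.
    Returns concentration-like amplitudes for each component."""
    components = basis if baseline is None else np.vstack([basis, baseline])
    amplitudes, *_ = np.linalg.lstsq(components.T, spectrum, rcond=None)
    return amplitudes


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    points = 512
    basis = np.abs(rng.normal(size=(22, points)))   # 22 synthetic metabolite spectra
    baseline = np.abs(rng.normal(size=points))      # synthetic macromolecular baseline
    true_amps = rng.uniform(0.0, 2.0, size=23)
    spectrum = true_amps @ np.vstack([basis, baseline]) + 0.01 * rng.normal(size=points)
    est = fit_linear_combination(spectrum, basis, baseline)
    print(np.round(est - true_amps, 3))             # estimation error per component
```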
Abstract:
Existing evidence of plant phenological responses to temperature increase demonstrates that phenological responsiveness is greater at warmer locations and in early-season plant species. Explanations of these findings are scarce and not settled. Some studies suggest considering phenology as one functional trait within a plant's life history strategy. In this study, we adapt an existing phenological model to derive a generalized sensitivity-in-space (SpaceSens) model for calculating the temperature sensitivity of spring plant phenophases across species and locations. The SpaceSens model has three parameters: the temperature at the onset date of the phenophase (Tp), the base temperature threshold (Tb), and the length of the period (L) used to calculate the mean temperature when performing regression analysis between phenology and temperature. A case study on the first leaf date of 20 plant species from eastern China shows that the variation of Tp and Tb among species accounts for the interspecific differences in temperature sensitivity. Moreover, lower Tp at lower latitudes is the main reason why spring phenological responsiveness is greater there. These results suggest that the spring phenophases of more responsive, early-season plants (especially at low latitudes) will probably continue to diverge from those of late-season plants as temperatures warm in the future.
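For context, the temperature sensitivity discussed above is typically the slope of a regression of the onset date on the mean temperature over the preceding L days; a generic sketch with synthetic data follows (this is the regression step only, not the SpaceSens derivation of Tp and Tb).

```python
# Temperature sensitivity (days per degree C) as the regression slope of
# onset day-of-year against mean temperature over the L days before onset.
import numpy as np


def temperature_sensitivity(onset_doy, daily_temp, L=30):
    """onset_doy: (n_years,) first-leaf dates as day of year (int).
    daily_temp: (n_years, 366) daily mean temperatures per year."""
    mean_T = np.array([daily_temp[i, d - L:d].mean() for i, d in enumerate(onset_doy)])
    slope, _intercept = np.polyfit(mean_T, onset_doy, 1)
    return slope


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    years = 20
    temps = 5.0 + 10.0 * rng.random((years, 366))                       # synthetic climate
    spring_T = temps[:, 60:90].mean(axis=1)
    onsets = np.clip((120 - 3 * (spring_T - 10)).astype(int), 60, 150)  # synthetic onsets
    print(f"sensitivity = {temperature_sensitivity(onsets, temps):.2f} days/degC")
```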
Abstract:
This paper explains how the Armington-Krugman-Melitz supermodel developed by Dixon and Rimmer can be parameterized, and demonstrates that only two kinds of additional information are required in order to extend a standard trade model to include Melitz-type monopolistic competition and heterogeneous firms. Further, it is shown how specifying too much additional information leads to violations of the model constraints, necessitating adjustment and reconciliation of the data. Once a Melitz-type model is parameterized, a Krugman-type model can also be parameterized using the calibrated values in the Melitz-type model without any additional data. Sample code for the General Algebraic Modeling System (GAMS) has also been prepared to promote the innovative supermodel in the AGE community.
Abstract:
There is interest in performing full-core pin-by-pin computations for present nuclear reactors. In this type of problem, the use of a transport approximation like the diffusion equation requires the introduction of correction parameters. Interface discontinuity factors (IDFs) can improve the diffusion solution so that it nearly reproduces a transport solution. Nevertheless, calculating accurate pin-by-pin IDFs requires knowledge of the heterogeneous neutron flux distribution, which depends on the boundary conditions of the pin cell as well as on the local variables throughout reactor operation. As a consequence, it is impractical to compute them for each possible configuration. An alternative way to generate accurate pin-by-pin interface discontinuity factors is to calculate reference values using zero-net-current boundary conditions and afterwards to synthesize their dependence on the main neighborhood variables. In this way, the factors can be accurately computed during fine-mesh diffusion calculations by correcting the reference values as a function of the actual environment of the pin cell in the core. In this paper we propose a parameterization of the pin-by-pin interface discontinuity factors that allows the implementation of a cross-section library able to treat the neighborhood effect. First results are presented for typical PWR configurations.
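A schematic sketch of the library idea follows; the choice of an interface current-to-flux ratio as the neighborhood variable and of a quadratic correction are assumptions made only for illustration, not the parameterization proposed in the paper.

```python
# Hypothetical IDF library: reference interface discontinuity factors computed
# with zero-net-current boundary conditions are corrected at run time as a
# polynomial function of the actual pin-cell environment.
import numpy as np


class IDFLibrary:
    def __init__(self, idf_ref, poly_coeffs):
        # idf_ref: reference IDF per energy group (zero-net-current conditions).
        # poly_coeffs: per-group polynomial coefficients in the neighborhood variable.
        self.idf_ref = np.asarray(idf_ref)
        self.poly_coeffs = np.asarray(poly_coeffs)

    def corrected_idf(self, current_to_flux):
        """Evaluate corrected IDFs for the actual interface current-to-flux ratio."""
        correction = np.array([np.polyval(c, current_to_flux) for c in self.poly_coeffs])
        return self.idf_ref * (1.0 + correction)


if __name__ == "__main__":
    # Two energy groups with hypothetical reference values and fitted coefficients.
    lib = IDFLibrary(idf_ref=[1.02, 0.95],
                     poly_coeffs=[[0.8, -0.05, 0.0], [1.5, 0.10, 0.0]])
    print(lib.corrected_idf(current_to_flux=0.0))    # recovers the reference IDFs
    print(lib.corrected_idf(current_to_flux=0.03))   # corrected for the environment
```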