956 results for COMPUTATIONAL APPROACH
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are straightforward and reliable for on-line leakage detection and localization in a large class of water distribution networks. In general, techniques based on analytical models can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made for a certain leak size pattern, which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the above-mentioned direct modeling approach, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the size of the leaks. This is a combined inverse/direct modeling method that seeks to benefit from both approaches: on one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and leak sizes, and on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method to a DMA (District Metered Area) of the Barcelona water distribution network is provided and discussed. The obtained results show that leakage detection and localization under multiple-leak scenarios may be performed efficiently following a simple procedure.
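A minimal sketch of the inverse step described above: a genetic algorithm searching for multiple (node, size) leak hypotheses that best explain the sensor residuals. The network, the sensor placement, and the toy simulate() function are illustrative assumptions standing in for a real hydraulic model (e.g. an EPANET wrapper), not the authors' implementation.

```python
import random

NODES = list(range(100))    # candidate leak locations (hypothetical network)
SENSORS = [10, 40, 70]      # pressure-sensor nodes (hypothetical)
MAX_LEAKS = 3               # leak hypotheses per chromosome
SIZES = [0.5, 1.0, 2.0]     # candidate leak magnitudes in l/s (assumed)

def simulate(leaks):
    """Toy stand-in for a hydraulic model: each leak depresses the pressure
    at every sensor in proportion to its size and proximity."""
    return [50.0 - sum(size / (1 + abs(node - s)) for node, size in leaks)
            for s in SENSORS]

def fitness(ch, measured):
    return -sum((m - p) ** 2 for m, p in zip(measured, simulate(ch)))

def random_chromosome():
    return [(random.choice(NODES), random.choice(SIZES)) for _ in range(MAX_LEAKS)]

def mutate(ch):
    ch = list(ch)
    ch[random.randrange(MAX_LEAKS)] = (random.choice(NODES), random.choice(SIZES))
    return ch

def crossover(a, b):
    cut = random.randrange(1, MAX_LEAKS)
    return a[:cut] + b[cut:]

def ga(measured, pop_size=60, generations=300):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, measured), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda c: fitness(c, measured))

# Recover three hidden leaks from noiseless synthetic "measurements".
truth = [(12, 2.0), (68, 1.0), (30, 0.5)]
print(ga(simulate(truth)))
```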
Abstract:
Market risk exposure plays a key role in financial institutions' risk management. A possible measure of this exposure is to evaluate the losses likely to be incurred when the prices of the portfolio's assets decline, using Value-at-Risk (VaR) estimates, one of the most prominent measures of financial downside market risk. This paper suggests an evolving possibilistic fuzzy modeling (ePFM) approach for VaR estimation. The approach is based on an extension of possibilistic fuzzy c-means clustering and functional fuzzy rule-based modeling, which employs memberships and typicalities to update clusters and creates new clusters based on a statistical-control, distance-based criterion. ePFM also uses a utility measure to evaluate the quality of the current cluster structure. Computational experiments consider data from the main global equity market indexes of the United States, London, Germany, Spain, and Brazil from January 2000 to December 2012 for VaR estimation using ePFM, traditional VaR benchmarks such as Historical Simulation, GARCH, EWMA, and Extreme Value Theory, and state-of-the-art evolving approaches. The results show that ePFM is a potential candidate for VaR modeling, with better performance than the alternative approaches.
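As context for the benchmarks named above, a minimal sketch of two of them, Historical Simulation and EWMA VaR; the ePFM model itself is more involved and is not reproduced here. The confidence level and decay factor are common defaults, not values taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def var_historical(returns, alpha=0.99):
    """Historical Simulation VaR: empirical quantile of past losses."""
    return -np.quantile(returns, 1 - alpha)

def var_ewma(returns, alpha=0.99, lam=0.94):
    """EWMA (RiskMetrics-style) volatility with a normal quantile."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2   # exponentially weighted variance
    return -norm.ppf(1 - alpha) * np.sqrt(var)

# Usage: daily returns in, one-day 99% VaR out (synthetic data).
rets = np.random.default_rng(0).normal(0.0, 0.01, 1000)
print(var_historical(rets), var_ewma(rets))
```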
Abstract:
Starting from the idea that economic systems fall within complexity theory, where many agents interact with each other without central control and these interactions are able to change the future behavior of the agents and of the entire system, similarly to a chaotic system, we extend the model of Russo et al. (2014) to carry out three experiments focusing on the interaction between Banks and Firms in an artificial economy. The first experiment concerns relationship banking, where, according to the literature, the interaction over time between Banks and Firms is able to produce mutual benefits, mainly due to the reduction of the information asymmetry between them. The second experiment is related to information heterogeneity in the credit market, where the larger the bank, the higher its visibility in the credit market, increasing the number of consultations for new loans. Finally, the third experiment concerns the effects on the credit market of the heterogeneity of prices that Firms face in the goods market.
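An illustrative fragment, not Russo et al.'s code, of the mechanism in the second experiment: Firms consult Banks with probability proportional to bank size, so larger banks receive more loan requests. Bank names and sizes are invented.

```python
import random

banks = {"A": 100.0, "B": 50.0, "C": 10.0}   # bank -> size (assumed values)

def pick_bank(banks):
    names, sizes = zip(*banks.items())
    return random.choices(names, weights=sizes, k=1)[0]  # size-biased choice

# Over many consultations, bank "A" is consulted ~10x as often as "C".
counts = {name: 0 for name in banks}
for _ in range(10_000):
    counts[pick_bank(banks)] += 1
print(counts)
```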
Abstract:
Nowadays, fraud detection is important to avoid nontechnical energy losses. Various electric companies around the world have been faced with such losses, mainly from industrial and commercial consumers. This problem has traditionally been dealt with using artificial intelligence techniques, although their use can result in difficulties such as a high computational burden in the training phase and problems with parameter optimization. A recently developed pattern recognition technique called optimum-path forest (OPF), however, has been shown to be superior to state-of-the-art artificial intelligence techniques. In this paper, we propose to use OPF for nontechnical loss detection, as well as to apply its learning and pruning algorithms for this purpose. Comparisons against neural networks and other techniques demonstrate the robustness of OPF in the automatic identification of commercial losses.
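A minimal sketch of OPF classification as described in the literature: training samples are conquered from prototypes under the f_max path cost (the maximum arc weight along a path), and a test sample takes the label of the training sample offering the cheapest extended path. Prototype selection, normally done via a minimum spanning tree, is assumed given here.

```python
import heapq
import math

def train_opf(samples, labels, prototypes):
    """samples: feature vectors; prototypes: indices into samples."""
    n = len(samples)
    cost = [math.inf] * n
    out_label = list(labels)
    heap = []
    for p in prototypes:
        cost[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    while heap:                           # Dijkstra-like loop with f_max cost
        c, s = heapq.heappop(heap)
        if c > cost[s]:
            continue
        for t in range(n):
            if t == s:
                continue
            new_cost = max(cost[s], math.dist(samples[s], samples[t]))
            if new_cost < cost[t]:
                cost[t] = new_cost        # t is conquered through s
                out_label[t] = out_label[s]
                heapq.heappush(heap, (new_cost, t))
    return cost, out_label

def classify(x, samples, cost, out_label):
    best = min(range(len(samples)),
               key=lambda s: max(cost[s], math.dist(samples[s], x)))
    return out_label[best]

# Tiny usage: two 1-D classes with one prototype each.
samples = [(0.0,), (1.0,), (5.0,), (6.0,)]
cost, lab = train_opf(samples, [0, 0, 1, 1], prototypes=[0, 3])
print(classify((4.5,), samples, cost, lab))   # -> 1
```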
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background: The tectum is a structure localized in the roof of the midbrain in vertebrates, and is taken to be highly conserved in evolution. The present article assessed three hypotheses concerning the evolution of lamination and cytoarchitecture of the tectum of nontetrapod animals: 1) there is a significant degree of phylogenetic inertia in both traits studied (number of cellular layers and number of cell classes in the tectum); 2) both traits are positively correlated across evolution after correction for phylogeny; and 3) different developmental pathways should generate different patterns of lamination and cytoarchitecture. Methodology/Principal Findings: The hypotheses were tested using analytical-computational tools for phylogenetic hypothesis testing. Both traits presented a considerably large phylogenetic signal and were positively associated. However, no difference was found between the two clades classified as per the general developmental pathways of their brains. Conclusions/Significance: The evidence amassed points to more variation in the tectum than would be expected by phylogeny in three species from the taxa analysed; this variation is not better explained by differences in the main course of development, as would be predicted by the developmental clade hypothesis. These findings shed new light on the evolution of a functionally important structure in nontetrapods, the most basal radiations of vertebrates.
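Hypothesis 2 requires correlating traits "after correction for phylogeny"; a standard tool for this is Felsenstein's phylogenetically independent contrasts. The sketch below is a generic illustration on a small binary tree, not the authors' actual toolchain, and the trait values are invented.

```python
def contrasts(node):
    """node = (subtree, branch_length); subtree is a trait value (float) for
    a leaf, or a (left, right) pair of nodes for an internal node.
    Returns (ancestral value, effective branch length, list of contrasts)."""
    subtree, bl = node
    if isinstance(subtree, float):               # leaf
        return subtree, bl, []
    x1, v1, c1 = contrasts(subtree[0])
    x2, v2, c2 = contrasts(subtree[1])
    contrast = (x1 - x2) / (v1 + v2) ** 0.5      # standardized contrast
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)  # weighted ancestral state
    v = bl + v1 * v2 / (v1 + v2)                 # branch + pooled variance
    return x, v, c1 + c2 + [contrast]

# Four-taxon example ((A,B),(C,D)) with unit branch lengths, invented traits.
left = (((1.0, 1.0), (2.0, 1.0)), 1.0)
right = (((5.0, 1.0), (7.0, 1.0)), 1.0)
print(contrasts(((left, right), 0.0))[2])        # n - 1 = 3 contrasts
```

Contrasts computed this way for two traits can then be correlated directly, since shared ancestry has been factored out.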
Abstract:
The paper presents a new methodology to model material failure in two-dimensional reinforced concrete members using the Continuum Strong Discontinuity Approach (CSDA). The mixture theory is used as the methodological approach to model reinforced concrete as a composite material, constituted by a plain concrete matrix reinforced with two embedded orthogonal long fiber bundles (rebars). Matrix failure is modeled on the basis of a continuum damage model equipped with strain softening, whereas the rebar effects are modeled by means of phenomenological constitutive models devised to reproduce the axial non-linear behavior, as well as the bond-slip and dowel effects. The proposed methodology extends the fundamental ingredients of the standard Strong Discontinuity Approach, and the embedded discontinuity finite element formulations, from homogeneous materials to matrix/fiber composite materials such as reinforced concrete. The specific aspects of material failure modeling for those composites are also addressed. A number of available experimental tests are reproduced in order to illustrate the feasibility of the proposed methodology. (c) 2007 Elsevier B.V. All rights reserved.
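A minimal 1D sketch of the kind of continuum damage model with strain softening used for the concrete matrix; the exponential softening law and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

E = 30e9     # Young's modulus [Pa] (assumed)
r0 = 1e-4    # damage threshold (strain-like internal variable, assumed)
A = 0.5      # softening parameter (assumed)

def stress(strain, r):
    """Return (stress, updated threshold r) for a strain-driven step."""
    r = max(r, abs(strain))                  # threshold never decreases
    if r <= r0:
        d = 0.0                              # elastic regime
    else:
        d = 1.0 - (r0 / r) * math.exp(A * (1.0 - r / r0))  # softening
    return (1.0 - d) * E * strain, r

# Monotonic tension: stress rises, peaks near r0, then softens.
r = r0
for eps in [0.5e-4, 1.0e-4, 2.0e-4, 4.0e-4]:
    s, r = stress(eps, r)
    print(f"eps={eps:.1e}  sigma={s/1e6:.2f} MPa")
```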
Abstract:
Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. This paper presents a novel approach to solve the robust parameter estimation problem for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the network's convergence to the equilibrium points. A solution to the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach. Copyright (C) 2000 IFAC.
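A heavily schematic sketch of the constraint-satisfaction idea behind a valid-subspace Hopfield iteration: the state is alternately projected onto an affine subspace and passed through the network's saturating activation (a box clip), so the iteration settles on an equilibrium compatible with both sets. The subspace, offset, and bounds below are arbitrary assumptions; the paper's actual energy term and parameter computation are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, 2))
P = B @ np.linalg.pinv(B)                    # projector onto a 2-D subspace
s = (np.eye(n) - P) @ rng.standard_normal(n) * 0.1  # affine offset (assumed)
lo, hi = -1.0, 1.0                           # bounds enforced by activation

v = rng.standard_normal(n)                   # random initial network state
for _ in range(10_000):
    v_new = np.clip(P @ v + s, lo, hi)       # subspace step, then activation
    if np.linalg.norm(v_new - v) < 1e-12:
        break                                # reached an equilibrium point
    v = v_new
print(v)
```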
Abstract:
A neural approach to solve the economic load dispatch problem in power systems is presented in this paper. Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. The ability of neural networks to realize complex nonlinear functions makes them attractive for system optimization. The neural networks applied to economic load dispatch reported in the literature sometimes fail to converge towards feasible equilibrium points. The internal parameters of the modified Hopfield network developed here are computed using the valid-subspace technique; these parameters guarantee the network's convergence to feasible equilibrium points. A solution for the economic load dispatch problem corresponds to an equilibrium point of the network. Simulation results and a comparative analysis in relation to other neural approaches are presented to illustrate the efficiency of the proposed approach.
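For context, a minimal classical solution of economic load dispatch, the equal incremental cost condition found by bisection on lambda; this is plainly not the paper's neural approach, just the baseline optimization problem it solves. Quadratic generator costs and all numbers are illustrative.

```python
def dispatch(units, demand, tol=1e-6):
    """units: list of (b, c, pmin, pmax) for cost C(P) = a + b*P + c*P^2.
    Returns per-unit outputs meeting the demand at equal incremental cost."""
    def output(lam):
        # At the optimum, dC_i/dP_i = b_i + 2*c_i*P_i = lambda for all
        # unconstrained units, clipped to each unit's limits.
        return [min(max((lam - b) / (2 * c), pmin), pmax)
                for b, c, pmin, pmax in units]
    lo, hi = 0.0, 1000.0                 # bracket for lambda ($/MWh, assumed)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(output(mid)) < demand:
            lo = mid                     # total output increases with lambda
        else:
            hi = mid
    return output((lo + hi) / 2)

# Three units, 500 MW demand (illustrative numbers).
units = [(7.0, 0.008, 100, 400), (6.3, 0.009, 50, 300), (6.8, 0.007, 50, 250)]
print(dispatch(units, 500.0))
```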
Abstract:
This paper describes a branch-and-price algorithm for the p-median location problem. The objective is to locate p facilities (medians) such that the sum of the distances from each demand point to its nearest facility is minimized. The traditional column generation process is compared with a stabilized approach that combines column generation and Lagrangean/surrogate relaxation. The Lagrangean/surrogate multiplier modifies the reduced cost criterion, guiding the selection of new productive columns in the search tree. Computational experiments are conducted considering instances that are especially difficult for traditional column generation, as well as some large-scale instances. (C) 2004 Elsevier Ltd. All rights reserved.
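A minimal sketch of the pricing subproblem in p-median column generation: for each candidate median, the most attractive cluster contains exactly the demand points whose assignment cost is below their dual price. The notation (distances d, duals u and mu) is standard for this decomposition but assumed here, and the stabilization by the Lagrangean/surrogate multiplier is omitted.

```python
def price_columns(d, u, mu):
    """d[j][i]: distance from median j to point i; u[i]: duals of the
    assignment constraints; mu: dual of the p-median cardinality constraint.
    Returns the (reduced cost, median, cluster) of the best new column."""
    columns = []
    for j, row in enumerate(d):
        # Include point i iff it improves the column's reduced cost.
        cluster = [i for i, dij in enumerate(row) if dij - u[i] < 0]
        rcost = sum(row[i] - u[i] for i in cluster) - mu
        columns.append((rcost, j, cluster))
    return min(columns)   # most negative reduced cost enters the master

# Tiny example: 3 candidate medians, 4 demand points.
d = [[0, 2, 4, 6], [2, 0, 2, 4], [6, 4, 2, 0]]
u = [3.0, 3.0, 3.0, 3.0]
print(price_columns(d, u, mu=1.0))   # -> (-6.0, 1, [0, 1, 2])
```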
Abstract:
This work studies a lot-sizing and distribution problem that involves, besides inventory, production, and setup costs, the cost of transportation to the company's warehouse. The logistics costs are associated with the containers needed to pack the manufactured products. The company negotiates a long-term contract in which a fixed cost per period is charged for transporting the items; in return, a limited number of containers is made available at a cost lower than the standard one. In the event of an occasional increase in demand, additional containers may be used, but at a higher cost. A mathematical model was proposed in the literature and solved with a Lagrangian heuristic. In the present work, solving the problem with a Lagrangian/surrogate heuristic is evaluated. In addition, an extension of the literature model is considered, adding capacity constraints and allowing backlogging of demand. Computational tests showed that the Lagrangian/surrogate heuristic is competitive, especially under tight capacity constraints.
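The core of a Lagrangian (or Lagrangian/surrogate) heuristic is the subgradient update of the multipliers of the relaxed constraints. Below is a generic sketch with the relaxed lot-sizing subproblem left as an assumed placeholder; the actual formulation, step rule, and surrogate multiplier search are not reproduced from the paper.

```python
def subgradient(solve_relaxed, violation, n_mult, iters=100, step0=2.0):
    """solve_relaxed(u) -> (lower bound, relaxed solution);
    violation(sol)     -> one violation value per relaxed constraint."""
    u = [0.0] * n_mult                    # multipliers (>= 0 assumed)
    best_lb = float("-inf")
    step = step0
    for _ in range(iters):
        lb, sol = solve_relaxed(u)
        best_lb = max(best_lb, lb)        # best Lagrangian lower bound so far
        g = violation(sol)
        norm2 = sum(gi * gi for gi in g)
        if norm2 == 0:
            break                         # relaxed solution already feasible
        u = [max(0.0, ui + step * gi / norm2) for ui, gi in zip(u, g)]
        step *= 0.98                      # simple geometric step reduction
    return best_lb, u
```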
Abstract:
This work presents an analysis of the wavelet-Galerkin method for one-dimensional elastoplastic-damage problems. A time-stepping algorithm for non-linear dynamics is presented. The numerical treatment of the constitutive models is developed through the use of a return-mapping algorithm. For spatial discretization, the wavelet-Galerkin method can be used instead of the standard finite element method; this approach makes it possible to locate singularities. The discrete formulation developed can be applied to the simulation of one-dimensional problems for elastic-plastic-damage models. (C) 2007 Elsevier B.V. All rights reserved.
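A minimal sketch of the return-mapping algorithm in 1D plasticity with linear isotropic hardening: elastic trial step, yield check, plastic correction. Material parameters are illustrative assumptions; the paper's coupled elastoplastic-damage model is richer than this.

```python
E = 200e9        # Young's modulus [Pa] (assumed)
H = 10e9         # hardening modulus [Pa] (assumed)
sigma_y = 250e6  # initial yield stress [Pa] (assumed)

def return_map(eps, eps_p, alpha):
    """Strain-driven update: returns (stress, plastic strain, hardening var)."""
    sigma_trial = E * (eps - eps_p)               # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)  # yield function
    if f <= 0:
        return sigma_trial, eps_p, alpha          # elastic step
    dgamma = f / (E + H)                          # plastic corrector
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - E * dgamma * sign       # return to yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma

# Load past yield: stress is capped and plastic strain accumulates.
eps_p, alpha = 0.0, 0.0
for eps in [0.0005, 0.0010, 0.0015, 0.0020]:
    s, eps_p, alpha = return_map(eps, eps_p, alpha)
    print(f"eps={eps:.4f}  sigma={s/1e6:.1f} MPa  eps_p={eps_p:.6f}")
```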
Abstract:
In conformational analysis, the systematic search method completely maps the space but suffers from the combinatorial explosion problem, because the number of conformations increases exponentially with the number of free rotation angles. This study introduces a new methodology of conformational analysis that controls the combinatorial explosion. It is based on a dimensional reduction of the system through the use of principal component analysis. The results are exactly the same as those obtained with the complete search but, in this case, the number of conformations increases only quadratically with the number of free rotation angles. The method is applied to a series of three drugs: omeprazole, pantoprazole, and lansoprazole, benzimidazoles that suppress gastric-acid secretion by means of H(+), K(+)-ATPase enzyme inhibition. (C) 2002 John Wiley & Sons, Inc.
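A minimal sketch of the dimensional-reduction step: PCA over a matrix of sampled conformations (rows) by free rotation angles (columns), keeping the leading components. The data here are synthetic; a real study would feed in geometries from the systematic search, and angular variables are often sin/cos-encoded before PCA.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 360, size=(200, 6))    # 200 conformations, 6 torsions

Xc = X - X.mean(axis=0)                   # center each torsion angle
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)           # variance explained per component
scores = Xc @ Vt[:2].T                    # project onto the top 2 components

print("explained variance:", np.round(explained[:2], 3))
print("reduced search space shape:", scores.shape)
```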