86 results for "Optimal network configuration"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
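The eigenvalue conditions mentioned above are stated on the graph Laplacian. A minimal sketch of how one might build that matrix and inspect its spectrum with NumPy; the 4-node ring topology and unit coupling strengths are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative 4-node ring network; adjacency entries are coupling strengths.
A = np.array([
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
])

# Laplacian L = D - A, where D is the diagonal matrix of node degrees.
L = np.diag(A.sum(axis=1)) - A

# L is symmetric, so its eigenvalues are real; eigvalsh returns them ascending.
eigvals = np.linalg.eigvalsh(L)
print(eigvals)  # smallest eigenvalue is (numerically) 0 for a connected network
```

For the 4-node ring the spectrum is {0, 2, 2, 4}; topology searches of the kind described would vary A until the spectrum meets the desired conditions.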
Abstract:
There is an increasing need to treat phenol-contaminated effluents with advanced oxidation processes (AOPs) to minimize their impact on the environment as well as on bacteriological populations of other wastewater treatment systems. One of the most promising AOPs is the Fenton process, which relies on the Fenton reaction. Nevertheless, there are no systematic studies on Fenton reactor networks. The objective of this paper is to develop a strategy for the optimal synthesis of Fenton reactor networks. The strategy is based on a superstructure optimization approach that is represented as a mixed integer non-linear programming (MINLP) model. Network superstructures with multiple Fenton reactors are optimized with the objective of minimizing the sum of capital, operation and depreciation costs of the effluent treatment system. The optimal solutions obtained provide the reactor volumes and network configuration, as well as the quantities of the reactants used in the Fenton process. Examples based on a case study show that multi-reactor networks yield a decrease of up to 45% in overall costs for the treatment plant. (C) 2010 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (approximately 2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
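The two figures of merit used above, completeness and contamination, can be read off a 2x2 confusion matrix. A small sketch with invented counts (these are not SDSS-DR7 numbers):

```python
# Toy confusion-matrix counts for a galaxy/star classifier.
# All four numbers are illustrative only, not SDSS-DR7 results.
true_gal_pred_gal = 900    # galaxies correctly classified as galaxies
true_gal_pred_star = 100   # galaxies misclassified as stars
true_star_pred_gal = 25    # stars misclassified as galaxies (contaminants)
true_star_pred_star = 475  # stars correctly classified as stars

# Completeness: fraction of true galaxies recovered by the classifier.
completeness = true_gal_pred_gal / (true_gal_pred_gal + true_gal_pred_star)

# Contamination: fraction of objects labelled "galaxy" that are really stars.
contamination = true_star_pred_gal / (true_gal_pred_gal + true_star_pred_gal)

print(f"completeness = {completeness:.3f}, contamination = {contamination:.3f}")
```

In a study like this one, both quantities would be computed per magnitude bin, giving the completeness function referred to in the abstract.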
Abstract:
This paper presents results of research into the use of the Bellman-Zadeh approach to decision making in a fuzzy environment for solving multicriteria power engineering problems. The application of the approach conforms to the principle of guaranteed result and provides constructive lines for obtaining harmonious solutions in a computationally effective way, on the basis of solving associated maxmin problems. The presented results are universally applicable and are already being used to solve diverse classes of power engineering problems. This is illustrated by considering problems of power and energy shortage allocation, power system operation, optimization of network configuration in distribution systems, and energetically effective voltage control in distribution systems. (c) 2011 Elsevier Ltd. All rights reserved.
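The maxmin structure behind the Bellman-Zadeh approach is simple to sketch: the fuzzy decision is the intersection (minimum) of the criteria membership functions, and the chosen alternative maximizes that minimum, which is exactly the principle of guaranteed result. The alternatives and membership values below are invented for illustration:

```python
# Degree (in [0, 1]) to which each alternative satisfies each of three
# criteria; values are illustrative, not from the paper.
memberships = {
    "plan_A": [0.7, 0.8, 0.5],
    "plan_B": [0.9, 0.5, 0.8],
    "plan_C": [0.6, 0.7, 0.7],
}

# Guaranteed-result principle: judge each alternative by its worst criterion,
# then pick the alternative whose worst criterion is best (maxmin).
scores = {alt: min(mu) for alt, mu in memberships.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # plan_C 0.6
```

plan_B has the single highest membership (0.9) but loses under maxmin because its worst criterion (0.5) is lower than plan_C's (0.6).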
Abstract:
This paper shows a new hybrid method for risk assessment regarding interruptions in sensitive processes due to faults in electric power distribution systems. This method determines indices related to long duration interruptions and short duration voltage variations (SDVV), such as voltage sags and swells, for each customer supplied by the distribution network. The frequency of such occurrences and their impact on customer processes are determined for each bus and classified according to their corresponding magnitude and duration. The method is based on information regarding network configuration, system parameters and protective devices. It randomly generates a number of fault scenarios in order to assess risk areas regarding long duration interruptions and voltage sags and swells in an especially inventive way, including the frequency of events according to their magnitude and duration. Based on sensitivity curves, the method determines frequency indices regarding disruption in customer processes that represent equipment malfunction and possible process interruptions due to voltage sags and swells. Such an approach allows for the assessment of the annual costs associated with each one of the evaluated power quality indices.
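The core Monte Carlo idea, random fault scenarios checked against a sensitivity curve, can be sketched in a few lines. The threshold values (0.7 pu retained voltage, 33 ms) and the scenario distributions below are invented stand-ins, not the paper's data:

```python
import random

random.seed(42)  # reproducible scenario generation

# Illustrative sensitivity curve: the customer process trips if the retained
# voltage drops below 0.7 pu for longer than about 2 cycles at 60 Hz (~33 ms).
def process_trips(sag_magnitude_pu, duration_s):
    return sag_magnitude_pu < 0.7 and duration_s > 0.033

# Randomly generated fault scenarios (retained voltage in pu, duration in s);
# a real study would derive these from network configuration, system
# parameters and protective-device data rather than uniform distributions.
n_scenarios = 10_000
trips = 0
for _ in range(n_scenarios):
    magnitude = random.uniform(0.1, 1.0)
    duration = random.uniform(0.01, 0.5)
    if process_trips(magnitude, duration):
        trips += 1

trip_probability = trips / n_scenarios
print(f"estimated trip probability per fault: {trip_probability:.3f}")
```

Multiplying such a per-fault probability by an annual fault rate and a cost per trip would give the annual cost indices the method reports.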
Abstract:
Changes in species composition are an important process in many ecosystems but are rarely considered in systematic reserve site selection. To test the influence of temporal variability in species composition on the establishment of a reserve network, we compared network configurations based on species data of small mammals and frogs sampled during two consecutive years in a fragmented Atlantic Forest landscape (SE Brazil). Site selection with simulated annealing was carried out with the datasets of each single year and after merging the datasets of both years. Site selection resulted in remarkably divergent network configurations. Differences are reflected in both the identity of the selected fragments and in the amount of flexibility and irreplaceability in network configuration. Networks selected when data for both years were merged did not include all sites that were irreplaceable in one of the 2 years. Results of species number estimation revealed that significant changes in the composition of the species community occurred. Hence, temporal variability of community composition should be routinely tested and considered in systematic reserve site selection in dynamic systems.
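The underlying selection problem is a set cover: choose fragments so every species is represented. A greedy stand-in for the simulated-annealing selection (fragments and species lists are invented) shows how different survey data trivially yield different networks:

```python
# Which species were recorded in which forest fragment (invented data).
fragments = {
    "F1": {"sp1", "sp2", "sp3"},
    "F2": {"sp3", "sp4"},
    "F3": {"sp4", "sp5", "sp6"},
    "F4": {"sp1", "sp6"},
}

# Greedy set cover: repeatedly pick the fragment that adds the most
# not-yet-represented species until every species is covered.
target = set().union(*fragments.values())
selected, covered = [], set()
while covered != target:
    best = max(fragments, key=lambda f: len(fragments[f] - covered))
    selected.append(best)
    covered |= fragments[best]

print(selected)  # ['F1', 'F3'] covers all six species
```

Rerunning the same procedure on a second year's species lists would generally select a different set of fragments, which is the temporal-variability effect the study quantifies (with simulated annealing rather than this greedy shortcut).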
Abstract:
Confined water, such as the water molecules in nanolayers 2-3 nm thick, plays an important role in the adhesion of hydrophilic materials, mainly cementitious ones. In this study, the effects on adhesion of water containing kosmotropic substances, which are known for their ability to enhance the hydrogen bond (H-bond) network of confined water, were evaluated using mechanical strength tests. Indeed, linking the adhesion provided by water confined in nanolayers to a macro-response of the cementitious samples, such as the bending strength, requires the evaluation of the local water H-bond network configuration in the presence of kosmotropes, considering their influence on the extent and the strength of H-bonds. Among the kosmotropes, trimethylamine and sucrose provided a 50% increase in bending strength compared to the reference samples, the latter using just water as an adhesive, whereas trehalose was responsible for reducing the bending strength to a value close to that of the samples without any adhesive. The results attained open up perspectives for exploring the confined water behavior which naturally occurs throughout the hydration process in cement-based materials.
Abstract:
We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N proportional to F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
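The reported scaling is easy to turn into a back-of-the-envelope calculation: if success probability depends only on x = F/N^(1/4), then holding x fixed requires N = (F/x)^4 agents. A sketch with arbitrary example values of F and x:

```python
# Success probability is a function of the reduced variable x = F / N^(1/4),
# so keeping x fixed as the problem grows requires N = (F / x)^4 agents.
def agents_needed(F, x_fixed):
    """Lattice size N that keeps the reduced variable F/N^(1/4) at x_fixed."""
    return (F / x_fixed) ** 4

# Doubling the problem size F multiplies the required N by 2^4 = 16.
n_small = agents_needed(F=32, x_fixed=2.0)
n_large = agents_needed(F=64, x_fixed=2.0)
print(n_small, n_large, n_large / n_small)  # 65536.0 1048576.0 16.0
```

Combined with the F^6 relaxation time, this is what makes the ACH expensive for large perceptron instances at fixed success probability.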
Abstract:
Base-level maps (or "isobase maps", as originally defined by Filosofov, 1960) express a relationship between valley order and topography. The base-level map can be seen as a "simplified" version of the original topographic surface, from which the "noise" of the low-order stream erosion was removed. This method is able to identify areas with possible tectonic influence even within lithologically uniform domains. Base-level maps have been recently applied in semi-detail scale (e.g., 1:50 000 or larger) morphotectonic analysis. In this paper, we present an evaluation of the method's applicability in regional-scale analysis (e.g., 1:250 000 or smaller). A test area was selected in northern Brazil, at the lower course of the Araguaia and Tocantins rivers. The drainage network extracted from SRTM30_PLUS DEMs with spatial resolution of approximately 900 m was visually compared with available topographic maps and considered to be compatible with a 1:1 000 000 scale. Regarding the interpretation of regional-scale morphostructures, the map constructed with 2nd- and 3rd-order valleys was considered to present the best results. Some of the interpreted base-level anomalies correspond to important shear zones and geological contacts present in the 1:5 000 000 Geological Map of South America. Others have no correspondence with mapped Precambrian structures and are considered to represent younger, probably neotectonic, features. A strong E-W orientation of the base-level lines over the inflexion of the Araguaia and Tocantins rivers suggests a major drainage capture. A N-S topographic swath profile over the Tocantins and Araguaia rivers reveals a topographic pattern which, allied with seismic data showing a roughly N-S direction of extension in the area, leads us to interpret this lineament as an E-W, southward-dipping normal fault. There is also a good visual correspondence between the base-level lineaments and geophysical anomalies.
A NW-SE lineament in the southeast of the study area partially corresponds to the northern border of the Mosquito lava field, of Jurassic age, and a NW-SE lineament traced in the northeastern sector of the study area can be interpreted as the Picos-Santa Ines lineament, identifiable in geophysical maps but with little expression in hypsometric or topographic maps.
Abstract:
This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the necessity of an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. This approach belongs to the set of incremental methods. It treats all the constraints of the network, i.e. control, state and functional constraints. The approach is based on the perturbation of optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the allocation coefficients of the losses for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses, and it is compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
The thermal performance of a cooling tower and its cooling water system is critical for industrial plants, and small deviations from the design conditions may cause severe instability in the operation and economics of the process. External disturbances such as variation in the thermal demand of the process or oscillations in atmospheric conditions may be suppressed in multiple ways. Nevertheless, such alternatives are hardly ever implemented in the industrial operation due to the poor coordination between the utility and process sectors. The complexity of the operation increases because of the strong interaction among the process variables. In the present work, an integrated model for the minimization of the operating costs of a cooling water system is developed. The system is composed of a cooling tower as well as a network of heat exchangers. After the model is verified, several cases are studied with the objective of determining the optimal operation. It is observed that the most important operational resources to mitigate disturbances in the thermal demand of the process are, in this order: the increase in recycle water flow rate, the increase in air flow rate and finally the forced removal of a portion of the water flow rate that enters the cooling tower with the corresponding make-up flow rate. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The objective of this paper is to develop a mathematical model for the synthesis of anaerobic digester networks based on the optimization of a superstructure that relies on a non-linear programming formulation. The proposed model contains the kinetic and hydraulic equations developed by Pontes and Pinto [Chemical Engineering Journal 122 (2006) 65-80] for two types of digesters, namely UASB (Upflow Anaerobic Sludge Blanket) and EGSB (Expanded Granular Sludge Bed) reactors. The objective function minimizes the overall sum of the reactor volumes. The optimization results show that a recycle stream is only effective in the case of a reactor with short-circuit, such as the UASB reactor. Sensitivity analysis was performed on the one- and two-digester network superstructures for the following parameters: the UASB reactor short-circuit fraction and the EGSB reactor maximum organic load; the corresponding results vary considerably in terms of digester volumes. Scenarios for three- and four-digester network superstructures were optimized and compared with the results from fewer digesters. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A network of Kuramoto oscillators with different natural frequencies is optimized for enhanced synchronizability. All node inputs are normalized by the node connectivity, and some important properties of the network structure are determined in this case: (i) optimized networks present a strong anti-correlation between natural frequencies of adjacent nodes; (ii) this anti-correlation should be as high as possible, since the average path length between nodes is maintained as small as in random networks; and (iii) high anti-correlation is obtained without any relation between the nodes' natural frequencies and the degree of connectivity. We also propose a network construction model with which it is shown that high anti-correlation and small average paths may be achieved by randomly rewiring a fraction of the links of a totally anti-correlated network, and that these networks present optimal synchronization properties. (C) 2008 Elsevier B.V. All rights reserved.
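The anti-correlation in (i) can be quantified as the Pearson correlation of natural frequencies taken across the network's edges; a value near -1 means adjacent oscillators have opposite frequencies. A minimal sketch on an invented 4-node ring with alternating frequencies:

```python
import numpy as np

# Illustrative 4-node ring; each edge (i, j) couples oscillators i and j.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Alternating natural frequencies: a totally anti-correlated assignment.
omega = np.array([1.0, -1.0, 1.0, -1.0])

# Pearson correlation of the frequency pairs (omega_i, omega_j) over edges.
wi = np.array([omega[i] for i, j in edges])
wj = np.array([omega[j] for i, j in edges])
edge_correlation = np.corrcoef(wi, wj)[0, 1]
print(edge_correlation)  # -1.0 for this totally anti-correlated ring
```

The construction model described above would start from such a fully anti-correlated network and randomly rewire a fraction of the links to shorten average path lengths while keeping edge_correlation strongly negative.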
Abstract:
Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, since security services have to cooperate and their configurations have to be consistent with each other, so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a comfortable definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units which more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions and exemplify their applications and results obtained. Copyright (C) 2010 John Wiley & Sons, Ltd.
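The general shape of policy refinement, expanding an abstract rule into device-level configuration lines via a model of the managed system, can be sketched as follows. The policy format, device names and rule syntax are invented for illustration and are not MoBaSeC's actual notation:

```python
# Abstract high-level policy: which communications are allowed
# (hypothetical format, not MoBaSeC syntax).
policy = {"allow": [("webserver", "database", "tcp/5432")]}

# Simplified system model: which firewall mediates each service pair.
firewalls = {("webserver", "database"): "fw-internal"}

def refine(policy, firewalls):
    """Derive low-level firewall rules from the abstract policy."""
    configs = {}
    for src, dst, service in policy["allow"]:
        fw = firewalls[(src, dst)]            # locate the enforcing device
        proto, port = service.split("/")      # split "tcp/5432"
        rule = f"permit {proto} from {src} to {dst} port {port}"
        configs.setdefault(fw, []).append(rule)
    return configs

configs = refine(policy, firewalls)
print(configs["fw-internal"][0])
# permit tcp from webserver to database port 5432
```

The value of the model-based approach is that the same abstract policy can be refined for many heterogeneous devices at once, keeping their configurations mutually consistent.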