918 results for Feeder reconfiguration of distribution systems
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
Abstract:
Soil tillage promotes changes in soil structure. The magnitude of the changes varies with the nature of the soil, the tillage system and the soil water content, and decreases over time after tillage. The objective of this study was to evaluate short-term (one-year period) and long-term (nine-year period) effects of soil tillage and nutrient sources on some physical properties of a very clayey Hapludox. Five tillage systems were evaluated: no-till (NT), chisel plow + one secondary disking (CP), primary + two secondary diskings (CT), CT with burning of crop residues (CTb), and CT with removal of crop residues from the field (CTr), in combination with five nutrient sources: control without nutrient application (C); mineral fertilizers, according to technical recommendations for each crop (MF); 5 Mg ha-1 yr-1 of poultry litter (wet matter) (PL); 60 m³ ha-1 yr-1 of cattle slurry (CS); and 40 m³ ha-1 yr-1 of swine slurry (SS). Bulk density (BD), total porosity (TP), and parameters related to the water retention curve (macroporosity, mesoporosity and microporosity) were determined after nine years and at five sampling dates during the tenth year of the experiment. Soil physical properties were tillage- and time-dependent. Tilled treatments increased total porosity and macroporosity and reduced bulk density in the surface layer (0.00-0.05 m), but this effect decreased over time after the tillage operations due to natural soil reconsolidation, since no external stress was applied in this period. Changes in pore size distribution were more pronounced in the larger and medium pore diameter classes. Bulk density was greatest in the intermediate layers (0.05-0.10 and 0.12-0.17 m) in all tillage treatments and decreased down to the deepest layer (0.27-0.32 m), indicating a more compacted layer at around 0.05-0.20 m. Nutrient sources did not significantly affect the soil physical and hydraulic properties studied.
Abstract:
For years, specifications have focused on the water-to-cementitious materials ratio (w/cm) and strength of concrete, despite the majority of the volume of a concrete mixture consisting of aggregate. An aggregate distribution of roughly 60% coarse aggregate and 40% fine aggregate, regardless of gradation and availability of aggregates, has been used as the norm for a concrete pavement mixture. Efforts to reduce costs and improve the sustainability of concrete mixtures have pushed owners to pay closer attention to mixtures with a well-graded aggregate particle distribution. In general, workability depends on many variables that are independent of gradation, such as paste volume and viscosity, and aggregate shape and texture. A better understanding of how the properties of aggregates affect the workability of concrete is needed. The effects of aggregate characteristics on concrete properties, such as the ability to be vibrated, strength, and resistivity, were investigated using mixtures in which the paste content and the w/cm were held constant. The results showed that the different aggregate proportions, the nominal maximum aggregate sizes, and the combinations of different aggregates all had an impact on performance in the strength, slump, and box tests.
Abstract:
The overall focus of the thesis involves the systematics, germplasm evaluation and pattern of distribution and abundance of the freshwater fishes of Kerala (India). Biodiversity is the measure of the variety of life. With the signing of the Convention on Biological Diversity, countries became privileged with absolute rights and the responsibility to conserve and utilize their diverse resources for the betterment of mankind in a sustainable way. South-east Asia, along with Africa and South America, is considered to be among the most biodiversity-rich areas in the world. The tremendous potential associated with the sustainable utilization of the fish germplasm resources of the various river systems of Kerala for food, aquaculture and ornamental purposes has to be fully tapped for the economic upliftment of the fishing community and for the equitable sharing of benefits among mankind, without compromising the conservation of rare and unique fish germplasm resources for future generations. The study was carried out from April 2000 to December 2004. Twenty-five major river systems of Kerala were surveyed for fish fauna to delineate the pattern of distribution and abundance of fishes both seasonally and geographically. The results of the germplasm inventory and evaluation of fish species are presented both for the state as a whole and river-wise. The evaluation of fish species for commercial utilization revealed that, of the 145 species, 76 are ornamental, 47 are food fishes and 22 are cultivable. Twenty-one species are strictly endemic to Kerala rivers. The revalidation of the biodiversity status of the fishes, assessed against IUCN criteria, is alarming: a high percentage of fishes (59 spp.) belong to the threatened category, comprising 8 critically endangered (CR), 36 endangered (EN) and 15 vulnerable (VU) species. The river-wise fish germplasm inventory surveys were conducted in 25 major river systems of Kerala. The results of the present study indicate the existence of several new fish species in streams and rivulets located in remote forest areas; therefore, new dedicated surveys are required to uncover fish species new to science, new distributional records, etc., for these river systems. The fish germplasm evaluation revealed that many potentially valuable endemic ornamental and cultivable fishes exist in Kerala. It is imperative to utilize these species sustainably to improve the aquaculture production and aquarium trade of the country, which would fetch more income and generate employment.
Abstract:
Fault location has been studied in depth for transmission lines due to its importance in power systems. Nowadays the problem of fault location on distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application developed in MATLAB that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which is well suited to this problem due to the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
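The abstract does not include the authors' implementation; as a rough illustration of the underlying idea only, a branched feeder can be modelled as an N-ary tree and searched for line sections whose cumulative impedance matches the value estimated from terminal voltages and currents. The node names, impedance values and tolerance below are hypothetical:

```python
# Minimal sketch: a radial distribution feeder as an N-ary tree.
# Section names and impedance values are hypothetical, for illustration only.

class Section:
    """One line section of the feeder; children model branching points."""
    def __init__(self, name, impedance):
        self.name = name
        self.impedance = impedance  # section impedance magnitude, ohms (sketch)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def candidate_sections(root, measured_impedance, tol=0.05):
    """Depth-first search for sections whose cumulative impedance from the
    substation matches the impedance estimated from terminal measurements."""
    matches, stack = [], [(root, root.impedance)]
    while stack:
        node, z = stack.pop()
        if abs(z - measured_impedance) <= tol:
            matches.append(node.name)
        for c in node.children:
            stack.append((c, z + c.impedance))
    return matches

# Build a small branched feeder: substation section plus two laterals.
root = Section("S0", 0.10)
lateral = root.add(Section("S1", 0.20))
lateral.add(Section("S2", 0.15))
root.add(Section("S3", 0.25))

print(sorted(candidate_sections(root, 0.30)))
```

Note that on a highly branched feeder the search can return several candidate sections for the same impedance estimate, which is one of the known difficulties of fault location in distribution systems.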
Abstract:
We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and shows no evidence of storms growing in size. The 4 km resolution model shows realistic timing and growth evolution, although the dissipation mechanism still differs from the observed data.
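The standard score (z-score) normalisation underlying the lengthscale-time array can be sketched as follows; the storm counts here are synthetic and purely illustrative, not data from the paper:

```python
# Sketch: build a (lengthscale x time) array of standard scores from
# storm-size counts. The counts below are synthetic, for illustration only.

def standard_scores(counts):
    """For each lengthscale bin (row), z-score the diurnal series so the
    timing of peak activity is comparable across lengthscales."""
    out = []
    for row in counts:
        n = len(row)
        mean = sum(row) / n
        var = sum((x - mean) ** 2 for x in row) / n
        std = var ** 0.5 or 1.0  # guard against flat rows
        out.append([(x - mean) / std for x in row])
    return out

# Rows: storm lengthscale bins; columns: successive hourly image times.
counts = [
    [2, 5, 9, 5, 2],   # small storms peak mid-series
    [0, 1, 4, 8, 6],   # larger storms peak later (growth in size)
]
z = standard_scores(counts)
peak_times = [row.index(max(row)) for row in z]
print(peak_times)  # -> [2, 3]: peak activity shifts later with lengthscale
```

Comparing such peak positions between model output and observations is what reveals the timing offset described in the abstract.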
Abstract:
In this work, a fault-tolerant control scheme is applied to an air-handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach, it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Using well-known techniques to model and control the process, this work focuses on the importance of the cost function in fault detection and its influence on the reconfigurable controller. Experimental results show how the control of the terminal unit is affected in the presence of a fault, and how the recovery and reconfiguration of the control action are able to deal with the effects of faults.
Abstract:
Air distribution systems are among the major electrical energy consumers in air-conditioned commercial buildings, which maintain a comfortable indoor thermal environment and air quality by supplying specified amounts of treated air to different zones. The sizes of the air distribution lines affect the energy efficiency of the distribution systems. Equal friction and static regain are two well-known approaches for sizing the air distribution lines. To address the life-cycle cost of air distribution systems, the T and IPS methods have been developed. Hitherto, all these methods have been based on static design conditions; the dynamic performance of the system has not yet been addressed, whereas air distribution systems mostly operate under dynamic rather than static conditions. Besides, none of the existing methods considers any aspect of thermal comfort or environmental impact. This study investigates the existing methods for sizing air distribution systems and proposes a dynamic approach for size optimisation of the air distribution lines, taking into account optimisation criteria such as economic aspects, environmental impacts and technical performance. These criteria are respectively addressed through whole-life costing analysis, life cycle assessment and deviation from the set-point temperature of different zones. Integration of these criteria into the TRNSYS software produces a novel dynamic optimisation approach for duct sizing. Because the different criteria are integrated into a well-known performance evaluation package, this approach could easily be adopted by designers within the busy nature of design practice. Comparison of this integrated approach with the existing methods reveals that, under the defined criteria, system performance is improved by up to 15% compared to the existing methods. This approach is a significant step toward reaching net zero emission buildings in the future.
Abstract:
The deterpenation of bergamot essential oil can be performed by liquid-liquid extraction using hydrous ethanol as the solvent. A ternary mixture composed of 1-methyl-4-prop-1-en-2-yl-cyclohexene (limonene), 3,7-dimethylocta-1,6-dien-3-yl acetate (linalyl acetate), and 3,7-dimethylocta-1,6-dien-3-ol (linalool), three major compounds commonly found in bergamot oil, was used to simulate this essential oil. Liquid-liquid equilibrium data were experimentally determined for systems containing essential oil compounds, ethanol, and water at 298.2 K and are reported in this paper. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were lower than 0.0062 in all systems, indicating the good descriptive quality of the molecular models. To verify the effect of the water mass fraction in the solvent and the linalool mass fraction in the terpene phase on the distribution coefficients of the essential oil compounds, nonlinear regression analyses were performed, yielding mathematical models with correlation coefficient values higher than 0.99. The results show that as the water content in the solvent phase increased, the distribution coefficient decreased, regardless of the type of compound studied. Conversely, as the linalool content increased, the distribution coefficients of the hydrocarbon terpene and the ester also increased. However, the linalool distribution coefficient was negatively affected when the terpene alcohol content increased in the terpene phase.
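For reference, the distribution coefficient discussed here is the ratio of a compound's mass fraction in the solvent (extract) phase to its mass fraction in the terpene (raffinate) phase. A tiny sketch with illustrative numbers, not the paper's measured data:

```python
# Sketch: distribution coefficient k_i = w_i(solvent phase) / w_i(terpene phase).
# Mass fractions below are illustrative placeholders, not experimental values.

def distribution_coefficient(w_solvent_phase, w_terpene_phase):
    return w_solvent_phase / w_terpene_phase

# In deterpenation, the oxygenated compound (linalool) should partition
# toward hydrous ethanol more strongly than the hydrocarbon terpene does.
k_linalool = distribution_coefficient(0.060, 0.040)   # -> 1.5
k_limonene = distribution_coefficient(0.008, 0.200)   # -> 0.04
selectivity = k_linalool / k_limonene                  # -> 37.5
print(k_linalool, k_limonene, selectivity)
```

A selectivity well above 1 is what makes the solvent useful for separating linalool from limonene.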
Abstract:
We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum cost network respecting some requirements, usually described in terms of the network topology or in terms of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and the connection of different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach which heuristically obtains tentative solutions for the number and location of the vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient both in terms of solution quality and computational times. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Irradiation distribution functions based on the yearly collectible energy have been derived for two locations: Sydney, Australia, which represents a mid-latitude site, and Stockholm, Sweden, which represents a high-latitude site. The strong skewing of collectible energy toward the summer solstice at high latitudes dictates optimal collector tilt angles considerably below the polar mount. The lack of winter radiation at high latitudes indicates that the optimal acceptance angle for a stationary EW-aligned concentrator decreases as latitude increases. Furthermore, concentrator design should be highly asymmetric at high latitudes.
Abstract:
Most water distribution systems (WDS) need rehabilitation due to aging infrastructure, leading to decreasing capacity, increasing leakage and consequently low performance of the WDS. An appropriate strategy specifying the location and timing of pipeline rehabilitation in a WDS under a limited budget is the main challenge, and it has been addressed frequently by researchers and practitioners. On the other hand, the selection of the appropriate rehabilitation technique and material types is another main issue which has yet to be addressed properly. The latter can affect the environmental impacts of a rehabilitation strategy meeting the challenges of global warming mitigation and consequent climate change. This paper presents a multi-objective optimisation model for a rehabilitation strategy in WDS addressing the abovementioned criteria, mainly focused on greenhouse gas (GHG) emissions, either directly from fossil fuel and electricity or indirectly from the embodied energy of materials. Thus, the objective functions are to minimise: (1) the total cost of rehabilitation, including capital and operational costs; (2) the leakage amount; and (3) GHG emissions. The Pareto optimal front containing the optimal solutions is determined using the Non-dominated Sorting Genetic Algorithm (NSGA-II). Decision variables in this optimisation problem are classified into two groups: (1) the percentage proportion of each rehabilitation technique each year; and (2) the material types of new pipeline for rehabilitation each year. The rehabilitation techniques used here include replacement, rehabilitation and lining, cleaning, and pipe duplication. The developed model is demonstrated through its application to the Mahalat WDS, located in the central part of Iran. The rehabilitation strategy is analysed for a 40-year planning horizon. A number of conventional techniques for selecting pipes for rehabilitation are also analysed in this study. The results show that the optimal rehabilitation strategy considering GHG emissions is able to save total expenses and efficiently decrease the leakage from the WDS whilst meeting the environmental criteria.
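The non-dominated (Pareto) filtering step at the heart of NSGA-II can be sketched for the three objectives above; the cost/leakage/GHG triples below are synthetic, not results from the paper:

```python
# Sketch of Pareto (non-dominated) filtering as used inside NSGA-II,
# applied to (cost, leakage, GHG) triples. Values are synthetic examples.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives are minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Candidate rehabilitation strategies: (total cost, leakage, GHG emissions).
strategies = [
    (100, 5.0, 40),  # cheap but leaky
    (140, 3.0, 30),  # dearer, tighter, cleaner: a different trade-off
    (150, 5.5, 45),  # worse than the first in every objective (dominated)
]
print(pareto_front(strategies))  # -> [(100, 5.0, 40), (140, 3.0, 30)]
```

NSGA-II repeats this sorting over successive populations, so the surviving front is the set of trade-off strategies presented to the decision maker.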
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration.
The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- Object-oriented frameworks are extensible by design, and this also holds for the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings.
- The control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on an inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible; if so, it triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows a late synchronization between view and semantics when a network connection between them is unavailable.
- The use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime.
- The implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants the independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment.
This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design. Such extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the possibility of integrating multimedia metadata into the design data model. This possibility is explored in the frame of an online educational and training platform.
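The inversion of control between view and semantics described in this abstract can be sketched as follows; the class and method names are illustrative and not taken from Cave2:

```python
# Rough sketch of the described inversion of control: views never mutate
# state directly; they forward user events to the semantic model, which
# validates them and, if accepted, updates itself and notifies every
# registered view. All names here are illustrative.

class SemanticModel:
    def __init__(self):
        self.state = {}
        self.views = []

    def register(self, view):
        self.views.append(view)

    def handle_event(self, key, value):
        """Validate the proposed change; commit and propagate if legal."""
        if value is None:            # stand-in for a real validation rule
            return False
        self.state[key] = value
        for v in self.views:         # multi-view consistency
            v.refresh(key, value)
        return True

class View:
    def __init__(self, model):
        self.shown = {}
        self.model = model
        model.register(self)

    def user_input(self, key, value):
        # the view raises an event instead of changing state itself
        return self.model.handle_event(key, value)

    def refresh(self, key, value):
        self.shown[key] = value

model = SemanticModel()
v1, v2 = View(model), View(model)
v1.user_input("width", 42)
print(v2.shown)  # -> {'width': 42}: the second view updated via the model
```

The event pools mentioned in the abstract would sit between `user_input` and `handle_event`, buffering events while a network connection is unavailable.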
Abstract:
The reconfiguration of a distribution network is a change in its topology, performed by changing the status of its switches and aiming to provide specific operating conditions for the network. It can be performed regardless of any system anomaly. Service restoration is a particular case of reconfiguration and should be performed whenever there is a network failure or whenever one or more sections of a feeder have been taken out of service for maintenance. In such cases, loads that are supplied through line sections downstream of the portions removed for maintenance may be supplied by closing switches to other feeders. With classical reconfiguration methods, several switching operations may be required beyond those used to perform the service restoration, including switching on feeders in the same substation or on substations that have no direct connection to the faulted feeder. These operations can cause discomfort, losses and dissatisfaction among consumers, as well as a negative reputation for the energy company. The purpose of this thesis is to develop a heuristic for the reconfiguration of a distribution network upon the occurrence of a failure, performing switching only on the feeders directly involved with the failed segment. The switching applied relates exclusively to isolating the failed sections and buses and to supplying electricity to the islands generated by the fault condition. The method significantly reduces the number of load flow evaluations by using sensitivity parameters to estimate the voltages and currents on the buses and lines of the feeders directly involved with the failed segment. A comparison between this process and classical methods is performed on different test networks from the literature on network reconfiguration.
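The thesis' heuristic itself is not reproduced here; as a toy illustration of the basic restoration step (isolating a faulted section, then closing a normally-open tie switch so the resulting island is fed from an adjacent feeder), with an entirely hypothetical topology:

```python
# Toy sketch of service restoration: open the switches around a faulted
# section, then close a normally-open tie switch so the de-energised
# island is re-supplied from an adjacent feeder. Topology is hypothetical.

def energised(sections, switches, sources):
    """Flood-fill from the sources through closed switches and return
    which sections are energised."""
    live, frontier = set(), list(sources)
    while frontier:
        s = frontier.pop()
        if s in live:
            continue
        live.add(s)
        for (a, b), closed in switches.items():
            if closed and s in (a, b):
                frontier.append(b if s == a else a)
    return live & set(sections)

sections = ["F1a", "F1b", "F1c", "F2a"]
switches = {("SRC1", "F1a"): True, ("F1a", "F1b"): True,
            ("F1b", "F1c"): True,
            ("F1c", "F2a"): False,   # tie switch, normally open
            ("SRC2", "F2a"): True}

# Fault on F1b: open its adjacent switches, then close the tie so the
# islanded section F1c is restored from feeder 2.
switches[("F1a", "F1b")] = False
switches[("F1b", "F1c")] = False
switches[("F1c", "F2a")] = True

live = energised(sections, switches, ["SRC1", "SRC2"])
print(sorted(live))  # -> ['F1a', 'F1c', 'F2a']: F1b isolated, F1c restored
```

A real restoration heuristic must additionally check that the closed-in path respects voltage and current limits, which is where the sensitivity-based estimates mentioned above replace repeated full load flow runs.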
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)