887 results for Distribution network reconfiguration problem


Relevance:

30.00%

Publisher:

Abstract:

Electric power quality comprises both the quality of supply and the quality of customer service. The quality of supply is in turn considered to have two components: the waveform and the continuity. This thesis addresses continuity of supply through fault location. The problem is relatively well solved in transmission systems, where the homogeneous characteristics of the line, measurements at both terminals, and the availability of diverse equipment make it possible to locate the fault site with relatively high precision. In distribution systems, however, fault location is a complex and still unsolved problem. The complexity is mainly due to the presence of non-homogeneous conductors, intermediate loads, lateral taps, and imbalances in the system and the load. Moreover, these systems normally have measurements only at the substation, together with a simplified model of the circuit. The main fault-location efforts have been directed at methods that use the fundamental components of the voltage and the current at the substation to estimate the reactance up to the fault. Since the reactance quantifies the distance to the fault site through the circuit model, such a method is considered Model-Based (MBM). Some of its drawbacks, however, are the need for a good model of the system and the possibility of identifying several sites where the fault may have occurred, that is, multiple estimation of the fault site. As a contribution, this thesis presents an analysis and comparative test of several frequently cited MBMs. The solution is further complemented with methods that use other kinds of information, such as that obtained from historical fault databases containing voltage and current records measured at the substation (not limited to the fundamental component). As tools for extracting information from these records, two classification techniques (LAMDA and SVM) are used and tested. They relate features obtained from the signal to the zone under fault, and are referred to in this document as Knowledge-Based Classification Methods (MCBC, after the Spanish). The information used by the MCBC is obtained from the voltage and current records measured at the distribution substation before, during, and after the fault. The records are processed to obtain the following descriptors: a) the magnitude of the voltage variation (dV), b) the variation of the current magnitude (dI), c) the variation of the power (dS), d) the fault reactance (Xf), e) the frequency of the transient (f), and f) the maximum eigenvalue of the current correlation matrix (Sv), each selected because it facilitates fault location. From these descriptors, different training and validation sets for the MCBC are proposed, and, through a methodology that shows how relationships can be found between these sets and the zones in which faults occur, those with the best behaviour are selected. The application results show that combining the MCBC with the MBM can reduce the problem of multiple estimation of the fault site.
The MCBC determines the fault zone, while the MBM finds the distance from the measurement point to the fault; integrating them in a hybrid scheme takes the best characteristics of each method. In this document, 'hybrid' denotes the complementary combination of the MBM and the MCBC. Finally, to verify the contributions of this thesis, a hybrid integration scheme for fault location is proposed and tested on two different distribution systems. Both the methods that use the system parameters and are based on impedance estimation (MBM), and those that use the descriptors as information and are based on classification techniques (MCBC), prove valid for solving the fault location problem. Both proposed methodologies have advantages and disadvantages, but according to the method-integration theory presented, a high degree of complementarity is achieved, allowing the formulation of hybrids that improve the results while reducing or avoiding the problem of multiple fault estimation.
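
To make the MBM principle concrete, the following is a minimal sketch of the reactance-based distance estimate such methods build on: the apparent impedance seen from the substation during the fault is Z = V/I, and its imaginary part, divided by the per-unit-length line reactance, gives a distance estimate. The phasor values and line reactance below are hypothetical.

```python
import cmath

def fault_distance_km(v_phasor, i_phasor, x_per_km):
    """Distance to fault from the apparent reactance seen at the
    substation: d = Im(V / I) / x_per_km."""
    z_apparent = v_phasor / i_phasor           # apparent impedance during the fault
    return z_apparent.imag / x_per_km

# Hypothetical fundamental-frequency phasors recorded during a fault
v = cmath.rect(7200.0, 0.0)                    # 7.2 kV at 0 rad
i = cmath.rect(850.0, -1.2)                    # 850 A, lagging by 1.2 rad
print(fault_distance_km(v, i, x_per_km=0.35))  # 0.35 ohm/km line reactance (assumed)
```

On a branched feeder, several points can share the same apparent reactance, which is precisely the multiple-estimation problem the hybrid MBM/MCBC scheme is designed to reduce.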

Relevance:

30.00%

Publisher:

Abstract:

The continuous operation of insect-monitoring radars in the UK has permitted, for the first time, the characterization of various phenomena associated with high-altitude migration of large insects over this part of northern Europe. Previous studies have taken a case-study approach, concentrating on a small number of nights of particular interest. Here, combining data from two radars, and from an extensive suction- and light-trapping network, we have undertaken a more systematic, longer-term study of diel flight periodicity and the vertical distribution of macro-insects in the atmosphere. First, we identify general features of insect abundance and stratification occurring during the 24-hour cycle, which emerge from four years’ aggregated radar data for the summer months in southern Britain. These features include mass emigrations at dusk and, to a lesser extent, at dawn, and daytime concentrations associated with thermal convection. We then focus our attention on the well-defined layers of large nocturnal migrants that form in the early evening, usually at heights of 200–500 m above ground. We present evidence from both radar and trap data that these nocturnal layers are composed mainly of noctuid moths, with species such as Noctua pronuba, Autographa gamma, Agrotis exclamationis, A. segetum, Xestia c-nigrum and Phlogophora meticulosa predominating.

Relevance:

30.00%

Publisher:

Abstract:

The state of river water deterioration in the Agueda hydrographic basin, mostly in the western part, partly reflects the high rate of housing and industrial development in this area in recent years. The streams have acted as a sink for organic and inorganic loads from several origins: domestic and industrial sewage and agricultural waste. The contents of the heavy metals Cr, Cd, Ni, Cu, Pb, and Zn were studied by sequential chemical extraction of the principal geochemical phases of streambed sediments, in the <63 μm fraction, in order to assess their potential availability to the environment by investigating the metal concentrations, assemblages, and trends. The granulometric and mineralogical characteristics of this sediment fraction were also studied. The study revealed clear pollution by Cr, Cd, Ni, Cu, Zn, and Pb, resulting from both natural and anthropogenic sources. The chemical transport of metals appears to occur essentially through the following geochemical phases, in decreasing order of significance: (exchangeable + carbonates) ≫ (organics) ≫ (Mn and Fe oxides and hydroxides). The (exchangeable + carbonate) phase plays an important part in the fixation of Cu, Ni, Zn, and Cd. The organic phase is important in the fixation of Cr and Pb, and also of Cu and Ni. Analysing the metal contents in the residual fraction, we conclude that Zn and Cd are the most mobile, and that Cr and Pb are less mobile than Cu and Ni. The proximity of the pollutant sources and the timing of the influx of contaminated material control the distribution of the contaminant-related sediments both locally and at the network scale.
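
As a minimal illustration of the arithmetic behind sequential-extraction results such as these, the sketch below computes the percentage of a metal bound to each geochemical phase from hypothetical step concentrations; the phase names follow the abstract, the numbers are made up.

```python
# Hypothetical Cu concentrations (mg/kg) recovered in each extraction step
phases = {
    "exchangeable+carbonates": 18.0,
    "Mn/Fe oxides-hydroxides": 6.0,
    "organics": 11.0,
    "residual": 25.0,
}

total = sum(phases.values())
for name, conc in phases.items():
    print(f"{name}: {100 * conc / total:.1f}% of total Cu")

# The non-residual share is a common rough proxy for potential availability
mobile = 100 * (total - phases["residual"]) / total
print(f"potentially available fraction: {mobile:.1f}%")
```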

Relevance:

30.00%

Publisher:

Abstract:

In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms for finding frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments such as computational grids.
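
The paper's algorithm is not reproduced here, but the receiver-initiated idea can be sketched minimally: an idle worker polls a peer, which donates part of its unexplored search tree. Everything below (the Worker class, the task encoding) is a hypothetical simplification.

```python
import random
from collections import deque

class Worker:
    def __init__(self, wid, tasks=()):
        self.wid = wid
        self.local = deque(tasks)      # unexplored subtrees (hypothetical encoding)

    def idle(self):
        return not self.local

    def split(self):
        """Donate roughly half of the pending subtrees to a requester."""
        half = len(self.local) // 2
        return [self.local.pop() for _ in range(half)]

def receiver_initiated_step(workers):
    """Each idle worker polls a random peer for work (receiver-initiated)."""
    for w in workers:
        if w.idle():
            donor = random.choice([p for p in workers if p is not w])
            w.local.extend(donor.split())

workers = [Worker(0, range(8)), Worker(1), Worker(2)]
receiver_initiated_step(workers)
print([len(w.local) for w in workers])
```

Receiver-initiated balancing suits the irregular search tree described above: since no workload prediction is reliable, letting idle receivers pull work avoids the wasted transfers a sender-initiated scheme would make.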

Relevance:

30.00%

Publisher:

Abstract:

Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations.

Relevance:

30.00%

Publisher:

Abstract:

Although the construction pollution index has been put forward and shown to be an efficient approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degree of its potential adverse environmental impacts remains a research task. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into what is called the EnvironalPlanning system, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
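
For readers unfamiliar with ANP, the numerical core is the limit supermatrix: element priorities are obtained by raising a column-stochastic weighted supermatrix to successive powers until its columns stabilise. A minimal sketch with a made-up three-element network follows; it is not the EnvironalPlanning model itself.

```python
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise a column-stochastic supermatrix to successive powers
    until the entries stabilise (the ANP limit matrix)."""
    M = W.copy()
    for _ in range(max_iter):
        nxt = M @ W
        if np.max(np.abs(nxt - M)) < tol:
            return nxt
        M = nxt
    return M

# Hypothetical weighted supermatrix: each column sums to 1
W = np.array([[0.0, 0.5, 0.3],
              [0.6, 0.0, 0.7],
              [0.4, 0.5, 0.0]])
L = limit_supermatrix(W)
print(L[:, 0])   # limiting priorities of the three network elements
```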

Relevance:

30.00%

Publisher:

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm for observed finite data sets, based on a Takagi-Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships among these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
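
The extended algorithm itself is not reproduced here, but the conventional Gram-Schmidt step it builds on can be sketched: orthogonalise candidate regressors and score each by its error reduction ratio. In the sketch below, random data stand in for the fuzzy-rule regression matrix.

```python
import numpy as np

def classical_gram_schmidt_err(P, y):
    """Orthogonalise the columns of regression matrix P and return,
    for each orthogonal column w_k, its error reduction ratio
    err_k = (w_k . y)^2 / ((w_k . w_k) * (y . y))."""
    n, m = P.shape
    W = np.zeros_like(P, dtype=float)
    err = np.zeros(m)
    yty = y @ y
    for k in range(m):
        w = P[:, k].astype(float)
        for j in range(k):   # subtract projections onto earlier orthogonal columns
            w -= (W[:, j] @ P[:, k]) / (W[:, j] @ W[:, j]) * W[:, j]
        W[:, k] = w
        err[k] = (w @ y) ** 2 / ((w @ w) * yty)
    return W, err

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 4))                    # stand-in for rule regressors
y = 2.0 * P[:, 1] + 0.1 * rng.normal(size=50)   # output mostly explained by column 1
W, err = classical_gram_schmidt_err(P, y)
print(err.round(3))                             # column 1 should dominate
```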

Relevance:

30.00%

Publisher:

Abstract:

Problem 1 of the Genetic Analysis Workshop 15 (GAW15) contained baseline expression levels of 8793 genes in immortalised B cells from 194 individuals in 14 Centre d'Etude du Polymorphisme Humain (CEPH) Utah pedigrees. Previous analyses of the data showed linkage and association, and evidence of substantial individual variation. In particular, correlations were examined among the expression levels of 31 genes and 25 target genes corresponding to two master regulatory regions. In this analysis, we apply Bayesian network analysis to gain further insight into these findings. We identify strong dependences and thereby provide additional insight into the underlying relationships between the genes involved. More generally, the approach is expected to be applicable to the integrated analysis of genes on biological pathways.
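
As a hedged illustration of the kind of dependence scoring a Bayesian network search relies on, the sketch below compares the BIC of a linear-Gaussian model with and without a candidate edge between two expression profiles; the data are simulated, not the CEPH expression data.

```python
import numpy as np

def bic_edge_gap(x, y):
    """BIC gap between the linear-Gaussian model y = a*x + b + noise
    and the no-edge model y = b + noise; a large positive gap
    favours keeping the edge x -> y."""
    n = len(y)
    a, b = np.polyfit(x, y, 1)
    rss_edge = np.sum((y - (a * x + b)) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    bic_edge = n * np.log(rss_edge / n) + 3 * np.log(n)   # params: a, b, sigma
    bic_null = n * np.log(rss_null / n) + 2 * np.log(n)   # params: b, sigma
    return bic_null - bic_edge

rng = np.random.default_rng(1)
regulator = rng.normal(size=194)                 # 194 individuals, as in GAW15
target = 0.8 * regulator + rng.normal(scale=0.5, size=194)
print(bic_edge_gap(regulator, target))           # large positive => strong dependence
```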

Relevance:

30.00%

Publisher:

Abstract:

Overseas trained teachers (OTTs) have grown in number during the past decade, particularly in London and the South East of England. In this recruitment explosion many OTTs have experienced difficulties, and in the professional literature as well as press coverage OTTs often become part of a deficit discourse. A small-scale pilot investigation of OTT experience has begun to suggest why OTTs have been successful, as well as the principal challenges they have faced. An important factor in their success was felt to be the quality of support in school from other staff. Major challenges included the complexity of the primary curriculum. The argument that globalisation leads to a brain drain may be exaggerated. Suggestions for further research are made, which might indicate the positive benefits OTTs can bring to a school.

Relevance:

30.00%

Publisher:

Abstract:

The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts, hence the system is indeterminate and an optimal solution is required for controlling the forces acting on the object. A fast and efficient method for computing the grasping and manipulation forces is presented, in which the computation is based on the true model of the nonlinear friction cone of contact. Results are compared with previously employed methods that linearize the cone constraints and minimize the internal forces.
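
As a rough sketch of the underlying optimisation (not the paper's fast method), the code below minimises the total squared contact force subject to force equilibrium and the true nonlinear friction-cone constraint, using scipy's general-purpose SLSQP solver; the contact geometry, friction coefficient, and load are made up, and object torques are ignored for brevity.

```python
import numpy as np
from scipy.optimize import minimize

mu = 0.5                                       # friction coefficient (hypothetical)
# Contact normals of four fingertips on a box-like object (hypothetical)
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)
f_ext = np.array([0.0, 0.0, -9.8])             # external force to resist (weight)

def forces(x):
    return x.reshape(4, 3)                     # four stacked 3-D contact forces

def equilibrium(x):                            # contact forces balance the load
    return forces(x).sum(axis=0) + f_ext

def cone_margin(x):                            # true nonlinear cone: mu*f_n >= |f_t|
    f = forces(x)
    f_n = np.einsum('ij,ij->i', f, normals)    # normal components
    f_t = f - f_n[:, None] * normals           # tangential parts
    return mu * f_n - np.linalg.norm(f_t, axis=1)

res = minimize(lambda x: x @ x, x0=np.ones(12), method='SLSQP',
               constraints=[{'type': 'eq', 'fun': equilibrium},
                            {'type': 'ineq', 'fun': cone_margin}])
print(res.x.reshape(4, 3).round(3))            # optimal contact forces
```

Keeping the cone nonlinear, as the abstract advocates, avoids the conservatism introduced when the cone is approximated by a polyhedral set of linear constraints.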

Relevance:

30.00%

Publisher:

Abstract:

The foundation construction process is a key determinant of success in construction engineering. Among the many deep-excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the construction sequence of diaphragm wall units is established phase by phase using heuristics. However, this creates conflicts between the final phase of the works and the unit construction, and affects the planned construction time. To avoid this situation, we apply management science to diaphragm wall unit construction and formulate the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model is multi-objective, combinatorially explosive, and NP-complete, a 2-type Self-Learning Neural Network (SLNN) is proposed to solve the sequencing problem for N = 12, 24, and 36 diaphragm wall units. To assess the reliability of the results, the SLNN is compared against a random search method. The tests show that the SLNN is superior to random search in both solution quality and solving efficiency.
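
The 2-type SLNN is not reproduced here; as a point of reference, the sketch below shows the kind of random-search baseline the study compares against, drawing random permutations of N = 12 units and keeping the best under a purely hypothetical stand-in cost function.

```python
import random

N = 12                                  # number of diaphragm wall units

def cost(seq):
    """Hypothetical stand-in objective: penalise building adjacent
    unit indices consecutively (a placeholder, not the study's model)."""
    return sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) == 1)

def random_search(trials=10_000, seed=42):
    rng = random.Random(seed)
    best_seq, best_cost = None, float('inf')
    for _ in range(trials):
        seq = rng.sample(range(N), N)   # a random construction sequence
        c = cost(seq)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq, best_cost

print(random_search())
```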

Relevance:

30.00%

Publisher:

Abstract:

The problem of complexity is particularly relevant to the field of control engineering, since many engineering problems are inherently complex. The inherent complexity is such that straightforward computational problem solutions often produce very poor results. Although parallel processing can alleviate the problem to some extent, it is artificial neural networks (in various forms) which have recently proved particularly effective, even in dealing with the causes of the problem itself. This paper presents an overview of the current neural network research being undertaken. Such research aims to solve the complex problems found in many areas of science and engineering today.

Relevance:

30.00%

Publisher:

Abstract:

Ensemble forecasting of nonlinear systems involves the use of a model to run forward a discrete ensemble (or set) of initial states. Data assimilation techniques tend to focus on estimating the true state of the system, even though model error limits the value of such efforts. This paper argues for choosing the initial ensemble in order to optimise forecasting performance rather than estimate the true state of the system. Density forecasting and choosing the initial ensemble are treated as one problem. Forecasting performance can be quantified by some scoring rule. In the case of the logarithmic scoring rule, theoretical arguments and empirical results are presented. It turns out that, if the underlying noise dominates model error, we can diagnose the noise spread.
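
As a minimal illustration, the sketch below evaluates an ensemble forecast under the logarithmic (ignorance) score after dressing the ensemble members with Gaussian kernels; the ensemble values, outcomes, and kernel width are made up, and kernel dressing is one common way to turn an ensemble into a density rather than the paper's specific construction.

```python
import numpy as np

def log_score(ensemble, outcome, sigma):
    """Ignorance score -log p(outcome), where p is the density obtained
    by dressing each ensemble member with a Gaussian kernel of width sigma."""
    kernels = np.exp(-0.5 * ((outcome - ensemble) / sigma) ** 2)
    p = kernels.mean() / (sigma * np.sqrt(2 * np.pi))
    return -np.log(p)

ensemble = np.array([0.8, 1.1, 0.9, 1.3, 1.0])       # hypothetical forecast states
print(log_score(ensemble, outcome=1.05, sigma=0.2))  # outcome near the ensemble
print(log_score(ensemble, outcome=2.50, sigma=0.2))  # far outcome scores much worse
```

Lower is better under this score, which is why an initial ensemble chosen to optimise it can differ from one chosen to estimate the true state.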

Relevance:

30.00%

Publisher:

Abstract:

The general stability theory of nonlinear receding horizon controllers has attracted much attention over the last fifteen years, and many algorithms have been proposed to ensure closed-loop stability. On the other hand, many reports exist regarding the use of artificial neural network models in nonlinear receding horizon control. However, little attention has been given to the stability of these specific controllers. This paper addresses this problem and proposes to cast nonlinear receding horizon control based on neural network models within the framework of an existing stabilising algorithm.
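
For context, the sketch below shows the bare receding-horizon loop that such controllers share: at each step a control sequence is optimised over a finite horizon against a model (a trained neural network in the paper's setting; a trivial analytic stand-in here), only the first move is applied, and the horizon recedes. None of the paper's stability machinery is included.

```python
import numpy as np
from scipy.optimize import minimize

def model(x, u):
    """One-step predictive model; a trained neural network would go here."""
    return 0.9 * x + 0.5 * np.tanh(u)

def horizon_cost(u_seq, x_init, x_ref):
    """Tracking cost accumulated over the prediction horizon."""
    x, cost = x_init, 0.0
    for u in u_seq:
        x = model(x, u)
        cost += (x - x_ref) ** 2 + 0.01 * u ** 2
    return cost

x, x_ref, H = 0.0, 1.0, 5                   # state, setpoint, horizon length
for _ in range(20):                         # receding-horizon loop
    res = minimize(horizon_cost, np.zeros(H), args=(x, x_ref))
    x = model(x, res.x[0])                  # apply only the first control move
print(round(float(x), 3))                   # state should settle near the setpoint
```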