915 results for Distribution system reliability
Abstract:
This work begins with an overview of the electric power transmission and distribution system, together with a survey of the current condition of selected substations, in order to establish the importance of a design that meets the safety requirements for substation operating approval under the standards in force in the country. It then presents guidelines and current needs for designing improvements, thereby providing the company with greater safety and reliability. From the company's point of view, this is an opportunity to reduce maintenance costs; from the professional's point of view, it is an opportunity for a comprehensive study of the legal requirements for operating substations.
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) a hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges are now faced: designing SR plans for larger systems that are as good as those for relatively smaller ones, and designing SR plans for multiple faults that are as good as those for a single fault. To tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN, and a new heuristic. This heuristic focuses the application of NDE operators on alarming network zones, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases moderately with the number of faults.
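The multi-objective comparison at the heart of algorithms such as NSGA-II rests on Pareto dominance between candidate plans. A minimal sketch of that test, with illustrative objective names (switching operations, out-of-service load) that are assumptions rather than the paper's exact formulation:

```python
# Pareto-dominance test as used by multi-objective evolutionary algorithms
# (e.g. NSGA-II). Objectives here are illustrative: both are minimized.

def dominates(plan_a, plan_b):
    """plan_a dominates plan_b if it is no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    no_worse = all(a <= b for a, b in zip(plan_a, plan_b))
    strictly_better = any(a < b for a, b in zip(plan_a, plan_b))
    return no_worse and strictly_better

# Each SR plan scored as (switching operations, out-of-service load in kW)
plans = {"A": (4, 120.0), "B": (6, 120.0), "C": (3, 250.0)}

# Keep only non-dominated plans (the Pareto front)
front = [n for n, s in plans.items()
         if not any(dominates(o, s) for m, o in plans.items() if m != n)]
print(sorted(front))  # → ['A', 'C']: A dominates B; A and C are incomparable
```

NSGA-II repeatedly applies this test to sort a population into non-dominated fronts before selection.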
Abstract:
The Environmental Health (EH) program of Peace Corps (PC) Panama and the non-governmental organization (NGO) Waterlines have been assisting rural communities in Panama in gaining access to improved water sources through the community management (CM) model and participatory development. Unfortunately, there is little information available on how a water system functions once construction is complete and the volunteer leaves the community. This is a concern, as the recent literature suggests that most communities are not able to indefinitely maintain a rural water system (RWS) without some form of external assistance (Sara and Katz, 1997; Newman et al., 2002; Lockwood, 2002, 2003, 2004; IRC, 2003; Schweitzer, 2009). Recognizing this concern, the EH program director encouraged the author to complete a post-project assessment of past EH water projects. To carry out the investigation, an easy-to-use monitoring and evaluation tool was developed based on a literature review and the author's three years of field experience in rural Panama. The study methodology consists of benchmark scoring systems that rate the following ten indicators: watershed, source capture, transmission line, storage tank, distribution system, system reliability, willingness to pay, accounting/transparency, maintenance, and active water committee members. The assessment of 28 communities across the country revealed that the current state of the physical infrastructure, as well as the financial, managerial, and technical capabilities of water committees, varied significantly from community to community. While some communities enjoy continued service, with their water committees completing all of their responsibilities, others have seen their water systems fall apart and be abandoned. Overall, higher scores were more prevalent across all ten indicators. However, even the communities with the highest scores requested some form of additional assistance.
The conclusion from the assessment suggests that the EH program should incorporate an institutional support mechanism (ISM) into its sector policy in order to systematically provide follow-up support to rural communities in Panama. A full-time circuit rider with flexible funding would be able to provide additional technical support, training, and encouragement to those communities in need.
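The benchmark-scoring idea behind such a tool can be sketched as follows; the 1-10 scale, example scores, and the summary fields are illustrative assumptions, not the author's actual rubric:

```python
# Sketch of a benchmark-scoring summary over the ten indicators listed above.
# Scale (1-10) and scores are illustrative, not the actual rubric.

INDICATORS = ["watershed", "source capture", "transmission line", "storage tank",
              "distribution system", "system reliability", "willingness to pay",
              "accounting/transparency", "maintenance", "active water committee"]

def summarize(scores):
    """scores: indicator -> benchmark score (1-10) for one community."""
    assert set(scores) == set(INDICATORS), "score every indicator"
    weakest = min(scores, key=scores.get)  # flags where follow-up is needed
    return {"average": sum(scores.values()) / len(scores), "weakest": weakest}

community = dict.fromkeys(INDICATORS, 8)
community["maintenance"] = 3
print(summarize(community))  # → {'average': 7.5, 'weakest': 'maintenance'}
```

A per-indicator summary like this makes it easy to target circuit-rider support at a community's weakest area rather than only at its overall score.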
Abstract:
Water distribution systems are life-saving facilities, especially during recovery after earthquakes. This paper discusses a framework for the seismic serviceability of water systems that includes fragility evaluation of the water sources of water distribution networks, together with a case study on the performance of a water system under different levels of seismic hazard. The seismic serviceability of a water supply system modeled in EPANET is evaluated under various levels of seismic hazard. The assessment process is based on hydraulic analysis and Monte Carlo simulations, implemented with the empirical fragility data provided by the American Lifelines Alliance (ALA, 2001) for both pipelines and water facilities. Represented by the Seismic Serviceability Index (Cornell University, 2008), the serviceability of the water distribution system is evaluated under earthquakes with return periods of 72, 475, and 2,475 years. The system serviceability at each hazard level is compared with and without considering the seismic fragility of the water source. The results show that the seismic serviceability of the water system decreases as the return period of the seismic hazard grows, and decreases further once the seismic fragility of the water source is considered. The results reveal the importance of considering the seismic fragility of water sources, and the growing dependence of system performance on the seismic resilience of the water source under severe earthquakes.
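The Monte Carlo estimate behind a serviceability index of this kind can be sketched in a few lines. The failure probabilities and the served-demand fractions below are made-up illustrative numbers, not the ALA fragility data or the case-study network:

```python
import random

# Toy Monte Carlo serviceability estimate: sample component failures at a
# given hazard level, evaluate the fraction of demand still served, average.
random.seed(1)

PIPES = {"P1": 0.05, "P2": 0.20, "P3": 0.50}            # failure probabilities
SERVED_IF_FAILED = {"P1": 0.8, "P2": 0.6, "P3": 0.3}    # toy hydraulic result

def one_trial():
    served = 1.0
    for pipe, p_fail in PIPES.items():
        if random.random() < p_fail:
            # the worst surviving bottleneck limits the served demand
            served = min(served, SERVED_IF_FAILED[pipe])
    return served

def serviceability_index(n_trials=10_000):
    """Average ratio of post-earthquake satisfied demand to normal demand."""
    return sum(one_trial() for _ in range(n_trials)) / n_trials

print(round(serviceability_index(), 2))
```

In the actual framework, each trial would rerun a hydraulic analysis (e.g. with EPANET) for the sampled damage state rather than use a lookup table.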
Abstract:
Two of the indicators for the UN Millennium Development Goal of ensuring environmental sustainability are energy use and per capita carbon dioxide emissions. Increasing urbanization and a growing world population may require increased energy use in order to transport enough safe drinking water to communities. In addition, an increase in water use would result in increased energy consumption, and thereby in increased greenhouse gas emissions that promote global climate change. A study of multiple Municipal Drinking Water Distribution Systems (MDWDSs) that relates various MDWDS aspects (system components and properties) to energy use is therefore strongly desirable: understanding the relationship between system aspects and energy use aids energy-efficient design. In this study, the components of an MDWDS, and/or the characteristics associated with a component, are termed MDWDS aspects (hereafter, system aspects). Many aspects of MDWDSs affect energy usage; three of them were analyzed in this study: (1) system-wide water demand, (2) storage tank parameters, and (3) pumping stations. The study involved seven MDWDSs, analyzed with the network model EPANET 2.0 to understand the relationship between the above-mentioned system aspects and energy use. Six of the systems were real and one was hypothetical. The study presented here is unique in its statistical approach using seven municipal water distribution systems. The first system aspect studied was system-wide water demand. The seven systems were analyzed for the variation of water demand and its impact on energy use: to quantify the effects of water-use reduction on energy use in a municipal water distribution system, the systems were modeled and the energy usage quantified for various amounts of water conservation.
It was found that the effect of water conservation on energy use was linear for all seven systems and that the average values of the systems' energy use plotted on the same line with a high R² value. From this relationship, it can be ascertained that a 20% reduction in water demand results in approximately a 13% savings in energy use for all seven systems analyzed. This figure might hold true for many similar systems that are dominated by pumping rather than gravity driven. The second system aspect analyzed was storage tank parameters. Three tank parameters were considered in this part of the study: (1) tank maximum water level, (2) tank elevation, and (3) tank diameter. MDWDSs use a significant amount of electrical energy to pump water from low elevations (usually a source) to higher ones (usually storage tanks), and this use of electrical energy affects pollution emissions and, therefore, potential global climate change as well. Various values of these tank parameters were modeled on the seven MDWDSs of various sizes using a network solver, and the energy usage was recorded. It was found that, averaged over all seven analyzed systems, (1) reducing the maximum tank water level by 50% results in a 2% energy reduction, (2) the energy-use change for a change in tank elevation is system specific, and (3) reducing the tank diameter by 50% results in approximately a 7% energy savings. The third system aspect analyzed in this study was pumping station parameters. A pumping station consists of one or more pumps. The seven systems were analyzed to understand the effect of varying pump horsepower and the number of booster stations on energy use. It was found that adding booster stations could save energy, depending upon the system characteristics. For systems with flat topography, a single main pumping station was found to use less energy.
In systems with a higher-elevation neighborhood, however, one or more booster pumps with a reduced main pumping station capacity used less energy. The energy savings depended on the number of boosters and ranged from 5% to 66% for the five analyzed systems with higher-elevation neighborhoods (S3, S4, S5, S6, and S7). No energy savings were realized for the remaining two flat-topography systems, S1 and S2. The present study analyzed and established the relationship between various system aspects and energy use in seven MDWDSs, which aids in estimating the amount of energy savings available in MDWDSs. These energy savings would ultimately help reduce greenhouse gas (GHG) emissions, including per capita CO₂ emissions, thereby potentially lowering the effects of global climate change and, in turn, contributing to meeting the MDG of ensuring environmental sustainability.
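The reported linear demand-energy relationship (a 20% demand reduction yielding roughly a 13% energy saving) implies a slope of about 0.13/0.20 = 0.65. A back-of-envelope helper, valid only as a rough estimate for pumping-dominated systems near the studied range:

```python
# Rough linear estimator implied by the reported demand-energy relationship.
# Valid only for pumping-dominated systems and near the studied range.

SLOPE = 0.13 / 0.20  # ≈0.65: fraction of energy saved per fraction of demand saved

def energy_savings(demand_reduction):
    """Estimated fractional energy savings for a fractional demand reduction."""
    return SLOPE * demand_reduction

print(f"{energy_savings(0.20):.0%}")  # → 13% (20% water conservation)
print(f"{energy_savings(0.10):.1%}")  # → 6.5%
```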
Abstract:
As continued global funding and coordination are allocated toward improving access to safe sources of drinking water, alternative solutions may be necessary to expand implementation to remote communities. This report evaluates two technologies used in a small water distribution system in a mountainous region of Panama: solar-powered pumping and flow-reducing discs. The two parts of the system function independently, but both were chosen for their ability to mitigate unique issues in the community. The design program NeatWork and flow-reducing discs were evaluated because they are tools taught to Peace Corps Volunteers in Panama. Even when ample water is available, mountainous terrain affects the pressure available throughout a water distribution system. Since the static head in the system varies only with the height of water in the tank, frictional losses from pipes and fittings must be exploited to balance out the inequalities caused by the uneven terrain. Reducing the maximum allowable flow to connections through the installation of flow-reducing discs can help retain enough residual pressure in the main distribution lines to provide reliable service to all connections. NeatWork was calibrated to measured flow rates by changing the orifice coefficient (θ), resulting in a value of 0.68, which is 10-15% higher than typical values for manufactured flow-reducing discs. NeatWork was used to model various system configurations to determine whether a single-sized flow-reducing disc could provide equitable flow rates throughout an entire system. There is a strong correlation between the optimum single-sized flow-reducing disc and the average elevation change throughout a water distribution system: the larger the elevation change across the system, the smaller the recommended uniform orifice size. Renewable energy can bridge the infrastructure gap and provide basic services at a fraction of the cost and time required to install transmission lines.
Methods for assessing solar-powered pumping systems as a means of rural water supply are presented and evaluated. It was determined that manufacturer-provided product specifications can be used to appropriately design a solar pumping system, but care must be taken to ensure that sufficient water can be provided to the system despite variations in solar intensity.
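The standard orifice equation behind a flow-reducing disc is Q = θ·A·√(2gH); with the calibrated coefficient θ = 0.68 reported above, the flow through a disc can be estimated directly. The disc diameter and head below are illustrative values, not from the studied system:

```python
import math

# Orifice-equation estimate for a flow-reducing disc: Q = θ·A·√(2gH),
# using the calibrated orifice coefficient θ = 0.68 reported above.

THETA = 0.68   # calibrated orifice coefficient (dimensionless)
G = 9.81       # gravitational acceleration, m/s²

def disc_flow(d_orifice_m, head_m):
    """Flow (m³/s) through a flow-reducing disc under a given pressure head."""
    area = math.pi * (d_orifice_m / 2) ** 2
    return THETA * area * math.sqrt(2 * G * head_m)

# Example: a 4 mm orifice under 30 m of head, converted to litres per minute
q_lpm = disc_flow(0.004, 30.0) * 1000 * 60
print(f"{q_lpm:.1f} L/min")  # → 12.4 L/min
```

Smaller orifices trade per-connection flow for residual pressure in the mains, which is exactly the equalizing effect exploited on uneven terrain.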
Abstract:
Intermodal rail/road freight transport constitutes an alternative to long-haul road transport for the distribution of large volumes of goods. The paper introduces the intermodal transportation problem for the tactical planning of mode and service selection. In rail mode, shippers either book train capacity on a per-unit basis or charter entire block trains. Road mode is used for short-distance haulage to intermodal terminals and for direct shipments to customers. We analyze, on a model basis, the competition between road and intermodal transportation with regard to freight consolidation and service cost. The approach is applied to the distribution system of an industrial company serving customers in eastern Europe. The case study investigates the impact of transport cost and consolidation on the optimal modal split.
Abstract:
Proton therapy is a high-precision technique in cancer radiation therapy that allows irradiating the tumor with minimal damage to the surrounding healthy tissues. Pencil beam scanning is the most advanced dose distribution technique; it is based on a variable-energy beam of a few millimeters FWHM that is moved to cover the target volume. Due to spurious effects of the accelerator and the dose distribution system, and to the unavoidable scattering inside the patient's body, the pencil beam is surrounded by a halo that produces a peripheral dose. To assess this issue, nuclear emulsion films interleaved with tissue-equivalent material were used for the first time to characterize the beam in the halo region and to experimentally evaluate the corresponding dose. The high-precision tracking performance of the emulsion films allowed studying the angular distribution of the protons in the halo. Measurements with this technique were performed on the clinical beam of Gantry 1 at the Paul Scherrer Institute. Proton tracks were identified in the emulsion films, and the track density was studied at several depths. The corresponding dose was assessed by Monte Carlo simulations, and the dose profile was obtained as a function of the distance from the center of the beam spot.
Abstract:
Communications-Based Train Control (CBTC) systems require high-quality radio data communications for train signaling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and a leaky feeder as the distribution system, and all of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrowband, broadband, and correlated data measurements taken in the Madrid underground with a transmission system at 2.4 GHz in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modeled with narrowband and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfill the QoS specifications of CBTC systems.
Abstract:
Multimedia distribution through wireless networks in the home environment presents a number of advantages that have fueled industry interest in recent years, such as simple connectivity and data delivery to a variety of devices. Together with High-Definition (HD) content, multimedia wireless networks have been proposed for several applications, such as IPTV and digital TV distribution to multiple devices in the home environment. For these scenarios, we propose a multicast distribution system for High-Definition video over 802.11 wireless networks based on rate-limited packet retransmission. We develop a limited-rate ARQ system that retransmits packets according to the importance of their content (prioritization scheme) and according to their delay limitations (delay control). The performance of the proposed ARQ system is evaluated and compared with a similarly rate-limited ARQ algorithm. The results show a higher packet recovery rate and improvements in video quality for our proposed system.
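A rate-limited, priority- and deadline-aware retransmission policy of the kind described can be sketched as follows; the packet fields, priority encoding, and retransmission budget are illustrative assumptions, not the paper's actual protocol:

```python
import heapq

# Sketch of rate-limited ARQ with content prioritization and delay control:
# retransmit at most `budget` lost packets, most important first, skipping
# packets whose playout deadline has already passed.

def select_retransmissions(lost_packets, now_ms, budget):
    """Lower `priority` value = more important video data (e.g. I-frames)."""
    alive = [p for p in lost_packets if p["deadline_ms"] > now_ms]  # delay control
    # Order by (priority, deadline): important and urgent packets first
    return heapq.nsmallest(budget, alive,
                           key=lambda p: (p["priority"], p["deadline_ms"]))

lost = [
    {"seq": 1, "priority": 0, "deadline_ms": 120},  # I-frame data, still useful
    {"seq": 2, "priority": 2, "deadline_ms": 300},  # B-frame data
    {"seq": 3, "priority": 0, "deadline_ms": 40},   # important but expired
]
chosen = select_retransmissions(lost, now_ms=50, budget=1)
print([p["seq"] for p in chosen])  # → [1]
```

Dropping expired packets before spending the retransmission budget is what keeps the scheme useful for live HD streams, where late data is worthless.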
Abstract:
One of the main obstacles to the widespread adoption of quantum cryptography has been the difficulty of integration into standard optical networks, largely due to the tremendous difference in power between classical signals and the single quanta used for quantum key distribution. This makes the technology expensive and hard to deploy. In this letter, we show an easy and straightforward method for integrating quantum cryptography into optical access networks. In particular, we analyze how a quantum key distribution system can be seamlessly integrated in a standard access network based on the passive optical and time division multiplexing paradigms. The novelty of this proposal lies in the selective post-processing that allows for the distillation of secret keys while avoiding the noise produced by other network users. Importantly, the proposal requires neither the modification of the quantum or classical hardware specifications nor the use of any synchronization mechanism between the network and the quantum cryptography devices.
Abstract:
γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in mere stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars, and supernovas, or as a result of dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physics of these objects, as well as to develop theoretical models able to describe them accurately. The problem with observing γ rays is that they are absorbed in the highest layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). Therefore, there are only two possible ways to observe γ rays: using detectors on board satellites, or observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, with less energy each time. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to determine the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although having more photons and lasting longer, become increasingly unlikely as their energy grows. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible of the few that these showers produce; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered with them to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV), and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 medium size telescopes (MSTs), and up to 70 small size telescopes (SSTs). With such an array, two goals would be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to detecting more γ rays in all the energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈1 GHz).
This makes it unfeasible to read the data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to hold up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at nearly the same time, in contrast to the random arrival of NSB (night sky background) photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons during a certain time window is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sum-trigger technique). The trigger system developed in this thesis intends to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received in each pixel of the trigger region and compares the sum with a threshold that can be directly expressed as a number of detected photons (photoelectrons). The system allows selecting trigger regions of 14, 21, or 28 pixels (2, 3, or 4 clusters of 7 pixels each), with extensive overlapping between them. In this way, any light excess inside a compact region of 14, 21, or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera, by means of a delicate distribution system, in such a way that all the clusters are read at the same time, independently of their position in the camera.
Thus, the readout saves a complete camera image whenever the photoelectron threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, with every trigger only a few nanoseconds of information around the trigger time are stored. In the case of large showers, the duration of the shower can be considerably longer, so information is lost due to the temporal cut. To overcome both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera; if so, only the trigger regions exceeding the low threshold are read, and during a longer time. In this way, the information from empty pixels is not stored, and the fixed images of the showers become small "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sum-trigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays which must be compensated for the additions to be performed properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. These functions, together with others related to interfacing different systems, have been implemented in a system named Trigger Interface Board (TIB).
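The two-threshold readout selection can be sketched as follows (a toy model of the decision logic, assuming one peak summed amplitude per overlapping trigger region; names and units are hypothetical):

```python
import numpy as np

def two_threshold_readout(region_sums, high_threshold, low_threshold):
    """COLIBRI-style selection sketch.

    region_sums: 1-D array with the peak summed signal (photoelectrons)
    of each overlapping trigger region.
    The camera triggers only if some region exceeds the high threshold;
    in that case only the regions above the low threshold are read out
    (for a longer time) instead of the full camera.
    Returns (camera_triggered, indices_of_regions_to_read).
    """
    camera_triggered = bool((region_sums > high_threshold).any())
    if not camera_triggered:
        return False, []
    regions_to_read = [i for i, s in enumerate(region_sums) if s > low_threshold]
    return True, regions_to_read
```

This captures why the scheme saves bandwidth: empty regions below the low threshold are never stored, yet the high threshold still gates the overall camera decision.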
This system consists of one module which will be placed inside the LST and MST cameras and connected to the neighbouring telescopes through optical fibers. When a telescope receives a local trigger, it is re-sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have been triggered. Once the delay differences due to propagation in the optical fibers and in the air (which depend on the pointing direction) have been compensated, the TIB looks for coincidences, and if the trigger condition is fulfilled, the camera is read a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, Ciemat, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by its author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements to the current designs during the following years, benefiting the whole scientific community participating in CTA.
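The coincidence search performed by the TIB can be sketched in a few lines (a simplified software model of the logic, not the board's firmware; timestamps, per-neighbour delays and the coincidence window are assumed quantities in nanoseconds):

```python
def stereo_coincidence(local_trigger_ns, neighbour_trigger_ns,
                       propagation_delay_ns, window_ns):
    """Return True if any neighbour trigger, after subtracting its
    propagation delay (fibre length plus the pointing-dependent path
    in air), falls within the coincidence window around the local trigger.
    """
    for t, delay in zip(neighbour_trigger_ns, propagation_delay_ns):
        corrected = t - delay
        if abs(corrected - local_trigger_ns) <= window_ns:
            return True
    return False
```

With the delays properly compensated, genuinely simultaneous showers line up inside the window, while accidental NSB-induced triggers rarely do.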
Resumo:
The Distribution System Expansion Planning (PESD) problem aims to determine guidelines for expanding the network in view of the growing demand of consumers. In this context, electric power distribution companies are responsible for proposing actions in the distribution system so that the energy supply meets the standards required by the regulatory agencies. Traditionally, only the minimization of the global investment cost of expansion plans is considered, neglecting reliability and robustness issues. As a consequence, the expansion plans obtained lead the distribution system to configurations that are vulnerable to heavy load shedding when contingencies occur in the network. This work develops a methodology to incorporate reliability and risk issues into the traditional PESD problem, in order to choose expansion plans that maximize the robustness of the network and, consequently, mitigate the damage caused by contingencies in the system. A multi-objective model of the PESD problem was formulated in which two objectives are minimized: the global cost (which incorporates investment, maintenance, operation and energy production costs) and the implementation risk of expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two Evolutionary Algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as verified through expansion planning simulations of two test systems adapted from the literature.
The set of solutions found in the simulations contains expansion plans with different levels of global cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to show their differences.
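The core idea both algorithms share is ranking candidate expansion plans by Pareto dominance on the two minimized objectives. A minimal sketch, using hypothetical (cost, risk) tuples rather than the paper's MILP-evaluated plans:

```python
def dominates(a, b):
    """True if plan a dominates plan b when minimizing (global cost, risk):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Return the non-dominated plans (the first front in NSGA-II terms)."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]
```

The front returned here is exactly the kind of trade-off set the abstract describes: cheap-but-risky plans coexist with costlier, more robust ones, and no member can be improved in one objective without worsening the other.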
Resumo:
Purpose – The purpose of this paper is to investigate the “last mile” delivery link between a hub and spoke distribution system and its customers. The proportion of retail, as opposed to non-retail (trade), customers using this type of distribution system has been growing in the UK. The paper shows the applicability of simulation to demonstrate changes in overall delivery policy to these customers. Design/methodology/approach – A case-based research method was chosen with the aim of providing an exemplar of practice and testing the proposition that simulation can be used as a tool to investigate changes in delivery policy. Findings – The results indicate the potential improvement in delivery performance, specifically in meeting timed delivery performance, that could be made by having separate retail and non-retail delivery runs from the spoke terminal to the customer. Research limitations/implications – The simulation study does not attempt to generate a vehicle routing schedule but demonstrates the effects of a change on delivery performance when comparing delivery policies. Practical implications – Scheduling and spreadsheet software are widely used and provide useful assistance in the design of delivery runs and the allocation of staff to those delivery runs. This paper demonstrates to managers the usefulness of investigating the efficacy of current design rules and presents simulation as a suitable tool for this analysis. Originality/value – A simulation model is used in a novel application to test a change in delivery policy in response to a changing delivery profile of increased retail deliveries.
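The policy comparison the paper performs can be mimicked with a toy Monte Carlo model (all service times, run sizes and the timed-delivery deadline below are illustrative assumptions, not figures from the case study):

```python
import random

def on_time_rate(policy, n_retail=20, n_trade=20, deadline=240.0, seed=1):
    """Fraction of retail drops meeting their timed window under a policy.

    policy: "mixed" spreads retail and trade drops across shared runs;
    "separate" gives retail its own dedicated run from the spoke terminal.
    Retail kerbside times are assumed shorter than trade unloading times.
    """
    rng = random.Random(seed)
    retail = [("retail", rng.uniform(8, 15)) for _ in range(n_retail)]
    trade = [("trade", rng.uniform(15, 30)) for _ in range(n_trade)]
    if policy == "separate":
        runs = [retail, trade]                # retail run not delayed by trade drops
    else:
        mixed = retail + trade
        rng.shuffle(mixed)
        half = len(mixed) // 2
        runs = [mixed[:half], mixed[half:]]   # two vehicles share the mixed work
    on_time = total = 0
    for run in runs:
        clock = 0.0
        for kind, service in run:
            clock += service                  # cumulative time along the run
            if kind == "retail":
                total += 1
                on_time += clock <= deadline
    return on_time / total
```

Comparing `on_time_rate("separate")` against `on_time_rate("mixed")` over many seeds reproduces the qualitative finding: removing long trade stops from retail runs improves timed-delivery performance.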
Resumo:
This paper presents an effective decision making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system is obtained by setting up an experimental pipeline representing a fully operational distribution system. The system is also equipped with data logging for three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed such that multi-operational conditions of the distribution system, including multiple pressures and multiple flows, can be obtained. We then statistically tested and showed that the pressure and flow variables can be used as signatures of a leak under the designed multi-operational conditions. It is then shown that detecting leakages by training and testing the proposed multi-model decision system with prior clustering of the data, under multi-operational conditions, produces better recognition rates than training based on the single-model approach. This decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
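The multi-model idea can be sketched numerically: fit one linear model per operating-condition cluster and flag a leak when the flow residual leaves that cluster's confidence band. This is a simplified stand-in for the paper's generalized linear models and confidence-limit scheme; the ordinary least-squares fit, the 3-sigma band and all variable names are assumptions:

```python
import numpy as np

def fit_cluster_models(X, y, labels, n_clusters):
    """Fit one linear model per operating-condition cluster.

    X: (n, p) pressure features; y: outlet flow; labels: cluster
    assignment of each training sample (e.g. from a prior k-means step).
    For each cluster, store the coefficients plus a residual confidence
    band (mean and 3 x std of training residuals).
    """
    models = {}
    for k in range(n_clusters):
        m = labels == k
        Xk = np.column_stack([np.ones(m.sum()), X[m]])   # add intercept
        coef, *_ = np.linalg.lstsq(Xk, y[m], rcond=None)
        resid = y[m] - Xk @ coef
        models[k] = (coef, resid.mean(), 3.0 * resid.std())
    return models

def is_leak(x, flow, label, models):
    """Flag a leak when the observed flow deviates from the prediction of
    the sample's cluster model by more than the confidence band."""
    coef, mu, band = models[label]
    pred = np.concatenate([[1.0], x]) @ coef
    return abs(flow - pred - mu) > band
```

Clustering first means each model only has to explain one operating regime, which is why the multi-model system outperforms a single global model.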