828 results for Distribution system - Power quality


Abstract:

Water distribution systems are life-saving facilities, especially during recovery after earthquakes. This paper discusses a framework for the seismic serviceability of water systems that includes fragility evaluation of the water sources of water distribution networks, and presents a case study on the performance of a water system under different levels of seismic hazard. The seismic serviceability of a water supply system modeled in EPANET is evaluated under various levels of seismic hazard. The assessment is based on hydraulic analysis and Monte Carlo simulations, implemented with the empirical fragility data provided by the American Lifeline Alliance (ALA, 2001) for both pipelines and water facilities. Represented by the Seismic Serviceability Index (Cornell University, 2008), the serviceability of the water distribution system is evaluated for earthquakes with return periods of 72, 475, and 2475 years. The system serviceability at each hazard level is compared with and without considering the seismic fragility of the water source. The results show that the seismic serviceability of the water system decreases as the return period of the seismic hazard grows, and decreases further once the seismic fragility of the water source is considered. These results reveal the importance of accounting for the seismic fragility of water sources, and the growing dependence of system performance on the seismic resilience of the water source under severe earthquakes.
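The assessment couples hydraulic analysis with Monte Carlo sampling of component failures from fragility curves. A minimal Python sketch of that idea, with a lognormal fragility form in the style of the ALA curves and a placeholder in place of the EPANET hydraulic step (all parameters are illustrative, not the paper's):

import random
import math

# Illustrative fragility: failure probability grows with peak ground
# velocity (PGV); the lognormal CDF mirrors ALA-style fragility curves.
def failure_probability(pgv_cm_s, median=60.0, beta=0.7):
    if pgv_cm_s <= 0:
        return 0.0
    z = (math.log(pgv_cm_s) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_damage_state(components, pgv):
    # Each component (pipe, facility, source) fails independently.
    return {c: random.random() < failure_probability(pgv) for c in components}

def serviceability_index(components, pgv, n_trials=10000):
    # Seismic Serviceability Index: mean ratio of satisfied demand after
    # damage to demand in the intact system, over Monte Carlo trials.
    total = 0.0
    for _ in range(n_trials):
        damage = sample_damage_state(components, pgv)
        # Placeholder hydraulic step: in the real study this would be an
        # EPANET pressure-driven analysis of the damaged network.
        surviving = sum(1 for failed in damage.values() if not failed)
        total += surviving / len(components)
    return total / n_trials

# Compare hazard levels by return period (illustrative PGV values).
for period, pgv in [(72, 20.0), (475, 45.0), (2475, 90.0)]:
    print(period, "yr:", round(serviceability_index(list(range(50)), pgv), 3))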

Abstract:

Two of the indicators of the UN Millennium Development Goal of ensuring environmental sustainability are energy use and per capita carbon dioxide emissions. Increasing urbanization and a growing world population may require more energy to transport safe drinking water to communities, and increased water use raises energy consumption, thereby increasing the greenhouse gas emissions that promote global climate change. The study of multiple Municipal Drinking Water Distribution Systems (MDWDSs) that relates various MDWDS aspects (system components and properties) to energy use is therefore strongly desirable, since understanding the relationship between system aspects and energy use aids energy-efficient design. In this study, the components of a MDWDS, and/or the characteristics associated with a component, are termed MDWDS aspects (hereafter, system aspects). Of the many aspects of MDWDSs that affect energy usage, three were analyzed here: (1) system-wide water demand, (2) storage tank parameters, and (3) pumping stations. Seven MDWDSs, six real and one hypothetical, were modeled with EPANET 2.0 to understand the relationship between these system aspects and energy use; the statistical approach across seven municipal systems makes the study unique.

The first system aspect studied was system-wide water demand. The seven systems were analyzed for variation in water demand and its impact on energy use: each system was modeled and its energy usage quantified for various amounts of water conservation. The effect of water conservation on energy use was linear for all seven systems, and the average energy-use values of all systems plotted on the same line with a high R² value. From this relationship, a 20% reduction in water demand results in approximately a 13% saving in energy use for all seven systems analyzed. This figure may hold for many similar systems that are dominated by pumping rather than gravity.

The second system aspect analyzed was storage tank parameters: (1) maximum tank water level, (2) tank elevation, and (3) tank diameter. MDWDSs use a significant amount of electrical energy to pump water from low elevations (usually a source) to higher ones (usually storage tanks), and this electricity use contributes to pollution emissions and hence potential global climate change. Various values of these tank parameters were modeled on the seven MDWDSs using a network solver and the energy usage recorded. Averaged over all seven systems, (1) reducing the maximum tank water level by 50% yields a 2% energy reduction, (2) the energy effect of a change in tank elevation is system specific, and (3) reducing the tank diameter by 50% yields approximately a 7% energy saving.

The third system aspect analyzed was pumping station parameters; a pumping station consists of one or more pumps. The seven systems were analyzed to understand the effect of varying pump horsepower and the number of booster stations on energy use. Adding booster stations can save energy depending on the system characteristics: for systems with flat topography, a single main pumping station used less energy, whereas in systems with a higher-elevation neighborhood, one or more booster pumps with a reduced main pumping station capacity used less energy. The energy savings depended on the number of boosters and ranged from 5% to 66% for the five systems with higher-elevation neighborhoods (S3, S4, S5, S6, and S7); no energy savings were realized for the two flat-topography systems, S1 and S2. The study thus establishes the relationship between various system aspects and energy use in seven MDWDSs, which aids in estimating potential energy savings. Such savings would ultimately help reduce greenhouse gas (GHG) emissions, including per capita CO2 emissions, thereby potentially lessening global climate change and contributing to the MDG of ensuring environmental sustainability.
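Since the reported demand-energy relationship is linear (a 20% demand reduction giving roughly a 13% energy saving), it can be summarized by a one-variable fit. A sketch with invented, normalized numbers chosen only to illustrate the fitting step, not the study's data:

import numpy as np

# Illustrative normalized data: fraction of baseline demand vs. fraction
# of baseline pumping energy, pooled over several hypothetical systems.
demand_fraction = np.array([1.00, 0.95, 0.90, 0.85, 0.80])
energy_fraction = np.array([1.00, 0.967, 0.935, 0.902, 0.870])

slope, intercept = np.polyfit(demand_fraction, energy_fraction, 1)
r2 = np.corrcoef(demand_fraction, energy_fraction)[0, 1] ** 2

# Predicted energy saving for a 20% demand reduction.
saving = 1.0 - (slope * 0.80 + intercept)
print(f"slope={slope:.2f}, R^2={r2:.3f}, saving at -20% demand: {saving:.1%}")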

Abstract:

As continued global funding and coordination are allocated toward improving access to safe sources of drinking water, alternative solutions may be necessary to expand implementation to remote communities. This report evaluates two technologies used in a small water distribution system in a mountainous region of Panama: solar-powered pumping and flow-reducing discs. The two parts of the system function independently, but both were chosen for their ability to mitigate unique issues in the community. The design program NeatWork and flow-reducing discs were evaluated because they are tools taught to Peace Corps Volunteers in Panama. Even when ample water is available, mountainous terrain affects the pressure available throughout a water distribution system. Since the static head in the system varies only with the height of water in the tank, frictional losses from pipes and fittings must be exploited to balance out the inequalities caused by the uneven terrain. Reducing the maximum allowable flow to connections through the installation of flow-reducing discs can help retain enough residual pressure in the main distribution lines to provide reliable service to all connections. NeatWork was calibrated to measured flow rates by changing the orifice coefficient (θ), resulting in a value of 0.68, which is 10-15% higher than typical values for manufactured flow-reducing discs. NeatWork was used to model various system configurations to determine whether a single-sized flow-reducing disc could provide equitable flow rates throughout an entire system. There is a strong correlation between the optimum single-sized flow-reducing disc and the average elevation change throughout a water distribution system: the larger the elevation change across the system, the smaller the recommended uniform orifice size. Renewable energy can bridge the infrastructure gap and provide basic services at a fraction of the cost and time required to install transmission lines. Methods for assessing solar-powered pumping systems as a means of rural water supply are presented and evaluated. It was determined that manufacturer-provided product specifications can be used to appropriately design a solar pumping system, but care must be taken to ensure that sufficient water can be provided to the system despite variations in solar intensity.
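Flow through a flow-reducing disc can be approximated with the standard orifice equation Q = θ·A·√(2gh); presumably NeatWork's orifice coefficient θ plays this role, with the calibrated value θ = 0.68. A small sketch (the disc diameter and head are hypothetical):

import math

def orifice_flow_lps(disc_diameter_mm, head_m, theta=0.68):
    # Q = theta * A * sqrt(2 g h); returns liters per second.
    area_m2 = math.pi * (disc_diameter_mm / 1000.0) ** 2 / 4.0
    q_m3s = theta * area_m2 * math.sqrt(2.0 * 9.81 * head_m)
    return q_m3s * 1000.0

# Example: a 4 mm disc under 30 m of residual head (hypothetical values).
print(round(orifice_flow_lps(4.0, 30.0), 3), "L/s")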

Abstract:

Intermodal rail/road freight transport constitutes an alternative to long-haul road transport for the distribution of large volumes of goods. The paper introduces the intermodal transportation problem for the tactical planning of mode and service selection. In rail mode, shippers either book train capacity on a per-unit basis or charter complete block trains. Road mode is used for short-distance haulage to intermodal terminals and for direct shipments to customers. We analyze, on a model basis, the competition between road and intermodal transportation with regard to freight consolidation and service cost. The approach is applied to the distribution system of an industrial company serving customers in Eastern Europe. The case study investigates the impact of transport cost and consolidation on the optimal modal split.
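The tactical decision per lane is a cost comparison between direct road haulage, per-unit intermodal bookings (which still pay road drayage to the terminal), and chartering a block train whose fixed cost must be spread over consolidated volume. A toy sketch of that trade-off with hypothetical tariffs, not the paper's model:

def cheapest_mode(units, road_per_unit=900.0, intermodal_per_unit=650.0,
                  drayage_per_unit=120.0, block_train_charter=30000.0,
                  block_train_capacity=60):
    # Per-unit intermodal bookings still pay short-distance road drayage.
    costs = {
        "road": units * road_per_unit,
        "intermodal_per_unit": units * (intermodal_per_unit + drayage_per_unit),
    }
    if units <= block_train_capacity:
        # Chartered block train: fixed cost regardless of load factor.
        costs["block_train"] = block_train_charter + units * drayage_per_unit
    return min(costs.items(), key=lambda kv: kv[1])

for n in (10, 40, 60):
    print(n, "units ->", cheapest_mode(n))

With these numbers the block train only wins once enough volume is consolidated, which is the consolidation effect the case study investigates.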

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of pre-shift and post-shift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for the allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. The supply constraints were (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that staff type's ability to perform the job function of an RN (e.g., the value of eight hours of an RN is 8 points, of an LVN 6 points), and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in the penalty coefficients of the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
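The structure described above maps directly onto a small integer program. A sketch using the PuLP modeling library, with invented unit demands and availabilities; the staff values follow the abstract (8 points per eight-hour RN shift, 6 per LVN), while the penalty weights are placeholders:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, value

units = ["U1", "U2"]
staff_types = {"RN": {"value": 8, "avail": 10, "penalty": 1.0},
               "LVN": {"value": 6, "avail": 6, "penalty": 0.8}}
acuity_points = {"U1": 60, "U2": 40}   # demand in points (1 point = 1 RN-hour)
min_rn = {"U1": 4, "U2": 3}            # minimum RN coverage per unit

prob = LpProblem("nurse_distribution", LpMinimize)
x = {(s, u): LpVariable(f"x_{s}_{u}", lowBound=0, cat=LpInteger)
     for s in staff_types for u in units}

# Objective: penalty-weighted number of assigned staff.
prob += lpSum(staff_types[s]["penalty"] * x[s, u]
              for s in staff_types for u in units)

for u in units:
    # Demand: meet each unit's acuity points with valued staff hours.
    prob += lpSum(staff_types[s]["value"] * x[s, u]
                  for s in staff_types) >= acuity_points[u]
    prob += x["RN", u] >= min_rn[u]
for s in staff_types:
    # Supply: total availability of each staff type.
    prob += lpSum(x[s, u] for u in units) <= staff_types[s]["avail"]

prob.solve()
print({k: int(value(v)) for k, v in x.items()})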

Abstract:

Metamodels have proven to be very useful for reducing the computational requirements of Evolutionary Algorithm-based optimization by acting as quick-solving surrogates for slow-solving fitness functions. The relationship between metamodel scope and objective function varies between applications: in some cases the metamodel acts as a surrogate for the whole fitness function, whereas in others it replaces only a component of it. This paper presents a formalized qualitative process for evaluating a fitness function to determine the most suitable metamodel scope, so as to increase the likelihood of calibrating a high-fidelity metamodel and hence obtain good optimization results in a reasonable amount of time. The process is applied to the risk-based optimization of water distribution systems, a very computationally intensive problem for real-world systems. The process is validated on a simple case study (modified New York Tunnels), and the power of metamodelling is demonstrated on a real-world case study (Pacific City) with a computational speed-up of several orders of magnitude.
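As an illustration of the whole-function metamodel scope, the sketch below wraps an evolutionary loop around a regression surrogate that pre-screens mutated candidates so only the predicted best reach the slow fitness function. This is a generic surrogate-assisted EA under illustrative settings, not the paper's formalized process:

import random
from sklearn.ensemble import RandomForestRegressor

def slow_fitness(x):            # stand-in for an expensive hydraulic model
    return sum((xi - 0.3) ** 2 for xi in x)

dim, pop_size = 5, 20
pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
archive_x, archive_y = [], []

for gen in range(30):
    # Evaluate the population exactly and archive for surrogate training.
    fits = [slow_fitness(x) for x in pop]
    archive_x += pop
    archive_y += fits
    surrogate = RandomForestRegressor(n_estimators=50).fit(archive_x, archive_y)

    # Generate many cheap candidates by mutating the best half...
    parents = [x for _, x in sorted(zip(fits, pop))[: pop_size // 2]]
    cands = [[min(1.0, max(0.0, xi + random.gauss(0, 0.1)))
              for xi in random.choice(parents)] for _ in range(200)]
    # ...and let the surrogate pre-screen: only the predicted best survive.
    pred = surrogate.predict(cands)
    pop = [c for _, c in sorted(zip(pred, cands), key=lambda t: t[0])][:pop_size]

print("best exact fitness found:", min(archive_y))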

Abstract:

Proton therapy is a high-precision technique in cancer radiation therapy that allows irradiating the tumor with minimal damage to the surrounding healthy tissues. Pencil beam scanning is the most advanced dose distribution technique; it is based on a variable-energy beam of a few millimeters FWHM that is moved to cover the target volume. Due to spurious effects of the accelerator and of the dose distribution system, and to the unavoidable scattering inside the patient's body, the pencil beam is surrounded by a halo that produces a peripheral dose. To assess this issue, nuclear emulsion films interleaved with tissue-equivalent material were used for the first time to characterize the beam in the halo region and to evaluate the corresponding dose experimentally. The high-precision tracking performance of the emulsion films allowed the angular distribution of the protons in the halo to be studied. Measurements with this technique were performed on the clinical beam of Gantry 1 at the Paul Scherrer Institute. Proton tracks were identified in the emulsion films and the track density was studied at several depths. The corresponding dose was assessed by Monte Carlo simulations, and the dose profile was obtained as a function of the distance from the center of the beam spot.

Abstract:

This paper analyzes the factors associated with the rejection of products at ports of importing countries and the remedial actions taken by producers in China, taking as an example one of China's most competitive agro-food products: frozen vegetables. The paper provides an overview of the vegetable production and distribution system in China and of the way in which China has participated in exports of these products. Later sections examine the frozen vegetable sector in China in detail, identify the causes of port rejections, and describe the actions taken by the Chinese government and by producers, processors and exporters to improve the quality of frozen vegetable exports.

Abstract:

Communications Based Train Control (CBTC) systems require high-quality radio data communications for train signaling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and leaky feeder as the distribution system, and all of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrowband, broadband and data-correlated measurements taken in the Madrid underground with a transmission system at 2.4 GHz in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modeled with narrowband and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfill the QoS specifications of CBTC systems.
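Figures such as the 99.7% packet success rate and 20 ms average delay are aggregate QoS statistics over a transmission log. A trivial sketch of their computation, assuming a hypothetical log format:

# Each record: (packet_id, delivered_ok, delay_ms); the format is invented.
log = [(1, True, 18.2), (2, True, 21.5), (3, False, None), (4, True, 19.9)]

delays = [d for _, ok, d in log if ok]
pdr = len(delays) / len(log)            # packet delivery ratio
avg_delay = sum(delays) / len(delays)   # mean propagation delay of delivered packets

print(f"PDR = {pdr:.1%}, average delay = {avg_delay:.1f} ms")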

Abstract:

The development of functional legged robots has encountered its limits in human-made actuation technology. This paper describes research on the biomimetic design of legs for agile quadrupeds. A biomimetic leg concept is presented that extracts from horse legs the key principles responsible for the agile and powerful locomotion of these animals. The proposed biomimetic leg model defines the effective leg length, leg kinematics, limb mass distribution, actuator power, and elastic energy recovery as determinants of agile locomotion, and values for these five key elements are given. The transfer of the extracted principles to technological instantiations is analyzed in detail, considering the availability of current materials, structures and actuators. A real leg prototype has been developed following the proposed biomimetic leg concept. The actuation system is based on the hybrid use of series elasticity and magneto-rheological dampers, which provides variable compliance for natural motion. From the experimental evaluation of this prototype, conclusions are drawn on the current technological barriers to achieving functional legged robots that walk dynamically with agile locomotion.
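The hybrid actuation can be modeled as a series spring in parallel with a controllable damper, so the joint torque is τ = k·(θ_motor − θ_link) + c(t)·(ω_motor − ω_link), with c(t) set by the MR damper. A minimal simulation sketch of one such joint (all parameters are illustrative, not the prototype's):

import math

k = 150.0          # series spring stiffness [Nm/rad]
inertia = 0.05     # link inertia [kg m^2]
dt = 0.001         # integration step [s]

theta_link, dtheta_link = 0.0, 0.0
for step in range(2000):
    t = step * dt
    # Commanded motor angle: a 1 Hz swing of the leg.
    theta_motor = 0.5 * math.sin(2 * math.pi * t)
    dtheta_motor = 0.5 * 2 * math.pi * math.cos(2 * math.pi * t)
    # MR damper as a two-level variable damping coefficient.
    c = 2.0 if abs(dtheta_link) > 3.0 else 0.2
    tau = k * (theta_motor - theta_link) + c * (dtheta_motor - dtheta_link)
    ddtheta = tau / inertia
    dtheta_link += ddtheta * dt        # explicit Euler integration
    theta_link += dtheta_link * dt

print(f"final link angle: {theta_link:.3f} rad")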

Abstract:

Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of the applications and the target lifetime to the user.

This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of the less important applications, and should in addition maximize the total performance of the applications without harming that lifetime guarantee. As a support, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as the first-class resource.

As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism for restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task.

Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand over all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an additional real-time friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and time-constraint meeting can be traded off flexibly. A SystemC-based test bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and trading off properly between them.
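The core mechanism can be sketched in a few lines: each task keeps an energy-based virtual time that advances as consumed energy divided by its power share, and the scheduler always serves the task with the smallest virtual time. A toy illustration of this idea, not the thesis implementation:

import heapq

class EnergyFairQueue:
    # Serve tasks in order of energy-based virtual time:
    # vtime += energy_consumed / power_share
    def __init__(self, shares):
        self.heap = [(0.0, name) for name in shares]
        heapq.heapify(self.heap)
        self.shares = shares
        self.energy_used = {name: 0.0 for name in shares}

    def run_quantum(self, energy_per_quantum=1.0):
        vtime, name = heapq.heappop(self.heap)
        self.energy_used[name] += energy_per_quantum
        heapq.heappush(self.heap,
                       (vtime + energy_per_quantum / self.shares[name], name))
        return name

# Power shares: the target application is guaranteed the largest share.
sched = EnergyFairQueue({"target_app": 5.0, "background": 2.0, "sync": 1.0})
for _ in range(800):
    sched.run_quantum()
print(sched.energy_used)   # served energy is ~ proportional to shares (5:2:1)

Over many quanta the served energy converges to the 5:2:1 share ratio, which is the proportional energy use that prevents starvation of any task.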

Abstract:

An increase in water demand, coupled with increasing pollution, has made the reuse of treated water necessary today; however, water reuse must guarantee and minimize the potential health and environmental risks that the practice may cause. In Spain these parameters are regulated by Royal Decree 1620/2007 on the legal regime of the reuse of treated water. Reclaimed water is water that has already undergone a depuration treatment and to which an additional or complementary treatment is applied to bring its quality to the level required for its intended use. Since disinfection processes are required for the various reuses, one of the main systems used is chlorine, owing to its simple application and low cost, without taking into account the possible formation of potentially carcinogenic organohalogenated compounds. Hence the need to study the alternative oxidation systems that are the subject of this thesis: stabilized chlorine dioxide, ozone, and the advanced oxidation processes (AOPs) ozone/peroxide and UV/peroxide. This thesis investigates the removal efficiencies these systems can reach for humic acids and phenols, the main precursors of disinfection by-products. It is likewise considered necessary to guarantee the disinfection of the water, studied here through three groups of microorganisms (total coliforms, E. coli, and enterococci), an important point being possible microbiological regrowth due to insufficient disinfection, to the persistence in the water of the aforementioned compounds, or to some food source the microorganisms might find in the distribution system. What matters most is the water quality these disinfectants can achieve, with the aim of producing water for the various reuses that exist today and thus not limiting the scope of wastewater reuse. On this basis, the water of the Manzanares river was characterized to determine the amounts of dissolved humic acids and phenols; since low values were obtained, 5 mg/L of these compounds were added to the river samples in order to observe how they might interfere with the disinfection of the water. Under these conditions, optimum results were obtained for the disinfection systems studied. Ozone was an efficient oxidant for the disinfection of the microorganisms and for the removal of humic acids and phenols at short contact times, but showed deficiencies by allowing the regrowth of total coliforms. The advanced oxidation system UV/peroxide proved an efficient disinfectant that guarantees the absence of regrowth over time, and it also achieved good removal of humic acid and phenols.
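Disinfection performance of oxidants like ozone and chlorine dioxide is conventionally compared through CT-based kinetics. A small sketch of the classic Chick-Watson model; the rate constants are purely illustrative, not values from the thesis:

def log_inactivation(k, concentration_mg_l, contact_min, n=1.0):
    # Chick-Watson: log10(N0/N) = k * C^n * t
    return k * concentration_mg_l ** n * contact_min

def survival_fraction(k, c, t):
    return 10 ** (-log_inactivation(k, c, t))

# Illustrative comparison: a fast oxidant (ozone-like) vs a slower one.
for label, k in [("fast oxidant", 0.8), ("slow oxidant", 0.1)]:
    ct_4log = 4.0 / k   # CT product needed for 4-log inactivation
    print(label, "| CT for 4-log:", round(ct_4log, 1), "mg*min/L",
          "| survival at C=1 mg/L, t=5 min:",
          f"{survival_fraction(k, 1.0, 5.0):.1e}")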

Abstract:

The increasing penetration of wind energy into power systems has pushed grid operators to set new requirements for this kind of generating plant in order to maintain acceptable and reliable operation of the system. In addition to low-voltage ride-through capability, wind farms are required to participate in voltage support, stability enhancement and power quality improvement. This paper presents a solution for wind farms with fixed-speed generators based on the use of a STATCOM with a braking resistor and additional series impedances, together with an adequate control strategy. The focus is on guaranteeing grid code compliance when the wind farm faces an extensive series of grid disturbances.

Abstract:

This paper studies the effect of different penetration rates of plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) on the Spanish electrical system. A stochastic model for the average trip and for arrival and departure times is used to determine the availability of the vehicles for charging. A novel advanced charging algorithm is proposed that avoids any communication among agents. Its performance is determined through different charging scenarios.
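As a toy illustration of coordination without communication (not the paper's algorithm), each vehicle below independently randomizes its charging start within its plug-in window, which statistically spreads the aggregate load with no message exchange; arrival and departure times are sampled from a simple stochastic model:

import random

HOURS = 24

def vehicle_schedule(arrival_h, departure_h, charge_hours):
    # Pick a random feasible start time locally: no communication needed.
    window = departure_h - arrival_h - charge_hours
    start = arrival_h + random.uniform(0, max(window, 0))
    return start, start + charge_hours

def aggregate_load(n_vehicles=1000, power_kw=3.7):
    load = [0.0] * HOURS
    for _ in range(n_vehicles):
        arrival = random.gauss(19, 1.5) % HOURS       # evening arrivals
        departure = arrival + random.gauss(11, 1)     # overnight stay
        start, end = vehicle_schedule(arrival, departure, charge_hours=4)
        for h in range(HOURS):
            # Fractional overlap of hour h with the charging interval;
            # the h + 24 check handles charging that wraps past midnight.
            for hh in (h, h + 24):
                overlap = max(0.0, min(end, hh + 1) - max(start, hh))
                load[h] += overlap * power_kw
    return load

peak = max(aggregate_load())
print(f"aggregate peak: {peak:.0f} kW")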

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ-rays are not generated by thermal processes in mere stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovas, or as a result of dark matter annihilation processes. The γ-rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physics of these objects, as well as to develop theoretical models able to describe them accurately. The problem when observing γ-rays is that they are absorbed in the highest layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable).
Therefore, there are only two possible ways to observe γ-rays: by using detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ-ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers. From these images it is possible to determine the main parameters of the original γ-ray, and with enough γ-rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ-rays is not a simple task. The showers generated by low-energy γ-rays contain few photons and last a few nanoseconds, while the ones corresponding to high-energy γ-rays, although having more photons and lasting longer, are much more unlikely. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible from the few that these showers have; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 medium size telescopes (MSTs) and up to 70 small size telescopes (SSTs). With such an array, two goals would be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ-rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈1 GHz). This makes it unfeasible to read data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to store up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in close pixels at near times, in contrast to the random arrival of the NSB photons.
Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a time window of a few nanoseconds is enough to detect large showers. However, taking into account how many photons have been detected in each pixel (the sumtrigger technique) is more convenient for optimizing the sensitivity to low-energy showers. The trigger system developed and presented in this thesis intends to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received in each pixel of the trigger region and compares the sum with a threshold which can be directly expressed as a number of detected photons (photoelectrons). The trigger system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with extensive overlapping. In this way, every light increase inside a compact region of 14, 21 or 28 pixels is detected, and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera by means of a complex distribution system, in such a way that all the clusters are read at the same time, independently of their position in the camera. Thus, the readout saves a complete camera image whenever the number of photoelectrons set as the threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored; when there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, with every trigger only a few nanoseconds of information around the trigger time are stored; in the case of large showers, the duration of the shower can be considerably longer, and information is lost due to the temporal cut. With the aim of overcoming both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera, and in the positive case only the trigger regions exceeding the low threshold are read, during a longer time. In this way, the information from empty pixels is not stored, and the fixed images of the showers become little "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays, which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very near in time) triggers in neighbouring telescopes. This function, together with others related to interfacing different systems, has been implemented in a system named the Trigger Interface Board (TIB). This system comprises one module which will be placed inside the camera of each LST and MST, and which will be connected to the neighbouring telescopes through optical fibers. When a telescope produces a local trigger, it is resent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered.
Once the delay differences due to propagation in the optical fibers, and in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences, and if the trigger condition is fulfilled, the camera is read a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems for which the author has been the principal engineer. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements over the current designs during the following years, hopefully benefiting the whole scientific community participating in CTA.
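As a conceptual illustration of the Level 1 sumtrigger decision, the sketch below sums pixel-cluster signals over overlapping trigger regions and fires when any sum exceeds a threshold expressed in photoelectrons. The camera layout, NSB model and all numbers are illustrative, not the actual Level 1 design:

import random

N_CLUSTERS = 20            # clusters of 7 pixels each (toy 1-D camera)
THRESHOLD_PE = 40.0        # trigger threshold in photoelectrons
REGION_CLUSTERS = 3        # 3 clusters -> a 21-pixel trigger region

def cluster_sums(nsb_mean_pe=1.0, shower_cluster=None, shower_pe=30.0):
    # Analog sum per 7-pixel cluster: NSB fluctuations plus optional shower.
    sums = [sum(random.expovariate(1.0 / nsb_mean_pe) for _ in range(7))
            for _ in range(N_CLUSTERS)]
    if shower_cluster is not None:
        sums[shower_cluster] += shower_pe
    return sums

def l1_trigger(sums):
    # Overlapping regions: every run of REGION_CLUSTERS neighbouring clusters.
    for i in range(N_CLUSTERS - REGION_CLUSTERS + 1):
        if sum(sums[i:i + REGION_CLUSTERS]) > THRESHOLD_PE:
            return True
    return False

fired_nsb = sum(l1_trigger(cluster_sums()) for _ in range(1000))
fired_shw = sum(l1_trigger(cluster_sums(shower_cluster=5)) for _ in range(1000))
print(f"NSB-only trigger rate: {fired_nsb / 10:.1f}%  "
      f"with shower: {fired_shw / 10:.1f}%")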