996 results for Extensive air showers
Abstract:
The contribution of buildings to total worldwide energy consumption in developed countries is between 20% and 40%. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility's energy consumption. Studies have indicated that 20-30% energy savings are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is a process that partially or fully automates the commissioning process through the detection of faults. An expert system is a knowledge-based system that employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject-matter expert in a particular field, such as engineering, medicine, finance or marketing. This thesis details the research and development work undertaken in developing and testing a new AFDD expert system for AHUs that can be installed with minimal set-up time on a large cross-section of AHU types, in a building management system vendor-neutral manner. Both simulated testing and extensive field testing were undertaken against a widely available and industry-known expert rule set, the Air Handling Unit Performance Assessment Rules (APAR), and a later, more developed version known as APAR_extended, in order to prove its effectiveness. Specifically, in tests against a dataset of 52 simulated faults, the new AFDD expert system identified all 52 derived issues, whereas the APAR rule set identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the new AFDD expert system was shown to identify four individual fault case categories that the APAR method did not, as well as showing improvements in the area of fault diagnosis.
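The rule-based approach described above can be illustrated with a toy APAR-style check. The rule form, tolerance and thresholds below are invented for illustration and are not taken from the thesis or the actual APAR rule set:

```python
# Hypothetical sketch of an APAR-style rule: in heating mode, the supply air
# temperature should exceed the mixed air temperature; a persistent violation
# flags a possible heating-coil or valve fault. All parameters are invented.

def apar_heating_rule(samples, tolerance=1.0, min_violations=5):
    """samples: list of (supply_temp, mixed_air_temp) readings in heating mode.
    Returns True when enough readings violate the rule to flag a fault."""
    violations = sum(1 for t_sa, t_ma in samples if t_sa < t_ma - tolerance)
    return violations >= min_violations

# A stuck-closed heating valve: supply air never warmer than mixed air.
faulty = [(18.0, 20.0)] * 6
healthy = [(32.0, 20.0)] * 6
print(apar_heating_rule(faulty))   # True -> fault flagged
print(apar_heating_rule(healthy))  # False
```

The `min_violations` persistence count is one common way to avoid flagging transient sensor noise as a fault.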
Developing a simple, rapid method for identifying and monitoring jellyfish aggregations from the air
Abstract:
Within the marine environment, aerial surveys have historically centred on apex predators, such as pinnipeds, cetaceans and sea birds. However, it is becoming increasingly apparent that the utility of this technique may also extend to subsurface species such as pre-spawning fish stocks and aggregations of jellyfish that occur close to the surface. In light of this, we tested the utility of aerial surveys to provide baseline data for 3 poorly understood scyphozoan jellyfish found throughout British and Irish waters: Rhizostoma octopus, Cyanea capillata and Chrysaora hysoscella. Our principal objectives were to develop a simple sampling protocol to identify and quantify surface aggregations, assess their consistency in space and time, and consider the overall applicability of this technique to the study of gelatinous zooplankton. This approach provided a general understanding of range and relative abundance for each target species, with greatest suitability to the study of R. octopus. For this species it was possible to identify and monitor extensive, temporally consistent and previously undocumented aggregations throughout the Irish Sea, an area spanning thousands of square kilometres. This finding has pronounced implications for ecologists and fisheries managers alike and, moreover, draws attention to the broad utility of aerial surveys for the study of gelatinous aggregations beyond the range of conventional ship-based techniques.
Abstract:
Despite the extensive geographical range of palaeolimnological studies designed to assess the extent of surface water acidification in the United Kingdom during the 1980s, little attention was paid to the status of surface waters in the North York Moors (NYM). In this paper, we present sediment core data from a moorland pool in the NYM that provide a record of air pollution contamination and surface water acidification. The 41-cm-long core was divided into three lithostratigraphic units. The lower two comprise peaty soils and peats, respectively, that date to between approximately 8080 and 6740 cal. BP. The uppermost unit comprises peaty lake muds dating from between approximately AD 1790 and the present day (AD 2006). The lower two units contain pollen dominated by forest taxa, whereas the uppermost unit contains pollen indicative of open landscape conditions similar to those of the present. Heavy metal, spheroidal carbonaceous particle, mineral magnetic and stable isotope analyses of the upper sediments show clear evidence of contamination by air pollutants derived from fossil-fuel combustion over the last c. 150 years, and diatom analysis indicates that the naturally acidic pool became more acidic during the 20th century. We conclude that the exceptionally acidic surface waters of the pool at present (pH = c. 4.1) are the result of a long history of air pollution and not of naturally acidic local conditions. We argue that the highly acidic surface waters elsewhere in the NYM are similarly acidified and that the lack of evidence of significant recovery from acidification, despite major reductions in the emissions of acidic gases over the last c. 30 years, indicates the continuing influence of pollutant sulphur stored in catchment peats, a legacy of over 150 years of acid deposition.
Abstract:
Glazed Double Skin Facades (DSFs) offer the potential to improve the performance of all-glass building skins, common in commercial office buildings where full-facade glazing has almost become the standard. Single-skin glazing results in increased heating and cooling costs relative to opaque walls, due to the lower thermal resistance of glass and the increased impact of solar gain through it. However, the performance benefit of DSF technology continues to be questioned and its operation is poorly understood, particularly the nature of airflow through the cavity. This paper deals specifically with the experimental analysis of the airflow characteristics in an automated double skin facade. The benefit of the DSF as a thermal buffer, and in limiting overheating, is evaluated through analysis of an extensive set of parameters including air and surface temperatures at each level in the DSF, airflow readings in the cavity and at the inlet and outlet, solar and wind data, and analytically derived pressure differentials. The temperature and airflow in the cavity of a DSF are monitored using wireless sensors and hot-wire anemometers, respectively. Automated louvre operation and building set-points are monitored via the BMS. Thermal stratification and airflow variation during changing weather conditions are shown to affect the performance of the DSF considerably, and hence the energy performance of the building. The relative pressure effects due to buoyancy and wind are analysed and quantified. This research aims to develop and validate models of DSFs in the maritime climate, using multi-season data from experimental monitoring. This extensive experimental study provides data for training and validation of models.
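The two pressure drivers analysed above, buoyancy (stack effect) and wind, can be sketched with standard textbook expressions. All numbers and the pressure coefficient below are illustrative assumptions, not the paper's measured data:

```python
# Rough sketch of the two cavity pressure drivers: stack (buoyancy) pressure
# and dynamic wind pressure. Values are illustrative, not measurements.
RHO = 1.2    # air density, kg/m^3 (assumed)
G = 9.81     # gravitational acceleration, m/s^2

def stack_pressure(height_m, t_cavity_c, t_outside_c):
    """Buoyancy-driven pressure difference across a cavity of given height."""
    t_cav = t_cavity_c + 273.15
    t_out = t_outside_c + 273.15
    return RHO * G * height_m * (t_cav - t_out) / t_cav

def wind_pressure(v_wind, cp=0.7):
    """Dynamic wind pressure on the facade for an assumed pressure coefficient cp."""
    return 0.5 * cp * RHO * v_wind ** 2

# A 10 m cavity 15 K warmer than outside vs. a 5 m/s wind:
print(round(stack_pressure(10, 35, 20), 1))  # ~5.7 Pa
print(round(wind_pressure(5.0), 1))          # 10.5 Pa
```

Even this crude comparison shows why moderate winds can dominate buoyancy in such cavities, consistent with the paper's aim of quantifying both effects.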
Abstract:
A thorough understanding and better control of the self-assembly of diblock copolymers and of their complexes at the air/water interface enable the controlled formation of nanostructures with known properties, as an alternative to nanolithography. In this thesis, monolayers obtained by the Langmuir and Langmuir-Blodgett (LB) techniques with the diblock copolymer polystyrene-poly(4-vinyl pyridine) (PS-PVP), alone or complexed through hydrogen bonding with small molecules [in particular 3-n-pentadecylphenol (PDP)], were studied. An important part of our research was devoted to the study of an atypical assembled monolayer dubbed the nanostripe network. LB monolayers composed of nanostripes have been reported in the literature before, but they often coexist with other morphologies, which makes them unusable for potential applications. We determined the molecular parameters and experimental conditions that control this morphology, making it highly reproducible. We also proposed an original mechanism for the formation of this morphology. In addition, we showed that the use of high-boiling-point solvents, not commonly used for the preparation of Langmuir films, can improve the order of the nanostripes. By studying a wide range of PS-PVP with different PS/PVP ratios and molar masses, with or without PDP, we established how the main morphology types (planar, stripes, nodules) depend on the composition and concentration of the solutions. These observations led to a discussion of the mechanisms of morphology formation, including kinetics, molecular assembly and the effect of dewetting.
We also demonstrated for the first time that the plateau in the isotherm of PS-PVP/PDP with nodule-type morphology is related to an order-order transition of the nodules (hexagonal to tetragonal) that occurs simultaneously with the reorientation of the PDP, both aspects being clearly observed by AFM. These studies also pave the way for the use of ultrathin PS-PVP/PDP films as masks. The ability to produce well-controlled nanostructured films on different substrates was demonstrated, and the stability of the films was verified. Removal of the small molecule from the nanostructures revealed an internal structure to be explored in future studies.
Abstract:
Recent radar and rain-gauge observations from the island of Dominica, which lies in the eastern Caribbean Sea at 15°N, show a strong orographic enhancement of trade-wind precipitation. The mechanisms behind this enhancement are investigated using idealized large-eddy simulations with a realistic representation of the shallow trade-wind cumuli over the open ocean upstream of the island. The dominant mechanism is found to be the rapid growth of convection through bulk lifting of the inhomogeneous impinging flow. When rapidly lifted by the terrain, existing clouds and other moist parcels gain buoyancy relative to rising dry air because of their different adiabatic lapse rates. The resulting energetic, closely packed convection forms precipitation readily and brings frequent heavy showers to the high terrain. Despite this strong precipitation enhancement, only a small fraction (1%) of the impinging moisture flux is lost over the island. However, an extensive rain shadow forms in the lee of Dominica due to convective stabilization, forced descent, and wave breaking. A linear model is developed to explain the convective enhancement over the steep terrain.
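The lapse-rate mechanism above can be made concrete with a rough numeric sketch. The lapse-rate values and lifting depth are illustrative textbook numbers, not taken from the simulations:

```python
# Minimal illustration of the lapse-rate mechanism: dry and saturated (cloudy)
# parcels lifted together cool at different rates, so the cloudy parcel ends
# up warmer, i.e. buoyant relative to its surroundings. Values are illustrative.
DRY_LAPSE = 9.8e-3    # K/m, dry adiabatic lapse rate
MOIST_LAPSE = 6.0e-3  # K/m, a typical moist adiabatic value in the trades (assumed)

def temp_after_lift(t0_c, dz_m, lapse):
    """Parcel temperature after adiabatic lifting by dz_m metres."""
    return t0_c - lapse * dz_m

t_dry = temp_after_lift(25.0, 1000.0, DRY_LAPSE)      # 15.2 C
t_cloud = temp_after_lift(25.0, 1000.0, MOIST_LAPSE)  # 19.0 C
print(t_cloud - t_dry)  # ~3.8 K of relative warmth after 1 km of bulk lifting
```

A few kelvin of temperature excess is a large buoyancy anomaly for shallow convection, which is why the bulk lifting energizes the impinging cloud field so effectively.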
Abstract:
The DIAMET (DIAbatic influences on Mesoscale structures in ExTratropical storms) project aims to improve forecasts of high-impact weather in extratropical cyclones through field measurements, high-resolution numerical modeling, and improved design of ensemble forecasting and data assimilation systems. This article introduces DIAMET and presents some of the first results. Four field campaigns were conducted by the project, one of which, in late 2011, coincided with an exceptionally stormy period marked by an unusually strong, zonal North Atlantic jet stream and a succession of severe windstorms in northwest Europe. As a result, December 2011 had the highest monthly North Atlantic Oscillation index (2.52) of any December in the last 60 years. Detailed observations of several of these storms were gathered using the UK's BAe 146 research aircraft and extensive ground-based measurements. As an example of the results obtained during the campaign, observations are presented of cyclone Friedhelm on 8 December 2011, when surface winds with gusts exceeding 30 m s⁻¹ crossed central Scotland, leading to widespread disruption to transportation and electricity supply. Friedhelm deepened 44 hPa in 24 hours and developed a pronounced bent-back front wrapping around the storm center. The strongest winds at 850 hPa and the surface occurred in the southern quadrant of the storm, and detailed measurements showed these to be most intense in clear air between bands of showers. High-resolution ensemble forecasts from the Met Office showed similar features, with the strongest winds aligned in linear swaths between the bands, suggesting that there is potential for improved skill in forecasts of damaging winds.
Abstract:
The most damaging winds in a severe extratropical cyclone often occur just ahead of the evaporating ends of cloud filaments emanating from the so-called cloud head. These winds are associated with low-level jets (LLJs), sometimes occurring just above the boundary layer. The question then arises as to how the high momentum is transferred to the surface. An opportunity to address this question arose when the severe ‘St Jude's Day’ windstorm travelled across southern England on 28 October 2013. We have carried out a mesoanalysis of a network of 1 min resolution automatic weather stations and high-resolution Doppler radar scans from the sensitive S-band Chilbolton Advanced Meteorological Radar (CAMRa), along with satellite and radar network imagery and numerical weather prediction products. We show that, although the damaging winds occurred in a relatively dry region of the cyclone, there was evidence within the LLJ of abundant precipitation residues from shallow convective clouds that were evaporating in a localized region of descent. We find that pockets of high momentum were transported towards the surface by the few remaining actively precipitating convective clouds within the LLJ and also by precipitation-free convection in the boundary layer that was able to entrain evaporatively cooled air from the LLJ. The boundary-layer convection was organized in along-wind rolls separated by 500 to about 3000 m, the spacing varying according to the vertical extent of the convection. The spacing was greatest where the strongest winds penetrated to the surface. A run with a medium-resolution version of the Weather Research and Forecasting (WRF) model was able to reproduce the properties of the observed LLJ. It confirmed the LLJ to be a sting jet, which descended over the leading edge of a weaker cold-conveyor-belt jet.
Abstract:
The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude from the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which imposes substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; in addition, automation allows overnight operation, thus saving time. Gas chromatography with flame ionization detection (FID), together with a dual-column approach, enables simultaneous detection with almost equal carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system. Apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system by discussing the tests that were made. Finally, results for the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized using detection limits and uncertainties for each compound, tests of the calibration-mixture conditioning and the carbon dioxide trap to determine their influence on the analyses, and a comparison of the responses of calibrated substances over the period when the flight analyses were made. Comparison of both systems shows good agreement. However, because of the insufficient capacity of the CO2 trap, the signal of one column was suppressed by carbon dioxide breakthrough to the point that its results appeared unreliable.
Plausibility tests for the internal consistency of the given data sets are based on common patterns exhibited by tropospheric NMHCs. All tests show that samples from the first flights do not comply with the expected pattern. Additionally, detected alkene artefacts suggest potential problems with storage or contamination in all measurement flights. The last two flights, #130-133 and #166-169, pass the tests and are therefore analysed in detail. Samples were analysed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their aging (NMHC ratios), and detected plumes were compared with the chemical signatures of Asian outflows. In the last chapter, future development of the presented system, with a focus on separation, is outlined. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through illustration of the sample treatment to overview diagrams for the measured flights.
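The "NMHC ratios" aging diagnostic mentioned above is commonly based on the relative decay of two hydrocarbons with different OH rate constants. The sketch below uses an assumed OH concentration and order-of-magnitude rate constants, not the thesis's actual values:

```python
# Hedged sketch of a photochemical-age estimate from the decay of the ratio of
# a fast-reacting to a slow-reacting hydrocarbon under OH attack.
import math

OH = 1.0e6  # molecules/cm^3, assumed mean OH concentration

def photochemical_age_days(ratio_now, ratio_initial, k_fast, k_slow):
    """Age in days from the decay of [fast]/[slow]; k in cm^3 molecule^-1 s^-1."""
    t_seconds = math.log(ratio_initial / ratio_now) / (OH * (k_fast - k_slow))
    return t_seconds / 86400.0

# e.g. n-butane vs. ethane, with order-of-magnitude OH rate constants:
k_butane, k_ethane = 2.4e-12, 0.25e-12
print(round(photochemical_age_days(2.0, 4.0, k_butane, k_ethane), 1))  # ~3.7 days
```

The same ratio logic also separates fresh pollution plumes from aged background air, which is how plume "chemical signatures" can be compared with known outflow patterns.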
Abstract:
γ-ray astronomy studies the most energetic particles arriving at the Earth from space. These γ rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or by possible dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes occurring in them and to develop theoretical models that describe their behaviour faithfully. The problem with observing γ rays is that they are absorbed by the upper layers of the atmosphere and do not reach the surface (otherwise the Earth would be uninhabitable). There are therefore only two ways to observe γ rays: flying detectors on satellites, or observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate further, progressively less energetic secondary particles. While they still have enough energy to travel faster than the speed of light in air, these particles emit a bluish glow known as Cherenkov radiation for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect the Cherenkov radiation and even image the shape of the Cherenkov shower. From these images it is possible to infer the main characteristics of the original γ ray, and with enough γ rays one can deduce important properties of the emitting object, hundreds of light-years away.
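The "faster than the speed of light in air" condition translates into a minimum particle energy. As a rough check, for electrons in sea-level air (refractive index n ≈ 1.0003) the Cherenkov threshold works out to about 21 MeV:

```python
# Back-of-envelope Cherenkov threshold for electrons:
# emission requires beta > 1/n, i.e. E > m_e c^2 / sqrt(1 - 1/n^2).
import math

M_E_MEV = 0.511   # electron rest energy, MeV
N_AIR = 1.0003    # approximate refractive index of air at sea level

def cherenkov_threshold_mev(n):
    gamma_min = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return gamma_min * M_E_MEV

print(round(cherenkov_threshold_mev(N_AIR), 1))  # ~20.9 MeV
```

At the altitudes where showers develop, n is even closer to 1, so the threshold is higher still; only the energetic early part of the cascade radiates.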
Detecting Cherenkov showers produced by γ rays is, however, far from easy. Showers generated by low-energy γ photons emit few photons, and only for a few nanoseconds, while those produced by high-energy γ rays, although they contain more electrons and last longer, become increasingly rare as the energy grows. This leads to two development lines for Cherenkov telescopes: to observe low-energy showers, large reflectors are needed to collect many of the few photons these showers produce; conversely, high-energy showers can be detected with small telescopes, but it is advantageous to cover a large area on the ground with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the goal of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, intends to build an observatory in each hemisphere, each equipped with 4 large telescopes (LSTs), some 30 medium-sized telescopes (MSTs) and up to 70 small telescopes (SSTs). Such an array will achieve two goals. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analysed much more precisely thanks to stereoscopic techniques. This thesis gathers several technical developments contributed to the medium and large telescopes of CTA, specifically to the trigger system.
Because Cherenkov showers are so brief, the systems that digitize and read out the data from each pixel must run at very high frequencies (≈1 GHz), which makes continuous operation unfeasible, since the amount of stored data would be unmanageable. Instead, the analogue signals are sampled, and the analogue samples are kept in a circular buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the incoming signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be saved, or can be ignored, allowing the buffer to be overwritten. The decision of whether the image deserves to be saved is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike the photons of the NSB (night-sky background), which arrive randomly. To detect large showers, it is enough to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is better to take into account how many photons have been detected in each pixel (a technique known as sum trigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically sums the signals received by each pixel in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system designed allows trigger regions of selectable size, 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any light excess in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse.
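The sum-trigger decision can be sketched in a toy digital form (the real system performs the sum in analogue hardware). The region layout, signal values and threshold below are invented for illustration:

```python
# Toy digital version of the sum-trigger logic: pixel signals (in
# photoelectrons) are summed over overlapping trigger regions built from
# 7-pixel clusters, and each sum is compared with a threshold.

def sum_trigger(pixels, clusters, region_size, threshold_pe):
    """pixels: signal per pixel; clusters: list of 7-pixel index groups.
    Regions are region_size consecutive clusters, overlapping by one."""
    for start in range(len(clusters) - region_size + 1):
        region = [i for c in clusters[start:start + region_size] for i in c]
        if sum(pixels[i] for i in region) > threshold_pe:
            return True  # trigger pulse generated
    return False

clusters = [list(range(i, i + 7)) for i in range(0, 28, 7)]  # 4 clusters
dark = [0.2] * 28                              # NSB-level noise only
shower = dark[:]; shower[8:12] = [8, 9, 7, 6]  # compact light excess
print(sum_trigger(dark, clusters, 2, 25))      # False
print(sum_trigger(shower, clusters, 2, 25))    # True
```

Summing before thresholding is what lets a faint but compact shower, spread over several pixels each below any per-pixel threshold, still fire the trigger.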
In the most basic version of the trigger system, this pulse is distributed across the whole camera through a delicate distribution system, so that all clusters are read out at the same time regardless of their position in the camera. The trigger system thus saves a complete camera image every time the photon count set as threshold is exceeded in a trigger region. This way of operating has, however, two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels are saved that carry no information at all. With many telescopes, as will be the case for CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger saves only a few nanoseconds around the trigger instant, yet large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is an event in the camera and, if so, only the trigger regions that exceed the low threshold are read out, over a longer time. This avoids storing information from empty pixels, and the static shower images can be turned into short "videos" that represent the temporal development of the shower. This new scheme is called COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sum-trigger schemes such as the one presented in this thesis is that, for the signals from each pixel to be summed properly, they must all take the same time to reach the adder.
The photomultipliers used in each pixel introduce different delays that must be compensated for the sums to be performed properly. The effect of these delays has been studied, and a system to compensate them has been developed. Finally, the next level of the trigger system for effectively distinguishing Cherenkov showers from the NSB consists of searching for simultaneous (or nearly simultaneous) triggers in neighbouring telescopes. With this function, together with other system-interface functions, a system called the Trigger Interface Board (TIB) has been developed. It consists of a module that will be mounted in the camera of each LST or MST and connected by optical fibres to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are sought; if the trigger condition is met, the camera in question is read out, synchronized with the local trigger. Although the whole trigger system is the fruit of a collaboration between several groups, chiefly IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author was the lead engineer. For this reason, this thesis includes abundant technical information on these systems.
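The coincidence search performed by the TIB can be sketched as follows; the window width, timestamps and delays below are invented for illustration:

```python
# Simplified sketch of the neighbour-coincidence search: known fibre and
# light-path delays are subtracted from each neighbour's trigger timestamp,
# which is then compared with the local trigger within a coincidence window.

def stereo_coincidence(local_t_ns, neighbour_ts_ns, delays_ns, window_ns=50):
    """neighbour_ts_ns / delays_ns: trigger time and known delay per neighbour.
    Returns True if any delay-corrected neighbour trigger is coincident."""
    for t, d in zip(neighbour_ts_ns, delays_ns):
        if abs((t - d) - local_t_ns) <= window_ns:
            return True  # coincidence found: read out the camera
    return False

# Neighbour 0 triggered 120 ns later, but with a 100 ns known delay -> coincident.
print(stereo_coincidence(1000, [1120, 4000], [100, 100]))  # True
print(stereo_coincidence(1000, [2000, 4000], [100, 100]))  # False
```

Requiring such a coincidence strongly suppresses accidental NSB triggers, since random noise in two telescopes rarely lines up within tens of nanoseconds.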
There are currently important future development lines concerning both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will yield interesting improvements on the current designs over the coming years and will hopefully benefit the whole scientific community participating in CTA.
ABSTRACT γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in mere stars, but by means of particle-acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or as a result of dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the underlying physics of these objects, as well as to develop theoretical models able to describe them accurately. The problem when observing γ rays is that they are absorbed in the highest layers of the atmosphere, so they do not reach the Earth's surface (otherwise the planet would be uninhabitable). Therefore, there are only two possible ways to observe γ rays: using detectors on board satellites, or observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers.
From these images it is possible to determine the main parameters of the original γ-ray, and with enough γ-rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ-rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ-rays, although they have more photons and last longer, are much more unlikely. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible of the few that these showers produce; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large size telescopes (LSTs), around 30 middle size telescopes (MSTs) and up to 70 small size telescopes (SSTs). With such an array, two targets will be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ-rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈1 GHz).
This makes it unfeasible to read data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to hold up to a few µs of data. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary can be ignored, allowing the buffer to be overwritten. The decision of whether to save the image is based on the fact that Cherenkov showers produce photon detections in nearby pixels at close times, in contrast to the random arrival of the NSB photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons during a certain time window is enough to detect large showers. However, also taking into account how many photons have been detected in each pixel (the sum-trigger technique) is more convenient for optimizing the sensitivity to low-energy showers. The trigger system presented in this thesis intends to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received by each pixel in the trigger region and compares the sum with a threshold that can be directly expressed as a number of detected photons (photoelectrons). The trigger system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters with 7 pixels each) to be selected, with extensive overlap. In this way, every light increment inside a compact region of 14, 21 or 28 pixels is detected, and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is simply distributed throughout the camera by means of a complex distribution system, in such a way that all the clusters are read out at the same time, independently of their position in the camera.
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, with every trigger only some nanoseconds of information around the trigger time are stored. In the case of large showers, the duration of the shower can be considerably longer, so information is lost due to the temporal cut. To overcome both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera, and if so, only the trigger regions exceeding the low threshold are read, during a longer time. In this way, the information from empty pixels is not stored and the fixed images of the showers become little "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sum-trigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays, which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. These functions, together with others related to interfacing with different systems, have been implemented in a system named the Trigger Interface Board (TIB).
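The two-threshold logic behind the COLIBRI idea can be sketched as follows. This is a minimal sketch, assuming per-region signal sums are already available; the threshold values, region labels and the fixed readout window are hypothetical, not the real design parameters.

```python
def two_threshold_readout(region_sums, high_thr, low_thr, window_ns=100):
    """Two-threshold readout sketch: the high threshold decides whether
    there is a relevant event in the camera at all; if so, only the
    regions above the low threshold are read out, for a longer window.
    Returns a dict mapping each selected region to its readout window."""
    if max(region_sums.values()) < high_thr:
        return {}  # no camera-level trigger; the ring buffer is overwritten
    return {region: window_ns
            for region, s in region_sums.items() if s >= low_thr}

# Toy event: region "A" holds the shower core, "B" a dimmer tail,
# "C" only night-sky background.
sums = {"A": 45.0, "B": 12.0, "C": 0.5}
print(two_threshold_readout(sums, high_thr=40.0, low_thr=10.0))
# → {'A': 100, 'B': 100}  ("C", the empty pixels, is never stored)
```

This captures the two gains claimed for the scheme: empty regions are dropped from the readout, and the regions that are kept can be read over a longer window, preserving the temporal development of the shower.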
This system consists of one module which will be placed inside the LST and MST cameras, and which will be connected to the neighbouring telescopes through optical fibers. When a telescope receives a local trigger, it is resent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have been triggered. Once the delay differences due to propagation in the optical fibers and in the air (which depend on the pointing direction) have been compensated, the TIB looks for coincidences, and if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, Ciemat, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by its author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements to the current designs during the following years and will be useful to the whole scientific community participating in CTA.
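The coincidence search performed by the TIB can be illustrated with a short sketch: neighbour trigger times are corrected for their (fiber- and pointing-dependent) propagation delays, and the local trigger is accepted only if a corrected neighbour trigger falls inside a coincidence window. The telescope labels, delay values and window width below are assumptions for illustration only.

```python
def stereo_coincidence(local_t_ns, neighbour_ts_ns, delays_ns, window_ns=50.0):
    """Accept the local trigger if at least one neighbour telescope
    triggered within the coincidence window, after subtracting the
    known propagation delay for each neighbour."""
    for tel, t in neighbour_ts_ns.items():
        corrected = t - delays_ns.get(tel, 0.0)
        if abs(corrected - local_t_ns) <= window_ns:
            return True
    return False

# T2's trigger arrives 120 ns late, but 100 ns of that is fiber delay,
# so after compensation it falls within the window; T3 is unrelated.
accepted = stereo_coincidence(
    local_t_ns=1000.0,
    neighbour_ts_ns={"T2": 1120.0, "T3": 5000.0},
    delays_ns={"T2": 100.0, "T3": 0.0})
print(accepted)  # → True
```

Without the delay compensation step, the same event (raw offset 120 ns against a 50 ns window) would be rejected, which is why the pointing-dependent delays must be removed before searching for coincidences.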
Resumo:
As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large cities across Europe, particularly for NO2. Modeling air quality in urban areas is rather complex, since observed concentration values are a consequence of the interaction of multiple sources and processes that involve a wide range of spatial and temporal scales. Besides a consistent and robust multi-scale modeling system, comprehensive and flexible emission inventories are needed. This paper discusses the application of the WRF-SMOKE-CMAQ system to the city of Madrid (Spain) to assess the contribution of the main emitting sectors in the region. A detailed emission inventory was compiled for this purpose. This inventory relies on bottom-up methods for the most important sources; it is coupled with the regional traffic model and makes use of an extensive database of industrial, commercial and residential combustion plants. Less relevant sources are downscaled from national or regional inventories. This paper reports the methodology and main results of the source apportionment study performed to understand the origin of pollution (main sectors and geographical areas) and to define clear targets for the abatement strategy. Finally, the structure of the air quality monitoring network is analyzed and discussed to identify options to improve the monitoring strategy, not only in the city of Madrid but in the whole metropolitan area.
Resumo:
The phase equilibria in the Al-Fe-Zn-O system in the range 1250 °C to 1695 °C in air have been experimentally studied using equilibration and quenching techniques followed by electron probe X-ray microanalysis. The phase diagram of the binary Al2O3-ZnO system and isothermal sections of the Al2O3-“Fe2O3”-ZnO system at 1250 °C, 1400 °C, and 1550 °C have been constructed and reported for the first time. The extents of solid solutions in the corundum (Al,Fe)2O3, hematite (Fe,Al)2O3, Al2O3*Fe2O3 phase (Al,Fe)2O3, spinel (Al,Fe,Zn)O4, and zincite (Al,Zn,Fe)O primary phase fields have been measured. Corundum, hematite, and Al2O3*Fe2O3 phases dissolve less than 1 mol pct zinc oxide. The limiting compositions of the Al2O3*Fe2O3 phase measured in this study at 1400 °C are slightly nonstoichiometric, containing more Al2O3 than previously reported. Spinel forms an extensive solid solution in the Al2O3-“Fe2O3”-ZnO system in air with increasing temperature. Zincite was found to dissolve up to 7 mol pct of aluminum in the presence of iron at 1550 °C in air. A metastable Al2O3-rich phase of the approximate composition Al8FeZnO14+x was observed at all of the conditions investigated. Aluminum dissolved in the zincite in the presence of iron appears to suppress the transformation from a round to a platelike morphology.
Resumo:
The phase equilibria in the Fe-Mg-Zn-O system in the temperature range 1100 °C to 1550 °C in air have been experimentally studied using equilibration and quenching followed by electron probe X-ray microanalysis. The compositions of condensed phases in equilibrium in the binary MgO-ZnO system and the ternary Fe-Mg-O system have been reported at sub-solidus temperatures in air. Pseudo-ternary sections of the quaternary Fe-Mg-Zn-O system at 1100 °C, 1250 °C and 1400 °C in air were constructed using the experimental data. The solid solutions of iron oxide, MgO and ZnO in the periclase (Mg,Zn,Fe)O, spinel (Mg2+,Fe2+,Zn2+)x(Fe3+)2+yO4 and zincite (Zn,Mg,Fe)O phases were found to be extensive under the conditions investigated. A continuous spinel solid solution is formed between the magnesioferrite (Mg2+,Fe2+)x(Fe3+)2+yO4 and franklinite (Zn2+,Fe2+)x(Fe3+)2+yO4 end-members at 1100 °C and 1250 °C, extending to magnetite (Fe2+)x(Fe3+)2+yO4 at 1400 °C in air. The compositions along the spinel boundaries were found to be non-stoichiometric, the magnitude of the non-stoichiometry being a function of composition and temperature in air. It was found that hematite dissolves neither MgO nor ZnO in air.
Resumo:
The main aim of the research project "On the Contribution of Schools to Children's Overall Indoor Air Exposure" is to study associations between adverse health effects, namely allergy, asthma, and respiratory symptoms, and the indoor air pollutants to which children are exposed in primary schools and homes. Specifically, this investigation reports on the design of the study and the methods used for data collection within the research project, and discusses factors that need to be considered when designing such a study. Further, preliminary findings concerning descriptors of selected characteristics in schools and homes, the study population, and the clinical examination are presented. The research project was designed in two phases. In the first phase, 20 public primary schools were selected and a detailed inspection and indoor air quality (IAQ) measurements, including volatile organic compounds (VOC), aldehydes, particulate matter (PM2.5, PM10), carbon dioxide (CO2), carbon monoxide (CO), bacteria, fungi, temperature, and relative humidity, were conducted. A questionnaire survey of 1600 children aged 8-9 years was undertaken, and a lung function test, exhaled nitric oxide (eNO) measurement, and tear film stability testing were performed. The questionnaire focused on the children's health and on the environment in their schools and homes. One thousand and ninety-nine questionnaires were returned. In the second phase, a subsample of 68 children was enrolled for further studies, including a walk-through inspection with a checklist and an extensive set of IAQ measurements in their homes. The acquired data are relevant for assessing children's environmental exposures and health status.