999 results for AIR-SHOWER ARRAY


Relevance: 40.00%

Abstract:

γ-ray astronomy studies the most energetic particles that reach the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or possibly by dark-matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes occurring in them and to develop theoretical models that describe their workings faithfully. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the surface (otherwise, the Earth would be uninhabitable). There are therefore only two ways to observe γ rays: flying detectors on board satellites, or observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate more secondary particles, each time less energetic. While they still have enough energy to travel faster than the speed of light in air, these particles emit a bluish glow, known as Cherenkov radiation, for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect the Cherenkov radiation and even image the shape of the Cherenkov shower. From these images the main characteristics of the original γ ray can be inferred, and with enough γ rays important characteristics of the emitting object, hundreds of light-years away, can be deduced.
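The cascade of pair production and energy splitting described above can be sketched with the classic Heitler toy model, purely for intuition: the primary energy is halved every radiation length until it falls below the critical energy of air. The critical energy (~85 MeV) and radiation length (~37 g/cm²) are standard values for air; the model itself is a gross simplification of a real shower simulation.

```python
import math

def heitler_shower(e0_gev, ec_gev=0.085, x0=37.0):
    """Toy Heitler model of an electromagnetic cascade (a sketch only).
    The primary energy e0 is halved at every radiation length x0 until
    it drops below the critical energy ec, where multiplication stops."""
    n_gen = math.log2(e0_gev / ec_gev)   # number of splitting generations
    n_max = e0_gev / ec_gev              # particle count at shower maximum
    x_max = x0 * n_gen                   # depth of shower maximum [g/cm^2]
    return n_max, x_max

# A 100 GeV photon versus a 10 TeV photon: the richer, deeper shower of
# the energetic primary is what large mirrors vs. large arrays trade off.
print(heitler_shower(100.0))
print(heitler_shower(10000.0))
```

The linear growth of the particle count with primary energy is why low-energy showers yield so little Cherenkov light.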
Detecting Cherenkov showers produced by γ rays is, however, far from easy. Showers generated by low-energy γ photons emit few photons, and for only a few nanoseconds, while those from high-energy γ rays, though richer in electrons and longer-lived, become increasingly rare as the energy grows. This leads to two development lines for Cherenkov telescopes: observing low-energy showers requires large reflectors that collect many of the few photons these showers produce; high-energy showers, by contrast, can be detected with small telescopes, but it pays to cover a large area on the ground with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. The project, in which more than 27 countries participate, intends to build one observatory in each hemisphere, each equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). Such an array achieves two goals. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in every energy range. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analysed far more precisely thanks to stereoscopic techniques. This thesis gathers several technical developments contributed to the medium and large telescopes of CTA, specifically to the trigger system.
Because Cherenkov showers are so brief, the systems that digitize and read out the data of each pixel must run at very high frequencies (≈1 GHz), which makes continuous operation unfeasible: the amount of stored data would be unmanageable. Instead, the analogue signals are sampled, and the analogue samples are kept in a circular buffer a few µs deep. While the signals sit in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be saved, or can be ignored, allowing the buffer to be overwritten. The decision rests on the fact that Cherenkov showers produce photon detections in neighbouring pixels at very close times, unlike the randomly arriving photons of the NSB (night-sky background). To detect large showers it is enough to check that more than a certain number of pixels in a region have each detected more than a certain number of photons within a window of a few nanoseconds. To detect small showers, however, it is better to take into account how many photons were detected in each pixel (a technique known as sum-trigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it sums analogically the signals received by the pixels of a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The design allows trigger regions of selectable size, 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any excess of light in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse.
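A minimal digital sketch of the sum-trigger decision just described (region layout and threshold are illustrative; the real system performs the sum in analogue electronics):

```python
def sum_trigger(samples, regions, threshold_pe):
    """samples: dict pixel-id -> amplitude in photoelectrons for one
    time slice.  regions: list of overlapping pixel-id tuples (14, 21
    or 28 pixels each).  Returns the regions whose summed signal
    exceeds the threshold, i.e. the ones that would fire a trigger."""
    fired = []
    for region in regions:
        total = sum(samples.get(p, 0.0) for p in region)
        if total > threshold_pe:
            fired.append(region)
    return fired

# Two overlapping 14-pixel regions; faint light spread over the first.
regions = [tuple(range(0, 14)), tuple(range(7, 21))]
samples = {p: 2.0 for p in range(0, 14)}   # 2 pe per pixel, 28 pe summed
print(sum_trigger(samples, regions, threshold_pe=25.0))
```

Note how a diffuse 2-pe-per-pixel signal, invisible to a per-pixel majority trigger, still fires the summed region.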
In the most basic version of the trigger system, this pulse is distributed across the whole camera through a delicate distribution system, so that all clusters are read out at the same time regardless of their position in the camera. The trigger system thus saves a complete camera image every time the photon threshold is exceeded in a trigger region. This way of operating, however, has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels containing no information at all are stored; with as many telescopes as CTA will have, the amount of useless information stored for this reason can be very considerable. Second, each trigger saves only a few nanoseconds around the trigger instant, while large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is an event in the camera; if so, only the trigger regions exceeding the low threshold are read out, and for a longer time. Information from empty pixels is thus not stored, and the still images of the showers can become short "videos" representing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sum-trigger schemes such as the one presented in this thesis is that, for the signals coming from each pixel to be summed correctly, they must all take the same time to reach the adder.
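The two-threshold COLIBRI readout logic can be sketched as follows (a simplified model; the thresholds and region labels are illustrative placeholders):

```python
def colibri_readout(region_sums, high_thr, low_thr):
    """region_sums: dict region-id -> summed signal in photoelectrons.
    If any region passes the high threshold, return the ids of all
    regions above the low threshold (the ones read out for a longer
    'video'); otherwise return an empty list (no readout at all)."""
    if max(region_sums.values(), default=0.0) <= high_thr:
        return []
    return [r for r, s in region_sums.items() if s > low_thr]

# Region A carries the shower core, B its faint halo, C only noise.
sums = {"A": 40.0, "B": 12.0, "C": 3.0}
print(colibri_readout(sums, high_thr=30.0, low_thr=10.0))  # ['A', 'B']
```

Only A and B are stored: the high threshold gates the event, the low threshold selects which pixels are worth keeping.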
The photomultipliers used in each pixel introduce different delays, which must be compensated for the sums to be performed correctly. The effect of these delays has been studied, and a system to compensate them has been developed. Finally, the next level of the trigger chain for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or nearly simultaneous) triggers in neighbouring telescopes. This function, together with other inter-system interfacing tasks, is provided by a system named the Trigger Interface Board (TIB). It consists of a module mounted in the camera of each LST or MST and connected by optical fibres to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so that each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and those of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are searched for; if the trigger condition is fulfilled, the camera in question is read out in synchrony with the local trigger. Although the whole trigger system is the fruit of a collaboration between several groups, principally IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author was the principal engineer. For this reason, abundant technical information about these systems is included in this thesis.
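The stereo coincidence search performed by a TIB-like module can be sketched in a few lines (the telescope names, delays and coincidence window below are illustrative placeholders, not CTA values):

```python
def stereo_coincidence(local_t, neighbour_times, known_delays, window_ns):
    """local_t: local trigger time [ns].  neighbour_times: dict
    telescope -> raw arrival time of its trigger signal.  known_delays:
    dict telescope -> fibre/geometry delay to subtract.  Returns the
    neighbours whose delay-corrected triggers fall inside the window."""
    hits = []
    for tel, t in neighbour_times.items():
        corrected = t - known_delays.get(tel, 0.0)
        if abs(corrected - local_t) <= window_ns:
            hits.append(tel)
    return hits

neigh = {"MST-2": 1520.0, "MST-3": 1900.0}
delays = {"MST-2": 510.0, "MST-3": 505.0}
print(stereo_coincidence(1000.0, neigh, delays, window_ns=50.0))
```

Subtracting the known propagation delays before comparing times is exactly what lets a nanosecond-scale window reject accidental NSB coincidences.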
Important future development lines exist for both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger); they will yield interesting improvements on the current designs over the coming years and will, hopefully, benefit the whole scientific community participating in CTA.

Relevance: 40.00%

Abstract:

An accurate knowledge of the fluorescence yield and its dependence on atmospheric properties such as pressure, temperature or humidity is essential to obtain a reliable measurement of the primary energy of cosmic rays in experiments using the fluorescence technique. In this work, several sets of fluorescence yield data (i.e. absolute value and quenching parameters) are described and compared. A simple procedure to study the effect of the assumed fluorescence yield on the reconstructed shower parameters (energy and shower maximum depth) as a function of the primary features has been developed. As an application, the effect of water vapor and temperature dependence of the collisional cross section on the fluorescence yield and its impact on the reconstruction of primary energy and shower maximum depth has been studied. Published by Elsevier B.V.
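The pressure and temperature dependence of the fluorescence yield is commonly parameterized with a collisional-quenching factor; a sketch of that standard form with placeholder parameter values (y0 and the reference quenching pressure are assumptions for illustration, not the data sets compared in the paper):

```python
import math

def fluorescence_yield(p_hpa, t_k, y0=5.0, p_ref=15.0, t_ref=293.0):
    """Quenching-corrected yield Y = Y0 / (1 + p / p'(T)), with
    p'(T) = p_ref * sqrt(T / t_ref) when the collisional cross section
    is taken as temperature independent.  y0 [photons/MeV] and p_ref
    [hPa] are illustrative placeholders."""
    p_prime = p_ref * math.sqrt(t_k / t_ref)
    return y0 / (1.0 + p_hpa / p_prime)

# The yield rises with altitude as pressure (and hence quenching) drops.
print(fluorescence_yield(1013.0, 288.0))  # near sea level
print(fluorescence_yield(550.0, 250.0))   # roughly 5 km altitude
```

A temperature-dependent collisional cross section, or humidity, would modify p'(T), which is precisely the effect on reconstructed energy the paper studies.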

Relevance: 30.00%

Abstract:

Metal oxide semiconductor (MOS) sensors are a class of chemical sensor that have potential for being a practical core sensor module for an electronic nose system in various environmental monitoring applications. However, the responses of these sensors may be affected by changes in humidity and this must be taken into consideration when developing calibration models. This paper characterises the humidity dependence of a sensor array which consists of 12 MOS sensors. The results were used to develop calibration models using partial least squares. Effects of humidity on the response of the sensor array and predictive ability of partial least squares are discussed. It is shown that partial least squares can provide proper calibration models to compensate for effects caused by changes in humidity.
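The kind of humidity compensation such calibration models provide can be illustrated with ordinary least squares in place of the paper's partial-least-squares models (the sensor readings below are synthetic, and the linear-drift assumption is for illustration only):

```python
def fit_line(x, y):
    """Closed-form ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Synthetic calibration data: sensor drift is linear in relative humidity.
humidity = [20.0, 35.0, 50.0, 65.0, 80.0]
response = [1.0 + 0.02 * h for h in humidity]   # baseline 1.0 plus drift

a, b = fit_line(humidity, response)
corrected = [r - a * h for r, h in zip(response, humidity)]
print(corrected)  # roughly constant once the humidity drift is removed
```

PLS extends this idea to the full 12-channel array, regressing all sensor responses jointly against humidity and analyte concentration.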

Relevance: 30.00%

Abstract:

A commercial non-specific gas sensor array system was evaluated in terms of its capability to monitor the odour abatement performance of a biofiltration system developed for treating emissions from a commercial piggery building. The biofiltration system was a modular system comprising an inlet ducting system, humidifier and closed-bed biofilter. It also included a gravimetric moisture monitoring and water application system for precise control of moisture content of an organic woodchip medium. Principal component analysis (PCA) of the sensor array measurements indicated that the biofilter outlet air was significantly different to both inlet air of the system and post-humidifier air. Data pre-processing techniques including normalising and outlier handling were applied to improve the odour discrimination performance of the non-specific gas sensor array. To develop an odour quantification model using the sensor array responses of the non-specific sensor array, PCA regression, artificial neural network (ANN) and partial least squares (PLS) modelling techniques were applied. The correlation coefficient (r^2) values of the PCA, ANN, and PLS models were 0.44, 0.62 and 0.79, respectively.
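The PCA step that separates the air classes can be illustrated with a closed-form two-dimensional example (synthetic readings; the real system has 12 sensor channels and uses full PCA on all of them):

```python
import math

def first_pc_2d(data):
    """First principal component of 2-D points via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / n
    syy = sum((p[1] - my) ** 2 for p in data) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    v = (sxy, lam - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(*v)
    return v[0] / norm, v[1] / norm

# Two strongly correlated sensor channels: the first PC captures the
# shared direction along which the air samples separate.
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(first_pc_2d(pts))  # close to (0.707, 0.707)
```

Projecting each air sample onto this component is what makes inlet, post-humidifier and outlet air visibly distinct clusters.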

Relevance: 30.00%

Abstract:

In this thesis, dry chemical modification methods involving UV/ozone, oxygen plasma, and vacuum annealing treatments are explored to precisely control the wettability of CNT arrays. By varying the exposure time of these treatments, the surface concentration of oxygenated groups adsorbed on the CNT arrays can be controlled. CNT arrays with a very low amount of oxygenated groups exhibit a superhydrophobic behavior. In addition to their extremely high static contact angle, they cannot be dispersed in DI water and their impedance in aqueous electrolytes is extremely high. These arrays have an extreme water repellency capability such that a water droplet will bounce off their surface upon impact, and a thin film of air forms on their surface when they are immersed in a deep pool of water. In contrast, CNT arrays with a very high surface concentration of oxygenated functional groups exhibit an extremely hydrophilic behavior. In addition to their extremely low static contact angle, they can be dispersed easily in DI water and their impedance in aqueous electrolytes is extremely low. Since the bulk structure of the CNT arrays is preserved during the UV/ozone, oxygen plasma, and vacuum annealing treatments, all CNT arrays can be repeatedly switched between superhydrophilic and superhydrophobic, as long as their O/C ratio is kept below 18%.

The effect of oxidation using UV/ozone and oxygen plasma treatments is highly reversible as long as the O/C ratio of the CNT arrays is kept below 18%. At O/C ratios higher than 18%, the effect of oxidation is no longer reversible. This irreversible oxidation is caused by irreversible changes to the CNT atomic structure during the oxidation process. During the oxidation process, CNT arrays undergo three different processes. For CNT arrays with O/C ratios lower than 40%, the oxidation process results in the functionalization of CNT outer walls by oxygenated groups. Although this functionalization process introduces defects, vacancies and micropores opening, the graphitic structure of the CNT is still largely intact. For CNT arrays with O/C ratios between 40% and 45%, the oxidation process results in the etching of CNT outer walls. This etching process introduces large scale defects and holes that can be obviously seen under TEM at high magnification. Most of these holes are found to be several layers deep and, in some cases, a large portion of the CNT side walls are cut open. For CNT arrays with O/C ratios higher than 45%, the oxidation process results in the exfoliation of the CNT walls and amorphization of the remaining CNT structure. This amorphization process can be implied from the disappearance of C-C sp2 peak in the XPS spectra associated with the pi-bond network.

The impact behavior of water droplets impinging on superhydrophobic CNT arrays in a low viscosity regime is investigated for the first time. Here, the experimental data are presented in the form of several important impact behavior characteristics including critical Weber number, volume ratio, restitution coefficient, and maximum spreading diameter. As observed experimentally, three different impact regimes are identified while another impact regime is proposed. These regimes are partitioned by three critical Weber numbers, two of which are experimentally observed. The volume ratio between the primary and the secondary droplets is found to decrease with the increase of Weber number in all impact regimes other than the first one. In the first impact regime, this ratio is found to be independent of Weber number since the droplet remains intact during and subsequent to the impingement. Experimental data show that the coefficient of restitution decreases with the increase of Weber number in all impact regimes. The rate of decrease of the coefficient of restitution in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Experimental data also show that the maximum spreading factor increases with the increase of Weber number in all impact regimes. The rate of increase of the maximum spreading factor in the high Weber number regime is found to be higher than that in the low and moderate Weber number regimes. Phenomenological approximations and interpretations of the experimental data, as well as brief comparisons to the previously proposed scaling laws, are shown here.
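The Weber number that organizes these impact regimes compares inertial to surface-tension forces; a quick sketch for water droplets (the fluid properties are standard textbook values, and the velocities are illustrative, not the critical values measured in the thesis):

```python
def weber_number(rho, velocity, diameter, sigma):
    """We = rho * v^2 * D / sigma (dimensionless): droplet inertia
    relative to the surface tension holding it together."""
    return rho * velocity ** 2 * diameter / sigma

# 2 mm water droplet: rho = 998 kg/m^3, sigma = 0.0728 N/m at ~20 C.
for v in (0.3, 1.0, 3.0):
    we = weber_number(998.0, v, 2e-3, 0.0728)
    print(f"v = {v} m/s -> We = {we:.1f}")
```

Because We grows with the square of the impact velocity, modest speed changes move a droplet across regime boundaries quickly.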

Dry oxidation methods are used for the first time to characterize the influence of oxidation on the capacitive behavior of CNT array EDLCs. The capacitive behavior of CNT array EDLCs can be tailored by varying their oxygen content, represented by their O/C ratio. The specific capacitance of these CNT arrays increases with the increase of their oxygen content in both KOH and Et4NBF4/PC electrolytes. As a result, their gravimetric energy density increases with the increase of their oxygen content. However, their gravimetric power density decreases with the increase of their oxygen content. The optimally oxidized CNT arrays are able to withstand more than 35,000 charge/discharge cycles in Et4NBF4/PC at a current density of 5 A/g while only losing 10% of their original capacitance.
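The gravimetric figures of merit mentioned above follow from the standard supercapacitor relations; a sketch with illustrative numbers (the 50 F/g specific capacitance is a placeholder, not the thesis measurement):

```python
def edlc_metrics(c_spec_f_per_g, v_max, esr_ohm_g=None):
    """Gravimetric energy density E = 0.5 * C * V^2, converted from
    J/g to Wh/kg; maximum power density P = V^2 / (4 * ESR) when the
    equivalent series resistance is known, else None."""
    e_wh_per_kg = 0.5 * c_spec_f_per_g * v_max ** 2 * 1000.0 / 3600.0
    p = v_max ** 2 / (4.0 * esr_ohm_g) if esr_ohm_g else None
    return e_wh_per_kg, p

# The organic electrolyte (Et4NBF4/PC) permits a ~2.7 V window, which
# is why it yields higher energy density than aqueous KOH (~1 V).
print(edlc_metrics(50.0, 2.7))
```

The quadratic dependence on voltage is the reason the electrolyte choice matters as much as the oxidation level of the CNT electrodes.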

Relevance: 30.00%

Abstract:

Direct air-sea flux measurements were made on RN Kexue #1 at 40 degrees S, 156 degrees E during the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) Intensive Observation Period (IOP). An array of six accelerometers was used to measure the motion of the anchored ship, and a sonic anemometer and Lyman-alpha hygrometer were used to measure the turbulent wind vector and specific humidity. The contamination of the turbulent wind components by ship motion was largely removed by an improvement of a procedure due to Shao based on the acceleration signals. The scheme of the wind correction for ship motion is briefly outlined. Results are presented from data for the best wind direction relative to the ship, chosen to minimize flow distortion effects. Both the time series and the power spectra of the sonic-measured wind components show swell-induced ship motion contamination, which is largely removed by the accelerometer correction scheme. There was less contamination in the longitudinal wind component than in the vertical and transverse components. The spectral characteristics of the surface-layer turbulence properties are compared with those from previous land and ocean results. Momentum and latent heat fluxes were calculated by eddy correlation and compared to those estimated by the inertial dissipation method and the TOGA COARE bulk formula. The estimates of wind stress determined by eddy correlation are smaller than those from the TOGA COARE bulk formula, especially at higher wind speeds, while those from the bulk formula and inertial dissipation technique are generally in agreement. The estimates of latent heat flux from the three different methods are in reasonable agreement. The effect of the correction for ship motion on latent heat fluxes is not as large as on momentum fluxes.
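The eddy-correlation fluxes compared above are covariances between the (motion-corrected) vertical wind and a transported scalar; a minimal sketch with synthetic series:

```python
def eddy_flux(w, c):
    """Kinematic eddy-covariance flux <w'c'>: the mean product of the
    fluctuations of vertical wind w and scalar c about their means."""
    n = len(w)
    wm = sum(w) / n
    cm = sum(c) / n
    return sum((wi - wm) * (ci - cm) for wi, ci in zip(w, c)) / n

# Toy series: updrafts (w' > 0) systematically carry moister air,
# which is what an upward latent heat flux looks like in the data.
w = [0.5, -0.4, 0.6, -0.7, 0.3, -0.3]   # vertical wind [m/s]
q = [8.2, 7.9, 8.3, 7.8, 8.1, 8.0]      # specific humidity [g/kg]
print(eddy_flux(w, q))  # positive -> upward moisture flux
```

Uncorrected ship motion adds a spurious component to w, which is why the accelerometer correction changes the momentum flux far more than the scalar fluxes.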

Relevance: 30.00%

Abstract:

This paper presents the design and analysis of a novel machine family, the enclosed-rotor Halbach-array permanent-magnet brushless dc motors, for spacecraft applications. The initial design, selection of major parameters, and air-gap magnetic flux density are estimated using the analytical model of the machine. The proportion of the Halbach array in the machine is optimized using finite element analysis to obtain a near-trapezoidal flux pattern. The machine is found to provide uniform air-gap flux density along the radius, thus avoiding circulating currents in stator conductors and thereby reducing torque ripple. Furthermore, the design is validated with experimental results on a fabricated machine and is found to suit the design requirements of critical spacecraft applications.

Relevance: 30.00%

Abstract:

Spatial and temporal fluctuations in the concentration field from an ensemble of continuous point-source releases in a regular building array are analyzed from data generated by direct numerical simulations. The release is of a passive scalar under conditions of neutral stability. Results are related to the underlying flow structure by contrasting data for an imposed wind direction of 0 deg and 45 deg relative to the buildings. Furthermore, the effects of distance from the source and vicinity to the plume centreline on the spatial and temporal variability are documented. The general picture that emerges is that this particular geometry splits the flow domain into segments (e.g. “streets” and “intersections”) in each of which the air is, to a first approximation, well mixed. Notable exceptions to this general rule include regions close to the source, near the plume edge, and in unobstructed channels when the flow is aligned. In the oblique (45 deg) case the strongly three-dimensional nature of the flow enhances mixing of a scalar within the canopy leading to reduced temporal and spatial concentration fluctuations within the plume core. These fluctuations are in general larger for the parallel flow (0 deg) case, especially so in the long unobstructed channels. Due to the more complex flow structure in the canyon-type streets behind buildings, fluctuations are lower than in the open channels, though still substantially larger than for oblique flow. These results are relevant to the formulation of simple models for dispersion in urban areas and to the quantification of the uncertainties in their predictions.
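The variability discussed above is usually quantified by the concentration fluctuation intensity, the ratio of the standard deviation to the mean; a minimal sketch with synthetic samples (the two series are invented to mimic a well-mixed street segment versus an intermittent plume edge):

```python
import math

def fluctuation_intensity(c):
    """sigma_c / <c> for a series of concentration samples."""
    n = len(c)
    mean = sum(c) / n
    var = sum((x - mean) ** 2 for x in c) / n
    return math.sqrt(var) / mean

well_mixed = [1.0, 1.1, 0.9, 1.05, 0.95]   # inside a "street" segment
plume_edge = [0.1, 2.0, 0.05, 1.5, 0.2]    # intermittent, near the edge
print(fluctuation_intensity(well_mixed))
print(fluctuation_intensity(plume_edge))
```

Low intensity in the canopy segments versus high intensity at the plume edge is the pattern the DNS data exhibit, and it is what simple well-mixed-segment dispersion models exploit.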

Relevance: 30.00%

Abstract:

The surface detector array of the Pierre Auger Observatory consists of 1600 water-Cherenkov detectors, for the study of extensive air showers (EAS) generated by ultra-high-energy cosmic rays. We describe the trigger hierarchy, from the identification of candidate showers at the level of a single detector, amongst a large background (mainly random single cosmic ray muons), up to the selection of real events and the rejection of random coincidences. This trigger hierarchy makes the surface detector array fully efficient for the detection of EAS with energy above 3 × 10^18 eV, for all zenith angles between 0 degrees and 60 degrees, independently of the position of the impact point and of the mass of the primary particle. In this range of energies and angles, the exposure of the surface array can be determined purely on the basis of the geometrical acceptance. (C) 2009 Elsevier B.V. All rights reserved.
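When the array is fully efficient, the exposure reduces to geometry; a sketch of the flat-detector aperture integral (the ~3000 km² array area is an approximate figure used for illustration):

```python
import math

def geometric_aperture(area_km2, theta_max_deg):
    """Aperture of a flat detector array over zenith angles 0..theta_max:
    A * integral(cos(theta) dOmega) = A * pi * sin(theta_max)^2,
    in km^2 sr.  Multiply by livetime to get the exposure."""
    return area_km2 * math.pi * math.sin(math.radians(theta_max_deg)) ** 2

# Roughly 3000 km^2 of array, zenith angles up to 60 degrees.
print(geometric_aperture(3000.0, 60.0))
```

The cos(theta) factor accounts for the projected area seen by inclined showers; at 60 degrees the integral saturates at three quarters of the full 2π-sr value.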

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Atmospheric parameters, such as pressure (P), temperature (T) and density (ρ ∝ P/T), affect the development of extensive air showers initiated by energetic cosmic rays. We have studied the impact of atmospheric variations on extensive air showers by means of the surface detector of the Pierre Auger Observatory. The rate of events shows a ~10% seasonal modulation and a ~2% diurnal one. We find that the observed behaviour is explained by a model including the effects associated with the variations of P and ρ. The former affects the longitudinal development of air showers, while the latter influences the Molière radius and hence the lateral distribution of the shower particles. The model is validated with full simulations of extensive air showers using atmospheric profiles measured at the site of the Pierre Auger Observatory. (C) 2009 Elsevier B.V. All rights reserved.
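To first order, a model of this kind can be written as a reference rate with linear corrections in the pressure and density deviations. A hedged sketch (the function name and all coefficient values are illustrative placeholders, not the fitted Auger coefficients):

```python
def modulated_rate(r0, a_p, pressure, p_ref, a_rho, density, rho_ref):
    """First-order model of the event rate: linear response to deviations
    of ground pressure (affecting longitudinal development) and air
    density (affecting the Moliere radius, hence the lateral spread)
    from their reference values.
    """
    return r0 * (1.0 + a_p * (pressure - p_ref) + a_rho * (density - rho_ref))

# at the reference atmosphere the model returns the reference rate
print(modulated_rate(100.0, -0.005, 860.0, 860.0, -0.5, 1.05, 1.05))  # 100.0
```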

Relevância:

30.00% 30.00%

Publicador:

Resumo:

From direct observations of the longitudinal development of ultra-high-energy air showers performed with the Pierre Auger Observatory, upper limits of 3.8%, 2.4%, 3.5% and 11.7% (at 95% c.l.) are obtained on the fraction of cosmic-ray photons above 2, 3, 5 and 10 EeV (1 EeV ≡ 10^18 eV), respectively. These are the first experimental limits on ultra-high-energy photons at energies below 10 EeV. The results complement previous constraints on top-down models from array data, and they reduce systematic uncertainties in the interpretation of shower data in terms of primary flux, nuclear composition and proton-air cross-section. (C) 2009 Elsevier B.V. All rights reserved.
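As a purely statistical illustration of how such a limit arises (not the analysis actually used, which folds in selection efficiencies and composition systematics): when zero photon candidates are found among n observed showers, the classical 95% c.l. upper limit on the photon fraction is 1 − 0.05^(1/n).

```python
def fraction_upper_limit_zero_candidates(n_showers, cl=0.95):
    """Classical (Clopper-Pearson) upper limit on a binomial fraction
    when zero candidates are observed among n_showers trials.
    """
    if n_showers < 1:
        raise ValueError("need at least one shower")
    return 1.0 - (1.0 - cl) ** (1.0 / n_showers)

# e.g. zero photon candidates among 100 showers limits the fraction to ~3%
print(fraction_upper_limit_zero_candidates(100))
```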

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The Pierre Auger Observatory is a hybrid detector for ultra-high-energy cosmic rays. It combines a surface array to measure secondary particles at ground level together with a fluorescence detector to measure the development of air showers in the atmosphere above the array. The fluorescence detector comprises 24 large telescopes specialized for measuring the nitrogen fluorescence caused by charged particles of cosmic ray air showers. In this paper we describe the components of the fluorescence detector including its optical system, the design of the camera, the electronics, and the systems for relative and absolute calibration. We also discuss the operation and the monitoring of the detector. Finally, we evaluate the detector performance and precision of shower reconstructions. (C) 2010 Elsevier B.V. All rights reserved.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The paper analyses empirical performance data of five commercial PV plants in Germany. The purpose was, on the one hand, to investigate the weak-light performance of the different PV modules used and, on the other, to quantify and compare the shading losses of different PV-array configurations. The importance of this study lies in the fact that even if the behaviour under weak-light conditions or the shading losses might seem a relatively small percentage of the total yearly output, in projects where a performance guarantee is given these variations can make the difference between meeting the guaranteed conditions or not.

When analyzing the data, a high dispersion was found. To reduce the optical losses and spectral effects, a series of data filters were applied based on the angle of incidence and absolute Air Mass. To compensate for the temperature effects and translate the values to STC (25°C), five different methods were assessed. In the end, Procedure 2 of IEC 60891 was selected due to its relative simplicity, its use of mostly standard parameters found in datasheets, its good accuracy even with missing values, and its potential to improve the results when the complete set of inputs is available.

After analyzing the data, the weak-light performance of the modules did not show a clear superiority of any one technology or technology group over the others. Moreover, the uncertainties in the measurements restrict the conclusiveness of the results.

In the partial-shading analysis, the landscape mounting of mc-Si PV modules in free-field showed a significantly better performance than the portrait one. The cross-table string using CIGS modules did not prove the expected benefits and actually performed worse than a regular one-string-per-table layout. Parallel substrings with CdTe functioned properly and showed relatively low losses. Of the two product generations of CdTe analyzed, neither showed a significantly better performance under partial shading.
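The temperature-translation step can be illustrated with the common single-coefficient power correction. This is a simplification for illustration only: IEC 60891 Procedure 2 translates current and voltage separately with relative coefficients, and the gamma value below is a typical crystalline-silicon power coefficient assumed here, not a figure from the paper.

```python
def power_to_stc(p_measured_w, t_cell_c, gamma_per_c=-0.004, t_stc_c=25.0):
    """Translate a measured DC power to STC cell temperature (25 degC)
    using a single relative power-temperature coefficient gamma (1/degC).

    Model: P_measured = P_stc * (1 + gamma * (T_cell - 25)),
    so P_stc = P_measured / (1 + gamma * (T_cell - 25)).
    """
    return p_measured_w / (1.0 + gamma_per_c * (t_cell_c - t_stc_c))

# a module measured hot at 50 degC reads low; translation recovers the STC value
print(power_to_stc(95.0, 50.0))
```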

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis focuses on using photovoltaic-produced electricity to power air conditioners in a tropical climate. The study takes place in Surabaya, Indonesia, at two different locations: the classroom, located on the UBAYA campus, and the home office, 10 km away. Indonesia has an average solar irradiation of about 4.8 kWh/m²/day (PWC Indonesia, 2013), which provides ideal conditions for these tests. At the home office, tests were conducted on different photovoltaic systems. A series of measuring devices recorded the performance of the 800 W PV system and the consumption of the 1.35 kW (cooling capacity) air conditioner. For an off-grid system, many of the components need to be oversized. The inverter has to be oversized to meet the startup load of the air conditioner, which can be 3 to 8 times the operating power (Rozenblat, 2013). The high energy consumption of the air conditioner would require a large battery storage to provide one day of autonomy, and the PV system's output must at least match the consumption of the air conditioner. A grid-connected system provides a much better solution, with the 800 W PV system providing 80% of the 3.5 kWh load of the air conditioner, the other 20% coming from the grid during periods of low irradiation. In this system the startup load is provided by the grid, so the inverter does not need to be oversized. With the grid-connected system, the PV panels' production does not need to match the consumption of the air conditioner, although a smaller PV array will mean a smaller percentage of the load covered by PV. Using the results from the home office tests and from measurements made in the classroom, two different PV systems (8 kW and 12 kW) were simulated to power both the current air conditioners (COP 2.78) and new air conditioners (COP 4.0). The payback period of the systems can vary greatly depending on whether a feed-in tariff is awarded.
If the feed-in tariff is awarded, the best system is the 12 kW system, with a payback period of 4.3 years and a levelized cost of energy at -3,334 IDR/kWh. If the feed-in tariff is not granted, then the 8 kW system is the best choice, with a lower payback period and lower levelized cost of energy than the 12 kW system under the same conditions.
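The sizing and payback logic above can be sketched with a first-order yield and payback estimate. The performance ratio of 0.75 and the monetary inputs in the test below are illustrative assumptions, not figures from the thesis; only the 4.8 kWh/m²/day irradiation and 800 W array size come from the text.

```python
def annual_pv_yield_kwh(pv_kw, peak_sun_hours_per_day, performance_ratio=0.75):
    """Rough annual AC yield: array peak power times daily peak-sun-hours
    (~4.8 for Surabaya) times a performance ratio, over 365 days.
    """
    return pv_kw * peak_sun_hours_per_day * performance_ratio * 365.0

def simple_payback_years(capex, annual_yield_kwh, value_per_kwh):
    """Undiscounted payback: investment over the yearly value of the
    energy produced (self-consumption savings plus any feed-in revenue).
    """
    return capex / (annual_yield_kwh * value_per_kwh)

# the 800 W test system at 4.8 peak-sun-hours yields roughly 1050 kWh/year
print(annual_pv_yield_kwh(0.8, 4.8))
```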

Relevância:

30.00% 30.00%

Publicador:

Resumo:

As population change places pressure on expanding regional and metropolitan urban boundaries, so the threat of bushfire at the rural/urban interface increases. This paper presents a range of 2D and 3D modelling investigations at 1:40 and full scale. Various relationships are explored between the urban and rural interface with respect to: air pressure; changes in wind pattern; vectorial velocity; and the deposition of hot ash and firebrand deposits around single-storey building forms, both standalone and within orthogonal-array and cul-de-sac arrangements.