956 results for Telescope Key Project


Relevance:

100.00%

Publisher:

Abstract:

A new age-redshift test is proposed in order to constrain H(0) on the basis of the existence of old high-redshift galaxies (OHRGs). In the flat Lambda cold dark matter model, the value of H(0) is heavily dependent on the mass density parameter Omega(M) = 1 - Omega(Lambda). Such a degeneracy can be broken through a joint analysis involving the OHRGs and the baryon acoustic oscillation signature. By assuming a galaxy incubation time t(inc) = 0.8 +/- 0.4 Gyr, our joint analysis yields H(0) = 71 +/- 4 km s(-1) Mpc(-1) (1 sigma) with best-fit density parameter Omega(M) = 0.27 +/- 0.03. Such results are in good agreement with independent studies from the Hubble Space Telescope Key Project and recent estimates from the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these two independent phenomena provides an interesting method to constrain the Hubble constant.
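The age-redshift test above rests on the requirement that no galaxy can be older than the Universe at its redshift. A minimal numerical sketch of the flat LambdaCDM age, assuming only the best-fit values quoted in the abstract (the integration scheme and step count are arbitrary choices, not the authors' method):

```python
import math

def age_at_z(z, H0=71.0, Om=0.27, n=20000):
    """Cosmic age at redshift z (in Gyr) for a flat LambdaCDM model.

    Integrates t = (1/H0) * int_0^a da' / (a' E(a')), with
    E(a) = sqrt(Om * a**-3 + (1 - Om)).
    """
    OL = 1.0 - Om
    a_max = 1.0 / (1.0 + z)
    da = a_max / n
    integral = 0.0
    for i in range(n):           # midpoint rule over the scale factor
        a = (i + 0.5) * da
        integral += da / math.sqrt(Om / a + OL * a * a)
    # 1/H0 in Gyr when H0 is in km/s/Mpc: 977.79 / H0
    return 977.79 / H0 * integral

# An old galaxy of age t_gal seen at redshift z, plus the incubation
# time t_inc, must satisfy t_gal + t_inc <= age_at_z(z); large H0
# shrinks the available age, which is what bounds H(0) from above.
t_universe = age_at_z(0.0)   # close to 13.7 Gyr for the best-fit values
```

Raising H0 lowers every age, so a single sufficiently old high-redshift galaxy already caps H0; the BAO data then break the residual degeneracy with Omega(M).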

Relevance:

100.00%

Publisher:

Abstract:

We present the first dynamical analysis of a galaxy cluster to include a large fraction of dwarf galaxies. Our sample of 108 Fornax Cluster members measured with the UK Schmidt Telescope FLAIR-II spectrograph contains 55 dwarf galaxies (15.5 < b(j) < 18.0, or -16 < M(B) < -13.5). H alpha emission shows that a substantial fraction of the dwarfs are star forming, twice the fraction implied by morphological classifications. The total sample has a mean velocity of 1493 +/- 36 km s(-1) and a velocity dispersion of 374 +/- 26 km s(-1). The dwarf galaxies form a distinct population: their velocity dispersion (429 +/- 41 km s(-1)) is larger than that of the giants at the 98% confidence level. This suggests that the dwarf population is dominated by infalling objects, whereas the giants are virialized. The Fornax system has two components: the main Fornax Cluster, centered on NGC 1399 with cz = 1478 km s(-1) and sigma(cz) = 370 km s(-1), and a subcluster centered 3 degrees to the southwest, including NGC 1316, with cz = 1583 km s(-1) and sigma(cz) = 377 km s(-1). This partition is preferred over a single cluster at the 99% confidence level. The subcluster, a site of intense star formation, is bound to Fornax and probably infalling toward the cluster core for the first time. We discuss the implications of this substructure for distance estimates of the Fornax Cluster. We determine the cluster mass profile using the method of Diaferio, which does not assume a virialized sample. The mass within a projected radius of 1.4 Mpc is (7 +/- 2) x 10(13) M(sun), and the mass-to-light ratio is 300 +/- 100 M(sun)/L(sun). The mass is consistent with values derived from the projected mass virial estimator and from X-ray measurements at smaller radii.
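The mean velocity and dispersion quoted above are simple sample statistics. A toy version, assuming the plain mean and N-1 standard deviation rather than the robust (e.g. biweight) estimators typically used for real cluster samples; the velocity list below is invented for illustration:

```python
import statistics

def cluster_kinematics(velocities):
    """Mean recession velocity and line-of-sight dispersion (km/s).

    Illustrative only: uses the plain sample mean and the sample
    standard deviation (N-1 normalisation).
    """
    mean = statistics.fmean(velocities)
    sigma = statistics.stdev(velocities)
    return mean, sigma

# Hypothetical member velocities scattered around the Fornax mean:
sample = [1100, 1250, 1400, 1500, 1550, 1700, 1850, 1590, 1310, 1680]
v_mean, v_disp = cluster_kinematics(sample)
```

Comparing the dispersion of the dwarf and giant subsamples computed this way (with proper error estimates) is what yields the 98% confidence statement in the abstract.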

Relevance:

100.00%

Publisher:

Abstract:

We present a new set of deep H I observations of member galaxies of the Fornax cluster. We detected 35 cluster galaxies in H I. The resulting sample, the most comprehensive to date, is used to investigate the distribution of neutral hydrogen in the cluster galaxies. We compare the H I content of the detected cluster galaxies with that of field galaxies by measuring H I mass-to-light ratios and the H I deficiency parameter of Solanes et al. (1996). The mean H I mass-to-light ratio of the cluster galaxies is 0.68 +/- 0.15, significantly lower than for a sample of H I-selected field galaxies (1.15 +/- 0.10), although not as low as in the Virgo cluster (0.45 +/- 0.03). In addition, the H I content of two cluster galaxies (NGC1316C and NGC1326B) appears to have been affected by interactions. The mean H I deficiency for the cluster is 0.38 +/- 0.09 (for galaxy types T = 1-6), significantly greater than for the field sample (0.05 +/- 0.03). Both these tests show that Fornax cluster galaxies are H I-deficient compared to field galaxies. The kinematics of the cluster galaxies suggests that the H I deficiency may be caused by ram-pressure stripping of galaxies on orbits that pass close to the cluster core. We also derive the most complete B-band Tully-Fisher relation of inclined spiral galaxies in Fornax. A subcluster in the South-West of the main cluster contributes considerably to the scatter. The scatter for galaxies in the main cluster alone is 0.50 mag, which is slightly larger than the intrinsic scatter of 0.4 mag. We use the Tully-Fisher relation to derive a distance modulus of Fornax relative to the Virgo cluster of -0.38 +/- 0.14 mag. The galaxies in the subcluster are (1.0 +/- 0.5) mag brighter than the galaxies of the main cluster, indicating that they are situated in the foreground. With their mean velocity 95 km s(-1) higher than that of the main cluster we conclude that the subcluster is falling into the main Fornax cluster.
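The deficiency parameter compares a galaxy's observed H I mass with the mass expected for a similar field galaxy, on a logarithmic scale. A sketch of just that comparison (the type- and size-dependent calibration of the expected mass from Solanes et al. is not reproduced here):

```python
import math

def hi_deficiency(m_hi_obs, m_hi_expected):
    """DEF = log10(expected H I mass) - log10(observed H I mass).

    DEF > 0 means the galaxy holds less H I than a comparable field
    galaxy; the Fornax mean of ~0.38 dex quoted above corresponds to
    a factor of ~2.4 shortfall.
    """
    return math.log10(m_hi_expected) - math.log10(m_hi_obs)

# A galaxy with half the expected H I mass is deficient by log10(2):
half = hi_deficiency(5e8, 1e9)      # ~0.30 dex
no_deficit = hi_deficiency(1e9, 1e9)
```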

Relevance:

100.00%

Publisher:

Abstract:

In accelerating dark energy models, estimates of the Hubble constant, H(0), from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Omega(M)), the curvature (Omega(K)) and the equation of state parameter omega. In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical beta model and obtained through the SZE/X-ray technique, we constrain H(0) in the framework of a general LambdaCDM model (arbitrary curvature) and of a flat XCDM model with a constant equation of state parameter omega = p(x)/rho(x). In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat LambdaCDM model H(0) = 74(-4.0)(+5.0) km s(-1) Mpc(-1) (1 sigma), whereas for a flat universe with constant equation of state parameter we find H(0) = 72(-4.0)(+5.5) km s(-1) Mpc(-1) (1 sigma). By assuming that galaxy clusters are described by a spherical beta model, these results change to H(0) = 6(-7.0)(+8.0) and H(0) = 59(-6.0)(+9.0) km s(-1) Mpc(-1) (1 sigma), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope Key Project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description proves to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat LambdaCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on the H(0) estimates for this combination of data.
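The SZE/X-ray technique yields absolute angular diameter distances, and D(A) scales as 1/H(0) at fixed density parameters, which is what lets the cluster sample constrain the Hubble constant. A sketch of the flat XCDM distance with a simple midpoint integration (an illustration, not the authors' pipeline):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def d_a_flat_xcdm(z, H0, Om=0.27, w=-1.0, n=4000):
    """Angular diameter distance (Mpc) in a flat XCDM cosmology.

    E(z) = sqrt(Om (1+z)^3 + (1-Om)(1+z)^(3(1+w))); w = -1 recovers
    flat LambdaCDM.
    """
    dz = z / n
    integral = 0.0
    for i in range(n):           # midpoint rule over redshift
        zi = (i + 0.5) * dz
        e = math.sqrt(Om * (1 + zi) ** 3
                      + (1 - Om) * (1 + zi) ** (3 * (1 + w)))
        integral += dz / e
    return C_KM_S / H0 * integral / (1 + z)

# D_A scales exactly as 1/H0 at fixed (Om, w):
ratio = d_a_flat_xcdm(0.3, 60.0) / d_a_flat_xcdm(0.3, 72.0)  # = 72/60
```

Fitting the measured SZE/X-ray distances against this prediction, with BAO and the CMB shift parameter pinning down Omega(M) and omega, isolates H(0).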

Relevance:

90.00%

Publisher:

Abstract:

A component of dark energy has been recently proposed to explain the current acceleration of the Universe. Unless some unknown symmetry in Nature prevents or suppresses it, such a field may interact with the pressureless component of dark matter, giving rise to the so-called models of coupled quintessence. In this paper we propose a new cosmological scenario where radiation and baryons are conserved, while the dark energy component is decaying into cold dark matter. The dilution of cold dark matter particles, attenuated with respect to the usual a(-3) scaling due to the interacting process, is characterized by a positive parameter epsilon, whereas the dark energy satisfies the equation of state p(x) = omega rho(x) (omega < 0). We carry out a joint statistical analysis involving recent observations from type Ia supernovae, baryon acoustic oscillation peak, and cosmic microwave background shift parameter to check the observational viability of the coupled quintessence scenario here proposed.
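The attenuated dilution of the dark matter described above can be written as rho(dm) proportional to a^(-3+epsilon). A one-line sketch of that scaling, with epsilon = 0 recovering the standard behaviour (the sample value of epsilon and the normalisation are illustrative, not fitted values):

```python
def dm_density(a, rho_dm0, eps=0.0):
    """Cold dark matter density with attenuated dilution,
    rho ~ a**(-3 + eps), normalised to rho_dm0 at a = 1.

    eps = 0 recovers the standard a**-3 scaling; eps > 0 means the
    decaying dark energy continuously feeds the dark matter, so its
    density falls off more slowly than a**-3.
    """
    return rho_dm0 * a ** (-3.0 + eps)

# Both normalised to the same present-day density, eps = 0.1 leaves
# about 7% less dark matter at a = 0.5 (z = 1) than the uncoupled
# scaling, because the coupled density dilutes more slowly:
attenuation = dm_density(0.5, 1.0, eps=0.1) / dm_density(0.5, 1.0)
```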

Relevance:

90.00%

Publisher:

Abstract:

Using data from the H I Parkes All Sky Survey (HIPASS), we have searched for neutral hydrogen in galaxies in a region of ~25x25 deg(2) centred on NGC 1399, the nominal centre of the Fornax cluster. Within a velocity search range of 300-3700 km s(-1) and to a 3 sigma lower flux limit of ~40 mJy, 110 galaxies with H I emission were detected, one of which is previously uncatalogued. None of the detections has an early-type morphology. Previously unknown velocities for 14 galaxies have been determined, with a further four velocity measurements being significantly dissimilar to published values. Identification of an optical counterpart is relatively unambiguous for more than ~90 per cent of our H I galaxies. The galaxies appear to be embedded in a sheet at the cluster velocity which extends for more than 30 degrees across the search area. At the nominal cluster distance of ~20 Mpc, this corresponds to an elongated structure more than 10 Mpc in extent. A velocity gradient across the structure is detected, with radial velocities increasing by ~500 km s(-1) from south-east to north-west. The clustering of galaxies evident in optical surveys is only weakly suggested in the spatial distribution of our H I detections. Of 62 H I detections within a 10 degree projected radius of the cluster centre, only two are within the core region (projected radius …).
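H I detections such as these are converted to masses with the standard optically thin 21-cm relation M(HI) = 2.356 x 10(5) D(2) S. A sketch, with the flux and distance values below chosen only to illustrate the rough sensitivity implied by a ~40 mJy limit at the Fornax distance:

```python
def hi_mass(d_mpc, s_int_jy_kms):
    """H I mass in solar masses from the standard relation
    M_HI = 2.356e5 * D**2 * S, with the distance D in Mpc and the
    integrated flux S in Jy km/s (optically thin 21-cm emission).
    """
    return 2.356e5 * d_mpc ** 2 * s_int_jy_kms

# A ~40 mJy peak over a ~100 km/s profile at ~20 Mpc corresponds to
# roughly 4e8 solar masses of neutral hydrogen:
m_lim = hi_mass(20.0, 0.040 * 100.0)
```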

Relevance:

90.00%

Publisher:

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or possibly by dark matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes occurring in them and to develop theoretical models that describe them faithfully. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the surface (otherwise the Earth would be uninhabitable). There are therefore only two ways to observe γ rays: by placing detectors on board satellites, or by observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate further secondary particles, each time less energetic. While these particles still have enough energy to travel faster than the speed of light in air, they produce a bluish glow known as Cherenkov radiation, lasting a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the shape of the Cherenkov shower.
From these images it is possible to determine the main parameters of the original γ ray, and with enough γ rays important characteristics of the emitting object, hundreds of light-years away, can be deduced. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although containing more photons and lasting longer, become increasingly unlikely with energy. This results in two clearly differentiated development lines for IACTs: to detect low-energy showers, big reflectors are required to collect as many as possible of the few photons that these showers produce; on the contrary, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered with them to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two goals will be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium- and large-size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈1 GHz).
This makes it unfeasible to read out data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the samples in a ring buffer able to hold up to a few µs of data. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary can be ignored, allowing the buffer to be overwritten. The decision to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at nearly the same time, in contrast to the random arrival of the NSB (night-sky background) photons. Checking whether more than a certain number of pixels in a trigger region have detected more than a certain number of photons within a time window of a few nanoseconds is enough to detect large showers. However, to optimize the sensitivity to low-energy showers it is more convenient to also take into account how many photons have been detected in each pixel (the sumtrigger technique). The trigger system developed in this thesis aims to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received by the pixels in a trigger region and compares the sum with a threshold which can be directly expressed as a number of detected photons (photoelectrons). The system allows trigger regions of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each) to be selected, with extensive overlap between them. In this way, any excess of light inside a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is distributed throughout the camera in such a way that all the clusters are read at the same time, independently of their position in the camera, by means of a complex distribution system.
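The sumtrigger logic described above (sum the pixel signals in each overlapping region, compare with a photoelectron threshold) can be sketched as follows; the camera size, region layout and threshold value are invented for illustration:

```python
def sumtrigger(pixel_signals, regions, threshold_pe):
    """Sumtrigger sketch: fire when the summed signal of any trigger
    region exceeds a threshold expressed in photoelectrons.

    pixel_signals: per-pixel amplitudes (photoelectrons)
    regions: overlapping lists of pixel indices (e.g. 14/21/28 pixels)
    Returns the indices of the regions that fired.
    """
    fired = []
    for i, region in enumerate(regions):
        if sum(pixel_signals[p] for p in region) > threshold_pe:
            fired.append(i)
    return fired

# Toy camera: 28 pixels with NSB-level noise, plus a faint compact
# flash on pixels 7..13 that no single pixel would trigger on alone.
signals = [0.2] * 28
for p in range(7, 14):
    signals[p] = 4.0
regions = [list(range(0, 14)), list(range(7, 21)), list(range(14, 28))]
hits = sumtrigger(signals, regions, threshold_pe=25.0)
```

Because the regions overlap, the compact flash is fully contained in at least one region even though it straddles two of them, which is the point of the overlapping layout.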
Thus, the readout saves a complete camera image whenever the number of photoelectrons set as the threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be very high. On the other hand, each trigger stores only a few nanoseconds of information around the trigger time. In the case of large showers, the duration of the shower can be considerably longer, so information is lost due to the temporal truncation. With the aim of solving both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides whether there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read, for a longer time. In this way, information from empty pixels is not stored, and the fixed images of the showers become short "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it is described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from the pixels properly, they must all arrive at the adder at the same time. The photomultipliers used in the pixels introduce different delays which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing between systems, has been implemented in a system named the Trigger Interface Board (TIB).
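The two-threshold COLIBRI idea reduces, in its simplest form, to a camera-wide decision followed by per-region selection. A sketch under that simplification (the threshold values and region sums are invented):

```python
def colibri_readout(region_sums, high_thr, low_thr):
    """Two-threshold readout sketch (COLIBRI-like, much simplified).

    The high threshold decides whether the camera triggers at all;
    if it does, only the regions above the low threshold are read
    out (for a longer time window), so empty pixels are never stored.
    Returns the indices of the regions selected for readout.
    """
    if max(region_sums) <= high_thr:
        return []                         # no camera trigger at all
    return [i for i, s in enumerate(region_sums) if s > low_thr]

# One bright region fires the camera; two dimmer neighbours are also
# read out, while the near-empty regions are dropped.
sums = [1.0, 6.0, 30.0, 9.0, 0.5]
read = colibri_readout(sums, high_thr=25.0, low_thr=5.0)
```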
This system consists of one module, which will be placed inside the camera of each LST and MST and connected to the neighbouring telescopes through optical fibres. When a telescope produces a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and those of the Cherenkov photons themselves in the air (which depend on the pointing direction), have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems for which the author has been the principal engineer. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements over the current designs during the following years, hopefully of use to the whole scientific community participating in CTA.
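The TIB coincidence search amounts to comparing delay-corrected neighbour trigger times with the local trigger time inside a coincidence window. A sketch, with invented times, delays and window width:

```python
def stereo_coincidence(local_t, neighbour_ts, delays, window_ns=50.0):
    """Sketch of a TIB-like stereo coincidence test.

    local_t: local trigger time (ns); neighbour_ts / delays:
    per-neighbour trigger times and their known propagation delays
    (optical fibre plus the pointing-dependent Cherenkov-front
    geometry). Returns True if any delay-corrected neighbour trigger
    falls within the coincidence window of the local trigger.
    """
    for t, d in zip(neighbour_ts, delays):
        if abs((t - d) - local_t) <= window_ns:
            return True
    return False

# One neighbour fires 180 ns after us, 150 ns of which is known
# propagation delay, so the corrected offset (30 ns) is coincident;
# the second neighbour's trigger is unrelated NSB/afterpulsing.
coincident = stereo_coincidence(1000.0, [1180.0, 3000.0], [150.0, 150.0])
```

Requiring such a coincidence before reading out the camera is what suppresses triggers from NSB fluctuations and local muons, which are seen by a single telescope only.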

Relevance:

90.00%

Publisher:

Abstract:

For many decades, the Kingdom of Saudi Arabia has been widely known for being a reliable oil exporter. This fact, however, has not exempted it from facing significant domestic energy challenges. One of the most pressing of these challenges involves bridging the widening electricity supply-demand gap where, currently, the demand is growing at a very fast rate. One crucial means to address this challenge is through delivering power supply projects with maximum efficiency. Project delivery delay, however, is not uncommon in this highly capital-intensive industry, indicating electricity supplies are not coping with the demand increases. To provide a deeper insight into the challenges of project implementation and efficient practice, this research adopts a pragmatic approach by triangulating literature, questionnaires and semi-structured interviews. The research was conducted in the Saudi Arabian power supply industry – Western Operating Area. A total of 105 usable questionnaires were collected, and 28 recorded, semi-structured interviews were conducted, analysed and synthesised to produce a conceptual model of what constitutes the project implementation challenges in the investigated industry. This was achieved by conducting a comprehensive ranking analysis applied to all 58 identified and surveyed factors which, according to project practitioners in the investigated industry, contribute to project delay. 28 of these project delay factors were selected as the "most important" ones. 
Factor Analysis was employed to structure these 28 most important project delay factors into the following meaningful set of 7 project implementation challenges: Saudi Electricity Company's contractual commitments, Saudi Electricity Company's communication and coordination effectiveness, contractors' project planning and project control effectiveness, consultant-related aspects, manpower challenges and material uncertainties, Saudi Electricity Company's tendering system, and lack of project requirements clarity. The study has implications for industry policy in that it provides a coherent assessment of the key project stakeholders' central problems. From this analysis, pragmatic recommendations are proposed that, if enacted, will minimise the significance of the identified problems on future project outcomes, thus helping to ensure the electricity supply-demand gap is diminished.
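The abstract reports a ranking analysis over the 58 surveyed delay factors without naming the index used; a common choice in construction delay studies is the relative importance index (RII), sketched here purely as an assumption (the factor names and ratings are invented for illustration):

```python
def relative_importance_index(ratings, scale_max=5):
    """Relative importance index (RII) for one delay factor.

    ratings: Likert scores (1..scale_max) from the respondents.
    RII = sum(ratings) / (scale_max * N), in (0, 1]; a higher value
    means respondents see the factor as a more important cause of
    delay. The RII form itself is an assumption here, not stated in
    the abstract.
    """
    return sum(ratings) / (scale_max * len(ratings))

factors = {
    # hypothetical delay factors and survey ratings
    "late payments": [5, 4, 5, 4, 5],
    "design changes": [3, 3, 4, 2, 3],
}
ranked = sorted(factors,
                key=lambda f: relative_importance_index(factors[f]),
                reverse=True)
```

Ranking all factors this way and keeping the top block is one way to arrive at a "most important" subset like the 28 factors the study carries into the factor analysis.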

Relevance:

90.00%

Publisher:

Abstract:

The aim of this study is to address the main deficiencies in the prevailing project cost and time control practices for construction projects in the UK. A questionnaire survey was carried out with 250 top companies, followed by in-depth interviews with 15 experienced practitioners from these companies in order to gain further insight into the identified problems and their experience of good practice in how these problems can be tackled. On the basis of these interviews and a synthesis with the literature, a list of 65 good-practice recommendations has been developed for the key project control tasks: planning, monitoring, reporting and analysing. The Delphi method was then used, with the participation of a panel of 8 practitioner experts, to evaluate these improvement recommendations and to establish their degree of relevance. After two rounds of Delphi, these recommendations are put forward as "critical", "important" or "helpful" measures for improving project control practice.
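The Delphi outcome above is a three-way categorisation of recommendations by panel rating. A sketch of such a grouping; note that the numeric cut-offs and the example recommendations are hypothetical, not taken from the study:

```python
def delphi_categories(scores, critical=4.5, important=3.5):
    """Group Delphi-rated recommendations by panel mean score.

    scores: {recommendation: list of panel ratings (1..5)}.
    The cut-offs are hypothetical; the study labels recommendations
    "critical", "important" or "helpful" but this abstract does not
    give the thresholds behind those labels.
    """
    out = {"critical": [], "important": [], "helpful": []}
    for name, panel in scores.items():
        mean = sum(panel) / len(panel)
        if mean >= critical:
            out["critical"].append(name)
        elif mean >= important:
            out["important"].append(name)
        else:
            out["helpful"].append(name)
    return out

cats = delphi_categories({
    "baseline the schedule before works start": [5, 5, 4, 5],
    "weekly progress reports": [4, 3, 4, 4],
    "colour-coded dashboards": [3, 2, 3, 3],
})
```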

Relevance:

80.00%

Publisher:

Abstract:

Context. Dwarf irregular galaxies are relatively simple, unevolved objects in which it is easy to test models of galactic chemical evolution. Aims. We attempt to determine the star formation and gas accretion history of IC 10, a local dwarf irregular for which abundance, gas, and mass determinations are available. Methods. We apply detailed chemical evolution models to predict the evolution of several chemical elements (He, O, N, S) and compare our predictions with the observational data. We consider additional constraints such as the present-time gas fraction, the star formation rate (SFR), and the total estimated mass of IC 10. We assume a dark matter halo for this galaxy and study the development of a galactic wind. We consider different star formation regimes, bursting and continuous, and explore different wind situations: i) a normal wind, where all the gas is lost at the same rate, and ii) a metal-enhanced wind, where metals produced by supernovae are preferentially lost; we also study a case without a wind. We vary the star formation efficiency (SFE), the wind efficiency, and the timescale of the gas infall, which are the most important parameters in our models. Results. We find that only models with metal-enhanced galactic winds can reproduce the properties of IC 10. The star formation must have proceeded in bursts rather than continuously, and there must have been fewer than ~10 bursts over the whole galactic lifetime. Finally, IC 10 must have formed by a slow process of gas accretion with a timescale of the order of 8 Gyr.
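The slow-accretion conclusion can be illustrated with a toy gas budget: exponential infall on a timescale tau minus a linear star formation sink. This is only a sketch of the infall term the models constrain (winds, returned gas and the burst history are omitted, and all parameter values are illustrative):

```python
import math

def gas_mass_history(t_gyr, tau=8.0, m_inf=1.0, nu=0.1, n=2000):
    """Toy gas budget for a slowly assembling dwarf galaxy.

    Integrates dMg/dt = A exp(-t/tau) - nu * Mg with explicit Euler
    steps: exponential infall on timescale tau (Gyr) normalised so
    roughly m_inf falls in by 13.7 Gyr, minus a Schmidt-like star
    formation sink with efficiency nu (1/Gyr).
    """
    A = m_inf / (tau * (1.0 - math.exp(-13.7 / tau)))
    dt = t_gyr / n
    mg = 0.0
    for i in range(n):
        t = i * dt
        mg += dt * (A * math.exp(-t / tau) - nu * mg)
    return mg

# With tau ~ 8 Gyr the infall is still ongoing today, so plenty of
# gas remains at 13.7 Gyr, consistent with a gas-rich dwarf:
gas_now = gas_mass_history(13.7)
```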

Relevância:

80.00%

Publicador:

Resumo:

This essay appears in the first book to examine feminist curatorship over the last 40 years. It undertakes an extended reading of Cathy de Zegher's influential exhibition Inside the Visible: An Elliptical Traverse of 20th Century Art in, of, and from the Feminine (1995), which proposed that modern art should be understood through cyclical shifts involving the constant reinvention of artistic method, and which identified four key moments in 20th-century history to structure its project. The essay analyses Inside the Visible's concept of an elliptical traverse to raise questions about repetitions and recurrences in feminist exhibitions of the early 1980s, the mid-1990s and 2007, asking whether, and in what ways, questions of feminist curating have been continuously repeated and reinvented. The essay argues that Inside the Visible was a key project in second-wave feminism and exemplified debates about women's time, first theorised by Julia Kristeva. It concludes, however, that 'women's time' has had its moment, and that new conceptions of feminism and its history are needed if feminist curating is not endlessly to recycle its past. The essay informs a wider collaborative project on the sexual politics of violence, feminism and contemporary art, undertaken in collaboration with Edinburgh and one of the editors of this collection.

Relevância:

80.00%

Publicador:

Resumo:

Water vapour, despite being a minor constituent of the Martian atmosphere with a precipitable amount of less than 70 pr. μm, attracts considerable attention in the scientific community because of its potential importance for past life on Mars. The partial pressure of water vapour is highly variable because of its seasonal condensation onto the polar caps and its exchange with a subsurface reservoir. It is also known to drive photochemical processes: photolysis of water produces H, OH, HO2 and other odd-hydrogen compounds, which in turn destroy ozone. Consequently, the abundance of water vapour is anti-correlated with the ozone abundance. The Herschel Space Observatory provides for the first time the possibility of retrieving vertical water profiles in the Martian atmosphere. Herschel will contribute to this topic through its guaranteed-time key project "Water and related chemistry in the solar system". Observations of Mars by the Heterodyne Instrument for the Far Infrared (HIFI) and the Photodetector Array Camera and Spectrometer (PACS) onboard Herschel are planned within this programme. HIFI, with its high spectral resolution, enables accurate observations of vertically resolved H2O and temperature profiles in the Martian atmosphere. Unlike HIFI, PACS is not capable of resolving the shapes of molecular lines. However, our present study of PACS observations of the Martian atmosphere shows that their vertical sensitivity can be improved by combining multiple-line observations with different line opacities. We have investigated the possibility of retrieving vertical profiles of temperature and of the molecular abundances of minor species, including H2O, in the Martian atmosphere using PACS. In this paper, we report that PACS is able to provide water vapour vertical profiles for the Martian atmosphere, and we present the expected spectra for future PACS observations. We also show that the spectral resolution does not allow the retrieval of several of the studied minor species, such as H2O2, HCl, NO and SO2.
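The idea of gaining vertical sensitivity from lines of different opacity can be illustrated with a toy weighting-function calculation. This is a generic idealisation, not the PACS retrieval code: for a line of nadir optical depth tau0 in an isothermal exponential atmosphere, the transmission weighting function d(exp(-tau))/dz peaks where tau = 1, i.e. near z = H ln(tau0), so each line probes a different altitude. The 10 km scale height is an assumed round number.

```python
# Toy illustration (not the PACS pipeline): weighting functions for
# spectral lines of different nadir optical depth tau0 in an atmosphere
# with assumed scale height H. Stronger lines peak at higher altitude.
import math

H = 10.0  # assumed scale height [km]

def weighting_function(z, tau0):
    """dT/dz for transmission T = exp(-tau), tau = tau0 * exp(-z/H)."""
    tau = tau0 * math.exp(-z / H)      # optical depth above level z
    return (tau / H) * math.exp(-tau)

def peak_altitude(tau0, zmax=120.0, dz=0.1):
    """Altitude [km] where the weighting function is largest."""
    zs = [i * dz for i in range(int(zmax / dz))]
    return max(zs, key=lambda z: weighting_function(z, tau0))

for tau0 in (2.0, 20.0, 200.0):
    print(f"tau0 = {tau0:6.1f} -> weighting function peaks near "
          f"{peak_altitude(tau0):5.1f} km (analytic: {H * math.log(tau0):5.1f} km)")
```

A set of lines spanning two orders of magnitude in opacity therefore samples several pressure levels even when, as for PACS, the individual line shapes are unresolved.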

Relevância:

80.00%

Publicador:

Resumo:

The UK government aims to achieve an 80% reduction in CO2 emissions by 2050, which requires collective effort across all UK industry sectors. The housing sector in particular has great potential to contribute, because it alone accounts for 27% of total UK CO2 emissions, and 87% of the housing stock responsible for that 27% will still be standing in 2050. It is therefore essential to improve the energy efficiency of existing housing built to low energy-efficiency standards. To this end, a whole house needs to be refurbished in a sustainable way, considering the lifetime financial and environmental impacts of the refurbished house. However, the current refurbishment process struggles to generate financially and environmentally affordable refurbishment solutions, owing to the highly fragmented nature of refurbishment practice and a lack of knowledge and skills about whole-house refurbishment in the construction industry. To generate an affordable solution, diverse information on the costs and environmental impacts of refurbishment measures and materials must be collected and integrated, in the right sequence, throughout the refurbishment project life cycle and among the key project stakeholders. Consequently, researchers are increasingly studying ways of utilising Building Information Modelling (BIM) to tackle these problems, because BIM can support construction professionals in managing projects collaboratively by integrating diverse information, and in determining the best refurbishment solution among various alternatives by calculating the life-cycle costs and lifetime CO2 performance of each. Despite these capabilities, BIM adoption in the housing sector remains low at 25%, and the use of BIM for housing refurbishment projects has rarely been studied.
This research therefore aims to develop a BIM framework for formulating a financially and environmentally affordable whole-house refurbishment solution based simultaneously on the Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) methods. To achieve this aim, a BIM feasibility study was first conducted as a pilot to examine whether BIM is suitable for housing refurbishment, and a BIM framework was then developed using grounded theory, as no precedent research existed. The framework was examined through a hypothetical case study using BIM input data collected from a questionnaire survey of homeowners' preferences for housing refurbishment. Finally, the framework was validated by academics and professionals, who were given the framework together with a refurbishment solution formulated through it on the basis of the LCC and LCA studies. As a result, BIM was identified as suitable for housing refurbishment as a management tool, and the time is right for developing the BIM framework. The framework, comprising seven project stages, was developed to formulate an affordable refurbishment solution. Through the case study, the Building Regulation was identified as the most affordable energy-efficiency standard, yielding the best LCC and LCA results when applied to a whole-house refurbishment solution. In addition, the Fabric Energy Efficiency Standard (FEES) is recommended when customers are willing to adopt a higher energy standard, and up to 60% of CO2 emissions can be cut through whole-house fabric refurbishment to the FEES. Furthermore, limitations and challenges to fully utilising the BIM framework for housing refurbishment were revealed, such as a lack of BIM objects with proper cost and environmental information, limited interoperability between different BIM software packages, and limited LCC and LCA datasets in BIM systems.
Finally, the BIM framework was validated as suitable for housing refurbishment projects, and reviewers commented that it could be made more practical by developing a dedicated BIM library for housing refurbishment with proper LCC and LCA datasets. This research is expected to provide a systematic way of formulating a refurbishment solution using BIM, and to become a basis for further research on BIM for the housing sector that resolves the current limitations and challenges. Future research should enhance the framework by developing a more detailed process map and BIM objects with proper LCC and LCA information.
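The LCC comparison underlying such a framework can be sketched as a simple net-present-cost calculation: capital cost plus discounted lifetime energy bills. All figures below (costs, bills, 30-year horizon, 3.5% discount rate) are hypothetical round numbers for illustration, not values from the thesis's LCC/LCA datasets.

```python
# Illustrative life-cycle-cost comparison of refurbishment options.
# Every number here is a hypothetical placeholder, not study data.

def life_cycle_cost(capital, annual_energy_cost, years=30, discount=0.035):
    """Net present cost: capital outlay plus discounted energy bills."""
    pv_energy = sum(annual_energy_cost / (1 + discount) ** t
                    for t in range(1, years + 1))
    return capital + pv_energy

# Hypothetical options for one dwelling (GBP): capital cost, annual bill.
options = {
    "no refurbishment":       life_cycle_cost(0,     2400),
    "Building Regulation":    life_cycle_cost(18000,  900),
    "FEES (fabric standard)": life_cycle_cost(32000,  650),
}
for name, lcc in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:24s} LCC ~ GBP {lcc:,.0f}")
```

With these invented inputs the mid-range option minimises life-cycle cost, mirroring the abstract's finding that the cheapest capital option is not necessarily the most affordable once lifetime energy costs are discounted in.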

Relevância:

80.00%

Publicador:

Resumo:

Through this paper we look at the links between architecture education, research and practice, using a current project as a vehicle covering three aspects: building project, pilot project and live project. The building project consists of the refurbishment and extension of a Parnell Cottage for a private client, located near Cloyne in East Cork, Ireland. The pilot project falls within the NEES Project, which investigates the use of materials and services based on natural or recycled materials to improve the energy performance of new and existing buildings. The live project aims to hold a series of on-site workshops and seminars for students of architecture, architects and interested parties, demonstrating the integration of the NEES best-practice materials and techniques within the built project. The workshops, seminars and key project documents will be digitally recorded for dissemination through a web-based publication. The small scale of the building project allowed for flexibility in the early conceptual design stages and for the integration of the research and educational aspects.