62 results for Space cabin atmospheres


Relevance:

20.00%

Publisher:

Abstract:

An analysis of high-resolution Anglo-Australian Telescope (AAT)/University College London Echelle Spectrograph (UCLES) optical spectra for the ultraviolet (UV)-bright star ROA 5701 in the globular cluster omega Cen (NGC 5139) is performed, using non-local thermodynamic equilibrium (non-LTE) model atmospheres to estimate stellar atmospheric parameters and chemical composition. Abundances are derived for C, N, O, Mg, Si and S, and compared with those found previously by Moehler et al. We find a general metal underabundance relative to young B-type stars, consistent with the average metallicity of the cluster. Our results indicate that ROA 5701 has not undergone a gas-dust separation scenario as previously suggested. However, its abundance pattern does imply that ROA 5701 has evolved off the asymptotic giant branch (AGB) prior to the onset of the third dredge-up.

Relevance:

20.00%

Publisher:

Abstract:

Spectroscopic analyses of 7 SMC B-type supergiants and 1 giant have been undertaken using high resolution optical data obtained on the VLT with UVES. FASTWIND, a non-LTE, spherical, line-blanketed model atmosphere code, was used to derive atmospheric and wind parameters of these stars as well as their absolute abundances. Mass-loss rates, derived from H-alpha profiles, are in poor agreement with metallicity dependent theoretical predictions. Indeed the wind-momenta of the SMC stars appear to be in good agreement with the wind-momentum luminosity relationship (WLR) of Galactic B-type stars, a puzzling result given that line-driven wind theory predicts a metallicity dependence. However, the Galactic stars were analysed using unblanketed model atmospheres, which may mask any dependence on metallicity. A mean nitrogen enhancement of a factor of 14 is observed in the supergiants, whilst an enrichment of only a factor of 4 is present in the giant, AV216. Similar excesses in nitrogen are observed in O-type dwarfs and supergiants in the same mass range, suggesting that the additional nitrogen is produced while the stars are still on the main-sequence. These nitrogen enrichments can be reproduced by current stellar evolution models, which include rotationally induced mixing, only if large initial rotational velocities of 300 km s(-1) are invoked. Such large rotational velocities appear to be inconsistent with observed v sin i distributions for O-type stars and B-type supergiants. Hence it is suggested that the currently available stellar evolution models require more efficient mixing at lower rotational velocities.

Relevance:

20.00%

Publisher:

Abstract:

Charge exchange X-ray and far-ultraviolet (FUV) aurorae can provide detailed insight into the interaction between solar system plasmas. Using the two complementary experimental techniques of photon emission spectroscopy and translational energy spectroscopy, we have studied state-selective charge exchange in collisions between fully ionized helium and target gases characteristic of cometary and planetary atmospheres (H2O, CO2, CO, and CH4). The experiments were performed at velocities typical of the solar wind (200-1500 km s(-1)). Data sets are produced that can be used for modeling the interaction of solar wind alpha particles with cometary and planetary atmospheres. These data sets are used to demonstrate the diagnostic potential of helium line emission. Existing Extreme Ultraviolet Explorer (EUVE) observations of comets Hyakutake and Hale-Bopp are analyzed in terms of solar wind and coma characteristics. The case of Hale-Bopp illustrates well the dependence of the helium line emission on the collision velocity. For Hale-Bopp, our model requires low velocities in the interaction zone. We interpret this as the effect of severe post-bow shock cooling in this extraordinarily large comet.

Relevance:

20.00%

Publisher:

Abstract:

There is a perception that teaching space in universities is a rather scarce resource. However, some studies have revealed that in many institutions it is actually chronically under-used. Often, rooms are occupied only half the time, and even when in use they are often only half full. This is usually measured by the ‘utilization’, which is defined as the percentage of available ‘seat-hours’ that are employed. Within real institutions, studies have shown that this utilization can often take values as low as 20–40%. One consequence of such a low level of utilization is that space managers are under pressure to make more efficient use of the available teaching space. However, better management is hampered because there does not appear to be a good understanding within space management (near-term planning) of why this happens. This is accompanied, within space planning (long-term planning), by a lack of expertise on how best to accommodate the expected low utilizations. This motivates our two main goals: (i) to understand the factors that drive down utilizations, and (ii) to set up methods to provide better space planning. Here, we provide quantitative evidence that constraints arising from timetabling and location requirements can readily explain the low utilizations seen in reality. Furthermore, on considering the decision question ‘Can this given set of courses all be allocated in the available teaching space?’ we find that the answer depends on the associated utilization in a way that exhibits threshold behaviour: there is a sharp division between regions in which the answer is ‘almost always yes’ and those in which it is ‘almost always no’. Through analysis and understanding of the space of potential solutions, our work suggests that better use of space within universities will come about through an understanding of the effects of timetabling constraints and of when it is statistically likely that a set of courses can be allocated to a particular space. The results presented here provide a firm foundation for university managers to take decisions on how space should be managed and planned more effectively. Our multi-criteria approach and new methodology together provide new insight into the interaction between the course timetabling problem and the crucial issue of space planning.
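The utilization metric quoted in this abstract can be made concrete with a small, hypothetical calculation (a sketch, not taken from the paper; the room capacities, opening hours and bookings below are invented): utilization is occupied seat-hours divided by available seat-hours, so rooms booked half the time and half full when booked sit at roughly 25%, squarely inside the 20–40% range mentioned above.

# Minimal sketch (assumed data, not from the paper) of the 'utilization' metric:
# the fraction of available seat-hours that are actually occupied.

from dataclasses import dataclass

@dataclass
class Room:
    capacity: int        # seats in the room
    hours_open: int      # teaching hours available per week

@dataclass
class Booking:
    room: Room
    students: int        # seats actually occupied
    hours: int           # hours per week the booking runs

def utilization(rooms: list, bookings: list) -> float:
    """Occupied seat-hours divided by available seat-hours."""
    available = sum(r.capacity * r.hours_open for r in rooms)
    used = sum(b.students * b.hours for b in bookings)
    return used / available if available else 0.0

# Example: two rooms open 40 h/week, each half full for half the time.
rooms = [Room(capacity=100, hours_open=40), Room(capacity=50, hours_open=40)]
bookings = [Booking(rooms[0], students=50, hours=20),
            Booking(rooms[1], students=25, hours=20)]
print(f"Utilization: {utilization(rooms, bookings):.0%}")   # -> Utilization: 25%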

Relevance:

20.00%

Publisher:

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that a course will need to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials. Although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events dependent on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made within the context of limited space availability. Institutions do not have an unlimited number of teaching rooms, and need to use those that they do have effectively. The efficiency of space usage is usually measured by the overall ‘utilisation’, which is basically the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying preferences on splitting, increasing utilisation, and satisfying other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations themselves are based on a local search method that attempts to optimise the space utilisation by means of a ‘dynamic splitting’ strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
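As an illustration only, a dynamic-splitting local search of the general kind described in this abstract can be sketched as follows; the room capacities, course sizes, scoring function and move set are all assumptions for the sketch, not the paper's algorithm. Each move re-splits one course's enrolment into sections, and the move is kept when it improves a score that trades seat utilisation off against a penalty per extra section.

# Hypothetical sketch of a 'dynamic splitting' local search (assumed data and
# scoring; not the paper's method). One shared time slot is assumed, sections
# are packed into rooms first-fit-decreasing, and hill climbing accepts
# re-splits that raise utilisation net of a small per-section penalty.

import random

ROOMS = [120, 80, 40, 40]                  # hypothetical room capacities
COURSES = {"A": 150, "B": 70, "C": 35}     # hypothetical course enrolments

def pack(solution):
    """First-fit-decreasing packing of all sections; returns students seated."""
    free = sorted(ROOMS, reverse=True)
    seated = 0
    for size in sorted((s for secs in solution.values() for s in secs), reverse=True):
        room = next((r for r in free if r >= size), None)
        if room is not None:
            free.remove(room)
            seated += size
    return seated

def score(solution, split_penalty=0.02):
    """Seat utilisation minus a penalty for each extra section created."""
    extra_sections = sum(len(secs) - 1 for secs in solution.values())
    return pack(solution) / sum(ROOMS) - split_penalty * extra_sections

def resplit(size, max_sections=3):
    """Randomly cut an enrolment into 1..max_sections sections."""
    cuts = sorted(random.sample(range(1, size), random.randint(1, max_sections) - 1))
    return [b - a for a, b in zip([0] + cuts, cuts + [size])]

def local_search(steps=2000):
    """Hill climbing over splitting decisions ('dynamic splitting')."""
    current = {name: [n] for name, n in COURSES.items()}   # start with no splits
    best = score(current)
    for _ in range(steps):
        candidate = dict(current)
        name = random.choice(list(COURSES))
        candidate[name] = resplit(COURSES[name])
        s = score(candidate)
        if s > best:
            current, best = candidate, s
    return current, best

print(local_search())   # typically splits course A so that everyone can be seated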

Relevance:

20.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is becoming increasingly important. To do this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although content-addressable memories (CAMs) are favoured by technology vendors due to their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multi-level cutting of the classification space into smaller subspaces. The proposed solution exploits the geometric distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
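To illustrate the general idea of cutting the classification space geometrically, the following hypothetical sketch builds a decision tree by recursively halving a two-field space until each leaf holds few enough rules to fit a small exact-match block, which stands in here for a CAM partition. It is not the architecture proposed in the paper; the field widths, rule set and block size are assumptions for the sketch.

# Hypothetical sketch of multi-level cutting for packet classification
# (assumed two 8-bit header fields; small leaf lists stand in for CAM blocks).

from dataclasses import dataclass

FIELD_MAX = 256          # assumed width of each header field
LEAF_RULES = 2           # max rules per leaf ("CAM block" capacity)

@dataclass
class Rule:
    prio: int            # lower value = higher priority
    f0: range            # match range over field 0 (e.g. a source prefix as a range)
    f1: range            # match range over field 1 (e.g. a destination port range)

@dataclass
class Node:
    rules: list = None   # leaf: the small rule list handed to one CAM block
    dim: int = None      # internal node: which field was cut
    mid: int = None      # internal node: the cut point
    low: "Node" = None
    high: "Node" = None

def overlaps(rule, lo, hi):
    """Does the rule's box intersect the half-open region [lo, hi)?"""
    return (rule.f0.start < hi[0] and rule.f0.stop > lo[0] and
            rule.f1.start < hi[1] and rule.f1.stop > lo[1])

def build(rules, lo=(0, 0), hi=(FIELD_MAX, FIELD_MAX), depth=0):
    """Recursively halve the region along alternating fields (multi-level cuts)."""
    here = [r for r in rules if overlaps(r, lo, hi)]
    if len(here) <= LEAF_RULES or depth >= 8:
        return Node(rules=here)
    d = depth % 2
    mid = (lo[d] + hi[d]) // 2
    low_hi, high_lo = list(hi), list(lo)
    low_hi[d], high_lo[d] = mid, mid
    return Node(dim=d, mid=mid,
                low=build(here, lo, tuple(low_hi), depth + 1),
                high=build(here, tuple(high_lo), hi, depth + 1))

def classify(node, pkt):
    """Walk the cut tree, then search the small leaf list (the 'CAM lookup')."""
    while node.rules is None:
        node = node.low if pkt[node.dim] < node.mid else node.high
    matches = [r for r in node.rules if pkt[0] in r.f0 and pkt[1] in r.f1]
    return min(matches, key=lambda r: r.prio, default=None)

rules = [Rule(0, range(0, 64), range(80, 81)),      # hypothetical rule set
         Rule(1, range(0, 128), range(0, 256)),
         Rule(2, range(128, 256), range(0, 256))]
tree = build(rules)
print(classify(tree, (10, 80)).prio)                # -> 0 (most specific rule wins)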