19 results for HOSTING STARS
in CentAUR: Central Archive University of Reading - UK
Abstract:
Perceptual multimedia quality is of paramount importance to the continued take-up and proliferation of multimedia applications: users will not use and pay for applications if they are perceived to be of low quality. Whilst traditionally distributed multimedia quality has been characterised by Quality of Service (QoS) parameters, these neglect the user perspective of the issue of quality. In order to redress this shortcoming, we characterise the user multimedia perspective using the Quality of Perception (QoP) metric, which encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also his/her ability to analyse, synthesise and assimilate the informational content of multimedia. In recognition of the fact that monitoring eye movements offers insights into visual perception, as well as the associated attention mechanisms and cognitive processes, this paper reports on the results of a study investigating the impact of differing multimedia presentation frame rates on user QoP and eye path data. Our results show that provision of higher frame rates, usually assumed to provide better multimedia presentation quality, does not significantly impact upon the median coordinate value of eye path data. Moreover, higher frame rates do not significantly increase the level of participant information assimilation, although they do significantly improve overall user enjoyment and quality perception of the multimedia content being shown.
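The median-coordinate comparison reported above can be illustrated with standard nonparametric statistics; the following is a minimal sketch, assuming hypothetical per-participant (x, y) gaze samples for two frame-rate conditions (the data layout, variable names, and values are illustrative and not taken from the study).

```python
# Minimal sketch: compare median eye-path coordinates between two
# frame-rate conditions with a nonparametric test (illustrative data layout).
import numpy as np
from scipy.stats import mannwhitneyu

def median_coordinates(eye_paths):
    """Median (x, y) of each participant's eye path; eye_paths is a list of (N, 2) arrays."""
    return np.array([np.median(p, axis=0) for p in eye_paths])

# Hypothetical per-participant gaze samples at a low and a high frame rate (placeholder data).
rng = np.random.default_rng(0)
low_fps  = [rng.normal([512, 384], 50, size=(300, 2)) for _ in range(20)]
high_fps = [rng.normal([512, 384], 50, size=(300, 2)) for _ in range(20)]

med_low, med_high = median_coordinates(low_fps), median_coordinates(high_fps)

# Test each coordinate axis separately for a shift in the median.
for axis, name in enumerate("xy"):
    stat, p = mannwhitneyu(med_low[:, axis], med_high[:, axis])
    print(f"{name}-coordinate: U={stat:.1f}, p={p:.3f}")
```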
Abstract:
During many lava dome-forming eruptions, persistent rockfalls and the concurrent development of a substantial talus apron around the foot of the dome are important aspects of the observed activity. An improved understanding of internal dome structure, including the shape and internal boundaries of the talus apron, is critical for determining when a lava dome is poised for a major collapse and how this collapse might ensue. We consider a period of lava dome growth at the Soufrière Hills Volcano, Montserrat, from August 2005 to May 2006, during which a 100 × 10^6 m^3 lava dome developed that culminated in a major dome-collapse event on 20 May 2006. We use an axisymmetric Finite Element Method model to simulate the growth and evolution of the lava dome, including the development of the talus apron. We first test the generic behaviour of this continuum model, which has core lava and carapace/talus components. Our model describes the generation rate of talus, including its spatial and temporal variation, as well as its post-generation deformation, which is important for an improved understanding of the internal configuration and structure of the dome. We then use our model to simulate the 2005 to 2006 Soufrière Hills dome growth using measured dome volumes and extrusion rates to drive the model and generate the evolving configuration of the dome core and carapace/talus domains. The evolution of the model is compared with the observed rockfall seismicity using event counts and seismic energy parameters, which are used here as a measure of rockfall intensity and hence a first-order proxy for volumes. The range of model-derived volume increments of talus aggraded to the talus slope per recorded rockfall event, approximately 3 × 10^3–13 × 10^3 m^3 per rockfall, is high with respect to estimates based on observed events. From this, it is inferred that some of the volumetric growth of the talus apron (perhaps up to 60–70%) might have occurred in the form of aseismic deformation of the talus, forced by an internal, laterally spreading core. Talus apron growth by this mechanism has not previously been identified, and this suggests that the core, hosting hot gas-rich lava, could have a greater lateral extent than previously considered.
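The per-rockfall volume reasoning above can be illustrated with a short worked calculation; this is a minimal sketch using hypothetical totals for the talus volume and the rockfall count, chosen only to reproduce the quoted orders of magnitude (they are not the study's values).

```python
# Minimal sketch of the per-rockfall volume-increment reasoning (illustrative numbers).
model_talus_volume_m3 = 50e6      # hypothetical model-derived talus apron volume
rockfall_events       = 10_000    # hypothetical number of recorded rockfall events

volume_per_event = model_talus_volume_m3 / rockfall_events
print(f"Model-derived increment: {volume_per_event:.0f} m^3 per rockfall")

# If observed rockfalls typically move much smaller volumes, the shortfall
# would have to be accommodated aseismically, e.g. by lateral spreading of the core:
observed_volume_per_event = 1.5e3          # hypothetical observation-based estimate, m^3
aseismic_fraction = 1 - (observed_volume_per_event * rockfall_events) / model_talus_volume_m3
print(f"Implied aseismic fraction of talus growth: {aseismic_fraction:.0%}")
```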
Abstract:
A full assessment of para-virtualization is important, because without knowledge about the various overheads, users cannot understand whether using virtualization is a good idea or not. In this paper we are very interested in assessing the overheads of running various benchmarks on bare metal, as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as looking at the overheads of turning on monitoring and logging. The knowledge from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix, see Table 1. These different virtualization systems are used extensively by cloud users. We are using various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally we turn on monitoring and logging. The latter is important as users are interested in Service Level Agreements (SLAs) used by the Cloud providers, and the use of logging is a means of assessing the services bought and used from commercial providers. In this paper we assess the virtualization systems on three different systems. We use the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), which are all servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by the privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, where some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware of the fact that it is running on virtualized hardware and not on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it will be necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for Cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
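The overhead assessment described in this abstract reduces to comparing benchmark run times across configurations; below is a minimal sketch, assuming hypothetical timings for a single Netlib benchmark (the configuration names and numbers are placeholders, not measurements from the paper).

```python
# Minimal sketch: relative overhead of virtualized runs against bare metal
# (hypothetical run times in seconds; not measurements from the paper).
baseline = ("bare-metal", 100.0)

runs = {
    "para-virtualized":                      104.0,
    "para-virtualized + monitoring/logging": 109.0,
}

name0, t0 = baseline
for name, t in runs.items():
    overhead = (t - t0) / t0 * 100
    print(f"{name:40s} {t:7.1f} s  (+{overhead:.1f}% vs {name0})")
```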
Abstract:
The cooled infrared filters and dichroic beam splitters manufactured for the Mid-Infrared Instrument are key optical components for the selection and isolation of wavelengths in the study of astrophysical properties of stars, galaxies, and other planetary objects. We describe the spectral design and manufacture of the precision cooled filter coatings for the spectrometer (7 K) and imager (9 K). Details of the design methods used to achieve the spectral requirements, selection of thin film materials, deposition technique, and testing are presented together with the optical layout of the instrument. (C) 2008 Optical Society of America.
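Although the actual coating prescriptions are not reproduced here, the quarter-wave relation underlying such multilayer filter design can be sketched; the design wavelength and film refractive indices below are illustrative assumptions, not the MIRI values.

```python
# Minimal sketch: physical thickness of a quarter-wave layer, d = lambda0 / (4 n),
# for hypothetical mid-infrared design values (not the MIRI coating prescription).
design_wavelength_um = 10.0              # hypothetical design wavelength, micrometres
film_indices = {"Ge": 4.0, "ZnS": 2.2}   # approximate refractive indices (assumed)

for material, n in film_indices.items():
    d = design_wavelength_um / (4 * n)
    print(f"{material}: quarter-wave thickness ~ {d:.2f} um")
```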
Abstract:
The search for Earth-like exoplanets, orbiting in the habitable zone of stars other than our Sun and showing biological activity, is one of the most exciting and challenging quests of the present time. Nulling interferometry from space, in the thermal infrared, appears as a promising candidate technique for the task of directly observing extra-solar planets. It has been studied for about 10 years by ESA and NASA in the framework of the Darwin and TPF-I missions respectively. Nevertheless, nulling interferometry in the thermal infrared remains a technological challenge at several levels. Among them, the development of the "modal filter" function is mandatory for filtering the wavefronts in keeping with the objective of rejecting the central star flux with an efficiency of about 10^5. Modal filtering takes advantage of the capability of single-mode waveguides to transmit a single amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible. The modal filter may be based on single-mode Integrated Optics (IO) and/or Fiber Optics. In this paper, we focus on IO, and more specifically on the progress of the on-going "Integrated Optics" activity of the European Space Agency.
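The quoted rejection requirement translates directly into wavefront-matching tolerances; the sketch below uses the standard first-order expression for the null depth of a two-beam nuller, N ≈ (Δφ² + δa²)/4, and the equal split of the error budget between phase and amplitude is an illustrative assumption.

```python
import numpy as np

# Standard first-order null depth for a two-beam nuller:
#   N ~ (dphi**2 + da**2) / 4
# where dphi is the residual phase mismatch (radians) and da the fractional
# amplitude mismatch. Rejection ratio = 1 / N.

def null_depth(dphi_rad, da_frac):
    return (dphi_rad**2 + da_frac**2) / 4.0

target_rejection = 1e5                 # ~10^5, as quoted for Darwin/TPF-I
target_null = 1.0 / target_rejection

# Illustrative assumption: split the error budget equally between phase and amplitude.
dphi = np.sqrt(2 * target_null)        # radians
da = np.sqrt(2 * target_null)          # fractional amplitude mismatch

print(f"Required null depth:       {target_null:.1e}")
print(f"Phase mismatch budget:     {dphi*1e3:.2f} mrad")
print(f"Amplitude mismatch budget: {da*100:.3f} %")
print(f"Achieved null depth:       {null_depth(dphi, da):.1e}")
```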
Abstract:
During April-May 2010 volcanic ash clouds from the Icelandic Eyjafjallajökull volcano reached Europe, causing an unprecedented disruption of the EUR/NAT region airspace. Civil aviation authorities banned all flight operations because of the threat posed by volcanic ash to modern turbine aircraft. New quantitative airborne ash mass concentration thresholds, still under discussion, were adopted for discerning regions contaminated by ash. This has implications for ash dispersal models routinely used to forecast the evolution of ash clouds. In this new context, quantitative model validation and assessment of the accuracy of current state-of-the-art models is of paramount importance. The passage of volcanic ash clouds over central Europe, a territory hosting a dense network of meteorological and air quality observatories, generated a quantity of observations unusual for volcanic clouds. From the ground, the cloud was observed by aerosol lidars, lidar ceilometers, sun photometers, other remote-sensing instruments and in-situ collectors. From the air, sondes and multiple aircraft missions also took extremely valuable in-situ and remote-sensing measurements. These measurements constitute an excellent database for model validation. Here we validate the FALL3D ash dispersal model by comparing model results with ground and airplane-based measurements obtained during the initial 14–23 April 2010 Eyjafjallajökull explosive phase. We run the model at high spatial resolution using as input hourly-averaged observed heights of the eruption column and the total grain size distribution reconstructed from field observations. Model results are then compared against remote ground-based and in-situ aircraft-based measurements, including lidar ceilometers from the German Meteorological Service, aerosol lidars and sun photometers from the EARLINET and AERONET networks, and flight missions of the German DLR Falcon aircraft. We find good quantitative agreement, with an error similar to the spread in the observations (depending, however, on the method used to estimate the mass eruption rate) for both airborne and ground mass concentration. Such verification results help us understand and constrain the accuracy and reliability of ash transport models and are of enormous relevance for designing future operational mitigation strategies at Volcanic Ash Advisory Centers.
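The model-versus-observation comparison described above can be illustrated with simple error metrics; this is a minimal sketch assuming hypothetical co-located model and observed ash mass concentrations (the values are placeholders, not the study's data).

```python
import numpy as np

# Minimal sketch: compare modelled and observed ash mass concentrations
# (hypothetical co-located values in mg/m^3; placeholders, not study data).
observed = np.array([0.2, 0.5, 1.1, 0.8, 0.3])
modelled = np.array([0.3, 0.4, 0.9, 1.0, 0.2])

bias = np.mean(modelled - observed)
rmse = np.sqrt(np.mean((modelled - observed) ** 2))
spread = np.std(observed)   # spread in the observations, for context

print(f"Bias: {bias:+.2f} mg/m^3, RMSE: {rmse:.2f} mg/m^3, obs. spread: {spread:.2f} mg/m^3")
```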
Abstract:
Placing axons into polymeric micro-channels hosting embedded electrodes greatly increases the extracellular amplitude of action potentials, allowing for robust recording and noise suppression. We are developing such an electrode interface to record electrical activity from bladder afferents to restore bladder control in patients suffering from spinal cord injury. Here we describe our microchannel electrode interface in terms of design, microfabrication and electrode characteristics and report on in vivo bladder function after implantation of teased dorsal rootlets within microchannels.
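The amplitude increase rests on the high longitudinal resistance of a fluid-filled microchannel that confines the extracellular return current; a minimal order-of-magnitude sketch follows, in which the geometry, resistivity and current values are illustrative assumptions rather than the device's actual parameters.

```python
# Minimal sketch: extracellular spike amplitude inside a microchannel,
# approximated as V ~ I_axial * R_channel with R = rho * L / A.
# All numbers below are illustrative assumptions, not the device parameters.
rho_saline = 0.7          # ohm*m, approximate resistivity of physiological saline
length = 5e-3             # m, channel length (assumed)
width = height = 100e-6   # m, channel cross-section (assumed)

area = width * height
resistance = rho_saline * length / area          # ohms
axonal_current = 5e-9                            # A, assumed axial action current

amplitude = axonal_current * resistance
print(f"Channel resistance: {resistance/1e3:.0f} kOhm")
print(f"Estimated extracellular amplitude: {amplitude*1e3:.2f} mV")
```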
Abstract:
We present results from 30 nights of observations of the open cluster NGC 7789 with the Wide Field Camera on the Isaac Newton Telescope, La Palma. From ~900 epochs, we obtained light curves and Sloan r'-i' colours for ~33000 stars, with ~2400 stars having better than 1 per cent precision. We expected to detect ~2 transiting hot Jupiter planets if 1 per cent of stars host such a companion and a typical hot Jupiter radius is ~1.2 R_J. We find 24 transit candidates, 14 of which we can assign a period. We rule out the transiting planet model for 21 of these candidates using various robust arguments. For two candidates, we are unable to decide on their nature, although it seems most likely that they are eclipsing binaries as well. We have one candidate exhibiting a single eclipse, for which we derive a radius of 1.81 (+0.09, -0.00) R_J. Three candidates remain that require follow-up observations in order to determine their nature.
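The expectation of ~2 detections follows from a simple product of factors; the sketch below uses the numbers quoted in the abstract, with the geometric transit probability for a hot Jupiter (~10 per cent) taken as an assumption.

```python
# Minimal sketch of the detection-yield estimate quoted above.
stars_with_good_photometry = 2400   # stars with better than 1 per cent precision
hot_jupiter_fraction = 0.01         # assumed fraction of stars hosting a hot Jupiter
transit_probability = 0.10          # assumed geometric transit probability for a hot Jupiter

expected_detections = (stars_with_good_photometry
                       * hot_jupiter_fraction
                       * transit_probability)
print(f"Expected transiting hot Jupiters: ~{expected_detections:.1f}")
```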
Abstract:
We present N-body simulations of accretion discs about young stellar objects (YSOs). The simulation includes the presence of a magnetic loop structure on the central star which interacts with the particles by means of a magnetic drag force. We find that an equilibrium spin rate is achieved when the corotation radius coincides with the edge of the loop. This spin rate is consistent with observed values for T Tauri stars, being an order of magnitude less than the breakup value. The material ejected from the system by the rotating loop has properties consistent with the observed molecular outflows, given the presence of a suitable containing cavity.
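The equilibrium condition described above (corotation radius coinciding with the loop edge) fixes the spin period through Kepler's third law; in the sketch below the stellar mass, radius, and loop extent are illustrative T Tauri-like assumptions, not the simulation's parameters.

```python
import numpy as np

# Minimal sketch: equilibrium spin period when the corotation radius equals
# the loop edge, P = 2*pi*sqrt(R_loop**3 / (G*M)), compared with breakup.
# Stellar and loop values are illustrative T Tauri-like assumptions.
G = 6.674e-11                   # m^3 kg^-1 s^-2
M_sun, R_sun = 1.989e30, 6.96e8

M_star = 0.5 * M_sun            # assumed stellar mass
R_star = 2.0 * R_sun            # assumed stellar radius
R_loop = 5.0 * R_star           # assumed radial extent of the magnetic loop

P_eq = 2 * np.pi * np.sqrt(R_loop**3 / (G * M_star))
P_breakup = 2 * np.pi * np.sqrt(R_star**3 / (G * M_star))

print(f"Equilibrium spin period: {P_eq/86400:.1f} d")
print(f"Breakup period:          {P_breakup/86400:.2f} d")
print(f"Ratio: ~{P_eq/P_breakup:.0f}x the breakup period")
```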
Abstract:
It is thought that the secondary stars in cataclysmic variables (CVs) may undergo a period of mass loss in the form of a wind during the evolution of the system (Mullan et al. 1992). This wind is thought to magnetically brake the secondary star with a time-scale ~ 10^8 yr (e.g. van Paradijs 1986). When the secondary’s spin has been brought close to synchronism with the orbit it is possible for tidal torques to lock the secondary in synchronous rotation.
Abstract:
What is it that gives celebrities the voice and authority to do and say the things they do in the realm of development politics? Asked another way, how is celebrity practised and, simultaneously, how does this praxis make celebrity, personas, politics and, indeed, celebrities themselves? In this article, we explore this 'celebrity praxis' through the lens of the creation of the contemporary 'development celebrity' in those stars working for development writ large in the so-called Third World. Drawing on work in science studies, material cultures and the growing geo-socio-anthropologies of things, the key to understanding the material practices embedded in and creating development celebrity networks is the multiple and complex circulations of the everyday and bespectacled artefacts of celebrity. Conceptualised as the 'celebrity–consumption–compassion complex', the performances of development celebrities are as much about everyday events, materials, technologies, emotions and consumer acts as they are about the mediated and liquidised constructions of the stars who now 'market' development. Moreover, this complex is constructed by and constructs what we are calling 'star/poverty space' that works to facilitate the 'expertise' and 'authenticity' and, thus, elevated voice and authority, of development celebrities through poverty tours, photoshoots, textual and visual diaries, websites and tweets. In short, the creation of star/poverty space is performed through a kind of 'materiality of authenticity' that is at the centre of the networks of development celebrity. The article concludes with several brief observations about the politics, possibilities and problematics of development celebrities and the star/poverty spaces that they create.
Abstract:
The narrative of Rosemary's Baby hinges on a central hesitation between pregnancy-induced madness and the existence of Satanism. Accordingly, the monstrous element is embodied in both the real and the supernatural: Rosemary's husband Guy (John Cassavetes) is responsible for her victimisation through rape in either explanation. However, I will argue that the inherent ambiguity of the plot makes it difficult to place him as a figure typical of the archetypal horror binaries of normality/monster, human/inhuman. By displacing generic convention, the film complicates the issue of monstrosity, whilst simultaneously offering the possibility for the depiction of female experience of marriage to be at the centre of the narrative, for the real to be possibly of more significance than the supernatural. Previous writing has tended to concentrate on Rosemary and her pregnancy, so through detailed consideration of Cassavetes' performance and its placement in the mise-en-scène this focus on Guy aims to demonstrate that he changes almost as much as Rosemary does. The chapter will focus on the film's depiction of rape, during Rosemary's nightmare and after it, in order to demonstrate how the notion of performance reveals Guy's monstrousness and the difficulties this represents in our engagement with him.