65 results for: end user computing, application streaming, horizon workspace portal, vmware view


Relevance: 40.00%

Abstract:

In this study, the authors discuss the effective use of technology to solve the problem of deciding on journey start times under recurrent traffic conditions. The developed algorithm guides vehicles onto more reliable routes that are not easily prone to congestion or travel delays, ensures that the start time is as late as possible to avoid the traveller waiting too long at their destination, and attempts to minimise the travel time. Experiments show that, in order to be more certain of reaching their destination on time, a traveller has to leave early and correspondingly arrive early, resulting in a long waiting time. The application developed here asks the user to set this certainty factor according to the task at hand, and computes the best start time and route.
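The following is a minimal sketch of the start-time idea described above, not the authors' application: given sampled travel times for each candidate route, it chooses the latest departure that still meets the arrival deadline with the user-chosen certainty factor. The route names and travel-time samples are hypothetical placeholders.

```python
import numpy as np

def latest_start(travel_time_samples, deadline, certainty):
    """Latest departure time such that P(depart + travel <= deadline) >= certainty."""
    # the certainty-quantile of travel time must still fit before the deadline
    q = np.quantile(travel_time_samples, certainty)
    return deadline - q

# Hypothetical routes with sampled travel times in minutes
routes = {
    "motorway":  np.random.default_rng(1).normal(35.0, 10.0, 1000),
    "back_road": np.random.default_rng(2).normal(45.0, 3.0, 1000),
}
deadline = 9 * 60          # arrive by 09:00, expressed in minutes after midnight
certainty = 0.95           # user-set certainty factor

starts = {route: latest_start(t, deadline, certainty) for route, t in routes.items()}
best = max(starts, key=starts.get)   # route allowing the latest departure
# (tie-breaking on expected travel time, as the paper also aims for, is omitted here)
print(best, "- leave at minute", round(starts[best]))
```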

Relevance: 30.00%

Abstract:

FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

Relevance: 30.00%

Abstract:

The development of protocols for the identification of metal phosphates in phosphate-treated, metal-contaminated soils is a necessary yet problematical step in the validation of remediation schemes involving immobilization of metals as phosphate phases. The potential for Raman spectroscopy to be applied to the identification of these phosphates in soils has yet to be fully explored. With this in mind, a range of synthetic mixed-metal hydroxylapatites has been characterized and added to soils at known concentrations for analysis using both bulk X-ray powder diffraction (XRD) and Raman spectroscopy. Mixed-metal hydroxylapatites in the binary series Ca-Cd, Ca-Pb, Ca-Sr and Cd-Pb, synthesized in the presence of acetate and carbonate ions, were characterized using a range of analytical techniques including XRD, analytical scanning electron microscopy (SEM), infrared spectroscopy (IR), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and Raman spectroscopy. Only the Ca-Cd series displays complete solid solution, although under the synthesis conditions of this study the Cd5(PO4)3OH end member could not be synthesized as a pure phase. Within the Ca-Cd series the cell parameters, IR active modes and Raman active bands vary linearly as a function of Cd content. X-ray diffraction and extended X-ray absorption fine structure spectroscopy (EXAFS) suggest that the Cd is distributed across both the Ca(1) and Ca(2) sites, even at low Cd concentrations. In order to explore the likely detection limits for mixed-metal phosphates in soils for XRD and Raman spectroscopy, soils doped with mixed-metal hydroxylapatites at concentrations of 5, 1 and 0.5 wt.% were then studied. X-ray diffraction could not confirm unambiguously the presence or identity of mixed-metal phosphates in soils at concentrations below 5 wt.%. Raman spectroscopy proved a far more sensitive method for the identification of mixed-metal hydroxylapatites in soils, and could positively identify the presence of such phases at all the dopant concentrations used in this study. Moreover, Raman spectroscopy could also provide an accurate assessment of the degree of chemical substitution in the hydroxylapatites even when present in soils at concentrations as low as 0.1%.
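Since the abstract notes that the Raman active bands vary linearly with Cd content, the degree of substitution in an unknown sample can in principle be read off a linear calibration. The sketch below illustrates that idea only; the band positions and compositions are hypothetical placeholders, not measured values from the study.

```python
import numpy as np

# Hypothetical calibration set: known Cd mole fraction vs. observed Raman band position (cm^-1)
cd_fraction   = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
band_position = np.array([962.0, 959.5, 957.1, 954.8, 952.4])

# Linear calibration of band position against Cd content
slope, intercept = np.polyfit(cd_fraction, band_position, 1)

def estimate_cd(observed_band):
    """Invert the calibration line to estimate Cd substitution from a measured band position."""
    return (observed_band - intercept) / slope

print(f"estimated Cd fraction: {estimate_cd(956.0):.2f}")
```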

Relevance: 30.00%

Abstract:

Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean bio-chemistry, sea-ice, land-surface); and (3) testing and quality standards to ensure good performance on a variety of high-performance computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences. Copyright (c) 2005 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in a defined structure which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled; however, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming those requirements into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance these approaches for modelling individual users' needs and discovering their requirements.

Relevance: 30.00%

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to quantify the overheads of para-virtualization, as well as the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then on para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by privileged components. A virtualization system can host multiple guest operating systems, each running in its own domain, and it schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run: no modifications are needed in the guest OS or application, which are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware, and the device drivers in the guest operating system coordinate with the device drivers of the host operating system to reduce performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, which in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
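As a minimal sketch of the overhead comparison described above (with made-up timings and hypothetical benchmark names, not the paper's measurements), the same benchmark is timed on bare metal, under para-virtualization, and with monitoring/logging enabled, and the relative overhead of each configuration is reported:

```python
# Bare-metal run times in seconds (hypothetical values)
baseline = {"linpack": 120.0, "blas": 85.0}

# Run times for the same benchmarks under the virtualized configurations (hypothetical values)
measured = {
    "para-virtualized":       {"linpack": 126.0, "blas": 89.0},
    "para-virt + monitoring": {"linpack": 131.0, "blas": 93.0},
}

for config, times in measured.items():
    for bench, t in times.items():
        # overhead relative to the bare-metal baseline, as a percentage
        overhead = 100.0 * (t - baseline[bench]) / baseline[bench]
        print(f"{config:25s} {bench:8s} overhead: {overhead:5.1f}%")
```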

Relevance: 30.00%

Abstract:

In a glasshouse experiment using potted strawberry plants (cv. Cambridge Favourite) as hosts, the effect of selected fungal antagonists, grown on 25 or 50 g of mushroom compost containing autoclaved mycelia of Agaricus bisporus or on wheat bran, was evaluated against Armillaria mellea. Another glasshouse experiment tested the effect of the application time of the antagonists in relation to inoculations with the pathogen. A significant interaction was found between the antagonists, substrates and dose rates. All the plants treated with Chaetomium olivaceum isolate Co on 50 g wheat bran survived until the end of the experiment, which lasted 482 days, while none of them survived when this antagonist was added to the roots of the plants on 25 g wheat bran or 25 or 50 g mushroom compost. Dactylium dendroides isolate SP had a similar effect, although with a lower host survival rate of 33.3%. Trichoderma hamatum isolate Tham 1 and T. harzianum isolate Th23 protected 33.3% of the plants when added on 50 g and none when added on 25 g of either substrate, while 66.7% of the plants treated with T. harzianum isolate Th2 on 25 g, or T. viride isolate TO on 50 g wheat bran, survived. Application of the antagonists on mushroom compost initially resulted in development of more leaves and healthier plants, but this effect was not sustained. Eventually, plants treated with the antagonists on wheat bran had significantly more leaves and higher health scores. The plants treated with isolate Th2 and inoculated with Armillaria at the same time had a survival rate of 66.7% for the duration of the experiment (475 days), while none of them survived that long when the antagonist and pathogen were applied with an interval of 85 days in either sequence. C. olivaceum isolate Co showed a protective effect only, as 66.7% of the plants survived when they were treated with the antagonist 85 days before inoculation with the pathogen, while none of them survived when the antagonist and pathogen were applied together or the infection preceded protection.

Relevance: 30.00%

Abstract:

Climate change is one of the major challenges facing economic systems at the start of the 21st century. Reducing greenhouse gas emissions will require both restructuring the energy supply system (production) and addressing the efficiency and sufficiency of the social uses of energy (consumption). The energy production system is a complicated supply network of interlinked sectors with 'knock-on' effects throughout the economy. End-use energy consumption is governed by complex sets of interdependent cultural, social, psychological and economic variables driven by shifts in consumer preference and technological development trajectories. To date, few models have been developed for exploring alternative joint energy production-consumption systems. The aim of this work is to propose one such model. This is achieved in a methodologically coherent manner through the integration of qualitative input-output models of production with Bayesian belief network models of consumption at the point of final demand. The resulting integrated framework can be applied either (relatively) quickly and qualitatively to explore alternative energy scenarios, or as a fully developed quantitative model to derive or assess specific energy policy options. The qualitative applications are explored here.
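As a rough illustration of coupling a production-side input-output model to a consumption-side probabilistic node, the sketch below uses a standard two-sector Leontief calculation with final demand weighted by a discrete consumer-preference variable. This is a simplified, quantitative stand-in for the paper's qualitative framework; all coefficients, sector names and probabilities are hypothetical.

```python
import numpy as np

# Hypothetical technical-coefficients matrix for two sectors (energy, rest of economy)
A = np.array([[0.2, 0.1],
              [0.3, 0.4]])

# Final-demand vectors under two consumer-preference states, with assumed probabilities
demand = {"high_efficiency":   np.array([40.0, 100.0]),
          "business_as_usual": np.array([60.0, 100.0])}
p = {"high_efficiency": 0.3, "business_as_usual": 0.7}

# Leontief relation: total sectoral output x = (I - A)^-1 d
leontief_inverse = np.linalg.inv(np.eye(2) - A)

expected_demand = sum(p[s] * d for s, d in demand.items())   # demand at point of final demand
output = leontief_inverse @ expected_demand
print("expected sectoral output:", np.round(output, 1))
```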

Relevance: 30.00%

Abstract:

It is well understood that, for haptic interaction, free-motion performance and closed-loop constrained-motion performance have conflicting requirements. The difficulties for both conditions are compounded when increased workspace is required, as most solutions result in a reduction of achievable impedance and bandwidth. A method of chaining devices together to increase workspace without adverse effect on performance is described and analysed. The method is then applied to a prototype, colloquially known as 'The Flying Phantom', and shown to provide high-bandwidth, low-impedance interaction over the full range of horizontal movement across the front of a human user.

Relevance: 30.00%

Abstract:

This paper describes the SIMULINK implementation of a constrained predictive control algorithm based on quadratic programming and linear state space models, and its application to a laboratory-scale 3D crane system. The algorithm is compatible with Real-Time Windows Target and, in the case of the crane system, it can be executed with a sampling period of 0.01 s and a prediction horizon of up to 300 samples, using a linear state space model with 3 inputs, 5 outputs and 13 states.
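The sketch below shows the general QP-based receding-horizon idea behind such a controller; it is not the authors' SIMULINK implementation. A hypothetical 2-state, single-input model, a short horizon, and the weights and input bounds are all assumptions chosen for illustration (the paper's crane model has 3 inputs, 5 outputs and 13 states).

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical discrete-time model x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[1.0, 0.01], [0.0, 1.0]])
B = np.array([[0.0], [0.01]])
C = np.array([[1.0, 0.0]])

N = 20                      # prediction horizon (the paper reports up to 300 samples)
x0 = np.array([0.5, 0.0])   # current state
y_ref = 0.0                 # output set-point
u_max = 1.0                 # input constraint |u| <= u_max
r = 0.01                    # input weighting

nx, nu = B.shape
ny = C.shape[0]

# Prediction matrices: Y = F x0 + G U, with Y stacking y[k+1..k+N] and U stacking u[k..k+N-1]
F = np.zeros((N * ny, nx))
G = np.zeros((N * ny, N * nu))
Ak = np.eye(nx)
for i in range(N):
    Ak = A @ Ak                                   # A^(i+1)
    F[i * ny:(i + 1) * ny, :] = C @ Ak
    for j in range(i + 1):
        G[i * ny:(i + 1) * ny, j * nu:(j + 1) * nu] = C @ np.linalg.matrix_power(A, i - j) @ B

# Quadratic cost ||Y - Yref||^2 + r ||U||^2 written as one stacked least-squares problem
A_ls = np.vstack([G, np.sqrt(r) * np.eye(N * nu)])
b_ls = np.concatenate([np.full(N * ny, y_ref) - F @ x0, np.zeros(N * nu)])

# Solve with the input bounds enforced (the constrained QP step)
res = lsq_linear(A_ls, b_ls, bounds=(-u_max, u_max))
u0 = res.x[:nu]             # first move of the optimal input sequence (receding horizon)
print("first control move:", u0)
```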

Relevance: 30.00%

Abstract:

Synchronous collaborative systems allow geographically distributed users to form a virtual work environment, enabling cooperation between peers and enriching human interaction. The technology facilitating this interaction has been studied for several years, and various solutions are available at present. In this paper, we discuss our experiences with one such widely adopted technology, namely the Access Grid [1]. We describe our experiences with using this technology, identify key problem areas and propose our solution to tackle these issues appropriately. Moreover, we propose the integration of Access Grid with an Application Sharing tool developed by the authors. Our approach allows these integrated tools to utilise the enhanced features provided by our underlying dynamic transport layer.

Relevance: 30.00%

Abstract:

The paper describes how workflow-oriented, single-user Grid portals could be extended to meet the requirements of users with collaborative needs. Through collaborative Grid portals, different research and engineering teams would be able to share knowledge and resources. At the same time, the workflow concept ensures that the shared knowledge and computational capacity is aggregated to achieve the high-level goals of the group. The paper discusses the different issues collaborative support requires from Grid portal environments during the different phases of the workflow-oriented development work. While in the design period the most important task of the portal is to provide consistent and fault-tolerant data management, during workflow execution it must act upon the security framework its back-end Grids are built on.

Relevance: 30.00%

Abstract:

Space applications are challenged by the reliability of the parallel computing systems (FPGAs) employed in spacecraft, owing to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems which are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, and landscapes representing the computing space and problem are generated using MATLAB.

Relevance: 30.00%

Abstract:

The deployment of Quality of Service (QoS) techniques involves careful analysis of areas including business requirements, corporate strategy and the technical implementation process, which can lead to conflict or contradiction between the goals of the various user groups involved in policy definition. In addition, long-term change management presents a challenge, as these implementations typically require a high skill set and experience level, exposing organisations to effects such as “hyperthymestria” [1] and “The Seven Sins of Memory”, defined by Schacter and discussed further within this paper. It is proposed that, given the information embedded within the packets of IP traffic, an opportunity exists to augment traffic management with a machine-learning, agent-based mechanism. This paper describes the process by which current policies are defined and the research required to support the development of an application that enables adaptive, intelligent Quality of Service controls to augment or replace the policy-based mechanisms currently in use.

Relevance: 30.00%

Abstract:

Wireless Personal Area Networks (WPANs) offer high data rates suitable for interconnecting high-bandwidth personal consumer devices (Wireless HD streaming, Wireless-USB and Bluetooth EDR). ECMA-368 is the Physical (PHY) and Media Access Control (MAC) backbone of many of these wireless devices. WPAN devices tend to operate in ad-hoc networks, so it is important for a device to successfully latch onto the network and become part of one of the available piconets. This paper presents a new algorithm for detecting the Packet/Frame Sync (PFS) signal in ECMA-368 to identify piconets and aid symbol timing. The algorithm is based on correlating the received PFS symbols with the expected locally stored symbols over the 24 or 12 PFS symbols, but selecting the likely TFC based on the highest statistical mode of the 24 or 12 best correlation results. The results are very favorable, showing an improvement margin of the order of 11.5 dB in reference sensitivity tests between the required performance using this algorithm and the performance of comparable systems.
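A minimal sketch of the detection idea described in that abstract (under assumed data structures, not the paper's receiver code): correlate each received PFS symbol against locally stored reference symbols for every candidate TFC, then pick the TFC that wins most often, i.e. the statistical mode of the per-symbol best matches.

```python
import numpy as np

def detect_tfc(rx_symbols, ref_symbols):
    """rx_symbols:  (n_sym, sym_len) complex array of received PFS symbols (e.g. n_sym = 24 or 12).
       ref_symbols: dict mapping TFC id -> (n_sym, sym_len) complex array of expected symbols.
       Returns the most likely TFC id."""
    n_sym = rx_symbols.shape[0]
    best_per_symbol = []
    for i in range(n_sym):
        # correlation magnitude of received symbol i against each TFC's expected symbol i
        scores = {tfc: np.abs(np.vdot(ref[i], rx_symbols[i]))
                  for tfc, ref in ref_symbols.items()}
        best_per_symbol.append(max(scores, key=scores.get))
    # the statistical mode of the per-symbol winners decides the TFC
    ids, counts = np.unique(best_per_symbol, return_counts=True)
    return ids[np.argmax(counts)]

# Toy usage with random data (purely illustrative symbol length and TFC ids)
rng = np.random.default_rng(0)
refs = {tfc: rng.standard_normal((24, 128)) + 1j * rng.standard_normal((24, 128))
        for tfc in range(1, 8)}
rx = refs[3] + 0.5 * (rng.standard_normal((24, 128)) + 1j * rng.standard_normal((24, 128)))
print("detected TFC:", detect_tfc(rx, refs))
```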