941 results for end user computing application streaming horizon workspace portal vmware view
Abstract:
It is hardly news that the prevailing paradigm is Internet-based: more and more applications are changing their business model with respect to licensing and maintenance in order to offer the end user an application that is more affordable in terms of licensing and maintenance costs, since the applications are distributed, eliminating the capital and operational expenditure inherent in a centralised architecture. With the spread of Internet-based Application Programming Interfaces (APIs), developers can now build applications that use functionality made available by third parties without having to program it from scratch. In this context, the Google® application APIs allow applications to be distributed to a very broad market and integrated with productivity tools, representing an opportunity for the dissemination of ideas and concepts. This work describes the design and implementation of a platform, built with HTML5, JavaScript, PHP and MySQL and integrated with Google® Apps, which aims to let the user prepare budgets, from the calculation of composite cost prices and the preparation of sale prices to the drafting of the specifications document and the corresponding schedule.
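The abstract gives no implementation details; as a purely illustrative sketch of the budgeting logic it describes (all item names, the overhead rate and the margin below are assumptions, not values from the paper), the step from composite cost price to sale price might look roughly like this:

```python
# Illustrative sketch only: cost items, overhead rate and margin are hypothetical
# assumptions, not code or figures taken from the described platform.
from dataclasses import dataclass

@dataclass
class CostItem:
    description: str
    quantity: float
    unit_cost: float  # cost per unit, e.g. EUR

    @property
    def total(self) -> float:
        return self.quantity * self.unit_cost

def composite_cost(items: list[CostItem], overhead_rate: float = 0.10) -> float:
    """Composite cost price: direct costs plus a flat overhead percentage."""
    direct = sum(item.total for item in items)
    return direct * (1 + overhead_rate)

def sale_price(cost: float, margin: float = 0.25) -> float:
    """Sale price derived from the composite cost and a target margin."""
    return cost * (1 + margin)

if __name__ == "__main__":
    items = [
        CostItem("labour (hours)", 12, 18.50),
        CostItem("materials (m2)", 30, 7.20),
    ]
    cost = composite_cost(items)
    print(f"composite cost: {cost:.2f}  sale price: {sale_price(cost):.2f}")
```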
Abstract:
The primary objective of this study was to document the benefits and possible detriments of combining ipsilateral acoustic hearing in the cochlear implant ear of a patient with preserved low frequency residual hearing post cochlear implantation. The secondary aim was to examine the efficacy of various cochlear implant mapping and hearing aid fitting strategies in relation to electro-acoustic benefits.
Abstract:
This paper discusses a study to determine the effectiveness of the Hearing Aid Performance Inventory (HAPI) on hearing aid outcomes.
Abstract:
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Abstract:
The development of protocols for the identification of metal phosphates in phosphate-treated, metal-contaminated soils is a necessary yet problematical step in the validation of remediation schemes involving immobilization of metals as phosphate phases. The potential for Raman spectroscopy to be applied to the identification of these phosphates in soils has yet to be fully explored. With this in mind, a range of synthetic mixed-metal hydroxylapatites has been characterized and added to soils at known concentrations for analysis using both bulk X-ray powder diffraction (XRD) and Raman spectroscopy. Mixed-metal hydroxylapatites in the binary series Ca-Cd, Ca-Pb, Ca-Sr and Cd-Pb, synthesized in the presence of acetate and carbonate ions, were characterized using a range of analytical techniques including XRD, analytical scanning electron microscopy (SEM), infrared spectroscopy (IR), inductively coupled plasma-atomic emission spectrometry (ICP-AES) and Raman spectroscopy. Only the Ca-Cd series displays complete solid solution, although under the synthesis conditions of this study the Cd5(PO4)3OH end member could not be synthesized as a pure phase. Within the Ca-Cd series the cell parameters, IR active modes and Raman active bands vary linearly as a function of Cd content. X-ray diffraction and extended X-ray absorption fine structure spectroscopy (EXAFS) suggest that the Cd is distributed across both the Ca(1) and Ca(2) sites, even at low Cd concentrations. In order to explore the likely detection limits for mixed-metal phosphates in soils for XRD and Raman spectroscopy, soils doped with mixed-metal hydroxylapatites at concentrations of 5, 1 and 0.5 wt.% were then studied. X-ray diffraction could not confirm unambiguously the presence or identity of mixed-metal phosphates in soils at concentrations below 5 wt.%. Raman spectroscopy proved a far more sensitive method for the identification of mixed-metal hydroxylapatites in soils, positively identifying the presence of such phases at all the dopant concentrations used in this study. Moreover, Raman spectroscopy could also provide an accurate assessment of the degree of chemical substitution in the hydroxylapatites even when present in soils at concentrations as low as 0.1%.
Abstract:
Europe's widely distributed climate modelling expertise, now organized in the European Network for Earth System Modelling (ENES), is both a strength and a challenge. Recognizing this, the European Union's Program for Integrated Earth System Modelling (PRISM) infrastructure project aims at designing a flexible and user-friendly environment to assemble, run and post-process Earth System models. PRISM was started in December 2001 with a duration of three years. This paper presents the major stages of PRISM, including: (1) the definition and promotion of scientific and technical standards to increase component modularity; (2) the development of an end-to-end software environment (graphical user interface, coupling and I/O system, diagnostics, visualization) to launch, monitor and analyse complex Earth system models built around state-of-the-art community component models (atmosphere, ocean, atmospheric chemistry, ocean bio-chemistry, sea-ice, land-surface); and (3) testing and quality standards to ensure high performance on a variety of high-performance computing platforms. PRISM is emerging as a core strategic software infrastructure for building the European research area in Earth system sciences. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering (RE) are capable of modelling the common user's behaviour in the domain of knowledge construction. The user's requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming those requirements into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflow from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualized systems, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The “apparent” improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and secondly it is necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
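The abstract does not include measurement code; a minimal sketch of the kind of overhead comparison it describes might look like the following, where the benchmark command, run count and timing figures are assumptions for illustration, not taken from the study:

```python
# Illustrative sketch of comparing benchmark run times on bare metal vs. a
# para-virtualized guest. The benchmark binary and example figures are hypothetical.
import statistics
import subprocess
import time

BENCHMARK_CMD = ["./linpack_bench"]  # hypothetical locally built Netlib-style benchmark
RUNS = 5

def time_benchmark(cmd: list[str], runs: int = RUNS) -> float:
    """Return the median wall-clock time (seconds) over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def overhead_percent(bare_metal_s: float, virtualized_s: float) -> float:
    """Relative slowdown of the virtualized run over the bare-metal baseline."""
    return 100.0 * (virtualized_s - bare_metal_s) / bare_metal_s

if __name__ == "__main__":
    # Each timing would be collected on the corresponding system: bare metal,
    # the para-virtualized guest, then the guest with monitoring/logging enabled.
    # bare_s = time_benchmark(BENCHMARK_CMD)   # run on bare metal
    # virt_s = time_benchmark(BENCHMARK_CMD)   # run inside the guest
    bare_s, virt_s = 50.0, 52.3  # illustrative figures only
    print(f"para-virtualization overhead: {overhead_percent(bare_s, virt_s):.1f} %")
```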
Abstract:
In a glasshouse experiment using potted strawberry plants (cv. Cambridge Favourite) as hosts, the effect of selected fungal antagonists, grown on 25 or 50 g of mushroom compost containing autoclaved mycelia of Agaricus bisporus or on wheat bran, was evaluated against Armillaria mellea. Another glasshouse experiment tested the effect of application time of the antagonists in relation to inoculations with the pathogen. A significant interaction was found between the antagonists, substrates and dose rates. All the plants treated with Chaetomium olivaceum isolate Co on 50 g wheat bran survived until the end of the experiment, which lasted 482 days, while none of them survived when this antagonist was added to the roots of the plants on 25 g wheat bran or 25 or 50 g mushroom compost. Dactylium dendroides isolate SP had a similar effect, although with a lower host survival rate of 33.3%. Trichoderma hamatum isolate Tham 1 and T. harzianum isolate Th23 protected 33.3% of the plants when added on 50 g and none when added on 25 g of either substrate, while 66.7% of the plants treated with T. harzianum isolate Th2 on 25 g, or T. viride isolate TO on 50 g wheat bran, survived. Application of the antagonists on mushroom compost initially resulted in development of more leaves and healthier plants, but this effect was not sustained. Eventually, plants treated with the antagonists on wheat bran had significantly more leaves and higher health scores. The plants treated with isolate Th2 and inoculated with Armillaria at the same time had a survival rate of 66.7% for the duration of the experiment (475 days), while none of them survived that long when the antagonist and pathogen were applied with an interval of 85 days in either sequence. C. olivaceum isolate Co showed a protective effect only, as 66.7% of the plants survived when they were treated with the antagonist 85 days before inoculation with the pathogen, while none of them survived when the antagonist and pathogen were applied together or the infection preceded protection.
Abstract:
Climate change is one of the major challenges facing economic systems at the start of the 21st century. Reducing greenhouse gas emissions will require both restructuring the energy supply system (production) and addressing the efficiency and sufficiency of the social uses of energy (consumption). The energy production system is a complicated supply network of interlinked sectors with 'knock-on' effects throughout the economy. End-use energy consumption is governed by complex sets of interdependent cultural, social, psychological and economic variables driven by shifts in consumer preference and technological development trajectories. To date, few models have been developed for exploring alternative joint energy production-consumption systems. The aim of this work is to propose one such model. This is achieved in a methodologically coherent manner through the integration of qualitative input-output models of production with Bayesian belief network models of consumption at the point of final demand. The resulting integrated framework can be applied either (relatively) quickly and qualitatively to explore alternative energy scenarios, or as a fully developed quantitative model to derive or assess specific energy policy options. The qualitative applications are explored here.
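The abstract does not specify the network structure; as a purely hypothetical illustration of the Bayesian-belief-network side of such a framework, a two-node discrete network linking a consumer-preference variable to end-use energy demand could be sketched as follows (all variable names and probabilities are invented for illustration, not values from the model):

```python
# Hypothetical two-node Bayesian belief network: consumer preference -> heating demand.
# All states and probabilities below are illustrative assumptions, not model values.

# Prior over consumer preference for energy-efficient heating.
p_preference = {"efficiency_minded": 0.4, "comfort_minded": 0.6}

# Conditional probability table: P(heating demand | preference).
p_demand_given_pref = {
    "efficiency_minded": {"low": 0.7, "medium": 0.25, "high": 0.05},
    "comfort_minded":    {"low": 0.2, "medium": 0.45, "high": 0.35},
}

def marginal_demand() -> dict[str, float]:
    """Marginalise preference out to obtain P(heating demand)."""
    demand = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for pref, p_pref in p_preference.items():
        for level, p_level in p_demand_given_pref[pref].items():
            demand[level] += p_pref * p_level
    return demand

if __name__ == "__main__":
    for level, prob in marginal_demand().items():
        print(f"P(demand={level}) = {prob:.3f}")
```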
Abstract:
It is well understood that, for haptic interaction, free-motion performance and closed-loop constrained-motion performance have conflicting requirements. The difficulties for both conditions are compounded when increased workspace is required, as most solutions result in a reduction of achievable impedance and bandwidth. A method of chaining devices together to increase workspace without adverse effect on performance is described and analysed. The method is then applied to a prototype, colloquially known as 'The Flying Phantom', and shown to provide high-bandwidth, low-impedance interaction over the full range of horizontal movement across the front of a human user.