18 results for Execute

in CentAUR: Central Archive University of Reading - UK


Relevance:

10.00%

Publisher:

Abstract:

Active Networks can be seen as an evolution of the classical model of packet-switched networks. The traditional, “passive” network model is based on a static definition of the network node behaviour. Active Networks propose an “active” model in which the intermediate nodes (switches and routers) can load and execute user code contained in the data units (packets). Active Networks are a programmable network model, where bandwidth and computation are both considered shared network resources. This approach opens up interesting new research fields. This paper gives a short introduction to Active Networks, discusses the advantages they introduce and presents the research advances in this field.
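
To make the load-and-execute idea concrete, the following is a minimal, hypothetical sketch of an active packet in Python. The class names and the exec-based handler are illustrative only and are not drawn from any real active-network platform; production systems sandbox and resource-limit the embedded code far more carefully.

```python
# Hypothetical sketch of the active-packet idea: a packet carries both a
# payload and a small program that each intermediate node may execute.
class ActivePacket:
    def __init__(self, payload, handler_src):
        self.payload = payload
        self.handler_src = handler_src  # user code shipped inside the packet

class Node:
    """An 'active' switch/router that runs the packet's code on arrival."""
    def __init__(self, name):
        self.name = name

    def process(self, packet):
        env = {"payload": packet.payload, "node": self.name}
        # Real platforms execute this inside a safe, resource-limited
        # environment; bare exec() is used here purely for illustration.
        exec(packet.handler_src, {"__builtins__": {}}, env)
        packet.payload = env["payload"]
        return packet

pkt = ActivePacket(payload=b"data", handler_src="payload = payload.upper()")
print(Node("router-1").process(pkt).payload)  # b'DATA'
```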

Relevance:

10.00%

Publisher:

Abstract:

This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active network technology provides the lower-level layer that makes possible the deployment of code implementing teleo-reactive agents, distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. The reasoner uses an active network management architecture to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, and provides a powerful system capable of performing high-level management tasks in order to deal with network faults.
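
The reasoner in the paper is implemented in Reactive Golog over a Situation Calculus model; as a language-neutral illustration of the teleo-reactive agents it deploys into the network, here is a minimal sketch of a teleo-reactive rule loop in Python. The fault conditions and repair actions are hypothetical examples, not the paper's actual rules.

```python
# Teleo-reactive agent sketch: an ordered list of (condition, action) rules.
# On each cycle the first rule whose condition holds in the observed network
# state fires; higher-priority (more urgent) rules come first.
def link_down(state):  return state.get("link") == "down"
def high_loss(state):  return state.get("loss", 0.0) > 0.05

def reroute(state):    state["link"] = "rerouted"; print("rerouting traffic")
def throttle(state):   state["loss"] = 0.0;        print("throttling senders")
def idle(state):       pass

RULES = [
    (link_down, reroute),
    (high_loss, throttle),
    (lambda state: True, idle),  # default rule: nothing to repair
]

def tr_step(state):
    for condition, action in RULES:
        if condition(state):
            action(state)
            break

state = {"link": "down", "loss": 0.10}
for _ in range(3):
    tr_step(state)  # fires reroute, then throttle, then the idle default
```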

Relevance:

10.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we assess the overheads of running various benchmarks on bare metal as well as under para-virtualization, and also the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run; no modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization, an OS-assisted form of virtualization, requires modification of the guest operating systems that run on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and in return provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overheads in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
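
The measurement procedure described above (run each benchmark on bare metal, then under each para-virtualized system, and finally with monitoring and logging switched on) can be summarised by a small timing harness. The sketch below is purely illustrative: the benchmark command is a placeholder, not one of the authors' actual Netlib runs.

```python
# Illustrative timing harness: run the same benchmark binary repeatedly on
# the current host (bare metal or a guest VM) and report the median
# wall-clock time, so overheads can be computed against the bare-metal run.
import statistics
import subprocess
import time

BENCH_CMD = ["./linpack_bench"]  # placeholder for a Netlib benchmark binary

def time_benchmark(cmd, repeats=5):
    """Return the median wall-clock seconds over `repeats` runs of `cmd`."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    median = time_benchmark(BENCH_CMD)
    print(f"median runtime on this host: {median:.3f}s")
    # Overhead of a virtualized host relative to bare metal:
    #   overhead = (t_virtualized - t_bare_metal) / t_bare_metal
```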

Relevance:

10.00%

Publisher:

Abstract:

Previous studies have demonstrated that when we observe somebody else executing an action, many areas of our own motor system are active. It has been argued that these motor activations are evidence that we motorically simulate observed actions; this motoric simulation may support various functions such as imitation and action understanding. However, whether motoric simulation is indeed the function of motor activations during action observation is controversial, due to inconsistent findings. Previous studies have demonstrated dynamic modulations in motor activity when we execute actions. Therefore, if we do motorically simulate observed actions, our motor systems should also be modulated dynamically, and in a corresponding fashion, during action observation. Using magnetoencephalography (MEG), we recorded the cortical activity of human participants while they observed actions performed by another person. Here, we show that activity in the human motor system is indeed modulated dynamically during action observation. This finding can explain why studies of action observation using functional magnetic resonance imaging (fMRI) have reported conflicting results, and is consistent with the hypothesis that we motorically simulate observed actions.

Relevance:

10.00%

Publisher:

Abstract:

The level set method is commonly used for image noise removal. Existing studies concentrate mainly on determining the speed function of the evolution equation. Based on the idea of the Canny operator, this letter introduces a new method of controlling the level set evolution, in which the edge strength is taken into account when choosing curvature flows for the speed function, and the normal-to-edge direction is used to orient the diffusion of the moving interface. The addition of an energy term to penalize irregularity allows for better preservation of local edge information. In contrast with previous Canny-based level set methods, which usually adopt a two-stage framework, the proposed algorithm can execute all of the above operations in a single process during noise removal.
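
As a rough illustration of this kind of scheme, the sketch below implements a generic edge-weighted curvature flow in Python/NumPy: an edge-strength term slows diffusion near strong gradients so that edges survive the smoothing. It is a simplified stand-in under stated assumptions, not the letter's exact evolution equation or energy term.

```python
# Edge-weighted level-set smoothing sketch: curvature-driven diffusion whose
# speed is damped by an edge-strength weight computed from the input image.
import numpy as np

def edge_weight(image, k=10.0):
    """Weight g in (0, 1]: close to 0 near strong edges, 1 in flat regions."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + (gx**2 + gy**2) / k**2)

def denoise(image, steps=50, dt=0.1, eps=1e-8):
    u = image.astype(float).copy()
    g = edge_weight(u)                        # fixed weights from the input
    for _ in range(steps):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx**2 + gy**2) + eps
        nx, ny = gx / mag, gy / mag           # unit normal to the level sets
        kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # curvature
        u += dt * g * kappa * mag             # diffusion slows where g is small
    return u
```

Applied to a noisy 2-D array, this smooths homogeneous regions while the weight g holds back diffusion across strong edges.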

Relevance:

10.00%

Publisher:

Abstract:

With the transition to multicore processors almost complete, the parallel processing community is seeking efficient ways to port legacy message passing applications to shared memory and multicore processors. MPJ Express is our reference implementation of Message Passing Interface (MPI)-like bindings for the Java language. Starting with the current release, the MPJ Express software can be configured in two modes: the multicore mode and the cluster mode. In the multicore mode, parallel Java applications execute on shared memory or multicore processors. In the cluster mode, Java applications parallelized using MPJ Express can be executed on distributed memory platforms such as compute clusters and clouds. The multicore device has been implemented using Java threads in order to satisfy the two main design goals of portability and performance. We also discuss the challenges of integrating the multicore device into the MPJ Express software. This turned out to be a challenging task because in the multicore mode the parallel application executes in a single JVM, whereas in the cluster mode the parallel user application executes in multiple JVMs. Owing to these inherent architectural differences between the two modes, the MPJ Express runtime was modified to ensure correct semantics of the parallel program. Finally, we compare the performance of MPJ Express (multicore mode) with other C and Java message passing libraries---including mpiJava, MPJ/Ibis, MPICH2, and MPJ Express (cluster mode)---on shared memory and multicore processors. We found that MPJ Express performs significantly better in the multicore mode than in the cluster mode, and that it also performs better than other Java messaging libraries, including mpiJava and MPJ/Ibis, when used in the multicore mode on shared memory or multicore processors. We also demonstrate the effectiveness of the MPJ Express multicore device in Gadget-2, a massively parallel astrophysics N-body simulation code.
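
MPJ Express itself is a Java library; purely for illustration, the same message-passing pattern is shown below using mpi4py, a Python MPI binding, to keep the code examples in this document in one language. The point is that the program is identical whether the ranks are mapped to threads in one address space or to processes on a cluster, mirroring the multicore/cluster modes described above.

```python
# Point-to-point message passing, the style of program MPJ Express runs in
# either mode. Written here with mpi4py for illustration.
# Run with e.g.:  mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    for dest in range(1, size):          # rank 0 greets every other rank
        comm.send(f"hello rank {dest}", dest=dest, tag=0)
else:
    msg = comm.recv(source=0, tag=0)     # every other rank receives once
    print(f"rank {rank}/{size} received: {msg}")
```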

Relevance:

10.00%

Publisher:

Abstract:

Recently a substantial amount of research has been done in the field of dextrous manipulation and hand manoeuvres. The main concern has been how to control robot hands so that they can execute manipulation tasks with the same dexterity and intuition as human hands. This paper surveys multi-fingered robot hand research and development topics, including robot hand design, object force distribution and control, the grip transform, grasp stability and its synthesis, grasp stiffness, compliant motion and robot arm-hand coordination. Three main topics are presented in this article. The first is an introduction to the subject. The second concentrates on examples of mechanical manipulators used in research and the methods employed to control them. The third presents work which has been done in the field of object manipulation.

Relevance:

10.00%

Publisher:

Abstract:

Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user can thus obtain something approaching the performance of a supercomputer by using the spare cycles of other workstations.
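
A minimal sketch of the work-spreading idea, with local processes standing in for workstations in the cluster: each worker runs an independent batch of Monte Carlo samples and only the final counts are combined. Estimating pi is a placeholder workload; the particle-growth simulation from the abstract is not reproduced here.

```python
# Embarrassingly parallel Monte Carlo: independent batches, one reduction.
import random
from multiprocessing import Pool

def batch_hits(n_samples):
    """Count random points in the unit square that land inside the circle."""
    rng = random.Random()
    return sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )

if __name__ == "__main__":
    n_workers, per_worker = 8, 250_000
    with Pool(n_workers) as pool:
        # No communication between batches until the final sum, which is
        # why spreading the work across machines scales so well.
        hits = sum(pool.map(batch_hits, [per_worker] * n_workers))
    print("pi is approximately", 4.0 * hits / (n_workers * per_worker))
```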

Relevance:

10.00%

Publisher:

Abstract:

Recent research in cognitive neuroscience has found that observation of human actions activates the ‘mirror system’ and provokes automatic imitation to a greater extent than observation of non-biological movements. The present study investigated whether this human bias depends primarily on phylogenetic or ontogenetic factors by examining the effects of sensorimotor experience on automatic imitation of non-biological, robotic stimuli. Automatic imitation of human and robotic action stimuli was assessed before and after training. During these test sessions, participants were required to execute a pre-specified response (e.g. to open their hand) while observing a human or robotic hand making a compatible (opening) or incompatible (closing) movement. During training, participants executed opening and closing hand actions while observing compatible (group CT) or incompatible (group IT) movements of a robotic hand. Compatible, but not incompatible, training increased automatic imitation of robotic stimuli (speed of responding on compatible trials, compared with incompatible trials) and abolished the human bias observed at pre-test. These findings suggest that the development of the mirror system depends on sensorimotor experience, and that, in our species, it is biased in favour of human action stimuli because these are more abundant than non-biological action stimuli in typical developmental environments.

Relevance:

10.00%

Publisher:

Abstract:

The paper highlights the methodological development of identifying and characterizing rice (Oryza sativa L.) ecosystems and the varietal deployment process through participatory approaches. Farmers have intricate knowledge of their rice ecosystems. Evidence from the Begnas (mid-hill) and Kachorwa (plain) sites in Nepal suggests that farmers distinguish ecosystems for rice primarily on the basis of the moisture and fertility of soils. Farmers also differentiate the number, relative size and specific characteristics of each ecosystem within a given geographic area. They allocate individual varieties to each ecosystem based on the principle of ‘best fit’ between ecosystem characteristics and varietal traits, indicating that competition between varieties mainly occurs within ecosystems. Land use and ecosystems determine rice genetic diversity, with marginal land having fewer varietal options than more productive areas. Modern varieties are mostly confined to productive land, whereas landraces are adapted to marginal ecosystems. Researchers need a better understanding of the ecosystems and of varietal distribution within them in order to plan and execute programmes on on-farm agrobiodiversity conservation, diversity deployment, repatriation of landraces and monitoring of varietal diversity. Simple and practical ways to elicit information on rice ecosystems and associated varieties through farmers’ group discussions at village level are suggested.

Relevance:

10.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to explore, from a practical point of view, a number of key strategic issues that critically influence organisations' competitiveness. Design/methodology/approach – The paper is based on a semi-structured interview with Mr Paul Walsh, CEO of Diageo. Diageo is a highly successful company and Mr Walsh has played a central role in making Diageo the number one branded drinks company in the world. Findings – The paper discusses the key attributes of a successful merger, lessons from a complex cross-border acquisition, the rationale for strategic alliances with competitors, distinctive resources, and the role of corporate social responsibility. Research limitations/implications – It is not often that management scholars have the opportunity to discuss the rationale behind key strategic decisions with the CEOs of large multinationals. In this paper these issues are explored from the perspective of the CEO of a large and successful company. The lessons, while not generalisable, offer unique insights to students of management and management researchers. Originality/value – The paper offers a bridge between theory and practice. It demonstrates that, from Diageo's perspective, the distinctive capabilities are intangible. It also offers insight into how to successfully execute strategic decisions. In terms of originality, it offers a view from the top, which is often missing from strategy research.

Relevance:

10.00%

Publisher:

Abstract:

To bridge the gaps between traditional mesoscale modelling and microscale modelling, the National Center for Atmospheric Research, in collaboration with other agencies and research groups, has developed an integrated urban modelling system coupled to the weather research and forecasting (WRF) model as a community tool to address urban environmental issues. The core of this WRF/urban modelling system consists of the following: (1) three methods, with different degrees of freedom, to parameterize urban surface processes, ranging from a simple bulk parameterization to a sophisticated multi-layer urban canopy model with an indoor–outdoor exchange sub-model that directly interacts with the atmospheric boundary layer; (2) coupling to fine-scale computational fluid dynamics models (Reynolds-averaged Navier–Stokes and large-eddy simulation) for transport and dispersion (T&D) applications; (3) procedures to incorporate high-resolution urban land use, building morphology, and anthropogenic heating data using the National Urban Database and Access Portal Tool (NUDAPT); and (4) an urbanized high-resolution land data assimilation system. This paper provides an overview of this modelling system; addresses the daunting challenges of initializing the coupled WRF/urban model and of specifying the potentially vast number of parameters required to execute it; explores the model sensitivity to these urban parameters; and evaluates the ability of WRF/urban to capture urban heat islands, complex boundary-layer structures aloft, and urban plume T&D for several major metropolitan regions. Recent applications of this modelling system illustrate its promising utility, as a regional climate-modelling tool, for investigating the impacts of future urbanization on regional meteorological conditions and on air quality under future climate change scenarios. Copyright © 2010 Royal Meteorological Society

Relevance:

10.00%

Publisher:

Abstract:

What are the main causes of international terrorism? Despite the meticulous examination of various candidate explanations, existing estimates still diverge in sign, size, and significance. This article puts forward a novel explanation and supporting evidence. We argue that domestic political instability provides the learning environment needed to successfully execute international terror attacks. Using a yearly panel of 123 countries over 1973–2003, we find that the occurrence of civil wars increases fatalities and the number of international terrorist acts by 45%. These results hold for alternative indicators of political instability, estimators, subsamples, subperiods, and accounting for competing explanations.

Relevance:

10.00%

Publisher:

Abstract:

In the resource-based view, organisations are represented by the sum of their physical, human and organisational assets, resources and capabilities. Operational capabilities maintain the status quo and allow an organisation to execute its existing business. Dynamic capabilities, by contrast, allow an organisation to change this status quo, including changing its operational capabilities. Competitive advantage, in this context, is an effect of continuously developing and reconfiguring these firm-specific assets through dynamic capabilities. Deciding where and how to source the core operational capabilities is a key success factor, and developing its dynamic capabilities allows an organisation to effectively manage change in its operational capabilities. Many organisations are asserted to have a high dependency on, as well as a high benefit from, the use of information technology (IT), making it a crucial and overarching resource. Furthermore, the IT function is assigned the role of a change enabler, so IT sourcing affects the capability of managing business change. IT sourcing means that organisations need to decide how to source their IT capabilities. Outsourcing parts of the IT function also outsources some of the IT capabilities, and therefore some of the business capabilities. As a result, IT sourcing has an impact on the organisation's capabilities and consequently on business success. Finally, a turbulent and fast-moving business environment challenges organisations to manage business change effectively and efficiently. Our research builds on the existing theory of dynamic and operational capabilities by considering the interdependencies between the dynamic capabilities of business change and IT sourcing. Further, it examines the decision-making oversight of these areas as implemented through IT governance. We introduce a new conceptual framework derived from the existing theory and extended through an illustrative case study conducted in a German bank. Under a philosophical paradigm of constructivism, we collected data from eight semi-structured interviews and used additional sources of evidence in the form of annual accounts, strategy papers and IT benchmark reports. We applied an Interpretative Phenomenological Analysis (IPA), from which the superordinate themes for our tentative capabilities framework emerged. An understanding of these interdependencies enables scholars and professionals to improve business success by effectively managing business change and evaluating the impact of IT sourcing decisions on the organisation's operational and dynamic capabilities.

Relevance:

10.00%

Publisher:

Abstract:

Organisations typically define and execute their selected strategy by developing and managing a portfolio of projects. The governance of this portfolio has proved to be a major challenge, particularly for large organisations, and executives and managers face even greater pressures when the nature of the strategic landscape is uncertain. This paper explores approaches for dealing with different levels of certainty in business IT projects and provides a contingent governance framework. Historically, business IT projects have relied on a structured sequential approach, also referred to as a waterfall method. There is a distinction between the development stages of a solution and the management stages of a project that delivers the solution, although these are often integrated in a business IT systems project. Prior research has demonstrated that the level of certainty varies between development projects: there can be uncertainty about what needs to be developed and also about how the solution should be developed. The move to agile development and management reflects a greater level of uncertainty, often on both dimensions, and this has led to the adoption of more iterative approaches. What has been less well researched is the impact of uncertainty on the governance of the change portfolio and the corresponding implications for business executives. This paper poses this research question and proposes a governance framework to address these aspects. The governance framework has been reviewed in the context of a major anonymised organisation, FinOrg. Findings are reported in this paper, with a focus on the need to apply different approaches; in particular, the governance of uncertain business change is contrasted with the management approach for defined IT projects. Practical outputs from the paper include a consideration of some innovative approaches that can be used by executives, and an investigation of the role of the business change portfolio group in evaluating and executing the appropriate level of governance. These results lead to recommendations for executives and to proposals for further research.