Abstract:
We have developed a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.
I. Introduction
Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
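As a rough software analogue of the mutation array described above, the following minimal sketch applies per-bit mutation to a small population of binary-encoded chromosomes. The population size, chromosome length, and mutation rate are illustrative assumptions, not values from the paper.

```python
import random

def mutate(chromosome, rate=0.01):
    """Flip each bit of a binary-encoded chromosome with probability `rate`.

    Software analogue of the per-bit mutation the systolic array performs in
    hardware; the rate of 0.01 is an illustrative assumption.
    """
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

# Example: a 5-chromosome population of 8-bit individuals, echoing the toy
# 5-chromosome mutation array mentioned in the abstract (length is arbitrary).
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(5)]
mutated = [mutate(c) for c in population]
print(mutated)
```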
Abstract:
Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
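To illustrate the protocol the abstract refers to, the sketch below parses the core parameters of an OGC WMS GetMap request. The parameter names follow the WMS specification; the handler itself is a generic illustration and is not the App Engine implementation described in the paper.

```python
from urllib.parse import parse_qs

def parse_getmap(query_string):
    """Extract the core parameters of a WMS GetMap request.

    Parameter names (LAYERS, BBOX, WIDTH, HEIGHT, FORMAT) come from the OGC
    WMS specification; this parser is only an illustrative sketch.
    """
    params = {k.upper(): v[0] for k, v in parse_qs(query_string).items()}
    bbox = [float(x) for x in params["BBOX"].split(",")]  # minx, miny, maxx, maxy
    return {
        "layers": params["LAYERS"].split(","),
        "bbox": bbox,
        "width": int(params["WIDTH"]),
        "height": int(params["HEIGHT"]),
        "format": params.get("FORMAT", "image/png"),
    }

# Example GetMap query string (values are illustrative)
qs = "SERVICE=WMS&REQUEST=GetMap&LAYERS=imagery&BBOX=-10,50,0,60&WIDTH=256&HEIGHT=256&FORMAT=image/png"
print(parse_getmap(qs))
```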
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest; and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ between test statistics monitored as part of the sequential test. It can be difficult to quantify ρ and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of correlation from data collected as part of the trial. An adaptive approach is proposed and evaluated that makes use of these formulas and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
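As a generic illustration of estimating correlation from accumulating bivariate trial data, the sketch below computes the sample Pearson correlation between paired efficacy and safety responses. This is only a stand-in estimator; the paper derives its own formulas for the correlation between the monitored test statistics, and the data values shown are invented for the example.

```python
import math

def sample_correlation(xs, ys):
    """Pearson sample correlation between paired efficacy/safety responses.

    A generic estimator used for illustration; the paper's formulas for the
    correlation between sequential test statistics may differ.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Example: interim efficacy and safety measurements from the same patients
efficacy = [1.2, 0.8, 1.5, 0.9, 1.1]
safety = [0.3, 0.5, 0.2, 0.6, 0.4]
print(f"estimated correlation: {sample_correlation(efficacy, safety):.2f}")
```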
Abstract:
Theoretical understanding of the implementation and use of innovations within construction contexts is discussed and developed. It is argued that both the rhetoric of the 'improvement agenda' within construction and theories of innovation fail to account for the complex contexts and disparate perspectives which characterize construction work. To address this, the concept of relative boundedness is offered. Relatively unbounded innovation is characterized by a lack of a coherent central driving force or mediator with the ability to reconcile potential conflicts and overcome resistance to implementation. This is a situation not exclusive to, but certainly indicative of, much construction project work. Drawing on empirical material from the implementation of new design and coordination technologies on a large construction project, the concept is developed, concentrating on the negotiations and translations that implementation mobilized. An actor-network theory (ANT) approach is adopted, which emphasizes the roles that both human actors and non-human agents play in the performance and outcomes of these interactions. Three aspects of how relative boundedness is constituted and affected are described: through the robustness of existing practices and expectations, through the delegation of interests onto technological artefacts, and through the mobilization of actors and artefacts to constrain and limit the scope of negotiations over new technology implementation.
Abstract:
The role of users is an often-overlooked aspect of studies of innovation and diffusion. Using an actor-network theory (ANT) approach, four case studies examine the processes of implementing a piece of CAD (computer aided design) software, BSLink, in different organisations and describe the tailoring done by users to embed the software into working practices. This not only results in different practices of use at different locations, but also transforms BSLink itself into a proliferation of BSLinks-in-use. A focus group for BSLink users further reveals the gaps between different users' expectations and ways of using the software, and between different BSLinks-in-use. It also demonstrates the contradictory demands this places on its further development. The ANT-informed approach used treats both innovation and diffusion as processes of translation within networks. It also emphasises the political nature of innovation and implementation, and the efforts of various actors to delegate manoeuvres for increased influence onto technological artefacts.
Abstract:
Processor virtualization for process migration in distributed parallel computing systems has formed a significant component of research on load balancing. In contrast, the potential of processor virtualization for fault tolerance has received little attention. The work reported in this paper extends concepts of processor virtualization towards ‘intelligent cores’ as a means to achieve fault tolerance in distributed parallel computing systems. Intelligent cores are an abstraction of the hardware processing cores, with the incorporation of cognitive capabilities, on which parallel tasks can be executed and migrated. When a processing core executing a task is predicted to fail, the task being executed is proactively transferred onto another core. A parallel reduction algorithm incorporating concepts of intelligent cores is implemented on a computer cluster using Adaptive MPI and Charm++. Preliminary results confirm the feasibility of the approach.
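The proactive step the abstract describes can be sketched, in plain Python rather than Charm++/AMPI, as: predict which cores are at risk, then move their tasks to healthy idle cores before failure occurs. The core names, failure-prediction predicate, and task model below are all illustrative assumptions, not the paper's implementation.

```python
import random

class Core:
    """Toy abstraction of a processing core hosting at most one migratable task."""
    def __init__(self, name):
        self.name = name
        self.task = None

    def predicted_to_fail(self):
        # Stand-in for the 'cognitive' failure prediction attributed to
        # intelligent cores; here it is simply a random illustrative predicate.
        return random.random() < 0.1

def proactive_migrate(cores):
    """Move tasks off cores predicted to fail onto healthy idle cores."""
    at_risk = {c.name: c.predicted_to_fail() for c in cores}
    healthy_idle = [c for c in cores if not at_risk[c.name] and c.task is None]
    for core in cores:
        if core.task is not None and at_risk[core.name] and healthy_idle:
            target = healthy_idle.pop()
            target.task, core.task = core.task, None
            print(f"migrated {target.task} from {core.name} to {target.name}")

# Example: two slices of a parallel reduction assigned to two of four cores
cores = [Core(f"core{i}") for i in range(4)]
cores[0].task = "partial-sum-0"
cores[1].task = "partial-sum-1"
proactive_migrate(cores)
```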