946 results for Ephemeral Computation
Abstract:
Wireless sensor networks generally rely on a many-to-one communication approach for data gathering. This approach is extremely susceptible to the sinkhole attack, in which an intruder attracts surrounding nodes with spurious routing information and then performs selective forwarding or alters the data passing through it. A sinkhole attack poses a serious threat to sensor networks, particularly because sensor nodes are usually deployed in open areas and have limited computational and battery power. To detect the intruder behind a sinkhole attack, this paper proposes an algorithm that first identifies a group of suspected nodes by analysing the consistency of their data, and then efficiently recognises the intruder within that group by checking network flow information. The proposed algorithm's performance has been evaluated using numerical analysis and simulations, verifying its accuracy and efficiency.
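The two-stage idea described in this abstract — flag suspects by data consistency, then single out the intruder using network-flow information — can be sketched roughly as follows. All data structures, node names, and the deviation threshold here are illustrative stand-ins, not the paper's actual method.

```python
# Stage 1: flag nodes whose reported readings deviate from the group mean.
def find_suspects(readings, threshold=3.5):
    mean = sum(readings.values()) / len(readings)
    return {node for node, value in readings.items()
            if abs(value - mean) > threshold}

# Stage 2: among the suspects, the node that the most routes pass through
# (i.e. the one attracting the traffic) is flagged as the sinkhole intruder.
def identify_intruder(suspects, routes):
    counts = {node: 0 for node in suspects}
    for route in routes:
        for node in route:
            if node in counts:
                counts[node] += 1
    return max(counts, key=counts.get) if counts else None

readings = {"A": 10.1, "B": 9.8, "C": 17.5, "D": 10.3, "E": 16.9}
routes = [["A", "C", "sink"], ["B", "C", "sink"], ["D", "E", "sink"]]
suspects = find_suspects(readings)
intruder = identify_intruder(suspects, routes)
print(suspects, intruder)
```

Node C both reports anomalous data and sits on the most routes, so it is singled out from the suspect set.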
Abstract:
Wireless Sensor Networks (WSNs) are employed in numerous applications in areas including the military, ecology, and health; for example, to monitor sensitive information such as personnel positions in a building. As a result, WSNs need security. However, several restrictions, such as low computational capability, small memory, limited energy resources, and the unreliable channels over which WSNs communicate, make it difficult to provide security and protection in WSNs. It is essential to protect WSNs from malicious attacks in hostile environments, yet designing a security scheme under these resource limitations and the distinctive characteristics of a wireless sensor network is a considerable challenge. This article is an extensive review of WSN security problems recently examined by researchers, and of future directions for WSN security.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiency. This research study investigates a novel way to identify potential efficiency gains in business operations by observing how they have been carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. The paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A genetic algorithm-based approach is proposed to explore and assess alternative process execution scenarios, where the objective function is represented by a comprehensive cost structure that captures different process dimensions. Experiments conducted with different variants of the genetic algorithm evaluate the approach's feasibility. The findings demonstrate that a genetic algorithm-based approach can use cost reduction to identify improved execution scenarios in terms of reduced case durations and increased resource utilisation. The ultimate aim is to use cost-related insights gained from such improved scenarios to put forward recommendations for reducing process-related cost within organisations.
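As one way to picture this kind of approach, a minimal genetic algorithm whose objective blends case duration and monetary resource cost might look like the sketch below. The task/resource data, weights, and operators are invented for illustration and are not the study's cost structure.

```python
import random

DURATION = [[4, 6], [3, 5], [7, 2]]   # task x resource -> duration in hours
RATE = [10, 6]                        # hourly cost of each resource

def cost(chromosome, w_time=1.0, w_cost=0.5):
    """Blend total duration and total monetary cost into one objective."""
    hours = sum(DURATION[t][r] for t, r in enumerate(chromosome))
    money = sum(DURATION[t][r] * RATE[r] for t, r in enumerate(chromosome))
    return w_time * hours + w_cost * money

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Each gene assigns one task to one of two resources.
    pop = [[rng.randrange(2) for _ in DURATION] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]           # keep the cheapest half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(DURATION))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # occasional mutation
                child[rng.randrange(len(child))] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

With this toy data the minimum achievable cost is 50.0 (e.g. assigning task 2 to the cheaper, faster resource), which the seeded run converges to.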
Abstract:
This paper presents a full system demonstration of dynamic sensor-based reconfiguration of a networked robot team. Robots sense obstacles in their environment locally and dynamically adapt their global geometric configuration to conform to an abstract goal shape. We present a novel two-layer planning and control algorithm for team reconfiguration that is decentralised and assumes only local (neighbour-to-neighbour) communication. The approach is designed to be resource-efficient, and we show experiments using a team of nine mobile robots with modest computation, communication, and sensing. The robots use acoustic beacons for localisation and can sense obstacles in their local neighbourhood using IR sensors. Our results demonstrate globally-specified reconfiguration from local information in a real robot network, and highlight limitations of standard mesh networks in implementing decentralised algorithms.
Abstract:
Live migration of multiple Virtual Machines (VMs) has become an integral management activity in data centers for power saving, load balancing and system maintenance. While state-of-the-art live migration techniques focus on improving the migration performance of a single independent VM, little has been investigated for the case of live migration of multiple interacting VMs. Live migration is strongly influenced by network bandwidth, and arbitrarily migrating a VM that has data inter-dependencies with other VMs may increase bandwidth consumption and adversely affect the performance of subsequent migrations. In this paper, we propose a Random Key Genetic Algorithm (RKGA) that efficiently schedules the migration of a given set of VMs, accounting for both inter-VM dependencies and the data center communication network. The experimental results show that the RKGA can schedule the migration of multiple VMs with significantly shorter total migration time and total downtime than a heuristic algorithm.
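The random-key representation behind RKGA-style schedulers can be illustrated in a few lines: each VM is given a real-valued key, and sorting the keys decodes the chromosome into a migration order whose cost reflects inter-VM dependencies. The VM names, base time, and penalty rule below are toy values for illustration, not the paper's model.

```python
import random

def decode(keys, vms):
    """Sort VMs by their random keys to obtain a migration order."""
    return [vm for _, vm in sorted(zip(keys, vms))]

def schedule_cost(order, deps, base_time=10):
    """Toy cost model: each migration costs base_time, plus a penalty
    whenever a VM migrates before a VM it depends on."""
    position = {vm: i for i, vm in enumerate(order)}
    penalty = sum(5 for vm, dep in deps if position[vm] < position[dep])
    return base_time * len(order) + penalty

vms = ["vm1", "vm2", "vm3"]
deps = [("vm2", "vm1")]                  # vm2 should migrate after vm1
keys = [random.random() for _ in vms]    # one chromosome of random keys
order = decode(keys, vms)
print(order, schedule_cost(order, deps))
```

A genetic algorithm then evolves the key vectors directly; because any vector of reals decodes to a valid permutation, crossover and mutation never produce infeasible schedules, which is the main appeal of random keys.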
Abstract:
Spatial data are now prevalent in a wide range of fields including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data where there are large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
Abstract:
This thesis presents an empirical study of the effects of topology on cellular automata rule spaces. The classical definition of a cellular automaton is restricted to that of a regular lattice, often with periodic boundary conditions. This definition is extended to allow for arbitrary topologies. The dynamics of cellular automata within the triangular tessellation were analysed when transformed to 2-manifolds of topological genus 0, genus 1 and genus 2. Cellular automata dynamics were analysed from a statistical mechanics perspective. The sample sizes required to obtain accurate entropy calculations were determined by an entropy error analysis, which observed the error in the computed entropy against increasing sample sizes. Each cellular automata rule space was sampled repeatedly and the selected cellular automata were simulated over many thousands of trials for each topology. This resulted in an entropy distribution for each rule space. The computed entropy distributions are indicative of the cellular automata dynamical class distribution. Through the comparison of these dynamical class distributions using the E-statistic, it was identified that such topological changes alter these distributions. This is a significant result which implies that both global structure and local dynamics play an important role in defining the long-term behaviour of cellular automata.
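The entropy error analysis described here — growing the sample until the computed entropy stabilises — can be sketched as below. The random state generator stands in for actual cellular automaton simulation output, and the tolerance and starting size are arbitrary choices for illustration.

```python
import math
import random
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical sample distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_with_error_check(draw, start=100, tol=0.01, max_n=1_000_000):
    """Double the sample size until successive entropy estimates agree
    to within tol, mirroring an error-against-sample-size analysis."""
    n, prev = start, None
    while n <= max_n:
        est = shannon_entropy([draw() for _ in range(n)])
        if prev is not None and abs(est - prev) < tol:
            return est, n
        prev, n = est, n * 2
    return prev, n // 2

rng = random.Random(1)
est, n = entropy_with_error_check(lambda: rng.randrange(4))
print(est, n)   # approaches log2(4) = 2 bits for a uniform 4-state source
```

Repeating this over many sampled rules yields the per-rule-space entropy distributions the thesis compares across topologies.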
Abstract:
The textual turn is a good friend of expert spectating, where it assumes the role of writing-productive apparatus, but no friend at all of expert practices or practitioners (Melrose, 2003). Introduction The challenge of time-based embodied performance when the artefact is unstable As a former full-time professional practitioner with an embodied dance practice as performer, choreographer and artistic director for three decades, I somewhat unexpectedly entered the world of academia in 2000 after completing a practice-based PhD, which was described by its examiners as ‘pioneering’. Like many artists, my intention was to deepen and extend my practice through formal research into my work and its context (which was intercultural) and to privilege the artist’s voice in a research world where it was too often silent. Practice as research, practice-based research, and practice-led research were not yet fully named; the field was in its infancy, and my biggest challenge was to find a serviceable methodology which did not betray my intention to keep practice at the centre of the research. Over the last 15 years, practice-led doctoral research, where examinable creative work is placed alongside an accompanying (exegetical) written component, has come a long way. It has been extensively debated, with a range of theories and models proposed (Barrett & Bolt, 2007; Pakes, 2003 & 2004; Piccini, 2005; Philips, Stock & Vincs, 2009; Stock, 2009 & 2010; Riley & Hunter, 2009; Haseman, 2006; Hecq, 2012). Much of this writing is based around epistemological concerns where the research methodologies proposed normally incorporate a contextualisation of the creative work in its field of practice and, more importantly, validation and interrogation of the processes of the practice as the central ‘data gathering’ method.
It is now widely accepted, at least in the Australian creative arts context, that knowledge claims in creative practice research arise from the material activities of the practice itself (Carter, 2004). The creative work explicated as the tangible outcome of that practice is sometimes referred to as the ‘artefact’. Although the making of the artefact, according to Colbert (2009, p. 7), is influenced by “personal, experiential and iterative processes”, mapping these processes through a research pathway is “difficult to predict [for] the adjustments made to the artefact in the light of emerging knowledge and insights cannot be foreshadowed”. Linking the process and the practice outcome most often occurs through the textual intervention of an exegesis which builds, and/or builds on, theoretical concerns arising in and from the work. This linking produces what Barrett (2007) refers to as “situated knowledge… that operates in relation to established knowledge” (p. 145). But what if those material forms or ‘artefacts’ are not objects or code or digitised forms, but live within the bodies of artist/researchers, where the nature of the practice itself is live, ephemeral and constantly transforming, as in dance and physical performance? Even more unsettling is when the ‘artefact’ is literally embedded and embodied in the work and in the maker/researcher; when subject and object are merged. To complicate matters, the performing arts are necessarily collaborative, relying not only on technical mastery and creative/interpretive processes, but on social and artistic relationships which collectively make up the ‘artefact’. This chapter explores issues surrounding live dance and physical performance when placed in a research setting, specifically the complexities of being required to translate embodied dance findings into textual form.
Exploring how embodied knowledge can be shared in a research context for those with no experiential knowledge of communicating through and in dance, I draw on theories of “dance enaction” (Warburton, 2011) together with notions of “affective intensities” and “performance mastery” (Melrose, 2003), “intentional activity” (Pakes, 2004) and the place of memory. In seeking ways to capture in another form the knowledge residing in live dance practice, thus making implicit knowledge explicit, I further propose there is a process of triple translation as the performance (the living ‘artefact’) is documented in multi-faceted ways to produce something durable which can be re-visited. This translation becomes more complex if the embodied knowledge resides in culturally specific practices, formed by world views and processes quite different from accepted norms and conventions (even radical ones) of international doctoral research inquiry. But whatever the combination of cultural, virtual and genre-related dance practices being researched, embodiment is central to the process, outcome and findings, and the question remains of how we will use text and what forms that text might take.
Abstract:
This paper investigates demodulation of differentially phase modulated signals (DPMS) using optimal HMM filters. The optimal HMM filter presented in the paper is computationally of order N^3 per time instant, where N is the number of message symbols. Previously, optimal HMM filters have been of computational order N^4 per time instant. Also, suboptimal HMM filters of computational order N^2 per time instant have been proposed. The approach presented in this paper uses two coupled HMM filters and exploits knowledge of ...
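For context on those complexity orders: the standard HMM forward (filtering) recursion costs O(N^2) per time instant for an N-state chain, and the N^2/N^3/N^4 figures above are measured against this kind of per-step update. A generic forward filtering step (not the paper's coupled-filter construction), with toy probabilities, looks like this:

```python
def hmm_filter_step(prior, A, B, obs):
    """One forward-filtering update: propagate the belief through the
    transition matrix A (the O(N^2) part), weight by the likelihood of
    the observed symbol from B, and renormalise."""
    n = len(prior)
    predicted = [sum(A[i][j] * prior[i] for i in range(n)) for j in range(n)]
    posterior = [B[j][obs] * predicted[j] for j in range(n)]
    total = sum(posterior)
    return [p / total for p in posterior]

A = [[0.9, 0.1], [0.2, 0.8]]   # state transition probabilities
B = [[0.7, 0.3], [0.4, 0.6]]   # per-state observation likelihoods
belief = [0.5, 0.5]
for obs in [0, 0, 1]:          # a short toy observation sequence
    belief = hmm_filter_step(belief, A, B, obs)
print(belief)
```

In a demodulation setting the hidden states would index the message symbols and B would encode the channel's phase-observation statistics.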
Abstract:
Network Real-Time Kinematic (NRTK) is a technology that can provide centimeter-level-accuracy positioning services in real time, enabled by a network of Continuously Operating Reference Stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK, but also the quality of the positioning services it provides. This paper presents a Memetic Algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of local optimization. Experimental results show that the MA performs better than existing approaches. In this paper we also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator in the MA.
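The genetic-algorithm-plus-local-search hybridization that defines a memetic algorithm can be sketched on a toy placement problem: evolved candidates supply exploration, and each new candidate is refined by a swap-based hill climb before joining the population. The sites, users, and minimax-distance fitness below are invented stand-ins, not the paper's CORS model.

```python
import random

SITES = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2)]   # candidate station sites
USERS = [(1, 1), (3, 3), (0, 3)]                   # service demand points

def fitness(chosen):
    """Worst user-to-nearest-station Manhattan distance (lower is better)."""
    return max(min(abs(ux - sx) + abs(uy - sy) for sx, sy in chosen)
               for ux, uy in USERS)

def local_search(chosen):
    """Exploitation: hill-climb by swapping chosen sites for unused ones."""
    best, improved = list(chosen), True
    while improved:
        improved = False
        for i in range(len(best)):
            for site in SITES:
                if site in best:
                    continue
                trial = best[:i] + [site] + best[i + 1:]
                if fitness(trial) < fitness(best):
                    best, improved = trial, True
    return best

def memetic(k=2, pop_size=6, generations=10, seed=0):
    rng = random.Random(seed)
    pop = [local_search(rng.sample(SITES, k)) for _ in range(pop_size)]
    for g in range(generations):
        pop.sort(key=fitness)
        if g % 2 == 0:                       # crossover of the two elites
            pool = list({*pop[0], *pop[1]})
            child = rng.sample(pool, k) if len(pool) >= k else list(pop[0])
        else:                                # random immigrant for diversity
            child = rng.sample(SITES, k)
        pop[-1] = local_search(child)        # refine before insertion
    return min(pop, key=fitness)

best = memetic()
print(best, fitness(best))
```

Refining every candidate locally before it enters the population is what distinguishes a memetic algorithm from a plain GA.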
Abstract:
Distributed computation and storage have been widely used for processing big data sets. For many big data problems, with data sizes growing rapidly, the distribution of computing tasks and related data can greatly affect the performance of the computing system. In this paper, a distributed computing framework is presented for high-performance computing of All-to-All Comparison Problems. A data distribution strategy is embedded in the framework for reduced storage space and balanced computing load. Experiments conducted to demonstrate the effectiveness of the developed approach have shown that about 88% of the ideal performance capacity can be achieved across multiple machines using the approach presented in this paper.
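To make the task/data distribution issue concrete, here is a toy sketch of spreading an all-to-all comparison workload across machines: every pair of items must be compared exactly once, and each machine must locally store every item appearing in its assigned pairs. The greedy balancing rule is illustrative only and is not the paper's strategy.

```python
from itertools import combinations

def distribute(items, n_machines):
    """Greedily assign each pairwise comparison to the least-loaded machine."""
    assignment = {m: [] for m in range(n_machines)}
    for pair in combinations(items, 2):
        target = min(assignment, key=lambda m: len(assignment[m]))
        assignment[target].append(pair)
    return assignment

items = list("ABCDEF")                     # 6 items -> 15 comparisons
plan = distribute(items, 3)
# Which items each machine must store locally to run its comparisons:
storage = {m: sorted({x for pair in pairs for x in pair})
           for m, pairs in plan.items()}
for machine in plan:
    print(machine, len(plan[machine]), storage[machine])
```

This balances the comparison load perfectly, but note that it says nothing about minimising the storage sets — co-locating pairs that share items is exactly the harder optimisation a real data distribution strategy targets.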
Abstract:
This work describes recent extensions to the GPFlow scientific workflow system in development at MQUTeR (www.mquter.qut.edu.au), which facilitate interactive experimentation, automatic lifting of computations from single-case to collection-oriented computation, and automatic correlation and synthesis of collections. A GPFlow workflow presents as an acyclic data flow graph, yet provides powerful iteration and collection formation capabilities.