900 results for static nodes
Abstract:
We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
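As a rough illustration of the kind of heuristic described above, namely a linear sequence of dispatching rules applied in turn, the following Python sketch is purely hypothetical: the rule set (SPT, EDD, least work remaining), the job attributes and the cycling policy are assumptions for illustration and do not reproduce NELLI-GP's evolved tree-based rules.

    # Minimal sketch (assumed names and attributes): a "heuristic" is a linear
    # sequence of dispatching rules; the i-th scheduling decision uses rule i,
    # cycling through the sequence when it is shorter than the schedule.
    def spt(job):   # shortest processing time first
        return job["proc_time"]

    def edd(job):   # earliest due date first
        return job["due_date"]

    def lwr(job):   # least work remaining first
        return job["work_remaining"]

    def apply_sequence(rules, pending_jobs):
        """Dispatch jobs one by one, choosing each next job with the next rule."""
        schedule, step, jobs = [], 0, list(pending_jobs)
        while jobs:
            rule = rules[step % len(rules)]
            nxt = min(jobs, key=rule)
            schedule.append(nxt["id"])
            jobs.remove(nxt)
            step += 1
        return schedule

    jobs = [{"id": 1, "proc_time": 5, "due_date": 9,  "work_remaining": 12},
            {"id": 2, "proc_time": 2, "due_date": 15, "work_remaining": 7},
            {"id": 3, "proc_time": 4, "due_date": 6,  "work_remaining": 10}]
    print(apply_sequence([spt, edd, lwr], jobs))   # -> [2, 3, 1]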
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses, by gaze and by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant’s eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster when compared with stepped 2-image agent cues, and 42% faster when compared with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were slightly smaller. Responses to the fully animated agent were 17% and 20% faster when compared with the 2-image and 1-image cues respectively. These results inform techniques aimed at engaging users’ attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users’ eye movements and touch responses.
Abstract:
Huelse, M, Barr, D R W, Dudek, P: Cellular Automata and non-static image processing for embodied robot systems on a massively parallel processor array. In: Adamatzky, A et al. (eds) AUTOMATA 2008, Theory and Applications of Cellular Automata. Luniver Press, 2008, pp. 504-510. Sponsorship: EPSRC
Abstract:
This report presents an algorithm, and its implementation, for doing type inference in the context of Quasi-Static Typing (QST) ["Quasi-Static Typing." Satish Thatte, Proc. ACM Symp. on Principles of Programming Languages, 1988]. The package infers types in the QST style for the simply typed λ-calculus.
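To make the quasi-static idea concrete, here is a hedged Python sketch (not the report's algorithm): an "unknown" type is statically compatible with every type, so the checker accepts terms whose full verification is deferred to run time while still rejecting definite static errors.

    # Hedged sketch of the quasi-static idea; term and type encodings are
    # assumptions for illustration. Types are base-type strings, UNKNOWN,
    # or (domain, codomain) tuples for function types.
    UNKNOWN = "?"

    def compatible(t1, t2):
        if t1 == UNKNOWN or t2 == UNKNOWN:
            return True                              # deferred to run time
        if isinstance(t1, tuple) and isinstance(t2, tuple):
            return compatible(t2[0], t1[0]) and compatible(t1[1], t2[1])
        return t1 == t2

    def infer(term, env):
        """Terms: ('var', x) | ('lam', x, annotation, body) | ('app', f, a)."""
        kind = term[0]
        if kind == "var":
            return env[term[1]]
        if kind == "lam":
            _, x, ann, body = term
            return (ann, infer(body, {**env, x: ann}))
        if kind == "app":
            f_ty = infer(term[1], env)
            a_ty = infer(term[2], env)
            if f_ty == UNKNOWN:
                return UNKNOWN                       # result only known at run time
            assert isinstance(f_ty, tuple), "not a function"
            dom, cod = f_ty
            assert compatible(a_ty, dom), "static type error"
            return cod
        raise ValueError(kind)

    # (\x:int. x) applied to a value of unknown type is accepted quasi-statically
    print(infer(("app", ("lam", "x", "int", ("var", "x")), ("var", "y")), {"y": UNKNOWN}))

A full quasi-static system would additionally insert explicit coercions and track plausibility information; this sketch only shows the compatibility relation.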
Abstract:
We present a type system, StaXML, which employs the stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness upon the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
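A hedged sketch of the underlying intuition (not StaXML's actual type syntax): summarise each unbalanced fragment by the tags it closes from its enclosing context and the tags it leaves open, and require that sequential composition of fragments cancels these out for a complete document. The function names and composition rule below are assumptions for illustration.

    import re

    def fragment_effect(fragment):
        """Return (closes, opens): tags closed from the context, tags left open."""
        closes, opens = [], []
        for slash, name in re.findall(r"<(/?)(\w+)>", fragment):
            if slash == "":
                opens.append(name)              # opening tag
            elif opens and opens[-1] == name:
                opens.pop()                     # closes a tag opened in this fragment
            else:
                closes.append(name)             # closes a tag from the enclosing context
        return closes, opens

    def compose(eff1, eff2):
        """Sequential composition of two fragment effects."""
        closes, opens = list(eff1[0]), list(eff1[1])
        for name in eff2[0]:
            if opens and opens[-1] == name:
                opens.pop()                     # second fragment closes first's tag
            else:
                closes.append(name)             # still unmatched, pushed to the context
        return closes, opens + list(eff2[1])

    head = fragment_effect("<html><body><p>")
    tail = fragment_effect("</p></body></html>")
    print(compose(head, tail))   # ([], []): together the fragments are well-formed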
Abstract:
In this paper we introduce a theory of policy routing dynamics based on fundamental axioms of routing update mechanisms. We develop a dynamic policy routing model (DPR) that extends the static formalism of the stable paths problem (introduced by Griffin et al.) with discrete synchronous time. DPR captures the propagation of path changes in any dynamic network irrespective of its time-varying topology. We introduce several novel structures such as causation chains, dispute fences and policy digraphs that model different aspects of routing dynamics and provide insight into how these dynamics manifest in a network. We demonstrate the practicality of the theoretical foundation provided by DPR on two fundamental problems: routing dynamics minimization and policy conflict detection. The dynamics minimization problem utilizes policy digraphs, which capture the dependencies in routing policies irrespective of underlying topology dynamics, to solve a graph optimization problem. This optimization problem explicitly minimizes the number of routing update messages in a dynamic network by optimally changing the path preferences of a minimal subset of nodes. The conflict detection problem, on the other hand, utilizes a theoretical result of DPR: the root cause of a causation cycle (i.e., a cycle of routing update messages) can be precisely inferred as either a transient route flap or a dispute wheel (i.e., a policy conflict). Using this result we develop SafetyPulse, a token-based distributed algorithm to detect policy conflicts in a dynamic network. SafetyPulse is privacy preserving, computationally efficient, and provably correct.
Abstract:
This paper reports on the design and manufacturing of an integrated DC-DC converter that respects the specific requirements of sensor node networks: compactness, high efficiency in acquisition and transmission modes, and compatibility with miniature lithium batteries. A novel integrated circuit (ASIC) has been designed and manufactured to provide a regulated voltage to the sensor node from miniaturized, thin-film lithium batteries. A 3D integration technique has then been used to integrate this ASIC in a 3-layer stack with high-efficiency passive components, mixing the wafer-level technologies from two different research institutions. Electrical results have demonstrated the feasibility of this integrated system, and experiments have shown significant improvements with respect to oscillations in the regulated voltage. However, the stability of this output voltage with respect to the input voltage still has to be improved.
Abstract:
A comparison study was carried out between a wireless sensor node with a flip-chip-mounted bare die and its reference board with a BGA-packaged transceiver chip. The main focus is the return loss (S-parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modeling, including the different interconnect technologies, substrate properties and passive components, was performed to simulate the system in the Ansoft Designer software. Statistical methods, such as standard deviation and regression, were applied to the RF performance analysis to assess the impact of the different parameters on the return loss. An extreme-value search, following on from this analysis, provides the parameter values for the minimum return loss. Measurements fit the analysis and simulation well and showed a large improvement of the return loss, from -5 dB to -25 dB, for the target wireless sensor node.
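For reference, the relation between the quoted S11 values and the impedance mismatch is the standard one; assuming a 50 Ω reference impedance Z_0 at the antenna connector and a load impedance Z_L:

    \Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}, \qquad S_{11}\,[\mathrm{dB}] = 20 \log_{10} |\Gamma|

An S11 of -5 dB corresponds to |Γ| ≈ 0.56, i.e. roughly 32% of the incident power reflected, whereas -25 dB corresponds to |Γ| ≈ 0.056, i.e. about 0.3% reflected, which is why the reported improvement matters for the node's radio link.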
Design and implementation of the embedded capacitance layers for decoupling of wireless sensor nodes
Abstract:
In this paper, an embedded capacitance material (ECM) is fabricated between the power and ground layers of a wireless sensor node, forming an integrated capacitance that replaces the large number of decoupling capacitors on the board. The ECM, whose dielectric constant is 16, has the same size as the wireless sensor node, 3 cm × 3 cm, with a thickness of only 14 μm. Although the capacitance of a single ECM layer is only around 8 nF, there are two reasons why the ECM layers can still replace the high-frequency decoupling capacitors (100 nF in our case) on the board. The first reason is that the parasitic inductance of the ECM layer is much lower than that of the surface-mount capacitors, so a smaller capacitance value of the ECM layer can achieve the same resonant frequency as the surface-mount decoupling capacitors. Simulation and measurement fit this assumption well. The second reason is that more than one layer of ECM material is utilized during the design step, giving a parallel connection of several ECM capacitance layers and thus a larger capacitance and a smaller parasitic inductance. Characterization of the ECM is carried out with an LCR meter. To evaluate the behavior of the ECM layers, time- and frequency-domain measurements are performed on the power-bus decoupling of the wireless sensor nodes. Comparisons with measurements of the bare PCB and of the discrete decoupling capacitor solution are provided to show the improvement due to the ECM layers. Measurements show that the implementation of the ECM layers not only saves the space of the surface-mount decoupling capacitors, but also provides better power-bus decoupling to the nodes.
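The resonant-frequency argument above can be made explicit with the standard series-LC relation (a reading of the abstract's reasoning, not figures from the paper):

    f_{\mathrm{res}} = \frac{1}{2\pi\sqrt{LC}}, \qquad
    f_{\mathrm{ECM}} \ge f_{\mathrm{SMD}} \iff L_{\mathrm{ECM}} C_{\mathrm{ECM}} \le L_{\mathrm{SMD}} C_{\mathrm{SMD}} \iff L_{\mathrm{ECM}} \le \frac{C_{\mathrm{SMD}}}{C_{\mathrm{ECM}}}\, L_{\mathrm{SMD}} \approx 12.5\, L_{\mathrm{SMD}}

With the quoted values (8 nF per layer versus 100 nF parts), the ECM layer only needs a parasitic inductance below roughly 12.5 times that of a surface-mount capacitor to resonate at the same frequency or higher; since the planar embedded layer has a far lower inductance than the surface-mount parts, this condition is met with a large margin.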
Abstract:
Buildings consume 40% of Ireland's total annual energy, translating to 3.5 billion (2004). The EPBD directive (effective January 2003) places an onus on all member states to rate the energy performance of all buildings in excess of 50 m². Energy and environmental performance management systems do not exist for residential buildings, and for non-residential buildings they consist of an ad-hoc integration of wired building management systems and Monitoring & Targeting systems. These systems are unsophisticated and do not easily lend themselves to cost-effective retrofit or integration with other enterprise management systems. It is commonly agreed that a 15-40% reduction of building energy consumption is achievable by operating buildings efficiently when compared with typical practice. Existing research has identified that the level of information available to building managers from existing Building Management Systems and Environmental Monitoring Systems (BMS/EMS) is insufficient to perform the required performance-based building assessment. The cost of installing additional sensors and meters is extremely high, primarily due to the cost of wiring and the associated labour. From this perspective, wireless sensor technology can deliver reliable sensor data at the temporal and spatial granularity required for building energy management. In this paper, a wireless sensor network mote hardware design and implementation is presented for a building energy management application. Appropriate sensors were selected and interfaced with the developed system, based on user requirements, to meet both the building monitoring and metering requirements. Besides the sensing capability, actuation and interfacing to external meters/sensors are provided to perform the management control and data recording tasks associated with minimising energy consumption in the built environment and with the development of appropriate Building Information Models (BIM) to enable the design and development of energy-efficient spaces.
Abstract:
This work considers the static calculation of a program’s average-case time. The number of systems that currently tackle this research problem is quite small due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and they are individually discussed in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that backs MOQA is generalised and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here; for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA when compared to similar systems. Succinctly, this work’s significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA’s accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
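As a generic illustration of what a statically derived average-case quantity looks like (the flavour of the analysis only, not MOQA's calculus): for linear search over n distinct labels whose order is uniformly distributed, the sought label is equally likely to sit at any of the n positions, so the expected number of comparisons is

    \frac{1}{n}\sum_{i=1}^{n} i = \frac{n+1}{2}

The difficulty lies in making this kind of derivation compositional across whole programs, which is why MOQA maintains strict control over the data structure type and the labeling distribution at every step.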
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the cost of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. Data latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve the network performance. Previous research has considered only parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so our aim is to optimise the use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem in which the damage must be discovered. We therefore extend to the dynamic problem, in which the network repair problem involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can talk to a sink within a hop-count limit after the network is restored. For each version of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions by simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds of the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches are best in terms of node cost, and the path-based approaches are best in terms of mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent. For a medium-speed agent, the total restoration times of the node-based and path-based approaches are almost balanced.
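As an illustration of the hop-count constraint on network quality, the Python sketch below (an assumed graph representation, not one of the thesis's algorithms) checks whether every desired location can talk to some sink within a given hop limit after a candidate repair:

    from collections import deque

    def hops_from_sinks(adjacency, sinks):
        """Multi-source BFS: hop distance from each node to its nearest sink."""
        dist = {s: 0 for s in sinks}
        queue = deque(sinks)
        while queue:
            node = queue.popleft()
            for nbr in adjacency.get(node, []):
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        return dist

    def satisfies_hop_constraint(adjacency, sinks, desired, hop_limit):
        dist = hops_from_sinks(adjacency, sinks)
        return all(loc in dist and dist[loc] <= hop_limit for loc in desired)

    # Tiny example: nodes a-e in a line, one sink at 'a', hop limit 3
    adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
                 "d": ["c", "e"], "e": ["d"]}
    print(satisfies_hop_constraint(adjacency, ["a"], ["d", "e"], 3))
    # d is 3 hops from the sink but e is 4, so this prints False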
Abstract:
BACKGROUND: Breast cancer is a heterogeneous disease. Predictive biological markers (BM) of responsiveness to therapy need to be identified. Evaluation of BM is mainly done at the primary site. However, in the adjuvant therapy of breast cancer, the main goal is control of micrometastases. It is still unknown whether heterogeneity in the expression of BM between the primary site and its micrometastases exists. OBJECTIVE: To evaluate the expression of some BM with potential predictive value at the primary breast cancer site and in metastatic ipsilateral axillary lymph nodes. PATIENTS AND METHODS: Focality (percentage of positive cells) and intensity staining scores were evaluated for each marker. Freshly cut sections (4 µm) from embedded blocks of breast cancer fixed in formalin or Bouin's fixative were put onto SuperFrost slides (Menzel-Gläser). Protein expression was evaluated immunohistochemically (IHC) using monoclonal antibodies against: topo II-alpha (clone KiS1, 1 µg/ml, Roche) with a trypsin pre-treatment (P); HSP27 (clone G3.1, 1/60, Biogenex), HSP70 (clone BRM.22, 1/80, Biogenex) and HER2 (clone CB11, 1/40, Novocastra; without P); p53 (clone D07, 1/750, Dako) and bcl-2 (clone 124, 1/60, Dako) with citrate buffer as P. RESULTS: Overall, the percentage of discordant marker status between the primary tumour and its metastatic lymph nodes was 2% for HER2, 6% for p53, 15% for bcl-2, 19% for topoisomerase II-alpha, 24% for HSP27 and 30% for HSP70. For the subgroup of patients with positive BM in the primary tumour, the percentage of discordance was 6% for HER2, 7% for p53, 14% for bcl-2, 19% for HSP70, 21% for topoisomerase II-alpha and 36% for HSP27. For the subgroup of patients with positive BM in the lymph nodes, the percentage of discordance was 9% for bcl-2, 15% for HER2 and p53, 21% for topoisomerase II-alpha, 22% for HSP27 and 25% for HSP70. CONCLUSIONS: 1) No biological marker had 100% concordant results. 2) Although some discordant cases might be explained by the limitations of the IHC technique, future studies aiming to evaluate the predictive value of BM in the adjuvant therapy of breast cancer should take into account a possible difference in BM expression between the primary and the metastatic sites.
Abstract:
Pigeons and other animals soon learn to wait (pause) after food delivery on periodic-food schedules before resuming the food-rewarded response. Under most conditions the steady-state duration of the average waiting time, t, is a linear function of the typical interfood interval. We describe three experiments designed to explore the limits of this process. In all experiments, t was associated with one key color and the subsequent food delay, T, with another. In the first experiment, we compared the relation between t (waiting time) and T (food delay) under two conditions: when T was held constant, and when T was an inverse function of t. The pigeons could maximize the rate of food delivery under the first condition by setting t to a consistently short value; optimal behavior under the second condition required a linear relation with unit slope between t and T. Despite this difference in optimal policy, the pigeons in both cases showed the same linear relation, with slope less than one, between t and T. This result was confirmed in a second parametric experiment that added a third condition, in which T + t was held constant. Linear waiting appears to be an obligatory rule for pigeons. In a third experiment we arranged for a multiplicative relation between t and T (positive feedback), and produced either very short or very long waiting times as predicted by a quasi-dynamic model in which waiting time is strongly determined by the just-preceding food delay.
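Written compactly, with t the waiting time and T the preceding food delay as defined in the abstract, the linear waiting rule has the form

    t \approx a + bT, \qquad 0 < b < 1

where the intercept a and the slope b are fitted constants; the experiments show that this relation holds even under schedules for which it is not the rate-maximizing policy.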
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously, before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
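A minimal sketch of such a proactive, non-preemptive scheme (class and method names are assumptions for illustration, not the paper's design): each node continuously pushes its load to its peers, and an arriving task is allocated to the least-loaded node currently known, without preempting running tasks.

    import random

    class Node:
        def __init__(self, name):
            self.name = name
            self.load = 0.0           # e.g. run-queue length or utilisation
            self.peer_loads = {}      # load information gathered continuously

        def broadcast_load(self, peers):
            """Proactive step: push the current load to peers before it is needed."""
            for peer in peers:
                peer.peer_loads[self.name] = self.load

        def allocate(self):
            """Non-preemptive allocation: choose the least-loaded node currently known."""
            candidates = dict(self.peer_loads, **{self.name: self.load})
            return min(candidates, key=candidates.get)

    nodes = [Node(f"n{i}") for i in range(4)]
    for n in nodes:
        n.load = random.uniform(0.0, 1.0)
    for n in nodes:
        n.broadcast_load([p for p in nodes if p is not n])
    print(nodes[0].allocate())   # name of the node chosen to execute the next task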