151 results for Computer networks Security measures
Abstract:
Autonomic management can be used to improve the QoS provided by parallel/distributed applications. We discuss behavioural skeletons introduced in earlier work: rather than relying on the programmer's ability to design efficient autonomic policies "from scratch", we encapsulate general autonomic controller features into algorithmic skeletons. We then leave to the programmer the task of specifying the parameters needed to specialise the skeletons to the needs of the particular application at hand. As a result, the programmer can rapidly prototype and tune distributed/parallel applications with non-trivial autonomic management capabilities. We discuss how behavioural skeletons have been implemented in the framework of GCM (the Grid Component Model developed within the CoreGRID NoE and currently being implemented within the GridCOMP STREP project). We present results evaluating the overhead introduced by autonomic management activities as well as the overall behaviour of the skeletons. We also present results achieved with a long-running application subject to autonomic management and dynamically adapting to changing features of the target architecture.
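The control loop described above (a generic autonomic manager specialised only by programmer-supplied parameters) can be sketched as a toy farm skeleton for the "functional replication" pattern; all class, method and parameter names here are illustrative, not the GCM/GridCOMP API:

```python
# Toy sketch of a "functional replication" behavioural skeleton:
# a farm whose autonomic manager adds or removes workers to meet
# a user-supplied QoS contract (target throughput). Names are
# illustrative, not the actual GCM API.

class FarmSkeleton:
    def __init__(self, worker_fn, target_throughput, max_workers=16):
        self.worker_fn = worker_fn       # business logic supplied by the programmer
        self.target = target_throughput  # QoS contract parameter
        self.max_workers = max_workers
        self.n_workers = 1               # current replication degree

    def autonomic_step(self, measured_throughput):
        """One iteration of the generic control loop: compare the
        monitored metric against the contract and adapt replication."""
        if measured_throughput < self.target and self.n_workers < self.max_workers:
            self.n_workers += 1          # scale out
        elif measured_throughput > 1.5 * self.target and self.n_workers > 1:
            self.n_workers -= 1          # scale in to save resources
        return self.n_workers

farm = FarmSkeleton(worker_fn=lambda x: x * x, target_throughput=100.0)
farm.autonomic_step(40.0)   # below contract -> replicate
farm.autonomic_step(40.0)
print(farm.n_workers)       # 3
```

Only the contract parameters (`target_throughput`, `max_workers`) and the business code (`worker_fn`) come from the programmer; the control loop itself is generic, which is the point of encapsulating it in the skeleton.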
Overall the results demonstrate both the feasibility of implementing autonomic control via behavioural skeletons and the effectiveness of our sample behavioural skeletons in managing the “functional replication” pattern(s).
Abstract:
Mixture of Gaussians (MoG) modelling [13] is a popular approach to background subtraction in video sequences. Although the algorithm shows good empirical performance, it lacks theoretical justification. In this paper, we justify it from an online stochastic expectation maximization (EM) viewpoint and extend it to a general framework of regularized online classification EM for MoG with guaranteed convergence. By choosing a particular regularization function, the l1 norm, we derive a new set of updating equations for l1-regularized online MoG. It is shown empirically that l1-regularized online MoG converges faster than the original online MoG.
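For context, here is a minimal sketch of the classic online MoG update that this work builds on (Stauffer-Grimson style, shown for a single pixel); the paper's l1-regularized update equations are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

# Minimal sketch of the classic online MoG update for one pixel:
# match the incoming value to the nearest component, nudge that
# component's mean/variance, and reweight all components.

def update_mog(x, weights, means, variances, alpha=0.05, match_thresh=2.5):
    """Update a K-component 1-D MoG for a new pixel value x."""
    d = np.abs(x - means) / np.sqrt(variances)
    k = np.argmin(d)
    owner = np.zeros_like(weights)
    if d[k] < match_thresh:
        owner[k] = 1.0
        means[k] += alpha * (x - means[k])
        variances[k] += alpha * ((x - means[k]) ** 2 - variances[k])
    else:
        j = np.argmin(weights)  # replace the least probable component
        means[j], variances[j], owner[j] = x, 100.0, 1.0
    weights[:] = (1 - alpha) * weights + alpha * owner
    weights /= weights.sum()
    return weights, means, variances

w = np.array([0.5, 0.5]); m = np.array([10.0, 200.0]); v = np.array([25.0, 25.0])
for _ in range(20):
    update_mog(12.0, w, m, v)   # repeated background value
print(w[0] > w[1])  # background component dominates -> True
```

The l1 regularization studied in the paper would additionally shrink component weights toward zero, pruning unused components and speeding convergence.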
Abstract:
Assessment of infant pain is a pressing concern, especially within the context of neonatal intensive care where infants may be exposed to prolonged and repeated pain during lengthy hospitalization. In the present study the feasibility of carrying out the complete Neonatal Facial Coding System (NFCS) in real time at bedside, specifically reliability, construct and concurrent validity, was evaluated in a tertiary level Neonatal Intensive Care Unit (NICU). Heel lance was used as a model of procedural pain, and observed with n = 40 infants at 32 weeks gestational age. Infant sleep/wake state, NFCS facial activity and specific hand movements were coded during baseline, unwrap, swab, heel lance, squeezing and recovery events. Heart rate was recorded continuously and digitally sampled using a custom designed computer system. Repeated measures analysis of variance (ANOVA) showed statistically significant differences across events for facial activity (P < 0.0001) and heart rate (P < 0.0001). Planned comparisons showed facial activity unchanged during baseline, swab and unwrap, then increased significantly during heel lance (P < 0.0001), increased further during squeezing (P < 0.003), then decreased during recovery (P < 0.0001). Systematic shifts in sleep/wake state were apparent. Rise in facial activity was consistent with increased heart rate, except that facial activity more closely paralleled initiation of the invasive event. Thus facial display was more specific to tissue damage compared with heart rate. Inter-observer reliability was high. Construct validity of the NFCS at bedside was demonstrated, as invasive procedures were distinguished from tactile ones. While bedside coding of behavior does not permit raters to be blind to events, mechanical recording of heart rate allowed for an independent source of concurrent validation for bedside application of the NFCS scale.
Abstract:
Introducing automation into a managed environment involves significant initial overhead and abstraction, creating a disconnect between the administrator and the system. To facilitate the transition to automated management, this paper proposes an approach whereby automation increases gradually, gathering data from the task deployment process. This stored data is analysed to determine the task outcome status and can then be used for comparison against future deployments of the same task, alerting the administrator to deviations from the expected outcome. Using a machine-learning approach, the automation tool can learn from the administrator's reaction to task failures and eventually react to faults autonomously.
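The record-compare-alert cycle described above can be sketched as follows; the class, field names, and deviation rule (a simple k-sigma test, standing in for the paper's machine-learning approach) are all illustrative:

```python
# Toy sketch of gradual automation: store the outcome of each
# deployment of a task, compare new runs against the recorded
# baseline, and alert the administrator on deviation.

from statistics import mean, stdev

class TaskMonitor:
    def __init__(self):
        self.history = {}  # task name -> list of observed durations (s)

    def record(self, task, duration):
        self.history.setdefault(task, []).append(duration)

    def check(self, task, duration, k=3.0):
        """Return True if the new run deviates from the stored baseline."""
        past = self.history.get(task, [])
        if len(past) < 2:
            return False  # not enough data yet: stay in manual mode
        mu, sigma = mean(past), stdev(past)
        return abs(duration - mu) > k * max(sigma, 1e-9)

mon = TaskMonitor()
for d in (10.0, 11.0, 10.5, 9.8):
    mon.record("deploy-webapp", d)
print(mon.check("deploy-webapp", 10.2))  # within baseline -> False
print(mon.check("deploy-webapp", 60.0))  # anomalous -> True
```

In the paper's scheme, the administrator's response to each flagged deviation would itself be recorded, so the tool can eventually apply the learned response autonomously.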
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and AR face recognition database with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database, and facial identification performance on the AR database, is comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with the bimodal systems based on multicondition model training or missing-feature decoding alone.
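The need to balance modalities of vastly differing sizes before fusion can be illustrated as follows; this sketch uses plain cosine similarity over per-modality unit-normalised concatenated features, not the paper's modified cosine similarity, and all data are synthetic:

```python
import numpy as np

# Illustrative feature-level bimodal fusion: each modality's feature
# vector is unit-normalized before concatenation so the (long) speech
# vector cannot swamp the (short) face vector, then fused vectors are
# compared with cosine similarity.

def fuse(speech_feat, face_feat):
    s = speech_feat / np.linalg.norm(speech_feat)
    f = face_feat / np.linalg.norm(face_feat)
    return np.concatenate([s, f])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
speech_a, face_a = rng.normal(size=512), rng.normal(size=64)
enrol = fuse(speech_a, face_a)
probe_same = fuse(speech_a + 0.1 * rng.normal(size=512), face_a)
probe_other = fuse(rng.normal(size=512), rng.normal(size=64))
print(cosine(enrol, probe_same) > cosine(enrol, probe_other))  # True
```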
Abstract:
We investigate by numerical EM simulation the potential communication channel capacity of a reverberant environment using the time reversal approach, excited at 2.4 GHz by ON-OFF keyed RF pulse excitation. It is shown that approximately 725 1.25 MHz propagation channels can be allocated when the cavity contains a 4×4 λ or 1×1 λ LOS obstruction positioned between the transceiver antenna and the time reversal unit. Furthermore, the results show that two co-located transceiver dipoles separated by a spacing of 3λ/4 can successfully resolve a 10 ns pulse. Our findings suggest that independent channels with identical operating frequency can be realized in an enclosed environment such as a ventilation duct or an underground tunnel. This suggests the possibility of implementing a parallel-channel radio link with a minimum inter-antenna spacing of 3λ/4 between the transceivers in a rich multipath environment. © 2012 IEEE.
Abstract:
We present a technique, using the Imaginary Smith Chart, for determining the admittance of obstacles introduced into evanescent waveguide. The admittances of an inductive iris, a capacitive iris, a capacitive post, a variable-width strip and a length of evanescent waveguide are investigated. © 2012 IEEE.
Abstract:
Conventional digital modulation schemes use amplitude, frequency and/or phase as the modulation characteristic to transmit data. In this paper, we exploit the circular polarization (CP) of the propagating electromagnetic carrier as the modulation attribute, a novel concept in digital communications. The requirement of antenna alignment to maximize received power is eliminated for CP signals, and these are not affected by linearly polarized jamming signals. The work presents the concept of circular polarization modulation for 2, 4 and 8 carrier states, referred to as binary circular polarization modulation (BCPM), quaternary circular polarization modulation (QCPM) and 8-state circular polarization modulation (8CPM) respectively. Modulation, demodulation, 3D symbol constellations and 3D propagating waveforms for the proposed schemes are presented and analyzed in the presence of channel effects; they are shown to have the same bit error performance as conventional schemes in the presence of AWGN, while providing a 3 dB gain in the flat Rayleigh fading channel.
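A toy baseband sketch of BCPM, assuming an idealised AWGN channel: bits map to left/right circular polarization states written as Jones vectors over the two linear polarization components, and detection correlates against the two reference states. The mapping and parameters are illustrative, not drawn from the paper:

```python
import numpy as np

# BCPM sketch: the two circular polarization states are orthogonal
# Jones vectors, so a correlation detector separates them perfectly
# in the absence of noise.

LHCP = np.array([1, 1j]) / np.sqrt(2)   # bit 0
RHCP = np.array([1, -1j]) / np.sqrt(2)  # bit 1

def bcpm_modulate(bits):
    return np.array([RHCP if b else LHCP for b in bits])

def bcpm_demodulate(symbols):
    # Decide by which reference polarization state correlates better.
    corr_l = np.abs(symbols @ LHCP.conj())
    corr_r = np.abs(symbols @ RHCP.conj())
    return (corr_r > corr_l).astype(int)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 1000)
tx = bcpm_modulate(bits)
noise = 0.2 * (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape))
rx = tx + noise  # AWGN on both polarization components
ber = np.mean(bcpm_demodulate(rx) != bits)
print(ber < 0.01)  # essentially error-free at this SNR -> True
```

Because the two states are orthogonal, the detector needs no antenna alignment, matching the CP advantage described in the abstract.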
Abstract:
Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for use of vision-based algorithms in the context of data extraction from web sites.
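The visual-similarity intuition can be illustrated with a toy grouping of candidate bounding boxes: records rendered on a result page tend to share similar geometry, while noise items (navigation bars, adverts) do not. This is an illustration of the idea only, not the rExtractor algorithm; the threshold and page data are invented:

```python
# Group candidate boxes by size similarity and keep the largest group,
# on the assumption that data records dominate a query result page.

def extract_records(boxes, tol=0.15):
    """boxes: list of (label, width, height). Group boxes whose width and
    height are within `tol` relative difference; return the biggest group."""
    groups = []
    for label, w, h in boxes:
        for g in groups:
            _, gw, gh = g[0]
            if abs(w - gw) / gw < tol and abs(h - gh) / gh < tol:
                g.append((label, w, h))
                break
        else:
            groups.append([(label, w, h)])
    best = max(groups, key=len)
    return [label for label, _, _ in best]

page = [("navbar", 960, 40), ("record1", 600, 120), ("record2", 610, 115),
        ("advert", 300, 250), ("record3", 595, 125), ("record4", 605, 118)]
print(extract_records(page))  # ['record1', 'record2', 'record3', 'record4']
```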
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
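A toy self-healing sketch, for illustration only: here a deletion is repaired by connecting the deleted node's neighbours into a cycle, which preserves connectivity with O(1) new edges per neighbour. The actual Forgiving Graph is a different, distributed construction that additionally guarantees stretch at most log n and degree at most 3d:

```python
# Simplified self-healing repair: ring the former neighbours of a
# deleted node so the graph stays connected with small degree growth.

from collections import deque

def delete_and_repair(adj, v):
    nbrs = sorted(adj.pop(v))
    for u in adj:
        adj[u].discard(v)
    for i, u in enumerate(nbrs):        # connect neighbours in a cycle
        w = nbrs[(i + 1) % len(nbrs)]
        if u != w:
            adj[u].add(w); adj[w].add(u)

def connected(adj):
    start = next(iter(adj)); seen = {start}; q = deque([start])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); q.append(w)
    return len(seen) == len(adj)

# Star graph: deleting the hub would normally disconnect everything.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
delete_and_repair(adj, 0)
print(connected(adj), max(len(s) for s in adj.values()))  # True 2
```

The star example shows the worst case for naive deletion; after the ring repair every surviving node has degree 2, illustrating the kind of bounded degree increase the Forgiving Graph formalises.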