241 results for graph entropy
Abstract:
This paper presents a long-term experiment in which a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and its appearance changes continuously over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and are updated in response to changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation uses three performance metrics that assess the quality of both the adaptive spherical views and the navigation over time.
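The abstract does not give the update rule, but the long-/short-term memory idea can be illustrated with a minimal sketch. Everything here (the `PoseGraphNode` class, its promotion and decay scores, and the thresholds) is hypothetical, not the paper's actual mechanism:

```python
class PoseGraphNode:
    """One node of the pose graph, holding a spherical view as a feature set.

    The promotion/decay scheme below is a hypothetical illustration of a
    long-/short-term memory update: features confirmed across visits move to
    long-term memory (LTM), while unconfirmed ones decay in short-term
    memory (STM) until forgotten.
    """

    def __init__(self, pose, features):
        self.pose = pose                                  # (x, y, theta) of the node
        self.ltm = {}                                     # stable features
        self.stm = {f: [d, 1] for f, d in features.items()}  # descriptor, score

    def update(self, observed, promote_at=3):
        """Reinforce re-observed features, decay and forget missing ones."""
        for fid, desc in observed.items():
            if fid in self.ltm:
                continue                                  # already stable
            if fid in self.stm:
                self.stm[fid][1] += 1                     # reinforcement
                if self.stm[fid][1] >= promote_at:
                    self.ltm[fid] = self.stm.pop(fid)[0]  # promote to LTM
            else:
                self.stm[fid] = [desc, 1]                 # new tentative feature
        for fid in list(self.stm):
            if fid not in observed:
                self.stm[fid][1] -= 1                     # decay
                if self.stm[fid][1] <= 0:
                    del self.stm[fid]                     # forget transient feature
```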
Abstract:
Organizational transformations reliant on successful ICT system developments (continue to) fail to deliver projected benefits, even when contemporary governance models are applied rigorously. Modifications to traditional program, project and systems development management methods have produced little material improvement in transformation success, as they are unable to routinely address the complexity and uncertainty of the dynamic alignment of IS investments and innovation. Complexity theory provides insight into why this phenomenon occurs and is used to develop a conceptualization of complexity in IS-driven organizational transformations. This research-in-progress aims to identify complexity formulations relevant to organizational transformation. Political/power-based influences, interrelated business rules, socio-technical innovation, impacts on stakeholders and emergent behaviors are commonly considered as characterizing complexity; the proposed conceptualization accommodates these as connectivity, irreducibility, entropy and/or information gain in hierarchical approximation and scaling, the number of states in a finite automaton and/or the dimension of an attractor, and information and/or variety.
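As a purely illustrative aside on two of the measures named above, entropy and information gain for a discrete set of observed states can be computed as follows; the example data are made up:

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy H = -sum(p * log2(p)) of observed discrete states."""
    counts, n = Counter(states), len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(states, partition):
    """Reduction in entropy after splitting the observations into groups."""
    n = len(states)
    conditional = sum(len(g) / n * shannon_entropy(g) for g in partition)
    return shannon_entropy(states) - conditional

# Invented example: process states, then the same states grouped by stakeholder.
states = ["a", "a", "b", "b", "c", "c", "c", "c"]
groups = [["a", "a", "b", "b"], ["c", "c", "c", "c"]]
print(shannon_entropy(states))        # 1.5 bits
print(information_gain(states, groups))  # 1.0 bit gained by the grouping
```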
Abstract:
The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industry. Indeed, the signal processing techniques developed to analyse stationary data are not applicable, or suffer a loss of effectiveness, when applied to signals acquired in transient conditions. In this paper, a suitable and original signal processing tool (named EEMED), which can be used for mechanical component diagnostics under any operating condition and noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD), Minimum Entropy Deconvolution (MED) and the analytical approach of the Hilbert transform. The proposed tool supplies diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in the traction equipment of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now been the benchmark for bearing diagnostics.
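The EEMED pipeline itself is not specified in the abstract; the sketch below only illustrates the kind of combination it names, assuming the PyEMD package and a common kurtosis-based rule for selecting the most impulsive IMF (the paper's actual selection logic may differ):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis
from PyEMD import EMD   # pip install EMD-signal

def emd_envelope(signal, fs):
    """Decompose with EMD, keep the most impulsive IMF, return its envelope
    spectrum. Selecting the IMF by kurtosis is a common heuristic for the
    impacts produced by localized bearing faults."""
    imfs = EMD().emd(signal)
    best = imfs[np.argmax([kurtosis(imf) for imf in imfs])]
    envelope = np.abs(hilbert(best))                   # analytic-signal magnitude
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum                             # peaks near fault frequencies
```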
Abstract:
Diagnostics of rolling element bearings is usually performed by means of vibration signals measured by accelerometers placed in the proximity of the bearing under investigation. The aim is to monitor the integrity of the bearing components, in order to avoid catastrophic failures or to implement condition-based maintenance strategies. In particular, the trend in this field is to combine different signal-enhancement and signal-analysis techniques in a single algorithm. Among the former, Minimum Entropy Deconvolution (MED) has been singled out as a key tool, able to highlight within the vibration signal the effect of possible damage in one of the bearing components. This paper presents the application of this technique to signals collected on a simple test-rig, able to test damaged industrial roller bearings in different working conditions. The effectiveness of the technique has been tested by comparing the results of one undamaged bearing with three bearings artificially damaged in different locations, namely on the inner race, the outer race and the rollers. Since MED performance depends on the filter length, the most suitable value of this parameter is selected on the basis of both the application and the measured signals. This represents an original contribution of the paper.
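A minimal sketch of the classical Wiggins-style MED iteration is shown below, with the filter length exposed as the parameter the paper tunes; the default values are placeholders, not the paper's choices:

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def med_filter(x, filter_len=30, n_iter=30):
    """Minimum Entropy Deconvolution via a Wiggins-style fixed-point iteration.

    Finds an FIR filter that maximizes the kurtosis of the filtered signal,
    sharpening the impulses produced by a localized bearing fault. The filter
    length must be tuned to the application; 30 taps is only a placeholder.
    """
    n = len(x)
    # Toeplitz autocorrelation matrix of the input (lags 0 .. filter_len - 1)
    r = np.correlate(x, x, mode="full")[n - 1:n - 1 + filter_len]
    A = toeplitz(r)
    f = np.zeros(filter_len)
    f[filter_len // 2] = 1.0                  # start from a delayed impulse
    for _ in range(n_iter):
        y = np.convolve(x, f)[:n]             # causal filtering, truncated
        b = np.correlate(y ** 3, x, mode="full")[n - 1:n - 1 + filter_len]
        f = solve(A, b)                       # fixed-point update of the filter
        f /= np.linalg.norm(f)                # scale is irrelevant, shape matters
    return np.convolve(x, f)[:n], f
```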
Abstract:
The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often not applicable, or suffer a loss of effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition, Minimum Entropy Deconvolution and the analytical approach of the Hilbert transform. The tool was developed to detect localized faults on bearings of the traction systems of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.
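For reference, the envelope analysis named above as the benchmark can be sketched in a few lines; the mean-removal detail is a generic choice, not taken from the chapter:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Classical envelope analysis: rectify via the analytic signal, then FFT.

    Peaks in the returned spectrum at a bearing defect frequency (BPFO, BPFI,
    BSF) indicate a localized fault; fs is the sampling rate in Hz.
    """
    env = np.abs(hilbert(x))                  # instantaneous amplitude
    env -= env.mean()                         # drop the DC component
    spectrum = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spectrum
```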
Abstract:
A multi-resource multi-stage scheduling methodology is developed to solve short-term open-pit mine production scheduling problems as a generic multi-resource multi-stage scheduling problem. It is modelled using essential characteristics of short-term mining production operations such as drilling, sampling, blasting and excavating under the capacity constraints of mining equipment at each processing stage. Based on an extended disjunctive graph model, a shifting-bottleneck-procedure algorithm is enhanced and applied to obtain feasible short-term open-pit mine production schedules and near-optimal solutions. The proposed methodology and its solution quality are verified and validated using a real mining case study.
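A hedged sketch of the underlying disjunctive-graph evaluation, using networkx and the stage names taken from the abstract: once the disjunctions at each stage are fixed (as a shifting-bottleneck pass would do, assuming one resource per stage here), the makespan is the longest path in the resulting DAG:

```python
import networkx as nx

STAGES = ["drill", "sample", "blast", "excavate"]

def makespan(blocks, proc_time, machine_order):
    """Evaluate one schedule on a disjunctive-graph model.

    blocks: block ids; proc_time[(block, stage)]: processing time;
    machine_order[stage]: the chosen processing sequence of blocks at that
    stage, i.e. the disjunctive arcs already fixed.
    """
    g = nx.DiGraph()
    for b in blocks:                                  # conjunctive (precedence) arcs
        for s1, s2 in zip(STAGES, STAGES[1:]):
            g.add_edge((b, s1), (b, s2), weight=proc_time[(b, s1)])
    for s in STAGES:                                  # fixed disjunctive arcs
        seq = machine_order[s]
        for b1, b2 in zip(seq, seq[1:]):
            g.add_edge((b1, s), (b2, s), weight=proc_time[(b1, s)])
    for b in blocks:                                  # sink collects completion times
        g.add_edge((b, STAGES[-1]), "END", weight=proc_time[(b, STAGES[-1])])
    return nx.dag_longest_path_length(g)              # makespan of this schedule
```

A shifting-bottleneck procedure would repeatedly re-sequence the current bottleneck stage and re-evaluate this graph until no stage improves the makespan.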
Abstract:
This study investigated movement synchronization of players within and between teams during competitive association football performance. Cluster phase analysis was introduced as a method to assess synchronies between whole teams and between individual players with their team as a function of time, ball possession and field direction. Measures of dispersion (SD) and regularity (sample entropy – SampEn – and cross sample entropy – Cross-SampEn) were used to quantify the magnitude and structure of synchrony. Large synergistic relations within each professional team sport collective were observed, particularly in the longitudinal direction of the field (0.89 ± 0.12) compared to the lateral direction (0.73 ± 0.16, p < .01). The coupling between the group measures of the two teams also revealed that changes in the synchrony of each team were intimately related (Cross-SampEn values of 0.02 ± 0.01). Interestingly, ball possession did not influence team synchronization levels. In player–team synchronization, individuals tended to be coordinated under near in-phase modes with team behavior (mean ranges between −7 and 5° of relative phase). The magnitudes of variations were low, but more irregular in time, for the longitudinal (SD: 18 ± 3°; SampEn: 0.07 ± 0.01), compared to the lateral direction (SD: 28 ± 5°; SampEn: 0.06 ± 0.01, p < .05) on-field. Increases in regularity were also observed between the first (SampEn: 0.07 ± 0.01) and second half (SampEn: 0.06 ± 0.01, p < .05) of the observed competitive game. Findings suggest that the method of analysis introduced in the current study may offer a suitable tool for examining teams’ synchronization behaviors and the mutual influence of each team’s cohesiveness in competing social collectives.
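Sample entropy is a standard quantity; a simplified self-contained implementation, following Richman and Moorman's definition with a common default tolerance, looks like this:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template matches of length m
    and A matches of length m + 1, self-matches excluded. Lower values
    indicate a more regular, predictable series. Simplified sketch; edge
    handling differs slightly from the canonical definition."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                     # common default tolerance

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return ((dist <= r).sum() - len(templates)) / 2   # drop self-matches

    b, a = count(m), count(m + 1)
    return -np.log(a / b)
```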
Abstract:
This study investigated changes in the complexity (magnitude and structure of variability) of the collective behaviours of association football teams during competitive performance. Raw positional data from an entire competitive match between two professional teams were obtained with the ProZone® tracking system. Five compound positional variables were used to investigate the collective patterns of performance of each team including: surface area, stretch index, team length, team width, and geometrical centre. Analyses involved the coefficient of variation (%CV) and approximate entropy (ApEn), as well as the linear association between the two parameters. Collective measures successfully captured the idiosyncratic behaviours of each team and their variations across the six time periods of the match. Key events such as goals scored and game breaks (such as half time and full time) seemed to influence the collective patterns of performance. While ApEn values significantly decreased during each half, the %CV increased. Teams seem to become more regular and predictable, but with increased magnitudes of variation in their organisational shape over the natural course of a match.
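The compound positional variables are easy to compute from raw player coordinates; the definitions below are common ones in this literature and may differ in detail from those used in the paper:

```python
import numpy as np
from scipy.spatial import ConvexHull

def collective_measures(xy):
    """Compound positional variables for one team at one time frame.

    xy: (n_players, 2) array of pitch coordinates, column 0 longitudinal
    and column 1 lateral.
    """
    xy = np.asarray(xy, dtype=float)
    centre = xy.mean(axis=0)                          # geometrical centre
    return {
        "geometrical_centre": centre,
        "surface_area": ConvexHull(xy).volume,        # 2-D hull "volume" is the area
        "stretch_index": np.linalg.norm(xy - centre, axis=1).mean(),
        "team_length": np.ptp(xy[:, 0]),              # longitudinal extent
        "team_width": np.ptp(xy[:, 1]),               # lateral extent
    }
```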
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
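A toy two-node fragment showing how such a DAG with conditional probability tables might be expressed, here using the pgmpy library as an assumed tool (the paper does not name one); all node names and probabilities are invented:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical fragment: the real model's nodes, states and probabilities
# would come from the clinical analysis and literature review in the paper.
model = BayesianNetwork([("SetupCorrection", "DeliveryAccurate")])
model.add_cpds(
    TabularCPD("SetupCorrection", 2, [[0.7], [0.3]]),   # applied / not applied
    TabularCPD("DeliveryAccurate", 2,
               [[0.9, 0.4],                 # P(accurate | correction state)
                [0.1, 0.6]],
               evidence=["SetupCorrection"], evidence_card=[2]),
)
model.check_model()
posterior = VariableElimination(model).query(
    ["DeliveryAccurate"], evidence={"SetupCorrection": 0})
print(posterior)   # belief updating given an observed set-up correction
```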
Abstract:
Whole image descriptors have recently been shown to be remarkably robust to perceptual change especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of these arbitrary thresholds limits the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
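One plausible shape for such a probability model (not necessarily the paper's) treats match and non-match descriptor distances as two class-conditional Gaussians and applies Bayes' rule, with the class statistics drifted online; all parameter values below are placeholders:

```python
from scipy.stats import norm

class OnlineMatchModel:
    """Posterior probability that a whole-image descriptor distance means
    "same place", modelled with two Gaussian likelihoods (match / non-match).
    A guess at the flavour of the approach; the parameters and the online
    update rule here are placeholders, not the published ones."""

    def __init__(self, prior_match=0.1):
        self.prior = prior_match
        self.match = [0.2, 0.1]               # mean, std of match distances
        self.non_match = [0.8, 0.2]           # mean, std of non-match distances

    def p_match(self, d):
        lm = norm.pdf(d, *self.match) * self.prior
        ln = norm.pdf(d, *self.non_match) * (1 - self.prior)
        return lm / (lm + ln)                 # Bayes' rule over the two classes

    def update(self, d, is_match, lr=0.05):
        """Drift the class statistics toward new labelled distances (online
        training, replacing a prior offline training phase)."""
        params = self.match if is_match else self.non_match
        params[0] += lr * (d - params[0])
        params[1] += lr * (abs(d - params[0]) - params[1])
```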
Abstract:
Proton-bound dimers consisting of two glycerophospholipids with different headgroups were prepared using negative ion electrospray ionization and dissociated in a triple quadrupole mass spectrometer. Analysis of the tandem mass spectra of the dimers using the kinetic method provides, for the first time, an order of acidity for the phospholipid classes in the gas phase of PE < PA << PG < PS < PI. Hybrid density functional calculations on model phospholipids were used to predict the absolute deprotonation enthalpies of the phospholipid classes from isodesmic proton transfer reactions with phosphoric acid. The computational data largely support the experimental acidity trend, with the exception of the relative acidity ranking of the two most acidic phospholipid species. Possible causes of the discrepancy between experiment and theory are discussed and the experimental trend is recommended. The sequence of gas phase acidities for the phospholipid headgroups is found to (1) have little correlation with the relative ionization efficiencies of the phospholipid classes observed in the negative ion electrospray process, and (2) correlate well with fragmentation trends observed upon collisional activation of phospholipid [M − H]⁻ anions.
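The kinetic method's core relation is simple enough to show as a worked example: the log ratio of the two deprotonated-fragment intensities is proportional to the difference in deprotonation enthalpies, scaled by an effective temperature. The intensities and T_eff below are invented, and entropy differences between the two dissociation channels are neglected:

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def delta_acidity(i_a, i_b, t_eff=400.0):
    """Kinetic-method estimate of DeltaH_acid(B) - DeltaH_acid(A) in kJ/mol,
    from the fragment ion intensities of a proton-bound dimer [A.H.B]-.
    A positive result means A- dominates, i.e. A is the stronger gas-phase
    acid. t_eff is a hypothetical effective temperature in kelvin."""
    return R * t_eff * math.log(i_a / i_b)

# An intensity ratio of 5 corresponds to ~5.4 kJ/mol at T_eff = 400 K.
print(delta_acidity(500, 100))
```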
Abstract:
Detecting anomalies in online social networks is a significant task, as it assists in revealing useful and interesting information about user behavior on the network. This paper proposes a rule-based hybrid method using graph theory, Fuzzy clustering and Fuzzy rules for modeling the user relationships inherent in online social networks and for identifying anomalies. Fuzzy C-Means clustering is used to cluster the data and a Fuzzy inference engine is used to generate rules based on the cluster behavior. The proposed method achieves improved accuracy in identifying anomalies in comparison to existing methods.
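A minimal sketch of the clustering step plus one illustrative rule, assuming the scikit-fuzzy package; the feature set and the rule are stand-ins, not the paper's rule base:

```python
import numpy as np
import skfuzzy as fuzz   # pip install scikit-fuzzy

def flag_anomalies(features, n_clusters=3, threshold=0.5):
    """Fuzzy C-Means plus one illustrative rule: a user whose maximum cluster
    membership is weak belongs clearly to no behavioural cluster -> anomaly.

    features: (n_users, n_features) matrix derived from the social graph
    (e.g. degree, clustering coefficient); hypothetical feature choices.
    """
    cntr, u, *_ = fuzz.cluster.cmeans(
        features.T, c=n_clusters, m=2.0, error=1e-5, maxiter=1000)
    max_membership = u.max(axis=0)            # strongest cluster per user
    return np.where(max_membership < threshold)[0]
```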
Abstract:
We introduce the notion of distributed password-based public-key cryptography, where a virtual high-entropy private key is implicitly defined as a concatenation of low-entropy passwords held in separate locations. The users can jointly perform private-key operations by exchanging messages over an arbitrary channel, based on their respective passwords, without ever sharing their passwords or reconstituting the key. Focusing on the case of ElGamal encryption as an example, we start by formally defining ideal functionalities for distributed public-key generation and virtual private-key computation in the UC model. We then construct efficient protocols that securely realize them in either the RO model (for efficiency) or the CRS model (for elegance). We conclude by showing that our distributed protocols generalize to a broad class of “discrete-log”-based public-key cryptosystems, which notably includes identity-based encryption. This opens the door to a powerful extension of IBE with a virtual PKG made of a group of people, each one memorizing a small portion of the master key.
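The flavour of "jointly performing private-key operations without reconstituting the key" can be illustrated with textbook distributed ElGamal decryption over additive key shares. This is not the paper's password-based UC protocol (shares here are random, not password-derived), and the toy group parameters are far too small for real use:

```python
import secrets

# Toy group: p prime, g generates the subgroup of prime order q in Z_p*.
p, q, g = 23, 11, 4

def keygen(n_parties):
    """Each party holds a share x_i; the virtual key x = sum(x_i) mod q
    is never assembled anywhere."""
    shares = [secrets.randbelow(q) for _ in range(n_parties)]
    h = pow(g, sum(shares) % q, p)            # joint public key
    return shares, h

def encrypt(h, m):
    k = secrets.randbelow(q)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(shares, c1, c2):
    """Each party contributes c1^{x_i}; no one learns x or another's share."""
    joint = 1
    for x_i in shares:                        # in practice: one message per party
        joint = (joint * pow(c1, x_i, p)) % p
    return (c2 * pow(joint, -1, p)) % p

shares, h = keygen(3)
c1, c2 = encrypt(h, m=9)
assert decrypt(shares, c1, c2) == 9
```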
Abstract:
Most previous work on unconditionally secure multiparty computation has focused on computing over a finite field (or ring). Multiparty computation over other algebraic structures has not received much attention, but is an interesting topic whose study may provide new and improved tools for certain applications. At CRYPTO 2007, Desmedt et al. introduced a construction for a passive-secure multiparty multiplication protocol for black-box groups, reducing it to a certain graph coloring problem and leaving as an open problem to achieve security against active attacks. We present the first n-party protocol for unconditionally secure multiparty computation over a black-box group which is secure under an active attack model, tolerating any adversary structure Δ satisfying the Q³ property (in which no union of three subsets from Δ covers the whole player set), which is known to be necessary for achieving security in the active setting. Our protocol uses Maurer’s Verifiable Secret Sharing (VSS) but preserves the essential simplicity of the graph-based approach of Desmedt et al., which avoids each shareholder having to rerun the full VSS protocol after each local computation. A corollary of our result is a new active-secure protocol for general multiparty computation of an arbitrary Boolean circuit.
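The share distribution underlying such protocols can be illustrated with plain replicated additive sharing over an abelian group (Z_q stands in for the black-box group; only the group operation is used). This is a data-structure sketch for the threshold case n = 4, t = 1, not Maurer's full VSS or the paper's active-secure protocol:

```python
import secrets
from itertools import combinations

Q = 2 ** 61 - 1   # stand-in group order; only addition mod Q is used

def share(secret, n=4, t=1):
    """Split the secret as a sum of sub-shares r_T, one per unqualified set T
    of size t; party i receives every r_T with i not in T. Any single
    corrupted party misses one sub-share, so it learns nothing, and for
    n = 4, t = 1 no three unqualified sets cover all parties (Q3)."""
    unqualified = list(combinations(range(n), t))
    subs = [secrets.randbelow(Q) for _ in unqualified[:-1]]
    subs.append((secret - sum(subs)) % Q)             # force the correct sum
    shares = {i: {} for i in range(n)}
    for T, r in zip(unqualified, subs):
        for i in range(n):
            if i not in T:
                shares[i][T] = r
    return shares

def reconstruct(held_shares):
    """Any qualified set jointly holds every sub-share at least once."""
    seen = {}
    for held in held_shares:
        seen.update(held)
    return sum(seen.values()) % Q

s = 123456789
sh = share(s)
assert reconstruct([sh[0], sh[1]]) == s   # parties {0, 1} suffice here
```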
Abstract:
The ability to understand and predict how thermal, hydrological, mechanical and chemical (THMC) processes interact is fundamental to many research initiatives and industrial applications. We present (1) a new Thermal–Hydrological–Mechanical–Chemical (THMC) coupling formulation, based on non-equilibrium thermodynamics; (2) show how THMC feedback is incorporated in the thermodynamic approach; (3) suggest a unifying thermodynamic framework for multi-scaling; and (4) formulate a new rationale for assessing upper and lower bounds of dissipation for THMC processes. The technique is based on deducing time and length scales suitable for separating processes using a macroscopic finite time thermodynamic approach. We show that if the time and length scales are suitably chosen, the calculation of entropic bounds can be used to describe three different types of material and process uncertainties: geometric uncertainties, stemming from the microstructure; process uncertainty, stemming from the correct derivation of the constitutive behavior; and uncertainties in time evolution, stemming from the path dependence of the time integration of the irreversible entropy production. Although the approach is specifically formulated here for THMC coupling, we suggest that it has a much broader applicability. In a general sense it consists of finding the entropic bounds of the dissipation, defined by the product of thermodynamic force times thermodynamic flux, which in material sciences correspond to generalized stress and generalized strain rate, respectively.
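As a numeric illustration of "entropic bounds of the dissipation" (force times flux), the sketch below evaluates the entropy production over a made-up ensemble of microstructural realizations; all numbers are invented and the three channels are only nominal:

```python
import numpy as np

rng = np.random.default_rng(0)

def dissipation(force, flux):
    """Entropy production Phi = sum_i X_i * J_i (thermodynamic force x flux);
    in the mechanical case X and J are the generalized stress and the
    generalized strain rate."""
    return float(np.dot(force, flux))

# Made-up ensemble: each row is one realization of the uncertain
# microstructure, perturbing the fluxes at a fixed generalized stress state.
stress = np.array([100.0e6, 40.0e6, 10.0e6])            # Pa, three THMC channels
base_strain_rate = np.array([1e-12, 5e-13, 2e-13])      # 1/s
realizations = base_strain_rate * rng.lognormal(0.0, 0.3, size=(1000, 3))

phi = np.array([dissipation(stress, j) for j in realizations])
print(f"lower bound: {phi.min():.3e} W/m^3, upper bound: {phi.max():.3e} W/m^3")
```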