867 results for Graph-based method
Abstract:
We propose a method to measure real-valued time series irreversibility which combines two different tools: the horizontal visibility algorithm and the Kullback-Leibler divergence. This method maps a time series to a directed network according to a geometric criterion. The degree of irreversibility of the series is then estimated by the Kullback-Leibler divergence (i.e. the distinguishability) between the in- and out-degree distributions of the associated graph. The method is computationally efficient and does not require any ad hoc symbolization process. We find that the method correctly distinguishes between reversible and irreversible stationary time series, and we include analytical and numerical studies of its performance for: (i) reversible stochastic processes (uncorrelated and Gaussian linearly correlated), (ii) irreversible stochastic processes (a discrete flashing ratchet in an asymmetric potential), (iii) reversible (conservative) and irreversible (dissipative) chaotic maps, and (iv) dissipative chaotic maps in the presence of noise. Two alternative graph functionals, the degree and the degree-degree distributions, can be used as the Kullback-Leibler divergence argument. The former is simpler and more intuitive and can be used as a benchmark, but in the case of an irreversible process with null net current, the degree-degree distribution has to be considered to identify the irreversible nature of the series.
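To fix ideas, here is a minimal Python sketch of the pipeline described above: it builds the directed horizontal visibility graph of a series and scores irreversibility as the Kullback-Leibler divergence between the out- and in-degree histograms. The quadratic worst-case construction and the restriction of the divergence to the common degree support are simplifying choices made here for brevity, not details taken from the paper.

    import math
    import random
    from collections import Counter

    def hvg_degrees(x):
        # Directed horizontal visibility graph: i -> j (i < j) iff every value
        # strictly between them lies below both x[i] and x[j].
        n = len(x)
        k_out, k_in = [0] * n, [0] * n
        for i in range(n - 1):
            top = -math.inf                  # running max of intermediate values
            for j in range(i + 1, n):
                if top < x[i] and top < x[j]:
                    k_out[i] += 1
                    k_in[j] += 1
                top = max(top, x[j])
                if top >= x[i]:              # nothing beyond a taller point is visible
                    break
        return k_out, k_in

    def degree_kld(k_out, k_in):
        # D(P_out || P_in), summed over the common support of the two histograms.
        n = len(k_out)
        p, q = Counter(k_out), Counter(k_in)
        return sum((c / n) * math.log((c / n) / (q[k] / n))
                   for k, c in p.items() if q[k] > 0)

    series = [random.gauss(0.0, 1.0) for _ in range(5000)]   # reversible noise
    print(degree_kld(*hvg_degrees(series)))                  # expected to be near 0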
Abstract:
In this paper we focus on the selection of safeguards in a fuzzy risk analysis and management methodology for information systems (IS). Assets are connected by dependency relationships, and a failure of one asset may affect other assets. After computing impact and risk indicators associated with previously identified threats, we identify and apply safeguards to reduce risks in the IS by minimizing the transmission probabilities of failures throughout the asset network. However, as safeguards have associated costs, the aim is to select the safeguards that minimize costs while keeping the risk within acceptable levels. To do this, we propose a dynamic programming-based method that incorporates simulated annealing to tackle the resulting optimization problems.
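A minimal sketch of the simulated-annealing component is given below, assuming a caller-supplied risk(x) function that stands in for the propagation of failure probabilities through the asset dependency network; the penalty weight, cooling schedule and single-flip neighbourhood are illustrative choices of this sketch, not details of the methodology.

    import math
    import random

    def select_safeguards(costs, risk, max_risk, steps=20000, t0=1.0, alpha=0.9995):
        # Choose a 0/1 safeguard vector minimising total cost subject to
        # risk(x) <= max_risk, with infeasibility handled by a large penalty.
        n = len(costs)
        def objective(v):
            cost = sum(c for c, bit in zip(costs, v) if bit)
            return cost + 1e6 * max(0.0, risk(v) - max_risk)
        x = [1] * n                          # start fully safeguarded (feasible)
        f = objective(x)
        best, best_f, t = x[:], f, t0
        for _ in range(steps):
            y = x[:]
            y[random.randrange(n)] ^= 1      # flip one safeguard on/off
            fy = objective(y)
            if fy < f or random.random() < math.exp((f - fy) / t):
                x, f = y, fy                 # accept downhill, sometimes uphill
                if f < best_f:
                    best, best_f = x[:], f
            t *= alpha                       # geometric cooling
        return best, best_f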
Abstract:
In this paper, we propose a novel method for the unsupervised clustering of graphs in the context of the constellation approach to object recognition. The method is an EM central clustering algorithm which builds prototypical graphs on the basis of fast matching with graph transformations. Our experiments, both with random graphs and in realistic situations (visual localization), show that our prototypes improve on the set median graphs and also on the prototypes derived from our previous incremental method. We also discuss how the method scales with a growing number of images.
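The EM prototype construction hinges on fast graph matching and does not reduce to a few lines, but the set median baseline that the learned prototypes are compared against can be sketched directly; the generic (and slow) NetworkX edit distance below is used purely for illustration and is not the matching procedure of the paper.

    import networkx as nx

    def set_median(graphs):
        # The set median is the member graph minimising the sum of distances
        # to all other graphs in the set.
        def total_distance(g):
            return sum(nx.graph_edit_distance(g, h) for h in graphs if h is not g)
        return min(graphs, key=total_distance)

    cluster = [nx.gnp_random_graph(6, 0.4, seed=s) for s in range(5)]
    print(set_median(cluster))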
Abstract:
Since the early days of 3D computer vision, it has been necessary to use techniques that reduce the data to a tractable size while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is becoming even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques which are based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
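As an illustration of what a feature-weighted kernel can look like, the sketch below keeps points whose normals disagree with their neighbourhood with higher probability, thinning flat regions while preserving edges and corners; this is one plausible reading of a normal-based sampling, not the exact kernels compared in the paper, and the saliency floor and neighbourhood size are assumptions of this sketch.

    import numpy as np
    from scipy.spatial import cKDTree

    def normal_based_downsample(points, normals, keep_ratio=0.1, k=16, seed=0):
        # Saliency: 1 minus agreement between a point's normal and the average
        # normal of its k nearest neighbours (high on edges, low on flat areas).
        tree = cKDTree(points)
        _, nbr = tree.query(points, k=k)
        mean_nbr = normals[nbr].mean(axis=1)
        mean_nbr /= np.linalg.norm(mean_nbr, axis=1, keepdims=True)
        saliency = 1.0 - np.abs(np.einsum('ij,ij->i', normals, mean_nbr))
        w = saliency + 1e-3                  # small floor so flat regions survive
        w /= w.sum()
        rng = np.random.default_rng(seed)
        m = max(1, int(keep_ratio * len(points)))
        idx = rng.choice(len(points), size=m, replace=False, p=w)
        return points[idx], normals[idx]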
Abstract:
Aim: The aim of this study was to assess the discriminatory power and potential turnaround time (TAT) of a PCR-based method for the detection of methicillin-resistant Staphylococcus aureus (MRSA) from screening swabs. Methods: Screening swabs were examined using the current laboratory protocol of direct culture on mannitol salt agar supplemented with oxacillin (MSAO-direct). The PCR method involved pre-incubation in broth for 4 hours followed by a multiplex PCR with primers directed to the mecA and nuc genes of MRSA. The reference standard was determined by pre-incubation in broth for 4 hours followed by culture on MSAO (MSAO-broth). Results: A total of 256 swabs were analysed. The rates of detection of MRSA using MSAO-direct, MSAO-broth and PCR were 10.2%, 13.3% and 10.2%, respectively. For PCR, the sensitivity, specificity, positive predictive value and negative predictive value were 66.7% (95% CI 51.9-83.3%), 98.6% (95% CI 97.1-100%), 84.6% (95% CI 76.2-100%) and 95.2% (95% CI 92.4-98.0%), respectively; these results were almost identical to those obtained with MSAO-direct. The agreement between MSAO-direct and PCR was 61.5% (95% CI 42.8-80.2%) for positive results, 95.6% (95% CI 93.0-98.2%) for negative results and 92.2% (95% CI 88.9-95.5%) overall. Conclusions: (1) The discriminatory power of PCR and MSAO-direct is similar, but the level of agreement, especially for true positive results, is low. (2) The potential TAT of the PCR method provides a marked advantage over conventional methods. (3) Further modifications to the PCR method, such as increased broth incubation time, use of a selective broth and adaptation to real-time PCR, may improve sensitivity and TAT.
Prediction of slurry transport in SAG mills using SPH fluid flow in a dynamic DEM based porous media
Abstract:
DEM modelling of the motion of the coarse fractions of the charge inside SAG mills has now been well established for more than a decade. In these models the effect of slurry has broadly been ignored due to its complexity. Smoothed particle hydrodynamics (SPH) provides a particle-based method for modelling complex free surface fluid flows and is well suited to modelling fluid flow in mills. Previous modelling has demonstrated the powerful ability of SPH to capture dynamic fluid flow effects such as lifters crashing into slurry pools, fluid draining from lifters, flow through grates and pulp lifter discharge. However, all these examples were limited to modelling only the slurry in the mill, without the charge. In this paper, we represent the charge as a dynamic porous medium through which the SPH fluid is then able to flow. The porous media properties (specifically the spatial distributions of porosity and velocity) are predicted by time averaging the mill charge predicted by a large-scale DEM model. This allows prediction of transient and steady state slurry distributions in the mill and allows their variation with operating parameters, such as slurry viscosity and slurry volume, to be explored.
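The time-averaging step can be sketched compactly: DEM particle snapshots are binned onto a voxel grid and the accumulated solid volume is converted into the porosity field through which the SPH fluid would flow. Depositing each particle's full sphere volume into its host cell is a deliberate simplification of this sketch; the paper's averaging (and its companion velocity field) is more careful.

    import numpy as np

    def porosity_field(snapshots, radius, bounds, cells):
        # snapshots: iterable of (n, 3) arrays of DEM particle centres.
        lo = np.asarray(bounds[0], dtype=float)
        hi = np.asarray(bounds[1], dtype=float)
        cell_vol = np.prod((hi - lo) / np.asarray(cells))
        sphere_vol = 4.0 / 3.0 * np.pi * radius ** 3
        solid = np.zeros(cells)
        n_snaps = 0
        for pos in snapshots:
            ijk = ((pos - lo) / (hi - lo) * np.asarray(cells)).astype(int)
            ijk = np.clip(ijk, 0, np.asarray(cells) - 1)
            np.add.at(solid, tuple(ijk.T), sphere_vol)   # accumulate solid volume
            n_snaps += 1
        solid_fraction = solid / (n_snaps * cell_vol)
        return 1.0 - np.clip(solid_fraction, 0.0, 1.0)   # porosity per voxel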
Abstract:
This study was undertaken to develop a simple laboratory-based method for simulating the freezing profiles of beef trim so that their effect on E. coli O157 survival could be better assessed. A commercially available apparatus of the type used for freezing embryos, together with an associated temperature logger and software, was used for this purpose, with a -80 °C freezer as a heat sink. Four typical beef trim freezing profiles, of different starting temperatures or lengths, were selected and modelled as straight lines for ease of manipulation. A further theoretical profile with an extended freezing plateau was also developed. The laboratory-based setup worked well and the modelled freezing profiles fitted closely to the original data. No change in numbers of any of the strains was apparent for the three simulated profiles of different lengths starting at 25 °C. Slight but significant (P < 0.05) decreases in numbers (~0.2 log cfu g⁻¹) of all strains were apparent for a profile starting at 12 °C. A theoretical version of this profile, with the freezing plateau phase extended from 11 h to 17 h, resulted in significant (P < 0.05) decreases in numbers (~1.2 log cfu g⁻¹) of all strains. These results indicate possible avenues for future research into controlling this pathogen. The method developed in this study proved a useful and cost-effective way of simulating the freezing profiles of beef trim.
Abstract:
A sensitive quantitative reversed-phase HPLC method is described for measuring bacterial proteolysis and proteinase activity in UHT milk. The analysis is performed on a TCA filtrate of the milk. The optimum concentration of TCA was found to be 4%; at lower concentrations, non-precipitated protein blocked the HPLC while higher concentrations yielded lower amounts of peptides. The method showed greater sensitivity and reproducibility than a fluorescamine-based method. Quantification of the HPLC method was achieved by use of an external dipeptide standard or a standard proteinase.
Abstract:
We have developed an alignment-free method that calculates phylogenetic distances using a maximum-likelihood approach for a model of sequence change on patterns that are discovered in unaligned sequences. To evaluate the phylogenetic accuracy of our method, and to conduct a comprehensive comparison of existing alignment-free methods (freely available as the Python package decaf+py at http://www.bioinformatics.org.au), we have created a data set of reference trees covering a wide range of phylogenetic distances. Amino acid sequences were evolved along the trees and input to the tested methods; from their calculated distances we inferred trees whose topologies we compared to the reference trees. We find our pattern-based method statistically superior to all other tested alignment-free methods. We also demonstrate the general advantage of alignment-free methods over an approach based on automated alignments when sequences violate the assumption of collinearity. Similarly, we compare methods on empirical data from an existing alignment benchmark set that we used to derive reference distances and trees. Our pattern-based approach yields distances that show a linear relationship to reference distances over a substantially longer range than other alignment-free methods. The pattern-based approach outperforms the other alignment-free methods, and its phylogenetic accuracy is statistically indistinguishable from that of alignment-based distances.
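For contrast with the maximum-likelihood pattern method, the simplest possible alignment-free distance, a cosine dissimilarity between k-mer frequency vectors, can be written in a few lines; it is shown here only to fix ideas and is far cruder than the estimator implemented in decaf+py.

    import math
    from collections import Counter

    def kmer_distance(seq1, seq2, k=3):
        # Cosine dissimilarity between k-mer count vectors of two sequences.
        c1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
        c2 = Counter(seq2[i:i + k] for i in range(len(seq2) - k + 1))
        dot = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())
        norm = math.sqrt(sum(v * v for v in c1.values()))
        norm *= math.sqrt(sum(v * v for v in c2.values()))
        return 1.0 - dot / norm if norm else 1.0

    print(kmer_distance("MKVLITGAGGFIG", "MKVLVTGSGGFLG"))  # small for similar sequences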
Abstract:
Models and model transformations are the core concepts of OMG's MDA(TM) approach. Within this approach, most models are derived from the MOF and have a graph-based nature. In contrast, most current model transformations are specified textually. To enable a graphical specification of model transformation rules, this paper proposes to use triple graph grammars as a declarative specification formalism. These triple graph grammars can be specified within the FUJABA tool, and we argue that such rules are easier to specify and become more understandable and maintainable. To show the practicability of our approach, we present how to generate Tefkat rules from triple graph grammar rules, which helps to integrate triple graph grammars with a state-of-the-art model transformation tool and shows the expressiveness of the concept.
Abstract:
Background/aims: Macular pigment is thought to protect the macula against exposure to light and oxidative stress, both of which may play a role in the development of age-related macular degeneration. The aim was to clinically evaluate a novel cathode-ray-tube-based method for measurement of macular pigment optical density (MPOD) known as apparent motion photometry (AMP). Methods: The authors took repeated readings of MPOD centrally (0°) and at 3° eccentricity for 76 healthy subjects (mean (±SD) age 26.5±13.2 years, range 18-74 years). Results: The overall mean MPOD for the cohort was 0.50±0.24 at 0° and 0.28±0.20 at 3° eccentricity; these values were significantly different (t=-8.905, p<0.001). The coefficients of repeatability were 0.60 and 0.48 for the 0° and 3° measurements, respectively. Conclusions: The data suggest that when the same operator is taking repeated 0° AMP MPOD readings over time, only changes of more than 0.60 units can be classed as clinically significant. In other words, AMP is not suitable for monitoring changes in MPOD over time, as increases of this magnitude would not be expected even in response to dietary modification or nutritional supplementation.
Abstract:
We present a novel market-based method, inspired by retail markets, for resource allocation in fully decentralised systems where agents are self-interested. Our market mechanism requires no coordinating node or complex negotiation. The stability of outcome allocations, those at equilibrium, is analysed and compared for three buyer behaviour models. In order to capture the interaction between self-interested agents, we propose the use of competitive coevolution. Our approach is both highly scalable and may be tuned to achieve specified outcome resource allocations. We demonstrate the behaviour of our approach in simulation, where evolutionary market agents act on behalf of service-providing nodes to adaptively price their resources over time in response to market conditions. We show that this leads the system to the predicted outcome resource allocation. Furthermore, the system remains stable in the presence of small changes in price when buyers' decision functions degrade gracefully.
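The paper's agents evolve their pricing strategies by competitive coevolution; the toy simulation below replaces that with a naive multiplicative price update purely to show the decentralised retail setting, and its capacities, demand model and step size are assumptions of this sketch rather than anything from the paper.

    import random

    def simulate(sellers=10, buyers=40, rounds=200, step=0.05, capacity=5):
        # Each round, buyers pick the cheapest seller with spare capacity;
        # each seller then nudges its price up if it sold out, down if idle.
        price = [random.uniform(0.5, 1.5) for _ in range(sellers)]
        for _ in range(rounds):
            stock = [capacity] * sellers
            for _ in range(buyers):
                open_sellers = [s for s in range(sellers) if stock[s] > 0]
                if not open_sellers:
                    break
                s = min(open_sellers, key=lambda i: price[i])
                stock[s] -= 1
            for s in range(sellers):          # local update, no coordinator
                price[s] *= (1 + step) if stock[s] == 0 else (1 - step)
        return price

    print(sorted(round(p, 2) for p in simulate()))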
Abstract:
We consider a method, based on finite state automata, for solving a system of linear Diophantine equations with coefficients from the set {-1,0,1} and solutions in {0,1}.
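A crude stand-in for the automaton construction is a left-to-right dynamic program whose states are the reachable partial sums; the sketch below handles a single equation, whereas the paper's automaton processes the whole system symbolically.

    def binary_solutions(coeffs, target):
        # Enumerate all x in {0,1}^n with sum(a_i * x_i) = target, a_i in {-1,0,1}.
        states = {0: [[]]}                   # partial sum -> list of bit prefixes
        for a in coeffs:
            nxt = {}
            for s, prefixes in states.items():
                for bit in (0, 1):
                    nxt.setdefault(s + a * bit, []).extend(p + [bit] for p in prefixes)
            states = nxt
        return states.get(target, [])

    print(binary_solutions([1, -1, 1, 0, -1], 1))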
Abstract:
The scope of this paper is to present the Pulse Width Modulation (PWM) based method for Active Power (AP) and Reactive Power (RP) measurements as it can be applied in power meters. The main aim of the material presented is twofold: first, to present a realization methodology for the proposed algorithm, and second, to verify the algorithm's robustness and validity. The method takes advantage of the fact that the frequencies present in a power line lie within a specific fundamental frequency range (a range centred on 50 Hz or 60 Hz) and that, when harmonics are present, the frequencies of those dominating the power line spectrum can be specified on the basis of the fundamental. In contrast to a number of existing methods, the presented method requires no time delay or shifting of the input signal; in particular, the delay of the current signal by π/2 with respect to the voltage signal, required by many existing measurement techniques, does not apply in the case of the PWM method.
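As a point of reference, active and reactive power can already be recovered from synchronously sampled voltage and current without shifting either signal, via the power triangle; the baseline below is exact only for sinusoidal conditions and is emphatically not the PWM-based algorithm of the paper.

    import math

    def active_reactive(v, i):
        # P from the mean instantaneous power, Q from the power triangle
        # (valid for sinusoidal signals sampled over whole cycles).
        n = len(v)
        p = sum(vk * ik for vk, ik in zip(v, i)) / n
        s = math.sqrt(sum(vk * vk for vk in v) / n) * \
            math.sqrt(sum(ik * ik for ik in i) / n)
        return p, math.sqrt(max(s * s - p * p, 0.0))

    fs, f = 10000, 50                        # 10 kHz sampling, 50 Hz fundamental
    t = [k / fs for k in range(fs // f)]     # exactly one cycle
    v = [230 * math.sqrt(2) * math.sin(2 * math.pi * f * tk) for tk in t]
    i = [10 * math.sqrt(2) * math.sin(2 * math.pi * f * tk - math.pi / 6) for tk in t]
    print(active_reactive(v, i))             # ~(1991.9, 1150.0): P = VI cos(30°), Q = VI sin(30°)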
Abstract:
In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
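The kernel evaluation itself is short once a density matrix per graph is fixed. The sketch below substitutes the common Laplacian-based density matrix rho = L / tr(L) for the quantum-walk-derived one used in the paper (an assumption made purely to keep the example self-contained) and zero-pads to a common size so graphs of different orders can be compared.

    import numpy as np
    import networkx as nx

    def density_matrix(g, n):
        # rho = L / tr(L), zero-padded to n x n; trace stays 1, rho stays PSD.
        L = nx.laplacian_matrix(g).toarray().astype(float)
        rho = np.zeros((n, n))
        rho[:L.shape[0], :L.shape[1]] = L / np.trace(L)
        return rho

    def von_neumann_entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    def qjsd_kernel(g1, g2):
        # k(G1, G2) = exp(-QJSD(rho1, rho2)).
        n = max(g1.number_of_nodes(), g2.number_of_nodes())
        r1, r2 = density_matrix(g1, n), density_matrix(g2, n)
        qjsd = von_neumann_entropy((r1 + r2) / 2) \
             - 0.5 * (von_neumann_entropy(r1) + von_neumann_entropy(r2))
        return float(np.exp(-qjsd))

    print(qjsd_kernel(nx.cycle_graph(6), nx.path_graph(6)))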