200 results for PROBABILISTIC TELEPORTATION


Relevance: 10.00%

Abstract:

We introduce Claude Lévi-Strauss's canonical formula (CF), an attempt to rigorously formalise the general narrative structure of myth. The formula takes the Klein group as its basis, but recent work draws attention to its natural quaternion form, which opens up the possibility that it may require a quantum-inspired interpretation. We present the CF in a form that can be understood by a non-anthropological audience, using the formalisation of a key myth (that of Adonis) to draw attention to its mathematical structure. The future formalisation of mythological structure within a quantum-inspired framework is proposed and discussed, with a probabilistic interpretation further generalising the formula.
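The Klein four-group the abstract names as the CF's algebraic basis can be sketched concretely: its elements behave like pairs of bits under component-wise XOR, and every element is its own inverse. The mapping of group elements to mythemes is the anthropological part and is not attempted here; this is only the algebra, as a minimal illustration.

```python
# The Klein four-group V = {e, a, b, ab}, modelled as bit pairs over GF(2)
# with component-wise XOR as the group operation. Illustrative only; the
# anthropological interpretation of the elements is not taken from the source.
from itertools import product

ELEMENTS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # e, a, b, ab

def op(x, y):
    """Group operation: component-wise XOR."""
    return (x[0] ^ y[0], x[1] ^ y[1])

# Every element is its own inverse, a defining property of the Klein group.
assert all(op(x, x) == (0, 0) for x in ELEMENTS)
# The operation is commutative and closed.
assert all(op(x, y) == op(y, x) and op(x, y) in ELEMENTS
           for x, y in product(ELEMENTS, repeat=2))
print("Klein group axioms verified")
```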

Relevance: 10.00%

Abstract:

Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds N_r. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt’00, and show their inefficiency. We then introduce a new method called XSL that exploits the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in N_r, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always requires more work than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.

Relevance: 10.00%

Abstract:

We study the natural problem of secure n-party computation (in the passive, computationally unbounded attack model) of the n-product function f_G(x_1,...,x_n) = x_1 · x_2 ⋯ x_n in an arbitrary finite group (G,·), where the input of party P_i is x_i ∈ G for i = 1,...,n. For flexibility, we are interested in protocols for f_G which require only black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our results are as follows. First, on the negative side, we show that if (G,·) is non-abelian and n ≥ 4, then no ⌈n/2⌉-private protocol for computing f_G exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for f_G based on k-of-k threshold secret sharing schemes, which are efficiently implementable over any black-box group G. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs. We then give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance t < n/2, but has exponential communication complexity of O(n · C(2t+1, t)^2 / t) group elements (this construction easily extends to general adversary structures). Our second, probabilistic colouring construction gives a protocol with (close to optimal) collusion resistance t < n/μ for a graph-related constant μ ≤ 2.948, and has efficient communication complexity of O(n · t^2) group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.
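The k-of-k sharing primitive the abstract builds on has a short black-box description: choose k-1 uniformly random group elements and let the last share be whatever completes the ordered product to the secret. The sketch below uses the non-abelian group S3 (permutations of three symbols) as a stand-in; the protocol touches the group only through the operation, the inverse, and a random sampler, matching the black-box access model.

```python
# k-of-k secret sharing over a black-box group, illustrated with S3.
# Only op, inv and uniform sampling are used, as in the black-box model.
import random
from itertools import permutations

GROUP = list(permutations(range(3)))             # S3, a non-abelian group
IDENTITY = (0, 1, 2)

def op(p, q):
    """Composition: apply q, then p."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0] * 3
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

def share(secret, k, rng=random):
    """k-1 uniform shares; the last share forces the ordered product to secret."""
    shares = [rng.choice(GROUP) for _ in range(k - 1)]
    prefix = IDENTITY
    for s in shares:
        prefix = op(prefix, s)
    shares.append(op(inv(prefix), secret))
    return shares

def reconstruct(shares):
    acc = IDENTITY
    for s in shares:
        acc = op(acc, s)
    return acc

secret = (2, 0, 1)
assert reconstruct(share(secret, k=5)) == secret  # all k shares recover it
print("k-of-k reconstruction succeeded")
```

Any k-1 shares are jointly uniform and so reveal nothing about the secret; turning this primitive into an n-party product protocol is exactly the graph-colouring problem the paper studies.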

Relevance: 10.00%

Abstract:

A probabilistic method is proposed to evaluate the voltage quality of grid-connected photovoltaic (PV) power systems. The random behavior of solar irradiation is described in statistical terms and the resulting voltage fluctuation probability distribution is then derived. Reactive power capabilities of the PV generators are then analyzed and their operation under constant power factor mode is examined. It is shown that fully utilizing the reactive power capability of the PV generators can greatly enhance network voltage quality.
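The approach can be sketched as a Monte Carlo experiment: sample irradiance from an assumed distribution, push it through a simple feeder model under constant power factor operation, and read off the voltage fluctuation distribution. The Beta irradiance model and every number below are illustrative assumptions, not parameters from the paper.

```python
# Monte Carlo sketch: random irradiance -> PV injection -> voltage rise
# distribution at the point of connection. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
irradiance = rng.beta(2.0, 2.0, N)        # normalised irradiance in [0, 1]
p_pv = 0.5 * irradiance                   # PV injection, per-unit (0.5 pu peak)

R, X = 0.05, 0.03                         # per-unit feeder impedance (assumed)
pf = 0.95                                 # constant power factor operation
q_pv = p_pv * np.tan(np.arccos(pf))       # reactive output at fixed pf

# Linearised voltage rise for V ~ 1 pu: dV ~ P*R + Q*X.
dv = p_pv * R + q_pv * X

print(f"mean rise {dv.mean():.4f} pu, P(dV > 0.02) = {(dv > 0.02).mean():.3f}")
```

Repeating the experiment with the reactive power set to absorb rather than inject (negative q_pv) shows the mechanism the paper exploits to suppress voltage fluctuations.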

Relevance: 10.00%

Abstract:

Background Small RNA sequencing is commonly used to identify novel miRNAs and to determine their expression levels in plants. There are several miRNA identification tools for animals, such as miRDeep, miRDeep2 and miRDeep*. miRDeep-P was developed to identify plant miRNAs using miRDeep’s probabilistic model of miRNA biogenesis, but it depends on several third-party tools and lacks a user-friendly interface. The objective of our miRPlant program is to predict novel plant miRNAs, while providing a user-friendly interface with improved prediction accuracy. Results We have developed a user-friendly plant miRNA prediction tool called miRPlant. We show, using 16 plant miRNA datasets from four different plant species, that miRPlant has at least a 10% improvement in accuracy compared to miRDeep-P, the most popular plant miRNA prediction tool. Furthermore, miRPlant uses a Graphical User Interface for data input and output, and identified miRNAs are shown with all RNAseq reads in a hairpin diagram. Conclusions We have developed miRPlant, which extends miRDeep* to various plant species by adopting suitable strategies to identify hairpin excision regions and hairpin structure filtering for plants. miRPlant does not require any third-party tools such as mapping or RNA secondary structure prediction tools. miRPlant is also the first plant miRNA prediction tool that dynamically plots miRNA hairpin structure with small reads for identified novel miRNAs. This feature will enable biologists to visualize novel pre-miRNA structure and the location of small RNA reads relative to the hairpin. Moreover, miRPlant can be easily used by biologists with limited bioinformatics skills.

Relevance: 10.00%

Abstract:

Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently, there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet. Most people with macular degeneration are affected by dry AMD. Early signs of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists. This is a time-consuming, tiresome process, and hence an automated AMD screening tool can aid clinicians significantly in their diagnosis. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractal Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristics (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimal ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively.
This automated AMD detection system can be used for mass fundus image screening and aid clinicians by making better use of their expertise on selected images that require further examination.
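Two of the entropy features the study lists can be sketched directly from a greyscale histogram. The synthetic 8-bit "image" below stands in for a fundus photograph, and Kapur and Yager entropies are omitted for brevity; this is a feature-extraction sketch, not the study's pipeline.

```python
# Shannon and Renyi entropies computed from a greyscale image histogram,
# two of the texture features named in the abstract. Image is synthetic.
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64))          # stand-in greyscale image

hist = np.bincount(image.ravel(), minlength=256)
p = hist / hist.sum()
p_nz = p[p > 0]                                      # drop empty bins

shannon = -np.sum(p_nz * np.log2(p_nz))              # Shannon entropy (bits)

def renyi(p, alpha):
    """Renyi entropy of order alpha (alpha != 1); tends to Shannon as alpha -> 1."""
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

print(f"Shannon: {shannon:.3f} bits, Renyi(2): {renyi(p_nz, 2.0):.3f} bits")
```

For an 8-bit histogram both values are bounded by 8 bits, and the Renyi entropy of order 2 never exceeds the Shannon entropy, which is a quick sanity check on any implementation.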

Relevance: 10.00%

Abstract:

Age-related macular degeneration (AMD) affects the central vision and subsequently may lead to visual loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and successive treatment may improve visual acuity. AMD is mainly classified into dry and wet types; however, dry AMD is more common in the aging population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists. It is laborious, time-consuming, and resource-intensive. Hence, in this study, we have proposed an automated AMD detection system using discrete wavelet transform (DWT) and feature ranking strategies. The first four-order statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from DWT coefficients. We have used five feature ranking strategies (t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance, receiver operating characteristics curve-based, and Wilcoxon) to identify the optimal feature set. A set of supervised classifiers, namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naive Bayes, and probabilistic neural network, were used to evaluate the highest performance measure using the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70%, sensitivity of 91.11%, and specificity of 96.30% using KLD ranking and the SVM classifier. We have also formulated an AMD Risk Index using selected features to classify the normal and dry AMD classes using a single number. The proposed system can be used to assist clinicians and also for mass AMD screening programs.
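The feature-extraction step can be sketched end to end: a one-level 2-D Haar transform (standing in for the paper's DWT) followed by the listed statistics of the detail coefficients. The random "image" and the specific sub-band choice are illustrative only.

```python
# One-level Haar transform, then the abstract's statistics (four moments,
# energy, entropy, Gini index) of the diagonal detail coefficients.
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))                   # stand-in greyscale image

def haar_rows(a):
    """One Haar step along axis 1: averages left, differences right."""
    return np.concatenate([(a[:, ::2] + a[:, 1::2]) / 2,
                           (a[:, ::2] - a[:, 1::2]) / 2], axis=1)

t = haar_rows(haar_rows(img).T).T            # rows, then columns
detail = t[32:, 32:].ravel()                 # diagonal (HH) detail sub-band

mean = detail.mean()
var = detail.var()
skew = np.mean((detail - mean) ** 3) / var ** 1.5
kurt = np.mean((detail - mean) ** 4) / var ** 2
energy = np.sum(detail ** 2)
p = np.abs(detail) / np.sum(np.abs(detail))
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

def gini(x):
    """Gini index of coefficient magnitudes (a sparsity measure in [0, 1))."""
    x = np.sort(np.abs(x)); n = x.size
    return 1 - 2 * np.sum(x * (n - np.arange(n) - 0.5)) / (n * x.sum())

features = [mean, var, skew, kurt, energy, entropy, gini(detail)]
print([round(f, 4) for f in features])
```

In the study these per-sub-band features would then be ranked (e.g. by KLD) and fed to a classifier; here the seven-element vector is simply printed.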

Relevance: 10.00%

Abstract:

Large numbers of rooftop photovoltaics (PVs) have turned traditional passive networks into active networks with intermittent and bidirectional power flow. A community-based distribution network grid reinforcement process is proposed to address the technical challenges associated with large-scale integration of rooftop PVs. Probabilistic estimation of intermittent PV generation is considered. Depending on network parameters such as the R/X ratio of the distribution feeder, either reactive control from PVs or coordinated control of PVs and Battery Energy Storage (BES) is proposed. Determination of BES capacity is one of the significant outcomes of the proposed method, and several factors, such as variation in PV installed capacity and participation from community members, are analyzed. The proposed approach is convenient for community members, providing them with the flexibility of managing their integrated PV and BES systems.

Relevance: 10.00%

Abstract:

Finite element (FE) model studies have made important contributions to our understanding of the functional biomechanics of the lumbar spine. However, if a model is used to answer clinical and biomechanical questions over a certain population, the population's inherently large inter-subject variability has to be considered. Current FE model studies, however, generally account only for a single distinct spinal geometry with one set of material properties. This raises questions concerning their predictive power, their range of results and their agreement with in vitro and in vivo values. Eight well-established FE models of the lumbar spine (L1-5) from different research centres around the globe were subjected to pure and combined loading modes and compared to in vitro and in vivo measurements for intervertebral rotations, disc pressures and facet joint forces. Under pure moment loading, the predicted L1-5 rotations of almost all models fell within the reported in vitro ranges, and their median values differed on average by only 2° for flexion-extension, 1° for lateral bending and 5° for axial rotation. Predicted median facet joint forces and disc pressures were also in good agreement with published median in vitro values. However, the ranges of predictions were larger and exceeded those reported in vitro, especially for the facet joint forces. For all combined loading modes, except for flexion, predicted median segmental intervertebral rotations and disc pressures were in good agreement with measured in vivo values. In light of high inter-subject variability, the generalization of results of a single model to a population remains a concern. This study demonstrated that the pooled median of individual model results, similar to a probabilistic approach, can be used as an improved predictive tool in order to estimate the response of the lumbar spine.
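The pooling idea is simple enough to state in a few lines: treat the individual models' predictions as samples and report their median (and spread) as the population estimate. The rotation values below are invented for illustration; only the eight-model median idea comes from the abstract.

```python
# Pooled median across several models' predictions, the study's proposed
# population-level estimator. The numbers are hypothetical.
import numpy as np

# Predicted L1-L5 flexion-extension rotations (degrees) from 8 hypothetical models.
predictions = np.array([28.0, 31.5, 25.2, 34.0, 29.8, 27.4, 33.1, 30.6])

pooled_median = np.median(predictions)
spread = predictions.max() - predictions.min()
print(f"pooled median {pooled_median:.1f} deg (range {spread:.1f} deg)")
```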

Relevance: 10.00%

Abstract:

This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information-gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Comparisons are made between DDF implementations using two probabilistic representations, Gaussian estimates and Gaussian mixtures, on a common data set. The overall system design is detailed, providing insight into the overall complexity of implementing a robust DDF system for information-gathering tasks in outdoor UAV applications.
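The Gaussian side of the comparison has a compact standard form: in decentralized fusion, platforms exchange estimates in information form (Y = P⁻¹, y = P⁻¹x), where combining estimates is just addition, minus any information the platforms already share. The 1-D numbers below are illustrative; this is the textbook information-filter fusion step, not the chapter's full system.

```python
# Information-form fusion of two Gaussian estimates, the core DDF update.
import numpy as np

def to_info(x, P):
    """Convert (mean, covariance) to information form (y, Y)."""
    Y = np.linalg.inv(P)
    return Y @ x, Y

def fuse(y1, Y1, y2, Y2, y_common, Y_common):
    """Add the two information states; subtract shared (double-counted) info."""
    Y = Y1 + Y2 - Y_common
    y = y1 + y2 - y_common
    P = np.linalg.inv(Y)
    return P @ y, P

# Two platforms observing the same scalar state, no prior common information.
x1, P1 = np.array([2.0]), np.array([[1.0]])
x2, P2 = np.array([4.0]), np.array([[1.0]])
y1, Y1 = to_info(x1, P1)
y2, Y2 = to_info(x2, P2)
x, P = fuse(y1, Y1, y2, Y2, np.zeros(1), np.zeros((1, 1)))

print(x, P)   # fused mean 3.0, variance 0.5: tighter than either input
```

Tracking the common-information term is what keeps the decentralized solution consistent when estimates circulate around the team; Gaussian mixtures require a more elaborate (approximate) version of the same subtraction.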

Relevance: 10.00%

Abstract:

AIM: To assess the cost-effectiveness of an automated telephone-linked care intervention, Australian TLC Diabetes, delivered over 6 months to patients with established Type 2 diabetes mellitus and a high glycated haemoglobin level, compared to usual care. METHODS: A Markov model was designed to synthesize data from a randomized controlled trial of TLC Diabetes (n=120) and other published evidence. The 5-year model consisted of three health states related to glycaemic control: 'sub-optimal' HbA1c ≥58mmol/mol (≥7.5%); 'average' 48–57mmol/mol (6.5–7.4%); 'optimal' <48mmol/mol (<6.5%); and a fourth state, 'all-cause death'. Key outcomes of the model include discounted health system costs and quality-adjusted life years (QALYs) using SF-6D utility weights. Univariate and probabilistic sensitivity analyses were undertaken. RESULTS: Annual medication costs for the intervention group were lower than usual care [intervention: £1076 (95%CI: £947, £1206) versus usual care: £1271 (95%CI: £1115, £1428), p=0.052]. The estimated mean cost for intervention group participants over five years, including the intervention cost, was £17,152 versus £17,835 for the usual care group. The corresponding mean QALYs were 3.381 (SD 0.40) for the intervention group and 3.377 (SD 0.41) for the usual care group. Results were sensitive to the model duration, utility values and medication costs. CONCLUSION: The Australian TLC Diabetes intervention was a low-cost investment for individuals with established diabetes and may result in medication cost-savings to the health system. Although QALYs were similar between groups, other benefits arising from the intervention should also be considered when determining the overall value of this strategy.
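The modelling machinery described can be sketched as a cohort Markov model with the abstract's four states, run over five annual cycles with discounting. The transition probabilities, costs and utilities below are invented for illustration; only the state structure and the 5-year discounted cost/QALY outputs come from the abstract.

```python
# Four-state cohort Markov model with annual cycles and discounting.
# All numeric inputs are hypothetical, not the trial's values.
import numpy as np

states = ["sub-optimal", "average", "optimal", "dead"]
T = np.array([[0.70, 0.20, 0.05, 0.05],     # rows: from-state, cols: to-state
              [0.15, 0.65, 0.15, 0.05],
              [0.05, 0.20, 0.70, 0.05],
              [0.00, 0.00, 0.00, 1.00]])    # death is absorbing
cost = np.array([4000.0, 3500.0, 3000.0, 0.0])   # annual cost per state (assumed)
utility = np.array([0.70, 0.75, 0.80, 0.0])      # SF-6D-style weights (assumed)
discount = 0.035

cohort = np.array([1.0, 0.0, 0.0, 0.0])          # everyone starts sub-optimal
total_cost = total_qaly = 0.0
for year in range(5):
    d = 1.0 / (1.0 + discount) ** year           # discount factor for this cycle
    total_cost += d * cohort @ cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ T                          # advance one annual cycle

print(f"5-year discounted cost {total_cost:.0f}, QALYs {total_qaly:.2f}")
```

A probabilistic sensitivity analysis, as in the study, would wrap this loop in a sampler that draws the transition, cost and utility inputs from distributions and records the spread of the two totals.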

Relevance: 10.00%

Abstract:

This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analyzing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, called a tag assignment, should be appropriately modelled in order to create the user profiles [1]. Secondly, the semantics behind the tags should be considered properly, as their design flexibility can cause semantic problems such as synonymy and polysemy [2]. This research proposes to address these two challenges for building a tag-based item recommendation system by employing tensor modeling as the multi-dimensional user profile approach, and the topic model as the semantic analysis approach. The first objective is to optimize the tensor model reconstruction and to improve the model's performance in generating quality recommendations. A novel Tensor-based Recommendation using Probabilistic Ranking (TRPR) method [3] has been developed. Results show this method to be scalable for large datasets and to outperform the benchmark methods in terms of accuracy. A memory-efficient loop implements the n-mode block-striped (matrix) product for tensor reconstruction as an approximation of the initial tensor. The probabilistic ranking calculates the probability of users selecting candidate items using their tag preference list, based on the entries generated from the reconstructed tensor. The second objective is to analyse the tag semantics and utilize the outcome in building the tensor model. This research proposes to investigate the problem using the topic model approach to preserve the nature of tags as the "social vocabulary" [4]. For the tag assignment data, topics can be generated from the occurrences of tags given for an item. However, there is only a limited number of tags available to represent items as collections of topics, since an item might have been tagged using only a few tags. Consequently, the generated topics might not be able to represent the items appropriately. Furthermore, given that each tag can belong to any topic with various probability scores, the occurrence of tags cannot simply be mapped by the topics to build the tensor model. A standard weighting technique will not appropriately calculate the value of the tagging activity, since it will define the context of an item using a tag instead of a topic.
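The tensor machinery underlying this line of work can be sketched with a third-order user × item × tag tensor reconstructed via n-mode products, followed by a simple ranking from the reconstructed entries. The dimensions and random factors are illustrative, and this is the plain n-mode product, not the block-striped memory optimisation or the TRPR algorithm itself.

```python
# Tucker-style reconstruction of a user x item x tag tensor via n-mode
# products, then a toy item ranking for one user. All factors are random.
import numpy as np

rng = np.random.default_rng(3)
U, I, G, R = 4, 5, 6, 2                  # users, items, tags, core rank

core = rng.random((R, R, R))
A_u, A_i, A_t = rng.random((U, R)), rng.random((I, R)), rng.random((G, R))

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode` (the n-mode product)."""
    t = np.moveaxis(tensor, mode, 0)
    out = (matrix @ t.reshape(t.shape[0], -1)).reshape(
        (matrix.shape[0],) + t.shape[1:])
    return np.moveaxis(out, 0, mode)

# Reconstruct the full tensor: core x_1 A_u x_2 A_i x_3 A_t.
Y = mode_n_product(mode_n_product(mode_n_product(core, A_u, 0), A_i, 1), A_t, 2)
assert Y.shape == (U, I, G)

# Rank items for user 0 by aggregating over that user's tag dimension.
scores = Y[0].sum(axis=1)                # one score per item
ranking = np.argsort(scores)[::-1]
print(ranking)
```

The reconstructed entries Y[u, i, g] are what a probabilistic ranking step would normalise into selection probabilities over candidate items; the block-striped variant computes the same product piecewise to bound memory.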

Relevance: 10.00%

Abstract:

Wi-Fi is a commonly available source of localization information in urban environments but is challenging to integrate into conventional mapping architectures. Current state-of-the-art probabilistic Wi-Fi SLAM algorithms are limited by spatial resolution and by an inability to remove accumulated rotational error, inherent limitations of the Wi-Fi architecture. In this paper we leverage the low-quality sensory requirements and coarse metric properties of RatSLAM to localize using Wi-Fi fingerprints. To further improve performance, we present a novel sensor fusion technique that integrates camera and Wi-Fi to improve localization specificity, and use compass sensor data to remove orientation drift. We evaluate the algorithms in diverse real-world indoor and outdoor environments, including an office floor, a university campus and a visually aliased circular building loop. The algorithms produce topologically correct maps that are superior to those produced using only a single sensor modality.
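The Wi-Fi side of such a pipeline reduces to matching a live signal-strength vector against a database of previously seen fingerprints. The sketch below is deliberately minimal (nearest neighbour in RSSI space); RatSLAM integration, camera fusion and compass correction are beyond it, and the access points and signal values are invented.

```python
# Wi-Fi fingerprint localization by nearest-neighbour RSSI matching.
# Database entries and readings are hypothetical.
import numpy as np

# Fingerprint database: location label -> mean RSSI (dBm) per access point.
DB = {
    "office":  np.array([-40.0, -70.0, -85.0]),
    "hallway": np.array([-60.0, -50.0, -80.0]),
    "atrium":  np.array([-80.0, -65.0, -45.0]),
}

def localize(rssi):
    """Return the database location whose fingerprint is closest in RSSI space."""
    return min(DB, key=lambda loc: np.linalg.norm(DB[loc] - rssi))

reading = np.array([-42.0, -68.0, -90.0])   # a new scan taken near the office
print(localize(reading))                    # -> office
```

The coarse spatial resolution the abstract mentions is visible here: any scan within a few metres of a fingerprint maps to the same label, which is why the paper fuses in camera and compass data for finer localization.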

Relevance: 10.00%

Abstract:

Demand response can be used to provide regulation services in the electricity markets. Retailers can bid in a day-ahead market and respond to the real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs), such as air conditioners and water heaters, is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilizing attributes such as temperature variation and the statuses of the TCAs. These attributes are stochastically forecasted for the next time step using day-ahead information. System performance is analyzed with a sample regulation signal. The network's capability to provide regulation services across various seasons is analyzed, and the effect of network size on the regulation services is also investigated.
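The selection step described can be sketched as: rank appliances by how much thermal headroom their forecast temperature leaves within the comfort band, then switch just enough of them to meet the regulation request. All appliance data, the comfort band and the single-attribute ranking below are illustrative assumptions standing in for the paper's multi-attribute stochastic ranking.

```python
# Greedy selection of thermostatically controllable appliances (TCAs),
# ranked by forecast thermal headroom. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n = 20
power_kw = rng.uniform(1.0, 3.0, n)            # per-appliance demand
setpoint, band = 22.0, 2.0                     # deg C comfort band (assumed)
forecast_temp = rng.uniform(20.0, 24.0, n)     # stochastic next-step forecast

# Appliances far below the upper comfort limit have the most headroom to be
# switched off without causing discomfort; rank by that headroom.
headroom = (setpoint + band) - forecast_temp
order = np.argsort(headroom)[::-1]

regulation_kw = 8.0                            # requested load reduction
selected, shed = [], 0.0
for idx in order:
    if shed >= regulation_kw:
        break
    selected.append(idx)
    shed += power_kw[idx]

print(f"switched {len(selected)} appliances, shed {shed:.2f} kW")
```

In the paper's formulation the ranking attribute is itself a stochastic forecast combining temperature variation and appliance status; the greedy fill against the regulation signal is the same.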