542 results for Multiple classification


Relevance: 20.00%

Abstract:

A multi-secret sharing scheme allows several secrets to be shared amongst a group of participants. In 2005, Shao and Cao developed a verifiable multi-secret sharing scheme where each participant’s share can be used several times, which reduces the number of interactions between the dealer and the group members. In addition, some secrets may require a higher security level than others, which calls for different threshold values. Recently, Chan and Chang designed such a scheme, but their construction allows only a single secret to be shared per threshold value. In this article we combine the previous two approaches to design a multiple-time verifiable multi-secret sharing scheme where several secrets can be shared for each threshold value. Since running time is an important factor for practical applications, we provide a complexity comparison of our combined approach with respect to the previous schemes.
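
The combined scheme itself is not spelled out in the abstract, so as background only, here is a minimal sketch of the threshold idea that verifiable multi-secret schemes build on: Shamir (t, n) secret sharing over a prime field. The field modulus, secret and parameters are illustrative choices.

```python
# Minimal (t, n) Shamir threshold sharing over a prime field -- an illustration of
# the threshold idea underlying multi-secret schemes, not the verifiable
# multi-secret construction described in the abstract.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```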

Relevance: 20.00%

Abstract:

The interest in utilising multiple heterogeneous Unmanned Aerial Vehicles (UAVs) in close proximity is growing rapidly. As such, many challenges arise in the effective coordination and management of these UAVs when converting the current n-to-1 paradigm (n operators operating a single UAV) to the 1-to-n paradigm (one operator managing n UAVs). This paper introduces an Information Abstraction methodology used to produce the functional capability framework initially proposed by Chen et al. and its Level Of Detail (LOD) indexing scale. This framework was validated by comparing the operator workload and Situation Awareness (SA) across three experiment scenarios involving multiple autonomous, heterogeneous UAVs. The first scenario was set in a high LOD configuration with highly abstracted UAV functional information; the second scenario was set in a mixed LOD configuration; and the final scenario was set in a low LOD configuration with maximal UAV functional information. Results show a statistically significant decrease in operator workload when a UAV’s functional information is displayed in its physical form (low LOD - maximal information) compared with the mixed LOD configuration.

Relevance: 20.00%

Abstract:

Mutations in the genes encoding either the biosynthetic enzymes or the transcriptional regulators of the anthocyanin pathway have been linked to color phenotypes. Generally, this is a loss of function resulting in a reduction or a change in the distribution of anthocyanin. Here, we describe a rearrangement in the upstream regulatory region of the gene encoding an apple (Malus x domestica) anthocyanin-regulating transcription factor, MYB10. We show that this modification is responsible for increasing the level of anthocyanin throughout the plant to produce a striking phenotype that includes red foliage and red fruit flesh. This rearrangement is a series of multiple repeats, forming a minisatellite-like structure that comprises five direct tandem repeats of a 23-bp sequence. This MYB10 rearrangement is present in all the red-foliage apple varieties and species tested but in none of the white-fleshed varieties. Transient assays demonstrated that the 23-bp sequence motif is a target of the MYB10 protein itself, and the number of repeat units correlates with an increase in transactivation by the MYB10 protein. We show that the repeat motif is capable of binding MYB10 protein in electrophoretic mobility shift assays. Taken together, these results indicate that an allelic rearrangement in the promoter of MYB10 has generated an autoregulatory locus, and this autoregulation is sufficient to account for the increase in MYB10 transcript levels and the subsequent ectopic accumulation of anthocyanins throughout the plant.
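
As a side illustration of the minisatellite-like structure mentioned above, the sketch below counts direct tandem copies of a motif in a promoter sequence. The motif and sequence are hypothetical placeholders, not the actual 23-bp MYB10 repeat unit.

```python
# Illustrative sketch: counting direct tandem repeats of a motif in a promoter
# sequence. The motif and sequence below are hypothetical placeholders.
def count_tandem_repeats(sequence: str, motif: str) -> int:
    """Return the largest number of back-to-back copies of `motif` in `sequence`."""
    best, i = 0, 0
    while i < len(sequence):
        j = sequence.find(motif, i)
        if j == -1:
            break
        count = 0
        while sequence.startswith(motif, j + count * len(motif)):
            count += 1
        best = max(best, count)
        i = j + count * len(motif)
    return best

motif = "ACGT" * 5 + "ACG"            # hypothetical 23-bp repeat unit
promoter = "TT" + motif * 5 + "GGA"   # five direct tandem copies, as in the abstract
print(count_tandem_repeats(promoter, motif))  # -> 5
```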

Relevance: 20.00%

Abstract:

This paper presents a novel framework to further advance the recent trend of using query decomposition and high-order term relationships in query language modeling, one that takes into account terms implicitly associated with different subsets of query terms. Existing approaches, most notably the language model based on the Information Flow method, are unable to capture multiple levels of association and also suffer from high computational overhead. In this paper, we propose to compute association rules from pseudo-feedback documents that are segmented into variable-length chunks via multiple sliding windows of different sizes. Extensive experiments have been conducted on various TREC collections, and our approach significantly outperforms a baseline Query Likelihood language model, the Relevance Model and the Information Flow model.
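
As a rough illustration of the chunking step described above, the sketch below segments a pseudo-feedback document into variable-length chunks using multiple sliding windows of different sizes; the window sizes and toy document are illustrative, and the subsequent association-rule mining is not reproduced.

```python
# Hedged sketch of segmenting pseudo-relevance feedback documents into
# variable-length chunks with multiple sliding windows of different sizes.
def sliding_window_chunks(tokens, window_sizes=(2, 4, 8), step=1):
    """Yield overlapping chunks of each window size from a token list."""
    for w in window_sizes:
        for start in range(0, max(len(tokens) - w + 1, 1), step):
            yield tuple(tokens[start:start + w])

doc = "query decomposition exploits high order term relationships".split()
chunks = list(sliding_window_chunks(doc))
print(len(chunks), chunks[:3])
```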

Relevance: 20.00%

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices, using a set of random projection hyperplanes over RKHS, into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
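
A simplified sketch of the "SPD matrix to coefficient vector" idea follows. Note that it substitutes the simpler log-Euclidean (tangent-space) map plus a Gaussian random projection for the paper's random hyperplanes over RKHS, so it approximates the idea rather than reproducing the proposed method.

```python
# Simplified sketch: SPD matrices -> Euclidean vectors via the matrix logarithm,
# followed by Gaussian random projection. This is NOT the paper's RKHS hyperplane
# projection; it only illustrates the "matrix -> coefficient vector" representation.
import numpy as np

def spd_to_vector(S):
    """Map an SPD matrix to a Euclidean vector via its principal matrix logarithm."""
    w, V = np.linalg.eigh(S)
    L = (V * np.log(w)) @ V.T            # log of an SPD matrix is symmetric
    return L[np.triu_indices_from(L)]    # upper triangle suffices

def random_project(vectors, dim, seed=0):
    """Project vectors onto `dim` random hyperplanes (Gaussian random projection)."""
    rng = np.random.default_rng(seed)
    V = np.vstack(vectors)
    W = rng.standard_normal((V.shape[1], dim)) / np.sqrt(dim)
    return V @ W

# Toy covariance descriptors built from random data
spds = [np.cov(np.random.default_rng(i).standard_normal((5, 50))) for i in range(3)]
X = random_project([spd_to_vector(S) for S in spds], dim=10)
print(X.shape)  # (3, 10)
```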

Relevance: 20.00%

Abstract:

Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared with a non-frontal face). To address the above problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to have resemblance to the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
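
For background, the sketch below computes a nearest-points distance between the affine hulls of two image sets, the kind of single-model baseline the proposed multi-model approach extends; maximum margin clustering and ARC are not reproduced, and the feature data are synthetic.

```python
# Minimal sketch of an affine-hull nearest-points distance between two image sets.
import numpy as np

def affine_hull_distance(X, Y):
    """Distance between the affine hulls of the column sets X (d x m) and Y (d x n)."""
    mu_x, mu_y = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
    A = np.hstack([X - mu_x, -(Y - mu_y)])          # directions spanning both hulls
    b = (mu_y - mu_x).ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)    # closest points in the two hulls
    return np.linalg.norm(A @ coef - b)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))            # 8 samples of a 20-dim feature
Y = rng.standard_normal((20, 6)) + 0.5      # 6 samples from a shifted set
print(affine_hull_distance(X, Y))
```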

Relevance: 20.00%

Abstract:

Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with fixed clusters used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to have resemblance to the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
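
As a minimal illustration of comparing subspaces on a Grassmann manifold, the sketch below computes a projection-based Frobenius norm distance between two local linear subspaces; the joint sparse representation step is not shown, and the distance used here is the standard projection metric rather than necessarily the paper's exact reconstruction-error formulation.

```python
# Sketch: Frobenius-norm (projection) distance between two linear subspaces,
# each estimated from an image set by SVD.
import numpy as np

def subspace_basis(samples, k):
    """Orthonormal basis of the k-dimensional subspace best fitting the columns."""
    U, _, _ = np.linalg.svd(samples, full_matrices=False)
    return U[:, :k]

def projection_frobenius_distance(U, V):
    """|| U U^T - V V^T ||_F, zero iff the two subspaces coincide."""
    return np.linalg.norm(U @ U.T - V @ V.T, ord="fro")

rng = np.random.default_rng(1)
A = subspace_basis(rng.standard_normal((30, 10)), k=3)   # gallery set subspace
B = subspace_basis(rng.standard_normal((30, 10)), k=3)   # probe set subspace
print(projection_frobenius_distance(A, B))
```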

Relevance: 20.00%

Abstract:

It has been 21 years since the decision in Rogers v Whitaker, and the legal principles concerning informed consent and liability for negligence are still strongly grounded in this landmark High Court decision. This paper considers more recent developments in the law concerning the failure to disclose inherent risks in medical procedures, focusing on the decision in Wallace v Kam [2013] HCA 19. In this case, the appellant underwent a surgical procedure that carried a number of risks. The surgery itself was not performed in a sub-standard way, but the surgeon failed to disclose two risks to the patient, a failure that constituted a breach of the surgeon’s duty of care in negligence. One of the undisclosed risks was considered to be less serious than the other, and this lesser risk eventuated, causing injury to the appellant. The more serious risk did not eventuate, but the appellant argued that if the more serious risk had been disclosed, he would have avoided his injuries completely because he would have refused to undergo the procedure. Liability was disputed by the surgeon, with particular reference to causation principles. The High Court of Australia held that the appellant should not be compensated for harm that resulted from a risk he would have been willing to run. We examine the policy reasons underpinning the law of negligence in this specific context and consider some of the issues raised by this unusual case. We question whether some of the judicial reasoning adopted in this case represents a significant shift in traditional causation principles.

Relevance: 20.00%

Abstract:

Voltage rise is the main issue that limits the capacity of Low Voltage (LV) networks to accommodate more Renewable Energy (RE) sources. In addition, voltage drop at peak load periods is a significant power quality concern. This paper proposes a new robust voltage support strategy based on distributed coordination of multiple distribution static synchronous compensators (DSTATCOMs). The study focuses on LV networks with PV as the RE source for customers. The proposed approach is applied to a typical LV network, and its advantages are shown by comparison with other voltage control strategies.
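
Since the abstract does not detail the coordination strategy, the sketch below shows only a generic local Volt/VAR droop rule of the kind a DSTATCOM or PV inverter can run for voltage support; the deadband and slope values are hypothetical and are not taken from the paper.

```python
# Generic illustration only: a local Volt/VAR droop rule for voltage support.
# Positive return value = reactive power injection (raises voltage),
# negative = absorption (lowers voltage). Parameters are hypothetical.
def droop_reactive_power(v_pu, q_max, deadband=0.02, slope=10.0):
    """Return reactive power (p.u.): absorb when voltage is high, inject when low."""
    error = v_pu - 1.0
    if abs(error) <= deadband:
        return 0.0
    q = -slope * (error - deadband if error > 0 else error + deadband)
    return max(-q_max, min(q_max, q))

for v in (0.96, 1.00, 1.05):
    print(v, droop_reactive_power(v, q_max=0.4))
```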

Relevance: 20.00%

Abstract:

This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles’ ability to make appropriate driving decisions in city road traffic situations. The paper explains the overall control system architecture and the decision-making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.
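
As an illustration of maneuver selection by multiple criteria, here is a minimal weighted-sum MCDM sketch; the criteria, weights and scores are hypothetical and not taken from the paper.

```python
# Hedged sketch: weighted-sum ranking of feasible driving maneuvers.
criteria_weights = {"safety": 0.5, "progress_to_goal": 0.3, "comfort": 0.2}

# Normalised scores in [0, 1] for each feasible maneuver (illustrative values)
maneuvers = {
    "keep_lane":   {"safety": 0.9, "progress_to_goal": 0.6, "comfort": 0.9},
    "change_lane": {"safety": 0.7, "progress_to_goal": 0.9, "comfort": 0.6},
    "stop":        {"safety": 1.0, "progress_to_goal": 0.1, "comfort": 0.8},
}

def weighted_sum(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

best = max(maneuvers, key=lambda m: weighted_sum(maneuvers[m], criteria_weights))
print(best)  # the maneuver with the highest aggregate score
```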

Relevance: 20.00%

Abstract:

This paper presents a method to enable a mobile robot working in non-stationary environments to plan its path and localize within multiple map hypotheses simultaneously. The maps are generated using a long-term and short-term memory mechanism that ensures only persistent configurations in the environment are selected to create the maps. To evaluate the proposed method, experiments are conducted in an office environment. Compared to navigation systems that use only one map, our system produces superior path planning and navigation in a non-stationary environment where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners.
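
A much-simplified sketch of planning over multiple map hypotheses follows: plan in each hypothesised map and weight the path cost by the hypothesis probability. The grids and weights are illustrative; the paper's long-/short-term memory map generation is not reproduced.

```python
# Simplified sketch: expected path cost over two map hypotheses of a small grid.
import heapq

def shortest_path_cost(grid, start, goal):
    """Dijkstra over a 4-connected grid of 0 (free) / 1 (blocked) cells."""
    rows, cols = len(grid), len(grid[0])
    dist, heap = {start: 0}, [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if d + 1 < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + 1
                    heapq.heappush(heap, (d + 1, (nr, nc)))
    return float("inf")

# Two map hypotheses: the corridor is open (weight 0.7) or blocked (weight 0.3)
open_map    = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
blocked_map = [[0, 1, 0], [0, 1, 0], [0, 0, 0]]
hypotheses = [(0.7, open_map), (0.3, blocked_map)]
expected = sum(w * shortest_path_cost(m, (0, 0), (0, 2)) for w, m in hypotheses)
print(expected)
```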

Relevance: 20.00%

Abstract:

Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded. It is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for, in order to quantify human disturbance-induced change. A common approach used in the Index of Biotic Integrity (IBI) is to examine scatter plots of variation in a given metric along a single stream-size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of a given metric at a site with a given environmental characteristic (termed the 'maximum species richness line', MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (Simple Linear Regression, Generalised Linear Modelling and Regression Tree modelling) that use, either singly or in combination, a set of landscape- and local-scale environmental variables as predictors of species richness. We compared the frequency of classification errors from each method against set biocriteria and contrast the ability of each method to accurately reflect human disturbance gradients at a large set of test sites. The results of this study suggest that the MSRL based upon variation in a single environmental descriptor could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors more accurately explained natural variation in species richness than did simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classing a site as disturbed). We suggest that the problems with the MSRL arise from the inherent scoring procedure used and from its limitation to predicting variation in the dependent variable along a single environmental gradient.
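
To illustrate the kind of comparison described above, the sketch below contrasts a single-predictor linear regression with a multi-predictor regression tree for predicting species richness; the synthetic data merely stand in for the study's field data.

```python
# Hedged sketch: single-predictor SLR vs multi-predictor regression tree for
# predicting species richness from environmental variables (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
catchment_area = rng.uniform(1, 500, n)     # stream-size surrogate
elevation = rng.uniform(10, 800, n)
riparian_cover = rng.uniform(0, 1, n)
richness = (3 + 2.0 * np.log(catchment_area) - 0.004 * elevation
            + 4.0 * riparian_cover + rng.normal(0, 1.5, n))

X_single = catchment_area.reshape(-1, 1)
X_multi = np.column_stack([catchment_area, elevation, riparian_cover])

slr_r2 = cross_val_score(LinearRegression(), X_single, richness, cv=5).mean()
tree_r2 = cross_val_score(DecisionTreeRegressor(max_depth=4), X_multi, richness, cv=5).mean()
print(f"single-predictor SLR R^2: {slr_r2:.2f}, multi-predictor tree R^2: {tree_r2:.2f}")
```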

Relevance: 20.00%

Abstract:

The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs with respect to demanding SHM applications like modal analysis and damage identification. Based on a brief review, this paper first reveals that Data Synchronization Error (DSE) is the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as benchmark data, after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations have been made to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques and the use of channel projection for the time-domain OMA technique to cope with DSE are recommended.
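
The sketch below illustrates, on synthetic two-channel data, the two simulation ingredients discussed above: injecting a data synchronisation error between channels and running a basic FDD peak-pick; the paper's bridge data, multi-setup merging and SSI-data analyses are not reproduced.

```python
# Hedged sketch: simulate a DSE (channel lag) and run a basic FDD peak-pick on
# synthetic two-channel acceleration-like signals.
import numpy as np
from scipy import signal

fs, T = 200.0, 60.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)
# Two channels dominated by modes near 3 Hz and 7 Hz, plus measurement noise
ch1 = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*7*t) + 0.1*rng.standard_normal(t.size)
ch2 = 0.8*np.sin(2*np.pi*3*t) - 0.6*np.sin(2*np.pi*7*t) + 0.1*rng.standard_normal(t.size)

dse_samples = 3                      # simulated synchronisation error (samples)
ch2 = np.roll(ch2, dse_samples)      # channel 2 lags channel 1

# FDD: SVD of the cross-spectral density matrix at each frequency line
channels = [ch1, ch2]
f, _ = signal.csd(ch1, ch1, fs=fs, nperseg=1024)
G = np.zeros((f.size, 2, 2), dtype=complex)
for i, a in enumerate(channels):
    for j, b in enumerate(channels):
        _, G[:, i, j] = signal.csd(a, b, fs=fs, nperseg=1024)
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
peaks, _ = signal.find_peaks(s1, prominence=s1.max() * 0.1)
print("identified frequencies (Hz):", f[peaks])
```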

Relevance: 20.00%

Abstract:

Time series classification has been extensively explored in many fields of study. Most methods are based on the historical or current information extracted from data. However, if interest is in a specific future time period, methods that directly relate to forecasts of time series are much more appropriate. An approach to time series classification is proposed based on a polarization measure of forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, and a stationarity correction is considered when necessary. Kernel estimators are then employed to approximate forecast densities, and discrepancies of forecast densities of pairs of time series are estimated by a polarization measure, which evaluates the extent to which two densities overlap. Following the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct the supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
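
A hedged sketch of the pipeline follows: bootstrap forecast replicates from a fitted AR model, kernel density estimates of the forecast distribution, and a simple density-overlap statistic. The AR order, horizon and overlap measure are illustrative simplifications (for example, no bias correction or stationarity correction is applied), not the paper's exact polarization measure.

```python
# Hedged sketch: AR(1) bootstrap forecast replicates, KDE of forecast densities,
# and an overlap statistic between the densities of two series.
import numpy as np
from scipy.stats import gaussian_kde

def ar1_bootstrap_forecasts(y, horizon=5, n_boot=500, seed=0):
    """Bootstrap forecast replicates at `horizon` steps from a crude AR(1) fit."""
    rng = np.random.default_rng(seed)
    phi = np.polyfit(y[:-1], y[1:], 1)[0]          # crude AR(1) coefficient estimate
    resid = y[1:] - phi * y[:-1]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        x = y[-1]
        for _ in range(horizon):
            x = phi * x + rng.choice(resid)        # resample residuals
        reps[b] = x
    return reps

def density_overlap(a, b, grid_size=512):
    """Approximate the integral of min(f_a, f_b) on a common grid (1 = identical)."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    fa, fb = gaussian_kde(a)(grid), gaussian_kde(b)(grid)
    return float(np.minimum(fa, fb).sum() * (grid[1] - grid[0]))

rng = np.random.default_rng(1)
y1 = np.cumsum(rng.standard_normal(200)) * 0.1
y2 = np.cumsum(rng.standard_normal(200)) * 0.1 + 2.0
print(density_overlap(ar1_bootstrap_forecasts(y1), ar1_bootstrap_forecasts(y2)))
```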

Relevance: 20.00%

Abstract:

This paper introduces a straightforward method to asymptotically solve a variety of initial and boundary value problems for singularly perturbed ordinary differential equations whose solution structure can be anticipated. The approach is simpler than conventional methods, including those based on asymptotic matching or on eliminating secular terms. © 2010 by the Massachusetts Institute of Technology.
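
For context, here is a classical textbook example of the kind of singularly perturbed boundary value problem referred to above, together with its leading-order matched-asymptotics solution (this standard example is not taken from the paper and does not illustrate the paper's own method):

```latex
% Classical singularly perturbed BVP with a boundary layer at x = 0
% (a standard textbook illustration, not an example from the paper).
\[
  \varepsilon y'' + y' + y = 0, \qquad y(0) = 0,\; y(1) = 1, \qquad 0 < \varepsilon \ll 1 .
\]
% Outer solution (away from x = 0), from y' + y = 0 with y(1) = 1:
\[
  y_{\mathrm{out}}(x) = e^{\,1 - x}.
\]
% Leading-order uniform approximation after matching with the boundary-layer solution:
\[
  y(x) \sim e^{\,1 - x} - e\, e^{-x/\varepsilon}, \qquad \varepsilon \to 0 .
\]
```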