995 results for Prototype Selection
Abstract:
Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data. However, they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, in which feature selection is carried out on a held-out subset of a target dataset before a linear classifier is trained on a source dataset. Single trials of fMRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques, joint ranking feature selection (JRFS) and disjoint feature selection (DJFS), classification performance during cross-session prediction improved greatly relative to feature selection on the source-session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and with groupwise training, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique, or a minimal searchlight one voxel in size, is preferable to larger searchlights.
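The univariate ANOVA selection step mentioned above can be sketched as follows; this is a minimal illustration on synthetic data (the array shapes, voxel counts, and effect sizes are invented, not the authors' setup): score each voxel with a one-way F-test across semantic classes and keep the top k.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_select(X, y, k):
    """Rank voxels by their one-way ANOVA F-statistic across classes; keep top k.

    X : (n_trials, n_voxels) response matrix, y : (n_trials,) class labels.
    """
    groups = [X[y == c] for c in np.unique(y)]
    f_stats = np.array([f_oneway(*[g[:, v] for g in groups]).statistic
                        for v in range(X.shape[1])])
    return np.argsort(f_stats)[::-1][:k]  # indices of the k most discriminative voxels

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))           # 60 trials, 100 "voxels" (toy data)
y = np.repeat([0, 1, 2], 20)
X[y == 1, :5] += 2.0                     # make the first 5 voxels class-informative
selected = anova_select(X, y, 5)
```

With a class-mean shift this large, the five informative voxels dominate the F-statistic ranking, which is the behaviour the abstract's univariate selection relies on.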
Abstract:
When an agent wants to fulfill its desires about the world, it usually has multiple plans to choose from, and these plans have different pre-conditions and side effects beyond achieving its goals. Therefore, for further reasoning and interaction with the world, a plan selection strategy (usually based on plan cost estimation) is mandatory for an autonomous agent. This demand becomes even more critical when uncertainty in the observation of the world is taken into account, since in this case we consider not only the costs of different plans but also their chances of success, estimated according to the agent's beliefs. In addition, when multiple goals are considered together, different plans achieving the goals can conflict in their preconditions (contexts) or in the resources they require. Hence, a plan selection strategy should be able to choose a subset of plans that fulfills the maximum number of goals while maintaining context consistency and resource tolerance among the chosen plans. To address these two issues, in this paper we first propose several principles that a plan selection strategy should satisfy, and then present selection strategies that stem from those principles, depending on whether plan cost is taken into account. We also show that our selection strategy can partially recover intention revision.
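The kind of selection strategy being discussed can be illustrated with a toy greedy sketch; the plan structure, context representation (sets of literals, with "~p" negating "p"), and budget are all invented for illustration and are not the paper's formal strategy:

```python
# Toy plan selection: achieve as many goals as possible under a resource
# budget, skipping plans whose context conflicts with an already-chosen plan.
from collections import namedtuple

Plan = namedtuple("Plan", "name goal cost context")  # context: set of literals

def consistent(ctx_a, ctx_b):
    # Contexts conflict if one asserts a literal the other negates ("p" vs "~p").
    return not any(("~" + lit in ctx_b) or (lit.startswith("~") and lit[1:] in ctx_b)
                   for lit in ctx_a)

def select_plans(plans, budget):
    chosen, achieved, spent = [], set(), 0
    for p in sorted(plans, key=lambda p: p.cost):   # cheapest-first greedy
        if p.goal in achieved or spent + p.cost > budget:
            continue
        if all(consistent(p.context, q.context) for q in chosen):
            chosen.append(p)
            achieved.add(p.goal)
            spent += p.cost
    return chosen

plans = [
    Plan("p1", "g1", 2, {"door_open"}),
    Plan("p2", "g1", 1, {"~door_open"}),
    Plan("p3", "g2", 3, {"door_open"}),
]
chosen = select_plans(plans, budget=5)
```

Here the cheaper plan for g1 is taken first, and the plan for g2 is then rejected because its context contradicts the already-chosen plan, which is exactly the context-consistency tension the abstract describes.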
Abstract:
In this paper, we propose general-order transmit antenna selection to enhance the secrecy performance of multiple-input multiple-output multi-eavesdropper channels with outdated channel state information (CSI) at the transmitter. To evaluate the effect of the outdated CSI on the secure transmission of the system, we investigate the secrecy performance for two practical scenarios, i.e., Scenarios I and II, where the eavesdropper's CSI is not available at the transmitter and is available at the transmitter, respectively. For Scenario I, we derive exact and asymptotic closed-form expressions for the secrecy outage probability in Nakagami-m fading channels. In addition, we also derive the probability of nonzero secrecy capacity and the ε-outage secrecy capacity, respectively. Simple asymptotic expressions for the secrecy outage probability reveal that the secrecy diversity order is reduced when the CSI is outdated at the transmitter, and it is independent of the number of antennas at each eavesdropper N_E, the fading parameter of the eavesdropper's channel m_E, and the number of eavesdroppers M. For Scenario II, we make a comprehensive analysis of the average secrecy capacity obtained by the system. Specifically, new closed-form expressions for the exact and asymptotic average secrecy capacity are derived, which are valid for general systems with an arbitrary number of antennas, number of eavesdroppers, and fading severity parameters. Resorting to these results, we also determine a high signal-to-noise ratio power offset to explicitly quantify the impact of the main channel and the eavesdropper's channel on the average secrecy capacity.
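The secrecy outage probability analysed above can be approximated by a quick Monte Carlo sketch (illustrative parameters only, not the paper's closed-form expressions): under Nakagami-m fading, channel power gains are Gamma(m, 1/m)-distributed with unit mean, and the outage event is the secrecy capacity Cs = log2(1 + γ_M) − log2(1 + γ_E) falling below a target rate Rs.

```python
import numpy as np

def secrecy_outage_mc(m_M, m_E, snr_M, snr_E, Rs, n=200_000, seed=1):
    """Monte Carlo secrecy outage probability under Nakagami-m fading.

    m_M, m_E   : fading parameters of the main and eavesdropper channels
    snr_M/snr_E: average SNRs; instantaneous SNR = average SNR * power gain
    """
    rng = np.random.default_rng(seed)
    g_M = rng.gamma(shape=m_M, scale=1.0 / m_M, size=n)   # main channel gains
    g_E = rng.gamma(shape=m_E, scale=1.0 / m_E, size=n)   # eavesdropper gains
    Cs = np.log2(1 + snr_M * g_M) - np.log2(1 + snr_E * g_E)
    return np.mean(Cs < Rs)

p = secrecy_outage_mc(m_M=2, m_E=1, snr_M=100.0, snr_E=10.0, Rs=1.0)
```

Raising the main-channel SNR drives the outage probability down, which is the qualitative behaviour the asymptotic expressions in the abstract capture in closed form.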
Abstract:
In this paper, we investigate amplify-and-forward (AF) multiple-input multiple-output spatial-division multiplexing (MIMO-SDM) cooperative wireless networks, in which each network node is equipped with multiple antennas. In order to deal with the problems of signal combining at the destination and cooperative relay selection, we propose an improved minimum mean square error (MMSE) signal combining scheme for signal recovery at the destination. Additionally, we propose two distributed relay selection algorithms based on minimising the mean squared error (MSE) of the signal estimate, for the cases where channel state information (CSI) from the source to the destination is available and unavailable at the candidate nodes. Simulation results demonstrate that the proposed combiner, together with the proposed relay selection algorithms, achieves higher diversity gain than previous approaches in both flat and frequency-selective fading channels.
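A compact sketch of standard MMSE combining of the kind the abstract builds on (toy dimensions and channels, not the paper's improved combiner): the weight matrix W = (HᴴH + σ²I)⁻¹Hᴴ minimises the mean squared error of the symbol estimate for unit-power symbols.

```python
import numpy as np

def mmse_combine(H, y, noise_var):
    """MMSE estimate of x from y = H x + n (unit-power symbols assumed)."""
    nt = H.shape[1]
    # W = (H^H H + sigma^2 I)^{-1} H^H, applied to the received vector y
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt), H.conj().T)
    return W @ y

rng = np.random.default_rng(0)
# 4 receive antennas, 2 transmit streams, Rayleigh-like complex channel
H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
x = np.array([1 + 0j, -1 + 0j])                  # transmitted symbols
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
x_hat = mmse_combine(H, y, noise_var=0.005)
```

At low noise the MMSE solution approaches the zero-forcing solution while remaining well conditioned, which is why MMSE combiners are the usual baseline for SDM signal recovery.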
Abstract:
Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems which are for example nonlinear, large scale, non-convex and discontinuous. This hybrid algorithm combines the clonal selection algorithm (CSA) as the local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations which are used in CSA as the hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system to guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of the quality of the solution and the convergence characteristics.
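The hybrid idea can be sketched generically: DE/best/1 mutation with binomial crossover, plus a clonal-selection-style hypermutation of the best individual as a local search. This is a toy version on the sphere function with invented parameters, not the paper's CSDE with its constraint-repair method:

```python
import numpy as np

def csde_sphere(dim=5, pop_size=20, gens=200, F=0.5, CR=0.9, seed=3):
    """DE/best/1 with a clonal-selection-style local search on the best member."""
    rng = np.random.default_rng(seed)
    f = lambda x: np.sum(x ** 2)                       # toy objective (sphere)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            mutant = best + F * (pop[r1] - pop[r2])    # DE/best/1 mutation
            cross = rng.random(dim) < CR               # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            if f(trial) < fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f(trial)
        # Clonal-selection step: clone the best, hypermutate, keep improvements.
        b = np.argmin(fit)
        clones = pop[b] + rng.normal(scale=0.1, size=(5, dim))
        cfit = np.array([f(c) for c in clones])
        if cfit.min() < fit[b]:
            pop[b], fit[b] = clones[np.argmin(cfit)], cfit.min()
    return fit.min()

best_val = csde_sphere()
```

The clonal step refines the incumbent without replacing the population-level DE search, which is the division of labour the abstract describes between global DE exploration and CSA local exploitation.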
Abstract:
Geopolymer binders are generally formed by reacting powdered aluminosilicate precursors with alkali silicate activators. Most research to date has concentrated on using either pulverised fuel ash or high purity dehydroxylated kaolin (metakaolin) in association with ground granulated blast furnace slag as the main precursor material. However, recently, attention has turned to alternative calcined clays that are abundant throughout the globe and have lower kaolinite contents than commercially available metakaolins. Due to the lack of clear and simple screening protocols enabling assessment of such geological resources for use as precursors in geopolymer systems, the present paper presents results from experimental work that was carried out to develop a functional binder using materials containing kaolinite taken from the Interbasaltic Formation of Northern Ireland. The influence of mineralogy has been examined, and a screening process, using three Interbasaltic materials as examples, that will assist in the rapid selection of suitable geopolymeric precursors from such materials is outlined.
Abstract:
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
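The clustering step can be illustrated with a toy two-statistic version (the feature values below are invented; the paper clusters cross-validation likelihoods and corrected AIC values per band): K-means with K = 2 separates burst-like from stochastic sources in the statistic plane.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain K-means: returns (centers, labels) after Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
# Toy per-source fit statistics, e.g. (BL-model score, OU-model score)
bl = rng.normal([3.0, 0.0], 0.5, size=(50, 2))     # burst-like sources
sv = rng.normal([0.0, 3.0], 0.5, size=(50, 2))     # stochastic variables
X = np.vstack([bl, sv])
centers, labels = kmeans(X, k=2)
```

When the two model families fit their respective source types much better, the statistics separate cleanly and the cluster assignment recovers the two classes, mirroring the SV/BL separation reported for the verification sets.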
Abstract:
Purpose: The selection of suitable outcomes and sample size calculation are critical factors in the design of a randomised controlled trial (RCT). The goal of this study was to identify the range of outcomes and information on sample size calculation in RCTs on geographic atrophy (GA).
Methods: We carried out a systematic review of age-related macular degeneration (AMD) RCTs. We searched MEDLINE, EMBASE, Scopus, the Cochrane Library, www.controlled-trials.com, and www.ClinicalTrials.gov. Two independent reviewers screened records. One reviewer collected data and the second reviewer appraised 10% of the collected data. We scanned the reference lists of selected papers to include other relevant RCTs.
Results: The literature and registry search identified 3816 abstracts of journal articles and 493 records from trial registries. From a total of 177 RCTs on all types of AMD, 23 RCTs on GA were included. Eighty-one clinical outcomes were identified. Visual acuity (VA) was the most frequently used outcome, presented in 18 out of 23 RCTs, followed by measures of lesion area. For the sample size analysis, 8 GA RCTs were included. None of them provided sufficient information on sample size calculations.
Conclusions: This systematic review illustrates a lack of standardisation in outcome reporting in GA trials and issues regarding sample size calculation. These limitations significantly hamper attempts to compare outcomes across studies and to perform meta-analyses.
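For context on the sample size calculations the review looked for, here is a textbook two-proportion formula sketch with illustrative numbers; the proportions chosen are arbitrary and unrelated to any specific GA trial:

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Sample size per arm for comparing two proportions (two-sided z-test)."""
    z_a = norm.ppf(1 - alpha / 2)          # critical value for the alpha level
    z_b = norm.ppf(power)                  # quantile for the desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_arm(0.5, 0.6)   # e.g. detecting a 50% vs 60% responder rate
```

Reporting the assumed effect size, alpha, and power alongside such a calculation is precisely the information the reviewed GA trials failed to provide.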
Abstract:
The rationale for identifying drug targets within helminth neuromuscular signalling systems is based on the premise that adequate nerve and muscle function is essential for many of the key behavioural determinants of helminth parasitism, including sensory perception/host location, invasion, locomotion/orientation, attachment, feeding and reproduction. This premise is validated by the tendency of current anthelmintics to act on classical neurotransmitter-gated ion channels present on helminth nerve and/or muscle, yielding therapeutic endpoints associated with paralysis and/or death. Supplementary to classical neurotransmitters, helminth nervous systems are peptide-rich and encompass associated biosynthetic and signal transduction components - putative drug targets that remain to be exploited by anthelmintic chemotherapy. At this time, no neuropeptide system-targeting lead compounds have been reported, and given that our basic knowledge of neuropeptide biology in parasitic helminths remains inadequate, the short-term prospects for such drugs remain poor. Here, we review current knowledge of neuropeptide signalling in Nematoda and Platyhelminthes, and highlight a suite of 19 protein families that yield deleterious phenotypes in helminth reverse genetics screens. We suggest that orthologues of some of these peptidergic signalling components represent appealing therapeutic targets in parasitic helminths.
Abstract:
In this paper, we consider the variable selection problem for a nonlinear non-parametric system. Two approaches are proposed: a top-down approach and a bottom-up approach. The top-down algorithm selects a variable by detecting whether the corresponding partial derivative is zero or not at the point of interest. The algorithm is shown to achieve not only parameter convergence but also set convergence. This is critical because the variable selection problem is binary: a variable is either selected or not. The bottom-up approach is based on forward/backward stepwise selection, which is designed to work when the data length is limited. Both approaches determine the most important variables locally and allow the unknown non-parametric nonlinear system to have different local dimensions at different points of interest. Further, two potential applications, along with numerical simulations, are provided to illustrate the usefulness of the proposed algorithms.
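The top-down idea can be sketched with kernel-weighted local linear regression standing in for the paper's derivative estimator (the bandwidth, threshold, and test function are invented): estimate the partial derivatives at the point of interest and select the variables whose derivatives are clearly nonzero there.

```python
import numpy as np

def local_select(X, y, x0, bandwidth=0.5, thresh=0.3):
    """Fit a local linear model around x0; keep variables with |slope| > thresh."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.hstack([np.ones((len(X), 1)), X - x0])        # [intercept | slopes]
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)     # weighted least squares
    slopes = beta[1:]                                    # local partial derivatives
    return np.where(np.abs(slopes) > thresh)[0]

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 4))
# y depends on x0 and x1 only; x2 and x3 are irrelevant
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=400)
x0 = np.array([0.5, 0.5, 0.0, 0.0])
selected = local_select(X, y, x0)
```

Note that the selection is local: at a point where x1 = 0 the partial derivative of x1² vanishes, so x1 would not be selected there, which is the "different local dimensions" behaviour the abstract highlights.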
Abstract:
This paper investigates the gene selection problem for microarray data with small samples and variant correlation. Most existing algorithms require expensive computational effort, especially when thousands of genes are involved. The main objective of this paper is to effectively select the most informative genes from microarray data while keeping the computational expense affordable. This is achieved by proposing a novel forward gene selection algorithm (FGSA). To overcome the small-sample problem, an augmented data technique is first employed to produce an augmented data set. Taking inspiration from other gene selection methods, the L2-norm penalty is then introduced into the recently proposed fast regression algorithm to achieve a group selection ability. Finally, by defining a proper regression context, the proposed method can be implemented efficiently in software, which significantly reduces the computational burden. Both computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm in comparison with other approaches.
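A toy sketch in the spirit of FGSA follows: plain greedy forward selection with an L2 (ridge) penalty on synthetic data. The data dimensions and signal genes are invented, and this naive version recomputes each fit from scratch rather than using the authors' fast-regression recursions:

```python
import numpy as np

def forward_select(X, y, n_select, lam=0.1):
    """Greedily add the column that most reduces the ridge-penalized residual."""
    selected = []
    for _ in range(n_select):
        best_j, best_sse = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            A = X[:, selected + [j]]
            # ridge solution: (A^T A + lam I)^{-1} A^T y
            beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
            sse = np.sum((y - A @ beta) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 100))          # 50 "samples", 100 "genes"
y = 3 * X[:, 7] - 2 * X[:, 42] + 0.1 * rng.normal(size=50)
genes = forward_select(X, y, n_select=2)
```

With a strong two-gene signal the greedy procedure recovers both informative columns; the cost of this naive version grows with each step, which is the computational burden the paper's fast implementation is designed to avoid.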