164 results for Automatic selection
Abstract:
Outlier detection in high-dimensional categorical data has been a problem of much interest due to the extensive use of qualitative features for describing data across various application areas. Although various established methods exist for dealing with the dimensionality aspect through feature selection on numerical data, the categorical domain is still being actively explored. As outlier detection is generally treated as an unsupervised learning problem, owing to the lack of knowledge about the nature of the various types of outliers, the related feature selection task also needs to be handled in a similar manner. This motivates the need for an unsupervised feature selection algorithm for efficient detection of outliers in categorical data. Addressing this aspect, we propose a novel feature selection algorithm based on mutual information and entropy computation. Redundancy among the features is characterized using the mutual information measure in order to identify a feature subset with low redundancy. The performance of the proposed algorithm, compared with information gain based feature selection, shows its effectiveness for outlier detection. The efficacy of the proposed algorithm is demonstrated on various high-dimensional benchmark data sets using two existing outlier detection methods.
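A minimal sketch of how pairwise mutual information can score redundancy among categorical features and drive a greedy low-redundancy selection. The scoring rule and function names are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from collections import Counter

def entropy(col):
    """Empirical entropy (bits) of one categorical column."""
    counts = np.array(list(Counter(col).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B) from empirical counts."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def select_low_redundancy_features(X, k):
    """Greedily pick k columns of X (a list of categorical columns),
    preferring high-entropy features whose mutual information with the
    already-selected set (i.e., redundancy) is low."""
    n_features = len(X)
    selected = [max(range(n_features), key=lambda j: entropy(X[j]))]
    while len(selected) < k:
        remaining = [j for j in range(n_features) if j not in selected]
        def score(j):
            redundancy = np.mean([mutual_information(X[j], X[s]) for s in selected])
            return entropy(X[j]) - redundancy
        selected.append(max(remaining, key=score))
    return selected

# toy usage: 4 categorical features, select 2 (features 0 and 1 are duplicates)
X = [['a', 'a', 'b', 'b'], ['a', 'a', 'b', 'b'], ['x', 'y', 'x', 'y'], ['p', 'p', 'p', 'q']]
print(select_low_redundancy_features(X, 2))
```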
Abstract:
We propose energy harvesting technologies and cooperative relaying techniques to power the devices and improve reliability. We propose schemes to (a) maximize the packet reception ratio (PRR) through cooperation and (b) minimize the average packet delay (APD) through cooperation amongst nodes. Our key result and insight from the testbed implementation concerns the total data transmitted by each relay. A greedy policy that relays more data under a good harvesting condition turns out to be sub-optimal, because energy replenishment is a slow process. The optimal scheme offers a low APD and also improves the PRR.
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta seconds of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
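A small Monte Carlo sketch of the basic timer mechanism under a simple inverse metric-to-timer mapping. The mapping, node count, and Delta value are illustrative assumptions; the paper derives an optimal discrete mapping rather than this one:

```python
import numpy as np

rng = np.random.default_rng(0)

def timer_selection_success(metrics, delta, t_max=1.0):
    """Each node maps its metric (assumed in [0, 1]) to a timer via a
    monotone decreasing map, so the best node's timer expires first.
    Selection fails if the runner-up's timer expires within delta of it."""
    timers = t_max * (1.0 - metrics)          # toy local mapping (assumption)
    order = np.argsort(timers)
    return timers[order[1]] - timers[order[0]] > delta

# success probability for 10 nodes with uniform metrics and Delta = 0.05 s
delta = 0.05
trials = [timer_selection_success(rng.uniform(size=10), delta) for _ in range(10_000)]
print("success probability ~", np.mean(trials))
```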
Abstract:
Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood (ML) approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. This hard constraint allows the ML boundary estimation to be solved using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared to most state-of-the-art speech segmentation algorithms.
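A generic sketch of ML boundary estimation by dynamic programming, where each candidate segment is scored by a single model. The per-segment Gaussian score is a placeholder assumption standing in for the PLCS single-latent-component likelihood:

```python
import numpy as np

def segment_score(frames):
    """Placeholder log-likelihood of a segment under a single model
    (per-dimension Gaussian fit; an assumption, not the PLCS likelihood)."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)

def best_boundaries(X, n_segments):
    """dp[k][t] = best score of splitting the first t frames into k segments."""
    T = len(X)
    dp = np.full((n_segments + 1, T + 1), -np.inf)
    back = np.zeros((n_segments + 1, T + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for t in range(k, T + 1):
            for s in range(k - 1, t):
                cand = dp[k - 1, s] + segment_score(X[s:t])
                if cand > dp[k, t]:
                    dp[k, t], back[k, t] = cand, s
    # trace back the segment end indices
    ends, t = [], T
    for k in range(n_segments, 0, -1):
        ends.append(int(t))
        t = back[k, t]
    return sorted(ends)

# toy "spectrogram": three 20-frame segments with different means
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2)) for m in (0.0, 1.0, 3.0)])
print(best_boundaries(X, 3))   # expect ends near [20, 40, 60]
```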
Abstract:
Single receive antenna selection (AS) is a popular method for obtaining diversity benefits without the additional cost of multiple radio receiver chains. Since only one antenna receives at any time, the transmitter sends a pilot multiple times to enable the receiver to estimate the channel gains of its N antennas to the transmitter and select an antenna. In time-varying channels, the channel estimates of different antennas are outdated to different extents. We analyze the symbol error probability (SEP) of the N-pilot and (N+1)-pilot AS training schemes in time-varying channels. In the former, the transmitter sends one pilot for each receive antenna. In the latter, the transmitter sends one additional pilot that helps sample the channel fading process of the selected antenna twice. We present several new results about the SEP, the optimal energy allocation across pilots and data, and the optimal selection rule in time-varying channels for the two schemes. We show that, due to the unique nature of AS, the (N+1)-pilot scheme, despite its longer training duration, is much more energy-efficient than the conventional N-pilot scheme. An extension to a practical scenario in which all data symbols of a packet are received by the same antenna is also investigated.
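A Monte Carlo sketch of the basic setting: antenna selection based on noisy, outdated estimates and the resulting SEP for BPSK over Rayleigh fading. The correlation model, SNR values, and perfect demodulation CSI are illustrative assumptions, not the paper's exact training schemes:

```python
import numpy as np

rng = np.random.default_rng(2)

def sep_with_antenna_selection(N=4, snr_db=10.0, rho=0.95, pilot_snr_db=10.0, trials=200_000):
    """SEP of BPSK with single receive AS when selection uses noisy estimates
    of the channel at training time, while data sees a correlated (outdated)
    channel.  All parameter values are assumptions for illustration."""
    snr, psnr = 10 ** (snr_db / 10), 10 ** (pilot_snr_db / 10)
    cgauss = lambda shape: (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
    h_train = cgauss((trials, N))                      # channel at training time
    h_data = rho * h_train + np.sqrt(1 - rho ** 2) * cgauss((trials, N))
    h_est = h_train + cgauss((trials, N)) / np.sqrt(psnr)   # noisy pilot estimates
    sel = np.argmax(np.abs(h_est), axis=1)             # pick largest estimated gain
    h_sel = h_data[np.arange(trials), sel]
    s = np.where(rng.random(trials) < 0.5, 1.0, -1.0)  # BPSK symbols
    y = h_sel * s + cgauss(trials) / np.sqrt(snr)
    s_hat = np.sign(np.real(np.conj(h_sel) * y))       # coherent detection
    return np.mean(s_hat != s)

print("estimated SEP:", sep_with_antenna_selection())
```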
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. The scheme uses a mapping from the channel gain, or, more generally, a real-valued metric, to a timer value. The mapping is such that the timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
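A toy sketch of discrete rate adaptation applied to the selected node's gain, assuming idealized best-node selection (not the timer scheme or the pragmatic policy of the paper). The rate table and the gain-equals-SNR assumption are illustrative:

```python
import numpy as np

# illustrative discrete rate table: (SNR threshold, rate in bits/symbol) -- assumptions
RATE_TABLE = [(1.0, 1.0), (3.0, 2.0), (7.0, 3.0), (15.0, 4.0)]

def discrete_rate(snr):
    """Highest rate whose SNR threshold the selected node's channel supports."""
    rate = 0.0
    for thresh, r in RATE_TABLE:
        if snr >= thresh:
            rate = r
    return rate

rng = np.random.default_rng(3)
gains = rng.exponential(size=(100_000, 8))    # Rayleigh power gains of 8 nodes
best = gains.max(axis=1)                      # idealized best-node selection
throughput = np.mean([discrete_rate(g) for g in best])
print("average throughput (bits/symbol):", throughput)
```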
Abstract:
Transmit antenna selection (AS) has been adopted in contemporary wideband wireless standards such as Long Term Evolution (LTE). We analyze a comprehensive new model for AS that captures several key features of its operation in wideband orthogonal frequency division multiple access (OFDMA) systems. These include the use of channel-aware frequency-domain scheduling (FDS) in conjunction with AS, the hardware constraint that a user must transmit using the same antenna over all its assigned subcarriers, and the scheduling constraint that the subcarriers assigned to a user must be contiguous. The model also captures the novel dual pilot training scheme used in LTE, in which a coarse, system bandwidth-wide sounding reference signal (SRS) is used to acquire relatively noisy channel state information (CSI) for AS and FDS, and a dense narrow-band demodulation reference signal is used to acquire accurate CSI for data demodulation. We analyze the symbol error probability when AS is done in conjunction with channel-unaware, but fair, round-robin scheduling and with channel-aware greedy FDS. Our results quantify how effective joint AS-FDS is in dispersive environments, the interactions between the above features, and the ability of the user to lower the SRS power with minimal performance degradation.
Abstract:
This paper considers antenna selection (AS) at a receiver equipped with multiple antenna elements but only a single radio frequency chain for packet reception. As information about the channel state is acquired using training symbols (pilots), the receiver makes its AS decisions based on noisy channel estimates. Additional information that can be exploited for AS includes the time-correlation of the wireless channel and the results of the link-layer error checks upon receiving the data packets. In this scenario, the task of the receiver is to sequentially select (a) the pilot symbol allocation, i.e., how to distribute the available pilot symbols among the antenna elements for channel estimation on each of the receive antennas; and (b) the antenna to be used for data packet reception. The goal is to maximize the expected throughput, based on the past history of allocation and selection decisions and the corresponding noisy channel estimates and error check results. Since the channel state is only partially observed through the noisy pilots and the error checks, the joint problem of pilot allocation and AS is modeled as a partially observable Markov decision process (POMDP). The solution to the POMDP yields the policy that maximizes the long-term expected throughput. Using the finite-state Markov chain (FSMC) model for the wireless channel, the performance of the POMDP solution is compared with that of other existing schemes, and numerical evaluations illustrate that the POMDP solution significantly outperforms them.
Abstract:
Opportunistic relay selection in a multiple source-destination (MSD) cooperative system requires quickly allocating to each source-destination (SD) pair a suitable relay based on channel gains. Since the channel knowledge is available only locally at a relay and not globally, efficient relay selection algorithms are needed. For an MSD system in which the SD pairs communicate in a time-orthogonal manner with the help of decode-and-forward relays, we propose three novel relay selection algorithms, namely, contention-free en masse assignment (CFEA), contention-based en masse assignment (CBEA), and a hybrid algorithm that combines the best features of CFEA and CBEA. En masse assignment exploits the fact that a relay can often aid not one but multiple SD pairs and can, therefore, be assigned to multiple SD pairs. This drastically reduces the average time required to allocate a relay to an SD pair compared to allocating relays to the SD pairs one by one. We show that the algorithms are much faster than other selection schemes proposed in the literature and yield significantly higher net system throughputs. Interestingly, CFEA is as effective as CBEA over a wider range of system parameters than in single SD pair systems.
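A minimal sketch of the en masse idea: every SD pair is assigned a relay from among the relays that can aid it, and a relay may serve several pairs. The decoding threshold, bottleneck criterion, and gains are illustrative assumptions, not the CFEA/CBEA protocols themselves:

```python
import numpy as np

rng = np.random.default_rng(4)

def en_masse_assignment(sr_gains, rd_gains, threshold=1.0):
    """For each SD pair p, pick the relay with the best bottleneck
    (min of source-relay and relay-destination) gain among relays whose
    source-relay gain exceeds a decoding threshold.  A relay may be
    assigned to multiple SD pairs."""
    n_pairs, _ = sr_gains.shape
    assignment = {}
    for p in range(n_pairs):
        bottleneck = np.minimum(sr_gains[p], rd_gains[p])
        feasible = np.where(sr_gains[p] >= threshold)[0]   # relays that can decode
        if feasible.size:
            assignment[p] = int(feasible[np.argmax(bottleneck[feasible])])
    return assignment

# 3 SD pairs, 4 candidate relays with Rayleigh power gains
sr = rng.exponential(size=(3, 4))   # source -> relay
rd = rng.exponential(size=(3, 4))   # relay -> destination
print(en_masse_assignment(sr, rd))
```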
Abstract:
This paper presents the classification, representation, and extraction of deformation features in sheet-metal parts. The thickness is constant for these shape features, and hence they are also referred to as constant thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among the faces. Deformation of the base-sheet or forming of material creates Bends and Walls with respect to a base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features having two or more BDFs are defined as characteristic combinations of Bends and Walls and represented as a graph called the Basic Deformation Features Graph (BDFG). The graph, therefore, represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to the feature determine the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of sheet-metal parts. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed.
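A minimal data-structure sketch of how a compound deformation feature could be captured as a graph of faces connected by bends and walls. The class name, labels, and signature function are illustrative assumptions; the paper defines the BDFG formally:

```python
# Toy Basic Deformation Features Graph (BDFG): faces are nodes, edges carry
# the connection kind ("bend" or "wall") -- names and labels are assumptions.
class BDFG:
    def __init__(self):
        self.faces = set()
        self.edges = []                        # (face_a, face_b, kind)

    def add_connection(self, face_a, face_b, kind):
        self.faces.update((face_a, face_b))
        self.edges.append((face_a, face_b, kind))

    def signature(self):
        """Characteristic arrangement: counts of each connection kind,
        usable as a simple key when matching compound features."""
        counts = {}
        for _, _, kind in self.edges:
            counts[kind] = counts.get(kind, 0) + 1
        return tuple(sorted(counts.items()))

# toy compound feature: a base sheet connected to two walls through bends
g = BDFG()
g.add_connection("base", "wall_1", "bend")
g.add_connection("base", "wall_2", "bend")
print(g.signature())    # (('bend', 2),)
```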
Abstract:
A supply chain ecosystem consists of the elements of the supply chain and the entities that influence the goods, information, and financial flows through the supply chain. These influences come through government regulations, human, financial, and natural resources, logistics infrastructure and management, etc., and thus affect supply chain performance. Similarly, all the ecosystem elements also contribute to the risk. The aim of this paper is to identify both performance-based and risk-based decision criteria that are important and critical to the supply chain. A two-step approach using fuzzy AHP and the fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) is proposed for multi-criteria decision-making and illustrated using a numerical example. The first step performs the supplier selection without considering risks; in the next step, the suppliers are ranked according to their risk profiles. The two rankings are then consolidated into one. The method is also extended to multi-tier supplier selection. In short, this paper presents a method for the design of a resilient supply chain.
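A sketch of the ranking step using crisp TOPSIS (the paper uses the fuzzy variant). The supplier scores, criteria, and weights below are illustrative assumptions:

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
    benefit[j] is True for criteria where larger is better."""
    X = np.asarray(decision_matrix, dtype=float)
    R = X / np.linalg.norm(X, axis=0)             # vector-normalize each criterion
    V = R * np.asarray(weights, dtype=float)      # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness      # best supplier first

# 3 suppliers x 3 criteria (cost, quality, delivery reliability) -- toy data
scores = [[200, 0.90, 0.95], [180, 0.80, 0.90], [220, 0.95, 0.85]]
ranking, cc = topsis(scores, weights=[0.4, 0.4, 0.2], benefit=[False, True, True])
print(ranking, np.round(cc, 3))
```

In the two-step scheme described above, one such ranking would be computed from performance criteria and another from risk criteria, and the two rankings then consolidated.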
Abstract:
Exploiting the performance potential of GPUs requires managing the data transfers to and from them efficiently, which is an error-prone and tedious task. In this paper, we develop a software coherence mechanism to fully automate all data transfers between the CPU and GPU without any assistance from the programmer. Our mechanism uses compiler analysis to identify potential stale accesses and uses a runtime to initiate transfers as necessary. This allows us to avoid the redundant transfers that are exhibited by all other existing automatic memory management proposals. We integrate our automatic memory manager into the X10 compiler and runtime, and find that it not only results in smaller and simpler programs but also eliminates redundant memory transfers. Tested on eight programs ported from the Rodinia benchmark suite, it achieves (i) a 1.06x speedup over hand-tuned manual memory management, and (ii) a 1.29x speedup over another recently proposed compiler-runtime automatic memory management system. Compared to other existing runtime-only and compiler-only proposals, it also transfers 2.2x to 13.3x less data on average.
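A toy sketch of the stale-access idea: track which copy of a buffer is current and transfer only when the other side would read stale data. All names are invented for illustration, no real GPU API is called, and this is not the paper's X10 compiler-runtime mechanism:

```python
class ManagedBuffer:
    """Toy software-coherence wrapper: a host copy and a device copy of the
    same array, with a dirty flag so a transfer happens only on a stale access."""
    def __init__(self, host_data):
        self.host = list(host_data)
        self.device = list(host_data)
        self.host_dirty = False     # host copy modified since last transfer?
        self.transfers = 0

    def write_on_host(self, i, value):
        self.host[i] = value
        self.host_dirty = True

    def read_on_device(self, i):
        if self.host_dirty:         # stale access detected -> copy once
            self.device = list(self.host)
            self.host_dirty = False
            self.transfers += 1
        return self.device[i]

buf = ManagedBuffer([0, 0, 0])
buf.write_on_host(0, 42)
buf.read_on_device(0)    # triggers one host->device transfer
buf.read_on_device(1)    # no redundant transfer
print(buf.transfers)     # 1
```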
Abstract:
Data clustering groups data so that items which are similar to each other are in the same group and items which are dissimilar are in different groups. Since clustering is generally a subjective activity, it is possible to obtain different clusterings of the same data depending on the need. This paper attempts to find the best clustering of the data by first carrying out feature selection and then using only the selected features for clustering. PSO (Particle Swarm Optimization) is used for clustering, with feature selection carried out simultaneously. The performance of the proposed algorithm is evaluated on several benchmark data sets. The experimental results show that the proposed methodology outperforms previous approaches, such as basic PSO and K-means, for the clustering problem.
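A sketch of how a single particle could jointly encode a feature mask and cluster centres, and how its fitness might be evaluated. The encoding and fitness function are illustrative assumptions, not necessarily the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness(particle, X, n_clusters):
    """Particle layout (assumption): first d entries threshold to a binary
    feature mask, the rest reshape to k cluster centres.  Fitness is the total
    distance of points to their nearest centre using only selected features."""
    n, d = X.shape
    mask = particle[:d] > 0.5
    if not mask.any():
        return np.inf                                  # no features selected
    centres = particle[d:].reshape(n_clusters, d)[:, mask]
    Xm = X[:, mask]
    dists = np.linalg.norm(Xm[:, None, :] - centres[None, :, :], axis=2)
    return dists.min(axis=1).sum()

# toy data: 100 points, 2 informative features + 2 noise features, 2 clusters
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
X[:, 2:] = rng.normal(size=(100, 2))
particle = np.concatenate([rng.uniform(0, 1, 4),               # feature mask part
                           rng.uniform(X.min(), X.max(), 2 * 4)])  # 2 centres
print(fitness(particle, X, n_clusters=2))
```

A PSO loop would then update a swarm of such particles using the usual velocity and position rules, minimizing this fitness.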
Abstract:
In the underlay mode of cognitive radio, secondary users can transmit while the primary is transmitting, but under tight interference constraints, which limit the secondary system performance. Antenna selection (AS)-based multiple antenna techniques, which require less hardware and yet exploit spatial diversity, help improve the secondary system performance. In this paper, we develop the optimal transmit AS rule that minimizes the symbol error probability (SEP) of an average interference-constrained secondary system that operates in the underlay mode. We show that the optimal rule is a non-linear function of the power gains of the channels from the secondary transmit antenna to the primary receiver and from the secondary transmit antenna to the secondary receive antenna. The optimal rule is different from the several ad hoc rules that have been proposed in the literature. We also propose a closed-form, tractable variant of the optimal rule and analyze its SEP. Several results are presented to compare the performance of the closed-form rule with the ad hoc rules, and interesting inter-relationships among them are brought out.
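A small simulation sketch contrasting two simple ad hoc transmit AS rules in an underlay setting: selecting the antenna with the largest secondary-link gain versus the largest ratio of secondary-link gain to interference-link gain. These rules and the Rayleigh-fading setup are illustrations only; the paper's optimal rule is a different non-linear function of the two gains:

```python
import numpy as np

rng = np.random.default_rng(6)

def compare_adhoc_rules(n_tx=4, trials=100_000):
    """Compare two ad hoc transmit AS rules: (a) maximize the secondary link
    gain, ignoring the primary; (b) maximize the ratio of the secondary link
    gain to the interference link gain."""
    g_ss = rng.exponential(size=(trials, n_tx))   # secondary tx -> secondary rx
    g_sp = rng.exponential(size=(trials, n_tx))   # secondary tx -> primary rx
    idx = np.arange(trials)
    for name, sel in (("max g_ss", np.argmax(g_ss, axis=1)),
                      ("max g_ss/g_sp", np.argmax(g_ss / g_sp, axis=1))):
        print(name,
              "| mean secondary gain:", g_ss[idx, sel].mean().round(3),
              "| mean interference gain:", g_sp[idx, sel].mean().round(3))

compare_adhoc_rules()
```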
Abstract:
Transmit antenna selection (AS) is a popular, low hardware-complexity technique that improves the performance of an underlay cognitive radio system, in which a secondary transmitter can transmit when the primary is on, but under tight constraints on the interference it causes to the primary. The underlay interference constraint fundamentally changes the criterion used to select the antenna because the channel gains to both the secondary and primary receivers must be taken into account. We develop a novel and optimal joint AS and transmit power adaptation policy that minimizes a Chernoff upper bound on the symbol error probability (SEP) at the secondary receiver subject to an average transmit power constraint and an average primary interference constraint. Explicit expressions for the optimal antenna and power are provided in terms of the channel gains to the primary and secondary receivers. The SEP of the optimal policy is at least an order of magnitude lower than that achieved by several ad hoc selection rules proposed in the literature, and even than that of the optimal antenna selection rule for the case where the transmit power is either zero or a fixed value.