85 results for Information search – models


Relevance:

30.00%

Publisher:

Abstract:

In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point targets in wireless sensor networks using information coverage. Defining a set of sensors which can collectively sense a target accurately as an information cover, we propose an algorithm to obtain a Disjoint Set of Information Covers (DSIC), which achieves a longer network lifetime than the set of covers obtained using the Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm recently proposed in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
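
As a rough illustration of the information-coverage idea (not the paper's DSIC algorithm), the sketch below greedily partitions sensors around a single target into disjoint covers. The accuracy test, an inverse-squared-distance sum against a threshold, and all function names and numbers are assumptions made for the demo.

```python
# Hypothetical sketch: greedily partition sensors into disjoint covers for a
# single point target. A sensor set counts as an "information cover" here when
# an aggregate accuracy proxy exceeds a threshold -- a stand-in for the
# estimation-error criterion of information coverage, not the paper's exact test.
import math

def is_information_cover(sensors, target, threshold):
    """Crude accuracy proxy: closer sensors contribute more information."""
    info = sum(1.0 / (math.dist(s, target) ** 2 + 1e-9) for s in sensors)
    return info >= threshold

def disjoint_information_covers(sensors, target, threshold):
    """Greedily form disjoint covers; each sensor is used in at most one cover."""
    remaining = sorted(sensors, key=lambda s: math.dist(s, target))
    covers, current = [], []
    for s in remaining:
        current.append(s)
        if is_information_cover(current, target, threshold):
            covers.append(current)
            current = []
    return covers  # leftover sensors in `current` do not form a complete cover

if __name__ == "__main__":
    sensors = [(1.0, 2.0), (2.5, 0.5), (4.0, 4.0), (0.5, 0.5), (3.0, 1.0)]
    covers = disjoint_information_covers(sensors, target=(2.0, 2.0), threshold=0.5)
    print(len(covers), "disjoint covers; activating one cover at a time extends lifetime")
```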

Relevance:

30.00%

Publisher:

Abstract:

Electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature; however, apart from claims of relative accuracy, the impact of using different models for cable insulation has not been investigated until now. The steady-state electric field in the DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations on the use of different empirical conductivity models suggested in the literature for HVDC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations, and it is pointed out that using these models in the design or evaluation of cables will lead to errors.
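
To make the dependence concrete, the sketch below shows one common way such an empirical model enters the steady-state field computation for a cylindrical insulation: assume a conductivity of the exponential form sigma0*exp(a*T)*exp(b*|E|) and iterate the radial field shape until its integral matches the applied voltage. The model form, all coefficients, geometry and temperatures are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not from the paper) of how an empirical conductivity model
# feeds into the steady-state field of a cylindrical DC cable insulation.
import numpy as np

def conductivity(E, T, sigma0=1e-16, a=0.084, b=6.5e-8):
    """Illustrative field- and temperature-dependent DC conductivity [S/m]."""
    return sigma0 * np.exp(a * T) * np.exp(b * np.abs(E))

def steady_state_field(U, r_in, r_out, T_profile, n=200, iters=200):
    """Fixed-point iteration: E ~ 1/(r*sigma), rescaled to the applied voltage U."""
    r = np.linspace(r_in, r_out, n)
    T = T_profile(r)
    E = np.full(n, U / (r_out - r_in))              # initial guess: uniform field
    for _ in range(iters):
        shape = 1.0 / (r * conductivity(E, T))      # radial shape for a constant leakage current
        integral = np.sum(0.5 * (shape[1:] + shape[:-1]) * np.diff(r))
        E = 0.5 * E + 0.5 * shape * U / integral    # damped update so that integral(E dr) = U
    return r, E

if __name__ == "__main__":
    # 90 kV applied; temperature drops linearly from 70 C (conductor) to 50 C (screen).
    r, E = steady_state_field(90e3, 0.01, 0.02, lambda r: 70.0 - 20.0 * (r - 0.01) / 0.01)
    print("E at conductor: %.2f kV/mm, E at screen: %.2f kV/mm" % (E[0] / 1e6, E[-1] / 1e6))
```

Swapping in a different conductivity() is all it takes to see how strongly the computed stress at the screen versus the conductor depends on the chosen model.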

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
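
The sketch below conveys the general workflow with the simplest of the three learners (a linear model with pairwise interaction terms): fit it to a handful of measured flag configurations, then search the flag space on the model rather than by recompiling. The flag encoding, the runtimes and the model form are assumptions for the demo, not the paper's setup.

```python
# Illustrative sketch: learn a linear model with pairwise interactions mapping
# binary optimization-flag settings to runtime, then search flags on the model.
import itertools
import numpy as np

def features(flags):
    """Binary flags plus all pairwise interactions, with an intercept term."""
    x = list(flags)
    x += [a * b for a, b in itertools.combinations(flags, 2)]
    return np.array([1.0] + x)

# Hypothetical training data: a few sampled flag configurations and runtimes (s).
configs = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
           (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
runtimes = np.array([10.0, 8.9, 9.4, 9.8, 8.1, 8.8, 9.3, 8.0])   # made-up measurements

X = np.vstack([features(c) for c in configs])
coef, *_ = np.linalg.lstsq(X, runtimes, rcond=None)               # least-squares fit

# Model-guided search: pick the flag setting with the lowest predicted runtime.
best = min(itertools.product([0, 1], repeat=3), key=lambda c: features(c) @ coef)
print("predicted-best flags:", best, "predicted runtime:", features(best) @ coef)
```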

Relevance:

30.00%

Publisher:

Abstract:

Feature-track matrix factorization based methods have been attractive solutions to the Structure-from-Motion (SfM) problem. Group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to rank-deficient systems of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable along rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the Structure-from-Motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting dense 3D structure of a human face and comparing it with a well-known Structure-from-Motion algorithm due to Brand.
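
A much-simplified sketch of the subspace idea follows: build a low-dimensional shape basis from "range scans" via PCA and project a noise-corrupted reconstruction onto it, discarding distortions that lie outside the object-class subspace. The synthetic scans, the noise model and the dimensions are assumptions; the full factorization pipeline is not reproduced.

```python
# Simplified sketch of the subspace constraint (not the full SfM factorization).
import numpy as np

rng = np.random.default_rng(0)
n_points = 500                                    # hypothetical face mesh size

# Hypothetical object class: shapes generated from 5 deformation modes.
modes = rng.normal(size=(5, n_points * 3))        # stand-in for range-scan variation
base = rng.normal(size=n_points * 3)
scans = base + rng.normal(size=(20, 5)) @ modes   # 20 "high-resolution scans"

# Learn the class subspace from the scans (PCA / thin SVD).
mean_shape = scans.mean(axis=0)
_, _, Vt = np.linalg.svd(scans - mean_shape, full_matrices=False)
basis = Vt[:5]                                    # orthonormal basis of the subspace

# A reconstruction corrupted by tracking noise that pushes it off the subspace.
true_shape = scans[0]
noisy_shape = true_shape + rng.normal(scale=0.5, size=true_shape.shape)

# Subspace constraint: keep only the components explainable by the basis.
constrained = mean_shape + basis.T @ (basis @ (noisy_shape - mean_shape))

print("error before: %.2f   after projection: %.2f" %
      (np.linalg.norm(noisy_shape - true_shape), np.linalg.norm(constrained - true_shape)))
```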

Relevance:

30.00%

Publisher:

Abstract:

The keyword-based search technique suffers from the problem of synonymic and polysemic queries. Current approaches address only the problem of synonymic queries, in which different queries might have the same information requirement. But the problem of polysemic queries, i.e., the same query having different intentions, still remains unaddressed. In this paper, we propose the notion of intent clusters, the members of which have the same intention. We develop a clustering algorithm that uses the user session information in query logs, in addition to query-URL entries, to identify clusters of queries having the same intention. The proposed approach has been studied through case examples from actual AOL log data, and the clustering algorithm is shown to be successful in discerning the user intentions.
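
As a toy version of the idea (not the paper's algorithm), the sketch below clusters queries by the overlap of their clicked URLs; session information is omitted for brevity. The queries, URLs and the Jaccard threshold are invented.

```python
# Toy intent clustering: queries that lead to overlapping clicked URLs are
# merged into one cluster, separating the two intents behind a polysemic query.
from itertools import combinations

clicks = {                                   # query -> set of clicked URLs (made up)
    "jaguar speed": {"wildlifefacts.example/jaguar", "bigcats.example"},
    "jaguar top speed": {"wildlifefacts.example/jaguar"},
    "jaguar price": {"cars.example/jaguar", "dealers.example"},
    "jaguar xf review": {"cars.example/jaguar"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Single-link clustering via union-find: merge queries whose URL sets overlap enough.
parent = {q: q for q in clicks}
def find(q):
    while parent[q] != q:
        parent[q] = parent[parent[q]]
        q = parent[q]
    return q

for q1, q2 in combinations(clicks, 2):
    if jaccard(clicks[q1], clicks[q2]) >= 0.3:
        parent[find(q1)] = find(q2)

clusters = {}
for q in clicks:
    clusters.setdefault(find(q), []).append(q)
print(list(clusters.values()))               # two intent clusters: animal vs. car
```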

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a model for military conflicts where the defending forces have to determine an optimal partitioning of available resources to counter attacks from an adversary on two different fronts. The Lanchester attrition model is used to develop the dynamical equations governing the variation in force strength. Three different allocation schemes - Time-Zero-Allocation (TZA), Allocate-Assess-Reallocate (AAR), and Continuous Constant Allocation (CCA) - are considered, and the optimal solutions are obtained in each case. Numerical examples are given to support the analytical results.
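
A toy illustration of the simplest scheme (Time-Zero-Allocation) is sketched below: the defender splits its force between the two fronts once, at time zero, and Lanchester square-law attrition is simulated on each front. The coefficients, force levels and brute-force search over the split are assumptions for the demo, not the paper's analytical solution.

```python
# Toy Time-Zero-Allocation under Lanchester (square-law) attrition on two fronts.
import numpy as np

def surviving_defenders(split, D0=100.0, A1=60.0, A2=40.0, a=0.01, b=0.012, dt=0.01, T=50.0):
    """Defenders remaining after fighting on both fronts with a fixed initial split."""
    d = np.array([split * D0, (1 - split) * D0])   # defenders sent to fronts 1 and 2
    att = np.array([A1, A2])                       # attackers on fronts 1 and 2
    for _ in range(int(T / dt)):
        d_new = np.maximum(d - dt * a * att, 0.0)  # attrition of defenders
        att = np.maximum(att - dt * b * d, 0.0)    # attrition of attackers
        d = d_new
    return d.sum()

splits = np.linspace(0.0, 1.0, 101)
best = max(splits, key=surviving_defenders)
print("best time-zero split to front 1: %.2f (survivors: %.1f)"
      % (best, surviving_defenders(best)))
```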

Relevance:

30.00%

Publisher:

Abstract:

Non-Gaussianity of signals/noise often results in significant performance degradation for systems that are designed under the Gaussian assumption, so non-Gaussian signals/noise require a different modelling and processing approach. In this paper, we discuss a new Bayesian estimation technique for non-Gaussian signals corrupted by colored non-Gaussian noise. The method is based on using zero-mean finite Gaussian mixture models (GMMs) for the signal and the noise. The estimation is done using an adaptive non-causal nonlinear filtering technique. The method involves deriving an estimator in terms of the GMM parameters, which are in turn estimated using the EM algorithm. The proposed filter is of finite length and offers computational feasibility. The simulations show that the proposed method gives a significant improvement over the linear filter for a wide variety of noise conditions, including impulsive noise. We also claim that estimating the signal using its correlation with both past and future samples reduces the mean squared error compared to estimation based on past samples only.
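
To show why a GMM prior helps, the sketch below implements a heavily simplified per-sample MMSE estimator for a zero-mean GMM signal in white Gaussian noise; the paper's setting (colored non-Gaussian noise, non-causal filtering, EM-estimated parameters) is not reproduced, and the mixture weights, variances and noise level are assumptions.

```python
# Simplified MMSE estimator: GMM prior on the signal sample, white Gaussian noise.
import numpy as np

weights = np.array([0.7, 0.3])        # GMM prior on the signal sample x (assumed)
variances = np.array([0.2, 4.0])      # a "quiet" and an "impulsive" component
noise_var = 0.5                       # y = x + n, n ~ N(0, noise_var)

def gmm_mmse(y):
    """E[x | y] under the GMM prior: responsibility-weighted Wiener shrinkage."""
    total_var = variances + noise_var
    resp = weights * np.exp(-0.5 * y**2 / total_var) / np.sqrt(2 * np.pi * total_var)
    resp /= resp.sum()
    return np.sum(resp * (variances / total_var)) * y

rng = np.random.default_rng(1)
comp = rng.choice(2, size=2000, p=weights)
x = rng.normal(scale=np.sqrt(variances[comp]))
y = x + rng.normal(scale=np.sqrt(noise_var), size=x.size)

x_hat = np.array([gmm_mmse(v) for v in y])
print("MSE of linear (single-Gaussian Wiener) estimate: %.3f" %
      np.mean((y * x.var() / (x.var() + noise_var) - x) ** 2))
print("MSE of GMM-based MMSE estimate:                  %.3f" % np.mean((x_hat - x) ** 2))
```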

Relevance:

30.00%

Publisher:

Abstract:

Grover's database search algorithm, although discovered in the context of quantum computation, can be implemented using any physical system that allows superposition of states. A physical realization of this algorithm is described using coupled simple harmonic oscillators, which can be exactly solved in both classical and quantum domains. Classical wave algorithms are far more stable against decoherence compared to their quantum counterparts. In addition to providing convenient demonstration models, they may have a role in practical situations, such as catalysis.
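
For reference, the sketch below is a plain state-vector simulation of the standard Grover iteration (oracle phase flip plus inversion about the mean) for a database of size N = 16 with one marked item; it does not attempt the coupled-oscillator realization described in the abstract.

```python
# State-vector simulation of Grover's iterations for N = 16 with one marked item.
import numpy as np

N, marked = 16, 7
state = np.full(N, 1.0 / np.sqrt(N))              # uniform superposition

def grover_iteration(psi):
    psi = psi.copy()
    psi[marked] *= -1.0                           # oracle: flip the marked amplitude
    return 2.0 * psi.mean() - psi                 # diffusion: inversion about the mean

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) steps
for _ in range(iterations):
    state = grover_iteration(state)

print("success probability after %d iterations: %.3f" % (iterations, state[marked] ** 2))
```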

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present self-assessment schemes (SAS) for multiple agents performing a search mission over an unknown terrain. The agents are subject to limited communication and sensor ranges, and they communicate and coordinate with their neighbours to arrive at route decisions. The self-assessment schemes proposed here have very low communication and computational overhead, and they offer attractive features such as scalability to a large number of agents and fast decision-making. SAS can be used with partial or complete information-sharing schemes during the search mission. We validate the performance of SAS through simulation on a large search space with 100 agents, using different information structures and self-assessment schemes, and compare the results with those of a previously proposed negotiation scheme. The simulation results show that SAS is scalable to a large number of agents and can perform as well as the negotiation schemes with a reduced communication requirement (almost 20% of that required for negotiation).
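
A generic toy of the setting (not the paper's SAS) is sketched below: agents with limited sensor and communication ranges each pick a nearby unsearched cell, then perform a cheap local check, deferring if a neighbour within communication range claims the same cell with a better score. The grid, ranges, scoring and tie-breaking are all invented for illustration.

```python
# Generic multi-agent search toy with a local "self-assessment" step.
import math, random

random.seed(0)
GRID, SENSOR_R, COMM_R = 20, 4.0, 6.0
unsearched = {(i, j) for i in range(GRID) for j in range(GRID)}
agents = [(random.uniform(0, GRID), random.uniform(0, GRID)) for _ in range(10)]

def choose_cell(pos):
    """Pick the closest unsearched cell within sensor range (score = -distance)."""
    options = [c for c in unsearched if math.dist(pos, c) <= SENSOR_R]
    return min(options, key=lambda c: math.dist(pos, c)) if options else None

choices = [choose_cell(p) for p in agents]
for i, (pos, cell) in enumerate(zip(agents, choices)):
    if cell is None:
        continue
    for j, (other_pos, other_cell) in enumerate(zip(agents, choices)):
        in_range = math.dist(pos, other_pos) <= COMM_R
        if (j != i and in_range and other_cell == cell
                and math.dist(other_pos, cell) < math.dist(pos, cell)):
            choices[i] = None                 # self-assess: yield the cell, re-plan later
            break

print("agents with a confirmed cell this step:", sum(c is not None for c in choices))
```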

Relevance:

30.00%

Publisher:

Abstract:

Processor architects have the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. In order to assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architecture parameters, using simulation-based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
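
The sketch below captures the model-selection part of such a loop in miniature: forward selection of micro-architecture parameters into a linear CPI model, scored by AIC (computed as n*log(RSS/n) + 2k for least squares). The synthetic "simulation" data and parameter names are assumptions, and the D-optimal choice of further simulation points is not shown.

```python
# AIC-guided forward selection of micro-architecture parameters for a linear CPI model.
import numpy as np

rng = np.random.default_rng(2)
params = ["rob_size", "l1d_size", "l2_size", "fetch_width", "branch_pred"]
X = rng.uniform(0, 1, size=(40, len(params)))               # 40 fake simulation runs
cpi = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.05, size=40)

def aic(cols):
    A = np.column_stack([np.ones(len(X))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, cpi, rcond=None)
    rss = np.sum((cpi - A @ beta) ** 2)
    return len(X) * np.log(rss / len(X)) + 2 * (len(cols) + 1)

chosen, best = [], aic([])
while True:
    candidates = [c for c in range(len(params)) if c not in chosen]
    scores = {c: aic(chosen + [c]) for c in candidates}
    c_best = min(scores, key=scores.get) if scores else None
    if c_best is None or scores[c_best] >= best:
        break                                               # no AIC improvement: stop
    chosen.append(c_best); best = scores[c_best]

print("selected parameters:", [params[c] for c in chosen])
```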

Relevance:

30.00%

Publisher:

Abstract:

Non-orthogonal space-time block codes (STBC) with large dimensions are attractive because they can simultaneously achieve both high spectral efficiency (the same spectral efficiency as V-BLAST for a given number of transmit antennas) and full transmit diversity. Decoding of non-orthogonal STBCs with large dimensions has been a challenge. In this paper, we present a reactive tabu search (RTS) based algorithm for decoding non-orthogonal STBCs from cyclic division algebras (CDA) having large dimensions. Under i.i.d. fading and perfect channel state information at the receiver (CSIR), our simulation results show that RTS-based decoding of the 12 x 12 STBC from CDA with 4-QAM and 288 real dimensions achieves i) 10^-3 uncoded BER at an SNR just 0.5 dB away from SISO AWGN performance, and ii) a coded BER performance within about 5 dB of the theoretical MIMO capacity, using a rate-3/4 turbo code at a spectral efficiency of 18 bps/Hz. RTS is shown to achieve near-SISO-AWGN performance with fewer dimensions than the LAS algorithm (which we reported recently), at some extra complexity compared to LAS. We also report good BER performance of RTS when the i.i.d. fading and perfect CSIR assumptions are relaxed, by considering a spatially correlated MIMO channel model and by using a training-based iterative RTS decoding/channel estimation scheme.
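
A stripped-down cousin of the detector is sketched below for intuition: plain tabu search over single-symbol flips on a small real-valued MIMO model with BPSK symbols, starting from a zero-forcing solution. The reactive tabu-length adaptation, the CDA STBC structure, 4-QAM and the large dimensions are not reproduced; sizes, noise level and tabu tenure are assumptions.

```python
# Minimal tabu-search detector for y = Hx + n with x in {+1, -1}^n.
import numpy as np

rng = np.random.default_rng(3)
n = 16                                             # number of real dimensions (toy)
H = rng.normal(size=(n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.normal(size=n)

def cost(x):
    return np.sum((y - H @ x) ** 2)

x = np.sign(np.linalg.lstsq(H, y, rcond=None)[0])  # zero-forcing initial solution
x[x == 0] = 1.0
best_x, best_c = x.copy(), cost(x)
tabu = {}                                          # symbol index -> iteration until which it is tabu

for it in range(200):
    moves = []
    for i in range(n):                             # neighbourhood: flip one symbol
        cand = x.copy(); cand[i] *= -1.0
        c = cost(cand)
        if tabu.get(i, -1) < it or c < best_c:     # aspiration: tabu move allowed if it beats the best
            moves.append((c, i, cand))
    if not moves:
        continue
    c, i, cand = min(moves, key=lambda m: m[0])
    x = cand
    tabu[i] = it + 5                               # fixed tenure (the reactive version adapts this)
    if c < best_c:
        best_x, best_c = x.copy(), c

print("symbol errors:", int(np.sum(best_x != x_true)))
```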

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we introduce the three-user cognitive radio channels with asymmetric transmitter cooperation, and derive achievable rate regions under several scenarios depending on the type of cooperation and decoding capability at the receivers. Two of the most natural cooperation mechanisms for the three-user channel are considered here: cumulative message sharing (CMS) and primary-only message sharing (PMS). In addition to the message sharing mechanism, the achievable rate region is critically dependent on the decoding capability at the receivers. Here, we consider two scenarios for the decoding capability, and derive an achievable rate region for each one of them by employing a combination of superposition and Gel'fand-Pinsker coding techniques. Finally, to provide a numerical example, we consider the Gaussian channel model to plot the rate regions. In terms of achievable rates, CMS turns out to be a better scheme than PMS. However, the practical aspects of implementing such message-sharing schemes remain to be investigated.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we address the problem of transmission of correlated sources over a fast fading multiple access channel (MAC) with partial channel state information available at both the encoders and the decoder. We provide sufficient conditions for transmission with given distortions. Next, these conditions are specialized to a Gaussian MAC (GMAC). We provide the optimal power allocation strategy and compare the strategy with various levels of channel state information.

Relevance:

30.00%

Publisher:

Abstract:

A study is presented which is aimed at developing techniques suitable for effective planning and efficient operation of fleets of aircraft typical of the air force of a developing country. An important aspect of fleet management, the problem of resource allocation for achieving a prescribed operational effectiveness of the fleet, is considered. For analysis purposes, it is assumed that the planes operate in a single flying-base, repair-depot environment. The perennial problem of resource allocation for fleet and facility buildup that faces planners is modeled and solved as an optimal control problem. The models contain two "policy" variables representing investments in aircraft and in repair facilities. The feasibility of decentralized control is explored by assuming that the two policy variables are under the control of two independent decision-makers guided by different, and often not well coordinated, objectives.
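
Purely as a toy of the two-policy-variable setup (not the paper's model), the sketch below splits a fixed investment rate between buying aircraft and building repair capacity, simulates a single-base/single-depot fleet, and brute-forces the constant split that maximizes accumulated flying availability over the horizon. All dynamics, rates and costs are invented.

```python
# Toy fleet/facility buildup with a constant investment split between two "policy" variables.
import numpy as np

def effectiveness(split, budget=10.0, T=100, dt=0.1, fail_rate=0.05, repair_gain=0.5):
    ready, broken, capacity = 20.0, 0.0, 2.0
    total = 0.0
    for _ in range(int(T / dt)):
        repairs = min(broken, capacity) * repair_gain
        failures = fail_rate * ready
        ready += dt * (split * budget / 5.0 + repairs - failures)   # aircraft cost 5 units each
        broken += dt * (failures - repairs)
        capacity += dt * (1 - split) * budget / 10.0                # repair capacity costs 10 units
        total += dt * ready                                         # proxy for operational effectiveness
    return total

splits = np.linspace(0, 1, 21)
best = max(splits, key=effectiveness)
print("best constant investment split to aircraft: %.2f" % best)
```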