915 results for random search algorithms
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
Abstract:
The legality of the operation of Google’s search engine, and its liability as an Internet intermediary, has been tested in various jurisdictions on various grounds. In Australia, there was an ultimately unsuccessful case against Google under the Australian Consumer Law relating to how it presents results from its search engine. Despite this failed claim, several complex issues were not adequately addressed in the case, including whether Google sufficiently distinguishes between the different parts of its search results page so as not to mislead or deceive consumers. This article seeks to address this question of consumer confusion by drawing on empirical survey evidence of Australian consumers’ understanding of Google’s search results layout. This evidence, the first of its kind in Australia, indicates some level of consumer confusion. The implications for future legal proceedings against Google in Australia and in other jurisdictions are discussed.
Abstract:
We describe a novel method for human activity segmentation and interpretation in surveillance applications based on Gabor filter-bank features. A complex human activity is modeled as a sequence of elementary human actions such as walking, running, jogging, boxing and hand-waving. Since the human silhouette can be modeled by a set of rectangles, the elementary human actions can be modeled as a sequence of sets of rectangles with different orientations and scales. The activity segmentation is based on Gabor filter-bank features and normalized spectral clustering. The feature trajectories of an action category are learnt from training example videos using dynamic time warping. The combined segmentation and recognition processes are very efficient, as both algorithms share the same framework and the Gabor features computed for the former can be used for the latter. We have also proposed a simple shadow detection technique to extract a good silhouette, which is necessary for good accuracy of an action recognition technique.
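As an illustration of the template-matching step mentioned in this abstract, the following is a minimal dynamic time warping (DTW) sketch; the Euclidean frame-to-frame cost and the nearest-template classification rule are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal DTW between two feature trajectories (e.g. per-frame Gabor features).
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a, seq_b: arrays of shape (T_a, d) and (T_b, d) of frame features."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame-to-frame cost
            cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
    return cost[n, m]

# Usage sketch: classify a segmented action by its nearest learned template.
# templates = {"walking": walk_feats, "running": run_feats, ...}
# label = min(templates, key=lambda k: dtw_distance(observed_feats, templates[k]))
```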
Abstract:
In this paper, we first describe a framework for modeling the sponsored search auction on the web as a mechanism design problem. Using this framework, we design a novel auction which we call the OPT (optimal) auction. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality for the advertisers. We show that the OPT mechanism is superior to two of the most commonly used mechanisms for sponsored search, namely (1) GSP (Generalized Second Price) and (2) VCG (Vickrey-Clarke-Groves). We then show an important revenue equivalence result: the expected revenue earned by the search engine is the same for all three mechanisms, provided the advertisers are symmetric and the number of sponsored slots is strictly less than the number of advertisers.
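For context, the two baseline mechanisms named in this abstract can be sketched as follows; the slot click-through rates, per-click prices and the worked example are illustrative assumptions, and the OPT auction itself (which requires the advertisers' value distributions) is not reproduced here.

```python
def gsp_payments(bids, ctrs):
    """GSP: each slot winner pays the next-highest bid per click."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    k = min(len(ctrs), len(order))
    return {order[s]: (bids[order[s + 1]] if s + 1 < len(order) else 0.0)
            for s in range(k)}

def vcg_payments(bids, ctrs):
    """VCG per-click payment: the externality a winner imposes on lower bidders."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    k = min(len(ctrs), len(order))
    pay, total = {}, 0.0            # 'total' is the total payment of the slot below
    for s in range(k - 1, -1, -1):
        next_bid = bids[order[s + 1]] if s + 1 < len(order) else 0.0
        next_ctr = ctrs[s + 1] if s + 1 < k else 0.0
        total += (ctrs[s] - next_ctr) * next_bid
        pay[order[s]] = total / ctrs[s]
    return pay

# Example: bids = [8, 5, 2], ctrs = [0.3, 0.1]
#   GSP per-click prices: bidder 0 pays 5, bidder 1 pays 2.
#   VCG per-click prices: bidder 0 pays (0.2*5 + 0.1*2)/0.3 = 4, bidder 1 pays 2.
```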
Abstract:
With the increasing adoption of wireless technology, it is reasonable to expect an increase in the demand for supporting both real-time multimedia and high-rate reliable data services. Next-generation wireless systems employ an Orthogonal Frequency Division Multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. Towards improving the performance of these systems, we look at the design of resource allocation algorithms at the medium-access (MAC) layer and their impact on higher layers. While TCP-based elastic traffic needs reliable transport, UDP-based real-time applications have stringent delay and rate requirements. The MAC algorithms, while catering to the heterogeneous service needs of these higher layers, trade off between maximizing the system capacity and providing fairness among users. The novelty of this work is the proposal of various channel-aware resource allocation algorithms at the MAC layer, which can result in significant performance gains in an OFDM-based wireless system.
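As a hedged illustration of a channel-aware MAC allocation of the kind discussed here (not the authors' specific algorithms), the sketch below assigns each OFDM subcarrier by a proportional-fair metric, one common point on the capacity/fairness trade-off; the rates, smoothing constant and per-slot model are assumptions.

```python
import numpy as np

def proportional_fair_allocation(rates, avg_throughput, eps=1e-9):
    """rates: (n_users, n_subcarriers) achievable rates this slot.
    Returns the chosen user index for each subcarrier."""
    metric = rates / (avg_throughput[:, None] + eps)   # rate / average throughput
    return metric.argmax(axis=0)

def update_throughput(avg_throughput, rates, assignment, tc=100.0):
    """Exponential smoothing of each user's served rate over time slots."""
    served = np.zeros_like(avg_throughput)
    for sc, u in enumerate(assignment):
        served[u] += rates[u, sc]
    return (1.0 - 1.0 / tc) * avg_throughput + (1.0 / tc) * served
```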
Abstract:
Randomness in the source condition, in addition to heterogeneity in the system parameters, can also be a major source of uncertainty in the concentration field. Hence, a more general form of the problem formulation is necessary to consider randomness in both the source condition and the system parameters. When the source varies with time, the unsteady problem can be solved using the unit response function. In the case of random system parameters, the response function becomes a random function and depends on the randomness in the system parameters. In the present study, the source is modelled as a random discrete process with either a fixed interval or a random interval (the Poisson process). An attempt is made to assess the relative effects of various types of source uncertainty on the probabilistic behaviour of the concentration in a porous medium while the system parameters are also modelled as random fields. Analytical expressions for the mean and covariance of the concentration due to a random discrete source are derived in terms of the mean and covariance of the unit response function. The probabilistic behaviour of the random response function is obtained by using a perturbation-based stochastic finite element method (SFEM), which performs well for mild heterogeneity. The proposed method is applied to both 1-D and 3-D solute transport problems. The results obtained with SFEM are compared with Monte Carlo simulation for the 1-D problems.
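The superposition behind the derivation summarized above can be written as follows; the notation (source strengths q_i released at times t_i, unit response h) and the independence of the source process and the system parameters are assumptions used only to state the moment expressions, not the paper's exact derivation.

```latex
% Sketch: concentration as a superposition of unit responses to discrete
% releases; h is random through the system parameters, q_i through the source.
c(\mathbf{x},t) = \sum_{i} q_i \, h(\mathbf{x}, t - t_i),
\qquad
\mathbb{E}[c(\mathbf{x},t)] = \sum_{i} \mathbb{E}[q_i]\,\mathbb{E}[h(\mathbf{x}, t - t_i)],
\\[4pt]
\operatorname{Cov}\!\big[c(\mathbf{x},t),\, c(\mathbf{x}',t')\big]
 = \sum_{i}\sum_{j}
 \Big( \mathbb{E}[q_i q_j]\,\mathbb{E}[h_i h'_j]
      - \mathbb{E}[q_i]\,\mathbb{E}[q_j]\,\mathbb{E}[h_i]\,\mathbb{E}[h'_j] \Big),
\\[4pt]
\text{where } h_i := h(\mathbf{x}, t - t_i), \quad h'_j := h(\mathbf{x}', t' - t_j).
```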
Abstract:
We consider the problem of quickest detection of an intrusion using a sensor network, keeping only a minimal number of sensors active. By using a minimal number of sensor devices, we ensure that the energy expenditure for sensing, computation and communication is minimized (and the lifetime of the network is maximized). We model the intrusion detection (or change detection) problem as a Markov decision process (MDP). Based on the theory of MDPs, we develop the following sleep/wake scheduling algorithms: (1) closed-loop optimal control of M_{k+1}, the number of sensors in the wake state in time slot k+1; (2) closed-loop optimal control of q_{k+1}, the probability of a sensor being in the wake state in time slot k+1; and (3) an open-loop algorithm that computes q, the optimal (time-invariant) probability of a sensor being in the wake state, based on the sensor observations obtained until time slot k. Our results show that optimum closed-loop control of M_{k+1} significantly decreases the cost compared to keeping any number of sensors active all the time. Also, among the three algorithms described, the total cost is lowest for the optimum control of M_{k+1} and highest for the optimum open-loop control of q.
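A hedged sketch of the closed-loop idea: after each slot, update a Bayesian change posterior from the awake sensors' observations, then decide how many sensors to wake next. The geometric change prior, Gaussian observation model and the simple posterior-proportional wake rule below are illustrative stand-ins, not the paper's optimal MDP policy.

```python
import numpy as np

def update_posterior(p, observations, rho=0.01, mu1=1.0, sigma=1.0):
    """One Shiryaev-style Bayesian update of P(change has occurred)."""
    prior = p + (1.0 - p) * rho                       # change may occur this slot
    # Likelihood ratio of post-change (mean mu1) vs pre-change (mean 0) model.
    lr = np.exp((observations * mu1 - 0.5 * mu1 ** 2) / sigma ** 2).prod()
    return prior * lr / (prior * lr + (1.0 - prior))

def sensors_to_wake(p, n_total=10):
    """Illustrative rule: wake more sensors as the change posterior grows."""
    return max(1, int(np.ceil(p * n_total)))

# Usage over time slots:
# p = 0.0
# for obs in stream_of_observation_vectors:   # observations from awake sensors
#     p = update_posterior(p, np.asarray(obs))
#     m_next = sensors_to_wake(p)
```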
Abstract:
In this paper, an approach for obtaining the depth and section modulus of a cantilever sheet pile wall using an inverse reliability method is described. The proposed procedure employs the inverse first-order reliability method to obtain the design penetration depth and section modulus of the steel sheet pile wall so that the reliability of the wall against the failure modes meets a desired level of safety. Sensitivity analysis is conducted to assess the effect of uncertainties in the design parameters on the reliability of cantilever sheet pile walls. The analysis is performed by treating the backfill soil properties, the depth of the water table from the top of the sheet pile wall, the yield strength of steel and the section modulus of the steel pile as random variables. Two limit states, viz. rotational and flexural failure of the sheet pile wall, are considered. The results of this approach are used to develop a set of reliability-based design charts for different coefficients of variation of the friction angle of the backfill (5%, 10% and 15%). System reliability considerations in terms of series and parallel systems are also studied.
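To illustrate the inverse-design idea (choose the penetration depth so that a target reliability index is met), the sketch below uses a crude Monte Carlo estimate of the failure probability with bisection on depth; this is a stand-in for the inverse first-order reliability method, and the limit-state function, input distributions and target beta are placeholders.

```python
import numpy as np
from scipy.stats import norm

def reliability_index(depth, g, sample_inputs, n=200_000, rng=None):
    """beta = -Phi^{-1}(P_f), with P_f estimated by Monte Carlo."""
    rng = np.random.default_rng(rng)
    x = sample_inputs(n, rng)                     # random soil / steel parameters
    p_f = np.mean(g(depth, x) < 0.0)              # failure when limit state g < 0
    p_f = min(max(p_f, 1.0 / n), 1.0 - 1.0 / n)   # keep Phi^{-1} finite
    return -norm.ppf(p_f)

def design_depth(g, sample_inputs, beta_target=3.0, lo=1.0, hi=15.0, tol=0.01):
    """Bisection on depth, assuming reliability grows with penetration depth."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reliability_index(mid, g, sample_inputs) >= beta_target:
            hi = mid
        else:
            lo = mid
    return hi
```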
Abstract:
This paper focuses on optimisation algorithms inspired by swarm intelligence for satellite image classification from high-resolution multispectral satellite images. Amongst the multiple benefits and uses of remote sensing, one of the most important has been its use in solving the problem of land cover mapping. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Image classification forms the core of the solution to the land cover mapping problem. No single classifier has proved able to satisfactorily classify all the basic land cover classes of an urban region. In both supervised and unsupervised classification methods, evolutionary algorithms are not exploited to their full potential. This work tackles land cover mapping using Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO), which are arguably the most popular algorithms in this category. We present the results of classification techniques using swarm intelligence for the problem of land cover mapping for an urban region. High-resolution QuickBird data have been used for the experiments.
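A minimal sketch of how PSO can drive unsupervised classification of multispectral pixels: each particle encodes a set of candidate class centres and the fitness is the total distance of pixels to their nearest centre. Parameter values and the fitness choice are illustrative, not those used in the paper; ACO would be sketched analogously over discrete pixel-to-class assignments.

```python
import numpy as np

def pso_cluster_centres(pixels, n_classes, n_particles=20, iters=100,
                        w=0.7, c1=1.5, c2=1.5, rng=None):
    """pixels: (n_pixels, n_bands). Returns (n_classes, n_bands) class centres."""
    rng = np.random.default_rng(rng)
    d = pixels.shape[1]
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    pos = rng.uniform(lo, hi, size=(n_particles, n_classes, d))
    vel = np.zeros_like(pos)

    def fitness(centres):
        # Sum of each pixel's distance to its nearest candidate centre.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        return dists.min(axis=1).sum()

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest          # pixels are then labelled by their nearest centre
```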
Abstract:
We propose several stochastic approximation implementations of related algorithms for flow control in communication networks. First, a discrete-time implementation of Kelly's primal flow-control algorithm is proposed. Convergence with probability 1 is shown, even in the presence of communication delays and stochastic effects in the link congestion indications. This follows from an analysis of the flow-control algorithm within the asynchronous stochastic approximation (ASA) framework. Two relevant enhancements are then pursued: (a) an implementation of the primal algorithm using second-order information, and (b) an implementation where edge routers rectify misbehaving flows. Next, discrete-time implementations of Kelly's dual algorithm and primal-dual algorithm are proposed. Simulation results (a) verifying the proposed algorithms and (b) comparing their stability properties are presented.
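A hedged discrete-time sketch of the primal update discussed here, using a diminishing step size as stochastic approximation requires; the link price function, routing matrix and step-size schedule are illustrative, and delays and asynchrony are omitted.

```python
import numpy as np

def primal_update(rates, routes, capacities, weights, step):
    """One synchronous update x_r <- x_r + a_k (w_r - x_r * q_r), kept positive.

    routes: binary matrix R[l, r] = 1 if route r uses link l.
    """
    link_loads = routes @ rates
    # Illustrative link price: grows as the load approaches capacity.
    prices = np.maximum(link_loads - 0.9 * capacities, 0.0) / capacities
    route_prices = routes.T @ prices                  # q_r: sum of prices on route r
    new_rates = rates + step * (weights - rates * route_prices)
    return np.maximum(new_rates, 1e-6)                # project onto positive rates

# Usage with diminishing steps a_k = 1/k:
# for k in range(1, 10_000):
#     rates = primal_update(rates, R, c, w, step=1.0 / k)
```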
Abstract:
The problem of identifying parameters of nonlinear vibrating systems using spatially incomplete, noisy, time-domain measurements is considered. The problem is formulated within the framework of dynamic state estimation formalisms that employ particle filters. The parameters of the system, which are to be identified, are treated as a set of random variables with a finite number of discrete states. The study develops a procedure that combines a bank of self-learning particle filters with a global iteration strategy to estimate the probability distribution of the system parameters to be identified. Individual particle filters are based on the sequential importance sampling filter algorithm that is readily available in the existing literature. The paper develops the requisite recursive formulation for evaluating the evolution of the weights associated with the system parameter states. The correctness of the formulation developed is demonstrated first by applying the proposed procedure to a few linear vibrating systems for which an alternative solution using the adaptive Kalman filter method is possible. Subsequently, illustrative examples on three nonlinear vibrating systems, using synthetic vibration data, are presented to demonstrate the correct functioning of the method.
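The weight recursion over discrete parameter states can be sketched as follows; the Gaussian innovation likelihood and the `filters` interface (each parameter hypothesis running its own filter with hypothetical `predict_measurement` and `assimilate` methods) are illustrative names, not the paper's formulation.

```python
import numpy as np

def update_parameter_weights(weights, filters, y):
    """One recursive update of P(parameter state j | measurements so far)."""
    likelihoods = np.empty(len(filters))
    for j, f in enumerate(filters):
        y_pred, s = f.predict_measurement()       # innovation mean and variance (assumed API)
        likelihoods[j] = np.exp(-0.5 * (y - y_pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
        f.assimilate(y)                           # advance this hypothesis' filter
    new_w = weights * likelihoods                 # multiply prior weight by likelihood
    return new_w / new_w.sum()                    # renormalise across parameter states
```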
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k - 1)-approximation algorithms with expected running time O(k·m·n^(1+2/k) + m·n^((1+1/k)(ω-1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √(n log n)) expected running time, a linear-time 2-approximation algorithm for planar graphs and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
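For terminology only: the fundamental cycles of any spanning tree already form a cycle basis (one cycle per non-tree edge). The baseline below, using networkx, is generally far from minimum weight and is not one of the approximation algorithms of this abstract.

```python
import networkx as nx

def fundamental_cycle_basis(G):
    """Return one cycle (as an edge list) per non-tree edge of a spanning tree."""
    tree = nx.minimum_spanning_tree(G)            # any spanning tree works here
    basis = []
    for u, v in G.edges():
        if not tree.has_edge(u, v):
            path = nx.shortest_path(tree, u, v)   # tree path that closes the cycle
            cycle = list(zip(path, path[1:])) + [(v, u)]
            basis.append(cycle)
    return basis                                  # m - n + 1 cycles for connected G
```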
Abstract:
In this paper, we consider robust joint linear precoder/receive filter designs for multiuser multi-input multi-output (MIMO) downlink that minimize the sum mean square error (SMSE) in the presence of imperfect channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. We consider a stochastic error (SE) model and a norm-bounded error (NBE) model for the CSIT error. In the case of CSIT error following SE model, we compute the desired downlink precoder/receive filter matrices by solving the simpler uplink problem by exploiting the uplink-downlink duality for the MSE region. In the case of the CSIT error following the NBE model, we consider the worst-case SMSE as the objective function, and propose an iterative algorithm for the robust transceiver design. The robustness of the proposed algorithms to imperfections in CSIT is illustrated through simulations.
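As a reference point, the non-robust building block of such designs, the linear MMSE receive filter for a fixed downlink precoder, can be sketched as follows; the shapes, noise model and single-user view are illustrative assumptions, and the robust SE/NBE designs of the paper modify this step rather than use it directly.

```python
import numpy as np

def mmse_receive_filter(H_k, B, sigma2):
    """H_k: (Nr x Nt) channel of user k, B: (Nt x S) overall precoder.
    Returns the (S x Nr) linear MMSE filter W = (H B)^H (H B (H B)^H + sigma2 I)^{-1}."""
    HB = H_k @ B
    cov = HB @ HB.conj().T + sigma2 * np.eye(H_k.shape[0])   # received covariance
    return HB.conj().T @ np.linalg.inv(cov)
```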
Abstract:
By detecting leading protons produced in the central exclusive diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision measurement of the leading proton momentum and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the necessary radiation-hard precision detector technology for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which, in addition to radiation hardness of up to 5×10^15 neutrons/cm², offers properties such as a high signal-to-noise ratio, a fast signal response to radiation and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but retains the properties of the 3D detector design required at the LHC and in other imaging applications.