51 results for test case optimization
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
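The second approach lends itself to a compact illustration: each node quantizes its own reading to one bit against a threshold, and a fusion center applies a counting rule. The sketch below is a toy version only; the threshold `tau`, the counting rule `k`, and the noise model are invented, not the paper's optimized choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(readings, tau, k):
    bits = (readings > tau).astype(int)  # two-level quantization at each node
    return bits.sum() >= k               # fusion rule: intruder if at least k ones

n_sensors = 10
intruder_present = True
signal = 1.0 if intruder_present else 0.0   # only nearby sensors would see this
readings = signal + rng.normal(0, 0.5, n_sensors)
print("intruder" if detect(readings, tau=0.5, k=5) else "clutter")
```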
Abstract:
In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points (APs) to connect to. In current systems, clients associate with the access point with the strongest signal-to-noise ratio. However, such an association mechanism can lead to unequal load sharing, resulting in diminished system performance. In this paper, we first provide a numerical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm for a small topology consisting of two access points. Using the value iteration algorithm, we determine the optimal association rule for the two-AP topology. Next, utilizing the insights obtained from the optimal association rule for the two-AP case, we propose a near-optimal heuristic that we call RAT. We test the efficacy of RAT by considering more realistic arrival patterns and a larger topology. Our results show that RAT performs very well in these scenarios as well. Moreover, RAT lends itself to a fairly simple implementation.
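Value iteration itself is standard; a generic sketch follows, with a made-up toy MDP standing in for the paper's two-AP model (the states, transitions, and rewards here are placeholders).

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a][s, s'] are per-action transition matrices, R[s, a] are rewards."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V(s')
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # value function and greedy policy
        V = V_new

# toy 3-state, 2-action MDP (placeholder, not the two-AP system)
P = [np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]]),
     np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])]
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
V, policy = value_iteration(P, R)
print(V, policy)
```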
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on an abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes are referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables x and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well.

A second solution procedure, called the total residue approach, has been embedded into the first one. It modifies the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as in the non-distributed (unitary) case.
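The abstract does not spell out the two-stage procedure; as a rough illustration of the same machinery, the sketch below applies a joint (rather than two-stage) Newton iteration to the Lagrangian optimality conditions of a toy equality-constrained problem. All functions here are invented stand-ins for a network model.

```python
import numpy as np

def lagrange_newton(f_grad, f_hess, g, g_jac, x0, iters=10):
    """Newton iteration on the first-order conditions of min f(x) s.t. g(x) = 0."""
    x = x0.astype(float)
    lam = np.zeros(len(g(x)))
    for _ in range(iters):
        H, J = f_hess(x), g_jac(x)
        # KKT system: [H J^T; J 0] [dx; dlam] = -[grad_x L; g(x)]
        kkt = np.block([[H, J.T], [J, np.zeros((J.shape[0], J.shape[0]))]])
        rhs = -np.concatenate([f_grad(x) + J.T @ lam, g(x)])
        step = np.linalg.solve(kkt, rhs)
        x += step[:len(x)]
        lam += step[len(x):]
    return x, lam

# toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1
x, lam = lagrange_newton(
    f_grad=lambda x: 2 * x,
    f_hess=lambda x: 2 * np.eye(2),
    g=lambda x: np.array([x[0] + x[1] - 1.0]),
    g_jac=lambda x: np.array([[1.0, 1.0]]),
    x0=np.zeros(2))
print(x, lam)   # expect x = [0.5, 0.5]
```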
Abstract:
Based on a method proposed by Reddy and Daum, the equations governing the steady inviscid nonreacting gasdynamic laser (GDL) flow in a supersonic nozzle are reduced to a universal form so that the solutions depend on a single parameter which combines all the other parameters of the problem. Solutions are obtained for a sample case of available data and compared with existing results to validate the present approach. Also, similar solutions for a sample case are presented.
Abstract:
Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to a register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a high number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components in clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split decoder design and propose a new energy-aware instruction scheduling algorithm that provides 14.5% and 17.3% benefit in decoder power consumption on average over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines. In the case of register files, we propose two new scheduling algorithms that exploit limited register snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with performance improvements of 4.81% and 5.34% (9.39% and 11.16%), over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine.
Abstract:
In this paper, we present a novel analytical formulation for the coupled partial differential equations governing electrostatically actuated constrained elastic structures of inhomogeneous material composition. We also present a computationally efficient numerical framework for solving the coupled equations over a reference domain with a fixed finite-element mesh. This serves two purposes: (i) a series of problems with varying geometries and piece-wise homogeneous and/or inhomogeneous material distributions can be solved with a single pre-processing step; (ii) topology optimization methods can be easily implemented by interpolating the material at each point in the reference domain from a void to a dielectric or a conductor. This is attained by considering the steady-state electrical current conduction equation with a 'leaky capacitor' model instead of the usual electrostatic equation. This formulation is amenable to both static and transient problems in the elastic domain coupled with the quasi-electrostatic electric field. The procedure is numerically implemented on the COMSOL Multiphysics (R) platform using the weak variational form of the governing equations. Examples are presented to show the accuracy and versatility of the scheme. The accuracy of the scheme is validated for the special case of piece-wise homogeneous material in the limit of the leaky-capacitor model approaching the ideal case.
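For orientation, the 'leaky capacitor' idea can be written, under standard quasi-electrostatic assumptions (notation mine, not necessarily the paper's exact form), as a single conduction-plus-displacement equation for the potential $\phi$:

$$\nabla \cdot \left[ (\sigma + \mathrm{j}\omega\varepsilon)\, \nabla \phi \right] = 0,$$

which recovers the ideal electrostatic equation $\nabla \cdot (\varepsilon \nabla \phi) = 0$ when conduction is negligible ($\sigma \ll \omega\varepsilon$) and steady current conduction $\nabla \cdot (\sigma \nabla \phi) = 0$ in the opposite limit.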
Abstract:
A connectionist approach to global optimization is proposed and tested on a standard set of benchmark functions. Results obtained on large-scale problems indicate excellent scalability of the proposed approach.
Abstract:
This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP), which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state of the art in many cases.
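The Bernstein-based relaxation itself is not reproduced here; as a minimal sketch of the general recipe (a probabilistic classification constraint relaxed to a second-order cone constraint), the following solves a Chebyshev-style robust soft-margin classifier with cvxpy. The data, the common noise factor `S`, and the safety factor `kappa` are all invented for illustration.

```python
import cvxpy as cp
import numpy as np

# hypothetical toy data: two Gaussian blobs, labels in {-1, +1}
rng = np.random.default_rng(0)
n, d = 40, 2
X = np.vstack([rng.normal(+2, 0.5, (n // 2, d)), rng.normal(-2, 0.5, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])
S = 0.3 * np.eye(d)      # assumed common covariance factor of the feature noise
kappa = 1.5              # safety factor implied by the chosen probability bound

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)     # slack variables for the soft margin
# each uncertain point must sit kappa * ||S w|| beyond the nominal margin
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi + kappa * cp.norm(S @ w, 2)]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + 10.0 * cp.sum(xi)), constraints)
prob.solve()
print("margin direction:", w.value, "offset:", b.value)
```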
Abstract:
Abundant quantities of fly ash have been produced by thermal power plants situated all over the world. Many applications of fly ash depend upon its pozzolanic reactivity. This reactivity depends upon many factors, including lime content. Many fly ashes show marked improvement with the addition of lime. However, for every fly ash there is an optimum lime content for its maximum reactivity. There is no well-established simple test to determine the optimum lime content. In this paper an attempt is made to use simple physical and physico-chemical tests to determine the optimum lime content. The principle behind the use of a pH test, liquid limit test, and free swell index test to determine the optimum lime content is explained. All the methods predict nearly the same optimum lime content and correlate well with that determined by the strength test.
Abstract:
We study lazy structure sharing as a tool for optimizing equivalence testing on complex data types. We investigate a number of strategies for implementing lazy structure sharing and provide upper and lower bounds on their performance (how quickly they effect ideal configurations of our data structure). In most cases, when the strategies are applied to a restricted case of the problem, the bounds provide nontrivial improvements over the naive linear-time equivalence-testing strategy that employs no optimization. Only one strategy, however, which employs path compression, seems promising for the most general case of the problem.
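Path compression is the same trick used in union-find structures; below is a minimal sketch of that primitive (not the paper's lazy-structure-sharing data structure, which operates on complex types).

```python
class UnionFind:
    """Union-find with path compression for fast equivalence testing."""

    def __init__(self, n: int):
        self.parent = list(range(n))

    def find(self, x: int) -> int:
        # path compression: point every node on the walk directly at the root
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a: int, b: int) -> None:
        self.parent[self.find(a)] = self.find(b)

    def equivalent(self, a: int, b: int) -> bool:
        # two values are equivalent iff they share a representative
        return self.find(a) == self.find(b)
```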
Abstract:
Full-scale test embankments, with and without geotextile reinforcement, were constructed on soft Bangkok clay. The performance of these embankments is evaluated and compared on the basis of field measurements and FEM analysis. Failure mechanisms were also analyzed and embankment stability investigated under undrained conditions to determine the critical embankment height and the corresponding geotextile strain. The high-strength geotextile can reduce the plastic deformation in the underlying foundation soil, increase the collapse height of the embankment on soft ground, and produce a two-step failure mechanism. In this case study, the critical strain in the geotextile corresponding to the primary failure of foundation soils may be taken as 2.5-3%, irrespective of the geotextile reinforcement stiffness.
Abstract:
With deregulation, the total transfer capability (TTC) calculation, which is the basis for evaluating available transfer capability (ATC), has become very significant. TTC is an important index in power markets with a large volume of inter-area power exchanges and wheeling transactions taking place on an hourly basis. Its computation helps to achieve viable technical and commercial transmission operation. The aim of the paper is to evaluate TTC in the interconnections and also to improve it using a reactive optimization technique and UPFC devices. Computations are carried out for normal and contingency cases such as single line, tie line, and generator outages. Base and optimized results are presented, and the results show how reactive optimization and the unified power flow controller help to improve the system conditions. In this paper, the repeated power flow method is used to calculate TTC due to its ease of implementation. A case study is carried out on a 205-bus equivalent system, part of the Indian Southern grid. Parameters such as voltage magnitude, L-index, minimum singular value, and MW losses are computed to analyze the system performance.
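Repeated power flow, in outline, ramps a source-to-sink transfer until some operating limit binds. The sketch below conveys the loop only; `solve_power_flow`, the case object, and the specific limit checks are hypothetical placeholders, not from the paper.

```python
def total_transfer_capability(base_case, step_mw=10.0, v_min=0.95, v_max=1.05):
    """Increase the transfer in steps until a limit is violated; return TTC."""
    transfer = 0.0
    case = base_case.copy()
    while True:
        trial = case.scale_transfer(transfer + step_mw)  # bump source gen, sink load
        solution = solve_power_flow(trial)               # hypothetical NR solver
        if (not solution.converged
                or any(v < v_min or v > v_max for v in solution.bus_voltages)
                or any(f > lim for f, lim in zip(solution.line_flows,
                                                 solution.line_limits))):
            return transfer          # last secure transfer level is the TTC
        transfer += step_mw
```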
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
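Constraint sampling in miniature: the sketch below replaces a continuum of half-plane constraints a(theta) . x <= b(theta) with N sampled ones and solves the resulting LP with scipy. The constraint family `a`, `b` and the objective are invented stand-ins for the mechanism-design problem, and the sample count N is what controls "near-feasibility".

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def a(theta):            # hypothetical constraint normal for reported type theta
    return np.array([np.cos(theta), np.sin(theta)])

def b(theta):            # hypothetical right-hand side
    return 1.0

N = 200                  # number of sampled types
thetas = rng.uniform(0, 2 * np.pi, N)
A_ub = np.vstack([a(t) for t in thetas])
b_ub = np.array([b(t) for t in thetas])

c = np.array([-1.0, -1.0])   # maximize x1 + x2 over the sampled feasible set
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-2, 2), (-2, 2)])
print(res.x, -res.fun)
```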
Abstract:
Swarm intelligence techniques such as particle swarm optimization (PSO) are shown to be incompetent for accurate estimation of global solutions in several engineering applications. This problem is more severe in the case of inverse optimization problems, where fitness calculations are computationally expensive. In this work, a novel strategy is introduced to alleviate this problem. The proposed inverse model, based on a modified particle swarm optimization algorithm, is applied to a contaminant transport inverse model. The inverse models based on the standard PSO and the proposed PSO are validated to estimate the accuracy of the models. The proposed model is shown to outperform the standard one in terms of accuracy in parameter estimation. The preliminary results obtained using the proposed model are presented in this work.
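For reference, a minimal standard PSO loop is sketched below; the paper's modified variant and the contaminant-transport fitness function are not reproduced, and `sphere` is a stand-in objective.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))   # placeholder fitness function

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, f(gbest)

best_x, best_f = pso(sphere)
print(best_f)
```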
Abstract:
Due to semiconductor technology development, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that include bus communication overhead for both voting (active replication) and rollback recovery with checkpointing (RRC). And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) defining the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.
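The paper's AET formulas, which include bus overhead, are not reproduced; the toy model below only illustrates the checkpoint-count trade-off that the first ILP optimizes, under assumptions of my own: Poisson transient errors at rate `lam`, a fixed checkpoint cost `c`, and geometric retries of a failed segment.

```python
import math

def aet_rrc(T, n, c, lam):
    """Expected time for a job of length T split into n checkpointed segments.

    Each segment costs T/n plus checkpoint overhead c, hits a transient error
    with probability p = 1 - exp(-lam * T / n), and is rolled back and retried
    until it runs clean (expected 1 / (1 - p) attempts).
    """
    seg = T / n + c
    p = 1.0 - math.exp(-lam * T / n)
    return n * seg / (1.0 - p)

# few checkpoints -> long rollbacks; many -> overhead dominates; sweep for best n
best_n = min(range(1, 31), key=lambda n: aet_rrc(T=100.0, n=n, c=1.0, lam=0.05))
print(best_n, aet_rrc(100.0, best_n, 1.0, 0.05))
```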