895 results for Anchoring heuristic
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level optimization problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, yet tight time-to-market constraints enforce a short design cycle. In this paper we address this multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, memory architecture exploration is performed by picking memory modules directly from an ASIC memory library. This allows the exploration to proceed in an integrated framework, where memory allocation, memory exploration and data layout work in a tightly coupled way to yield design points that are optimal with respect to area, power and performance. We experimented with our approach on three embedded applications; for each application it explores several thousand memory architectures, yielding a few hundred optimal design points within a few hours of computation time on a standard desktop.
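The two-level structure described above can be pictured with a small sketch. The Python fragment below is illustrative only and is not the paper's genetic algorithm or data layout heuristic: LIBRARY, SECTIONS and all cost figures are made up, the outer level is a plain enumeration standing in for the GA, and the inner level is a naive greedy placement. It is meant only to show how an outer architecture search, an inner data layout pass and Pareto filtering over (area, power, performance) fit together.

```python
# Illustrative sketch (not the paper's algorithm): a two-level exploration where an
# outer search proposes memory architectures from a small "library" and an inner
# greedy heuristic assigns data sections to banks to reduce estimated stalls.
import random
from itertools import combinations_with_replacement

LIBRARY = [  # hypothetical ASIC memory modules: (size_kb, area, energy, latency_cycles)
    (4, 1.0, 0.5, 1), (16, 3.5, 1.2, 2), (64, 12.0, 3.5, 4),
]
SECTIONS = [("sec%d" % i, random.randint(1, 32), random.random()) for i in range(20)]
# each data section: (name, size_kb, access_weight)

def inner_data_layout(banks):
    """Greedy layout: place hot sections into the fastest bank that still has room."""
    free = [size for size, *_ in banks]
    stalls = 0.0
    for name, size, weight in sorted(SECTIONS, key=lambda s: -s[2]):
        order = sorted(range(len(banks)), key=lambda b: banks[b][3])  # fastest first
        for b in order:
            if free[b] >= size:
                free[b] -= size
                stalls += weight * banks[b][3]
                break
        else:
            stalls += weight * 100  # penalty: section spills to slow external memory
    return stalls

def evaluate(arch):
    area = sum(m[1] for m in arch)
    energy = sum(m[2] for m in arch)
    return (area, energy, inner_data_layout(arch))

def pareto(points):
    def dominates(a, b):
        return all(x <= y for x, y in zip(a[0], b[0])) and a[0] != b[0]
    return [p for p in points if not any(dominates(q, p) for q in points)]

# outer level: enumerate small architectures (a GA would search this space instead)
candidates = [list(c) for n in (1, 2, 3)
              for c in combinations_with_replacement(LIBRARY, n)]
front = pareto([(evaluate(a), a) for a in candidates])
for (area, energy, stalls), arch in front[:5]:
    print("area=%.1f energy=%.1f stalls=%.1f banks=%s" % (area, energy, stalls, arch))
```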
Abstract:
In this paper, we consider the problem of associating wireless stations (STAs) with an access network served by a wireless local area network (WLAN) and a 3G cellular network. There is a set of WLAN Access Points (APs) and a set of 3G Base Stations (BSs), and a number of STAs, each of which needs to be associated with one of the APs or one of the BSs. We concentrate on downlink bulk elastic transfers. Each association provides each STA with a certain transfer rate. We evaluate an association on the basis of the sum log utility of the transfer rates and seek the utility-maximizing association. We also obtain the optimal time scheduling of service from a 3G BS to its associated STAs. We propose a fast iterative heuristic algorithm to compute an association. Numerical results show that our algorithm converges in a few steps, yielding an association that is within 1% (in objective value) of the optimum obtained through exhaustive search; in most cases the algorithm yields an optimal solution.
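As a rough illustration of the objective described above, the following sketch (not the paper's algorithm) evaluates an association by its sum-log utility and improves it with a one-STA-at-a-time reassignment pass; the equal time-sharing rate model, node names and rate values are assumptions.

```python
# Minimal sketch (not the paper's algorithm): evaluate an STA-to-AP/BS association by
# its sum-log utility and improve it with single-STA reassignments until no strict gain.
import math, random

N_STA = 12
ACCESS_POINTS = ["AP1", "AP2", "BS1"]
peak = {"AP1": 20.0, "AP2": 20.0, "BS1": 8.0}  # hypothetical peak rates in Mb/s

def rates(assoc):
    """Assumed rate model: each STA gets an equal time share of its node's peak rate."""
    load = {a: assoc.count(a) for a in ACCESS_POINTS}
    return [peak[a] / load[a] for a in assoc]

def utility(assoc):
    return sum(math.log(r) for r in rates(assoc))

random.seed(0)
assoc = [random.choice(ACCESS_POINTS) for _ in range(N_STA)]
improved = True
while improved:                      # repeat single-STA reassignments until no strict gain
    improved = False
    for i in range(N_STA):
        for a in ACCESS_POINTS:
            if a == assoc[i]:
                continue
            cand = assoc[:i] + [a] + assoc[i + 1:]
            if utility(cand) > utility(assoc) + 1e-9:
                assoc, improved = cand, True
print("association:", assoc, "utility: %.3f" % utility(assoc))
```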
Abstract:
In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using enhanced distributed channel access (EDCA). We build upon the fixed-point analysis and its performance insights. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from the saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Reducing the number of voice calls then increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
Abstract:
We consider the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modelled as batch processing machines. Most studies assume that ready times and due dates of jobs are agreeable (i.e., r_i < r_j implies d_i ≤ d_j). In many real-world applications, the agreeable property does not hold. Therefore, in this paper, the scheduling of a single burn-in oven with non-agreeable release times and due dates, non-identical job sizes and non-identical processing times is formulated as a non-linear (0-1) integer programming optimisation problem. The objective is to minimise the maximum completion time (makespan) of all jobs. Due to computational intractability, we propose four variants of a two-phase greedy heuristic algorithm. Computational experiments indicate that two of the four proposed algorithms have excellent average performance and are also capable of solving large-scale real-life problems with relatively low computational effort on a Pentium IV computer.
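A minimal sketch of how a two-phase greedy rule for such a batch machine might look is given below; it is not one of the paper's four variants. The job data, the ordering key and the packing rule are assumptions, chosen only to show the release-time and longest-job-in-batch semantics of burn-in batching.

```python
# Illustrative sketch (not the paper's heuristics): a simple two-phase greedy rule for a
# single batch machine with job sizes, release times and non-identical processing times.
# Phase 1 orders jobs; phase 2 packs them into capacity-feasible batches. The batch
# processing time is the longest processing time in the batch (burn-in semantics).
from collections import namedtuple

Job = namedtuple("Job", "name size release proc")
CAPACITY = 10
jobs = [Job("J1", 4, 0, 6), Job("J2", 3, 2, 4), Job("J3", 6, 1, 8),
        Job("J4", 2, 5, 3), Job("J5", 5, 4, 7)]

def greedy_makespan(jobs, capacity):
    batches, current, used = [], [], 0
    for job in sorted(jobs, key=lambda j: (j.release, j.proc)):   # phase 1: ordering
        if used + job.size > capacity:                            # phase 2: packing
            batches.append(current)
            current, used = [], 0
        current.append(job)
        used += job.size
    if current:
        batches.append(current)
    t = 0.0
    for batch in batches:
        start = max(t, max(j.release for j in batch))   # wait for the latest arrival
        t = start + max(j.proc for j in batch)          # batch runs as long as its longest job
    return t, batches

makespan, batches = greedy_makespan(jobs, CAPACITY)
print("makespan:", makespan, "batches:", [[j.name for j in b] for b in batches])
```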
Crystallization and preliminary X-ray diffraction studies of sortase A from Streptococcus pneumoniae
Abstract:
Sortases are cell-membrane-anchored cysteine transpeptidases that are essential for the assembly and anchoring of cell-surface adhesins in Gram-positive bacteria. Thus, they play critical roles in virulence, infection and colonization by pathogens. Sortases have been classified into four types based on their primary sequence and the target-protein motifs that they recognize. All Gram-positive bacteria express a class A housekeeping sortase (SrtA). Sortase A from Streptococcus pneumoniae (NP_358691) has been crystallized in two crystal forms. Diamond-shaped crystals of ΔN59SrtA diffracted to 4.0 Å resolution and belonged to a tetragonal system with unit-cell parameters a = b = 122.8, c = 86.5 Å, α = β = γ = 90°, while rod-shaped crystals of ΔN81SrtA diffracted to 2.91 Å resolution and belonged to the monoclinic space group P2₁ with unit-cell parameters a = 66.8, b = 103.47, c = 74.79 Å, α = γ = 90°, β = 115.65°. The Matthews coefficient (V_M = 2.77 Å³ Da⁻¹) with ~56% solvent content suggested the presence of four molecules in the asymmetric unit for ΔN81SrtA. Also, a multi-copy search using a monomer as a probe in the molecular-replacement method resulted in the successful location of four sortase molecules in the asymmetric unit, with statistics R = 41.61, R_free = 46.44, correlation coefficient (CC) = 64.31, CC_free = 57.67.
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilising characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail.
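The support-size selection idea can be pictured schematically. The fragment below is an assumption-laden illustration, not the paper's formulation: it scores candidate support sizes at a point by the total variation of the acceleration signal about its local mean (normalised by window size, an added assumption) and keeps the minimiser.

```python
# Schematic sketch (assumptions, not the paper's formulation): pick, at each particle, a
# correction-kernel support size h from a candidate set by minimising the total variation
# of the acceleration signal about its local mean within a window of radius h.
import numpy as np

def local_total_variation(accel, x, centre, h):
    """Total variation of accel about its mean over |x - centre| <= h,
    normalised by the number of samples (the normalisation is an added assumption)."""
    idx = np.where(np.abs(x - centre) <= h)[0]
    if idx.size < 3:
        return np.inf
    window = accel[idx] - accel[idx].mean()
    return np.abs(np.diff(window)).sum() / idx.size

def optimal_support(accel, x, centre, candidates):
    """Return the candidate support size with the smallest local total variation."""
    return min(candidates, key=lambda h: local_total_variation(accel, x, centre, h))

# toy 1D example: a smooth field with superposed high-frequency noise
x = np.linspace(0.0, 1.0, 401)
accel = np.sin(2 * np.pi * x) + 0.05 * np.sin(200 * np.pi * x)
h_opt = optimal_support(accel, x, centre=0.5, candidates=[0.01, 0.02, 0.05, 0.1])
print("selected support size:", h_opt)
```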
Abstract:
In this paper, we propose a novel heuristic approach to segment recognizable symbols from online Kannada word data and to recognize the entire word. Two different estimates of the first derivative are extracted from the preprocessed stroke groups and used as features for classification. Estimate 2 proved better, resulting in 88% accuracy, which is 3% more than that achieved with estimate 1. Classification is performed by a statistical dynamic space warping (SDSW) classifier which uses X, Y coordinates and their first derivatives as features. The classifier is trained with data from 40 writers. 295 classes are handled, covering Kannada aksharas, Kannada numerals, Indo-Arabic numerals, punctuation and other special symbols such as $ and #. Classification accuracies obtained are 88% at the akshara level and 80% at the word level, which shows the scope for further improvement in the segmentation algorithm.
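The first-derivative features mentioned above can be illustrated with a small sketch; the paper's two specific derivative estimates are not reproduced here, so forward and central differences are used as stand-ins.

```python
# Illustrative sketch (the paper's exact derivative estimates are not specified here):
# two simple ways to estimate the first derivative of online-handwriting coordinates,
# e.g. forward differences versus central differences, appended as extra features.
import numpy as np

def forward_diff(coords):
    """Estimate 1 (assumed): forward difference, repeating the last value."""
    d = np.diff(coords, axis=0)
    return np.vstack([d, d[-1]])

def central_diff(coords):
    """Estimate 2 (assumed): central difference via np.gradient."""
    return np.gradient(coords, axis=0)

# toy stroke: (x, y) samples of a pen trajectory
stroke = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [2.5, 3.0], [2.5, 4.0]])
features = np.hstack([stroke, central_diff(stroke)])   # [x, y, dx, dy] per sample
print(features)
```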
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, the computations that we perform using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error bounds (and not the absolute error) which is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to any or all of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz. the amount of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence and the cost is discussed.
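A tiny worked example of the relative-error-bound viewpoint (not taken from the talk): carrying the quoted 0.005 per cent minimum instrument error through a product, using the first-order rule that relative error bounds add under multiplication.

```python
# Small worked illustration (not from the talk): relative error bounds versus absolute
# error. A measured quantity with the quoted minimum instrument error of 0.005 per cent
# is carried through a simple computation, and first-order bounds are reported.
def relative_error_bound_product(rel_errs):
    """First-order bound: relative error bounds add under multiplication/division."""
    return sum(rel_errs)

r = 0.005 / 100             # 0.005 per cent expressed as a fraction
length, width = 2.50, 1.20  # measured values (arbitrary example)
area = length * width
rel_bound = relative_error_bound_product([r, r])
print("area = %.4f, relative error bound ~ %.3e, absolute bound ~ %.3e"
      % (area, rel_bound, rel_bound * area))
```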
Abstract:
In this paper, we address a scheduling problem for minimizing total weighted flowtime, observed in automobile gear manufacturing. Specifically, the scheduling of the bottleneck operation of the pre-heat-treatment stage of the gear manufacturing process is addressed. Many real-life features, such as unequal release times, sequence-dependent setup times and machine eligibility restrictions, have been considered. A mathematical model taking into account dynamic starting conditions has been proposed. The problem is shown to be NP-hard. To approach the problem, a few heuristic algorithms have been proposed. Based on planned computational experiments, the performance of the proposed heuristic algorithms is evaluated (a) in comparison with the optimal solution for small-size problem instances and (b) in comparison with the estimated optimal solution for large-size problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently yield solutions close to the statistically estimated optima in reasonable computational time.
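A simple dispatching sketch in the spirit of the problem described above follows; it is not one of the paper's heuristics. The job data, setup times and the WSPT-like priority rule are assumptions, included only to show how release times, machine eligibility and sequence-dependent setups enter a list-scheduling loop.

```python
# Illustrative dispatching sketch (not the paper's heuristics): schedule jobs on parallel
# machines to reduce total weighted flowtime, respecting release times, machine
# eligibility and sequence-dependent setups, using a WSPT-like priority rule.
import itertools

jobs = {  # name: (release, proc, weight, eligible machines)  -- made-up data
    "G1": (0, 5, 3, {"M1", "M2"}), "G2": (2, 3, 1, {"M1"}),
    "G3": (1, 4, 2, {"M2"}),       "G4": (0, 6, 5, {"M1", "M2"}),
}
setup = {(a, b): 1 for a, b in itertools.product(list(jobs) + [None], list(jobs))}

machines = {"M1": (0, None), "M2": (0, None)}  # (time machine becomes free, last job)
unscheduled, total_wf = set(jobs), 0.0
while unscheduled:
    best = None
    for j in unscheduled:
        r, p, w, elig = jobs[j]
        for m in elig:
            free, last = machines[m]
            start = max(free, r) + setup[(last, j)]
            finish = start + p
            prio = w / p                      # WSPT-like: heavier, shorter jobs first
            cand = (-prio, finish, j, m)
            if best is None or cand < best:
                best = cand
    _, finish, j, m = best
    total_wf += jobs[j][2] * (finish - jobs[j][0])   # weighted flowtime of job j
    machines[m] = (finish, j)
    unscheduled.remove(j)
    print("job %s on %s finishes at %d" % (j, m, finish))
print("total weighted flowtime:", total_wf)
```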
Abstract:
A single-step magnetic separation procedure that can remove both organic pollutants and arsenic from contaminated water is clearly a desirable goal. Here we show that water-dispersible magnetite nanoparticles, prepared by anchoring carboxymethyl-beta-cyclodextrin (CMCD) cavities to the surface of magnetic nanoparticles, are suitable host carriers for such a process. Monodisperse, 10 nm, spherical magnetite (Fe3O4) nanocrystals were prepared by the thermal decomposition of FeOOH. Trace amounts of the antiferromagnet FeO present in the particles provide an exchange bias field that results in a high superparamagnetic blocking temperature and appreciable magnetization values, facilitating easy separation of the nanocrystals from aqueous dispersions on application of modest magnetic fields. We show here that small molecules such as naphthalene and naphthol can be removed from aqueous media by forming inclusion complexes with the anchored cavities of the CMCD-Fe3O4 nanocrystals, followed by separation of the nanocrystals by application of a magnetic field. The adsorption properties of the iron oxide surface towards As ions are unaffected by the CMCD capping, so arsenic too can be removed simultaneously in the separation process. The CMCD-Fe3O4 nanocrystals provide a versatile platform for magnetic separation with potential applications in water remediation.
Abstract:
Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. In order to address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics", Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An Optimally Corrected Form of Acceleration Correction Algorithm within SPH-based Simulations of Solid Mechanics", submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high-frequency oscillations in the computed response whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we aim to gather further insight into the acceleration correction algorithm by exploring its application to problems in impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics.
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms in the literature. Belief-propagation-based message passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first use of message passing for hop-count-based self-localization.
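The hop-distance versus Euclidean-distance approximation is easy to probe by simulation. The sketch below (illustrative, not the paper's analysis) builds a random geometric graph, runs a BFS from one anchor and reports the spread of Euclidean distances at each hop count; the node count and radius are arbitrary choices.

```python
# Simulation sketch (illustrative, not the paper's analysis): deploy nodes uniformly,
# build a random geometric graph with communication radius R, compute hop counts from an
# anchor by BFS, and examine the Euclidean distances of nodes at each hop count.
import random, math
from collections import deque, defaultdict

random.seed(1)
N, R = 800, 0.08
pts = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

adj = defaultdict(list)
for i in range(N):
    for j in range(i + 1, N):
        if dist(pts[i], pts[j]) <= R:
            adj[i].append(j)
            adj[j].append(i)

# BFS hop counts from anchor node 0
hops, q = {0: 0}, deque([0])
while q:
    u = q.popleft()
    for v in adj[u]:
        if v not in hops:
            hops[v] = hops[u] + 1
            q.append(v)

by_hop = defaultdict(list)
for v, h in hops.items():
    by_hop[h].append(dist(pts[0], pts[v]))
for h in sorted(by_hop)[:6]:
    d = by_hop[h]
    print("hop %d: n=%3d, mean dist=%.3f, max=%.3f (h*R=%.3f)"
          % (h, len(d), sum(d) / len(d), max(d), h * R))
```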
Abstract:
This paper presents a decentralized, peer-to-peer architecture-based parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the message passing interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms such as PSO. The optimization requires minimizing both the weight and the cost of these composite plates simultaneously, which renders the problem multi-objective. Hence VEPSO, a multi-objective variant of the PSO algorithm, is used. Despite the use of such a heuristic, the application problem, being computationally intensive, suffers from long execution times under sequential computation. Hence, a parallel version of the PSO algorithm for the problem has been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction of computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem.
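The peer-to-peer exchange pattern can be sketched with mpi4py (an assumption; the paper's implementation is not reproduced). Each process stands in for one sub-swarm, and an allgather collective lets every process receive every other swarm's best solution without any master process; the stand-in objective and the "update step" are placeholders. It would be launched with something like `mpiexec -n 4 python vepso_sketch.py` (file name hypothetical).

```python
# Minimal peer-to-peer exchange sketch (not the paper's implementation; assumes mpi4py).
# In VEPSO each swarm optimizes one objective and swarms exchange their best particles.
# A collective allgather lets every process receive every other swarm's best, so no
# master process is needed.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def objective(x, which):
    """Hypothetical stand-in objectives (the plate weight and cost models would go here)."""
    return sum((xi - which) ** 2 for xi in x)

# each process evolves its own sub-swarm for objective index (rank % 2)
random.seed(rank)
best_x, best_f = None, float("inf")
for _ in range(100):
    x = [random.uniform(-2, 2) for _ in range(4)]       # stand-in for a PSO update step
    f = objective(x, rank % 2)
    if f < best_f:
        best_x, best_f = x, f
    # peer-to-peer: every swarm broadcasts its best to all others, all at once
    all_best = comm.allgather((best_f, best_x))
    guide = min(all_best)[1]   # e.g. bias the next update toward another swarm's best

if rank == 0:
    print("bests gathered from all swarms:", [round(f, 3) for f, _ in all_best])
```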
Abstract:
Adaptive Gaussian Mixture Models (GMMs) have been among the most popular and successful approaches for performing foreground segmentation on multimodal background scenes. However, the good accuracy of the GMM algorithm comes at a high computational cost. An improved GMM technique was proposed by Zivkovic to reduce computational cost by adaptively minimizing the number of modes. In this paper, we propose a modification to his adaptive GMM algorithm that further reduces execution time by replacing expensive floating-point computations with low-cost integer operations. To maintain accuracy, we derive a heuristic that computes periodic floating-point updates of the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33 to 1.44 on standard video datasets in which a large fraction of pixels are multimodal.
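The integer-counter idea can be illustrated as follows; the exact heuristic derived in the paper is not reproduced here, and the periodic update below is a simple approximation used only to convey the scheme of cheap integer work per frame plus an occasional floating-point correction.

```python
# Conceptual sketch (the paper's exact update rule is not reproduced): instead of updating
# a GMM mode weight with floating-point arithmetic every frame, count matches with an
# integer counter and apply an approximate floating-point update every PERIOD frames.
import random

ALPHA, PERIOD = 0.01, 16

def per_frame_update(w, matched):
    """Standard per-frame weight update: w <- (1 - alpha) * w + alpha * matched."""
    return (1.0 - ALPHA) * w + ALPHA * (1.0 if matched else 0.0)

def periodic_update(w, match_count):
    """Periodic update after PERIOD frames using the integer match counter (approximate)."""
    decay = (1.0 - ALPHA) ** PERIOD
    return decay * w + (1.0 - decay) * (match_count / PERIOD)

# compare the two schemes on a toy match/no-match sequence
random.seed(0)
matches = [random.random() < 0.7 for _ in range(160)]

w_exact, w_fast, counter = 0.5, 0.5, 0
for t, m in enumerate(matches, 1):
    w_exact = per_frame_update(w_exact, m)
    counter += int(m)                   # cheap integer work in the inner loop
    if t % PERIOD == 0:                 # occasional floating-point correction
        w_fast = periodic_update(w_fast, counter)
        counter = 0
print("per-frame weight: %.4f  periodic weight: %.4f" % (w_exact, w_fast))
```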
Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that arrive randomly at the transmitter of a point-to-point discrete memoryless channel using variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay, in the regime of large average delay, are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the rate of transmission as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into a problem of minimizing the average cost for a Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
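The Lagrangian step can be sketched on a toy queue model (the channel, arrival and error models below are assumptions, not those of the paper): the error probability is folded into the per-stage cost with a multiplier, and the resulting average-cost MDP over the queue length is solved by relative value iteration.

```python
# Sketch of the Lagrangian approach (illustrative only): combine the queue length (a delay
# proxy) and a rate-dependent error term with multiplier LAM, then solve the average-cost
# MDP over queue states by relative value iteration. All model parameters are assumptions.
import math

Q_MAX, P_ARR, LAM = 20, 0.4, 5.0
RATES = [0, 1, 2]                              # messages transmitted per slot

def err(r):                                    # assumed: error probability grows with rate
    return 0.0 if r == 0 else math.exp(-2.0 / r)

def step_cost(q, r):
    return q + LAM * err(r)                    # delay proxy + weighted error probability

def next_states(q, r):
    served = min(q, r)
    base = q - served
    return [(P_ARR, min(Q_MAX, base + 1)), (1.0 - P_ARR, base)]

h = [0.0] * (Q_MAX + 1)                        # relative value function
for _ in range(2000):                          # relative value iteration
    t = [min(step_cost(q, r) + sum(p * h[s] for p, s in next_states(q, r))
             for r in RATES) for q in range(Q_MAX + 1)]
    g = t[0]                                   # state 0 as the reference state
    h = [v - g for v in t]

policy = [min(RATES, key=lambda r: step_cost(q, r)
              + sum(p * h[s] for p, s in next_states(q, r))) for q in range(Q_MAX + 1)]
print("average cost estimate: %.3f" % g, "rate per queue length:", policy)
```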