70 results for Localization real-world challenges
at Indian Institute of Science - Bangalore - India
Abstract:
Recently, efficient scheduling algorithms based on Lagrangian relaxation have been proposed for scheduling parallel machine systems and job shops. In this article, we develop real-world extensions to these scheduling methods. In the first part of the paper, we consider the problem of scheduling single-operation jobs on parallel identical machines and extend the methodology to handle multiple classes of jobs, taking into account setup times and setup costs. The proposed methodology uses Lagrangian relaxation and simulated annealing in a hybrid framework. In the second part of the paper, we consider a Lagrangian relaxation based method for scheduling job shops and extend it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.
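As a rough illustration of the simulated-annealing half of such a hybrid (the Lagrangian relaxation step is not reproduced), the sketch below anneals a job-to-machine assignment for a toy parallel-machine instance with class-dependent setup times; the job data, setup matrix and cost function are invented placeholders, not the paper's formulation.

import math
import random

# Hypothetical toy instance (not from the paper): 8 single-operation jobs,
# their processing times, job classes, and a class-to-class setup-time matrix.
proc = [4, 3, 6, 2, 5, 3, 4, 2]
cls = [0, 1, 0, 2, 1, 2, 0, 1]
setup = [[0, 2, 3],
         [2, 0, 1],
         [3, 1, 0]]
N_MACHINES = 2

def cost(schedule):
    """Makespan plus total setup time for a list of per-machine job sequences."""
    total_setup, makespan = 0, 0
    for seq in schedule:
        t, prev = 0, None
        for j in seq:
            if prev is not None and cls[prev] != cls[j]:
                s = setup[cls[prev]][cls[j]]
                t += s
                total_setup += s
            t += proc[j]
            prev = j
        makespan = max(makespan, t)
    return makespan + total_setup

def neighbour(schedule):
    """Move one randomly chosen job to a random position on a random machine."""
    new = [list(seq) for seq in schedule]
    src = random.choice([m for m in range(N_MACHINES) if new[m]])
    job = new[src].pop(random.randrange(len(new[src])))
    dst = random.randrange(N_MACHINES)
    new[dst].insert(random.randint(0, len(new[dst])), job)
    return new

def anneal(iters=5000, t0=10.0, alpha=0.999):
    current = [list(range(0, len(proc), 2)), list(range(1, len(proc), 2))]
    best, temp = current, t0
    for _ in range(iters):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = cand
        if cost(current) < cost(best):
            best = current
        temp *= alpha
    return best, cost(best)

if __name__ == "__main__":
    print(anneal())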
Abstract:
A real-time cooperative localization system, utilizing dual foot-mounted low-cost inertial sensors and RF-based inter-agent ranging, has been developed. Scenario-based tests have been performed, using fully-equipped firefighters mimicking a search operation in a partly smoke-filled environment, to evaluate the performance of the TOR (Tactical lOcatoR) system. The performed tests included realistic firefighter movements and inter-agent distances, factors that are crucial in order to provide realistic evaluations of the expected performance in real-world operations. The tests indicate that the TOR system may be able to provide a position accuracy of approximately two to three meters during realistic firefighter operations, with only two smoke diving firefighters and one supervising firefighter within range.
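Purely to illustrate the RF-ranging ingredient (not the TOR system's actual fusion filter), the sketch below estimates one agent's 2-D position from noisy ranges to peers at known positions using a few Gauss-Newton iterations; the anchor positions, true position and noise level are invented for the example.

import numpy as np

def range_fix(anchors, ranges, x0, iters=10):
    """Gauss-Newton least-squares position estimate from ranges to known anchors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - anchors                     # (n, 2)
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        residual = dists - ranges
        J = diffs / dists[:, None]              # Jacobian of range w.r.t. position
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x -= step
    return x

# Invented example: three peer agents at known positions, true position (3, 4),
# ranges corrupted by roughly 0.3 m of noise.
rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.3 * rng.standard_normal(3)
print(range_fix(anchors, ranges, x0=[5.0, 5.0]))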
Abstract:
A major question in current network science is how to understand the relationship between the structure and functioning of real networks. Here we present a comparative network analysis of 48 wasp and 36 human social networks. We have compared the centralisation and small-world character of these interaction networks and have studied how these properties change over time. We compared the interaction networks of (1) two congeneric wasp species (Ropalidia marginata and Ropalidia cyathiformis), (2) the queen-right (with the queen) and queen-less (without the queen) networks of wasps, (3) the four network types obtained by combining (1) and (2) above, and (4) wasp networks with the social networks of children in 36 classrooms. We have found perfect (100%) centralisation in a queen-less wasp colony and nearly perfect centralisation in several other queen-less wasp colonies. To our knowledge, such a perfectly centralised interaction network is unique in the literature on real-world networks. Differences between the interaction networks of the two wasp species are smaller than differences between the networks describing their different colony conditions. Also, the differences between different colony conditions are larger than the differences between wasp and children networks. For example, the structure of queen-right R. marginata colonies is more similar to children's social networks than to that of their queen-less colonies. We conclude that network architecture depends more on the functioning of the particular community than on taxonomic differences (either between two wasp species or between wasps and humans).
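For reference, degree centralisation in the Freeman sense (the quantity that reaches 100% for a perfect star) and the usual small-world ingredients are easy to compute; the toy star graph below merely stands in for a perfectly centralised colony network and is not the authors' data.

import networkx as nx

def degree_centralisation(G):
    """Freeman degree centralisation: 1.0 for a star graph, 0.0 for a regular graph."""
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    max_possible = (n - 1) * (n - 2)
    return sum(max(degrees) - d for d in degrees) / max_possible

# Toy stand-in for a perfectly centralised network: one hub plus nine leaves.
G = nx.star_graph(9)
print(degree_centralisation(G))            # -> 1.0
print(nx.average_clustering(G))            # small-world ingredients:
print(nx.average_shortest_path_length(G))  # clustering and mean path length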
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational-experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
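A minimal analogue of the structure-learning step is sketched below, assuming neighbourhood selection with an off-the-shelf coordinate-descent LASSO solver rather than the paper's LP formulation; the simulated expression matrix and the regularisation level are stand-ins, not the DREAM2 or yeast data.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_network(X, alpha=0.1):
    """Regress each gene on all others; a nonzero coefficient in either
    direction yields an undirected edge (neighbourhood selection)."""
    n_samples, n_genes = X.shape
    adj = np.zeros((n_genes, n_genes), dtype=bool)
    for g in range(n_genes):
        others = np.delete(np.arange(n_genes), g)
        coef = Lasso(alpha=alpha, max_iter=10000).fit(X[:, others], X[:, g]).coef_
        adj[g, others] = coef != 0
    return adj | adj.T   # symmetrise into an undirected SLGN-style graph

# Stand-in data: 50 profiles of 20 genes.
X = np.random.default_rng(1).standard_normal((50, 20))
print(lasso_network(X).sum() // 2, "edges")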
Abstract:
Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it is, however, computationally demanding, especially when the training set size is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. The experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.
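The NLP measure itself is simple to state: the average negative log predictive probability assigned to the true labels of held-out data. The sketch below evaluates it for a (non-sparse) scikit-learn GP classifier on a validation split, purely to illustrate the validation criterion; the data set and split are placeholders, and the authors' basis-vector selection scheme is not reproduced.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split

def nlp_loss(model, X_val, y_val):
    """Average negative log predictive probability of the true labels."""
    proba = model.predict_proba(X_val)           # columns ordered by model.classes_
    idx = np.searchsorted(model.classes_, y_val)
    return -np.mean(np.log(proba[np.arange(len(y_val)), idx] + 1e-12))

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
gpc = GaussianProcessClassifier(random_state=0).fit(X_tr, y_tr)
print("validation NLP:", nlp_loss(gpc, X_val, y_val))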
Abstract:
Fractal Dimensions (FD) are popular metrics for characterizing signals. They are used as complexity measures in signal analysis applications in various fields. However, proper interpretation of such analyses has not been thoroughly addressed. In this paper, we study the effect of various signal properties on FD and interpret the results in terms of classical signal processing concepts such as amplitude, frequency, number of harmonics, noise power and signal bandwidth. We have used Higuchi's method for estimating FDs. This study helps in gaining a better understanding of the FD complexity measure for various signal parameters. Our results indicate that FD is a useful metric in estimating various signal properties. As an application of the FD measure in a real-world scenario, FD is used as a feature for discriminating seizures from seizure-free intervals in intracranial EEG recordings, and the FD feature gives good discrimination performance.
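Higuchi's estimator itself is compact; a direct sketch follows (the seizure-detection feature extraction built around it is not reproduced, and the synthetic test signals are only a sanity check).

import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension: slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            n_max = (N - m - 1) // k
            if n_max < 1:
                continue
            idx = m + np.arange(n_max + 1) * k
            length = np.sum(np.abs(np.diff(x[idx])))
            # curve length with Higuchi's normalisation factor
            Lk.append(length * (N - 1) / (n_max * k) / k)
        L.append(np.mean(Lk))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope

# Sanity check on synthetic signals: FD grows as broadband noise is added.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(1000)
print(higuchi_fd(clean), higuchi_fd(noisy))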
Abstract:
Mesh topologies are important for large-scale peer-to-peer systems that use low-power transceivers. The Quality of Service (QoS) in such systems is known to decrease as the scale increases. We present a scalable approach for dissemination that exploits all the shortest paths between a pair of nodes and improves the QoS. Despite the presence of multiple shortest paths in a system, we show that these paths cannot be exploited by spreading the messages over the paths in a simple round-robin manner; nodes along one of these paths will always handle more messages than the nodes along the other paths. We characterize the set of shortest paths between a pair of nodes in regular mesh topologies and, using this characterization, derive rules to effectively spread the messages over all the available paths. These rules ensure that all the nodes that are at the same distance from the source handle roughly the same number of messages. Modeling the multihop propagation in the mesh topology as a multistage queuing network, we present simulation results from a variety of scenarios that include link failures and propagation irregularities to reflect real-world characteristics. Our method achieves improved QoS in all these scenarios.
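The load-imbalance observation can be illustrated directly: enumerate all shortest paths between two nodes of a small grid (mesh) graph and count how often each intermediate node is used when messages are assigned to those paths round-robin. The grid size, endpoints and message count below are arbitrary, and the authors' load-balancing rules are not reproduced.

from collections import Counter
import networkx as nx

G = nx.grid_2d_graph(5, 5)                       # 5x5 mesh
src, dst = (0, 0), (4, 4)
paths = list(nx.all_shortest_paths(G, src, dst))

# Assign messages to the shortest paths in simple round-robin order.
load = Counter()
n_messages = 700
for i in range(n_messages):
    for node in paths[i % len(paths)][1:-1]:     # intermediate nodes only
        load[node] += 1

# Nodes at the same distance from the source end up with very different loads.
by_distance = {}
for node, count in load.items():
    d = nx.shortest_path_length(G, src, node)
    by_distance.setdefault(d, []).append(count)
for d in sorted(by_distance):
    print(d, sorted(by_distance[d]))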
Abstract:
This paper presents a Chance-Constrained Programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance-constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than state-of-the-art methods.
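The flavour of a robust maximum-margin formulation under interval uncertainty can be seen in the simpler worst-case box-uncertainty variant sketched below (a conservative baseline of the kind the paper improves on, not its Bernstein-based chance-constrained SOCP); cvxpy, the interval half-widths delta and the toy data are all assumptions of the sketch.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 2
X = np.vstack([rng.normal(+2, 1, (n // 2, d)), rng.normal(-2, 1, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])
delta = 0.3 * np.ones((n, d))          # interval half-widths per feature

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)
C = 1.0
# Require correct classification for every point in each example's box:
# the worst case subtracts delta_i^T |w| from the margin of example i.
constraints = [cp.multiply(y, X @ w + b) - delta @ cp.abs(w) >= 1 - xi]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
prob.solve()
print(w.value, b.value)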
Abstract:
A considerable amount of work has been dedicated to the development of analytical solutions for the flow of chemical contaminants through soils. Most of the analytical solutions for complex transport problems are closed-form series solutions. The convergence of these solutions depends on the eigenvalues obtained from a corresponding transcendental equation. Thus, the difficulty in obtaining exact solutions from analytical models encourages the use of numerical solutions for parameter estimation, even though the latter models are computationally expensive. In this paper, a combination of two swarm intelligence based algorithms is used for accurate estimation of design transport parameters from the closed-form analytical solutions. Estimation of the eigenvalues from a transcendental equation is treated as a multimodal discontinuous function optimization problem. The eigenvalues are estimated using an algorithm derived from the glowworm swarm strategy. Parameter estimation in the inverse problem is handled using a standard PSO algorithm. Integration of these two algorithms enables an accurate estimation of design parameters using closed-form analytical solutions. The present solver is applied to a real-world inverse problem in environmental engineering. The inverse model based on swarm intelligence techniques is validated and its accuracy in parameter estimation is demonstrated. The proposed solver quickly estimates the design parameters with great precision.
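A bare-bones PSO of the kind used for the inverse step is sketched below, fitting two parameters of a hypothetical closed-form breakthrough curve to synthetic observations; the forward model, parameter bounds and data are placeholders, and the glowworm-swarm eigenvalue search is not reproduced.

import numpy as np

def forward_model(params, t):
    """Hypothetical closed-form breakthrough curve with two parameters."""
    velocity, dispersion = params
    return np.exp(-((t - 1.0 / velocity) ** 2) / (4.0 * dispersion))

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Synthetic "observations" from known parameters, then recover them.
t = np.linspace(0.1, 3.0, 50)
obs = forward_model([0.8, 0.05], t)
objective = lambda p: np.sum((forward_model(p, t) - obs) ** 2)
print(pso(objective, bounds=(np.array([0.1, 0.01]), np.array([2.0, 0.5]))))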
Abstract:
Optimal allocation of water resources among various stakeholders often involves considerable complexity with several conflicting goals, which often leads to multi-objective optimization. To aid effective decision-making by water managers, apart from developing effective multi-objective mathematical models, there is a pressing need to provide efficient Pareto-optimal solutions to real-world problems. This study proposes a swarm-intelligence-based multi-objective technique, namely the elitist-mutated multi-objective particle swarm optimization technique (EM-MOPSO), for arriving at efficient Pareto-optimal solutions to multi-objective water resource management problems. The EM-MOPSO technique is applied to a case study of a multi-objective reservoir operation problem. The model performance is evaluated by comparison with the results of a non-dominated sorting genetic algorithm (NSGA-II) model, and it is found that the EM-MOPSO method gives better performance. The developed method can be used as an effective aid for multi-objective decision-making in integrated water resource management.
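The core bookkeeping in any such multi-objective PSO is maintaining the set of non-dominated (Pareto-optimal) solutions; a minimal filter is sketched below on invented two-objective values (the elitist-mutation operator and the reservoir model are not reproduced).

import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives minimised)."""
    objectives = np.asarray(objectives)
    keep = []
    for i in range(len(objectives)):
        dominated = np.any(
            np.all(objectives <= objectives[i], axis=1)
            & np.any(objectives < objectives[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Invented trade-off points, e.g. (irrigation deficit, hydropower shortfall).
pts = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.5]])
print(pareto_front(pts))   # -> [0, 1, 3, 4]; [3.0, 4.0] is dominated by [2.0, 3.0]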
Abstract:
Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments on real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
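The selective-sampling idea can be approximated crudely with off-the-shelf tools: cluster each class, keep a subset of points suggested by the cluster structure, and train the SVM only on that subset. The sketch below uses per-class k-means as a stand-in for the paper's kernel-based incremental clustering; the data set, cluster counts and subset sizes are arbitrary choices for illustration.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Selective sampling: from each class keep the points closest to and farthest
# from their cluster centres (a crude proxy for cluster abstractions).
keep = []
for label in np.unique(y_tr):
    idx = np.where(y_tr == label)[0]
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_tr[idx])
    dist = np.linalg.norm(X_tr[idx] - km.cluster_centers_[km.labels_], axis=1)
    order = np.argsort(dist)
    keep.extend(idx[np.r_[order[:200], order[-200:]]])

svm_small = SVC(kernel="rbf").fit(X_tr[keep], y_tr[keep])
svm_full = SVC(kernel="rbf").fit(X_tr, y_tr)
print(svm_small.score(X_te, y_te), svm_full.score(X_te, y_te))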
Abstract:
In this paper, the control aspects of a hierarchical organization under the influence of "proportionality" policies are analyzed. Proportionality policies are those that restrict the recruitment into every level of the hierarchy (except the bottom-most, or base, level) to be in strict proportion to the promotions into that level. Both long-term and short-term control analyses are discussed. In long-term control, the specific roles of the system parameters in controlling the shape and size of the system are analyzed, yielding suitable control strategies. In short-term control, the attainability of a target or goal structure from a given initial structure within a specified time is analyzed, yielding the required recruitment strategies. The theoretical analyses are illustrated with computational examples and also with real-world data. The control of such proportionality systems is then compared with that of general systems (which do not follow such policies), with some significant conclusions. The control relations of such proportionality systems are found to be simpler and more practically feasible than those of general Markov systems, which do not have such restrictions. Such proportionality systems thus not only retain and match the flexibility of general Markov systems but also have the added advantage of simpler and more practically feasible controls. The proportionality policies hence act as an alternative and more practically feasible means of control.
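A small numerical illustration of such a graded Markov manpower system follows (the transition fractions, wastage, proportionality constants and initial stocks are invented, not the paper's data): each period, staff stay, are promoted, or leave according to fixed fractions, and recruitment into every level above the base is constrained to be proportional to the promotions into that level.

import numpy as np

# Invented 3-level hierarchy (level 0 is the base). stay[i] and promote[i] are
# the fractions of level i that remain or move up; the remainder is wastage.
stay = np.array([0.75, 0.80, 0.90])
promote = np.array([0.10, 0.05, 0.00])
prop = np.array([0.5, 0.2])       # recruits into levels 1, 2 per promotion into them
base_recruits = 40                # free recruitment is allowed only at the base

def step(stock):
    new = stay * stock
    promotions = np.zeros(3)
    promotions[1:] = promote[:-1] * stock[:-1]
    new += promotions
    new[1:] += prop * promotions[1:]      # proportionality policy on recruitment
    new[0] += base_recruits
    return new

stock = np.array([200.0, 60.0, 15.0])
for year in range(10):
    stock = step(stock)
print(np.round(stock, 1))   # grade structure after ten periods under the policy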
Abstract:
Grid-connected systems, when put to use at site, experience scenarios such as voltage sag, voltage swell, frequency deviations and unbalance, which are common in the real-world grid. When these systems are tested in the laboratory, these scenarios do not exist and what is usually seen is an almost stiff voltage source. But, to qualify grid-connected systems to operate at site, it becomes essential to test them under the grid conditions mentioned earlier. The grid simulator is hardware that can be programmed to generate some of the typical conditions experienced by grid-connected systems at site. It is an inverter that is controlled to act like a voltage source in series with a grid impedance. The series grid impedance is emulated virtually within the inverter control rather than through physical components, thus avoiding the losses and the need for bulky reactive components. This paper describes the design of such a grid simulator. Control implementation issues are highlighted in the experimental results.
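For a flavour of the disturbance profiles such a simulator is asked to reproduce, the snippet below synthesises a three-phase voltage reference with a programmable sag window and amplitude unbalance; the amplitudes, timings and sampling rate are invented, and the inverter control with its virtual series impedance is of course not captured here.

import numpy as np

def three_phase_reference(t, v_nom=230.0, f=50.0, sag_depth=0.6,
                          sag_window=(0.10, 0.20), unbalance=(1.0, 0.95, 1.05)):
    """Per-phase sinusoidal references with a sag between the window times."""
    amp = v_nom * np.sqrt(2) * np.ones_like(t)
    in_sag = (t >= sag_window[0]) & (t < sag_window[1])
    amp[in_sag] *= sag_depth                   # voltage sag (a swell would use > 1)
    phases = []
    for k, ub in enumerate(unbalance):         # per-phase amplitude unbalance
        phases.append(ub * amp * np.sin(2 * np.pi * f * t - k * 2 * np.pi / 3))
    return np.array(phases)

t = np.linspace(0.0, 0.3, 15000)               # 0.3 s at 50 kHz
v_abc = three_phase_reference(t)
print(v_abc.shape, v_abc.max().round(1))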
Abstract:
The Core Vector Machine (CVM) is suitable for efficient large-scale pattern classification. In this paper, a method is proposed for improving the performance of the CVM with a Gaussian kernel function, irrespective of the ordering of patterns belonging to different classes within the data set. The method employs selective-sampling-based training of the CVM using a novel kernel-based scalable hierarchical clustering algorithm. Empirical studies on synthetic and real-world data sets show that the proposed strategy performs well on large data sets.