969 results for Real-world


Relevance: 60.00%

Abstract:

This paper deals with haptic realism as it relates to the kinematic capabilities of the devices used to manipulate virtual objects in virtual assembly environments. Haptic realism implies a realistic touch sensation: in the virtual world, all operations should be performed in the same way, and with the same accuracy, as in the real world. Achieving such realism requires a complete mapping between real-world and virtual-world dimensions. Experiments were conducted to assess the kinematic capabilities of the device by comparing object dimensions in the real and virtual worlds; dimensions registered in the virtual world were found to be approximately 1.5 times those of the real world. The observed dimensional variations comprised a discrepancy due to the exoskeleton and a discrepancy between the real and virtual hands. Experiments characterising the exoskeleton discrepancy show that it can be compensated at either the hardware or the software level. A mathematical model is proposed to quantify the discrepancy between the real and virtual hands; since this discrepancy does not take a fixed value, it cannot be removed by calibration. Further experiments determine how much compensation can be applied to achieve haptic realism.
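The reported factor of roughly 1.5 suggests a simple software-level compensation. A minimal sketch of that idea follows, assuming a constant scale factor, which per the abstract holds only for the exoskeleton discrepancy and not for the real/virtual-hand discrepancy; all names are illustrative, not taken from the paper.

```python
# Minimal sketch of software-level scale compensation for the
# exoskeleton discrepancy. The 1.5 factor is the approximate ratio
# reported in the abstract; treating it as constant is an assumption.

SCALE_FACTOR = 1.5  # virtual dimension / real dimension (approximate)

def virtual_to_real(virtual_mm: float) -> float:
    """Map a dimension registered in the virtual world back to real units."""
    return virtual_mm / SCALE_FACTOR

def real_to_virtual(real_mm: float) -> float:
    """Predict the dimension the virtual world registers for a real object."""
    return real_mm * SCALE_FACTOR

if __name__ == "__main__":
    print(real_to_virtual(100.0))  # a 100 mm object registers as ~150 mm
    print(virtual_to_real(150.0))  # and maps back to 100 mm
```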

Relevance: 60.00%

Abstract:

Conservation of natural resources through sustainable ecosystem management and development is the key to a secure future. Managing an ecosystem involves inventorying and monitoring it, and applying integrated technologies, methodologies and interdisciplinary approaches to its conservation. It is therefore more critical than ever for people to be environmentally literate, and to realise this vision both ecological and environmental education must become a fundamental part of the education system at all levels. Humankind as a whole needs a clear understanding of environmental concerns and must follow sustainable development practices. The degradation of our environment is linked to continuing problems of pollution, loss of forest and solid-waste disposal, and to issues of economic productivity and national as well as ecological security. Environmental management has gained momentum in recent years, with initiatives focussing on managing environmental hazards and preventing possible disasters. Environmental issues make better sense when one can understand them within one's own cognitive sphere: environmental education focusing on real-world contexts and issues often begins close to home, encouraging learners to understand and forge connections with their immediate surroundings. The awareness, knowledge and skills needed for these local connections and understandings provide a base for moving out into larger systems, broader issues, and a more sophisticated comprehension of causes, connections and consequences. The Environmental Education Programme at CES, run in collaboration with the Karnataka Environment Research Foundation (KERF) and referred to as 'Know your Ecosystem', focuses on the importance of investigating ecosystems within the context of human influences, incorporating an examination of ecology, economics, culture, political structure and social equity as well as natural processes and systems. The ultimate goal of environmental education is to develop an environmentally literate public; this requires addressing the connection between our conception and practice of education and our relationship, as human cultures, to life-sustaining ecological systems. For each environmental issue there are many perspectives and much uncertainty, and environmental education cultivates the ability to recognise uncertainty, envision alternative scenarios, and adapt to changing conditions and information. This knowledge, these skills and this mindset translate into a citizenry that is better equipped to address its common problems and take advantage of opportunities, whether or not environmental concerns are involved.

Relevance: 60.00%

Abstract:

Although several techniques have been proposed in the literature for multiclass classification with Support Vector Machines (SVMs), the scalability of these approaches to large data sets remains largely unexplored. The Core Vector Machine (CVM) is a technique for scaling up a two-class SVM to handle large data sets. In this paper we propose a Multiclass Core Vector Machine (MCVM): we formulate the multiclass SVM problem as a Quadratic Programming (QP) problem defining an SVM with vector-valued output, and solve this QP problem using the CVM technique so that it scales to large data sets. Experiments with several large synthetic and real-world data sets show that the proposed MCVM technique gives generalization performance comparable to that of SVM at much lower computational expense; moreover, MCVM scales well with the size of the data set.
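The abstract does not give the QP explicitly; one standard way to define an SVM with vector-valued output is the Crammer-Singer style multiclass QP below, shown only as a representative example of this class of formulations, not necessarily the paper's exact one.

```latex
% Representative Crammer--Singer style multiclass SVM QP with
% vector-valued output; not necessarily the paper's exact formulation.
\begin{aligned}
\min_{\{w_m\},\,\xi}\quad & \frac{1}{2}\sum_{m=1}^{M}\lVert w_m\rVert^{2}
                            + C\sum_{i=1}^{n}\xi_i\\
\text{s.t.}\quad & w_{y_i}^{\top}x_i - w_{m}^{\top}x_i \;\ge\; 1-\xi_i
                   \qquad \forall i,\ \forall m\neq y_i,\\
                 & \xi_i \;\ge\; 0 \qquad \forall i.
\end{aligned}
```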

Relevance: 60.00%

Abstract:

Support Vector Clustering has gained considerable attention in exploratory data analysis owing to its firm theoretical foundation in statistical learning theory. However, the hard partitioning of the data set that Support Vector Clustering produces may not be acceptable in real-world scenarios. Rough Support Vector Clustering extends Support Vector Clustering to obtain a soft partitioning of the data set, but the Quadratic Programming problem it involves makes it computationally expensive on large data sets. In this paper we propose the Rough Core Vector Clustering algorithm, a computationally efficient realization of Rough Support Vector Clustering. The Rough Support Vector Clustering problem is formulated as an approximate Minimum Enclosing Ball problem and solved with an approximate Minimum Enclosing Ball algorithm. Experiments with several large multiclass data sets, such as Forest Cover Type and other multiclass data sets from the LIBSVM page, show that the proposed strategy is efficient and finds meaningful soft cluster abstractions that provide better generalization performance than an SVM classifier.
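The approximate Minimum Enclosing Ball subproblem at the heart of the CVM family is commonly solved with the Badoiu-Clarkson core-set iteration. A minimal sketch of that generic iteration is below, in input space and with illustrative parameter names; it is the textbook procedure, not the paper's exact one.

```python
import numpy as np

def approx_meb(points: np.ndarray, eps: float = 0.01, iters: int = None):
    """Badoiu-Clarkson style (1+eps)-approximate minimum enclosing ball.

    Generic core-set iteration in input space; CVM-style methods apply
    the same idea in a kernel feature space. Illustrative only.
    """
    if iters is None:
        iters = int(np.ceil(1.0 / eps ** 2))  # classic O(1/eps^2) bound
    center = points.mean(axis=0)
    for t in range(1, iters + 1):
        # Find the point furthest from the current center ...
        dists = np.linalg.norm(points - center, axis=1)
        far = points[np.argmax(dists)]
        # ... and step the center towards it with a shrinking step size.
        center = center + (far - center) / (t + 1)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(1000, 3))
    c, r = approx_meb(pts, eps=0.05)
    print(c, r)
```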

Relevance: 60.00%

Abstract:

This paper presents a novel Second Order Cone Programming (SOCP) formulation for large-scale binary classification tasks. Assuming that the class-conditional densities are mixture distributions in which each mixture component has a spherical covariance, the second-order statistics of the components can be estimated efficiently using clustering algorithms such as BIRCH. For each cluster, the second-order moments are used to derive a second-order cone constraint via a Chebyshev-Cantelli inequality; this constraint ensures that any data point in the cluster is classified correctly with high probability. The result is a large-margin SOCP formulation whose size depends on the number of clusters rather than on the number of training data points, so the proposed formulation scales to large datasets better than the state-of-the-art classifier, the Support Vector Machine (SVM). Experiments on real-world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in training time while achieving similar accuracies.
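Concretely, for a cluster with mean, label and spherical covariance, requiring correct classification with probability at least a threshold eta yields, via the Chebyshev-Cantelli inequality, a cone constraint of the following standard form; the symbols here are our notation, not quoted from the paper.

```latex
% Chebyshev--Cantelli based second-order cone constraint for a cluster
% with label y_j, mean \mu_j and spherical covariance \sigma_j^2 I
% (standard form; notation is ours, not the paper's).
y_j\,\bigl(w^{\top}\mu_j + b\bigr) \;\ge\; 1 + \kappa\,\sigma_j\,\lVert w\rVert_2,
\qquad
\kappa = \sqrt{\frac{\eta}{1-\eta}} .
```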

Relevance: 60.00%

Abstract:

When hosting XML information on relational backends, a mapping has to be established between the schemas of the information source and the target storage repositories. A rich body of recent literature exists for mapping isolated components of XML Schema to their relational counterparts, especially with regard to table configurations. In this paper, we present the Elixir system for designing industrial-strength mappings for real-world applications. Specifically, it produces an information-preserving holistic mapping that transforms the complete XML world-view (XML schema with constraints, XML documents, and XQuery queries including triggers and views) into a full-scale relational mapping (table definitions, integrity constraints, indices, triggers and views) that is tuned to the application workload. A key design feature of Elixir is that it performs all its mapping-related optimizations in the XML source space rather than in the relational target space. Further, unlike the XML mapping tools of commercial database systems, which rely heavily on user input, Elixir takes a principled cost-based approach to automatically find an efficient relational mapping. A prototype of Elixir is operational, and we quantitatively demonstrate its functionality and efficacy on a variety of real-life XML schemas.
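As a toy illustration of the kind of schema-to-table mapping being optimized (generic inlining for intuition only, not Elixir's cost-based algorithm), the following sketch shreds a flat XML element type into a relational table definition; the element and table names are invented.

```python
# Toy illustration of XML-to-relational "shredding": derive a table
# definition from a flat XML fragment. Generic inlining for intuition
# only -- not Elixir's cost-based mapping; all names are invented.
import xml.etree.ElementTree as ET

SAMPLE = """
<library>
  <book><title>Dune</title><author>Herbert</author><year>1965</year></book>
  <book><title>Emma</title><author>Austen</author><year>1815</year></book>
</library>
"""

def infer_table(xml_text: str, row_tag: str) -> str:
    root = ET.fromstring(xml_text)
    first = root.find(row_tag)
    # One column per child element; types left as TEXT for simplicity.
    cols = ", ".join(f"{child.tag} TEXT" for child in first)
    return f"CREATE TABLE {row_tag} ({cols});"

print(infer_table(SAMPLE, "book"))
# CREATE TABLE book (title TEXT, author TEXT, year TEXT);
```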

Relevance: 60.00%

Abstract:

We consider the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modelled as batch processing machines. Most studies assume that the ready times and due dates of jobs are agreeable (i.e., r_i < r_j implies d_i <= d_j); in many real-world applications, however, this agreeable property does not hold. In this paper, therefore, the scheduling of a single burn-in oven with non-agreeable release times and due dates, non-identical job sizes, and non-identical processing times is formulated as a Non-Linear (0-1) Integer Programming optimisation problem. The objective is to minimise the maximum completion time (makespan) of all jobs. Owing to the computational intractability of this problem, we propose four variants of a two-phase greedy heuristic algorithm. Computational experiments indicate that two of the four proposed algorithms have excellent average performance and can solve large-scale real-life problems with relatively low computational effort on a Pentium IV computer.
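The four heuristic variants are not described in the abstract; the sketch below shows only a generic greedy baseline for batching jobs on a single batch processing machine under a capacity limit, with all names and packing rules assumed for illustration.

```python
# Generic greedy baseline for single batch-machine scheduling with
# release times, non-identical sizes and processing times, minimising
# makespan. Illustrative only -- not one of the paper's four variants.
# Assumes every individual job fits in the oven (size <= capacity).
from dataclasses import dataclass

@dataclass
class Job:
    release: float   # ready time r_i
    size: float      # oven capacity consumed
    ptime: float     # processing time

def greedy_makespan(jobs: list[Job], capacity: float) -> float:
    pending = sorted(jobs, key=lambda j: j.release)
    clock = 0.0
    while pending:
        clock = max(clock, pending[0].release)
        # Pack already-released jobs into the oven, longest-first.
        released = [j for j in pending if j.release <= clock]
        released.sort(key=lambda j: -j.ptime)
        batch, used = [], 0.0
        for j in released:
            if used + j.size <= capacity:
                batch.append(j)
                used += j.size
        # A batch runs for as long as its longest job.
        clock += max(j.ptime for j in batch)
        for j in batch:
            pending.remove(j)
    return clock

jobs = [Job(0, 2, 4), Job(0, 3, 5), Job(1, 4, 2), Job(6, 2, 3)]
print(greedy_makespan(jobs, capacity=5))  # 10.0 for this toy instance
```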

Relevance: 60.00%

Abstract:

Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid-1990s, DE has found many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. The asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it decreases monotonically with time. The analysis applies to a class of continuous, real-valued objective functions that possess a unique global optimum (but may have multiple local optima). The theoretical results have been substantiated with relevant computer simulations.
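For reference, the canonical DE/rand/1/bin operators analysed here are the standard ones: the rand/1 mutation and binomial crossover below, in standard notation, with F the scale factor and Cr the crossover rate.

```latex
% Canonical DE/rand/1/bin operators (standard notation).
\vec{v}_i = \vec{x}_{r_1} + F\,(\vec{x}_{r_2} - \vec{x}_{r_3}),
\qquad r_1 \neq r_2 \neq r_3 \neq i,
\\[4pt]
u_{i,j} =
\begin{cases}
v_{i,j}, & \text{if } \mathrm{rand}_j(0,1) \le Cr \ \text{or}\ j = j_{\mathrm{rand}},\\
x_{i,j}, & \text{otherwise.}
\end{cases}
```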

Relevance: 60.00%

Abstract:

The questions one should answer about engineering computations - deterministic, probabilistic/randomized, or heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, and the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While computations in nature and natural processes, including their real input quantities, are exact, the computations we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis rather than assumption, is not less than 0.005 per cent. By error we here mean relative error bounds: since the exact error is never known in any circumstance or context, the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e. in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterises the quality of the results/outputs by specifying relative error bounds together with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (where possible, i.e. where the number of arithmetic operations is finite and known a priori) and of the usage of interval arithmetic, and discusses the interdependence among the error, the confidence, and the cost.
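In the notation implied by the talk, if x~ is a computed or measured value for a true quantity x, the relative error and the quoted minimum instrument error bound read as follows; the symbols are ours, and only the 0.005% figure comes from the abstract.

```latex
% Relative error and the quoted minimum instrument error bound
% (our notation; the 0.005% figure is taken from the abstract).
e_r \;=\; \frac{\lvert x - \tilde{x}\rvert}{\lvert x \rvert}, \qquad x \neq 0,
\qquad\text{with instrument bound}\qquad
e_r \;\ge\; 0.005\% \;=\; 5\times 10^{-5}.
```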

Relevance: 60.00%

Abstract:

Many knowledge-based systems (KBS) transform situation information into an appropriate decision using a built-in knowledge base. As the knowledge in real-world situations is often uncertain, the degree of truth of a proposition provides a measure of the uncertainty in the underlying knowledge. This uncertainty can be evaluated by collecting 'evidence' about the truth or falsehood of the proposition from multiple sources. In this paper we propose a simple framework for representing such uncertainty using the notion of an evidence space.
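The abstract does not define the evidence space itself; purely as an illustration of pooling truth/falsehood evidence from multiple sources into a degree of truth, one might write something like the sketch below, where the whole pooling scheme is an assumption.

```python
# Purely illustrative pooling of evidence about a proposition from
# multiple sources; the paper's actual evidence-space framework is
# not specified in the abstract, so this scheme is an assumption.

def pool_evidence(sources):
    """Each source reports (support_for, support_against) in [0, 1].

    Returns a degree of truth in [0, 1]: total support for the
    proposition divided by total support either way.
    """
    total_for = sum(s for s, _ in sources)
    total_against = sum(a for _, a in sources)
    if total_for + total_against == 0:
        return 0.5  # no evidence either way
    return total_for / (total_for + total_against)

# Three sources: two lean towards truth, one towards falsehood.
print(pool_evidence([(0.8, 0.1), (0.6, 0.2), (0.2, 0.7)]))  # ~0.615
```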

Relevance: 60.00%

Abstract:

Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region; by choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and is largely motivated by the technique of simulated annealing. The proposed method has been found to perform well on both clean and noisy real-world images, while being computationally far less expensive than stochastic optimisation techniques.
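A minimal annealing-flavoured, seeded region-growing sketch is given below only to illustrate the general idea of stochastic, seed-based segmentation; the acceptance rule, cooling schedule and all parameters are invented, not the paper's Markov chain.

```python
# Annealing-flavoured seeded region growing on a grayscale image.
# Illustrates the general idea of stochastic, seed-based segmentation;
# the acceptance rule and schedule are invented, not the paper's chain.
import numpy as np

def grow_region(img: np.ndarray, seed: tuple, temp: float = 10.0,
                cooling: float = 0.95, steps: int = 5000, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    region = {seed}
    seed_val = float(img[seed])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(steps):
        # Pick a random pixel already in the region and one neighbour.
        y, x = list(region)[rng.integers(len(region))]
        dy, dx = moves[rng.integers(4)]
        ny, nx = y + dy, x + dx
        if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) in region:
            continue
        # Accept the neighbour with a Boltzmann-like probability.
        diff = abs(float(img[ny, nx]) - seed_val)
        if rng.random() < np.exp(-diff / max(temp, 1e-9)):
            region.add((ny, nx))
        temp *= cooling
    return region

img = np.zeros((32, 32)); img[8:24, 8:24] = 200.0  # bright square
print(len(grow_region(img, seed=(16, 16))))  # pixels claimed for the region
```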

Relevance: 60.00%

Abstract:

In this paper we propose a new algorithm for learning polyhedral classifiers. In contrast to existing methods, which learn a polyhedral classifier by solving a constrained optimization problem, our method solves an unconstrained optimization problem. It is based on a logistic-function model for the posterior probability. We propose an alternating optimization algorithm, SPLA1 (Single Polyhedral Learning Algorithm 1), which learns the parameters by maximizing the log-likelihood of the training data. We also extend the method, in SPLA2, to make it independent of any user-specified parameter (e.g., the number of hyperplanes required to form the polyhedral set). We show the effectiveness of our approach with experiments on various synthetic and real-world datasets, comparing it with a standard decision tree method (OC1) and with a constrained-optimization-based method for learning polyhedral sets.
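A natural logistic model for a polyhedral (intersection-of-halfspaces) positive class, offered here as a plausible reading of the abstract rather than the paper's exact model, multiplies one sigmoid per hyperplane, with the parameters fit by alternating maximization of the training log-likelihood:

```latex
% Plausible product-of-logistics posterior for an intersection of K
% halfspaces (our reading of the abstract, not the paper's exact model).
P(y = 1 \mid x) \;=\; \prod_{k=1}^{K} \frac{1}{1 + e^{-(w_k^{\top}x + b_k)}} .
```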

Relevance: 60.00%

Abstract:

A terrestrial biosphere model with dynamic vegetation capability, the Integrated Biosphere Simulator (IBIS2), coupled to the NCAR Community Atmosphere Model (CAM2), is used to investigate the multiple climate-forest equilibrium states of the climate system. A 1000-year control simulation and a 1000-year land cover change simulation, consisting of global deforestation for 100 years followed by re-growth of forests for the subsequent 900 years, were performed. After several centuries of interactive climate-vegetation dynamics, the land cover change simulation converged to essentially the same climate state as the control simulation; however, the climate system takes about a millennium to reach the control forest state. In the absence of deep-ocean feedbacks in our model, the millennial time scale for converging to the original climate state is dictated by the long time scales of vegetation dynamics in the northern high latitudes. Our idealized modeling study suggests that the equilibrium state reached after complete global deforestation followed by re-growth of forests is unlikely to be distinguishable from the control climate. The real world, however, could have multiple climate-forest states, since our modeling study is unlikely to have represented all the ecological processes (e.g., altered fire regimes, seed sources and seedling establishment dynamics) essential for the re-establishment of major biomes.

Relevance: 60.00%

Abstract:

Lack of supervision in clustering algorithms often leads to clusters that are not useful or interesting to human reviewers. We investigate whether supervision can be transferred automatically for clustering a target task by providing a relevant supervised partitioning of a dataset from a different source task. The target clustering is made more meaningful for the human user by trading off intrinsic clustering goodness on the target task against alignment with relevant supervised partitions in the source task, wherever possible. We propose a cross-guided clustering algorithm that builds on traditional k-means by aligning the target clusters with the source partitions. The alignment process uses a cross-task similarity measure that discovers hidden relationships across tasks. When the source and target tasks correspond to different domains with potentially different vocabularies, we propose a projection approach using pivot vocabularies for the cross-domain similarity measure. Using multiple real-world and synthetic datasets, we show that our approach improves clustering accuracy significantly over traditional k-means and state-of-the-art semi-supervised clustering baselines, across a wide range of data characteristics and parameter settings.
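Schematically, and in our own notation (not the paper's), the trade-off can be pictured as a k-means distortion term on the target task minus a reward for aligning each target centroid with its best-matching source-partition centroid:

```latex
% Schematic cross-guided k-means objective (our notation): target-task
% distortion plus a reward, weighted by \lambda, for aligning target
% centroids \mu_c with their best-matching source centroids \nu_s.
J \;=\; \sum_{c=1}^{k}\sum_{x \in C_c} \lVert x - \mu_c \rVert^{2}
\;-\; \lambda \sum_{c=1}^{k} \max_{s}\, \mathrm{sim}(\mu_c, \nu_s) .
```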