18 results for Artificial Intelligence, Constraint Programming, set variables, representation
at Indian Institute of Science - Bangalore - India
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg limits, dry density, moisture content, and initial degree of saturation, along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively while taking all of these aspects into consideration. Various statistical/empirical methods have been attempted to predict swelling pressure based on index properties of soil. In this paper, the computational intelligence techniques of artificial neural networks and support vector machines have been used to develop models, based on a set of available experimental results, that predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to data beyond the training set, which is required for successful application of a model, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
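To make the modeling setup concrete, here is a minimal sketch of a support vector machine regressor mapping the five stated inputs to swelling pressure. It uses scikit-learn on synthetic data; the feature ranges and target are hypothetical stand-ins, not the paper's experimental dataset or tuned model.

```python
# Minimal sketch of the kind of model the abstract describes: an SVM
# regressor mapping five index properties of soil to swelling pressure.
# All data below are synthetic and purely illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical columns: [moisture %, dry density, liquid limit,
#                        plasticity index, clay fraction]
rng = np.random.default_rng(0)
X = rng.uniform([5, 1.2, 20, 5, 10], [40, 2.0, 90, 60, 70], size=(200, 5))
y = 0.5 * X[:, 3] + 2.0 * X[:, 1] + rng.normal(0, 1, 200)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```

Generalization to held-out data, which the abstract emphasizes, corresponds to the test-set score in the last line.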
Abstract:
Dissolved Gas Analysis (DGA), a non-destructive test procedure, has long been in vogue for assessing the status of power and related transformers in service. DGA has been seen to reveal, to a reasonable degree of accuracy, early indications of internal faults that may exist in transformers. The data acquisition and subsequent analysis need an expert in the area concerned to accurately assess the condition of the equipment. Since the presence of an expert is not always guaranteed, it is incumbent on power utilities to requisition a well-planned and reliable artificial expert system to replace, at least in part, the expert. This paper presents the application of the Ordered Ant Miner (OAM) classifier for predicting the fault involved. The paper also attempts to estimate the remaining life of the power transformer as an extension of the elapsed-life estimation method suggested in the literature.
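For illustration, the sketch below shows the kind of rule-based screening a DGA classifier performs on dissolved gas concentrations. It is not the Ordered Ant Miner algorithm (whose rule-induction details the abstract does not give); the gas ratios and thresholds are illustrative only.

```python
# Simplified sketch of DGA-based fault screening using gas ratios. This is
# NOT the Ordered Ant Miner classifier from the paper; it only illustrates
# the kind of inputs (dissolved gas concentrations in ppm) and rule-based
# fault labels such a classifier works with. Thresholds are illustrative.
def classify_dga(h2, ch4, c2h2, c2h4, c2h6):
    """Return a coarse fault label from dissolved gas concentrations (ppm)."""
    eps = 1e-9
    r1 = c2h2 / (c2h4 + eps)   # acetylene/ethylene
    r2 = ch4 / (h2 + eps)      # methane/hydrogen
    r3 = c2h4 / (c2h6 + eps)   # ethylene/ethane
    if r1 > 1.0:
        return "arcing (high-energy discharge)"
    if r2 < 0.1:
        return "partial discharge"
    if r3 > 3.0:
        return "thermal fault (high temperature)"
    return "normal / unresolved"

print(classify_dga(h2=120, ch4=10, c2h2=2, c2h4=40, c2h6=30))
```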
Abstract:
With the increasing number of new services and users being added to communication networks, the management of such networks becomes crucial to providing an assured quality of service. Finding skilled managers is often a problem. To alleviate this problem, and also to assist the available network managers, network management has to be automated. Many attempts have been made in this direction, and it is a promising area of interest to researchers in both academia and industry. In this paper, a review of the management complexities in present-day networks and of artificial intelligence approaches to network management is presented. Published by Elsevier Science B.V.
Abstract:
This paper investigates the use of Genetic Programming (GP) to create an approximate model of the non-linear relationship between the flexural stiffness, length, mass per unit length, and rotation speed of rotating beams and their natural frequencies. GP, a relatively new form of artificial intelligence, is derived from the Darwinian concepts of evolution and genetics, and it creates computer programs to solve problems by manipulating their tree structures. GP determines the size and structural complexity of the empirical model by minimizing the mean square error at the specified points of an input-output dataset. This dataset is generated using a finite element model. The validity of the GP-generated model is tested by comparing the natural frequencies at the training points and at additional input data points. It is found that, by using a non-dimensional stiffness, it is possible to obtain a simple and accurate function approximation for the natural frequency. This function approximation model is then used to study the relationships between the natural frequency and various influencing parameters for uniform and tapered beams. The relations obtained with the GP model agree well with FEM results and can be used for preliminary design and structural optimization studies.
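As a rough illustration of the technique, the following is a minimal tree-based GP loop for symbolic regression: random expression trees are evaluated against an input-output dataset and evolved by selection and mutation to minimize mean square error. The dataset here is a made-up stand-in for the FEM-generated frequency data, and the operator set is deliberately tiny.

```python
# Minimal sketch of tree-based genetic programming for symbolic regression:
# evolve an expression tree that fits an input-output dataset by minimizing
# mean square error. The target below is a synthetic stand-in.
import random, copy

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if isinstance(tree, str):      # the variable "x"
        return x
    if isinstance(tree, float):    # a constant
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mse(tree, data):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def mutate(tree):
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(2)      # replace a subtree with a fresh one
    t = copy.deepcopy(tree)
    i = random.choice([1, 2])
    t[i] = mutate(t[i])
    return t

random.seed(0)
# Stand-in dataset: pretend these points came from a finite element model.
data = [(0.1 * i, 1.0 + 3.0 * (0.1 * i) ** 2) for i in range(20)]
pop = [random_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=lambda t: mse(t, data))               # keep the fittest trees
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
pop.sort(key=lambda t: mse(t, data))
print("best MSE:", mse(pop[0], data))
```

A production GP system would add crossover, parsimony pressure, and a richer function set, but the selection-variation loop above is the core of the method.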
Abstract:
This article proposes a three-timescale simulation-based algorithm for the solution of infinite-horizon Markov Decision Processes (MDPs). We assume a finite state space and a discounted cost criterion, and adopt the value iteration approach. An approximation of the dynamic programming operator T is applied to the value function iterates. This 'approximate' operator is implemented using three timescales, the slowest of which updates the value function iterates. On the middle timescale, we perform a gradient search over the feasible action set of each state using Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates, thus finding the minimizing action in T. On the fastest timescale, the 'critic' estimates, over which the gradient search is performed, are obtained. A sketch of convergence explaining the dynamics of the algorithm using the associated ODEs is also presented. Numerical experiments on rate-based flow control at a bottleneck node using a continuous-time queueing model are performed using the proposed algorithm. The results obtained are verified against classical value iteration, where the feasible set is suitably discretized. Over such a discretized setting, a variant of the algorithm of [12] is compared, and the proposed algorithm is found to converge faster.
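The middle-timescale ingredient is easiest to see in isolation. Below is a sketch of the standard two-measurement SPSA gradient estimate driving a simple descent loop; the cost function is a toy stand-in for the simulated, critic-estimated cost, and the fixed perturbation size is a simplification (in the full algorithm the stepsizes follow three coupled schedules).

```python
# Sketch of a two-measurement SPSA gradient estimate, the building block
# the abstract uses on its middle timescale to search over actions.
import numpy as np

def spsa_gradient(f, theta, delta=0.1, rng=np.random.default_rng(0)):
    """Estimate grad f(theta) from two perturbed function evaluations."""
    # Random +/-1 (Bernoulli) perturbation vector, as in standard SPSA.
    pert = rng.choice([-1.0, 1.0], size=theta.shape)
    return (f(theta + delta * pert) - f(theta - delta * pert)) / (2 * delta * pert)

f = lambda a: np.sum((a - 2.0) ** 2)   # toy convex cost over actions
theta = np.zeros(3)
for k in range(1, 200):
    # Slowly decaying stepsize; the iterate drifts toward the minimizer [2, 2, 2].
    theta -= (1.0 / (k + 10)) * spsa_gradient(f, theta)
print(theta)
```

Note that each gradient estimate costs only two function evaluations regardless of the action dimension, which is what makes SPSA attractive inside a simulation loop.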
Abstract:
Biomimetics involves transfer from one or more biological examples to a technical system. This study addresses four questions. What are the essential steps in a biomimetic process? What is transferred? How can the transferred knowledge be structured in a way useful for biologists and engineers? Which guidelines can be given to support transfer in biomimetic design processes? In order to identify the essential steps involved in carrying out biomimetics, several procedures found in the literature were summarized, and four essential steps that are common across these procedures were identified. For identification of mechanisms for transfer, 20 biomimetic examples were collected and modeled according to a model of causality called the SAPPhIRE model. These examples were then analyzed to identify the underlying similarity between each biological and corresponding analogue technical system. Based on the SAPPhIRE model, four levels of abstraction at which transfer takes place were identified. Taking similarity into account, the biomimetic examples were assigned to the appropriate levels of abstraction of transfer. Based on the essential steps and the levels of transfer, guidelines for supporting transfer in biomimetic design were proposed and evaluated using design experiments. The 20 biological and analogue technical systems that were analyzed were similar in the physical effects used and at the most abstract levels of description of their functionality, but they were least similar at the lowest level of abstraction: the parts involved. Transfer was most often carried out at the physical-effect level of abstraction. Compared to a generic set of guidelines based on the literature, the proposed guidelines improved design performance by about 60%. Further, the SAPPhIRE model turned out to be a useful representation for modeling complex biological systems and their functionality. Databases of biological systems structured using the SAPPhIRE model have the potential to aid biomimetic concept generation.
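As a sketch of how such a database entry might be structured, the following dataclass records one system in terms of the SAPPhIRE constructs; the field names follow the usual expansion of the acronym (State change, Action, Parts, Phenomenon, Inputs, oRgans, physical Effects), and the lotus-leaf example is our illustration, not necessarily one of the paper's 20 analyzed cases.

```python
# Minimal data structure for SAPPhIRE-style entries, sketching how a
# database of biological systems might be organized for transfer. This is
# an illustration, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class SapphireEntry:
    system: str            # biological or technical system described
    action: str            # abstract interpretation of what happens
    state_change: str      # change in a property over time
    phenomenon: str        # interaction producing the state change
    physical_effect: str   # principle governing the phenomenon
    organs: str            # structural context required by the effect
    inputs: str            # energy/material/information supplied
    parts: str             # physical components involved

lotus = SapphireEntry(
    system="lotus leaf",
    action="keep surface clean",
    state_change="water droplets roll off carrying dirt",
    phenomenon="droplet-surface interaction with minimal contact area",
    physical_effect="high contact angle from low surface energy (hydrophobicity)",
    organs="micro- and nano-scale surface texture",
    inputs="rain water",
    parts="epidermal papillae with wax crystals",
)
print(lotus.physical_effect)
```

Indexing entries by the `physical_effect` field matches the paper's finding that transfer most often happens at that level of abstraction.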
Abstract:
This paper presents a chance-constrained programming approach for constructing maximum-margin classifiers that are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance-constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large-deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins, and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
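A simplified version of the resulting geometry can be written down directly as a second-order cone program: each uncertain example is summarized by a mean mu_i and an uncertainty radius r_i, and the margin constraint is tightened by kappa * r_i * ||w||. The cvxpy sketch below uses this simplification on synthetic data; it is in the spirit of, not identical to, the paper's Bernstein-based relaxation.

```python
# Simplified robust maximum-margin classifier as a second-order cone
# program. Data, radii, and kappa are hypothetical; the paper's actual
# Bernstein-based relaxation is more refined than this sketch.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
mu = np.vstack([rng.normal(+1, 0.3, (20, 2)),    # means of class +1 examples
                rng.normal(-1, 0.3, (20, 2))])   # means of class -1 examples
y = np.array([1.0] * 20 + [-1.0] * 20)
r = np.full(40, 0.2)   # per-example uncertainty radius (hypothetical)
kappa = 1.0            # robustness level induced by the chance-constraint

w, b = cp.Variable(2), cp.Variable()
# Each margin constraint is tightened by kappa * r_i * ||w||_2, a second-order
# cone constraint guarding against perturbations of the uncertain example.
constraints = [y[i] * (mu[i] @ w + b) >= 1 + kappa * r[i] * cp.norm(w, 2)
               for i in range(len(y))]
problem = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
problem.solve()
print("robust margin:", 1.0 / problem.value, "w:", w.value)
```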
Abstract:
In engineering design, the end goal is the creation of an artifact, product, system, or process that fulfills some functional requirements at some desired level of performance. As such, knowledge of functionality is essential in a wide variety of tasks in engineering activities, including modeling, generation, modification, visualization, explanation, evaluation, diagnosis, and repair of these artifacts and processes. A formal representation of functionality is essential for supporting any of these activities on computers. The goal of Parts 1 and 2 of this Special Issue is to bring together the state of knowledge of representing functionality in engineering applications from both the engineering and the artificial intelligence (AI) research communities.
Abstract:
As power systems grow in size and interconnections, their complexity increases. Rising costs due to inflation and increased environmental concerns have forced transmission, as well as generation, systems to be operated closer to design limits. Hence, power system voltage stability and voltage control are emerging as major problems in the day-to-day operation of stressed power systems. For secure operation and control of power systems under normal and contingency conditions, it is essential to provide the operator in the energy control center (ECC) with solutions in real time. Artificial neural networks (ANNs) are emerging as an artificial intelligence tool that gives fast, approximate, but acceptable solutions in real time, as they mostly use parallel processing for computation. The solutions thus obtained can be used as a guide by the operator in the ECC for power system control. This paper deals with the development of an ANN architecture that provides solutions for monitoring and control of voltage stability in the day-to-day operation of power systems.
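The division of labor the abstract describes, expensive training offline and fast evaluation online, can be sketched as follows with a small feed-forward network; the inputs (per-unit bus loadings) and the stability-index target are hypothetical stand-ins for the actual power system quantities.

```python
# Sketch of the kind of ANN the abstract describes: a feed-forward network
# trained offline to map an operating point to a voltage stability index,
# so the trained network can answer quickly in real time at the ECC.
# Inputs and target below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
loads = rng.uniform(0.5, 1.5, size=(500, 6))         # per-unit bus loadings
stability_index = 1.0 - loads.mean(axis=1) / 2.0     # synthetic target

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(loads, stability_index)                      # slow, done offline
print(ann.predict(loads[:1]))                        # fast, usable online
```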
Abstract:
This paper presents a decentralized, peer-to-peer-architecture-based parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for the multi-objective design optimization of laminated composite plates using the Message Passing Interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms like PSO. Optimization requires minimizing both the weight and the cost of these composite plates simultaneously, which renders the problem multi-objective. Hence VEPSO, a multi-objective variant of the PSO algorithm, is used. Despite the use of such a heuristic, the application problem, being computationally intensive, suffers from long execution times under sequential computation. Hence, a parallel version of the PSO algorithm for the problem has been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction of computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
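The peer-to-peer pattern can be sketched with mpi4py: each rank evolves its own swarm against one objective, and the swarms exchange best positions through a collective allgather, so no master process exists. The objectives below are toy stand-ins for the weight and cost models of the composite plate, and the PSO coefficients are generic choices.

```python
# Sketch of the peer-to-peer VEPSO pattern: every MPI rank runs a swarm for
# one objective and exchanges bests with all peers via a collective.
# Run with e.g.: mpiexec -n 2 python vepso_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

objectives = [lambda x: np.sum(x ** 2),            # stand-in for weight
              lambda x: np.sum((x - 1.0) ** 2)]    # stand-in for cost
f = objectives[rank % len(objectives)]

rng = np.random.default_rng(rank)
pos = rng.uniform(-5, 5, size=(30, 4))
vel = np.zeros_like(pos)
best = pos[np.argmin([f(p) for p in pos])]

for it in range(100):
    # Peer-to-peer exchange: every rank receives every other rank's best.
    all_best = np.array(comm.allgather(best))
    # VEPSO-style update: use the best of ANOTHER swarm as the social guide.
    guide = all_best[(rank + 1) % size]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (best - pos) + 1.5 * r2 * (guide - pos)
    pos = pos + vel
    cand = pos[np.argmin([f(p) for p in pos])]
    if f(cand) < f(best):
        best = cand

if rank == 0:
    print("rank 0 best value:", f(best))
```

Because `allgather` is symmetric across ranks, every process plays the same role, which is the deviation from the master-slave approach the abstract describes.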
Abstract:
In this paper, we present a machine learning approach for subject-independent human action recognition using a depth camera, emphasizing the importance of depth in the recognition of actions. The proposed approach uses the flow information of all 3 dimensions to classify an action. In our approach, we obtain the 2-D optical flow and use it along with the depth image to obtain the depth flow (Z motion vectors). The obtained flow captures the dynamics of the actions in space-time. Feature vectors are obtained by averaging the 3-D motion over a grid laid over the silhouette in a hierarchical fashion. These hierarchical fine-to-coarse windows capture the motion dynamics of the object at various scales. The extracted features are used to train a Meta-cognitive Radial Basis Function Network (McRBFN) that uses a Projection Based Learning (PBL) algorithm, referred to as PBL-McRBFN henceforth. PBL-McRBFN begins with zero hidden neurons and builds the network based on the best human learning strategy, namely, self-regulated learning in a meta-cognitive environment. When a sample is used for learning, PBL-McRBFN uses the sample overlapping conditions and a projection based learning algorithm to estimate the parameters of the network. The performance of PBL-McRBFN is compared to that of Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers, with every person and action represented in the training and testing datasets. The performance study shows that PBL-McRBFN outperforms these classifiers in recognizing actions in 3-D. Further, a subject-independent study is conducted using a leave-one-subject-out strategy and its generalization performance is tested. It is observed from the subject-independent study that McRBFN is capable of generalizing actions accurately. The performance of the proposed approach is benchmarked with the Video Analytics Lab (VAL) dataset and the Berkeley Multimodal Human Action Database (MHAD). (C) 2013 Elsevier Ltd. All rights reserved.
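The 3-D flow construction can be sketched in a few lines with OpenCV: Farneback optical flow supplies the X and Y motion vectors, and differencing consecutive registered depth frames supplies the Z component. The frames below are synthetic; a real pipeline would read RGB-D streams from the camera, and this sketch omits the silhouette grid and hierarchical averaging.

```python
# Sketch of the 3-D flow computation the abstract describes: 2-D optical
# flow from consecutive grayscale frames plus a Z motion component from the
# change in the registered depth image. Frames here are synthetic.
import numpy as np
import cv2

prev_gray = np.random.randint(0, 255, (240, 320), np.uint8)
next_gray = np.roll(prev_gray, 3, axis=1)            # fake horizontal motion
prev_depth = np.full((240, 320), 1000, np.float32)   # depth in mm
next_depth = prev_depth - 5                          # fake motion toward camera

# 2-D optical flow (x and y motion vectors per pixel).
flow_xy = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
# Z motion: this sketch simply differences the depth frames.
flow_z = next_depth - prev_depth

flow_xyz = np.dstack([flow_xy, flow_z])              # (H, W, 3) motion field
print(flow_xyz.shape)
```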
Abstract:
In recent times, crowdsourcing over social networks has emerged as an active tool for complex task execution. In this paper, we address the problem faced by a planner to incentivize agents in the network to execute a task and also help in recruiting other agents for this purpose. We study this mechanism design problem under two natural resource optimization settings: (1) cost critical tasks, where the planner's goal is to minimize the total cost, and (2) time critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We define a set of fairness properties that should ideally be satisfied by a crowdsourcing mechanism. We prove that no mechanism can satisfy all these properties simultaneously. We relax some of these properties and define their approximate counterparts. Under appropriate approximate fairness criteria, we obtain a non-trivial family of payment mechanisms. Moreover, we provide precise characterizations of cost critical and time critical mechanisms.
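As a toy illustration of the design space (not the family of mechanisms characterized in the paper), the sketch below pays every agent on the referral path from the executor back to the root, with payments decaying geometrically, which rewards recruitment while keeping the planner's total cost bounded.

```python
# Toy referral-payment scheme in the spirit of crowdsourcing incentive
# mechanisms; purely illustrative, not the paper's mechanism family.
def referral_payments(path, reward=100.0, decay=0.5):
    """path: agents listed from the task executor back to the root recruiter."""
    return {agent: reward * (decay ** depth)
            for depth, agent in enumerate(path)}

# Executor "d" was recruited by "c", who was recruited by "b", then "a".
print(referral_payments(["d", "c", "b", "a"]))
# The planner's total cost is bounded by reward / (1 - decay), here 200,
# regardless of how deep the referral chain grows.
```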