199 results for computer prediction


Relevance: 20.00%

Abstract:

Avoidance of collision between moving objects in a 3-D environment is fundamental to the problem of planning safe trajectories in dynamic environments. This problem appears in several diverse fields including robotics, air vehicles, underwater vehicles and computer animation. Most of the existing literature on collision prediction assumes that objects are modelled as spheres. While the conservative spherical bounding box is valid in many cases, when objects operate in close proximity a less conservative approach, one that allows objects to be modelled using analytic surfaces that closely mimic their shape, is more desirable. In this paper, a collision cone approach (previously developed only for objects moving on a plane) is used to determine collision between objects moving in 3-D space whose shapes can be modelled by general quadric surfaces. Exact collision conditions for such quadric surfaces are obtained and used to derive dynamic inversion based avoidance strategies.
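The paper's exact conditions for general quadrics are not reproduced in the abstract; as a point of reference, the minimal sketch below (NumPy assumed, all names hypothetical) implements only the classical spherical special case that the collision cone idea generalizes: two objects on constant-velocity courses are predicted to collide when they are closing on each other and their separation at closest approach falls below the sum of their bounding radii.

import numpy as np

def predicted_collision(p1, v1, p2, v2, r1, r2):
    """Spherical collision check (sketch, not the paper's quadric conditions).
    p1, p2: current positions (3-vectors); v1, v2: constant velocities;
    r1, r2: bounding-sphere radii."""
    r = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    if np.dot(r, v) >= 0.0:          # not closing: separation never decreases
        return np.linalg.norm(r) <= r1 + r2
    t_cpa = -np.dot(r, v) / np.dot(v, v)                # time of closest approach
    d_cpa = np.linalg.norm(r + t_cpa * v)               # miss distance
    return d_cpa <= r1 + r2

# example: near head-on approach of two unit spheres
print(predicted_collision([0, 0, 0], [1, 0, 0], [10, 0.5, 0], [-1, 0, 0], 1.0, 1.0))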

Relevance: 20.00%

Abstract:

The last few decades have witnessed the application of graph theory and topological indices derived from molecular graphs in structure-activity analysis. Such applications are based on regression and various multivariate analyses. Most topological indices are computed for the whole molecule and used as descriptors for explaining the properties/activities of chemical compounds. However, some substructural descriptors, in the form of topological distance based vertex indices, have been found to be useful in identifying activity-related substructures and in predicting the pharmacological and toxicological activities of bioactive compounds. Another important aspect of drug discovery, namely the design of novel pharmaceutical candidates, can also be approached through the distance distribution associated with such vertex indices. In this article, we review the development and applications of this approach both in activity prediction and in the design of novel compounds.
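The specific vertex indices reviewed by the authors are not defined in the abstract; as a generic illustration, the sketch below computes the simplest topological-distance-based vertex descriptor, the distance sum of each vertex (the sum of its shortest-path distances to all other vertices of the hydrogen-suppressed molecular graph), by breadth-first search. The adjacency list for n-butane is only an example.

from collections import deque

def vertex_distance_sums(adjacency):
    """For each vertex of an unweighted molecular graph, return the sum of
    topological (shortest-path) distances to all other vertices."""
    sums = {}
    for source in adjacency:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        sums[source] = sum(dist.values())
    return sums

# hydrogen-suppressed graph of n-butane: C1-C2-C3-C4
butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(vertex_distance_sums(butane))   # {1: 6, 2: 4, 3: 4, 4: 6}
# the whole-molecule Wiener index is half the total of these sums: (6+4+4+6)/2 = 10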

Relevance: 20.00%

Abstract:

This work focuses on the formulation of an asymptotically correct theory for symmetric composite honeycomb sandwich plate structures. In these panels, transverse stresses strongly influence the design. Conventional 2-D finite elements cannot predict the thickness-wise distributions of transverse shear or normal stresses and 3-D displacements, while the use of the more accurate three-dimensional finite elements is computationally prohibitive. The development of the present theory is based on the Variational Asymptotic Method (VAM). Its unique features are the identification and utilization of additional small parameters associated with the anisotropy and non-homogeneity of composite sandwich plate structures. These parameters are the ratio of the thickness of the facial layers to that of the core and the ratio of the 3-D stiffness coefficients of the core to those of the face sheets. Finally, anisotropy in the core and face sheets is addressed by the small parameters within the 3-D stiffness matrices. Numerical results are illustrated for several sample problems. The 3-D responses recovered using the VAM-based model are obtained in a much more computationally efficient manner than, and are in agreement with, available 3-D elasticity solutions and 3-D FE solutions from MSC NASTRAN. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

South peninsular India experiences a large portion of its annual rainfall during the northeast monsoon season (October to December). In this study, the facets of diurnal, intra-seasonal and inter-annual variability of the northeast monsoon rainfall (NEMR) over India have been examined. The analysis of satellite-derived hourly rainfall reveals distinct features of diurnal variation over the land and the oceans during the season. Over the land, rainfall peaks during the late afternoon/evening, while over the oceans an early morning peak is observed. Harmonic analysis of the hourly data reveals that the amplitude and variance are largest over south peninsular India. The NEMR also exhibits significant intra-seasonal variability on a 20-40 day time scale. The analysis also shows significant northward propagation of the maximum cloud zone from south of the equator to the south peninsula during the season. The NEMR exhibits large inter-annual variability, with a coefficient of variation (CV) of 25%. The positive phases of ENSO and the Indian Ocean Dipole (IOD) are conducive to normal to above-normal rainfall activity during the northeast monsoon. There are multi-decadal variations in the statistical relationship between ENSO and the NEMR, and during the period 2001-2010 this relationship weakened significantly. The analysis of seasonal rainfall hindcasts for the period 1960-2005 produced by the state-of-the-art ENSEMBLES coupled climate models reveals that the coupled models have very poor skill in predicting the inter-annual variability of the NEMR, mainly because of their inability to simulate the positive relationship between ENSO and the NEMR correctly. Copyright (C) 2012 Royal Meteorological Society
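As a small illustration of the inter-annual diagnostics quoted above (not the authors' code or data), the sketch below computes the coefficient of variation of a seasonal rainfall series and its correlation with an ENSO index, using two hypothetical NumPy arrays in place of the observed records.

import numpy as np

def coefficient_of_variation(x):
    """CV in percent: standard deviation as a fraction of the mean."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    return np.corrcoef(x, y)[0, 1]

# hypothetical October-December seasonal totals (mm) and a Nino-3.4 style index
rng = np.random.default_rng(0)
nem_rain = 330 + 80 * rng.standard_normal(50)
enso_idx = 0.4 * (nem_rain - nem_rain.mean()) / nem_rain.std() + 0.3 * rng.standard_normal(50)

print(f"CV of seasonal rainfall: {coefficient_of_variation(nem_rain):.1f}%")
print(f"ENSO-NEMR correlation:   {correlation(nem_rain, enso_idx):.2f}")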

Relevance: 20.00%

Abstract:

An unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of new architectural paradigms. The speculative multithreaded architecture (SpMT) philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when successful, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for SpMT architectures that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques utilizing the criticality information, namely delaying non-critical loads and criticality-based thread prediction, for reducing useless computation and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements the SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for criticality-based thread prediction and the criticality-based delayed-load scheme respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively. (c) 2012 Published by Elsevier B.V.
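The abstract does not detail the micro-execution model; one common way to define dynamic instruction criticality (assumed here, not necessarily the paper's exact formulation) is to build a latency-weighted dependence graph over the dynamic trace and mark as critical the instructions on the longest path. A minimal longest-path sketch over a trace-ordered DAG:

def critical_instructions(num_nodes, edges):
    """edges: list of (src, dst, latency) with src < dst (trace order, so the
    graph is already topologically ordered).  Returns the set of node ids on
    a longest path, i.e. the 'critical' instructions in this simplified model."""
    longest = [0] * num_nodes          # longest-path length ending at each node
    pred = [None] * num_nodes          # predecessor on that longest path
    for src, dst, lat in sorted(edges, key=lambda e: e[1]):
        if longest[src] + lat > longest[dst]:
            longest[dst] = longest[src] + lat
            pred[dst] = src
    node = max(range(num_nodes), key=lambda n: longest[n])
    critical = set()
    while node is not None:            # walk back along the critical path
        critical.add(node)
        node = pred[node]
    return critical

# toy 5-instruction trace with data dependences annotated by latency
print(critical_instructions(5, [(0, 1, 1), (0, 2, 3), (1, 3, 1), (2, 3, 1), (3, 4, 1)]))
# -> {0, 2, 3, 4}: the chain through the long-latency edge is critical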

Relevance: 20.00%

Abstract:

Practical use of machine learning is gaining strategic importance in enterprises looking for business intelligence. However, most enterprise data is distributed across multiple relational databases with expert-designed schemas. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting it to a flat form (mega-join), but also loses the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide-and-conquer approach. We propose a recursive prediction aggregation technique over heterogeneous classifiers applied to individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD and UCI benchmarks, and showed a considerable reduction in classification time without any loss of prediction accuracy. (C) 2012 Elsevier Ltd. All rights reserved.
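The exact two-phase algorithm is not spelled out in the abstract; the sketch below (scikit-learn assumed, table names and features hypothetical) shows the general meta-classification idea it builds on: fit one base classifier per table on that table's own features, then aggregate the per-table predictions with a second-level classifier instead of materializing a mega-join.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                      # shared target, one row per entity

# hypothetical per-table feature matrices keyed to the same entities
tables = {
    "customer": rng.standard_normal((200, 4)) + y[:, None],
    "orders":   rng.standard_normal((200, 6)) + 0.5 * y[:, None],
}
base_models = {"customer": DecisionTreeClassifier(max_depth=3),
               "orders":   RandomForestClassifier(n_estimators=50)}

# phase 1: one heterogeneous classifier per table, trained on that table alone
meta_features = []
for name, X in tables.items():
    base_models[name].fit(X, y)
    meta_features.append(base_models[name].predict_proba(X)[:, 1])

# phase 2: aggregate the per-table predictions with a meta-classifier
meta_X = np.column_stack(meta_features)
meta_clf = LogisticRegression().fit(meta_X, y)
print("training accuracy of the aggregated model:", meta_clf.score(meta_X, y))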

Relevance: 20.00%

Abstract:

CAELinux is a Linux distribution bundled with free software packages related to Computer Aided Engineering (CAE). The packages include software for building three-dimensional solid models, programs for meshing a geometry, software for carrying out Finite Element Analysis (FEA), programs for image processing, and so on. The present work has two goals: 1) to give a brief description of CAELinux, and 2) to demonstrate that CAELinux can be useful for Computer Aided Engineering, using as an example the three-dimensional reconstruction of a pig liver from a stack of CT-scan images. Reconstructing the liver with commercial software instead of CAELinux would be expensive, whereas CAELinux is a free and open source operating system and all of the software packages included in it are also free. Hence CAELinux can be a very useful tool in application areas such as surgical simulation that require three-dimensional reconstruction of biological organs, and for Computer Aided Engineering in general.

Relevance: 20.00%

Abstract:

Resistance to therapy limits the effectiveness of drug treatment in many diseases. Drug resistance can be considered a successful outcome of the bacterial struggle to survive in the hostile environment of a drug-exposed cell. An important mechanism by which bacteria acquire drug resistance is through mutations in the drug target. Drug resistant strains (multi-drug resistant and extensively drug resistant) of Mycobacterium tuberculosis are being identified at alarming rates, increasing the global burden of tuberculosis. An understanding of the nature of mutations in different drug targets, and of how they achieve resistance, is therefore important. An objective of this study is first to decipher the sequence and structural bases for the observed resistance in known drug resistant mutants, and then to predict positions in each target that are more prone to acquiring drug resistant mutations. A curated database containing hundreds of resistance-associated mutations in 38 drug targets of nine major clinical drugs is studied here. Mutations have been classified into those that occur in the binding site itself, those that occur in residues interacting with the binding site, and those that occur in outer zones. Structural models of the wild type and mutant forms of the target proteins have been analysed to seek explanations for the reduction in drug binding. A stability analysis of the entire array of 19 possible mutations at each residue of each target has been computed using the structural models. Conservation indices of individual residues, binding sites and whole proteins are computed based on sequence conservation analysis of the target proteins. The analyses lead to insights about which positions in the polypeptide chain have a higher propensity to acquire drug resistant mutations. Critical insights can thus be obtained about the effect of mutations on drug binding, in terms of which amino acid positions, and therefore which interactions, should not be heavily relied upon; this in turn can be translated into guidelines for modifying existing drugs as well as for designing new ones. The methodology can serve as a general framework to study drug resistant mutants in other micro-organisms as well.
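The abstract does not specify how the conservation indices are defined; a common choice (assumed here) is a normalized Shannon-entropy score per alignment column, which the sketch below computes from a toy multiple sequence alignment of hypothetical target-protein fragments.

import math
from collections import Counter

def conservation_indices(alignment):
    """Per-column conservation of a multiple sequence alignment (list of
    equal-length strings): 1 - H/H_max, where H is the Shannon entropy of the
    residue distribution in that column and H_max = log(20) for amino acids.
    1.0 means fully conserved; values near 0 mean highly variable."""
    h_max = math.log(20.0)
    scores = []
    for column in zip(*alignment):
        residues = [r for r in column if r != "-"]          # ignore gaps
        counts = Counter(residues)
        total = sum(counts.values())
        entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
        scores.append(1.0 - entropy / h_max)
    return scores

# toy alignment of a short stretch of a target protein (hypothetical sequences)
msa = ["MKVLAT", "MKVLST", "MKILAT", "MKVLGT"]
print([round(s, 2) for s in conservation_indices(msa)])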

Relevance: 20.00%

Abstract:

The Genetic Algorithm for Rule-set Prediction (GARP) and a Support Vector Machine (SVM), run with the free and open source software (FOSS) Open Modeller, were used to model probable landslide occurrence points. Environmental layers such as aspect, digital elevation, flow accumulation, flow direction, slope, land cover, compound topographic index and precipitation were used in the modelling. The simulated output of these techniques was validated against actual landslide occurrence points and showed 92% (GARP) and 96% (SVM) accuracy when precipitation in the wettest month was considered, and 91% and 94% accuracy when precipitation in the wettest quarter of the year was considered.
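Open Modeller itself is not shown here; the sketch below (scikit-learn assumed, all arrays hypothetical) only illustrates the SVM side of the workflow: stack the environmental layers into per-cell feature vectors, train an SVM on presence/absence samples, and report accuracy on held-out occurrence points.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# hypothetical samples: 8 environmental predictors (aspect, elevation, flow
# accumulation, flow direction, slope, land cover, CTI, precipitation) per cell
X = rng.standard_normal((400, 8))
y = (X[:, 4] + 0.8 * X[:, 7] + 0.3 * rng.standard_normal(400) > 0).astype(int)  # 1 = landslide

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)                       # fit on training cells only
model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(scaler.transform(X_train), y_train)
print("accuracy on held-out occurrence points:", model.score(scaler.transform(X_test), y_test))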

Relevance: 20.00%

Abstract:

High performance video standards use prediction techniques to achieve high picture quality at low bit rates. The type of prediction decides the bit rate and the image quality, and intra prediction achieves high video quality with a significant reduction in bit rate. This paper presents a novel area-optimized architecture for intra prediction in H.264 decoding at HDTV resolution. The architecture has been validated on a Xilinx Virtex-5 FPGA based platform and achieves a frame rate of 64 fps. It is based on a multi-level memory hierarchy to reduce latency and ensure optimum resource utilization, and it removes redundancy by reusing the same functional blocks across different modes. The proposed architecture uses only 13% of the total LUTs available on the Xilinx FPGA XC5VLX50T.
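The hardware design itself cannot be reproduced from the abstract; as a point of reference, the sketch below is a plain software rendering of three of the standard H.264 4x4 luma intra prediction modes (vertical, horizontal and DC) that such a decoder datapath implements, using NumPy and already-decoded neighbouring samples.

import numpy as np

def intra_4x4_vertical(top):
    """Mode 0: each column repeats the reconstructed sample above it."""
    return np.tile(np.asarray(top, np.int32), (4, 1))

def intra_4x4_horizontal(left):
    """Mode 1: each row repeats the reconstructed sample to its left."""
    return np.tile(np.asarray(left, np.int32).reshape(4, 1), (1, 4))

def intra_4x4_dc(top, left):
    """Mode 2 (both neighbours available): every sample is the rounded mean
    of the 4 samples above and the 4 samples to the left."""
    dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3
    return np.full((4, 4), dc, np.int32)

top  = [100, 102, 104, 106]   # samples above the 4x4 block (already decoded)
left = [ 98,  99, 101, 103]   # samples to the left of the block
print(intra_4x4_dc(top, left))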

Relevance: 20.00%

Abstract:

Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study aims at evaluating the performance of ANNs in simulating and predicting ground water levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, a Feed Forward Neural Network (FFNN) and a Recurrent Neural Network (RNN), each trained with five algorithms, namely the Levenberg-Marquardt, resilient backpropagation, BFGS quasi-Newton, scaled conjugate gradient, and Fletcher-Reeves conjugate gradient algorithms, by simulating the water levels in a well in the study area. The analysis considers two cases: one with four inputs to the networks and the other with eight inputs. The two networks and five algorithms are compared in both cases to determine the best performing combination that can simulate and predict the process satisfactorily. An ad hoc (trial and error) method is followed in optimizing the network structure in all cases. On the whole, the results show that the Artificial Neural Networks simulated and predicted the water levels in the well with fair accuracy. This is evident from low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient, which are taken as the performance measures for calibrating the networks. On comparing the ground water levels predicted with those observed at the well, the FFNN trained with the Fletcher-Reeves conjugate gradient algorithm using four inputs outperformed all other combinations.
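The performance measures named above have standard definitions (assumed here; the normalized RMSE is taken relative to the observed range, one common convention). The sketch below computes them for a pair of hypothetical observed and simulated water-level series.

import numpy as np

def performance_measures(observed, simulated):
    """Standard goodness-of-fit measures for simulated vs. observed series."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    nrmse = rmse / (obs.max() - obs.min())                       # normalized RMSE (by range)
    rrmse = rmse / obs.mean()                                    # relative RMSE
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    r = np.corrcoef(obs, sim)[0, 1]                              # correlation coefficient
    return {"NRMSE": nrmse, "RRMSE": rrmse, "NSE": nse, "R": r}

# hypothetical weekly water levels (m below ground) at the observation well
observed  = np.array([2.1, 2.3, 2.6, 3.0, 3.4, 3.1, 2.8, 2.5, 2.2, 2.0])
simulated = np.array([2.2, 2.4, 2.5, 2.9, 3.3, 3.2, 2.7, 2.6, 2.1, 2.1])
print({k: round(v, 3) for k, v in performance_measures(observed, simulated).items()})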

Relevance: 20.00%

Abstract:

In many real world prediction problems the output is a structured object like a sequence, a tree or a graph. Such problems range from natural language processing to computational biology and computer vision, and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and the stochastic gradient descent (SGD) method. Further, SDM reaches steady state generalization performance faster than the SGD method. The proposed SDM is thus a useful alternative for large scale structured output learning.
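The structural-SVM version of the method involves per-example dual variables over output structures and is not reproduced here; to convey the flavour of a sequential dual method, the sketch below implements the well-known dual coordinate update for a plain binary linear SVM (hinge loss), sweeping over one example at a time and updating its dual variable in closed form.

import numpy as np

def sequential_dual_svm(X, y, C=1.0, passes=20):
    """Dual coordinate ascent for a binary linear SVM (labels in {-1, +1}).
    Each pass visits the examples one at a time and updates that example's
    dual variable alpha_i in closed form, keeping w = sum_i alpha_i y_i x_i."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = np.einsum("ij,ij->i", X, X)
    for _ in range(passes):
        for i in range(n):
            grad = y[i] * X[i].dot(w) - 1.0              # gradient of the dual objective
            new_alpha = np.clip(alpha[i] - grad / sq_norms[i], 0.0, C)
            w += (new_alpha - alpha[i]) * y[i] * X[i]    # keep w consistent with alpha
            alpha[i] = new_alpha
    return w

# tiny linearly separable example
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 2)) + 2, rng.standard_normal((50, 2)) - 2])
y = np.hstack([np.ones(50), -np.ones(50)])
w = sequential_dual_svm(X, y)
print("training accuracy:", np.mean(np.sign(X.dot(w)) == y))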

Relevance: 20.00%

Abstract:

This paper presents the details of a crack growth study and remaining life assessment of concrete specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Flexural fatigue tests have been conducted on HSC, HSC1 and UHSC beams under constant amplitude loading with a stress ratio of 0.2. It is observed from the studies that (i) the failure patterns of the HSC1 and UHSC beams indicate their ductility, as the member was intact until the crack propagated up to 90% of the beam depth, (ii) the remaining life decreases with increasing notch depth, and (iii) the failure of the specimen is influenced by the frequency of loading. A "Net K" model has been proposed by using non-linear fracture mechanics principles for crack growth analysis and remaining life prediction. The stress intensity factor (SIF, K) has been computed using the principle of superposition. The SIF due to the cohesive forces applied on the effective crack face inside the process zone has been obtained through a Green's function approach, applying a bi-linear tension softening relationship to account for the cohesive stresses acting ahead of the crack tip. Remaining life values have been predicted and compared with the corresponding experimental values, and they are found to be in good agreement with each other.
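The "Net K" model itself is not given in the abstract; purely to illustrate how a remaining-life estimate follows once an effective SIF range is available, the sketch below numerically integrates a generic Paris-type crack growth law da/dN = C (dK)^m for a notched section, with the dK expression and all material constants chosen as placeholder assumptions rather than values from the paper.

import math

def remaining_life_cycles(a0, ac, delta_sigma, C, m, geometry_factor=1.12, steps=10000):
    """Integrate da/dN = C * (dK)^m from the initial notch depth a0 to the
    critical depth ac, with dK = Y * d_sigma * sqrt(pi * a).  Units: metres,
    MPa, MPa*sqrt(m); all constants here are illustrative placeholders."""
    da = (ac - a0) / steps
    cycles = 0.0
    a = a0
    for _ in range(steps):
        delta_k = geometry_factor * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * delta_k ** m)     # dN = da / (C * dK^m)
        a += da
    return cycles

# placeholder numbers: 5 mm notch growing to 45 mm, stress range 4 MPa,
# arbitrary illustrative Paris constants C and m
print(f"estimated remaining life: {remaining_life_cycles(0.005, 0.045, 4.0, 1e-7, 3.0):,.0f} cycles")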

Relevance: 20.00%

Abstract:

Structural Support Vector Machines (SSVMs) have recently gained wide prominence in classifying structured and complex objects like parse trees, image segments and Part-of-Speech (POS) tags. Typical learning algorithms used in training SSVMs result in model parameters which are vectors residing in a high-dimensional feature space. Such a high-dimensional model parameter vector contains many non-zero components, which often lead to slow prediction and storage issues. Hence there is a need for sparse parameter vectors which contain a very small number of non-zero components. The L1 regularizer and the elastic net regularizer have traditionally been used to obtain sparse model parameters. Though L1-regularized structural SVMs have been studied in the past, the use of the elastic net regularizer for structural SVMs has not yet been explored. In this work, we formulate the elastic net SSVM and propose a sequential alternating proximal algorithm to solve the dual formulation. We compare the proposed method with existing methods for L1-regularized Structural SVMs. Experiments on large-scale benchmark datasets show that the proposed dual elastic net SSVM trained using the sequential alternating proximal algorithm scales well and results in highly sparse model parameters while achieving comparable generalization performance. The proposed sequential alternating proximal algorithm is thus a competitive way to obtain sparse model parameters with comparable generalization performance when elastic net regularized Structural SVMs are used on very large datasets.
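The sequential alternating proximal algorithm is not described in the abstract; the sketch below shows only the elastic net proximal step that produces the sparsity discussed above: soft-thresholding from the L1 term followed by shrinkage from the L2 term, applied to a hypothetical parameter vector.

import numpy as np

def prox_elastic_net(v, lam1, lam2):
    """Proximal operator of  lam1 * ||w||_1 + (lam2 / 2) * ||w||^2  (step size
    absorbed into the lambdas): soft-threshold by lam1, then shrink by 1 + lam2.
    The soft-thresholding is what maps small components exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0) / (1.0 + lam2)

# hypothetical dense parameter vector before the proximal step
w = np.array([0.9, -0.05, 0.02, -1.3, 0.4, 0.003])
w_sparse = prox_elastic_net(w, lam1=0.1, lam2=0.5)
print(w_sparse)
print("non-zero components:", np.count_nonzero(w_sparse), "of", w.size)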