32 results for semi-automatic method
Abstract:
A semi-experimental approach for solving two-dimensional problems in elasticity is given. The method has been applied to two problems: (i) a square deep beam, and (ii) a bridge pier with a sloping boundary. For the first problem, sufficient analytical results are available, so the accuracy of the method can be verified. The method is then extended to the second problem, for which sufficient results are not available.
Abstract:
Main chain and segmental dynamics of polyisoprene (PI) and poly(methyl methacrylate) (PMMA) chains in semi IPNs were systematically studied over a wide range of temperatures (above and below the Tg of both polymers) as a function of composition, crosslink density, and molecular weight. The immiscible polymers retained most of their characteristic molecular motions; however, the semi IPN synthesis resulted in dramatic changes in the motional behavior of both polymers due to the molecular-level interpenetration between the two polymer chains. The ESR spin probe method was found to be sensitive to changes in PMMA concentration in the semi IPNs. Low-temperature spectra showed the characteristics of rigid-limit spectra, and in the range of 293-373 K complex spectra were obtained, with the slow component mostly arising out of the PMMA-rich regions and the fast component from the PI phase. We found that the rigid PMMA chains closely interpenetrating the highly mobile PI network impart motional restriction on nearby PI chains, and the highly mobile PI chains induce some degree of flexibility in the highly rigid PMMA chains. Molecular-level interchain mixing was found to be more efficient at a PMMA concentration of 35 wt.%. Moreover, the strong interphase formed in the above-mentioned semi IPN contributed to the large slow component in the ESR spectra at higher temperature. The shape of the spectra, along with the data obtained from spectral simulations, was correlated to the morphology of the semi IPNs. The correlation time measurements detected the motional regions associated with the glass transition of PI and PMMA, and these regions were found to follow the same pattern of shifts in the alpha-relaxation of PI and PMMA observed in DMA analysis. Activation energies associated with the Tg regions were also calculated. T50G was found to correlate with the Tg of PMMA, and the volume of polymer segments undergoing glass-transitional motion was calculated to be 1.7 nm^3. 13C T1rho measurements of PMMA carbons indicate that the molecular-level interactions were strong in the semi IPN irrespective of the immiscible nature of the polymers. The motional characteristics of H atoms attached to carbon atoms in both polymers were analyzed using 2D WISE NMR. Main relaxations of both components shifted inward, and both SEM and TEM analyses showed the development of a nanometer-sized morphology in the case of the highly crosslinked semi IPN. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A practical method is proposed to identify the mode associated with the frequency part of the eigenvalue of the Floquet transition matrix (FTM). From the FTM eigenvector, which contains the states and their derivatives, the ratio of the derivative to the state corresponding to the largest component is computed. The method exploits the fact that the imaginary part of this (complex) ratio closely approximates the frequency of the mode. It also lends itself well to automation and has been tested on a large number of FTMs of order as high as 250.
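A minimal numerical sketch of the ratio test described above, assuming the eigenvector stacks the states first and their time derivatives second; the ordering, function name and toy example are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def mode_frequency(eigvec, n_states):
        # split the FTM eigenvector into state and derivative partitions
        states = eigvec[:n_states]
        derivs = eigvec[n_states:2 * n_states]
        k = np.argmax(np.abs(states))      # component with the largest state magnitude
        ratio = derivs[k] / states[k]      # complex ratio derivative : state
        return abs(ratio.imag)             # imaginary part approximates the modal frequency

    # Toy check: for x(t) = exp((-0.1 + 2j) t), the vector [x, xdot] at t = 0
    # is [1, -0.1 + 2j]; the recovered frequency should be close to 2 rad per unit time.
    vec = np.array([1.0 + 0.0j, -0.1 + 2.0j])
    print(mode_frequency(vec, n_states=1))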
Abstract:
The unsteady laminar boundary layer flow of an electrically conducting fluid past a semi-infinite flat plate with an aligned magnetic field has been studied when, at time t > 0, the plate is impulsively moved with a constant velocity in the same or opposite direction to that of the free stream velocity. The effect of the induced magnetic field has been included in the analysis. The non-linear partial differential equations have been solved numerically using an implicit finite-difference method. The effect of the impulsive motion of the surface is found to be more pronounced on the skin friction, while its effect on the x-component of the induced magnetic field and on the heat transfer is small. A velocity defect occurs near the surface when the plate is impulsively moved in the same direction as the free stream velocity. The surface shear stress, the x-component of the induced magnetic field on the surface and the surface heat transfer decrease with an increasing magnetic field, but they increase with the reciprocal of the magnetic Prandtl number. However, the effect of the reciprocal of the magnetic Prandtl number is more pronounced on the x-component of the induced magnetic field. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
In this paper, we present a novel differential geometric characterization of two- and three-degree-of-freedom rigid body kinematics, using a metric defined on dual vectors. The instantaneous angular and linear velocities of a rigid body are expressed as a dual velocity vector, and a dual inner product is defined on this dual vector, resulting in a positive semi-definite, symmetric dual matrix. We show that the maximum and minimum magnitudes of the dual velocity vector, for a unit-speed motion, can be obtained as eigenvalues of this dual matrix. Furthermore, we show that the tip of the dual velocity vector lies on a dual ellipse for a two-degree-of-freedom motion and on a dual ellipsoid for a three-degree-of-freedom motion. In this manner, the velocity distribution of a rigid body can be studied algebraically in terms of the eigenvalues of a dual matrix or geometrically with the dual ellipse and ellipsoid. The second-order properties of the two- and three-degree-of-freedom motions of a rigid body are also obtained from the derivatives of the elements of the dual matrix. This results in a definition of the geodesic motion of a rigid body. The theoretical results are illustrated with the help of a spatial 2R and a parallel three-degree-of-freedom manipulator.
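A hedged LaTeX sketch of the eigenvalue characterization, assuming the dual velocity depends linearly on the joint rates through a dual Jacobian \hat{J} (the paper's exact notation and normalisation may differ):

    \[
    \hat{\mathbf{V}} \;=\; \boldsymbol{\omega} + \epsilon\,\mathbf{v} \;=\; \hat{J}\,\dot{\mathbf{q}}, \qquad \epsilon^{2} = 0,
    \]
    \[
    \hat{\mathbf{V}}\cdot\hat{\mathbf{V}} \;=\; \dot{\mathbf{q}}^{\mathsf{T}}\,\hat{G}\,\dot{\mathbf{q}}, \qquad \hat{G} = \hat{J}^{\mathsf{T}}\hat{J},
    \]
    \[
    \hat{\lambda}_{\min}(\hat{G}) \;\le\; \hat{\mathbf{V}}\cdot\hat{\mathbf{V}} \;\le\; \hat{\lambda}_{\max}(\hat{G}) \quad \text{for } \dot{\mathbf{q}}^{\mathsf{T}}\dot{\mathbf{q}} = 1,
    \]

so, by the usual Rayleigh-quotient argument, the extreme values attained over unit-speed motions are eigenvalues of the symmetric, positive semi-definite dual matrix \hat{G}; whether these correspond to the magnitude or its square depends on the normalisation adopted in the paper, which this sketch does not fix.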
Abstract:
An analytical method is developed for solving an inverse problem for Helmholtz's equation associated with two semi-infinite incompressible fluids of different variable refractive indices, separated by a plane interface. The unknowns of the inverse problem are: (i) the refractive indices of the two fluids, (ii) the ratio of the densities of the two fluids, and (iii) the strength of an acoustic source assumed to be situated at the interface of the two fluids. These are determined from the pressure on the interface produced by the acoustic source. The effect of the surface tension force at the interface is taken into account in this paper. The application of the proposed analytical method to solve the inverse problem is also illustrated with several examples. In particular, exact solutions of two direct problems are first derived using standard classical methods which are then used in our proposed inverse method to recover the unknowns of the corresponding inverse problems. The results are found to be in excellent agreement.
Abstract:
This paper formulates the automatic generation control (AGC) problem as a stochastic multistage decision problem. A strategy for solving this new AGC problem formulation is presented using a reinforcement learning (RL) approach. This method of obtaining an AGC controller does not depend on any knowledge of the system model and, more importantly, it admits considerable flexibility in defining the control objective. Two specific RL-based AGC algorithms are presented. The first algorithm uses the traditional control objective of limiting area control error (ACE) excursions, whereas in the second algorithm the controller can restore the load-generation balance by monitoring only the deviations in tie-line flows and system frequency; it does not need to know or estimate the composite ACE signal, as is done by all current approaches. The effectiveness and versatility of the approaches have been demonstrated using a two-area AGC model. (C) 2002 Elsevier Science B.V. All rights reserved.
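A minimal sketch of the RL machinery behind the first algorithm, assuming a tabular Q-learning agent acting on a discretized ACE signal; the state/action discretization, reward and parameter values are illustrative assumptions, not the paper's design:

    import numpy as np

    # Hypothetical discretization: ACE quantized into bins; actions are discrete
    # changes to the generation set-point.
    n_states, n_actions = 21, 5
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    def choose_action(s):
        # epsilon-greedy exploration over the discrete control actions
        return int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))

    def reward_from_ace(ace, band=0.01):
        # penalize ACE excursions beyond a dead-band (the first algorithm's objective)
        return 0.0 if abs(ace) <= band else -abs(ace)

    def q_update(s, a, r, s_next):
        # standard one-step Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])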
Abstract:
Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the number of interface nodes between the submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse graph approximation using an automatic graph coarsening method. The coarse graph is partitioned and the results are interpolated onto the original graph to initialise an optimisation of the graph partition problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final graph partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, and also to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions that are superior to those from spectral and multilevel graph partitioning algorithms.
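A compact sketch of a GA core for graph bisection, under illustrative assumptions (chromosome = binary partition vector; fitness = edge cut plus a balance penalty; no elitism); the multilevel coarsening/interpolation stage described in the abstract is deliberately omitted:

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(part, edges, penalty=10.0):
        # edge cut plus a penalty on imbalance between the two submeshes (lower is better)
        cut = sum(part[u] != part[v] for u, v in edges)
        imbalance = abs(part.sum() - len(part) / 2)
        return cut + penalty * imbalance

    def ga_bisect(n_nodes, edges, pop_size=40, gens=200, p_mut=0.05):
        pop = rng.integers(0, 2, size=(pop_size, n_nodes))
        for _ in range(gens):
            scores = np.array([fitness(ind, edges) for ind in pop])
            # binary tournament selection
            idx = [min(rng.integers(pop_size, size=2), key=lambda i: scores[i])
                   for _ in range(pop_size)]
            parents = pop[idx]
            # one-point crossover on consecutive parent pairs
            children = parents.copy()
            for k in range(pop_size // 2):
                c = rng.integers(1, n_nodes)
                children[2 * k, c:] = parents[2 * k + 1, c:]
                children[2 * k + 1, c:] = parents[2 * k, c:]
            # bit-flip mutation
            flips = rng.random(children.shape) < p_mut
            pop = np.where(flips, 1 - children, children)
        scores = np.array([fitness(ind, edges) for ind in pop])
        return pop[int(np.argmin(scores))]

    # toy 2x3 grid graph standing in for the deduced graph of a small mesh
    edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
    print(ga_bisect(6, edges))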
Abstract:
This paper presents a new algorithm for extracting Free-Form Surface Features (FFSFs) from a surface model. The extraction algorithm is based on a taxonomy of FFSFs modified from that proposed in the literature. A new classification scheme has been proposed for FFSFs to enable their representation and extraction. The paper proposes a separating curve as the signature of an FFSF in a surface model. FFSFs are classified based on the characteristics of the separating curve (number and type) and the influence region (the region enclosed by the separating curve). A method to extract these entities is presented. The algorithm has been implemented and tested for various free-form surface features on different types of free-form (base) surfaces and is found to correctly identify and represent the features irrespective of the type of underlying surface. The representation and the extraction algorithm are both based on topology and geometry. The algorithm is data-driven and does not use any pre-defined templates. The definition presented for a feature is unambiguous and application independent. The proposed classification of FFSFs can be used to develop an ontology to determine semantic equivalences so that features can be exchanged, mapped and used across PLM applications. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents the design and implementation of a learning controller for Automatic Generation Control (AGC) in power systems based on a reinforcement learning (RL) framework. In contrast to the recent RL scheme for AGC proposed by us, the present method permits handling of power system variables such as Area Control Error (ACE) and deviations from scheduled frequency and tie-line flows as continuous variables. (In the earlier scheme, these variables had to be quantized into finitely many levels.) The optimal control law is arrived at in the RL framework by making use of a Q-learning strategy. Since the state variables are continuous, we propose the use of Radial Basis Function (RBF) neural networks to compute the Q-values for a given input state. Since in this application we cannot provide training data appropriate for the standard supervised learning framework, a reinforcement learning algorithm is employed to train the RBF network. We also employ a novel exploration strategy, based on a Learning Automata algorithm, for generating training samples during Q-learning. The proposed scheme, in addition to being simple to implement, inherits all the attractive features of an RL scheme, such as model-independent design, flexibility in control objective specification, and robustness. Two implementations of the proposed approach are presented. Through simulation studies the attractiveness of this approach is demonstrated.
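A minimal sketch of Q-value computation with an RBF network over a continuous AGC state, trained by a semi-gradient Q-learning step; the state variables (ACE, frequency deviation, tie-line flow deviation), centre placement, widths and learning rates are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_centers, n_actions, state_dim = 25, 5, 3      # state = (ACE, delta_f, delta_Ptie)
    centers = rng.uniform(-1.0, 1.0, size=(n_centers, state_dim))
    sigma = 0.5
    W = np.zeros((n_actions, n_centers))            # one weight vector per discrete action

    def phi(s):
        # Gaussian RBF features of the continuous state
        d2 = np.sum((centers - s) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def q_values(s):
        return W @ phi(s)

    def td_update(s, a, r, s_next, alpha=0.05, gamma=0.95):
        # semi-gradient Q-learning step on the RBF weights
        target = r + gamma * np.max(q_values(s_next))
        W[a] += alpha * (target - q_values(s)[a]) * phi(s)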
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its important tasks. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from the original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing the noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks constitute a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data have been used for the experiments.
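A hedged sketch of the preprocessing idea described above, median filtering followed by keeping only elongated connected components, using only generic image operations; the thresholds and the elongation criterion (ratio of principal-axis variances) are illustrative assumptions, and the Level Set / Mean Shift stages are not reproduced here:

    import numpy as np
    from scipy import ndimage

    def elongated_road_mask(binary, min_elongation=4.0, min_size=50):
        # remove small nonlinear noise segments with a median filter
        cleaned = ndimage.median_filter(binary.astype(np.uint8), size=3)
        labels, n = ndimage.label(cleaned)
        out = np.zeros(cleaned.shape, dtype=bool)
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            if ys.size < min_size:
                continue
            cov = np.cov(np.vstack([ys, xs]))          # 2x2 shape covariance of the region
            e = np.sort(np.linalg.eigvalsh(cov))       # principal-axis variances
            if e[0] == 0 or np.sqrt(e[1] / e[0]) >= min_elongation:
                out |= labels == k                     # elongated region: keep as road candidate
        return out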
Abstract:
The tonic is a fundamental concept in Indian art music. It is the base pitch that an artist chooses in order to construct the melodies during a raga rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and raga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts, such as the presence/absence of additional metadata, the quality of the audio data, the duration of the audio data, the music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
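A minimal sketch of the pitch-histogram idea underlying many expert-knowledge approaches: fold a predominant-pitch track into one octave and rank the strongest bins as tonic candidates. The reference pitch, bin resolution and function name are assumptions for illustration; the multi-pitch plus machine-learning methods favoured in the paper are not reproduced here:

    import numpy as np

    def tonic_candidates(f0_hz, n_candidates=5, bins_per_octave=120):
        f0 = f0_hz[f0_hz > 0]                              # drop unvoiced frames
        cents = 1200.0 * np.log2(f0 / 55.0)                # 55 Hz reference (assumption)
        folded = np.mod(cents, 1200.0)                     # fold into a single octave
        hist, edges = np.histogram(folded, bins=bins_per_octave, range=(0.0, 1200.0))
        order = np.argsort(hist)[::-1][:n_candidates]      # strongest histogram bins
        centres = (edges[order] + edges[order + 1]) / 2.0
        return 55.0 * 2.0 ** (centres / 1200.0)            # candidate tonic pitches in Hz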
Abstract:
The formulation of higher-order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher-order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation and higher-dimensional quadrature, the relevant finite element matrices are directly computed from the variational statement of elasticity and the higher-order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are implicitly handled for arbitrary higher-order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.
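A toy sketch of the underlying idea, obtaining the internal force vector and tangent stiffness directly by differentiating an element's stored energy, for a single two-node hyperelastic bar. The paper drives this step with automatic differentiation; this dependency-free sketch swaps in central finite differences, and the energy density and parameters are illustrative assumptions:

    import numpy as np

    def element_energy(u, L=1.0, E=210e3, A=1.0):
        # St. Venant-Kirchhoff energy of a two-node bar; u = [u1, u2] nodal displacements.
        # One-point quadrature is exact here because the integrand is constant over the element.
        F = 1.0 + (u[1] - u[0]) / L          # 1D deformation gradient
        Eg = 0.5 * (F ** 2 - 1.0)            # Green-Lagrange strain
        return 0.5 * E * Eg ** 2 * A * L     # energy density times element volume

    def force_and_stiffness(energy, u, h=1e-4):
        # gradient and Hessian of the energy with respect to the nodal DOFs,
        # here by central differences (a stand-in for automatic differentiation)
        n = len(u)
        f = np.zeros(n)
        K = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = h
            f[i] = (energy(u + ei) - energy(u - ei)) / (2.0 * h)
            for j in range(n):
                ej = np.zeros(n); ej[j] = h
                K[i, j] = (energy(u + ei + ej) - energy(u + ei - ej)
                           - energy(u - ei + ej) + energy(u - ei - ej)) / (4.0 * h ** 2)
        return f, K

    f, K = force_and_stiffness(element_energy, np.array([0.0, 0.01]))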
Abstract:
A phase field modelling approach is implemented in the present study to simulate microstructure evolution during the cooling slope semi-solid slurry generation process of A380 aluminium alloy. First, experiments are performed to evaluate the number of seeds required within the simulation domain to simulate the near-spherical microstructure formation that occurs during cooling slope processing of the melt. Subsequently, microstructure evolution is studied employing a phase field method. Simulations are performed to understand the effect of cooling rate on the slurry microstructure. Encouraging results are obtained from the simulation studies, which are validated by experimental observations. The results obtained from the mesoscopic phase field simulations are the grain size, grain density, degree of sphericity of the evolving primary Al phase and the amount of solid fraction present within the slurry at different time frames. The effect of grain refinement has also been studied with the aim of further improving the slurry microstructure. The numerical findings provide insight into the process and are found to be useful for process control.
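For orientation, a minimal sketch of one explicit time step of a simple Allen-Cahn-type phase-field equation on a periodic grid; the actual simulations couple additional fields (solute, temperature) and alloy-specific parameters for A380, none of which are modelled here, and all symbols and values below are illustrative assumptions:

    import numpy as np

    def phase_field_step(phi, dt=1e-3, dx=1.0, M=1.0, eps=1.0):
        # 5-point Laplacian with periodic boundaries
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx ** 2
        dF = phi ** 3 - phi                   # derivative of the double-well potential
        return phi + dt * M * (eps ** 2 * lap - dF)

    # evolve a few planted seeds in a 128 x 128 domain (purely illustrative)
    rng = np.random.default_rng(0)
    phi = -np.ones((128, 128))
    for y, x in rng.integers(10, 118, size=(8, 2)):
        phi[y - 2:y + 3, x - 2:x + 3] = 1.0
    for _ in range(500):
        phi = phase_field_step(phi)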
Abstract:
The present work reports the results of an experimental investigation of semi-solid rheocasting of A356 Al alloy using a cooling slope. The experiments have been carried out following the Taguchi method of parameter design (an L9 orthogonal array of experiments). Four key process variables (slope angle, pouring temperature, wall temperature, and length of travel of the melt) at three different levels have been considered. Regression analysis and analysis of variance (ANOVA) have also been performed, respectively, to develop a mathematical model for the degree of sphericity of the primary alpha-Al phase and to find the significance and percentage contribution of each process variable to the final degree of sphericity. Based on the mean response and the signal-to-noise ratio (SNR), the best processing condition for the optimum degree of sphericity (0.83) has been identified as A3, B3, C2, D1, i.e., a slope angle of 60 degrees, a pouring temperature of 650 degrees C, a wall temperature of 60 degrees C, and a 500 mm length of travel of the melt. The ANOVA results show that the length of travel has the maximum impact on the evolution of the degree of sphericity. The sphericity predicted by the developed regression model and the values obtained experimentally are found to be in good agreement. The sphericity values obtained from the confirmation experiment, performed at the 95% confidence level, ensure that the optimum result is correct and that the confirmation experiment values are within permissible limits. (c) 2014 Elsevier Ltd. All rights reserved.
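A small sketch of the Taguchi-style analysis referred to above: the larger-the-better signal-to-noise ratio and a main-effects (mean response per level) table for one factor. The level assignments and sphericity values below are placeholders for illustration only, not the paper's L9 design or measurements:

    import numpy as np

    def snr_larger_is_better(y):
        # Taguchi larger-the-better S/N ratio: -10 log10( mean(1 / y^2) )
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    def mean_response(levels, snr):
        # average S/N ratio of a factor at each of its levels (main-effects table)
        levels, snr = np.asarray(levels), np.asarray(snr)
        return {int(lv): float(snr[levels == lv].mean()) for lv in np.unique(levels)}

    # placeholder sphericity values for nine trials and a placeholder level column
    sphericity = [0.60, 0.65, 0.70, 0.68, 0.73, 0.78, 0.74, 0.79, 0.82]
    snr = [snr_larger_is_better([s]) for s in sphericity]
    slope_angle_levels = [1, 1, 1, 2, 2, 2, 3, 3, 3]   # e.g. column A of an L9 array
    print(mean_response(slope_angle_levels, snr))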