59 results for Filmic approach methods
Abstract:
Carbon fiber reinforced polymer (CFRP) composite specimens with different thicknesses, geometries, and stacking sequences were subjected to fatigue spectrum loading in stages. Another set of specimens was subjected to static compression load. On-line acoustic emission (AE) monitoring was carried out during these tests. Two artificial neural networks, a Kohonen self-organizing feature map (KSOM) and a multi-layer perceptron (MLP), were developed for AE signal analysis. AE signals from specimens were clustered using the unsupervised-learning KSOM. These clusters were correlated with the failure modes using available a priori information such as AE signal amplitude distributions, time of occurrence of signals, ultrasonic imaging, design of the laminates (stacking sequences, orientation of fibers), and AE parametric plots. Thereafter, AE signals generated from the rest of the specimens were classified by the supervised-learning MLP. The network developed is suitable for on-line monitoring of AE signals in the presence of noise and can be used for detection and identification of failure modes and their growth. The results indicate that the characteristics of AE signals from different failure modes in CFRP remain largely unaffected by the type of load, fiber orientation, and stacking sequence, as they are representative of the underlying failure phenomena. The type of loading affects only the extent of damage sustained before the specimens fail and hence the number of AE signals recorded during the test. The artificial neural networks (ANN) developed, and the methods and procedures adopted, show significant success in AE signal characterization in a noisy environment (detection and identification of failure modes and their growth).
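A minimal sketch of the two-stage pipeline this abstract describes, assuming each AE hit has already been reduced to a parametric feature vector (amplitude, energy, counts, duration, etc.). The MiniSom and scikit-learn APIs, the file names, and the map/network sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: unsupervised SOM clustering of AE feature vectors, then a supervised
# MLP trained on the cluster labels for on-line classification of new specimens.
import numpy as np
from minisom import MiniSom                      # third-party SOM implementation
from sklearn.neural_network import MLPClassifier

# One row per AE hit, e.g. [amplitude, energy, counts, duration, rise_time] (assumed features).
ae_features = np.load("ae_features_trainset.npy")        # hypothetical file

# Stage 1: Kohonen self-organizing map groups hits into clusters (candidate failure modes).
som = MiniSom(4, 4, ae_features.shape[1], sigma=1.0, learning_rate=0.5)
som.train_random(ae_features, num_iteration=5000)
cluster_labels = np.array([np.ravel_multi_index(som.winner(x), (4, 4)) for x in ae_features])

# Stage 2: after the clusters are correlated with failure modes (amplitude distributions,
# ultrasonic imaging, lay-up information), those labels supervise an MLP that classifies
# AE hits from the remaining specimens.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(ae_features, cluster_labels)

new_hits = np.load("ae_features_new_specimen.npy")       # hypothetical file
predicted_modes = mlp.predict(new_hits)
```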
Abstract:
Skew correction of complex document images is a difficult task. We propose an edge-based connected component approach for robust skew correction of documents with complex layout and content. The algorithm consists of two steps: an 'initialization' step to determine the image orientation from the centroids of the connected components, and a 'search' step to find the actual skew of the image. During initialization, we choose two different sets of points regularly spaced across the image, one from left to right and the other from top to bottom. The image orientation is determined from the slope between the two successive nearest neighbors of each of the points in the chosen set. The search step finds successive nearest neighbors that satisfy the parameters obtained in the initialization step. The final skew is determined from the slopes obtained in the 'search' step. Unlike other connected component based methods, the proposed method does not require the binarization step that generally precedes connected component analysis. The method works well for scanned documents with complex layout at any skew, with a precision of 0.5 degrees.
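A rough sketch of the nearest-neighbour-of-centroids idea: label connected components of an edge map (no binarization), take their centroids, and estimate the skew from the slopes between neighbouring centroids. The gradient-based edge detector, thresholds, and use of a simple median are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def estimate_skew(gray):
    # Edge map instead of binarization: gradient magnitude threshold (assumed value).
    gy, gx = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy) > 30.0

    # Connected components of the edge map and their centroids.
    labels, n = ndimage.label(edges)
    centroids = np.array(ndimage.center_of_mass(edges, labels, range(1, n + 1)))

    # Slopes between successive centroids taken left to right.
    order = np.argsort(centroids[:, 1])
    pts = centroids[order]
    angles = np.degrees(np.arctan2(np.diff(pts[:, 0]), np.diff(pts[:, 1])))

    # Keep near-horizontal pairs (text lines) and take the median as the skew estimate.
    angles = angles[np.abs(angles) < 15]          # assumed search window
    return float(np.median(angles)) if angles.size else 0.0
```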
Abstract:
Feature track matrix factorization based methods have been attractive solutions to the Structure-from-motion (SfM) problem. Group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to rank deficient systems of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable on rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the Structure-from-motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting dense 3D structure of a human face and comparing it with a well-known Structure-from-motion algorithm due to Brand.
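A compact sketch of the two ingredients described here: a rank-constrained factorization of the feature-track matrix (in the Tomasi-Kanade style) followed by projection of the recovered structure onto a PCA subspace learned from registered range-scan models. The matrix shapes, the plain rank-3 affine model, and the flattening convention are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def factorize_tracks(W):
    """W: 2F x P matrix of centred feature tracks (F frames, P points)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                    # 2F x 3 motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]             # 3 x P structure (up to an affine ambiguity)
    return M, S

def project_to_subspace(S, scan_models, k=10):
    """Constrain the structure to the span of k principal components of range-scan models.
    scan_models: N x 3P matrix, each row a registered model flattened as [x1, y1, z1, x2, ...]."""
    mean = scan_models.mean(axis=0)
    _, _, Vt = np.linalg.svd(scan_models - mean, full_matrices=False)
    basis = Vt[:k]                                   # k x 3P
    s_vec = S.T.reshape(-1)                          # flatten 3 x P -> 3P (same convention)
    coeffs = basis @ (s_vec - mean)
    return (mean + basis.T @ coeffs).reshape(-1, 3).T    # back to 3 x P, noise outside the subspace removed
```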
Abstract:
A new approach for unwrapping phase maps, obtained during the measurement of 3-D surfaces using the sinusoidal structured light projection technique, is proposed. Takeda's method is used to obtain the wrapped phase map. The proposed method of unwrapping makes use of an additional image of the object captured under the illumination of a specifically designed color-coded pattern. The new approach demonstrates, for the first time, a method of producing reliable unwrapping of objects even with surface discontinuities from a single phase map. It is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, if a measurement with the accuracy obtained by interrogating the object with S fringes in the projected pattern is carried out with both methods, the new method requires only 2 frames as compared to the (log₂S + 1) frames required by the latter method.
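A minimal sketch of the two pieces named in the abstract: Takeda's Fourier-transform method recovers the wrapped phase from a single fringe image, and a fringe-order map decoded from the colour-coded image resolves the 2π ambiguities in one step. Carrier selection and the decoding of the colour pattern are assumed to be available; they are not reproduced here.

```python
import numpy as np

def wrapped_phase_takeda(fringe_row, carrier_bin, half_band):
    """Wrapped phase of one image row from a single fringe pattern (Takeda's method)."""
    F = np.fft.fft(fringe_row)
    band = np.zeros_like(F)
    sl = slice(carrier_bin - half_band, carrier_bin + half_band + 1)
    band[sl] = F[sl]                 # keep only the +1 carrier lobe
    analytic = np.fft.ifft(band)
    # Wrapped to (-pi, pi]; the carrier ramp can be removed by subtracting the
    # phase of a flat reference plane processed the same way.
    return np.angle(analytic)

def unwrap_with_fringe_order(wrapped, fringe_order):
    """Absolute phase from the wrapped phase and the colour-decoded fringe order k(x, y)."""
    return wrapped + 2.0 * np.pi * fringe_order
```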
Abstract:
In this paper, an analytical study considering the effect of uncertainties in the seismic analysis of geosynthetic-reinforced soil (GRS) walls is presented. Using the limit equilibrium method and assuming a sliding wedge failure mechanism, an analysis is conducted to evaluate the external stability of GRS walls subjected to earthquake loads. A target reliability based approach is used to estimate the probability of failure in three modes of failure, viz., sliding, bearing, and eccentricity failure. The properties of the reinforced backfill, retained backfill, foundation soil, and geosynthetic reinforcement are treated as random variables. In addition, the uncertainties associated with horizontal seismic acceleration and the surcharge load acting on the wall are considered. The optimum length of reinforcement needed to maintain stability against the three modes of failure by targeting various component and system reliability indices is obtained. Studies have also been made of the influence of various parameters on the seismic stability in the three failure modes. The results are compared with those given by the first-order second-moment method and Monte Carlo simulation. In the illustrative example, the external stability of two walls, the Gould and Valencia walls, subjected to the Northridge earthquake is reexamined.
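A hedged sketch of the Monte Carlo check used for comparison: sample the uncertain inputs, evaluate a pseudo-static sliding limit-state function, and estimate the probability of failure and reliability index. The limit-state expression, distributions, and parameter values below are illustrative placeholders, not the paper's formulation or data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000

phi = np.radians(rng.normal(34.0, 2.0, N))     # backfill friction angle (deg -> rad), assumed
kh  = rng.uniform(0.05, 0.25, N)               # horizontal seismic coefficient, assumed
q   = rng.normal(10.0, 2.0, N)                 # surcharge (kPa), assumed
H, L, gamma = 6.0, 4.5, 18.0                   # wall height, reinforcement length, unit weight (assumed)

# Illustrative sliding limit state: base friction minus (static + surcharge + seismic inertia) thrust.
Ka     = np.tan(np.pi / 4 - phi / 2) ** 2      # Rankine active earth pressure coefficient
thrust = 0.5 * Ka * gamma * H**2 + Ka * q * H + kh * gamma * H * L
resist = (gamma * H * L + q * L) * np.tan(phi)
g      = resist - thrust                       # g < 0 means sliding failure

pf   = np.mean(g < 0)
beta = -norm.ppf(pf) if 0 < pf < 1 else float("inf")
print(f"P(sliding failure) ~ {pf:.4f}, reliability index beta ~ {beta:.2f}")
```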
Abstract:
This paper presents an Artificial Neural Network (ANN) approach for locating faults in distribution systems. Unlike traditional fault section estimation methods, the proposed approach uses only limited measurements. Faults are located according to the impedances of their path using a feed-forward neural network (FFNN). Various practical situations in distribution systems are considered in the studies, such as protective devices placed only at the substation, limited measurements available, various types of faults, viz., three-phase, line (a, b, c) to ground, line to line (a-b, b-c, c-a), and line to line to ground (a-b-g, b-c-g, c-a-g) faults, and a wide range of short circuit levels at the substation. A typical IEEE 34-bus practical distribution system with unbalanced loads and with three- and single-phase laterals, and a 69-node test feeder with different configurations, are used in the studies. The results presented show that the proposed approach gives estimates close to the actual fault location.
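A small sketch of the core mapping this abstract describes: a feed-forward network trained on simulated fault cases that maps substation measurements to an estimated fault location. The feature choice (per-phase voltage and current phasor magnitudes and angles), network size, and file names are assumptions made for illustration, not the paper's exact inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: rows of [|Va|,|Vb|,|Vc|,|Ia|,|Ib|,|Ic|, ang(Va), ...]; y: fault distance along the feeder (km).
X_train = np.load("fault_features_train.npy")      # hypothetical simulated fault cases
y_train = np.load("fault_distance_train.npy")

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

X_new = np.load("fault_features_new.npy")          # measurements for an unseen fault
print("estimated fault distance (km):", model.predict(X_new))
```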
Abstract:
The sinusoidal structured light projection (SSLP) technique, specifically the phase-stepping method, is in widespread use to obtain accurate, dense 3-D data. But if the object under investigation possesses surface discontinuities, the phase unwrapping stage (an intermediate step in SSLP) mandatorily requires several additional images of the object with projected fringes (of different spatial frequencies) as input to generate a reliable 3D shape. On the other hand, the color-coded structured light projection (CSLP) technique is known to require a single image as input, but generates sparse 3D data. We therefore propose the use of CSLP in conjunction with SSLP to obtain dense 3D data with a minimum number of input images. This approach is shown to be significantly faster and more reliable than the temporal phase unwrapping procedure that uses a complete exponential sequence. For example, if a measurement with the accuracy obtained by interrogating the object with 32 fringes in the projected pattern is carried out with both methods, the new strategy requires only 5 frames as compared to the 24 frames required by the latter method.
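A brief sketch of how the two projections combine, complementing the Takeda-based sketch above: a four-step phase-shifted sinusoidal set yields the dense wrapped phase, and the fringe order decoded from the single colour-coded image removes the 2π ambiguity without further fringe images. Four phase steps and the availability of a decoded order map are assumptions for illustration.

```python
import numpy as np

def wrapped_phase_four_step(I0, I1, I2, I3):
    """Wrapped phase from four sinusoidal patterns shifted by pi/2:
    I_n = A + B*cos(phi + n*pi/2), so phi = arctan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

def dense_absolute_phase(wrapped, fringe_order):
    """Dense absolute phase: wrapped SSLP phase plus 2*pi times the CSLP-decoded fringe order."""
    return wrapped + 2.0 * np.pi * fringe_order
```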
Abstract:
A complete vibrational analysis was performed on the molecular structure of boldine hydrochloride using the QM/MM method. The equilibrium geometry, harmonic vibrational frequencies, and infrared intensities were calculated by the QM/MM method with the B3LYP/6-31G(d) and universal force field (UFF) combination using the ONIOM code. We found the geometry obtained by the QM/MM method to be very accurate, so this rapid method can be used in place of time-consuming ab initio methods for large molecules. A detailed interpretation of the infrared spectra of boldine hydrochloride is reported. The scaled theoretical wavenumbers are in perfect agreement with the experimental values. The FT-IR spectra of boldine hydrochloride in the region 4000-500 cm(-1) were recorded in CsI (solid phase) and in chloroform at concentrations of 5 and 10 mg/ml.
Abstract:
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in this paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on the ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than previous methods based on gradient descent search, the overhead of the GA in computing the input distributions is larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space. This feature makes the cost function amenable to optimization by gradient-descent techniques as compared to random search methods such as Genetic Algorithms.
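A toy sketch of a genetic algorithm searching over primary-input signal probabilities. The fitness function below is only a placeholder standing in for the COP-based cost; in the paper it would be computed from COP detection probabilities for the circuit's faults. Population size, operators, and constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N_INPUTS, POP, GENS = 8, 40, 60

def fitness(p):
    # Placeholder objective; replace with a COP-based expected fault-detection cost
    # evaluated on a real circuit's signal probabilities.
    return -np.sum((p - 0.5) ** 2)

pop = rng.random((POP, N_INPUTS))                   # each row: one input probability vector
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    # Binary tournament selection.
    idx = rng.integers(0, POP, (POP, 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Uniform crossover with a reversed copy of the parent pool, then Gaussian mutation.
    mask = rng.random((POP, N_INPUTS)) < 0.5
    children = np.where(mask, parents, parents[::-1])
    children += rng.normal(0.0, 0.05, children.shape)
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best input distribution:", np.round(best, 3))
```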
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, where a narrow-band signal source has been considered and Gaussian noise sources, which produce a spatially correlated background noise, have been distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both bias and variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much larger data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that the estimation of the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
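A short sketch of the covariance-based MUSIC baseline against which the cumulant variant is compared: form the sample covariance, take the noise subspace from its smallest eigenvectors, and scan a steering vector over candidate angles. The uniform linear array with half-wavelength spacing is a generic textbook choice, not the paper's simulation setup.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg):
    """X: M x T complex snapshot matrix (M sensors, T snapshots)."""
    M, T = X.shape
    R = X @ X.conj().T / T                            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)              # ascending eigenvalues
    En = eigvecs[:, : M - n_sources]                  # noise subspace
    spectrum = []
    for theta in np.radians(angles_deg):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # ULA steering vector, d = lambda/2
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Peaks of music_spectrum(X, n_sources, np.arange(-90, 90, 0.5)) give the DoA estimates.
```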
Abstract:
This paper deals with the development of a new model for the cooling process on the runout table of hot strip mills. The suitability of different numerical methods for the solution of the proposed model equation, from the point of view of accuracy and computation time, is studied. Parallel solutions for the model equation are proposed.
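A minimal sketch of one candidate numerical method for such a cooling model: an explicit finite-difference solution of one-dimensional heat conduction across the strip thickness with convective cooling at the surfaces. The property values, heat transfer coefficient, and grid are illustrative, not the mill data used in the paper.

```python
import numpy as np

k, rho, cp = 30.0, 7800.0, 650.0          # steel conductivity, density, specific heat (assumed)
alpha = k / (rho * cp)
h, T_water = 2000.0, 30.0                 # convective coefficient (W/m^2/K), coolant temperature (assumed)
thickness, n = 0.004, 41                  # 4 mm strip, grid points through the thickness
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / alpha                  # stable for the explicit scheme (Fourier number <= 0.5)

T = np.full(n, 850.0)                     # initial strip temperature (deg C)
for _ in range(int(2.0 / dt)):            # simulate 2 s of cooling on the runout table
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Convective (Robin) boundaries at the top and bottom surfaces.
    Tn[0]  = T[0]  + 2 * alpha * dt / dx**2 * (T[1]  - T[0])  - 2 * h * dt / (rho * cp * dx) * (T[0]  - T_water)
    Tn[-1] = T[-1] + 2 * alpha * dt / dx**2 * (T[-2] - T[-1]) - 2 * h * dt / (rho * cp * dx) * (T[-1] - T_water)
    T = Tn
print("surface / centre temperature after 2 s:", round(T[0], 1), round(T[n // 2], 1))
```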
Abstract:
We discuss three methods to correct spherical aberration for a point-to-point imaging system. First, results obtained using Fermat's principle and the ray tracing method are described briefly. Next, we obtain solutions using Lie algebraic techniques. Even though one cannot always obtain analytical results using this method, it is often more powerful than the first method. The result obtained with this approach is compared and found to agree with the exact result of the first method.
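For context, the condition underlying the first method can be stated compactly: Fermat's principle requires that, for aberration-free (stigmatic) point-to-point imaging, every ray from the object point O to the image point I has the same optical path length. The notation below is generic, not the paper's.

```latex
% Stigmatic imaging condition from Fermat's principle:
% the optical path length is the same along every ray from O to I.
\int_{O}^{I} n(\mathbf{r})\,\mathrm{d}s \;=\; \text{constant (independent of the ray)}
```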
Abstract:
Perfect or even mediocre weather predictions over a long period are almost impossible because of the ultimate growth of a small initial error into a significant one. Even though sensitivity to initial conditions limits the predictability of chaotic systems, an ensemble of predictions from different possible initial conditions, together with a prediction algorithm capable of resolving the fine structure of the chaotic attractor, can reduce the prediction uncertainty to some extent. The traditional chaotic prediction methods in hydrology are all based on single optimum initial condition local models, which can model the sudden divergence of the trajectories with different local functions. Conceptually, global models are ineffective in modeling the highly unstable structure of the chaotic attractor. This paper focuses on an ensemble prediction approach that reconstructs the phase space using different combinations of the chaotic parameters, i.e., embedding dimension and delay time, to quantify the uncertainty in initial conditions. The ensemble approach is implemented through a local-learning wavelet network model with a global feed-forward neural network structure for the phase space prediction of chaotic streamflow series. Uncertainties in future predictions are quantified by creating an ensemble of predictions with the wavelet network using a range of plausible embedding dimensions and delay times. The ensemble approach proves to be 50% more efficient than a single prediction for both the local approximation and wavelet network approaches. The wavelet network approach proves to be 30%-50% superior to the local approximation approach. Compared to the traditional local approximation approach with a single initial condition, the total predictive uncertainty in the streamflow is reduced when modeled with ensemble wavelet networks for different lead times. The localization property of wavelets, utilizing different dilation and translation parameters, helps in capturing most of the statistical properties of the observed data. The need to take into account all plausible initial conditions, and to bring together the characteristics of both local and global approaches to model the unstable yet ordered chaotic attractor of a hydrologic series, is clearly demonstrated.
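A hedged sketch of the ensemble idea: reconstruct the phase space of a streamflow series with several plausible (embedding dimension, delay time) pairs, fit one predictor per reconstruction, and treat the spread of the forecasts as the uncertainty band. A k-nearest-neighbour local predictor stands in here for the paper's wavelet network, which is a more elaborate local-learning model.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def delay_embed(x, m, tau):
    """Phase-space reconstruction: row j is [x(j), x(j+tau), ..., x(j+(m-1)*tau)]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def ensemble_forecast(x, horizon, dims=(2, 3, 4), taus=(1, 2, 3)):
    """Forecast 'horizon' steps ahead once per (m, tau) combination; return mean and spread."""
    forecasts = []
    for m in dims:
        for tau in taus:
            E = delay_embed(x, m, tau)
            X, y = E[:-horizon], x[(m - 1) * tau + horizon :]   # targets aligned with each state
            model = KNeighborsRegressor(n_neighbors=5).fit(X, y)
            forecasts.append(model.predict(E[-1:])[0])          # predict from the latest state
    forecasts = np.array(forecasts)
    return forecasts.mean(), forecasts.std()
```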
Abstract:
Instruction scheduling with an automaton-based resource conflict model is well established for normal scheduling. Such models have been generalized to software pipelining in the modulo-scheduling framework. One weakness of existing methods is that a distinct automaton must be constructed for each combination of a reservation table and initiation interval. In this work, we present a different approach to modeling conflicts. We construct one automaton for each reservation table which acts as a compact encoding of all the conflict automata for that table, and from which they can be recovered for use in modulo-scheduling. The basic premise of the construction is to move away from the Proebsting-Fraser model of conflict automaton to the Muller model of automaton modeling issue sequences. The latter turns out to be useful and efficient in this situation. Having constructed this automaton, we show how to improve the estimate of the resource-constrained initiation interval. Such a bound is always better than the average-use estimate. We show that our bound is safe: it is always lower than the true initiation interval. This use of the automaton is orthogonal to its use in modulo-scheduling. Once we generate the required information during pre-processing, we can compute the lower bound for a program without any further reference to the automaton.
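For reference, a small sketch of the baseline "average-use" estimate of the resource-constrained initiation interval that the automaton-based bound improves on: for each resource, total the cycles of use across the loop body, divide by the number of units, and take the ceiling of the maximum. The reservation-table encoding here is an illustrative assumption, not the paper's representation.

```python
from math import ceil
from collections import Counter

def average_use_res_mii(reservation_tables, units_per_resource):
    """reservation_tables: one dict per instruction mapping resource -> cycles of use."""
    usage = Counter()
    for table in reservation_tables:
        usage.update(table)
    return max(ceil(usage[r] / units_per_resource[r]) for r in usage)

# Example: two ALU ops and one two-cycle load in the loop body, on a machine with 1 ALU and 1 LSU.
loop_body = [{"ALU": 1}, {"ALU": 1}, {"LSU": 2}]
print(average_use_res_mii(loop_body, {"ALU": 1, "LSU": 1}))   # -> 2
```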