920 results for Least Square Adjustment
Abstract:
In this paper, we use optical-flow-based complex-valued features extracted from video sequences to recognize human actions. The optical flow between two image planes can be appropriately represented in the complex plane. Therefore, we argue that the motion information used to model human actions should be represented as complex-valued features, and we propose a fast learning fully complex-valued neural classifier to solve the action recognition task. The classifier, termed the ``fast learning fully complex-valued neural (FLFCN) classifier,'' is a single-hidden-layer fully complex-valued neural network. The neurons in the hidden layer employ a fully complex-valued hyperbolic secant activation function. The parameters of the hidden layer are chosen randomly and the output weights are estimated as the minimum norm least square solution to a set of linear equations. The results indicate the superior performance of the FLFCN classifier in recognizing the actions compared to real-valued support vector machines and other existing results in the literature. The complex-valued representation of 2D motion and orthogonal decision boundaries boost the classification performance of the FLFCN classifier. (c) 2012 Elsevier B.V. All rights reserved.
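The output-weight estimation described above reduces to a minimum norm least-squares solve. The sketch below is a toy illustration only: the dimensions, random data, and the 0.05 scaling of the hidden-layer parameters are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sech(z):
    # Fully complex-valued hyperbolic secant activation
    return 1.0 / np.cosh(z)

# Hypothetical sizes: 6 complex features, 20 hidden neurons, 3 action classes
n_in, n_hidden, n_classes, n_samples = 6, 20, 3, 50

# Hidden-layer parameters chosen randomly and never trained
W = 0.05 * (rng.standard_normal((n_hidden, n_in)) + 1j * rng.standard_normal((n_hidden, n_in)))
b = 0.05 * (rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden))

X = rng.standard_normal((n_samples, n_in)) + 1j * rng.standard_normal((n_samples, n_in))
labels = rng.integers(0, n_classes, n_samples)
T = np.eye(n_classes)[labels]           # one-hot coded class targets

H = sech(X @ W.conj().T + b)            # hidden-layer responses
V = np.linalg.pinv(H) @ T               # minimum norm least-squares output weights

pred = np.argmax((H @ V).real, axis=1)  # decide by the real part of each output
```

The pseudoinverse gives the unique least-squares solution of minimum norm, so no iterative training of the output layer is needed.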
Abstract:
In this paper, we present a fast learning neural network classifier for human action recognition. The proposed classifier is a fully complex-valued neural network with a single hidden layer. The neurons in the hidden layer employ the fully complex-valued hyperbolic secant as an activation function. The parameters of the hidden layer are chosen randomly and the output weights are estimated analytically as a minimum norm least square solution to a set of linear equations. The fast learning fully complex-valued neural classifier is used for recognizing human actions accurately. Optical-flow-based features extracted from the video sequences are utilized to recognize 10 different human actions. The feature vectors are computationally simple first-order statistics of the optical flow vectors, obtained from coarse-to-fine rectangular patches centered around the object. The results indicate the superior performance of the complex-valued neural classifier for action recognition, which stems from the fact that motion, by nature, consists of two components, one along each of the axes.
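The feature construction can be sketched as follows: each optical flow vector (u, v) becomes the complex number u + iv, and the feature vector collects patch-wise first-order statistics (means). The flow field here is random stand-in data, and the 1x1/2x2/4x4 pyramid is an assumed coarse-to-fine patch layout, not necessarily the authors' exact one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dense optical-flow field on a 64x64 frame
u = rng.standard_normal((64, 64))    # horizontal flow component
v = rng.standard_normal((64, 64))    # vertical flow component

# Represent each 2D flow vector as a single complex number u + iv
flow = u + 1j * v

def patch_means(z, grid):
    """Mean of the complex flow over a grid x grid partition of the frame --
    one coarse-to-fine level of the feature vector."""
    h, w = z.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = z[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats.append(patch.mean())
    return np.array(feats)

# Coarse-to-fine: 1x1, 2x2 and 4x4 patches -> 1 + 4 + 16 = 21 complex features
features = np.concatenate([patch_means(flow, g) for g in (1, 2, 4)])
```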
Abstract:
In this paper, we present a methodology for designing a compliant aircraft wing, which can morph from a given airfoil shape to another given shape under the actuation of internal forces and can offer sufficient stiffness in both configurations under the respective aerodynamic loads. The least square error in displacements, Fourier descriptors, geometric moments, and moment invariants are studied to compare candidate shapes and to pose the optimization problem. Their relative merits and demerits are discussed in this paper. The `frame finite element ground structure' approach is used for topology optimization and the resulting solutions are converted to continuum solutions. The introduction of a notch-like feature is the key to the success of the design. It not only gives a good match for the target morphed shape for the leading and trailing edges but also minimizes the extension of the flexible skin that is to be put on the airfoil frame. Even though linear small-displacement elastic analysis is used in optimization, the obtained designs are analysed for large displacement behavior. The methodology developed here is not restricted to aircraft wings; it can be used to solve any shape-morphing requirement in flexible structures and compliant mechanisms.
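Of the shape-comparison metrics mentioned, Fourier descriptors are easy to sketch: encode the boundary as complex numbers, take the FFT, and compare normalized harmonic magnitudes by least-square error. The ellipse and circle below are illustrative stand-ins for airfoil profiles, and the choice of eight harmonics is an assumption.

```python
import numpy as np

def fourier_descriptors(boundary, k=8):
    """Translation- and scale-invariant Fourier descriptors of a closed 2D
    boundary, given as an (n, 2) array of points ordered along the curve."""
    z = boundary[:, 0] + 1j * boundary[:, 1]  # encode points as complex numbers
    F = np.fft.fft(z - z.mean())              # subtracting the mean removes F[0]
    desc = np.abs(np.r_[F[1:k+1], F[-k:]])    # k positive + k negative harmonics
    return desc / np.abs(F[1])                # normalizing gives scale invariance

def shape_distance(b1, b2, k=8):
    """Least-square error between the descriptor vectors of two shapes."""
    d = fourier_descriptors(b1, k) - fourier_descriptors(b2, k)
    return float(d @ d)

# Test shapes: an ellipse, a scaled/translated copy, and a circle
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
ellipse = np.c_[np.cos(t), 0.3*np.sin(t)]
same    = 2.0*ellipse + np.array([5.0, 1.0])  # same shape, moved and scaled
circle  = np.c_[np.cos(t), np.sin(t)]         # genuinely different shape
```

The distance is near zero for the translated, scaled copy and clearly nonzero for the circle, which is the behaviour a candidate-shape comparison needs.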
Abstract:
A Variable Endmember Constrained Least Square (VECLS) technique is proposed to account for endmember variability in the linear mixture model by incorporating the variance of each class, whose signals vary from pixel to pixel due to changes in urban land cover (LC) structure. VECLS is first tested with a computer-simulated three-class endmember set over four bands, with small, medium and large variability, at three different spatial resolutions. The technique is next validated with real datasets from IKONOS, Landsat ETM+ and MODIS. The results show that the correlation between the actual and estimated proportions is higher by an average of 0.25 for the artificial datasets compared to a situation where variability is not considered. With IKONOS, Landsat ETM+ and MODIS data, the average correlation increased by 0.15 for 2 and 3 classes and by 0.19 for 4 classes, when compared to a single endmember per class. (C) 2013 COSPAR. Published by Elsevier Ltd. All rights reserved.
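The plain (single-endmember) constrained least-square step that VECLS builds on can be sketched as follows. The endmember signatures and proportions are invented for illustration, and the sum-to-one constraint is enforced here with a heavily weighted extra equation rather than the paper's per-class variance terms.

```python
import numpy as np

# Hypothetical endmember matrix: 4 spectral bands x 3 LC classes
E = np.array([[0.10, 0.40, 0.80],
              [0.20, 0.45, 0.70],
              [0.35, 0.50, 0.55],
              [0.60, 0.55, 0.30]])

true_f = np.array([0.2, 0.5, 0.3])   # true class proportions (sum to 1)
pixel = E @ true_f                   # observed mixed-pixel spectrum

# Sum-to-one constrained least squares: append the constraint as one more,
# heavily weighted equation and solve the augmented system in one lstsq call.
w = 1e3                              # weight enforcing sum(f) ~= 1
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```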
Abstract:
The authors consider the channel estimation problem in the context of a linear equaliser designed for a frequency selective channel, which relies on the minimum bit-error-ratio (MBER) optimisation framework. Previous literature has shown that MBER-based signal detection may outperform its minimum-mean-square-error (MMSE) counterpart in the bit-error-ratio performance sense. In this study, they develop a framework for channel estimation by first discretising the parameter space and then posing it as a detection problem. Explicitly, the MBER cost function (CF) is derived and its performance studied when transmitting binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals. It is demonstrated that the MBER-based CF-aided scheme is capable of outperforming existing MMSE and least-square-based solutions.
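For context, the least-square baseline mentioned at the end can be sketched as follows: train linear equaliser taps by ordinary least squares on a known BPSK training burst over an assumed 3-tap channel. All numbers (channel taps, noise level, equaliser length, decision delay) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-tap frequency-selective channel and a BPSK training burst
h = np.array([0.8, 0.5, 0.3])
s = rng.choice([-1.0, 1.0], size=2000)
r = np.convolve(s, h)[:len(s)] + 0.05 * rng.standard_normal(len(s))

L, d = 7, 4                            # equaliser length and decision delay

# Data matrix: row n holds the window r[n], r[n-1], ..., r[n-L+1]
R = np.array([r[n-L+1:n+1][::-1] for n in range(L-1, len(s))])
target = s[L-1-d:len(s)-d]             # desired symbol, delayed by d

w, *_ = np.linalg.lstsq(R, target, rcond=None)  # LS-trained equaliser taps

est = np.sign(R @ w)                   # hard BPSK decisions
ber = np.mean(est != target)           # training bit-error ratio
```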
Abstract:
Climate change impact assessment studies involve downscaling large-scale atmospheric predictor variables (LSAPVs) simulated by general circulation models (GCMs) to site-scale meteorological variables. This article presents a least-square support vector machine (LS-SVM)-based methodology for multi-site downscaling of maximum and minimum daily temperature series. The methodology involves (1) delineation of sites in the study area into clusters based on the correlation structure of the predictands, (2) downscaling LSAPVs to monthly time series of predictands at a representative site identified in each of the clusters, (3) translation of the downscaled information in each cluster from the representative site to the other sites using LS-SVM inter-site regression relationships, and (4) disaggregation of the information at each site from the monthly to the daily time scale using a k-nearest neighbour disaggregation methodology. The effectiveness of the methodology is demonstrated by application to data pertaining to four sites in the catchment of the Beas river basin, India. Simulations of the Canadian coupled global climate model (CGCM3.1/T63) for four IPCC SRES scenarios, namely A1B, A2, B1 and COMMIT, were downscaled to future projections of the predictands in the study area. Comparison of the results with those based on the recently proposed multivariate multiple linear regression (MMLR) based downscaling method and the multi-site multivariate statistical downscaling (MMSD) method indicates that the proposed method is promising and can be considered a feasible choice in statistical downscaling studies. The performance of the method in downscaling daily minimum temperature was found to be better than that in downscaling daily maximum temperature. Results indicate an increase in the annual average maximum and minimum temperatures at all the sites for the A1B, A2 and B1 scenarios. The projected increment is highest for the A2 scenario, followed by the A1B, B1 and COMMIT scenarios.
Projections, in general, indicated an increase in mean monthly maximum and minimum temperatures during January to February and October to December.
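The LS-SVM regression used in step (3) reduces to a single linear system in the dual variables. The sketch below uses an RBF kernel on one-dimensional toy data standing in for the inter-site predictor/predictand pairs; the regularization constant and kernel width are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data standing in for inter-site temperature pairs
X = np.sort(rng.uniform(-3, 3, size=(60, 1)), axis=0)
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)

gamma, sigma2 = 100.0, 1.0             # regularization constant, RBF width

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

K = rbf(X, X)
n = len(y)

# LS-SVM dual system:
# [ 0        1^T       ] [b    ]   [0]
# [ 1   K + I / gamma  ] [alpha] = [y]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.r_[0.0, y])
b, alpha = sol[0], sol[1:]

def predict(Xq):
    return rbf(Xq, X) @ alpha + b

rmse = np.sqrt(np.mean((predict(X) - y)**2))
```

Unlike the standard SVM, the LS-SVM replaces inequality constraints with equality constraints, so training is one linear solve rather than a quadratic program.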
Abstract:
Accelerated electrothermal aging tests were conducted at a constant temperature of 60 degrees C and at stress levels of 6 kV/mm, 7 kV/mm and 8 kV/mm on unfilled epoxy and epoxy filled with 5 wt% of nano alumina. The leakage current through the samples was continuously monitored, and the variation in tan delta values with aging duration was tracked to predict the impending failure and the time to failure of the samples. It is observed that the time to failure of the epoxy alumina nanocomposite samples is significantly higher than that of the unfilled epoxy. Data from the experiments were analyzed graphically by plotting the Weibull probability and analytically by linear least square regression. The characteristic life obtained from the least square regression analysis was used to plot the inverse power law curve, from which the life of the epoxy insulation with and without nanofiller loading at a stress level of 3 kV/mm, i.e. within the midrange of the design stress level of rotating machine insulation, was obtained by extrapolation. It is observed that the life of the epoxy alumina nanocomposite at 5 wt% filler loading is nine times that of the unfilled epoxy.
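The analysis chain described, Weibull fitting by linear least-square regression on the transformed CDF followed by inverse-power-law extrapolation, can be sketched as below. The failure times are fabricated for illustration; only the procedure mirrors the text.

```python
import numpy as np

# Hypothetical times-to-failure (hours) at three aging stress levels (kV/mm)
data = {6.0: [310., 420., 540., 700., 880.],
        7.0: [150., 210., 260., 340., 430.],
        8.0: [ 70., 100., 130., 165., 210.]}

def weibull_ls(t):
    """Least-square fit of ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta) using
    median-rank plotting positions; returns shape beta and scale eta."""
    t = np.sort(np.asarray(t))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimates of F
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
    beta = slope
    eta = np.exp(-intercept / beta)               # characteristic life
    return beta, eta

stresses = np.array(sorted(data))
etas = np.array([weibull_ls(data[E])[1] for E in stresses])

# Inverse power law eta = k * E^{-n}: linear least squares in log-log space
n_exp, log_k = np.polyfit(np.log(stresses), np.log(etas), 1)
n_exp = -n_exp
life_at_3 = np.exp(log_k) * 3.0**(-n_exp)         # extrapolated life at 3 kV/mm
```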
Abstract:
The temperature (300-973 K) and frequency (100 Hz-10 MHz) response of the dielectric and impedance characteristics of 2BaO-0.5Na2O-2.5Nb2O5-4.5B2O3 glasses and glass nanocrystal composites was studied. The dielectric constant of the glass was found to be almost independent of frequency (100 Hz-10 MHz) and temperature (300-600 K). The temperature coefficient of the dielectric constant was 8 +/- 3 ppm/K in the 300-600 K range. The relaxation and conduction phenomena were rationalized using the modulus formalism and the universal AC conductivity exponential power law, respectively. The observed relaxation behavior was found to be thermally activated. The complex impedance data were fitted using the least square method. Dispersion of the Barium Sodium Niobate (BNN) phase at the nanoscale in a glass matrix resulted in the formation of space charge around the crystal-glass interface, leading to a high value of the effective dielectric constant, especially for the samples heat-treated at higher temperatures. The fabricated glass nanocrystal composites exhibited P versus E hysteresis loops at room temperature, and the remnant polarization (P-r) increased with increasing crystallite size.
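A least-square fit of complex impedance data can be sketched for the simplest equivalent circuit, a parallel RC element, by noting that the admittance Y = 1/R + j*omega*C is linear in the unknowns. The component values are invented, and the paper's actual equivalent-circuit model may well be more elaborate.

```python
import numpy as np

# Synthetic parallel-RC impedance: Z(w) = R / (1 + j*w*R*C)
R_true, C_true = 5.0e4, 2.0e-9
w = 2 * np.pi * np.logspace(2, 7, 40)      # angular frequency, 100 Hz .. 10 MHz
Z = R_true / (1 + 1j * w * R_true * C_true)

# The admittance is linear in (1/R, C): Re(Y) = 1/R and Im(Y) = w*C,
# so each unknown follows from a one-column least-square problem.
Y = 1.0 / Z
invR = np.linalg.lstsq(np.ones((len(w), 1)), Y.real, rcond=None)[0][0]
C_fit = np.linalg.lstsq(w[:, None], Y.imag, rcond=None)[0][0]
R_fit = 1.0 / invR
```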
Abstract:
Given a Boolean function f : F_2^n -> {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness denotes the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Omega(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε, there exists an integer n_0 such that for all n >= n_0 there exists a function depending on all n variables which is ε-far from being triangle-free and for which the canonical tester requires a number of queries polynomially larger than 1/ε. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must also make a number of queries super-linear in 1/ε.
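The canonical tester is simple to state in code once elements of F_2^n are encoded as integers, so that x + y is bitwise XOR. The two example functions are illustrative: one whose support is a sum-free set (hence triangle-free) and one dense random function that almost surely contains many triangles.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10                                        # f : F_2^n -> {0, 1}

def canonical_tester(f, n, queries, rng):
    """Repeatedly pick x, y uniformly and independently, query f(x), f(y),
    f(x ^ y), and reject on finding a triangle f(x) = f(y) = f(x ^ y) = 1."""
    for _ in range(queries):
        x = int(rng.integers(2**n))
        y = int(rng.integers(2**n))
        if f[x] and f[y] and f[x ^ y]:
            return False                      # reject: triangle found
    return True                               # accept (one-sided error)

# Triangle-free example: support = vectors with top bit 1; if x and y both
# have top bit 1, then x ^ y has top bit 0, so no triangle can exist.
f_free = np.zeros(2**n, dtype=np.uint8)
f_free[2**(n - 1):] = 1

# Dense random function: almost surely contains triangles.
f_dense = (rng.random(2**n) < 0.3).astype(np.uint8)

accepts_free = canonical_tester(f_free, n, 2000, rng)
rejects_dense = not canonical_tester(f_dense, n, 5000, rng)
```

One-sidedness is visible directly: the tester can only reject by exhibiting an actual triangle, so it never errs on a triangle-free function.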
Abstract:
This paper presents two methods of star camera calibration to determine the camera calibrating parameters (principal point, focal length, etc.) along with lens distortions (radial and decentering). The first method works autonomously, utilizing star coordinates in three consecutive image frames, and is thus independent of star identification or biased attitude information. The parameters obtained by this autonomous self-calibration technique help to identify the imaged stars against the cataloged stars. The second, least-square-based method utilizes inertial star coordinates to determine the satellite attitude and the star camera parameters with lens radial distortion, each independent of the other. Camera parameters determined by the second method are more accurate than those from the first method of camera self-calibration. Moreover, unlike most attitude determination algorithms, in which the attitude of the satellite depends on the camera calibrating parameters, the second method has the advantage of computing the spacecraft attitude independent of the camera calibrating parameters except the (radial) lens distortion. Finally, a Kalman-filter-based sequential estimation scheme is employed to filter out the noise of the LS-based estimation.
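One least-square ingredient of such calibration, recovering a radial distortion coefficient, is linear in the unknown and easy to sketch. The distortion model r_d = r(1 + k1*r^2), the coefficient value and the noise level below are all illustrative assumptions, not the paper's calibration pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic star image points: ideal radii r from the principal point, and
# observed radii displaced by radial lens distortion r_d = r * (1 + k1*r^2)
k1_true = -0.15
r = rng.uniform(0.05, 1.0, 300)                 # normalized ideal radii
r_d = r * (1 + k1_true * r**2) + 1e-4 * rng.standard_normal(300)

# The model is linear in k1, so a one-column least-square problem
# (r_d - r) = k1 * r^3 recovers the coefficient directly.
k1_hat = np.linalg.lstsq((r**3)[:, None], r_d - r, rcond=None)[0][0]
```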
Abstract:
In this paper, an implicit scheme is presented for a meshless compressible Euler solver based on the Least Square Kinetic Upwind Method (LSKUM). The Jameson and Yoon split flux Jacobian formulation is very popular in the finite volume methodology, where it leads to a scalar diagonally dominant matrix for an efficient implicit procedure (Jameson & Yoon, 1987). However, this approach leads to a block diagonal matrix when applied to the LSKUM meshless method. The split flux Jacobian formulation, along with a matrix-free approach, has therefore been adopted to obtain a diagonally dominant, robust and cheap implicit time integration scheme. The efficacy of the scheme is demonstrated by computing the 2D flow past a NACA 0012 airfoil under subsonic, transonic and supersonic flow conditions. The results obtained are compared with available experiments and other reliable computational fluid dynamics (CFD) results. The present implicit formulation shows good convergence acceleration over the RK4 explicit procedure. Further, the accuracy and robustness of the scheme in 3D are demonstrated by computing the flow past an ONERA M6 wing and a clipped delta wing with aileron deflection. The computed results show good agreement with wind tunnel experiments and other CFD computations.
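The "least square" in LSKUM refers to estimating spatial derivatives at a node from its surrounding cloud of points by a least-square fit. A minimal 2D sketch of that ingredient alone (not the kinetic upwind scheme itself) is:

```python
import numpy as np

rng = np.random.default_rng(6)

def ls_gradient(p0, nbrs, f0, f_nbrs):
    """Least-square estimate of grad(f) at p0 from a neighbour cloud:
    minimizes sum_i (f_i - f0 - grad . (p_i - p0))^2 over grad."""
    dP = nbrs - p0                   # (m, 2) position differences
    dF = f_nbrs - f0                 # (m,)  function-value differences
    g, *_ = np.linalg.lstsq(dP, dF, rcond=None)
    return g

# Check on a linear field f(x, y) = 3x - 2y, whose gradient is exactly [3, -2]
p0 = np.array([0.5, 0.5])
nbrs = p0 + 0.1 * rng.standard_normal((8, 2))
f = lambda p: 3 * p[..., 0] - 2 * p[..., 1]
grad = ls_gradient(p0, nbrs, f(p0), f(nbrs))
```

Because the neighbour cloud is arbitrary, this works on scattered points where no grid connectivity exists, which is what makes the method meshless.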
Abstract:
Biomolecular recognition underlying drug-target interactions is determined by both binding affinity and specificity. While quantification of binding efficacy is possible, determining specificity remains a challenge, as it requires affinity data for multiple targets with the same ligand dataset. Thus, understanding the interaction space by mapping the target space to model its complementary chemical space through computational techniques is desirable. In this study, the active site architecture of the FabD drug target in two apicomplexan parasites, viz. Plasmodium falciparum (PfFabD) and Toxoplasma gondii (TgFabD), is explored, followed by consensus docking calculations and the identification of the fifteen best hit compounds, most of which are found to be derivatives of natural products. Subsequently, machine learning techniques were applied to molecular descriptors of six FabD homologs and sixty ligands to induce distinct multivariate partial least-square models. The biological space of FabD mapped by the various chemical entities explains their interaction space in general. It also highlights the selective variations in the FabD of apicomplexan parasites relative to that of the host. Furthermore, the chemometric models revealed the principal chemical scaffolds in PfFabD and TgFabD to be pyrrolidines and imidazoles, respectively, which render target specificity and improve binding affinity in combination with other functional descriptors conducive to the design and optimization of the leads.
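The partial least-square modelling step can be sketched with a minimal NIPALS-style PLS1 on toy descriptor data. The descriptor counts, latent structure and noise level are invented, and real chemometric work would use a validated library implementation rather than this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS) sketch: returns regression coefficients B
    mapping centred descriptors X to the centred response y."""
    Xk = X - X.mean(0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                    # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xk @ w                       # scores
        tt = t @ t
        p = Xk.T @ t / tt                # X loadings
        q = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)         # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients in original X space
    return B, X.mean(0), y.mean()

# Toy data: 60 'ligands' x 8 'descriptors', response driven by 3 descriptors
X = rng.standard_normal((60, 8))
y = X @ np.array([1.0, -0.5, 0, 0, 0.8, 0, 0, 0]) + 0.05 * rng.standard_normal(60)

B, xm, ym = pls1_fit(X, y, n_comp=3)
pred = (X - xm) @ B + ym
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
```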
Abstract:
The development of a computationally efficient and accurate attitude rate estimation algorithm using low-cost, commercially available star sensor arrays and a processing unit for a micro-satellite mission is presented. Our design reduces the computational load of the least square (LS)-based rate estimation method while maintaining accuracy comparable to other rate estimation approaches. Furthermore, rate estimation accuracy is improved by using the recently developed fast and accurate second-order sliding mode observer (SOSMO) scheme, which also gives robust estimation in the presence of modeling uncertainties, unknown disturbances, and measurement noise. A simulation study shows that the rate estimation accuracy achieved by our LS-based method is comparable with that of other methods for a typical commercially available star sensor array. A robustness analysis of the SOSMO with respect to measurement noise is also presented. The simulation test bench for a practical scenario of satellite rate estimation uses moment-of-inertia variation and the environmental disturbances affecting a typical micro-satellite in a 500 km circular orbit. Comparison studies of the SOSMO with the 1-SMO and a pseudo-linear Kalman filter show that satisfactory estimation accuracy is achieved by the SOSMO.
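One way an LS-based rate estimate can work, which is a plausible sketch rather than the paper's exact algorithm, uses the kinematics of an inertially fixed star direction b in the body frame, db/dt = -omega x b, which is linear in omega. Stacking several stars gives an overdetermined system; all rates and directions below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

omega_true = np.array([0.01, -0.02, 0.005])   # body rate, rad/s
dt = 0.1

# Star unit vectors in the body frame at two sample times; note that
# db/dt = -omega x b = b x omega = skew(b) @ omega
b0 = rng.standard_normal((6, 3))
b0 /= np.linalg.norm(b0, axis=1, keepdims=True)
b1 = np.array([b + dt * skew(b) @ omega_true for b in b0])
b1 /= np.linalg.norm(b1, axis=1, keepdims=True)

# Stack one 3-row block per star and solve the overdetermined system by LS
A = np.vstack([skew(b) for b in b0])
y = ((b1 - b0) / dt).ravel()
omega_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```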
Abstract:
It is generally recognized from the food balance sheet prepared by experts that Nigeria is a protein-deficient country. Not only is the daily intake of protein low, but the contribution from animal sources is extremely low. Fish has been found to be the cheapest source of protein in Nigeria; hence the consumption of fish will supply the needed protein at a relatively low cost. The study, conducted in Calabar in 1981, was analysed using a stepwise ordinary least square multiple regression technique as well as Pearson correlation analysis. The regression result was used to generate demand curves for different levels of per capita income, as well as the own-price elasticity of demand. The results show that the own-price elasticities of demand for fresh and frozen fish decreased as the level of per capita income increased, while the income elasticity of demand increased as per capita income increased. The calculated per capita consumption was found to be 5.18 kg and 4.31 kg per annum for fresh fish and frozen fish, respectively. This is considered rather small, since Calabar is a seaport where fish should be more readily available. The values of the own-price and income elasticities indicate that more fish will be consumed at every increase in income if both production and marketing are improved.
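The regression step can be sketched in log-log form, where the ordinary least square slope coefficients are directly the own-price and income elasticities. The data below are synthetic with assumed elasticities, not the 1981 Calabar survey.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic household data in logs, so OLS slopes are elasticities
n = 300
log_p = rng.normal(0.0, 0.3, n)              # log own price
log_m = rng.normal(1.0, 0.4, n)              # log per capita income
eps = 0.05 * rng.standard_normal(n)
log_q = 2.0 - 1.2 * log_p + 0.8 * log_m + eps  # assumed elasticities: -1.2, 0.8

# OLS: regress log quantity on a constant, log price and log income
X = np.c_[np.ones(n), log_p, log_m]
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
price_elasticity, income_elasticity = beta[1], beta[2]
```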
Abstract:
A quadtree-based adaptive Cartesian grid generator and flow solver were developed. Grid adaptation based on the pressure or density gradient was performed, and a gridless method based on a least-square fashion was used to treat the wall surface boundary condition, which is generally difficult to handle on a common Cartesian grid. First, to validate the grid adaptation technique, the benchmark flows over a forward-facing step and a double Mach reflection were computed. Second, the flows over the NACA 0012 airfoil and a two-element airfoil were calculated to validate the developed gridless method. The computational results indicate that the developed method is reasonable for complex flows.
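A gradient-driven quadtree refinement loop of the kind described can be sketched as follows; the density field (a smeared shock at x = 0.5), the tolerance and the maximum depth are all illustrative choices, not the paper's solver.

```python
import numpy as np

class Cell:
    """Minimal quadtree cell: splits into four children when the density
    variation across the cell exceeds a tolerance."""
    def __init__(self, x0, y0, size, depth=0):
        self.x0, self.y0, self.size, self.depth = x0, y0, size, depth
        self.children = []

    def refine(self, rho, tol, max_depth=6):
        s = self.size
        corners = [rho(self.x0 + dx, self.y0 + dy)
                   for dx in (0, s) for dy in (0, s)]
        if self.depth < max_depth and max(corners) - min(corners) > tol:
            h = s / 2
            for dx in (0, h):
                for dy in (0, h):
                    child = Cell(self.x0 + dx, self.y0 + dy, h, self.depth + 1)
                    child.refine(rho, tol, max_depth)
                    self.children.append(child)

def leaves(cell):
    if not cell.children:
        return [cell]
    return [l for c in cell.children for l in leaves(c)]

# A smeared 'shock' at x = 0.5: density varies steeply across it
rho = lambda x, y: np.tanh(40 * (x - 0.5))
root = Cell(0.0, 0.0, 1.0)
root.refine(rho, tol=0.2)
```

Refinement concentrates small cells around the steep-gradient region while leaving the smooth regions coarse, which is the intended adaptation behaviour.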