50 results for partial least square (PLS)
Abstract:
In this paper, we present a methodology for designing a compliant aircraft wing, which can morph from a given airfoil shape to another given shape under the actuation of internal forces and can offer sufficient stiffness in both configurations under the respective aerodynamic loads. The least square error in displacements, Fourier descriptors, geometric moments, and moment invariants are studied to compare candidate shapes and to pose the optimization problem; their relative merits and demerits are discussed. The 'frame finite element ground structure' approach is used for topology optimization and the resulting solutions are converted to continuum solutions. The introduction of a notch-like feature is the key to the success of the design. It not only gives a good match for the target morphed shape at the leading and trailing edges but also minimizes the extension of the flexible skin that is to be put on the airfoil frame. Even though linear small-displacement elastic analysis is used in optimization, the obtained designs are analysed for large-displacement behavior. The methodology developed here is not restricted to aircraft wings; it can be used to solve any shape-morphing requirement in flexible structures and compliant mechanisms.
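As a sketch of the first shape-comparison metric above, the least square error in displacements reduces to a sum of squared distances between matched boundary points of the morphed and target shapes. The airfoil-like curves and the sampling below are hypothetical illustrations, not data from the paper:

```python
import numpy as np

def least_square_shape_error(candidate, target):
    """Sum of squared displacement errors between a morphed candidate
    shape and the target shape, both given as (N, 2) arrays of boundary
    points sampled at matching parameter values."""
    candidate = np.asarray(candidate, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.sum((candidate - target) ** 2))

# Hypothetical airfoil-like closed curves sampled at matched points.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
base = np.column_stack([np.cos(theta), 0.12 * np.sin(theta)])
morphed = np.column_stack([np.cos(theta), 0.15 * np.sin(theta)])
err = least_square_shape_error(morphed, base)
```

Minimizing this error over the design variables is what poses the optimization problem; the Fourier-descriptor and moment-based metrics mentioned above are alternative objective functions over the same point sets.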
Abstract:
The Variable Endmember Constrained Least Square (VECLS) technique is proposed to account for endmember variability in the linear mixture model by incorporating the variance for each class, the signals of which vary from pixel to pixel due to changes in urban land cover (LC) structures. VECLS is first tested with a computer-simulated three-class endmember set over four bands having small, medium and large variability, at three different spatial resolutions. The technique is next validated with real datasets from IKONOS, Landsat ETM+ and MODIS. The results show that the correlation between actual and estimated proportions is higher by an average of 0.25 for the artificial datasets compared to a situation where variability is not considered. With IKONOS, Landsat ETM+ and MODIS data, the average correlation increased by 0.15 for 2 and 3 classes and by 0.19 for 4 classes, when compared to a single endmember per class. (C) 2013 COSPAR. Published by Elsevier Ltd. All rights reserved.
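The single-endmember baseline that VECLS is compared against can be sketched as sum-to-one constrained least-squares unmixing of the linear mixture model. The endmember matrix below is made up for illustration, and the per-class variance weighting that distinguishes VECLS is not reproduced here:

```python
import numpy as np

def scls_unmix(E, x):
    """Sum-to-one constrained least-squares abundance estimate for the
    linear mixture x = E @ a, where E is a (bands x classes) endmember
    matrix. Uses the closed-form Lagrange correction of the
    unconstrained least-squares solution."""
    G = np.linalg.inv(E.T @ E)
    a_ls = G @ E.T @ x                      # unconstrained LS solution
    ones = np.ones(E.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ G @ ones)
    return a_ls + lam * (G @ ones)          # shift onto the simplex plane

# Hypothetical 4-band, 3-class endmember matrix and true abundances.
E = np.array([[0.2, 0.6, 0.9],
              [0.3, 0.5, 0.8],
              [0.4, 0.7, 0.2],
              [0.1, 0.9, 0.3]])
a_true = np.array([0.5, 0.3, 0.2])
a_hat = scls_unmix(E, E @ a_true)
```

On noise-free data the constrained estimate recovers the true proportions exactly; endmember variability in real pixels is what breaks this and motivates VECLS.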
Abstract:
The authors consider the channel estimation problem in the context of a linear equaliser designed for a frequency-selective channel, relying on the minimum bit-error-ratio (MBER) optimisation framework. Previous literature has shown that MBER-based signal detection may outperform its minimum-mean-square-error (MMSE) counterpart in terms of bit-error-ratio performance. In this study, they develop a framework for channel estimation by first discretising the parameter space and then posing estimation as a detection problem. Explicitly, the MBER cost function (CF) is derived and its performance studied when transmitting binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals. It is demonstrated that the MBER-based CF-aided scheme is capable of outperforming existing MMSE and least-squares-based solutions.
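The least-squares baseline mentioned in the last sentence can be sketched as follows for a BPSK training sequence over a noise-free FIR channel; the tap values and sequence length are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def ls_channel_estimate(train, received, L):
    """Least-squares estimate of an L-tap FIR channel from a known
    training sequence: build the convolution matrix X with delayed
    copies of the training symbols and solve min ||X h - y||^2."""
    N = len(received)
    X = np.zeros((N, L))
    for k in range(L):
        X[k:, k] = train[:N - k]          # column k = train delayed by k
    h, *_ = np.linalg.lstsq(X, received, rcond=None)
    return h

rng = np.random.default_rng(0)
train = rng.choice([-1.0, 1.0], size=200)   # BPSK training symbols
h_true = np.array([1.0, 0.5, -0.2])         # hypothetical 3-tap channel
received = np.convolve(train, h_true)[:len(train)]
h_hat = ls_channel_estimate(train, received, L=3)
```

The MBER approach of the paper instead discretises the channel parameter space and selects the hypothesis minimising the estimated bit error ratio, rather than this squared-error residual.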
Abstract:
Climate change impact assessment studies involve downscaling large-scale atmospheric predictor variables (LSAPVs) simulated by general circulation models (GCMs) to site-scale meteorological variables. This article presents a least-squares support vector machine (LS-SVM)-based methodology for multi-site downscaling of maximum and minimum daily temperature series. The methodology involves (1) delineation of sites in the study area into clusters based on the correlation structure of the predictands, (2) downscaling LSAPVs to monthly time series of predictands at a representative site identified in each of the clusters, (3) translation of the downscaled information in each cluster from the representative site to the other sites using LS-SVM inter-site regression relationships, and (4) disaggregation of the information at each site from the monthly to the daily time scale using a k-nearest-neighbour disaggregation methodology. Effectiveness of the methodology is demonstrated by application to data pertaining to four sites in the catchment of the Beas river basin, India. Simulations of the Canadian coupled global climate model (CGCM3.1/T63) for four IPCC SRES scenarios, namely A1B, A2, B1 and COMMIT, were downscaled to future projections of the predictands in the study area. Comparison of results with those based on the recently proposed multivariate multiple linear regression (MMLR) based downscaling method and the multi-site multivariate statistical downscaling (MMSD) method indicates that the proposed method is promising and can be considered a feasible choice in statistical downscaling studies. The performance of the method in downscaling daily minimum temperature was found to be better than that in downscaling daily maximum temperature. Results indicate an increase in annual average maximum and minimum temperatures at all the sites for the A1B, A2 and B1 scenarios. The projected increment is highest for the A2 scenario, followed by the A1B, B1 and COMMIT scenarios.
Projections, in general, indicated an increase in mean monthly maximum and minimum temperatures during January to February and October to December.
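Step (3) above relies on LS-SVM regression, whose training reduces to solving a single linear system rather than a quadratic program. A generic sketch follows; the RBF kernel and the gamma/sigma values are assumptions for illustration, not values from the study:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between row-point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LS-SVM regression training: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                  # alpha, bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy stand-in for an inter-site regression relationship.
X = np.linspace(0.0, 1.0, 30)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
alpha, b = lssvm_fit(X, y, gamma=1000.0, sigma=0.2)
y_hat = lssvm_predict(X, alpha, b, X, sigma=0.2)
```

In the methodology, one such regressor maps the representative site's downscaled series to each remaining site in the cluster.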
Abstract:
Accelerated electrothermal aging tests were conducted at a constant temperature of 60 degrees C and at different stress levels of 6 kV/mm, 7 kV/mm and 8 kV/mm on unfilled epoxy and on epoxy filled with 5 wt% of nano-alumina. The leakage current through the samples was continuously monitored, and the variation in tan delta values with aging duration was tracked to predict the impending failure and the time to failure of the samples. It is observed that the time to failure of the epoxy-alumina nanocomposite samples is significantly higher than that of the unfilled epoxy. Data from the experiments have been analyzed graphically by Weibull probability plotting and analytically by linear least-squares regression. The characteristic life obtained from the least-squares regression analysis has been used to plot the inverse power law curve, from which the life of the epoxy insulation with and without nanofiller loading at a stress level of 3 kV/mm, i.e. within the mid-range of the design stress level of rotating machine insulation, has been obtained by extrapolation. It is observed that the life of the epoxy-alumina nanocomposite with 5 wt% filler loading is nine times higher than that of the unfilled epoxy.
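The graphical Weibull analysis and inverse-power-law extrapolation described above can be sketched as two linear least-squares fits; the median-rank plotting positions and the illustrative stress/life numbers below are assumptions, not data from the experiments:

```python
import numpy as np

def weibull_ls_fit(times_to_failure):
    """Graphical Weibull analysis by linear least squares: regress
    ln(-ln(1 - F_i)) on ln(t_i) with median-rank plotting positions.
    The slope is the shape parameter beta; the characteristic life
    eta follows from the intercept (= -beta * ln(eta))."""
    t = np.sort(np.asarray(times_to_failure, dtype=float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # beta, eta

def inverse_power_life(etas, stresses, target_stress):
    """Fit the inverse power law eta = k * E**(-n) in log-log space,
    then extrapolate the characteristic life to a lower stress."""
    slope, intercept = np.polyfit(np.log(stresses), np.log(etas), 1)
    n_exp, k = -slope, np.exp(intercept)
    return k * target_stress ** (-n_exp), n_exp

# Illustrative stress levels (kV/mm) and lives on an exact power law.
stresses = np.array([6.0, 7.0, 8.0])
etas = 100.0 * stresses ** -3.0
life_3kv, n_exp = inverse_power_life(etas, stresses, 3.0)
```

The study fits eta at each of the three test stresses and extrapolates to 3 kV/mm exactly along these lines, with real failure-time data in place of the synthetic values.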
Abstract:
The temperature (300-973 K) and frequency (100 Hz-10 MHz) response of the dielectric and impedance characteristics of 2BaO-0.5Na2O-2.5Nb2O5-4.5B2O3 glasses and glass nanocrystal composites was studied. The dielectric constant of the glass was found to be almost independent of frequency (100 Hz-10 MHz) and temperature (300-600 K). The temperature coefficient of the dielectric constant was 8 +/- 3 ppm/K in the 300-600 K temperature range. The relaxation and conduction phenomena were rationalized using the modulus formalism and the universal AC conductivity exponential power law, respectively. The observed relaxation behavior was found to be thermally activated. The complex impedance data were fitted using the least-squares method. Dispersion of the Barium Sodium Niobate (BNN) phase at the nanoscale in a glass matrix resulted in the formation of space charge around the crystal-glass interface, leading to a high value of the effective dielectric constant, especially for the samples heat-treated at higher temperatures. The fabricated glass nanocrystal composites exhibited P versus E hysteresis loops at room temperature, and the remnant polarization (P_r) increased with increasing crystallite size.
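A minimal sketch of least-squares fitting of complex impedance data, assuming a single parallel R-C element as the equivalent circuit (the actual equivalent circuit of the glass samples is not specified in the abstract). In the admittance plane the model is linear, so no iterative solver is needed:

```python
import numpy as np

def parallel_rc_impedance(omega, R, C):
    """Impedance of a resistor R in parallel with a capacitor C."""
    return R / (1.0 + 1j * omega * R * C)

def fit_parallel_rc(omega, Z):
    """Least-squares fit of a parallel R-C element to complex
    impedance data. Since Y = 1/Z = 1/R + j*omega*C, Re(Y) estimates
    1/R and Im(Y) is regressed on omega to obtain C."""
    Y = 1.0 / Z
    R = 1.0 / Y.real.mean()
    C, *_ = np.linalg.lstsq(omega[:, None], Y.imag, rcond=None)
    return R, float(C[0])

# Hypothetical sample: R = 5 kOhm, C = 2 nF over 100 Hz - 10 MHz.
omega = 2.0 * np.pi * np.logspace(2, 7, 50)
Z = parallel_rc_impedance(omega, 5e3, 2e-9)
R_hat, C_hat = fit_parallel_rc(omega, Z)
```

Real impedance spectra with distributed relaxation generally need depressed-arc (constant phase element) models and a nonlinear fit; this linear case only illustrates the least-squares step.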
Abstract:
Given a Boolean function f: F_2^n -> {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least epsilon * 2^n points, then f is said to be epsilon-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those epsilon-far from triangle-free. Let the canonical tester for triangle-freeness denote the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions epsilon-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/epsilon. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Omega(1/epsilon) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound for the number of queries needed. We show that, for every small enough epsilon > 0, there exists an integer n_0 such that for all n >= n_0 there exists a function f depending on all n variables which is epsilon-far from being triangle-free and requires a super-linear (in 1/epsilon) number of queries for the canonical tester. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, a corresponding lower bound follows for any one-sided tester for triangle-freeness.
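The canonical tester described above is straightforward to state in code; points of F_2^n are encoded as n-bit integers so that x + y is bitwise XOR:

```python
import random

def canonical_triangle_test(f, n, num_iterations):
    """One run of the canonical tester for triangle-freeness of
    f: F_2^n -> {0, 1}. Sample x, y uniformly and independently,
    and reject as soon as a triangle f(x) = f(y) = f(x ^ y) = 1
    is seen."""
    for _ in range(num_iterations):
        x = random.randrange(1 << n)
        y = random.randrange(1 << n)
        if f(x) == f(y) == f(x ^ y) == 1:
            return False   # triangle found: reject f
    return True            # no triangle sampled: accept f

# The all-ones function has a triangle at every (x, y): any run rejects.
always_one = lambda x: 1
# The all-zeros function is trivially triangle-free: every run accepts.
always_zero = lambda x: 0
```

The tester is one-sided: it never rejects a triangle-free function, and the quantitative question of the paper is how large num_iterations must be to reject epsilon-far functions.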
Abstract:
This paper presents two methods of star camera calibration to determine the camera calibrating parameters (principal point, focal length, etc.) along with lens distortions (radial and decentering). The first method works autonomously, utilizing star coordinates in three consecutive image frames, and is thus independent of star identification and of biased attitude information. The parameters obtained by this autonomous self-calibration technique help to identify the imaged stars against the cataloged stars. The second, least-squares-based method utilizes inertial star coordinates to determine the satellite attitude and the star camera parameters with lens radial distortion, each independent of the other. Camera parameters determined by the second method are more accurate than those from the first method of camera self-calibration. Moreover, unlike most attitude determination algorithms, where the attitude of the satellite depends on the camera calibrating parameters, the second method has the advantage of computing the spacecraft attitude independent of the camera calibrating parameters except for lens (radial) distortion. Finally, a Kalman-filter-based sequential estimation scheme is employed to filter out the noise of the LS-based estimation.
Abstract:
In this paper, an implicit scheme is presented for a meshless compressible Euler solver based on the Least Square Kinetic Upwind Method (LSKUM). Jameson and Yoon's split-flux Jacobian formulation is very popular in the finite volume methodology, as it leads to a scalar diagonally dominant matrix for an efficient implicit procedure (Jameson & Yoon, 1987). However, this approach leads to a block diagonal matrix when applied to the LSKUM meshless method. The above split-flux Jacobian formulation, along with a matrix-free approach, has been adopted to obtain a diagonally dominant, robust and cheap implicit time integration scheme. The efficacy of the scheme is demonstrated by computing 2D flow past a NACA 0012 airfoil under subsonic, transonic and supersonic flow conditions. The results obtained are compared with available experiments and other reliable computational fluid dynamics (CFD) results. The present implicit formulation shows good convergence acceleration over the RK4 explicit procedure. Further, the accuracy and robustness of the scheme in 3D are demonstrated by computing the flow past an ONERA M6 wing and a clipped delta wing with aileron deflection. The computed results show good agreement with wind tunnel experiments and other CFD computations.
Abstract:
The development of a computationally efficient and accurate attitude rate estimation algorithm using low-cost, commercially available star sensor arrays and a processing unit for a micro-satellite mission is presented. Our design reduces the computational load of the least-squares (LS)-based rate estimation method while maintaining the same accuracy as other rate estimation approaches. Furthermore, the rate estimation accuracy is improved by using the recently developed fast and accurate second-order sliding mode observer (SOSMO) scheme, which also gives robust estimation in the presence of modeling uncertainties, unknown disturbances, and measurement noise. A simulation study shows that the rate estimation accuracy achieved by our LS-based method is comparable with other methods for a typical commercially available star sensor array. The robustness analysis of the SOSMO with respect to measurement noise is also presented in this paper. The simulation test bench for a practical scenario of satellite rate estimation uses the moment-of-inertia variation and environmental disturbances affecting a typical micro-satellite in a 500 km circular orbit. Comparison studies of the SOSMO with a 1-SMO and a pseudo-linear Kalman filter show that satisfactory estimation accuracy is achieved by the SOSMO.
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of switching off antennas. But square CODs for 2^a antennas with a + 1 complex variables and no zero entries were known only for a <= 3 and for a + 1 = 2^k, k >= 4. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called Complex Partial-Orthogonal Designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. Then, starting from a so-constructed NZE CPOD for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations. For the NZE CPODs presented in the paper, conditions on the signal sets which guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to CODs under a peak power constraint.
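For concreteness, the classical rate-1 square COD for 2 antennas (the Alamouti design, the a = 1 case with a + 1 = 2 complex variables) is a no-zero-entry design of the kind the constructions above build on. A minimal sketch verifying its defining column orthogonality:

```python
import numpy as np

def alamouti(s1, s2):
    """Rate-1 square COD for 2 antennas with no zero entries; rows
    index channel uses, columns index transmit antennas."""
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

S = alamouti(1.0 + 1.0j, 2.0 - 1.0j)
# Orthogonality: S^H S = (|s1|^2 + |s2|^2) * I, here 7 * I.
gram = S.conj().T @ S
```

For a COD this Gram matrix is exactly a scaled identity for all symbol values, which is what enables single-symbol ML decoding; CPODs relax this orthogonality partially, at a small decoding-complexity cost for rectangular QAM.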
Abstract:
Unsteady natural convection flow in a two-dimensional square cavity filled with a porous material has been studied. The flow is initially steady, where the left-hand vertical wall has temperature T_h and the right-hand vertical wall is maintained at temperature T_c (T_h > T_c), and the horizontal walls are insulated. At time t > 0, the left-hand vertical wall temperature is suddenly raised to T̄_h (T̄_h > T_h), which introduces unsteadiness in the flow field. The partial differential equations governing the unsteady natural convection flow have been solved numerically using a finite control volume method. The computation has been carried out until the final steady state is reached. It is found that the average Nusselt number attains a minimum during the transient period and that the time required to reach the final steady state is longer for low Rayleigh number and shorter for high Rayleigh number.
Abstract:
This paper considers the applicability of the least mean fourth (LMF) power gradient adaptation criterion, with 'advantage', for signals associated with Gaussian noise whose power estimate is not known. The proposed method, as an adaptive spectral estimator, is found to provide performance superior to least mean square (LMS) adaptation for the same (or even lower) speed of convergence for signals having a sufficiently high signal-to-Gaussian-noise ratio. The results include a comparison of the performance of the LMS tapped delay line, LMF tapped delay line, LMS lattice and LMF lattice algorithms, with Burg's block data method as reference. Signals such as sinusoids with noise, and stochastic signals such as EEG, are considered in this study.
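The two tapped-delay-line updates being compared differ only in the power of the error used in the gradient. A minimal system-identification sketch; the filter length, step sizes and test channel are illustrative assumptions, not the paper's spectral-estimation setup:

```python
import numpy as np

def adapt_tdl(x, d, taps, mu, power=2):
    """Tapped-delay-line adaptive filter. power=2 gives the LMS update
    (gradient of e^2: w += mu * 2*e * u); power=4 gives the LMF update
    (gradient of e^4: w += mu * 4*e**3 * u)."""
    w = np.zeros(taps)
    u = np.zeros(taps)                 # delay line, u[0] = newest sample
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]
        e = d[n] - w @ u
        w += mu * (2.0 * e if power == 2 else 4.0 * e ** 3) * u
    return w

# Hypothetical system identification of a 4-tap channel h.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)]
w_lms = adapt_tdl(x, d, taps=4, mu=0.01, power=2)
w_lmf = adapt_tdl(x, d, taps=4, mu=0.001, power=4)
```

The cubic error term makes LMF updates small near convergence (which lowers misadjustment in sub-Gaussian noise, the 'advantage' referred to above) but also makes its stability more sensitive to the step size than LMS.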
Abstract:
Ternary metal complexes involving vitamin B6, with formulas [Co(bpy)2(PN-H)](ClO4)2 and [Cu(bpy)(PN)Cl]ClO4·H2O (bpy = 2,2'-bipyridine, PN = neutral pyridoxine, PN-H = anionic pyridoxine), have been prepared for the first time and characterized by means of magnetic and spectroscopic measurements. The crystal structures of the compounds have also been determined. [Co(bpy)2(PN-H)](ClO4)2 crystallizes in the space group P2(1)/c with a = 18.900 (3) Å, b = 8.764 (1) Å, c = 20.041 (2) Å, β = 116.05 (1)°, and Z = 4, and [Cu(bpy)(PN)Cl]ClO4·H2O in the space group P1̄ with a = 12.136 (5) Å, b = 13.283 (4) Å, c = 7.195 (2) Å, α = 96.91 (2)°, β = 91.25 (3)°, γ = 71.63 (3)°, and Z = 2. The structures were solved by the heavy-atom method and refined by least-squares techniques to R values of 0.080 and 0.042 for 3401 and 2094 independent reflections, respectively. Both structures consist of monomeric units. The geometry around Co(III) is octahedral and that around Cu(II) is distorted square pyramidal. In [Co(bpy)2(PN-H)]2+, two oxygens from the phenolic and 4-(hydroxymethyl) groups of PN-H and two nitrogens from each of two bpy's form the coordination sphere. In [Cu(bpy)(PN)Cl]ClO4·H2O, one PN and one bpy, with the same donor sites, act as bidentate chelates in the basal plane, with a chloride ion occupying the apical position. In both structures PN and PN-H exist in the tautomeric form wherein the pyridine N is protonated and the phenolic O is deprotonated. However, a novel feature of the cobalt compound is that PN-H is anionic due to the deprotonation of the 4-(hydroxymethyl) group. The packing in both structures is governed by hydrogen bonds, and in the copper compound partial stacking of bpy's at a distance of ~3.55 Å also adds to the stability of the system. Infrared, NMR, and ligand field spectroscopic results and magnetic measurements are interpreted in light of the structures.
Abstract:
Contraction of an edge e merges its end points into a new single vertex, and each neighbor of one of the end points of e becomes a neighbor of the new vertex. An edge in a k-connected graph is contractible if its contraction does not result in a graph with lower connectivity; otherwise the edge is called non-contractible. In this paper, we present results on the structure of contractible edges in k-trees and k-connected partial k-trees. Firstly, we show that an edge e in a k-tree is contractible if and only if e belongs to exactly one (k + 1)-clique. We use this characterization to show that the graph formed by the contractible edges of a k-tree is 2-connected. We also show that there are at least |V(G)| + k - 2 contractible edges in a k-tree. Secondly, we show that if an edge e in a partial k-tree is contractible, then e is contractible in any k-tree which contains the partial k-tree as an edge subgraph. We also construct a class of contraction-critical 2k-connected partial 2k-trees.
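The edge-contraction operation and the (k + 1)-clique characterization of contractible edges in k-trees can be sketched as follows; the small 2-tree example is made up for illustration:

```python
import itertools

def contract(adj, u, v):
    """Contract edge (u, v) of a simple graph given as an adjacency
    dict of sets; the merged vertex keeps the label u and inherits
    the neighbors of both end points."""
    merged = (adj[u] | adj[v]) - {u, v}
    new = {w: nbrs - {u, v} for w, nbrs in adj.items() if w not in (u, v)}
    new[u] = merged
    for w in merged:
        new[w].add(u)
    return new

def is_contractible_in_ktree(adj, u, v, k):
    """Characterization shown in the paper: an edge of a k-tree is
    contractible iff it belongs to exactly one (k + 1)-clique, i.e.
    exactly one (k - 1)-subset of the common neighborhood of u and v
    forms a clique together with the edge."""
    common = sorted(adj[u] & adj[v])
    cliques = sum(
        1 for S in itertools.combinations(common, k - 1)
        if all(b in adj[a] for a, b in itertools.combinations(S, 2))
    )
    return cliques == 1

# Hypothetical 2-tree: triangle {1, 2, 3} with vertex 4 attached to {2, 3}.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
```

In this 2-tree, edge (1, 2) lies only in the clique {1, 2, 3} and is contractible, while edge (2, 3) lies in both {1, 2, 3} and {2, 3, 4} and is not.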