68 results for linear mixing model
at Indian Institute of Science - Bangalore - India
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features whose spatial extent is smaller than the image pixel size. A commonly used approach for sub-pixel classification is the linear mixture model (LMM). Although LMMs have shown acceptable results, in practice strictly linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for an endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process in which the endmembers are first derived from an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimates in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP), which further refines the abundance estimates to account for the non-linear nature of the mixing among the classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM, when compared to actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
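As a concrete illustration of the linear (LMM) step, per-pixel abundances can be estimated by least squares under sum-to-one and non-negativity constraints. The sketch below uses invented endmember spectra, not the paper's data; sum-to-one is enforced with a heavily weighted extra equation and non-negativity by clipping, a simplification of full constrained NNLS:

```python
import numpy as np

def lmm_unmix(pixel, E, w=1e3):
    # Solve pixel ≈ E @ a with sum(a) = 1 enforced via a heavily weighted
    # extra row; non-negativity by clipping (a simplification of full NNLS).
    m = E.shape[1]
    A = np.vstack([E, w * np.ones((1, m))])
    y = np.concatenate([pixel, [w]])
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

# two hypothetical endmember spectra over 5 bands, mixed 30/70
E = np.array([[0.1, 0.9], [0.2, 0.8], [0.3, 0.7], [0.4, 0.6], [0.5, 0.5]])
pixel = E @ np.array([0.3, 0.7])
est = lmm_unmix(pixel, E)
```

In the AL-NLMM pipeline these linear estimates would then be refined by the trained MLP.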
Abstract:
The chemical composition of rainwater changes from sea to inland under the influence of several major factors: the topographic location of the area, its distance from the sea, and the annual rainfall. A model is developed here to quantify the variation in precipitation chemistry under the influence of inland distance and rainfall amount. Various sites in India, categorized as 'urban', 'suburban' and 'rural', have been considered for model development. pH, HCO3, NO3 and Mg do not change much from coast to inland, while changes in SO4 and Ca are subject to local emissions. Cl and Na originate solely from sea salinity and are the chemistry parameters in the model. Non-linear multiple regressions performed for the various categories revealed that both rainfall amount and precipitation chemistry obey a power-law reduction with distance from the sea. Cl and Na decrease rapidly over the first 100 km from the sea, decrease marginally over the next 100 km, and later stabilize. Regression parameters estimated for the different cases were found to be consistent (R^2 ≈ 0.8). Variation in one of the parameters accounted for urbanization. The model was validated using data points from the southern peninsular region of the country; estimates were found to be within the 99.9% confidence interval. Finally, this relationship between the three parameters (rainfall amount, coastline distance, and concentration in terms of Cl and Na) was validated with experiments conducted in a small experimental watershed in south-west India. Chemistry estimated using the model correlated well with observed values, with a relative error of ≈5%. Monthly variation in the chemistry is predicted from a downscaling model and then compared with the observed data. Hence, the model developed for rain chemistry is useful in estimating concentrations at different spatio-temporal scales and is especially applicable to the south-west region of India. (C) 2008 Elsevier Ltd. All rights reserved.
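A power-law decay of concentration with coastline distance, C = C0 * d^(-b), is linear in log-log space and can be fitted by ordinary regression. A minimal sketch with hypothetical Cl concentrations (not the paper's data):

```python
import numpy as np

# hypothetical Cl concentrations (mg/L) at inland distances (km),
# generated from C = 20 * d^-0.5 purely for illustration
d = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
C = 20.0 * d ** -0.5

# power law C = C0 * d^-b  =>  log C = log C0 - b * log d (linear fit)
slope, logC0 = np.polyfit(np.log(d), np.log(C), 1)
b = -slope
C0 = np.exp(logC0)
```

The fitted exponent b and prefactor C0 recover the generating values exactly here because the data are noise-free.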
Abstract:
We propose a new time-domain method for efficient representation of the ECG and delineation of its component waves. The method is based on multipulse linear prediction (LP) coding, which is widely used in speech processing. The excitation to the LP synthesis filter consists of a few pulses defined by their locations and amplitudes. Based on the amplitudes and their distribution, the pulses are suitably combined to delineate the component waves. Beat-to-beat correlation in the ECG signal is used in QRS periodicity prediction. The method achieves a data compression ratio of 6:1 and reconstructs the signal with an NMSE of less than 5%.
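A minimal sketch of the linear prediction step alone (plain least-squares LP on a synthetic signal, without the multipulse excitation search or any ECG data):

```python
import numpy as np

def lp_coeffs(x, order):
    # Predict x[n] from the previous `order` samples by least squares:
    # minimize ||x[n] - sum_k a[k] * x[n-1-k]|| over all valid n.
    N = len(x)
    X = np.column_stack([x[order - k - 1 : N - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# a pure sinusoid obeys an exact second-order recursion, so order-2 LP
# predicts it with (numerically) zero residual
n = np.arange(200)
x = np.sin(2 * np.pi * 0.05 * n)
a = lp_coeffs(x, 2)
resid = x[2:] - (a[0] * x[1:-1] + a[1] * x[:-2])
```

In the paper's scheme the residual would instead be modelled by a few excitation pulses rather than being near zero.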
Abstract:
We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the information dissemination context. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message. Under the linear threshold model, the decision to forward depends on comparing the total influence of the nodes from which a node has received the packet with the node's own influence threshold. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set, for a generic network. We show that the problem can be recast in the framework of Markov chains. We then use the analytical expression to gain insights into information dissemination in some simple network topologies such as the star, ring, mesh and acyclic graphs. We also derive the optimal initial set in the above networks, and hint at general heuristics for picking a good initial set.
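The forwarding rule of the linear threshold model can be sketched directly: a node activates once the summed influence of its already-active in-neighbours reaches its threshold. The weights, thresholds, and star topology below are illustrative, not taken from the paper:

```python
def lt_spread(weights, thresholds, seeds):
    # weights[(j, i)]: influence of node j on node i.
    # A node activates when total influence from active in-neighbours
    # meets its threshold; iterate until no further activations occur.
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for i in thresholds:
            if i in active:
                continue
            infl = sum(w for (j, k), w in weights.items()
                       if k == i and j in active)
            if infl >= thresholds[i]:
                active.add(i)
                changed = True
    return active

# star: centre 0 influences leaves 1..3 with weight 0.6
weights = {(0, 1): 0.6, (0, 2): 0.6, (0, 3): 0.6}
thresholds = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.7}
final = lt_spread(weights, thresholds, seeds={0})
```

Leaf 3 stays inactive because its threshold (0.7) exceeds the centre's influence (0.6); the analytical expressions in the paper give the expected size of this final set under random thresholds.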
Abstract:
Contrary to the actual non-linear Glauber model, the linear Glauber model (LGM) is exactly solvable, although the detailed balance condition is not generally satisfied. This motivates us to address the issue of writing the transition rate in the best possible linear form such that the mean squared error in satisfying the detailed balance condition is minimized. The advantage of this work is that, by studying the LGM analytically, we can anticipate how the kinetic properties of an arbitrary Ising system depend on the temperature and the coupling constants. The analytical expressions for the optimal values of the parameters involved in the linear form are obtained using a simple Moore-Penrose pseudoinverse matrix. This approach is quite general, in principle applicable to any system, and reproduces the exact results for the one-dimensional Ising system. In the continuum limit, we obtain a linear time-dependent Ginzburg-Landau equation from Glauber's microscopic model of non-conservative dynamics. We analyze the critical and dynamic properties of the model, and show that most of the important results obtained in different studies can be reproduced by our new mathematical approach. We also show in this paper that the effect of a magnetic field can easily be studied within our approach; in particular, we show that the inverse of the relaxation time changes quadratically with a (weak) magnetic field and that the fluctuation-dissipation theorem is valid for our model.
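The least-mean-squared-error device used here, fitting the parameters of a linear form via the Moore-Penrose pseudoinverse, reduces to p* = A⁺b for an overdetermined system Ap = b. A generic NumPy illustration (the matrix is arbitrary, not the Ising rate system):

```python
import numpy as np

# Overdetermined linear system A p = b: p* = pinv(A) @ b minimizes
# ||A p - b||^2 -- the same construction used to pick the optimal
# linear transition rate.
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])
b = np.array([2.0, 0.0, 3.0])
p = np.linalg.pinv(A) @ b
residual = np.linalg.norm(A @ p - b)
```

This particular system happens to be consistent, so the residual vanishes; with an inconsistent system (as when exact detailed balance cannot be met linearly) the same formula returns the least-squares optimum.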
Abstract:
Cool cluster cores are in global thermal equilibrium but are locally thermally unstable. We study a non-linear phenomenological model for the evolution of density perturbations in the intracluster medium (ICM) due to local thermal instability and gravity. We have analysed and extended a model for the evolution of an overdense blob in the ICM. We find two regimes in which overdense blobs can cool to thermally stable low temperatures. The first is for large t_cool/t_ff (where t_cool is the cooling time and t_ff is the free-fall time), in which a large initial overdensity is required for thermal runaway to occur; this is the regime that was previously analysed in detail. We discover a second regime for t_cool/t_ff ≲ 1 (in agreement with Cartesian simulations of local thermal instability in an external gravitational field), in which runaway cooling happens for arbitrarily small amplitudes. Numerical simulations have shown that cold gas condenses out more easily in a spherical geometry. We extend the analysis to include geometrical compression in weakly stratified atmospheres such as the ICM. With a single parameter, analogous to the mixing length, we are able to reproduce the results of numerical simulations; namely, small density perturbations lead to the condensation of extended cold filaments only if t_cool/t_ff ≲ 10.
Abstract:
The scope of the differential transformation technique, developed earlier for the study of non-linear, time-invariant systems, has been extended to the domain of time-varying systems by modifications to the differential transformation laws proposed therein. The equivalence of a class of second-order, non-linear, non-autonomous systems with a second-order linear autonomous model is established through these transformation laws. The feasibility of applying this technique to obtain the response of such non-linear time-varying systems is discussed.
Abstract:
Masonry strength depends upon the characteristics of the masonry unit, the mortar, and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper focuses on developing a three-dimensional non-linear FE model, based on a micro-modelling approach, to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress to be close to 85% of the mean experimental compressive strength value.
Abstract:
The granular flow down an inclined plane is simulated using the discrete element (DE) technique to examine the extent to which the dynamics of an unconfined dense granular flow can be described by a hard-particle model. First, we examine the average coordination number for the particles in the flow down an inclined plane using the DE technique, with the linear contact model with and without friction, and with the Hertzian contact model with friction. The simulations show that the average coordination number decreases below 1 for values of the spring stiffness corresponding to real materials, such as sand and glass, even when the angle of inclination is only 1° larger than the angle of repose. Additional measures of correlations in the system, such as the fraction of particles with multibody contact, the force ratio (the average ratio of the magnitudes of the largest and the second largest force on a particle), and the angle between the two largest forces on a particle, show no evidence of force chains or other correlated motions in the system. An analysis of the bond-orientational order parameter indicates that the flow is in the random state, as in event-driven (ED) simulations [V. Kumaran, J. Fluid Mech. 632, 107 (2009); J. Fluid Mech. 632, 145 (2009)]. The results of the two simulation techniques for the Bagnold coefficients (the ratio of the stress and the square of the strain rate) and the granular temperature (the mean square of the fluctuating velocity) are compared with theory [V. Kumaran, J. Fluid Mech. 632, 107 (2009); J. Fluid Mech. 632, 145 (2009)] and are found to be in quantitative agreement. In addition, we also compare the collision frequency and the distribution of the pre-collisional relative velocities of particles in contact. The strong correlation effects exhibited by these two quantities in event-driven simulations [V. Kumaran, J. Fluid Mech. 632, 145 (2009)] are also found in the DE simulations. (C) 2010 American Institute of Physics. doi: 10.1063/1.3504660
Abstract:
Multiple-input multiple-output (MIMO) systems with large numbers of antennas have been gaining wide attention as they enable very high throughputs. A major impediment is the complexity at the receiver needed to detect the transmitted data. To this end we propose a new receiver, called LRR (Linear Regression of MMSE Residual), which improves on the MMSE receiver by learning a linear regression model for the error of the MMSE receiver. The LRR receiver uses pilot data to estimate the channel, and then uses locally generated training data (not transmitted over the channel) to find the linear regression parameters. The proposed receiver is suitable for applications where the channel remains constant for a long period (slow-fading channels) and performs quite well: at a bit error rate (BER) of 10^-3, the SNR gain over the MMSE receiver is about 7 dB for a 16 x 16 system; for a 64 x 64 system the gain is about 8.5 dB. For large coherence times, the complexity order of the LRR receiver is the same as that of the MMSE receiver, and in simulations we find that it needs about 4 times as many floating-point operations. We also show that a further gain of about 4 dB is obtained by a local search around the estimate given by the LRR receiver.
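For reference, the baseline MMSE detection step that LRR corrects is x̂ = (HᴴH + σ²I)⁻¹Hᴴy. A small real-valued, noiseless sketch with an illustrative channel matrix (not the paper's setup or dimensions):

```python
import numpy as np

# illustrative, well-conditioned 4x4 real channel and BPSK symbols
H = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [1.0, 0.0, 0.0, 2.0]])
x = np.array([1.0, -1.0, 1.0, -1.0])
sigma2 = 0.01
y = H @ x  # noiseless here, to keep the example deterministic

# MMSE estimate: x_hat = (H^T H + sigma^2 I)^-1 H^T y
x_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(4), H.T @ y)
```

The regularization biases the estimate slightly toward zero; the LRR idea is to learn a linear regression on exactly this kind of residual error from locally generated training data.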
Abstract:
The conformational stability of the homodimeric pea lectin was determined by both isothermal urea-induced denaturation and thermal denaturation in the absence and presence of urea. The denaturation profiles were analyzed to obtain the thermodynamic parameters associated with the unfolding of the protein. The data not only conform to the simple A2 ⇌ 2U model of unfolding but are also well described by the linear extrapolation model for the nature of denaturant-protein interactions. In addition, both the conformational stability (ΔG_s) and the ΔC_p for protein unfolding are quite high, at about 18.79 kcal/mol and 5.32 kcal/(mol K), respectively, which may be a reflection of the relatively large size of the dimeric molecule (M_r 49,000) and, perhaps, a consequently larger buried hydrophobic core in the folded protein. The simple two-state (A2 ⇌ 2U) nature of the unfolding process, with the absence of any monomeric intermediate, suggests that quaternary interactions alone may contribute significantly to the conformational stability of the oligomer, a point that may be general to many oligomeric proteins.
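The linear extrapolation model referenced here fits ΔG measured in the transition region linearly in denaturant concentration and extrapolates to zero denaturant. A sketch with hypothetical ΔG values constructed to intercept at the reported 18.79 kcal/mol (the slope m is invented):

```python
import numpy as np

# Linear extrapolation model: Delta G([urea]) = Delta G_H2O - m * [urea],
# so Delta G_H2O is the zero-urea intercept of a straight-line fit.
urea = np.array([4.0, 4.5, 5.0, 5.5, 6.0])   # M, transition region
dG = 18.79 - 2.5 * urea                       # kcal/mol; m = 2.5 is hypothetical

slope, intercept = np.polyfit(urea, dG, 1)
m_value = -slope          # the m-value of the fit
dG_water = intercept      # extrapolated stability in water
```

In practice the ΔG values come from the measured fraction unfolded at each urea concentration rather than from a formula.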
Abstract:
The constitutive model for a magnetostrictive material and its effect on the structural response are presented in this article. The magnetostrictive material considered is TERFENOL-D. Like a piezoelectric material, this material has two constitutive laws, one a sensing law and the other an actuation law, both of which are highly coupled and non-linear. For the purpose of analysis, the constitutive laws can be characterized as coupled or uncoupled, and linear or non-linear. The coupled model is studied without assuming any explicit direct relationship with the magnetic field. In the linear coupled model, which is assumed to preserve magnetic flux line continuity, the elastic modulus, the permeability and the magneto-elastic constant are assumed constant. In the non-linear coupled model, the non-linearity is decoupled and solved separately for the magnetic domain and the mechanical domain using two non-linear curves, namely the stress vs. strain curve and the magnetic flux density vs. magnetic field curve. This is performed by two different methods. In the first, the magnetic flux density is computed iteratively, while in the second, an artificial neural network is used, wherein the trained network gives the necessary strain and magnetic flux density for a given magnetic field and stress level. The effect of the non-linearity is demonstrated on a simple magnetostrictive rod.
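The linear coupled model can be written, in 1-D, as the actuation law strain = s·σ + d·H together with the sensing law B = d·σ + μ·H, with constant compliance s, magneto-elastic constant d, and permeability μ. A sketch with illustrative constants (not TERFENOL-D datasheet values):

```python
# 1-D linear coupled magnetostrictive constitutive pair; all constants
# below are illustrative placeholders, not material data.
s = 1.0 / 30e9    # compliance, 1/Pa
d = 1.5e-8        # magneto-elastic constant, m/A
mu = 5e-6         # permeability, H/m

def actuate(stress, H):
    # actuation law: strain from stress and magnetic field
    return s * stress + d * H

def sense(stress, H):
    # sensing law: magnetic flux density from stress and field
    return d * stress + mu * H

strain = actuate(stress=1e6, H=5e4)
B = sense(stress=1e6, H=5e4)
```

The non-linear coupled model described above replaces the constants s, d and μ with values read iteratively from the stress-strain and B-H curves.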
Abstract:
The hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site multi-season streamflows. It bears significance in integrated river basin planning studies. This hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using the moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding the corresponding parametric model component for the site, to obtain generated streamflows at each of the sites. It gains significantly by effectively utilizing the merits of both parametric and NP models. It is able to reproduce various statistics, including the dependence relationships at both spatial and temporal levels, without using any normalizing transformations and/or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics, including run characteristics, is demonstrated through an application to multi-site streamflow generation in the Upper Cauvery river basin, Southern India. (C) 2004 Elsevier B.V. All rights reserved.
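Step (ii), the moving block bootstrap, resamples overlapping blocks of residuals and concatenates them, preserving short-range dependence within each block. A minimal single-site sketch with made-up residuals:

```python
import random

def moving_block_bootstrap(series, block, rng):
    # Resample a series by concatenating randomly chosen overlapping
    # blocks of the given length, then truncating to the original length.
    n = len(series)
    starts = list(range(n - block + 1))
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(series[s : s + block])
    return out[:n]

rng = random.Random(42)
residuals = [0.3, -1.2, 0.7, 0.1, -0.5, 0.9, -0.2, 0.4]
sample = moving_block_bootstrap(residuals, block=3, rng=rng)
```

In the multi-site model the same block start indices would be used contemporaneously across sites so that spatial dependence is retained.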
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch equals the longest processing time among the jobs in the batch. Because of the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize the total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic algorithm, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristics, in comparison with exact solutions on various small problem instances and with estimated optimal solutions on various real-life, large problem instances. The computational results show that the SA algorithm, with the initial solution obtained using our own greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
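The objective can be made concrete: a batch occupies the machine for the longest processing time among its jobs, and each job's weighted tardiness is measured at its batch's completion time. A sketch with three illustrative jobs given as (processing time, due date, weight, size), ignoring release times for simplicity:

```python
# Total weighted tardiness of a batch sequence on a burn-in machine.
# Job data are illustrative: (processing_time, due_date, weight, size).
jobs = [(4, 6, 2, 1), (3, 3, 1, 1), (6, 8, 3, 2)]
capacity = 2

def batch_twt(batches, jobs):
    t, twt = 0, 0
    for batch in batches:
        # feasibility: total job sizes must fit the machine capacity
        assert sum(jobs[j][3] for j in batch) <= capacity
        # the batch takes as long as its longest job
        t += max(jobs[j][0] for j in batch)
        for j in batch:
            p, due, w, _ = jobs[j]
            twt += w * max(0, t - due)
    return twt

# batch jobs 0 and 1 together (sizes 1 + 1 <= 2), then job 2 alone
twt = batch_twt([[0, 1], [2]], jobs)
```

A greedy heuristic or the SA search would explore alternative batchings and sequences to drive this quantity down.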