141 results for Least-Squares Analysis


Relevance:

100.00%

Abstract:

This study reports an investigation of the ion exchange treatment of sodium chloride solutions in relation to the use of resin technology for applications such as desalination of brackish water. In particular, a strong acid cation (SAC) resin (DOW Marathon C) was studied to determine its capacity for sodium uptake and to evaluate the fundamentals of the ion exchange process involved. Key questions included: the impact of resin identity; the best models for simulating the kinetic and equilibrium exchange behaviour of sodium ions; the difference between linear least squares (LLS) and non-linear least squares (NLLS) methods for data interpretation; and the effect of changing the type of anion accompanying the sodium species in solution. Kinetic studies suggested that the exchange process was best described by a pseudo-first-order rate expression, based upon non-linear least squares analysis of the test data. Application of the Langmuir–Vageler isotherm model was recommended, as it allowed confirmation that experimental conditions were sufficient for maximum loading of sodium ions to occur. The Freundlich expression best fitted the equilibrium data when the information was analysed by an NLLS approach. In contrast, LLS methods suggested that the Langmuir model was optimal for describing the equilibrium process. The Competitive Langmuir model, which considered the stoichiometric nature of the ion exchange process, estimated the maximum loading of sodium ions to be 64.7 g Na/kg resin. This value was comparable to sodium ion capacities for SAC resins published previously. The inherent discrepancies involved in using linearized versions of kinetic and isotherm equations were illustrated; despite their widespread use, the value of this approach appeared questionable. The equilibrium behaviour of sodium ions from sodium fluoride solution revealed that the sodium ions were now more preferred by the resin than in the sodium chloride case. The solution chemistry of hydrofluoric acid was suggested as promoting the affinity of the sodium ions for the resin.
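
To illustrate the LLS/NLLS discrepancy this abstract highlights, here is a minimal sketch on synthetic data (parameter values and concentrations are illustrative, not the study's), fitting the Langmuir isotherm both by direct non-linear least squares and by the classic C/q linearization:

```python
# Minimal sketch (not the study's code): comparing linearized (LLS) and
# non-linear (NLLS) fits of the Langmuir isotherm q = q_max*K*C/(1 + K*C)
# on noisy synthetic equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
q_max, K = 64.7, 0.05            # "true" parameters (q_max in g Na/kg resin)
C = np.linspace(5, 500, 12)      # equilibrium concentrations (illustrative)
q = q_max * K * C / (1 + K * C) * (1 + 0.03 * rng.standard_normal(C.size))

# NLLS: fit the isotherm directly.
def langmuir(C, q_max, K):
    return q_max * K * C / (1 + K * C)

(nlls_qmax, nlls_K), _ = curve_fit(langmuir, C, q, p0=(50, 0.01))

# LLS: classic linearization C/q = C/q_max + 1/(K*q_max); the transformation
# re-weights the error structure, which is why the estimates can differ.
slope, intercept = np.polyfit(C, C / q, 1)
lls_qmax, lls_K = 1 / slope, slope / intercept

print(f"NLLS: q_max={nlls_qmax:.1f}, K={nlls_K:.4f}")
print(f"LLS : q_max={lls_qmax:.1f}, K={lls_K:.4f}")
```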

Relevance:

100.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
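
As a rough illustration of estimating the generalized Gaussian shape parameter for a wavelet subband, here is one standard moment-ratio formulation solved numerically; the thesis's exact least-squares formulation is not reproduced, so treat this as an assumption-laden sketch:

```python
# Hedged sketch: estimate the generalized Gaussian (GGD) shape parameter
# of subband coefficients by matching the sample moment ratio
# r = (E|x|)^2 / E[x^2] to its theoretical value, a common setup for the
# kind of nonlinear estimation the abstract describes.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(coeffs):
    """Estimate the GGD shape parameter beta from subband coefficients."""
    coeffs = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    # Theoretical ratio as a function of beta; solve g(beta) = r.
    g = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b))
    return brentq(lambda b: g(b) - r, 0.1, 5.0)

# Sanity check: Gaussian input should recover beta close to 2;
# heavier-tailed subbands give beta < 2.
x = np.random.default_rng(1).standard_normal(100_000)
print(ggd_shape(x))   # ~2.0
```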

Relevance:

100.00%

Abstract:

In this paper, spatially offset Raman spectroscopy (SORS) is demonstrated for non-invasively investigating the composition of drug mixtures inside an opaque plastic container. The mixtures consisted of three components: a target drug (acetaminophen or phenylephrine hydrochloride) and two diluents (glucose and caffeine). The target drug concentrations ranged from 5% to 100%. After conducting SORS analysis to ascertain the Raman spectra of the concealed mixtures, principal component analysis (PCA) was performed on the SORS spectra to reveal trends within the data. Partial least squares (PLS) regression was used to construct models that predicted the concentration of each target drug in the presence of the other two diluents. The PLS models were able to predict the concentration of acetaminophen in the validation samples with a root-mean-square error of prediction (RMSEP) of 3.8%, and the concentration of phenylephrine hydrochloride with an RMSEP of 4.6%. This work demonstrates the potential of SORS, used in conjunction with multivariate statistical techniques, to perform non-invasive, quantitative analysis of mixtures inside opaque containers. This has applications in pharmaceutical analysis, such as monitoring the degradation of pharmaceutical products on the shelf, in forensic investigations of counterfeit drugs, and in the analysis of illicit drug mixtures that may contain multiple components.
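
A minimal sketch of the PLS-plus-RMSEP workflow described above, using scikit-learn on synthetic stand-in spectra (the SORS data themselves are not reproduced here):

```python
# Minimal sketch (synthetic data, not the SORS measurements): a PLS
# regression model predicting target-drug concentration from spectra,
# evaluated by root-mean-square error of prediction (RMSEP).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_points = 60, 500
conc = rng.uniform(5, 100, n_samples)            # % target drug
pure = rng.standard_normal(n_points)             # pseudo pure-component spectrum
X = np.outer(conc, pure) + 5 * rng.standard_normal((n_samples, n_points))

X_cal, X_val, y_cal, y_val = train_test_split(X, conc, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
print(f"RMSEP = {rmsep:.2f} %")
```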

Relevance:

100.00%

Abstract:

Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among the partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. A possible shortcoming is the inability to resolve organelles that behave similarly during a protocol designed to provide only partial enrichment. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and the assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of the single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
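
The "straightforward combination" of gradients amounts to concatenating each protein's abundance profiles before classification. A hedged sketch with invented array shapes and random stand-in data, not the published LOPIT fractions:

```python
# Sketch of the data-combination idea: feature profiles from two separate
# gradients are concatenated per protein before SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_proteins = 300
grad1 = rng.random((n_proteins, 8))      # abundance across 8 fractions, gradient 1
grad2 = rng.random((n_proteins, 8))      # abundance across 8 fractions, gradient 2
labels = rng.integers(0, 6, n_proteins)  # 6 known organelle classes

combined = np.hstack([grad1, grad2])     # straightforward combination
svm = SVC(kernel="rbf", gamma="scale")
print("gradient 1 alone:", cross_val_score(svm, grad1, labels, cv=5).mean())
print("combined        :", cross_val_score(svm, combined, labels, cv=5).mean())
```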

Relevance:

100.00%

Abstract:

Near-infrared spectroscopy (NIRS) calibrations were developed for the discrimination of Chinese hawthorn (Crataegus pinnatifida Bge. var. major) fruit from three geographical regions, as well as for the estimation of total sugar, total acid, total phenolic content, and total antioxidant activity. Principal component analysis (PCA) was used for the discrimination of the fruit on the basis of their geographical origin. Three pattern recognition methods (linear discriminant analysis, partial least-squares discriminant analysis, and back-propagation artificial neural networks) were applied to classify and compare these samples. Furthermore, three multivariate calibration models based on first-derivative NIR spectra (partial least-squares regression, back-propagation artificial neural networks, and least-squares support vector machines) were constructed for quantitative analysis of the four analytes and validated with prediction data sets.
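
A short sketch of the first-derivative preprocessing step ahead of a PLS calibration; the Savitzky–Golay window and model settings below are assumptions, not the paper's reported values:

```python
# Sketch of first-derivative preprocessing (Savitzky-Golay) before a PLS
# calibration, a common workflow for NIR spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((80, 700))                 # NIR spectra (samples x wavelengths)
y = rng.uniform(50, 150, 80)              # e.g. total sugar content (illustrative)

# The first derivative removes baseline offsets and sharpens overlapping bands.
X_d1 = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=8).fit(X_d1, y)
print("calibration R^2:", pls.score(X_d1, y))
```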

Relevance:

100.00%

Abstract:

Samples of sea water contain phytoplankton taxa in varying amounts, and marine scientists are interested in the relative abundance of each taxon. Their relative biomass can be ascertained indirectly by measuring the quantity of various pigments using high performance liquid chromatography. However, the conversion from pigment to taxon is mathematically non-trivial, as it is a positive matrix factorisation problem where both matrices are unknown beyond the level of initial estimates. Prior information on the pigment-to-taxon conversion matrix is used to give the problem a unique solution. Iterating between two non-negative least squares algorithms gives satisfactory results. Analysis of sample data indicates good prospects for this type of analysis. An alternative, more computationally intensive approach using Bayesian methods is discussed.
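
A bare-bones sketch of the alternating non-negative least squares iteration on a synthetic factorisation problem; the variable names and initialization scheme are mine, not the paper's:

```python
# Illustrative sketch: given pigment data D (samples x pigments), factor
# D ~ A @ P with A >= 0 (taxa abundances) and P >= 0 (pigment-per-taxon
# matrix), refining P from an initial estimate by alternating NNLS.
import numpy as np
from scipy.optimize import nnls

def alternate_nnls(D, P0, n_iter=50):
    P = P0.copy()
    for _ in range(n_iter):
        # Solve each row of A: min ||P.T @ a - d||, a >= 0.
        A = np.array([nnls(P.T, d)[0] for d in D])
        # Solve each column of P: min ||A @ p - D[:, j]||, p >= 0.
        P = np.array([nnls(A, D[:, j])[0] for j in range(D.shape[1])]).T
    return A, P

rng = np.random.default_rng(0)
A_true = rng.random((30, 4)); P_true = rng.random((4, 10))
D = A_true @ P_true
A, P = alternate_nnls(D, P_true + 0.1 * rng.random(P_true.shape))
print("reconstruction error:", np.linalg.norm(D - A @ P))
```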

Relevance:

100.00%

Abstract:

This study focuses on using the partial least squares (PLS) path modelling technique in archival auditing research by replicating the data and research questions of prior bank audit fee studies. PLS path modelling allows for inter-correlations among audit fee determinants by establishing latent constructs and multiple relationship paths in one simultaneous PLS path model. Endogeneity concerns about auditor choice can also be addressed with PLS path modelling. With a sample of US bank holding companies for the period 2003-2009, we examine the associations among on-balance sheet financial risks, off-balance sheet risks, and audit fees, and also address the pervasive client size effect and the effect of the self-selection of auditors. The results confirm the dominant effect of size on audit fees, both directly and indirectly via its impact on other audit fee determinants. After simultaneously considering the self-selection of auditors, we still find audit fee premiums for Big N auditors, the second most important factor in audit fee determination. On-balance-sheet financial risk measures in terms of capital adequacy, loan composition, earnings, and asset quality performance have positive impacts on audit fees. After allowing for the positive influence of on-balance sheet financial risks and entity size on off-balance sheet risk, the off-balance sheet risk measure, SECRISK, is still positively associated with bank audit fees, both before and after the onset of the financial crisis. The consistency of these results with the prior literature provides supporting evidence for, and enhances confidence in, the application of this new research technique in archival accounting studies.
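
PLS path modelling itself iteratively re-weights indicators; as a deliberately simplified sketch of the idea, the snippet below collapses indicators into composite construct scores and estimates path coefficients by ordinary regression. Construct names and data are invented, and the equal-weight composites are an approximation, not the study's model:

```python
# Simplified path-model sketch: indicator blocks -> composite construct
# scores -> regression-based structural paths.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import scale

rng = np.random.default_rng(0)
n = 200
size_ind = rng.random((n, 2))   # e.g. log assets, log revenue (illustrative)
risk_ind = rng.random((n, 3))   # e.g. capital adequacy, loan mix, asset quality
audit_fee = rng.random(n)

# Composite construct scores: mean of standardized indicators.
size = scale(size_ind).mean(axis=1)
risk = scale(risk_ind).mean(axis=1)

# Structural paths: size -> risk, then (size, risk) -> audit fee,
# capturing the direct and indirect size effects the abstract describes.
path_size_risk = LinearRegression().fit(size[:, None], risk)
path_to_fee = LinearRegression().fit(np.column_stack([size, risk]), audit_fee)
print("size -> risk:", path_size_risk.coef_)
print("(size, risk) -> fee:", path_to_fee.coef_)
```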

Relevance:

100.00%

Abstract:

Phenols are well-known noxious compounds which are often found in various water sources. A novel analytical method was researched and developed based on the properties of hemin–graphene hybrid nanosheets (H–GNs). These nanosheets were synthesized using a wet-chemical method, and they have peroxidase-like activity. In the presence of H2O2, the nanosheets are efficient catalysts for the oxidation of the substrate, 4-aminoantipyrine (4-AP), with the phenols; the products of this reaction are colored quinone-imine dyes. Importantly, these products enabled the differentiation of three common phenols (pyrocatechol, resorcinol, and hydroquinone) with the use of a novel spectroscopic method developed for their simultaneous determination. This spectroscopic method produced linear calibrations for the pyrocatechol (0.4–4.0 mg L−1), resorcinol (0.2–2.0 mg L−1), and hydroquinone (0.8–8.0 mg L−1) analytes. In addition, kinetic and spectral data obtained from the formation of the colored quinone-imines were used to establish multivariate calibrations for the prediction of the three phenol analytes in various kinds of water; partial least squares (PLS), principal component regression (PCR), and artificial neural network (ANN) models were compared, and the PLS model performed best.
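
A compact sketch comparing two of the calibration models named above, PLS and PCR (built as PCA followed by linear regression), on synthetic kinetic-spectral data; the ANN model is omitted for brevity:

```python
# Sketch: PLS vs PCR multivariate calibration for three analytes.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((50, 200))        # kinetic/spectral measurements (illustrative)
Y = rng.random((50, 3))          # three phenol concentrations (illustrative)

pls = PLSRegression(n_components=6)
pcr = make_pipeline(PCA(n_components=6), LinearRegression())

print("PLS R^2:", cross_val_score(pls, X, Y, cv=5).mean())
print("PCR R^2:", cross_val_score(pcr, X, Y, cv=5).mean())
```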

Relevance:

100.00%

Abstract:

Samples of Forsythia suspensa from raw (Laoqiao) and ripe (Qingqiao) fruit were analyzed with the use of HPLC-DAD and ESI-MS techniques. Seventeen peaks were detected, and of these, twelve were identified; most were related to the glucopyranoside molecular fragment. Samples collected from three geographical areas (Shanxi, Henan, and Shandong Provinces) were discriminated with the use of hierarchical clustering analysis (HCA), discriminant analysis (DA), and principal component analysis (PCA) models, but only PCA was able to provide further information about the relationships between objects and loadings; eight peaks were related to the provinces of sample origin. The supervised classification models (K-nearest neighbor (KNN), least squares support vector machines (LS-SVM), and counter-propagation artificial neural network (CP-ANN)) all classified successfully, but only KNN produced a 100% classification rate. Thus, the fruit were discriminated on the basis of their places of origin.
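
A minimal example of the KNN classification step on synthetic stand-in peak areas (the 17 features echo the seventeen detected peaks; everything else is invented):

```python
# Sketch: KNN classification of samples by province of origin from
# chromatographic peak areas.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
peak_areas = rng.random((45, 17))     # 45 samples x 17 HPLC peaks (illustrative)
province = rng.integers(0, 3, 45)     # Shanxi / Henan / Shandong

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
print("CV accuracy:", cross_val_score(knn, peak_areas, province, cv=5).mean())
```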

Relevance:

100.00%

Abstract:

This review focuses on the impact of chemometrics for resolving data sets collected from investigations of the interactions of small molecules with biopolymers. Such samples have been analyzed with various instrumental techniques, including fluorescence spectroscopy, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data, multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC), is highlighted through analysis of applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that the chemometric methods extract significant information that is unattainable by simple, univariate data analysis. In addition, although the techniques used to collect the reviewed data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, the kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The applications of the MCR–ALS and PARAFAC methods reviewed were primarily published between 2008 and 2013.
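
The core of MCR–ALS is an alternating constrained least squares loop. A bare-bones sketch follows (non-negativity only; real implementations add closure and other constraints, and this is not any reviewed paper's code):

```python
# Bare-bones MCR-ALS sketch: decompose a data matrix D (e.g. spectra at
# successive titration points) as D ~ C @ S.T by alternating least squares
# between concentration profiles C and pure spectra S.
import numpy as np

def mcr_als(D, S0, n_iter=100):
    S = S0.copy()                         # initial spectral estimates
    for _ in range(n_iter):
        C = D @ np.linalg.pinv(S.T)       # LS step for concentrations
        C = np.clip(C, 0, None)           # non-negativity constraint
        S = (np.linalg.pinv(C) @ D).T     # LS step for spectra
        S = np.clip(S, 0, None)
    return C, S

rng = np.random.default_rng(0)
C_true = rng.random((40, 2)); S_true = rng.random((120, 2))
D = C_true @ S_true.T
C, S = mcr_als(D, S_true + 0.05 * rng.random(S_true.shape))
print("residual:", np.linalg.norm(D - C @ S.T))
```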

Relevance:

100.00%

Abstract:

A combined data matrix consisting of high performance liquid chromatography–diode array detection (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN), and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct on the prediction set). Additionally, multiple linear regression (MLR) was utilized to explore relationships between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
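
Low-level data fusion of the kind described above can be sketched as block-wise autoscaling followed by concatenation; the data, dimensions, and classifier settings below are invented for illustration:

```python
# Sketch: fuse an HPLC (organic) block and an ICP-MS (elemental) block by
# autoscaling each separately, then concatenating into one combined matrix.
import numpy as np
from sklearn.preprocessing import scale
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
hplc = rng.random((36, 20))               # peak areas for organic components
icpms = rng.random((36, 12))              # trace-element concentrations
province = rng.integers(0, 3, 36)         # three provinces of origin

combined = np.hstack([scale(hplc), scale(icpms)])
knn = KNeighborsClassifier(n_neighbors=3)
for name, X in [("HPLC", scale(hplc)), ("ICP-MS", scale(icpms)),
                ("combined", combined)]:
    print(name, cross_val_score(knn, X, province, cv=4).mean())
```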