949 results for multivariate approximation
Abstract:
Many active pharmaceutical ingredients (APIs) have both anhydrate and hydrate forms. Because the physicochemical properties of solid forms differ, changes in the solid state may result in therapeutic, pharmaceutical, legal and commercial problems. In order to obtain good solid dosage form quality and performance, there is a constant need to understand and control these phase transitions during manufacturing and storage. Thus, it is important to detect and also quantify the possible transitions between the different forms. In recent years, vibrational spectroscopy has become an increasingly popular tool for characterising solid-state forms and their phase transitions. It offers several advantages over other characterisation techniques, including the ability to obtain molecular-level information, minimal sample preparation, and the possibility of monitoring changes non-destructively in-line. Dehydration is the phase transition of hydrates that is frequently encountered during dosage form production and storage. The aim of the present thesis was to investigate the dehydration behaviour of diverse pharmaceutical hydrates by near infrared (NIR), Raman and terahertz pulsed spectroscopy (TPS) monitoring together with multivariate data analysis. The goal was to reveal new perspectives for investigating dehydration at the molecular level. Solid-state transformations were monitored during dehydration of diverse hydrates on a hot stage. The results obtained from qualitative experiments were used to develop a method for, and perform, quantification of the solid-state forms during process-induced dehydration in a fluidised bed dryer. Both in situ and in-line process monitoring and quantification were performed. This thesis demonstrated the utility of vibrational spectroscopy techniques and multivariate modelling for monitoring and investigating dehydration behaviour in situ and during fluidised bed drying.
All three spectroscopic methods proved complementary in the study of dehydration. NIR spectroscopy models could quantify the solid-state forms in the binary system, but were unable to quantify all the forms in the quaternary system. Raman spectroscopy models, on the other hand, could quantify all four solid-state forms that appeared upon isothermal dehydration. The speed of spectroscopic methods makes them applicable to monitoring dehydration, and quantification of multiple forms was performed during the phase transition. Thus, solid-state structural information at the molecular level was obtained directly. TPS detected the intermolecular phonon modes, while Raman spectroscopy detected mostly changes in intramolecular vibrations. Both techniques revealed information about crystal structure changes. NIR spectroscopy, on the other hand, was more sensitive to the water content and the hydrogen-bonding environment of the water molecules. This study provides a basis for real-time process monitoring using vibrational spectroscopy during pharmaceutical manufacturing.
Abstract:
This paper deals with the optimal load flow problem in a fixed-head hydrothermal electric power system. Equality constraints on the volume of water available for active power generation at the hydro plants as well as inequality constraints on the reactive power generation at the voltage controlled buses are imposed. Conditions for optimal load flow are derived and a successive approximation algorithm for solving the optimal generation schedule is developed. Computer implementation of the algorithm is discussed, and the results obtained from the computer solution of test systems are presented.
Abstract:
In this dissertation, I present an overall methodological framework for studying linguistic alternations, focusing specifically on lexical variation in denoting a single meaning, that is, synonymy. As a practical example, I employ the synonymous set of the four most common Finnish verbs denoting THINK, namely ajatella, miettiä, pohtia and harkita ‘think, reflect, ponder, consider’. As a continuation of previous work, I describe in considerable detail the extension of statistical methods from dichotomous linguistic settings (e.g., Gries 2003; Bresnan et al. 2007) to polytomous ones, that is, those concerning more than two possible alternative outcomes. The applied statistical methods are arranged into a succession of stages of increasing complexity, proceeding from univariate via bivariate to multivariate techniques. As the central multivariate method, I argue for the use of polytomous logistic regression and demonstrate its practical application to the studied phenomenon, thus extending the work by Bresnan et al. (2007), who applied simple (binary) logistic regression to a dichotomous structural alternation in English. The results of the various statistical analyses confirm that a wide range of contextual features across different categories are indeed associated with the use and selection of the selected THINK lexemes; however, a substantial part of these features is not exemplified in current Finnish lexicographical descriptions. The multivariate analysis results indicate that the semantic classifications of syntactic argument types are on average the most distinctive feature category, followed by overall semantic characterizations of the verb chains, and then syntactic argument types alone, with morphological features pertaining to the verb chain and extra-linguistic features relegated to the last position.
In terms of the overall performance of the multivariate analysis and modeling, the prediction accuracy seems to reach a ceiling at a recall rate of roughly two-thirds of the sentences in the research corpus. The analysis of these results suggests a limit to what can be explained and determined from the immediate sentential context using the conventional descriptive and analytical apparatus based on currently available linguistic theories and models. The results also support Bresnan’s (2007) and others’ (e.g., Bod et al. 2003) probabilistic view of the relationship between linguistic usage and the underlying linguistic system, in which only a minority of linguistic choices are categorical, given the known context – represented as a feature cluster – that can be analytically grasped and identified. Instead, most contexts exhibit degrees of variation as to their outcomes, resulting in proportionate choices over longer stretches of usage in texts or speech.
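The polytomous extension of logistic regression argued for above can be sketched as softmax regression fit by gradient descent. The sketch below (plain numpy, with invented binary "context features" and three outcome classes standing in for the verb lexemes) illustrates the technique only; it is not the dissertation's actual implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_polytomous(X, y, n_classes, lr=0.5, n_iter=500):
    """Fit multinomial (polytomous) logistic regression by gradient descent."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]            # one-hot outcome matrix
    for _ in range(n_iter):
        P = softmax(X @ W)              # predicted class probabilities
        W -= lr * X.T @ (P - Y) / n     # gradient of the multinomial log-loss
    return W

rng = np.random.default_rng(0)
# toy data: 3 outcome classes, intercept + 2 binary context features
X = np.column_stack([np.ones(300), rng.integers(0, 2, (300, 2))])
true_W = np.array([[0.0, 1.0, -1.0], [2.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
y = np.array([rng.choice(3, p=p) for p in softmax(X @ true_W)])
W = fit_polytomous(X, y, 3)
P = softmax(X @ W)                      # fitted choice probabilities per context
```

Each row of `P` is then a probability distribution over the alternative outcomes for one context, matching the probabilistic view of choice described in the abstract.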
Abstract:
The paper studies stochastic approximation as a technique for bias reduction. The proposed method does not require approximating the bias explicitly, nor does it rely on having independent identically distributed (i.i.d.) data. The method always removes the leading bias term, under very mild conditions, as long as auxiliary samples from distributions with given parameters are available. Expectation and variance of the bias-corrected estimate are given. Examples in sequential clinical trials (non-i.i.d. case), curved exponential models (i.i.d. case) and length-biased sampling (where the estimates are inconsistent) are used to illustrate the applications of the proposed method and its small sample properties.
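A minimal illustration of the idea, in a concrete setting the paper does not prescribe: the biased MLE of a normal variance is corrected using auxiliary samples drawn at the estimated parameter, so no explicit bias formula is needed. This is a parametric-bootstrap-style sketch in the spirit of, not identical to, the paper's stochastic-approximation scheme:

```python
import numpy as np

def mle_var(x):
    """Biased MLE of the variance (divides by n, not n - 1)."""
    return np.mean((x - x.mean()) ** 2)

def bias_corrected(x, n_aux=2000, rng=None):
    """Remove the leading bias term using auxiliary samples drawn from the
    fitted model, instead of using an explicit bias formula."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    theta = mle_var(x)
    # auxiliary samples estimate E[theta_hat] at the fitted parameter
    aux = np.array([mle_var(rng.normal(x.mean(), np.sqrt(theta), n))
                    for _ in range(n_aux)])
    return theta - (aux.mean() - theta)   # subtract the estimated bias

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 20)              # true variance = 4, small sample n = 20
```

Because the MLE underestimates the variance, the corrected estimate should exceed the raw one, approaching the unbiased value.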
Abstract:
An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method for training a pattern classifier using unlabeled samples, is described. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision-making and estimation. It uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated, using the gradient, in a recursive fashion. The induced possibility of occurrence of each class is useful for estimation and is computed using (1) the membership of the new sample in that class and (2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure is defined in terms of the induced possibility distribution to measure the extent of learning. The method is illustrated with relevant examples.
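The fuzzy ISODATA objective used as the loss function above can be illustrated with the standard batch fuzzy c-means updates, which descend the same objective. This sketches only the underlying clustering objective; the paper's scheme is recursive, sample-by-sample, with exponential membership functions:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=50):
    """Batch fuzzy ISODATA / c-means: alternate membership and centre updates
    that descend the objective sum_ik u_ik^m ||x_i - v_k||^2."""
    V = X[[0, len(X) - 1]] if c == 2 else X[:c]   # simple deterministic init
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)         # fuzzy memberships, rows sum to 1
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # membership-weighted centres
    return U, V

rng = np.random.default_rng(1)
# two well-separated unlabeled clusters around (0, 0) and (3, 3)
X = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(3, 0.3, (60, 2))])
U, V = fuzzy_cmeans(X, 2)
```

The membership matrix `U` plays the role of the fuzzy decision on each unlabeled sample, while `V` holds the estimated class prototypes.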
Abstract:
In this work, we theoretically examine recent pump/probe photoemission experiments on the strongly correlated charge-density-wave insulator TaS2. We describe the general nonequilibrium many-body formulation of time-resolved photoemission in the sudden approximation, and then solve the problem using dynamical mean-field theory (DMFT) with the numerical renormalization group and a bare density of states calculated from density functional theory, including the charge-density-wave distortion of the ion cores and spin-orbit coupling. We find a number of interesting results: (i) the bare band structure actually has more dispersion in the perpendicular direction than in the two-dimensional planes; (ii) the DMFT approach can produce upper and lower Hubbard bands that resemble those in the experiment, but the upper bands will overlap in energy with other higher-energy bands; (iii) the effect of the finite width of the probe pulse on the shape of the photoemission spectra is minimal; and (iv) the quasiequilibrium approximation does not fully describe the behavior of this system.
Abstract:
The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between hard, intermediate and soft maize kernels from inbred lines was evaluated. NIR hyperspectral images of two sets (12 and 24 kernels) of whole maize kernels were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm and a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) was used on absorbance images to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between glassy and floury endosperm along principal component (PC) three on the MatrixNIR and PC two on the sisuChema, with two distinguishable clusters. Subsequently, partial least squares discriminant analysis (PLS-DA) was applied to build a classification model. The PLS-DA model from the MatrixNIR image (12 kernels) resulted in a root mean square error of prediction (RMSEP) of 0.18. This was repeated on the MatrixNIR image of the 24 kernels, which also resulted in an RMSEP of 0.18. The sisuChema image yielded an RMSEP of 0.29. The reproducible results obtained with the different data sets indicate that the method proposed in this paper has real potential for future classification uses.
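The exploratory PCA step can be sketched in a few lines: PCA of mean-centred "spectra" via SVD, with two synthetic pixel groups standing in for glassy and floury endosperm (toy data, not the MatrixNIR or sisuChema measurements):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centred data matrix (rows = pixel spectra)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # PC scores (projections)
    loadings = Vt[:n_components]                      # spectral loadings
    return scores, loadings

rng = np.random.default_rng(0)
# toy stand-in for pixel spectra: two groups differing in one absorption band each
A = rng.normal(0.0, 0.05, (50, 10)); A[:, 3] += 1.0   # "glassy"-like pixels
B = rng.normal(0.0, 0.05, (50, 10)); B[:, 7] += 1.0   # "floury"-like pixels
scores, loadings = pca(np.vstack([A, B]), 2)
```

The two groups separate into distinguishable clusters along the first PC of the score plot, mirroring the glassy/floury separation the abstract describes.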
Abstract:
The maximum independent set problem is NP-complete even when restricted to planar graphs, cubic planar graphs or triangle-free graphs. The problem of finding an absolute approximation also remains NP-complete. Various polynomial-time approximation algorithms for planar graphs have been proposed that guarantee a fixed worst-case ratio between the independent set size obtained and the maximum independent set size. In this paper we present a simple and efficient O(|V|) algorithm that guarantees a ratio of 1/2 for planar triangle-free graphs. The algorithm differs completely from other approaches in that it collects groups of independent vertices at a time. Certain bounds we obtain in this paper relate to some interesting questions in the theory of extremal graphs.
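For contrast with the paper's group-collecting approach, the conventional minimum-degree greedy baseline for independent sets can be sketched as follows (this is not the authors' algorithm, just the standard alternative such methods are compared against):

```python
def greedy_independent_set(adj):
    """Minimum-degree greedy maximal independent set.
    adj: dict mapping vertex -> set of neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # local mutable copy
    indep = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # lowest remaining degree
        indep.add(v)
        removed = adj.pop(v) | {v}                    # delete v and its neighbours
        for u in removed - {v}:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return indep

# a 6-cycle: planar and triangle-free; its maximum independent set has size 3
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
s = greedy_independent_set(cycle6)
```

On the 6-cycle the greedy baseline happens to find an optimal set of size 3; the point of the paper's ratio guarantee is that worst-case performance is bounded for all planar triangle-free inputs, not just favourable ones.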
Abstract:
Quality of fresh-cut carambola (Averrhoa carambola L.) is related to many chemical and biochemical variables, especially those involved in softening and browning, both influenced by storage temperature. To study these effects, multivariate analysis was used to evaluate slices packaged in vacuum-sealed polyolefin bags and stored at 2.5 °C, 5 °C and 10 °C for up to 16 d. The quality of slices at each temperature was correlated with the duration of storage, O2 and CO2 concentrations in the package, physicochemical constituents, and the activity of enzymes involved in softening (PG) and browning (PPO) metabolism. Three quality groups were identified by hierarchical cluster analysis, and the classification of the components within each of these groups was obtained from a principal component analysis (PCA). The characterization of samples by PCA clearly distinguished acceptable from non-acceptable slices. According to PCA, acceptable slices presented higher ascorbic acid content, greater hue angles (°h) and final lightness (L-5) in the first principal component (PC1), whereas non-acceptable slices presented higher total pectin content and PPO activity in PC1. Non-acceptable slices also presented higher soluble pectin content, increased pectin solubilisation and higher CO2 concentration in the second principal component (PC2), whereas acceptable slices showed lower total sugar content. The hierarchical cluster and PCA analyses were useful for discriminating the quality of slices stored at different temperatures.
Abstract:
Mango is an important horticultural fruit crop, and breeding is a key strategy for improving its ongoing sustainability. Knowledge of the breeding values of potential parents is important for maximising progress from breeding. This study successfully employed mixed linear model methods incorporating a pedigree to predict breeding values for average fruit weight from highly unbalanced data for genotypes planted over three field trials and assessed over several harvest seasons. Average fruit weight was found to be under strong additive genetic control. There was a high correlation between hybrids propagated as seedlings and hybrids propagated as scions grafted onto rootstocks. Estimates of additive genetic correlation among trials ranged from 0.69 to 0.88, with correlations among harvest seasons within trials greater than 0.96. These results suggest that progress from selection for broad adaptation can be achieved, particularly as no repeatable environmental factor that could be used to predict G x E could be identified. Predicted breeding values for 35 known cultivars are presented for use in ongoing breeding programs.
Abstract:
It is shown how the single-site coherent potential approximation and the averaged T-matrix approximation become exact in the calculation of the averaged single-particle Green function of the electron in the Anderson model when the site energy is distributed randomly with a Lorentzian distribution. Using these approximations, Lloyd's exact result is reproduced.
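Lloyd's result referred to above can be checked numerically: averaging the free single-site Green function 1/(z - eps) over Lorentzian-distributed site energies reproduces a single pole shifted by i*gamma. This is a Monte Carlo sanity check of the known result, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
e0, gamma = 0.3, 0.5          # centre and width of the Lorentzian site-energy distribution
z = 0.1 + 1.0j                # energy argument with positive imaginary part

# sample site energies from the Lorentzian (Cauchy) distribution
eps = e0 + gamma * rng.standard_cauchy(200_000)
G_avg = np.mean(1.0 / (z - eps))          # Monte Carlo disorder average

# Lloyd's exact result: the disorder average is a single shifted pole
G_exact = 1.0 / (z - e0 + 1j * gamma)
```

The agreement reflects the fact that, for Lorentzian disorder, the exact self-energy is simply -i*gamma, which is why the single-site approximations become exact.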
Abstract:
The family of location and scale mixtures of Gaussians has the ability to generate a number of flexible distributional forms. The family nests as particular cases several important asymmetric distributions, such as the Generalized Hyperbolic distribution. The Generalized Hyperbolic distribution in turn nests many other well-known distributions, such as the Normal Inverse Gaussian. In a multivariate setting, an extension of the standard location and scale mixture concept is proposed into a so-called multiple scaled framework, which has the advantage of allowing different tail and skewness behaviours in each dimension with arbitrary correlation between dimensions. Estimation of the parameters is provided via an EM algorithm and extended to cover the case of mixtures of such multiple scaled distributions for application to clustering. Assessments on simulated and real data confirm the gain in degrees of freedom and flexibility in modelling data of varying tail behaviour and directional shape.
Abstract:
We propose a family of multivariate heavy-tailed distributions that allow variable marginal amounts of tailweight. The originality comes from introducing multidimensional instead of univariate scale variables for the mixture of scaled Gaussian family of distributions. In contrast to most existing approaches, the derived distributions can account for a variety of shapes and have a simple tractable form with a closed-form probability density function whatever the dimension. We examine a number of properties of these distributions and illustrate them in the particular case of Pearson type VII and t tails. For these latter cases, we provide maximum likelihood estimation of the parameters and illustrate their modelling flexibility on simulated and real data clustering examples.
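A minimal sketch of the multidimensional-scale idea, under simplifying assumptions (scales applied in the coordinate axes rather than a fitted eigenbasis, Gamma mixing as in the t case): each dimension receives its own scale variable, so tail weight can differ per dimension while the Gaussian part carries the correlation:

```python
import numpy as np

def multiple_scaled_t(n, dof, corr, rng):
    """Draw from a 'multiple scaled' Gaussian mixture: each dimension gets its
    own Gamma scale variable (its own t-like tail), unlike the classical
    multivariate t where one scale is shared across all dimensions."""
    d = len(dof)
    L = np.linalg.cholesky(corr)
    Z = rng.standard_normal((n, d)) @ L.T                          # correlated Gaussian part
    W = rng.gamma(np.array(dof) / 2, 2 / np.array(dof), (n, d))    # per-dimension scales
    return Z / np.sqrt(W)

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
# heavy tails in dimension 0 (dof = 3), near-Gaussian tails in dimension 1
X = multiple_scaled_t(100_000, dof=[3.0, 100.0], corr=corr, rng=rng)
```

Dimension 0 should show markedly more mass beyond three standard units than dimension 1, illustrating the variable marginal tailweight the abstract describes.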
Abstract:
BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for quintessential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root mean square errors of prediction for validation sets were lowest for models built from Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse consortium of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits.
This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global, predictive models encompassing a diverse array of feedstocks.
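The PLS modelling step can be sketched with a bare-bones PLS1 (NIPALS) implementation on synthetic "spectra" (the data, component count and variable names here are invented; the study's models were built on real vibrational spectra with dedicated preprocessing):

```python
import numpy as np

def pls1(X, y, n_comp):
    """PLS1 regression via the classic NIPALS deflation scheme."""
    Xk, yk = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)                     # weight: X-y covariance direction
        t = Xk @ w                                 # scores
        p = Xk.T @ t / (t @ t)                     # X loadings
        qk = yk @ t / (t @ t)                      # y loading
        Xk, yk = Xk - np.outer(t, p), yk - qk * t  # deflate X and y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)            # regression coefficients
    return B, X.mean(0), y.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))                              # stand-in "spectra"
y = X[:, 5] - 0.5 * X[:, 12] + 0.05 * rng.normal(size=60)  # stand-in "S/G ratio"
B, xm, ym = pls1(X, y, 5)
y_hat = (X - xm) @ B + ym
rmsep = np.sqrt(np.mean((y_hat - y) ** 2))                 # here a calibration RMSE
```

In the study, the analogous error statistic is computed on held-out validation sets rather than on the calibration data as in this sketch.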
Abstract:
Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One of these methods is called the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach, in comparison with a competitor known as approximate Bayesian computation (ABC), together with its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
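The SL idea itself is compact enough to sketch: simulate summary statistics at a parameter value, fit a multivariate normal to them, and evaluate the observed summary under that fit. This is a generic illustration with an invented toy model, not the BSL algorithm of the paper:

```python
import numpy as np

def log_synthetic_likelihood(theta, s_obs, simulate, n_sim, rng):
    """Gaussian synthetic likelihood: simulate n_sim summary statistics at
    theta, fit a multivariate normal, evaluate the observed summary under it."""
    S = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])  # jitter for stability
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    d = len(s_obs)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

# toy model (an assumption, not from the paper): summaries are the sample mean
# and log-variance of 50 normal draws with unknown mean theta
def simulate(theta, rng):
    x = rng.normal(theta, 1.0, 50)
    return np.array([x.mean(), np.log(x.var())])

rng = np.random.default_rng(0)
s_obs = simulate(1.0, rng)                      # "observed" summary at theta = 1
ll_good = log_synthetic_likelihood(1.0, s_obs, simulate, 500, rng)
ll_bad = log_synthetic_likelihood(3.0, s_obs, simulate, 500, rng)
```

Plugged into an MCMC sampler, this noisy log-likelihood yields the pseudo-marginal interpretation of BSL that the paper develops.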