953 results for Linear multivariate methods
Abstract:
This study developed a gluten-free granola and evaluated it during storage, applying multivariate and regression analysis to the sensory and instrumental parameters. The physicochemical, sensory, and nutritional characteristics of a product containing quinoa, amaranth and linseed were evaluated. The crude protein and lipid contents were 97.49 and 122.72 g kg-1 of food, respectively. The polyunsaturated/saturated and n-6:n-3 fatty acid ratios were 2.82 and 2.59:1, respectively. The granola had the best alpha-linolenic acid content, nutritional indices in the lipid fraction, and mineral content. Good hygienic and sanitary conditions were maintained during storage, probably owing to the low water activity of the formulation, which contributed to inhibiting microbial growth. The sensory attributes ranged from 'like very much' to 'like slightly', and the regression models were well fitted and highly correlated over the storage period. A reduction in the sensory attribute scores and in the product's physical stability was verified by principal component analysis. The use of the affective acceptance test and instrumental analysis, combined with statistical methods, allowed us to obtain promising results about the characteristics of the gluten-free granola.
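As an illustration of how principal component analysis can summarise sensory and instrumental measurements over storage time, a minimal Python sketch follows; the attribute names and numbers are hypothetical and are not taken from the study.

```python
# Minimal PCA sketch for sensory/instrumental storage data (hypothetical values).
import numpy as np

# Rows: storage times (e.g. 0, 30, 60, 90, 120 days); columns: measured attributes.
attributes = ["appearance", "flavour", "crunchiness", "water_activity"]
X = np.array([
    [8.1, 8.0, 8.3, 0.30],
    [7.8, 7.7, 8.0, 0.32],
    [7.4, 7.3, 7.5, 0.35],
    [7.0, 6.9, 7.1, 0.37],
    [6.6, 6.4, 6.8, 0.40],
])

# Centre and scale each attribute, then take the SVD of the standardised matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)   # proportion of variance per component
scores = Z @ Vt.T                 # storage times projected on the principal components
loadings = Vt.T                   # attribute loadings on each component

print("Variance explained:", np.round(explained, 3))
print("PC1 loadings:", dict(zip(attributes, np.round(loadings[:, 0], 3))))
```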
Abstract:
This study aimed at comparing the wheat flour quality results obtained with the new Wheat Gluten Quality Analyser (WGQA) equipment with those obtained by the extensigraph and farinograph. Fifty-nine wheat samples were evaluated for protein and gluten contents; the rheological properties of gluten and wheat flour were assessed using the WGQA and the extensigraph/farinograph methods, respectively, in addition to the baking test. Principal component analysis (PCA) and linear regression were used to evaluate the results. The parameters of energy and maximum resistance to extension determined by the extensigraph and the WGQA showed acceptable linear correlations, ranging from 0.6071 to 0.6511. The PCA results obtained using the WGQA and the other rheological apparatus showed values similar to those expected for wheat flours in the baking test. Although all the equipment used was effective in assessing the behavior of strong and weak flours, the results for medium-strength wheat flour varied. The WGQA was shown to require a smaller amount of sample and to be faster and easier to use than the other instruments.
Abstract:
In this work we look at two different one-dimensional quantum systems. The potentials for these systems are a linear potential in an infinite well and an inverted harmonic oscillator in an infinite well. We solve the Schrödinger equation for both systems and obtain the energy eigenvalues and eigenfunctions. The solutions are obtained by using the boundary conditions and numerical methods. The motivation for our study comes from an experimental background. For the linear potential we have two different boundary conditions. The first is the so-called normal boundary condition, in which the wave function goes to zero at the edge of the well. The second is the derivative boundary condition, in which the derivative of the wave function goes to zero at the edge of the well. The actual solutions are Airy functions. In the case of the inverted oscillator the solutions are parabolic cylinder functions, and they are solved only using the normal boundary condition. Both potentials are compared with the particle-in-a-box solutions. We also present figures and tables that show what the solutions look like. The similarities and differences with the particle-in-a-box solution are also shown visually. The figures and calculations are produced using mathematical software. We also compare the linear potential to a case where the infinite wall is only on the left side, and for this case we show graphical information about its different properties. With the inverted harmonic oscillator we take a closer look at quantum mechanical tunneling. We present some of the history of quantum tunneling theory and its developers, and finally we present the Feynman path integral theory, which enables us to obtain the instanton solutions. The instanton solutions are a way to look at the tunneling properties of the quantum system. The results are compared with the solutions of the double-well potential, which as a quantum system is very similar to our case. The solutions are obtained using the same methods, which makes the comparison relatively easy. All in all, we go through some of the stages of quantum theory and look at the different ways to interpret it. We also present the special functions needed in our solutions and examine their properties and their relations to other special functions. It is essential to notice that different mathematical formalisms can be used to get the desired result. Quantum theory has been built up for over one hundred years and has several different approaches, each of which makes it possible to examine different aspects of a problem.
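As a rough illustration of the Airy-function solutions mentioned above, the sketch below computes the lowest energy levels for the simplest related case, a linear potential with an infinite wall only on the left and the normal boundary condition; dimensionless units are assumed, and this is not the thesis's own code.

```python
# Hedged sketch: energy levels for a particle above an infinite wall at x = 0
# with a linear potential V(x) = x (dimensionless units), i.e. the one-sided
# case mentioned in the abstract.  With the "normal" boundary condition
# psi(0) = 0, the eigenfunctions are psi_n(x) = Ai(x - E_n) and the energies
# are minus the zeros of the Airy function Ai.
import numpy as np
from scipy.special import airy, ai_zeros

n_levels = 5
a_n, _, _, _ = ai_zeros(n_levels)   # first zeros of Ai (all negative)
energies = -a_n                     # E_n = -a_n in these scaled units

for n, E in enumerate(energies, start=1):
    # Check that the wave function really vanishes at the wall.
    psi_at_wall = airy(0.0 - E)[0]  # airy() returns (Ai, Ai', Bi, Bi')
    print(f"E_{n} = {E:.4f},  Ai(-E_n) = {psi_at_wall:.2e}")
```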
Abstract:
The problem of solving sparse linear systems over the field GF(2) remains a challenge. A popular approach is to improve existing methods such as the block Lanczos method (the Montgomery method) and the Wiedemann-Coppersmith method. Both methods are considered in detail in this thesis: their modifications are described and computational estimates are given for each step. The thesis identifies the most computationally demanding parts of these methods and shows how the computations can be improved from a software point of view. The research provides an implementation of a library of accelerated binary matrix operations which speeds up the main steps of the Montgomery and Wiedemann-Coppersmith methods.
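As a hedged sketch of the kind of accelerated binary matrix operation such a library builds on (not the thesis implementation itself), the following packs GF(2) rows into integer bitmasks so that a matrix-vector product reduces to bitwise AND and parity operations:

```python
# Hedged sketch (not the thesis library): a GF(2) matrix-vector product with
# rows packed into Python integers, so each row-vector dot product reduces to
# a bitwise AND followed by a parity (popcount mod 2) of the result.
def pack_row(bits):
    """Pack a list of 0/1 entries into a single integer bitmask."""
    word = 0
    for j, b in enumerate(bits):
        if b:
            word |= 1 << j
    return word

def matvec_gf2(packed_rows, packed_vec):
    """Return A @ v over GF(2), with A given as packed row bitmasks."""
    result = 0
    for i, row in enumerate(packed_rows):
        parity = bin(row & packed_vec).count("1") & 1  # dot product mod 2
        result |= parity << i
    return result

# Tiny example: a 3x3 binary matrix times a binary vector.
A = [pack_row(r) for r in ([1, 0, 1], [1, 1, 0], [0, 1, 1])]
v = pack_row([1, 1, 0])
print(format(matvec_gf2(A, v), "03b"))  # "101": rows 0 and 2 give 1, row 1 gives 0 (printed MSB-first)
```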
Abstract:
Linear alkylbenzenes, LAB, formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels at which they are present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by the analyses of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of the commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu. The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a 'soft ionization technique', chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percentage of the total sample weight, as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full scan and single ion monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method that was developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study.
However, further investigation of these compounds will be the subject of a future study.
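As a hedged illustration of single-point internal-standard quantitation of the kind described above (the peak areas, masses and response factor below are invented, not the study's data):

```python
# Hedged sketch of single-point internal-standard quantitation (illustrative
# numbers only).  The analyte amount is estimated from the analyte/internal-
# standard peak-area ratio, scaled by the known amount of internal standard
# and a relative response factor (RRF).
def quantify_mg_per_g(area_analyte, area_is, mass_is_mg, mass_sample_g, rrf=1.0):
    """Return analyte content in mg per g of sample."""
    analyte_mg = (area_analyte / area_is) * mass_is_mg / rrf
    return analyte_mg / mass_sample_g

# Example: hypothetical dialkyltetralin peak vs. the tetramethyltetralin standard.
print(quantify_mg_per_g(area_analyte=2.4e5, area_is=1.0e6,
                        mass_is_mg=0.50, mass_sample_g=0.010))  # -> 12.0 mg/g
```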
Abstract:
Behavioral researchers commonly use single subject designs to evaluate the effects of a given treatment. Several different methods of data analysis are used, each with its own set of methodological strengths and limitations. Visual inspection is commonly used as a method of analyzing data, assessing the variability, level, and trend both within and between conditions (Cooper, Heron, & Heward, 2007). In an attempt to quantify treatment outcomes, researchers developed two methods for analyzing data called Percentage of Non-overlapping Data Points (PND) and Percentage of Data Points Exceeding the Median (PEM). The purpose of the present study is to compare and contrast the use of Hierarchical Linear Modelling (HLM), PND and PEM in single subject research. The present study used 39 behaviours across 17 participants to compare the treatment outcomes of a group cognitive behavioural therapy program, using PND, PEM, and HLM on three response classes of Obsessive Compulsive Behaviour in children with Autism Spectrum Disorder. Findings suggest that PEM and HLM complement each other and both add invaluable information to the overall treatment results. Future research should consider using both PEM and HLM when analysing single subject designs, specifically for grouped data with variability.
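For readers unfamiliar with the two non-overlap metrics, a minimal sketch of how PND and PEM are computed for a single behaviour follows; the data are invented and assume the treatment is expected to decrease the behaviour.

```python
# Hedged sketch of PND and PEM for a single-subject data series where the
# treatment is expected to *decrease* the behaviour (illustrative data only).
import statistics

baseline  = [9, 8, 10, 9, 11]
treatment = [8, 6, 5, 7, 4, 9]

# PND: share of treatment points below the most extreme (lowest) baseline point.
pnd = 100 * sum(x < min(baseline) for x in treatment) / len(treatment)

# PEM: share of treatment points on the therapeutic side of the baseline median.
pem = 100 * sum(x < statistics.median(baseline) for x in treatment) / len(treatment)

print(f"PND = {pnd:.0f}%, PEM = {pem:.0f}%")  # -> PND = 67%, PEM = 83%
```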
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
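As a hedged illustration of the Monte Carlo test principle the paper relies on (with a placeholder statistic, not the paper's LR/LM/QLR tests):

```python
# Hedged sketch of the Monte Carlo (MC) test principle: when a test statistic is
# pivotal under the null, an exact p-value can be obtained by simulating the
# statistic under the null and ranking the observed value among the simulated
# ones (Dwass 1957, Barnard 1963).  The statistic below is a placeholder.
import numpy as np

def mc_pvalue(observed_stat, simulate_null_stat, n_rep=99, seed=0):
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_null_stat(rng) for _ in range(n_rep)])
    # Exact MC p-value: rank of the observed statistic among the simulated ones.
    return (np.sum(sims >= observed_stat) + 1) / (n_rep + 1)

# Toy example: test of zero correlation between two disturbance series.
def corr_stat(x, y):
    return abs(np.corrcoef(x, y)[0, 1])

n = 30
rng = np.random.default_rng(1)
x, y = rng.standard_normal(n), rng.standard_normal(n)
obs = corr_stat(x, y)
p = mc_pvalue(obs, lambda r: corr_stat(r.standard_normal(n), r.standard_normal(n)))
print(f"MC p-value = {p:.3f}")
```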
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points which are needed to perform the tests have been derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and the related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results on the aggregation of weak GARCH and on continuous-time GARCH modeling can be recovered in our framework.
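For reference, a minimal sketch of the standard GARCH(1,1) recursion that the paper takes as its starting point; the parameter values are illustrative, and the SR-SARV state-space extension itself is not implemented here.

```python
# Hedged sketch of the standard GARCH(1,1) recursion (illustrative parameters):
# sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},  eps_t = sigma_t z_t.
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.08, beta=0.90, seed=0):
    rng = np.random.default_rng(seed)
    sigma2 = np.empty(n)
    eps = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps, sigma2

eps, sigma2 = simulate_garch11(1000)
# Here sigma2_t is an exact linear function of the lagged squared innovation and
# lagged variance -- the restriction the SR-SARV state-space formulation relaxes.
print(eps.var(), sigma2.mean())
```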
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
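As a hedged sketch of the Gaussian benchmark case (a GRS-type mean-variance efficiency statistic computed on simulated data, not the paper's exact Monte Carlo tests or its CAPM data):

```python
# Hedged sketch of a GRS-type mean-variance efficiency test in the single-factor
# MLR setting: regress each asset's excess return on the market excess return and
# test whether all intercepts (alphas) are jointly zero.  Data are simulated.
import numpy as np
from scipy import stats

def grs_test(excess_returns, market_excess):
    """excess_returns: T x N asset excess returns; market_excess: length-T factor."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), market_excess])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)   # rows: alphas, betas
    alphas = coef[0]
    resid = excess_returns - X @ coef
    Sigma = resid.T @ resid / T                                 # MLE residual covariance
    sharpe2 = market_excess.mean() ** 2 / market_excess.var()   # squared market Sharpe ratio
    stat = (T - N - 1) / N * alphas @ np.linalg.solve(Sigma, alphas) / (1 + sharpe2)
    return stat, 1 - stats.f.cdf(stat, N, T - N - 1)

rng = np.random.default_rng(0)
T, N = 240, 5
mkt = 0.005 + 0.04 * rng.standard_normal(T)
betas = rng.uniform(0.5, 1.5, N)
R = np.outer(mkt, betas) + 0.02 * rng.standard_normal((T, N))   # zero alphas by construction
print(grs_test(R, mkt))
```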
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and in general does not provide inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these projection techniques could so far be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics”, and the resulting confidence sets can be viewed as extensions of the usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
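As a hedged sketch of the Anderson-Rubin statistic that the projection-based confidence sets are built from (toy simulated data; the version with included exogenous regressors and the analytic quadric projections are not implemented here):

```python
# Hedged sketch of the Anderson-Rubin statistic: AR(beta0) tests H0: beta = beta0
# in y = Y beta + u by checking whether the instruments Z still explain y - Y beta0.
import numpy as np
from scipy import stats

def anderson_rubin(y, Y, Z, beta0):
    T, k = Z.shape
    u0 = y - Y @ beta0
    fitted = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # projection of u0 on span(Z)
    ssr_explained = fitted @ fitted
    ssr_residual = u0 @ u0 - ssr_explained
    stat = (ssr_explained / k) / (ssr_residual / (T - k))
    return stat, 1 - stats.f.cdf(stat, k, T - k)

# A confidence set for beta collects the beta0 values whose p-value exceeds the
# chosen level; the paper shows how to project such sets analytically.
rng = np.random.default_rng(0)
T, k = 200, 3
Z = rng.standard_normal((T, k))
Y = (Z @ np.array([0.8, 0.5, 0.2]) + rng.standard_normal(T)).reshape(-1, 1)
y = 1.5 * Y[:, 0] + rng.standard_normal(T)
print(anderson_rubin(y, Y, Z, np.array([1.5])))
```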
Abstract:
In this paper, we study the asymptotic distribution of a simple two-stage (Hannan-Rissanen-type) linear estimator for stationary invertible vector autoregressive moving average (VARMA) models in the echelon form representation. General conditions for consistency and asymptotic normality are given. A consistent estimator of the asymptotic covariance matrix of the estimator is also provided, so that tests and confidence intervals can easily be constructed.
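As a hedged illustration of the two-stage linear estimation idea in the simplest univariate ARMA(1,1) case (the paper's echelon-form VARMA setting is not covered by this sketch):

```python
# Hedged sketch of two-stage (Hannan-Rissanen-type) linear estimation for an
# ARMA(1,1): y_t = phi*y_{t-1} + eps_t + theta*eps_{t-1}.  Stage 1 fits a long
# autoregression by OLS to approximate the innovations; stage 2 regresses y_t on
# y_{t-1} and the lagged stage-1 residuals, again by OLS.
import numpy as np

def hannan_rissanen_arma11(y, long_ar_order=10):
    n = len(y)
    # Stage 1: long AR regression of y_t on its first long_ar_order lags.
    X1 = np.column_stack([y[long_ar_order - k - 1:n - k - 1] for k in range(long_ar_order)])
    y1 = y[long_ar_order:]
    ar_coefs, *_ = np.linalg.lstsq(X1, y1, rcond=None)
    resid = y1 - X1 @ ar_coefs

    # Stage 2: OLS of y_t on y_{t-1} and the lagged stage-1 residual.
    yy = y1[1:]
    X2 = np.column_stack([y1[:-1], resid[:-1]])
    (phi_hat, theta_hat), *_ = np.linalg.lstsq(X2, yy, rcond=None)
    return phi_hat, theta_hat

# Simulate an ARMA(1,1) and recover the parameters approximately.
rng = np.random.default_rng(0)
phi, theta, n = 0.6, 0.3, 5000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]
print(hannan_rissanen_arma11(y))  # roughly (0.6, 0.3)
```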
Abstract:
We derive conditions that must be satisfied by the primitives of the problem in order for an equilibrium in linear Markov strategies to exist in some common property natural resource differential games. These conditions impose restrictions on the admissible form of the natural growth function, given a benefit function, or on the admissible form of the benefit function, given a natural growth function.