950 results for Mathematical and statistical techniques
Abstract:
The importance of medicinal plants and their use in industrial applications is increasing worldwide, especially in Brazil. Phyllanthus species, popularly known as "quebra-pedras" in Brazil, are used in folk medicine to treat urinary infections and renal calculus. This paper reports an authenticity study of herbal drugs from Phyllanthus species, comparing commercial and authentic samples by means of spectroscopic techniques, FT-IR, ¹H HR-MAS NMR and ¹H NMR in solution, combined with chemometric analysis. The spectroscopic techniques evaluated, coupled with chemometric methods, show great potential for the investigation of complex matrices. Furthermore, several metabolites were identified by the NMR techniques.
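As a rough illustration of the chemometric step mentioned in this abstract, the sketch below runs a principal component analysis on a placeholder spectral matrix; the data, preprocessing choices, and sample labels are assumptions made for the example, not details taken from the paper.

```python
# Sketch only: PCA exploration of FT-IR / 1H NMR spectra for authenticity
# screening. `spectra` and `labels` are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 500))               # rows = samples, columns = spectral points
labels = np.array(["authentic"] * 10 + ["commercial"] * 10)

X = StandardScaler().fit_transform(spectra)        # mean-center and scale each variable
pca = PCA(n_components=2)
scores = pca.fit_transform(X)                      # sample coordinates on PC1/PC2

for lab in np.unique(labels):
    sel = labels == lab
    print(lab, "mean PC1 score:", round(float(scores[sel, 0].mean()), 2))
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```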
Abstract:
Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support the efficient development of such systems; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about networked system development is growing, yet the mathematical background they require is a barrier that prevents network designers from employing them efficiently. As a result, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable network developments. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, namely wireless sensor network architectures, as well as to a pervasive class of static networks, namely network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks through varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).
Abstract:
Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses, of the modelling approaches, and of the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in a significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness by means of various examples and detailed descriptions of the mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to extract as much information as possible from noisy data. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
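The sketch below illustrates one basic Bayesian tool of the kind reviewed above: a plain Metropolis sampler for the posterior of a simple circular-orbit (sinusoidal) radial-velocity model. The model, priors, proposal scales, and simulated Doppler data are all assumptions made for the example, not the thesis' actual algorithms.

```python
# Sketch only: Metropolis sampling of the posterior for a circular-orbit
# radial-velocity model v(t) = K*sin(2*pi*t/P + phi) + gamma with Gaussian
# noise of known sigma. Data, priors and proposal scales are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 40))
sigma = 2.0
v = 5.0 * np.sin(2 * np.pi * t / 12.3 + 0.7) + 1.0 + rng.normal(0, sigma, t.size)

def log_post(theta):
    K, P, phi, gamma = theta
    if not (0.0 < K < 50.0 and 1.0 < P < 100.0):   # flat priors on a bounded box
        return -np.inf
    model = K * np.sin(2 * np.pi * t / P + phi) + gamma
    return -0.5 * np.sum((v - model) ** 2) / sigma ** 2

theta = np.array([3.0, 10.0, 0.0, 0.0])            # starting point
lp = log_post(theta)
step = np.array([0.3, 0.2, 0.1, 0.2])              # proposal standard deviations
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                     # discard burn-in
print("posterior means (K, P, phi, gamma):", chain.mean(axis=0).round(2))
```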
Abstract:
Standard techniques for radioautography used in biological and medical research can be classified into three categories: macroscopic radioautography, light microscopic radioautography, and electron microscopic radioautography. The routine techniques used in these three procedures are described. With regard to macroscopic radioautography, whole-body radioautography is a standard technique that employs freezing and cryosectioning and can demonstrate the organ distributions of both soluble and insoluble compounds. In contrast, in light and electron microscopic radioautography, soluble and insoluble techniques are treated separately. To demonstrate insoluble labeled compounds, conventional chemical fixations, such as formalin for light microscopy or buffered glutaraldehyde and osmium tetroxide for both light and electron microscopy, followed by dehydration, embedding, and wet-mounting application of radioautographic emulsions, can be used. For the demonstration of soluble labeled compounds, however, cryotechniques such as cryofixation, cryosectioning, freeze-drying, and freeze-substitution, followed by dry-sectioning and dry-mounting radioautography, should be employed for both light and electron microscopy. These techniques, which should be utilized in various fields of biological and medical research, are outlined in detail.
Abstract:
An interesting fact about language cognition is that stimulation involving incongruence in the merge operation between verb and complement has often been related to a negative event-related potential (ERP) of augmented amplitude and a latency of ca. 400 ms, the N400. Using an automatic ERP latency and amplitude estimator to facilitate the recognition of waves with a low signal-to-noise ratio, the objective of the present study was to characterize the N400 statistically in 24 volunteers. Stimulation consisted of 80 experimental sentences (40 congruous and 40 incongruous), generated in Brazilian Portuguese, involving two distinct local verb-argument combinations (nominal object and pronominal object series). For each volunteer, the EEG was simultaneously acquired at 20 derivations, topographically localized according to the 10-20 International System. A computerized routine for automatic N400-peak marking (based on the ascending zero-crossing of the first derivative of the waveform) was applied to the estimated individual ERP waveforms for congruous and incongruous sentences in both series at all topographic derivations. Peak-to-peak N400 amplitude was significantly augmented (P < 0.05; one-sided Wilcoxon signed-rank test) due to incongruence in derivations F3, T3, C3, Cz, T5, P3, Pz, and P4 for the nominal object series and in P3, Pz, and P4 for the pronominal object series. The results also indicated high inter-individual variability in ERP waveforms, suggesting that the usual procedure of grand averaging might not be a generally adequate approach. Hence, statistical signal-processing techniques that allow waveform analysis at low signal-to-noise ratios should be applied in neurolinguistic ERP studies.
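A minimal sketch of the two analysis steps described above, using made-up ERP traces: the N400 peak is marked at an ascending zero-crossing of the first derivative of the waveform (a local minimum) inside an assumed 300-500 ms window, and peak amplitudes are compared between conditions with a one-sided Wilcoxon signed-rank test. Sampling rate, search window, and data are assumptions for the example.

```python
# Sketch only: N400 peak marking via the ascending zero-crossing of the first
# derivative, followed by a one-sided Wilcoxon signed-rank test on peak
# amplitudes. ERP traces, sampling rate and search window are hypothetical.
import numpy as np
from scipy.stats import wilcoxon

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)              # 0-800 ms epoch

def n400_peak(erp, lo=0.3, hi=0.5):
    """Most negative local minimum in [lo, hi] s; local minima are points where
    the first derivative crosses from negative to non-negative."""
    idx = np.flatnonzero((t >= lo) & (t <= hi))
    d = np.diff(erp[idx])
    zc = np.flatnonzero((d[:-1] < 0) & (d[1:] >= 0))
    cand = idx[zc + 1] if zc.size else idx          # fallback: whole window
    k = cand[np.argmin(erp[cand])]
    return t[k], erp[k]

rng = np.random.default_rng(2)
amp_con, amp_inc = [], []
for _ in range(24):                                 # 24 hypothetical volunteers
    base = rng.normal(0, 0.3, t.size)
    con = base - 2.0 * np.exp(-((t - 0.40) ** 2) / 0.002)   # congruous average ERP
    inc = base - 4.0 * np.exp(-((t - 0.41) ** 2) / 0.002)   # incongruous average ERP
    amp_con.append(n400_peak(con)[1])
    amp_inc.append(n400_peak(inc)[1])

stat, p = wilcoxon(amp_inc, amp_con, alternative="less")    # incongruous more negative?
print(f"Wilcoxon signed-rank: W = {stat:.1f}, one-sided p = {p:.4f}")
```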
Abstract:
The influence of some process variables on the productivity of the fractions (liquid yield times fraction percent) obtained from supercritical fluid extraction (SCFE) of a Brazilian mineral coal, using isopropanol and ethanol as primary solvents, is analyzed using statistical techniques. A full factorial 2³ experimental design was adopted to investigate the effects of the process variables (temperature, pressure, and cosolvent concentration) on the extraction products. The extracts were analyzed by the Preparative Liquid Chromatography-8 fractions method (PLC-8), a reliable, non-destructive solvent fractionation method developed especially for coal-derived liquids. Empirical statistical modeling was carried out in order to reproduce the experimental data; the correlations obtained were always greater than 0.98. Four specific process criteria were used to allow process optimization. The results show that it is not possible to maximize both extract productivity and purity (through the minimization of the heavy fraction content) simultaneously by manipulating the aforementioned process variables.
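A minimal sketch of a 2³ full factorial analysis of the kind described: build the coded design matrix for temperature, pressure, and cosolvent concentration, fit an empirical linear model with two-factor interactions by least squares, and report the correlation between model and data. The response values are hypothetical, not the paper's measurements.

```python
# Sketch only: coded 2^3 full factorial design for temperature (T), pressure (P)
# and cosolvent concentration (C), with an empirical linear model fitted by
# least squares. The productivity values are hypothetical.
import itertools
import numpy as np

design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))   # 8 coded runs
T, P, C = design.T
X = np.column_stack([np.ones(8), T, P, C, T * P, T * C, P * C])      # intercept, main effects, 2-way interactions
y = np.array([12.1, 14.3, 13.0, 16.8, 12.9, 15.5, 14.1, 18.2])       # made-up fraction productivities

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r = np.corrcoef(y, y_hat)[0, 1]
print("fitted coefficients:", beta.round(3))
print("correlation between fitted and observed values:", round(float(r), 4))
```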
Abstract:
Cooked ham is considered a high-value product due to the quality of its raw material. Although its consumption is still low in Brazil, it is increasing due to the rising purchasing power of sectors of the population. This study aimed to assess the microbiological, physicochemical, rheological, and sensory quality of cooked hams (n=11) marketed in Brazil. All samples showed microbiological results within the standards established by Brazilian legislation. Eight of the eleven samples studied met all the legal requirements; two samples violated the standards due to the addition of starch; one sample had lower protein content than the minimum required, and another one had sodium content higher than that stated on the label. The use of Hierarchical Cluster Analysis allowed the agglomeration of the samples into three groups with distinct quality traits and with significant differences in moisture content, chromaticity, syneresis, and heating and freezing loss. Principal Component Analysis showed that the samples which correlated to higher sensory acceptance regarding flavor and overall acceptability were those with higher moisture, protein, fat, and luminosity values. This study confirmed the efficacy of multivariate statistical techniques in assessing the quality of commercial cooked hams and in indicating the physicochemical parameters associated with the perception of product quality.
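A minimal sketch of the hierarchical clustering step, assuming the physicochemical measurements of the 11 samples are stored as a matrix; the data here are randomly generated placeholders, and the choice of Ward linkage is an assumption.

```python
# Sketch only: Ward hierarchical clustering of standardized physicochemical
# measurements for the 11 samples, cut into three groups. Data are random
# placeholders; variable names mirror those in the abstract.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(3)
variables = ["moisture", "chromaticity", "syneresis", "heating_loss", "freezing_loss"]
data = rng.normal(size=(11, len(variables)))        # placeholder for measured values

Z = linkage(zscore(data, axis=0), method="ward")    # Ward linkage on standardized variables
groups = fcluster(Z, t=3, criterion="maxclust")     # cut the dendrogram into 3 clusters
print("cluster assignment per sample:", groups)
```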
Abstract:
The infinitesimal differential quantum Monte Carlo (QMC) technique is used to estimate electrostatic polarizabilities of the H and He atoms up to the sixth order in the electric field perturbation. All 542 different QMC estimators of the nonzero atomic polarizabilities are derived and used in order to decrease the statistical error and to obtain the maximum efficiency of the simulations. We are confident that the estimates are "exact" (free of systematic error): the two atoms are nodeless systems, hence no fixed-node error is introduced. Furthermore, we develop and use techniques which eliminate the systematic error inherent in extrapolating our results to zero time-step and large stack-size. The QMC results are consistent with published accurate values obtained using perturbation methods. The precision is found to be related to the number of perturbations, varying from 2 to 4 significant digits.
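The abstract mentions extrapolation to zero time-step; the generic sketch below shows one common way such an extrapolation can be done (an error-weighted polynomial fit) with hypothetical numbers. It is not the paper's differential estimator technique.

```python
# Generic sketch (not the paper's estimators): error-weighted extrapolation of
# QMC estimates, obtained at several finite time steps, to the zero-time-step
# limit. All numbers are hypothetical.
import numpy as np

tau = np.array([0.08, 0.04, 0.02, 0.01])             # time steps used in the runs
value = np.array([4.62, 4.55, 4.52, 4.51])           # QMC estimate at each time step
err = np.array([0.02, 0.02, 0.03, 0.04])             # statistical error bars

coeffs = np.polyfit(tau, value, deg=2, w=1.0 / err)  # weighted quadratic fit in tau
print("extrapolated value at tau -> 0:", round(float(np.polyval(coeffs, 0.0)), 3))
```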
Abstract:
This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply these new estimators to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
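A minimal sketch of the augmentation idea, with simulated data: regress y_t on x_t together with leads and lags of the first difference of x_t. For brevity the sketch uses ordinary least squares rather than the feasible GLS step with a long-run covariance matrix; the data and the number of leads/lags are assumptions.

```python
# Sketch only: augment the cointegrating regression with leads and lags of the
# first-differenced regressor and estimate by OLS (the paper uses feasible GLS
# with a long-run covariance matrix). Data and p = 2 leads/lags are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 2
x = np.cumsum(rng.normal(size=n))                   # integrated, I(1), regressor
y = 0.5 + 1.5 * x + rng.normal(size=n)              # cointegrated, true beta = 1.5
dx = np.diff(x, prepend=x[0])                       # first differences of x

X = np.column_stack(
    [np.ones(n - 2 * p), x[p:n - p]]
    + [dx[p + k: n - p + k] for k in range(-p, p + 1)]   # dx_{t-p}, ..., dx_{t+p}
)
beta, *_ = np.linalg.lstsq(X, y[p:n - p], rcond=None)
print("estimated cointegrating coefficient:", round(float(beta[1]), 3))
```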
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are twofold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the LR statistic is exploited to derive general nuisance-parameter-free bounds on its distribution for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap), which may be applied when the bounds are not conclusive.
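A minimal sketch of a Monte Carlo test of a uniform linear hypothesis in a small multivariate regression system, using Wilks' LR criterion. Because the null distribution of the statistic, given the regressors, does not depend on the error covariance (the invariance property referred to above), it can be simulated with standard normal errors; the data and dimensions are made up.

```python
# Sketch only: Monte Carlo test of H0: zero slopes in every equation of a small
# multivariate regression, based on Wilks' LR criterion; under H0 the statistic's
# distribution (given X) is free of the error covariance, so standard normal
# errors can be used in the simulations. Data are made up.
import numpy as np

rng = np.random.default_rng(5)
n, m = 40, 3                                           # observations, equations
X1 = np.column_stack([np.ones(n), rng.normal(size=n)]) # unrestricted regressors
X0 = X1[:, :1]                                         # restricted model: intercept only

def wilks(Y, X_unres, X_res):
    """Wilks' lambda = det(unrestricted residual SSCP) / det(restricted residual SSCP)."""
    def sscp(X, Y):
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        E = Y - X @ B
        return E.T @ E
    return np.linalg.det(sscp(X_unres, Y)) / np.linalg.det(sscp(X_res, Y))

Y_obs = rng.normal(size=(n, m)) + 0.4 * X1[:, 1:2]     # mild violation of H0
lam_obs = wilks(Y_obs, X1, X0)

N = 999                                                # Monte Carlo replications
lam_sim = np.array([wilks(rng.normal(size=(n, m)), X1, X0) for _ in range(N)])
p_mc = (1 + np.sum(lam_sim <= lam_obs)) / (N + 1)      # small lambda is evidence against H0
print(f"Wilks lambda = {lam_obs:.3f}, Monte Carlo p-value = {p_mc:.3f}")
```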
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies have sought to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics as well as (when relevant) by unidentified nuisance parameters under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
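A minimal sketch of the Monte Carlo test idea applied to one of the standard criteria listed above, the Goldfeld-Quandt statistic: under the null of homoskedastic Gaussian errors the statistic is pivotal given the regressors, so an exact p-value can be obtained by simulation. All data here are simulated for the example.

```python
# Sketch only: Monte Carlo (MC) version of the Goldfeld-Quandt test. Under
# homoskedastic Gaussian errors the statistic is pivotal given X, so simulating
# with beta = 0 and sigma = 1 suffices. All data are made up.
import numpy as np

rng = np.random.default_rng(6)
n = 60
X = np.column_stack([np.ones(n), np.sort(rng.normal(size=n))])   # sorted by the regressor

def gq_stat(y, X, drop=10):
    """Goldfeld-Quandt ratio of residual variances from the two end subsamples."""
    k = (len(y) - drop) // 2
    def resid_var(Xs, ys):
        b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r = ys - Xs @ b
        return r @ r / (len(ys) - Xs.shape[1])
    return resid_var(X[-k:], y[-k:]) / resid_var(X[:k], y[:k])

# observed sample: error variance grows with the regressor (heteroskedastic alternative)
y_obs = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + 0.8 * np.abs(X[:, 1]))
s_obs = gq_stat(y_obs, X)

N = 999
s_sim = np.array([gq_stat(rng.normal(size=n), X) for _ in range(N)])   # null draws
p_mc = (1 + np.sum(s_sim >= s_obs)) / (N + 1)
print(f"Goldfeld-Quandt statistic = {s_obs:.2f}, MC p-value = {p_mc:.3f}")
```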
Abstract:
In this paper, we analyze recent developments in econometrics in light of the theory of statistical tests. We first review some fundamental principles from the philosophy of science and statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which test procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze several particular cases of such problems: (1) the construction of confidence intervals in structural models affected by identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or the dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be established in finite samples.
Abstract:
We consider a probabilistic approach to the problem of assigning k indivisible identical objects to a set of agents with single-peaked preferences. Using the ordinal extension of preferences, we characterize the class of uniform probabilistic rules by Pareto efficiency, strategy-proofness, and no-envy. We also show that in this characterization no-envy cannot be replaced by anonymity. When agents are strictly risk-averse von Neumann-Morgenstern utility maximizers, we reduce the problem of assigning k identical objects to a problem of allocating the amount k of an infinitely divisible commodity.
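For illustration, a small sketch of the uniform rule for dividing the amount k of an infinitely divisible commodity among agents with single-peaked preferences, which is the problem the assignment problem is reduced to above; the peaks and quantities are made up.

```python
# Sketch only: the uniform rule for dividing the amount k of a divisible
# commodity among agents whose single-peaked preferences are summarized by
# their peaks. Peaks and k are hypothetical.

def uniform_rule(peaks, k):
    """Each agent gets min(peak, lam) under excess demand (sum of peaks >= k)
    and max(peak, lam) under excess supply, with lam set so shares sum to k."""
    demand = sum(peaks) >= k
    cap = min if demand else max
    total = lambda lam: sum(cap(p, lam) for p in peaks)
    lo, hi = 0.0, max(max(peaks), k)
    for _ in range(100):                    # bisection on the common bound lam
        mid = (lo + hi) / 2
        if total(mid) < k:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [round(cap(p, lam), 6) for p in peaks]

peaks = [3.0, 1.0, 4.0, 0.5]                # hypothetical peak consumptions
print(uniform_rule(peaks, 6.0))             # excess demand: peaks sum to 8.5 > 6
print(uniform_rule(peaks, 10.0))            # excess supply: peaks sum to 8.5 < 10
```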
Abstract:
We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first one uses a standard Wald-type statistic. The second approach assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets are derived by a projection technique. The latter has two advantages: first, confidence set validity is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases of transfers from Moroccan expatriates.
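A minimal sketch of the projection technique, under strong simplifying assumptions: given a Wald-type confidence ellipse for two free parameters, a confidence interval for an endogenous variable is obtained as the range of the model output over that region. The mapping g() and all numbers are hypothetical stand-ins for a solved CGE model.

```python
# Sketch only: projection of a Wald-type 95% confidence ellipse for two free
# parameters onto a model output. The mapping g() and all numbers are
# hypothetical stand-ins for a solved CGE model.
import numpy as np

theta_hat = np.array([0.6, 1.2])                   # point estimates of the free parameters
V = np.array([[0.010, 0.002],
              [0.002, 0.020]])                     # estimated covariance of theta_hat
chi2_95_2df = 5.991                                # 95% chi-square critical value, 2 d.o.f.

rng = np.random.default_rng(7)
u = rng.uniform(size=(5000, 1))
ang = rng.uniform(0, 2 * np.pi, size=(5000, 1))
disc = np.sqrt(u) * np.hstack([np.cos(ang), np.sin(ang)])   # points filling the unit disc
L = np.linalg.cholesky(V)
region = theta_hat + np.sqrt(chi2_95_2df) * disc @ L.T      # points of the confidence ellipse

def g(theta):
    """Hypothetical reduced-form mapping from parameters to an endogenous variable."""
    a, b = theta
    return 100 * a / (1 + b)

values = np.array([g(th) for th in region])
print(f"projection-based 95% interval: [{values.min():.2f}, {values.max():.2f}]")
```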
Abstract:
This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the necessary statistical hypotheses for homogeneity provide BS-shaped option prices in equilibrium. This BS-shaped option-pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implicit volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on an instantaneous interest rate risk or on a generalized leverage effect or both, in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters to price options.
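A minimal sketch of the benchmark objects discussed above: the Black-Scholes call formula and the implied (BS implicit) volatility obtained by inverting it at each strike, whose pattern across moneyness is the smile. The market prices below are made up for illustration.

```python
# Sketch only: Black-Scholes call pricing and BS implicit volatilities across
# strikes (the "smile"). Market prices and parameters are hypothetical.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """The sigma that makes the BS formula reproduce the observed price."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-4, 5.0)

S, T, r = 100.0, 0.5, 0.02
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
prices = np.array([22.10, 13.40, 6.60, 2.80, 1.15])      # hypothetical market call prices

for K, p in zip(strikes, prices):
    print(f"moneyness K/S = {K / S:.2f}  implied volatility = {implied_vol(p, S, K, T, r):.3f}")
```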