915 results for Normal distribution


Relevance: 60.00%

Abstract:

In this paper we have quantified the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition.
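
As a hedged illustration of the neighborhood-preservation idea, the sketch below scores a word by the Jaccard overlap of its co-occurrence neighborhoods in two texts; the paper's actual consistency index and window choice are not specified here, so treat every detail as an assumption.

```python
# Hedged sketch: one plausible way to score how consistently a word keeps
# its neighborhood across two co-occurrence networks (the paper's exact
# index may differ). Words are nodes; edges link nearby words.
from collections import defaultdict

def cooccurrence_neighbors(tokens, window=1):
    """Map each word to the set of words seen within `window` positions."""
    nbrs = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                nbrs[w].add(tokens[j])
    return nbrs

def consistency(tokens_a, tokens_b, word, window=1):
    """Jaccard overlap of the word's neighborhoods in two texts."""
    na = cooccurrence_neighbors(tokens_a, window)[word]
    nb = cooccurrence_neighbors(tokens_b, window)[word]
    return len(na & nb) / len(na | nb) if (na | nb) else 0.0

text1 = "the quick brown fox jumps over the lazy dog".split()
text2 = "the lazy dog sleeps while the quick fox runs".split()
print(consistency(text1, text2, "the"))  # 0.5 for this toy pair
```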

Relevance: 60.00%

Abstract:

A rigorous asymptotic theory for Wald residuals in generalized linear models is not yet available. The authors provide matrix formulae of order O(n⁻¹), where n is the sample size, for the first two moments of these residuals. The formulae can be applied to many regression models widely used in practice. The authors suggest adjusted Wald residuals for these models, with approximately zero mean and unit variance. The expressions were used to analyze a real dataset. Some simulation results indicate that the adjusted Wald residuals are better approximated by the standard normal distribution than the unadjusted ones.
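
A minimal sketch of the adjustment step only, assuming hypothetical inputs: the Wald residuals and the O(n⁻¹) approximations to their first two moments are taken as given arrays, since the paper's matrix formulae are not reproduced here.

```python
# Hedged sketch: centre and scale residuals to ~N(0, 1) given approximate
# first and second moments (both hypothetical placeholders here).
import numpy as np

def adjust_residuals(wald_resid, mean_approx, second_moment_approx):
    """Return residuals with approximately zero mean and unit variance."""
    var_approx = second_moment_approx - mean_approx**2
    return (wald_resid - mean_approx) / np.sqrt(var_approx)

# Toy illustration with made-up numbers, not the paper's formulae:
r = np.array([0.8, -1.3, 0.1, 2.0])
adj = adjust_residuals(r, mean_approx=0.05, second_moment_approx=1.2)
print(adj)
```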

Relevance: 60.00%

Abstract:

Maize is one of the most important crops in the world. The products generated from this crop are largely used in the starch industry, the animal and human nutrition sector, and biomass energy production and refineries. For these reasons, there is much interest in estimating the potential grain yield of maize genotypes in relation to the environment in which they will be grown, as productivity directly affects agribusiness and farm profitability. Questions like these can be investigated with ecophysiological crop models, which can be organized according to different philosophies and structures. The main objective of this work is to conceptualize a stochastic model for predicting maize grain yield and productivity under different conditions of water supply while considering the uncertainties of daily climate data. Therefore, one focus is to explain the model construction in detail, and the other is to present some results in light of the philosophy adopted. A deterministic model was built as the basis for the stochastic model. The former performed well in terms of the shape of the above-ground dry matter curve over time as well as the grain yield under full and moderate water deficit conditions. Through the use of a triangular distribution for the harvest index and a bivariate normal distribution for the averaged daily solar radiation and air temperature, the stochastic model satisfactorily simulated grain productivity: 10,604 kg ha⁻¹ was found to be the most likely grain productivity, very similar to the productivity simulated by the deterministic model and to that observed under real conditions in a field experiment.
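
A hedged Monte Carlo sketch of the stochastic layer described above: the harvest index is drawn from a triangular distribution and radiation/temperature from a bivariate normal, as in the abstract, but the yield response below is a placeholder, and all numerical inputs (means, covariance, bounds) are assumptions rather than the paper's values.

```python
# Hedged Monte Carlo sketch of the stochastic yield model's sampling step.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Harvest index ~ triangular(min, mode, max); bounds are illustrative.
hi = rng.triangular(0.40, 0.48, 0.55, size=n)

# Averaged daily solar radiation (MJ/m^2) and air temperature (degC),
# drawn jointly from an assumed bivariate normal.
mean = [18.0, 24.0]                     # assumed means
cov = [[9.0, 2.5], [2.5, 4.0]]          # assumed covariance
rad, temp = rng.multivariate_normal(mean, cov, size=n).T

# Placeholder response: dry matter grows with radiation, penalized when
# temperature strays from an optimum; yield = dry matter * harvest index.
dry_matter = 1200.0 * rad * np.exp(-((temp - 25.0) / 8.0) ** 2)
yield_kg_ha = hi * dry_matter

print(f"most likely yield ~ {np.median(yield_kg_ha):.0f} kg/ha")
```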

Relevance: 60.00%

Abstract:

This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
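
As a sketch of likelihood-based inference for the three-parameter location-scale family, the following fits a power normal by maximum likelihood, assuming the common parametrisation with density (α/σ)·φ(z)·Φ(z)^(α−1), z = (x−μ)/σ; the paper's exact conventions may differ, and the data are simulated stand-ins for the pollen measurements.

```python
# Hedged sketch: ML fit of a location-scale power normal model.
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, x):
    mu, log_sigma, log_alpha = params      # log scale keeps sigma, alpha > 0
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)
    z = (x - mu) / sigma
    ll = (np.log(alpha) - np.log(sigma)
          + stats.norm.logpdf(z)
          + (alpha - 1.0) * stats.norm.logcdf(z))
    return -np.sum(ll)

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=500)        # stand-in for the pollen data
fit = optimize.minimize(neg_loglik, x0=[x.mean(), np.log(x.std()), 0.0],
                        args=(x,), method="Nelder-Mead")
mu, sigma, alpha = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(mu, sigma, alpha)                   # alpha ~ 1 recovers the normal
```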

Relevance: 60.00%

Abstract:

Objective: To evaluate the in vitro changes on the enamel surface after a micro-abrasion treatment promoted by different products. Material and Methods: Fifty fragments of bovine enamel (15 mm × 5 mm) were randomly assigned to five groups (n=10) according to the product utilized: G1 (control)= silicone polisher (TDV), G2= 37% phosphoric acid (3M/ESPE) + pumice stone (SS White), G3= Micropol (DMC Equipment), G4= Opalustre (Ultradent) and G5= Whiteness RM (FGM Dental Products). Roughness and wear were the response variables used to analyze these surfaces in four stages: baseline, 60 s and 120 s after the micro-abrasion, and after polishing, using a Hommel Tester T1000 device. After the tests, a normal distribution of the data was verified, and repeated-measures ANOVA (p≤0.05) was used to compare each product across stages. One-way ANOVA and Tukey tests were applied for individual comparisons between the products in each stage (p≤0.05). Results: Means and standard deviations of roughness and wear (µm) after all the promoted stages were: G1=7.26(1.81)/13.16(2.67), G2=2.02(0.62)/37.44(3.33), G3=1.81(0.91)/34.93(6.92), G4=1.92(0.29)/38.42(0.65) and G5=1.98(0.53)/33.45(2.66). At 60 seconds, all products tended to produce less surface roughness, with a gradual decrease over time. After polishing, there were no statistically significant differences between the groups, except for G1. Independent of the product utilized, enamel wear occurred after the micro-abrasion. Conclusions: In this in vitro study, enamel micro-abrasion presented itself as a conservative approach, regardless of the type of paste compound utilized. These products promoted minor roughness alterations and minimal wear. The use of phosphoric acid and pumice stone showed results similar to those of the commercial micro-abrasion products with regard to surface roughness and wear.
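
A brief sketch of the between-group comparison described (one-way ANOVA followed by Tukey's HSD), with simulated roughness values standing in for the study's measurements; the group means loosely echo the reported ones, everything else is illustrative.

```python
# Hedged sketch: one-way ANOVA plus Tukey HSD on simulated roughness data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {"G1": 7.3, "G2": 2.0, "G3": 1.8, "G4": 1.9, "G5": 2.0}
roughness = {g: rng.normal(m, 0.6, size=10) for g, m in groups.items()}

# One-way ANOVA across the five products.
f, p = stats.f_oneway(*roughness.values())
print(f"F = {f:.2f}, p = {p:.4f}")

# Tukey HSD for pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(roughness.values()))
labels = np.repeat(list(roughness.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```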

Relevance: 60.00%

Abstract:

This study aimed to evaluate the spatial variability of the leaf content of macro- and micronutrients. The citrus orchard, 5 years old and planted at regular intervals of 8 x 7 m, was managed under drip irrigation. Leaf samples were collected from each plant and analyzed in the laboratory. Data were analyzed using the software R, version 2.5.1, along with the geostatistics package geoR. The contents of all macro- and micronutrients studied fitted a normal distribution and showed spatial dependence. The best-fit models, based on the likelihood, were the spherical and the Matérn. For the macronutrients nitrogen, phosphorus, potassium, calcium, magnesium and sulfur, minimum distances between samples of 37, 58, 29, 63, 46 and 15 m, respectively, are suggested, while for the micronutrients boron, copper, iron, manganese and zinc the suggested distances are 29, 9, 113, 35 and 14 m, respectively.
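
A rough sketch of the kind of variogram analysis geoR performs, assuming simulated coordinates and leaf-nutrient values, and a least-squares fit in place of the paper's likelihood-based estimation; the spherical model is one of the two named in the abstract.

```python
# Hedged sketch: empirical semivariogram plus a spherical model fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def spherical(h, nugget, sill, rang):
    """Spherical semivariogram: rises to nugget + sill at the range."""
    g = nugget + sill * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h < rang, g, nugget + sill)

rng = np.random.default_rng(7)
xy = rng.uniform(0, 100, size=(80, 2))            # plant positions (m)
z = 20 + 0.08 * xy[:, 0] + rng.normal(0, 1, 80)   # e.g. leaf N content

h = pdist(xy)                                     # pairwise distances
gamma = pdist(z[:, None]) ** 2 / 2                # semivariance contributions

# Bin by distance and average to get the empirical semivariogram.
edges = np.linspace(0, 70, 15)
idx = np.digitize(h, edges)
keep = [i for i in range(1, len(edges)) if (idx == i).any()]
hb = np.array([h[idx == i].mean() for i in keep])
gb = np.array([gamma[idx == i].mean() for i in keep])

(nugget, sill, rang), _ = curve_fit(spherical, hb, gb,
                                    p0=[1.0, 8.0, 40.0], maxfev=10_000)
print(f"estimated range ~ {rang:.0f} m")  # cf. the sampling distances above
```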

Relevance: 60.00%

Abstract:

This thesis deals with inflation theory, focusing on the model of Jarrow & Yildirim, which is nowadays used when pricing inflation derivatives. After recalling the main results about short and forward interest rate models, the dynamics of the main components of the market are derived. Then the most important inflation-indexed derivatives are explained (zero coupon swap, year-on-year, cap and floor), and their pricing procedure is shown step by step. Calibration is explained and performed with a common method and with a heuristic, non-standard one. The model is also enriched with credit risk, which allows the possibility of bankruptcy of the counterparty of a contract to be taken into account. In this context, the general method of pricing is derived, with the introduction of defaultable zero-coupon bonds, and the Monte Carlo method is treated in detail and used to price a concrete example of a contract. Appendixes: A: martingale measures, Girsanov's theorem and the change of numeraire. B: some aspects of the theory of stochastic differential equations; in particular, the solution of linear SDEs, and the Feynman-Kac theorem, which shows the connection between SDEs and partial differential equations. C: some useful results about the normal distribution.
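
A minimal Monte Carlo sketch in the spirit of the pricing described above: a zero-coupon inflation-indexed swap payoff with the CPI evolved as a plain lognormal process. This is a deliberate simplification of the Jarrow-Yildirim dynamics and omits the credit-risk layer; every numerical input is an assumption.

```python
# Hedged sketch: Monte Carlo valuation of a zero-coupon inflation swap.
import numpy as np

rng = np.random.default_rng(3)
I0, T, r = 100.0, 5.0, 0.02               # CPI today, maturity, flat rate
mu_i, sigma_i = 0.021, 0.012              # assumed inflation drift / vol
K = (1 + 0.02) ** T - 1                   # fixed leg: compounded strike
n = 100_000

# Terminal CPI under simple lognormal dynamics (not Jarrow-Yildirim).
I_T = I0 * np.exp((mu_i - 0.5 * sigma_i**2) * T
                  + sigma_i * np.sqrt(T) * rng.standard_normal(n))

payoff = (I_T / I0 - 1.0) - K             # floating leg minus fixed leg
price = np.exp(-r * T) * payoff.mean()    # discounted expectation
print(f"ZCIIS value per unit notional: {price:.5f}")
```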

Relevance: 60.00%

Abstract:

We propose an extension of the approach provided by Klüppelberg and Kuhn (2009) for inference on second-order structure moments. As in Klüppelberg and Kuhn (2009), we adopt a copula-based approach instead of assuming a normal distribution for the variables, thus relaxing the assumption of equality in distribution. A new copula-based estimator for structure moments is investigated. The methodology provided by Klüppelberg and Kuhn (2009) is also extended by considering the copulas associated with the family of Eyraud-Farlie-Gumbel-Morgenstern distribution functions (Kotz, Balakrishnan, and Johnson, 2000, Equation 44.73). Finally, a comprehensive simulation study and an application to real financial data are performed in order to compare the different approaches.
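
For reference, the FGM copula has the closed form C(u,v) = uv[1 + θ(1−u)(1−v)] with |θ| ≤ 1. The sketch below samples from it by conditional inversion and recovers θ through its known link to Spearman's rho (ρ_S = θ/3); it illustrates the copula family itself, not the paper's structure-moment estimator.

```python
# Hedged sketch: sampling from the (Eyraud-)Farlie-Gumbel-Morgenstern copula.
import numpy as np
from scipy import stats

def fgm_sample(theta, n, rng):
    """Draw (U, V) from the FGM copula by conditional inversion."""
    u, w = rng.uniform(size=(2, n))
    a = theta * (1.0 - 2.0 * u)
    # The conditional CDF of V given U=u is (1+a)v - a v^2; solve
    # a v^2 - (1+a) v + w = 0 for v in [0, 1] (a -> 0 gives v = w).
    disc = np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)
    v = np.where(np.abs(a) < 1e-12, w, (1.0 + a - disc) / (2.0 * a))
    return u, v

rng = np.random.default_rng(5)
u, v = fgm_sample(theta=0.7, n=50_000, rng=rng)
rho, _ = stats.spearmanr(u, v)
print(f"theta ~ {3.0 * rho:.3f}")          # should be close to 0.7
```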

Relevance: 60.00%

Abstract:

This thesis deals with the development of a function approximator and its use in methods for learning discrete and continuous actions: 1. A general function approximator, Locally Weighted Interpolating Growing Neural Gas (LWIGNG), is developed on the basis of a Growing Neural Gas (GNG). The topological neighbourhood in the neuron structure is used to interpolate between neighbouring neurons and to compute the approximation by local weighting. The performance of this approach, in particular with regard to changing target functions and changing input distributions, is demonstrated in several experiments. 2. For learning discrete actions, LWIGNG is combined with Q-learning into the Q-LWIGNG method. To this end the underlying GNG algorithm has to be modified, because the input data in action learning arrive in a particular order. Q-LWIGNG achieves very good results on the pole-balancing and mountain-car problems, and good results on the acrobot problem. 3. For learning continuous actions, a REINFORCE algorithm is combined with LWIGNG into the ReinforceGNG method. An actor-critic architecture is employed to learn from delayed rewards. LWIGNG approximates both the state-value function and the policy, which is represented by the situation-dependent parameters of a normal distribution. ReinforceGNG is successfully applied to learning movements for a simulated two-wheeled robot that is required to intercept a rolling ball under certain conditions.
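
A small sketch of the policy representation in part 3, assuming simple linear features in place of the LWIGNG approximator: a Gaussian policy with state-dependent mean and standard deviation, updated with the REINFORCE log-likelihood gradient.

```python
# Hedged sketch: REINFORCE gradient for a Gaussian policy whose parameters
# depend on the state (linear features stand in for LWIGNG here).
import numpy as np

def gaussian_policy_grad(state_features, action, w_mu, w_log_sigma):
    """Gradient of log pi(a|s) for a ~ N(mu(s), sigma(s)^2)."""
    mu = state_features @ w_mu
    sigma = np.exp(state_features @ w_log_sigma)
    z = (action - mu) / sigma
    grad_w_mu = (z / sigma) * state_features          # d log pi / d w_mu
    grad_w_ls = (z**2 - 1.0) * state_features         # d log pi / d w_log_sigma
    return grad_w_mu, grad_w_ls

# One REINFORCE-style update from a single (state, action, return) sample.
rng = np.random.default_rng(2)
phi = rng.normal(size=4)                  # state features
w_mu, w_ls = np.zeros(4), np.zeros(4)
action = phi @ w_mu + np.exp(phi @ w_ls) * rng.standard_normal()
G = 1.0                                   # sampled return (reward-to-go)
g_mu, g_ls = gaussian_policy_grad(phi, action, w_mu, w_ls)
w_mu += 0.01 * G * g_mu                   # ascend the policy gradient
w_ls += 0.01 * G * g_ls
```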

Relevance: 60.00%

Abstract:

In this work we studied the efficiency of the benchmarks used in the asset management industry. In chapter 2 we analyzed the efficiency of the benchmarks used for the government bond markets. We found that for emerging market bonds an equally weighted index for the country weights is probably the best suited, because it guarantees maximum diversification of country risk, but for the Eurozone government bond market a GDP-weighted index is better, because the most important concern is to avoid higher weights for highly indebted countries. In chapter 3 we analyzed the efficiency of a Derivatives Index for investing in the European corporate bond market instead of a Cash Index. We can state that the two indexes are similar in terms of returns, but that the Derivatives Index is less risky: it has a lower volatility, has values of skewness and kurtosis closer to those of a normal distribution, and is a more liquid instrument, as its autocorrelation is not significant. In chapter 4 the impact of fallen angels on corporate bond portfolios is analyzed. Our analysis investigated the impact of the month-end rebalancing of the ML Emu Non Financial Corporate Index for the exit of downgraded bonds (the event). We conclude that a flexible approach to the month-end rebalancing is better, in order to avoid a loss of value due to the benchmark construction rules. In chapter 5 we compared the equally weighted and the capitalization-weighted method for the European equity market. The benefit which results from reweighting the portfolio into equal weights can be attributed to the fact that EW portfolios implicitly follow a contrarian investment strategy, because they mechanically rebalance away from stocks that increase in price.
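
A short sketch of the distribution and liquidity diagnostics mentioned for chapter 3 (volatility, skewness, kurtosis, and return autocorrelation via a Ljung-Box test), run on simulated stand-ins for the Cash and Derivatives index returns.

```python
# Hedged sketch: moment and autocorrelation diagnostics for return series.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(11)
cash = rng.standard_t(df=4, size=1000) * 0.004   # fat-tailed stand-in
deriv = rng.normal(0.0, 0.003, size=1000)        # closer to normal

for name, r in [("cash", cash), ("derivatives", deriv)]:
    lb = acorr_ljungbox(r, lags=[10])            # H0: no autocorrelation
    print(f"{name}: vol={r.std():.4f}  skew={stats.skew(r):+.2f}  "
          f"kurt={stats.kurtosis(r):+.2f}  LB p={lb['lb_pvalue'].iloc[0]:.3f}")
```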

Relevance: 60.00%

Abstract:

PURPOSE: To evaluate diffusion-weighted magnetic resonance (MR) imaging of the human placenta in fetuses with and fetuses without intrauterine growth restriction (IUGR) who were suspected of having placental insufficiency. MATERIALS AND METHODS: The study was approved by the local ethics committee, and written informed consent was obtained. The authors retrospectively evaluated 1.5-T fetal MR images from 102 singleton pregnancies (mean gestation ± standard deviation, 29 weeks ± 5; range, 21-41 weeks). Morphologic and diffusion-weighted MR imaging were performed. A region of interest analysis of the apparent diffusion coefficient (ADC) of the placenta was independently performed by two observers who were blinded to clinical data and outcome. Placental insufficiency was diagnosed if flattening of the growth curve was detected at obstetric ultrasonography (US), if the birth weight was in the 10th percentile or less, or if fetal weight estimated with US was below the 10th percentile. Abnormal findings at Doppler US of the umbilical artery and histopathologic examination of specimens from the placenta were recorded. The ADCs in fetuses with placental insufficiency were compared with those in fetuses of the same gestational age without placental insufficiency and tested for normal distribution. The t tests and Pearson correlation coefficients were used to compare these results at 5% levels of significance. RESULTS: Thirty-three of the 102 pregnancies were ultimately categorized as having an insufficient placenta. MR imaging depicted morphologic changes (eg, infarction or bleeding) in 27 fetuses. Placental dysfunction was suspected in 33 fetuses at diffusion-weighted imaging (mean ADC, 146.4 sec/mm² ± 10.63 for fetuses with placental insufficiency vs 177.1 sec/mm² ± 18.90 for fetuses without placental insufficiency; P < .01, with one false-positive case). The use of diffusion-weighted imaging in addition to US increased sensitivity for the detection of placental insufficiency from 73% to 100%, increased accuracy from 91% to 99%, and preserved specificity at 99%. CONCLUSION: Placental dysfunction associated with growth restriction is associated with restricted diffusion and reduced ADC. A decreased ADC used as an early marker of placental damage might be indicative of pregnancy complications such as IUGR.
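
A minimal sketch of the central group comparison, assuming simulated ADC values whose means, spreads, and group sizes loosely follow the figures reported above (33 insufficient vs. 69 normal placentas); it is illustrative only.

```python
# Hedged sketch: two-sample t test on simulated ADC values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
adc_insufficient = rng.normal(146.4, 10.63, size=33)
adc_normal = rng.normal(177.1, 18.90, size=69)

# Welch's t test (unequal variances), 5% significance level.
t, p = stats.ttest_ind(adc_insufficient, adc_normal, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")
```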

Relevance: 60.00%

Abstract:

Locally affine (polyaffine) image registration methods capture intersubject non-linear deformations with a low number of parameters, while providing an intuitive interpretation for clinicians. Considering the mandible bone, anatomical shape differences can be found at different scales, e.g. left or right side, teeth, etc. Classically, sequential coarse-to-fine registrations are used to handle multiscale deformations; instead, we propose a simultaneous optimization of all scales. To avoid local minima we incorporate a prior on the polyaffine transformations. This kind of groupwise registration approach is natural in a polyaffine context, if we assume one configuration of regions that describes an entire group of images, with varying transformations for each region. In this paper, we reformulate polyaffine deformations in a generative statistical model, which enables us to incorporate deformation statistics as a prior in a Bayesian setting. We find optimal transformations by optimizing the maximum a posteriori probability. We assume that the polyaffine transformations follow a normal distribution with mean and concentration matrix. Parameters of the prior are estimated from an initial coarse-to-fine registration. Knowing the region structure, we develop a blockwise pseudoinverse to obtain the concentration matrix. To our knowledge, we are the first to introduce simultaneous multiscale optimization through groupwise polyaffine registration. We show results on 42 mandible CT images.
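
A toy sketch of the MAP formulation described: the stacked transformation parameters θ receive a Gaussian prior with mean μ and concentration (precision) matrix Λ, while the image-similarity data term is replaced by a hypothetical quadratic placeholder.

```python
# Hedged sketch: MAP estimation with a Gaussian prior on transformation
# parameters; the data term below is a stand-in for image similarity.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
d = 12                                     # e.g. one affine block per region
mu = np.zeros(d)                           # prior mean (from coarse-to-fine runs)
Lam = np.eye(d) * 2.0                      # prior concentration (precision) matrix
A, b = rng.normal(size=(30, d)), rng.normal(size=30)  # toy data model

def neg_log_posterior(theta):
    data_term = 0.5 * np.sum((A @ theta - b) ** 2)          # -log likelihood
    prior_term = 0.5 * (theta - mu) @ Lam @ (theta - mu)    # -log prior
    return data_term + prior_term

theta_map = minimize(neg_log_posterior, x0=np.zeros(d)).x
print(theta_map[:4])
```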

Relevance: 60.00%

Abstract:

BACKGROUND: Social cognition is an important aspect of social behavior in humans. Social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study we examine the neural substrates of social cognition and face processing in a group of healthy young adults. METHODS: Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) were administered a face-perception task during functional magnetic resonance imaging. RESULTS: Variance in the N170 EEG component was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. CONCLUSIONS: When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population.

Relevance: 60.00%

Abstract:

This paper proposes a numerically simple routine for locally adaptive smoothing. The locally heterogeneous regression function is modelled as a penalized spline with a smoothly varying smoothing parameter, which is itself modelled as another penalized spline. This is formulated as a hierarchical mixed model, with spline coefficients following a normal distribution whose variances themselves have a smooth structure. The modelling exercise is in line with Baladandayuthapani, Mallick & Carroll (2005) or Crainiceanu, Ruppert & Carroll (2006). In contrast to these papers, however, Laplace's method is used for estimation based on the marginal likelihood. This is numerically simple and fast and provides satisfactory results quickly. We also extend the idea to spatial smoothing and to smoothing in the presence of non-normal responses.
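
A compact sketch of the building block being made adaptive: a penalized spline with a single global smoothing parameter λ, fitted by ridge-penalized least squares on a truncated-cubic basis (an assumption; the paper's basis and penalty may differ).

```python
# Hedged sketch: a penalized spline with one global smoothing parameter.
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    """Penalized least squares on a truncated-cubic basis; lam is global."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    B = np.column_stack([np.ones_like(x), x, x**2, x**3]
                        + [np.clip(x - k, 0, None) ** 3 for k in knots])
    D = np.zeros(B.shape[1])
    D[4:] = 1.0                              # penalize only the knot terms
    coef = np.linalg.solve(B.T @ B + lam * np.diag(D), B.T @ y)
    return B @ coef

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(12 * x) * np.exp(-2 * x) + rng.normal(0, 0.1, 200)  # wiggly, then flat

fit_smooth = pspline_fit(x, y, lam=10.0)     # oversmooths the wiggly left end
fit_rough = pspline_fit(x, y, lam=1e-4)      # undersmooths the flat right end
```

No single λ suits both halves of this curve, which is precisely the motivation for letting log λ vary smoothly with x as a second penalized spline.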

Relevance: 60.00%

Abstract:

Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We describe sufficient conditions for the strong consistency and asymptotic normality of both methods and investigate their finite sample properties in a variety of examples.
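
A minimal sketch of the batch means estimator and the fixed-width stopping rule, applied to a toy AR(1) chain; the batch size b ≈ √n and the 95% half-width threshold are common conventions, not prescriptions from the paper.

```python
# Hedged sketch: batch means Monte Carlo standard error with a
# fixed-width stopping rule, on a toy autocorrelated chain.
import numpy as np

def batch_means_se(chain):
    """MC standard error of the mean via non-overlapping batch means."""
    n = len(chain)
    b = int(np.floor(np.sqrt(n)))          # batch size ~ sqrt(n)
    a = n // b                             # number of batches
    means = chain[: a * b].reshape(a, b).mean(axis=1)
    var_hat = b * np.sum((means - chain[: a * b].mean()) ** 2) / (a - 1)
    return np.sqrt(var_hat / n)            # estimates sigma_infty / sqrt(n)

rng = np.random.default_rng(8)
phi, eps = 0.9, 0.05                       # chain correlation; CI half-width
chain, x = [], 0.0
while True:
    for _ in range(5000):                  # extend the simulation in blocks
        x = phi * x + rng.standard_normal()
        chain.append(x)
    se = batch_means_se(np.asarray(chain))
    if 1.96 * se < eps:                    # stop once the 95% CI is narrow enough
        break
print(len(chain), np.mean(chain), 1.96 * se)
```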