914 results for Error probability
Abstract:
This paper is mainly concerned with the tracking accuracy of Exchange Traded Funds (ETFs) listed on the London Stock Exchange (LSE), but it also evaluates their performance and pricing efficiency. The findings show that ETFs offer virtually the same return as their benchmark but exhibit higher volatility. The pricing efficiency that should come from the creation and redemption process does not appear to hold fully, as equity ETFs show consistent price premiums. The tracking error of the funds is generally small and has been decreasing over time. The risk of the ETF, daily price volatility and the total expense ratio explain a large part of the tracking error, whereas trading volume, fund size, bid-ask spread and the average price premium or discount had no impact on it. Finally, it is concluded that market volatility and tracking error are positively correlated.
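The paper's exact estimator is not reproduced above; as a minimal sketch of the most common definition of tracking error, the annualized standard deviation of the differences between fund and benchmark returns, the following assumes daily return series and an illustrative annualization factor of 252 trading days:

```python
import numpy as np

def tracking_error(etf_returns, benchmark_returns, periods_per_year=252):
    """Annualized tracking error: standard deviation of the
    period-by-period return differences between ETF and benchmark."""
    diff = np.asarray(etf_returns) - np.asarray(benchmark_returns)
    return np.std(diff, ddof=1) * np.sqrt(periods_per_year)
```

Under this definition a fund that replicates its index perfectly has a tracking error of zero, and any figure above zero reflects deviations such as fees, sampling and trading frictions.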
Abstract:
The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) for the permanent correction of a consolidated and automated technical error in the serve of a tennis athlete (18 years old, with about 6 years of practice). Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing a) a detailed analysis of the error using video images; b) an increased kinaesthetic awareness; c) a reactivation of the memory of the error; and d) the discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, consistent with the effects of this type of intervention in other contexts.
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
In recent decades, increased interest has been evident in research on multi-scale hierarchical modelling in the field of mechanics, and also in the field of wood products and timber engineering. One of the main motivations for hierarchical modelling is to understand how properties, composition and structure at lower scale levels may influence, and be used to predict, the material properties on a macroscopic and structural engineering scale. This chapter presents the applicability of statistical and probabilistic methods, such as the Maximum Likelihood method and Bayesian methods, to the representation of timber's mechanical properties and to their inference, accounting for prior information obtained at different scales of importance. These methods allow the analysis of distinct reference properties of timber, such as density, bending stiffness and strength, and can hierarchically combine information obtained through different non-destructive, semi-destructive or destructive tests. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fit and to define the correlation structure between properties.
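The chapter's own formulas are not quoted above; as a minimal sketch of the kind of hierarchical Bayesian update it describes, the following assumes a normal likelihood with known observation scatter and a conjugate normal prior on the mean strength, with all names and numbers hypothetical:

```python
import numpy as np

def normal_posterior(prior_mean, prior_sd, data, data_sd):
    """Conjugate normal update of the mean of a normal likelihood
    with known observation standard deviation."""
    data = np.asarray(data, dtype=float)
    prior_prec = 1.0 / prior_sd**2          # precision of the prior
    data_prec = len(data) / data_sd**2      # precision of the sample mean
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data.mean())
    return post_mean, np.sqrt(post_var)

# Hypothetical example: a prior on bending strength (MPa) carried over
# from a lower scale level, updated with destructive test results.
mean, sd = normal_posterior(prior_mean=40.0, prior_sd=8.0,
                            data=[35.2, 42.1, 38.7, 44.0], data_sd=6.0)
```

Prior information from non-destructive or semi-destructive tests enters through `prior_mean` and `prior_sd`, while destructive test results drive the update.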
Abstract:
The objective pursued by an audit of financial statements is the auditor's communication of a conclusion regarding the degree of reasonableness with which those statements reflect the entity's asset, economic and financial position, according to the criteria embodied in the applicable accounting standards. An auditor who issues an erroneous conclusion as a result of his work may incur professional, civil and criminal liability arising from claims by users of the financial statements who may have been harmed by that erroneous conclusion. National and international accounting standards admit the existence of errors or omissions in the information contained in financial statements, provided such deviations do not lead the interested users of those statements to a decision different from the one they would make if the errors or omissions did not exist. From the foregoing follows the full importance that the determination of the overall materiality level (the level of deviations accepted by the users of the financial statements in the information they contain) acquires in audit processes, as well as the allocation of that level among the different components of the financial statements (allocation of tolerable error), so that auditors avoid assuming professional, civil and/or criminal liability. To date, no mathematical models are known that support, in an objective and verifiable way, the calculation of the overall materiality level and the allocation of tolerable error among the different elements making up the financial statements. We believe that the development and integration of a model for quantifying the overall materiality level and allocating the tolerable error would have the following repercussions: 1 - It would give the auditor an element supporting the way the materiality level is quantified and the tolerable error is allocated among the components of the financial statements. 2 - It would allow auditors to reduce the possibility of assuming professional, civil and/or criminal liability as a result of their work. 3 - It would represent a first step toward national and international audit standard setters adopting elements with which to set guidelines on the calculation of the materiality level and the allocation of tolerable error. 4 - It would eliminate the calculation of the materiality level as a barrier affecting the comparability of financial statements.
Abstract:
Magdeburg, University, Faculty of Electrical Engineering and Information Technology, Dissertation, 2007
Abstract:
The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square-integrable real-valued random variables to the standard normal distribution function. While the first versions of the central limit theorem are already due to Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). Meanwhile, extensions of the central limit theorem are available for a multitude of settings, including, e.g., Banach-space-valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds have been established, and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by the work of Zolotarev in the 1960s and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
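For reference, the classical Berry-Esseen bound mentioned above can be stated as follows; this is a standard formulation, not notation taken from the thesis itself:

```latex
% X_1, X_2, ... i.i.d. with E[X_1] = 0, E[X_1^2] = sigma^2 > 0,
% rho = E[|X_1|^3] < infinity; Phi is the standard normal c.d.f.
\[
  \sup_{x \in \mathbb{R}}
  \left| \mathbb{P}\!\left( \frac{X_1 + \dots + X_n}{\sigma \sqrt{n}} \le x \right)
         - \Phi(x) \right|
  \;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}} ,
\]
```

with an absolute constant C. The bound depends only on the absolute third moment, which is exactly the classical feature that the pseudomoment-based, non-classical estimates discussed above are designed to refine.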
Abstract:
This paper deals with the estimation of milk production by means of weekly, biweekly, monthly and bimonthly observations, and also by the method known as 6-5-8, in which one observation is taken at the 6th week of lactation, another at the 5th month and a third at the 8th month. The data studied were obtained from 72 lactations of the Holstein Friesian breed at the Escola Superior de Agricultura "Luiz de Queiroz" (Piracicaba, S. Paulo, Brazil), comprising 6 calvings in each month of the year and also 12 first calvings, 12 second calvings, and so on, up to the sixth. The authors criticize the use of the "maximum error" found in papers dealing with this subject, and also the use of the mean deviation. The former is completely superseded and inadvisable; the latter, although equivalent to a certain extent to the usual standard deviation, has only 87.6% of its efficiency, according to KENDALL (9, pp. 130-131; 10, pp. 6-7). The data obtained were compared with the actual production, obtained by daily control, and the observed deviations were studied; their means and standard deviations are given in Table IV. In spite of BOX's recent results (11), showing that with equal numbers in all classes a certain inequality of variances is not important, the authors separated the methods before carrying out the analysis of variance, thus avoiding putting together methods with too different standard deviations. We compared the first three methods to begin with (Table VI), then carried out the analysis with the first four methods (Table VII), and finally compared the last two methods (Table VIII). These analyses of variance compare the arithmetic means of the deviations of the methods studied, which is equivalent to comparing their biases. We therefore conclude that season of calving and order of calving do not affect the biases, and that the methods themselves do not differ from this viewpoint, with the exception of method 6-5-8. Another method of attack, maybe preferable, would be to compare the estimates of the biases with their expected value under the null hypothesis (zero) by the t-test, t = (x̄ − 0)/s(x̄). We have: 1) weekly control: t = 8.59/5.51 = 1.56; 2) biweekly control: t = 11.20/6.21 = 1.80; 3) monthly control: t = 7.17/9.48 = 0.76; 4) bimonthly control: t = −4.66/17.56 = −0.26; 5) method 6-5-8: t = 144.89/22.41 = 6.46***. The three asterisks denote significance at the 0.1% probability level. In this way we conclude that the weekly, biweekly, monthly and bimonthly methods of control may be assumed to be unbiased, while the 6-5-8 method is shown to be positively biased, the bias amounting to 5.9% of the mean milk production. The precision of the methods studied may be judged by their standard deviations, or by intervals covering, with a certain probability (95%, for example), the deviation x corresponding to an estimate obtained by one of the methods studied. Since the difference x − x̄, where x̄ is the mean of the 72 deviations obtained for each method, has a t distribution with mean zero and estimated standard deviation s(x − x̄) = √(1 + 1/72) · s = 1.007 s, and the 5% probability limit of t with 71 degrees of freedom is 1.99, the interval to be considered is given by x̄ ± 1.99 × 1.007 s = x̄ ± 2.00 s. The intervals thus calculated are given in Table IX.
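The bias test used above is a one-sample t-test of the mean deviation against zero; a minimal sketch with hypothetical deviations follows, since the paper's raw data are not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical deviations (estimated minus actual production) for one method.
deviations = np.array([12.0, -3.5, 8.1, 15.2, -6.4, 9.9])

# Mirrors t = (x̄ - 0)/s(x̄): is the mean deviation compatible with zero bias?
t_stat, p_value = stats.ttest_1samp(deviations, popmean=0.0)
print(t_stat, p_value)
```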
Abstract:
The authors studied the rainfall in Pesqueira (Pernambuco, Brazil) over a period of 48 years (1910 through 1957) by the method of orthogonal polynomials, degrees up to the fourth having been tried. None of them was significant, so it seems that no trend is present. The observed mean was 679.0 mm, with a standard deviation of 205.5 mm and a coefficient of variation of 30.3%. The 95% probability interval would include annual rainfall from 263.9 mm up to 1094.1 mm.
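A sketch of the kind of trend test described above, using ordinary rather than orthogonal polynomials (equivalent for testing purposes, though less convenient numerically) and an incremental F-test for each added degree; the function name and structure are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def polynomial_trend_test(years, rainfall, max_degree=4):
    """Test polynomial trends of increasing degree against the
    previous degree with an incremental F-test."""
    x = np.asarray(years, dtype=float)
    x = (x - x.mean()) / x.std()              # centre and scale for stability
    y = np.asarray(rainfall, dtype=float)
    n = len(y)
    rss_prev = np.sum((y - y.mean())**2)      # degree-0 (no-trend) model
    for d in range(1, max_degree + 1):
        coefs = np.polynomial.polynomial.polyfit(x, y, d)
        rss = np.sum((y - np.polynomial.polynomial.polyval(x, coefs))**2)
        f = (rss_prev - rss) / (rss / (n - d - 1))
        p = stats.f.sf(f, 1, n - d - 1)
        print(f"degree {d}: F = {f:.2f}, p = {p:.3f}")
        rss_prev = rss
```

A non-significant F at every degree, as reported above for the 1910-1957 series, is the basis for concluding that no trend is present.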
Abstract:
Magdeburg, University, Faculty of Process and Systems Engineering, Dissertation, 2014
Abstract:
This paper discusses the fitting of a Cobb-Douglas response curve Y_i = αX_i^β with additive error, Y_i = αX_i^β + e_i, instead of the usual multiplicative error, Y_i = αX_i^β(1 + e_i). The estimation of the parameters α and β is discussed, and an example is given using both types of error.
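A minimal sketch of the two fits contrasted above, with hypothetical data: the additive-error model is fitted directly by nonlinear least squares, while the multiplicative-error model is linearized with logarithms:

```python
import numpy as np
from scipy.optimize import curve_fit

def cobb_douglas(x, alpha, beta):
    return alpha * x**beta

# Hypothetical response data.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([2.1, 3.0, 4.2, 5.9, 8.3])

# Additive error, Y_i = alpha * X_i**beta + e_i: nonlinear least squares.
(alpha_add, beta_add), _ = curve_fit(cobb_douglas, x, y, p0=(1.0, 0.5))

# Multiplicative error, Y_i = alpha * X_i**beta * (1 + e_i): taking logs
# gives log Y = log alpha + beta * log X, an ordinary linear regression.
beta_mul, log_alpha = np.polyfit(np.log(x), np.log(y), 1)
alpha_mul = np.exp(log_alpha)
```

The two error structures generally yield different parameter estimates, which is the point of the comparison in the paper.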
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, Dissertation, 2015
Abstract:
This comment corrects errors in the estimation process in Martins (2001). The first error is in the parametric probit estimation: the previously presented results do not maximize the log-likelihood function, and at the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take both positive and negative values, which implies that the participation-probability estimates may lie outside the interval [0,1]. We solve this problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
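A minimal sketch of a parametric probit fitted by maximum likelihood, with hypothetical data rather than the specification of Martins (2001); checking the optimizer's convergence flag is the simplest guard against reporting a point that does not maximize the log-likelihood:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical participation data: binary outcome y, covariate matrix X.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = (X @ np.array([0.2, 0.8, -0.5]) + rng.normal(size=500) > 0).astype(int)

# Probit by maximum likelihood (Newton's method); verify convergence
# and inspect the attained log-likelihood.
result = sm.Probit(y, X).fit(method="newton", disp=False)
print(result.mle_retvals["converged"], result.llf)
```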
Abstract:
The present notes are intended as a detailed review of the existing results in dissipative kinetic theory that make use of the contraction properties of two main families of probability metrics: optimal mass transport and Fourier-based metrics. The first part of the notes is devoted to a self-consistent summary of the properties of both probability metrics, including new aspects of the relationships between them and other metrics in wide use in probability theory. These results are of independent interest, with potential use in other contexts in Partial Differential Equations and Probability Theory. The second part of the notes presents the asymptotic behavior of Inelastic Maxwell Models differently from the existing literature and shows a new example of application: heating by a particle bath. We show how, starting from the contraction properties in probability metrics, one can deduce existence, uniqueness and asymptotic stability in classical spaces. A global strategy with this aim is set up and applied to two dissipative models.
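For orientation, the two families of metrics named above are usually defined as follows; these are the standard definitions, and the precise normalizations in the notes may differ. Here \Pi(f,g) is the set of probability measures with marginals f and g, and \hat{f} denotes the Fourier transform:

```latex
\[
  W_p(f,g) = \left( \inf_{\pi \in \Pi(f,g)}
      \int |x-y|^{p} \, d\pi(x,y) \right)^{1/p},
  \qquad
  d_s(f,g) = \sup_{\xi \neq 0}
      \frac{|\hat{f}(\xi) - \hat{g}(\xi)|}{|\xi|^{s}} .
\]
```

Contraction of a collision operator in either metric is what drives the existence, uniqueness and stability results described above.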