931 results for Error estimator
Abstract:
Pós-graduação em Agronomia (Energia na Agricultura) - FCA
Abstract:
The James-Stein estimator is a biased shrinkage estimator whose risk is uniformly smaller than that of the sample-mean estimator for the mean of a multivariate normal distribution, except in the one- and two-dimensional cases. In this work we use more heuristic arguments and intensify the geometric treatment of the theory of the James-Stein estimator. New James-Stein-type shrinkage estimators are proposed, and the Mahalanobis metric is used to derive the James-Stein estimator. To evaluate the performance of the proposed estimator relative to the sample-mean estimator, we used computer simulation by the Monte Carlo method, computing the mean squared error. The results indicate that the new estimator performs better than the sample-mean estimator.
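The Monte Carlo comparison described above can be sketched as follows. This is a minimal illustration with the classic James-Stein formula for a single observation from N(theta, I_p), not the new estimators proposed in the abstract; the dimension p = 10 and theta = 0 are illustrative choices.

```python
import numpy as np

def james_stein(x):
    """Classic James-Stein shrinkage of one observation x ~ N(theta, I_p), p >= 3."""
    p = x.shape[0]
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def monte_carlo_mse(theta, n_trials=20000, seed=0):
    """Estimate the mean squared error of the plain estimator x versus
    the James-Stein estimator by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    p = theta.shape[0]
    mse_plain, mse_js = 0.0, 0.0
    for _ in range(n_trials):
        x = theta + rng.standard_normal(p)          # x ~ N(theta, I_p)
        mse_plain += np.sum((x - theta) ** 2)
        mse_js += np.sum((james_stein(x) - theta) ** 2)
    return mse_plain / n_trials, mse_js / n_trials

theta = np.zeros(10)                                 # worst case for the plain estimator
mse_plain, mse_js = monte_carlo_mse(theta)
print(mse_plain, mse_js)                             # JS risk is well below p = 10
```

At theta = 0 the theoretical risks are p for the plain estimator and 2 for James-Stein, which the simulation reproduces closely.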
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Agronomia (Energia na Agricultura) - FCA
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This article deals with classification problems involving unequal probabilities in each class and discusses metrics for systems that use multilayer perceptron (MLP) neural networks to classify new patterns. In addition, we propose three new pruning methods, which are compared with seven other methods existing in the literature for MLP networks. All pruning algorithms presented in this paper were modified by the authors to prune whole neurons, so as to produce MLP networks that remain fully connected but have a small intermediate layer. Experiments were carried out on the unbalanced E. coli classification problem with all ten pruning methods. The proposed methods obtained good results, in fact better than the pruning methods previously defined in the MLP neural network area. (C) 2014 Elsevier Ltd. All rights reserved.
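The idea of pruning whole neurons while keeping the network fully connected can be sketched generically. This is not one of the paper's ten methods, only a hypothetical magnitude-based variant: each hidden neuron is scored by the size of its outgoing weights and the weakest ones are removed, shrinking the weight matrices.

```python
import numpy as np

def prune_hidden_neurons(W1, b1, W2, n_remove):
    """Remove the n_remove hidden neurons with the smallest outgoing-weight
    magnitude, yielding a smaller but still fully connected MLP.
    W1: (hidden, inputs), b1: (hidden,), W2: (outputs, hidden)."""
    scores = np.abs(W2).sum(axis=0)           # saliency of each hidden neuron
    keep = np.argsort(scores)[n_remove:]      # indices of the neurons we keep
    keep.sort()                               # preserve the original ordering
    return W1[keep, :], b1[keep], W2[:, keep]

# Toy example: 4 hidden neurons, prune the 2 weakest.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))
b1 = rng.standard_normal(4)
W2 = np.array([[0.01, 2.0, 0.02, 1.5]])       # neurons 0 and 2 are nearly dead
W1p, b1p, W2p = prune_hidden_neurons(W1, b1, W2, n_remove=2)
print(W2p)                                     # only the strong neurons remain
```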
Abstract:
Cognitive radio is a growing area in wireless communication that offers an opportunity for full utilization of inefficiently used frequency spectrum: provided it creates no interference for the primary (licensed) user, the secondary user is permitted to use the frequency band. However, designing a model in which the secondary user causes the least interference to the primary user is a challenging task. In this study we propose a transmission model based on error-correcting codes that handles a countable number of pairs of primary and secondary users. We obtain effective utilization of the spectrum by transmitting the data of the primary and secondary user pairs through linear codes of different given lengths. Using techniques from error-correcting codes, we develop several schemes for appropriate bandwidth distribution in cognitive radio.
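The building block of such a scheme, encoding a user's data with a linear code via a generator matrix over GF(2), can be sketched as follows. The [7,4] Hamming code is an illustrative stand-in, not the codes of the study, and the two message blocks are hypothetical.

```python
import numpy as np

# Generator matrix of the [7,4] Hamming code, a linear code over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword over GF(2)."""
    return np.mod(np.array(msg) @ G, 2)

primary = encode([1, 0, 1, 1])    # hypothetical primary-user data block
secondary = encode([0, 1, 1, 0])  # hypothetical secondary-user data block
print(primary, secondary)
```

Each pair's data occupies a codeword of its assigned length, and the code's redundancy is what protects the primary user's transmission from errors.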
Abstract:
In this paper we introduce a type of hypercomplex Fourier series based on quaternions, and discuss a hypercomplex version of the Square of the Error Theorem. Since their discovery by Hamilton (Sinegre [1]), quaternions have provided beautiful insights both into the structure of different areas of mathematics and into the connections of mathematics with other fields. For instance: (I) the Pauli spin matrices used in physics can be easily explained through quaternion analysis (Lan [2]); (II) the Fundamental Theorem of Algebra (Eilenberg [3]), which asserts that a polynomial of degree n in quaternions maps the four-dimensional sphere of all real quaternions, with the point at infinity added, into itself, and that the degree of this map is n. Motivated by earlier work by two of us on power series (Pendeza et al. [4]), and by a recent paper on Liouville's Theorem (Borges and Marão [5]), we obtain a hypercomplex version of the Fourier series, which hopefully can be used for the treatment of hypergeometric partial differential equations such as the damped harmonic oscillation.
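The algebraic ingredient underlying such a construction is the Hamilton product, whose non-commutativity is what distinguishes quaternionic from complex Fourier analysis. A minimal sketch (the series itself is not reproduced here):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) components."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0, 1, 0, 0])
j = np.array([0, 0, 1, 0])
print(qmul(i, j))   # i*j = k
print(qmul(j, i))   # j*i = -k: quaternion multiplication is not commutative
```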
Abstract:
The focus of this paper is to address some classical results for a class of hypercomplex numbers. More specifically, we present an extension of the Square of the Error Theorem and a Bessel inequality for octonions.
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ that uses $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
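The rate improvement follows directly from the stated parameters. A small sketch, assuming $n = 2^{r}-1$ (the length of a cyclic code generated by a degree-$r$ primitive polynomial) and taking the smallest case $b = 1$, $k = 1$ for illustration:

```python
def code_parameters(r, k):
    """Parameters of C0[n, n-r] with n = 2**r - 1, and of the lifted code
    C[(n+1)**(3**k) - 1, (n+1)**(3**k) - 1 - (3**k)*r] from the abstract."""
    n = 2**r - 1                        # length of the cyclic code C0
    rate0 = (n - r) / n
    N = (n + 1)**(3**k) - 1             # length of the new code C
    K = N - 3**k * r                    # dimension of C
    return (n, n - r, rate0), (N, K, K / N)

c0, c = code_parameters(r=2, k=1)       # b = 1, so r = 2b = 2
print(c0, c)                            # rate rises from 1/3 to 57/63
```

Here $C_{0}$ is a $[3,1]$ code of rate $1/3$, while $C$ is a $[63,57]$ code of rate $57/63 \approx 0.90$.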
Abstract:
Bovine tuberculosis (BTB) was introduced into Swedish farmed deer herds in 1987. Epidemiological investigations showed that 10 deer herds had become infected (July 1994), and a common source of infection, a consignment of 168 imported farmed fallow deer, was identified (I). As trace-back of all imported and in-contact deer was not possible, a control program based on tuberculin testing was implemented in July 1994. As Sweden has been free from BTB since 1958, few practicing veterinarians had experience in tuberculin testing. In this test, the result relies on the skill, experience and conscientiousness of the testing veterinarian. Deficiencies in performing the test may adversely affect the results and thereby compromise a control program. Quality indicators may identify possible deficiencies in testing procedures. For that purpose, reference values for measured skin fold thickness (prior to injection of the tuberculin) were established (II), suggested for use mainly by less experienced veterinarians to identify unexpected measurements. Furthermore, the within-veterinarian variation of the measured skin fold thickness was estimated by fitting general linear models to the skin fold measurements (III). The mean squared error was used as an estimator of the within-veterinarian variation. Using this method, four (6%) veterinarians were considered to have unexpectedly large variation in their measurements. On certain large extensive deer farms, where mustering all animals was difficult, meat inspection was suggested as an alternative to tuberculin testing. The efficiency of such a control was estimated in papers IV and V. A Reed-Frost model was fitted to data from seven BTB-infected deer herds and the spread of infection was estimated (< 0.6 effective contacts per deer and year) (IV). These results were used to model the efficiency of meat inspection in an average extensive Swedish deer herd.
Given a 20% annual slaughter and meat inspection, the model predicted that BTB would be either detected or eliminated in most herds (90%) within 15 years of the introduction of one infected deer. In 2003, an alternative control for BTB in extensive Swedish deer herds, based on the results of paper V, was implemented.
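The Reed-Frost model used above is a chain-binomial process in which each susceptible escapes infection in a time step with probability $(1-p)^{I_t}$. A minimal sketch of the dynamics; the herd size, the per-contact probability p = 0.01 and the seed are illustrative, not the fitted values from the thesis:

```python
import random

def reed_frost(S0, I0, p, steps, rng):
    """Chain-binomial Reed-Frost epidemic: a susceptible escapes infection
    in one time step with probability (1 - p) ** I_t."""
    S, I = S0, I0
    history = [(S, I)]
    for _ in range(steps):
        q = (1.0 - p) ** I                        # per-susceptible escape probability
        new_I = sum(1 for _ in range(S) if rng.random() > q)
        S, I = S - new_I, new_I
        history.append((S, I))
    return history

rng = random.Random(42)
h = reed_frost(S0=50, I0=1, p=0.01, steps=15, rng=rng)
print(h)  # (susceptible, infectious) counts per time step
```

Running many such chains for a given slaughter fraction is one way to estimate how soon infection would be detected at meat inspection or die out.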
Abstract:
You published recently (Nature 374, 587; 1995) a report headed "Error re-opens 'scientific' whaling debate". The error in question, however, relates to commercial whaling, not to scientific whaling. Although Norway cites science as a basis for the way in which it sets its own quota, scientific whaling means something quite different, namely killing whales for research purposes. Any member of the International Whaling Commission (IWC) has the right to conduct a research catch under the International Convention for the Regulation of Whaling, 1946. The IWC has reviewed new research or scientific whaling programs for Japan and Norway since the IWC moratorium on commercial whaling began in 1986. In every case, the IWC advised Japan and Norway to reconsider the lethal aspects of their research programs. Last year, however, Norway started a commercial hunt in combination with its scientific catch, despite the IWC moratorium.
Abstract:
This work develops a computational approach to boundary and initial-value problems using operational matrices, in order to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
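An operational matrix turns an analytic operation on a function into a matrix acting on its coefficient vector. A minimal sketch for the operational matrix of integration in the monomial basis (an illustrative choice; the abstract does not specify the basis used):

```python
import numpy as np

def integration_matrix(n):
    """Operational matrix P of integration for the monomial basis
    {1, x, ..., x^(n-1)}: if f has coefficient vector c, then the
    antiderivative of f with zero constant term has coefficients P @ c
    in the basis {1, x, ..., x^n}."""
    P = np.zeros((n + 1, n))
    for i in range(n):
        P[i + 1, i] = 1.0 / (i + 1)   # x^i integrates to x^(i+1)/(i+1)
    return P

# f(x) = 1 + 2x + 3x^2  ->  F(x) = x + x^2 + x^3
c = np.array([1.0, 2.0, 3.0])
F = integration_matrix(3) @ c
print(F)   # [0. 1. 1. 1.]
```

Composing such matrices reduces a differential problem to linear algebra on coefficient vectors, which is what makes an evolutive process in the function space computable.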
Abstract:
Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations that estimate potential evapotranspiration from temperature data only, while simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for temperature-based potential evapotranspiration estimates that make the results more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Temperature-based estimates by the Camargo and Jensen-Haise methods were then adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
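The second adjustment, simple linear regression of the temperature-based estimate against the Penman-Monteith reference, can be sketched as follows. The two series are hypothetical numbers, not the station data of the study:

```python
import numpy as np

def fit_correction(et_temp, et_pm):
    """Least-squares line et_pm ~ a * et_temp + b used to adjust a
    temperature-based potential evapotranspiration estimate toward the
    FAO Penman-Monteith reference."""
    a, b = np.polyfit(et_temp, et_pm, deg=1)
    return a, b

# Hypothetical biweekly ET series (mm/day): a biased temperature-based
# method versus the Penman-Monteith reference it should track.
et_temp = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
et_pm = np.array([2.6, 3.8, 5.0, 6.2, 7.4])
a, b = fit_correction(et_temp, et_pm)
corrected = a * et_temp + b
print(a, b)          # slope 1.2, intercept 0.2 for these synthetic data
print(corrected)
```

In practice the fit would be made on a calibration period (here, 2002-2006) and the correction applied to an independent validation year.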