923 results for "Teorema de Bayes" (Bayes' theorem)
Abstract:
The Tree Augmented Naïve Bayes (TAN) classifier relaxes the sweeping independence assumptions of the Naïve Bayes approach by taking account of conditional dependencies in a limited sense: it incorporates the conditional probability of each attribute given the class and (at most) one other attribute. Boosting has previously proven very effective in improving the performance of Naïve Bayes classifiers, and in this paper we investigate its effectiveness when applied to the TAN classifier.
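For readers unfamiliar with how the TAN structure is obtained, the tree over attributes is classically learned with a Chow-Liu procedure applied to class-conditional mutual information. The following is a minimal sketch of that construction, assuming discrete attributes; the function names, the plug-in frequency estimator, and the choice of attribute 0 as root are illustrative, not taken from the paper.

```python
# Minimal sketch of TAN structure learning (Chow-Liu over class-conditional
# mutual information). Illustrative only; assumes discrete-valued arrays.
import numpy as np
from itertools import combinations

def cond_mutual_info(x, y, c):
    """I(X; Y | C) estimated from discrete samples by empirical frequencies."""
    mi = 0.0
    for cv in np.unique(c):
        mask = c == cv
        pc = mask.mean()
        xs, ys = x[mask], y[mask]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy = np.mean((xs == xv) & (ys == yv))
                px, py = np.mean(xs == xv), np.mean(ys == yv)
                if pxy > 0:
                    mi += pc * pxy * np.log(pxy / (px * py))
    return mi

def tan_tree(X, c):
    """Return tree edges (parent, child) over attribute indices: the maximum
    spanning tree on I(X_i; X_j | C), rooted at attribute 0."""
    d = X.shape[1]
    w = {(i, j): cond_mutual_info(X[:, i], X[:, j], c)
         for i, j in combinations(range(d), 2)}
    in_tree, edges = {0}, []
    # Prim's algorithm for the maximum spanning tree.
    while len(in_tree) < d:
        i, j = max(((i, j) for (i, j) in w
                    if (i in in_tree) ^ (j in in_tree)),
                   key=lambda e: w[e])
        parent, child = (i, j) if i in in_tree else (j, i)
        edges.append((parent, child))
        in_tree.add(child)
    return edges
```

The class node is then made a parent of every attribute, so each attribute conditions on the class plus at most one other attribute, as the abstract describes.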
Abstract:
Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments.
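For context, the prediction errors being bounded are functionals of the standard Gaussian-process regression posterior. The display below recalls those textbook formulas under the usual assumptions (zero prior mean, Gaussian noise of variance $\sigma_n^2$); it is background, not the paper's bounds.

```latex
% Textbook GP regression posterior at a test input x_* (background only):
\[
  \bar f(x_*) = \mathbf{k}_*^{\top} (K + \sigma_n^{2} I)^{-1} \mathbf{y},
  \qquad
  \operatorname{Var}[f(x_*)] = k(x_*, x_*)
    - \mathbf{k}_*^{\top} (K + \sigma_n^{2} I)^{-1} \mathbf{k}_*,
\]
% with K_{ij} = k(x_i, x_j) and (k_*)_i = k(x_i, x_*). For a fixed training
% set the Bayesian prediction error at x_* is this posterior variance; the
% paper's bounds control its average over the input distribution.
```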
Abstract:
Bayesian algorithms pose a limit to the performance that learning algorithms can achieve. Natural selection should guide the evolution of information-processing systems towards those limits. What can we learn from this evolution, and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information-processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, for programs that implement learning algorithms, confirm and extend the results. The principal result is not just that the evolution is towards a Bayesian limit; that limit is essentially reached. In addition, we find that evolution is driven by the discovery of useful structures, that is, combinations of variables and operators. Across different runs, the temporal order in which such combinations are discovered is the same: combinations that signal the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm. These latter structures can be used to implement annealing schedules. The temporal ordering can also be understood analytically by carrying out the functional optimization in restricted functional spaces. We also show that there are data suggesting that the appearance of these traits follows the same temporal ordering in biological systems. © 2006 American Institute of Physics.
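As an illustration of the two ingredients tracked above, the sketch below shows a generic on-line perceptron update in which the weight change is gated by the surprise of each example and damped by an annealing schedule. It is a hand-written caricature, assuming a simple perceptron with labels in {-1, +1}; it is not the variationally optimal algorithm nor the evolved one from the paper.

```python
# Caricature of an on-line learning rule combining a "surprise" gate with an
# annealing schedule (illustrative; not the paper's algorithm).
import numpy as np

def online_train(examples, dim, eta0=1.0):
    w = np.zeros(dim)
    for t, (x, label) in enumerate(examples, start=1):
        surprise = 1.0 if np.sign(w @ x) != label else 0.0  # misclassified?
        eta = eta0 / t                                      # annealing schedule
        w += eta * surprise * label * x                     # Hebbian-style update
    return w
```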
Abstract:
Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression to the mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from the years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for the individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, while the ZINB was found to be more appropriate for six other categories. The overall results show that the Negative Binomial model generally provides a better fit to the data than the Poisson model, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as they do for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the developed models, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. With improved traffic and crash data quality, however, the crash prediction power of the SPF models may be further improved.
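As an illustration of the modeling step, an SPF of the common form E[crashes] = exp(b0 + b1 ln AADT) can be fitted as a negative binomial GLM. The sketch below uses statsmodels; the data file and column names are hypothetical, and the zero-inflated variants considered in the dissertation would require statsmodels' count models instead.

```python
# Minimal sketch of fitting a negative binomial SPF to segment crash counts.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

seg = pd.read_csv("segments.csv")            # one-mile homogeneous segments
X = sm.add_constant(np.log(seg["aadt"]))     # ln(traffic exposure)
y = seg["crashes_2001_2003"]                 # 3-year crash counts

nbrm = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(nbrm.summary())   # a slope coefficient > 1 means crashes grow at an
                        # increasing rate with exposure, as in the scatter plots
```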
Abstract:
This study presents the results of an investigation into how the history of mathematics and theater can contribute to the construction of mathematical knowledge by students in the 9th year of elementary school, through the experience of preparing and performing a play, as well as the presentation of its script. The script takes a historical approach, defining the space and time of the events and leading the reader and viewer through the biography of Thales of Miletus (624-546 BC), creating situations that prompt the study and discussion of the content related to the episode in which the height of the pyramid of Khufu was measured, and of the Theorem of Thales. The pedagogical proposal implemented in this work was based on the theoretical and methodological assumptions of the History of Mathematics and of Theater, drawing upon authors such as Mendes (2006), Miguel (1993), Gutierre (2010), Desgrandes (2011), and Cabral (2012). Regarding methodological procedures, we used qualitative research, because it responds to particular issues, analyzing and interpreting the data generated in the research field. As methodological tools we used participant observation, a questionnaire given to the students, a field diary, and essays produced by the students. The data collected through the questionnaires were organized, classified, and quantified in tables and graphs for ease of viewing, interpretation, understanding, and analysis. The data analysis corroborated our hypothesis and contributed to improving the use and staging of the play as a motivating activity in mathematics classrooms. We therefore consider that the script developed, that is, the proposed educational product, will bring significant contributions to the teaching of mathematics in primary education.
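The measurement episode dramatized in the play rests on a single proportionality between similar triangles; a worked instance, with illustrative numbers, is:

```latex
% Thales' shadow measurement (numbers are illustrative, not from the study).
\[
  \frac{h_{\text{pyramid}}}{s_{\text{pyramid}}}
  = \frac{h_{\text{stick}}}{s_{\text{stick}}}
  \quad\Longrightarrow\quad
  h_{\text{pyramid}} = s_{\text{pyramid}} \cdot
  \frac{h_{\text{stick}}}{s_{\text{stick}}} .
\]
% Example: a 1 m stick casting a 1.5 m shadow, while the pyramid's shadow
% measured from the centre of its base is 220 m, gives
% h = 220 * (1 / 1.5) ~ 147 m.
```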
Abstract:
Dark matter is a fundamental ingredient of modern cosmology. It is necessary in order to explain the process of structure formation in the Universe, the rotation curves of galaxies, and the mass discrepancy in clusters of galaxies. However, although many efforts, both theoretical and experimental, have been made, the nature of dark matter is still unknown, and the only convincing evidence for its existence is gravitational. This raises doubts about its existence and, in turn, opens the possibility that Einstein's gravity needs to be modified at some scale. In this work we study the possibility that Eddington-Born-Infeld (EBI) modified gravity provides an alternative explanation for the mass discrepancy in clusters of galaxies. For this purpose we derive the modified Einstein field equations and find their solutions for a spherical system of identical, collisionless point particles. We then take into account the collisionless relativistic Boltzmann equation and, using some approximations and assumptions valid for weak gravitational fields, derive the generalized virial theorem in the framework of EBI gravity. In order to compare the predictions of EBI gravity with astrophysical observations, we estimate the order of magnitude of the geometric mass, showing that it is compatible with present observations. Finally, considering a power law for the density of galaxies in the cluster, we derive expressions for the radial velocity dispersion of the galaxies, which can be used to test some features of EBI gravity.
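For orientation, the Newtonian baseline that the generalized virial theorem extends is the standard cluster mass estimate below; in EBI gravity an additional geometric-mass contribution enters this balance. The display is textbook material, not the EBI result itself.

```latex
% Standard virial mass estimate for a relaxed galaxy cluster (baseline only):
\[
  2K + W = 0
  \quad\Longrightarrow\quad
  M_{\text{vir}} \simeq \frac{3\,\sigma_r^{2}\,R}{G},
\]
% where sigma_r is the radial velocity dispersion of the member galaxies and
% R a characteristic cluster radius. The abstract's "geometric mass" is the
% extra EBI term whose order of magnitude is compared with observations.
```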
Abstract:
Grothendieck's original Riemann-Roch theorem states that for every proper morphism $f : Y \to X$ between smooth quasi-projective varieties over a field, and every element $a \in K_0(Y)$ of the Grothendieck group of vector bundles, one has $\mathrm{ch}(f_!(a)) = f_*\big(\mathrm{Td}(T_f)\,\mathrm{ch}(a)\big)$ (cf. [BS58]). Here $\mathrm{ch}$ is the Chern character, $\mathrm{Td}(T_f)$ is the Todd class of the relative tangent bundle, and $f_*$ and $f_!$ are the direct images in the Chow ring and in $K_0$, respectively. Later, Baum, Fulton, and MacPherson proved in [BFM75] the Riemann-Roch theorem for locally complete intersection morphisms between projective, possibly singular, algebraic schemes (that is, schemes separated and locally of finite type over a field). In [FG83], Fulton and Gillet proved the theorem without projectivity hypotheses. The extension to higher K-theory for regular schemes over a base was proved by Gillet in [Gil81]; the Riemann-Roch theorem proved there is for projective morphisms between smooth quasi-projective schemes, so in the case of schemes over a field Gillet's result does not recover the theorem of [BFM75]. The broadest generalization of the Riemann-Roch theorem that I know of is given in [Dég14] and [HS15], where Déglise and Holmström-Scholbach independently obtained the Riemann-Roch theorem for higher K-theory and projective lci (locally complete intersection) morphisms between regular schemes over a noetherian base of finite dimension...
Abstract:
This thesis presents the concept of a convex polytope and provides some examples, then introduces some basic methods and significant results of polytope theory. In particular, we prove the equivalence of the two definitions of H-polytope and V-polytope, exploiting the Fourier-Motzkin elimination method for cones. This makes it possible to describe, thanks to Farkas' lemma, some important constructions such as the recession cone and the homogenization of a convex set.
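Fourier-Motzkin elimination, the engine behind the H-polytope/V-polytope equivalence, projects a system of linear inequalities onto fewer variables by pairing constraints with opposite signs on the eliminated variable. The following is a minimal sketch of one elimination step, with no redundancy removal; names and conventions are illustrative.

```python
# Minimal sketch of one Fourier-Motzkin elimination step on A x <= b.
import numpy as np

def eliminate(A, b, k):
    """Project the polyhedron {x : A x <= b} onto the coordinates != k."""
    pos = [i for i in range(len(A)) if A[i, k] > 0]
    neg = [i for i in range(len(A)) if A[i, k] < 0]
    zero = [i for i in range(len(A)) if A[i, k] == 0]
    rows, rhs = [], []
    for i in zero:                         # constraints untouched by x_k
        rows.append(np.delete(A[i], k))
        rhs.append(b[i])
    for i in pos:                          # pair upper and lower bounds on x_k
        for j in neg:
            # Scale each row by its x_k coefficient so that x_k cancels.
            row = A[i] / A[i, k] - A[j] / A[j, k]
            rows.append(np.delete(row, k))
            rhs.append(b[i] / A[i, k] - b[j] / A[j, k])
    return np.array(rows), np.array(rhs)
```

Iterating this step over all variables of an H-description yields the projection used to pass between the two polytope representations, at the cost of the well-known blow-up in the number of inequalities.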
Abstract:
The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for the exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax according to the availability of information on the change-time distribution: a Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and independent and identically distributed observations before and after the change are solvable in polynomial time. Change-detection problems on hidden Markov models with a fixed number of recurrent states are also solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. Infinite-horizon problems with a linear penalty for detection delay and independent and identically distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
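The cumulative-sum procedure mentioned in the last result is the classical CUSUM detector; the dissertation's contribution is an epsilon-optimal parameterization of it, which the sketch below does not reproduce. The threshold and the Gaussian mean-shift example are illustrative choices.

```python
# Minimal sketch of the classical CUSUM change detector.
import numpy as np

def cusum(observations, log_lr, h):
    """Declare a change when the running CUSUM statistic exceeds h.

    log_lr(x) is the log-likelihood ratio log p1(x)/p0(x) of the
    post-change to pre-change observation densities."""
    s = 0.0
    for t, x in enumerate(observations):
        s = max(0.0, s + log_lr(x))   # accumulate evidence, reflect at zero
        if s >= h:
            return t                  # alarm time
    return None                       # no change declared

# Example: detect a mean shift 0 -> 1 in unit-variance Gaussian data,
# for which log p1(x)/p0(x) = x - 0.5.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(cusum(data, log_lr=lambda x: x - 0.5, h=5.0))
```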