10 results for General Information Theory
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Currently, one of the biggest challenges in data mining is performing cluster analysis on complex data. Several techniques have been proposed, but in general they achieve good results only within specific domains, and there is no consensus on the best way to group this kind of data. These techniques typically fail because of unrealistic assumptions about the true probability distribution of the data. Based on this, this thesis proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to quantify the interaction between groups. The proposed approach preserves the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. From it, two cost functions and three algorithms are proposed to perform cluster analysis. Because Information Theory captures the relationship between different patterns regardless of assumptions about the nature of that relationship, the proposed approach achieved better performance than the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data drawn from problems in different fields.
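As a rough illustration of the descriptor mentioned above, the sketch below estimates a Cross Information Potential between two clusters using Gaussian Parzen windows. It is a simplified stand-in, not the thesis' exact method: the thesis works with representative points and statistics extracted from the data rather than all pairwise kernel terms, and the function names here are ours.

import numpy as np

def gaussian_kernel(d2, width):
    # Unnormalized Gaussian evaluated on squared distances; the normalization
    # constant is omitted since only relative comparisons between cluster pairs
    # are made in this toy example.
    return np.exp(-d2 / (2.0 * width ** 2))

def cross_information_potential(X, Y, sigma=1.0):
    # CIP(X, Y) ~ (1 / (N*M)) * sum_i sum_j G_{sigma*sqrt(2)}(x_i - y_j)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return gaussian_kernel(d2, sigma * np.sqrt(2.0)).mean()

# Toy usage: two well-separated blobs interact less (lower CIP) than two
# overlapping ones.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 2))
B_far = rng.normal(8.0, 1.0, size=(200, 2))
B_near = rng.normal(1.0, 1.0, size=(200, 2))
print(cross_information_potential(A, B_far), cross_information_potential(A, B_near))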
Abstract:
Conventional methods for solving the nonlinear blind source separation problem generally rely on a series of restrictions to obtain a solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve blind source separation problems, first linear and then nonlinear. In the linear model, genetic algorithms and the Rényi negentropy are applied as the measure of independence to find a separation matrix for linear mixtures of signals such as waveforms, audio and images. A comparison is made with two Independent Component Analysis algorithms that are widespread in the literature. Subsequently, the same measure of independence is used as the cost function in the genetic algorithm to recover source signals that were mixed by nonlinear functions, using an artificial neural network of the radial basis function type. Genetic algorithms are powerful tools for global search and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations.
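A minimal sketch of the independence measure described above, under assumed simplifications: the quadratic Rényi entropy is estimated with a Gaussian Parzen window, negentropy is taken as the gap to a Gaussian of equal variance, the function names are ours, and the genetic algorithm that would search over the separation matrix W is not shown.

import numpy as np

def renyi_quadratic_entropy(x, sigma=0.5):
    # H2(x) = -log( (1/N^2) * sum_ij G_{sigma*sqrt(2)}(x_i - x_j) ), 1-D signal x
    s = sigma * np.sqrt(2.0)
    d2 = (x[:, None] - x[None, :]) ** 2
    ip = (np.exp(-d2 / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi))).mean()
    return -np.log(ip)

def negentropy_cost(W, X, sigma=0.5):
    # Sum of marginal negentropies of the candidate outputs Y = W X; a genetic
    # algorithm could maximize this cost over the entries of W. In practice a
    # whitening/decorrelation constraint keeps outputs from duplicating a source.
    Y = W @ X
    total = 0.0
    for y in Y:
        h_gauss = 0.5 * np.log(4.0 * np.pi * y.var())  # Rényi-2 entropy of a Gaussian with equal variance
        total += h_gauss - renyi_quadratic_entropy(y, sigma)
    return total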
Abstract:
Information constitutes one of the most valuable strategic assets of an organization. However, the organizational environment in which it is embedded is very complex and heterogeneous, which makes issues related to Information Technology (IT) Governance and Information Security increasingly relevant. Academic studies and market surveys indicate that the origin of most incidents involving information assets is the behavior of people in the organization itself rather than external attacks. To promote a security culture among users and ensure the protection of information in its properties of confidentiality, integrity and availability, organizations must establish an Information Security Policy (PSI). This policy formalizes the guidelines concerning the security of corporate information resources, so as to prevent asset vulnerabilities from being exploited by threats and bringing negative consequences to the business. For the PSI to be effective, however, users must be willing to accept and follow its procedures and security standards. In this context, the present study investigates which extrinsic and intrinsic motivators affect users' willingness to comply with the organization's security policies. The theoretical framework addresses issues related to IT Governance, Information Security, Deterrence Theory, Motivation and Pro-social Behavior. A theoretical model was created based on the studies of Herath and Rao (2009) and D'Arcy, Hovav and Galletta (2009), which are grounded in General Deterrence Theory and propose the following factors influencing compliance with the Policy: Severity of Punishment, Certainty of Detection, Peer Behaviour, Normative Beliefs, Perceived Effectiveness and Moral Commitment. The research used a quantitative, descriptive approach. The data were collected through a questionnaire with 18 variables on a five-point Likert scale representing the influencing factors proposed by the theory. The sample was composed of 391 students entering the courses of the Center for Applied Social Sciences of the Universidade Federal do Rio Grande do Norte. For the data analysis, the techniques of Exploratory Factor Analysis, hierarchical and non-hierarchical Cluster Analysis, Logistic Regression and Multiple Linear Regression were adopted. As the main results, it is noteworthy that the Severity of Punishment factor contributes the most to the theoretical model and also drives the division of the sample into more and less predisposed users. As a practical implication, the applied research model allows organizations to identify the less predisposed users and to direct awareness and training actions at them, as well as to write more effective Security Policies.
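As a purely illustrative sketch of the kind of analysis pipeline described above (hypothetical data, factor count and column layout; not the study's dataset or exact procedure), one could reduce the Likert items to factors, split respondents into two groups by clustering, and use logistic regression to see which factors drive the split:

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
likert = rng.integers(1, 6, size=(391, 18)).astype(float)   # 18 five-point Likert items

factors = FactorAnalysis(n_components=6, random_state=0).fit_transform(likert)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(factors)

model = LogisticRegression().fit(factors, groups)
print(model.coef_)   # which factors most separate the two groups of users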
Abstract:
Using literature to discuss the topic of food, and bourgeois cuisine in particular, was the purpose of this work. As a corpus, we used one of the works of Eça de Queiroz, The City and the Mountains. The theoretical references were Claude Lévi-Strauss's concept of universal cuisine and Jean Claude Fischler's concept of specific cuisine, which understands food as a cultural system that includes the representations, beliefs and practices of a specific group. After the initial reading of the novel and the construction of a file containing general information about the work, the categories designed for the elaboration of the analysis material were: work, characters, food, intellectuals and geographies. We came to see cuisine as an epicenter for understanding the culture of a specific group, in this case the bourgeoisie. We proposed a quaternary model to systematize it: this bourgeois cuisine highlights technique; has affection for what is rare and/or expensive but still consumes it with temperance; establishes a new relationship with the use of time; and, finally, opens up the ritual of frequenting restaurants and cafés. The exercise of thinking about bourgeois cuisine through literature suggests that art may help increase the comprehensive capabilities of nutritionists, professionals who deal with a complex object in their practice: food.
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Some approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Several analyses are made and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are carried out on test sets, and the results are analyzed in order to study the robustness of the method and to propose heuristics for the choice of the correct threshold. Throughout the work, aspects of information theory applied to the calculation of the divergences are explored, in particular the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
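A minimal sketch of the linkage idea described above, assuming k-means as the vector quantizer and plain Euclidean distance between centroids in place of the divergence measures studied in the work:

import numpy as np
from sklearn.cluster import KMeans
from scipy.sparse.csgraph import connected_components

def link_auxiliary_clusters(X, Na=20, dt=1.0):
    # Quantize the data into Na auxiliary clusters.
    km = KMeans(n_clusters=Na, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_
    # Link auxiliary clusters whose centroids are closer than the threshold dt.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    adjacency = (d < dt) & ~np.eye(Na, dtype=bool)
    # Connected components of the linkage graph define the final classes.
    n_classes, aux_to_class = connected_components(adjacency, directed=False)
    return n_classes, aux_to_class[km.labels_]   # class label for every point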
Abstract:
Many astronomical observations in the last few years strongly suggest that the current Universe is spatially flat and dominated by an exotic form of energy. This unknown energy density accelerates the expansion of the universe and corresponds to around 70% of its total density, being usually called Dark Energy or Quintessence. One of the candidates for dark energy is the so-called cosmological constant (Λ), which is usually interpreted as the vacuum energy density. However, in order to remove the discrepancy between the expected and observed values of the vacuum energy density, some current models assume that the vacuum energy is continuously decaying due to its possible coupling with the other matter fields existing in the Cosmos. In this dissertation, starting from the concepts and foundations of the General Theory of Relativity, we study the Cosmic Microwave Background Radiation with emphasis on the anisotropies, or temperature fluctuations, which are among the oldest relics of the observable Universe. The anisotropies are deduced by integrating the Boltzmann equation in order to explain qualitatively the generation and classification of the fluctuations. We then construct explicitly the angular power spectrum of anisotropies for cosmologies with a cosmological constant (ΛCDM) and with a decaying vacuum energy density (Λ(t)CDM). Finally, based on the quadrupole moment measured by the WMAP experiment, we estimate the decay rates of the vacuum energy density into matter and into radiation for a smoothly and a non-smoothly decaying vacuum.
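As an illustration of the kind of coupling referred to above, a generic flat Λ(t)CDM setup (not necessarily the specific decay law adopted in the dissertation) replaces the constant vacuum term by a time-dependent one and transfers the corresponding energy to the matter (or, analogously, the radiation) component:

\[
H^{2} = \frac{8\pi G}{3}\left(\rho_{m} + \rho_{r} + \rho_{\Lambda}\right),
\qquad
\dot{\rho}_{m} + 3H\rho_{m} = -\,\dot{\rho}_{\Lambda},
\]

with \(\rho_{\Lambda}(t) = \Lambda(t)/(8\pi G)\), so that a decaying vacuum continuously feeds the coupled component.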
Abstract:
In this dissertation, after a brief review of Einstein's General Relativity Theory and its application to the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories arise when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R with some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration without invoking either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize Einstein's equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia Supernovae observations, we show how the f(R) = R − β/Rⁿ class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters β and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution. Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p satisfying the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius rc (beyond which causality is violated) for an arbitrary Palatini f(R) theory. The expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content components. We concretely examine the Gödel-type perfect-fluid solutions in the f(R) = R − β/Rⁿ class of Palatini gravity theories, and show that for positive matter density and for β and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, Palatini f(R) gravity remedies the causal pathology in the form of closed time-like curves which is allowed in General Relativity. We also examine the violation of causality of Gödel-type by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid and a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
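For reference, the field equations obtained by independent variation of the metric and of the connection in a generic Palatini f(R) theory take the standard textbook form below (a general statement, not a detail specific to this dissertation), where the Ricci tensor is built from the independent connection and \(\mathcal{R} \equiv g^{\mu\nu}\mathcal{R}_{\mu\nu}\):

\[
f'(\mathcal{R})\,\mathcal{R}_{(\mu\nu)} - \tfrac{1}{2}\, f(\mathcal{R})\, g_{\mu\nu} = \kappa\, T_{\mu\nu},
\qquad
\tilde{\nabla}_{\alpha}\!\left(\sqrt{-g}\, f'(\mathcal{R})\, g^{\mu\nu}\right) = 0.
\]

Setting f(R) = R reduces the first equation to Einstein's equations, while the second forces the independent connection to be the Levi-Civita connection of the metric.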
Abstract:
I thank my advisor, João Marcos, for the intellectual support and patience he devoted to me throughout my graduate years. With his friendship, his ability to look at problems from the best point of view and his love of doing Logic, he became a great inspiration for me. I thank my committee members, Claudia Nalon, Elaine Pimentel and Benjamin Bedregal, who made a rigorous reading of my work and gave me valuable suggestions to improve it. I am grateful to the Post-Graduate Program in Systems and Computation, which accepted me as a student and provided me with a propitious environment in which to develop my research. I also thank CAPES for a 21-month fellowship. Thanks to my research group, LoLITA (Logic, Language, Information, Theory and Applications). In this group I had the opportunity to make some friends. Some of them I met in my early classes: Sanderson, Haniel and Carol Blasio. Others I met during the course, among them Patrick, Claudio, Flaulles and Ronildo. I thank Severino Linhares and Maria Linhares, who gently hosted me at their home during my first months in Natal. This couple, together with my student-flat colleagues Fernando, Donátila and Aline, were my nuclear family in Natal. I thank my fiancée Luclécia for her precious affective support and for understanding my absence from home during my master's. I also thank my parents Manoel and Zenilda and my siblings Alexandre, Paulo and Paula. Without their confidence and encouragement I would not have achieved success in this journey. "If you want the hits, be prepared for the misses." Carl Yastrzemski
Abstract:
Coding is a fundamental aspect of cerebral functioning. The transformation of sensory stimuli into neurophysiological responses has been a research theme in several areas of Neuroscience. One of the most common ways to measure the efficiency of a neural code is through Information Theory measures, such as mutual information. Using these tools, recent studies have shown that in the auditory cortex both local field potentials (LFPs) and action potential (spike) times code information about sound stimuli. However, there are no studies applying Information Theory tools to investigate the efficiency of codes that use postsynaptic potentials (PSPs), either alone or in association with LFP analysis. These signals are related in the sense that LFPs are partly created by the joint action of several PSPs. The present dissertation reports information measures between PSP and LFP responses obtained in the primary auditory cortex of anaesthetized rats and auditory stimuli of distinct frequencies. Our results show that PSP responses hold information about sound stimuli at levels comparable to, and even greater than, those of LFP responses. We have also found that PSPs and LFPs code sound information independently, since the joint analysis of these signals showed neither synergy nor redundancy.
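For illustration only (hypothetical arrays and a plain plug-in estimator; the dissertation's measures may include bias corrections), the sketch below computes mutual information between a discrete stimulus label and a discretized response, together with the synergy/redundancy comparison mentioned above:

import numpy as np

def mutual_information(stim, resp):
    # Plug-in mutual information (in bits) between two discrete integer sequences.
    joint = np.zeros((stim.max() + 1, resp.max() + 1))
    for s, r in zip(stim, resp):
        joint[s, r] += 1
    joint /= joint.sum()
    ps, pr = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

def joint_code(a, b):
    # Combine two discretized responses into a single symbol per trial.
    return a * (b.max() + 1) + b

# Usage with hypothetical discretized data (stim, psp, lfp as integer arrays):
# syn = mutual_information(stim, joint_code(psp, lfp)) \
#       - mutual_information(stim, psp) - mutual_information(stim, lfp)
# syn > 0 suggests synergy, syn < 0 redundancy, syn ~ 0 independent coding.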
Abstract:
Information extraction is a frequent and relevant problem in digital signal processing. In the past few years, different methods have been used for the parameterization of signals and for obtaining efficient descriptors. When the signals possess cyclostationary statistical properties, the Cyclic Autocorrelation Function (CAF) and the Spectral Cyclic Density (SCD) can be used to extract second-order cyclostationary information. However, second-order cyclostationary information is poor for non-Gaussian signals, since the cyclostationary analysis in this case should also comprise higher-order statistical information. This work proposes a new mathematical tool for higher-order cyclostationary analysis based on the correntropy function. Specifically, the cyclostationary analysis is revisited from an information-theoretic perspective, and the Cyclic Correntropy Function (CCF) and the Cyclic Correntropy Spectral Density (CCSD) are defined. Furthermore, it is analytically proven that the CCF contains information about second- and higher-order cyclostationary moments, being a generalization of the CAF. The performance of these new functions in the extraction of higher-order cyclostationary features is analyzed in a wireless communication system in the presence of non-Gaussian noise.
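A minimal sketch of an estimator in the spirit of the CCF defined above, under assumed simplifications (single-realization time average with a Gaussian kernel; not the thesis' exact definition):

import numpy as np

def cyclic_correntropy(x, lag, alpha, sigma=1.0):
    # V_alpha(lag) ~ mean_n exp(-(x[n] - x[n+lag])^2 / (2*sigma^2)) * exp(-2j*pi*alpha*n),
    # i.e. the Fourier coefficient, over time, of the instantaneous correntropy at this lag.
    n = np.arange(len(x) - lag)
    v_inst = np.exp(-((x[n] - x[n + lag]) ** 2) / (2.0 * sigma ** 2))
    return (v_inst * np.exp(-2j * np.pi * alpha * n)).mean()

# Toy usage: a BPSK-like signal with symbol period 8 samples exhibits nonzero
# cyclic correntropy at alpha equal to its symbol rate (here 1/8).
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=256).repeat(8)
signal = symbols + 0.5 * rng.standard_normal(symbols.size)
print(abs(cyclic_correntropy(signal, lag=4, alpha=1 / 8)))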