883 results for 280506 Coding and Information Theory
Abstract:
We present ideas about creating a next-generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge in computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System (IDS) for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation, and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking, and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
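To make the signal-correlation idea concrete, here is a minimal, hypothetical sketch (not from the project itself): it aggregates a few invented host-level danger signals and fires a response only when their weighted combination crosses a threshold, loosely mirroring how the DT grounds a response in correlated evidence rather than in deviation from a self model.

```python
# Illustrative sketch of a Danger Theory-style detector that correlates
# several hypothetical "danger signals" instead of relying on self-nonself
# discrimination. Signal names, weights, and the threshold are invented.

from dataclasses import dataclass

@dataclass
class DangerSignal:
    name: str
    value: float   # normalized to [0, 1]
    weight: float

def danger_score(signals: list[DangerSignal]) -> float:
    """Weighted combination of concurrent danger signals."""
    total_weight = sum(s.weight for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.value * s.weight for s in signals) / total_weight

def react(signals: list[DangerSignal], threshold: float = 0.6) -> bool:
    """An immune-style response fires only when the correlated signals
    exceed a context-dependent threshold."""
    return danger_score(signals) >= threshold

# Example: several signals coinciding around the same host ("grounding")
signals = [
    DangerSignal("cpu_spike", 0.8, weight=1.0),
    DangerSignal("failed_logins", 0.9, weight=2.0),
    DangerSignal("unusual_outbound_traffic", 0.7, weight=1.5),
]
print(react(signals))  # True: multiple danger signals coincide
```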
Abstract:
The conceptual domain of agency theory is one of the dominant organisational theory perspectives applied in current family business research (Chrisman et al., 2010). According to agency theory (Jensen and Meckling, 1976), agency costs generally arise due to individuals' self-interest and decision making based on rational thinking and oriented toward their own preferences. With more people involved in decision making, such as through the separation of ownership and management, agency costs occur due to differing preferences and information asymmetries between the owner (principal) and the employed management (agent) (Jensen and Meckling, 1976). In other words, agents make decisions based on their individual preferences (for example, short-term financial gains) instead of the owners' preferences (for example, long-term, sustainable development).
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based but human-understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema and the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve, and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
Abstract:
The thesis deals with the problem of Model Selection (MS) motivated by information and prediction theory, focusing on parametric time series (TS) models. The main contribution of the thesis is the extension to the multivariate case of the Misspecification-Resistant Information Criterion (MRIC), a recently introduced criterion that solves Akaike's original research problem, posed 50 years ago, which led to the definition of the AIC. The importance of MS is witnessed by the huge amount of literature devoted to it and published in scientific journals of many different disciplines. Despite such widespread treatment, the contributions that adopt a mathematically rigorous approach are not so numerous, and one of the aims of this project is to review and assess them. Chapter 2 discusses methodological aspects of MS from information theory. Information criteria (IC) for the i.i.d. setting are surveyed along with their asymptotic properties, as well as the cases of small samples, misspecification, and further estimators. Chapter 3 surveys criteria for TS. IC and prediction criteria are considered for univariate models (AR, ARMA) in the time and frequency domains; parametric multivariate models (VARMA, VAR); nonparametric nonlinear models (NAR); and high-dimensional models. The MRIC answers Akaike's original question on efficient criteria for possibly-misspecified (PM) univariate TS models in multi-step prediction, including high-dimensional data and nonlinear models. Chapter 4 extends the MRIC to PM multivariate TS models for multi-step prediction, introducing the Vectorial MRIC (VMRIC). We show that the VMRIC is asymptotically efficient by proving the decomposition of the MSPE matrix and the consistency of its Method-of-Moments Estimator (MoME) for Least Squares multi-step prediction with a univariate regressor. Chapter 5 extends the VMRIC to the general multiple-regressor case, showing that the MSPE matrix decomposition holds, obtaining consistency for its MoME, and proving its efficiency. The chapter concludes with a digression on the conditions for PM VARX models.
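For orientation, the information-criterion machinery the thesis builds on starts from Akaike's AIC. Below is a minimal sketch, assuming the standard Gaussian least-squares form AIC = n ln(RSS/n) + 2k and synthetic data; none of it is taken from the thesis itself.

```python
# Illustrative sketch: AR(p) order selection by minimizing
# AIC = n*log(RSS/n) + 2k, the Gaussian least-squares form of Akaike's
# criterion. The data below are synthetic; the helper is invented.

import numpy as np

def fit_ar_aic(x: np.ndarray, p: int) -> float:
    """Fit AR(p) by least squares and return its AIC."""
    n = len(x) - p
    # Design matrix of lagged values: column j holds lag j+1
    X = np.column_stack([x[p - j - 1 : len(x) - j - 1] for j in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = p + 1  # AR coefficients plus the noise variance
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
# Simulate an AR(2) process: x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

aics = {p: fit_ar_aic(x, p) for p in range(1, 6)}
print(min(aics, key=aics.get))  # typically selects p = 2
```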
Abstract:
In a quantum critical chain, the scaling regimes of the energy and momentum of the ground state and low-lying excitations are described by conformal field theory (CFT). The same holds true for the von Neumann and Rényi entropies of the ground state, which display a universal logarithmic behavior depending on the central charge. In this Letter we generalize this result to those excited states of the chain that correspond to primary fields in CFT. It is shown that the nth Rényi entropy is related to a 2n-point correlator of primary fields. We verify this statement for the critical XX and XXZ chains. This result uncovers a new link between quantum information theory and CFT.
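For context, the universal logarithmic behavior referred to is the standard ground-state result for an interval of length ℓ in an infinite critical chain with central charge c; the excited-state generalization is the Letter's contribution and is not reproduced here.

```latex
% n-th Renyi entropy of the reduced density matrix \rho_A:
S_n = \frac{1}{1-n}\,\ln \operatorname{Tr}\rho_A^{\,n},
% ground-state scaling for an interval of length \ell (central charge c):
S_n(\ell) = \frac{c}{6}\left(1+\frac{1}{n}\right)\ln \ell + c_n',
% with the von Neumann entropy recovered in the limit n \to 1:
S_1(\ell) = \frac{c}{3}\,\ln \ell + c_1'.
```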
Abstract:
This article discusses issues related to the organization and reception of information in the context of services and public information systems driven by technology. It stems from the assumption that in a "technologized" society, the distance between users and information is almost always of a cognitive and socio-cultural nature, a product of our effort to design communication. In this context, we favor the approach of the information sign, seeking to answer how a documentary message turns into information, i.e. a structure recognized as socially useful. Observing the structural, cognitive, and communicative aspects of the documentary message, based on Documentary Linguistics and Terminology, as well as on Textual Linguistics, the knowledge management and innovation policy of the Government of the State of São Paulo, which authorizes the use of Web 2.0, is analyzed, questioning to what extent this initiative represents innovation in the environment of libraries.
Abstract:
Starting from the acknowledgment that the principles and methods used to build and manage documentary systems are dispersed and lack systematization, this study hypothesizes that the notion of structure, by assuming mutual relationships among its elements, promotes more organic systems and assures better quality and consistency in the retrieval of information concerning users' needs. Accordingly, it aims to explore the fundamentals of information records and documentary systems, starting from the notion of structure. To that end, it presents basic concepts and matters related to documentary systems and information records. It then surveys the theoretical groundwork on the notion of structure as studied by Benveniste, Ferrater Mora, Levi-Strauss, Lopes, Penalver Simo, and Saussure, as well as Ducrot, Favero, and Koch. Appropriations already made in Documentation by Paul Otlet, Garcia Gutierrez, and Moreiro Gonzalez come as a further topic. It concludes that the notion of structure adopted to make explicit a hypothesis of real systematization achieves more organic systems and provides a pedagogical reference for documentary tasks.
Abstract:
We study the transformation of maximally entangled states under the action of Lorentz transformations in a fully relativistic setting. By explicit calculation of the Wigner rotation, we describe the relativistic analog of the Bell states as viewed from two inertial frames moving with constant velocity with respect to each other. Though the finite-dimensional matrices describing the Lorentz transformations are non-unitary, each single-particle state of the entangled pair undergoes an effective, momentum-dependent, local unitary rotation, thereby preserving the entanglement fidelity of the bipartite state. The details of how these unitary transformations are manifested are explicitly worked out for the Bell states comprised of massive spin-1/2 particles and massless photon polarizations. The relevance of this work to non-inertial frames is briefly discussed.
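For reference, the Bell states in question are the four maximally entangled two-qubit states; these are the standard definitions, not this paper's relativistic, momentum-dependent notation.

```latex
% The four Bell states (standard two-qubit definitions):
\lvert \Phi^{\pm} \rangle = \tfrac{1}{\sqrt{2}}\left(\lvert 00 \rangle \pm \lvert 11 \rangle\right),
\qquad
\lvert \Psi^{\pm} \rangle = \tfrac{1}{\sqrt{2}}\left(\lvert 01 \rangle \pm \lvert 10 \rangle\right).
```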
Abstract:
Using a species' population to measure its conservation status, this note explores how an increase in knowledge about this status would change the public's willingness to donate funds for its conservation. This is done on the basis that the relationship between the level of donations and a species' conservation status satisfies stated general mathematical properties. The level of donations increases, on average, with greater knowledge of a species' conservation status if the species is endangered, but falls if it is secure. Game theory and other theory are used to show how exaggerating the degree of endangerment of a species can be counterproductive for conservation.
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
Recent reports indicate that several descriptors of pain sensations in the McGill Pain Questionnaire (MPQ) are difficult to classify within MPQ sensory subcategories because of incomprehension, underuse, or ambiguity of usage. Adopting the same methodology as recent studies, the present investigation focused on the affective and evaluative subcategories of the MPQ. A decision rule revealed that only 6 of 18 words met criteria for the affective category and 5 of 11 words met criteria for the evaluative category, thus warranting a reduced list of words in these categories. This reduction, however, led to negligible loss of information transmitted. Despite notable changes in classification, the intensity ratings of the retained words correlated very highly with those originally reported for the MPQ. In conclusion, although the intensity ratings of MPQ affective and evaluative descriptors need no revision, selective reduction and reorganization of these descriptors can enhance the efficiency of this approach to pain assessment. © 2001 by the American Pain Society
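A hedged note on the metric: "information transmitted" in category-judgment studies of this kind is usually the Shannon mutual information between presented items x and assigned categories y. The abstract does not spell out its definition, so the following is the standard form, not necessarily the authors' exact notation.

```latex
% Information transmitted as mutual information (standard form, in bits):
T(x;y) = H(x) + H(y) - H(x,y),
\qquad
H(x) = -\sum_{i} p_i \log_2 p_i .
```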
Abstract:
Two stock-market simulation experiments investigated the notion that rumors that invoke stable-cause attributions spawn illusory associations and less regressive predictions and behavior. In Study 1, illusory perceptions of association and stable causation (rumors caused price changes on the day after they appeared) existed despite rigorous conditions of nonassociation (price changes were unrelated to rumors). Predictions (recent price trends will continue) and trading behavior (departures from a strong buy-low-sell-high strategy) were both anti-regressive. In Study 2, stability of attribution was manipulated via a computerized tutorial. Participants taught to view price changes as caused by stable forces predicted less regressively and departed more from buy-low-sell-high trading patterns than those taught to perceive changes as caused by unstable forces. Results inform a social cognitive and decision theoretic understanding of rumor by integrating it with causal attribution, covariation detection, and prediction theory. © 2002 Elsevier Science (USA). All rights reserved.
Abstract:
This article presents the results of research to understand the conditions of interaction between work and three specific information systems (ISs) used in the Brazilian banking sector. We sought to understand how systems are redesigned in work practices, and how work is modified by the insertion of new systems. Data gathering included 46 semi-structured interviews, together with an analysis of system-related documents. We tried to identify what lies behind the practices that modify the ISs and work. The data analysis revealed an operating structure: a combination of different practices ensuring that the interaction between agents and systems will take place. We discovered a structure of reciprocal conversion driven by the increased technical skills of the agents and the humanization of the systems. It is through ongoing adjustment between work and ISs that technology is tailored to the context and people become better prepared to handle technology.
Abstract:
Motion-compensated frame interpolation (MCFI) is one of the most efficient solutions for generating side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion-compensated errors in some frame regions and rather small errors in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and provide the decoder with reliable data for those blocks. For each block, the novel coding-mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a WZ-only coding solution.
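A schematic sketch of the per-block decision described above, assuming invented rate estimators; the abstract does not specify how the paper actually estimates the rates.

```python
# Schematic sketch of per-block coding-mode selection in the spirit of the
# abstract; the rate estimators are invented placeholders, not the paper's.

from typing import Callable, List

def select_modes(
    blocks: List[dict],
    estimate_intra_rate: Callable[[dict], float],
    estimate_wz_rate: Callable[[dict], float],
) -> List[str]:
    """Per block, choose the mode with the lower estimated encoding rate.
    'Critical' blocks (poor side information, hence costly WZ correction)
    end up Intra-coded, giving the decoder reliable data for them."""
    modes = []
    for block in blocks:
        r_intra = estimate_intra_rate(block)
        r_wz = estimate_wz_rate(block)
        modes.append("intra" if r_intra < r_wz else "wz")
    return modes

# Toy usage: blocks carry a pre-computed SI error estimate; a large error
# inflates the WZ rate estimate, pushing the block toward Intra mode.
blocks = [{"si_error": 0.9}, {"si_error": 0.1}]
modes = select_modes(
    blocks,
    estimate_intra_rate=lambda b: 1.0,                   # flat, invented cost
    estimate_wz_rate=lambda b: 0.5 + 2 * b["si_error"],  # grows with SI error
)
print(modes)  # ['intra', 'wz']
```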