816 results for Information Flows
Abstract:
This article presents a novel approach to confidentiality violation detection based on taint marking. Information flows are dynamically tracked between applications and operating system objects such as files, processes, and sockets. A confidentiality policy is defined by labelling sensitive information and specifying which information may leave the local system through network exchanges. Furthermore, per-application profiles can be defined to restrict the sets of information each application may access and/or send over the network. In previous work, we focused on the use of mandatory access control mechanisms for information flow tracking. In the present work, we have extended that information flow model to track network exchanges, and we are able to define a policy attached to network sockets. We show an example application of this extension in the context of a compromised web browser: our implementation detects a confidentiality violation when the browser attempts to leak private information to a remote host over the network.
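The taint-marking idea in this abstract can be sketched in a few lines. The following is a minimal user-space mock, assuming a simplified model in which each object (file, process, socket) carries a set of taint labels and a network policy whitelists the labels allowed to leave the host; all class and object names are illustrative, not the paper's kernel implementation.

```python
# Minimal sketch of taint-based confidentiality checking (illustrative
# names; a hypothetical simplification of in-kernel taint tracking).

class TaintTracker:
    def __init__(self, network_policy):
        # network_policy: set of labels allowed to leave via the network
        self.network_policy = set(network_policy)
        self.labels = {}  # object id -> set of taint labels

    def label(self, obj, tag):
        # Attach a taint label to a system object (e.g. a sensitive file).
        self.labels.setdefault(obj, set()).add(tag)

    def flow(self, src, dst):
        # Information flows src -> dst: dst inherits all of src's labels.
        self.labels.setdefault(dst, set()).update(self.labels.get(src, set()))

    def send(self, obj):
        # A network send is legal only if every label on the object
        # is explicitly authorised by the network policy.
        leaked = self.labels.get(obj, set()) - self.network_policy
        if leaked:
            raise PermissionError(f"confidentiality violation: {sorted(leaked)}")

tracker = TaintTracker(network_policy={"public"})
tracker.label("/home/user/secret.txt", "private")
tracker.flow("/home/user/secret.txt", "browser-process")   # browser reads file
tracker.flow("browser-process", "socket:remote-host")      # browser writes socket
# tracker.send("socket:remote-host")  # would raise PermissionError
```

The compromised-browser scenario maps directly onto the last three calls: the taint propagates from the file through the process to the socket, and the policy check fires at the send.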
Abstract:
Advances in information and communication technologies have brought about an information revolution, leading to fundamental changes in the way information is collected or generated, shared, and distributed. The internet and digital technologies are re-shaping research, innovation and creativity. Economic research has highlighted the importance of information flows and the availability of information for access and re-use. Information is crucial to the efficiency of markets, and enhanced information flows promote creativity, innovation and productivity. There is a rapidly expanding body of literature which supports the economic and social benefits of enabling access to and re-use of public sector information. (Note that a substantial research project associated with QUT’s Intellectual Property: Knowledge, Culture and Economy (IPKCE) Research Program is engaged in a comprehensive study and analysis of the literature on the economics of access to public sector information.)
Abstract:
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications of multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications may legally access, alter, or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, with no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware without modification. The only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at the level of each host.
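The key difference from the single-host setting is that labels travel with the data between hosts. The sketch below mocks this in user space, assuming labels are serialised alongside the payload and each receiving host checks them against its own policy; the paper taints actual network packets in the kernel, so the wire format and function names here are assumptions for illustration only.

```python
# Hedged sketch of peer-to-peer label propagation between hosts:
# the sender attaches its taint labels to the payload, and the
# receiver accepts the message only if all labels are trusted locally.
import json

def send_packet(payload, labels):
    # Carry the sender's taint labels with the payload over the wire.
    return json.dumps({"payload": payload, "labels": sorted(labels)})

def receive_packet(raw, host_policy):
    # host_policy: the set of labels this host is allowed to accept.
    msg = json.loads(raw)
    labels = set(msg["labels"])
    if not labels <= host_policy:
        raise PermissionError(f"untrusted labels: {sorted(labels - host_policy)}")
    return msg["payload"], labels

wire = send_packet("GET /index.html", {"web-group"})
payload, labels = receive_packet(wire, host_policy={"web-group", "public"})
```

Because every host applies the same group policy to incoming labels, a violation is reported at whichever host the tainted data reaches, with no central monitor involved.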
Abstract:
For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising, and that this is an undesirable state of affairs due to Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free-flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in what may be seen by the general public as a less intuitive way. Monopolies themselves are not illegal. The conduct that is unlawful, i.e., abuses of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google’s, is more than just economic, yet competition law does not address this. Furthermore, Google’s collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google’s established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google’s own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them – especially on the data collected about them. If they stop using Google, then Google is nothing.
Lesser-known worlds: bridging the telematic flows with located human experience through game design
Abstract:
This paper presents a new theorization of the role of location-based games (LBGs) as potentially playing specific roles in people’s access to the culture of cities [22]. An LBG is a game that employs mobile technologies as tools for game play in real-world environments. We argue that, as a new genre in the field of mobile entertainment, research in this area tends to be preoccupied with the newness of the technology and its commercial possibilities. However, this overlooks its potential to contribute to cultural production, which lies in the capacity of these experiences to enhance relationships between specific groups and new urban spaces. Given that developers can design LBGs to be played with everyday devices in everyday environments, what new creative opportunities are available to everyday people?
Abstract:
In the scope of this study, ‘performance measurement’ includes the collection and presentation of relevant information that reflects progress in achieving organisational strategic aims and meeting the needs of stakeholders such as merchants, importers, exporters and other clients. Evidence shows that utilising information technology (IT) in customs matters supports import and export practices and ensures that supply chain management flows seamlessly. This paper briefly reviews some practical techniques for measuring performance. Its aim is to recommend a model for measuring the performance of information systems (IS): in this case, the Customs Information System (CIS) used by the Royal Malaysian Customs Department (RMCD). The study evaluates the effectiveness of CIS implementation measures in Malaysia from an IT perspective. A model based on IS theories is used to assess the impact of CIS. Based on its findings, the study recommends measures for evaluating the performance of CIS and its organisational impacts in Malaysia. It is also hoped that the results of the study will assist other Customs administrations in evaluating the performance of their information systems.
Abstract:
Complex flow datasets are often difficult to represent in detail using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel methods for visualising such flows (i.e., complex, time-dependent dynamics). In this paper, we review two popular texture-based techniques and their application to flow datasets sourced from real research projects. The techniques investigated were Line Integral Convolution (LIC) and Image-Based Flow Visualisation (IBFV). We evaluated these techniques and report on their visualisation effectiveness (compared with traditional techniques), their ease of implementation, and their computational overhead.
Abstract:
Detailed representations of complex flow datasets are often difficult to generate using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel techniques for visualising such flows. We review two popular texture-based techniques and their application to flow datasets sourced from active research projects. The techniques investigated were Line Integral Convolution (LIC) [1] and Image-Based Flow Visualisation (IBFV) [18]. We evaluated these and report on their effectiveness from a visualisation perspective. We also report on their ease of implementation and computational overheads.
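The core idea of LIC, shared by both abstracts above, is to smear a dense noise texture along streamlines of the vector field so that coherent streaks reveal the flow direction. The sketch below is a compact, unoptimised illustration assuming a steady 2-D field sampled on a grid; real implementations use longer convolution kernels and far faster advection, and the function names here are illustrative.

```python
# Minimal Line Integral Convolution (LIC) sketch: for each pixel,
# average noise samples gathered by stepping forward and backward
# along the streamline through that pixel (Euler integration).
import random

def lic(vx, vy, noise, steps=10, h=0.5):
    """vx, vy, noise: 2-D lists indexed [row][col]; returns the LIC image."""
    ny, nx = len(noise), len(noise[0])
    out = [[0.0] * nx for _ in range(ny)]
    for r in range(ny):
        for c in range(nx):
            total, n = 0.0, 0
            for sign in (1.0, -1.0):            # forward and backward
                x, y = float(c), float(r)
                for _ in range(steps):
                    i = min(max(int(y), 0), ny - 1)
                    j = min(max(int(x), 0), nx - 1)
                    # Advect the sample position along the local velocity.
                    x = min(max(x + sign * h * vx[i][j], 0.0), nx - 1)
                    y = min(max(y + sign * h * vy[i][j], 0.0), ny - 1)
                    total += noise[min(max(int(y), 0), ny - 1)][min(max(int(x), 0), nx - 1)]
                    n += 1
            out[r][c] = total / n               # convolution = running average
    return out

random.seed(0)
N = 16
noise = [[random.random() for _ in range(N)] for _ in range(N)]
vx = [[1.0] * N for _ in range(N)]   # uniform horizontal flow
vy = [[0.0] * N for _ in range(N)]
img = lic(vx, vy, noise)
```

With the uniform horizontal field used here, the output shows horizontal streaks: averaging along rows correlates neighbouring pixels in the flow direction, which is exactly the visual cue LIC provides for arbitrary fields.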
Abstract:
Existing process mining techniques provide summary views of the overall process performance over a period of time, allowing analysts to identify bottlenecks and associated performance issues. However, these tools are not designed to help analysts understand how bottlenecks form and dissolve over time, nor how the formation and dissolution of bottlenecks – and associated fluctuations in demand and capacity – affect the overall process performance. This paper presents an approach to analyze the evolution of process performance via a notion of Staged Process Flow (SPF). An SPF abstracts a business process as a series of queues corresponding to stages. The paper defines a number of stage characteristics and visualizations that collectively allow process performance evolution to be analyzed from multiple perspectives. The approach has been implemented in the ProM process mining framework. The paper demonstrates the advantages of the SPF approach over state-of-the-art process performance mining tools using two publicly available real-life event logs.
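The queue-per-stage abstraction described above can be illustrated with a small snapshot computation: given an event log, count how many cases sit in each stage at a given instant, which is the raw material for the stage characteristics the paper visualises over time. The event-log shape and function names below are assumptions for illustration, not the ProM plug-in's API.

```python
# Illustrative Staged-Process-Flow snapshot: each case is a sequence of
# (stage, entry_time) events; a case is "in" the last stage it entered
# by time t, so the counts are the queue lengths per stage at t.
from collections import Counter

def cases_in_stage(event_log, t):
    """event_log: {case_id: [(stage, entry_time), ...] sorted by time}."""
    counts = Counter()
    for events in event_log.values():
        current = None
        for stage, entry in events:
            if entry <= t:
                current = stage   # last stage entered by time t
        if current is not None:
            counts[current] += 1
    return counts

log = {
    "c1": [("triage", 0), ("review", 5), ("done", 9)],
    "c2": [("triage", 2), ("review", 8)],
    "c3": [("triage", 7)],
}
snapshot = cases_in_stage(log, t=7)   # queue lengths at time 7
```

Evaluating such snapshots at successive instants yields a time series per stage, from which the formation and dissolution of a bottleneck appears as a queue that grows and later drains.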
Abstract:
Developments in information technology will drive the change in records management; however, it should be the health information managers who drive the information management change. The role of health information management will be challenged to use information technology to broker a range of requests for information from a variety of users, including health consumers. The purposes of this paper are to conceptualise the role of health information management in the context of a technologically driven and managed health care environment, and to demonstrate how this framework has been used to review and develop the undergraduate program in health information management at the Queensland University of Technology.