Abstract:
Real-world business processes are resource-intensive. In work environments human resources usually multitask, both human and non-human resources are typically shared between tasks, and multiple resources are sometimes necessary to undertake a single task. However, current Business Process Management Systems focus on task-resource allocation in terms of individual human resources only and lack support for a full spectrum of resource classes (e.g., human or non-human, application or non-application, individual or teamwork, schedulable or unschedulable) that could contribute to tasks within a business process. In this paper we develop a conceptual data model of resources that takes into account the various resource classes and their interactions. The resulting conceptual resource model is validated using a real-life healthcare scenario.
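As a rough illustration of the kind of resource classes the model distinguishes, the following Python sketch encodes a few of them (human/non-human, schedulable, shareable, multiple resources on a single task). The class and attribute names are hypothetical and simplified; they are not the conceptual model from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ResourceKind(Enum):
    HUMAN = "human"
    NON_HUMAN = "non-human"


@dataclass
class Resource:
    """A resource that can contribute to tasks in a business process."""
    name: str
    kind: ResourceKind
    schedulable: bool = True   # e.g. a bookable operating theatre vs. a consumable
    shareable: bool = False    # may be shared between concurrent tasks


@dataclass
class Task:
    """A task that may require several resources at once (teamwork, equipment)."""
    label: str
    required: List[Resource] = field(default_factory=list)


# Example: a clinical task needing a team plus a shared non-human resource.
surgeon = Resource("surgeon", ResourceKind.HUMAN)
nurse = Resource("scrub nurse", ResourceKind.HUMAN)
theatre = Resource("operating theatre", ResourceKind.NON_HUMAN, shareable=True)
procedure = Task("appendectomy", [surgeon, nurse, theatre])
print([r.name for r in procedure.required])
```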
Abstract:
Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access over classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically, in other cases. Based exclusively on analysis of the source code, it identifies required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, our type system was able to infer the correct type of unions in different scopes, without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.
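As a toy illustration only (not the paper's type inference system), the following Python sketch shows the flavour of the dynamic half of the approach: given C source that accesses a union member, emit a runtime assertion immediately before the access. The tagged-union naming convention, the regular expression and the generated assertion text are all assumptions made for this example.

```python
import re

# Toy illustration only: assumes a tagged-union convention in the analysed C
# code, where a struct carries a 'tag' field alongside union members accessed
# as "p->u.as_int", "p->u.as_ptr", etc. The real framework infers union types
# statically; this sketch only shows the idea of emitting a runtime assertion
# before each union member access that cannot be resolved.
ACCESS = re.compile(r"(\w+)->u\.as_(\w+)")


def instrument(line):
    """Return generated assertion lines followed by the original source line."""
    emitted = []
    for var, member in ACCESS.findall(line):
        emitted.append(f"assert({var}->tag == TAG_{member.upper()});  /* generated */")
    emitted.append(line)
    return emitted


for out in instrument("int x = p->u.as_int + 1;"):
    print(out)
```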
Abstract:
Before making a security or privacy decision, Internet users should evaluate several security indicators in their browser, such as the use of HTTPS (indicated via the lock icon), the domain name of the site, and information from extended validation certificates. However, studies have shown that human subjects infrequently employ these indicators, relying on other indicators that can be spoofed and convey no cryptographic assurances. We identify four simple security indicators that accurately represent security properties of the connection and then examine 125 popular websites to determine if the sites' designs result in correctly displayed security indicators during login. In the vast majority of cases, at least some security indicators are absent or suboptimal. This suggests users are becoming habituated to ignoring recommended security indicators.
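A minimal sketch of checking two such indicators programmatically for a given login URL, assuming only Python's standard library: it tests whether the URL uses HTTPS and whether a certificate chain validates for the host, and does not assess extended validation status or page content.

```python
import socket
import ssl
from urllib.parse import urlparse


def login_page_indicators(url: str) -> dict:
    """Crude check of two connection-security indicators for a login URL."""
    parsed = urlparse(url)
    result = {"https": parsed.scheme == "https", "cert_valid": False}
    if result["https"]:
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((parsed.hostname, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
                    result["cert_valid"] = tls.getpeercert() is not None
        except (ssl.SSLError, OSError):
            pass  # handshake or name mismatch failure: certificate not validated
    return result


print(login_page_indicators("https://example.com/login"))
```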
Abstract:
Several studies have developed metrics for software quality attributes of object-oriented designs such as reusability and functionality. However, metrics which measure the quality attribute of information security have received little attention. Moreover, existing security metrics measure either the system from a high level (i.e. the whole system’s level) or from a low level (i.e. the program code’s level). These approaches make it hard and expensive to discover and fix vulnerabilities caused by software design errors. In this work, we focus on the design of an object-oriented application and define a number of information security metrics derivable from a program’s design artifacts. These metrics allow software designers to discover and fix security vulnerabilities at an early stage, and help compare the potential security of various alternative designs. In particular, we present security metrics based on composition, coupling, extensibility, inheritance, and the design size of a given object-oriented, multi-class program from the point of view of potential information flow.
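For flavour only, the sketch below computes a crude coupling count and design size from a toy class-level design; the design representation, the "risky link" ratio and all names are placeholders invented for the example, not the metrics defined in the paper.

```python
# Simplified illustration: given a class-level design (which classes reference
# which, and which handle classified data), compute a crude design size,
# coupling count, and the fraction of links where a non-classified class
# reaches a classified one. These formulas are placeholders, not the paper's
# information-flow-based metric definitions.
design = {
    "PatientRecord": {"refers_to": ["Database", "AuditLog"], "classified": True},
    "Database":      {"refers_to": ["AuditLog"],             "classified": True},
    "AuditLog":      {"refers_to": [],                       "classified": False},
    "ReportView":    {"refers_to": ["PatientRecord"],        "classified": False},
}

design_size = len(design)
coupling = sum(len(c["refers_to"]) for c in design.values())
risky_links = sum(
    1
    for name, c in design.items()
    if not c["classified"]
    for target in c["refers_to"]
    if design[target]["classified"]
)
print(design_size, coupling, risky_links / max(coupling, 1))
```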
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should satisfy. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly for developing, implementing and testing network security related functionality such as intrusion detection and network forensics. The project also proposes and develops an architecture for emulating some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter. Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components is limited, they provide a starting point for further research and development, and the design is extensible to enable the design and implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed against these criteria. The results of the experiments are collated and presented, and conclusions are drawn from them to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
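To give a concrete sense of the Modbus traffic such an emulated smart meter exchanges, the sketch below builds a standard Modbus/TCP "read holding registers" request frame in Python. The register address, count and unit identifier are arbitrary example values, and the project's own emulator code is not reproduced here.

```python
import struct


def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request:
    MBAP header (transaction id, protocol id 0, remaining byte count, unit id)
    followed by the PDU (function code, starting address, register count)."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu


request = modbus_read_holding_registers(transaction_id=1, unit_id=17,
                                        start_addr=0x0000, count=2)
print(request.hex())  # bytes that could be sent to the emulated meter on TCP port 502
```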
Abstract:
This study conceptualises, operationalises and validates the concept of Knowledge Management Competence as a four-phase multidimensional formative index. Employing survey data from 310 respondents representing 27 organizations using the SAP Enterprise System Financial module, the study demonstrates a large, significant, positive relationship between Knowledge Management Competence and Enterprise Systems Success (ES-success, as conceived by Gable, Sedera and Chan (2008)), suggesting important implications for practice. Strong evidence of the validity of Knowledge Management Competence as conceived and operationalised also suggests potential for future research evaluating its relationships with possible antecedents and consequences.
Abstract:
This paper investigates the current turbulent state of copyright in the digital age, and explores the viability of alternative compensation systems that aim to achieve the same goals with fewer negative consequences for consumers and artists. To sustain existing business models associated with creative content, increased recourse to DRM (Digital Rights Management) technologies, designed to restrict access to and usage of digital content, is well underway. Considerable technical challenges associated with DRM systems necessitate increasingly aggressive recourse to the law. A number of controversial aspects of copyright enforcement are discussed and contrasted with those inherent in levy based compensation systems. Lateral exploration of the copyright dilemma may help prevent some undesirable societal impacts, but with powerful coalitions of creative, consumer electronics and information technology industries having enormous vested interest in current models, alternative schemes are frequently treated dismissively. This paper focuses on consideration of alternative models that better suit the digital era whilst achieving a more even balance in the copyright bargain.
Abstract:
Agile ridesharing aims to utilise the capability of social networks and mobile phones to enable people to share vehicles and travel in real time. However, the application of social networking technologies in local communities to address issues of personal transport faces significant design challenges. In this paper we describe an iterative design-based approach to exploring this problem and discuss findings from the use of an early prototype. The findings focus upon interaction, privacy and profiling. Our early results suggest that explicitly entering information such as ride data and personal profile data into formal fields for explicit computation of matches, as is done in many systems, may not be the best strategy. It might be preferable to support informal communication and negotiation with text search techniques.
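A toy sketch of the "text search" alternative: matching free-text ride offers against a free-text request by shared keywords, instead of requiring structured origin/destination/time fields. The scoring rule is a placeholder for illustration, not the prototype's matching logic.

```python
# Rank free-text ride offers by keyword overlap with a free-text request.
offers = [
    "Driving from Kelvin Grove to the city around 8am, happy to take one",
    "Heading to the airport Saturday morning, two spare seats",
]
request = "anyone going to the city from kelvin grove tomorrow morning?"


def keywords(text: str) -> set:
    """Very rough keyword extraction: lowercase words longer than three letters."""
    return {w.strip(",.?").lower() for w in text.split() if len(w) > 3}


ranked = sorted(offers,
                key=lambda o: len(keywords(o) & keywords(request)),
                reverse=True)
print(ranked[0])  # the offer sharing the most keywords with the request
```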
Abstract:
In today's technological age, fraud has become more complicated and increasingly difficult to detect, especially when it is collusive in nature. Various fraud surveys show that the median loss from collusive fraud is much greater than that from fraud perpetrated by a single person. Despite its prevalence and potentially devastating effects, collusion is commonly overlooked as an organizational risk, and internal auditors often fail to proactively consider collusion in their fraud assessment and detection efforts. In this paper, we consider fraud scenarios with collusion. We present six potentially collusive fraudulent behaviors and show their detection process in an ERP system. We have enhanced our fraud detection framework to aggregate different sources of logs in order to detect communication, and have further enhanced it to render it system-agnostic, thus achieving portability and making it generally applicable to all ERP systems.
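The sketch below gives a highly simplified flavour of the log-aggregation idea: combine an ERP transaction log with a communication log and flag user pairs who both act on the same document and communicate with each other. The field names, example records and flagging rule are invented for illustration and are not the framework's actual detection process.

```python
from collections import Counter

# Toy ERP transaction log: who did what to which document.
transactions = [
    {"doc": "PO-881", "user": "alice", "action": "create_purchase_order"},
    {"doc": "PO-881", "user": "bob",   "action": "approve_purchase_order"},
    {"doc": "PO-882", "user": "carol", "action": "create_purchase_order"},
    {"doc": "PO-882", "user": "dave",  "action": "approve_purchase_order"},
]
# Toy communication log: pairs of users who exchanged messages.
messages = [("alice", "bob"), ("alice", "bob"), ("erin", "frank")]

contacts = Counter(frozenset(pair) for pair in messages)

by_doc = {}
for t in transactions:
    by_doc.setdefault(t["doc"], []).append(t["user"])

# Flag documents handled by exactly two distinct users who also communicate.
for doc, users in by_doc.items():
    pair = frozenset(users)
    if len(pair) == 2 and contacts[pair] > 0:
        print(f"possible collusion on {doc}: {sorted(pair)}")
```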
Abstract:
QUT Library and the High Performance Computing and Research Support (HPC) Team have been collaborating on developing and delivering a range of research support services, including those designed to assist researchers to manage their data. QUT’s Management of Research Data policy has been available since 2010 and is complemented by the Data Management Guidelines and Checklist. QUT has partnered with the Australian National Data Service (ANDS) on a number of projects including Seeding the Commons, Metadata Hub (with Griffith University) and the Data Capture program. The HPC Team has also been developing the QUT Research Data Repository, based on the Arcitecta Mediaflux system, and has run several pilots with faculties. Library and HPC staff have been trained in the principles of research data management and are providing a range of research data management seminars and workshops for researchers and HDR students.
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone in the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: Wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly increases face recognition performance for both eigen-face and fisher-face approaches.
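A brief sketch of the general idea of frequency-domain partitioning before an eigen-style projection, assuming NumPy and SciPy: keep a low-frequency block of 2-D DCT coefficients per image, then apply PCA to the coefficient vectors. The block size, image sizes and random stand-in data are arbitrary; the paper's exact pipeline is not reproduced.

```python
import numpy as np
from scipy.fft import dctn  # 2-D discrete cosine transform (SciPy >= 1.4)

rng = np.random.default_rng(0)
faces = rng.random((50, 64, 64))           # stand-in for 50 grayscale face images

# Frequency-domain partitioning: keep an 8x8 block of low-frequency DCT
# coefficients per image as the feature vector.
block = 8
features = np.array([dctn(img, norm="ortho")[:block, :block].ravel()
                     for img in faces])    # shape (50, 64)

# PCA via SVD on the mean-centred coefficient vectors (eigen-style projection).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenspace = vt[:10]                       # top 10 principal directions
projected = centered @ eigenspace.T        # coefficients used for classification
print(projected.shape)                     # (50, 10)
```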
Abstract:
This paper proposes a semi-supervised intelligent visual surveillance system to exploit the information from multi-camera networks for the monitoring of people and vehicles. Modules are proposed to perform critical surveillance tasks including: the management and calibration of cameras within a multi-camera network; tracking of objects across multiple views; recognition of people utilising biometrics and in particular soft-biometrics; the monitoring of crowds; and activity recognition. Recent advances in these computer vision modules and capability gaps in surveillance technology are also highlighted.
Abstract:
Information Systems researchers have employed a diversity of sometimes inconsistent measures of IS success, seldom explicating the rationale, thereby complicating the choice for future researchers. In response to these and other issues, Gable, Sedera and Chan introduced the IS-Impact measurement model. This model represents “the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. Although the IS-Impact model was rigorously validated in previous research, there is a need to further generalise and validate it in different contexts. This paper reports the findings of an IS-Impact model revalidation study at four state governments in Malaysia, with 232 users of a financial system that is currently in use at eleven state governments in Malaysia. Data was analysed following the guidelines for formative measurement validation using SmartPLS. The PLS results supported the IS-Impact dimensions and measures, thus confirming the validity of the IS-Impact model in Malaysia. This indicates that the IS-Impact model is robust and can be used across different contexts.
Abstract:
Sustainable practices are more than ever on the radar of organizations, triggered by a growing demand from the wider population for approaches and practices that can be considered "green" or "sustainable". Our specific intent with this call for action is to delve deeper into the role of business processes, and specifically the contributions that the management of these processes can make in leveraging the transformative power of information systems (IS) to create environmentally sustainable organizations. Our key premise is that business and information technology (IT) managers need to engage in a process-focused discussion to enable a common, comprehensive understanding of processes, and of the process-centered opportunities for making these processes, and ultimately the organization as a process-centric entity, "green". Based on a business process lifecycle model, we propose possible avenues for future research.
Abstract:
This paper presents a modified approach to evaluating access control policy similarity and dissimilarity, based on the proposal by Lin et al. (2007). Lin et al.'s policy similarity approach is intended as a filter stage which identifies similar XACML policies that can be analysed further using more computationally demanding techniques based on model checking or logical reasoning. This paper improves Lin et al.'s similarity computation and also proposes a mechanism to calculate a dissimilarity score by identifying related policies that are likely to produce different access decisions. Departing from the original algorithm, the modifications take into account the policy obligation, the rule or policy combining algorithm, and the operators between attribute names and values. The algorithms are useful in activities involving parties from multiple security domains, such as secured collaboration or secured task distribution, and they allow various comparison options for evaluating policies while retaining control over the restriction level via a number of thresholds and weight factors.
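As a loose illustration of weighted similarity and dissimilarity scoring (not Lin et al.'s algorithm nor the modified algorithm described here), the sketch below represents each policy as a set of (attribute, operator, value) elements plus an effect, and combines rule overlap with effect agreement using arbitrary weights.

```python
# Toy policy representation: a set of (attribute, operator, value) rule
# elements and an overall effect. Weights and scoring rules are arbitrary
# placeholders chosen for illustration only.
policy_a = {"effect": "Permit",
            "rules": {("role", "==", "doctor"), ("resource", "==", "record")}}
policy_b = {"effect": "Deny",
            "rules": {("role", "==", "doctor"), ("resource", "==", "billing")}}

W_RULES, W_EFFECT = 0.8, 0.2


def similarity(p, q):
    """Weighted combination of rule-set overlap and effect agreement."""
    shared = len(p["rules"] & q["rules"])
    union = len(p["rules"] | q["rules"]) or 1
    effect_match = 1.0 if p["effect"] == q["effect"] else 0.0
    return W_RULES * shared / union + W_EFFECT * effect_match


def dissimilarity(p, q):
    """Policies over overlapping attributes but with opposite effects are the
    ones most likely to yield different access decisions."""
    shared = len(p["rules"] & q["rules"])
    union = len(p["rules"] | q["rules"]) or 1
    return (shared / union) if p["effect"] != q["effect"] else 0.0


print(similarity(policy_a, policy_b), dissimilarity(policy_a, policy_b))
```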