820 results for information security management assessment
Abstract:
We examine IT-enabled Business Transformations (ITBT) based on three case studies of successful, multi-year ERP implementation programs. Given the inconsistencies in how both the literature and our cases segment the key periods in ITBTs, we sought to consolidate the common events or critical incidents in such initiatives. We label those key periods as waves and trace the emergence of triggers, and the reactions to them, in the management of business transformations. We show that business transformations unfold in four distinct waves: Wave 1 Concept Development, Wave 2 Blueprint Design, Wave 3 Solution Delivery, and Wave 4 Post-Transformation. These waves are characterized by the occurrence of strategic- and program-level triggers to which organizations respond by invoking different management services. Our interpretive research provides a new conceptualization of ITBTs based on a service-oriented view of such initiatives. This view draws attention to managerial capabilities as a service to transformations, and to how and when these capabilities are required to respond to triggering incidents. We outline propositions and recommendations for business transformation management.
Abstract:
Socio-economic characteristics such as age, gender, educational attainment, employment status, and income contain vital information about how an industry may respond to changing circumstances, and hence are of importance to decision makers. While some socio-economic studies exist, relatively little attention has been given to fishery and aquaculture industries with regard to their socio-economic profiles and the role these profiles play in the development prospects of those industries. In this study, by way of example, we focus on Australia’s Sydney rock oyster (Saccostrea glomerata) (SRO) industry. The aim of this study was to identify the socio-economic profile of the SRO industry and to illustrate the value of such information for an industry management assessment. The SRO industry has experienced a major decrease in production volume since the late 1970s and continues to be affected by prevailing diseases and increasing market competition from Australia’s Pacific oyster (Crassostrea gigas) industry. It is likely that socio-economic aspects have influenced this development within the SRO industry. The socio-economic profile was developed using data from an SRO industry farm survey undertaken in 2012. Findings suggested that this industry is characterised by a mature-aged oyster farmer population and a part-time oyster farming approach. These characteristics may affect the farmers’ ability to drive innovation and growth. The results also suggested that there may be industry entry barriers present in the SRO industry which may prevent younger people taking up oyster farming. Given the results, the study concluded that the current socio-economic profile of the industry has likely contributed to its present economic status quo.
The suffix-free-prefix-free hash function construction and its indifferentiability security analysis
Abstract:
In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic n-bit iterated hash function framework based on an n-bit compression function, called suffix-free-prefix-free (SFPF), that works for arbitrary IVs and does not possess MD strengthening. We formally prove that SFPF is indifferentiable from a random oracle (RO) when the compression function is viewed as a fixed-input-length random oracle (FIL-RO). We show that some hash function constructions proposed in the literature fit in the SFPF framework while others that do not fit in this framework are not indifferentiable from a RO. We also show that the SFPF hash function framework with the provision of MD strengthening generalizes any n-bit iterated hash function based on an n-bit compression function and with an n-bit chaining value that is proven indifferentiable from a RO.
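To make the baseline concrete, the following is a minimal sketch of a plain n-bit iterated hash with MD strengthening — the classical construction the SFPF framework generalizes, not the SFPF construction itself. The compression function is modeled as a FIL-RO, instantiated here with SHA-256 purely for illustration; block and chaining sizes are assumed values.

```python
import hashlib

N = 32  # chaining-value / compression-output size in bytes (the abstract's n bits)

def fil_ro(x: bytes) -> bytes:
    # Model the fixed-input-length compression function as a random oracle
    # (SHA-256 stands in for the idealized FIL-RO in this sketch).
    return hashlib.sha256(x).digest()

def md_strengthen(msg: bytes, block: int = 64) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zero padding, and the
    # 8-byte message length, so no padded message is a suffix of another.
    length = len(msg).to_bytes(8, "big")
    pad_len = (-(len(msg) + 1 + 8)) % block
    return msg + b"\x80" + b"\x00" * pad_len + length

def iterated_hash(msg: bytes, iv: bytes, block: int = 64) -> bytes:
    # Iterate the compression function over message blocks, starting from
    # an arbitrary IV (the case the SFPF analysis is concerned with).
    padded = md_strengthen(msg, block)
    cv = iv
    for i in range(0, len(padded), block):
        cv = fil_ro(cv + padded[i:i + block])
    return cv
```

The abstract's point is that indifferentiability proofs in prior work fix the IV and do not rely on the strengthening step shown above; SFPF instead restructures the padding so the result holds for any IV.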
Abstract:
Process improvement and innovation are risky endeavors, like swimming in unknown waters. In this chapter, I will discuss how process innovation through BPM can benefit from Research-as-a-Service, that is, from the application of research concepts in the processes of BPM projects. A further subject will be how innovations can be converted from confidence-based to evidence-based models through the affordances of digital infrastructures such as large-scale enterprise software or social media. I will introduce the relevant concepts, provide illustrations of digital capabilities that allow for innovation, and share a number of key takeaway lessons for how organizations can innovate on the basis of digital opportunities and the principles of evidence-based BPM: grounding all process decisions in facts rather than fiction.
Abstract:
Management accounting practices are expected to adapt and evolve with changing information requirements. The purpose of this study is to determine the factors that enable management accounting adaptability and effectiveness. This study identifies three factors that drive management accounting adaptability through their support of sense-making and responding. Specifically, it examines how top management team knowledge, team-based structures, and information system flexibility affect management accounting adaptability. The hypotheses are tested using data collected from an online survey of Australian and New Zealand companies. The results support the proposed relationships. A positive association between management accounting adaptability and management accounting effectiveness was also found. This empirical study contributes to the literature on management accounting change by determining a number of drivers that improve the agility of organizational management accounting practices.
Abstract:
XACML has become the de facto standard for enterprise-wide, policy-based access control. It is a structured, extensible language that can express and enforce complex access control policies. There have been several efforts to extend XACML to support specific authorisation models, such as the OASIS RBAC profile to support Role Based Access Control. A number of proposals for authorisation models that support business processes and workflow systems have also appeared in the literature. However, there is no published work describing an extension to allow XACML to be used as a policy language with these models. This paper analyses the specific requirements of a policy language to express and enforce business process authorisation policies. It then introduces BP-XACML, a new profile that extends the RBAC profile for XACML so it can support business process authorisation policies. In particular, BP-XACML supports the notion of tasks, and constraints at the level of a task instance, which are important requirements in enforcing business process authorisation policies.
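The notions of tasks and task-instance-level constraints can be illustrated with a minimal sketch. This is a hypothetical toy model, not BP-XACML syntax or semantics: the roles, task names, and the separation-of-duty rule are all invented for illustration of why per-instance state matters in business process authorisation.

```python
from dataclasses import dataclass, field

# Hypothetical role-to-task permissions (the RBAC-profile part of the decision).
role_permissions = {
    "clerk": {"create_invoice"},
    "manager": {"approve_invoice"},
}

@dataclass
class TaskInstance:
    task: str
    # Per-instance history: which user performed which earlier task in
    # this process instance. This is the state a plain RBAC policy lacks.
    history: dict = field(default_factory=dict)

def authorise(user: str, roles: set, instance: TaskInstance) -> bool:
    # Role check: some role held by the user must grant the requested task.
    if not any(instance.task in role_permissions.get(r, set()) for r in roles):
        return False
    # Instance-level constraint (separation of duty): the approver of an
    # invoice must differ from whoever created it in this same instance.
    if (instance.task == "approve_invoice"
            and instance.history.get("create_invoice") == user):
        return False
    return True
```

A manager who created an invoice in a given process instance is denied approval of that same instance, even though the role alone would permit it — the kind of constraint BP-XACML adds on top of the RBAC profile.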
Abstract:
A PowerPoint presentation on increasing research data management capability within your university, presented from the university library perspective, and focusing on collaborations with university partners to develop and implement university-wide data management services and infrastructure.
Abstract:
QUT Library Research Support has simplified and streamlined the process of research data management planning, storage, discovery and reuse through collaboration, the use of integrated and tailored online tools, and a simplification of the metadata schema. This poster presents the integrated data management services at QUT, including QUT’s Data Management Planning Tool, Research Data Finder, Spatial Data Finder and Software Finder, and information on the simplified Registry Interchange Format – Collections and Services (RIF-CS) Schema. The QUT Data Management Planning (DMP) Tool was built using the Digital Curation Centre’s DMP Online Tool and modified to QUT’s needs and policies. The tool allows researchers and Higher Degree Research students to plan how to handle research data throughout the active phase of their research. The plan is promoted as a ‘live’ document and researchers are encouraged to update it as required. The information entered into the plan can be kept private or shared with supervisors, project members and external examiners. A plan is mandatory when requesting storage space on the QUT Research Data Storage Service. QUT’s Research Data Finder is integrated with QUT’s Academic Profiles and the Data Management Planning Tool to create a seamless data management process. This process aims to encourage the creation of high-quality, rich records which facilitate the discovery and reuse of quality data. The Registry Interchange Format – Collections and Services (RIF-CS) Schema used in the QUT Research Data Finder was simplified to “RIF-CS lite” to reflect mandatory and optional metadata requirements. RIF-CS lite removed schema fields that were underused or surplus to the needs of the users and the system. This has reduced the number of metadata fields required from users and made system integration far simpler: field content is easily shared across services, making the process of collecting metadata as transparent as possible.
Abstract:
The concept of big data has already outgrown traditional data management efforts in almost all industries. It has also succeeded in obtaining promising results that derive value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are very large and complex, owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information subject to the security requirements expected by stakeholders. Compared with other sectors, the health sector is still in the early stages of big data analysis. Key challenges include accommodating the volume, velocity and variety of healthcare data given the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the implementation of Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability measures. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].
According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although a critical requirement in healthcare services and analytics, a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements is difficult to find. Therefore, as a remedy, this research work focuses on a systematic approach containing comprehensive guidelines, with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.
Abstract:
Many fisheries worldwide have adopted vessel monitoring systems (VMS) for compliance purposes. An added benefit of these systems is that they collect a large amount of data on vessel locations at very fine spatial and temporal scales. This data can provide a wealth of information for stock assessment, research, and management. However, since most VMS implementations record vessel location at set time intervals with no regard to vessel activity, some methodology is required to determine which data records correspond to fishing activity. This paper describes a probabilistic approach, based on hidden Markov models (HMMs), to determine vessel activity. A HMM provides a natural framework for the problem and, by definition, models the intrinsic temporal correlation of the data. The paper describes the general approach that was developed and presents an example of this approach applied to the Queensland trawl fishery off the coast of eastern Australia. Finally, a simulation experiment is presented that compares the misallocation rates of the HMM approach with other approaches.
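The idea of classifying VMS records with a hidden Markov model can be sketched as follows. This is a toy two-state example, not the paper's fitted model: the states, Gaussian speed emissions, and all parameter values are illustrative assumptions, and the decoding step uses standard Viterbi in log space.

```python
import math

# Two-state HMM over VMS-derived vessel speeds: "fishing" (slow trawling)
# vs "steaming" (fast transit). All parameters are illustrative, not fitted.
states = ["fishing", "steaming"]
start = {"fishing": 0.5, "steaming": 0.5}
# Sticky transitions capture the temporal correlation of vessel activity.
trans = {"fishing": {"fishing": 0.9, "steaming": 0.1},
         "steaming": {"fishing": 0.1, "steaming": 0.9}}

def emit(state: str, speed: float) -> float:
    # Gaussian emission density for speed (knots); means/sds are assumed.
    mu, sigma = (3.0, 1.0) if state == "fishing" else (9.0, 2.0)
    return math.exp(-((speed - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def viterbi(speeds):
    # Most likely state sequence for an observed speed track (log space).
    v = [{s: math.log(start[s]) + math.log(emit(s, speeds[0])) for s in states}]
    back = []
    for obs in speeds[1:]:
        row, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            row[s] = v[-1][best_prev] + math.log(trans[best_prev][s]) + math.log(emit(s, obs))
            ptr[s] = best_prev
        v.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Because the transition matrix favours staying in the current state, an isolated slow reading in the middle of a fast transit need not be labelled as fishing — this built-in temporal smoothing is the advantage the abstract attributes to the HMM over record-by-record speed thresholds.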
Abstract:
Since 2007, close collaboration between the Learning and Teaching Unit’s Academic Quality and Standards team and the Department of Reporting and Analysis’ Business Objects team has resulted in a generational approach to reporting in which QUT established a place of trust: one where data owners are confident in how data are stored, kept accurate, reported and shared. While the Department of Reporting and Analysis focused on the data warehouse, data security and the publication of reports, the Academic Quality and Standards team focused on applying learning analytics to academic research questions and improving student learning, addressing questions such as:
• Are all students who leave course ABC academically challenged?
• Do the students who leave course XYZ stay within the faculty, stay within the university, or leave?
• When students withdraw from a unit, do they stay enrolled on full or part load, or leave?
• If students enter through a particular pathway, what is their experience in comparison to other pathways?
• With five years of historic reporting, can a two-year predictive forecast provide any insight?
In answering these questions, the Academic Quality and Standards team developed prototype data visualisations through curriculum conversations with academic staff. Where these enquiries were applicable more broadly, the information was brought into the standardised reporting for the benefit of the whole institution. At QUT, an annual report to the executive committees allows all stakeholders to record the performance and outcomes of all courses in a snapshot in time, or to use this live report at any point during the year. This approach to learning analytics won the 2014 ATEM/Campus Review Best Practice Awards in Tertiary Education Management, The Unipromo Award for Excellence in Information Technology Management.
Abstract:
The delivery of products and services for construction-based businesses is increasingly becoming knowledge-driven and information-intensive. The proliferation of building information modelling (BIM) has increased business opportunities as well as introduced new challenges for the architectural, engineering and construction and facilities management (AEC/FM) industry. As such, the effective use, sharing and exchange of building life cycle information and knowledge management in building design, construction, maintenance and operation assumes a position of paramount importance. This paper identifies a subset of construction management (CM) relevant knowledge for different design conditions of building components through a critical, comprehensive review of synthesized literature and other information gathering and knowledge acquisition techniques. It then explores how such domain knowledge can be formalized as ontologies and, subsequently, a query vocabulary in order to equip BIM users with the capacity to query digital models of a building for the retrieval of useful and relevant domain-specific information. The formalized construction knowledge is validated through interviews with domain experts in relation to four case study projects. Additionally, retrospective analyses of several design conditions are used to demonstrate the soundness (realism), completeness, and appeal of the knowledge base and query-based reasoning approach in relation to the state-of-the-art tools, Solibri Model Checker and Navisworks. The knowledge engineering process and the methods applied in this research for information representation and retrieval could provide useful mechanisms to leverage BIM in support of a number of knowledge intensive CM/FM tasks and functions.
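The query-vocabulary idea — letting BIM users retrieve components that satisfy domain-specific design conditions — can be illustrated with a deliberately tiny sketch. The component records, attribute names, and predicate-style conditions below are hypothetical stand-ins for the paper's ontology-backed approach, not its actual vocabulary or the APIs of Solibri Model Checker or Navisworks.

```python
# Hypothetical flattened view of BIM component records; a real system would
# draw these from an ontology-annotated building model.
components = [
    {"id": "W1", "type": "Wall", "material": "concrete", "fire_rating_h": 2.0},
    {"id": "W2", "type": "Wall", "material": "timber",   "fire_rating_h": 0.5},
    {"id": "S1", "type": "Slab", "material": "concrete", "fire_rating_h": 2.0},
]

def query(model, **conditions):
    # Return ids of components satisfying every condition. A plain value is
    # matched by equality; a callable acts as a predicate on the attribute,
    # which is how a design condition like "fire rating >= 1 hour" is posed.
    out = []
    for comp in model:
        ok = all(cond(comp.get(key)) if callable(cond) else comp.get(key) == cond
                 for key, cond in conditions.items())
        if ok:
            out.append(comp["id"])
    return out
```

For example, `query(components, type="Wall", fire_rating_h=lambda v: v >= 1)` retrieves only the concrete wall — the kind of condition-driven retrieval the paper formalizes through ontologies rather than ad hoc dictionaries.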
Abstract:
With the level of digital disruption that is affecting businesses around the globe, you might expect high levels of Governance of Enterprise Information and Technology (GEIT) capability within boards. Boards and their senior executives know technology is important. More than 90% of boards and senior executives currently identify technology as essential to their current businesses and to their organization’s future. But as few as 16% have sufficient GEIT capability. The Global Centre for Digital Business Transformation’s recent research contains strong indicators of the need for change. Despite board awareness of both the likelihood and impact of digital disruption, things digital are still not viewed as a board-level matter in 45% of companies. And it is not just the board: the lack of board attention to technology can be mirrored at senior executive level as well. When asked about their organization’s attitude towards digital disruption, 43% of executives said their business either did not recognise it as a priority or was not responding appropriately. A further 32% were taking a “follower” approach, a potentially risky move as we will explain. Given all the evidence that boards know information and technology (I&T) is vital, and that they understand the inevitability, impact and speed of digital change and disruption, why are so many boards dragging their heels? Ignoring I&T disruption and refusing to build capability at board level is nothing short of negligence. Too many boards risk flying blind without GEIT capability [2].
To help build decision quality and I&T governance capability, this research:
• Confirms a pressing need to build individual competency and cumulative, across-board capability in governing I&T
• Identifies six factors that have rapidly increased the need, risk and urgency
• Finds that boards may risk not meeting their duty-of-care responsibilities when it comes to I&T oversight
• Highlights barriers to building capability
• Details three GEIT competencies that boards and executives can use for evaluation, selection, recruitment and professional development.