47 results for 080403 Data Structures

in Deakin Research Online - Australia


Relevance: 90.00%

Publisher:

Abstract:

This thesis describes research that was conducted into the potential of modeling the activities of the Data Processing Department as an aid to the computer auditor. A methodology is composed to aid in the evaluation of the Internal Controls, particularly the General Controls relative to computer processing. Consisting of three major components, the methodology enables the auditor to model the presumed activities of the Data Processing Department against the actual activities, as recorded on the Operating System Log. The first component of the methodology is the construction and loading of a model of the presumed activities of the Data Processing Department from its verbal, scheduled, and reported activities. The second component is the generation of a description of the actual activities of the Data Processing Department from the information recorded on the Operating System Log. This is effected by reducing the Operating System Log to the format described by the Standard Audit File concept. Finally, the third component in the methodology is the modeling process itself. This is in fact a new analysis technique proposed for use by the EDP auditor. The modeling process is composed of software that compares the model developed and loaded in the first component with the description of actual activity as collated by the second component. Results from this comparison are then reviewed by the auditor, who determines whether they adequately depict the situation, or whether the model's description as specified in the first component needs to be altered and the modeling process re-initiated. In conducting the research, information and data from a production installation were used. Use of the 'real-world' input proved both the feasibility of developing a model of the reported activities of the Data Processing Department, and the adequacy of the operating system log as a source of information on the department's actual activities. Additionally, it enabled the involvement and comment of practising auditors. The research involved analysis of the effect of EDP on the audit process, the structure of the EDP audit process, data reduction, data structures, model formalization, and model processing software. Additionally, the Standard Audit File concept was verified through its use by practising auditors, and expanded by the development of an indexed data structure, which enabled its analysis to be conducted interactively. Results from the trial implementation of the research software and methodology at a production installation confirmed the research hypothesis that the activities of the Data Processing Department could be modelled, and that there are substantial benefits for the EDP auditor in analysing this process. The research in fact provides a new source of information, and develops a new analysis technique, for the EDP auditor. It demonstrates the utilization of computer technology to monitor itself for the audit function, and reasserts auditor independence by providing access to technical detail describing the processing activities of the computer.
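The comparison step at the heart of the third component can be illustrated with a small sketch: match each job recorded on the operating system log against its scheduled window in the presumed-activity model and report exceptions for the auditor to review. This is a minimal Python illustration only; the record fields (job name, scheduled window, observed start time) are hypothetical and not taken from the thesis.

```python
from datetime import datetime

# Hypothetical presumed-activity model: job name -> (earliest, latest) scheduled start.
presumed = {
    "PAYROLL": (datetime(2000, 3, 1, 22, 0), datetime(2000, 3, 1, 23, 0)),
    "BACKUP":  (datetime(2000, 3, 2, 1, 0),  datetime(2000, 3, 2, 2, 0)),
}

# Hypothetical actual activity, reduced from the operating system log
# into a Standard-Audit-File-like record: (job name, observed start).
actual = [
    ("PAYROLL", datetime(2000, 3, 1, 22, 15)),
    ("ADHOCJOB", datetime(2000, 3, 1, 23, 40)),
]

def compare(presumed, actual):
    """Return exceptions (unscheduled or out-of-window activity) for the auditor."""
    exceptions = []
    for job, started in actual:
        window = presumed.get(job)
        if window is None:
            exceptions.append((job, started, "not in presumed model"))
        elif not (window[0] <= started <= window[1]):
            exceptions.append((job, started, "outside scheduled window"))
    return exceptions

for job, started, reason in compare(presumed, actual):
    print(f"{job} at {started}: {reason}")
```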

Relevance: 90.00%

Publisher:

Abstract:

The problem of extracting infrequent patterns from streams and building associations between these patterns is becoming increasingly relevant today, as many events of interest, such as attacks in network data or unusual stories in news data, occur rarely. The complexity of the problem is compounded when a system is required to deal with data from multiple streams. To address these problems, we present a framework that combines time-based association mining with a pyramidal structure that allows a rolling analysis of the stream and maintains a synopsis of the data without requiring increasing memory resources. We apply the algorithms and show the usefulness of the techniques. © 2007 Crown Copyright.
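A pyramidal structure of this kind keeps fine-grained summaries for recent data and progressively coarser summaries for older data, so memory stays bounded as the stream rolls on. The following is a toy Python sketch of that general idea, not the authors' implementation; the per-level capacity and the per-pattern count summaries are illustrative assumptions.

```python
class PyramidalSynopsis:
    """Toy pyramidal time frame: recent data kept at fine granularity,
    older data merged into coarser summaries, bounded items per level."""

    def __init__(self, per_level=4):
        self.per_level = per_level
        self.levels = [[]]          # levels[0] is the finest granularity

    def add(self, summary):
        self.levels[0].append(summary)
        self._roll(0)

    def _roll(self, i):
        if len(self.levels[i]) <= self.per_level:
            return
        if i + 1 == len(self.levels):
            self.levels.append([])
        # Merge the two oldest summaries into one coarser summary one level up.
        a = self.levels[i].pop(0)
        b = self.levels[i].pop(0)
        self.levels[i + 1].append(self._merge(a, b))
        self._roll(i + 1)

    @staticmethod
    def _merge(a, b):
        # Here a summary is simply a count of pattern occurrences per pattern id.
        merged = dict(a)
        for key, count in b.items():
            merged[key] = merged.get(key, 0) + count
        return merged

syn = PyramidalSynopsis(per_level=2)
for t in range(10):
    syn.add({"pattern-x": t % 2})   # one summary per time tick
print(syn.levels)
```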

Relevance: 80.00%

Publisher:

Abstract:

Computer-aided design decision support has proved to be an elusive and intangible project for many researchers as they seek to encapsulate information and knowledge-based systems as useful multifunctional data structures. Definitions of 'knowledge', 'information', 'facts', and 'data' become semantic footballs in the struggle to identify what designers actually do, what level of support would suit them best, and how that support might be offered. The Construction Primer is a database-driven interactive multimedia environment that provides readily updated access to many levels of information, aimed to suit students and practitioners alike. This is hardly a novelty in itself. The innovative interface and metadata structures, however, combine with the willingness of national building control legislators, standards authorities, materials producers, building research organisations, and specification services to make the Construction Primer a versatile design decision support vehicle. It is compatible with most working methodologies while remaining reasonably future-proof. This paper describes the structure of the project and highlights the importance of sound planning and strict adherence to library-standard metadata protocols as a means of avoiding the support becoming too specific or too paradigmatic.

Relevance: 80.00%

Publisher:

Abstract:

Recently, high-speed networks have been utilized by attackers as Distributed Denial of Service (DDoS) attack infrastructure. Services on high-speed networks have also been attacked by successive waves of DDoS attacks. How to sensitively and accurately detect attack traffic, and quickly filter out attack packets, remain the major challenges in DDoS defense. Unfortunately, most current defense approaches cannot efficiently fulfill these tasks. Our approach is to find network anomalies using a neural network and to classify DDoS packets with a Bloom filter-based classifier (BFC). The BFC is a set of space-efficient data structures and algorithms for packet classification. The evaluation results show that the low complexity, high classification speed and accuracy, and low storage requirements of this classifier make it suitable not only for DDoS filtering in high-speed networks, but also for other applications such as string matching for intrusion detection systems and IP lookup for programmable routers.
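The space-efficient structure underlying such a classifier is the Bloom filter: a bit array plus several hash functions that answers set-membership queries with a small false-positive rate and no false negatives. Below is a generic Python sketch of a Bloom filter used to flag packet signatures; the hash construction and parameters are illustrative and not the paper's BFC.

```python
import hashlib

class BloomFilter:
    """Space-efficient set membership with false positives but no false negatives."""

    def __init__(self, num_bits=8192, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key: bytes):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Flag packets whose (hypothetical) source signature was marked as attack traffic.
attack_signatures = BloomFilter()
attack_signatures.add(b"10.0.0.7:443")
print(b"10.0.0.7:443" in attack_signatures)   # True
print(b"10.0.0.8:80" in attack_signatures)    # False (with high probability)
```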

Relevance: 80.00%

Publisher:

Abstract:

The design space exploration formalism has developed data structures and algorithms of sufficient complexity and scope to support conceptual layout, massing, and enclosure configurations. However, design remains a human enterprise. To support the user in designing with the formalism, we have developed an interaction model that addresses the interleaving of user actions with the formal operations of design space exploration. The central feature of our interaction model is the modeling of control based on mixed initiative. Initiative is sometimes taken by the designer and sometimes by the formalism in working on a shared design task. The model comprises three layers: domain, task, and dialogue. In this paper we describe the formulation of the domain layer of our mixed-initiative interaction model for design space exploration. We present the view of the domain as understood in the formalism in terms of the three abstract concepts of state, move, and structure. In order to support mixed initiative, it is necessary to develop a shared view of the domain. The domain layer addresses this problem by mapping the designer's view onto the symbol substrate. First, we present the designer's view of the domain in terms of problems, solutions, choices, and history. Second, we show how this view is interleaved with the symbol substrate through four domain layer constructs: problem state, solution state, choice, and exploration history. The domain layer presents a suitable foundation for integrating the role of the designer with a description formalism. It enables the designer to maintain exploration freedom in terms of formulating and reformulating problems, generating solutions, making choices, and navigating the history of exploration.
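The four domain layer constructs lend themselves to a simple data-structure sketch. The Python below uses the paper's construct names but an invented representation, purely to illustrate how problem states, solution states, choices, and exploration history could relate to one another.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProblemState:
    """The designer's current formulation of the design problem."""
    requirements: List[str]

@dataclass
class SolutionState:
    """A candidate design addressing the current problem state."""
    description: str

@dataclass
class Choice:
    """A decision point: which solution the designer committed to, among which alternatives."""
    problem: ProblemState
    alternatives: List[SolutionState]
    selected: SolutionState

@dataclass
class ExplorationHistory:
    """Navigable record of problems posed, solutions generated, and choices made."""
    choices: List[Choice] = field(default_factory=list)

    def record(self, choice: Choice):
        self.choices.append(choice)

    def revisit(self, index: int) -> ProblemState:
        # Reformulation: return an earlier problem state as a new starting point.
        return self.choices[index].problem

history = ExplorationHistory()
problem = ProblemState(requirements=["two-storey massing", "north-facing entry"])
options = [SolutionState("courtyard scheme"), SolutionState("linear scheme")]
history.record(Choice(problem, options, selected=options[0]))
print(history.revisit(0))
```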

Relevance: 80.00%

Publisher:

Abstract:

Objective
The use of then-test (retrospective pre-test) scores has frequently been proposed as a solution to the potential confounding of change scores by response shift, as it is assumed that then-test and post-test responses are provided from the same perspective. However, this assumption has not been formally tested using robust quantitative methods. The aim of this study was to compare the psychometric performance of then-test/post-test data with that of traditional pre-test/post-test data and to assess whether the resulting data structures support the application of the then-test for evaluations of chronic disease self-management interventions.

Study Design and Setting
Pre-test, post-test, and then-test data were collected from 314 participants of self-management courses using the Health Education Impact Questionnaire (heiQ). The derived change scores (pre-test/post-test; then-test/post-test) were examined for their psychometric performance using tests of measurement invariance.

Results
Few questionnaire items were noninvariant across pre-test/post-test, with four items identified and requiring removal to enable an unbiased comparison of factor means. In contrast, 12 items were identified and required removal in then-test/post-test data to avoid biased change score estimates.

Conclusion
Traditional pre-test/post-test data appear to be robust with little indication of response shift. In contrast, the weaker psychometric performance of then-test/post-test data suggests psychometric flaws that may be the result of implicit theory of change, social desirability, and recall bias.
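The two change-score definitions being compared can be written down directly. The sketch below uses made-up item scores for a single hypothetical respondent; it only illustrates the scoring design, not the measurement-invariance analysis reported in the study.

```python
# Two ways of scoring change on a questionnaire item (as in the study design):
#   traditional change = post-test - pre-test
#   then-test change   = post-test - then-test (retrospective pre-test)
# Illustrative scores for one hypothetical heiQ item, one respondent.
pre_test, post_test, then_test = 3.0, 4.0, 2.5

traditional_change = post_test - pre_test     # 1.0
then_test_change = post_test - then_test      # 1.5: larger if the respondent
                                              # retrospectively lowers the baseline
print(traditional_change, then_test_change)
```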

Relevance: 80.00%

Publisher:

Abstract:

A common characteristic among parallel/distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also the functionality of the application. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, therefore allowing application designers to freely concentrate on either aspect in a modular fashion. Two new programming languages have been developed for specifying each aspect. These languages are for loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication. A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system to generate intermediate C code that is compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process oriented, asynchronous message passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify distributed applications that execute on a cluster of computers. Both languages are used to specify orthogonal components of an application: on the one hand the organisation of processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language is used to specify not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, is used to specify the interface, functionality and data structures of application processes. In addition to these languages, a DAL parser, a DAPL parser, and a compilation system have been designed and developed in this project. This compilation system takes DAL and DAPL programs and generates object modules based on machine code, one module for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system is another new component of this project. The purpose of the DAS system is to create, manage, and terminate many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes several operations such as deletion, addition, replacement, and movement of processes, and also detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell). This suite of languages and tools allowed distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction.
DAL and DAPL programs of several processes may require a few dozen lines to specify as compared to several hundred lines of equivalent C code that is generated by the compilation system. Furthermore, the DAL and DAPL compilation system is successful at generating binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
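The DAL and DAPL syntax is not reproduced here; the following hypothetical Python sketch only illustrates the separation of concerns the two languages embody, with connectivity declared apart from per-process functionality and a trivial stand-in for the run-time system.

```python
import queue

# Hypothetical illustration of the DAL/DAPL split (not actual DAL or DAPL syntax):
# the organisation of processes is declared separately from their functionality.

# "DAL-like" part: which processes exist and who talks to whom.
connectivity = {
    "processes": ["producer", "consumer"],
    "channels": [("producer", "consumer")],   # asynchronous message passing
}

# "DAPL-like" part: the interface and functionality of each process,
# written without any low-level interprocess communication code.
def producer(send):
    for i in range(3):
        send("consumer", f"item-{i}")

def consumer(receive):
    while True:
        msg = receive()
        if msg is None:
            break
        print("consumer received", msg)

# A trivial stand-in for the run-time system; a real one would use the
# connectivity declaration to create processes and wire the channels.
mailbox = queue.Queue()
producer(lambda dest, msg: mailbox.put(msg))
mailbox.put(None)                 # end-of-stream marker for this toy example
consumer(lambda: mailbox.get())
```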

Relevance: 80.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, could be used to support load balancing, parallel execution, reliability, and so on. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy. This philosophy requires the design of a process migration facility and the design of an operating system to be conducted in parallel. Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high performance facility that is easy to create, debug and maintain. Furthermore, the design easily incorporates providing multiple migration strategies. In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple strategy migration manager utilises the services of the process, memory and IPC Managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems which use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified. This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use. Instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
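The delegation idea, where the migration facility asks each resource's owning server to move its part of a process rather than moving anything itself, can be sketched briefly. The server names follow the RHODOS description, but the interface below is hypothetical Python, not RHODOS code.

```python
class ResourceServer:
    """Hypothetical interface a RHODOS-style server might expose for migration."""

    def __init__(self, name):
        self.name = name

    def transfer(self, pid, destination):
        # A real server would package its part of the process state and
        # ship it to its peer server on the destination node.
        print(f"{self.name}: transferring resources of process {pid} to {destination}")
        return True

class MigrationManager:
    """Coordinates migration by delegating to the servers that own each resource."""

    def __init__(self, servers):
        self.servers = servers

    def migrate(self, pid, destination):
        # Ask each owning server to transfer its resource; succeed only if all do.
        return all(server.transfer(pid, destination) for server in self.servers)

servers = [ResourceServer("process manager"),
           ResourceServer("memory manager"),
           ResourceServer("IPC manager")]
print(MigrationManager(servers).migrate(pid=42, destination="nodeB"))
```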

Relevance: 80.00%

Publisher:

Abstract:

The major outcomes of this research project were the development of a set of decentralized algorithms to index, locate, and synchronize replicated information in a networked environment. This study exploits the application-specific design constraints of networked systems to improve performance, instead of relying on data structures and algorithms best suited to centralized systems.

Relevance: 80.00%

Publisher:

Abstract:

Describes the design and implementation of an operating system kernel specifically designed to support real-time applications. It emphasises portability and aims to support state-of-the-art concepts in real-time programming. Discusses architectural aspects of the ARTOS kernel, and introduces new concepts in the areas of interrupt processing, scheduling, mutual exclusion and inter-task communication. Also explains the programming environment of the ARTOS kernel and its task model, defines the real-time task states and system data structures, and discusses the exception handling mechanisms used to detect missed deadlines and take corrective action.
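As a rough illustration of the kind of task data structure and missed-deadline check such a kernel maintains, here is a hypothetical Python sketch; the state names and fields are assumptions, not the actual ARTOS definitions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TaskState(Enum):
    # Illustrative real-time task states (not the ARTOS definitions).
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    SUSPENDED = auto()

@dataclass
class TaskControlBlock:
    name: str
    priority: int
    deadline: float                  # absolute deadline, in seconds
    state: TaskState = TaskState.READY

def check_deadline(task: TaskControlBlock, now: float, on_miss):
    """Exception-handling hook: detect a missed deadline and take corrective action."""
    if now > task.deadline:
        on_miss(task)

check_deadline(
    TaskControlBlock("sensor_poll", priority=1, deadline=10.0),
    now=10.5,
    on_miss=lambda t: print(f"deadline missed by {t.name}: raising exception"),
)
```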

Relevance: 80.00%

Publisher:

Abstract:

The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds of genome sequences, instead of a few, within reasonable time.
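One of the data structures named above, the sorted k-mer list, can be shown in a few lines. This is a sequential toy sketch in Python on a short string, not the distributed BG/P implementation.

```python
def sorted_kmer_list(sequence: str, k: int):
    """Return (k-mer, position) pairs sorted lexicographically by k-mer.
    Sorted k-mer lists let matching k-mers between genomes be found by merging."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    return sorted(kmers)

# Toy example; the real inputs are whole bacterial genomes distributed across nodes.
print(sorted_kmer_list("GATTACA", k=3))
# [('ACA', 4), ('ATT', 1), ('GAT', 0), ('TAC', 3), ('TTA', 2)]
```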

Relevance: 80.00%

Publisher:

Abstract:

In this paper, we discuss the design aspects of a dynamic distributed directory scheme (DDS) to facilitate efficient and transparent access to information files in mobile environments. The proposed directory interface enables users of mobile computers to view a distributed file system on a network of computers as a globally shared file system. In order to counter some of the limitations of wireless communications, we propose improvised invalidation schemes that avoid false sharing and ensure uninterrupted usage under disconnected and low bandwidth conditions.
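A minimal sketch of the invalidation idea: the directory tracks which mobile clients cache a file and notifies only the reachable ones when it changes, deferring invalidations for disconnected clients. The class and method names below are hypothetical, not the paper's DDS protocol.

```python
class DirectoryEntry:
    """Tracks which mobile clients hold a cached copy of a file."""
    def __init__(self):
        self.cached_by = set()
        self.pending = set()        # invalidations queued for disconnected clients

class DirectoryService:
    def __init__(self, connected):
        self.entries = {}
        self.connected = connected  # callable: client -> bool (reachable right now?)

    def register(self, path, client):
        self.entries.setdefault(path, DirectoryEntry()).cached_by.add(client)

    def invalidate(self, path):
        entry = self.entries.get(path)
        if not entry:
            return
        for client in list(entry.cached_by):
            if self.connected(client):
                print(f"invalidate {path} at {client}")
                entry.cached_by.discard(client)
            else:
                # Disconnected or low-bandwidth client: defer until reconnection.
                entry.pending.add(client)

svc = DirectoryService(connected=lambda c: c != "laptop-2")
svc.register("/shared/report.txt", "laptop-1")
svc.register("/shared/report.txt", "laptop-2")
svc.invalidate("/shared/report.txt")
print(svc.entries["/shared/report.txt"].pending)   # {'laptop-2'}
```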

Relevance: 80.00%

Publisher:

Abstract:

Two-dimensional Principal Component Analysis (2DPCA) is a robust method for face recognition. Much recent research shows that 2DPCA is more reliable than the well-known PCA method in recognising human faces. However, in many cases this method tends to overfit the sample data. In this paper, we propose a novel method named random subspace two-dimensional PCA (RS-2DPCA), which combines the 2DPCA method with the random subspace (RS) technique. RS-2DPCA inherits the advantages of both 2DPCA and the RS technique, so it can avoid the overfitting problem and achieve high recognition accuracy. Experimental results on three benchmark face data sets (the ORL database, the Yale face database, and the extended Yale face database B) confirm our hypothesis that RS-2DPCA is superior to 2DPCA itself.
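A compact NumPy sketch of how 2DPCA and the random subspace technique can be combined follows. The ensemble-over-projection-vectors reading, the nearest-neighbour classifier, and all parameter choices are illustrative assumptions, not the published RS-2DPCA algorithm.

```python
import numpy as np

def fit_2dpca(images, d):
    """2DPCA: top-d eigenvectors of the image scatter matrix
    G = (1/N) * sum_i (A_i - mean)^T (A_i - mean)."""
    mean = images.mean(axis=0)
    g = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    _, vecs = np.linalg.eigh(g)          # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]          # n x d projection matrix

def nearest_neighbour(feat, train_feats, labels):
    return labels[int(np.argmin([np.linalg.norm(feat - f) for f in train_feats]))]

def rs_2dpca_predict(test, images, labels, d=8, subspace=4, rounds=5, seed=0):
    """Random-subspace ensemble over the 2DPCA projection vectors, majority vote."""
    rng = np.random.default_rng(seed)
    proj = fit_2dpca(images, d)
    votes = []
    for _ in range(rounds):
        cols = rng.choice(d, size=subspace, replace=False)   # random subspace
        sub = proj[:, cols]
        train_feats = [a @ sub for a in images]
        votes.append(nearest_neighbour(test @ sub, train_feats, labels))
    return max(set(votes), key=votes.count)

# Toy data: ten random 32x32 "face" matrices with two labels.
rng = np.random.default_rng(1)
faces = rng.normal(size=(10, 32, 32))
labels = [i % 2 for i in range(10)]
print(rs_2dpca_predict(faces[0], faces, labels))
```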