952 results for financial data processing


Relevance: 80.00%
Publisher:
Abstract:

In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
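
To make the scaling-law comparison concrete, the sketch below estimates H(q) with the standard structure-function method, where S_q(tau) = E[|X(t+tau) - X(t)|^q] scales as tau^(qH(q)). It is a minimal illustration run on a toy random walk, not the paper's estimator; the function name and lag range are assumptions.

```python
import numpy as np

def generalized_hurst(x, q, taus=range(1, 20)):
    """Estimate H(q) for a 1-D series x (e.g. log prices) via the
    structure-function method: regress log S_q(tau) on log tau."""
    log_s, log_t = [], []
    for tau in taus:
        diffs = np.abs(x[tau:] - x[:-tau])        # increments at lag tau
        log_s.append(np.log(np.mean(diffs ** q))) # log structure function
        log_t.append(np.log(tau))
    slope = np.polyfit(log_t, log_s, 1)[0]        # log-log regression slope
    return slope / q                              # since slope = q * H(q)

rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(10_000))   # toy random walk
for q in (1, 2):
    # A pure random walk should give H(q) close to 0.5 for both q.
    print(f"H({q}) = {generalized_hurst(prices, q):.3f}")
```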

Relevance: 80.00%
Publisher:
Abstract:

A common characteristic among parallel/distributed programming languages is that a single language is used to specify not only the overall organisation of the distributed application, but also its functionality. That is, the connectivity and functionality of processes are specified within a single program. Connectivity and functionality are, however, independent aspects of a distributed application. This thesis shows that these two aspects can be specified separately, allowing application designers to concentrate on either aspect in a modular fashion. Two new programming languages have been developed, one for each aspect. These languages target loosely coupled distributed applications based on message passing, and have been designed to simplify distributed programming by completely removing all low-level interprocess communication.

A suite of languages and tools has been designed and developed. It includes the two new languages, parsers, a compilation system that generates intermediate C code compiled to binary object modules, a run-time system to create, manage and terminate several distributed applications, and a shell to communicate with the run-time system. DAL (Distributed Application Language) and DAPL (Distributed Application Process Language) are the new programming languages for the specification and development of process-oriented, asynchronous message passing, distributed applications. These two languages have been designed and developed as part of this doctorate in order to specify distributed applications that execute on a cluster of computers. The languages specify orthogonal components of an application: on the one hand the organisation of the processes that constitute an application, and on the other the interface and functionality of each process. Consequently, these components can be created in a modular fashion, individually and concurrently. The DAL language specifies not only the connectivity of all processes within an application, but also the cluster of computers on which the application executes. Furthermore, sub-clusters can be specified for individual processes of an application to constrain a process to a particular group of computers. The second language, DAPL, specifies the interface, functionality and data structures of application processes.

In addition to these languages, a DAL parser, a DAPL parser and a compilation system have been designed and developed in this project. The compilation system takes DAL and DAPL programs and generates machine-code object modules, one for each application process. These object modules are used by the Distributed Application System (DAS) to instantiate and manage distributed applications. The DAS system, another new component of this project, creates, manages and terminates many distributed applications of similar and different configurations. The creation procedure incorporates the automatic allocation of processes to remote machines. Application management includes operations such as deletion, addition, replacement and movement of processes, as well as detection of and reaction to faults such as a processor crash. A DAS operator communicates with the DAS system via a textual shell called DASH (Distributed Application SHell).

This suite of languages and tools allows distributed applications of varying connectivity and functionality to be specified quickly and simply at a high level of abstraction. DAL and DAPL programs of several processes may require only a few dozen lines, compared to the several hundred lines of equivalent C code generated by the compilation system. Furthermore, the DAL and DAPL compilation system successfully generates binary object modules, and the DAS system succeeds in instantiating and managing several distributed applications on a cluster.
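
The abstract does not give DAL or DAPL syntax, so the following Python sketch only illustrates the design idea they formalise: connectivity declared in one place, per-process functionality in another, wired together by a small run-time. All names here (TOPOLOGY, BEHAVIOUR, send/recv) are hypothetical.

```python
import queue
import threading

# "DAL-like" part: connectivity only, i.e. which processes may talk to which.
TOPOLOGY = {"producer": ["worker"], "worker": ["sink"], "sink": []}

# "DAPL-like" part: functionality only, one function per process, written
# against send/recv with no low-level IPC code at all.
def producer(send, recv):
    for i in range(3):
        send("worker", i)
    send("worker", None)          # end-of-stream marker

def worker(send, recv):
    while (msg := recv()) is not None:
        send("sink", msg * msg)   # square each value
    send("sink", None)

def sink(send, recv):
    while (msg := recv()) is not None:
        print("sink got", msg)

BEHAVIOUR = {"producer": producer, "worker": worker, "sink": sink}

# Tiny "run-time system" (playing the DAS role): one mailbox per process,
# send/recv wired up from the topology, one thread per process.
mailboxes = {name: queue.Queue() for name in TOPOLOGY}
threads = []
for name, fn in BEHAVIOUR.items():
    send = lambda dst, msg: mailboxes[dst].put(msg)
    recv = mailboxes[name].get
    threads.append(threading.Thread(target=fn, args=(send, recv)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```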

Relevance: 80.00%
Publisher:
Abstract:

This study presents a theoretical basis for and outlines the method of finding the Lie point symmetries of systems of partial differential equations. It seeks to determine which of five computer algebra packages is best at finding these symmetries. The chosen packages are LIEPDE and DIMSYM for REDUCE, LIE and BIGLIE for MUMATH, DESOLV for MAPLE, and MATHLIE for MATHEMATICA. This work concludes that while all of the computer packages are useful, DESOLV appears to be the most successful system at determining the complete set of Lie symmetries. Also, the study describes REDUCEVAR, a new package for MAPLE, that reduces the number of independent variables in systems of partial differential equations, using particular Lie point symmetries. It outlines the results of some testing carried out on this package. It concludes that REDUCEVAR is a very useful tool in performing the reduction of independent variables according to Lie's theory and is highly accurate in identifying cases where the symmetries are not suitable for finding S/G equations.
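
For context, the determining equations that packages such as LIEPDE, DIMSYM, DESOLV and MATHLIE construct come from the standard invariance condition; a minimal statement in conventional notation follows.

```latex
% For a PDE system \Delta(x, u, u_{(1)}, \dots, u_{(k)}) = 0, a Lie point
% symmetry generator and its invariance condition are
\[
X = \xi^{i}(x,u)\,\frac{\partial}{\partial x^{i}}
  + \eta^{\alpha}(x,u)\,\frac{\partial}{\partial u^{\alpha}},
\qquad
\left. \operatorname{pr}^{(k)} X\,(\Delta) \right|_{\Delta = 0} = 0,
\]
% where pr^{(k)}X is the k-th prolongation of X. Expanding the condition and
% equating coefficients of the derivatives of u yields the linear,
% overdetermined "determining equations" for \xi and \eta that these
% computer algebra packages set up and attempt to solve.
```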

Relevance: 80.00%
Publisher:
Abstract:

Fifty years ago there were no stored-program electronic computers in the world. Even thirty years ago a computer was something that few organisations could afford, and few people could use. Suddenly, in the 1960s and 70s, everything changed and computers began to become accessible. Today the need for education in Business Computing is generally acknowledged, with each of Victoria's seven universities offering courses of this type. What happened to promote the extremely rapid adoption of such courses is the subject of this thesis. I will argue that although Computer Science began in Australia's universities of the 1950s, courses in Business Computing commenced in the 1960s due to the requirement of the Commonwealth Government for computing professionals to fulfil its growing administrative needs. The Commonwealth-developed Programmer-in-Training courses were later devolved to the new Colleges of Advanced Education. The movement of several key figures from the Commonwealth Public Service to take up positions in Victorian CAEs was significant, and the courses they subsequently developed became the model for many future courses in Business Computing. The reluctance of the universities to become involved in what they saw as little more than vocational training opened the way for the CAEs to develop this curriculum area.

Relevance: 80.00%
Publisher:
Abstract:

Information technology research over the past two decades suggests that the installation and use of computers fundamentally affects the structure and function of organisations and, in particular, the workers in these organisations. Following the release of the IBM Personal Computer in 1982, microcomputers have become an integral part of most work environments. The accounting services industry, in particular, has felt the impact of this 'microcomputer revolution'. In Big Six accounting firms, there is almost one microcomputer for each professional accountant employed. Notwithstanding this, little research has been done on the effect of microcomputers on the work outcomes of professional accountants working in these firms. This study addresses this issue. It assesses, in an organisational setting, how accountants' perceptions of ease of use and usefulness of microcomputers act on their computer anxieties, microcomputer attitudes and use to affect their job satisfaction and job performance. The research also examines how different types of human-computer interfaces affect the relationships between accountants' beliefs about microcomputer utility and ease of use, computer anxiety, microcomputer attitudes and microcomputer use.

To attain this research objective, a conceptual model was first developed. The model indicates that work outcomes (job satisfaction and job performance) of professional accountants using microcomputers are influenced by users' perceptions of ease of use and usefulness of microcomputers via paths through (a) the level of computer anxiety experienced by users, (b) the general attitude of users toward using microcomputers, and (c) the extent to which microcomputers are used by individuals. Empirically testable propositions were derived from the model to test the postulated relationships between these constructs. The study also tested whether or not users of different human-computer interfaces reacted differently to the perceptions and anxieties they hold about microcomputers and their use in the workplace. It was argued that users of graphical interfaces, because of the characteristics of those interfaces, react differently to their perceptions and anxieties about microcomputers compared with users of command-line (or text-based) interfaces.

A passive-observational study in a field setting was used to test the model and the research propositions. Data was collected from 164 professional accountants working in a Big Six accounting firm in a metropolitan city in Australia. Structural equation modelling techniques were used to test the hypothesised causal relationships between the components comprising the general research model. Path analysis and ordinary least squares regression were used to estimate the parameters of the model and analyse the data obtained. Multisample analysis (or stacked model analysis) using EQS was used to test the fit of the model to the data of the different human-computer interface groups and to estimate the parameters for the paths in those groups.

The results show that the research model is a good description of the data. The job satisfaction of professional accountants is directly affected by their attitude toward using microcomputers and by microcomputer use itself. However, job performance appears to be directly affected only by microcomputer attitudes; microcomputer use does not directly affect job performance. Along with perceived ease of use and perceived usefulness, computer anxiety is shown to be an important determinant of attitudes toward using microcomputers: higher levels of computer anxiety negatively affect attitudes toward using microcomputers. Conversely, higher levels of perceived ease of use and perceived usefulness heighten individuals' positive attitudes toward using microcomputers. Perceived ease of use and perceived usefulness also indirectly affect microcomputer attitudes through their effect on computer anxiety; higher levels of perceived ease of use and perceived usefulness result in lower levels of computer anxiety. A surprising result is that while perceived ease of use directly affects the level of microcomputer usage, perceived usefulness and attitude toward using microcomputers do not.

The results of the multisample analysis confirm that the research model fits the stacked model, and that the stacked model is a significantly better fit if specific parameters are allowed to vary between the two human-computer interface user groups. In general, these results confirm that an interaction exists between the type of human-computer interface (the grouping variable) and the other variables in the model. The results show a clear difference between the two groups in the way in which perceived ease of use and perceived usefulness affect microcomputer attitude. For users of command-line interfaces, these variables appear to affect microcomputer attitude via an intervening variable, computer anxiety, whereas in the graphical interface user group the effect occurs directly. Relatedly, perceived ease of use and perceived usefulness have a significant direct effect on computer anxiety in command-line interface users, but no effect at all for graphical interface users. Of the two exogenous variables, only perceived ease of use, and only for command-line interface users, has a direct significant effect on the extent of use of microcomputers.

In summary, the research has contributed to the development of a theory of individual adjustment to information technology in the workplace. It identifies certain perceptions, anxieties and attitudes about microcomputers and shows how they may affect work outcomes such as job satisfaction and job performance. It also shows that microcomputer interface types have a differential effect on some of the hypothesised relationships represented in the general model. Future replication studies could sample a broader cross-section of the microcomputer user community. Finally, the results should help Big Six accounting firms to maximise the benefits of microcomputer use by making them aware of how working with microcomputers affects job satisfaction and job performance.
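
As an illustration of the path-analysis method described above, the sketch below fits each endogenous construct on its hypothesised antecedents with OLS, using statsmodels on synthetic data. The variable names and the data-generating signs are assumptions loosely mirroring the reported findings; this is not a re-estimation of the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 164  # the study's sample size
df = pd.DataFrame({
    "peou": rng.normal(size=n),  # perceived ease of use
    "pu": rng.normal(size=n),    # perceived usefulness
})
# Toy endogenous variables, generated with signs mimicking the findings:
# ease of use and usefulness lower anxiety and raise attitude, etc.
df["anxiety"] = -0.4 * df.peou - 0.3 * df.pu + rng.normal(size=n)
df["attitude"] = 0.3 * df.peou + 0.4 * df.pu - 0.3 * df.anxiety + rng.normal(size=n)
df["use"] = 0.4 * df.peou + rng.normal(size=n)
df["satisfaction"] = 0.5 * df.attitude + 0.3 * df.use + rng.normal(size=n)

# One OLS regression per structural equation (classic path analysis).
paths = [
    "anxiety ~ peou + pu",
    "attitude ~ peou + pu + anxiety",
    "use ~ peou + pu + attitude",
    "satisfaction ~ attitude + use",
]
for formula in paths:
    fit = smf.ols(formula, data=df).fit()
    print(formula, "->", fit.params.round(2).to_dict())
```

A multisample analysis would repeat these fits per interface group and test whether constraining the path coefficients to be equal across groups significantly worsens fit.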

Relevance: 80.00%
Publisher:
Abstract:

The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, and more. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel.

Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, the process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain. Furthermore, the design easily incorporates multiple migration strategies.

In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System), a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the process, memory and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems which use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified.

This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use; only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
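
The per-server delegation design described above can be sketched as follows. This is an illustrative Python skeleton of the idea (each resource moved by the server that owns it, coordinated by a migration manager that never touches the resources itself); class and method names are hypothetical, not RHODOS APIs.

```python
class ResourceServer:
    """Each server owns one kind of process resource (cf. RHODOS managers)."""
    def __init__(self):
        self.table = {}                      # pid -> resource state

    def export_state(self, pid):
        return self.table.pop(pid, None)     # detach the state on the source

    def import_state(self, pid, state):
        self.table[pid] = state              # recreate the state on the target

class ProcessManager(ResourceServer): pass   # process state
class MemoryManager(ResourceServer): pass    # address space
class IPCManager(ResourceServer): pass       # communication state

class MigrationManager:
    """Coordinates a migration by delegating to the resource servers."""
    def __init__(self, servers):
        self.servers = servers

    def migrate(self, pid):
        # Freezing the process and the source/target node distinction are
        # elided here; the point is the per-server delegation. A
        # multiple-strategy manager would also choose here between, e.g.,
        # eager and lazy transfer of the address space.
        for server in self.servers:
            state = server.export_state(pid)  # ask the owner to detach it
            server.import_state(pid, state)   # ...and recreate it remotely

servers = [ProcessManager(), MemoryManager(), IPCManager()]
for s in servers:
    s.import_state(42, {"snapshot": "toy state"})  # toy initial resources
MigrationManager(servers).migrate(42)
```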

Relevance: 80.00%
Publisher:
Abstract:

The thesis reviews the literature relating to girls and computing within a framework structured around three specific questions. First, are there differences between girls and boys in their participation in class computing activities and/or in non-class computing activities? Second, do these differences in participation in computing activities have broader implications which justify the growing concern about the under-representation of girls? Third, why are girls under-represented in these activities? Although the available literature is predominantly descriptive, the underlying implicit theoretical model is essentially a social learning model. Girls' differential participation is attributed to learned attitudes towards computing rather than to differences between girls and boys in general ability. These attitudes, which stress the masculine, mathematical, technological aspects of computing, are developed through modelling, direct experience, intrinsic and extrinsic reinforcement, and generalisation from pre-existing attitudes to related curriculum areas. In the literature it is implicitly assumed that these attitudes underlie girls' decisions to self-select out of computing activities. In this thesis, predictions from a social learning model are complemented by predictions derived from expectancy-value, cognitive dissonance and self-perception theories. These are tested in three separate studies. Study one provides data from a pretest-posttest study of 24 children in a year four class learning BASIC. It examines pre- and posttest differences between girls and boys in computing experience, knowledge and achievement, as well as the factors relating to computing achievement. Study two uses a pretest-posttest control group design to study gender differences in the impact of the introduction of Logo into years 1, 3, 5 and 7 in both coeducational and single-sex settings, using a sample of 222 children from three schools. Study three utilises a larger sample of 1176 students, drawn from three secondary schools and five primary schools, enabling an evaluation of gender differences in relation to a wide range of class computing experiences and in a broader range of school contexts. The overall results are consistent across the three studies, supporting the contention that social factors, rather than ability differences, influence girls' participation and achievement in computing. The more global theoretical framework, drawing on social learning, expectancy-value, cognitive dissonance and self-perception theories, provides a more adequate explanation of gender differences in participation than does any one of these models alone.

Relevance: 80.00%
Publisher:
Abstract:

Distributed Shared Memory (DSM) provides programmers with a shared memory environment in systems where memory is not physically shared. Clusters of Workstations (COWs), an often untapped source of computing power, are characterised by a very low cost/performance ratio. Combining COWs with DSM provides an environment in which the programmer can use the well-known approaches and methods of programming for physically shared memory systems, and in which parallel processing can make full use of the computing power and cost advantages of the COW. The aim of this research is to synthesise and develop a distributed shared memory system as an integral part of an operating system, in order to provide application programmers with a convenient environment in which parallel applications can be developed and executed easily, efficiently and transparently. Furthermore, in order to satisfy our challenging design requirements, we want to demonstrate that the operating system into which the DSM system is integrated should be a distributed operating system.

This thesis reports a study into the synthesis of a DSM system within a microkernel and client-server based distributed operating system, using both strict and weak consistency models, with a write-invalidate and write-update based approach to consistency maintenance. It also reports a unique automatic initialisation system which allows the programmer to start the parallel execution of a group of processes with a single library call; the number and location of these processes are determined by the operating system based on system load information. The proposed DSM system takes a novel approach in that it provides programmers with a complete programming environment in which they can easily develop and run their own code, or indeed run existing shared memory code.

A set of demanding DSM system design requirements is presented, along with the incentives for placing the DSM system within a distributed operating system, and in particular in the memory management server. The new DSM system is built on an event-driven set of cooperating and distributed entities, and a detailed description is given of the events, and the reactions to them, that make up the operation of the DSM system. This is followed by a pseudocode form of the detailed design of the main modules and activities of the primitives used in the proposed DSM system.

Quantitative results of performance tests, and qualitative results showing the ease of programming and use of the RHODOS DSM system, are reported. A study of five different applications is given, and the results of tests carried out on these applications are presented and discussed. The thesis discusses how RHODOS' DSM allows programmers to write shared memory code in an easy-to-use and familiar environment, and provides a comparative evaluation of RHODOS DSM against other DSM systems. In particular, the ease of use and transparency of the DSM system are demonstrated by describing the ease with which a moderately inexperienced undergraduate programmer was able to convert, write and run applications for testing the DSM system. Furthermore, tests performed using physically shared memory show that it is indistinguishable from distributed shared memory, which is further evidence that the DSM system is fully transparent.

This study clearly demonstrates that the aim of the research has been achieved: it is possible to develop a programmer-friendly and efficient DSM system fully integrated within a distributed operating system. It is clear from this research that a DSM system integrated within a client-server and microkernel based distributed operating system makes shared memory operations transparent and almost completely removes programmer involvement beyond the classical activities needed to deal with shared memory. The conclusion can be drawn that DSM, when implemented within a client-server and microkernel based distributed operating system, is one of the most encouraging approaches to parallel processing, since it delivers performance improvements with minimal programmer involvement.
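
To make the write-invalidate approach mentioned above concrete, the following minimal Python sketch shows the protocol's core logic: before a node writes a shared page, every other copy is invalidated, so subsequent readers must re-fetch the new version. It is an illustration of the general technique, not RHODOS code; all names are assumptions.

```python
class DSMNode:
    def __init__(self, name, directory):
        self.name, self.directory, self.pages = name, directory, {}

    def read(self, page_id):
        if page_id not in self.pages:                    # page fault
            self.pages[page_id] = self.directory.fetch(page_id, self)
        return self.pages[page_id]

    def write(self, page_id, value):
        self.directory.invalidate_others(page_id, self)  # write-invalidate
        self.pages[page_id] = value
        self.directory.publish(page_id, value)

class Directory:
    """Tracks, per page, the current value and which nodes hold copies."""
    def __init__(self):
        self.values, self.copies = {}, {}

    def fetch(self, page_id, node):
        self.copies.setdefault(page_id, set()).add(node)
        return self.values.get(page_id)

    def invalidate_others(self, page_id, writer):
        for node in self.copies.get(page_id, set()) - {writer}:
            node.pages.pop(page_id, None)                # drop stale copy
        self.copies[page_id] = {writer}

    def publish(self, page_id, value):
        self.values[page_id] = value

d = Directory()
a, b = DSMNode("A", d), DSMNode("B", d)
a.write("p0", 1); print(b.read("p0"))  # B fetches the page: prints 1
b.write("p0", 2); print(a.read("p0"))  # A's copy was invalidated: prints 2
```

A write-update variant would instead push the new value to all copy holders rather than dropping their copies.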

Relevance: 80.00%
Publisher:
Abstract:

The development of fault-tolerant computing systems is a very difficult task. Two reasons contribute to this difficulty. The first is that, in normal practice, fault-tolerant computing policies and mechanisms are deeply embedded in application programs, so that these programs cannot cope with changes in environments, policies and mechanisms. These factors may change frequently in a distributed environment, especially a heterogeneous one. Therefore, in order to develop better fault-tolerant systems that can cope with constant changes in environments and user requirements, it is essential to separate fault-tolerant computing policies and mechanisms from application programs. The second is that, although a number of techniques have been proposed for the construction of reliable and fault-tolerant computing systems, and many computer systems have been developed to tolerate various hardware and software failures, most of these systems are limited to specific application areas, since it is extremely difficult to develop systems for general-purpose fault-tolerant computing. The motivation of this thesis is based on these two aspects. The focus of the thesis is on developing a model, based on reactive system concepts, for building better fault-tolerant computing applications. Reactive system concepts are an attractive paradigm for system design, development and maintenance because they separate policies from mechanisms. The model aims to provide a flexible system architecture for general-purpose fault-tolerant application development, and it can be applied in many specific applications. With this reactive system model, fault-tolerant computing policies and mechanisms can be separated within applications, making the development and maintenance of fault-tolerant computing systems easier.
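
The policy/mechanism separation that the reactive model rests on can be sketched as follows; this is a hypothetical Python illustration of the general idea (mechanisms detect faults and emit events, a pluggable policy decides the reaction), not the thesis's actual model or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FaultEvent:
    kind: str      # e.g. "node_crash", "timeout"
    source: str    # where the fault was detected

# Policies: interchangeable decision logic, with no detection code inside.
def restart_policy(event: FaultEvent) -> str:
    return f"restart service on {event.source}"

def failover_policy(event: FaultEvent) -> str:
    return f"fail over from {event.source} to a standby node"

class ReactiveFaultManager:
    """Mechanism side: observes faults and delegates decisions to a policy."""
    def __init__(self, policy: Callable[[FaultEvent], str]):
        self.policy = policy                  # swappable without code changes

    def on_fault(self, event: FaultEvent):
        action = self.policy(event)           # the policy decides
        print(f"[{event.kind}] -> {action}")  # the mechanism carries it out

mgr = ReactiveFaultManager(restart_policy)
mgr.on_fault(FaultEvent("node_crash", "host-3"))
mgr.policy = failover_policy                  # change policy, not mechanism
mgr.on_fault(FaultEvent("node_crash", "host-3"))
```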

Relevance: 80.00%
Publisher:
Abstract:

A range of factors, both internal and external, is creating changes in teaching and teachers' professional lives. Information and Communication Technology (ICT) is just one of the major changes impacting on the teaching profession. As teachers face intense pressure to adapt to this tsunami, this study aims to investigate ways in which teachers can be helped. In South Australia, where this study is set, all teachers in Government schools are expected to be "ICT Smart", i.e. able to use appropriate forms of ICT to enhance the teaching and learning environment of their classrooms. From the researcher's involvement for over a decade in professional development for teachers, and from visits to many schools, it appears that numerous teachers have not reached this standard. The greatest need is in Reception to Year 7 schools, where the average age of teachers is nearly 50. Because no state-wide data exists, this study is intended to establish whether there is a problem and, if there is, to identify specific needs and offer possible solutions. The study comprises four parts. Part A, the Introduction, gives an overview of the inter-relationships between these parts and the overall Folio; it establishes the setting and provides a rationale for the study and its focus on professional development in Information and Communication Technology. Part B, the Elective Research Studies, follows the writer's involvement in this field since the 1980s and establishes the theme of "moving best practice in ICT from the few to the many", which underlies the whole study. Part C, the Dissertation, traces the steps taken to investigate the need for professional development in ICT, by analysing and commenting on data collected from a state-wide survey and a series of interviews with leading figures, and by reviewing the relevant literature and past and existing models of professional development. Part D, Final Comments, provides an overview of the whole Folio and a reflection on the research that has been conducted. The findings are that there is widespread dissatisfaction with existing models and an urgent need for professional development in this area: nearly 20% of teachers either do not use computers or are considered novice users, and another 25% are considered not yet "ICT Smart". Less than 10% of ICT co-ordinators have a formal qualification in the field, but more than 85% of them are interested in a Masters program. The study offers solutions in Part B, where a range of strategies to provide on-going professional development for teachers is discussed. Chapter 9 outlines a proposed Masters-level program and offers suggestions on how it could best be delivered; this program would meet the identified needs of ICT co-ordinators. The study concludes with a series of recommendations and suggestions for further research. The Education Department must address these urgent professional development needs of teachers, particularly those in the more remote country regions, and a follow-up survey is needed to establish to what extent teachers in South Australia are now "ICT Smart".

Relevance: 80.00%
Publisher:
Abstract:

[No Abstract]

Relevance: 80.00%
Publisher:
Abstract:

One of the major challenges of MIS activities is the difficulty of measuring the effectiveness of delivered systems. The principal purpose of my research is to explore this field in order to develop an instrument by which to measure such effectiveness. Conceptualisation of Information System (IS) effectiveness has been substantially framed by DeLone and McLean's (1992) Success Model, but with the innovation in Information Technology (IT) over the past decade, and the constant pressure on IT to improve performance, there is merit in undertaking a fresh appraisal of the issue. This study built on the model of IS success developed by DeLone and McLean, but was broadened to include related research from the domains of IS, Management and Marketing. This analysis found that an effective IS function is built on three pillars: the systems implemented; the information held and delivered by these systems; and the service provided in support of the IS function. A common foundation for these pillars is the concept of stakeholder needs. In seeking to appreciate the effectiveness of delivered IS applications in relation to the job performance of stakeholders, this research developed an understanding of what quality means in an IT context. I argue that quality is a more useful criterion for effectiveness than the more customary measures of use and user satisfaction. A respecification of the IS Success Model was then proposed. The second phase of the research was to test this model empirically through judgment panels, focus groups and interviews. Results consistently supported the structure and components of the respecified model. Quality was determined to be a multi-dimensional construct, with the key dimensions of delivered IS quality differing from those used in research from other disciplines. Empirical work indicated that end-user stakeholders derived their evaluations of quality by internally comparing the perceived performance of delivered IS with their expectations of such performance. A short trial explored whether measuring expectations overtly, concurrently with perceptions, provided a more revealing appraisal of delivered IS quality than measuring perceptions alone; results revealed a difference between the two measures. Using the new IS Success Model as the foundation, and drawing upon the related theoretical and empirical research, an instrument was developed to measure the quality/effectiveness of delivered IS applications. Four trials of this instrument, QUALIT, are documented. Analysis of results from the preliminary trials indicates promise in terms of business value: the instrument is simple to administer and has the capacity to pinpoint areas of weakness. The research relating to the respecification of the new IS Success Model and the associated empirical studies, including the development of QUALIT, have both contributed to the development of theory about IS effectiveness. More precisely, my research has reviewed the components of an information system, the dimensions comprising these components and the indicators of each, and, based upon these findings, formulated an instrument by which to measure the effectiveness of a delivered IS.
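
The expectation-versus-perception scoring described in the trial above amounts to a gap score, in the spirit of SERVQUAL-style instruments. A minimal sketch follows; the item names, scale and weights are illustrative assumptions, not QUALIT's actual items.

```python
import numpy as np

items = ["accuracy", "timeliness", "ease_of_use", "support"]
expectations = np.array([6.5, 6.0, 5.5, 6.0])  # 7-point scale ratings
perceptions = np.array([5.0, 6.0, 6.0, 4.5])

gap = perceptions - expectations               # negative = falls short
for item, g in zip(items, gap):
    print(f"{item:12s} gap = {g:+.1f}")        # pinpoints weak areas
print(f"overall quality score = {gap.mean():+.2f}")
```

Measuring perceptions alone would collapse this to the second vector and lose the per-item shortfall information.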

Relevance: 80.00%
Publisher:
Abstract:

Explores space and object relations in a digital 3-D animation production, "Moving-Image". The exegesis examines these relations through an analysis of pictorial realism in painting. The illusion of three dimensional forms in the space of the computer screen is contextualised by investigation of the work's underlying digital conditions.

Relevance: 80.00%
Publisher:
Abstract:

The major outcomes of this research project were the development of a set of decentralized algorithms to index, locate and synchronize replicated information in a networked environment. This study exploits the application-specific design constraints of networked systems to improve performance, instead of relying on data structures and algorithms best suited to centralized systems.
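
The abstract does not describe the algorithms themselves, so the sketch below illustrates the problem space with one standard decentralized technique for locating replicated data, consistent hashing; it is offered purely as an illustration, not as the thesis's method.

```python
import bisect
import hashlib

def h(key: str) -> int:
    """Stable hash of a key or node name onto the ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps each key to its N successor nodes on a hash ring, which serve
    as the key's replica set; any node can compute this locally."""
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((h(n), n) for n in nodes)

    def locate(self, key):
        hashes = [point for point, _ in self.ring]
        i = bisect.bisect(hashes, h(key)) % len(self.ring)  # first successor
        return [self.ring[(i + k) % len(self.ring)][1]
                for k in range(self.replicas)]

ring = ConsistentHashRing(["node-A", "node-B", "node-C", "node-D"])
print(ring.locate("some/replicated/file"))  # the 3 nodes holding replicas
```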