63 results for Parallel processing (Electronic computers) - Research


Relevance:

100.00%

Publisher:

Abstract:

Fifty years ago there were no stored-program electronic computers in the world. Even thirty years ago a computer was something that few organisations could afford, and few people could use. Suddenly, in the 1960s and 70s, everything changed and computers began to become accessible. Today the need for education in Business Computing is generally acknowledged, with each of Victoria's seven universities offering courses of this type. What happened to promote the extremely rapid adoption of such courses is the subject of this thesis. I will argue that although Computer Science began in Australia's universities of the 1950s, courses in Business Computing commenced in the 1960s due to the requirement of the Commonwealth Government for computing professionals to fulfil its growing administrative needs. The Commonwealth-developed Programmer-in-Training courses were later devolved to the new Colleges of Advanced Education. The movement of several key figures from the Commonwealth Public Service to take up positions in Victorian CAEs was significant, and the courses they subsequently developed became the model for many future courses in Business Computing. The reluctance of the universities to become involved in what they saw as little more than vocational training opened the way for the CAEs to develop this curriculum area.

Relevance:

100.00%

Publisher:

Abstract:

Currently, coordinated scheduling of multiple parallel applications across computers is considered a critical factor in achieving high execution performance. We claim in this report that the performance and costs of executing parallel applications could be improved if not only dedicated but also non-dedicated clusters were used, and several parallel applications were executed concurrently. To support this claim we carried out an experimental study into the performance of multiple NAS parallel programs executing concurrently on a non-dedicated cluster.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present techniques for inverting sparse, symmetric and positive definite matrices on parallel and distributed computers. We propose two algorithms, one for SIMD implementation and the other for MIMD implementation. These algorithms are modified versions of Gaussian elimination and they take into account the sparseness of the matrix. Our algorithms perform better than the general parallel Gaussian elimination algorithm. In order to demonstrate the usefulness of our technique, we implemented the snake problem using our sparse matrix algorithm. Our studies reveal that the proposed sparse matrix inversion algorithm significantly reduces the time taken for obtaining the solution of the snake problem. In this paper, we present the results of our experimental work.
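The abstract does not reproduce the SIMD/MIMD algorithms themselves, but the underlying idea can be sketched serially: a symmetric positive definite matrix is inverted through its triangular (Cholesky) factors, and each column of the inverse can be solved independently, which is what the parallel versions distribute. A minimal sketch, with a hypothetical `spd_inverse` helper and toy matrix:

```python
import numpy as np

def spd_inverse(A):
    """Invert a symmetric positive definite matrix via its Cholesky
    factor A = L L^T. Each column of the inverse is an independent
    triangular solve, so the columns can be computed in parallel."""
    L = np.linalg.cholesky(A)            # A = L L^T, L lower triangular
    n = A.shape[0]
    Y = np.linalg.solve(L, np.eye(n))    # forward-substitute L Y = I
    return np.linalg.solve(L.T, Y)       # back-substitute  L^T X = Y

# a small sparse, symmetric, positive definite example
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
A_inv = spd_inverse(A)
```

A sparsity-aware elimination, as in the paper, would additionally skip the zero entries rather than treat the matrix as dense.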

Relevance:

100.00%

Publisher:

Abstract:

Accurate and timely traffic flow prediction is crucial to proactive traffic management and control in data-driven intelligent transportation systems (D2ITS), which has attracted great research interest in the last few years. In this paper, we propose a Spatial-Temporal Weighted K-Nearest Neighbor model, named STW-KNN, in a general MapReduce framework of distributed modeling on a Hadoop platform, to enhance the accuracy and efficiency of short-term traffic flow forecasting. More specifically, STW-KNN considers the spatial-temporal correlation and weight of traffic flow with trend adjustment features, to optimize the search mechanisms containing state vector, proximity measure, prediction function, and K selection. Furthermore, STW-KNN is implemented on a widely adopted Hadoop distributed computing platform with the MapReduce parallel processing paradigm, for parallel prediction of traffic flow in real time. Finally, with extensive experiments on real-world big taxi trajectory data, STW-KNN is compared with state-of-the-art prediction models including conventional K-Nearest Neighbor (KNN), Artificial Neural Networks (ANNs), Naïve Bayes (NB), Random Forest (RF), and C4.5. The results demonstrate that the proposed model is superior to existing models in accuracy, decreasing the mean absolute percentage error (MAPE) by more than 11.9% in the time domain alone, and even achieves an 89.71% accuracy improvement with MAPEs of between 4% and 6.% in both the space and time domains, while also significantly improving the efficiency and scalability of short-term traffic flow forecasting over existing approaches.
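The KNN core of such a forecaster can be sketched in a few lines: match the current state vector against historical state vectors and average the readings that followed the closest matches, weighted by inverse distance. This is a toy serial sketch only; the spatial-temporal weighting, trend adjustment, K-selection tuning, and MapReduce decomposition described in the abstract are omitted, and all names (`knn_forecast`, the history matrix) are illustrative:

```python
import numpy as np

def knn_forecast(history, query, k=3):
    """Weighted KNN forecast: each history row holds a past state
    vector followed by the reading that came next; the prediction is
    the inverse-distance-weighted average over the k nearest states."""
    states = history[:, :-1]              # past state vectors
    followups = history[:, -1]            # the reading that followed each state
    d = np.linalg.norm(states - np.asarray(query, float), axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)         # inverse-distance weights
    return float(np.sum(w * followups[nearest]) / np.sum(w))

# toy history: rows are [flow(t-2), flow(t-1), flow(t), flow(t+1)]
history = np.array([[10.0, 12.0, 14.0, 16.0],
                    [10.0, 12.0, 14.0, 16.0],
                    [50.0, 55.0, 60.0, 65.0]])
pred = knn_forecast(history, [10.0, 12.0, 14.0], k=2)
```

In a MapReduce setting, the distance computation over the historical database is what gets partitioned across mappers, with the top-K merge done in the reducer.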

Relevance:

100.00%

Publisher:

Abstract:

Applying gang scheduling can alleviate the blockade problem caused by exclusively used space-sharing strategies for parallel processing. However, the original form of gang scheduling is not practical as there are several fundamental problems associated with it. Recently many researchers have developed new strategies to alleviate some of these problems. Unfortunately, one important problem has so far not been seriously addressed: how to set the length of the time slots to obtain good performance from gang scheduling. In this paper we present a strategy to deal with this important issue for efficient gang scheduling.

Relevance:

100.00%

Publisher:

Abstract:

Cluster computing has come to prominence as a cost-effective parallel processing tool for solving many complex computational problems. In this paper, we propose a new timesharing opportunistic scheduling policy to support remote batch job executions over networked clusters, to be used in conjunction with the Condor Up-Down scheduling algorithm. We show that timesharing approaches can be used in an opportunistic setting to improve both mean job slowdowns and mean response times with little or no throughput reduction. We also show that the proposed algorithm achieves significant improvement in job response time and slowdown as compared to existing approaches and some recently proposed new approaches.
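The two metrics the comparison rests on have standard definitions: response time is completion minus arrival, and slowdown is response time divided by the job's pure service demand (so a job that ran with no queueing or preemption has slowdown 1). A minimal sketch, with a hypothetical `mean_metrics` helper and toy job tuples:

```python
def mean_metrics(jobs):
    """Mean response time and mean slowdown over a list of
    (arrival, completion, service) tuples."""
    responses = [c - a for a, c, s in jobs]
    slowdowns = [(c - a) / s for a, c, s in jobs]
    return sum(responses) / len(jobs), sum(slowdowns) / len(jobs)

# two toy jobs: one delayed 5s beyond its 5s of work, one run immediately
resp, slow = mean_metrics([(0.0, 10.0, 5.0), (2.0, 4.0, 2.0)])
```

Slowdown is the metric that exposes how badly short jobs fare under a policy, since the same absolute delay inflates a short job's slowdown far more than a long job's.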

Relevance:

100.00%

Publisher:

Abstract:

Cluster systems are becoming more prevalent in today's computing landscape, and users are beginning to request that these systems be reliable. Currently, most clusters have been designed to provide high performance at the cost of providing little to no reliability. To combat this, this report looks at how a recovery facility, based on either a centralised or a distributed approach, could be implemented in a cluster that is supported by a checkpointing facility. This recovery facility can then recover failed user processes by using checkpoints of the processes taken during failure-free execution.
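The checkpoint-then-recover cycle the report builds on can be sketched very simply: snapshot process state periodically during failure-free execution, and on failure roll the process back to the latest snapshot. This is a toy in-memory sketch under stated assumptions (a real facility writes snapshots to stable storage, and the distributed variant coordinates them across cluster nodes); the `Checkpointer` class is hypothetical:

```python
import copy

class Checkpointer:
    """Minimal in-memory checkpoint/recovery facility."""
    def __init__(self):
        self._snapshots = []

    def checkpoint(self, state):
        # snapshot taken during failure-free execution; deep copy so
        # later mutation of the live state cannot corrupt the snapshot
        self._snapshots.append(copy.deepcopy(state))

    def recover(self):
        # roll a failed process back to its latest snapshot
        if not self._snapshots:
            raise RuntimeError("no checkpoint to recover from")
        return copy.deepcopy(self._snapshots[-1])

cp = Checkpointer()
state = {"step": 0, "partial_sum": 0}
for step in range(1, 6):
    state["step"] = step
    state["partial_sum"] += step
    if step == 3:                 # periodic checkpoint
        cp.checkpoint(state)
state = cp.recover()              # simulate a crash at step 5
```

After recovery the process resumes from step 3, re-executing only the work done since the last checkpoint rather than restarting from scratch.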

Relevance:

100.00%

Publisher:

Abstract:

Applying gang scheduling can alleviate the blockade problem caused by exclusively used space-sharing strategies for parallel processing. However, the original form of gang scheduling is not practical as there are several fundamental problems associated with it. Recently many researchers have developed new strategies to alleviate some of these problems. Unfortunately, one important problem has so far not been seriously addressed: how to set the length of the time slots to obtain good performance from gang scheduling. With gang scheduling, time is divided into time slots of equal length; the time slots introduced in the system form a scheduling round, and each new job is first allocated to a particular time slot and then starts to run in the following scheduling round. Ideally, the length of a time slot should be long, to avoid frequent context switches and so reduce the scheduling overhead. The number of time slots in a scheduling round should also be limited, to avoid a large number of jobs competing for limited resources (CPU time and memory). However, long time slots and a limited number of time slots in each scheduling round may cause jobs to wait a long time after arrival before they can be executed, which can significantly affect the performance of jobs, especially short jobs, which are normally expected to finish quickly. Yet the performance of a short job can also suffer if the time slot is not long enough to let the short job complete within a single slot. In this paper we present a strategy to deal with this important issue for efficient gang scheduling.
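The tradeoff described above can be made concrete with a first-order completion-time model: a job runs once per scheduling round in its own slot, so the slots it needs beyond the first each cost a whole round of waiting. This is an illustrative simplification of my own, not the paper's strategy; `completion_time` and its parameters are hypothetical:

```python
import math

def completion_time(service, slot_len, nslots, switch_cost=0.0):
    """Time from a job's first slot until it finishes, under
    round-robin gang scheduling with `nslots` equal slots of length
    `slot_len` per round and a per-slot context-switch overhead."""
    # slots the job needs to accumulate `service` units of CPU time
    full_slots = math.ceil(service / slot_len)
    # work done in the job's final (possibly partial) slot
    remainder = service - (full_slots - 1) * slot_len
    # one scheduling round = every slot plus its context-switch cost
    round_len = nslots * (slot_len + switch_cost)
    return (full_slots - 1) * round_len + remainder

# a 1-second job finishes within its first slot if slots are long enough...
t_long = completion_time(1.0, 2.0, 4)    # 1.0
# ...but must wait out whole rounds when the slot is too short
t_short = completion_time(1.0, 0.5, 4)   # 2.5
```

The model reproduces the abstract's point: shrinking the slot below a short job's service time multiplies its completion time by whole scheduling rounds, while overly long slots instead delay every newly arrived job.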

Relevance:

100.00%

Publisher:

Abstract:

The major barriers to the implementation of electronic commerce by businesses globally are well understood. These comprise security and privacy issues, the lack of established regulations governing commercial behaviour and liability, and the lack of universally accepted standards. In this article, we focus on the security concerns of Australian SMEs. Medium, and especially small, enterprises are hindered in the implementation of communications security technology by a lack of expertise and a poor understanding of the services and resources available to them. As a response to this situation, we examine the facilities available to Australian SMEs which help them to make reasonable e-security decisions as part of an overall e-business strategy. We demonstrate that there are sufficient resources at appropriate levels of availability to enable small and medium Australian enterprises to implement communications security effectively.

Relevance:

100.00%

Publisher:

Abstract:

Local government in Australia is under pressure to modernize its structures in the new public management environment, as well as respond to increasing demands from its local electorates for better delivery of services and greater levels of participation in the democratic process. This article analyzes local government's response to these pressures through its use of information communication technologies (ICT) to execute its broad range of tasks. I begin by discussing e-governance in the light of Chadwick and May's (2003) three basic models of interaction between the state and its citizens: managerial, consultative, and participatory. Using data collected from an analysis of 658 local government Web sites in Australia together with existing survey research, I analyze the extent to which local government sites fit into the three models. The article then concludes with a discussion of the issues and problems faced by local government in its attempt to develop e-governance, as both an extension of its administrative as well as democratic functions.

Relevance:

100.00%

Publisher:

Abstract:

When building a cost-effective high-performance parallel processing system, a performance model is a useful tool for exploring the design space and examining various parameters. However, performance analysis in such systems has proven to be a challenging task that requires innovative performance analysis tools and methods to keep up with their rapid evolution and ever-increasing complexity. To this end, we propose an analytical model for heterogeneous multi-cluster systems. The model takes into account stochastic quantities as well as network heterogeneity in bandwidth and latency in each cluster. Blocking and non-blocking network architecture models are also proposed and used in the performance analysis of the system. The message latency is used as the primary performance metric. The model is validated by constructing a set of simulators to simulate different types of clusters, and by comparing the modeled results with the simulated ones.
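The deterministic core of a message-latency metric is commonly a fixed per-message startup cost plus size over bandwidth; the abstract's model layers stochastic terms and blocking versus non-blocking switch behaviour on top of something like this. A minimal sketch with hypothetical cluster parameters:

```python
def message_latency(size_bytes, startup_s, bandwidth_bps):
    """First-order latency model: fixed startup overhead plus
    transmission time (message size / link bandwidth)."""
    return startup_s + size_bytes / bandwidth_bps

# two hypothetical heterogeneous clusters, sending a 1 MB message
fast = message_latency(1_000_000, 5e-6, 1e9)   # 5 µs startup, 1 GB/s links
slow = message_latency(1_000_000, 50e-6, 1e8)  # 50 µs startup, 100 MB/s links
```

Heterogeneity enters the multi-cluster model precisely here: each cluster carries its own startup and bandwidth parameters, so the same message pays a different latency depending on where it is routed.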

Relevance:

100.00%

Publisher:

Abstract:

The flood of new genomic sequence information together with technological innovations in protein structure determination have led to worldwide structural genomics (SG) initiatives. The goals of SG initiatives are to accelerate the process of protein structure determination, to fill in protein fold space and to provide information about the function of uncharacterized proteins. In the long term, these outcomes are likely to impact on medical biotechnology and drug discovery, leading to a better understanding of disease as well as the development of new therapeutics. Here we describe the high-throughput pipeline established at the University of Queensland in Australia. In this focused pipeline, the targets for structure determination are proteins that are expressed in mouse macrophage cells and that are inferred to have a role in innate immunity. The aim is to characterize the molecular structure and the biochemical and cellular function of these targets by using a parallel processing pipeline. The pipeline is designed to work with tens to hundreds of target gene products and comprises target selection, cloning, expression, purification, crystallization and structure determination. The structures from this pipeline will provide insights into the function of previously uncharacterized macrophage proteins and could lead to the validation of new drug targets for chronic obstructive pulmonary disease and arthritis.