96 results for Distributed computer-controlled systems

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

A new versatile computer-controlled electrochemical/ESR data acquisition system has been developed for the investigation of short-lived radicals with lifetimes of 20 milliseconds and greater. Different computer programs have been developed to monitor the decay of radicals over hours, minutes, seconds or milliseconds. Signal averaging and Fourier smoothing are employed to improve the signal-to-noise ratio. Two microcomputers are used to control the system: a home-made computer containing the M6800 chip, which controls the magnetic field, and an IBM PC XT, which controls the electrochemistry and the data acquisition. The computer programs are written in Fortran and C and call machine-language subroutines. The system functions by generating the radical with an electrochemical pulse; after or during the pulse the ESR data are collected. Decaying radicals with half-lives of seconds or greater have their spectra collected in the magnetic field domain, which can be swept as fast as 200 Gauss per second. The decay of radicals in the millisecond region is monitored by time-resolved ESR, a technique in which data are collected in both the time domain and the magnetic field domain. Previously, time-resolved ESR had been used (without field modulation) to investigate ultra-short-lived species with lifetimes of only a few microseconds. The application of the data acquisition system to chemical systems is illustrated. This is the first computer-controlled system in which the radical is generated electrochemically and the ESR data are subsequently collected.
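As an illustration of the signal averaging and Fourier smoothing step described above, the following is a minimal sketch assuming repeated field sweeps stored as rows of a NumPy array; the function name, the cutoff parameter and the simulated data are illustrative only and are not part of the original Fortran/C system.

```python
import numpy as np

def average_and_smooth(sweeps, cutoff_fraction=0.1):
    """Average repeated ESR field sweeps and apply Fourier (low-pass) smoothing.

    sweeps: 2-D array, one field sweep per row (hypothetical data layout).
    cutoff_fraction: fraction of Fourier components to keep (illustrative value).
    """
    # Signal averaging: noise falls roughly as 1/sqrt(N) for N sweeps.
    mean_sweep = sweeps.mean(axis=0)

    # Fourier smoothing: zero out the high-frequency components.
    coeffs = np.fft.rfft(mean_sweep)
    cutoff = int(len(coeffs) * cutoff_fraction)
    coeffs[cutoff:] = 0.0
    return np.fft.irfft(coeffs, n=len(mean_sweep))

# Example: 64 simulated sweeps of a noisy Gaussian line.
field = np.linspace(-10, 10, 512)
line = np.exp(-field**2)
sweeps = line + 0.3 * np.random.randn(64, field.size)
smoothed = average_and_smooth(sweeps)
```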

Relevance:

100.00%

Publisher:

Abstract:

We present the RhoVeR (Rhodes Virtual Reality) system and classify it as a second generation parallel/distributed virtual reality (DVR) system. We discuss the components of the system and thereby demonstrate its support for virtual reality application development, its configurable, parallel and distributed nature, and its synthesis of first generation DVR techniques.

Relevance:

100.00%

Publisher:

Abstract:

Security and privacy have been major concerns when people build parallel and distributed networks and systems. As attack systems have become easier to use, more sophisticated, and more powerful, interest has greatly increased in building more effective, intelligent, adaptive, active and high-performance defense systems that are distributed and networked. This special issue focuses on the issues of building secure parallel and distributed networks and systems.

Relevance:

100.00%

Publisher:

Abstract:

With decades of progress toward ubiquitous networks and systems, distributed computing systems have played an increasingly important role in industry and society. However, not many distributed networks and systems are secure and reliable in the sense of automatically defending against different attacks and tolerating failures, thus guaranteeing properties such as performance and offering security against intentional threats. This special issue focuses on securing distributed networks and systems.

Relevance:

100.00%

Publisher:

Abstract:

Determining the causal structure of a domain is frequently a key task in the area of data mining and knowledge discovery. This paper introduces ensemble learning into linear causal model discovery, then examines several algorithms based on different ensemble strategies, including Bagging, Adaboost and GASEN. Experimental results show that (1) an ensemble discovery algorithm can achieve improved accuracy compared with an individual causal discovery algorithm; (2) among all examined ensemble discovery algorithms, the BWV algorithm, which uses a simple Bagging strategy, works excellently compared with other, more sophisticated ensemble strategies; and (3) the ensemble method can also improve the stability of parameter estimation. In addition, ensemble discovery algorithms are amenable to parallel and distributed processing, which is important for data mining in large data sets.
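As a rough illustration of the Bagging strategy underlying such ensembles, here is a minimal sketch assuming a generic base discovery procedure that maps a data sample to a hashable model description and an unweighted majority vote; the base_discover interface and the toy data are hypothetical and are not the paper's BWV algorithm.

```python
import random
from collections import Counter

def bagging_ensemble(data, base_discover, n_models=10, seed=0):
    """Bootstrap-aggregate a base discovery procedure.

    data: list of samples; base_discover: callable mapping a sample list to a
    hashable model description (hypothetical interface).
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        # Bootstrap resample: draw len(data) samples with replacement.
        boot = [rng.choice(data) for _ in data]
        votes[base_discover(boot)] += 1
    # Simple (unweighted) majority vote over the discovered models.
    return votes.most_common(1)[0][0]

# Toy usage: the "model" is just the sign of the sample sum.
toy_data = [0.4, -0.1, 0.7, 0.2, -0.3, 0.5]
model = bagging_ensemble(toy_data, lambda d: "positive" if sum(d) > 0 else "negative")
```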

Relevance:

100.00%

Publisher:

Abstract:

A highly programmable electro-mechanical surface is developed using an effective array of individual pins arranged in a grid form. Each pin can be independently raised or lowered to create a wide range of contoured surfaces. It was found that as the number of elements increased, high levels of accuracy could still be achieved; however, the required processing power increased logarithmically. This finding was attributed to the large amounts of data being passed, and subsequently led to a second focus: methods of data management and flow control within large-scale multi-element systems. Results indicated a large potential for highly programmable surfaces within industry to provide a computer-controlled surface for rapid prototyping. The research also revealed the potential for such a device to be used as a human interface device (HID) within haptic applications.
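To make the data volumes concrete, the following is a minimal sketch of the kind of pin-grid control structure such a surface implies, assuming each pin is addressed by row and column and driven to a height in discrete steps; the class, its limits and the example contour are illustrative and are not taken from the paper.

```python
class PinSurface:
    """Toy model of a programmable pin-grid surface (illustrative only)."""

    def __init__(self, rows, cols, max_height=255):
        self.rows, self.cols, self.max_height = rows, cols, max_height
        # Every pin starts fully lowered.
        self.heights = [[0] * cols for _ in range(rows)]

    def set_pin(self, row, col, height):
        # Clamp the command so a pin is never driven past its travel.
        self.heights[row][col] = max(0, min(self.max_height, height))

    def load_contour(self, height_fn):
        # Batch update: one command per pin, so traffic grows with rows * cols.
        for r in range(self.rows):
            for c in range(self.cols):
                self.set_pin(r, c, height_fn(r, c))

# Example: a shallow dome centred on a 16 x 16 grid.
surface = PinSurface(16, 16)
surface.load_contour(lambda r, c: 200 - ((r - 8) ** 2 + (c - 8) ** 2))
```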

Relevance:

100.00%

Publisher:

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering have been important steps forward in efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems to be overcome in the information gathering research arena to enable the delivery of the relevant information required by end users. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, sufficient, timely, and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of information available, the diverse formats of information, uncertainties in the information, and the distributed locations of information (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed), whereas overload means that some gathered information is not what users need.

Traditional information retrieval has developed well over the past twenty years, and the introduction of the Web has changed people's perception of information retrieval. Usually, the task of information retrieval is considered to be leading the user to those documents that are relevant to his or her information needs; a related function is to filter out irrelevant documents (information filtering). Research into traditional information retrieval has provided many retrieval models and techniques to represent documents and queries. Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These observations motivate research in agent-based information gathering. In such systems, intelligent agents take commitments from their users and act on the users' behalf to gather the required information. They can retrieve relevant information from highly distributed, uncertain environments because of their intelligence, autonomy and distribution. Current research on agent-based information gathering systems is divided into single-agent gathering systems and multi-agent gathering systems. In both areas there are still open problems to be solved before agent-based information gathering systems can retrieve uncertain information effectively from highly distributed environments. The aim of this thesis is to develop a theoretical framework for intelligent agents to gather information from the Web. The research integrates the areas of information retrieval and intelligent agents. The specific research areas are the development of an information filtering model for single-agent systems, and the development of a dynamic belief model for information fusion in multi-agent systems.

The research results are also supported by the construction of real information gathering agents (e.g., a Job Agent) for the Internet that help users gather useful information stored in Web sites. In this framework, information gathering agents are able to describe (or learn) the user's information needs, and act like the user to retrieve, filter, and/or fuse information. A rough set based information filtering model is developed to address the problem of overload. The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region, and a negative region. Two experiments are presented to verify this model, and they show that the rough set based model provides an efficient approach to the overload problem. This research also develops a dynamic belief model for information fusion in multi-agent environments. The model has polynomial time complexity, and it is proven that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case is where collections may be used by more than one agent; the algorithm uses cooperation between agents to provide a solution to this problem in distributed information retrieval systems. This thesis presents solutions to theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents can gather relevant information from highly distributed, uncertain environments.
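As a minimal illustration of the three-region classification described above, the sketch below assumes each document receives a relevance score in [0, 1] against the user's concept space and is split by two thresholds; the scoring function and threshold values are illustrative stand-ins, not the thesis's rough set decision procedure.

```python
def classify_documents(docs, score, lower=0.3, upper=0.7):
    """Split documents into positive, boundary and negative regions.

    score: callable giving a relevance score in [0, 1] for a document
    (a stand-in for the rough set decision rule); thresholds are illustrative.
    """
    regions = {"positive": [], "boundary": [], "negative": []}
    for doc in docs:
        s = score(doc)
        if s >= upper:
            regions["positive"].append(doc)   # accept: clearly relevant
        elif s <= lower:
            regions["negative"].append(doc)   # reject: clearly irrelevant
        else:
            regions["boundary"].append(doc)   # defer: needs further evidence
    return regions

# Toy usage with keyword overlap as the score.
need = {"agent", "information", "gathering"}
docs = ["agent gathering news", "cooking recipes", "information agent systems"]
result = classify_documents(docs, lambda d: len(need & set(d.split())) / len(need))
```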

Relevance:

100.00%

Publisher:

Abstract:

In the last 30 to 40 years, many researchers have combined to build the knowledge base of theory and solution techniques that can be applied to differential equations which include the effects of noise. This class of "noisy" differential equations is now known as stochastic differential equations (SDEs). Markov diffusion processes are included within the field of SDEs through the drift and diffusion components of the Itô form of an SDE. When these drift and diffusion components are moderately smooth functions, the processes' transition probability densities satisfy the Fokker-Planck-Kolmogorov (FPK) equation, a deterministic partial differential equation (PDE). Thus there is a mathematical inter-relationship that allows solutions of SDEs to be determined from the solution of a noise-free differential equation which has been extensively studied since the 1920s. The main numerical solution technique employed to solve the FPK equation is the classical Finite Element Method (FEM). The FEM is of particular importance to engineers when used to solve FPK systems that describe noisy oscillators. The FEM is a powerful tool but is limited in that it is cumbersome when applied to multidimensional systems and can lead to large and complex matrix systems with their inherent solution and storage problems. I show in this thesis that the stochastic Taylor series (TS) based time discretisation approach to the solution of SDEs is an efficient and accurate technique that provides transition and steady-state solutions to the associated FPK equation. The TS approach to the solution of SDEs has certain advantages over the classical techniques, including the ability to effectively tackle stiff systems, simplicity of derivation, and ease of implementation and re-use. Unlike the FEM approach, which is difficult to apply in even only two dimensions, the simplicity of the TS approach is independent of the dimension of the system under investigation. Their main disadvantage, the large number of simulations required and the associated CPU cost, is countered by their underlying structure, which makes them perfectly suited for use on the now prevalent parallel or distributed processing systems. In summary, I will compare the TS solution of SDEs to the solution of the associated FPK equations using the classical FEM technique. One-, two- and three-dimensional FPK systems that describe noisy oscillators have been chosen for the analysis. As higher-dimensional FPK systems are rarely mentioned in the literature, the TS approach will be extended to essentially infinite-dimensional systems through the solution of stochastic PDEs. In making these comparisons, the advantages of modern computing tools such as computer algebra systems and simulation software, when used as an adjunct to the solution of SDEs or their associated FPK equations, are demonstrated.
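As a minimal illustration of the stochastic Taylor series time discretisation idea, here is a sketch of its lowest-order (order 0.5) member, the Euler-Maruyama scheme, for a scalar Itô SDE dX = a(X) dt + b(X) dW; the Ornstein-Uhlenbeck drift and diffusion used in the example are illustrative and are not one of the noisy oscillator systems studied in the thesis.

```python
import numpy as np

def euler_maruyama(a, b, x0, t_end, n_steps, n_paths, seed=0):
    """Simulate dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme,
    the order-0.5 member of the stochastic Taylor family."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x = x + a(x) * dt + b(x) * dw
    return x

# Illustrative example: Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW.
theta, sigma = 1.0, 0.5
samples = euler_maruyama(lambda x: -theta * x, lambda x: sigma * np.ones_like(x),
                         x0=1.0, t_end=5.0, n_steps=1000, n_paths=10000)
# A histogram of `samples` approximates the stationary FPK density
# (Gaussian with variance sigma**2 / (2 * theta)).
```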

Relevance:

100.00%

Publisher:

Abstract:

Distributed replication is the key to providing high availability, fault-tolerance, and enhanced performance. The thesis focuses on providing a toolkit to support the automatic construction of reliable distributed service replication systems. The toolkit frees programmers from dealing with network communications and replication control protocols.

Relevance:

100.00%

Publisher:

Abstract:

The implementation of a business intelligence (BI) system is a complex undertaking requiring considerable resources. Yet there is a limited authoritative set of critical success factors (CSFs) for management reference, because the BI market has been driven mainly by the IT industry and vendors. This research seeks to bridge the gap between academia and practitioners by investigating the CSFs influencing BI systems success. The study followed a two-stage qualitative approach. First, the authors utilised the Delphi method to conduct three rounds of studies and develop a CSFs framework crucial for BI systems implementation. Next, the framework and the associated CSFs were delineated through a series of case studies. The empirical findings substantiate the construct and applicability of the framework. More significantly, the research further reveals that organisations which address the CSFs from a business-oriented approach are more likely to achieve better results.

Relevance:

100.00%

Publisher:

Abstract:

Security and privacy have been major concerns when people build computer networks and systems. Any computer network or system must be trustworthy to avoid the risk of losing control and to retain confidence that it will not fail [1]. Trust is the key factor in enabling dynamic interaction and cooperation among various users, systems and services [2]. Trusted computing aims at making computer networks, systems, and services available, predictable, traceable, controllable, assessable, sustainable, dependable, and security/privacy protectable. This special section focuses on issues related to trusted computing, such as trusted computing models and specifications; trusted, reliable and dependable systems; trustworthy services and applications; and trust standards and protocols.

Relevance:

100.00%

Publisher:

Abstract:

© 2015 The Institution of Engineering and Technology. In this study, the authors derive some new refined Jensen-based inequalities, which encompass both the Jensen inequality and its most recent improvement based on the Wirtinger integral inequality. The potential of this approach is demonstrated through applications to the stability analysis of time-delay systems. More precisely, by using the newly derived inequalities, they establish new stability criteria for two classes of time-delay systems, namely systems with discrete and distributed constant delays, and systems with interval time-varying delays. The resulting stability conditions are expressed in terms of linear matrix inequalities, which can be efficiently solved by various convex optimisation algorithms. Numerical examples are given to show the effectiveness and reduced conservativeness of the results obtained in this study.
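For context, the classical Jensen integral inequality that such results refine can be stated as follows (a standard textbook formulation, not the refined inequality derived in the paper): for a symmetric positive definite matrix R and an integrable vector function x on [a, b],

```latex
\[
\left( \int_a^b x(s)\,\mathrm{d}s \right)^{\!\top} R
\left( \int_a^b x(s)\,\mathrm{d}s \right)
\;\le\; (b-a) \int_a^b x(s)^{\top} R \, x(s)\,\mathrm{d}s ,
\qquad R = R^{\top} \succ 0 .
\]
```

Wirtinger-based and further refined inequalities tighten this bound by adding correction terms built from weighted integrals of x, which is what leads to less conservative LMI stability conditions.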