934 results for Operating Systems


Relevance:

60.00%

Publisher:

Abstract:

The human body was used to illustrate an Autonomic Computing system: one that possesses the properties of self-knowledge, self-configuration, self-optimization, self-healing and self-protection, together with knowledge of its environment and user friendliness. Autonomic Computing was identified by IBM as one of the Grand Challenges, and many researchers and research groups have responded positively by initiating research around one or two of the characteristics identified by IBM as the requirements for Autonomic Computing. One area that could benefit from the comprehensive approach created by the Autonomic Computing vision is parallel processing on non-dedicated clusters. This paper presents a general design of services, and an initial implementation, of a system that moves parallel processing on clusters into the computing mainstream using the Autonomic Computing vision.
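The abstract gives no implementation detail, but the self-* properties it lists are conventionally realised as an autonomic control loop (monitor, analyse, plan, execute) running on each managed node. The following is a minimal Python sketch of that loop under assumed interfaces (`AutonomicNode`, the sensor and action callables); it is illustrative only, not the paper's design.

```python
# Illustrative autonomic control loop; all names are assumptions.

class AutonomicNode:
    def __init__(self, sensors, actions):
        # sensors: name -> zero-argument callable returning a reading
        # actions: list of (condition, act) pairs over the sensed state
        self.sensors = sensors
        self.actions = actions

    def step(self):
        # Monitor: snapshot self- and environment knowledge.
        state = {name: read() for name, read in self.sensors.items()}
        # Analyse, plan and execute: fire any action whose condition holds.
        for condition, act in self.actions:
            if condition(state):
                act(state)

# Example: a node that "self-heals" by restarting an unresponsive worker.
node = AutonomicNode(
    sensors={"worker_alive": lambda: False},        # stub reading
    actions=[(lambda s: not s["worker_alive"],
              lambda s: print("self-healing: restarting worker"))],
)
node.step()
```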

Relevance:

60.00%

Publisher:

Abstract:

Personal identification of individuals is becoming increasingly adopted in society today. Due to the large number of electronic systems that require human identification, faster and more secure identification systems are being pursued. Biometrics is based upon the physical characteristics of individuals; of these, the fingerprint is the most common, as used within law enforcement. Fingerprint-based systems have been introduced into society but have not been well received, due to relatively high rejection rates and false acceptance rates. This limited acceptance of fingerprint identification systems requires new techniques to be investigated to improve the identification method and the acceptance of the technology within society.

Electronic fingerprint identification provides a method of identifying an individual quickly and easily within seconds. The fingerprint must be captured instantly to allow the system to identify the individual without any technical user interaction, simplifying system operation. The performance of the entire system relies heavily on the quality of the original fingerprint image that is captured digitally. A single fingerprint scan for verification makes the system easier for users to access, as it replaces the need to remember passwords or authorisation codes. The identification system comprises several components to perform this function, including a fingerprint sensor, a processor, and feature extraction and verification algorithms. A compact texture feature extraction method will be implemented within an embedded microprocessor-based system for security, performance and cost-effective production compared with currently available commercial fingerprint identification systems.

Various software packages are available for developing such programs for Windows-based operating systems, but development must not be constrained to a graphical user interface alone. MATLAB was the software package chosen for this thesis due to its strong mathematical, data analysis and image analysis libraries. MATLAB enables the complete fingerprint identification system to be developed and implemented within a PC environment and then exported at a later date directly to an embedded processing environment.

The nucleus of the fingerprint identification system is the feature extraction approach presented in this thesis, which uses global texture information, unlike the local information used in traditional minutiae-based identification methods. Commercial solid-state sensors, such as the type selected for use in this thesis, have a limited contact area with the fingertip and therefore sample only a limited portion of the fingerprint. This limits the number of minutiae that can be extracted, and as such limits the number of common singular points between two impressions of the same fingerprint. The application of texture feature extraction will be tested using a variety of fingerprint images to determine the most appropriate format for use within the embedded system.

This thesis has focused on designing a fingerprint-based identification system that is highly expandable using the MATLAB environment. The main components defined within this thesis are the hardware design, image capture, image processing and feature extraction methods. The final system components for this electronic fingerprint identification system were selected using specific criteria to yield the highest performance from an embedded processing environment. These platforms are very cost effective and will allow fingerprint-based identification technology to be implemented in more commercial products that can benefit from the security and simplicity of a fingerprint identification system.
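The abstract does not specify the texture descriptor used, so the following is only an illustrative sketch of a global texture feature of the kind described: a bank of oriented Gabor filters whose mean absolute responses form a compact feature vector. It is written in Python with NumPy rather than the MATLAB used in the thesis, and all function names and parameter values are assumptions.

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=4.0, size=31):
    """Real-valued Gabor kernel at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * frequency * xr)

def texture_features(image, n_orientations=8, frequency=0.1):
    """Compact global descriptor: mean absolute Gabor response per orientation."""
    feats = []
    for k in range(n_orientations):
        kernel = gabor_kernel(frequency, k * np.pi / n_orientations)
        # Circular convolution via the FFT keeps the sketch dependency-free.
        resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel, image.shape)).real
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Toy usage on a synthetic "fingerprint": an oriented sinusoidal ridge pattern.
y, x = np.mgrid[0:64, 0:64]
ridges = np.sin(2 * np.pi * 0.1 * (x * np.cos(0.5) + y * np.sin(0.5)))
print(texture_features(ridges))  # response peaks near the ridge orientation
```

A descriptor like this is global in the sense that it summarises the whole sampled region, so it degrades gracefully when the small sensor area yields too few minutiae for point matching.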

Relevance:

60.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. networks of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability and more. This thesis identifies the problems past process migration facilities have had and determines the differing strategies that can be used to resolve them. This analysis has led to a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel.

Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simpler resources such as data structures, an address space and communication state. For this reason, the process migration facility does not migrate the resources of a process directly; instead, it requests the appropriate servers to transfer them. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain, and the design easily accommodates multiple migration strategies.

To verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System), a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a Process Manager; address space, maintained by a Memory Manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the Process, Memory and IPC Managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems that use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified.

This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use; instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery, enabling the detection of homogeneous workstations to which processes can be migrated.
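To make the delegation idea concrete, here is a minimal Python sketch of a migration manager that never touches resource contents itself but asks each resource server to freeze, transfer and release its part of the process. All class and method names are assumptions for illustration; the real RHODOS servers communicate via microkernel IPC, not method calls.

```python
# Sketch of migration by delegation: each resource server moves its own
# part of the process; the migration manager only coordinates.

class ResourceServer:
    """A server owning one kind of process resource (state, memory, IPC)."""
    def __init__(self, resource):
        self.resource = resource

    def freeze(self, pid):
        print(f"{self.resource}: freezing resource of process {pid}")

    def transfer(self, pid, dest):
        print(f"{self.resource}: shipping resource of {pid} to {dest}")

    def release(self, pid):
        print(f"{self.resource}: releasing local resource of {pid}")

class MigrationManager:
    def __init__(self, servers):
        self.servers = servers

    def migrate(self, pid, dest):
        for s in self.servers:      # 1. suspend the process piecewise
            s.freeze(pid)
        for s in self.servers:      # 2. each server transfers its resource
            s.transfer(pid, dest)
        for s in self.servers:      # 3. reclaim once the destination is live
            s.release(pid)

servers = [ResourceServer(r) for r in ("process state", "address space", "IPC state")]
MigrationManager(servers).migrate(pid=42, dest="node-B")
```

The modularity claimed in the abstract falls out of this structure: adding a migration strategy changes the coordinator's ordering of requests, not the servers themselves.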

Relevance:

60.00%

Publisher:

Abstract:

Current attempts to manage parallel applications on Clusters of Workstations (COWs) have generally either followed the parallel execution environment approach or been extensions to existing network operating systems; neither provides a complete or satisfactory solution. The efficient and transparent management of parallelism within the COW environment requires enhanced methods of process instantiation, mapping of parallel processes to workstations, maintenance of process relationships, process communication facilities, and process coordination mechanisms.

The aim of this research is to synthesise, design, develop and experimentally study a system capable of efficiently and transparently managing SPMD parallelism on a COW. Such a system should both improve the performance of SPMD-based parallel programs and relieve programmers of involvement in parallelism management, allowing them to concentrate on application programming. It is also the aim of this research to show that these objectives are best achieved by adding new special services to, and exploiting the existing services of, a client/server and microkernel based distributed operating system. To achieve these goals, the methods of experimental computer science were employed.

To specify the scope of this project, this work investigated the issues related to parallel processing on COWs and surveyed a number of relevant systems, including PVM, NOW and MOSIX. It was shown that although systems such as MOSIX provide a number of good services related to parallelism management, none of them forms a complete solution. The problems identified with these systems include: instantiation services that are not suited to parallel processing; duplication of services between the parallelism management environment and the operating system; and poor levels of transparency.

A high-performance and transparent system capable of managing the execution of SPMD parallel applications was synthesised, and the specific services of process instantiation, process mapping and process interaction were detailed. The process instantiation service designed here can instantiate parallel processes using either creation or duplication methods, and also supports multiple and group-based instantiation specifically designed for SPMD parallel processing. The process mapping service combines process allocation with dynamic load balancing to ensure the load of a COW remains balanced, not only when a parallel program is initialised but also during its execution. The process interaction service transparently maintains process relationships, communication and coordination between parallel processes regardless of their location within the COW. Together, these services form an original architecture and organisation of a system capable of fully managing the execution of SPMD parallel applications on a COW.

A logical design of a parallelism management system was derived from the synthesised system, and it was shown that it should ideally be based on a distributed operating system employing the client/server model, which provides the transparency, modularity and flexibility necessary for a complete parallelism management system. The services identified in the synthesised system were mapped to a set of server processes: a Process Instantiation Server providing advanced multiple and group-based process creation and duplication; a Process Mapping Server combining load collection, process allocation and dynamic load balancing services; and a Process Interaction Server providing transparent interprocess communication and coordination. A Process Migration Server was also identified as vital to support both the instantiation and mapping servers.

The RHODOS client/server and microkernel based distributed operating system was selected for the detailed design and implementation of this parallelism management system. RHODOS was enhanced to provide the required servers, resulting in the development of the REX Manager, Global Scheduler and Process Migration Manager to provide the services of process instantiation, mapping and migration, respectively. The process interaction services were already provided within RHODOS and required only some extensions to the existing Process Manager and IPC Manager.

A variety of experiments showed that when this system was used to support the execution of SPMD parallel applications, overall execution times were improved, especially when multiple and group-based instantiation services were employed. The RHODOS PMS was also shown to greatly reduce the programming burden experienced by users writing SPMD parallel applications, by providing a small set of powerful primitives specially designed to support parallel processing. The system was also shown to be applicable to, and has been used in, a variety of other research areas, such as Distributed Shared Memory, parallelising compilers, and assisting the port of PVM to the RHODOS system. The RHODOS Parallelism Management System (PMS) provides a unique and creative solution to the problem of transparently and efficiently controlling the execution of SPMD parallel applications on COWs. Combining advanced services such as multiple and group-based process creation and duplication, combined process allocation and dynamic load balancing, and complete COW-wide transparency produces a totally new system that addresses many of the problems left unaddressed by other systems.
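As an illustration of the "small set of powerful primitives" style of interface described above, the sketch below shows what a group-based SPMD instantiation primitive might look like, combining the mapping and instantiation services in one call. It is a Python toy with assumed names (`spmd_run`, `select_hosts` and the stub create/duplicate calls), not the actual RHODOS PMS API.

```python
# Toy sketch of a group-based SPMD instantiation primitive; all names are
# assumptions for illustration.

def create_process(image, host):
    """Stub: ask the instantiation server to create `image` on `host`."""
    return (image, host)

def duplicate_process(parent, host):
    """Stub: duplicate an already-initialised process onto another host."""
    return (parent[0], host)

def select_hosts(n):
    """Stub for the mapping service: pick the n least-loaded workstations."""
    return [f"ws{i:02d}" for i in range(n)]   # hypothetical host names

def spmd_run(image, n, duplicate=True):
    """Start n copies of an SPMD program across the COW in one request."""
    hosts = select_hosts(n)                   # mapping + load balancing
    if duplicate:
        # Group-based duplication: initialise one parent, then clone it,
        # avoiding n separate loads of the program image.
        parent = create_process(image, hosts[0])
        return [parent] + [duplicate_process(parent, h) for h in hosts[1:]]
    # Otherwise fall back to multiple creation, one instantiation per host.
    return [create_process(image, h) for h in hosts]

group = spmd_run("worker_image", 8)
print(group)
```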

Relevance:

60.00%

Publisher:

Abstract:

Shared clusters represent an excellent platform for the execution of parallel applications, given their low price/performance ratio and the presence of cluster infrastructure in many organisations. The focus of recent research efforts is on parallelism management, transparent and efficient access to resources, and making clusters easy to use. In this thesis, we examine reliable parallel computing on clusters. The aim of this research is to demonstrate the feasibility of an operating system facility providing transparent fault tolerance, built from existing, enhanced and newly created operating system services for supporting parallel applications. In particular, we use existing process duplication and process migration services, and synthesise a group communications facility, for use in a transparent checkpointing facility. This research is carried out using the methods of experimental computer science.

To provide a foundation for the synthesis of the group communications and checkpointing facilities, we survey and review related work in both fields. For group communications, we examine the V Distributed System, the x-kernel and Psync, the ISIS Toolkit, and Horus, and identify a need for services that consider the placement of processes on computers in the cluster. For checkpointing, we examine Manetho, KeyKOS, libckpt, and Diskless Checkpointing, and observe the use of remote computer memories for storing checkpoints and of copy-on-write mechanisms to reduce the time taken to checkpoint a process.

We propose a group communications facility providing two sets of services: user-oriented services, which provide transparency and target application programmers, and system-oriented services, which supplement the user-oriented services to support other operating system services and do not provide transparency. Additional flexibility is achieved by providing delivery and ordering semantics independently.

An operating system facility providing transparent checkpointing is synthesised using coordinated checkpointing. To ensure that a consistent set of checkpoints is generated, the facility blocks only non-deterministic events rather than blindly blocking the processes of a parallel application; this allows the processes to continue execution during the checkpoint operation. Checkpoints are created by adapting process duplication mechanisms, and checkpoint data is transferred to remote computer memories and disk for storage using the mechanisms of process migration. The services of the group communications facility are used to coordinate the checkpoint operation and to transport checkpoint data to remote computer memories and disk.

Both the group communications facility and the checkpointing facility have been implemented in the GENESIS cluster operating system and provide proof of concept. GENESIS uses a microkernel and client/server based operating system architecture and is demonstrated to provide an appropriate environment for the development of these facilities. We design a number of experiments to test the performance of both facilities and to provide proof of performance, and we present our approach to testing, the challenges raised in testing the facilities, and how we overcame them. For group communications, we examine the performance of a number of delivery semantics: good speed-ups are observed, and system-oriented group communication services are shown to provide significant performance advantages over user-oriented semantics in the presence of packet loss. For checkpointing, we examine the scalability of the facility given different levels of resource usage and a variable number of computers: low overheads are observed for checkpointing a parallel application. This research makes clear that a microkernel and client/server based cluster operating system provides an ideal environment for developing a high-performance group communications facility and a transparent checkpointing facility, yielding a platform for reliable parallel computing on clusters.
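The following is a minimal Python sketch of the coordinated-checkpointing protocol described above, with all names assumed: only non-deterministic events (e.g. message deliveries) are blocked, state is captured via copy-on-write duplication, and frozen images are shipped to remote storage afterwards. It is an illustration of the technique, not GENESIS's implementation.

```python
# Minimal coordinated-checkpoint sketch; all names are assumptions.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.events_blocked = False

    def block_nondeterministic_events(self):
        self.events_blocked = True    # e.g. defer message delivery

    def unblock_nondeterministic_events(self):
        self.events_blocked = False

    def duplicate_copy_on_write(self):
        # Stand-in for an adapted process-duplication mechanism.
        return {"pid": self.pid, "image": "<frozen state>"}

def coordinated_checkpoint(processes, store):
    # Phase 1: block non-deterministic events so no in-transit message can
    # make the set of checkpoints inconsistent; computation continues.
    for p in processes:
        p.block_nondeterministic_events()
    # Phase 2: snapshot each process cheaply via copy-on-write duplication.
    images = [p.duplicate_copy_on_write() for p in processes]
    # Phase 3: resume normal event delivery.
    for p in processes:
        p.unblock_nondeterministic_events()
    # Phase 4: stream frozen images to remote memory/disk in the background,
    # reusing process-migration transfer mechanisms.
    for img in images:
        store.append(img)

store = []
coordinated_checkpoint([Process(i) for i in range(4)], store)
print(len(store), "checkpoints stored")
```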

Relevance:

60.00%

Publisher:

Abstract:

The provision of fault tolerance is an important aspect of the success of distributed and cluster computing. Through this research, a transparent, autonomic and efficient fault tolerance facility was designed and implemented, thereby relieving users of the burden of handling and reacting to the failure of an application.

Relevance:

60.00%

Publisher:

Abstract:

This research aims at improving the accessibility of cluster computer systems by introducing autonomic self-management facilities incorporating: 1) resource discovery and self-awareness, 2) virtualised resource pools, and 3) automated cluster membership and self-configuration. These facilities reduce the user's programming workload and improve system usability.
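As an illustration of the third facility, here is a toy Python sketch, with assumed names and protocol, of automated cluster membership: nodes announce themselves periodically, and the membership view self-configures as announcements appear or time out. It is not the system's actual design.

```python
import time

class Membership:
    """Toy self-configuring membership view driven by periodic announcements."""
    def __init__(self, timeout_s=10.0):
        self.last_seen = {}            # node id -> time of last announcement
        self.timeout_s = timeout_s

    def on_announce(self, node_id):
        # A new node joins (or a known node refreshes) automatically.
        self.last_seen[node_id] = time.monotonic()

    def members(self):
        now = time.monotonic()
        # Nodes whose announcements time out silently leave the view.
        return sorted(n for n, t in self.last_seen.items()
                      if now - t < self.timeout_s)

view = Membership(timeout_s=10.0)
view.on_announce("node-1"); view.on_announce("node-2")
print(view.members())   # ['node-1', 'node-2']
```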

Relevance:

60.00%

Publisher:

Abstract:

Describes the design and implementation of an operating system kernel specifically designed to support real-time applications. It emphasises portability and aims to support state-of-the-art concepts in real-time programming. The paper discusses architectural aspects of the ARTOS kernel and introduces new concepts in the areas of interrupt processing, scheduling, mutual exclusion and inter-task communication. It also explains the programming environment of the ARTOS kernel and its task model, defines the real-time task states and system data structures, and discusses the exception handling mechanisms used to detect missed deadlines and take corrective action.
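The abstract does not detail the deadline-handling mechanism, so the following is only a minimal Python sketch of the general ideas named above: task states as an explicit enumeration, and a missed deadline surfaced as an exception so a corrective handler can run. ARTOS itself is an embedded kernel and would implement this very differently; all names here are assumptions.

```python
import time
from enum import Enum, auto

class TaskState(Enum):               # explicit real-time task states
    DORMANT = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()

class DeadlineMiss(Exception):
    """Raised when a task instance overruns its deadline."""

def run_with_deadline(task, deadline_s, on_miss):
    """Run one task instance; invoke a corrective handler on overrun."""
    start = time.monotonic()
    task()
    if time.monotonic() - start > deadline_s:
        on_miss(DeadlineMiss(f"budget of {deadline_s}s exceeded"))

# Example: a task that sleeps past its budget triggers the handler.
run_with_deadline(lambda: time.sleep(0.02),
                  deadline_s=0.01,
                  on_miss=lambda exc: print("corrective action:", exc))
```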

Relevance:

60.00%

Publisher:

Abstract:

In autonomously managed distributed systems for collaboration, provenance can facilitate the reuse of information that is interchanged, the repetition of successful experiments, and the provision of evidence for trust mechanisms that certain information existed at a certain period during collaboration. In this paper, we propose a domain-independent information provenance architecture for open collaborative distributed systems. The proposed system uses XML for interchanging information and RDF to track information provenance. The use of XML and RDF also ensures that information is universally acceptable, even among heterogeneous nodes. Our proposed information provenance model can work with any operating system or workflow.
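To make the XML-plus-RDF pairing concrete, here is a small Python sketch that records provenance for an interchanged XML document as RDF triples using the rdflib library. The vocabulary (the `EX` namespace and its property names) is illustrative, not the paper's actual schema.

```python
# Provenance triples for an interchanged XML document; vocabulary is assumed.

from rdflib import Graph, Literal, Namespace, URIRef   # pip install rdflib
from rdflib.namespace import RDF

EX = Namespace("http://example.org/provenance#")

g = Graph()
doc = URIRef("http://example.org/docs/experiment-42.xml")

g.add((doc, RDF.type, EX.InterchangedDocument))
g.add((doc, EX.createdBy, URIRef("http://example.org/nodes/node-7")))
g.add((doc, EX.createdAt, Literal("2010-05-01T12:00:00Z")))
g.add((doc, EX.derivedFrom, URIRef("http://example.org/docs/raw-inputs.xml")))

# Serialise to Turtle; any heterogeneous node can parse this back into its
# own RDF store to check when and where the information existed.
print(g.serialize(format="turtle"))
```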

Relevance:

60.00%

Publisher:

Abstract:

Each year, large amounts of money and labor are spent on patching the vulnerabilities in operating systems and various popular software to prevent exploitation by worms. Modeling the propagation process can help us devise effective strategies against worm spreading. This paper presents a microcosmic analysis of worm propagation procedures. Our proposed model differs from traditional methods: it examines the propagation procedure deep inside, among the nodes of the network, by concentrating on the propagation probability and time delay described by a complex matrix. Moreover, since the analysis gives a microcosmic insight into a worm's propagation, the proposed model can avoid errors that are usually concealed in traditional macroscopic analytical models. The objectives of this paper are to address three practical aspects of preventing worm propagation: (i) where do we patch? (ii) how many nodes do we need to patch? (iii) when do we patch? We implement a series of experiments to evaluate the effects of each major component of our microcosmic model. Based on the results drawn from the experiments, for high-risk vulnerabilities it is critical that networks reduce the proportion of vulnerable nodes to below 80%. We believe our microcosmic model can benefit the security industry by allowing significant savings in the deployment of security patching schemes.
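The following is a toy Python simulation of the matrix-based idea sketched above, under assumed formulation: entry P[i, j] is the probability that node i infects node j in one time step, and patched nodes cannot be infected. It illustrates the "where/how many do we patch" questions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(P, infected, patched, steps):
    """Toy discrete-time spread: P[i, j] = chance i infects j per step."""
    infected = infected.copy()
    for _ in range(steps):
        # Chance each node escapes infection from every infected neighbour.
        p_escape = np.prod(1.0 - P * infected[:, None], axis=0)
        newly = (rng.random(P.shape[0]) < 1.0 - p_escape) & ~patched
        infected |= newly
    return infected

n = 100
# Sparse random contact graph with per-edge infection probabilities.
P = (rng.random((n, n)) < 0.02) * rng.uniform(0.1, 0.5, (n, n))
infected = np.zeros(n, dtype=bool)
infected[0] = True                       # patient zero
patched = rng.random(n) < 0.2            # "where / how many do we patch?"
print(int(simulate(P, infected, patched, steps=10).sum()), "nodes infected")
```

Re-running with a larger patched fraction shows the qualitative effect the paper quantifies: infections drop sharply once enough vulnerable nodes are removed.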

Relevance:

60.00%

Publisher:

Abstract:

Operating systems and programs are more protected these days, and attackers have shifted their attention to human elements to break into organisations' information systems. As the number and frequency of cyber-attacks designed to take advantage of unsuspecting personnel increase, the significance of the human factor in information security management cannot be overstated. To counter cyber-attacks that exploit human factors in the information security chain, information security awareness aimed at reducing the security risks that arise from human-related vulnerabilities is paramount. This paper discusses and evaluates the effects of various information security awareness delivery methods used to improve end-users' information security awareness and behaviour. A wide range of delivery methods exists, such as web-based training materials, contextual training and embedded training; yet, in spite of efforts to increase information security awareness, research is scant regarding effective delivery methods. To this end, this study focuses on determining the delivery method that is most successful in providing information security awareness, and which method users prefer. We conducted information security awareness training using text-based, game-based and video-based delivery methods with the aim of determining user preferences. Our study suggests that combined delivery methods are better than any individual delivery method.

Relevance:

60.00%

Publisher:

Abstract:

Governments have traditionally censored drug-related information, both in traditional media and, in recent years, in online media. We explore Internet content regulation from a drug-policy perspective by describing the likely impacts of censoring drug websites and the parallel growth in hidden Internet services. Australia proposes a compulsory Internet filtering regime that would block websites that 'depict, express or otherwise deal with matters of… drug misuse or addiction' and/or 'promote, incite or instruct in matters of crime'. In this article, we present findings from a mixed-methods study of online drug discussion. Our research found that websites dealing with drugs, of the kind likely to be blocked by the filter, in fact contributed positively to harm reduction: such sites helped people access more comprehensive and relevant information than was available elsewhere. Blocking these websites would likely drive drug discussion underground at a time when corporate-controlled 'walled gardens' (e.g. Facebook) and proprietary operating systems on mobile devices may also limit open drug discussion. At the same time, hidden Internet services, such as Silk Road, have emerged that are not affected by Internet filtering. The inability of any government to regulate Tor websites and the crypto-currency Bitcoin poses a unique challenge to drug prohibition policies.

Relevance:

60.00%

Publisher:

Abstract:

Circos plots are graphical outputs that display three-dimensional chromosomal interactions and fusion transcripts. However, the Circos plot tool is not an interactive visualization tool but a figure generator: it does not allow data to be added dynamically, nor does it provide information for specific data points interactively. Recently, an R-based Circos tool (RCircos) has been developed to integrate Circos into R, but, similarly, RCircos can only generate plots. We have therefore developed J-Circos, an interactive visualization tool that can plot Circos figures, dynamically add data to a figure, and provide information for specific data points using mouse-hover display and zoom in/out functions. J-Circos is written in Java, enabling it to run on most operating systems (Windows, MacOS, Linux). Users can input data into J-Circos using flat data formats as well as through the graphical user interface (GUI). J-Circos will enable biologists to better study complex chromosomal interactions and fusion transcripts that are otherwise difficult to visualize from next-generation sequencing data.
Availability and implementation: J-Circos and its manual are freely available at http://www.australianprostatecentre.org/research/software/jcircos
Contact: j.an@qut.edu.au
Supplementary information: Supplementary data are available at Bioinformatics online.

Relevance:

60.00%

Publisher:

Abstract:

The use of tracking settings in smartphones facilitates the provision of tailored services to users by allowing service providers access to unique identifiers stored on the smartphones. In this paper, we investigate the 'tracking off' settings on the Blackberry 10 and Windows Phone 8 platforms. To determine if they work as claimed, we set up a test bed suitable for both operating systems to capture traffic between the smartphone and external servers. We dynamically execute a set of similar Blackberry 10 and Windows Phone 8 applications, downloaded from their respective official markets. Our results indicate that even if users turn off tracking settings in their smartphones, some applications leak unique identifiers without their knowledge. © 2014 Springer International Publishing Switzerland.
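As a toy illustration of the kind of leak check such a test bed enables, the Python sketch below searches captured traffic for a handset's unique identifier. It uses the scapy packet library; the capture file name and identifier value are hypothetical, and this is not the paper's actual methodology.

```python
# Toy leak check over recorded traffic between the phone and external
# servers; file name and identifier are hypothetical placeholders.

from scapy.all import Raw, rdpcap       # pip install scapy

DEVICE_ID = b"a1b2c3d4e5f6"             # hypothetical unique identifier

for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(Raw) and DEVICE_ID in bytes(pkt[Raw].load):
        print("identifier observed in cleartext:", pkt.summary())
```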

Relevance:

60.00%

Publisher:

Abstract:

The field of Intrusion Detection, although widely researched, does not yet answer some real problems such as levels of attack, network size and complexity, fault tolerance, authentication and privacy, and interoperability and standardisation. A research effort at the Informatics Institute of UFRGS, more specifically in the Security Group (GSEG), aims to develop a Distributed Intrusion Detection System with fault-tolerance characteristics. This project, named Asgaard, is the conception of a system whose objective is not restricted to being merely another Intrusion Detection tool, but a platform that makes it possible to aggregate new modules and techniques, representing an advance over other Detection Systems currently under development. One topic not yet addressed in this project is the detection of sniffers on the network, a way of preventing an attack from proceeding to other interconnected stations or networks, given that an intruder normally installs a sniffer after a successful attack. This work discusses sniffer-detection techniques and their scenarios, and evaluates the use of these techniques on a local network. The known techniques are tested in an environment with different operating systems, such as Linux and Windows, mapping the results regarding their effectiveness under diverse conditions.
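One classic technique in this family is the "ARP test" for promiscuous-mode detection: an ARP request is sent to a bogus unicast MAC address, which a normally configured NIC filters out, but which a host sniffing in promiscuous mode may pass up the stack and answer. Below is a minimal Python sketch using the scapy library (requires root privileges); the scanned address range is hypothetical, and this is an illustration of the technique rather than the work's exact test code.

```python
# ARP test for promiscuous NICs; requires root and scapy (pip install scapy).

from scapy.all import ARP, Ether, srp

def arp_promisc_test(target_ip, iface=None, timeout=2):
    """Return True if target_ip answers an ARP sent to a fake MAC."""
    fake_mac = "ff:ff:ff:ff:ff:fe"      # not broadcast, so NICs filter it
    pkt = Ether(dst=fake_mac) / ARP(pdst=target_ip)
    answered, _ = srp(pkt, iface=iface, timeout=timeout, verbose=False)
    return len(answered) > 0

# Example: scan a small LAN range (hypothetical) for suspicious hosts.
for host in [f"192.168.0.{i}" for i in range(1, 20)]:
    if arp_promisc_test(host):
        print(host, "answered the fake-MAC ARP: possible sniffer")
```

Results from this test vary with the operating system's ARP filtering behaviour, which is precisely the kind of cross-platform difference the evaluation above maps.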