892 results for Distributed computer systems
Abstract:
Monitoring multiple myeloma patients for relapse requires sensitive methods to measure minimal residual disease and to establish a more precise prognosis. The present study aimed to standardize a real-time quantitative polymerase chain reaction (PCR) test for the IgH gene with a JH consensus self-quenched fluorescence reverse primer and a VDJH or DJH allele-specific sense primer (self-quenched PCR). This method was compared with an allele-specific real-time quantitative PCR test for the IgH gene using a TaqMan probe and a JH consensus primer (TaqMan PCR). We studied nine multiple myeloma patients from the Spanish group treated with the MM2000 therapeutic protocol. Self-quenched PCR demonstrated a sensitivity of ≥10⁻⁴, or 16 genomes, in most cases; efficiency was 1.71 to 2.14, and intra-assay and interassay reproducibilities were 1.18% and 0.75%, respectively. Sensitivity, efficiency, and residual disease detection were similar with both PCR methods. TaqMan PCR failed in one case because of a mutation in the JH primer binding site, whereas self-quenched PCR worked well in this case. In conclusion, self-quenched PCR is a sensitive and reproducible method for quantifying residual disease in multiple myeloma patients; it yields results similar to TaqMan PCR and may be more effective when somatic mutations are present in the JH intronic primer binding site.
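For readers who want the arithmetic behind the reported efficiency and sensitivity figures, the sketch below (Python; the Ct values are made up, not data from the study) shows the standard-curve computation conventional to real-time quantitative PCR: the slope of Ct against log10 input gives the per-cycle amplification factor (2.0 = perfect doubling, matching the 1.71 to 2.14 range reported), and unknowns are read off the fitted line.

```python
import numpy as np

# Serial dilution of a standard (genome copies) and illustrative Ct values.
log_input = np.log10([1e5, 1e4, 1e3, 1e2, 1e1])
ct = np.array([17.1, 20.5, 23.9, 27.3, 30.7])

slope, intercept = np.polyfit(log_input, ct, 1)
efficiency = 10 ** (-1 / slope)       # amplification factor per cycle
print(f"per-cycle amplification: {efficiency:.2f}")   # ~1.97 here

def quantify(sample_ct: float) -> float:
    """Copy number of an unknown sample, read off the fitted standard curve."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"{quantify(30.7):.0f} copies")  # recovers the 10-copy standard
```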
Abstract:
Collaboration in the public sector is imperative to achieve e-government objectives such as improved efficiency and effectiveness of public administration and improved quality of public services. Collaboration across organizational and institutional boundaries requires public organizations to share e-government systems and services through, for instance, interoperable information technology and processes. Demands on public organizations to become more open also require that they adopt new collaborative approaches for inviting and engaging citizens in governmental activities. E-government-related collaboration in the public sector is challenging, however, and collaboration initiatives often fail. Public organizations need to learn how to collaborate, since the forms of e-government collaboration and their expected outcomes are mostly unknown. How public organizations can collaborate, and what outcomes they can expect, is therefore investigated in this thesis by studying multiple collaboration cases on the acquisition and implementation of a particular e-government investment (a digital archive). The thesis also investigates how e-government collaboration can be facilitated through artifacts: through a case study of objects that cross boundaries between collaborating communities in the public sector, and by designing a configurable process model integrating several processes for social services. Using design science, the thesis further investigates how an m-government solution that facilitates collaboration between citizens and public organizations can be designed. The thesis contributes to the literature by describing five different modes of interorganizational collaboration in the public sector and the expected benefits of each mode. It also contributes an instantiation of a configurable process model supporting three open social e-services, together with evidence of how it can facilitate collaboration, and it describes how boundary objects facilitate collaboration between different communities in an open government design initiative. It further contributes a designed mobile government solution, thereby providing proof of concept and initial design implications for enabling collaboration with citizens through citizen sourcing (outsourcing a governmental activity to citizens through an open call). Finally, the thesis identifies research streams within e-government collaboration research through a literature review and relates its contributions to those streams. Directions for future research include deepening the understanding of e-government collaboration and of how information and communication technology can facilitate collaboration in the public sector, investigating m-government solutions in order to form design theories, and examining how value can be co-created in e-government collaboration.
Abstract:
This thesis explores aesthetization in general, and fashion in particular, in digital technology design, and asks how we can design digital technology to account for the extended influence of fashion. It applies a combination of methods to explore the new design space at the intersection of fashion and technology. First, it contributes to theoretical understandings of the aesthetization and fashion institutionalization that influence digital technology design. We show that there is an unstable aesthetization in mobile design and that the increased aesthetization is closely related to the fashion industry. Fashion emerges through shared institutional activities, usually in the form of action nets, in the design of digital devices. “Tech Fashion” is proposed to interpret such dynamic action nets of institutional arrangements that make digital technology fashionable and desirable. Second, through associative design research, we have designed and developed two prototypes that account for institutionalized fashion values, such as the concept of the “outfit-centric accessory.” We call for more extensive collaboration between fashion design and interaction design.
Abstract:
Completed under a cotutelle (joint supervision) agreement with the École normale supérieure de Cachan – Université Paris-Saclay.
Abstract:
This paper presents a multi-class AdaBoost that incorporates an ensemble of binary AdaBoost classifiers organized as a Binary Decision Tree (BDT). Binary AdaBoost is extremely successful at producing accurate two-class classifiers, but it does not perform well on multi-class problems. To avoid this performance degradation, the multi-class problem is divided into a number of binary problems, and binary AdaBoost classifiers are invoked to solve them. The approach is tested on a dataset of 6500 binary images of traffic signs: Haar-like features of these images are computed, and the multi-class AdaBoost classifier is invoked to classify them. Classification rates of 96.7% and 95.7% are achieved for traffic sign borders and pictograms, respectively. The proposed approach is also evaluated on a number of standard datasets such as Iris, Wine, and Yeast. The performance of the proposed BDT classifier is high compared with the state of the art, and it converges quickly to a solution, indicating a reliable classifier.
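A minimal sketch of this construction, assuming a naive split of the classes into two groups at each node (the abstract does not specify the grouping strategy) and using scikit-learn's AdaBoostClassifier in place of the authors' binary AdaBoost:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class BDTAdaBoost:
    """Multi-class classifier: a binary decision tree of binary AdaBoosts."""

    def fit(self, X, y):
        self.tree_ = self._build(X, y, list(np.unique(y)))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return classes[0]                       # leaf: a single class
        left, right = classes[: len(classes) // 2], classes[len(classes) // 2 :]
        clf = AdaBoostClassifier(n_estimators=50)
        clf.fit(X, np.isin(y, left).astype(int))    # 1 = left group, 0 = right
        kl, kr = np.isin(y, left), np.isin(y, right)
        return (clf,
                self._build(X[kl], y[kl], left),
                self._build(X[kr], y[kr], right))

    def _predict_one(self, x, node):
        if not isinstance(node, tuple):
            return node                             # reached a leaf class
        clf, left, right = node
        branch = left if clf.predict(x.reshape(1, -1))[0] == 1 else right
        return self._predict_one(x, branch)

    def predict(self, X):
        return np.array([self._predict_one(x, self.tree_) for x in X])

# usage: y_pred = BDTAdaBoost().fit(X_train, y_train).predict(X_test)
```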
Abstract:
This article discusses event monitoring options for heterogeneous event sources as found in today's heterogeneous distributed information systems. It follows the central assumption that a fully generic event monitoring solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Following from this, the core result of the work presented here is the extension of a configurable event monitoring (Web) service to a variety of event sources. A service approach allows us to trade genericity for the exploitation of source-specific characteristics, and it thus delivers results for the areas of SOA, Web services, CEP, and EDA.
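A minimal sketch of the stated trade-off, with hypothetical names throughout: a generic monitoring core routes subscriptions to source-specific adapters, each of which exposes its own event types rather than pretending all sources are alike.

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict, List

Event = dict
Handler = Callable[[Event], None]

class EventSourceAdapter(ABC):
    """Wraps one heterogeneous event source behind a common interface
    while still exposing its source-specific event types."""
    @abstractmethod
    def supported_event_types(self) -> List[str]: ...
    @abstractmethod
    def subscribe(self, event_type: str, handler: Handler) -> None: ...

class LogFileAdapter(EventSourceAdapter):
    """Toy adapter: a source that only knows line-appended events."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Handler]] = {}
    def supported_event_types(self) -> List[str]:
        return ["line_appended"]
    def subscribe(self, event_type: str, handler: Handler) -> None:
        self._handlers.setdefault(event_type, []).append(handler)
    def emit(self, event: Event) -> None:          # driven by the source itself
        for h in self._handlers.get(event["type"], []):
            h(event)

class EventMonitoringService:
    """Generic core: routes subscriptions to the right source adapter."""
    def __init__(self) -> None:
        self._adapters: Dict[str, EventSourceAdapter] = {}
    def register(self, name: str, adapter: EventSourceAdapter) -> None:
        self._adapters[name] = adapter
    def subscribe(self, source: str, event_type: str, handler: Handler) -> None:
        adapter = self._adapters[source]
        if event_type not in adapter.supported_event_types():
            raise ValueError(f"{source} does not emit {event_type!r}")
        adapter.subscribe(event_type, handler)

svc = EventMonitoringService()
log = LogFileAdapter()
svc.register("syslog", log)
svc.subscribe("syslog", "line_appended", lambda e: print("got:", e))
log.emit({"type": "line_appended", "text": "disk full"})
```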
Abstract:
Abstract not available
Abstract:
Unstructured-mesh-based codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Such codes can achieve high performance on parallel platforms for a small investment in programming. The critical parameters for success are to minimise changes to the code (to ease maintenance) while providing high parallel efficiency, scalability to large numbers of processors, and portability to a wide range of platforms. The paradigm of domain decomposition with message passing has for some time been demonstrated to provide a high level of efficiency, scalability, and portability across shared- and distributed-memory systems without the need to re-author the code in a new language. This paper addresses these issues in the parallelisation of a complex three-dimensional unstructured-mesh finite volume multiphysics code and discusses the implications of automating the parallelisation process.
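As an illustration of the paradigm (not the code discussed in the paper), a minimal 1-D halo-exchange sketch in Python with mpi4py: each process owns a slab of the mesh, keeps ghost copies of its neighbours' boundary cells, and refreshes them by message passing every iteration.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 100                                  # cells owned by this rank
u = np.full(n + 2, float(rank))          # +2 ghost ("halo") cells
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(50):
    # swap halo values with neighbouring subdomains
    comm.Sendrecv(u[1:2],     dest=left,  recvbuf=u[n+1:n+2], source=right)
    comm.Sendrecv(u[n:n+1],   dest=right, recvbuf=u[0:1],     source=left)
    u[1:-1] = 0.5 * (u[:-2] + u[2:])     # Jacobi-style local update

print(rank, u[1], u[n])
```

Run with, e.g., `mpirun -np 4 python halo.py`; the same program runs on every processor, each working on its own subdomain.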
Abstract:
The difficulties encountered in implementing large-scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared-memory architecture manufacturers to provide effective parallelizing compilers, these have not proved adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiency on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem and communicates through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared- and distributed-memory systems, without the need to re-author the code in a new language or even to support differing message-passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework supporting 3D, unstructured-mesh, continuum mechanics modeling. In PHYSICA, six inspectors are used. Part of the challenge in automating parallelization is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
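The inspector-equivalence point can be shown in a few lines. A toy sketch (PHYSICA's inspectors are far richer): an inspector runs once to compute the communication schedule a loop needs; if two inspectors provably produce the same schedule, they can be merged and the analysis and message setup paid for once.

```python
def inspect(loop_indices, owned):
    """Inspector: run once to find the off-processor data a loop will read."""
    return sorted({i for i in loop_indices if i not in owned})

owned = set(range(0, 50))          # mesh entities owned by this processor
loop_a = [3, 48, 51, 97, 51]       # indirection arrays of two mesh loops
loop_b = [10, 51, 97]

sched_a, sched_b = inspect(loop_a, owned), inspect(loop_b, owned)
# Equal schedules => the two inspectors are equivalent and can be merged.
print(sched_a, sched_b, sched_a == sched_b)   # [51, 97] [51, 97] True
```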
Abstract:
Stand-alone and networked surgical virtual-reality-based simulators have been proposed as means to train surgical skills with or without a supervisor near the student or trainee. However, surgical skills teaching in medical schools and hospitals is changing, requiring the development of new tools that focus on: (i) the importance of the mentor's role, (ii) teamwork skills, and (iii) remote training support. For these reasons, a surgical simulator should not only allow training involving a student and an instructor who are located remotely, but also collaborative training of users adopting different medical roles during the training session. Collaborative Networked Virtual Surgical Simulators (CNVSS) allow collaborative training of surgical procedures in which remotely located users with different surgical roles can take part in the training session. To provide successful training with good collaborative performance, CNVSS should handle heterogeneity factors such as users' machine capabilities and network conditions, among others. Several systems for collaborative training of surgical procedures have been developed as research projects; to the best of our knowledge, none has focused on handling heterogeneity in CNVSS. Handling heterogeneity in this type of collaborative session is important because remotely located users do not all have homogeneous internet connections, the same interaction devices and displays, or the same computational resources, among other factors. Additionally, if heterogeneity is not handled properly, it has an adverse impact on the performance of each user during the collaborative session. This document proposes the development of a context-aware architecture for collaborative networked virtual surgical simulators that handles the heterogeneity involved in the collaboration session. To achieve this, the following main contributions are made in this thesis: (i) the infrastructure heterogeneity factors that affect the collaboration of two users performing a virtual surgical procedure were determined and analyzed through a set of experiments involving collaborating users; (ii) a context-aware software architecture for a CNVSS was proposed and implemented, which handles the heterogeneity factors affecting collaboration by applying various adaptation mechanisms; and (iii) a mechanism for handling the heterogeneity factors involved in a CNVSS is described, implemented, and validated in a set of testing scenarios.
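A minimal sketch (hypothetical thresholds and setting names, not the thesis architecture) of the kind of per-user adaptation such a context-aware system applies once it knows each client's machine capabilities and network conditions:

```python
from dataclasses import dataclass

@dataclass
class ClientContext:
    latency_ms: float        # network round-trip to the session host
    bandwidth_mbps: float
    gpu_score: float         # normalised machine capability (0..1)

def adapt(ctx: ClientContext) -> dict:
    """Choose per-user settings so heterogeneity does not degrade collaboration."""
    settings = {"mesh_detail": "high", "update_rate_hz": 60, "prediction": False}
    if ctx.gpu_score < 0.5:
        settings["mesh_detail"] = "low"        # weaker machine: simpler models
    if ctx.bandwidth_mbps < 5:
        settings["update_rate_hz"] = 20        # fewer state updates per second
    if ctx.latency_ms > 150:
        settings["prediction"] = True          # dead reckoning to mask latency
    return settings

print(adapt(ClientContext(latency_ms=200, bandwidth_mbps=3, gpu_score=0.4)))
```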
Abstract:
Advances in FPGA technology and requirements for higher processing capabilities have driven the emergence of All Programmable Systems-on-Chip, which incorporate a hardened processing system and programmable logic, enabling the development of specialized computer systems for a wide range of practical applications, including data and signal processing, high-performance computing, and embedded systems, among many others. To provide an infrastructure capable of exploiting the benefits of such a reconfigurable system, the main goal of this thesis is to implement an infrastructure composed of hardware, software, and network resources that incorporates the necessary services for the operation, management, and interfacing of peripherals, which compose the basic building blocks for the execution of applications. The project will be developed using a chip from the Zynq-7000 All Programmable System-on-Chip family.
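A minimal sketch (hypothetical register address; requires root) of the usual way software on the Zynq processing system reaches a peripheral implemented in the programmable logic: the peripheral's AXI registers are memory-mapped via /dev/mem and read or written directly.

```python
import mmap, os, struct

GPIO_BASE = 0x41200000           # hypothetical AXI GPIO base address in the PL
PAGE = mmap.PAGESIZE

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)   # needs root privileges
mem = mmap.mmap(fd, PAGE, offset=GPIO_BASE)       # map one page of registers

def write_reg(offset: int, value: int) -> None:
    struct.pack_into("<I", mem, offset, value)    # 32-bit little-endian write

def read_reg(offset: int) -> int:
    return struct.unpack_from("<I", mem, offset)[0]

write_reg(0x0, 0x1)              # e.g. drive an output wired up in the PL
print(hex(read_reg(0x0)))

mem.close(); os.close(fd)
```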
Abstract:
The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors depending on the guest system workloads and execution time.
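A minimal sketch of the merge-plus-copy-on-write idea (illustrative only: production scanners such as KVM's KSM compare full page contents rather than trusting a hash, and operate on physical page frames):

```python
import hashlib

class PageStore:
    """Identical pages are detected and merged to one copy; a write
    triggers copy-on-write and breaks the sharing."""

    def __init__(self):
        self.shared = {}             # digest -> page contents
        self.refs = {}               # digest -> reference count

    def insert(self, page: bytes) -> str:
        d = hashlib.sha256(page).hexdigest()
        if d in self.shared:
            self.refs[d] += 1        # merge: point this VM at the existing copy
        else:
            self.shared[d] = page
            self.refs[d] = 1
        return d

    def write(self, d: str, new_page: bytes) -> str:
        """Copy-on-write: writing to a shared page yields a private copy."""
        self.refs[d] -= 1
        if self.refs[d] == 0:
            del self.shared[d], self.refs[d]
        return self.insert(new_page)

store = PageStore()
p = b"\x00" * 4096
d1 = store.insert(p)
d2 = store.insert(bytes(p))               # same content from another "VM"
assert d1 == d2 and store.refs[d1] == 2   # merged: one physical copy
d3 = store.write(d2, b"\x01" * 4096)      # write breaks the sharing
```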
Abstract:
The use of distributed embedded systems in areas such as robotics, industrial automation, and avionics has become widespread in recent years. Systems of this type are composed of several nodes, usually referred to as embedded systems, interconnected through a communication infrastructure that enables them to exchange information in pursuit of a common goal. Distributed embedded systems typically have very demanding timing requirements. Ethernet technology, and the real-time communication protocols developed for it, cannot effectively reconcile the timing requirements of real-time applications with the Quality of Service (QoS) requirements of the different traffic types. The Hard Real-Time Ethernet Switching (HaRTES) switch was developed and implemented to address these problems, thanks to capabilities such as the synchronization of different streams and the management of different traffic types. This dissertation presents the adaptation of a physical system to demonstrate the correct operation of the communication system, which will be developed and implemented using a HaRTES switch as the element responsible for exchanging information between the nodes on the network. The performance of the resulting network architecture will also be tested and evaluated.
Abstract:
Starting in December 1982, the University of Nottingham decided to phototypeset almost all of its examination papers `in house' using the troff, tbl and eqn programs running under UNIX. This tutorial lecture highlights the features of the three programs, with particular reference to their strengths and weaknesses in a production environment. The following issues are particularly addressed:
Standards -- all three software packages require the embedding of commands and the invocation of pre-written macros, rather than `what you see is what you get'. This can help to enforce standards in the absence of traditional compositor skills.
Hardware and Software -- the requirements are analysed for an inexpensive preview facility and a low-level interface to the phototypesetter.
Mathematical and Technical papers -- the fine-tuning of eqn to impose a standard house style.
Staff skills and training -- systems of this kind do not require the operators to have had previous experience of phototypesetting; of much greater importance is willingness and flexibility in learning how to use computer systems.
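For readers unfamiliar with the embedded-command style contrasted here with `what you see is what you get', a small eqn fragment (illustrative, not the Nottingham house style) shows how mathematics is marked up between .EQ/.EN requests and then rendered by the phototypesetter:

```
.EQ
x = { - b +- sqrt { b sup 2 - 4ac } } over { 2a }
.EN
```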