892 results for: Parallel computing, Virtual machine, Composition, Determinism, Abstraction


Relevance: 30.00%

Abstract:

This paper introduces a screw-theory-based method, termed the constraint and position identification (CPI) approach, to synthesize decoupled spatial translational compliant parallel manipulators (XYZ CPMs) with consideration of actuation isolation. The proposed approach is based on a systematic arrangement of rigid stages and compliant modules in a three-legged XYZ CPM system using the constraint spaces and the position spaces of the compliant modules. The constraint spaces and the position spaces are first derived based on screw theory rather than on rigid-body mechanism design experience. Additionally, the constraint spaces are classified into different constraint combinations, with typical position spaces depicted via geometric entities. Furthermore, the systematic synthesis process based on the constraint combinations and the geometric entities is demonstrated via several examples. Finally, several novel decoupled XYZ CPMs with monolithic configurations are created and verified by finite element analysis. The present CPI approach enables both experts and beginners to synthesize a variety of decoupled XYZ CPMs with actuation isolation by selecting an appropriate constraint and an optimal position for each compliant module according to the specific application.
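
To make the screw-theoretic step concrete, the sketch below (our illustration, not code from the paper) computes a constraint space numerically: given the freedom twists of a compliant module, the constraint wrenches are those reciprocal to every twist, found here as an SVD nullspace. The example twists are hypothetical.

```python
import numpy as np

def reciprocal_wrenches(twists):
    """Given freedom twists (rows, ordered (omega, v)), return a basis of
    constraint wrenches (ordered (f, m)) reciprocal to all of them.

    The reciprocal product of twist t = (omega, v) and wrench w = (f, m)
    is omega . m + v . f = t^T @ Delta @ w, with Delta swapping the halves.
    """
    T = np.atleast_2d(np.asarray(twists, dtype=float))
    Delta = np.block([[np.zeros((3, 3)), np.eye(3)],
                      [np.eye(3), np.zeros((3, 3))]])
    A = T @ Delta                      # constraint wrenches satisfy A @ w = 0
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-9))
    return Vt[rank:]                   # nullspace rows form a wrench basis

# Example: a module free to translate along x, y, z (pure translation twists)
twists = [[0, 0, 0, 1, 0, 0],
          [0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 1]]
print(reciprocal_wrenches(twists))     # three pure-couple constraint wrenches
```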

Relevance: 30.00%

Abstract:

Crystallization is the critical process used by the pharmaceutical industry to achieve the desired size, size distribution, shape and polymorphism of a product material. Control of these properties presents a major challenge, since they considerably influence downstream processing. Experimental work was carried out to find ways of controlling the crystal shape of Lacosamide, an active pharmaceutical ingredient developed by UCB Pharma, during crystallization. It was found that the crystal lattice displayed very strong unidirectional double hydrogen bonding, which was the origin of the needle shape of the Lacosamide crystals. Two main strategies were followed to hinder this hydrogen bonding and compete with the addition of Lacosamide molecules along the crystal length axis: changing the crystallization medium or weakening the hydrogen bonding. Various solvents were tested to check whether the solvent used to crystallize Lacosamide had an influence on the final crystal shape. Solvent molecules seemed to slow down growth along the length axis by hindering the unidirectional hydrogen bonding of Lacosamide crystals, but not enough to promote crystal growth along the width axis. Additives were also tested; certain additives were shown to compete with the hydrogen bonding of Lacosamide more efficiently than solvent molecules. The additive effect was also shown to be compatible with the solvent effect. In parallel, hydrogen atoms in Lacosamide were exchanged for deuterium atoms in order to weaken the hydrogen bonds. Weakening the hydrogen bonds of Lacosamide allowed the crystal to grow along the width axis. Deuteration was found to be combinable with the solvent effect while competing with the additive effect. The Lacosamide molecule was ultimately classified as an absolute needle in the terms of Lovette and Doherty, and the results of this dissertation are aimed at contributing to that classification.

Relevance: 30.00%

Abstract:

Molecular biological methods were used to investigate the microbial diversity and community structure in intertidal sandy sediments near the island of Sylt (Wadden Sea), at a site characterized for transport and mineralization rates by de Beer et al. (2005, hdl:10013/epic.21375). Sampling was performed during low tide in the middle of the flat, approximately 40 m offshore from the high-water line, on October 6, 1999, March 7, 2000, and July 5, 2000. Two parallel cores were collected in each season for molecular analyses. Within 2 h of sampling, the sediment cores were sub-sampled and fixed in formaldehyde for FISH analysis. The cells were hybridized, stained with 4',6'-diamidino-2-phenylindole (DAPI) and counted microscopically as described previously [55]; the probes and formamide concentrations used are documented in the further details. Counts are reported as means calculated from 10-15 randomly chosen microscopic fields corresponding to 700-1000 DAPI-stained cells, corrected for the signals counted with the negative-control probe NON338. Fluorescence in situ hybridization (FISH) with group-specific rRNA-targeted oligonucleotide probes was used to characterize the microbial community structure over depth (0-12 cm) and seasons (March, July, October). We found high abundances of bacteria, with total cell numbers up to 3×10**9 cells/ml, and a clear seasonal variation, with higher values in July and October than in March. The microbial community was dominated by members of the Planctomycetes, the Cytophaga/Flavobacterium group, Gammaproteobacteria, and bacteria of the Desulfosarcina/Desulfococcus group. The high abundance (1.5×10**7 - 1.8×10**8 cells/ml, accounting for 3-19% of all cells) of presumably aerobic, heterotrophic, polymer-degrading planctomycetes is in line with the high permeability, deep oxygen penetration, and high rates of aerobic mineralization of algal biomass measured in the sandy sediments by de Beer et al. (2005, hdl:10013/epic.21375). The high and stable abundance of members of the Desulfosarcina/Desulfococcus group, over both depth and season, suggests that these bacteria may play a more important role than previously assumed based on the low sulfate reduction rates in parallel cores (de Beer et al., 2005).
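
The counting arithmetic described above can be summarised in a small sketch (ours, with hypothetical field counts): per-probe counts are averaged over the microscopic fields, corrected by subtracting the NON338 negative-control signal, and expressed as a fraction of DAPI-stained cells.

```python
import numpy as np

def corrected_fish_fraction(probe_counts, non338_counts, dapi_counts):
    """Mean probe-positive cells per field, corrected for the NON338
    negative control, as a fraction of DAPI-stained cells."""
    probe = np.mean(probe_counts)
    control = np.mean(non338_counts)
    dapi = np.mean(dapi_counts)
    return max(probe - control, 0.0) / dapi

# Hypothetical counts from 10 randomly chosen microscopic fields
probe = [12, 9, 15, 11, 10, 13, 8, 14, 12, 11]
non338 = [1, 0, 2, 1, 0, 1, 1, 0, 2, 1]
dapi = [85, 78, 92, 80, 74, 88, 70, 95, 83, 79]
print(f"{corrected_fish_fraction(probe, non338, dapi):.1%} of all cells")
```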

Relevance: 30.00%

Abstract:

Interacting with a computer system in the operating room (OR) can be a frustrating experience for a surgeon, who currently has to verbally delegate every computer interaction task to an assistant. This indirect mode of interaction is time-consuming, error-prone and can lead to poor usability of OR computer systems. This thesis describes the design and evaluation of a joystick-like device that allows direct surgeon control of the computer in the OR. The device was tested extensively against a mouse and delegated dictation with seven surgeons, eleven residents, and five graduate students. The device contains no electronic parts, is easy to use, is unobtrusive, has no physical connection to the computer and makes use of an existing tool in the OR. We performed a user study to determine its effectiveness in allowing a user to perform all the tasks they would be expected to perform on an OR computer system during computer-assisted surgery. Dictation was found to be superior to the joystick in quantitative measures, but the joystick was preferred over dictation in user satisfaction responses. The mouse outperformed both the joystick and dictation, but it is not a readily accepted modality in the OR.

Relevance: 30.00%

Abstract:

With the rapid advance of web service technologies, end-users can conduct various on-line tasks, such as shopping. Usually, end-users compose a set of services to accomplish a task and need to enter values into those services to invoke the composite service. Quite often, users revisit websites and use services to perform recurring tasks, and so must enter the same information into various web services again and again. This repetitive typing is tedious and degrades the user experience. Recent studies have proposed several approaches to help users fill values into services automatically. However, prior studies mainly suffer from the following drawbacks: (1) limited support for collecting and analyzing user inputs; (2) poor accuracy when filling values into services; (3) designs that do not account for service composition. To overcome these drawbacks, we need to maximize the reuse of previous user inputs across services and end-users. In this thesis, we introduce approaches that spare end-users from re-entering the same information in recurring on-line tasks. More specifically, we improve the process of filling out services in the following four aspects. First, we investigate the characteristics of input parameters and propose an ontology-based approach to automatically categorize parameters and fill values into the categorized input parameters. Second, we propose a comprehensive framework that incorporates user contexts and usage patterns into the process of filling values into services. Third, we propose an approach for maximizing value propagation among services and end-users by linking sets of semantically related parameters and similar end-users. Last, we propose a ranking-based framework that ranks the previous user inputs for an input parameter, saving the user from unnecessary data entries. Our framework learns and analyzes the interactions between user inputs and input parameters to rank user inputs for input parameters under different contexts.
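
As a rough sketch of the last component (our illustration, not the thesis's implementation), previous inputs for a parameter can be scored by a weighted mix of frequency, recency, and overlap between the current context and the context in which each value was used. All names and weights here are hypothetical.

```python
import math
import time

def rank_previous_inputs(history, current_context, now=None,
                         w_freq=0.4, w_recency=0.3, w_context=0.3):
    """Rank previously entered values for one input parameter.

    history: list of (value, timestamp, context) tuples, where context is
    a set of tags (e.g. {"work", "evening"}). Weights are illustrative.
    """
    now = now or time.time()
    scores = {}
    for value, ts, ctx in history:
        freq = sum(1 for v, _, _ in history if v == value) / len(history)
        recency = math.exp(-(now - ts) / 86400.0)   # exponential decay per day
        overlap = len(ctx & current_context) / max(len(ctx | current_context), 1)
        score = w_freq * freq + w_recency * recency + w_context * overlap
        scores[value] = max(scores.get(value, 0.0), score)
    return sorted(scores, key=scores.get, reverse=True)

history = [("alice@example.com", time.time() - 3600, {"work"}),
           ("alice.personal@example.com", time.time() - 864000, {"home"})]
print(rank_previous_inputs(history, {"work"}))  # work address ranked first
```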

Relevance: 30.00%

Abstract:

This paper presents the Accurate Google Cloud Simulator (AGOCS), a novel high-fidelity Cloud workload simulator based on parsing real workload traces, which can be conveniently used on a desktop machine for day-to-day research. Our simulation is based on real-world workload traces from a Google cluster with 12.5K nodes, recorded over one calendar month. The framework is able to reveal precise and detailed parameters of the executed jobs, tasks and nodes, and to provide actual resource usage statistics. The system has been implemented in Scala with a focus on parallel execution and an easy-to-extend design. The paper presents the detailed structural framework of AGOCS, discusses our main design decisions, and suggests alternative and possibly performance-enhancing future approaches. The framework is available via an open-source GitHub repository.
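
To make the trace-parsing idea concrete, here is a minimal sketch (ours; AGOCS itself is written in Scala) that aggregates per-job CPU requests from a Google-cluster-style task_events CSV in parallel. The column indices follow the published cluster-trace layout, but treat them and the file pattern as assumptions.

```python
import csv
import glob
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

# Google cluster-trace (2011) task_events columns, assumed layout:
JOB_ID, CPU_REQUEST = 2, 9

def cpu_requests_in_file(path):
    """Sum requested CPU (normalized units) per job in one trace shard."""
    totals = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row[CPU_REQUEST]:               # field may be empty
                totals[row[JOB_ID]] += float(row[CPU_REQUEST])
    return totals

def aggregate(pattern="task_events/part-*.csv"):
    """Parse trace shards in parallel and merge the per-job totals."""
    totals = Counter()
    with ProcessPoolExecutor() as pool:
        for partial in pool.map(cpu_requests_in_file, glob.glob(pattern)):
            totals.update(partial)             # Counter addition merges sums
    return totals

if __name__ == "__main__":
    for job, cpu in aggregate().most_common(10):
        print(job, cpu)
```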

Relevance: 30.00%

Abstract:

This paper describes a substantial effort to build a real-time interactive multimodal dialogue system focused on emotional and non-verbal interaction capabilities. The work is motivated by the aim to provide technology with the competences needed to perceive and produce the emotional and non-verbal behaviours required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting particularly suited to the study of emotional and non-verbal behaviour, since it requires only very limited verbal understanding on the part of the machine. This scenario allows us to concentrate on non-verbal capabilities without simultaneously addressing the challenges of spoken language understanding, task modelling, and so on. We first summarise three prototype versions of the SAL scenario, in which the behaviour of the Sensitive Artificial Listener characters was determined by a human operator. These prototypes served to verify the effectiveness of the SAL scenario and allowed us to collect the data required for building system components that analyse and synthesise the respective behaviours. We then describe the fully autonomous integrated real-time system we created, which combines incremental analysis of user behaviour, dialogue management, and synthesis of speaker and listener behaviour of a SAL character displayed as a virtual agent. We discuss principles that should underlie the evaluation of SAL-type systems. Since the system is designed for modularity and reuse, and since it is publicly available, the SAL system has potential as a joint research tool for the affective computing research community.
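
The integrated system's overall shape (incremental analysis feeding a dialogue manager that drives listener/speaker synthesis) can be sketched roughly as below. This is our simplification, not the SAL codebase, and every class and behaviour name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    arousal: float       # incremental estimates from audio-visual analysis
    valence: float
    is_speaking: bool

class DialogueManager:
    """Chooses a SAL character action from the evolving user state."""
    def act(self, state: UserState) -> str:
        if state.is_speaking:
            # Listener behaviour: backchannel matched to detected emotion
            return "nod" if state.valence >= 0 else "frown"
        # Speaker behaviour: emotionally coloured canned utterance
        if state.valence > 0.3:
            return "say:That sounds wonderful!"
        return "say:Tell me more."

def run_turn(analyser, manager, synthesiser):
    """One pass of the analyse -> decide -> synthesise loop."""
    state = analyser()               # incremental analysis of user behaviour
    action = manager.act(state)      # dialogue management
    synthesiser(action)              # synthesis of listener/speaker behaviour

# Hypothetical stand-ins for the analysis and synthesis components
run_turn(lambda: UserState(0.5, 0.6, False), DialogueManager(), print)
```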

Relevance: 30.00%

Abstract:

This theoretical paper attempts to define some of the key components of, and challenges involved in, creating embodied conversational agents that can be genuinely interesting conversational partners. Wittgenstein's argument concerning talking lions emphasizes the importance of shared common ground as a basis for conversational interaction. Virtual bats suggests that, for some people at least, it is important that there be a feeling of authenticity concerning a subjectively experiencing entity that can convey what it is like to be that entity. Electric sheep reminds us of the importance of empathy in human conversational interaction, and that we should provide a full communicative repertoire of both verbal and non-verbal components if we are to create genuinely engaging interactions; indeed, we may be making the task harder rather than easier if we leave out the non-verbal aspects of communication. Finally, analogical peacocks highlights the importance of between-minds alignment and establishes the longer-term goal of being interesting, creative, and humorous if an embodied conversational agent is to be a truly engaging conversational partner. Some potential directions and solutions for addressing these issues are suggested.

Relevance: 30.00%

Abstract:

In this paper we advocate the Loop-of-stencil-reduce pattern as a way to simplify the parallel programming of heterogeneous platforms (multicore + GPUs). Loop-of-stencil-reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop. It transparently targets (by using OpenCL) combinations of CPU cores and GPUs, and it makes it possible to simplify the deployment of a single stencil computation kernel on different GPUs. The paper discusses the implementation of Loop-of-stencil-reduce within the FastFlow parallel framework, using a simple iterative data-parallel application (Game of Life) as a running example and a highly effective parallel filter for visual data restoration to assess performance. Thanks to the high-level design of Loop-of-stencil-reduce, it was possible to run the filter seamlessly on a multicore machine, on multiple GPUs, and on both.
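
The pattern's shape can be conveyed with a small sequential sketch (ours, in NumPy rather than FastFlow/OpenCL): a stencil sweep over the grid, a reduction over the result, and both repeated inside a loop until a convergence condition or iteration bound is met. Game of Life is the stencil here, as in the paper's running example.

```python
import numpy as np

def stencil(grid):
    """Game of Life step: count the 8 neighbours via shifted copies."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

def loop_of_stencil_reduce(grid, max_iters=100):
    """Loop-of-stencil-reduce: iterate the stencil, reducing each sweep
    to a scalar (here, cells changed) that drives the loop condition."""
    for it in range(max_iters):
        new = stencil(grid)                   # stencil phase
        changed = int(np.sum(new != grid))    # reduce phase
        grid = new
        if changed == 0:                      # loop condition from the reduce
            return grid, it + 1
    return grid, max_iters

rng = np.random.default_rng(0)
final, iters = loop_of_stencil_reduce(rng.integers(0, 2, (64, 64), dtype=np.uint8))
print(f"stopped after {iters} sweeps")
```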

Relevance: 30.00%

Abstract:

In acoustic instruments, the controller and the sound-producing system are often one and the same object. If virtual-acoustic instruments are to be designed not only to simulate the vibrational behaviour of a real-world counterpart but also to inherit much of its interface dynamics, it makes sense for the physical form of the controller to be similar to that of the emulated instrument. The specific physical-model configuration discussed here reconnects a (silent) string controller with a modal-synthesis string resonator across the real and virtual domains by directly routing excitation signals and model parameters. The excitation signals are estimated in their original force-like form via careful calibration of the sensor, using adaptive filtering techniques to design an appropriate inverse filter. In addition, the excitation position is estimated from sensors mounted under the legs of the bridges at either end of the prototype string controller. The proposed methodology is explained and exemplified with preliminary results obtained from a number of off-line experiments.
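
As a rough illustration of the modal string resonator (our sketch, not the authors' model), each mode can be realised as a damped second-order resonator driven by the estimated excitation signal, with modal frequencies near integer multiples of the fundamental. The fundamental, decay time and 1/k amplitude weighting below are illustrative choices.

```python
import numpy as np

def modal_string(excitation, f0=110.0, fs=44100, n_modes=20, t60=2.0):
    """Sum of damped sinusoidal modes driven by an excitation signal.

    Each mode k is a two-pole resonator at ~k*f0 whose pole radius is set
    so the amplitude falls by 60 dB over t60 seconds.
    """
    out = np.zeros(len(excitation))
    r = 10 ** (-3 / (t60 * fs))                # per-sample decay for T60
    for k in range(1, n_modes + 1):
        fk = k * f0
        if fk >= fs / 2:                       # stay below Nyquist
            break
        a1 = -2 * r * np.cos(2 * np.pi * fk / fs)
        a2 = r * r
        y1 = y2 = 0.0
        for n, x in enumerate(excitation):
            y = x / k - a1 * y1 - a2 * y2      # resonator difference equation
            out[n] += y
            y1, y2 = y, y1
    return out / n_modes

# Hypothetical force-like excitation: a short raised-cosine pluck
fs = 44100
pluck = np.zeros(fs)
pluck[:64] = 0.5 * (1 - np.cos(2 * np.pi * np.arange(64) / 64))
audio = modal_string(pluck, fs=fs)
```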

Relevance: 30.00%

Abstract:

The education of the radiography profession is based in higher education establishments, yet a critical part of all radiography programmes is the clinical component, where students learn the practical skills of the profession. Assessments therefore have to evaluate not only a student's knowledge but also their clinical competence and core skills, in line with both Health and Care Professions Council and Society and College of Radiographers requirements. This thesis examines the possibility of using the Virtual Environment for RadioTherapy (VERT) as a tool to assess a student's competence, offering the advantage of a standardized assessment and relieving time pressures in the clinical department. A mixed-methods approach was taken, which can be described as a quantitative-qualitative design with the emphasis on the quantitative element: a so-called QUAN → qual design. The quantitative evaluation compared two simulations, one in the virtual-reality environment and one in the department using a real treatment machine. Students were asked to perform two electron setups in each simulation; the order was decided randomly, making the study a randomised cross-over design. Qualitative data were then collected in student focus groups to explore student perspectives in more depth. Findings indicated that performance in the two simulations differed significantly (p < 0.001), with the virtual simulation scoring significantly lower than the hospital-based simulation overall and on virtually all parameters assessed. Thematic analysis of the qualitative data supported this finding and identified four main themes: equipment use, a lack of reality, learning opportunities, and assessment of competence. A further sub-theme identified under reality was that of the environment and senses.

Relevance: 30.00%

Abstract:

Individuals and corporate users are increasingly considering cloud adoption due to its significant benefits compared to traditional computing environments. Data and applications in the cloud are stored in an environment that is separated, managed and maintained externally to the organisation. It is therefore essential for cloud providers to demonstrate and implement adequate security practices to protect the data and processes put under their stewardship. Security transparency in the cloud is likely to become the core theme underpinning the systematic disclosure of security designs and practices that enhance customer confidence in using cloud service and deployment models. In this paper, we present a framework that enables a detailed analysis of security transparency for cloud-based systems. In particular, we consider security transparency at three different levels of abstraction, i.e., the conceptual, organisational and technical levels, and identify the relevant concepts within each level. This allows us to elaborate the essential concepts at the core of transparency and to analyse the means of implementing them from a technical perspective. Finally, an example from a real-world migration context is given to provide a grounded discussion of the applicability of the proposed framework.

Relevance: 30.00%

Abstract:

In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems that covers a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations of this kind, written in the same descriptive language for different architectures, are very important for realistically gauging the potential of heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
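
The portability argument rests on OpenCL's host/kernel split: the same kernel source is compiled at run time for whichever device the platform exposes. A minimal sketch (ours, via pyopencl; not the HYCA code) applying one element-wise kernel to a hyperspectral-sized array, with hypothetical cube dimensions:

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void scale(__global const float *x, __global float *y, float a) {
    int i = get_global_id(0);      // one work-item per sample
    y[i] = a * x[i];
}
"""

# The same source builds for a CPU, GPU or accelerator device
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

x = np.random.rand(512 * 512 * 224).astype(np.float32)  # hypothetical cube
y = np.empty_like(x)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, y.nbytes)

prog.scale(queue, (x.size,), None, x_buf, y_buf, np.float32(0.5))
cl.enqueue_copy(queue, y, y_buf)
print(np.allclose(y, 0.5 * x))
```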

Relevance: 30.00%

Abstract:

A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of the computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured-mesh computational mechanics applications, intended to handle varying levels of load change throughout a run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
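
A common way to frame the remapping decision (our sketch, not the CAPTools policy) is to compare the cost of a remap against the time the current imbalance is expected to waste over the remaining iterations, and remap only when the projected waste exceeds the remap cost.

```python
def should_remap(step_times, remap_cost, steps_remaining):
    """Decide whether to trigger global load re-balancing.

    step_times: last measured per-processor times for one step (seconds).
    Imbalance waste per step is max - mean: the time processors other
    than the slowest one spend idle at each synchronisation point.
    """
    waste_per_step = max(step_times) - sum(step_times) / len(step_times)
    return waste_per_step * steps_remaining > remap_cost

# Four processors, one badly overloaded; remap costs ~2 s, 100 steps left
print(should_remap([1.0, 1.1, 0.9, 2.0], remap_cost=2.0, steps_remaining=100))
```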