895 results for arbitrary sharing configurations
Abstract:
This article presents the results of an analysis of data from service providers, and from young adults formerly in state care, about how information on the sexual health of young people in state care is managed. In particular, the analysis focuses on the perceived impact on young people of information sharing between professionals. Twenty-two service providers from a range of professions, including social work, nursing and psychology, and 19 young people aged 18–22 years who were formerly in state care participated in the study. A qualitative approach was employed in which participants were interviewed in depth and data were analysed using modified analytical induction (Bogdan & Biklen, 2007). Findings suggest that within the care system in which the service provider participants worked, it was standard practice for sensitive information about a young person's sexual health to be shared across team members, even where there appeared to be no child protection issues. However, the accounts of the young people indicated that they experienced this form of information sharing as an invasion of their privacy. An unintended outcome of a high level of information sharing within teams is that the privacy of the young person in care is compromised in a way that is unlikely to arise for young people who are not in care. This may deter young people from availing themselves of sexual health services.
Abstract:
The separation of enantiomers and the confirmation of their absolute configurations are significant steps in the development of chiral drugs. The interactions between the enantiomers of a chiral pyrazole derivative and the polysaccharide-based chiral stationary phase cellulose tris(4-methylbenzoate) (Chiralcel OJ) in seven solvents and at different temperatures were studied using molecular dynamics simulations. The results show that the solvent effect has a remarkable influence on the interactions. Structural analysis discloses that the different interactions between the two isomers and the chiral stationary phase depend on the nature of the solvent, which may invert the elution order. The computational method presented in this study can be used to predict the elution order and the absolute configurations of enantiomers in HPLC separations, and would therefore be valuable in the development of chiral drugs.
Abstract:
This chapter discusses how theoretical studies, using both atomistic and phenomenological approaches, have made clear predictions about the existence and behaviour of ferroelectric (FE) vortices. Effective Hamiltonians can be implemented within both Monte Carlo (MC) and molecular dynamics (MD) simulations. In contrast to the effective Hamiltonian method, which is atomistic in nature, the phase field method employs a continuum approach in which the polarization field is the order parameter. Properties of FE nanostructures are largely governed by the existence of a depolarization field, which is much stronger than the demagnetization field in magnetic nanosystems. The topological patterns seen in rare-earth manganites are often referred to as vortices, yet this claim never seems to be explicitly justified. By inspection, the form of a vortex structure is such that there is a continuous rotation in the orientation of dipole vectors around the singularity at the centre of the vortex.
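The closing remark, that a vortex shows a continuous rotation of dipole orientations around a central singularity, can be made concrete with a short numerical sketch. Everything below (the grid size, the idealised tangential dipole field) is an illustrative assumption, not material from the chapter:

```python
import numpy as np

# Idealised single vortex on a small grid: each in-plane dipole is tangent to
# a circle around the central singularity, so its orientation rotates
# continuously through 2*pi along any loop that encloses the core.
N = 11                                    # illustrative grid size
x, y = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2)
r = np.hypot(x, y)
r[r == 0] = np.inf                        # the core carries no in-plane dipole
px, py = -y / r, x / r                    # unit dipole components (tangential)

# The dipole angle increases monotonically around the core (winding number +1):
angles = np.degrees(np.arctan2(py, px))
print(angles[N // 2, N // 2 + 1], angles[N // 2 + 1, N // 2])   # 90.0 180.0
```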
Abstract:
Rapid and affordable tumor molecular profiling has led to an explosion of clinical and genomic data poised to enhance the diagnosis, prognostication and treatment of cancer. A critical point has now been reached at which the analysis and storage of annotated clinical and genomic information in unconnected silos will stall the advancement of precision cancer care. Information systems must be harmonized to overcome the multiple technical and logistical barriers to data sharing. Against this backdrop, the Global Alliance for Genomics and Health (GA4GH) was established in 2013 to create a common framework that enables responsible, voluntary and secure sharing of clinical and genomic data. This Perspective from the GA4GH Clinical Working Group Cancer Task Team highlights the data-aggregation challenges faced by the field, suggests potential collaborative solutions and describes how GA4GH can catalyze a harmonized data-sharing culture.
Abstract:
Researchers want to run scientific experiments focusing on their disciplines. They do not want to know how and where the experiments are executed. Science gateways hide these details by coordinating the execution of experiments across different infrastructures and workflow systems. The ER-flow/SHIWA and SCI-BUS projects developed repositories to share artefacts such as applications, portlets and workflows inside and among research communities. Sharing artefacts in repositories enables gateway developers to reuse them when building a new gateway and/or creating a new application.
Abstract:
E-scientists want to run their scientific experiments on Distributed Computing Infrastructures (DCIs) to gain access to large pools of resources and services. Running experiments on these infrastructures requires specific expertise that e-scientists may not have. Workflows can hide resources and services behind a virtualization layer, providing a user interface that e-scientists can use. There are many workflow systems used by research communities, but they are not interoperable. Learning a workflow system and creating workflows in it may require significant effort from e-scientists. Given this effort, it is not reasonable to expect research communities to learn new workflow systems in order to run workflows developed in other workflow systems. The solution is to create workflow interoperability solutions that allow workflow sharing. The FP7 Sharing Interoperable Workflows for Large-Scale Scientific Simulations on Available DCIs (SHIWA) project developed two interoperability solutions to support workflow sharing: Coarse-Grained Interoperability (CGI) and Fine-Grained Interoperability (FGI). The project created the SHIWA Simulation Platform (SSP) to implement the Coarse-Grained Interoperability approach as a production-level service for research communities. The paper describes the CGI approach and how it enables sharing existing workflows, combining them into complex applications, and running them on Distributed Computing Infrastructures. The paper also outlines the architecture, components and usage scenarios of the simulation platform.
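A hedged sketch of the CGI idea may help: a workflow exported from one system is embedded in another as a single black-box node whose execution is delegated to the foreign engine. All names below (run_foreign_workflow, the shiwa-submit command, the repository URI) are hypothetical placeholders, not the actual SSP interface:

```python
import subprocess

def run_foreign_workflow(engine: str, workflow_ref: str, inputs: dict) -> int:
    """Invoke a non-native workflow engine as one opaque job step (CGI idea).

    The host workflow sees only inputs, outputs and an exit status; the
    foreign engine's internals stay hidden behind the submission service.
    """
    args = ["shiwa-submit", "--engine", engine, "--workflow", workflow_ref]
    for name, value in inputs.items():
        args += ["--input", f"{name}={value}"]
    return subprocess.run(args).returncode

# A node of the host workflow would delegate to, e.g., a Taverna workflow:
# run_foreign_workflow("Taverna", "repo://community/alignment", {"reads": "in.fq"})
```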
Abstract:
In the new learning environments built with digital technologies, the need to promote quality educational resources, commonly known as Learning Objects, which can support formal and informal distance learning, emerges as one of the biggest challenges that educational institutions will have to face. Because producing such resources is expensive, their reuse and sharing become very important issues. This article presents a Learning Object Repository that aims to store, disseminate and keep accessible Learning Objects.
Abstract:
Several projects in the recent past have aimed at promoting Wireless Sensor Networks as an infrastructure technology, where several independent users can submit applications that execute concurrently across the network. Multiple concurrent applications cause significant energy-usage overhead on sensor nodes that cannot be eliminated by traditional schemes optimized for single-application scenarios. In this paper, we outline two main optimization techniques for reducing power consumption across applications. First, we describe a compiler-based approach that identifies redundant sensing requests across applications and eliminates them. Second, we cluster radio transmissions by concatenating packets from independent applications based on Rate-Harmonized Scheduling.
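As an illustration of the first technique, here is a minimal sketch of merging redundant sensing requests; the data layout and the gcd-based merging rule are assumptions for illustration, not the paper's compiler algorithm:

```python
from math import gcd

def merged_schedule(requests):
    """requests: (app, sensor, period_ms) triples -> {sensor: base_period_ms}.

    Requests from different applications that target the same sensor are
    served by one physical sampling stream whose period is the gcd of the
    requested periods, so each application still gets samples on time while
    the sensor hardware is activated only once per base period.
    """
    schedule = {}
    for _app, sensor, period in requests:
        schedule[sensor] = gcd(schedule.get(sensor, 0), period)
    return schedule

reqs = [("app1", "temp", 500), ("app2", "temp", 250), ("app2", "light", 1000)]
print(merged_schedule(reqs))   # {'temp': 250, 'light': 1000}
```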
Abstract:
We present a 12(1 + 3R/(4m))-competitive algorithm for scheduling implicit-deadline sporadic tasks on a platform comprising m processors, where a task may request one of R shared resources.
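Read as a formula, the stated bound can be evaluated directly; the platform numbers below are illustrative only:

```latex
% Competitive ratio stated in the abstract, for m processors and R resources.
\[
  c(m, R) = 12\left(1 + \frac{3R}{4m}\right),
  \qquad
  c(8, 4) = 12\left(1 + \frac{3 \cdot 4}{4 \cdot 8}\right)
          = 12 \cdot 1.375 = 16.5 .
\]
```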
Abstract:
This paper proposes a dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers to efficiently handle overloads of soft tasks by making additional capacity available from two sources: (i) residual capacity allocated but left unused when jobs complete in less than their budgeted execution time; and (ii) capacity stolen from inactive non-isolated servers used to schedule best-effort jobs. The effectiveness of the proposed approach in reducing the mean tardiness of periodic jobs is demonstrated through extensive simulations. The results are even more significant when tasks' computation times have a large variance.
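A minimal sketch of the two capacity sources may clarify the idea; the server fields and the flat selection loop are assumptions for illustration, not the paper's scheduler:

```python
from dataclasses import dataclass

@dataclass
class Server:
    budget_left: float    # unused execution budget in the current period
    job_done: bool        # did the current job finish early?
    active: bool          # does the server have pending work?
    isolated: bool        # guaranteed (isolated) bandwidth server?

def extra_capacity(servers, overloaded):
    """Collect spare budget for an overloaded soft-task server."""
    gained = 0.0
    for s in servers:
        if s is overloaded:
            continue
        residual = s.job_done and s.budget_left > 0    # source (i): early finish
        stealable = not s.active and not s.isolated    # source (ii): stealing
        if residual or stealable:
            gained += s.budget_left
            s.budget_left = 0.0
    overloaded.budget_left += gained
    return gained

s1 = Server(budget_left=2.0, job_done=True,  active=False, isolated=True)
s2 = Server(budget_left=3.0, job_done=False, active=False, isolated=False)
ov = Server(budget_left=0.0, job_done=False, active=True,  isolated=False)
print(extra_capacity([s1, s2, ov], ov))   # 5.0: 2.0 residual + 3.0 stolen
```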
Abstract:
A new algorithm is proposed for scheduling preemptible arbitrary-deadline sporadic task systems upon multiprocessor platforms, with interprocessor migration permitted. The algorithm is based on a task-splitting approach: while most tasks are entirely assigned to specific processors, a few tasks (fewer than the number of processors) may be split across two processors. The algorithm can be used for two distinct purposes: for actually scheduling specific sporadic task systems, and for feasibility analysis. Simulation-based evaluation indicates that it offers a significant improvement in the ability to schedule arbitrary-deadline sporadic task systems compared with the contemporary state of the art. With regard to feasibility analysis, the new algorithm is proved to offer superior performance guarantees in comparison to prior feasibility tests.
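To make the task-splitting idea concrete, here is a minimal first-fit-with-splitting sketch; it illustrates the "fewer split tasks than processors" property but is not the paper's actual algorithm:

```python
def split_assign(utils, m, cap=100):
    """utils: per-task utilisations in % (each <= cap, sum <= m*cap).
    Returns one (task_index, share) list per processor; a task that
    straddles a bin boundary is split across two adjacent processors.
    """
    procs = [[] for _ in range(m)]
    cpu, free = 0, cap
    for i, u in enumerate(utils):
        while u > 0:
            take = min(u, free)          # fill the current processor first
            procs[cpu].append((i, take))
            u -= take
            free -= take
            if free == 0 and cpu + 1 < m:
                cpu, free = cpu + 1, cap # move on; remainder of u is the split
    return procs

print(split_assign([60, 70, 50, 80], 3))
# [[(0, 60), (1, 40)], [(1, 30), (2, 50), (3, 20)], [(3, 60)]]
# Tasks 1 and 3 are each split across two processors: 2 splits < 3 processors.
```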
Abstract:
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming that no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a greedy capacity sharing and stealing policy to efficiently exchange bandwidth among tasks, minimising the degree of deviation from the ideal system's behaviour caused by inter-application blocking. The proposed capacity exchange protocol (CXP) focuses on exchanging extra capacities as early, and not necessarily as fairly, as possible. This loss of optimality is worth the reduced complexity: the protocol's behaviour nevertheless tends to be fair in the long run and outperforms other solutions in highly dynamic scenarios, as demonstrated by extensive simulations.
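The "as early as possible" greedy rule can be sketched with a heap of spare capacities keyed by the time they become usable; this representation is an assumption for illustration, not the CXP implementation:

```python
import heapq

def consume_earliest(spare, needed):
    """spare: heap of (available_at, amount) pairs; greedily borrow the
    earliest-available capacities until `needed` is covered (early rather
    than fair). Returns the list of (time, amount) actually borrowed."""
    borrowed = []
    while needed > 0 and spare:
        t, amount = heapq.heappop(spare)
        used = min(amount, needed)
        borrowed.append((t, used))
        needed -= used
        if amount > used:                       # push back the unused remainder
            heapq.heappush(spare, (t, amount - used))
    return borrowed

spare = [(2, 3.0), (5, 4.0), (9, 2.0)]
heapq.heapify(spare)
print(consume_earliest(spare, 5.0))   # [(2, 3.0), (5, 2.0)]
```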