939 results for ontology alignment
Abstract:
Objective: Leadership is particularly important in complex, highly interprofessional health care contexts involving a number of staff, some from the same specialty (intraprofessional), and others from different specialties (interprofessional). The authors recently published the concept of “The Burns Suite” (TBS) as a novel simulation tool to deliver interprofessional and teamwork training. It is unclear which leadership behaviors are the most important in an interprofessional burns resuscitation scenario, and whether they can be mapped onto current leadership theory. The purpose of this study was to perform a comprehensive video analysis of leadership behaviors within TBS. Methods: A total of 3 burns resuscitation simulations within TBS were recorded. The video analysis was inspired by grounded theory. Using predefined criteria, actions/interactions deemed as leadership behaviors were identified. Using an inductive iterative process, 8 main leadership behaviors were identified. Cohen’s κ coefficient was used to measure inter-rater agreement and was calculated as κ = 0.7 (substantial agreement). Each video was watched 4 times, focusing on 1 of the 4 team members per viewing (senior surgeon, senior nurse, trainee surgeon, and trainee nurse). The frequency and types of leadership behavior of each of the 4 team members were recorded. Differences were assessed for statistical significance using analysis of variance, with p < 0.05 taken as significant. Leadership behaviors were triangulated with verbal cues and actions from the videos. Results: All 3 scenarios were successfully completed. The mean scenario length was 22 minutes. A total of 362 leadership behaviors were recorded from the 12 participants. The most evident leadership behaviors of all team members were adhering to guidelines (which effectively equates to following Advanced Trauma Life Support/Emergency Management of Severe Burns resuscitation guidelines and hence “maintaining standards”), followed by making decisions. Although in terms of total frequency the senior surgeon engaged in more leadership behaviors than the rest of the team, there was no statistically significant difference among the 4 members within the 8 leadership categories. This analysis highlights that “distributed leadership” was predominant, whereby leadership was “distributed” or “shared” among team members. The leadership behaviors within TBS also seemed to fall in line with the “direction, alignment, and commitment” ontology. Conclusions: Effective leadership is essential for successful functioning of work teams and accomplishment of task goals. As the resuscitation of a patient with major burns is a dynamic event, team leaders require flexibility in their leadership behaviors to effectively adapt to changing situations. Understanding leadership behaviors of different team members within an authentic simulation can identify important behaviors required to optimize nontechnical skills in a major resuscitation. Furthermore, attempting to map these behaviors onto leadership models can help further our understanding of leadership theory. Collectively this can aid the development of refined simulation scenarios for team members, and can be extrapolated into other areas of simulation-based team training and interprofessional education.
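As a hedged illustration of the two statistics named in this abstract (inter-rater agreement via Cohen's κ and a one-way ANOVA on behavior frequencies across the four roles), the sketch below uses placeholder data, not the study's recordings:

```python
# Illustrative sketch: Cohen's kappa for inter-rater agreement on coded
# leadership behaviors, and a one-way ANOVA on behavior frequencies per role.
# All values below are placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import f_oneway

# Two raters' codes for the same sequence of observed actions
# (each code is one of the 8 leadership-behavior categories).
rater_a = [0, 3, 3, 1, 7, 2, 0, 0, 5, 3, 1, 1]
rater_b = [0, 3, 2, 1, 7, 2, 0, 0, 5, 3, 1, 4]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")

# Behavior counts per scenario for each of the four roles (made-up numbers).
senior_surgeon  = [38, 41, 35]
senior_nurse    = [30, 28, 33]
trainee_surgeon = [25, 27, 24]
trainee_nurse   = [26, 29, 26]
f_stat, p_value = f_oneway(senior_surgeon, senior_nurse, trainee_surgeon, trainee_nurse)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}  (significant if p < 0.05)")
```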
Abstract:
Unstructured mesh codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Parallelisation of such codes using Single Program Multiple Data (SPMD) domain decomposition techniques implemented with message passing has been demonstrated to provide high parallel efficiency, scalability to large numbers of processors P, and portability across a wide range of parallel platforms. High efficiency, especially for large P, requires that load balance is achieved in each parallel loop. For a code in which loops span a variety of mesh entity types, for example, elements, faces and vertices, some compromise is required between load balance for each entity type and the quantity of inter-processor communication required to satisfy data dependence between processors.
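A minimal sketch of the load-balance trade-off described above, assuming a toy mesh and a made-up ownership assignment over P = 4 processors (the data and function names are illustrative, not from the paper):

```python
# For a fixed partition, measure per-processor load for each mesh entity type.
# A partition that balances elements may leave faces or vertices unbalanced,
# and reducing imbalance generally increases inter-processor communication.
from collections import Counter

def imbalance(counts_per_proc):
    """Maximum load divided by mean load; 1.0 means perfectly balanced."""
    mean = sum(counts_per_proc) / len(counts_per_proc)
    return max(counts_per_proc) / mean

P = 4
# Hypothetical owner processor for each element, face and vertex.
element_owner = [0, 0, 1, 1, 2, 2, 3, 3, 3]
face_owner    = [0, 1, 1, 1, 2, 3, 3]
vertex_owner  = [0, 0, 0, 1, 2, 2, 3]

for name, owners in [("elements", element_owner),
                     ("faces", face_owner),
                     ("vertices", vertex_owner)]:
    counts = [Counter(owners).get(p, 0) for p in range(P)]
    print(f"{name:8s} per-processor counts {counts}  imbalance {imbalance(counts):.2f}")
```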
Abstract:
The number of Open Access (OA) policies that have been adopted by universities, research institutes and research funders has been increasing at a fast pace. The Registry of Open Access Repository Mandates and Policies (ROARMAP) records the existence of 724 OA policies across the world, of which 512 have been adopted by universities and research institutions. The UK is one of the leading countries in terms of OA policy development and implementation, with a total of 85 institutional and an estimated 35 funder OA policies. In order to understand and contextualise how OA policies are developed and how they can be effectively implemented and aligned, this brief looks at two areas. The first section provides an overview of the processes evolving around policy making, policy effectiveness and policy alignment. In particular, it summarises the criteria and elements generally specified in OA policies, points out some of the relevant steps informing the development, monitoring and revision of OA policies, outlines what OA policy elements contribute to policy effectiveness, and highlights the benefits of aligning OA policies. The second section revisits the issues previously discussed within the context of the UK institutional (universities) OA policy landscape.
Abstract:
Background: Protein structural alignment is one of the most fundamental and crucial areas of research in the domain of computational structural biology. Comparison of a protein structure with known structures helps to classify it as new or as belonging to a known group of proteins. This, in turn, is useful for determining the function of a protein and its evolutionary relationship with other protein molecules, and for grasping principles underlying protein architecture and folding. Results: A large number of protein structure alignment methods are available. Each protein structure alignment tool has its own strengths and weaknesses that need to be highlighted. We compared six of the most popular and publicly available servers for protein structure comparison and present the results. These web-based servers were compared with respect to functionality (features provided by these servers) and accuracy (how well the structural comparison is performed). The CATH database was used as a reference. The results showed that, overall, CE was the top performer. DALI and PhyreStorm showed similar results, whereas PDBeFold showed the lowest performance. In the case of structures with few secondary structural elements, CE, DALI and PhyreStorm gave a 100% success rate. Conclusion: Overall, none of the structural alignment servers showed a 100% success rate. Analyses of overall performance, of the effect of mainly-alpha structures and of the effect of mainly-beta structures showed consistent performance. CE, DALI, FatCat and PhyreStorm showed more than a 90% success rate.
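For illustration only, a small sketch of how a success rate against a CATH reference might be tallied; the queries, CATH codes and server outputs below are invented and do not reflect the paper's benchmark:

```python
# Count a server's result as a "success" when its top structural match carries
# the same CATH classification as the query, then report a per-server rate.
cath_reference = {          # hypothetical query structures and their CATH codes
    "1abcA": "1.10.510.10",
    "2xyzB": "3.40.50.300",
    "3pqrC": "2.60.40.10",
}

server_top_hit_cath = {     # hypothetical CATH code of each server's top hit
    "CE":       {"1abcA": "1.10.510.10", "2xyzB": "3.40.50.300", "3pqrC": "2.60.40.10"},
    "PDBeFold": {"1abcA": "1.10.510.10", "2xyzB": "3.40.190.10", "3pqrC": "2.60.120.200"},
}

for server, hits in server_top_hit_cath.items():
    successes = sum(hits[q] == cath_reference[q] for q in cath_reference)
    rate = 100.0 * successes / len(cath_reference)
    print(f"{server:8s} success rate: {rate:.0f}%")
```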
Abstract:
Active response systems aim to execute a response against an intrusion automatically. However, executing a response automatically is not a trivial task, since the cost of executing a response could be greater than the effect caused by the intrusion itself. The system must also have a broad set of response actions and an algorithm that selects the optimal response. This article proposes a response toolkit to be integrated into an ontology-based IRS, allowing the automatic execution of the best response when an intrusion is detected. A set of host-based and network-based responses that can be executed by the IRS is presented; their execution is carried out by plugin-based agents distributed across the network. Finally, the proposed system is verified using a defacement attack as a use case, with satisfactory results.
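A hedged sketch of the response-selection idea described above, assuming a simple cost-versus-effectiveness weighting; the names and values are illustrative and are not the article's algorithm:

```python
# Pick the response whose cost stays below the estimated intrusion damage and
# whose effectiveness-to-cost ratio is highest. Values are made up.
from dataclasses import dataclass

@dataclass
class Response:
    name: str
    cost: float           # operational impact of applying the response
    effectiveness: float  # expected reduction of the intrusion's effect

def select_response(responses, intrusion_damage):
    viable = [r for r in responses if r.cost < intrusion_damage]
    if not viable:
        return None  # every response is costlier than the intrusion itself
    return max(viable, key=lambda r: r.effectiveness / r.cost)

toolkit = [
    Response("restore defaced page from backup", cost=2.0, effectiveness=8.0),
    Response("block offending IP at firewall",   cost=1.0, effectiveness=5.0),
    Response("shut down web server",             cost=9.0, effectiveness=9.5),
]

best = select_response(toolkit, intrusion_damage=7.0)
print("Selected response:", best.name if best else "none (all too costly)")
```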
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. These collectively imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity, a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units and responsibility for their resolution delegated appropriately. We continue to describe how the manifestation of risks typically spans an entire organisational environment and how, as the focus of our analysis, risk safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements.
Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
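As a minimal sketch of the risk characterisation described above, assuming an illustrative likelihood-times-impact exposure score; the field names and values are assumptions, not the PORRO vocabulary:

```python
# A risk combines a likelihood and an impact, and is linked to the activities
# and responsibilities through which it is manifested and mitigated.
from dataclasses import dataclass, field

@dataclass
class Risk:
    label: str
    likelihood: float                      # 0..1 probability of the negative outcome
    impact: float                          # 0..10 severity if the outcome occurs
    activities: list = field(default_factory=list)   # mitigating activities
    responsible: str = "unassigned"        # delegated responsibility

    def exposure(self) -> float:
        return self.likelihood * self.impact

risks = [
    Risk("file format obsolescence", 0.4, 8.0, ["format migration"], "repository manager"),
    Risk("storage media failure",    0.1, 9.0, ["fixity checks", "replication"], "sysadmin"),
]

# Rank risks by exposure so the most pressing mitigation is addressed first.
for r in sorted(risks, key=Risk.exposure, reverse=True):
    print(f"{r.label:28s} exposure {r.exposure():.1f}  -> {r.responsible}")
```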
Abstract:
Statistical methodology is proposed for comparing molecular shapes. In order to account for the continuous nature of molecules, classical shape analysis methods are combined with techniques used for predicting random fields in spatial statistics. Applying a modification of Procrustes analysis, Bayesian inference is carried out using Markov chain Monte Carlo methods for the pairwise alignment of the resulting molecular fields. Superimposing entire fields rather than the configuration matrices of nuclear positions thereby solves the problem that there is usually no clear one-to-one correspondence between the atoms of the two molecules under consideration. Using a similar concept, we also propose an adaptation of the generalised Procrustes analysis algorithm for the simultaneous alignment of multiple molecular fields. The methodology is applied to a dataset of 31 steroid molecules.
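For background, a short sketch of classical Procrustes alignment of two point configurations using the standard SVD solution; this is the textbook least-squares version, not the Bayesian field-based method proposed here:

```python
# Ordinary Procrustes alignment: translate, rotate and scale Y so that it
# matches X in a least-squares sense.
import numpy as np

def procrustes_align(X, Y):
    """Return Y transformed (rotation/scale/translation) to best match X."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                               # optimal rotation (may include reflection)
    scale = s.sum() / (Yc ** 2).sum()        # optimal isotropic scaling
    return scale * Yc @ R.T + X.mean(axis=0)

X = np.random.rand(10, 3)                                        # reference configuration
Y = X @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) + 0.5       # rotated, shifted copy
aligned = procrustes_align(X, Y)
print("RMSD after alignment:", np.sqrt(((aligned - X) ** 2).mean()))
```

The field-based approach in the abstract replaces these atom-position configuration matrices with whole molecular fields, avoiding the need for an explicit atom-to-atom correspondence.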
Abstract:
Part 20: Health and Care Networks
Abstract:
Purpose: To evaluate the levels of the dinucleotides diadenosine tetraphosphate (Ap4A) and diadenosine pentaphosphate (Ap5A) in tears of patients wearing rigid gas permeable (RGP) contact lenses on a daily wear basis and of patients wearing reverse-geometry RGP lenses overnight for orthokeratology treatment. Methods: Twenty-two young volunteers (10 females, 12 males; 23.47 ± 4.49 years) were fitted with an alignment-fit RGP lens (paflufocon B) for a month, and after a 15-day washout period they were fitted with reverse-geometry RGP lenses for corneal reshaping (paflufocon D) for another month. During each period, tears were collected at baseline and on days 1, 7, 15, and 28. Ap4A and Ap5A were measured by high-pressure liquid chromatography (HPLC). Additionally, corneal staining, break-up time (BUT), Schirmer test, and dryness symptoms were evaluated. Results: Ap4A concentrations increased significantly from baseline during the whole period of daily wear of RGP lenses (P < 0.001); the concentration was also significantly higher than in the orthokeratology group, which remained at baseline levels during the study period except at day 1 (P < 0.001) and day 28 (P = 0.041). While BUT and Schirmer values remained unchanged in both groups, discomfort and dryness were significantly increased during alignment-fit RGP daily wear but not during the orthokeratology period. Conclusions: Daily wear of RGP lenses increased the levels of Ap4A due to mechanical stimulation of the corneal epithelium by blinking, and this is associated with discomfort. Also, orthokeratology did not produce symptoms or signs of ocular dryness, which could be a potential advantage over soft contact lenses in terms of contact lens-induced dryness.
Abstract:
Part 8: Business Strategies Alignment
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
This dissertation studies the manipulation of particles using acoustic stimulation for applications in microfluidics and templating of devices. The term particle is used here to denote any solid, liquid or gaseous material that has properties which are distinct from the fluid in which it is suspended. Manipulation means to take over the movements of the particles and to position them in specified locations. Using devices microfabricated out of silicon, the behavior of particles under acoustic stimulation was studied with the main purpose of aligning the particles at either low-pressure zones, known as nodes, or high-pressure zones, known as anti-nodes. By aligning particles at the nodes in a flow system, these particles can be focused at the center or walls of a microchannel in order to ultimately separate them. These separations are of high scientific importance, especially in the biomedical domain, since acoustophoresis provides a unique approach to separate based on density and compressibility, unparalleled by other techniques. The study of controlling and aligning the particles in various geometries and configurations was successfully achieved by controlling the acoustic waves. Apart from their use in flow systems, a stationary suspended-particle device was developed to provide controllable light transmittance based on acoustic stimuli. Using a glass compartment and a carbon-particle suspension in an organic solvent, the device responded to acoustic stimulation by aligning the particles. The alignment of light-absorbing carbon particles afforded an increase in visible light transmittance as high as 84.5%, and it was controlled by adjusting the frequency and amplitude of the acoustic wave. The device also demonstrated alignment memory, rendering it energy-efficient. A similar device for suspended particles in a monomer enabled the development of electrically conductive films. These films were based on networks of conductive particles. Elastomers doped with conductive metal particles were rendered surface conductive at particle loadings as low as 1% by weight using acoustic focusing. The resulting films were flexible, had transparencies exceeding 80% in the visible spectrum (400-800 nm), and had electrical bulk conductivities exceeding 50 S/cm.
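A back-of-the-envelope sketch of where the pressure nodes of a one-dimensional standing wave sit in a half-wave microchannel; the driving frequency and fluid properties are assumed values, not taken from the dissertation:

```python
# Pressure-node positions of a 1-D acoustic standing wave across a channel.
# Particles with positive acoustic contrast collect at these nodes.
c_fluid = 1480.0          # speed of sound in water, m/s (approximate)
f = 2.0e6                 # driving frequency, Hz (assumed)
wavelength = c_fluid / f  # about 740 micrometres

channel_width = wavelength / 2   # half-wave resonator: a single node at the centre
n_nodes = 1
node_positions = [(2 * k + 1) * wavelength / 4 for k in range(n_nodes)]

print(f"wavelength = {wavelength * 1e6:.0f} um, channel width = {channel_width * 1e6:.0f} um")
print("pressure node(s) at", [f"{x * 1e6:.0f} um" for x in node_positions])
```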
Abstract:
The ontology engineering research community has focused for many years on supporting the creation, development and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, represents instead a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability to describe and explore the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t + 1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers belonging to the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision-at-ten over the baselines when forecasting over 5 years.
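As a hedged illustration of the evaluation metric mentioned above (mean average precision at ten), the snippet below scores toy forecast lists against toy gold sets; it is not the SIF pipeline:

```python
# Mean average precision at 10 (MAP@10) over ranked concept forecasts.
# The ranked lists and gold sets are placeholders for demonstration.
def average_precision_at_k(ranked, relevant, k=10):
    hits, score = 0, 0.0
    for i, concept in enumerate(ranked[:k], start=1):
        if concept in relevant:
            hits += 1
            score += hits / i            # precision at this cut-off
    return score / min(len(relevant), k) if relevant else 0.0

forecasts = [   # (ranked predictions for time t+1, concepts that actually emerged)
    (["deep learning", "cloud robotics", "semantic web"], {"deep learning", "semantic web"}),
    (["edge computing", "quantum ml"], {"quantum ml"}),
]

map_at_10 = sum(average_precision_at_k(r, g) for r, g in forecasts) / len(forecasts)
print(f"MAP@10 = {map_at_10:.3f}")
```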