47 results for Toolkits


Relevance: 20.00%

Abstract:

The Canadian Best Practice Recommendations for Stroke Care are intended to reduce variations in stroke care and facilitate closure of the gap between evidence and practice (Lindsay et al., 2010). The publication of best practice recommendations is only the beginning of this process. The guidelines themselves are not sufficient to change practice and increase consistency in care. Therefore, a key objective of the Canadian Stroke Network (CSN) Best Practices Working Group (BPWG) is to encourage and facilitate ongoing professional development and training for health care professionals providing stroke care. This is addressed through a multi-factorial approach to the creation and dissemination of inter-professional implementation tools and resources. The resources developed by the CSN span pre-professional education, ongoing professional development, and patient education, and may be used to inform systems change. With a focus on knowledge translation, several inter-professional point-of-care tools have been developed by the CSN in collaboration with numerous professional organizations and expert volunteers. These resources are used to facilitate awareness, understanding and application of evidence-based care across stroke care settings. Similar resources are also developed specifically for stroke patients, their families and informal caregivers, and the general public. With each update of the Canadian Best Practice Recommendations for Stroke Care, the BPWG and topic-specific writing groups propose priority areas for ongoing resource development. In 2010, two of these major educational initiatives were undertaken and recently completed: one to support continuing education for health care professionals regarding secondary stroke prevention, and the other to educate families, informal caregivers and the public about pediatric stroke. This paper presents an overview of these two resources, and we encourage health care professionals to integrate them into their personal learning plans and tool kits for patients.

Relevance: 20.00%

Abstract:

When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit with the necessary degree of flexibility, such that it can be tailored for application to the specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation for the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2011 IEEE.

Relevance: 20.00%

Abstract:

When considering the potential uptake and utilization of technology management tools by industry, it must be recognized that companies face the difficult challenges of selecting, adopting and integrating individual tools into a toolkit that must be implemented within their current organizational processes and systems. This situation is compounded by the lack of sound advice on integrating well-founded individual tools into a robust toolkit with the necessary degree of flexibility, such that it can be tailored for application to the specific problems faced by individual organizations. As an initial stepping stone to offering a toolkit with empirically proven utility, this paper provides a conceptual foundation for the development of toolkits by outlining an underlying philosophical position based on observations from multiple research and commercial collaborations with industry. This stance is underpinned by a set of operationalized principles that can offer guidance to organizations when deciding upon the appropriate form, functions and features that should be embodied by any potential tool/toolkit. For example, a key objective of any tool is to aid decision-making, and a core set of powerful, flexible, scalable and modular tools should be sufficient to allow users to generate, explore, shape and implement possible solutions across a wide array of strategic issues. From our philosophical stance, the preferred mode of engagement is facilitated workshops with a participatory process that enables multiple perspectives and structures the conversation through visual representations in order to manage the cognitive load in the collaborative environment. The generic form of the tools should be configurable for the given context and utilized in a lightweight manner based on the premise of 'start small and iterate fast'. © 2012 Elsevier Inc.

Relevance: 20.00%

Abstract:

Fibrillar collagens provide the most fundamental platform in the vertebrate organism for the attachment of cells and matrix molecules. We have identified specific sites in collagens to which cells can attach, either directly or through protein intermediaries. Using Toolkits of triple-helical peptides, each peptide comprising 27 residues of collagen primary sequence and overlapping with its neighbours by nine amino acids, we have mapped the binding of receptors and other proteins on to collagens II or III. Integrin alpha 2 beta 1 binds to several GXX'GER motifs within the collagens, the affinities of which differ sufficiently to control cell adhesion and migration independently of the cellular regulation of the integrin. The platelet receptor, Gp (glycoprotein) VI, binds well to GPO (where O is hydroxyproline)-containing model peptides, but to very few Toolkit peptides, suggesting that sequence in addition to GPO triplets is important in defining GpVI binding. The Toolkits have been applied to the plasma protein vWF (von Willebrand factor), which binds to only a single sequence, identified by truncation and amino acid substitution within Toolkit peptides as GXRGQOGVMGFO in collagens II and III. Intriguingly, the receptor tyrosine kinase DDR2 (discoidin domain receptor 2) recognizes three sites in collagen II, including its vWF-binding site, although the amino acids that support the interaction differ slightly within this motif. Furthermore, the secreted protein BM-40 (basement membrane protein 40) also binds well to this same region. Thus the availability of extracellular collagen-binding proteins may be important in regulating and facilitating direct collagen-receptor interaction.
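
The peptide length (27 residues) and nine-residue overlap come directly from the abstract; the sketch below only illustrates the tiling arithmetic such a Toolkit implies. The demo sequence and function names are hypothetical, not the authors' actual peptide library.

```python
# Minimal sketch: tiling a (hypothetical) collagen primary sequence into
# Toolkit-style peptides of 27 residues, each overlapping its neighbour by 9,
# i.e. successive peptides start 18 residues apart.
def toolkit_peptides(sequence: str, length: int = 27, overlap: int = 9):
    """Yield (start_index, peptide) pairs covering `sequence`."""
    step = length - overlap  # 18-residue stride between peptide starts
    for start in range(0, max(len(sequence) - length + 1, 1), step):
        yield start, sequence[start:start + length]

if __name__ == "__main__":
    # Hypothetical Gly-Pro-Hyp repeat stand-in, not a real collagen II/III sequence.
    demo = "GPO" * 40
    for start, pep in toolkit_peptides(demo):
        print(f"peptide starting at residue {start + 1}: {pep}")
```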

Relevance: 20.00%

Abstract:

The productisation of crime toolkits is happening at an ever-increasing rate. Attacks that previously required in-depth knowledge of computer systems can now be purchased online. Large-scale attacks that previously required months to set up a botnet can now be scheduled for a small fee. Criminals are leveraging this opportunity for commercialisation by compromising web applications and users' browsers, to gain advantages such as using the computer's resources to launch further attacks, or stealing data such as identifying information. Crime toolkits are being developed to attack an increasing number of applications and can now be deployed by attackers with little technical knowledge. This paper surveys the current trends in crime toolkits, with a case study on the Zeus botnet. We profile the types of exploits that malicious writers prefer, with a view to predicting future attack trends. We find that the scope for damage is increasing, particularly as specialisation and scale increase in cybercrime.

Relevance: 20.00%

Abstract:

Increasingly, web applications are being developed over the Internet, and securing them is becoming important because they hold security-critical features. However, cybercriminals are becoming smarter, developing crime toolkits and employing sophisticated techniques to evade detection. These crime toolkits can be used by anyone to target Internet users. In this paper, we explore the techniques used in crime toolkits. We present a current state-of-the-art analysis of crime toolkits and focus on attacks against web applications. The crime toolkit techniques are compared with the vulnerabilities of web applications to help reveal particular behaviour, such as the popular web application vulnerabilities that malicious writers prefer. In addition, we outline existing protection mechanisms and observe that the possibility for damage is rising, particularly as specialisation and scale increase in cybercrime.

Relevance: 20.00%

Abstract:

This thesis describes the development of new models and toolkits for orbit determination codes to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables; disturbances in either degrade the orbit determination. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reducing the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
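
For context on that last figure (a standard reference value, not taken from the thesis abstract): a fully homogeneous body has a normalized moment of inertia of 0.4, and increasing central condensation pushes the value below 0.4, which is why 0.37-0.4 reads as near-homogeneity.

```latex
% Normalized moment of inertia used as a differentiation diagnostic:
% C is the polar moment of inertia, M the mass, R the mean radius.
\[
  \bar{C} \;=\; \frac{C}{M R^{2}}, \qquad
  \bar{C}_{\text{homogeneous sphere}} \;=\; \frac{2}{5} \;=\; 0.4 .
\]
% Values measurably below 0.4 indicate mass concentrated toward the centre
% (differentiation); Rhea's estimated 0.37-0.4 therefore points to, at most,
% a small degree of central condensation.
```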

Relevance: 20.00%

Abstract:

Information literacy is the set of research skills needed to access, retrieve, and analyze information. By creating an information literacy plan and an online toolkit that the entire campus uses for research purposes, faculty and students will be equipped with the tools they need to become better researchers, to understand the importance of citing information, and to appreciate the significance of searching for peer-reviewed articles.

Relevance: 10.00%

Abstract:

Digital Songlines (DSL) is an Australasian CRC for Interaction Design (ACID) project that is developing protocols, methodologies and toolkits to facilitate the collection, education and sharing of indigenous cultural heritage knowledge. This paper outlines the goals achieved over the last three years in the development of the Digital Songlines game engine (DSE) toolkit, which is used for Australian Indigenous storytelling. The project explores the sharing of indigenous Australian Aboriginal storytelling in a sensitive manner using a game engine. The use of game engines in the field of Cultural Heritage is expanding: they are an important tool for the recording and re-presentation of historically, culturally, and sociologically significant places, infrastructure, and artefacts, as well as the stories associated with them. The DSL implementation of a game engine to share storytelling provides an educational interface. Where the DSL implementation differs from other Cultural Heritage applications of game engines is in the nature of the game environment itself: it is modelled on the 'country' (the 'place' of their heritage, which is so important to the clients' collective identity) and on authentic fauna and flora, providing a highly contextualised setting for the stories to be told. This paper provides an overview of the development of the DSL game engine.

Relevance: 10.00%

Abstract:

SAP and its research partners have been developing a language for describing details of services from various viewpoints, called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient “chatty” implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework's Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.
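
To illustrate the granularity point in language-neutral terms: a "chatty" client fetches each attribute with a separate remote call, whereas a coarse-grained facade returns a whole service description in one call. This is a hypothetical sketch of that contrast only; the class and field names are invented and it is not the USDL repository's actual generated API.

```python
# Hypothetical sketch contrasting fine-grained ("chatty") access with a
# coarse-grained facade; names and fields are invented for illustration.
from dataclasses import dataclass, asdict

@dataclass
class ServiceDescription:              # stand-in for a USDL-like model element
    name: str
    provider: str
    price_plan: str
    availability: str

class FineGrainedRepository:
    """One remote call per attribute: N attributes -> N round trips."""
    def __init__(self, record: ServiceDescription):
        self._record = record
    def get_name(self): return self._record.name
    def get_provider(self): return self._record.provider
    def get_price_plan(self): return self._record.price_plan
    def get_availability(self): return self._record.availability

class CoarseGrainedFacade:
    """One remote call returns the whole description as a single document."""
    def __init__(self, record: ServiceDescription):
        self._record = record
    def get_service_description(self) -> dict:
        return asdict(self._record)

if __name__ == "__main__":
    record = ServiceDescription("PayrollService", "ACME", "per-seat", "99.9%")
    print(CoarseGrainedFacade(record).get_service_description())
```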

Relevance: 10.00%

Abstract:

Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
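
A rough illustration of the kind of pipeline described (blob-like calibration targets located via the maximally stable extremal region detector, then fed into a standard intrinsic calibration). This is a generic OpenCV sketch under assumed file names, board geometry and a crude point-ordering step; it is not the authors' released software.

```python
# Generic sketch: locate calibration targets with MSER and feed the detected
# image points into a standard camera calibration. File names, board geometry
# and the simplistic clustering/ordering are assumptions for illustration only.
import glob
import cv2
import numpy as np

ROWS, COLS, SPACING_MM = 6, 9, 25.0          # assumed calibration-target layout

# Object points: the pattern's known 3-D layout (planar grid, Z = 0).
objp = np.zeros((ROWS * COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SPACING_MM

mser = cv2.MSER_create()
obj_points, img_points, image_size = [], [], None

for path in sorted(glob.glob("thermal_*.png")):   # hypothetical image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    regions, _ = mser.detectRegions(gray)
    # Collapse each stable region to a single centre point (crude stand-in
    # for the paper's clustering-based point-location algorithm).
    centres = np.array([r.mean(axis=0) for r in regions], np.float32)
    if len(centres) != ROWS * COLS:
        continue                                  # skip incomplete detections
    # Order centres to match objp (sorting by row then column is a simplification).
    order = np.lexsort((centres[:, 0], centres[:, 1]))
    obj_points.append(objp)
    img_points.append(centres[order].reshape(-1, 1, 2))

if obj_points:
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                             image_size, None, None)
    print("RMS reprojection error:", rms)
```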

Relevance: 10.00%

Abstract:

This article focuses on problem solving activities in a first grade classroom in a typical small community and school in Indiana. But the teacher and the activities in this class were not at all typical of what goes on in most comparable classrooms, and the issues that will be addressed are relevant and important for students from kindergarten through college. Can children really solve problems that involve concepts (or skills) that they have not yet been taught? Can children really create important mathematical concepts on their own, without a lot of guidance from teachers? What is the relationship between problem solving abilities and the mastery of skills that are widely regarded as being “prerequisites” to such tasks? Can primary school children (whose toolkits of skills are limited) engage productively in authentic simulations of “real life” problem solving situations? Can three-person teams of primary school children really work together collaboratively, and remain intensely engaged, on problem solving activities that require more than an hour to complete? Are the kinds of learning and problem solving experiences that are recommended (for example) in the USA’s Common Core State Curriculum Standards really representative of the kind that even young children encounter beyond school in the 21st century? … This article offers an existence proof showing why our answers to these questions are: Yes. Yes. Yes. Yes. Yes. Yes. And: No. … Even though the evidence we present is only intended to demonstrate what’s possible, not what’s likely to occur under any circumstances, there is no reason to expect that the things our children accomplished could not be accomplished by average-ability children in other schools and classrooms.

Relevance: 10.00%

Abstract:

Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked, except for direct equivalent pages on the same subject in different languages. This poses serious difficulties for users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study is specifically focused on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments on better, automatic generation of cross-lingual links that were carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. This framework is important in CLLD evaluation because it helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been used to quantify system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of this kind.
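
A minimal sketch of the general idea behind boundary detection via n-gram mutual information: adjacent characters with low pointwise mutual information are likely to straddle a word boundary. The thesis's exact formulation and thresholds are not reproduced here; the toy corpus, counts and threshold below are invented for illustration, assuming a simple bigram model.

```python
# Minimal sketch of bigram mutual-information segmentation: place a boundary
# between two adjacent characters when their pointwise mutual information (PMI)
# falls below a threshold. Corpus, counts and threshold are illustrative only.
import math
from collections import Counter

def train_counts(corpus: str):
    """Character and adjacent-bigram counts from a (toy) training corpus."""
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    return chars, bigrams, len(corpus)

def pmi(a: str, b: str, chars, bigrams, n: int) -> float:
    """Pointwise mutual information of the adjacent pair (a, b)."""
    p_ab = bigrams[a + b] / max(n - 1, 1)
    p_a, p_b = chars[a] / n, chars[b] / n
    if p_ab == 0 or p_a == 0 or p_b == 0:
        return float("-inf")
    return math.log(p_ab / (p_a * p_b))

def segment(text: str, chars, bigrams, n: int, threshold: float = 0.0):
    """Split text where adjacent characters have PMI below the threshold."""
    words, current = [], text[0]
    for prev, nxt in zip(text, text[1:]):
        if pmi(prev, nxt, chars, bigrams, n) < threshold:
            words.append(current)      # low association -> word boundary
            current = nxt
        else:
            current += nxt             # high association -> same word
    words.append(current)
    return words

if __name__ == "__main__":
    corpus = "信息检索信息检索跨语言链接发现跨语言链接发现"   # toy training text
    chars, bigrams, n = train_counts(corpus)
    print(segment("跨语言信息检索", chars, bigrams, n))
```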