962 results for Integration-responsiveness Framework
Abstract:
Aim: The relative effectiveness of different methods of prevention of HIV transmission is a subject of debate that is renewed with the integration of each new method. The relative weight of values and evidence in decision-making is not always clearly defined. The debate is often confused, as the proponents of different approaches address the issue at different levels of implementation. This paper defines and delineates the successive levels of analysis of effectiveness, and proposes a conceptual framework to clarify the debate. Method / Issue: Initially inspired by work on contraceptive effectiveness, a first version of the conceptual framework was published in 1993 with the definition of the Condom Effectiveness Matrix (Spencer, 1993). The framework has since integrated and further developed thinking around distinctions made between efficacy and effectiveness and has been applied to HIV prevention in general. Three levels are defined: theoretical effectiveness (ThE), use-effectiveness (UseE) and population use-effectiveness (PopUseE). For example, abstinence and faithfulness, as proposed in the ABC strategy, have relatively high theoretical effectiveness but relatively low effectiveness at subsequent levels of implementation. The reverse is true of circumcision. Each level is associated with specific forms of scientific enquiry and associated research questions: basic and clinical sciences with ThE; clinical and social sciences with UseE; epidemiology and social, economic and political sciences with PopUseE. Similarly, the focus of investigation moves from biological organisms, to the individual at the physiological and then psychological, social and ecological level, and finally takes as perspective populations and societies as a whole. The framework may be applied to analyse issues concerning any approach. Hence, regarding consideration of HIV treatment as a means of prevention, examples of issues at each level would be: ThE: achieving adequate viral suppression and non-transmission to partners; UseE: facility and degree of adherence to treatment and medical follow-up; PopUseE: perceived validity of the strategy, feasibility of achieving adequate population coverage. Discussion: Use of the framework clarifies the questions that need to be addressed at all levels in order to improve effectiveness. Furthermore, the interconnectedness and complementary nature of research from the different scientific disciplines and the relative contribution of each become apparent. The proposed framework could bring greater rationality to the prevention effectiveness debate and facilitate communication between stakeholders.
Abstract:
The Office of the Minister for Integration (OMI), in conjunction with the Department of Education and Science (DES), commissioned an independent review to assist in the development of a national English language policy and framework for legally-resident adult immigrants. Horwath Consulting Ireland, in association with Rambøll Management and Matrix Knowledge Group, was awarded the contract to undertake this assignment. The terms of reference for the assignment state that: "proposed future developments will be governed by a clear strategy which reflects the importance of English language tuition in overall integration objectives and which addresses key coordination, technical, funding and service-delivery issues."
Abstract:
This paper presents the evaluation of a Standard-based Interoperability Framework (SIF) deployed between the Virgen del Rocío University Hospital (VRUH) Haemodialysis (HD) Unit and 5 outsourced HD centres in order to improve integrated care by automatically sharing patients' Electronic Health Records (EHR) and lab test reports. A pre-post study was conducted over fourteen months. The number of lab test reports, of both emergency and routine nature, relating to 379 outpatients was computed before and after the integration of the SIF. Before the SIF was introduced, 19.38 lab tests per patient were shared between VRUH and the HD centres, 5.52 of them of an emergency nature and 13.85 routine. After integrating the SIF, 17.98 lab tests per patient were shared, 3.82 of them of an emergency nature and 14.16 routine. The inclusion of the SIF in the HD Integrated Care Process thus led to an average reduction of 1.39 (p=0.775) lab test requests per patient, including a reduction of 1.70 (p=0.084) in those of an emergency nature, whereas an increase of 0.31 (p=0.062) was observed in routine lab tests. The reduction in emergency lab test requests implies a potential improvement in integrated care.
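The reported figures compare per-patient means before and after integration. A minimal sketch of such a pre-post comparison using a paired t-test is given below; the abstract does not state which statistical test was actually used, and the per-patient counts here are synthetic placeholders.

```python
# Sketch of a pre-post comparison of per-patient lab test counts.
# Assumes one count per patient before and after SIF integration;
# the abstract does not specify the test used, and this data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 379

# Placeholder counts centred on the reported emergency means.
before_emergency = rng.poisson(5.52, n_patients).astype(float)
after_emergency = rng.poisson(3.82, n_patients).astype(float)

mean_reduction = (before_emergency - after_emergency).mean()
t_stat, p_value = stats.ttest_rel(before_emergency, after_emergency)
print(f"mean reduction: {mean_reduction:.2f}, p = {p_value:.3f}")
```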
Abstract:
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data in an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and clearly lie within the expertise of forensic scientists. However, they are only part of the problem. Forensic intelligence includes other blocks with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders.
Abstract:
Background: To enhance our understanding of complex biological systems like diseases, we need to put all of the available data into context and use this to detect relations, patterns and rules which allow predictive hypotheses to be defined. Life science has become a data-rich science, with information about the behaviour of millions of entities like genes, chemical compounds, diseases, cell types and organs, organised in many different databases and/or spread throughout the literature. Existing knowledge such as genotype-phenotype relations or signal transduction pathways must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist, but so far none has proven entirely satisfactory. Results: To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions: We generate the first semantically integrated, COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease and gene-compound data, which are used for subsequent data analysis, modelling and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface, which is currently under development.
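As an illustration of the kind of sub-network retrieval described in the conclusions, the sketch below builds a small typed knowledge graph and extracts the component around a disease node. It uses networkx with hypothetical gene/compound names and relation labels; it is not the BioXM interface.

```python
# Illustrative sub-network retrieval from an integrated knowledge graph,
# in the spirit of the queries described above. Not the BioXM API;
# the nodes and relation types are hypothetical examples.
import networkx as nx

kb = nx.MultiDiGraph()
kb.add_edge("SERPINA1", "COPD", relation="gene-disease")
kb.add_edge("SERPINA1", "ELANE", relation="protein-protein")
kb.add_edge("ELANE", "sivelestat", relation="gene-compound")

def subnetwork(graph, seed, relations):
    """Return the connected sub-network around `seed`, restricted to
    edges whose relation type is in `relations`."""
    keep = [(u, v, d) for u, v, d in graph.edges(data=True)
            if d["relation"] in relations]
    sub = nx.MultiDiGraph(keep)
    nodes = nx.node_connected_component(sub.to_undirected(), seed)
    return sub.subgraph(nodes)

copd_net = subnetwork(kb, "COPD", {"gene-disease", "protein-protein"})
print(list(copd_net.edges(data=True)))
```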
Abstract:
Background: Single nucleotide polymorphisms (SNPs) are the most frequent type of sequence variation between individuals and represent a promising tool for finding genetic determinants of complex diseases and understanding differences in drug response. In this regard, it is of particular interest to study the effect of non-synonymous SNPs in the context of biological networks such as cell signalling pathways. UniProt provides curated information about the functional and phenotypic effects of sequence variation, including SNPs, as well as of mutations of protein sequences. However, no strategy has been developed to integrate this information with biological networks, with the ultimate goal of studying the impact of the functional effect of SNPs on the structure and dynamics of biological networks. Results: First, we identified the different challenges posed by the integration of the phenotypic effect of sequence variants and mutations with biological networks. Second, we developed a strategy for the combination of data extracted from public resources, such as UniProt, NCBI dbSNP, Reactome and BioModels. We generated attribute files containing phenotypic and genotypic annotations for the nodes of biological networks, which can be imported into network visualization tools such as Cytoscape. These resources allow the mapping and visualization of mutations and natural variations of human proteins and their phenotypic effects on biological networks (e.g. signalling pathways, protein-protein interaction networks, dynamic models). Finally, an example of the use of sequence variation data in the dynamics of a network model is presented. Conclusion: In this paper we present a general strategy for the integration of pathway and sequence variation data for visualization, analysis and modelling purposes, including the study of the functional impact of protein sequence variations on the dynamics of signalling pathways. This is of particular interest when the SNP or mutation is known to be associated with disease. We expect that this approach will help in the study of the functional impact of disease-associated SNPs on the behaviour of cell signalling pathways, which will ultimately lead to a better understanding of the mechanisms underlying complex diseases.
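A minimal sketch of the attribute-file generation step described above: writing a tab-delimited node table, keyed on UniProt accession, that Cytoscape can import. The example variants, annotations and column names are illustrative assumptions, not taken from the paper.

```python
# Sketch: generate a node attribute table importable into Cytoscape
# (File > Import > Table from File). Variants and annotations below are
# illustrative; real values would be extracted from UniProt/dbSNP records.
import csv

variant_annotations = {
    "P04637": {"variant": "R175H", "dbSNP": "rs28934578",
               "effect": "loss of DNA binding"},      # hypothetical entry
    "P38398": {"variant": "C61G", "dbSNP": "rs28897672",
               "effect": "disrupts RING domain"},     # hypothetical entry
}

with open("node_attributes.tsv", "w", newline="") as handle:
    writer = csv.writer(handle, delimiter="\t")
    writer.writerow(["shared name", "variant", "dbSNP", "effect"])
    for uniprot_id, ann in variant_annotations.items():
        writer.writerow([uniprot_id, ann["variant"], ann["dbSNP"],
                         ann["effect"]])
```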
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
We analyze the linkage between protectionism and invasive species (IS) hazard in the context of two-way trade and multilateral trade integration, two major features of real-world agricultural trade. Multilateral integration includes the joint reduction of tariffs and trade costs among trading partners. Multilateral trade integration is more likely to increase damages from IS than predicted by unilateral trade opening under the classic Heckscher-Ohlin-Samuelson (HOS) framework because domestic production (the base susceptible to damages) is likely to increase with expanding export markets. A country integrating its trade with a partner characterized by relatively higher tariff and trade costs is also more likely to experience increased IS damages via expanded domestic production for the same reason. We illustrate our analytical results with a stylized model of the world wheat market.
Abstract:
Background: The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. Results: We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under the GNU GPL license from http://sbi.imim.es/web/BIANA.php. Conclusions: BIANA's approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.
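The edge-transfer inference mentioned above can be illustrated as follows. This is a hedged sketch of the idea using networkx, not BIANA's actual API; the identifiers and the "shared property" used for transfer are hypothetical.

```python
# Illustration of edge transfer: if two entities share a property (here,
# the same gene symbol across namespaces), interactions known for one are
# transferred to the other. A sketch of the idea, not BIANA's API.
import networkx as nx

net = nx.Graph()
net.add_node("uniprot:P01112", gene="HRAS")
net.add_node("entrez:3265", gene="HRAS")   # same gene, other namespace
net.add_node("uniprot:P04049", gene="RAF1")
net.add_edge("uniprot:P01112", "uniprot:P04049", source="experiment")

def transfer_edges(graph, key):
    """Copy each edge to every pair of nodes sharing the `key` attribute."""
    by_key = {}
    for node, data in graph.nodes(data=True):
        by_key.setdefault(data.get(key), []).append(node)
    for u, v, data in list(graph.edges(data=True)):
        for u2 in by_key.get(graph.nodes[u].get(key), []):
            for v2 in by_key.get(graph.nodes[v].get(key), []):
                if u2 != v2 and not graph.has_edge(u2, v2):
                    graph.add_edge(u2, v2, source="transferred")

transfer_edges(net, "gene")
print(net.edges(data=True))  # entrez:3265 -- uniprot:P04049 now inferred
```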
Abstract:
The human brain displays heterogeneous organization in both structure and function. Here we develop a method to characterize brain regions and networks in terms of information-theoretic measures. We look at how these measures scale when larger spatial regions as well as larger connectome sub-networks are considered. This framework is applied to human brain fMRI recordings of resting-state activity and DSI-inferred structural connectivity. We find that strong functional coupling across large spatial distances distinguishes functional hubs from unimodal low-level areas, and that this long-range functional coupling correlates with structural long-range efficiency on the connectome. We also find a set of connectome regions that are both internally integrated and coupled to the rest of the brain, and which resemble previously reported resting-state networks. Finally, we argue that information-theoretic measures are useful for characterizing the functional organization of the brain at multiple scales.
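As one concrete example of the information-theoretic measures such an analysis builds on, the sketch below estimates mutual information between two synthetic regional time series with a simple histogram estimator. The paper's exact estimators are not specified here; the binning choice and the synthetic data are assumptions.

```python
# Sketch: histogram-based mutual information between two regional time
# series. Estimator and bin count are illustrative assumptions.
import numpy as np

def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
shared = rng.normal(size=500)                  # stand-in common driver
region_a = shared + 0.5 * rng.normal(size=500)
region_b = shared + 0.5 * rng.normal(size=500)
print(f"MI(a, b) = {mutual_information(region_a, region_b):.3f} bits")
```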
Abstract:
Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
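For reference, the null class of fractionally integrated processes can be written in standard notation; this is the conventional textbook definition, not quoted from the paper.

```latex
% Standard definition of a fractionally integrated process FI(d):
% $x_t$ is FI(d) if
\[
  (1-L)^d x_t = u_t ,
\]
% where $L$ is the lag operator and $u_t$ is a short-memory I(0) process.
% The fractional difference operator expands via the binomial series:
\[
  (1-L)^d = \sum_{k=0}^{\infty} \binom{d}{k} (-L)^k
          = 1 - dL - \frac{d(1-d)}{2!}L^2 - \cdots
\]
% $d=0$ gives short memory, $d=1$ a unit root, and $d \ge 1/2$ yields the
% non-stationary FI processes considered under the null hypothesis.
```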
Abstract:
The competitiveness of businesses is increasingly dependent on their electronic networks with customers, suppliers, and partners. While the strategic and operational impact of external integration and IOS adoption has been extensively studied, much less attention has been paid to the organizational and technical design of electronic relationships. The objective of our longitudinal research project is the development of a framework for understanding and explaining B2B integration. Drawing on existing literature and empirical cases, we present a reference model (a classification scheme for B2B integration). The reference model comprises technical, organizational, and institutional levels to reflect the multiple facets of B2B integration. In this paper we investigate the current state of electronic collaboration in global supply chains, focusing on the technical view. Using an in-depth case analysis we identify five integration scenarios. In the subsequent confirmatory phase of the research we analyse 112 real-world company cases to validate these five integration scenarios. Our research advances and deepens existing studies by developing a B2B reference model which reflects the current state of practice and is independent of specific implementation technologies. In the next stage of the research, the emerging reference model will be extended to create an assessment model for analysing the maturity level of a given company in a specific supply chain.
Abstract:
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
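To give a flavour of what an XML-based container format involves, the sketch below builds and reads a hypothetical container using only the standard library. The file layout, element names and attributes are assumptions for illustration, not the actual Connectome File Format schema; the toolkit's own library should be consulted for real use.

```python
# Hypothetical sketch of an XML-based connectome container: a zip archive
# holding a metadata header plus data files. The schema below is an
# assumption, NOT the real Connectome File Format (see http://www.cmtk.org/).
import io
import zipfile
import xml.etree.ElementTree as ET

meta = """<connectome title="demo">
  <network name="dsi_structural" src="Network/structural.graphml"/>
  <volume name="t1" src="Volume/t1.nii.gz"/>
</connectome>"""

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as container:
    container.writestr("meta.xml", meta)       # structured metadata
    container.writestr("Network/structural.graphml", "<graphml/>")

with zipfile.ZipFile(buffer) as container:
    root = ET.fromstring(container.read("meta.xml"))
    for child in root:
        print(child.tag, child.get("name"), "->", child.get("src"))
```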
Abstract:
The combination of advanced neuroimaging techniques and major developments in complex network science has given birth to a new framework for studying the brain: "connectomics." This framework provides the ability to describe and study the brain as a dynamic network and to explore how the coordination and integration of information processing may occur. In recent years this framework has been used to investigate the developing brain and has shed light on many dynamic changes occurring from infancy through adulthood. The aim of this article is to review this work and to discuss what we have learned from it. We will also use this body of work to highlight key technical aspects that are necessary, in general, for successful connectome analysis using today's advanced neuroimaging techniques. Finally, we seek to identify current limitations of such approaches, what can be improved, and how these points generalize to other topics in connectome research.
Abstract:
Nowadays, software projects are often based on combining software components that have been designed and implemented partly independently. This approach reduces development time and costs, so that more competitive software can be produced. This document discusses perspectives on component-based software engineering and the Microsoft .NET Framework environment, a development environment for component-based software. In addition, a case software project for the implementation of an extranet is presented.