70 results for COMTRADE format
Abstract:
From the sense of the game to the reason for acting. How does a child learn the meaning of words from words whose meaning it does not understand? The thesis "Du sens du jeu à la raison d'agir" defends a particular conception of language acquisition in the child. Inspired by Wittgenstein's philosophy, it challenges an empiricist picture according to which the child would first learn the meaning of words in its prelinguistic experience of the world and then attach names to objects, whether internal or external, and argues instead for a deeply anthropological picture: the child learns to speak in verbal games in which it uses words inherited from a particular form of life. "It is not a question of explaining a language game by our lived experiences, but of noting a language game" (Wittgenstein, Philosophical Investigations, Gallimard, Paris, §655). The research method is also original: it is not empirical but grammatical. By means of a series of examples, videos of children learning to speak, it offers a synoptic representation of our ordinary uses of language, such as "saying mummy or daddy", "recognising a colour", "saying ouch!". The description of these language games is not meant to make us discover something new, but to make us see what is constantly before our eyes and yet goes unnoticed. The method thus aims at a certain education of the gaze. This return to the language games of those who are learning to speak gives a better understanding of the possibilities of our language and helps us see its impossibilities. It is a struggle against the false pictures we form of our language, which prevent us from seeing the actual use we make of our own words. If one agrees with Wittgenstein in regarding as purely mythical the conception of meaning as something associated with the word, one is then led to see differently, in turn, our concepts of learning, of understanding and of the speaking subject. Following these steps, different types of description (text, transcriptions of interactions, images in comic-strip format, videos on a DVD appended to the thesis) overlap, inviting the reader to see differently how a child learns the meaning of words from words whose meaning it does not understand, progressing from the "sense of the game to the reason for acting".
Abstract:
BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting them to recall information that they already know or would be expected to know, and by providing information or guidance in a more accessible and relevant format, at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update the EPOC Trials Search Co-ordinator searched the following databases between June 11 and June 19, 2012: The Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, Ovid (1946- ), Daily Update, and In-Process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager; and INSPEC, Engineering Village. The authors reviewed reference lists of related reviews and studies. SELECTION CRITERIA: We included individual- or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies published within the last 10 years. For each study, we extracted the primary outcome when it was defined, or calculated the median effect size across all reported outcomes. We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies, using the primary outcome or median outcome as the representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practices, with a median improvement in processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% for no response, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% for no explanation, P value = 0.02). Median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR 12.2% to 20.7%) compared with other targeted behaviours. In the only study with sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements.
AUTHORS' CONCLUSIONS: There is moderate-quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvement in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice. The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
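For illustration only, the snippet below shows how a median absolute improvement and its IQR, as reported above, are computed across per-study effect sizes; the numbers are invented, not the review's data.

```python
# Illustrative only: median absolute improvement and IQR across per-study
# effect sizes, as in the analysis described above. Numbers are invented.
from statistics import median, quantiles

improvements = [2.1, 3.9, 5.2, 7.0, 8.8, 12.5, 16.4]  # % improvement per study
q1, _, q3 = quantiles(improvements, n=4)  # quartile cut points
print(f"median = {median(improvements):.1f}% (IQR: {q1:.1f}% to {q3:.1f}%)")
```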
Abstract:
This paper presents an ITK implementation for exporting the contours of automated segmentation results to the DICOM-RT Structure Set format. The "radiotherapy structure set" (RTSTRUCT) object of the DICOM standard is used for the transfer of patient structures and related data between the devices found within and outside the radiotherapy department. It mainly contains the information of regions of interest (ROIs) and points of interest (e.g. dose reference points). In many cases, rather than manually drawing these ROIs on the CT images, one can benefit from the automated segmentation algorithms already implemented in ITK. But at present, it is not possible to export the ROIs obtained from ITK to the RTSTRUCT format. In order to bridge this gap, we have developed a framework for exporting contour data to RTSTRUCT. We provide here the complete implementation of the RTSTRUCT exporter and present the details of the pipeline used. Results on a 3-D CT image of the head and neck (H&N) region are presented.
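The paper's exporter is implemented in ITK/C++; as a language-neutral illustration of the kind of data an RTSTRUCT object carries, the hedged Python sketch below writes one ROI with a single closed planar contour using pydicom. A clinically valid structure set would also need patient, study, and frame-of-reference modules, which are omitted here; the ROI name and coordinates are toy values, and this is not the authors' exporter.

```python
# Minimal, incomplete RT Structure Set written with pydicom, to show the
# ROI/contour data an RTSTRUCT carries. Not the paper's ITK/C++ exporter.
import datetime
from pydicom.dataset import Dataset, FileDataset, FileMetaDataset
from pydicom.sequence import Sequence
from pydicom.uid import generate_uid

RTSTRUCT_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.481.3"  # RT Structure Set Storage

file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = RTSTRUCT_SOP_CLASS
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = "1.2.840.10008.1.2.1"  # Explicit VR Little Endian

ds = FileDataset("rtstruct.dcm", {}, file_meta=file_meta, preamble=b"\x00" * 128)
ds.is_little_endian = True
ds.is_implicit_VR = False
ds.SOPClassUID = RTSTRUCT_SOP_CLASS
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "RTSTRUCT"
ds.StructureSetLabel = "AutoSeg"
ds.StructureSetDate = datetime.date.today().strftime("%Y%m%d")

# One ROI entry: name and number would come from the segmentation step.
roi = Dataset()
roi.ROINumber = 1
roi.ROIName = "tumor"  # hypothetical ROI label
roi.ROIGenerationAlgorithm = "AUTOMATIC"
ds.StructureSetROISequence = Sequence([roi])

# One closed planar contour on a single slice (toy coordinates, in mm).
contour = Dataset()
contour.ContourGeometricType = "CLOSED_PLANAR"
points = [(0.0, 0.0, 10.0), (10.0, 0.0, 10.0), (10.0, 10.0, 10.0)]
contour.NumberOfContourPoints = len(points)
contour.ContourData = [c for p in points for c in p]  # flattened x,y,z triplets

roi_contour = Dataset()
roi_contour.ReferencedROINumber = 1
roi_contour.ContourSequence = Sequence([contour])
ds.ROIContourSequence = Sequence([roi_contour])

ds.save_as("rtstruct.dcm", write_like_original=False)
```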
Abstract:
The induction of fungal metabolites by fungal co-cultures grown on solid media was explored using multi-well co-cultures in 2 cm diameter Petri dishes. Fungi were grown in 12-well plates to easily and rapidly obtain the large number of replicates necessary for employing metabolomic approaches. Fungal culture in such a format accelerated the production of metabolites by several weeks compared with the large-format 9 cm Petri dishes. This strategy was applied to a co-culture of a Fusarium and an Aspergillus strain. The metabolite composition of the cultures was assessed using ultra-high pressure liquid chromatography coupled to electrospray ionisation and time-of-flight mass spectrometry, followed by automated data mining. The de novo production of metabolites was dramatically increased by nutrient reduction. A time-series study of the induction of the fungal metabolites of interest over nine days revealed various induction patterns. The concentrations of most of the de novo induced metabolites increased over time, but interesting patterns were observed, such as the presence of some compounds only at certain time points. This result indicates the complexity and dynamic nature of fungal metabolism. The large-scale production of the compounds of interest was verified by co-culture in 15 cm Petri dishes; most of the induced metabolites of interest (16/18) were produced as effectively as on a small scale, although not in the same time frames. Large-scale production is a practical solution for the future production, identification and biological evaluation of these metabolites.
Abstract:
This study was designed to check the equivalence of the ZKPQ-50-CC (Spanish and French versions) across Internet online (OL) and paper-and-pencil (PP) answer formats. Differences in means and deviations were significant for some scales, but effect sizes were minimal except for Sociability in the Spanish sample. Alpha reliabilities were also very similar in both versions, with no significant differences between formats. A robust factorial structure was found for the two formats, and the average congruency coefficients were 0.98. The goodness-of-fit indexes obtained by confirmatory factor analysis are very similar to those obtained in the ZKPQ-50-CC validation study and do not differ between the two formats. The multi-group analysis confirms the equivalence of the OL and PP formats in both countries. Overall, these results support the validity and reliability of the Internet as a method for investigations using the ZKPQ-50-CC.
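For readers unfamiliar with the reliability statistic compared across formats, the sketch below shows the standard Cronbach's alpha computation on a respondents-by-items score matrix; the data are simulated, not ZKPQ responses.

```python
# Minimal sketch of the Cronbach's alpha computation behind the reported
# "alpha reliabilities"; the item matrix below is simulated, not ZKPQ data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored answers."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
fake_scale = rng.integers(0, 2, size=(200, 10)).astype(float)  # 200 x 10 binary items
print(f"alpha = {cronbach_alpha(fake_scale):.2f}")
```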
Abstract:
Health literacy is a concept covering the skills and resources that each person needs in order to manage the information required to maintain good health. There is, however, no consensus on a single definition, which complicates the proper integration of this topic into the health field. Low health literacy is a frequent problem, affecting close to 50% of the Swiss population (OECD 2005). The issue is of major importance because individuals with insufficient literacy are at greater risk of poor health (AMA 1999), and the phenomenon is estimated to account for 3 to 6% of Swiss healthcare costs, based on figures extrapolated from American studies (Spycher 2006). Family physicians, regarded as one of the main sources of health information for the population in Switzerland, play a central role in promoting a good level of health. The idea for this work follows from the results of the Master's thesis of Lara Van Leckwyck, which examined how family physicians in the Lausanne region perceive their patients' literacy levels. These physicians consider that they have the resources needed to care for patients with low literacy, but they are open to new tools. We therefore decided to attempt the experiment and create a tool in the form of a four-sided A6 booklet intended to help these physicians. The objectives are as follows: 1) to raise family physicians' awareness of, and inform them about, health literacy; 2) to offer help in screening their patients' level of health literacy; 3) to propose support for the care of patients with low literacy by providing a list of practical measures, based on a literature review, to assist general internal medicine physicians; and 4) to propose a selection of useful web addresses related to health literacy. This tool was presented to 15 residents and chief residents of the Policlinique médicale universitaire (PMU) in Lausanne, as well as to 30 general internal medicine physicians practising in the Lausanne region, who will evaluate its usefulness in a future Master's thesis. The main limitations of such a project are the format chosen for the tool and the fact of collecting and transcribing information on a subject studied mainly in English-speaking countries. We can already anticipate that the physicians who test the tool will point out adaptations needed in the translation of some of its elements (notably the screening questions). Further work, conducted differently, could also be the subject of a future thesis.
Abstract:
A novel approach for the identification of tumor antigen-derived sequences recognized by CD8+ cytolytic T lymphocytes (CTL) consists in using synthetic combinatorial peptide libraries. Here we have screened a library composed of 3.1 x 10^11 nonapeptides arranged in a positional scanning format, in a cytotoxicity assay, to search for the antigen recognized by melanoma-reactive CTL of unknown specificity. The results of this analysis enabled the identification of several optimal peptide ligands, as most of the individual nonapeptides deduced from the primary screening were efficiently recognized by the CTL. The results of the library screening were also analyzed with a mathematical approach based on a model of independent and additive contributions of individual amino acids to antigen recognition. This biometrical data analysis enabled the retrieval, in public databases, of the native antigenic peptide SSX-2(41-49), whose sequence is highly homologous to those deduced from the library screening, among the ones with the highest stimulatory score. These results underline the high predictive value of positional scanning synthetic combinatorial peptide library analysis and encourage its use for the identification of CTL ligands.
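The additive model can be made concrete in a few lines of code: each position contributes a weight for the residue it holds, and a candidate peptide's stimulatory score is the sum over positions. The sketch below is illustrative only; the weights and sequences are invented, not the paper's data.

```python
# Illustrative sketch of the additive scoring model: each position
# contributes independently, and a nonapeptide's score is the sum.
# All weights and sequences below are invented, not the paper's data.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# weights[pos][aa]: activity of the positional-scanning sublibrary with
# residue `aa` fixed at position `pos` (hypothetical values).
weights = [dict.fromkeys(AMINO_ACIDS, 0.0) for _ in range(9)]
weights[0]["S"] = 1.2
weights[1]["S"] = 0.9
weights[8]["V"] = 1.1

def score(peptide: str) -> float:
    """Sum of independent per-position contributions."""
    assert len(peptide) == 9, "nonapeptides only"
    return sum(weights[i][aa] for i, aa in enumerate(peptide))

# Rank candidate database peptides by predicted recognition:
candidates = ["SSWGSLTAV", "ASWGSLTAA", "GGWGSLTAV"]  # toy sequences
for pep in sorted(candidates, key=score, reverse=True):
    print(pep, round(score(pep), 2))
```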
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort to solve each problem from scratch. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns; performance for datasets with non-typical layouts is therefore often unacceptable. Other challenges include schema creation, updates, and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach (data stay mostly in the CSV files); "zero configuration" (no need to specify a database schema); a C++ implementation using boost [1], SQLite [2] and Qt [3] that requires no installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files, which ensure efficient plan execution; effortless support for millions of columns; easy handling of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
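As a rough illustration of the "zero configuration" idea, the sketch below derives a schema from the CSV header and creates an index for the queried column on demand. Unlike the paper's system, which leaves the data in the CSV files ("no copy"), this simplified version imports the rows into an in-memory SQLite database; the function name and the example file are invented for the sketch.

```python
# Illustrative sketch only: the paper's system queries CSVs in place ("no
# copy"); here the "zero configuration" idea is approximated by deriving
# the schema from the CSV header and importing into in-memory SQLite.
import csv
import sqlite3

def query_csv(path: str, sql: str, index_column: str | None = None):
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows)
        conn = sqlite3.connect(":memory:")
        cols = ", ".join(f'"{c}"' for c in header)
        conn.execute(f"CREATE TABLE data ({cols})")  # schema from header
        conn.executemany(
            f"INSERT INTO data VALUES ({', '.join('?' * len(header))})", rows
        )
        if index_column:  # mimic dynamic index creation for queried columns
            conn.execute(f'CREATE INDEX idx ON data ("{index_column}")')
        return conn.execute(sql).fetchall()

# e.g. query_csv("measurements.csv",
#                "SELECT * FROM data WHERE temp > 20", index_column="temp")
```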
Abstract:
A comparative analysis of the film L'Escalier (2003) by Swiss director Frédéric Mermoud, on the one hand with an excerpt from the screenplay, and on the other with the short story, published by Actes Sud, that the author retroactively derived from the film. Questions of point of view and temporality (iterativity) as they arise in a short-film format.
Abstract:
The goal of this study was to compare the quantity and purity of DNA extracted from biological traces using the QIAsymphony robot with that of the manual QIAamp DNA mini kit currently in use in our laboratory. We found that the DNA yield of the robot was 1.6-3.5 times lower than that of the manual protocol. This resulted in a loss of 8% and 29% of the alleles correctly scored when analyzing 1/400 and 1/800 diluted saliva samples, respectively. Specific tests showed that the QIAsymphony was at least 2-16 times more efficient at removing PCR inhibitors. The higher purity of the DNA may therefore partly compensate for the lower DNA yield obtained. No case of cross-contamination was observed among samples. After purification with the robot, DNA extracts can be automatically transferred into 96-well plates, an ideal format for subsequent RT-qPCR quantification and DNA amplification. Less hands-on time and a reduced risk of operational errors represent additional advantages of the robotic platform.
Abstract:
We previously introduced two new protein databases (trEST and trGEN) of hypothetical protein sequences predicted from EST and HTG sequences, respectively. Here, we present the updates made to these two databases, plus a new database (trome), which uses alignments of EST data to HTG or full genomes to generate virtual transcripts and coding sequences. This new database is of higher quality and, since it stores the information in a much denser format, it is of much smaller size. These new databases are in a Swiss-Prot-like format and are updated on a weekly basis (trEST and trGEN) or every 3 months (trome). They can be downloaded by anonymous ftp from ftp://ftp.isrec.isb-sib.ch/pub/databases.
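Since the databases are distributed in a Swiss-Prot-like flat-file format, records can presumably be read with any Swiss-Prot parser. The sketch below uses Biopython's Bio.SwissProt module; the file name is hypothetical, and it is an assumption that every trEST/trGEN/trome entry parses cleanly with it.

```python
# Hedged sketch: iterating over a Swiss-Prot-formatted flat file with
# Biopython. The file name is hypothetical, and it is an assumption that
# trEST/trome entries parse cleanly with this parser.
from Bio import SwissProt

with open("trest.dat") as handle:
    for record in SwissProt.parse(handle):
        print(record.entry_name, record.sequence_length, record.description)
```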
Abstract:
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit, a set of free and extensible open-source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, enabling easy development and community contributions. Integration with tools from the scientific Python community makes it possible to leverage numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
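As a rough illustration of the container idea behind such a file format, the hedged sketch below opens a zip-style bundle and reads an XML manifest. The file name and member names are hypothetical, not the Connectome File Format's actual layout.

```python
# Rough sketch of the container idea: a zip archive bundling an XML
# manifest with data files. File and member names here are hypothetical,
# not the actual Connectome File Format layout.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("subject01.cff") as container:       # hypothetical file
    manifest = ET.fromstring(container.read("meta.xml"))  # hypothetical member
    for element in manifest:
        # list whatever data objects the manifest declares
        print(element.tag, element.attrib)
```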
Abstract:
Introduction: The field of connectomic research is growing rapidly, as a result of methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008) into so-called connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool to support both clinical researchers and neuroscientists in investigating such connectome data.
Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries.
Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch. Other layouts are possible.
* Picking functionality to select nodes, select edges, retrieve more node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters (see the sketch after this abstract).
* Arbitrary metadata can be stored for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting. Application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data. Selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset. Connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org).
Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates relevant datatypes, and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
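The hedged sketch below mirrors the GraphML-loading and edge-thresholding workflow that the feature list describes, using NetworkX directly rather than the viewer's interface; the file name and the "fiber_count" edge attribute are assumptions for illustration, not the tool's API.

```python
# Hedged sketch of loading a GraphML connectome and thresholding edges,
# done here directly with NetworkX rather than through the viewer's GUI.
# The file name and the "fiber_count" edge attribute are assumptions.
import networkx as nx

g = nx.read_graphml("average_connectome.graphml")  # hypothetical file

strong = nx.Graph()
strong.add_nodes_from(g.nodes(data=True))
strong.add_edges_from(
    (u, v, d) for u, v, d in g.edges(data=True)
    if float(d.get("fiber_count", 0)) >= 20  # keep robust connections only
)
print(strong.number_of_edges(), "edges above threshold")
print("density:", nx.density(strong))  # one of many NetworkX measures
```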
Abstract:
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.