880 results for library databases
Abstract:
To meet the increasing demands of complex inter-organizational processes and the pressure for continuous innovation and internationalization, new forms of organisation are clearly being adopted, fostering more intensive collaboration processes and the sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is among the fundamental processes to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has given little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly take place in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented in this workshop, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used for the comparative analysis of the similarity measures.
The purpose of the comparison is to validate the similarity measures against these objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization is the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured-information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. According to the authors, these questions become more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms in question. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
The clinical content of administrative databases includes, among other elements, patient demographic characteristics and codes for diagnoses and procedures. The data in these databases are standardized, clearly defined, readily available, less expensive than data collected by other means, and normally cover hospitalizations in entire geographic areas. Although subject to some limitations, these data are often used to evaluate the quality of healthcare. Under these circumstances, the quality of the data, for instance its errors or its completeness, is of central importance and should never be ignored. Both the minimization of data quality problems and a deep knowledge of the data (e.g., how to select a patient group) are important for users to trust and correctly interpret results. In this paper we present, discuss and give recommendations for some problems found in these administrative databases. We also present a simple tool that can be used to screen data quality through domain-specific data quality indicators. These indicators can contribute significantly to better data, to steps towards a continuous increase in data quality and, certainly, to better-informed decision-making.
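As a purely illustrative sketch (the paper's actual tool and indicators are not described in detail in the abstract), a domain-specific data quality indicator of the kind mentioned above could report the share of hospitalization records with a missing or malformed principal diagnosis code. The record layout, field names and code pattern below are invented assumptions:

```python
# Hypothetical illustration of one domain-specific data quality indicator:
# the fraction of records whose principal diagnosis code is missing or
# malformed. An ICD-9-CM-like numeric code pattern is assumed.
import re

DIAG_PATTERN = re.compile(r"^\d{3}(\.\d{1,2})?$")  # e.g. "250", "250.01"

def missing_or_invalid_diagnosis_rate(records):
    """Return the fraction of records whose 'diagnosis' field is absent
    or does not match the expected code pattern."""
    if not records:
        return 0.0
    bad = sum(
        1 for r in records
        if not r.get("diagnosis") or not DIAG_PATTERN.match(r["diagnosis"])
    )
    return bad / len(records)

records = [
    {"id": 1, "diagnosis": "250.01"},  # valid
    {"id": 2, "diagnosis": ""},        # missing
    {"id": 3, "diagnosis": "XYZ"},     # malformed
    {"id": 4, "diagnosis": "410"},     # valid
]
print(missing_or_invalid_diagnosis_rate(records))  # → 0.5
```

Screening several such indicators over time gives the kind of continuous data-quality monitoring the abstract argues for.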
Abstract:
Introduction – The exponential growth of information, above all scientific information, does not necessarily bring an improvement in the quality of searching for and using it. The concept of information literacy gains relevance and prominence insofar as it encompasses the competences needed to recognize when information is necessary and to act efficiently and effectively in obtaining and using it. In this context, the academic library takes on the role of privileged partner, preparing the moment at which the student feels able to produce and record new knowledge through writing. Objective – The ESTeSL Library restructured the sessions it had been running since the 2002/2003 academic year and launched a more formal project entitled «Saber usar a informação de forma eficiente e eficaz» ("Knowing how to use information efficiently and effectively"). Its objectives were: a) to promote improvement in the quality of academic and scientific work; b) to help reduce the risk of plagiarism; c) to increase students' confidence in their ability to use information resources; d) to encourage more active participation in the classroom; e) to contribute to the integration of pedagogical content and of the various information sources.
Method – Several short training sessions were run, covering different topics related to information literacy, namely: 1) information searching, with sessions dedicated to MEDLINE, RCAAP, SciELO, B-ON and Scopus; 2) impact factor of scientific journals: Journal Citation Reports and SciMAGO; 3) how to write a scientific abstract; 4) how to structure a scientific paper; 5) how to give an oral presentation; 6) how to avoid plagiarism; 7) bibliographic referencing using the Vancouver style; 8) use of reference managers: ZOTERO (a first approach for first-year undergraduate students) and reference management and the academic information network with MENDELEY (aimed at final-year students, master's students, lecturers and researchers). The project was presented to the academic community on the ESTeSL website; each session was announced individually on the site and by email. In 2015, promotion focused on the Library's new page (https://estesl.biblio.ipl.pt/), which hosted the information and resources covered in the training sessions. Registration was by email, free of charge and with no minimum or maximum number of sessions to attend. Results – In 2014 there were 87 registrations, with at least one participant present in every training session. In 2015 there were 190 registrations in total. New sessions were rescheduled at the request of students whose timetables were incompatible with those initially scheduled. Two days of back-to-back training (about 4 hours each day) were then organized, with content selected by the students; these sessions had a constant presence of about 30 students in the room. Overall, the information literacy sessions were attended by undergraduate students from all years, master's students, lecturers and researchers (from within and outside ESTeSL).
Conclusions – There is a need to introduce new content into the information literacy project. The time invested, the content and the interest shown by those who took part make it clear that this project is earning its place in the ESTeSL community and that information literacy contributes effectively to the construction and production of knowledge in academia.
Abstract:
Introduction – The information searches that higher education students carry out in electronic resources do not necessarily reflect mastery of the competences of searching, analysing, evaluating, selecting and making good use of the information retrieved. The concept of information literacy gains relevance and prominence insofar as it encompasses the competences needed to recognize when information is necessary and to act efficiently and effectively in obtaining and using it. Objective – The goal of the Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL) was to provide training in information literacy competences, outside ESTeSL, to students, lecturers and researchers. Methods – The training was integrated into national and international projects, depending on the target audiences, topics, content, workload and the request of the partner institution. The Fundação Calouste Gulbenkian was the main financial sponsor. Results – Several interventions took place in Portugal and abroad. In 2010, in Angola, at the Instituto Médio de Saúde do Bengo, 10 librarians were trained in building and managing a health library and introduced to information literacy (35h). In 2014, as part of the ERASMUS Intensive Programme OPTIMAX (Radiation Dose and Image Quality Optimisation in Medical Imaging), 40 radiology lecturers and students (from Portugal, the United Kingdom, Norway, the Netherlands and Switzerland) were trained in methodology and information searching in MEDLINE and Web of Science, and in Mendeley as a reference manager (4h). The final papers from this course were published as an ebook (http://usir.salford.ac.uk/34439/1/Final%20complete%20version.pdf), whose editorial review was the responsibility of the librarians.
Throughout 2014, at the Escola Superior de Educação, Escola Superior de Dança, Instituto Politécnico de Setúbal and Faculdade de Medicina de Lisboa, and throughout 2015, at the Universidade Aberta, Escola Superior de Comunicação Social, Instituto Egas Moniz, Faculdade de Letras de Lisboa and Centro de Linguística da Universidade de Lisboa, content was designed on the use of ZOTERO and Mendeley for reference management and on a new way of doing research. Each of these sessions (2.5h) involved about 25 final-year students, master's students and lecturers. In 2015, in Mozambique, at the Instituto Superior de Ciências da Saúde, 5 librarians and 46 students and lecturers were trained (70h). The content covered was: 1) management and organization of a health library (for librarians); 2) information literacy: information searching in MEDLINE, SciELO and RCAAP, reference managers, and how to avoid plagiarism (for librarians and final-year radiology students). The hours allocated to the students included tutoring of undergraduate monographs, in collaboration with two other lecturers from the project. Training at other Portuguese higher education institutions is scheduled for 2016. Similar training is also envisaged for Timor-Leste, whose content, dates and workload are yet to be scheduled. Conclusions – These initiatives benefit the institution (through visibility), the librarians (by showcasing their competences) and the students, lecturers and researchers (through the new competences gained and the autonomy acquired). The ESTeSL information literacy project has contributed effectively to the construction and production of knowledge in academia, both nationally and internationally, with the library as the privileged partner in this culture of collaboration.
Abstract:
A S. mansoni adult worm cDNA expression library was screened with sera from baboons in an early phase after infection. The clones that were positive with the early-infection sera were examined for reactivity with pre-infection sera and heterologous-infection sera. In order to discriminate a positive antibody reaction from reactivity due to residual anti-E. coli antibodies, an unrelated cDNA clone was plated together with each positive clone. The unrelated clone provided the negative background and the contrast necessary to discern a positive antibody reaction. In this way, we were able to eliminate selected clones that were positive with the pre-infection or heterologous-infection sera. This characterization of the expression library clones enabled us to quickly target only clones with the desired pattern of antibody reactivity for sequencing, subcloning, and expression.
Abstract:
Considering the scarcity of defined antigens that are genuinely useful and reliable for field studies, we propose an alternative method for selecting cDNA clones with potential use in the diagnosis of schistosomiasis. Human antibodies specific to a 31/32 kDa protein fraction (Sm31/32), dissociated from immune complexes, were used to screen clones from an adult worm cDNA library. Partial sequencing of five clones selected through this strategy showed them to be related to Schistosoma mansoni: two were identified as homologous to heat shock protein 70, one to glutathione S-transferase, one to a homeodomain protein, and one to a previously described EST (expressed sequence tag) of S. mansoni. This last clone was the most consistently reactive during screening with the anti-Sm31/32 antibodies dissociated from the immune complexes. The complete sequence of this clone was obtained, and translation of the sequence yielded only one ORF (open reading frame), coding for a protein of 57 amino acids. Based on this amino acid sequence, two peptides were chemically synthesized and evaluated separately against pools of serum samples from schistosomiasis patients and non-schistosomiasis individuals. Both peptides showed strong reactivity only against the positive pool, suggesting that these peptides may be useful as antigens for the diagnosis of schistosomiasis mansoni.
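The abstract mentions locating a single ORF that codes for a 57-amino-acid protein. As a generic, toy-level illustration of what ORF detection and translation involve (the sequence and function below are invented, not the clone from the paper), one can scan a cDNA sequence for an ATG start codon and translate codons until a stop:

```python
# Toy ORF finder using the standard genetic code; purely illustrative,
# not the analysis pipeline used in the paper.
bases = "TCAG"
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard codon table built from the conventional TCAG ordering.
CODON_TABLE = dict(zip((a + b + c for a in bases for b in bases for c in bases),
                       amino_acids))

def find_orfs(seq):
    """Yield (start, protein) for every ORF that begins with ATG and ends
    at a stop codon (*) in the same reading frame."""
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        protein = []
        for i in range(start, len(seq) - 2, 3):
            aa = CODON_TABLE.get(seq[i:i + 3])
            if aa is None or aa == "*":
                if aa == "*":          # complete ORF: report it
                    yield start, "".join(protein)
                break
            protein.append(aa)

seq = "CCATGGCTTGTTAAGG"  # invented sequence: ATG GCT TGT TAA
print(list(find_orfs(seq)))  # → [(2, 'MAC')]
```

In the paper's setting, the single ORF found this way provided the 57-residue amino acid sequence from which the two synthetic peptides were derived.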
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering (Engenharia Informática)
Abstract:
Supported by the Fundação para a Ciência e Tecnologia under a PhD scholarship (SFRH/BD/86280/2012)
Abstract:
We aimed to assess and synthesize the information available in the literature regarding the treatment of American tegumentary leishmaniasis in special populations. We searched MEDLINE (via PubMed), EMBASE, LILACS, SciELO, Scopus, Cochrane Library and mRCT databases to identify clinical trials and observational studies that assessed the pharmacological treatment of the following groups of patients: pregnant women, nursing mothers, children, the elderly, individuals with chronic diseases and individuals with suppressed immune systems. The quality of evidence was assessed using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach. The available evidence suggests that the treatments of choice for each population or disease entity are as follows: nursing mothers and children (meglumine antimoniate or pentamidine), patients with renal disease (amphotericin B or miltefosine), patients with heart disease (amphotericin B, miltefosine or pentamidine), immunosuppressed patients (liposomal amphotericin), the elderly (meglumine antimoniate), pregnant women (amphotericin B) and patients with liver disease (no evidence available). The quality of evidence is low or very low for all groups. Accurate controlled studies are required to fill in the gaps in evidence for treatment in special populations. Post-marketing surveillance programs could also collect relevant information to guide treatment decision-making.
Abstract:
An integrative literature review was conducted to synthesize available publications regarding the potential use of serological tests in leprosy programs. We searched the databases Literatura Latino-Americana e do Caribe em Ciências da Saúde, Índice Bibliográfico Espanhol em Ciências da Saúde, Acervo da Biblioteca da Organização Pan-Americana da Saúde, Medical Literature Analysis and Retrieval System Online, Hanseníase, National Library of Medicine, Scopus, Ovid, Cinahl, and Web of Science for articles investigating the use of serological tests for antibodies against phenolic glycolipid-I (PGL-I), ML0405, ML2331, leprosy IDRI diagnostic-1 (LID-1), and natural disaccharide octyl-leprosy IDRI diagnostic-1 (NDO-LID). From an initial pool of 3,514 articles, 40 full-length articles fulfilled our inclusion criteria. Based on these papers, we concluded that these antibodies can be used to assist in diagnosing leprosy, detecting neuritis, monitoring therapeutic efficacy, and monitoring household contacts or at-risk populations in leprosy-endemic areas. Thus, available data suggest that serological tests could contribute substantially to leprosy management.
Abstract:
The purpose of this paper is to review clinical studies on hypophosphatemia in pediatric intensive care unit patients, in order to determine the prevalence of, and risk factors associated with, this disorder. We searched the computerized bibliographic databases Medline, Embase, Cochrane Library, and LILACS to identify eligible studies. Search terms included: critically ill, pediatric intensive care, trauma, sepsis, infectious diseases, malnutrition, inflammatory response, surgery, starvation, respiratory failure, diuretic, steroid, antacid therapy, and mechanical ventilation. The search covered clinical trials published from January 1990 to January 2004. Studies concerning endocrinological disorders, genetic syndromes, rickets, renal diseases, anorexia nervosa, alcohol abuse, and prematurity were not included in this review. Of the 27 studies retrieved, only 8 involved pediatric patients, and most of these were case reports; one clinical trial and one retrospective study were identified. The prevalence of hypophosphatemia exceeded 50%. The factors most commonly associated with hypophosphatemia were refeeding syndrome, malnutrition, sepsis, trauma, and diuretic and steroid therapy. Given the high prevalence, clinical manifestations, and multiple risk factors, early identification of this disorder in critically ill children is crucial for adequate replacement therapy and for avoiding complications.
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to featuring several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows such mechanisms to be removed from the storage sub-component. This modification offers performance improvements under all workloads compared to the standalone engine, while scalability remains limited to read-only workloads.
Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads; scalability remains limited to intensive-read and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing any write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, it allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
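The Read-before-Write idea can be pictured, purely schematically, as a stable partition of a transaction's operation list so that every read precedes every write. The transaction representation and function name below are invented for illustration; the dissertation's engine-level mechanism is not reproduced, and the pattern only applies when no write depends on a value read after it:

```python
# Schematic sketch of the Read-before-Write (RbW) interaction pattern:
# reorder a transaction's operations so all reads come first, preserving
# the relative order within the reads and within the writes.
def apply_rbw(operations):
    """Stable partition of (kind, key) operation tuples: reads, then writes."""
    reads = [op for op in operations if op[0] == "read"]
    writes = [op for op in operations if op[0] == "write"]
    return reads + writes

txn = [("read", "stock"), ("write", "stock"),
       ("read", "balance"), ("write", "order")]
print(apply_rbw(txn))
# → [('read', 'stock'), ('read', 'balance'), ('write', 'stock'), ('write', 'order')]
```

Grouping the reads up front is what lets the engine defer acquiring write locks until the read phase is complete, which is the property the scalability results above rely on.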
Abstract:
The growth in the volume of generated data seen in recent years, now commonly called Big Data, has exposed weaknesses in relational technology for storing and handling such data, which led to the emergence of NoSQL databases. These fall into four distinct types, namely key/value, document, graph and column-family stores. This article focuses on column-based databases and analyses the two systems of this type considered most relevant: Cassandra and HBase.
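The column-family layout shared by Cassandra and HBase can be sketched as rows addressed by a row key, each holding named column families, each in turn holding column-to-value pairs. The following toy in-memory model is an invented illustration of that layout, not either system's actual API:

```python
# Minimal in-memory model of a column-family data layout (toy illustration):
# row_key → {column family → {column → value}}, with families declared
# up front, as both Cassandra and HBase require.
class ColumnFamilyStore:
    def __init__(self, families):
        self.families = set(families)
        self.rows = {}

    def put(self, row_key, family, column, value):
        if family not in self.families:
            raise KeyError(f"undeclared column family: {family}")
        row = self.rows.setdefault(row_key, {f: {} for f in self.families})
        row[family][column] = value

    def get(self, row_key, family, column):
        return self.rows[row_key][family][column]

store = ColumnFamilyStore(families=["info", "stats"])
store.put("user:42", "info", "name", "Ana")
store.put("user:42", "stats", "logins", 7)
print(store.get("user:42", "info", "name"))  # → Ana
```

Because columns inside a family are just per-row maps, different rows may carry different column sets, which is the schema flexibility that distinguishes these systems from relational tables.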
Abstract:
Master's Dissertation in Informatics Engineering (Engenharia Informática)
Abstract:
This work reports the implementation and verification of a new solver in the OpenFOAM® open-source computational library, able to cope with integral viscoelastic models based on the integral upper-convected Maxwell model. The code is verified by comparing its predictions with analytical solutions and with numerical results obtained with the differential upper-convected Maxwell model.
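For context (this equation is standard material, not taken from the abstract), the single-mode integral upper-convected Maxwell model is usually written in its Lodge integral form, with relaxation time $\lambda$, zero-shear-rate viscosity $\eta_0$, and the Finger (relative deformation) tensor $\mathbf{B}(t,t')$:

```latex
% Integral (Lodge) form of the single-mode upper-convected Maxwell model
\boldsymbol{\tau}(t) = \int_{-\infty}^{t}
  \frac{\eta_0}{\lambda^{2}}\, e^{-(t-t')/\lambda}\,
  \left[\mathbf{B}(t,t') - \mathbf{I}\right] \mathrm{d}t'
```

This is mathematically equivalent to the differential form $\boldsymbol{\tau} + \lambda \overset{\nabla}{\boldsymbol{\tau}} = 2\eta_0 \mathbf{D}$, which is why the differential solver provides a meaningful cross-check for the integral implementation.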