837 results for Web-based tools


Relevance:

90.00%

Publisher:

Abstract:

Nanotechnology is a research area of recent development that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than the complexity at the conventional biological levels (from populations to the cell). Thus, any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the necessary framework to deal with such information challenges, nor adapted its methods and tools to the new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. The above observations determine the context of this doctoral dissertation, which is focused on analyzing the nanomedical domain in-depth, and developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, for leveraging available nanomedical data. The author analyzes, through real-life case studies, some research tasks in nanomedicine that would require or could benefit from the use of nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches to deal with data belonging to the nanomedical domain. Three different scenarios, comparing both the biomedical and nanomedical contexts, are discussed as examples of activities that researchers would perform while conducting their research: i) searching over the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research, and iii) searching clinical trial registries for clinical results related to their research. The development of these activities will depend on the use of informatics tools and services, such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively. For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research —where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios can be proven successful in the biomedical domain, whilst these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and available clinical trial registries to extract relevant information about experiments and results in nanomedicine —textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.—, followed by the development of mechanisms to automatically structure and analyze this information. This analysis resulted in the generation of a gold standard —a manually annotated training or reference set—, which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the necessary methods for organizing, curating and validating existing nanomedical data on a scale suitable for decision-making. 
Similar analyses of other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated and machine-interpretable reference datasets from the literature and other unstructured sources for further testing novel algorithms and inferring new valuable information for nanomedicine.
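The classification task described above lends itself to a brief illustration. The following sketch is not the dissertation's actual pipeline; it only shows, with hypothetical toy data and scikit-learn, how a manually annotated gold standard of trial summaries could train a classifier that separates nanodrug and nanodevice studies from trials of traditional pharmaceuticals.

```python
# Minimal sketch of supervised classification of clinical trial summaries.
# All example texts and labels are hypothetical; the dissertation's actual
# gold standard, features and model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical gold standard: 1 = nanodrug/nanodevice study, 0 = traditional drug
train_texts = [
    "Phase I study of liposomal doxorubicin nanoparticles in solid tumors",
    "Randomized trial of oral metformin in type 2 diabetes",
    "Evaluation of an iron oxide nanoparticle contrast agent for MRI",
    "Double-blind study of ibuprofen for postoperative pain",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features over word unigrams and bigrams feed a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["Safety of a polymeric nanoparticle vaccine adjuvant"]))
```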

Relevance:

90.00%

Publisher:

Abstract:

The semiconductor market is saturated with similar products and with distributors offering similar services. Co-creation processes, in which the customer collaborates in the definition and development of the product and provides information about its utility, performance and perceived value, resulting in a product that solves the customer's real needs, are becoming a step forward in the differentiation and expansion of the value chain.
The design and manufacture of semiconductors is quite complex, requires increasingly large investments and demands complete solutions, including an ecosystem that supports the development of the electronic equipment based on those semiconductors. The ease of dialogue and information sharing provided by the internet, web 2.0-based tools, and cloud services and applications favors the generation of ideas and the development and evaluation of products, and enables interaction among the various co-creators. Starting a co-creation process requires suitable methods and tools for interacting with participants and exchanging experiences, processes for integrating co-creation into the company's operations, and an organization and culture that support and promote the process. Among the most effective methods are netnography, which studies the conversations of communities on the internet; collaboration with lead users, who are ahead of the market and expect a great benefit from the satisfaction of their needs or desires; innovation studies, which allow users to define and often create their own solutions; and crowdsourcing, an open call to the community to solve challenges in exchange for some kind of reward. The specialization of subcontractors in the development and manufacture of semiconductors facilitates open innovation through collaboration with different entities in the different phases of the development of the semiconductor and its ecosystem. Co-creation is currently used in the semiconductor sector to detect ideas for designs and applications, often through innovation contests. Technical support and the evaluation of semiconductors are frequently the result of collaboration between members of a community fostered and supported by the manufacturers of the product. The EBVchips program gives small and medium-sized companies access to the co-creation of semiconductors with manufacturers in a process coordinated and sponsored by the distributor EBV. Configurable semiconductors such as FPGAs are another example of co-creation, whereby the manufacturer provides the integrated circuit and the development environment and customers create the final product by defining its features and functionality. This process is enriched with IP cores, design blocks that are often created by the user community.

Relevance:

90.00%

Publisher:

Abstract:

Evaluating and measuring the pedagogical quality of Learning Objects is essential for achieving successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, little effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented here shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared and a discussion of their weaknesses and strengths is provided.
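As an illustration of the kind of metric the paper evaluates, the sketch below computes one plausible LORI-based score, the mean of whichever of the nine 1-to-5 item ratings an evaluator supplied, and uses it to sort search results. The aggregation and the sample data are illustrative assumptions, not the paper's exact metrics.

```python
# One simple LORI-based quality score: the mean of available item ratings.
# The paper compares several metrics; this aggregation is illustrative only.
from statistics import mean

LORI_ITEMS = [
    "content_quality", "learning_goal_alignment", "feedback_adaptation",
    "motivation", "presentation_design", "interaction_usability",
    "accessibility", "reusability", "standards_compliance",
]

def lori_score(ratings: dict) -> float:
    """Average the LORI item ratings (1-5) that evaluators actually provided."""
    values = [ratings[item] for item in LORI_ITEMS if item in ratings]
    return mean(values) if values else 0.0

# Hypothetical search results with partial LORI evaluations
results = [
    {"title": "Fractions tutorial", "lori": {"content_quality": 4, "motivation": 5}},
    {"title": "Algebra drill", "lori": {"content_quality": 3, "reusability": 2}},
]
results.sort(key=lambda r: lori_score(r["lori"]), reverse=True)
print([r["title"] for r in results])  # quality-based ordering
```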

Relevance:

90.00%

Publisher:

Abstract:

Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantics-enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
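A rough sketch of the idea, using Python's rdflib, follows: publishing one Spanish-English translation pair as linked data. The namespace URIs and property names are illustrative placeholders, not the actual vocabulary of the lemon translation module described in the paper.

```python
# Sketch: one translation pair expressed as RDF triples. The trans:/lex:
# namespaces and property names are hypothetical stand-ins for the paper's
# translation module.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

TRANS = Namespace("http://example.org/translation#")  # hypothetical vocabulary
LEX = Namespace("http://example.org/lexicon/")        # hypothetical lexicon base

g = Graph()
g.bind("trans", TRANS)
g.bind("lex", LEX)

t = LEX["translation/ordenador-computer"]
g.add((t, RDF.type, TRANS.Translation))          # a reified translation relation
g.add((t, TRANS.source, LEX["es/ordenador"]))    # Spanish lexical entry
g.add((t, TRANS.target, LEX["en/computer"]))     # English lexical entry
g.add((t, TRANS.category, Literal("directEquivalent")))

print(g.serialize(format="turtle"))
```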

Relevance:

90.00%

Publisher:

Abstract:

The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI's Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching, a variety of tools (e.g. Blitz, Fasta, BLAST) are available that allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT.
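As a small illustration of the FTP access mentioned above, the sketch below downloads a file from the EBI FTP server with Python's ftplib. The directory and file names are assumptions and would need checking against the actual release layout.

```python
# Sketch: anonymous FTP retrieval of EMBL release data. The path and file
# name below are hypothetical examples.
from ftplib import FTP

ftp = FTP("ftp.ebi.ac.uk")           # EBI public FTP server
ftp.login()                          # anonymous login
ftp.cwd("/pub/databases/embl")       # assumed location of the EMBL data area
with open("relnotes.txt", "wb") as fh:
    ftp.retrbinary("RETR relnotes.txt", fh.write)  # assumed file name
ftp.quit()
```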

Relevance:

90.00%

Publisher:

Abstract:

Although a vast amount of life sciences data is generated in the form of images, most scientists still store images on extremely diverse and often incompatible storage media, without any type of metadata structure, and thus with no standard facility with which to conduct searches or analyses. Here we present a solution to unlock the value of scientific images. The Global Image Database (GID) is a web-based (http://www.gwer.ch/qv/gid/gid.htm) structured central repository for scientific annotated images. The GID was designed to manage images from a wide spectrum of imaging domains ranging from microscopy to automated screening. The annotations in the GID define the source experiment of the images by describing who the authors of the experiment are, when the images were created, the biological origin of the experimental sample and how the sample was processed for visualization. A collection of experimental imaging protocols provides details of the sample preparation and of the labeling or visualization procedures. In addition, the entries in the GID reference these imaging protocols with the probe sequences or antibody names used in labeling experiments. The GID annotations are searchable by field or globally. The query results are first shown as image thumbnail previews, enabling quick browsing prior to original-sized annotated image retrieval. The development of the GID continues, aiming at facilitating the management and exchange of image data in the scientific community, and at creating new query tools for mining image data.
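The annotation fields the abstract enumerates (authors, creation date, biological origin, sample processing, protocols and probes) suggest a simple record structure. The sketch below models such an entry in one hypothetical way; the field names are illustrative and are not the GID schema.

```python
# Sketch of a GID-style image annotation record; field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImageAnnotation:
    authors: list[str]          # who performed the experiment
    created: date               # when the image was created
    biological_origin: str      # organism, tissue, cell line, etc.
    processing: str             # how the sample was prepared for visualization
    imaging_protocol: str       # reference to an experimental imaging protocol
    probes: list[str] = field(default_factory=list)  # probe sequences or antibodies

record = ImageAnnotation(
    authors=["A. Researcher"],
    created=date(2001, 5, 14),
    biological_origin="Mus musculus, liver section",
    processing="immunofluorescence labeling",
    imaging_protocol="confocal-protocol-7",
    probes=["anti-tubulin antibody"],
)
print(record.biological_origin)
```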

Relevance:

90.00%

Publisher:

Abstract:

A new thermodynamic database for normal and modified nucleic acids has been developed. This Thermodynamic Database for Nucleic Acids (NTDB) includes sequence, structure and thermodynamic information as well as experimental methods and conditions. In this release, there are 1851 sequences containing both normal and modified nucleic acids. A user-friendly web-based interface has been developed to allow data searching under different conditions. Useful thermodynamic tools for the study of nucleic acids have been collected and linked for easy usage. NTDB is available at http://ntdb.chem.cuhk.edu.hk.
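The quantities such a database collects are tied together by the standard relation ΔG° = ΔH° − TΔS°. The sketch below simply evaluates that relation for a hypothetical duplex; the numbers are illustrative, not NTDB entries.

```python
# Gibbs free energy from enthalpy and entropy: dG = dH - T * dS.
# Input values below are illustrative, not taken from NTDB.
def delta_g(delta_h_kcal: float, delta_s_cal: float, temp_c: float = 37.0) -> float:
    """Free energy (kcal/mol) at the given temperature (Celsius)."""
    temp_k = temp_c + 273.15
    return delta_h_kcal - temp_k * (delta_s_cal / 1000.0)  # cal -> kcal

# Hypothetical duplex with dH = -50 kcal/mol and dS = -140 cal/(mol*K)
print(round(delta_g(-50.0, -140.0), 2))  # about -6.58 kcal/mol at 37 C
```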

Relevance:

90.00%

Publisher:

Abstract:

MEDLINEplus is a Web-based consumer health information resource, made available by the National Library of Medicine (NLM). MEDLINEplus has been designed to provide consumers with a well-organized, selective Web site facilitating access to reliable full-text health information. In addition to full-text resources, MEDLINEplus directs consumers to dictionaries, organizations, directories, libraries, and clearinghouses for answers to health questions. For each health topic, MEDLINEplus includes a preformulated MEDLINE search created by librarians. The site has been designed to match consumer language to medical terminology. NLM has used advances in database and Web technologies to build and maintain MEDLINEplus, allowing health sciences librarians to contribute remotely to the resource. This article describes the development and implementation of MEDLINEplus, its supporting technology, and plans for future development.
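The matching of consumer language to medical terminology mentioned above can be pictured as a simple lookup. The mapping below is a hypothetical toy, not NLM's actual term-mapping mechanism.

```python
# Toy consumer-phrase to medical-term lookup; the table is illustrative only.
CONSUMER_TO_MEDICAL = {
    "heart attack": "Myocardial Infarction",
    "high blood pressure": "Hypertension",
    "stroke": "Cerebrovascular Accident",
}

def to_medical_term(query: str) -> str:
    """Map a consumer phrase to a medical term, falling back to the input."""
    return CONSUMER_TO_MEDICAL.get(query.lower().strip(), query)

print(to_medical_term("Heart attack"))  # -> Myocardial Infarction
```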

Relevance:

90.00%

Publisher:

Abstract:

Diversity-based design, or the goal of ensuring that web-based information is accessible to as many diverse users as possible, has received growing international acceptance in recent years, with many countries introducing legislation to enforce it. This paper analyses web content accessibility levels in Spanish education portals according to the international guidelines established by the World Wide Web Consortium (W3C) and the Web Accessibility Initiative (WAI). Additionally, it suggests the calculation of an inaccessibility rate as a tool for measuring the degree of non-compliance with WAI Guidelines 2.0, as well as illustrating the significant gap that separates people with disabilities from digital education environments (with a 7.77% average). A total of twenty-one educational web portals with two different web depth levels (42 sampling units) were assessed for this purpose using the automated analysis tool Web Accessibility Test 2.0 (TAW, for its initials in Spanish). The present study reveals a general trend towards non-compliance with the technical accessibility recommendations issued by the W3C-WAI group (97.62% of the websites examined present errors at Level A conformance). Furthermore, despite the increasingly high number of legal and regulatory measures concerning accessibility, their practical application remains unsatisfactory. A greater level of involvement must be assumed in order to raise awareness and enhance training efforts towards accessibility in the context of collective Information and Communication Technologies (ICTs), since this represents not only a necessity but also an ethical, social, political and legal commitment to be assumed by society.
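Since the abstract does not reproduce the paper's exact formula, the sketch below shows one plausible reading of an inaccessibility rate: failed WCAG 2.0 checkpoints expressed as a percentage of those applicable to a page.

```python
# One plausible inaccessibility rate, assuming the metric is a failure
# percentage over applicable checkpoints; the paper's real definition may differ.
def inaccessibility_rate(failed: int, applicable: int) -> float:
    """Percentage of applicable accessibility checkpoints a page fails."""
    return 0.0 if applicable == 0 else 100.0 * failed / applicable

# e.g. a portal page failing 3 of 25 applicable checkpoints
print(inaccessibility_rate(3, 25))  # 12.0 (%)
```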

Relevance:

90.00%

Publisher:

Abstract:

As class contact times are reduced as a result of fiscal restraints in the modern tertiary sector, language instructors are placed in the position of having to find new ways to provide experience and continuity in language learning. Extending 'learning communities'—sites of learner knowledge exchange, exposure to diverse learning styles and strategies, and mutual support—beyond the classroom is one solution to maintaining successful linguistic competencies amongst learners. This, however, can conflict with the diverse extra-curricular commitments faced by tertiary students. The flexibility of web-based learning platforms provides one means of overcoming these obstacles. This study investigates learner perceptions of the use of the WebCT platform's computer mediated communication (CMC) tools as a means of extending the community of learning in tertiary Chinese language and non-language courses. Learner responses to Likert and open-ended questionnaires show that flexibility and reduction of negative affect are seen as significant benefits of 'virtual' interaction and communication, although responses are notably stronger in the non-language cohort than in the language cohort. While both learner cohorts acknowledge positive learning outcomes, CMC is not seen to consistently further interpersonal rapport beyond that established in the classroom. Maintaining a balance between web-based and classroom learning emerges as a concern, especially amongst language learners. [Author abstract, ed]

Relevance:

90.00%

Publisher:

Abstract:

Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by introducing an uninformative filter that eliminates uninformative annotations, controlled vocabularies to accurately reflect both the functional assignments and the evidence supporting them, and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate of reassignment of automated assignments by manual curation. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and then computational methods are iteratively modified and improved based on the results of manual curation.
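The "uninformative filter" can be pictured with a small sketch: discard annotation strings that carry no functional information before they reach curators. The term list below is an assumption for illustration, not the FANTOM2 pipeline's actual rules.

```python
# Sketch of an uninformative-annotation filter; the pattern list is illustrative.
import re

UNINFORMATIVE = re.compile(
    r"\b(unknown|unnamed|hypothetical protein|expressed sequence|riken cdna)\b",
    re.IGNORECASE,
)

def informative(annotations: list[str]) -> list[str]:
    """Keep only annotation strings not flagged as uninformative."""
    return [a for a in annotations if not UNINFORMATIVE.search(a)]

print(informative([
    "ATP-binding cassette transporter",
    "hypothetical protein",
    "unnamed protein product",
]))  # -> ['ATP-binding cassette transporter']
```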

Relevance:

90.00%

Publisher:

Abstract:

Despite the increased offering of online communication channels to support web-based retail systems, there is limited marketing research that investigates how these channels act singly, or in combination with offline channels, to influence an individual's intention to purchase online. If the marketer's strategy is to encourage online transactions, this requires a focus on consumer acceptance of the web-based transaction technology, rather than the purchase of the products per se. The exploratory study reported in this paper examines normative influences from referent groups in an individual's on and offline social communication networks that might affect their intention to use online transaction facilities. The findings suggest that for non-adopters, there is no normative influence from referents in either network. For adopters, one online and one offline referent norm positively influenced this group's intentions to use online transaction facilities. The implications of these findings are discussed together with future research directions.

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the use of a web-site for the dissemination of the community-based '10,000 steps' program which was originally developed and evaluated in Rockhampton, Queensland in 2001-2003. The website provides information and interactive activities for individuals, and promotes resources and programs for health promotion professionals. The dissemination activity was assessed in terms of program adoption and implementation. In a 2-year period (May 2004-March 2006) more than 18,000 people registered as users of the web-site (logging more than 8.5 billion steps) and almost 100 workplaces and 13 communities implemented aspects of the 10,000 steps program. These data support the use of the internet as an effective means of disseminating ideas and resources beyond the geographical borders of the original project. Following this preliminary dissemination, there remains a need for the systematic study of different dissemination strategies, so that evidence-based physical activity programs can be translated into more widespread public health practice. (c) 2006 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue the complete gene catalog that covers all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has been continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding sequence prediction programs and developed a Web-based annotation interface for simplifying the annotation procedures to reduce manual annotation errors. Automated coding sequence and function prediction was followed by manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Out of 102,801 transcripts, 56,722 were functionally annotated as protein coding (including partial or truncated transcripts), providing to our knowledge the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030. The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species.