951 results for Scientific workflows
Abstract:
This research aims to diachronically analyze the worldwide scientific production on open access in the academic and scientific context, in order to contribute to knowledge and visualization of its main actors. As a method, bibliographical, descriptive and analytical research was used, with the contribution of bibliometric studies, especially production indicators, scientific collaboration indicators and indicators of thematic co-occurrence. The Scopus database was used as the source to retrieve articles on the subject, yielding a corpus of 1179 articles. Using Bibexcel software, frequency tables were constructed for the variables; Pajek software was used to visualize the collaboration network and VOSviewer to build the keyword network. As for the results, the most productive researchers come from countries such as the United States, Canada, France and Spain. Journals with high impact in the academic community have disseminated the newly constructed knowledge. A collaboration network with a few subnets whose co-authors come from different countries was observed. In conclusion, this study makes it possible to identify the themes of debate that mark the development of open access at the international level, and to state that open access is one of the new emerging and frontier fields of library and information science.
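The keyword co-occurrence analysis described in this abstract (performed there with Bibexcel and VOSviewer) can be sketched in plain Python; the sample keyword lists below are hypothetical, not data from the study.

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists, one per article.
articles = [
    ["open access", "repositories", "scholarly communication"],
    ["open access", "bibliometrics", "scholarly communication"],
    ["open access", "repositories", "self-archiving"],
]

# Count how often each pair of keywords appears in the same article.
cooccurrence = Counter()
for keywords in articles:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# Edges weighted by co-occurrence frequency form the keyword network.
for (a, b), weight in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

Tools such as VOSviewer then lay out exactly this kind of weighted edge list as a map, clustering keywords that co-occur frequently.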
Abstract:
This study aims to estimate the influence of open access on the publication patterns of the Argentine scientific community in different subject fields (Medicine; Physics and Astronomy; Agricultural and Biological Sciences; Social Sciences and Humanities), based on an analysis of the access model of the journals chosen to communicate research results in the period 2008-2010. The output was retrieved from the SCOPUS database, and the journals' access models were determined by consulting DOAJ, e-revist@s, SciELO, Redalyc, PubMed, Romeo-Sherpa and Dulcinea. The real and potential accessibility of the national scientific output via the gold and green roads, respectively, was analyzed, as well as access by subscription through the Biblioteca Electrónica de Ciencia y Tecnología of the Ministerio de Ciencia, Tecnología e Innovación Productiva de la Nación Argentina. The results show that, on average and across the subject areas studied, 70% of the Argentine scientific output visible in SCOPUS is published in journals that adhere in one way or another to the open access movement, split between 27% for the gold road and 43% for journals that allow self-archiving via the green road. Between 16% and 30% of the articles published in journals that allow self-archiving (depending on the subject area) are accessed by subscription. The percentage of journals without open access is around 30% in Social Sciences and Humanities, and reaches about 45% in the remaining areas. It is concluded that Argentina is very well placed to release a high percentage of the scientific literature generated in the country under the open access model through institutional repositories and self-archiving mandates, which would also help to increase the accessibility and long-term preservation of the national scientific and technological output.
Abstract:
Our goal is to share the experience of implementing and developing, in OJS, the Scientific Journals Portal of the Facultad de Humanidades y Ciencias de la Educación (FaHCE) of the Universidad Nacional de La Plata (UNLP), through which the scientific journals of this academic unit are published in open access under Creative Commons licenses, including both electronic journals and the digital versions of those in print format. The Journals Portal project, launched in December 2012 and run by the Publications Area, unified access to the institution's journals that belong to the Núcleo Básico de Revistas Científicas Argentinas (CAICYT-CONICET). Its goal is to facilitate editorial management, compliance with publication schedules and with the evaluation parameters suggested by regional and international databases, and the automation of submissions to databases, in order to increase visibility while optimizing working time.
Abstract:
This paper describes the experience of implementing and developing the journals portal of the Facultad de Humanidades y Ciencias de la Educación of the Universidad Nacional de La Plata, so that it can be drawn on by anyone undertaking similar initiatives. It first reviews the Facultad's track record in publishing scientific journals and the library work carried out to improve their visibility. It then presents the tasks undertaken by the Facultad's Prosecretaría de Gestión Editorial y Difusión (PGEyD) to put the portal into operation. Particular attention is paid to the customization of the software, the methodology used for bulk-loading information into the system (users and back issues), and the procedures that allow all the portal's contents to be included semi-automatically in the institutional repository and the web catalog. The support and training being provided to editors is then discussed, followed by the results achieved in one year of work: creation of 10 journals, migration of 4 complete titles, and inclusion of 25% of the contributions published in the journals edited by FaHCE. Finally, a series of challenges that the Prosecretaría has set itself to improve the portal and to optimize intra- and inter-institutional workflows are outlined.
Abstract:
Sedimentary sequences in ancient or long-lived lakes can reach several thousands of meters in thickness and often provide an unrivalled perspective of the lake's regional climatic, environmental, and biological history. Over the last few years, deep-drilling projects in ancient lakes became increasingly multi- and interdisciplinary, as, among others, seismological, sedimentological, biogeochemical, climatic, environmental, paleontological, and evolutionary information can be obtained from sediment cores. However, these multi- and interdisciplinary projects pose several challenges. The scientists involved typically approach problems from different scientific perspectives and backgrounds, and setting up the program requires clear communication and the alignment of interests. One of the most challenging tasks, besides the actual drilling operation, is to link diverse datasets with varying resolution, data quality, and age uncertainties to answer interdisciplinary questions synthetically and coherently. These problems are especially relevant when secondary data, i.e., datasets obtained independently of the drilling operation, are incorporated in analyses. Nonetheless, the inclusion of secondary information, such as isotopic data from fossils found in outcrops or genetic data from extant species, may help to achieve synthetic answers. Recent technological and methodological advances in paleolimnology are likely to increase the possibilities of integrating secondary information. Some of the new approaches have started to revolutionize scientific drilling in ancient lakes, but at the same time, they also add a new layer of complexity to the generation and analysis of sediment-core data. The enhanced opportunities presented by new scientific approaches to study the paleolimnological history of these lakes, therefore, come at the expense of higher logistic, communication, and analytical efforts. 
Here we review types of data that can be obtained in ancient lake drilling projects and the analytical approaches that can be applied to empirically and statistically link diverse datasets to create an integrative perspective on geological and biological data. In doing so, we highlight strengths and potential weaknesses of new methods and analyses, and provide recommendations for future interdisciplinary deep-drilling projects.
Abstract:
In recent years, a variety of systems have been developed that export the workflows used to analyze data and make them part of published articles. We argue that the workflows that are published in current approaches are dependent on the specific codes used for execution, the specific workflow system used, and the specific workflow catalogs where they are published. In this paper, we describe a new approach that addresses these shortcomings and makes workflows more reusable through: 1) the use of abstract workflows to complement executable workflows to make them reusable when the execution environment is different, 2) the publication of both abstract and executable workflows using standards such as the Open Provenance Model that can be imported by other workflow systems, 3) the publication of workflows as Linked Data that results in open web accessible workflow repositories. We illustrate this approach using a complex workflow that we re-created from an influential publication that describes the generation of 'drugomes'.
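The publication of workflows as Linked Data described in this abstract can be illustrated with a minimal sketch that serializes a two-step workflow as RDF triples in N-Triples form. The resource URIs and the workflow steps are hypothetical; the predicate names follow Open Provenance Model edge types (used, wasGeneratedBy, wasControlledBy), but the namespace URI is illustrative, not the paper's actual schema.

```python
# Hypothetical base namespace for workflow resources.
BASE = "http://example.org/workflow/"
# Illustrative namespace for OPM-style relations (assumption).
OPM = "http://openprovenance.org/model#"

# A tiny provenance graph: a process used an artifact, was controlled
# by an agent, and generated a result artifact.
triples = [
    (BASE + "align", OPM + "used", BASE + "genome_data"),
    (BASE + "align", OPM + "wasControlledBy", BASE + "blast_agent"),
    (BASE + "result", OPM + "wasGeneratedBy", BASE + "align"),
]

def to_ntriples(triples):
    """Render (subject, predicate, object) URI triples as N-Triples lines."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

print(to_ntriples(triples))
```

Publishing such triples at dereferenceable URIs is what turns a workflow catalog into an open, web-accessible Linked Data repository that other workflow systems can import.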
Abstract:
This paper presents the development of a model for scientific and technological knowledge transfer in Mexico, as a means of strengthening the limited ties between the scientific and industrial environments. The proposal is based on a case-study analysis of eight organizations (research centers and firms) with varying degrees of experience in scientific and technological knowledge transfer. The analysis highlights the synergistic use of each organization's organizational and technological capabilities as a way to identify the knowledge transfer mechanisms best suited to establishing cooperative processes and to achieving results in R&D and innovation activities.
Abstract:
Technofusion is the scientific-technical installation for fusion research in Spain, based on three pillars:
• It is an open facility for European users.
• It is a facility with instrumentation not accessible to small research groups.
• It is designed to be closely coordinated with the European Fusion Program.
With a budget of 80-100 M€ over five years, several top laboratories will be constructed.
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs, such as loops and branches, and embedded component services.
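The idea of inferring attributes of workflow components from known attributes of workflow inputs can be illustrated with a small forward-chaining pass over a workflow graph. The workflow, the attribute, and the propagation rule below are all hypothetical; this sketch does not implement the paper's sharing analysis over Horn clauses, only the general fixpoint-style propagation it relies on.

```python
# Hypothetical workflow: each activity maps its inputs to an output.
workflow = {
    "anonymize": {"inputs": ["patient_records"], "output": "clean_records"},
    "aggregate": {"inputs": ["clean_records"], "output": "statistics"},
    "publish":   {"inputs": ["statistics"], "output": "report"},
}

# Known attributes of the workflow inputs (the starting point).
attributes = {"patient_records": {"confidential"}}

# Forward-chain to a fixpoint: an output inherits the attributes of
# all of its inputs (a deliberately simple propagation rule).
changed = True
while changed:
    changed = False
    for step in workflow.values():
        inferred = set()
        for item in step["inputs"]:
            inferred |= attributes.get(item, set())
        out = step["output"]
        if not inferred <= attributes.get(out, set()):
            attributes.setdefault(out, set()).update(inferred)
            changed = True

print(attributes["report"])  # the attribute propagates end-to-end
```

A Horn-clause encoding expresses the same propagation declaratively (e.g. `confidential(Out) :- step(In, Out), confidential(In).`), which is what lets a sharing analysis handle loops, branches, and sub-workflows uniformly.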
Abstract:
Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists' toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than that obtained by a manual treatment.
Results: We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided in order to demonstrate OPPL-Galaxy's capability for enriching, modifying and querying biomedical ontologies.
Conclusions: Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses.
Abstract:
This paper presents a data-intensive architecture that demonstrates the ability to support applications from a wide range of application domains, and support the different types of users involved in defining, designing and executing data-intensive processing tasks. The prototype architecture is introduced, and the pivotal role of DISPEL as a canonical language is explained. The architecture promotes the exploration and exploitation of distributed and heterogeneous data and spans the complete knowledge discovery process, from data preparation, to analysis, to evaluation and reiteration. The architecture evaluation included large-scale applications from astronomy, cosmology, hydrology, functional genetics, imaging processing and seismology.
Abstract:
In the domain of eScience, investigations are increasingly collaborative. Most scientific and engineering domains benefit from building on top of the outputs of other research, by sharing information to reason over and data to incorporate into the modelling task at hand. This raises the need to provide means for preserving and sharing entire eScience workflows and processes for later reuse. It is necessary to define which information is to be collected, to create means to preserve it, and to develop approaches that enable and validate the re-execution of a preserved process. This includes, and goes beyond, preserving the data used in the experiments, as the process underlying their creation and use is essential. This tutorial therefore provides an introduction to the problem domain and discusses solutions for the curation of eScience processes.