976 results for Open science


Relevance: 30.00%

Publisher:

Abstract:

Faldaprevir, a hepatitis C virus (HCV) NS3/4A protease inhibitor, was evaluated in HCV genotype 1-infected patients who failed peginterferon and ribavirin (PegIFN/RBV) treatment during one of three prior faldaprevir trials. Patients who received placebo plus PegIFN/RBV and had virological failure during a prior trial were enrolled and treated in two cohorts: prior relapsers (n = 43) and prior nonresponders (null responders, partial responders and patients with breakthrough; n = 75). Both cohorts received faldaprevir 240 mg once daily plus PegIFN/RBV for 24 weeks. Prior relapsers with early treatment success (ETS; HCV RNA <25 IU/mL detectable or undetectable at week 4 and <25 IU/mL undetectable at week 8) stopped treatment at week 24. Others received PegIFN/RBV through week 48. The primary efficacy endpoint was sustained virological response (HCV RNA <25 IU/mL undetectable) 12 weeks post treatment (SVR12). More prior nonresponders than prior relapsers had baseline HCV RNA ≥800 000 IU/mL (80% vs 58%) and a non-CC IL28B genotype (91% vs 70%). Rates of SVR12 (95% CI) were 95.3% (89.1, 100.0) among prior relapsers and 54.7% (43.4, 65.9) among prior nonresponders; corresponding ETS rates were 97.7% and 65.3%. Adverse events led to faldaprevir discontinuations in 3% of patients. The most common Division of AIDS Grade ≥2 adverse events were anaemia (13%), nausea (10%) and hyperbilirubinaemia (9%). In conclusion, faldaprevir plus PegIFN/RBV achieved clinically meaningful SVR12 rates in patients who failed PegIFN/RBV in a prior trial, with response rates higher among prior relapsers than among prior nonresponders. The adverse event profile was consistent with the known safety profile of faldaprevir.
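The reported intervals are consistent with a normal-approximation (Wald) binomial confidence interval truncated at 100%. A minimal sketch, assuming responder counts of 41/43 (relapsers) and 41/75 (nonresponders) inferred from the reported rates of 95.3% and 54.7%:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) binomial CI, truncated to [0, 1]."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Prior relapsers: 41/43 responders (count inferred from the 95.3% rate)
lo, hi = wald_ci(41, 43)
print(f"SVR12 = {41/43:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

With these inferred counts the sketch reproduces both published intervals: (89.1, 100.0) for relapsers and (43.4, 65.9) for nonresponders.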

Relevance: 30.00%

Publisher:

Abstract:

The tension between openness and closedness is one of the most important cleavages in Swiss political debates. In the present article, we study the psychological foundations of attitudes regarding this issue. More precisely, we examine the link between personality and attitudes toward the degree of openness of Switzerland as a general stance toward the cultural, economic and political alignment of the country. Personality is understood as a complex and multifaceted concept that forms the basis for consistent patterns of attitudes and behavior. We build on the Five-Factor Theory to explain the link between personality traits, contextual factors and political attitudes. Analyzing survey data from a random sample of Swiss citizens, we find clear evidence that personality traits affect political attitudes. Furthermore, we are able to demonstrate that the relationship between personality and attitudes toward the degree of openness of Switzerland is moderated by perceived ethnic diversity in the neighborhood.
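Moderation of this kind is typically captured by an interaction term in a regression. A minimal sketch with made-up coefficients (not the study's estimates) showing how the marginal effect of a personality trait on openness attitudes varies with perceived neighborhood diversity:

```python
# Hypothetical linear model with an interaction (all coefficients are made up):
# attitude = b0 + b1*trait + b2*diversity + b3*(trait * diversity)
b0, b1, b2, b3 = 2.0, 0.50, -0.30, -0.40

def marginal_trait_effect(diversity: float) -> float:
    """d(attitude)/d(trait) at a given level of perceived diversity."""
    return b1 + b3 * diversity

for d in (0.0, 0.5, 1.0):   # low, medium, high perceived diversity
    print(f"diversity={d:.1f}: trait effect {marginal_trait_effect(d):+.2f}")
```

The point of the interaction coefficient b3 is that the trait's effect is no longer a single number: here it shrinks as perceived diversity rises, which is the formal shape of the moderation the abstract describes.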

Relevance: 30.00%

Publisher:

Abstract:

Problem: Medical and veterinary students memorize facts but then have difficulty applying those facts in clinical problem solving. Cognitive engineering research suggests that the inability of medical and veterinary students to infer concepts from facts may be due in part to specific features of how information is represented and organized in educational materials. First, physical separation of pieces of information may increase the cognitive load on the student. Second, information that is necessary but not explicitly stated may also contribute to the student’s cognitive load. Finally, the types of representations – textual or graphical – may also support or hinder the student’s learning process. This may explain why students have difficulty applying biomedical facts in clinical problem solving. Purpose: To test the hypothesis that three specific aspects of expository text – the spatial distance between the facts needed to infer a rule, the explicitness of information, and the format of representation – affected the ability of students to solve clinical problems. Setting: The study was conducted in the parasitology laboratory of a college of veterinary medicine in Texas. Sample: The study subjects were a convenience sample consisting of 132 second-year veterinary students who matriculated in 2007. The age of this class upon admission ranged from 20 to 52, and the gender makeup of this class was approximately 75% female and 25% male. Results: No statistically significant difference in student ability to solve clinical problems was found when relevant facts were placed in proximity, nor when an explicit rule was stated. Further, no statistically significant difference in student ability to solve clinical problems was found when students were given different representations of material, including tables and concept maps.
Findings: The findings from this study indicate that the three properties investigated – proximity, explicitness, and representation – had no statistically significant effect on student learning as it relates to clinical problem-solving ability. However, ad hoc observations as well as findings from other researchers suggest that the subjects were probably using rote learning techniques such as memorization, and therefore were not attempting to infer relationships from the factual material in the interventions unless they were specifically prompted to look for patterns. A serendipitous finding unrelated to the study hypothesis was that subjects who correctly answered questions regarding functional (non-morphologic) properties, such as mode of transmission and intermediate host, at the family taxonomic level were significantly more likely to correctly answer clinical case scenarios than were subjects who did not correctly answer questions regarding functional properties. These findings suggest a strong relationship (p < .001) between well-organized knowledge of taxonomic functional properties and clinical problem-solving ability. Recommendations: Further study should be undertaken investigating the relationship between knowledge of functional taxonomic properties and clinical problem-solving ability. In addition, the effect of prompting students to look for patterns in instructional material, followed by the effect of factors that affect cognitive load such as proximity, explicitness, and representation, should be explored.

Relevance: 30.00%

Publisher:

Abstract:

This study aims to estimate the influence of open access on the publication patterns of the Argentine scientific community in different subject fields (Medicine; Physics and Astronomy; Agriculture and Biological Sciences; and Social Sciences and Humanities), based on an analysis of the access model of the journals chosen to communicate research results in the period 2008-2010. The output was collected from the SCOPUS database, and the access models of the journals were determined by consulting the sources DOAJ, e-revist@s, SCielo, Redalyc, PubMed, Romeo-Sherpa and Dulcinea. The real and potential accessibility of the national scientific output via the gold and green routes, respectively, was analysed, as well as access by subscription through the Electronic Library of Science and Technology of the Ministry of Science, Technology and Productive Innovation of Argentina. The results show that, on average and across the subject areas studied, 70% of the Argentine scientific output visible in SCOPUS is published in journals that adhere in one way or another to the open access movement, split between 27% for the gold route and 43% for journals that permit self-archiving via the green route. Between 16% and 30% (depending on the subject area) of the articles published in journals that permit self-archiving are accessed by subscription. The share of journals with no open access option is around 30% in Social Sciences and Humanities and reaches about 45% in the other areas. It is concluded that Argentina presents very favourable conditions for releasing a high percentage of the scientific literature generated in the country under the open access model through institutional repositories and self-archiving mandates, which would also increase the accessibility and long-term preservation of the national scientific and technological output.
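The reported breakdown (27% gold route plus 43% green route giving 70% open-access friendly) amounts to a simple tally over per-article journal access models. A minimal sketch with an illustrative sample, not the study's data:

```python
from collections import Counter

# Illustrative per-article journal access models (not the study's records):
# "gold" = OA journal, "green" = self-archiving permitted, "closed" = neither.
articles = ["gold"] * 27 + ["green"] * 43 + ["closed"] * 30

counts = Counter(articles)
total = len(articles)
oa_share = (counts["gold"] + counts["green"]) / total
print(f"gold: {counts['gold']/total:.0%}, green: {counts['green']/total:.0%}, "
      f"open-access friendly: {oa_share:.0%}")
```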

Relevance: 30.00%

Publisher:

Abstract:

This research diachronically analyses the worldwide scientific production on open access, in the academic and scientific context, in order to contribute to knowledge and visualization of its main actors. The method combined bibliographical, descriptive and analytical research with bibliometric techniques, especially production indicators, scientific-collaboration indicators and thematic co-occurrence indicators. The Scopus database was used as the source to retrieve articles on the subject, yielding a corpus of 1179 articles. Frequency tables for the variables were built with Bibexcel, the collaboration network was visualized with Pajek, and the keyword network was constructed with VOSviewer. As for the results, the most productive researchers come from countries such as the United States, Canada, France and Spain, and the journals with the highest impact in the academic community have disseminated the newly constructed knowledge. The collaboration network contains a few subnets in which co-authors come from different countries. In conclusion, this study identifies the themes of debate that mark the development of open access at the international level, and it supports the claim that open access is one of the new emerging and frontier fields of library and information science.
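The frequency-table step of such a bibliometric workflow (author productivity and co-authorship pair counts, the raw material for a collaboration network) can be sketched in a few lines. The author names and records below are invented for illustration; real input would come from the Scopus export:

```python
from collections import Counter
from itertools import combinations

# Illustrative records: one author list per article (names are made up).
records = [
    ["Smith", "Lopez"],
    ["Smith", "Chen", "Lopez"],
    ["Chen"],
    ["Smith", "Dubois"],
]

# Production indicator: articles per author.
productivity = Counter(a for authors in records for a in authors)

# Collaboration indicator: how often each author pair co-occurs on an article.
pairs = Counter(
    tuple(sorted(p)) for authors in records for p in combinations(set(authors), 2)
)

print(productivity.most_common(2))   # most productive authors
print(pairs.most_common(1))          # strongest collaboration edge
```

The `pairs` counter is exactly the weighted edge list a tool like Pajek or VOSviewer would lay out as a network.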

Relevance: 30.00%

Publisher:

Abstract:

In this study, retrievals of medium resolution imaging spectrometer (MERIS) reflectances and water quality products from four freely available coastal processing algorithms are assessed by comparison against sea-truthing data. The study is based on a pair-wise comparison using processor-dependent quality flags for the retrieval of valid common macro-pixels. This assessment is required in order to ensure the reliability of monitoring systems based on MERIS data, such as the Swedish coastal and lake monitoring system (http://vattenkvalitet.se). The results show that pre-processing with the Improved Contrast between Ocean and Land (ICOL) processor, which corrects for adjacency effects, improves the retrieval of spectral reflectance for all processors; it is therefore recommended that the ICOL processor be applied when Baltic coastal waters are investigated. Chlorophyll was retrieved best using the FUB (Free University of Berlin) processing algorithm, although overestimations in the range 18-26.5%, depending on the compared pairs, were obtained. At low chlorophyll concentrations (< 2.5 mg/m**3), random errors dominated in the retrievals with the MEGS (MERIS ground segment) processor. The lowest bias and random errors were obtained with MEGS for suspended particulate matter, for which overestimations in the range of 8-16% were found. Only the FUB-retrieved CDOM (Coloured Dissolved Organic Matter) correlated with in situ values; however, a large systematic underestimation appears in the estimates, which may nevertheless be corrected by a local correction factor. MEGS has the potential to be used as an operational processing algorithm for the Himmerfjärden bay and adjacent areas, but it requires further improvement of the atmospheric correction for the blue bands and better definition at relatively low chlorophyll concentrations in the presence of high CDOM attenuation.
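The bias and random-error split used in match-up assessments of this kind can be sketched as the mean relative difference of matched pairs and the spread around it. A minimal illustration with invented chlorophyll pairs, not the study's match-up data:

```python
import math

def bias_and_random_error(retrieved, in_situ):
    """Mean relative bias (%) and centered RMSE (%) over matched pairs."""
    rel = [(r - s) / s * 100 for r, s in zip(retrieved, in_situ)]
    bias = sum(rel) / len(rel)
    random_err = math.sqrt(sum((x - bias) ** 2 for x in rel) / len(rel))
    return bias, random_err

# Illustrative matched chlorophyll pairs in mg/m**3 (satellite vs sea truth)
sat = [3.1, 2.4, 5.0, 1.8]
sea = [2.5, 2.0, 4.2, 1.6]
b, r = bias_and_random_error(sat, sea)
print(f"bias {b:+.1f}%, random error {r:.1f}%")
```

A systematic overestimation like FUB's 18-26.5% shows up in the first number; a processor dominated by random errors at low concentrations shows up in the second.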

Relevance: 30.00%

Publisher:

Abstract:

The "CoMSBlack92" dataset is based on samples collected in the summer of 1992 along the Bulgarian coast, including coastal and open-sea areas. The whole dataset comprises 79 samples (28 stations) with data on zooplankton species composition, abundance and biomass. Sampling for zooplankton was performed from the bottom up to the surface at standard depths, depending on water-column stratification and thermocline depth. Samples were collected in discrete layers, from the surface down to near-bottom depths, with a vertical closing Juday net (diameter 36 cm, mesh size 150 µm), and were preserved in a 4% formaldehyde sea-water buffered solution. Sampling volume was estimated by multiplying the mouth area by the wire length. The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and non-abundant species were counted in the whole sample. Counting and measuring of organisms were done in the Dimov chamber under a stereomicroscope to the lowest taxon possible. Taxonomic identification was done at the Institute of Oceanology by Asen Konsulov using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Copepods and cladocerans were identified and enumerated to species; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly named as mesozooplankton groups). Biomass was estimated as wet weight following Petipa (1959), using the standard average weight of each species, in mg/m**3. Wet-weight values were transformed to dry weight using the equation DW = 0.16*WW, as suggested by Vinogradov & Shushkina (1987).
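The arithmetic described above (filtered volume from mouth area times wire length, scaling the aliquot count to the whole sample, and the DW = 0.16*WW conversion) can be sketched as follows; the tow length, count and wet weight are illustrative values, not dataset records:

```python
import math

# Filtered volume: net mouth area (m**2) times towed wire length (m)
mouth_area = math.pi * 0.18 ** 2          # Juday net, 36 cm mouth diameter
wire_length = 50.0                         # illustrative tow length (m)
filtered_m3 = mouth_area * wire_length

# Scale a count from a 5 ml aliquot of a 25 ml concentrated sample
count_in_aliquot = 120                     # illustrative count
count_in_sample = count_in_aliquot * 25 / 5

abundance = count_in_sample / filtered_m3  # individuals per m**3

# Wet-to-dry weight conversion (Vinogradov & Shushkina, 1987)
wet_weight = 3.4                           # illustrative biomass (mg/m**3)
dry_weight = 0.16 * wet_weight

print(f"{abundance:.1f} ind/m**3, DW {dry_weight:.3f} mg/m**3")
```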

Relevance: 30.00%

Publisher:

Abstract:

An experimental study was performed to determine the influence of the sequence of operations on the effectiveness of Laser Shock Peening (LSP) treatment in increasing the fatigue performance of open-hole aluminium specimens. Residual stress measurements, fractographic analysis and FEM analysis were performed, indicating the presence of compressive residual stresses on the surface of the treated specimens and tensile residual stresses in the mid-section along the thickness of the specimens. Negative effects on fatigue life were encountered in specimens with the hole already present, while positive effects were observed in specimens in which the hole was drilled after LSP treatment. These results indicate that LSP can be a good solution for “in production” applications, in which open holes are to be drilled after the LSP treatment. The “in service” application of LSP on structures with pre-existing cut-outs has proven to be impracticable in the investigated configuration.

Relevance: 30.00%

Publisher:

Abstract:

Background Gray-scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not provide a clear path when one wants to shape a new command line tool from a prototype shell script.
Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the shell's scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
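The design idea behind this workflow (small single-purpose steps that pass intermediate results through the hard disk instead of holding them in working memory) can be sketched in a few lines. This is not MIA's actual command-line interface; the stage names and the JSON "image" format are invented purely to illustrate the pattern:

```python
import json
import os
import tempfile

# Each stage reads one file and writes another, so intermediates live on
# disk rather than in RAM, mirroring the shell-script workflow described
# above. Stage names and file formats are hypothetical.

def threshold(src: str, dst: str, level: int) -> None:
    """Stage 1: binarize a pixel list stored as JSON."""
    with open(src) as f:
        pixels = json.load(f)
    with open(dst, "w") as f:
        json.dump([1 if p >= level else 0 for p in pixels], f)

def count_foreground(src: str) -> int:
    """Stage 2: count foreground pixels in a binarized file."""
    with open(src) as f:
        return sum(json.load(f))

workdir = tempfile.mkdtemp()
raw = os.path.join(workdir, "raw.json")
mask = os.path.join(workdir, "mask.json")

with open(raw, "w") as f:
    json.dump([10, 200, 35, 180, 90], f)   # toy "image"

threshold(raw, mask, level=100)            # raw.json -> mask.json on disk
print(count_foreground(mask))              # next stage only reads mask.json
```

Because each stage's state is a file, any step can be rerun or inspected in isolation, which is what makes this prototyping style practical for large data sets on workstation-class machines.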

Relevance: 30.00%

Publisher:

Abstract:

Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and the assessment of mitigation options across the world. To date, much of the information needed to describe different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model computer source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test its architecture and functionality it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best-practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and to access all of the above functions.