918 results for scientific programming
Abstract:
Introduction
The Andalusian Public Health System Virtual Library (Biblioteca Virtual del Sistema Sanitario Público de Andalucía, BV-SSPA) was set up in June 2006. It is a regional government initiative aimed at democratizing health professionals' access to quality scientific information, regardless of their workplace. Andalusia is a region with more than 8 million inhabitants and 100,000 health professionals working in 41 hospitals, 1,500 primary healthcare centres, and 28 centres not devoted to patient care (research, management and educational centres).

Objectives
The Department of Research, Development and Innovation (R+D+i) of the Andalusian Regional Government has, among its duties, the task of evaluating the hospitals and centres of the Andalusian Public Health System (SSPA) in order to distribute its funding. Among the criteria used is the evaluation of scientific output, which is measured using bibliometrics. It is well known that bibliometrics has a series of limitations and problems that should be taken into account, especially when it is used for purposes outside the information sciences, such as career evaluation, funding, etc. A few years ago, the bibliometric reports were produced separately by each centre, without preset, well-defined criteria, which are essential when the results of the reports need to be compared. Some hospitals included meeting abstracts in their figures while others did not, the same happened with errata, and there were many other differences. Therefore, the main problem the Department of R+D+i faced when evaluating the health system was that the bibliometric data were not accurate and the reports were not comparable. In order to apply unified criteria across the whole system, the Department of R+D+i commissioned the BV-SSPA to carry out the annual analysis of the scientific output of the system, using well-defined criteria and indicators, among which the Impact Factor stands out.

Materials and Methods
Since the Impact Factor is the bibliometric indicator that the virtual library is asked to consider, it is necessary to use the Web of Science (WoS), whose producer owns and publishes that indicator. The WoS includes the Science Citation Index (SCI), the Social Sciences Citation Index (SSCI) and the Arts & Humanities Citation Index. SCI and SSCI are used to gather all the documents; the Journal Citation Reports (JCR) is used to obtain the Impact Factor and quartiles. Unlike other bibliographic databases such as MEDLINE, the WoS records the addresses of all the authors. In order to retrieve the entire scientific output of the SSPA, we run general searches which are afterwards processed by a tool developed by our library. We run nine different searches on the 'address' field: eight of them combine 'Spain' with each of the eight Andalusian provinces, and the ninth combines 'Spain' with all the cities where there are health centres, since we have detected that some authors do not include the province in their affiliations. These are some of the search strategies (a sketch of how such strategies can be generated is given below):
AD=Malaga and AD=Spain
AD=Sevill* and AD=Spain
AD=SPAIN AND (AD=GUADIX OR AD=BAZA OR AD=MOTRIL)
Furthermore, the 'year' field is used to delimit the period. To analyse the data, the BV-SSPA has developed a tool called Impactia, a web application that uses a database to store the information on the documents generated by the SSPA.
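As an illustrative sketch, not the library's actual tooling, the address-based strategies above can be generated programmatically; the city list below is an abbreviated, hypothetical example.

# Sketch of generating WoS 'address' search strategies like the ones above.
# The city list is an abbreviated example, not the full set used by the BV-SSPA.

PROVINCES = ["Almeria", "Cadiz", "Cordoba", "Granada",
             "Huelva", "Jaen", "Malaga", "Sevill*"]
EXTRA_CITIES = ["GUADIX", "BAZA", "MOTRIL"]  # authors who omit the province

def province_strategies():
    """One AD= query per province, always combined with Spain."""
    return [f"AD={p} AND AD=Spain" for p in PROVINCES]

def city_strategy():
    """A single query covering cities with health centres, combined with Spain."""
    cities = " OR ".join(f"AD={c}" for c in EXTRA_CITIES)
    return f"AD=SPAIN AND ({cities})"

for query in province_strategies() + [city_strategy()]:
    print(query)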
Impactia allows the user to process the retrieved documents automatically, assigning them to their corresponding centres. To classify documents automatically, it was necessary to account for the huge variability in the centre names that authors use in their affiliations. Thus Impactia knows that an author signing as "Hospital Universitario Virgen Macarena", "HVM" or "Hosp. Virgin Macarena" belongs to the same centre. The attached figure shows the variability found for the Empresa Publica Hospital de Poniente. Besides the documents from WoS, Impactia includes the documents indexed in Scopus and in other databases, where we run bibliographic searches using strategies similar to those described above. Aware that health centres and hospitals produce a lot of grey literature that is not covered by databases, Impactia allows the centres to feed the application with these documents, so that the entire SSPA scientific output is gathered and organised in one centralised place. The librarians of each centre are responsible for locating this grey literature. They can also add comments on the documents and indicators that Impactia collects and calculates. The bulk upload of documents from WoS and Scopus into Impactia is carried out monthly. One of the main issues we faced during the development of Impactia was the need to deal with duplicate documents obtained from different sources. Taking into account that titles are sometimes written differently, with slashes, commas and so on, Impactia detects duplicates using the 'DOI' field when it is available, or otherwise by comparing the start page, end page and ISSN fields (a simplified sketch of this rule follows this abstract). This makes it possible to guarantee the absence of duplicates.

Results
The data gathered in Impactia become available to the administrative teams and hospital managers through a simple web page that lets them see, at any moment and with a single click, detailed information on the scientific output of their hospitals, including useful graphs such as the percentage of document types, the journals where their scientists usually publish, annual comparisons, bibliometric indicators and so on. They can also compare the different centres of the SSPA. Impactia allows users to download the data from the application, so that they can work with this information or include it in their centres' reports. This application saves the health system many working hours: the work was previously done manually by forty-one librarians, whereas it is now done by a single person at the BV-SSPA in two days a month. To sum up, the benefits of Impactia are: it has shown its effectiveness in the automatic classification, treatment and analysis of the data; it has become an essential tool for managers to evaluate quickly and easily the scientific production of their centres; it optimises the human resources of the SSPA, saving time and money; and it is the reference point for the Department of R+D+i in evaluating scientific health staff.
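A minimal sketch of the duplicate-detection rule described in this abstract, assuming a hypothetical record layout (field names are illustrative, not Impactia's actual schema):

# Prefer the DOI when a record carries one, otherwise fall back to
# (page start, page end, ISSN). Field names are illustrative only.

def dedup_key(record):
    """record: dict with optional 'doi' and 'page_start'/'page_end'/'issn' fields."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    return ("pages", record.get("page_start"), record.get("page_end"), record.get("issn"))

def merge_sources(*sources):
    """Merge monthly WoS/Scopus exports, keeping the first occurrence of each document."""
    seen = {}
    for source in sources:
        for record in source:
            seen.setdefault(dedup_key(record), record)
    return list(seen.values())

wos = [{"doi": "10.1000/xyz", "page_start": 1, "page_end": 9, "issn": "1234-5678"}]
scopus = [{"doi": "10.1000/XYZ", "page_start": 1, "page_end": 9, "issn": "1234-5678"}]
print(len(merge_sources(wos, scopus)))  # 1: same DOI up to case, so the records collapse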
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures that are based on large numbers of high-core-count processors. Commercially available processors are now at 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC), provide much higher core counts. These trends present new challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
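As a generic illustration of the cross-layer idea, and not the paper's UPC extensions or SCC runtime interface, one can model the per-phase choice of a frequency level that reduces energy while respecting a slowdown bound; all names and numbers below are hypothetical.

# Per-phase power/performance management, illustrative only: pick, for each
# application phase, the frequency level that lowers energy while keeping the
# slowdown within a bound. Times (s) and powers (W) are made up.

PHASES = {
    "compute":       {"high": (10.0, 25.0), "low": (20.0, 10.0)},
    "communication": {"high": (5.0, 20.0),  "low": (6.0, 9.0)},
}

def plan(phases, max_slowdown=1.2):
    """Greedy per-phase choice: use 'low' whenever its slowdown stays within bound
    and it actually saves energy."""
    choice = {}
    for name, levels in phases.items():
        t_high, p_high = levels["high"]
        t_low, p_low = levels["low"]
        slowdown = t_low / t_high
        if slowdown <= max_slowdown and t_low * p_low < t_high * p_high:
            choice[name] = "low"
        else:
            choice[name] = "high"
    return choice

print(plan(PHASES))  # communication tolerates 'low' (20% slower, far less energy)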
Abstract:
The rapid growth of multicore systems, and the diverse approaches they have taken, allow complex processes that previously could only be run on supercomputers to be executed today on low-cost solutions, also known as "commodity hardware". Such solutions can be built using the processors in greatest demand on the mass consumer market (Intel and AMD). When scaling these solutions up to scientific computing requirements, it becomes essential to have methods for measuring the performance they offer and the way they behave under different workloads. Given the large variety of workload types on the market, and even within scientific computing, it is necessary to establish "typical" measurements that can support the evaluation and procurement of solutions with a high degree of confidence in how they will perform. This work proposes a practical approach to such an evaluation and presents the results of the tests run on AMD and Intel multicore architectures.
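A minimal sketch, in Python and purely illustrative (it is not the benchmark suite used in this work), of how the scaling of a CPU-bound workload with the number of cores can be measured:

# Time a fixed batch of CPU-bound tasks at different core counts and report speedup.

import time
from multiprocessing import Pool

def workload(n):
    """A small CPU-bound task: sum of squares."""
    return sum(i * i for i in range(n))

def measure(cores, tasks=16, n=2_000_000):
    start = time.perf_counter()
    with Pool(processes=cores) as pool:
        pool.map(workload, [n] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    base = measure(1)
    for cores in (1, 2, 4, 8):
        t = measure(cores)
        print(f"{cores} cores: {t:.2f}s  speedup {base / t:.2f}x")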
Abstract:
The evaluation of large projects raises well-known difficulties because, by definition, they modify the current price system; their public evaluation presents additional difficulties because they also modify the shadow prices that would exist without the project. This paper first analyzes the basic methodologies applied until the late 1980s, based on the integration of projects into optimization models or, alternatively, on iterative procedures with information exchange between two organizational levels. The new methodologies applied afterwards are based on variational inequalities, bilevel programming, and linear or nonlinear complementarity. Their foundations and their different applications to project evaluation are explored. In fact, these new tools are closely related to one another and can handle more complex cases involving, for example, the reaction of agents to policies, or the existence of multiple agents in an environment characterized by common functions representing demands or constraints on polluting emissions.
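For illustration only (this formulation is not taken from the paper), a generic bilevel program, one of the tools mentioned above, takes the form

\min_{x,\,y} \; F(x, y) \quad \text{s.t.} \quad G(x, y) \le 0, \qquad y \in \arg\min_{y'} \{\, f(x, y') : g(x, y') \le 0 \,\},

where the upper level chooses the project or policy variables x and the lower level describes the agents' response y; complementarity and variational inequality formulations replace the lower-level problem by its optimality (equilibrium) conditions.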
Abstract:
BACKGROUND Most textbooks contain messages relating to health. This profusion of information requires analysis with regard to its quality. The objective was to identify the scientific evidence on which the health messages in textbooks are based. METHODS The degree of evidence on which such messages are based was identified, and the messages were subsequently classified into three categories: messages with a high, medium or low level of evidence; messages with an unknown level of evidence; and messages with no known evidence. RESULTS 844 messages were studied. Of this total, 61% were classified as messages with an unknown level of evidence. Less than 15% fell into the category where the level of evidence was known, and less than 6% were classified as possessing a high level of evidence. More than 70% of the messages relating to "Balanced Diets and Malnutrition", "Food Hygiene", "Tobacco", "Sexual behaviour and AIDS" and "Rest and ergonomics" are based on an unknown level of evidence. "Oral health" registered the highest percentage of messages based on a high level of evidence (37.5%), followed by "Pregnancy and newly born infants" (35%). Of the total, 24.6% are not based on any known evidence. Two of the messages appeared to contravene known evidence. CONCLUSION Many of the messages included in school textbooks are not based on scientific evidence. Standards must be established to facilitate the production of texts that include messages based on the best available evidence and which can improve children's health more effectively.
Abstract:
Business process designers take into account the resources that the processes will need but, due to the variable cost of certain parameters (such as energy) or other circumstances, this scheduling must be done at business process enactment time. In this report we formalize the energy-aware resource cost, including time-dependent and usage-dependent rates. We also present a constraint programming approach and an auction-based approach to solving this problem, together with a comparison of the two approaches and of the proposed algorithms.
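As a minimal, hypothetical sketch of such an energy-aware cost (not the report's formalization), combining a usage-dependent rate with a time-varying energy price:

# Illustrative energy-aware resource cost model; rate names and values are made up.

HOURLY_ENERGY_PRICE = {h: 0.10 if h < 8 else 0.25 for h in range(24)}  # EUR per kWh

def activity_cost(start_hour, duration_hours, usage_rate, power_kw):
    """Cost of running one activity on a resource.

    usage_rate: usage-dependent cost per hour of resource occupation (EUR/h)
    power_kw:   power drawn by the resource while the activity runs
    """
    usage_cost = usage_rate * duration_hours
    energy_cost = sum(
        HOURLY_ENERGY_PRICE[(start_hour + h) % 24] * power_kw
        for h in range(duration_hours)
    )
    return usage_cost + energy_cost

def best_start(duration_hours, usage_rate, power_kw, deadline_hour=24):
    """Pick the cheapest feasible start hour by exhaustive search (a simple stand-in
    for the constraint programming / auction approaches compared in the report)."""
    candidates = range(0, deadline_hour - duration_hours + 1)
    return min(candidates, key=lambda s: activity_cost(s, duration_hours, usage_rate, power_kw))

start = best_start(duration_hours=4, usage_rate=2.0, power_kw=5.0)
print(start, activity_cost(start, 4, 2.0, 5.0))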
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of non-overlapping, frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Previous algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the best among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows great flexibility in formulating the gene identification problem. In particular, it allows multiple-gene, two-strand predictions and the consideration of gene features other than coding exons (such as promoter elements) in valid gene structures.
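A minimal sketch of the core idea, ignoring frames, strands and the Gene Model (which the actual algorithm handles), and assuming exons are given by acceptor (start) and donor (end) coordinates with additive scores:

# While sweeping exons by increasing acceptor (start) position, keep the best
# score of any chain whose last exon's donor (end) position lies before the
# current acceptor. Frame compatibility, strands, and the Gene Model are omitted.

def best_gene_score(exons):
    """exons: list of (start, end, score) tuples with start <= end.

    Returns the score of the highest scoring chain of non-overlapping exons.
    """
    by_start = sorted(exons, key=lambda e: e[0])   # acceptor order
    by_end = sorted(exons, key=lambda e: e[1])     # donor order

    best_prefix = 0.0    # best chain score among exons already "closed"
    best_overall = 0.0
    best_ending_at = {}  # exon -> best chain score ending at that exon
    j = 0                # pointer into by_end

    for start, end, score in by_start:
        # close every exon whose donor position precedes this acceptor
        while j < len(by_end) and by_end[j][1] < start:
            best_prefix = max(best_prefix, best_ending_at.get(by_end[j], 0.0))
            j += 1
        chain = score + best_prefix
        best_ending_at[(start, end, score)] = chain
        best_overall = max(best_overall, chain)
    return best_overall

print(best_gene_score([(10, 50, 3.0), (60, 90, 2.5), (40, 70, 4.0)]))
# the chain {10-50, 60-90} scores 5.5 and beats the single exon {40-70} at 4.0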
Abstract:
Parallel tracks for clinical scientists, basic scientists, and pediatric imagers were the novel approach taken for the highly successful 8th Annual Scientific Sessions of the Society for Cardiovascular Magnetic Resonance, held in San Francisco, California, January 21 to 23, 2005. Attendees were immersed in information on the latest scientific advances in cardiovascular magnetic resonance (CMR), from mice to man, and in technological advances from systems with field strengths from 0.5 T to 11.7 T. State-of-the-art applications were reviewed, spanning a wide range from molecular imaging to predicting outcome with CMR in large patient populations.
Abstract:
The aim of this pilot project was to evaluate the feasibility of assessing the deposited particle dose in the lungs by applying a dynamic light scattering-based methodology to exhaled breath condensate (EBC). In parallel, we developed and validated two analytical methods allowing the determination of inflammatory (hydrogen peroxide, H2O2) and lipoperoxidation (malondialdehyde, MDA) biomarkers in exhaled breath condensate. Finally, these methods were used to assess the particle dose and the consecutive inflammatory effect in healthy non-smoking subjects exposed to environmental tobacco smoke in controlled situations.