887 results for 420200 Literature Studies
Abstract:
Background: Cardiovascular diseases (CVD) are the leading cause of morbidity and mortality worldwide. CVD mainly comprise coronary heart disease and stroke, which rank first and fourth respectively among the leading causes of death in the United States. Influenza (flu) causes annual outbreaks and pandemics and is increasingly recognized as an important trigger for acute coronary syndromes and stroke. Influenza vaccination is an inexpensive and effective strategy for preventing influenza-related complications in high-risk individuals. Though it is recommended for all CVD patients, the influenza vaccine is still used at suboptimal levels in these patients owing to the prevailing controversy over its effectiveness in preventing CVD. This review was undertaken to critically assess the effectiveness of influenza vaccination as a primary or secondary prevention method for CVD.

Methods: A systematic review was conducted using the electronic databases OVID MEDLINE, PUBMED (National Library of Medicine), EMBASE, GOOGLE SCHOLAR and TRIP (Turning Research into Practice). The search was limited to peer-reviewed articles published in English from January 1970 through May 2012. Case-control studies, cohort studies and randomized controlled trials related to influenza vaccination and CVD, with data on at least one of the outcomes, were identified. Only population-based epidemiologic studies in all ethnic groups, of either sex, with participants aged 30 years or above, and with clinical CVD outcomes of interest were included.

Results: Of the 16 studies (8 case-control studies, 6 cohort studies and 2 randomized controlled trials) that met the inclusion criteria, 14 reported a significant benefit of influenza vaccination as a primary or secondary prevention method for preventing new cardiovascular events. Contrary to these findings, two studies found no significant benefit of vaccination in CVD prevention.

Conclusion: The available body of evidence elucidates that vaccination against influenza is associated with a reduction in the risk of new CVD events, hospitalization for coronary heart disease and stroke, as well as the risk of death. The findings indicate that influenza vaccination is effective in CVD prevention and should be encouraged for the high-risk population. However, larger future studies, such as randomized controlled trials, are needed to further evaluate and confirm these findings.
Abstract:
Methicillin-resistant Staphylococcus aureus healthcare-associated infections (MRSA HAIs) are a major cause of morbidity in hospitalized patients and pose a great economic burden to the hospitals caring for these patients. Intensified Interventions aim to control MRSA HAIs, but their cost-effectiveness is largely unclear. We performed a review of the cost-effectiveness literature on Intensified Interventions and provide a summary of study findings, the status of economic research in the area, and information that will help decision-makers at the regional level and guide future research.

We conducted a literature search using the electronic databases PubMed, EBSCO, and The Cochrane Library, limited to English articles published after 1999. We reviewed a total of 1,356 titles and, after applying our inclusion and exclusion criteria, selected seven articles for the final review. We modified the Economic Evaluation Abstraction Form provided by the CDC and used this form to abstract data from the studies.

Of the seven selected articles, two were cohort studies and the remaining five were modeling studies. They were done in various countries, in different study settings, and with different variations of the Intensified Intervention. Overall, six of the seven studies reported that Intensified Interventions were dominant or at least cost-effective in their study setting, an effect that persisted on sensitivity testing.

We identified many gaps in research in this field. The cost-effectiveness research is mostly composed of modeling studies. The studies do not always clearly describe the intervention. The intervention and infection costs, and the sources for these costs, are not always explicit or are missing. In the modeling studies there is uncertainty associated with some key model inputs, but these inputs are not always identified, and the models are not always tested for internal consistency or validity. Studies usually test the short-term cost-effectiveness of Intensified Interventions but not the long-term results.

Our study's limitation was the inability to adjust for differences in study settings, intervention costs, disease costs, or effectiveness measures. Its strength is the presentation of a focused literature review of Intensified Interventions in hospital settings. Through this study we provide information that will help decision-makers at the regional level, help guide future research, and might change clinical care and policies.
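The terms "dominant" and "cost-effective" used in this abstract can be made concrete with a small sketch. The `icer` helper and all figures below are hypothetical illustrations of the incremental cost-effectiveness logic, not data or code from any of the reviewed studies.

```python
# Hedged sketch (hypothetical numbers): the incremental
# cost-effectiveness ratio (ICER) logic behind calling an
# intervention "dominant" or merely "cost-effective".

def icer(cost_new, eff_new, cost_old, eff_old):
    """ICER = incremental cost / incremental effect.
    Returns None when the new strategy dominates
    (no more expensive AND more effective)."""
    d_cost = cost_new - cost_old
    d_eff = eff_new - eff_old
    if d_cost <= 0 and d_eff > 0:
        return None  # dominant: adopt without further argument
    return d_cost / d_eff

# Hypothetical per-patient figures for an intensified MRSA
# intervention versus standard care (cost, infection-free rate).
print(icer(900.0, 0.75, 1200.0, 0.5))   # None -> dominant
print(icer(1500.0, 0.75, 1200.0, 0.5))  # 1200.0 per extra unit of effect
```

A finite ICER is then compared against a willingness-to-pay threshold to decide whether the intervention counts as cost-effective in a given setting.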
Abstract:
Objective: To review published literature on the impact of restaurant menu labeling on consumer food choices.

Method: Examination of all relevant studies published on the topic from 2002 to 2012.

Results: Sixteen studies were identified as relevant and suitable for review, comprising one systematic review, one health impact assessment, and fourteen research studies conducted at restaurants, cafeterias, and laboratories. Three of the ten studies conducted at restaurants and cafeterias and two of the four laboratory studies found positive effects of menu labeling on consumer food choices. The systematic review included here found that five of six studies reported only weakly positive effects. The health impact assessment estimated positive effects; however, its results must be interpreted cautiously since the authors used simulated data.

Conclusion: Overall, there is insufficient evidence that the majority of the menu label types identified in this review affect consumer food choice.
Abstract:
Community metabolism and air-sea carbon dioxide (CO2) fluxes were investigated in July 1992 on a fringing reef at Moorea (French Polynesia). The benthic community was dominated by macroalgae (85% substratum cover) and comprised the Phaeophyceae Padina tenuis (Bory), Turbinaria ornata (Turner) J. Agardh, and Hydroclathrus clathratus Bory (Howe); the Chlorophyta Halimeda incrassata f. ovata J. Agardh (Howe) and Ventricaria ventricosa J. Agardh (Olsen et West); as well as several Rhodophyta (Actinotrichia fragilis Forskál (Børgesen) and several species of encrusting coralline algae). Algal biomass was 171 g dry weight/m**2. Community gross production (Pg), respiration (R), and net calcification (G) were measured in an open-top enclosure. Pg and R were respectively 248 and 240 mmol CO2/m**2/d, and there was a slight net dissolution of CaCO3 (0.8 mmol/m**2/d). The site was a sink for atmospheric CO2 (10 ± 4 mmol CO2/m**2/d), and analysis of data from the literature suggests that this is a general feature of algal-dominated reefs. Measurement of air-sea CO2 fluxes in open water close to the enclosure demonstrated that changes in small-scale hydrodynamics can lead to misleading conclusions: net CO2 evasion to the atmosphere was measured on the fringing reef due to changes in the current pattern that drove water from the barrier reef (a CO2 source) to the study site.
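The sign of the reported budget can be checked with simple arithmetic. The sketch below uses only the figures quoted in the abstract; the CO2-released-per-CaCO3-precipitated coefficient (`Psi` ≈ 0.6) is an assumed typical seawater value, not a number from the study.

```python
# Hedged sketch (not the study's code): the simple metabolic budget
# behind the abstract's figures, in mmol CO2 m^-2 d^-1.
Pg = 248.0   # community gross production
R = 240.0    # community respiration
G = -0.8     # net calcification (negative = net CaCO3 dissolution)

# Net community production: excess of organic production over respiration.
E = Pg - R   # 8 mmol m^-2 d^-1 fixed as organic matter

# Calcification releases CO2 (roughly Psi = 0.6 mol CO2 per mol CaCO3
# at typical seawater conditions -- an assumed coefficient).
Psi = 0.6
co2_from_calcification = Psi * G  # negative: dissolution consumes CO2

# Net organic uptake plus net dissolution both draw CO2 from seawater,
# consistent in sign with the measured atmospheric sink of 10 +/- 4.
net_co2_drawdown = E - co2_from_calcification
print(net_co2_drawdown)  # ≈ 8.5 mmol CO2 m^-2 d^-1
```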
Abstract:
Microzooplankton (the 20 to 200 µm size class of zooplankton) is recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, pelagic ciliates are one of the most important groups of organisms in the microzooplankton. However, their grazing and growth rates, feeding behaviour and prey preferences are poorly known and understood. A set of data was assembled in order to derive a better understanding of pelagic ciliate rates in response to parameters such as prey concentration, prey type (size and species), temperature and the ciliates' own size. With these objectives, the literature was searched for laboratory experiments studying the effect of one or more of these parameters. The criteria for selection and inclusion in the database were: (i) a controlled laboratory experiment with a known ciliate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions, and carbon content). Rates and ancillary information were originally reported in whatever units met each experimenter's needs, creating a need to harmonize the units after collection; moreover, different units relate to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C/(ciliate*h), µm**3/(ciliate*h) and prey cells/(ciliate*h); the clearance rate was calculated when not given, and the growth rate is expressed per day.
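The unit harmonization described above can be sketched as follows. The function names and example values are hypothetical illustrations, not part of the database itself.

```python
# Hedged sketch: converting an ingestion rate into the three units the
# abstract lists, and back-calculating clearance when only ingestion
# and ambient prey concentration are reported.

def harmonize_grazing(cells_per_ciliate_h, prey_volume_um3, prey_carbon_pg):
    """Express an ingestion rate given in prey cells/(ciliate*h) in the
    volume- and carbon-based units used in the database."""
    return {
        "cells/(ciliate*h)": cells_per_ciliate_h,
        "um3/(ciliate*h)": cells_per_ciliate_h * prey_volume_um3,
        "pgC/(ciliate*h)": cells_per_ciliate_h * prey_carbon_pg,
    }

def clearance_rate(ingestion_cells_h, prey_conc_cells_per_ml):
    """Clearance (ml/(ciliate*h)) = ingestion / ambient prey
    concentration, the standard back-calculation when clearance
    is not reported directly."""
    return ingestion_cells_h / prey_conc_cells_per_ml

# Hypothetical prey: 65 um^3 cell volume, 14 pg C per cell.
rates = harmonize_grazing(12.0, prey_volume_um3=65.0, prey_carbon_pg=14.0)
print(rates["pgC/(ciliate*h)"])     # 168.0
print(clearance_rate(12.0, 5.0e4))  # 0.00024 ml/(ciliate*h)
```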
Abstract:
This work is devoted to the study of the macroscopic structures known in the literature as filaments or blobs, which have been observed universally at the edge of all kinds of magnetic confinement fusion devices. These filaments, convective cells stretched along the magnetic field lines, arise from the highly turbulent plasma present in such machines and seem to dominate the radial transport of particles and energy in the region known as the Scrape-off Layer, in which the field lines become open and the plasma is directed towards the solid wall of the vacuum vessel. Although the behavior and scaling laws of these structures are relatively well known, there is as yet no generally accepted theory of the physical mechanism responsible for their formation, which remains one of the main unsolved questions in fusion plasma edge transport theory and a matter of great practical importance for the development of the next generation of fusion reactors (including devices such as ITER and DEMO), since the confinement efficiency and the amount of energy deposited on the wall depend directly on the characteristics of edge transport. The work has been carried out from a mainly experimental perspective, including the observation and analysis of this kind of structure in the heliotron-type stellarator LHD (a large device capable of generating plasmas with conditions close to those needed in a fusion reactor) and in the heliac-type stellarator TJ-II (a medium-sized device producing relatively colder plasmas, but with greater accessibility and diagnostics availability). In particular, in LHD the generation of filaments during high-β discharges (with a high kinetic-to-magnetic pressure ratio) was observed by means of an ultrafast visible camera; their behavior was characterized, and the possible role of Self-Organized Criticality in the formation of these structures was investigated through statistical analysis and comparison with theoretical models. In TJ-II, a probe head capable of simultaneously measuring the electrostatic and electromagnetic fluctuations of the plasma was designed and built. Thanks to this new diagnostic, experiments could be carried out to determine the presence of parallel current through the filaments (a parameter of great importance in their modeling) and to relate the two types of fluctuations for the first time in a stellarator. Likewise, also for the first time in this kind of device, simultaneous measurements of the viscous and magnetic momentum-transport tensors (Reynolds and Maxwell) were performed.
Abstract:
Quality assessment is one of the activities performed as part of systematic literature reviews. It is commonly accepted that a good-quality experiment is bias free. Bias is considered to be related to internal validity (e.g., how adequately the experiment is planned, executed and analysed). Quality assessment is usually conducted using checklists and quality scales. It has not yet been proven, however, that quality is related to experimental bias. Aim: Identify whether there is a relationship between internal validity and bias in software engineering experiments. Method: We built a quality scale to determine the quality of the studies, which we applied to 28 experiments included in two systematic literature reviews. We proposed an objective indicator of experimental bias, which we applied to the same 28 experiments. Finally, we analysed the correlations between the quality scores and the proposed measure of bias. Results: We failed to find a relationship between the global quality score (resulting from the quality scale) and bias; however, we did identify interesting correlations between bias and some particular aspects of internal validity measured by the instrument. Conclusions: There is an empirically provable relationship between some aspects of internal validity and bias. It is feasible to apply quality assessment in systematic literature reviews, subject to limits on the internal validity aspects considered.
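A correlation analysis of the kind described, between per-experiment quality scores and a bias indicator, can be sketched with a plain rank correlation. The scores and bias values below are made up for illustration and are not the paper's data.

```python
# Hedged sketch (hypothetical data, not the authors' analysis code):
# Spearman's rho computed from ranks with plain stdlib code.

def ranks(xs):
    """Ascending ranks starting at 1 (ties not handled, for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-experiment quality scores and bias indicator values,
# chosen so their ranks are perfectly inverse.
quality = [7, 5, 9, 4, 6, 8]
bias    = [0.3, 0.5, 0.1, 0.6, 0.4, 0.2]
print(round(spearman(quality, bias), 3))  # -1.0 on these inverse ranks
```

A global score near zero correlation alongside strong correlations for individual sub-scores is exactly the pattern the abstract's Results describe.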
Abstract:
Separated transitional boundary layers appear in key aeronautical processes such as the flow around wings or turbomachinery blades. The aim of this thesis is the study of these flows in scenarios representative of technological applications, gaining knowledge about the phenomenology and physical processes that occur there and developing a simple model for scaling them. To achieve this goal, experimental measurements were carried out in a low-speed facility, ensuring flow homogeneity and a disturbance level low enough that unwanted transition mechanisms are avoided. The boundary layers studied were developed on a flat plate, with a pressure gradient imposed by means of contoured walls that generate an initial acceleration region followed by a deceleration zone. The initial region is designed to obtain, at the beginning of the deceleration, a Blasius profile characterized by its momentum thickness, and a boundary-layer edge velocity that defines the characteristic velocity of the problem. The deceleration region is designed to obtain a linear evolution of the edge velocity, thereby defining the characteristic length of the problem. Several experimental techniques, both intrusive (hot-wire anemometry, total pressure probes) and non-intrusive (PIV and LDV anemometry, high-speed filming), were used in order to take advantage of each and to allow cross-validation of the results. Once the boundary layer at the start of the deceleration had been characterized, ensuring the desired integral parameters and disturbance level, the evolution of the laminar boundary layer up to the separation point was studied and compared with integral methods and numerical simulations; in view of the results, a new model for this evolution is proposed. Downstream of the separation, the flow near the wall takes the form of a shear layer that encloses low-momentum recirculating fluid.

The region where the shear layer remains laminar tends to position itself so as to compensate the adverse pressure gradient associated with the imposed deceleration. Under these conditions, the momentum thickness remains almost constant. This laminar shear-layer region extends up to the point where transitional phenomena appear, an extension that scales with the momentum thickness at separation. These transitional phenomena are of inviscid type, similar to those found in free shear layers. The analysis of the transitional region begins with a study of the evolution of disturbances in the linear-growth region and a comparison of the experimental results with a numerical model based on Linear Stability Theory for parallel flows and with data from other authors. The collapse of the results, for both the disturbance growth and the excited frequencies, is demonstrated. In the final stages of transition, the vorticity concentrates into vortex blobs, analogously to what happens in free shear layers. Unlike in free shear layers, the presence of the wall and the pressure gradient cause the large-scale structures, under certain circumstances, to move towards the wall and quickly disappear. In these cases the recirculating flow is confined to a closed region, and the bubble is said to close, or the boundary layer to reattach. From the reattachment point onwards, the flow near the wall shows a configuration traditionally considered turbulent. It has been observed that existing integral methods for turbulent boundary layers do not fit the experimental results well, because these methods are valid only for fully developed turbulent flow. Nevertheless, downstream of the reattachment point the velocity profiles are self-similar, and a model has been proposed for the evolution of the integral parameters of the boundary layer in this region. Finally, the phenomenon known as bubble burst is analyzed: the validity of the models existing in the literature is checked, and a new one is proposed.
This phenomenon is attributed to the inability of the large-scale structures formed after the transition to overcome the adverse pressure gradient, move towards the wall, and close the bubble.
Abstract:
After more than 40 years of life, software evolution should be considered a mature field. However, despite such a long history, many research questions remain open, and controversial studies about the validity of the laws of software evolution are common. During the first part of these 40 years the laws themselves evolved to adapt to changes in both the research and the software industry environments. This process of adaptation to new paradigms, standards, and practices stopped about 15 years ago, when the laws were revised for the last time; however, most of the controversial studies have appeared during this latter period. Based on a systematic and comprehensive literature review, in this paper we describe how and when the laws, and the software evolution field, evolved. We also address the current state of affairs regarding the validity of the laws, how they are perceived by the research community, and the developments and challenges likely to occur in the coming years.
Abstract:
During the last century, much research in the business, marketing and technology fields has developed the innovation research line, and a large amount of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields, including the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master's Thesis presents a study of the innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and the understanding of trends in innovation source usage. Firstly, a general review of the literature was conducted in order to define the research area and identify research gaps. Secondly, a Systematic Literature Review (SLR) was adopted as the research method to report reliable conclusions by systematically collecting quality evidence about innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support the systematic identification and adequate exploitation of the innovation sources most suitable for software engineering. Several artefacts, such as lists, taxonomies and relevance assessments of the innovation sources most suitable for software engineering, have been built, and their usage trends over the last decades and their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations to systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities.

Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, whereas organization managers and software practitioners can use the provided outcomes in a systematic way to improve their innovation capability, consequently increasing the value created in the processes they run to provide products and services useful to their environment. In summary, this Master's Thesis researches the innovation sources in the software engineering field, providing useful resources to support effective innovation source management. Several aspects should still be studied in depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge; this can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work to encourage open and systematic approaches to identifying and exploiting innovation sources in the software engineering field.
Abstract:
Context: There is no specialized survey of experiments conducted in the software industry. Goal: Identify the major features of software industry experiments, such as time distribution, independent and dependent variables, subject types, design types and challenges. Method: Systematic literature review, taking the form of a scoping study. Results: We identified 10 experiments and five quasi-experiments up to July 2012, most of them run from 2003 onwards. The main features of these studies are that they test technologies related to quality and management and analyse outcomes related to effectiveness and effort. Most experiments have a factorial design. The major challenges experimenters face are minimizing the cost of running the experiment for the company and scheduling the experiment so as not to interfere with production processes.
Abstract:
The mineral price assumed in mining project design is critical to determining the economic feasibility of a project. Nevertheless, although it is not difficult to find literature on market metal prices, it is much harder to find a specific methodology for calculating the value to use, or the justifications appropriate to include. This study presents an analysis of various methods for selecting metal prices and investigates the mechanisms and motives underlying price selection. The results describe various attitudes adopted by the designers of mining investment projects, and show how the price can be determined not just by forecasting but also by consideration of other relevant parameters.
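How strongly the assumed metal price drives feasibility can be illustrated with a plain net-present-value calculation. Every figure below is hypothetical; the sketch only shows that feasibility can flip on the price assumption alone, which is why the selection methodology matters.

```python
# Hedged illustration (all figures hypothetical): the same deposit
# evaluated under two metal-price assumptions via a simple NPV.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def project_npv(price, output_t, opex, capex, years, rate):
    """NPV of a mine producing `output_t` tonnes/year at the given
    metal price and operating cost, both in currency units per tonne."""
    yearly = (price - opex) * output_t
    return npv([-capex] + [yearly] * years, rate)

# Identical deposit, two price assumptions: feasibility flips on price.
lo = project_npv(price=5500.0, output_t=20_000, opex=4800.0,
                 capex=150e6, years=10, rate=0.08)
hi = project_npv(price=6500.0, output_t=20_000, opex=4800.0,
                 capex=150e6, years=10, rate=0.08)
print(lo < 0 < hi)  # True
```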
Abstract:
This paper groups recent supply chain management research focused on organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management perform cost-benefit analyses of IT software implementations. The success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. The paper concludes with proposals for future research on unaddressed issues within and among the identified research streams.
Abstract:
Presented here are femtosecond pump-probe studies on the water-solvated 7-azaindole dimer, a model DNA base pair. In particular, studies are presented that further elucidate the nature of the reactive and nonreactive dimers and also provide new insights establishing that the excited state double-proton transfer in the dimer occurs in a stepwise rather than a concerted manner. A major question addressed is whether the incorporation of a water molecule with the dimer results in the formation of species that are unable to undergo excited state double-proton transfer, as suggested by a recent study reported in the literature [Nakajima, A., Hirano, M., Hasumi, R., Kaya, K., Watanabe, H., Carter, C. C., Williamson, J. M. & Miller, T. (1997) J. Phys. Chem. 101, 392–398]. In contrast to this earlier work, our present findings reveal that both reactive and nonreactive dimers can coexist in the molecular beam under the same experimental conditions and definitively show that the clustering of water does not induce the formation of the nonreactive dimer. Rather, when present with a species already determined to be a nonreactive dimer, the addition of water can actually facilitate the occurrence of the proton transfer reaction. Furthermore, on attaining a critical hydration number, the data for the nonreactive dimer suggest a solvation-induced conformational structure change leading to proton transfer on the photoexcited half of the 7-azaindole dimer.
Abstract:
This review analyzes the existing research on the information needs of rural health professionals and relates it to the broader information-needs literature to establish whether the information needs of rural health professionals differ from those of other health professionals. The analysis of these studies indicates that rural health practitioners appear to have the same basic needs for patient-care information as their urban counterparts, and that both groups rely on colleagues and personal libraries as their main sources of information. Rural practitioners, however, tend to make less use of journals and online databases and ask fewer clinical questions, a difference that correlates with geographic and demographic factors. Rural practitioners experience pronounced barriers to information access, including lack of time, isolation, inadequate library access, lack of equipment, lack of skills, costs, and inadequate Internet infrastructure. Outreach efforts to this group of underserved health professionals must be sustained to achieve equity in information access and to change information-seeking behaviors.