101 results for Research assessment
Abstract:
The authors present 10 grids that are widely used in the Health Sciences for assessing research quality. Through a comparative thematic analysis of these grids, they show which points of view each favours, highlight the issues that differentiate the grids from one another, and propose analysing these differences by distinguishing the theoretical perspectives that underpin each grid. Although the authors of the assessment grids rarely refer to the implicit theoretical backgrounds that guide their work, the findings show that these grids convey varied epistemologies and research models. This gap makes comparing quality assessment in qualitative research very difficult, unless the focus shifts to the relationship between the grids, their theoretical backgrounds, and their specific research subjects.
Abstract:
This dissertation investigates empirical evidence on the importance and influence of national attractiveness in global competition. The notion of country attractiveness, widely developed in research on international business, tourism, and migration, is a multi-dimensional construct measuring the characteristics of a country's market or destination that attract international investors, tourists, and migrants. The concept provides an account of how potential stakeholders evaluate countries as more or less attractive against certain criteria. In the field of international sport-event bidding, then, do event owners likewise judge prospective hosts by specific dimensions of country attractiveness? The dissertation addresses this question by statistically assessing the effects of country attractiveness on the success of strategies for hosting international sports events. Drawing on theories of signaling and soft power, country attractiveness is defined and measured along the three dimensions of sustainable development: economic, social, and environmental attractiveness. The thesis then examines the concept of a sport-event-hosting strategy and explores multi-level factors affecting success in international sport-event bidding. After reviewing the history of the Olympic Movement from these theoretical perspectives, the thesis proposes and tests the hypotheses that a country's economic, social, and environmental attractiveness are correlated with its bid wins, that is, the success of its sport-event-hosting strategy. Quantitative methods with various robustness checks are applied to data on the bidding results of major events in Olympic sports from 1990 to 2012. The results reveal that the owners of international Olympic sports events tend to prefer countries with higher economic, social, and environmental attractiveness. The empirical assessment suggests that high country attractiveness can be an essential prerequisite for a city or country seeking to bid with an increased chance of success.
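One plausible way to operationalize the hypothesis test described above is a logistic regression of bid outcomes on the three attractiveness dimensions. The sketch below is a minimal illustration on simulated data; the variable names, effect sizes, and model specification are assumptions and do not reproduce the dissertation's actual analysis or robustness checks.

```python
# Hypothetical sketch: logistic regression of bid success on three
# country-attractiveness dimensions. Variable names and the simulated data are
# illustrative only and do not reproduce the dissertation's model or dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                               # hypothetical number of bids

X = rng.normal(size=(n, 3))           # columns: economic, social, environmental scores
true_effect = np.array([0.8, 0.5, 0.4])
p_win = 1 / (1 + np.exp(-(-0.5 + X @ true_effect)))
bid_won = rng.binomial(1, p_win)      # 1 = hosting rights awarded

model = sm.Logit(bid_won, sm.add_constant(X)).fit(disp=0)
print(model.params)                   # intercept plus three attractiveness coefficients
```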
Abstract:
BACKGROUND: High blood pressure, blood glucose, serum cholesterol, and BMI are risk factors for cardiovascular diseases and some of these factors also increase the risk of chronic kidney disease and diabetes. We estimated mortality from cardiovascular diseases, chronic kidney disease, and diabetes that was attributable to these four cardiometabolic risk factors for all countries and regions from 1980 to 2010. METHODS: We used data for exposure to risk factors by country, age group, and sex from pooled analyses of population-based health surveys. We obtained relative risks for the effects of risk factors on cause-specific mortality from meta-analyses of large prospective studies. We calculated the population attributable fractions for each risk factor alone, and for the combination of all risk factors, accounting for multicausality and for mediation of the effects of BMI by the other three risks. We calculated attributable deaths by multiplying the cause-specific population attributable fractions by the number of disease-specific deaths. We obtained cause-specific mortality from the Global Burden of Diseases, Injuries, and Risk Factors 2010 Study. We propagated the uncertainties of all the inputs to the final estimates. FINDINGS: In 2010, high blood pressure was the leading risk factor for deaths due to cardiovascular diseases, chronic kidney disease, and diabetes in every region, causing more than 40% of worldwide deaths from these diseases; high BMI and glucose were each responsible for about 15% of deaths, and high cholesterol for more than 10%. After accounting for multicausality, 63% (10·8 million deaths, 95% CI 10·1-11·5) of deaths from these diseases in 2010 were attributable to the combined effect of these four metabolic risk factors, compared with 67% (7·1 million deaths, 6·6-7·6) in 1980. The mortality burden of high BMI and glucose nearly doubled from 1980 to 2010. At the country level, age-standardised death rates from these diseases attributable to the combined effects of these four risk factors surpassed 925 deaths per 100 000 for men in Belarus, Kazakhstan, and Mongolia, but were less than 130 deaths per 100 000 for women and less than 200 for men in some high-income countries including Australia, Canada, France, Japan, the Netherlands, Singapore, South Korea, and Spain. INTERPRETATION: The salient features of the cardiometabolic disease and risk factor epidemic at the beginning of the 21st century are high blood pressure and an increasing effect of obesity and diabetes. The mortality burden of cardiometabolic risk factors has shifted from high-income to low-income and middle-income countries. Lowering cardiometabolic risks through dietary, behavioural, and pharmacological interventions should be a part of the global response to non-communicable diseases. FUNDING: UK Medical Research Council, US National Institutes of Health.
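The attributable-mortality arithmetic summarised in the METHODS can be illustrated with the standard population attributable fraction (PAF) formula. The sketch below uses invented prevalences, relative risks, and death counts, and combines factors multiplicatively under an independence assumption, which is only a simplification of the multicausality adjustment the authors describe.

```python
# Illustrative sketch of the population attributable fraction (PAF) arithmetic.
# Exposure prevalences, relative risks and the death count are invented, and the
# multiplicative combination is a simplification of the study's multicausality
# adjustment.
from math import prod

def paf(prevalence, relative_risk):
    """PAF for one dichotomized risk factor: p(RR - 1) / (p(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

risk_factors = {                      # hypothetical (prevalence, relative risk) pairs
    "high blood pressure": (0.40, 2.5),
    "high BMI":            (0.35, 1.6),
    "high glucose":        (0.10, 1.8),
    "high cholesterol":    (0.30, 1.4),
}

pafs = {name: paf(p, rr) for name, (p, rr) in risk_factors.items()}
combined = 1 - prod(1 - v for v in pafs.values())   # assumes independent effects

deaths = 17_000_000                   # hypothetical cause-specific death count
for name, v in pafs.items():
    print(f"{name}: PAF = {v:.2f}, attributable deaths = {v * deaths:,.0f}")
print(f"combined: PAF = {combined:.2f}, attributable deaths = {combined * deaths:,.0f}")
```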
Abstract:
The number of physical activity measures and indexes used in the human literature is large, which can make it difficult for the average investigator to choose the most appropriate measure. Accordingly, this review is intended to provide information on the utility and limitations of the various measures. Its primary focus is the objective assessment of free-living physical activity in humans based on physiological and biomechanical methods. The physical activity measures are classified into three categories: (1) measures based on energy expenditure or oxygen uptake, such as activity energy expenditure, activity-related time equivalent, physical activity level, physical activity ratio, metabolic equivalent, and a new index of potential interest, the daytime physical activity level; (2) measures based on heart rate monitoring, such as net heart rate, physical activity ratio heart rate, physical activity level heart rate, activity-related time equivalent, and daytime physical activity level heart rate; and (3) measures based on whole-body accelerometry (counts per unit time). Quantification of the velocity and duration of displacement in outdoor conditions by satellites using the Differential Global Positioning System may constitute a surrogate for physical activity, because walking is the primary activity of humans in free-living conditions. A general outline of the measures and indexes described above is presented in tabular form, along with their respective definitions, usual applications, advantages, and shortcomings. A practical example is given with typical values in obese and non-obese subjects. The factors to be considered in selecting a physical activity method include experimental goals, sample size, budget, cultural and social/environmental factors, physical burden for the subject, and statistical factors such as accuracy and precision. It is concluded that no single current technique is able to quantify all aspects of physical activity under free-living conditions, so complementary methods must be used. In the future, physical activity sensors that are low-cost, small, and convenient for subjects, investigators, and clinicians will be needed to reliably monitor, over extended periods in free-living situations, small changes in movement and grade as well as the duration and intensity of typical physical activities.
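Several of the energy-expenditure-based indexes listed in the first category are simple ratios or differences of measured quantities. A minimal sketch using their conventional definitions (PAL = TEE/BMR; AEE = TEE - BMR - DIT, with DIT taken as roughly 10% of TEE) follows; the numeric inputs are illustrative, and the exact definitions used in the review may differ.

```python
# Illustrative calculation of common energy-expenditure-based activity indexes.
# The definitions follow conventional usage and may not match the review's exact
# formulas; the numeric inputs are invented example values.
TEE = 2800.0   # total energy expenditure, kcal/day (e.g. from doubly labelled water)
BMR = 1600.0   # basal metabolic rate, kcal/day

PAL = TEE / BMR            # physical activity level
DIT = 0.10 * TEE           # diet-induced thermogenesis, assumed ~10% of TEE
AEE = TEE - BMR - DIT      # activity-related energy expenditure, kcal/day

print(f"PAL = {PAL:.2f}")                 # sedentary ~1.4, very active >2.0
print(f"AEE = {AEE:.0f} kcal/day")
```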
Abstract:
INTRODUCTION: Numerous instruments have been developed to assess spirituality and measure its association with health outcomes. This study's aims were to identify instruments used in clinical research that measure spirituality; to propose a classification of these instruments; and to identify those instruments that could provide information on the need for spiritual intervention. METHODS: A systematic literature search in MEDLINE, CINAHL, PsycINFO, ATLA, and EMBASE databases, using the terms "spirituality" and "adult$," and limited to journal articles, was performed to identify clinical studies that used a spiritual assessment instrument. For each instrument identified, measured constructs, intended goals, and data on psychometric properties were retrieved. A conceptual and a functional classification of instruments were developed. RESULTS: Thirty-five instruments were retrieved and classified into measures of general spirituality (N = 22), spiritual well-being (N = 5), spiritual coping (N = 4), and spiritual needs (N = 4) according to the conceptual classification. The instruments most frequently used in clinical research were the FACIT-Sp and the Spiritual Well-Being Scale. Data on psychometric properties were mostly limited to content validity and inter-item reliability. According to the functional classification, 16 instruments were identified that included at least one item measuring a current spiritual state, but only three of those appeared suitable to address the need for spiritual intervention. CONCLUSIONS: Instruments identified in this systematic review assess multiple dimensions of spirituality, and the proposed classifications should help clinical researchers interested in investigating the complex relationship between spirituality and health. Findings underscore the scarcity of instruments specifically designed to measure a patient's current spiritual state. Moreover, the relatively limited data available on the psychometric properties of these instruments highlight the need for additional research to determine whether they are suitable for identifying the need for spiritual interventions.
Abstract:
During the past twenty years, various instruments have been developed for the assessment of substance use in adolescents, mainly in the United States. However, few of them have been adapted to, and validated in, French-speaking populations. Consequently, although increasing alcohol and drug use among teenagers has become a major concern, the various health and social programs developed in response to this specific problem have received little attention with regard to follow-up and outcome assessment. A standardized multidimensional assessment instrument adapted for adolescents is needed to assess the individual needs of adolescents and assign them to the most appropriate treatment setting, to provide a single measurement within and across health and social systems, and to conduct treatment outcome evaluations. Moreover, having an available instrument makes it possible to develop longitudinal and trans-cultural research studies. For this reason, a French version of the Adolescent Drug Abuse Diagnosis (ADAD) was developed and validated at the University Child and Adolescent Psychiatric Clinic in Lausanne, Switzerland. This paper aims to discuss the methodological issues that we faced when using the ADAD instrument in a 4-year longitudinal study including adolescent substance users. Methodological aspects relating to the content and format of the instrument, the assessment administration and the statistical analyses are discussed.
Abstract:
The introduction of engineered nanostructured materials into a rapidly increasing number of industrial and consumer products will result in enhanced exposure to engineered nanoparticles. Workplace exposure has been identified as the most likely source of uncontrolled inhalation of engineered aerosolized nanoparticles, but release of engineered nanoparticles may occur at any stage of the lifecycle of (consumer) products. The dynamic development of nanomaterials with possibly unknown toxicological effects poses a challenge for the assessment of nanoparticle-induced toxicity and safety. In this consensus document from a workshop on in-vitro cell systems for nanoparticle toxicity testing (Workshop on 'In-Vitro Exposure Studies for Toxicity Testing of Engineered Nanoparticles', sponsored by the Association for Aerosol Research (GAeF), 5-6 September 2009, Karlsruhe, Germany), an overview is given of the main issues concerning exposure to airborne nanoparticles, lung physiology, biological mechanisms of (adverse) action, in-vitro cell exposure systems, realistic tissue doses, risk assessment and social aspects of nanotechnology. The workshop participants recognized the large potential of in-vitro cell exposure systems for reliable, high-throughput screening of nanoparticle toxicity. For the investigation of lung toxicity, a strong preference was expressed for air-liquid interface (ALI) cell exposure systems (rather than submerged cell exposure systems) as they more closely resemble in-vivo conditions in the lungs and they allow for unaltered and dosimetrically accurate delivery of aerosolized nanoparticles to the cells. An important aspect, which is frequently overlooked, is the comparison of typically used in-vitro dose levels with realistic in-vivo nanoparticle doses in the lung. If we consider average ambient urban exposure and occupational exposure at 5 mg/m³ (the maximum level allowed by the Occupational Safety and Health Administration (OSHA)) as the boundaries of human exposure, the corresponding upper-limit range of nanoparticle flux delivered to the lung tissue is 3×10⁻⁵ to 5×10⁻³ μg/h per cm² of lung tissue and 2-300 particles/h per (epithelial) cell. This range can be easily matched and even exceeded by almost all currently available cell exposure systems. The consensus statement includes a set of recommendations for conducting in-vitro cell exposure studies with pulmonary cell systems and identifies urgent needs for future development. As these issues are crucial for the introduction of safe nanomaterials into the marketplace and the living environment, they deserve more attention and more interaction between biologists and aerosol scientists. The members of the workshop believe that further advances in in-vitro cell exposure studies would be greatly facilitated by a more active role of the aerosol scientists. The technical know-how for developing and running ALI in-vitro exposure systems is available in the aerosol community, and at the same time biologists/toxicologists are required for proper assessment of the biological impact of nanoparticles.
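The in-vivo flux figures quoted above follow from back-of-the-envelope dosimetry. The sketch below reproduces that style of calculation with assumed values for minute ventilation, alveolar deposition fraction, and alveolar surface area; none of these parameters are taken from the consensus document, so the result should only land within the quoted order-of-magnitude range.

```python
# Back-of-the-envelope estimate of nanoparticle mass flux to lung tissue under
# occupational exposure. All parameter values below are assumptions chosen for
# illustration, not figures taken from the consensus document.
concentration_ug_m3 = 5_000.0   # aerosol mass concentration (OSHA limit, 5 mg/m^3)
ventilation_m3_h    = 0.6       # assumed ventilation volume per hour
deposition_fraction = 0.3       # assumed alveolar deposition fraction for nanoparticles
lung_area_cm2       = 1.0e6     # assumed alveolar surface area (~100 m^2)

flux = concentration_ug_m3 * ventilation_m3_h * deposition_fraction / lung_area_cm2
print(f"estimated flux ~ {flux:.1e} ug/h per cm^2 of lung tissue")
# With these assumptions the estimate (~9e-4) falls inside the quoted
# upper-limit range of 3e-5 to 5e-3 ug/h/cm^2.
```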
Abstract:
BACKGROUND: Positron emission tomography (PET) during the cold pressor test (CPT) has been used to assess endothelium-dependent coronary vasoreactivity, a surrogate marker of cardiovascular events. However, its use remains limited by cardiac PET availability. As multidetector computed tomography (MDCT) is more widely available, we aimed to develop a measurement of endothelium-dependent coronary vasoreactivity with MDCT and a similar radiation burden as with PET. METHODS AND RESULTS: A study group of 18 participants without known cardiovascular risk factors (9F/9M; age 60±6 years) underwent cardiac PET with ⁸²Rb and unenhanced ECG-gated MDCT within 4 h, each time at rest and during CPT. The relation between the absolute myocardial blood flow (MBF) response to CPT by PET (ml·min⁻¹·g⁻¹) and relative changes in MDCT-measured coronary artery surface was assessed using linear regression analysis and Spearman's correlation. MDCT and PET/CT were analyzed in all participants. Hemodynamic conditions during CPT at MDCT and PET were similar (P>0.3). Relative changes in coronary artery surface because of CPT (2.0-21.2%) correlated with changes in MBF (-0.10 to 0.52 ml·min⁻¹·g⁻¹) (ρ=0.68, P=0.02). Effective dose was 1.3±0.2 mSv for MDCT and 3.1 mSv for PET/CT. CONCLUSIONS: Assessment of endothelium-dependent coronary vasoreactivity using MDCT during CPT appears feasible. Because of its wider availability, shorter examination time and similar radiation burden, MDCT could be attractive in clinical research for coronary status assessment.
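The modality comparison rests on a rank correlation between paired per-participant changes (PET-derived change in MBF versus MDCT-derived relative change in coronary artery surface). A minimal sketch of that computation on invented paired data is given below.

```python
# Minimal sketch: Spearman correlation between CPT-induced changes measured by
# the two modalities. The paired values are simulated, not study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 18                                            # number of participants

delta_mbf = rng.uniform(-0.10, 0.52, size=n)      # PET: change in MBF, ml/min/g
delta_surface = 2.0 + 30.0 * delta_mbf + rng.normal(0.0, 3.0, size=n)  # MDCT: % change

rho, p_value = spearmanr(delta_mbf, delta_surface)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```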
Abstract:
BACKGROUND: Knowledge of the number of recent HIV infections is important for epidemiologic surveillance. Over the past decade approaches have been developed to estimate this number by testing HIV-seropositive specimens with assays that discriminate the lower concentration and avidity of HIV antibodies in early infection. We have investigated whether this "recency" information can also be gained from an HIV confirmatory assay. METHODS AND FINDINGS: The ability of a line immunoassay (INNO-LIA HIV I/II Score, Innogenetics) to distinguish recent from older HIV-1 infection was evaluated in comparison with the Calypte HIV-1 BED Incidence enzyme immunoassay (BED-EIA). Both tests were conducted prospectively in all HIV infections newly diagnosed in Switzerland from July 2005 to June 2006. Clinical and laboratory information indicative of recent or older infection was obtained from physicians at the time of HIV diagnosis and used as the reference standard. BED-EIA and various recency algorithms utilizing the antibody reaction to INNO-LIA's five HIV-1 antigen bands were evaluated by logistic regression analysis. A total of 765 HIV-1 infections, 748 (97.8%) with complete test results, were newly diagnosed during the study. A negative or indeterminate HIV antibody assay at diagnosis, symptoms of primary HIV infection, or a negative HIV test during the past 12 mo classified 195 infections (26.1%) as recent (≤ 12 mo). Symptoms of CDC stages B or C classified 161 infections as older (21.5%), and 392 patients with no symptoms remained unclassified. BED-EIA ruled 65% of the 195 recent infections as recent and 80% of the 161 older infections as older. Two INNO-LIA algorithms showed 50% and 40% sensitivity combined with 95% and 99% specificity, respectively. Estimation of recent infection in the entire study population, based on actual results of the three tests and adjusted for a test's sensitivity and specificity, yielded 37% for BED-EIA compared to 35% and 33% for the two INNO-LIA algorithms. Window-based estimation with BED-EIA yielded 41% (95% confidence interval 36%-46%). CONCLUSIONS: Recency information can be extracted from INNO-LIA-based confirmatory testing at no additional costs. This method should improve epidemiologic surveillance in countries that routinely use INNO-LIA for HIV confirmation.
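Adjusting an observed proportion of "recent" results for an assay's sensitivity and specificity can be done with the standard Rogan-Gladen correction; the abstract does not spell out the authors' exact adjustment procedure, so the sketch below is only an illustration of the general idea, with invented numbers.

```python
# Illustrative Rogan-Gladen correction of an observed proportion of "recent"
# results for the assay's sensitivity and specificity. The numbers are invented.
def adjust_for_test_accuracy(apparent_fraction, sensitivity, specificity):
    """Estimate the true fraction of recent infections from the raw test result."""
    return (apparent_fraction + specificity - 1.0) / (sensitivity + specificity - 1.0)

# e.g. an algorithm with 50% sensitivity and 95% specificity (as for one of the
# INNO-LIA algorithms reported above) and a hypothetical raw positive fraction
apparent = 0.22
estimate = adjust_for_test_accuracy(apparent, sensitivity=0.50, specificity=0.95)
print(f"adjusted fraction of recent infections = {estimate:.2f}")
```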
Abstract:
RATIONALE AND OBJECTIVE: The information assessment method (IAM) permits health professionals to systematically document the relevance, cognitive impact, use and health outcomes of information objects delivered by or retrieved from electronic knowledge resources. The companion review paper (Part 1) critically examined the literature and proposed a 'Push-Pull-Acquisition-Cognition-Application' evaluation framework, which is operationalized by IAM. The purpose of the present paper (Part 2) is to examine the content validity of the IAM cognitive checklist when linked to email alerts. METHODS: A qualitative component of a mixed methods study was conducted with 46 doctors reading and rating research-based synopses sent by email. The unit of analysis was a doctor's explanation of a rating of one item regarding one synopsis. Interviews with participants provided 253 units that were analysed to assess concordance with item definitions. RESULTS AND CONCLUSION: The content relevance of seven items was supported. For three items, revisions were needed. Interviews suggested one new item. This study has yielded a 2008 version of IAM.
Abstract:
Much progress has been made over the past decades in the development of in vitro techniques for the assessment of chemically induced effects on embryonic and fetal development. In vitro assays were originally developed to provide information on the mechanisms of normal development and have therefore been used mostly in fundamental research. These assays had to undergo extensive modification to be used in developmental toxicity testing. The present paper focuses on the rat whole embryo culture system, but also reviews modifications that were undertaken for the in vitro chick embryo system and aggregate cultures of fetal rat brain cells. Today these tests cannot replace the existing in vivo developmental toxicity tests. They can, however, be used to screen chemicals for further development or further testing. In addition, these in vitro tests provide valuable information on the mechanisms of developmental toxicity and help to clarify the relevance of findings for humans. In vitro systems, combined with selected in vivo testing and pharmacokinetic investigations in animals and humans, can thus provide essential information for human risk assessment.
Abstract:
Background: Despite the growing number of published assessment frameworks intended to establish standards for the quality of qualitative research, studies using such empirical methods still face difficulties in being published or recognised by funding agencies. Methods: We conducted a thematic content analysis of eight frameworks from psychology/psychiatry and general medicine, and then compared the frameworks and their criteria against each other. Findings: The results illustrate the difficulty of reaching consensus on the definition of quality criteria and show how the frameworks differ both in their underlying epistemology and in the criteria they propose. Discussion: These differences reflect the diversity of paradigms to which the frameworks' authors implicitly refer, although these are rarely made explicit in the text. We conclude that the growth of qualitative research and publications has not overcome the difficulty of establishing shared criteria, and that the great heterogeneity of concepts raises methodological and epistemological problems.
Abstract:
BACKGROUND: Early detection is a major goal in the management of malignant melanoma. Besides clinical assessment, many noninvasive technologies such as dermoscopy, digital dermoscopy and in vivo laser scanning microscopy are used as additional methods. Herein we tested a system to assess lesional perfusion as a tool for early melanoma detection. METHODS: Laser Doppler flow (FluxExplorer) and mole analyser (MA) score (FotoFinder) were applied to histologically verified melanocytic nevi (n = 33) and malignant melanomas (n = 12). RESULTS: Mean perfusion and MA scores were significantly increased in melanomas compared to nevi. However, applying an empirically determined threshold of a 16% perfusion increase, only 42% of the melanomas fulfilled the criterion of malignancy, whereas with the mole analyser score 82% of the melanomas fulfilled the criterion of malignancy. CONCLUSION: Laser Doppler imaging is a highly sensitive technology for assessing skin and skin tumor perfusion in vivo. Although mean perfusion is higher in melanomas than in nevi, the high number of false negative results hampers the use of this technology for early melanoma detection.