58 results for research data finder
Abstract:
OBJECTIVES: This action-research study, conducted in a Swiss male post-trial detention centre (120 detainees and 120 staff), explored the attitudes of detainees and staff towards tobacco smoking. Tackling public health matters through research involving stakeholders in prisons entails benefits and risks that need exploration. STUDY DESIGN: The observational study involved multiple strands (quantitative and qualitative components, and air quality measurements). This article presents qualitative data on participants' attitudes and expectations about research in a prison setting. METHODS: Semi-structured interviews were used to explore the attitudes of detainees and staff towards smoking before and after a smoke-free regulation change in the prison in 2009. Coding and thematic content analysis were performed with the support of ATLAS.ti. RESULTS: In total, 77 interviews were conducted (38 before and 39 after the regulation change) with 31 detainees (mean age 35 years, range 22-60 years) and 27 prison staff (mean age 46 years, range 29-65 years). Both detainees and staff expressed satisfaction with their involvement in the study and wished to be informed about the results. They expected concrete changes in smoke-free regulation and hoped that the research would help to find ways to motivate detainees to quit smoking. CONCLUSION: Active involvement of stakeholders promotes public health. Interviewing detainees and prison staff as part of an action-research study aimed at tackling a public health matter is a way of raising awareness and facilitating change in prisons. Research needs to be conducted independently of the prison administration in order to increase trust and to avoid misunderstandings.
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economic, social, and health policy issues. The type of data needed to carry out such an evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically, two types of data may be distinguished: (a) general demographic, administrative, or financial data that were not collected specifically for technology assessment; and (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) databases have been identified: registries, clinical databases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvements to registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing databases and available reports on medical technology assessments.
Abstract:
As part of a collaborative project on the epidemiology of craniofacial anomalies, funded by the National Institute of Dental and Craniofacial Research and channeled through the Human Genetics Programme of the World Health Organization, the International Perinatal Database of Typical Orofacial Clefts (IPDTOC) was established in 2003. IPDTOC is collecting case-by-case information on cleft lip with or without cleft palate and on cleft palate alone from birth defects registries contributing to at least one of three collaborative organizations: European Surveillance of Congenital Anomalies (EUROCAT) in Europe, the National Birth Defects Prevention Network (NBDPN) in the United States, and the International Clearinghouse for Birth Defects Surveillance and Research (ICBDSR) worldwide. Analysis of the collected information is performed centrally at the ICBDSR Centre in Rome, Italy, to maximize the comparability of results. The present paper, the first of a series, reports data on the prevalence of cleft lip with or without cleft palate from 54 registries in 30 countries, each covering at least 1 complete year during the period 2000 to 2005. Thus, the denominator comprises more than 7.5 million births. A total of 7704 cases of cleft lip with or without cleft palate (7141 livebirths, 237 stillbirths, 301 terminations of pregnancy, and 25 with pregnancy outcome unknown) were available. The overall prevalence of cleft lip with or without cleft palate was 9.92 per 10,000 births. The prevalence of cleft lip alone was 3.28 per 10,000, and that of cleft lip and palate was 6.64 per 10,000. There were 5918 cases (76.8%) that were isolated, 1224 (15.9%) that had malformations in other systems, and 562 (7.3%) that occurred as part of recognized syndromes. Cases with greater dysmorphological severity of cleft lip with or without cleft palate were more likely to include malformations of other systems.
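As a worked check using only the figures reported above: prevalence per 10,000 = 10,000 x cases / births, so births ≈ 10,000 x 7704 / 9.92 ≈ 7.77 million, consistent with the stated denominator of more than 7.5 million; likewise the subtype rates add up, 3.28 + 6.64 = 9.92 per 10,000.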
Abstract:
OBJECTIVES To compare subjective memory deficit (SMD) in older adults with and without dementia or depression across multiple centers in low- and middle-income countries (LAMICs). DESIGN Secondary analysis of data from 23 case-control studies. SETTING Twenty-three centers in India, Southeast Asia (including China), Latin America and the Caribbean, Nigeria, and Russia. PARTICIPANTS Two thousand six hundred ninety-two community-dwelling people aged 60 and older in one of three groups: people with dementia, people with depression, and controls free of dementia and depression. MEASUREMENTS SMD was derived from the Geriatric Mental State examination. RESULTS Median SMD frequency was lowest in participants without dementia (26.2%) and higher in those with depression (50.0%) and dementia (66.7%). The frequency of SMD varied between centers. Depression and dementia were consistently associated with SMD. Older age and hypochondriasis were associated with SMD only in subjects without dementia. In those with dementia, SMD was associated with better cognitive function, whereas the reverse was the case in controls. CONCLUSION Associations with SMD may differ between subjects with and without dementia living in LAMICs.
Abstract:
The European Surveillance of Congenital Anomalies (EUROCAT) network of population-based congenital anomaly registries is an important source of epidemiologic information on congenital anomalies in Europe, covering live births, fetal deaths from 20 weeks' gestation, and terminations of pregnancy for fetal anomaly. EUROCAT's policy is to strive for high-quality data while ensuring consistency and transparency across all member registries. A set of 30 data quality indicators (DQIs) was developed to assess five key elements of data quality: completeness of case ascertainment, accuracy of diagnosis, completeness of information on EUROCAT variables, timeliness of data transmission, and availability of population denominator information. This article describes each of the individual DQIs and presents the output for each registry, as well as the EUROCAT (unweighted) average, for 29 full member registries for 2004-2008. This information is also available on the EUROCAT website for previous years. The EUROCAT DQIs allow registries to evaluate their performance in relation to other registries and allow appropriate interpretations to be made of the data collected. The DQIs provide direction for improving data collection and ascertainment, and they allow annual assessment for monitoring continuous improvement. The DQIs are regularly reviewed and refined to better document registry procedures and processes regarding data collection, to ensure the appropriateness of the DQIs, and to ensure transparency, so that the data collected can make a substantial and useful contribution to epidemiologic research on congenital anomalies.
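The 30 indicators themselves are not listed in the abstract. Purely as an illustration, here is a minimal Python sketch of one plausible indicator of the "completeness of information" type, computed as the percentage of non-missing values per variable; the data layout and variable names are assumptions, not EUROCAT definitions:

import pandas as pd

def variable_completeness(cases, variables):
    # cases: DataFrame with one row per registered case
    # variables: list of column names (hypothetical examples such as
    # "karyotype" or "gestational_age"), not the official EUROCAT variable list
    # Returns the percentage of non-missing values per variable, a simplified
    # stand-in for one of the 30 published DQIs, for illustration only.
    return (cases[variables].notna().mean() * 100).round(1)

# Example: variable_completeness(registry_df, ["karyotype", "gestational_age"])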
Abstract:
Focus groups are increasingly popular in nursing research. However, proper care and attention are critical to their planning and conduct, particularly for groups involving nursing staff. This article uses data gleaned from prior research to address the complexities present in clinical settings when conducting focus groups with nurses. Drawing on their combined experience of conducting studies with nursing staff, the authors present a data-derived approach to thorough preparation and successful implementation of focus group research, offering a unique contribution to the literature on this research strategy.
Abstract:
NanoImpactNet (NIN) is a multidisciplinary European Commission-funded network on the environmental, health and safety (EHS) impact of nanomaterials. The 24 founding scientific institutes are leading European research groups active in the fields of nanosafety, nanorisk assessment and nanotoxicology. This 4-year project is the new focal point for information exchange within the research community. Contact with other stakeholders is vital, and their needs are being surveyed. NIN is communicating with hundreds of stakeholders: businesses; internet platforms; industry associations; regulators; policy makers; national ministries; international agencies; standard-setting bodies; and NGOs concerned with labour rights, EHS or animal welfare. To improve this communication, internet research, a questionnaire distributed via partners, and targeted phone calls were used to identify stakeholders' interests and needs. Knowledge gaps and the need for further data mentioned by representatives of all stakeholder groups in the targeted phone calls concerned: potential toxic and safety hazards of nanomaterials throughout their lifecycles; the fate and persistence of nanoparticles in humans, animals and the environment; risks associated with nanoparticle exposure; participation in the preparation of nomenclature, standards, methodologies, protocols and benchmarks; development of best practice guidelines; voluntary schemes on responsibility; and databases of materials, research topics and themes. Findings show that stakeholders and NIN researchers share very similar knowledge needs, and that open communication and the free movement of knowledge will benefit both researchers and industry. Consequently, NIN will encourage stakeholders to become active members. These survey findings will be used to improve NIN's communication tools and to further build interdisciplinary relationships towards a healthy future with nanotechnology.
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
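The efficiency estimate described above amounts to fitting a straight line to log-transformed fluorescence within the log-linear window. Below is a minimal Python sketch, assuming baseline-corrected readings are already available and that the log-linear window has been identified (the baseline-reconstruction and window-selection steps of the published algorithm are not reproduced here):

import numpy as np

def pcr_efficiency(fluorescence, window):
    # fluorescence: 1-D array of baseline-corrected readings, one value per cycle
    # window: cycle indices assumed to lie in the log-linear phase
    # Model: F_c = F_0 * E**c, so log10(F_c) is linear in c with slope log10(E).
    cycles = np.asarray(window, dtype=float)
    log_f = np.log10(np.asarray(fluorescence)[window])
    slope, intercept = np.polyfit(cycles, log_f, 1)
    efficiency = 10 ** slope      # close to 2.0 for perfect doubling per cycle
    f0 = 10 ** intercept          # extrapolated cycle-zero fluorescence, proportional to input
    return efficiency, f0

Using the mean efficiency per amplicon, as the article recommends, a starting concentration can then be back-calculated as N0 = Nq / E_mean**Cq, where Nq is the fluorescence at the quantification threshold and Cq the corresponding cycle; this makes explicit why small errors in the efficiency propagate exponentially into the estimated starting concentrations.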
Abstract:
DNA microarray technology has arguably caught the attention of the worldwide life science community and is now systematically supporting major discoveries in many fields of study. The majority of the initial technical challenges of conducting experiments are being resolved, only to be replaced with new informatics hurdles, including statistical analysis, data visualization, interpretation, and storage. Two systems of databases, one containing expression data and one containing annotation data, are quickly becoming essential knowledge repositories for the research community. The present paper surveys several databases that are considered "pillars" of research and important nodes in this network. It focuses on a generalized workflow scheme typical of microarray experiments, using two examples related to cancer research. The workflow is used to reference appropriate databases and tools for each step in the process of array experimentation. Additionally, the benefits and drawbacks of current array databases are addressed, and suggestions are made for their improvement.
Abstract:
Background: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-qPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g., housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. Results: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one that is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list that had not previously been used as control genes are identified and validated by RT-qPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/. Conclusion: We have proposed a new method for identifying candidate control genes for RT-qPCR that was able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also showed empirically that translating the results from the microarray platform to the PCR platform is achievable.
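The abstract does not spell out the scoring function, so the Python sketch below uses a deliberately simple stand-in, ranking genes by their mean coefficient of variation across data sets, only to illustrate the general idea of combining multiple microarray data sets into one shortlist of stable control-gene candidates; the scoring rule and data layout are assumptions, not the published method:

import numpy as np
import pandas as pd

def rank_control_candidates(datasets):
    # datasets: list of DataFrames (genes x samples) of log-scale expression,
    # possibly from different platforms or pathologies; gene IDs as the index.
    # Score = mean per-dataset coefficient of variation; lower means more stable.
    common = sorted(set.intersection(*(set(d.index) for d in datasets)))
    scores = {}
    for gene in common:
        cvs = [d.loc[gene].std() / abs(d.loc[gene].mean()) for d in datasets]
        scores[gene] = float(np.mean(cvs))
    return pd.Series(scores).sort_values()   # head of this ranking = candidate control genes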
Abstract:
CONTEXT: Several genetic risk scores to identify asymptomatic subjects at high risk of developing type 2 diabetes mellitus (T2DM) have been proposed, but it is unclear whether they add information to risk scores based on clinical and biological data. OBJECTIVE: The objective of the study was to assess the added clinical value of genetic risk scores in predicting the occurrence of T2DM. DESIGN: This was a prospective study with a mean follow-up time of 5 yr. SETTING AND SUBJECTS: The study included 2824 nondiabetic participants (1548 women; age 52 ± 10 yr). MAIN OUTCOME MEASURE: Six genetic risk scores for T2DM were tested. Four were derived from the literature, and two were created by combining either all (n = 24) or the shared (n = 9) single-nucleotide polymorphisms of the previous scores. A previously validated clinical + biological risk score for T2DM was used as the reference. RESULTS: Two hundred seven participants (7.3%) developed T2DM during follow-up. On bivariate analysis, no differences between nondiabetic and diabetic participants were found for any genetic score except one. After adjusting for the validated clinical + biological risk score, none of the genetic scores improved discrimination, as assessed by changes in the area under the receiver-operating characteristic curve (range -0.4 to -0.1%), sensitivity (-2.9 to -1.0%), specificity (0.0 to +0.1%), and positive (-6.6 to +0.7%) and negative (-0.2 to 0.0%) predictive values. Similarly, no improvement in T2DM risk prediction was found: the net reclassification index ranged from -5.3 to -1.6%, and the integrated discrimination improvement was nonsignificant (P ≥ 0.49). CONCLUSIONS: In this study, adding genetic information to a previously validated clinical + biological score does not seem to improve the prediction of T2DM.
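A minimal Python sketch of the kind of comparison described, adding an allele-count genetic risk score to a baseline model and comparing discrimination by the area under the receiver-operating characteristic curve; the data layout, the logistic model and the in-sample evaluation are assumptions made for illustration, not the study's validated clinical + biological score or its reclassification analyses:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def added_value_of_grs(clinical, genotypes, outcome, weights=None):
    # clinical:  (n_subjects x n_clinical_features) array of baseline predictors
    # genotypes: (n_subjects x n_snps) array of risk-allele counts (0, 1, 2)
    # outcome:   binary array, 1 = incident T2DM during follow-up
    # weights:   per-SNP weights; defaults to an unweighted allele-count score
    weights = np.ones(genotypes.shape[1]) if weights is None else weights
    grs = genotypes @ weights
    base = LogisticRegression(max_iter=1000).fit(clinical, outcome)
    extended = LogisticRegression(max_iter=1000).fit(
        np.column_stack([clinical, grs]), outcome)
    auc_base = roc_auc_score(outcome, base.predict_proba(clinical)[:, 1])
    auc_ext = roc_auc_score(outcome, extended.predict_proba(
        np.column_stack([clinical, grs]))[:, 1])
    return auc_base, auc_ext   # a near-zero difference mirrors the study's finding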
Abstract:
Animal toxins are of interest to a wide range of scientists, owing to their numerous applications in pharmacology, neurology, hematology, medicine, and drug research. This, and to a lesser extent the development of new high-performance tools in transcriptomics and proteomics, has led to an increase in toxin discovery. In this context, providing publicly available data on animal toxins has become essential. The UniProtKB/Swiss-Prot Tox-Prot program (http://www.uniprot.org/program/Toxins) plays a crucial role by providing such access to venom protein sequences and functions from all venomous species. To date, this program has curated more than 5000 venom proteins to the high-quality standards of UniProtKB/Swiss-Prot (release 2012_02). Proteins targeted by these toxins are also available in the knowledgebase. This paper describes in detail the type of information provided by UniProtKB/Swiss-Prot for toxins, as well as the structured format of the knowledgebase.
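Programmatic access to these entries is possible through the current UniProt REST API. A minimal Python sketch follows, in which the query (reviewed entries carrying the "Toxin" keyword KW-0800) and the selected return fields are assumptions about how Tox-Prot-curated venom proteins can be retrieved, not details taken from the paper:

import requests

# Fetch a small sample of reviewed (Swiss-Prot) entries annotated with the
# "Toxin" keyword (KW-0800); the query and field list are illustrative assumptions.
URL = "https://rest.uniprot.org/uniprotkb/search"
params = {
    "query": "reviewed:true AND keyword:KW-0800",
    "fields": "accession,protein_name,organism_name,length",
    "format": "tsv",
    "size": 25,
}
response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
for line in response.text.splitlines()[1:]:   # skip the TSV header row
    print(line)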
Abstract:
The advent and application of high-resolution array-based comparative genomic hybridization (array CGH) have led to the detection of large numbers of copy number variants (CNVs) in patients with developmental delay and/or multiple congenital anomalies, as well as in healthy individuals. The notion that CNVs are also abundantly present in the normal population challenges the interpretation of the clinical significance of CNVs detected in patients. In this review, we illustrate a general clinical workflow, based on our own experience, that can be used in routine diagnostics for the interpretation of CNVs.