989 results for SCREENING ACCURACY


Relevance: 30.00%

Abstract:

Objectives: To verify prostate cancer prevalence in non-symptomatic men between 50 and 70 years old, as well as cancer characteristics. Material and Methods: 2815 non-symptomatic men underwent total PSA measurement and digital rectal examination (DRE) between March 1998 and April 1998. Racial distribution was: 2331 Caucasians (83.9%), 373 Blacks (13.4%) and 75 Asians (2.7%). PSA was normal in 2554 (91.4%), between 4 and 10 ng/mL in 177 (6.3%) and greater than 10 ng/mL in 64 (2.3%). DRE was normal in 2419 (86.3%), suspicious in 347 (12.4%) and characteristic of cancer in 37 (1.3%). Men with abnormal DRE and/or PSA were referred for transrectal prostate biopsy. Results: 461 biopsies were performed and 78 tumors were detected (prevalence = 2.8%). Prevalence increased progressively with age (p < 0.001), PSA level (p < 0.0001) and DRE findings (p = 0.0216). Cancer prevalence in Blacks was 1.65 times higher than in Caucasians (p > 0.05), and 94.9% of detected tumors were moderately or poorly differentiated. Sensitivity, specificity, positive predictive value, negative predictive value and overall accuracy for PSA were, respectively, 66.6%, 89.7%, 51.7%, 94.2% and 86.5%. For DRE, the respective values were 49.1%, 79.4%, 50.9%, 78.3% and 70.3%. Conclusions: Prostate cancer prevalence in the studied population (2.8%) was similar to that reported for populations in other countries. Cancer prevalence in Blacks was 1.65 times higher than in Caucasians, although the difference was not statistically significant. Cancer prevalence increases with age. The association of DRE and PSA is of paramount importance for cancer diagnosis. The great majority of detected tumors (94.9%) were moderately or poorly differentiated. Brazil probably needs regional studies to better characterize prostate cancer epidemiology, given its population heterogeneity.
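
The PSA and DRE performance figures above are the standard 2x2 diagnostic-test metrics. As a minimal illustration (with hypothetical counts, not the study's data), the Python sketch below shows how sensitivity, specificity, predictive values and overall accuracy are derived from biopsy-confirmed results:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-test metrics from verified counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only (not taken from the study above):
print(diagnostic_metrics(tp=52, fp=49, fn=26, tn=334))
```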

Relevance: 30.00%

Abstract:

Background: In pediatric populations, the use of resting heart rate as a health index remains unclear, mainly in epidemiological settings. The aims of this study were to analyze the ability of resting heart rate to screen for dyslipidemia and high blood glucose and to identify its significance in pediatric populations. Methods: The sample comprised 971 randomly selected adolescents aged 11 to 17 years (410 boys and 561 girls). Resting heart rate was measured with oscillometric devices using two cuff sizes selected according to arm circumference. The biochemical parameters triglycerides, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol and glucose were measured. Body fatness, sleep, smoking, alcohol consumption and cardiorespiratory fitness were also analyzed. Results: Resting heart rate was positively related to higher sleep quality (β = 0.005, p = 0.039) and negatively related to cardiorespiratory fitness (β = -0.207, p = 0.001). The receiver operating characteristic curve indicated significant potential for resting heart rate in screening adolescents with increased fasting glucose (area under curve = 0.611 ± 0.039 [0.534-0.688]) and triglycerides (area under curve = 0.618 ± 0.044 [0.531-0.705]). Conclusion: High resting heart rate constitutes a significant and independent risk indicator for dyslipidemia and high blood glucose in pediatric populations. Sleep and cardiorespiratory fitness are two important determinants of resting heart rate. © 2013 Fernandes et al.; licensee BioMed Central Ltd.
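
The reported areas under the ROC curve quantify how well resting heart rate ranks adolescents with and without elevated fasting glucose or triglycerides. The sketch below shows the rank-based (Mann-Whitney) AUC estimate on made-up heart-rate values; it is a generic illustration, not the study's analysis:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) area under the ROC curve: the probability
    that a randomly chosen positive case scores higher than a randomly
    chosen negative case (ties count as 0.5)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical resting heart rates (bpm), for illustration only:
rhr_elevated_glucose = [78, 84, 90, 76, 88]
rhr_normal_glucose = [70, 72, 68, 75, 80, 74, 66]
print(f"AUC = {roc_auc(rhr_elevated_glucose, rhr_normal_glucose):.3f}")
```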

Relevance: 30.00%

Abstract:

Current response-to-intervention (RTI) models favor a three-tier system. In general, Tier 1 consists of evidence-based, effective reading instruction in the classroom and universal screening of all students at the beginning of the grade level to identify children for early intervention. Non-responders to Tier 1 receive small-group tutoring in Tier 2, and non-responders to Tier 2 are given still more intensive, individual intervention in Tier 3. Limited time, personnel and financial resources hinder RTI's implementation in Brazilian schools, because the approach requires extra time and extra personnel in all three tiers, including screening tools that normally consist of individually administered tasks. We explored the accuracy of collectively and easily administered screening tools for the early identification of second graders at risk for dyslexia in a two-stage screening model. A first-stage universal screening based on collectively administered curriculum-based measurements was applied to 45 seven-year-old early Portuguese readers from four second-grade classrooms at the beginning of the school year and identified an at-risk group of 13 academic low achievers. Collectively administered tasks based on phonological judgments by matching figures and matching figures to spoken words [alternative tools for educators (ATE)], together with a comprehensive cognitive-linguistic battery of collective and individual assessments, were administered to all children and constituted the second-stage screening. Low achievement on the ATE tasks and on the collectively administered writing tasks (scores at the 25th percentile) showed good sensitivity (true positives) and specificity (true negatives) for poor literacy status, defined as scores <= 1 SD below the mean on literacy abilities at the end of fifth grade. These results have implications for the use of a collectively administered screening tool for the early identification of children at risk for dyslexia in a classroom setting.
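
The second-stage decision rule is a percentile cutoff checked later against a fifth-grade literacy criterion. The sketch below reproduces that cutoff logic on synthetic scores; the "at or below the 25th percentile" rule and all values are assumptions for illustration, not the ATE instrument or the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic second-grade screening scores and fifth-grade literacy outcomes
# (placeholders; the study used ATE and writing tasks plus a literacy battery).
screening = rng.normal(50, 10, size=45)
literacy_grade5 = 0.7 * screening + rng.normal(0, 8, size=45)

# Flag low achievers: scores at or below the 25th percentile (assumed rule).
at_risk = screening <= np.percentile(screening, 25)

# Reference standard: literacy <= 1 SD below the mean at the end of fifth grade.
poor_literacy = literacy_grade5 <= literacy_grade5.mean() - literacy_grade5.std()

sensitivity = (at_risk & poor_literacy).sum() / poor_literacy.sum()
specificity = (~at_risk & ~poor_literacy).sum() / (~poor_literacy).sum()
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```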

Relevance: 30.00%

Abstract:

Introduction: Widespread screening programs have prompted a decrease in prostate cancer stage at diagnosis, and active surveillance is an option for patients who may harbor clinically insignificant prostate cancer (IPC). Pathologists indicate the possibility of an IPC in their reports based on the Gleason score and tumor volume. This study determined the accuracy of pathological data in the identification of IPC in radical prostatectomy (RP) specimens. Materials and Methods: Of 592 radical prostatectomy specimens examined in our laboratory from 2001 to 2010, 20 patients harbored IPC and exhibited biopsy findings suggestive of IPC. These biopsy features served as the criteria to define patients with potentially insignificant tumors in this population. The results of the prostate biopsies and surgical specimens of the 592 patients were compared. Results: The twenty patients who had IPC in both the biopsy and the RP specimen were considered true positive cases. All patients were classified based on their diagnoses following RP: true positives (n = 20), false positives (n = 149), true negatives (n = 421) and false negatives (n = 2). The accuracy of the pathological data alone for the prediction of IPC was 91.4%, the sensitivity was 91% and the specificity was 74%. Conclusion: The identification of IPC using pathological data alone is accurate, and pathologists should suggest this possibility in their reports to aid surgeons, urologists and radiotherapists in deciding the best treatment for their patients.
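
Because insignificant tumors are a small minority of RP specimens, the predictive value of any biopsy-based criterion depends strongly on prevalence. The sketch below applies Bayes' rule with sensitivity and specificity similar to those reported above and two purely illustrative prevalence values; it is not a reanalysis of the study:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Illustrative prevalences only; 0.91 and 0.74 mirror the reported sensitivity
# and specificity.
for prev in (0.04, 0.20):
    ppv, npv = predictive_values(0.91, 0.74, prev)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
```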

Relevance: 30.00%

Abstract:

In this manuscript, an automatic setup for the screening of microcystins in surface waters employing photometric detection is described. Microcystins are toxins released by cyanobacteria into the aquatic environment and are considered highly poisonous to humans. For that reason, the World Health Organization (WHO) has proposed a provisional guideline value of 1 µg L-1 for drinking water. In this work, we developed an automated equipment setup that allows the screening of water for microcystin concentrations below 0.1 µg L-1. The photometric method was based on the enzyme-linked immunosorbent assay (ELISA), and the analytical signal was monitored at 458 nm using a homemade LED-based photometer. The proposed system was employed for the detection of microcystins in river and lake waters. Accuracy was assessed by processing samples with a reference method and applying the paired t-test to the results; no significant difference at the 95% confidence level was observed. Other useful features, including a linear response ranging from 0.05 up to 2.00 µg L-1 (R² = 0.999) and a detection limit of 0.03 µg L-1 microcystins, were achieved. (C) 2011 Elsevier B.V. All rights reserved.
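
The linearity, detection limit and paired t-test comparison reported above follow standard analytical practice. The sketch below walks through that workflow on made-up absorbance data (all numbers are placeholders, not the paper's measurements), using a least-squares calibration, an LOD of 3 x s(blank) / slope, and SciPy's paired t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical ELISA calibration: microcystin standards vs absorbance at 458 nm.
conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00, 2.00])        # µg/L
signal = np.array([0.021, 0.043, 0.104, 0.212, 0.418, 0.845])

fit = stats.linregress(conc, signal)
print(f"slope = {fit.slope:.3f}, R^2 = {fit.rvalue**2:.4f}")

# Detection limit as 3 x standard deviation of blank replicates / slope.
blanks = np.array([0.0011, 0.0009, 0.0013, 0.0010, 0.0012])
lod = 3 * blanks.std(ddof=1) / fit.slope
print(f"LOD approx. {lod:.3f} µg/L")

# Paired t-test between the proposed and a reference method on the same samples.
proposed = np.array([0.12, 0.45, 0.88, 0.31, 1.20])
reference = np.array([0.11, 0.47, 0.85, 0.33, 1.18])
t, p = stats.ttest_rel(proposed, reference)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f} (no significant bias if p > 0.05)")
```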

Relevance: 30.00%

Abstract:

Motivation: An issue of great current interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new genome-sequencing technologies in recent years has opened fundamental new problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from genome projects gave rise to new strategies for tackling basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological functions and their reciprocal interactions. Results: The aim of this work was to implement predictive methods that extract information on the properties of genomes and proteins from nucleotide and amino acid sequences, taking advantage of the information provided by comparing genome sequences from different species. In the first part of the work, a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are structurally homogeneous and can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
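
The thermostability part of the work couples Principal Component Analysis of codon composition with a classifier. A minimal, self-contained sketch of such a PCA + SVM pipeline on randomly generated codon-frequency vectors (not the 116-genome dataset, and with a synthetic label) is shown below using scikit-learn:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for codon composition: 64 codon frequencies per coding
# sequence, with a weak shift injected into the "thermophilic" class.
n_seq, n_codons = 300, 64
X = rng.dirichlet(np.ones(n_codons), size=n_seq)
y = rng.integers(0, 2, size=n_seq)              # 1 = thermophilic (synthetic)
X[y == 1] += 0.002 * rng.random(n_codons)

# PCA to expose the main compositional axes, then an SVM classifier.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```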

Relevance: 30.00%

Abstract:

During the previous 10 years, global R&D expenditure in the pharmaceuticals and biotechnology sector has steadily increased without a corresponding increase in the output of new medicines. To address this situation, the biopharmaceutical industry's greatest need is to predict failures at the earliest possible stage of the drug development process. A major key to reducing failures in drug screening is the development and use of preclinical models that are more predictive of efficacy and safety in clinical trials. Further, relevant animal models are needed to allow wider testing of novel hypotheses. Key to this is developing, refining, and validating complex animal models that directly link therapeutic targets to the phenotype of disease, allowing earlier prediction of the human response to medicines and identification of safety biomarkers. Moreover, well-designed animal studies are essential to bridge the gap between tests in cell cultures and people. The zebrafish is emerging, complementary to other models, as a powerful system for cancer studies and drug discovery. We investigated this research area by designing a new preclinical cancer model based on in vivo imaging of zebrafish embryogenesis. Technological advances in imaging have made it feasible to acquire nondestructive in vivo images of fluorescently labeled structures, such as cell nuclei and membranes, throughout early zebrafish embryogenesis. This in vivo image-based investigation provides measurements of a large number of features and events at the cellular level, including nucleus movements, cell counting, and mitosis detection, thereby enabling the estimation of higher-level parameters such as the proliferation rate, which is highly relevant for investigating anticancer drug effects. In this work, we designed a standardized procedure for assessing drug activity at the cellular level in live zebrafish embryos. The procedure includes methodologies and tools that combine imaging with fully automated measurement of the embryonic cell proliferation rate. We achieved proliferation rate estimation through the automatic classification and density measurement of epithelial enveloping layer and deep layer cells. Automatic embryonic cell classification provides the basis for measuring the variability of relevant parameters, such as cell density, in different classes of cells, and is aimed at estimating the efficacy and selectivity of anticancer drugs. Through these methodologies we were able to evaluate and measure in vivo the therapeutic potential and overall toxicity of the anticancer molecules Dbait and irinotecan. Results achieved with these molecules are presented and discussed; furthermore, extensive accuracy measurements are provided to assess the robustness of the proposed procedure. Altogether, these observations indicate that the zebrafish embryo can be a useful and cost-effective alternative to some mammalian models for the preclinical testing of anticancer drugs, and it might also provide, in the near future, opportunities to accelerate the process of drug discovery.
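
A central readout of the procedure is the proliferation rate derived from automated cell counts over time. The sketch below shows one simple way such a rate can be estimated, by fitting an exponential growth model to hypothetical counts; it illustrates only the downstream calculation, not the thesis' image-analysis pipeline:

```python
import numpy as np

# Hypothetical nucleus counts from time-lapse frames of one embryo region
# (placeholders; real counts would come from the automated image analysis).
time_h = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # hours after treatment
counts = np.array([120, 165, 230, 310, 430, 600])

# Exponential growth N(t) = N0 * exp(r * t), so log N is linear in t and the
# proliferation rate r is the slope of a log-linear fit.
r, log_n0 = np.polyfit(time_h, np.log(counts), 1)
doubling_time = np.log(2) / r
print(f"proliferation rate r = {r:.3f} /h, doubling time = {doubling_time:.2f} h")

# Comparing r between treated and control embryos gives a simple estimate of a
# compound's antiproliferative effect.
```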

Relevance: 30.00%

Abstract:

OBJECTIVES: To determine sample sizes in studies of diagnostic accuracy and the proportion of studies that report sample size calculations. DESIGN: Literature survey. DATA SOURCES: All issues of eight leading journals published in 2002. METHODS: Sample sizes, the number of subgroup analyses, and how often studies reported sample size calculations were extracted. RESULTS: 43 of 8999 articles were non-screening studies of diagnostic accuracy. The median sample size was 118 (interquartile range 71-350) and the median prevalence of the target condition was 43% (27-61%). The median number of patients with the target condition, needed to calculate a test's sensitivity, was 49 (28-91). The median number of patients without the target condition, needed to determine a test's specificity, was 76 (27-209). Two of the 43 studies (5%) reported a priori sample size calculations. Twenty articles (47%) reported results for patient subgroups; the number of subgroups ranged from two to 19 (median four). No study reported that the sample size was calculated on the basis of preplanned subgroup analyses. CONCLUSION: Few studies of diagnostic accuracy report considerations of sample size. The number of participants in most studies of diagnostic accuracy is probably too small to analyse the variability of accuracy measures across patient subgroups.
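
The shortfall described above can be made concrete with the usual precision-based sample-size formula for a single proportion, applied to sensitivity or specificity and inflated by prevalence. The sketch below uses assumed planning values for illustration; only the 43% prevalence echoes the survey's median:

```python
import math

def n_for_sensitivity(sens: float, half_width: float, prevalence: float, z: float = 1.96) -> int:
    """Total sample size needed to estimate sensitivity with a given 95% CI
    half-width, inflated by the prevalence of the target condition."""
    n_cases = z**2 * sens * (1 - sens) / half_width**2
    return math.ceil(n_cases / prevalence)

def n_for_specificity(spec: float, half_width: float, prevalence: float, z: float = 1.96) -> int:
    """Analogous calculation for specificity (driven by non-diseased subjects)."""
    n_noncases = z**2 * spec * (1 - spec) / half_width**2
    return math.ceil(n_noncases / (1 - prevalence))

# Assumed planning values: sensitivity 0.90, specificity 0.85, prevalence 0.43,
# 95% CI half-width of 0.05.
print(n_for_sensitivity(0.90, 0.05, 0.43))    # -> 322 participants
print(n_for_specificity(0.85, 0.05, 0.43))    # -> 344 participants
```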

Relevance: 30.00%

Abstract:

OBJECTIVE: To review the accuracy of electrocardiography in screening for left ventricular hypertrophy in patients with hypertension. DESIGN: Systematic review of studies of the test accuracy of six electrocardiographic indexes: the Sokolow-Lyon index, the Cornell voltage index, the Cornell product index, the Gubner index, and the Romhilt-Estes score with thresholds for a positive test of >= 4 points or >= 5 points. DATA SOURCES: Electronic databases ((Pre-)Medline, Embase), reference lists of relevant studies and previous reviews, and experts. STUDY SELECTION: Two reviewers scrutinised abstracts and examined potentially eligible studies. Studies comparing an electrocardiographic index with echocardiography in hypertensive patients and reporting sufficient data were included. DATA EXTRACTION: Data on study populations, echocardiographic criteria, and the methodological quality of studies were extracted. DATA SYNTHESIS: Negative likelihood ratios, which indicate to what extent the posterior odds of left ventricular hypertrophy are reduced by a negative test, were calculated. RESULTS: 21 studies with data on 5608 patients were analysed. The median prevalence of left ventricular hypertrophy was 33% (interquartile range 23-41%) in primary care settings (10 studies) and 65% (37-81%) in secondary care settings (11 studies). The median negative likelihood ratio was similar across electrocardiographic indexes, ranging from 0.85 (range 0.34-1.03) for the Romhilt-Estes score (threshold >= 4 points) to 0.91 (0.70-1.01) for the Gubner index. Using the Romhilt-Estes score in primary care, a negative electrocardiogram result would reduce the typical pre-test probability from 33% to 31%. In secondary care the typical pre-test probability of 65% would be reduced to 63%. CONCLUSION: Electrocardiographic criteria should not be used to rule out left ventricular hypertrophy in patients with hypertension.
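
The "from 33% to 31%" figures above are an application of Bayes' theorem in odds form: post-test odds = pre-test odds x likelihood ratio, with LR- = (1 - sensitivity) / specificity for a negative test. The generic sketch below reproduces that arithmetic using a representative median LR- of 0.91 from the range quoted above:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via the
    odds form of Bayes' theorem."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Representative median negative likelihood ratio from the review (~0.91).
for setting, pre in (("primary care", 0.33), ("secondary care", 0.65)):
    post = post_test_probability(pre, 0.91)
    print(f"{setting}: pre-test {pre:.0%} -> post-test {post:.1%} after a negative ECG")
```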

Relevance: 30.00%

Abstract:

Introduction: First trimester screening (FTS) combines maternal age with fetal nuchal translucency (NT) and maternal analytes to identify pregnancies at increased risk for Down syndrome and trisomy 18. Although the accuracy of this screening is high, it cannot replace the conclusive accuracy of prenatal diagnostic testing (PDT). Since FTS has been available, a decrease in the number of women who pursue PDT has been observed. This study sought to determine whether there has been a significant change in the amount of PDT performed in our clinics, whether the type of FTS result affects the patient's decision regarding PDT, and what the patient's intentions are regarding PDT. Material and Methods: A database review was performed for the two years before and the two years after the January 2007 American College of Obstetricians and Gynecologists (ACOG) guidelines regarding FTS were issued. We compared the number of women who were of advanced maternal age (AMA), and the number of AMA women who had PDT, between those time periods. We also determined the number of positive and negative FTS results and how many of those patients had PDT. Finally, we surveyed our patients and referring physicians to determine what patients understand about FTS, what their intentions are regarding FTS, and how physicians present the option of FTS to their patients. Results: We found a 19.6% decrease in the amount of PDT performed when we compared the two time periods at our three specified clinics. Many of our patients were against having PDT prior to their genetic counseling session, but after receiving genetic counseling, 76% of our population became open to the possibility of having PDT. Conclusion: Similar to previous studies, we determined that there has been a significant decrease in the number of PDT procedures performed at our clinics, coinciding with the release of the January 2007 ACOG statement regarding FTS. While our patients regarded FTS as a way to gain early information about their pregnancy in a non-invasive manner, they also stated they would use their results to aid in their decision regarding PDT.

Relevance: 30.00%

Abstract:

BACKGROUND: General practitioners (GPs) are in the best position to suspect dementia. The Mini-Mental State Examination (MMSE) and the Clock Drawing Test (CDT) are widely used, and additional neurological tests may increase the accuracy of diagnosis. We aimed to evaluate the ability of a Short Smell Test (SST) and the Palmo-Mental Reflex (PMR) to detect dementia in patients whose MMSE and CDT are normal but who show signs of cognitive dysfunction. METHODS: This was a 3.5-year cross-sectional observational study in the Memory Clinic of the University Department of Geriatrics in Bern, Switzerland. Participating patients had normal MMSE (>26 points) and CDT (>5 points) scores and were referred by GPs who suspected dementia. All were examined according to a standardized protocol. Diagnosis of dementia was based on DSM-IV TR criteria. We assessed whether SST and PMR accurately detected dementia. RESULTS: In our cohort, 154 patients suspected of dementia had normal MMSE and CDT results. Of these, 17 (11%) were demented. If either SST or PMR was abnormal, sensitivity was 71% (95% CI 44-90%) and specificity 64% (95% CI 55-72%) for detecting dementia. If both tests were abnormal, sensitivity was 24% (95% CI 7-50%), but specificity increased to 93% (95% CI 88-97%). CONCLUSION: Patients suspected of dementia but with normal MMSE and CDT results may benefit if SST and PMR are added as diagnostic tools. If both SST and PMR are abnormal, this is a red flag to investigate these patients further, despite their negative neuropsychological screening results.
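
The trade-off reported for SST and PMR, higher sensitivity for the "either abnormal" rule and higher specificity for the "both abnormal" rule, is what parallel versus serial combination of two tests predicts. The sketch below illustrates this on a simulated cohort; the per-test accuracies and the independence of the two tests are assumptions, not findings of the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated cohort mirroring the size above (154 patients, 17 with dementia);
# individual test results are random draws, not the study's data.
demented = np.zeros(154, dtype=bool)
demented[:17] = True

def simulate_test(sens: float, spec: float) -> np.ndarray:
    """Random abnormal/normal results with assumed sensitivity and specificity."""
    return np.where(demented, rng.random(154) < sens, rng.random(154) < 1 - spec)

sst_abnormal = simulate_test(sens=0.55, spec=0.80)
pmr_abnormal = simulate_test(sens=0.45, spec=0.85)

for label, combined in (("either abnormal (parallel)", sst_abnormal | pmr_abnormal),
                        ("both abnormal (serial)", sst_abnormal & pmr_abnormal)):
    sens = (combined & demented).sum() / demented.sum()
    spec = (~combined & ~demented).sum() / (~demented).sum()
    print(f"{label}: sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```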