174 results for Developmental Screening-test
Abstract:
Chromosomal microarray (CMA) is increasingly utilized for genetic testing of individuals with unexplained developmental delay/intellectual disability (DD/ID), autism spectrum disorders (ASD), or multiple congenital anomalies (MCA). Performing CMA and G-banded karyotyping on every patient substantially increases the total cost of genetic testing. The International Standard Cytogenomic Array (ISCA) Consortium held two international workshops and conducted a literature review of 33 studies, including 21,698 patients tested by CMA. We provide an evidence-based summary of clinical cytogenetic testing comparing CMA to G-banded karyotyping with respect to technical advantages and limitations, diagnostic yield for various types of chromosomal aberrations, and issues that affect test interpretation. CMA offers a much higher diagnostic yield (15%-20%) for genetic testing of individuals with unexplained DD/ID, ASD, or MCA than a G-banded karyotype (~3%, excluding Down syndrome and other recognizable chromosomal syndromes), primarily because of its higher sensitivity for submicroscopic deletions and duplications. Truly balanced rearrangements and low-level mosaicism are generally not detectable by arrays, but these are relatively infrequent causes of abnormal phenotypes in this population (<1%). Available evidence strongly supports the use of CMA in place of G-banded karyotyping as the first-tier cytogenetic diagnostic test for patients with DD/ID, ASD, or MCA. G-banded karyotype analysis should be reserved for patients with obvious chromosomal syndromes (e.g., Down syndrome), a family history of chromosomal rearrangement, or a history of multiple miscarriages.
Abstract:
A modified method for the calculation of the normalized faradaic charge (qfN) is proposed. The method involves the simulation of an oxidation process, by cyclic voltammetry, by employing potentials in the oxygen evolution reaction region. The method is applicable to organic species whose oxidation is not manifested by a defined oxidation peak at conductive oxide electrodes. The variation of qfN for electrodes of nominal composition Ti/RuxSn1-xO2 (x = 0.3, 0.2 and 0.1), Ti/Ir0.3Ti0.7O2 and Ti/Ru0.3Ti0.7O2 in the presence of various concentrations of formaldehyde was analyzed. It was observed that electrodes containing SnO2 are the most active for formaldehyde oxidation. Subsequently, in order to test the validity of the proposed model, galvanostatic electrolyses (40 mA cm-2) of two different formaldehyde concentrations (0.10 and 0.01 mol dm-3) were performed. The results are in agreement with the proposed model and indicate that this new method can be used to determine the relative activity of conductive oxide electrodes. In agreement with previous studies, it can be concluded that not only the nature of the electrode material, but also the organic species in solution and its concentration are important factors to be considered in the oxidation of organic compounds.
Abstract:
Acai, the fruit of a palm native to the Amazonian basin, is widely distributed in northern South America, where it has considerable economic importance. Whereas individual polyphenolic compounds in Acai have been extensively evaluated, studies of the intact fruit and its biological properties are lacking. Therefore, the present study was undertaken to investigate the in vivo genotoxicity of Acai and its possible antigenotoxicity against doxorubicin (DXR)-induced DNA damage. The Acai pulp doses selected were 3.33, 10.0 and 16.67 g/kg b.w. administered by gavage, alone or prior to DXR (16 mg/kg b.w.) administered by intraperitoneal injection. Swiss albino mice were distributed into eight groups for acute treatment with Acai pulp (24 h) and eight groups for subacute treatment (daily for 14 consecutive days) before euthanasia. The negative control groups were treated in a similar way. The results of chemical analysis suggested the presence of carotenoids, anthocyanins, phenolics and flavonoids in Acai pulp. The endpoints analyzed were micronucleus induction in bone marrow and peripheral blood polychromatic erythrocytes, and DNA damage in peripheral blood, liver and kidney cells assessed using the alkaline (pH > 13) comet assay. There were no statistically significant differences (p > 0.05) between the negative control and the groups treated with the three doses of Acai pulp alone for any of the endpoints analyzed, demonstrating the absence of genotoxic effects. The protective effects of Acai pulp were observed in both acute and subacute treatments when administered prior to DXR. In general, subacute treatment provided greater protection against DXR-induced DNA damage in liver and kidney cells. These protective effects can be attributed to the phytochemicals present in Acai pulp. These results can be applied to the development of foods with functional characteristics, as well as to further exploration of Acai as a health promoter. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Neonatal screening for congenital adrenal hyperplasia (CAH) is useful for diagnosing the salt-wasting (SW) form. However, there are difficulties in interpreting positive results in asymptomatic newborns. The main objective was to analyze genotyping as a confirmatory test in children with positive neonatal results. Patients comprised 23 children with CAH and 19 asymptomatic infants with persistently elevated 17-hydroxyprogesterone (17OHP) levels. The CYP21A2 gene was sequenced and genotypes were grouped according to the enzymatic activity of the less severe allele: A1, null; A2, < 2%; B, 3-7%; C, > 20%. Twenty-one children with neonatal symptoms and/or 17OHP levels > 80 ng/ml carried A genotypes; the exceptions were two virilized girls (17OHP < 50 ng/ml) without CAH genotypes. Patients carrying SW genotypes (A1, A2) and low serum sodium levels presented with neonatal 17OHP > 200 ng/ml. Three asymptomatic boys carried simple virilizing genotypes (A2 and B): in two, symptoms began at 18 months. Another two asymptomatic boys had nonclassical genotypes (C). The remaining 14 patients did not carry CAH genotypes, and their 17OHP levels normalized by 14 months of age. Molecular analysis is useful as a confirmatory test for CAH, mainly in boys. It can predict clinical course, identify false positives and help distinguish between clinical forms of CAH.
Abstract:
Introduction. Hepatic steatosis due to non-alcoholic fatty liver disease is associated with obesity, dyslipidemia, insulin resistance, and type 2 diabetes. The Finnish Diabetes Risk Score (FINDRISC) is a prognostic screening tool to detect people at risk for type 2 diabetes without the use of any blood test. The objective of this study was to evaluate whether the FINDRISC can also be used to screen for the presence of hepatic steatosis. Patients and methods. Steatosis was determined by ultrasound. The study sample consisted of 821 non-diabetic subjects without previous hepatic disease; 81% were men (mean age 45 +/- 9 years) and 19% women (mean age 41 +/- 10 years). Results. Steatosis was present in 44% of men and 10% of women. The odds ratio for a one-unit increase in the FINDRISC associated with the risk of steatosis was 1.30 (95% CI 1.25-1.35), similar for men and women. The area under the receiver operating characteristic curve for steatosis was 0.80 (95% CI 0.77-0.83): 0.80 (95% CI 0.77-0.83) in men and 0.83 (95% CI 0.73-0.93) in women. Conclusions. Our data suggest that the FINDRISC could be a useful primary screening tool for the presence of steatosis.
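The two summary statistics reported above, an odds ratio per one-unit increase in a risk score and an area under the ROC curve, can be reproduced for any dataset pairing a continuous score with a binary outcome. The following sketch illustrates the computation only; the data are synthetic, the variable names hypothetical, and statsmodels and scikit-learn are assumed to be available (this is not the study's analysis).

    # Minimal sketch: per-unit odds ratio (logistic regression) and AUC for a
    # screening score against a binary outcome. All data below are synthetic.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    findrisc = rng.integers(0, 27, size=500).astype(float)  # hypothetical FINDRISC scores
    logit = -4.0 + 0.26 * findrisc                           # assumed score-risk relation
    steatosis = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # hypothetical ultrasound outcome

    fit = sm.Logit(steatosis, sm.add_constant(findrisc)).fit(disp=0)
    or_per_unit = np.exp(fit.params[1])              # odds ratio per one-unit increase
    ci_low, ci_high = np.exp(fit.conf_int()[1])      # its 95% confidence interval
    auc = roc_auc_score(steatosis, findrisc)         # discrimination of the raw score
    print(f"OR per unit {or_per_unit:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), AUC {auc:.2f}")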
Abstract:
BACKGROUND: Persons with human immunodeficiency virus (HIV) risk behaviors are excluded from donation to reduce the risk of transfusion-transmitted infection. Persons donating to be tested for HIV may therefore deny risk behaviors. STUDY DESIGN AND METHODS: A random sample of donors completed a survey on motivations, knowledge, and attitudes on the screening process. Donors were considered test seekers if they agreed with two statements: "I think that blood donation is a good, fast, and anonymous way to get my blood tested" and "I donate to get my test results." This study was conducted from June to November 2006 at the largest blood bank in Sao Paulo, Brazil. RESULTS: Of 3061 participants, 208 (7%) were test seekers. They tended to be male and had a lower educational level. They were more likely to have incorrect knowledge about blood safety (e.g., not knowing that a unit can test antibody negative and still transmit infection, 60% vs. 42%, p = 0.02), express dissatisfaction with screening questions (e.g., feeling that important questions were not asked, 14% vs. 5%, p < 0.01), and concur that donors do not answer questions truthfully (e.g., donors have more sexual partners than they admit, 29% vs. 18%, p < 0.01). Test seekers were more likely to believe that it is acceptable to donate blood to get tested for HIV (41% vs. 10%, p < 0.01). CONCLUSIONS: Test-seeking motivation, coupled with low knowledge of window-period risk, runs counter to improving blood safety and to donor prevention needs. Donor education needs to be improved, along with the availability of appropriate HIV counseling and testing.
Abstract:
Background: The Rivermead Behavioural Memory Test (RBMT) assesses everyday memory by means of tasks which mimic daily challenges. The objective was to examine the validity of the Brazilian version of the RBMT to detect cognitive decline. Methods: 195 older adults were diagnosed as normal controls (NC) or with mild cognitive impairment (MCI) or Alzheimer's disease (AD) by a multidisciplinary team, after participants completed clinical and neuropsychological protocols. Results: Cronbach's alpha was high for the total sample for the RBMT profile (PS) and screening scores (SS) (PS=0.91, SS=0.87) and for the AD group (PS=0.84, SS=0.85), and moderate for the MCI (PS=0.62, SS=0.55) and NC (PS=0.62, SS=0.60) groups. RBMT total scores, Appointment, Pictures, Immediate and Delayed Story, Immediate and Delayed Route, Delayed Message and Date contributed to differentiate NC from MCI. ROC curve analyses indicated high accuracy to differentiate NC from AD patients, and moderate accuracy to differentiate NC from MCI. Conclusions: The Brazilian version of the RBMT seems to be an appropriate instrument to identify memory decline in Brazilian older adults.
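The internal-consistency coefficients reported above can be obtained with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), applied to a participants-by-items score matrix. The sketch below is a generic illustration on synthetic data, not the study's computation, and only assumes NumPy.

    # Generic Cronbach's alpha for a (participants x items) score matrix.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = participants, columns = items/subtests."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(100, 1))                        # shared "memory ability"
    scores = latent + rng.normal(scale=0.8, size=(100, 12))   # 12 correlated subtest scores
    print(f"alpha = {cronbach_alpha(scores):.2f}")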
Abstract:
One of the challenges in screening for dementia in developing countries relates to performance differences due to educational and cultural factors. This study evaluated the accuracy of single screening tests as well as combined protocols including the Mini-Mental State Examination (MMSE), Verbal Fluency animal category (VF), Clock Drawing Test (CDT), and Pfeffer Functional Activities Questionnaire (PFAQ) to discriminate illiterate elderly with and without Alzheimer's disease (AD) in a clinical sample. This was a cross-sectional study with 66 illiterate outpatients diagnosed with mild or moderate AD and 40 illiterate normal controls. Diagnosis of AD was based on the NINCDS-ADRDA criteria. All patients underwent a diagnostic protocol including a clinical interview based on the CAMDEX sections. ROC curve analyses were carried out to compare the sensitivity and specificity of the cognitive tests in differentiating the two groups (each test separately and in pairwise combinations). Scores for all cognitive (MMSE, CDT, VF) and functional assessments (PFAQ) were significantly different between the two groups (p < 0.001). The best screening instruments for this sample of illiterate elderly were the MMSE and the PFAQ. The cut-off scores for the MMSE, VF, CDT, and PFAQ were 17.5, 7.5, 2.5, and 11.5, respectively. The most sensitive combination was the MMSE plus PFAQ (94.1%), and the best specificity was observed with the combination of the MMSE and CDT (89%). Illiterate patients can be successfully screened for AD using well-known screening instruments, especially in combined protocols.
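A combined protocol of the kind evaluated above is typically scored as "positive on either test", which raises sensitivity at the cost of specificity. The sketch below shows, on synthetic scores and using the cut-offs quoted in the abstract as fixed thresholds, how sensitivity and specificity are computed for single tests and for the combination; it is an illustration only, not the study's data or code.

    # Sensitivity/specificity for single tests and an "either positive" combination.
    # Scores are synthetic; cut-offs (MMSE <= 17.5, PFAQ >= 11.5) follow the abstract.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    ad = rng.binomial(1, 0.6, size=n)                                     # 1 = AD, 0 = control
    mmse = np.where(ad == 1, rng.normal(14, 4, n), rng.normal(22, 4, n))  # lower in AD
    pfaq = np.where(ad == 1, rng.normal(20, 6, n), rng.normal(6, 5, n))   # higher in AD

    mmse_pos = mmse <= 17.5            # positive if at or below the cut-off
    pfaq_pos = pfaq >= 11.5            # positive if at or above the cut-off
    either_pos = mmse_pos | pfaq_pos   # combined protocol: positive on either test

    def sens_spec(positive, disease):
        sens = positive[disease == 1].mean()       # true-positive rate
        spec = (~positive[disease == 0]).mean()    # true-negative rate
        return sens, spec

    for name, pos in [("MMSE", mmse_pos), ("PFAQ", pfaq_pos), ("MMSE or PFAQ", either_pos)]:
        s, sp = sens_spec(pos, ad)
        print(f"{name:12s} sensitivity={s:.2f} specificity={sp:.2f}")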
Abstract:
Background: The CAMCOG is a brief neuropsychological battery designed to assess global cognitive function and ascertain the impairments required for the diagnosis of dementia. To date, cut-off scores for mild cognitive impairment (MCI) have not been determined. Given the need for an earlier diagnosis of mild dementia, new cut-off values are also necessary, taking into account cultural and educational effects. Methods: One hundred and fifty-seven older adults (mean age: 69.6 +/- 7.4 years) with 8 or more years of formal education (mean years of schooling 14.2 +/- 3.8) attending a memory clinic at the Institute of Psychiatry, University of Sao Paulo, were included. Subjects were divided into three groups according to their cognitive status, established through clinical and neuropsychological assessment: normal controls, n = 62; MCI, n = 65; and mild or moderate dementia, n = 30. ROC curve analyses were performed for dementia vs controls, MCI vs controls and MCI vs dementia. Results: The cut-off values were 92/93 for dementia vs controls (AUC = 0.99; sensitivity: 100%, specificity: 95%), 95/96 for MCI vs controls (AUC = 0.83; sensitivity: 64%, specificity: 88%), and 85/86 for MCI vs dementia (AUC = 0.91; sensitivity: 81%, specificity: 88%). The total CAMCOG score was more accurate than its subtests (Mini-Mental State Examination, Verbal Fluency Test and Clock Drawing Test) used separately. Conclusions: The CAMCOG discriminated controls and MCI from demented patients, but was less accurate in discriminating MCI from controls. The best cut-off value to differentiate controls from demented patients was higher than suggested in the original publication, probably because only cases of mild to moderate dementia were included. This is important given the need for a diagnosis at earlier stages of Alzheimer's disease. Copyright (C) 2008 John Wiley & Sons, Ltd.
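The cut-off values above come from ROC analyses; one common way to derive such a value is to choose the threshold that maximizes Youden's J (sensitivity + specificity - 1) along the ROC curve. The sketch below illustrates that procedure on synthetic CAMCOG-like scores using scikit-learn; it is not the study's analysis, and the numbers it prints are not the study's cut-offs.

    # Deriving a screening cut-off from an ROC curve via Youden's J (synthetic data).
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(3)
    dementia = rng.binomial(1, 0.3, size=300)                  # 1 = dementia (synthetic)
    camcog = np.where(dementia == 1,
                      rng.normal(80, 6, 300), rng.normal(97, 4, 300))  # lower when impaired

    # roc_curve expects higher scores to indicate the positive class, so negate
    # the score (a low CAMCOG total means a higher probability of dementia).
    fpr, tpr, thresholds = roc_curve(dementia, -camcog)
    j = tpr - fpr                      # Youden's J at each candidate threshold
    best = j.argmax()
    cutoff = -thresholds[best]         # undo the negation to report a CAMCOG value
    print(f"AUC = {roc_auc_score(dementia, -camcog):.2f}, "
          f"cut-off ~ {cutoff:.1f} (sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")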
Abstract:
We examined the correlation between results obtained from the in vivo Draize test for ocular irritation and in vitro results obtained from the sheep red blood cell (RBC) haemolytic assay, which assesses haemolysis and protein denaturation in erythrocytes induced by cosmetic products. We sought to validate the haemolytic assay as a preliminary test for identifying highly irritative products, and also to evaluate the in vitro test as an alternative assay to replace the in vivo test. In vitro and in vivo analyses were carried out on 19 cosmetic products, in order to correlate the lesions in the ocular structures with three in vitro parameters: (i) the extent of haemolysis (H50); (ii) the protein denaturation index (DI); and (iii) the H50/DI ratio, which reflects the irritation potential (IP). There was a significant correlation between maximum average scores (MAS) and the parameters determined in vitro (r = 0.752-0.764). These results indicate that the RBC assay is a useful and rapid screening method for assessing the IP of cosmetic products, and for predicting the IP value with a high level of concordance (94.7%). The assay showed high sensitivity and specificity rates of 91.6% and 100%, respectively.
Abstract:
Background & aims: We evaluated the ability of the Nutritional Risk Screening 2002 (NRS 2002) and Subjective Global Assessment (SGA) to predict malnutrition-related poor clinical outcomes. Methods: We assessed 705 patients at a public university hospital within 48 h of admission. Logistic regression and the number needed to screen (NNS) were calculated to test the complementarity between the tools and their ability to predict very long length of hospital stay (VLLOS), complications, and death. Results: Of the patients screened, 27.9% were at nutritional risk (NRS+) and 38.9% were malnourished (SGA B or C). Compared with patients not at nutritional risk, NRS+, SGA B, and SGA C patients were at increased risk for complications (p = 0.03, 0.02, and 0.003, respectively). NRS+ patients had an increased risk of death (p = 0.03), and SGA B and C patients had an increased likelihood of VLLOS (p = 0.008 and p < 0.0001, respectively). Patients who were both NRS+ and SGA C had lower estimates of NNS than patients who were NRS+ or SGA C only, though their confidence intervals did overlap. Conclusions: The concurrent application of SGA in NRS+ patients might enhance the ability to predict poor clinical outcomes in hospitalized patients in Brazil. (C) 2010 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Abstract:
The aim was to find the most reliable screening method for Trypanosoma cruzi infection in blood banks. Epidemiological data, a lymphoproliferation assay, parasitological tests, conventional serological tests (immunofluorescence, haemagglutination, and ELISA with epimastigote and trypomastigote antigens) and reference serological tests (trypomastigote excreted-secreted antigen (TESA) blot and a chemiluminescent ELISA with mucin from trypomastigote forms) were applied to individuals with inconclusive serology, non-chagasic individuals and chronic chagasic patients. TESA blot had the best performance when used as a single test in all groups. In the inconclusive group, 20.5% of individuals were positive by TESA blot, 23.3% by either lymphoproliferation or TESA blot, and 17.8% by lymphoproliferation only. Positive lymphoproliferation without detectable antibodies was observed in 5.47% of all inconclusive-serology cases. Analysis of six parameters (three serological assays, at least one parasitological test, one lymphoproliferation assay and epidemiological data) in the inconclusive group showed that a diagnosis of Chagas' disease was probable in 15 patients who were positive by two or more serological tests or for whom three of those six parameters were positive. TESA blot is a good confirmatory test for Chagas' disease in the inconclusive group. Although lymphoproliferation suggests the diagnosis of Chagas' disease in the absence of antibodies when associated with a high epidemiological risk of acquiring the disease, the data from this study and the characteristics of the lymphoproliferation assay (which is both laborious and time-consuming) do not support its use as a confirmatory test in blood-bank screening. However, our findings underscore the need to develop alternative methods that are not based on antibody detection to improve diagnosis when serological tests are inconclusive.
Abstract:
Two targets, the reverse transcriptase (RT) and protease of HIV-1, have been used over the past two decades for the discovery of the non-nucleoside reverse transcriptase inhibitors (NNRTIs) and protease inhibitors (PIs) that belong to the arsenal of antiretroviral therapy. Herein, these enzymes were chosen as templates for computer-aided ligand design. Ligand- and structure-based drug design were the starting points for selecting compounds, by means of cheminformatic tools, from a database of more than five million compounds. Promising new lead structures were retrieved from the database and are available for acquisition and testing. Classes of molecules already described in the literature as NNRTIs or PIs were also retrieved, which helps demonstrate the reliability of the workflow and validates the work carried out so far. (c) 2007 Elsevier Masson SAS. All rights reserved.
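One common cheminformatic step when mining a multi-million-compound library is a ligand-based similarity filter against known actives. The sketch below is purely illustrative and is not the authors' workflow: it assumes RDKit is available, and the SMILES strings and similarity threshold are placeholders.

    # Illustrative ligand-based filter: rank library molecules by Tanimoto
    # similarity of Morgan fingerprints to a reference ligand (placeholders only).
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    reference_smiles = "CC1=CC=CC=C1"                        # placeholder for a known inhibitor
    library_smiles = ["CCO", "c1ccccc1C", "CC1=CC=CC=C1C"]   # placeholder database entries

    def fingerprint(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    ref_fp = fingerprint(reference_smiles)
    scored = [(DataStructs.TanimotoSimilarity(ref_fp, fingerprint(s)), s)
              for s in library_smiles]

    # keep candidates above an arbitrary similarity threshold, most similar first
    hits = sorted((pair for pair in scored if pair[0] >= 0.3), reverse=True)
    print(hits)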
Abstract:
Advances in diagnostic research are moving towards methods whereby periodontal risk can be identified and quantified by objective measures using biomarkers. Patients with periodontitis may have elevated circulating levels of specific inflammatory markers that can be correlated with the severity of the disease. The purpose of this study was to evaluate whether serum levels of inflammatory biomarkers are differentially expressed in healthy subjects and periodontitis patients. Twenty-five patients (8 healthy subjects and 17 chronic periodontitis patients) were enrolled in the study. A 15 mL blood sample was used for identification of the inflammatory markers with a human inflammatory flow cytometry multiplex assay. Among the 24 cytokines assessed, only 3 (RANTES, MIG and Eotaxin) differed statistically between groups (p<0.05). In conclusion, some of the selected markers of inflammation are differentially expressed in healthy subjects and periodontitis patients. Cytokine profile analysis may be further explored to distinguish periodontitis patients from those free of disease and to serve as a measure of risk. The present data, however, are limited, and larger studies are required to validate the findings for the specific biomarkers.
Abstract:
A retrospective survey was designed to identify diagnostic subgroups and clinical factors associated with odontogenic pain and discomfort in dental urgency patients. A consecutive sample of 1,765 patients seeking treatment for dental pain at the Urgency Service of the Dental School of the Federal University of Goiás, Brazil, was selected. The inclusion criterion was pulpal or periapical pain that occurred before dental treatment (minimum of 6 months after the last dental appointment); the exclusion criteria were teeth with odontogenic developmental anomalies and missing information or incomplete records. Clinical and radiographic examinations were performed to assess the clinical presentation of pain complaints, including origin, duration, frequency and location of pain, palpation, percussion and vitality tests, radiographic features, endodontic diagnosis and characteristics of the teeth. The chi-square test and multiple logistic regression were used to analyze associations between pulpal and periapical pain and independent variables. The most frequent endodontic diagnoses of pulpal pain were symptomatic pulpitis (28.3%) and hyperreactive pulpalgia (14.4%), and the most frequent diagnosis of periapical pain was symptomatic apical periodontitis of infectious origin (26.4%). Regression analysis revealed that a closed pulp chamber and caries were highly associated with pulpal pain and, conversely, an open pulp chamber was associated with periapical pain (p<0.001). The endodontic diagnoses and local factors examined suggest that the important clinical factors for pulpal pain were a closed pulp chamber and caries, whereas for periapical pain the key factor was an open pulp chamber.