875 results for requirement-based testing


Relevance: 30.00%

Abstract:

The presence of cognitive impairment is a frequent complaint among elderly individuals in the general population. This study aimed to investigate the relationship between aging-related regional gray matter (rGM) volume changes and cognitive performance in healthy elderly adults. Morphometric magnetic resonance imaging (MRI) measures were acquired in a community-based sample of 170 cognitively preserved subjects (66 to 75 years), drawn from the "Sao Paulo Ageing and Health" study, an epidemiological study investigating the prevalence of and risk factors for Alzheimer's disease in a low-income region of the city of Sao Paulo. All subjects underwent cognitive testing with a battery cross-culturally validated by the Research Group on Dementia 10/66, as well as the SKT (applied on the day of MRI scanning). Blood genotyping was performed to determine the frequency of the three apolipoprotein E allele variants (APOE epsilon 2/epsilon 3/epsilon 4) in the sample. Voxelwise linear correlation analyses between rGM volumes and cognitive test scores were performed using voxel-based morphometry, with chronological age as a covariate. There were significant direct correlations between worse overall cognitive performance and rGM reductions in the right orbitofrontal cortex and parahippocampal gyrus, and between verbal fluency scores and bilateral parahippocampal gyral volume (p < 0.05, familywise-error corrected for multiple comparisons using small volume correction). When the analyses were repeated adding the presence of the APOE epsilon 4 allele as a confounding covariate, or excluding the minority of APOE epsilon 2 carriers, all findings retained significance. These results indicate that rGM volumes are relevant biomarkers of cognitive deficits in healthy aging individuals, most notably involving temporolimbic regions and the orbitofrontal cortex.
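As a schematic illustration of the analysis described above — correlating a regional volume with a cognitive score while adjusting for chronological age — the following sketch fits an ordinary least-squares model for a single hypothetical region. All data and variable names are invented; the actual study ran this voxelwise in a VBM pipeline with small-volume familywise-error correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data for n subjects: one regional GM volume (mL),
# one overall cognitive score, and chronological age (years).
n = 170
age = rng.uniform(66, 75, n)
score = rng.normal(50, 10, n)
volume = 8.0 + 0.02 * score - 0.05 * age + rng.normal(0, 0.5, n)

# Design matrix: intercept, cognitive score, age (the covariate).
X = np.column_stack([np.ones(n), score, age])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)

# t-test on the score coefficient: is volume related to cognition
# after adjusting for age?
resid = volume - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_score = beta[1] / se[1]
p_value = 2 * stats.t.sf(abs(t_score), dof)
print(f"score coefficient: {beta[1]:.4f}, t = {t_score:.2f}, p = {p_value:.4g}")
```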

Relevance: 30.00%

Abstract:

Validation of parentage and horse breed registries through DNA typing relies on estimates of random match probabilities computed from DNA profiles generated at multiple polymorphic loci. Of the twenty-seven microsatellite loci recommended by the International Society for Animal Genetics for parentage testing in Thoroughbred horses, eleven are located on five chromosomes. An important aspect in determining combined exclusion probabilities is ascertaining the genetic linkage status of syntenic markers, which may affect the reliable use of the product rule in estimating random match probabilities. In principle, linked markers can be in gametic phase disequilibrium (GD). We aimed to determine the extent, in frequency and strength, of GD between the HTG4 and HMS3 multiallelic loci, which are syntenic on chromosome 9. We typed the qualified offspring (n1 = 27; n2 = 14) of two Quarter Bred stallions (registered by the Brazilian Association of Quarter Horse Breeders) and 121 unrelated horses of the same breed. In the 41 informative meioses analyzed, the frequency of recombination between the HTG4 and HMS3 loci was 0.27. Consistent with genetic map distances, this recombination rate does not fit the theoretical distribution for independently segregating markers. We estimated sign-based D' coefficients as a measure of GD and showed that the HTG4 and HMS3 loci are in significant, yet partial and weak, disequilibrium, with two allele pairs involved (HTG4*M/HMS3*P, D'(+) = 0.6274; and HTG4*K/HMS3*P, D'(-) = -0.6096). These results warn against the inappropriate inclusion of genetically linked markers in the calculation of the combined power of discrimination in Thoroughbred parentage validation.
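To make the GD measure concrete, here is a small sketch of how a signed D' coefficient is computed for one allele pair of two syntenic loci (e.g., treating HTG4*M and HMS3*P against all other alleles at their respective loci). The haplotype frequencies below are invented for illustration; the study estimated them from the typed families and unrelated horses.

```python
def d_prime(p_ab: float, p_a: float, p_b: float) -> float:
    """Signed D' for one allele pair.

    p_ab: frequency of the haplotype carrying both focal alleles
    p_a, p_b: marginal frequencies of the focal alleles at each locus
    """
    d = p_ab - p_a * p_b                      # raw disequilibrium coefficient
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Hypothetical frequencies for one HTG4/HMS3 allele pair.
print(d_prime(p_ab=0.12, p_a=0.20, p_b=0.30))  # positive, partial disequilibrium
```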

Relevance: 30.00%

Abstract:

Background: Clinical multistage risk assessment combining electrocardiogram (ECG) and NT-proBNP may be a feasible strategy to screen for hypertrophic cardiomyopathy (HCM). We investigated the effectiveness of screening based on ECG and NT-proBNP in first-degree relatives of patients with HCM. Methods and Results: A total of 106 first-degree relatives were included. All individuals were evaluated by echocardiography, ECG, NT-proBNP, and molecular screening (available for 65 individuals). Of the 106 individuals, 36 (34%) had the diagnosis confirmed by echocardiography. Using echocardiography as the gold standard, ECG had a sensitivity of 0.71, 0.42, and 0.52 for the Romhilt-Estes, Sokolow-Lyon, and Cornell criteria, respectively. Mean NT-proBNP values were higher in affected than in nonaffected relatives (1290.5 vs. 26.1, P < .001). The AUC of NT-proBNP was 0.98. Using a cutoff value of 70 pg/mL, we observed a sensitivity of 0.92 and a specificity of 0.96. Using molecular genetics as the gold standard, ECG had a sensitivity of 0.67, 0.37, and 0.42 for the Romhilt-Estes, Sokolow-Lyon, and Cornell criteria, respectively; with the 70 pg/mL cutoff, we observed a sensitivity of 0.83 and a specificity of 0.98. Conclusion: NT-proBNP values above 70 pg/mL can be used to effectively select high-risk first-degree relatives for HCM screening. (J Cardiac Fail 2012;18:564-568)
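To make the cutoff-based screening metrics concrete, here is a minimal sketch of how sensitivity and specificity are computed for a "positive if NT-proBNP exceeds 70 pg/mL" rule. The measurements and disease labels below are invented for illustration; only the cutoff comes from the abstract.

```python
import numpy as np

def screen_metrics(values, affected, cutoff=70.0):
    """Sensitivity and specificity of a 'positive if value > cutoff' rule."""
    values = np.asarray(values)
    affected = np.asarray(affected, dtype=bool)
    positive = values > cutoff
    sensitivity = (positive & affected).sum() / affected.sum()
    specificity = (~positive & ~affected).sum() / (~affected).sum()
    return sensitivity, specificity

# Hypothetical NT-proBNP measurements (pg/mL) for ten relatives,
# with echocardiography as the reference standard (1 = affected).
nt_probnp = [22, 35, 48, 85, 60, 410, 880, 1500, 2100, 55]
affected  = [0,  0,  0,  0,  1,  1,   1,   1,    1,    0]
sens, spec = screen_metrics(nt_probnp, affected, cutoff=70.0)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```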

Relevance: 30.00%

Abstract:

We sought to evaluate the performance of diagnostic tools to establish an affordable setting for early detection of cervical cancer in developing countries. We compared the performance and feasibility of different screening tests in a cohort of over 12,000 women: conventional Pap smear, liquid-based cytology, visual inspection with acetic acid (VIA), visual inspection with iodine solution (VILI), cervicography, screening colposcopy, and high-risk human papillomavirus (HR-HPV) testing collected by a physician and by self-sampling. The physician-collected HR-HPV assay had the highest sensitivity (80%) but a high rate of unnecessary referrals to colposcopy (15.1%). The self-sampled HR-HPV test had markedly lower sensitivity (57.1%). VIA, VILI, and cervicography had poor sensitivity (47.4%, 55%, and 28.6%, respectively). Colposcopy had a sensitivity of 100% in detecting CIN2+ but the lowest specificity (66.9%). Co-testing the Pap test with VIA or VILI increased the sensitivity of the stand-alone Pap test from 71.6% to 87.1% and to 95%, respectively, but with a high number of unnecessary colposcopies. Co-testing with HR-HPV markedly increased the sensitivity of the Pap test (to 86%), but also with a high number of unnecessary colposcopies (17.5%). Molecular testing as an adjunct to the Pap test seems a realistic option to improve the detection of high-grade lesions in population-based screening programs.
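The co-testing trade-off quantified above (higher sensitivity, more unnecessary colposcopies) follows from the OR-combination rule: a woman is referred if either test is positive. The sketch below simulates this with invented test characteristics; none of the numbers reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, prevalence = 1000, 0.02
disease = rng.random(n) < prevalence          # hypothetical CIN2+ status

def simulate_test(disease, sens, spec, rng):
    """Simulate per-woman positive/negative results for one test."""
    r = rng.random(disease.size)
    return np.where(disease, r < sens, r < 1 - spec)

pap = simulate_test(disease, sens=0.72, spec=0.95, rng=rng)  # illustrative values
hpv = simulate_test(disease, sens=0.80, spec=0.85, rng=rng)

cotest = pap | hpv  # refer to colposcopy if EITHER test is positive

for name, result in [("Pap", pap), ("HPV", hpv), ("Pap+HPV", cotest)]:
    sens = (result & disease).sum() / disease.sum()
    referrals = (result & ~disease).sum() / (~disease).sum()
    print(f"{name:8s} sensitivity={sens:.2f}  unnecessary referrals={referrals:.3f}")
```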

Relevance: 30.00%

Abstract:

Introduction: The objective of this study was to investigate correlations between pulp oxygenation rates (%SpO2) and clinical diagnoses of reversible pulpitis (RP), irreversible pulpitis (IP), or pulp necrosis (PN). Methods: Sixty patients who presented with a tooth with endodontic pathology were grouped according to a clinical diagnosis of RP (n = 20), IP (n = 20), or PN (n = 20). The clinical diagnosis was based on the patient's dental history, periapical radiographs, clinical inspection, and percussion and thermal sensitivity testing. Pulse oximetry (PO) was used to determine pulp oxygenation rates. For every patient, one additional endodontically treated tooth (negative control [NC], n = 60) and one additional tooth with healthy pulp status (positive control [PC], n = 60) were evaluated. Analysis of variance, the Tukey HSD test, and Student's t test were used for statistical analysis. Results: The mean %SpO2 levels were as follows: RP, 87.4% (standard deviation [SD] +/- 2.46); IP, 83.1% (SD +/- 2.29); PN, 74.6% (SD +/- 1.96); PC, 92.2% (SD +/- 1.84); and NC, 0% (SD +/- 0.0). There were statistically significant differences between RP, IP, and PN compared with NC and PC, and among RP, IP, and PN (all P <= .01). Conclusions: The evaluation of pulp oxygenation rates by PO may be a useful tool to distinguish the different inflammatory stages of the pulp and aid in endodontic diagnosis. (J Endod 2012;38:880-883)
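A minimal sketch of the statistical comparison named in the abstract — one-way ANOVA followed by Tukey's HSD — on simulated %SpO2 readings. Group means and SDs echo the reported values, but the data are synthetic, and the zero-variance NC group (0% in every tooth) is omitted because ANOVA assumes within-group variability.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# Simulated %SpO2 readings per diagnostic group (means/SDs echo the abstract).
groups = {"RP": (87.4, 2.46), "IP": (83.1, 2.29), "PN": (74.6, 1.96), "PC": (92.2, 1.84)}
data = {g: rng.normal(mu, sd, 20) for g, (mu, sd) in groups.items()}

f_stat, p = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.3g}")

# Pairwise Tukey HSD comparisons at alpha = 0.05.
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 20)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```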

Relevance: 30.00%

Abstract:

Abstract Background The development of protocols for RNA extraction from paraffin-embedded samples facilitates gene expression studies on archival samples with known clinical outcome. Older samples are particularly valuable because they are associated with longer clinical follow-up. RNA extraction from formalin-fixed paraffin-embedded (FFPE) tissue is problematic due to chemical modifications and continued degradation over time. We compared the quantity and quality of RNA extracted by four different protocols from 14 ten-year-old and 14 recently archived (three-to-ten-month-old) FFPE breast cancer tissues. Using three spin-column purification-based protocols and one magnetic-bead-based protocol, total RNA was extracted in triplicate, generating 336 RNA extraction experiments. RNA fragment size was assayed by reverse transcription-polymerase chain reaction (RT-PCR) for the housekeeping gene glucose-6-phosphate dehydrogenase (G6PD), testing primer sets designed to target RNA fragment sizes of 67 bp, 151 bp, and 242 bp. Results Biologically useful RNA (minimum RNA integrity number [RIN] of 1.4) was extracted in at least one of three attempts of each protocol in 86–100% of the older and 100% of the recently archived ("months-old") samples. Short RNA fragments up to 151 bp were assayable by RT-PCR for G6PD in all ten-year-old and months-old tissues tested, but none of the ten-year-old and only 43% of the months-old samples showed amplification when the targeted fragment was 242 bp. Conclusion All protocols extracted RNA with a minimum RIN of 1.4 from ten-year-old FFPE samples. Gene expression of G6PD could be measured in all samples, old and recent, using RT-PCR primers designed for RNA fragments up to 151 bp. RNA quality from ten-year-old FFPE samples was similar to that from months-old samples, but quantity and success rate were generally higher for the months-old group. We preferred the magnetic-bead-based protocol because of its speed and the higher quantity of extracted RNA, although it produced RNA of similar quality to the other protocols. If a chosen protocol fails to extract biologically useful RNA from a given sample on the first attempt, another attempt and then another protocol should be tried before excluding the case from molecular analysis.

Relevance: 30.00%

Abstract:

The aim of the present study was to evaluate the efficacy of QMiX, SmearClear, and 17% EDTA for debris and smear layer removal from the root canal, and their effects on the push-out bond strength of an epoxy-based sealer, by scanning electron microscopy (SEM). Forty extracted human canines were assigned to the following final-rinse protocols (n = 10): G1, distilled water (control); G2, 17% EDTA; G3, SmearClear; and G4, QMiX. The specimens were submitted to SEM analysis to evaluate the presence of debris and smear layer in the apical and cervical segments. Next, forty extracted human maxillary canines with instrumented root canals were divided into four groups (n = 10) matching those of the SEM study. After filling with AH Plus, the roots were transversally sectioned to obtain dentinal slices. The specimens were submitted to a push-out bond strength test in an electromechanical testing machine. The statistical analyses for the SEM and push-out bond strength studies were performed using the Kruskal–Wallis and Dunn tests (α = 5%). There was no difference among G2, G3, and G4 in efficacy of debris and smear layer removal (P > 0.05), and all three were superior to the control group. The push-out bond strength values of G2, G3, and G4 were also superior to those of the control group. SmearClear and QMiX removed debris and the smear layer as effectively as 17% EDTA, and a final rinse with any of these solutions promoted similar push-out bond strength values.
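For readers unfamiliar with the nonparametric workflow cited above, this is a sketch of a Kruskal–Wallis test followed by Dunn's post hoc comparisons, on invented push-out values. It assumes the third-party scikit-posthocs package for Dunn's test; the study's actual data and software are not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp  # third-party: pip install scikit-posthocs

rng = np.random.default_rng(7)

# Invented push-out bond strength values (MPa) per final-rinse group.
df = pd.DataFrame({
    "group": np.repeat(["control", "EDTA", "SmearClear", "QMiX"], 10),
    "mpa": np.concatenate([
        rng.normal(3.0, 0.8, 10),   # G1 distilled water
        rng.normal(5.5, 0.9, 10),   # G2 17% EDTA
        rng.normal(5.4, 0.9, 10),   # G3 SmearClear
        rng.normal(5.6, 0.9, 10),   # G4 QMiX
    ]),
})

h, p = kruskal(*[g["mpa"].values for _, g in df.groupby("group")])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3g}")

# Dunn's post hoc test with Bonferroni adjustment (alpha = 5%).
print(sp.posthoc_dunn(df, val_col="mpa", group_col="group", p_adjust="bonferroni"))
```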

Relevance: 30.00%

Abstract:

Objectives: This study aimed to compare the micro-tensile bond strength of methacrylate resin systems with that of a silorane-based restorative system on dentin, after 24 hours and six months of water storage. Material and Methods: The restorative systems Adper Single Bond 2/Filtek Z350 (ASB), Clearfil SE Bond/Z350 (CF), Adper SE Plus/Z350 (ASEP), and P90 Adhesive System/Filtek P90 (P90) were applied to flat dentin surfaces of 20 third molars (n=5). The restored teeth were sectioned perpendicularly to the bonding interface to obtain sticks (0.8 mm²) to be tested after 24 hours (24 h) and 6 months (6 m) of water storage in a universal testing machine at 0.5 mm/min. The data were analyzed via two-way analysis of variance with Bonferroni post hoc tests at 5% global significance. Results: Overall outcomes indicated no statistical difference for resin system (p=0.26) or time (p=0.62), and no material × time interaction was detected (p=0.28). Means (standard deviations) in MPa at 24 h and 6 m were: ASB, 31.38 (4.53) and 30.06 (1.95); CF, 34.26 (3.47) and 32.75 (4.18); ASEP, 29.54 (4.14) and 33.47 (2.47); P90, 30.27 (2.03) and 31.34 (2.19). Conclusions: The silorane-based system showed performance similar to the methacrylate-based materials on dentin. All systems were stable in terms of bond strength up to 6 months of water storage.
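A sketch of the material × time two-way ANOVA described above, using the reported cell means and SDs to simulate stick-level data (the real per-stick values are not available here). It relies on statsmodels' formula API.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)

# Means (SDs) in MPa from the abstract, used to simulate stick-level data.
cells = {
    ("ASB", "24h"): (31.38, 4.53), ("ASB", "6m"): (30.06, 1.95),
    ("CF", "24h"): (34.26, 3.47), ("CF", "6m"): (32.75, 4.18),
    ("ASEP", "24h"): (29.54, 4.14), ("ASEP", "6m"): (33.47, 2.47),
    ("P90", "24h"): (30.27, 2.03), ("P90", "6m"): (31.34, 2.19),
}
rows = [
    {"system": s, "time": t, "mtbs": x}
    for (s, t), (mu, sd) in cells.items()
    for x in rng.normal(mu, sd, 5)   # n = 5 teeth per cell
]
df = pd.DataFrame(rows)

model = smf.ols("mtbs ~ C(system) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))   # tests system, time, and their interaction
```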

Relevance: 30.00%

Abstract:

Abstract Background Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system for a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We demonstrated the feasibility and usability benefits of a rigorous approach that is able to specify, validate, and execute genetic tests through easy end-user interfaces.
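To illustrate the flavor of the process-algebra layer, here is a toy sketch — not the CEGH implementation — that encodes a hypothetical genetic-testing workflow as an ACP-style process term and checks whether an executed event trace is a valid run. The real system persists such definitions in a relational database and also supports parallel composition, both omitted here.

```python
from dataclasses import dataclass
from typing import Optional, Set, Union

# Minimal ACP-style process terms: atomic actions, sequence, alternative.
@dataclass(frozen=True)
class Act:
    name: str

@dataclass(frozen=True)
class Seq:
    p: "Proc"
    q: "Proc"

@dataclass(frozen=True)
class Alt:
    p: "Proc"
    q: "Proc"

Proc = Union[Act, Seq, Alt]
DONE = None  # residual of a successfully terminated process

def step(p: Optional[Proc], a: str) -> Set[Optional[Proc]]:
    """All residual processes after p performs action a."""
    if p is DONE:
        return set()
    if isinstance(p, Act):
        return {DONE} if p.name == a else set()
    if isinstance(p, Alt):
        return step(p.p, a) | step(p.q, a)
    # Seq: act inside p.p, then continue with p.q
    return {p.q if r is DONE else Seq(r, p.q) for r in step(p.p, a)}

def accepts(p: Proc, trace: list) -> bool:
    """Is the observed event trace a complete run of process p?"""
    states: Set[Optional[Proc]] = {p}
    for event in trace:
        states = set().union(*(step(s, event) for s in states))
    return DONE in states

# Hypothetical workflow: register, then Sanger OR NGS, then report.
workflow = Seq(Act("register_sample"),
               Seq(Alt(Act("sanger_seq"), Act("ngs_panel")), Act("issue_report")))
print(accepts(workflow, ["register_sample", "ngs_panel", "issue_report"]))  # True
print(accepts(workflow, ["register_sample", "issue_report"]))               # False
```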

Relevance: 30.00%

Abstract:

OBJECTIVE: To investigate the influence of brain-derived neurotrophic factor (BDNF) gene variation on cognitive performance and clinical symptomatology in first-episode psychosis (FEP). METHODS: We performed BDNF Val66Met genotyping, cognitive testing (verbal fluency and digit spans), and assessment of symptom severity (with the Positive and Negative Syndrome Scale, PANSS) in a population-based sample of FEP patients (77 with schizophreniform psychosis and 53 with affective psychoses) and 191 neighboring healthy controls. RESULTS: There was no difference in the proportion of Met allele carriers between FEP patients and controls, and no significant influence of BDNF genotype on cognitive test scores in either psychosis group. Decreased severity of negative symptoms was found in FEP subjects who carried a Met allele, and this finding reached significance in the subgroup with affective psychoses (p < 0.01, ANOVA). CONCLUSIONS: These results suggest that, in FEP, the BDNF Val66Met polymorphism does not exert a pervasive influence on cognitive functioning but may modulate the severity of negative symptoms.

Relevance: 30.00%

Abstract:

OBJECTIVE: The frequent occurrence of inconclusive serology in blood banks and the absence of a gold standard test for Chagas disease led us to examine the efficacy of the blood culture test and five commercial tests (ELISA, IIF, HAI, c-ELISA, rec-ELISA) used in screening blood donors for Chagas disease, and to investigate the prevalence of Trypanosoma cruzi infection among donors with inconclusive screening serology with respect to some epidemiological variables. METHODS: To obtain the estimates of interest we used a Bayesian latent class model with covariates included through a logit link. RESULTS: Better performance was observed for some categories of the epidemiological variables. In addition, all pairs of tests (excluding the blood culture test) proved good alternatives both for screening (sensitivity > 99.96% in parallel testing) and for confirmation (specificity > 99.93% in serial testing) of Chagas disease. The prevalence of 13.30% observed in the stratum of donors with inconclusive serology means that most of these donors probably have non-reactive serology. Moreover, depending on the levels of specific epidemiological variables, the absence of infection can be predicted with a probability of 100% in this group using pairs of tests in parallel. CONCLUSION: The epidemiological variables can improve test results and thus help clarify inconclusive serology screening results. Moreover, all pairs formed from the five commercial tests are good alternatives for confirming results.
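The parallel and serial figures quoted above follow the standard test-combination rules, sketched below under the simplifying assumption of conditional independence between tests (the study's Bayesian latent class model handles dependence more carefully). The test characteristics are invented.

```python
from math import prod

def parallel(sens, spec):
    """Positive if ANY test is positive: screening-oriented combination."""
    return 1 - prod(1 - s for s in sens), prod(spec)

def serial(sens, spec):
    """Positive only if ALL tests are positive: confirmation-oriented."""
    return prod(sens), 1 - prod(1 - s for s in spec)

# Invented characteristics for a pair of commercial assays.
sens, spec = [0.995, 0.990], [0.990, 0.985]
print("parallel (screening):    sens=%.5f spec=%.5f" % parallel(sens, spec))
print("serial   (confirmation): sens=%.5f spec=%.5f" % serial(sens, spec))
```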

Relevance: 30.00%

Abstract:

This study explores educational technology and management education by analyzing fidelity in game-based management education interventions. A sample of 31 MBA students was selected to help answer the research question: to what extent do MBA students recognize specific game-based academic experiences, in terms of fidelity, as relevant to their managerial performance? Two distinct game-based interventions (BG1 and BG2) with key differences in fidelity were explored: BG1 presented higher physical and functional fidelity and lower psychological fidelity. Hypotheses were tested with data on the overall perceived quality of the game-based interventions, collected from participants shortly after their experiences. The findings reveal a higher overall perceived quality for BG1: (a) better for testing strategies, (b) offering better business and market models, (c) based on a pace that better stimulates learning, and (d) presenting a fidelity level that better supports real-world performance. This study supports the conclusion that MBA students recognize, to a large extent, that specific game-based academic experiences are relevant and meaningful to their managerial development, mostly when the adopted artifacts have heightened fidelity. Agents must be ready and motivated to explore the new, to try and err, and to learn collaboratively in order to perform.

Relevance: 30.00%

Abstract:

Zooplankton growth and secondary production are key input parameters in marine ecosystem modelling, but they are difficult to measure directly. Accordingly, zooplanktologists have developed several statistically based secondary production models. Here, three of these secondary production models are tested in Leptomysis lingvura (Mysidacea, Crustacea). Mysid length was measured in two cultures grown at two different food concentrations. The relationship between length and dry mass was determined in a pilot study and used to calculate dry mass from the experimental length data. Growth rates ranged from 0.11 to 0.64, while secondary production rates ranged from 1.77 to 12.23 mg dry mass. None of the three selected models was a good predictor of growth and secondary production in this species of mysid.
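A sketch of the length-to-dry-mass conversion and instantaneous growth rate calculation implied above. The allometric coefficients and measurements are invented; the study fitted its own length–dry-mass regression in a pilot experiment.

```python
import math

# Hypothetical allometric length-to-dry-mass relationship: W = a * L**b
A, B = 0.005, 2.8          # invented coefficients (mass in mg, length in mm)

def dry_mass(length_mm: float) -> float:
    return A * length_mm ** B

# Instantaneous growth rate g = ln(W2/W1) / (t2 - t1), per day.
w1, w2 = dry_mass(4.0), dry_mass(5.5)
g = math.log(w2 / w1) / 7.0          # one week between measurements
print(f"g = {g:.3f} per day")

# Secondary production of a cohort: P = g * B (biomass).
biomass = 25.0                        # invented standing stock, mg dry mass
print(f"P = {g * biomass:.2f} mg dry mass per day")
```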

Relevance: 30.00%

Abstract:

Language immersion programs are on the rise. The concept of bilingual education, which began in elite schools in the 19th century and became a mainstream educational option in the mid-20th century, has given rise to CLIL (Content and Language Integrated Learning), a methodology through which students work in a bilingual environment, acquiring knowledge of curricular subjects while developing their competences in a foreign language. In this teaching context, a new European project called PlayingCLIL has started. Six partners from different European countries (Germany, the United Kingdom, Spain, and Romania) are working on this project. Our main aim is to develop a new methodology for learning a foreign language, combining elements of pedagogic drama (interactive games) with the CLIL classroom. At present we are testing the games in different schools and high schools, and we are compiling the results into a handbook (printed and e-book).

Relevance: 30.00%

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from observed disease counts, and from expected disease counts calculated by means of reference population disease rates, an SMR is derived in each area as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low, either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, while retaining the preliminary-study perspective that an analysis of SMR indicators is expected to have. We implement control of the false discovery rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses.

The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i values. FDR-hat provides an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat-based decision (or selection) rules.

The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing estimates of the relative risks, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of accuracy of FDR estimation, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results, we always consider FDR estimation in the sets formed by all b_i values below a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation; varying the threshold reveals which FDR values can be accurately estimated by a practitioner applying the model (by the closeness between FDR-hat and the true FDR). By plotting the computed sensitivity and specificity (both known by simulation) against FDR-hat, we can check the sensitivity and specificity of the corresponding FDR-hat-based decision rules. To investigate the over-smoothing of relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation) obtained from both our model and the classic Besag, York and Mollié model. All these summary tools are worked out for all simulated scenarios (54 in total).

Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat-based decision rules is generally low, but specificity is high; in these scenarios, selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except at very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be recommended, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risks and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
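A minimal sketch of the FDR-hat selection rule described above: given posterior null probabilities b_i per area (invented here; in the thesis they come from MCMC under the hierarchical Bayesian model), reject areas in order of increasing b_i and keep the largest set whose average b_i stays at or below the target level.

```python
import numpy as np

def fdr_hat_selection(b, alpha=0.05):
    """Largest set of rejected areas with estimated FDR <= alpha.

    b: posterior probabilities of the null (absence of risk), one per area.
    Rejecting the k areas with smallest b_i gives FDR-hat(k) = mean of those
    b_i; we pick the largest k with FDR-hat(k) <= alpha.
    """
    b = np.asarray(b)
    order = np.argsort(b)
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    ok = np.nonzero(running_fdr <= alpha)[0]
    k = ok[-1] + 1 if ok.size else 0
    return order[:k], (running_fdr[k - 1] if k else 0.0)

# Invented posterior null probabilities for 10 areas.
b = [0.01, 0.02, 0.03, 0.20, 0.55, 0.70, 0.81, 0.90, 0.95, 0.99]
areas, est = fdr_hat_selection(b, alpha=0.05)
print(f"high-risk areas: {sorted(areas.tolist())}, FDR-hat = {est:.3f}")
```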