871 results for Scoring scheme
Abstract:
This multi-phase study examined the influence of retrieval processes on children's metacognitive processes in relation to, and in interaction with, achievement level and age. First, N = 150 9/10- and 11/12-year-old high and low achievers watched an educational film and predicted their test performance. The children then solved a cloze test on the film content that included answerable and unanswerable items and gave confidence judgments for every answer. Finally, the children withdrew answers that they believed to be incorrect. All children showed adequate metacognitive processes before and during test taking, with 11/12-year-olds outperforming 9/10-year-olds when characteristics of ongoing retrieval processes were considered. As to the influence of achievement level, high achievers proved to be more accurate than low achievers in their metacognitive monitoring and control. The results suggest that both cognitive resources (operationalized through achievement level) and mnemonic experience (assessed through age) fuel metacognitive development. Nevertheless, when facing higher demands on retrieval processes, experience seems to play the more important role.
Abstract:
As a basic tool of modern biology, sequence alignment can provide useful information on the fold, function, and active sites of proteins. In many cases, higher-quality sequence alignment means better performance. The motivation of the present work is to improve existing scoring schemes/algorithms by better accounting for residue–residue correlations. Based on a coarse-grained approach, the hydrophobic force between each pair of residues is derived from the protein sequence. This yields an intramolecular hydrophobic force network that describes the whole set of residue–residue interactions of each protein molecule and characterizes the protein's biological properties in the hydrophobic aspect. Earlier work suggested that such a network can characterize the top-weighted feature regarding hydrophobicity. Moreover, for each homologous protein of a family, the corresponding network shares some common, representative family characters that ultimately govern the conservation of biological properties during protein evolution. In the present work, we score these family-representative characters of a protein by the deviation of its intramolecular hydrophobic force network from that of the background. Such a score can assist existing scoring schemes/algorithms and boost the performance of multiple sequence alignment, e.g., achieving a prominent increase (50%) when searching for structurally alike residue segments at low identity levels. As its theoretical basis is different, the present scheme can assist most existing algorithms and improve their efficiency remarkably.
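The abstract does not give the network construction in detail, but the idea of a pairwise hydrophobic-interaction network scored by its deviation from a background can be sketched roughly as follows. This is a hypothetical illustration: the use of the Kyte-Doolittle hydropathy scale, the product form of the edge weights, and the mean-absolute-deviation score are all assumptions, not the paper's actual formulas.

```python
# Hypothetical sketch: build a fully connected "hydrophobic force" network
# over a sequence and score its deviation from a background value.
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydrophobic_network(seq):
    """Assign every residue pair an edge weight: the product of the
    two residues' hydropathy values (an assumed interaction model)."""
    return {(i, j): KD[seq[i]] * KD[seq[j]]
            for i in range(len(seq)) for j in range(i + 1, len(seq))}

def network_score(seq, background_mean=0.0):
    """Mean absolute deviation of the edge weights from a background
    value; a stand-in for the family-deviation score described above."""
    net = hydrophobic_network(seq)
    return sum(abs(w - background_mean) for w in net.values()) / len(net)
```

In a real setting the background would be estimated from a large sequence database rather than fixed at zero.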
Abstract:
BACKGROUND: Recommendations for statin use in the primary prevention of coronary heart disease (CHD) are based on estimation of the 10-year CHD risk. We compared the 10-year CHD risk assessments and eligibility percentages for statin therapy using three scoring algorithms currently used in Europe. METHODS: We studied 5683 women and men, aged 35-75, without overt cardiovascular disease (CVD), in a population-based study in Switzerland. We compared the 10-year CHD risk using three scoring schemes, i.e., the Framingham risk score (FRS) from the U.S. National Cholesterol Education Program's Adult Treatment Panel III (ATP III), the PROCAM scoring scheme from the International Atherosclerosis Society (IAS), and the European risk SCORE for low-risk countries, without and with extrapolation to age 60 as recommended by the European Society of Cardiology (ESC) guidelines. With FRS and PROCAM, high risk was defined as a 10-year risk of fatal or non-fatal CHD >20%; with SCORE, as a 10-year risk of fatal CVD ≥5%. We compared the proportions of high-risk participants and eligibility for statin use according to these three schemes. For each guideline, we estimated the impact of increasing statin use from current partial compliance to full compliance on potential CHD deaths averted over 10 years, using a success proportion of 27% for statins. RESULTS: The proportion of participants classified as high-risk (both genders) was 5.8% according to FRS and 3.0% according to PROCAM, whereas the European risk SCORE classified 12.5% as high-risk (15.4% with extrapolation to age 60). For the primary prevention of CHD, 18.5% of participants were eligible for statin therapy using ATP III, 16.6% using IAS, and 10.3% using ESC (13.0% with extrapolation), because the ESC guidelines recommend statin therapy only in high-risk subjects. In comparison with IAS, agreement in identifying adults eligible for statins was good with ATP III but moderate with ESC.
From a population perspective, full compliance with the ATP III guidelines would avert up to 17.9% of the 24,310 CHD deaths expected over 10 years in Switzerland, 17.3% with IAS, and 10.8% with ESC (11.5% with extrapolation). CONCLUSIONS: Full compliance with guidelines for statin therapy would result in substantial health benefits, but the proportions of high-risk adults and of adults eligible for statin use varied substantially depending on the scoring systems and corresponding guidelines used for estimating CHD risk in Europe.
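The high-risk thresholds stated in the abstract (CHD >20% for FRS/PROCAM; fatal CVD ≥5% for SCORE) can be applied mechanically once a 10-year risk estimate is available. The sketch below only encodes those cut-offs; it is not any of the published risk calculators, which require the full covariate sets.

```python
# Minimal sketch: apply the paper's stated high-risk thresholds to a
# precomputed 10-year risk estimate (given as a percentage).
def is_high_risk(scheme, ten_year_risk):
    """FRS / PROCAM: 10-year fatal or non-fatal CHD risk > 20%.
    SCORE: 10-year fatal CVD risk >= 5%."""
    if scheme in ("FRS", "PROCAM"):
        return ten_year_risk > 20.0
    if scheme == "SCORE":
        return ten_year_risk >= 5.0
    raise ValueError(f"unknown scheme: {scheme}")
```

Note that the two thresholds are not comparable endpoints (all CHD events vs. fatal CVD only), which is one reason the schemes classify such different proportions as high-risk.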
Abstract:
This thesis presents a highly sensitive genome-wide search method for recessive mutations. The method is suitable for distantly related samples that are divided into phenotype positives and negatives. High-throughput genotyping arrays are used to identify and compare homozygous regions between the cohorts. The method is demonstrated by comparing colorectal cancer patients against unaffected references. The objective is to find homozygous regions and alleles that are more common in cancer patients. We have designed and implemented software tools to automate the data analysis from genotypes to lists of candidate genes and their properties. The programs have been designed around a pipeline architecture that allows their integration with other programs such as biological databases and copy-number analysis tools. The integration of the tools is crucial, as the genome-wide analysis of cohort differences produces many candidate regions not related to the studied phenotype. CohortComparator is a genotype comparison tool that detects homozygous regions and compares their loci and allele constitutions between two sets of samples. The data are visualised in chromosome-specific graphs illustrating the homozygous regions and alleles of each sample. The genomic regions that may harbour recessive mutations are emphasised with different colours, and a scoring scheme is given for these regions. The detection of homozygous regions, the cohort comparisons, and the result annotations are all subject to assumptions, many of which have been parameterised in our programs. The effect of these parameters and the suitable scope of the methods have been evaluated. Samples with different resolutions can be balanced with the genotype estimates of their haplotypes and can be used within the same study.
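The core primitive the thesis builds on, detecting runs of consecutive homozygous genotype calls, can be sketched compactly. This is a hedged illustration of the general technique, not CohortComparator's actual algorithm; the two-character genotype encoding and the minimum run length are assumptions.

```python
# Sketch: find runs of consecutive homozygous SNP calls in one sample.
# In a cohort comparison, such runs would then be intersected across
# phenotype-positive samples and contrasted with the reference cohort.
def homozygous_runs(genotypes, min_len=3):
    """genotypes: list of two-letter calls, e.g. ['AA', 'AG', 'GG'].
    Returns (start, end) index pairs of homozygous runs >= min_len."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g[0] == g[1]:              # homozygous call
            if start is None:
                start = i
        else:                         # heterozygous call ends any run
            if start is not None and i - start >= min_len:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes) - 1))
    return runs
```

Real array data would additionally need handling of no-calls and genotyping error, which is where the parameterised assumptions mentioned in the abstract come in.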
Abstract:
In protein sequence alignment, residue similarity is usually evaluated by a substitution matrix, which scores all possible exchanges of one amino acid with another. Several matrices are widely used in sequence alignment, including the PAM matrices derived from homologous sequences and the BLOSUM matrices derived from aligned segments of BLOCKS. However, most matrices do not address the high-order residue–residue interactions that are vital to the bioproperties of proteins. Taking into account the inherent correlation within residue triplets, we present a new scoring scheme for sequence alignment. A protein sequence is treated as overlapping, successive 3-residue segments. The two edge residues of a triplet are clustered into hydrophobic or polar categories, respectively. The protein sequence is then rewritten as a triplet sequence over a 2 · 20 · 2 = 80-letter alphabet. Using a traditional approach, we construct a new scoring scheme named TLESUMhp (TripLEt SUbstitution Matrices with hydrophobic and polar information) for the pairwise substitution of triplets, which characterizes the similarity of residue triplets. Applications of this matrix led to marked improvements in multiple sequence alignment and in searching for structurally alike residue segments. The reason for the occurrence of the "twilight zone," i.e., the structure explosion of low-identity sequences, is also discussed.
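The triplet rewriting described above is concrete enough to sketch: each overlapping 3-residue window keeps its middle residue but collapses the two edge residues to hydrophobic (H) or polar (P), giving 2 · 20 · 2 = 80 possible tokens. The particular hydrophobic set used below is a common choice and an assumption on my part; the paper's exact partition may differ.

```python
# Sketch of the 80-letter triplet encoding behind TLESUMhp.
HYDROPHOBIC = set("AVLIMFWCY")   # assumed hydrophobic/polar split

def to_triplets(seq):
    """Rewrite a sequence into overlapping 3-residue tokens whose two
    edge residues are collapsed to H (hydrophobic) or P (polar), so the
    token alphabet has 2 * 20 * 2 = 80 symbols."""
    def cls(r):
        return 'H' if r in HYDROPHOBIC else 'P'
    return [cls(seq[i]) + seq[i + 1] + cls(seq[i + 2])
            for i in range(len(seq) - 2)]
```

A substitution matrix over these 80 tokens can then be trained exactly as BLOSUM-style matrices are, by counting pairwise token exchanges in trusted alignments.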
Abstract:
Background
Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm, these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties by gene-expression profile. Lamb et al. first proposed the Connectivity Map [Lamb et al. (2006), Science 313, 1929–1935] to make successful connections among small molecules, genes, and diseases using genomic signatures.
Results
Here we have built on the principles of the Connectivity Map to present a simpler and more robust method for the construction of reference gene-expression profiles and for the connection scoring scheme, which, importantly, allows the evaluation of the statistical significance of all observed connections. We tested the new method with two randomly generated gene signatures and three experimentally derived gene signatures (for HDAC inhibitors, estrogens, and immunosuppressive drugs, respectively). Our testing indicates that the method achieves a higher level of specificity and sensitivity and thus improves on the original method.
Conclusion
The method presented here not only offers more principled statistical procedures for testing connections but, more importantly, provides an effective safeguard against false connections while achieving increased sensitivity. With its robust performance, the method has potential use in the drug-development pipeline for the early recognition of pharmacological and toxicological properties in chemicals and new drug candidates, and also more broadly in other 'omics sciences.
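The general shape of a connection score with an attached significance estimate can be sketched as follows. This is an assumed, minimal statistic (a rank-based score plus a permutation p-value), not the paper's actual scoring scheme; the function names are hypothetical.

```python
import random

# Sketch: score a gene signature against a ranked reference profile and
# attach a permutation p-value, illustrating how "valuation of
# statistical significance" of a connection can work in principle.
def connection_score(ranked_genes, signature):
    """Mean rank position of the signature genes, mapped to [-1, 1]:
    +1 = signature at the top of the profile, -1 = at the bottom."""
    n = len(ranked_genes)
    pos = {g: i for i, g in enumerate(ranked_genes)}
    mean_rank = sum(pos[g] for g in signature) / len(signature)
    return 1.0 - 2.0 * mean_rank / (n - 1)

def p_value(ranked_genes, signature, n_perm=1000, seed=0):
    """Fraction of random same-size signatures scoring at least as
    extreme as the observed one (two-sided, with +1 correction)."""
    rng = random.Random(seed)
    obs = abs(connection_score(ranked_genes, signature))
    hits = sum(
        abs(connection_score(ranked_genes,
                             rng.sample(ranked_genes, len(signature)))) >= obs
        for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

The permutation null is what guards against false connections: a high-looking score is only reported if random signatures rarely match it.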
Abstract:
The relationships among organisms and their surroundings can be of immense complexity. To describe and understand an ecosystem as a tangled bank, multiple modes of interaction and their effects have to be considered, such as predation, competition, mutualism, and facilitation. Understanding the resulting interaction networks is a challenge in changing environments, e.g. for predicting the knock-on effects of invasive species and for understanding how climate change impacts biodiversity. The elucidation of complex ecological systems and their interactions will benefit enormously from the development of new machine-learning tools that aim to infer the structure of interaction networks from field data. In the present study, we propose a novel Bayesian regression and multiple changepoint model (BRAM) for reconstructing species interaction networks from observed species distributions. The model has been devised to allow robust inference in the presence of spatial autocorrelation and distributional heterogeneity. We have evaluated the model on simulated data that combine a trophic niche model with a stochastic population model on a 2-dimensional lattice, and we have compared its performance with L1-penalized sparse regression (LASSO) and non-linear Bayesian networks with the BDe scoring scheme. In addition, we have applied our method to plant ground-coverage data from the western shore of the Outer Hebrides with the objective of inferring the ecological interactions.
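The LASSO baseline mentioned above regresses each species' abundance on the others; non-zero coefficients are read as candidate interaction edges. The toy coordinate-descent solver below illustrates that baseline only (not the BRAM model), using unstandardised data and a fixed penalty, both simplifying assumptions.

```python
# Toy L1-penalised regression (LASSO) by coordinate descent, as used for
# the sparse-regression baseline: non-zero coefficients suggest edges.
def lasso(X, y, lam, n_iter=200):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 over b."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with predictor j removed
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding update for coordinate j
            if rho > lam:
                beta[j] = (rho - lam) / z
            elif rho < -lam:
                beta[j] = (rho + lam) / z
            else:
                beta[j] = 0.0
    return beta
```

In network inference this regression is run once per species, and the union of non-zero coefficients gives the inferred interaction graph; a production analysis would use a tuned solver rather than this sketch.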
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade-seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and that of a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. High reliability was found between the pretest and the posttest; students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance, in that students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa.
However, no significant difference in problem-solving improvement was observed between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Insignificant trends were noted in the results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open-coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve the desired thesis outcomes.
[Concurrent irradiation and DNA tumor vaccination in canine oral malignant melanoma: a pilot study].
Abstract:
Melanoma is the most common oral tumor in dogs, characterized by rapid growth, local invasion, and a high metastatic rate. The goal of this study was to evaluate the combination of radiation therapy and a DNA tumor vaccine. We hypothesized that concurrent use would not increase toxicity. Nine dogs with oral melanoma were treated with 4 fractions of 8 Gray at 7-day intervals. The vaccine was given 4 times at 14-day intervals, beginning at the first radiation fraction. Local acute radiation toxicities were assessed according to the VRTOG toxicity scoring scheme over a period of 7 weeks. In none of the evaluated dogs did mucositis, dermatitis, or conjunctivitis exceed grade 2. In 3 dogs, mild fever, lethargy, and local swelling at the injection site were seen after vaccine application. In conclusion, the concurrent administration of radiation therapy and the vaccine was well tolerated in all dogs.
Abstract:
The pink shrimp, Farfantepenaeus duorarum, familiar to most Floridians as either food or bait shrimp, is ubiquitous in South Florida coastal and offshore waters and is proposed as an indicator for assessing restoration of South Florida's southern estuaries: Florida Bay, Biscayne Bay, and the mangrove estuaries of the lower southwest coast. Relationships between pink shrimp and salinity have been determined in both field and laboratory studies. Salinity is directly relevant to restoration because the salinity regimes of South Florida estuaries, critical nursery habitat for the pink shrimp, will be altered by changes in the quantity, timing, and distribution of freshwater inflow planned as part of the Comprehensive Everglades Restoration Project (CERP). Here we suggest performance measures based on pink shrimp density (number per square meter) in the estuaries and propose a restoration assessment and scoring scheme using these performance measures that can readily be communicated to managers, policy makers, and the interested public. The pink shrimp is an appropriate restoration indicator because of its ecological as well as its economic importance and also because scientific interest in pink shrimp in South Florida has produced a wealth of information about the species and relatively long time series of data on both juveniles in estuarine nursery habitats and adults on the fishing grounds. We suggest research needs for improving the pink shrimp performance measure.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
A comparison of antibiotic prescribing indicators and medicines management scoring in secondary care
Abstract:
Poster:
- Robust prescribing indicators analogous to those used in primary care are not currently available in NHS hospital trusts
- The Department of Health has recently implemented a scheme for self-assessment scoring of medicines management processes (maximum score 23) in NHS hospitals
- There is no clear relationship between the average values of two antibiotic prescribing indicators obtained in ten NHS hospital trusts in the West Midlands
- There is no clear relationship between either indicator value and the corresponding self-assessment medicines management score
- This study highlights the difficulties involved in assessing medicines management processes in NHS hospitals; better medicines management evaluation systems are needed