957 results for Computer Prediction Program
Abstract:
Background: The cerebrospinal fluid (CSF) biomarkers amyloid beta (A beta)-42, total-tau (T-tau), and phosphorylated-tau (P-tau) demonstrate good diagnostic accuracy for Alzheimer's disease (AD). However, there are large variations in biomarker measurements between studies, and between and within laboratories. The Alzheimer's Association has initiated a global quality control program to estimate and monitor variability of measurements, quantify batch-to-batch assay variations, and identify sources of variability. In this article, we present the results from the first two rounds of the program. Methods: The program is open to laboratories using commercially available kits for A beta, T-tau, or P-tau. CSF samples (aliquots of pooled CSF) are sent for analysis several times a year from the Clinical Neurochemistry Laboratory at the Mölndal campus of the University of Gothenburg, Sweden. Each round consists of three quality control samples. Results: Forty laboratories participated. Twenty-six used INNOTEST enzyme-linked immunosorbent assay kits, 14 used Luminex xMAP with the INNO-BIA AlzBio3 kit (both measure A beta-(1-42), P-tau(181P), and T-tau), and 5 used Meso Scale Discovery with the A beta triplex (A beta N-42, A beta N-40, and A beta N-38) or T-tau kits. The total coefficients of variation between the laboratories were 13% to 36%. Five laboratories analyzed the samples six times on different occasions. Within-laboratory precision differed considerably between biomarkers within individual laboratories. Conclusions: Measurements of CSF AD biomarkers show large between-laboratory variability, likely caused by factors related to analytical procedures and the analytical kits. Standardization of laboratory procedures and efforts by kit vendors to increase kit performance might lower variability, and will likely increase the usefulness of CSF AD biomarkers. (C) 2011 The Alzheimer's Association. All rights reserved.
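The between-laboratory variability reported above is summarized as a total coefficient of variation (CV), i.e. the standard deviation divided by the mean. A minimal sketch of that computation; the measurement values below are illustrative only, not data from the study:

```python
import statistics

def coefficient_of_variation(values):
    """Total CV in percent: sample standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical A beta-42 results (pg/mL) for one pooled CSF sample
# as reported by different laboratories (made-up numbers).
lab_results = [480, 520, 610, 450, 570, 500]
cv = coefficient_of_variation(lab_results)
```

A CV in the 13% to 36% range, as found across the participating laboratories, means the spread between labs is a substantial fraction of the measured level itself.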
Abstract:
Axial vertebral rotation, an important parameter in the assessment of scoliosis, may be identified on X-ray images. In line with the advances in the field of digital radiography, hospitals have been increasingly using this technique. The objective of the present study was to evaluate the reliability of computer-processed rotation measurements obtained from digital radiographs. A software program was therefore developed which is able to digitally reproduce the methods of Perdriolle and Raimondi and to semi-automatically calculate the degree of vertebral rotation on digital radiographs. Three independent observers estimated vertebral rotation employing both the digital and the traditional manual methods. Compared to the traditional method, the digital assessment showed a 43% smaller error and a stronger correlation. In conclusion, the digital method seems to be reliable and to enhance the accuracy and precision of vertebral rotation measurements.
Abstract:
The current prediction of genes in the Plasmodium falciparum genome database relies upon a limited number of specially developed computer algorithms. We have re-annotated the sequence of chromosome 2 of P. falciparum by a computer-assisted manual analysis, which is described here. Of 161 newly predicted introns, we have experimentally confirmed 98. We regard 110 introns from the previously published analyses as probable; we delete 3, change 26, and add 135. We recognise 214 genes in chromosome 2. We have predicted introns in 121 genes. The increased complexity of gene structure on chromosome 2 is likely to be mirrored by the entire genome. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Program compilation can be formally defined as a sequence of equivalence-preserving transformations, or refinements, from high-level language programs to assembler code. Recent models also incorporate timing properties, but the resulting formalisms are intimidatingly complex. Here we take advantage of a new, simple model of real-time refinement, based on predicate transformer semantics, to present a straightforward compilation formalism that incorporates real-time constraints. (C) 2002 Elsevier Science B.V. All rights reserved.
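The predicate transformer semantics mentioned above can be illustrated with a toy weakest-precondition calculator. This is only a sketch of the general idea, not the paper's real-time formalism: states are modelled as Python dicts, predicates as boolean functions on states, and each command by the transformer mapping a postcondition to its weakest precondition (all names here are my own):

```python
# Predicates: boolean functions on program states (dicts).
# Commands: modelled by their weakest-precondition transformer.

def wp_assign(var, expr, post):
    """wp(var := expr, post): post must hold in the updated state."""
    return lambda s: post({**s, var: expr(s)})

def wp_seq(wp1, wp2, post):
    """wp(c1; c2, post) = wp(c1, wp(c2, post))."""
    return wp1(wp2(post))

# Example program: x := x + 1 ; y := 2 * x, postcondition y == 4.
post = lambda s: s["y"] == 4
c1 = lambda q: wp_assign("x", lambda s: s["x"] + 1, q)
c2 = lambda q: wp_assign("y", lambda s: 2 * s["x"], q)
pre = wp_seq(c1, c2, post)
# The computed weakest precondition is equivalent to x == 1.
```

A refinement step is then an equivalence- (or strengthening-) preserving rewrite of the command that leaves such preconditions intact.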
Abstract:
A major challenge faced by today's white clover breeder is how to manage resources within a breeding program. It is essential to utilise these resources with sufficient flexibility to build on past progress from conventional breeding strategies, but also take advantage of emerging opportunities from molecular breeding tools such as molecular markers and transformation. It is timely to review white clover breeding strategies. This background can then be used as a foundation for considering how to continue conventional plant improvement activities and complement them with molecular breeding opportunities. In this review, conventional white clover breeding strategies relevant to the Australian dryland target population environments are considered. Attention is given to: (i) availability of genetic variation, (ii) characterisation of germplasm collections, (iii) quantitative models for estimation of heritability, (iv) the role of multi-environment trials to accommodate genotype-by-environment interactions, (v) interdisciplinary research to understand adaptation to dryland environments, (vi) breeding and selection strategies, and (vii) cultivar structure. Current achievements in biotechnology with specific reference to white clover breeding in Australia are considered, and computer modelling of breeding programs is discussed as a useful integrative tool for the joint evaluation of conventional and molecular breeding strategies and optimisation of resource use in breeding programs. 
Four areas are identified as future research priorities: (i) capturing the potential genetic diversity among introduced accessions and ecotypes that are adapted to key constraints such as summer moisture stress and the use of molecular markers to assess the genetic diversity, (ii) understanding the underlying physiological/morphological root and shoot mechanisms involved in water use efficiency of white clover, with the objective of identifying appropriate selection criteria, (iii) estimation of quantitative genetic parameters of important morphological/physiological attributes to enable prediction of response to selection in target environments, and (iv) modelling white clover breeding strategies to evaluate the opportunities for integration of molecular breeding strategies with conventional breeding programs.
Abstract:
Tissue engineering applications rely on scaffolds that, during their service life, whether for in-vivo or in-vitro applications, are under mechanical loading. The variation of the mechanical condition of the scaffold is strongly relevant for cell culture and has scarcely been addressed. The fatigue life cycle of poly-ε-caprolactone (PCL) scaffolds, with and without fibrin as a filler of the pore structure, was characterized both dry and immersed in liquid water. A strong increase, from 100 to 500 loading cycles before collapse, is observed in the samples tested in immersed conditions, due to the more uniform stress distribution within the samples, with the fibrin loading playing a minor role in the mechanical performance of the scaffolds.
Abstract:
Program slicing is a well-known family of techniques used to identify code fragments which depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing, and software maintenance. Most slicing methods, usually oriented towards the imperative or object paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependencies' structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
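The criterion-as-projection idea described above can be mimicked in Python (a hedged sketch; the paper works by program calculation in the Bird-Meertens formalism, not by executing compositions). Composing a projection with a program that returns several results singles out the computation relevant to one of them, and that composition is the starting point for calculating the slice:

```python
def compose(f, g):
    """Function composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def stats(xs):
    """A monolithic program computing two independent results."""
    return (sum(xs), max(xs))

# Slicing criterion: a projection selecting only the first output.
fst = lambda pair: pair[0]

# Composing criterion with program. Only the sum computation is
# relevant to this slice; the max computation could be sliced away
# by calculation, since fst discards it.
sum_slice = compose(fst, stats)
```

A hiding function would work dually, masking the components that are *not* of interest instead of selecting the one that is.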
Abstract:
This paper reports on the development of specific slicing techniques for functional programs and their use for the identification of possible coherent components from monolithic code. An associated tool is also introduced. This piece of research is part of a broader project on program understanding and re-engineering of legacy code supported by formal methods.
Abstract:
Geostatistics has been successfully used to analyze and characterize the spatial variability of environmental properties. Besides giving estimated values at unsampled locations, it provides a measure of the accuracy of the estimate, which is a significant advantage over traditional methods used to assess pollution. In this work universal block kriging is used in a novel way to model and map the spatial distribution of salinity measurements gathered by an Autonomous Underwater Vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters, characterizing its spatial variability in the vicinity of the discharge and estimating dilution. The results demonstrate that geostatistical methodology can provide good estimates of the dispersion of effluents that are very valuable in assessing the environmental impact and managing sea outfalls. Moreover, since accurate measurements of the plume's dilution are rare, these studies might be very helpful in the future to validate dispersion models.
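The kriging machinery behind this kind of study can be sketched in a few lines. The simplification below is ordinary point kriging with an assumed exponential variogram, not the universal block kriging of the paper (which additionally models a spatial trend and averages over blocks); coordinates, salinity values, and variogram parameters are illustrative only:

```python
import math

def variogram(h, sill=1.0, rng=50.0):
    """Exponential variogram model (assumed parameters, not fitted data)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(coords, values, target):
    """Estimate the value at `target` as a weighted sum of the samples."""
    n = len(coords)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # Kriging system: variogram matrix, plus a Lagrange row/column
    # enforcing that the weights sum to one (unbiasedness).
    A = [[variogram(dist(coords[i], coords[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(dist(coords[i], target)) for i in range(n)] + [1.0]
    w = solve(A, b)[:n]
    return sum(w[i] * values[i] for i in range(n))

# Illustrative AUV salinity samples (psu) at four positions (metres).
coords = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
values = [35.2, 34.8, 33.9, 34.5]
est = ordinary_kriging(coords, values, (5.0, 5.0))
```

The same solved system also yields the kriging variance, which is the accuracy measure the abstract highlights as the advantage over traditional assessment methods.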
Abstract:
Dissertation presented to obtain a Master's degree in Computer Science
Abstract:
Existing computer programming training environments help users to learn programming by solving problems from scratch. Nevertheless, initiating the resolution of a program can be frustrating and demotivating if the student does not know where and how to start. Skeleton programming facilitates a top-down design approach, where a partially functional system with complete high-level structures is available, so the student needs only to progressively complete or update the code to meet the requirements of the problem. This paper presents CodeSkelGen, a program skeleton generator. CodeSkelGen generates skeleton or buggy Java programs from a complete annotated program solution provided by the teacher. The annotations are formally described within an annotation type and processed by an annotation processor. This processor is responsible for a set of actions, ranging from the creation of dummy methods to the exchange of operator types included in the source code. The generator tool will be included in a learning environment that aims to assist teachers in the creation of programming exercises and to help students in their resolution.
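The annotation-driven generation step can be sketched as follows. CodeSkelGen itself works on Java with an annotation type and annotation processor; this is only a Python analogue of the "create dummy methods" action, using a hypothetical `to_complete` marker decorator in place of the Java annotation:

```python
import ast

MARKER = "to_complete"  # hypothetical marker playing the annotation's role

def make_skeleton(source):
    """Replace the body of every function marked @to_complete with a
    dummy 'raise NotImplementedError' body, leaving the rest intact."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            marked = any(isinstance(d, ast.Name) and d.id == MARKER
                         for d in node.decorator_list)
            if marked:
                node.decorator_list = []  # strip the marker from the skeleton
                node.body = [ast.Raise(
                    exc=ast.Call(func=ast.Name(id="NotImplementedError",
                                               ctx=ast.Load()),
                                 args=[], keywords=[]),
                    cause=None)]
    return ast.unparse(ast.fix_missing_locations(tree))

# Teacher's complete solution: one method to be emptied, one kept as-is.
solution = '''
def to_complete(f):
    return f

@to_complete
def add(a, b):
    return a + b

def helper():
    return 42
'''
skeleton = make_skeleton(solution)
```

The student then receives `skeleton`, in which `add` is a dummy method to fill in while the supporting code is already complete; a buggy-program variant would instead rewrite operators in the marked bodies rather than removing them.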