942 results for software, translation, validation tool, VMNET, Wikipedia, XML
Abstract:
Final project of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014
Abstract:
The purpose of this dissertation is to contribute to the translation of Cycle and Bike Polo terminology into European Portuguese and thereby draw the attention of a wide Portuguese audience to this fairly new sport, whose roots go back to Elephant and Horse Polo in India and in other parts of the world. Following a characterization of technical translation, the translation issues raised by Bike and Cycle Polo's terminological units are dealt with within the Cognitive Linguistics framework and are hence intimately associated with both physical experiences and historical facts. Indeed, the coinage of sports terminology in this field is highly motivated by metaphorical and metonymical conceptualization mapped from dimensions of physical reality, as well as from terminology already established in other sports. To make this research unique, a glossary of technical terms from Bike and Cycle Polo has been compiled, since most of them had not yet been translated from English into European Portuguese. For validation of my translations I turned to Portuguese bike polo players, with special reference to Catarina Almeida, who introduced me to Bike Polo's terminology.
Abstract:
Objective: In Southern European countries, up to one-third of patients with hereditary hemochromatosis (HH) do not present the common HFE risk genotype. To investigate the molecular basis of these cases, we designed a gene panel for rapid and simultaneous analysis of 6 HH-related genes (HFE, TFR2, HJV, HAMP, SLC40A1 and FTL) by next-generation sequencing (NGS). Materials and Methods: Eighty-eight Portuguese patients with iron overload, negative for the common HFE mutations, were analysed. A TruSeq Custom Amplicon kit (TSCA, Illumina) was designed to generate 97 amplicons covering the exons, intron/exon junctions and UTRs of the mentioned genes, with a cumulative target sequence of 12,115 bp. Amplicons were sequenced on the MiSeq instrument (Illumina) using 250 bp paired-end reads. Sequences were aligned against the human genome reference hg19 using the alignment and variant-caller algorithms in the MiSeq Reporter software. Novel variants were validated by Sanger sequencing and their pathogenic significance was assessed by in silico studies. Results: We found a total of 55 different genetic variants. These include novel pathogenic missense and splicing variants (in HFE and TFR2), a very rare variant in the IRE of FTL, and a variant that creates a novel translation initiation codon in the HAMP gene, among others. Conclusion: The merging of TSCA methodology and NGS technology appears to be an appropriate tool for the simultaneous and fast analysis of HH-related genes in a large number of samples. However, establishing the clinical relevance of NGS-detected variants for HH development remains a demanding task, requiring further functional studies.
Abstract:
The present work develops a Life Cycle Assessment (LCA) study of thermo-modified Atlanticwood® pine boards based on real data provided by the company Santos & Santos Madeiras. Atlanticwood® pine boards are used mainly for exterior decking and for cladding building facades. The LCA study is elaborated according to the ISO 14040/44 standard and the Product Category Rules for preparing an environmental product declaration for Construction Products and Construction Services. The inventory analysis and, subsequently, the impact analysis were performed using the LCA software SimaPro 8.0.4. The method chosen for impact assessment was EPD (2013) V1.01. The results show that more than ¾ of the 'Acidification', 'Eutrophication', 'Global warming' and 'Abiotic depletion' caused by the production of 1 m³ of Atlanticwood® pine boards is due to energy consumption (electricity + gas + biomass). This was to be expected, since the treatment is based on heat production and no chemicals are added during the heat-treatment process.
Abstract:
Internship report submitted in fulfilment of the requirements for the Master's degree in Organisational Information Systems (Sistemas de Informação Organizacionais)
Abstract:
The thermal behaviour of the building is represented by a relatively simple dynamic model that takes into account the effects of the thermal mass of the building components. The model of an intra-floor apartment has been built in the Matlab-Simulink environment and considers heat transmission through the external envelope (walls and windows), the internal thermal masses (i.e. furniture, internal walls and floor slabs), and the solar gains through the opaque and transparent surfaces of the external envelope. The simulation results for the entire year have been compared with those obtained with the dynamic building simulation software EnergyPlus, and the model has thus been validated.
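The kind of lumped-thermal-mass model described above can be illustrated with a minimal single-node sketch. This is not the authors' Simulink model; the parameter values and function name are illustrative assumptions.

```python
# Minimal lumped-capacitance (1R1C) sketch of a building thermal-mass model.
# Parameter values are illustrative assumptions, not those of the cited study.

def simulate_indoor_temp(t_out, q_gain, t0=20.0, ua=120.0, c=5.0e7, dt=3600.0):
    """Explicit-Euler integration of C * dT/dt = q_gain - UA * (T_in - T_out).

    t_out  : outdoor temperatures [degC], one per time step
    q_gain : internal + solar gains [W], one per time step
    ua     : envelope heat-loss coefficient [W/K]
    c      : lumped thermal capacitance of walls, furniture, slabs [J/K]
    dt     : time step [s]
    """
    t_in = t0
    history = []
    for to, qg in zip(t_out, q_gain):
        t_in += dt * (qg - ua * (t_in - to)) / c
        history.append(t_in)
    return history

# With no gains and a colder exterior, the indoor node cools toward t_out.
temps = simulate_indoor_temp(t_out=[0.0] * 48, q_gain=[0.0] * 48)
```

The same first-order structure, replicated per construction layer and coupled to weather data, is what tools like EnergyPlus solve at much higher fidelity.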
Abstract:
BACKGROUND Screening of aphasia in acute stroke is crucial for directing patients to early language therapy. The Language Screening Test (LAST), originally developed in French, is a validated language screening test that allows detection of a language deficit within a few minutes. The aim of the present study was to develop and validate two parallel German versions of the LAST. METHODS The LAST includes subtests for naming, repetition, automatic speech, and comprehension. For the translation into German, task constructs and psycholinguistic criteria for item selection were identical to the French LAST. A cohort of 101 stroke patients, all native German speakers, was tested. Validation of the LAST was based on (1) analysis of equivalence of the German versions, which was established by administering both versions successively in a subset of patients, (2) internal validity by means of internal consistency analysis, and (3) external validity by comparison with the short version of the Token Test in another subset of patients. RESULTS The two German versions were equivalent as demonstrated by a high intraclass correlation coefficient of 0.91. Furthermore, an acceptable internal structure of the LAST was found (Cronbach's α = 0.74). A highly significant correlation (r = 0.74, p < 0.0001) between the LAST and the short version of the Token Test indicated good external validity of the scale. CONCLUSION The German version of the LAST, available in two parallel versions, is a new and valid language screening test in stroke.
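The internal-consistency statistic reported here, Cronbach's α, is simple to compute from an item-score matrix. A pure-Python sketch follows; the score matrix is fabricated for illustration and has nothing to do with the LAST data.

```python
# Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances) / total variance)
# The 4-subject x 3-item score matrix below is an invented example.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """scores: list of subjects, each a list of k item scores."""
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Items that rank subjects consistently yield a high alpha.
scores = [[1, 1, 2], [2, 3, 3], [3, 3, 4], [4, 5, 5]]
alpha = cronbach_alpha(scores)
```

Values around 0.7 or above, like the 0.74 reported for the LAST, are conventionally taken as acceptable internal consistency.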
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
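The combination of mutation analysis and testgraph traversal described above can be sketched compactly. The bounded-counter "specification", its mutant, and the testgraph paths below are invented examples, not the framework's actual tooling.

```python
# Sketch of the testgraph idea: paths through a directed graph of specification
# states are traversed, and a generic property (here, an invariant) is checked
# in every reached state. A mutant of the specification should violate it.
# The bounded-counter spec and its mutant are invented for illustration.

CAPACITY = 3

def spec_inc(state):            # original specification of an increment op
    return min(state + 1, CAPACITY)

def mutant_inc(state):          # mutant: the boundary check has been dropped
    return state + 1

def invariant(state):           # generic property: state stays within bounds
    return 0 <= state <= CAPACITY

def traverse(op, testgraph, start=0):
    """Follow each path in the testgraph, applying `op` per edge and checking
    the invariant in every reached state. Returns the violating states."""
    violations = []
    for path in testgraph:
        state = start
        for _ in path:
            state = op(state)
            if not invariant(state):
                violations.append(state)
    return violations

testgraph = [["inc"], ["inc"] * 2, ["inc"] * 5]   # paths through the graph
```

The original specification passes every state the testgraph reaches, while the mutant is caught as soon as a path drives it past the boundary; that is exactly the signal mutation analysis uses to judge whether the chosen properties and testgraph are strong enough.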
Abstract:
Objectives: To validate the WOMAC 3.1 in a touch screen computer format, which presents each question as a cartoon in writing and in speech (QUALITOUCH method), and to assess patient acceptance of the computer touch screen version. Methods: The paper and computer formats of the WOMAC 3.1 were applied in random order to 53 subjects with hip or knee osteoarthritis. The mean age of the subjects was 64 years (range 45 to 83); 60% were male, 53% were 65 years or older, and 53% used computers at home or at work. Agreement between formats was assessed by intraclass correlation coefficients (ICCs). Preferences were assessed with a supplementary questionnaire. Results: ICCs between formats were 0.92 (95% confidence interval, 0.87 to 0.96) for pain, 0.94 (0.90 to 0.97) for stiffness, and 0.96 (0.94 to 0.98) for function. ICCs were similar in men and women, in subjects with or without previous computer experience, and in subjects below or above age 65. The computer format was found easier to use by 26% of the subjects, the paper format by 8%, and 66% were undecided. Overall, 53% of subjects preferred the computer format, 9% preferred the paper format, and 38% were undecided. Conclusion: The computer format of the WOMAC 3.1 is a reliable assessment tool. Agreement between the computer and paper formats was independent of computer experience, age, or sex. Thus the computer format may help improve patient follow-up by meeting patients' preferences and providing immediate results.
Abstract:
Background: The multitude of motif detection algorithms developed to date has largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results: GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and to convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion: GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Quickly obtaining the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
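The three Halstead metrics named above are defined purely from operator and operand counts. A sketch of the standard formulas follows; the token lists and the sample query are illustrative assumptions, not the study's actual tokenization.

```python
# Halstead length, difficulty, and effort from operator/operand counts.
# How a query is split into operators and operands is an assumption here.
import math

def halstead(operators, operands):
    """operators/operands: flat lists of tokens as they occur in the query."""
    n1, n2 = len(set(operators)), len(set(operands))   # distinct counts
    N1, N2 = len(operators), len(operands)             # total counts
    length = N1 + N2                                   # program length N
    volume = length * math.log2(n1 + n2)               # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)                  # D = (n1/2) * (N2/n2)
    effort = difficulty * volume                       # E = D * V
    return length, difficulty, effort

# e.g. the query: SELECT name FROM emp WHERE dept = 'sales'
operators = ["SELECT", "FROM", "WHERE", "="]
operands = ["name", "emp", "dept", "'sales'"]
length, difficulty, effort = halstead(operators, operands)
```

Averaging such scores over a representative query workload is what yields the per-instantiation complexity figure the theory compares.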
Abstract:
While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
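The edge-weight instrumentation mentioned above can be sketched in miniature: count executions of each control-flow edge and mark a block for native translation once an incoming edge crosses a hotness threshold. The toy block graph and threshold are invented for illustration and are not UQDBT's actual mechanism.

```python
# Sketch of edge-weight-driven hot-path detection: each (src, dst) control-flow
# edge carries a counter, and a block is "translated" once an incoming edge
# becomes hot. The block graph and threshold are invented examples.

HOT_THRESHOLD = 10

def run(blocks, start, steps):
    """blocks: {label: successor label or None}. Follows edges for `steps`
    iterations, counting each edge; returns the set of blocks deemed hot."""
    edge_counts = {}
    translated = set()
    label = start
    for _ in range(steps):
        nxt = blocks.get(label)
        if nxt is None:
            break
        edge = (label, nxt)
        edge_counts[edge] = edge_counts.get(edge, 0) + 1
        if edge_counts[edge] >= HOT_THRESHOLD:
            translated.add(nxt)        # a real DBT would emit native code here
        label = nxt
    return translated

# A two-block loop, A -> B -> A -> ...: both blocks become hot quickly.
hot = run({"A": "B", "B": "A"}, start="A", steps=50)
```

Counting edges rather than speculating on individual instructions is what lets such a system identify hot loops after only a few iterations.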
Abstract:
Two-dimensional (2-D) strain (epsilon(2-D)) on the basis of speckle tracking is a new technique for strain measurement. This study sought to validate epsilon(2-D) and tissue velocity imaging (TVI)-based strain (epsilon(TVI)) against tagged harmonic-phase (HARP) magnetic resonance imaging (MRI). Thirty patients (mean age 62 +/- 11 years) with known or suspected ischemic heart disease were evaluated. Wall motion (wall motion score index 1.55 +/- 0.46) was assessed by an expert observer. Three apical images were obtained for longitudinal strain (16 segments) and 3 short-axis images for radial and circumferential strain (18 segments). Radial epsilon(TVI) was obtained in the posterior wall. HARP MRI was used to measure principal strain, expressed as maximal length change in each direction. Values for epsilon(2-D), epsilon(TVI), and HARP MRI were comparable for all 3 strain directions and were reduced in dysfunctional segments. The mean difference and correlation between longitudinal epsilon(2-D) and HARP MRI (2.1 +/- 5.5%, r = 0.51, p < 0.001) were similar to those between longitudinal epsilon(TVI) and HARP MRI (1.1 +/- 6.7%, r = 0.40, p < 0.001). The mean difference and correlation were more favorable between radial epsilon(2-D) and HARP MRI (0.4 +/- 10.2%, r = 0.60, p < 0.001) than between radial epsilon(TVI) and HARP MRI (3.4 +/- 10.5%, r = 0.47, p < 0.001). For circumferential strain, the mean difference and correlation between epsilon(2-D) and HARP MRI were 0.7 +/- 5.4% and r = 0.51 (p < 0.001), respectively. In conclusion, the modest correlations of echocardiographic and HARP MRI strain reflect the technical challenges of the 2 techniques. Nonetheless, epsilon(2-D) provides a reliable tool to quantify regional function, with radial measurements being more accurate and feasible than with TVI. Unlike epsilon(TVI), epsilon(2-D) provides circumferential measurements. (c) 2006 Elsevier Inc. All rights reserved.
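The agreement figures quoted above (e.g. "2.1 +/- 5.5%") are mean differences with their standard deviations for paired measurements, in the Bland-Altman style. A pure-Python sketch follows; the paired strain values are fabricated for illustration, not the study's data.

```python
# Mean difference and SD of differences for two paired measurement methods,
# the agreement summary used in the abstract. Sample values are invented.

def bland_altman(a, b):
    """Return (mean difference, SD of differences) for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / len(diffs)
    sd = (sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
    return mean, sd

echo = [18.0, 20.5, 15.2, 22.1, 17.8]   # strain by echo (%), invented
mri = [17.1, 19.0, 16.0, 20.3, 17.5]    # strain by tagged MRI (%), invented
mean_diff, sd_diff = bland_altman(echo, mri)
```

A mean difference near zero indicates little systematic bias between the methods, while the SD bounds how far individual paired readings disagree.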