24 results for software, translation, validation tool, VMNET, Wikipedia, XML
at Instituto Politécnico do Porto, Portugal
Abstract:
Translator training and assessment have made use of more and more tools and innovative strategies over the years. The goal to be achieved, however, has not changed much: translation quality. To accomplish it, the translator and all the tasks and processes he or she carries out are crucial, with pre-translation and post-translation processes being just as important as the translation itself, namely as far as autonomy and reflexive and critical skills are concerned. Finally, the need for and relevance of collaborative tasks and networks among virtual translation communities led us to the decision to implement ePortfolios as a tool to develop the required skills and extend the use of the Internet in translation. In this paper we describe a case-study of a pilot experiment on the use of ePortfolios as a translation training tool and discuss their role in the definition of a clear set of objectives and phases for the completion of each task, by helping students manage project deadlines, improving their knowledge of the construction and management of translation resources, and deepening their awareness of the concepts related to the development of ePortfolios.
Abstract:
The main purpose of this dissertation is to assess the energy performance and indoor air quality of the main building of the Parque Biológico de Vila Nova de Gaia (PBG). To that end, this study draws on the terms defined in the national legislation in force to date for this field, in particular those in the SCE, RSECE, RCCTE and RSECE-QAI. To assess energy performance, an on-site audit was first carried out, followed by a detailed dynamic simulation, with the building modelled in the DesignBuilder software. After validation of the simulated model, by checking the deviation between the energy consumption recorded in the bills and that calculated in the simulation, equal to 5.97%, it was possible to disaggregate consumption, as percentages, by the different types of use. It was also possible to determine the real and nominal energy efficiency indicators (IEE), corresponding to 29.9 and 41.3 kgoe/m².year respectively, from which it was found that the building would be exempt from implementing an energy rationalization plan (PRE) and that the energy class to be assigned is C. Nevertheless, some energy-saving measures were put forward to improve the building's energy efficiency and reduce the associated bill. Two proposals stand out: the first proposes changing the building's indoor and outdoor lighting system, leading to a reduction in electricity consumption of 47.5 MWh/year with a payback period of 3.5 years; the second concerns changing the hot-water production system for central heating, by adding a wood-fired boiler to the current system, which is expected to cut natural gas consumption by 50 MWh with a payback period of about 4 years. In the indoor air quality (IAQ) analysis, the parameters quantified were those legally required, except for the microbiological ones. For the physical parameters, temperature and relative humidity, the mean results were 19.7 °C and 66.9% respectively, slightly below the value set in the legislation (20.0 °C for the period in which the measurement was made, winter). As for the chemical parameters, the mean values recorded for the concentrations of carbon dioxide (CO2), carbon monoxide (CO), ozone (O3), formaldehyde (HCHO), suspended particulate matter (PM10) and radon were 580 ppm, 0.2 ppm, 0.06 ppm, 0.01 ppm, 0.07 mg/m³ and 196 Bq/m³ respectively, all below the maximum reference values in the regulation (984 ppm, 10.7 ppm, 0.10 ppm, 0.08 ppm, 0.15 mg/m³ and 400 Bq/m³). However, the parameter for volatile organic compounds (VOC) had a mean value of 0.84 ppm, well above the maximum reference value (0.26 ppm). In this case, a new series of measurements using chromatographic methods will have to be carried out to identify the polluting agent(s), so that the emission sources can be eliminated or mitigated.
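As a minimal illustration of the model-validation arithmetic referenced above (the billed and simulated figures below are illustrative stand-ins chosen to reproduce the reported 5.97% deviation, not values from the dissertation):

```python
# Minimal sketch of the validation check: the simulated consumption is
# accepted when its deviation from the billed consumption is small.
# Figures are illustrative, not taken from the dissertation.
def deviation_pct(billed_kwh: float, simulated_kwh: float) -> float:
    return abs(billed_kwh - simulated_kwh) / billed_kwh * 100

billed, simulated = 100_000.0, 94_030.0   # hypothetical annual kWh
print(f"deviation = {deviation_pct(billed, simulated):.2f}%")  # 5.97%
```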
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and the sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields when applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques.
In this workshop six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested (a minimal illustration of such crawling is sketched after this abstract). In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain; for that, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used to compare the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance, leading to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization; these questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice; the project aims at developing an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large, multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
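Termontospider's traversal policy is not specified in this abstract; purely as a hedged illustration of the kind of Wikipedia crawling described, the sketch below queries the public MediaWiki API for category members and applies a naive keyword filter (the filter and the choice of category are assumptions, not the tool's actual heuristics):

```python
# Hedged illustration of Wikipedia crawling for domain-specific pages.
# The endpoint and parameters are the public MediaWiki API; the relevance
# test is a simplified stand-in for real terminological heuristics.
import requests

API = "https://en.wikipedia.org/w/api.php"

def pages_in_category(category, limit=20):
    """Return page titles in a Wikipedia category via the MediaWiki API."""
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": f"Category:{category}", "cmlimit": limit,
              "format": "json"}
    data = requests.get(API, params=params, timeout=10).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def looks_domain_specific(title, keywords):
    """Naive relevance test standing in for the crawler's real heuristics."""
    return any(k.lower() in title.lower() for k in keywords)

titles = pages_in_category("Terminology")
print([t for t in titles if looks_domain_specific(t, ["term", "ontology"])])
```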
Abstract:
In this paper we describe a case-study of an experiment on how reflexivity and technology can enhance learning, using ePortfolios as a training environment to develop translation skills. Translation is today a multiskilled job and translators need to assure their clients good performance and quality, both in the language and in the technology domains. To accomplish this, all the tasks and processes the translator carries out are crucial, with pre-translation and post-translation processes being just as important as the translation itself, namely as far as autonomy and reflexive and critical skills are concerned. Finally, the need for and relevance of collaborative tasks and networks among virtual translation communities led us to the decision to implement ePortfolios as a tool to develop the required skills and extend the use of the Internet in translation, namely in the terminology management phases, for the completion of each task, by helping students manage project deadlines, improving their knowledge of the construction and management of translation resources, and deepening their awareness of the concepts related to the development and usability of ePortfolios.
Abstract:
This paper focuses on some aspects of translation that arise from blending distinct linguistic domains, English and Portuguese, when dealing with false friends in English classes with tertiary-level students, reflecting namely on: 1. the choice of a word suitable to the context in the L2; 2. the difficulties raised by choosing a word that could be misleading, by relying on a false L1 reality that will distort reality in the L2 domain; 3. the difficulty in making such distinctions due to a lack of linguistic and lexical knowledge; 4. the need to study the cause of these difficulties by working not only with peers but also with the language teacher, to develop strategies to diminish and, if possible, eradicate this type of linguistic and, above all, translation problem, by making an inventory of these types of mistakes. In relation to the first point, it is necessary to know that translation tasks involve much more than literal concepts (Ladmiral, 1975); furthermore, it is necessary and suitable to realise that the lexicon relies on significant contexts (Coseriu, 1966), which connect both domains, which at first sight do not seem compatible. In other words, although students have the impression that they have mastered the lexicon, since they have had at least seven years of foreign-language exposure, this does not mean they master the particularities engaged in such a delicate task as translation. There are some chromaticisms in words (false friends) that need to be researched and analysed later on by both students and language teachers. The reason for this state of affairs lies in the students' academic background, mainly of a general stream, which has prepared them only for knowledge of the foreign language, not for translation as a tool, which is required only when they reach the tertiary level. Besides, for their translations they rely, most of the time, on glossaries whose dominant language is Brazilian Portuguese, which is obviously quite different from the students' European Portuguese mother tongue, and even more so from English. It therefore seems necessary to use these working tools (glossaries), which serve as supplements but can bring translation problems, with caution, as we will see.
Abstract:
The design and development of simulation models and tools for Demand Response (DR) programs are becoming increasingly important for taking full advantage of DR programs. Moreover, more active participation of consumers in DR programs can help improve system reliability and decrease or defer the required investments. DemSi, a DR simulator designed and implemented by the authors of this paper, allows DR actions and schemes in distribution networks to be studied. It undertakes the technical validation of the solution using realistic network simulation based on PSCAD. DemSi considers the players involved in DR actions, and the results can be analyzed from each specific player's point of view.
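DemSi performs its technical validation in PSCAD, which the sketch below does not attempt to reproduce; it is only a toy illustration of a DR event seen from the players' point of view, with hypothetical consumers, curtailment shares and remuneration rate:

```python
# Toy DR event: shed flexible load greedily until the requested cut is met,
# then report per-player results. All names, shares and prices are invented.
from dataclasses import dataclass

@dataclass
class Consumer:
    name: str
    base_load_kw: float    # load before the DR event
    flexible_share: float  # fraction of load the consumer can shed

def run_dr_event(consumers, requested_cut_kw, price_eur_per_kwh, hours=1.0):
    remaining = requested_cut_kw
    results = {}
    for c in consumers:
        shed = min(c.base_load_kw * c.flexible_share, remaining)
        remaining -= shed
        results[c.name] = {"shed_kw": shed,
                           "payment_eur": shed * hours * price_eur_per_kwh}
    return results, remaining

consumers = [Consumer("C1", 120.0, 0.25), Consumer("C2", 80.0, 0.10)]
results, unmet = run_dr_event(consumers, requested_cut_kw=35.0,
                              price_eur_per_kwh=0.12)
print(results, "unmet kW:", unmet)   # C1 sheds 30 kW, C2 sheds 5 kW
```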
Abstract:
In this paper we present VERITAS, a tool focused on verification, one of the most important processes in knowledge maintenance during the development of Knowledge Based Systems (KBS). The verification and validation (V&V) process is part of a wider process denominated knowledge maintenance, in which an enterprise systematically gathers, organizes, shares, and analyzes knowledge to accomplish its goals and mission. The V&V process establishes whether the software requirements specifications have been correctly and completely fulfilled. The methodologies proposed in software engineering have proved inadequate for KBS validation and verification, since KBS present some particular characteristics. VERITAS is an automatic tool developed for KBS verification which is able to detect a large number of knowledge anomalies. It addresses many relevant aspects considered in real applications, such as the usage of rule-triggering selection mechanisms and temporal reasoning.
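The abstract does not detail VERITAS's detection algorithms; as a hedged illustration of one classic knowledge anomaly such a verifier could flag, circularity, the sketch below finds circular rule chains in a toy rule base (the rule format is an assumption, not VERITAS's own):

```python
# Illustrative check for one classic KBS anomaly, circularity, over a toy
# rule format (IF premise THEN conclusion). Not VERITAS's actual method.
rules = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

def find_circular_chains(rules):
    graph = {}
    for premise, conclusion in rules:
        graph.setdefault(premise, []).append(conclusion)

    cycles, seen = [], set()

    def dfs(node, path):
        if node in path:                     # node revisited on current path
            cycle = path[path.index(node):] + [node]
            if frozenset(cycle) not in seen:
                seen.add(frozenset(cycle))
                cycles.append(cycle)
            return
        for nxt in graph.get(node, []):
            dfs(nxt, path + [node])

    for start in graph:
        dfs(start, [])
    return cycles

print(find_circular_chains(rules))           # [['a', 'b', 'c', 'a']]
```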
Abstract:
This paper presents the SmartClean tool. The purpose of this tool is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. Instead, an execution sequence was developed, and the problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool, and consequently of the architecture, is demonstrated through the presentation of a case study. Although SmartClean has cleaning capabilities at all other levels, only those related to the attribute value level are described in this paper.
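As a hedged illustration of a fixed execution sequence at the attribute value level (the operations and their order below are hypothetical, not SmartClean's actual sequence):

```python
# Attribute-value-level cleaning applied in a fixed sequence, so the user
# never specifies execution order. Operations and rules are hypothetical.
def strip_whitespace(value):
    return value.strip() if isinstance(value, str) else value

def normalize_missing(value, markers=("", "n/a", "N/A", "?")):
    return None if value in markers else value

def enforce_domain(value, domain):
    return value if value in domain else None  # domain violation -> missing

CLEANING_SEQUENCE = (strip_whitespace, normalize_missing)  # fixed order

def clean_attribute(values, domain=None):
    cleaned = []
    for v in values:
        for op in CLEANING_SEQUENCE:
            v = op(v)
        if domain is not None:
            v = enforce_domain(v, domain)
        cleaned.append(v)
    return cleaned

print(clean_attribute([" M ", "F", "n/a", "x"], domain={"M", "F"}))
# ['M', 'F', None, None]
```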
Abstract:
Master's in Electrical Engineering – Electrical Power Systems
Abstract:
Introduction: Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate an increasingly wider use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to the amount and type of possible artefacts, some of which seriously affect the overall quality and clinical utility of the obtained data. One of the most inconvenient, yet relatively frequent (20% of cases), artefacts is related to patient motion during image acquisition. Mostly, in those situations, the specific data are evaluated and a decision is made between (A) accepting the results as they are, considering that the "noise" so introduced does not seriously affect the final clinical information, or (B) repeating the acquisition process. Another possibility is to use the motion correction software provided within the software package included in any current gamma camera. The aim of this study is to compare the quality of the final images obtained after the application of motion correction software and after the repetition of image acquisition. Material and Methods: Thirty cases of MPI affected by motion artefacts and repeated were used. A group of three independent expert Nuclear Medicine clinicians (blinded to the differences of origin) was invited to evaluate the 30 sets of three images, one set for each patient: (A) the original image, motion uncorrected; (B) the original image, motion corrected; and (C) the second acquisition image, without motion. The results were statistically analysed. Results and Conclusion: The results demonstrate that the use of motion correction software is useful essentially when the amplitude of movement is not too large (this specific quantification proved hard to define precisely, due to discrepancies between clinicians and other factors, namely between one camera brand and another); when that is not the case and the amplitude of movement is too large, the percentage of agreement between clinicians is much higher and the repetition of the examination is unanimously considered indispensable.
Abstract:
World population growth, particularly in emerging countries such as China and India, has proved to be an additional problem as regards the difficulties associated with world energy consumption, since this situation unequivocally limits the access of these millions of people to the electricity needed for basic survival. One of the many ways of meeting this need is being developed through the use of renewable resources as energy sources. Wherever in the world we are, these energy sources are abundant, inexhaustible and free. The problem lies in how these renewable resources are managed against the load demands of the installations. Hybrid systems can be used to produce energy anywhere in the world. Historically, this type of system was applied in isolated locations, but nowadays they can be connected directly to the grid, allowing energy to be sold. It was in this context that this thesis was developed, with the objective of providing a software tool capable of calculating the profitability of a hybrid system, either grid-connected or stand-alone. However, the complexity of this problem is very high, since there is an extensive range of characteristics and distinct equipment that can be adopted. The application therefore had to be limited and restricted to the available data, so as to be generic while still having practical applicability. The goal of the developed tool is to present immediately the implementation costs that a hybrid system may entail, depending on only three distinct variables. The first variable is the installation site of the system; the second is the type of connection (stand-alone or grid-connected); and the last is the cost of the equipment (wind, solar and remaining components) to be introduced. After these data are entered, the application presents estimated Payback and NPV (VAL) figures.
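As a minimal sketch of the two indicators the tool reports, simple payback and NPV; the discount rate, investment and yearly savings below are assumed figures, not values from the thesis:

```python
# Standard profitability indicators for an energy project. All figures
# here are assumptions chosen only to exercise the two functions.
def simple_payback(investment: float, annual_savings: float) -> float:
    return investment / annual_savings                       # years

def npv(rate: float, investment: float, cash_flows: list[float]) -> float:
    return -investment + sum(cf / (1 + rate) ** t
                             for t, cf in enumerate(cash_flows, start=1))

invest, yearly = 25_000.0, 4_000.0
print(simple_payback(invest, yearly))        # 6.25 years
print(npv(0.06, invest, [yearly] * 15))      # positive -> project pays off
```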
Abstract:
People do not learn only in formal educational institutions, but also throughout their lives, from their experiences, conversations, observations of others, exploration of the Internet, meetings and conferences, chance encounters, and so on. However, this informal and non-formal learning can easily remain largely invisible, making it hard for peers and employers to recognize or act upon it. The TRAILER project aims to make this learning visible so that it can benefit both the individual and the organization. The proposed demonstration will show a software solution that (i) helps learners capture, organize and classify a wide range of 'informal' learning taking place in their lives, and (ii) assists the organization in recognizing this learning and using it to help manage human resources (benefiting both parties). This software tool has recently been used in two phases of pilot studies, run in four different European countries.
Abstract:
Master's in Informatics Engineering, Area of Specialization in Architectures, Systems and Networks
Abstract:
XML Schema is one of the most widely used specifications for defining types of XML documents. It provides an extensive set of primitive data types, ways to extend and reuse definitions, and an XML syntax that simplifies automatic manipulation. However, many features that make XML Schema Definitions (XSD) so interesting also make them rather cumbersome to read. Several tools to visualize and browse schema definitions have been proposed to cope with this issue. The novel approach proposed in this paper is to base XSD visualization and navigation on the XML document itself, using solely the web browser, without requiring a pre-processing step or an intermediate representation. We present the design and implementation of a web-based XML Schema browser called schem@Doc that operates over the XSD file itself. With this approach, XSD visualization is synchronized with the source file and always reflects its current state. This tool fits well into the schema development process and is easy to integrate in web repositories containing large numbers of XSD files.
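schem@Doc itself runs entirely in the web browser over the XSD file; the sketch below only illustrates the underlying observation that an XSD is itself an XML document that can be traversed directly, without an intermediate representation (the schema file name is hypothetical):

```python
# An XSD is itself an XML document, so it can be navigated directly
# without any pre-processing step or intermediate representation.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"   # XML Schema namespace

def list_global_declarations(xsd_path):
    root = ET.parse(xsd_path).getroot()
    for child in root:                       # top-level (global) declarations
        if child.tag in (XS + "element", XS + "complexType", XS + "simpleType"):
            kind = child.tag[len(XS):]
            print(f"{kind}: {child.get('name')} (type={child.get('type', '-')})")

# Hypothetical schema file, not one from the paper:
list_global_declarations("library.xsd")
```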
Abstract:
Scientific article currently available as Early View (Online Version of Record, published before inclusion in an issue)