Abstract:
This work was developed within the scope of the Final Master's Dissertation of the Mechanical Engineering – Industrial Management course at the Instituto Superior de Engenharia do Porto. It was carried out at a company in the tyre industry, Continental Mabor S.A. Nowadays industry is increasingly competitive, costs and delivery deadlines are increasingly tight and quality requirements increasingly demanding, so constant improvement of the production system is essential. For this reason, the main objective of this work was to determine the current state of, and outline an improvement plan for, a machine (tread extruder no. 6) recently installed in the plant but originating from another factory of the group, using Lean Manufacturing and its associated tools. Initially, an analysis and diagnosis of the tread extrusion process on extruder no. 6 was carried out in order to assess all of its inefficiencies and to formulate an improvement plan for the inefficiency with the greatest impact on the production system. This analysis was performed over different shifts and with different work teams, so as to obtain a sample more representative of the overall reality. It showed that the main inefficiencies were setups, material conformity, dimensions and jams, among others. Since setups cause 101 minutes of downtime per shift, this disturbance was chosen as the focus of the subsequent improvement plan. To reduce changeover (setup) times, the author applied Lean Manufacturing tools, chiefly SMED. Together with SMED, other Lean Manufacturing tools were also used, namely 5S, visual management, problem solving and standardisation of the work method. After implementing all these tools, changeover times were reduced by 43% with one operator and by 71% with two operators; that is, the 40.5 minutes spent per shift on die changeovers fell to 23.13 min and 11.79 min respectively, corresponding to an annual monetary gain of €63,621 or €105,045, respectively. This work shows that the use of Lean Manufacturing tools contributes to reducing waste in the production process. It is therefore hoped that this study will be applied to extruder no. 6 and to the remaining tread extruders in the plant, and that similar studies will be carried out on machines with different functions in the near future.
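As a quick consistency check of the figures above (assuming the percentages are simple proportional reductions of the 40.5 min per-shift baseline), the reported times and percentages agree:

```python
# Consistency check of the reported SMED results (values from the abstract).
baseline_min = 40.5  # die changeover time per shift before improvements

for operators, after_min in [(1, 23.13), (2, 11.79)]:
    reduction = 1 - after_min / baseline_min
    print(f"{operators} operator(s): {after_min} min -> {reduction:.0%} reduction")

# Output:
# 1 operator(s): 23.13 min -> 43% reduction
# 2 operator(s): 11.79 min -> 71% reduction
```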
Abstract:
The distinctive characteristics of carbon fibre reinforced plastics, such as low weight and high specific strength, have broadened their use to new fields. Because these parts need to be assembled to structures, machining operations such as drilling are frequent. As a result of the inhomogeneity of composites, this operation can lead to various kinds of damage that reduce the mechanical strength of the parts in the connection area. Of these, delamination is the most severe. A proper choice of tool and cutting parameters can reduce delamination substantially. In this work the results obtained with five different tool geometries are compared. The conclusions show that the choice of an adequate drill can reduce thrust forces and, thus, delamination damage.
Abstract:
Bonded structures are generally designed so that the adhesive is essentially loaded in shear, since under this type of loading the adhesive exhibits better mechanical properties. Shear behaviour can be assessed with the adhesive in bulk form or as a thin layer in adhesive joints. The methods that allow the shear behaviour to be assessed, whether of the adhesive or of joints, are: the Iosipescu or V-notched beam shear test, the butterfly or notched plate shear test (Arcan), the torsion test, the tensile test on a single-lap joint, and the Thick Adherend Shear Test (TAST). The Arcan and Iosipescu tests, like the torsion test, can be performed on bulk adhesive specimens or on joints. The torsion test is seldom used, because applying the shear load requires complex test fixtures and equipment. The Arcan and Iosipescu tests use notched specimens and can make accurate strain measurement somewhat difficult. The tensile test on a single-lap joint is one of the most widely used methods to characterise an adhesive joint, because it is simple, the joints are easy to manufacture, and it can be performed on universal mechanical testing machines. In this test the adherends are loaded in tension, while the adhesive layer is subjected to shear combined with peel stresses. The peel stresses result from the geometry of the joint itself, in which the tensile forces are misaligned, even when shims (thickness regulators) are placed at the gripping points. The TAST test is one of the most popular for obtaining shear properties, since both the test fixtures and the manufacture of the specimens are relatively simple. This test is performed on a joint whose substrates are thick and made of steel; owing to their high stiffness, they produce a practically pure shear stress state in the adhesive. In this work, the tools, jig and substrates needed to produce TAST specimens were designed and manufactured, and tests were carried out with different adhesives.
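For context, the quantities usually extracted from a TAST test reduce to simple expressions: the average shear stress is the load divided by the bonded area, and the shear strain is the relative adherend displacement divided by the bondline thickness. A minimal sketch with invented dimensions (these are standard definitions, not the specific specimens of this work):

```python
# Illustrative reduction of TAST measurements (standard definitions; the
# dimensions below are hypothetical, not the specimens used in this work).

def shear_stress(load_n: float, length_mm: float, width_mm: float) -> float:
    """Average shear stress in the adhesive layer, tau = P / (l * b), in MPa."""
    return load_n / (length_mm * width_mm)

def shear_strain(displacement_mm: float, thickness_mm: float) -> float:
    """Engineering shear strain, gamma = d / t (relative adherend displacement
    divided by the bondline thickness)."""
    return displacement_mm / thickness_mm

# Example with invented values: 5 kN over a 25 mm x 5 mm bonded area,
# 0.05 mm relative displacement across a 0.5 mm bondline.
print(shear_stress(5000.0, 25.0, 5.0))   # 40.0 (MPa)
print(shear_strain(0.05, 0.5))           # 0.1
```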
Abstract:
The LMS plays an indisputable role in the majority of eLearning environments. This type of eLearning system is often used for presenting, solving and grading simple exercises. However, exercises from complex domains, such as computer programming, require heterogeneous systems such as evaluation engines, learning object repositories and exercise resolution environments. Coordinating a network of such disparate systems is rather complex. This work presents a standards-based approach for the coordination of a network of eLearning systems supporting the resolution of exercises. The proposed approach uses a pivot component embedded in the LMS with two roles: to provide an exercise resolution environment, and to coordinate the communication between the LMS and the other systems, which expose their functions as web services. The integration of the pivot component with the LMS relies on the Learning Tools Interoperability (LTI) specification. The approach is validated through the integration of the component with LMSs from two vendors.
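As an illustration of the pivot role described above (not the authors' implementation; all names and endpoints are hypothetical), such a component could broker a student's attempt between the LMS and an evaluation engine exposed as a web service:

```python
# Hypothetical sketch of a pivot component brokering exercise evaluation.
# The launch parameters and the evaluation-engine endpoint are placeholders.
import json
import urllib.request

EVAL_ENGINE_URL = "https://eval.example.org/evaluate"  # hypothetical service

def handle_lti_launch(params: dict) -> dict:
    """Minimal view of an LTI launch: the LMS posts context/user parameters,
    which the pivot uses to set up a resolution environment for one exercise."""
    return {"user_id": params["user_id"],
            "exercise_id": params["custom_exercise_id"]}

def evaluate(exercise_id: str, attempt_src: str) -> dict:
    """Send a student's attempt to the evaluation engine; return its report."""
    payload = json.dumps({"exercise": exercise_id,
                          "program": attempt_src}).encode()
    req = urllib.request.Request(EVAL_ENGINE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The pivot would then report the grade back to the LMS, e.g. through the
# LTI outcomes service.
```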
Abstract:
Assessment plays a vital role in learning. This is certainly the case with the assessment of computer programs, both in curricular and competitive learning. The lack of a standard – or at least a widely used format – creates a modern Babel tower of Learning Objects: assessment items that cannot be shared among automatic assessment systems. The systems whose interoperability is hindered by the lack of a common format include contest management systems, evaluation engines, repositories of learning objects and authoring tools. A pragmatic approach to remedy this problem is to create a service to convert among existing formats – a kind of translation service specialised in programming problem formats. Converting programming exercises on the fly among the most widely used formats is the purpose of BabeLO – a service to cope with the existing Babel of Learning Object formats for programming exercises. BabeLO was designed as a middleware service in a network of systems typically used in the automatic assessment of programs. It supports multiple exercise formats and can be used by: evaluation engines, to assess exercises regardless of their format; repositories, to import exercises from various sources; and authoring systems, to create exercises in multiple formats or based on exercises from other sources. This paper analyses several existing formats to highlight both their differences and their similarities. Based on this analysis, it presents an approach to extensible format conversion. It also presents the features of PExIL, the pivot format on which the conversion is based, and the function definitions of the proposed service – BabeLO. Details on the design and implementation of BabeLO, including the service API and the interfaces required to extend the conversion to a new format, are also provided. To evaluate the effectiveness and efficiency of this approach, the paper reports on two actual uses of BabeLO: to relocate exercises to a different repository, and to use an evaluation engine in a network of heterogeneous systems.
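The pivot strategy can be sketched as follows: with a pivot representation, supporting n formats requires n readers and n writers instead of n(n-1) direct converters. The registry and names below are illustrative, not the actual BabeLO API:

```python
# Illustrative pivot-based converter: each format contributes one reader and
# one writer; conversion always passes through the pivot. Names hypothetical.
from typing import Callable

class PivotExercise:
    """Stand-in for a PExIL-like pivot representation of an exercise."""
    def __init__(self, title: str, statement: str, tests: list):
        self.title, self.statement, self.tests = title, statement, tests

readers: dict[str, Callable[[bytes], PivotExercise]] = {}
writers: dict[str, Callable[[PivotExercise], bytes]] = {}

def register(fmt: str, reader, writer):
    """Extending the converter to a new format means registering one pair."""
    readers[fmt], writers[fmt] = reader, writer

def convert(data: bytes, src_fmt: str, dst_fmt: str) -> bytes:
    # Two steps through the pivot: source -> PivotExercise -> target.
    return writers[dst_fmt](readers[src_fmt](data))
```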
Abstract:
Dynamic and distributed environments are hard to model, since they suffer from unexpected changes, incomplete knowledge and conflicting perspectives, and thus call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings, or disregard new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate each belief (core or derived) with its set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, its current supporting justifications. Two major approaches to reason maintenance are used: single-context and multiple-context reasoning systems. While in single-context systems each belief is associated with the beliefs that directly generated it—the justification-based TMS (JTMS) or the logic-based TMS (LTMS)—in the multiple-context counterparts each belief is associated with the minimal set of assumptions from which it can be inferred—the assumption-based TMS (ATMS) or the multiple belief reasoner (MBR).
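A minimal sketch of the single-context (justification-based) idea may help: each belief node records the justifications that support it, and a derived belief is labelled IN when some justification has all of its antecedents IN. This is a simplification (monotonic justifications only, no contradiction handling):

```python
# Minimal JTMS-style propagation sketch (monotonic, no retraction handling).

class Node:
    def __init__(self, name, is_assumption=False):
        self.name = name
        self.is_assumption = is_assumption  # core belief fed by the problem solver
        self.justifications = []            # each is a list of antecedent Nodes
        self.status = "IN" if is_assumption else "OUT"

def justify(consequent: Node, antecedents: list):
    """Record that the antecedents jointly support the consequent."""
    consequent.justifications.append(antecedents)

def propagate(nodes: list):
    """Relabel derived beliefs until a fixed point: a node is IN if some
    justification has all of its antecedents IN."""
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n.is_assumption:
                continue
            new = "IN" if any(all(a.status == "IN" for a in j)
                              for j in n.justifications) else "OUT"
            if new != n.status:
                n.status, changed = new, True

# Example: derived belief r is justified by assumptions p and q.
p, q, r = Node("p", True), Node("q", True), Node("r")
justify(r, [p, q])
propagate([p, q, r])
print(r.status)  # IN
```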
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as a means of supporting the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields in contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarise the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures, the aim being to verify the measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform – conceptME – where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims to develop an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis in order to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual links and the connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
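As a generic illustration of a feature-based similarity measure of the kind compared in Kano's study (this sketch uses Tversky's ratio-model contrast measure, not necessarily one of the four measures analysed, and the feature sets are invented):

```python
# Tversky's ratio-model similarity over feature sets: a generic example of a
# feature-based measure from cognitive science; feature values are invented.

def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Similarity grows with shared features, shrinks with distinctive ones."""
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Two hypothetical concepts from different ontologies, described by features.
upper_secondary = {"post-primary", "ages 15-18", "general", "certificate"}
high_school     = {"post-primary", "ages 14-18", "general", "diploma"}
print(f"{tversky(upper_secondary, high_school):.2f}")  # 0.50
```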
Abstract:
This paper discusses the changes brought by the communication revolution to teaching and learning in the scope of LSP. Its aim is to provide insight into how teaching, once two-dimensional, has turned into a multidimensional system, gathering complementary resources that have transformed, in an incredibly short time, the ways we receive, share and store information, for instance as professionals, and keep in touch with our peers. The steady rise of electronic publications and the incredible boom of social and professional networks, search engines, blogs, listservs, forums, e-mail blasts, Facebook pages, YouTube contents, tweets and apps have changed the way information is conveyed. Classes ceased to be predictable and have been empowered by digital platforms and by numerous, diverse data repositories (TILDE, IATE, LINGUEE, and so many other terminological data banks) that have definitively transformed the academic world in general and tertiary education in particular. There is a bulk of information to be digested by students, who are no longer passive but responsible for, and active in, their academic outcomes. The question is whether, given this overflow, they possess the tools to select only what is accurate and important for a given subject or assignment. With the reduction in the number of course years in most degrees after the implementation of Bologna, and the shrinking of curricular contents, do students have the possibility of developing critical thinking? Both teaching and learning rely on digital resources to improve the speed at which knowledge spreads. But have those changes been effective in really promoting communication? Furthermore, with the growing number of apps that have already been developed, and will continue to appear, for learning foreign languages, for translation and for other purposes, will students still feel the need to learn these skills once they have those apps? These are some of the questions we would like to discuss in our paper.
Abstract:
In order to cater for an extended readership, crime fiction, like most popular genres, is based on the repetition of a formula allowing for the reader's immediate identification. This first domestication is followed, at the time of its translation, by a second process, which wipes out those characteristics of the source text that may come into conflict with the dominant values of the target culture. An analysis of the textual and paratextual strategies used in the English translation of José Carlos Somoza's La caverna de las ideas (2000) shows the efforts to make the novel more easily marketable in the English-speaking world through the elimination of most of the obstacles to easy readability.
Abstract:
In today’s globalized world, communication students need to be capable of communicating efficiently across the globe. At ISCAP, part of the 3rd-year syllabus of the Translation and New Technologies course is focused on culture and the need to be culturally knowledgeable. We argue that the approach to incorporating cultural aspects in HE needs to be student-centered, in order to encompass not only intercultural awareness but also the 21st-century skills students need to be successful and competent citizens. Additionally, as studies have shown, the manipulation of digital tools fosters greater student involvement in learning activities. We have adopted Digital Storytelling, a multimodal storytelling technique, to promote a personal, student-centered reflection on intercultural communication. We intend to present student and teacher perspectives on this learning experience and assess its relevance in HE contexts, based on a content analysis of the perspectives students expressed on this activity as well as a multimodal analysis of the digital stories created. A preliminary analysis of our case study has demonstrated that Digital Storytelling promotes two complementary types of reflection: on the one hand, students felt the need to reflect on their own intercultural knowledge and to create and adapt their findings in the form of a story; on the other hand, viewing others’ stories, they raised questions and demonstrated points of view otherwise ignored.
Abstract:
Demand response can play a very relevant role in the context of power systems with an intensive use of distributed energy resources, of which renewable intermittent sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. The adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to get demand response adequately used in this context, which shows the need for research in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to developing methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement to the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.
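As a rough illustration of the machine-learning step (not DemSi's actual module; the data and features are synthetic), load profiles could be clustered and a shallow classifier fitted on the cluster labels to yield readable if-then rules:

```python
# Illustrative clustering + classification pipeline for DR program rules.
# Synthetic data; the features and thresholds are not from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical features per consumer: [peak load (kW), flexible share (0-1)].
X = rng.random((200, 2)) * [10.0, 1.0]

# Step 1: group consumers into profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit a shallow tree so each cluster becomes readable if-then rules,
# e.g. "if peak_kW > 6.2 then profile 2", usable as a DR-program rule base.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["peak_kW", "flexible_share"]))
```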
Abstract:
The development of robust multilingual resources to meet the growing demands of increasingly complex intra- and inter-organisational processes is itself a complex process, requiring an increase in the quality of the ways organisations interact and share resources, for example through a greater involvement of the different stakeholders in effective and innovative forms of collaboration. It is a process in which several problems and difficulties can be identified, such as, in the case of building multilingual lexical databases, the development of an architecture capable of addressing a wide range of linguistic issues, such as polysemy, lexical patterns or translation equivalents. These issues arise in the construction both of terminological resources and of multilingual ontologies. In the case of building an ontology in different languages, the process on which we will focus our attention, the issues and the complexity increase, given the type and purposes of the semantic artefact, the elements to be localised (concepts and conceptual relations) and the context in which the localisation process occurs. With this article, we therefore intend to analyse the concept and process of localisation in the context of ontology-based knowledge management systems, paying attention to the central role of terminology in the localisation process, the different approaches and models proposed, and the language-based tools that support the implementation of the process. Finally, we will seek to establish some parallels between the traditional localisation process and the ontology localisation process, in order to better situate and define the latter.
Abstract:
The Internet as we know it was designed on top of the TCP/IP protocol stack, developed in the 1960s and 1970s using a paradigm centred on the individual addresses of each machine (known as host-centric). This paradigm was extremely successful in interconnecting machines through IP-address-based routing. Recent studies show that a significant part of today's Internet traffic is devoted to content transfer, rather than to the traditional network applications for which it was originally conceived. New communication models have therefore emerged, among them network protocols in which every machine on the network can distribute content (known as peer-to-peer networks), to improve content distribution and exchange on the Internet. Consequently, in recent years the host-centric paradigm has begun to be called into question and a new approach has appeared: information-centric networking (ICN). Given that the Internet today is basically a network for transferring content and information, why not centre its evolution on this, instead of on host-to-host communication? The Content-Centric Networking (CCN) paradigm simplifies the solution of certain security problems related to the TCP/IP architecture and is one of the main proposals within the information-centric networking approach. One of the main problems of the TCP/IP model is content protection. Currently, to guarantee the authenticity and integrity of data shared on the network, it is necessary to secure both the repository and the path the data must travel to its final destination. However, the continued ineffectiveness against denial-of-service attacks on the Internet suggests that the network infrastructure itself should provide mechanisms to mitigate them. One of the main pillars of the CCN communication paradigm is to focus on the content itself rather than on its physical location. Since its appearance in 2009, and as a consequence of evolution and adaptation, its designation has now changed to Named Network Content (NNC). In this dissertation we present an overview of the CCN architecture, its main characteristics, the components it comprises, and how its mechanisms mitigate the traditional communication and security problems. Experiments are carried out with CCNx, a prototype comprising a set of features and tools that make it possible to implement this paradigm. The objective is to critically analyse some of the existing proposals and to identify opportunities, challenges and perspectives for future research.
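To make the exchange model concrete (a didactic simplification, not CCNx code): a node answers Interests by content name from its content store, and every Data packet carries a signature binding name and payload, so integrity does not depend on where the copy came from:

```python
# Didactic simplification of CCN forwarding: Interests are satisfied by name
# from any node's content store; not the actual CCNx implementation.

class CCNNode:
    def __init__(self, name: str):
        self.name = name
        self.content_store = {}  # content name -> (data, signature): cache

    def publish(self, content_name: str, data: bytes):
        # In CCN every Data packet is signed, binding name and payload, so
        # a consumer can verify content regardless of which cache served it.
        signature = f"signed({content_name})"  # placeholder, not a real signature
        self.content_store[content_name] = (data, signature)

    def on_interest(self, content_name: str):
        """Return cached Data for this name, or None (forward the Interest)."""
        return self.content_store.get(content_name)

edge = CCNNode("edge-router")
edge.publish("/example/video/seg1", b"...")
print(edge.on_interest("/example/video/seg1") is not None)  # True: cache hit
```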
Abstract:
This paper presents the legal background that supports the dissemination of and access to documents from European institutions, namely the Parliament, the Council and the European Commission. Currently, this legal framework is complemented by a set of Internet tools, which are analysed with regard to the types of official documents and the search options available. Some statistical data on access to European information, published in the institutions' annual reports, are also evaluated. The relationship between shadow and light in the transparency of access to administrative documents, and the marketing issues of political communication, are underlined. A neo-institutional approach, the concept of reputation in public organizations and a systemic perspective are used as theoretical background.
Abstract:
In the current socioeconomic landscape, expense containment and cuts in the funding of resource-consuming secondary services lead public institutions to reformulate their processes and methods, seeking to maintain their citizens' quality of life through programmes that prove more efficient and economical. The sustained growth of mobile technologies, together with the emergence of new human-computer interaction paradigms based on sensors and context-aware systems, has created business opportunities in the development of civic-oriented applications for individuals and companies, raising their awareness of the provision of citizen-oriented services. These business opportunities prompted the project team to develop an urban problem reporting platform for municipal entities, based on its geographic information system. The main objective of this research is the conception, design and implementation of a complete solution for reporting non-urgent urban problems, distinguished from the competition by the ease with which citizens can report situations that affect their daily lives. To achieve this distinction from the rest of the market, several studies were carried out to determine innovative features to implement, as well as all the baseline functionality expected in this type of system. These studies led to the implementation of techniques for the manual demarcation of problem areas and for the automatic recognition of the type of problem reported in the images, both developed within the scope of this project. For the correct implementation of the demarcation and image recognition modules, state-of-the-art surveys of these areas were conducted, substantiating the choice of methods and technologies to integrate into the project. In this context, the various phases of the platform's development process are presented in detail, from the study and comparison of tools, methodologies and techniques for each of the concepts addressed, through the proposal of a resolution model, to the detailed description of the implemented algorithms. Finally, a performance evaluation of the developed algorithm/classifier pair is carried out, through the definition of metrics that estimate the success or failure of the object classifier. The evaluation is based on a set of test images, collected manually from public problem-reporting platforms, comparing the results obtained by the algorithm with the expected results.
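The success/failure metrics mentioned for the object classifier typically reduce to precision, recall and accuracy over the test set; a minimal sketch with invented counts:

```python
# Standard metrics for the classifier evaluation described above; the counts
# below are invented, not the project's actual results.

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)            # reported problems correctly typed
    recall = tp / (tp + fn)               # real problems actually detected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "accuracy": accuracy}

# Hypothetical confusion counts for one problem class (e.g. potholes):
print(metrics(tp=42, fp=8, fn=6, tn=44))
# {'precision': 0.84, 'recall': 0.875, 'accuracy': 0.86}
```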