45 results for Visual Divided Field
Abstract:
BACKGROUND: Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. OBJECTIVE: The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D); main effects were also analyzed. METHODS: The effects of emotional valence and visualization type, and their interaction, were analyzed through a 3x2 repeated-measures ANOVA. Post-hoc t-tests were performed under an ROI-analysis approach. RESULTS: The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. CONCLUSIONS: This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
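The post-hoc ROI comparisons mentioned above can be sketched with a paired t statistic. This is a minimal illustration only: the subject-level ROI values, subject count, and variable names below are fabricated for the sketch and are not the study's data.

```python
import numpy as np

# Fabricated per-subject mean ROI activation (arbitrary units) for the same
# stimuli viewed in 2D and in 3D -- illustrative values, not the study's data.
roi_2d = np.array([0.42, 0.51, 0.38, 0.47, 0.55, 0.44])
roi_3d = np.array([0.58, 0.63, 0.49, 0.61, 0.70, 0.52])

# Post-hoc paired t statistic: t = mean(d) / (sd(d) / sqrt(n))
d = roi_3d - roi_2d
n = d.size
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(round(t_stat, 2))
```

In a real analysis the t statistic would be compared against the t distribution with n-1 degrees of freedom, with correction for multiple ROI comparisons.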
Abstract:
Within the European project R-Fieldbus (http://www.hurray.isep.ipp.pt/activities/rfieldbus/), an industrial manufacturing field trial was developed. This field trial was conceived as a demonstration test bed for the technologies developed during the project. Because the R-Fieldbus field trial included prototype hardware devices whose purpose has since changed, and because several new technologies have emerged since the conclusion of the project, an update of the field trial was required. This document describes that update of the manufacturing field trial: its purpose and the changes and improvements introduced. Additionally, this document provides a reliable source of documentation for the equipment, configuration and software components of the manufacturing field trial.
Abstract:
Health services
Abstract:
Rationale and Objectives Computer-aided detection and diagnosis (CAD) systems have been developed over the past two decades to assist radiologists in the detection and diagnosis of lesions seen on breast imaging exams, thus providing a second opinion. Mammographic databases play an important role in the development of algorithms aimed at the detection and diagnosis of mammary lesions. However, available databases often do not take into consideration all the requirements needed for research and study purposes. This article aims to present and detail a new mammographic database. Materials and Methods Images were acquired at a breast center located in a university hospital (Centro Hospitalar de S. João [CHSJ], Breast Centre, Porto) with the permission of the Portuguese National Committee of Data Protection and the Hospital's Ethics Committee. A MammoNovation Siemens full-field digital mammography system, with a solid-state amorphous selenium detector, was used. Results The new database—INbreast—has a total of 115 cases (410 images), of which 90 cases are from women with both breasts affected (four images per case) and 25 cases are from mastectomy patients (two images per case). Several types of lesions (masses, calcifications, asymmetries, and distortions) were included. Accurate contours made by specialists are also provided in XML format. Conclusion The strength of the presented database—INbreast—lies in the fact that it was built with full-field digital mammograms (as opposed to digitized mammograms), it presents a wide variability of cases, and it is made publicly available together with precise annotations. We believe that this database can be a reference for future works centered on or related to breast cancer imaging.
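The abstract notes that lesion contours are distributed in XML. As a sketch of how such annotations might be consumed, the following parses a hypothetical annotation layout with the standard library; the element and attribute names are assumptions for illustration, not the actual INbreast schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation snippet -- the element and attribute names are
# assumptions for illustration, not the actual INbreast XML schema.
XML = """
<case id="demo">
  <lesion type="mass">
    <point x="10.5" y="20.0"/>
    <point x="12.0" y="22.5"/>
    <point x="11.0" y="24.0"/>
  </lesion>
</case>
"""

root = ET.fromstring(XML)
# Collect each lesion's contour as a list of (x, y) coordinate pairs.
contours = {
    lesion.get("type"): [(float(p.get("x")), float(p.get("y")))
                         for p in lesion.findall("point")]
    for lesion in root.findall("lesion")
}
print(contours["mass"][0])
```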
Abstract:
This work aims to contribute to the development of a multi-camera vision system for determining the localization, attitude and tracking of multiple objects, to be used in the robotics unit of INESCTEC, and results from the need for accurate external information to serve as a reference in the study, characterization and development of localization, navigation and control algorithms for several autonomous systems. Based on the characterization of the autonomous vehicles existing in the robotics unit of INESCTEC and on the analysis of their operation scenarios, the requirements for the system to be developed were surveyed. The theoretical foundations needed for the development of the system were studied, covering topics related to computer vision, estimation methods and data association for multiple-object tracking problems. An architecture was proposed for the global system that addresses the various identified requirements, allowing the use of multiple cameras and supporting the tracking of multiple objects, with or without markers. Components of the proposed architecture were implemented, validated and integrated into a system for validation, focusing on the localization and tracking of multiple objects with luminous markers based on Light-Emitting Diodes (LEDs). Namely, the following were implemented: modules for identifying points of interest in the image, techniques for grouping the points of interest of each object and matching the measurements obtained by the several cameras, a method for determining the position and attitude of the objects, and a filter for tracking multiple objects. Tests were carried out to validate and tune the implemented system, demonstrating that the solution meets the requirements, and lines of work for the continuation of the development of the global system were identified.
Abstract:
Vishnu is a tool for XSLT visual programming in Eclipse, a popular and extensible integrated development environment. Rather than writing XSLT transformations, the programmer loads or edits two document instances, a source document and its corresponding target document, and pairs texts between them by drawing lines over the documents. This form of XSLT programming is intended for simple transformations between related document types, such as HTML formatting or conversion among similar formats. Complex XSLT programs involving, for instance, recursive templates or second-order transformations are outside the scope of Vishnu. We present the architecture of Vishnu, composed of a graphical editor and a programming engine. The editor is an Eclipse plug-in where the programmer loads and edits document examples and pairs their content using graphical primitives. The programming engine receives the data collected by the editor and produces an XSLT program. The design of the engine and the process of creating an XSLT program from examples are also detailed. The process starts with the generation of an initial transformation that maps the source document to the target document. This transformation is fed to a rewrite process in which each step produces a refined version of the transformation. Finally, the transformation is simplified before being presented to the programmer for further editing.
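The pairing of source and target texts that seeds the initial transformation can be sketched naively: find target text leaves that literally occur in the source, yielding tag-to-tag correspondences. The documents and the pairing strategy below are invented for illustration and are not Vishnu's actual algorithm.

```python
import xml.etree.ElementTree as ET

# Toy source/target pair -- invented examples, not Vishnu internals.
SOURCE = "<article><title>Hello</title><body>World</body></article>"
TARGET = "<html><h1>Hello</h1><p>World</p></html>"

# Map each source text leaf to its tag, then pair target leaves whose
# text literally occurs in the source: raw material for template inference.
src_texts = {e.text: e.tag for e in ET.fromstring(SOURCE).iter() if e.text}
pairs = [(src_texts[e.text], e.tag)
         for e in ET.fromstring(TARGET).iter() if e.text in src_texts]
print(pairs)
```

A real engine would then generalize such pairs into XSLT template rules and refine them through the rewrite steps the abstract describes.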
Abstract:
The perception systems of today's autonomous robots are quite complex. The information from the various sensors, located in different parts of the robot, needs to be related to the robot's or the world's reference frame. For this, knowledge of the pose (position and rotation) between the sensors' reference frames and the robot's reference frame is a critical factor for the robot's performance. The process of calibrating these positions and translations is called extrinsic parameter calibration. This dissertation proposes the development of an autonomous calibration method for robots with directional cameras, as is the case of the ISePorto team robots. The proposed solution consists in acquiring vision, gyroscope and odometry data during a maneuver performed by the robot around a target with a known pattern. This information is then jointly processed through an Extended Kalman Filter (EKF), which estimates the parameters needed to relate the robot's sensors to its reference frame. This solution was evaluated through several tests, and the results obtained were very similar to those of the previously used manual method, with a significant increase in speed and consistency.
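The EKF-based fusion described above can be illustrated, in a much-simplified form, with a scalar Kalman filter estimating a constant extrinsic offset from repeated measurements. The model, noise variances and measurement values are invented for the sketch and are not the dissertation's actual formulation.

```python
# Minimal scalar Kalman filter estimating a constant extrinsic offset
# (e.g. a fixed angular misalignment) from repeated noisy measurements.
# All numbers are illustrative assumptions.
x, P = 0.0, 1.0          # initial estimate and its variance
Q, R = 1e-6, 0.04        # process and measurement noise variances

measurements = [0.31, 0.28, 0.33, 0.30, 0.29, 0.32]  # fabricated data
for z in measurements:
    P = P + Q                 # predict: the offset is modeled as constant
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update with the measurement residual
    P = (1 - K) * P           # variance shrinks as evidence accumulates
print(round(x, 2))
```

The full EKF generalizes this to a vector state (camera poses) and nonlinear measurement models, linearized at each step.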
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, with its methodologies and tools fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented in this workshop, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures.
The purpose of the comparison is to verify the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims to develop an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis in order to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
Considering tobacco smoke to be one of the most health-relevant indoor sources, the aim of this work was to further understand its negative impacts on human health. The specific objectives were to evaluate the levels of particulate-bound PAHs in smoking and non-smoking homes and to assess the risks associated with inhalation exposure to these compounds. The work applied the toxicity equivalency factors approach (including the estimation of lifetime lung cancer risks, WHO) and the methodology established by the USEPA (considering three different age categories) to 18 PAHs detected in inhalable (PM10) and fine (PM2.5) particles at two homes. The total concentrations of the 18 PAHs (ΣPAHs) were 17.1 and 16.6 ng m−3 in PM10 and PM2.5 at the smoking home and 7.60 and 7.16 ng m−3 in PM10 and PM2.5 at the non-smoking one. Compounds with five and six rings composed the majority of the particulate PAH content (i.e., 73 and 78 % of ΣPAHs at the smoking and non-smoking home, respectively). Target carcinogenic risks exceeded the USEPA health-based guideline at the smoking home for two different age categories. Estimated lifetime lung cancer risks largely exceeded (68–200 times) the health-based guideline levels at both homes, demonstrating that long-term exposure to PAHs at those levels would eventually cause a risk of developing cancer. The high cancer risks determined in the absence of smoking were probably caused by the contribution of PAHs from outdoor sources.
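The toxicity-equivalency approach mentioned above weights each congener's concentration by its toxic equivalency factor (TEF) relative to benzo[a]pyrene and sums the results. The sketch below shows the arithmetic only: both the TEF values and the concentrations are illustrative assumptions, not the study's measured data; published TEF scales (e.g. Nisbet and LaGoy) should be consulted for actual factors.

```python
# BaP-equivalent concentration: BaPeq = sum_i (C_i * TEF_i)
# TEFs and concentrations (ng/m3) below are illustrative assumptions.
tef = {"benzo[a]pyrene": 1.0, "dibenzo[a,h]anthracene": 1.0,
       "benzo[a]anthracene": 0.1, "benzo[b]fluoranthene": 0.1,
       "chrysene": 0.01, "pyrene": 0.001}
conc_ng_m3 = {"benzo[a]pyrene": 1.2, "dibenzo[a,h]anthracene": 0.3,
              "benzo[a]anthracene": 0.9, "benzo[b]fluoranthene": 1.1,
              "chrysene": 0.8, "pyrene": 2.0}

# Weighted sum over all congeners present in the sample.
bap_eq = sum(conc_ng_m3[c] * tef[c] for c in conc_ng_m3)
print(round(bap_eq, 3))
```

The resulting BaP-equivalent concentration is what gets multiplied by a unit risk factor to estimate lifetime lung cancer risk.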
Abstract:
Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructures, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar/distinct in some sense are placed nearby/distant on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn–Engdahl seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships present among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
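The core MDS step, turning a matrix of pairwise dissimilarities into map coordinates, can be sketched with classical (Torgerson) MDS. This is a generic illustration on a toy distance matrix, not the paper's correlation indices or seismic data.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a distance matrix into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three toy "events" on a line: pairwise distances 1, 1, 2.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D, k=1)
print(np.linalg.norm(X[0] - X[2]))  # distance between the two extremes
```

Because the toy distances are exactly Euclidean, the one-dimensional embedding reproduces them; with the paper's space-time or space-frequency indices the map is only an approximation, interpreted through its clusters.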
Abstract:
Master's in Computer Engineering and Medical Instrumentation
Abstract:
In recent years there has been an increase in the number of video-surveillance systems present in the most diverse environments, and these systems are increasingly sophisticated. Casinos are a very popular example of the use of such sophisticated systems, and several casinos nowadays use cameras for the automatic control of their gaming operations. However, there are currently several types of games for which automatic control is not yet available, one of them being the game Banca Francesa. This dissertation aims to propose a set of algorithms designed for a control and management system for the casino game Banca Francesa, with the aid of components from the area of visual computing, taking into account the most relevant existing contributions in the area by researchers and related entities. Four distinct modules are presented, whose objective is to help casinos prevent fraud during their operations, as well as to assist in the automatic collection of game results: Dice Sample Generator, a module proposed for the creation of large-scale test cases; Dice Sample Analyzer, a module proposed for the detection of game results; Dice Calibration, a module proposed for the automatic calibration of the system; and Motion Detection, a module proposed for the detection of fraud in the game. Finally, for each of the modules, a set of tests and analyses is presented in order to verify whether the concept can be proven for each of the presented proposals.
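As a toy illustration of the kind of task a dice-reading module performs (this is not the dissertation's algorithm), counting pips on a binarized die image can be phrased as counting connected components.

```python
from collections import deque

def count_pips(img):
    """Count 4-connected components of 1s in a binary image (list of lists)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    pips = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] == 1 and not seen[r][c]:
                pips += 1                      # new component found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return pips

# A tiny fabricated "die face" showing three pips on the diagonal.
face = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]
print(count_pips(face))
```

A real system would first segment and binarize the camera frame, and would need the calibration and motion-detection stages the dissertation proposes.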
Abstract:
The project conceived for this master's thesis aims at the development of a mobile application for the Android system. This application will allow bets to be placed on the Santa Casa Euromilhões game by passing the mobile device over an NFC reader, with the data recorded in an account associated with each player. The application will also have many other features, allowing game keys to be created, each player's individual card to be managed, and prizes and other game information to be consulted. The project was carried out in three phases. The first phase consisted of acquiring all the necessary material and establishing communication between the mobile device and the desktop through NFC technology. The second phase focused on the development of the mobile application and the web server, into which the various features were integrated; communication between these two systems was also established. In the third and final phase, the desktop application was created, capable of interacting with the mobile application through NFC technology, enabling communication between the two systems.
Abstract:
Professional work for the award of the Title of Specialist of the Instituto Politécnico do Porto, in the area of Design, defended on 23-02-2015.
Abstract:
This work was carried out within the scope of the Master's in Mechanical Engineering, specialization in Industrial Management, of the Instituto Superior de Engenharia do Porto. The study was developed at Continental Mabor - Indústria de Pneus S.A., analyzing the tyre Visual Inspection process. Given the current market situation, companies must be equipped with detailed and accurate data on their production processes. Installed capacity is a decisive parameter, since it directly conditions the response to customer requests. It is strongly influenced by the factory layout, so optimizing the layout is fundamental from the perspective of gaining production capacity. The report began with the determination of the Standard Time of the operation according to the REFA methodology. The current disturbances were then quantified through process audits. In this way, an installed capacity of 59,380 tyres/day was obtained. The analysis of the disturbances was developed from a cause-and-effect diagram, in which several potential causes were identified and subsequently classified by a team experienced in and knowledgeable about the process. Once the disturbances with the greatest impact were known, a layout solution was presented aimed at minimizing them. The estimated capacity gain after implementing the proposed solution is 3,000 tyres/day. This 5% gain is significant in that it is obtained without the need to acquire new equipment or additional factory floor area. This implementation is also expected to provide improvements in the subsequent production process, Uniformity, specifically in its feeding. Quantifying this improvement, following on from this work, presents itself as an opportunity for future study.
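The reported capacity gain can be checked with a quick calculation using the two figures stated in the abstract:

```python
installed = 59380   # tyres/day, installed capacity determined in the study
gain = 3000         # tyres/day, estimated gain from the proposed layout

# Relative gain as a percentage of installed capacity.
gain_pct = 100 * gain / installed
print(round(gain_pct, 1))  # about 5%, consistent with the abstract
```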