Abstract:
Introduction: Anxiety is a common problem in primary care and specialty medical settings. Treating an anxious patient takes more time and adds stress for staff. Unrecognised anxiety may lead to exam repetition and image artifacts, and may hinder scan performance. Reducing patient anxiety at the outset is probably the most useful means of minimizing artifactual FDG uptake, in both brown fat and skeletal muscle, as well as patient movement and claustrophobia. The aim of the study was to examine the effects of information giving on the anxiety levels of patients who are to undergo a PET/CT scan, and whether the patient experience is enhanced by the creation of a guideline. Methodology: Two hundred and thirty-two patients were given two questionnaires, before and after the procedure, to determine their prior knowledge, concerns, expectations and experiences of the study. Verbal information was given by one of the technologists after the completion of the first questionnaire. Results: Our results show that the main causes of anxiety in patients undergoing a PET/CT are fear of the procedure itself and fear of the results. The patients who suffered from greater anxiety were those scanned during the initial stage of a disease. No significant differences were found between pre-procedural and post-procedural anxiety levels. Findings with regard to satisfaction show that the amount of information given before the procedure does not change anxiety levels and therefore does not influence patient satisfaction. Conclusions: The performance of a PET/CT scan is an important and statistically significant generator of anxiety. PET/CT patients are often poorly informed and present with a range of anxieties that may ultimately affect examination quality. The creation of a guideline may reduce the stress of not knowing what will happen and the anxiety created, and may increase patient satisfaction with the experience of having a PET/CT scan.
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the pressure for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations expressed in more than one language, ontologies in particular. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences, applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that purpose, datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) were used for the comparative analysis of the similarity measures.
The comparison aims to verify the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the fourth paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, even more complex when the conceptualization occurs in a multilingual setting.
To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt to subject librarians, employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
Abstract:
The need for better adaptation of networks to transported flows has led to research on new approaches such as content-aware networks and network-aware applications. In parallel, recent developments in multimedia and content-oriented services and applications, such as IPTV, video streaming, video on demand, and Internet TV, have reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management-driven hybrid multicast solution that is multi-domain and media-oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content-aware network and network-aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content-aware networks, spanning a single IP domain or multiple IP domains, customized to the type of content to be transported while fulfilling the quality-of-service requirements of the service provider.
Abstract:
In today’s globalized world, communication students need to be capable of communicating efficiently across the globe. At ISCAP, part of the 3rd-year syllabus of the Translation and New Technologies course is focused on culture and the need to be culturally knowledgeable. We argue that the approach to incorporating cultural aspects in Higher Education (HE) needs to be student-centered, in order to encompass not only intercultural awareness but also the 21st-century skills students need to be successful and competent citizens. Additionally, as studies have shown, the manipulation of digital tools fosters greater student involvement in learning activities. We have adopted Digital Storytelling, a multimodal storytelling technique, to promote a personal, student-centered reflection on intercultural communication. We intend to present student and teacher perspectives on this learning experience and assess its relevance in HE contexts, based on the content analysis of students' expressed perspectives on this activity as well as a multimodal analysis of the digital stories created. A preliminary analysis of our case study has demonstrated that Digital Storytelling fosters two complementary types of reflection: on the one hand, students felt the need to reflect on their own intercultural knowledge and to create and adapt their findings in the form of a story; on the other hand, viewing others' stories raised questions and revealed points of view otherwise ignored.
Abstract:
Phenylketonuria is an inborn error of metabolism involving, in most cases, deficient activity of phenylalanine hydroxylase. Neonatal diagnosis and a prompt special diet (low-phenylalanine and natural-protein-restricted diets) are essential to the treatment. The lack of data concerning the phenylalanine content of processed foodstuffs is an additional limitation for an already very restrictive diet. Our goals were to quantify protein (Kjeldahl method) and the content of 18 amino acids (HPLC/fluorescence) in 16 dishes specifically conceived for phenylketonuric patients, and to compare the most relevant results with those of several international food composition databases. As might be expected, all the meals contained low protein levels (0.67–3.15 g/100 g), with the highest occurring in boiled rice and potatoes. These foods also contained the highest amounts of phenylalanine (158.51 and 62.65 mg/100 g, respectively). In contrast to the other amino acids, it was possible to predict phenylalanine content based on protein alone. Slight deviations were observed when comparing results with the different food composition databases.
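The "predict phenylalanine from protein alone" claim amounts to a simple linear relationship. As a rough sketch of that idea only, not the authors' actual model, the following fits an ordinary least-squares line to hypothetical (protein, phenylalanine) pairs; the numbers are invented for illustration, not the measured values from the study:

```python
# Hypothetical (protein g/100 g, phenylalanine mg/100 g) pairs, for
# illustration only; these are NOT the measured values from the study.
samples = [(0.7, 30.0), (1.2, 55.0), (1.9, 90.0), (2.6, 125.0), (3.1, 150.0)]

def least_squares_fit(points):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

slope, intercept = least_squares_fit(samples)

def predict_phe(protein_g):
    """Estimate phenylalanine (mg/100 g) from protein (g/100 g)."""
    return slope * protein_g + intercept
```

For real dishes, the residuals of such a fit would indicate how safe a protein-only estimate is for diet planning, which is why the abstract's point that this works for phenylalanine but not for the other amino acids matters.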
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for obtaining the Master's degree in Theatre, Performing Arts – Specialization in Acting.
Abstract:
The expansion of Digital Television and the convergence between conventional broadcasting and television over IP contributed to a gradual increase in the number of available channels and in on-demand video content. Moreover, the dissemination of mobile devices such as laptops, smartphones and tablets in everyday activities shifted the traditional television viewing paradigm from the couch to everywhere, anytime, from any device. Although this new scenario enables a great improvement in viewing experiences, it also brings new challenges, given the information overload the viewer faces. Recommendation systems stand out as a possible solution to help a viewer select the content that best fits his/her preferences. This paper describes a web-based system that helps the user navigate broadcast and online television content by implementing recommendations based on collaborative and content-based filtering. The algorithms developed estimate the similarity between items and users and predict the rating a user would assign to a particular item (television program, movie, etc.). To enable interoperability between different systems, program characteristics (title, genre, actors, etc.) are stored according to the TV-Anytime standard. The recommendations produced are presented through a Web Application that allows the user to interact with the system based on the obtained recommendations.
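The item-similarity and rating-prediction steps described above can be sketched in a few lines. This is a generic item-based collaborative-filtering illustration, not the paper's actual algorithm; the ratings data and names are invented for the example:

```python
import math

# Hypothetical ratings matrix: user -> {item: rating}; purely illustrative.
ratings = {
    "alice": {"news": 4.0, "movie": 5.0, "sports": 1.0},
    "bob":   {"news": 5.0, "movie": 4.0},
    "carol": {"movie": 2.0, "sports": 5.0},
}

def item_vector(item):
    """Ratings for one item, indexed by the users who rated it."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine_similarity(a, b):
    """Cosine similarity between two items over users who rated both."""
    va, vb = item_vector(a), item_vector(b)
    common = set(va) & set(vb)
    if not common:
        return 0.0
    dot = sum(va[u] * vb[u] for u in common)
    na = math.sqrt(sum(va[u] ** 2 for u in common))
    nb = math.sqrt(sum(vb[u] ** 2 for u in common))
    return dot / (na * nb)

def predict(user, item):
    """Predicted rating: similarity-weighted average of the user's other ratings."""
    num = den = 0.0
    for other, rating in ratings[user].items():
        if other == item:
            continue
        sim = cosine_similarity(item, other)
        num += sim * rating
        den += abs(sim)
    return num / den if den else 0.0
```

A content-based variant would compute the same kind of similarity over TV-Anytime metadata fields (genre, actors, etc.) instead of co-ratings.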
Abstract:
Internship report presented to the Escola Superior de Comunicação Social as part of the requirements for obtaining the Master's degree in Advertising and Marketing.
Abstract:
As e-learning gradually evolved, many specialized and disparate systems appeared to fulfil the needs of teachers and students, such as repositories of learning objects, authoring tools, intelligent tutors and automatic evaluators. This heterogeneity raises interoperability issues, giving the standardization of content an important role in e-learning. This article presents a survey of current e-learning content aggregation standards, focusing on their internal organization and packaging. This study is part of an effort to choose the most suitable specifications and standards for an e-learning framework called Ensemble, defined as a conceptual tool to organize a network of e-learning systems and services for domains with complex evaluation.
Abstract:
Project work presented to the Instituto de Contabilidade e Administração do Porto for obtaining the Master's degree in Digital Marketing, under the supervision of Doutor Freitas Santos.
Abstract:
Seismic ambient noise tomography is applied to central and southern Mozambique, located at the tip of the East African Rift (EAR). The deployment of the MOZART seismic network, with a total of 30 broad-band stations continuously recording for 26 months, allowed us to carry out the first tomographic study of the crust under this region, which until now remained largely unexplored at this scale. From cross-correlations extracted from coherent noise we obtained Rayleigh wave group velocity dispersion curves for the period range 5–40 s. These dispersion relations were inverted to produce group velocity maps, and 1-D shear wave velocity profiles at selected points. High group velocities are observed at all periods on the eastern edge of the Kaapvaal and Zimbabwe cratons, in agreement with the findings of previous studies. Further east, a pronounced slow anomaly is observed in central and southern Mozambique, where the rifting between southern Africa and Antarctica created a passive margin in the Mesozoic, and further rifting is currently happening as a result of the southward propagation of the EAR. In this study, we also addressed the still debated question concerning the nature of the crust (continental versus oceanic) in the Mozambique Coastal Plains (MCP). Our data do not support previous suggestions that the MCP are floored by oceanic crust, since a shallow Moho could not be detected, and we discuss an alternative explanation for its ocean-like magnetic signature. Our velocity maps suggest that the crystalline basement of the Zimbabwe craton may extend further east, well into Mozambique, underneath the sediment cover, contrary to what is usually assumed, while further south the Kaapvaal craton passes into slow rifted crust at the Lebombo monocline, as expected.
The sharp passage from fast crust to slow crust on the northern part of the study area coincides with the seismically active NNE-SSW Urema rift, while further south the Mazenga graben adopts an N-S direction parallel to the eastern limit of the Kaapvaal craton. We conclude that these two extensional structures herald the southward continuation of the EAR, and infer a structural control of the transition between the two types of crust on the ongoing deformation.
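The core idea behind the cross-correlations mentioned above is that correlating long ambient-noise records from two stations yields an estimate of the inter-station Green's function, whose peak lag reflects the travel time between the stations. A minimal synthetic sketch of that principle follows; it is not the study's actual processing, which involves months of records, filtering, and stacking:

```python
import random

# Synthetic illustration: the same random noise field recorded at two
# stations, offset by a travel-time delay (all values are hypothetical).
random.seed(0)
n, delay = 2000, 37  # record length and inter-station delay, in samples
source = [random.gauss(0.0, 1.0) for _ in range(n + delay)]
station_a = source[delay:]  # station A's record leads station B's by `delay`
station_b = source[:n]

def cross_correlate(a, b, max_lag):
    """Cross-correlation of record a against record b for lags 0..max_lag."""
    return [sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
            for lag in range(max_lag + 1)]

cc = cross_correlate(station_a, station_b, 100)
best_lag = max(range(len(cc)), key=lambda k: cc[k])
# The lag of the correlation peak recovers the relative delay between the
# two records, i.e. the apparent inter-station travel time.
```

In practice, repeating this for every station pair and every period band gives the travel times from which the group velocity maps are inverted.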
Abstract:
Project work for obtaining the Master's degree in Civil Engineering.