975 results for semantic content annotation
Abstract:
The aim of this article is to show how it is possible to integrate stories and ICT in Content and Language Integrated Learning (CLIL) for English as a foreign language (EFL) learning in bilingual schools. Two Units of Work are presented. One, for the second year of Primary, is based on a Science topic, ‘Materials’. The story used is ‘The three little pigs’ and the computer program ‘JClic’. The other, for the sixth year of Primary, is based on a Science and Arts topic; the story used is ‘Charlotte’s Web’ and the computer program ‘Atenex’.
Abstract:
With the advent of wearable sensing and mobile technologies, biosignals have seen a growing number of application areas, leading to the collection of large volumes of data. One of the difficulties in dealing with these data sets, and in the development of automated machine learning systems that use them as input, is the lack of reliable ground truth information. In this paper we present a new web-based platform for the visualization, retrieval and annotation of biosignals by non-technical users, aimed at improving the process of ground truth collection for biomedical applications. Moreover, a novel extendable and scalable data representation model and persistency framework is presented. The results of an experimental evaluation with prospective users further confirmed the potential of the presented framework.
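As a rough illustration of the kind of extendable data representation such a platform needs, the sketch below models a biosignal record with user-contributed annotation spans; the class names, fields and the `labels_between` helper are assumptions made for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Annotation:
    """A label attached by a (non-technical) user to a span of a biosignal."""
    annotator: str             # user who created the annotation
    label: str                 # e.g. "artifact", "R-peak", "arrhythmia"
    start_s: float             # span start, seconds from recording onset
    end_s: float               # span end, seconds from recording onset
    comment: Optional[str] = None

@dataclass
class SignalRecord:
    """One biosignal channel plus the ground-truth annotations collected for it."""
    subject_id: str
    modality: str              # e.g. "ECG", "EDA", "ACC"
    sampling_rate_hz: float
    samples: list[float] = field(default_factory=list)
    annotations: list[Annotation] = field(default_factory=list)

    def labels_between(self, t0: float, t1: float) -> list[Annotation]:
        """Retrieve annotations overlapping the window [t0, t1]."""
        return [a for a in self.annotations if a.end_s >= t0 and a.start_s <= t1]
```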
Abstract:
OBJECTIVE To evaluate the impact of consuming ultra-processed foods on the micronutrient content of the Brazilian population’s diet. METHODS This cross-sectional study was performed using data on individual food consumption from a module of the 2008-2009 Brazilian Household Budget Survey. A representative sample of the Brazilian population aged 10 years or over was assessed (n = 32,898). Food consumption data were collected through two 24-hour food records. Linear regression models were used to assess the association between the nutrient content of the diet and the quintiles of ultra-processed food consumption, both crude and adjusted for family income per capita. RESULTS Mean daily energy intake per capita was 1,866 kcal, with 69.5% coming from natural or minimally processed foods, 9.0% from processed foods and 21.5% from ultra-processed foods. For sixteen of the seventeen evaluated micronutrients, content was lower in the fraction of the diet composed of ultra-processed foods than in the fraction composed of natural or minimally processed foods. For 10 micronutrients, the content in ultra-processed foods did not reach half the level observed in natural or minimally processed foods. Higher consumption of ultra-processed foods was inversely and significantly associated with the content of vitamin B12, vitamin D, vitamin E, niacin, pyridoxine, copper, iron, phosphorus, magnesium, selenium and zinc. The reverse was observed only for calcium, thiamin and riboflavin. CONCLUSIONS The findings of this study highlight that reducing the consumption of ultra-processed foods is a natural way to promote healthy eating in Brazil and is therefore in line with the recommendation of the Guia Alimentar para a População Brasileira (Dietary Guidelines for the Brazilian Population) to avoid these foods.
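The crude and income-adjusted linear models described in METHODS can be sketched as follows; the data frame, variable names and the use of `statsmodels` are assumptions for illustration, not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-person data: micronutrient density of the diet,
# quintile of ultra-processed food consumption (1..5), and income.
df = pd.DataFrame({
    "zinc_mg_per_1000kcal": [5.2, 4.9, 4.6, 4.1, 3.8, 5.0, 4.4, 3.9],
    "upf_quintile":         [1,   2,   3,   4,   5,   1,   3,   5],
    "income_per_capita":    [900, 750, 600, 550, 400, 820, 640, 380],
})

# Crude model: nutrient content vs. quintile of ultra-processed intake.
crude = smf.ols("zinc_mg_per_1000kcal ~ upf_quintile", data=df).fit()

# Adjusted model: additionally controls for family income per capita.
adjusted = smf.ols(
    "zinc_mg_per_1000kcal ~ upf_quintile + income_per_capita", data=df
).fit()

# A negative coefficient on upf_quintile indicates an inverse association.
print(crude.params["upf_quintile"], adjusted.params["upf_quintile"])
```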
Abstract:
European Master in Multimedia and Audiovisual Administration
Abstract:
OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, cross-sectional, non-experimental validity and reliability study. Four translators, three experts and 92 Chilean children aged five to 10 years, students at a primary school in Santiago, Chile, participated. The committee of experts carried out translation, back-translation and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index, in 2013. In addition, a pilot implementation was carried out to determine the reliability of the test in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when the bat was replaced with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied to the Chilean population. The reliability of this test is within the appropriate parameters, and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries.
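A minimal sketch of the two reliability analyses named in METHODS, assuming hypothetical scores from two raters; `pingouin` is one common way to compute the intraclass correlation coefficient, and the Bland-Altman limits of agreement follow the usual bias plus or minus 1.96 SD formula.

```python
import pandas as pd
import pingouin as pg  # assumed available; provides intraclass_corr

# Hypothetical TGMD-2 scores for the same eight children from two raters.
scores = pd.DataFrame({
    "child": list(range(8)) * 2,
    "rater": ["A"] * 8 + ["B"] * 8,
    "score": [31, 28, 35, 40, 27, 33, 38, 30,
              30, 29, 34, 41, 26, 34, 37, 31],
})

# Inter-rater reliability via the intraclass correlation coefficient.
icc = pg.intraclass_corr(data=scores, targets="child",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
a = scores.query("rater == 'A'")["score"].to_numpy()
b = scores.query("rater == 'B'")["score"].to_numpy()
diff = a - b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.2f}, limits of agreement=({bias - loa:.2f}, {bias + loa:.2f})")
```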
Abstract:
With the growth of the information available on the Web and in personal and professional archives, driven both by the increase in data storage capacity and the exponential increase in computer processing power, as well as by easy access to that same information, an enormous flow of production and distribution of audiovisual content has been generated. However, although mechanisms exist for indexing this content so that it can be searched and accessed, they usually involve great algorithmic complexity or require hiring highly qualified staff to verify and categorize the content. This dissertation studies solutions for the collaborative annotation of content and develops a tool that facilitates the annotation of an archive of audiovisual content. The implemented approach is based on the concept of “Games With a Purpose” (GWAP) and allows users to create tags (metadata in the form of keywords) in order to assign meaning to an object being categorized. Thus, as a first objective, a game was developed whose purpose is not only entertainment but also the creation of audiovisual annotations for the videos presented to the player, thereby improving their indexing and categorization. The developed application also allows the visualization of the categorized content and metadata and, to create one more informative element, allows a “like” to be attached to a specific time instant of the video. The great advantage of the developed application lies in the fact that it adds annotations to specific points of the video, namely to its time instants. This is a new feature, not available in other applications for the collaborative annotation of audiovisual content. As a result, access to the content becomes considerably more effective, since it is possible to reach, through search, specific points inside a video.
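A minimal sketch of the time-anchored annotation model the dissertation describes, i.e. tags and likes attached to specific instants of a video; the class and field names are assumptions for illustration, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TimedTag:
    """A keyword attached to a specific instant of a video (seconds from start)."""
    time_s: float
    tag: str
    player: str          # player who contributed the tag in the GWAP round

@dataclass
class VideoRecord:
    video_id: str
    tags: list[TimedTag] = field(default_factory=list)
    likes: list[float] = field(default_factory=list)   # liked time instants

    def search(self, keyword: str) -> list[float]:
        """Return the time instants where `keyword` was tagged, enabling
        search to jump to specific points inside the video."""
        return sorted(t.time_s for t in self.tags
                      if t.tag.lower() == keyword.lower())

# Usage: two players agreeing on a tag at roughly the same instant could be
# taken as validation of that tag (a common GWAP scoring rule).
v = VideoRecord("clip-42")
v.tags.append(TimedTag(12.5, "goal", "alice"))
v.tags.append(TimedTag(12.8, "goal", "bob"))
print(v.search("goal"))   # -> [12.5, 12.8]
```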
Abstract:
As the variety of mobile devices connected to the Internet grows, there is a corresponding increase in the need to deliver content tailored to their heterogeneous characteristics. At the same time, we have witnessed the growth of e-learning in universities through the adoption of electronic platforms and standards. Not surprisingly, the concept of mLearning (Mobile Learning) appeared in recent years, easing the constraint of learning location through the mobility of portable devices. However, this large number and variety of Web-enabled devices poses several challenges for Web content creators who want to automatically obtain the delivery context and adapt the content to client mobile devices. In this paper we analyze several approaches to defining the delivery context and present an architecture for delivering uniform mLearning content to mobile devices, named eduMCA (Educational Mobile Content Adaptation). With the eduMCA system, Web authors will not need to create specialized pages for each kind of device, since the content is automatically transformed to adapt to the capabilities of any mobile device, from WAP to XHTML MP-compliant devices.
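A toy sketch of delivery-context-driven adaptation in the spirit of eduMCA; the capability table, detection rule and markup templates below are invented for illustration (a real system consults a full capabilities repository rather than naive User-Agent sniffing).

```python
# Invented device profiles keyed by a coarse device class.
DEVICE_PROFILES = {
    "wap":      {"markup": "WML",      "max_image_px": 96},
    "xhtml_mp": {"markup": "XHTML MP", "max_image_px": 240},
}

def detect_profile(user_agent: str) -> str:
    """Crude User-Agent sniffing, for illustration only."""
    return "wap" if "WAP" in user_agent.upper() else "xhtml_mp"

def adapt_lesson(title: str, body: str, user_agent: str) -> str:
    """Render the same lesson as WML or XHTML depending on the device."""
    profile = DEVICE_PROFILES[detect_profile(user_agent)]
    if profile["markup"] == "WML":
        return f'<wml><card title="{title}"><p>{body}</p></card></wml>'
    return (f"<html><head><title>{title}</title></head>"
            f"<body><p>{body}</p></body></html>")

print(adapt_lesson("Unit 1", "Introduction to mLearning",
                   "Nokia6230/2.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 WAP"))
```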
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Recent studies of mobile Web trends show a continuous explosion of mobile-friendly content. However, the increasing number and heterogeneity of mobile devices poses several challenges for Web programmers who want to automatically obtain the delivery context and adapt content to mobile devices. In this process, the device detection phase plays an important role, since inaccurate detection can result in a poor mobile experience for the end user. In this paper we compare the most promising approaches to mobile device detection. Based on this study, we present an architecture for a system to detect and deliver uniform m-Learning content to students in a Higher School. We focus mainly on the device capabilities repository, which is manageable and accessible through an API. We detail the structure of the capabilities XML Schema that formalizes the data within the device capabilities XML repository, and the REST Web Service API for selecting the device capabilities data corresponding to a specific request. Finally, we validate our approach by presenting access and usage statistics for the mobile web interface of the proposed system, such as hits and new visitors, mobile platforms, average time on site and bounce rate.
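A minimal sketch of selecting device capabilities from an XML repository, mimicking what the REST Web Service API would serve for a given request; the XML structure below is invented for illustration and is not the capabilities XML Schema the paper formalizes.

```python
import xml.etree.ElementTree as ET

# Invented stand-in for the device capabilities XML repository.
REPOSITORY = ET.fromstring("""
<devices>
  <device id="nokia-n95">
    <userAgentPattern>NokiaN95</userAgentPattern>
    <screenWidth>240</screenWidth>
    <markup>XHTML MP</markup>
  </device>
  <device id="generic-wap">
    <userAgentPattern>WAP</userAgentPattern>
    <screenWidth>96</screenWidth>
    <markup>WML</markup>
  </device>
</devices>
""")

def select_capabilities(user_agent: str) -> dict:
    """Return the capabilities of the first device whose pattern matches,
    i.e. what a REST endpoint would serve for this request."""
    for device in REPOSITORY.iter("device"):
        if device.findtext("userAgentPattern") in user_agent:
            return {child.tag: child.text for child in device}
    return {}

print(select_capabilities("Mozilla/4.0 (NokiaN95; Symbian)"))
```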
Abstract:
The concept of the Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we identified two interoperability levels: content and communication. A common content format is the backbone of interoperability and is the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced in the CC schema, and we explore the use of IMS Learning Tools Interoperability (LTI) to allow remote tools and content to be integrated into a Learning Management System (LMS). To test the applicability of IMS CC to automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated from an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge with the IMS CC online validator to verify its conformance with the IMS CC specification.
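A bare-bones sketch of packaging content as a zipped cartridge with an `imsmanifest.xml`, to make the packaging idea concrete; the manifest skeleton below omits almost everything the real IMS CC schema requires (the namespace shown is an assumption), so it would not pass the online validator as-is.

```python
import zipfile

# Skeleton manifest, for illustration only; not a conformant IMS CC manifest.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="pexil-exercise-001"
          xmlns="http://www.imsglobal.org/xsd/imsccv1p1/imscp_v1p1">
  <organizations/>
  <resources>
    <resource identifier="ex1" type="webcontent" href="exercise.html">
      <file href="exercise.html"/>
    </resource>
  </resources>
</manifest>
"""

# Package the manifest and the exercise page into a .imscc archive.
with zipfile.ZipFile("exercise.imscc", "w") as cc:
    cc.writestr("imsmanifest.xml", MANIFEST)
    cc.writestr("exercise.html",
                "<html><body><p>Write a function that ...</p></body></html>")
```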
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach to computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site devoted to alternative music, and the results of that experiment are reported in this paper.
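The path-based idea can be sketched as follows: fetch the link neighbourhoods of two DBpedia resources from the public SPARQL endpoint and score relatedness by inverse path length on the resulting graph. The scoring rule is a placeholder, not the weighting scheme Shakti actually uses, and the query assumes the `dbo:` prefix predefined by the DBpedia endpoint.

```python
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")

def neighbours(resource: str) -> list[tuple[str, str]]:
    """Fetch up to 200 wiki-link neighbours of a DBpedia resource."""
    endpoint.setQuery(
        f"SELECT ?o WHERE {{ <{resource}> dbo:wikiPageWikiLink ?o }} LIMIT 200")
    endpoint.setReturnFormat(JSON)
    rows = endpoint.query().convert()["results"]["bindings"]
    return [(resource, r["o"]["value"]) for r in rows]

g = nx.Graph()
a = "http://dbpedia.org/resource/Radiohead"
b = "http://dbpedia.org/resource/Portishead_(band)"
g.add_edges_from(neighbours(a) + neighbours(b))

# Placeholder scoring: shorter connecting paths mean higher relatedness.
try:
    score = 1.0 / nx.shortest_path_length(g, a, b)
except (nx.NetworkXNoPath, nx.NodeNotFound):
    score = 0.0
print(score)
```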
Abstract:
eLearning has evolved in a gradual and consistent way. Along with this evolution, several specialized and disparate systems have appeared to fulfill the needs of teachers and students, such as repositories of learning objects, intelligent tutors, and automatic evaluators. This heterogeneity poses issues that must be addressed in order to promote interoperability among systems. For this reason, the standardization of content takes a leading role in the eLearning realm. This article presents a survey of current eLearning content standards. It gathers information on the most emergent standards and categorizes them according to three distinct facets: metadata, content packaging and educational design.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) were used to compare the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms, in various multilingual sources in the financial domain, that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the fourth paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims to develop an advanced tool that includes expert knowledge in the algorithms extracting specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
Abstract:
The need for better adaptation of networks to the flows they transport has led to research on new approaches such as content-aware networks and network-aware applications. In parallel, recent developments in multimedia and content-oriented services and applications, such as IPTV, video streaming, video on demand, and Internet TV, have reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management-driven hybrid multicast solution that is multi-domain and media-oriented, and that combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content-aware network and network-aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content-aware networks, spanning one or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.
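One small piece of such an architecture can be sketched in code: building an overlay distribution tree whose edge weights reflect the QoS requirements of the content type. The topology, weights and shortest-path-tree heuristic below are invented for illustration and are not the article's actual tree-construction algorithm.

```python
import networkx as nx

overlay = nx.Graph()
# (node, node, latency in ms) across two IP domains, A* and B*, joined by a gateway.
overlay.add_weighted_edges_from([
    ("A1", "A2", 5), ("A2", "gw", 8), ("gw", "B1", 12),
    ("B1", "B2", 4), ("A1", "gw", 20), ("B2", "gw", 25),
])

source = "A1"
receivers = ["A2", "B1", "B2"]

# Shortest-path tree from the source: one simple way to customize the
# virtual content-aware network to latency-sensitive content.
tree_edges = set()
for r in receivers:
    path = nx.shortest_path(overlay, source, r, weight="weight")
    tree_edges.update(zip(path, path[1:]))
print(sorted(tree_edges))
```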
Abstract:
Phenylketonuria is an inborn error of metabolism involving, in most cases, deficient activity of phenylalanine hydroxylase. Neonatal diagnosis and a prompt special diet (low in phenylalanine and restricted in natural protein) are essential to treatment. The lack of data on the phenylalanine content of processed foodstuffs is an additional limitation of an already very restrictive diet. Our goals were to quantify the protein (Kjeldahl method) and amino acid (18 amino acids, HPLC/fluorescence) content of 16 dishes specifically conceived for phenylketonuric patients, and to compare the most relevant results with those of several international food composition databases. As might be expected, all the meals contained low protein levels (0.67–3.15 g/100 g), with the highest occurring in boiled rice and potatoes. These foods also contained the highest amounts of phenylalanine (158.51 and 62.65 mg/100 g, respectively). In contrast to the other amino acids, it was possible to predict phenylalanine content from protein content alone. Slight deviations were observed when comparing the results with the different food composition databases.
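The reported predictability of phenylalanine from protein alone amounts to a simple linear regression; the sketch below illustrates the idea with invented values spanning the protein range quoted in the abstract.

```python
import numpy as np

# Hypothetical (protein, phenylalanine) pairs for six low-protein dishes.
protein_g_100g = np.array([0.67, 1.10, 1.54, 2.02, 2.60, 3.15])
phe_mg_100g    = np.array([30.0, 52.0, 75.0, 101.0, 131.0, 158.0])

# Fit Phe = slope * protein + intercept (np.polyfit returns slope first).
slope, intercept = np.polyfit(protein_g_100g, phe_mg_100g, deg=1)
print(f"Phe ~ {slope:.1f} * protein + {intercept:.1f} (mg/100 g)")

# Predicted phenylalanine for a dish with 1.8 g protein per 100 g:
print(slope * 1.8 + intercept)
```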