35 results for content centric
at Instituto Politécnico do Porto, Portugal
Abstract:
The Internet as we know it was designed around the TCP/IP protocol stack, developed in the 1960s and 1970s under a paradigm centred on the individual address of each machine (known as host-centric). This paradigm was extremely successful in interconnecting machines through IP-address-based forwarding. Recent studies show that a significant part of today's Internet traffic consists of content transfer rather than the traditional network applications for which it was originally conceived. New communication models have therefore emerged, among them point-to-point network protocols in which every machine on the network can distribute content (known as peer-to-peer networks), to improve the distribution and exchange of content on the Internet. Consequently, in recent years the host-centric paradigm has begun to be questioned and a new approach has appeared: Information-Centric Networking (ICN). Given that the Internet today is essentially a network for transferring content and information, why not centre its evolution in that direction instead of on host-to-host communication? The Content-Centric Networking (CCN) paradigm simplifies the solution of certain security problems associated with the TCP/IP architecture and is one of the main proposals of the information-centric networking approach. One of the main problems of the TCP/IP model is content protection. Today, to guarantee the authenticity and integrity of data shared on the network, it is necessary to secure both the repository and the path the data must travel to its final destination. However, the continued ineffectiveness against denial-of-service attacks on the Internet suggests that the network infrastructure itself should provide mechanisms to mitigate them. One of the main pillars of the CCN communication paradigm is its focus on the content itself rather than on its physical location. Since its appearance in 2009, and as a consequence of evolution and adaptation, its designation has since changed to Named Network Content (NNC). In this dissertation we present an overview of the CCN architecture, describing its main characteristics, its constituent components, and how its mechanisms mitigate the traditional communication and security problems. Experiments are carried out with CCNx, a prototype comprising a set of features and tools that make it possible to implement this paradigm. The goal is to critically analyse some of the existing proposals and to identify opportunities, challenges, and directions for future research.
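As an illustration of the forwarding model this abstract refers to, below is a minimal Python sketch (not CCNx code) of how a CCN node processes an incoming Interest using the three classic structures: Content Store, Pending Interest Table (PIT) and Forwarding Information Base (FIB). The name matching and the "face" abstraction are simplified assumptions for illustration only.

```python
# Minimal, illustrative sketch of CCN Interest processing (not CCNx code).
# Real CCN uses hierarchical names, longest-prefix matching and
# cryptographically signed Data packets; all of that is simplified here.

class CCNNode:
    def __init__(self):
        self.content_store = {}   # name -> data (opportunistic cache)
        self.pit = {}             # name -> set of requesting faces
        self.fib = {}             # name prefix -> upstream face

    def on_interest(self, name, from_face):
        # 1. Content Store: answer from the cache if the data is already held.
        if name in self.content_store:
            from_face.send_data(name, self.content_store[name])
            return
        # 2. PIT: if an identical Interest is pending, just record the face.
        if name in self.pit:
            self.pit[name].add(from_face)
            return
        # 3. FIB: forward the Interest towards a potential content source.
        for prefix, upstream in self.fib.items():
            if name.startswith(prefix):
                self.pit[name] = {from_face}
                upstream.send_interest(name)
                return
        # No route: the Interest is simply discarded in this sketch.

    def on_data(self, name, data):
        # Cache the content and satisfy every face waiting in the PIT.
        self.content_store[name] = data
        for face in self.pit.pop(name, ()):
            face.send_data(name, data)
```

Note how security follows from the design: because Data packets are matched and cached by name (and, in real CCN, verified by signature), trust attaches to the content itself rather than to the repository or the path, which is the point the abstract makes.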
Abstract:
The aim of this paper is to present the recommendation module of the Mathematics Collaborative Learning Platform (PCMAT). PCMAT is an Adaptive Educational Hypermedia System (AEHS) with a constructivist approach, which presents contents and activities adapted to the characteristics and learning style of mathematics students in basic schools. The recommendation module is responsible for choosing different learning resources for the platform, based on the user's characteristics and performance. Since the main purpose of an adaptive system is to provide the user with content and interface adaptation, the recommendation module is integral to PCMAT's adaptation model.
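The abstract does not specify the recommendation algorithm itself; the sketch below shows one plausible shape for such a module, ranking resources by learning-style match and by how well their difficulty fits the student's performance. All field names and weights are hypothetical.

```python
# Hypothetical sketch of a recommendation step: rank learning resources by
# learning-style match and difficulty fit. Not PCMAT's actual algorithm.

def recommend(resources, student, top_n=3):
    def score(resource):
        # Reward resources tagged with the student's dominant learning style.
        style_match = 1.0 if student["learning_style"] in resource["styles"] else 0.0
        # Prefer difficulty close to the student's current performance level.
        difficulty_fit = 1.0 - abs(resource["difficulty"] - student["performance"])
        return 0.6 * style_match + 0.4 * difficulty_fit

    return sorted(resources, key=score, reverse=True)[:top_n]

resources = [
    {"id": "fractions-video", "styles": ["visual"], "difficulty": 0.4},
    {"id": "fractions-quiz", "styles": ["kinesthetic"], "difficulty": 0.6},
]
student = {"learning_style": "visual", "performance": 0.5}
print(recommend(resources, student, top_n=1))  # fractions-video ranks first
```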
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest; although they were not statistically different, they differed in variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods performed very poorly in terms of both efficiency and repeatability (CV between 0.13 and 0.18).
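For reference, the repeatability figures quoted above are coefficients of variation (CV = standard deviation divided by the mean of replicate determinations); a minimal computation, with invented replicate values rather than the study's data:

```python
# Coefficient of variation (CV) as used for the repeatability figures above.
# The replicate lipid contents below are invented illustrative numbers.
import statistics

replicates = [5.02, 4.98, 5.10, 4.95, 5.05]  # e.g. % total lipids, 5 replicates
cv = statistics.stdev(replicates) / statistics.mean(replicates)
print(f"CV = {cv:.3f}")  # a lower CV means higher repeatability
```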
Abstract:
A growth trial with Senegalese sole (Solea senegalensis Kaup, 1858) juveniles fed diets containing increasing replacement levels of fishmeal by mixtures of plant protein sources was conducted over 12 weeks. Total fat contents of muscle, liver, viscera, skin, fins and head tissues were determined, as well as the fatty acid profiles of muscle and liver (GC-FID analysis). The liver was the preferred site for fat deposition (5.5–10.8% fat), followed by the fins (3.4–6.7% fat). Increasing levels of plant protein in the diets appear to be related to increased levels of total lipids in the liver. Sole muscle is lean (2.4–4.0% fat), with total lipids similar among treatments. The liver fatty acid profile varied significantly among treatments. Plant protein diets induced increased levels of C16:1 and C18:2 n-6 and a decrease in ARA and EPA levels. The muscle fatty acid profile also showed increasing levels of C18:2 n-6, while ARA and DHA remained similar among treatments. Substitution of fishmeal by plant protein is hence possible without major differences in the lipid content and fatty acid profile of the main edible portion of the fish, the muscle.
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content in bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed chloride ion-selective electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread and the respective dough samples were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of the two methods did not differ significantly; the ISE is therefore an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were found. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but not when the sodium-based method was applied. This could lead to the erroneous application of fines, and the authorities should therefore supply additional information regarding the analytical procedure for this particular control.
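The abstract does not name the statistical test used; a paired t-test is one standard choice for comparing two methods applied to the same samples at the 0.05 level, so the sketch below uses it, with invented values rather than the study's data.

```python
# Illustrative sketch: comparing two analytical methods on the same samples
# with a paired t-test. The test choice is an assumption (the abstract only
# states a 0.05 significance level), and the values are invented.
from scipy import stats

reference = [1.42, 1.38, 1.55, 1.47, 1.60]  # NaCl g/100 g, titration
ise =       [1.40, 1.41, 1.52, 1.49, 1.58]  # NaCl g/100 g, chloride ISE

t, p = stats.ttest_rel(reference, ise)
print(f"t = {t:.3f}, p = {p:.3f}")
if p > 0.05:
    print("No significant difference at the 0.05 level")
```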
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of the PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of their items linked to categories in Chapter 1. The highest percentage of items from a single instrument linked to categories in Chapters 2–5 varied between 9% and 50%. Three instruments assess the presence or absence of environmental factors in a specific context, while the other three assess the intensity of the impact of environmental factors. Discussion: The instruments differ in content and type of assessment, and several of their items link to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion of the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
The World Wide Web (Web) was envisioned as a network of interlinked hypertext documents creating an information space in which humans and machines could communicate. However, the information in the traditional Web was, and is, stored in an unstructured way, so only humans can consume it conveniently. Consequently, searching for information on the syntactic Web is a task performed mainly by humans, and as such it is not always easy to accomplish. In this context, it became essential to evolve towards a more structured and more meaningful Web, in which information is given well-defined meaning so as to enable cooperation between humans and machines. This Web is usually referred to as the Semantic Web. Moreover, the Semantic Web is fully achievable only if data from different sources are linked, thereby creating a repository of Linked Open Data (LOD). With the emergence of a new Web of Linked (Open) Data (i.e. the Semantic Web), new opportunities and challenges have arisen. Question Answering (QA) over semantic information is currently an active research area that tries to take advantage of Semantic Web technologies to improve the task of answering questions. The main goal of the World Search project is to exploit the Semantic Web to create mechanisms that support users in specific application domains in answering complex questions based on data from different repositories. However, an evaluation of the state of the art shows that existing applications do not support users in answering complex questions. Accordingly, the work presented in this document focuses on studying and developing methodologies and processes that help users find exact, correct answers to complex questions that cannot be answered using traditional systems. This includes: (i) overcoming users' difficulty in visualising the schema underlying the knowledge repositories; (ii) bridging the gap between the natural language expressed by users and the (formal) language understood by the repositories; (iii) processing and returning relevant information that appropriately answers users' questions. To this end, a set of features considered necessary to support the user in answering complex questions is identified, and a formal description of those features is provided. The proposal is materialised in a prototype that implements the features previously described. Experiments with the developed prototype show that users effectively benefit from the presented features: they allow users to navigate efficiently over the information repositories; the gap between the conceptualisations of the different stakeholders is minimised; and users are able to answer complex questions that they could not answer with traditional systems. In short, this document presents a proposal that demonstrably enables complex questions over semi-structured repositories to be answered in a user-driven way.
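To make the gap in point (ii) concrete, the following shows the kind of formal query language (SPARQL) that Linked Open Data repositories understand and that a QA system must produce from a natural-language question such as "Which are the most populous cities in Portugal?". This is a generic example against the public DBpedia endpoint, not the World Search prototype.

```python
# Illustrative only: the formal (SPARQL) side of the natural-language gap.
# Queries the public DBpedia endpoint; dbo:/dbr: prefixes are predefined there.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:country dbr:Portugal ;
              dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```

Writing such a query requires knowing the repository's schema (dbo:City, dbo:populationTotal, ...), which is exactly the difficulty points (i) and (ii) above aim to remove for the end-user.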
Abstract:
Current models are not simple enough to allow a quick estimation of the remediation time. This work reports the development of an easy and relatively rapid procedure for forecasting the remediation time of vapour extraction. Sandy soils contaminated with cyclohexane and prepared with different water contents were studied. The remediation times estimated through the mathematical fitting of experimental results were compared with those of real soils. The main objectives were: (i) to predict, through a simple mathematical fitting, the remediation time of soils with water contents different from those used in the experiments; and (ii) to analyse the influence of soil water content on: (ii1) the remediation time; (ii2) the remediation efficiency; and (ii3) the distribution of contaminants among the different phases present in the soil matrix after the remediation process. For sandy soils with negligible contents of clay and natural organic matter, artificially contaminated with cyclohexane before vapour extraction, it was concluded that (i) if the soil water content was within the range considered in the experiments with the prepared soils, the remediation time of real soils of similar characteristics could be successfully predicted, with relative differences no higher than 10%, through a simple mathematical fitting of experimental results; and (ii) increasing the soil water content from 0% to 6% had the following consequences: (ii1) it increased the remediation time (from 1.8 to 4.9 h); (ii2) it decreased the remediation efficiency (from 99% to 97%); and (ii3) it decreased the amount of contaminant adsorbed onto the soil and in the non-aqueous liquid phase, thus increasing the amount of contaminant in the aqueous and gaseous phases.
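The abstract does not state which model was fitted; as an illustration of what such a "simple mathematical fitting" can look like, the sketch below assumes a first-order (exponential) decay of residual contaminant and fits it to invented data points with scipy, then derives a remediation time for a target residual fraction.

```python
# Illustrative sketch: the fitted model is an assumption (first-order decay
# is a common simple choice for vapour extraction) and the data are invented.
import numpy as np
from scipy.optimize import curve_fit

def residual_fraction(t, k):
    # First-order removal: fraction of cyclohexane remaining after t hours.
    return np.exp(-k * t)

t_data = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])          # hours
m_data = np.array([1.00, 0.55, 0.31, 0.10, 0.032, 0.011])  # fraction remaining

(k,), _ = curve_fit(residual_fraction, t_data, m_data)

# Remediation time: time to reach a target residual fraction (e.g. 1%).
target = 0.01
t_rem = -np.log(target) / k
print(f"k = {k:.2f} 1/h, estimated remediation time = {t_rem:.1f} h")
```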
Abstract:
This work reports the analysis of the efficiency and time of soil remediation using vapour extraction, and provides a comparison of results obtained with prepared and real soils. The main objectives were: (i) to analyse the efficiency and time of remediation according to the water and natural organic matter content of the soil; and (ii) to assess whether a previous study, performed using prepared soils, could help to predict the viability of the process in real conditions. For sandy soils with negligible clay content, artificially contaminated with cyclohexane before vapour extraction, it was concluded that (i) increases in soil water content and, above all, in natural organic matter content affected the remediation process negatively, making it less efficient, more time-consuming, and consequently more expensive; and (ii) a previous study using prepared soils of similar characteristics proved helpful in predicting the viability of the process in real conditions.
Abstract:
Folk medicine is a relevant and effective part of indigenous healthcare systems which are, in practice, totally dependent on traditional healers. A remarkable coincidence between indigenous medicinal plant uses and the scientifically proven pharmacological properties of several phytochemicals has been observed over the years. This work focused on the leaves of a medicinal plant traditionally used for therapeutic benefits (Angolan Cymbopogon citratus) in order to evaluate their nutritional value. The bioactive phytochemical composition and antioxidant activity of leaf extracts prepared with different solvents (water, methanol and ethanol) were also evaluated. The plant leaves contained ~60% carbohydrates, ~20% protein, ~5% fat, ~4% ash and ~9% moisture. The phytochemical screening revealed the presence of tannins, flavonoids, and terpenoids in all extracts; methanolic extracts also contained alkaloids and steroids. Several methods were used to evaluate the total antioxidant capacity of the different extracts (DPPH•, NO• and H2O2 scavenging assays, reducing power, and FRAP). Ethanolic extracts presented significantly higher antioxidant activity (p < 0.05), except for FRAP, in which the best results were achieved by the aqueous extracts. Methanolic extracts showed the lowest radical scavenging activities for both DPPH• and NO• radicals.
Abstract:
This paper proposes a novel business model to support media content personalisation: an agent-based business-to-business (B2B) brokerage platform for media content producer and distributor businesses. Distributors aim to provide viewers with a personalised content experience, and producers wish to ensure that their media objects are watched by as many targeted viewers as possible. In this scenario viewers and media objects (main programmes and candidate objects for insertion) have profiles and, in the case of main programme objects, are annotated with placeholders representing personalisation opportunities, i.e., locations for the insertion of personalised media objects. The MultiMedia Brokerage (MMB) platform is a multiagent, multilayered brokerage composed of agents that act as sellers and buyers of viewer stream timeslots and/or media objects on behalf of the registered businesses. These agents engage in negotiations to select the media objects that best match the current programme and viewer profiles.
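The negotiation protocol itself is not detailed in the abstract; the sketch below only illustrates the underlying matching step, scoring candidate media objects against the viewer and programme profiles for one placeholder. All data structures and weights are hypothetical.

```python
# Hypothetical sketch of the profile-matching step behind the negotiation:
# score each candidate object against the viewer and programme profiles and
# pick the best one for a placeholder. Not the MMB platform's actual protocol.

def best_object_for(placeholder, candidates, viewer_profile, programme_profile):
    def match(candidate):
        tags = candidate["tags"]
        viewer_overlap = len(tags & viewer_profile["interests"])
        programme_overlap = len(tags & programme_profile["topics"])
        return viewer_overlap + programme_overlap

    # Seller/buyer agents would negotiate price on top of this ranking.
    return max(candidates, key=match)

candidates = [
    {"id": "ad-sneakers", "tags": {"sport", "youth"}},
    {"id": "ad-opera", "tags": {"music", "classical"}},
]
viewer = {"interests": {"sport", "travel"}}
programme = {"topics": {"youth", "football"}}
print(best_object_for("slot-1", candidates, viewer, programme)["id"])  # ad-sneakers
```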
Abstract:
The aim of this article is to show how stories and ICT can be integrated in Content and Language Integrated Learning (CLIL) for English as a foreign language (EFL) learning in bilingual schools. Two units of work are presented. The first, for the second year of primary school, is based on a Science topic, 'Materials'; the story used is 'The Three Little Pigs' and the computer program is 'JClic'. The second is based on a Science and Arts topic for the sixth year of primary school; the story used is 'Charlotte's Web' and the computer program is 'Atenex'.
Abstract:
As the variety of mobile devices connected to the Internet grows, there is a corresponding increase in the need to deliver content tailored to their heterogeneous characteristics. At the same time, we are witnessing the growth of e-learning in universities through the adoption of electronic platforms and standards. Not surprisingly, the concept of mLearning (Mobile Learning) has appeared in recent years, easing the constraint of learning location thanks to the mobility of portable devices. However, this large number and variety of Web-enabled devices poses several challenges for Web content creators who want to automatically determine the delivery context and adapt content to the client's mobile device. In this paper we analyze several approaches to defining the delivery context and present an architecture for delivering uniform mLearning content to mobile devices, called eduMCA (Educational Mobile Content Adaptation). With the eduMCA system, Web authors will not need to create specialized pages for each kind of device, since the content is automatically transformed to match the capabilities of any mobile device, from WAP to XHTML MP-compliant devices.
Abstract:
Recent studies of mobile Web trends show a continuing explosion of mobile-friendly content. However, the increasing number and heterogeneity of mobile devices pose several challenges for Web programmers who want to automatically determine the delivery context and adapt content to mobile devices. In this process, the device detection phase plays an important role: an inaccurate detection can result in a poor mobile experience for the end-user. In this paper we compare the most promising approaches to mobile device detection. Based on this study, we present an architecture for a system that detects devices and delivers uniform m-Learning content to students in a higher education school. We focus mainly on the device capabilities repository, which is manageable and accessible through an API. We detail the structure of the capabilities XML Schema that formalizes the data within the device capabilities XML repository, and the REST Web Service API for selecting the corresponding device capabilities data according to a specific request. Finally, we validate our approach by presenting access and usage statistics for the mobile web interface of the proposed system, such as hits and new visitors, mobile platforms, average time on site and rejection rate.
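Neither the capabilities XML Schema nor the REST API is reproduced in the abstract, so the sketch below is a hypothetical illustration of the pattern it describes: a capabilities document for one device, as it might be returned by the repository's REST API, parsed to drive content adaptation. All element names are invented.

```python
# Illustrative sketch only: element names are hypothetical, since the actual
# capabilities XML Schema is not published in the abstract. In the described
# system a document like this would be fetched from the repository's REST API
# for the device identified in the request (e.g. by its User-Agent).
import xml.etree.ElementTree as ET

# Hypothetical response for one device, conforming to the capabilities schema.
capabilities_xml = """
<device id="example-phone">
  <markup>XHTML MP</markup>
  <screen><width>240</width><height>320</height></screen>
  <supports_js>false</supports_js>
</device>
"""

device = ET.fromstring(capabilities_xml)
width = int(device.findtext("screen/width"))
markup = device.findtext("markup")

# The adaptation layer would then select a content variant for this device.
print(f"Serving {markup} content at {width}px width")
```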
Abstract:
The concept of Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. In analyzing this new specification we considered two interoperability levels: content and communication. A common content format is the backbone of interoperability and the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes access to specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced in the CC schema, and we explore the use of IMS Learning Tools Interoperability (LTI) to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC to automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated from an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge with the IMS CC online validator to verify its conformance with the IMS CC specification.
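As a rough illustration of what generating a cartridge involves (an XML manifest describing resources, zipped together with those resources), here is a deliberately simplified sketch. The real IMS CC manifest schema is much richer, and this skeletal package would not pass the IMS CC online validator; the PExIL-derived exercise content is also reduced to a single hypothetical HTML file.

```python
# Deliberately simplified sketch of cartridge packaging: a manifest describing
# one resource, zipped with that resource. Not a schema-valid IMS CC cartridge.
import zipfile
import xml.etree.ElementTree as ET

manifest = ET.Element("manifest", identifier="pexil-exercise-1")
resources = ET.SubElement(manifest, "resources")
ET.SubElement(
    resources, "resource",
    identifier="exercise", type="webcontent", href="exercise.html",
)

with zipfile.ZipFile("exercise.imscc", "w") as cc:
    cc.writestr("imsmanifest.xml", ET.tostring(manifest, encoding="unicode"))
    cc.writestr("exercise.html", "<h1>Sum two integers read from stdin</h1>")
```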