80 results for user generated content


Relevance: 20.00%

Abstract:

The current models are not simple enough to allow a quick estimation of the remediation time. This work reports the development of an easy and relatively rapid procedure for forecasting the remediation time of vapour extraction. Sandy soils contaminated with cyclohexane and prepared with different water contents were studied. The remediation times estimated through the mathematical fitting of experimental results were compared with those of real soils. The main objectives were: (i) to predict, through a simple mathematical fitting, the remediation time of soils with water contents different from those used in the experiments; and (ii) to analyse the influence of soil water content on: (ii1) the remediation time; (ii2) the remediation efficiency; and (ii3) the distribution of contaminants in the different phases present in the soil matrix after the remediation process. For sandy soils with negligible contents of clay and natural organic matter, artificially contaminated with cyclohexane before vapour extraction, it was concluded that: (i) if the soil water content belonged to the range considered in the experiments with the prepared soils, then the remediation time of real soils of similar characteristics could be successfully predicted, with relative differences no higher than 10%, through a simple mathematical fitting of the experimental results; and (ii) increasing the soil water content from 0% to 6% had the following consequences: (ii1) it increased the remediation time (from 1.8 h to 4.9 h); (ii2) it decreased the remediation efficiency (from 99% to 97%); and (ii3) it decreased the amount of contaminant adsorbed onto the soil and in the non-aqueous liquid phase, thus increasing the amount of contaminant in the aqueous and gaseous phases.
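Since the abstract reports only the endpoints of the studied range (1.8 h at 0% water and 4.9 h at 6%) and does not state the fitted model form, the following is a minimal sketch of the prediction idea using plain linear interpolation between those endpoints:

```python
# Minimal sketch: estimate the remediation time for an intermediate soil
# water content from the two endpoints reported in the abstract. The linear
# model is an assumption; the paper's own fitting form is not given.
import numpy as np

water_content = np.array([0.0, 6.0])   # soil water content, % (reported endpoints)
time_h = np.array([1.8, 4.9])          # remediation time, h (reported endpoints)

t_at_3pct = np.interp(3.0, water_content, time_h)
print(f"Estimated remediation time at 3% water content: {t_at_3pct:.1f} h")
```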

Relevance: 20.00%

Abstract:

This work reports the analysis of the efficiency and time of soil remediation using vapour extraction, and provides a comparison of results obtained with both prepared and real soils. The main objectives were: (i) to analyse the efficiency and time of remediation according to the water and natural organic matter contents of the soil; and (ii) to assess whether a previous study, performed using prepared soils, could help to anticipate the viability of the process under real conditions. For sandy soils with negligible clay content, artificially contaminated with cyclohexane before vapour extraction, it was concluded that: (i) increasing the soil water content and, mainly, the natural organic matter content negatively influenced the remediation process, making it less efficient, more time consuming, and consequently more expensive; and (ii) a previous study using prepared soils of similar characteristics proved helpful for anticipating the viability of the process under real conditions.

Relevance: 20.00%

Abstract:

Folk medicine is a relevant and effective part of indigenous healthcare systems which are, in practice, totally dependent on traditional healers. An outstanding coincidence between indigenous medicinal plant uses and the scientifically proven pharmacological properties of several phytochemicals has been observed over the years. This work focused on the leaves of a medicinal plant traditionally used for therapeutic benefits (Angolan Cymbopogon citratus), in order to evaluate their nutritional value. The bioactive phytochemical composition and antioxidant activity of leaf extracts prepared with different solvents (water, methanol and ethanol) were also evaluated. The plant leaves contained carbohydrates (~60%), protein (~20%), fat (~5%), ash (~4%) and moisture (~9%). The phytochemical screening revealed the presence of tannins, flavonoids and terpenoids in all extracts; methanolic extracts also contained alkaloids and steroids. Several methods were used to evaluate the total antioxidant capacity of the different extracts (DPPH•, NO• and H2O2 scavenging assays, reducing power, and FRAP). Ethanolic extracts presented a significantly higher antioxidant activity (p < 0.05), except for FRAP, in which the best results were achieved by the aqueous extracts. Methanolic extracts showed the lowest radical scavenging activities for both the DPPH• and NO• radicals.
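The radical-scavenging assays mentioned above are conventionally reported as a percentage of inhibition relative to a control absorbance; the sketch below shows that standard calculation with illustrative (non-study) values:

```python
# Standard radical-scavenging calculation used in DPPH/NO assays:
#   % inhibition = (1 - A_sample / A_control) * 100
# The absorbance values below are illustrative placeholders, not the
# study's measurements.
def scavenging_percent(a_control: float, a_sample: float) -> float:
    return (1.0 - a_sample / a_control) * 100.0

print(f"{scavenging_percent(0.820, 0.215):.1f}% inhibition")  # about 74%
```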

Relevance: 20.00%

Abstract:

In the past few years, so-called gadgets like cellular phones, personal digital assistants and digital cameras have become widespread, even among less technologically aware users. However, for several reasons, the factory floor itself seems to be hermetic to these changes. After the fieldbus revolution, the factory floor has seen an increased use of more and more powerful programmable logic controllers and user interfaces, but the way they are used remains almost the same. We believe that new user-computer interaction techniques, including multimedia and augmented reality, combined with now-affordable technologies like wearable computers and wireless networks, can change the way factory personnel work together with the machines and the information system on the factory floor. This new age is already starting with innovative uses of communication networks on the factory floor, either using "standard" networks or enhancing industrial networks with multimedia and wireless capabilities.

Relevance: 20.00%

Abstract:

This paper describes how MPEG-4 object-based video (OBV) can be used to allow selected objects to be inserted into the play-out stream to a specific user, based on a profile derived for that user. The application scenario described here is personalized product placement; the paper considers the value of this application in the current and evolving commercial media distribution market, given the huge emphasis media distributors are currently placing on targeted advertising. This level of application of video content requires a sophisticated content description and metadata system (e.g., MPEG-7). The scenario considers the requirement for global libraries to provide the objects to be inserted into the streams. The paper then considers the commercial trading of objects between the libraries, video service providers, advertising agencies and other parties involved in the service. Consequently, a brokerage of video objects is proposed, based on negotiation and trading using intelligent agents representing the various parties. The proposed Media Brokerage Platform is a multi-agent system structured in two layers. In the top layer there is a collection of coarse-grain agents representing the real-world players – the providers and deliverers of media contents and the market regulator profiler – and, in the bottom layer, there is a set of finer-grain agents constituting the marketplace – the delegate agents and the market agent. For knowledge representation (domain, strategic and negotiation protocols) we propose a Semantic Web approach based on ontologies. The media component contents should be represented in MPEG-7, and the metadata describing the objects to be traded should follow a specific ontology. The top-layer content providers and deliverers are modelled by intelligent autonomous agents that express their will to transact – buy or sell – media components by registering at a service registry. The market regulator profiler creates, according to the selected profile, a market agent, which, in turn, checks the service registry for potential trading partners for a given component and invites them to the marketplace. The subsequent negotiation and actual transaction are performed by delegate agents in accordance with their profiles and the predefined rules of the market.
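As a toy illustration of the registry-and-market-agent flow just described, the sketch below uses invented class and function names (they are not the platform's actual API): agents register their will to buy or sell a component, and a market agent checks the registry and invites potential trading partners.

```python
# Hypothetical sketch of the two-layer brokerage: provider/deliverer agents
# register in a service registry; a market agent created by the regulator
# profiler looks up potential partners for a component and invites them.
from dataclasses import dataclass, field

@dataclass
class Registration:
    agent_id: str
    role: str          # "sell" or "buy"
    component: str     # category of the MPEG-7-described media component

@dataclass
class ServiceRegistry:
    entries: list[Registration] = field(default_factory=list)

    def register(self, reg: Registration) -> None:
        self.entries.append(reg)

    def partners_for(self, component: str) -> list[Registration]:
        return [e for e in self.entries if e.component == component]

def open_marketplace(registry: ServiceRegistry, component: str) -> list[str]:
    """Market agent: invite registrants only if both sides of a trade exist."""
    partners = registry.partners_for(component)
    sellers = [e.agent_id for e in partners if e.role == "sell"]
    buyers = [e.agent_id for e in partners if e.role == "buy"]
    return sellers + buyers if sellers and buyers else []

registry = ServiceRegistry()
registry.register(Registration("libraryA", "sell", "beverage-ad-object"))
registry.register(Registration("broadcasterB", "buy", "beverage-ad-object"))
print(open_marketplace(registry, "beverage-ad-object"))
```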

Relevance: 20.00%

Abstract:

This paper proposes a novel business model to support media content personalisation: an agent-based business-to-business (B2B) brokerage platform for media content producer and distributor businesses. Distributors aim to provide viewers with a personalised content experience, and producers wish to ensure that their media objects are watched by as many targeted viewers as possible. In this scenario, viewers and media objects (main programmes and candidate objects for insertion) have profiles and, in the case of main programme objects, are annotated with placeholders representing personalisation opportunities, i.e., locations for the insertion of personalised media objects. The MultiMedia Brokerage (MMB) platform is a multiagent, multilayered brokerage composed of agents that act as sellers and buyers of viewer stream timeslots and/or media objects on behalf of the registered businesses. These agents engage in negotiations to select the media objects that best match the current programme and viewer profiles.
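As a rough illustration of that matching step, the sketch below scores candidate objects against viewer and programme profiles and picks the best one for a placeholder; the profile fields and weights are invented for illustration and are not the MMB platform's actual model:

```python
# Hypothetical profile-matching sketch: rank candidate media objects for a
# placeholder by tag overlap with the viewer and programme profiles.
# Field names ("tags", "interests", "topics") and weights are assumptions.
def match_score(candidate: dict, viewer: dict, programme: dict) -> float:
    viewer_overlap = len(set(candidate["tags"]) & set(viewer["interests"]))
    programme_overlap = len(set(candidate["tags"]) & set(programme["topics"]))
    return 0.7 * viewer_overlap + 0.3 * programme_overlap

def select_for_placeholder(candidates: list[dict],
                           viewer: dict, programme: dict) -> dict:
    return max(candidates, key=lambda c: match_score(c, viewer, programme))

viewer = {"interests": {"football", "cars"}}
programme = {"topics": {"sports"}}
candidates = [{"id": "ad1", "tags": {"cars", "sports"}},
              {"id": "ad2", "tags": {"cooking"}}]
print(select_for_placeholder(candidates, viewer, programme)["id"])  # ad1
```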

Relevance: 20.00%

Abstract:

The aim of this article is to show how it is possible to integrate stories and ICT in Content and Language Integrated Learning (CLIL) for English as a foreign language (EFL) learning in bilingual schools. Two Units of Work are presented. The first, for the second year of Primary, is based on a Science topic, 'Materials'; the story used is 'The Three Little Pigs' and the computer program is 'JClic'. The second, for the sixth year of Primary, is based on a Science and Arts topic; the story used is 'Charlotte's Web' and the computer program is 'Atenex'.

Relevance: 20.00%

Abstract:

With the current complexity of communication protocols, implementing all of their layers in the kernel of the operating system is too cumbersome, and it prevents the use of capabilities that are only available to user-space processes. However, building protocols as user-space processes must not impair the responsiveness of the communication. Therefore, in this paper we present a layer of a communication protocol which, due to its complexity, was implemented in a user-space process. The lower layers of the protocol are, for responsiveness reasons, implemented in the kernel. This protocol was developed to support large-scale power-line communication (PLC) with timing requirements.
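The following is a hedged sketch of that kernel/user-space split, assuming the kernel-resident lower layers expose frames through a character device; the device path, frame format and helper names are hypothetical, not the paper's actual interface:

```python
# Hypothetical sketch: the timing-critical lower layers live in the kernel
# and hand frames to a user-space process through a character device, where
# the complex upper-layer logic runs.
import os

DEV = "/dev/plc0"  # assumed device node exported by the kernel-resident layers

def process(frame: bytes) -> bytes:
    """Placeholder for the complex upper-layer state machine (user space)."""
    return b""

def upper_layer_loop() -> None:
    fd = os.open(DEV, os.O_RDWR)
    try:
        while True:
            frame = os.read(fd, 2048)   # blocking read of one lower-layer frame
            if not frame:
                break
            reply = process(frame)
            if reply:
                os.write(fd, reply)     # hand the response back to the kernel
    finally:
        os.close(fd)
```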

Relevance: 20.00%

Abstract:

Low-loss power transmission gears operate at lower temperatures than conventional ones because their tooth geometry is optimized to reduce friction. The main objective of this work is to compare the operating stabilization temperature and efficiency of low-loss austempered ductile iron (ADI) and carburized steel gears. Three different low-loss tooth geometries were adopted (types 311, 411 and 611, all produced using standard 20° pressure angle tools) and the corresponding steel and ADI gears were tested in an FZG machine. The results obtained showed that the low-loss geometries had a significant influence on power loss, with type 611 gears generating lower power loss than type 311 gears. At low speeds (500 and 1000 rpm) and high torque, ADI gears generated lower power loss than steel gears. However, at high speed and high torque (high input power and high stabilization temperature), steel gears had better efficiency.

Relevance: 20.00%

Abstract:

This dissertation proposes a system capable of bridging the gap between legislative documents in PDF format and legislative documents in open formats. The main goal is to map the knowledge present in these documents so as to represent the collection as linked information. The system is composed of several components responsible for the execution of three proposed phases: data extraction, knowledge organization, and information access. The first phase proposes an approach to the extraction of structure, text and entities from PDF documents, in order to obtain the desired information according to the user's parameterization. This approach uses two different extraction methods, corresponding to the two phases of document processing – document analysis and document understanding. The criterion used to group text objects is the font used in those objects, as defined in the PDF source code (Content Stream). The approach is divided into three parts: document analysis, document understanding, and conjunction. The first part deals with the extraction of text segments, adopting a geometric approach; the result is a list of lines of the document's text. The second part groups the text objects according to the stipulated criterion, producing an XML document with the result of that extraction. The third and final part joins the results of the two previous parts and applies structural and logical rules in order to obtain the final XML document. The second phase proposes an ontology in the legal domain capable of organizing the information extracted by the first phase; it is also responsible for indexing the text of the documents. The proposed ontology has three characteristics: it is small, interoperable and shareable. The first characteristic relates to the fact that the ontology does not focus on a detailed description of the concepts involved, proposing instead a more abstract description of the entities present. The second is motivated by the need for interoperability with other ontologies of the legal domain, but also with the standard ontologies in general use. The third is defined so that the knowledge translated according to the proposed ontology is independent of factors such as country, language or jurisdiction. The third phase addresses the access to, and reuse of, the knowledge by users external to the system, through the development of a Web Service. This component provides access to the information by exposing a group of resources to external actors who wish to access it. The developed Web Service follows the REST architecture. An Android mobile application was also developed to provide visualizations of the information requests. The final result is a system capable of transforming collections of documents in PDF format into collections in open formats, enabling access and reuse by other users. This system directly answers the needs of the open data community and of governments, which hold many collections of this kind without the means to reason over the information they contain and to turn it into data that citizens and professionals can visualize and use.
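As a rough illustration of the font-based grouping criterion used in the document analysis and understanding parts, the sketch below relies on the pdfminer.six library (not the dissertation's own extractor) to group extracted text lines by the dominant font recorded in the PDF Content Stream:

```python
# Illustrative sketch: group the text lines of a PDF by the dominant font of
# their characters, mirroring the grouping criterion described above.
# Uses pdfminer.six; this is not the dissertation's actual implementation.
from collections import defaultdict
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTChar, LTTextContainer, LTTextLine

def lines_by_font(pdf_path: str) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for page_layout in extract_pages(pdf_path):
        for element in page_layout:
            if not isinstance(element, LTTextContainer):
                continue
            for line in element:
                if not isinstance(line, LTTextLine):
                    continue
                fonts = [ch.fontname for ch in line if isinstance(ch, LTChar)]
                if not fonts:
                    continue
                dominant = max(set(fonts), key=fonts.count)
                groups[dominant].append(line.get_text().strip())
    return groups
```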

Relevance: 20.00%

Abstract:

As the variety of mobile devices connected to the Internet grows, there is a corresponding increase in the need to deliver content tailored to their heterogeneous characteristics. At the same time, we are witnessing the growth of e-learning in universities through the adoption of electronic platforms and standards. Not surprisingly, the concept of mLearning (Mobile Learning) has appeared in recent years, reducing the constraint of learning location thanks to the mobility of general portable devices. However, this large number and variety of Web-enabled devices poses several challenges for Web content creators who want to automatically determine the delivery context and adapt the content to the client's mobile device. In this paper we analyse several approaches to defining the delivery context and present an architecture for delivering uniform mLearning content to mobile devices, called eduMCA (Educational Mobile Content Adaptation). With the eduMCA system, Web authors will not need to create specialized pages for each kind of device, since the content is automatically transformed to adapt to the capabilities of any mobile device, from WAP to XHTML MP-compliant devices.
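As a rough illustration of delivery-context handling, the sketch below picks a markup family from standard HTTP request headers; the heuristics and names are assumptions for illustration, not the eduMCA implementation (real systems typically consult a device description repository such as WURFL or UAProf):

```python
# Hypothetical sketch: choose the markup family to deliver from the client's
# User-Agent and Accept headers. Hints and names are illustrative only.
WAP_HINTS = ("Nokia", "UP.Browser", "WAP")

def negotiate_markup(user_agent: str, accept: str) -> str:
    """Return the markup family to deliver for this client."""
    if "application/vnd.wap.xhtml+xml" in accept:
        return "xhtml-mp"   # XHTML Mobile Profile capable device
    if any(h in user_agent for h in WAP_HINTS) or "text/vnd.wap.wml" in accept:
        return "wml"        # legacy WAP device
    return "xhtml"          # default full browser

print(negotiate_markup("Nokia6230/2.0", "text/vnd.wap.wml"))  # wml
```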

Relevance: 20.00%

Abstract:

This paper presents the design of a user interface for repositories of learning objects. It integrates several tasks, such as the submission, browsing, searching and commenting/reviewing of learning objects, in a single screen layout. This design is being implemented on the web front-end of crimsonHex, a repository of specialized learning objects developed as part of EduJudge, a European project that aims to bring the automatic evaluation of programming problems to eLearning systems.

Relevance: 20.00%

Abstract:

The main purpose of this work was the development of procedures for the simulation of atmospheric flows over complex terrain using OpenFOAM. To this end, tools and procedures for preprocessing and data extraction were developed apart from this code and were afterwards applied in the simulation of a real case. For the generation of the computational domain, a systematic method able to translate the terrain elevation model into a native OpenFOAM format (blockMeshDict) was developed. The outcome is a structured mesh in which the user can define the number of control volumes and their dimensions. With this procedure, the difficulties of case setup and the high computational effort reported in the literature in association with the use of snappyHexMesh, the OpenFOAM resource explored until then for this task, were considered to be overcome. The procedures developed for the generation of boundary conditions allowed the automatic creation of idealized inlet vertical profiles, the definition of wall-function boundary conditions, and the calculation of internal-field first guesses for the iterative solution process, taking as input experimental data supplied by the user. The applicability of the generated boundary conditions is limited to the simulation of turbulent, steady-state, incompressible and neutrally stratified atmospheric flows, always recurring to RaNS (Reynolds-averaged Navier-Stokes) models. For the modelling of terrain roughness, the developed procedure lets the user either define idealized conditions, such as a uniform aerodynamic roughness length or a roughness value varying as a function of characteristic topography values, or use real site data; it is complemented by techniques for the visual inspection of the generated roughness maps. The absence of a forest canopy model limits the applicability of this procedure to low aerodynamic roughness lengths. The developed tools and procedures were then applied in the simulation of a neutrally stratified atmospheric flow over the Askervein hill. In the performed simulations, the sensitivity of the solution to different convection schemes, mesh dimensions, ground roughness, and formulations of the k-ε and k-ω models was evaluated. When compared with experimental data, the calculated values showed good agreement of the speed-up at the hill top and lee side, with a relative error of less than 10% at a height of 10 m above ground level. The turbulent kinetic energy was considered to be well simulated on the windward side and hill top, and grossly predicted on the lee side, where a zone of flow separation was also identified. Despite the need for further work to evaluate the importance of the downstream recirculation zone for the quality of the results, the agreement between calculated and experimental values and the sensitivity of OpenFOAM to the tested parameters were considered to be generally in line with the simulations presented in the reviewed bibliographic sources.
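A minimal sketch of the elevation-model-to-blockMeshDict translation is given below. It emits one hex block per terrain cell, with the bottom vertices following the terrain and a flat top, which is only a simplified reading of the described method; the function name and the dictionary skeleton are assumptions, not the work's actual tool:

```python
# Sketch: write an OpenFOAM blockMeshDict from a node-based terrain elevation
# grid. One hex block per cell; bottom follows the terrain, top is flat.
import numpy as np

def write_blockmeshdict(elev, dx, dy, z_top, nz, path="blockMeshDict"):
    ny1, nx1 = elev.shape                      # node grid: (ny+1, nx+1)

    def vid(i, j, k):                          # vertex index for node (i,j), level k
        return k * nx1 * ny1 + j * nx1 + i

    verts = []
    for level in ("bottom", "top"):
        for j in range(ny1):
            for i in range(nx1):
                z = float(elev[j, i]) if level == "bottom" else float(z_top)
                verts.append(f"({i * dx} {j * dy} {z})")

    blocks = []
    for j in range(ny1 - 1):
        for i in range(nx1 - 1):
            v = [vid(i, j, 0), vid(i + 1, j, 0), vid(i + 1, j + 1, 0), vid(i, j + 1, 0),
                 vid(i, j, 1), vid(i + 1, j, 1), vid(i + 1, j + 1, 1), vid(i, j + 1, 1)]
            blocks.append(f"hex ({' '.join(map(str, v))}) (1 1 {nz}) simpleGrading (1 1 1)")

    with open(path, "w") as f:
        f.write("FoamFile { version 2.0; format ascii; "
                "class dictionary; object blockMeshDict; }\n\n")
        f.write("convertToMeters 1;\n\n")
        f.write("vertices\n(\n    " + "\n    ".join(verts) + "\n);\n\n")
        f.write("blocks\n(\n    " + "\n    ".join(blocks) + "\n);\n\n")
        f.write("edges ();\nboundary ();\nmergePatchPairs ();\n")

# e.g. a flat 3x3-cell test domain, 20 cells high:
write_blockmeshdict(np.zeros((4, 4)), dx=50.0, dy=50.0, z_top=400.0, nz=20)
```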

Relevance: 20.00%

Abstract:

The content of a Learning Object is frequently characterized by metadata from several standards, such as LOM, SCORM and QTI. Specialized domains require new application profiles that further complicate the task of editing the metadata of a learning object, since their data models are not supported by existing authoring tools. To cope with this problem, we designed a metadata editor supporting multiple metadata languages, each with its own data model. It is assumed that the supported languages have an XML binding, and we use RDF to create a common metadata representation, independent of the syntax of each metadata language. The combined data model supported by the editor is defined as an ontology. Thus, the process of extending the editor to support a new metadata language is twofold: firstly, the conversion from the XML binding of the metadata language to RDF and vice versa; secondly, the extension of the ontology to cover the new metadata model. In this paper we describe the general architecture of the editor, explain how a typical metadata language for learning objects is represented as an ontology, and show how this formalization captures all the data required to generate the graphical user interface of the editor.
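As an illustration of the first step, converting an XML binding to the common RDF representation, the sketch below maps two LOM-style fields to triples with rdflib; the namespace URI and the (namespace-free) input layout are assumptions for illustration, not the editor's actual schema:

```python
# Illustrative sketch: map a couple of LOM-style XML fields to RDF triples.
# The LOM namespace URI and the simplified input layout are assumptions.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

LOM = Namespace("http://ltsc.ieee.org/xsd/LOM#")   # assumed namespace

def lom_xml_to_rdf(xml_text: str, resource_uri: str) -> Graph:
    g = Graph()
    g.bind("lom", LOM)
    subject = URIRef(resource_uri)
    root = ET.fromstring(xml_text)
    title = root.findtext("./general/title/string")
    if title:
        g.add((subject, LOM.title, Literal(title)))
    for kw in root.findall("./general/keyword/string"):
        if kw.text:
            g.add((subject, LOM.keyword, Literal(kw.text)))
    return g

xml_text = """<lom><general>
  <title><string>Sorting algorithms</string></title>
  <keyword><string>quicksort</string></keyword>
</general></lom>"""
print(lom_xml_to_rdf(xml_text, "http://example.org/lo/42").serialize())
```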

Relevance: 20.00%

Abstract:

Cod (Gadus morhua) has been part of the Portuguese diet for several centuries, and Portugal is currently one of the largest consumers of this fish worldwide. After the salting process, this species has unique characteristics, such as its consistency, smell, taste and yellow colour. It is precisely because of the colour of the fish that some producers from Iceland, Norway and Denmark asked the European Union (EU) authorities to approve the use of polyphosphates in the wet-salting process of cod. Polyphosphates are food additives widely used in fish processing because they prevent the oxidation of the lipids and proteins of the cod muscle, thus avoiding the undesired change in the colour of the fish. Despite the efforts of the Associação dos Industriais do Bacalhau (AIB) and of the Portuguese government to reject the Nordic proposal, this was not achieved. Thus, at the beginning of next year it will already be possible to sell cod with phosphates in the EU. The quantification of the phosphate content of cod is generally carried out by molecular absorption spectrophotometry in the ultraviolet-visible range (UV-Vis). This quantification is based on the method for the determination of total phosphorus, through the hydrolysis of the phosphates to orthophosphates, followed by measurement of the yellow colour generated by their reaction with a molybdate-vanadate solution. The objective of this dissertation was the validation of an analytical method for the quantification of polyphosphates in cod. The validated method was the one described in standard NP 4495 for fishery and aquaculture products. This work was carried out in a laboratory accredited for waters and food products (Equilibrium - Laboratório de Controlo de Qualidade e de Processos Lda, L0312). The influence of the sodium chloride content on the quantification of polyphosphates was also determined, as well as the moisture content, since the latter can affect the product during its commercialization. In the validation process, several parameters were studied, such as selectivity, linearity, sensitivity, limit of quantification and precision. From the analysis of the results obtained, it is concluded that the method for the determination of phosphates in cod is validated, since it satisfies all the specifications set for each validation parameter evaluated.
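As an illustration of the linearity parameter mentioned in the validation, the sketch below fits a calibration line of absorbance against phosphate-standard concentration and reports the determination coefficient; all numeric values are illustrative placeholders, not the laboratory's data:

```python
# Sketch of a linearity check for a UV-Vis calibration: least-squares fit of
# absorbance vs. standard concentration, with R^2. Values are illustrative.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # standards (assumed units)
absorbance = np.array([0.052, 0.101, 0.205, 0.398, 0.810])

slope, intercept = np.polyfit(conc, absorbance, 1)     # degree-1 fit
predicted = slope * conc + intercept
ss_res = np.sum((absorbance - predicted) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"A = {slope:.4f} c + {intercept:.4f},  R^2 = {r_squared:.4f}")
```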