18 results for domain knowledge reuse
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This work develops a new model of Absorptive Capacity that takes into account two variables, namely learning and knowledge, to explain how companies transform information into knowledge.
Abstract:
Today, information overload and the lack of systems that enable locating employees with the right knowledge or skills are common challenges that large organisations face. This leads knowledge workers to re-invent the wheel and to struggle to retrieve information from both internal and external resources. In addition, information is dynamically changing and ownership of data is moving from corporations to individuals. However, there is a set of web-based tools that may bring about major progress in the way people collaborate and share their knowledge. This article aims to analyse the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review was conducted to present the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented to business applications, the characteristics of the segments and tools involved were reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm does not only imply tools: it also changes the way people collaborate and the way work is done (processes), and it impacts other technologies. Finally, gaps in the literature in this area are outlined.
Abstract:
Most of the wastewater treatment systems in small rural communities of the Cova da Beira region (Portugal) consist of constructed wetlands (CW) with horizontal subsurface flow (HSSF). It is believed that those systems allow compliance with discharge standards as well as the production of final effluents suitable for reuse. Results obtained in a nine-month campaign in an HSSF bed pointed out that COD and TSS removal were lower than expected. A discrete sampling also showed that removal of TC, FC and HE was not sufficient to fulfil international irrigation goals. However, the bed responded very well to variations of the incoming nitrogen loads, presenting high removal of nitrogen forms. A good correlation between mass load and mass removal rate was observed for BOD5, COD, TN, NH4-N, TP and TSS, which shows a satisfactory response of the bed to the variable incoming loads. The entry of excessive loads of organic matter and solids contributed to the decrease of the effective volume available for pollutant uptake and may therefore have negatively influenced the treatment capability. Primary treatment should be improved in order to decrease the variation of the incoming organic and solid loads and to improve the removal of COD, solids and pathogens. The final effluent presented good physico-chemical quality for reuse in irrigation, which is the most likely application in the area.
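For reference, the mass load and mass removal rate correlated above are usually expressed per unit of bed area; a minimal formulation with the customary definitions (assumed here, not quoted from the paper, with Q the flow rate, C_in and C_out the influent and effluent concentrations, and A the bed area):

\[
  L = \frac{Q\,C_{\mathrm{in}}}{A}, \qquad
  r = \frac{Q\,(C_{\mathrm{in}} - C_{\mathrm{out}})}{A}
\]

A "good correlation" between L and r is then typically reported as a linear fit r ≈ aL + b for each parameter (BOD5, COD, TN, NH4-N, TP, TSS).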
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding, the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems that exploits the source correlation at the decoder and not at the encoder as in predictive video coding. Although many improvements have been made over the last years, the performance of state-of-the-art WZ video codecs still does not reach that of state-of-the-art predictive video codecs, especially for high and complex motion video content. This is also true in terms of subjective image quality, mainly because of a considerable amount of blocking artefacts present in the decoded WZ video frames. This paper proposes an adaptive deblocking filter to improve both the subjective and objective quality of the WZ frames in a transform-domain WZ video codec. The proposed filter is an adaptation of the advanced deblocking filter defined in the H.264/AVC (advanced video coding) standard to a WZ video codec. The results obtained confirm the subjective quality improvement and objective quality gains that can reach 0.63 dB overall for sequences with high motion content when large group of pictures (GOP) sizes are used.
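The abstract does not detail the adapted filter, so purely as an illustration of the kind of edge-adaptive decision the H.264/AVC deblocking filter is built on (small steps across block boundaries are smoothed as likely artefacts, strong edges are left untouched), a minimal sketch with made-up thresholds — not the authors' filter — could look like this:

# Hypothetical sketch of edge-adaptive deblocking across vertical 4x4 block
# boundaries of a grayscale frame; alpha/beta thresholds are assumptions.
import numpy as np

def deblock_vertical_edges(frame, block=4, alpha=10.0, beta=3.0):
    """Smooth small discontinuities at block boundaries, keep real edges."""
    out = frame.astype(np.float64)
    _, w = out.shape
    for x in range(block, w - 1, block):       # each vertical block boundary
        p1, p0 = out[:, x - 2], out[:, x - 1]  # two columns left of the edge
        q0, q1 = out[:, x], out[:, x + 1]      # two columns right of the edge
        # Filter only where the step across the edge is small (likely a coding
        # artefact) and the signal is locally smooth on both sides.
        mask = (np.abs(p0 - q0) < alpha) & \
               (np.abs(p1 - p0) < beta) & (np.abs(q1 - q0) < beta)
        delta = (q0 - p0) / 4.0                # gentle low-pass correction
        out[:, x - 1] = np.where(mask, p0 + delta, p0)
        out[:, x] = np.where(mask, q0 - delta, q0)
    return np.clip(out, 0, 255).astype(frame.dtype)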
Abstract:
We provide an agent with the capability to infer the relations (assertions) entailed by the rules that describe the formal semantics of an RDFS knowledge base. The proposed inferencing process formulates each semantic restriction as a rule implemented within a SPARQL query statement. The process expands the original RDF graph into a fuller graph that explicitly captures the rules' described semantics. The approach is currently being explored in order to support descriptions that follow the generic Semantic Web Rule Language. An experiment using the Fire-Brigade domain, a small-scale knowledge base, is adopted to illustrate the agent modelling method and the inferencing process.
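As an illustration of the general technique (not the authors' implementation), one RDFS entailment rule — class membership propagated through rdfs:subClassOf — can be written as a SPARQL 1.1 INSERT and applied with, for instance, rdflib; the Fire-Brigade namespace below is hypothetical:

# Minimal sketch: the rdfs9 rule (?x rdf:type ?sub and ?sub rdfs:subClassOf ?super
# entail ?x rdf:type ?super) expressed as a SPARQL update and run with rdflib.
from rdflib import Graph

data = """
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/fire#> .   # hypothetical Fire-Brigade namespace

ex:FireTruck rdfs:subClassOf ex:EmergencyVehicle .
ex:truck42   rdf:type        ex:FireTruck .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Repeating such updates until no new triple is added materialises the entailed
# assertions, i.e. it "expands the original RDF graph into a fuller graph".
g.update("""
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT { ?x rdf:type ?super }
WHERE  { ?x rdf:type ?sub . ?sub rdfs:subClassOf ?super }
""")

print(g.serialize(format="turtle"))  # now also contains: ex:truck42 a ex:EmergencyVehicle .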
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems which exploits the source temporal correlation at the decoder and not at the encoder as in predictive video coding. Although some progress has been made over the last years, WZ video coding is still far from the compression performance of predictive video coding, especially for high and complex motion contents. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback channel-driven rate control, whose modules have been improved with some recent coding tools. This study proposes a novel motion learning approach to successively improve the rate-distortion (RD) performance of the WZ video codec as the decoding proceeds, making use of the already decoded transform bands to improve the decoding process for the remaining transform bands. The results obtained reveal gains up to 2.3 dB in the RD curves against the performance of the same codec without the proposed motion learning approach, for high motion sequences and long group of pictures (GOP) sizes.
Abstract:
The effects of the Miocene through Present compression in the Tagus Abyssal Plain are mapped using the most up-to-date multi-channel seismic reflection and refraction data available to the scientific community. Correlation of the rift basin fault pattern with the deep crustal structure is presented along seismic line IAM-5. Four structural domains were recognized. In the oceanic realm, mild deformation concentrates in Domain 1, adjacent to the Tore-Madeira Rise. Domain 2 is characterized by the absence of shortening structures, except near the ocean-continent transition (OCT), implying that Miocene deformation did not propagate into the Abyssal Plain. In Domain 3 we distinguish three sub-domains: Sub-domain 3A, which coincides with the OCT; Sub-domain 3B, a highly deformed adjacent continental segment; and Sub-domain 3C. The Miocene tectonic inversion is mainly accommodated in Domain 3 by oceanward-directed thrusting at the ocean-continent transition and continentward-directed thrusting on the continental slope. Domain 4 corresponds to the non-rifted continental margin, where only minor extensional and shortening deformation structures are observed. Finite element numerical models address the response of the various domains to the Miocene compression, emphasizing the long-wavelength differential vertical movements and the role of possible rheologic contrasts. The concentration of the Miocene deformation in the transitional zone (TC), which comprises Sub-domain 3A and part of Sub-domain 3B, is a result of two main factors: (1) focusing of compression in an already stressed region due to plate curvature and sediment loading; and (2) rheological weakening. We estimate that the frictional strength in the TC is reduced by 30% relative to the surrounding regions. A model of compressive deformation propagation by means of horizontal impingement of the middle continental crust rift wedge and horizontal shearing on serpentinized mantle in the oceanic realm is presented. This model is consistent with both the geological interpretation of the seismic data and the results of the numerical modelling.
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas of Łukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network whose activation function is the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to that of the Łukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated networks, the method simplifies them in a pruning phase, using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used in the generation of a given truth table. For real-data tests, we selected the Mushrooms data set, available in the UCI Machine Learning Repository.
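The property this approach relies on — that every Łukasiewicz connective can be evaluated by a single neuron whose activation is the identity truncated to zero and one — can be illustrated with a small sketch (weights and function names are illustrative, not taken from the paper):

# One neuron with activation f(z) = min(1, max(0, z)) evaluates each
# Lukasiewicz connective on truth values in [0, 1].
def neuron(weights, bias, inputs):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return min(1.0, max(0.0, z))

def luk_and(x, y):      # strong conjunction: max(0, x + y - 1)
    return neuron([1.0, 1.0], -1.0, [x, y])

def luk_or(x, y):       # strong disjunction: min(1, x + y)
    return neuron([1.0, 1.0], 0.0, [x, y])

def luk_implies(x, y):  # implication: min(1, 1 - x + y)
    return neuron([-1.0, 1.0], 1.0, [x, y])

def luk_not(x):         # negation: 1 - x
    return neuron([-1.0], 1.0, [x])

# With crisp inputs the connectives reduce to classical logic:
assert luk_and(1.0, 0.0) == 0.0 and luk_or(1.0, 0.0) == 1.0
assert luk_implies(0.3, 0.9) == 1.0   # min(1, 1 - 0.3 + 0.9)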
Abstract:
CoDeSys ("Controller Development System") is a development environment for programming automation controllers. It is an open source solution completely in line with the international industrial standard IEC 61131-3. All five programming languages for application programming defined in IEC 61131-3 are available in the development environment. These features give professionals greater flexibility with regard to programming and allow control engineers to program many different applications in the languages in which they feel most comfortable. Over 200 manufacturers of devices from different industrial sectors offer intelligent automation devices with a CoDeSys programming interface. In 2006, version 3 was released with new updates and tools. One of the great innovations of the new version of CoDeSys is object-oriented programming. Object-oriented programming (OOP) offers great advantages to the user, for example when reusing existing parts of the application or when several developers work on one application. For such reuse, source code with several well-known parts can be prepared and generated automatically where necessary in a project, so users can then improve time/cost/quality management. Until now, in version 2, it was necessary to have a hardware interface called “Eni-Server” to access the generated XML code. Another novelty of the new version is a tool called Export PLCopenXML. This tool makes it possible to export the open XML code without the need for specific hardware. This type of code has its own requirements in order to comply with the standard described above. With the XML code, and with knowledge of how it works, it is possible to carry out component-oriented development of machines with modular programming in an easy way. Eplan Engineering Center (EEC) is a software tool developed by Mind8 GmbH & Co. KG that allows configuring and generating automation projects. For this purpose it uses modules of PLC code. The EEC already has a library to generate code for CoDeSys version 2. For version 3, and given the constant innovation of drivers by manufacturers, it is necessary to implement a new library in this software. It is therefore important to study the XML export in order to be able to design any type of machine. The purpose of this master's thesis is to study the new version of the CoDeSys XML, taking into account all aspects and the impact on the existing CoDeSys V2 models and libraries at the company Harro Höfliger Verpackungsmaschinen GmbH. To achieve this goal, a small sample named “Traffic light” will be implemented in CoDeSys version 2; then, using the tools of the new version, the same project will be created with version 3, together with the EEC implementation for the automatically generated code.
Abstract:
In this paper we examine the construction of first entities in narratives produced by children aged 5, 7 and 10 years and by adults. The study demonstrates that when children reformulate they try to construct entities detached from the situation of enunciation, which means that they construct a detached or a translated plane and they construct the linguistic existence of entities. Entities must first be introduced into the enunciative space, and then comments will be made in subsequent utterances. Constructing existence supposes extraction. This consists of “singling out an occurrence, that is, isolating and drawing its spatiotemporal boundaries” (Culioli, 1990, p. 182). Once the occurrence of the notion is constructed (which means it has become a separate occurrence with situational properties), children can predicate about it. However, there are children who do not construct the linguistic existence of entities. I hypothesize that the mode of task presentation influences the success of constructing linguistic existence. When they share the investigator’s knowledge about the stimulus images, children do not ascribe an existential status to the occurrence of the notional domain.
Abstract:
Cork processing involves a boiling step to make the cork softer, which consumes a high volume of water and generates a wastewater with a high organic content, rich in tannins. An assessment of the final wastewater characteristics and of the boiling water composition along the boiling process was performed. The parameters studied were pH, color, total organic carbon (TOC), chemical and biochemical oxygen demands (COD, BOD5, BOD20), total suspended solids (TSS), and total phenols and tannins (TP, TT). It was observed that the solute extraction power of the water is significantly reduced for higher quantities of cork processed. Valid relationships between parameters were established, not only envisaging wastewater characterization but also providing an important tool for wastewater monitoring and for process control/optimization. The biodegradability of the boiling water decreased as the quantity of cork processed increased, and for the final wastewater its value is always lower than 0.5, indicating that these wastewaters are very difficult to treat by biological processes. The decrease in biodegradability was associated with the increase of the tannin content, which can rise up to 0.7 g/L. These compounds can be used by other industries when concentrated, and the clarified wastewater can be reused, which is a potential asset of this wastewater treatment.
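For context, the biodegradability values quoted above are presumably the usual BOD5/COD ratio (the abstract does not spell out the index); as a purely illustrative worked example with made-up concentrations:

\[
  \text{biodegradability} = \frac{\mathrm{BOD_5}}{\mathrm{COD}},
  \qquad \text{e.g. } \frac{1.5\ \mathrm{g\,O_2/L}}{3.8\ \mathrm{g\,O_2/L}} \approx 0.39 < 0.5 ,
\]

which is consistent with an effluent that is poorly amenable to biological treatment.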
Abstract:
This paper presents the foundations of an Academic Social Network (ASN) focusing on the Bologna Declaration and on Bologna Process (BP) mobility issues, using ontological support. An ASN will permit students to share common academic interests, preferences and mobility paths in the European Higher Education Space (EHES). The description of the conceptual support is ontology-based, allowing knowledge sharing and reuse. An approach is presented that merges an Academic Ontology to Support the Bologna Mobility Process with the Friend of a Friend (FOAF) ontology. The resulting ontology supports the student mobility profile in the ASN. The strategies to make knowledge about mobility issues available in the network are presented, including knowledge discovery and simulation approaches to cover students' mobility scenarios for the BP.
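A minimal sketch of the kind of merge described — a student mobility profile typed both with a (hypothetical) academic-mobility class and as foaf:Person — could look like this using rdflib; only the FOAF vocabulary is real, the "amo:" namespace and its terms are assumptions:

# Hypothetical sketch: merge a made-up academic-mobility ontology with FOAF
# to describe a student mobility profile as an RDF graph.
from rdflib import Graph, Namespace, Literal, URIRef, RDF
from rdflib.namespace import FOAF

AMO = Namespace("http://example.org/academic-mobility#")  # hypothetical ontology

g = Graph()
g.bind("foaf", FOAF)
g.bind("amo", AMO)

student = URIRef("http://example.org/students/maria")
g.add((student, RDF.type, FOAF.Person))            # social identity via FOAF
g.add((student, RDF.type, AMO.MobilityStudent))    # academic mobility profile
g.add((student, FOAF.name, Literal("Maria Silva")))
g.add((student, AMO.homeInstitution, Literal("Instituto Politécnico de Lisboa")))
g.add((student, AMO.desiredDestination, Literal("Politecnico di Milano")))

print(g.serialize(format="turtle"))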
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education.
Abstract:
In recent decades, there has been a growing demand for mastery of written language in all areas of social life (Olson 2009). This demand is not limited to the Brazilian context but refers to a worldwide context, which today places the mastery of various language capacities, especially reading and writing, as a condition for access to knowledge, social participation and the effective exercise of citizenship. The mastery of such capacities refers to a certain type or level of literacy that goes beyond decoding; it concerns the various reading and writing capacities required in different social practices. Building on a research partnership between higher education institutions in Portugal and Brazil, we have been reflecting on curriculum issues in the two countries.
Abstract:
In this paper, we introduce an innovative course in the Portuguese context, the Master's Course in “Integrated Didactics in Mother Tongue, Maths, Natural and Social Sciences”, which takes place at the Lisbon School of Education, and we discuss in particular the results of the evaluation made by the students who attended the Curricular Unit Integrated Didactics (CU-ID). This course was designed for in-service teachers of the first six years of schooling and intends to improve connections between different curriculum areas. In this paper, we begin by presenting a few general ideas about curriculum development; we then discuss the concept of integration, present the principles and objectives of the course created as well as its structure, and describe the methodology used in the evaluation process of the above-mentioned CU-ID. The results allow us to state that the students recognized, as positive features of the CU-ID, the presence in all sessions of two teachers from different scientific areas simultaneously, as well as the invitations issued to specialists on the subject of integration and to other teachers who already promote forms of integration in schools. As negative features, students noted the lack of an integrated purpose applying the four scientific areas of the course simultaneously, and also indicated the need to become familiar with more models of integrated education. Consequently, the suggestions for improvement derived from these negative features. The students also considered that their evaluation process was appropriate, since it focused on the design of an integrated project for one of the school years already mentioned.