35 results for Knowledge creation
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The main purpose of this research is to identify the hidden knowledge and learning mechanisms in the organization in order to disclose tacit knowledge and transform it into explicit knowledge. Most firms tend to duplicate their efforts, acquiring extra knowledge and new learning skills while forgetting to exploit the existing ones, thus wasting resources that could be applied to increase added value and the firm's overall competitive advantage. This unique value, in the shape of creation, acquisition, transformation and application of learning and knowledge, is not disseminated throughout the individual, the group and, ultimately, the company itself. This work is based on three variables that explain the behaviour of learning as the process of construction and acquisition of knowledge, namely internal social capital, technology and external social capital, which include the main attributes of learning and knowledge that help us to capture the essence of this symbiosis. Absorptive capacity provides the right tool to explore this uncertainty within the firm, making it possible to achieve the perfect match between the learning skills and the knowledge needed to support the overall strategy of the firm. This study takes into account a sample of the Portuguese textile industry and is based on a multisectorial analysis that enables a cross-functional analysis to check the validity of the results, in order to better understand and capture the dynamics of organizational behaviour.
Abstract:
This research develops a new model of absorptive capacity, taking into account two variables, namely learning and knowledge, to explain how companies transform information into knowledge.
Abstract:
Today, information overload and the lack of systems that enable locating employees with the right knowledge or skills are common challenges that large organisations face. This forces knowledge workers to reinvent the wheel and makes it hard for them to retrieve information from both internal and external resources. In addition, information is dynamically changing, and ownership of data is moving from corporations to individuals. However, there is a set of web-based tools that may bring about major progress in the way people collaborate and share their knowledge. This article aims to analyse the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review was carried out to present the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented to business applications, the characteristics of the segments and tools involved were reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm does not only imply tools, but also changes the way people collaborate, the way the work is done (processes) and, finally, impacts other technologies. Finally, gaps in the literature in this area are outlined.
Abstract:
The electricity trading sector underwent a profound change after the liberalisation of the electricity industry, which led to the creation of several entities that manage the European electricity markets. In the case of Portugal and Spain, that liberalisation process also produced an agreement that led to the creation of a joint Iberian market (MIBEL). This market comprises two operators, one representing the Portuguese pole (OMIP) and the other the Spanish pole (OMEL). OMIP covers the forward (futures) markets, typically offering contracts for energy traded over weeks, months, quarters, semesters or even years. Daily, these contracts may be settled in OMEL, which encompasses the day-ahead and intraday markets; unlike OMIP, it trades for the following day (day-ahead market) or for a specific period of the day (intraday market). The day-ahead market is the example used to build the interactive electricity market simulator. The simulator involves several users (players) who, through an HTML platform, invest in power plants, negotiate bids and analyse the operation and results of this market. The game is divided into three phases: 1. investment phase; 2. selling phase (bidding); 3. market phase. In the investment phase, the player can acquire electricity generation units of six types of technology: 1. coal plant; 2. combined cycle plant; 3. hydroelectric plant; 4. wind farm; 5. solar plant; 6. nuclear plant. As the rounds progress, the player can increase his investment capacity through energy sales; the winner is the player with the highest balance at the end of a previously defined number of rounds, or the one who first reaches the balance limit set by the game administrator. Pedagogically, this simulator is very interesting: besides learning about the technologies involved and the advantages and disadvantages of renewable energy plants versus fossil fuel plants, the user also gains sensitivity to environmental issues, such as the increase in greenhouse gases and the melting of ice caused by the global warming those gases provoke. Beyond the knowledge acquired about electricity, the game shows the user how the electricity market works, as well as the tactics that can be used to his advantage in this type of market.
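The day-ahead clearing the game is built around can be illustrated with a short sketch. Below is a minimal merit-order clearing in Python; the bid structure, the flat (price-inelastic) demand and all names are illustrative assumptions, not the simulator's actual implementation.

```python
# Minimal merit-order clearing for one day-ahead market round.
# Bid, clear_market and the flat-demand assumption are illustrative,
# not taken from the simulator described above.
from dataclasses import dataclass

@dataclass
class Bid:
    player: str      # bidding player
    price: float     # asked price, EUR/MWh
    quantity: float  # offered energy, MWh

def clear_market(bids, demand):
    """Accept the cheapest bids until demand is met; the last accepted
    (marginal) bid sets the uniform price paid to all accepted bids."""
    accepted, supplied, price = [], 0.0, 0.0
    for bid in sorted(bids, key=lambda b: b.price):
        if supplied >= demand:
            break
        take = min(bid.quantity, demand - supplied)
        accepted.append((bid.player, take))
        supplied += take
        price = bid.price
    return price, accepted

bids = [Bid("A", 12.0, 50), Bid("B", 30.0, 40), Bid("C", 55.0, 60)]
price, accepted = clear_market(bids, demand=80)
print(price, accepted)  # 30.0, A supplies 50 MWh and B supplies 30 MWh
```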
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfil novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably to raise it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference, decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
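The frame interpolation idea underlying such side information creation can be sketched briefly. The following Python fragment performs bidirectional block-based motion estimation between two decoded key frames and interpolates the intermediate frame assuming linear motion; block size, search range and the SAD criterion are illustrative choices, and the paper's motion-field regularization is not reproduced here.

```python
# A sketch of bidirectional motion-compensated interpolation between two
# decoded key frames, in the spirit of the side information creation
# described above. Block size, search range and the SAD criterion are
# illustrative; the paper's motion-field regularization is not reproduced.
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def interpolate_frame(prev, nxt, block=8, search=4):
    h, w = prev.shape
    out = np.zeros_like(prev)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best, best_cost = (0, 0), np.inf
            # symmetric search: assume linear motion through this block,
            # matching prev at (y-dy, x-dx) against nxt at (y+dy, x+dx)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0, y1, x1 = y - dy, x - dx, y + dy, x + dx
                    if not (0 <= y0 <= h - block and 0 <= x0 <= w - block and
                            0 <= y1 <= h - block and 0 <= x1 <= w - block):
                        continue
                    cost = sad(prev[y0:y0 + block, x0:x0 + block],
                               nxt[y1:y1 + block, x1:x1 + block])
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
            dy, dx = best
            # average of the two motion-compensated blocks
            out[y:y + block, x:x + block] = (
                prev[y - dy:y - dy + block, x - dx:x - dx + block].astype(int) +
                nxt[y + dy:y + dy + block, x + dx:x + dx + block]) // 2
    return out

# usage: side_info = interpolate_frame(decoded_prev, decoded_next)
```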
Abstract:
We investigate shareholder value creation of Spanish listed firms in response to announcements of acquisitions of unlisted companies and compare this experience to the purchase of listed firms over the period 1991–2006. As in foreign markets, acquirers of listed targets earn insignificant average abnormal returns, whereas acquirers of unlisted targets gain significant positive average abnormal returns. When we relate these results to company and transaction characteristics, our findings diverge from those reported in the literature for other foreign markets, as our evidence suggests that the listing status effect is mainly associated with the fact that unlisted firms tend to be smaller and lesser-known firms, and thus suffer from a lack of competition in the market for corporate control. Consequently, the payment of lower premiums and the possibility of diversifying shareholders' portfolios lead to unlisted firm acquisitions being viewed as value-oriented transactions.
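The abnormal returns referred to above are conventionally obtained from a market-model event study. A minimal sketch, with invented return series and illustrative parameters:

```python
# Market-model event study sketch behind the reported abnormal returns:
# fit R_stock = alpha + beta * R_market on an estimation window, then take
# the announcement-day return in excess of the model's prediction.
# All return series below are invented for illustration.
import numpy as np

def abnormal_return(stock, market, event_stock, event_market):
    beta, alpha = np.polyfit(market, stock, 1)  # OLS market model
    return event_stock - (alpha + beta * event_market)

rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 250)                  # ~1 year of daily returns
stk = 0.0002 + 1.1 * mkt + rng.normal(0, 0.01, 250)  # acquirer's returns
print(abnormal_return(stk, mkt, event_stock=0.021, event_market=0.003))
```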
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas of a Lukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network having, as activation function, the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to the descriptive power of the Lukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated network, the method simplifies it in a pruning phase, using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For real data tests, we selected the Mushroom data set, available in the UCI Machine Learning Repository.
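The central observation, that each Lukasiewicz connective is computable by a single neuron whose activation is the identity truncated to zero and one, can be sketched in a few lines of Python; the weights follow the standard Lukasiewicz definitions and the function names are illustrative.

```python
# Each Lukasiewicz connective evaluated by a single neuron whose activation
# is the identity truncated to zero and one, as stated in the abstract.
# Weights follow the standard Lukasiewicz definitions; names are illustrative.

def clamp01(x):
    """Activation function: identity truncated to [0, 1]."""
    return max(0.0, min(1.0, x))

def neuron(weights, bias, inputs):
    return clamp01(sum(w * v for w, v in zip(weights, inputs)) + bias)

def luka_and(a, b):   # strong conjunction: max(0, a + b - 1)
    return neuron([1, 1], -1, [a, b])

def luka_or(a, b):    # strong disjunction: min(1, a + b)
    return neuron([1, 1], 0, [a, b])

def luka_imp(a, b):   # implication: min(1, 1 - a + b)
    return neuron([-1, 1], 1, [a, b])

print(luka_and(0.7, 0.6))  # ~0.3
print(luka_imp(0.8, 0.5))  # ~0.7
```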
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Education Sciences, speciality in Supervision in Education.
Abstract:
Master's degree in Socio-Organisational Intervention in Health - Specialisation area: Health Services Administration and Management Policies.
Abstract:
The creation of the educational policy measure (Despacho Normativo n.º 55/2008) - Territórios Educativos de Intervenção Prioritária (TEIP, Priority Intervention Educational Territories) - answers the need to put into practice a principle of the Portuguese Education System Framework Law (Lei de Bases do Sistema Educativo), aiming to ensure a successful basic education for all. This measure follows an earlier one with the same characteristics, created in the 1996/1997 school year. Its implementation in the field was the object of a study commissioned by the Instituto de Inovação Educacional in which I participated, as part of a research team from the Faculdade de Psicologia e Ciências da Educação da Universidade de Lisboa. In this communication, I propose to revisit that study, reporting the critical assessment made at the time and, in the light of its main conclusions, to reflect now, as an external expert of a TEIP in the Lisbon region, on how the TEIP2 evolved, in which direction that evolution occurred and which problems persist. According to the official reports, the achievement of one of the central objectives of this policy measure, specifically the improvement of the quality of learning translated into the educational success of students, does not yet appear to be very expressive or very consistent. It therefore becomes necessary to question and problematize teachers' teaching practices, rethinking pedagogical strategies that may prove relevant to give real expression to that objective.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the side information creation methods provide better rate-distortion (RD) performance depending on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content.
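Side information quality, which the review identifies as a key driver of RD performance, is conventionally reported in dB. A minimal sketch of such a measurement, PSNR between the decoder's estimate and the original frame, with invented data:

```python
# PSNR between the original frame and the decoder's side information
# estimate, the usual dB figure of merit; the frames below are invented.
import numpy as np

def psnr(original, estimate, peak=255.0):
    mse = np.mean((original.astype(float) - estimate.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64)).astype(np.uint8)
side_info = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"side information quality: {psnr(frame, side_info):.1f} dB")
```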
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Education Sciences - Speciality in Special Education.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
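The hypothesis combination described above can be sketched as a pixel-wise fusion weighted by each hypothesis's estimated virtual-channel noise variance; the variance estimates, weighting scheme and data below are illustrative assumptions, not the paper's exact statistics.

```python
# Pixel-wise fusion of several side information hypotheses, weighted by the
# inverse of each hypothesis's estimated virtual-channel noise variance.
# The variance estimates, weights and data are illustrative assumptions.
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars, eps=1e-6):
    """Hypotheses with lower estimated noise variance get larger weights."""
    weights = np.array([1.0 / (v + eps) for v in noise_vars])
    weights /= weights.sum()
    stack = np.stack([h.astype(float) for h in hypotheses])
    return np.tensordot(weights, stack, axes=1)

rng = np.random.default_rng(2)
original = rng.integers(0, 256, (16, 16)).astype(float)
h1 = original + rng.normal(0, 2, original.shape)    # good hypothesis
h2 = original + rng.normal(0, 10, original.shape)   # poor hypothesis
fused = fuse_hypotheses([h1, h2], noise_vars=[4.0, 100.0])
print(np.abs(fused - original).mean())  # fused error stays close to h1's
```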
Abstract:
Master's degree in Accounting.