105 results for Level Set Approximation
Abstract:
This paper analyzes Knowledge Management (KM) as a political activity carried out by the world's major political leaders. We examine whether, and how, KM is practised at the macro-political level. The research is relevant because, given that we live in a knowledge society in the Information Era, it seems natural that political leaders should also practise KM. However, we are not aware of any previous study on KM and world leaders, and this paper is intended as a first step towards filling that gap. Methodologically, we rely on a literature review: since this is a preliminary study, we use data found on the Internet and in databases such as EBSCO. The analysis is divided into two main parts: theoretical ideas first, followed by an application. The second part is itself divided into two segments: the past and the present. We find that, not surprisingly, KM has always been and remains pervasive in the activity of world leaders, and that it has become more and more diverse as power itself has become more and more disseminated throughout the world. The study has the limitation of relying on insights and texts rather than interviews. Nevertheless, we believe this kind of analysis is valuable, and such studies may help improve democracies around the world.
Abstract:
This paper addresses a gap in the literature concerning the management of Intellectual Capital (IC) in a port, which is a network of independent organisations that act together in the provision of a set of services. As far as the authors are aware, this type of empirical context has been unexplored with regard to knowledge management or IC creation/destruction. Indeed, most research on IC still focuses on individual firms, despite the more recent interest in the analysis of macro-level units such as regions or nations. In this study, we conceptualise the port as a meta-organisation whose generic goal is economic development, both for itself and for the region where it is located. It provides us with a unique environment due to its complexity as an “organisation” composed of several organisations, connected by interdependency relationships and, typically, with no formal hierarchy. Accordingly, actors’ interests are not always aligned, and in some situations their individual interests can be misaligned with the collective goals of the port. Moreover, besides having their own interests, port actors also have different sources of influence and different levels of power, which can impact the port’s Collective Intellectual Capital (CIC). Consequently, the management of the port’s CIC can be crucial for its goals to be met. With this paper we intend to discuss how the network coordinator (the port authority) manages those complex relations of interest and power in order to develop collaboration and mitigate conflict, thus creating collective intellectual assets or avoiding intellectual liabilities that may emerge for the whole port. The fact that we are studying complex and dynamic processes, about which there is a lack of understanding, in a complex and atypical organisation leads us to consider the case study an appropriate research method. Evidence presented in this study results from preliminary interviews and from document analysis. Findings suggest that alignment of interests and actions, at both the dyadic and network levels, is critical to developing a context of collaboration/cooperation within the port community; accordingly, the port coordinator should make use of different types of power in order to ensure that the port’s goals are achieved.
Abstract:
Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches focused mainly on dynamic power dissipation while neglecting static (leakage) energy consumption. However, technology improvements have led to static power dissipation increasingly dominating. Addressing this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting real-time constraints. Our algorithms are based on the race-to-halt strategy, which runs the system at top speed in order to create long idle intervals that can be used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations, which show a reduction in energy consumption of up to 8% over existing work at high utilization. The complexity of our algorithms is lower than that of state-of-the-art algorithms. We also eliminate assumptions made in related work that restrict the practical application of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is presented, based on a large set of simulation results, in which our algorithms reduce the number of pre-emptions experienced in all cases. Our results show that sleep states in general can save up to 30% of the overall number of pre-emptions when compared to the sleep-agnostic earliest-deadline-first algorithm.
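To illustrate the general race-to-halt idea described above (this is a minimal sketch, not the paper's algorithm), one runs at top speed to create an idle interval and then selects the deepest sleep state whose break-even time is covered by that interval. The sleep-state table and numbers below are hypothetical.

```python
# Minimal sketch of a race-to-halt style sleep-state decision (not the paper's
# algorithm): given the length of an idle interval created by running at top
# speed, pick the lowest-power sleep state whose break-even time still pays off.
# The state table below is hypothetical.

SLEEP_STATES = [
    # (name, power in W while sleeping, break-even time in ms)
    ("idle",   0.50, 0.0),   # clock-gated idle, always worthwhile
    ("sleep",  0.10, 2.0),   # needs >= 2 ms to amortise entry/exit overhead
    ("deep",   0.01, 10.0),  # needs >= 10 ms
]

def pick_sleep_state(idle_ms: float):
    """Return the lowest-power state whose break-even time fits the idle interval."""
    feasible = [s for s in SLEEP_STATES if s[2] <= idle_ms]
    return min(feasible, key=lambda s: s[1]) if feasible else None

if __name__ == "__main__":
    for idle in (1.0, 5.0, 50.0):
        print(idle, "ms ->", pick_sleep_state(idle))
```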
Abstract:
Consider the scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform with two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if there exists a feasible task-to-processor assignment, then LPC also succeeds in finding a feasible task-to-processor assignment, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
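As background for the LP-based approach (a simplified illustration under assumed data, not the LPC algorithm itself, which additionally adds cutting planes and rounds the solution), the core relaxation can be posed as: each task is split fractionally across the two processor types, subject to the capacity of each type. The utilizations and processor counts below are invented.

```python
# Toy LP relaxation of two-type task assignment (illustrative only, not LPC):
# every task must be fully assigned across the two processor types without
# exceeding the total capacity of either type.
import numpy as np
from scipy.optimize import linprog

util = np.array([  # util[i][t]: utilization of task i on processor type t
    [0.6, 0.3],
    [0.5, 0.7],
    [0.2, 0.4],
    [0.8, 0.5],
])
procs = np.array([1, 2])          # number of processors of each type
n, types = util.shape

c = np.zeros(n * types)           # pure feasibility problem: any objective works

# Each task is assigned exactly once (possibly fractionally in the relaxation).
A_eq = np.zeros((n, n * types))
for i in range(n):
    A_eq[i, i * types:(i + 1) * types] = 1.0
b_eq = np.ones(n)

# Total utilization placed on a type must not exceed its processor count.
A_ub = np.zeros((types, n * types))
for t in range(types):
    for i in range(n):
        A_ub[t, i * types + t] = util[i, t]
b_ub = procs.astype(float)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, 1.0), method="highs")
print("feasible relaxation:", res.success)
print(res.x.reshape(n, types).round(2))
```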
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Auditing, supervised by Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira.
Abstract:
This paper explores the calculation of fractional integrals by means of the time delay operator. The study starts by reviewing the memory properties of fractional operators and their relationship with time delay. Based on the time response of the Mittag-Leffler function, an approximation of fractional integrals consisting of time-delayed samples is proposed. The tuning of the approximation is optimized by means of a genetic algorithm. The results demonstrate the feasibility of the new perspective and the limits of its application.
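For context, the formulas below are the standard Riemann–Liouville definition and the generic form of a delayed-samples approximation of the kind described; they are not necessarily the paper's exact formulation, and the gains and delays are the quantities that would be tuned (e.g. by a genetic algorithm).

```latex
% Riemann--Liouville fractional integral of order \alpha > 0
I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} f(\tau)\, d\tau

% Approximation by a finite sum of time-delayed samples, with gains c_k and
% delays \tau_k tuned to match the ideal response
I^{\alpha} f(t) \approx \sum_{k=1}^{N} c_k \, f(t - \tau_k)
```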
Abstract:
Digital oscilloscopes are used in many areas of knowledge and are indispensable instruments in electronic engineering. Thanks to the advent of Field Programmable Gate Arrays (FPGAs), reconfigurable measurement instruments, given their advantages, i.e., high performance, low cost and high flexibility, are increasingly an alternative to the instruments traditionally used in laboratories. With the goal of standardising the access to and control of this type of instrument, this thesis describes the design and implementation of a reconfigurable digital oscilloscope based on the IEEE 1451.0 standard. Defined according to an architecture based on this standard, the oscilloscope's characteristics are described in a data structure called the Transducer Electronic Data Sheet (TEDS), and its control is performed using a set of standardised commands. The oscilloscope implements a set of basic features and functionalities, all verified experimentally. These include a bandwidth of 575 kHz, a measurement range of 0.4 V to 2.9 V, and the possibility of defining a set of horizontal scales, the trigger level and slope, and the coupling mode with the circuit under analysis. Architecturally, the oscilloscope consists of a module specified in the Verilog hardware description language (HDL) and an interface developed in the Java® programming language. The module is embedded in an FPGA and implements all of the oscilloscope's processing. The interface allows the oscilloscope to be controlled and the measured signal to be displayed. During the project, an Analogue/Digital (A/D) converter with a maximum sampling frequency of 1.5 MHz and 14 bits of resolution was used; due to its limitations, a multi-stage interpolation system with digital filters had to be implemented.
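As a rough software analogue of the multi-stage interpolation mentioned above (a sketch only; the thesis performs this in FPGA hardware, and the stage factors here are invented), upsampling is done in two smaller stages so each anti-imaging filter stays short:

```python
# Minimal offline sketch of multi-stage interpolation (illustrative only; the
# stage factors and filter defaults below are assumptions, not the thesis's).
import numpy as np
from scipy.signal import resample_poly

fs = 1.5e6                                  # 1.5 MHz ADC sampling rate
t = np.arange(0, 1e-3, 1.0 / fs)
x = np.sin(2 * np.pi * 100e3 * t)           # 100 kHz test tone

# Two-stage interpolation by 4x then 4x (16x overall) instead of one 16x stage.
y = resample_poly(x, up=4, down=1)
y = resample_poly(y, up=4, down=1)
print(len(x), "->", len(y), "samples")      # 16x more samples
```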
Abstract:
This document presents a reflection on the internship work carried out between 17 February and 31 July 2014 at Faurecia's Metal Structures Plant in São João da Madeira, within a Final Project on the Implementation of Lean Tools. The proposed objective was to participate in the search for and implementation of solutions aimed at the continuous improvement of the production system. To that end, a wide set of tools was used, including 5S, QRCI and Standardized Work, among others, all widely employed in the automotive industry (and in this company in particular) through the Faurecia Excellence System (FES), applied to the line of business in which this multinational of French origin is solidly established. The internship period was a unique opportunity for the intern to come into contact with the problems existing in the production department, in a market as competitive as the automotive components industry. The internship work has two distinct strands: one internal to the company and another concerning suppliers and customers. Internally, the battle to reduce the variability that arises in production planning was evident, absorbing a large part of the effort of those working on process optimisation. Externally, there was a clear difficulty in finding suppliers able to meet Faurecia's supply needs, in both quantity and quality, together with a high level of demand imposed by the various customers. Finally, this project made it possible to apply knowledge acquired not only throughout the degree programme but also during the internship, to learn about industrial reality, and to grow technically and personally.
Abstract:
More than ever, the number of decision support methods and computer-aided diagnostic systems applied to various areas of medicine is increasing. In breast cancer research, much work has been done to reduce false positives when such systems are used as a double-reading method. In this study, we present a set of data mining techniques applied towards a decision support system in the area of breast cancer diagnosis. The method is geared to assist clinical practice in identifying mammographic findings such as microcalcifications, masses and even normal tissue, in order to avoid misdiagnosis. A reliable database was used, with 410 images from about 115 patients, containing previous reviews performed by radiologists and labelled as microcalcifications, masses and normal tissue findings. Two feature extraction techniques were used: the gray level co-occurrence matrix and the gray level run length matrix. For classification purposes, we considered various scenarios according to distinct patterns of lesions, and several classifiers, in order to determine the best performance in each case. The classifiers used were Naïve Bayes, Support Vector Machines, k-nearest Neighbors and Decision Trees (J48 and Random Forests). The results in distinguishing mammographic findings revealed high positive predictive values (PPV) and very good accuracy. Related results on the classification of breast density and the BI-RADS® scale are also presented. The best predictive method found for all tested groups was the Random Forest classifier, and the best performance was achieved in the distinction of microcalcifications. The conclusions based on the several tested scenarios represent a new perspective in breast cancer diagnosis using data mining techniques.
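The general pipeline described here (texture features from a gray level co-occurrence matrix feeding a Random Forest) can be sketched with scikit-image and scikit-learn; the data, labels and parameter values below are placeholders, not the study's.

```python
# Minimal sketch of a GLCM-features + Random Forest pipeline (placeholder data).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(image_8bit: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy and correlation from a 1-pixel-offset GLCM."""
    glcm = graycomatrix(image_8bit, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# Placeholder data: random "ROIs" standing in for mammographic patches.
rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 3, size=40)        # 0=normal, 1=mass, 2=microcalcification

X = np.vstack([glcm_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```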
Abstract:
This study aims to analyze which determinants predict frailty in general and in each frailty domain (physical, psychological, and social), considering the integral conceptual model of frailty, and in particular to examine the contribution of medication to this prediction. A cross-sectional study was designed using a non-probabilistic sample of 252 community-dwelling elderly people from three Portuguese cities. Frailty and determinants of frailty were assessed with the Tilburg Frailty Indicator. The number and type of medications consumed daily were also examined. Hierarchical regression analyses were conducted. The mean age of the participants was 79.2 years (±7.3), and most were women (75.8%), widowed (55.6%) and with a low educational level (0–4 years: 63.9%). In this study, the determinants explained 46% of the variance of total frailty, and 39.8%, 25.3%, and 27.7% of the variance of physical, psychological, and social frailty, respectively. Age, gender, income, death of a loved one in the past year, lifestyle, satisfaction with the living environment and self-reported comorbidity predicted total frailty, while each frailty domain was associated with a different set of determinants. The number of daily-consumed drugs was independently associated with physical frailty, and the consumption of medication for the cardiovascular system and for the blood and blood-forming organs explained part of the variance of total and physical frailty. The adverse effects of polymedication and its direct link with the level of comorbidity could explain the independent contribution of the number of prescribed drugs to frailty prediction. On the other hand, the findings regarding medication type provide further evidence of the association of frailty with cardiovascular risk. In the present study, a significant part of frailty was predicted, and the different contributions of each determinant to the frailty domains highlight the relevance of the integral model of frailty. The added value of a simple assessment of medication was considerable, and it should be taken into account for effective identification of frailty.
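The hierarchical (blockwise) regression approach used in such analyses enters predictors in blocks and inspects the change in explained variance when a block (here, medication count) is added. The sketch below uses invented variable names and simulated data, not the study's determinants.

```python
# Minimal sketch of hierarchical (blockwise) regression with statsmodels.
# Variable names and data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 252
df = pd.DataFrame({
    "age": rng.normal(79, 7, n),
    "n_drugs": rng.poisson(5, n),
    "comorbidity": rng.integers(0, 2, n),
})
df["frailty"] = 0.05 * df.age + 0.3 * df.n_drugs + 2 * df.comorbidity + rng.normal(0, 2, n)

block1 = smf.ols("frailty ~ age + comorbidity", data=df).fit()
block2 = smf.ols("frailty ~ age + comorbidity + n_drugs", data=df).fit()
print("R2 block 1:", round(block1.rsquared, 3))
print("R2 block 2:", round(block2.rsquared, 3),
      "| delta R2 from adding medication:", round(block2.rsquared - block1.rsquared, 3))
```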
Abstract:
Nowadays, the strategies companies choose to follow to make the most of material and human resources can represent the difference between success and failure. Supplier selection is a highly critical factor for the performance of the buying company, sometimes requiring the resolution of problems of considerable complexity. The choice of the methods to be used and the selection of the most relevant criteria were based on the study of several authors and on the responses to an online survey distributed to a sample of Portuguese companies, created specifically to understand which factors carry the most weight in partner-selection decisions. In addition, the results thus obtained were used to make the weightings applied in the selection tool more precise when ranking the best suppliers entered by its users. Many studies in the literature propose the use of methods to simplify the supplier selection task. This dissertation applies the study carried out on selection methods, namely the Simple Multi-Attribute Rating Technique (SMART) and the Analytic Hierarchy Process (AHP), to the development of an online software tool that allows any national company to obtain a ranking of its suppliers against a set of criteria and sub-criteria.
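A minimal sketch of how the two methods mentioned can combine (illustrative only; the criteria, pairwise judgements and supplier scores below are made up, not the dissertation's): AHP derives criteria weights from a pairwise comparison matrix, and a SMART-style weighted sum then ranks the suppliers.

```python
# AHP criteria weights from a pairwise comparison matrix (principal eigenvector),
# followed by a SMART-style weighted sum over supplier scores. Illustrative data.
import numpy as np

# Pairwise comparisons for three criteria: price, quality, delivery time.
pairwise = np.array([
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
])
eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # AHP priority vector

# SMART-style scoring: each supplier rated 0-100 on each criterion.
suppliers = {"Supplier A": [70, 90, 60],
             "Supplier B": [85, 60, 80],
             "Supplier C": [60, 80, 90]}
ranking = sorted(((np.dot(weights, s), name) for name, s in suppliers.items()),
                 reverse=True)
print("criteria weights:", weights.round(3))
for score, name in ranking:
    print(f"{name}: {score:.1f}")
```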
Abstract:
Current consumption-metering practice by electricity utilities is largely based on monthly consumption readings. The metering device continuously accumulates consumption, so the monthly consumption is estimated as the difference between the current and the previous readings. Power systems planning, however, often needs consumption data for shorter periods, namely when planning Demand Response programs. The work presented in this paper is based on the application of typical consumption profiles that are previously defined for a certain power system area. Such profiles are then used to estimate the 15-minute consumption of a given consumer or consumer type.
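A minimal sketch of the profile-based estimation idea (the profile shape and figures are placeholders, not the paper's): a monthly reading is distributed over 15-minute intervals in proportion to a normalised typical consumption profile for that consumer type.

```python
# Distribute a monthly energy reading over 15-minute intervals using a
# normalised typical consumption profile (placeholder numbers).
import numpy as np

intervals_per_day = 24 * 4                       # 96 fifteen-minute intervals
days = 30
monthly_kwh = 240.0                              # metered monthly consumption

# Hypothetical daily profile: low at night, peaks in the morning and evening.
t = np.arange(intervals_per_day)
daily_shape = 1.0 + 0.6 * np.exp(-((t - 32) / 8) ** 2) + 0.9 * np.exp(-((t - 78) / 10) ** 2)
profile = np.tile(daily_shape, days)
profile /= profile.sum()                         # normalise so weights sum to 1

estimate_kwh = monthly_kwh * profile             # 15-minute consumption estimates
print("intervals:", estimate_kwh.size, "| total:", round(estimate_kwh.sum(), 1), "kWh")
print("peak 15-min estimate:", round(estimate_kwh.max(), 3), "kWh")
```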
Abstract:
Master's dissertation in Integrated Quality, Environment and Safety Management
Risk Acceptance in the Furniture Sector: Analysis of Acceptance Level and Relevant Influence Factors
Abstract:
Risk acceptance has been broadly discussed in relation to hazardous activities and/or technologies. A better understanding of risk acceptance in occupational settings is also important; however, studies on this topic are scarce. It seems important to understand what level of risk stakeholders consider sufficiently low, how stakeholders form their opinions about risk, and why they adopt a certain attitude toward risk. Accordingly, the aim of this study is to examine risk acceptance with regard to occupational accidents in furniture industries. The safety climate analysis was conducted through the application of the Safety Climate in Wood Industries questionnaire. Judgments about risk acceptance, trust, risk perception, benefit perception, emotions, and moral values were measured. Several models were tested to explain occupational risk acceptance. The results showed that the level of risk acceptance decreased as the risk level increased. High-risk and death scenarios were assessed as unacceptable. Risk perception, emotions, and trust had an important influence on risk acceptance. Safety climate was correlated with risk acceptance and with other variables that influence risk acceptance. These results are important for the risk assessment process in terms of defining risk acceptance criteria and strategies to reduce risks.
Abstract:
This thesis presents a literature review on physics-based modelling of power semiconductors, followed by a performance analysis of two stochastic methods, Particle Swarm Optimization (PSO) and Simulated Annealing (SA), when used for the efficient identification of parameters of physics-based power semiconductor device models. Knowledge of the values of these parameters for each device is essential for an accurate simulation of the semiconductor's dynamic behaviour. The parameters are extracted step by step during transient simulation and play a relevant role. Another interesting aspect of this thesis is that, in recent years, modelling methods for power devices offering high accuracy and low execution time have emerged, based on the Ambipolar Diffusion Equation (ADE) for power diodes and implemented in MATLAB within a formal optimisation strategy. The ADE is solved numerically under several injection conditions, and the model is developed and implemented as a subcircuit in the IsSpice simulator. Depletion-layer widths, total device area and doping level, among others, are some of the parameters extracted from the model. Parameter extraction is an important part of model development. The goal of parameter extraction and optimisation is to determine the model parameter values that minimise the differences between a set of measured characteristics and the results obtained by simulating the device model. This minimisation process is often called fitting the model characteristics to measurement data. The implemented algorithm, PSO, is a promising and efficient heuristic optimisation technique, proposed by Kennedy and Eberhart and based on social behaviour. The proposed techniques are found to be robust and capable of reaching a solution that is accurate and global. The performance of the proposed technique has been compared with the previously implemented SA algorithm, using experimental data to extract parameters of real devices from measured I-V characteristics. To validate the model, a comparison between the results of the developed model and another previously developed model is presented.
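A minimal sketch of the general fitting idea (not the thesis's implementation): PSO particles explore a parameter space and are scored by the error between a simulated and a "measured" I-V curve. The diode equation, bounds and data below are toy stand-ins for the physics-based model and real measurements.

```python
# Minimal PSO sketch for fitting device-model parameters to an I-V curve.
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(0.0, 0.8, 50)

def diode_current(v, i_s, n, vt=0.02585):
    """Toy diode equation standing in for the physics-based device model."""
    return i_s * (np.exp(v / (n * vt)) - 1.0)

# Synthetic "measured" curve with a little noise.
measured = diode_current(v, i_s=1e-12, n=1.8) * (1 + 0.01 * rng.normal(size=v.size))

def cost(params):
    i_s, n = params
    return np.mean((diode_current(v, i_s, n) - measured) ** 2)

# Standard PSO loop (inertia 0.7, cognitive and social factors 1.5).
lo, hi = np.array([1e-14, 1.0]), np.array([1e-10, 2.5])
pos = rng.uniform(lo, hi, size=(30, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((30, 2)), rng.random((30, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("estimated I_s, n:", gbest)
```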