48 results for People management model
Abstract:
Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reaction to events must be done in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a globally feasible QoS level while allowing individual nodes to adapt autonomously under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, as this adversely impacts the service levels provided at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism where coordination tasks need to be time dependent. As the reported results demonstrate, it requires fewer messages to be exchanged and reaches a globally acceptable near-optimal solution faster than other available approaches.
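The feedback mechanism described above can be illustrated with a minimal sketch (not the paper's actual protocol): each node repeatedly raises its local QoS level while the shared capacity allows it (positive feedback) and backs off when the aggregate allocation exceeds capacity (negative feedback), so a feasible global level emerges without central control. The function name and the scalar capacity model are illustrative assumptions.

```python
# Illustrative sketch of feedback-based decentralised coordination.
# Each node greedily raises its QoS level while capacity allows
# (positive feedback) and backs off when the global budget is
# exceeded (negative feedback). Not the paper's actual protocol.

def coordinate(levels, capacity, rounds=50, step=1):
    levels = list(levels)
    for _ in range(rounds):
        for i in range(len(levels)):
            total = sum(levels)
            if total + step <= capacity:
                levels[i] += step                     # positive feedback: room to grow
            elif total > capacity:
                levels[i] = max(0, levels[i] - step)  # negative feedback: back off
    return levels

final = coordinate([0, 0, 0], capacity=9)
```

Starting from zero, the three nodes converge round-robin to an even split that exactly fills the capacity budget.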
Abstract:
This work aims to carry out an intervention in the Human Resources area of the project's host organization. It was in this context that we identified the Centro Social e Paroquial de S. Martinho de Brufe for its implementation. The diagnosis carried out identified the Human Resources Management System as the opportunity for intervention. Considering the requirements defined by the Social Responses Quality Assessment Model (MAQRS), a diagnosis of the project's host organization was performed. This was followed by the exact specification of the identified opportunity and the strategic and operational planning of the intervention. The next phase involved the implementation of the project. We concluded with the evaluation and presentation of the measures needed to accomplish the purpose we set out to achieve. The evaluation results show that the planning and implementation of the project were efficient and effective, since the final audit revealed no non-conformities in the intervention project. As the purpose of the project is to ensure that the Centro Social e Paroquial de S. Martinho de Brufe meets all the requirements of Criterion 2 – People of the Social Responses Quality Assessment Model (MAQRS) of the Instituto da Segurança Social, so as to successfully submit the certification process in July 2014, the following document contains all the procedures needed to guarantee its success. The Centro Social e Paroquial de S. Martinho de Brufe has the next six months (January to June 2014) to present evidence of formalization, which is also a necessary condition preceding the submission of the certification process.
Abstract:
The paper proposes a Flexibility Requirements Model and a Factory Templates Framework to support dynamic Virtual Organization decision-makers in responding effectively to emergent business opportunities while ensuring profitability. Through the construction and analysis of the flexibility requirements model, network managers can conceive better strategies to model and breed new dynamic VOs. This paper also presents the leagility concept as a new paradigm suited to equipping network management with a hybrid approach that better tackles the performance challenges imposed by new and competitive business environments.
Abstract:
The Casa da Música Foundation, responsible for the management of the Casa da Música do Porto building, needs to obtain statistical data on the number of the building's visitors. This information is a valuable tool for the elaboration of periodical reports on the success of this cultural institution. For this reason it was necessary to develop a system capable of returning the number of visitors for a requested period of time. This represents a complex task due to the building's unique architectural design, characterized by very large doors and halls, and the sudden influx of people passing through them in the moments preceding and following the different activities occurring in the building. To achieve a technical solution to this challenge, several image processing methods for people detection with still cameras were first studied. The next step was the development of a real-time algorithm, using OpenCV libraries and computer vision concepts, to count individuals with the desired accuracy. This algorithm incorporates the scientific and technical knowledge acquired in the study of the previous methods. The themes developed in this thesis comprise the fields of background maintenance, shadow and highlight detection, and blob detection and tracking. A graphical interface was also built, to help in the development, testing and tuning of the proposed system, as a complement to the work. Furthermore, tests of the system were performed to validate the proposed techniques against a set of limited circumstances. The results obtained revealed that the algorithm was successfully applied to count the number of people in complex environments with reliable accuracy.
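The blob-detection stage mentioned above can be sketched in a self-contained way: after background subtraction yields a binary foreground mask, each connected foreground region ("blob") is a candidate person. The toy flood-fill below illustrates only this counting step on a hard-coded mask; a real pipeline would use OpenCV routines such as findContours on actual frames, and the mask and function name here are illustrative assumptions.

```python
# Illustrative blob counting on a binary foreground mask: each
# 4-connected region of 1s is one blob. Flood fill marks visited
# cells so every region is counted exactly once.

def count_blobs(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]                    # flood-fill this region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
```

On this mask the top-left cluster and the right-hand column form two separate regions, so the count is 2.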
Abstract:
This research, still at an early stage and therefore presented in poster format, intends to explain the management of organizational performance of a family business during the succession process, using the case study method. Scripts for semi-structured interviews are being developed; they will be applied to managers, owners and other workers deemed relevant to the investigation, including relatives of the company's owners. This work uses the model of organizational performance management developed by David Otley in 1999 [1], consisting of five questions that seek to explain the performance management existing in any organization.
Abstract:
The purpose of this paper is to present a framework that increases knowledge sharing and collaboration in Higher Education Institutions. The paper discusses the concept of knowledge management in higher education institutions, presenting a systematization of knowledge practices and tools to link people (students, teachers, researchers, secretariat staff, external entities) and promote knowledge sharing across several key processes and services in a higher education institution, such as: research processes, learning processes, student and alumni services, administrative services and processes, and strategic planning and management. The framework proposed in this paper aims to improve the knowledge practices and processes that foster an environment and a culture of knowledge collaboration, sharing and discovery, which should characterize an institution of higher education.
Abstract:
This dissertation aimed to implement Lean Management methodologies and evaluate their impact on the Product Development process. The approach consisted of a literature review and a state-of-the-art survey to obtain the theoretical grounding needed to implement Lean methodologies. It proceeded with a survey of the initial situation of the organization under study regarding product development activities, document and operational management practices, and support activities, carried out through questionnaires and experimental measurements. This knowledge made it possible to create a reference model for implementing Lean Management in this specific area of product development. Once implemented, the model was validated through practical experimentation and the collection of indicators. The implementation of this reference model introduced the foundations of Lean thinking in the Product and Systems Development Unit (DPS) of the INEGI organization, contributing to the creation of an environment of Respect for Humanity and Continuous Improvement. In this environment it was possible to obtain qualitative and quantitative gains in the various areas under study, contributing overall to an increase in the efficiency and effectiveness of the DPS. This efficiency gain is expected to represent an increase in the organization's installed capacity, through an annual reduction of 2290 hours of waste (6.5% of the unit's total capacity) and a significant reduction in operating costs. Some of the improvements proposed during this work, once their success was verified, went beyond the unit under study and were applied across the organization. Qualitative gains were also obtained, such as the standardization of document management practices and the centralization and streamlining of information flows.
This allowed an increase in the quality of the services provided, through the reduction of corrections and rework. Additionally, a new tool was developed that monitors the current status of projects in terms of their percentage of execution (fulfilment of objectives), deadlines and costs, and estimates project completion dates, enabling project replanning as well as the timely detection of deviations. The tool also builds a history that records the hours of effort associated with carrying out the activities/tasks of the various Product Development areas, and can therefore be used to support the future budgeting of similar activities. During the project, mechanisms were also created to compute indicators of the technical competences and intrinsic motivations of the individual members of the DPS team. These indicators can be used by project managers when defining the composition of work teams, the performers of individual project tasks, and the recipients of training actions. With this information, better use of human potential is expected and, as a consequence, an increase in the performance and personal satisfaction of the organization's human resources. This case study demonstrated that the potential for improving product development processes through Lean Management methodologies is very significant, and that these improvements translate into visible gains for the organization as well as for its members individually.
Abstract:
Electricity Markets are not only a new reality but an evolving one, as the involved players and rules change at a relatively high rate. Multi-agent simulation combined with Artificial Intelligence techniques can yield very helpful, sophisticated tools. This paper presents a new methodology for the management of coalitions in electricity markets. This approach is tested using the multi-agent market simulator MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), taking advantage of its ability to model and simulate Virtual Power Players (VPP). VPPs are represented as coalitions of agents, capable of negotiating both in the market and internally with their members, in order to combine and manage their individual specific characteristics and goals with the strategy and objectives of the VPP itself. A case study using real data from the Iberian Electricity Market is performed to validate and illustrate the proposed approach.
Abstract:
The rising usage of distributed energy resources has been creating several problems in power systems operation. Virtual Power Players arise as a solution for the management of such resources. Additionally, approaching the main network as a series of subsystems gives birth to the concepts of smart grid and microgrid. Simulation, particularly based on multi-agent technology, is well suited to modelling all these new and evolving concepts. MASGriP (Multi-Agent Smart Grid simulation Platform) is a system developed to allow in-depth studies of the mentioned concepts. This paper focuses on a laboratory test bed representing a house managed by a MASGriP player. This player is able to control a real installation, responding to requests sent by the system operators and reacting to observed events depending on the context.
Abstract:
Multi-agent approaches have been widely used to model complex systems of a distributed nature with a large number of interactions between the involved entities. Power systems are a reference case, mainly due to the increasing use of distributed energy sources, largely based on renewables, which have driven huge changes in the power systems sector. Dealing with such large-scale integration of intermittent generation sources led to the emergence of several new players, as well as the development of new paradigms, such as the microgrid concept and the evolution of demand response programs, which encourage the active participation of consumers. This paper presents a multi-agent based simulation platform that models a microgrid environment, considering several different types of simulated players. These players interact with real physical installations, creating a realistic simulation environment whose results can be observed directly in reality. A case study is presented considering the players' responses to a demand response event, resulting in an intelligent increase of consumption in order to absorb the wind generation surplus.
Abstract:
Energy resource scheduling is becoming increasingly important, as the use of distributed resources intensifies and massive adoption of electric vehicles is envisaged. The present paper proposes a methodology for day-ahead energy resource scheduling for smart grids, considering the intensive use of distributed generation and Vehicle-to-Grid (V2G). The method assumes that the energy resources are managed by a Virtual Power Player (VPP), which establishes contracts with their owners. It takes into account these contracts, the users' requirements submitted to the VPP, and several discharge price steps. The full AC power flow calculation included in the model takes network constraints into account. The influence of the following day's requirements on the day-ahead optimal solution is discussed and incorporated into the proposed model. A case study with a 33-bus distribution network and V2G is used to illustrate the good performance of the proposed method.
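The core of a resource-scheduling formulation like the one above can be hinted at with a deliberately simplified sketch: a greedy merit-order dispatch that fills demand from the cheapest resources first. This omits the paper's AC power flow, network constraints and V2G price steps entirely; the resource names and prices are invented for illustration.

```python
# Toy merit-order dispatch: cheapest resources are scheduled first
# until demand is met. A real day-ahead model (as in the paper)
# would add network constraints and an AC power flow.

def dispatch(resources, demand):
    schedule, remaining = {}, demand
    for name, capacity, cost in sorted(resources, key=lambda r: r[2]):
        used = min(capacity, remaining)   # take as much cheap energy as possible
        schedule[name] = used
        remaining -= used
    return schedule

plan = dispatch(
    [("wind", 30, 0.0), ("v2g", 20, 0.05), ("diesel", 50, 0.2)],
    demand=60,
)
```

Here wind is used in full, then V2G discharge, and the expensive diesel unit covers only the residual 10 units.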
Abstract:
The purpose of this study is to investigate the association between satisfaction with HRM practices in an organization and the workers' perceived performance. We are interested in learning whether workers who are more satisfied with the organization's practices also perceive themselves as more hardworking than others, thus confirming the happy-productive worker hypothesis from an individual perception standpoint. Data originates from a large Portuguese hospital, with a sample of 952 clinical and nonclinical hospital workers. Data was initially explored using SPSS software and later tested in AMOS software, where a multiple regression model was constructed and tested. Results indicate that overall satisfaction with HRM practices is related to the workers' perceived performance; most of the HRM satisfaction subscales are also related, except for pay and performance appraisal, which do not seem to be good predictors of the workers' perceived performance. The present study is based on a single large public hospital, and thus these findings need to be further tested in other settings. This study offers some clues regarding the areas of HRM that seem to be most related to the workers' perceived performance, and hence provides an interesting framework for managers dealing with healthcare teams. This study contributes to research on the happy-productive worker hypothesis by including seldom-used variables in the equation and taking a different perspective. Results provide new clues for investigation and practice regarding the areas of action in HRM that seem most prone to elicit perceived effort from the workers.
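The regression step described above can be illustrated at its simplest: an ordinary least-squares fit of perceived performance on a single satisfaction score. This is a minimal sketch of the statistical idea, not the study's AMOS multiple-regression model, and the sample values are invented.

```python
# Minimal ordinary least-squares fit y = a + b*x, illustrating the
# kind of satisfaction -> perceived-performance regression the study
# describes (the real model is multivariate and fitted in AMOS).

def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)   # slope from covariance / variance
    a = my - b * mx                        # intercept through the means
    return a, b

# Invented scores: satisfaction on x, perceived performance on y.
a, b = ols_fit([1, 2, 3], [2, 4, 6])
```

A positive slope b would be read as satisfaction predicting higher perceived performance.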
Abstract:
With the invention of the relational model in 1970 by E. F. Codd, the way information was managed in a database was completely revolutionized. Systems migrated from hierarchical, file-based systems to relational databases with tables, relations and records, which greatly simplified information management and led many companies to adopt this model. What E. F. Codd did not foresee was that the information a database would have to store would reach gigantic proportions, nor that the demands placed on databases would be of the same order. All of this came about with the spread of the internet, which connected anyone, anywhere in the world, who had a computer. As the number of internet users grew, the number of websites created also grew (and still grows exponentially). Search engines that used to index a few sites per day now index millions of sites per second and, more recently, social networks are also dealing with gigantic amounts of information. Both search engines and social networks concluded that a relational database is not enough to manage the enormous amount of information they produce, so a solution had to be found. That solution is NoSQL, and it is the subject of this thesis. This document aims to define and present the problem relational databases face when dealing with large volumes of data, and to introduce the limits of the relational model, which only recently became evident with the emergence of movements such as Big Data, the growing number of websites appearing every day, and the large number of social network users.
It will also illustrate the solution adopted so far by the major consumers of high-volume data, such as Google and Facebook, describing its characteristics, advantages, disadvantages and the other concepts associated with the NoSQL model. This thesis also intends to show that the NoSQL model is a reality already used in some companies, and to identify the main programming-level changes and the resulting good practices that the NoSQL model brings. Finally, the thesis concludes by explaining that NoSQL is a way of implementing an application's persistence that belongs to the new model of information persistence.
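One of the programming-level changes the thesis alludes to is working with schema-less documents instead of fixed relational tables. The toy in-memory store below illustrates that idea only: documents with different fields live side by side and are queried by field value. The class and method names are invented for illustration and do not correspond to any specific NoSQL product.

```python
# Toy schema-less document store, illustrating the NoSQL idea that
# documents in one collection need not share a fixed schema.

class DocumentStore:
    def __init__(self):
        self._docs = {}
        self._next_id = 0

    def insert(self, doc):
        """Store a copy of the document and return its generated id."""
        self._next_id += 1
        self._docs[self._next_id] = dict(doc)
        return self._next_id

    def find(self, **criteria):
        """Return all documents whose fields match every criterion."""
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]

db = DocumentStore()
db.insert({"name": "Ana", "city": "Porto"})
db.insert({"name": "Rui"})          # no "city" field: schemas may differ
```

A relational table would force both rows into one schema; here the second document simply omits the field, and queries on it match nothing.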
Abstract:
Purpose: This exploratory research evaluates whether there is a relationship between the number of years since an organization achieved ISO 9001 certification and the highest level of recognition received by the same organization under the EFQM Business Excellence Model. Methodology/Approach: After a state-of-the-art review, a detailed comparison between both models was made. Fifty-two Portuguese organizations were considered, and Spearman's Rho correlation coefficient was used to investigate the possible relationships. Findings: The conclusion is that there is indeed a moderate positive correlation between these two variables: the higher the number of years of ISO 9001 certification, the higher the results of the organization's EFQM model evaluation and recognition. This supports the assumption that the ISO 9001 International Standard, by incorporating many of the principles present in the EFQM Business Excellence Model, is consistent with this model and can be considered a step in that direction. Research Limitation/implication: Due to the dynamic nature of these models, which may change over time, and the possible time delays between implementation and results, more in-depth studies, such as an experimental design or a longitudinal quasi-experimental design, could be used to confirm the results of this investigation. Originality/Value of paper: This research gives additional insights from conjunct studies of both models. The use of external evaluation results produced by independent EFQM assessors minimizes the possible bias of previous studies assessing the value of ISO 9001 certification.
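The Spearman's Rho statistic used in the study above can be computed directly from the classical formula rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), where d is the difference between the two rank orderings. The sketch below implements that formula for tie-free data (ties require the averaged-rank variant); it is an illustration of the statistic, not the study's analysis, and the sample values are invented.

```python
# Spearman's rank correlation for tie-free data:
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
# where d_i is the difference between the ranks of x_i and y_i.

def spearman_rho(xs, ys):
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank                  # 1-based rank of each value
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented data: years since certification vs. EFQM recognition level.
rho = spearman_rho([2, 5, 9, 12], [1, 2, 3, 4])
```

Perfectly monotone data yields rho = 1.0; a moderate positive rho, as the study reports, would fall between those extremes on real data.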
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration into the aforementioned area. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of the existing analysis is used to ensure that the derived mappings (i) guarantee the fulfilment of timing constraints posed on worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing for, e.g., energy/thermal management, fault tolerance and/or performance reasons.