Abstract:
Continuous process improvement programs are increasingly the choice of companies seeking to remain competitive. By implementing these programs it is possible to bring simplicity and standardization to processes and, consequently, to reduce the costs of internal quality-related waste. Quality improvement tools and the tools associated with Lean Thinking are an important pillar in the success of any continuous process improvement program. These tools are useful means of analysing, controlling and organizing the data needed for sound decision-making in organizations. The main objective of this project is the design and implementation of a quality improvement program at Eurico Ferreira, S.A., based on the assessment of customer satisfaction and the application of 5S. In this context, the work was theoretically grounded in Quality Management, Lean Thinking and several tools from both fields. The company's business area to be addressed was then selected. After this selection, an initial diagnosis of the process was carried out, identifying several improvement points to which Lean Thinking tools were applied, namely Value Stream Mapping and the 5S methodology. The former made it possible to build a current-state map of the process, representing all the actors as well as the flow of materials and information throughout the process. The 5S methodology made it possible to act on waste, identifying and implementing several improvements in the process. It was concluded that the implementation of these tools contributed effectively to the continuous improvement of process quality, and the coordination team decided to extend the scope of the project to the remaining warehouses of the company's logistics centre. Based on customer satisfaction, expressed through the favourable evolution of the service-level agreement, it can be stated that the implemented tools have produced very positive results in the short term.
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Directing.
Abstract:
Developing robust multilingual resources to meet the growing complexity of intra- and inter-organizational processes is itself a complex undertaking, one that demands higher quality in the ways organizations interact and share resources, for example through greater involvement of the different stakeholders in effective and innovative forms of collaboration. Several problems and difficulties arise along the way: in the case of building multilingual lexical databases, one must design an architecture capable of addressing a wide range of linguistic issues, such as polysemy, lexical patterns and translation equivalents. These issues arise in the construction of both terminological resources and multilingual ontologies. In the case of building an ontology in different languages, the process on which we focus our attention, the questions and the complexity increase, given the type and purposes of the semantic artefact, the elements to be localized (concepts and conceptual relations) and the context in which the localization process takes place. With this article we therefore intend to analyse the concept and process of localization in the context of ontology-based knowledge management systems, taking into account the central role of terminology in the localization process, the different approaches and models that have been proposed, and the language-based tools that support the implementation of the process. Finally, we seek to establish some parallels between the traditional localization process and the process of ontology localization, in order to better situate and define the latter.
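As a purely hypothetical illustration of the elements to be localized (our sketch in Python, not the article's), a multilingual concept entry might pair a language-independent identifier with language-specific labels for both the concept and its conceptual relations:

    # Hypothetical sketch: a minimal multilingual ontology entry. The
    # identifiers and labels are invented for illustration; they are
    # not taken from the article.
    concept = {
        "id": "C042",                               # language-independent concept
        "labels": {"pt": "armazém", "en": "warehouse", "fr": "entrepôt"},
        "relations": [
            {"type": "is_part_of", "target": "C007",  # conceptual relation
             "labels": {"pt": "é parte de", "en": "is part of"}},
        ],
    }

Localizing such an artefact means translating the label sets while keeping the identifiers and the conceptual structure intact, which is where the parallel with traditional localization breaks down.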
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors (such a platform is referred to as a two-type platform). We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring the processor speedup each algorithm needs (upper-bounded by 1+α/2 for SA and 1+α for SA-P) in order to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than their theoretical bounds indicate. Finally, we consider a special case in which no task utilization in the given task set exceeds one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not degrade the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
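Neither SA nor SA-P is reproduced in this abstract; as a hedged illustration only, the following Python sketch (names and numbers are hypothetical, and per-type utilizations are simplified to a flat list) computes the parameter α defined above and the two speedup bounds derived from it:

    # Sketch of the quantities in the abstract, not of SA/SA-P themselves.
    def alpha(utilizations):
        """Maximum task utilization among those no greater than 1."""
        eligible = [u for u in utilizations if u <= 1.0]
        return max(eligible) if eligible else 0.0

    utils = [0.3, 0.8, 1.4, 0.6]   # example task utilizations (illustrative)
    a = alpha(utils)               # -> 0.8 (1.4 exceeds 1, so it is excluded)
    sa_bound = 1 + a / 2           # SA: processors 1 + alpha/2 times faster suffice
    sap_bound = 1 + a              # SA-P: processors 1 + alpha times faster suffice
    print(sa_bound, sap_bound)     # 1.4 1.8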
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors (such a platform is referred to as a t-type platform). We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, where the same restriction on task migration applies (intra-migrative), given a platform in which only one processor of each type is 1 + α×(t−1)/t times faster and (ii) LPGNM succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), given a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all the task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state-of-the-art.
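A quick consistency check, ours rather than the authors': for t = 2 the LPGIM bound reduces to 1 + α×(2−1)/2 = 1 + α/2, matching the two-type intra-migrative bound quoted in the previous abstract.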
Abstract:
Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the guarantee that if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4×(1 + MAXP×⌈(|P|×MAXP)/min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case in which each task requests at most one resource, the bound of LP-EE-vpr collapses to 4×(1 + ⌈|R|/min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
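Read as fractions, the two bounds are straightforward to evaluate. A hedged Python sketch follows; the function names are ours, and MAXP and |P| are treated as given inputs rather than derived from the tasks' resource sets as the abstract describes:

    import math

    def speedup_general(maxp, p_size, m):
        # m is the list [m_1, ..., m_t] of per-type processor counts
        return 4 * (1 + maxp * math.ceil(p_size * maxp / min(m)))

    def speedup_one_resource(r_size, m):
        # special case: each task requests at most one resource
        return 4 * (1 + math.ceil(r_size / min(m)))

    print(speedup_one_resource(r_size=3, m=[2, 4]))   # 4 * (1 + ceil(3/2)) = 12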
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Prof. Sandrina Teixeira.
Abstract:
This work evaluates the possibility of using spent coffee grounds (SCG) for biodiesel production and other applications. An experimental study was conducted with different solvents, showing that a lipid content of up to 6 wt% can be obtained from SCG. Results also show that, besides biodiesel production, SCG can be used as a fertilizer, as it is rich in nitrogen, and as a solid fuel, with a higher heating value (HHV) equivalent to that of some agricultural and wood residues. The extracted lipids were characterized for acid value, density at 15 °C, viscosity at 40 °C, iodine number, and HHV, properties that are negatively influenced by the water content and the solvents used in lipid extraction. Results suggest that for lipids with a high free fatty acid (FFA) content, the best procedure for conversion to biodiesel would be a two-step process of acid esterification followed by alkaline transesterification, instead of a single step of direct transesterification with an acid catalyst. The biodiesel was characterized for iodine number, acid value, and ester content. Although these quality parameters were not within the limits of the NP EN 14214:2009 standard, SCG lipids can still be used for biodiesel if blended with higher-quality vegetable oils before transesterification, or the biodiesel produced from SCG can be blended with higher-quality biodiesel or even with fossil diesel, in order to meet the standard's requirements.
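The two-step route suggested above can be summarized by the usual overall reactions of standard biodiesel chemistry (the catalyst types shown are the conventional choices, not specified in the abstract):

    Step 1, acid-catalyzed esterification:        FFA + CH3OH → FAME + H2O
    Step 2, base-catalyzed transesterification:   triglyceride + 3 CH3OH → 3 FAME + glycerol

The first step lowers the FFA content so that the base catalyst in the second step is not consumed by saponification, which is why a single-step acid route is less attractive for high-FFA feedstocks such as SCG lipids.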
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Auditing, under the supervision of Doutora Alcina Dias and Doutora Ana Paula Lopes.
Abstract:
Scientific dissertation for the degree of Master in Civil Engineering, specialization in Buildings.
Abstract:
Supervisor: Doutora Maria Alexandra Pacheco Ribeiro da Costa.
Abstract:
Master's in Informatics Engineering, specialization in Graphics Systems and Multimedia.
Abstract:
This book discusses in detail the CMOS implementation of energy harvesting. The authors describe an integrated, indoor light energy harvesting system based on a controller circuit that dynamically and automatically adjusts its operation to the actual light conditions of the environment where the system is placed. The system is intended to power a sensor node, enabling an autonomous wireless sensor network (WSN). Although designed to cope with indoor light levels, the system can also work with higher levels, making it an all-round light energy harvesting system. The discussion includes experimental data obtained from an integrated manufactured prototype which, in conjunction with a photovoltaic (PV) cell, serves as a proof of concept of the desired energy harvesting system.
Abstract:
Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). The method is based on vertex component analysis (VCA), a geometry-based and highly parallelizable approach. VCA is a fast and accurate method that extracts endmember signatures from large hyperspectral datasets without any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, up to 24 times.
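The paper's GPU implementation is not shown in the abstract; as a hedged, simplified CPU illustration of why VCA parallelizes well, the following NumPy sketch performs the core geometric iteration. The per-pixel projection (the `f @ Y` line) is the data-parallel step that maps naturally onto a GPU; the structure is illustrative, not the authors' implementation (real VCA also works in a dimensionally reduced subspace):

    import numpy as np

    def vca_sketch(Y, p, seed=0):
        """Simplified VCA-style endmember extraction.
        Y: (bands, pixels) data matrix; p: number of endmembers."""
        rng = np.random.default_rng(seed)
        n_bands, _ = Y.shape
        E = np.zeros((n_bands, p))            # endmember signatures found so far
        indices = []
        for i in range(p):
            w = rng.standard_normal(n_bands)  # random direction
            if i > 0:                         # make w orthogonal to span of E[:, :i]
                A = E[:, :i]
                w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
            f = w / np.linalg.norm(w)
            projections = f @ Y               # one dot product per pixel (GPU-friendly)
            k = int(np.argmax(np.abs(projections)))  # most extreme pixel
            indices.append(k)
            E[:, i] = Y[:, k]                 # its spectrum is the next endmember
        return E, indices

Because each iteration reduces to dense linear algebra over all pixels at once, the workload maps directly onto the massively parallel execution model that GPUs provide.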
Use of green roofs with intensive vegetation, extensive vegetation, and urban vegetable gardens in buildings
Abstract:
Final Master's project for the degree of Master in Civil Engineering.