902 results for Orthographic projection


Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Systems and Science

Relevance:

10.00%

Publisher:

Abstract:

Several social and economic factors favour the application of home automation technologies in buildings. In residential buildings in particular, users tend to install systems for security and environmental control, irrigation mechanisms and alarms. Following the marketing premise that identifies as good practice the design of products and services that satisfy the needs reported by their users, this work is based on the creation of a home automation system, controlled remotely through an Android application, whose first goal is the control of the lamps of a dwelling. The KNX.TP protocol is used for communication among the home automation devices available at ISEP, which constitute the automation environment of this work. To implement remote control of these devices over the internet, the work focuses on the development of an IP-KNX interface, using as control hardware an Arduino Mega 2560, an Ethernet interface board for Arduino, the KNX integration board, and a web server running PHP. For demonstration purposes, an application for the Android OS was created that controls the lamps of the KNX network. Several programming languages were used in this work: C++ in the Arduino firmware, PHP on the web server, and Java + XML in the Android application.
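
As a rough illustration of the control chain described above, the sketch below shows the kind of HTTP request a client (the role played by the Android application) could issue to the PHP web server, which would then forward the command to the Arduino-based IP-KNX interface. It is a Python stand-in for the request flow only; the endpoint path, parameter names, group address and reply convention are assumptions, not taken from the dissertation.

```python
# Minimal sketch of a client request to a hypothetical PHP gateway that
# forwards lamp commands to the Arduino-based IP-KNX interface.
# The URL, parameter names and KNX group address are illustrative only.
import requests

GATEWAY_URL = "http://example.local/knx_gateway.php"  # hypothetical endpoint

def set_lamp(group_address: str, on: bool) -> bool:
    """Ask the gateway to switch a KNX-controlled lamp on or off."""
    payload = {
        "group": group_address,   # e.g. "1/0/1", a KNX group address
        "value": 1 if on else 0,  # DPT 1.001 switch value
    }
    response = requests.get(GATEWAY_URL, params=payload, timeout=5)
    response.raise_for_status()
    return response.text.strip() == "OK"  # assumed gateway reply convention

if __name__ == "__main__":
    if set_lamp("1/0/1", on=True):
        print("Lamp switched on")
```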

Relevance:

10.00%

Publisher:

Abstract:

Imaging of brain dopamine transporters with 123I-FP-CIT single photon emission tomography has become an important tool in the diagnosis and evaluation of parkinsonian syndromes. Although the Ordered Subset Expectation Maximization (OSEM) image reconstruction algorithm is the method most recommended in the literature, Filtered Back Projection (FBP) is still used because of its speed. The aim of this work is to investigate the influence of the FBP reconstruction parameters on semiquantification in 123I-FP-CIT brain studies, in comparison with the values obtained with the recommended OSEM reconstruction.
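
To make the kind of comparison concrete, the sketch below reconstructs a simulated 2D slice with FBP using different filters (scikit-image's iradon) and computes a simple specific binding ratio, (striatal counts - background counts) / background counts, over hypothetical ROI masks. The phantom, ROIs and filter list are illustrative assumptions; they are not the acquisition or semiquantification protocol used in the study.

```python
# Sketch: effect of the FBP filter on a simple semiquantification measure.
# Phantom, ROIs and filters are illustrative, not the study's actual protocol.
import numpy as np
from skimage.transform import radon, iradon

# Crude 128x128 phantom: uniform background plus two "striatal" hot spots.
size = 128
yy, xx = np.mgrid[:size, :size]
phantom = np.zeros((size, size))
phantom[(xx - 64) ** 2 + (yy - 64) ** 2 < 55 ** 2] = 1.0  # background uptake
phantom[(xx - 48) ** 2 + (yy - 54) ** 2 < 8 ** 2] = 4.0   # left "striatum"
phantom[(xx - 80) ** 2 + (yy - 54) ** 2 < 8 ** 2] = 4.0   # right "striatum"

theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=theta, circle=True)

striatum = ((xx - 48) ** 2 + (yy - 54) ** 2 < 8 ** 2) | ((xx - 80) ** 2 + (yy - 54) ** 2 < 8 ** 2)
background = (xx - 64) ** 2 + (yy - 95) ** 2 < 12 ** 2    # hypothetical reference region

for filter_name in ("ramp", "hann", "hamming", "shepp-logan"):
    recon = iradon(sinogram, theta=theta, filter_name=filter_name, circle=True)
    sbr = (recon[striatum].mean() - recon[background].mean()) / recon[background].mean()
    print(f"{filter_name:12s} specific binding ratio = {sbr:.2f}")
```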

Relevance:

10.00%

Publisher:

Abstract:

Thesis for obtaining the degree of Doctor in Economics, with a specialisation in Business Economics

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for obtaining the degree of Master in Entrepreneurship and Internationalisation. Supervisors: Prof. Doutor José de Freitas Santos and Prof.ª Doutora Maria Clara Dias Pinto Ribeiro

Relevance:

10.00%

Publisher:

Abstract:

The objective of large investments in telecommunication networks is to bring economies closer together and put an end to asymmetries. The most isolated regions could be the beneficiaries of this new wave of technological investment spreading through the territories. The new economic scenarios created by globalisation make high-capacity backbones and a coherent information society policy two instruments that could change a region's fate and launch it into a context of economic development. Technology can give international projection to services or products and can be the differentiating element between a national and an international economic strategy. Networks and their flows are thus becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and localisation models could be today's challenge. In physical, real space, location is defined by two or three geographical co-ordinates. In the virtual space of networks, or cyberspace, geography seems unable to define location, because it lacks a good model. In an attempt to solve this problem, and building on geographical theories and concepts, new fields of study have emerged; Internet Geography, Cybergeography and the Geography of Cyberspace are only three examples. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses across the Portuguese territory. This work shows the great potential and applicability of this indicator for studies of Internet dissemination and regional development. The Portuguese territory is seen in a completely new form: the distribution of IP addresses under the country-code top-level domain (.pt) can reveal new regional hierarchies. The spatial concentration or dispersion of top-level domains appears to be a good instrument for reflecting the info-structural dynamics and economic development of a territory, especially at the regional level.
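
A small illustration of how such a spatial distribution can be summarised: given counts of .pt registrations aggregated by region, the sketch below computes each region's share and a Herfindahl-Hirschman concentration index. The region names are real, but the figures are invented placeholders, not the paper's data.

```python
# Sketch: summarising the regional concentration of .pt registrations.
# The counts below are invented placeholders, not the paper's data.
domains_per_region = {          # hypothetical counts of .pt domains by NUTS II region
    "Norte": 5200,
    "Centro": 2100,
    "Lisboa": 11800,
    "Alentejo": 400,
    "Algarve": 700,
}

total = sum(domains_per_region.values())
shares = {region: count / total for region, count in domains_per_region.items()}

# Herfindahl-Hirschman index: 1/n for an even spread, 1 if one region holds everything.
hhi = sum(share ** 2 for share in shares.values())

for region, share in sorted(shares.items(), key=lambda item: -item[1]):
    print(f"{region:10s} {share:6.1%}")
print(f"HHI = {hhi:.3f}")
```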

Relevance:

10.00%

Publisher:

Abstract:

The Information Society plays an important role in all kinds of human activity, inducing new forms of economic and social organisation and creating knowledge. Over the last twenty years of the 20th century, large investments in telecommunication networks were made to bring economies closer together and put an end to asymmetries. The most isolated regions were the beneficiaries of this new wave of technological investment spreading through the territories. The new economic scenarios created by globalisation make high-capacity backbones and a coherent information society policy two instruments that could change a region's fate and launch it into a context of economic development. Technology can give international projection to services and products and can be the differentiating element between a national and an international economic strategy. Networks and their flows are thus becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and localisation models could be today's challenge. In physical, real space, location is defined by two or three geographical co-ordinates. In the virtual space of networks, or cyberspace, geography seems unable to define location, because it lacks a good model. In an attempt to solve this problem, and building on geographical theories and concepts, new fields of study have emerged; Internet Geography is one example. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses across the Portuguese territory. This work shows the great potential and applicability of this indicator for regional development studies; at the same time, the IP address distribution of the country-code top-level domain (.pt for Portugal) can reveal familiar economic patterns, reflecting territorial rigidity or, conversely, new regional hierarchies. The spatial concentration or dispersion of top-level domains appears to be a good instrument for analysing the info-structural dynamics and economic development of a territory, especially at the regional level. It also shows that information technologies are essential for innovation and competitive advantage.
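
One way to turn the same indicator into a regional hierarchy is a location quotient: a region's share of .pt registrations divided by its share of population, with values above 1 flagging regions that host more registrations than their demographic weight would suggest. The sketch below uses invented domain counts and populations purely for illustration.

```python
# Sketch: location quotient of .pt domain registrations per region.
# Domain counts and populations are invented placeholders.
domains = {"Norte": 5200, "Centro": 2100, "Lisboa": 11800, "Alentejo": 400, "Algarve": 700}
population = {"Norte": 3.6e6, "Centro": 2.2e6, "Lisboa": 2.9e6, "Alentejo": 0.7e6, "Algarve": 0.45e6}

total_domains = sum(domains.values())
total_population = sum(population.values())

for region in domains:
    lq = (domains[region] / total_domains) / (population[region] / total_population)
    marker = "over-represented" if lq > 1 else "under-represented"
    print(f"{region:10s} LQ = {lq:4.2f}  ({marker})")
```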

Relevance:

10.00%

Publisher:

Abstract:

Phonological development was assessed in six alphabetic orthographies (English, French, Greek, Icelandic, Portuguese and Spanish) at the beginning and end of the first year of reading instruction. The aim was to explore contrasting theoretical views regarding: the question of the availability of phonology at the outset of learning to read (Study 1); the influence of orthographic depth on the pace of phonological development during the transition to literacy (Study 2); and the impact of literacy instruction (Study 3). Results from 242 children did not reveal a consistent sequence of development as performance varied according to task demands and language. Phonics instruction appeared more influential than orthographic depth in the emergence of an early meta-phonological capacity to manipulate phonemes, and preliminary indications were that cross-linguistic variation was associated with speech rhythm more than factors such as syllable complexity. The implications of the outcome for current models of phonological development are discussed.

Relevance:

10.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other approaches, using a maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27], have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
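
The core difficulty discussed above, that abundance fractions sum to one and are therefore statistically dependent, can be reproduced in a few lines. The sketch below, under simplified assumptions (random endmember signatures, Dirichlet-distributed abundances, additive Gaussian noise), generates linear mixtures x = M a + n with a_i >= 0 and sum_i a_i = 1, and runs FastICA on them; the recovered components can then be compared with the true abundances, which in general they do not match. This is only an illustration of the phenomenon, not the chapter's own experimental setup.

```python
# Sketch: why ICA struggles with hyperspectral abundances that sum to one.
# Synthetic setup (random signatures, Dirichlet abundances); not the chapter's data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_bands, n_endmembers, n_pixels = 50, 3, 5000

M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))        # endmember signatures
A = rng.dirichlet(alpha=np.ones(n_endmembers), size=n_pixels)  # abundances, rows sum to 1
noise = 0.01 * rng.standard_normal((n_pixels, n_bands))
X = A @ M.T + noise                                            # observed spectra (pixels x bands)

ica = FastICA(n_components=n_endmembers, random_state=0)
S_est = ica.fit_transform(X)                                   # estimated "independent" sources

# Compare each estimated source with each true abundance fraction.
corr = np.corrcoef(np.hstack([S_est, A]).T)[:n_endmembers, n_endmembers:]
print("abs(correlation), estimated sources vs true abundances:")
print(np.round(np.abs(corr), 2))
```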

Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and, then, hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
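
A stripped-down version of the projection step at the heart of this family of methods can be written compactly: at each iteration the data are projected onto a direction orthogonal to the subspace spanned by the endmembers found so far, and the pixel with the largest projection is taken as the next endmember. The sketch below implements only this geometric idea on synthetic data containing pure pixels; it omits the signal-subspace estimation, SNR-dependent projections and the other details of the published VCA algorithm.

```python
# Sketch of the iterative orthogonal-projection idea behind VCA-type
# endmember extraction. Synthetic data with pure pixels; this omits the
# signal-subspace estimation and other details of the published algorithm.
import numpy as np

def extract_endmembers(X, p, seed=0):
    """X: (n_pixels, n_bands) spectra; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    E = np.zeros((p, n_bands))                 # extracted endmember signatures
    indices = []
    for k in range(p):
        direction = rng.standard_normal(n_bands)
        if k > 0:
            # Make the random direction orthogonal to the endmembers found so far.
            Q, _ = np.linalg.qr(E[:k].T)       # orthonormal basis of span(E)
            direction -= Q @ (Q.T @ direction)
        idx = int(np.argmax(np.abs(X @ direction)))  # pixel with extreme projection
        E[k] = X[idx]
        indices.append(idx)
    return E, indices

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.uniform(size=(4, 60))                    # 4 true endmembers, 60 bands
    A = rng.dirichlet(np.ones(4) * 0.3, size=2000)   # abundances, rows sum to 1
    A[:4] = np.eye(4)                                # guarantee one pure pixel per endmember
    X = A @ M + 0.001 * rng.standard_normal((2000, 60))
    E, idx = extract_endmembers(X, p=4)
    print("indices of selected (near-)pure pixels:", idx)
```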

Relevance:

10.00%

Publisher:

Abstract:

The rising price of electricity and the eventual end of fossil fuels, combined with Portugal's need to reduce its external energy dependence, make investment in renewable energy urgent. In this scenario, the cost of the energy bill has become an increasingly decisive competitiveness factor for Portuguese companies, due to the consecutive energy price increases of recent years and to the rise of value added tax (VAT) from 6% to 23%. Another important aspect is energy efficiency as an instrument for reducing electricity consumption. These two measures, the use of renewable energy and the increase of energy efficiency, are extremely important for reducing greenhouse gas (GHG) emissions. Consequently, companies will have to invest in producing their own energy from renewable sources, in order to achieve sustainable development together with a lower energy bill. This dissertation proposes the sizing of a hybrid system combining photovoltaic and wind technology, with and without battery energy storage, suited to covering part of the consumption of a company in the plastics sector. The system was sized by characterising the company's consumption through data collection and readings at the installation site. In parallel, several manufacturers were surveyed in order to identify the most suitable equipment, considering photovoltaic panels, wind turbines, inverters and batteries. Based on the data collected at the company and on the wind and solar potential of the Porto district, together with the technical characteristics of the selected equipment, the hybrid system was designed using a simulation and optimisation tool for hybrid systems, the Hybrid Optimization Model for Electric Renewables (HOMER). Several simulations of the chosen configurations and comparative studies among them are presented, with the objective of reducing electricity consumption from the grid. In addition, two configurations using photovoltaic technology only were studied, allowing a comparison between a hybrid system and a system with a single renewable source. The results focus on daily, monthly and annual performance, as well as on the individual production of each technology. Finally, the technical and economic feasibility of the configurations was studied.
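
For orientation, the sketch below reproduces, in a very simplified form, the kind of hourly energy balance that a tool such as HOMER performs for this configuration: photovoltaic plus wind production is compared with the plant's load, and a battery bank absorbs or supplies the difference within its capacity limits, with the remainder imported from the grid. The profiles, capacities and efficiency are invented placeholders, not the values sized in the dissertation.

```python
# Sketch of a simplified hourly energy balance for a PV + wind + battery system.
# Profiles, capacities and efficiency are invented placeholders.
import numpy as np

hours = np.arange(24)
pv = np.clip(40 * np.sin((hours - 6) / 12 * np.pi), 0, None)   # kW, daylight bell curve
wind = 15 + 10 * np.random.default_rng(0).random(24)            # kW, hypothetical wind output
load = 60 + 20 * ((hours >= 8) & (hours <= 20))                 # kW, factory load profile

battery_capacity = 200.0   # kWh usable
soc = 0.5 * battery_capacity
efficiency = 0.9           # round-trip charging efficiency (lumped)
grid_import = 0.0

for h in hours:
    surplus = pv[h] + wind[h] - load[h]       # kWh over one hour
    if surplus >= 0:
        soc = min(battery_capacity, soc + surplus * efficiency)
    else:
        deficit = -surplus
        from_battery = min(soc, deficit)
        soc -= from_battery
        grid_import += deficit - from_battery

print(f"Energy still imported from the grid: {grid_import:.1f} kWh/day")
print(f"Battery state of charge at end of day: {soc:.1f} kWh")
```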

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented to obtain a Master's degree in Biotechnology

Relevance:

10.00%

Publisher:

Abstract:

In a globalised market, the continuous search for competitive advantage is a crucial factor in the success of organisations. Continuous process improvement is a usual approach, since the results of these improvements translate directly into product quality. In this context, the Failure Mode and Effects Analysis (FMEA) methodology is widely used, especially for its proactive character, which allows process errors to be identified and prevented. The more effectively this tool is applied, the more the organisation benefits. When used effectively, the Process FMEA, besides being a powerful method of process analysis, supports continuous improvement and cost reduction [1]. The objective of this dissertation was to evaluate the effectiveness of the Process FMEA tool in an organisation certified to ISO/TS 16949. The proposed methodology is based on the analysis of real data, that is, comparing the failures observed in the field with the failures that had been identified in the FMEA. By analysing the failures identified and not identified during the FMEA, and the projection of those failures in the field, it is possible to determine how effective the FMEA was and to identify factors that limit its best use. The study is organised in three phases: the first presents the proposed methodology, with the definition of a flowchart of the evaluation process and the metrics used; the second applies the proposed model to two case studies; and the last consists of an individual, comparative and global analysis aimed not only at comparing results but also at identifying weak points in the execution of the FMEA. The results of the case studies indicate that the FMEA tool has been used effectively, since a significant number of potential failures were identified and avoided. However, some failures that were not identified in the FMEA appeared at the customer, as did some failures that had been identified and still reached the customer. These failures translate into poor quality and costs for the business, so improvement actions are proposed. It can be concluded that good use of the FMEA can be an important factor in the quality of customer service, with an impact on costs as well.
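
The comparison described above can be reduced to a simple count-based metric: of the failure modes that actually reached the customer, how many had already been anticipated in the Process FMEA. The sketch below computes that coverage ratio on invented failure lists; the failure names and the threshold for calling the FMEA effective are illustrative assumptions, not the dissertation's data or metrics.

```python
# Sketch: coverage of field failures by failure modes anticipated in the Process FMEA.
# Failure lists and the 80% threshold are illustrative, not the dissertation's data.
fmea_failure_modes = {          # failure modes listed in the Process FMEA
    "flash on parting line",
    "short shot",
    "contamination",
    "wrong label",
    "dimensional deviation",
}
field_failures = [              # failure modes reported by the customer
    "short shot",
    "contamination",
    "burn mark",                # not anticipated in the FMEA
    "short shot",
    "wrong label",
]

anticipated = [f for f in field_failures if f in fmea_failure_modes]
coverage = len(anticipated) / len(field_failures)

print(f"Field failures anticipated by the FMEA: {coverage:.0%}")
if coverage >= 0.8:
    print("FMEA usage considered effective under this (illustrative) threshold.")
else:
    print("Gap detected: feed unanticipated failure modes back into the FMEA.")
```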

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: The starting point was a corpus of 200 technical papers and documents published over the ten years 1999-2008 by Brazilian and international organizations dedicated to the control of leprosy. A study was then undertaken to investigate its possible future evolution using resources drawn from scenario analysis. DESIGN: The methodological reconstruction was qualitative in nature, based on a bibliographic review and on content analysis techniques. The latter were employed in a documental, categorical, contingency- and frequency-based format, in compliance with appropriate and pertinent conditions. RESULTS: Important elements concerning epidemiological and operational aspects were recovered, together with their respective perspectives. CONCLUSIONS: It is projected that maintaining the disease's present incidence levels will pose economic and sanitary challenges, confronting issues that range from the neoliberal model of global societal organization to the specific competences of the actions taken by health teams in the field.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation was carried out within the Master's programme in Industrial Engineering and Management of the Escola Superior de Estudos Industriais e de Gestão, in Vila do Conde. The main theme of the project is the optimisation of in-house logistics processes, based on a project carried out by Rangel Distribuição e Logística, S.A. on the premises of its client, Continental Mabor S.A. The project aims at merging two of the client's warehouses, prompted by the need to reclaim the internal finished-goods warehouse in order to enlarge the production area. A literature review was first conducted on the topics most relevant to the project, namely optimisation and continuous improvement. The Rangel Group and Rangel Distribuição e Logística, S.A. are then presented, so as to frame the project and its objective. The methodology adopted, a case study, allowed the application of concepts and tools used in the literature in this context, such as optimisation and continuous improvement tools and Kaizen-Lean best practices. In the diagnosis of the current system, a process flow mapping and a detailed description of the layout of the two warehouses were produced: the finished-goods warehouse (APA) and the external finished-goods warehouse (APAE), together with all the technical and human resources required. Several limitations were encountered during the project, including constraints imposed by the client, such as not approving a study for a new warehouse layout; only the replication of the existing one was approved. This created constraints in the management of the project. Costs increased significantly, although they are not presented for confidentiality reasons, mainly due to the need to acquire new reach trucks and additional batteries for them, given the long distances to be covered. Finally, the future system was designed according to the client's actual needs, taking into account the optimisation of resources and Lean management. The implementation of the daily Kaizen methodology was prepared, to start in 2016 together with the new APAE project. During this design, problems and implications for the project were identified, as well as possible improvements.