953 results for What-if Analysis
Abstract:
Dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto to obtain the degree of Master in Entrepreneurship and Internationalization. Supervised by Professora Doutora Ana Azevedo. Manuela Patrício
Abstract:
This dissertation describes the survey and optimization of the manufacturing process for a plastic automotive part. The optimization aims to make better use of human resources, reduce associated costs, and eliminate activities along the process, such as the storage of semi-finished product (which ceases to exist), logistics flows, and the dedicated assembly station for this part. The methodology centers on tools for analyzing and improving production processes: the process diagram, the spaghetti diagram, PDCA, task-level time studies, and a heuristic for balancing the future workstation for the part under study. The process diagram, the spaghetti diagram, and the task time measurements captured the current state of the production process. Analysis of that state shows considerable waste of labor at the two workstations, injection and assembly. Eliminating the dedicated assembly station and merging the injection and assembly processes into a single workstation, balanced with a heuristic, yields very significant gains. Applying the PDCA cycle, built around an action plan, will make this change possible and successful. The intent of this project is to show that the current state can always be improved when the right tools are used to analyze and propose improvements that bring short-term gains to the company. Following this suggestion, the company can begin a new cycle in which a spirit of improvement is present every day throughout the organization.
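The abstract does not name the balancing heuristic used. A minimal sketch of one common choice, the largest-candidate-rule heuristic, with invented task times and no precedence constraints:

```python
# Hypothetical illustration of line balancing by the largest candidate
# rule: assign tasks to stations, longest task first, never exceeding
# the cycle time. Task names and times are invented for demonstration
# and precedence constraints are ignored for brevity.

def balance(tasks, cycle_time):
    """Assign tasks to stations, longest first, without exceeding cycle_time."""
    stations = []
    for name, t in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for st in stations:
            if sum(t2 for _, t2 in st) + t <= cycle_time:
                st.append((name, t))
                break
        else:
            stations.append([(name, t)])  # open a new station
    return stations

# Invented task times in seconds for a merged injection + assembly station.
tasks = {"inject": 30, "trim": 12, "assemble_clip": 18, "inspect": 10, "pack": 8}
for i, st in enumerate(balance(tasks, cycle_time=40), 1):
    print(f"Station {i}: {st}")
```

With these numbers the five tasks fit into two stations, which is the kind of consolidation the dissertation reports when the assembly post is merged into the injection post.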
Abstract:
A new method, based on linear correlation and phase diagrams, was developed for processes such as sedimentation, where the deposition phase can have a variable duration (represented by repeated values in a series) and where erosion can play an important role by deleting values from a series. The sampling process itself can produce repeated values (a thick stratum sampled twice) or deleted values (a thin stratum falling between two consecutive samples). We developed a mathematical procedure which, based on the evolution of chemical composition with depth, establishes the boundaries and the periodicity of different sedimentary environments. The basic tool is no more than a linear correlation analysis that allows us to detect possible evolution rules connected with cyclical phenomena within time series (treating depth as time), with the final objective of prediction. A very interesting discovery was the phenomenon of repeated sliding windows that represent quasi-cycles of a series of quasi-periods. An accurate forecast can be obtained inside a quasi-cycle: the remaining elements of the cycle can be predicted with a probability related to the number of repeated and deleted points. Because this is an innovative methodology, its efficiency is being tested in several case studies, with remarkable results that show its efficacy. Keywords: sedimentary environments, sequence stratigraphy, data analysis, time series, conditional probability.
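A minimal sketch of the lag-correlation idea the method builds on: correlate the series with shifted copies of itself and look for the lag where Pearson r peaks. The sinusoidal "depth" series is invented for illustration; the published procedure additionally handles repeated and deleted values and phase diagrams.

```python
# Hedged sketch: detect a quasi-period in a depth series by lagged
# linear (Pearson) correlation. The test series is synthetic.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def best_lag(series, max_lag):
    """Lag with the highest correlation between the series and its shift."""
    scores = {lag: pearson(series[:-lag], series[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)

# Synthetic series with period 12 plus a small deterministic ripple.
series = [math.sin(2 * math.pi * i / 12) + 0.1 * math.sin(i) for i in range(120)]
print(best_lag(series, max_lag=30))  # a lag at the period, or a multiple of it
```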
Abstract:
This paper analyzes Knowledge Management (KM) as a political activity carried out by the world's major political leaders. We examine whether KM is practiced at the macro-political level, and how. The question is interesting because, living as we do in a knowledge society in the Information Era, it seems more or less obvious that political leaders should also practice KM. However, we know of no previous study on KM and world leaders, and this paper is a first step toward filling that gap. Our methodology is a literature review: since this is a first, preliminary study, we use data found on the Internet and in databases such as EBSCO. We divide the analysis into two main parts: theoretical ideas first, then an application. The second part is itself divided into two segments: past and present times. We find, not surprisingly, that KM always has been and remains pervasive in the activity of world leaders, and that it has become more and more diverse as power itself has become more and more disseminated in the world. The study has the limitation of relying on insights and texts rather than on interviews. We nevertheless believe this kind of analysis is worthwhile and that such studies may help improve the world's democracies.
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social as part of the requirements for obtaining the degree of Master in Strategic Management of Public Relations.
Abstract:
This project analyses the introduction of competition mechanisms into the Portuguese health system from the standpoint of its basic legal framework – the Constitution and the Health Bases Law – with particular attention to the hospital setting. We aim to assess whether the implementation of market tools is foreseen in those statutes, and therefore permitted, or whether, on the other hand, our legal framework is hostile to their development. The study relied essentially on bibliographical research, which runs through the chapters forming the conceptual framework, and on hermeneutics, applied to the description of the basic legal framework and to the analysis of the working hypotheses. The results show that, as a rule, the basic legal framework of the Portuguese health system is permissive toward the introduction of competition mechanisms; some of them even find legal echo in provisions dating from the late 1960s. This permissiveness is demonstrable both through statutes that directly provide for a given tool and through the absence of such provision, which in our legal system is not synonymous with prohibition.
Abstract:
This work provides an assessment of layerwise mixed models using a least-squares formulation for the coupled electromechanical static analysis of multilayered plates. In agreement with three-dimensional (3D) exact solutions, due to compatibility and equilibrium conditions at the layer interfaces, certain mechanical and electrical variables must fulfill interlaminar C0 continuity, namely: displacements, in-plane strains, transverse stresses, electric potential, in-plane electric field components, and transverse electric displacement (if no potential is imposed between layers). Hence, two layerwise mixed least-squares models are investigated here, with two different sets of chosen independent variables: Model A, developed earlier, fulfills a priori the interlaminar C0 continuity of all the aforementioned variables, taken as independent variables; Model B, newly developed here, reduces the number of independent variables but still fulfills a priori the interlaminar C0 continuity of displacements, transverse stresses, electric potential, and transverse electric displacement, taken as independent variables. The predictive capabilities of both models are assessed by comparison with 3D exact solutions, considering multilayered piezoelectric composite plates of different aspect ratios, under an applied transverse load or surface potential. It is shown that both models are able to predict an accurate quasi-3D description of the static electromechanical analysis of multilayered plates for all aspect ratios.
Abstract:
The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. To optimize the node distribution, the Direct MultiSearch (DMS) multi-objective optimization method is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a wide variety of node distributions.
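A minimal 1-D sketch of radial-basis-function collocation with a multiquadric kernel, showing where the shape parameter c enters; the test function, node set, and value of c are invented, and the paper itself treats 2-D plate bending and tunes both the nodes and c with DMS.

```python
# Hedged sketch of RBF interpolation with a multiquadric kernel.
# The 1-D problem is synthetic; only the role of the shape parameter
# c in the kernel sqrt(r^2 + c^2) is being illustrated.
import numpy as np

def rbf_interpolate(nodes, values, query, c):
    """Fit multiquadric RBF weights on `nodes`, then evaluate at `query`."""
    phi = lambda r: np.sqrt(r**2 + c**2)          # multiquadric kernel
    A = phi(np.abs(nodes[:, None] - nodes[None, :]))
    w = np.linalg.solve(A, values)                # collocation system A w = f
    return phi(np.abs(query[:, None] - nodes[None, :])) @ w

nodes = np.linspace(0.0, 1.0, 11)                 # invented node set
f = np.sin(2 * np.pi * nodes)                     # invented test function
query = np.linspace(0.0, 1.0, 101)
approx = rbf_interpolate(nodes, f, query, c=0.3)
err = np.max(np.abs(approx - np.sin(2 * np.pi * query)))
print(f"max error with c=0.3: {err:.3e}")
```

Varying c (or the node positions) changes both the accuracy and the conditioning of the collocation matrix, which is exactly the trade-off a multi-objective optimizer such as DMS can explore.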
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six individual groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three child groups were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure-volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether there exists a link between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, also showing the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
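The box-counting area measure can be sketched as follows for a loop given as a closed polygon: overlay a grid and sum the areas of the boxes whose centers fall inside the loop. The unit circle here is an invented stand-in for a pressure-volume or pseudophase-plane loop.

```python
# Hedged sketch of box-counting the area enclosed by a loop.
# The loop (a sampled unit circle) is synthetic.
import math

def inside(poly, x, y):
    """Ray-casting point-in-polygon test."""
    n, hit = len(poly), False
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

def box_count_area(poly, step):
    """Sum the areas of grid boxes whose centers fall inside the loop."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    count, x = 0, min(xs)
    while x < max(xs):
        y = min(ys)
        while y < max(ys):
            if inside(poly, x + step / 2, y + step / 2):
                count += 1
            y += step
        x += step
    return count * step * step

# Unit circle sampled as a 200-vertex polygon: the estimate approaches pi.
loop = [(math.cos(2 * math.pi * k / 200), math.sin(2 * math.pi * k / 200))
        for k in range(200)]
print(box_count_area(loop, step=0.04))
```

Shrinking `step` refines the estimate at quadratic cost, which is why a fixed grid resolution is typically chosen when comparing loop areas across patient groups.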
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
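As one concrete instance of the constrained least-squares family cited above, a sum-to-one constrained least-squares (SCLS) unmixing sketch: the endmember spectra and the test pixel are synthetic, and nonnegativity of the abundances is omitted for brevity.

```python
# Hedged sketch of linear spectral unmixing under the sum-to-one
# constraint (closed-form SCLS). Endmembers and pixel are invented,
# noise-free, and not from any real sensor.
import numpy as np

def scls(M, y):
    """Least-squares abundances a minimizing ||y - M a|| s.t. 1^T a = 1."""
    G = np.linalg.inv(M.T @ M)
    a_ls = G @ M.T @ y                        # unconstrained solution
    ones = np.ones(M.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ G @ ones)
    return a_ls + lam * (G @ ones)            # enforce the constraint

rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, size=(50, 3))       # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true                                # noise-free mixed pixel
a = scls(M, y)
print(np.round(a, 3))                         # recovers the true abundances
```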
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the greatest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
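The PPI skewer-scoring step described above can be sketched as follows, on synthetic mixtures of three invented endmembers with the pure pixels appended to the data; MNF preprocessing is omitted.

```python
# Hedged sketch of the PPI idea: project every spectral vector onto
# many random skewers and count how often each pixel is an extreme.
# All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
E = rng.uniform(0, 1, size=(3, 20))            # 3 endmembers, 20 bands
A = rng.dirichlet(np.ones(3), size=500)        # abundances, rows sum to 1
X = np.vstack([A @ E, E])                      # 500 mixed pixels + 3 pure pixels

scores = np.zeros(len(X))
for _ in range(1000):                          # 1000 random skewers
    skewer = rng.normal(size=X.shape[1])
    proj = X @ skewer
    scores[np.argmax(proj)] += 1               # extreme in +skewer direction
    scores[np.argmin(proj)] += 1               # extreme in -skewer direction

# The purest pixels (here, the appended pure rows 500-502) score highest,
# because extremes of a linear projection over a simplex lie at vertices.
print(np.argsort(scores)[-3:])
```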
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
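The iterative orthogonal-projection extraction described above can be sketched as follows. This is a simplification of the published algorithm (no subspace identification or SNR-dependent projection), run on synthetic data that contains pure pixels.

```python
# Hedged sketch of greedy endmember extraction: repeatedly project the
# data onto a direction orthogonal to the endmembers found so far and
# take the extreme of the projection as the next endmember. Synthetic data.
import numpy as np

def extract(X, p, rng):
    """Return indices of p candidate endmember pixels in X (pixels x bands)."""
    found = []
    E = np.zeros((X.shape[1], 0))             # endmember spectra found so far
    for _ in range(p):
        d = rng.normal(size=X.shape[1])       # random direction
        if E.shape[1]:
            # keep only the component orthogonal to the found endmembers
            d -= E @ np.linalg.lstsq(E, d, rcond=None)[0]
        found.append(int(np.argmax(np.abs(X @ d))))
        E = np.column_stack([E, X[found[-1]]])
    return found

rng = np.random.default_rng(3)
E_true = rng.uniform(0, 1, size=(3, 20))      # 3 endmembers, 20 bands
A = rng.dirichlet(np.ones(3), size=400)       # mixed-pixel abundances
X = np.vstack([A @ E_true, E_true])           # pure pixels at rows 400-402
print(sorted(extract(X, 3, rng)))             # should recover rows 400-402
```

Each projection extreme lands on a simplex vertex, and orthogonalizing against the endmembers already found zeroes their projections, so the next extreme is a new vertex.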
Abstract:
Presented at Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's degree in Conservation and Restoration of Textiles
Abstract:
Susceptibility of snails to infection by certain trematodes and their suitability as hosts for continued development has been a bewildering problem in host-parasite relationships. The present work emphasizes our interest in snail genetics to determine what genes or gene products are specifically responsible for susceptibility of snails to infection. High molecular weight DNA was extracted from both susceptible and non-susceptible snails within the same species Biomphalaria tenagophila. RAPD was undertaken to distinguish between the two types of snails. Random primers (10 mers) were used to amplify the extracted DNA by the polymerase chain reaction (PCR) followed by polyacrylamide gel electrophoresis (PAGE) and silver staining. The results suggest that RAPD represents an efficient means of genome comparison, since many molecular markers were detected as genetic variations between susceptible and non-susceptible snails.
Abstract:
Dissertation to obtain the degree of Doctor in Electrical and Computer Engineering, specialization of Collaborative Networks
Abstract:
Pain transmission at the spinal cord is modulated by descending actions that arise from supraspinal areas, which collectively form the endogenous pain control system. Two key areas of the endogenous pain control system have a circumventricular location, namely the periaqueductal grey (PAG) and the locus coeruleus (LC). The PAG plays a crucial role in descending pain modulation, as it conveys the input from higher brain centers to the spinal cord. As for the LC, it is involved in descending pain inhibition through direct noradrenergic projections to the spinal cord. In the context of neurological defects, several diseases may affect the structure and function of the brain. Hydrocephalus is a congenital or acquired disease characterized by an enlargement of the ventricles which leads to a distortion of the adjacent tissues, including the PAG and LC. Usually, patients suffering from hydrocephalus present dysfunctions in learning and memory as well as motor deficits. It remains to be evaluated whether lesions of the periventricular brain areas involved in pain control during hydrocephalus affect descending pain control and, hence, pain responses. The studies included in the present thesis used an experimental model of hydrocephalus (the rat injected in the cisterna magna with kaolin) to study descending modulation of pain, focusing on the two circumventricular regions referred to above (the PAG and the LC). In order to evaluate the effects of kaolin injection into the cisterna magna, we measured the degree of ventricular dilatation in sections encompassing the PAG by standard cytoarchitectonic stainings (thionin staining). For the LC, immunodetection of the noradrenaline-synthesizing enzyme tyrosine hydroxylase (TH) was performed, due to the noradrenergic nature of LC neurons. In general, rats with kaolin-induced hydrocephalus presented a greater dilatation of the 4th ventricle, along with a tendency toward a larger PAG area.
Due to the validated role of the c-fos protooncogene as a marker of neuronal activation, we also studied neuronal activation in the several subnuclei which compose the PAG, namely the dorsomedial, dorsolateral, lateral, and ventrolateral (VLPAG) parts. A decrease in the number of neurons immunoreactive for Fos protein (the product of activation of the c-fos protooncogene) was detected in the VLPAG of rats injected with kaolin, whereas the remaining PAG subnuclei did not present changes in Fos-immunoreactive nuclei. Increases in the levels of TH in the LC, namely at the rostral parts of the nucleus, were detected in hydrocephalic animals. The following pain-related parameters were measured: 1) pain behavioural responses in a validated inflammatory pain test (the formalin test) and 2) the nociceptive activation of spinal cord neurons. A decrease in behavioral responses was detected in rats with kaolin-induced hydrocephalus, namely in the second phase of the test (inflammatory phase). This is the phase of the formalin test in which motor behaviour is less important, which matters since a semi-quantitative analysis of the motor performance of rats injected with kaolin indicates that these animals may present some motor impairments. Collectively, the results of the behavioral studies indicate that rats with kaolin-induced hydrocephalus exhibit hypoalgesia. A decrease in Fos expression was detected at the superficial dorsal layers of the spinal cord in rats with kaolin-induced hydrocephalus, further indicating that hydrocephalus decreases nociceptive responses. It remains to be ascertained whether this is due to alterations in the PAG and LC of the rats with kaolin-induced hydrocephalus, which may affect descending pain modulation. It also remains to be evaluated what mechanisms underlie the increased pain inhibition at the spinal dorsal horn in the hydrocephalic rats.
Regarding the VLPAG, the decrease in neuronal activity may impair descending modulation. Since the LC has higher levels of TH in rats with kaolin-induced hydrocephalus, which also appears to increase the noradrenergic innervation in the spinal dorsal horn, it is possible that an increase in the release of noradrenaline at the spinal cord accounts for the pain inhibition. Our studies also point to the need for detailed study of patients with hydrocephalus, namely regarding their pain thresholds, and for imaging studies focused on the structure and function of pain control areas in the brain.
Abstract:
Coaching is a process that helps one or more individuals define their goals, personal or professional, and learn how to achieve them. There is currently growing interest in and demand for people with experience in this area (called coaches) from companies, sports teams, schools, and other organizations seeking higher performance. To help the participants in the process, this document demonstrates the need for a support tool that allows coaches to manage their professional activity better. The research and study carried out address this need by developing an intelligent computer system to support the coach, equipped with a user-centered interface. Before starting the development of an intelligent system, it is necessary to survey and present the state of the art, specifically on human-computer interaction, user profile modeling, and the coaching process, which provides the theoretical grounding for choosing a suitable development methodology. The phases of the chosen interface development model, usability engineering, are then presented: it begins with a detailed analysis, proceeds to structuring the knowledge obtained and applying established guidelines, and ends with usage tests and the corresponding user feedback. The prototype developed distinguishes users with different characteristics through a classification by levels, and it manages the whole coaching process, whether applied to other people or to the user themselves. This user classification makes the interaction between system and users different and adapted to each one's needs.
The results of the usage tests with a practical case, together with the questionnaires administered, make it possible to determine whether the model was successful and works correctly, and what should be changed in the future to ease interaction and satisfy each user's needs.