971 results for Process system value
Abstract:
The central objective of this study is to show how the internal auditor's work contributes to the enterprise risk management process. To that end, the concept of Internal Auditing is discussed as an activity intended to add value to the organization by helping it achieve its objectives, providing timely and relevant information for decision-making. Internal Control is also considered, in the sense that organizations have different internal control needs depending on their size and the complexity of their business. Internal control is a process developed by management bodies to provide reasonable assurance that established objectives are met; it falls to the internal auditor to assist in this regard, that is, to assess the adequacy and effectiveness of the Internal Control System. Finally, the importance of Risk Management is addressed: in this context, organizations have as a priority commitment the implementation of mechanisms to assess and manage the risks that may affect their operations and the achievement of their defined strategic objectives. Internal Auditing provides assurance on the effectiveness of organizations' risk management activities, ensuring that the main business risks are being managed appropriately and that internal control systems are operating effectively. Still within risk management, the COSO ERM framework is discussed, an important instrument for organizations insofar as it improves the performance of the internal controls in place and moves them toward a risk management process. Brief reference is also made to the Sarbanes-Oxley Act (SOX), which brought about a profound reform in the preparation of financial reports, in the detailed treatment of internal control within organizations, and in the transparency of the information organizations disclose.
Abstract:
This paper presents the design of a mobile cockpit system (MCS) for smartphones that assists electric bicycle (EB) cyclists in a smart-city environment. The system introduces a mobile application (MCS App) whose goal is to provide useful personalized information to the cyclist related to the EB's use, including EB range prediction for the intended path, management of the cycling effort performed by the cyclist, handling of the battery charging process, and the provision of information on available public transport. This work also introduces the EB cyclist profile concept, which is based on the analysis of historical data previously stored in a database and collected from mobile devices' sensors. The tests performed show the importance of route guidance for energy savings, and that range prediction changes significantly with the user and the route taken. The proposed system can also be used with bicycles in general.
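The abstract does not specify the range-prediction model, so the following is only a hypothetical Python sketch of how a cyclist profile learned from logged trips might feed a path-aware range estimate; all names and coefficients are illustrative, not the paper's method.

import dataclasses

@dataclasses.dataclass
class CyclistProfile:
    wh_per_km_flat: float    # mean battery energy drawn per km, from logged trips
    wh_per_m_climb: float    # extra energy per metre of climb, from the same data

def predicted_range_km(profile: CyclistProfile,
                       battery_wh_remaining: float,
                       route_km: float,
                       route_climb_m: float) -> float:
    # effective consumption on the intended path, given its length and climb
    route_wh = (profile.wh_per_km_flat * route_km
                + profile.wh_per_m_climb * route_climb_m)
    wh_per_km_effective = route_wh / route_km
    return battery_wh_remaining / wh_per_km_effective

profile = CyclistProfile(wh_per_km_flat=7.0, wh_per_m_climb=0.1)  # hypothetical values
print(f"{predicted_range_km(profile, 250.0, 12.0, 150.0):.1f} km")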
Abstract:
Cellulosic lyotropic liquid crystals have long been regarded as potential materials for producing fibers competitive with spider silk or Kevlar, yet the processing of high-modulus materials from cellulose-based precursors has been hampered by their complex rheological behavior. In this work, using the Rheo-NMR technique, which combines deuterium NMR with rheology, we investigate the high-shear-rate regimes that may be of interest for the industrial processing of these materials. Whereas the low-shear-rate regimes have already been investigated by this technique in different works [1-4], the high-shear-rate range still lacks a detailed study. This work focuses on the orientational order in the system, both under shear and during the subsequent relaxation after shear cessation, through the analysis of deuterium spectra from the deuterated solvent (water). At the shear rates analyzed, the cholesteric order is suppressed and a flow-aligned nematic is observed, which for the higher shear rates develops, after a certain time, periodic perturbations that transiently annihilate the order in the system. During relaxation, the flow-aligned nematic starts losing order owing to the onset of cholesteric helices, leading to a period of very low order in which cholesteric helices with different orientations form from the aligned nematic, followed in the final stage by an increase in order at long relaxation times corresponding to the development of aligned cholesteric domains. This study sheds light on the complex rheological behavior of chiral nematic cellulose-based systems and opens ways to improve their processing.
Abstract:
Myocardial infarction (MI) is one of the main public health problems in Portugal. Prompt intervention in the risk factors that determine cardiac health can have a positive impact on several health indicators. The ultimate purpose of that intervention is to empower patients so that they autonomously adopt health behaviours, based on heart-health-protective lifestyles, that favour the rehabilitation process. This pursuit and acquisition of health behaviour, adherence to the therapeutic regimen, should be developed in partnership with health professionals. The hospital is the MI patient's entry point into the health system, and it is at this first contact that intervention to raise awareness and promote adherence to the therapeutic regimen begins. Since nurses are the professional group that establishes a continuous relationship with the patient, it is important to characterize a set of dimensions of nurses' performance in promoting adherence to the therapeutic regimen. The study included 143 nurses from 9 hospital wards in the Lisboa e Vale do Tejo Health Region; data were collected through a self-administered questionnaire. The data showed that the nurse population is young (M = 30.5; SD = 8.0), with 49% aged 26 or under, and has little professional experience (M = 7.7; SD = 7.6), with 48.2% practising for less than 3 years. Seniority in the current ward is low (M = 4.7; SD = 4.6), with 48.9% in the ward for less than 2 years. Nurses believe they should intervene more frequently in physiological and behavioural risk factors than in psychosocial and environmental ones; their confidence in their ability to intervene in physiological and behavioural risk factors is likewise greater; and, over the last year, they intervened more frequently in physiological and behavioural risk factors than in psychosocial and environmental ones. The trial validation of the Will Scale of Anderson et al. (2004), on the capacity to intervene in cardiac health, showed that Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy supported a principal-components factor analysis. The analysis yielded 16 factors, the same number as in the original study of Anderson et al. (2004), which showed good internal consistency, with Cronbach's alpha values ranging from 0.71 to 0.98. The results reveal the need to sensitize nurses to value intervention in psychosocial and environmental risk factors in order to promote adherence to the therapeutic regimen. They also suggest that evidence-based intervention can be strengthened to improve nurses' care practices.
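The statistical pipeline described (Bartlett's sphericity test, KMO, principal-components factor extraction, Cronbach's alpha) can be reproduced in Python; the sketch below uses the factor_analyzer and pingouin packages on hypothetical item data and is not the study's own code.

import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

items = pd.read_csv("will_scale_items.csv")   # hypothetical item responses

# sampling adequacy checks reported in the abstract
chi2, p = calculate_bartlett_sphericity(items)
_, kmo = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.3g}), KMO = {kmo:.2f}")

# principal-components extraction of 16 factors, as in Anderson et al. (2004)
fa = FactorAnalyzer(n_factors=16, method="principal", rotation="varimax")
fa.fit(items)

# internal consistency of the items loading on the first factor (illustrative)
first_factor_items = items.columns[fa.loadings_[:, 0] > 0.4]
alpha, _ = pg.cronbach_alpha(items[first_factor_items])
print(f"Cronbach alpha = {alpha:.2f}")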
Abstract:
The cleaning of syngas is one of the most important challenges in the development of technologies based on biomass gasification. Tar is an undesired byproduct because, once condensed, it can cause fouling and plugging and damage downstream equipment. Thermochemical methods for tar destruction, which include catalytic cracking and thermal cracking, are intrinsically attractive because they are energetically efficient, require no moving parts, and produce no byproducts. The main difficulty with these methods is the tendency of tar to polymerize at high temperatures. An alternative to tar removal is the complete combustion of the syngas in a porous burner directly as it leaves the particle capture system. In this context, the main aim of this study is to evaluate the destruction of the tar present in syngas from biomass gasification by combustion in porous media. A gas mixture, including toluene as a tar surrogate, was used to emulate the syngas. Initially, CHEMKIN was used to assess the potential of the proposed solution. The calculations revealed complete destruction of the tar surrogate over a wide range of operating conditions and indicated that the most important reactions in the toluene conversion are C6H5CH3 + OH <-> C6H5CH2 + H2O, C6H5CH3 + OH <-> C6H4CH3 + H2O, and C6H5CH3 + O <-> OC6H4CH3 + H, and that toluene can be re-formed through C6H5CH2 + H <-> C6H5CH3. Subsequently, experimental tests were performed in a porous burner fired with pure methane and with syngas, for two equivalence ratios and three flow velocities. In these tests, the toluene concentration in the syngas varied from 50 to 200 g/Nm³. In line with the CHEMKIN calculations, the results revealed that toluene was almost completely destroyed under all tested conditions and that the process did not affect the performance of the porous burner with regard to CO, hydrocarbon, and NOx emissions.
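The study's kinetic screening was done in CHEMKIN; a comparable zero-dimensional burnout test can be sketched with the open-source Cantera library, assuming a kinetic mechanism file that contains toluene chemistry. The file name "toluene_mech.yaml", the species label C6H5CH3, and the mixture composition below are placeholders, not the study's inputs.

import cantera as ct

# Hypothetical mechanism; any scheme with toluene chemistry would do, and the
# toluene species label must match that mechanism.
gas = ct.Solution("toluene_mech.yaml")
gas.set_equivalence_ratio(0.8,
                          "H2:0.5, CO:0.4, CH4:0.08, C6H5CH3:0.02",  # doped syngas
                          "O2:0.21, N2:0.79")                        # air
gas.TP = 1400.0, ct.one_atm

reactor = ct.IdealGasConstPressureReactor(gas)
net = ct.ReactorNet([reactor])
while net.time < 0.1:          # ~100 ms residence time
    net.step()

print("residual toluene mole fraction:", reactor.thermo["C6H5CH3"].X[0])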
Abstract:
Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering
Abstract:
Project work presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Accounting and Finance, under the supervision of Mestre Paulino Manuel Leite da Silva.
Abstract:
Master's dissertation in Public Administration, supervised by Professor Doutor J. A. Oliveira Rocha, presented at the Escola de Economia e Gestão of the Universidade do Minho in 2006.
Abstract:
Master's in Mechanical Engineering – Specialization in Industrial Management
Abstract:
Master's in Mechanical Engineering - Specialization in Industrial Management
Abstract:
The choice of an information system is a critical success factor in an organization's performance: because it involves multiple decision-makers with often conflicting objectives, and several alternatives promoted with aggressive marketing, reaching a consensus is particularly complex. The main objective of this work is to analyse and select an information system to support school management, in its pedagogical and administrative components, using a multicriteria decision aid system, MMASSITI (Multicriteria Methodology to Support the Selection of Information Systems/Information Technologies), which integrates a multicriteria model that seeks to provide a systematic approach to the process of choosing Information Systems, able to produce sustained recommendations within the decision scope. Its application to a case study identified the relevant factors in the selection of a school educational and management information system and yielded a solution that allows the decision-maker to compare the quality of the various alternatives.
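The abstract does not detail MMASSITI's internal aggregation model, so the Python sketch below uses a generic weighted-sum scoring merely to illustrate the kind of multicriteria ranking such a method produces; the criteria, weights, and scores are hypothetical.

import numpy as np

criteria = ["functional fit", "cost", "vendor support", "usability"]
weights = np.array([0.4, 0.25, 0.2, 0.15])       # importance weights, sum to 1

# rows: alternative school-management systems; columns: criteria,
# all scores normalized to [0, 1], higher is better
scores = np.array([
    [0.9, 0.5, 0.7, 0.8],   # system A
    [0.7, 0.8, 0.6, 0.7],   # system B
    [0.6, 0.9, 0.8, 0.5],   # system C
])

ranking = scores @ weights                        # weighted-sum aggregate per system
for name, s in sorted(zip("ABC", ranking), key=lambda t: -t[1]):
    print(f"system {name}: {s:.3f}")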
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, usually done on a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on the vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose FPGA programmable logic is based on the Artix-7, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
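The paper's FPGA datapath is not reproduced here, but the core VCA iteration can be sketched in a few lines of NumPy: repeatedly project the data onto a direction orthogonal to the endmembers already found and take the extreme pixel as the next simplex vertex. This is a minimal software sketch; the full VCA of the literature adds an SNR-dependent subspace projection, which this simplified version omits.

import numpy as np

def vca(Y, p, seed=0):
    # Simplified vertex component analysis (VCA) sketch.
    # Y: (bands, pixels) hyperspectral data matrix; p: number of endmembers.
    # Returns the indices of the selected (purest) pixels.
    rng = np.random.default_rng(seed)
    bands, pixels = Y.shape
    E = np.zeros((bands, p))                  # endmember signatures found so far
    idx = np.zeros(p, dtype=int)
    for k in range(p):
        w = rng.standard_normal(bands)        # random direction
        # component of w orthogonal to the span of the current endmembers
        f = w - E @ np.linalg.pinv(E) @ w
        f /= np.linalg.norm(f)
        v = f @ Y                             # project all pixels onto f
        idx[k] = np.argmax(np.abs(v))         # extreme pixel = simplex vertex
        E[:, k] = Y[:, idx[k]]
    return idx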
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
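As a concrete rendering of the constrained linear unmixing problem that recurs throughout the chapter (nonnegative abundances summing to one), the Python sketch below solves the fully constrained least-squares problem using the classic trick of appending a heavily weighted sum-to-one row to a nonnegative least-squares solve. It is an illustrative aside under those assumptions, not code from the chapter.

import numpy as np
from scipy.optimize import nnls

def fcls(M, x, delta=1e3):
    # Fully constrained least-squares abundance estimate (sketch).
    # Solves x ~= M a subject to a >= 0 and sum(a) = 1 by appending a
    # weighted sum-to-one row; delta controls how strictly the
    # sum-to-one constraint is enforced.
    # M: (bands, p) endmember signature matrix; x: (bands,) pixel spectrum.
    A = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    b = np.append(x, delta)
    a, _ = nnls(A, b)
    return a

# toy example: 3 synthetic endmembers, one noisy mixed pixel
rng = np.random.default_rng(0)
M = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])
x = M @ a_true + 0.001 * rng.standard_normal(50)
print(fcls(M, x))   # close to [0.6, 0.3, 0.1]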
Abstract:
Dissertation presented to obtain a Ph.D. degree in Engineering and Technology Sciences, Systems Biology at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa
Abstract:
This dissertation focused on the technical and economic study of two future scenarios for continuing to supply thermal energy to an existing swimming-pool complex in the Vale do Tâmega region. The existing cogeneration plant has exceeded its operating licence and needs to be replaced. The two scenarios under study are the purchase of a new natural gas boiler to cover the thermal needs currently met by the existing fuel-oil boiler, or the use of a compact cogeneration system that may be available within a company of the group. In the first scenario the investment involved is about €456,640, with no revenue beyond meeting the thermal requirements; in the second scenario the results are quite different, even though an investment of €1,000,000 in the installation would be required. For this scenario, national legislation on cogeneration was surveyed and building data were collected: operating hours, number of users, electricity, heat and water consumption, pool water temperature, hall air temperature, as well as the main characteristics of the compact cogeneration installation. With this information, mass and energy balances were performed and a model of the new installation was built in process-modelling software (Aspen Plus® from AspenTech). The thermal and electrical efficiencies obtained for the new compact cogeneration plant were 38.1% and 39.8%, respectively, with 12.5% losses, giving an overall efficiency of 78%. The primary energy saving evaluated for this compact cogeneration installation was 19.6%, which allows it to be classified as highly efficient. The model made it possible to understand the energy needs, determine some costs associated with the process, and simulate the unit's operation at different ambient air temperatures (summer and winter scenarios with mean temperatures of 20 °C and 5 °C). The results showed a decrease of €1.14/h in the cost of electricity and an increase of €62.47/h in natural gas consumption during the coldest winter period, owing to the increased losses caused by the lower outdoor temperature. With this new compact cogeneration unit the total annual saving can average €267,780, assuming maintenance costs of €97,698/year. If so, the project pays back the investment after 5 years, with an NPV of €1,030,430 and an internal rate of return (IRR) of 14% (positive, considering a 3% discount rate over a 15-year lifetime). Although the initial cost is high, the economic parameters show that the project is economically viable and will generate profit for about 9 years.
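The reported figures can be cross-checked with a few lines of Python, assuming the €267,780 annual saving is taken net of the €97,698/year maintenance, a 3% discount rate, and a 15-year horizon; under these assumptions the NPV reproduces the abstract's €1,030,430 and the IRR comes out near the reported 14%.

investment = 1_000_000.0
net_saving = 267_780.0 - 97_698.0            # €/yr, saving net of maintenance
rate, years = 0.03, 15

npv = -investment + sum(net_saving / (1 + rate) ** t for t in range(1, years + 1))
print(f"NPV ~ {npv:,.0f} EUR")               # ~1,030,430 EUR, as in the abstract

# simple bisection for the internal rate of return
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    f = -investment + sum(net_saving / (1 + mid) ** t for t in range(1, years + 1))
    lo, hi = (mid, hi) if f > 0 else (lo, mid)
print(f"IRR ~ {lo:.1%}")                     # ~15%, close to the reported 14% TIR;
                                             # small gaps likely reflect cash-flow timing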