982 results for setup crossover
Abstract:
Dissertation presented to obtain the degree of Doctor in Chemical Engineering, specialty in Unit Operations and Transfer Phenomena, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Review of Scientific Instruments, Vol. 72, No. 9
Abstract:
The global warming caused by high CO2 emissions in recent years has made energy saving a global concern. However, manufacturing processes such as pultrusion necessarily require heat to cure the resin, so the only option available is to make the process as efficient as possible. Different heating systems have been used in pultrusion, but the most widely used are planar resistances. The main objective of this study is to develop an alternative heating system and compare it with the existing one. Thermography was used to define the temperature profile along the die. Finite element analysis (FEA) allowed us to understand how much energy is spent with the initial heating system. After this first approach, changes were made to the die in order to test the new heating system and to check for possible quality problems in the product. This work shows that with the new heating system a significant reduction in setup time is possible, and an energy reduction of about 57% was achieved.
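To make the energy-estimation idea concrete, the sketch below uses a 1D explicit finite-difference model of heat soaking through a die wall. This is a deliberate simplification of the full FEA used in the study, and every material and geometry value is an illustrative assumption, not a figure from the paper.

```python
import numpy as np

# 1D explicit finite-difference model of heat soaking through a die wall.
# All material and geometry values below are illustrative assumptions,
# not figures from the study; the paper itself uses full FEA.
L = 0.05                    # die wall thickness [m]
n = 50                      # grid points across the wall
dx = L / (n - 1)
alpha = 1.2e-5              # thermal diffusivity of steel [m^2/s]
k = 46.8                    # thermal conductivity [W/(m K)]
dt = 0.4 * dx**2 / alpha    # stable explicit time step [s]

T = np.full(n, 20.0)        # initial die temperature [C]
T[0] = 200.0                # heated face held at heater temperature
T_target = 180.0            # curing temperature required at mid-wall
energy, t = 0.0, 0.0        # heat per unit area [J/m^2], elapsed time [s]

while T[n // 2] < T_target:
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[-1] = T_new[-2]   # insulated outer face (zero-flux approximation)
    # heat flux entering the heated face, integrated over time
    energy += k * (T[0] - T[1]) / dx * dt
    T, t = T_new, t + dt

print(f"mid-wall reaches {T_target} C after {t:.0f} s, "
      f"using {energy / 1e6:.1f} MJ/m^2 of heat")
```

Comparing this integrated heat input for two boundary-condition setups is the same kind of bookkeeping the study performs, at full 3D fidelity, to arrive at its 57% figure.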
Abstract:
In a context where the provision of Physical Therapy and Rehabilitation care is identified as showing a regional mismatch of supply and an inequality greater than that of other health care services, as well as a lack of adequacy of prices under the current conditions of supply and demand, the present work has as its main purpose to investigate, in the field of Performance, the influence of Payment in shaping the provision of such health care. The general assumption guiding this analysis is that the strategic decisions and productive restructuring of health care organizations are conditioned by the price system. It is considered that the current Financing/Payment system constrains the quality of these services at two levels: a first constraint, as it puts their payment under the Supplementary Means of Diagnosis and Therapy (MCDTs) to be contracted by the National Health Service, thereby establishing the organizational setup of the system; and a second level of constraint, bearing on the structures of the provider organizations through the modelling it induces, notably at the level of their production. Given the lack of performance indicators characterizing this sector, which made it impossible to address both dimensions of the problem, the present study examined, with respect to the second level of constraint, the physical therapy production of three organizations that would potentially have the same supply profile, since they respond to similar characteristics of demand. The results reflect the general assumption that supported the work and leave open space for future research into the reason(s) that may lie behind the discrepancy found in the average number of treatments per session (two and a half times) in the production of the two organizations that could be compared.
Abstract:
Measurements in civil engineering load tests usually require considerable time and complex procedures, so measurements are usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement over the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load displacement tests were captured by a digital camera and processed with image correlation algorithms. Three different image processing algorithms were applied to real images taken from tests using PVC and Plexiglas specimens. The data obtained from the image processing algorithms were also compared with the data from physical sensors, and complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
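The core of such image correlation measurements is tracking a small image patch between a reference and a deformed frame. The sketch below shows that idea with plain NumPy normalized cross-correlation on synthetic images; the patch position, sizes, and search window are hypothetical and not the article's algorithms.

```python
import numpy as np

def track_patch(ref, cur, y, x, size=21, search=10):
    """Find the displacement of a (size x size) patch centred at (y, x)
    in `ref` by maximizing zero-normalized cross-correlation in `cur`."""
    half = size // 2
    tpl = ref[y-half:y+half+1, x-half:x+half+1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y+dy-half:y+dy+half+1, x+dx-half:x+dx+half+1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = (tpl * win).sum()
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Synthetic check: shift a random texture by (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((100, 100))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(track_patch(ref, cur, 50, 50))   # expected output: (2, 3)
```

Repeating this search over a grid of patches is what produces the full-field displacement map; differentiating that map yields strains.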
Abstract:
This paper presents the design of low-cost, conformal UHF antennas and RFID tags on two types of cork substrates: 1) natural cork and 2) agglomerate cork. Such RFID tags find application in wine bottle and barrel identification and, in addition, are suitable for numerous antenna-based sensing applications. This paper includes the high-frequency characterization of the selected cork substrates, considering the anisotropic behavior of such materials. In addition, the variation of their permittivity values as a function of humidity is also verified. As a proof-of-concept demonstration, three conformal RFID tags have been implemented on cork, and their performance has been evaluated using both a commercial Alien ALR8800 reader and an in-house measurement setup. The reading of all tags has been checked, and satisfactory performance has been verified, with reading ranges spanning from 0.3 to 6 m. In addition, this paper discusses how inkjet printing can be applied to cork surfaces, and an RFID tag printed on cork is used as a humidity sensor. Its performance was tested under different humidity conditions, and a good read range in excess of 3 m was achieved, along with good sensitivity: a shift of >5 dB in the tag's threshold power across the different humidity conditions.
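Read ranges like the 0.3 to 6 m reported above are commonly bounded by the textbook Friis forward-link estimate for passive UHF tags. The snippet below sketches that calculation; all parameter values are illustrative assumptions, not the measured characteristics of these cork tags.

```python
import math

def max_read_range(freq_hz, pt_w, gt_dbi, gr_dbi, p_th_w, tau=1.0):
    """Friis-based forward-link read range estimate for a passive UHF tag.
    pt_w: reader transmit power [W]; p_th_w: tag chip sensitivity [W];
    tau: power transmission coefficient of the antenna-chip match."""
    lam = 3e8 / freq_hz
    gt = 10 ** (gt_dbi / 10)
    gr = 10 ** (gr_dbi / 10)
    return lam / (4 * math.pi) * math.sqrt(pt_w * gt * gr * tau / p_th_w)

# Illustrative EU UHF RFID numbers: 866 MHz, 2 W reader, 2 dBi tag antenna,
# -14 dBm chip sensitivity, imperfect matching (tau = 0.5).
p_th = 10 ** (-14 / 10) * 1e-3   # -14 dBm in watts
print(f"{max_read_range(866e6, 2.0, 0.0, 2.0, p_th, tau=0.5):.1f} m")
```

The >5 dB shift in threshold power reported for the humidity sensor enters this formula through p_th_w, which is why humidity changes translate directly into a measurable change of read range.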
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science
Abstract:
This paper proposes a multi-agent implementation of a management system for automated negotiation of electricity allocation for charging electric vehicles (EVs) and simulates its performance. The widespread existence of charging infrastructures capable of autonomous operation is recognised as a major driver towards the mass adoption of EVs by mobility consumers. Eventually, conflicting requirements from both the power grid and EV owners require automated middleman aggregator agents to intermediate all operations, for example, bidding and negotiation, between these parties. Multi-agent systems are designed to provide distributed, modular, coordinated and collaborative management systems; therefore, they seem suitable to address the management of such complex charging infrastructures. Our solution consists of the implementation of virtual agents to be integrated into the management software of a charging infrastructure. We start by modelling the multi-agent architecture using a federated, hierarchically layered setup, as well as the agents' behaviours and interactions. Each of these layers comprises several components, for example, databases, decision-making and auction mechanisms. The implementation of the multi-agent platform and auction rules, and of models for battery dynamics, is also addressed. Four scenarios were predefined to assess the management system's performance under real usage conditions, considering different types of EV owner profiles, different infrastructure configurations and usage, and different loads on the utility grid (where real data from the concession holder of the Portuguese electricity transmission grid is used). Simulations carried out with the four scenarios validate the performance of the modelled system while complying with all the requirements. Although all of these have been performed for one charging station alone, a multi-agent design may in the future be used for the higher-level problem of distributing energy among charging stations. Copyright (c) 2014 John Wiley & Sons, Ltd.
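As an illustration of the kind of auction an aggregator agent might run, here is a minimal sketch in which EV agents bid for a limited energy budget and the aggregator allocates by descending bid price. The agent names and the greedy allocation rule are hypothetical stand-ins, not the paper's protocol.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    ev_id: str
    price: float    # willingness to pay [currency units/kWh]
    energy: float   # requested energy [kWh]

def allocate(bids, budget_kwh):
    """Greedy allocation: serve the highest bids first until the
    aggregator's energy budget for this period is exhausted."""
    allocation = {}
    remaining = budget_kwh
    for bid in sorted(bids, key=lambda b: b.price, reverse=True):
        granted = min(bid.energy, remaining)
        allocation[bid.ev_id] = granted
        remaining -= granted
        if remaining <= 0:
            break
    return allocation

bids = [Bid("ev1", 0.30, 20), Bid("ev2", 0.25, 15), Bid("ev3", 0.40, 10)]
print(allocate(bids, budget_kwh=25))   # {'ev3': 10, 'ev1': 15}
```

In a layered architecture like the one described above, such an auction would sit in one layer, with the budget itself negotiated with the grid in the layer above.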
Abstract:
The development of biopharmaceutical manufacturing processes presents critical constraints, the major one being that these molecules are synthesized by living cells, which exhibit inherent behavioural variability due to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared for monitoring plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built, based either on MIR or on NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, RMSE) for estimating host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product acetate production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data the predictive errors were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they are valid for cultivations conducted with different media compositions and different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, yielding decreases of 57% and 50% in RMSE, respectively, compared to the MIR setup. However, MIR spectroscopy could be a valid alternative in the case of optimization protocols, due to possible space constraints or the high costs associated with the use of multiple fiber optic probes for multi-bioreactor setups. In this case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples in a rapid and automatic mode.
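The monitoring models above follow a standard pattern: a PLS regression from spectra to analyte concentrations, evaluated by RMSE and R². The sketch below shows that pattern with scikit-learn on synthetic "spectra"; the real models were of course trained on MIR/NIR measurements, which are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for spectral data: 200 samples x 500 wavenumbers,
# with a latent concentration (e.g. biomass in g/L) driving a few bands.
rng = np.random.default_rng(1)
conc = rng.uniform(0, 10, 200)                      # "biomass" [g/L]
bands = np.exp(-0.5 * ((np.arange(500) - 250) / 20) ** 2)
X = conc[:, None] * bands[None, :] + 0.05 * rng.standard_normal((200, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmse = np.sqrt(np.mean((y_te - y_hat) ** 2))
r2 = pls.score(X_te, y_te)
print(f"RMSE = {rmse:.3f} g/L, R^2 = {r2:.3f}")
```

In practice one such model is calibrated per analyte (biomass, plasmid, glucose, glycerol, acetate), and the number of latent components is chosen by cross-validation rather than fixed at 5 as in this sketch.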
Abstract:
Previous work by our group introduced a novel concept and sensor design for “off-the-person” ECG, for which evidence on how it compares against standard clinical-grade equipment has been largely missing. Our objectives with this work are to characterise the off-the-person approach in light of the current ECG systems landscape, and assess how the signals acquired using this simplified setup compare with clinical-grade recordings. Empirical tests have been performed with real-world data collected from a population of 38 control subjects, to analyze the correlation between both approaches. Results show off-the-person data to be correlated with clinical-grade data, demonstrating the viability of this approach to potentially extend preventive medicine practices by enabling the integration of ECG monitoring into multiple dimensions of people’s everyday lives. © 2015, IUPESM and Springer-Verlag Berlin Heidelberg.
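The comparison between off-the-person and clinical-grade recordings ultimately comes down to correlating time-aligned signals. A minimal version of that analysis is sketched below on synthetic waveforms; the paper's actual data, alignment, and preprocessing are not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in for two time-aligned ECG channels sampled at 1 kHz:
# a common underlying waveform plus sensor-specific noise and gain.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.001)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15            # crude periodic waveform
clinical = ecg + 0.02 * rng.standard_normal(t.size)
off_person = 0.8 * ecg + 0.10 * rng.standard_normal(t.size)

r, p = pearsonr(clinical, off_person)
print(f"Pearson r = {r:.3f} (p = {p:.1e})")
```

A high correlation under this kind of test is what supports the claim that the simplified setup captures the same signal content as the clinical-grade reference.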
Abstract:
Master's in Electrical and Computer Engineering - Autonomous Systems
Abstract:
Master's in Informatics Engineering - Specialization Area in Knowledge and Decision Technologies
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
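Under the linear model with known endmember signatures, the constrained least-squares unmixing mentioned above can be sketched in a few lines. The standard trick below appends the sum-to-one constraint as a heavily weighted extra equation and enforces nonnegativity with NNLS; the endmembers are synthetic, for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(M, y, delta=1e3):
    """Fully constrained least-squares abundances for pixel spectrum y,
    given endmember matrix M (bands x endmembers): nonnegative fractions
    that approximately sum to one (sum-to-one added as a weighted row)."""
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

# Synthetic test: 3 endmembers over 50 bands, known abundances.
rng = np.random.default_rng(3)
M = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(fcls(M, y).round(3))   # close to [0.6, 0.3, 0.1]
```

The harder problem treated in this chapter is the blind case, where M itself is unknown, which is what motivates the ICA, IFA, and geometric approaches discussed next.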
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of the most nearly pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
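As a concrete example of the dimensionality-reduction step discussed above, the following sketch projects synthetic hyperspectral pixels onto their leading principal components with an SVD; the scene sizes and endmember count are illustrative only.

```python
import numpy as np

# Synthetic scene: 1,000 pixels x 200 bands generated from 4 endmembers,
# so the signal lives (up to noise) in a low-dimensional subspace.
rng = np.random.default_rng(4)
endmembers = rng.random((200, 4))
raw = rng.random((1000, 4))
abundances = raw / raw.sum(axis=1, keepdims=True)    # rows sum to one
Y = abundances @ endmembers.T + 0.01 * rng.standard_normal((1000, 200))

# PCA via SVD of the mean-centred data; keep the top-k components.
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
k = 4
Y_reduced = Yc @ Vt[:k].T                            # 1000 x k scores
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"fraction of variance kept with {k} components: {explained:.4f}")
```

Note that the sum-to-one constraint confines the centred signal to a 3-dimensional affine subspace for 4 endmembers, which is exactly the geometric structure the simplex-based methods above exploit.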
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
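The chapter's central point, that the sum-to-one constraint makes abundance fractions statistically dependent and thus violates ICA's independence assumption, is easy to check numerically. The snippet below estimates the correlation between Dirichlet-distributed abundances as a simple proxy for that dependence; the chapter itself measures mutual information via mixtures of Gaussians.

```python
import numpy as np

# Abundance fractions drawn from a symmetric Dirichlet: each row is
# positive and sums to one, exactly the constraints of linear unmixing.
rng = np.random.default_rng(5)
A = rng.dirichlet(alpha=np.ones(3), size=100_000)

# Independence would imply zero correlation between the fractions; the
# sum-to-one constraint forces them to be negatively correlated instead.
corr = np.corrcoef(A, rowvar=False)
print(corr.round(3))   # off-diagonal entries near -0.5 for 3 endmembers
```

For a symmetric Dirichlet over K endmembers, the pairwise correlation is -1/(K-1), so the dependence never vanishes no matter how many samples are observed, which is why a source model that builds the constraint in, as the mixture-of-Dirichlet approach does, is preferable to assuming independence.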
Abstract:
Dissertation presented at the Faculdade de Ciência e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
In this work, tubular fiber-reinforced specimens are tested for fatigue life. The specimens are biaxially loaded with tension and shear stresses, with load angles β of 30° and 60° and a load ratio of R = 0.1. Many factors affect the fatigue life of a fiber-reinforced material, and the main goal of this work is to study the effect of the load ratio R by obtaining S-N curves and comparing them to previous work (1). All other parameters, such as specimen production, fatigue loading frequency and temperature, are the same as in the previous tests. For every specimen, the stiffness, the specimen temperature during testing, crack counts and the final fracture mode are obtained. Prior to testing, the literature on load ratio effects on composite fatigue life was reviewed, and from that review the initial stresses to be applied in testing were estimated. In previous work (1), similar specimens were only tested at a load ratio of R = -1, and therefore the behaviour of these tubular specimens at a different load ratio is unknown. All the data acquired are analysed and compared to the previous work, emphasizing the differences found and discussing possible explanations for them. The crack counting software developed at the institute has proven useful before; however, different adjustments of the software parameters lead to different crack counts for the same picture, so a better methodology to improve the crack counting results is discussed. After specimen failure, all the data are collected and stored, and the fibre volume content of every specimen is also determined. The number of tests required to build the S-N curves is obtained according to the existing standards. Additionally, some improvements to the testing machine setup and to the procedures for future testing are identified.
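S-N curves like the ones produced in this work are commonly summarized with a Basquin-type power law fitted to the stress/cycles data. The sketch below fits that model with SciPy; the data points and fitted constants are made up for illustration and are not results from these tests.

```python
import numpy as np
from scipy.optimize import curve_fit

def basquin(N, sigma_f, b):
    """Basquin power law: stress amplitude as a function of cycles to failure."""
    return sigma_f * N ** b

# Illustrative fatigue data: cycles to failure and stress amplitude [MPa].
N = np.array([1e3, 1e4, 1e5, 5e5, 1e6])
S = np.array([310.0, 255.0, 205.0, 178.0, 165.0])

(sigma_f, b), _ = curve_fit(basquin, N, S, p0=(500.0, -0.1))
print(f"Basquin fit: sigma_f = {sigma_f:.0f} MPa, b = {b:.3f}")
print(f"predicted stress at 2e6 cycles: {basquin(2e6, sigma_f, b):.0f} MPa")
```

Comparing the fitted exponent b between the R = 0.1 and R = -1 data sets is one compact way to quantify the load ratio effect this study sets out to measure.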