963 results for Decomposition Of Rotation
Abstract:
Nowadays there is great concern among the population with eating more healthily, with a diet whose foods contain elements that do not harm health but instead strengthen it. One such element that can bring health benefits is germanium, the element studied in the present work. In this work the concentration of germanium in several foods was determined. The foods used were: asparagus, ginseng, mushrooms, radish, ginger, aloe vera and garlic. Sample decomposition was carried out with a solution of concentrated nitric acid (67%) and hydrogen peroxide (30%), and the resulting solutions were then analyzed by inductively coupled plasma mass spectrometry (ICP-MS). This technique made it possible to study the three most abundant germanium isotopes (Ge70, Ge72 and Ge74). The main result of this work is that the food with the highest germanium concentration is ginseng (243.0 ng/g), followed by garlic (152.6 ng/g). Asparagus, ginger and mushrooms showed very similar concentrations, of approximately 75 ng/g. The lowest concentrations were found in aloe vera and radish, with values of 38.16 and 21.85 ng/g respectively. From these results we can conclude that a diet rich in this element should include ginseng and garlic, since, among the foods studied, they are the richest in germanium.
Abstract:
This work falls within the domain of the energy calibration of SPT equipment, following standard EN ISO 22476-3, which is mandatory in Portugal. For this purpose an instrumented rod was used, whose instrumentation consists of strain gauges and piezoelectric accelerometers. This instrumentation is fixed to a rod segment 60 cm long, and data acquisition was performed with the SPT Analyzer® system marketed by PDI. The system records the data coming from the instrumentation: the signals of a pair of strain gauges, converted into force records (F1 and F2), and the signals of a pair of accelerometers, converted into velocity records (V1 and V2) over time. The equipment allows real-time assessment of the quality of the records and of the maximum energy transmitted to the rod in each blow, as well as of the vertical displacement of the rod string at each hammer blow. Building on this topic, the work also seeks to improve the new method for interpreting SPT test results and its application to pile design, since the prediction of pile bearing capacity remains one of the challenges of foundation engineering, as it requires estimating soil properties, the changes induced by the execution of the foundation, and knowledge of the soil-pile interaction mechanism. The new procedure is based on the principles of dynamics, breaking with the essentially empirical methodologies established so far. This new way of interpreting SPT tests, grounded in the principle of energy conservation during the driving of the SPT sampler, makes it possible to convert the Nspt value analytically into a dynamic force of reaction to penetration. The decomposition of this dynamic force allows comparative analyses between the unit resistances mobilized in the SPT sampler (model) and those mobilized in the pile (prototype).
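As a note on the calibration described above, the energy transmitted to the rod in each hammer blow is typically obtained by integrating the product of the measured force and velocity records over the duration of the blow (the force-velocity approach underlying EN ISO 22476-3); in generic notation,

    E(t) = \int_{0}^{t} F(t')\, V(t')\, \mathrm{d}t' ,

with the maximum of E(t) taken as the energy effectively delivered in that blow.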
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration, in which activities are executed for millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space, which also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
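As a rough illustration of the task model described above (the actual AWARD Tasks are Java classes; the Python names and the tuple-space operations below are hypothetical stand-ins), an activity could wrap user code as follows:

    from abc import ABC, abstractmethod

    class Task(ABC):
        """Hypothetical analogue of the generic interface an AWARD Task implements."""
        @abstractmethod
        def execute(self, inputs):
            """Receive input tokens and return output tokens; no low-level concerns here."""

    class WordCount(Task):
        def execute(self, inputs):
            # user-level logic only; data movement is handled by the enclosing activity
            return [sum(len(text.split()) for text in inputs)]

    class DictSpace:
        """Trivial in-memory stand-in for the shared tuple space (illustration only)."""
        def __init__(self):
            self._store = {}
        def put(self, key, value):
            self._store[key] = value
        def take(self, key):
            return self._store.pop(key)

    class Activity:
        """Autonomic workflow activity: takes its input from the shared space,
        runs its Task, and publishes the result for downstream activities."""
        def __init__(self, name, task, space):
            self.name, self.task, self.space = name, task, space
        def run_iteration(self, iteration):
            inputs = self.space.take(("input", self.name, iteration))
            self.space.put(("output", self.name, iteration), self.task.execute(inputs))

    space = DictSpace()
    space.put(("input", "wc", 0), ["one two three", "four five"])
    Activity("wc", WordCount(), space).run_iteration(0)
    print(space.take(("output", "wc", 0)))   # [5]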
Abstract:
At present, sanitary landfills are a solution for the management and treatment of municipal solid waste. Waste deposition gives rise to two forms of emissions over time, biogas and leachate, which result mainly from the decomposition of organic matter. One of the main constituents of biogas is methane, which has a high calorific value. The present work addresses the maximization of energy recovery in sanitary landfills, using equipment based on the Organic Rankine Cycle (ORC) for electricity production. The case study is the Suldouro energy recovery plant in Sermonde, which produces electricity from the biogas resulting from the decomposition of the organic matter deposited in the landfill. The biogas is used as fuel for the engine-generator sets, but only about 40% of the energy potential contained in the biogas is converted into electricity, with losses occurring mainly in the exhaust gas emissions and in the engine cooling water. To evaluate the potential for energy recovery from the exhaust gases, the thermodynamic performance of the ORC is assessed. For this purpose a MATLAB tool was developed, using as a model the ORC configuration with a heat recuperator. The thermodynamic properties of the working fluids were obtained by creating a subroutine that calls the CoolProp program. This program returns properties such as enthalpy, entropy, pressure and temperature at each point of the cycle, allowing the user to obtain results quickly. Economic evaluation is fundamental to the decisions of the investor and of the project's financiers. An economic analysis is therefore presented, together with a sensitivity analysis in which the most important variables were varied in order to assess the impact on the project's profitability. The tool developed provides, in a practical way, the three economic indicators that most influence the investment decision. The use of ORC systems and their benefits are not limited to maximizing energy recovery in sanitary landfills. Heat recovery for electricity production can also have an important impact in many energy-intensive sectors, contributing significantly to reducing consumption and increasing the efficiency of the entire production process.
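A minimal sketch of the kind of property lookup such a subroutine performs is shown below in Python (the thesis tool itself is written in MATLAB; the working fluid, state points and turbine efficiency are illustrative assumptions, not the plant's actual operating conditions):

    from CoolProp.CoolProp import PropsSI

    fluid = "R245fa"            # assumed ORC working fluid (illustration only)
    T_turb_in = 120.0 + 273.15  # turbine inlet temperature [K]
    p_evap = 15e5               # evaporator pressure [Pa]
    p_cond = 2e5                # condenser pressure [Pa]

    # Turbine inlet state
    h1 = PropsSI("H", "T", T_turb_in, "P", p_evap, fluid)   # specific enthalpy [J/kg]
    s1 = PropsSI("S", "T", T_turb_in, "P", p_evap, fluid)   # specific entropy [J/(kg.K)]

    # Isentropic expansion to condenser pressure, corrected by an assumed efficiency
    h2s = PropsSI("H", "S", s1, "P", p_cond, fluid)
    eta_turb = 0.80
    w_turb = eta_turb * (h1 - h2s)                          # specific turbine work [J/kg]
    print(f"Specific turbine work: {w_turb / 1000:.1f} kJ/kg")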
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, under certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting pixels to play the role of mixed sources is not straightforward.
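For concreteness, the linear mixing model referred to throughout this discussion can be summarized in generic notation (a standard statement of the model, not the chapter's exact equation) as

    \mathbf{y} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n}, \qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1,

where \mathbf{y} is the observed spectral vector at a pixel, the columns of \mathbf{M} are the p endmember signatures, \boldsymbol{\alpha} collects the abundance fractions, and \mathbf{n} is additive noise; the non-negativity and full-additivity constraints on \boldsymbol{\alpha} are precisely the source dependence, discussed next, that limits ICA and IFA.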
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
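For reference, the Dirichlet density invoked above (its standard form, not reproduced from the chapter) is supported on the probability simplex and therefore encodes both constraints automatically:

    D(\boldsymbol{\alpha}\mid\boldsymbol{\theta}) = \frac{\Gamma\!\left(\sum_{i=1}^{p}\theta_i\right)}{\prod_{i=1}^{p}\Gamma(\theta_i)}\; \prod_{i=1}^{p} \alpha_i^{\theta_i - 1}, \qquad \alpha_i \ge 0,\ \ \sum_{i=1}^{p}\alpha_i = 1;

the abundance prior in the sketched methodology is then a finite mixture \sum_k \epsilon_k\, D(\boldsymbol{\alpha}\mid\boldsymbol{\theta}_k), whose parameters are fitted together with the mixing matrix by the EM-type algorithm.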
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel of the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). The method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
Abstract:
Master's degree in Chemical Engineering - Energy Optimization in the Chemical Industry branch
Abstract:
The autonomic nervous system (ANS) is known to be an important modulator in the pathogenesis of paroxysmal atrial fibrillation (PAF). Changes in ANS control of heart rate variability (HRV) occur during orthostatism to maintain cardiovascular homeostasis. The wavelet transform has emerged as a useful tool that provides a time-frequency decomposition of the signal under investigation, enabling intermittent components of transient phenomena to be analyzed. AIM: To study HRV during head-up tilt (HUT) with wavelet transform analysis in PAF patients and healthy individuals (normals). METHODS: Twenty-one patients with PAF (8 men; age 58 +/- 14 yrs) were examined and compared with 21 normals (7 men; age 48 +/- 12 yrs). After a supine resting period, all subjects underwent passive HUT (60 degrees) while in sinus rhythm. Continuous monitoring of ECG and blood pressure was carried out (Task Force Monitor, CNSystems). Acute changes in RR-intervals were assessed by wavelet analysis, and low-frequency power (LF: 0.04-0.15 Hz), high-frequency power (HF: 0.15-0.60 Hz) and LF/HF (sympathovagal balance) were calculated for 1) the last 2 min of the supine period; 2) the 15 sec of tilting movement (TM); and 3) the 1st (TT1) and 2nd (TT2) min of HUT. Data are expressed as means +/- SEM. RESULTS: Baseline and HUT RR-intervals were similar for the two groups. Supine basal blood pressure was also similar for the two groups, with a sustained increase in PAF patients, and a decrease followed by an increase and then recovery in normals. Basal LF, HF and LF/HF values in PAF patients were 632 +/- 162 ms², 534 +/- 231 ms² and 1.95 +/- 0.39 respectively, and 1058 +/- 223 ms², 789 +/- 244 ms² and 2.4 +/- 0.36 respectively in normals (p = NS). During TM, LF, HF and LF/HF values for PAF patients were 747 +/- 277 ms², 387 +/- 94 ms² and 2.9 +/- 0.6 respectively, and 1316 +/- 315 ms², 698 +/- 148 ms² and 2.8 +/- 0.6 respectively in normals (p < 0.05 for LF and HF). During TT1, LF, HF and LF/HF values for PAF patients were 1243 +/- 432 ms², 302 +/- 88 ms² and 7.7 +/- 2.4 respectively, and 1992 +/- 398 ms², 333 +/- 76 ms² and 7.8 +/- 0.98 respectively for normals (p < 0.05 for LF). During TT2, LF, HF and LF/HF values for PAF patients were 871 +/- 256 ms², 242 +/- 51 ms² and 4.7 +/- 0.9 respectively, and 1263 +/- 335 ms², 317 +/- 108 ms² and 8.6 +/- 0.68 respectively for normals (p < 0.05 for LF/HF). The dynamic profile of HRV showed that LF and HF values in PAF patients did not change significantly during TM or TT2, and LF/HF did not change during TM but increased in TT1 and TT2. CONCLUSION: Patients with PAF present alterations in HRV during orthostatism, with decreased LF and HF power during TM, without significant variations during the first minutes of HUT. These findings suggest that wavelet transform analysis may provide new insights when assessing autonomic heart regulation and highlight the presence of ANS disturbances in PAF.
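A minimal sketch of the kind of wavelet band-power computation involved is given below (Python with PyWavelets rather than the study's own software; the band limits follow the abstract, while the sampling rate, wavelet choice and synthetic tachogram are illustrative assumptions):

    import numpy as np
    import pywt

    def wavelet_band_powers(rr_ms, fs=4.0):
        """Crude LF (0.04-0.15 Hz) and HF (0.15-0.60 Hz) power estimates from a
        continuous wavelet transform of an evenly resampled RR series (ms), at fs Hz."""
        rr = np.asarray(rr_ms, dtype=float)
        rr = rr - rr.mean()
        fc = pywt.central_frequency("morl")          # centre frequency of the Morlet wavelet
        freqs = np.linspace(0.04, 0.60, 57)          # analysis frequencies covering both bands
        scales = fc * fs / freqs                     # scales mapping onto those frequencies
        coefs, f_out = pywt.cwt(rr, scales, "morl", sampling_period=1.0 / fs)
        power = np.abs(coefs) ** 2                   # time-frequency power map
        mean_power = power.mean(axis=1)              # average over time at each frequency
        lf = mean_power[(f_out >= 0.04) & (f_out < 0.15)].sum()
        hf = mean_power[(f_out >= 0.15) & (f_out <= 0.60)].sum()
        return lf, hf, lf / hf

    # Illustrative use on a synthetic tachogram (not patient data): 0.1 Hz and 0.25 Hz oscillations
    t = np.arange(0, 120, 1.0 / 4.0)
    rr = 850 + 25 * np.sin(2 * np.pi * 0.1 * t) + 15 * np.sin(2 * np.pi * 0.25 * t)
    print(wavelet_band_powers(rr))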
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
This paper examines the effectiveness of urban containment policies in protecting forestland from residential conversion and in increasing the provision of forest public goods in the presence of irreversible investments and policy uncertainty. We develop a model of a single landowner that allows for switching between competing land uses (forestry and residential use) at some point in the future. Our results show that urban containment policies can protect forestland, even if only temporarily, from being developed, but they must be supplemented with policies that influence the length and number of harvesting cycles if the goal is to increase nontimber benefits. The threat of a development prohibition creates incentives for preemptive timber harvesting and land conversion. In particular, threatened regulation creates an incentive to shorten rotation cycles to avoid costly land-use restrictions. However, it has an ambiguous effect on forestland conversion, as the number of rotation cycles can also be adjusted to maximize the expected returns to land. Finally, in the presence of irreversibility, forestland conversion decisions should be made using real options theory rather than net present value analysis.
Abstract:
Continued economic and population growth puts additional pressure on already scarce energy sources, so there is a growing urge to adopt a sustainable plan able to meet present and future energy demands. Over the last two decades, solar trough technology has proved to be a reliable alternative to fossil fuels. Currently, the trough industry seeks, by optimizing energy conversion, to drive the cost of electricity down and thereby to position itself as a main player in the next energy age. One issue that has lately gained considerable relevance is the observation of significant heat losses in a large number of receiver modules. These heat losses have been attributed to the slow permeation of traces of hydrogen gas through the steel tube wall into the vacuum annulus. The presence of hydrogen gas in the absorber tube results from the decomposition of the heat transfer fluid due to long-term exposure to 400 °C. The permeated hydrogen acts as a heat conduction medium, leading to a decrease in the receivers' performance and thus in their lifetime. In order to prevent hydrogen accumulation, it has been common practice to incorporate hydrogen getters in the vacuum annulus of the receivers. Nevertheless, these materials are not only expensive, but their gas-absorbing capacity can be insufficient to ensure the level of vacuum required for the receivers to function. In this work, the construction of a permeation measurement device is described, together with the vulnerabilities detected during the construction process and how they were overcome. Furthermore, an experimental procedure was optimized and the permeability results obtained for different samples were evaluated. The data were compared with measurements performed by an external entity, and the reliability of the comparative data was also addressed. Finally, conclusions are drawn on the permeability results for the different sample characteristics and on the feasibility of the measurement device, and recommendations for future lines of work are made.
Abstract:
In Portugal, about 20% of full-time workers are employed under a fixed-term contract. Using a rich longitudinal matched employer-employee dataset for Portugal, with more than 20 million observations and covering the 2002-2012 period, we confirm the common idea that fixed-term contracts are less desirable than permanent ones, estimating a conditional wage gap of -1.7 log points. We then evaluate the sources of that wage penalty by combining a three-way high-dimensional fixed effects model with the decomposition of Gelbach (2014), in which the three dimensions considered are the worker's unobserved ability, the firm's compensation wage policy and the job title effect. It is shown that the average worker with a fixed-term contract is less productive than his/her permanent counterparts, explaining -3.92 log points of the FTC wage penalty. Additionally, the sorting of workers into lower-paid job titles is responsible for a further -0.59 log points of the wage gap. Surprisingly, we find that the allocation of workers among firms mitigates the wage penalty (by 4.23 log points), as fixed-term workers are concentrated in firms with a more generous compensation policy. Finally, following Figueiredo et al. (2014), we further control for worker-firm match characteristics and conclude that fixed-term employment relationships have an overrepresentation of low-quality worker-firm matches, which explains 0.65 log points of the FTC wage penalty.
Abstract:
Poly(vinylidene fluoride-co-chlorotrifluoroethylene), PVDF-CTFE, membranes were prepared by solvent casting from dimethylformamide, DMF. The preparation conditions involved a systematic variation of the polymer/solvent ratio and of the solvent evaporation temperature. The microstructural variations of the PVDF-CTFE membranes depend on the different regions of the PVDF-CTFE/DMF phase diagram, explained by the Flory-Huggins theory. The effect of the polymer/solvent ratio and solvent evaporation temperature on the morphology, degree of porosity, β-phase content, degree of crystallinity, and on the mechanical, dielectric and piezoelectric properties of the PVDF-CTFE polymer was evaluated. In this binary system, the porous microstructure is attributed to spinodal decomposition during liquid-liquid phase separation. For a given polymer/solvent ratio of 20 wt% and a higher solvent evaporation temperature, the β-phase content is around 82% and the piezoelectric coefficient, d33, is -4 pC/N.
Abstract:
A rotary thermal diffusion column with the inner cylinder rotating and the outer cylinder static was used to separate n-heptane-benzene mixtures at different speeds of rotation. The results show that the column efficiency depends on the speed of rotation. For the optimum speed, the increase in efficiency relative to the static column was of the order of 8%. The role of the geometric irregularities in the annulus width on the performance of the rotary column is also discussed.
Abstract:
Integrated Master's dissertation in Architecture (specialization area: Architectural Culture)