10 results for General and Applied Linguistics
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Invariant integrals are derived for nematic liquid crystals and applied to materials with small Ericksen number and topological defects. The nematic material is confined between two infinite plates located at y = -h and y = h (h ∈ ℝ⁺), with a semi-infinite plate at y = 0, x < 0. Planar and homeotropic strong anchoring boundary conditions on the director field are assumed at the two infinite plates and at the semi-infinite plate, respectively. A line disclination therefore appears in the system, coinciding with the z-axis. Analytical solutions for the director field in the neighbourhood of the singularity are obtained; however, these solutions depend on an arbitrary parameter. The nematic elastic force is then evaluated from an invariant integral of the energy-momentum tensor over a closed surface that does not contain the singularity. This allows the parameter to be determined as a function of the nematic cell thickness and the strength of the disclination. Analytical solutions are also deduced for the director field in the whole region using the conformal mapping method.
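For orientation, a minimal sketch of the local behaviour the abstract refers to, in the standard one-constant approximation (an assumption here; the paper's elastic model may be more general): for a planar director the equilibrium angle is harmonic, and near a disclination of strength s it grows linearly with the polar angle, leaving exactly one free constant, the arbitrary parameter that the invariant integral then fixes.

```latex
% One-constant sketch: planar director, harmonic angle, disclination of
% strength s; \phi is the polar angle about the z-axis and \theta_0 is
% the free constant fixed by the invariant (energy-momentum) integral.
\mathbf{n} = (\cos\theta,\ \sin\theta,\ 0), \qquad
\nabla^{2}\theta = 0, \qquad
\theta = s\,\phi + \theta_{0}
```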
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
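To make the constrained least-squares route concrete, the following is a minimal Python sketch on synthetic data; the endmember matrix, noise level, and weight delta are illustrative assumptions, and the sum-to-one constraint is handled with the common augmented-row device rather than any specific algorithm from the chapter.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model sketch: each pixel y = M @ a + noise, with
# abundances a >= 0 that sum to one (hypothetical synthetic data).
rng = np.random.default_rng(0)
L, p = 50, 3                        # spectral bands, endmembers
M = rng.uniform(0.1, 1.0, (L, p))   # assumed endmember signatures
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(L)

# Fully constrained least squares: append a row of ones, weighted by
# delta, to softly enforce sum(a) = 1; nnls enforces a >= 0.
delta = 10.0
M_aug = np.vstack([M, delta * np.ones(p)])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)
print(a_hat)  # approximately a_true
```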
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
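As an illustration of the dimensionality reduction step mentioned above, here is a PCA-style sketch via SVD on a hypothetical cube; the cube shape and the retained subspace dimension k are assumptions, not values from the chapter.

```python
import numpy as np

# PCA via SVD on a hypothetical hyperspectral cube of shape
# (rows, cols, bands), flattened to a pixels x bands matrix.
rng = np.random.default_rng(1)
cube = rng.random((100, 100, 224))        # assumed synthetic data
X = cube.reshape(-1, cube.shape[-1])      # pixels x bands
X_centered = X - X.mean(axis=0)

# The leading right-singular vectors span the signal subspace; its
# dimension would ideally match the number of endmembers.
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
k = 3                                     # assumed subspace dimension
X_reduced = X_centered @ Vt[:k].T         # project onto k components
print(X_reduced.shape)                    # (10000, 3)
```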
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one; nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme with some illustrative examples. Section 6.8 concludes with some remarks.
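A small sketch of why the Dirichlet sources mentioned above fit the stated constraints: samples drawn from a Dirichlet distribution are nonnegative and sum to one by construction (the parameter vector alpha below is purely illustrative).

```python
import numpy as np

# Dirichlet-distributed abundance fractions automatically satisfy the
# positivity and full-additivity (sum-to-one) constraints that break
# the independence assumption behind ICA/IFA.
rng = np.random.default_rng(2)
alpha = np.array([2.0, 5.0, 3.0])      # hypothetical Dirichlet parameters
A = rng.dirichlet(alpha, size=10000)   # 10000 pixels x 3 endmembers
print(A.min() >= 0, np.allclose(A.sum(axis=1), 1.0))  # True True
```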
Abstract:
Pine plays an important role in the national ecology and economy. The pine tree suffers from a severe plague, known as pine wilt disease, caused by the pinewood nematode (PWN), a microscopic invertebrate worm measuring less than 1.5 mm in length. Transmission between trees is due to insect vectors known as the pine sawyer beetle (locally, longicórneo or capricórnio do pinheiro). Pine timber producers are therefore obliged to carry out heat treatments (HT) to eliminate the PWN and its vectors so that sawn wood exports comply with the NP 4487 standard. To maintain the international competitiveness of national companies, the cost impact of HT must be minimized. The objective of this dissertation is to carry out a technical and economic study of the implementation of a cogeneration system capable of producing heat for the PWN treatment and, simultaneously, electricity to be sold to the public grid; the revenue from electricity sales could help minimize the HT costs. Since sawmill residues can be used as fuel, two cogeneration technologies were considered for evaluation, a classic steam turbine system (Rankine cycle) and an Organic Rankine Cycle (ORC) system, both allowing the combustion of sawmill residues. For the economic evaluation, a technology/remuneration-scheme simulator was developed that performs calculations according to the thermal needs of each producer, the electrical power to be installed, and the economic indicators of the cogeneration installation: NPV (VAL), IRR (TIR), and payback. The simulator applies the new legislation framing the legal and remuneration regime of cogeneration (DL 23/2010), which considers two schemes, general and special. The methodology was applied to a real case of a sawmill, and the main results show that the proposed solutions, steam turbine and ORC system, are not economically viable. The sensitivity analysis shows that one of the factors that most influences the economic viability of the project is the short operating time; one of the proposed solutions is therefore the creation of a cogeneration plant shared by several timber producers. Another possible solution to the short utilization time would be to provide heat treatment services to other wood pallet producers that do not have their own kiln.
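For readers unfamiliar with the indicators the simulator reports, here is a minimal Python sketch of NPV, IRR, and simple payback; the discount rate and cash-flow figures are hypothetical and not taken from the dissertation.

```python
# Economic indicators named in the abstract (NPV, IRR, simple payback)
# for a hypothetical cogeneration cash flow.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (negative) investment at t=0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change in NPV)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def payback(cash_flows):
    """First year in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-500_000] + [70_000] * 15   # illustrative investment and revenues
print(npv(0.07, flows), irr(flows), payback(flows))
```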
Abstract:
The aim of this work is to use the MANCOVA model to study the influence of the phenotype of an enzyme (acid phosphatase) and a genetic factor (haptoglobin genotype) on two dependent variables, the activity of acid phosphatase (ACP1) and the body mass index (BMI). A general linear model is therefore used, namely a multivariate analysis of covariance (two-way MANCOVA). The covariate is the age of the subject; it works as a control variable for the independent factors, serving to reduce the error term in the model. The main results showed that only the ACP1 phenotype has a significant effect on the activity of ACP1, and that the covariate has a significant effect on both dependent variables. The univariate analysis showed that the ACP1 phenotype accounts for about 12.5% of the variability in the activity of ACP1, while the covariate accounts for about 4.6% of the variability in the activity of ACP1 and 37.3% of the variability in the BMI.
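A minimal sketch of how such a two-way MANCOVA could be set up in Python with statsmodels, on synthetic data; the variable names and the data-generating choices are hypothetical stand-ins for the study's variables.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic two-way MANCOVA: two factors, one covariate (age), and two
# dependent variables; all names and effect sizes are illustrative.
rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "acp1_phenotype": rng.choice(["A", "B", "BA"], size=n),
    "hp_genotype": rng.choice(["1-1", "2-1", "2-2"], size=n),
    "age": rng.uniform(20, 70, size=n),
})
df["acp1_activity"] = 100 + 0.5 * df["age"] + rng.normal(0, 10, n)
df["bmi"] = 20 + 0.1 * df["age"] + rng.normal(0, 2, n)

# Two dependent variables on the left; factors, their interaction, and
# the age covariate on the right (which makes this a MANCOVA).
mv = MANOVA.from_formula(
    "acp1_activity + bmi ~ acp1_phenotype * hp_genotype + age", data=df)
print(mv.mv_test())
```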
Abstract:
Eucalyptus globulus heartwood, sapwood, and their samples delignified by kraft pulping at 130, 150, and 170 °C over time were characterized with respect to total carbohydrates by Py-GC/MS(FID). No significant differences between heartwood and sapwood were found in relation to pyrolysis products and composition. The main wood-carbohydrate-derived pyrolysis compounds were levoglucosan (25.1%), hydroxyacetaldehyde (12.5%), 2-oxo-propanal (10.3%), and acetic acid (8.7%). Levoglucosan decreased during the early stages of delignification and increased during the bulk and residual phases. Acetic acid decreased, hydroxyacetaldehyde and 2-oxo-propanal increased, and 2-furaldehyde and hydroxypropanone remained almost constant during delignification. The C/L ratio was 3.2 in wood and remained rather constant during the first pulping periods, up to a loss of 15-25% of the carbohydrates and 60% of the lignin; afterwards it increased sharply to 44, corresponding to the removal of 25-35% of the carbohydrates and 95% of the lignin. The pulping selectivity towards lignin versus polysaccharides was the same for sapwood and heartwood.
Abstract:
It is common knowledge that many teachers teach students to listen, sing, and compose, but use written tests to assess learning. This fact reveals that, on one hand, some believe that written tests actually assess musical practices and, on the other hand, some consider assessing musical practices to be very difficult and inconsistent because Music has a transitory, ephemeral, immaterial character. Besides being interesting, the problem is broad, because it exists both among specialist Music teachers and among generalist preschool and primary school teachers. The experience carried out over the last three years in the Master's in Music Education Teaching in Basic Education, concerning how to assess student learning in Music Education, keeps written tests for assessing theoretical knowledge but introduces an instrument for assessing musical practices: grids of performance descriptors. These grids, while keeping common guiding principles, are always tailor-made for each specific situation (work, musical activity, students) and are applied not only in direct observation but also to audio/video recordings. These instruments are built by the teachers, not together with the students, but they are presented and explained to the students from the beginning of each teaching unit, used regularly for formative self-assessment and, in the final presentations, for summative assessment. In this way, students know from the start where they are expected to arrive, and they know at each moment of the process where they stand and which musical problems and difficulties must be overcome. In each activity we compared the students' self-assessment with the teachers' assessment and found, in all situations, a high positive correlation (r > 0.9). We still need to consolidate the data already obtained, so we hope more teachers will become involved. The possibility of disseminating the solution presented here, namely through continuing-education courses, will allow a clear increase in the consistency and reliability of the assessment of musical practices in Music Education.
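The reported agreement is a plain Pearson correlation between paired scores; a minimal sketch with illustrative numbers (not the study's data).

```python
import numpy as np

# Pearson correlation between students' self-assessment scores and
# teachers' scores for the same performances (illustrative values).
self_scores = np.array([3.0, 4.5, 2.5, 5.0, 4.0, 3.5])
teacher_scores = np.array([3.2, 4.4, 2.8, 4.9, 3.8, 3.6])
r = np.corrcoef(self_scores, teacher_scores)[0, 1]
print(r)  # the study reports r > 0.9 across activities
```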
Abstract:
In this article, we present the first study of probabilistic tsunami hazard assessment for the Northeast (NE) Atlantic region related to earthquake sources. The methodology combines probabilistic seismic hazard assessment, tsunami numerical modeling, and statistical approaches. We consider three main tsunamigenic areas, namely the Southwest Iberian Margin, the Gloria, and the Caribbean. For each tsunamigenic zone, we derive the annual recurrence rate for each magnitude range, from Mw 8.0 up to Mw 9.0 at regular intervals, using the Bayesian method, which incorporates seismic information from historical and instrumental catalogs. A numerical code solving the shallow water equations is employed to simulate the tsunami propagation and compute nearshore wave heights. The probability of exceeding a specific tsunami hazard level during a given time period is calculated using the Poisson distribution. The results are presented in terms of the probability of exceedance of a given tsunami amplitude for 100- and 500-year return periods. The hazard level varies along the NE Atlantic coast, being highest along the northern segment of the Morocco Atlantic coast, the southern Portuguese coast, and the Spanish coast of the Gulf of Cadiz. We find that the probability that the maximum wave height exceeds 1 m somewhere in the NE Atlantic region reaches 60% and 100% for 100- and 500-year return periods, respectively. These probability values decrease to about 15% and 50%, respectively, when considering an exceedance threshold of 5 m for the same return periods.
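The Poissonian exceedance computation referred to above reduces to P = 1 - exp(-λT) for an annual exceedance rate λ and exposure time T; a minimal sketch with an illustrative rate (not a value from the study).

```python
import numpy as np

# Poisson model: probability of at least one exceedance of a hazard
# level in T years, given the annual rate at which it is exceeded.
def prob_exceedance(annual_rate, years):
    return 1.0 - np.exp(-annual_rate * years)

rate = 0.009  # illustrative annual exceedance rate, not the study's value
for T in (100, 500):
    print(T, prob_exceedance(rate, T))
```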
Abstract:
Mg alloys are very susceptible to corrosion in physiological media. This behaviour limits their widespread use in biomedical applications as bioresorbable implants, but it can be controlled by applying protective coatings. On one hand, coatings must delay and control the degradation process of the bare alloy; on the other hand, they must be functional and biocompatible. In this study, a biocompatible polycaprolactone (PCL) coating was functionalised with nano-hydroxyapatite (HA) particles for enhanced biocompatibility and with an antibiotic, cephalexin, for antibacterial purposes, and applied to the AZ31 alloy. The chemical composition and surface morphology of the coated samples, before and after the corrosion tests, were studied by scanning electron microscopy (SEM) coupled with energy dispersive X-ray analysis (EDX), and by Raman spectroscopy. The results showed that the presence of the additives induced the formation of agglomerates and defects in the coating, which resulted in the formation of pores during immersion in Hanks' solution. The corrosion resistance of the coated samples was studied in Hanks' solution by electrochemical impedance spectroscopy (EIS). The results showed that all the coatings can provide corrosion protection to the bare alloy; however, in the presence of the additives, the corrosion protection decreased. The wetting behaviour of the coating was evaluated by the static contact angle method, and it was found that the presence of both hydroxyapatite and cephalexin increased the hydrophilicity of the surface. The results showed that it is possible to tailor a composite coating that can store an antibiotic and nano-hydroxyapatite particles while allowing control of the in-vitro corrosion degradation of the bioresorbable Mg alloy AZ31.
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Performing Arts (Theatre-Music).
Abstract:
The Chaves basin is a pull-apart tectonic depression developed on granites, schists, and graywackes, and filled with a sedimentary sequence of variable thickness. It is a rather complex structure, as it includes an intricate network of faults and hydrogeological systems. The topography of the basement of the Chaves basin remains unclear, as no drill hole has ever intersected the bottom of the sediments, and resistivity surveys suffer from severe equivalence issues resulting from the geological setting. In this work, a joint inversion approach for 1D resistivity and gravity data, designed for layered environments, is used to combine the consistent spatial distribution of the gravity data with the depth sensitivity of the resistivity data. A comparison between the results of inverting each data set individually and the results of the joint inversion shows that, although the joint inversion has more difficulty adjusting to the observed data, it provides more realistic and geologically meaningful models than those calculated by the individual inversions. This work contributes to a better understanding of the Chaves basin, while providing an opportunity to further study both the advantages and the difficulties involved in applying the joint inversion of gravity and resistivity data.
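A conceptual sketch of the joint-inversion idea, stacking weighted residuals from two forward models that share parameters; the toy forward models, weights, and starting model below are assumptions for illustration, not the paper's layered formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Joint inversion toy: two data sets constrain shared parameters
# m = (layer thickness h, resistivity rho, density contrast drho).
G = 6.674e-11  # gravitational constant

def resistivity_forward(m, ab2):
    h, rho, _ = m
    # toy apparent-resistivity curve over electrode half-spacings ab2
    return rho * (1.0 + np.exp(-ab2 / h))

def gravity_forward(m):
    h, _, drho = m
    # infinite-slab (Bouguer) anomaly of the layer, converted to mGal
    return np.array([2.0 * np.pi * G * drho * h * 1e5])

ab2 = np.linspace(10.0, 500.0, 25)        # half-spacings (m), assumed
m_true = np.array([120.0, 50.0, -300.0])  # h (m), rho (ohm.m), drho (kg/m3)
d_res = resistivity_forward(m_true, ab2)
d_grav = gravity_forward(m_true)

def residuals(m):
    # weight each data set so neither dominates the joint misfit
    r1 = (resistivity_forward(m, ab2) - d_res) / d_res.std()
    r2 = (gravity_forward(m) - d_grav) / max(abs(d_grav[0]), 1e-12)
    return np.concatenate([r1, r2])

fit = least_squares(residuals, x0=np.array([80.0, 30.0, -200.0]))
print(fit.x)  # should approximately recover m_true
```

The design point is that gravity alone only constrains the product drho * h, while resistivity constrains h and rho; stacking both residual vectors resolves the ambiguity, which is the motivation for joint inversion in equivalence-prone settings like this one.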