956 results for Layer dependent order parameters
Abstract:
Scientific dissertation prepared at the Laboratório Nacional de Engenharia Civil (LNEC) to obtain the Master's degree in Civil Engineering, specialization in Hydraulics, under the cooperation protocol between ISEL and LNEC.
Abstract:
Master's degree in Mechanical Engineering – Specialization in Industrial Management
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
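As a concrete illustration of the linear mixing model described above, here is a minimal Python sketch (not from the chapter) that builds a synthetic pixel as a nonnegative combination of known endmember signatures and recovers the abundances by nonnegative least squares; the random signature matrix, the noise level, and the renormalization used to approximate the sum-to-one constraint are illustrative assumptions.

```python
# Minimal sketch of the linear mixing model with known endmember signatures.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
L, p = 50, 3                        # spectral bands, endmembers
M = rng.uniform(0.1, 1.0, (L, p))   # endmember signature matrix (L x p), synthetic

alpha_true = np.array([0.6, 0.3, 0.1])              # abundances: >= 0, sum to 1
y = M @ alpha_true + 0.01 * rng.standard_normal(L)  # observed pixel spectrum

# Nonnegative least squares enforces alpha >= 0; the sum-to-one constraint
# is approximated here by renormalizing (a common simplification).
alpha_hat, _ = nnls(M, y)
alpha_hat /= alpha_hat.sum()
print(alpha_true, alpha_hat.round(3))
```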
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method of reference 49 exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of the mutual information is based on fitting mixtures of Gaussians (MOG) to the data, with the MOG parameters (number of components, means, covariances, and weights) inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one; nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
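A minimal sketch of the generative view behind the proposed blind unmixing scheme, assuming illustrative Dirichlet parameters and noise level (not the chapter's values): abundance fractions drawn from a Dirichlet density are nonnegative and sum to one by construction, and the observations are their linear mixture in noise.

```python
# Dirichlet abundances enforce positivity and full additivity by construction.
import numpy as np

rng = np.random.default_rng(1)
L, p, N = 50, 3, 1000              # bands, endmembers, pixels
M = rng.uniform(0.1, 1.0, (L, p))  # synthetic endmember signatures

A = rng.dirichlet(alpha=[2.0, 3.0, 5.0], size=N).T   # p x N abundance fractions
Y = M @ A + 0.005 * rng.standard_normal((L, N))      # noisy linear observations
print(A.sum(axis=0)[:5])           # each pixel's fractions sum to 1
```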
Abstract:
Demand response is an energy resource that has gained increasing importance in the context of competitive electricity markets and smart grids. New business models and methods designed to integrate demand response into electricity markets and smart grids have been published, reporting the need for additional work in this field. In order to adequately remunerate the participation of consumers in demand response programs, improved methods for evaluating consumer performance are needed. The methodology proposed in the present paper identifies the baseline approach that best fits the consumer's historic consumption, in order to determine the expected consumption in the absence of participation in a demand response event and, from it, the actual consumption reduction. The defined baseline can then be used to better determine the remuneration of the consumer. The paper includes a case study with real data to illustrate the application of the proposed methodology.
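The sketch below illustrates one simple candidate baseline (an average over comparable non-event days); the paper selects, per consumer, the baseline approach that best fits the historic consumption, so the averaging rule and the toy arrays here are assumptions for illustration only.

```python
# Baseline-based performance evaluation for a demand response event.
import numpy as np

def baseline(history_days: np.ndarray) -> np.ndarray:
    """Expected consumption absent the event: mean over comparable non-event days.
    history_days: (n_days, n_periods) metered consumption."""
    return history_days.mean(axis=0)

def demand_reduction(history_days: np.ndarray, event_day: np.ndarray) -> np.ndarray:
    """Actual reduction = baseline - metered consumption during the event."""
    return baseline(history_days) - event_day

history = np.array([[10, 12, 15], [11, 13, 14], [9, 12, 16]], dtype=float)
event   = np.array([8, 9, 11], dtype=float)
print(demand_reduction(history, event))   # reduction per period, basis for remuneration
```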
Abstract:
Differences in virulence among strains of Entamoeba histolytica have long been detected by various experimental assays, both in vivo and in vitro. Discrepancies in strain characterization have arisen when different biological assays are compared. In order to evaluate different virulence parameters in strain characterization, five strains of E. histolytica, kept under axenic culture, were characterized with respect to their capability to induce hamster liver abscess, their erythrophagocytosis rate, and their cytopathic effect upon VERO cells. Significant correlation was found between the in vitro biological assays, but not between the in vivo and in vitro assays. Good correlation was found between the cytopathic effect and the mean number of erythrocytes taken up, but not with the percentage of phagocytic amoebae, showing that great variability can be observed within the same assay depending on the variable chosen. It was not possible to correlate isoenzyme and restriction fragment patterns with virulence indexes, since all the strains studied presented pathogenic patterns. The discordant results observed in different virulence assays suggest that virulence itself may not be directly assessed: what is in fact assessed are different biological characteristics or functions of the parasite rather than virulence itself. These characteristics or functions may or may not be related to the pathogenic mechanisms occurring in the development of invasive amoebic disease.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering
Abstract:
This paper characterizes four 'fractal vegetables': (i) cauliflower (Brassica oleracea var. botrytis); (ii) broccoli (Brassica oleracea var. italica); (iii) round cabbage (Brassica oleracea var. capitata); and (iv) Brussels sprout (Brassica oleracea var. gemmifera), by means of electrical impedance spectroscopy and fractional calculus tools. Experimental data are approximated using fractional-order models, and the corresponding parameters are determined with a genetic algorithm. The five-parameter Havriliak-Negami model fits the data well, demonstrating that classical formulae can constitute simple and reliable models to characterize biological structures.
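As an illustration of the fitting procedure, the following sketch fits one common five-parameter Havriliak-Negami impedance form with an evolutionary optimizer (scipy's differential evolution standing in for the paper's genetic algorithm); the impedance expression, parameter bounds, and synthetic data are assumptions, not the authors' model or measurements.

```python
# Fit a five-parameter Havriliak-Negami impedance model to (synthetic) spectra.
import numpy as np
from scipy.optimize import differential_evolution

def hn_impedance(w, R0, Rinf, tau, alpha, beta):
    """Assumed HN form: Z(jw) = Rinf + (R0 - Rinf) / (1 + (jw*tau)^alpha)^beta."""
    return Rinf + (R0 - Rinf) / (1.0 + (1j * w * tau) ** alpha) ** beta

w = np.logspace(1, 6, 100)                          # angular frequencies
Z_meas = hn_impedance(w, 1e4, 1e2, 1e-4, 0.8, 0.9)  # synthetic "measurement"

def cost(p):
    return np.sum(np.abs(hn_impedance(w, *p) - Z_meas) ** 2)  # complex least squares

bounds = [(1e3, 1e5), (1.0, 1e3), (1e-6, 1e-2), (0.1, 1.0), (0.1, 1.0)]
res = differential_evolution(cost, bounds, seed=0, tol=1e-10)
print(res.x)   # recovered (R0, Rinf, tau, alpha, beta)
```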
Abstract:
Real-time monitoring applications may be used in a wireless sensor network (WSN) and may generate packet flows with strict quality of service requirements in terms of delay, jitter, or packet loss. When strict delays are imposed from source to destination, packets must be delivered at the destination within a hard end-to-end delay (EED) limit in order to be considered useful. Since WSN nodes are scarce in both processing and energy resources, it is desirable that they transport only useful data, as this contributes to enhancing the overall network performance and improving energy efficiency. In this paper, we propose a novel cross-layer admission control (CLAC) mechanism to enhance the network performance and increase the energy efficiency of a WSN by avoiding the transmission of potentially useless packets. The CLAC mechanism uses an estimation technique to predict a packet's EED and decides to forward the packet only if it is expected to meet the EED deadline defined by the application, dropping it otherwise. The results obtained show that CLAC enhances the network performance by increasing the useful packet delivery ratio under high network loads, and improves the energy efficiency at every network load.
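The forwarding rule can be sketched as below; the delay estimator (accumulated delay plus a per-hop estimate times the remaining hop count) is a hypothetical stand-in for the paper's estimation technique.

```python
# Admission decision: forward only packets expected to meet the EED deadline.
def admit(elapsed_ms: float, per_hop_est_ms: float,
          hops_remaining: int, eed_deadline_ms: float) -> bool:
    """Return True to forward the packet, False to drop it as useless."""
    predicted_eed = elapsed_ms + per_hop_est_ms * hops_remaining
    return predicted_eed <= eed_deadline_ms

# A node 4 hops from the sink, ~12 ms per hop, with 30 ms already accumulated:
print(admit(30.0, 12.0, 4, 100.0))   # True  -> forward
print(admit(70.0, 12.0, 4, 100.0))   # False -> drop, saving energy
```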
Abstract:
In the last two decades, the small strain shear modulus became one of the most important geotechnical parameters for characterizing soil stiffness. Finite element analyses have shown that the in-situ stiffness of soils and rocks is much higher than was previously thought, and that the stress-strain behaviour of these materials is non-linear in most cases at small strain levels, especially in the ground around retaining walls, foundations and tunnels, typically in the order of 10⁻² to 10⁻⁴ of strain. Although the best approach to estimating the shear modulus seems to be based on measuring seismic wave velocities, deriving the parameter through correlations with in-situ tests is usually considered very useful for design practice. The use of Neural Networks for modeling systems has become widespread, in particular in areas where the great amount of available data and the complexity of the systems make the problem difficult to treat with traditional data analysis methodologies. In this work, the use of Neural Networks and Support Vector Regression is proposed to estimate the small strain shear modulus of sedimentary soils from the basic or intermediate parameters derived from the Marchetti Dilatometer Test. The results are discussed and compared with some of the most common methodologies available for this evaluation.
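A minimal sketch of the proposed data-driven mapping, assuming hypothetical DMT features (e.g., the intermediate parameters ID, KD, ED) and a synthetic training set; the paper trains on real sedimentary-soil data and also considers Neural Networks alongside Support Vector Regression.

```python
# Support Vector Regression from DMT parameters to small-strain shear modulus G0.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy features per test depth: (ID, KD, ED) ranges are illustrative assumptions.
X = rng.uniform([0.1, 1.0, 1.0], [10.0, 30.0, 100.0], size=(200, 3))
# Toy target: G0 (MPa) loosely tied to ED with noise, for demonstration only.
G0 = 50 * X[:, 2] ** 0.8 * (1 + 0.05 * rng.standard_normal(200))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, G0)
print(model.predict(X[:3]))   # predicted G0 for the first three tests
```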
Abstract:
This work addresses aspects related to compaction control in embankments, based on the evaluation of in-situ parameters such as unit weights, water contents, degrees of compaction and deformability moduli, using the methods commonly employed in compaction control: the plate load test, the nuclear density gauge, the sand bottle test and the portable falling weight deflectometer. The aim is to compare the results obtained under the different conditions, so as to establish correlations between the tests and to identify those offering the highest technical reliability and the greatest operational and economic advantages. To guarantee the quality of the works, the criteria required by the specifications must be met, namely the parameters used to evaluate compaction control, so that the works fulfil their functional and structural purpose. For this type of work, the reference specifications in Portugal are those of Estradas de Portugal (EP) and of Brisa, Auto-estradas de Portugal (Brisa), which are based on soil classification recommendations. In the experimental part, two studies were carried out under different conditions and with different materials, on construction sites of Mota-Engil Engenharia e Construções, S.A. The test campaigns comprised in-situ tests on the Subconcessão do Douro Interior – Lote 6 – IC5 – Troço Murça/Nó de Pombal road project, on sub-base and base layers of a continuously graded crushed aggregate, and on the modernization of the Bombel and Vidigal to Évora railway section, in soils.
Abstract:
Dissertation to obtain the degree of Doctor in Statistics and Risk Management, specialty in Statistics
Abstract:
Channa punctatus was exposed to four different concentrations of rutin, taraxerol and apigenin. Changes in some hematological parameters of Channa punctatus were assessed to determine the influence of these compounds on the test fish. Fish were exposed to sublethal concentrations (80% of the 24-h LC50) of these compounds for one week; control fish were maintained in parallel for one week. Thereafter, blood samples were obtained from the control and experimental fish. Blood was assayed for selected hematological parameters (hematocrit, hemoglobin, red blood cell count, white blood cell count, total plasma protein and plasma glucose concentration). The derived hematological indices of mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH) and mean corpuscular volume (MCV) were calculated. Sublethal concentrations of these compounds caused a dose-dependent decrease in hemoglobin values, coupled with decreases in hematocrit values and red blood cell counts, an obvious indication of anemia. The total and differential white blood cell counts decreased, except for the lymphocytes, which showed a slight increase. Plasma protein and glucose were also lower in exposed fish compared with controls, and the hematological indices MCH, MCHC and MCV were likewise lowered. The results from this study reveal a high mortality rate and deleterious consequences for the health of fish subjected to acute exposure to rutin, taraxerol and apigenin; these compounds should therefore not be used directly in aquaculture without proper knowledge.
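For reference, the derived indices follow the standard hematological formulas; the input values in this sketch are illustrative, not the study's measurements.

```python
# Standard derived red-cell indices from Hb, Hct and RBC count.
def indices(hb_g_dl: float, hct_pct: float, rbc_millions_ul: float):
    mcv  = hct_pct * 10.0 / rbc_millions_ul   # mean corpuscular volume, fL
    mch  = hb_g_dl * 10.0 / rbc_millions_ul   # mean corpuscular hemoglobin, pg
    mchc = hb_g_dl * 100.0 / hct_pct          # MCH concentration, g/dL
    return mcv, mch, mchc

# Example: Hb 8 g/dL, Hct 30%, RBC 2.5 x 10^6 cells/uL (illustrative values)
print(indices(8.0, 30.0, 2.5))
```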
Abstract:
Old timber structures may show significant variation in cross-section geometry along the same element, as a result of both construction methods and deterioration. As a consequence, defining the geometric parameters in situ may be both time consuming and costly. This work presents the results of inspections carried out on different timber structures. Based on the results obtained, different simplified geometric models are proposed in order to efficiently model the geometry variations found. Probabilistic modelling techniques are also used to define safety parameters of existing timber structures subjected to dead and live loads, namely self-weight and wind actions. The parameters of the models were defined as probabilistic variables, and the safety of a selected case study was assessed using the Monte Carlo simulation technique. Assuming a target reliability index, a model was defined for both the residual cross section and the time-dependent evolution of deterioration. As a consequence, it was possible to compute probabilities of failure and reliability indices, as well as time-dependent deterioration curves for this structure. The results obtained provide a proposal for the definition of the cross-section geometric parameters of existing timber structures with different levels of decay, using a simplified probabilistic geometry model and considering a remaining capacity factor for the decayed areas. This model can be used to assess the safety of the structure at present and to predict future performance.
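A minimal sketch of the Monte Carlo step described above, assuming illustrative resistance and load-effect distributions rather than the case-study values: sample both, estimate the probability of failure, and convert it to a reliability index.

```python
# Monte Carlo estimate of failure probability and reliability index.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000
# Residual-section resistance (lognormal) vs. dead + wind load effect (normal);
# both distributions and their parameters are illustrative assumptions.
R = rng.lognormal(mean=np.log(50.0), sigma=0.15, size=n)   # kNm
E = rng.normal(loc=30.0, scale=6.0, size=n)                # kNm

pf = np.mean(R - E < 0.0)     # probability of failure: P(R < E)
beta = -norm.ppf(pf)          # corresponding reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```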
Abstract:
Adhesives have been the subject of study in recent years for joining components at the industrial level. Due to the growing use of adhesive joints, reliable and robust strength prediction models are needed. In this context, the determination of adhesive properties is fundamental for the design of bonded joints. A recent approach consists of using cohesive zone models (CZM), which simulate the fracture behaviour of joints quite reliably. This technique requires the definition of cohesive laws in tension and shear. These cohesive laws depend essentially on two parameters: the limiting stress and the toughness in the respective loading mode. The End-Notched Flexure (ENF) test is the most widely used to determine the shear toughness, since it is known to be the most expedient and reliable way to characterize this parameter. In this test, the specimens are loaded in three-point bending, supported at the ends and loaded at mid-span to promote bending between the substrates, which results in shear loading of the adhesive. From this test, and once the shear toughness (GIIc) is defined, several methods are available to estimate the corresponding cohesive law. In this dissertation, the shear cohesive laws of three structural adhesives are defined by means of the ENF test and an inverse method of fitting the experimental data. To this end, experimental tests were performed considering a brittle adhesive, Araldite® AV138, a moderately ductile adhesive, Araldite® 2015, and a ductile one, SikaForce® 7752. The experimental work consisted of performing the ENF tests and processing the data to obtain resistance curves (R-curves) through the following methods: the Compliance Calibration Method (CCM), Direct Beam Theory (DBT), Corrected Beam Theory (CBT) and the Compliance-Based Beam Method (CBBM). The tests were simulated numerically with the commercial code ABAQUS®, using the Finite Element Method (FEM) and a triangular CZM, in order to estimate the cohesive law of each adhesive under shear loading. After this study, a sensitivity analysis was performed on the values of GIIc and the cohesive shear strength (ts0), for a better understanding of the effect of these parameters on the P-δ curve of the ENF test. To test the adequacy of the four methods used in this work to obtain GIIc, they were applied to numerical P-δ curves of each of the three adhesives, and the GIIc values predicted by these methods were compared with the respective values introduced in the numerical models. As a result of this work, a unique shear cohesive law was obtained for each of the three adhesives tested, which is able to accurately reproduce the experimental results.
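As an illustration of the beam-theory family of data reduction schemes used in the dissertation, the sketch below evaluates the classical direct-beam-theory estimate GII = 9P²a²/(16b²Eh³) for the ENF specimen; CBBM instead replaces the measured crack length with an equivalent crack length obtained from the specimen compliance. The input values are illustrative assumptions, not the dissertation's measurements.

```python
# Mode II strain energy release rate for the ENF test from simple beam theory.
def g2_beam_theory(P: float, a: float, b: float, E: float, h: float) -> float:
    """G_II (N/mm): P load (N), a crack length (mm), b width (mm),
    E adherend modulus (MPa), h adherend (half-specimen) thickness (mm)."""
    return 9.0 * P**2 * a**2 / (16.0 * b**2 * E * h**3)

# Illustrative aluminium-adherend specimen at peak load:
print(g2_beam_theory(P=500.0, a=50.0, b=25.0, E=70_000.0, h=1.5))
```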