981 results for Output-only Modal Analysis
Abstract:
This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological, and ecotoxicological data (with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) obtained from the analysis of water samples collected monthly at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results for some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to the classification of Leça River water quality as "bad" or "very bad". PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all the sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical, and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks, defining the minimum number of sites and their location. Thus, these tools can support appropriate management decisions.
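A minimal Python sketch of the PCA-plus-clustering workflow described above, assuming a sites × variables matrix of measurements; the data, the number of variables, and the two-cluster cut are illustrative assumptions, not the study's values.

```python
# Minimal sketch: PCA + hierarchical clustering of monitoring sites,
# assuming a (sites x variables) matrix of measurements.
# The data and variable count below are hypothetical, not from the study.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 10))            # 7 sites, 10 physicochemical variables

Xs = StandardScaler().fit_transform(X)  # z-score each variable
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)              # site coordinates on PC1/PC2
print("explained variance:", pca.explained_variance_ratio_)

# Ward's hierarchical clustering on the PCA scores
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # e.g. upstream vs downstream
print("site clusters:", labels)
```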
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.
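A hedged sketch of the Monte Carlo uncertainty propagation the abstract describes, applied to a toy concentration × fate × effect calculation; the lognormal parameters and the impact formula are illustrative assumptions, not the study's multimedia model.

```python
# Hedged sketch: Monte Carlo propagation of input uncertainty through a
# toy fate/ecotoxicity calculation. The distributions and the impact
# formula are illustrative assumptions, not the study's multimedia model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 100_000

# Lognormal inputs (hypothetical parameters): effluent concentration,
# a fate factor (persistence/dilution), and an ecotoxicity effect factor.
conc   = rng.lognormal(mean=np.log(1e-7),  sigma=1.0, size=n)  # kg/m3
fate   = rng.lognormal(mean=np.log(10.0),  sigma=0.5, size=n)  # days
effect = rng.lognormal(mean=np.log(500.0), sigma=1.2, size=n)  # PAF*m3/kg

impact = conc * fate * effect            # toy impact score per compound

print("median impact:", np.median(impact))
print("95% interval :", np.percentile(impact, [2.5, 97.5]))

# Crude sensitivity check: rank-correlate each input with the output
for name, x in [("conc", conc), ("fate", fate), ("effect", effect)]:
    rho, _ = spearmanr(x, impact)
    print(name, "Spearman rho:", round(rho, 3))
```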
Abstract:
Forest fire dynamics is often characterized by the absence of a characteristic length-scale, long-range correlations in space and time, and long memory, which are features also associated with fractional-order systems. In this paper a public domain forest fire catalogue, containing information on events in Portugal covering the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and are interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe forest fire dynamics. In the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in two-dimensional space, by taking time series representative of the phenomena. The MDS approach generates maps where objects that are perceived to be similar to each other are placed close together, forming clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
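A short sketch of the event-as-impulse modelling and the pseudo phase plane (delay) embedding described above; the catalogue values, the smoothing kernel, and the delay tau are hypothetical choices, not the paper's settings.

```python
# Sketch: events as a Dirac-like impulse train and a pseudo phase plane
# (delay) embedding. Catalogue values and the delay tau are hypothetical.
import numpy as np

# Hypothetical catalogue: (day index, burnt area) pairs
days  = np.array([12, 40, 41, 95, 200, 201, 202, 310])
areas = np.array([5.0, 120.0, 30.0, 2.0, 800.0, 650.0, 90.0, 15.0])

T = 365
x = np.zeros(T)
x[days] = areas             # impulse train: amplitude ~ burnt area

# Smooth slightly so the delay plot is readable (a modelling choice)
kernel = np.exp(-np.arange(0, 30) / 5.0)
xs = np.convolve(x, kernel)[:T]

tau = 7                     # delay in days; would be tuned in practice
ppp = np.column_stack([xs[:-tau], xs[tau:]])  # pseudo phase plane points
print(ppp.shape)            # (T - tau, 2): trajectory in 2-D delay space
```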
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
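The extraction loop described above can be sketched compactly. The following simplified Python illustration follows the VCA idea (project the data onto a direction orthogonal to the span of the endmembers found so far and keep the extreme of the projection as the next endmember), but it omits the SNR-dependent preprocessing and the signal-subspace identification of the full algorithm, so it is a didactic sketch rather than the authors' implementation.

```python
# Simplified sketch of the VCA-style extraction loop described above:
# at each step, project all pixels onto a direction orthogonal to the
# span of the endmembers found so far; the extreme projection picks the
# next endmember. Omits the SNR-dependent preprocessing of full VCA.
import numpy as np

def vca_like(R, p, seed=0):
    """R: (bands, pixels) data matrix; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    L, N = R.shape
    E = np.zeros((L, p))                    # endmember signatures (columns)
    A = np.zeros((L, 1))                    # span of endmembers found so far
    for i in range(p):
        w = rng.normal(size=L)              # random direction
        P = np.eye(L) - A @ np.linalg.pinv(A)   # projector onto span(A)^perp
        f = P @ w
        f /= np.linalg.norm(f)
        v = f @ R                           # project every pixel onto f
        E[:, i] = R[:, np.argmax(np.abs(v))]    # extreme = purest pixel
        A = E[:, : i + 1]                   # grow the spanned subspace
    return E

# Toy usage: 3 endmembers mixed linearly, with pure pixels present
rng = np.random.default_rng(1)
M = rng.uniform(size=(50, 3))               # true endmembers (50 bands)
S = rng.dirichlet(np.ones(3), size=500).T   # abundances summing to one
R = np.concatenate([M @ S, M], axis=1)      # append pure pixels
E = vca_like(R, p=3)
print(E.shape)                              # (50, 3) estimated signatures
```

The pure-pixel assumption shows up explicitly in the toy usage: the true signatures are appended to the mixed data so that the extremes of the projections coincide with actual endmembers.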
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Master in Computer Science
Abstract:
Dissertation presented to the Instituto Politécnico do Porto to obtain the Master's degree in Logistics, supervised by Professor Maria Teresa Ribeiro Pereira. This dissertation does not include the criticisms and suggestions made by the Jury.
Abstract:
The energy consumption of oil refineries is very high, and furnaces are the equipment that contributes most to that consumption. In this study, an energy assessment and optimization of the furnaces of the Aromatics Plant (Fábrica de Aromáticos) of the Matosinhos Refinery was carried out. In a first phase, an exhaustive survey of data on all inlet and outlet streams of the equipment was performed, in order to subsequently carry out the mass and energy balances for each furnace. The survey covered two distinct operating periods of the plant: normal operation and start-up. The normal operating period covered January to September 2012, while the start-up period ran from December 2012 to March 2013. In the second phase, the mass and energy balances were performed, quantifying all inlet and outlet streams of the furnaces in mass and energy terms, which allowed the thermal efficiency of the furnaces to be calculated in order to evaluate their performance. The energy assessment showed that more energy comes from the combustion of fuel gas than of fuel oil, both during normal operation and during start-up. Furnaces H0101, H0301, and H0471 have the highest consumption, accounting for more than 70% of the Aromatics Plant's consumption. In the third phase, two measures were proposed for the energy optimization of the three most energy-consuming furnaces: monthly cleaning and the exclusive use of fuel gas as fuel. The energy savings obtained for monthly cleaning were 0.3% for furnace H0101, 0.7% for furnace H0301, and 0.9% for furnace H0471. For the exclusive use of fuel gas, savings of 0.9% were obtained for furnace H0101 and 1.3% for furnaces H0301 and H0471. The economic analysis of the proposed fuel switch shows that operating costs would increase by €621,679 per year. Despite the increase in costs, the 24% reduction in carbon dioxide emissions may justify this additional expense.
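As a rough illustration of the efficiency calculation behind the balances described above, the sketch below computes thermal efficiency as absorbed duty over fired duty; all stream values and heating values are hypothetical, not plant data.

```python
# Hedged sketch: furnace thermal efficiency from an energy balance,
# eta = useful heat absorbed / heat released by fuel combustion.
# All values below are hypothetical, not plant data.

def furnace_efficiency(m_fuel_gas, lhv_fuel_gas, m_fuel_oil, lhv_fuel_oil,
                       q_absorbed):
    """Mass flows in kg/h, lower heating values in kJ/kg, duty in kJ/h."""
    q_fired = m_fuel_gas * lhv_fuel_gas + m_fuel_oil * lhv_fuel_oil
    return q_absorbed / q_fired

eta = furnace_efficiency(
    m_fuel_gas=1200.0, lhv_fuel_gas=47_000.0,   # fuel gas stream
    m_fuel_oil=300.0,  lhv_fuel_oil=40_500.0,   # fuel oil stream
    q_absorbed=5.5e7,                           # heat to process streams
)
print(f"thermal efficiency: {eta:.1%}")
```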
Abstract:
The phlebotomine sand fly Lutzomyia longipalpis has been incriminated as a vector of American visceral leishmaniasis, caused by Leishmania chagasi. However, evidence has accumulated suggesting that it may exist in nature not as a single species but as a species complex. Our goal was to compare four laboratory reference populations of L. longipalpis from distinct geographic regions at the molecular level by RAPD-PCR. We screened genomic DNA for polymorphic sites by PCR amplification with single decamer primers of arbitrary nucleotide sequence. One primer distinguished one population (Marajó Island, Pará State, Brazil) from the other three (Lapinha Cave, Minas Gerais State, Brazil; Melgar, Tolima Department, Colombia; and Liberia, Guanacaste Province, Costa Rica). The population-specific and the conserved RAPD-PCR amplified fragments were cloned and shown to differ only in the number of internal repeats.
Abstract:
Presented at Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's degree in Conservation and Restoration of Textiles
Abstract:
More than 70 species of mycobacteria have been defined, and some can cause disease in humans, especially in immunocompromised patients. Species identification in most clinical laboratories is based on phenotypic characteristics and biochemical tests, and final results are obtained only after two to four weeks. Quick identification methods, by reducing the time to diagnosis, could expedite the institution of specific treatment, increasing the chances of success. PCR restriction-enzyme analysis (PRA) of the hsp65 gene was used as a rapid method for identification of 103 clinical isolates. Band patterns were interpreted by comparison with published tables and patterns available at an Internet site (http://www.hospvd.ch:8005). Concordant results of PRA and biochemical identification were obtained in 76 out of 83 isolates (91.5%). Results from 20 isolates could not be compared due to inconclusive PRA or biochemical identification. The results of this work showed that PRA could improve identification of mycobacteria in a routine setting because it is accurate, fast, and cheaper than conventional phenotypic identification.
Abstract:
Thirty-one infective endocarditis (IE) fatal cases whose diagnosis was first obtained at autopsy were studied. The clinical data of these patients (Group 1) showed significant differences compared to the other 141 IE cases (Group 2). The average age of 53 years in Group 1 patients was 18 years higher than that of Group 2. The Group 1 patients had a low frequency of IE-predisposing heart disease. Both patient groups presented fever (about 87%), but a significantly low frequency of cardiac murmur (25.8%) was observed in Group 1 patients, and echocardiography was performed in only 16.1%, suggesting that an IE diagnosis was not suspected. Likewise, although most Group 1 patients presented with severe acute illness, they did not show the classic IE clinical presentation. Blood cultures were performed in only 64.5% of the Group 1 patients. However, bacteria were isolated in 70% of these blood cultures, and Staphylococcus aureus was isolated in 71.4%. The bacteria attacked the mitral and aortic valves. Complications such as embolization and cardiac failure occurred in almost half of the cases, and these patients also presented with infections of the lungs, urinary tract, and central nervous system. Medical procedures were performed in practically all fatal cases whose diagnosis was first obtained at autopsy. Sepsis occurred in about half of the patients and was followed by shock in more than 25%. This form of IE must be suspected in middle-aged and elderly febrile hospitalized patients with diseases predisposing to infection, with embolization, or undergoing medical procedures.
Abstract:
6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.
Abstract:
In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions, and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then used to identify and visualize the emerging patterns.
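A minimal sketch of the spectral analysis step described above: an impulse-train event series is transformed with the FFT and a power law is fitted to the amplitude spectrum in log-log coordinates; the synthetic catalogue below stands in for the real tornado records.

```python
# Sketch: amplitude spectrum of an impulse-train event series and a
# power-law fit |F(f)| ~ c * f**(-beta) in log-log coordinates.
# The synthetic catalogue stands in for real tornado records.
import numpy as np

rng = np.random.default_rng(0)
T = 4096
x = np.zeros(T)
idx = rng.choice(T, size=200, replace=False)  # event times
x[idx] = rng.pareto(1.5, size=200)            # heavy-tailed event sizes

F = np.abs(np.fft.rfft(x))          # amplitude spectrum
f = np.fft.rfftfreq(T, d=1.0)
mask = f > 0                        # drop the DC component

# Least-squares line in log-log space: log|F| = log c - beta * log f
slope, logc = np.polyfit(np.log(f[mask]), np.log(F[mask]), 1)
print("estimated power-law exponent beta:", round(-slope, 3))
```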
Abstract:
The internal impedance of a wire is a function of frequency. In a conductor where the conductivity is sufficiently high, the displacement current density can be neglected. In this case, the conduction current density is given by the product of the electric field and the conductivity. One of the aspects of high-frequency behavior is the skin effect (SE). The fundamental problem with the SE is that it attenuates the higher-frequency components of a signal. The SE was first verified by Kelvin in 1887. Since then many researchers have worked on the subject, and presently a comprehensive physical model, based on the Maxwell equations, is well established. The Maxwell formalism plays a fundamental role in electromagnetic theory. These equations lead to the derivation of mathematical descriptions useful in many applications in physics and engineering. Maxwell is generally regarded as the 19th-century scientist who had the greatest influence on 20th-century physics, making contributions to the fundamental models of nature. The Maxwell equations involve only integer-order calculus and, therefore, it is natural that the resulting classical models adopted in electrical engineering reflect this perspective. Recently, a closer look at some phenomena present in electrical systems, and the motivation towards the development of precise models, seem to point to the need for a fractional calculus approach. Bearing these ideas in mind, in this study we address the SE and re-evaluate the results, demonstrating its fractional-order nature.
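For concreteness, the sketch below evaluates the classical (integer-order) skin-effect quantities: the skin depth delta = sqrt(2/(omega*mu*sigma)) and the high-frequency per-length resistance, which grows as sqrt(f); this half-order frequency dependence is the behavior the fractional-calculus reading builds on. Copper parameters are used, and the wire radius is an arbitrary example value.

```python
# Classical skin-effect sketch: skin depth and per-length AC resistance
# of a round wire at high frequency, where R_ac grows like sqrt(f) --
# the half-order frequency dependence discussed above.
# Copper parameters; the wire radius is an arbitrary example value.
import numpy as np

MU0   = 4e-7 * np.pi        # vacuum permeability (H/m)
SIGMA = 5.8e7               # copper conductivity (S/m)
a     = 1e-3                # wire radius (m)

def skin_depth(f):
    """delta = sqrt(2 / (omega * mu * sigma))."""
    return np.sqrt(2.0 / (2.0 * np.pi * f * MU0 * SIGMA))

def r_ac_per_m(f):
    """High-frequency approximation (delta << a): current flows in a
    shell of thickness ~delta at the conductor surface."""
    return 1.0 / (SIGMA * 2.0 * np.pi * a * skin_depth(f))

for f in (1e4, 1e5, 1e6):
    print(f"{f:>9.0f} Hz  delta = {skin_depth(f)*1e3:7.4f} mm  "
          f"R' = {r_ac_per_m(f)*1e3:8.4f} mohm/m")
```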
Abstract:
As a result of advances in the control of pulmonary insufficiency in tetanus, the cardiovascular system has increasingly been shown to be a determining factor in morbidity and mortality, but detailed knowledge of the cardiovascular complications of tetanus is scanty. Twenty-four-hour Holter monitoring was carried out in order to detect arrhythmias and sympathetic overactivity in 38 tetanus patients admitted to an ICU. The SDNN index (standard deviation of the normal R-to-R intervals) was useful in detecting adrenergic tonus and ranged from 64.1 ± 27 in the more severe forms of tetanus to 125 ± 69 in the milder ones. Sympathetic overactivity occurred in 86.2% of the more severe forms of the disease, but was also detected in 33% of the milder forms. Half the patients had their sympathetic overactivity detected only by the Holter. The most frequent arrhythmias were isolated supraventricular (55.2%) and ventricular (39.4%) extrasystoles. There was no association of the arrhythmias with the clinical form of tetanus or with the presence of sympathetic overactivity. The present study demonstrated that major cardiovascular dysfunction, particularly sympathetic overactivity, occurs in all forms of tetanus, even the milder ones. This has not been effectively detected with traditional monitoring in the ICU and may not be properly treated.