27 results for Matching to sample

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

100.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
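Since the abstract above walks through the geometric idea behind pure-pixel extraction (project the data onto a direction orthogonal to the subspace spanned by the endmembers already found and take the extreme of the projection), a minimal sketch may help make it concrete. The snippet below is our own simplified illustration of that projection step, not the authors' VCA implementation; all function and variable names are ours and the data are synthetic.

```python
import numpy as np

def extract_endmembers(R, p, seed=0):
    """Toy pure-pixel extraction in the spirit of the projection step described above.

    R : (L, N) array holding N spectral vectors with L bands.
    p : number of endmembers to extract.
    Returns the indices of the pixels selected as endmembers.
    """
    rng = np.random.default_rng(seed)
    indices = []
    E = np.zeros((R.shape[0], 0))            # endmember signatures found so far
    for _ in range(p):
        # Direction orthogonal to the subspace spanned by the current endmembers:
        # draw a random vector and remove its component lying in span(E).
        w = rng.standard_normal(R.shape[0])
        if E.shape[1] > 0:
            Q, _ = np.linalg.qr(E)           # orthonormal basis of span(E)
            w = w - Q @ (Q.T @ w)
        w = w / np.linalg.norm(w)
        # The extreme of the projection is taken as the new endmember.
        k = int(np.argmax(np.abs(w @ R)))
        indices.append(k)
        E = np.column_stack([E, R[:, k]])
    return indices

# Synthetic usage: linear mixtures x = M a with a >= 0 and sum(a) = 1, plus planted pure pixels.
rng = np.random.default_rng(1)
L, N, p = 50, 1000, 3
M = rng.random((L, p))                        # endmember signatures (columns)
A = rng.dirichlet(np.ones(p), N).T            # abundance fractions, shape (p, N)
A[:, :p] = np.eye(p)                          # guarantee one pure pixel per endmember
X = M @ A
print(extract_endmembers(X, p))               # expected to pick the planted pure pixels 0, 1, 2
```

Because the maximum of a convex function over a simplex is attained at a vertex, each projection step lands on one of the remaining pure pixels, which is the property the abstract relies on.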

Relevance:

80.00%

Publisher:

Abstract:

Introduction – Single-photon emission computed tomography gated to the electrocardiographic signal (Gated-SPECT) is essential for the joint assessment of myocardial perfusion and left ventricular (LV) function. Objective – To investigate the relationship between LV function and the myocardium/right-lung (M/PD) and myocardium/left-lung (M/PE) uptake indices (IC) in Gated-SPECT studies with 99mTc-tetrofosmin. Methodology – A sample of 32 patients who underwent Gated-SPECT studies on clinical indication, subdivided into two groups: Group I (GI) – patients with a clinical history of acute myocardial infarction (AMI); Group II (GII) – patients with a clinical history of ischaemia. For each patient, static thoraco-abdominal images and two Gated-SPECT myocardial studies were acquired (one-day stress/rest protocol). Regions of interest (ROI) were defined on the static images to calculate the IC. In the Gated-SPECT studies, the Quantitative Gated SPECT/Quantitative Perfusion SPECT software was used to calculate the left ventricular ejection fraction (LVEF). Descriptive statistics were used to characterize the sample. Spearman's test was applied to assess the correlation between LVEF and the IC in each patient group, and the Wilcoxon test was used to compare LVEF at rest and at stress. Results – In the stress Gated-SPECT studies no statistically significant correlation was found between LVEF and the IC for GI or GII; at rest there is a statistically significant positive correlation between LVEF and the IC for GI, whereas no correlation was found for GII. Comparing stress and rest LVEF values in the two groups, statistically significant differences were found, with stress LVEF … in the total sample that underwent Gated-SPECT (GI and GII).
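The quantitative part of this abstract reduces to computing ratio-type uptake indices from ROI counts and relating them to LVEF with non-parametric tests. A minimal sketch is shown below; the values are hypothetical placeholders, and scipy's spearmanr and wilcoxon are used as standard implementations of the tests named in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

# Hypothetical mean ROI counts from the static images (one entry per patient).
myocardium = np.array([52.0, 61.3, 48.7, 55.1])
right_lung = np.array([21.4, 19.8, 23.0, 20.5])

# Uptake index, e.g. myocardium-to-right-lung ratio (IC M/PD in the abstract).
ic_m_pd = myocardium / right_lung

# LVEF (%) from QGS for the same patients, at stress and at rest (hypothetical values).
lvef_stress = np.array([45.0, 58.0, 39.0, 51.0])
lvef_rest = np.array([48.0, 60.0, 44.0, 53.0])

# Spearman correlation between LVEF and the uptake index within one group.
rho, p_value = spearmanr(lvef_rest, ic_m_pd)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# Wilcoxon signed-rank test comparing paired stress vs. rest LVEF values.
stat, p_wx = wilcoxon(lvef_stress, lvef_rest)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_wx:.3f}")
```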

Relevance:

80.00%

Publisher:

Abstract:

The task-based approach entails identifying all the tasks performed in each workplace in order to refine the exposure characterization. The starting point of this approach is the recognition that only through a more detailed and comprehensive understanding of tasks is it possible to understand the exposure scenario in greater detail. In addition, it allows the most suitable risk management measures to be identified. This approach can also be used when there is a need to identify the workplace surfaces to be sampled for chemicals for which the dermal route is the most important exposure route. In this case it is possible to identify, through detailed observation of task performance, the surfaces that workers contact most frequently and that may be contaminated. Aim: to identify the surfaces to sample when performing occupational exposure assessment of antineoplastic agents. Surface selection was done based on the task-based approach.

Relevance:

40.00%

Publisher:

Abstract:

Past studies found three types of infant coping behaviour during the Face-to-Face Still-Face paradigm (FFSF): Positive Other-Directed Coping, Negative Other-Directed Coping, and Self-Directed Coping. In the present study, we investigated whether those coping styles are predicted by infants' physiological responses, maternal representations of their infant's temperament, maternal interactive behaviour in free play, and infant birth and medical status. The sample consisted of 46 healthy, prematurely born infants and their mothers. At one month, infant heart rate was recorded at baseline. At three months (corrected age), infant heart rate was registered during the FFSF episodes. Mothers described their infants' temperament using a validated Portuguese temperament scale when the infants were three months of corrected age. Maternal interactive behaviour was also evaluated during a free-play situation using the CARE-Index. Our findings indicate that positive coping behaviours were correlated with birth weight, heart rate (HR), gestational age, and maternal sensitivity in free play. Gestational age and maternal sensitivity predicted Positive Other-Directed Coping behaviours. Moreover, Positive Other-Directed Coping was negatively correlated with HR during the Still-Face episode. Self-directed behaviours were correlated with HR during the Still-Face and Recovery episodes and with maternal controlling/intrusive behaviour. However, only maternal behaviour predicted Self-Directed Coping. Early social responses seem to be affected by infants' birth status and by maternal interactive behaviour. Therefore, internal and external factors together contribute to infants' ability to cope and to re-engage after stressful social events.

Relevance:

30.00%

Publisher:

Abstract:

The main purpose of this research is to identify the hidden knowledge and learning mechanisms in the organization in order to disclose tacit knowledge and transform it into explicit knowledge. Most firms tend to duplicate their efforts, acquiring extra knowledge and new learning skills while forgetting to exploit the existing ones, thus wasting resources that could be applied to increase added value within the firm's overall competitive advantage. This unique value, in the shape of creation, acquisition, transformation and application of learning and knowledge, is not disseminated throughout the individual, the group and, ultimately, the company itself. This work is based on three variables that explain the behaviour of learning as a process of construction and acquisition of knowledge, namely internal social capital, technology and external social capital, which include the main attributes of learning and knowledge that help us capture the essence of this symbiosis. Absorptive capacity provides the right tool to explore this uncertainty within the firm, making it possible to achieve a match between the learning skills and the knowledge needed to support the overall strategy of the firm. This study takes into account a sample of the Portuguese textile industry and is based on a multisectorial analysis that makes a cross-functional analysis possible, in order to check the validity of the results and to better understand and capture the dynamics of organizational behaviour.

Relevance:

30.00%

Publisher:

Abstract:

Formaldehyde, classified by the IARC as carcinogenic in humans and experimental animals, is a chemical agent that is widely used in histopathology laboratories. Exposure to this substance is epidemiologically linked to cancer and to nuclear changes detected by the cytokinesis-block micronucleus test (CBMN). This method is extensively used in molecular epidemiology, since it provides information on several biomarkers of genotoxicity, such as micronuclei (MN), which are biomarkers of chromosome breakage or loss, nucleoplasmic bridges (NPB), common biomarkers of chromosome rearrangement, poor repair and/or telomere fusion, and nuclear buds (NBUD), biomarkers of elimination of amplified DNA. The aim of this study is to compare the frequency of genotoxicity biomarkers, provided by the CBMN assay in peripheral lymphocytes and the MN test in buccal cells, between individuals occupationally exposed and not exposed to formaldehyde, taking into account other environmental factors, namely tobacco and alcohol consumption. The sample comprised two groups: 56 individuals occupationally exposed to formaldehyde (cases) and 85 unexposed individuals (controls), from whom both peripheral blood and exfoliated epithelial cells of the oral mucosa were collected in order to measure the genetic endpoints proposed in this study. The mean TWA8h exposure level was 0.16±0.11 ppm. Genotoxicity biomarkers showed significant increases in exposed workers in comparison with controls (Mann–Whitney test, p < 0.002), and the analysis of confounding factors showed that there were no differences between genders. As for age, only the mean MN frequency in lymphocytes was found to be significantly higher in older individuals within the exposed group (p = 0.006), and there was also evidence of an interaction between age and gender with regard to that biomarker among those exposed. Smoking habits did not influence the frequency of the biomarkers, whereas alcohol consumption only influenced the MN frequency in lymphocytes in controls (p = 0.011), with drinkers showing higher mean values. These results provide evidence of an association between occupational exposure to formaldehyde and the presence of genotoxicity biomarkers.
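The exposed-versus-control comparison described above is a standard two-sample rank test; a minimal sketch, with hypothetical micronucleus counts, is given below using scipy's mannwhitneyu.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical micronucleus (MN) frequencies per 1000 binucleated lymphocytes.
mn_exposed = np.array([8, 12, 9, 15, 11, 7, 13, 10])
mn_controls = np.array([3, 5, 4, 6, 2, 5, 4, 3])

# Two-sided Mann-Whitney U test comparing exposed workers with controls.
u_stat, p_value = mannwhitneyu(mn_exposed, mn_controls, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```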

Relevance:

30.00%

Publisher:

Abstract:

This work presents for the first time a systematic study on the optimization of the electrochemical cleaning time of a mercury film when it is used as a working electrode material in the analysis of toxic metals, such as Pb2+, used as a model metal, in real samples by square-wave anodic stripping voltammetry (SWASV). The optimization study of the film's cleaning time aimed at attaining a minimum Pb2+ level in the film after the re-oxidation step of the pre-concentrated metal, given the impossibility of completely removing traces of the electroactive species from the film. This value was kept constant in each concentration range studied, thus ensuring that all assays were performed under identical initial conditions. An assay performed on a synthetic sample was taken as reference. In it, given the absence of matrix effects, and after the electrochemical cleaning step, a direct proportionality was observed between the residual amount of Pb2+ in the film (which, for the cleaning time used, was never completely removed) and the Pb2+ concentration in the solution. This fact determined a high correlation between Pb2+ peak current and Pb2+ concentration, which was not observed when real samples (tree leaves) were analyzed. This behavior may result from interfering surfactants, which are always present in real samples with complex matrices. Cleaning time optimization was performed for the following Pb2+ concentration ranges in the real samples of complex matrix: 0.006-0.020, 0.020-0.080, 0.060-0.200 and 0.100-0.600 ppb. As expected, in order to obtain identical levels of film cleaning efficiency, longer cleaning times were needed for higher concentrations. The optimized cleaning times for the concentration ranges under study were 120, 150, 180 and 300 s, respectively.
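Checking the proportionality between stripping peak current and Pb2+ concentration mentioned above is a simple linear regression; the sketch below uses scipy's linregress on hypothetical calibration points within one of the stated concentration ranges.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical calibration data within the 0.020-0.080 ppb range.
conc_ppb = np.array([0.020, 0.035, 0.050, 0.065, 0.080])   # Pb2+ concentration
peak_current_nA = np.array([1.9, 3.4, 4.8, 6.5, 7.9])      # SWASV peak current

# Linear fit: a high r value indicates the direct proportionality observed
# for the synthetic (matrix-free) sample after the cleaning step.
fit = linregress(conc_ppb, peak_current_nA)
print(f"slope = {fit.slope:.1f} nA/ppb, intercept = {fit.intercept:.2f} nA, r = {fit.rvalue:.4f}")
```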

Relevance:

30.00%

Publisher:

Abstract:

The concept of explaining the use of an old tool like the Smith chart, using modern tools like MATLAB [1] scripts in combination with e-learning facilities, is exemplified by two MATLAB scripts. These display, step by step, the graphical procedure that must be used to solve the double-stub impedance-matching problem. These two scripts correspond to two different possible ways to analyze this matching problem, and they are important for students to learn by themselves.
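As a rough numerical counterpart to the graphical Smith-chart procedure described above, the sketch below solves a double-stub matching problem by scanning the susceptance of the first stub so that, after the admittance is transformed through the connecting line, its real part equals 1 (the unit-conductance circle on the chart); the second stub then cancels the remaining susceptance. This is our own Python illustration under assumed element values, not one of the MATLAB scripts the abstract refers to.

```python
import numpy as np

def transform_along_line(y, theta):
    """Normalized admittance seen after a lossless line of electrical length theta (radians)."""
    t = np.tan(theta)
    return (y + 1j * t) / (1 + 1j * y * t)

def double_stub_match(y_load, theta, b1_grid=np.linspace(-10, 10, 20001)):
    """Scan the first stub susceptance b1 so that Re{y} = 1 at the second stub; return (b1, b2) pairs."""
    solutions = []
    y1 = y_load + 1j * b1_grid                  # shunt stub 1 placed right at the load
    y2 = transform_along_line(y1, theta)        # admittance transformed to the second stub's plane
    err = np.real(y2) - 1.0
    crossings = np.where(np.diff(np.sign(err)) != 0)[0]   # crossings of the g = 1 circle
    for k in crossings:
        b1 = b1_grid[k]
        y2k = transform_along_line(y_load + 1j * b1, theta)
        b2 = -np.imag(y2k)                      # stub 2 cancels the residual susceptance
        solutions.append((b1, b2))
    return solutions

# Example: normalized load admittance and lambda/8 stub separation (theta = pi/4).
y_L = 0.4 - 0.3j
for b1, b2 in double_stub_match(y_L, np.pi / 4):
    y_in = transform_along_line(y_L + 1j * b1, np.pi / 4) + 1j * b2
    print(f"b1 = {b1:+.3f}, b2 = {b2:+.3f}, matched y_in = {y_in.real:.3f}{y_in.imag:+.3f}j")
```

Each detected crossing corresponds to one of the two classical double-stub solutions, mirroring the two intersection points found graphically on the Smith chart.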

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the efficient design of an improved and dedicated switched-capacitor (SC) circuit capable of linearizing CMOS switches to allow SC circuits to reach low distortion levels. The described circuit (SC linearization control circuit, SLC) has the advantage over conventional clock-bootstrapping circuits of exhibiting low stress, since large gate voltages are avoided. This paper presents exhaustive corner simulation results for an SC sample-and-hold (S/H) circuit which employs the proposed and optimized circuits, together with the experimental evaluation of a complete 10-bit ADC utilizing this S/H circuit. These results show that the SLC circuits can reduce distortion and increase dynamic linearity above 12 bits for wide input signal bandwidths.

Relevance:

30.00%

Publisher:

Abstract:

The main purpose of this study is to analyse the changes caused by the global financial crisis in the influence of board characteristics on corporate results, in terms of corporate performance, corporate risk-taking, and earnings management. The sample comprises S&P 500 listed firms during 2002-2008. This study reveals that the environmental conditions call for different behaviour from directors to fulfil their responsibilities and suggests changes in normative and voluntary guidelines for improving good practices in the boardroom.

Relevance:

30.00%

Publisher:

Abstract:

Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. Sometimes, however, these fluctuations become decisive, especially when unforeseen large drops in asset prices are observed that can result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of the returns of three financial indices, DJI, FTSE 100 and NIKKEI 225, representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (for in-sample estimation, this holds for the right tail of the distribution of returns).
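The McNeil-Frey procedure referenced above boils down to fitting a generalized Pareto distribution (GPD) to the tail of the standardized GARCH residuals and scaling the resulting quantile by the forecast volatility. A minimal sketch is given below; the standardized residuals and the one-step-ahead mean and volatility forecasts are assumed to come from a previously fitted AR-GARCH model (for instance via the arch package), and the numbers are placeholders.

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(z, mu_next, sigma_next, q=0.99, u_quantile=0.90):
    """One-step-ahead EVT-based Value-at-Risk in the spirit of McNeil & Frey (2000).

    z          : standardized residuals from a fitted AR-GARCH model
    mu_next    : one-step-ahead conditional mean forecast of the return
    sigma_next : one-step-ahead conditional volatility forecast
    Returns VaR expressed as a positive loss. Assumes the fitted GPD shape xi != 0.
    """
    w = -np.asarray(z)                                   # standardized losses
    u = np.quantile(w, u_quantile)                       # tail threshold
    exceedances = w[w > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0)     # GPD shape and scale
    n, n_u = len(w), len(exceedances)
    # GPD-based q-quantile of the standardized loss distribution.
    w_q = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1)
    return -mu_next + sigma_next * w_q

# Toy usage with simulated, roughly unit-variance standardized residuals (placeholder data).
rng = np.random.default_rng(0)
z = rng.standard_t(df=5, size=2000) / np.sqrt(5 / 3)
print(evt_var(z, mu_next=0.0, sigma_next=0.012, q=0.99))
```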

Relevance:

30.00%

Publisher:

Abstract:

The importance of Social Responsibility (SR) is greater if this business variable is related to other variables of a strategic nature in business activity (the competitive success the company achieves, the performance the firm develops and the innovations it carries out). The hypothesis is that organizations that focus on SR are those that obtain higher outputs and innovate more, achieving greater competitive success. A scale for measuring orientation to SR was defined in order to determine the degree of relationship between the above elements. This instrument is original because no previous scales exist in the literature that measure, on the one hand, the three classic sub-constructs that SR is theoretically accepted to comprise and, on the other hand, the relationship between SR and the other variables. As a result of the causal relationship analysis we arrive at a scale of 21 indicators, validated with a sample of firms from the Autonomous Community of Extremadura; to our knowledge, this is the first empirical validation of these dimensions in this context.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to analyze the forecasting ability of the CARR model proposed by Chou (2005) using the S&P 500. We extend the data sample, allowing for the analysis of different stock market circumstances, and propose the use of various range estimators in order to analyze their forecasting performance. Our results show that there are two range-based models that outperform the forecasting ability of the GARCH model: the Parkinson model is better for upward trends and for volatilities above and below the mean, while the CARR model is better for downward trends and volatilities close to the mean.
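The Parkinson model mentioned above relies on a range-based variance estimator built from daily highs and lows; a minimal sketch of that estimator, with placeholder price data, is shown below.

```python
import numpy as np

def parkinson_volatility(high, low):
    """Parkinson (1980) range-based estimate of daily volatility.

    Uses sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2) over the sample.
    """
    log_range = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(log_range ** 2) / (4.0 * np.log(2.0)))

# Placeholder daily high/low prices for an index.
high = np.array([4502.1, 4531.8, 4498.4, 4550.2, 4571.0])
low = np.array([4466.3, 4490.5, 4461.7, 4509.8, 4532.6])
print(f"Parkinson daily volatility: {parkinson_volatility(high, low):.4%}")
```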

Relevance:

30.00%

Publisher:

Abstract:

The production of microbial volatile organic compounds (MVOCs) by fungi has mainly been considered from the viewpoint of indoor pollution by microorganisms, but the relevance of fungal metabolites in working environments has not been sufficiently studied. The purpose of this study was to assess exposure to MVOCs in a waste-handling unit. MultiRAE equipment (RAE Systems) with a 10.6 eV lamp was used to measure MVOC concentrations. The measurements were made near the workers' noses during normal activities. All measurements were made continuously and lasted at least 5 minutes; the highest value obtained in each measurement was considered. In addition, to assess fungal contamination, five 50-litre air samples were collected by the impaction method at 140 L/minute, at one metre height, onto malt extract agar with the antibiotic chloramphenicol (MEA). MVOC results ranged between 4.7 ppm and 8.9 ppm in the 6 locations considered. These results are eight times higher than those normally obtained in indoor settings. Regarding the fungal results, two genera were identified in the air: Penicillium, found in all samples in uncountable colonies, and Rhizopus, found in only one sample (40 CFU/m3). These fungi are known MVOC producers, namely of terpenoids, ketones, alcohols and others. Until now, there has been no evidence that MVOCs are toxicologically relevant, but further epidemiological research is necessary to elucidate their role in human health, particularly in occupational settings where microbiological contamination is common. Additionally, further research should concentrate on quantitative analyses of specific MVOCs.
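The colony count reported above converts to an airborne concentration by dividing the colonies counted on the plate by the sampled air volume; a minimal sketch of that arithmetic is shown below (for instance, 2 colonies from a 50-litre sample would correspond to the quoted 40 CFU/m3; the colony count used here is only illustrative).

```python
def cfu_per_m3(colonies, sample_volume_litres):
    """Convert a plate colony count into an airborne concentration (CFU/m3)."""
    return colonies / (sample_volume_litres / 1000.0)

# Example: 2 colonies counted on a plate from a 50-litre impaction sample.
print(cfu_per_m3(2, 50))   # -> 40.0 CFU/m3
```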