41 results for feed to gain ratio
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Aim - A quantitative primary study to determine whether increasing source to image distance (SID), with and without the use of automatic exposure control (AEC) for antero-posterior (AP) pelvis imaging, reduces dose whilst still producing an image of diagnostic quality. Methods - Using a computed radiography (CR) system, an anthropomorphic pelvic phantom was positioned for an AP examination using the table bucky. SID was initially set at 110 cm, with tube potential set at a constant 75 kVp, the two outer AEC chambers selected, and a fine focal spot of 0.6 mm. SID was then varied from 90 cm to 140 cm, with two exposures made at each 5 cm interval: one using the AEC and another with a constant 16 mAs derived from the initial exposure. Effective dose (E) and entrance surface dose (ESD) were calculated for each acquisition. Seven experienced observers blindly graded image quality using a 5-point Likert scale and two-alternative forced-choice (2AFC) software. Signal-to-noise ratio (SNR) was calculated for comparison. For each acquisition, femoral head diameter was also measured as an indication of magnification. Results - Results demonstrated that when SID was increased from 110 cm to 140 cm, E and ESD fell by 3.7% and 17.3%, respectively, when using AEC, and by 50.13% and 41.79%, respectively, when the constant mAs was used. No statistically significant difference in image quality (t-test, p = 0.967) was detected when increasing SID, with an intra-observer correlation of 0.77 (95% confidence level). SNR decreased for both AEC (38%) and non-AEC (36%) acquisitions with increasing SID. Conclusion - For CR, increasing SID significantly reduces both E and ESD for AP pelvis imaging without adversely affecting image quality.
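The constant-mAs dose reductions reported above are broadly consistent with simple inverse-square reasoning, illustrated in the sketch below. The 20 cm phantom thickness is an assumed, illustrative value (the abstract does not state it), and real dosimetry also depends on tube output calibration, scatter and backscatter factors.

```python
# Rough inverse-square estimate of the entrance surface dose (ESD) change when
# SID is increased at constant mAs. Illustrative only: the 20 cm phantom
# thickness is an assumed value, not taken from the study.
PHANTOM_THICKNESS_CM = 20.0  # assumed AP thickness of the pelvic phantom

def esd_ratio(sid_from_cm: float, sid_to_cm: float) -> float:
    """Approximate ESD(to) / ESD(from) at constant mAs via the inverse-square
    law applied to the focus-to-skin distance (FSD = SID - phantom thickness)."""
    fsd_from = sid_from_cm - PHANTOM_THICKNESS_CM
    fsd_to = sid_to_cm - PHANTOM_THICKNESS_CM
    return (fsd_from / fsd_to) ** 2

ratio = esd_ratio(110, 140)
print(f"estimated ESD reduction: {1 - ratio:.0%}")  # ~44%, vs. 41.79% reported
```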
Abstract:
Electrocardiographic (ECG) signals are emerging as a recent trend in the field of biometrics. In this paper, we propose a novel ECG biometric system that combines clustering and classification methodologies. Our approach is based on dominant-set clustering and provides a framework for outlier removal and template selection. It enhances typical workflows by making them better suited to new ECG acquisition paradigms that use fingers or hand palms, which yield signals with a lower signal-to-noise ratio that are more prone to noise artifacts. Preliminary results show the potential of the approach, helping to further validate these highly usable setups and ECG signals as a complementary biometric modality.
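The sketch below illustrates the general idea of similarity-based outlier removal and template selection on segmented heartbeats. It uses a simple correlation/medoid heuristic rather than the dominant-set clustering formulation proposed in the paper, and all data are synthetic.

```python
import numpy as np

def select_templates(beats: np.ndarray, n_templates: int = 3, drop_frac: float = 0.2):
    """Pick representative heartbeat templates from segmented beats (n_beats x n_samples).

    Beats are compared by Pearson correlation, the least 'central' fraction is
    discarded as outliers, and the most central remaining beats are kept as
    templates. This is only a sketch of the outlier-removal/template idea, not
    the paper's dominant-set clustering.
    """
    sim = np.corrcoef(beats)                   # pairwise similarity between beats
    centrality = sim.mean(axis=1)              # agreement of each beat with the rest
    keep = np.argsort(centrality)[int(drop_frac * len(beats)):]    # drop outliers
    templates = keep[np.argsort(centrality[keep])[-n_templates:]]  # most central beats
    return beats[templates], templates

# Example with synthetic 'beats': a common waveform plus noise and one outlier.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 2 * np.pi, 200))
beats = base + 0.1 * rng.standard_normal((10, 200))
beats[0] = rng.standard_normal(200)            # corrupted beat (outlier)
templates, idx = select_templates(beats)
print("template beat indices:", idx)
```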
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35].
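For reference, the linear mixing model described above, together with the physical constraints on the abundance fractions, can be written in standard notation (this is the textbook form of the model, not text quoted from the chapter) as

\[
\mathbf{x} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_i \ge 0 \ (i = 1,\dots,p),
\qquad \sum_{i=1}^{p} \alpha_i = 1,
\]

where the columns of \(\mathbf{M}\) are the p endmember signatures, \(\boldsymbol{\alpha}\) collects the abundance fractions, and \(\mathbf{n}\) is additive noise; the nonnegativity and sum-to-one constraints are what confine the (noiseless) observations to a simplex whose vertices are the endmember signatures.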
The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
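As an aside, the skewer-projection idea behind the PPI algorithm described above can be sketched in a few lines; this is a toy illustration (the MNF preprocessing step is omitted, and no claim is made about matching the original implementation).

```python
import numpy as np

def ppi_scores(X: np.ndarray, n_skewers: int = 1000, seed: int = 0) -> np.ndarray:
    """Toy pixel purity index: project every spectral vector onto random
    'skewers' and count how often each pixel is an extreme of the projection.

    X : (L, N) array of N spectral vectors with L bands.
    Returns a length-N array of purity scores (higher = purer candidate).
    """
    rng = np.random.default_rng(seed)
    L, N = X.shape
    scores = np.zeros(N, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(L)        # random direction in band space
        proj = skewer @ X                      # projection of all pixels onto it
        scores[np.argmax(proj)] += 1           # pixel at the maximum extreme
        scores[np.argmin(proj)] += 1           # pixel at the minimum extreme
    return scores

# The pixels with the highest scores are taken as the purest ones.
```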
In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
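A minimal numpy sketch of the iterative orthogonal-projection step described above follows. It is a simplified illustration of the idea, not the published VCA implementation, which additionally performs SNR-dependent dimensionality reduction and an affine/projective transformation of the data.

```python
import numpy as np

def extract_endmembers(X: np.ndarray, p: int, seed: int = 0) -> list:
    """Simplified projection-based endmember extraction.

    X : (L, N) array of N spectral vectors with L bands; p : number of endmembers.
    At each step a random direction is made orthogonal to the endmembers already
    found, all pixels are projected onto it, and the pixel at the extreme of the
    projection is taken as the next endmember.
    """
    rng = np.random.default_rng(seed)
    L, _ = X.shape
    E = np.zeros((L, p))                       # endmember signatures found so far
    indices = []
    for k in range(p):
        w = rng.standard_normal(L)
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])      # orthonormal basis of span(E)
            w = w - Q @ (Q.T @ w)              # keep only the orthogonal component
        w /= np.linalg.norm(w)
        proj = w @ X                           # project every pixel onto w
        idx = int(np.argmax(np.abs(proj)))     # extreme of the projection
        E[:, k] = X[:, idx]
        indices.append(idx)
    return indices

# Synthetic example: 3 endmembers mixed with sum-to-one (Dirichlet) abundances.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                          # 3 endmember spectra, 50 bands
A = rng.dirichlet(np.ones(3) * 0.5, size=500).T  # abundance fractions
X = M @ A + 0.001 * rng.standard_normal((50, 500))
print("selected pixel indices:", extract_endmembers(X, p=3))
```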
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
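The dependence argument can be stated in one line (standard reasoning, added here for clarity rather than quoted from the paper): if the abundance fractions \(s_1,\dots,s_p\) of every pixel sum to one, then

\[
0 = \operatorname{Var}\Big(\sum_{i=1}^{p} s_i\Big) = \sum_{i=1}^{p} \operatorname{Var}(s_i) + \sum_{i \neq j} \operatorname{Cov}(s_i, s_j),
\]

so unless the fractions are degenerate, some covariances must be negative; the sources are therefore correlated and cannot be mutually independent, which is precisely the assumption ICA relies on.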
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time intensity decay due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce due to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation caused by photobleaching, makes long-term biological observation very difficult. In this paper a denoising algorithm for Poisson data, in which the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data-fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by photobleaching. The prior term is built from Gibbs priors and log-Euclidean potential functions, suitable for coping with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
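To make the setup concrete, one common way to write such an observation model is sketched below; the exact parameterization of the decay and of the prior used in the paper may differ. Each photon count \(y_{i,t}\) at pixel i and acquisition time t is modeled as Poisson with an exponentially decaying mean, and the estimate minimizes the Poisson negative log-likelihood plus a Gibbs prior energy \(U\):

\[
y_{i,t} \sim \operatorname{Poisson}\big(x_i\, e^{-\lambda t}\big),
\qquad
(\hat{\mathbf{x}}, \hat{\lambda}) = \arg\min_{\mathbf{x} > 0,\ \lambda > 0}
\sum_{i,t} \Big[ x_i e^{-\lambda t} - y_{i,t} \log\big(x_i e^{-\lambda t}\big) \Big]
+ \beta\, U(\mathbf{x}),
\]

where \(\mathbf{x}\) is the underlying (positive) intensity field, \(\lambda\) the photobleaching decay rate, and \(\beta\) weights the prior against the data-fidelity term.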
Abstract:
This work aims to present the tools of Lean Thinking and to carry out a case study in an organization where this system is used. In a first phase, a literature review of "Lean Thinking" is presented; Lean Thinking is a business system, a way of specifying value and of laying out the best sequence of value-creating actions. Next, a case study is carried out in a company (Engine Division) in the aeronautics sector, with a long and respected tradition, with the goal of reducing TAT (turnaround time), i.e., the time from the arrival of an engine in the division until its delivery to the customer. First, the failures throughout the engine process are analyzed: the repair times of parts removed during engine disassembly that must be available for reassembly, parts requested from other departments of the company that are not available when needed, and the layout of the division. Finally, the results achieved so far in the Engine Division are analyzed and the Lean Thinking tools are applied with a view to implementation. It is important to note that successful implementation requires, first and foremost, a firm commitment from management and full adherence to a culture of seeking out and eliminating waste. To conclude, the importance of this system is highlighted, along with the improvements that can be achieved through its implementation.
Abstract:
We report on structural, electronic, and optical properties of boron-doped, hydrogenated nanocrystalline silicon (nc-Si:H) thin films deposited by plasma-enhanced chemical vapor deposition (PECVD) at a substrate temperature of 150 °C. Film properties were studied as a function of trimethylboron-to-silane ratio and film thickness. An absorption loss of 25% at a wavelength of 400 nm was measured for 20 nm thick films on glass and glass/ZnO:Al substrates. By employing the p(+) nc-Si:H as a window layer, complete p-i-n structures were fabricated and characterized. Low leakage current and enhanced sensitivity in the UV/blue range were achieved by incorporating an a-SiC:H buffer between the p- and i-layers.
Abstract:
Amorphous glass/ZnO-Al/p(a-Si:H)/i(a-Si:H)/n(a-Si1-xCx:H)/Al imagers with different n-layer resistivities were produced by the plasma enhanced chemical vapour deposition (PE-CVD) technique. An image is projected onto the sensing element and leads to spatially confined depletion regions that can be read out by scanning the photodiode with a low-power modulated laser beam. The essence of the scheme is the analog readout and the absence of semiconductor arrays or electrode potential manipulations to transfer the information coming from the transducer. The sensor output characteristics (sensitivity, linearity, blooming, resolution and signal-to-noise ratio) are analysed as a function of the intensity of the optical image projected onto the sensor surface, for different material compositions (0.5 < x < 1). The results show that the responsivity and the spatial resolution are limited by the conductivity of the doped layers. An enhancement of one order of magnitude in the image intensity signal and in the spatial resolution is achieved at 0.2 mW cm(-2) light flux by decreasing the n-layer conductivity by the same amount. A physical model supported by electrical simulation gives insight into the image-sensing technique used.
Abstract:
Interest rate risk is one of the major financial risks faced by banks due to the very nature of the banking business. The most common approach in the literature has been to estimate the impact of interest rate risk on banks using a simple linear regression model. However, the relationship between interest rate changes and bank stock returns need not be exclusively linear. This article provides a comprehensive analysis of the interest rate exposure of the Spanish banking industry employing both parametric and nonparametric estimation methods. Its main contribution is to use, for the first time in the context of banks' interest rate risk, a nonparametric regression technique that avoids the assumption of a specific functional form. On the one hand, it is found that the Spanish banking sector exhibits a remarkable degree of interest rate exposure, although the impact of interest rate changes on bank stock returns has significantly declined following the introduction of the euro. Further, a pattern of positive exposure emerges during the post-euro period. On the other hand, the results corresponding to the nonparametric model support the expansion of the conventional linear model in an attempt to gain greater insight into the actual degree of exposure.
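To make the parametric/nonparametric distinction concrete, the sketch below contrasts a linear (OLS) exposure estimate with a Nadaraya-Watson kernel regression on synthetic data; the data-generating process, bandwidth, and variable names are purely illustrative and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: monthly interest-rate changes (percentage points) and
# bank stock returns with a mildly nonlinear exposure plus noise.
dr = rng.normal(0.0, 0.2, 300)
ret = -0.8 * dr + 0.5 * dr ** 2 + rng.normal(0.0, 0.3, 300)

# Parametric benchmark: simple linear regression (slope = exposure coefficient).
beta, alpha = np.polyfit(dr, ret, 1)

# Nonparametric alternative: Nadaraya-Watson kernel regression, which imposes
# no functional form on the return/interest-rate relationship.
def kernel_regression(x0, x, y, h=0.1):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)  # Gaussian weights
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(dr.min(), dr.max(), 50)
fit = kernel_regression(grid, dr, ret)

print(f"OLS exposure coefficient: {beta:.2f}")
print("kernel-regression fit (first grid points):", np.round(fit[:5], 2))
```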
Abstract:
In this paper our aim is to gain a better understanding of the relationship between market volatility and industrial structure. As conflicting results have been documented regarding the relationship between market industry concentration and market volatility, this study investigates the relationship in the time series. We find that this relationship is significant and positive only for Spain. Our results suggest that we cannot generalize across countries that market industrial structure (concentration) is a significant factor in explaining market volatility.
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialization area: Digital X-ray Imaging.
Abstract:
The present study evaluates the effect of caffeine on the contrast-to-noise ratio (CNR) in susceptibility-weighted (SWI) images. Twenty-four healthy volunteers, all of whom had abstained from caffeine for at least 24 h before the test, were enrolled in the study. SWI images were acquired before and after the ingestion of 100 ml of coffee. The volunteers were divided into four groups of six subjects and evaluated sequentially (15, 25, 30 and 45 min after caffeine intake).
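For context, a common operational definition of CNR in this kind of ROI-based analysis is (a generic formulation, not necessarily the exact measurement protocol used in the study)

\[
\mathrm{CNR} = \frac{\lvert \bar{S}_{\text{vein}} - \bar{S}_{\text{tissue}} \rvert}{\sigma_{\text{noise}}},
\]

where \(\bar{S}_{\text{vein}}\) and \(\bar{S}_{\text{tissue}}\) are the mean signal intensities in regions of interest placed on a vein and on adjacent brain tissue, and \(\sigma_{\text{noise}}\) is the standard deviation of the background noise.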
Abstract:
Master's degree in Radiation Applied to Health Technologies. Specialization area: Magnetic Resonance.
Abstract:
Susceptibility-weighted imaging (SWI) is a relatively new contrast mechanism in MR imaging. Previous studies have found an effect of caffeine on the contrast generated in SWI images. The present study investigates the effect of caffeine on the contrast-to-noise ratio (CNR) in SWI.
Abstract:
Master's degree in Accounting and Management of Financial Institutions.