31 results for Index reduction techniques
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Implementing monolithic DC-DC converters for low-power portable applications with a standard low-voltage CMOS technology leads to lower production costs and higher reliability. Moreover, it allows miniaturization by the integration of two units in the same die: the power management unit that regulates the supply voltage for the second unit, a dedicated signal processor that performs the required functions. This paper presents original techniques that limit spikes in the internal supply voltage of a monolithic DC-DC converter, extending the use of the same technology for both units. These spikes are mainly caused by fast current variations in the path connecting the external power supply to the internal pads of the converter power block. This path includes two parasitic inductances built into the bond wires and the package pins. Although these parasitic inductances have relatively low values when compared with the typical external inductances of DC-DC converters, their effects cannot be neglected when switching high currents at high switching frequency. The associated overvoltage frequently causes destruction, reliability problems, and/or control malfunction. Different spike reduction techniques are presented and compared. The proposed techniques were used in the design of the gate driver of a DC-DC converter included in a power management unit implemented in a standard 0.35 µm CMOS technology.
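As a rough illustration of why these parasitic inductances matter, the overvoltage they produce can be estimated from V = L·dI/dt. The following minimal sketch uses assumed, representative values (the bond-wire and pin inductances, current step, and switching edge time are illustrative, not taken from the paper):

```python
# Rough estimate of the supply spike caused by parasitic bond-wire/pin
# inductance during a fast switching transient: V = L * dI/dt.
# All numbers below are illustrative assumptions, not values from the paper.

L_bond_wire = 1.0e-9   # H, assumed bond-wire inductance (~1 nH)
L_pin       = 1.5e-9   # H, assumed package-pin inductance (~1.5 nH)
delta_i     = 0.5      # A, assumed current step switched by the power block
rise_time   = 2.0e-9   # s, assumed switching edge time

di_dt = delta_i / rise_time                 # A/s
v_spike = (L_bond_wire + L_pin) * di_dt     # V

print(f"dI/dt = {di_dt:.2e} A/s")
print(f"Estimated spike: {v_spike:.2f} V")  # ~0.63 V for these assumptions
```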
Abstract:
As is widely known, in structural dynamic applications, ranging from structural coupling to model updating, the incompatibility between measured and simulated data is inevitable, due to the problem of coordinate incompleteness. Usually, the experimental data from conventional vibration testing are collected at a few translational degrees of freedom (DOF) due to applied forces, using hammer or shaker exciters, over a limited frequency range. Hence, one can only measure a portion of the receptance matrix: a few columns, related to the forced DOFs, and a few rows, related to the measured DOFs. In contrast, by finite element modeling, one can obtain a full data set, both in terms of DOFs and identified modes. Over the years, several model reduction techniques have been proposed, as well as data expansion ones. However, the latter are significantly fewer and the demand for efficient techniques remains an open issue. In this work, a technique is proposed for expanding measured frequency response functions (FRF) over the entire set of DOFs. This technique is based upon a modified version of Kidder's method and the principle of reciprocity, and it avoids the need for modal identification, as it uses the measured FRFs directly. In order to illustrate the performance of the proposed technique, a set of simulated experimental translational FRFs is taken as reference to estimate rotational FRFs, including those that are due to applied moments.
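To make the reciprocity principle mentioned above concrete, a minimal sketch (not the authors' implementation) builds the receptance matrix of a small lumped mass-spring model and checks its symmetry, H_jk(ω) = H_kj(ω), which is what allows a measured column of the receptance matrix to stand in for the corresponding row; the model and its parameter values are arbitrary assumptions:

```python
import numpy as np

# Minimal 3-DOF mass-spring chain, used only to illustrate reciprocity of
# the receptance matrix H(w) = (K - w^2 M)^-1; all values are arbitrary.
m = 1.0          # kg
k = 1.0e4        # N/m
M = np.diag([m, m, m])
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]])

w = 2 * np.pi * 10.0                       # rad/s, arbitrary excitation frequency
H = np.linalg.inv(K - (w ** 2) * M)        # receptance matrix at w

# Reciprocity: the response at DOF j due to a force at DOF k equals the
# response at DOF k due to a force at DOF j, i.e. H is symmetric.
print(np.allclose(H, H.T))                 # True
print(H[0, 2], H[2, 0])                    # equal entries (up to round-off)
```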
Abstract:
Master's degree in Radiation Applied to Health Technologies.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
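As a concrete illustration of the linear mixing model and of the constrained least-squares unmixing mentioned above, the sketch below generates synthetic mixtures from known endmember signatures and recovers nonnegative, sum-to-one abundances with nonnegative least squares. It is a simplified stand-in for the methods cited in the chapter; all signatures, abundances, and noise levels are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic linear mixing model: x = M a + n, with L bands and p endmembers.
L, p, n_pixels = 50, 3, 200
M = rng.uniform(0.1, 1.0, size=(L, p))          # endmember signatures (assumed)

# Abundances drawn from a Dirichlet distribution: positive, summing to one.
A = rng.dirichlet(alpha=np.ones(p), size=n_pixels).T    # p x n_pixels
X = M @ A + 0.01 * rng.standard_normal((L, n_pixels))   # noisy mixtures

# Unmix each pixel with nonnegative least squares (positivity enforced);
# the sum-to-one constraint is applied afterwards by renormalization.
A_hat = np.empty_like(A)
for i in range(n_pixels):
    a, _ = nnls(M, X[:, i])
    A_hat[:, i] = a / a.sum() if a.sum() > 0 else a

print("mean abs abundance error:", np.mean(np.abs(A_hat - A)))
```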
Abstract:
Micronuclei (MN) in exfoliated epithelial cells are widely used as biomarkers of cancer risk in humans. MN are classified as biomarkers of chromosome breakage and loss. They are small, extranuclear bodies that arise in dividing cells from acentric chromosome/chromatid fragments or whole chromosomes/chromatids that lag behind in anaphase and are not included in the daughter nuclei in telophase. Buccal mucosa cells have been used in biomonitoring exposed populations because these cells are in the direct route of exposure to ingested pollutants, are capable of metabolizing proximate carcinogens to reactive chemicals, and are easily and rapidly collected by brushing the buccal mucosa. The objective of the present study was to further investigate if, and to what extent, different stains have an effect on the results of micronuclei studies in exfoliated cells. The staining techniques compared are: Papanicolaou (PAP), Modified Papanicolaou, May-Grünwald Giemsa (MGG), Giemsa, Harris's Hematoxylin, Feulgen with Fast Green counterstain, and Feulgen without counterstain.
Abstract:
Storm and tsunami deposits are generated by similar depositional mechanisms, making their discrimination hard to establish using classic sedimentologic methods. Here we propose an original approach to identify tsunami-induced deposits by combining numerical simulation and rock magnetism. To test our method, we investigate the tsunami deposit of the Boca do Rio estuary generated by the 1755 Lisbon earthquake, which is well described in the literature. We first test the 1755 tsunami scenario using a numerical inundation model to provide physical parameters for the tsunami wave. Then we use concentration-sensitive (MS, SIRM) and grain-size-sensitive (χARM, ARM, B1/2, ARM/SIRM) magnetic proxies coupled with SEM microscopy to unravel the magnetic mineralogy of the tsunami-induced deposit and its associated depositional mechanisms. In order to study the connection between the tsunami deposit and the different sedimentologic units present in the estuary, the magnetic data were processed by multivariate statistical analyses. Our numerical simulation shows a large inundation of the estuary, with flow depths varying from 0.5 to 6 m and a run-up of ~7 m. Magnetic data show a dominance of paramagnetic minerals (quartz) mixed with a lesser amount of ferromagnetic minerals, namely titanomagnetite and titanohematite, both of detrital origin and reworked from the underlying units. Multivariate statistical analyses indicate a better connection between the tsunami-induced deposit and a mixture of Units C and D. All these results point to a scenario where the energy released by the tsunami wave was strong enough to overtop and erode an important amount of sand from the littoral dune and mix it with reworked materials from underlying layers at least 1 m deep. The method tested here represents an original and promising tool to identify tsunami-induced deposits in similar embayed beach environments.
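For readers unfamiliar with how multivariate statistics are used to link a deposit to candidate source units, the sketch below runs a principal component analysis on a small table of magnetic proxy values; the samples, proxy values, and unit labels are made-up placeholders, not the measurements from this study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical magnetic-proxy table: rows are samples (tsunami deposit and
# candidate source units), columns are proxies (MS, SIRM, ARM, ARM/SIRM).
rng = np.random.default_rng(1)
units = ["tsunami"] * 5 + ["unit_C"] * 5 + ["unit_D"] * 5
X = np.vstack([
    rng.normal([1.0, 5.0, 0.8, 0.16], 0.1, size=(5, 4)),   # tsunami deposit
    rng.normal([1.1, 5.2, 0.9, 0.17], 0.1, size=(5, 4)),   # Unit C
    rng.normal([0.6, 3.0, 0.4, 0.13], 0.1, size=(5, 4)),   # Unit D
])

# Standardize the proxies, then project onto the first two principal
# components; samples that plot together have similar magnetic mineralogy.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for label, (pc1, pc2) in zip(units, scores):
    print(f"{label:8s}  PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```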
Abstract:
The exposure index (lgM) obtained from a radiographic image may be a useful feedback indicator to the radiographer about the appropriate exposure level in routine clinical practice. This study aims to evaluate lgM in orthopaedic radiography performed in a standard clinical environment. We analysed the lgM of 267 exposures performed with an AGFA CR system. The mean value of lgM in our sample is 2.14, a significant difference (P < 0.05) from the 1.96 lgM reference. Data show that 72% of exposures are above the 1.96 lgM reference and 42% are above the limit of 2.26. Median values of lgM are above 1.96 and below 2.26 for Speed Class (SC) 200 (2.16) and SC 400 (2.13). The interquartile range is lower in SC 400 than in SC 200. The data seem to indicate that lgM values are above the manufacturer's reference of 1.96. Departmental exposure charts should be optimised to reduce the dose given to patients.
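A minimal sketch of the kind of comparison reported above, testing whether a sample of lgM values differs from the 1.96 manufacturer reference with a one-sample t-test; the lgM values below are simulated placeholders, not the study's 267 exposures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated lgM values centred near the mean reported in the abstract (2.14);
# these are placeholders, not the actual 267 measured exposures.
lgM = rng.normal(loc=2.14, scale=0.30, size=267)
reference = 1.96

t_stat, p_value = stats.ttest_1samp(lgM, popmean=reference)
print(f"mean lgM = {lgM.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"fraction above {reference}: {(lgM > reference).mean():.0%}")
```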
Abstract:
Nowadays, Computed Tomography (CT) is one of the imaging methods that contributes most to the X-ray radiation dose received by patients. The purpose of this study is to evaluate CT doses and contribute to the establishment of Diagnostic Reference Levels (DRL) in the Greater Lisbon region, Portugal. Dose measurements were performed on 5 multidetector CT scanners, considering the abdomen as the anatomic region of interest. All measurements were performed using an ionization chamber and a phantom to obtain the CT dose index (CTDI) and the dose-length product (DLP), which are used to determine DRLs. These values were compared not only with the European reference dose values but also with DRL studies developed in other countries such as the United Kingdom, Greece, and Taiwan. The results revealed that the DRL values obtained in this study (16.7 mGy for CTDIvol and 436.5 mGy·cm for DLP) diverge from the European Guidelines (by about ±50%) but are very close to the DRLs established in the countries considered. These differences may be explained by the equipment evaluated and by the examination protocols adopted by the radiology professionals at the institutions analysed.
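DRLs are typically derived from the third quartile (75th percentile) of the dose distribution observed across scanners or examinations. The sketch below shows that calculation on hypothetical CTDIvol and DLP values for five scanners; the numbers are placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical abdominal CT dose values for 5 multidetector scanners;
# placeholders only, not the measurements reported in the study.
ctdi_vol = np.array([12.5, 14.8, 16.2, 17.1, 18.0])   # mGy
dlp      = np.array([380., 410., 430., 455., 470.])   # mGy·cm

# Local DRLs are commonly set at the 75th percentile of the observed values.
drl_ctdi = np.percentile(ctdi_vol, 75)
drl_dlp  = np.percentile(dlp, 75)

print(f"DRL (CTDIvol): {drl_ctdi:.1f} mGy")
print(f"DRL (DLP):     {drl_dlp:.1f} mGy·cm")
```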
Abstract:
The automatic organization of e-mail messages is a current challenge in the field of machine learning. The excessive number of messages affects more and more users, especially those who use e-mail as a communication and work tool. This thesis addresses the problem of automatic e-mail organization by proposing a solution aimed at the automatic labeling of messages. Automatic labeling relies on the e-mail folders previously created by the users, treating them as labels, and on the suggestion of multiple labels for each message (top-N). Several learning techniques are studied, and the various fields that make up an e-mail message are analysed to determine their suitability as classification features. The focus of this work is on the textual fields (the subject and the body of the messages), for which different representations, feature selection methods, and classification algorithms are studied. The participant fields are also evaluated, using classification algorithms that represent them either with the vector space model or as a graph. The various fields are combined for classification using the Majority Voting classifier combination technique. Tests are carried out with a subset of Enron e-mail messages and with a private data set provided by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). These data sets are analysed in order to understand the characteristics of the data. The system is evaluated through classifier accuracy. The results obtained show significant improvements in comparison with related work.
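To illustrate the kind of top-N label suggestion described above (not the thesis' exact pipeline), the sketch below trains a TF-IDF plus Naive Bayes classifier on a few toy messages, treating folders as labels, and returns the N most probable folders for a new message; the messages and folder names are made up:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: message text (subject + body) and the folder it was
# filed into, used here as its label. All examples are made up.
messages = [
    "meeting agenda for project kickoff",
    "please review the attached budget spreadsheet",
    "lunch on friday?",
    "quarterly budget forecast numbers",
    "kickoff meeting rescheduled to monday",
    "are you free for lunch today",
]
folders = ["projects", "finance", "personal", "finance", "projects", "personal"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, folders)

def suggest_folders(text, n=2):
    """Return the n most probable folders (top-N label suggestion)."""
    proba = model.predict_proba([text])[0]
    top = np.argsort(proba)[::-1][:n]
    return [(model.classes_[i], proba[i]) for i in top]

print(suggest_folders("budget review meeting next week", n=2))
```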
Abstract:
Seismic recordings of IRIS/IDA/GSN station CMLA and of several temporary stations in the Azores archipelago are processed with P and S receiver function (PRF and SRF) techniques. Contrary to regional seismic tomography, these methods provide estimates of the absolute velocities and of the Vp/Vs ratio down to a depth of ~300 km. Joint inversion of PRFs and SRFs for a few data sets consistently reveals a division of the subsurface medium into four zones with distinctly different Vp/Vs ratios: the crust, ~20 km thick, with a ratio of ~1.9 in the lower crust; the high-Vs mantle lid, with a strongly reduced Vp/Vs velocity ratio relative to the standard 1.8; the low-velocity zone (LVZ), with a velocity ratio of ~2.0; and the underlying upper-mantle layer with a standard velocity ratio. Our estimates of crustal thickness greatly exceed previous estimates (~10 km). The base of the high-Vs lid (the Gutenberg discontinuity) is at a depth of ~80 km. The LVZ, with a reduction of S velocity of ~15% relative to the standard (IASP91) model, terminates at a depth of ~200 km. The average thickness of the mantle transition zone (TZ) is evaluated from the time difference between the S410p and SKS660p seismic phases, which are robustly detected in the S and SKS receiver functions. This thickness is practically the same as the standard IASP91 value of 250 km and is characteristic of a large region of the North Atlantic outside the Azores plateau. Our data are indicative of a reduction of the S-wave velocity of several percent relative to the standard velocity in a depth interval from 460 to 500 km. This reduction is found in the nearest vicinity of the Azores, in the region sampled by the PRFs, but, as evidenced by the SRFs, it is missing at a distance of a few hundred kilometers from the islands. We speculate that this anomaly may correspond to the source of a plume which generated the Azores hotspot. Previously, a low S velocity in this depth range was found with SRF techniques beneath a few other hotspots.
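As a side note on how receiver functions constrain crustal thickness and Vp/Vs, a minimal sketch (not the authors' joint-inversion code) converts the delay time of a Moho P-to-S converted phase into crustal thickness using the standard flat-layer travel-time relation; the delay time, ray parameter, and velocities below are assumed example values:

```python
import math

# Delay of the Moho Ps conversion relative to direct P, for a flat layer:
#   t_Ps = H * (sqrt(1/Vs^2 - p^2) - sqrt(1/Vp^2 - p^2))
# so the crustal thickness H follows directly from t_Ps. Example values assumed.
t_ps  = 2.5     # s, assumed Ps-P delay time
p     = 0.06    # s/km, assumed ray parameter
vp    = 6.3     # km/s, assumed average crustal P velocity
vp_vs = 1.9     # Vp/Vs ratio (the abstract reports ~1.9 in the lower crust)
vs    = vp / vp_vs

H = t_ps / (math.sqrt(1.0 / vs**2 - p**2) - math.sqrt(1.0 / vp**2 - p**2))
print(f"estimated crustal thickness: {H:.1f} km")
```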
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and the decoder, promising to fulfil novel requirements from applications such as video surveillance, sensor networks, and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably to raise it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
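For readers unfamiliar with motion-compensated frame interpolation, the sketch below implements a very simplified bilateral block-matching interpolator between two decoded reference frames; it is illustrative only and does not include the motion-field regularization or the framework proposed in the paper:

```python
import numpy as np

def interpolate_frame(prev, nxt, block=8, search=4):
    """Very simplified block-based motion-compensated frame interpolation.

    For each block of the frame to be interpolated, a symmetric (bilateral)
    motion search compares prev shifted by -v with nxt shifted by +v and keeps
    the vector with the smallest SAD; the block is then the average of the two
    matched blocks. Illustrative only, not the codec described in the paper.
    """
    h, w = prev.shape
    out = np.zeros_like(prev, dtype=np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best_sad, best_pair = None, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = y - dy, x - dx      # block position in prev
                    y1, x1 = y + dy, x + dx      # block position in nxt
                    if not (0 <= y0 <= h - block and 0 <= x0 <= w - block and
                            0 <= y1 <= h - block and 0 <= x1 <= w - block):
                        continue
                    b0 = prev[y0:y0 + block, x0:x0 + block].astype(np.float64)
                    b1 = nxt[y1:y1 + block, x1:x1 + block].astype(np.float64)
                    sad = np.abs(b0 - b1).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_pair = sad, (b0, b1)
            out[y:y + block, x:x + block] = 0.5 * (best_pair[0] + best_pair[1])
    return out

# Tiny synthetic test: a bright square moving 2 pixels per frame.
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 255
nxt  = np.zeros((32, 32)); nxt[8:16, 12:20] = 255
side_info = interpolate_frame(prev, nxt)
print(side_info[8:16, 10:18].mean())   # square interpolated roughly midway between the two
```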
Abstract:
In this paper we analyze the relationship between volatility in index futures markets and the number of open and closed positions. We observe that, although in general both positions are positively correlated with contemporaneous volatility, in the case of the S&P 500 only the number of open positions has an influence on volatility. Additionally, we observe a stronger positive relationship on days characterized by extreme movements in these contracting positions, when they dominate the market. Finally, our findings suggest that day traders are not associated with an increase in volatility, whereas uninformed traders, both opening and closing their positions, are.
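A minimal sketch of the type of relationship tested above: regressing daily volatility on the numbers of opened and closed positions with ordinary least squares. The series are simulated placeholders, not the paper's futures data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 500

# Simulated daily series (placeholders): positions opened, positions closed,
# and a volatility measure that, by construction, loads on opened positions.
opened = rng.poisson(1000, n_days).astype(float)
closed = rng.poisson(950, n_days).astype(float)
volatility = 0.5 + 0.002 * opened + rng.normal(0, 0.5, n_days)

# OLS: volatility_t = b0 + b1 * opened_t + b2 * closed_t + e_t
X = np.column_stack([np.ones(n_days), opened, closed])
beta, *_ = np.linalg.lstsq(X, volatility, rcond=None)
print(f"intercept = {beta[0]:.3f}, b_opened = {beta[1]:.5f}, b_closed = {beta[2]:.5f}")
```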
Abstract:
This article presents a Markov chain framework to characterize the behavior of the CBOE Volatility Index (VIX index). Two possible regimes are considered: high volatility and low volatility. The specification accounts for deviations from normality and the existence of persistence in the evolution of the VIX index. Since the time evolution of the VIX index seems to indicate that its conditional variance is not constant over time, I consider two different versions of the model. In the first one, the variance of the index is a function of the volatility regime, whereas the second version includes an autoregressive conditional heteroskedasticity (ARCH) specification for the conditional variance of the index.
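To make the two-regime setup concrete, the sketch below simulates a two-state Markov chain (high/low volatility) with regime-dependent variance for the daily changes of an index; the transition probabilities and variances are illustrative assumptions, not the article's estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-regime Markov-switching model for the daily changes of a volatility
# index: state 0 = low volatility, state 1 = high volatility. All parameters
# are illustrative assumptions, not estimates from the article.
P = np.array([[0.97, 0.03],    # transition probabilities from the low state
              [0.10, 0.90]])   # transition probabilities from the high state
mu    = np.array([0.0, 0.0])   # regime means of the daily change
sigma = np.array([0.8, 2.5])   # regime standard deviations

T = 1000
states = np.empty(T, dtype=int)
x = np.empty(T)
states[0] = 0
x[0] = rng.normal(mu[0], sigma[0])
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
    x[t] = rng.normal(mu[states[t]], sigma[states[t]])

print("fraction of time in high-volatility regime:", states.mean())
print("sample std by regime:", x[states == 0].std(), x[states == 1].std())
```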
Abstract:
Master's degree in Cardiovascular Diagnostic and Intervention Technology. Area of specialization: Cardiovascular Ultrasonography.