Abstract:
In this thesis we study the financial contagion and long-memory effects caused by the 2008 and 2010 financial crises in a set of international stock markets. The thesis consists of three interrelated essays. In Essay 1, we use copula theory to test for contagion and to reveal the investor-induced channels through which the 2008 crisis was transmitted to the markets of Belgium, France, the Netherlands, and Portugal (the NYSE Euronext group). We conclude that contagion is present in these markets, that portfolio rebalancing is the most important transmission channel, and that a flight-to-quality phenomenon is at work in the markets. In Essay 2, again using copula models, we assess the contagion effects exerted by the Greek stock market on the NYSE Euronext group markets in the context of the 2010 crisis. The results suggest that only the Portuguese market was subject to contagion during the 2010 crisis; moreover, the contagion effects of the 2008 crisis were clearly stronger than those of the 2010 crisis. In Essay 3, we address long memory by studying the Hurst exponent of the stock markets of Belgium, the U.S., France, Greece, the Netherlands, Japan, the U.K., and Portugal. We find that the long-memory properties of the markets were affected by the crises, especially that of 2008, which increased the markets' long memory and made them more persistent. Finally, using copulas once more, we find that the crises generally increased the correlation between the local Hurst exponents of the markets at the center of each crisis (the U.S. and Greece) and those of the other markets in the sample, suggesting that the Hurst exponent can be used to detect financial contagion. In summary, the results of this thesis suggest that, compared with calm periods, periods of financial crisis tend to make stock markets inefficient and to push them toward persistence and financial contagion.
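As a hedged illustration of how a Hurst exponent can be estimated, the Python sketch below implements rescaled-range (R/S) analysis on a synthetic return series; the thesis may rely on a different (e.g., local) estimator, and all names and data here are illustrative.

    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S) analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes, rs_means = [], []
        size = min_chunk
        while size <= n // 2:
            rs = []
            for start in range(0, n - size + 1, size):
                chunk = x[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations from the mean
                s = chunk.std(ddof=1)
                if s > 0:
                    rs.append((dev.max() - dev.min()) / s)  # rescaled range R/S
            sizes.append(size)
            rs_means.append(np.mean(rs))
            size *= 2
        # E[R/S] ~ c * size^H, so H is the slope of log(R/S) against log(size)
        hurst, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
        return hurst

    # An uncorrelated return series should give H close to 0.5;
    # persistent series (as reported for crisis periods) give H > 0.5.
    print(hurst_rs(np.random.normal(size=4096)))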
Abstract:
Introduction: Anxiety is a common problem in primary care and specialty medical settings. Treating an anxious patient takes more time and adds stress for staff. Unrecognized anxiety may lead to exam repetition and impaired exam performance. Objective: The aim of the study was to examine the anxiety levels of patients undergoing cancer-related diagnostic exams: PET/CT and mammography. Methods: Two hundred and thirty-two patients who underwent PET/CT and one hundred and thirteen women who underwent mammography filled out a questionnaire after the procedure to determine their concerns, expectations, and perceptions of anxiety. Results: Our results show that the main causes of anxiety in patients having a PET/CT are fear of the procedure itself and fear of the results. The patients who suffered the greatest anxiety were those scanned during the initial stage of an oncological disease. In contrast, the diagnosis is the main cause of anxiety in women undergoing mammography. Twenty-eight percent of the women reported having experienced pain or intense pain. Conclusions: Cancer-related diagnostic exams such as PET/CT and mammography are significant generators of anxiety. Patients are often poorly informed and present with a range of anxieties that may ultimately affect examination quality. These results provide knowledge that can be used in the development of future training programs to be integrated into post-graduate courses for health professionals.
Abstract:
Purpose: This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose for pelvis examinations using automatic exposure control (AEC) and non-AEC modes in a computed radiography (CR) system. Methods and Materials: Two experiments were performed on an anthropomorphic pelvis phantom to determine the effects of using AEC and non-AEC modes when applying the 10 kVp rule. In both experiments, images were acquired in 10 kVp increments (60-120 kVp). The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs values from each kVp increment were then used as the baseline for the second experiment, which produced 35 images. A total of 84 images were produced, and a panel of 5 experienced observers scored the images using two-alternative forced-choice (2-AFC) visual grading software. PCXMC software was used to estimate the effective dose. Results: A decrease in perceptual image quality as kVp increases was observed in both the non-AEC and AEC experiments; however, no statistically significant differences (p > 0.05) were found. Image quality scores from all observers, for all mAs values in non-AEC mode, were better up to 90 kVp. Effective dose results show a statistically significant decrease (p < 0.001) in the 75th percentile, from 0.3 mSv at 60 kVp to 0.1 mSv at 120 kVp, when applying the 10 kVp rule in non-AEC mode. Conclusion: No significant reduction in perceptual image quality is observed when increasing kVp, whilst a marked and significant reduction in effective dose is observed.
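As a hedged numerical sketch of the 10 kVp rule applied in the second experiment (each 10 kVp increase allows the mAs to be roughly halved at comparable receptor dose), with a made-up 20 mAs baseline rather than the study's measured values:

    # Illustrative 10 kVp rule: halve the mAs for every 10 kVp increase.
    # The 20 mAs baseline at 60 kVp is illustrative, not the study's measured value.
    baseline_kvp, baseline_mas = 60, 20.0
    for step in range(7):                    # 60, 70, ..., 120 kVp
        kvp = baseline_kvp + 10 * step
        mas = baseline_mas / (2 ** step)     # roughly constant receptor dose
        print(f"{kvp} kVp -> {mas:.2f} mAs")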
Abstract:
This paper discusses the results of applied research in the eco-driving domain based on a large data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set is based on events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and then to classify the drivers involved according to their driving efficiency. Consequently, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal use of the clutch, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 L/100 km on average, which translates to a saving of 30 L per bus in one day. These findings have been taken into account in the drivers' training sessions.
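As a minimal hedged sketch of the driver-classification step (all column names and figures below are hypothetical, not taken from the fleet data), drivers could be bucketed by average consumption with pandas:

    import pandas as pd

    # Hypothetical per-trip records; the real events come from the CAN bus
    # enriched with GPS, weather, and road information.
    events = pd.DataFrame({
        "driver_id": [1, 1, 2, 2, 3, 3],
        "fuel_l_per_100km": [42.0, 44.5, 38.0, 37.2, 48.9, 50.1],
    })

    # Average consumption per driver, then classify by efficiency tercile.
    avg = events.groupby("driver_id")["fuel_l_per_100km"].mean()
    labels = pd.qcut(avg, q=3, labels=["efficient", "average", "inefficient"])
    print(pd.concat({"avg_l_per_100km": avg, "class": labels}, axis=1))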
Abstract:
This paper describes the implementation of a distributed model predictive control approach for automatic generation control. Performance results are discussed by comparing classical techniques (based on integral control) with model predictive control solutions (centralized and distributed) for different operational scenarios involving two interconnected networks. These scenarios include variable load levels (ranging from a small to a large imbalance between generated power and power consumption) and, simultaneously, variable distance between the interconnected network systems. For the two networks, the paper also examines the impact of load variation in an islanded context (each network isolated from the other).
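As a hedged sketch of the classical baseline only (a toy single-area load-frequency model with illustrative parameters, not the paper's two-network model), integral control drives the frequency deviation back to zero after a step load imbalance:

    # Toy load-frequency model: M * df/dt = -D*f + Pm - Pl, with integral control on f.
    M, D = 10.0, 1.0            # inertia and damping (illustrative values)
    Ki = 0.5                    # integral gain of the classical controller
    dt, steps = 0.1, 2000
    f, integ = 0.0, 0.0
    pl = 0.1                    # 0.1 pu step increase in load (the imbalance scenario)
    for _ in range(steps):
        ace = f                 # for a single area the control error reduces to f
        integ += -Ki * ace * dt # integral action accumulates until the error vanishes
        pm = integ              # mechanical power setpoint
        f += dt * (-D * f + pm - pl) / M
    print(f"final frequency deviation: {f:.5f} pu")  # decays toward 0 as control settles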
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.
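As a hedged sketch of the constrained least-squares route (the sum-to-one constraint is enforced softly through the usual row-augmentation trick; the signatures and pixel below are synthetic, not from a real scene):

    import numpy as np
    from scipy.optimize import nnls

    def fcls_unmix(y, M, delta=1e3):
        """Fully constrained least-squares abundances (a >= 0, sum(a) ~= 1).

        Sum-to-one is enforced softly by appending a heavily weighted row of
        ones to the mixing matrix (a common FCLS augmentation trick).
        """
        L, p = M.shape
        M_aug = np.vstack([M, delta * np.ones((1, p))])
        y_aug = np.append(y, delta)
        a, _ = nnls(M_aug, y_aug)        # nonnegative least squares
        return a

    # Toy example: 3 endmember signatures over 50 bands, one mixed pixel.
    rng = np.random.default_rng(0)
    M = rng.uniform(0.0, 1.0, size=(50, 3))          # columns are endmember spectra
    a_true = np.array([0.6, 0.3, 0.1])               # abundances sum to one
    y = M @ a_true + 0.001 * rng.normal(size=50)     # linear mixture plus noise
    print(fcls_unmix(y, M))                          # ~ [0.6, 0.3, 0.1]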
In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting pixels to play the role of mixed sources is not straightforward.

In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.
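As a hedged sketch of the dimensionality-reduction step (plain PCA computed via an SVD of the mean-centered data; the synthetic cube is illustrative):

    import numpy as np

    def pca_reduce(Y, k):
        """Project L-band spectral vectors (columns of Y) onto the top-k principal components."""
        mean = Y.mean(axis=1, keepdims=True)
        Yc = Y - mean                                  # center each band
        U, s, _ = np.linalg.svd(Yc, full_matrices=False)
        return U[:, :k].T @ Yc                         # k x N component scores

    # Toy cube: 200 bands, 1000 pixels generated from 3 endmembers (rank ~ 3 plus noise).
    rng = np.random.default_rng(1)
    M = rng.uniform(size=(200, 3))                     # endmember signatures
    A = rng.dirichlet(np.ones(3), size=1000).T         # abundances, columns sum to one
    Y = M @ A + 0.01 * rng.normal(size=(200, 1000))    # noisy linear mixtures
    Z = pca_reduce(Y, 3)
    print(Z.shape)                                     # (3, 1000)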
This chapter addresses the source dependence in hyperspectral data and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].

We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents ICA results based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
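As a closing hedged illustration of the abundance dependence discussed above (symmetric Dirichlet abundances with illustrative parameters), the sum-to-one constraint forces a fixed negative correlation between fractions, violating the independence that ICA and IFA assume:

    import numpy as np

    # Dirichlet abundances are positive and sum to one, so they cannot be
    # independent: raising one fraction forces the others down.
    rng = np.random.default_rng(2)
    A = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100_000)
    # For a symmetric Dirichlet over K fractions the pairwise correlation
    # is -1/(K-1); here K = 3, so the off-diagonal entries are close to -0.5.
    print(np.corrcoef(A.T))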