827 results for Twitter Financial Market Pearson cross correlation
Abstract:
A vision system is applied to full-field displacement and deformation measurements in solid mechanics. A speckle-like pattern is first formed on the surface under investigation. To determine the displacement field of a speckle image with respect to a reference speckle image, sub-images, referred to as Zones Of Interest (ZOI), are considered. The field is obtained by matching a ZOI in the reference image with the corresponding ZOI in the moved image. Two image-processing techniques are used to implement the matching procedure: the cross-correlation function and the minimum mean square error (MMSE) of the ZOI intensity distribution. The two algorithms are compared, and the influence of the ZOI size on the accuracy of the measurements is studied.
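The matching step lends itself to a short illustration. Below is a minimal Python sketch, assuming grayscale images stored as NumPy arrays, of locating a reference ZOI in the moved image by maximizing the normalized cross-correlation of the intensity distributions; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def match_zoi(ref_img, mov_img, top, left, size, search=10):
    """Find the (dy, dx) displacement of a ZOI by maximizing the
    normalized cross-correlation of intensity distributions.
    The search window is assumed to stay inside the image bounds."""
    zoi = ref_img[top:top + size, left:left + size].astype(float)
    zoi = (zoi - zoi.mean()) / zoi.std()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = mov_img[top + dy:top + dy + size,
                           left + dx:left + dx + size].astype(float)
            cand = (cand - cand.mean()) / cand.std()
            score = (zoi * cand).mean()  # normalized cross-correlation
            # The MMSE variant would instead minimize ((zoi - cand)**2).mean()
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```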
Abstract:
Studies of framing in the EU political system are still rare and suffer from a lack of systematic empirical analysis. Addressing this gap, we ask whether institutional and policy contexts, intertwined with the strategic side of framing, can explain the number and types of frames employed by different stakeholders. We use computer-assisted manual content analysis and develop a fourfold typology of frames to study the frames that were prevalent in the debates on four EU policy proposals within financial market regulation and environmental policy, at the EU level and in Germany, Sweden, the Netherlands and the United Kingdom. The main empirical finding is that both contexts and strategies exert a significant impact on the number and types of frames in EU policy debates. In conceptual terms, the article contributes to developing more fine-grained tools for studying frames and their underlying dimensions.
Abstract:
We propose a novel template matching approach for the discrimination of handwritten and machine-printed text. We first pre-process the scanned document images by performing denoising, circle/line exclusion and word-block-level segmentation. We then align and match characters in a flexibly sized gallery with the segmented regions, using parallelised normalised cross-correlation. The experimental results on the Pattern Recognition & Image Analysis Research Lab-Natural History Museum (PRImA-NHM) dataset show remarkably high robustness of the algorithm in classifying cluttered, occluded and noisy samples, as well as samples with significant amounts of missing data. The algorithm, which achieves an 84.0% classification rate with a 0.16 false positive rate on the dataset, requires no training samples and produces compelling results compared with training-based approaches that have used the same benchmark.
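As a rough illustration of the matching stage, the sketch below scores a segmented region against a gallery of character templates via normalized cross-correlation, here using scikit-image's match_template rather than the authors' parallelised implementation; the gallery, threshold and function names are assumptions.

```python
import numpy as np
from skimage.feature import match_template  # assumes scikit-image is installed

def best_gallery_score(region, gallery):
    """Highest normalized cross-correlation peak over a template gallery."""
    scores = [match_template(region, t).max()
              for t in gallery
              if t.shape[0] <= region.shape[0] and t.shape[1] <= region.shape[1]]
    return max(scores) if scores else 0.0

def classify_region(region, machine_gallery, threshold=0.6):
    """Label a segmented word block; the 0.6 threshold is illustrative."""
    if best_gallery_score(region, machine_gallery) >= threshold:
        return "machine-printed"
    return "handwritten"
```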
Abstract:
In light of the empirical evidence accumulated up to 2011, can one speak of a uniform course of crisis that is generally characteristic of the advanced industrial countries as a whole and can also be identified in the major economies? Can universal changes be established in output, labor markets, consumption and investment that fit earlier experience well, no less than the predictions of the well-known macro models? The answer, at least at the time of writing, is no: neither the characteristics of the course of the crisis and the pace of deterioration in macroeconomic performance, nor the depth and duration of the downturn, show well-identifiable common features that could be fitted comfortably into the existing theoretical frameworks. The study reviews the works of the empirical literature on crises and macroeconomic shocks considered relevant in light of interpretations of financial globalization. It then assesses the performance of the US economy in recession periods over a span of 60 years, with the aim of making the judgment of the severity of the recent crisis sufficiently objective, at least as regards the order of magnitude of the movements in the major macro variables. / === / Based on the empirical evidence accumulated up to 2011, using official statistics from the OECD data bank and the US Commerce Department, the article asks whether one can speak of generally observable recession/crisis patterns that would be universally recognized across all major industrial countries (the G7). The answer to this question is a firm no. Changes and volatility in most major macroeconomic indicators, such as the output gap, labor-market distortions, and large deviations from trend in consumption and in investment, exhibited wide differences in depth and duration across the G7 countries. The large deviations in output gaps, and especially the strong distortions in labor-market inputs and hours worked per capita over the crisis months, can hardly be explained by the existing DSGE and real-business-cycle model classes. Especially troubling are the difficulties in fitting the data into any established model, whether of the business-cycle type or another, in which financial distress reduces economic activity. It is argued that standard business-cycle models with financial-market imperfections have no mechanism for generating deviations from standard theory, and thus shed no light on the key factors underlying the 2007-2009 recession. That does not imply that the financial crisis is unimportant for understanding the recession; it does indicate, however, that we do not fully understand the channels through which financial distress reduced labor input. Long historical trends in the privately held portion of the federal debt in the US economy indicate that the standard macro proposition that public debt crowds out private investment and thus inhibits growth can be strongly challenged, insofar as this ratio is a direct indicator neither of slowing growth nor of recession.
Abstract:
The crisis unfolding since 2007 has brought the subject of early bubbles and crises back into fashion. In its wake, macroeconomic debates have revived, and the validity of the "Great Moderation" and of the new classical synthesis has been called into question. Many interpretations of the Dutch tulip mania of 1634-1637, regarded as the first bubble, are known, ranging from an outburst of mass hysteria through an early example of efficient financial markets to a culture shock. Reviewing these, the article traces the well-known anecdotal accounts of the tulip mania back to their original sources and shows how, and to what ends, economists and historians have used them in their own analyses. / === / Accounts of early bubbles and crises have become fashionable again in economic discourse during the recent downturn. The article, having looked at the revival of macroeconomic debate provoked by the failure of current theory, sums up various interpretations of the Dutch tulip mania of 1634-7. These range from an outburst of popular madness, through an early example of an efficient financial market, to an instance of culture shock. Some well-known anecdotes about the tulip mania are traced back to their original sources, and the article explores the various patterns and intentions in the use economists and historians have made of them.
Abstract:
The paper's hypothesis is that the monetary room for manoeuvre in the European Community is determined by the institutional and strategic characteristics of the ECB, as well as by a financial market environment composed of multiple member states. The methodology rests on evaluating the ECB's decision making and strategy as the institutional aspect, and on monetary transmission in national financial markets. In the policy evaluation, monetary targeting is surveyed through the HICP, the monetary base, central bank rates, exchange rates and the treatment of price impacts. Transmission is examined through an analysis of the structure of the member states' financial markets.
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis and clinical management of lung tumors. However, PET images collected in discrete bins can be significantly affected by noise, as there are lower activity counts in each gated bin unless the total PET acquisition time is prolonged, so gating methods should be combined with imaging-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing different registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms (Centroid-Based, Intensity-Based, Rigid-Body and Optical-Flow registration) were compared, as well as two registration schemes (Direct Scheme and Successive Scheme). Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Iterations were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level and computation time. Comparing the tumors before and after correction, the tumor activity values and tumor volumes were closer to those of the static tumors (the gold standard). Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method, the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
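For concreteness, here is a minimal sketch of the centroid-based registration/integration idea, assuming each gated bin is a 3-D NumPy activity array: every bin is shifted so its tumor centroid coincides with that of the reference bin, and the aligned bins are averaged. This illustrates the general scheme only, not the study's code.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def integrate_bins(bins, ref_index=0):
    """Shift every gated bin onto the reference bin's tumor centroid,
    then average the aligned bins into a single reference-bin image."""
    ref_centroid = np.array(center_of_mass(bins[ref_index]))
    aligned = []
    for vol in bins:
        offset = ref_centroid - np.array(center_of_mass(vol))
        aligned.append(shift(vol, offset, order=1))  # trilinear resampling
    return np.mean(aligned, axis=0)
```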
Abstract:
This research was undertaken to explore dimensions of the risk construct, identify factors related to risk-taking in education, and study risk propensity among employees at a community college. Risk-taking propensity (RTP) was measured by the 12-item BCDQ, which consisted of personal and professional risk-related situations balanced for the money, reputation, and satisfaction dimensions of the risk construct. Scoring ranged from 1.00 (most cautious) to 6.00 (most risky). Surveys including the BCDQ and seven demographic questions relating to age, gender, professional status, length of service, academic discipline, highest degree, and campus location were sent to faculty, administrators, and academic department heads. A total of 325 surveys were returned, resulting in a 66.7% response rate. Subjects were relatively homogeneous for age, length of service, and highest degree. Subjects were also homogeneous for risk-taking propensity: no substantive differences in RTP scores were noted within and among demographic groups, with the possible exception of academic discipline. The mean RTP score for all subjects was 3.77, for faculty was 3.76, for administrators was 3.83, and for department heads was 3.64. The relationship between propensity to take personal risks and propensity to take professional risks was tested by computing Pearson r correlation coefficients. The relationships for the total sample, faculty, and administrator groups were statistically significant, but of limited practical significance. Subjects were placed into risk categories by dividing the response scale into thirds. A 3 x 3 factorial ANOVA revealed no interaction effects between professional status and risk category with regard to RTP score. A discriminant analysis showed that a seven-factor model was not effective in predicting risk category. The homogeneity of the study sample and the effect of a risk-encouraging environment were discussed in the context of the community college. Since very little data on risk-taking in education is available, risk propensity data from this study could serve as a basis for comparison to future research. Results could be used by institutions to plan professional development activities, designed to increase risk-taking and encourage active acceptance of change.
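As an illustration of the Pearson r analysis described above, the snippet below correlates hypothetical per-subject personal and professional risk scores on the BCDQ's 1.00-6.00 scale; the data are synthetic and the effect size is arbitrary.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Hypothetical per-subject subscale means on the BCDQ's 1.00-6.00 scale.
personal = rng.uniform(1.0, 6.0, size=325)
professional = 0.3 * personal + 0.7 * rng.uniform(1.0, 6.0, size=325)

r, p = pearsonr(personal, professional)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # significant yet modest, as in the study
```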
Abstract:
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology for transmitting at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), allowing high transmission rates over severely time-dispersive multipath channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, spreading in the frequency domain makes the time-synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. Some problems remain when MC-CDMA is used. One is the high Peak-to-Average Power Ratio (PAPR) of the transmitted signal: high PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems: imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and cause MAI, which produces serious BER degradation. Moreover, in an uplink system the signals received at a base station are always asynchronous; this also destroys the orthogonality among users and hence generates MAI, degrading system performance. Beyond these two problems, interference must always be considered seriously in any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system, and Low-Density Parity-Check (LDPC) codes are introduced to improve performance. Different interference models are analyzed in multicarrier communication systems, and effective interference suppression for MC-CDMA systems is employed. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
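The PAPR problem mentioned above can be made concrete with a small numeric sketch: an IFFT-based multicarrier symbol with Walsh-Hadamard spreading, whose peak-to-average power ratio is then measured in dB. The subcarrier count, user load and modulation here are illustrative assumptions, not the dissertation's system parameters.

```python
import numpy as np
from scipy.linalg import hadamard

def papr_db(x):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

N = 64                                   # subcarriers (illustrative)
codes = hadamard(N)                      # Walsh-Hadamard spreading codes
bits = np.random.choice([-1, 1], size=4)             # 4 active BPSK users
freq = (codes[:4].T @ bits).astype(complex)          # spread across subcarriers
time = np.fft.ifft(freq) * np.sqrt(N)    # multicarrier modulation via IFFT
print(f"PAPR = {papr_db(time):.1f} dB")
```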
Abstract:
The purpose of the study was to determine the degree of relationship among GRE scores, undergraduate GPA (UGPA), and success in graduate school, as measured by first-year graduate GPA (FGPA), cumulative graduate GPA, and degree attainment status. A second aim was to determine whether the relationships between the composite predictor (GRE scores and UGPA) and the three success measures differed by race/ethnicity and sex. A total of 7,367 graduate student records (master's: 5,990; doctoral: 1,377) from 2000 to 2010 were used to evaluate the relationships among GRE scores, UGPA, and the three success measures. Pearson's correlation, multiple linear and logistic regression, and hierarchical multiple linear and logistic regression analyses were performed to answer the research questions. The results of the correlational analyses differed by degree level. For master's students, the ETS-proposed prediction that GRE scores are valid predictors of first-year graduate GPA was supported by the findings of the present study; for doctoral students, the proposed prediction was only partially supported. Regression and correlational analyses indicated that UGPA was the variable that consistently predicted all three success measures at both degree levels. The hierarchical multiple linear and logistic regression analyses indicated that at the master's level, White students with higher GRE Quantitative Reasoning scores were more likely to attain a degree than Asian Americans, while international students with higher UGPA were more likely to attain a degree than White students. The relationships between the three predictors and the three success measures did not differ significantly between men and women at either degree level. The findings have implications for both practice and research. They provide graduate school administrators with institution-specific validity data for UGPA and the GRE scores, which can be referenced in making admission decisions, and they provide empirical, professionally defensible evidence to support the current practice of using UGPA and GRE scores in admission considerations. In addition, the new evidence on differential prediction will be useful as a reference for future GRE validation researchers.
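As a sketch of the logistic-regression portion of such an analysis, the snippet below regresses degree attainment on a GRE score and UGPA using scikit-learn; the data are synthetic and the coefficients are arbitrary, so this only illustrates the modeling setup, not the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
gre_q = rng.normal(155, 7, n)            # synthetic GRE Quantitative scores
ugpa = rng.uniform(2.0, 4.0, n)          # synthetic undergraduate GPAs
logit = 0.05 * (gre_q - 155) + 1.5 * (ugpa - 3.0)    # arbitrary true effects
attained = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([gre_q, ugpa])
model = LogisticRegression().fit(X, attained)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```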
Abstract:
Recent studies have shown evidence of log-periodic behavior in non-hierarchical systems. An interesting fact is the emergence of such properties in the rupture and breakdown of complex materials and in financial failures. These may be examples of systems with self-organized criticality (SOC). In this work we study the detection of discrete scale invariance, or log-periodicity. We show theoretically the effectiveness of detection methods based on the Fourier transform, not only with prior knowledge of the critical point but also before this point is reached. Specifically, we study the Brazilian financial market with the objective of detecting discrete scale invariance in the Bovespa (Bolsa de Valores de São Paulo) index. Historical series were selected from periods in 1999, 2001 and 2008. We report evidence of possible log-periodicity before crashes, showing the applicability of the approach to the study of systems with likely discrete scale invariance; in the case of financial crashes, this provides additional evidence of the possibility of forecasting them.
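A Fourier-based detection of log-periodicity can be sketched as follows: oscillations that are periodic in ln(t_c - t) become an ordinary spectral peak once the detrended series is resampled uniformly in log-time before the critical point t_c. This is a generic illustration of the approach, with illustrative names, not the authors' code.

```python
import numpy as np

def log_periodic_spectrum(t, signal, t_c, n_samples=512):
    """Power spectrum of a detrended series as a function of ln(t_c - t);
    all observations must satisfy t < t_c."""
    tau = np.log(t_c - t)                # log-time before the critical point
    order = np.argsort(tau)
    grid = np.linspace(tau.min(), tau.max(), n_samples)
    resampled = np.interp(grid, tau[order], signal[order])
    resampled -= resampled.mean()        # drop the zero-frequency component
    spectrum = np.abs(np.fft.rfft(resampled)) ** 2
    freqs = np.fft.rfftfreq(n_samples, d=grid[1] - grid[0])
    return freqs, spectrum               # a sharp peak suggests log-periodicity
```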
Abstract:
Ambient seismic noise has traditionally been considered an unwanted perturbation in seismic data acquisition that "contaminates" the clean recording of earthquakes. Over the last decade, however, it has been demonstrated that consistent information about the subsurface structure can be extracted from cross-correlation of ambient seismic noise. In this context, the rules are reversed: the ambient seismic noise becomes the desired seismic signal, while earthquakes become the unwanted perturbation that needs to be removed. At periods below 30 s, the spectrum of ambient seismic noise is dominated by microseisms, which originate from distant atmospheric perturbations over the oceans. The microseism is the most continuous seismic signal and can be classified as primary, when observed in the range 10-20 s, and secondary, when observed in the range 5-10 s. The Green's function of the propagating medium between two receivers (seismic stations) can be reconstructed by cross-correlating seismic noise recorded simultaneously at the receivers. The reconstruction of the Green's function is generally proportional to the surface-wave portion of the seismic wavefield, as microseismic energy travels mostly as surface waves. In this work, 194 Green's functions obtained from stacking one month of daily cross-correlations of ambient seismic noise recorded on the vertical component of several pairs of broadband seismic stations in Northeast Brazil are presented. The daily cross-correlations were stacked using a time-frequency, phase-weighted scheme that enhances weak coherent signals by reducing incoherent noise. The cross-correlations show that, as expected, the emerged signal is dominated by Rayleigh waves, with dispersion velocities reliably measured for periods ranging between 5 and 20 s. Both permanent stations from a monitoring seismic network and temporary stations from past passive experiments in the region are considered, resulting in a combined network of 33 stations separated by distances between approximately 60 and 1311 km. The Rayleigh-wave dispersion velocity measurements are then used to develop tomographic images of group velocity variation for the Borborema Province of Northeast Brazil. The tomographic maps satisfactorily delineate buried structural features in the region. At short periods (~5 s) the images reflect shallow crustal structure, clearly delineating intra-continental and marginal sedimentary basins, as well as portions of important shear zones traversing the Borborema Province. At longer periods (10-20 s) the images are sensitive to deeper structure in the upper crust, and most of the shallower anomalies fade away; interestingly, some of them do persist. The deep anomalies correlate neither with the location of the Cenozoic volcanism and uplift that marked the evolution of the Borborema Province, nor with available maps of surface heat flow, and their origin remains enigmatic.
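The core cross-correlation step can be sketched as follows, assuming equal-length, preprocessed daily vertical-component records stored as NumPy arrays; a plain linear stack stands in here for the time-frequency phase-weighted stack used in this work, and all names are illustrative.

```python
import numpy as np

def daily_cross_correlation(trace_a, trace_b, max_lag):
    """FFT-based cross-correlation of two same-length daily noise records,
    truncated to lags in [-max_lag, +max_lag] samples."""
    n = len(trace_a)
    spec = np.fft.rfft(trace_a, 2 * n) * np.conj(np.fft.rfft(trace_b, 2 * n))
    cc = np.roll(np.fft.irfft(spec), n)  # zero lag moved to the center
    return cc[n - max_lag:n + max_lag + 1]

def stack_green_function(daily_pairs, max_lag):
    """Estimate the inter-station Green's function by stacking daily CCs."""
    return np.mean([daily_cross_correlation(a, b, max_lag)
                    for a, b in daily_pairs], axis=0)
```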