991 results for empirical correlation
Abstract:
This paper assesses the validity and reliability of two instruments measuring service quality, the SERVPERF and SERVQUAL scales, replicated in a novel cultural setting: a Portuguese energy company. To provide insights and strategies for managerial intervention, a relation between customers’ satisfaction and service quality is established. The empirical study suggests a superior convergent and predictive validity of the SERVPERF scale for measuring service quality in this setting when compared to SERVQUAL. The main differences between this study and previous ones are that it relies on confirmatory factor analysis, that the instruments are validated using the same measures suggested by their creators, and that it extends the line of research to a novel cultural setting, a Portuguese energy company. Concerning the relationship between service quality and customers’ satisfaction, all of the service quality attributes correlate almost equally with the satisfaction ones, with a lower weight for tangibles.
Abstract:
Previous work by our group introduced a novel concept and sensor design for “off-the-person” ECG, for which evidence on how it compares against standard clinical-grade equipment has been largely missing. Our objectives with this work are to characterise the off-the-person approach in light of the current ECG systems landscape, and assess how the signals acquired using this simplified setup compare with clinical-grade recordings. Empirical tests have been performed with real-world data collected from a population of 38 control subjects, to analyze the correlation between both approaches. Results show off-the-person data to be correlated with clinical-grade data, demonstrating the viability of this approach to potentially extend preventive medicine practices by enabling the integration of ECG monitoring into multiple dimensions of people’s everyday lives. © 2015, IUPESM and Springer-Verlag Berlin Heidelberg.
Abstract:
We scrutinize the realized stock-bond correlation based upon high-frequency returns. We use quantile regressions to pin down the systematic variation of the extreme tails with their economic determinants. The correlation dependence behaves differently when the correlation is strongly negative and when it is strongly positive. The important explanatory variables at the extreme low quantile are the short rate, the yield spread, and the volatility index. At the extreme high quantile the bond market liquidity is also important. The empirical findings are only partially robust to using less precise measures of the stock-bond correlation. The results are not caused by the recent financial crisis. Keywords: Extreme returns; Financial crisis; Realized stock-bond correlation; Quantile regressions; VIX. JEL Classifications: C22; G01; G11; G12
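As a hedged illustration of the method just described (not the authors' code or data), the following Python sketch fits quantile regressions of a synthetic realized stock-bond correlation series on placeholder versions of the determinants named above, at an extreme low, the median, and an extreme high quantile; comparing the coefficient vectors across quantiles is what reveals tail-specific dependence.

```python
# Illustrative sketch (not the paper's code): quantile regression of a realized
# stock-bond correlation series on placeholder macro-financial drivers.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 2000  # hypothetical daily sample
X = pd.DataFrame({
    "short_rate": rng.normal(2.0, 0.5, n),
    "yield_spread": rng.normal(1.5, 0.7, n),
    "vix": rng.normal(20.0, 6.0, n),
    "bond_liquidity": rng.normal(0.0, 1.0, n),
})
# Synthetic realized correlation, bounded in [-1, 1]
y = np.tanh(-0.05 * X["vix"] + 0.1 * X["yield_spread"] + rng.normal(0, 0.3, n))

Xc = sm.add_constant(X)
for q in (0.05, 0.50, 0.95):  # extreme low, median, extreme high quantiles
    res = QuantReg(y, Xc).fit(q=q)
    print(f"q = {q}")
    print(res.params.round(3))
```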
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. Until now, however, the nature of this relationship has remained elusive because most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of these possible combinations of vertical and lateral correlation lengths, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
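The link between image statistics and velocity statistics runs through the 2-D autocorrelation of the image. As a minimal sketch of that step, assuming a synthetic anisotropic image rather than real depth-migrated data, the following Python code estimates vertical and lateral correlation lengths from the image autocorrelation and reports their ratio, the aspect ratio identified above as the reliably recoverable parameter.

```python
# Minimal sketch (assumed workflow, not the authors' code): estimate vertical and
# lateral correlation lengths from the 2-D autocorrelation of an image and report
# their ratio (the aspect ratio).
import numpy as np
from scipy.ndimage import gaussian_filter

def autocorr2d(img):
    """Normalized 2-D autocorrelation via the Wiener-Khinchin theorem."""
    img = img - img.mean()
    spec = np.abs(np.fft.fft2(img)) ** 2
    ac = np.fft.fftshift(np.fft.ifft2(spec).real)
    return ac / ac.max()

def corr_length(profile, dx=1.0):
    """Lag at which a 1-D autocorrelation profile first drops below 1/e."""
    half = profile[len(profile) // 2:]          # from zero lag outward
    below = np.where(half < 1.0 / np.e)[0]
    return below[0] * dx if below.size else np.nan

# Synthetic anisotropic "image": white noise smoothed more laterally than vertically
rng = np.random.default_rng(1)
image = gaussian_filter(rng.normal(size=(512, 512)), sigma=(3, 12))  # (vertical, lateral)

ac = autocorr2d(image)
cz, cx = ac.shape[0] // 2, ac.shape[1] // 2
a_z = corr_length(ac[:, cx])   # vertical correlation length
a_x = corr_length(ac[cz, :])   # lateral correlation length
print(f"vertical {a_z:.1f}, lateral {a_x:.1f}, aspect ratio {a_x / a_z:.2f}")
```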
Abstract:
The objective of this work was to propose a way of using Tocher's clustering method to obtain a matrix similar to the cophenetic matrix obtained for hierarchical methods, which would allow the calculation of a cophenetic correlation. To illustrate how the proposed cophenetic matrix is obtained, we used two dissimilarity matrices - one obtained with the generalized squared Mahalanobis distance and the other with the Euclidean distance - between 17 garlic cultivars, based on six morphological characters. Basically, the proposal for obtaining the cophenetic matrix was to use the average distances within and between clusters, after performing the clustering. A function in the R language was proposed to compute the cophenetic matrix for Tocher's method. The empirical distribution of this correlation coefficient was briefly studied. For both dissimilarity measures, the values of cophenetic correlation obtained for Tocher's method were higher than those obtained with the hierarchical methods (Ward's algorithm and average linkage - UPGMA). Comparisons between clusterings made with the agglomerative hierarchical methods and with Tocher's method can therefore be performed using a common criterion: the correlation between the matrices of original and cophenetic distances.
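The paper's own implementation is an R function, not reproduced here; the following is a hedged Python sketch of the described idea (average within-cluster distance for pairs in the same cluster, average between-cluster distance otherwise, then correlation with the original dissimilarities), assuming cluster assignments from Tocher's method are already available.

```python
# Sketch of the described idea (an assumption-laden re-implementation, not the
# authors' R function): build a cophenetic-like matrix for a non-hierarchical
# clustering and correlate it with the original dissimilarity matrix.
import numpy as np

def tocher_cophenetic(D, labels):
    """D: square dissimilarity matrix; labels: cluster assignment per item."""
    labels = np.asarray(labels)
    n = D.shape[0]
    C = np.zeros_like(D, dtype=float)
    clusters = np.unique(labels)
    avg = {}  # average distances within (a == b) and between (a != b) clusters
    for a in clusters:
        for b in clusters:
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            block = D[np.ix_(ia, ib)]
            if a == b:
                vals = block[np.triu_indices(len(ia), k=1)]
                avg[(a, b)] = vals.mean() if vals.size else 0.0
            else:
                avg[(a, b)] = block.mean()
    for i in range(n):
        for j in range(n):
            if i != j:
                C[i, j] = avg[(labels[i], labels[j])]
    return C

def cophenetic_correlation(D, C):
    iu = np.triu_indices(D.shape[0], k=1)
    return np.corrcoef(D[iu], C[iu])[0, 1]

# Hypothetical example: 6 items, Euclidean dissimilarities, a 3-cluster assignment
rng = np.random.default_rng(2)
pts = rng.normal(size=(6, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
labels = [0, 0, 1, 1, 2, 2]
print(round(cophenetic_correlation(D, tocher_cophenetic(D, labels)), 3))
```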
Abstract:
Evapotranspiration is the process of water loss from vegetated soil due to evaporation and transpiration, and it may be estimated by various empirical methods. This study evaluated the performance of the following methods in estimating potential evapotranspiration, compared to the Penman-Monteith standard method (FAO56), for the climatic conditions of Uberaba, state of Minas Gerais, Brazil: Blaney-Criddle, Jensen-Haise, Linacre, Solar Radiation, Hargreaves-Samani, Makkink, Thornthwaite, Camargo, Priestley-Taylor and Original Penman. A set of 21 years of monthly data (1990 to 2010) was used, comprising the climatic elements temperature, relative humidity, wind speed and insolation. The empirical methods for estimating reference evapotranspiration were compared with the standard method using linear regression, simple statistical analysis, the Willmott agreement index (d) and the performance index (c). The Makkink and Camargo methods showed the best performance, with "c" values of 0.75 and 0.66, respectively. The Hargreaves-Samani method presented the best linear relation with the standard method, with a correlation coefficient (r) of 0.88.
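For reference, a minimal Python sketch of the comparison statistics, assuming the standard Willmott agreement index and taking the performance index as c = d × r (the usual definition in this literature; treat it as an assumption here), applied to hypothetical monthly ETo values.

```python
# Minimal sketch of the comparison statistics (assuming the usual definitions:
# Willmott's agreement index d and the performance index c = d * r).
import numpy as np

def willmott_d(pred, obs):
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

def performance_c(pred, obs):
    r = np.corrcoef(pred, obs)[0, 1]
    return willmott_d(pred, obs) * r, r

# Hypothetical monthly ETo series (mm/day): FAO56 reference vs. an empirical method
eto_fao56 = np.array([4.1, 4.3, 4.0, 3.5, 3.0, 2.7, 2.9, 3.6, 4.2, 4.5, 4.4, 4.2])
eto_empirical = np.array([4.0, 4.1, 3.8, 3.6, 3.1, 2.9, 3.0, 3.4, 4.0, 4.3, 4.5, 4.1])

c, r = performance_c(eto_empirical, eto_fao56)
print(f"r = {r:.2f}, d = {willmott_d(eto_empirical, eto_fao56):.2f}, c = {c:.2f}")
```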
Abstract:
This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are part of the European Monetary Union and are in different economic states. The time-varying hedge ratio was constructed using conditional variances and correlations generated by multivariate GARCH models. We used three different underlying portfolios: national equity markets, government bond markets, and the combination of the two. These underlying portfolios were hedged using credit default swaps. The empirical part includes in-sample and out-of-sample analyses, constructed using constant and dynamic models. In almost every case the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with the prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable instruments for hedging and act more as speculative instruments.
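A hedged sketch of the core quantity, with synthetic conditional volatilities and correlations standing in for the thesis's multivariate GARCH estimates: the minimum-variance time-varying hedge ratio h_t = ρ_t·σ_asset,t/σ_CDS,t and the variance of the resulting hedged position.

```python
# Sketch (assumed series, not the thesis data): minimum-variance time-varying hedge
# ratio built from conditional volatilities and correlations, compared with an
# unhedged position.
import numpy as np

rng = np.random.default_rng(3)
T = 500
sigma_asset = 0.01 + 0.005 * np.abs(np.sin(np.linspace(0, 6, T)))   # stand-in for GARCH vols
sigma_cds = 0.012 + 0.004 * np.abs(np.cos(np.linspace(0, 6, T)))
rho = -0.4 + 0.2 * np.sin(np.linspace(0, 12, T))                    # stand-in for DCC correlations

h_t = rho * sigma_asset / sigma_cds        # dynamic hedge ratio per period

# Simulate returns consistent with those conditional moments
z1, z2 = rng.standard_normal(T), rng.standard_normal(T)
r_asset = sigma_asset * z1
r_cds = sigma_cds * (rho * z1 + np.sqrt(1 - rho ** 2) * z2)

r_hedged = r_asset - h_t * r_cds
print(f"unhedged var {r_asset.var():.2e}, hedged var {r_hedged.var():.2e}")
```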
Abstract:
There are vast changes in the work environment, and traditional rules and management methods might not be suitable for today’s employees anymore. The meaning of work is also changing as younger and more highly educated generations enter the labour market. Old customs need to be re-validated and new approaches should be taken into use. This paper strongly emphasizes the importance of happiness research and happiness at work. Values concerning the meaning of work are changing; people demand happiness and quality in all aspects of their lives. The aim of this study is to define happiness - especially at work - and to explain how it can be measured and what kind of results can be achieved. I also want to find out how the contents of work and the working environment might enhance happiness. The correlation between education and happiness is discussed and examined. I am aware that the findings and theories concentrate mainly on Western countries and highlight the values and work environments of those societies. The main aim of the empirical study is to find out whether there are connections between happiness and work in data collected by the World Values Survey in 2005, and whether profession has an effect on happiness. Other factors, such as the correlation of age, sex, education and income, are examined too. I also want to find out what kind of values people have towards work and how these affect happiness levels. The focus is on two nations: Finland (N=1014) and Italy (N=1012). I have also included a global comparison comprising all 54 countries (N=66,566) in the 5th wave (2005-2008) of the World Values Survey. The results suggest that people are generally happy around the world, with happiness decreasing with age, the educated being happier than the uneducated, and the employed happier than the unemployed. People working in neat “white collar” jobs are more likely to be happy than those working in factories or outdoors. Money makes us happier, until a certain level is reached. Work is important to people, and the importance of work adds to happiness. Work is also highly appreciated, but there are more happy people among those who do not appreciate work that highly. Safety matters most when looking for a job, and there are more happy people among those who ranked the importance of work first when looking for a job than among those to whom income is the most important aspect. People are more likely to be happy when the quality of work is high, that is, when their job consists of creative and cognitive tasks and when they have a feeling of independence.
Abstract:
This Master’s thesis analyses the effectiveness of different hedging models for the BRICS (Brazil, Russia, India, China, and South Africa) countries. Hedging performance is examined by comparing two dynamic hedging models to a conventional OLS-regression-based model. The dynamic hedging models employed are Constant Conditional Correlation (CCC) GARCH(1,1) and Dynamic Conditional Correlation (DCC) GARCH(1,1) with Student’s t-distribution. In order to capture both the Great Moderation and the latest financial crisis, the sample period extends from 2003 to 2014. To determine whether the dynamic models outperform the conventional one, the reduction of portfolio variance for in-sample data with contemporaneous hedge ratios is first determined, and then the holding period of the portfolios is extended to one and two days. In addition, the accuracy of hedge-ratio forecasts is examined on the basis of out-of-sample variance reduction. The results are mixed and suggest that dynamic hedging models may not provide enough benefit to justify the more demanding estimation and daily portfolio adjustment. In this sense, the results are consistent with the existing literature.
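As a hedged illustration of the effectiveness metric used in this kind of comparison (synthetic data, not the BRICS sample), the sketch below contrasts the in-sample variance reduction achieved by a constant OLS hedge ratio with that of a time-varying ratio standing in for the CCC/DCC estimates.

```python
# Sketch of the hedging-effectiveness comparison (synthetic data; names are
# placeholders): variance reduction of a constant OLS hedge ratio versus a
# time-varying conditional hedge ratio.
import numpy as np

def variance_reduction(spot, futures, hedge_ratio):
    hedged = spot - hedge_ratio * futures
    return 1.0 - hedged.var() / spot.var()

rng = np.random.default_rng(4)
T = 1000
futures = 0.015 * rng.standard_normal(T)
# Spot tracks futures with a slowly time-varying exposure plus noise
beta_t = 0.8 + 0.3 * np.sin(np.linspace(0, 8, T))
spot = beta_t * futures + 0.005 * rng.standard_normal(T)

cov = np.cov(spot, futures)
beta_ols = cov[0, 1] / cov[1, 1]                       # constant (OLS) hedge ratio
print(f"constant hedge: {variance_reduction(spot, futures, beta_ols):.1%} variance reduction")
print(f"dynamic hedge:  {variance_reduction(spot, futures, beta_t):.1%} variance reduction")
```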
Abstract:
A combined experimental and theoretical study of the absorption spectra of a group of closely related pyrylium perchlorates 1-11 is presented. Minor changes in the position of the substituents lead to drastic changes in the absorption spectra in this series of compounds. We have attempted to explain the observed changes using the x,y-band notation developed by Balaban and co-workers. The absorption spectra of all compounds are compared with results from time-dependent density functional theory (TDDFT) and Zerner’s intermediate neglect of differential overlap (ZINDO/S) calculations. The results of the calculations are in good agreement with the experimental observations, and an interesting correlation between Balaban’s notation and the MO transitions is obtained for the simple derivatives. It is suggested that for more complex systems, such as the α- and β-naphthyl-substituted systems, the empirical method is not appropriate.
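For orientation only, a minimal sketch of the kind of TDDFT excitation-energy calculation referred to above, using PySCF as an assumed tool and formaldehyde as a placeholder geometry (the pyrylium structures are not reproduced here).

```python
# Illustrative sketch only (assumed tools and a placeholder molecule, not the paper's
# systems): a small TDDFT calculation of low-lying excitation energies with PySCF.
from pyscf import gto, dft, tddft

mol = gto.M(
    atom="""C 0.000  0.000  0.000
            O 0.000  0.000  1.210
            H 0.000  0.943 -0.540
            H 0.000 -0.943 -0.540""",
    basis="6-31g",
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"          # common hybrid functional choice for such comparisons
mf.kernel()              # ground-state DFT

td = tddft.TDA(mf)       # Tamm-Dancoff approximation to TDDFT
td.nstates = 3           # a few lowest excitations
td.kernel()
for i, e in enumerate(td.e, start=1):
    print(f"excitation {i}: {e * 27.2114:.2f} eV")
```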
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
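A hedged sketch of the point-by-point metrics mentioned (synthetic solar wind speed series, not the CISM model output), also illustrating how a small timing offset degrades mean-square error and correlation even when the large-scale structure is right.

```python
# Sketch of point-by-point validation metrics on synthetic series: mean-square error
# and Pearson correlation between modeled and observed solar wind speed, plus the
# effect of a small timing offset on those scores.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 27 * 4)                       # four hypothetical solar rotations, daily cadence
observed = 400 + 150 * (np.sin(2 * np.pi * t / 27) > 0.3) + 20 * rng.standard_normal(t.size)

def scores(model, obs):
    mse = np.mean((model - obs) ** 2)
    r = np.corrcoef(model, obs)[0, 1]
    return mse, r

aligned_model = observed + 20 * rng.standard_normal(t.size)
shifted_model = np.roll(aligned_model, 2)      # same structure, 2-day timing offset

for name, model in [("aligned model", aligned_model), ("2-day offset ", shifted_model)]:
    mse, r = scores(model, observed)
    print(f"{name}: MSE = {mse:7.0f} (km/s)^2, r = {r:.2f}")
```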
Abstract:
Recent empirical studies have shown that multi-angle spectral data can be useful for predicting canopy height, but the physical reason for this correlation was not understood. We follow the concept of canopy spectral invariants, specifically escape probability, to gain insight into the observed correlation. Airborne Multi-Angle Imaging Spectrometer (AirMISR) and airborne Laser Vegetation Imaging Sensor (LVIS) data acquired during a NASA Terrestrial Ecology Program aircraft campaign underlie our analysis. Two multivariate linear regression models were developed to estimate LVIS height measures from 28 AirMISR multi-angle spectral reflectances and from the spectrally invariant escape probability at 7 AirMISR view angles. Both models achieved nearly the same accuracy, suggesting that canopy spectral invariant theory can explain the observed correlation. We hypothesize that the escape probability is sensitive to the aspect ratio (crown diameter to crown height). The multi-angle spectral data alone therefore may not provide enough information to retrieve canopy height globally.
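As a hedged sketch of the type of comparison described, using random placeholder data rather than AirMISR/LVIS observations, the following fits two multivariate linear regressions for canopy height, one on 28 reflectances and one on escape probability at 7 view angles, and compares their cross-validated R².

```python
# Sketch of the described comparison (random placeholder data, not AirMISR/LVIS):
# two multivariate linear regressions predicting lidar canopy height.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 300
height = rng.uniform(5, 35, n)                                   # "lidar" canopy height, m
reflectances = rng.normal(0.3, 0.05, (n, 28)) - 0.002 * height[:, None]
escape_prob = rng.normal(0.5, 0.05, (n, 7)) - 0.004 * height[:, None]

for name, X in [("28 reflectances", reflectances), ("7 escape probabilities", escape_prob)]:
    r2 = cross_val_score(LinearRegression(), X, height, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```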
Abstract:
Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components. Important comments on the algorithm and its variations are given. Using this technique, planetary wave modes with mean periods of 5, 10, and 16 days can be extracted from the magnetic field components of three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes are shown to contain a significant contribution from solar cycle variation, through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing a reconstructed near-Earth magnetic field for 150 years. Geomagnetic jerks are usually attributed to internal dynamo processes within the Earth's outer core; the question of which phenomenon is impacting which is briefly discussed here.
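A hedged sketch of this analysis chain on synthetic data, with PyEMD and SciPy as assumed tool choices: decompose a series with EMD, pick the IMF nearest a 16-day mean period, take its Hilbert amplitude envelope, and correlate that with a smoothed sunspot-like proxy.

```python
# Sketch of the analysis chain (synthetic stand-in data; PyEMD and SciPy are assumed
# tool choices, not necessarily those of the study).
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

rng = np.random.default_rng(7)
t = np.arange(0, 365 * 11)                                     # ~one solar cycle, daily samples
solar_cycle = 0.5 * (1 + np.sin(2 * np.pi * t / (365 * 11)))   # smoothed sunspot proxy
wave_16day = (0.5 + solar_cycle) * np.sin(2 * np.pi * t / 16)  # 16-day wave, solar-modulated
signal = wave_16day + 0.3 * rng.standard_normal(t.size)

imfs = EMD().emd(signal)                                       # intrinsic mode functions
# Pick the IMF whose mean period is closest to 16 days (crude zero-crossing estimate)
periods = [2 * len(t) / max(np.sum(np.diff(np.sign(imf)) != 0), 1) for imf in imfs]
imf_16 = imfs[int(np.argmin(np.abs(np.array(periods) - 16)))]

envelope = np.abs(hilbert(imf_16))                             # amplitude modulation function
r = np.corrcoef(envelope, solar_cycle)[0, 1]
print(f"correlation of 16-day IMF amplitude with solar-cycle proxy: r = {r:.2f}")
```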
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
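As a hedged sketch of the forecast construction described, with synthetic placeholders rather than the operational predictors, the code below regresses a seasonal-mean temperature anomaly on a CO2-equivalent trend and an ENSO index, forms a Gaussian predictive distribution from the residual spread, and reports the correlation of the hindcasts.

```python
# Sketch of the forecast construction (synthetic placeholders, not the operational
# system): multiple linear regression with a CO2-equivalent trend as primary
# predictor and an ENSO index as a secondary predictor.
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1961, 2014)
co2_eq = 330 + 1.8 * (years - 1961)                    # primary predictor: CO2-eq trend (ppm)
enso = rng.normal(0, 1, years.size)                    # secondary predictor: ENSO index
temp = 0.02 * (co2_eq - co2_eq.mean()) + 0.3 * enso + 0.2 * rng.standard_normal(years.size)

X = np.column_stack([np.ones_like(years, dtype=float), co2_eq, enso])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)        # fit the regression
hindcast = X @ coef
sigma = np.std(temp - hindcast, ddof=X.shape[1])       # residual spread -> predictive std

r = np.corrcoef(hindcast, temp)[0, 1]
print(f"deterministic skill (correlation of seasonal means): r = {r:.2f}")
# Probabilistic forecast for the last year: Gaussian with mean hindcast[-1], std sigma
print(f"forecast {years[-1]}: {hindcast[-1]:.2f} ± {1.64 * sigma:.2f} (90% interval)")
```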
Abstract:
The matrix-tolerance hypothesis suggests that the species most abundant in the inter-habitat matrix should be less vulnerable to fragmentation of their habitat. This model was tested with leaf-litter frogs in the Atlantic Forest, where the fragmentation process is older and more severe than in the Amazon, where the model was first developed. Frog abundance data from the agricultural matrix, forest fragments and continuous forest localities were used. We found the expected negative correlation between the abundance of frogs in the matrix and their vulnerability to fragmentation; however, the results varied with fragment size and species traits. Smaller fragments exhibited a stronger matrix-vulnerability correlation than intermediate fragments, while no significant relation was observed for large fragments. Moreover, some species that avoid the matrix were not sensitive to a decrease in patch size, and the opposite was also true, indicating significant departures from what the model predicts. Most of the species that use the matrix were forest species with aquatic larval development, but those species do not necessarily respond to fragmentation or fragment size, and thus they more strongly affect the strength of the expected relationship. Therefore, the main relationship expected under the matrix-tolerance hypothesis was observed in the Atlantic Forest; however, we note that the predictions of this hypothesis can be substantially affected by fragment size and by species traits. We propose that the matrix-tolerance model should be broadened to become more effective, including other patch characteristics, particularly fragment size, and individual species traits (e.g., reproductive mode and habitat preference).