Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
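As a rough illustration of the autoregressive probit structure emphasised above, the following Python sketch computes recursive recession probabilities from lagged financial predictors. The specific predictors (term spread, stock return), the parameter names and the lag structure are illustrative assumptions, not the exact models estimated in the thesis.

    # Minimal sketch of a dynamic autoregressive probit recession model.
    # P(y_t = 1 | information at t-1) = Phi(pi_t), where the linear index
    # pi_t evolves autoregressively and reacts to lagged predictors.
    import numpy as np
    from scipy.stats import norm

    def recession_probabilities(y, spread, ret, omega, alpha, delta, b1, b2):
        """pi_t = omega + alpha*pi_{t-1} + delta*y_{t-1} + b1*spread_{t-1} + b2*ret_{t-1}."""
        n = len(y)
        pi = np.zeros(n)        # autoregressive linear index
        prob = np.zeros(n)      # fitted recession probabilities
        for t in range(1, n):
            pi[t] = (omega + alpha * pi[t - 1] + delta * y[t - 1]
                     + b1 * spread[t - 1] + b2 * ret[t - 1])
            prob[t] = norm.cdf(pi[t])
        return prob

In this hypothetical parameterisation, setting alpha = 0 gives a dynamic but non-autoregressive probit, and setting alpha = delta = 0 recovers the static probit against which the dynamic models are compared.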
Abstract:
The purpose of this thesis is to examine the role of trade durations in price discovery. The motivation to use trade durations in the study of price discovery is that durations are robust to many microstructure effects that introduce a bias in the measurement of returns volatility. Another motivation is that it is difficult to think of economic variables that are really useful in determining the source of volatility at arbitrarily high frequencies. The dissertation contains three essays. In the first essay, the role of trade durations in price discovery is examined with respect to the volatility pattern of stock returns. The theory on volatility is associated with the theory on the information content of trade, which is central to market microstructure theory. The first essay documents that the volatility per transaction is related to the intensity of trade, and that there is a strong relationship between the stochastic process of trade durations and trading variables. In the second essay, the role of trade durations in price discovery is examined with respect to the quantification of risk due to a trading volume of a certain size. The theory on volume is intrinsically associated with the stock volatility pattern. The essay documents that volatility increases, in general, when traders choose to trade with large transactions. In the third essay, the role of trade durations in price discovery is examined with respect to the information content of a trade. The theory on the information content of a trade is associated with the theory on the rate of price revisions in the market. The essay documents that short durations are associated with information. Thus, traders are compensated for responding quickly to information.
Abstract:
Since the emergence of service marketing, the focus of service research has evolved. Currently the focus of research is shifting towards value co-created by the customer. Consequently, value creation is increasingly less fixed to a specific time or location controlled by the service provider. However, present service management models, although acknowledging customer participation and accessibility, have not considered the role of the empowered customer who may perform the service at various locations and time frames. The present study expands this scope and provides a framework for exploring customer perceived value from a temporal and spatial perspective. The framework is used to understand and analyse customer perceived value and to explore customer value profiles. It is proposed that customer perceived value can be conceptualised as a function of technical, functional, temporal and spatial value dimensions. These dimensions are suggested to have value-increasing and value-decreasing facets. This conceptualisation is empirically explored in an online banking context, and it is shown that time and location are more important value dimensions than the technical and functional dimensions. The findings demonstrate that time and location are important not only in terms of having the possibility to choose when and where the service is performed. Customers also value an efficient and optimised use of time and a private and customised service location. The study demonstrates that time and location are not external elements that form the service context, but service value dimensions, in addition to the technical and functional dimensions. This thesis contributes to existing service management research through its framework for understanding temporal and spatial dimensions of perceived value. Practical implications of the study are that time and location need to be considered as service design elements in order to differentiate the service from other services and create additional value for customers. Also, because of increased customer control and the importance of time and location, it is increasingly relevant for service providers to provide a facilitating arena for customers to create value, rather than trying to control the value creation process. Kristina Heinonen is associated with CERS, the Center for Relationship Marketing and Service Management at the Swedish School of Economics and Business Administration.
Abstract:
Service researchers have repeatedly claimed that firms should acquire customer information in order to develop services that fit customer needs. Despite this, studies that concentrate on the actual use of customer information in service development are lacking. The present study fills this research gap by investigating information use during a service development process. It demonstrates that use is not a straightforward task that automatically follows the acquisition of customer information. In fact, out of the six identified types of use, four represent non-usage of customer information. Hence, the study demonstrates that the acquisition of customer information does not guarantee that the information will actually be used in development. The current study used an ethnographic approach. Consequently, the study was conducted in the field in real time over an extensive period of 13 months. Participant observation allowed direct access to the investigated phenomenon, i.e. the different types of use by the observed development project members were captured as they emerged. In addition, interviews, informal discussions and internal documents were used to gather data. A development process of a bank’s website constituted the empirical context of the investigation. This ethnography brings novel insights to both academia and practice. It critically questions the traditional focus on the firm’s acquisition of customer information and suggests that this focus ought to be expanded to the actual use of customer information. What is the point in acquiring costly customer information if it is not used in development? Based on the findings of this study, a holistic view on customer information, “information in use”, is generated. This view extends the traditional view of customer information in three ways: the source, timing and form of data collection. First, the study showed that customer information can come explicitly from the customer, from speculation among the developers, or it can already exist implicitly. Prior research has mainly focused on the customer as the information provider and the explicit source to turn to for information. Second, the study identified that the used and non-used customer information was acquired previously, currently within the time frame of the focal development process, and potentially in the future. Prior research has primarily focused on currently acquired customer information, i.e. information acquired within the time frame of the development process. Third, the used and non-used customer information was both formally and informally acquired. In prior research, a large number of sophisticated formal methods have been suggested for the acquisition of customer information to be used in development. By focusing on “information in use”, new knowledge on the types of customer information that are actually used was generated. For example, the findings show that the formal customer information acquired during the development process is used less than customer information already existing within the firm. With this knowledge at hand, better methods to capture this more usable customer information can be developed. Moreover, the thesis suggests that by focusing more strongly on the use of customer information, service development processes can be restructured in order to better support the information that is actually used.
Abstract:
This study examined the effects of three different time units, or option-pricing models, on the Greeks of options and on the trading results of delta hedging strategies. These time units were calendar time, trading time and continuous time using discrete approximation (CTDA) time. The CTDA time model is a pricing model that, among other things, accounts for intraday and weekend patterns in volatility. For the CTDA time model, some additional theta measures, which were believed to be usable in trading, were developed. The study appears to verify that there are differences in the Greeks under different time units. It also revealed that these differences influence the delta hedging of options or portfolios. Although it is difficult to say which of the different time models is the most usable, as this depends largely on the trader's view of the passing of time, different market conditions and different portfolios, the CTDA time model can be viewed as an attractive alternative.
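As a hedged illustration of why the choice of time unit matters, the sketch below computes Black-Scholes call delta and theta when the remaining life of an option is measured in calendar time versus trading time. The parameter values are arbitrary, and the CTDA model itself, with its intraday and weekend volatility patterns, is not reproduced here.

    # Black-Scholes call delta and theta under two time conventions.
    # Only illustrates that the measured Greeks depend on the time unit;
    # this is not the CTDA pricing model developed in the study.
    import numpy as np
    from scipy.stats import norm

    def bs_call_delta_theta(S, K, r, sigma, T):
        """T and sigma expressed per the same (annualised) time unit."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        delta = norm.cdf(d1)
        theta = (-S * norm.pdf(d1) * sigma / (2 * np.sqrt(T))
                 - r * K * np.exp(-r * T) * norm.cdf(d2))
        return delta, theta

    # Roughly the same horizon: 30 calendar days or about 21 trading days.
    for label, (days_left, days_per_year) in {"calendar time": (30, 365),
                                              "trading time": (21, 252)}.items():
        d, th = bs_call_delta_theta(S=100, K=100, r=0.02, sigma=0.25,
                                    T=days_left / days_per_year)
        print(label, round(d, 4), round(th, 2))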
Abstract:
The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. The article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
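The following Python sketch shows the general bootstrap principle for the trace test of the null of cointegration rank zero. It resamples first differences, which are assumed to be approximately serially uncorrelated, and is deliberately simpler than the bootstrap and FDB algorithms evaluated in the article, which resample residuals from an estimated null model.

    # Simplified resampling bootstrap of the Johansen trace test for the
    # null hypothesis of cointegration rank zero. Assumes the first
    # differences are roughly serially uncorrelated; not the exact
    # algorithm of the article.
    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    def bootstrap_trace_rank0(levels, k_ar_diff=1, n_boot=399, seed=0):
        rng = np.random.default_rng(seed)
        trace_obs = coint_johansen(levels, det_order=0, k_ar_diff=k_ar_diff).lr1[0]
        diffs = np.diff(levels, axis=0)
        diffs = diffs - diffs.mean(axis=0)      # impose the null: rank zero, no drift
        exceed = 0
        for _ in range(n_boot):
            idx = rng.integers(0, len(diffs), len(diffs))
            y_star = np.vstack([levels[:1],
                                levels[:1] + np.cumsum(diffs[idx], axis=0)])
            trace_b = coint_johansen(y_star, det_order=0, k_ar_diff=k_ar_diff).lr1[0]
            exceed += trace_b >= trace_obs
        return trace_obs, (1 + exceed) / (1 + n_boot)   # bootstrap p-value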
Abstract:
The low predictive power of implied volatility in forecasting subsequently realized volatility is a well-documented empirical puzzle. As suggested by e.g. Feinstein (1989), Jackwerth and Rubinstein (1996), and Bates (1997), we test whether unrealized expectations of jumps in volatility could explain this phenomenon. Our findings show that expectations of infrequently occurring jumps in volatility are indeed priced in implied volatility. This has two important consequences. First, implied volatility is actually expected to exceed realized volatility over long periods of time, only to fall far below realized volatility during infrequently occurring periods of very high volatility. Second, the slope coefficient in the classic forecasting regression of realized volatility on implied volatility is very sensitive to the discrepancy between ex ante expected and ex post realized jump frequencies. If the in-sample frequency of positive volatility jumps is lower than the market assessed ex ante, the classic regression test tends to reject the hypothesis of informational efficiency even if markets are informationally efficient.
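In standard notation, the classic forecasting regression referred to above is

\[
\sigma^{\mathrm{realized}}_{t} = \alpha + \beta\, \sigma^{\mathrm{implied}}_{t} + \varepsilon_{t},
\qquad H_{0}\colon\ \alpha = 0,\ \beta = 1,
\]

where the null hypothesis corresponds to implied volatility being an unbiased (informationally efficient) forecast; the argument above is that the estimate of the slope beta is pulled away from one whenever the in-sample frequency of volatility jumps differs from the jump frequency priced ex ante by the market.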
Abstract:
This paper investigates persistence patterns in the Helsinki Exchanges. The persistence pattern is analyzed using both a time and a price approach. It is hypothesized that arrival times are related to movements in prices. Thus, the arrival times are defined as durations and formulated as an Autoregressive Conditional Duration (ACD) model as in Engle and Russell (1998). The prices are defined as price changes and formulated as a GARCH process including duration measures. The research question follows from market microstructure predictions about price intensities, defined as the time between price changes. The microstructure theory states that long transaction durations might be associated with both no news and bad news. Accordingly, short durations would be related to high volatility and long durations to low volatility. As a result, the spread will tend to be larger during periods of intensive trading. The main findings of this study are that 1) arrival times are positively autocorrelated and 2) long durations are associated with low volatility in the market.
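A minimal sketch of the Engle and Russell (1998) ACD(1,1) duration model mentioned above is given below; the parameter values are illustrative and are not estimates from the Helsinki Exchanges data.

    # ACD(1,1): x_i = psi_i * eps_i, psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1},
    # with i.i.d. unit-mean innovations (exponential here). Illustrative parameters only.
    import numpy as np

    def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
        rng = np.random.default_rng(seed)
        psi = np.empty(n)                      # conditional expected durations
        x = np.empty(n)                        # observed trade durations
        psi[0] = omega / (1.0 - alpha - beta)  # unconditional mean duration
        x[0] = psi[0] * rng.exponential(1.0)
        for i in range(1, n):
            psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
            x[i] = psi[i] * rng.exponential(1.0)
        return x, psi

    durations, expected = simulate_acd(5000)
    # durations are positively autocorrelated, as documented for the arrival times above
    print(np.corrcoef(durations[:-1], durations[1:])[0, 1])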
Abstract:
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
Abstract:
Irritable bowel syndrome (IBS) is a common multifactorial functional intestinal disorder, the pathogenesis of which is not completely understood. Increasing scientific evidence suggests that microbes are involved in the onset and maintenance of IBS symptoms. The microbiota of the human gastrointestinal (GI) tract constitutes a massive and complex ecosystem consisting mainly of obligate anaerobic microorganisms, making the use of culture-based methods demanding and prone to misinterpretation. To overcome these drawbacks, an extensive panel of species- and group-specific assays for an accurate quantification of bacteria from fecal samples with real-time PCR was developed, optimized, and validated. As a result, the target bacteria were detectable at a minimum concentration of approximately 10 000 bacterial genomes per gram of fecal sample, which corresponds to the sensitivity to detect 0.000001% subpopulations of the total fecal microbiota. The real-time PCR panel covering both commensal and pathogenic microorganisms was used to compare the intestinal microbiota of patients suffering from IBS with a healthy control group devoid of GI symptoms. Both the IBS and control groups showed considerable individual variation in gut microbiota composition. Sorting of the IBS patients according to the symptom subtypes (diarrhea, constipation, and alternating predominant type) revealed that lower amounts of Lactobacillus spp. were present in the samples of diarrhea-predominant IBS patients, whereas constipation-predominant IBS patients carried increased amounts of Veillonella spp. In the screening of intestinal pathogens, 17% of IBS samples tested positive for Staphylococcus aureus, whereas no positive cases were discovered among healthy controls. Furthermore, the methodology was applied to monitor the effects of a multispecies probiotic supplementation on the GI microbiota of IBS sufferers. In the placebo-controlled double-blind probiotic intervention trial of IBS patients, each supplemented probiotic strain was detected in fecal samples. The intestinal microbiota remained stable during the trial, except for Bifidobacterium spp., which increased in the placebo group and decreased in the probiotic group. The combination of assays developed and applied in this thesis has an overall coverage of 300-400 known bacterial species, along with a number of as yet unknown phylotypes. Hence, it provides good means for studying the intestinal microbiota, irrespective of the intestinal condition and health status. In particular, it allows screening and identification of microbes putatively associated with IBS. The alterations in the gut microbiota discovered here support the hypothesis that microbes are likely to contribute to the pathophysiology of IBS. The central question is whether the microbiota changes described represent the cause of, rather than the effect of, disturbed gut physiology. Therefore, more studies are needed to determine the role and importance of individual microbial species or groups in IBS. In addition, it is essential that the microbial alterations observed in this study be confirmed using a larger set of IBS samples of different subtypes, preferably from various geographical locations.
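The stated sensitivity can be checked with a back-of-the-envelope calculation. Assuming on the order of 10^12 bacterial cells per gram of feces (an assumption of this note, at the upper end of published estimates), a detection limit of roughly 10^4 genomes per gram corresponds to

\[
\frac{10^{4}\ \text{genomes/g}}{10^{12}\ \text{cells/g}} = 10^{-8} = 0.000001\,\%
\]

of the total fecal microbiota, which reproduces the figure quoted above.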
Abstract:
In this thesis, the possibility of extending the Dirac quantization condition for magnetic monopoles to noncommutative space-time is investigated. The three publications that this thesis is based on are all directly linked to this investigation. Noncommutative solitons have been found within certain noncommutative field theories, but it is not known whether they possess only topological charge or also magnetic charge. This is a consequence of the fact that the noncommutative topological charge need not coincide with the noncommutative magnetic charge, although the two are equivalent in the commutative context. The aim of this work is to begin to fill this gap in knowledge. The method of investigation is perturbative and leaves open the question of whether a nonperturbative source for the magnetic monopole can be constructed, although some aspects of such a generalization are indicated. The main result is that while the noncommutative Aharonov-Bohm effect can be formulated in a gauge invariant way, the quantization condition of Dirac is not satisfied in the case of a perturbative source for the point-like magnetic monopole.
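For reference, in one common convention (natural units), the commutative Dirac quantization condition for electric charge e and magnetic charge g, and the space-time deformation underlying the noncommutative setting, read

\[
e\,g = \frac{n}{2}, \quad n \in \mathbb{Z},
\qquad
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
\]

with theta a constant antisymmetric parameter; the thesis asks whether the first relation survives the deformation defined by the second.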
Abstract:
In this thesis the current status and some open problems of noncommutative quantum field theory are reviewed. The introduction aims to put these theories in their proper context as a part of the larger program to model the properties of quantized space-time. Throughout the thesis, special focus is put on the role of noncommutative time and how its nonlocal nature presents us with problems. Applications in scalar field theories as well as in gauge field theories are presented. The infinite nonlocality of space-time introduced by the noncommutative coordinate operators leads to interesting structure and new physics. High energy and low energy scales are mixed, causality and unitarity are threatened and in gauge theory the tools for model building are drastically reduced. As a case study in noncommutative gauge theory, the Dirac quantization condition of magnetic monopoles is examined with the conclusion that, at least in perturbation theory, it cannot be fulfilled in noncommutative space.
Abstract:
Human sport doping control analysis is a complex and challenging task for anti-doping laboratories. The List of Prohibited Substances and Methods, updated annually by the World Anti-Doping Agency (WADA), consists of hundreds of chemically and pharmacologically different low and high molecular weight compounds. This poses a considerable challenge for laboratories, which have to analyze for them all in a limited amount of time from a limited sample aliquot. The continuous expansion of the Prohibited List obliges laboratories to keep their analytical methods updated and to investigate newly available methodologies. In this thesis, an accurate mass-based analysis employing liquid chromatography - time-of-flight mass spectrometry (LC-TOFMS) was developed and validated to improve the power of doping control analysis. New analytical methods were developed utilizing the high mass accuracy and high information content obtained by TOFMS to generate comprehensive and generic screening procedures. The suitability of LC-TOFMS for comprehensive screening was demonstrated for the first time in the field, with mass accuracies better than 1 mDa. Further attention was given to generic sample preparation, an essential part of screening analysis, to rationalize the whole work flow and minimize the need for several separate sample preparation methods. Utilizing both positive and negative ionization allowed the detection of almost 200 prohibited substances. Automatic data processing produced a Microsoft Excel based report highlighting the entries fulfilling the criteria of the reverse database search (retention time (RT), mass accuracy, isotope match). The quantitative performance of LC-TOFMS was demonstrated with morphine, codeine and their intact glucuronide conjugates. After a straightforward sample preparation the compounds were analyzed directly, without the need for hydrolysis, solvent transfer, evaporation or reconstitution. Hydrophilic interaction chromatography (HILIC) provided good chromatographic separation, which was critical for the morphine glucuronide isomers. A wide linear range (50-5000 ng/ml) with good precision (RSD < 10%) and accuracy (±10%) was obtained, showing comparable or better performance than other methods in use. In-source collision-induced dissociation (ISCID) allowed confirmation analysis with three diagnostic ions, with a median mass accuracy of 1.08 mDa and repeatable ion ratios fulfilling WADA's identification criteria. The suitability of LC-TOFMS for screening of high molecular weight doping agents was demonstrated with plasma volume expanders (PVE), namely dextran and hydroxyethyl starch (HES). Specificity of the assay was improved, since interfering matrix compounds were removed by size exclusion chromatography (SEC). ISCID produced three characteristic ions with an excellent mean mass accuracy of 0.82 mDa at physiological concentration levels. In summary, by combining TOFMS with proper sample preparation and chromatographic separation, the technique can be utilized extensively in doping control laboratories for comprehensive screening of chemically different low and high molecular weight compounds, for quantification of threshold substances and even for confirmation. LC-TOFMS rationalized the work flow in doping control laboratories by simplifying the screening scheme, expediting reporting and minimizing analysis costs. Therefore LC-TOFMS can be exploited widely in doping control, and the need for several separate analysis techniques is reduced.
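To put the reported mass accuracies in perspective, an absolute error of about 1 mDa corresponds, for an ion of m/z around 300 (an illustrative value typical of small-molecule doping agents, not a figure from the thesis), to a relative error of roughly

\[
\frac{0.001\ \text{Da}}{300\ \text{Da}} \approx 3.3\ \text{ppm}.
\]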
Abstract:
Physical inactivity has become a major threat to public health worldwide. Finnish health and welfare policies emphasize that the working population should maintain good health and functioning until their normal retirement age and remain in good health and independent later in life. Health behaviours like physical activity potentially play an important role in reaching this target, as physical activity contributes to better physical fitness and to a reduced risk of major chronic diseases. The first aim of this study was to examine whether the volume and intensity of leisure-time physical activity affect subsequent physical health functioning, sickness absence and disability retirement. The second aim was to examine changes in leisure-time physical activity of moderate and vigorous intensity after the transition to retirement. This study is part of the ongoing Helsinki Health Study. The baseline data were collected by questionnaires in 2000-02 among employees of the City of Helsinki aged 40 to 60. The follow-up survey data were collected in 2007. Data on sickness absence were obtained from the employer's (City of Helsinki) sickness absence registers, and pension data were obtained from the Finnish Centre for Pensions. Leisure-time physical activity was measured in four grades of intensity and classified according to physical activity recommendations considering both the volume and intensity of physical activity. Statistical techniques including analysis of covariance, logistic regression, Cox proportional hazards models and Poisson regression were used. Especially employees who were vigorously active during leisure time had better physical health functioning than those who were physically inactive. High physical activity in particular contributed to the maintenance of good physical health functioning. High physical activity also reduced the risk of subsequent sickness absences as well as the risk of all-cause disability retirement and of retirement due to musculoskeletal and mental causes. Among those who transferred to old-age retirement, moderate-intensity leisure-time physical activity increased on average by more than half an hour per week, and in addition the occurrence of physical inactivity decreased. Such changes were not observed among those who remained employed or those who transferred to disability retirement. This prospective cohort study provided novel results on the effects of leisure-time physical activity on health-related functioning and on changes in leisure-time physical activity after retirement. Although the benefits of moderate-intensity physical activity for health are well known, these results suggest the importance of vigorous physical activity for subsequent health-related functioning. Thus vigorous physical activity to enhance fitness should be given more emphasis from a public health perspective. In addition, physical activity should be encouraged among those who are about to retire.
Abstract:
Soft tissue sarcomas are malignant tumours of mesenchymal origin. Because of their infiltrative growth pattern, simple enucleation of the tumour leads to a high rate of local recurrence. Instead, these tumours should be resected with a rim of normal tissue around the tumour. Data on the adequate margin width are scarce. At Helsinki University Central Hospital (HUCH), a multidisciplinary treatment group was started in 1987. Surgical resection with a wide margin (2.5 cm) is the primary aim. In case of a narrower margin, radiation therapy is necessary. The role of adjuvant chemotherapy remains unclear. Our aims were to study local control by the surgical margin and to develop a new prognostic tool to aid decision-making on which patients should receive adjuvant chemotherapy. Patients with soft tissue sarcoma of the extremity or the trunk wall referred to HUCH during 1987-2002 form the material in Studies I and II. The external validation material comes from the Lund University sarcoma registry. The smallest surgical margin of at least 2.5 centimetres yielded local control of 89 per cent at five years. The amputation rate was 9 per cent. The proposed prognostic model with necrosis, vascular invasion, size on a continuous scale, depth, location and grade worked well both in the Helsinki material and in the validation material, and it also showed good calibration. Based on the present study, we recommend a smallest surgical margin of 2-3 centimetres in soft tissue sarcoma irrespective of grade. Improvement in local control was present but modest for margins wider than 1 centimetre. In cases where gaining a wider margin would lead to a considerable loss of function, a smaller margin combined with radiation therapy is to be considered. Patients treated with inadequate margins should be offered radiation therapy irrespective of tumour grade. Our new prognostic model to estimate the 10-year survival probability of patients with soft tissue sarcoma of the extremities or trunk wall showed good discrimination and calibration. For the time being, the prognostic model is available for scientific use and further validation. In the future, the model may aid clinical decision-making. For operable osteosarcoma, neoadjuvant multidrug chemotherapy followed by delayed surgery and multidrug adjuvant chemotherapy is the treatment of choice. Overall survival rates at five years are approximately 75 per cent in modern trials with classical osteosarcoma. All patients diagnosed with osteosarcoma in Finland and reported to the Finnish Cancer Registry during 1971-2005 form the material in Studies III and IV. The limb-salvage rate increased from 23 per cent to 78 per cent during 1971-2005. The 10-year sarcoma-specific survival for the whole study population improved from 32 per cent to 62 per cent. It was 75 per cent for patients with a local high-grade osteosarcoma of the extremity diagnosed during 1991-2005. This study outlines the improved prognosis of osteosarcoma patients in Finland with modern chemotherapy. The 10-year survival rates are good also by international standards. Nonetheless, the limb-salvage rate remains inferior to those seen in highly selected patient series. Overall, the centralisation of osteosarcoma treatment would most likely improve both survival and limb-salvage rates even further.