40 results for Rent dependency
Abstract:
Reuse of existing carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse on the levels of both architectural design and concrete implementation. Unfortunately, due to the complexity of frameworks, which typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task-orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies. Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of framework reuse interface specifications becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested through implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify the reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.
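To make the idea concrete, the sketch below (hypothetical; not JavaFrames' actual data model or API) represents pattern roles as a dependency graph and derives the current list of programming tasks from it: a task for a role becomes available once all the roles it depends on have been bound to application code.

```python
from collections import defaultdict

# Hypothetical sketch of the idea described above: pattern roles form a
# dependency graph, and an ordered list of programming tasks is generated
# from it. This is not JavaFrames' actual data model or API.

class Pattern:
    def __init__(self):
        self.depends_on = defaultdict(set)   # role -> roles it depends on
        self.bound = set()                   # roles already bound to code

    def add_role(self, role, depends_on=()):
        self.depends_on[role] |= set(depends_on)

    def bind(self, role):
        """Mark a role as provided by the framework or application code."""
        self.bound.add(role)

    def open_tasks(self):
        """Roles whose prerequisites are bound but which are still unbound."""
        return [f"Provide an implementation for role '{r}'"
                for r, deps in self.depends_on.items()
                if r not in self.bound and deps <= self.bound]

# Example: a fragment of a hypothetical framework reuse interface
p = Pattern()
p.add_role("AbstractFigure")                         # provided by the framework
p.add_role("ConcreteFigure", ["AbstractFigure"])     # must extend AbstractFigure
p.add_role("FigureFactoryMethod", ["ConcreteFigure"])

p.bind("AbstractFigure")          # the framework class already exists
print(p.open_tasks())             # -> task for ConcreteFigure first
p.bind("ConcreteFigure")
print(p.open_tasks())             # -> now the factory-method task appears
```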
Abstract:
This study examines Finnish economic growth. The key driver of economic growth was productivity, and the major engine of productivity growth was technology, especially the general purpose technologies (GPTs) electricity and ICT. A new GPT builds on previous knowledge, yet often in an uncertain, punctuated fashion. Economic history, as well as the Finnish data analyzed in this study, teaches that growth is not a smooth process but is subject to episodes of sharp acceleration and deceleration, which are associated with the arrival, diffusion and exhaustion of new general purpose technologies. These are technologies that affect the whole economy by transforming both household life and the ways in which firms conduct business. The findings of previous research, that Finnish economic growth exhibited late industrialisation and significant structural changes, were corroborated by this study. Yet it was not solely a story of manufacturing, and structural change was more an effect of economic growth than its cause. We offered an empirical resolution to the Artto-Pohjola paradox by showing that a high rate of return on capital was combined with low capital productivity growth. This result is important in understanding Finnish economic growth in 1975-90. The main contribution of this thesis was the growth accounting results on the impact of ICT on growth and productivity, as well as the comparison of electricity and ICT. It was shown that ICT's contribution to GDP growth was almost twice as large as electricity's contribution over comparable periods of time. Finland has thus been far more successful as a producer of ICT than as a producer of electricity. Unfortunately, in the use of ICT the results were more modest than for electricity. Towards the end of the period considered in this thesis, Finland switched from resource-based to ICT-based growth. However, given the large dependency on the ICT-producing sector, the ongoing outsourcing of ICT production to low-wage countries poses a threat to future productivity performance. For a developed country only change is constant, and history teaches us that Finland is likely to be obliged to reorganize its economy once again in the digital era.
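For reference, growth accounting of the kind referred to above decomposes output growth into input contributions and a productivity residual; a standard Solow-style identity (the thesis's exact specification, e.g. its treatment of ICT capital, may differ) is:

```latex
% Standard growth accounting identity; income shares s_K + s_L = 1
% under constant returns to scale.
\[
\frac{\Delta Y}{Y}
  \;=\; s_K \,\frac{\Delta K}{K}
  \;+\; s_L \,\frac{\Delta L}{L}
  \;+\; \frac{\Delta A}{A},
\]
% where \Delta A / A is the total factor productivity (Solow) residual.
% Splitting capital into ICT and non-ICT components yields an ICT
% contribution of the form  s_{K,\mathrm{ICT}} \, \Delta K_{\mathrm{ICT}} / K_{\mathrm{ICT}}.
```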
Abstract:
The results presented in this thesis show that not all females of a given population necessarily choose similar mating partners. Specifically, partner preferences of a fish, the sand goby (Pomatoschistus minutus), varied among individual females and depended on the social context at the time of choice. I also show that females assess multiple mate choice cues simultaneously; partner preferences were based more strongly on an interaction effect between different choice cues than on any individual cue. Furthermore, I found that preferred matings involved fitness benefits in the form of increased offspring success, but these benefits were not significantly affected by mate compatibility. Hence, mate choice for partner compatibility does not appear to be an important determinant of the observed variation in female mate preferences in this species. The context-dependency of female mating preferences revealed here is relevant to how genetic variation in sexually selected traits might be maintained: as the mating success of a certain male type varies according to the choice context, directional sexual selection on male traits is shown to be less intense than generally thought, making for a slower loss of genetic variation in these traits. Mating preferences of sand gobies were assessed by giving females a binary choice between males that differed in body size and/or other focal traits. These association preferences were found to be sexually motivated, repeatable, and to correspond to actual mating decisions.
Abstract:
With transplant rejection rendered a minor concern and survival rates after liver transplantation (LT) steadily improving, long-term complications are attracting more attention. Current immunosuppressive therapies, together with other factors, are accompanied by considerable long-term toxicity, which clinically manifests as renal dysfunction, high risk for cardiovascular disease, and cancer. This thesis investigates the incidence, causes, and risk factors for such renal dysfunction, cardiovascular risk, and cancer after LT. Long-term effects of LT are further addressed by surveying the quality of life and employment status of LT recipients. The consecutive patients included had undergone LT at Helsinki University Hospital from 1982 onwards. Data regarding renal function – creatinine and estimated glomerular filtration rate (GFR) – were recorded before and repeatedly after LT in 396 patients. The presence of hypertension, dyslipidemia, diabetes, impaired fasting glucose, and overweight/obesity before and 5 years after LT was determined among 77 patients transplanted for acute liver failure. The entire cohort of LT patients (540 patients), including both children and adults, was linked with the Finnish Cancer Registry, and numbers of cancers observed were compared to site-specific expected numbers based on national cancer incidence rates stratified by age, gender, and calendar time. Health-related quality of life (HRQoL), measured by the 15D instrument, and employment status were surveyed among all adult patients alive in 2007 (401 patients). The response rate was 89%. Posttransplant cardiovascular risk factor prevalence and HRQoL were compared with that in the age- and gender-matched Finnish general population. The cumulative risk for chronic kidney disease increased from 10% at 5 years to 16% at 10 years following LT. GFR up to 10 years after LT could be predicted by the GFR at 1 year. In patients transplanted for chronic liver disease, a moderate correlation of pretransplant GFR with later GFR was also evident, whereas in acute liver failure patients after LT, even severe pretransplant renal dysfunction often recovered. By 5 years after LT, 71% of acute liver failure patients were receiving antihypertensive medications, 61% were exhibiting dyslipidemia, 10% were diabetic, 32% were overweight, and 13% obese. Compared with the general population, only hypertension displayed a significantly elevated prevalence among patients – 2.7-fold – whereas patients exhibited 30% less dyslipidemia and 71% less impaired fasting glucose. The cumulative incidence of cancer was 5% at 5 years and 13% at 10. Compared with the general population, patients were subject to a 2.6-fold cancer risk, with non-melanoma skin cancer (standardized incidence ratio, SIR, 38.5) and non-Hodgkin lymphoma (SIR 13.9) being the predominant malignancies. Non-Hodgkin lymphoma was associated with male gender, young age, and the immediate posttransplant period, whereas old age and antibody induction therapy raised skin-cancer risk. HRQoL deviated clinically unimportantly from the values in the general population, but significant deficits among patients were evident in some physical domains. HRQoL did not seem to decrease with longer follow-up. Although 87% of patients reported improved working capacity, data on return to working life showed marked age-dependency: Among patients aged less than 40 at LT, 70 to 80% returned to work, among those aged 40 to 50, 55%, and among those above 50, 15% to 28%. 
The most common cause of unemployment was early retirement before LT. Patients who were employed exhibited better HRQoL than those who were unemployed. In conclusion, although renal impairment, hypertension, and cancer are evidently common after LT and increase with time, patients' quality of life remains comparable with that of the general population.
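For illustration, the standardized incidence ratios (SIR) reported above are ratios of observed to expected case counts, with the expected number obtained by multiplying person-years of follow-up by the corresponding population incidence rates within age, gender and calendar-time strata. A minimal sketch with made-up numbers (not the study's registry data):

```python
# Hypothetical sketch of a standardized incidence ratio (SIR):
# observed cases divided by the number expected from population rates
# within age/gender/calendar-time strata. All numbers are illustrative.

strata = [
    # (person-years of follow-up, population incidence rate per person-year)
    (1200.0, 0.0005),
    (800.0, 0.0012),
    (400.0, 0.0030),
]
observed = 7

expected = sum(py * rate for py, rate in strata)
sir = observed / expected
print(f"expected = {expected:.2f}, SIR = {sir:.1f}")
```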
Abstract:
In order to predict the current state and future development of Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in Western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentration of aerosol particles at Mount Waliguan was much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history. The effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using data sets with long time series. The ratio was found to vary more than in earlier studies, but less than either aerosol particle number concentration or volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, though its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and a good agreement was found.
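As a point of reference, the two aerosol quantities the new parameterization relies on can be computed directly from a measured size distribution; the sketch below (illustrative only, not the parameterization developed in the thesis) shows how:

```python
import numpy as np

# Illustrative calculation of total number concentration, volume
# concentration, and their ratio from a binned aerosol size distribution.
# These are the inputs the parameterization uses; the parameterization
# itself is not reproduced here. All numbers are made up.

def number_to_volume_ratio(diameters_nm, dN):
    """diameters_nm: bin midpoint diameters [nm]; dN: number conc. per bin [cm^-3]."""
    d_um = np.asarray(diameters_nm, float) / 1000.0               # nm -> micrometres
    n_total = float(np.sum(dN))                                   # [cm^-3]
    v_total = float(np.sum(np.asarray(dN, float) * (np.pi / 6.0) * d_um**3))  # [um^3 cm^-3]
    return n_total, v_total, n_total / v_total

d = [20, 50, 100, 200, 400]        # nm
dN = [800, 1200, 600, 150, 20]     # cm^-3
N, V, ratio = number_to_volume_ratio(d, dN)
print(f"N = {N:.0f} cm^-3, V = {V:.3f} um^3 cm^-3, N/V = {ratio:.0f} um^-3")
```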
Abstract:
Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since a small amount of helium is the only by-product produced when using the hydrogen isotopes deuterium and tritium as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to utilize fusion as its main energy source ever since it was born. But here on Earth, we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods. However, these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, which is a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees C) will separate the electrons from the nuclei, forming a plasma. Once the fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing for a high number of reactions, is a challenge. Another challenge is related to the reactor materials, since the confinement of the plasma particles is not perfect, resulting in particle bombardment of the reactor walls and structures. Material erosion and activation as well as plasma contamination are expected. Adding to this, the high-energy neutrons will cause radiation damage in the materials, causing, for instance, swelling and embrittlement. In this thesis, the behaviour of a material situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium due to deuterium plasma exposure. During experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that are formed during particle irradiation, and the potential features affecting the resulting primary damage were identified.
Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
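For orientation, the skeleton of a molecular dynamics simulation is sketched below with a simple Lennard-Jones pair potential and velocity Verlet integration. This is emphatically not the bond-order Be-C-H or tungsten potentials used in the thesis; it only shows the basic integration loop that such cascade and sputtering simulations build on, with one atom given extra kinetic energy to loosely mimic a recoil.

```python
import numpy as np

# Minimal molecular dynamics sketch: Lennard-Jones pair potential with
# velocity Verlet time integration, in arbitrary reduced units chosen
# only for illustration. Not the interatomic potentials of the thesis.

def lj_forces(pos, eps=1.0, sigma=1.0):
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            sr6 = (sigma / r) ** 6
            # force on atom i from the pair potential 4*eps*(sr6**2 - sr6)
            f = 24.0 * eps * (2.0 * sr6**2 - sr6) / r**2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=2000):
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * f / mass * dt**2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# A small cubic cluster of 8 atoms near the LJ equilibrium spacing;
# one atom is given extra velocity, loosely mimicking a recoil atom.
spacing = 2.0 ** (1.0 / 6.0)   # pair-potential minimum for sigma = 1
pos = spacing * np.array([[i, j, k] for i in range(2)
                          for j in range(2) for k in range(2)], dtype=float)
vel = np.zeros_like(pos)
vel[0] = [3.0, 0.0, 0.0]       # "primary knock-on" atom

pos, vel = velocity_verlet(pos, vel)
print(pos)
```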
Abstract:
Atmospheric aerosol particles have a strong impact on the global climate. A deep understanding of the physical and chemical processes affecting the atmospheric aerosol-climate system is crucial in order to describe those processes properly in global climate models. Besides their climatic effects, aerosol particles can deteriorate e.g. visibility and human health. Nucleation is a fundamental step in atmospheric new particle formation. However, details of the atmospheric nucleation mechanisms have remained unresolved. The main reason for this has been the lack of instruments capable of measuring neutral newly formed particles in the size range below 3 nm in diameter. This thesis aims to extend the detectable particle size range towards the close-to-molecular sizes (~1 nm) of freshly nucleated clusters, and by direct measurement to obtain the concentrations of sub-3 nm particles in the atmospheric environment and in well-defined laboratory conditions. In the work presented in this thesis, new methods and instruments for sub-3 nm particle detection were developed and tested. The selected approach comprises four different condensation-based techniques and one electrical detection scheme. All of them are capable of detecting particles with diameters well below 3 nm, some even down to ~1 nm. The developed techniques and instruments were deployed in field measurements as well as in laboratory nucleation experiments. Ambient air studies showed that in a boreal forest environment a persistent population of 1-2 nm particles or clusters exists. The observation was made using four different instruments, showing a consistent capability for the direct measurement of atmospheric nucleation. The results from the laboratory experiments showed that sulphuric acid is a key species in atmospheric nucleation. The mismatch between the earlier laboratory data and ambient observations on the dependency of the nucleation rate on sulphuric acid concentration was explained. The reason was shown to be associated with the inefficient growth of the nucleated clusters and with the insufficient detection efficiency of the particle counters used in the previous experiments. Even though the exact molecular steps of nucleation still remain an open question, the instrumental techniques developed in this work, as well as their application in laboratory and ambient studies, opened a new view into atmospheric nucleation and prepared the way for investigating nucleation processes with more suitable tools.
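The sulphuric acid dependency mentioned above is commonly summarized as a power law, J = k[H2SO4]^n, with the exponent n estimated by regression in log-log space; a minimal sketch with made-up numbers (not data from the thesis):

```python
import numpy as np

# Hypothetical sketch: estimating the exponent n in J = k * [H2SO4]**n
# by linear regression in log-log space. The concentrations and rates
# below are invented for illustration, not results from the thesis.

h2so4 = np.array([1e6, 3e6, 1e7, 3e7, 1e8])    # molecules cm^-3
J = np.array([0.01, 0.1, 1.0, 9.0, 95.0])      # particles cm^-3 s^-1

slope, intercept = np.polyfit(np.log10(h2so4), np.log10(J), 1)
print(f"exponent n ~= {slope:.2f}, prefactor k ~= {10**intercept:.2e}")
```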
Abstract:
Since the 1990s, European policy strategies have stressed the mutual responsibility and joint action of all societal branches in preventing social problems. Network policy is an integral part of the new governance that generates a new kind of dependency between the state and civil society in formulating and adhering to policy goals. Using empirical group interview data collected in Helsinki, the capital of Finland, this case study explores local multi-agency groups and their efforts to prevent the exclusion of children and young people. These groups consist mainly of professionals from the social office, youth clubs and schools. The study shows that these multi-agency groups serve as forums for professional negotiation where the intervention dilemma of liberal society can be addressed: the question of when it is justified and necessary for an authority or network to intervene in the life of children and their families, and how this is to be done. An element of tension in multi-agency prevention is introduced by the fact that its objectives and means are anchored both in the old tradition of the welfare state and in communitarian rhetoric. Thus multi-agency groups mend deficiencies in wellbeing and normalcy while at the same time trying to co-ordinate the creation of the new community, which will hopefully reduce the burden on the public sector. Some of the professionals interviewed were keen to see new and even forceful interventions to guide the youth or to compel parents to assume their responsibilities. In group discussions, this approach often met resistance. The deeper the social problems that the professionals worked with, the more solidarity they showed for the families or the young people in need. Nothing seems to assure professionals and to legitimise their professional position better than advocating for the under-privileged against the uncertainties of life and the structural inequalities of society. The groups that grappled with the clear, specific needs of certain children and families were the most capable of co-operation. This requires the approval of different powers and the expertise of distinct professions, as well as a forum to negotiate case-specific actions in professional confidentiality. The ideals of primary prevention for everyone and value discussions alone fail to inspire sufficient multi-agency co-operation. The ideal of a network seems to give word and shape to those societal goals that are difficult or even impossible to reach, but are nevertheless yearned for: mutual understanding of the good life, close social relationships, mutual trust and active agency for all citizens. Individualisation, the multiplicity of lifestyles and the possibility to choose have been realized in such a way that the very idea of a mutual and binding network can be attained only momentarily and between restricted participants. In conclusion, uniting professional networks that negotiate intervention dilemmas with citizen networks based on changing compassions and feelings of moral superiority seems impossible. Rather, one should encourage openness to scrutiny among tangential or contradicting groups, networks and communities. Key words: network policy, prevention of exclusion, multi-agency groups, young people
Abstract:
A straightforward computation of the list of the words (the 'tail words' of the list) that are distributionally most similar to a given word (the 'head word' of the list) leads to the question: How semantically similar to the head word are the tail words; that is, how similar are their meanings to its meaning? And can we do better? The experiment was done on nearly 18,000 of the most frequent nouns in a Finnish newsgroup corpus. These nouns are considered to be distributionally similar to the extent that they occur in the same direct dependency relations with the same nouns, adjectives and verbs. The extent of the similarity of their computational representations is quantified with the information radius. The semantic classification of head-tail pairs is intuitive; some tail words seem to be semantically similar to the head word, some do not. Each such pair is also associated with a number of further distributional variables. Individually, their overlap for the semantic classes is large, but the trained classification-tree models have some success in using combinations of them to predict the semantic class. The training data consists of a random sample of 400 head-tail pairs with the tail word ranked among the 20 distributionally most similar to the head word, excluding names. The models are then tested on a random sample of another 100 such pairs. The best success rates range from 70% to 92% of the test pairs, where a success means that the model predicted my intuitive semantic class of the pair. This seems somewhat promising when distributional similarity is used to capture semantically similar words. This analysis also includes a general discussion of several different similarity formulas, arranged in three groups: those that apply to sets with graded membership, those that apply to the members of a vector space, and those that apply to probability mass functions.
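Since the information radius is central to the similarity computation described above, here is a minimal sketch of it, assuming head and tail words are represented as probability distributions over shared dependency contexts (the distributions below are made up; this is not the thesis's code):

```python
import numpy as np

# Information radius (Jensen-Shannon divergence) between two probability
# mass functions p and q. Conventions differ by a factor of two; here the
# symmetrized, averaged form bounded by 1 bit is used.

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def information_radius(p, q):
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical context distributions of a head word and a tail word
head = [0.40, 0.30, 0.20, 0.10, 0.00]
tail = [0.30, 0.30, 0.10, 0.20, 0.10]
print(f"IRad = {information_radius(head, tail):.3f} bits")
```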
Abstract:
Meat and bone meal constitutes the most important nutrient flow directed away from agroecosystems after the plant- and animal-based products sold by farms. It is rich in the primary plant nutrients nitrogen, phosphorus and calcium (N ~8%, P ~5%, Ca usually ~10-15% depending on the amount of bone material), as well as potassium at about 1% or less. Meat and bone meal has been found to be an effective fertilizer for several crops, and its use is also permitted in organic farming within the EU. The most significant of the risks associated with meat and bone meal, and especially with its use as feed, is the risk of TSE diseases (BSE in cattle, scrapie in sheep and goats, and vCJD in humans). Its use as feed has been restricted in many countries in the wake of the BSE crisis that broke out in the 1980s. The spread of BSE was linked to a situation in which bovine-derived meat and bone meal was used as an ingredient in cattle feed. The use of meat and bone meal as fur animal feed may also carry a hidden risk of BSE or another TSE disease. However, based on the studies I reviewed, the use of properly processed meat and bone meal as fertilizer does not appear to involve a notable TSE risk, provided that appropriate precautions and procedures are followed both in the manufacturing process and when the fertilizer is applied. Increasing the use of meat and bone meal as fertilizer would help close the nutrient cycle of our food system, especially with regard to phosphorus. Meat and bone meal is a renewable resource whose use as fertilizer could replace a considerable share of the phosphate rock consumed as fertilizer raw material. In sugar beet fertilization trials in Kaarina, Southwest Finland, in 2008 and 2009, the meat and bone meal treatments did not perform quite as well in the yield comparison as the NPK mineral fertilizers of the control treatments, but in terms of quality characteristics (sugar content and amino-N, K and Na contents) they performed better than the controls in some respects. The varieties used in the trials were 'Jesper' in 2008 and 'Lincoln' in 2009. The meat and bone meal fertilizer used was Viljo Yleislannoite 8-4-3 from Honkajoki Oy, which contained about 10% of a mixture of potassium sulphate and plant-derived by-products. The Viljo fertilizer was used both on its own and combined with 10-25% mineral fertilizer. In 2009, potassium sulphate fertilizer (42% K, 18% S) was additionally applied to the Viljo treatments so that the amount of potassium applied reached the level of the fertilization recommendation (60 kg K/ha). Viljo fertilizer alone produced significantly lower yields than the control treatments in both years. However, when mineral fertilizer (10-25% of the crop's nitrogen requirement) was used alongside the Viljo fertilizer, the yields came quite close to those of the control treatments. Even the yields produced by the meat and bone meal fertilizer alone were nevertheless clearly better than the average sugar beet yields in Finland. The Viljo treatments had a clearly positive effect on the quality factors amino-N, K and Na in 2008, but in 2009 these contents remained at the level of the control treatments. In 2008 the sugar contents of the Viljo treatments were on a par with the control treatment, and for Viljo77%+NK1 significantly better than the control. In 2009 the sugar contents were excellent for all treatments, and no significant differences emerged between treatments. Based on the trials, meat and bone meal supplemented with potassium sulphate is a well-functioning fertilizer for sugar beet under Finnish conditions, especially when combined with mineral fertilizer.
Abstract:
Smoking has decreased significantly over the last few decades, but it still remains one of the most serious public health problems in all Western countries. Smoking has decreased especially in upper socioeconomic groups, and this differentiation is an important factor behind socioeconomic health differentials. The study examines smokers' risk perceptions, justifications and the meaning of smoking in different occupational groups. The starting point of the research is that the concept of health behaviour, and the individualistic orientation it implies, is too narrow a viewpoint with which to understand the current cultural status of smoking and to explain its association with social class. The study utilizes two kinds of data. Internet discussions are used to examine smokers' risk perceptions and counter-reactions to current public health discourses. Interviews of smokers and ex-smokers (N=55) from different occupations are utilized to analyse the process of giving up smoking, social class differences in the justifications of smoking, and the role of smoking in manual work. The continuing popularity of smoking is not a question of lacking knowledge of or concern about health risks. Even manual workers, among whom smoking is more prevalent, consider smoking a health risk. However, smokers have several ways of dealing with the risk. They can equate it with other health risks confronted in everyday life or question the adequacy of expert knowledge. Smoking can be seen as signifying the ability to make independent decisions and to question authorities. Regardless of the self-acknowledged dependency, smoking can be understood as a choice. This seemingly contradictory viewpoint was central especially for non-manual workers. They emphasized the pleasures and rules of smoking and the management of dependency. In contrast, manual workers did not give positive justifications for their smoking, thus implying the self-evident nature of the habit. Still, smoking functions as a resource in manual work, as it increases the autonomy of workers in terms of their daily tasks. At the same time, smoking is attached to other routines and practices at workplaces. The study shows that in order to understand current trends in smoking, differing perceptions of risk and health, as well as ways of life and their social and economic determinants, need to be taken into account. Focussing on the social contexts and environments in which smoking is most prevalent is necessary in order to explain the current association of smoking with the working class.
Abstract:
As companies become more efficient with respect to their internal processes, they begin to shift the focus beyond their corporate boundaries. Thus, recent years have witnessed an increased interest by practitioners and researchers in interorganizational collaboration, which promises better firm performance through more effective supply chain management. It is no coincidence that this interest comes in parallel with the recent advancements in Information and Communication Technologies, which offer many new collaboration possibilities for companies. However, collaboration, or any other type of supply chain integration effort, relies heavily on information sharing. Hence, this study focuses on information sharing, in particular on the factors that determine it and on its value. The empirical evidence from Finnish and Swedish companies suggests that uncertainty (both demand and environmental) and dependency in terms of switching costs and asset-specific investments are significant determinants of information sharing. Results also indicate that information sharing improves company performance regarding resource usage, output, and flexibility. However, companies share information more intensely at the operational rather than the strategic level. The use of supply chain practices and technologies is substantial but varies across the two countries. This study sheds light on a common trend in supply chains today. Whereas the results confirm the value of information sharing, the contingent factors help to explain why the intensity of information shared across companies differs. In the future, competitive pressures and uncertainty are likely to intensify. Therefore, companies may want to continue with their integration efforts by focusing on the determinants discussed in this study. At the same time, however, the possibility of opportunistic behavior by the exchange partner cannot be disregarded.
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate which is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, which is a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better for explaining past movements in rents and returns than for predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
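As an illustration of the error correction approach mentioned above, the sketch below fits a simple two-step (Engle-Granger style) error correction model on simulated data; the variable names and data are hypothetical and do not reproduce the thesis's specifications:

```python
import numpy as np
import statsmodels.api as sm

# Two-step error correction model sketch: (1) estimate the long-run
# relation between rents and a macro driver, (2) regress rent changes on
# driver changes and the lagged deviation from the long-run relation.
# The data are simulated; 'gdp' is a stand-in for whatever fundamentals
# the thesis actually uses.

rng = np.random.default_rng(1)
n = 80
gdp = np.cumsum(rng.normal(0.5, 1.0, n))          # simulated driver (log level)
rent = 2.0 + 0.8 * gdp + rng.normal(0, 0.5, n)    # cointegrated rent series

longrun = sm.OLS(rent, sm.add_constant(gdp)).fit()
ec_term = longrun.resid                            # deviation from equilibrium

d_rent, d_gdp = np.diff(rent), np.diff(gdp)
X = sm.add_constant(np.column_stack([d_gdp, ec_term[:-1]]))
ecm = sm.OLS(d_rent, X).fit()
print(ecm.params)   # constant, short-run effect, error-correction speed (< 0)
```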
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together and conclude on some normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detection of long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. The serial dependency in bear and bull markets behaves differently, however: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. Results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates, and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we would recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe. In analyzing long-term return dependency in the first moment, we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
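For reference, realized variance as used in the essays is the sum of squared intraday (log) returns; the sketch below uses simulated prices (not the German stock data) and shows how the sampling frequency enters:

```python
import numpy as np

# Realized variance: sum of squared intraday log returns. Sampling every
# k-th price illustrates the effect of a lower sampling frequency. The
# price path is simulated, not data from the essays.

def realized_variance(prices, k=1):
    logp = np.log(np.asarray(prices, float))[::k]
    return float(np.sum(np.diff(logp) ** 2))

rng = np.random.default_rng(42)
daily_vol = 0.01
log_price = np.cumsum(rng.normal(0.0, daily_vol / np.sqrt(390), 390))
prices = 100.0 * np.exp(log_price)

print("RV, 1-min sampling:", realized_variance(prices, k=1))
print("RV, 5-min sampling:", realized_variance(prices, k=5))
```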
Abstract:
The integrated European debt capital market has undoubtedly broadened the possibilities for companies to access funding from the public and challenged investors to cope with the ever-increasing complexity of its market participants. Well into the Euro era, it is clear that the unified market has created potential for all involved parties, where investment opportunities are able to meet a supply of funds from a broad geographical area now summoned under a single currency. Europe's traditionally heavy dependency on bank lending as a source of debt capital has thus been easing, as corporate residents are able to tap into a deep and liquid capital market to satisfy their funding needs. As national barriers eroded with the inauguration of the Euro and interest rates for the EMU members converged towards overall lower yields, a new source of debt capital emerged for the vast majority of corporate residents under the new currency and gave an alternative to the traditionally more maturity-restricted bank debt. With increased sophistication came also an improved knowledge and understanding of the market and its participants. Further, investors became more willing to bear credit risk, which opened the market for firms of ever lower creditworthiness. In the process, the market as a whole saw a change in the profile of issuers, as non-financial firms increasingly sought their funding directly from the bond market. This thesis consists of three separate empirical studies on how corporates fund themselves on the European debt capital markets. The analysis focuses on a firm's access to and behaviour on the capital market subsequent to the decision to raise capital through the issuance of arm's length debt on the bond market. The specific areas examined contribute to our knowledge in the fields of corporate finance and financial markets by explicitly considering firms' primary market activities within the new market area. The first essay explores how the reputation of an issuer affects its debt issuance. Essay two examines the choice of interest rate exposure on newly issued debt, and the third and final essay explores pricing anomalies in corporate debt issues.