36 results for empirical correlation

in Helda - Digital Repository of the University of Helsinki


Relevance: 20.00%

Abstract:

This study examines both theoretically and empirically how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations. This analysis shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations. Thus the study challenges the common assumption that literary interpretations tend to be idiosyncratic. The empirical data consists of freely worded written answers to questions on three short stories. The interpretations were made by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts, some referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera. According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy. People who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else. Therefore it is very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; therefore they do not seem to create the formal features of the text in different ways. Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text. 
Iser believes that the text guides the reader, but as he also believes that meaning is on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations and the fact that the agreement is strongest when related to issues that are discussed broadly in the text do, however, support his assumption that readers are guided by the text. In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system. The situation where a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions have in language use. The readers' responses show that spontaneous associations to personal life seem to colour the readers' interpretations, but such instances are rather rare. According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way. Words mean different things to different people. The problem with this view is that if there is any basis for language use, it seems to be the shared way of following linguistic rules. Wittgenstein suggests that our understanding of words is related to the shared ways of using words and our understanding of human behaviour. This view seems to give better grounds for understanding similarity and differences in literary interpretations than the theories of Holland, Bleich, Fish and Iser.

Relevance: 20.00%

Abstract:

This thesis is an empirical study of how two words in Icelandic, "nú" and "núna", are used in contemporary Icelandic conversation. My aims in this study are, first, to explain the differences between the temporal functions of "nú" and "núna", and, second, to describe the non-temporal functions of "nú". In the analysis, a focus is placed on comparing the sequential placement of the two words, on their syntactical distribution, and on their prosodic realization. The empirical data comprise 14 hours and 11 minutes of naturally occurring conversation recorded between 1996 and 2003. The selected conversations represent a wide range of interactional contexts including informal dinner parties, institutional and non-institutional telephone conversations, radio programs for teenagers, phone-in programs, and, finally, a political debate on television. The theoretical and methodological framework is interactional linguistics, which can be described as linguistically oriented conversation analysis (CA). A comparison of "nú" and "núna" shows that the two words have different syntactic distributions. "Nú" has a clear tendency to occur in the front field, before the finite verb, while "núna" typically occurs in the end field, after the object. It is argued that this syntactic difference reflects a functional difference between "nú" and "núna". A sequential analysis of "núna" shows that the word refers to an unspecified period of time which includes the utterance time as well as some time in the past and in the future. This temporal relation is referred to as reference time. "Nú", by contrast, is mainly used in three different environments: 1) in temporal comparisons, 2) in transitions, and 3) when the speaker is taking an affective stance. The non-temporal functions of "nú" are divided into three categories: 1) "nú" as a tone particle, 2) "nú" as an utterance particle, and 3) "nú" as a dialogue particle. 
"Nú" as a tone particle is syntactically integrated and can occur in two syntactic positions: pre-verbally and post-verbally. I argue that these instances are employed in utterances in which a speaker is foregrounding information or marking it as particularly important. The study shows that, although these instances are typically prosodically non-prominent and unstressed, they are in some cases delivered with stress and with a higher pitch than the surrounding talk. "Nú" as an utterance particle occurs turn-initially and is syntactically non-integrated. By using "nú", speakers show continuity between turns and link new turns to prior ones. These instances initiate either continuations by the same speaker or new turns after speaker shifts. "Nú" as a dialogue particle occurs as a turn of its own. The study shows that these instances register informings in prior turns as unexpected or as a departure from the normal state of affairs. "Nú" as a dialogue particle is often delivered with a prolonged vowel and a recognizable intonation contour. A comparative sequential and prosodic analysis shows that in these cases there is a correlation between the function of "nú" and the intonation contour by which it is delivered. Finally, I argue that despite the many functions of "nú", all the instances can be said to have a common denominator, which is to display attention towards the present moment and the utterances which are produced prior to or after the production of "nú". Instead of anchoring the utterances in external time or reference time, these instances position the utterance in discourse internal time, or discourse time.

Relevance: 20.00%

Abstract:

The aim of the present study was to advance the methodology and use of time series analysis to quantify dynamic structures in psychophysiological processes and thereby to produce information on spontaneously coupled physiological responses and their behavioral and experiential correlates. A series of analyses using both simulated and empirical cardiac (IBI), electrodermal (EDA), and facial electromyographic (EMG) data indicated that, despite potentially autocorrelated structures, smoothing increased the reliability of detecting response coupling from an interindividual distribution of intraindividual measures, and that the measures of covariance in particular produced accurate information on the extent of coupled responses. This methodology was applied to analyze spontaneously coupled IBI, EDA, and facial EMG responses and vagal activity in their relation to emotional experience and personality characteristics in a group of middle-aged men (n = 37) during the administration of the Rorschach testing protocol. The results revealed new characteristics in the relationship between phasic end-organ synchronization and vagal activity, on the one hand, and individual differences in emotional adjustment to novel situations, on the other. Specifically, it appeared that the vagal system is intimately related to emotional and social responsivity. It was also found that the lack of spontaneously synchronized responses is related to decreased energetic arousal (e.g., depression, mood). These findings indicate that the present process analysis approach has many advantages for use in both experimental and applied research, and that it is a useful new paradigm in psychophysiological research. Keywords: Autonomic Nervous System; Emotion; Facial Electromyography; Individual Differences; Spontaneous Responses; Time Series Analysis; Vagal System
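The core of the method described above, smoothing two intraindividual response series and taking their covariance as an index of coupled responding, can be sketched as follows. This is a minimal illustration on simulated data with hypothetical parameter choices (window length, noise level), not the thesis's actual analysis pipeline:

```python
import numpy as np

def moving_average(x, window=5):
    """Simple boxcar smoother; a hypothetical stand-in for the smoothing step."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def coupling_covariance(a, b, window=5):
    """Covariance between two smoothed intraindividual response series
    (e.g., IBI and EDA), used here as a response-coupling index."""
    return float(np.cov(moving_average(a, window), moving_average(b, window))[0, 1])

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
shared = np.sin(t)                                # a common phasic drive
ibi = shared + 0.5 * rng.normal(size=t.size)      # simulated cardiac series
eda = shared + 0.5 * rng.normal(size=t.size)      # simulated electrodermal series

# Coupled pair yields a clearly larger covariance than an unrelated pair.
print(coupling_covariance(ibi, eda))
print(coupling_covariance(ibi, rng.normal(size=t.size)))
```

Smoothing suppresses the independent noise while leaving the shared phasic component intact, which is why the covariance of the coupled pair stands out.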

Relevance: 20.00%

Abstract:

Adult-type hypolactasia (primary lactose malabsorption, lactase non-persistence) is the most common enzyme deficiency worldwide, and manifests with symptoms of lactose intolerance such as abdominal pain, gas formation and diarrhea. In humans with adult-type hypolactasia, lactase activity is high at birth, but declines during childhood to about one-tenth of the activity at birth. In 2002, a single-base polymorphism, C/T-13910, located 14 kilobases from the start codon of the lactase-phlorizin hydrolase (LPH) gene, was observed to be associated with the persistence of lactase activity. The T-13910 allele (C/T-13910 and T/T-13910 genotypes) associates with persistence of lactase activity throughout life, whereas the C/C-13910 genotype associates with adult-type hypolactasia. In this thesis work, the timing and mechanism of the decline of lactase enzyme activity during development were studied using the C/T-13910 polymorphism as a molecular marker. We observed an excellent correlation between low lactase activity and the C/C-13910 genotype in all subjects > 12 years of age, irrespective of their ethnicity. In children of African origin, lactase activity declined somewhat earlier than among Finnish children. Furthermore, we observed an increasing imbalance in the relative lactase mRNA expression from the C-13910 and T-13910 alleles in Finnish children beginning from five years of age. The genetic test for adult-type hypolactasia showed a sensitivity of 93% and a specificity of 100% in the Finnish children and adolescents > 12 years of age. The relation of milk consumption and milk-related abdominal complaints to the C/T-13910 genotypes associated with lactase persistence/non-persistence was studied by a questionnaire-based approach in > 2100 Finns. Both Finnish children and adults with the C/C-13910 genotype consumed significantly less dairy products compared to those with the C/T-13910 and T/T-13910 genotypes. 
Flatulence was the only abdominal symptom of lactose intolerance that subjects with the C/C-13910 genotype reported significantly more often than those with the C/T-13910 and T/T-13910 genotypes. A minor proportion (<10%) of subjects with the C/C-13910 genotype nevertheless reported drinking milk without any symptoms afterwards. There was no association between cow's milk allergy beginning in the newborn period and adult-type hypolactasia. In an association study, an increased risk of colorectal cancer was observed among those with a molecular diagnosis of adult-type hypolactasia. Further studies are warranted to clarify whether the increased risk observed in the Finnish population is associated with lactose itself or with the decreased intake of dairy products in these subjects.
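The reported 93% sensitivity and 100% specificity follow from standard confusion-matrix arithmetic. The sketch below uses hypothetical counts chosen only to reproduce those rates; the thesis's actual case numbers are not given here:

```python
def sensitivity(tp, fn):
    """Proportion of true hypolactasia cases the genetic test detects
    (true positives over all actually affected subjects)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of lactase-persistent subjects correctly testing negative
    (true negatives over all actually unaffected subjects)."""
    return tn / (tn + fp)

# Hypothetical counts chosen only to illustrate the arithmetic:
# 93 true positives and 7 false negatives -> sensitivity 0.93,
# 50 true negatives and 0 false positives -> specificity 1.00.
print(sensitivity(93, 7))   # 0.93
print(specificity(50, 0))   # 1.0
```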

Relevance: 20.00%

Abstract:

Integrating biodiversity conservation into forest management in non-industrial private forests requires changes in the practices of those public and private actors that have implementing responsibilities and whose strategic and operational opportunities are at stake. Understanding this kind of context-dependent institutional adaptation requires bridging between two analytical approaches: policy implementation and organizational adaptation, backed up with empirical analysis. The empirical analyses recapitulated in this thesis summary address organizational competences, specialization, professional judgment, and organizational networks. The analyses utilize qualitative and quantitative data from public and private sector organizations as well as associations. The empirical analyses produced stronger signals of policy implementation than of organizational adaptation. The organizations recognized the policy and social demand for integrating biodiversity conservation into forest management and their professionals were in favor of conserving biodiversity. However, conservation was integrated to forest management so tightly that it could be said to be subsumed by mainstream forestry. The organizations had developed some competences for conservation but the competences did not differentiate among the organizations other than illustrating the functional differences between industry, administration and associations. The networks that organizations depended on consisted of traditional forestry actors and peers both in planning policy and at the operational level. The results show that the demand for biodiversity conservation has triggered incremental changes in organizations. They can be considered inert regarding this challenge. Isomorphism is advanced by hierarchical guidance and standardization, and by professional norms. Analytically, this thesis contributes to the understanding of organizational behavior across the public and private sector boundaries. 
The combination of a policy implementation approach, inherent in the analysis of public policies in hierarchical administration settings, and organizational adaptation, typically applied to private sector organizations, highlights the importance of institutional interpretation. Institutional interpretation helps to make sense of the empirically identified deviations from the basic tenets of the two approaches. Attention to institutions allows identification of the overlap of the traditionally segregated approaches.

Relevance: 20.00%

Abstract:

A vast amount of public services and goods are contracted through procurement auctions. Therefore it is very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service with a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design the auctions to maximize public revenue. In particular, I concentrate on how competition, which means the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We talk of a private values information paradigm when the bidders know their valuations exactly. In a common value information paradigm, the information about the value of the object is dispersed among the bidders. With private values more competition always increases the public revenue but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. 
Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics like contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions because that would decrease the importance of common value components and cheaply increase entry, which would then have a more beneficial impact on the public revenue. Typically, cartels decrease the public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity unlike the existing test. I apply both methods to procurement auctions that contract snow removal in schools of Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.

Relevance: 20.00%

Abstract:

While environmental variation is a ubiquitous phenomenon in the natural world that has long been appreciated by the scientific community, recent changes in global climatic conditions have begun to raise awareness of the economic, political and sociological ramifications of global climate change. Climate warming has already resulted in documented changes in ecosystem functioning, with direct repercussions on ecosystem services. While predicting the influence of ecosystem changes on vital ecosystem services can be extremely difficult, knowledge of the organisation of ecological interactions within natural communities can help us better understand climate driven changes in ecosystems. The role of environmental variation as an agent mediating population extinctions is likely to become increasingly important in the future. In previous studies, population extinction risk in stochastic environmental conditions has been tied to an interaction between population density dependence and the temporal autocorrelation of environmental fluctuations. When populations interact with each other, forming ecological communities, the response of such species assemblages to environmental stochasticity can depend, e.g., on trophic structure in the food web and the similarity in species-specific responses to environmental conditions. The results presented in this thesis indicate that variation in the correlation structure between species-specific environmental responses (environmental correlation) can have important qualitative and quantitative effects on community persistence and biomass stability in autocorrelated (coloured) environments. In addition, reddened environmental stochasticity and ecological drift processes (such as demographic stochasticity and dispersal limitation) have important implications for patterns in species relative abundances and community dynamics over time and space. 
Our understanding of patterns in biodiversity at local and global scale can be enhanced by considering the relevance of different drift processes for community organisation and dynamics. Although the results laid out in this thesis are based on mathematical simulation models, they can be valuable in planning effective empirical studies as well as in interpreting existing empirical results. Most of the metrics considered here are directly applicable to empirical data.
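A common way to simulate reddened (positively autocorrelated) environmental stochasticity with a tunable between-species environmental correlation is an AR(1) process plus a shared noise component. The sketch below is a generic illustration under assumed parameter values, not the simulation model used in the thesis:

```python
import numpy as np

def red_noise(n, kappa, rng):
    """AR(1) environmental noise: kappa > 0 yields reddened (positively
    autocorrelated) fluctuations; kappa = 0 recovers white noise."""
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):
        e[t] = kappa * e[t - 1] + np.sqrt(1 - kappa**2) * rng.normal()
    return e

def correlated_environments(n, kappa, rho, rng):
    """Two species-specific environmental series whose correlation is set
    by rho via a shared common component (the 'environmental correlation')."""
    common = red_noise(n, kappa, rng)
    e1 = np.sqrt(rho) * common + np.sqrt(1 - rho) * red_noise(n, kappa, rng)
    e2 = np.sqrt(rho) * common + np.sqrt(1 - rho) * red_noise(n, kappa, rng)
    return e1, e2

rng = np.random.default_rng(1)
e1, e2 = correlated_environments(5000, kappa=0.7, rho=0.5, rng=rng)
print(np.corrcoef(e1, e2)[0, 1])        # near the chosen rho
print(np.corrcoef(e1[:-1], e1[1:])[0, 1])  # lag-1 autocorrelation near kappa
```

Feeding such series into density-dependent population models is the usual way to study how noise colour and environmental correlation jointly affect persistence.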

Relevance: 20.00%

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. 
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
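The basic construction of quantile residuals, mapping each observation through the model's CDF and then through the inverse standard normal CDF, can be sketched as follows; under a correctly specified model the residuals are approximately i.i.d. N(0, 1). This is a generic illustration, not code from the thesis:

```python
from statistics import NormalDist
import random

def quantile_residuals(ys, cdf):
    """Quantile residuals: r_t = Phi^{-1}(F(y_t)), where F is the fitted
    model's CDF and Phi^{-1} the inverse standard normal CDF."""
    std = NormalDist()
    return [std.inv_cdf(cdf(y)) for y in ys]

# Sketch: data simulated from N(2, 3) and evaluated under the true model,
# so the residuals should look standard normal.
rng = random.Random(42)
model = NormalDist(mu=2.0, sigma=3.0)
ys = [rng.gauss(2.0, 3.0) for _ in range(2000)]
r = quantile_residuals(ys, model.cdf)

mean = sum(r) / len(r)
var = sum((x - mean) ** 2 for x in r) / len(r)
print(round(mean, 2), round(var, 2))  # near 0 and 1
```

A misspecified CDF would instead leave visible non-normality, autocorrelation, or heteroscedasticity in `r`, which is exactly what the thesis's tests are designed to detect.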

Relevance: 20.00%

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of univariate models considered in Chapters 2–4 to the case of a bivariate model. 
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
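The autoregressive probit structure can be sketched recursively: a latent index pi_t depends on its own lag and a predictor, and the recession probability is the standard normal CDF of that index. All parameter values and the predictor series below are hypothetical, not estimates from the thesis:

```python
from statistics import NormalDist

def autoregressive_probit_probs(x, omega, alpha, beta, pi0=0.0):
    """Recession probabilities from an autoregressive probit sketch:
    pi_t = omega + alpha * pi_{t-1} + beta * x_t,  P(y_t = 1) = Phi(pi_t).
    Parameter names and values are illustrative, not thesis estimates."""
    Phi = NormalDist().cdf
    pi, probs = pi0, []
    for xt in x:
        pi = omega + alpha * pi + beta * xt
        probs.append(Phi(pi))
    return probs

# A falling term spread (hypothetical predictor) pushes the recession
# probability up when beta is negative.
spread = [2.0, 1.5, 0.5, -0.5, -1.0]
probs = autoregressive_probit_probs(spread, omega=-0.3, alpha=0.6, beta=-0.8)
print([round(p, 2) for p in probs])
```

The lagged `pi` term is what distinguishes the dynamic model from the static probit, in which each period's probability would depend on `x_t` alone.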

Relevance: 20.00%

Abstract:

This report derives from the EU funded research project “Key Factors Influencing Economic Relationships and Communication in European Food Chains” (FOODCOMM). The research consortium consisted of the following organisations: University of Bonn (UNI BONN), Department of Agricultural and Food Marketing Research (overall project co-ordination); Institute of Agricultural Development in Central and Eastern Europe (IAMO), Department for Agricultural Markets, Marketing and World Agricultural Trade, Halle (Saale), Germany; University of Helsinki, Ruralia Institute Seinäjoki Unit, Finland; Scottish Agricultural College (SAC), Food Marketing Research Team - Land Economy Research Group, Edinburgh and Aberdeen; Ashtown Food Research Centre (AFRC), Teagasc, Food Marketing Unit, Dublin; Institute of Agricultural & Food Economics (IAFE), Department of Market Analysis and Food Processing, Warsaw and Government of Aragon, Center for Agro-Food Research and Technology (CITA), Zaragoza, Spain. The aim of the FOODCOMM project was to examine the role (prevalence, necessity and significance) of economic relationships in selected European food chains and to identify the economic, social and cultural factors which influence co-ordination within these chains. The research project considered meat and cereal commodities in six different European countries (Finland, Germany, Ireland, Poland, Spain, UK/Scotland) and was commissioned against a background of changing European food markets. The research project as a whole consisted of seven different work packages. This report presents the results of qualitative research conducted for work package 5 (WP5) in the pig meat and rye bread chains in Finland. Ruralia Institute would like to give special thanks to all the individuals and companies that kindly gave up their time to take part in the study. Their input has been invaluable to the project. The contribution of research assistant Sanna-Helena Rantala was significant in the data gathering. 
The FOODCOMM project was coordinated by the University of Bonn, Department of Agricultural and Food Market Research. Special thanks to Professor Monika Hartmann for acting as the project leader of FOODCOMM.

Relevance: 20.00%

Abstract:

Governance has been one of the most popular buzzwords in recent political science. As with any term shared by numerous fields of research, as well as everyday language, governance is encumbered by a jungle of definitions and applications. This work elaborates on the concept of network governance. Network governance refers to complex policy-making situations, where a variety of public and private actors collaborate in order to produce and define policy. Governance consists of processes in which autonomous, self-organizing networks of organizations exchange information and deliberate. Network governance is a theoretical concept that corresponds to an empirical phenomenon. This phenomenon is often framed as a historical development: governance describes changes in the political processes of Western societies since the 1980s. In this work, empirical governance networks are used as an organizing framework, and the concepts of autonomy, self-organization and network structure are developed as tools for empirical analysis of any complex decision-making process. This work develops this framework and explores the governance networks in the case of environmental policy-making in the City of Helsinki, Finland. The crafting of a local ecological sustainability programme required support and knowledge from all sectors of administration, a number of entrepreneurs and companies and the inhabitants of Helsinki. The policy process relied explicitly on networking, with public and private actors collaborating to design policy instruments. Communication between individual organizations led to the development of network structures and patterns. This research analyses these patterns and their effects on policy choice, by applying the methods of social network analysis. A variety of social network analysis methods are used to uncover different features of the networked process. 
Links between individual network positions, network subgroup structures and macro-level network patterns are compared to the types of organizations involved and final policy instruments chosen. By using governance concepts to depict a policy process, the work aims to assess whether they contribute to models of policy-making. The conclusion is that the governance literature sheds light on events that would otherwise go unnoticed, or whose conceptualization would remain atheoretical. The framework of network governance should be in the toolkit of the policy analyst.
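One of the simplest social network analysis measures of individual network position, degree centrality, can be computed directly from an edge list. The network and node names below are hypothetical stand-ins for the organizations studied, not data from the Helsinki case:

```python
def degree_centrality(edges):
    """Normalized degree centrality for an undirected collaboration network:
    each node's tie count divided by the maximum possible (n - 1)."""
    nodes = sorted({n for e in edges for n in e})
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    denom = len(nodes) - 1
    return {n: deg[n] / denom for n in nodes}

# Hypothetical policy network in which one actor brokers most ties.
edges = [("env_centre", "city_board"), ("env_centre", "ngo"),
         ("env_centre", "firm"), ("city_board", "ngo")]
print(degree_centrality(edges))
```

Comparing such position scores across organization types is one way the kind of micro-to-macro linkage described above can be operationalized.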

Relevance: 20.00%

Abstract:

The open access (OA) model for journals is compared to the open source principle for computer software. Since the early 1990s nearly 1,000 OA scientific journals have emerged – mostly as voluntary community efforts, although recently some professionally operating publishers have used author charges or institutional membership. This study of OA journals without author charges shows that their impact is still relatively small, but awareness of it is increasing. The average number of research articles per year is lower than for major scientific journals but the publication times are shorter.