37 results for Hold


Relevance: 10.00%
Publisher:
Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and devise cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, or z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry about whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
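To make the search problem concrete, the sketch below scores a single candidate rule X -> A with Fisher's exact test on a 2x2 contingency table. The attribute names and the random data are purely illustrative, and the sketch omits the thesis's actual contribution, namely the pruning and search machinery over the exponential space of candidate rules.

```python
# Hedged sketch: scoring one candidate rule X -> A with Fisher's exact test.
# The binary attributes ("smoker", "exercise", "disease") and the random data are
# illustrative only; the pruning and search over all candidate rules is not shown.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_rows = 1000
data = {
    "smoker":   rng.integers(0, 2, n_rows),
    "exercise": rng.integers(0, 2, n_rows),
    "disease":  rng.integers(0, 2, n_rows),
}

def rule_p_value(data, antecedent, consequent, negated=False):
    """p-value of the rule X -> A (or X -> not A), X = set of positive-valued attributes."""
    x = np.ones(n_rows, dtype=bool)
    for attr in antecedent:                  # X holds when every antecedent attribute is 1
        x &= data[attr] == 1
    a = data[consequent] == (0 if negated else 1)
    table = [[int(np.sum(x & a)),  int(np.sum(x & ~a))],
             [int(np.sum(~x & a)), int(np.sum(~x & ~a))]]
    _, p = fisher_exact(table, alternative="greater")   # one-sided: positive dependency
    return p

print(rule_p_value(data, ["smoker", "exercise"], "disease"))
```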

Relevance: 10.00%
Publisher:
Abstract:

This study examines philosophically the main theories and methodological assumptions of the field known as the cognitive science of religion (CSR). The study makes a philosophically informed reconstruction of the methodological principles of the CSR, indicates problems with them, and examines possible solutions to these problems. The study focuses on several different CSR writers, namely Scott Atran, Justin Barrett, Pascal Boyer and Dan Sperber. CSR theorising is done at the intersection of the cognitive sciences, anthropology and evolutionary psychology. This multidisciplinary nature makes CSR a fertile ground for philosophical considerations coming from the philosophy of psychology, philosophy of mind and philosophy of science. The study begins by spelling out the methodological assumptions and auxiliary theories of CSR writers by situating these theories and assumptions in the nexus of existing approaches to religion. The distinctive feature of CSR is its emphasis on information processing: CSR writers claim that contemporary cognitive sciences can inform anthropological theorising about the human mind and offer tools for producing causal explanations. Further, they claim to explain the prevalence and persistence of religion by the cognitive systems that undergird religious thinking. I also examine the core theoretical contributions of the field, focusing mainly on (1) the “minimal counter-intuitiveness hypothesis” and (2) the different ways in which supernatural agent representations activate our cognitive systems. Generally speaking, CSR writers argue for the naturalness of religion: religious ideas and practices are widespread and pervasive because human cognition operates in such a way that religious ideas are easy to acquire and transmit. The study raises two philosophical problems, namely, the “problem of scope” and the “problem of religious relevance”. The problem of scope is created by the insistence of several critics of the CSR that CSR explanations are mostly irrelevant for explaining religion, whereas most CSR writers themselves hold that cognitive explanations can answer most of our questions about religion. I argue that the problem of scope arises from differences in explanation-begging questions: the former group is interested in explaining different things than the latter group. I propose that we should not stick too rigidly to one set of methodological assumptions, but rather acknowledge that different assumptions might help us to answer different questions about religion. Instead of adhering to some robust metaphysics, as some strongly naturalistic writers argue, we should adopt a pragmatic and explanatory pluralist approach, which would allow different kinds of methodological presuppositions in the study of religion provided that they attempt to answer different kinds of why-questions, since religion appears to be a multi-faceted phenomenon that spans a variety of fields of the special sciences. The problem of religious relevance is created by the insistence of some writers that CSR theories show religious beliefs to be false or irrational, whereas others invoke CSR theories to defend certain religious ideas. The problem is interesting because it reveals the more general philosophical assumptions of those who make such interpretations. CSR theories can be (and have been) interpreted in terms of three different philosophical frameworks: strict naturalism, broad naturalism and theism. I argue that CSR theories can be interpreted within all three frameworks without doing violence to the theories, and that these frameworks give different kinds of results regarding the religious relevance of CSR theories.

Relevance: 10.00%
Publisher:
Abstract:

I examine the portrayal of Jesus as a friend of toll collectors and sinners in the Third Gospel. I aim at a comprehensive view of the Lukan sinner texts, combining questions of the origin and development of these texts with questions of Luke's theological message, of how the texts function as literature, and of the social-historical setting(s) behind the texts. Within New Testament scholarship, researchers on the historical Jesus mostly still hold that a special mission to toll collectors and sinners was central to Jesus' public activity. Within Lukan studies, M. Goulder, J. Kiilunen and D. Neale have claimed that this picture is due to Luke's theological vision and the liberties he took as an author. Their view is disputed by other Lukan scholars. I discuss methods which scholars have used to isolate the typical language of Luke's alleged written sources, or to argue for source-free creation by Luke himself. I claim that the analysis of Luke's language does not help us determine the origin of the Lukan pericopes. I examine the possibility of free creativity on Luke's part in the light of the invention technique used in ancient historiography. Invention was an essential part of all ancient historical writing, and therefore Luke quite probably used it, too. Possibly Luke had access to special traditions, but the nature of oral tradition does not allow their reconstruction. I analyze Luke 5:1-11; 5:27-32; 7:36-50; 15:1-32; 18:9-14; 19:1-10; 23:39-43. In most of these some underlying special tradition is possible, though far from certain. It becomes evident that Luke's reshaping was so thorough that the pericopes as they now stand are decidedly Lukan creations. This is indicated by the characteristic Lukan story-telling style as well as by the strongly unified Lukan theology of the pericopes. Luke's sinners and Pharisees do not fit the social-historical context of Jesus' day. The story-world is one of polarized right and wrong. That Jesus is the Christ, the representative of God, is an intrinsic part of the story-world. Luke wrote a theological drama inspired by tradition. He persuaded his audience to identify as (repenting) sinners. Luke's motive was that he saw the sinners in Jesus' company as forerunners of Gentile Christianity.

Relevance: 10.00%
Publisher:
Abstract:

The purpose of my research is to inquire into the essence and activity of God in the legendarium of the English philologist and writer J.R.R. Tolkien (1892-1973). The legendarium, composed of Tolkien's writings related to Middle-earth, was begun when he created two Elvish languages: Quenya, based on Finnish, and Sindarin, based on Welsh. Tolkien developed his mythology inspired by Germanic myths and The Kalevala. It is a fictional ancient history set in our world. The legendarium is monotheistic: God is called Eru ‘The One’ and Ilúvatar ‘Father of All’. Eru is the same as the Christian God, for Tolkien wanted to keep his tales consistent with his faith. He said his works were Christian by nature, with the religious element absorbed into the story and the symbolism. In The Silmarillion, set in the primeval ages of Middle-earth, the theological aspects are more conspicuous, while in The Lord of the Rings, which brings the stories to an end, they are mostly limited to symbolic references. The legendarium is unified by its realistic outlook on creaturely abilities and by hope expressing itself as humbly defiant resistance. “The possibility of complexity or of distinctions in the nature of Eru” is a part of the legendarium. Eru Ilúvatar is Trinitarian, in accordance with Tolkien's faith. Without contextual qualifiers, Eru seems to refer to God the Father, like God in the Bible. Being the creator who dwells outside the world is attributed to Him. The Holy Spirit is the only Person of the Trinity bestowed with names: the Flame Imperishable and the Secret Fire. When Eru creates the material world with His word, He sends the Flame Imperishable to burn at the heart of the world. The Secret Fire signifies the Creative Power that belongs to God alone and is a part of Him. The Son, the Word, is not directly mentioned, but according to one writing Eru must step inside the world in order to save it from corruption, yet remain outside it at the same time. The inner structure of the legendarium points to the need for a future salvation. The creative word of Eru, “Eä! Let these things Be!”, probably has a connection with the Logos in Christianity. Thus we can find three “distinctions” in Eru: a Creator who dwells outside the world, a Sustainer who dwells inside it and a Redeemer who shall step inside it. Some studies of Tolkien have claimed that Eru is distant and remote. This seems to hold water only partially. Ilúvatar, the Father of All, has a special relation with the Eruhíni, His Children, the immortal Elves and the mortal Men. He usually communicates with them only through the Valar, who resemble archangels. Nevertheless, only the Children of Eru can fight against evil, because their tragic fortunes turn evil into good. Even though religious activities are scarce among them, the fundamental faith and ultimate hope of the “Free Peoples” are directed towards Eru. He is present in the drama of history as the “Author of the Story”, who at times also interferes with its course through catastrophes and eucatastrophes, ‘good catastrophes’. Eru brings about a catastrophe when evil would otherwise bring good to an end, and He brings about a eucatastrophe when creaturely strength is not sufficient for victory. Victory over corruption is especially connected with mortal Men, of whom the most (or least) insignificant people are the Hobbits. However, because of the “primeval disaster” (that is, the fall) of Mankind, ultimate salvation can only remain open, a hope for the far future.

Relevance: 10.00%
Publisher:
Abstract:

This thesis is a collection of three essays on Bangladeshi microcredit. One of the essays examines the effect of microcredit on the cost of crime. The other two analyze the functioning mechanisms of microcredit programs, i.e. credit allocation rules and credit recovery policy. In Essay 1, the demand for microcredit and its allocation rules are studied. Microcredit is claimed to be the most effective means of supplying credit to the poorest of the poor in rural Bangladesh. This claim has not yet been examined among households who demand microcredit. The results of this essay show that educated households are more likely to demand microcredit and that its demand does not differ by sex. The results also show that microcredit programs follow different credit allocation rules for male and female applicants. Education is an essential characteristic for both sexes that credit programs consider in allocating credit. In Essay 2, the focus is on establishing a link between microcredit and the incidence of rural crime in Bangladesh. The basic hypothesis is that microcredit programs hold the group jointly responsible, which provides an incentive for group members to protect each other from criminal gangs in order to safeguard their own economic interests. The key finding of this essay is that the average cost of crime for non-borrowers is higher than that for borrowers. In particular, a 10% increase in credit reduces the cost of crime by 4.2%. The third essay analyzes the reasons for the high repayment rates of Bangladeshi microcredit programs. The existing literature argues that credit applicants are able to screen out high-risk applicants in the group formation stage using their superior local information. In addition, due to the joint liability mechanism of the programs, group members monitor each other's economic activities to ensure minimal misuse of credit. The arguments in the literature are based on the assumption that once the credit is provided, credit programs have no further role in ensuring that repayments are honored by the group. In contrast, using survey data this essay documents that credit programs additionally use organizational pressure, such as humiliating and harassing the non-payer, to recover unpaid installments. The results also show that the group mechanisms do not have a significant effect in recovering defaulted dues.

Relevance: 10.00%
Publisher:
Abstract:

Economic and Monetary Union can be characterised as a complicated set of legislation and institutions governing monetary and fiscal responsibilities. The measures of fiscal responsibility are to be guided by the Stability and Growth Pact, which sets rules for fiscal policy and makes discretionary fiscal policy virtually impossible. To analyse the effects of the fiscal and monetary policy mix, we modified the New Keynesian framework to allow for supply effects of fiscal policy. We show that defining a supply-side channel for fiscal policy using an endogenous output gap changes the stabilising properties of monetary policy rules. The stability conditions are affected by fiscal policy, so that the dichotomy between active (passive) monetary policy and passive (active) fiscal policy as stabilising regimes does not hold, and it is possible to have an active monetary - active fiscal policy regime consistent with dynamic stability of the economy. We show that, if we take supply-side effects into account, we get more persistent inflation and output reactions. We also show that the dichotomy does not hold for a variety of different fiscal policy rules based on government debt and the budget deficit, using the tax smoothing hypothesis and formulating the tax rules as difference equations. The debt rule with active monetary policy results in indeterminacy, while the deficit rule produces a determinate solution with active monetary policy, even with active fiscal policy. Combining the fiscal requirements in one rule results in cyclical responses to shocks. The amplitude of the cycle is larger with more weight on debt than on deficit. Combining optimised monetary policy with fiscal policy rules means that, under a discretionary monetary policy, the fiscal policy regime affects the size of the inflation bias. We also show that commitment to an optimal monetary policy not only corrects the inflation bias but also increases the persistence of output reactions. With fiscal policy rules based on the deficit we can retain the tax smoothing hypothesis also in a sticky-price model.
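For readers unfamiliar with the active/passive terminology, the contrast can be illustrated with a generic rule pair of the kind used in this literature (the notation and coefficients here are illustrative, not the study's own equations): monetary policy is called active when the interest rate reacts more than one-for-one to inflation, and fiscal policy is called passive when taxes react strongly enough to outstanding debt to keep it stable.

```latex
% Illustrative policy rules only; generic notation, not the study's exact specification.
\begin{align*}
  i_t    &= \phi_\pi \pi_t, & \phi_\pi &> 1 && \text{(active monetary policy)}\\
  \tau_t &= \rho\,\tau_{t-1} + \gamma_b\, b_{t-1}, & \gamma_b &> 0 \text{ large enough to stabilise debt} && \text{(passive fiscal policy)}
\end{align*}
```

The study's debt and deficit rules are, as stated above, formulated as difference equations of this general form, and the determinacy results depend on how the fiscal feedback coefficients are combined with the monetary policy stance.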

Relevance: 10.00%
Publisher:
Abstract:

Mutation and recombination are the fundamental processes leading to genetic variation in natural populations. This variation forms the raw material for evolution through natural selection and drift. Therefore, studying mutation rates may reveal information about evolutionary histories as well as phylogenetic interrelationships of organisms. In this thesis two molecular tools, DNA barcoding and the molecular clock, were examined. In the first part, the efficiency of mutations in delineating closely related species was tested and the implications for conservation practices were assessed. The second part investigated the proposition that a constant mutation rate exists within invertebrates, in the form of a metabolic-rate-dependent molecular clock, which can be applied to accurately date speciation events. DNA barcoding aspires to be an efficient technique not only to distinguish between species but also to reveal population-level variation, relying solely on mutations found in a short stretch of a single gene. In this thesis barcoding was applied to discriminate between Hylochares populations from Russian Karelia and new Hylochares findings from the greater Helsinki region in Finland. Although barcoding failed to delineate the two reproductively isolated groups, their distinct morphological features and differing life-history traits led to their classification as two closely related, though separate, species. The lack of genetic differentiation appears to be due to a recent divergence event not yet reflected in the beetles' molecular make-up. Thus, the Russian Hylochares was described as a new species. The Finnish species, previously considered locally extinct, was recognized as endangered. Even if, due to their identical genetic make-up, the populations had been regarded as conspecific, conservation strategies based on prior knowledge from Russia would not have guaranteed the survival of the Finnish beetle. Therefore, new conservation actions based on detailed studies of the biology and life history of the Finnish Hylochares were conducted to protect this endemic rarity in Finland. The idea behind the strict molecular clock is that mutation rates are constant over evolutionary time and may thus be used to infer species divergence dates. However, one of the most recent theories argues that a strict clock does not tick per unit of time but has a constant substitution rate per unit of mass-specific metabolic energy. Therefore, according to this hypothesis, molecular clocks have to be recalibrated taking body size and temperature into account. This thesis tested the temperature effect on mutation rates in equally sized invertebrates. For the first dataset (family Eucnemidae, Coleoptera) the phylogenetic interrelationships and evolutionary history of the genus Arrhipis had to be inferred before the influence of temperature on substitution rates could be studied. Further, a second, larger invertebrate dataset (family Syrphidae, Diptera) was employed. Several methodological approaches, a number of genes and multiple molecular clock models revealed that there was no consistent relationship between temperature and mutation rate for the taxa under study. Thus, the body size effect, observed in vertebrates but controversial for invertebrates, rather than temperature may be the underlying driving force behind the metabolic-rate-dependent molecular clock. Therefore, the metabolic-rate-dependent molecular clock does not hold for the invertebrate groups studied here.
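One common way to write down the metabolic-rate-dependent clock hypothesis (shown here only for orientation, in the generic form used in the metabolic theory of ecology, not as the exact model fitted in the thesis) is that the substitution rate per unit time scales with mass-specific metabolic rate:

```latex
% Generic metabolic-rate-dependent clock (illustrative form, not the thesis's exact model).
% M = body mass, T = absolute temperature, E = an activation energy, k = Boltzmann's constant.
\[
  \text{substitution rate} \;\propto\; M^{-1/4}\, e^{-E/(kT)}
\]
```

Under this formulation, holding body size constant, as for the equally sized invertebrates compared here, isolates the temperature term, which is the effect the thesis tests.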
This thesis emphasizes that molecular techniques relying on mutation rates have to be applied with caution. Whereas they may work satisfactorily under certain conditions for specific taxa, they may fail for others. The molecular clock as well as DNA barcoding should incorporate all the information and data available to obtain comprehensive estimations of the existing biodiversity and its evolutionary history.

Relevance: 10.00%
Publisher:
Abstract:

In this study, which pertains to the field of social gerontology and family research, I analyse the meaning of everyday life as perceived by elderly couples living at home. I use the ethnographic approach, with the aim of interpreting meanings from elderly people's personal point of view and of increasing understanding of their way of life. The study deepens our conception of what gives purpose to the everyday life of elderly people. The number of elderly couples is growing and, to an increasing extent, a couple will live and cope together to a ripe old age. Such coping can also be viewed as an important resource for society. Ethnography tries to get close to people's life practices. I examine the day-to-day life of elderly couples based on textual data, which I obtained by visiting the homes of 16 couples in a total of five small municipalities in Southern Finland. The couples had married soon after the war or in the early 1950s. I found that the aspiration towards continuity, which unites the concepts of place and home, housework and a long marriage, is the most important notion connecting the discussion themes. The results show that, in the opinion of the elderly, the concept of a good life is intertwined with a long marriage spent at home, as well as with its values. Old people find that they lead an independent life if they feel that they can hold on to the key features of their way of life. Elderly couples' ability to cope with everyday life involves taking care of housework and other tasks around the home together. This means that they support one another and have common goals and aspirations. Daily tasks provide substance in the lives of elderly couples. Each day has its rhythm, and the pace of this rhythm is set by routine and habits. Satisfaction stems from the fact that you can do something you are good at. The couples have also revised the division of housework. Men have learned to perform new tasks around the house when their wives can no longer manage them by themselves. Some tasks are given up. Day-to-day life at home and around the house provides room for men's participation. Mutual support and care between husband and wife can also protect them from having to resort to outside or official help. Old couples integrate their life experiences and memories, as well as present and future risks and opportunities. They wish to carry on their lives as before, and still think that their present life corresponds with their idea of a good life. Key words: elderly couples, continuity theory of aging, everyday life, social gerontology, family research

Relevance: 10.00%
Publisher:
Abstract:

The research topic is the formation of the nuclear family understanding and the politicization of the nuclear family. Thus, the question is: how did the family historically come to be understood particularly as a nuclear family, and why did it become central in terms of politics and the social? The research participates in discussions on the concept and phenomenon of family. A central theme of the analysis is the question: what is family? Family is seen as historically contingent, and the discussions on the concept and phenomenon are conducted via historical analysis. The centre of attention is the nuclear family; thus, a distinction between the concepts of family and nuclear family is made in order to focus on the historically specific phenomenon of the nuclear family. Family -- contrary to the concept of nuclear family -- is in general seen to be able to refer to families in all times and all cultures, as well as to all types of families in our own time and culture. The nuclear family understanding is examined through two separate themes, that of parent-child relationships and that of marital relations. Two simultaneous processes give nuclear family relations their current form: on the one hand, the marital couple as the basis of the family is eroding and losing its capacity to hold the family together; on the other, in Finland at least from the 1950s on, the normal development of the child has come to be seen as ontologically bound to the (biological) mother and (via her to) the father. In the nucleus of the family is the child: the biological, psychological and social processes of normal development are seen as ontologically bound to nuclear family relations. Thus, marriages can collapse, but the nuclear family is unbreakable. What is interesting is the historical timing: as nuclear family relations had just been born, marriage plunged into crisis. The concept and phenomenon of the nuclear family are analyzed in the context of the social and the political (in Finnish these two collapse into the concept of ‘yhteiskunnallinen’, which refers both to society as natural processes and to the state in terms of politics). Family is political and social in two senses. First, it is understood as the natural origin of the social and of society. The human being is, by definition, understood as a social being, and the origin of the social, in turn, is seen to lie in the family. Family is seen as natural to the species. Disturbances in family life lead to unsocial behaviour. Second, family is also seen as a political actor with rights and obligations: the family is obligated to control the life of its members. State patronage is seen as at the same time inevitable -- family life is far too precious to be left alone -- and problematic, as it seems to disturb the natural processes of the family or to erode its autonomy. The rigidity of the nuclear family lies in the role it seems to hold in the normal development of the child and the future of society. Disturbances in families first affect the child, then society. In terms of the possibility of re-thinking the family, the natural and the political collide: the nuclear family seems natural, unchangeable, un-negotiable. The nuclear family is historically ontologised. The biological, psychological and social facts of the family seem contrary to the idea of negotiation and politics: the natural facts of the family problematise the politics of the family. The research material consists of administrative documents, memoranda, consultation documents, seminar reports, educational writings, guidebooks and newspaper articles on family politics between the 1950s and the 1990s.

Relevance: 10.00%
Publisher:
Abstract:

The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that there are generalizable and reliable causal explanations in ecology, provided by generalizations that are invariant and stable. An invariant generalization continues to hold or be valid under a special change, called an intervention, that changes the value of its variables. Whether a generalization remains invariant under interventions is the criterion that determines whether it is explanatory. A generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns a kind of generality that has to do with whether a generalization holds across possible background conditions. The more stable a generalization, the less dependent it is on background conditions to remain true. Although it is invariance rather than stability of generalizations that furnishes us with explanatory generalizations, stability has an important function in this context of explanations: it furnishes us with the extrapolability and reliability of scientific explanations. I also discuss non-empirical investigations of models that I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which one model is studied with regard to its stability conditions by making changes and variations to the values of the model's parameters. As a general definition of robustness analyses, I propose investigations of variations in the modeling assumptions of different models of the same phenomenon, in which the focus is on whether or not they produce similar or convergent results. Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and they are especially powerful in pointing out reasons why they do so. They show which conditions or assumptions the results of models depend on. Key words: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability
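As a concrete, deliberately toy illustration of the sensitivity analyses described above, the sketch below varies one parameter of a textbook discrete-time logistic growth model (not a model from the dissertation) and checks whether the qualitative result survives the variation.

```python
# Hedged, toy illustration of a sensitivity analysis: vary one parameter of a
# textbook discrete-time logistic growth model (not a model from the dissertation)
# and check whether the qualitative result survives the variation.
import numpy as np

def logistic_trajectory(r, K=100.0, n0=10.0, steps=200):
    """Population after `steps` iterations of n_{t+1} = n_t + r * n_t * (1 - n_t / K)."""
    n = n0
    for _ in range(steps):
        n = n + r * n * (1 - n / K)
    return n

# Sweep the growth-rate parameter r and record the long-run population.
for r in np.linspace(0.5, 2.5, 5):
    print(f"r = {r:.2f} -> population after 200 steps = {logistic_trajectory(r):8.2f}")
# For small r the population settles at the carrying capacity K = 100; for large r
# the discrete map oscillates, so the conclusion "n converges to K" is sensitive
# to this parameter variation.
```

The qualitative conclusion ("the population settles at the carrying capacity") holds only for part of the parameter range, which is exactly the kind of dependence on conditions that sensitivity analysis is meant to expose.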

Relevance: 10.00%
Publisher:
Abstract:

The present study focused on the associations between the personal experiences of intergroup contact, perceived social norms and the outgroup attitudes of Finnish majority and Russian-speaking minority youth living in Finland. The theoretical background of the study was derived from Allport's (1954) theory of intergroup contact (i.e., the contact hypothesis), social psychological research on normative influences on outgroup attitudes (e.g., Rutland, 2004; Stangor and Leary, 2006) and developmental psychological research on the formation of explicit (deliberate) and implicit (automatically activated) outgroup attitudes in adolescence (e.g., Barrett, 2007; Killen, McGlothlin and Henning, 2008). The main objective of the study was to shed light on the role of perceived social norms in the formation of outgroup attitudes among adolescents. First, the study showed that perceived normative pressure to hold positive attitudes towards immigrants regulated the relationship between the explicit and implicit expression of outgroup attitudes among majority youth. Second, perceived social norms concerning outgroup attitudes (i.e., the perceived outgroup attitudes of parents and peers) affected the relationship between intergroup contact and explicit outgroup attitudes depending on gender and group status. Positive social norms seem to be especially important for majority boys, who need both pleasant contact experiences and normative support to develop outgroup attitudes that are as positive as girls' attitudes. The role of social norms is accentuated also among minority youth, who, contrary to majority youth with their more powerful and independent status position, need to reflect upon their attitudes and experiences of negative intergroup encounters in relation to the experiences and attitudes of their ingroup members. Third, the results are indicative of the independent effects of social norms and intergroup anxiety on outgroup attitudes: the effect of perceived social norms on the outgroup attitudes of youth seems to be at least as strong as the effect of intergroup anxiety. Finally, it was shown that youth evaluate intergroup contact from the viewpoint of their ingroup and society as a whole, not just based on their own experiences. In conclusion, the outgroup attitudes of youth are formed in a close relationship with their social environment. On the basis of this study, the importance of perceived social norms for research on intergroup contact effects among youth cannot be overlooked. Positive normative influences have the potential to break the strong link between rare and/or negative personal contact experiences and negative outgroup attitudes, and norms also influence the relationship between implicit and explicit attitude expression.

Relevance: 10.00%
Publisher:
Abstract:

The literature review elucidates the mechanism of oxidation in proteins and amino acids and gives an overview of the detection and analysis of protein oxidation products, as well as information about β-lactoglobulin and studies carried out on modifications of this protein under certain conditions. The experimental research included the fractionation of the tryptic peptides of β-lactoglobulin using preparative HPLC-MS and monitoring of the oxidation process of these peptides via reverse-phase HPLC-UV. The peptides to be oxidized were selected with respect to their content of amino acids susceptible to oxidation and fractionated according to their m/z values. These peptides were: IPAVFK (m/z 674), ALPMHIR (m/z 838), LIVTQTMK (m/z 934) and VLVLDTDYK (m/z 1066). Even though it was not possible to isolate the target peptides completely, due to co-elution of various fractions, the percentages of target peptides in the samples were satisfactory for carrying out the oxidation procedure. The IPAVFK and VLVLDTDYK fractions were found to yield the oxidation products reviewed in the literature; however, unoxidized peptides were still present in high amounts after 21 days of oxidation. The UV data at 260 and 280 nm made it possible to monitor both the main peptides and the oxidation products, owing to the absorbance of the aromatic side-chains these peptides possess. The ALPMHIR and LIVTQTMK fractions were oxidatively consumed rapidly, and oxidation products of these peptides were observed even on day 0. The high rates of depletion of these peptides were attributed to the presence of His (H) and the sulfur-containing side-chains of Met (M). In conclusion, the selected peptides hold the potential to be utilized as marker peptides in β-lactoglobulin oxidation.
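As a side note on where the quoted nominal m/z values come from, the sketch below computes approximate singly protonated monoisotopic masses for the four tryptic peptides from standard residue masses. It is not part of the thesis's methodology, and the results land within about 1 Da of the nominal values quoted above.

```python
# Hedged sketch: approximate monoisotopic [M+H]+ m/z for the tryptic peptides listed
# above, from standard amino-acid residue masses. Not part of the thesis; just a way
# to see where the quoted nominal m/z values (674, 838, 934, 1066) come from.
RESIDUE_MASS = {  # monoisotopic residue masses (Da)
    "A": 71.03711, "D": 115.02694, "F": 147.06841, "H": 137.05891,
    "I": 113.08406, "K": 128.09496, "L": 113.08406, "M": 131.04049,
    "P": 97.05276, "Q": 128.05858, "R": 156.10111, "T": 101.04768,
    "V": 99.06841, "Y": 163.06333,
}
WATER, PROTON = 18.01056, 1.00728

def mz_singly_protonated(sequence: str) -> float:
    """[M+H]+ m/z of a linear peptide at charge state z = 1."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER + PROTON

for pep in ["IPAVFK", "ALPMHIR", "LIVTQTMK", "VLVLDTDYK"]:
    print(f"{pep:10s} [M+H]+ ~ {mz_singly_protonated(pep):8.2f}")
```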

Relevance: 10.00%
Publisher:
Abstract:

Microfinance institutions (MFIs) are constrained by a double bottom line: meeting social obligations (the first bottom line) and obtaining financial self-sufficiency (the second bottom line). The proponents of the first bottom line, however, are increasingly concerned that there is a trade-off between these two bottom lines - i.e., getting hold of financial self-sufficiency may lead MFIs to drift away from their original social mission of serving the very poor, commonly known as mission drift in microfinance, which is still a controversial issue. This study aims at addressing the concerns about mission drift in microfinance within a performance analysis framework. Chapter 1 deals with the theoretical background, motivation and objectives of the topic. The study then explores the validity of three major and related present-day concerns. Chapter 2 explores the impact of profitability on outreach quality in MFIs, commonly known as mission drift, using a unique panel database that contains 4-9 years' observations from 253 MFIs in 69 countries. Chapter 3 introduces factor analysis, a multivariate tool, into the process of analysing mission drift in microfinance, and the exercise in this chapter demonstrates how the statistical tool of factor analysis can be utilised to examine this conjecture. In order to explore why some microfinance institutions (MFIs) perform better than others, Chapter 4 looks at factors which have an impact on several performance indicators of MFIs - profitability or sustainability, repayment status and cost indicators - based on quality data on 353 institutions in 77 countries. The study also demonstrates whether such mission drift can be avoided while maintaining self-sustainability. In Chapter 5 we examine the impact of capital and financing structure on the performance of microfinance institutions, where estimations with instruments have been performed using a panel dataset of 782 MFIs in 92 countries for the period 2000-2007. Finally, Chapter 6 concludes the study by summarising the results from the previous chapters and suggesting some directions for future studies.
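As an illustration of the kind of multivariate tool introduced in Chapter 3, the sketch below runs a factor analysis on synthetic MFI-style indicators. The indicator names, the two assumed latent dimensions and the data are invented for illustration and are not the thesis's variables or dataset.

```python
# Hedged sketch of a factor analysis on synthetic MFI-style indicators.
# Indicator names, latent dimensions and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 253  # number of MFIs, matching the panel size mentioned above
latent = rng.normal(size=(n, 2))          # assumed dimensions: "sustainability", "outreach depth"
indicators = pd.DataFrame({
    "return_on_assets":             latent[:, 0] + 0.3 * rng.normal(size=n),
    "operational_self_sufficiency": latent[:, 0] + 0.3 * rng.normal(size=n),
    "avg_loan_balance":             latent[:, 1] + 0.3 * rng.normal(size=n),
    "pct_women_borrowers":         -latent[:, 1] + 0.3 * rng.normal(size=n),
})

fa = FactorAnalysis(n_components=2, random_state=0).fit(indicators)
loadings = pd.DataFrame(fa.components_.T, index=indicators.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))   # which indicators load on which latent dimension
```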

Relevance: 10.00%
Publisher:
Abstract:

Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns: the volatility smirk (skew) and the volatility term structure; examined together, the two form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are mostly influenced by many underlying risk factors. This thesis, which consists of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS. It extends the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put and call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variations in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for modeling the contemporaneous asymmetric return-volatility relationship, a generalization of the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions; it increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%), so OLS underestimates this relationship at upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new-VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; however, surprisingly, GBP does not affect them. In addition, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
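To make the DFW-style estimation in the first essay more concrete, the sketch below fits a deterministic polynomial IV surface in moneyness and maturity by least squares. The quadratic specification, the synthetic data and the coefficient values are generic illustrations, not the exact models, options data or results used in the thesis.

```python
# Hedged sketch of a DFW-style deterministic IV surface fit: regress observed
# implied volatilities on polynomial terms in moneyness (M) and maturity (T).
# Specification and synthetic "observations" are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(0.8, 1.2, 500)            # moneyness
T = rng.uniform(0.05, 1.0, 500)           # time to maturity in years
iv_obs = (0.25 - 0.3 * (M - 1) + 0.8 * (M - 1) ** 2 + 0.05 * T
          + rng.normal(0, 0.01, 500))     # synthetic smile + term structure + noise

# Design matrix: IV = a0 + a1*M + a2*M^2 + a3*T + a4*T^2 + a5*M*T
X = np.column_stack([np.ones_like(M), M, M**2, T, T**2, M * T])
coef, *_ = np.linalg.lstsq(X, iv_obs, rcond=None)

def fitted_iv(m, t):
    """Fitted IV at a given moneyness and maturity."""
    return coef @ np.array([1.0, m, m**2, t, t**2, m * t])

print(coef.round(4))
print(f"ATM 3-month fitted IV: {fitted_iv(1.0, 0.25):.4f}")
```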

Relevance: 10.00%
Publisher:
Abstract:

The productivity of a process is related to how effectively input resources are transformed into value for customers. For the needs of manufacturers of physical products there are widely used productivity concepts and measurement instruments. However, in service processes the underlying assumptions of these concepts and models do not hold. For example, manufacturing-based productivity models assume that an altered configuration of input resources in the production process does not lead to quality changes in outputs (the constant-quality assumption). However, in a service context, changes in the production resources and production systems do affect the perceived quality of services. Therefore, using manufacturing-oriented productivity models in service contexts is likely to give managers the wrong directions for action. Research into the productivity of services is still scarce, because of the lack of viable models. The purpose of the present article is to analyse the requirements for the development of a productivity concept for service operations. Based on the analysis, a service productivity model is developed. According to this model, service productivity is a function of 1) how effectively input resources into the service (production) process are transformed into outputs in the form of services (internal or cost efficiency), 2) how well the quality of the service process and its outcome is perceived (external or revenue efficiency), and 3) how effectively the capacity of the service process is utilised (capacity efficiency). In addition, directions for developing measurement models for service productivity are discussed.
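A compact way to summarise the three-component model described above (an illustrative formalisation with generic notation, not necessarily the article's own operationalisation) is:

```latex
% Illustrative formalisation only; generic notation, not necessarily the article's own.
\[
  \text{Service productivity}
    = f\big(\underbrace{\text{internal (cost) efficiency}}_{\text{transformation of inputs into service outputs}},\;
            \underbrace{\text{external (revenue) efficiency}}_{\text{perceived quality of process and outcome}},\;
            \underbrace{\text{capacity efficiency}}_{\text{utilisation of available capacity}}\big)
\]
```

Because the constant-quality assumption fails in services, the external component cannot be dropped: a cost-efficiency gain that lowers perceived quality may reduce, rather than raise, overall service productivity.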