37 results for transparency window


Relevance:

10.00%

Publisher:

Abstract:

The continuous production of blood cells, a process termed hematopoiesis, is sustained throughout the lifetime of an individual by a relatively small population of cells known as hematopoietic stem cells (HSCs). HSCs are unique cells characterized by their ability to self-renew and give rise to all types of mature blood cells. Given their high proliferative potential, HSCs need to be tightly regulated at the cellular and molecular levels, as they could otherwise turn malignant. On the other hand, the tight regulatory control of HSC function also translates into difficulties in culturing and expanding HSCs in vitro. In fact, it is currently not possible to maintain or expand HSCs ex vivo without rapid loss of self-renewal. Increased knowledge of the unique features of important HSC niches and of the key transcriptional regulatory programs that govern HSC behavior is thus needed. Additional insight into the mechanisms of stem cell formation could enable us to recapitulate the processes of HSC formation and self-renewal/expansion ex vivo, with the ultimate goal of creating an unlimited supply of HSCs from e.g. human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPS) to be used in therapy. We thus asked: How are hematopoietic stem cells formed, and in what cellular niches does this happen (Papers I, II)? What are the molecular mechanisms that govern hematopoietic stem cell development and differentiation (Papers III, IV)? Importantly, we could show that the placenta is a major fetal hematopoietic niche that harbors a large number of HSCs during midgestation (Paper I) (Gekas et al., 2005). In order to address whether the HSCs found in the placenta were formed there, we utilized the Runx1-LacZ knock-in and Ncx1 knockout mouse models (Paper II). Importantly, we could show that HSCs emerge de novo in the placental vasculature in the absence of circulation (Rhodes et al., 2008). Furthermore, we could identify defined microenvironmental niches within the placenta with distinct roles in hematopoiesis: the large vessels of the chorioallantoic mesenchyme serve as sites of HSC generation, whereas the placental labyrinth is a niche supporting HSC expansion (Rhodes et al., 2008). Overall, these studies illustrate the importance of distinct milieus in the emergence and subsequent maturation of HSCs. To ensure proper function of HSCs, several regulatory mechanisms are in place. The microenvironment in which HSCs reside provides soluble factors and cell-cell interactions. In the cell nucleus, these cell-extrinsic cues are interpreted in the context of cell-intrinsic developmental programs, which are governed by transcription factors. An essential transcription factor for the initiation of hematopoiesis is Scl/Tal1 (stem cell leukemia gene/T-cell acute leukemia gene 1). Loss of Scl results in early embryonic death and a total lack of all blood cells, yet deactivation of Scl in the adult does not affect HSC function (Mikkola et al., 2003b). In order to define the temporal window of Scl requirement during fetal hematopoietic development, we deactivated Scl in all hematopoietic lineages shortly after hematopoietic specification in the embryo. Interestingly, maturation, expansion and function of fetal HSCs were unaffected, and, as in the adult, red blood cell and platelet differentiation was impaired (Paper III) (Schlaeger et al., 2005).
These findings highlight that, once specified, the hematopoietic fate is stable even in the absence of Scl and is maintained through mechanisms distinct from those required for the initial fate choice. As the critical downstream targets of Scl remain unknown, we sought to identify and characterize target genes of Scl (Paper IV). We could identify the transcription factor Mef2C (myocyte enhancer factor 2 C) as a novel direct target gene of Scl specifically in the megakaryocyte lineage, which largely explains the megakaryocyte defect observed in Scl-deficient mice. In addition, we observed an Scl-independent requirement for Mef2C in the B-cell compartment, as loss of Mef2C leads to accelerated B-cell aging (Gekas et al., submitted). Taken together, these studies identify key extracellular microenvironments and intracellular transcriptional regulators that dictate different stages of HSC development, from emergence to lineage choice to aging.

Relevance:

10.00%

Publisher:

Abstract:

Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against adverse effects of the medication. In the present study, the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients, who received renal transplants in 1988–2006, were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid the potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics. Therapeutic monitoring of CsA is, therefore, mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and the blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient by the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were collected for three weeks after TX and compared to the pharmacokinetically predicted dose. After TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients who had a predicted dose clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate.
Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001–2006. C0 and C2 concentrations and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand, and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treatment of subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared to 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. Glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73 m2, respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73 m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as a part of the baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients has shown that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with serum MP concentration. The findings in this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus identifying the patients who might be at risk of excessive or insufficient immunosuppression.
Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial, as it improves the prospects of good long-term graft function.

Relevance:

10.00%

Publisher:

Abstract:

This study, "Someone to Welcome You Home: Infertility, Medicines and the Sukuma-Nyamwezi", looks into the change in the cosmological ideology of the Sukuma-Nyamwezi of Tanzania and into the consequences of this change as expressed through cultural practices connected to female infertility. The analysis is based on 15 months of fieldwork in Isaka, in the Shinyanga area. In this area the birth rate is high, yet at the same time infertility is a problem for individual women. The attitudes connected to fertility and the attempts to control fertility provide a window onto social and cultural changes in the area. Even though the practices connected to fertility seem to be individualized (the problem of individual women), the discourse surrounding fertility is concerned with higher cosmological levels. The traditional cosmology emphasized the centrality of the chief as the source of well-being. He was responsible for rain and the fertility of the land and, thus, for the well-being of the whole society. The holistic cosmology was hierarchical, and the ritual practices connected to chiefship, which dealt with the whole of the society, were recursively applied at the lower levels of the hierarchy, in the relationships between individuals. As a consequence of changes in the political system, the chiefship was legally abolished in the early years of Independence. However, the holistic ideology, which was the basis of the chiefship, did not disappear and instead acquired new forms. It is argued that in African societies the common efflorescence of diviner-healers and witchcraft can be a consequence of the change in the relationship between the social reality and the cosmological ideology. In Africanist research the increase in the numbers of diviner-healers and in witchcraft is usually seen as a consequence of individualism and modernization. In this research, however, it is seen as an altered form of holism, as a consequence of which the hierarchical relations between women and men have changed. Because of this, present-day practices connected to reproduction pay special attention to the control of women's sexuality.

Relevance:

10.00%

Publisher:

Abstract:

This study takes as its premise the prominent social and cultural role that the couple relationship has acquired in modern society. Marriage as a social institution and romantic love as a cultural script have not lost their significance, but during the last few decades the concept of relationship has taken prominence in our understanding of the love relationship. This change has taken place in a society governed by the therapeutic ethos. This study uses material ranging from in-depth interviews to various mass media texts to investigate the therapeutic logic that determines our understanding of the couple relationship. The central concept in this study is the therapeutic relationship, which does not refer to any particular type of relationship. In contemporary usage the relationship is, by definition, therapeutic. The therapeutic relationship is seen as an endless source of conflict and a highly complex dynamic unit in constant need of attention and treatment. Notwithstanding this emphasis on therapy and relationship work, the therapeutic relationship lacks any morally or socially defined direction. Here lies the cultural power of the therapeutic ethos and, according to critics, its dubious aspect. For the therapeutic logic, any reason for divorce is possible and plausible. Prosaically speaking, the question is not whether to divorce or not, but when to divorce. In the end, divorce only attests to the complexity of the relationship. The therapeutic understanding of the relationship gives the illusion that relationships, with their tensions and conflicting emotions, can be fully transferred to the sphere of transparency and therapeutic processing. This illusion, created by relationship talk that emphasizes individual control, is called the omnipotence of the individual. However, the study shows that this individual omnipotence is inevitably limited and hence cracks appear in it. The cracks in the omnipotence show that while the therapeutic relationship, based on the ideal of communication, gives an individual a mode of speaking that stresses autonomy, equality and emotional gratification, it offers little help in expressing our fundamental dependence on other people. The study shows how strong an attraction the therapeutic ethos, with its grasp on the complexities of the relationship, has in a society where divorce is so common and the risk of divorce is collectively experienced.

Relevance:

10.00%

Publisher:

Abstract:

Aims: Develop and validate tools to estimate residual noise covariance in Planck frequency maps. Quantify signal error effects and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find an excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ℓ > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy. Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
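
A minimal sketch of such a windowed downgrading step, assuming the healpy library (an illustration of the general technique, not the pipeline code used in the paper; the Gaussian window and its width are placeholder choices):

```python
# Windowed resolution downgrading for a HEALPix map (illustrative sketch).
import numpy as np
import healpy as hp

def downgrade_with_window(m, nside_out, fwhm_deg=10.0):
    """Band-limit the map in harmonic space before synthesizing it at a
    lower Nside, suppressing aliasing from multipoles above ~2*Nside."""
    lmax = 3 * nside_out - 1                          # band limit of target map
    alm = hp.map2alm(m, lmax=lmax)                    # spherical harmonic transform
    window = hp.gauss_beam(np.radians(fwhm_deg), lmax=lmax)  # smoothing window
    alm = hp.almxfl(alm, window)                      # apply window in ell space
    return hp.alm2map(alm, nside_out)                 # low-resolution map
```

The same window must then be accounted for in the residual noise covariance, which is what the smoothing method mentioned in the conclusions addresses.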

Relevance:

10.00%

Publisher:

Abstract:

This study addresses the issue of multilingualism in EU law. More specifically, it explores the implications of multilingualism for conceptualising legal certainty, a central principle of law in both domestic and EU legal systems. The main question addressed is how multilingualism and legal certainty may be reconciled in the EU legal system. The study begins with a discussion of the role of translation in drafting EU legislation and its implications for interpreting EU law at the European Court of Justice (ECJ). Uncertainty regarding the meaning of multilingual EU law and the interrelationship between multilingualism and ECJ methods of interpretation are explored. This analysis leads to questioning the importance of linguistic-semantic methods of interpretation, especially the role of comparing language versions for clarifying meaning and the ordinary meaning thesis, and to placing emphasis on other methods, especially the teleological, purpose-oriented method of interpretation. As regards the principle of legal certainty, the starting point is a two-dimensional concept consisting of both formal and substantive elements: of predictability and acceptability. Formal legal certainty implies that laws and adjudication, in particular, must be predictable. Substantive legal certainty is related to the rational acceptability of judicial decision-making, placing emphasis on its acceptability to the legal community in question. Contrary to predictability, which one might intuitively relate to linguistic-semantic methods of interpretation, the study suggests a new conception of legal certainty where purpose, telos, and other dynamic methods of interpretation are of particular significance for meaning construction in multilingual EU law. Accordingly, the importance of purposive, teleological interpretation as the standard doctrine of interpretation in a multilingual legal system is highlighted. The focus on rational, substantive acceptability results in emphasising discourse among legal actors in the EU legal community and stressing the need to give reasons in favour of a proposed meaning in accordance with dynamic methods of interpretation, including considerations related to purposes, aims, objectives and consequences. In this context, the role of ideal discourse situations and communicative action, taking the form of interaction within the EU legal community in an ongoing dialogue, especially in the preliminary ruling procedure, is brought into focus. For this dialogue to function, the ECJ must give persuasive, convincing and acceptable reasons in justifying its decisions. This necessitates transparency, sincerity, and dialogue with the relevant audience.

Relevance:

10.00%

Publisher:

Abstract:

Thin films are the basis of much of recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale - for instance, by making them porous. The different characteristics of different films can then to some extent be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the parameters of deposition. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as the sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window to the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can then bridge this gap by providing a tangible link between the simulated and the experimental. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible. Then, clusters are deposited, one by one, onto a substrate, until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates how the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet it is also shown how the continued deposition of clusters with a varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with precision heretofore unseen in conventional multilayer manufacture.
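
The deposition loop described above can be sketched in a few lines with a molecular dynamics toolkit. The sketch below uses ASE with its toy EMT potential and a copper slab purely as a stand-in (the thesis studies Si/Ge with other potentials); the cluster size, drop height, deposition energy and relaxation time are all placeholder assumptions:

```python
# Illustrative cluster-beam deposition loop (toy system, not the thesis code).
import numpy as np
from ase.build import fcc100
from ase.cluster import Icosahedron
from ase.calculators.emt import EMT
from ase.md.verlet import VelocityVerlet
from ase import units

rng = np.random.default_rng(0)
slab = fcc100('Cu', size=(8, 8, 4), vacuum=30.0)       # substrate + vacuum above

def deposit_one(atoms, energy_ev_per_atom):
    """Drop one small cluster onto the surface with a chosen impact energy."""
    cluster = Icosahedron('Cu', noshells=2)            # 13-atom test cluster
    x = rng.uniform(0, atoms.cell[0, 0])               # random landing site
    y = rng.uniform(0, atoms.cell[1, 1])
    top = atoms.positions[:, 2].max()
    cluster.translate([x, y, top + 8.0 - cluster.positions[:, 2].min()])
    mass = cluster.get_masses()[0]
    vz = -np.sqrt(2.0 * energy_ev_per_atom / mass)     # eV & amu -> ASE velocity units
    cluster.set_velocities(np.tile([0.0, 0.0, vz], (len(cluster), 1)))
    combined = atoms + cluster
    combined.calc = EMT()
    VelocityVerlet(combined, timestep=1.0 * units.fs).run(500)  # impact + relax
    return combined

for _ in range(3):                                     # grow a (tiny) layer
    slab = deposit_one(slab, energy_ev_per_atom=0.1)   # low energy -> more porous
```

Varying energy_ev_per_atom between batches of depositions is, in spirit, how superimposed layers of different porosities are produced.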

Relevance:

10.00%

Publisher:

Abstract:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck, 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders, and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in the traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the use of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.
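
To give a flavour of what handling such tick data involves, the sketch below matches each trade with the prevailing quote and measures the effective half-spread against the midquote (file and column names are hypothetical, not those of the actual datasets used in the essays):

```python
# Basic tick-data handling: prevailing quotes and effective half-spreads.
import pandas as pd

quotes = pd.read_csv('quotes.csv', parse_dates=['timestamp'])  # bid, ask
trades = pd.read_csv('trades.csv', parse_dates=['timestamp'])  # price, size

quotes['mid'] = (quotes['bid'] + quotes['ask']) / 2

# Backward as-of join: each trade gets the last quote at or before it.
merged = pd.merge_asof(trades.sort_values('timestamp'),
                       quotes.sort_values('timestamp')[['timestamp', 'mid']],
                       on='timestamp')
merged['eff_half_spread'] = (merged['price'] - merged['mid']).abs() / merged['mid']
print(merged['eff_half_spread'].mean())   # average trading-cost proxy
```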

Relevance:

10.00%

Publisher:

Abstract:

A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible way of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price, according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management (measured as the extent of discretionary accruals in total disclosed earnings) and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms, whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. This essay provides empirical evidence that the desired impact of reforms is not fully substantiated in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not correspond to desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that the controlling shareholders in Russia use their powers to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms with other controllers, such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides useful evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of countries in Europe. The evidence suggests that ownership structure is the more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
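
Discretionary accruals are conventionally estimated as the residual from an accruals regression such as the modified Jones model; the sketch below shows that standard approach (column names are hypothetical, and the exact specification used in the thesis may differ):

```python
# Modified Jones model: discretionary accruals as regression residuals.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv('firm_years.csv')  # total_accruals, assets_lag, d_rev, d_rec, ppe
X = pd.DataFrame({
    'inv_assets': 1.0 / df['assets_lag'],                         # scale effect
    'd_rev_adj': (df['d_rev'] - df['d_rec']) / df['assets_lag'],  # cash revenue growth
    'ppe': df['ppe'] / df['assets_lag'],                          # normal depreciation
})
y = df['total_accruals'] / df['assets_lag']
fit = sm.OLS(y, sm.add_constant(X)).fit()
df['discretionary_accruals'] = fit.resid   # the "managed" component of earnings
```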

Relevance:

10.00%

Publisher:

Abstract:

The human resource (HR) function is under pressure both to change roles and to play a large variety of roles. Questions of change and development in the HR function become particularly interesting in the context of mergers and acquisitions when two corporations are integrated. The purpose of the thesis is to examine the roles played by the HR function in the context of large-scale mergers and thus to understand what happens to the HR function in such change environments, and to shed light on the underlying factors that influence changes in the HR function. To achieve this goal, the study seeks first to identify the roles played by the HR function before and after the merger, and second, to identify the factors that affect the roles played by the HR function. It adopts a qualitative case study approach including ten focal case organisations (mergers) and four matching cases (non-mergers). The sample consists of large corporations originating from either Finland or Sweden. HR directors and members of the top management teams within the case organisations were interviewed. The study suggests that changes occur within the HR function, and that the trend is for the HR function to become increasingly strategic. However, the HR function was found to play strategic roles only when the HR administration ran smoothly. The study also suggests that the HR function has become more versatile. An HR function that was perceived to be mainly administrative before the merger is likely after the merger to perform some strategically important activities in addition to the administrative ones. Significant changes in the roles played by the HR function were observed in some of the case corporations. This finding suggests that the merger integration process is a window of opportunity for the HR function. HR functions that take a proactive and leading role during the integration process might expand the number of roles played and move from being an administrator before the merger to also being a business partner after integration. The majority of the HR functions studied remained mainly reactive during the organisational change process and although the evidence showed that they moved towards strategic tasks, the intra-functional changes remained comparatively small in these organisations. The study presents a new model that illustrates the impact of the relationship between the top management team and the HR function on the role of the HR function. The expectations held by the top management team for the HR function and the performance of the HR function were found to interact. On a dimension reaching from tactical to strategic, HR performance is likely to correspond to the expectations held by top management.

Relevance:

10.00%

Publisher:

Abstract:

In this paper I investigate the exercise policy of executive stock option holders in Finland, and the market reaction to it. The empirical tests are conducted with aggregated firm-level data from 34 firms and 41 stock option programs. I find some evidence of an inverse relation between the exercise intensity of the option holders and the future abnormal return of the company share price. This finding is supported by the view that information about future company prospects seems to be the only theoretical attribute that could delay the exercise of the options. Moreover, a high concentration of exercises in the beginning of the exercise window is predicted, and the market is expected to react to deviations from this. The empirical findings, however, show that the market does not react homogeneously to the information revealed by the late exercises.
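
Abnormal returns of the kind referred to above are typically computed with a market-model event study; a minimal sketch follows (the window lengths and column names are illustrative assumptions, not the paper's design):

```python
# Market-model event study: cumulative abnormal return after an exercise event.
import pandas as pd
import statsmodels.api as sm

r = pd.read_csv('returns.csv')   # hypothetical columns: stock_ret, market_ret
est = r.iloc[:250]               # estimation window before the exercise event
fit = sm.OLS(est['stock_ret'], sm.add_constant(est['market_ret'])).fit()

evt = r.iloc[250:271]            # event window following the option exercise
ar = evt['stock_ret'] - fit.predict(sm.add_constant(evt['market_ret']))
print('CAR:', ar.sum())          # cumulative abnormal return
```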

Relevance:

10.00%

Publisher:

Abstract:

The objective of this paper is to investigate pricing accuracy under stochastic volatility where the volatility follows a square-root process. The theoretical prices are compared with market price data (the German DAX index options market) using two different techniques of parameter estimation: the method of moments and implicit estimation by inversion. Standard Black & Scholes pricing is used as a benchmark. The results indicate that the stochastic volatility model, with parameters estimated by inversion from the prices available on the preceding day, is the most accurate pricing method of the three in this study and can be considered satisfactory. However, as the same model with parameters estimated using a rolling window (the method of moments) proved inferior to the benchmark, the importance of stable and correct estimation of the parameters is evident.
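
The Black & Scholes benchmark is standard; for reference, a minimal implementation of the European call formula is shown below (the stochastic volatility model itself, priced via its characteristic function, is considerably longer and omitted here):

```python
# Black-Scholes European call price, the benchmark model of the paper.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Call price for spot S, strike K, maturity T (years), rate r, vol sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(bs_call(S=100, K=100, T=0.5, r=0.02, sigma=0.2))  # about 6.1
```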

Relevance:

10.00%

Publisher:

Abstract:

The triangular space between memory, narrative and pictorial representation is the terrain on which this article is developed. Taking the art of memory developed by Giordano Bruno (1548–1600) and the art of painting subtly revolutionised by Adam Elsheimer (1578–1610) as test cases, it is shown how both subvert the norms of mimesis and narration prevalent throughout the Renaissance, how disrupted memory creates “incoherent” narratives, and how perspective and the notion of “place” are questioned in a corollary way. Two paintings by Elsheimer are analysed and shown to include, in spite of their supposed “realism”, numerous incoherencies, aporias and strange elements that are often overlooked. Thus, they do not conform to two of the basic rules governing both the classical art of memory and the humanist art of painting: well-defined places and the exhaustive translatability of words into images (and vice versa). In the work of Bruno, both his philosophical claims and the literary devices he uses are analysed as hints of a similar (and contemporaneous) undermining of conventions about the transparency and immediacy of representation.

Relevance:

10.00%

Publisher:

Abstract:

The image of Pietism: a window to personal spirituality. The teachings of Johann Arndt as the basis of Pietist emblems. The Pietist effect on spiritual images has to be scrutinised as a continuum initiating from the teachings of Johann Arndt, who created a Protestant iconography that defended the status of pictures and images as the foundation of divine revelation. Pietist artworks reveal the Arndtian part of the secret, eternal world, and God. Even though modern scholars no longer regard him as a founding father of Pietism, his works have been essential for the development of iconography, and the themes of Pietist images are linked with his works. For Arndt, the starting point is the affecting love for Christ, who suffered for humankind. The reading experience is personal, and the words point directly at the reader, thus appearing as evidence of the guilt of the reader as well as of the love of God. Arndt uses a bounteous and descriptive language, which has partly furthered the promotion and picturing of many themes. Like Arndt, Philipp Jakob Spener also emphasised the believing heart. The Pietist movement was born to oppose detached faith and the lack of the Holy Ghost. Christians touched by the teachings of Arndt and Spener began to create images out of the metaphors presented by Arndt. As those people were part of the intelligentsia, it was natural that the fashionable emblematics of the 17th century was moulded to their personal needs. For Arndt, the human heart is manifested as a symbol of the soul, of personal faith or unbelief, as well as an allegory of the burning love for Jesus. Due to this, heart emblems gradually came to be widely used and linked with the love of Christ. In the Nordic countries, the introduction of emblems emanated from the gentry's connections to Central Europe, where emblems were used to decorate books, artefacts, interiors, and buildings, as well as serving as visual/literary trademarks of the intelligentsia. The emblematic paintings in the churches of the castles of Venngarn (1665) and Läckö (1668), owned by Magnus Gabriel De la Gardie, are among the most important interior paintings preserved in the Nordic countries, and they emphasise a personally righteous life. Nonetheless, it was the books by Arndt and the Poets' Society in Nurnberg that bound the Swedish gentry and the scholars of the Pietist movement together. The Finnish gentry had no castles or castle churches, so they supported county churches, both in building and in maintenance. As the churches were not private, their iconography could not be private either. Instead, people used Pietist symbols such as Agnus Dei, Cor ardens, an open book, beams, King David, frankincense, wood themes and Virtues. In the Pietist images made for public spaces, attention is focused on pedagogical, metaphorical, and meaningful presentation as well as on concealed statements.

Relevance:

10.00%

Publisher:

Abstract:

The concept of sustainable fashion covers not only the ecological and ethical matters in the fashion and textile industries but also the cultural and social affairs, which are equally intertwined in this complex network. Sustainable fashion does not have one explicit or well-established definition; however, many researchers have discussed it from different perspectives. This study provides an overview of the principles, practices, possibilities, and challenges concerning sustainable fashion. It focuses particularly on the practical questions a designer faces. The aim of this study was to answer the following questions: What kind of outlooks and practices are included in sustainable fashion? How could the principles of sustainable fashion be integrated into designing and making clothes? The qualitative study was carried out using the Grounded Theory method. Data consisted mainly of academic literature and communication with designers who practice sustainable fashion. In addition to these, several websites and journalistic articles were used. The data was analyzed by identifying and categorizing relevant concepts using the constant comparative method, i.e. examining the internal consistency of each category. The study established a core category, around which all other categories are integrated. The emerged concepts were organized into a model that pieces together different ideas about sustainable fashion, namely, when the principles of sustainable development are applied to fashion practices. The category named Considered Take and Return is the core of the model. It consists of various design philosophies that form the basis of design practice, and thus it relates to all other categories. It is framed by the category of Attachment and Appreciation, which reflects the importance of sentiment in design practice, for example the significance of aesthetics. The categories especially linked to fashion are Materials, Treatments of Fabrics, and Production Methods. The categories closely connected with sustainable development are Saving Resources, Societal Implications, and Information Transparency. While the model depicts separate categories, the different segments are in close interaction. The objective of sustainable fashion is holistic and requires all of its sections to be taken into account.